The TPM stores the encryption key sealed against the secure boot state. That way, if an attacker disables or alters secure boot, the TPM won't unseal the key. I use clevis to decrypt the drive.
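The bind step looks roughly like this (the device path and PCR choice are illustrative; PCR 7 is the one that tracks the secure boot state):

```sh
# Seal a LUKS key slot to the TPM, bound to the secure boot PCR.
# If secure boot is disabled or altered, PCR 7 changes and the TPM
# refuses to unseal the key.
clevis luks bind -d /dev/nvme0n1p3 tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}'
```

With the clevis initramfs/dracut hook enabled, the drive then unlocks automatically at boot as long as the measured state still matches.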
Thank you… I had to learn Kubernetes for work, which took around two weeks of time investment, and then I figured out I could use it to fix my docker-compose pains at home.
If you run a lot of services, I can attest that Kubernetes is definitely not overkill; it is a good tool for managing complexity. I have 8 services on a single-node Kubernetes cluster, and I like how I can manage each service's configuration independently of the others and of the underlying infrastructure.
Don’t create one network with Gitlab, Redmine and OpenLDAP - create two: one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.
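In compose terms, the shape I mean is roughly this (images and names are only placeholders):

```yaml
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    networks: [gitlab-ldap]
  redmine:
    image: redmine:latest
    networks: [redmine-ldap]
  openldap:
    image: osixia/openldap:latest
    networks: [gitlab-ldap, redmine-ldap]

networks:
  gitlab-ldap:
  redmine-ldap:
```

OpenLDAP is reachable from both stacks, but Gitlab and Redmine never share a network, so their service names cannot resolve to each other's containers.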
This was the setup I had, but I am on Kubernetes now with no intention of switching back.
I was writing my own compose files, but see my response to a sibling comment for the issue I had.
If one service needed to connect to another service, I had to add a shared network between them. The services then essentially shared a common DNS namespace, and DNS resolution would routinely leak from one service to another and cause outages. E.g. when I connected the Gitlab and Redmine containers to the OpenLDAP container, Redmine’s nginx container would sometimes resolve to the Gitlab container instead of the Redmine container, and the Gitlab container would access Redmine’s DB instead of its own.
I maintained some workarounds; for example, starting Gitlab after starting Redmine would work fine, but starting them the other way round would hit the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
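For reference, the kind of policy I mean looks roughly like this (the namespaces and labels are illustrative, not my exact manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ldap-clients
  namespace: openldap
spec:
  # Apply to the OpenLDAP pods only.
  podSelector:
    matchLabels:
      app: openldap
  policyTypes:
    - Ingress
  ingress:
    # Only the Gitlab and Redmine namespaces may reach the LDAP port.
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gitlab
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: redmine
      ports:
        - protocol: TCP
          port: 389
```

Each stack lives in its own namespace with its own service names, and only the LDAP port is opened across the boundary, so there is no shared DNS namespace for resolution to leak through.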
Nothing will ever top “Galaxy Note 7”. Super fun in planes, especially if they’re flying.
As someone who has been running Kubernetes on my home server for 2 years, I find containers much more maintainable than installing everything directly on the server.
I first tried using docker-compose to manage my services. It works well for 2-3 services, but as the number of services grew they started to interfere with each other, and at that point I switched to Kubernetes.
I just uploaded it to GitHub: https://github.com/akashrawal/nsd4k
I only made it for myself, so expect very rough edges in there.
I run a crude automation on top of an OpenSSL CA. It checks for certain labels attached to Kubernetes services and, based on those, creates Kubernetes secrets containing the generated certificates.
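The shape of it is roughly this; the label and secret names here are made up for illustration and are not necessarily what nsd4k uses:

```yaml
# A service opts in to certificate generation via a label (illustrative name):
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    cert-automation/enabled: "true"
spec:
  selector:
    app: gitlab
  ports:
    - port: 443
---
# The automation signs a certificate for the service with the OpenSSL CA
# and writes the result back as a TLS secret the workload can mount:
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-tls
type: kubernetes.io/tls
data:
  tls.crt: # base64-encoded certificate PEM
  tls.key: # base64-encoded private key PEM
```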
It might be a failing fan. I have an Intel NUC whose fan started sounding like an air raid siren, so I took the fan out, drilled a hole into its bearing, and added coconut oil. It is still working fine to this day, but buying a new fan is probably better.
For me, the value of podman is how easily it works without root. Just install and run, with no need for sudo or adding myself to the docker group.
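E.g., straight from a regular user account:

```sh
# No sudo and no docker group membership needed:
podman run --rm -it docker.io/library/alpine:latest sh
```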
I use it for testing and dev work, not for running any services.