• 5 Posts
  • 49 Comments
Joined 1 year ago
Cake day: July 29th, 2023







  • My step-up from Pi was buying HP 800 G1 minis on eBay, then G2s. They are really well made, there are full repair manuals available, and they are just a pleasure to swap bits in and out of. I’ve heard good things about, and expect similar build quality from, the 1-litre Lenovos.

    I agree that RAM is a likely constraint rather than the processor for self-hosting workloads - particularly in my case, as I’m on Proxmox and run all my Docker containers in separate LXCs. I run 32GB in the G2s, which was a straightforward upgrade (they take laptop-style memory). On some of them I’ve upgraded the SSDs; where I haven’t, I’ve added M.2 NVMe drives (the G2s have a slot for them).





  • I run two local physical servers, one production and one dev (plus a third, prod2, kept in case of a prod1 failure), and two remote production/backup servers, all running Proxmox, as well as two VPSs. Most apps are Dockerised inside LXC containers (on Proxmox) or run as plain Docker on Ubuntu (on the VPSs). Each of the three locations runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single Ansible playbook. Every host runs a little Go program that exposes memory and disk use percentages as a JSON endpoint, and I use two instances of Uptime Kuma (one local and one on fly.io) to monitor all of those with keyword checks.
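    For illustration, here’s a minimal sketch of what that kind of endpoint can look like (not my actual program - assumptions: Linux, standard library only, and the /metrics path, :9100 port and 90% thresholds are just placeholders):

    ```go
    // Tiny exporter: memory and disk usage percentages as JSON, so an
    // Uptime Kuma keyword check can match on the "OK" status string.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "os"
        "strconv"
        "strings"
        "syscall"
    )

    type stats struct {
        MemUsedPercent  float64 `json:"mem_used_percent"`
        DiskUsedPercent float64 `json:"disk_used_percent"`
        Status          string  `json:"status"`
    }

    // memUsedPercent derives usage from MemTotal and MemAvailable in /proc/meminfo.
    func memUsedPercent() (float64, error) {
        f, err := os.Open("/proc/meminfo")
        if err != nil {
            return 0, err
        }
        defer f.Close()
        var total, avail float64
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 2 {
                continue
            }
            v, _ := strconv.ParseFloat(fields[1], 64)
            switch fields[0] {
            case "MemTotal:":
                total = v
            case "MemAvailable:":
                avail = v
            }
        }
        if total == 0 {
            return 0, fmt.Errorf("MemTotal not found")
        }
        return (total - avail) / total * 100, nil
    }

    // diskUsedPercent uses statfs on the given mount point (here "/").
    func diskUsedPercent(path string) (float64, error) {
        var fs syscall.Statfs_t
        if err := syscall.Statfs(path, &fs); err != nil {
            return 0, err
        }
        total := float64(fs.Blocks) * float64(fs.Bsize)
        free := float64(fs.Bavail) * float64(fs.Bsize)
        return (total - free) / total * 100, nil
    }

    func main() {
        http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
            mem, errM := memUsedPercent()
            disk, errD := diskUsedPercent("/")
            if errM != nil || errD != nil {
                http.Error(w, "metrics unavailable", http.StatusInternalServerError)
                return
            }
            st := stats{MemUsedPercent: mem, DiskUsedPercent: disk, Status: "OK"}
            if mem > 90 || disk > 90 { // placeholder thresholds
                st.Status = "HIGH"
            }
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(st)
        })
        log.Fatal(http.ListenAndServe(":9100", nil))
    }
    ```

    An Uptime Kuma “HTTP(s) - Keyword” monitor pointed at that URL with the keyword “OK” then alerts both when the host is down and when usage crosses the threshold.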

    So -

    • Weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats, and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • From time to time (if I hear of a security update), but generally every three months: look through my container versions and see if I want to update them. They’re on Docker Compose, so the steps are just back up the LXC, then docker compose down, pull, and up - probably 5 minutes per container.
    • Yearly: consider whether I need to upgrade operating systems - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS.
    • Yearly: visit the remote sites and do a proper check, clean-up and update.


  • My ‘good reason’ is just that it’s super convenient - for backups and painlessly moving apps around between nodes with all their data.

    I would run plain LXCs if people nicely packaged up their web apps as LXC templates and made them available on LXCHub for me to run with lxc compose up, but they generally don’t.

    I guess another alternative future would be if Proxmox added Docker container supervision to its web interface, but you still wouldn’t have the self-contained, neat snapshot system that includes the data.

    In theory you should be able to convert an OCI container, layer by layer, into an LXC, so I bet there are projects out there that attempt this.






  • I routinely run my homelab services as a single Docker container inside an LXC - they are quicker, and it makes backups and moving them around trivial. However, while you’re learning, a VM (with something conventional like Debian or Ubuntu) is probably advisable - it’s a more common setup, so you’ll get more helpful advice when you ask a question like this.



  • Yes, in a shallow tourist mine in Australia. Apparently coal starts to flake easily once it’s been exposed to air for a bit, so they kept a big chunk in a large jar of water that you could take out and handle. It felt like a light wet rock.

    The sample, and the coal at the working face of the mine, was stereotypically black. We wore hats with lights on, and when we emerged back out into the daylight I had an overwhelming urge to speak in a Monty Python-type Yorkshire accent, go home, and have my back scrubbed clean of the coal dust by my swarthy, tired-looking wife while I sat in a tub in front of the fire in the kitchen and our urchins played in the street.

    I don’t want to give the impression that I’m a big fossil fuel tourist, but I’ve also seen blobs of crude oil on beaches near Mediterranean Sea oil terminals.

    Sadly, I didn’t try to set fire to them on either of these occasions, which I now regret.



  • Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.

    My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC container for each service on Proxmox: Docker for ease of management with Compose, and a separate LXC per service for ease of snapshots/backups.

    Obviously there’s some overhead, but it doesn’t seem to be significant.

    On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning the idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes, which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate from these.

    I love Proxmox, but most times it gets mentioned here, people pop up to boost Incus/LXD, so that’s something I’d like to investigate - but my skills (and Ansible playbooks) are currently built around Proxmox, so I’ve got a bit of inertia.