I’m still running a 6th-generation Intel CPU (an i5-6600K) in my media server, with 64GB of RAM and a Quadro P1000 for the rare 1080p transcode. It’s still on Windows 10 from its days as a gaming PC, and I want to switch to Linux. I’m a casual Linux user on my personal machine and also run OpenWrt on my network hardware.

Here are the few features I need:

  • MergerFS with a redundancy option (RAID or SnapRAID-style parity) for drive protection. I use multiple 12TB drives right now, with my media types split across them. I’d like one pool so I can flexibly allocate space between shares.
  • Docker for *arr/media downloaders/RSS feed reader/various FOSS tools and gizmos.
  • I’d like to start working with Home Assistant. Installing with WSL hasn’t worked for me, so switching to Linux seems like the best option for this.
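For the pooling piece, a mergerfs pool over several data drives usually comes down to a single fstab line. A minimal sketch — the mount points and option values here are assumptions for illustration, not anything from my current setup:

```
# /etc/fstab — hypothetical layout: data drives mounted at /mnt/disk1..N, pool at /mnt/storage
/mnt/disk* /mnt/storage fuse.mergerfs allow_other,cache.files=partial,dropcacheonclose=true,category.create=mfs,minfreespace=100G,fsname=mergerfs 0 0
```

(`category.create=mfs` places new files on the drive with the most free space; `minfreespace` keeps drives from filling completely. Parity/redundancy is a separate layer on top, e.g. SnapRAID.)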

Guides like Perfect Media Server recommend Proxmox over a traditional distro like Debian/Ubuntu, but I’m concerned about performance on my 6600K. Will LXCs and/or a VM for Docker push the CPU to its limits? Or should I stick with standard Debian, or even OpenMediaVault?

I’m comfortable learning Proxmox and its intricacies, especially if I can move my Windows 10 install into a VM as a failsafe while building a storage pool with new drives.

  • Justin@lemmy.jlh.name · edited 11 days ago

    Yeah I’m not saying everybody has to go and delete their infra, I just think that all new production environments should be k8s by default.

    The production-scale Grafana LGTM stack only runs on Kubernetes fwiw. Docker and VMs are not supported. I’m a bit surprised that Kubernetes wouldn’t have enough availability to be able to co-locate your general workloads and your observability stack, but that’s totally fair to segment those workloads.

    I’ve heard the argument that “kubernetes has more moving parts” a lot, and I think that is a misunderstanding. At a base level, all computers have infinite moving parts. QEMU has a lot of moving parts, containerd has a lot of moving parts. The reason why people use kubernetes is that all of those moving parts are automated and abstracted away to reduce the daily cognitive load for us operations folk. As an example, I don’t run manual updates for minor versions in my homelab. I have a k8s CronJob that runs renovate, which goes and updates my Deployments in git, and ArgoCD automatically deploys the changes. Technically that’s a lot of moving parts to use, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.
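    For a sense of scale, the renovate CronJob piece of that pipeline is tiny. A sketch — the name, schedule, image tag, and config path are placeholder assumptions, not my actual manifest:

    ```yaml
    # Runs renovate nightly; it opens update PRs in git, which ArgoCD then deploys.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: renovate
    spec:
      schedule: "0 4 * * *"   # 04:00 daily
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: renovate
                  image: renovate/renovate:latest
                  env:
                    - name: RENOVATE_CONFIG_FILE
                      value: /config/renovate.json   # hypothetical mounted config
    ```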

    I’m not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.

    Yeah, AppArmor/SELinux aren’t very popular in the k8s space. I think they’re easy enough to use, and there’s plenty of documentation out there, but OpenShift/OKD is the only distribution that runs them out of the box.
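    For reference, attaching an AppArmor profile to a pod is a small change. A sketch using the pre-1.30 annotation form (pod and container names here are placeholders):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: apparmor-demo
      annotations:
        # <container-name> in the annotation key must match the container below
        container.apparmor.security.beta.kubernetes.io/app: runtime/default
    spec:
      containers:
        - name: app
          image: nginx
    ```

    (Kubernetes 1.30+ also exposes this as `securityContext.appArmorProfile` instead of the beta annotation.)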