I’m looking for experiences and opinions on Kubernetes storage.

I want to create a highly available homelab that spans three locations, where the pods have a preferred location but can move if necessary.

I’ve looked at Linstor, and at SeaweedFS or Garage with JuiceFS, but I’m not sure how well those options perform across the internet, or how they hold up in long-term operation. Is anyone else hosting k3s across the internet in their homelab?

Edit: fixed wording

  • Possibly linux@lemmy.zip · 14 days ago

    That isn’t how you would normally do it.

    You don’t want to try to span locations at the container/hypervisor level. The problem is that there is likely too much latency between the sites, which will screw with things. Instead, set up replicated data stores where it is necessary.

    What are you trying to accomplish with this?

    • InnerScientist@lemmy.world (OP) · 14 days ago

      The problem is that I want failover to work when a site goes offline. That happens quite a bit with residential ISPs where I live, and instead of waiting for the connection to be restored, my idea was that Kubernetes would see the failed node and reschedule its pods elsewhere.

      Most data will be transferred locally (with node affinity), and only on failure would the pods spread out. The problem that remained was storage, which is why I’m here looking for options.
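      For what it’s worth, a preferred (rather than required) node affinity is the usual way to express “stay at the home site, but reschedule anywhere if it dies.” A minimal sketch, assuming nodes carry the well-known zone label with a hypothetical value of `site-a` (the Deployment name and image are placeholders too):

      ```yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-app   # hypothetical workload
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: example-app
        template:
          metadata:
            labels:
              app: example-app
          spec:
            affinity:
              nodeAffinity:
                # "preferred" lets the scheduler fall back to nodes at
                # other sites when the home site is down; "required"
                # would leave the pod Pending instead.
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 100
                    preference:
                      matchExpressions:
                        - key: topology.kubernetes.io/zone
                          operator: In
                          values: ["site-a"]
            containers:
              - name: app
                image: nginx:1.27
      ```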

  • ragingHungryPanda@lemmy.zip · 14 days ago

    One thing I recently found out is that Ceph wants whole drives; I could not get it to work with partitions. I did get it to work with Longhorn, though I’m still setting things up.
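    For anyone hitting the same wall: if you deploy Ceph via Rook, the CephCluster spec lets you point OSDs at specific whole disks instead of letting Rook scan and consume everything. A sketch with hypothetical node and device names:

    ```yaml
    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      storage:
        useAllNodes: false
        useAllDevices: false
        nodes:
          - name: node-a        # hypothetical node name
            devices:
              - name: sdb       # a whole, empty disk; partitions often fail OSD prep
    ```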

  • F04118F@feddit.nl · 14 days ago

    I tried Longhorn and concluded that it would not work reliably with Volsync. Volsync (for automatic volume restore on cluster rebuild) is a must for me.

    I plan on installing Rook-Ceph. I’m also on a 1 Gbit/s network, so it won’t be fast, but many fellow K8s home-opsers are confident it will work.

    Rook-Ceph does need SSDs with Power Loss Protection (PLP), or write latency gets extremely bad. Bandwidth is not as much of an issue. Find some used Samsung PM or SM models; they aren’t expensive.

    Longhorn isn’t fussy about consumer SSDs and has its own built-in backup system. It’s not good at ReadWriteMany volumes, but it sounds like you won’t need ReadWriteMany. I suggest you don’t bother with Rook-Ceph yet, as it’s very complex.
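    As a rough illustration of how little tuning Longhorn needs, here is a StorageClass sketch with a per-volume replica count and data locality (the class name is hypothetical; the parameters are standard Longhorn ones):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-replicated        # hypothetical name
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "2"            # e.g. one copy per site
      staleReplicaTimeout: "2880"      # minutes before a dead replica is cleaned up
      dataLocality: "best-effort"      # try to keep a replica on the pod's node
    ```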

    Also, join the Home Operations community if you have a Discord account, it’s full of k8s homelabbers.

  • notfromhere@lemmy.ml · 15 days ago

    I know Ceph would work for this use case, but it’s not a lighthearted choice: it’s kind of an investment, with a steep learning curve (at least it was, and still is, for me).

    • InnerScientist@lemmy.world (OP) · 15 days ago

      I heard that Ceph lives and dies with the network hardware. Is a slow internet connection even usable when the docs call for 10 Gbit/s networking between nodes?

      • notfromhere@lemmy.ml · 14 days ago

        I’m really not sure. I’ve heard of people using Ceph across datacenters, but presumably that’s with a fast-ish connection, and it works more like joining separate clusters: you’d likely need a local Ceph cluster at each site, then replicate between them. Probably not what you’re looking for.

        I’ve heard good things about Garage S3, and that it’s usable across the internet on slow-ish connections. Garage combined with JuiceFS is what I was looking at before I landed on Ceph.