• 2 Posts
  • 13 Comments
Joined 1 year ago
Cake day: March 26th, 2024



  • I’m a little curious what you are using for a hypervisor. I’m using Apache CloudStack, which has many of the same features as AWS and Azure. Basically, I have 1000 VLANs prepared for standing up virtual networking. CloudStack uses CentOS to stand up virtual firewalls for the VLANs in use. These firewalls not only handle firewall rules, but can also do load balancing, which I use for k8s. You can also make a network HA just by checking a box when you stand it up; this runs a second firewall that only kicks in if the main one stops responding. The very reason I chose CloudStack was how easy it makes setting up a k8s cluster. The biggest cluster I’ve stood up is 2 control nodes and 25 worker nodes, and it took 12 minutes to deploy.




  • I’m curious where you are from and what hardware you have for self-hosting. I also want to know what you are interested in self-hosting or learning.

    For me, my home lab started with networking. Yours doesn’t have to. I was already doing system administration and was working to become a network engineer. Where are you on your path? In truth, starting with the network is not the best: mine required dedicated equipment — a firewall (UDM), switching (Ubiquiti), and access points. This is expensive, so it’s perhaps not the best place to start.

    I would say that a good place to start is with virtualization and a hypervisor. A hypervisor is intended to run virtual machines. I think starting with a hypervisor is a good idea because once you have one, you can experiment with just about anything you want: Windows, Linux, Docker, wherever your exploration takes you.

    Now, the cheapest way to do this kind of depends on you. Do you have a .edu email address? If so, you should be able to receive free licensing for Windows Server through Microsoft Imagine (previously called DreamSpark). If not, do you have Windows 10/11 Pro edition? Windows Server may require dedicated hardware, but if you are already running Windows Pro, then your daily-driver PC is capable of running Hyper-V.

    If you have an old spare computer, you can make it a dedicated hypervisor with either the Windows Server option or, preferably in my opinion, Proxmox. Proxmox may take a little time to get acclimated to since it is Linux and command-line driven, but you already have experience with that from the Pi-hole.

    Those are my recommended next steps. There is plenty more you can do, though. As others have said, Docker is a cool way to make some of this happen. I personally hate Docker on Windows (it’s weird, and I just want the command line, not a UI). But you should easily be able to spin up Windows Subsystem for Linux, install Docker and Docker Compose, and get started there without needing any additional hardware. You could also do the same using Hyper-V if you prefer and have a Pro license.
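    If you go the WSL route, the setup is roughly this. The package names below are for Ubuntu-flavored WSL and may differ on other distros, so treat it as a sketch rather than gospel:

```shell
# Inside an Ubuntu WSL shell (assumes WSL itself is already enabled):
sudo apt update
sudo apt install -y docker.io docker-compose-v2   # Ubuntu's packaged Docker + Compose plugin
sudo usermod -aG docker "$USER"                   # open a new shell for this to take effect
sudo service docker start                         # older WSL has no systemd; start the daemon manually
docker run --rm hello-world                       # sanity check
```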

    Regardless of what direction you choose to go, you can go far, you can succeed, and you can thrive. And if you run into any issues, post them here. Selfhosted has your back, and we are all rooting for you.

    Side note: Hyper-V used to only be available on Windows Pro, but if someone knows for sure that it is available on Home, please let me know and I will update my post.



  • Here is the exact issue I’m having. I’ve included screenshots of the command I use to list HDDs on the live CD versus the same command run on Ubuntu 24.04. I don’t know what is causing this, so perhaps this is a time where someone else can assist. Now, the benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure the pool is attached to the proper disk no matter what state your environment is in. This is something you need to do to have a stable ZFS install. But if I can’t do that with SCSI disks, then that advantage is limited.

    Windows Terminal for the win, btw.

    Live CD:

    Ubuntu 24.04 Installed:
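    For anyone following along, the by-id naming I’m talking about looks like this. The disk IDs below are placeholders — substitute whatever ls actually shows you:

```shell
# Stable identifiers live here; expect ata-*, scsi-*, and/or wwn-* symlinks
# pointing at the kernel's sdX names.
ls -l /dev/disk/by-id/ | grep -v -- -part

# Creating a pool from by-id paths keeps it attached to the right disks
# even if the /dev/sdX letters get reshuffled between boots.
zpool create tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_SERIAL_1 \
    /dev/disk/by-id/ata-EXAMPLE_SERIAL_2
```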


  • Well… I have to admit my own mistake as well. I did assume it would have faster read and write speeds based upon my raid knowledge and didn’t actually look it up until I was questioned about it. So I appreciate being kept honest.

    While we have agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup like this. For one, it does not have the same level of redundancy. A raidz2 can lose any two hard drives. A ZFS RAID 10 can lose one drive guaranteed, and up to two in total: as long as an entire mirror isn’t gone, I can lose two. So overall, this setup is less redundant than raidz2.
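    To make the trade-off concrete, here is how the same four disks would be laid out both ways (placeholder device names, a sketch only):

```shell
# raidz2: ANY two of the four disks can fail and the pool survives.
zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# "RAID 10" (striped mirrors): one failure is always survivable; a second
# failure is only survivable if it lands in the other mirror.
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```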

    Another drawback is that, for some reason, Ubuntu 24.04 does not show the SCSI disk names except on the live CD. Perhaps someone can help me with this to provide everyone with a better solution. The same disks that were visible on the live CD are not visible once the system is installed. It still technically works, but zpool status rpool will show that it is using sdb3 instead of the SCSI HDD names. This is technically fine — my HDDs are SATA anyway, so I just switched to the SATA names. But if I could ensure that others don’t face this issue, it would result in a more reliable ZFS installation for them.
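    I haven’t confirmed this cures the missing scsi- names, but the usual way to get zpool status showing by-id paths again is to re-import the pool with an explicit search directory, done from a live environment since the root pool can’t be exported while it’s running the OS:

```shell
# From the live CD:
zpool import -d /dev/disk/by-id -R /mnt rpool   # import using by-id device names
zpool status rpool                              # should now list by-id paths
zpool export rpool                              # clean export, then reboot normally
```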






  • Interesting… Though I know nothing about your particular setup or migrating existing data, I have a similar project in the works: automatically setting up a ZFS RAID 10 on Ubuntu 24.04.

    If you are interested in seeing how I am doing it, I used the OpenZFS root-on-ZFS guides for Debian and Ubuntu.

    Debian

    Ubuntu

    For the code, take a look at this GitHub repo: https://github.com/Reddimes/ubuntu-zfsraid10/

    One thing to note is that this runs two zpools: one for / and one for /boot. It is also specifically UEFI; if you need legacy boot, you need to change the partitioning a little bit (see init.sh).

    BE WARNED THAT THIS SCRUBS ALL FILESYSTEMS AND DELETES ALL PARTITIONS

    To run it, load up an Ubuntu Server live CD and run the following:

    git clone --depth 1 https://github.com/Reddimes/ubuntu-zfsraid10.git
    cd ubuntu-zfsraid10
    chmod +x *.sh
    vim init.sh    # Change all disks to be relevant to your setup.
    vim chroot.sh    # Same thing here.
    sudo ./init.sh
    

    On first login, there are a few things I have not scripted yet:

    apt update && apt full-upgrade
    dpkg-reconfigure grub-efi-amd64
    

    There are two parts to automating this: either I need to create a runonce.d service (here), or I need to add a script to the profile.d directory which then deletes itself. I also need to include a proper netplan configuration. I’m simply not there yet.

    I imagine in your case you could start a new pool and use zfs send to copy over the data from the old pool, then remove the old pool entirely and add the old disks to the new pool. I certainly have never done this, though, and I suspect there may be an issue. The other option you have (if you have room for one more drive) is to configure it into a ZFS RAID 10; then you don’t need to migrate the data, but just add an additional mirror vdev with the additional drive and resilver.
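    Untested, but the send/receive route I’m imagining would look roughly like this (oldpool/newpool and the disk names are hypothetical):

```shell
# Snapshot everything recursively, then replicate it into the new pool.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -u newpool/migrated

# The extra-drive route instead: grow a striped mirror by attaching
# another mirror vdev built from a freed-up disk plus the new one.
zpool add newpool mirror \
    /dev/disk/by-id/ata-FREED_DISK /dev/disk/by-id/ata-NEW_DISK
```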

    One thing I tried to do was make the scripts easily customizable. They are not yet ready for that, though. You could simply change the zpool commands in init.sh.