Hello,
I have a little homelab with a 3-node k3s cluster that I'm pretty happy with, but I have some questions regarding ingress.
Right now I use nginx as the ingress controller, and I have the IP of one of the nodes defined under externalIPs. All the nodes sit behind the router my ISP gave me, so nothing special there; on that router I forward port 443 to port 443 of that IP. This all works as expected, and I'm able to access the ingress resources I want.
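For reference, the relevant part of my ingress-nginx Service looks roughly like this (names simplified, the node IP is just an example):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443        # my router forwards WAN 443 to this
      targetPort: 443
  externalIPs:
    - 192.168.0.10     # IP of one of the three nodes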
But I want to make some improvements to this setup, and I'm honestly not sure how to implement them.
- Highly available ingress. When the node that holds the ingress controller's IP goes down, I can't reach my cluster's ingress anymore, since my router can't forward the traffic. What's the best way to configure all 3 nodes to receive ingress traffic? (If needed I can put everything behind something like OpenWrt or OPNsense, but I'm not sure that's necessary.)
- Some ingress resources I only want to expose on my local network. I read online that I can use
nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
but this doesn't seem to work, I think because the ingress controller doesn't see the client's actual IP, only an internal k3s IP. Or is there another way to only allow certain IPs to access an ingress resource? (A simplified example of what I tried is below.)
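This is roughly the Ingress I tried the annotation on (hostname and service name are made up, the annotation is the one from above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
  annotations:
    # only allow clients from my LAN range
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
spec:
  ingressClassName: nginx
  rules:
    - host: app.home.example
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: internal-app
                port:
                  number: 80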
Could someone point me in the right direction for these improvements? If you need more information, just ask!
Thanks for your time and have a great day!
MetalLB sounds like what you need. Basically, you give it a range in your subnet (excluded from DHCP on the router!) and it assigns those IPs to your LoadBalancer services. It announces each IP over ARP or BGP, which is what makes automatic failover work.
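A minimal L2 sketch of that (the pool name and address range are just examples, pick a range outside your DHCP pool):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.240-192.168.0.250   # reserved range, not handed out by DHCP
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool

Then make your ingress-nginx Service type LoadBalancer so it gets one of those IPs, and point the router's 443 forward at that IP instead of a single node. Whichever node currently answers ARP for it receives the traffic; if that node dies, another one takes over the announcement.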
I'm a little curious what you are using for a hypervisor. I'm using Apache CloudStack, which has a lot of the same features as AWS and Azure. Basically, I have 1000 VLANs prepared for standing up virtual networking. CloudStack uses CentOS to stand up virtual firewalls for the networks in use. These firewalls not only handle firewall rules but can also do load balancing, which I use for k8s. You can also make the networks HA just by checking a box when you stand them up; this runs a second firewall that only kicks in if the main one stops responding. The very reason I chose CloudStack was how easy it is to set up a k8s cluster. The biggest cluster I've stood up is 2 control nodes and 25 worker nodes, and it took 12 minutes to deploy.
I'm not using any hypervisor (yet), but in the future I'm probably going to look at Proxmox.
I'd never heard of CloudStack before, but what I just read and what you described sounds really interesting!
27 nodes in 12 minutes sounds insane :)
Yeah, my storage was beefed up at the time: ZFS RAID 10. But I've since switched to Ceph for shared, redundant storage.
Ceph is really cool, I also want to use it in the future, but I need way more disks for that :). Are those 25 worker nodes virtual machines? How did you attach the disks to the Ceph nodes?