Homelab – Build a High-Availability Kubernetes Cluster with Rancher on Proxmox – Part 2

Welcome back to the Adventures in Failure homelab series. In Part 1, we talked a bit about the end result of this series and saw a diagram of the devices in my network.

In this installment, I’ll detail the hardware I’m running and explore ways to get the same effect with a smaller setup.

Hardware

Network

  • Router/firewall: Sophos UTM 220 Network Security Appliance. This is an 8-port appliance. I currently use only 3 – 1 for the WAN connection to the Internet, 1 for the LAN connection to my “production” network, and 1 for the LAN connection to my lab network. Purchased on eBay for ~$50
  • Switches:
    • TP-Link TL-SG116E 16-port Smart Managed switch. This switch is used for the production network and physically separated from the lab, but rules are defined in the firewall to allow devices to communicate across networks. Purchased new for ~$100
    • D-Link DGS-1100-24V2 Gigabit Ethernet Switch. This switch is used for the lab and contains the additional VLANs for the cluster and storage networks. Purchased new for about $150

Infrastructure

  • Virtualization: Supermicro X8DTT-F 2-Node Server Xeon E5640 2.67GHz (x4) 96GB 2x 1TB HDDs. A fantastic deal I picked up on eBay – this is 2 full servers in a single 1U chassis. Each server has dual 4-core CPUs, for a total of 16 threads each with Hyper-Threading enabled, and 48GB RAM. Each server board supports 192GB of DDR3 ECC RAM, but 48GB suits my needs for now. eBay frequently has good deals on RAM if I ever decide to upgrade. Both servers have 1TB 7200 RPM hard drives installed to hold the Proxmox operating system and some local storage, but VM storage is managed with my NAS.
  • Network-Attached Storage: Custom-built PC that was once my daily driver. I’ve long since upgraded, but it’s still a solid machine for storage duties. This box contains a Gigabyte GA-78LMT-S2P motherboard running an AMD FX-6100 six-core processor with 16GB DDR3 RAM. I’ve outfitted it with two 4TB 5400 RPM WD Red NAS drives in a striped configuration for performance. Once I’ve got a handle on how much storage I really need, I’ll upgrade or add additional drives and move to RAID 10.
  • Primary Domain Controller: Another custom-built box once used as my main PC, this is the only physical server I own apart from my Proxmox hosts. It holds an MSI 785GM-P45 motherboard with an AMD Athlon II X4 635 CPU and 24GB RAM. I run Windows Active Directory on this box to manage users and permissions in the house, as well as DHCP for the lab and internal DNS for both the lab and production networks.
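The striped-vs-RAID 10 trade-off mentioned for the NAS above comes down to simple capacity math. Here’s a quick sketch (the function name and drive counts are just for illustration): RAID 0 gives you all the raw capacity with zero fault tolerance, while RAID 10 halves it in exchange for surviving a drive failure per mirror pair.

```python
def usable_tb(drive_tb: float, n_drives: int, level: str) -> float:
    """Usable capacity in TB for a couple of common RAID levels."""
    if level == "raid0":
        # Striping: every drive contributes capacity, no redundancy
        return drive_tb * n_drives
    if level == "raid10":
        # Striped mirrors: half the raw capacity, needs an even drive count
        if n_drives % 2 != 0:
            raise ValueError("RAID 10 requires an even number of drives")
        return drive_tb * n_drives / 2
    raise ValueError(f"unsupported level: {level}")

print(usable_tb(4, 2, "raid0"))   # current setup: 2x 4TB striped -> 8.0 TB
print(usable_tb(4, 4, "raid10"))  # 4x 4TB in RAID 10 -> also 8.0 TB, but fault-tolerant
```

In other words, adding two more 4TB drives and converting to RAID 10 would keep the same usable space while adding redundancy.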

Software

Firewall/Router: The UTM 220 listed above is a Sophos device, intended to run Sophos UTM, and it does so quite capably. I reimaged it with OPNsense because Sophos home licenses don’t work on their hardware appliances, and it didn’t make sense to me to pay for one for a home lab. The OPNsense software runs fabulously on this device, and OPNsense even has a plugin to make use of the display on the Sophos box, though the display isn’t quite as nice as Sophos creates. I’ve seen that it may be possible to tweak this, and will create a post detailing that ~~failure~~ incredible win if I get the time to investigate.

Virtualization: I run Proxmox 7.2 in the lab currently. In the past, I’ve set up VMware ESXi, Microsoft Hyper-V, and even XCP-ng, but keep coming back to Proxmox for a couple of reasons:

  • VMware ESXi: The hypervisor itself is free, but enabling high availability and VM load balancing requires vCenter Server. vCenter is not free, but you can get renewable 365-day trials with a $200 VMUG membership. This is something I’ve considered, but I don’t work with VMware in my day job, and I don’t like that vCenter requires another VM using a minimum of 10GB RAM just to manage the cluster – I’d rather use that RAM for additional service workloads of my own.
    This is not intended to knock VMware in any way – they make fantastic products, and I ran ESXi for years before discovering Proxmox. If you use VMware for work or want to break into virtualization as a profession, I highly recommend it – most companies, especially the larger ones, run VMware for virtualization.
  • Microsoft Hyper-V: Hyper-V is Microsoft’s entry in the virtualization space, and it’s a great one. Azure runs on it, and it’s been available on Windows servers since Windows Server 2008. Like VMware, managing a high-availability Hyper-V cluster well requires additional software in the form of Microsoft System Center Virtual Machine Manager (this can be done through the standard Windows Failover Clustering feature as well, but it’s not as full-featured). The free version of Hyper-V doesn’t require additional licensing, but can’t be managed in a cluster configuration without VMM or Failover Clustering on a Windows server, so there are additional costs to consider here too. Definitely a contender if you want to work in enterprise virtualization – this is the 2nd-largest installed base of virtualization software in enterprises.
  • XCP-ng: I give this one an honorable mention. The team has come a long way with this solution, but for reasons unknown even to me, I just couldn’t get behind it when I tested it in my lab.

NAS

Possibly the most contentious of all virtualization and homelab discussions is the storage mechanism to hold all these delicious bits. It’s possible that I may have tried every NAS solution out there by now, from TrueNAS (a fan favorite on Reddit) to Synology DiskStation to Openfiler to OpenMediaVault. After years of switching between TrueNAS and Openfiler, I’ve finally landed on XigmaNAS. I like XigmaNAS because it’s dead easy to set up, has a clean interface, and supports SMB, NFS, and iSCSI. TrueNAS supports all of these as well, and has a great interface, but my current storage server doesn’t like something about it and became unstable after just a few weeks running. Rather than troubleshoot the issue, I thought I’d look for an alternative and found XigmaNAS. I like it so far, but check back to see if I continue with it.
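Since my Proxmox hosts keep their VM storage on the NAS, an NFS export gets wired in as a Proxmox storage pool. As a sketch, the `pvesm` command below registers one – the storage ID, server address, and export path are placeholders for your own values:

```shell
# Register an NFS export from the NAS as shared Proxmox storage
# (nas-vmstore, 192.168.10.50, and /mnt/tank/vmstore are placeholder values)
pvesm add nfs nas-vmstore \
    --server 192.168.10.50 \
    --export /mnt/tank/vmstore \
    --content images,rootdir
```

Because the storage is shared across hosts, VM disks stay reachable from every node – which is what makes live migration and HA failover possible later in this series.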

Continue to Part 3 to start putting this all together…
