My Homelab Setup
I’ve always been fascinated by self-hosting software. There’s something intriguing about having a personal playground where I can deploy and experiment with software—especially when it’s on more powerful hardware than typical cloud instances, without the high cost.
If you’ve ever browsed r/homelab, you know there are some impressively complex setups out there, often featuring server racks loaded with enterprise-grade equipment. For my homelab, though, I wanted to keep things simple and low-maintenance, while still being capable of hosting real applications, like this blog and various self-hosted tools I use.
Recently, I’ve been diving deeper into DevOps and realized that Kubernetes has become foundational for modern deployments, making it a must-learn technology.
In this post, I’ll walk through the components of my current homelab and self-hosting infrastructure, sharing the reasoning behind my choices along the way, though admittedly, some of them came down to gut feeling!
Early Self-Hosting Attempts
My first attempt at self-hosting was over 10 years ago, on an old ThinkPad. I followed some tutorials to install CentOS and serve simple web pages. My goal was to access these pages over the internet, which introduced me to port forwarding and IP configuration.
Back then, though, I didn’t have much use for the server—I wasn’t yet into software development, and containerization was still immature. Installing applications often involved multi-step processes that could be frustratingly complex for a beginner.
Fast forward to today, and I finally have an application (this blog!) that I want to share. Hosting this blog has evolved through AWS, CapRover, Render.com, and now, Dokploy. It might change again, but for now, it’s a comfortable fit.
Equipment
- 1 x Optical Network Terminal
- 2 x Routers
- 1 x Beelink SER 7
- 3 x Beelink Mini S12
Software Stack
Network Topology
My homelab network is reachable remotely through an OpenVPN connection to my homelab router. It consists of two main parts: virtual machines managed by Hyper-V and a Kubernetes cluster managed by Talos.
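For a rough idea of what that remote access looks like, here is a minimal OpenVPN client profile sketch. The hostname, port, and certificate file names are placeholders, not my actual configuration:

```
client
dev tun
proto udp
remote homelab.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
remote-cert-tls server
verb 3
```

With a profile like this loaded on a laptop or phone, anything behind the homelab router is reachable as if on the local network.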
Talos 1.7 vs. Ubuntu 22.04 + Ansible vs. Talos 1.8 + Image Factory
Talos is a minimal, immutable OS built specifically for running Kubernetes. Typically, a bare-metal Kubernetes setup involves installing an OS like Ubuntu, configuring Kubernetes dependencies, and hardening the OS yourself. Talos simplifies this by bundling Kubernetes with a pre-hardened OS, letting you treat nodes, like containers, as “cattle” rather than “pets.” This approach eliminates per-node package management, but there is no shell or SSH access: the OS is managed entirely through a declarative API, which limits direct customization.
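To make “managed entirely through an API” concrete, here is roughly what bringing up a Talos node looks like with `talosctl`. This is a sketch based on the standard Talos getting-started flow, with a hypothetical node IP, not my exact commands:

```shell
# Generate cluster secrets and machine configs (controlplane.yaml, worker.yaml)
talosctl gen config homelab https://192.168.1.10:6443

# Push the config to a node booted from Talos install media
talosctl apply-config --insecure --nodes 192.168.1.10 --file controlplane.yaml

# Bootstrap etcd on the first control-plane node, then fetch a kubeconfig
talosctl bootstrap --nodes 192.168.1.10 --endpoints 192.168.1.10
talosctl kubeconfig --nodes 192.168.1.10 --endpoints 192.168.1.10
```

There is no SSH step anywhere in the flow; every change to a node goes through a machine config applied over the API.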
When I started my homelab project a few months ago, Talos 1.7 was the latest version, though it didn’t support Kubernetes 1.31, which is tested in the Certified Kubernetes Administrator (CKA) exam. Despite this, I installed Talos 1.7 on my three Beelink Mini PCs without issues.
Later, when Talos 1.8 was released with support for Kubernetes 1.31, I upgraded. Unfortunately, I encountered an infinite boot loop. After extensive debugging, I put the project on hold. When I resumed, I tried installing Kubernetes on Ubuntu 22.04 with Ansible, which worked until I hit issues with repository certificates. Realizing I didn’t want to manage such specifics, I returned to Talos.
By that time, Talos 1.8.2 was available with some fixes, though I missed a critical change:
Starting with Talos 1.8, hardware that needs extra drivers requires building a custom image with those drivers included via the Image Factory. Had I noticed this change earlier, I could have saved myself some frustration. Now, with the required drivers baked into my image, Talos 1.8.2 and Kubernetes 1.31 are running smoothly.
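For reference, the Image Factory takes a short schematic describing which system extensions to bake into the image. A sketch of one is below; the extension name is an example, not necessarily what a given machine needs:

```yaml
# schematic.yaml — submitted to factory.talos.dev, which returns a schematic ID
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/intel-ucode   # example: Intel CPU microcode updates
```

The factory then serves install and upgrade images under that ID, referenced like `factory.talos.dev/installer/<schematic-id>:v1.8.2`.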
Yay! Now, time to really dive into learning Kubernetes… lol.
Proxmox vs. Hyper-V on SER 7
For hosting software that doesn’t suit containers, like AI and database applications, I needed a VM server. Many in the r/selfhosted community recommend Proxmox, so I was tempted to try it. However, my SER 7 was already set up with Windows and working well. Rather than risk compatibility issues with a fresh Proxmox install, I opted for Hyper-V, which covers my simple requirements.
In this setup, I bridged the VM network to my homelab router so that services like PostgreSQL can run on a single VM and be reachable from the others. Hyper-V also supports exporting VMs for backup and can automatically restart them after Windows updates, which tend to happen at random times.
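As a sketch, verifying that kind of setup from any other machine on the bridged network is a one-liner. The address, user, and database names here are hypothetical:

```shell
# Because the VM is bridged onto the homelab LAN, Postgres is reachable
# directly by its LAN address rather than through the Hyper-V host.
psql -h 192.168.1.50 -p 5432 -U app -d appdb -c 'SELECT version();'
```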
In hindsight, Talos installations might have been easier on VMs, but I enjoy the learning experience of a bare-metal setup.
Dokploy vs. Coolify vs. Dokku
For easy deployment, I explored self-hosted alternatives to Heroku, trying Coolify, Dokku, and Dokploy. I ruled out CapRover early on due to licensing concerns. Dokku lacked a GUI, while Coolify seemed feature-rich but turned out to have several half-finished features.
In the end, I chose Dokploy, partly because I thought it had capabilities the others lacked. For instance, I wanted my containers to connect to the PostgreSQL database on my VM. Initially, I misunderstood Docker networking: I assumed containers couldn’t reach machines outside the Docker host, when in fact their traffic is routed through the host, so other devices on the LAN are reachable.
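A quick way to see this for yourself: a container on Docker’s default bridge network has its outbound traffic NATed through the host, so other machines on the LAN are reachable without any special configuration (the VM address below is hypothetical):

```shell
# Traffic from the container is masqueraded via the host's routing table,
# so LAN hosts outside Docker are reachable from inside the container.
docker run --rm alpine ping -c 1 192.168.1.50
```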
Another minor misunderstanding was assuming GitHub pushes would trigger rebuilds out of the box; in practice, both Coolify and Dokploy achieve this via webhooks.
While Coolify has a larger community, I’ve come to like Dokploy, and switching between the two should be straightforward since they’re container-based.
Currently, I use Dokploy to host services like Mealie, Actual Budget, MediaWiki, and my personal site, jerome-ng.com.
Conclusion
Building a homelab and self-hosting setup involves a lot of moving parts, from hardware to software and networking. But it’s rewarding when everything falls into place. I’m glad to have made these mistakes on a non-critical project—each one has been a learning experience that will definitely prepare me for similar challenges in the future.