How I Built a Homelab That Runs Everything

A couple of years ago I was paying for a VPS, a managed database, and three different SaaS tools that each did one thing. The monthly bill wasn’t obscene, but it bothered me that I was renting compute to run stuff I could run myself. So I bought a mini PC, installed Proxmox, and now everything I build runs on a box under my desk.

The Hardware

Nothing flashy. It’s a single node — an Intel NUC-style mini PC with 64GB RAM and a 2TB NVMe. That’s it. No rack, no UPS (yet), no second node. For what I run, it’s more than enough. Proxmox barely touches the CPU most of the time, and I’ve still got headroom on RAM.

I did briefly consider TrueNAS for storage, but I didn’t want to manage a separate box. All my media sits on the NVMe and gets backed up to Backblaze B2 on a cron. Simple, boring, works.
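That backup is just a cron job. Assuming an rclone remote pointed at B2 (the remote name, paths, and schedule here are illustrative), it looks something like:

```shell
# /etc/cron.d/b2-backup: nightly off-site sync (remote name and paths are examples)
0 3 * * * root rclone sync /mnt/media b2:homelab-backup --log-file /var/log/b2-backup.log
```

rclone sync deletes files on the destination that no longer exist locally, so it pays to run it once with --dry-run before trusting the schedule.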

Proxmox and the Container Strategy

Proxmox VE 8.x is the hypervisor. I run 4 VMs and 14 LXC containers. The split is deliberate: LXC containers share the host kernel, which makes them far lighter than VMs, and they boot in seconds. Anything that doesn't need its own kernel gets an LXC.

Each container gets its own YAML doc so I can remember what’s running where six months later.
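There's no formal schema to these docs; the shape is roughly this (field names and values are just my own convention, shown here as an illustration):

```yaml
# ct-204.yaml (fields are illustrative, not a standard)
ctid: 204
hostname: vaultwarden
ip: 192.168.1.204
ports:
  - 8080        # web vault, proxied by Caddy
backup: nightly
notes: Bitwarden-compatible password manager
```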

Spinning up a new container is one command, then pct exec and you’re in. I use pct exec for everything — quick commands, installing packages, debugging. It’s basically SSH without the SSH.
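If you haven't used Proxmox's CLI, the loop looks roughly like this (the CTID, template, resources, and IP are all examples, not a prescription):

```shell
# create and start a Debian LXC (every value here is illustrative)
pct create 210 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname myproject --memory 2048 --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.210/24,gw=192.168.1.1 \
  --rootfs local-lvm:8
pct start 210

# run commands inside it without SSH
pct exec 210 -- apt-get update
pct exec 210 -- bash
```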

The Service Stack

Here’s what’s running across those containers:

Media — Jellyfin for streaming, with Sonarr, Radarr, Lidarr, and Prowlarr handling acquisition. Prowlarr manages the indexers, the *arr apps handle monitoring and downloads, and Jellyfin serves it all up. Standard setup, nothing custom.

Automation — n8n handles workflow automation (webhook triggers, API chains, database writes). Node-RED picks up the IoT and Home Assistant integration side. They overlap a bit, but n8n is better for complex multi-step workflows and Node-RED is better for event-driven device stuff.

Infrastructure — AdGuard Home for DNS-level ad blocking across the network. Vaultwarden for password management (Bitwarden-compatible, a fraction of the resources). PostgreSQL in its own container handles databases for multiple projects.

Apps — Immich for photo backup (Google Photos replacement that actually works). FoundryVTT for D&D sessions. Portainer for managing Docker stacks where I’m too lazy to write systemd units.

Projects — This is where it gets fun. Every side project I build gets deployed here first. QuiverDM (a D&D campaign manager), Gruntr (a sales toolkit), this blog — they all run on the same box. Develop locally, push, deploy to a container.

Networking: pfSense + Caddy + Cloudflare Tunnel

The network stack took the most iteration to get right.

pfSense sits on a dedicated box as the router/firewall. It handles DHCP, VPN (WireGuard), and firewall rules. Every container gets a static IP in the 192.168.1.x range.

Caddy runs in CT 202 as the reverse proxy. It terminates TLS and routes subdomains to the right container/port.

Cloudflare Tunnel is how external traffic reaches the homelab without exposing ports to the internet. No port forwarding, no dynamic DNS. The tunnel terminates inside the network and Caddy takes it from there. I use *.nerdt.au for internal services and *.blakewales.au for public-facing stuff.
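On the homelab side that's a single cloudflared config. The shape is roughly this (the tunnel UUID is a placeholder, and pointing the ingress at Caddy over HTTPS with noTLSVerify is one way to wire it, assumed here):

```yaml
# /etc/cloudflared/config.yml (UUID and hostnames are placeholders)
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json

ingress:
  - hostname: "*.blakewales.au"
    service: https://192.168.1.202:443   # hand off to Caddy in CT 202
    originRequest:
      noTLSVerify: true                  # Caddy's internal cert isn't publicly trusted
  - service: http_status:404             # catch-all for anything unmatched
```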

This setup means I can spin up a new project, add three lines to the Caddyfile, point a DNS record at the tunnel, and it’s live. Deploy time for a new service is measured in minutes.
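Those three lines of Caddyfile are literally about this much (hostname and upstream are examples):

```caddyfile
# route a new subdomain to its container (values illustrative)
myproject.blakewales.au {
    reverse_proxy 192.168.1.210:3000
}
```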

Deploying Projects

Most of my projects deploy with a simple bash script. No CI/CD pipeline, no Kubernetes, no container registry. Build locally (or on the Proxmox host), push the files into the container, done. For side projects that I’m the only user of, this is the right level of complexity.
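A representative version of that script, assuming an SSH-reachable container running a Node app (the IP, paths, and service name are placeholders for whatever the project actually uses):

```shell
#!/usr/bin/env bash
# deploy.sh: build locally, push to the container, restart (all values illustrative)
set -euo pipefail

CT_IP="192.168.1.210"       # container's static IP
APP_DIR="/opt/myproject"    # install path inside the container
SERVICE="myproject"         # systemd unit running the app

npm run build                                         # build locally
rsync -az --delete dist/ "root@${CT_IP}:${APP_DIR}/"  # sync build output
ssh "root@${CT_IP}" "systemctl restart ${SERVICE}"    # pick up the new code
```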

For Docker-based services, it’s even simpler — docker compose up -d inside the container and Portainer keeps an eye on it.
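The compose files themselves are equally boring. The pattern, with a generic service standing in for the real ones (image, port, and volume are placeholders):

```yaml
# docker-compose.yml inside the container (service details illustrative)
services:
  app:
    image: ghcr.io/example/app:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./data:/app/data
```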

What I’d Do Differently

Start with documentation. I didn’t write container docs until I had 10+ containers and couldn’t remember which IP ran what. The YAML docs per container were a late addition that should have been there from day one.

Backups earlier. I lost a PostgreSQL database once because I assumed “it’s local, it’s fine”. Now everything important gets backed up off-site.

Don’t over-containerise. Early on I gave every tiny service its own LXC. Some of those could easily share a container. More containers means more to update and monitor.

What’s Next

The homelab keeps growing. Current plans include GPU passthrough for a Windows VM (gaming + AI inference), a dedicated build server for CI, and maybe a second node for redundancy. But honestly, a single box running Proxmox has taken me surprisingly far.

If you’re thinking about building a homelab — just start. Buy whatever hardware you can afford, install Proxmox, and start containerising things. You’ll learn more about networking, Linux, and infrastructure in a month than any course will teach you.