I've been self-hosting services at home for about three years now. What started as a single Raspberry Pi running Pi-hole has evolved into a proper homelab running 20+ services across multiple VMs. Here's a practical, opinionated guide to building your own.
Why Self-Host?
The motivations are part practical, part philosophical:
- Privacy — Your data stays on hardware you control. No third-party scanning your files or training models on your documents.
- Learning — There is no better way to learn networking, Linux administration, and distributed systems than running real services with real traffic.
- Cost — After the initial hardware investment, running your own cloud storage, password manager, and media server costs essentially nothing beyond electricity.
- Resilience — When a SaaS provider has an outage or changes their pricing, you're unaffected.
Hardware
My current setup lives in a compact 10-inch rack in my study. In Singapore, space and power efficiency matter — electricity isn't cheap, and HDB flats aren't exactly spacious.
The Server
- Chassis: Minisforum MS-01 (mini workstation form factor)
- CPU: Intel i9-13900H (14 cores, 20 threads)
- RAM: 64GB DDR5 (2× 32GB SO-DIMM)
- Storage: 2TB NVMe (OS + VMs) + 4TB SATA SSD (data)
- Power draw: ~25W idle, ~65W under typical load
The MS-01 is an excellent homelab platform. It has dual 2.5GbE NICs, a PCIe 4.0 slot, and enough grunt to run a dozen VMs comfortably — all while staying near-silent.
Networking
- Router/Firewall: OPNsense running on a dedicated mini PC
- Switch: Netgear 8-port managed switch with VLAN support
- Access points: 2× TP-Link EAP series, managed via Omada controller
Proxmox Setup
Proxmox VE is the hypervisor layer. It's Debian-based, free, and rock-solid for homelab use. I run three main VMs:
- docker-host — Ubuntu Server 24.04, 8 vCPUs, 32GB RAM. This is where all containerised services live.
- truenas — TrueNAS Scale for file storage, sharing, and snapshots. Manages the 4TB data drive via passthrough.
- dev — A lightweight Ubuntu Desktop VM for remote development via RDP when I'm away from home.
Docker Compose Stack
All services on the docker-host VM are managed via Docker Compose. I organise them into logical groups, each with its own docker-compose.yml.
Core Infrastructure
```yaml
# core/docker-compose.yml
services:
  traefik:
    image: traefik:v3
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro
      - ./certs:/certs

  pihole:
    image: pihole/pihole:latest
    environment:
      - TZ=Asia/Singapore
    dns:
      - 127.0.0.1
      - 1.1.1.1

  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./uptime-data:/app/data
```
Traefik handles reverse proxying and automatic TLS certificates for all services. Pi-hole provides DNS-level ad blocking for the entire network. Uptime Kuma monitors everything and sends alerts via Telegram.
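To make the routing concrete, here's roughly what exposing a service through Traefik looks like using Docker labels — the hostname, router name, and entrypoint below are illustrative, not my actual config:

```yaml
# Sketch: exposing Uptime Kuma through Traefik via container labels.
# Hostname, router name, and entrypoint name are assumptions.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime.rule=Host(`status.home.lab`)"
      - "traefik.http.routers.uptime.entrypoints=websecure"
      # Uptime Kuma listens on port 3001 inside the container
      - "traefik.http.services.uptime.loadbalancer.server.port=3001"
```

Each service declares its own hostname, and Traefik picks the route up from the Docker socket without a restart.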
Productivity
- Nextcloud — Cloud storage replacement. Syncs files across all devices, handles contacts and calendars.
- Vaultwarden — Bitwarden-compatible password manager. Lightweight, full-featured, self-hosted.
- Paperless-ngx — Document management with OCR. Every piece of mail gets scanned and auto-tagged.
- Mealie — Recipe manager. My partner's favourite service in the stack.
Development
- Gitea — Self-hosted Git with CI/CD via Gitea Actions. Mirrors my GitHub repos.
- Drone CI — Runs builds and tests for personal projects.
- PostgreSQL — Shared database instance for all self-hosted apps that need one.
- Redis — Shared cache.
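The shared-database pattern looks something like this — service name, network, and credentials are placeholders, not my exact stack:

```yaml
# Sketch: one shared PostgreSQL instance on a common network.
# Image tag, network name, and credentials are placeholders.
services:
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASS}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    networks:
      - backend

networks:
  backend:
```

Apps that need a database join the backend network and each get their own database inside the one instance, which keeps memory usage well below running a separate Postgres container per app.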
Monitoring
```yaml
# monitoring/docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASS}

  node-exporter:
    image: prom/node-exporter:latest
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
```
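The prometheus.yml mounted above needs at least one scrape job. A minimal sketch — the interval, job name, and target are assumptions, not my production config:

```yaml
# prometheus.yml — minimal sketch; interval, job name, and
# target address are assumptions.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      # node-exporter's default port; the container name resolves
      # because both services share a Compose network
      - targets: ["node-exporter:9100"]
```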
Networking & Access
Remote access is handled through WireGuard. I run a WireGuard server on the OPNsense router, and my phone and laptop connect directly into the home network when I'm out. No Cloudflare tunnels, no exposed ports — just a single UDP port for the VPN.
Internally, services are accessed via *.home.lab domains resolved by Pi-hole. Traefik handles TLS termination with locally-issued certificates via a private CA.
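For the locally-issued certificates, Traefik needs to be pointed at them via its dynamic configuration. Roughly — the file paths here are assumptions:

```yaml
# Traefik dynamic config sketch: serve the wildcard certificate
# issued by the private CA. Paths are assumptions.
tls:
  certificates:
    - certFile: /certs/home.lab.crt
      keyFile: /certs/home.lab.key
```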
Backup Strategy
Backups follow the 3-2-1 rule:
- Primary: Data lives on the TrueNAS ZFS pool with hourly snapshots.
- Local backup: Daily replication to an external USB drive via zfs send.
- Offsite backup: Critical data (documents, photos, password vault) is encrypted with restic and pushed to Backblaze B2 weekly. At ~$5/month for 500GB, it's cheap insurance.
Power & Cost
Running a homelab in Singapore means dealing with electricity costs. At about S$0.33/kWh, here's my monthly breakdown:
- Server: ~30W average × 730 hours = ~22 kWh → S$7.30
- Networking: ~15W → S$3.60
- Total: ~S$11/month
Compare that to equivalent cloud services (Dropbox, 1Password, monitoring, CI/CD, DNS filtering) which would easily cost S$50+/month in subscriptions. The hardware paid for itself within a year.
Lessons Learned
- Start small. You don't need a rack server on day one. A Raspberry Pi or old laptop is enough to learn the fundamentals.
- Document everything. Future-you will not remember why you configured that firewall rule. Use a Git repo for all config files.
- Automate updates. Watchtower or Renovate for Docker images. Unattended-upgrades for the host OS. Manual updates don't scale.
- Monitor from the start. Uptime Kuma takes 5 minutes to set up and will save you hours of debugging later.
- Accept imperfection. Your homelab will break. That's the point — every failure is a learning opportunity.
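On the update-automation point, a minimal Watchtower service looks something like this — the schedule and cleanup option are assumptions, tune to taste:

```yaml
# Sketch: Watchtower polling for image updates nightly.
# The schedule (6-field cron, seconds first) is an assumption.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true      # remove old images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *
```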
If you're on the fence about building a homelab, just start. Spin up a single container on an old machine and see where it takes you. Three years in, it's easily one of my most rewarding technical projects.