Homelab: What's New
The original post covered the stack as it stood when I first got everything running. A few things have changed since then, mostly in the direction of “the GPU is finally doing more than just Jellyfin”.
Local LLMs: Ollama and Open WebUI
This was the obvious next step, and it turned out to be straightforward. Ollama runs as a container with runtime: nvidia and NVIDIA_VISIBLE_DEVICES=all — the same passthrough pattern Jellyfin already used — and handles model management and inference. Open WebUI sits in front of it as the chat interface.
Models live at /mnt/cache/ollama/models. The RTX 2080 has 8 GB of VRAM, which is enough for capable quantised models. Running a 7B or 8B model at a sensible quantisation level is fast and responsive, and nothing leaves the house.
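A minimal compose sketch of that pairing — the GPU passthrough lines and the host model path match what's described above, but the service names, image tags, and the Ollama internal model path are assumptions, not the actual files:

```yaml
# Hypothetical sketch, not the real compose file.
# runtime + NVIDIA_VISIBLE_DEVICES is the same GPU passthrough
# pattern Jellyfin uses; models live on the cache pool.
services:
  ollama:
    image: ollama/ollama:latest
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      # Assumed mapping of the host model dir into Ollama's default store
      - /mnt/cache/ollama/models:/root/.ollama/models

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Open WebUI reaches Ollama by service name on the compose network
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
```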
The more interesting part is Open WebUI’s Tools support. All my services share a nas_network bridge, which means Open WebUI can reach Gitea, Portainer, Jellyfin, Kavita, Healthchecks, and others directly by container name — no reverse proxy, no Authelia in the way. Each service has its own API auth (tokens, session keys), but the routing is just internal Docker networking. The result is a local LLM that can actually query my infrastructure: check what’s running, browse the media library, look at repo contents, see which health checks are down.
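The shared-bridge wiring is just an external Docker network that every service's compose file joins. A sketch of the pattern, assuming the network is created once on the host with `docker network create nas_network` (the network name is from the post; the service stanza is illustrative):

```yaml
# Because every stack joins the same external bridge, containers
# resolve each other by name — no reverse proxy in between.
services:
  open-webui:
    # ... rest of the service definition ...
    networks:
      - nas_network

networks:
  nas_network:
    external: true   # pre-created outside compose, shared by all stacks
```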
It started as a novelty. It has become genuinely useful for things I’d otherwise open three tabs for.
Portainer
Portainer isn’t actually new — it’s been running for a while. What’s new is that the compose file is now committed to the repository, which feels somewhat overdue for the service responsible for managing everything else.
It holds a special position in the stack: it’s the only service deployed directly with docker compose up on the host. Every other service runs as a Portainer stack — compose files managed and deployed through the Portainer UI itself. That means Portainer bootstraps the rest of the stack, and the rest of the stack lives inside Portainer.
It runs on nas_network so Open WebUI can reach its API as a Tool, and it sits behind Authelia with two-factor auth — the right call for something with Docker socket access. Practically, it’s most useful for a quick health check without SSHing in: spot a stopped container, tail logs for something misbehaving, restart a service. Nothing you couldn’t do from the command line, but faster for the common cases.
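A hedged sketch of what that committed Portainer compose file plausibly looks like — the one service brought up directly with docker compose up. The image tag, volume name, and restart policy are assumptions; the socket mount and shared network are the details the post describes:

```yaml
# Hypothetical sketch of the bootstrap service.
# The Docker socket bind mount is exactly why this one sits
# behind Authelia with two-factor auth.
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    networks:
      - nas_network   # reachable by Open WebUI as a Tool

volumes:
  portainer_data:

networks:
  nas_network:
    external: true
```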
What’s Still in Progress
Homarr (a dashboard/homepage for all the services) and Vaultwarden (a self-hosted Bitwarden-compatible password manager) both have compose files committed but aren’t running yet. They’ll come.
The self-hosted mail idea came up because Google flagged that my storage was running low, which prompted a full Google Takeout and a moment of thinking about how many things depend on a Gmail address. I started looking at self-hosted mail stacks. I got about as far as committing the compose config before concluding that running your own mail server in 2026, with all the deliverability headaches and SPF/DKIM/DMARC maintenance that entails, is not a trade I actually want to make. The takeout is sitting on /mnt/storage now, which is probably the better outcome anyway.
What’s the Same
Everything from the original post is still running: Jellyfin, PhotoPrism, Kavita, Pi-hole, Unbound, Authelia, Nginx Proxy Manager, Gitea, the act runner, the private registry, Beszel, Healthchecks. The one-compose-file-per-service structure held up fine as the stack grew. No monolithic compose files, no cross-service restarts from an unrelated update.
The stack is in a good place. The GPU is busier than it was, and that feels right.