How do y’all manage all these Docker compose apps?
First I installed Jellyfin natively on Debian, which was nice because everything just worked with the normal package manager and systemd.
Then, Navidrome wasn’t in the repos, but it’s a simple Go binary and provides a systemd unit file, so that was not so bad just downloading a new binary every now and then.
Then… Immich came… and forced me to use Docker compose… :|
Now I’m looking at Frigate… and it also requires Docker compose… :|
Looking through the docs, looks like Jellyfin, Navidrome, Immich, and Frigate all require/support Docker compose…
At this point, I’m wondering if I should switch everything to Docker compose so I can keep everything straight.
But, how do folks manage this mess? Is there an analogue to apt update, apt upgrade, systemctl restart, and journalctl for all these Docker compose apps? Or do I have to manage each app individually? I guess I could write a bash script… but… is this what other people do?
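There's no single built-in equivalent, but per stack the mapping is roughly the one below. A sketch written as shell aliases; the alias names are my own invention, and each one assumes you run it from the directory that holds that stack's compose file:

```shell
# Rough per-stack analogues of the apt/systemctl workflow (alias names
# are made up; run from the directory containing compose.yaml):
alias dc-update='docker compose pull && docker compose up -d'  # ~ apt update && apt upgrade
alias dc-restart='docker compose restart'                      # ~ systemctl restart <svc>
alias dc-logs='docker compose logs --tail=100 -f'              # ~ journalctl -fu <svc>
```

The missing piece compared to apt is the "all apps at once" part, which is what people below solve with a loop, a script, or a tool like Komodo/Dockge/Portainer.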
Yeah, I have everything as compose.yaml stacks and those stacks + their config files are in a git repo.
Each app has a folder, and I have a bash script that runs docker compose up -d in each folder to update them. It is crude and will break something at some stage, but meh, jellyseer or TickDone being offline for a bit is fine while I debug.
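That kind of loop can be sketched roughly like this. The directory layout and variable names are my assumptions, and DC can be overridden (e.g. DC="echo docker compose") to print the commands instead of running them:

```shell
#!/bin/sh
# Crude one-pass updater: assumes one compose stack per subdirectory.
# DC defaults to the real command; set DC="echo docker compose" to dry-run.
DC="${DC:-docker compose}"

update_all() {
    for dir in "$1"/*/; do
        # skip directories that don't hold a compose file
        [ -f "${dir}compose.yaml" ] || [ -f "${dir}docker-compose.yml" ] || continue
        echo "== updating $dir"
        ( cd "$dir" && $DC pull && $DC up -d )
    done
}

# usage: update_all ~/stacks
```

Adding the pull before up -d is what actually fetches newer images; up -d alone only recreates containers whose image or config changed.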
docker compose CLI.
KISS, never did me wrong.
Strongly recommend Komodo. I tried Dockge and Portainer, but Komodo was easy to install and has great features, from scheduled updates to using compose files from a git repo. Also, you can migrate existing apps to it without too much work.
I just made the switch to Komodo last week: a Komodo LXC managing 4 VMs across two Proxmox hosts. Easily added all the existing servers and it just worked. I think any of these systems are probably overkill for my needs, but Komodo had the nicest “fresh out of the box, find the important stuff right away” feel to it. My two cents.
I run Akkoma, Navidrome, Searx, Vaultwarden, RomM, Forgejo, WireGuard, RDP, and a few other things all via Docker. Honestly I just keep everything in its own dir and have Yazi on my server to make it easier to manage. I don’t auto-update anything, it’s all manual updates.
I’m probably going to slap Watchtower in there to make things easier. Don’t really need to overthink it, in all honesty.
docker compose pull; docker compose down; docker compose up -d
Pulls an update for the container, stops the container and then restarts it in the background. I’ve been told that you don’t need to bring it down, but I do it so that even if there isn’t an update, it still restarts the container.
You need to do it in each container’s folder, but it’s pretty easy to set an alias and just walk your running containers, or just script that process for each directory. If you’re smarter than I am, you could get the list from running containers (docker ps), but I didn’t name my service folders the same as the service name.
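For what it's worth, compose v2 can tell you where each running project was started from even when the folder names don't match: docker compose ls reports the config file path per project. A rough sketch (the grep/cut parsing is my crude stand-in for jq):

```shell
#!/bin/sh
# Sketch: update every *running* compose project, whatever its folder is
# called. "docker compose ls --format json" reports each project's config
# file path; the grep/cut below is a crude stand-in for jq.
update_running() {
    docker compose ls --format json |
        grep -o '"ConfigFiles":"[^"]*"' | cut -d'"' -f4 |
        while read -r cfg; do
            echo "== $cfg"
            docker compose -f "$cfg" pull
            docker compose -f "$cfg" up -d
        done
}
# usage: update_running
```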
What commands do you have to run after you update docker-compose.yml or .env files? I updated one of those files once and bad things happened… I haven’t had to update the configs in a long time. I usually just do docker compose down && docker compose up -d, as I would with any service restart. The up -d command is supposed to reload it as well, but I prefer knowing for certain that the service restarted.
Out of curiosity, what did you update and what broke? I had that happen a lot when I was first getting started with Docker, and it’s part of how I learned. Once you have a basic template (or the developer supplies example files), it makes spinning up new services less of a hassle.
Though I still get yelled at about the version entry in my files because I haven’t touched mine in forever.
My breakage happened a while ago, so I don’t quite remember all of the details…
but what I think happened was I updated /etc/immich/.env. I updated the path for DB_DATA_LOCATION and/or UPLOAD_LOCATION, and then I ran docker compose up -d, I think. But nothing changed…
I use k3s and argocd to keep everything synced to the configuration checked into a git repo. But I wouldn’t recommend that to someone just getting started with containers; kubernetes is a whole new beast to wrestle with.
Kubernetes is the Arch of Containers except way more confusing
I use it for work so it felt natural to do it at home too. If anyone has time to learn it as a hobby and doesn’t mind a challenge, I recommend it. But IMO you need to already be familiar with a lot of containerization concepts
So I have a git repo with all my compose files in it; some of the stacks are version pinned, some are latest. With the git repo I get versioning of the files and a way to get compose files onto remote servers. In the repo is a readme with all the steps needed to start each stack, and in what order.
I use Portainer to keep an eye on things and check logs, but starting is always from the CLI.
I’d suggest putting the compose stacks in git and then cloning them either manually or with some tool.
For fully automated gitops with docker compose, check this blogpost
I use Dockge. 1-2 years ago I started to self-host everything with Docker. I now have 30+ containers, and Dockge is absolutely fantastic. I host all my stuff on a root server from Hetzner, and if they release a cheaper server, I switch. Since all my stuff is hosted in Docker, I can simply copy it to the new server, start the containers, and it runs.
I didn’t see Ansible mentioned as a solution here, which is what I use. I run docker compose only. Each environment is backed up nightly and monitored. If a docker compose pull/up and subsequent image clean breaks a service, I restore from a backup that works and see what went wrong.
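For readers curious what that looks like, the pull/up pass can be expressed as a small playbook. This is only a sketch: the host group, paths, and stack list are made up, and it assumes the community.docker collection is installed:

```yaml
# Sketch only: host group, paths, and stack names are my own assumptions.
- hosts: docker_hosts
  tasks:
    - name: Pull images and bring each compose stack up
      community.docker.docker_compose_v2:
        project_src: "/opt/stacks/{{ item }}"
        pull: always
        state: present
      loop:
        - jellyfin
        - navidrome
        - immich
```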
I have finally had to switch to using Docker for several things I used to just install manually (ttrss being the main one). It sure feels dirty when I used to be able to just apt update and know everything was updated.
I can see the draw of Docker, but I feel it’s way overused right now.
Just replace apt update with docker pull 🤷♂️
docker pull lacks a lot of the automation apt has if you’re using multiple images and needing to restart them.
I’m an old man set in my ways… I see the benefit of Docker and having set images, but I also see so much waste in having multiple installs of the same thing for different Docker images.
Idk most of the time I just dcpull dcup (aliases ftw)
Ofc had some stuff break occasionally when there’s a breaking change, but the same could happen through apt, no?
I prefer it to dependency hell personally lol
Don’t auto update. Read the release notes before you update things. Sometimes you have to do some things manually to keep from breaking things.
Autoupdate is fine for personal stuff. Just set a specific date so that you know if something breaks. Rollbacks are easy and very rarely needed.
Politically correct of course.
But from my own experience using Watchtower for over 7 years, I can count on one hand the times it actually broke something. Most of the time it was database related.
But you can put apps on the Watchtower ignore list (looking at you, Immich!), which clears that up fairly quickly.
And if you keep all your Docker data on ZFS as datasets + sanoid, you can just roll back to the last snapshot if that ever does happen.
Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasions that they break. If you have a service that likes to ship breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.
For people living with others it might not be a choice, though. The lights not working for a day the way they normally do is all it takes for someone to lose all faith in automation. It’s easier when you plan a specific time and day to update things; as long as you are not exposed to the internet, slightly out-of-date apps are not a big worry.
I use quadlets instead - it’s part of podman and lets you manage containers as systemd services. Supports automatic image updates and gives you all the benefits of systemd service management. There’s a tool out there that will convert a docker compose config into quadlet unit files giving you a quick start, but I just write them by hand.
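For reference, a minimal quadlet unit might look like the sketch below. The image, port, and paths are my guesses for a Navidrome-style setup; adjust to taste:

```ini
# ~/.config/containers/systemd/navidrome.container  (hypothetical example)
[Unit]
Description=Navidrome music server

[Container]
Image=docker.io/deluan/navidrome:latest
PublishPort=4533:4533
Volume=%h/navidrome/data:/data
# AutoUpdate=registry lets "podman auto-update" pull newer images
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, this shows up as a normal navidrome.service, so systemctl restart and journalctl work on it like any other unit.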
I use Portainer (portainer.io) - it’s a pretty nice WebUI which lets me manage the whole set of services and easily add new ones, as you can edit the compose YAML right in the browser.
There’s no substitute for knowing all the docker commands to help you get around but if you are looking for something to help with management then this might be the way to go.
Watchtower, also recommended here, is probably a good shout; just be warned about auto-updating past your config - it’s super easy for the next image you pull to just break your setup because of a new ENV or other dependency you’re not aware of.