I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • 2xsaiko@discuss.tchncs.de · 1 year ago

    No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out, because that’s what everyone was pushing. Did the whole thing: used images from Docker Hub, used/modified dockerfiles, wrote my own, and used first Portainer and then docker-compose to tie everything together. That lasted until around 3 years ago, when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly, the only positive thing I can say about it is that you don’t have to manually allocate ports for services that can’t listen on unix sockets, which always feels a bit yucky.
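    (For context, that port thing looks like this in docker-compose; the service and image names here are made up. Each container gets its own network namespace, so every app can bind its natural port internally and you only pick the host-side ports:)

    ```yaml
    # docker-compose.yml sketch -- names are placeholders.
    services:
      app1:
        image: example/app1
        ports:
          - "8081:80"   # host port 8081 -> container port 80
      app2:
        image: example/app2
        ports:
          - "8082:80"   # both apps bind :80 inside their own namespaces
    ```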

    1. A lot of images come from some random guy you have to trust to keep their images updated with security patches. Guess what, a lot don’t.
    2. Want to change a dockerfile and rebuild it? If it’s old and uses something like “ubuntu:latest” as a base and downloads similar “latest” binaries from somewhere, good luck getting it to build or work, because “ubuntu:latest” certainly isn’t the same as it was 3 years ago. (Pinning versions helps; see the Dockerfile sketch after this list.)
    3. Very Linux- and x86_64-centric. Linux is of course not really a problem (except on Mac/Windows developer machines, where docker runs a Linux VM in the background even if the actual software you’re working on is cross-platform. Lmao.), but I’ve had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won’t run a lot of their docker containers, because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
    4. If you’re using it for the isolation: most if not all of its security/isolation features are also available as systemd service options. Run systemd-analyze security UNIT to see where a unit stands (there’s a sketch of a hardened unit below).
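    To illustrate point 2, this is roughly what pinning looks like (the image tag and package version below are placeholders I haven’t tested, but the pkg=version syntax is standard apt):

    ```dockerfile
    # Pin the base to a release instead of "latest" so a rebuild in
    # 3 years starts from the same place:
    FROM ubuntu:22.04
    # Stricter still, pin by digest (this digest is a placeholder):
    # FROM ubuntu@sha256:0123456789abcdef...

    # Pin package versions where it matters; a bare "apt-get install"
    # pulls whatever is current on the day you build.
    RUN apt-get update \
     && apt-get install -y --no-install-recommends nginx=1.18.0-6ubuntu14 \
     && rm -rf /var/lib/apt/lists/*
    ```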
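    And for point 4, a hardened unit is just a handful of directives (the unit name and binary path are made up; the directives themselves are standard systemd sandboxing options):

    ```ini
    # /etc/systemd/system/myservice.service
    [Unit]
    Description=Example hardened service

    [Service]
    ExecStart=/usr/local/bin/myservice
    DynamicUser=yes            # run as a throwaway unprivileged user
    ProtectSystem=strict       # mount almost everything read-only
    ProtectHome=yes            # hide /home, /root, /run/user
    PrivateTmp=yes             # private /tmp, invisible to other units
    PrivateDevices=yes         # no access to physical devices
    NoNewPrivileges=yes        # no setuid escalation
    RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
    CapabilityBoundingSet=     # drop all capabilities
    SystemCallFilter=@system-service

    [Install]
    WantedBy=multi-user.target
    ```

    systemd-analyze security myservice will then score how locked down it actually is.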

    I could probably list more. Unless you really need to dynamically spin up services with something like Kubernetes, which is probably way beyond what you need if you’re hosting a few services, I don’t think you need it.

    If I can recommend something new to look at instead, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more (I’ve seen it described as “docker + ansible on steroids”). It uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files (you can still build Nix packages from the 2013 version of the repository, I think, though they won’t necessarily run on modern kernels because of kernel ABI changes since then). Be warned, though: you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. On the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service.
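    For a taste of what that one line looks like (the module options below exist in nixpkgs last I checked, but verify against the NixOS manual for your release):

    ```nix
    # /etc/nixos/configuration.nix (fragment)
    { config, pkgs, ... }:

    {
      # One line pulls in the package, a service user, and the systemd unit:
      services.jellyfin.enable = true;

      # Most modules expose extra options on top of the enable switch:
      services.openssh.enable = true;
      services.openssh.settings.PasswordAuthentication = false;
    }
    ```

    Then nixos-rebuild switch applies it, and deleting the line removes the service again.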