Curious to know what the experience is like for those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

  • mesa@piefed.social · 5 days ago

    All my services run on bare metal because it’s easy. And the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container…contained and running. Plus a VERY tiny system can run:

    1. Peertube
    2. GoToSocial + client
    3. RSS
    4. search engine
    5. A number of custom sites
    6. backups
    7. Matrix server/client
    8. and a whole lot more

    Without a single Docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and has been working great. I used to over-complicate everything with Docker + Docker Compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.

    I use Docker, Kubernetes, etc…etc… all at work. And it’s great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.

      • mesa@piefed.social · 4 days ago

        A couple of custom bash scripts for the backups. I’ve used Ansible at work. It’s awesome, but my own stuff doesn’t require that kind of robustness.
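
        Roughly the shape of the script (the disk and paths here are placeholders, not my real layout):

        #!/bin/sh
        # sketch of a dd-style image backup; adjust SRC_DISK/DEST for your own hardware
        set -eu

        SRC_DISK=/dev/sda                              # disk the services live on
        DEST=/mnt/backup/server-$(date +%F).img.gz

        # image the whole disk, compressed on the fly; run it from a live USB or
        # with the machine otherwise offline so the filesystem is quiet
        dd if="$SRC_DISK" bs=4M status=progress | gzip > "$DEST"

        # keep only the three newest images
        ls -1t /mnt/backup/server-*.img.gz | tail -n +4 | xargs -r rm --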

    • Auli@lemmy.ca · 5 days ago

      Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so, as it’s caching stuff like it’s supposed to.

          • mesa@piefed.social · 5 days ago

            Welp, OP did ask how we set things up. And for a family instance it’s good enough. The RAM was extra that came with the computer. I have other things to do than optimize my family home server. There’s no latency at all as it is.

            It spikes when PeerTube videos are uploaded and transcoded, and sometimes with Matrix. Have a good night!

    • Miaou@jlai.lu · 4 days ago

      Assuming you run Synapse, which uses more than 1.5 GB of RAM just idling, your system has at the very least 16 GB of RAM… Hardly what I’d call “very tiny”.

      • mesa@piefed.social · 4 days ago

        …OK, so I’m lying about my system for…some reason?

        Synapse looks like it’s using 200 MB right now. It jumps to 1 GB when heavily used, but I only use it for PieFed and a couple of other local rooms. Honestly it’s not doing much for us, so we were thinking of getting rid of it. It’s irritating to keep having to set up new devices, and no one is really using it.

        PeerTube is much bigger, running around 500 MB just doing its thing.

        It’s a single-family instance.

        # ps -eo user,pid,ppid,cmd,pmem,rss --no-headers --sort=-rss | awk '{if ($2 ~ /^[0-9]+$/ && $6/1024 >= 1) {printf "PID: %s, PPID: %s, Memory consumed (RSS): %.2f MB, Command: ", $2, $3, $6/1024; for (i=4; i<=NF; i++) printf "%s ", $i; printf "\n"}}'  
        PID: 2231, PPID: 1, Memory consumed (RSS): 576.67 MB, Command: peertube 3.6 590508 
        PID: 2228, PPID: 1, Memory consumed (RSS): 378.87 MB, Command: /var/www/gotosocial/gotosoc 2.3 387964 
        PID: 2394, PPID: 1, Memory consumed (RSS): 189.16 MB, Command: /var/www/synapse/venv/bin/p 1.1 193704 
        PID: 678, PPID: 1, Memory consumed (RSS): 52.15 MB, Command: /var/www/synapse/livekit/li 0.3 53404 
        PID: 1917, PPID: 645, Memory consumed (RSS): 45.59 MB, Command: /var/www/fastapi/venv/bin/p 0.2 46680 
        
      • mesa@piefed.social · 4 days ago

        FreshRSS. Sips resources.

        The dd runs whenever I want; I have a script I tested a while back. The machine won’t be on while it runs, yeah. It’s just a small image with the software.

  • yessikg@fedia.io · 4 days ago

    It’s so simple that it takes so much less time. One day I may move to Podman, but I need to have the time to learn it. I host Jellyfin.

  • nucleative@lemmy.world · 5 days ago

    I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly… oftentimes in a single command line. Docker Compose allows for quickly nuking and rebuilding, often saving your entire config in one or two files.

    And if you need to slap a Traefik, a Postgres, or some other service into your group of containers, it can now be done in seconds, completely abstracted from any local dependencies. Even more useful: if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
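
    To make it concrete, the kind of compose file I mean looks something like this (service names, images, and paths are placeholders, not a specific setup):

    # compose.yaml
    services:
      app:
        image: ghcr.io/example/app:latest        # whatever you're hosting
        volumes:
          - ./data:/data                         # all state lives next to this file
        labels:
          - traefik.enable=true
          - traefik.http.routers.app.rule=Host(`app.example.com`)
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: change-me
        volumes:
          - ./pgdata:/var/lib/postgresql/data
      proxy:
        image: traefik:v3
        command:
          - --providers.docker
          - --entrypoints.web.address=:80
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro

    One “docker compose up -d” brings the whole thing up, and moving to another VPS is basically copying the directory and running the same command there.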

    • roofuskit@lemmy.world · 5 days ago

      Hey, you made my post for me, though I’ve been using Docker for a few years now. Never. Looking. Back.

  • 51dusty@lemmy.world · 5 days ago

    My two bare-metal servers are the file server and the music server. I have other services on a Pi cluster.

    The file server, because I can’t think of why I would need to use a container.

    The music software is proprietary and requires additional complications to get it working properly…or at all, in a container. It also does not like sharing resources and is CPU-heavy when playing to multiple sources.

    If either of these machines dies, a temporary replacement can be sourced very easily (e.g. from the back of my server closet) and recreated from backups while I purchase a new one or fix/rebuild the broken one.

    IMO the only reliable way to run containers is a cluster, because if you’re running several containers on one device and it fails, you’ve lost several services.

      • 51dusty@lemmy.world · 4 days ago

        I followed one of the many guides for installing Proxmox on RPis: a 3-node cluster of 4 GB RPi 4s.

        I use the cluster for lighter services like Trilium, FreshRSS, secondary DNS, a jumpbox… and something else I forget. I’m going to try Immich and see how it performs.

        My recent go-to for cheap ($200-300) servers is Debian + old Intel MacBook Pros. I have two Minecraft Bedrock servers on MBPs… one an i5, the other an i7.

        I also use a Lenovo laptop to host some industrial control software for work.

  • otacon239@lemmy.world · 5 days ago

    After many failures, I eventually landed on OMV + Docker. It has a plugin that puts Docker management into a web UI, and for the few simple services I need it’s very straightforward to maintain. I don’t cloud host because I want complete control of my data, and I keep an automatic incremental backup alongside a physically disconnected one that I manually update.

    • kiol@lemmy.world (OP) · 5 days ago

      Cool, how are you managing your disks? Are you overall happy with OMV?

      • otacon239@lemmy.world · 5 days ago

        Very happy with OMV. It’s not crazy customizable, so if you have something specialized you might run into quirks trying to stick to the web UI, but it’s just Debian under the hood, so it’s pretty manageable. 4x 1 TB drives in RAID 5 for media/critical data, an OS drive, and a service data drive (databases, etc.). Then an external 4 TB for the incremental backup and another external 4 TB for the disconnected backup.

          • otacon239@lemmy.world · 5 days ago

            Haven’t had to do a full OS upgrade yet, but standard packages can be updated and installed right in the web UI as well.

  • SailorFuzz@lemmy.world · 4 days ago

    Mainly that I don’t understand how to use containers… or VMs, that well… I have an old MyCloud NAS and a little pucky PC that I wanted to run simple QoL services on… Home Assistant, Jellyfin, etc…

    I got Proxmox installed on it, I can access it… I don’t know what the fuck I’m doing… There was a website that allowed you to just run shell scripts to install a lot of things… but now none of those work because it says my version of Proxmox is wrong (when it’s not?)… so those don’t work…

    And at least VMs are easy(ish) to understand. Fake computer with OS… easy. I’ve built PCs before, I get it… Containers just never want to work, or I don’t understand wtf to do to make them work.

    I wanted to run a Zulip or Rocket.Chat for internal messaging around the house (wife and I both work at home, kid does home/virtual school)… wanted to use a container because a service that simple doesn’t feel like it needs a whole VM… but it won’t work…

    • ChapulinColorado@lemmy.world · 4 days ago

      I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

      Pay attention when people say things can be improved (secrets/passwords, rootless/Podman, backups, etc.) and come back to those later.

      Just don’t expose things to the internet until you understand the risks, don’t check secrets into a public git repo, and go from there. It is a lot more manageable and feels like a hobby, versus feeling like I’m still at work trying to get high availability, concurrency, and all this other stuff that does not matter for a home setup.
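
      For the secrets part, the simplest starting habit is an env file that never gets committed (the names below are just the usual convention):

      # keep the password out of the compose file and out of git
      echo ".env" >> .gitignore
      printf 'POSTGRES_PASSWORD=change-me\n' > .env

      Compose reads .env automatically, so the service can reference ${POSTGRES_PASSWORD} in its environment section instead of a hard-coded value.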

      • Lka1988@lemmy.dbzer0.com · 4 days ago

        I would give Docker Compose a try instead. I found Proxmox to be too much when a simple YAML file (that can be checked into a repo) can do the job.

        Proxmox and Docker serve different purposes. They aren’t mutually exclusive. I have 4 separate VMs in my Proxmox cluster dedicated specifically to Docker; all running Dockge, too, so the stacks can all be managed from one interface.

        • ChapulinColorado@lemmy.world · 4 days ago

          I get that, but the services listed by the other comment run just fine in Docker, with less hassle, by throwing in some bind mounts.

          The 4 VMs with dedicated Dockge instances are exactly the kind of thing I had in mind for people who want to avoid something that sounds more like work than a hobby when starting out. Building the knowledge takes time, and each product introduced reduces the likelihood of it being completed anytime soon.

          • Lka1988@lemmy.dbzer0.com · 4 days ago

            Fair point. I’m 12 years into my own self-hosting journey, I guess it’s easy to forget that haha.

            When I started dicking around with Docker, I initially used Portainer for a while, but that just had way too much going on and the licensing was confusing. Dockge is way easier to deal with, and stupid simple to set up.
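
            From memory, the entire Dockge setup is roughly the one compose file from its README (double-check the current version there):

            # /opt/dockge/compose.yaml
            services:
              dockge:
                image: louislam/dockge:1
                restart: unless-stopped
                ports:
                  - 5001:5001
                volumes:
                  - /var/run/docker.sock:/var/run/docker.sock
                  - ./data:/app/data
                  - /opt/stacks:/opt/stacks        # where your stacks’ compose files live
                environment:
                  - DOCKGE_STACKS_DIR=/opt/stacks

            Then “docker compose up -d” and the UI is on port 5001.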

  • OnfireNFS@lemmy.world · 3 days ago

    This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

    It kinda stuck with me, and since then I’ve reimaged some of my bare-metal servers with exactly that. It just makes backups and restores/snapshots so much easier. It’s also really convenient to have a web interface to manage the computer.
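
    The snapshot/backup side is just a couple of commands from the Proxmox shell, or the same buttons in the web UI (the VM id and storage name here are examples):

    qm snapshot 100 pre-upgrade                     # instant snapshot before touching anything
    qm rollback 100 pre-upgrade                     # undo it if things go sideways
    vzdump 100 --mode snapshot --storage backups    # full backup to a configured storage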

    Probably doesn’t work for everyone but it works for me

  • StrawberryPigtails@lemmy.sdf.org · 5 days ago

    Depends on the application. My NAS is bare metal. That box does exactly one thing and one thing only, and it’s something that is trivial to set up and maintain.

    Nextcloud is running in Docker (the AIO image) on bare metal (Proxmox as the OS) to balance performance with ease of maintenance. Backups go to the NAS.

    Everything else is running in a VM, which makes backups and restores simpler for me.

  • sepi@piefed.social · 5 days ago

    “What is stopping you from” <- this is a loaded question.

    We were hosting stuff long before Docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc. because I am familiar with just running stuff on servers across multiple OSes. I am used to the workflow. I am also used to Docker and k8s, mind you - I’ve even worked at a company that made k8s controllers, operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.

    tl;dr: Docker is not an absolute necessity, and your phrasing makes it seem like it’s the only way of self-hosting you are comfy with. People are, and have been, comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP) · 5 days ago

      The question is phrased that way on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • Andres@social.ridetrans.it · 5 days ago

    @kiol I mean, I use both. If something has a Debian package and is well-maintained, I’ll happily use that. For example, Prosody is packaged nicely; there’s no need for a container there. I also don’t want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people’s mailboxes. Since I’m still on Debian 12 on my mail server, I remain unaffected and I can let the bugs be shaken out before I upgrade.

    • Andres@social.ridetrans.it · 5 days ago

      @kiol On the other hand, for doing builds (debian packages and random other stuff), I’ll use podman containers. I’ve got a self-built build environment that I trust (debootstrap’d), and it’s pretty simple to create a new build env container for some package, and wipe it when it gets too messy over time and create a new one. And for building larger packages I’ve got ccache, which doesn’t get wiped by each different build; I’ve got multiple chromium build containers w/ ccache, llvm build env, etc
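
      The pattern is roughly this (the suite, paths, and image name below are illustrative):

      # build a trusted rootfs once and import it as a container image
      sudo debootstrap --variant=buildd bookworm ./buildroot http://deb.debian.org/debian
      sudo tar -C ./buildroot -c . | podman import - localhost/deb-build:bookworm

      # throwaway build container; ccache lives on the host so it survives wipes
      podman run --rm -it \
        -v "$PWD/src:/src" \
        -v "$HOME/.ccache:/root/.ccache" \
        localhost/deb-build:bookworm /bin/bash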

      • Andres@social.ridetrans.it · 5 days ago

        @kiol And then there’s the stuff that’s not packaged in Debian, like navidrome. I use a container for that for simplicity, and because if it breaks it’s not a big deal - temporary downtime of email is bad, temporary downtime of my streaming flac server means I just re-listen to the stuff that my subsonic clients have cached locally.

        • Andres@social.ridetrans.it · 5 days ago

          @kiol Syncthing? Restic? All packaged nicely in Debian, no need for containers. I do use Ansible (rather than backups) for ensuring if a drive dies, I can reproduce the configuration. That’s still very much a work-in-progress though, as there’s stuff I set up before I started using Ansible…

  • Lka1988@lemmy.dbzer0.com · 5 days ago

    I run my NAS and Home Assistant on bare metal.

    • NAS: OMV on a Mac mini with a separate drive case
    • Home Assistant: HAOS on a Lenovo M710q, since 1) it has a USB zigbee adapter and 2) HAOS on bare metal is more flexible

    Both of those are much easier to manage on bare metal. Everything else runs virtualized on my Proxmox cluster, whether it’s Docker stacks on a dedicated VM, an application that I want to run separately in an LXC, or something heavier in its own VM.

    • a1studmuffin@aussie.zone · 4 days ago

      I’m curious why you feel these are easier to run on bare metal? I only ask as I’ve just built my first proxmox PC with the intent to run TrueNAS and Home Assistant OS as VMs, with 8x SAS enterprise drives on an HBA passed through to the TrueNAS VM.

      Is it mostly about separation of concerns, or is there some other dragon awaiting me (aside from the power bills after I switch over)?

      • Lka1988@lemmy.dbzer0.com · 4 days ago

        Anything I run on Proxmox, per my own requirements, needs to be hardware-agnostic. I have a 3-node cluster set up to be a “playground” of sorts, and I like being able to migrate VMs/LXCs between different nodes as I see fit (maintenance reasons or whatever).

        Some services I want to run on their own hardware, like Home Assistant, because it offers more granular control. The Lenovo M710q Tiny that my HA system runs on, even with its i7-7700T, pulls a whopping 10W on average. I’ll probably change it to the Pentium G4560T that’s currently sitting on my desk, and repurpose the i7-7700T for another machine that could use the horsepower.

        My NAS is where I’m more concerned about separation of duties. I want my NAS to only be a NAS. OMV is pretty simple to manage, has a great dashboard, spits out SMART data, and also runs a weekly rsync backup of my RAID to a separate 8 TB backup drive. I’m currently in the process of building a “new” NAS inside a gutted HP server case from 2003 to replace the Mac mini/USB 4-bay drive enclosure. The new NAS will have a proper HBA to handle the drives.
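
        The weekly job itself is basically one cron line, something like this (the mount points are placeholders for my RAID and the backup drive):

        # every Sunday at 03:00, mirror the RAID onto the 8 TB backup drive
        0 3 * * 0  rsync -aH --delete /srv/raid/ /srv/backup-8tb/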

        or is there some other dragon awaiting me (aside from the power bills after I switch over)?

        My entire homelab runs about 90-130W. It’s pulled a total of ~482kWh since February (when I started monitoring it). That’s 3x tiny/mini/micro PCs (HP 800 G3 i7, HP 800 G4 i7, Lenovo M710q i7), an SFF (Optiplex 7050 i7), 2014 Mac mini (i5)/loaded 4-bay HDD enclosure/8TB USB HDD, Raspberry Pi 0W, and an 8-port switch.

        • a1studmuffin@aussie.zone · 4 days ago

          Wow, thanks so much for the detailed rundown of your setup, I really appreciate it! That’s given me a lot to think about.

          One area that took me by surprise a little bit with the HBA/SAS drive approach I’ve taken (and it sounds like you’re considering) is the power draw. I just built my new server PC (i5-8500T, 64GB RAM, Adaptec HBA + 8x 6TB 12Gb SAS drives) and initial tests show that, on its own, it idles at ~150W.

          I’m fairly sure most of that is the HBA and drives, though I need to do a little more testing. That’s higher than I was expecting, especially since my entire previous setup (Synology 4-bay NAS + 4x SATA drives, external 8TB drive, Raspberry Pi, switch, Mikrotik router, UPS) idles at around 80W!

          I’m wondering if it may have been overkill going for the SAS drives, and a proxmox cluster of lower spec machines might have been more efficient.

          Food for thought anyway… I can tell this will be a setup I’m constantly tinkering with.

  • sylver_dragon@lemmy.world · 5 days ago

    I started self-hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare-metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues of dependency hell and flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids them completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted game servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports, and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
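
    The compose half of the template ends up looking something like this (the ports, paths, and Valheim specifics are illustrative; the per-game details live in the Dockerfile it builds):

    # docker-compose.yaml -- per-game tweaks are the ports and the save-data mount
    services:
      valheim:
        build: .                        # the game-specific Dockerfile
        ports:
          - "2456-2458:2456-2458/udp"   # this game’s default port range
        volumes:
          - ./saves:/server/saves       # world/save data stays on the host
        restart: unless-stopped

    Updating is then just “docker compose up -d --build --force-recreate” to rebuild the image and recreate the container.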

  • bizarroland@lemmy.world · 5 days ago

    I’m running a TrueNAS server on bare metal with a handful of hard drives. I have virtualized it in the past, but meh. I’m also using TrueNAS’s internal features to host a Jellyfin server and a couple of other easy-to-deploy containers.

      • bizarroland@lemmy.world · 5 days ago

        Yeah, the more recent versions basically have a form of Docker as part of their setup.

        I believe it’s now running on Debian instead of FreeBSD, which probably simplified the container setup.

  • erock@lemmy.ml · 2 days ago

    Here’s my homelab journey: https://bower.sh/homelab

    Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support splitting the GPU. At the end of the day, it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch, running everything with systemd and Quadlet.
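
    For anyone curious, the quadlet side is just a small systemd-style unit per container, something like this (the service, image, and port are an example, not my actual stack):

    # ~/.config/containers/systemd/freshrss.container
    [Unit]
    Description=FreshRSS in a Podman container

    [Container]
    Image=docker.io/freshrss/freshrss:latest
    PublishPort=8080:80
    Volume=freshrss-data:/var/www/FreshRSS/data

    [Install]
    WantedBy=default.target

    After a “systemctl --user daemon-reload”, quadlet generates freshrss.service and it starts and stops like any other unit.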