So I've rebuilt my production rack with very little in the way of an actual software plan.
I mostly host Dockerized services (Forgejo, Ghost Blog, OpenWebUI, Outline), and I was previously running each one in its own Ubuntu Server VM on Proxmox, which rather defeats the point of containers.
My plan was to run a VM on each of these ThinkCentres, join them into a Kubernetes cluster, and run everything on that. But that also feels silly since these PCs are already clustered through Proxmox 9.
I've also thought about using LXC, but part of the point of the Kubernetes cluster was to learn a new skill that might be useful in my career, and I don't know how any of this will work with Cloudflare Tunnels (cloudflared), which are my preferred way of exposing services to the internet.
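For context, a Cloudflare Tunnel today is basically a small config.yml mapping public hostnames to wherever each service is listening, roughly like this (the hostnames, IPs, and ports below are made-up placeholders); the open question is what those service targets become once everything lives inside a cluster:

```yaml
# cloudflared config.yml sketch; every hostname, IP, and ID here is a placeholder
tunnel: homelab
credentials-file: /etc/cloudflared/homelab.json
ingress:
  - hostname: blog.example.com
    service: http://10.0.0.21:2368     # Ghost in its current VM/container
  - hostname: git.example.com
    service: http://10.0.0.22:3000     # Forgejo
  - service: http_status:404           # required catch-all rule
```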
I'm willing to take a class or follow a whole bunch of "how-to" videos, but I'm a little frazzled by my options. Any suggestions are welcome.
Damn that’s a good looking mini rack! Great job!
I don’t have much experience or advice about Proxmox, just wanted to show appreciation ✌️
Just gotta say that looks like a really nice setup. Mine looks like a small rat's nest!
Just for fun, and for a layman’s benefit (me)…
Can you eli5?
Yeah!
So I'm running these three computers in a setup called Proxmox that lets me manage virtual machines on them from a web page.
I want to play with a tool that lets me run Docker containers. A container is a way to host services like websites and web apps without having to build a whole virtual machine for each app (there's a tiny made-up example at the end of this comment).
Containers have a lot of advantages, but the one I'm after is the high-availability feature you get when you run them on a cluster of computers: if one machine dies, the apps can move to another.
My problem is that the already-clustered Proxmox computers have built-in container software called LXC (Linux Containers). But I want to learn a container orchestrator called Kubernetes, and to use it I would have to build virtual machines on my servers and then cluster those virtual machines.
It's a little confusing because I have three physical computers clustered together, and I'd be building three virtual computers on top of them and clustering those too. It's an odd thing to do, and that's the problem.
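To make the "container" idea concrete, this is roughly what the little text file describing one looks like (a simplified, made-up example for the blog), versus installing and caring for a whole operating system in a VM just to run the same app:

```yaml
# docker-compose.yml; a simplified, made-up example for a Ghost blog
services:
  ghost:
    image: ghost:5                           # the pre-built app, pulled from Docker Hub
    ports:
      - "2368:2368"                          # the port the blog answers on
    volumes:
      - ghost-data:/var/lib/ghost/content    # so posts survive container restarts
volumes:
  ghost-data:
```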
This is pretty rad! Thanks for sharing. I went down the same road learning k3s on about 7 Raspberry Pis, then pivoted over to Proxmox/Ceph on a few old gaming PCs / Ethereum miners. Now I'm trying to optimize the space and figure out how to rack-mount my ATX machines with GPUs, lol. I was able to get an RTX 3070 to fit in a 2U rack-mount enclosure but I'm having some heat issues, so I'm going to look at 4U cases with better airflow for the RTX 3090 and various RX 480s.
I'm planning to set up Talos VMs (one per Proxmox host) and bootstrap k8s with Traefik and a few other add-ons. If you're learning, you might want to start with a batteries-included k8s distro like k3s.
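If you do go the k3s route, each node's setup is pretty small; a minimal sketch of the config for the first server VM (the token and names below are placeholders) would be something like:

```yaml
# /etc/rancher/k3s/config.yaml on the first server VM (all values are placeholders)
cluster-init: true          # start embedded etcd so the other server VMs can join
token: "change-me"          # shared join secret
tls-san:
  - "k3s.lab.internal"      # extra DNS name/VIP the API server cert should cover
# the other VMs would reuse the token and add: server: https://k3s.lab.internal:6443
```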
My apartment is too small and my partner is too noise-sensitive for me to get away with a rack, so my local LLM and Jellyfin encoder, plus my NAS, exist like this for the summer. Temps have been very good since the panels came off.
Running k8s in its own VMs will let you hedge against mistakes and keep some separation between the infra layer and kube.
I personally don't use Proxmox anymore; these days I deploy with Ansible and roles rather than k8s.
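The shape of it is just a playbook that layers roles onto each group of hosts, roughly like this (the role names are invented):

```yaml
# site.yml, a rough sketch of the roles approach (role names are made up)
- hosts: app_servers
  become: true
  roles:
    - common        # users, SSH hardening, updates
    - docker        # install the container engine
    - ghost_blog    # template a compose file and bring the stack up
```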
Ansible is next on my list of things to learn.
I don't think I'll need to dedicate all of my compute to k8s, probably just half of it for now.
Ansible is next on my list of things to learn.
Ansible is Y2K tech brought to you in 2010. Its workarounds for its many problems bring problems of their own. I'd recommend mgmtconfig, but it's a deep pool if you're just getting into it. Try Chef (cinc.sh) or SaltStack, but keep mgmtconfig on the radar for when you want to switch from 2010 tech to 2020 tech.
Wow, huge disagree on SaltStack and Chef being ahead of Ansible. I've used all 3 in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.
Ansible is just so much lower overhead and so much easier to understand and make changes to. It's dominating the configuration management space for a reason, and nearly all of the self-hosted/homelab space is active in Ansible, with tons of well-baked playbooks.
I’ve used all 3 in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.
Popular isn't always better. See: Betamax vs VHS, Blu-ray vs HD DVD, Skype vs MS Skype, everything vs Teams, everything vs Outlook, everything vs Azure. Ansible is accessible like DUPLO is accessible, man, and with the payola like Blu-ray got and the pressure that shot systemd into the frame, of course it would appeal to the C-suite.
Throw a few thousand at Ansible/AAP and the jagged edges pop out; we have a team of three dedicated just to Nagios and AAP. And it's never not glacially slow: orders of magnitude slower than absolutely everything else.
My issue with mgmt.config is that it bills itself as an API-driven "modern" orchestrator, but as soon as you don't have systemd on the clients, it becomes insanely complicated to blast out simple changes.
Mgmt.config also claims to be "easy", but you have to learn MCL's weird syntax, which is the same issue I have with Chef and its use of Ruby.
Yes, Ansible is relatively simple, but it runs on anything (including being properly supported on arm64), and I daresay that layering roles and modules makes Ansible quite powerful.
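For example, one task using a stock module does a lot of the lifting (the image tag and ports here are placeholders):

```yaml
# tasks/main.yml inside a role; a single module call instead of a pile of shell
- name: Run the Forgejo container
  community.docker.docker_container:
    name: forgejo
    image: codeberg.org/forgejo/forgejo:9    # placeholder image tag
    ports:
      - "3000:3000"
    restart_policy: unless-stopped
```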
It's kind of like Nagios… Nagios sucks, but it has such a massive library of monitoring tricks and tools that it will be around forever.
have to learn MCL’s weird syntax
You skewer two apps for syntax, but not Ansible's fucking YAML? Dood. I'm building out a layered declarative config at the day job, and it's just page after page of Python's indentation fixation and PowerShell's bipolar expressions. This is better for you?