My rack is finished for now (because I’m out of money).
Last time I posted I had some jank cables going through the rack and now we’re using patch panels with color coordinated cables!
But as is tradition, I'm thinking about upgrades, and I'm eyeing that 1U filler panel. A mini PC with a 5060 Ti 16GB or maybe a 5070 12GB would be pretty sick for moving my AI slop generation into my tiny rack.
I'm also thinking about the Pi cluster at the top. Currently that's running a Kubernetes cluster that I'm trying to learn on. They're all Pi 4 4GB, so I was going to start replacing them with Pi 5 8/16GB. Would those be better price/performance for mostly coding tasks? Or maybe a discord bot for shitposting.
Thoughts? MiniPC recs? Wanna bully me for using AI? Please do!

Looking good! Funny I happened across this post while I'm working on mine as well. As I type this I'm playing with a little 1.5" transparent OLED that will poke out of the rack beside each Pi, scrolling various info (CPU load/temp, IP, LAN traffic, node role, etc.)
What OLED specifically and what will you be using to drive it?
Waveshare 1.51" transparent OLED. Comes with driver board, ribbon & jumpers. If you type it into Amazon it's the only one that pops up, just make sure it says transparent. It plugs into the GPIO of my Pi 5s. The Amazon listing has a user guide you can download, so make sure to grab that. I was having trouble figuring it out until I saw that thing. It runs off a Python script, but once I get it behaving like I want, I'll add it to systemd so it starts on boot.
Imma dummy so I used ChatGPT for most of it, full …ahem… transparency. 🤷🏻♂️
I’m modeling a little bracket in spaceclaim today & will probably print it in transparent PETG. I’ll post a pic when I’m done!
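For anyone wanting to try the same thing, here's a minimal sketch of the stats-gathering side only (pure Python, no display code). The thermal zone path is the standard Pi location but is an assumption here, and the actual Waveshare draw calls come from their demo code, so `oled.show()` below is a placeholder name:

```python
import os
import socket


def cpu_temp_c():
    # The kernel reports millidegrees Celsius at this path on a Pi.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000


def local_ip():
    # UDP "connect" trick: no packet is actually sent; it just selects
    # the outbound interface so we can read our own address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    finally:
        s.close()


def stats_line(temp_c, load1, ip):
    # One scrolling line for the OLED.
    return f"{ip}  {temp_c:.1f}C  load {load1:.2f}"


# Main loop, once the driver object exists (names are placeholders):
#   while True:
#       oled.show(stats_line(cpu_temp_c(), os.getloadavg()[0], local_ip()))
#       time.sleep(2)
```

Once it behaves, a simple systemd service pointing at the script plus `systemctl enable` gets the start-on-boot behavior mentioned above.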
This is so pretty 😍🤩💦!!
I've been considering a micro rack to support the journey, but primarily to house old laptop chassis as I convert them into Proxmox resources.
Any thoughts or comments on your choice of this rack?
Honestly, not a lot of thought went into the rack choice. I wanted something smaller and more powerful than the several OptiPlexes I had.
I also decided I didn't want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had 4 TrueNAS servers on my network and I hated it.
This was just what I wanted at a price I was good with, like $120. There's a 3D printable version but I wasn't interested in that. I do want to 3D print racks, though, and make my own custom ones for the Pis to save space.
But this set up is way cheaper if you have a printer and some patience.
I have a question about AI usage on this: how do you do it? Every time I see AI usage mentioned, some sort of 4090 or 5090 comes up, so I'm curious what kind of AI usage you can do here.
I'm running AI on an old 1080 Ti. You can run AI on almost anything, but the less memory you have, the smaller (i.e. dumber) your models will have to be.
As for the “how”, I use Ollama and Open WebUI. It’s pretty easy to set up.
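If it helps anyone, the whole stack can be stood up with one compose file. This is a hedged sketch, not the official recipe: the images are the real `ollama/ollama` and `ghcr.io/open-webui/open-webui` ones, but the service names and host ports here are my own assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama      # persist pulled models
    ports:
      - "11434:11434"             # Ollama's default API port
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"               # UI at http://localhost:3000
    depends_on:
      - ollama
volumes:
  ollama:
```

After `docker compose up -d`, pulling a model is just `docker exec -it <ollama-container> ollama pull llama3` (model name is an example) and the rest happens in the web UI.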
Similar setup here with a 7900xtx, works great and the 20-30b models are honestly pretty good these days. Magistral, Qwen 3 Coder, GPT-OSS are most of what I use
With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It's much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window provided by more VRAM or a web-based AI is cool and useful, but I haven't found the need for that yet in my use case.
As you may have guessed, I can't fit a 3060 in this rack. That's in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU and it's just not usable. Even with 109gb of RAM, not usable. Even clustered, I wouldn't try running anything on these machines. They are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run an AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.
But to my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either get it to take my notes on a topic and make an outline that makes sense and I fill it in, or I feed it finished writing and ask for grammatical or tone fixes. That's fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB of VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven't noticed a difference in quality between my local LLM and the web-based stuff.
So it's not on this rack. OK, because for a second I was thinking you were somehow able to run AI tasks with some sort of small cluster.
Nowadays I have a 9070 XT in my system. I've just dabbled with this, but so far I haven't been that successful. Maybe I'll read more into it to understand it better.
deleted by creator
https://en.wikipedia.org/wiki/ThinkCentre because I didn't know it existed.
These are M715q ThinkCentres with a Ryzen 5 PRO 2400GE.
Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.
I’m a huge fan of this all in one idea that is upgradable.
NSFW
I didn't even know these sorts of mini racks existed. Now I'm going to have to get one for all my half-sized preamps, if they'll fit. That would solve like half the problems with my studio room and may help bring back some of my spark for making music.
I have no recs. Just want to say I’m so excited to see this. I can probably build an audio patch panel.
Honestly, if you are delving into Kubernetes, just add some more of those 1L PCs in there. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube worker nodes, and a few VMs running on a Proxmox cluster across 3 of the 1Ls with 32GB and a 512GB SSD each, and it's been great. The other one became my wife's new desktop.
Big plus, there are so many more x86_64 containers out there compared to Pi compatible ARM ones.
deleted by creator
You could also pick up a powerful CPU with lots of memory bandwidth, like a Threadripper.
Since you seem to be looking for problems to solve with new hardware, do you have a NAS already? Could be tight in 1U but maybe you can figure something out.
I do already have a NAS. It’s in another box in my office.
I was considering replacing the Pis with a JBOD and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.
How do I do that? Good question. I take suggestions.
The AI hate is overwhelming at times. This is great. What kind of things are you doing with it?
Not much. As much as I like LLMs, I don’t trust them for more than rubber duck duty.
Eventually I want to have a Copilot-at-Home setup where I can feed it a notes database and whatever manuals and books I've read, so it can draw from that when I ask it questions.
The problem is my best GPU is my gaming GPU, a 5060 Ti, and it's in a Bazzite gaming PC, so it's hard to get the AI out of it because of Bazzite's "No, I won't let you break your computer" philosophy, which is exactly why I chose it. And my second-best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I'd want it to be better than my current server.
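The Copilot-at-Home idea doesn't need much code to prototype, for what it's worth. Here's a hedged sketch that does plain prompt stuffing against a local Ollama server; `/api/generate` is Ollama's real REST endpoint, but the model name "llama3" and the host are assumptions you'd swap for your own:

```python
import json
import urllib.request


def build_prompt(notes, question):
    # Paste the relevant notes above the question, nothing fancier.
    context = "\n\n".join(notes)
    return (
        "Answer using only the notes below.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )


def ask(prompt, model="llama3", host="http://localhost:11434"):
    # Non-streaming call to Ollama's generate endpoint.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (needs the Ollama server running and the model pulled):
#   ask(build_prompt(["Pis live on VLAN 20."], "Which VLAN are the Pis on?"))
```

Feeding whole manuals would eventually blow past the context window, at which point you'd want real retrieval in front of `build_prompt`, but this gets the notes-database version working on a 3060-class card.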
I'm actually right there with you. I have a 3060 12GB, and tbh I think it's the absolute most cost-effective GPU option for home use right now. You can run 14B models at a very reasonable pace.
Doubling or tripling the cost and power draw just to get 16-24GB doesn't seem worth it to me. If you really want an AI-optimized box, I think something with the new Ryzen AI Max chips would be the way to go - like an ASUS ROG Flow Z13, a Framework Desktop, or the GMKtec option, whatever it's called. Apple's new Mac Minis are also great options. Both Ryzen AI Max and Apple use shared CPU/GPU memory, so you can go up to 96GB+ at much, much lower power draws.

A Mac is a very funny and objectively correct option.








