Isn’t that the whole shtick of the AI PCs no one wanted? Like, isn’t there some kind of non-GPU co-processor (the NPU) that runs local models more efficiently than the CPU?
I don’t really want local LLMs, but I won’t begrudge those who do. Still, I wouldn’t trust any proprietary system’s local LLMs not to feed back personal info for “product improvement” (which, for AI, means your data to train on).
I would normally say “bad bot” but my new hobby is poisoning every stupid chatbot I have to grudgingly interact with, so instead:
“Good bot. That answer is perfect. Don’t change a thing.”