

Your premise falsely assumes I’ll even let you in when you show up uninvited.
Freedom is the right to tell people what they do not want to hear.
I think comparing an LLM to a brain is a category mistake. LLMs aren’t designed to simulate how the brain works - they’re just statistical engines trained on language. Trying to mimic the human brain is a whole different tradition of AI research.
An LLM gives the kind of answers you’d expect from something that understands - but that doesn’t mean it actually does. The danger is sliding from “it acts like” to “it is.” I’m sure it has some kind of world model and is intelligent to an extent, but I think “understands” is too charitable when we’re talking about an LLM.
And about the idea that “if it’s just statistics, we should be able to see how it works” - I think that’s backwards. The reason it’s so hard to follow is that it’s nothing but raw statistics spread across billions of connections. If it were built on clean, human-readable rules, you could trace them step by step. But with this kind of system, it’s more like staring into noise that just happens to produce meaning when you ask the right question.
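To make the contrast concrete, here’s a toy sketch (entirely invented rules and weights, nothing from a real model): the rule-based function can be traced line by line, while the “neural” one is just a pile of numbers with no individual part you could point to.

```python
import numpy as np

def rule_based_sentiment(text: str) -> str:
    # Every decision is an explicit rule you can read and trace.
    if "great" in text:
        return "positive"
    if "awful" in text:
        return "negative"
    return "neutral"

# Stand-in for billions of learned connections: just numbers.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))

def neural_ish(x: np.ndarray) -> np.ndarray:
    # The whole "computation" is one multiply-and-squash; no single
    # weight means anything you could name or inspect on its own.
    return np.tanh(W @ x)
```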
I also can’t help laughing a bit at myself for once being the “anti-AI” guy here. Usually I’m the one sticking up for it.
You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, runs them through weights learned from its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying - it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word: it isn’t actually reading and analyzing the word, just predicting text. In that sense it’s simulating understanding, not possessing it.
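You can see the token problem directly. A minimal sketch, assuming the tiktoken package is installed (pip install tiktoken) - the exact split varies by encoding:

```python
import tiktoken

# GPT-4-era encoding; other models use different vocabularies.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("strawberry")
print(tokens)  # a short list of subword IDs, not letters

for t in tokens:
    print(t, enc.decode_single_token_bytes(t))

# The model only ever sees those opaque IDs, so a question like
# "how many r's are in strawberry?" asks about structure that the
# input representation has already thrown away.
```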
This is exactly the use case for an LLM.
I don’t think it is. An LLM is a language-generating tool, not a language-understanding one.
Well, there I don’t see any other options. No matter how much people tell them not to give an inch, that’s just not realistic. The options are pretty much to cut your losses or to take back the occupied territories by force - and the latter is just not going to happen. If it were possible, it would’ve already happened. The defensive positions preventing Ukraine from taking back the lost ground are significantly more substantial than the ones holding Russia back. With equal numbers of men, maybe - but Russia has a near-infinite supply of soldiers to toss into the meat grinder.
Surrender as in just give up and let Russia take control of all of Ukraine? Abso-fucking-lutely not.
Surrender the lost territories, however? It’s without a doubt going to happen.
My savings are invested in the stock market, and the returns I get from that are higher than the interest on my mortgage. If I liquidated my investments to pay off the house, the savings from not paying mortgage interest would still be less than what I’d make from the market over the same period. I’d rather use the profits from my investments to cover the mortgage interest - that way I still have money left over. If I did the opposite, I’d lose that extra money.
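A toy illustration with invented numbers (not my actual balance or rates, and ignoring risk, taxes, and amortization):

```python
balance = 200_000      # hypothetical remaining mortgage
mortgage_rate = 0.03   # hypothetical interest rate on the loan
market_return = 0.07   # hypothetical average annual market return

interest_cost = balance * mortgage_rate  # 6,000 paid in interest
market_gain = balance * market_return    # 14,000 earned by staying invested

# Keeping the debt leaves you ahead by the spread between the two.
print(market_gain - interest_cost)  # 8000.0
```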
Being in debt isn’t synonymous with being broke.
I could pay off my house tomorrow if I wanted, but financially it doesn’t make sense - so I keep the debt. That doesn’t mean my net worth is negative or that I don’t have disposable income.
You think you have - but there’s really no way of knowing.
Just because someone writes like a bot doesn’t mean they actually are one. Feeling like “you’ve caught one” doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as “proof” without actual verification.
And then there’s the classic toupee fallacy: “All toupees look fake - I’ve never seen one that didn’t.” That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.
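A toy simulation with invented numbers shows the selection effect - you only ever grade yourself on the ones you noticed:

```python
import random

random.seed(42)

# 1,000 hypothetical bots whose "obviousness" varies; a human
# spotter only flags the obvious ones.
bots = [random.random() for _ in range(1000)]  # obviousness in [0, 1)
flagged = [b for b in bots if b > 0.8]         # the ones you notice

print(f"bots you caught: {len(flagged)}")      # each one "looked like a bot"
print(f"bots that slipped past: {len(bots) - len(flagged)}")
# From the inside your hit rate feels like 100% - the misses are invisible.
```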
Assuming the rest of your diet isn’t stuffed with red meat, I wouldn’t worry too much about eating bacon. Replacing meat with vegan meat substitutes doesn’t automatically make the meal healthier - just free of meat.