Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 1 Post
  • 11 Comments
Joined 2 months ago
Cake day: July 17th, 2025



  • I think comparing an LLM to a brain is a category mistake. LLMs aren’t designed to simulate how the brain works - they’re just statistical engines trained on language. Trying to mimic the human brain is a whole different tradition of AI research.

    An LLM gives the kind of answers you’d expect from something that understands - but that doesn’t mean it actually does. The danger is sliding from “it acts like” to “it is.” I’m sure it has some kind of world model and is intelligent to an extent, but I think “understands” is too charitable when we’re talking about an LLM.

    And about the idea that “if it’s just statistics, we should be able to see how it works” - I think that’s backwards. The reason it’s so hard to follow is precisely that it’s nothing but raw statistics spread across billions of connections. If it were built on clean, human-readable rules, you could trace them step by step (there’s a toy contrast sketched below, after this comment). But with this kind of system, it’s more like staring into noise that just happens to produce meaning when you ask the right question.

    I also can’t help laughing a bit at myself for once being the “anti-AI” guy here. Usually I’m the one sticking up for it.
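
    To make that contrast concrete, here’s a toy sketch - everything in it (the rule table, the function names, the random “weights”) is invented for illustration, not taken from any real model. A rule-based classifier can tell you exactly which rule fired; a tiny statistical net just hands you a number whose only “explanation” is a pile of learned floats.

    ```python
    import numpy as np

    RULES = {"great": +1, "awful": -1, "broken": -1, "love": +1}

    def rule_based_sentiment(text):
        """Every decision is traceable: you can point at the exact rule that fired."""
        score = 0
        for word in text.lower().split():
            if word in RULES:
                print(f"rule fired: {word!r} contributes {RULES[word]:+d}")
                score += RULES[word]
        return score

    def tiny_net_sentiment(x, w1, w2):
        """Same task done statistically: the only 'explanation' is the weight
        matrices, which are individually meaningless numbers (an LLM has
        billions of them)."""
        hidden = np.tanh(x @ w1)
        return float(hidden @ w2)

    print(rule_based_sentiment("I love it but the charger is broken"))

    rng = np.random.default_rng(0)
    w1, w2 = rng.normal(size=(8, 4)), rng.normal(size=4)  # stand-ins for trained weights
    x = rng.normal(size=8)                                # stand-in for an embedded sentence
    print(tiny_net_sentiment(x, w1, w2))                  # an answer, but no trace of "why"
    ```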


  • You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.

    But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.

    So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.




  • Well, there I don’t see any other options. No matter how much people tell them not to give an inch, that’s just not realistic. The options are pretty much to cut your losses or take back the occupied territories by force, and that’s just not going to happen - if it were possible, it would’ve already happened. The defensive positions preventing Ukraine from taking back the lost ground are significantly more substantial than the ones holding Russia back. If they had equal numbers of men, then maybe, but Russia has a near-infinite supply of soldiers to toss into the meat grinder.





  • You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like “you’ve caught one” doesn’t mean you did - it just means you think you did. You might have been wrong, and without confirmation you have no way to know, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as “proof” without actual verification.

    And then there’s the classic toupee fallacy: “All toupees look fake - I’ve never seen one that didn’t.” That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.
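
    To put toy numbers on that: suppose (purely hypothetically - none of these figures are real) a forum has 100 bot accounts, 80 of which write well enough to pass as human and 20 of which are obvious. The sketch below shows how the unverified, from-the-inside view looks like a perfect record while the number that actually matters stays invisible.

    ```python
    # Made-up figures purely to illustrate the toupee fallacy - nothing here is
    # measured from any real forum.
    obvious_bots = 20        # "bad toupees": accounts that look obviously fake
    subtle_bots = 80         # "good toupees": bots that pass as human, unnoticed
    flagged = obvious_bots   # you only ever flag the ones that look fake to you

    # From the inside it feels like a perfect record: every flag "felt right",
    # and you never get told otherwise.
    perceived_hit_rate = flagged / flagged
    print(f"perceived hit rate: {perceived_hit_rate:.0%}")  # 100%

    # The number you would actually need - how many bots you catch overall -
    # requires ground truth you don't have; with these invented figures it's 20%.
    actual_recall = flagged / (obvious_bots + subtle_bots)
    print(f"actual recall:      {actual_recall:.0%}")
    ```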