A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
- Naval Ravikant
- 1 Post
- 1 Comment
Joined 8 months ago
Cake day: January 30th, 2025
LLM “hallucinations” are only errors from the perspective of user expectations. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.
When they get things right, it isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
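For anyone curious what “predict likely word sequences” looks like in practice, here’s a rough sketch using the Hugging Face transformers library with GPT-2 as a stand-in model (my choice of prompt and model, not anything canonical). All the model produces at each step is a probability distribution over possible next tokens - there’s no internal check for whether any continuation is true.

```python
# Rough sketch: a causal LM outputs a probability distribution over the
# next token - likelihood, not factual grounding, is what it optimizes.
# Assumes the Hugging Face transformers library; GPT-2 is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Distribution over the *next* token, given the prompt so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The top candidates are simply the statistically likely continuations;
# a correct answer and a plausible-but-wrong one can both rank highly.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  p={prob:.3f}")
```

Whether the highest-probability continuation happens to be correct depends entirely on what the training data made likely - which is exactly why confident-but-wrong output isn’t a malfunction.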