• mabeledo@lemmy.world
    11 hours ago

    No local model will ever be as good as those offered by big corporations; it's just not physically possible. Even worse, you don't seem to understand that running a model isn't the issue, training it is.

    Regardless, even if none of this were true, running LLMs on-prem is achievable by only a very small number of people worldwide. It would take generations for poorer countries to catch up, so once again this AI race is effectively another attempt at exacerbating inequality. Frankly, it gives off strong "war for oil" vibes: people in richer countries happily ignoring what's going on elsewhere because they're getting nicer things.

    • SabinStargem@lemmy.today
      6 hours ago

      I have 128 GB of DDR4 RAM, a 4090, and a 3060. While certainly not weak, my computer is a few generations behind. People, real people, can run a model in their homes. Provided you limit the context and use a midrange quantization, you can run a Qwen3.6 35b on a midrange gaming PC.
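      The claim above can be sanity-checked with back-of-envelope arithmetic. A rough sketch (the parameter count and bit widths are illustrative assumptions, not vendor specs, and it ignores KV cache and runtime overhead):

      ```python
      def model_memory_gb(params_billions: float, bits_per_weight: float) -> float:
          """Approximate memory needed for the model weights alone, in GB."""
          # Each weight takes bits_per_weight / 8 bytes; divide by 1e9 for GB.
          return params_billions * 1e9 * bits_per_weight / 8 / 1e9

      # A hypothetical 35B-parameter model at different quantization levels:
      print(f"fp16: {model_memory_gb(35, 16):.1f} GB")  # ~70 GB, needs server hardware
      print(f"q8:   {model_memory_gb(35, 8):.1f} GB")   # ~35 GB, fits in 128 GB system RAM
      print(f"q4:   {model_memory_gb(35, 4):.1f} GB")   # ~17.5 GB, within reach of a 4090
      ```

      So 4-bit quantization is roughly what brings a 35B model from "datacenter only" down to "high-end gaming PC", at some cost in output quality.
      
      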

      Given time, we will someday run DOOM Eternal in our pockets, and be able to talk with the demons.

      • mabeledo@lemmy.world
        5 hours ago

        This is exactly the kind of cluelessness I was talking about. Again, training is far more expensive than running a model, and a rig that costs several thousand dollars is very obviously something not many people have access to.