• brucethemoose@lemmy.world
    3 months ago

    The pathological need to find something to use LLMs for is so bizarre.

    It’s like the opposite of classic ML: relatively tiny, special-purpose models trained for something critical, out of desperation, because the task just can’t be done well conventionally.

    But this:

    AI-enhanced tab groups. Powered by a local AI model, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.

    Take out the word AI.

    Enhanced tab groups. Powered by a local algorithm, these groups identify related tabs and suggest names for them. There is even a “Suggest more tabs for group” button that users can click to get recommendations.
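
    For a sense of how little “algorithm” that actually needs, here is a rough sketch (made-up tab data, obviously not Firefox’s real implementation): group tabs by hostname and name each group after the site.

    ```python
    from collections import defaultdict
    from urllib.parse import urlsplit

    def group_tabs(tabs):
        """Group open tabs by site and suggest a name for each group.

        `tabs` is a list of (title, url) pairs -- a stand-in for whatever
        the browser actually exposes. No model involved, just the hostname.
        """
        groups = defaultdict(list)
        for title, url in tabs:
            host = urlsplit(url).hostname or "other"
            # Strip "www." and the TLD to get a readable group name.
            name = host.removeprefix("www.").split(".")[0].capitalize()
            groups[name].append(title)
        return dict(groups)

    tabs = [
        ("Pull requests", "https://github.com/mozilla/gecko-dev/pulls"),
        ("Issues", "https://github.com/mozilla/gecko-dev/issues"),
        ("std - Rust", "https://doc.rust-lang.org/std/"),
    ]
    print(group_tabs(tabs))
    # {'Github': ['Pull requests', 'Issues'], 'Doc': ['std - Rust']}
    ```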

    If this feature took, say, a gigabyte of RAM and a bunch of CPU, it would be laughed out. But somehow it ships because it has the word AI in it? That makes no sense.

    I am a massive local LLM advocate. I like “generative” ML, within reason and ethics. But this is just stupid.

    • DaddleDew@lemmy.world
      3 months ago

      When I’m browsing around with multiple tabs open, the last thing I want is something to start moving them around and messing my flow up. This is a solution looking for a problem.

      • Otter@lemmy.ca
        3 months ago

        Yup

        Auto-naming functionality is neat in some cases, like in the AI chat UI itself:

        • It’s convenient to have names when toggling between a few recent chats or searching through 10s or 100s of chats later on
        • I spawn new chats often and it’s tedious to name them all
        • I don’t have a strong preference for what the title is as long as it’s clear what the chat was about

        Tab groups don’t hit those points at all

        • I’ll have a handful of tab groups
        • I don’t make them often
        • I have a strong preference for what it’s called, and the AI will have trouble figuring out exactly what I’m using those sites for
    • Saleh@feddit.org
      3 months ago

      I agree with you on almost everything.

      It’s like the opposite of classic ML: relatively tiny, special-purpose models trained for something critical, out of desperation, because the task just can’t be done well conventionally.

      Here I disagree. ML is high-dimensional statistics, and many problems are by their nature high-dimensional statistical problems.

      If you have, for example, an engineering problem, it can make sense to use an ML approach to find patterns in the relationship between input conditions and output results. Based on these patterns you get an idea of where you need to focus in the physical theory to understand and optimize the system.
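
      As a minimal sketch of what I mean, with made-up numbers standing in for real measurements: fit a simple model on input conditions versus the measured result, then look at which inputs it leans on and go back to the physical theory for those.

      ```python
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Made-up process data: columns are input conditions, y is the measured result.
      feature_names = ["temperature", "pressure", "flow_rate", "vibration"]
      X = rng.normal(size=(500, 4))
      y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # The importances hint at where the input/output relationship lives,
      # i.e. which variables the physical theory should focus on.
      for name, importance in zip(feature_names, model.feature_importances_):
          print(f"{name:12s} {importance:.2f}")
      ```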

      Another example of “generative AI” I have seen is creating models of hearts. By feeding it MRI scans of hundreds of real hearts, millions of plausible heart shapes can be generated, and the interaction with medical equipment can be studied on them. This isn’t a “desperate” approach. It is a smart one.
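
      I don’t know the details of that particular project, but the simplest version of the idea is a classic statistical shape model, sketched here with placeholder data: learn the main modes of variation from the real scans, then sample new plausible shapes from them.

      ```python
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)

      # Stand-in data: each row is one heart surface flattened into a vector
      # of 3D landmark coordinates (here 200 points -> 600 numbers per scan).
      scans = rng.normal(size=(300, 600))

      # Learn the main modes of shape variation across the real scans.
      pca = PCA(n_components=20).fit(scans)

      # Generate new, plausible shapes by drawing mode weights from the spread
      # observed in the data and mapping them back to coordinates.
      weights = rng.normal(scale=np.sqrt(pca.explained_variance_), size=(5, 20))
      synthetic_hearts = pca.inverse_transform(weights)
      print(synthetic_hearts.shape)  # (5, 600): five generated heart shapes
      ```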

      • tabular@lemmy.world
        3 months ago

        Based on these patterns you get an idea of where you need to focus in the physical theory to understand and optimize the system.

        How do you tell what the patterns are, or how to interpret them?

        • Saleh@feddit.org
          3 months ago

          Recognizing the patterns is what the machine learning does. That is the core concept of machine learning.

          For the interpretation you need your domain knowledge. Machine learning combined with knowledge of the domain being analyzed can be a very powerful combination.

          Another research example I have heard about recently is detecting brain tumors before they occur. MRIs of people who later developed brain tumors are analyzed to see whether patterns can be found in them that are absent in the people who didn’t develop tumors. Knowledge of a correlation between certain patterns and later tumor development could help specialists further their understanding of how tumors develop, since they can analyze those specific patterns.
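
          As a rough sketch of what such a study looks like, with placeholder numbers rather than real imaging features: train a classifier on features extracted from the earlier MRIs, check whether it predicts anything at all, and then look at which features carry the signal.

          ```python
          import numpy as np
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import cross_val_score

          rng = np.random.default_rng(2)

          # Placeholder features extracted from the earlier MRIs (e.g. regional
          # volumes or texture measures); y marks who later developed a tumor.
          feature_names = ["region_a_volume", "region_b_volume", "texture_score"]
          X = rng.normal(size=(400, 3))
          y = (X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)

          # Cross-validated accuracy says whether any predictive pattern exists at all.
          print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())

          # The coefficients say which features carry the pattern -- the part
          # the specialists would then go and try to understand.
          clf = LogisticRegression().fit(X, y)
          for name, coef in zip(feature_names, clf.coef_[0]):
              print(f"{name:16s} {coef:+.2f}")
          ```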

          What we see with ChatGPT and other LLMs is kind of the opposite: the algorithm is detached from any specific domain knowledge. As a result it can make predictions about anything, and they are worth nothing.

    • Godort@lemmy.ca
      3 months ago

      The pathological need to find something to use LLMs for is so bizarre.

      Venture capital dumped so much money into the tech without understanding the full scope of what it was capable of. Now they’re in so deep that they desperately NEED to find something profitable it can do, otherwise they’ll lose the farm.