• turboSnail@piefed.europe.pub
    1 day ago

    How about asking it to write a short political speech on climate change, then just counting the rhetorical devices and em-dashes? A human dev wouldn’t be bothered to write anything fancy or impactful when they just want to submit a bug fix. It would be simple, poorly written, and filled with typos. LLMs try to make it way too impressive and impactful.
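
    Something like this toy Python check would cover the em-dash half of that test. It’s only a sketch; the function name and the “per 100 words” framing are arbitrary choices of mine, not any standard metric.

    ```python
    def em_dash_density(text: str) -> float:
        """Count em-dashes per 100 words of text; crude, but it's the kind of tell described above."""
        words = len(text.split())
        dashes = text.count("\u2014")  # U+2014 is the em-dash character
        return 100 * dashes / max(words, 1)

    # A deliberately over-dramatic one-liner in the style an LLM might produce.
    sample = "Climate change is not a distant threat \u2014 it is here, it is now \u2014 and it demands action."
    print(f"{em_dash_density(sample):.1f} em-dashes per 100 words")
    ```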

    • sp3ctr4l@lemmy.dbzer0.com
      19 hours ago

      The funnier thing is when you try to get an LLM to do, like, a report on its creators.

      You can keep feeding it articles detailing the BS its company is up to, and it will usually just keep reverting to the company line, despite a preponderance of evidence that said company line is horseshit.

      Like, try to get an LLM to give you an exact number: how much will this conversation we are having increase RAM prices over the next three months?

      What do you think about the fact that ~95% of companies implementing ‘AI’ into their business processes report a zero to negative boost to productivity?

      What are the net economic damages of this malinvestment?

      Give it a bunch of economic data, reports, etc.

      Results are usually what I would describe as ‘comical’.

      • turboSnail@piefed.europe.pub
        11 hours ago

        “Don’t bite the hand that feeds you.” LLMs seem to have internalized this rule pretty well. I can imagine this idea being taken much further, too. Basically like trying to search for “Tiananmen Square massacre” on the wrong side of the Great Firewall of China.

        Well, what if LLMs were instructed to not talk about “sensitive topics” like that? After all, more and more people are already using an LLM as a search engine replacement, so it’s only natural that Microsoft and OpenAI might receive some interesting letters about implementing very specific limitations.

      • turboSnail@piefed.europe.pub
        12 hours ago

        No need to add any more than you usually do; just leave in the ones you don’t catch. Besides, LLMs tend to write in an overly grand style, whereas humans can’t be bothered to use every trick in the book. Humans just get to the point and skip all the high-impact language that LLMs seem to love.