• tabular@lemmy.world

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don’t feel embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover when you didn’t write it, and what’s inside is nonsense.

    • JustEnoughDucks@feddit.nl

      I would think they’ll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI, so maintainers can simply run through and triage them. If the contributor doesn’t come back within a week or so to explain the code and show test results proving it works, the PR is auto-closed (something like the sketch below).
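
      A minimal sketch of the auto-close half, using the real GitHub REST API; the “probably-ai” label, the repo name, and the detector that applies the label are all hypothetical:

      ```python
      # Hypothetical triage bot: closes flagged PRs/issues that got no
      # follow-up. The "probably-ai" label and repo names are made up;
      # the REST endpoints used (list issues, patch state) are real.
      import datetime
      import requests

      API = "https://api.github.com/repos/example-org/example-repo"  # placeholder
      HEADERS = {"Authorization": "Bearer ghp_...",  # token with repo scope
                 "Accept": "application/vnd.github+json"}

      def close_stale_flagged(days: int = 7) -> None:
          """Close open items labeled "probably-ai" with no activity for `days`."""
          cutoff = (datetime.datetime.now(datetime.timezone.utc)
                    - datetime.timedelta(days=days))
          issues = requests.get(f"{API}/issues", headers=HEADERS, timeout=30,
                                params={"labels": "probably-ai",
                                        "state": "open"}).json()
          for issue in issues:  # PRs show up here too; GitHub treats them as issues
              updated = datetime.datetime.fromisoformat(
                  issue["updated_at"].replace("Z", "+00:00"))
              if updated < cutoff:  # nobody responded since the flag
                  requests.patch(f"{API}/issues/{issue['number']}",
                                 headers=HEADERS, timeout=30,
                                 json={"state": "closed"})

      if __name__ == "__main__":
          close_stale_flagged()
      ```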

    • atomicbocks@sh.itjust.works

      From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open-source repos with little to no human input.

      • Notso@feddit.org

        You guys, it’s almost as if AI companies are intentionally trying to kill FOSS projects by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flood the zone with slop.

    • kadu@scribe.disroot.org

      “I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.”

      AI bros have zero self-awareness and no shame, which is why I keep insisting that the best tool for fighting it is making it socially shameful.

      Somebody comes along saying “Oh look at this image I just genera…” and you cut them off with “looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?”

          • k0e3@lemmy.ca

            Yeah, but then their Facebook accounts will keep producing slop even after they’re gone.

        • Tyrq@lemmy.dbzer0.com

          The data eventually poisons itself once the model can do nothing but refer to its own output, however many generations of hallucinated data deep. You can watch the collapse in a toy simulation like the one below.
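
          A minimal sketch of that feedback loop, with a bag of tokens standing in for training data; every number here is an arbitrary toy choice:

          ```python
          # Toy model collapse: each "generation" trains on a sample of the
          # previous generation's output. Once a rare token drops out of a
          # sample, no later generation can ever produce it again, so the
          # diversity of the data only ratchets downward.
          import numpy as np

          rng = np.random.default_rng(0)
          data = rng.integers(0, 1_000, size=500)  # gen 0: finite sample of real data

          for gen in range(1, 51):
              data = rng.choice(data, size=data.size)  # train on own output
              if gen % 10 == 0:
                  print(f"gen {gen:2d}: {np.unique(data).size} distinct tokens left")
          ```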

    • Feyd@programming.dev

      LLM code generation is the ultimate Dunning-Kruger enhancer. They think they’re 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world

          Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone in the training set and they’ll go after the wrong person.

          Either way, I have a feeling there’ll be some ENHANCE-failure episode due to AI.
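
          That tracks: upscaling is an ill-posed inverse problem, so any detail the model adds is invented. A minimal sketch of why, using 2x2 average pooling as a stand-in for the “grainy” downscale:

          ```python
          # Two different high-res images collapse to the same low-res pixels,
          # so no enhancer can recover "the" original; it can only guess.
          import numpy as np

          def downscale(img: np.ndarray) -> np.ndarray:
              """2x2 average pooling: the information-destroying step."""
              h, w = img.shape
              return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

          face_a = np.array([[0.0, 4.0], [4.0, 0.0]])  # two distinct "faces"
          face_b = np.array([[4.0, 0.0], [0.0, 4.0]])

          print(downscale(face_a))  # [[2.]]
          print(downscale(face_b))  # [[2.]]  identical low-res evidence
          ```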

        • SkaveRat@discuss.tchncs.de

          That’s the annoying part.

          LLM code can range from “doesn’t even compile” to “it actually works as requested”.

          The problem is, depending on what exactly was asked, the model will move mountains to get it running as requested, and it will absolutely trash anything in its way, from “let’s abstract this with 5 new layers” to “I’m going to refactor that whole class of objects to get this simple method in there”.

          The requested feature might actually work. 100%.

          It’s just very possible that it either broke other stuff, or made the codebase less maintainable.

          That’s why it’s important that people actually know the codebase and know what they/the model are doing. Just going “works for me, glhf” is not a good way to keep a maintainable codebase.
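
          A hypothetical illustration of the “5 new layers” failure mode, in Python; both versions pass the same check, and every name here is invented:

          ```python
          # What the change needed to be:
          def is_admin(user: dict) -> bool:
              return user.get("role") == "admin"

          # What the model sometimes delivers instead: the same one-line check
          # buried under indirection that every future reader must unwind.
          from abc import ABC, abstractmethod

          class RoleCheckStrategy(ABC):
              @abstractmethod
              def check(self, user: dict) -> bool: ...

          class AdminRoleCheckStrategy(RoleCheckStrategy):
              def check(self, user: dict) -> bool:
                  return user.get("role") == "admin"

          class RoleCheckStrategyFactory:
              def create(self, role: str) -> RoleCheckStrategy:
                  return AdminRoleCheckStrategy()  # only one strategy exists anyway

          class UserAuthorizationService:
              def __init__(self, factory: RoleCheckStrategyFactory) -> None:
                  self._factory = factory

              def is_admin(self, user: dict) -> bool:
                  return self._factory.create("admin").check(user)

          # The requested feature "works as requested, 100%":
          svc = UserAuthorizationService(RoleCheckStrategyFactory())
          assert svc.is_admin({"role": "admin"})
          ```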