So much for the DARE program. (TikTok screencap)

  • stoy@lemmy.zip · 12 hours ago

    I have figured out where my intense dislike of using AI comes from.

    My need for control/understanding stuff I create.

    I find the idea of giving up knowledge and skills in favor of a statistics-based chatbot to be repulsive and extremely dangerous.

    I am no systems developer; I am an IT guy. I write some PowerShell from time to time to automate stuff, and I understand the code. If I didn’t, I would not run it on prod.

    A huge amount of why normal office workers like using AI, in my opinion, is that they never learned how to make even the simple scripts which would have helped them with a lot of their tasks.
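To make that concrete, here is a minimal sketch of the kind of “simple script” being talked about. The task (adding a date prefix to a folder of report files) and the function name are hypothetical, not from the comment:

```python
# Hypothetical "simple office script": add a prefix to every .txt
# report in a folder. Purely illustrative of the scale of scripting
# being discussed; the task and names are invented.
from pathlib import Path


def prefix_reports(folder: str, prefix: str) -> list[str]:
    """Rename every .txt file in `folder` to `<prefix>_<old name>`
    and return the new names in sorted order."""
    renamed = []
    for path in sorted(Path(folder).glob("*.txt")):
        target = path.with_name(f"{prefix}_{path.name}")
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

A dozen lines, and every one of them can be read and verified before it touches anything important.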

    And now they use AI to make a computer kinda guess what they wanted, but with no way to verify the code before running it.

    I honestly can’t understand why companies/governments/institutions believe that a chatbot is better than a skilled and knowledgeable developer.

    • Buddahriffic@lemmy.world · 10 minutes ago

      They don’t really understand how these tools work, and they get misled by how AI can arrive at a correct solution (or a correct-looking one).

      Like, I was in a meeting where people were presenting their Claude skills (which are just text files describing processes that it can add to the context), and one manager mentioned doing regression testing on added skills to make sure they don’t break the functionality of existing ones. From my POV, he was on the right track but also missing the point entirely, because the skills won’t consistently pass regression tests even without new ones added: something being in the context window only has a chance of affecting the output. If the code being modified has comments that look like instructions, those might override the actual instructions.

      Or it might try solving non-existent problems for you. A skill I was “developing” for making a particular modification to tests basically just outright said “make a test that inherits from the target test and add these parameters”. Dead simple step. On the first test I tried it on, I saw it was missing one of the arguments. When I mentioned it, the AI said that because the name started with “<name of section>” and the test didn’t target that section, it had decided the argument wasn’t necessary. So I had to add instructions to not just add that argument, but to not decide to leave it out for arbitrary reasons.

      I can’t say for sure that any of the tasks I’ve done with AI actually saved time. But the mental load is lower, and they really want us using AI, so I’ll keep doing it. Still, the unreliability is going to cause more problems than it solves in the long run, IMO.

    • KSP Atlas@sopuli.xyz · 1 hour ago

      100%

      Programming languages are already languages: languages that do exactly what I request, languages which are free/cheap to use, languages which just make sense.

      Why would I defer that to another language which is imprecise, and which I don’t have control over?

    • Jankatarch@lemmy.world · 9 hours ago

      Its inconsistency adds to this, too.

      “Same exact input in same exact situation next time can have completely different output” is just unacceptable in automation.

      • criss_cross@lemmy.world · 20 minutes ago

        I feel like a gambler chasing a big win and, 50% of the time, digging myself out of a bigger hole.

    • KatherinaReichelt@feddit.org · 8 hours ago

      > A huge amount of why normal office workers like using AI, in my opinion, is that they never learned how to make even the simple scripts which would have helped them with a lot of their tasks.

      Most office workers do not have the access rights and programs to run scripts. I know how to write them, but there is no way that local IT would allow me to run Python on my work machine.

      > My need for control/understanding stuff I create

      And that is really important from a mental health perspective: people are held responsible for their work. If something they’ve built breaks, their boss and coworkers expect that they can fix it and that the error will not happen again. If you understand your system, you are able to do that. If you are responsible for some kind of AI-driven house of cards that you do not understand and can’t fix, that is really bad. It triggers a kind of imposter syndrome.

    • dbtng@eviltoast.org · 11 hours ago (edited)

      Um … NORPs don’t use AI to code. They use it to think for them. And mostly they don’t think about code.

      If you exported your support-ticket database and your customer feedback to CSV files, uploaded them to a chatbot, cross-referenced them, asked the right questions, and structured the output, it could tell you everything going right and wrong with your customer base, backed up with data. And it could do that in like 10 minutes, instead of weeks of research.
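For what it’s worth, the cross-referencing step itself doesn’t need a chatbot; a few lines of pandas can sketch it locally and verifiably. The CSV columns here (customer_id, status, score) are invented for illustration:

```python
# Hypothetical local version of "cross-reference tickets with feedback".
# The column names and data are made up; only the technique is the point.
from io import StringIO

import pandas as pd

tickets_csv = StringIO(
    "ticket_id,customer_id,status\n"
    "1,A,open\n"
    "2,A,closed\n"
    "3,B,open\n"
)
feedback_csv = StringIO(
    "customer_id,score\n"
    "A,2\n"
    "B,9\n"
)

tickets = pd.read_csv(tickets_csv)
feedback = pd.read_csv(feedback_csv)

# Join each ticket to its customer's satisfaction score, then count
# still-open tickets per score: unhappy customers with open tickets
# jump out immediately.
merged = tickets.merge(feedback, on="customer_id", how="left")
open_by_score = (
    merged[merged["status"] == "open"]
    .groupby("score")["ticket_id"]
    .count()
)
print(open_by_score.to_dict())
```

Unlike a chatbot summary, every number this produces can be traced back to a row in the export.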

      Problem is, you (or whoever reads tickets) are going to miss the details. You won’t read that one heartbreaking story. You won’t see the successes. You’ll have an insightful (if you are good with prompts) summary. That’s it.

      Secondary (or perhaps this is the big one) problem is that the more you let the machine think for you, the less thinking you do. It makes you dumb.

    • youcantreadthis@quokk.au · 11 hours ago

      Better information control, no human to leak it, no human in the loop, more human labor fungibility. It’s the best.