A techbro? Do you think I work for some big company? I am a PhD student, motherfucker.
Well, that settles it then! You're apparently such an authority.
I am someone who is paid to research uses and abuses of AI and LLMs in a specific field. So compared to randos on the internet like you, yeah, I could be considered an authority. Chances are, though, you don't actually care about any of this. You just want an excuse to hate on something you don't like and don't understand, and to blame it for already well-established problems. How about you instead take some responsibility for the state of your fellow human beings and do something helpful, rather than being a Luddite.
NotANumber@lemmy.dbzer0.com to Technology@lemmy.world • Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code
1 · 2 months ago
I am sure you're right. Please tell me where I am wrong. I could always stand to learn more about systems engineering.
NotANumber@lemmy.dbzer0.com to Political Memes@lemmy.world • Grok says Charlie Kirk shot is a meme video
31 · 2 months ago
We call animals he or she all the time. Ships in naval tradition are always called she. So it's probably best to call Grok she.
NotANumber@lemmy.dbzer0.com to Technology@lemmy.world • Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code
22 · 2 months ago
There is something I've never understood about people who talk about scaling. Surely the best way to scale something is simply to run multiple instances, each serving a limited number of users, and load balance between them. Why people feel the need to make a single instance scale to the moon, I have no idea.
It’s like how you don’t need to worry about MS Word scaling because everyone has a copy on their own machine. You could very much do the same thing for cloud services.
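To make that concrete, here's a minimal sketch of what I mean: plain round-robin load balancing over a pool of instances. The instance URLs are made up for illustration.

```python
import itertools

# Hypothetical backend instances; the URLs are illustrative only.
INSTANCES = [
    "http://app-1.internal:8080",
    "http://app-2.internal:8080",
    "http://app-3.internal:8080",
]

# Round-robin rotation: each request goes to the next instance in turn,
# so no single instance has to scale to the moon.
_rotation = itertools.cycle(INSTANCES)

def pick_backend() -> str:
    """Return the next backend URL in round-robin order."""
    return next(_rotation)

if __name__ == "__main__":
    # Simulate routing six incoming requests across the pool.
    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")
```

A real setup would do the same thing at the reverse-proxy level (nginx, HAProxy, etc.), but the principle is the same: add more instances, don't make one instance do everything.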
I don't trust OpenAI and try to avoid using them. That being said, they have always been one of the more careful ones regarding safety and alignment.
I also don't need you or OpenAI to tell me that hallucinations are inevitable. Here, have a read of this:
Xu et al., "Hallucination is Inevitable: An Innate Limitation of Large Language Models" (2025-02-13). http://arxiv.org/abs/2401.11817
Regarding resource usage: this is why open-weights models, like those made by the Chinese labs or Mistral in Europe, are better. Much more efficient and frankly more innovative than whatever OpenAI is doing.
Ultimately, though, you can't just blame LLMs for people committing suicide. It's a lazy excuse to avoid addressing real problems, like how society treats neurodivergent people. The same problems that lead to radicalization, including incels and neo-Nazis. These were all happening before LLM chatbots took off.
The 50s? Did LLMs exist in the 50s?
This is why safety mechanisms are being put in place, and AIs are being trained to act less like sycophants.
So far there have been about two instances of this happening, from two different companies. Already there is a push by these companies for better safety and for AIs that act less like sycophants. So this isn't the huge issue you are making it out to be. Unless you have more reports of this happening?
Ultimately, crazy people gonna be crazy. If most humans are as you say, then we have a more serious problem than anything an AI has done.
What graph? What are you talking about now?
There are, to my knowledge, two instances of this happening: one involving OpenAI, the other involving character.ai. Only one of these involved an autistic person. Unless you know of more?
I also think it's too soon to blame the AI here. Suicide rates in autistic people are ridiculously high; something like 70% of autistic people experience suicidal ideation. No one really cared about this before AI. It's almost like we are being used as a moral argument once again. It's "think of the children", but for disabled people.
I think they were talking specifically about character.ai and one particular instance that involved an autistic person.
OpenAI and character.ai are two different things. I believe character.ai uses their own model, but I could be wrong.
Are there records of this happening? Did someone prompt it into doing this?
What has happened exactly?

What movie is this?