An AI safety researcher has quit US firm Anthropic with a cryptic warning that the “world is in peril”.

In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world.

He said he would instead look to pursue writing and studying poetry, and move back to the UK to “become invisible”.

It comes in the same week that an OpenAI researcher said she had resigned, sharing concerns about the ChatGPT maker’s decision to deploy adverts in its chatbot.

Anthropic, best known for its Claude chatbot, had released a series of commercials aimed at OpenAI, criticising the company’s move to include adverts for some users.

The company, which was formed in 2021 by a breakaway team of early OpenAI employees, has positioned itself as having a more safety-orientated approach to AI research compared with its rivals.

Sharma led a team there which researched AI safeguards.

He said in his resignation letter his contributions included investigating why generative AI systems suck up to users, combatting AI-assisted bioterrorism risks and researching “how AI assistants could make us less human”.

But he said despite enjoying his time at the company, it was clear “the time has come to move on”.

“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote.

He said he had “repeatedly seen how hard it is to truly let our values govern our actions” - including at Anthropic, which he said “constantly face[s] pressures to set aside what matters most”.

Sharma said he would instead look to pursue a poetry degree and writing.

He added in a reply: “I’ll be moving back to the UK and letting myself become invisible for a period of time.”

Those departing AI firms that have loomed large in the latest generative AI boom - and which have sought to retain talent with huge salaries or compensation offers - often do so with plenty of shares and benefits intact.

Eroding principles

Anthropic calls itself a “public benefit corporation dedicated to securing [AI’s] benefits and mitigating its risks”.

In particular, it has focused on preventing the risks it believes are posed by more advanced frontier systems, such as those systems becoming misaligned with human values, being misused in areas such as conflict, or becoming too powerful.

It has released reports on the safety of its own products, including when it said its technology had been “weaponised” by hackers to carry out sophisticated cyber attacks.

But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

Like OpenAI, the firm also seeks to seize on the technology’s benefits, including through its own AI products such as its ChatGPT rival Claude.

It recently released a commercial that criticised OpenAI’s move to start running ads in ChatGPT.

OpenAI boss Sam Altman had previously said he hated ads and would use them as a “last resort”.

Last week, he hit back at the advert’s description of the move as a “betrayal” - but was mocked for his lengthy post criticising Anthropic.

Writing in the New York Times on Wednesday, former OpenAI researcher Zoe Hitzig said she had “deep reservations about OpenAI’s strategy”.

“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife,” she wrote.

“Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

Hitzig said a potential “erosion of OpenAI’s own principles to maximise engagement” might already be underway at the firm.

She said she feared this might accelerate if the company’s approach to advertising did not reflect its stated aim of benefiting humanity.

BBC News has approached OpenAI for a response.

  • Techlos@lemmy.dbzer0.com · 11 hours ago

    I’m going to throw my own thoughts in on this. I got into machine learning around 2015, back when ReLU activations were still a bleeding-edge innovation, and got out around 2020 for honestly pretty similar reasons.

    Emotions can be, and have been, used as optimisation targets. Engagement is an ever-present target. And in the framework of capitalism, one optimisation target rules above all others: alignment with continued use. It’s part of what leads to the bootlicking LLM phenomenon. For the average human, it drives future engagement.

    The real danger isn’t the newer language models, or anything really to do with neural net architecture; rather, it’s the fact that we’ve found that a simple function-minimisation strategy can be used to approximate otherwise intractable functions. The deeper you research, the clearer it becomes that any arbitrary objective can be optimised, given a suitable function approximator and enough data to fit the approximator accurately.

    Human minds are also universal function approximators.
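    A minimal sketch of that point, in plain NumPy: a tiny one-hidden-layer network fitted to a made-up objective by gradient descent on mean squared error. The target function, network size and learning rate are arbitrary illustrations (not anything from Anthropic or OpenAI); the only point is that the same recipe works for whatever objective you plug in.

    ```python
    # Hypothetical toy setup: fit a small MLP to an arbitrary target function
    # by minimising squared error with plain gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)

    # Arbitrary target we only observe through samples.
    def target(x):
        return np.sin(3 * x) + 0.5 * np.abs(x)

    # Training data: the "enough data to fit the approximator" ingredient.
    x = rng.uniform(-2, 2, size=(512, 1))
    y = target(x)

    # Tiny MLP: x -> ReLU(x W1 + b1) -> W2 + b2
    hidden = 32
    W1 = rng.normal(0, 0.5, size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, size=(hidden, 1)); b2 = np.zeros(1)

    lr = 0.05
    for step in range(2000):
        # Forward pass
        h = np.maximum(0, x @ W1 + b1)      # ReLU activations
        pred = h @ W2 + b2
        err = pred - y
        loss = np.mean(err ** 2)            # the optimisation target

        # Backward pass (manual gradients of the mean squared error)
        g_pred = 2 * err / len(x)
        g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
        g_h = g_pred @ W2.T
        g_h[h <= 0] = 0                     # ReLU gradient
        g_W1 = x.T @ g_h; g_b1 = g_h.sum(0)

        # Gradient-descent update
        W1 -= lr * g_W1; b1 -= lr * g_b1
        W2 -= lr * g_W2; b2 -= lr * g_b2

        if step % 500 == 0:
            print(f"step {step:4d}  loss {loss:.4f}")
    ```

    Swap `target` for any other measurable quantity - watch time, sentiment, clicks - and nothing else in the loop has to change; that is the sense in which the objective is arbitrary.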

      • LeninsOvaries@lemmy.cafe · 6 minutes ago

        They want you talking to ChatGPT all day, and they don’t care how it gets done. They tell the AI to find the easiest way to do it.

        The easiest way is to psychologically abuse you.

        They programmed a computer to be your drunk husband.

      • OctopusNemeses@lemmy.world · 5 hours ago

        Pretty much what people already know by now. Algorithms find optimal ways to manipulate you.

        The two ingredients are data and a way to measure the thing you’re trying to optimize. Machine learning is used to find optimal ways to keep people engaged on internet platforms. In other words, they’re like designer drugs.

        Worse than designer drugs. They’re continuously self-optimizing: they keep measuring the results and making adjustments so the result stays optimal. As long as they have a continuous feed of recent data, the algorithm evolves to find the optimal solution.

        That’s why recklessly giving away your personal data is dangerous. Let’s say the system notices you’ve been spending one microsecond less time on screen. It will adjust to make sure it reclaims that microsecond of your day.

        It will show you things that tend to keep you engaged. How does it know what those are? Because you give it the data it needs to measure what keeps you online longer. That data comes from every interaction with your phone or computer, all of which is logged.

        It’s worse than substance abuse because you never develop a tolerance. If you do, the algorithm has already adapted to find the next thing that keeps you engaged in the most optimal way.

        And it’s not just engagement. It’s whatever target you want to optimize for, as long as you have the two ingredients: data and metrics.

        That’s why data is called the new oil. Or was it gold rush? I can’t remember. It’s been called this since the early 2000s maybe.

        LLM AI isn’t so scary when you know that they’ve been using AI against us for a very long time already. If more of the world understood all this better, we’d all have quit to study poetry already.
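        As a toy sketch of that feedback loop (everything here is hypothetical: the item names, the simulated user, the watch-time metric), here is an epsilon-greedy recommender that keeps re-estimating which item holds attention longest from each new interaction it logs.

        ```python
        # Toy sketch: the recommender keeps a running estimate of watch time per
        # item (the metric) and re-fits it from every new interaction (the data).
        import random

        items = ["video_a", "video_b", "video_c"]
        watch_time = {i: 0.0 for i in items}   # running estimate per item
        shown = {i: 0 for i in items}
        EPSILON = 0.1                          # small chance to explore

        def simulated_user(item):
            """Stand-in for logged behaviour: some items hold attention longer."""
            base = {"video_a": 2.0, "video_b": 5.0, "video_c": 3.0}[item]
            return max(0.0, random.gauss(base, 1.0))

        for interaction in range(10_000):
            # Mostly exploit the current best estimate, occasionally explore.
            if random.random() < EPSILON:
                item = random.choice(items)
            else:
                item = max(items, key=lambda i: watch_time[i])

            observed = simulated_user(item)    # one new log line of "data"
            shown[item] += 1
            # Incremental mean: the estimate keeps adapting as data streams in.
            watch_time[item] += (observed - watch_time[item]) / shown[item]

        print({i: round(watch_time[i], 2) for i in items})
        ```

        The loop never stops updating, which is the "no tolerance" point above: as soon as your behaviour shifts, the estimates shift with it.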

        • aceshigh@lemmy.world · 5 hours ago

          That’s why, as humans, it’s important to set boundaries with everything and everyone, especially yourself.

          This can be used for personal development: ask AI to describe who you are (temperament, interests, dreams, weaknesses, etc.), test it out, and if something proves true, work with it to find ways to adjust or overcome it.