

my original comment before editing read something like “they specifically asked chatgpt not to produce bomb manuals when they trained it” but i didn’t want people to think I was anthropomorphizing the llm.
hey that’s pretty fuckin good lol
well, yes, but the point is they specifically trained chatgpt not to produce bomb manuals when asked. or thought they did; evidently that’s not what they actually did. like, you can probably find people convincing other people to kill themselves on 4chan, but we don’t want chatgpt offering assistance writing a suicide note, right?
this, like an open casket funeral, remains to be seen.