Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, and to call out both the dangers of AI that claims to be intelligent and the corporations responsible for it.
So then your counter to someone bringing attention to the fact that LLMs are actively telling people (vulnerable people, for reasons you've pointed out) to kill themselves is that it isn't the singular contributing factor?
I get what you're saying here, and I think everyone else does too? I don't want to be entirely dismissive and just say "no shit," but I'm curious what it is you want or expect out of this. Do you take offense at people pushing back against harmful LLMs? Do you want people to care more about creating a kinder society? Do you think these things are somehow incompatible?
Of course LLMs aren't driving people to suicide in a vacuum; no one is claiming that. Clearly, though, when taken within the larger context of the current mental health crisis, having LLMs that encourage people to commit suicide is a bad thing that we should absolutely be making noise about.