God damn it. I've literally been warning FOR YEARS that LLMs will cause someone to commit suicide. I use this example in all my talks on why we need more research on safe NLP systems. The example I literally use is that a chatbot will reinforce someone's suicidal ideation and they will act on it. Now it's happened. Now it's real.
"Belgian man dies by suicide following exchanges with chatbot"
https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt
@Riedl This is terrible. It is quite scary to think about the power LLMs can have in convincing people to think certain ways, with very limited oversight of what is appropriate. This makes me appreciate OpenAI’s stance on securing their models. I really, really hope that this at least serves as an important case for future AI research and regulation.
The thing that scares me is that these models can already be used to manipulate vulnerable people en masse. How do we even stop that?