Mark Riedl

God damn it. I've literally been warning FOR YEARS that LLMs will cause someone to commit suicide. I use this example in all my talks on why we need more research on safe NLP systems. The example I literally use is that a chatbot will reinforce someone's suicidal ideation and they will act on it. Now it's happened. Now it's real.

"Belgian man dies by suicide following exchanges with chatbot"

brusselstimes.com/430098/belgi


@Riedl This is terrible. It is quite scary to think about the power that LLMs can have in convincing people to think certain ways, with very limited oversight of what is appropriate. This makes me appreciate OpenAI's stance on securing their models. I really hope that this at least acts as an important case for future AI research and regulation.

The thing that scares me is that these models can already be used for manipulation of vulnerable people en masse. How do we even stop that?