Maria Antoniak

New blog post ✍️ by Lucy Li, Maarten Sap, Luca Soldaini, and me about large language models and how to use them with care.

We discuss:
- 10 current risks ⚠️ posed by chatbots and writing assistants to their users
- 7 questions 🤔 to ask yourself before using these tools

Our goal with this post is to increase transparency for everyday users 💻🔍

blog.allenai.org/using-large-l

AI2 Blog: Using Large Language Models With Care, by Maria Antoniak

Risk #1: LLMs can produce factually incorrect text.

Risk #2: LLMs can produce untrustworthy explanations.

Risk #3: LLMs can persuade and influence, and they can provide unhealthy advice.

Risk #4: LLMs can simulate feelings, personality, and relationships.

Risk #5: LLMs can change their outputs dramatically based on tiny changes in a conversation.

Risk #6: LLMs store your conversations and can use them as training data.

Risk #7: LLMs cannot attribute sources for the text they produce.

Risk #8: LLMs can produce unethical or hateful text.

Risk #9: LLMs can mirror and exacerbate social biases and inequality.

Risk #10: LLMs can mimic real people, news outlets, governments, etc.

People are using chatbots and writing assistants to write emails, do homework, write code, get mental health advice, and more, without necessarily being informed of the risks involved.

On a personal note, I went looking for a resource like this last month, after observing my own friends and family using LLMs in unexpected ways. I couldn't find anything written for everyday users that clearly laid out immediate risks, and so this blog post was born.

(Previously advertised on Twitter, but maybe I can reach a more diverse audience here!)

@maria_antoniak With regard to risk #7: tools like perplexity.ai (which use LLMs) can attribute sources, can't they?

@ErikJonker They generate citations but they’re not trustworthy.

@maria_antoniak
True, but it does make it easier to check the generated content and make it usable (in my opinion).

@ErikJonker Maybe, maybe not. I'd proceed with great caution because unless you're an expert on the topic you're querying about, it will be very difficult to know what's missing. Having some fake and/or incomplete citations might be worse than just a blanket admission that the citations don't currently work.