We're in Nature with an opinion piece on how researchers should respond to #ChatGPT and conversational AI technology more generally!
It's been an interesting experience reaching consensus in an interdisciplinary team of scholars (two psychologists, a computer scientist, a philosopher and me, an NLP researcher).
We list 5 priorities:
1. Hold on to human verification
2. Develop rules for accountability
3. Invest in truly open LLMs
4. Embrace the benefits of AI
5. Widen the debate
"... defies today’s binary definitions of authorship, plagiarism and sources, in which someone is either an author, or not, and a source has either been used, or not. Policies will have to adapt, but full transparency will always be key. ... the community will also need to work out who holds the rights to the texts. Is it the individual who wrote the text that the AI system was trained with, the corporations who produced the AI or the scientists who used the system to guide their writing?"