How do we wake up regulators about the risks of the upcoming 'conversational' search engine war?
Like many in NLP worldwide over the last 2 months, I have talked to many journalists and academics from other fields about #ChatGPT. Many of us warned right away (e.g., [1]) that commercial interests & competition in big tech would lead to this technology being rolled out prematurely, without proper independent risk assessment. Is anyone doing anything about it?
I worry, however, that many don't take those risks very seriously: a little bit of plagiarism, some misinformation, a bit of misogyny -- nothing new, right? That's a mistake when we talk about search engines: they are the entry point to the internet. When Facebook, another key entry point, changed its policies several times in the last decade, this had massive direct effects (e.g., on internet traffic to newspapers) & indirect ones. Now that Bing/Perplexity/You & others integrate search with
#chatgpt-like techniques & force Google to do the same, we don't know the consequences, but accidents are bound to happen: companies will go bankrupt, crucial information will fail to reach key people, while misinformation does. It's as if a company controlling a major highway suddenly redirected all traffic to a secret road; surely, governments would want a say? I'm no expert on regulation, but I wish regulators would hit a pause button & create an opportunity for some proper auditing of this technology.
I've sometimes disagreed with @garymarcus's wide-ranging critique of deep learning, but this piece is again really good and a powerful warning. It ends with a wake-up call for regulators (as in my thread above): "And, bonus, there is little if any government regulation in place to do much about this. The possibilities are now endless for propaganda, troll farms, and rings of fake websites that degrade trust across the internet."
https://garymarcus.substack.com/p/inside-the-heart-of-chatgpts-darkness