
How do we wake up regulators about the risks of the upcoming 'conversational' search engine war?

I have, like many in NLP worldwide over the last 2 months, talked to many journalists and academics from other fields about ChatGPT. Many of us warned right away (e.g., [1]) that the commercial interests & competition in big tech will lead to this technology being rolled out prematurely, without proper independent risk assessment. Is anyone doing anything about it?

[1] volkskrant.nl/nieuws-achtergro

de Volkskrant: A murder scene à la Nicci French made with artificial intelligence, and more: "Everyone is astonished by the quality." Drawing up a solid marketing plan, writing a column for the newspaper, or improving existing computer code: the language program ChatGPT amaz...

I worry, however, that many don't take those risks very seriously: a little bit of plagiarism, some misinformation, a bit of misogyny -- nothing new, right? That's a mistake when we talk about search engines: they are the entry point to the internet. When Facebook, another key entry point, changed its policies several times in the last decade, this had massive direct (e.g., on internet traffic to newspapers) & indirect effects.

When Bing/Perplexity/You & others now integrate search with ChatGPT-like techniques & force Google to do the same, we don't know the consequences, but accidents are bound to happen: companies go bankrupt, crucial information does not reach key people, while misinformation does. It's as if a company controlling a major highway suddenly redirected all traffic to a secret road; surely, governments would want a say? I'm no expert on regulation, but I wish regulators would hit a pause button & create an opportunity for some proper auditing of this technology.

Jelle Zuidema

I've sometimes disagreed with @garymarcus's wide-ranging critique of deep learning, but this piece is again really good and a powerful warning. It ends with a wake-up call for regulators (as in my thread above): "And, bonus, there is little if any government regulation in place to do much about this. The possibilities are now endless for propaganda, troll farms, and rings of fake websites that degrade trust across the internet."

garymarcus.substack.com/p/insi

The Road to AI We Can Trust: Inside the Heart of ChatGPT's Darkness