
Chris Vitalos

If true, this casts serious doubt on whether the end goal of AGI can be achieved with today's architectures and training methods.

While ongoing research explores hybrid models and inference techniques, no implementation to date has fully eliminated flawed reasoning.

What consumer would trust mission-critical decisions to an AGI known to confidently state falsehoods?

newscientist.com/article/24795

New Scientist · AI hallucinations are getting worse – and they're here to stay · By Jeremy Hsu

@chrisvitalos haven't read the article yet - but don't actual humans also do this all the time? don't we kinda 'hallucinate' a model of reality?

@bazkie

I think we humans have biases and cognitive limitations that influence our reasoning and the conclusions we draw.

@bazkie

Reflecting on this more, humans can evaluate the benefits and risks of hallucinating within a uniquely human social context (the impact on one's credibility, for example).

Consider cases of embarrassed and discredited lawyers who submitted legal briefs riddled with factual errors.

AI, in its current state, doesn't value outcomes in this context -- to it, there is no gain in being right and no cost in being confidently wrong.

I'd imagine AGI would need this human-like ability.

apnews.com/article/artificial-

AP News · Lawyers submitted bogus case law created by ChatGPT. A judge fined them $5,000 · By Larry Neumeister
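To make that gain/cost point concrete, here is a toy sketch in Python (my own illustration, not from either article; the function name, payoffs, and confidence value are all hypothetical). It shows how an explicit payoff for being right and a penalty for being confidently wrong would change whether a model answers at all:

def should_answer(confidence: float, gain: float = 1.0, cost: float = 4.0) -> bool:
    """Answer only if the expected value of answering beats staying silent (0).

    Expected value = confidence * gain - (1 - confidence) * cost.
    """
    return confidence * gain - (1 - confidence) * cost > 0

# With a real penalty for confident falsehoods, a 70%-sure model abstains:
print(should_answer(0.7))            # False: 0.7*1.0 - 0.3*4.0 = -0.5
# With no penalty -- roughly today's situation -- it answers anyway:
print(should_answer(0.7, cost=0.0))  # True: 0.7*1.0 - 0.0 = 0.7

Under this toy rule, "no gain or cost" reduces to cost = 0, where any nonzero confidence favors answering -- exactly the confident-falsehood failure mode in the lawyers' briefs.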

@chrisvitalos I wanted to read the articles and think about it some more but I'm really tired, so I don't have anything useful to say about it now, sorry! thanks for the insights tho :)