If true, #hallucinations cast serious doubt on whether the end goal of #AGI can be achieved with today’s #LLM architectures and training methods.
While ongoing research explores #RAG, hybrid models, and improved inference techniques, no implementation to date has fully eliminated flawed reasoning.
What consumer would trust mission-critical decisions to an #AGI that is known to confidently state falsehoods?
@chrisvitalos Haven't read the article yet, but don't actual humans also do this all the time? Don't we kinda 'hallucinate' a model of reality?
I think we humans have biases and cognitive limitations that influence our reasoning and the conclusions we draw.
Reflecting on this more: humans can weigh the benefits and risks of hallucinating within the uniquely human social context (the impact on one's credibility, for example).
Consider cases of embarrassed and discredited lawyers who submitted legal briefs riddled with factual errors.
AI, in its current state, doesn't weigh outcomes in this context, because there is no gain or cost to it either way.
I'd imagine AGI would need this human-like ability.
@chrisvitalos I wanted to read the articles and think about them some more, but I'm really tired, so I don't have anything useful to say right now, sorry! Thanks for the insights tho :)