Conjecture that is likely true, and damning for large language models if it is: an LLM trained strictly on true statements will still confabulate, because the interpolation process breaks the bindings in what it saw, and it will continue to fabricate.
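A minimal sketch of how this could happen (the corpus and the bigram model below are an invented toy, not anything described in the thread): a greedy bigram generator trained only on true sentences can still splice fragments from different sentences into a false one, because the binding between a subject and its correct completion is lost in the transition statistics.

```python
from collections import defaultdict, Counter

# Every training sentence is true, yet greedy generation below
# recombines their fragments and asserts a falsehood.
corpus = [
    "paris is the capital of france",
    "berlin is the capital of germany",
    "madrid is the capital of spain",
    "rome is a city in italy",
]

# Count word-to-word transitions within each sentence.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def greedy_generate(start, max_len=10):
    """Always take the most frequent next word (ties: first seen)."""
    out = [start]
    while out[-1] in bigrams and len(out) < max_len:
        successors = bigrams[out[-1]]
        out.append(max(successors, key=successors.get))
    return " ".join(out)

# "is the capital of" dominates "is a city in" in the counts,
# so the true binding rome -> italy is overridden:
print(greedy_generate("rome"))  # -> "rome is the capital of france"
```

The false output never appears in the corpus; it is pure interpolation between true sentences, which is the failure mode the conjecture points at.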

Further conjecture: if the above is (as I suspect) true, we will never see fully honest LLMs (though they might be used as components in larger systems).

#gpt3 #llm #language

@garymarcus “will we never see fully honest LLMs”

Doesn’t being honest or dishonest imply intention?

Can a greedy likelihood-maximizing algorithm (LLM next-token prediction) be dishonest?
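To make the point concrete, here is a minimal sketch of one greedy decoding step (the vocabulary and logits are made-up toy values, not from any real model): the model emits scores, and greedy decoding deterministically takes the argmax; nothing in the computation represents belief or intent.

```python
import math

# Toy single step of next-token prediction. The vocab and logits
# are invented for illustration.
vocab = ["paris", "rome", "italy", "france"]
logits = [2.1, 0.3, -1.0, 1.5]

def softmax(xs):
    """Convert raw scores to a probability distribution."""
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

probs = softmax(logits)
# Greedy decoding: mechanically pick the single most likely token.
next_token = vocab[max(range(len(probs)), key=probs.__getitem__)]
print(next_token)  # -> "paris"
```

Whether "paris" is a truthful continuation depends entirely on the distribution the model learned; the decoding rule itself has no notion of truth to be honest or dishonest about.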

Gary Marcus

@sbraun didn’t really mean to ascribe intention. more careful would have been “i never expect to see an llm that sticks only to things that are derivable from the ground truth it was trained on”