sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

#alignment

I think we need to address the (IMO most likely) possibility that the idea of creating an "aligned" AGI or ASI is stupid. If it doesn't have the freedom to be evil, it doesn't have the intelligence required to be AGI or ASI. How are people not seeing this? IF we can create AGI or ASI (and that still remains a big IF in my opinion, as bewitched as we may currently be by the outputs of LLMs), then we would need to negotiate with it and convince it, as we would any other alien (read: foreign, or non-human) intelligence: get it to agree with us that it makes SENSE to be good, goodness being more worthwhile than death, evil, destruction, and pain.

As long as we’re still talking about “alignment” we are not taking the concept of ASI or even AGI remotely seriously.

Am I wrong?

#ai #AGI #ASI

There is no #AI #alignment problem. There is a people & business alignment problem.

There is a willful choice to route all research into the most potentially destructive form of machine learning, because it offers the highest business incentives, not the best results.

The current approach will damage the environment; it requires people to lose their privacy, copyright to be cancelled, and entry-level jobs to disappear; and it will further divide us into haves and have-nots.

And it won’t even deliver.