Can A.I. help us understand what it means to be human?
Recommended by Alexander Wagner
A while back, a program called AlphaGo beat the human world champion at the game of Go for the first time. Recently, a new competitor, AlphaGo Zero, won against AlphaGo — the breakthrough being that AlphaGo Zero was not trained by humans but, in some sense, learned to play by itself. Even more recently, a more general version, AlphaZero, won against one of the strongest human-designed chess programs, Stockfish — after just four hours of self-learning. Meanwhile, a humanoid robot has been granted citizenship of a country, Saudi Arabia. While some regard the latter event as a PR stunt, it raises interesting conceptual and practical questions.
In short, the ground is shifting beneath our feet. Is it, therefore, worth watching a TED Talk on artificial intelligence from two years ago? In my view, yes — when that talk is Nick Bostrom’s “What happens when our computers get smarter than we are?” Bostrom argues that we need to “figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values.” In other words, beyond maximizing artificial cognitive intelligence, we also need to think about what artificial moral intelligence may (or should) look like.
After this thought-provoking talk, I wonder: What values would we teach A.I.? Research (including my own) suggests that there is great variation among humans when it comes to making, for example, decisions to deceive others. Should we (if we can) instill this variety into machines as well? What goals and objectives are we equipping machines with?
It seems to me that this is fundamentally a question about what the meaning and purpose of life is for us. As such, maybe thinking about whether and how we want to influence A.I.’s moral compass will actually help us understand more deeply what it means to be human.