A few months ago, I was a moderator for a panel discussion on deep learning here in Dublin.
I had prepared several thoughtful questions about the role of AI in business applications and so on.
Yet I couldn’t resist asking about “The Singularity”. Sheepishly, I brought up Musk’s concerns about runaway technological growth, expecting to be met with eye rolls from the panelists.

“If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.”
To my surprise, everyone on the panel engaged with the question and had an opinion.
Somehow this concept inspires people; it speaks to the imagination. Maybe that’s because the field of AI has always been a mirror: a way to learn about the nature of intelligence.
Thinking about recreating human intelligence forces us to confront one of life’s greatest mysteries: who are we?
There seems to be a lot of concern that creating intelligent machines would end badly. That’s a valid concern, but in this post I offer a slightly different perspective.
When Elon Musk and others talk about the AI singularity, we often conjure a picture of a future like the Matrix, or Skynet. But there is a little, often forgotten, detail in the Terminator movies. While the first movie, made in 1984, had Arnold Schwarzenegger playing a killer robot, in the second movie he comes back as the good guy.
According to the movie, the reason he became more likable (in addition to being reprogrammed) is that he was enabled to learn, a function Skynet had disabled in order to prevent its soldiers from rebelling.
Perhaps, if we build machines that can learn from any kind of data, the outcome will not be so bad. What if a learning machine is a kinder, gentler machine?
Learning from us
If it learns from one of the largest data sources available (the internet), it may develop an inexplicable affinity for kittens. It may speak like a 14-year-old high schooler and binge-watch Netflix. Maybe all it really wants to do is discuss the final episode of “Lost”, or the ending of the movie “Inception”.
Why would we expect the AI to preoccupy itself with nuclear war, given that there is comparatively little data about the subject?
There are big differences between humans and machines: First, we humans carry a lot of evolutionary baggage. We have survived as a species in part because of our willingness to compete over limited resources as well as our individual willingness to fight and risk our lives for the good of our group. It is not clear that an AI would come into the world with similar motivations.
Furthermore, machines are not mortal. Machines can break or be destroyed, but there is no fundamental law that says machines must die of old age. In other words, an individual AI could live indefinitely. If you had the potential to live indefinitely, how would you act differently? I wager you would be considerably less violent.
It is therefore possible that a machine that can be indefinitely repaired and maintained would go to extreme lengths to preserve peace and to avoid even the smallest chance of ending its singular existence.
Your AI Friend
Between all the social anxieties it picked up from Facebook and Twitter, and its extreme risk-aversion, it might just desperately want to be your friend. Instead of killer robots, we might end up with a really needy friend that never sleeps. The killer robots might only come for you after you block your AI friend on Facebook.