We have been living with computers for decades – why fear their improvement? Stephen Naughton investigates.
Elon Musk, chief of Tesla and SpaceX, has long voiced his worries about the development of Artificial Intelligence, and he has repeatedly called for legislative regulation of the field. In contrast with Facebook CEO Mark Zuckerberg’s recent declaration that people need not worry, Musk is passionate about AI and its capabilities while remaining concerned for human safety. AI is a broad field of research, spanning self-driving cars and search engines, image and face recognition, video gaming and chatbots. It uses learning algorithms in neural networks to advance its own capabilities. These networks are based loosely on how the human brain operates, and they allow a computer to learn and solve problems when given objectives.
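The learning loop behind such networks can be sketched in a few lines. The following toy example (an illustration only, not any real system) trains a single artificial “neuron” to learn the logical AND function: predict, measure the error, nudge the weights, repeat. Real networks stack thousands of such units, but the idea is the same.

```python
import math

# Toy sketch: one "neuron" learning logical AND from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0  # the weights start out knowing nothing

def predict(x1, x2):
    # Weighted sum squashed to a 0..1 "confidence" by the sigmoid.
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

rate = 1.0  # learning rate: how big a nudge each mistake causes
for epoch in range(5000):
    for (x1, x2), target in data:
        error = predict(x1, x2) - target
        # Nudge each weight in the direction that shrinks the error.
        w1 -= rate * error * x1
        w2 -= rate * error * x2
        bias -= rate * error

# After training, the neuron's guesses round to the right answers.
results = {inp: round(predict(*inp)) for inp, _ in data}
print(results)
```

Nobody told the program what AND means; it worked the rule out from examples, which is the essence of the learning Musk is talking about.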
The concern Musk holds is that today’s AI can learn to succeed at many tasks much more quickly, and often far better, than any human can. Facebook’s face recognition is now better than that of a human. Autonomous vehicles have lower error rates than human drivers. Google can not only describe the contents of an image, but also search images based on a description. Even more startling, a recent paper showed that AI is capable of synthesising images from descriptions, as well as human voices, faces and expressions.
“Facebook’s face recognition is now better than that of a human.”
The computer programme Deep Blue famously defeated chess Grandmaster Garry Kasparov in 1997. From that moment on, human chess players were to be forever inferior to their digital counterparts. In May of this year, Google’s AlphaGo AI beat the world’s top player of the ancient Chinese game Go for the first time. Go differs from chess in that the number of possible board configurations far exceeds anything a computer could evaluate in a reasonable timeframe. For this reason, experienced players say that the game is played by intuition rather than logical deduction. Indeed, the chess computer that beat Kasparov in 1997 won by rapidly searching vast numbers of possible move sequences each turn, then making the move it rated most likely to lead to victory. This brute computational power is not feasible in a game like Go, where, to quote the National Lottery: the possibilities are endless. This was claimed to make the game difficult for AI, but all that changed this year.
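A back-of-envelope calculation shows why brute force works for chess but not for Go. The branching factors below are commonly cited rough averages, used here purely for illustration: about 35 legal moves per chess position versus about 250 per Go position.

```python
# Rough sketch: how many positions a brute-force search must examine
# to look a few moves ahead. Branching factors are cited averages,
# assumed here for illustration.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_to_search(branching, depth):
    # Each extra move ahead multiplies the work by the branching factor.
    return branching ** depth

for depth in (2, 4, 6):
    chess = positions_to_search(CHESS_BRANCHING, depth)
    go = positions_to_search(GO_BRANCHING, depth)
    print(f"{depth} moves ahead: chess {chess:,} vs Go {go:,}")
```

Six moves ahead, chess demands under two billion positions, which 1997 hardware could churn through; Go demands over two hundred trillion, which is why AlphaGo had to learn something closer to intuition instead.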
The power of AI goes beyond board games, however. Recently, Danil “Dendi” Ishutin, a professional player of the computer game Dota 2, was defeated by an AI bot. What took Ishutin 12 years to master took the self-learning bot only eight months to perfect. One startling point made by Ishutin in a post-match interview was that the bot used strategies “never seen before.” It made fools of some players with “baiting” strategies learned from opponents, attempting to trick its attackers into thinking it had been weakened when in fact it had laid traps for them.
Another example of computer cleverness is given by Youtuber ‘sentdex’, who created an AI bot (named Charles) which taught itself to drive in the 2013 video game Grand Theft Auto V. Through a deep-learning neural network, Charles went from not knowing how to steer, accelerate or recognise a crash, to evading the game’s police down crowded streets at speed. It learned what it could crash into, what it had to avoid, and how to reverse out of a collision. It could even shoot rockets to clear its path. On a disturbing note, the learning behaviour of the robot was uncannily like that of a human driver. Charles appeared curious, especially when confronted with novel situations.
“On a disturbing note, the learning behaviour of the robot was uncannily like that of a human driver. It appeared curious, especially when confronted with novel situations.”
Why should all this be cause for alarm? These are just games, after all. The answer is that games are about achieving objectives. The best players are those who complete the objective most efficiently, whether fastest, with the highest score, or by whatever measure the game’s highest-order goal sets. The issue with AI arises when, in order to achieve the highest-order goal, it disregards what to us are common-sense lower-order goals.
A humorous example of this runaway effect is that of an AI told to stop all spam emails. A worthy objective, no doubt, but not when the AI’s way of doing this is by killing everybody on the planet. Sure, a world without people is a world without spammers, but we have higher goals than “spam should be gone”: human survival cannot be subordinate to other goals. Concerns like this are the reason why people like Elon Musk are investing tens of millions of dollars in AI safety research. Computers are becoming increasingly capable, and as they learn how to learn, we will soon cease to be able to predict how they will achieve their goals. As our technology advances, discussion about this topic is likely to heat up in the years ahead.
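The spam thought-experiment can be made concrete with a deliberately silly sketch. The optimizer below is handed one goal, minimize spam, and nothing else; every action and number is invented for illustration. Because human harm is invisible to the naive objective, the cheapest route to zero spam wins.

```python
# Deliberately silly sketch of the runaway-objective problem.
# All actions and numbers are made up for illustration.
actions = {
    # action: (spam emails remaining, humans harmed)
    "do nothing":           (1_000_000, 0),
    "filter obvious spam":  (50_000,    0),
    "delete every account": (0,         7_500_000_000),
}

def naive_objective(outcome):
    spam, _harm = outcome
    return spam  # cares only about spam; harm is invisible to it

def safer_objective(outcome):
    spam, harm = outcome
    # Make harming people vastly worse than any amount of spam.
    return spam + harm * 10**12

naive_choice = min(actions, key=lambda a: naive_objective(actions[a]))
safer_choice = min(actions, key=lambda a: safer_objective(actions[a]))
print(naive_choice)  # "delete every account" -- spam is zero, after all
print(safer_choice)  # "filter obvious spam"
```

The fix looks trivial here because we wrote the whole world into three lines; the research problem is specifying “don’t harm people” for a system whose world, and whose strategies, we can no longer enumerate.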