Will artificial intelligence (AI) take over the world, or will it help to bring about an unprecedentedly advanced human civilization? That’s the debate that has (quite rightfully) been raging for some time, with people like Elon Musk warning about its dangers, and others like Mark Zuckerberg focusing on all the advancements it could bring us.
Stephen Hawking has previously voiced concerns about AI, and just recently, he’s repeated them. During an interview with Wired, he said: “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”
This is a fair thing to worry about, and it certainly taps into a deep-seated fear. So much science fiction across a range of media revolves around the idea of AI becoming independent from its human creators – whether in a hostile way, or simply in an attempt to peacefully branch away.
Generally speaking, this fear comes from the notion that we are weak and replaceable, and that an AI that can outlive us will ultimately replace us. On a more immediate, visceral level, we fear that an AI may actively seek to harm us.
In either case, these are concerns worth noting and taking into consideration. They shouldn’t, however, eclipse all other opinions about the future of AI, particularly the positive aspects. After all, many other experts, including Bill Gates, consider AI to be the Next Big Thing, the technological renaissance that will transform our society.
Studies have already shown that AI is better at recognizing patterns than humans. Whether it’s board and video games, or things as complex as IVF treatment and breast cancer diagnoses, machines are already surpassing us. This may sound frightening to some, but all it means is that certain enigmas could be solved more readily with the help of machines.
At the same time, entire cities are being partly managed by AIs. One notable experiment in China has shown that traffic and crime rates are down thanks to a similar type of pattern-recognition software.
Sure, AI controlled by a dangerous person could be used maliciously. As Bill Nye points out, though, “If we can build a computer smart enough to figure out that it needs to kill us, we can unplug it.”