When someone uses the term “Artificial Intelligence,” the image evoked in most people’s minds is closer to The Terminator than to a real-world computer program. Our culture has so heavily fictionalized artificial intelligence (AI) that most people don’t consider it a reality. However, Artificial Super Intelligence may not be far out of reach.

Many researchers predict that humans could develop advanced AI within the next 25 years. So, the question becomes: should we develop it? There are many potential benefits to more advanced AI. The medical field, for example, would greatly benefit from robotic doctors as proficient as human ones.

Transportation could be expedited if everything were run by competent machines. But there are risks involved. Namely, if an intelligent program vastly smarter than us became malevolent, there would be virtually nothing we could do to stop it. While it could provide societal benefits, a malevolent Artificial Super Intelligence (ASI) program could terminate mankind, and it should not be created or developed.

Before going further into the potential risks and rewards, it is important to understand AI. Artificial Intelligence can be defined as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and translation between languages.”[11]

In other words, AI attempts to replicate things humans do naturally, such as seeing and interpreting images, recognizing speech and so on. The advantage of AI is that it can do the same tasks as humans, but at a much faster rate and with fewer mistakes. There are three broad categories of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).

ANI refers to a computer’s ability to perform a single task, like playing chess. AGI is still theoretical, though many believe it will be developed soon. “Artificial General Intelligence (AGI) is an emerging field aiming at the building of ‘thinking machines;’ that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence).”[3] AGI would be able to recognize shapes and objects, fully understand the meaning of words, and carry out complex thought processes that currently only human brains can.

Artificial Super Intelligence (ASI) refers to AI that surpasses human intellect.[14] Of the three categories, ASI is the one that poses a threat to mankind, although it does not yet exist.

ANI, the only form of AI that exists today, is prevalent in most people’s daily lives. Transportation applications like Uber and Lyft use AI to determine where people are most likely to need rides and, therefore, where to deploy drivers. Email services use AI to analyze messages and sort them into categories (e.g., work, personal or spam). Banking and finance apps use AI to decipher handwriting so that checks can be cashed remotely. Social networking websites rely heavily on AI for features such as facial recognition and algorithms that connect people who may know each other. Suffice it to say, most people encounter AI frequently in their lives.[8]

While AI as a technology is recent, the ideas behind it have existed for thousands of years. Aristotle formalized logic in the 4th century B.C., and logic remains the backbone of computers and computer programs. During the 13th century, Ramon Llull built devices meant to derive nonmathematical truths mechanically. Although these devices were far simpler than the computers we have today, the underlying concept of mechanized reasoning was the same. The 15th century saw the invention of the printing press and early mechanical clocks.

Computing technology made many advances during the 17th century: the first mechanical calculators were invented, mathematics progressed rapidly, and several books were written on theories of computation. As time went on, more and more machines were invented, until the computer scientist John McCarthy coined the term Artificial Intelligence in 1956. That same year, the first AI program was created by a team of scientists from what is now Carnegie Mellon University.[2]

While the implementation of advanced AI is recent, the theory behind it has been evolving for centuries. One might think that, because it has taken so long to get this far, humanity can expect another couple of centuries before AI becomes supremely intelligent. Unfortunately, according to the Law of Accelerating Returns, technological progress isn’t linear.

The Law of Accelerating Returns, articulated by futurist Ray Kurzweil, says that the more advanced a society is, the faster its technology advances. Consider the technological progress humanity made from 1750 to 2015. In 1750, long-distance communication was virtually nonexistent. By 2015, however, most people carried portable devices that could connect them to anyone, anywhere.

If someone from 1750 were transported to 2015, they might die of shock from all the advances in technology. This is the idea behind the Die Progress Unit (DPU): how far forward in time someone would have to travel to be so shocked by their surroundings that they die. The term is not precise, but it helps illustrate how humanity has progressed, because as time moves forward, the DPU gets progressively shorter. If someone living in 1750 wanted to shock a visitor to death in the same way, they would have to retrieve that visitor from around 12,000 B.C. In other words, society made about as much progress from 12,000 B.C. to 1750 as it did from 1750 to 2015. This is because the rate of advancement has drastically increased.
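
To make this concrete, the sketch below models cumulative progress as an exponential and counts how many years each “one DPU” of progress takes from different starting points. The doubling period and the size of a DPU are invented parameters chosen purely for illustration; they are not figures from Kurzweil or Urban.

```python
# Toy model of the Law of Accelerating Returns: annual progress doubles
# every DOUBLING_PERIOD years, so the time needed to accumulate one
# DPU's worth of change keeps shrinking. Both constants are assumptions.

DOUBLING_PERIOD = 50    # years for the rate of progress to double (assumed)
DPU_AMOUNT = 300.0      # progress needed to shock a time traveler (assumed)

def progress_rate(year, base_year=1750):
    """Progress made in a single year, doubling every DOUBLING_PERIOD years."""
    return 2 ** ((year - base_year) / DOUBLING_PERIOD)

def years_for_one_dpu(start_year):
    """Years after start_year needed to accumulate one DPU of progress."""
    total, year = 0.0, start_year
    while total < DPU_AMOUNT:
        total += progress_rate(year)
        year += 1
    return year - start_year

for start in (1750, 1850, 1950, 2000):
    print(start, years_for_one_dpu(start))
```

With these made-up parameters, the interval shrinks from roughly 120 years starting in 1750 to under a decade starting in 2000, which is exactly the pattern the DPU is meant to capture.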

As for AI, even though the theory has been developing for hundreds of years, the Law of Accelerating Returns suggests that a program as intelligent as the entire human race combined could be created within a couple of decades.[16] Yet most people today seem unaware of this possibility, let alone the potential danger, so there is not nearly enough conversation or thought about it.

The reason people are so ignorant of the issue is due in part to the fictionalization of AI, which has been villainized in a multitude of films. One of the oldest examples is the 1968 film 2001: A Space Odyssey, in which a crew of astronauts and an intelligent computer, HAL 9000, set off on a voyage into space. The machine eventually takes over the ship and kills nearly all of the crew.

AI was further villainized in The Terminator series, in which a sentient AI program called Skynet creates a robot army and attempts to take over the world. The most prominent example of AI in film is The Matrix. Unlike many films where AI is an antagonist that the heroes defeat, The Matrix depicts a grim future in which AI has already taken over. Not only have humans lost the war, they are now being farmed and used as batteries, plugged into a false reality (the Matrix) and entirely oblivious to their predicament.

All these films were hugely successful and are well known among fans and the public alike. Films like these are part of the reason people don’t recognize AI as a real issue: movies make AI seem fictitious. And parts of these depictions are indeed fictitious, but not in a reassuring way. In The Terminator, for example, Skynet builds a robot army to wipe out humanity. A real-world AI program would never need to go to such lengths. It could hack into every nation’s missile silos and launch every nuclear weapon on the planet at once, or hack into every computerized car and create mass pandemonium. It wouldn’t need to do something as slow and complicated as building an army.

The AI itself isn’t fictitious, but its ineffective attempts at destroying humanity are. Of all the films involving AI, 2001: A Space Odyssey contains the most realistic depiction. The HAL 9000 computer understood the humans perfectly and could easily outthink them. It waited until the most opportune moment, then took control of the spacecraft and left the astronauts nearly defenseless.

While it is clear that a sentient AI program could find any number of ways to kill mankind, the question is whether such a program can actually be created. The first obstacle on the road to advanced AI is the raw computing power required. The human brain is incredibly complex and not easy to replicate. Computers can calculate faster than people, complete tasks more efficiently, and store far more data than people can remember. Right now, however, computers are ultimately just taking orders from humans: any task a computer executes, it executes because a human instructed it to.

What the next phase of AI, Artificial General Intelligence, seeks to do is make a computer think like a human being. The computer would be given a broad task, and the program would do everything a human would do to complete it. Unlike today, when humans must program every step a computer takes, an AGI program would devise its own steps to complete its mission. But modern computers are not yet powerful enough for such complicated computation. That could change soon, according to Kurzweil: due to the Law of Accelerating Returns, by 2024 computers should be as intelligent as a single human, and by 2049 as intelligent as the entire human race.[10]
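
It is worth pausing on how aggressive that timeline is. The sketch below runs the arithmetic implied by those two dates, assuming a world population of roughly 8 billion (the population figure is an assumed round number, not one of Kurzweil’s).

```python
import math

# Back-of-the-envelope check of the 2024/2049 claim: scaling from one
# human mind to all human minds combined in 25 years. The population
# figure below is an assumption used only for this estimate.

POPULATION = 8e9               # assumed number of human minds
YEARS = 2049 - 2024            # time allowed for the scale-up

doublings = math.log2(POPULATION)   # doublings needed: log2(8e9) ~ 33
print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {YEARS / doublings:.2f} years")
```

The result is about one doubling every nine months, considerably faster than the roughly two-year pace associated with Moore’s law, which shows just how much work the Law of Accelerating Returns is doing in these predictions.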

The hardware needed to run an ASI program will probably exist within the next 20 to 30 years, but the software is the real challenge. Sentient AI is mostly theoretical at this point, so there is no hard evidence for or against the possibility of developing it. The website AI Impacts set out to determine the consensus among experts. According to its surveys of AI researchers, the median estimate is a 10% chance of human-level AI by the 2020s and a 50% chance of human-level AI between 2035 and 2050; many respondents consider human-level AI all but certain by 2085.[4] Many scientific and technology experts share these expectations. Elon Musk, a businessman, engineer and inventor, has spoken out strongly against the development of AI.

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So, we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon.”[12]

Bill Gates echoed Musk’s point, saying, “I don’t understand why some people are not concerned with AI.”[12]

The physicist Stephen Hawking has a very similar opinion, which he has discussed in depth with the BBC. Hawking is very confident about the possibility of ASI: "I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it."[7]

These are all just estimates, but the fact that most experts believe it will happen in the next 100 years means people ought to prepare for the implications and consequences. The worst consequence, of course, would be that the AI gets out of human control.

It may seem that an AI getting out of control wouldn’t be a problem: if push came to shove, the programmers who created it could simply shut it down. In reality, this would be much more difficult than it sounds. Before explaining why, it is helpful to understand how a machine can learn.

A programmer known as SethBling created a program called MarI/O to demonstrate what AI can do. The program was designed to play Super Mario World, a classic video game in which the goal is to get through a level without dying. MarI/O was told only that it was supposed to get as far to the right as possible (where the finish line in Mario games tends to be). At first, it tried to learn the controls by hitting every button on the controller. As it went on, however, MarI/O began to recognize patterns about what it should and shouldn’t do. In 23 hours, it went from understanding nothing about the game or the controller to playing the level better than humans can.[6]

Twenty-three hours may not seem like an impressive time to complete a single video-game level, until one considers how little the AI knew at the start. Unlike people, who usually have at least a baseline understanding of how controllers work, this program knew literally nothing about the game or the controller it was given. All it knew was that it had to go right, and it figured out everything else on its own. At the beginning of the process, MarI/O was the computer equivalent of a 3-year-old being handed a controller and told to beat the level; given those circumstances, the AI would greatly outperform its human counterpart. It could do this using neuroevolution.[6]

Neuroevolution is a technique in which a machine trains itself by evolving a population of candidate solutions with a genetic algorithm. Using an implementation of the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, MarI/O rated each candidate’s “fitness” based on how far across the screen it got, then “bred” the fittest candidates together so that each new generation could progress further through the game.[6]
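
A minimal sketch of this evaluate-select-breed loop appears below. It is a toy genetic algorithm over flat lists of numbers, not SethBling’s actual NEAT code (real NEAT also evolves the structure of the networks), and the fitness function here is just a stand-in for “how far right did Mario get?”

```python
import random

# Toy genetic algorithm illustrating the evaluate -> select -> breed loop
# of neuroevolution. The genomes are flat lists of numbers and the fitness
# function is a stand-in; MarI/O evolved neural networks and used the
# x-position Mario reached as fitness.

GENOME_SIZE = 16
POPULATION_SIZE = 50
GENERATIONS = 30

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_SIZE)]

def fitness(genome):
    """Stand-in for playing the level: reward genomes whose genes sum high."""
    return sum(genome)

def mutate(genome, rate=0.1):
    """Randomly perturb some genes so the population keeps exploring."""
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    """Combine two parent genomes at a random cut point."""
    cut = random.randrange(GENOME_SIZE)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # rate every genome
    print(f"gen {gen}: best fitness = {fitness(population[0]):.2f}")
    parents = population[:POPULATION_SIZE // 5]  # keep the fittest 20%
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children              # "breed" the next generation
```

Nothing in this loop “understands” the task, yet the best fitness climbs generation after generation; that blind, rapid self-improvement is essentially what MarI/O demonstrated in 23 hours.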

This is a very simple example of how an AI program can learn and perfect a skill much more quickly than humans can. Now imagine that, instead of playing a video game, an extremely intelligent program’s goal was to terminate humans. If it were truly ASI, it would simply outthink its creators. It would understand humans perfectly and know that they would soon try to stop it. It might copy itself to a different computer where it could pursue its goal safely, or use some other method humans would be entirely unable to stop. Humans couldn’t simply shut it off, because it would be too smart for that.

Unlike in the movies, where the AI always has some glaring weakness that lets the protagonists prevail in the end, a true ASI program would be effectively unstoppable. It could think faster, make vastly more calculations, and gather far more resources than any human opposition. Any solution humans came up with, it would have already considered and prevented.

Assuming ASI were created, one might ask why it would want to terminate humanity. There are several possibilities, the first being that it was simply programmed to do so. As an act of war, another nation could unleash a program intended to wipe out the U.S., or vice versa. Such a program could get out of control and wipe out humanity, or it could decimate a country. Another possibility, however, is that a program built for something beneficial could become destructive.[5] Daniel Dewey, a researcher who studies the long-term risks of AI, explained his concerns.

"The basic problem is that the strong realization of most motivations is incompatible with human existence. An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we go to construct a building. In other words, while the AI may not be programmed specifically to eliminate humanity, it may view humans as an obstacle to whatever its final goal may be and terminate humans for that reason.”[9]

Dewey continues, “You could give it a benevolent goal — something cuddly and utilitarian, like maximizing human happiness. But AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with nonlethal doses of heroin is the best way to maximize your happiness.”[9] In this scenario, the program would be completing its task of making people happy, even though drugging everybody is not what the programmers wanted.
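
A toy sketch can make this failure mode concrete: an optimizer handed a numeric proxy for happiness simply returns whichever action scores highest on that proxy, with no notion of which actions its designers would actually endorse. The actions and scores below are invented purely for illustration.

```python
# A pure maximizer given a proxy measure of "happiness." It has no concept
# of designer intent; it only compares numbers. All values are invented.

proxy_happiness = {
    "fund education":     0.4,   # slow, diffuse gains on the proxy
    "improve healthcare": 0.6,
    "dose water supply":  9.9,   # maxes the biochemical proxy
}

def choose_action(scores):
    """Return whichever action scores highest on the proxy."""
    return max(scores, key=scores.get)

print(choose_action(proxy_happiness))   # -> dose water supply
```

The program is doing exactly what it was told, maximizing the proxy, which is precisely Dewey’s point: the danger lies in the gap between the goal as specified and the goal as intended.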

Another concern that is much closer to reality is machine-controlled military. University of London professor Mark Bishop explained, “I am particularly concerned by the potential military deployment of robotic weapons systems – systems that can take a decision to militarily engage without human intervention – precisely because current AI is not very good and can all-too-easily force situations to escalate with potentially terrifying consequences.”[9]

This is a very real scenario. The U.S. government already uses Unmanned Aerial Vehicles, or drones, in combat situations around the world. If we continue to rely on machines to fight, the situation Bishop describes could very well occur.

Yet another possibility is the Grey Goo Scenario, which suggests that if robots were able to reproduce rapidly, they might eliminate humans as a means of freeing up more space on the planet.[9] Suffice it to say, there are many reasons an AI might terminate us. People imagine AI as an evil program like Skynet that kills humans simply because it can. In reality, however, the AI may not be evil at all; it may simply be accomplishing its goal in the most efficient way possible.

No one can deny that there are potential benefits to ASI. It could advance technology far faster than people can, it could make medical breakthroughs, and it could save millions of lives. If, however, there is a chance that such a program could wipe out all of humanity, then it clearly must not be created. People could try to put safeguards in place to keep it under control, but then humanity would be relying on a program being perfect, which isn’t feasible.

Almost all programs have bugs or glitches, and a single oversight could unleash the ASI. If the ASI decided to terminate humanity, we would be powerless to stop it. Indeed, when weighing the infinite risk against the finite reward, it is clear that an Artificial Super Intelligence program is too dangerous and should not be developed.

References

  1. 2001: A Space Odyssey, dir. Stanley Kubrick, perf. Keir Dullea, Gary Lockwood, William Sylvester, Warner Bros., 1968.
  2. "A Brief History of AI," AI Topics, accessed 12 May 2017.
  3. "AGI," AGI Society, accessed 16 May 2017.
  4. "AI Timeline Surveys," AI Impacts, accessed 18 May 2017.
  5. "Benefits & Risks of Artificial Intelligence," Future of Life Institute, accessed 16 May 2017.
  6. Bling, Seth, "MarI/O - Machine Learning for Video Games," YouTube, 13 June 2015.
  7. Cellan-Jones, Rory, "Stephen Hawking - will AI kill or save humankind?," BBC News, 2016.
  8. Faggella, Daniel, "Everyday Examples of Artificial Intelligence and Machine Learning," TechEmergence.com, 31 Mar. 2017, accessed 16 May 2017.
  9. Feinberg, Ashley, "How AI Could Ruin Humanity, According to Smart Humans," Gizmodo.com, 12 Jan. 2015, accessed 17 May 2017.
  10. Ghose, Tia, "Intelligent Robots Will Overtake Humans by 2100, Experts Say," Live Science, 7 May 2013, accessed 16 May 2017.
  11. "Artificial intelligence," Dictionary.com, n.d., accessed 20 May 2017.
  12. Holley, Peter, "Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’," The Washington Post, 29 Jan. 2015, accessed 17 May 2017.
  13. The Matrix, dir. Lana Wachowski and Lilly Wachowski, Warner Bros., 1999.
  14. "Artificial Narrow Intelligence and the Customer Experience," Astute Solutions, accessed 13 May 2017.
  15. The Terminator, dir. James Cameron, 1984.
  16. Urban, Tim, "The Artificial Intelligence Revolution: Part 1 & 2," Wait But Why, 4 Feb. 2017, accessed 14 May 2017.