The idea of artificial intelligence is polarizing: it repels some people and excites others in equal measure. Will it bring a kind of paradise or a technological hell? Whatever your personal feelings about AI, there is no going back. Our lives are already influenced by artificially intelligent machines every day, and new technology is continuously being brainstormed, designed, developed, and brought to market.

There are bold theories involving two distinctly different futures. To understand the future implications, though, we first need to understand the present. For that purpose, it helps to divide AI into three categories: narrow intelligence, general intelligence, and superintelligence.

AI Is Everywhere

Artificial narrow intelligence (ANI), what some people call weak AI, is all around you. Like most Americans, you probably interface with this category of AI daily. It is the type of AI that uses data analysis and complex algorithms to beat you at chess or intuitively arrange your Facebook timeline.

Weak AI is intelligence focused on a specific area. While few systems at this level can pass the Turing test, our lives already depend on them. Without ANI, we wouldn’t have Siri, self-driving cars, or targeted advertising. Nor would our lives be as efficient, since millions of simple automated tasks run, unnoticed, in computing, finance, and industry.

There is much that narrow AI is not yet good at. For example, AI cannot yet simulate key aspects of human brain function, chiefly reasoning, empathy, and common sense. AI cannot yet solve problems humans haven’t already solved. It falters in areas that require real-world knowledge and that cannot be reduced to statistical learning, i.e., distinctly human problems.

This, however, does not mean that these obstacles can’t or won’t be overcome. It took many decades after Alan Turing posed his challenge before a computer program successfully conquered the first human/computer barrier: language.

General Intelligence: The Next Step in AI

The next level of AI is artificial general intelligence (AGI), or strong AI. It is not yet ready for prime time; the goal is to create an intelligence that will, at last, think and behave as humans do.

The main obstacle in creating AGI seems to be the difficulty of building a machine capable of doing the things that come so easily to humans. As it turns out, it is much easier to program a machine to perform advanced calculus than to build one that can recognize humor, determine human motivations, or hold a casual conversation. Our brains are brilliant at making emotionally intelligent decisions, reading other people’s emotions, and engaging in sarcasm.

Some doubt the world will ever see strong AI, while others believe we will achieve it within the next 20 to 30 years. If thought leaders like Mark Zuckerberg are correct, it will arrive much sooner than that. In all likelihood, most of us will be here to usher in the age of AGI.

“I think it’s possible to get to the point in the next five to 10 years where we have computer systems that are better than people at each of those things (seeing, hearing, language).” (Mark Zuckerberg, call to investors, April 2016)

The advancements that AGI could bring are simply astounding. There are also some worrisome ethical and social concerns. Consider the following predictions for the coming decades, which show the potential of strong AI.

Artificial Intelligence by the 2020s:

  • Nanobots will surpass modern medical technology, and most diseases will be cured or prevented.
  • Eating will be replaced by nutritional nanosystems.
  • Self-driving cars will take over the roadways; human-driven cars will be outlawed on highways.

Artificial Intelligence by the 2030s:

  • Virtual reality will feel 100% real.
  • We will all upload our consciousness into the cloud for safe-keeping.

The Promise or Peril of Superintelligence

So what will happen once the world has succeeded in creating artificial intelligence that rivals or equals our own? You don’t have to be a creative genius to imagine the doomsday scenarios. Stephen Hawking, one of the world’s preeminent scientists, warns that the rise of artificial superintelligence would mean the fall of mankind. “It [AI] would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” he warns.

Many scientists believe the rivalry between human and machine intelligence will be short-lived. After all, one of the key objectives in developing strong AI is for the machine to learn. The AI may program, code, and upgrade itself without instruction. This is what makes the evolution of AI so revolutionary, and so different from any technology humanity has ever made: the maker will no longer be in charge of his creation.

The Rise of the Artificially Superintelligent Machine

The AI machine, as a trial-and-error learner, will have a boundless capacity to develop and perfect skills, a process scientists call recursive self-improvement. This matters because nothing will slow AI down once it reaches the human level: the more intelligent it becomes, the better it will be at improving itself, and the more it will learn and accomplish. This cycle amounts to exponential growth in learning and intelligence that would leave humanity struggling to keep up.
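The compounding dynamic described above can be sketched in a few lines of toy arithmetic. To be clear, the numbers and update rules here are invented assumptions chosen purely for illustration, not a model of any real AI system:

```python
# Toy illustration of recursive self-improvement (invented numbers):
# each cycle the system gets smarter, and being smarter also makes it
# better at improving itself, so gains accelerate rather than merely add up.

def self_improvement_curve(level=1.0, rate=0.1, cycles=10):
    """Return the intelligence 'level' after each improvement cycle."""
    levels = [level]
    for _ in range(cycles):
        level *= 1 + rate  # apply the current improvement rate
        rate *= 1.5        # assumption: a smarter system improves itself faster
        levels.append(level)
    return levels

curve = self_improvement_curve()
gains = [b - a for a, b in zip(curve, curve[1:])]
# Every step's gain exceeds the previous one: this acceleration, not the
# raw growth, is the "runaway" dynamic the text describes.
accelerating = all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

Linear progress (a fixed gain per cycle) would let humans keep pace; it is the feedback of intelligence into the rate of improvement that produces the exponential curve.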

The Singularity and Complexity Brake

The prospect of superintelligence and its potential outcome can be frightening. Some of the foremost minds in science and technology, like Hawking, Bill Gates, and Elon Musk, believe we will not be able to effectively control the artificial superintelligence we create. Machines would then pursue their goals single-mindedly, and those aims may not coincide with what is best for humanity. This is called the Singularity.

When the Singularity will happen is also a topic of debate. Microsoft co-founder Paul Allen argues that even once human-level AI is achieved, the Singularity may still be years or decades away, owing to the complexity of the human condition.

This theory is called “the complexity brake.” The idea rests on a practical observation: the more deeply we probe a natural system, the more complexity we uncover, and the more difficult further meaningful understanding becomes. Unlike man-made systems, natural systems are infinitely complex.

Once a machine’s intelligence has surpassed a human’s, even slightly, scientists believe we will have triggered runaway technology. At that point, simply “turning it off” will not be an option. Some describe this period as a technological explosion in which humans, unable to adapt, could quickly become obsolete and face extinction while machine life continued.

The Absence of Morality

It isn’t that machines would necessarily seek to exterminate the human race. But because technology is not value-laden, we simply cannot assume it will learn human traits like empathy, respect, and regard for human life. After all, those traits arise from subjective human conscience. We humans have a hard time agreeing on what is right, moral, or ethical; it is therefore unlikely we could program into a computer traits that could reliably be deemed universally moral.

The Optimist’s Abundance

But what is darkness without light? Plenty of AI optimists envision a world of artificial superintelligence in which machines are programmed with a set of fundamental rules, reminiscent of the Hippocratic Oath’s “utterly reject harm and mischief” and “utmost respect for human life.” The theory contends that, sufficiently directed, machines will remain our servants, partners, and allies, and will go about solving the world’s greatest quandaries.

The immediate priority of superintelligence will likely be solving the energy crisis, which has both social and economic implications. Ray Kurzweil, Google’s Chief Futurist, describes this period simply as “abundance.” He believes we will enter a time when the cost of consumables has decreased so drastically that we will no longer work for money. The main challenge we will face at that point in our existence will be avoiding boredom and the descent into madness.

The Emotional Evolution of the Human Race

Some theorize that during this time of abundance, humans will begin to evolve emotionally. With so much free time on our hands, we will tap into the wellspring of artificial intelligence with the goal of bettering ourselves as people. Perhaps we will use artificial superintelligence to develop parts of our brains that currently lie dormant, raising our human abilities to new levels. The idea is gaining interest and the attention of luminaries, who have named it artificial emotional intelligence, or AEI.

AEI will help humans figure out the emotionally tricky aspects of life, such as interpersonal relationships, personal development, and psychological baggage. AEI may help us better understand ourselves as individuals, leading us to make better decisions in life. It may aid in revealing our capabilities and limitations. No longer would we make uninformed decisions with long-term, often lifelong, consequences. With AEI, we could all learn to appreciate art and literature, because we would no longer be impeded by creative inhibition. Some look to AEI to guide the human race to a happier, more peaceful, and compassionate future.

We are poised on the precipice of something that is exciting and frightening in equal parts. Currently, we enjoy the convenience and helpfulness that basic artificial intelligence brings to our lives. Each day, it seems, some new use for AI captivates our attention and pushes boundaries previously thought impossible. These innovations now come so often that, for most of us, it isn’t hard to imagine a future in which we live alongside intelligent robots.

But is that where AI is headed? The divergent theoretical paths AI may take open up far more questions about the future than they offer answers. Are we obliged, for humanity’s sake, to consider both possibilities with equal scrutiny? Do we even get a vote? One thing is for sure: a time is fast approaching when these questions will be answered — for better or for worse.