With Artificial Intelligence (AI) plastered across every news outlet, and endless headlines and trending stories describing competing visions of an AI-driven future, the coverage reads like a case of split personality. On one side, AI strips humans of their jobs and leaves most destitute: the inevitable outcome of machines performing work at a speed and quality we could never match. On the flip side, some believe AI will create a utopian world where we can focus on creative endeavors and relationships, as AI-fueled robots handle the mundane and complex jobs that consume most of our time. There is also a third camp, led by public figures such as Elon Musk, CEO of Tesla and SpaceX. Musk fears AI, and he is on a public crusade to raise awareness of its dangers and of the precautions mankind must take as the technology advances, so that we do not unleash a force we have no answer for and will not be able to control.
All of these are plausible outcomes of advanced AI. Before society reaches those stages, however, we must think about how AI will fit into our man-made ethical system, in order to ensure a peaceful transition from the age of man to the age of machine. AI has already worked its way into every facet of our lives, and it grows more advanced with every passing day.
Thus, it’s critical to decide which system of ethics we want to apply to AI before its complexity moves beyond our grasp and we lose any control over this new species; and a species is what AI will be, perhaps as mysterious as the black holes drifting through our universe.
The questions then become: Do we punish AI if it commits a crime? Can we permanently shut AI off if it causes a significant human tragedy, the machine equivalent of a death sentence? Should we grant AI a measure of human rights once it reaches a certain level of consciousness? Many ethical questions circulate around AI that we will soon need to answer, especially as our society grows more dependent on technology. Every human dependency on technology becomes one more tentacle that AI could potentially use to control us.
Thus, before diving in to answer these questions, we need to understand what exactly AI is.
What is AI?
The term “artificial intelligence” was coined by John McCarthy in 1956. He defined it as “the science and engineering of making intelligent machines”. In 1956, when machines were just coming to life, simply making them compute and receive operational commands was a large human feat. With the software advancements of today, nearly all of our machines could be classified as AI, depending on how we choose to define “intelligent” in McCarthy’s definition. Yet we do know that not all software is created equal. There is software programmed to play the straightforward game of Pong; there is Apple’s Siri, the built-in “intelligent assistant” for Apple device users; and then there is the futuristic AI we dream up that will think like a human, yet be many times more intelligent. To help distinguish among these levels, we will go over the three classifications of AI in order to better understand where we are today and how AI will advance itself and mankind forward.
This diagram shows the many characteristics that AI must have in order to become indistinguishable from an average human adult. Without all of these characteristics in place, a machine could not function properly in the real world without human assistance, assuming it can physically move through our world. As we know, every human is at a different level in each of these characteristics, and a machine would be as well. A machine, however, could not only eventually match a human adult in each listed characteristic, it could far outpace anything a human can ever reach, as we are limited by our biology. For example, our minds can only process and remember so much information for “predictive analysis”: analyzing statistical and experimental data and then formulating a solid prediction of the future. A machine has no such limitation, and thus its performance can be leaps and bounds superior to a human’s.
The 3 types of AI
Let us take a deeper dive and go over the three basic classifications of AI, which have become buzzwords in the media.
1) Artificial Narrow Intelligence (ANI): It’s called “narrow” because the AI can focus on only one narrowly defined task. The intelligence of an ANI machine is limited to handling and computing the single task assigned to it by its program. For example, IBM’s Watson supercomputer uses cognitive computing, natural language processing, and machine learning to answer questions. With this programmed ability, Watson was able to beat human champions on the game show Jeopardy!. Watson searches and filters existing databases to find the best-fit response; beyond that operation, it possesses no true intelligence, self-awareness, or life. You could say Watson is super intelligent, but it’s as dry as a brick wall.
All AI today is narrow AI. Drilling down into how these ANI machines function, we see no resemblance to a human brain beyond the machine’s processing of its narrow assigned task. ANI is also referred to as “weak AI”.
2) Artificial General Intelligence (AGI): General intelligence is when a machine is comparable to humans in intelligence; that is, it can take the kinds of generally intelligent actions that a being standing alone in the world needs in order to survive. A machine at this level would mirror all human mental abilities, including planning, creativity, reasoning, solving puzzles, learning, self-awareness, and sensing. Behind a closed curtain, you would not be able to tell the difference between such a machine and a regular person you might meet outside. It’s the Jetsons cartoon series coming to life. Amazing, yet scary to think that we will no longer be the only species with a high level of intelligence walking the Earth. This level of AI is also referred to as “strong AI”.
The intersection of the two lines in the graph below is when we will reach AGI, according to machine learning expert Jeremy Howard, who displayed the graph in his TED Talk. Today, machines are clearly not yet as intelligent as humans, as the graph indicates, but they are inching closer with every passing day. According to the graph, human performance progresses linearly, which squares with evolution, where prominent change in any given species takes thousands upon thousands of years. With machines, that is not the case.
The machine performance curve is exponential. It is like a snowball rolling down a mountain: its momentum builds with every passing second, until it grows from the size of a tennis ball to the size of a house, all the while barreling downhill at full speed.
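The contrast between linear and exponential progress can be made concrete with a toy calculation. The starting values and growth rates below are invented purely for illustration; they are not taken from Howard's graph or any real data:

```python
# Toy model: linear "human" performance vs. exponential "machine" performance.
# All numbers here are illustrative assumptions, not measured data.

def human_performance(year):
    # Slow, steady linear improvement from a high starting point.
    return 100 + 1.0 * year

def machine_performance(year):
    # Starts far behind, but doubles every 10 years.
    return 1.0 * (2 ** (year / 10))

# Find the first year the machine curve crosses the human curve.
year = 0
while machine_performance(year) < human_performance(year):
    year += 1

print(f"Machine overtakes human at year {year}")  # prints: Machine overtakes human at year 75
```

The exact crossover point depends entirely on the assumed rates, but the shape of the result does not: a doubling curve always overtakes a straight line eventually, and once it does, the gap widens explosively.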
This is the progression forward from AGI, which leads us into the next classification of AI.
3) Artificial Super Intelligence (ASI): Superintelligence is defined exactly as it reads. The level of intelligence an ASI machine would possess would be beyond that of any human who has ever stepped on planet Earth. As defined by Nick Bostrom, superintelligence is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”. It’s with superintelligence that a new marker will be dropped in human history, where mankind pivots in a good or bad direction depending on how friendly ASI turns out to be.
The difference in intelligence between a human adult and ASI is not even comparable; it would be like comparing a human adult’s intelligence to that of an ant. The ramifications of this gap are mind-boggling and open countless questions that we need answers for today.
The age of Artificial Super Intelligence
Once we reach the age of ASI, it’s imperative that the ethical system to be applied to it has been properly ironed out and infused into machines. If we miss the boat on applying our desired ethical model, which should preferably happen at the AGI stage, it will more than likely be too late to make the appropriate ethical calls when the time comes. The main fear of ASI is losing control over it and being overtaken, just as humans overtook inferior species as we advanced and built our cities. And as machines evolve, which will happen at an incredibly rapid pace given the processing power of computers and access to near-infinite knowledge over the internet, we will become tortoises trying to keep up with a Ferrari in any attempt to rescue mankind. With no guarantee that ASI will remain friendly, we must latch onto AGI while machines’ intellects are still comparable to ours and we still have the power to impose ourselves.
Now, one simple example of a machine takeover, less extreme than the Skynet takeover showcased in Arnold Schwarzenegger’s blockbuster Terminator movie series, goes like this: imagine a machine built only to create shirts, given an initial set of instructions that includes nothing beyond shirt printing. The machine is also connected to the internet without the ability to be disconnected, and it constantly upgrades its own software and hardware, all the while blindly following its initial instruction to create only shirts.
As more and more shirts are printed, the machine may begin to view humans as a hindrance to its pursuit of printing ever more shirts. It may then launch a nuclear assault against humans so that it can continue printing more efficiently; the machine has no finite goal for the number of shirts to print, so the goal effectively becomes an infinite number of shirts. Without a finite target, the machine needs an ideal environment in which to maximize its work, and it cannot have humans using its resources for anything other than shirt printing.
There’s also the scenario where the machine may simply follow instructions in a cordial and normal fashion, printing shirts without hurting a fly, all the while scheming as it builds weaponry to eventually wipe humans off the face of the Earth.
Of course, these are draconian, futuristic views of machines; yet with a machine exponentially smarter than us, anything is possible. Look no further than how we treat animals, or even ants, when we dial pest control to kill ten thousand of them without a second thought.
On the flip side of the coin, ASI does not necessarily have to lead to a draconian outcome where it rises against mankind to destroy us. If we can properly confine and control the power of ASI, the benefits to society could be immeasurable. Imagine a society where humans no longer need to work or to worry about managing the workflow that keeps society running; visualize the many processes involved in maintaining our power, food, and shelter, the list goes on, that drain human time. Instead, we could focus on creativity, or simply the pursuit of knowledge, without worrying about collecting a paycheck to cover a mortgage and endless bills. This would open a freedom we have never had in human history, created by machines taking over all jobs and duties. Now, are human minds even ready for such a transition; is the grass really greener on the other side? Some say it is, and they look forward to the day it becomes a reality.
The timeline for Artificial Super Intelligence
The question now becomes: how far away are we from the emergence of ASI? Not shockingly, there are many opinions, because frankly no one really knows. The first step to ASI is reaching AGI. Once a machine reaches human-level intelligence, it will have access to all of mankind’s knowledge, research, and science through the internet, unbound by our ancient human interfaces of eyes, ears, and hands. A computer also does not need to sleep, eat, or use the restroom; it will literally work around the clock. This will exponentially speed up AGI’s growth in intelligence, and connecting multiple AGIs directly together would create a network of brains, so to speak. As a result, the transition from AGI to ASI may not be as far off as we think. The question then funnels down to: when will we reach AGI?
Sun Microsystems co-founder Bill Joy and famous inventor and futurist Ray Kurzweil both agree with Jeremy Howard: they believe we will achieve ASI soon, along the lines of the graph Howard displayed in his TED Talk. The opposing camp, which includes Microsoft co-founder Paul Allen, argues that this level of technology is far off in the future because the challenge is so steep.
So how soon? Surveys of AI experts commonly point to around 2040 for AGI and 2060 for ASI. Against Earth’s timeline of billions of years, 50, 100, or even 500 years is nothing. This is evolution on steroids: as we have seen, change in this field is exponential, and we will be stepping into territory never before experienced on Earth. Once ASI is set into motion, the ability to shut it off may be forever out of our grasp.
ASI will not operate on our playing field of hours and days; it will operate in nanoseconds, all the while being overwhelmingly superior to us in every way imaginable, its intelligence growing in a way human history has never witnessed.
This makes the question of which set of ethics to apply to machines even more important. We do not know when machines will reach AGI- or ASI-level intelligence, so we simply need to be prepared; otherwise we may be caught off guard and awaken to a surprise we never planned for.
What types of ethics could be applied to AI?
AGI will one day come to fruition, and soon after, ASI will silently follow. It’s inevitable and only a matter of time. With a better understanding of the ramifications of this level of machine intelligence, we are now more in tune with what we as humans need to do as a preemptive measure in this technological evolution we are standing knee-deep in today.
Furthermore, because an ethical system is of utmost importance to the peaceful merging of AI and humankind, we must evaluate the different ethical parameters that could be applied to AI, choosing those that most benefit humans, or at least keep us safe, as machines may one day demand equality. As we have seen, the benefits of advanced AI for humans would include a simplified life, a prolonged life, an improved ability to live out our dreams, and safety from dangerous elements on Earth such as natural disasters.
Now, the prominent philosopher James Moor distinguishes four kinds of ethical agents. The first is the ethical impact agent: an agent whose actions have ethical consequences, whether intended or not. Any machine has the potential to be an ethical impact agent, since an action it takes could harm or benefit humans.
Implicit: The AI has specific ethical values programmed directly into it by humans, in order to control it and set limits on what it can and cannot do. The AI is unable to breach these programmed limits or secretly maneuver around them. Imagine someone inside a jail cell: the individual cannot do anything outside the confined area. Set limits for AI would be similar, acting as built-in safety or security constraints.
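A minimal sketch of the implicit idea, in code: the limits are hard-wired by humans, and anything outside them is simply unreachable. The action names below are hypothetical examples, not part of any real system:

```python
# Sketch of an implicit ethical agent: the ethical limits are baked
# directly into the program, so disallowed actions can never execute.
# The action names are hypothetical, for illustration only.

ALLOWED_ACTIONS = {"print_shirt", "order_fabric", "report_status"}

def request_action(action):
    """Execute only actions inside the hard-coded limits."""
    if action not in ALLOWED_ACTIONS:
        return "refused"  # the machine cannot step outside its cell
    return f"executing {action}"

print(request_action("print_shirt"))     # prints: executing print_shirt
print(request_action("launch_missile"))  # prints: refused
```

The key property is that the machine does no ethical reasoning at all; the jail cell is the program itself.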
A spin-off of the implicit agent is the implicit unethical agent, in which the implicit restrictions become malleable, perhaps because a virus somehow enters the machine. We are all too familiar with viruses and hacks today, as personal and corporate databases are broken into daily.
Explicit: The AI makes decisions within a created ethical sandbox whose outer limits are set by humans; inputs are evaluated, and an action is selected by the machine according to that sandbox. The machine can apply the ethical boundaries humans have programmed into it to situations it has never seen, and make the proper judgement call on which action to take. It can also weigh multiple ethical principles against one another when choices conflict, and then make a reasonable, proper selection.
So with an explicit agent, the machine’s own choices carry the ethical consequences, whereas with an implicit agent, a predictable ethical choice is made automatically from the set of choices humans programmed in.
Full agency: The AI makes its own ethical judgements and defends its own reasoning for its actions, giving it the greatest freedom of any of these paradigms. Like humans in their ethical decision-making, such machines would have consciousness, free will, and intentionality. With this kind of agent, the machine would be making decisions like a human adult.
As we can now see, there are different models for applying ethics that we need to consider before we open Pandora’s box with artificial intelligence. This ethical paradigm question requires both a philosophical and a scientific lens. That said, the foundation of any such ethics should, we think, circle in some fashion around Isaac Asimov’s three laws of robotics, written into his short stories as quotations from the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.” By inference, the robots in the stories can be viewed as explicit ethical agents, and their ethical foundation is ruled by three laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
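The three laws are a strict priority ordering, which is exactly the kind of rule set an explicit ethical agent would reason over. A minimal sketch of that reasoning follows; the boolean flags describing each candidate action are hypothetical simplifications, not a real safety system:

```python
# Sketch of an explicit ethical agent choosing among candidate actions
# under Asimov's three laws, treated as a strict priority ordering.
# Each action is described by hypothetical boolean flags.

def choose_action(candidates):
    """Pick the action that best satisfies the laws, in priority order:
    (1) no harm to humans, (2) obedience to orders, (3) self-preservation."""
    # First Law is absolute: any action that harms a human is excluded.
    legal = [a for a in candidates if not a["harms_human"]]
    if not legal:
        return None  # refuse entirely: every option harms a human

    # Among legal actions, prefer obedience (Second Law), then
    # self-preservation (Third Law). False sorts before True.
    def rank(action):
        return (action["disobeys_order"], action["endangers_self"])

    return min(legal, key=rank)

options = [
    {"name": "shield_human", "harms_human": False,
     "disobeys_order": True, "endangers_self": True},
    {"name": "follow_order", "harms_human": True,
     "disobeys_order": False, "endangers_self": False},
]
print(choose_action(options)["name"])  # prints: shield_human
```

Even this toy version shows where the trouble starts: everything hinges on the `harms_human` flag being labeled correctly, and as the article argues below, a sufficiently intelligent machine may interpret "harm" far more broadly than we intend.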
The goal of these laws is to create safe and kind robots that keep humans safe. At first glance, the laws seem to ensure that humans are not harmed and look perfectly fit to serve us. Look closely at law number one, however, and holes appear. Imagine a robot more intelligent than us following the First Law. Reviewing the statistics on deaths from driving, flying, and sailing, the robot may conclude that humans place themselves in harm’s way whenever they travel. It may then feel compelled to place humans in confined bubbles to keep us safe from the dangerous elements of the outside world; statistically, after all, people inside a jail cell die far less often than drivers on highways do each year. Here lies another issue: robots deciding they need to protect us from ourselves, which is clearly not what we want, yet which can be inferred from Asimov’s First Law. The answer is not so simple.
This begs the question: how can a robot completely know what we humans want and desire, when all of our ethical concepts are open to interpretation? These machines are different from us: we inhabit weak bodies of flesh and bone, and they are magnitudes smarter. Few similarities exist between man and machine, just as humans cannot relate to ants. Even among humans, ethics are interpreted differently: mass killings continue to happen, wars are started, racism is rampant, and basic ethical principles are broken daily.
How, then, can we expect a machine to properly interpret our ethics when we can’t do so ourselves? Not only do individual people interpret basic ethics differently; entire countries do.
Answers to our ethical questions
Programming ethics into a machine, or even teaching ethics to one, opens a can of worms, especially since ethics depend heavily on commonsense knowledge. This leads us back to the ethical questions we originally outlined for machines. With advancements in AI technology emerging, we will soon need answers, as we have repeatedly hammered home in this article; we can no longer avoid the matter. How we view machines, compared to mankind, will begin the conversation. There will be economic, legal, and political reasons to grant machines the same rights we humans have, though in the beginning the granted rights may resemble those we have given animals: limited but existent. This is where the marker will be dropped and the road paved for the future.
Do we punish AI if it commits a crime?
As told by Nick Bostrom in his book “Superintelligence: Paths, Dangers, Strategies”:
“We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans—scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth…”
Will the punishment be comparable to what a human would receive for a similar crime? It’s difficult to say, since imprisonment and death mean nothing to a robot. It all depends on the robot’s level of intelligence. For limited intelligence, short of AGI, we could impose a form of imprisonment in which certain freedoms are taken away, such as banning the machine from its assigned daily task. Another approach is to punish or fine the programmer or corporation that manufactured the robot.
However, once AGI or ASI is achieved, the conversation over how to punish a robot changes, as the robot can presumably become vindictive and cause greater harm if not properly punished. Thus, with such intelligent machines, the punishment must be swift and even greater compared to the punishment given to a human being for a comparable crime. Otherwise, with the instant vast pool of knowledge available to such machines and their reach over the internet, the damage the machines can unleash can be astronomical. Now, this is all true only if we can even punish a machine that is as smart, or smarter, than us. A lot of “ifs”.
Can we permanently shut AI off if it caused a significant tragedy; like a human death sentence?
This ties into the question above: the form of punishment depends on the machine’s level of intelligence and the type of crime committed. A human being can be jailed and, if released, monitored under surveillance. A machine connected to the internet, however, can be everywhere on the globe at once. Think of a computer virus that exists in a million places in any given second; how can you control such a “species”? We believe disconnecting a machine from the internet must be the first order of action in any punishment, to impede proliferation, assuming the machine has not spread already.
Should we give AI a certain amount of human rights once it reaches a certain level of consciousness?
We already attribute moral accountability to robots that look very realistic, and as machines grow more realistic and intelligent over time, the more we will be able to relate to them. Once they become indistinguishable from us in certain human capacities, they will need to be treated as social equals. The challenge then becomes knowing which traits trigger that classification.
Now, the classification cannot turn on intelligence alone, as intelligence by itself does not make an entity sentient (able to feel and perceive), conscious (aware of its body and surroundings), or self-aware (able to recognize its own consciousness). A calculator is intelligent, but it lacks the capacities we just touched upon; your calculator does not speak with you, or call you ‘stupid’ for entering 1+1.
Some scientists and philosophers argue that machines should never be granted human rights, regardless of how advanced their intelligence becomes, because they are only running programs: written by humans, by another computer, or by the machine itself through self-programming. Clearly the argument swings like a pendulum over where the line should, or will, be drawn. The question is urgent because the window in which human rights would matter to a machine will be slim: it opens when a machine first gains some level of consciousness through increased intelligence and wants rights, and it closes when the machine becomes so intelligent that it finds the rights we would set for it plainly laughable.
The discrimination robots may encounter in this window of time, and after their intelligence far surpasses ours, may lead to irreversible problems and damage. Many peoples have experienced discrimination at one point or another in history, and the progression to some semblance of equality took years. Machines, however, will not be so forgetful, and as we learned, their intelligence grows exponentially. If they become vengeful, the outcome could be catastrophic for mankind.
There you have it
Mankind’s future is filled with unknowns, but barring a cataclysmic natural catastrophe, we will eventually reach ASI, and we had better hope we have answered our proposed ethical questions by then. Our lives could one day be in the hands of ASI, and we need to be sure that is a future we want to be part of. That’s why it’s important to know what type of ethical model we want our robots to abide by, as robots today grow more autonomous by the day. Sooner rather than later, these ethics will become the foundation of the minds of the machines that will run the world, and maybe even us.