
Stopping AI, an ethical case

by 308

Definitions

The Moral Problem

While humans reign supreme on planet Earth, most people might not even know that there is speculation of human extinction within the next few decades. As much as we would like humans to remain superior to every other species around, that cannot last forever. Eventually another species would emerge with an intellect far superior to that of Homo sapiens (as ours is to that of chimpanzees), and it might become the tipping point for human extinction. But since natural evolution is a very slow process, it is not what we are most worried about right now. What we fear is the potential rise of an Artificially Super-Intelligent enemy that might want to take over the world and eradicate all humans in the process. (More specifically, it is Quality Superintelligence that we fear the most, as it might do things that humans don’t even understand, let alone be able to stop if anything goes wrong.)

So how do we solve this problem? Is it even a problem? Should we stop funding AI research, since that might prevent the onset of superintelligence? Can we live with dumb AIs? Could we stop the AI revolution even if we wanted to? I would like to tackle these questions in this post.

The Dumb AI paradox

One of the simplest ideas that comes to mind when thinking about stopping the AI revolution is to stop funding AI research. Keep AI as it is, treat it as a resource, and just focus on the applications it can already provide to the human community. Prima facie, this looks good, right?

The Thought Experiment

Let me introduce a thought experiment that highlights the dumb AI paradox: suppose you want to hire a maid to clean your home. You are given the choice between a human maid and a robot (AI) maid. The robot maid is far more efficient than the human maid, i.e. it makes fewer mistakes in cleaning the room and demands less money. On the other hand, the robot maid is only good at cleaning the room (refer to the definition of Artificial Narrow Intelligence) and cannot satisfy your occasional cravings for a cup of tea, while the human maid can. What do you think is the logical choice here?

Many factors are at play here, including your ability to make tea for yourself. But if you want to hire the maid only for house-cleaning purposes, the robot maid looks like the obvious choice, right? You might even think that anyone who does not pick the robot is making the wrong choice. What if I told you that we make this “wrong” choice every day?

The “Wrong” Choice?

Let us say that instead of a maid, you are looking for a driver. You can either buy an AI-powered autonomous car or hire a driver for your car. How much better do you think the autonomous car needs to perform than the driver before you switch your choice from driver to AI? Let us explore some empirical data to understand this.

For a significant part of this discussion, I am going to use the example of Tesla’s driverless/autonomous cars. Let’s find an answer to the question, “How good does an AI-powered self-driving car need to be before it is commonly used (or trusted) by humans?” Let’s look at it from the utilitarian perspective. What could be the utility of a driver? I think the foremost utility of a car driver is not to drive the car into accidents. Therefore, if the AI driver’s utility is higher than the human driver’s, the AI driver becomes the obvious choice from the utilitarian perspective. But we all know that Tesla has been called out by the media multiple times[3][4] for its cars causing accidents while in Autopilot mode. Hence, it only makes sense that its utility is less than the human driver’s, right?

Let us define the Death to Sell ratio (DSR) of a vehicle to be the ratio of fatal accidents that happened in that kind of car to the number of such cars sold. We will use DSR as a metric to measure how prone human drivers and AI drivers are to fatal accidents, which serves our purpose of measuring the utility of the different kinds of drivers. In the case of human drivers, approximately 63.7 million cars were sold in the year 2020[5] (assuming the number of human drivers to be approximately the same as the number of cars sold), and approximately 1.35 million people die every year from road traffic crashes[6]. Therefore, the DSR for team human drivers comes out to approximately 0.0212. On the other hand, there were around 38 deaths[7] associated with the 499,000 Tesla cars sold[8] in 2020, which makes the DSR for team Tesla around 0.000076.
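
To make the arithmetic explicit, here is a minimal sketch of the DSR calculation in Python, using the approximate 2020 figures quoted above (the dsr helper is just an illustrative name, not something from an existing library):

```python
def dsr(fatal_accidents: float, cars_sold: float) -> float:
    """Death to Sell ratio: fatal accidents per car sold."""
    return fatal_accidents / cars_sold

# Approximate 2020 figures quoted above
human_dsr = dsr(fatal_accidents=1_350_000, cars_sold=63_700_000)  # ~0.0212
tesla_dsr = dsr(fatal_accidents=38, cars_sold=499_000)            # ~0.000076

print(f"DSR, human drivers: {human_dsr:.4f}")
print(f"DSR, Tesla:         {tesla_dsr:.6f}")
print(f"Human DSR is roughly {human_dsr / tesla_dsr:.0f}x higher")
```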

But why?

This is not what we would expect from the media thrashing of Tesla: by this measure, human drivers are certainly more “deadly” (to borrow the language of the media articles about Tesla) than an AI-powered driverless car. From the above argument, we can deduce that AI drivers have much more utility than human drivers. Why, then, don’t we see as many media articles calling humans “deadly” drivers as we do for AI drivers?

I believe that the answer lies in human psychology and something related to the fundamental attribution error (I have written “something related to” because I am not really sure we can extend the idea of the fundamental attribution error this far). “The fundamental attribution error refers to an individual’s tendency to attribute another’s actions to their character or personality, while attributing their behaviour to external situational factors outside of their control.”[9] Extending the idea, we think of other humans (human drivers) as our own kind and tend to overlook their mistakes, while we magnify the mistakes made by AI drivers, whom we see as “something other than our own kind”. So as long as companies like Tesla don’t improve their AI, they are going to stay in the spotlight for causing deadly car crashes, even if they produce cars that have far, far more utility than human drivers. And under our earlier proposal of not improving the AI and just using it for applications, such companies cannot survive. In the dearth of companies that produce AI applications, AI won’t survive. And without AI, human technological progress will slow to a great degree.

Therefore, for the benefit of the human race, curtailing AI research and living with “dumb AIs” is not an option. We must keep researching AI, and hence keep improving it.

The other side

The other reason one might not want an AI car is that AI cars cannot take the blame after an accident. It is still an open question as to whom to blame when an AI car has an accident: the owner of the car? The owner of the company? Or the car itself? But I feel that this kind of argument carries little weight when you compare the number of lives lost in car accidents in the two cases. Extending our DSR argument, if all 63.7 million cars sold in 2020 had Tesla’s DSR, only around 4,850 deaths would have occurred, as compared to the 1.35 million deaths with human drivers.
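
As a quick sanity check on that projection, here is a small sketch (the variable names are just illustrative) that scales Tesla’s DSR up to the size of the 2020 fleet:

```python
# Rough projection: 2020 road deaths if every car sold had Tesla's DSR
tesla_dsr = 38 / 499_000        # ~0.000076 fatal accidents per car sold
fleet_sold_2020 = 63_700_000    # approximate cars sold worldwide in 2020

projected_deaths = tesla_dsr * fleet_sold_2020
print(f"Projected deaths at Tesla's DSR: {projected_deaths:,.0f}")  # ~4,850
print("Actual road traffic deaths in 2020: ~1,350,000")
```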

From Evolutionary Perspective

“Of all species that have existed on Earth, 99.9 percent are now extinct.”[10] Following the trend, it is expected that humans (Homo sapiens) will go extinct too, sooner or later, if left to the natural course. We know that the onset of smart AI would be the single most significant event in the course of human history and may lead either to human extinction or to human immortality (not individual immortality, but immortality as a species). Since extinction seems almost inevitable otherwise, the only chance we have of not perishing is to develop AI that makes the human species immortal.

Let’s see if limiting the capabilities of AI makes sense from the evolutionary perspective:

Defining Evolution

Since evolution is a completely natural process, it might not make sense at first to compare the development of AI by humans (a completely non-natural or artificial process) to the process of evolution; hence the need for this paragraph. Since the comparison between the development of AI and evolution has been made many times before, I am just borrowing the definition that is commonly used in this context[11]. We would like to strip off every aspect of evolution except intelligence. For example, worms are certainly more intelligent than, say, amoebae (in the sense that it would never occur to an amoeba to dig a hole into the ground and live there), chimps are more intelligent than worms (for example, chimp communication skills are unfathomable to worms), and humans are more intelligent than chimps (in the sense that chimps can’t understand man-made skyscrapers). Obviously, there is a physical aspect to evolution too; humans couldn’t have built skyscrapers if we had the body of, say, a worm. But we don’t care about that physical aspect right now. Therefore, for our purposes, evolution is a process in which new species overtake the old ones in terms of intelligence.

The Evolution Argument

Now that we have laid down a basic definition of evolution, let us continue with our argument. ASI (Artificial Superintelligence) is to humans what we are to chimpanzees in terms of evolution: a more intelligent (or more evolved) species. And as the empirical evidence suggests, the more intelligent you are (as a species), the more you dominate planet Earth. This means that the rise of ASI might lead to the downfall of human dominance on Earth, and we might be reduced to a secondary species like the chimpanzees. So, does this mean that we should stop AI research so that we never reach the point of singularity?

Let’s look at this problem from the point of view of a chimp (or the chimpanzee species as a whole), or any other species that has been superseded by another species in the process of evolution. What if chimpanzees had known that humans were going to outplay them, dominate and colonize the whole world, and possibly leave the entire chimpanzee race at their mercy? Would it have been ethical for those chimpanzees to try to curtail the reproductive possibilities of the most intelligent members of their own species in order to hinder the process of them evolving into humans?

If stopping other (more intelligent) species from emerging and dominating were ethical, evolution would never have reached humans, because every species would have tried to stop evolution by killing off its most intelligent members. The fact that the process of evolution has reached us is in itself evidence that hindering evolution to maintain one’s dominance is unethical. Hence, we should not limit AI research in order to cap the capabilities of AI and maintain our dominance on planet Earth.

Conclusions and Final Remarks

From the previous two arguments, we can say that it is not ethical to limit AI research. But, as the equation defining the world has too many variables, we can’t say anything for sure. This leaves us room to ask much deeper questions, like: Is making a human-like AI even possible? Are we (the living creatures) intrinsically gifted with the sense to experience things? Why does cold water (some molecules vibrating at a particular speed) feel cold when it touches our tongue (a bunch of atoms aligned in a particular way)? Can a robot have this experience too? Every organism in the course of evolution has become intelligent through its experiences. So, if AI can’t experience things, it is highly unlikely that it will be able to become intelligent.

This brings us to the famous mind-body problem. If each of our experiences maps to some kind of electrical impulse in the brain, our mind is a part of our body. Otherwise, if the experience of seeing yellow can’t be described in terms of electrical impulses in our brain, then the mind is something different, detached from our body. If the mind is a part of the body, we have some hope of creating an intelligent AI, since we then have an example of hardware (the brain) that can convert experience into intelligence. Otherwise, there might not even be a possibility of an intelligent AI.

References

[1] “AlphaGo | DeepMind.” https://deepmind.com/research/alphago/.

[2] “The Artificial Intelligence Revolution: Part 2 - Wait But Why.” 27 Jan. 2015, https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html.

[3] “Tesla’s Autopilot keeps causing its cars to crash - Vox.” 26 Feb. 2020, https://www.vox.com/recode/2020/2/26/21154502/tesla-autopilot-fatal-crashes.

[4] “Tesla’s Self-Driving Autopilot Was Turned On In … - WIRED.” https://www.wired.com/story/tesla-autopilot-self-driving-crash-california/.

[5] “Global car sales 2010-2020 | Statista.” 05 Feb. 2021, https://www.statista.com/statistics/200002/international-car-sales-since-1990/.

[6] “Road traffic injuries - World Health Organization.” https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries.

[7] “Every Tesla Accident Resulting in Death | Tesla Deaths.” https://www.tesladeaths.com/.

[8] “Tesla deliveries by quarter 2020 | Statista.” 04 Jan. 2021, https://www.statista.com/statistics/502208/tesla-quarterly-vehicle-deliveries/.

[9] “Fundamental Attribution Error: What It Is & How to Avoid It.” https://online.hbs.edu/blog/post/the-fundamental-attribution-error?sf55808584=1.

[10] “Evolution: Extinction: A Modern Mass Extinction?.” https://www.pbs.org/wgbh/evolution/extinction/massext/statement_03.html.

[11] “The Artificial Intelligence Revolution: Part 1 - Wait But Why.” 22 Jan. 2015, https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.