
Isaac Asimov: The 4th Law of Robotics

By Bill Schmarzo, CTO, Dell EMC Services (aka “Dean of Big Data”) | August 23, 2017

I’m sure that many of you nerds have, like me, read the book “I, Robot.” “I, Robot” is the seminal book written by Isaac Asimov (actually it was a series of books, but I only read the one) that explores the moral and ethical challenges posed by a world dominated by robots.

But I read that book like 50 years ago, so the movie “I, Robot” with Will Smith is actually more relevant to me today. The movie does a nice job of discussing the ethical and moral challenges associated with a society where robots play such a dominant and crucial role in everyday life. Both the book and the movie revolve around the “Three Laws of Robotics,” which are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It’s like the “3 Commandments” of being a robot; adhere to these three laws and everything will be just fine. Unfortunately, that turned out not to be true (if 10 commandments cannot effectively govern humans, how do we expect just 3 to govern robots?).

There is a scene in the movie where Detective Spooner (played by Will Smith) is explaining to Doctor Calvin (who is responsible for giving robots human-like behaviors) why he distrusts and hates robots. He describes an incident where his police car crashed into another car and both cars were thrown into a cold, deep river – certain death for all occupants. However, a robot jumps into the water and decides to save Detective Spooner over a 10-year-old girl (Sarah) who was in the other car. Here is the dialogue between Detective Spooner and Doctor Calvin about the robot’s decision to save Detective Spooner instead of the girl:

Doctor Calvin: “The robot’s brain is a difference engine[1]. It’s reading vital signs, and it must have calculated that…”

Spooner: “It did…I was the logical choice to save. It calculated that I had a 45% chance of survival. Sarah had only an 11% chance. She was somebody’s baby. 11% is more than enough. A human being would have known that.”
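Purely as an illustration, here is a minimal sketch (my own Python; the percentages are the ones quoted in the dialogue, everything else is hypothetical) of the kind of “logical” rule the robot is applying: rank the people it could save by estimated survival probability and pick the highest, with no extra weight for the fact that one of them is a child.

```python
# Hypothetical sketch of the robot's purely probabilistic rescue rule.
# The survival probabilities are the ones quoted in the movie dialogue above.

def choose_rescue(candidates):
    """Pick the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda person: person["survival_probability"])

candidates = [
    {"name": "Detective Spooner", "survival_probability": 0.45},
    {"name": "Sarah", "survival_probability": 0.11},
]

print(choose_rescue(candidates)["name"])  # -> Detective Spooner
```

A human rescuer might instead weight a child’s life far more heavily than the raw odds suggest, which is exactly the gap Spooner is pointing at.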

I had a recent conversation via LinkedIn (see, not all social media conversations are full of fake news) with Fabio Ciucci, the Founder and CEO of Anfy srl located in Lucca, Tuscany, Italy, about artificial intelligence and questions of ethics. Fabio challenged me with the following scenario:

“Suppose in the world of autonomous cars, two kids suddenly run in front of an autonomous car with a single passenger, and the autonomous car (robot) is forced into a life-and-death decision or choice as to who to kill and who to spare (kids versus driver).”

What decision does the autonomous (robot) car make? It seems Isaac Asimov didn’t envision needing a law to govern robots in these sorts of life-and-death situations, where it isn’t the life of the robot versus the life of a human at stake, but a choice between the lives of multiple humans!

A number of surveys have been conducted to understand what to do in a situation where the autonomous car has to make a life-and-death decision between saving the driver and sparing the pedestrians. From the article “Will your driverless car be willing to kill you to save the lives of others?” we get the following:

“In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plow into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way: it minimized deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.”

While 76% is certainly not an overwhelming majority, there does seem to be a basis for creating a 4th Law of Robotics to govern these sorts of situations. But hold on: while in theory 76% favored saving the pedestrians over the driver, the sentiment changes when it involves YOU!

“When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.”

Seems that Mercedes has already made a decision about who to kill and who to spare. In the article “Why Mercedes’ Decision To Let Its Self-Driving Cars Kill Pedestrians Is Probably The Right Thing To Do”, Mercedes is programming its cars to save the driver and kill the pedestrians or another driver in these no-time-to-hesitate, life-and-death decisions. Riddle me this, Batman: will how the autonomous car is “programmed” to react in these life-or-death situations impact your decision to buy a particular brand of autonomous car?

Another study published in the journal “Science” (The social dilemma of autonomous vehicles) highlighted the ethical dilemmas self-driving car manufacturers face, and what people believed would be the correct course of action: kill or be killed. About 2,000 people were polled, and the majority believed that autonomous cars should always make the decision that causes the fewest fatalities. On the other hand, most people also said they would only buy one if it meant their own safety was a priority.
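To make the two positions in these surveys concrete, here is a small illustrative sketch (my own Python, with hypothetical names and numbers; it is not any manufacturer’s actual algorithm) contrasting a utilitarian “fewest deaths” rule with a “protect the passenger first” rule of the kind attributed to Mercedes.

```python
# Illustrative contrast of two decision policies for a forced life-and-death choice.
# All names and numbers are hypothetical; this is not any vendor's real logic.

def utilitarian_choice(options):
    """Pick the maneuver with the lowest expected number of fatalities."""
    return min(options, key=lambda o: o["expected_fatalities"])

def passenger_first_choice(options):
    """Prefer maneuvers that keep the passenger alive; among those, pick the
    least deadly. Fall back to the utilitarian rule only if the passenger
    dies in every option."""
    safe = [o for o in options if not o["passenger_dies"]]
    return utilitarian_choice(safe or options)

# Hypothetical scenario: swerve (the passenger dies) or stay on course (two kids die).
options = [
    {"name": "swerve off the road", "expected_fatalities": 1, "passenger_dies": True},
    {"name": "stay on course", "expected_fatalities": 2, "passenger_dies": False},
]

print(utilitarian_choice(options)["name"])      # -> swerve off the road
print(passenger_first_choice(options)["name"])  # -> stay on course
```

The survey results above amount to people approving of utilitarian_choice in the abstract while saying they would only buy a car running passenger_first_choice.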

4th Law of Robotics

Historically, the human/machine relationship was a master/slave relationship; we told the machine what to do and it did it. But today with artificial intelligence and machine learning, machines are becoming our equals in a growing number of tasks.

I understand that overall, autonomous vehicles are going to save lives…many lives. But there will be situations where these machines are going to be forced to make life-and-death decisions about which humans to save and which humans to kill. But where is the human empathy that understands that every situation is different? Human empathy must be engaged to make these types of morally challenging, life-and-death decisions. I’m not sure that even a 4th Law of Robotics is going to suffice.

 

[1] A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients.
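For the curious, the method of differences is easy to demonstrate: once the initial finite differences of a polynomial are known, every further value in the table can be produced with additions alone, which is the trick Babbage’s machine mechanized. A minimal sketch in Python, using an example polynomial of my own choosing:

```python
# A tiny "difference engine": tabulate p(x) = x**2 + x + 1 (an example polynomial,
# not from the article) using only additions, via the method of finite differences.

def difference_table(poly_values, steps):
    """Given enough initial values of a polynomial at x = 0, 1, 2, ...,
    extend the table by repeated addition of finite differences."""
    # Build the columns of differences from the seed values.
    diffs = [list(poly_values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])

    # The highest-order difference of a degree-n polynomial is constant, so new
    # values can be generated with additions alone, no multiplications needed.
    values = list(poly_values)
    tail = [col[-1] for col in diffs]           # rightmost entry of each column
    for _ in range(steps):
        for i in range(len(tail) - 2, -1, -1):  # add upward from the constant difference
            tail[i] += tail[i + 1]
        values.append(tail[0])
    return values

seed = [1, 3, 7]                  # p(0), p(1), p(2) for p(x) = x**2 + x + 1
print(difference_table(seed, 5))  # -> [1, 3, 7, 13, 21, 31, 43, 57]
```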


About Bill Schmarzo


CTO, Dell EMC Services (aka “Dean of Big Data”)

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Dell EMC’s Big Data Practice. As a CTO within Dell EMC’s 2,000+ person consulting organization, he works with organizations to identify where and how to start their big data journeys. He’s written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow where he teaches the “Big Data MBA” course. Bill also just completed a research paper on “Determining The Economic Value of Data”. Onalytica recently ranked Bill as the #4 Big Data Influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.



8 thoughts on “Isaac Asimov: The 4th Law of Robotics”

  1. I think that the question could be viewed in the light of “what would a human do”. This could, e.g., probably be measured by real-life accident analyses and surveys. When faced with a situation between the driver’s life and the pedestrian’s life, what do people do? I’ll bet most folks would sacrifice any number of pedestrians to save their own lives. Like they’d slam on the brakes and careen into a crowd rather than turn the steering wheel and drive off a cliff. I’m not sure it’s fair to expect an autonomous car to do any different. That would argue Mercedes has it right (I haven’t read that article yet – but plan to). Thanks for a thought-provoking post.

  2. It also has to do with accountability.
    What would a Driver do? Save himself, and/or try to avoid an accident.
    If we now know that a “Robot” car will try to protect the Driver (aka the person(s) in the car, as they are no longer “the Driver”) and hence itself, other cars that will do the same will automatically lead to the least damage…in most cases.

  3. And the discussion goes deeper and deeper. This was only about making a choice.
    What about who is to blame and who are we going to sue? I am sure relatives of the pedestrians will not accept the car’s choice if other options were open as well. Is it the manufacturer (“driver first”, hardware failure (brakes, steering mechanism) or software/algorithm error), the driver (he chose a driver-first car), etc.? And many other flavors and possibilities can be thought of (e.g. the age example from “I, Robot” stays relevant).
    It all comes back to the Trolley Problem, see also https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/
    Thanks for starting this discussion.

    • You’ve hit on what I think is the biggest issue with AVs: who is to blame. Any accident with an AV likely becomes a product liability issue (possibly class-action), where the manufacturer is to blame. We know lawyers aren’t paid to look at the greater good either (witness the lawsuits against lifeguards, which have caused some beaches to stop providing them altogether).

  4. In a more sophisticated scenario, the car would have access to a database of the precalculated impact each human death would cause. If, for example, the driver is the sole breadwinner of a family of five without financial reserves, it makes sense to spare this driver over a widowed parent of a toddler if the car is certain to kill both parent and toddler, or if the toddler will shortly be adopted by a new set of parents: it’s a very significant loss for four people against two lives that will have a comparatively small societal impact. The same applies to a young, healthy pedestrian in their 20s against a car with 5 centenarian, retired, childless passengers who, collectively, have an expected lifespan of 50 more person-years ahead of them. Or, if the 20-year-old is not healthy and, in fact, has a genetic disorder that will likely cause death before their 40s, it’s a logical choice to preserve the passengers.

    If machines are going to make life-and-death decisions, it’s our (and their) obligation to make such decisions using the best possible processes and information to minimize human suffering.