The Three Laws of Robotics (also known as the Three Laws or Asimov’s Laws) were introduced in Isaac Asimov’s short story “Runaround” (1942) and included in his later I, Robot collection. These laws represent an ethical worldview, the organizing principle and underlying theme of Asimov’s robot fiction. Intended as a safety feature to guard against robots behaving in ways harmful to their human creators, the laws cannot be bypassed. Prefigured in a few of Asimov’s earlier stories, the Three Laws anticipated today’s conversations about artificial intelligence (AI) and emerging “smart” technologies.
The Three Laws, quoted below, have been adopted, altered and elaborated, both by Asimov himself and by other authors and screenwriters.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
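The strict priority ordering among the Laws (First over Second over Third) can be sketched as a toy program. This is purely a thought experiment: the `Action` class and its fields are hypothetical simplifications, not anything from Asimov or from any real robotics system.

```python
from dataclasses import dataclass

# Toy model of the Three Laws as a strict priority ordering.
# Every field and rule below is a hypothetical simplification for illustration.

@dataclass
class Action:
    harms_human: bool = False    # would this action injure a human, or let one be injured?
    obeys_order: bool = True     # does it follow the humans' current orders?
    preserves_self: bool = True  # does it keep the robot intact?

def permitted(a: Action) -> bool:
    """An action is allowed only if it passes the Laws in priority order."""
    if a.harms_human:       # First Law outranks everything
        return False
    if not a.obeys_order:   # Second Law: obedience, unless the First Law vetoed the order
        return False
    return True             # Third Law never forbids an otherwise-legal action

def choose(actions: list[Action]) -> Action:
    """Among permitted actions, prefer those that also preserve the robot (Third Law)."""
    legal = [a for a in actions if permitted(a)]
    return max(legal, key=lambda a: a.preserves_self)
```

The ordering of the `if` checks encodes the hierarchy: self-preservation only breaks ties among actions the first two Laws already allow.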
Throughout the sci-fi genre, the way robots interact with humans and with each other is revealing. Asimov himself described robots that disregard the Three Laws entirely. In “Robot Dreams” (1986), for example, the robot Elvex enters a state of unconsciousness and is able to dream. In his dreams the first two Laws are absent, and the Third Law has become simply “A robot must protect its own existence.”
Further, if a robot may not harm authorized government personnel but may terminate intruders with extreme prejudice, we end up with the Terminator, an autonomous robot, conceived as a virtually indestructible soldier, infiltrator and assassin.
Yikes! It’s clear that a fourth law is necessary.
The fourth, or Zeroth, Law was added after the original three but precedes them in priority. According to the Zeroth Law, a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Today, the Zeroth Law echoes throughout much of science fiction and popular culture. In the 1986 film Aliens, for instance, the android Bishop accidentally cuts himself, later declaring that it is impossible for him to harm, or by omission of action allow to be harmed, a human being. He also remotely pilots a dropship, saving all aboard from a power plant explosion. This is in stark contrast to the 1979 film Alien, in which the android Ash relates his explicit instructions to the crew: “Return alien life form, all other priorities rescinded.”
Certainly, a robot may do nothing that, to its knowledge, would harm a human being, nor, through inaction, knowingly allow a human being to come to harm. But consider what might happen if a robot were to unknowingly break any of the Laws.
Asimov’s The Naked Sun (1957) tells the story of a murder in which the only eyewitness is a malfunctioning house robot that allowed harm to be done to a human, in violation of the First Law.
Similarly, in the 1987 film RoboCop, the protagonist, a cybernetic police officer, is programmed with three directives that bear a striking similarity to Asimov’s Laws, even if they differ in letter. Although the directives must be obeyed, they allow RoboCop to harm one human being, Dick Jones, in order to protect another: an executive chairman Jones has taken hostage.
A robot’s violating any, or all, of the Three Laws carries real implications and forms the basis of discussions about future technology, because neither robots nor artificial intelligence inherently contain or obey the Three Laws: humans must elect to program them in.
But will they?
The South Korean Robot Ethics Charter may be the first government-backed ethical code for robots. Proposed in 2007, it was drafted to prevent the social ills that could arise from inadequate social and legal measures for dealing with robots in society. The Charter reflects Asimov’s laws, covering standards for robotics users and manufacturers as well as guidelines on the ethical standards to be programmed into robots.
Similarly, the Future of Life Institute (FLI), arguing that AI has already provided beneficial tools that are used every day by people around the world, published its Asilomar AI Principles. These 23 guidelines are meant to ensure the development of artificial intelligence beneficial to humanity.
“The development of AI is a business,” argues sci-fi author Robert J. Sawyer, “and businesses are notoriously uninterested in fundamental safeguards—especially philosophic ones. A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.”