What are the Three Laws of Robotics?

Tricia Christensen

In 1942, beloved science fiction writer Isaac Asimov penned the short story “Runaround.” In it, scientists are puzzled by the bizarre behavior of a robot named Speedy, who cannot complete a task, even though he must obey humans, because the task endangers him. Instead of following his orders, Speedy sings snatches of Gilbert and Sullivan operettas and races around in circles. This comic story introduced the Three Laws of Robotics, rules memorized by generations of science fiction enthusiasts and familiar to an increasing number of scientists.

Within the fiction of Isaac Asimov, a robot may not harm a human, allow a human to come to harm, or disobey a human so long as the order does not counter the first and second rules.

The Three Laws of Robotics became a springboard for Asimov to explore situations in which the laws contradicted each other or broke down. His first few “robot” stories became many, and were later collected in book form as I, Robot. What is clear throughout Asimov’s work is that although the Three Laws of Robotics were meant to protect both intelligent robots and their human users, the laws contained loopholes and problems.

It would be easy to compare the Three Laws of Robotics to the Hippocratic Oath, since there are similarities. The laws listed below are quoted from “Runaround.”

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
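The three laws form a strict priority ordering: the First Law always outranks the Second, and the Second always outranks the Third. That ordering can be sketched as a toy decision function. This is purely illustrative; the class, field names, and yes/no logic are invented for this sketch and do not come from Asimov or from any real robotics system:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action for the robot. All fields are illustrative."""
    name: str
    harms_human: bool        # would directly injure a human being
    allows_human_harm: bool  # inaction that lets a human come to harm
    ordered_by_human: bool   # was this action commanded by a human?
    endangers_robot: bool    # risks the robot's own existence

def permitted(action: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders that survive the First Law check.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot

# An ordered task is permitted even when it endangers the robot,
# because the Second Law outranks the Third.
print(permitted(Action("fetch selenium", False, False, True, True)))
```

Note the simplification: this sketch returns a hard yes or no, whereas Asimov’s robots weigh the competing pressures of the laws against one another, which is exactly what produces Speedy’s indecision in the story.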

In Speedy’s case, his order to collect selenium is countermanded by the fact that doing so places him in grave danger. He tries to protect his own existence while also trying to obey orders. The result is strange behavior and a very comic story.

Throughout the short stories in I, Robot, and in Asimov’s follow-up novels The Caves of Steel, The Naked Sun, The Robots of Dawn, and Robots and Empire, Asimov continues to explore the inherent conflicts in obeying all three laws. He eventually adds a fourth, the Zeroth Law, which states that a robot may not harm humanity or, through inaction, allow humanity to come to harm.

Some people might wonder why short stories written as early as the 1940s would have any relevance in the present day. Like many science fiction writers, Asimov dreamed up what people would later build. Now that we have “smart” robots and machines of many types, serious discussion exists in the scientific community regarding laws needed to protect these expensive machines and, more importantly, to protect humans from them. Implementing the Three Laws of Robotics is not simple, and how laws for robots based on Asimov’s fiction might be applied in practice is a matter of great debate.

What must be remembered, of course, is that Asimov presents us with problems that result from the Three Laws of Robotics, and seldom a complete set of conclusions on how to counter the inherent inconsistencies in the laws. They are nevertheless a springboard for all who research or create robots today, and the laws may be worth knowing as we continue to advance into the field of robotics.


Tricia has a Literature degree from Sonoma State University and has been a frequent wiseGEEK contributor for many years. She is especially passionate about reading and writing, although her other interests include medicine, art, film, history, politics, ethics, and religion. Tricia lives in Northern California and is currently working on her first novel.
