Asimov’s Laws of Robotics
In October 1941 the young American writer Isaac Asimov wrote the science fiction short story "Runaround", which was first published in the March 1942 issue of Astounding Science Fiction magazine. The story, later reprinted in the collections I, Robot (1950), The Complete Robot (1982), and Robot Visions (1990), featured the recurring characters Powell and Donovan and introduced his Three Laws of Robotics.
Asimov’s Three Laws of Robotics, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The cover of Astounding Science Fiction magazine, March 1942 (note how the poor man is squeezed into the rocket 🙂).
In his later fiction, in which robots had taken responsibility for the government of whole planets and human civilizations (a concept that first appeared in the 1950 short story "The Evitable Conflict"), Asimov added a fourth, or Zeroth, Law to precede the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
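The four Laws form a strict precedence hierarchy: a robot weighing two courses of action must prefer the one whose worst violation sits lowest in the ranking. As a minimal sketch (not anything from Asimov's fiction), the ordering can be modeled in Python with hypothetical violation flags compared lexicographically:

```python
from dataclasses import dataclass

# A hypothetical action, with flags for which Laws it would violate.
# These fields are illustrative placeholders, not from Asimov's stories.
@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law

def choose(actions):
    """Pick the action whose worst violation ranks lowest.

    Python compares the tuples lexicographically, so violating a
    higher-ranked Law always outweighs violating any lower one.
    """
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.endangers_self))

# Ordered to do something that would hurt someone, the robot refuses:
# disobedience (Second Law) is preferable to harming a human (First Law).
best = choose([
    Action("obey the order", harms_human=True),
    Action("refuse the order", disobeys_order=True),
])
print(best.name)  # refuse the order
```

The tuple comparison is what encodes the "except where such orders would conflict with the First Law" clauses: each lower Law yields to every Law above it.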
The Three Laws did not appear all at once, but over a period of time. Asimov wrote his first two robot stories, "Robbie" and "Reason", with no explicit mention of the Laws, though he assumed that robots would have certain inherent safeguards. "Liar!", his third robot story, makes the first mention of the First Law but not the other two. All three Laws finally appeared together in "Runaround".
In his short story "Evidence", published in the September 1946 issue of Astounding Science Fiction, Asimov lets his recurring character Dr. Susan Calvin expound a moral basis behind the Three Laws. Calvin points out that human beings are typically expected to refrain from harming other humans (except in times of extreme duress like war, or to save a greater number), which is equivalent to a robot's First Law. Likewise, society expects people to obey instructions from recognized authorities such as doctors and teachers, which parallels the Second Law. Lastly, people are typically expected to avoid harming themselves, which corresponds to the Third Law.
Asimov's Laws quickly became entwined with science fiction literature, but the author wrote that he should not receive credit for creating them, because the Laws are "obvious from the start, and everyone is aware of them subliminally. The Laws just never happened to be put into brief sentences until I managed to do the job. The Laws apply, as a matter of course, to every tool that human beings use."
Moreover, Asimov believed that, ideally, humans would also follow the Laws:
I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.
My answer is, "Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else."
—But when I say that, I always remember (sadly) that human beings are not always rational.