Isaac Asimov’s Three Laws of Robotics

I picked this little gem up from S F Signal, one of my all-time favourite blogs. If you’re a sci-fi fan you should definitely add it to your RSS feed. This particular piece is something that has always fascinated me, for its brevity and completeness. Very few things are truly brief and complete, but Asimov nailed robot laws with this one. Here’s a young Asimov explaining his laws:

Edit: If you can’t see the vid below, you can see it here on S F Signal or here at YouTube.

The Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

For all those people who think the Skynet is falling when it comes to robots and the machines taking over the world, this is some small comfort. Of course, truly self-aware robots would happily break rules as easily and regularly as we do, but I guess that’s the perceived difference between social rules and hard programming.

Regardless, these rules began as fiction (I, Robot being the primary example), but they also carry over into real-life robotics. Any sufficiently advanced robot we develop will surely have these laws, or something very like them, programmed in. And that’s a very cool thing. The rules first appeared in Asimov’s short story Runaround in 1942. (Incidentally, Asimov also coined the term robotics in 1941.)
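
If you’re wondering what “programmed in” might actually look like, here’s a rough toy sketch of my own in Python (nothing to do with Asimov’s positronic brains, and certainly not how a real robot would be built). It assumes you could somehow label every candidate action with the Laws it would break, which is of course the genuinely hard part, and then simply picks the action that violates the highest-priority Law the least:

    # Toy sketch only: assumes actions arrive pre-labelled with the Laws they break.
    from typing import NamedTuple, List

    class Candidate(NamedTuple):
        name: str
        violates_first: bool   # injures a human, or lets one come to harm
        violates_second: bool  # disobeys a human order
        violates_third: bool   # fails to protect the robot's own existence

    def choose(candidates: List[Candidate]) -> Candidate:
        # Tuple comparison gives lexicographic priority: a First Law violation
        # outranks any number of Second or Third Law ones, and so on down.
        return min(
            candidates,
            key=lambda c: (c.violates_first, c.violates_second, c.violates_third),
        )

    # Example: a human orders the robot to destroy itself.
    options = [
        Candidate("obey and self-destruct", False, False, True),
        Candidate("refuse and preserve itself", False, True, False),
    ]
    print(choose(options).name)  # -> obey and self-destruct

Run that and the robot dutifully self-destructs, because a Second Law violation always outranks a Third Law one. Which is exactly why, in Asimov’s universe, the humans always keep the upper hand.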

[Image: cover of I, Robot, featuring the story “Runaround”]

Over time, a fourth and a fifth law have been added by others. I like to think of these as the Blade Runner Addenda:

In 1974 Lyuben Dilov’s novel Icarus’s Way added the Fourth Law:

A robot must establish its identity as a robot in all cases.

Nikola Kesarovski, in his short story The Fifth Law of Robotics, added the Fifth Law:

A robot must know it is a robot.

You can see why I think of these as Blade Runner laws. Another Fourth Law appeared in the 1989 tribute anthology, Foundation’s Friends. Harry Harrison wrote a story called The Fourth Law of Robotics in which a robot rights activist attempts to liberate robots by adding a Fourth Law that states, “A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.” The robots build new robots who see their creator robots as parental figures, which is all a bit weird. Like all the other robot stuff isn’t weird…

Anyway, this post is a bit of a non sequitur, but I’m a big fan of the concept of robots so I love this stuff. Blade Runner is still my all-time favourite film, for example. So if you’re a sci-fi writer and you like to play with robots, don’t forget Isaac Asimov and his groundbreaking ideas.


18 thoughts on “Isaac Asimov’s Three Laws of Robotics”

  1. I actually borrowed Foundation’s Friends from the school library when I was in 10th grade! That brings back memories… ::heart::

  2. Yes, but aren’t the rules Karel Capek’s, from Rossum’s Universal Robots? Capek also coined the term ‘robot’… so how’s that for prescient?

  3. Dianne – Foundation stuff is just eternally brilliant.

    nina – The word robot comes from the word robota, meaning serf labor, “drudgery” or “hard work” in Czech, Slovak and Polish, and was indeed coined by Capek. But Asimov wrote the Three Laws. Having not read Rossum’s Universal Robots, I couldn’t tell you if the Laws were hinted at there. Perhaps Asimov took his inspiration from Capek? Anyone?

  4. Of course, truly self aware robots would happily break rules as easily and regularly as we do

    Why? We have plenty of hard-coded rules that we wouldn’t be able to break. E.g. reflexes: no matter how much someone might want to dispense with the “establishment” thing of jerking their leg when their knee is struck, they can’t. A self-aware robot with the three laws hard-coded would probably experience the same thing.

  5. Asimov’s laws are almost perfect, but there are still problems. What happens when your housework robot accidentally kills your cat because it is not covered by the first law?

    Some bright spark will inevitably look at robots for military applications and will rework the first law to read ‘friend’ instead of ‘human’. You then have a robot making judgement calls on what is ‘friend’ and what is ‘foe’, all based on a human definition that is likely to be opinionated and flawed.

    Should we ever die at the hands of robots, we’ll have no one to blame but ourselves.

  6. Michael – I suppose that’s the difference between a physical imperative and a mental instruction. Regardless of programming, obeying the robotic Laws is still decision-based. Can programmed code like the Laws be as fixed as a parasympathetic physical imperative?

    Graham – I’m now imagining robots torturing cats while they struggle to come to terms with the Laws protecting humans! You’re right that anything that happens will be our fault.

  7. I definitely think the 3 laws can be fixed as a physical imperative. Another analogy might be fire: if you were on fire you’d have an immediate imperative to roll around, jump in the water or whatever. Here it’s even stronger because it would be a conscious thing, but you’d be absolutely unable to function until you stopped it (being on fire, that is).

    So if the Asimov universe comes true, the robot might have the same feeling when seeing a human in harm’s way as we do when we’re on fire, and the only way to stop it would be to help the person.

  8. Interesting perspective. Interesting as well that you say “the robot might have the same feeling“.

    But I wonder if it’s possible? A person will be compelled to put themselves out if they’re on fire, as it’s a threat to their life: intense pain that is unbearable until stopped, and almost certain death. But there have been cases of people killing themselves that way – my Rage Against The Machine album covers have taught me that much. So it is possible to overcome the physical imperative there.

    Is it possible to program something like that into a robot that can’t feel pain and whose life isn’t threatened by the death of a human?

  9. Not to get too philosophy-of-mind here, but there are already hundreds of billions of robots that we know of who’ve been programmed to feel pain. Six billion of them have pretty advanced versions of pain that the simpler robots lack (e.g. shame as a complex pain that’s useful for avoiding social disasters).

  10. I have not seen anyone actually discuss the laws so much as how they feel about what is being said. The difference here is the actual written word vs. how they may feel about what they think is being said.

    I know this point may look like splitting hairs. Much like the actual definition of a word vs how that word may make us feel. Each can have a completely different meaning.

    I am proposing that the same thing is happening here, with not just one word but with each statement.

    I have looked at a few different sites about Asimov’s Three Laws and seen that none of the discussions actually talked about what the laws stated.

    Granted, the laws were not actually stated in any of Asimov’s books but were implied. As if the robots were following these laws.

    The mistake I see in the laws is law 3. The robot is not to allow harm to come to itself unless it conflicts with laws 1 & 2. But law 2 says the robot must follow all commands by humans unless it conflicts with law 1.

    This means it’s more important for a robot to follow a command than it is for the robot’s self-preservation. As long as the robot is not harming a human by action or inaction, the robot must follow the command.

    This means that if the robot is told to harm itself, the robot must do it, since law 2 takes priority over law 3.

    The point here is that if we are to look at robots as being more human, this law makes them more machine-like. After all, there may be some rare situations where people may harm themselves if ordered to, but for the most part we have a biological drive for self-preservation in spite of such orders. Here the robot does not.

  11. Keith – that’s actually the point. To prevent robots from becoming self-aware and trying to take over, humans always have the upper hand: the robot is programmed to obey if a human says, “Kill yourself.”

    It’s the failsafe Skynet never had. :)

  12. True. Though if you recall, in one of Asimov’s books people were living on a mining planet and they sent the robot to retrieve a vital element to save their lives. The robot was conflicted, because once it got close to the element it was putting itself in danger. It was not until a man put his own life in danger that the robot was able to overcome its programming.

    Granted, I have not read the book, so this is second-hand. I guess the robot did not know the element would save the miners’ lives, and it was not until one human was in immediate danger that the robot was able to overcome its programming. Though part of me thinks it’s very likely the robot could have known the element was needed to save lives.

    Hence, a possible conflict.

  13. Interesting – I haven’t read that one, so can’t comment on details. But if the robot didn’t know the element would save humans, it could be conflicted. If it knew the element would save humans, its programming should have sent it on. So possibly a narrative mistake, rather than a rules one!
