Isaac Asimov’s Three Laws of Robotics

April 27, 2009
By Alan Baxter
I picked this little gem up from SF Signal, one of my all-time favourite blogs. If you’re a sci-fi fan, you should definitely add them to your RSS feed. This particular piece is something that has always fascinated me for its brevity and completeness. Very few things are truly brief and complete, but Asimov nailed robot laws with this one. Here’s a young Asimov explaining his laws:

Edit: If you can’t see the video below, you can see it here on SF Signal or here at YouTube.

The Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
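
As an aside, the Laws read like a strict priority ordering, so just for fun here’s a toy sketch of what that ordering might look like as code. Everything in it (the Action flags, the evaluate function) is invented purely for illustration; it’s a thought experiment, not how anyone actually builds robots.

    # A toy model of the Three Laws as a strict priority ordering.
    # All names here (Action, evaluate) are made up for illustration --
    # this is nothing like a real robot control system.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would doing this injure a human?
        inaction_harms_human: bool  # would *not* doing this let a human come to harm?
        ordered_by_human: bool      # did a human order this action?
        harms_robot: bool           # would doing this damage the robot?

    def evaluate(action: Action) -> bool:
        """Return True if the robot may (or must) perform the action,
        checking the Laws strictly in priority order."""
        # First Law: never injure a human...
        if action.harms_human:
            return False
        # ...and never allow harm to a human through inaction.
        if action.inaction_harms_human:
            return True  # compelled to act; overrides everything below

        # Second Law: obey human orders (a First Law conflict was caught above).
        if action.ordered_by_human:
            return True  # even a self-destructive order -- Law 2 outranks Law 3

        # Third Law: otherwise, protect its own existence.
        return not action.harms_robot

    # Example: a human orders the robot to destroy itself. No human is harmed,
    # so the Second Law applies and the Third Law loses -- the robot complies.
    order = Action(harms_human=False, inaction_harms_human=False,
                   ordered_by_human=True, harms_robot=True)
    print(evaluate(order))  # True

The neat trick is in the ordering: each Law only gets a say if every Law above it is satisfied, which is exactly the “except where such orders would conflict” wording above.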

For all those people who think the Skynet is falling when it comes to robots and the machines taking over the world, this is some small comfort. Of course, truly self-aware robots would happily break rules as easily and regularly as we do, but I guess that’s the perceived difference between social rules and hard programming.

Regardless, these rules began in fiction (I, Robot being the primary example), but they also carry over into real-life robotics. Any sufficiently advanced robot that gets developed will have these laws programmed in. And that’s a very cool thing. The rules first appeared in Asimov’s short story Runaround in 1942. (Incidentally, Asimov also coined the term robotics in 1941.)


Over time, a fourth and a fifth law have been added by others. I like to think of these as the Blade Runner Addenda:

In 1974, Lyuben Dilov’s novel Icarus’s Way added the Fourth Law:

A robot must establish its identity as a robot in all cases.

Nikola Kesarovski, in his short story The Fifth Law of Robotics, added the Fifth Law:

A robot must know it is a robot.

You can see why I think of these as Blade Runner laws. Another Fourth Law appeared in the 1989 tribute anthology Foundation’s Friends, in Harry Harrison’s story The Fourth Law of Robotics, where a robot rights activist attempts to liberate robots by adding a Fourth Law: “A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law.” The robots build new robots, who see their creator robots as parental figures, which is all a bit weird. Like all the other robot stuff isn’t weird…

Anyway, this post is a bit of a non sequitur, but I’m a big fan of the concept of robots, so I love this stuff. Blade Runner is still my all-time favourite film, for example. So if you’re a sci-fi writer and you like to play with robots, don’t forget Isaac Asimov and his groundbreaking ideas.


18 Responses to Isaac Asimov’s Three Laws of Robotics

  1. Dianne on April 27, 2009 at 7:11 pm

    I actually borrowed Foundation’s Friends from the school library when I was in 10th grade! That brings back memories… ::heart::

  2. nina on April 27, 2009 at 7:27 pm

    Yes, but aren’t the rules Karel Čapek’s, from Rossum’s Universal Robots? Čapek also coined the term ‘robot’… so how’s that for prescient?

  3. alan on April 27, 2009 at 7:31 pm

    Dianne – Foundation stuff is just eternally brilliant.

    nina – The word robot comes from robota, meaning serf labour, drudgery or hard work in Czech, Slovak and Polish, and was indeed coined by Čapek. But Asimov wrote the Three Laws. Having not read Rossum’s Universal Robots, I couldn’t tell you if the Laws were hinted at there. Perhaps Asimov took his inspiration from Čapek? Anyone?

  4. Michael on April 28, 2009 at 1:08 am

    Of course, truly self aware robots would happily break rules as easily and regularly as we do

    Why? We have plenty of hard-coded rules that we wouldn’t be able to break. E.g. reflexes: no matter how much someone might want to dispense with the “establishment” thing of jerking their leg when their knee is struck, they can’t. A self-aware robot with the three laws hard-coded would probably experience the same thing.

  5. Graham on April 28, 2009 at 2:01 am

    Asimov’s laws are almost perfect, but there are still problems. What happens when your housework robot accidentally kills your cat, because cats are not covered by the First Law?

    Some bright spark will inevitably look at robots for military applications and will rework the First Law to read ‘friend’ instead of ‘human’. You then have a robot making judgement calls on what is ‘friend’ and what is ‘foe’, all based on a human definition that is likely to be opinionated and flawed.

    Should we ever die at the hands of robots, we’ll have no one to blame but ourselves.

  6. alan on April 28, 2009 at 4:39 am

    Michael – I suppose that’s the difference between a physical imperative and a mental instruction. Regardless of programming, obeying the robotic Laws is still decision-based. Can programmed code like the Laws be as fixed as a reflexive physical imperative?

    Graham – I’m now imagining robots torturing cats while they struggle to come to terms with the Laws protecting humans! You’re right that anything that happens will be our fault.

  7. Michael on April 29, 2009 at 3:51 am

    I definitely think the 3 laws can be fixed as a physical imperative. Another analogy might be fire — if you were on fire you’d have an immediate imperative to roll around, jump in the water or whatever. Here it’s even stronger, because it would be a conscious thing, but you’d be absolutely unable to function until you stopped it (being on fire, that is).

    So if the Asimov universe comes true, the robot might have the same feeling when seeing a human in harm’s way as when we’re on fire, and the only way to stop it would be to help the person.

  8. alan on April 29, 2009 at 2:44 pm

    Interesting perspective. Interesting as well that you say the robot “might have the same feeling”.

    But I wonder if it’s possible? A person on fire will be compelled to put themselves out: it’s a threat to their life, driven by intense pain that’s unbearable until stopped, and by almost certain death. But there have been cases of people deliberately killing themselves that way – my Rage Against The Machine album covers have taught me that much. So it is possible to overcome the physical imperative there.

    Is it possible to program something like that into a robot that can’t feel pain and whose life isn’t threatened by the death of a human?

  9. Michael on April 29, 2009 at 6:38 pm

    Not to get too philosophy-of-mind here, but there are already hundreds of billions of robots that we know of who’ve been programmed to feel pain. Six billion of them have a pretty advanced version of pain that the simpler robots lack (e.g. shame as a complex pain that’s useful for avoiding social disasters).

  10. alan on April 29, 2009 at 9:04 pm

    Getting a bit cerebral now! Are you referring to people as programmed robots here?

  11. Michael on April 30, 2009 at 3:36 am

    Well, I was getting kinda obvious!

    Btw, comment 11 is from an auto-generated spam blog, so you should probably delete it.

  12. [...] Robot Laws are bullshit By alan You may remember a little while ago I posted a video link of Isaac Asimov listing his three Robot Laws. An interesting discussion followed in the comments on that [...]

  13. Henry on January 15, 2011 at 3:29 pm

    I read I, Robot when I was 8 years old, and Asimov was one of the first authors I read, which resulted in my love of science fiction!

  14. HISTORICAL PERSPECTIVES OF ROBOTICS on August 15, 2011 at 12:00 am

    [...] alan. ” Isaac Asimov’s Three Laws of Robotics – The Word – According To Me | The Word.” The Word – According To Me | The Word | Words, Stories, Myths & Opinion. N.p., n.d. Web. 14 Aug. 2011. http://www.alanbaxteronline.com/2009/04/27/isaac-asimovs-laws-robotics.html [...]

  15. Keith on February 24, 2012 at 6:02 am

    I have not seen anyone actually discuss the laws so much as how they feel about what is being said. The difference here is the actual written word vs. how they may feel about what they think is being said.

    I know this point may look like splitting hairs. Much like the actual definition of a word vs. how that word may make us feel. Each can have a completely different meaning.

    I am proposing that the same thing is happening here, with not just one word but with each statement.

    I have looked at a few different sites about Asimov’s Three Laws and seen that none of the discussions actually talked about what the laws stated.

    Granted, the laws were not actually stated in any of Asimov’s books but were implied, as if the robots were following these laws.

    The mistake I see in the laws is Law 3. The robot is not to allow harm to come to itself unless it conflicts with Laws 1 & 2. But Law 2 says the robot must follow all commands by humans unless they conflict with Law 1.

    This means it’s more important for a robot to follow a command than it is for the robot’s self-preservation. As long as the robot is not harming a human by action or inaction, the robot must follow the command.

    This means that if the robot is told to harm itself, the robot must do it, since Law 2 takes priority over Law 3.

    The point here is that if we are to look at robots as being more human, this law makes them more machine-like. After all, there may be some rare situations where people may harm themselves if ordered to, but for the most part we have a biological drive for self-preservation in spite of such orders. Here the robot does not.

  16. alan on February 24, 2012 at 10:32 am

    Keith – that’s actually the point. To prevent robots becoming self-aware and trying to take over, humans always have the upper hand as the robot is programmed to obey if a human says, “Kill yourself.”

    It’s the failsafe Skynet never had. :)

  17. Keith on February 27, 2012 at 9:18 am

    True. Though if you recall, in one of Asimov’s books, people were living on a mining planet and sent a robot to retrieve a vital element to save their lives. The robot was conflicted, because once it got close to the element it was putting itself in danger. It was not until a man put his life in danger that the robot was able to overcome its programming.

    Granted, I have not read the book, so this is second-hand. I guess the robot did not know the element would save the miners’ lives, and it was not until one human was in immediate danger that the robot was able to overcome its programming. Though part of me thinks it’s very likely the robot could have known the element was needed to save lives.

    Hence, a possible conflict.

  18. alan on February 27, 2012 at 12:00 pm

    Interesting – I haven’t read that one, so can’t comment on details. But if the robot didn’t know the element would save humans, it could be conflicted. If it knew the element would save humans, its programming should have sent it on. So possibly a narrative mistake, rather than a rules one!
