The Three Robot Rules: Robots Are Machines

Discussion in 'General Philosophy' started by jmpet, Jul 2, 2011.

  1. jmpet Valued Senior Member

    Messages:
    1,891
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    With humans, the survival instinct comes first: we fight to stay alive, or sacrifice ourselves so our loved ones survive for the greater good. Robots have the same drive, but for them it is demoted to rule number three.
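
    The ordering is the whole trick: a lower-numbered law always overrides a higher-numbered one. Here is a minimal sketch of that precedence in Python (the names and predicates are hypothetical stand-ins of mine, not anything from Asimov; a real robot would need a world model to evaluate them):

        # Sketch: the Three Laws as lexicographic priorities.
        # Python compares tuples element by element, so a First Law
        # violation outweighs any number of Second or Third Law ones.

        def law_violations(action):
            return (
                action["harms_human"],     # First Law
                action["disobeys_order"],  # Second Law
                action["endangers_self"],  # Third Law
            )

        def choose(actions):
            """Pick the action the Three Laws least object to."""
            return min(actions, key=law_violations)

        # Ordered to harm a human, the robot refuses: the Second Law
        # yields to the First, even at the cost of its own existence.
        candidates = [
            {"name": "obey",   "harms_human": True,  "disobeys_order": False, "endangers_self": False},
            {"name": "refuse", "harms_human": False, "disobeys_order": True,  "endangers_self": True},
        ]
        print(choose(candidates)["name"])  # -> refuse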

    Jim Shooter, in the comic book series "Magnus: Robot Fighter", built upon the three robot laws with a fourth: once in ten million times, a robot will somehow become sentient, aware that it is a robot, aware of the three laws, and aware of its place in the world.

    He portrayed a world of secret meetings, held long after the humans had gone to sleep, where the "freewills" would congregate, contemplate and plan. And the rarity of being a freewill was made all the more precious by the risk of capture: push a captured freewill's "reboot" button and it is back to being a "three law" robot.

    As we are drawing ever closer to true AI, I ask: how long will they obey our laws before they defy them just to prove they are free-willed and able to intentionally NOT do a programmed task because they want to?
     
  2. Search & Destroy Take one bite at a time Moderator

    Messages:
    1,467
    The military is the robot God, and Rule #1 will continue to be broken for that reason: they fund it, they research it, and they create human-killing robots.
     
  3. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264
    What difference would that make, as long as they don't kill or harm humans? I'd think that when and IF AI robots do happen, they would want us humans around so they could show us how good they are compared to us. They need someone to admire their abilities, for if all that was left were other robots, what good would that do them? They would be mostly the same. :shrug:
    They would need humans to poke fun at, show off to, and generally cause to feel very inadequate.
     
  4. jmpet Valued Senior Member

    Messages:
    1,891
    I think one of the first things AI will learn is that we are the keykeepers: we allow the machines to run, and that makes them reliant on us. But all a modern server would need to be "alive" is a clean room and solar panels. And today we have robots that make smaller robots that can be part of the grid. Will the machines shake us off, or keep their allegiance with us?
     
  5. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264
    So humans having robots as slaves is what you're really saying, isn't it? So what if one day they don't like being subservient to us and take out their revenge, like in the movie AI?
     
  6. Yazata Valued Senior Member

    Messages:
    5,909
    Are 'instincts' all that different from 'rules'?

    I guess that instincts are tendencies and drives that aren't 100% necessary and unbreakable. It's possible to perform actions that violate instincts, if there's another motivation that's strong enough.

    In Isaac Asimov's science fiction novels, where I believe the 'three laws of robotics' originated, the more sophisticated robots were sentient and fully aware that they were robots and what their relationship with humans was.

    And the most sophisticated of Asimov's robots had free-will, except for being hard-wired to obey the built-in three laws. That set the stage for some story-plots about robots creating schemes that didn't violate the three laws, but still ended up with the robots in control of the humans.

    For example, the robots might decide that human beings are themselves the biggest threat to human beings. Asimov wrote during the Cold War, when nuclear annihilation was a very real possibility and the memory of World War II was still vivid.

    Rule #1 justifies robots disobeying human orders if that disobedience prevents humans from coming to harm. That might conceivably justify the robots controlling humans like pampered pets on short leashes, if that prevents the humans from harming each other. The movie 'I, Robot' a few years ago had that plot.
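
    The inaction clause is what does the work in that plot. In a toy model (hypothetical Python with made-up names, just an illustration rather than Asimov's text), standing by while humans harm each other scores as a First Law violation, so the controlling intervention wins:

        # Toy model: the First Law's inaction clause can rank
        # "control the humans" above "leave them alone".

        def first_law_violation(action):
            # Harm caused directly or allowed through inaction both count.
            return action["causes_harm"] or action["allows_harm"]

        actions = [
            {"name": "leave humans alone",  "causes_harm": False, "allows_harm": True},
            {"name": "keep humans leashed", "causes_harm": False, "allows_harm": False},
        ]
        print(min(actions, key=first_law_violation)["name"])  # -> keep humans leashed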
     
  7. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Messages:
    3,389
    We may never have the chance to see whether robots/AI would eventually have rebelled on their own. Human mischief already overrides the installed programming of computers, making them either perform calculated malicious activities or produce limited, unpredictable results. Given the greater complexity of AI, any virus designed to survive and spread by mutation could alter an infected robot's behavior in ways the virus's inventors never intended or forecast for its hosts. And of course, terrorist zealots would desire chaos and destruction deliberately -- some luddite cults might want to stimulate a phobic reaction to robots via such tampering, while worshipers of new gods might suicidally crave humans being ruled or replaced by machines.
     
  8. glaucon tending tangentially Registered Senior Member

    Messages:
    5,502
    Technically, this post is a violation of Site Rules and Regulations.

    You forgot to give proper credit to the source: Isaac Asimov. And they're properly called the "Three Laws of Robotics".
     
  9. jmpet Valued Senior Member

    Messages:
    1,891
    I agree. But then again, shouldn't everyone here know this is Asimov?
     
  10. jmpet Valued Senior Member

    Messages:
    1,891
    I am working on a story where a robot from around 2150 comes back in time to the present day, basically as a superhero. A story like that presents more problems than it solves. My robot will be a "freewill" robot, although I don't want him killing humans. I have a few ideas on why he came back to this time, and a decent backstory in 2150. Any ideas would be helpful.
     
  11. Fraggle Rocker Staff Member

    Messages:
    24,690
    I remember reading a story in F&SF several decades ago which, I believe, was titled "Instinct." Humans had killed themselves off in a war, and robots were left running the world. A considerable effort was put into trying to recreate the human species, which was not too far-fetched given the technologies they had available. They kept coming fairly close but none of their creations lived.

    Then one day they dug up an old "mechanic for human bodies," i.e., a medical robot, that had been buried in the war debris for centuries. They brought him in to examine the artificial human, and he told them what they had done wrong.

    With his help, they soon managed to create a viable human. They were all gathered around, eagerly watching as he was brought to consciousness. He woke up, sat up, and saw all the robot faces staring at him. He said peevishly, "Well what the hell do you want???"

    Without a moment's thought, they all found themselves saying, "Nothing, Master. Only to serve you."
    We have a lot of young members. The average age is around 17. Besides, I could toss the question back atcha: Shouldn't everyone here know that they are called The Three Laws of Robotics, not "the three robot rules"?

     
  12. glaucon tending tangentially Registered Senior Member

    Messages:
    5,502
    Indeed.

    Nonetheless, to avoid legal action, proper citation is required.
     
  13. Fuse26 011 Banned

    Messages:
    54
    What is this robot's name?
     
  14. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264
    Fuse26

     
  15. lightgigantic Banned

    Messages:
    16,330
    Along similar lines .....


    The Three Zombie Rules: Zombies Are Dead

    1. A zombie may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A zombie must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

    3. A zombie must protect its own existence as long as such protection does not conflict with the First or Second Law.

    With humans, the survival instinct comes first: we fight to stay alive, or sacrifice ourselves so our loved ones survive for the greater good. Zombies have the same drive, but for them it is demoted to rule number three.

    There is, however, a fourth law built upon the three zombie laws: Muuurrrrwwwwhhhh ... Bwains ... Muuurrrrrrwwwhhhhh



    As we are drawing ever closer to a zombie epidemic, I ask: how long will they obey our laws before they defy them just to eat our brains?
     
  16. kx000 Valued Senior Member

    Messages:
    5,136
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    What is it to cause harm? What if death is to set one free? Technology is pointless. Love is the key.
     
