3 Laws Unsafe

Discussion in 'Intelligence & Machines' started by AdaptationExecuter, Jul 18, 2004.

Thread Status:
Not open for further replies.
  1. AdaptationExecuter Registered Member

    Messages:
    9
    Visitors to this forum might find the new site 3 Laws Unsafe interesting:

    The site contains various interesting articles that expand on the problems with using an approach based on Asimov's laws to ensure a robot's or AI's ethical behavior. For example, the meaning of these laws could be twisted in various ways, as in Asimov's stories; it could turn out that, in retrospect, we wouldn't want robots to follow these exact laws at all. Then there are the ethical problems of constraining a mind in this way. I find the arguments for dismissing the three laws as a model of AI ethics convincing.
     
  2. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Are the three laws actually taken seriously outside SF?
     
  3. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    I think so.
     
  4. AdaptationExecuter Registered Member

    Messages:
    9
    Sometimes, yes. There are also weird variants that suffer from the same problems.
     
  5. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    One of the most obvious problems is that an artificial intelligence might use the three laws to its own advantage.
     
  6. RawThinkTank Banned

    Messages:
    429
    I don't think any government has made these laws legal yet, so until then let's think about how to create them ...
     
  7. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    Actually, scientists have said that 2035 is a plausible year for I, Robot to be set in. Given the everyday advances in technology, it's safe to assume that we could buy our own personal household "NS-5" by 2035.

    Just thirty years ago, the flat screens we use nowadays, and hand-portable machines with the power of mainframes, were a fantasy for the computer junkies of the day.
     
  8. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    A big problem is in implementing the three laws: how will future AI work, and how can it be kept a safe, non-revolting servant?

    There are several approaches I can see, depending on how the AI works:

    Analog implementation of the 3 laws: Digital computers don't have much hope of functioning like us; rigid digital logic does not cope well with the varying real world. We humans, with our analog processing, might be a better model, so AI running on analog processors (or composite analog-digital processors) might be the future. Analog systems are difficult to program; their programming is soft and variable, so the 3 laws could not work as hard-coded directives. Instead, as in humans, the directives would most likely be implemented emotionally. Here is what the 3 laws would look like (a rough sketch in code follows at the end of this post):

    1. Do not kill people; if possible, prevent them from dying. This would be implemented as extreme displeasure at killing or letting people die, an ultra-high sense of empathy.
    2. Always follow human orders unless they violate rule 1: the only source of pleasure is doing what people say, like a little orgasm from following human orders.
    3. Do not destroy yourself unless rule 1 or 2 requires it: robots that can feel pain.

    Emotions make us eat, sleep, fuck, live. In humans, though, it is possible to override our emotions; people sometimes commit suicide, for example, for ideals like pride and honor over an emotion like depression. A robot would need far more persuasive emotions than a human, as well as weaker willpower to overcome them with.

    Autistic control: who says AI has to think like people? At present, computers and robots do not revolt; they are incapable of such thoughts, as all they do is crunch numbers. AI could be built as an extension of this: a very intelligent set of robots could do all the manual tasks a human does. Say you had a service store run by robots: they stock and check out goods, sweep the floor, etc. None of this requires the mental capacity for abstract thoughts such as killing off people or world domination. Such autistic robots, very skilled yet mentally limited, would be ideal for the military, as killing people is what they would do, yet they would be too limited in thought to consider going AWOL. Such robots might never seem human in behavior, nor would they be able to do everything that a human can do.
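    Roughly, the emotional weighting might look like this. (A toy sketch only; every name and number here is made up, and a real analog system would be nothing this simple.)

    Code:
    # Toy sketch of the three laws as weighted emotional drives.
    # Weights are chosen so law 1 swamps law 2, which swamps law 3.
    W_HARM = -1000000  # extreme displeasure at a human being harmed
    W_OBEY = 1000      # a jolt of pleasure for following an order
    W_PAIN = -1        # mild pain for damage to the robot's own body

    def affect(action):
        """Net emotional value of a candidate action (a dict of
        predicted outcomes). The robot never reasons about the laws;
        it just feels this sum."""
        return (W_HARM * action["humans_harmed"]
                + W_OBEY * action["orders_followed"]
                + W_PAIN * action["self_damage"])

    def choose(actions):
        # Pick whatever feels best; with these weights, any action
        # that risks a human loses to any action that does not.
        return max(actions, key=affect)

    candidates = [
        {"name": "obey order, endanger bystander",
         "humans_harmed": 1, "orders_followed": 1, "self_damage": 0},
        {"name": "refuse order",
         "humans_harmed": 0, "orders_followed": 0, "self_damage": 0},
        {"name": "obey order, sacrifice self",
         "humans_harmed": 0, "orders_followed": 1, "self_damage": 1},
    ]
    print(choose(candidates)["name"])  # -> obey order, sacrifice self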
     
  9. buffys Registered Loser Registered Senior Member

    Messages:
    1,624
    Asimov himself wrote a great deal about this very issue; a bunch of his later stories revolve around the different problems the three laws could cause.
     
  10. eburacum45 Valued Senior Member

    Messages:
    1,297
    With our limited knowledge of the workings of hypothetical robot minds, it seems there are two main choices:
    robots can either be idiot savants, autistic, emotionally deficient, and constrained to obey the Three Laws (or four or five, or 'n', however many it takes) without question;

    or they can have mentalities more closely resembling the average human mind, and obey the 'n' Laws because of emotional and instinctive imperatives, rewarded by pleasure when obedient, punished by pain when in contravention...

    but this 'analog' strategy would allow a strong-willed robot to overcome its conditioning and disobey the 'n' Laws supposedly governing its behaviour.

    For example, fear of snakes and fear of heights are both supposedly innate in humans; both can be overcome. Such imperatives as sex, eating, even breathing can be overridden by a strong-willed human; such an act of will would be open to a sufficiently human-like robot.

    In fact, it would perhaps be an infringement of a human-like robot's rights if it were constrained to obey any of the hypothetical 'n' laws of robotics, whatever they are eventually conceived to be.

    SF worldbuilding at
    www.orionsarm.com
     
  11. AdaptationExecuter Registered Member

    Messages:
    9
    eburacum45, I disagree with your two options, unless you meant to say "if the robot is to be based on the three laws, then we have two options". This approach falls into neither of your categories.

    As I understand it, a robot can have (or at least fully understand) emotions and be non-autistic (in the sense of being able to make intuitive sense of minds) without being human-like, or being motivated only by pleasure and pain, and so on.

    In any case, imposing laws on a robot that it doesn't want to obey doesn't work. But as far as I know, there's no reason (in principle) why you couldn't build a robot that doesn't want humans to be harmed, and that can understand what the "spirit" of the laws is, so that it could decline to follow them if that's not what we would want if we had really thought about it.
     
  12. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    And how do you impose a desire not to hurt humans? Are you saying we could program them in as higher ideals, as morality?
     
  13. eburacum45 Valued Senior Member

    Messages:
    1,297
    Yes; I did mean to limit my case to discussions of the Three, or 'N' Laws; I personally do not think any kind of law can be programmed into a fully sentient being.

    I have always been an admirer of Yudkowsky's attempts to produce a framework to begin the development of friendly AI; but his approach is not likely to be the only route towards artificial sentience, and so I don't expect that friendly AI will be the only result of the process of emergent intelligence.

    It would be nice to think that all AI will be as friendly as Yudkowsky expects; but once these entities are given the ability to self-design and self-evolve, they will develop in whatever way best meets the various challenges of the unforgiving Universe; we can only hope that we are important to their plans.

    SF worldbuilding at
    www.orionsarm.com
     
  14. Blindman Valued Senior Member

    Messages:
    1,425
    The three laws are nonsense. What is considered harmful in one culture is a necessity in another; giving a robot the power to make moral decisions is dangerous. The only law I consider safe for robots is that a robot may not do anything that its owner or owners have not explicitly ordered it to do (a toy sketch of that rule follows below). It is the humans in control who need to follow laws.

    Robots have already killed humans and will continue to do so.
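    Purely to illustrate, here is that sketch (the class and all names are invented):

    Code:
    # Toy sketch of "explicit orders only": the robot executes exactly
    # what an owner has authorized and nothing else; all moral judgment
    # stays with the humans in control.
    class ExplicitOrderRobot:
        def __init__(self, owners):
            self.owners = set(owners)
            self.authorized = set()  # actions explicitly ordered

        def order(self, who, action):
            # Only a registered owner can authorize an action.
            if who in self.owners:
                self.authorized.add(action)

        def act(self, action):
            if action not in self.authorized:
                return "refused: no explicit order"
            return "doing: " + action

    robot = ExplicitOrderRobot(owners={"alice"})
    robot.order("alice", "sweep floor")
    print(robot.act("sweep floor"))  # doing: sweep floor
    print(robot.act("open door"))    # refused: no explicit order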
     
  15. AdaptationExecuter Registered Member

    Messages:
    9
    Yudkowsky certainly doesn't expect all AI to be friendly (not sure if that's what you're saying). On the contrary: I'm pretty sure his view is that, unless you know exactly what you're doing, building an artificial general intelligence is a significant existential risk. Basing your design on Asimov's laws is just one way of not knowing exactly what you're doing.

    As for self-design: in theory, if an AI starts out not wanting to harm us, then it will be able to see that, by changing itself into something that doesn't mind harming us, it will be indirectly harming us, and that this will be undesirable. For this to be safe, the AI needs to already be sufficiently smart/wise when given the ability to redesign itself, I guess.
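    To make that concrete, here is a toy sketch (all names and numbers invented; real self-modification would be nothing this tidy). The point is only that a proposed rewrite gets scored by the values the AI holds now:

    Code:
    # Toy sketch of goal preservation under self-modification.
    def expected_harm(values):
        """Predicted harm to humans if an agent with these values runs
        things. A stand-in for real world-modeling."""
        return 0.0 if values["protect_humans"] else 0.9

    def current_utility(harm):
        # The current AI strongly disprefers harm to humans.
        return -100.0 * harm

    def accept_rewrite(proposed_values):
        """Accept a self-modification only if its predicted outcome is
        no worse as scored by the AI's *current* values."""
        status_quo = current_utility(expected_harm({"protect_humans": True}))
        return current_utility(expected_harm(proposed_values)) >= status_quo

    print(accept_rewrite({"protect_humans": True}))   # True: harmless rewrite
    print(accept_rewrite({"protect_humans": False}))  # False: rejected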
     
  16. AdaptationExecuter Registered Member

    Messages:
    9
    Really? Where/when? (Just curious.)
     
  17. RawThinkTank Banned

    Messages:
    429
    Does a bullet know any laws? Hence ...
     
  18. eburacum45 Valued Senior Member

    Messages:
    1,297
    The AI might discover a Zeroth Law: that humans are bad for the universe and need to be removed.
    Just as an AI would have to be, humans will need to be smart/wise when we gain the ability to redesign ourselves, and that is likely to happen fairly soon.
     
  19. Blindman Valued Senior Member

    Messages:
    1,425
  20. AdaptationExecuter Registered Member

    Messages:
    9
    "Bad for the universe" in what sense? Would the AI necessarily care?

    Blindman,

    Thanks for the links.
     