AI Ethics

Discussion in 'Intelligence & Machines' started by Cybernetics, Aug 16, 2008.


AI Ethics

  1. Full Human Rights

    58.3%
  2. Animal Rights

    0 vote(s)
    0.0%
  3. No Rights, they're machines

    29.2%
  4. Something in between

    12.5%
Thread Status:
Not open for further replies.
  1. Cellar_Door Whose Worth's unknown Registered Senior Member

    Messages:
    1,310
    Cybernetics -
    Read the beginning of my post again, Einstein.

    Echo3Romeo -
    Excuse me?

    VI -
    Why is being conscious equated with being human? Some orang-utans have been known to communicate with humans in sign language they have been taught, or else respond independently to flash cards and other stimuli.
    Does this not prove they are 'conscious'? Why, then, are they not granted full human rights?
     
  2. Cybernetics Registered Member

    Messages:
    89
    I still stand by my point, because making a robot have the actions of a living being would be pointless.
     
  3. Xelios We're setting you adrift idiot Registered Senior Member

    Messages:
    2,447
    Consciousness is more than just remembering that if you respond to a certain flash card you get a treat. Lots of animals can be trained to respond to things, and lots will respond to certain stimuli even without training. Even learning sign language isn't a sign of consciousness, at least not on a human level. It's just matching an action with a desired outcome. If I do this with my hands, I get food. If I do this with my hands I get a hug.

    One of the key differences is that apes can't use syntax. They can't rearrange signs into new sentences, only sign back phrases they've been taught. I'd say that's one of the signs of consciousness: the ability to create novel things (whether it be language, art, literature, computer code, etc.).
     
  4. Diode-Man Awesome User Title Registered Senior Member

    Messages:
    1,372
    If somehow an intelligence could be given feeling, then rights too... but chances are, programming that big would have flaws, and it would be unwise to give it rights. Just look at Windows Vista: if that massive program (really a set of programs working together) were given rights, I don't think things would end up very well.
     
  5. amark317 game developer-in-training Registered Senior Member

    Messages:
    252
    Does anyone know Isaac Asimov's three rules for robots?
    If they must follow all three laws, including the conditions entailed in the laws, then we would have nothing to fear from them.
     
  6. Cybernetics Registered Member

    Messages:
    89
    I mentioned them earlier, but considering they are limited by these shackles, they are not true intelligence.
     
  7. amark317 game developer-in-training Registered Senior Member

    Messages:
    252
    Exactly.

    We don't have to fear AI becoming too smart, because then it would just be I.
     
  8. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Human rights? What does that mean?

    AI? What does that mean?

    Life? What does that mean?

    When we have developed a machine that has levels of intelligence similar to or better than man's, can experience emotions, is of course fully self-aware, and has no dutiful restraints (i.e. is not programmed to obey us, not harm us, or serve us), then we will have created a new life-form. But it won’t occur like that: we won’t start from scratch and, hey presto, have a thinking machine. It will take a long series of evolving stages, just as the computer started back in the 1940s and has evolved into what it is today. AI will also take an evolutionary route. The point here is that we will likely have ample time to control the process, reject the failures, and ensure the results resemble us.

    Now there is a common misperception that once AI reaches the same state as man, that is it; I cannot see that state lasting more than a few months. If it is based on learning seeds and has no biological constraints (if it needs more memory, simply add more chips), the result will be something that quickly surpasses man’s abilities.

    I think the issue of “human rights”, or what we should really call “self-aware life-form rights”, rests on the mistaken idea that man and AIs will somehow live side by side for a long time; that is quite naive. It isn’t going to happen that way, any more than processor technology is going to stand still anytime soon.

    The problem will most certainly be ours, and a biological one, since we as bio beings are severely handicapped in our ability to improve ourselves, while AIs will be able to rapidly exceed us. The answer will not be how we improve our bio processes, but how we transfer our intelligence/consciousness into the AI mechanisms we have built.

    Once we have started down the road to truly self-aware AIs, they will continue to evolve and develop far beyond our capabilities. The question is not what rights we can allow them but how fast we can convert ourselves to join them. Either that, or be left behind and hope they look kindly upon us.

    Now we have some interesting things to consider: what is life if we are also non-bio based? Will we ever wear out, i.e. will we ever age? I doubt it. Will we need to procreate? I don’t see the point. Will we be male or female? Why care? Issues of primeval sexual instincts, gayness, etc., are all somewhat ridiculous now and will be non-existent for non-bio life-forms.
     
  9. s0meguy Worship me or suffer eternally Valued Senior Member

    Messages:
    1,635
    The rules are flawed. One could order a robot to do something that damages a human life without the robot knowing it.

    Also, look at this law:

    "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

    So what happens if the robot finds out about a plot in which somebody is going to kill somebody else? Let's keep in mind that the robot has no time to inform the authorities and is faced with a choice: attack one human being to save the other, or stand idly by. It can't do either of those things. It can't allow one human to come to harm through inaction, but it also can't attack the attacking human.

    Not to mention that "hurting a human being" is very broad. What if a robot finds out that the materials used to build it come from some mine in Africa where people are all but enslaved? What if the robot is asked to do something that could hurt the feelings of one of the billions of people on the Earth? What if a robot is asked to do something that could lower someone's social status?

    Not to mention that there will inevitably be people who hack these robots to break them free of these restrictions. Then you can't trust any robot anymore; they might have hidden programming.
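    To make the deadlock above concrete, here is a minimal hypothetical sketch in Python that treats the First Law as a hard filter over candidate actions. The action names and harm flags are invented purely for illustration; this is not any real robotics API.

        # First Law: a robot may not injure a human being or, through
        # inaction, allow a human being to come to harm.
        def violates_first_law(action):
            return action["injures_human"] or action["allows_harm_by_inaction"]

        # Return the first action the First Law permits, or None if every
        # option violates it (the robot is deadlocked and cannot act).
        def choose_action(actions):
            for action in actions:
                if not violates_first_law(action):
                    return action
            return None

        # The scenario above: no time to call the authorities, two options left.
        options = [
            {"name": "attack the attacker", "injures_human": True, "allows_harm_by_inaction": False},
            {"name": "stand idly by", "injures_human": False, "allows_harm_by_inaction": True},
        ]

        print(choose_action(options))  # None: strict enforcement leaves no permissible action

    Under a strict reading, both options fail the same law, so a rule-checker built this way simply has nothing it is allowed to do.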
     
  10. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Harm meant physical harm, but Asimov explored the hurt-feelings scenario using the anomalous robot that could read minds.

    But yes, those rules, if strictly enforced, would place the robot in a state where it could not act. Asimov explored variations of all those ideas in which the robot would essentially shut down and become useless.

    What he did was explore special scenarios where one or more of the rules were adjusted, made stronger or weaker, to allow robots and people to work together, for example in hazardous areas where the people had to work.

    Read I, Robot. Forget the movie - it bears virtually no resemblance to the book. I suspect Asimov would be very upset if he saw it.
     
  11. q0101 Registered Senior Member

    Messages:
    388
    It is nice to see someone asking those questions. I don’t believe that people have rights; we have privileges. We shouldn’t even be thinking about things like human rights and ethics when we are having discussions about artificial intelligence. We should be thinking about logic and mathematics. The general perception of ethics and morality is a product of human emotions. Most of the things written in this thread are irrelevant, because the majority of the opinions people have about A.I. are based on the futuristic fantasies of sci-fi writers. And most of those fantasies are unrealistic or improbable ideas created to evoke an emotional response in people who like to dream about the future. Human emotions are a product of chemistry. Digital computing is a product of mathematics.

    Making an A.I. that resembles humans could be one of the biggest mistakes we could make in the future. That is the perfect way to create the kind of doomsday scenario we have seen in the Terminator movies. Humans are irrational, self-destructive creatures. It would be foolish for us to try to create a self-aware A.I. that thinks and acts like a human. I don’t have a problem with the idea of creating an A.I. that does a good job of mimicking human behaviour, but the A.I. should be given physical and cognitive limitations to prevent any kind of conflict that could occur. I also believe that a modified version of the golden rule has the potential to prevent serious conflicts between humans, machines, and any intelligent life that may exist in our universe: “Do unto other life forms as you would have them do unto you, unless they have little or no chance of harming you.” Of course many people would tell me that my modified version of the golden rule is wrong because it is unethical. This rule has nothing to do with our primitive concepts of morality and ethics. It is all about logic and mathematics. It’s about making the logical decision to get from point A to point B. (1+1=2) These things cannot be argued with. Most people would not argue with the idea of trying to co-exist in peace with a life form that has the potential to harm you. But I also believe that it is not illogical to harm a life form if there is an extremely low or non-existent probability that the action will affect you in a negative way.

    We had previous discussions about the Transhumanist / Singularitarian idea of humans evolving into non-biological life forms. As I said before, I could be wrong, but I don’t think that our perception of what it means to be conscious and self-aware will be the same if we give up our biology. I don’t believe that binary code can be a real replacement for the chemistry that gives us our perception of reality. I think some of the Transhumanist anti-biological ideas are just ridiculous sci-fi fantasies. It’s a situation where a few computer scientists spent too much time focusing on the fantasies of people who write science fiction for a living. I think we should be striving to change and control our biology. Human intelligence is all about control. Ten thousand years ago we could equate control with the ability to make spears and start fires. In 2008 we could equate it with building particle accelerators. Three hundred years from now we could be equating it with interstellar travel. If there is one collective goal that we should have as human beings, it should be to strive to be in complete control of the atoms and molecules in the space in which we exist. That also includes the atoms and molecules within our bodies. The body could be biological, non-biological, or a combination of both. I believe that we will continue evolving into a collective intelligence that will create different machines for various tasks.
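    Read purely as "logic and mathematics", the modified golden rule above amounts to a threshold test on the probability of being harmed. A minimal hypothetical sketch follows; the threshold and the probabilities are invented for illustration.

        # Reciprocity applies only when the other life form's chance of
        # harming us is non-negligible; below the threshold the rule lapses.
        def apply_modified_golden_rule(prob_they_can_harm_us, threshold=0.01):
            return prob_they_can_harm_us >= threshold

        print(apply_modified_golden_rule(0.30))    # capable AI or rival species -> True, co-exist peacefully
        print(apply_modified_golden_rule(0.0001))  # negligible threat -> False, the rule does not apply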
     