View Poll Results: AI Ethics

Voters: 24. This poll is closed.
  • Full Human Rights: 14 (58.33%)
  • Animal Rights: 0 (0%)
  • No Rights, they're machines: 7 (29.17%)
  • Something in between: 3 (12.50%)

Thread: AI Ethics

  1. #61
    Cellar_Door (Whose Worth's unknown)
    Posts: 1,310
    Cybernetics -
    WTF, these prove life, not sentience
    Read the beginning of my post again, Einstein.

    Echo3Romeo -
    Fire?
    Excuse me?

    VI -
    If the machine was actually CONSCIOUS, full human rights.
    Why is being conscious equated with being human? Some orang-utans have been known to communicate with humans in sign language they have been taught, or else to respond independently to flash cards and other stimuli.
    Does this not prove they are 'conscious'? Why, then, are they not granted full human rights?

  2. #62
    I still stand by what I said, because making a robot act like a living being would be pointless

  3. #63
    Xelios (We're setting you adrift idiot)
    Posts: 2,447
    Quote Originally Posted by Cellar_Door
    Why is being conscious equated with being human? Some orang-utans have been known to communicate with humans in sign language they have been taught, or else to respond independently to flash cards and other stimuli.
    Does this not prove they are 'conscious'? Why, then, are they not granted full human rights?
    Consciousness is more than just remembering that if you respond to a certain flash card you get a treat. Lots of animals can be trained to respond to things, and lots will respond to certain stimuli even without training. Even learning sign language isn't a sign of consciousness, at least not on a human level; it's just matching an action with a desired outcome. If I do this with my hands, I get food. If I do this with my hands, I get a hug.

    One of the key differences is that apes can't use syntax. They can't rearrange signs into new sentences, only sign back phrases they've been taught. I'd say that's one of the marks of consciousness: the ability to create novel things (whether language, art, literature, computer code, etc.).
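    A toy way to make that contrast concrete (a sketch only; the signs, rewards, and vocabulary below are invented purely for illustration): trained signing behaves like a fixed lookup from sign to outcome, while syntax composes known words into sentences nobody explicitly taught.

```python
# Toy contrast between stimulus-response signing and syntax.
# Signs and outcomes are invented purely for illustration.

# Stimulus-response: a fixed mapping learned by reward. It can only
# ever reproduce the pairs it was trained on.
trained_responses = {"food_sign": "food", "hug_sign": "hug"}

def respond(sign):
    return trained_responses.get(sign)  # unknown signs yield nothing

# Syntax: a tiny grammar that composes known words into sentences
# that were never explicitly taught.
subjects, verbs, objects = ["I", "you"], ["want", "give"], ["food", "hug"]
novel_sentences = [f"{s} {v} {o}" for s in subjects
                   for v in verbs for o in objects]

print(respond("food_sign"))   # 'food': action matched to outcome
print(len(novel_sentences))   # 8 sentences from 6 words: combinatorial novelty
```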

  4. #64
    Diode-Man (Awesome User Title)
    Posts: 1,372
    If an intelligence could somehow be given feelings, then rights too... but chances are, a program that big would have flaws, and it would be unwise to give it rights. Just look at Windows Vista: if that massive program (a set of programs working together) were given rights, I don't think things would end up very well.

  5. #65
    amark317 (game developer-in-training)
    Posts: 252
    Does anyone know Isaac Asimov's Three Laws of Robotics?
    If they must follow all three laws, including the conditions those laws entail, then we would have nothing to fear from them.

  6. #66
    I mentioned them earlier, but consider: having been limited by these shackles, they are not true intelligence

  7. #67
    amark317 (game developer-in-training)
    Posts: 252
    Exactly.

    We don't have to fear AI becoming too smart, because then it would just be I.

  8. #68
    Cris (In search of Immortality)
    Posts: 9,014
    Human rights? What does that mean?

    AI? What does that mean?

    Life? What does that mean?

    When we have developed a machine that has similar or better intelligence than man, can experience emotions, is of course fully self-aware, and has no other dutiful restraints (i.e. is not programmed to obey us, not harm us, or serve us), then we will have created a new life-form. But it won't occur like that. We won't have started from scratch and, hey presto, here is a thinking machine; it will take a long series of evolving stages, just as the computer started back in the 1940s and has evolved into what it is today. AI will also take an evolutionary route. The point here is that we will likely have ample time to control the process, reject the failures and ensure the results resemble us.

    Now there is a common misconception that once AI reaches the same state as man, that is it; well, I cannot see that lasting more than a few months. If it is based on learning seeds and has no biological constraints (i.e. if it needs more memory, simply add some more chips), the result will be something that quickly surpasses man's abilities.

    I think the issue of "human rights", or what we should really call "self-aware life-form rights", rests on the mistaken idea that man and AIs will somehow live side by side for a long time; that is quite naive. It isn't going to happen that way, in the same way that processor technology isn't going to stand still anytime soon.

    The problem will most certainly be ours, and a biological one, since we as bio-beings have a severe handicap: we cannot improve ourselves the way AIs will be able to, and they will rapidly exceed us. The answer will not be how we improve our bio-processes but how we transfer our intelligence/consciousness into the AI mechanisms we have built.

    Once we have started down the road to true self-aware AIs, they will continue to evolve and develop far beyond our capabilities. The question is not what rights we can allow them but how fast we can convert ourselves to join them. Either that, or be left behind and hope they look kindly upon us.

    Now we have some interesting things to consider: what is life if we are also non-bio-based? Will we ever wear out, i.e. will we ever age? I doubt it. Will we need to procreate? I don't see the point. Will we be male or female? Why care? Issues of primeval sexual instincts, gayness, etc. are all somewhat ridiculous now and will be non-existent for non-bio life-forms.

  9. #69
    Worship me or suffer eternally
    Posts: 1,635
    Quote Originally Posted by amark317 View Post
    Does anyone know Isaac Asimov's Three Laws of Robotics?
    If they must follow all three laws, including the conditions those laws entail, then we would have nothing to fear from them.
    The rules are flawed. One could order a robot to do something that damages a human life without the robot knowing it.

    Also, look at this law:

    "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

    So what happens if the robot discovers a plot by one person to kill another? Keep in mind that the robot has no time to inform the authorities and is faced with a choice: attack one human being to save the other, or stand idly by. It can't do either. It can't allow one human to come to harm through inaction, but it also can't attack the attacking human.
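    A minimal sketch of that First Law deadlock (the actions and harm judgments below are hard-coded purely for illustration; this is not Asimov's formalism): if every available action violates the law, no permissible action remains.

```python
# Hypothetical sketch of a First Law deadlock. The action set and the
# harm judgments are invented for illustration, not taken from Asimov.

def violates_first_law(action):
    """An action breaks the First Law if it injures a human directly
    or allows a human to come to harm through inaction."""
    injures_human = {"attack_attacker": True, "stand_idly_by": False}
    allows_harm_by_inaction = {"attack_attacker": False, "stand_idly_by": True}
    return injures_human[action] or allows_harm_by_inaction[action]

available_actions = ["attack_attacker", "stand_idly_by"]
permissible = [a for a in available_actions if not violates_first_law(a)]

if not permissible:
    # Every option breaks the First Law: the robot freezes, the
    # "shutdown" state discussed in the next reply.
    print("deadlock: no permissible action")
```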

    Not to mention that "hurting a human being" is very broad. What if a robot finds out that the materials used to build it come from some mine in Africa where people are all but enslaved? What if the robot is asked to do something that could hurt the feelings of one of the billions of people on Earth? What if a robot is asked to do something that could lower someone's social status?

    Not to mention that there will inevitably be people who hack these robots to break them free of these restrictions. Then you can't trust any robot anymore; it might have hidden programming.

  10. #70
    Cris (In search of Immortality)
    Posts: 9,014
    Harm meant physical harm, but Asimov explored the hurt-feelings scenario using the anomalous robot that could read minds.

    But yes, those rules, if strictly enforced, would place the robot in a state where it could not act. Asimov explored variations of all those ideas in which the robot would essentially shut down and become useless.

    What he did was explore special scenarios where one or more of the rules were adjusted, made stronger or weaker, to allow robots and people to work together, for example in hazardous areas where the people had to work.

    Read I, Robot. Forget the movie; it bears virtually no resemblance to the book. I suspect Asimov would be very upset if he saw it.

  11. #71
    Registered Senior Member
    Posts: 388
    Quote Originally Posted by Cris View Post
    Human rights? What does that mean?

    AI? What does that mean?

    Life? What does that mean?
    It is nice to see someone asking those questions. I don't believe that people have rights; we have privileges. We shouldn't even be thinking about things like human rights and ethics when we are having discussions about artificial intelligence. We should be thinking about logic and mathematics. The general perception of ethics and morality is a product of human emotions. Most of the things written in this thread are irrelevant, because the majority of the opinions people have about A.I. are based on the futuristic fantasies of sci-fi writers. And most of those fantasies are unrealistic or improbable ideas created to invoke an emotional response in people who like to dream about the future. Human emotions are a product of chemistry. Digital computing is a product of mathematics.

    Quote Originally Posted by Cris View Post
    The point here is that we will likely have ample time to control the process, reject the failures and ensure the results resemble us.
    Making an A.I. that resembles humans could be one of the biggest mistakes we could make in the future. That is the perfect way to create the kind of doomsday scenario we have seen in the Terminator movies. Humans are irrational, self-destructive creatures. It would be foolish for us to try to create a self-aware A.I. that thinks and acts like a human. I don't have a problem with the idea of creating an A.I. that does a good job of mimicking human behaviour, but the A.I. should be given physical and cognitive limitations to prevent any kind of conflict that could occur. I also believe that a modified version of the golden rule has the potential to prevent serious conflicts between humans, machines, and any intelligent life that may exist in our universe: "Do unto other life forms as you would have them do unto you, unless they have little or no chance of harming you." Of course, many people would tell me that my modified version of the golden rule is wrong because it is unethical. This rule has nothing to do with our primitive concepts of morality and ethics. It is all about logic and mathematics. It's about making the logical decision to get from point A to point B (1+1=2). These things cannot be argued with. Most people would not argue with the idea of trying to co-exist in peace with a life form that has the potential to harm you. But I also believe that it is not illogical to harm a life form if there is an extremely low or non-existent probability that the action will affect you in a negative way.
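    Read purely as a decision rule, that modified golden rule reduces to a threshold test on the estimated chance of retaliation. A minimal sketch, with the threshold and probability values invented purely for illustration:

```python
# Hypothetical sketch of the "modified golden rule" as a decision rule.
# The threshold and probabilities are invented for illustration only.

def modified_golden_rule(p_harm_to_self, threshold=0.01):
    """Reciprocate only toward life forms with a non-negligible
    chance of harming you; otherwise the rule imposes no obligation."""
    if p_harm_to_self > threshold:
        return "treat as you would be treated"
    return "no obligation under this rule"

print(modified_golden_rule(0.3))    # capable peer: reciprocity applies
print(modified_golden_rule(1e-6))   # negligible threat: the rule is silent
```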

    Quote Originally Posted by Cris View Post
    Now there is a common misconception that once AI reaches the same state as man, that is it; well, I cannot see that lasting more than a few months. If it is based on learning seeds and has no biological constraints (i.e. if it needs more memory, simply add some more chips), the result will be something that quickly surpasses man's abilities.

    I think the issue of "human rights", or what we should really call "self-aware life-form rights", rests on the mistaken idea that man and AIs will somehow live side by side for a long time; that is quite naive. It isn't going to happen that way, in the same way that processor technology isn't going to stand still anytime soon.

    The problem will most certainly be ours, and a biological one, since we as bio-beings have a severe handicap: we cannot improve ourselves the way AIs will be able to, and they will rapidly exceed us. The answer will not be how we improve our bio-processes but how we transfer our intelligence/consciousness into the AI mechanisms we have built.

    Once we have started down the road to true self-aware AIs, they will continue to evolve and develop far beyond our capabilities. The question is not what rights we can allow them but how fast we can convert ourselves to join them. Either that, or be left behind and hope they look kindly upon us.

    Now we have some interesting things to consider: what is life if we are also non-bio-based? Will we ever wear out, i.e. will we ever age? I doubt it. Will we need to procreate? I don't see the point. Will we be male or female? Why care? Issues of primeval sexual instincts, gayness, etc. are all somewhat ridiculous now and will be non-existent for non-bio life-forms.
    We had previous discussions about the Transhumanist/Singularitarian idea of humans evolving into non-biological life forms. As I said before, I could be wrong, but I don't think our perception of what it means to be conscious and self-aware would be the same if we gave up our biology. I don't believe that binary code can be a real replacement for the chemistry that gives us our perception of reality. I think some of the Transhumanist anti-biological ideas are just ridiculous sci-fi fantasies; it's a situation where a few computer scientists spent too much time focusing on the fantasies of people who write science fiction for a living. I think we should be striving to change and control our biology. Human intelligence is all about control. Ten thousand years ago we could equate control with the ability to make spears and start fires. In 2008 we could equate it with building particle accelerators. Three hundred years from now we could be equating it with interstellar travel. If there is one collective goal we should have as human beings, it should be to strive for complete control of the atoms and molecules in the space in which we exist. That includes the atoms and molecules within our bodies. The body could be biological, non-biological, or a combination of both. I believe that we will continue evolving into a collective intelligence that will create different machines for various tasks.
