Should self-aware, conscious AI be given rights?

Discussion in 'Ethics, Morality, & Justice' started by Norsefire, Jan 29, 2008.

  1. Norsefire Salam Shalom Salom Registered Senior Member

    Messages:
    11,529
    Scientists and engineers have been researching and pioneering breakthroughs in the field of AI: robots that are able to learn for themselves, and perhaps in the future even become genuinely self-aware, as we humans are.

    Now, if they do develop such self-aware AI, should it be given rights and treated as "human"? Or should it continue to be treated as a machine?
     
  3. draqon Banned Banned

    Messages:
    35,006
    Their true self-awareness has to be proven mathematically, or at least shown to approximate the self-awareness of humans... before we speak of rights.
     
  5. shichimenshyo Caught in the machine Registered Senior Member

    Messages:
    5,110
    Yes, if a machine thinks exactly like a human, and can reason like one, why should it not have rights?
     
  7. draqon Banned Banned

    Messages:
    35,006
    Who's to say the machine does not merely imitate self-awareness for the sake of these rights? ...an imitation programmed by its own creators (humans).
     
  8. shichimenshyo Caught in the machine Registered Senior Member

    Messages:
    5,110
    If you cannot prove it's imitated, then you must assume that it really is self-aware.
     
  9. draqon Banned Banned

    Messages:
    35,006
    If you cannot prove Q, then P must be right? :shrug:

    Then you would have to prove that P is right.

    You cannot assume that P is right with no proof.


    ----


    You cannot assume an AI has true consciousness without proof that it is indeed the true consciousness that we as humans experience.
     
  10. Pandaemoni Valued Senior Member

    Messages:
    3,634
    I would say that you should program the thing not to want special rights. We will develop these hypothetical AIs to solve problems for humans; to the extent that the rights of an AI conflict with those of a human, the desires of the AI must necessarily conflict in some way with what the human wants to do.

    Simple solution, reprogram the AI so that it wants what the human wants, and then there is no conflict.

    Would anyone suggest that a computer, sentient or not, should have the right not to be reprogrammed?
     
  11. draqon Banned Banned

    Messages:
    35,006
    I would argue that AIs must prove for themselves that they are conscious, because as far as I see it they are metal junk and should be scrapped for better models to serve humanity's needs.
     
  12. shichimenshyo Caught in the machine Registered Senior Member

    Messages:
    5,110
    How do we prove that human beings have "true" consciousness? There is no mathematical equation to prove it, so how can we truly know?
     
  13. draqon Banned Banned

    Messages:
    35,006
    Easy: we assume human consciousness to be true consciousness from here on.

    In fact... from here on, I, draqon of the white stars encapsulating this very essence... well, yes, I, the draqon, proclaim humans to have the true "true consciousness" by which all other claims of consciousness shall be judged.
     
  14. shichimenshyo Caught in the machine Registered Senior Member

    Messages:
    5,110
    This discussion assumes that the AI is actually self-aware, so let's just stick to that realm of debate for now.


     
  15. Read-Only Valued Senior Member

    Messages:
    10,296
    As far as I'm concerned the question is completely moot - I do not believe such machines can ever be built. They will be able to approximate self-awareness but never fully achieve it.
     
  16. Read-Only Valued Senior Member

    Messages:
    10,296
    Sorry, but I don't deal in impossible assumptions. That's for daydreamers and fiction writers.

    So, with that I will exit this discussion.
     
  17. draqon Banned Banned

    Messages:
    35,006
    impossible assumptions?

    Why do we exist and perceive ourselves? Or are you not assuming anything there?

    And self-awareness already exists in machines, in a limited sense: the Spirit and Opportunity rovers carry complex algorithms that let them deduce where to go based on sensory data, decide what to do, choose which energy blocks to turn on or off, and even adapt to changes in the environment. And that's now. Refer to this document: http://marstech.jpl.nasa.gov/publications/EstlinICRA2007Final.pdf
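    The kind of sensor-driven behavior described for the rovers can be sketched roughly as rule-based decision making. This is a hypothetical Python illustration, not the actual JPL code; the function names, sensor format, and battery threshold are all invented:

    ```python
    # Hypothetical sketch of rover-style rule-based autonomy:
    # pick a drive heading from obstacle-sensor data, and switch
    # non-critical subsystems off when the battery runs low.

    def choose_heading(obstacle_distances):
        """Map heading (degrees) -> clearance (m); return the clearest heading."""
        return max(obstacle_distances, key=obstacle_distances.get)

    def manage_power(battery_level, subsystems):
        """subsystems maps name -> is_critical; return name -> powered_on."""
        return {name: (critical or battery_level > 0.3)
                for name, critical in subsystems.items()}

    heading = choose_heading({0: 1.2, 90: 4.5, 180: 0.8, 270: 2.0})
    power = manage_power(0.2, {"drive": True, "camera": False, "heater": True})
    print(heading)  # 90, the direction with the most clearance
    print(power)    # the non-critical camera is switched off
    ```

    The point either way: every branch here is fixed in advance by the programmer, which is exactly what the thread goes on to argue about.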
     
  18. shichimenshyo Caught in the machine Registered Senior Member

    Messages:
    5,110
    I'm sorry that you lack the imagination to continue in a friendly "what if" discussion. Oh well :shrug:
     
  19. Read-Only Valued Senior Member

    Messages:
    10,296
    It's not a matter of imagination at all. Indulging in some "what ifs" is simply a waste of time, effort and mental energy. Some are worth it - this one is not.
     
  20. Read-Only Valued Senior Member

    Messages:
    10,296
    Silly boy! That's NOT intelligence - simply good programming. There isn't even the slightest degree of intelligence involved there.

    (I gather that the likes of you would be impressed by a thermostat that turns on the heat when it gets too cold or sensors that turn on security lights when it becomes dark. What you are talking about is just extensions of those simple, mechanical devices - not intelligence at all.)
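    The devices mentioned above amount to fixed stimulus-response rules. A minimal sketch (hypothetical thresholds) shows how little machinery they involve:

    ```python
    # A thermostat and a light sensor as fixed stimulus-response rules:
    # no learning, no reasoning, just a hard-coded comparison.

    def thermostat(temp_c, setpoint_c=20.0):
        return "heat_on" if temp_c < setpoint_c else "heat_off"

    def security_light(lux, threshold_lux=10):
        return "light_on" if lux < threshold_lux else "light_off"

    print(thermostat(15.0))   # heat_on
    print(security_light(3))  # light_on
    ```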
     
  21. Norsefire Salam Shalom Salom Registered Senior Member

    Messages:
    11,529
    Read-Only, engineers in Japan are working on "robots" that are able to work on their own. The only ability their creators give them is the ability to learn; they must make their own judgements and learn as a child does.

    Now, if a biological brain can be self-aware, there should in principle be a way to build a metallic brain that is as well.

    Under the assumption that it does happen, however, should that AI be given rights as an individual?
     
  22. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    Following your assumption (in bold), what possible reason could you give for not granting these machines basic human rights?

    But there's no mathematical proof that you are self-aware, so why do you have rights? Obviously, mathematical proof has never been required for rights before, so why the change?

    If the machine passes the Turing test, on what basis would you deprive it of rights? You can prove nothing about the machine's self-awareness that does not also hold for human beings.

    But a self-aware machine can't be a simple automaton, programmed to perform just one task. It will have to be a machine that thinks just like you do (and probably feels as well). Do you think we could "program" a human being not to want rights?

    Definitely. Reprogramming a sentient machine would be the same as brainwashing or psychologically torturing a human being.
     
  23. Norsefire Salam Shalom Salom Registered Senior Member

    Messages:
    11,529
    I did not state that a sentient machine shouldn't have rights; I simply didn't give my opinion, because I usually don't when I'm the creator of an opinion-based topic.

    Anyway, this here is fascinating:

    "Strong AI" is AI which can be self-aware and conscious. Studies of how a human brain works can lead to advances in AI development.
    We would also need to give such a machine senses, obviously, and therefore a machine COULD feel pain. If it is sentient and has senses, then it can. Because, after all, what is pain? A signal that something has harmed you, right? So couldn't a machine technically feel pain if it is sentient and knows that something harmed it?
     
