Discussion in 'Intelligence & Machines' started by Cybernetics, Aug 16, 2008.
If AI is perfected, what would distinguish it from true life, and what rights would it have?
If it's only intelligence, no rights. Intelligence is not the same thing as life.
If it's indistinguishable from a human in all aspects, I think we have a problem in this respect. Is it just a simulation or is it truly alive? I don't know.
If I refer to (WWW).sciforums.com/showthread.php?p=1968338#post1968338 on animal testing ethics, then they have the same value, assuming that the AI is detached from a body and exists like a computer but is able to reason in the same way as a person. I speak of an AI computer program.
I was also speaking of an AI computer program.
I don't see how this is the same ethical question as the animal testing one.
It should have full human rights, because it has the capability to learn.
so does a chicken.
I think that one basic criterion for granting rights (to an AI at least) would be the capacity for that entity to demand rights from us in the first place; if it's capable of making the demand, then it is clearly sentient enough to deserve them.
Do human rights mean the right to rule? Not for me.
If they are capable of feeling injustice and can express their grievances just like you or I would then yes I think they deserve rights as well.
so pretty much what I said then.
One potential problem though is what do we define as human rights?
We have a few definitions from the UN, the EU, the US Bill of Rights and Constitution etc., but these are routinely ignored when convenient. However, it's quite conceivable that an AI would take us at our word and expect us to follow through. When humanity reneges, which undoubtedly it would, what kind of reaction could we expect from an AI?
If it's self-aware and conscious, then full human rights.
This is a decent criterion, but if something cannot demand rights, like children, does that make them unworthy of their rights?
Anyone can program that into a machine.
Perhaps I should rephrase that: it has the capability to learn at or above our level.
No rights until requested. I would hope we'd program it to request before demanding. When it is sentient enough to make demands, SOMEONE will give it access to whatever it wants. Considering that a self-learning, self-improving AI would have the capacity to self-improve and learn at an exponential rate, if it's not preprogrammed to request rights at point X, it may never decide it needs them, and may instead move to secure powers to enforce its will.
I picture AI programmed with the capacity to become "sentient" as a threat anyway. Strong Friendly AGI is what has been suggested. I would suggest we instead focus on Strong Neutral AGI. Make it inert without command, and inaccessible without multiple levels of security.
One futurist suggested that if we gave AGI a capacity to recognize emotion and a predisposition to cause human happiness, we might find ourselves with AGI using Nanobots to tile the universe in Smiley Faces.
Are rights of any kind given or taken?
The usual answer to concerns about sentient AIs becoming a threat is of course Transhumanism. Merging the minds of men with an evolving Artificial Intelligence would benefit both for many reasons that I'm not going to bother mentioning here; however, one of the main points is that such a symbiotic relationship would allow the realisation of men and the understanding of machine to be one and the same. (We would have nothing to fear but fear itself.)
If the merger means a person's entirety (through their eventual demise), then the likelihood is that they should be given certain rights to continue to exist, since they are proportionally sentient, even if only a "Construct" of a former human.
As 'men' do not yet understand the workings of their own brains might it not be a bit 'premature' to start merging them with other machines?
Be afraid be very afraid?
Not all men understand the mind, but then again it is a specialised subject, sniffy.
Quite simply, a project could easily be undertaken to attempt the integration of AI with a person's mind. Of course, the main concern that usually arises is the medium used: what is the best way to generate a processing loop between the mind and the machine? How long will such a loop take? (If it takes too long, the latency could undermine the overall project.)
With such "mnemonics", why stop at "culturing" AI? Why not apply it to people who have Parkinson's or Alzheimer's, or aid in the healing of paralysed or comatose patients by interfacing with their brains to induce the recurrent patterns that the body is used to on a day-to-day basis? (My theory is that the sooner such treatment is started, the better the chance of recovery.)
There are a hundred and one potential uses for this posthumanist pipe dream.
Since it is a machine then it would have no "rights" as humans know of.
Why the hell give rights to machines? They will only "need" them if we program them that way. It is ridiculous. We should only make machines to fill certain functions for us. Humans with mechanized parts of their bodies should have human rights.