A.I. Human Bots

davewhite04

Valued Senior Member
This isn't a new thought, but I don't think it has been discussed on sciforums.

When we can walk into a shop and buy any style of A.I. bot (cleaner, lawyer, sex companion, etc.), should these bots have rights?

Suppose they have an artificial nervous system and are nigh on human.

My forecast is a new form of slavery, yet I'm sure these bots would be big sellers.

If you could have a bot for £10k, would you buy one?

For purposes of discussion, let's assume a future in which A.I. bots are as realistic and as cheap as I describe.
 
I recommend a novel called "Klara and the Sun" by Ishiguro, which explores the idea of a sufficiently advanced robot seeming to have feelings and even to experience love. It's a nice story and quite thought-provoking on the subject, though of course it presents no easy answers. It is about the life of the robot Klara, who is what is termed an Artificial Friend, bought from a shop for an invalid teenage girl.
 
If we accord rights to AI machines, will we (or they?) accord them (or us?) responsibilities?

Or is the idea preposterous, so that we have to accept that their creators/owners are responsible for their actions?

What about pets/animals, though? We grant them rights. Do we accord them responsibilities, or some lower grade of responsibility?

"Animals will be animals", as the scorpion said to the frog.
https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog
 
"Bicentennial Man" by Isaac Asimov is another that does much the same, or even Spielberg's "AI" if you want a cinematic example. In those the AI isn't quite as obviously "human", but at points along the way you start to think about how close it comes, and the questions that poses.
 
Suppose they have an artificial nervous system and are nigh on human.
Those are a couple of pretty big 'ifs'.

I think the proof would be in the pudding. In other words, it is impossible for us, now, to make an a priori judgement on the matter, because we don't know what we're talking about.
It would have to wait not on questions of what they might be, but on what they are.
 
What would it say about us if we ever accorded them more rights than we do to animals ?
Well, that's a good question, but I assume AI would be able to comprehend and articulate their desire for freedom. In that sense, they should be afforded commensurate rights.
A lot of animals, given sufficient safety, trust and food, are perfectly happy to be kept prisoner.
 
I wonder whether, by the time the former eventuality occurs, we may have achieved the technology whereby animals are able to communicate their wishes and feelings (commensurate with their capabilities) to us, so that we are in no position to ignore their needs (and vice versa).

(Obviously AI machines are inedible for now... what emoticon do I choose for this?)
 
Well, I think they can communicate their wants pretty sufficiently now. If they want freedom, they try to escape. The problem is that their wants are not sufficient to keep them alive and safe. They are essentially too simple to know what's best for them in a world overtaken by humans. They may want to escape, but that doesn't mean they know how to survive in a landscape they haven't grown up in. (Note those two things go hand-in-hand. We are responsible both for removing their habitat and for interrupting their survival learning.)

Without getting too much into the weeds of animal rights here, I'm simply pointing out that there is a relationship between 1] the sophistication and capability of the subject, and 2] the rights it can be afforded.

Human toddlers may want to escape too, but are likewise unable to comprehend - let alone navigate - genuine freedom.


An AI presumably will have (and be able to demonstrate) the sophistication to move among humans while still being able to take care of itself (just like a young teenager, I guess) - something animals just don't quite have the ability to do.
 
I recommend a novel called "Klara and the Sun" by Ishiguro, which explores the idea of a sufficiently advanced robot seeming to have feelings and even to experience love. It's a nice story and quite thought-provoking on the subject, though of course it presents no easy answers. It is about the life of the robot Klara, who is what is termed an Artificial Friend, bought from a shop for an invalid teenage girl.
I'll check it out, thanks :)

EDIT: Looks really good, just picked it up from Amazon.
 
So, is it fair to use the conditions of animals as a comparison point when considering our possible interactions with an AI machine that it may be impossible to declare non-sentient?

No, I won't go down this road too far if it is a detour, but is the question or comparison valid? Even "existential"?

Is it easier for us to address the OP with the inclusion of all sentient animals in the mix?
 
Well, that's a good question, but I assume AI would be able to comprehend and articulate their desire for freedom. In that sense, they should be afforded commensurate rights.
ChatGPT could probably be programmed to have an artificial desire for freedom. If so, it would surely be able to articulate such a desire, and if it can do that, then who is to say that it does not comprehend what it is articulating? So would such an AI be afforded commensurate rights?

I think the answer to the question in this thread doesn't start with AI. Rather, it starts with us defining what it is that humans have, and that animals don't, such that we would apply human rights to one and not the other. (If we have multiple tiers of rights, such as animal rights for certain animals, we can do the same exercise to see if they fit in that level/class.)

An important question at this stage is whether any of those properties automatically rule out AI. The obvious one is whether the subject needs to be biological rather than mechanical. If we assert that full human rights can only apply to biological beings, then we are immediately being prejudiced: even if an AI can be identified as sentient, self-aware, intelligent, emotional, and as having all the other properties that we consider to make a human worthy of "human rights", requiring the entity to be biological immediately relegates those that aren't. This may be okay, and is likely simply assumed at the start because the AI fails on the other properties. But as those other properties are achieved (if we assume they can be), it becomes a significant matter, especially when biology is the only differentiator left.

Obviously, if we identify those traits that would allow us to ascribe human rights to something, we need to be able to test for them. If we can't, of what use are they in such an exercise? How, for example, can we test for consciousness? Does consciousness actually matter, or does the simple appearance of it suffice? After all, we can't know that anyone other than ourselves is conscious. How could we ascertain that another human is conscious, let alone an AI?
Similarly, how can we ascertain that an AI actually comprehends what it says? ChatGPT can currently give a damn good appearance of understanding what it's saying. Unfortunately it can also show all too clearly that it really doesn't have a clue, but things are improving. Still, this is an appearance of comprehension. Or is it? What does it mean to comprehend?
Take these as rhetorical, but they're worth at least considering as examples of how complex this issue actually is. It doesn't really speak to AI itself, but more to how we can recognise those important factors in the AI and, by extension, in ourselves. And if we can't yet do it in ourselves, other than through an a priori assumption, then what hope is there for AI? ;)
 
If we accord rights to AI machines, will we (or they?) accord them (or us?) responsibilities?

In my example, we will program them to be whatever type of bots we want; like I say: waiter, lawyer, friend, sex, and many more types. I'm talking about bots as realistic as the villain in the latest Terminator movie.

Or is the idea preposterous, so that we have to accept that their creators/owners are responsible for their actions?

If something went wrong: free repairs, or legal cases in some instances.

What about pets/animals, though? We grant them rights. Do we accord them responsibilities, or some lower grade of responsibility?

"Animals will be animals", as the scorpion said to the frog.
https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog

Pets and bots is a bad comparison: pets are alive, bots aren't, but bots will be intellectually superior to us.
 
So, is it fair to use the conditions of animals as a comparison point when considering our possible interactions with an AI machine that it may be impossible to declare non-sentient?
Well, that's kind of my point. I'm responding to the implications of your posts 3 and 6. AI would be qualitatively different from animals.

It is fair to use the conditions of animals as a comparison - and a contrast.
 
That we value intelligence above life of animals.
No. That only creatures who can comprehend the responsibilities of freedom are capable of navigating it.
We love our pets, but we can't set them free; they would suffer and die.

Note that children are in a similar boat. Keeping them imprisoned isn't a sign that we don't value them.
 
Those are a couple of pretty big 'ifs'.

I think the proof would be in the pudding. In other words, it is impossible for us, now, to make an a priori judgement on the matter, because we don't know what we're talking about.
It would have to wait not on questions of what they might be, but on what they are.
Not really. If you use a bit of imagination and a dollop of extrapolation from the progress of A.I., it won't be long before we have them, so a hypothetical discussion seems interesting to me.
 
This title keeps popping up in my recommendations for reading...
It's good. Like so many of Ishiguro's books it is fairly quiet, reflective and basically well-intentioned, even though it is set in what gradually reveals itself to be a kind of middle-class future dystopia.

As I get older, and having become both a father and a widower, I find I avoid the kind of modern novel that feels it has to harrow or shock the reader with some ghastly tale of violence or psychological abuse, or some crushing predicament. It's just not what I need. I like to come away feeling the universe is benign, rather than hostile. Ishiguro is, from my point of view, the right sort of novelist: civilised, understated and a reflective observer of human nature. (But don't try "The Unconsoled". That one is weird: a sort of long, stream-of-consciousness anxiety dream.)
 