Well, that's a good question, but I assume an AI would be able to comprehend and articulate its desire for freedom. In that sense, it should be afforded commensurate rights.
Chat-GPT could probably be programmed to have an artificial desire for freedom. If so, it would surely be able to articulate such a desire, and if it can do that, then who is to say that it does not comprehend what it is articulating? So would such an AI be afforded commensurate rights?
I think the answer to the question in this thread doesn't start with AI. Rather, it starts with us defining what it is that humans have, and animals don't, such that we would apply human rights to one and not the other. (If we have multiple tiers of rights, such as animal rights for certain animals, we can do the same exercise to see if they fit in that level/class.)
An important question at this stage is whether any of those properties automatically rule out AI. The obvious one is whether the subject must be biological rather than mechanical. If we assert that full human rights can only apply to biologics, then we are immediately being prejudiced: even if an AI can be identified as sentient, self-aware, intelligent, emotional, and all the other properties that we consider to make a human worthy of "human rights", requiring the entity to be biological immediately relegates those that aren't. This may be okay, and is likely simply assumed at the start because the AI fails at the other properties. But as those other properties are achieved (if we assume they can be), it becomes a significant matter, especially when being biological becomes the only differentiator.
Obviously, if we identify those traits that would allow us to ascribe human rights to something, we need to be able to test for them. If we can't, of what use are they in such an exercise? How, for example, can we test for consciousness? Does consciousness actually matter, or does the simple appearance of it suffice? After all, we can't know that anyone other than ourselves is conscious. How could we ascertain that another human is conscious, let alone an AI?
Similarly, how can we ascertain that an AI actually comprehends what it says? Chat-GPT can currently give a damn good appearance of understanding what it's saying. Unfortunately, it can also show all too clearly that it really doesn't have a clue, though things are improving. But this is an appearance of comprehension. Or is it? What does it mean to comprehend?
Take these as rhetorical questions, but they're worth at least considering as examples of how complex this issue actually is. They don't really speak to AI itself, but more to how we can recognise those important factors in the AI, and by extension, therefore, in ourselves. And if we can't yet do that for ourselves, other than through an a priori assumption, then what hope is there for AI?
