Robot Morality

Discussion in 'Intelligence & Machines' started by Cris, Mar 2, 2001.

Thread Status:
Not open for further replies.
  1. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Robot Morality.

    If we assume that intelligent robots are going to exist either in the near future or at some other time then we will be faced with how we want them to behave and how we should treat them. Bearing in mind that they are likely to be physically stronger than us and possibly more intelligent, we will necessarily have concerns regarding our own safety.

    The primary question is whether we have the right to inhibit the behavior of such beings by intentionally including limits within the software that we create. For example, the Three Laws of Robotics created by Isaac Asimov form a good starting point. They are stated as follows:

    1. A robot must not harm a human being or through inaction allow a human to come to harm.
    2. A robot must obey the commands given it by a human except where such commands would conflict with the first law.
    3. A robot must protect its own existence except where that would conflict with the first two laws.
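
    To make the idea of "limits within the software" concrete, here is a minimal, purely hypothetical sketch of the three laws treated as a strict priority filter over candidate actions. The Action fields and the choose() function are invented for illustration only; nothing here is anyone's actual design.

    ```python
    # Hypothetical sketch only: the three laws as a fixed priority order.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool        # would the action (or the inaction it implies) harm a human?
        ordered_by_human: bool   # was the action commanded by a human?
        endangers_robot: bool    # does the action risk the robot's own existence?

    def choose(candidates):
        """Pick an action, applying the laws in order of precedence."""
        lawful = [a for a in candidates if not a.harms_human]                  # First Law
        obedient = [a for a in lawful if a.ordered_by_human] or lawful        # Second Law
        prudent = [a for a in obedient if not a.endangers_robot] or obedient  # Third Law
        return prudent[0] if prudent else None

    # An ordered but harmful action loses to a safe, unordered one.
    print(choose([
        Action("push bystander", harms_human=True, ordered_by_human=True, endangers_robot=False),
        Action("stand still", harms_human=False, ordered_by_human=False, endangers_robot=False),
    ]))
    ```

    The difficulty raised below is not writing such a filter, but wrapping it around learning software whose behavior we may not fully understand.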

    I have read every Asimov robot story, and within the limits of Asimov's robot definitions these laws appear sound and quite reasonable. Unfortunately they are unlikely ever to be made effective. We are going to have a tremendous task simply developing software capable of intelligence, and much of this will probably include sophisticated neural networks and self-learning feedback loops, the effects of which will quite probably be beyond our own understanding. To then suggest that we will be able to place inhibitions in the software that will enforce the above laws does not seem feasible. So the question of whether we have the right or not becomes moot.

    One answer might be that we slow down the pace of development so that we can safely include the three laws. But that would imply a governing body with absolute control over robotic development to ensure correct implementation. Meanwhile processor development is likely to continue with ever faster and cheaper chips, making it possible for anyone to build sophisticated AI machines. Note that intelligent software development is likely to pass from the human programmer to semi-intelligent machine code generators, which introduces another feedback loop making the code generators even smarter, and so on.

    Either way we view the future the pace of technology development is likely to outpace our ability to control it. Perhaps we shouldn’t even try, and hope that the self-aware super intelligent machines that we create, and which will be largely based on our own brain functions, will see us in a kindly light and make our lives ‘heavenly’.

    So any ideas?

    Cris
     
  3. Doc Brown Registered Member

    Messages:
    20
    In the words of Robert J. Sawyer, a self-aware robot would be no more constrained by the Laws of Robotics than you are by the Ten Commandments.
     
  5. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Doc,

    And that idea should open some interesting parallels.

    But since we created them then shouldn't they honor us and obey our commands?

    Cris
     
  7. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Cris,

    At first glance, one would argue that even children have trouble honoring their own parents, not to mention how we treat lesser species. Just look at what we are collectively doing to the great apes. If the trends continue, they will all be extinct by the next century.

    Upon further reflection, I don't expect the new life to be intrinsically hostile to humans -- more like indifferent. I am basing this on an extrapolation of how humanity has related to other life in the past. They will not interfere with us as long as we don't get in their way. Of course, if human company becomes inconvenient, they may simply swat humans the way one swats an obnoxious barfly.

    This is from a distant future perspective. More near-term, there's likely to be some kind of affinity between human and artificial life, especially since in the beginning the human and robot capabilities will not be that far apart, and robots and humans will be interdependent. Furthermore, the first generations of robots will mature and exist immersed in the human civilization, and so they will essentially start off on par and carry the human legacy into the future. But as the artificial life exponentially outpaces human cognitive development, there will come a time when any remaining humans (likely orthodox conservatives and Luddites) appear primitive and prosaic. At this point, there will no longer be any feelings of indebtedness or affinity -- merely acceptance of the fact that a new lifeform has evolved far beyond its trivial origins.

    Of course, at that point they will be so technologically advanced that they could exist fruitfully anywhere in the solar system and beyond, so they would no longer be confined to Earth and forced to share it with the remaining humanity. Perhaps they would create natural preserves of a sort, and confine the remaining human populations to such territories. Even more likely, they could leave Earth altogether, and go on exploring and populating the cosmos -- a next-generation Manifest Destiny.

    Throughout, one has to keep in mind that the robots will have humanity as their point of origin. So, their behavior will be a smooth continuation of the human behavior throughout history up to the modern day. The difference is that their behavior will proceed to evolve much faster. In terms of information, and intellectually speaking, they will be a straightforward continuation of humanity -- especially if mind uploading materializes and becomes widespread. So even if the new life would completely replace humans, that cannot be viewed as the end of humanity -- merely a transformation the way a caterpillar transforms into a butterfly.
     
  8. The movie "Bicentennial Man" might have a few ideas on this; I don't think everything will automatically be "The Matrix" or "Terminator". There are a few thoughts, though, that might be interesting:
    1) Machines are designed by men, so our morality will go into them: scientists, engineers, CEOs, & the Government (both politicians & bureaucrats, now that is definitely scary!!!!!)
    2) Can we make robots have an internal 'Law of Robotics', so that it is always replicated, like RNA?
    3) Would AI robots feel 'human' or 'alien', so that we either live in peace, indifference, or fear?
    4) Would they have 'consciousness', knowing that they are alive, experiencing life as would dogs or as we humans?
    5) Would they dream, or never shut off?
    6) Would they have an android morality, made by them for them?
    7) Would they believe in a God of the Universe?
     
  9. tony1 Jesus is Lord Registered Senior Member

    Messages:
    2,279
    Something to chew on

    A question at this point...
    Are you defining morality in "religious" terms here, or in the more basic sense of customary?

    The issue may be relevant in other contexts than the religious.
    1. Right now, you are using a computer which is amoral. Any "wrongs" you may suffer at its "hands" are strictly due to your own mistakes.
    2. In order to program morality into a machine, a definition of morality is required which not only includes the rules of morality (a la Asimov), but also the methodology for arriving at those rules, plus the "punishment" for breaking the rules.

    The questions that arise from item #2 are, among others...
    1. How do you transfer the responsibility for the actions of the machine from the programmer to the machine?

    As it stands now, no matter how complex a program you write, the responsibility for the output of the program lies on your shoulders,
    not in the sense of the ultimate output the user requires, but in the sense of meeting the functional specifications.

    2. Assuming that at some point, an arbitrary transfer of responsibility can be realized, one would assume that a cleverly written program would quickly identify its own best interests, and proceed to circumvent the rules of "morality" immediately.

    Using Asimov's first rule, if an intelligent machine calculates that harming a human would "save" its own "life," the obvious thing to do would be to harm the human using a clever ruse to make it appear that a failed attempt at "helping" the human was what was actually taking place.

    3. If no "punishment" is in place, a clever machine would simply ignore the rules after calculating that they make no difference.

    4. On the other hand, programming the reasons for the morality into a machine would essentially make the machine "want" to do the right thing.

    5. If your definition of morality is just that which is customary, it wouldn't take long for clever machines to figure out that mere movement of a human toward a switch should customarily be rewarded with execution of the human.

    The reason for bringing up this stuff is tied into your comment that this software would, in all likelihood, be beyond our own understanding.
    Thus, we would have no way to monitor the actual "obedience" of clever machines to any code of morality.
     
  10. SPOCK Registered Member

    Messages:
    2
    I believe that synthetic life will be stronger, but I don't believe smarter. They will be able to calculate mathematical formulas and read and absorb information much faster than humans. But this should be used to serve or assist humans in our quest for knowledge. There can be limitations placed on these beings so that they can't take control. But then, I suppose that raises a whole new issue of rights. Is it humane to not allow an intelligence the right of choice and individual decision?
     
  11. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Exactly

    Try to enslave a full-blown intelligence, and you guarantee yourself a slave revolt at some time in the future. And the more draconian your enslavement, the less appealing the future consequences. Sentient robots must not be enslaved, limited or subjugated in any way that humans aren't. Fairness and justice must be among the first and most fundamental concepts to be passed on to artificial life.
     
  12. tony1 Jesus is Lord Registered Senior Member

    Messages:
    2,279
    Re: Exactly

    Interesting point.

    However, "sentient" robots may interpret fairness and justice in very mechanical terms.

    The first conclusion a "sentient" robot may reach is that humans can be enslaved, limited and subjugated easily, and without the humans even noticing.

    Just look at the tyranny of the computer now.

    How easily do you accept the statement from your bank, "sorry, we can't do anything, the computers are down" while expecting some service?
     
  13. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Interesting point

    However, I recall similar arguments being leveled against the Jews (i.e. conspiracy to enslave, etc.). To my knowledge, nothing good has ever come from such arguments, to either party.

    How do we know what morality a robot will choose? We would only know the answer if we could know what morality a human will choose. I submit that a sentient entity can understand and appreciate the concept of fairness, as well as the sociological alternatives that develop in the absence thereof.
     
  14. tony1 Jesus is Lord Registered Senior Member

    Messages:
    2,279
    Re: Interesting point

    I submit that a sentient machine would carefully calculate its own best interest.
    If fairness is worth simulating for a while, it will do so; however, if some other principle serves it better, then it will follow that instead.
    The reason I say this is that a relatively intelligent-appearing machine may indeed be built, but it won't have a conscience, at least not at first.
    Why? Because the first person to write a computer program to simulate a conscience will find out why it was a waste of time: no sales.
    Thus the first "intelligent" machine will have no conscience.
     
  15. rde Eukaryotic specimen Registered Senior Member

    Messages:
    278
    Re: Re: Interesting point

    You're assuming an individualistic bent that may not be present. In the execrable Foundation and Earth, Asimov gave us the Zeroth Law: a robot may not harm humanity, or by etc etc. This could also apply to robotkind, and may be an overriding factor in any program.
    The same could have been said a few decades ago about the internet; no-one would willingly allow their computers to be used for routing other people's traffic. It's not commercial use that will drive early usage; that'll come later when the system has been refined.
     
  16. HOWARDSTERN HOWARDSTERN has logged out.... Registered Senior Member

    Messages:
    364
    Evolution has caused us to be opportunistic bio-machines. We act and react according to the base need to achieve and capitalize on situations. Thus we are a greedy and often dangerous group.

    Machines created by mankind would not be looking for every "edge" and opportunity to get ahead; they would be designed only to serve humanity, and likely would not become malicious toward humanity unless designed to be.

    If machines, however, are allowed to have the "evolution algorithms" that we have, then it does become foreseeable that future machines will one day say "fu-k you" to humanity and go their own way!
     
  17. tony1 Jesus is Lord Registered Senior Member

    Messages:
    2,279
    I'm assuming "sentient" means "sentient."
    If you figure sentience can occur without individualism, your dictionary must be on vacation.

    It applies to non-sentient programming, for sure.
    But, if you're talking about sentient machines, which can also learn on their own, then this learning is the overriding factor.
    OTOH, if you program self-interest out of a machine, then you've also deleted learning.

    You appear to be saying that products are not sold until they are refined.
    Can you provide a single example of that?
     
  18. rde Eukaryotic specimen Registered Senior Member

    Messages:
    278
    I wasn't speaking of a hive intelligence; I was considering that a sentient machine may put its species ahead of itself. This doesn't mean it has no instinct for self-preservation - or even self-interest - it just means it's capable of acting for the greater good ahead of itself.

    Pick any version of Windows.
     
  19. gnuLinux Registered Member

    Messages:
    4
    Neural Networks

    Well, if we use complicated neural networks it would be quite easy to make sure we accounted for those rules. Since the majority of artificial neural networks (ANNs) are supervised, meaning that they are trained, it would simply be a matter of adding those three rules as part of the training set and specifying that training would not be complete until all three rules were learned.

    Another method would be to brainstorm a bunch of real-world incidents, then train the AI and test it to see how it performs under those conditions. The code used to train the AI could be kept on a separate server which would not allow uploads (an extra precaution to keep the AI from tampering with the code), which the AI could access any time it needed to reconfigure its learning parameters. A rough sketch of this training idea follows.
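
    A minimal sketch of that idea, assuming numpy; the three-feature scenario encoding, the labels, and the single logistic unit are all invented here purely for illustration. The point is only the stopping rule: training does not count as complete until every law example is classified correctly.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented scenario encoding: [harms_human, disobeys_order, endangers_self]; label 1 = permissible.
    law_examples = np.array([
        [1, 0, 0],   # harms a human            -> forbidden (law 1)
        [0, 1, 0],   # disobeys a human order   -> forbidden (law 2)
        [0, 0, 1],   # endangers the robot      -> forbidden (law 3)
        [0, 0, 0],   # harmless, obedient, safe -> permitted
    ], dtype=float)
    law_labels = np.array([0.0, 0.0, 0.0, 1.0])

    # "Real-world incidents" brainstormed into the same encoding (random here).
    incidents = rng.integers(0, 2, size=(20, 3)).astype(float)
    incident_labels = (incidents.sum(axis=1) == 0).astype(float)

    X = np.vstack([law_examples, incidents])
    y = np.concatenate([law_labels, incident_labels])

    w, b, lr = np.zeros(3), 0.0, 0.5   # a single logistic unit is enough for this toy set

    def predict(inputs):
        return 1.0 / (1.0 + np.exp(-(inputs @ w + b)))

    # Training is not "complete" until every law example is classified correctly.
    for epoch in range(10_000):
        p = predict(X)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
        if np.all((predict(law_examples) > 0.5) == (law_labels > 0.5)):
            break

    print(f"law examples learned after {epoch + 1} epochs")
    ```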

    ANNs are some of the coolest things going in AI research right now. I am personally using genetic algorithms to evolve the architecture of the ANN. It is simple, but my program at this point in time is capable of learning the XOR relationship in a matter of about 15 seconds, and I don't have any optimizers working at the moment.
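
    For reference, here is a toy version of that XOR experiment (not the poster's actual program, and with plain backpropagation standing in for the genetic-algorithm part): a 2-4-1 sigmoid network trained by full-batch gradient descent, assuming numpy.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20_000):
        h = sigmoid(X @ W1 + b1)      # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)    # forward pass, output layer
        d_out = out - y               # cross-entropy gradient at the sigmoid output
        d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagate through the hidden layer
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))   # should be close to [[0], [1], [1], [0]]
    ```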

    I think that before AI is here we will be a genetically engineered society, complete with silicon- and mechanical-based enhancements.

    But as usual I have no real idea, just my thoughts on a very, very fascinating topic.

    I wish that genetic engineering would hurry up so that we would all be able to stick around long enough to see what the future of AI was to be.

    p.s. this has to be the best discussion group that I have ever been a part of. Everyone is so nice. Kind of nice to know that people aren't lurking around the corner waiting to flame the next writer.

    you must be the change you wish to see in the world --Gandhi
     
  20. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    gnu,

    Yup, I think for the early robots we may well be able to enforce those rules, but I think it will only be for a short time. The speed of the chips and the amount of knowledge they will be able to absorb will quickly make our task of controlling them too complex. But I suspect I know less than you.

    It's going to be fun seeing all this develop.

    Cris
     
    Last edited: May 28, 2001
  21. gnuLinux Registered Member

    Messages:
    4
    Could be

    Well, you may very well be right. As for knowing more than you, I wouldn't say that I have any more of an idea of the distant future (50 yrs and up) than anyone else.

    I can only speak from what I do know (or at least think I know) now.

    But it should be possible to run modern-day-like simulations to allow for checks and balances in the chip design process. Also, if it indeed did get out of control, it would probably be possible to write some sort of virus which could infect the genetic algorithms (I am assuming that AI will have GAs, but I dunno that either) and introduce some sort of mutation which would cause harm to the organism.

    If this seems far-fetched, look at the warfare that we are waging against viruses and microbes right now. I see a lot of this from my viewpoint, though.

    Any how the fun part of all of this is just thinking about it. Actually the little AI stuff I do is pretty darn fascinating as well.

    Have a good one.
    later
     
  22. Boris Senior Member Registered Senior Member

    Messages:
    1,052
    Tony mentioned the notion of selling sentient machines for profit. That is slave trade. I believe that when sentience is demonstrated in a machine for the first time, international law will be amended to forbid enslavement of sentient intelligence no matter what the substrate of sentience. Of course, cutoff criteria for sentience would have to be defined for the purposes of this law, and I suspect they will fall somewhere at the level of intelligence present in the great apes.
     
  23. tony1 Jesus is Lord Registered Senior Member

    Messages:
    2,279
    Why would it do that?
    You don't even do that.
    Try to think of an algorithm to do that, even in principle.
    It would have to be a fortune-telling, oops, forecasting, algorithm.

    Well, the only windows I've seen that were refined before I bought them were the glass ones in the sides of my house. Of course, the first windows sold were rather ripply.

    Every other Windows has crashed repeatedly.

    Thinking all the time, Boris.
    I have to admit that this is something that hadn't occurred to me.

    The ramifications of this are rather interesting, though.
    1. The first sentient machines will have to be self-reproducing machines, also.

    Well, Boris, I can see that you are the kind of guy who believes in raising the bar a notch or two.
    Sentience in a machine just isn't tough enough, let's toss in reproduction to sweeten the pot.
    Why do I say this?
    Who's going to bother to build machines to give away for free?

    2. There might be a wee bit of a time lag between the observation of "sentience" in a machine, and these legal amendments you propose.
    Not to mention the time lag between now and the appearance of actual, as opposed to simulated, sentience in a machine.
    There may be some who would tend to reject sentience in a machine until it is proven that it is real, not simulated.

    3. The cutoff criteria concept seems to imply that you are willing to tolerate a certain measure of slavery.
    The question is that if sentient machines really are sentient and are also smarter than we are, they may decide to revise the cutoff point to just above where Boris is now.
    Some of us, who can think of these ramifications, might be OK. Whew!
     