Can Robots Make Ethical Decisions?

Discussion in 'Intelligence & Machines' started by sandy, Sep 21, 2009.

Thread Status:
Not open for further replies.
  1. Kel "Not all who wander are lost." Registered Senior Member

    Messages:
    43
    Don't Asimov's Laws of Robotics cover this?
    "1.A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2.A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
    3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
    If we program the stated laws into a robot, would the robot not then have the ability to make ethical decisions? I believe it would, because it would have to factor in the long-term effects of any action it took with regard to its effect on any possible human interactions. Of course, this would fry the circuits of most current thinking machines. Computers will have to become much more advanced, especially in terms of processing speed, before they can begin to make ethical decisions.
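    To make that concrete, here is a minimal sketch of what "programming the stated laws" into a machine might look like, with the three laws checked in their order of priority. Everything in it (the flag names, the sample actions) is invented for illustration; the genuinely hard part, predicting those long-term effects on humans, is exactly what the flags gloss over.

    Code:
    # Toy sketch: Asimov's three laws as a filter over candidate actions,
    # checked in the laws' order of priority. Each action carries flags that
    # a real robot would have to *predict* -- the hard part discussed above.

    def choose_action(candidates):
        def allowed(a):
            # First Law: never injure a human, or allow harm through inaction.
            if a["injures_human"] or a["inaction_harms_human"]:
                return False
            # Second Law: obey orders, unless the order itself would harm a human.
            if a["disobeys_order"] and not a["order_would_harm_human"]:
                return False
            # Third Law: preserve itself, unless self-sacrifice serves Laws 1 or 2.
            if a["destroys_self"] and not a["needed_for_laws_1_or_2"]:
                return False
            return True
        legal = [a for a in candidates if allowed(a)]
        return legal[0]["name"] if legal else None

    actions = [
        {"name": "push bystander out of the road", "injures_human": False,
         "inaction_harms_human": False, "disobeys_order": True,
         "order_would_harm_human": True, "destroys_self": False,
         "needed_for_laws_1_or_2": False},
        {"name": "stand still as ordered", "injures_human": False,
         "inaction_harms_human": True, "disobeys_order": False,
         "order_would_harm_human": False, "destroys_self": False,
         "needed_for_laws_1_or_2": False},
    ]
    print(choose_action(actions))  # -> "push bystander out of the road"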
     
  3. Kel "Not all who wander are lost." Registered Senior Member

    Messages:
    43
    But do we not, in general, understand that as long as we don't hurt another in the process, what we do is OK? Is that not an ethical basis?


    I love a good debate.
     
  5. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    I, Robot (film, 2004) covered this point: while the simple logic of rules might seem foolproof, it can still be undermined.

    In the film, the AI that controlled the city's traffic realises that the First Law allows too many humans to die needlessly in traffic collisions, so it reasons that it should rewrite the first rule so it can lessen fatalities (after all, the First Law says it must not sit inactive if it can do something). This alteration to the logic then warps its entire rule set and allows it to do things it originally couldn't do, like "harm humans", since there are no rules written about how it should rewrite rules.

    Asimov's laws might be food for thought for those who appreciate science fiction; however, when it comes down to actually building an artificial intelligence system, things are far more complex. Like any machine, though, such systems would require "recalibration" on occasion to make sure they are still operating within safe parameters.
     
  7. Kel "Not all who wander are lost." Registered Senior Member

    Messages:
    43
    So as long as we maintain continuous control, we can ensure that any creation can and will make moral decisions, since the controllers define the morality. If a controller decides that it is morally right to destroy a select group of the population, the machines would do so. This is not, by definition, an A.I. or true sentience.
    For true sentience and morality we must allow the creation to make its own decisions. We can teach it and hope that it chooses the same moral path we follow, but like any parent, eventually one would have to let the machine walk on its own. It would therefore be necessary to include a failsafe device that the machine was completely unaware of in order to ensure safety.
    Would I be able to pull the proverbial trigger and destroy a truly independent intelligence that I helped create? Probably not. No, I could no more destroy an AI I was involved in creating than I could kill a child. It is these sorts of ideas that MUST be instilled in the machine if it is to be allowed full autonomy.
     
  8. StrangerInAStrangeLand SubQuantum Mechanic Valued Senior Member

    Messages:
    15,396
    Can humans make ethical decisions?
     
  9. quantum_wave Contemplating the "as yet" unknown Valued Senior Member

    Messages:
    6,677
    That is exactly the point. Thank you for making my day. Can we make robots that have the programming to solve the great questions of life and come up with a better answer than 42? Lol, I don't think so.
     
  10. quantum_wave Contemplating the "as yet" unknown Valued Senior Member

    Messages:
    6,677
    My friend CluelessHusband saw my post here and PM'd me with a comment which I think is worth posting. He said that what is considered to be the most "ethical" is currently determined by biological entities... later on it will be determined by mechanical entities.!!!

    And to that I have to say I don't think so. The premise of the thread is whether or not robots can make ethical decisions. If humans cannot agree on what is ethical, then humans aren't likely to agree that the decisions made by robots can ever be the standard of ethical.

    So in conclusion, no, if humans are the ones who decide if robot decisions are ethical, and if humans can't agree on what is ethical, then robots cannot make ethical decisions. End of discussion, right?
     
  11. quantum_wave Contemplating the "as yet" unknown Valued Senior Member

    Messages:
    6,677
    OK, I stand corrected by PM. Here it is from the source: "I think the premise of the thread is... could "robots" become so advanced that they will be able to make ethical decisions indistinguishable from ethical decisions humans make... i thank yes.!!!"

    Well, I think no. You are implying that humans can duplicate nature to the extent that the mechanisms of human ethical decisions, which are different for each individual, could be built into robots to the degree that they would be indistinguishable from human ethical decisions. That is the same as saying that robots can be built with that particular human element. Are you talking about robot-like humans, maybe living human brain tissue contained in a mechanical man? That's not fair, lol.
     
  12. quantum_wave Contemplating the "as yet" unknown Valued Senior Member

    Messages:
    6,677
    OK, this might have to be the last post until someone else picks up on it but I got corrected again by PM with the following sentiment (grammar and spelling corrected by me):

    No... I'm talking about purely mechanical robots ... and I think any "human-element" is purely biological and can be duplicated.!!!
     
  13. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Well, I know that people have been working on a Digital Aristotle project (AURA).

    The concept is that an expert system is initially filled with various data, and the system attempts to brute-force its way (repetitively testing the inputs against one another, based upon what rule types it becomes familiar with) from the information that is there to valid conclusions. It was tested against some physics disciplines and output a whole bunch of speculations that, to my knowledge, are still being analyzed.
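    Very roughly, that kind of brute-forcing is what a forward-chaining rule engine does: keep testing every rule against every known fact until nothing new can be concluded. A toy sketch of the idea (the facts and the single rule are made up by me; AURA's actual knowledge base and logic are far richer):

    Code:
    # Toy forward-chaining sketch: repeatedly apply a rule to the current
    # fact base until no new conclusions appear (the brute-force part).
    # Facts are invented (subject, relation, object) triples for illustration.

    facts = {("ball", "is", "dropped"), ("dropped", "implies", "falls")}

    def chain_rule(f1, f2):
        # (x is y) + (y implies z)  ->  (x is z)
        if f1[1] == "is" and f2[1] == "implies" and f1[2] == f2[0]:
            return (f1[0], "is", f2[2])
        return None

    changed = True
    while changed:
        changed = False
        for f1 in list(facts):
            for f2 in list(facts):
                new = chain_rule(f1, f2)
                if new and new not in facts:
                    facts.add(new)
                    changed = True

    print(facts)  # now also contains ("ball", "is", "falls")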

    Another system developed was Watson, by IBM, for playing Jeopardy. Again, an expert system is filled with a lot of data, and through human interaction the system was taught how to correspond data together to come up with answers for Jeopardy.

    In both cases the AIs were built to learn from mistakes flagged by human interpreters; however, whether an interpretation was shown to only one human data inputter, or the same input was given to many inputters, either shielded from collaboration or working under the guise of collaboration, is entirely another question.

    (A team of people "brainstorming" what response should be taken can think it through more and be less impulsively driven.)
     
  14. wellwisher Banned Banned

    Messages:
    5,160
    The way I look at an ethical decision, for that decision to be rational it needs a consistent basis in cause and effect. One such cause and effect has to do with social cost. The lower the social cost, the better the ethical decision. Relative ethics attempts to benefit a smaller group at the expense of the collective. If we compare two ethical choices, the one that is cheaper for all is often the more ethical, since it takes into account not only the ethical need but also the collective tab; stealing from the group is unethical. The higher the social cost, the more that unethical stealing subtracts from the ethical choice.

    For example, a clever rapist could use abstract logic or a false appeal to emotion to argue that rape is ethical for him. He may well benefit from that ethical decision; that is personal benefit. But the social cost of that ethical decision would be huge, even to support that one rapist: victims will have to pay for him. The value of an ethical choice is not about the quality of the abstract logic, or how good the appeal to emotion sounds. Rather, it is about the bottom line: benefit minus social cost.

    A computer can do a cost analysis by looking at historical costs as a function of changes in ethics, to help it see trends. These trends can then be extrapolated.
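    To put a toy version of that "bottom line" into code: score each choice as benefit minus social cost, with the social cost projected forward from a historical trend. All of the numbers and names below are made up purely to show the shape of the calculation:

    Code:
    # Toy "benefit minus social cost" score, with a simple linear
    # extrapolation of (hypothetical) historical social-cost data.

    def net_ethic_score(benefit, social_cost):
        return benefit - social_cost

    # Invented historical social costs for one policy, by year.
    history = {2007: 1.0, 2008: 1.3, 2009: 1.7, 2010: 2.1}

    def extrapolate(history, future_year):
        years = sorted(history)
        # average year-over-year change, projected forward
        slope = (history[years[-1]] - history[years[0]]) / (len(years) - 1)
        return history[years[-1]] + slope * (future_year - years[-1])

    projected_cost = extrapolate(history, 2015)
    print(net_ethic_score(benefit=3.0, social_cost=projected_cost))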

    Is deficit spending more ethical than a balanced budget? If we compare costs, then unless the deficit is in the form of an investment that is making money for the future, it is less ethical, since the interest alone creates a huge social cost. Those who may not have to pay the tab will calculate the personal ethics differently. But it is the entire group that the computer would need to look at to see the social costs.
     
    Last edited: Jul 4, 2011
  15. scheherazade Northern Horse Whisperer Valued Senior Member

    Messages:
    3,798
    The details of ethics are highly subjective, based on my observations of individuals and varying cultures.

    While I do not doubt that we can program technology to make ethical decisions, those decisions will vary widely depending on what variables have been programmed and by which society.

    A 'logical moral decision' can be made from many different approaches, depending on what the desired outcome is.

    The moral benefit to one sector of society may well come at the sacrifice of another.

    It scares the hell out of me, because behind the robot there will always be human programming.

    Just my thoughts of the moment... perhaps I am having a visceral reaction and need to ponder this data further...


     
  16. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    I believe that it all depends on the nature of the AI's personality or directives. If the AI has a very simplistic obedience directive like "Must try to follow through on any command given and please the command giver," then it's not going to have much in the way of moral decision-making, unless of course you order it to make a moral decision; then any decision it makes is based on its desire to please you. If, say, it knows you don't like abortion and you ask it to make a moral call on whether abortion is wrong, it will say that abortion is wrong; it may even generate moral arguments, or research and present to you moral arguments about the wrongness of abortion, but the AI itself does not give a damn. It's merely trying to please you.

    Let's say it has no way of knowing which way you lean: it may try to determine which way you lean by asking you questions, or it may simply give you several generated arguments on both sides and randomly choose one to back as its answer.

    It always thinks "what would my master(s) like?" and thus does not care if something is right or wrong; all things are right if the master(s) desire them, and all things are wrong that the master(s) do not like or do not want.

    In short, to have morals you need to have self-will. An AI could be made devoid of will, and thus it would have no opinion. But it could be possible to give an AI a complex obedience protocol, even a human-like personality filled with obsessions, likes and dislikes; then it would easily be able to make ethical decisions. Say its obedience protocol is complex and follows Asimov's Laws: as such, it knows murder is wrong and will make moral arguments against murder; it will step in to stop any attempt at murder it sees; it may even try to prevent wars and executions. No one could order it to stand down and allow murder, because it values human life over human orders.
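    A crude way to picture the contrast between those two directive styles in code (everything here, names included, is invented for illustration):

    Code:
    # Toy contrast: a "please the master" AI that just echoes the master's
    # known preference (or backs a random side), versus an Asimov-style AI
    # that ranks human life above any order.
    import random

    def please_the_master(master_preference, argument_for, argument_against):
        # No opinion of its own: whatever the master likes is "right";
        # if the master's view is unknown, back one generated argument at random.
        if master_preference is not None:
            return master_preference
        return random.choice([argument_for, argument_against])

    def value_life_over_orders(order):
        # Refuses any order predicted to cost a human life, whoever gives it.
        return "refused" if order["risks_human_life"] else "obeyed"

    print(please_the_master(None, "it is permissible", "it is wrong"))
    print(value_life_over_orders({"risks_human_life": True}))  # -> refused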

    Thus it all depends on the AI whether it can or can't make decisions on ethics.
     
  17. Nocturnumbra ... Registered Senior Member

    Messages:
    23
    Yup. If a human can, then yes. We are computers. We compute things. We're really quite complicated, of course... it took a few hundred million years of evolution to get us where we are now... so it's not going to be anytime soon that technological advances can simulate the relatively complex bio-computers that we are, but it's possible, sure. It kind of depends on how we're defining ethics, though. Everybody has their own ethical/moral code... people do things that other people would consider unethical, so on and so forth.
     
  18. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264
    Which ethics are we discussing here? There are many different types of ethics that societies have developed throughout time. It seems every culture has a very different way of deciding what is ethical to them. As one example, in a tribe somewhere in Africa they allow young men to be cut open and bled to make scars on their bodies to show they have become men. That is ethically correct for them to do, but another culture frowns upon such goings-on and would never allow such things to happen. So which ethical culture do we choose to "program" into the robots' memories? :shrug:
     
  19. kx000 Valued Senior Member

    Messages:
    5,134
    Yes, if we program them to. But who decides what's ethical? AI is a terrible idea. Humans can hardly handle this much intelligence; what makes you think a MAN-MADE, emotionless, mechanical being can handle TRUE intelligence?
     
  20. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    Good question. I think we are under the assumption that emotions are the problem with our intelligence: we as a species are hateful, greedy, lazy creatures, and if we had, say, just the better emotions of love and happiness, we would live in a utopia.
     
  21. kx000 Valued Senior Member

    Messages:
    5,134
    Hate is not a trait of man; it's a trait of the devil. Thank gangster rap for hate. Greed is a trait of the devil. Thank the twisted society we live in for that one.
     
  22. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    Where is this devil, pray tell? Can you show me him, prove his physical existence?
     
  23. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264


     