Can Robots Make Ethical Decisions?

Discussion in 'Intelligence & Machines' started by sandy, Sep 21, 2009.

Thread Status:
Not open for further replies.
  1. sandy Banned

    Messages:
    7,926
    That's been done quite a few times already, no? And then what after you do that? I'm trying to follow the progression.
     
  3. spidergoat pubic diorama Valued Senior Member

    Messages:
    54,036
    No, it hasn't been done. We do understand some aspects of it, but there is no complete theory of the mind.
     
  5. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    Yes... because in reality there isn't anything besides culture/environment and DNA!
     
  7. sandy Banned

    Messages:
    7,926
    What are we missing? What are we not getting?
     
  8. sudevkk Registered Senior Member

    Messages:
    6
    What do you think about this?

    When we (humans) say ethics, it applies to us, to the society that we built.
    Now suppose that one day we succeed in making an artificial brain as good as a human's. What about ethics then? Since the brain was built by a person, he programs it to behave according to human concepts. So there is no free will there. The robot is not actually making any ethical decisions; it is programmed to do so.

    Now suppose there is no such program trying to control the robot in any moral way; it just has a free brain. Then aren't we simply adding a new species of animal to nature? One that may evolve to a higher level over many years, or may not survive at all? And in either case, will they co-exist with humans?

    Or what about this: create robots with an equal brain and leave them on some other planet (or maybe on Earth itself, but arranged so that they never see humans). Then, if they survive,
    aren't they going to evolve exactly as humans did? I mean making ethical decisions and so on. So what do we have then: a robot, or an animal?
     
  9. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    Is "consciousness" somptin that can not be programed into a robot.???
     
  10. Omega133 Aus der Dunkelheit Valued Senior Member

    Messages:
    6,281
    I haven't been in this conversation long, so I don't know if you covered this already, but what about the issue of computer-controlled missile silos making ethical decisions?
     
  11. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    Originally Posted by draqon
    robots with free will

    Depends... you think we have it... so what's your definition of "free will"?
     
  12. joepistole Deacon Blues Valued Senior Member

    Messages:
    22,910
    Can robots make ethical decisions?

    A fascinating question, and one that must be answered at some point in the near future. I think ethical behaviors are a must for machines capable of artificial intelligence. Ethics constrain behaviors, limiting them to behaviors that benefit society.

    The question I have is whether a machine capable of artificial intelligence would develop ethics on its own, and what those ethics would be. My guess is that if the machine were not social, not interacting with other machines or humans, it would not exhibit ethical behaviors unless an artificial ethical constraint was imposed on it by its creators.
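
    As a toy illustration of what such an imposed constraint might look like (the action names, effect labels, and utilities below are invented for the example, not anything proposed in this thread), the constraint can be written as a filter that the machine's decision procedure is not allowed to bypass:

    Code:
    # Hypothetical sketch: an externally imposed ethical constraint as a
    # filter over candidate actions. Rules and action names are made up.
    FORBIDDEN_EFFECTS = {"harms_human", "deceives_human"}

    def is_permitted(action):
        # An action is permitted only if it has no forbidden effect.
        return not (set(action.get("effects", [])) & FORBIDDEN_EFFECTS)

    def choose_action(candidates):
        # Pick the highest-utility action that passes the constraint.
        permitted = [a for a in candidates if is_permitted(a)]
        if not permitted:
            return None  # refuse to act rather than violate the constraint
        return max(permitted, key=lambda a: a.get("utility", 0))

    candidates = [
        {"name": "shortcut", "utility": 10, "effects": ["harms_human"]},
        {"name": "safe_route", "utility": 7, "effects": []},
    ]
    print(choose_action(candidates))  # -> the "safe_route" action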
     
  13. sandy Banned

    Messages:
    7,926
    Good question and point.
     
  14. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    A machine that exactly mimics human "consciousness" would need to be capable of being wrong... but such a flaw won't be desirable, much less necessary, for machines to eventually be able to out-think a human!
     
  15. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    Not only does one need to be able to be wrong in order to learn, but one must be able to suffer.

    Our thoughts, as I pointed out, are not discrete memories, they are bits and pieces that are reconstructed at the point of "recollection".
    We learn by our experiences burning themselves into our minds and building patterns which we recognize.
    They burn themselves into our minds through emotional branding - they get associated with pain and pleasure.

    For an entity to be aware and intelligent, it must be conscious, able to experience the world first-hand, have some level of emotional attachment to those experiences, and be able to make mistakes.

    What, then, is the point?
     
  16. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    To learn one must be able to assess value.
     
  17. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    For clarity, give a simple example which describes what you mean.
     
    Last edited: Sep 28, 2009
  18. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    What do you think consciousness is?
     
  19. TBodillia Registered Senior Member

    Messages:
    159
    Just about everything.

    An example: I look at your picture, your avatar, and I instantaneously recognize it as Sandy. I don't have to look at the hair, then the eyes, then the nose, the mouth... I see the picture and identify it. The computer has to scan it and look at specific points programmed into it, usually the triangle formed between the eyes and the tip of the nose. Then it has to search its memory, comparing against every single stored face (scanning the full images each time would take forever) until it finds a match. How fast it searches depends on its processor speed. Even the dumbest person around can instantly identify people they know.
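
    A rough sketch of that kind of brute-force matching, in Python, with the landmark coordinates and the stored "database" invented purely for illustration:

    Code:
    import math

    # Each face is reduced to the eye-eye-nose "triangle" (its three side
    # lengths); an unknown face is compared against every stored entry.
    def triangle_signature(left_eye, right_eye, nose_tip):
        dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
        return (dist(left_eye, right_eye),
                dist(left_eye, nose_tip),
                dist(right_eye, nose_tip))

    def find_match(unknown, database, tolerance=10.0):
        # Linear scan over every stored face: the slow part described above.
        best_name, best_error = None, float("inf")
        for name, stored in database.items():
            error = sum(abs(u - s) for u, s in zip(unknown, stored))
            if error < best_error:
                best_name, best_error = name, error
        return best_name if best_error <= tolerance else None

    database = {
        "sandy": triangle_signature((30, 40), (70, 40), (50, 70)),
        "spidergoat": triangle_signature((28, 42), (74, 41), (52, 75)),
    }
    unknown = triangle_signature((31, 40), (69, 41), (50, 69))
    print(find_match(unknown, database))  # -> "sandy"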

    http://www-formal.stanford.edu/jmc/whatisai/whatisai.html
    is a Stanford link discussing AI.

    "Q. How far is AI from reaching human-level intelligence? When will it happen?

    A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.

    However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved."

    Then there is this: an ethical decision is an emotional decision. Even if we ever develop AI, will we ever be able to program computers with emotions? Would you want to? Would you really want your vacuum cleaner to nag you about your housekeeping?
     
  20. cluelusshusbund + Public Dilemma + Valued Senior Member

    Messages:
    7,999
    By the time "computers" can be designed to have emotions, I suspect it would be computers doing the designing, because "pure" humans will be a dying breed... and emotions as "we" know them will also be undesirable!
     
  21. sandy Banned

    Messages:
    7,926
    Very good.

    LOL about the vacuum.

     
  22. kmguru Staff Member

    Messages:
    11,757
    Humans do not have the brain capacity to program a computer to act human. If that happens, it will be when a human devises a process to upload his or her brain to a computer-like device.

    It will then take that device to develop the next generation of AI, which will be ahead of us by something like the intelligence gap between monkeys and humans.

    It is similar to the human-designed robots that manufacture cars and other devices to a level of sophistication beyond what a human can achieve manually.
     
  23. sandy Banned

    Messages:
    7,926
    That's really good. I hadn't thought of that, yet it seems like the simplest, most common-sense response.
     
