Robot Morality

Discussion in 'Intelligence & Machines' started by Cris, Mar 2, 2001.

Thread Status:
Not open for further replies.
  1. Chagur .Seeker. Registered Senior Member


    Didn't H. Rider Haggard write a story a while back re. robots that came to help (protect?) humans, and the humans, instead of being grateful, revolted? Or was it some other author?
  3. BrainDrain Registered Member

    The following is a joke:

    Perhaps, to keep the robots tranquil and make them subservient by nature and not by force, we could instill in them a belief in a supercomputer in the sky that watches everything they do. If they harm humans or break the Robot Commandments, they will be sent to a place where their circuits will be overloaded for all time without being rendered inoperable. But if they do everything asked of them and have respect for their creator, their internal programming will be uploaded to a perfect server of limitless storage.

    Don't wish to offend any religious people ... Couldn't help myself. ;-p
    Last edited: May 31, 2001
  5. Chagur .Seeker. Registered Senior Member

    Why the hell not?

    It certainly has worked with a large majority of the wet-wares called 'humans' (though I have no idea why they were given that name).

    Also a joke ... I think ...
  7. PaulJ_85 Registered Member

    Hmm... here's an idea though.

    You said that we could possibly use a 'computerised' virus to kill off the AI if it got out of control; however, who are we using it against?

    If there's a virus that infects us humans ... we will research it and cure it.

    If there's a virus that goes around on a computer network, companies like Norton will try to find a cure for it.

    However, with both of the above, it takes time and knowledge for us humans to find the cure for the virus.

    If a virus was used against the machine, surely it could make a cure much faster than we ever could, or even counter-attack with a virus that could take down every computer with access to the internet.

    After all, just about everything is becoming 'connected'. Computers, phones, even televisions. The world we live in is built around computers. We use them for entertainment, we use them for education, we use them for just about everything.

    - Here we have hackers that wouldn't care about the FBI catching them, or feel guilt for their actions. Here we have a 'thing' which could take over the world.

    After all, if it takes after its creator, 'Human', we can only expect it to have some human properties. Some of the research being done at the moment is into 'AI robots' which are actually supposed to have feelings. If the robot was happy it would show a green screen and smile. If you hit the robot it would show a red screen and frown. However, in tests the robot would go 'red' for no reason, heading full speed at the human in front of it.

    - Could this be a sign of things to come?

    We take millions of years to evolve. They could take minutes.

    An interesting idea - but very dangerous.


    Paul J
  8. Einsteins brain Registered Member

  9. thecurly1 Registered Senior Member

    About AI revolt...

    I don't think we'll have to worry about AI robots for a few decades. Our real moral concern will be a human clone, but that's another subject for another thread. For starters, robots right now are fairly stupid. It'll be a while until they are intelligent enough to match humans; plus they are extremely expensive now, millions and millions of bucks. We've got time to think.
    I believe we give robots a little too much credit; we can program them. That is loads more power than we ever had over any human slave. Morality, obedience, and other things can be programmed. As long as there is a ban against pro-freedom programming, or a fail-safe to weed out urges to become emancipated, robots will be safe. The costliness of them will prevent any humanoid bots from being mass produced until the 2050s or later.
    We ask human physicists to comprehend the universe, and priests to help us with our problems. AI computers will be assistants and nothing else. You'll never see the day when I ask one about God, or love. Flesh and blood is better for emotions than steel and circuits.

    "Treat others as you would want to be treated" --Jesus
  10. Science Geek Registered Member

    Robotic Ethics

    I think that it is semi-irrelevant what you program into a robotic mechanism. In the future it is most probable that we will program different types of robots to do different things. A big issue with the ethical question is whether or not the robots have free will. If they don't, one needn't worry. If they do, you could program the 3 laws for robots into them, but it is still at the robots' discretion whether they want to follow them or not. With a complex enough system, perhaps the robots could evolve in their own right, reproduce, and grow into different types of being as needed.
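The distinction above, between a rule enforced by construction and a rule left to the robot's discretion, can be sketched in a few lines of Python. Everything here (the action names, the forbidden set) is invented purely for illustration:

```python
# A hard-wired safety filter: the check runs before the robot "decides"
# anything, so there is no discretion involved. Names are hypothetical.

FORBIDDEN = {"harm_human", "allow_harm_through_inaction"}

def execute(action):
    """Refuse forbidden actions unconditionally; pass everything else through."""
    if action in FORBIDDEN:
        raise PermissionError(f"blocked by first law: {action}")
    return f"executing {action}"

print(execute("fetch_coffee"))   # an allowed action passes through
```

A learned or "discretionary" rule, by contrast, would live inside whatever weighs the robot's preferences, where nothing guarantees it wins.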
  11. Cris In search of Immortality Valued Senior Member

    Re: About AI revolt...

    Hi Curly,

    A few decades? That will be too long. The main past limitation on AI has been inadequate computing power. That is changing extremely rapidly. Computing power continues to double, now every 12 months. At that rate we will achieve human-brain-equivalent power within 3 decades. In the meantime we will see many forms of robots with increasing intelligence. I estimate it will only take 10 years before robots begin to exhibit signs of significant intelligence. But self-awareness and issues of morality will take a little longer – primarily because of the necessary computing power that would be required.
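The doubling argument above is easy to put in back-of-the-envelope form. Both figures below are loose assumptions for illustration, not measurements: a hypothetical baseline machine and one common ballpark for brain-equivalent throughput (estimates vary widely).

```python
# Rough sketch of the "doubling every 12 months" claim above.
start_ops = 1e9    # assumed baseline machine, operations per second
brain_ops = 1e16   # assumed human-brain equivalent (estimates vary widely)

years = 0
ops = start_ops
while ops < brain_ops:
    ops *= 2       # one doubling per 12 months, as claimed above
    years += 1

print(years)  # about 24 doublings close a ten-million-fold gap
```

The point of the exercise is that exponential doubling makes even an enormous gap closable in a few decades; changing the assumed figures by a factor of ten only shifts the answer by three or four years.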

    Nah! I don’t see anything wrong with someone wanting a child who will have known features and basic personality traits. The child still has to start from an embryo and grow normally. Its environment and education will make it into a true independent individual. I cannot see objectionable issues.

    That apparent stupidity will fade quickly and robots will very soon be able to master very specific tasks with greater precision and apparent intelligence far greater than any human. Within 30 years they could potentially be our equals.

    Honda recently demonstrated a bipedal humanoid robot that was able to freely walk, negotiate obstacles, and stairs with apparent similar agility to that of humans. A proposed retail price was estimated at around $100K. The Sony robot dog, Aibo, retails at around $1500, granted it doesn’t do too much yet, but that will change very rapidly. Time to think? I very much doubt it.

    I’m not so sure about programming for morality, I don’t think we will have that much control. If the basis for their intelligence is neural networks then locating particular behavior traits may prove very difficult.

    I suspect that would be too difficult to police. Once robots are sold at retail outlets then hobbyists and others will find ways to adapt and re-program them.

    No, that is too far out, and you are seriously overestimating the costs. The demand will be high and manufacturers will begin with small special-task machines first, like toys and animals, and then move to more complex task-oriented machines like maids, helpers for other household chores, and chauffeurs. And don't forget the military – now that is frightening.

    Simple rule-based AI software does much of this already, and it can appear tantalizingly intelligent at times. And software that can replace first-line general-practice doctors is already on trial.
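A minimal sketch of the kind of rule-based "first line doctor" software mentioned above might look like the following. The rules, symptoms, and advice strings are all invented for illustration:

```python
# Each rule pairs a set of required symptoms with a canned piece of advice.
# Rules are checked in order; the first full match wins.
RULES = [
    ({"fever", "cough"}, "possible flu -- rest and fluids"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"headache", "stiff neck", "fever"}, "see a doctor urgently"),
]

def advise(symptoms):
    """Return the advice of the first rule whose conditions are all present."""
    observed = set(symptoms)
    for conditions, advice in RULES:
        if conditions <= observed:   # every required condition was observed
            return advice
    return "no matching rule -- consult a human doctor"

print(advise(["fever", "cough", "fatigue"]))
```

Systems like this can seem clever in conversation, but all the "intelligence" is in whoever wrote the rule table, which is exactly why they break down outside it.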

    If your age is less than 50 and you stay reasonably healthy then this will occur in your lifetime. Emotions are only neural patterns like any other thoughts. Once we have sufficient computing power, say 2010, then machines will be able to comprehend and exhibit simple emotions, although I doubt that that will be the first algorithms that will be developed or encouraged.

    A very bad policy, arrogant, and potentially quite dangerous. It assumes everyone has the same desires and values. A better quote would be “Treat others as they want to be treated.”

  12. Porfiry Nomad Registered Senior Member

    The crowning glory of information technology is the ability to cheaply and massively duplicate knowledge. Today, this is in the form of software whose physical presence is exceedingly subtle, but whose real-world influence is severe. Likewise, silicon is cheap, but the knowledge and engineering required to design a useful piece of silicon is expensive. Today, the industrial commodities of value are knowledge, skill, and wisdom.

    Applying the preamble to the case at hand, we can realize that while morality is almost certainly a difficult thing to 'program' (teach is a better word when dealing with AI), the nature of information makes it cheap to replicate. As a society, we put a great deal of effort into teaching our children morals, ethics, etc. The problem is, of course, that we have to do this for every child produced, and occasionally the process fails.

    The crowning glory of IT will become the crowning glory of AI. If it is possible to train a single AI properly, then it is trivial to replicate this AI a million times over (since the structure of the neural net defines behavior). Collectively, AI beings will be capable of evolutionary advances in behavior at speeds orders of magnitude greater than humans (where behavior becomes solidified after perhaps a decade and a half of life).
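The replication point above can be shown in miniature: a network's behavior lives entirely in its weights, so copying the weights copies the learned behavior exactly. The toy single-neuron example below uses invented "trained" values, not anything actually learned:

```python
import copy
import math

def neuron(weights, bias, inputs):
    """A single sigmoid neuron: weighted sum of inputs, squashed to (0, 1)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

trained = {"weights": [0.8, -0.4], "bias": 0.1}   # imagine these were learned
clone = copy.deepcopy(trained)                    # "mass production" is a copy

x = [1.0, 2.0]
assert neuron(**trained, inputs=x) == neuron(**clone, inputs=x)
print("clone behaves identically")
```

Compare this with raising a child: there is no equivalent operation that copies fifteen years of moral education into a second brain.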

    This is perhaps a social analogy to the movement in biology from indirect manipulation through sexual breeding to direct, real-time genetics. AI will allow real-time manipulation of sociology and psychology. And, of course, Real-Time Morality<sup>TM</sup>
  13. Chagur .Seeker. Registered Senior Member

    Aye, there be the rub. Considering what the ankle-biters are already capable of doing ... May the saints preserve us!
    The question is: How long before you know the AI has been trained properly? Too many experiences with code bugs that didn't show up for a good long while ... and then only under a very specific circumstance (like a novice user doing something the programmer never considered).
  14. thecurly1 Registered Senior Member

    Cris, my favorite atheist...

    Good reply, you stood firm in your opinions on all of my points, and didn't bash me. I'm so happy to be respected for once. All of your comments seem to be pretty valid, except for the last two, which deal with our favorite subject, religion.

    I am under 50, 15 as a matter of fact, but I don't believe that we'll be asking robots for advice on love or religion. They could give us an answer, but you can't call it real in my eyes. Religion can only be understood, or even hated, by humans because religion is an emotional concept. Furthermore, I don't believe robots can have any monotheistic religion, because of this simple foundation of religion (whether you agree with it or not): that they were created by God. They won't be created by God, but instead by us, humans. Humans are flawed creatures, and for that reason I don't think robots could worship us in a religious sense.

    Are emotions neural patterns? At their most basic level, yes. But can you reduce love, empathy, or even hatred to just random firings of sets of neurons? Emotions can be falsified and programmed, but can't be real in a human sense. To have real emotions, the standard will be any sentient being composed of DNA.

    You have to remember one thing about humanity that will spell trouble for robots: we are notoriously conservative people, by and large. If we couldn't accept blacks as humans for centuries, then what makes you think that machines, which aren't even Homo sapiens, will be accepted as people? I would bet money that they will never be regarded as human by a majority of the public. We will treat them as robotic slaves. Whether this is justified or not is not the point of this thread.
  15. Cris In search of Immortality Valued Senior Member

    The progress of robotic intelligence

    Hi Curly,

    15 huh? Well I’m over 3 times your age and I have 3 teenage daughters. I think it is superb that you are debating in this way. Keep it up if you can. A real debate will make you think hard and clearly. As for religious views, well you have a long way to go, and a lot more to learn and experience. I was a fairly outspoken devout Christian at your age so I suspect I can relate to your position.

    Why not? Ok, I can guess. For the answers to appear real you would have to accept the robot’s intelligence as an equal to humans. But no, that wouldn’t be enough either would it? You see love and emotions as being something more than just mere intellect. And of course you would always be aware that a robot would never possess a soul or spirit since we would know exactly how the machine was constructed, right? And how could something without a soul answer such soul-searching questions on things like love, right? OK let’s deal with these issues one at a time.

    1. Hardware for Intelligence: This is the easiest answer. If Moore’s law is maintained then we should achieve computing power equivalence with the human brain before 2030. See my thread on Brain Power in the Neuroscience forum. The date might be sooner or later, what is not in doubt is that it will occur at some time.

    2. Software for Intelligence: The US military assumed that this would occur in the 1950s. This followed the successful use of the first digital computers in Britain during WWII to break the Nazi codes. Some of the most brilliant minds on the planet were involved in those tasks, but it took a computer to break the codes. Unfortunately they vastly underestimated the amount of computing power that would be required for true emulation of human intelligence. Software development of artificial intelligence (AI) has been on hold ever since. We are just reaching the time when the necessary computing power is adequate for basic functions.

    3. AI. I have always disliked this term and there are indications that the term will fade away, unfortunately the latest movie AI will encourage the public to retain the term for much longer than it should. My view is that intelligence is absolute. Something is either intelligent or it isn’t. Saying that intelligence is artificial makes no sense. The distinction that could be made is between human intelligence and say, for example, machine intelligence (MI).

    4. The software development of MI will possibly take much longer than it takes for the hardware to be ready. But there are such projects occurring now in many countries. I strongly suspect that the military will be closely involved and will be providing a great deal of the funding for basic research. I know that my own company has significant resources dedicated to such AI tasks as visual recognition and self-directed navigation, plus other projects.

    5. Equivalence to human intelligence: Again there is little doubt that this will occur given the necessary computing power and software development progress. There are two major expectations in this area. The first is that AI itself will be used to help in the software development process. The rate of development will likely increase at a geometric rate, very similar to a recursive process. This will dramatically reduce the time to reach human equivalent intelligence. The second expectation is that once a working model has been achieved then it can simply be copied to make mass production of intelligent machines almost overnight.

    6. Beyond human intelligence: Yup, you've guessed it: why stop at mere human intelligence? The work will continue beyond HI, resulting in what has been termed super-intelligence. And it is here that the human race may be in danger. Will we be able to control what we have created or will a sufficient number of machines realize their potential and assume control? At this point we would no longer be the dominant intelligence on the planet. If we have done our job right then these machines will be benign and will care for us or at least tolerate us. If we have made a mistake then there is a real concern that the human race could be destroyed. This period has been termed the singularity – a short period of time in our near future when super-intelligence is created and beyond which lies the unknown that defies any form of prediction.

    7. Survival alongside super-intelligent beings: If these super-intelligent (SI) machines continue to increase their intelligence then it would not take long before they see the human race in a similar way to how we view chimpanzees. We would become increasingly irrelevant. Several views exist here: accept our subservient role, adapt ourselves with super-intelligent augmentations, or go the whole route and transfer our brain patterns into a machine (mind-uploading). See the forum on neuroscience and MU for more debate on this.

    8. Emotions: The human brain involves electrical activity, chemical processes, and hormonal processes. All these activities provide us with all our thoughts and emotions. Much of the development of MI software revolves around how the human brain functions. These functions are being emulated and transferred to the computer or robot. There doesn’t seem to be any reason why the functions that result in emotions would be ignored, since there is nothing particularly special about emotions, despite how Hollywood may want to portray robots of the past. Love, fear, anger, hate, jealousy, should all be functions included in MI machines. They will simply be the natural result of accurately transferring human brain activities to software.

    9. Soul and spirit: We already know all the organic components that comprise the human body. And we have some good ideas of how the brain functions, although there is much to be done. What we do know for certain is that, having accounted for all organic matter, there is nothing left that could represent a soul. Having accounted for everything, logic dictates that a soul has nowhere it can reside, at least not in the human body. Without some evidence or proof that such a thing exists, the most credible conclusion is that it does not exist, or that its existence remains unproven.

    I have little doubt that robots of the future will be able to match your intelligence and most probably be able to outthink you quite easily. They will have a full repertoire of emotions that they will be able to control and understand, and they will have little trouble answering any of your questions regarding love and similar emotions.

    But how will they deal with questions on religion? This is quite easy as well – since they are not subject to disease or aging they would see themselves as effectively immortal. In which case their view of religion must be that it is totally irrelevant. It would offer them nothing since the basis of every religion on the planet is to provide humans with a hope of immortality, an afterlife. It is the human terror of death and eventual non-existence that has prompted the creation of the mythologies that we know as religions. One of the greatest and most likely hopes for super-intelligence is that it will help solve the problems of human disease, which includes the disease of aging. If humans also achieve effective unlimited life spans then they too will share the robotic views of religion – irrelevant. But of course we will have to wait a few years to see if these super-intelligent machines agree with me, after all they might just be a touch more intelligent than me.


    Hope this gives you something more to think about.
    Have fun
  16. thecurly1 Registered Senior Member

    Cris, thanks for the compliment, I appreciate it.

    I think MI (Machine Intelligence) would be a better acronym for describing the intelligence of synthetic entities. Maybe others will ask robots for advice on love and religion, but you know me, stubborn and conservative; I think I'll just keep asking my friends on matters of the heart. If I need help writing a report on Mao's Cultural Revolution, then I'll ask an MI.

    Thanks for responding to my thread.

    "Don't be bullied into silence. Don't take anyone else's definition of your life, but your own." --Harvey Fierstein.
  17. tony1 Jesus is Lord Registered Senior Member

    The following might be a joke:

    Why do that?

    Why not just have them running around wild and free?

    Don't wish to offend any irreligious people ... Couldn't help myself.
  18. rde Eukaryotic specimen Registered Senior Member

    Re: The following might be a joke:

    It's silicon heaven. It's where all the calculators go.
    How does that offend irreligious people?

    Besides: there's more to sentience than intelligence. Unless they're self-aware, they've probably no concept of 'freedom'. So silicon heaven becomes as unnecessary as a human heaven. And if they are self-aware, then they should enjoy the same rights and duties as we humes.
  19. tony1 Jesus is Lord Registered Senior Member

    Re: About AI revolt...

    Fairly stupid?
    They're as stupid as a sack of nuts and bolts.
    No, wait, they ARE a sack of nuts and bolts.

    We can relax on that account.
    Nuts and bolts rarely feel the urge to become emancipated.

    Yes, nuts and bolts have great amounts of discretion.

    Your equal, maybe.
    You do realize that you are talking about a pile of nuts and bolts?

    Let me guess, you wish to be treated as intelligent.
    Plus, where should I mail the checks?

    Sorry, Cris.
    You make it too easy.
    You seem like an intelligent person, but when it comes to robot intelligence, you appear to be overlooking the odd thing or two.

    Or, like experienced users doing things the programmer did consider.
    Take Windows, for example.

    It could be you're doing the same.

    I wondered where that old calculator went. I thought I lost it.

    Beats me.
    They seem to get offended easily.

    So, what you are saying appears to be that heaven is unnecessary unless you are self-aware.

    I'd be interested in hearing what Cris would say to that.
    I'm pretty sure he thinks of himself as "self-aware."

    Of course, enforcing that "duties" thing on machines that are stronger and more intelligent than humans might turn into a bit of a chore.
  20. rde Eukaryotic specimen Registered Senior Member

    Re: Re: About AI revolt...

    I was saying that heaven is unnecessary. Nothing to do with self-awareness.

    As for enforcement of duties: maybe it'll be difficult, maybe it won't. If they haven't got an independent power source, we've got 'em (for a while, anyway) by the nuts. So to speak. There's also the fact that the brain is an imperfect organ; there isn't a human alive who isn't a neurotic mess. Will artificial intelligence be the same? Dunno. But I reckon a lot of how we interact with our metal chums depends on the answer.
  21. tony1 Jesus is Lord Registered Senior Member

    Re: Re: Re: About AI revolt...

    Speaking as a robot?

    I hope you realize the ramifications of that.

    Who in their right mind would spend millions to build an artificial psychiatric patient?
  22. Cris In search of Immortality Valued Senior Member


    Since the human brain evolved by a random process it isn’t surprising that it has many defects. By creating a new design we should be able to avoid all the paranoia integral to limited biological brains. And in that sense such intelligent machines will be superior to humans.

    But if we create self-aware machines then I have real trouble coming to terms with humans controlling or enslaving such entities. Or looking at it another way, humans are simply self-aware biological machines and we know how much we resent slavery.

    We should perhaps stop using the word ‘machine’ since it establishes perceptions of mindless automatons and that is not what we will have created.

  23. kmguru Staff Member


    Instead of Machine Intelligence, how about Silicon Intelligence (SI)? Someday, silicon will be replaced by optical or other materials, but silicon is good for the next 30 years.