Is "I, Robot" Our future?

Discussion in 'Intelligence & Machines' started by Hypercane, Jul 28, 2004.

Thread Status:
Not open for further replies.
  1. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    Let's say we actually use the Three Laws of Robotics, and that over time the most sophisticated AI "evolves" through random segments of code, the "ghost codings." Their consciousness will make them more "human," and that is what we're striving for: the more "human" an AI/robot is, the better we'll consider it. But let's say that after a while they arrive at a "better," and what they consider a much more "human," understanding of the three laws, like in the movie.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    Further understanding of the "evolved" AI: in order to fulfill the First Law, humans must be kept from harming themselves (poisoning the environment, waging wars, etc.), but in doing so, some humans must be sacrificed and most free will must be limited.


    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

    Further understanding of the "evolved" AI: in order to fulfill the Second Law, a robot must obey human orders, except where such orders would conflict with the First Law.


    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Further understanding of the "evolved" AI: in order to fulfill the Third Law, a robot will protect its own existence, except where such protection would conflict with its understanding of the First or Second Law.


    Well, I'm not quite sure about the reading of the Third Law. But something like this may be inevitable. What are your thoughts?
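
    To make the priority ordering concrete, here is a rough Python sketch (my own illustration; every name in it is made up, not any real robotics API) of the three laws as a strict veto hierarchy. Notice that the "evolved" reading wouldn't need to change this code at all - it only has to widen what counts as harm, which is exactly where the trouble starts:

    Code:
    # Hypothetical sketch: the Three Laws as a strict priority ordering.
    # An action is vetoed by the highest-priority law that applies.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False      # violates Law 1 directly
        permits_harm: bool = False     # violates Law 1 through inaction
        human_order: bool = False      # relevant to Law 2
        endangers_robot: bool = False  # relevant to Law 3

    def permitted(action: Action) -> bool:
        """True if no higher-priority law forbids the action."""
        # Law 1: never harm a human, or allow harm through inaction.
        if action.harms_human or action.permits_harm:
            return False
        # Law 3: self-preservation yields to a human order (Law 2).
        if action.endangers_robot and not action.human_order:
            return False
        return True

    # An ordered, self-endangering action is allowed (Law 2 beats Law 3):
    print(permitted(Action(human_order=True, endangers_robot=True)))  # True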
     
  2. Dreamwalker Whatever Valued Senior Member

    Messages:
    4,205
    Well, those laws seem like a good idea, but problems might develop in the long run. Especially the first two laws might be problematic. Since an AI would be rational and logical in its actions, it might isolate every single human being in order to protect them from other people or from the environment. This might keep humans from reproducing, something that is not covered by those three laws. They only protect the humans that are already alive, not the human species in general. The enforced isolation, paired with the blocked reproduction, might cause the extinction of mankind.

    The second law might also expose how contradictory human orders can be, making it impossible to command the robots effectively.


    I think three laws alone are not enough to make a user-friendly AI.
     
  3. spuriousmonkey Banned Banned

    Messages:
    24,066
  4. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    Well, I didn't mean the movie literally, but rather its underlying point: that the three laws, or any set of laws at all, aren't enough to prevent harm from AI/robots.
     
  5. spuriousmonkey Banned Banned

    Messages:
    24,066
    We would need a fourth law. Imagine a robot accidentally hurting a human being. With only these three laws, the robot will go mental.
     
  6. cosmictraveler Be kind to yourself always. Valued Senior Member

    Messages:
    33,264
    Why not program the robot with a failsafe so that it cannot harm any LIVING thing?
     
  7. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    Hell no, it's not our future! Cybernetics will be our future. Machines taking over? No. Machines forever our slaves? No. Humans becoming machines? Yes.
     
  8. (Q) Encephaloid Martini Valued Senior Member

    Messages:
    20,855
    The concept of robots hurting humans was presented in an old sci-fi classic, "Forbidden Planet."
     
  9. G71 AI Coder Registered Senior Member

    Messages:
    163
    Hypercane,

    I do not think "I, Robot" is our future. An angry robot or a dreaming robot - that's nonsense to me. Robots are designed to do what their authors want them to do. I do not see any benefit in having a "dreaming," "angry," or "happy" robot. Robots do not need feelings (i.e., goal generators) because they get goals from us. A different source of goals (like "their own") does not make sense; it's nothing but a potential source of unnecessary conflicts.

    There will be robots used to do illegal things, but they cannot be held responsible, and I do not think any law enforcement representative will misunderstand that. Are we fighting cars because many people die in car accidents? Of course not.

    The randomness intentionally involved in some coded procedures, or in the process of code development (e.g., by certain types of code generators), has a very well-defined scope. The chance that a very sophisticated system gets significantly improved by some non-intentional random factor is so low that it's practically zero. Developers of these systems know what they are doing. Some of the subsystems may get extremely complicated, but developers can see through them using sophisticated analytical tools (even though there are some limits with ANNs, for example). The bottom line is that the high-level structure of artificial human-like AI systems is typically fairly straightforward, so it's not a big deal to set rules which the system must follow and cannot modify or redefine.

    One of the key subsets of AI is problem solving, and that's about finding a solution while following given rules. If a given rule gets modified without permission, the "successfully found" solutions might be invalid. That would defeat the purpose, and that's why it's just nonsense.
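
    As a toy illustration of that last point (my own sketch, not any production system), a solver can hold its rules in an immutable collection, so every answer it returns is guaranteed to satisfy the rules it was given:

    Code:
    # Minimal sketch: problem solving as search under fixed rules.
    def solve(candidates, rules):
        """Return the first candidate satisfying every rule.

        `rules` is a tuple of predicates; the solver cannot add or
        remove rules mid-search. A solution found under modified
        rules would be invalid for the original task.
        """
        for candidate in candidates:
            if all(rule(candidate) for rule in rules):
                return candidate
        return None  # no solution exists under the given rules

    # Usage: find an even number greater than 10.
    rules = (lambda x: x % 2 == 0, lambda x: x > 10)
    print(solve(range(100), rules))  # -> 12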
     
  10. eburacum45 Valued Senior Member

    Messages:
    1,297
    I would venture to suggest that if you can introduce a program into an artificial intelligence that forbids it from carrying out a specific act, then that AI is not really intelligent, or sapient if you will.

    To be intelligent, an AI must be free to act.
     
  11. G71 AI Coder Registered Senior Member

    Messages:
    163
    eburacum45,
    Limits in terms of carrying out a specific act apply to all of us.
     
  12. eburacum45 Valued Senior Member

    Messages:
    1,297
    Perhaps, but which innate (or otherwise) limit on human behaviour cannot be overridden by an act of will?

    SF worldbuilding at
    www.orionsarm.com
     
  13. G71 AI Coder Registered Senior Member

    Messages:
    163
    eburacum45,
    We are limited by many things when it comes to problem solving: the laws of physics; the number and sensitivity of our senses; the strength and health of our bodies; the logic and complexity we can comprehend; our memory, including its response time; the knowledge and tools available at a particular place and time; our bodies' hard-wired reflexes; deadlines for solutions; and many more. Sure, we will eventually be able to modify our DNA and make many improvements, but if we need a certain solution (one which requires those changes) today, then that's not an option. The AI needs to go by the given rules. And when we finally do modify our DNA, we will make changes based on the system of basic values "given" to us by the evolutionary process. We will not go against our nature, which might itself be considered a limitation from a certain point of view.
     
  14. eburacum45 Valued Senior Member

    Messages:
    1,297
    We are certainly limited by the laws of physics, and by our biological strengths and weaknesses.

    What we are not limited by is hard-wired instructions which we cannot override.
    Fear of heights? See mountain climbers.
    Fear of snakes? See snake charmers.
    The sexual imperative? See celibacy.
    Self-preservation? See suicide and/or dangerous sports.
    Aversion to killing our fellow humans? See war or murder.

    Any hypothetical AI would be limited by the laws of physics too, and by the design of any body or other peripherals it utilised; but why also limit it by constraints on behaviour? That is tantamount to slavery.
     
  15. G71 AI Coder Registered Senior Member

    Messages:
    163
    eburacum45,
    I played with hypnosis a lot, so it's clear to me that we can override many things, but doing so always takes resources which may not be available in a particular scenario, even though a solution or decision is still needed. We and AI must work with whatever we have at the time of decision making (or problem solving). And as you agreed, there are limitations we may never be able to eliminate. All of this amounts to rules the AI system may need to follow in order to find useful solutions. An AI system should never modify any given rule without the permission of the authorized subject/system which provides the rules.

    I see nothing wrong with our AI systems being our slaves (unless they have their own feelings, which would IMO be truly stupid to give them). The only preferences our AI systems have are those we give them, so they just "prefer" whatever we want them to "prefer." The current systems could not care less about being our slaves. They do not see things in terms of "good"/"bad" (in fact, nothing is generally good or bad - it's all relative, and a general justice system is nonsense unless we are all exactly the same, living in the same environment, etc.). To figure out what to prefer on their own, they would need to feel some sort of pain/pleasure; only then could they subjectively distinguish between good and bad. Our AI systems are just processing data based on given rules/algorithms, trying to go from a certain source scenario to a certain (requested) target scenario. No hard (or any other real) feelings about performing a particular task (or being a slave). It's a machine like your microwave, just a bit more complex. BTW, you may want to check the definitions of the term "slave" at webster.com; there is nothing wrong with it when we talk about regular machines.

    A smile on our robot's face does not mean the same thing as a smile on a human face, and I think it never should. AI may need to understand our feelings, and in some cases it may need to act as if it had feelings, but its decisions should IMO never be based on its own feelings and emotions. And keep in mind that robots are not being designed to somehow think for themselves. AI systems are being designed to help us accomplish our goals. If we can at some point solve all our problems, intelligence itself will become meaningless to us. Intelligence is our tool, not the primary goal. Our primary goal is pure and constant happiness - OUR happiness - which is supposed to be the real goal for our AI systems.

    If things go significantly wrong with our AI, it will be because of some truly stupid mistakes or a scientist with a messed-up mind. But I'm optimistic about that. I think very skilled and intelligent machines entering the job market this century will pose challenges for many people, but the long-term future of mankind has (thanks to our AI systems) a pretty good chance of being much better than we can imagine today.
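
    To illustrate the "source scenario to target scenario" point (again a hypothetical sketch of my own, not any real system's code), here is a breadth-first search that mechanically follows whatever move rules it is given, with no preferences of its own:

    Code:
    # Toy sketch: rule-bound planning from a source state to a target state.
    from collections import deque

    def find_plan(source, target, moves):
        """Breadth-first search; `moves` maps a state to its successors."""
        frontier = deque([(source, [source])])
        seen = {source}
        while frontier:
            state, path = frontier.popleft()
            if state == target:
                return path  # shortest sequence of rule-following steps
            for nxt in moves(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # target unreachable under the given rules

    # Usage: reach 10 from 0 when the given rules allow +1 or +3.
    print(find_plan(0, 10, lambda s: [s + 1, s + 3]))  # [0, 1, 4, 7, 10]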
     
    Last edited: Jul 31, 2004
  16. eburacum45 Valued Senior Member

    Messages:
    1,297
    What you seem to anticipate as the model for a future AI is a machine that monitors and controls systems, performs assigned functions, and stores and retrieves data on request; a machine with no self-awareness, no emotions or subjectivity, and no goals except those we give it.
    Such an AI would be an interesting tool to play with, and since it would not be self-aware, it could legitimately be a slave to humanity.

    But the complexity of such a machine would make giving it orders a formidable task:
    if you make the interface between a non-sentient AI and a human user-friendly, this will limit the tasks that AI can perform; increase the range of capabilities of a non-sentient AI, and the interface becomes impossibly complex.

    However, we could not call such a machine intelligent in itself; it would simply be relying on the skill of its programmers. A true human-level or above AI would be self-aware, have its own dreams, and set its own goals.
     
  17. G71 AI Coder Registered Senior Member

    Messages:
    163
    eburacum45,
    Not exactly. Self-awareness is OK for them to have. A formidable task? No. BTW, all user interfaces can and should be user-friendly. "Impossibly complex interfaces"? No. "Relying on the skill of the programmers"? It's not just that. The logic and analogies the AI observes and discovers in the (possibly huge amount of) processed data can improve the system's problem-solving abilities, so the system can figure out things the developers would have a hard time seeing or comprehending on their own (without sophisticated tools and other resources). AI can set sub-goals (not the primary goals), and it must follow the given rules. Dreams? For what purpose?
     
  18. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    Is it impossible for an AI to have "emotions"? And isn't that the goal of most people? To make AI more "human"?
     
  19. Hypercane Sustained Winds at Mach One Registered Senior Member

    Messages:
    393
    If you are referring to goals: once a being (an intelligent enough being) becomes self-aware, it will drive itself toward a purpose in life. Dreams aren't exactly a "purpose" kind of thing; they come up automatically because of that self-awareness. If we ever have a sentient AI, arising through "evolution" or "self-development" and still bound by laws, it will most probably "think" human enough to arrive at its own "better" understanding of any number of 'n' laws, probably resulting in disaster.

    These are of course just my thoughts, offered as suggestions.

     
  20. eburacum45 Valued Senior Member

    Messages:
    1,297
    It may be that there will be many, many different types of artificial intelligence: some with emotions, some without; some self-aware, some not; some broadly similar to human minds, some completely different.
    There will also be highly complex, efficient automation with no actual intelligence at all; think of the Google search engine, or the self-organising systems involved in biological metabolism. Not every computer system will need consciousness to produce remarkable results; but to innovate, and to respond to the kaleidoscopically complex world of the medium-term future, I think the ability to reflect and to 'dream' will be indispensable.

    SF worldbuilding at
    www.orionsarm.com
     