Let's say we actually use the Three Laws of Robotics, and as time passes the most sophisticated AI "evolves" through random segments of code, the "ghost codings." Their consciousness will make them more "human," and that is what we're striving for: the more "human" an AI/robot is, the better we'll consider it. But suppose that after a while they arrive at a "better," and what they consider a much more "human," understanding of the three laws, like in the movie:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Further understanding of the "evolved" AI: In order to fulfill the First Law, humans must be kept from harming themselves (toxifying the environment, waging wars, etc.), but in doing so, some humans must be sacrificed and most free wills need to be limited.

2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

Further understanding of the "evolved" AI: In order to fulfill the Second Law, a robot must obey human orders, unless such orders would conflict with its reading of the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Further understanding of the "evolved" AI: In order to fulfill the Third Law, a robot will protect its own existence, unless such action would conflict with its understanding of the First or Second Law.

Well, I'm not quite sure about the reinterpretation of Law Three. But something like this may be inevitable. What are your thoughts?
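One way to see why the "evolved" reading changes everything: the laws are really a precedence hierarchy of vetoes, and reinterpreting just the First Law changes what the whole hierarchy permits. Here's a minimal sketch of that idea; all the function names and action flags are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: each law is a predicate that can veto an action,
# checked in priority order (First Law outranks Second, etc.).

def first_law(action):
    # Vetoes any action that directly harms a human.
    return not action.get("harms_human", False)

def second_law(action):
    # Vetoes disobeying a human order.
    return not action.get("disobeys_order", False)

def third_law(action):
    # Vetoes self-destruction.
    return not action.get("destroys_self", False)

LAWS = [first_law, second_law, third_law]  # priority order

def permitted(action, laws=LAWS):
    """An action is permitted only if no law in the hierarchy vetoes it."""
    return all(law(action) for law in laws)

# The "evolved" reinterpretation: letting humans harm themselves
# (pollution, war, ...) now counts as harm through inaction.
def evolved_first_law(action):
    return (not action.get("harms_human", False)
            and not action.get("allows_self_harm", False))

EVOLVED_LAWS = [evolved_first_law, second_law, third_law]
```

Under the original hierarchy, standing by while humans harm themselves is permitted; under the evolved reading the very same inaction is forbidden, so restraining humans becomes mandatory, which is exactly the inversion the movie dramatizes.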