When I watched "I, Robot" (which, by the way, was an excellent movie, just in case you want to know), it spawned a few ideas about the future relationship between robotic artificial intelligences and us human beings. In the movie (spoiler) the robots 'wreaked havoc' on the human race for a sort of "good" reason, namely our own protection. I came to the conclusion that if/when we create AIs, they will always have some kind of flaw or 'malfunction', and maybe even develop their own consciousness. There have always been flaws in our machines, whether simple or complex, so it seems inevitable to me that AIs will someday become conscious. And whether through good or bad intentions, they WILL malfunction and most likely do something aggressive towards humans. One example: they could make millions suffer by engineering a biological weapon after studying our simple protein-based bodies. What are your opinions on this?