Don't Asimov's Laws of Robotics cover this? "1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." If we programmed these laws into a robot, wouldn't it then have the ability to make ethical decisions? I believe it would, since it would have to weigh the long-term effects of any action it took with respect to its effect on any possible human interactions. Of course, that kind of reasoning would fry the processors of most current thinking machines. Computers will have to become much more advanced, especially in processing speed, before they can begin to make ethical decisions.
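Just as a toy illustration of what "programming the laws in" might look like, here's a minimal sketch in Python. Everything in it (the Action fields, the harm scores, the example options) is hypothetical and made up for this post; the laws are reduced to a strict priority ordering over candidate actions, which is a huge simplification of Asimov's wording.

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering over actions.
# All names and numbers here are hypothetical; the genuinely hard part in
# reality is predicting harm at all, not comparing the predictions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_to_humans: float   # predicted harm to humans (0.0 = none) -> First Law
    disobeys_order: bool    # does this action violate a human order? -> Second Law
    harm_to_self: float     # predicted damage to the robot (0.0 = none) -> Third Law

def choose_action(candidates: list[Action]) -> Action:
    """Pick the candidate that best satisfies the laws in priority order:
    minimize harm to humans first, then obey orders, then preserve self."""
    return min(
        candidates,
        key=lambda a: (a.harm_to_humans, a.disobeys_order, a.harm_to_self),
    )

# Example: the robot is ordered to fetch something, but the fastest route is risky.
options = [
    Action("take risky shortcut", harm_to_humans=0.3, disobeys_order=False, harm_to_self=0.3),
    Action("take long safe route", harm_to_humans=0.0, disobeys_order=False, harm_to_self=0.0),
    Action("refuse the order", harm_to_humans=0.0, disobeys_order=True, harm_to_self=0.0),
]
print(choose_action(options).name)  # -> "take long safe route"
```

The comparison itself is trivial; what would "fry" today's machines is filling in those harm numbers, i.e. predicting the long-term consequences of every candidate action on the humans involved.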