The autopilot may have a lot of complex information and be required to make a lot of very fast decisions. Its information will always be limited by security clearance, availability of data and the accuracy of its sensors. The one in Florida couldn't distinguish a hulking great truck from the sky, and the truck driver apparently didn't see a low, dark-coloured car (more like the one in the background of http://www.wcpo.com/news/national/tesla-driver-killed-in-crash-while-using-cars-autopilot) approaching at speed in the opposite lane. All three drivers were inattentive, but only two died.

Assuming that its programming is sophisticated enough to make the kind of informed choices mentioned in this thread, its decision-making still doesn't require an "ethical" component. It could simply assess relative quantities of damage. Maybe according to the odds of saving the lives of victims in the projected accident. Maybe in $ figures. Maybe in time and material to produce replacements for the personnel, machinery and road furnishings. Maybe in terms of damage to society or disruption to traffic. Maybe according to a table of human valuation by age, sex, occupation, police and health record. It's a machine: give it technical terms of reference, not sentimental ones.
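The "assess relative quantities of damage" idea could be sketched in a few lines: each candidate manoeuvre gets a single damage score (odds of a fatality, repair cost, traffic disruption, all reduced to one figure), and the machine simply picks the minimum. This is a hypothetical illustration only; the outcome categories, the weights, and the numbers are all invented for the example.

```python
# Hypothetical sketch: pick the manoeuvre with the lowest damage score.
# No "ethical" module -- only technical terms of reference.
# All categories, weights and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    p_fatality: float         # estimated odds of a death in this manoeuvre
    repair_cost: float        # damage to vehicles and road furnishings, in $
    traffic_delay_min: float  # projected disruption to traffic, in minutes

# Invented conversion weights: everything collapsed into $ equivalents.
FATALITY_WEIGHT = 1_000_000.0  # $ per expected fatality
DELAY_WEIGHT = 50.0            # $ per minute of traffic disruption

def damage_score(o: Outcome) -> float:
    """Reduce an outcome to one comparable number."""
    return (o.p_fatality * FATALITY_WEIGHT
            + o.repair_cost
            + o.traffic_delay_min * DELAY_WEIGHT)

def choose(options: dict) -> str:
    """Return the name of the manoeuvre with the smallest damage score."""
    return min(options, key=lambda name: damage_score(options[name]))

options = {
    "brake_straight": Outcome(p_fatality=0.10, repair_cost=20_000,
                              traffic_delay_min=30),
    "swerve_left":    Outcome(p_fatality=0.02, repair_cost=60_000,
                              traffic_delay_min=90),
}
print(choose(options))  # prints "swerve_left" (84,500 vs 121,500)
```

The point of the sketch is that nothing in it is sentimental: swap in a different valuation table (by $ figures, by replacement time, by societal disruption) and only the weights change, not the mechanism.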