> Isn't the idea of self-driving cars that neither scenario would actually happen?

Close to correct. Autonomous vehicles will greatly reduce the odds of both, such that there won't be any "who should I kill?" decisions to make.

Again, you could come up with some bizarre scenario where aircraft autopilots could make such moral decisions. Do they? No, because the call to make such decisions never happens.
> Man, that is a lot of decisions that this car cannot make.

The list of decisions that a car cannot make is almost infinite.

> To me this is analogous to autopilot on an aircraft. But it is not perfect. Autopilot has failed, through tech and through human error.

Absolutely. And AI drivers will fail as well. The goal is to get that failure rate to way below the human failure rate.
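To put "way below the human failure rate" in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The human baseline (about 1.1 fatalities per 100 million vehicle-miles) is roughly the recent US figure; the AI rate and annual mileage are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope comparison of human vs. hypothetical AI failure rates.
# Human baseline is roughly the recent US figure; the AI rate and the
# annual-mileage figure are illustrative assumptions.

HUMAN_FATALITY_RATE = 1.1e-8   # fatalities per vehicle-mile (~1.1 per 100M miles)
AI_FATALITY_RATE = 1.1e-9      # hypothetical: 10x better than human drivers
US_ANNUAL_MILES = 3.2e12       # ~3.2 trillion vehicle-miles per year (approx.)

human_deaths = HUMAN_FATALITY_RATE * US_ANNUAL_MILES
ai_deaths = AI_FATALITY_RATE * US_ANNUAL_MILES

print(f"Human drivers:   ~{human_deaths:,.0f} fatalities/year")
print(f"Hypothetical AI: ~{ai_deaths:,.0f} fatalities/year")
print(f"Lives saved:     ~{human_deaths - ai_deaths:,.0f}/year")
```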
> But where does this put HUMANITY in the ability to think and be aware of one's environment?

?? The same place they always were. Having good AI drivers does not make humans less able to think.
> Isn't the idea of self-driving cars that neither scenario would actually happen?

Nope. Those situations are beyond the control of the AI, and they will happen frequently for the entire foreseeable future in my geographical region.
> Autonomous vehicles will greatly reduce the odds of both, such that there won't be any "who should I kill?" decisions to make.

Again: the decision involves probabilities, not certainties. Phrase it as "which risks should I take?"
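A minimal sketch of what "which risks should I take?" looks like in code: the planner scores candidate maneuvers by expected harm rather than making a binary "who dies" choice. The maneuvers and all the probability/severity numbers here are invented for illustration; a real planner would estimate them from perception and vehicle dynamics.

```python
# Sketch: pick the maneuver with the lowest expected harm.
# Candidate maneuvers and their numbers are invented for illustration.

candidates = [
    # (name, probability of collision, estimated severity if it happens)
    ("brake hard, stay in lane", 0.30, 0.4),
    ("brake and swerve right",   0.10, 0.9),
    ("swerve left into traffic", 0.05, 3.0),
]

def expected_harm(p_collision: float, severity: float) -> float:
    """Expected harm = probability of a collision times its severity."""
    return p_collision * severity

best = min(candidates, key=lambda c: expected_harm(c[1], c[2]))
print(f"Chosen maneuver: {best[0]}")  # -> "brake and swerve right" with these numbers
```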
> Yes, there are. And again, the AI will do its best to avoid them. It will sometimes fail, just as human drivers do.

If I had looked up the guy's name on Wikipedia, I would not have spelled it incorrectly. Time to do deductive reasoning.
I have a feeling you are just looking up stuff in Wikipedia to have something to argue about.
First off, Pacejka's formulas apply to the design of tires, and are often used in driving simulators to model the effects of friction. They have zero to do with collision avoidance. Once you know the G-ratings of a car/tire combination, that gives you the basic information about cornering ability and max braking effort possible. From there you can get whether a car will experience oversteer or understeer and how it reacts on different surfaces. All of that is then made available to the AI. No need for "Pacejka's formula" - for either drivers or AIs. (A short sketch of both the formula and the G-rating math follows this post.)
Second, "grinding a wall" is something that is independent of "deciding who to kill." They have nothing to do with each other.
Third, if you think that "weight shifting" is often more beneficial than trying to avoid the collision, I very much hope you don't drive.
Finally, if you are looking up stuff from Wikipedia to try to sound intelligent, it helps if you spell the guy's name correctly.
I am not presenting ideas for "guaranteed safety." That's impossible. You can't even keep up with the topics that you are discussing.
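For readers who want the reference: Pacejka's "Magic Formula" is an empirical curve fit relating slip to tire force, and a G-rating alone is enough for first-order stopping-distance math, as the post above says. A minimal sketch with illustrative coefficients (not from any real tire or car):

```python
import math

def pacejka_magic_formula(slip: float, B: float, C: float, D: float, E: float) -> float:
    """Pacejka's 'Magic Formula': empirical tire force as a function of slip.

    B = stiffness, C = shape, D = peak force, E = curvature factor.
    Used in tire design and driving simulators, not in collision-avoidance logic.
    """
    return D * math.sin(C * math.atan(B * slip - E * (B * slip - math.atan(B * slip))))

def stopping_distance(speed_ms: float, g_rating: float) -> float:
    """First-order stopping distance from a car/tire combination's G-rating.

    d = v^2 / (2 * a), with a = g_rating * 9.81 m/s^2. Ignores reaction time,
    load transfer, and surface changes -- the basics an AI would refine with
    real vehicle data.
    """
    return speed_ms ** 2 / (2 * g_rating * 9.81)

# Illustrative numbers only (not a real tire or car):
lateral_force = pacejka_magic_formula(slip=0.08, B=10.0, C=1.9, D=1.0, E=0.97)
print(f"Normalized lateral force at 8% slip: {lateral_force:.2f}")
print(f"Stopping from 30 m/s at 0.9 g: {stopping_distance(30.0, 0.9):.1f} m")
```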
> ?? The same place they always were. Having good AI drivers does not make humans less able to think.

Depends. Are we talking about autopilot cars, which allow you to steer in an emergency, or full-out, full-crazy, steering-wheel-less, brake-pedal-less padded rooms?
> Shortly we will be without keys to our houses. I doubt it will be fingerprints: they could be lifted from a glass. Perhaps retinal scanning, or D.N.A.?

I have no keys to my house. Haven't for about 10 years. Since I got fed up with the kids losing their keys.
> ... talking about adapting the highways, vehicles, rules of the road, and human behavior, to the needs of the AI.

Well, let's be clear: adapting them for the safety of humans.
> ?? The same place they always were. Having good AI drivers does not make humans less able to think.

It almost certainly would make them less able to drive well.
> A change that makes it easier for AI but does not, ultimately, save property and lives, is not useful.

The logic will be: AI saves property and lives. But AI does not work well in the current circumstances. So we change them - get the AI - and enjoy the benefits of the saved property and lives.

> It almost certainly would make them less able to drive well.

Which is a good thing. Right?

> Which is a good thing. Right?

The logic? As far as it goes.
> Yes, let's all put our faith in the hands of AI, and all the good people of Google and Facebook who know what's best for us!

You already have. You take all the convenience of technology for granted, don't even think about it, just buy every new effort-saving thing that comes along, let computers regulate everything from hydro to traffic lights, your work life and social life, and then get all het up over your loss of autonomy three weeks after the last row-boat cast off. You wouldn't have enjoyed rowing - ask any galley-slave.
In a split-second, the car has to make a choice with moral—and mortal—consequences. Three pedestrians have just blindly stumbled into an oncoming crosswalk. With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
A team of three psychologists and computer scientists at the University of Toulouse Capitole in France just completed an extensive study on this ethical quandary. They ran half a dozen online surveys posing various forms of this question to U.S. residents, and found an ever-present dilemma in people's responses.
Surprisingly or not, the results of the study show that most people want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs.
http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/
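The tension the study found is easy to state in code: a utilitarian policy and a self-protective policy disagree on exactly the scenario the article describes. The outcome numbers below are invented for illustration and are not taken from the study.

```python
# The dilemma from the article, as two policies scoring the same scenario.
# Outcome probabilities are invented for illustration, not from the study.

outcomes = {
    # option: (expected pedestrian deaths, expected passenger deaths)
    "stay course, hit pedestrians": (2.5, 0.0),
    "swerve off the road":          (0.0, 0.7),
}

def utilitarian(option: str) -> float:
    """Minimize total expected deaths."""
    peds, passengers = outcomes[option]
    return peds + passengers

def self_protective(option: str) -> float:
    """Minimize the passenger's own expected deaths."""
    return outcomes[option][1]

print("Utilitarian car:     ", min(outcomes, key=utilitarian))      # swerves
print("Self-protective car: ", min(outcomes, key=self_protective))  # stays course
```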