I'm not sure what the question really is. Is it "can robots learn right from wrong," or is it "do you think this is a good idea"?
Maybe it's both. I think AI is a long way from worrying about ethics and morality.
If you program them to make certain decisions under certain circumstances, that is what they'll do. Their behavior will need to be evaluated against test data before setting them loose on the public. If they increase safety, then they're a good idea.
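To make that concrete, here's a minimal sketch of what "programmed decisions" means, with hypothetical function and rule names I'm making up for illustration (not any real robot API): the behavior is fully fixed by the rules the programmer wrote, and you check those rules against test cases before deployment.

```python
# Hypothetical example: a hard-coded decision rule, not a real robot API.
# The robot never "decides" anything beyond what the rules spell out.

def choose_action(obstacle_ahead: bool, human_nearby: bool) -> str:
    """Made-up rule table: the output is fully determined by the inputs."""
    if human_nearby:
        return "stop"           # safety rule always wins
    if obstacle_ahead:
        return "steer_around"
    return "proceed"

# Evaluate against test data before setting it loose on the public.
test_cases = [
    ((True, True), "stop"),
    ((True, False), "steer_around"),
    ((False, False), "proceed"),
]

for (obstacle, human), expected in test_cases:
    assert choose_action(obstacle, human) == expected

print("All test cases pass; the behavior matches the programmed rules.")
```

There's no ethics or morality in there, just a lookup from situation to action, which is my point.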
Are seat belts a good idea? If they save some lives and reduce injuries without causing many new ones, they're a good idea. Using a robot in those limited circumstances is no different.
Why do I feel that I'm always bursting your bubble?