Oh pishposh. Robots and computers do whatever they are programmed to do. Sure, they can choose to do things that have been programmed into them as good, probably mirroring their programmer. In and of themselves, however? No. This is not the world of Asimov; I, Robot is not real. Then again, I suppose it would help if I fully accepted the relevance of ethics in the first place, considering that every person has their own ethics, and no two sets are precisely the same. With that in mind, I suggest you define what an ethical decision is and exactly how you make one, then consider how to translate that into a computer or robot. Intelligence is an interesting concept: can it be had without consciousness? I wonder.