# Can Robots Make Ethical Decisions?

Discussion in 'Intelligence & Machines' started by sandy, Sep 21, 2009.

Not open for further replies.
1. ### sandy (Banned)

Is that like an artificial brain?

3. ### Michael (歌舞伎, Valued Senior Member)

It must be physically possible to create an artificial brain.

5. ### spidergoat (Valued Senior Member)

Are you blond? Of course.

7. ### sandy (Banned)

Yes, blonde.

But at least I didn't ask if neural net was like Aqua Net.

8. ### flameofanor5 (Not a cosmic killjoy, Registered Senior Member)

I have yet to see anything go in that direction.

9. ### spidergoat (Valued Senior Member)

Scientists have successfully simulated a portion of a mouse brain in a computer. I predict computers will eventually be much smarter and faster than us; our thinking speed is many times slower. By the time that happens, the difference between "them" and "us" will become blurred.

10. ### sudevkk (Registered Senior Member)

Robots being alive is fiction? Okay, then what is the proof that you are alive? It's just an awareness, isn't it? Your brain tells you that you are Mr. X, and that you are alive.

Yes, very correct. But a human does the same thing, right? Every human action is also as if he is being instructed. It may feel like there is a decision involved, and unpredictability, but in fact there is not.

11. ### one_raven (God is a Chinese Whisper, Valued Senior Member)

I don't think speed is the factor - that's where they are wrong.

And therein lies one of the problems.
To make something that truly has the ability to reason that thing must be capable of being wrong - it must be flawed.
Without the level of accuracy we have come to expect from computers, what's the point? What real purpose is there?

12. ### draqon (Banned)

Robots with free will can make ethical decisions; the rest can be programmed to make ethical (socially correct) choices in each scenario.

13. ### one_raven (God is a Chinese Whisper, Valued Senior Member)

Do not exist and I doubt ever will.

14. ### ScaryMonster (I’m the whispered word., Valued Senior Member)

I for one welcome our new rodent brain laminated computer overlords!

15. ### sandy (Banned)

Fascinating discussion. I still don't think anyone could ever program a computer to think like a human. We're too complex, no?

16. ### spidergoat (Valued Senior Member)

Who said anything about programming? They would be taught general principles, if anything. Even real humans cannot be moral simply by following a flow chart.

17. ### one_raven (God is a Chinese Whisper, Valued Senior Member)

So what's the point?
If it is at all possible, let's say we succeed in making an artificial intelligence that is free to come to its own conclusions, make its own decisions, and possibly learn from its mistakes...
Aside from masturbation, why would we?

18. ### spidergoat (Valued Senior Member)

Why wouldn't we create our own life forms? It would be a second genesis. If this technology is developed, it could be implanted into our own brains. How would you like telepathy, or to think 10 times faster, or to be able to do advanced math in your head, or call up any information you wish?

19. ### one_raven (God is a Chinese Whisper, Valued Senior Member)

No interest at all.

20. ### Stryder (Keeper of "good" ideas., Valued Senior Member)

Hurm.... I can just imagine a Drone asking "Red or Blue"?

21. ### sandy (Banned)

Doesn't teaching mean programming?

22. ### CutsieMarie89 (Zen, Registered Senior Member)

I think it's possible. Computers are already capable of seeing patterns, so if an AI could manage to learn, just as living creatures do, then it could draw its own conclusions and make decisions based on them, be they correct or incorrect. Which I believe is ethics.

23. ### domesticated om (Interplanetary homesteader, Valued Senior Member)

LOL

I think I have the answer to the question for this whole thread. All you'd have to do is programmatically define good and evil, then insert the following code into the routine representing its consciousness:

Code:
if (what_you_are_thinking_of_doing != good)
{
    return;  /* refuse to act */
}