Discussion in 'Intelligence & Machines' started by sandy, Sep 21, 2009.
I am a fan of most Sci-Fi movies/programs - but not a super-fan of ST. My preference is Babylon5.
Hopefully then you will also delete posts 83, 84, 85, 94, and 101, which are no less off-topic.
Can a robot think on its own, and make decisions based on its own personal sentiment, or feeling? [Stupid glare]
Yes... just on a much lower level than what "we" experience... and as "robots" evolve, their abilities will leave biological "consciousness" in the dust!
Which model? 2009 or 2050?
Haha... wouldn't the model 2100 have a better chance of accomplishing the previously mentioned capabilities?
It would, except that the robot wars of 2090 would put a dent in that upgrade...
It depends on whose mores or what mores are involved.
Some humans can't make ethical decisions ...
Ethical decisions are purely subjective. My greatest problem with this is "the greater good". Many humans I know would, in practice, save a friend far sooner than hundreds of people they don't know, and call this ethics. On the other hand, if we programmed a robot to make decisions based on what's good for us instead of what we want, then a lot of people would say the decisions it made were not ethical. Robots make decisions based on parameters; in the case of AI it introduces its own parameters, while in a modern computer those parameters are set, at the most basic level, with a line of IF commands.
For a robot, ethics are superfluous, because its parameters will simply give a simulated better outcome.
E.g. tell it to cut climate change and a robot would say stop emissions. Tell a robot to stop climate change secondarily to preserving our present rate of economic growth and it will say keep emitting, for the good of the economy.
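The point above can be sketched in code. This is a minimal, hypothetical illustration (not anything from the thread): the robot's "decision" is just IF statements over an ordering of objectives supplied by the programmer, so reordering the parameters flips the outcome with no ethics involved.

```python
def decide(objectives):
    """Return an action based on the first objective that matches a rule.

    The rules are plain IF statements: the robot has no ethics,
    only an objective ordering handed to it by its programmer.
    """
    for goal in objectives:
        if goal == "stop climate change":
            return "halt emissions"
        if goal == "preserve economic growth":
            return "keep emitting"
    return "no action"

# Same rules, different parameter ordering, opposite "decision":
print(decide(["stop climate change", "preserve economic growth"]))  # halt emissions
print(decide(["preserve economic growth", "stop climate change"]))  # keep emitting
```

Whether either answer counts as "ethical" lives entirely in the ordering the programmer chose, not in the machine.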
The "ethical" decisions made by AIs (who equal human intelligence) will be indistinguishable from human ethical decisions!
A robot AI could make an ethical decision based on input from the programmer to begin with. But if that AI sees that humans were not ethical in all aspects, and decides to act on that, would that then be unethical?
Yes, that's the dream, but what actually does that right now? By the way, your link has expired.
Originally Posted by sandy
Robots and computers are often designed to act autonomously, that is, without human intervention.
I think calculators and humans are equally "autonomous"... humans are just more complex and generally perceived to be more autonomous... but in reality... neither can vary from their "program"!
Please explain how a human cannot vary from its program. If the program is defined as morals and a sense of right and wrong, humans vary from their "program" every day. A true self-learning AI is not the same as a merely autonomous machine; there is a big difference between them. I can create a robot that moves on its own but does not learn from the decisions it makes to move around. An AI, by definition, learns from its decisions and makes a judgment call based on past experiences as well as its version of morals and its sense of right and wrong. Throw in self-preservation and you have a thinking machine at its base. So in this context a robot or AI can make an ethical decision, but as I asked before: if it makes an ethical decision that is not in line with base ethics, is that decision unethical?
Except that "morals" isn't a "programme"; it's a catch-all term for something we think is inherent (at least partially).
We have no idea what the underlying structure (i.e. the ACTUAL "programme") is so whether we depart from it or not is unknowable.
Each thing a human does is an effect of a previous cause!
It depends on whose ox is getting gored!
Well, you could say that, but a program is, after all, a set of instructions that enables a system to navigate variables. It's far more complex than that, I know, but the basic system is the instructions. So in the human case, an argument could be made that morals and the sense of right and wrong are the instructions, along with other primal instructions such as fear. So you are right in saying that morals are not the program, but they are part of the program all humans are supposed to have, and humans stray from morals, and from other parts of the program, every day; to say they never do would be incorrect.
Yes it would, but if the ox getting gored was the original source of input, then the decision being made would have to be ethical.
So like I said... it depends on whose ox is getting gored!
No I'm saying that "morals" is the term we use for something not completely understood.
And since we don't know the entire "programme" then it's impossible to say that we can go against it because we don't know what the programme "allows".
Humans go against stated (visible) morality: there's no way at all you can claim that we go against our "inner programming" since you have no idea what it is.