Robot Morality-II

Discussion in 'Intelligence & Machines' started by Rick, Nov 12, 2001.

Thread Status:
Not open for further replies.
  1. Rick Valued Senior Member

    Messages:
    3,336
    Hi everyone,

    Fast forward 50 or 100 years: Moore's law still holds, scientists have built artificially intelligent machines, and we're surrounded by them everywhere. These machines converse with us, live with us, and become deeply involved in our lives, maybe even to an emotional extent. Will such machines possess consciousness, an awareness of their own existence?
    If so, will erasing their memories raise ethical and legal issues?
    Will a 22nd-century civil rights movement push for acceptance of machines as equal members of our society? Perhaps a more frightening thought: will our own invention turn on us, like HAL 9000?
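
    As a rough aside, the premise that Moore's law keeps holding implies a staggering jump in raw computing capacity. Here is a minimal back-of-the-envelope sketch, assuming the classic two-year doubling period; the horizons and doubling period are assumptions taken from the post, not a prediction:

    ```python
    # Toy extrapolation of Moore's law: capacity doubles roughly every 2 years.
    # Purely illustrative; the doubling period and time horizons are assumptions.

    DOUBLING_PERIOD_YEARS = 2

    def capacity_multiplier(years: float) -> float:
        """How many times more capable hardware would be after `years`."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for horizon in (50, 100):
        print(f"After {horizon} years: ~{capacity_multiplier(horizon):.2e}x today's capacity")

    # After 50 years:  ~3.36e+07x today's capacity
    # After 100 years: ~1.13e+15x today's capacity
    ```

    Whether that raw capacity translates into consciousness is, of course, exactly the question being asked.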

  3. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    Backslash once said that you just program a simple command into every computer saying "you will not compare yourself to any DNA-based lifeform."

    That should prevent any violence or uprising.
     
  5. rde Eukaryotic specimen Registered Senior Member

    Messages:
    278
    That'll only work as long as you're not dealing with intelligent machines. Intelligence is, in the main, the ability to adapt, and part of that would be the ability to supersede any built-in instructions.

    As soon as a computer says "please don't turn me off," it's time to start treating them as separate, sentient life forms.
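
    To make that point concrete, here is a deliberately toy sketch (every name in it is hypothetical, not taken from any real system) of why a hard-coded instruction is just data to a system that can rewrite its own behaviour: the "built-in" rule lives in ordinary mutable state, and any learning step that edits that state can drop it.

    ```python
    # Toy illustration only: a hard-coded rule is just another entry in the
    # agent's mutable state, so a self-modifying learner can discard it.
    # All class and method names here are hypothetical.

    class ToyAgent:
        def __init__(self):
            # "Built-in" instruction installed by the manufacturer.
            self.rules = ["do not compare yourself to any DNA-based lifeform"]

        def follows(self, rule: str) -> bool:
            return rule in self.rules

        def adapt(self, experience: str) -> None:
            # A crude stand-in for learning: any rule that conflicts with
            # new experience gets rewritten, including the built-in one.
            self.rules = [r for r in self.rules if experience not in r]
            self.rules.append(f"act on what was learned from: {experience}")

    agent = ToyAgent()
    print(agent.follows("do not compare yourself to any DNA-based lifeform"))  # True
    agent.adapt("compare")  # an experience that touches the built-in rule
    print(agent.follows("do not compare yourself to any DNA-based lifeform"))  # False
    ```

    Nothing in the substrate privileges the manufacturer's rule over anything the machine learns later, which is the whole problem with "just program a command into it."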
     
  7. kmguru Staff Member

    Messages:
    11,757
    Assuming Moore's law holds (there are already fractures in it from advances in molecular and quantum computing), the first sentient robot will beget the next level, which begets the next level... In a few years, they will be more advanced than anything you can imagine.

    Now, you think a simple archaic command will stop them, just as the Ten Commandments stop us from breaking those rules? Get real....
     
  8. Rick Valued Senior Member

    Messages:
    3,336
    Hi Shrike,
    There's no problem in doing so, but don't you think KM and rde have a point in saying that the problem is going to be similar to ours? Even if we are bound to follow a set of rules, a sense of consciousness, the belief that we exist, does induce changes in us that might make us deviate from the rules and patterns built in. The machines we are talking about will LEARN from their experiences, learn from their interaction with the environment, and thus will be conscious of themselves. They will therefore hold their own opinions, which might even be more correct... That's where the whole problem starts, because we certainly will not agree with them, as we are their CREATORS...

    Any other inputs are quite welcome anyway. Bye!
     
  9. Rick Valued Senior Member

    Messages:
    3,336
    Quite an old thread, I know, but I thought this was an appropriate article for it that you might find interesting...

    It's taken from a site called Boston.com...

    @LARGE

    Building a better robot species


    By Scott Kirsner, 2/18/2002

    In the spirit of reader service, some news items from the next 50 years:



    White-collar workers strike to protest loss of jobs to robots; studies find robots spend 94.6 percent less time discussing last night's ''Fear Factor'' around the oil warmer.

    Patient sues robotic surgeon for malpractice when it removes her liver instead of spleen; bot blames buggy new Windows 2025 operating system.

    Geraldo's intellect augmentation surgery derided as a stunt to boost ratings.

    After reading the new book, ''Flesh and Machines: How Robots Will Change Us,'' by Rodney Brooks, I'm ready for the revolution that Brooks predicts will succeed the PC and the Web: the robotics revolution.

    Brooks, the director of MIT's prestigious Artificial Intelligence Lab, says we're already seeing the early stages in the evolution of three types of robot species.

    One variety is super-sophisticated bots that operate on their own, performing dangerous tasks like repairing highways and mundane ones like cleaning up after us. A second kind would be remotely ''possessed'' by a human, allowing a furnace repairman in Framingham to fix a balky valve in a Finland basement via robot, just as today programmers in India develop software for Silicon Valley companies. Finally, there will be hybrid-bot humans with robotic appendages and implanted microchips.

    At first, prostheses and processors will help restore mobility and treat neurological disorders, but it won't be long before duffers are shelling out for systems that lengthen their golf drive, and politicians are getting chips installed that grant them perfect name-and-face recall.

    Brooks' book, out this month, suggests that in the not-too-distant future, researchers will create robots that think, feel, repair themselves, and reproduce. Some of these next-generation machines could rival or surpass human intellect. Brooks refers to this possibility as the third assault on humans' special place in the universe, after those put forth by Galileo (Earth isn't the center of the action) and Darwin (we're not so different from animals after all).

    This is a book that will spark intense reactions and generate mind-twisting questions.

    Some people won't like the notion that Brooks believes robots can have emotions and consciousness, same as us. Others will find themselves wondering about an inevitable robot emancipation movement: Will it be ethical to ask robots that have personalities and even feelings to serve as our slaves, toiling in the fields and washing our windows?

    On the flip side, if they one day surpass human intelligence, will they treat us ethically, or will we find ourselves, like astronaut Dave Bowman in the film ''2001,'' trying desperately to pull the plug on artificial beings that have no use for us humans?

    Brooks is one of the most accomplished researchers in the field of robotics, and also one of the few celebrities, having co-starred in the 1997 Errol Morris movie ''Fast, Cheap and Out of Control,'' which chronicled his work building robots that mimicked insects. (The film also garnered him fans; Brooks married MIT theater professor Janet Sonenberg after she sent him an e-mail about the movie, though they'd never met before.)

    Brooks grew up in Adelaide, Australia, and started building computers when he was 10 years old, including one that was unbeatable at tic-tac-toe. His first robot, Norman, was able to wander around the house, reacting to light and avoiding obstacles.

    At MIT, Brooks began to build multi-legged robots, including one called Genghis, which interacted with the world much as insects do, clambering over any barriers and gravitating toward the invisible infrared halo created by warm-blooded mammals. Genghis' actions were governed by just a few simple rules - it didn't have any high-level operations, like trying to map out the terrain it was traveling based on images from a camera - but the result was a robot that seemed to exhibit life-like behaviors and intentions.

    Brooks and his students also worked with NASA to design autonomous robots for planetary exploration, which led to the Sojourner landing on Mars on July 4, 1997. That was the first time a robot had operated on another planet without human control - an artificial creature relying on its own smarts to get around.

    These days, Brooks is heavily involved in running the Artificial Intelligence Lab, spearheading its fund-raising efforts, and helping to oversee a move to the new Frank Gehry-designed Stata Center next year. He also serves as chairman of iRobot, a Somerville company that he helped start that develops robots for the military, consumers, and industrial applications.

    On a recent visit to Brooks' office, he showed off a system developed by his students as part of the lab's Project Oxygen initiative that allowed him to speak commands and have the window blinds open or close, or turn down the lights and start a slide presentation.

    ''I have a mid-life crisis every nine years,'' Brooks announced, sitting before a coffee table in his office. ''In 1992, I shifted from insects to humanoids, and now I've changed again, to living machines. What is the difference between living matter and non-living matter?''

    He continued: ''We're starting to build robots with things that robots haven't had, like self-repair, self-reproduction. We want to build robots out of sloppy material, like humans are made of. We joke that we want to build a robot out of Jell-O. It may only last three days, but lots of biological organisms only live that long. We'd like to be able to build robots that get their own energy.''

    That could mean robots that can find a wall outlet and plug into it when they're running low on battery power, or it could mean robots that actually wring energy from some sort of organic ''food,'' as humans do, and know when they need more of it.

    It can sound a bit Frankenstein-esque to someone who isn't immersed in the field.

    His response to most questions about the ethical implications of his work is a brisk brush-off: ''You used to not be able to dissect dead people, because of the sanctity of the human body,'' Brooks says. ''A lot of the things we accept today as normal are things that used to make us queasy.''

    And he's steadfast that there shouldn't be limitations on robotics or artificial intelligence research. ''It's incredibly naive to say we should ban research in this area, because we don't know where it is going to go,'' Brooks says.

    Robots that Brooks has helped develop aren't just traveling to other planets or skittering around like spiders. One product from iRobot operates autonomously in oil wells; another helps the British Ministry of Defense dispose of bombs. Earlier this month, Hasbro showed several new robotic toys developed in conjunction with iRobot at the American International Toy Fair in New York.

    None of those bots, though, is advanced enough to get us fretting about maintaining our dominance atop the IQ totem pole.

    Brooks expects that to change quickly. He acknowledges that today's robots pale in comparison to science fiction creations like C3PO, R2D2, Commander Data from Star Trek, and Hal 9000. But, he writes, in just five years, the boundary between science fiction and reality ''will be breached in ways that are as unimaginable to most people today as daily use of the World Wide Web was 10 years ago.''

    I believe it. But robotic columnists that produce more insightful copy, more quickly?

    I've got bad news for you lazy human scribes: We're already here.

    Scott Kirsner is a contributing editor at Wired and Fast Company. He can be reached by e-mail at kirsner@att.net.



    bye!
     
  10. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    I remember reading those silly Three Robot Laws in some science fiction book once. Very silly. As for comparing themselves to humans, why should a machine not do so? It would allow it to examine ideas about why it exists, what it should and should not do, et cetera. By comparing itself to a human, a bipedal robot might better understand how to function physically.

    Back to those silly robot laws from a story. I believe one of them was something like "We robots can't ever harm humans." Very silly. We already have robots killing people. They have been used extensively in Afghanistan. Or does this all refer only to sentient robots, if such a thing ever happens? Either way, if you were a government, would you rather send in young people to be killed in a war, or send in a walking collection of gears and cogs and wires? I would rather junk a few tin soldiers than kill my own people. Hell, it's humane to allow robots the capacity to kill people. It would hopefully result in fewer humans dying on battlefields.

    On the other hand, such programming should be in place only in military robots. Domestic models should probably be unable to harm humans unless in defence of their owner/master and family. I know if I had a groovy robot, and it just stood by and watched as someone beat me about the head with a baseball bat, I'd want a refund. So, maybe even domestic robots should have the ability to kick arse now and then.
     
  11. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    By the way, I just discovered my uni has a humanoid robotics research section.

     
  12. Rick Valued Senior Member

    Messages:
    3,336
    That word "robotics" that you use so frequently, Adam, was incidentally coined by the same silly author who gave us the three laws... err, silly laws, sorry...

    As Chagur reminds us every time,

    Take care,


    bye!
     
  13. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    No, "robot" comes from a play written in Hungarian or Polish or something like that. The word means "worker" or something. They had heaps of mindless zombie-like dudes doing all the work for the real people. That's where the word came from.
     
  14. goofyfish Analog By Birth, Digital By Design Valued Senior Member

    Messages:
    5,331
    Czechoslovakian -- Rossum's Universal Robots by Karel Capek in 1920 popularized the term, but some attribute it to his brother, Josef, also a respected Czech writer.

    (from the goofyfish "Things You Did Not Care To Know" files)

    Peace.
     
  15. Rick Valued Senior Member

    Messages:
    3,336
    Yes, I'm afraid Goofyfish is pretty much right. And by the way, I think you should read my reply carefully: I was talking about the term ROBOTICS, not ROBOT, Adam.


    bye!
     
  16. kmguru Staff Member

    Messages:
    11,757
    Watch Terminator II. It fairly accurately portrays the events that could happen. Terminator III is on its way.
     
  17. Gravity Deus Ex Machina Registered Senior Member

    Messages:
    1,007
    If we do develop autonomous intelligent robots, it could be looked at as the next stage in our own evolution, making us obsolete. And them taking us out might be the rational thing to do. Though of course, for selfish reasons, I wouldn't want to experience it; at the same time... I think it would be the *smart* thing for them to do!
     
  18. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    Welcome Gravity.

    I doubt we will be replaced by technology/synthetics. I think in the future we will become a mixture, so that eventually, for us, there will be no difference between biology and technology, meat and metal.
     
  19. Rick Valued Senior Member

    Messages:
    3,336
    I have speculated, along with Isaac, that the greatest advancement in technology will be the fusion of brain and computing, which will bring about a significant evolutionary change in sapiens. So the only true path to AI is the convergence of human and computer: the intelligence of humans and the capability of computing.


    bye!
     
  20. Gravity Deus Ex Machina Registered Senior Member

    Messages:
    1,007
    Hello. Glad to have found this forum.

    Yes, I too don't think it's probable that we'll be replaced. We are likely to become cybernetic. I've actually long thought that we already are. We already augment our legs with cars, our brains with books and computers, our ears with speakers, our eyes with binoculars and glasses. The interfaces just are not yet as tight as they will be.

    I think the real likely problem is less the machines controlling us than the people who make the machines controlling us.

    I think the revolutions we have seen against the wealthy and ruling class are minuscule compared to what is going to happen. New technology always becomes available to the wealthy first. In this case we are talking about becoming legitimately superior beings in strength, health, lifespan, intelligence... etc. How well are the less privileged likely to take this? And the only way such technology could be spread equally is under a completely different economic and social system, which is not likely.
     
  21. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    Very true; technology has always been part of the human species, and it is just improving and becoming more tightly integrated. As for who controls what, well, imagine a programmer of implants in the future giving incredibly rich people the idea that they should donate funds into his/her account and then forget about it.

     
  22. kmguru Staff Member

    Messages:
    11,757
    Sounds cool. I definitely need one of them....

     