Will A.I. construction truly take off under public scrutiny?

Discussion in 'Intelligence & Machines' started by sargentlard, Apr 1, 2003.

Thread Status:
Not open for further replies.
  1. sargentlard Save the whales motherfucker Valued Senior Member

    Will the development of A.I. be condemned just like human cloning and stem cell research? Suppose it becomes a reality at some point to replicate the functions of a human brain without flaws such as emotions, so that the A.I. is truly superior to human intelligence. Will the vast majority then turn against such an idea and reject it outright, seeing their humanity threatened by a higher form of being that comes from them? Applying that intelligence to practical everyday applications, improving efficiency without any external human input, could also scare some people. What do you think? Will we ban such an undertaking, or will it face major controversy and continue on its path?

  3. Persol I am the great and mighty Zo. Registered Senior Member

    I'm going to try and be optimistic and say that no public outcry will occur against AI development. Rather, there would be a backlash against giving sufficiently advanced AI equal rights. Some people would argue that it only emulates a soul/humanity, and doesn't actually possess such things.
  5. ChildOfTheMind So dark the con of man Registered Senior Member

    Books such as Michael Crichton's Prey, and other publicity factors, greatly contribute to the negative perception of A.I.

    So I guess, technically, we will never know, because it is impossible to predict the future of this, but I am leaning toward the view that people will believe it is unethical and will protest against it. However, scientists will continue their research in some isolated chamber...
  7. Blindman Valued Senior Member

    I think that people do not understand what we are trying to achieve with AI... It will not be human.

    A.I. does not need fear, love, et cetera.

    We are not out to create a human or any type of conscious being.

    Our goal is to produce a problem-solving machine.

    So please stop misusing the term A.I. We are after an artificial intelligence, not a conscious machine.
  8. kmguru Staff Member

    Of course it will not be human. When is a man a monkey? AI will be something else - but definitely not human, except when it behaves like a human. Like they say - don't monkey around... or don't clown around like a monkey... etc.

    It would be the next step in evolution....
  9. MFrobotH43D Registered Member

    The more human we can make it, the more useful it will be for situations where a human interface is needed.

    I disagree that consciousness is not the goal. If I were smart enough to make the thing, consciousness would be my ultimate goal.

    As a tool for understanding our own consciousness, it would open up the final frontier -- the mind.

    Anyway, the more human it is, the more creative and flexible it will be.

    Love and fear are evolutionary tools that have proven very effective. Never underestimate the engineering mastery of Nature. We would do best to emulate as much as possible from such a master.

    Now back to the topic,

    Yes, of course there will be resistance to this at first. As soon as it is real enough to warrant a panel of experts on the evening news, people will start to fear it. But then five or ten years will pass and everybody will forget what the big deal was -- by which time AI will be a part of everyone's life.

    The same thing happens with any "playing god" technology. Fear of powerful and mysterious forces will trigger panic in the moronic mass mind.

    Fortunately, people don't control technology, so the hysteria will be short-lived and futile.
    Last edited: Apr 2, 2003
  10. AntonK Technomage Registered Senior Member

    I don't think, though, that books such as these should stop being written. It is not the author's responsibility to bend to humanity's stupidity. I think if it makes a good book... write it. Simple as that. I love sci-fi, even stories that put science in a negative light. I think everything has its uses for good and evil, and if a novel explores the evil, then if you have a problem, simply write one exploring the good.

  11. BigWill Registered Senior Member


    I'm gonna have to go a little Trekkie on everyone, because that's how I figured out a little about this AI stuff. I think the one thing we can all agree on is that AI is wanted to troubleshoot our problems quicker and more efficiently. Where the controversy will arise is in the types of problems we have. Some AI won't need emotion to complete their jobs (factory worker, driver), whereas some will (human interface). Because of the diversity of the jobs, AI will have to be just as diverse.

    The factor that scares people is when AI can be better than a human. Computers already whoop our ass at math and other number-crunching tasks, but we control what they do, so they don't scare us. I believe that AI, even in a humanoid form, won't be threatening because it will only do what it's told. However, AI that can make decisions on its own can pose a threat and scare the masses.

    sargentlard said "the A.I is truly superior to human intelligence" if emotion is taken out of its programming. I agreed at one time, but Star Trek made a cool point about being human: without our flaws we wouldn't be capable of the things we are now. In one episode, the crew of TNG came across a colony of people who had taken the emotion out of their genetic makeup, eliminating hate and anger. They also genetically altered their children so there wouldn't be any handicapped citizens... but to what end? Geordi ended up saving the planet with technology designed for his visor to help him see; if he had never been blind, and the visor had never been made, they couldn't have saved the planet. So yeah, human flaw seems like something we want to do without, but it makes us who we are, and changing it would ruin that. Emotion is critical, and will always put us above machines... but as I mentioned before, once we're able to simulate that (and I'm not sure we ever can, at least in the way we possess it), we'll have problems...

    To sum that up, an emotionless machine will never top humans because our emotions aren't mathematical. If emotions could be "put" in machines, that's when we should all shit ourselves...
    I feel like this was all covered in the movie AI.
