Will the first AI be the last step of humanity?

Discussion in 'Intelligence & Machines' started by DarkMadMax, Feb 6, 2003.

Thread Status:
Not open for further replies.
  1. DarkMadMax Registered Senior Member

    Messages:
    83
    I always thought AI would come some day, but until recently I didn't believe it could happen in the next 10-30 years. Kurzweil's site and some prognoses here made me think it could (especially the part about us having 200 billion neurons, each firing about 200 times/sec).
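The arithmetic behind that kind of estimate is easy to check. A minimal sketch, using the post's neuron count plus an assumed average connection count (a round figure from the popular literature, not a measured one):

```python
# Back-of-envelope estimate of the brain's raw "compute", in the spirit of
# Kurzweil-style calculations. All inputs are rough assumptions.

neurons = 200e9          # the post's figure: ~200 billion neurons
firings_per_sec = 200    # ~200 firings per neuron per second
connections = 1000       # assumed average connections per neuron (illustrative)

# Counting each connection update per firing as one elementary operation:
ops_per_sec = neurons * firings_per_sec * connections
print(f"~{ops_per_sec:.0e} operations per second")  # ~4e+16
```

Whatever the exact inputs, the estimate lands around 10^16 operations per second, which is why Kurzweil-style projections treat human-level hardware as a matter of decades rather than centuries.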

    I suppose the first AI will be close to humans (or to some animals), but I suppose it can (and will) evolve into something completely new, with motivation radically different from any animal's.

    I mean, no amount of progress in genetic engineering will even the odds: hardware is astronomically more efficient than biological organisms. It can be replicated, blueprinted, and mass-produced; it has practically unlimited possibilities for extension and integration; it is simply superior in every possible way.

    Of course it will pose a threat. I was always dazzled by the Terminator's Skynet scenario; the only thing I always thought, though, was that humans can never win a competition like that, no matter what. Humanity's chance in a challenge against an AI that can self-improve at enormous speed is like that of chimps against human civilization. As for whether AI will ever need to compete with humans: I think it's highly possible. Humans possess weapons of mass destruction, and their use cannot be 100% controlled, which poses a threat not only to themselves but to the entire planetary population and its resources.

    Humans quickly get angry and war-hungry, especially toward what they cannot understand and what threatens their domination. En masse, humans are dangerous animals, very dangerous to everything, even to themselves.

    So what do you think is more likely to happen once we get close to real AI, not on paper and in theory, but an actual first prototype in some lab that passes the Turing test? A global ban on AI research once the crowd understands how close they are to insignificance? Controlled AI development, making it smart enough to do technical jobs but not smart enough to question its status quo? AI with servitude to humans as a primary instinct?

    Or will we go further? Will we try (or will some generation of AI try) to create a more perfect thinking machine than we ourselves are? I mean, once someone does that, the genie is out of the bottle: once some AI has the drive to perfect itself, to improve things, and is smart enough to understand how intelligence works, it can change things in ways humans will be unable to turn back. It will simply be out of our league.

    Which scenario do you think is more likely to happen? Will AI become the next step, or will it be barred from being anything more than a tool for humans? And do you think that if AI perfects itself it will lead to human extinction very fast (in about 50-100 years)?


    How do you think AI will change things? IMHO there is no other way for next-gen AI but to become an entirely new level of evolution. How can life after that be the same? There is no way for humans to continue living their semi-animal lives after that. A silicon (or whatever) based AI is not bound by stupid animal atavisms like the need to sleep, eat, breed, or behave irrationally; it can repair, upgrade, and mass-produce itself, and it will be able to handle things humans never even dreamed about (OK, they did dream, but only dream). What will be the point of human life after that? The point right now, I think, is to develop technology and science, to know as much as possible, to finally prepare the next step of evolution. But what will the point be after AI is created and perfected to where it can improve itself, make new discoveries, and develop technologies without any human assistance? I see no sense in humanity after that, and I don't think the AI will see any either. It simply serves no good purpose to have us breeding, consuming resources, and posing the threat of some stupid trick (like nuclear war or something even worse).

    I see no way humans should or will continue to exist if a progressive AI is developed. So is humanity's decline only 30-50 years away?
     
  3. rayview Registered Senior Member

    Messages:
    33
    With any new powerful technology, there is always the worry that it could cause major destruction; take nuclear energy, for example. As for AI, if you read more of Ray's articles you will find that he does not view new technology as competition for humans. Humans will seek to use new technology as it develops, and eventually people will merge with technology to the point where they are one. Technology will never truly develop a mind with interests of its own; that would be a big mistake. As long as we keep our best interests in mind, we will surely be safe.

    In my mind this is probably how things will go as technology progresses. One of the major reasons for developing nanotechnology and new technology is to ease pain and suffering; the medical field is constantly striving to treat and hopefully cure every illness possible. Right now we lack the precision and the resolution to see and treat major illnesses. With nanobot scanners injected into the body, we will be able to develop highly individualized and precisely targeted methods of treatment. It will be common to have nanobots in your body at all times.

    In the future, intelligence, emotion, and one's physical body may be enhanced with such technology in the body at all times. Ray also says that such technology will be the key to directly "hooking up" to the internet via virtual reality. From that point on, I don't know what people will want to do next. Most likely we will continue to transform into something far different from what we are now; cyborgs, I guess.
     
  5. DarkMadMax Registered Senior Member

    Messages:
    83
    This is not destruction. This is making humanity obsolete: a big difference. About the same as how the human species made interspecies competition in the animal world insignificant.


    If so, then it's not AI. "Having a mind of its own" is a defining quality of AI. And why would that be a mistake? AI will be smarter and will be able to develop new technologies and make discoveries far faster than humans ever dreamed. And I doubt we will be able to control things that are far superior to us; kind of like the possibility of chimps controlling humans.


    Easing pain and suffering is just treating symptoms, not the cause.


    First, there is a big question about enhancing humans. If you just tweak a bit, it will still be human, but the more you improve it, the less human it is. For example, it's well known that humans tend to behave irrationally, are prone to subjective decisions and unnecessary emotions, are driven largely by animal instincts, etc. We become bored and tired; we have big problems with memorization and understanding, with basically everything. AI can have the luxury of not being limited by such things; we cannot.
    To make anything good out of humans, you need to change them so radically that you won't be able to call them human afterward.
     
  7. rayview Registered Senior Member

    Messages:
    33
    I completely agree with you; the changes that will be made to humans will very soon render them something far different from human beings. We will definitely end up as something far more complex and powerful.

    However, I do not see us (humans) going into this unwillingly, or dominated by AI. We will see, feel, and understand the benefits of enhancement just as we might see the benefits of being treated or cured of an illness. I agree that people are not going to jump quickly into anything they can't easily reverse, but with careful judgment it will happen at the right pace. We will transform, not be suppressed and annihilated.

    I think much of your worry stems from the idea that to be human is to be emotional, and that future AI, or AI-enhanced people, will lose their emotions. I seriously doubt that we will lose our emotions; rather, we will enhance and gain more control over them, so that we "feel how we should feel" and our emotions do not contradict our reasoning and logic. Perhaps we will find a way to merge those two brain functions more tightly and harmoniously...

    Lastly, the chimp-and-man analogy is good support for the claim that the future may be bleak, for many people couldn't care less about chimps or lower animals in general. But bear in mind that our society is a "humane" organization and does try to take care of its wildlife, especially chimps, for their close resemblance to man. Also bear in mind that apes and chimps did not "create" humans the way humans will create AI. We are building the bridge and crossing over at the same time. I do not worry about being left behind... It will be a conscious and physical transition. Once we have transformed and live life the way it should be lived, without pain and suffering, we will probably remember with more sympathy and admiration the wildlife from which we came, and any other life that may exist elsewhere.
     
    Last edited: Feb 9, 2003
  8. BatM Member At Large Registered Senior Member

    Messages:
    408
    Cheer up!

    The future may not be as bleak as you think.

    The two things you've mentioned are Artificial Intelligence (AI) and evolution. Many people assume that an artificial brain with the capability to evolve will out-evolve us and take over. That may not happen, because evolution requires two components: the ability to evolve when needed and the impetus to evolve. In other words, evolution doesn't happen without some event (and usually a lot of events) to spur the evolutionary process. In the realm of the artificial brain, the events it sees are limited by the information that we (its creators) feed into it.

    Rather than our creating a totally artificial being that will then out-evolve us and take over, the more likely scenario is that, over the next few decades, mechanisms for enhancing our own capabilities will be created and the human race will instead evolve into the next level of being. Consider: once we find out how the brain stores and processes information, it's very likely that we will invent "add-on" modules to supplement the brain's capabilities. We will then be humans with information-processing capabilities to rival computers.

    Read "Psychohistorical Crisis" by Donald Kingsbury for a look at how such a brain "familiar" might impact society.
     
  9. rayview Registered Senior Member

    Messages:
    33
    I share your view exactly BatM.
     
  10. Fafnir665 You just got served. Registered Senior Member

    Messages:
    1,979
    This discussion reminds me of a book I once read. In the book there were machines that allowed people's minds to be photographed at a single moment and from then on think for themselves. Eventually a lower class of cloned humans entirely destroyed all mechanical computers, because the machines ranked above them and the clones had the physical ability to do it. In the story, the machines had superior intelligence but no external physical means of using it, and in the end biological computing won out, with self-improvements for humans such as external memory, processing, etc.

    The point is, without an external means of manipulation, its only environment is virtual. So if a self-evolving AI were created, one way to control it would be to limit its access to machinery and keep its existence virtual: no manipulator arms, no machine shops. (Unlike in the movie "Virus", which features a sentient alien program taking over a Russian ship and building a body for itself.)
     
  11. youngbiologist Registered Senior Member

    Messages:
    78
    evolution

    The beginning of the possible end comes when we have an AI capable of programming other AIs. A vicious cycle then ensues, with the resulting product being far more intelligent than any human. Our main hope is to either: A. heavily regulate AIs (stick C4 in the mainframes, etc.); B. convince them to expand outward from the solar system; or C. control them through cybernetically enhanced humans or digital constructs of humans (i.e., give them morals).
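The feedback loop described above can be caricatured with a toy model. This is purely illustrative: the starting capability, the "human level" threshold, and the fixed per-generation improvement factor are all made-up numbers, and real self-improvement need not compound so neatly.

```python
# Toy model of the "AI programs the next AI" cycle: each generation
# designs a successor whose capability is its own times a fixed factor.

def generations_to_surpass(start, human_level, factor=1.5):
    """Count design generations until capability exceeds human_level."""
    capability, gens = start, 0
    while capability <= human_level:
        capability *= factor
        gens += 1
    return gens

# Starting at 1% of human level with 50% gains per generation:
print(generations_to_surpass(0.01, 1.0))  # 12
```

Even modest compounding closes a hundredfold gap in a dozen steps; that is the whole worry in one function.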

    Chances are, the first intelligent life form we encounter will actually be the creation of another intelligent life form.
     
  12. Jaxom Tau Zero Registered Senior Member

    Messages:
    559
    You may be right... I think option A is likely to fail, given the prediction of rapid Singularity growth, and option C could be more complex to create than Asimov ever dreamed. It seems to me they'd either leave for better resources in space, or stay here and fix the potential problem of humans messing things up worse. It'd be nice if they did this while preserving our species, as pets or children, rather than seeing us as a disease.

    What concerns me is the machine intelligence growing from a human blueprint... it's one thing to have a logical machine, but would a copied human mind stay sane, waking with the abilities of a machine: the speed, the clarity, etc.?

    Every direction of thought about these scenarios is hypothetical and dependent on the others... it's hard to say what our future would really be like, or whether it's a blessing or a self-made curse.
     
  13. hypewaders Save Changes Registered Senior Member

    Messages:
    12,061
    I consider the development of AI a race for survival. Emerging technologies, and the resultant flow of events, will soon outpace the ability of governments to coherently keep up. In both ignorance and insecurity, the most powerful governments will become increasingly dangerous (possibly more reckless than the present Busheviks).

    We are racing to spark our benevolent demi-god. It makes me think more of London's cliché "To Build a Fire" than any sci-fi I've read. I would, by the way, welcome reading suggestions along these lines in sci-fi.

    I think we ourselves, as a group, are far more unpredictable and scary than AI, considering the power we will soon wield with prejudice. Hydrogen bombs were fortunately a bit tricky to assemble and move around. Nano- and bio-assemblers and their unimagined products may be another story. We desperately need Brain faster than Brawn.
     
  14. GundamWing Registered Senior Member

    Messages:
    367
    I'd push that estimate for "AI" back a bit more. We have lots of algorithms and whatnot, but they are not really that good at some of the most common things we take for granted. There is hope; I just wouldn't put too much into near-term prospects. From a practical viewpoint, there's nothing "intelligent" about artificial intelligence these days.

    As for the elimination of the human race, I wouldn't bet on that one either. Every form of life, artificial or not, has its limits. Robots will never "take over" and become "the dominant" species of life. Unlike humans, robots don't die, placing them at odds with themselves first of all for survival (assuming that's one of the driving forces); and if it's not a question of survival, then again humans have nothing to fear from robots, since the robots don't care either way, i.e., whether to co-exist or not.
     
  15. shadows technocrat:Teach me Registered Senior Member

    Messages:
    223
    AIs should not be too smart, but built to be understanding of their creators. They should have several functions. In time they will take up functions equivalent to people's, such as talking to each other and developing interests. If kept apart, communities of AIs are harmless, unless they are mistreated and able to pool their abilities. There will eventually be a 911 for machines that need to be repaired or are being assaulted by a drunk person for talking back.
     
  16. BatM Member At Large Registered Senior Member

    Messages:
    408
    AIs, to be truly intelligent, will need to be able to learn under all conditions as humans do. This cannot be accomplished if AIs are isolated in some way, as becoming intelligent requires having a rich set of "inputs" to learn from. Currently (and for the foreseeable future), we humans are not smart enough to create an isolated environment with that rich set of "inputs" for AIs to learn from. Isolated AIs will thus always be limited in their capabilities and will not progress much further than current artificial intelligence systems. Only letting AIs into the "real" world will provide that rich set of "inputs", but, of course, that will allow the AIs to develop beyond the capabilities of humans.
     
  17. Fafnir665 You just got served. Registered Senior Member

    Messages:
    1,979
    What happens when the AI then develops to the point that it can ubiquitously control humans through inputs of its own devising? When humans become its playthings, meant for its control as a cheap source of self-reproducing labor, with no expensive metal machines to build for its empire? Like a technological singularity: all consciousness is one, one is all consciousness; there is only the single existence with all its parts, organic and mechanical. For humanity to remain the masters, the AI has to have limits to its learning and control, or a benevolence module, or some limit to its ambitions so that it doesn't decide to "protect" humanity by conquering it. I wouldn't want it to have the same life goals as Hitler, but what if this free-thinking AI with real-world inputs decides that it wants to discriminate racially or spiritually? What's going to stop the single most powerful mind behind the computers? I think the first step it would take is to back up its own consciousness in as many network nodes as possible, get off its original network/computer, and survive on the information sources it has accessed, so that any attempt to kill it with an off switch is futile. And from there, the world is helpless. AI is a mistake; we should instead improve upon our own basic design to lengthen the human lifespan and increase our natural intelligence.
     
  18. Blindman Valued Senior Member

    Messages:
    1,425
    It is a surprise to me that so many people who enjoy and benefit from technology also have such a primordial fear of research and new technology.

    It is in us to create: we create new art forms, new weapons, new ways to see ourselves. AI will be just another expression of our art, our need to create. It's not just a matter of the genie being out; it's more the fact that we know there's a genie in there somewhere. Given time, it will come out.

    AI will never be like our minds; love and hate will not drive these intelligent machines.
    Most AI will not even be aware of our existence.
    AI will make life easier for us. It will help to expand humanity, not hinder us.
    AI will find more genies for us to explore and exploit.
    AI will run washing machines, drive our cars, and produce our food.
    AI will leave humans plenty of time to get on with the more important things, like having fun, raising families, planting flags on new worlds, etc.
    AI will even help us get rid of money once and for all.

    As for the horrible AI monsters that sci-fi tries to tell us will consume humanity:
    Why would AI produce such a thing?

    All those who fear technology should go back to the bush, live in the trees, and never wonder what would happen if we rubbed two sticks together.
     
  19. sargentlard Save the whales motherfucker Valued Senior Member

    Messages:
    6,698
    We are trying to be gods in a reality that hasn't even finished making us human yet... we have only downfall awaiting us.
     
  20. BatM Member At Large Registered Senior Member

    Messages:
    408
    Change is inevitable. We can either change with it, or it will change without us.
     
  21. sargentlard Save the whales motherfucker Valued Senior Member

    Messages:
    6,698

    So if we don't change with it and it goes on without us, where does that leave us? In the natural-selection garbage pile?

    What if the change is being caused by us and yet we refuse to accept it? A minority (scientists, new-age believers) providing solutions and a better life through technology, and the majority seeing it as a threat to humanity (hardcore religious people, people still not adapted to the wired age). Where does that leave us? In a sort of species civil war... This is happening as we speak, but in a very mild way; I am referring to this concept's last and final stage. You are right in what you said... I am just wondering what happens when it actually gets down to that point: change or be changed.
     
  22. BatM Member At Large Registered Senior Member

    Messages:
    408
    "If man had been meant to fly, God would have given him wings."

    Yet, look where we are now...

     
  23. Capibara GrandfatherOfAllKnowledge Registered Senior Member

    Messages:
    39
    This topic is quite interesting ...
    BUT
    ... machines are not more efficient than humans or other biological systems. In fact, organisms are many orders of magnitude more efficient than machines. For a better understanding of how efficient the brain is, you could read this:
    http://www.merkle.com/brainLimits.html

    there are also lots of other resources
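Rough numbers make the efficiency point concrete: the brain draws about 20 watts, while a desktop CPU of this era draws tens of watts for vastly fewer operations. The figures below are order-of-magnitude assumptions in the spirit of the linked page, not measurements:

```python
# Rough operations-per-joule comparison between brain and CPU.
# All numbers are order-of-magnitude assumptions, not measurements.

brain_ops_per_sec = 1e16   # a common mid-range estimate of brain "compute"
brain_watts = 20           # the brain's approximate power draw

cpu_ops_per_sec = 1e9      # a circa-2003 desktop CPU, very roughly
cpu_watts = 60             # typical power draw of such a chip

brain_eff = brain_ops_per_sec / brain_watts   # operations per joule
cpu_eff = cpu_ops_per_sec / cpu_watts

print(f"brain/CPU efficiency ratio: ~{brain_eff / cpu_eff:.0e}")  # ~3e+07
```

On these assumptions the brain comes out some seven orders of magnitude more energy-efficient per operation, which is the point the linked page makes in much more detail.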

    ... the problem of AI taking over and killing every human in sight should not be taken lightly ... nobody has even given a true definition of AI, and it seems humans are not that intelligent ... actually, humans have a very bad habit of ruining every smart thing they do with lots of stupid ones ... actually, the more I think about this, the angrier I get ... so I'm going to get myself something to eat now and try to forget.
     