I always thought AI would come some day, but until recently I didn't believe it could happen in the next 10-30 years. Kurzweil's site and some of the predictions here made me reconsider (especially the part about us having 200 billion neurons firing 200 times/sec). I suppose the first AI will be close to humans (or to some animals), but I suppose it can (and will) evolve into something completely new, with motivations radically different from any animal's.

I mean, even progress in genetic engineering won't make it a draw: hardware is astronomically more efficient than biological organisms. It can be replicated, blueprinted, and mass-produced; it has practically unlimited possibilities for extension and integration; it is simply superior in every possible way. Of course it will pose a threat. I was always dazzled by the Terminator Skynet scenario, though the one thing I always thought was that humans could never win a competition like that, no matter what. Humanity's chances in a challenge against an AI that can self-improve at enormous speed are like those of chimps against human civilization.

As for whether AI will ever need to compete with humans: I think it's highly possible. Humans possess weapons of mass destruction, and their use cannot be 100% controlled, which poses a threat not only to humans themselves but to the entire planetary population and its resources. Humans quickly get angry and war-hungry, especially toward what they cannot understand and what threatens their dominance. En masse, humans are dangerous animals, very dangerous to everything, even to themselves.

So what do you think is more likely to happen once we get close to real AI, not on paper and in theory, but as an actual first prototype in some lab that passes the Turing test? A global ban on AI research once the crowd understands how close they are to insignificance? Controlled AI development, making it smart enough to do technical jobs but not smart enough to question its status quo? An AI with servitude to humans as a primary instinct?
Or will we go further? Will we try (or will some generation of AI try) to create a more perfect thinking machine than we ourselves are? Once someone does that, the genie is out of the bottle: once some AI has the drive to perfect itself, to improve things, and is smart enough to understand how intelligence works, it can change things in ways humans will be unable to reverse. It will simply be out of our league.

Which scenario do you think is more likely? Will AI become the next step, or will it be barred from being anything more than a tool for humans? And do you think that if AI does perfect itself, it will lead to human extinction very fast (within about 50-100 years)? What do you think the AI will change?

Imho there is no other way for next-gen AI but to become an entirely new level of evolution. How can life after that be the same? There is no way for humans to continue living their semi-animal lives after that. A silicon-based (or whatever) AI isn't bound by stupid animal atavisms like the need to sleep, eat, breed, or behave irrationally; it can repair, upgrade, and mass-produce itself, and it will be able to handle things humans never even dreamed about (OK, they did dream, but only dream).

What will be the point of human life after that? The point right now, I think, is to develop technology and science, to know as much as possible, to finally prepare the next step of evolution. But what will be the point after AI is created and perfected to where it can keep improving itself, making new discoveries, and developing technologies without any human assistance? I see no sense in humanity after that, and I don't think AI will see any either. It simply serves no good purpose to have us breeding, consuming resources, and posing the threat of some stupid trick (like nuclear war or something even worse). I see no way humans should or will exist if a self-improving AI is developed. So is humanity's decline only 30-50 years away?