Singularity

Discussion in 'Intelligence & Machines' started by Dan Foley, Apr 26, 2012.

  1. Dan Foley Registered Member

    Messages:
    4
    Reading "The Singularity is Near." Problem: Intelligence does not equal computation. Intelligence grows out of need, and need is tied to biology. Thoughts? The final frontier...will we really make it our bitch? Check out wfnt dot com "Singularity and the new $600 Man."
     
  2. superstring01 Moderator

    Messages:
    12,110
    Kurzweil is a genius and I love "The Singularity is Near" (I'm actually re-reading it now). The dates may be off, but I don't think his predictions about where we're heading are wrong.

    And your claim that intelligence is purely an outgrowth of necessity, I doubt. There is no fixed rule for that. And, to be certain, there is nothing "mystical" about our brains. They are computers. Very, VERY sophisticated, organic computers. We are building computers that--by 2035--will have more computing power in a single laptop than the entire human race. That follows Moore's Law, which has held MORE than true for the past--oh--fifty years.

    ~String
     
  3. Dan Foley Registered Member

    Messages:
    4
    I'm enjoying it a lot myself! My wife brought home Transcendent Man and it piqued my curiosity.

    To clarify: I distinguish between "need," my word, and "necessity," your word. I need to eat, but it isn't necessary that I eat a bowl of ice cream. (Although it does sound delightful!) There is latitude when it comes to filling needs. There is no latitude when it comes to necessity.

    I was just at the KurzweilAI dot net site and commented on the blog post there, "When creative machines overtake man," where I fleshed this out more:

    From the article:

    “Creative machines invent their own self-generated tasks to achieve wow-effects by figuring out how the world works and what can be done within it. Currently, we just have little case studies. But in a few decades, such machines will have more computational power than human brains.”

    But my question is, will creative machines experience “wow effects” AS wow effects? Humans experience wow effects AS wow effects, which is WHY we are able to detect and create them. Will creative machines not require “more computational power than human brains” to get to wow effects, precisely because for these machines there are NO wow effects as we experience them?

    A similar question presents itself regarding this statement:

    “Now you say: OK, computers will be faster than brains, but they lack the general problem-solving software of humans, who apparently can learn to solve all kinds of problems!”

    I would say the question is NOT about the problem-solving "software," or ability. The question is about experiencing a problem AS a problem. Humans have problems, and machines help us solve them. But will machines ever have their OWN problems? Problems as we experience them. Problems that grow out of needs, needs tied to biology. Will machines have needs? That is the question--not whether machines will ever be capable of meeting the needs that needy creatures experience, whether for problem solving or for creativity. What will machines NEED? Intelligence grows out of need, and does not equal computation.

    If you have a chance, check out wfnt dot com, my "Singularity and the $600 Man." Thanks for your reply!
     
  4. Read-Only Valued Senior Member

    Messages:
    10,296
    This is rather interesting.

    Yes, machines have needs. One of those is identical to a human need - energy. Without energy, neither man nor machine can function. And I have to wonder just how many decades it will take before machines can actually establish and maintain their own energy supply without human assistance.

    And I'm also a bit doubtful that a machine can ever become truly intelligent. For one thing, despite our best attempts and plans, how can a machine be given the ability to generate true, original thought? That's the one factor that truly divides humans from the rest of the animal kingdom. And I don't see how we could ever give that "ability" to a machine - regardless of how large its database is or how fast it can crunch numbers.
     
  5. Dan Foley Registered Member

    Messages:
    4
    Thanks Read-Only!

    I share your doubt and agree that machines need energy to function. But do they need to function? Would a machine struggle to survive, for example? Would it fight for energy, if energy were scarce? Right now, we use machines to help us do things we need or desire to do. The machine needs energy to fulfill this task. But we supply it with the task. The originating source of motion is OUR need.

    Another example. In “The Singularity is Near,” Kurzweil talks about how complex it is to reverse engineer the brain and then, with this knowledge, design a machine that will "understand and respond to emotion"--which is also very complex and will require a vast expansion of computational power. I don’t doubt that such a machine is coming soon. But the question is whether the machine that can understand and RESPOND to emotion will FEEL emotion. If it doesn’t, then the only reason to make a machine that can understand and respond to emotion will be for the sake of biological human beings, who do FEEL emotion.

    If a machine does not feel emotion--if a machine cannot be elated or dejected, head-over-heels in love or heartbroken, proud or ashamed, confident or fearful--how can it be said to UNDERSTAND these emotions? I have no doubt we will build machines that get better and better at calculating how to respond to the emotions they detect human beings experiencing, through voice analysis, brain scanning, etc. But human beings are much more efficient; we require FAR LESS computational power to understand and respond to emotion BECAUSE we FEEL emotions.

    From an engineering perspective, then, it would seem to make sense to engineer machines that feel pleasure and pain, and everything else we feel and need them to respond to, based on the principle: no superfluous complexity. Only as complex as needed to solve the problem. Is it more complex to build a machine that FEELS emotion? Is it even possible? Or is it a benefit to have intelligence that does not truly experience emotion, pain, pleasure…? Perhaps we only need machines that respond to emotion in the meantime, until we move beyond emotions ourselves. Is that it?
     
  6. superstring01 Moderator

    Messages:
    12,110
    Well, right now we don't have even the faintest clue as to how it will happen.

    But consider the leap. My desktop isn't ten times more powerful than the most powerful PC in 1990. It isn't a thousand times more powerful. It's something like 1.5 million times as powerful. And the rate of increase is--itself--increasing.
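
    Here's a quick back-of-envelope sketch (in Python) of where a number like that comes from. The doubling periods are assumptions--commonly quoted figures for Moore's Law run from 12 to 24 months--so treat the output as illustrative arithmetic, not a measurement:

    Code:
    def growth_factor(years: float, doubling_period: float) -> float:
        """Total performance multiple after `years` of steady doubling."""
        return 2 ** (years / doubling_period)

    years = 2012 - 1990  # a 1990 PC vs. today's desktop
    for period in (1.0, 1.5, 2.0):
        print(f"doubling every {period} yr -> "
              f"{growth_factor(years, period):,.0f}x over {years} years")

    # doubling every 1.0 yr -> 4,194,304x over 22 years
    # doubling every 1.5 yr -> 26,008x over 22 years
    # doubling every 2.0 yr -> 2,048x over 22 years

    Something like 1.5 million lands between the one-year and 18-month doubling cases--the point being that the direction is unmistakable even when the exact doubling period is debatable.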

    A single $1000 laptop in 2020 will be more powerful than a human brain. A single laptop in 2035 will be more powerful than all human brains. We've already reverse engineered parts of our brain. It took scientists 2 years between 1988 and 1990 to map 1/10,000th of the human genome. The consortium that funded the HGP gave them fifteen years for the rest and those scientists--blind to the advancing computer technology--were furious. Guess how much time it took to complete the other 9,999/10,000ths? Seven years. Guess how long it takes scientists to map any genome now? A couple months.

    It took a supercomputer, in 1985, a full year to calculate how to fold a protein accurately (something your cell does in a second). A supercomputer did it in a month in 1995. A supercomputer did it in a day in 2000. My new desktop can calculate folded proteins now in a couple minutes. The newest supercomputers can calculate folded proteins in milliseconds now and can calculate millions of them simultaneously.
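
    Taking those dates at face value (I haven't verified them), you can back out the implied doubling time with a few lines of Python:

    Code:
    import math

    # If a computation shrank from one year (1985) to one day (2000),
    # what steady doubling period does that imply?
    # Dates and durations are as quoted above, not verified.
    speedup = 365                  # one year -> one day
    span_years = 2000 - 1985
    doublings = math.log2(speedup)
    print(f"{doublings:.1f} doublings in {span_years} yr -> "
          f"one doubling every {span_years / doublings:.2f} yr")
    # 8.5 doublings in 15 yr -> one doubling every 1.76 yr

    That is roughly a Moore's Law pace, which is the point.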

    I just don't get how you can "doubt" that a machine could become intelligent. Are you implying that there is something magical about organic computers? That's preposterous.

    It's called a "singularity" for the same reason a black hole is called a "singularity": it is the point beyond which we cannot fathom, imagine, estimate or predict.

    Technology doesn't increase along an intuitive line. It isn't improving at "10% every two years." It's increasing exponentially, and the rate of that increase is, itself, increasing. Doubt me? Look at any study of the rate of increase of technology.

    We will reach a point where the creative designs of machines are created by other machines (Intel and Motorola have virtually taken humans out of CPU design architecture--it's all done by supercomputers anyway). The time from design to production has fallen--in accordance with Moore's Law--exponentially. Eventually the time from design to production will be on the order of minutes, then seconds. At that point the increase in technology in a single year will be greater than in all the time before that year.

    I think the reason people argue so passionately against it is that we/they are passionate about our bodies, our humanity, our world essentially remaining "human". There is a saying: "A man will defend most passionately that which he doubts the most." And it's true here. It is frightening to imagine no more humans; the end of humanity. Possibly the end of our world as we can envision it. Arguing--and, in any event, winning an argument--about where we will go assuages their deepest fears and buys them peace, even if it isn't true. It doesn't matter. Emotional comfort rarely rests on truth or reality (just look at religion: they can't ALL be right, yet people still stick to their faiths passionately).

    And the truth is, there are people here, on this website, right now, who will live to see the singularity. Whether we survive it or become one with it, nobody knows. But it's coming, and it is completely unavoidable.

    ~String
     
    Last edited: Apr 28, 2012
  7. Dan Foley Registered Member

    Messages:
    4
    How will technology become more than a tool?

    It seems to me, String, that you miss what is crucial. The development you lay out at length can all be granted. What you describe is the development of a tool. A tool is FOR something and can only be understood in relation to that for the sake of which it is a tool. It may be that what this is all for will change post-singularity, or be viewed in a different light, based on the needs of what we become, or the needs of the intelligence we create. (And if we or they have no needs, how could it be FOR anything?) But as you admit, whatever THIS might be, we can have no idea of it now: "It is the point beyond which we cannot fathom, imagine, estimate or predict." In the meantime, we ourselves are driving it forward to serve needs that we have--the need for survival, for example. Unless you are suggesting this is being directed from above by some sort of providential order? Or is it not the outgrowth of human beings striving to meet needs that human beings have? You say people who resist do so for passionate reasons. But don't those who seek to assist this development do so as well? Are we not all, at least for now, still moved by passion?

    For example, have you seen Transcendent Man? It’s a documentary on Ray Kurzweil, and death is powerfully present in it from the opening. Death is no longer something to be resigned to; such resignation is no longer the only rational approach to death, according to Kurzweil. We are passionately concerned with avoiding death, and before long, death will be overcome through scientific progress. This desire is behind our pursuit of the various technologies, growing exponentially, with which we will conquer death in the near future. We read about advances in this direction all the time.

    The reason I like Kurzweil so much is that he looks at the big picture of this development. But there are problems, it seems to me. He views technological advance as an extension of biological advance, i.e., evolution. He takes as given the pinnacle of biological evolution: human intelligence as a tool that aids our survival. Right now human intelligence, a tool that evolved for the sake of human survival, directs ALL technological advance toward the goal it has served since it emerged hundreds of thousands of years ago. The goal is older than that, even. Survival is THE biological goal, and human intelligence is but a tool that has proven extraordinarily useful in the pursuit of this goal.

    Yes, I understand that computers now assist us enormously (and have for some time, and increasingly so) in all areas of technology creation. But they are no more than an extension of the original tool and model, human intelligence. This in no way replaces the goal for the sake of which we apply EITHER tool. The end for the sake of which we create technology-creating technology remains the ancient goal of biological existence: survival--our survival, and even our individual survival. Will this change? In "The Singularity is Near," Kurzweil creates dialogues among various characters, including some post-singularity individuals. In these dialogues it is clear that the morality that developed in the course of biological evolution, presumably also for the sake of survival, somehow remains operative post-singularity. But how can this be if, post-singularity, the needs for the sake of which this morality developed no longer exist? If survival is no longer at stake?

    If human intelligence emerged for the sake of human survival, and is guided by this need, what will guide an artificial intelligence without this or any other need? Or what will it need? Will we have to build the need as well? Will we try to engineer AI that feels pleasure and pain? That fears for its survival? That loves? If AI has no such motives, what will drive it? Right now, the most that Kurzweil envisions is machines that understand and respond to human emotions, i.e., machines that will be able to detect emotions, with whatever array of physiology-reading sensors, and respond in a way we've programmed as appropriate to the emotion detected. But I want to know: will these machines FEEL emotions? And if they don't, how can they really be said to understand them? What will they live for?
     
