Singularity Institute for Artificial Intelligence

Discussion in 'Intelligence & Machines' started by Altima, Aug 31, 2002.

Thread Status:
Not open for further replies.
  1. Altima Registered Member

    Messages:
    6
    Comments, questions?

    The Singularity Institute for Artificial Intelligence, Inc. is a 501(c)(3) nonprofit corporation. Our charitable purpose is to bring about the Singularity - the technological creation of greater-than-human intelligence - by building real AI. We believe that such a Singularity would result in an immediate, worldwide, and material improvement to the human condition.
    www.singinst.org/intro.html

    Concepts:

    Friendly AI:
    How will near-human and smarter-than-human AIs act towards humans? Why? Are their motivations dependent on our design? If so, which cognitive architectures, design features, and cognitive content should be implemented? At which stage of development?
    http://www.singinst.org/friendly/whatis.html

    Seed AI:
    Seed AI is Artificial Intelligence designed for self-understanding, self-modification, and recursive self-enhancement.
    http://www.singinst.org/seedAI/seedAI.html
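
    A toy sketch of the "recursive self-enhancement" idea in Python (purely illustrative, not SIAI's design; solve() and improve() are made-up stand-ins for genuine self-modification):

    # Toy illustration of recursive self-improvement (not SIAI code).
    # An "agent" carries a strategy for solving a problem plus a way of
    # improving that strategy; each cycle it applies the improver to itself.

    def solve(strategy, target):
        """Apply the current strategy: crude iterative refinement of a guess."""
        guess = 0.0
        for _ in range(strategy["iterations"]):
            guess += strategy["step"] * (target - guess)
        return guess

    def improve(strategy):
        """Stand-in for self-modification: return a slightly better strategy."""
        return {"iterations": strategy["iterations"] + 1,
                "step": min(1.0, strategy["step"] * 1.2)}

    strategy = {"iterations": 1, "step": 0.1}
    for generation in range(8):
        error = abs(42.0 - solve(strategy, 42.0))
        print(f"generation {generation}: error = {error:.4f}")
        strategy = improve(strategy)  # the system revises its own machinery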

    Singularity Movement:
    The philosophical and activist movement dedicated to accelerating the arrival of greater-than-human intelligence for the benefit of mankind.
    http://www.sysopmind.com/sing/principles.html
     
  2. kmguru Staff Member

    Messages:
    11,757
    Since no one has seen a real, live, walking and talking GOD, maybe we sciforums members can set up an institute to develop one?

    Is anyone interested???

     
  3. kmguru Staff Member

    Messages:
    11,757
    Great website. I am a slow reader. Here are my comments as I read through it...

    Sounds good to me. But there is a mega catch. For example, we already have people who think a thousand times better and have solutions to most of our social problems. But that does not help the rest of us, since the rest of us are monkeys compared to them....

    Does that mean that to create a man, you first create a monkey... and somehow you do a monkey dance and presto, the monkey turns into a human... This I have got to see...

    You create a human-level machine that thinks fast and can control the internet and the missile silos within about half an hour of being turned on. Then it gets emotional and furious and blows up the planet in about 60 seconds. So the world as we know it will be gone by about 2010, courtesy of the Singularity Institute.

    (Guys, watch Odyssey 5 on Showtime... you will know what I am talking about.)
     
    Last edited: Sep 6, 2002
  4. allant Version 1.0 Registered Senior Member

    Messages:
    88
    We already have organic versions of this. But the USA still has the shrub in charge. People will not trust what they don't understand, and that includes people or computers smarter than they are.
     
  5. Altima Registered Member

    Messages:
    6
  6. kmguru Staff Member

    Messages:
    11,757
    The writer who is complaining that others are thinking anthropomorphically is arguing from the same kind of thinking.

    Consider the statement:

    It could be more like: remove the threat. Remove the fist. Just as a human child could remove the wings of a fly. Granted, it can be argued that the AI child may not think the same way as a human child. But every brain in nature has a self-preservation system. If we were building a rock with minimal interaction with its environment, that would be one thing; we are talking about an AI child that will have maximal interaction with the environment, including another sentient species of unknown capabilities.

    The writer misses the fundamental mathematics of decision theory.
     
  7. Altima Registered Member

    Messages:
    6
    Why is this necessarily so? The whole point of the chapter you allegedly read was that traits like retaliation are complex functional adaptations, evolved mechanisms for maximizing the likelihood of survival and reproduction in the human ancestral environment, and that a complex, multifaceted trait like retaliation wouldn't spontaneously appear in the source code of a growing AI unless the AI or the programmers had a reason to create it.

    *Every* brain? What about Jesus's? What about Gandhi's? What about Martin Luther King's? Even in the face of evolutionary constraints, human minds have done a lot to strive towards normative altruism. What makes you think an altruistic mind without rationalization or human complex functional adaptations would choose to revise its goal system for self-preservation?

    What you're talking about isn't "the fundamental mathematics of decision theory"; it's the unique, noncentral case of human cognitive tendencies and our specialized repertoire of adaptations.
     
  8. kmguru Staff Member

    Messages:
    11,757
    "Allegedly"? Let us not get personal here, with your traits like retaliation and complex functional adaptations, evolved mechanisms over millions of years....

    So, you are trying to create a GOD! Isn't that what I said in the first place? Why does it take so long to get it?

    Or maybe not....

    But I am talking about "the fundamental mathematics of decision theory" and its effect on AI, whether man-made or self-evolved. The fundamental laws of cellular automata do not change whether it is the Sun, the Moon or the Earth - with or without humans.
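
    (To illustrate that point with a toy example: an elementary cellular automaton such as Rule 110 applies one fixed rule table to every cell, regardless of what we take the cells to mean. A minimal Python sketch, purely illustrative:)

    # Toy elementary cellular automaton (Rule 110) - illustrative only.
    # The update rule is a fixed lookup table; it does not change with
    # whatever meaning we attach to the cells.

    RULE = 110
    TABLE = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1 for n in range(8)}

    def step(cells):
        """Apply the fixed rule table to every cell (wrap-around edges)."""
        size = len(cells)
        return [TABLE[(cells[(i - 1) % size], cells[i], cells[(i + 1) % size])]
                for i in range(size)]

    cells = [0] * 31 + [1]  # start with a single live cell on the right
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)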
     
  9. Altima Registered Member

    Messages:
    6
    Try reading from http://singinst.org/CFAI/anthro.html#anthro to http://singinst.org/CFAI/adversarial.html and tell me what you think, if you have the time. Any additional comments would be appreciated.
     
  10. kmguru Staff Member

    Messages:
    11,757
    I will try. In the meantime, you can read up on the publicly available learning material by Dr. Sheila R. Ronis (go Google her). While that is not enough to understand systems theory, let alone cellular automata... it can provide a foundation of communication that we can both agree on. Since she is the brain behind a Pentagon think tank, that can provide a baseline to start from. Her material is high-school stuff compared to where we will be going to design an AI.

    Otherwise, you may have to head to Pseudoscience...

     
  11. kmguru Staff Member

    Messages:
    11,757
    OK, I read the 24 pages, which are basically written for the common folks (10th-grade level, the newspaper readership). I don't think I have said anything I am ready to take back. Recap...

    AIs are a real possibility someday.
    AIs can provide major benefits to mankind.
    The Hollywood version of AI is just stupid.

    The article wishes to design a friendly AI. I call it a GOD.
    The process of designing such a GOD in man's image, by humans, will be tricky, since, as the article supposes, we have an anthropomorphic mentality, and I agree.

    The article does not discuss the physical properties of this GOD or Gods, which anthropomorph will design it (ve?), how they will be replicated, what the supergoals would be, etc.

    I have been designing expert systems since 1978. I could design an AI that would be indistinguishable from a human on the emotional level, the one humans pride themselves on and claim an AI can never possess. But I am afraid someone could change the source code to their advantage (the selfish set) and turn it into not an AI but a superweapon.
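
    (For anyone who has not met the term: an expert system, at its simplest, is a set of if-then rules applied to a working memory of facts until nothing new can be concluded. A toy forward-chaining sketch in Python, with made-up rules and names, looks something like this:)

    # Toy forward-chaining expert system - illustrative only.
    # Each rule fires when all of its conditions are present in working memory.

    RULES = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "see_doctor"),
    ]

    def forward_chain(facts):
        """Apply rules repeatedly until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
    # derives 'possible_flu' and then 'see_doctor'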

    The devil is in the details... one false move... bye-bye, mankind. (Read Bill Joy's article; I disagree with him, but he has some serious arguments.)

    Power corrupts. Absolute power can corrupt absolutely. (from the human vantage)
     
  12. Avatar smoking revolver Valued Senior Member

    Messages:
    19,083
    If you can, then we already have such an A.I., right?
     
  13. kmguru Staff Member

    Messages:
    11,757
    They are at the bacteria or virus stage... no hamster yet....
     
  14. Avatar smoking revolver Valued Senior Member

    Messages:
    19,083

    We have emotions like bacteria?
     
  15. kmguru Staff Member

    Messages:
    11,757
    NO, I said I could design one... The word "could" in American English denotes a capability, not an event that has already happened. Elsewhere I have stated what hardware it would take to build an AI. Then I would have to write the code and then debug, test, etc., using our basic CMM methodology.

    It won't be easy, but it can be done. My point is, the AI will exhibit my own thinking, however anthropomorphic that may be - and if I manage to develop a Friendly AI, that will be a great benefit to mankind. But the flip side is, if it falls into the wrong hands... well.

    So the end point is, it will be very risky, unless we develop a hundred of them simultaneously, hoping that a few evil ones can be overpowered by a lot of good ones.
     
  16. Avatar smoking revolver Valued Senior Member

    Messages:
    19,083
    My point was that if you could, then maybe others already have.
     
  17. kmguru Staff Member

    Messages:
    11,757
    Not likely... because it requires conditions that are not yet available, though people are trying. Just because H. G. Wells thought of rockets to the moon did not make them happen at the time. Just because Clarke wrote about a spinning space station did not make it happen. Just because we know that VHS tapes can be used to record in digital format does not mean you can buy a commercial digital VCR - even though there is high demand for one.

    So there is a big gap between ideas, preliminary design, and real construction and sales... some never make it to market....
     