"The Artilect War"

Discussion in 'Intelligence & Machines' started by Esoteric, Jan 3, 2004.

Thread Status:
Not open for further replies.
  1. Esoteric Tragic Hero Registered Senior Member

    Messages:
    307
    "My name is Professor Hugo de Garis. I'm the head of a research group which designs and builds "artificial brains", a field that I have largely pioneered. But I'm more than just a researcher and scientist - I'm also a social critic with a political and ethical conscience. I am very worried that in the second half of our new century, the consequences of the kind of work that I do may have such a negative impact upon humanity that I truly fear for the future."

    http://www.cs.usu.edu/~degaris/artilectwar2.html
     
  3. kmguru Staff Member

    Messages:
    11,757
    So, Esoteric

    What do you think about the professor's brains?

    Anyone seen one work? Or is it one of those fictionalized documentaries? My BYU friends in the EE department think he is smoking something, or has done too many loops on the roller coaster....

     
  5. Esoteric Tragic Hero Registered Senior Member

    Messages:
    307
    From what I have been reading on the subject, artilects seem plausible. The Terran/Cosmist stuff is pretty much pure fantasy, albeit entertaining.

    Here's another site on artilects.

    http://www.artilect.org/
     
  7. kmguru Staff Member

    Messages:
    11,757
    Definitely interesting. But all these pundits forget that one just cannot linearly extrapolate how an AI a million times more powerful would behave. They just extrapolate their own behaviour if they were to become Gods, and then get scared only if a co-worker becomes one first.

    As life goes... do you know that there is a whole different plant kingdom out there that is gradually getting smarter thanks to our genetic meddling? We should worry more about that.

    And perhaps there are a lot of natural hyper AI computers out there the size of pulsars....just imagine...
     
  8. kmguru Staff Member

    Messages:
    11,757
    From Ray Kurzweil site:

    "Once computers achieve a level of intelligence comparable to that of humans, they will necessarily soar past it. For example, if I learn French, I can't readily download that learning to you. The reason is that for us, learning involves successions of stunningly complex patterns of interconnections among brain cells (neurons) and among the concentrations of biochemicals, known as neurotransmitters, that enable impulses to travel from neuron to neuron. We have no way of quickly downloading these patterns. But quick downloading will allow our nonbiological creations to share immediately what they learn with billions of other machines. Ultimately, nonbiological entities will master not only the sum total of their own knowledge but all of ours as well."


    This assumes that the complex process of AI is a simple matter of downloading information to an artificial brain, even though that does not work for us humans. If it were that easy, how come K-Mart is going bankrupt? All they would need to do is download the software, and their supply chain logistics would hum like Wal-Mart's. What if the software is too complex for the hardware to emulate? What if you need real hardware to develop the AI? Then downloading the next upgrade becomes more like building the next 3-D cube.

    What burns me is that these physicists, who spend the better part of their lives learning material science and physical laws, suddenly become experts in information science while tinkering with hardware. When you ask them, they say - oh! it's just a little software; I cannot do it, but my 12-year-old son could. We cannot find God, but we surely can build it.

    Well, we have been building God since Man came out of the cave. And when we make a Monkey translate Chinese to English, then we will have the software to do it. At least, we will learn the mechanism....
     
  9. eburacum45 Valued Senior Member

    Messages:
    1,297
    Here is our fictional take on the artilect/AI situation:
    http://www.orionsarm.com/sophontology/archailects.html

    Who knows if this will ever come about? What will be the status of humanity in such a universe?
    Is it, or will it soon be, possible to detect the infra-red emissions from an alien artilect thousands of light years distant?

    Don't worry - the software for artificial intelligence will emerge, sooner or later, one way or another; but will we even recognise it as such if it is radically unlike human consciousness? And as Kurzweil points out, another thing to worry about is how friendly it will be...
     
  10. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    One of the first things this guy appeals to is Moore's Law... most of the stuff he mentions in this article is neither cutting edge nor particularly clever.

    Maybe that's a little unfair but, getting past his weird advertisement of his beliefs about gender relations, I find that he seems strangely non-technical in a lot of fields he draws from.

    He states that "If Moore's Law continues unstopped until 2020 or thereabouts, it will be possible to store one bit of information (a zero or a one, a "0" or a "1") on a single atom. " I hesitate to make this point, but Moore's Law is not restricted by time... either we'll find out how to do that little trick (storing information on a single atom, which in terms of current photolithographic techniques is not very likely) or we won't. No law of even progression will govern this.
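    To make the objection concrete, here is a toy version of the straight-line extrapolation the article relies on - purely illustrative, with an assumed 18-month doubling period (not a figure taken from the thread):

```python
# Naive Moore's-Law extrapolation: assumes storage density doubles
# every 18 months -- exactly the trend-continuation assumption in question.
def doublings(start_year: float, end_year: float, period_years: float = 1.5) -> float:
    """Number of density doublings between two years."""
    return (end_year - start_year) / period_years

# From 2004 to 2020 at one doubling per 18 months: about 10.7 doublings,
# i.e. roughly a 1600x increase in density -- IF the trend holds.
growth_factor = 2 ** doublings(2004, 2020)
```

    Nothing in the arithmetic guarantees the trend continues, which is the point being made above.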

    I won't say much about Avogadro's number other than the fact that it's the number of atoms/molecules in a mole of an element/compound, not the number of atoms in "an object of human scale".
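    The correction can be checked with a quick back-of-the-envelope calculation (illustrative numbers, not from the thread):

```python
# Avogadro's number counts particles per mole, not per object;
# a human-scale object contains many moles.
AVOGADRO = 6.022e23          # particles per mole
WATER_MOLAR_MASS_G = 18.015  # grams per mole of H2O

def molecules_in_water(mass_g: float) -> float:
    """Number of molecules in a given mass of water."""
    return (mass_g / WATER_MOLAR_MASS_G) * AVOGADRO

# A 1 kg bottle of water holds about 3.3e25 molecules -- roughly 55 moles,
# not "one Avogadro's number" of particles.
```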

    In terms of physics, he says "In physics, the concept of "entropy" is used to measure how disordered a physical system is. For example, ice has a lower entropy than water, because it is more ordered, less chaotic."

    Now since the heat-death of the universe - the ultimate expression of entropy, I suppose - will probably be extremely orderly, this description of entropy is rather backwards. However, this is not the most serious problem here...

    He begins equating information to energy, in the belief that a system with no information loss will also generate no heat. This seems at odds with the physics of the situation as I understand it...
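    For context on that claim (standard thermodynamics, not something asserted in the thread): Landauer's principle says erasing one bit must dissipate at least k*T*ln(2) of heat, which is why logically reversible, no-information-loss computation has no such lower bound. The number involved:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum heat dissipated per erased bit at a given temperature
    (Landauer's principle: k * T * ln 2)."""
    return BOLTZMANN * temp_kelvin * math.log(2)

# At room temperature (300 K) this is about 2.9e-21 J per erased bit.
```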

    In general, there are a lot of "Moore's law will save us" assertions -
    "All this information could be dumped into a "hyper-computer", that Moore's Law will make possible" - apparently ignoring the fact that Moore's law is not a law, but just an observation of a trend. His descriptions of artificial embryology betray no knowledge of biology, and his maunderings about "gigadeath" and "artilect gods" are no different from plenty of other speculative fiction.

    Any reasonably smart high school student could have written this document; the misconceptions about various scientific fields are certainly in line with regular public misconceptions. His descriptions seem imprecise and at times impossibly hopeful, and he still has not made any attempt to explain how making a computer bigger will make it intelligent; he is simply following the old science fiction saw that any computer that is big enough will become sapient, probably by accident while no one is looking.

    After this he goes off into a long wander about Cosmism, the reasons why everything will be the way it will be, and a bunch of tiresome ideological claptrap which I didn't bother to do more than skim through.

    From the quality of his composition it's hard to tell whether this guy is even a real university professor. I sincerely doubt that this document will provide any insight for anyone into artificial intelligence.
     
    Last edited: Jan 14, 2004
  11. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    After looking into it for 10 minutes or so, he seems legit. He's a prof at a university. I've seen nothing to say it works, or is even actually promising, but I'm pretty sure he's somewhat legit with computers and AI. His appeal to Moore's law could easily be viewed as an attempt to relate to readers. Most nerds have heard of it, so he captures interest that way? I dunno. Quite interesting I suppose. I'll have a look into it more.
     
  12. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    Sorry Wes, I edited quite a bit.
     
  13. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    from the linked text above:
    I think that sums it up. I'm with Penrose on this one. This dude's baby may be programmable, but until other avenues of research offer up key info, there's little worry of a machine becoming self-aware... unless of course it could happen by accident.
     
  14. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    Wes, that's crap. That's the CS version of waiting for the Rapture.
     
  15. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    You suck. Hehe. Not a problem. I still stand by my comment!

    Oh, and I think I was much more succinct in my assessment as well.
     
  16. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    What exactly is crap? Penrose's point?

    You mean the part about it happening by accident? Oh man yeah that's a seemingly remote possibility for sure. I didn't mean to present it as otherwise.
     
  17. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    No, waiting for another field to provide an answer to the AI problem. Why bother even studying it in the first place?
     
  18. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    Oh, well that's silly. You're serious?
     
  19. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    Okay, assuming you are serious (pardon if that was obnoxious), I'll offer the following simple notions.

    There is no way to know if Penrose is correct. (Like I said, I think he is, though.) If you don't think he is, you'll pursue AI via a computational route. A number of positive things remain even if Penrose is right:

    you study neural networks and get to know the mechanics.
    who knows what stuff you can learn from that
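    As a concrete picture of those mechanics, here is a toy single neuron - purely illustrative, not anyone's actual research code:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Networks of these simple units are the "mechanics" worth studying,
# whatever one concludes about consciousness.
value = neuron([1.0, 0.0], [2.0, -1.0], -1.0)  # sigmoid(1.0), about 0.73
```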


    Hmm..

    how about this. since you don't know how consciousness works, you can't say what is the right route to figure it out. so all routes that seem reasonable should be pursued, especially if there are side benefits, like those offered by current AI research. Better computer games, stock market prediction and all kinds of other AI apps that don't require a conscious entity. could be that the conscious entity could be comprised of a myriad of these subsystems... so if you don't pursue this route you still couldn't do it even if other areas of science offer up the fundamental conditions for consciousness.

    oh and "waiting for the answer from somewhere else" is the silly part. you don't wait, you look as hard as you can in your area. maybe you don't need the other answers, maybe you do, but you'll never know if you just sit around waiting. i didn't intend at all to imply "sitting around and waiting". I just mean that I think that ultimately you won't have a choice if you're seriously trying to make a conscious machine. you should still research all you can in the meantime.

    oh and further, maybe the scope of AI research is too narrow eh? rather, it's possible that for it to actually result in intelligence, the scope of the research may have to expand to solve the problem.
     
    Last edited: Jan 14, 2004
  20. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    Of course I'm serious. You don't devote your life to studying a problem, and then add a footnote to your efforts that says

    1. This part is somebody else's problem.

    when you're referring to the fundamental pursuit of your study. Making computers really really large and complicated is just clouding the issue; the real point of the study of AI is to try to develop an intelligent/sapient computer, or come to the conclusion that it cannot be done for a concrete reason, be that quantum effects or whatever. That doesn't mean that you can just cite an apparent problem as a reason to sit on your backside and wait for someone else to fix it.

    The nature of intelligence is such an unknown that it's not really possible for us to just say "Well it's obviously quantum" because we don't know. To do so is to effectively appeal to an unknown force.
     
  21. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    Hehe.. I like your footnote.

    Yeah, I agree wholly, as you can likely see from my preceding comment. Note that I didn't imply that AI research is pointless or that it should stop, even in the post to which you objected. I merely stated that I agree with Penrose. I'm pretty sure AI research will not produce conscious machines until it gets a bone thrown to it from somewhere else. I don't think that means the effort is wasted.
     
  22. kmguru Staff Member

    Messages:
    11,757
    It is not meant to. The hope is that a brute-force machine running on a Bayesian or similar network will suddenly become self-aware. If that does not happen, we are back to square one. Another thought is that we can have a machine that improves and designs another machine, and so on, until several generations down the road the machine becomes self-aware.

    I think such is possible when machines start interacting with humans for a prolonged period of time with some serious adaptive algorithms. Like that Robin Williams movie....
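    The building block of such a "Bayesian network" machine is just Bayes' rule applied over and over. A minimal sketch, with toy numbers assumed for illustration:

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """One Bayes-rule update: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# A machine adapting to human interaction would chain updates like this:
# prior belief 0.5, evidence with P(E|H) = 0.9 and P(E) = 0.6
# yields a posterior of 0.75.
posterior = bayes_update(0.5, 0.9, 0.6)
```

    Whether stacking enough of these ever amounts to self-awareness is, of course, exactly the open question.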
     
  23. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    Of course Penrose could be wrong. I'm sure he's aware of it. Have you read, or at least skimmed, "The Emperor's New Mind"? I read most of it a while back. Seems like a pretty strong case.

    Regardless, I think that when you're trying to study something about which little is really known, any rational approach should be taken if you have the resources to throw at it... so I think the CS approach to AI is a good route, but again... I think it probably won't work, because of how I think consciousness works, which I've been trying to figure out how to explain since I've been here.

    Of course, it's most likely that I'm wrong... but that doesn't mean that I think I am.

     