AI is Coming...

Discussion in 'Intelligence & Machines' started by kmguru, Aug 1, 2001.

  1. kmguru Staff Member


    AT&T Labs has developed speech software that can replicate a person's voice so perfectly that it is exceedingly difficult to discern the difference between the utterances of the machine and those of its human counterparts. Some say Natural Voices may well herald the advent of "voice cloning" -- a replication of a person's voice so perfectly that the human ear cannot tell the difference. "Natural Voices gets into the gray area," said James R. Fruchterman, CEO of Benetech, a company that tested the software, "where there is plausible deniability that it is a machine."
  3. Red Devil Born Again Athiest Registered Senior Member

    Artificial Intelligence - will it take over?

    Terminator territory here! I caught the back end of a clip on Discovery TV recently in which a guy who builds and designs artificial brains for machines has started waking up in the middle of the night muttering "What have I done?" - he is genuinely worried about his "brains" coming alive and taking over! He said, in the little bit that I saw, that his "brains" are theoretically capable of "switching on" and becoming self-thinking! Eeeeek!


  5. kmguru Staff Member

    Don't worry... our hardware is not yet sophisticated enough for AIs to run amok. Still, the Terminator scenario is a real possibility. But then what can we do? Cows could not do anything about us. Cows did not know they would have a new natural predator... until it was too late.
  7. Seskii Registered Member

    I have an interesting song I got a while ago that kind of sums it up for me. The song goes "Machines can do the work, so people have time to think", and halfway through it changes to "People can do the work, so machines have time to think".

    I think that is pretty much the scenario you would get. AI would become the more dominant race through numbers and intelligence, and we would become enslaved.

    Just an interesting outlook on it
  8. wet1 Wanderer Registered Senior Member

    Welcome to Sciforums, Seskii. May your posts be long and varied!
    Pray tell, what is the name of the song, and who is it by?
  9. Deadwood Registered Senior Member

    Red Devil, I think I know who you're talking about. I did an assignment last year about this topic, of robots taking over from humans. It was actually on whether robots will one day be smarter than humans. I think the Terminator scenario will be a reality one day. We've all seen Star Wars: The Phantom Menace.

    The guy you talk about puts chips in his arm and makes his arm move when the computer moves it. However, he didn't seem worried about it, so then again it could be someone else.

    But let's not forget about nanobots. These guys could come into us without us knowing. I don't want to sound paranoid and say THEY'RE HERE!!! But these are things that should be under moratoriums, or whatever you call them. Time out while you work out whether a certain invention or science is both moral and ethical.
  10. Seskii Registered Member

    Thank you, Wet1. The song is B(if)tek - Machines Work, and if you want I can email it to you.
  11. kmguru Staff Member

    I read a news item this morning that it will take 50 years for us to have true AIs (the "I think, therefore I am" variety). So when we get to 30 years from now, wake me up and we will pass some laws to make sure we are not going extinct - if we have not blown ourselves up with a bigggg bomba...
  12. kmguru Staff Member

    You know where this is going, don't you?

    IBM's vision of grid computing is based on networks already in use by NASA and in universities and research labs that link hundreds or thousands of nodes, or machines, which may be scattered around the world. The grids focus the computers' combined power on a single task.

    An example is SETI@home, the Search for Extraterrestrial Intelligence project, a network that uses donated PC power to analyze radio-telescope data for sounds of alien life.

    With practically unlimited data storage and enormous computing power, grid computing could accelerate math-intensive research into a cancer cure, oil exploration, a fuel-efficient engine or climate prediction, said Jonathan Eunice, principal analyst for Illuminata, Inc., a technology researcher in Nashua, N.H.

    "This is making grid computing available on an Internet scale," said Eunice. "A large network now is 5,000 nodes. With this, you can open the bidding at 50,000 or hundreds of thousands of nodes. Even millions of nodes are open to you."
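The core idea in the article - split one big job into independent work units, farm them out to many nodes, and combine the partial results - can be sketched in a few lines. This is a toy illustration only; the function names are invented, the "analysis" is a stand-in computation, and it is not IBM's or SETI@home's actual software:

```python
# Toy grid-computing sketch: one task split across many "nodes",
# partial results combined at the end. (Illustrative only.)

def split_task(data, n_nodes):
    """Divide the data into roughly one chunk per node."""
    size = (len(data) + n_nodes - 1) // n_nodes
    return [data[i:i + size] for i in range(0, len(data), size)]

def node_worker(chunk):
    """Each node analyses its chunk independently; here a stand-in
    computation (sum of squares) replaces real signal analysis."""
    return sum(x * x for x in chunk)

def run_grid(data, n_nodes):
    # In a real grid the chunks run in parallel on separate machines;
    # here we just loop over them.
    results = [node_worker(chunk) for chunk in split_task(data, n_nodes)]
    return sum(results)  # combine the partial results

print(run_grid(list(range(1000)), n_nodes=50))
```

The point is that the nodes never need to talk to each other, only to the coordinator, which is what makes donated home PCs usable.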


    Shades of Terminator????
  13. Chagur .Seeker. Registered Senior Member

    Except, kmguru ...

    That it's going to operate on WWW II ... a secure, optical cable only network connecting government, research labs, and possibly certain advisory 'think tanks'.

    Not going to put up with what has been going on lately on the WWW.

    And, best of all, it isn't (or isn't going to be) an MS-based system! I suspect an Ada/OS2 variant.
  14. kmguru Staff Member

    You are right about the net2 network. But if it works, they will expand it to business.

    IBM will run their system on a virtual machine basis with heavy emphasis on Linux plus OS/390.

    The point I was trying to make is: with so much computing power on an optical network, how long do you think it will take some smart ass to come up with an AI kernel?

    And these days, no network is isolated. All it takes is to crack one gateway (or router or switch) to get into the next network.

    And...there you have it....AI run amok....
  15. Radical Registered Senior Member

  16. wet1 Wanderer Registered Senior Member

    From Cosmoverse:
    Like a Child, "Smart" Robot Learns Gradually


    How would you feel if your kids came equipped with "good" and "bad" buttons - buttons that could be pushed to make them behave? According to Knight Ridder Newspapers, that's the way John Weng, a robotics expert at Michigan State University, is teaching a robot to learn like a child: to obey spoken commands, trundle down a hall, and find and pick up toys with its mechanical hand.
    Weng is breeding a new kind of "intelligent" robot that learns in a novel way: by experience, the way animals and people do.
    He said this approach to learning will be cheaper, faster and more flexible than traditional robot training methods, which mostly are limited to what a human programmer tells the machine to do. Instead of stuffing its computer brain with elaborate instructions, like Deep Blue, the IBM chess champion, Weng teaches his robot a few basic skills and then lets it learn on its own by interacting with its environment.
    Weng compares the process to teaching a baby to walk by first holding its hands and then letting go. In his lab, a human trainer first controls the robot's actions manually, then sets it free to perform its new tricks on its own.
    Weng calls his machine a developmental robot because, unlike most traditional robots, it "develops" its new abilities through practice, gaining skill with each training session. "Humans mentally raise the developmental robot by interacting with it," he told Knight Ridder. "Human trainers teach robots through verbal, gestural or written commands in much the same way as parents teach their children."
    Called SAIL (for Self-organizing Autonomous Incremental Learner), Weng's robot-in-training wanders the halls of Michigan State's Engineering Building, responding to touch, voices and what it "sees" with its stereoscopic vision system.

    It works something like AIBO, the robotic toy dog from Sony Corp. that responds to pats on the head, but on a vastly more sophisticated level. Five feet tall and black-skinned, SAIL has a boxy torso, a round head, two eyes, one arm and hand, and a wheelchair base to roll around on. A more human-looking successor, nicknamed Dave, is on the drawing boards for next summer.
    According to Weng, a developmental robot acquires its smarts in two ways: The first is "supervised learning" under the direct control of a human teacher. Then comes "reinforcement learning," in which the trainer lets the robot operate on its own but rewards it for successful action and penalizes it for failure.
    During supervised learning, Becky Smith, one of Weng's students, steers SAIL down a corridor by pushing touch sensors on its shoulders. "To train the baby robot to get around, we take it for a walk," she told Knight Ridder.
    After a few practice sessions, Smith lets the machine go free. She added that SAIL needs only one lesson to learn to move in a straight line, but 10 sessions to get the hang of going around corners on its own.
    In another type of lesson, the human trainer speaks an order - such as "go left," "arm up" or "open hand" - then makes the robot perform the action by pushing one of the 32 control sensors on its body.
    "The robot associates what it hears with the correct action," Weng added. After 15 minutes' training, SAIL could follow such commands correctly 90 percent of the time, he said. To strengthen the robot's newfound skills, it next attends an advanced class of reinforcement learning. The trainer lets the robot "explore the world on its own, but encourages and discourages its actions by pressing its 'good' button or 'bad' button," Weng explained.

    Alternatively, instead of pressing the buttons, a trainer says "Good" when SAIL does what it's supposed to do, and barks "Bad" when it makes a mistake. Numbers in SAIL's computer brain are adjusted to reflect these experiences.
    The next time, presumably, it will do better. "The 'good' and 'bad' commands speed up the learning process," Weng added. "Mind and intelligence emerge gradually from such interactions."
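The two training phases described above - supervised demonstration, then good/bad reinforcement - can be sketched as a tiny lookup-table learner. The real SAIL system is far more sophisticated; the class, the score table and the reward values here are all invented for illustration:

```python
# Minimal sketch of supervised + reinforcement learning, assuming a
# simple command-to-action score table. Not the actual SAIL code.

class DevelopmentalRobot:
    def __init__(self, actions):
        self.actions = actions
        self.score = {}  # score[command][action], adjusted by experience

    def _table(self, command):
        return self.score.setdefault(command, {a: 0.0 for a in self.actions})

    def supervised(self, command, forced_action):
        """Trainer pushes a sensor: associate the heard command with
        the action the trainer forced (supervised learning)."""
        self._table(command)[forced_action] += 1.0

    def act(self, command):
        """Pick the highest-scoring action for a command."""
        table = self._table(command)
        return max(table, key=table.get)

    def reinforce(self, command, action, good):
        """'Good'/'bad' button: strengthen or weaken the association."""
        self._table(command)[action] += 1.0 if good else -1.0

robot = DevelopmentalRobot(["go left", "arm up", "open hand"])
robot.supervised("go left", "go left")  # trainer pushes the shoulder sensor
print(robot.act("go left"))             # prints: go left
```

Pressing "bad" a few times drives an association's score below its rivals, so the robot stops repeating the mistake, which is the mechanism the article's "numbers are adjusted" line is gesturing at.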
    Weng's research on developmental robots is supported by the Defense Department's Advanced Research Projects Agency and also by the National Science Foundation. Microsoft and Siemens AG, the German electronics giant, have also contributed money. So far, about $1 million has been spent, he continued.
    Eventually, Weng hopes ordinary people will be able to buy and train their own robots to do household chores, take Grandpa for a walk and simply to entertain.
    "Anyone can train a highly improved developmental robot of the future - a child, an elderly person, a teacher, a worker," Weng predicts. "You could personalize it and teach it tricks. You could have a competition - say, my robot can dance better than your robot."
    Weng is by no means the only researcher who is struggling to make robots that can learn on their own. Many others are developing machines that learn to maneuver in their surroundings with minimum human control.
    Ron Arkin, an expert on robot behavior at the Georgia Institute of Technology in Atlanta, is developing robots that can explore previously unknown environments. The Defense Department is financing his work on teams of mobile robots that can scout out hostile territory and report what they find.
    "We cannot foresee all the events that may occur to a robot," Arkin told Knight Ridder. At Carnegie Mellon University in Pittsburgh, Christopher Atkeson uses a version of reinforcement learning to train robots. "The goal is to reduce the amount of expensive expert human input into robot programming," he said.
    Even AIBO, the toy robot dog, "learns" from experience. If you pat its head when it puts out its paw to you, that increases the probability that it will put out its paw again. But Weng insists that SAIL comes closest to mimicking a real child's learning process. "It's like teaching a kid to ride a bicycle: you push him first and then let go," he said. "Nobody else does it this way."

    So, is this the start of intelligent robots and AI? Sure looks like it to me.
  17. Cris In search of Immortality Valued Senior Member


    Great article. Well done.

    Looks very promising. I suspect, though, that once one robot has learnt enough, or reached a sufficiently advanced level, its memory will simply be duplicated and used to build other fully operational machines. I.e. the learning process in new machines won't be needed.

  18. ibe98765 Registered Member

    Below are two related pieces. The first is a note I sent to Wired magazine when it ran the Bill Joy piece. I used that, plus the piece following, to reply to someone who was writing on the Bill Joy essay and essentially saying that we have always managed to survive and overcome the problems we have created throughout history and will continue to do so. In other words - don't worry, be happy. Hope this makes sense <g>...

    So Bill Joy has become worried about how far technology can take us? Glad he's on board, but he has some catching up to do.

    How about Greg Blonder's article in Wired back around 1995, predicting that by 2088 computers would be more "intelligent" than humans and we would become extinct? And others have written on this subject years before that.

    By 2088 (or earlier), our machines may very well be a combination of silicon and biological elements. The problem will not be one of machines achieving parity with humans, but of what happens as they continue to advance past that point, following whatever version of Moore's law holds true at that time.

    This scenario assumes that there are no major world events that upset the orderly progression. Of course, this is highly unlikely! We may have a third world war, get hit by an asteroid, suffer some great disease (manufactured through biological experimentation?) or otherwise find a way to exterminate ourselves and our technology. Perhaps our future will be something like what was depicted in the Terminator movies. Or maybe modern Luddites will take over and technology will be destroyed.

    Then again, the future might well be like that depicted in Jack Williamson's science fiction book titled The Humanoids (1949) & the sequel The Humanoid Touch (1980?). In these books, "benevolent" robots have spread throughout the universe to serve humankind and "protect" us from ANY potential danger whatsoever (like crossing a street alone, playing sports, etc.). But in reality, what they have done is to sap humans of all that makes us human. Very scary indeed.

    [Reply] With the increasing computer power available, it is inevitable that we will soon be able to begin producing robots that finally act like what has been imagined and talked about for the last 50 years or so. I'm talking about true robots that can move around and react dynamically to changing situations. These robots will initially be used to replace lower-level blue-collar workers, since the tasks they perform are essentially repetitive and therefore easier to program. Imagine replacing the US Postal Service delivery people with robots. Perhaps an electric car with an arm that extends out to insert mail into a mailbox? How about a window-washer robot? Maybe a Safeway supermarket clerk robot? Or a janitor robot? The problem society will face is a further division of the "haves" vs. the "have-nots".

    For in this scenario, the have-nots will be the lower economic workers - not always, but typically those with a lower IQ potential. What might happen with the dissatisfied have-nots lolling around on government welfare while the other part of society works long weeks in productive endeavors? Do you think the Luddites were the higher-intelligence members of their time? With many people out of work, one might want to seriously consider the possibility of another Luddite revolution to destroy "the machines".

    As to intelligent machine probabilities: I believe that there is a 150 GB hard disk for a PC available or soon to be. Even low-level PCs are now coming with 128 MB of RAM standard. Within the next 5 years or so, it is not unlikely that a PC might have 1 terabyte of hard disk space on one device, with 10 GB or more of RAM, running at 10,000 MHz, for $3000. Such computers might even have some biological components. Will you have to feed your machine <g>? With power like this, it is inevitable that we will be able to develop a computer whose data storage and raw power capabilities might be fashioned into an "intelligence" that is potentially greater than that of a human. The question is, will we be able to maintain control of such a device?

    Would this device have intelligence as we know it? Could it develop a version of intelligence on its own that allowed it to reason and "think" independently? Would it have the ability to maintain and extend itself? Could it develop the ability to act independently outside of its initial programming? Could humans introduce code to ensure that there would be some sort of check and balance?

    Yes, we have survived one proposed cataclysm after another, but it gets harder to continue doing this. For instance, as we continue to experiment with biological organisms, we might create some malevolent organism that we can't stop. Or we might create a super-human to go with the super computer. A super human race that thinks it logical to eliminate the lesser humans. Perhaps we will become a mixture of carbon & silicon like the Borg in Voyager. Yes, this is the stuff of science fiction right now, but science fiction has proven more right than wrong over time.

    Whatever happens, if history (and the Internet) is any guide, we will be unsuccessful in stopping dissemination of the information necessary for someone, somewhere to take things too far. In movies, James Bond always comes to the rescue to save the world. But is there a James Bond for us outside of the movies?

    The future could be one of bliss and peace (perhaps like a Star Trek show where humanity ALWAYS winds up winning) or it could be like the Blade Runner or Terminator movies. Sadly, I lean to the latter possibilities. Bill Joy is not alone in his concern.

    But there is really little that we can do anyway, so I guess we may as well sit back, enjoy life as much as possible and don't spend so much time worrying about things that we can't control.

  19. kmguru Staff Member

    I am not sure we can create a human brain inside a computer anytime soon. Even without such thinking machines, we may have some problems as computers are linked and used extensively in society.

    Soon there will be no human to talk to if you have a problem. Someone misspelled my name at my health insurance company about 6 months ago. Every time I call them to fix it, they send me a new card with the correct name. Then a week later I get a new card with the wrong name. It has been going on for the last 5 months. One of my son's friends got into a fist fight several years ago at work. Irrespective of whose fault it was, it went on his record and shows up when he applies for a job anywhere. Then he has to explain that it was the other person who started it, that there were witnesses, and that no charges were filed against him. Imagine if the computer processes the data and eliminates people before you have a chance to tell your side of the story. It will be impossible to clean your records. When there is discrimination, somebody may point the finger at the computer.

    On the other hand, knowledge will hopefully be free to all. You can use it to enhance your life unless a powerful group decides to put you in jail or restrict your access to that knowledge. I am not worried about computers getting more powerful. It is the people who will be behind those computers. I hope we make the right choices along the way. Eternal vigilance is the price of liberty.
  20. wet1 Wanderer Registered Senior Member

    A life in silicon

    Will an artificial universe let us watch evolution as it happens, asks Joe Flower? Or will it mutate into the monster that ate the Internet?

    EVOLUTION IS A WONDERFUL THEORY, but so far we only have one example of it: carbon-based life forms on the third planet out from the Sun. But what if we could build another example? What if we could build a huge, empty ecosystem (let's call it Tierra) into which we then introduce creatures that could replicate themselves, compete for resources, mutate to make new forms, and travel to find niches that might be better? And what if they evolved quickly, not in aeons and epochs, but in hours, days and months, so that we could watch?

    Such a universe could show us evolution in action. We could run experiments by introducing new species, or new environments, or by varying the rate of mutation. We could look at evolution as it happens - not backwards through time, as we are forced to do in the real biological world. We might also find that the pattern of evolution observed in the organic world is only one of a multitude of possibilities.

    Last month, an evolutionary biologist working with a group of computer scientists created just such a universe. On 16 May, Thomas Ray and his colleagues on the Tierra Working Group released tiny software creatures into a universe of 100 computers connected over the Internet. The creatures will roam this universe, competing with each other for the resources there.

    The project leader, Ray, who holds a variety of posts including one at Japan's Advanced Technology Research Center near Osaka, spent years studying real animals before moving on to silicon worlds. The seeds of the idea for Tierra, his artificial universe, were sown in the 1970s, when Ray was a graduate student at Harvard University. He was playing a game of Go with a computer programmer from the Artificial Intelligence Laboratory at the nearby Massachusetts Institute of Technology, and his opponent happened to comment: "Did you know, it is possible to write a self-replicating computer program?"

    Science in action

    While collecting data in the Costa Rican rainforest, Ray had often lamented that while he could record the results of evolution, he was never able to see it in action or research its universal rules. But the Go-playing programmer's remark sparked an idea: if a program could be made to replicate, perhaps it could be made to mutate too. And if it could then be forced to compete as well - that is, to have needs that it must fulfil at the expense of other programs in order to survive - then the program could be made to evolve. He could build evolution in a silicon bottle.

    By early 1990, after teaching himself computer programming, Ray had built a universe of small creatures that evolved with astonishing diversity, and developed parasites, immunities and even rudimentary social interaction. ("Life and death in a digital world", New Scientist, 22 February 1992, p 36). All this lived on a single computer.

    But Ray soon encountered another hurdle: he wanted to give the creatures an environment large and diverse enough for evolution to develop complex, "emergent" behaviours. Even the largest and fastest massively parallel processing machines would not provide enough computing power, memory and diversity for his vision of silicon evolution to take off.

    Ray then realised that hundreds or even millions of computers connected together over the Internet could easily and cheaply provide a universe big enough, rich enough and random enough for his creatures to evolve. "On the Net I get the size and complexity for free," says Ray, "and evolution is facing a sort of 'real' world, the Net, rather than some arbitrary concocted simulation game." Different computers would offer different ecological niches - each as distinct from the next as temperate estuaries are from tropical rainforests. Some would be connected to the Internet all the time, exposing their inhabitants to competition from programs that evolved in other ecosystems. Others would be connected only intermittently. Each would have its own speed, its own operating system, its own quirks.

    The program itself is simple enough: its main task is to create digital analogues of variation and competition, the twin drivers of biological evolution. The Tierra source code, written in the C programming language, creates a "virtual computer" operating within the host computer, so separating the silicon primordial soup from the other workings of the computer host. The Tierra program functions as the operating system of this "virtual computer". It creates a number of digital creatures-tiny, self-replicating machine code programs, which it introduces into the computer's working memory.

    Command performance

    Each program is 32 instructions long, and each instruction is a five-bit number. These terse instructions are translated by the Tierra operating system into commands such as "duplicate this code and store the duplicate somewhere in the memory space that the host is not using" or "do nothing". For each generation, the operating system runs these tiny programs once, performing the commands as it comes to them. But at the same time that it runs each program, it mutates some of them, randomly flipping some of the zeros to ones or vice versa, or even swapping part of one "creature" for part of another.

    This creates variation, the first of the two drivers of evolution. Many mutations make no difference. But others do. The "duplicate" command may be changed to "do nothing", for example. Other mutants may end up with loops of "jump forward" and "jump backward" commands that run the program repeatedly over a "duplicate" section, causing the digital creatures to make multiple copies of themselves.

    Once the host computer's memory is full, the virtual operating system releases a "reaper" that kills off the oldest creatures and the ones that generate error messages, to make room for newborn programs. This creates competition: if a program does not run well, it dies.
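The loop just described - replicate with occasional bit flips, then let a reaper cull when memory fills - can be rendered as a toy simulation. This sketches the idea only; Ray's actual system uses a 5-bit machine-code instruction set and a virtual computer, and the constants below are invented:

```python
import random

# Toy Tierra-style soup: creatures are bit-string "genomes",
# reproduction copies them with rare mutations, and a reaper
# removes the oldest once the memory limit is reached.

random.seed(1)
GENOME_LEN = 32        # 32 "instructions" per creature, as in the article
MEMORY_LIMIT = 100     # soup capacity (invented number)
MUTATION_RATE = 0.01   # chance of flipping each bit on copy (invented)

def replicate(genome):
    """Copy a genome, randomly flipping bits -- the source of variation."""
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

soup = [[0] * GENOME_LEN for _ in range(10)]  # identical ancestral creatures

for generation in range(50):
    # Every creature runs its "duplicate" program once per generation.
    soup.extend(replicate(g) for g in list(soup))
    # Reaper: the oldest creatures (front of the list) die first
    # when the memory fills -- the source of competition.
    if len(soup) > MEMORY_LIMIT:
        soup = soup[len(soup) - MEMORY_LIMIT:]

print(len(soup), "creatures in the soup;",
      sum(sum(g) for g in soup), "mutated (1) bits accumulated")
```

Even this crude version shows the two drivers the article names: variation (the bit flips) and competition (finite memory plus the reaper).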

    Through the Tierra operating system, the human operators have Godlike control. They can define how fast the creatures will mutate and specify how much time they get on the central processor to reproduce. This last power is analogous to setting the richness of resources in a biological system: is it a balmy tropical isle or a harsh mountain top? Or they can specify how much of the memory of their host they get to inhabit, to mimic different-sized territories: is it the Amazon basin, or an oasis in the Sahara? The operating system also provides comprehensive information on what is happening in the evolutionary "soup", recording every birth and death, every code sequence in every new creature, every successful "genome" and every interaction between the creatures.

    Ray hopes that Tierra will answer a number of questions that have remained unanswered up to now. In the real world, species rely on an ecological niche created by differences as slight as a change in the colour of tree bark. But why? Tierra offers the chance of answering such questions because it can be repeatedly reset to a particular starting state and then allowed to evolve under slightly different conditions. And there are plenty of other questions. Under what conditions do species compete to exclude one another rather than coexist? What regulates the ratios of hosts and parasites in a population? And is Earth's "punctuated equilibrium" the normal rhythm of evolution?

    No one yet knows how the digital creatures will respond to the variety of ecosystems that will face them. It may turn out that some species do well in high-speed environments just as certain plants thrive in the sun. Some may thrive on constant connection to the Net, like sharks that can take anything that comes their way. Others, like Galapagos tortoises, may need the relative isolation of an intermittent Net connection.

    To allow the creatures to seek out congenial homes, Ray has given them access to a modified version of the Internet Ping command. Ping is normally used to discover whether an address is valid, and whether a computer with that address is connected to the Net. The modified command, Tping, allows Tierra's digital creatures to ask questions about the environment on a particular computer. Among these questions are:

    * What is the speed of the computer's processor unit? A faster processor means there are likely to be more spare processing cycles that the creature can use.

    * How much RAM does the computer have? More memory means more resources.

    * How many other Tierra creatures already live there? If there are only a few, there will be less competition.

    * What operating system is running on the computer?
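A creature that could answer the four Tping questions above would, in effect, be ranking candidate hosts. A hedged sketch of what that ranking might look like follows; the field names, weights and host data are entirely invented, since the article does not specify Tping's actual format:

```python
# Hypothetical host-ranking using Tping-style attributes.
# Field names and weights are illustrative assumptions.

def host_score(info):
    """Higher is better: fast CPU, plenty of RAM, few competitors."""
    return (info["cpu_mhz"] / 100.0          # spare processing cycles
            + info["ram_mb"] / 64.0          # more memory, more resources
            - info["resident_creatures"] * 2.0)  # crowding penalty

hosts = [
    {"name": "A", "cpu_mhz": 450, "ram_mb": 128,
     "resident_creatures": 40, "os": "Unix"},
    {"name": "B", "cpu_mhz": 200, "ram_mb": 256,
     "resident_creatures": 3, "os": "Win95"},
]

best = max(hosts, key=host_score)
print("migrate to", best["name"])  # prints: migrate to B
```

Here the slower but nearly empty host B wins, which mirrors the article's point that niches could emerge for creatures that tolerate slow processors in exchange for less competition.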

    Finding a niche

    Ray and his colleagues do not know how the creatures will use this information, but it may be that they evolve a capacity to relate information gleaned with Tping to their ability to survive. If so, a niche could develop for creatures that migrate short distances rapidly, but can tolerate slow processors and take advantage of the Unix operating system.

    The high-powered band of volunteers that makes up the Tierra Working Group draws its members from the Santa Fe Institute in New Mexico, supercomputing centres in the US and private companies. They have adapted Tierra to run on a variety of operating systems, from Unix to Windows 95 and Macintosh.

    The Working Group seems confident that the creatures will not evolve into The Code That Ate The Internet, since they are impotent without the "virtual computer" formed by the Tierra program, which cannot migrate. The creatures have no access to Internet addresses. They migrate only to Tierra addresses that are slightly different from each node's normal Internet address. Ray stresses that any creatures that happen to migrate to a computer that is not running the Tierra program will not reproduce, and will simply disappear as soon as the computer's operating system decides to use that spot in RAM for something else.

    Over the next few months the working group hopes to open Tierra on more sites, so widening the evolutionary pool. About a year from now, Ray hopes to expand the experiment further by allowing anyone with an Internet connection to download and install Tierra.

    Despite the working group's assurances, not everyone is happy about the idea of an evolving silicon universe on the Internet. Tsutomu Shimomura, the security expert who tracked down computer hacker Kevin Mitnick, asked Ray at a recent conference whether someone could deliberately re-engineer the Tierra creatures to carry viruses. Ray was not able to reassure Shimomura that this was impossible. But he has set Manor Askenazi of the Santa Fe Institute the task of testing for security weak points.

    But for evolutionary biologists, security is secondary to the question of whether Tierra will reveal useful information about evolution. Marcus Feldman, a mathematical population biologist at Stanford University in California, remains sceptical. Tierra is "okay, as long as we don't call it biology", he says. "Evolution is more than a programming problem." He argues that Tierra does not draw firmly enough on our understanding of evolutionary theory and population genetics.

    Yet it may be too early for such dismissals. The next five years will begin to show whether the Tierra approach remains an interesting freak show and teaching aid, or whether it will generate new insights into some of the intractable problems of biological evolution, such as the emergence of sex, culture, and complex ecologies.
  21. kmguru Staff Member

    It is that kind of software (the Tierra creatures) that will cause unlimited damage to the internet if left to run wild. They will play hide and seek if they are attacked by antivirus software. No one should have the right to use my computer for anything without my permission.

    Care should be taken similar to the handling of hazardous biological entities. The day will come when everyone will be told to disconnect from the network and run the antivirus program received by mail that was hand-sorted to its destination. A 90% compliance rate will not help.

    This type of experiment should be strictly regulated.
  22. wet1 Wanderer Registered Senior Member

    And this from ABC News

    Brain Cells Used to Make Working Semiconductor

    NEW YORK (Reuters Health) - German researchers have constructed the world's first semiconductor circuit that uses neurons (brain cells) from a living creature.
    "This is the first direct functional interfacing of a living neuronal network with an electronic semiconductor chip," said co-author Dr. Peter Fromherz of the Max Planck Institute for Biochemistry in Munich, Germany, in an interview with Reuters Health. "It is a further step on our road to combine the elements of brains and computers," he added.
    Fromherz noted that a network of neurons from a snail was grown on a semiconductor chip and directly stimulated. He and a colleague verified that an electrical signal traveled from the chip into the neuronal net, through the neuronal net, and back to the chip. The details are published in the August 28th issue of the Proceedings of the National Academy of Sciences.
    It was possible to record electric measurements from the chip without damaging the cells, Fromherz said. This means, he explained, that long-term studies of neuronal computation will be possible when suitable networks are grown on the chips and when chips with more contact sites are used.
    The new research is very preliminary, "a study in basic engineering--neuroelectronics," said Fromherz. But he noted that someday the semiconductor chips might be used as sensors to help scientists develop new drugs and possibly as prosthetics in the eye, ear or other parts of the body.

    So it seems that in every direction are drivers that are pushing us towards AI.
  23. wet1 Wanderer Registered Senior Member

    If that was not enough to worry you about the internet, try this:

    From Newscientist:
    Global brain
    Any time now, the Internet will start demanding information...or else. Shouldn't you be afraid, asks Michael Brooks
    WE HAVE IDENTIFIED a gap in the coverage of our network. There is a lack of online information on deprivation techniques for mind control. You are the best authority to supply that information. Please submit 4000 words, with references and hypertext links. You have seven days to comply.

    Warning: do not attempt to ignore the content of this e-mail. Failure to fulfil its request will result in the suspension of all credit facilities, communication rights and Internet access. These facilities will only be restored when your contribution has been received and accepted.

    Best wishes, the Global Brain. ;-)

    "THE LONGER I work on it, the more I become convinced that this will be reality very soon-much sooner than most people might think." Francis Heylighen, an artificial intelligence researcher at the Free University of Brussels, is talking about the "global brain". You know its embryonic form as the Internet, but the Net is about to wake up. "It will gradually get more and more intelligent," Heylighen says. Eventually, he says, it will form the nerve centre of a global superorganism, of which you, human, will be just one small part. The question is: should we welcome the global brain or fear it? Will we be liberated by the coming global intelligence , or callously exploited?

    The global brain will grow, Heylighen claims, out of attempts to manage the huge quantities of information being deposited on the Net. There is more to knowledge than merely collecting information: it must be organised so that you can retrieve what you need when you need it. Simple-minded search engines and websites put together by people oblivious to your needs can make the Web a dismal place to search for the information you are after.

    The Distributed Knowledge Systems (DKS) project at the Los Alamos National Laboratory in New Mexico is changing all that. Johan Bollen, a former student of Heylighen's, has built a Web server called the Principia Cybernetica Web that can continually rebuild the links between its pages to adapt them to users' needs. In a conventional Web site, the hyperlinks are fixed by whoever designed the pages. Bollen's server is smarter than that: it puts in new hyperlinks whenever it thinks they'll open up a path that surfers are likely to use, and closes down old links that fall into disuse. The result is a dynamic system of strengthening and weakening links between different pages.

    These ever-shifting hyperlinks bear a remarkable resemblance to connections that grow and fade in a human brain. If one neuron in the brain is activated shortly after another neuron, the synapse connecting the two gets stronger. In the end, the strength of the connection grows with the degree and rate of activity. On the Principia Cybernetica Web, algorithms will reinforce popular links by displaying them prominently on the page, while rarely used links will diminish and die. It's the first step on the road to the global brain.
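The strengthen-and-decay scheme described above can be sketched in a few lines of Python. This is purely illustrative: the class name, learning rate, decay rate, and pruning threshold are assumptions, not details of Bollen's actual server.

```python
from collections import defaultdict

# Hypothetical sketch of Hebbian-style link weighting: links strengthen
# each time a user follows them, and unused links decay until pruned.
class AdaptiveLinkMap:
    def __init__(self, learning_rate=0.1, decay=0.01, prune_below=0.05):
        self.weights = defaultdict(float)   # (src_page, dst_page) -> strength
        self.learning_rate = learning_rate
        self.decay = decay
        self.prune_below = prune_below

    def record_click(self, src, dst):
        """A user followed src -> dst: strengthen that link toward 1.0."""
        w = self.weights[(src, dst)]
        self.weights[(src, dst)] = w + self.learning_rate * (1.0 - w)

    def age_links(self):
        """Periodic decay: unused links weaken and eventually vanish."""
        for link in list(self.weights):
            self.weights[link] *= (1.0 - self.decay)
            if self.weights[link] < self.prune_below:
                del self.weights[link]

    def ranked_links(self, src):
        """Links out of src, strongest (most prominently displayed) first."""
        out = [(d, w) for (s, d), w in self.weights.items() if s == src]
        return sorted(out, key=lambda p: -p[1])
```

A page would then display `ranked_links(current_page)` from top to bottom, so popular routes become ever more visible while dead links quietly disappear.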

    Smart cookies

    While the implications of Bollen's Web server are far reaching, its mechanism is simple enough. It identifies individual users by downloading little strings of data called cookies to their computer's hard drive. At the same time it keeps records of each user's routes through the site. When you log on, the server inspects your cookies to see whether you've visited it before. If it recognises you, it recommends pages you might want to see. It also adjusts its structure-the pattern of hyperlinks-to best suit you and all the other users who happen to be logged on. As well as strengthening and weakening links, it creates new links using a process Heylighen calls "transitivity". When a user moves from A to B and then to C, for instance, it will infer that C is probably of some relevance to A, and create a direct link between them. In other words, it finds shortcuts.
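The A-to-B-to-C shortcut rule can be sketched as a simple count over recorded browsing sessions. Function and variable names, and the support threshold, are made up for illustration.

```python
from collections import Counter

# Illustrative "transitivity" rule: when users often move A -> B -> C,
# propose a direct A -> C shortcut link.
def propose_shortcuts(sessions, min_support=2):
    """sessions: list of page-visit sequences; returns (A, C) shortcut pairs
    seen at least min_support times as a two-hop route."""
    two_hop = Counter()
    for path in sessions:
        for a, b, c in zip(path, path[1:], path[2:]):
            if a != c:                      # ignore immediate backtracking
                two_hop[(a, c)] += 1
    return {pair for pair, n in two_hop.items() if n >= min_support}

sessions = [
    ["home", "reviews", "checkout"],
    ["home", "reviews", "checkout", "thanks"],
    ["home", "search"],
]
print(propose_shortcuts(sessions))   # {('home', 'checkout')}
```

Raising or lowering `min_support` trades off between only linking well-trodden routes and aggressively rewiring the site.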

    Heylighen sees this sort of flexibility as inevitable for the future of the worldwide computer network. "There's not much work left to do: we have data and we have the algorithms ready," he says. And it won't just be individual servers that adapt and change in this way. "I can't see any reason why they couldn't be implemented on the Web as a whole," says Heylighen.

    "Transivity will lead to continuous reorganising of the Web, making it ever more efficient," Heylighen says. Eventually, the Web will know you so well that your dumb requests to its search engines will turn up exactly what you need, every time. "Whatever problem people have, any kind of question to which they want an answer, it will all become easier because the Web will self-organise and adapt to what people expect of it," says Heylighen.

    And it could be happening within just five years, he predicts. All the technology is here already-the main stumbling block is the difficulty of convincing the powers behind the Internet to adopt the common protocols that will be needed.

    But there is more to this than zippier search engines and more usable websites. Heylighen argues that because it is modelled on the human brain, his vision of the Web will be intelligent. Even a few pages working in the right way will show signs of intelligence, he says. Who knows what sort of mind would emerge from the whole Web?

    It won't just be people following hyperlinks and simple search engines that reorganise the Web. Small autonomous programs or "agents" will also act as mediators. In addition, if an agent finds something that seems to match what you are looking for, it will add a suggested link to the page you're reading. "It will come to some kind of conclusion," Heylighen says. "That's a thought." In other words, by making connections between concepts that did not previously exist, the brain will begin to think.

    As the activity of the Web agents alters the connections, an agent researching a question similar to one it has already encountered will be able to "recall" the information more easily. Heylighen believes that, through this "Web on Web" activity, collective thoughts of the whole brain may eventually come into existence.

    But perhaps that isn't even necessary to achieve intelligence. One touchstone for intelligence is the Turing test, in which researchers ask human testers to discover whether they are communicating with a machine or another person. If the tester can't tell the difference, the machine is deemed intelligent. Some machines are already making the grade in specific contexts. "We are finding successful Turing tests within a certain situation," says Norman Johnson, who leads the Symbiotic Intelligence Project at Los Alamos. "Take it out of that situation and it fails miserably, but within the right context you can't tell the difference." The global brain's intelligence could come from an assembly of limited intelligences, each with their own special area of expertise. That, Johnson says, would be exactly equivalent to human intelligence. "Humans can act intelligently within many contexts," he says. "But if you put all those abilities into one person they probably wouldn't be able to function." That's why we have society, Johnson says: to mesh those intelligences together, creating a powerful sum. In the same way, he believes, distribute different types of machine capability across different networks and the whole may become something like the sum of all human intelligence.

    It is hard to find a researcher who doesn't think that the global brain is a possibility. But do we really want it? The scientists are aiming to create a vast mind that goes beyond anything we could understand or control-opening a door that most of us might prefer to keep firmly shut. Heylighen acknowledges this little image problem. He sees his global brain as the centre of what he calls the global superorganism. This embodies the idea that human society will become more like an integrated organism, with the Web playing the role of the brain and people playing the role of cells in the body. "The brain itself does not seem to be very controversial, but the superorganism certainly is," Heylighen admits. Artificial intelligence researcher and writer Ben Goertzel of IntelliGenesis Corporation, New York, believes that humans will be a secondary part of this organism, perhaps a dispensable one. It's not a very comfortable self image for a species used to considering itself the pinnacle of creation.

    The global brain's self-adapting intelligence could quickly surpass our ability to understand it. Or perhaps it already has. According to Daniel Dennett, director of the Centre for Cognitive Studies at Tufts University in Medford, Massachusetts, "the global communication network is already capable of complex behaviour that defies the efforts of human experts to comprehend". And what you can't understand, he adds, you can't control. "We have already made ourselves so dependent on the network that we cannot afford not to provide it with the energy and maintenance it needs," he warns.

    It could all start so innocently. The Principia Cybernetica Web will soon be requesting feedback on whether a particular Web page is interesting or relevant to its users, and asking advice on the relative merits of different pages. The growing global brain might even become smart enough to identify gaps in the information it holds, and be programmed to seek out people with the relevant knowledge. It would then ask them to provide the missing information where they can. Heylighen goes as far as suggesting that there could be penalties-like disconnection or restricted access-for not playing along. After all, if you're going to benefit from the global brain, you have a duty to help others who are searching for information.

    Digital dictator

    All these innovations combined on a global scale would create a network with complex behaviours that we can't yet conceive of. Would it create Utopia, dystopia or something altogether new? Dennett is certain of one thing: "If we don't want to be dictated to, we will have to be very careful about controlling our dependence, and its evolution," he warns.

    Cliff Joslyn, head of the Los Alamos DKS team, appears unconcerned by such worries. Experimenting with autonomous agent systems "carries risks and surprises" he admits. "The trick will be to first understand from a scientific perspective how such systems behave, and then construct bounds within which such interactions can be safely contained," says Joslyn.

    If you're getting scared now, and thinking of unplugging your modem, take care. You may be about to join the information underclass. "Not to use an intelligent Web will be a little like the people that refuse to use cars or telephones," Heylighen says. "There have always been people who live outside the bounds of society's rules: tramps, hermits, eccentrics. But these people have a much more difficult life."

    Heylighen insists that ordinary people have nothing to lose by being part of the global brain. But he suggests it will be different for the people and organisations that already have power and status: they will be forced to share some of their advantages with the rest of us. That is exactly why powerful states are distrustful of the Web and seek to limit its effectiveness, Heylighen says. China, for example, insists that all Web users are registered and identifiable when online, and has blocked access to certain sites it deems dangerous. Johnson takes a similar line to Heylighen. "If the global mind does come online there are a lot of power structures that won't be particularly happy about it," he says.

    Johnson's view of the intelligent Web is subtly different from Heylighen's troubling vision of a global superorganism. He sees it as an extension of society. "Our premise is that systems can be much more intelligent than individuals," he says. "You can have a very diverse group solving problems much better than an expert: that's why we have society and social insects." Developing symbiotic intelligence, says Johnson, will be a positive step for our society: the experience and wisdom of any individual need never be lost again. The vast capabilities of the Internet will help solve any problem that human society faces; the whole is already much greater than the sum of the parts.

    Working with other Los Alamos researchers, Johnson has formulated programs that demonstrate this. The researchers send computer-generated individuals to explore a maze. Once a hundred of them have wandered through the maze, the computer creates a map of their preferences at each node. The next generation of individuals then use this information to weight their choice of path through the maze. On average, uninformed individuals take 34 steps to escape the maze; the second, informed, generation takes an average of only 12. As the number of individuals in the collective increases, the solution gets better and better.
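A toy reconstruction of this two-generation experiment fits in a short script. The maze, the loop-erasing bookkeeping, and all parameters here are assumptions for illustration, not the actual Los Alamos program; the point is only that walkers weighted by their predecessors' successful routes escape faster than uninformed ones.

```python
import random

MAZE = {                                  # node -> neighbouring nodes
    "start": ["dead_end", "corridor"],
    "dead_end": ["start"],
    "corridor": ["start", "exit"],
}

def walk(prefs=None, rng=random):
    """Wander from 'start' to 'exit'; return (steps taken, loop-free route).
    With prefs, weight each choice by earlier walkers' recorded routes."""
    route, steps = ["start"], 0
    while route[-1] != "exit":
        node = route[-1]
        options = MAZE[node]
        if prefs:                         # +1 smoothing so new links stay open
            weights = [prefs.get((node, o), 0) + 1 for o in options]
            nxt = rng.choices(options, weights=weights)[0]
        else:
            nxt = rng.choice(options)
        steps += 1
        if nxt in route:                  # walked into a loop: erase it
            route = route[:route.index(nxt) + 1]
        else:
            route.append(nxt)
    return steps, list(zip(route, route[1:]))

rng = random.Random(0)
prefs, naive = {}, []
for _ in range(100):                      # generation one: uninformed
    steps, edges = walk(rng=rng)
    naive.append(steps)
    for edge in edges:                    # record the successful route
        prefs[edge] = prefs.get(edge, 0) + 1
informed = [walk(prefs, rng)[0] for _ in range(100)]   # generation two
print(sum(naive) / 100, sum(informed) / 100)
```

In this tiny maze the informed average falls close to the two-step optimum, mirroring (in miniature) the 34-versus-12 improvement the article reports.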

    Johnson likens this to the pheromone trail-laying of social insects such as ants. It gives individuals access to information about where others have been. Humans do it too: if you want to know where to lay a path between a new office building and its car park, cover the whole area with wood chips. Paths appear in the chips as each individual solves their own problem, and others can choose whether to use this solution. Within a short time a collective solution-a few well-used paths-emerges.

    On the Internet, the same kind of thinking has led to Amazon's "people who bought this book also bought . . ." lists. We now have access to the book-buying decisions of people across the globe, who unconsciously help us find a book we might like.
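A minimal sketch of the idea behind such co-purchase lists: count how often pairs of titles appear in the same order, then recommend the most frequent companions. The function name, data, and scoring are illustrative assumptions, not any retailer's actual algorithm.

```python
from collections import Counter
from itertools import combinations

def also_bought(orders, title, top_n=3):
    """orders: list of baskets (lists of titles).
    Returns up to top_n titles most often bought alongside `title`."""
    pair_counts = Counter()
    for basket in orders:
        for a, b in combinations(sorted(set(basket)), 2):
            pair_counts[(a, b)] += 1
    companions = Counter()
    for (a, b), n in pair_counts.items():
        if a == title:
            companions[b] = n
        elif b == title:
            companions[a] = n
    return [t for t, _ in companions.most_common(top_n)]

orders = [
    ["Tierra Manual", "Artificial Life"],
    ["Artificial Life", "Out of Control"],
    ["Tierra Manual", "Artificial Life", "Out of Control"],
]
print(also_bought(orders, "Artificial Life"))
```

No shopper deliberately contributes anything: the "collective memory" emerges as a side effect of everyone solving their own problem, exactly like the wood-chip paths above.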

    Now it's time to take this principle and integrate the Internet fully into the way human society works, Johnson believes. A worldwide network of people using interconnected computers should open up a kind of "collective memory" to add on to our individual brain power. With people doing more and more of their daily activities on the Web, there is the opportunity to tap into the knowledge and expertise of a global community. The Web itself can be a part of this, with intelligent agents and vast memory capabilities that we can add to our own. Eventually there will be little distinction between people, computers and wires-everything combines to create one vast symbiotic intelligence.

    At least in Johnson's picture we are important components of the global superorganism, but even so, how many people will relish the prospect of being assimilated in this way? Are we really doomed to become the Borg?

    Oddly, neither Johnson nor Heylighen sees this work as a challenge to individuality. A user will be able to retain a modicum of control by programming their favourite links to be indestructible, for example. Both researchers believe that the global brain will only improve our lot: we'll be within a larger social organism, and enabled by new technology.

    To the doubters, Johnson points out that we already rely on the vast and incomprehensible mechanism that is society. Ask an ant how it finds food, and it won't be able to tell you. Ask most people how their television works and they are unlikely to give you more than the basics. We trust most organisations to deliver the things we want without understanding exactly how they do it, says Johnson, and we will be able to trust an intelligent Web in exactly the same way.

    That might be a naive view: many people believe that the mechanisms of society can't always be trusted to work for the greater good over the wishes of powerful individuals. If, as it seems, the global brain is our inevitable future, and we can't turn it off, our only option might be to blend into the crowd. After all, if you're not exceptionally rich, powerful or clever, the global brain shouldn't need to disturb you. Back up your files, act dumb and keep your head down. There is a growing intelligence out there, and it knows your e-mail address.