Discussion in 'Intelligence & Machines' started by wesmorris, Oct 2, 2003.
Isn’t this answered by “take more time and use faster processors”?
BetweenThePoints: Do you have any idea of the basic concepts underlying the program you described? It might merely be programmed to mimic problem-solving capabilities.
When you see Deep Blue or other high-level chess playing programs in action, you might believe that there is intelligent behavior directing the machine play. When you understand how the program works, you realize that it is a very simple-minded number cruncher and not even close to AI.
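To see just how un-mysterious that number crunching is, here is a minimal minimax sketch in Python. It uses a toy take-away game rather than chess, and Deep Blue’s real machinery (alpha-beta pruning, hand-tuned evaluation, opening books) is far more elaborate, but the core idea is the same exhaustive search:

```python
# Minimal minimax: exhaustive game-tree search, the core of engines like
# Deep Blue. Toy game: a pile of n counters, each player removes 1 or 2,
# and whoever takes the last counter wins. No understanding of the game
# here -- just recursion over every possible continuation.

def minimax(n, maximizing):
    """Score the position for the maximizing player: +1 win, -1 loss."""
    legal = [m for m in (1, 2) if m <= n]
    if not legal:                       # player to move cannot move: they lose
        return -1 if maximizing else +1
    if maximizing:
        return max(minimax(n - m, False) for m in legal)
    return min(minimax(n - m, True) for m in legal)

# Pick the best first move from a pile of 5 counters.
best = max([1, 2], key=lambda m: minimax(5 - m, False))
print(best)   # 2 -- leaving the opponent a multiple of 3 forces a win
```

Nothing in that loop knows what a “game” is; it just grinds out numbers, which is the point being made above.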
BTW: Perhaps the human mind would not seem to be intelligent if we understood how it worked. At least I am sure that it does not play chess the same way that Deep Blue does.
I don't think it's much good arguing that computers can become self-aware because we will instruct them in how to do it. We haven't got a clue ourselves. 'Take more time and use faster processors' to do what exactly?
You are missing the point by a wide margin. It is not that we instruct computers how to do something we do not understand ourselves but we create a construct that allows computers to do it themselves.
One of the most promising approaches in AI is the development of a learning seed. Once launched, the computer then learns for itself, much like a newborn baby will grow and learn. In human children self-awareness usually begins between 1 and 2 years of age. At that time the neural and synaptic connectivity has reached an appropriate level of complexity that allows self-awareness to emerge.
Quite some years ago (early 1990s) I watched an AI program develop the rules for solving square roots. The technique was entirely based on forming neural networks. The training used some simple text-based mathematical equations provided as input. The program was allowed many hours to learn and was then given new equations with no answers and asked to solve them. What I found ‘spooky’ at the time was that the designer said he didn’t know how the program had achieved its result.
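The details of that square-root system aren’t given, but the flavor of “learning a rule from examples without being told the rule” can be sketched in a few lines. This is a deliberately simplified stand-in: instead of a neural network, plain gradient descent fits a single exponent p in the model y = x^p from example equations:

```python
import math

# Toy stand-in for the square-root learner described above (the original
# used neural networks; this fits one parameter by gradient descent).
# Training data: example "equations" pairing x with its square root.
data = [(x, x ** 0.5) for x in (2.0, 4.0, 9.0, 16.0, 25.0)]

p = 1.0            # model y = x ** p, starting from a wrong guess
lr = 1e-4          # learning rate

for _ in range(5000):
    for x, y in data:
        pred = x ** p
        # Gradient of the squared error (pred - y)**2 with respect to p.
        grad = 2 * (pred - y) * pred * math.log(x)
        p -= lr * grad

print(round(p, 3))   # converges to ~0.5: the square-root "rule", never stated
```

Nobody told the program that square roots involve the exponent 0.5; it extracted that from the examples, which is the sense in which even the designer cannot point to where the rule “lives”.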
It's not that simple.
The baby learns through multiple sensory inputs plus other consciousnesses, and the "neural net" that is the brain already has certain dispositions that we as of yet do not know how to manufacture.
The idea of a learning seed is to get the process to be able to respond to different variants of that input. The most important thing, however, is that the input and the variable space of responses are known ahead of time. The same cannot be said for self-awareness. I think if we are going to achieve AI, we must not try to emulate the brain but what makes the brain able to do what it does.
Have you come across Rodney Cotterill's current work? He is trying to explore consciousness by means of a computational model of an infant brain, dubbed 'Cyberchild - a simulation test-bed for consciousness studies'. A recent paper describes progress.
There isn't any. The team haven't even been able to define what they're trying to create.
He writes "The present investigator feels that working with such a simulation is a salutary experience. There is naturally the implicit assumption that consciousness requires no mystical ingredient, beyond what is comprised in the simulated circuit. But it takes a lot of faith in the reductionist canon to believe that consciousness will one day emerge from the blinking lights..."
I'll say it does. He goes on:
"Above all, direct interaction with such a simulation leads one to appreciate the profundity of Searle's speculations about the importance of meaning."
In other words, there's a problem replying to Searle's arguments.
Stan Franklin (Uni of Memphis) is working on IDA, a software agent he describes as conscious in a sense. He also admits the impossibility of working on something that can't be defined scientifically. Thus he uses 'consciousness' to mean 'functionally conscious', in other words no mention of phenomenal consciousness, self-consciousness etc., just observed behaviour. This is the approach of all researchers in his field. Functional consciousness (whatever that might mean) is what they are researching and trying to create. Nobody knows how to work on consciousness proper.
As Franklin points out, IDA is a piece of running software, and is thus immaterial. In this case what is there to be conscious? He also writes:
"Subjective consciousness seems to be such an inherently first-person phenomenon as to be impervious to any form of proof. Following Chalmers I'm beginning to view phenomenal consciousness as a fundamental process of nature comparable to mass or energy... Deducing structure from behaviour is notoriously difficult... in software agents such deductions are theoretically impossible..."
A good article is Stevan Harnad's 'Can a Machine Be Conscious? How?' - it can probably be found at http://www.u.arizona.edu/~chalmers/people.html
He concludes not.
There will be no progress in AI or a conscious computer-like device until a totally new computer architecture is invented or developed from something currently in some remote laboratory.
More memory and a faster CPU using current architecture will not do the job, even with a huge connected array of processors.
The human brain is fundamentally different from a modern computer, and that difference (whatever it is) is likely to be what makes the brain so remarkable.
If you show a human a photo of somebody, he can almost instantly tell that he never saw that person, often coming to this conclusion in less time than it takes to recognize the face of somebody not known well. A computer will invariably take longer to reach the negative conclusion. This suggests something, doesn't it?
A human does not have anything remotely resembling a memory divorced from processing functions, and certainly does not use anything similar to an indexed database. Human memory is like having a computer rewire its circuitry to add data.
The human mind stores data about a flower in probably a dozen different places: shape/outline in one place; color in another; name elsewhere; motor nerve data for saying the name in yet another place. The human mind seems to be like a complex network of hyperlinks, with processing and data intermixed.
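That "hyperlinked" picture of memory can be caricatured in a few lines of code. This is purely illustrative (the stores and attribute names below are invented for the example, not a model of any real system): each attribute of a concept lives in its own store, and recall means following links into all of them rather than fetching one record.

```python
# Hypothetical sketch of attribute storage scattered across separate
# stores, linked only by the concept they describe.

shapes = {"rose": "layered whorl", "daisy": "radial disc"}
colors = {"rose": "red",           "daisy": "white"}
names  = {"rose": "rose",          "daisy": "daisy"}
motor  = {"rose": "say-ROHZ",      "daisy": "say-DAY-zee"}  # articulation data

def recall(concept):
    """Assemble a percept by following links into each separate store."""
    stores = [("shape", shapes), ("color", colors),
              ("name", names), ("speech", motor)]
    return {label: store[concept] for label, store in stores
            if concept in store}

print(recall("rose"))
```

The contrast with a conventional database row, where all four attributes sit together in one indexed record, is the point of the analogy.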
You see this is what I've been thinking about for a long time and I feel that I'm dancing around it.
The problem is that an algorithm has no depth. It is not a volume; it's two-dimensional and cannot see itself. Certainly you can teach it to perform tasks, but it cannot contemplate why it is performing these tasks.
That is why I insist on "the abstract" as the inner dimension where meaning lies... and consciousness itself is the phenomenon that acts as a gateway to that dimension.
Why? I tend to disagree.
Why not? But I agree it is unlikely if we emulate a brain using SMP (Symmetric Multi-Processor) architectures, where memory will be a bottleneck. However, highly scalable MPP (Massively Parallel Processor) architectures are essentially identical to the brain. Instead of thinking of the brain as a computer, think of each neuron as a processor in its own right with its own local memory. Each of these computers then links to others via a set of SANs (System Area Networks). All these processors are free to operate asynchronously. This is the brain. MPP shared-nothing systems do the same thing.
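The neuron-as-processor idea can be sketched with ordinary threads and message queues. This is a toy illustration of the shared-nothing style, not HP's actual design: each "neuron" owns private state and communicates only by messages over its inbox (standing in for the SAN), never through shared memory.

```python
import queue
import threading

class NeuronProcessor(threading.Thread):
    """A worker with private memory that talks only via message passing."""
    def __init__(self, downstream=None):
        super().__init__()
        self.inbox = queue.Queue()   # local input channel (the "synapse")
        self.local_sum = 0           # private memory: nothing is shared
        self.downstream = downstream

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # shutdown signal, passed along the chain
                if self.downstream:
                    self.downstream.inbox.put(None)
                break
            self.local_sum += msg    # process asynchronously with local data
            if self.downstream:
                self.downstream.inbox.put(self.local_sum)

sink = NeuronProcessor()
src = NeuronProcessor(downstream=sink)
sink.start(); src.start()
for signal in [1, 2, 3]:
    src.inbox.put(signal)
src.inbox.put(None)
src.join(); sink.join()
print(sink.local_sum)   # 10: sink accumulated the running sums 1 + 3 + 6
```

Every processor runs at its own pace and blocks only while waiting for messages, which is the asynchronous, shared-nothing behavior the post describes.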
I hope I have now corrected that incorrect impression.
Your comparison fails because we have yet to build a computer system with power equivalent to the human brain. I put the ratio at somewhere around 10,000 to 1. Try the test again when we have an MPP with the power of 10,000 current-day supercomputers. But note you are talking about visual recognition, and there has been substantial progress on this – see Hans Moravec’s home page, where he describes the latest commercial enterprise using 3D visual recognition – http://www.frc.ri.cmu.edu/~hpm/
This is an MPP architecture and has been well understood since the 1970s.
Just like an MPP machine.
Chris: It seems to me that you are giving so-called MPP architecture more credit than it deserves.
I remember what we called array processors, which were used by the Spook Shops for breaking encrypted messages and for solving partial differential equations via relaxation iterations. That technology was nothing like a human brain. It was merely hundreds or thousands of CPUs using common memory, which was partitioned so that many processors could access the memory simultaneously most of the time. A CPU would only have to wait on memory if another CPU was accessing the same bank. The OS and programming capabilities of those systems were primitive compared to what an AI system would require.
Could you provide more details of the MPP architecture you mentioned? Might you remember the type of applications for those systems from the seventies?
BTW: Sorry for all the typos in my previous post. I was in a hurry and did not have time to proofread it.
MPP systems are currently in widespread use as major processing systems. AOL runs its entire email system on one. The NASDAQ stock exchange is a massive MPP system. All significant stock exchanges in the world run on MPP systems. 90% of bank ATM systems worldwide are processed via MPP systems.
HP NonStop Servers are currently the world leader in linear scalable fault tolerant MPP systems. Systems can be built from 2 to 4080 processors where each processor has its own main memory and communicates with others via a SAN (Servernet). These systems form the backbone to many major business enterprises throughout the world.
Hope this helps a little.
Chris: As I expected, the MPP systems you mention are not even close to the architecture of a human brain.
What you are describing is no more than a big LAN-like system. The array processors used by NSA for code cracking, and for various relaxation calculations (e.g., weather forecasting and other partial differential equation processing), are closer to the architecture of the human brain, but fall short of the architecture required for AI.
You missed the point of the example I gave of recognizing a human face. Computer algorithms take longer to return a "not known" response than a recognizable hit, because they must check every index entry or (worst case) every item to decide that a pattern is unknown. The human brain often replies faster with a "not known" response than with recognition of a pattern not seen for a long time.
When I go to a college or high school reunion, I almost immediately catalogue the ones I never saw before (they graduated before or after I attended), but I often take several seconds to recognize the more obscure members of my own class. Similarly, when I see an old movie on TV, I usually can immediately identify some actors as those I never saw, but often take more time to decide that I once saw some actor with a minor part who never made it in a significant role. All the computer algorithms of which I am aware take longer to respond when the pattern is unknown. This definitely suggests a fundamentally different algorithm for human versus computer pattern recognition.
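The asymmetry being described here is easy to demonstrate with a naive matcher. This is illustrative only (real face-recognition systems compare feature vectors, not strings), but the worst-case-on-miss logic is the same: a linear scan can stop early on a hit, yet must examine every stored pattern before it can say "unknown".

```python
# A naive recognizer: slowest precisely on faces it has never seen.

def recognize(face, known_faces):
    """Return (verdict, number of comparisons performed)."""
    comparisons = 0
    for stored in known_faces:
        comparisons += 1
        if stored == face:               # early exit on a match
            return "known", comparisons
    return "unknown", comparisons        # a miss must check everything

known = [f"person_{i}" for i in range(1000)]

print(recognize("person_3", known))   # ('known', 4) -- fast hit near the front
print(recognize("stranger", known))   # ('unknown', 1000) -- slow miss
```

The human pattern described in the post is the reverse, a fast "never saw them" and a slow retrieval of obscure acquaintances, which is what suggests the brain is not doing anything like this scan.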
Again, I point out that the human brain does not have data memory distinct from processing functions, while all current mainstream computers keep memory data, file (or database) information, and program instructions as distinct entities. The human mind does the equivalent of rewiring the hardware to store new data, while the computer merely adds data to a file or a section of memory used for data storage. Data and programs are intermixed in the human brain in a fashion fundamentally different from current computer architecture. There are some primitive attempts at neural network architectures which might develop into a suitable AI architecture, but there is nothing that remotely functions the way a human brain does.
BTW: I expect true AI to be developed someday: In 10-20 years if some genius comes up with a revolutionary idea; In 50-200 years if it is done due to slogging along by people who are just very intelligent and focused.
At present, we do not have a clear understanding of what functions are hard wired into the human brain. It has only been about 30-40 years since we discovered the basic nature of the mechanism built into bird brains for seasonal navigation. Experiments in planetariums have shown that many migrating birds have hard wired functions for noticing the point around which the stars seem to rotate (Polaris for several thousand years).
Until we have a far better understanding of the human brain, we are unlikely to build an AI device. In any event, I do not expect it to have an architecture similar to any current computer system. More memory and an array of faster CPUs is not going to do the job, and a single blazingly fast CPU will not even come close. At present, neural network architectures look like the best bet, but even the best of these are primitive.
Neurons operate independently and in parallel, connected to each other via a relatively low-speed communications network – synaptic connections are chemical, not electrical. The HP MPP systems are essentially identical in structure, though they need perhaps another 15 years to catch up in processor speed.
Partial image recognition is also understood, but again such systems need the knowledge base and the equivalent processing power of the human brain. I don’t see that you have a valid point. Video image transmission now focuses on sending only changed frames instead of full images, and fuzzy logic and probability matching also bring computer processing techniques closer to the way the brain operates.
But any comparison you make with current computers simply cannot compare with the massive parallel processing power of the brain – it is important to keep that power ratio in perspective.
As for memory storage, I agree, the brain uses a different approach. The issue then becomes: do we want to copy exactly how the brain operates, or to accurately emulate brain function? The primary technique required in either approach must be massive parallelism. Storage techniques can differ if the end result is the same.
I think the architecture of the brain is memory-based. There was a PhD student at my undergraduate school who I believe was working on an architecture centered around memory. I will see if I can get a link...
EDIT: Link to the research group he belongs to: http://www.capsl.udel.edu