Strong AI

Discussion in 'Intelligence & Machines' started by Blue_UK, Feb 7, 2005.

Thread Status:
Not open for further replies.
  1. Blue_UK Drifting Mind Valued Senior Member

    Messages:
    1,449
    Here are some thoughts I had, any feedback would be appreciated.

    Machine Intelligence
    <hr>
    Alan Turing postulated that if a machine could hold a conversation with a human without giving itself away as artificial, then it could be said to think and to be intelligent. This may not be a good way to describe intelligence, as a conventional machine does not really think or know anything about what it does, even if the symbols it processes have meaning to us.

    We struggle to define intelligence because there are so many things that influence what we think of as intelligent. It’s natural that we model intelligence on our own consciousness, as it’s taken for granted that we certainly are intelligent. The main fallacy here lies in comparing machine and human brain architecture: although both can be reduced to simple deterministic pathways, the macro-structure is entirely different. A machine CPU’s only ‘objective’ is to sort and combine data. Like the brain of a nematode worm, all of its pathways are hard-wired ‘knee-jerk’ reactions that can be predicted quite easily. It receives binary signals for simple arithmetic and logic tasks, whereas a human brain is an enormous neural network which dynamically interacts with its environment to seek out information and associate meaning. Strictly speaking, neurons can also be viewed as deterministic(1), and computers can be made to emulate neural networks, but this argument is concerned only with the conventional ‘top-down’ approach to A.I. and the current form of computer design – which differs greatly from ours.

    To understand what we would like intelligent computers to be, we should look at the human system. So what is the objective of the human neural computer? What state does it try to approach or maintain? Biologically, the reason we have brains is clearly to increase our success at reproduction, but there is almost certainly not one line of code, or a fundamental loop of neurons, that states, ‘Do whatever it takes to survive, reproduce and look after my offspring’(2). Although that would be rather ideal from an evolutionary standpoint, it is computationally unlikely and would require a very large knowledge base to interpret the semantics of such an instruction and effectively execute it. Yet there is consistency to the decisions made by humans; the states of the inputs do make a difference to the output. Therefore there must be certain preferences, templates and the like deeply embedded in the network. For example, a beautiful woman does not actually emit beauty: it is the brain of the receiver that finds her image pleasing. This suggests the presence of a template, or rulebook, that every captured image is referenced against. The weighted combination of many thousands of such ‘templates’ or ‘influences’ (motivations, henceforth) could contribute to a total sum used to make a decision.

    The idea of a single total sum, or single prime directive might seem contrary to the fact that a human consciousness is influenced by a great deal of motivations. But what happens when two of these conflict? There must be a rule to determine which has greater weighting and as such everything will eventually boil down to a very basic single prime directive.
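    The weighted-sum idea above can be sketched in a few lines. Everything here – the motivation names, weights and scores – is illustrative, not a claim about how brains actually encode them:

```python
# Hypothetical sketch of the 'weighted motivations' model from the post:
# each candidate action is scored against many motivation templates, and
# conflicts resolve inside a single weighted sum (the 'prime directive').

def decide(action_scores, weights):
    """Pick the action whose weighted motivation sum is highest.

    action_scores: {action: {motivation: score}}
    weights: {motivation: weight}
    """
    def total(scores):
        return sum(weights.get(m, 0.0) * s for m, s in scores.items())
    return max(action_scores, key=lambda a: total(action_scores[a]))

weights = {"hunger": 0.8, "curiosity": 0.3, "safety": 1.0}
actions = {
    "eat":     {"hunger": 1.0, "safety": 0.2},
    "explore": {"curiosity": 1.0, "safety": -0.5},
}
print(decide(actions, weights))  # -> eat
```

    Conflicting motivations never need a special tie-break rule: they cancel or dominate inside the one sum, which plays the role of the single prime directive.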

    This model has obvious issues to be addressed, such as the implication that a person cannot make a decision that they believe will put them in a lesser state of some fundamental sum. However, it is quite obviously reminiscent of a neural network, and as such it can most likely approximate the truth even if the actual mechanics of the brain happen to be entirely different.

    Giving a machine such a design would allow us to program in complex motivations, which might appear to give it the characteristic of personality. That is, given that the computer has available to it a detailed environment model and built-in likes and dislikes of various stimuli, it could learn to manipulate the environment to satisfy the fundamental motivations. The frivolous use of ‘like/dislike’ is the pivotal point – all that is meant by this is that the system does whatever it remembers will bring a variable or synaptic node to a particular value.

    Ultimately, the greater the capacity to learn and make associations, the closer this machine will be to intelligence as is meant with reference to humans.

    <hr>

    1. Neurons may not be completely deterministic, even if they are close enough, as they can be influenced by noise and chemicals present in the blood which are not considered part of the logical circuitry. It should be noted that although there may be randomness at the quantum level, such fluctuation will be negligible at the cellular level and is not likely to influence whether a neuron fires or not.

    2. Richard Dawkins, The Selfish Gene.
     
  3. HonestJohn Registered Member

    Messages:
    28
    I agree with most of what you say, except that I think computers are not predictable. There is a random function in most programming languages, often based on a calculation using the number of microseconds elapsed since the machine was turned on.
    Similarly, the human brain is not compelled to choose the option that best fits the values the human has adopted. Many humans make mistakes, act in contrary ways, and choose to behave randomly on occasion.
    I make these two points to show that neither the machine nor the human is deterministic in any absolute sense.
     
  5. Dilbert Registered Senior Member

    Messages:
    361
    You are absolutely wrong.
    Computers are not random one bit, and you explained why in your own post.

    Humans, however, are also totally deterministic, but they are rather complex, and therefore it can seem as if they are random and unpredictable. Humans make mistakes, but that does not constitute any absence of determinism.
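    Dilbert's point is easy to demonstrate: a library random function is a deterministic pseudo-random generator, so running it twice from the same seed reproduces the sequence exactly. A minimal Python illustration:

```python
import random

# Two generators given identical seeds (i.e. identical initial state)
# produce identical 'random' sequences: the apparent randomness comes
# only from not knowing the seed.
rng_a = random.Random(12345)
rng_b = random.Random(12345)

seq_a = [rng_a.randint(0, 99) for _ in range(5)]
seq_b = [rng_b.randint(0, 99) for _ in range(5)]
assert seq_a == seq_b  # same state in, same 'random' numbers out
```

    Seeding from the microsecond clock, as John describes, only hides the seed; it does not make the sequence non-deterministic.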
     
  7. Blue_UK Drifting Mind Valued Senior Member

    Messages:
    1,449
    Well John,

    Yes, we make mistakes: both our senses and our "processing power" are limited, and so it naturally follows that such a system as I have described would be expected to make errors.

    As for people acting randomly... people can appear to act randomly, but I would not say this is the case. Sure, you can pick a number off the top of your head – or throw your arms about pretending to have a seizure – but I would maintain that if I made instantaneous copies of you and placed them in identical surroundings with the same physical initial conditions, you would take exactly the same course of action every time.

    Clearly, my philosophy takes on the obvious assumptions of causality. Determinism in an absolute sense? – see footnote 1.

    I think one of the things that strikes people as just plain 'wrong' in my arguments is the rigidity of what I propose as the decision-making system. I would predict that there are an enormous number of templates/motivations, and this is why behaviour is erratic (many internal conflicts, weightings continuously being adjusted), not because we have some kind of ruleless free will.
     
  8. HonestJohn Registered Member

    Messages:
    28
    I take your points, Blue and Dilbert. However, the argument that you can imagine two identical humans, and that in that case both would behave identically, is flawed, I think. All it proves is that you can predict what a man is going to do by observing what he does, or what an identical copy does. There is still a possibility that both copies are, taken together, non-deterministic.
     
  9. Dilbert Registered Senior Member

    Messages:
    361
    John, there is no way of actually knowing whether we are deterministic or not. Any theory is valid, though yours might be more highly regarded, since most humans like to believe that they actually think for themselves.

    Whilst this might matter to Blue, it does not matter to me; my AI design works just fine already.
     
    Last edited: Feb 11, 2005
  10. Blue_UK Drifting Mind Valued Senior Member

    Messages:
    1,449
    Yeah, my 'duplicate person' scenario is not meant to be a proof... just to demonstrate my belief.

    The point of my theory, which is introduced in the initial post, is purely a suggestion as to how one would create a machine that could think like us.

    Dilbert, do you actually have a working AI design, or is it just in your head at the moment?
     
  11. HonestJohn Registered Member

    Messages:
    28
    You make the point that to learn is a key ability for machine intelligence. ("the greater the capacity to learn and make associations, the closer this machine will be to intelligence").

    What if I build a system that works with a limited set of objects – just spheres and cubes for a start? Any object can have its size and material specified. The software is allowed to experiment – push an object around and see what happens. I plan to let it decide what it wants to try: e.g. push the rubber sphere with x amount of pressure against a cube rotating at y turns per second. I would give it the test results, and if it gains knowledge I would make it describe in words what it has found out.

    This would be a step towards learning, and could be followed by an expanding set of objects, materials and other properties until the software is able to make sense of the real, physical world.
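    A toy-world learner of the kind John describes might look like this in outline. The shapes, materials, thresholds and the `simulate_push` stand-in are all invented for illustration; a real physics test rig would replace that function:

```python
import random

def simulate_push(shape, material, force):
    # Stand-in for the physics rig: each material has a force threshold
    # below which the object stays put. Purely hypothetical numbers.
    threshold = {"rubber": 2.0, "steel": 8.0}[material]
    return "moved" if force >= threshold else "stayed"

# The software chooses its own experiments, observes the results,
# and retains what it has learned about the toy world.
knowledge = {}
rng = random.Random(0)
for _ in range(20):
    experiment = (rng.choice(["sphere", "cube"]),
                  rng.choice(["rubber", "steel"]),
                  rng.choice([1.0, 5.0, 10.0]))
    knowledge[experiment] = simulate_push(*experiment)

# Describe the findings in words, as the post suggests:
for (shape, material, force), outcome in sorted(knowledge.items()):
    print(f"A {material} {shape} pushed with force {force} {outcome}.")
```

    Expanding the set of objects and properties then amounts to growing the experiment space while the knowledge table accumulates.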

    You made the point that "a machine CPU's only 'objective' is to sort and combine data", and in this case I feel that a further objective can be built into the code: to maximise its sum of knowledge about its external world. By justifying its assertions the program would be self-checking and easy to redirect when it goes off at a tangent.

    I have constructed similar code to this in several projects during my forty-odd (nearly 50!) years writing code professionally, and they have all worked effectively, so this one has a fair chance of getting off the ground.
     
  12. Blue_UK Drifting Mind Valued Senior Member

    Messages:
    1,449
    How would one go about implementing desires, i.e. influences on its decisions?

    I would agree that a desire to learn about the world is something that most humans have and would be an interesting thing for an AI to be given, but it is how one would go about doing this that I try to think about. How does one go about creating a neural net and giving it desires?

    As a side note/question, I assume you rank knowledge in some way such that the machine does not hoard uninteresting (to us) data (like the position of every grain of sand on every beach) with equal regard to more useful knowledge?

    --------------------------

    Obviously the human brain consists of billions of neurons, but it should not be necessary to reduce the problem right down to that level. Surely there must be large subcomponents? I consider this because everyone's brain is not going to be structurally identical, and yet we are all conscious.

    I'm treading on thin ice here, as I admit that sounds rather like bollocks; my point is simple: surely there is some minimum, fundamental macro-structure required for there to be consciousness as we experience it.
     
  13. HonestJohn Registered Member

    Messages:
    28
    Your opening remark (implementing desires), and the point about the problem of discretion (what info to keep and what to discard) both point to a value system built into the project.

    I anticipate that I will put in a test, such as, 'Does the retention of these facts help my understanding of the environment?'. Computers these days have the spare processing power to go a long way down the wrong track, just to see if their results are worth keeping. 'Worth keeping' in the sense that reportable progress can be recorded towards the goal of understanding its environment.

    My first task will be to program it to put into words what it currently understands. This will help towards defining the structure of its main database. The approach will be neural, but only in the sense that frequently traveled pathways will be reinforced and will acquire greater significance in making hypotheses to test. I'm looking forward to writing the 'contemplation' code, where it scans its record of 'significant' events to gain further insights.
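    The "frequently traveled pathways will be reinforced" idea can be sketched as a weight table with decay. This is a hypothetical fragment, not John's actual design:

```python
# Each time a pathway (an association between two events) is used, its
# weight grows while all others decay slightly; stronger pathways then
# carry greater significance when forming hypotheses to test.

def reinforce(weights, pathway, amount=1.0, decay=0.95):
    """Decay every pathway a little, then strengthen the one just used."""
    for key in weights:
        weights[key] *= decay
    weights[pathway] = weights.get(pathway, 0.0) + amount
    return weights

weights = {}
history = [("push", "moved"), ("push", "moved"), ("drop", "bounced"),
           ("push", "moved")]
for event in history:
    reinforce(weights, event)

# The most travelled pathway now has the greatest significance.
strongest = max(weights, key=weights.get)
print(strongest)  # -> ('push', 'moved')
```

    The 'contemplation' pass John mentions would then scan the highest-weighted pathways for further insights.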

    I note that others in AI have seeded the database, helping to get the process going, and even helped along the way, by manipulating its knowledge rather than just providing facts. I plan to avoid this, if possible.

    Thanks for your feedback - it is appropriate and welcome.
     
  14. nexus Registered Member

    Messages:
    24
    One issue with giving a set of instructions, such as "pick up red squares", is that "behaviors" can result which may seem to be indicative of learning but are in fact an extension of the primary directive. For example, a robot "attacks" another robot which is holding a red square – the robot's AI didn't learn to attack other robots and steal their red squares; it simply saw a red square and attempted to pick it up.

    Talking about implementing consciousness and desires gets a bit into wispy-theory territory.

    But I don't think that the ability to learn is required for an AI to be considered intelligent, although it would obviously be a requirement for any AI or SI to develop beyond-human intelligence.

    As yet, I haven't seen as much emphasis on learning intelligences as there has been on logical branching/rules-based intelligences. While the latter may be easier to construct, especially with today's technology, as I've said, the former is obviously the future.

    Good thread guys.
     
  15. HonestJohn Registered Member

    Messages:
    28
    I would like to see more emphasis on simulating neurons, in preference to rule-based logic, particularly now that more is known about the process of reinforcement that moves memories from short-term to long-term. I'd prefer to provide as few preconceptions as possible, to avoid the trap you identified (the red square). Timing and repetition are important in training our brains, so the AI of the future could incorporate the techniques our brains have evolved.
     
  16. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    I agree, "good thread," and also about "consciousness and desires".

    I also note the changing nature of "desires". E.g. stranded in the hot desert, or having fallen overboard in the middle of one of the great freshwater lakes – two very different weights are required for the desire for drinking water, so it is not feasible to "build in goals and desires" if you want to move away from John's "toy world".

    New subject: the "Chinese Room" shows how hard it is to know whether a machine that passes Alan Turing's test is understanding or only faking it. Surprised no one has mentioned this well-known problem.

    New subject: there are some types of AI "neural networks" (I prefer the term "connectionist networks") that do learn without any period of corrections or "samples" – they self-organise with some "cluster analysis". That is about all I can remember about them, except I think they originated in one of the Scandinavian countries.
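    The self-organising networks Billy T recalls are possibly Kohonen's self-organising maps (developed in Finland). A minimal 1-D sketch of this kind of unsupervised competitive learning, with invented data and learning rates:

```python
import random

random.seed(1)
units = [random.random() for _ in range(4)]   # a 1-D map of 4 units

# Unlabelled inputs drawn from three clusters; no corrections are ever
# given - each input simply pulls its best-matching unit (and that
# unit's neighbours) toward itself, so the map organises itself.
data = [0.1, 0.12, 0.5, 0.52, 0.9, 0.88] * 50
for x in data:
    bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
    for i in range(len(units)):
        influence = 0.5 if i == bmu else 0.1 if abs(i - bmu) == 1 else 0.0
        units[i] += influence * (x - units[i])

print(sorted(round(u, 2) for u in units))  # units settle near the data modes
```

    The same idea scales to higher-dimensional inputs and 2-D grids of units; the "cluster analysis" Billy T remembers is exactly this drift of units toward dense regions of the data.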

    I have read the entire thread. At times John appears to argue against "determinism" by confusing it with predictability. Laplace's pre-quantum-mechanics universe was entirely deterministic (see the start of my attachment below), yet he knew full well that it was completely unpredictable, even before chaos theory and the finite accuracy of measurements proved it so. Ignorance alone made it unpredictable. For example, to take a tragic recent one, no one knew the rupture stress limit of the fault on the east side of the Indian Ocean. I am reasonably confident John knows the difference, but IMHO he needs to be more careful in the use of these terms.

    My view (a very strange one about intelligence, consciousness, free will, "out of Africa" etc.) is in the attachment; but as some have misunderstood me, I will explicitly state that indeterminate things happen when a mixed QM state is forced (by being observed) into a pure (eigen) state. The equations of QM are fully deterministic – the temporal evolution of an unobserved state description (the wave function or the matrix formulation) is exactly predictable, so obviously it is deterministic. The "quantum uncertainty" comes in when the system is observed, but even then the probability of each of the possible observational results is exactly fixed by the original state function. That is, there is no room for genuine free will in any material object, advanced AI machines included. In this sense, John is on a hopeless task – they will never, no matter how developed in the distant future, have human free will. (Read on – I believe we do have it, without gods or miracles etc. – see the attachment for why.)

    I believe (and bet you do too) that I do make real decisions, and do not merely have the illusion that I am making them while in reality the probability of QM has made the decisions at the fundamental level of the complex neurochemistry of molecules in my brain. The attachment shows how this is possible, but you pay a price – you cannot be material (nothing to do with God or spirits or miracles etc. – just a "paradigm shift" in your thinking about yourself is all that is required).

    As the more frequent posters in this thread know more about computers etc. than I do, I would appreciate your opinion on whether the "real time simulation" I speak of in the attachment has any problem with being free from the determinism of even QM. I claim it is possible to have genuine free will, consistent with physics. I am well versed in physics, but defer to your collective opinions about whether or not I could be only part of the complex, nearly perfect simulation of the physical world made by the far-advanced parallel processor (the human brain) that I claim I am in the attachment. (It is a nearly perfect sim in that, like modern movies made pixel by pixel, the laws of physics need not be strictly followed in the real-time sim, nor in Pixar's movies. Evolution has forced it to be "nearly perfect", but occasionally we see things that are not retinal images – hallucinations etc. – see the attachment for three proofs that the accepted view of cognitive scientists is wrong.)
     
    Last edited by a moderator: Feb 24, 2005
  17. HonestJohn Registered Member

    Messages:
    28
    Hi Bill -

    Your take on the mental images we form of our world is fascinating. Also, your analysis of my confused posting is correct. The most amazing thing in your attached article is "we are an informational process in a simulation, not a physical body"! To me, that rings true. But how does this give us GFW (genuine free will)? What I mean is, given that there exists a simulation and we can manipulate it, it does not follow that what we do, and what is done by us in the real world, is GFW. And yet, as you say, most of us think it is.

    Your posting and your article are food for thought indeed.
    Any other references you have would be welcome.

    John.
     
  18. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Thanks for the kind words; more below yours.
    In the attachment, please note that I explicitly state that I cannot prove we have GFW – i.e. I acknowledge that it may be an illusion we all share. My only contribution to the classic debate is to show how it is possible that we might have it, i.e. to show that GFW is not necessarily a violation of physics.

    On your "given that there exists a simulation and we can manipulate it, it does not follow that what we do, and what is done by us in the real world, is GFW...": I think I need to adopt my old scheme of using quotes around me, I, we etc. when I am trying to refer specifically to the subroutine in the real-time sim that is "my" personality, as distinct from my body in the physical world. In this scheme, I am not sure whether in your text the we is we or "we". I think you mean we, just as I have used I in this sentence three times.

    This is what I think: "I" am influenced by my body. E.g. "I" become thirsty or feel pain if my body is dehydrated or burnt. Likewise, "I" can influence my body – "I" am doing it now as "I" use it to strike keys. This relationship to my body is personal and usually unique (Libet and others have done some very interesting direct electrical stimulation of the brain, so under some conditions others can also directly control my body). To some extent, "I" can control other subroutines of the sim, just as a subroutine in one of your hardware machines can make another subroutine "write to memory" etc. Most of the subroutines are beyond "my" control; this control has been prohibited to me by the evolutionary development of the sim. E.g. "I" cannot tell the audio-processing subroutine of the sim – the one that understands the meaning of English words my brain processes as meaningless neural impulses – to "shut down – do not understand". I focus on meaning, as I suspect you will agree that never in any of your machines is there any meaning, only symbol manipulation at best, and usually, in digital machines, only two symbols!

    The problem of meaning and how "I" have it is about as hard as the GFW problem and I do not have much to say about it.

    I am not sure I have been responsive to your concerns about "manipulation" and the exercise of GFW in the physical world. If not, ask again, using the quotes scheme when needed. I will state that since "I" can exercise very considerable control over my physical body (can't fly etc., of course), yes, GFW does exist in the physical world also, but only as a direct consequence of "me" and "you" etc.
     
  19. HonestJohn Registered Member

    Messages:
    28
    Hi Bill,
    First, I'll try to restate my question in the inverted commas style: What I mean is, given that there exists a simulation and "we" can manipulate it, it does not follow that what "we" do, and what is done by us in the real world, is GFW. And yet, as you say, most of "us" think it is.

    I think you have answered me very precisely. I plan to repeat and extend your experiment with the moving dot and the buzzer, to see if we get the same results when the dot is removed as soon as the buzzer sounds. I want to know if the time lag is just a physical function of the ear, or if the sim continues, out of step for a moment with the physical world. I've found a way to collect a fairly large number of people to do the test.

    I wonder whether or not we can prove your supposition by further examination of the surprise effect: does it really trigger a rebuild of the simulation? I hope to devise a test to help support, or not as the case may be, the whole concept, which I find both plausible and attractive.

    John.
     
  20. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    John, thanks, and welcome aboard. I really appreciate your interest and willingness to test some of my ideas. If you permit me, I would like to direct your attention to some prior experiments that relate to neural delay and provide evidence for my concept that the simulation stays in real time by projecting ahead to compensate for these neural delays.

    First, you should be aware of the “Pulfrich pendulum” experiment/effect, as it is a simple and non-controversial demonstration of neural delays. (Sorry, I don’t have a reference, but the name is relatively unique and I have spelled it correctly, so you should be able to search out more detail than I now give.) Basically, you view a common pendulum swinging, but one eye (assume the left one) is looking through a partially transparent glass. Because of this, that eye’s neurons are delayed in processing the weaker signal, so location information from that eye is retarded.

    When viewing the left-to-right part of a transversely and linearly swinging pendulum’s path, the left eye’s image is at a delayed position to the left of the right eye’s active neurons. (The left eye’s neurons needed to integrate the fewer photons for longer to become active.) The locations of these two active neuron sets converge on a point in space more distant than the true pendulum path.

    When viewing the right-to-left part of the swing, the left eye’s active neurons give a location to the right of the right eye’s location, because that is where the pendulum was a small fraction of a second earlier. These two active neural image locations converge on a pendulum bob closer than the true location. The net effect is that the linearly swinging pendulum bob is perceived as swinging in an elliptical arc. I mention this as it is a very simple demonstration of neural processing delay effects.
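    For readers who want numbers, a small-angle estimate of the Pulfrich depth shift can be made. The formula and the example figures below are a back-of-envelope illustration, not values from the original experiments: a processing delay dt in one eye offsets the moving bob laterally by v*dt, which stereo vision reads as a depth shift of roughly dz = v*dt*z/a, where z is the viewing distance and a the interocular separation.

```python
def pulfrich_depth_shift(v, dt, z, a=0.065):
    """Approximate apparent depth shift (m) for bob speed v (m/s),
    interocular processing delay dt (s), viewing distance z (m),
    and eye separation a (m). Small-angle estimate only."""
    return v * dt * z / a

# Example: a bob moving at 1 m/s, with a 15 ms filter-induced delay,
# viewed from 2 m, appears displaced roughly half a metre in depth.
print(round(pulfrich_depth_shift(v=1.0, dt=0.015, z=2.0), 2))  # -> 0.46
```

    Even a delay of a few milliseconds thus produces a depth illusion large enough to see, which is why the effect is such a convenient classroom demonstration of neural delay.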

    Much more important, and confirming of my ideas, is the paper “The Updating of the Representation of Visual Space in Parietal Cortex by Intended Eye Movements” – J.R. Duhamel, C.L. Colby, M.E. Goldberg; Science 255, pp. 90-92 (1992). (All my references are at least 11 years old. I have not kept up with the literature since I moved permanently to Brazil.)

    Parietal neurons were monitored by microelectrodes in a monkey’s brain, and a small light’s location was found to activate them following a learned saccade from a prior fixation point (as I recall the paper). Prior to the practised standard saccade, different parietal neurons (perhaps not monitored) were actively representing the light’s location and the monitored neurons were inactive. During the saccade, the monitored neurons became active IN ANTICIPATION – experimental fact. This is true even if the light is extinguished during the saccade! (Everyone, me included, accepts that we inhibit the “blur” of light that might occur during a saccade. That is, we do not perceive anything new during the saccade.)

    If the light is extinguished during the saccade, these neurons that were activated purely by prediction quickly become inactive following the saccade. The authors of the paper are very hard pressed to explain their observations, which follow obviously from my postulate of a real-time predictive simulation implemented in the parietal cortex. (I have given a few of the many other reasons why I think it is implemented there in the attachment you have read.)

    This is what the authors admit:
    “Parietal cortex both anticipates the retinal consequences of eye movements and updates the retinal coordinates of remembered stimuli to generate a continuously accurate representation of visual space.” If you can, read the whole paper, and you will really know that I am right. My explanation is simple and obvious; theirs is slavishly bound to the conventional theory of visual perception as the “emergent result” of neural transforms. It is LOL material.

    I will be back in touch as soon as I can on your other comments/questions – house guests have just arrived. I have waited 11 years for someone like you; a few days more will not hurt. Thanks again for your real interest.
     
    Last edited by a moderator: Feb 26, 2005
  21. HonestJohn Registered Member

    Messages:
    28
    Hi Bill -

    The Journal of Neuroscience appears to give your conjecture much support in
    http://www.jneurosci.org/cgi/content/full/23/15/6209.

    JD Crawford is very close to discussing a realtime sim when he says 'the brain possesses a sparse but accurate "virtual map" of the visuospatial world, allowing us to seamlessly interact with external space in a way that fools us into thinking that we have a complete internal representation of the world'. You have taken that a fascinating step further.

    What you say is very interesting, and has provoked further research on my part. I will certainly go ahead with the experiments, and keep you posted.

    Clearly there is much for me to catch up on, and I've started doing that already. Your pointers, and the clear way you present your case, have helped a great deal. I like the way you are presenting an alternative view, not arguing a case by rejecting other approaches. I can see the benefit of taking such an approach.

    No more questions just yet – but I can't help wondering about the vast implications of your ideas.
     
Thread Status:
Not open for further replies.

Share This Page