The Toad on the Stove Argument Against Computer Consciousness

Discussion in 'Intelligence & Machines' started by Prince_James, Oct 29, 2006.

Thread Status:
Not open for further replies.
  1. Prince_James Plutarch (Mickey's Dog) Registered Senior Member

    Messages:
    9,214
    An argument I developed recently:

    There are two worlds. One world is identical to ours, where "the apple is red" means "the apple is red" as those words are normally taken. The other world differs in that "the apple is red" means "the toad is on the stove". In both worlds there are humans and AI.

    Let us analyze what would happen in each world internally for humans and AI.

    Humans -

    In Normal World: The "apple is red" would make someone think of "red apples".

    In Toad World: The "apple is red" would make someone think of "toads on stoves".

    Computers -

    In Normal World: The "apple is red" would be evaluated on the level of syntax and computed in accords with the computer's rules. A response would be given in accords with the syntax.

    In Toad World: The "apple is red" would be evaluated on the level of syntax and computed in accords with the computer's rules. A response would be given in accords with the syntax.

    Because syntax is insufficient to provide semantics, because humans think semantically, and because computers, even across different worlds, are evidently incapable of thinking of different things, humans and computers are not similarly conscious. Indeed, one could go so far as to say that until semantics can somehow become meaningful to computers (a task that seems impossible), computers are not conscious at all. Furthermore, a semantic computer would cease to be a pure Turing machine.
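
    To make the computer side concrete, here is a minimal, purely illustrative Python sketch of the kind of syntax-only responder the argument has in mind (the rule table and reply strings are invented for the example). It matches token patterns and nothing else, so it behaves identically in Normal World and Toad World:

        # Toy syntax-only responder: it matches token patterns and emits a
        # canned reply. It has no access to what any token "means", so its
        # behaviour is the same no matter which world it runs in.
        RULES = {
            ("the", "apple", "is", "red"): "Noted: 'the apple is red'.",
        }

        def respond(sentence):
            tokens = tuple(sentence.lower().split())
            # The lookup is driven entirely by the token sequence (syntax).
            return RULES.get(tokens, "I have no rule for that input.")

        # Identical behaviour in both worlds, whatever the sentence is taken to mean:
        print(respond("The apple is red"))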
     
  3. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    But surely in the alternate world, the computer would have been programmed (or in the case of true AI, would have learned from experience) that 'the apple is red' should get it to access information about toads and stoves, rather than colours and fruits?

    That is, the humans in different worlds would have had different life experiences, so I think it's fair to say the same of the computers, in which case they wouldn't react identically.

    How do you know a Turing machine couldn't emulate the same level of 'semanticism' humans exhibit?
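
    A toy sketch of that point (the association tables are invented for illustration): the same utterance would retrieve whatever associations the system had been given or had learned in its own world, so the two systems would not react identically, any more than the two humans would.

        # Toy sketch: the same sentence retrieves different learned associations
        # depending on which world's "experience" the system was trained on.
        NORMAL_WORLD = {"the apple is red": ["apple", "red", "fruit", "colour"]}
        TOAD_WORLD = {"the apple is red": ["toad", "stove", "kitchen"]}

        def recall(utterance, learned):
            # What gets accessed depends on prior training, not on the string alone.
            return learned.get(utterance.lower(), [])

        print(recall("The apple is red", NORMAL_WORLD))  # apple/red associations
        print(recall("The apple is red", TOAD_WORLD))    # toad/stove associations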
     
  5. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    Did you really develop this argument, or did you steal it from Searle?

    I'm not well versed in this stuff, but I don't think that it is at all clear that algorithms can't contain semantics.
    I don't see a necessary fundamental difference between the way neurons and transistors manage the internal storage of concepts (apple, toad, stove, red, colour, relative position) and linking those concepts to words ("the apple is red").

    For both humans and computers, there is syntactical processing (the words "the apple is red" are heard and parsed) and semantic processing (the recollection of the internal representation of the toad, the recollection of the internal representation of the stove, and the construction of the internal representation of one being on the other).
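
    As a rough, purely hypothetical sketch of that two-step picture (the memory entries, field names and the tiny two-concept "grammar" are invented for the example):

        from dataclasses import dataclass

        @dataclass
        class Concept:
            name: str
            features: list   # stored sensory/factual features

        # Internal representations, however they got there (learning, programming, ...).
        MEMORY = {
            "toad": Concept("toad", ["animal", "green", "croaks"]),
            "stove": Concept("stove", ["appliance", "hot", "found in kitchens"]),
        }

        def understand(sentence):
            # Syntactical step: the words are parsed.
            words = sentence.lower().strip(".").split()
            # Semantic step: recall internal representations and compose them.
            known = [MEMORY[w] for w in words if w in MEMORY]
            if len(known) == 2:
                return ("on", known[0], known[1])   # one thing being on the other
            return None

        print(understand("The toad is on the stove."))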
     
  7. Prince_James Plutarch (Mickey's Dog) Registered Senior Member

    Messages:
    9,214
    Pete:

    This is my own argument, although Searle provided inspiration. The Chinese Room is one of the best constructed philosophical arguments I've ever read and I wanted to expand on a similar theme, this time showing how the same letters and words produce different results in humans and AI.

    The idea is that a word contains no information as to what it means (excluding etymology, which is a different matter altogether). That the word "apple" means the red/green fruit is something that cannot be determined from the syntax alone. It requires reference to the semantics.

    Moreover, there is a question as to where any such concepts are found in neurons at all. How does one encode a neuron so that it can consider "apple"? Which, to make matters worse, is a category, not a single object. In Searle's Chinese Room we see that problem arise when the semantic content is exterior to the syntax (the programmer of the Chinese Room knows Chinese, but the attendant inside does not).

    Well, if semantics is not found in syntax, and computers work on pure syntax, how then can a computer make such internal representations, too?

    Zephyr:

    I'll respond to yours when I come back to the computer.
     
  8. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    No neuron considers an apple, but the collection of neurons we call a brain clearly does.

    The word "apple" is a symbol. In both human and artificial brains, it is linked to some internal representation of an apple (or toad, as the case may be).

    It is true that the syntactical attendant does not understand Chinese. But it seems to me that the system of attendant plus program clearly does understand Chinese. Isolating the attendant is misleading, much like isolating the CPU of a computer. The computer is more than its hardware.
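
    A toy picture of what I mean (the "Chinese" entries are invented romanised placeholders, and a real rulebook would of course be vastly larger): the attendant is a dumb dispatcher, the rulebook is inert data, and the conversation only exists when the combination runs. The point is only to show the separation of roles, not that this toy understands anything.

        # The "attendant" only matches shapes and copies out replies; it has no
        # idea what either string is about. The conversation is a property of
        # the attendant-plus-rulebook system, not of either part in isolation.
        RULEBOOK = {
            "ni hao": "ni hao! ni hao ma?",
            "xie xie": "bu ke qi",
        }

        def attendant(message, rulebook):
            return rulebook.get(message, "qing zai shuo yi bian")

        print(attendant("ni hao", RULEBOOK))
        print(attendant("xie xie", RULEBOOK))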

    It is not clear to me that computers work on pure syntax. Perhaps I don't understand what you mean.
     
  9. Prince_James Plutarch (Mickey's Dog) Registered Senior Member

    Messages:
    9,214
    Zephyr:

    Two responses depending on the type of system -

    AI chatbot-like system: No information would be accessed beyond what is needed to produce a plausible response to the statement. Therefore, nothing toad-related would be considered.

    Information-responding AI: You are correct in affirming that the answer would include content referencing "toad" as an internal target for references. Yet what is important here is to focus on the syntax, for the syntax remains the same no matter what meanings the semantics imposed on the system have allowed certain words to carry, and every word, regardless of meaning, is treated the same code-wise (syntactically).

    The definition of semantics precludes its expression in syntax. A Turing machine works on a formal system. Formal systems, whilst they can be given a semantics, have no semantic quality themselves. Chess is based on a formal system, but the formal system does not have chess.
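
    For example (a toy illustration, not anyone's actual chess program), a rook's move can be stated as a bare formal condition on coordinate pairs; the rule itself carries nothing of what chess is about:

        # A chess rule reduced to a bare formal condition on coordinate pairs
        # (ignoring other pieces on the board, for simplicity). The predicate
        # contains nothing "chess-like" beyond the constraint itself.
        def legal_rook_step(frm, to):
            (f1, r1), (f2, r2) = frm, to
            return frm != to and (f1 == f2 or r1 == r2)

        print(legal_rook_step((0, 0), (0, 5)))  # True: same file
        print(legal_rook_step((0, 0), (3, 4)))  # False: neither file nor rank shared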

    Pete:

    Yet the question is: What aspect of the collective neurons gives rise to this capacity?

    And that, too, is a question: What gives rise to this internal representation and its qualia experience?

    Well, here is something: Anything relating back to qualia could not be answered honestly by the system. "Does wonton soup taste good to you?" could not be answered by this system without a complete fabrication of the answer. Moreover, much as in the Toad on the Stove case, neither the person nor the system has any care for the meaning. It answers as a system, not on the basis of the meaning of the words.

    All formal systems are devoid of semantics. Computer programmes are formal systems...
     
  10. Pete It's not rocket surgery Registered Senior Member

    Messages:
    10,167
    "Emergent phenomena" is the throwaway (but possibly correct) answer, but really, I don't know. I think nobody knows, yet. What matters is that the collective neurons do have that capacity, and does have qualia experience.

    I disagree. The honest answer would be "I don't know, I don't have the capacity to taste." Or if the system did have some taste-sensory input, then it seems to me that the qualia of the taste of won-tong soup perhaps could be experienced by a sufficiently complex system.

    That's the same thing in different words. Why must a formal system be devoid of semantics?
     
  11. iam Banned Banned

    Messages:
    700
    The thing is, they can't feel, and their consciousness will always be limited to a program, even if an extremely complicated one. They could exhibit behaviour, make choices, or mirror us, but there is no awareness of individuality, and that can't be programmed, though this isn't practically limiting. We only understand the processes of how we function, but we really don't know what makes us tick. Organic life can make things up as it goes; a machine can only go through the motions, it cannot create. Until we can unravel how our own consciousness is formed, like peeling layers off an onion, it's not possible, because all we've witnessed is the effect, not the affect.
     
  12. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    iam:

    You make a good argument for why WE cannot build a program right now which has true consciousness. However, are you also asserting that this is an argument for why NO program CAN exist which exhibits true consciousness? If so, I think you are falling prey to the fallacy that there is something magic or special about the wetware in our heads. I see no reason why the interconnected little machines, known as neurons, can do something special that we could not build another machine to do. I admit we have no idea HOW to construct this machine, but I assert there is no physical reason it could not exist.

    -AntonK
     
  13. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    Exactly. Maybe formal systems can't 'understand', but in that case if the brain is a simulation in the formal system of physical laws, then brains can't 'understand' either.

    Maybe humans are just good at faking understanding. It can't be too difficult, because they only have to fool other humans, who don't really understand either.


     
  14. eburacum45 Valued Senior Member

    Messages:
    1,297
    But it fails to examine the situation fully, and is therefore misleading.
    If you have a system which 'understands' the language in which questions are put to it, and can respond to questions in that language in a way which passes the Turing Test by consistently convincing someone else that it is conscious, then that system is conscious. It doesn't matter whether there is a Chinese Room at the heart of it or not.

    If it looks like a duck, walks like a duck, and quacks like a duck, it is a duck.
     
  15. Roman Banned Banned

    Messages:
    11,560
    I agree. Can anyone demonstrate that they're comprehending at some level above a sufficiently advanced AI?
     
  16. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    My personal belief is that it may be impossible for us to conceive of what a "higher" intelligence might be. We can, of course, imagine better versions of what we have: faster reflexes, better memory, faster memory, better processing, etc. But in the end, it's still just the same as what we do.

    -AntonK
     
  17. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Perhaps you should examine how we learn to get a better understanding of how to make a machine learn.

    When we come into the world we don't know the word "Red", let alone "Apple"; those words aren't defined for us until those we rely on teach us them. For instance, a mother might first teach the verbal assertion of what "Red" is and what an "Apple" is. The assertion will be repeated occurrences of combining the words with their correct syntax in use.

    During this time the neural pathways are new; they form connections with the multiple stimuli that arrive from our five senses. If one mother says "Apple" in the tone one would use when speaking to an infant, you'll find that infant will pick the word up far better than an infant whose mother uses a more aggressive tone. (You can see how tones are reflected when you look at how animals respond to auditory commands: the frequency of the word is one thing for memory, but the tone can also define what they believe that word to mean. For instance, you could sit with a dog and say you're going to do loads of horrible things, but make it sound as if you're patting them on the head and saying "good doggy".)

    We can see the colour "Red". We might see, taste, touch and smell an "Apple", and while eating it we'll also hear a crunch.

    Later, when we hit Kindergarten or Primary School, we are taught more about the colour "Red" and what an "Apple" is. We are first pre-emptively taught about alphabetic letters and sounds; the words we know are composed of phonetic sounds.

    Tuition is again apparent as we are taught, through visual and auditory prompting, to learn the written form of an audible statement.

    I feel the main problem with how people view writing a working AI is that they just want to click their fingers and make it work like a human, as a kind of immediate return on their prospects, but it's not as simple as that.

    We take years to attempt to master our language and accumulate knowledge, and even then some of the time we can be wrong; we aren't infallible. Yet we conclude there are prospects of making an AI infallible if a few points are taken in.
     
  18. Kron Maxwell's demon Registered Senior Member

    Messages:
    339
    I agree with Stryder. Forming an intelligent mind, whether artificial or natural, takes long periods of development. An AI would LEARN what 'red', 'apple', 'toad' and 'stove' mean. Just look at humans: how can a person who was born blind draw logical conclusions about objects' colours?
     
  19. mackmack Registered Senior Member

    Messages:
    123
    I tap into consciousness and what it is on my site, but let me just explain it briefly. Consciousness is something that took twenty-something years to form. All the data coming into the human being is collected, and the brain averages all of that data. The end result is consciousness.

    Another form of consciousness is association. When an object is observed, the host activates the other objects that the focused object is associated with. For example, if I say "George Bush", the objects that have an association with George Bush are: the War on Terror, president, etc., the things that we experience around the object "George Bush".
     
  20. Kron Maxwell's demon Registered Senior Member

    Messages:
    339
    I agree. Also, the phenomenon we know as 'mind wandering' is an emergent product of association. We can originally think of George Bush, then the War on Terror, then Al Qaeda, then Arabs, then Arab bread, then bread, then yeast....
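
    A toy illustration of that chaining (the association graph below is invented from the example above): wandering can be pictured as repeatedly hopping to a stored associate of the current thought.

        import random

        # Invented association graph: each concept points to things experienced around it.
        ASSOC = {
            "george bush": ["war on terror", "president"],
            "war on terror": ["al qaeda"],
            "al qaeda": ["arabs"],
            "arabs": ["arab bread"],
            "arab bread": ["bread"],
            "bread": ["yeast"],
        }

        def wander(start, steps=6):
            thought, chain = start, [start]
            for _ in range(steps):
                options = ASSOC.get(thought, [])
                if not options:
                    break
                thought = random.choice(options)
                chain.append(thought)
            return chain

        print(" -> ".join(wander("george bush")))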
     
  21. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    I'm sure that's been touched on by one or two books under the term "Tangent Adaptive Networking".
     
  22. mackmack Registered Senior Member

    Messages:
    123
    Not even a psychologist can define what consciousness is; what makes you think that a book about AI can tap into that? I explain what consciousness is and how to build it; that is the intent of my site.

    Neural networks are association-type programs that link one object to another in the network. But the data of the relationship is hidden from the user and can't be seen or manipulated.

    The output is fixed too, and has to be manually fed by a user, while the brain learns by examples through the five senses. This limits the neural network in terms of adaptability, self-learning, and storage. The Universal artificial intelligence program is built somewhat like a neural network, but it gets rid of the three problems that a neural network faces:
    1. adaptability
    2. self-learning
    3. storage
     
  23. kmguru Staff Member

    Messages:
    11,757
    More like what we will do... but saying it's the "same" is like saying we have been doing the same thing for the last 40,000 years! Are we?
     