The Computational Theory of Mind

Discussion in 'General Philosophy' started by Nunayer Beezwax, Mar 17, 2010.

  1. Nunayer Beezwax Registered Senior Member

    What does it mean to "think"? This used to be a hot topic within philosophy. Questions in philosophy of mind these days typically involve more abstruse notions such as "qualia", "the explanatory gap", and the ubiquitous "consciousness".

    However, due in part to concerns raised in another thread (Searle's Chinese Room Argument...) and in part for entertainment and hopefully education, I'd like to review one well-established response to our anachronistic original question.

    Most philosophers are sensitive to charges of anthropocentrism: we do not want to limit an activity as neutral as "thinking" to beings which just happen to be this particular contingent result of natural selection on this little speck of a planet. Perhaps animals or aliens should not be immediately eliminated from the group. However, as soon as we remove the specific substrate (human brains) from our formulation, it seems as though perhaps even a machine may qualify. What machine might be fancy enough to pull off something as advanced as "thought"? Well, many suspect it would be the computer. So: can a computer think?

    I believe Turing is entirely right to dismiss the "common sense" version of the question. If one does not have a pre-determined set of necessary and sufficient conditions for "thought", it's a mug's game. And no one has yet come up with a widely accepted non-arbitrary definition.

    Not to mention that even if one did, one would still likely be subject to the (painfully overblown) "problem of other minds": even if we know what thinking is, how do we know that any humans do it? And since humans are the paradigm case of doing...whatever this activity we call "thinking" is, one is not in an enviable position.

    Given this conundrum, Turing proposes instead to deflate the issue, by proposing a test which removes the necessity of defining thought in the first place, instead using a scientific procedure to determine whether an agent can accomplish whatever this ability humans call "thinking" can accomplish. He called it "The Imitation Game"; it has since come to be referred to as the "Turing Test". Turing's original description runs thus:

    Many Turing Tests have been attempted in practice. One such, the Loebner Prize, describes its more modern methodology:

    (No programs have yet passed the test).

    As an introduction to the Computational Theory of Mind: we acknowledge, with functionalism and against anthropocentrism, that whatever our definition of mental phenomena is, those phenomena must be substrate-neutral. This is combined with the mathematical result we call the Church-Turing thesis: any computable function is Turing-machine computable (and all contemporary computers qualify, memory limits aside). And we support the methodology of the Turing Test: if a computer (a formal symbol-manipulation system) can chat successfully with a human, we have no reason to deny it entrance to the charmed circle of the "conscious".
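    To make the Church-Turing point concrete, here is a toy single-tape Turing machine simulator in Python. This is a sketch of my own; the "increment" machine and its transition table are made-up illustrations (a unary successor function), not anything from Turing's paper:

    ```python
    # Minimal single-tape Turing machine simulator (illustrative sketch).
    # The transition table maps (state, symbol) -> (write, move, next_state).

    def run_turing_machine(tape, transitions, state="start", blank="_"):
        """Run until the 'halt' state; return the final tape contents."""
        cells = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Hypothetical example machine: scan right over a unary number's 1s,
    # write one more 1 at the first blank, i.e. compute n -> n + 1.
    increment = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }

    print(run_turing_machine("111", increment))  # unary 3 -> unary 4: "1111"
    ```

    Anything a contemporary computer computes can, in principle, be re-expressed as such a transition table; that is the force of calling computers "formal symbol manipulation systems".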

    It may turn out, in the end, that not only can computers think, but that we are just such computers ourselves.
  3. Sarkus Hippomonstrosesquippedalo phobe Valued Senior Member

    Just out of curiosity, what is the alternative to us being just such computers?
  5. Bishadi Banned Banned


    So should any.
    Each awakens with a condition: to further whatever they have 'to do'.

    The concept is what I work on at the scope of 'the life' itself. And you are correct: 'no one has yet come up with a widely accepted' definition.

    The 'accepted' part is what is tough, no matter how much insight, some environments do not encourage a divergence from benchmarks.

    By the emotions; the physical interactions are often what offer the evidence. For example, it is often easier to see that a person is lying from their physical behaviour than from the words themselves.

    This test removes the emotional avenues through which conscious 'life' interacts with its environment. Limiting it to language alone is why the test is tough to crack.

    It is like trying to figure out the gender and species of an animal just by hearing its breathing.

    'case of doing'................!

    Without 'action', how can you tell if a computer is 'thinking'?

    Notice the reversal of the test: how can that chip be 'thinking' if we cannot observe what effect it is having on its environment?
    Last edited: Mar 17, 2010
  7. Doreen Valued Senior Member

    I found the OP a little confusing. If I understand right, the Computational Theory of Mind is a specific set of ideas about what thinking is in human minds. But the OP seems to be suggesting that we set aside ideas of what is really going on and side with Turing: as long as it functions like the human mind - cannot be distinguished from it - we will refer to whatever it is as thinking.

    So on the one hand we have assertions about what is really going on in minds,
    and on the other a suggestion to bracket off such concerns.
  8. heliocentric Registered Senior Member

    Doesn't it just come down to Wittgenstein's 'beetle in a box'? There's no way to establish whether anything (even a human being) is actually having conscious mental states, or is in fact just doing a very good job of faking it.
  9. Cyperium I'm always me Valued Senior Member

    Yes, we could never prove that a computer actually thinks, because thinking is a subjective experience; we can't prove that any being other than ourselves can think (and even our own thinking we can't prove objectively).

    I was hoping rather for a discussion on why we think.
  10. glaucon tending tangentially Registered Senior Member

    I would think that the answer to that question would be fairly obvious, taking into consideration evolutionary biology....

    Still, feel free to start a new thread.
  11. Nunayer Beezwax Registered Senior Member

    You're right, this post was far too long and confusing; my apologies. If anyone is interested, I propose this thread be a discussion (at least to begin with) of the legitimacy of the Turing Test with regard to determinations of mentality.

    When the question comes up "Does X think?", Turing suggests we replace it with "Can X pass the Turing Test?". I agree. My primary reasons are: (1) "thinking" is undefined, and therefore indeterminable; (2) due to the inherent subjectivity in most notions of thought, it is precluded from scientific examination, whereas the Turing Test gives us a way to inter-subjectively agree upon an answer which provides all the important, relevant, legitimate content the question can hope for.
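    The inter-subjective part can be sketched very simply. The scoring scheme below is a made-up illustration of my own (real competitions such as the Loebner Prize have their own detailed rules); the point is only that judges' verdicts, unlike private "thought", are publicly countable:

    ```python
    # Sketch of inter-subjective Turing Test scoring, with made-up data.
    # Each judge labels a chat transcript "human" or "machine"; the
    # machine's score is the fraction of judges it fooled.

    def pass_rate(verdicts):
        """Fraction of judges who labelled the machine 'human'."""
        fooled = sum(1 for v in verdicts if v == "human")
        return fooled / len(verdicts)

    # Hypothetical verdicts from five judges after unrestricted chat:
    verdicts = ["human", "machine", "human", "machine", "machine"]

    rate = pass_rate(verdicts)
    print(f"fooled {rate:.0%} of judges")          # fooled 40% of judges
    print("passes" if rate >= 0.5 else "fails")    # fails
    ```

    Whatever threshold one adopts, the answer is a matter of public record rather than private intuition, which is all the scientific respectability the question needs.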
  12. Doreen Valued Senior Member

    Sounds good.

    In situations where the skills of an entity are at issue, I think the Turing Test - or any skill-based test - is the proper approach to deciding the matter. I hesitate on issues where we start granting sentience. If we encounter an already-present organism - such as a newly found mammal in the Amazon - and it passes various Turing-like tests for intelligence and thinking, it strikes me as logical to assume, until further notice, that it, like other mammals, is capable of certain kinds of thinking.

    If someone designs a device that can pass a limited Turing Test, I would be hesitant to grant that the device is thinking in certain contexts.

    Let's say it was an emergency-room doctor robot that engaged in the whole process of diagnosis - examining the patient, interviewing the patient, diagnosing and then treating - and it did this at a professional level; I would be happy to say it is thinking. If someone suggested granting it citizenship, I would get antsy. If a 'robot/computer' could pass a full Turing Test - IOW 'pass' as human in a fuller range of human interactions - I would still have my qualms. Though it would also bother me if people treated such a 'device' as merely a machine - for example, throwing it away after harvesting its parts when a new model comes out. Here I would be a troubled agnostic, hesitant to grant them equal status to humans and distressed if they were treated like incredibly expensive toasters. I can also imagine that if sufficiently clear explanations of how its 'mind' works could be given to someone as ignorant as I would no doubt be about the technology involved, this might affect how I viewed its sentience, or the likelihood that it is an experiencer.

    I hope this is not running off on a tangent. It just seems like the context and implications of the judgment 'it can think'
    affect my determination.

    In practical terms, connecting something to specific tasks, I think the Turing Test is the way to go.
  13. Nunayer Beezwax Registered Senior Member

    "various" Turing Tests? There is one Turing Test. It is described in the OP and many places online.

    "like tests for intelligence and thinking"? The Turing Test is explicitly not "for thinking". The response is this: Until anyone can provide a definition of thinking which makes it amenable to science we ought to ignore it as an issue. Instead, let us take a look at what we really want to know when we ask that question and (so goes the claim) the Turing Test gives us that information.

    I don't know what a "limited Turing Test" is. There is only one TT.

    I do not see how any of this is relevant to discussion of the TT. No one has claimed "If something passes the TT, then it can think." No one is making claims to citizenship. I don't understand why that "tangent" was included...
  14. noodler Banned Banned

    Suppose you could build a computer and "teach" it rules of logic, so it responded logically to any question or input statement.

    Would such a machine be able to decide that any statement that was input or output, by logical process, was more than formally a proposition? Would a logical machine be able to decide more than if a given statement is true or false? What would this "more" correspond to--an awareness that it was able to make such decisions...?
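    A machine that "responds logically" in the minimal sense is easy to build; whether deciding truth and falsity amounts to anything "more" is exactly the open question. A toy illustration of my own, piggybacking on Python's `and`/`or`/`not` to brute-force truth tables:

    ```python
    # A machine that decides whether a propositional statement is true,
    # false, or contingent, by checking every assignment of truth values.
    from itertools import product

    def classify(formula, variables):
        """Classify a formula (written with Python's and/or/not) as a
        tautology, contradiction, or contingency."""
        results = []
        for values in product([True, False], repeat=len(variables)):
            env = dict(zip(variables, values))
            results.append(eval(formula, {}, env))
        if all(results):
            return "tautology"
        if not any(results):
            return "contradiction"
        return "contingent"

    print(classify("p or not p", ["p"]))    # tautology
    print(classify("p and not p", ["p"]))   # contradiction
    print(classify("p and q", ["p", "q"]))  # contingent
    ```

    Nothing in this procedure requires, or exhibits, any awareness that a decision is being made; that gap is what noodler's "more" would have to fill.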
  15. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Usually the Computational Theory of Mind, CToM, implicitly and often explicitly contains an aspect which I think is false at least for human minds. To explain, I will first quote from Wiki’s section on CToMs:
    “…the mind computes input from the natural world to create outputs in the form of further mental or physical states. A computation is the process of taking input and following a step by step algorithm to get a specific output. …”

    Certainly in the human case one of the most frequent “output mental states” is a perception of the so called “external world.” (I am only assuming / inferring it exists because I have this perception of it whenever I am awake. It may not as Bishop Berkeley discussed long ago.)

    The CToM, especially as elaborated by the Churchland husband-and-wife team, has the neural signal inputs created by sensory contact with the external world successively "computationally transformed", mainly in the brain, until eventually our perception "emerges." There is no supporting evidence for such "emergence" of our perceptual experiences. In fact, all of the neurological evidence is against that possibility. Consider visual processing:

    Signals from the retina come via the LGN to the "visual cortex" (V1) in the posterior of the head, where certainly they are computationally transformed, and separate aspects are then sent to other PHYSICALLY SEPARATE regions of the brain. For example, aspects relating to motion are further processed in V5, and color aspects are processed in V4. Etc. for all the sensory inputs - they are dissected into "characteristics" which are then sent to physically separate parts of the brain for further computational transforms.

    Never do these separate aspects / characteristics come back together, as far as is known by neurological research. YET we experience a unified perception of the external world.

    To illustrate simply how serious this is for the CToM, imagine that a white dog is chasing a bouncing yellow tennis ball. "White" and "yellow" are represented by neural activity in V4, and bouncing and running by neural activity in V5. They never come back to the same neural tissue, yet somehow we perceive a white dog running after a bouncing yellow ball, and not a yellow dog running after a bouncing white ball, nor a bouncing yellow dog running after a stationary ball, or several other ways these four characteristics could be perceived. That even one of these possibilities is perceived as an integrated / unified object is a mystery for the CToM, if you know anything about how neural processes / computations are done in widely separated sections of the brain.
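    The combinatorics of my dog-and-ball example can be spelled out mechanically. This little sketch (my own illustration, not a model of any actual neural code) just enumerates the candidate bindings that the separated channels leave open:

    ```python
    # The binding problem in miniature: if colour and motion are coded in
    # separate channels (as in V4 and V5), the channel contents alone
    # underdetermine which feature belongs to which object.
    from itertools import permutations

    objects = ["dog", "ball"]
    colour_channel = ["white", "yellow"]     # which object has which colour?
    motion_channel = ["running", "bouncing"]  # which object has which motion?

    scenes = []
    for colours in permutations(colour_channel):
        for motions in permutations(motion_channel):
            scenes.append(" + ".join(
                f"{c} {m} {o}" for c, m, o in zip(colours, motions, objects)))

    for scene in scenes:
        print(scene)
    # Four candidate scenes; only "white running dog + yellow bouncing ball"
    # matches what we actually, and reliably, perceive.
    ```

    The puzzle for the CToM is to say what computation selects the one correct binding when the feature signals never reconverge.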

    No wonder that, when it comes to the most common of human experiences, perception, the CToM is reduced to the hand-waving phrase "perception emerges" as the end result of many computational transformation stages of neural processing of the initial sensory inputs.

    I have suggested a solution to this dilemma, which has as a by-product the possibility that genuine free will is not necessarily inconsistent with the natural laws that control the firing of every nerve in your body. See it at:

    Now a brief comment on Turing Test’s relation to thought:

    The TT is demonstrated behavior and thought, whatever it is, is not behavior. They are less related than apples and oranges. Probably some day a computer will pass the TT, at least in a limited field of knowledge. That does not constitute a demonstration of thought – only of human like behavior. I.e. a knowledgeable zombie (The term’s meaning in psychology & cognitive science, not in Haiti.) with neither feelings nor thought. Only human like behavior.
    Last edited by a moderator: Mar 20, 2010
  16. Nunayer Beezwax Registered Senior Member

    Thanks for the great post Billy T. Some comments:

    I understand that many people seem to think that this is a criticism of the TT, but I just don't feel its force. I remind you: the TT is not a test for thought. Those of us who are proponents of it are ready to admit that it is behavioristic. Our positive point is: anything which passes the TT can do something very important. Maintaining an interesting and unbounded conversation with a human being is not a trivial matter (emphasized by the failure of even modern computers to come close to passing!).

    As I mention to Doreen, there is no "TT in a limited field of knowledge". There is only one TT, and it is unbounded in content.

    The challenge we present to you: define "thought" in a way which is both robust in content and such that you can scientifically demonstrate that something exhibits it. Avowal does not count; a computer could do that as well. If you cannot do that (and no one yet has), we would like to have some scientific test to determine something interesting, even if what we are determining is not the presence of thought. That test is the TT.

    I have things to say about your other comments, but I am going to restrict myself to the TT related issues here and now...
  17. Doreen Valued Senior Member

    I was stretching the idea so that a variety of types of behavior could be tested. In my example, several emergency-room doctors and one AI interview and diagnose a patient - being told what they would find with various hands-on tests. I think it is likely that machines will pass such more limited tests before they pass a full Turing Test. Though I get your point. I would say the question becomes "Can the device X?" rather than "Can it think in this area of knowledge?"

    But the test is used in relation to Artificial Intelligence, the latter word as undefined as 'thinking' and closely related to it.
    Even Turing, here,
    leads from the use of his version of the question back to the original question 'Can machines think?' and guesses, incorrectly, that by now we would already be speaking, without controversy, about machines thinking. So his question is a stopgap proposal for the interim.

    If we take the Turing test as merely testing to see if something can pass the Turing test, we have a tautology.

    I certainly could have been clearer and I should have left out 'thinking' until later in my argument.

    But what does it mean to say using the Turing test and the new version of the question is a good idea?

    A good idea in what context? How do we look at the results of using the Turing Test and that question, and evaluate whether it is good?

    I would say by the use both positive and negative answers are put to. I see two possible ways those answers will be used: 1) a purely skill-based one - people will not know, via indirect media, that they are dealing with a machine;
    2) the machine is in some essential way equivalent to a human.

    I guess I could pretend that there is no context - that Turing Tests are not a new way of determining intelligence by bracketing off, temporarily, this idea of equivalence.

    But if the Turing Test leads only to the conclusion that something can pass the Turing Test, it is rather anti-climactic, to say the very least, when one gets a winner.

    EDIT: which is why I think focused, specific and limited kinds of Turing Tests will be, and really already are, used. If we find out that a machine is indistinguishable from an emergency-room doctor - as tested by experts - we might only need nurses on staff. At this point we can keep the behavioral model up AND draw conclusions that are both useful and behavioral.
    Last edited: Mar 20, 2010
  18. Nunayer Beezwax Registered Senior Member

    "I'm writing a book on Magic." "Oh yeah? You mean like fortune telling and casting spells and all that?" "No no, I mean tricks, illusions, 'street magic' that kind of thing." "Well, that's disappointing, that stuff isn't even magic."

    Isn't it strange that the "magic" many people are looking for is mystical, strange, unexplainable, and complete hokum? Whereas the interesting "magic", the tricks and illusions, the magic that we can actually do is "not real magic"?

    You have some work to do to convince me that this feeling you have, that passing the TT is somehow trivial, is anything different from that of the astrologer who doesn't appreciate the (currently fictional) software which can pick successful stocks.

    When I hear this kind of talk, it seems both vague and anthropocentric. For some reason, many people have an emotional response that if something doesn't do the task in the right way it doesn't count. And all this without knowing anything about what that "right way" is! This tells us something potentially interesting about the psychology of those individuals, but little more.

    I reiterate: Until you have a definition of "thinking" and a scientifically respectable decision procedure to determine its instances, how is resistance to the TT anything more than stubborn anthropocentrism?

    Other than distaste for the word "essential": yeah that's what we're after! Why would one diminish this?
  19. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Thanks. I hope you will read the link in my post, and perhaps comment on it. (Both the basic concept, that perception is by a Real Time Simulation, RTS, in the parietal cortex, and the corollary that follows, which at least makes it logically possible for genuine free will to exist consistently with natural laws, without postulating a non-material "spirit" which can violate those laws.) I can tell from the OP that you are well versed in this field, so I would like to learn your reaction to my essay on the RTS and Free Will also.
    I certainly will not attempt to define thought, in part because I suspect it does not really exist. I.e. what we call "thought" is really just ill-understood neural processing of information, which in principle could exist in a computer.
    We agree that the TT has nothing to do with "thought," whatever that may be, and that it is an interesting challenge to computer scientists and others, especially those concerned with linguistics; but so is the development of a chess- or Go-playing computer that is as good as human experts, or a program that can translate text from one language to another as well as a human fluent in both can. I think the special attraction of the TT (as opposed to many other challenges to computer program designers) is precisely due to the common, but false, assumption that a TT-passing program is capable of thought.

    I think you are being too narrow (and somewhat inconsistent) in rejecting the concept of "limited TTs." True, Turing's test as he stated it was not limited to any field of knowledge, but it is also of essentially zero interest today. It had three participants. All modern TTs have only two, so it is inconsistent for you to allow this 3-to-2 change in the TT and yet reject any modification which limits the examiner's questions to part of the field of human knowledge. For example, why can there not be a "limited TT" in which the questions asked are limited to psychology, or physics, etc.?

    I think, if memory serves me correctly, that in at least one such limited field of knowledge a machine has passed the TT and its designers received the promised prize. I forget the details.

    Good; as stated earlier, I hope you will (and on the RTS & Genuine Free Will essay of the link).
    Last edited by a moderator: Mar 20, 2010
  20. Blindman Valued Senior Member

    It would be interesting to see how many humans would fail the Turing Test if computers regularly passed it.

    Intelligence is not just human-like. The Turing Test is old school.

    AI relies on anthropomorphism, and in the strictest definition it is human-like intelligence. For the layman, and some philosophy experts, AI must be human-like.

    We must step past this and understand that intelligence has many forms not part of the human mind (condition).
  21. noodler Banned Banned

    I found an interesting article on symmetry groups and neural signaling. Neurons are cells, and groups of cells correspond to mathematical groups--so that signaling in cellular networks is mathematical.

    The other aspect of brain function which is quasi-symmetrical (my term) is chaotic signaling.

    Chaos appears to be a requirement for cooperative activity, and this relates directly to a kind of "tuning"; recent discoveries have shown that adding a bit of chaos to a circuit can improve its efficiency.
  22. Nunayer Beezwax Registered Senior Member

    I plead not guilty on this one. The reason that the move from 3 participants (in Turing's "Imitation Game", the phrase "Turing Test" was not used until later as far as I know) to 2 can be allowed is that it is an irrelevant parameter. Given any experimental situation, there are factors which are relevant and others which are not. For an example, chosen to be obvious for clarity, the color of the lab coat a physicist wears when he or she observes the readings of his or her instruments at CERN, is irrelevant to the scientific work; while perhaps contamination of the vacuum chamber may be.

    For the Turing Test to achieve its objectives, the number of participants the judge(s) chat with is irrelevant (strictly, that is), two is chosen as a practical matter of experimental design to maximize the effectiveness. But there is nothing wrong, in principle, with conducting the test with 2, 10, or 100 participants.

    We could, of course, employ whatever specific conversational boundary conditions anyone happened to be interested in. I would argue, though, that these tests would differ not in degree but in kind. Turing Tests are based on unrestricted conversation because this is (justifiably?) considered the most sophisticated of our abilities. What the Turing Test is supposed to prove is that computers can do whatever it is we do when we do what some call "thinking".

    A claim could be: Any program which passed a bounded test would be closer to a chess automaton than a human being, and any program which passed an unbounded Turing Test (redundancy aside) would be closer to a human being than the chess program in regards to cognitive capacities.

    As I re-read this, I fear it may lead to misinterpretation, but I am at a loss at this moment to figure out the best way to head this off, other than to baldly state my bona fides as a naturalistic monist. There is no dualism here, I happen to be confident that human "thought" is a computational (information processing) activity, as it seems you do. But I suppose I'll leave it here and see what trouble I've gotten myself into this time!
  23. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Give a link. I find this very improbable, for several reasons. One reason is that there are no distinct groups of neurons. Almost all information transferred within the brain is by distributed coding. I.e., the same neuron forms part of the distributed coding of hundreds, if not thousands, of different information transfers. Adding one more neuron (or losing one) that participates in, say, moving the index finger of the right hand is of no consequence, as that "move finger" command is made with distributed coding. Not all the same neurons will fire the second time you move that finger, nor will it matter if one dies.

    SUMMARY: There are no well defined groups or set of neurons in the sense required to apply group theory.
