Command language of thinking

Discussion in 'Human Science' started by baftan, Jun 14, 2010.

  1. baftan ******* Valued Senior Member

    Messages:
    1,135
    When we type “appl” using sophisticated software, the programme underlines the word and offers alternatives such as “apple” or “apply”. We know that “appl” is not in the dictionary and that the programme is designed to provide similar alternatives. We also know that when we type any word, it is not the word itself that travels through the transistors; it is converted into binary numbers by software.
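    The spell-checker behaviour described above can be sketched in a few lines of Python. This is an illustrative toy, not how any particular program actually implements it: the word list, the distance threshold, and the function names are all assumptions for the example.

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary, max_distance=2):
    # return dictionary words within max_distance edits, closest first
    candidates = [(edit_distance(word, w), w) for w in dictionary]
    return [w for d, w in sorted(candidates) if d <= max_distance]

print(suggest("appl", ["apple", "apply", "banana", "application"]))
# ['apple', 'apply']
```

    Real spell-checkers use cleverer data structures, but the principle is the same: “appl” is compared against stored words as data; it is never understood as a word.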

    Here is the thought: could a similar process be going on when we think? When we think of “apple”, how does it travel through the brain so that neurons can read it? There are two possible ways:

    1. “We have apple neurons.” As soon as we think of an “apple”, the relevant apple neuron(s) are activated. I find this possibility utterly useless and stupid, as it would require a separate neuron for every single thing, concept, word, etc. Not only that, we would need separate neurons for every possible form of an apple (a red one, a green one, a bitten one, or one that inspires us towards gravity; practically endless).
    2. “We have inner translators that decode the representation of a thought.” So thinking of an apple (or anything else, for that matter) will evoke different pieces of more elemental information (roundness, edibility, fresh/old, colour, taste, etc.) as well as contextual (an apple to sell or an apple to eat?) and conceptual (Apple the computer brand or apple the fruit?) steps.

    If we suspect the second way, we must also ask this question: could thoughts also be translated into some other type of codification (such as the binary codes of a computer system) before they are processed by neural activity? In other words, does “apple” reach the neurons in a totally unrecognizable form of representation? Logic gates wouldn’t understand anything from “apple”; they require binary codes. Maybe neurons (which work with chemistry) likewise can’t work with concepts, and therefore require a different type of symbolic process language, binary or not.

    And maybe this middle language is more than just a translator between neurons and thoughts: as we know from computers, the entire transistor architecture is designed around logic gates that interpret 1s and 0s, not around what we type on the screen. Maybe the neural architecture is likewise built to make sense of its inner language rather than the concepts of the mind.

    Has anyone heard of such a language for the brain?
     
  3. Bebelina kospla.com Valued Senior Member

    Messages:
    5,036
    Association.
     
  5. baftan ******* Valued Senior Member

    Messages:
    1,135
    An association neuron (interneuron) "is a multipolar neuron which connects afferent neurons and efferent neurons in neural pathways" (Wikipedia). That sounds more like a neural junction than a language between neurons and thoughts.

    Assuming you didn't mean anything else by that word, of course...
     
  7. Bebelina kospla.com Valued Senior Member

    Messages:
    5,036
    The language between neurons and thoughts; a word for it: signals?

    I meant association as in the original meaning.

    But neurons and thoughts are so tightly intermingled; is it even possible to separate their communication?
     
  8. baftan ******* Valued Senior Member

    Messages:
    1,135
    I am not asking for a word; I am asking about the nature of this communication. As I stated in the OP, neurons don't understand sentences, words, or concepts. They simply send electrical signals to each other. However, this signalling is the reflection of what we think. For instance, if you think of a word, let's say "apple", certain neurons fire up in association with this concept. But they don't signal a picture of an apple, or the word "apple", to each other; everything happens at the electrochemical level.

    I don't want to repeat the OP, but I will use the computer example again: when you type "apple" into a computer, certain electronic activity is triggered within the hardware. The transistors don't understand "apple"; a certain number combination must translate this apple into binary code. Even before the word "apple" becomes a set of 1s and 0s, it has to be processed by a software layer (C++, BASIC, etc.).
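    The translation chain in that paragraph can be made concrete. Here is a hedged sketch in Python, using the standard ASCII encoding as the "certain number combination" (real systems may use other encodings such as UTF-8):

```python
# "apple" -> numbers -> bits: the word never reaches the transistors as a word
word = "apple"
codepoints = [ord(c) for c in word]            # characters -> ASCII code numbers
bits = [format(n, "08b") for n in codepoints]  # numbers -> 8-bit binary strings
print(codepoints)  # [97, 112, 112, 108, 101]
print(bits[0])     # 01100001  (the letter 'a')
```

    The user sees "apple" on the screen; the hardware only ever sees patterns like 01100001. The software layers in between carry out the whole translation invisibly.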

    What I am asking is this: using the computer analogy, and given the difference between neural activity and thoughts, can we suspect that some software-like system operates between thoughts and neurons? We don't have to start with the human brain; we can take the example of a rat: when a rat sees an apple, we can guess that no word for apple goes through its reception mechanism. But some mechanism translates this outside object (the apple) into the rat's neural system, and the rat approaches the fruit. A rat does not think as we do, yet we can still suspect that a simpler version of a similar mechanism is going on inside its brain. It is possible to generate more examples.

    Our task (as human beings) is to discover what nature is doing. At a philosophical level you can divide human thought into words, symbols, inspirations, meaning structures, etc. But this still does not mean anything when it comes to neurons, because they don't read things philosophically. Some reliable system must translate these concepts into electrical activity and vice versa: when the electrical activity performs some unexpected circuitry, the same translator/reader software must reflect this activity back into the thinking environment so that we can "come up with a new idea" or "feel puzzled" or "lose our minds", etc.
     
  9. baftan ******* Valued Senior Member

    Messages:
    1,135
    One step further: let's imagine that there is command software which reflects the functions of the brain. This might not be "a" single piece of software; it could be several different software regimes. The coded language between neurons might be as simple as the 1s and 0s of transistors, and it could be so basic and robust that it is shared by all the other brainy creatures of nature. We know that cells communicate with each other, we know the map of proteins used for this communication, we know DNA (compact microprocessor units, if you like), and we will soon replicate the entire map of brain cells with all their specialized compartments; yet we don't know how they communicate.

    Basic functions that we share with other animals (seeing, moving, etc.) could be organized through "basic" software modules which have evolved just like the physical organs themselves. More sophisticated brain functions (such as human thought) might depend upon more sophisticated software language(s). The neural architecture of the brain evolves according to the nature and activities of this software regime. That is also to say, with such coded communication tools it would be possible to build functional AI; if not as complex as the human brain, for a start.

    Maybe that's why a chimpanzee cannot speak a human language: not only because its neural connections do not allow it, but because the software that makes sense of this physical architecture is not there either. That is to say, if there were such natural software, and if (hypothetically speaking) we were able to break its code and install it into a chimpanzee brain, its neural activity might be triggered and the chimpanzee could start learning to talk like a human (after the installation was completed).

    There is another implication here: if such multi-level coding were going on in our brains, then taking brain-scan pictures is like measuring the electrical activity of a computer and trying to figure out how the computer works. We know that it is impossible to understand how a computer works without knowing the relationship between hardware and software. It is equally impossible to figure out how a computer works by looking at the screen activity alone. Screen activity (or thoughts, for that matter) is nothing but the reflection of an interdependent process between hardware and software.

    What we see, read or think eventually changes the structure of neural activity, just as typing certain codes will affect the behaviour and composition of a computer's software. Some changes are radical, some not so serious.

    Really, is anyone considering this possibility? Other than repeating the well-known "there is a complex relationship between neurons and thoughts" sentence, that is. We all know it's complex; but how complex? What is the nature of this complexity? How do thoughts and neurons communicate, and would the possible architecture of this communication (the software) be as important as the architecture of the brain in terms of how it works?
     
  10. redwards I doubt it Registered Senior Member

    Messages:
    290
    You're thinking about neurons the wrong way. We identify an apple by a process of neuron firing that may involve untold combinations and weightings of different neurons. The number of neurons in the brain is something like 10^11, and the number of neuron connections something like 10^14.

    So the potential combinations of neuron firings in the brain is orders of magnitude higher than the estimate for the total number of particles in the universe. There's no problematic limit to having given neuron patterns represent given impressions/objects/phenomena. That's how neural networks work.
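    That combinatorial point can be illustrated with a toy calculation (the unit count and the four-feature "code" below are invented for illustration; real neural codes are far richer):

```python
# Even a small pool of binary units yields an astronomical number of patterns,
# so distinct concepts can map to distinct *patterns*, not dedicated neurons.
n_units = 300
print(2 ** n_units > 10 ** 80)  # True: more patterns than the ~10^80 particle estimate

# Toy distributed code: two apples share features yet remain distinct patterns.
features    = ("round", "edible", "red", "green")
apple_red   = (1, 1, 1, 0)
apple_green = (1, 1, 0, 1)
shared = sum(a & b for a, b in zip(apple_red, apple_green))
print(shared)  # 2 features active in both patterns
```

    This is the intuition behind distributed representation: no "apple neuron" is needed, because each concept is a reusable pattern over shared units.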

    Moreover, your idea that the brain operates on a basis of translation is problematic. Non-human animals and infants, neither capable of using language nor having been exposed to a variety of different concepts, are still capable of gaining and processing knowledge.

    On top of this, the idea that we translate the various aspects of a given phenomenon before understanding it presents a huge problem for some of the things our brains are able to do very quickly. How would you decode a face such that you could identify the person it belonged to? Think about how much language it would take to describe a nondescript person's face well enough that the person could be immediately recognized by whoever you were talking to. Have you ever had trouble recognizing people that you know very well, even though they have ordinary faces?

    Concrete concepts as a basis for the function of the brain just doesn't hold up.
     
  11. baftan ******* Valued Senior Member

    Messages:
    1,135
    According to this, should I accept the first possibility? Let me restate it:

    Because you are saying that since we have a huge number of neurons and an even bigger number of possible connections, it is possible to imagine the brain thinking of a red apple and certain neurons firing, then the brain thinking of a green apple and another set of neurons firing. And there is no need for any translation between concept and neurons, since:

    And again, according to this, all we have to do is figure out which concept has which connection. Then we are done; we have solved how the brain works. For instance, if we go further and say "I want a red apple", all we have to do is find the different sets of neural connections for "I", "want", "a", "red" and "apple"; find the neurons that provide the grammatical connections; and if we want to say the same sentence in another language, find other sets of neurons for that sentence.

    You have a point: each brain is wired differently from every other, and certain concepts will occupy a special place in our brains. (Take this “Jennifer Aniston neuron” example:
    http://www.youtube.com/watch?v=Y7BZ...B54E221E8&playnext_from=PL&playnext=1&index=1)

    Not only that: this neuron (say, the one that fires for Jennifer Aniston) does not fire when the person sees Jennifer Aniston with Brad Pitt. Yet the same Jennifer Aniston neuron will fire if the person reads her name. Similarly, as the neuroscientist Ramachandran noted for mirror neurons, when I reach out and grab an apple, neural activity happens in my brain which is different from when I grab some other object.
    (http://www.ted.com/talks/vs_ramachandran_the_neurons_that_shaped_civilization.html)
    However, if somebody else reaches out and grabs an apple, the same neuron (the one that fires when “I” grab an apple) will fire! What’s going on?



    Although you are partly right that different concepts and usages must fire up different neural connections, I still have doubts about a direct representation between a concept and corresponding neural activity. However, I will come back to this issue later. First, I must clarify what I mean by "language" or "translation":

    That paragraph of yours gave me the impression that I haven't been understood as I expected. Here, "translation" is not something the brain (or an adult brain) does consciously. I will go back to the computer example; here is the extract from my reply to Bebelina above:

    This is exactly what I am trying to illustrate: when you think "apple", or when you see an apple and your brain recognizes it, you have a mental image. With your "conscious" mind you can be aware of it; you can say "I am thinking of an apple". But this is no different from a computer screen showing the word "apple": before it goes to the hardware (in a computer) or to neural activity (in the brain), it must (at least I think it should) be translated (read this as an encoding/decoding process, if you like) into a different code language of representation. (So this "language" or "translation" has nothing to do with the language or translation that we use in daily life; it's a codification process.) We call this "software" in computers, and no one who understands computers doubts it. Even if they did, it wouldn't change the fact that you cannot manipulate hardware without using a software language. And in computers, this software language is something totally different from what you write on the screen: "apple" is translated into a series of letter/decimal number combinations, which then have to be translated into a set of binary numbers. But the user isn't aware of this process, and doesn't have to be: the user types "apple", the hardware works, and things happen.

    Yet as soon as a problem occurs in a computer (say you type apple and the hardware does not process it), we suspect a software malfunction before we suspect a faulty transistor connection (the neural connection, for the brain). However, we don't suspect a similar problem when it comes to the brain, because what we generally accept is this: if the brain cannot register a concept like an "apple", it must be because of a damaged neural connection, or the person's psychological inability to use memory or conceptual mechanisms. We don't consider the possibility of a problem in decoding the apple concept so that neural activity can register it (or the other way around: the apple is translated into neural activity, but the neural activity cannot be read back by the conscious part of the brain). Simply because we don't know of the existence of any such software in our brains. I don't know of its existence either, but I suspect it.

    Your infant and/or animal examples don't make any difference in that respect: they also register signals from the outside world (and some animals think as well), and these images/thoughts come to the screen of the brain and necessarily result in neural activity. You can go as far as an insect brain, and nothing will change.

    I regard these kinds of activities as automatic registration. For these types of readings, an adult human brain is no different from an insect, infant or animal brain: these things happen without "human language" or even consciousness, that's true. But we still have a similar issue: when we see a face, the mental image must fire up some neurons; I believe we all agree on this. What I'm saying is that this mental image must be translated into such a codification/software so that neural activity can be triggered.

    Don't concentrate on the "speed" issue: you can see for yourself how quickly your computer (which uses software) reacts to your commands and how quickly it processes lots of information. There is a good reason for this: the speed of electricity. The same speed is used by the brain/mind.

    This reminds me of a good example: cached web pages. You know that if the web page you are trying to open has been opened before, it will be in the cache, and you will be able to open it more quickly than a web page you are opening for the first time. Nowadays computers are so quick that you might not even notice the difference, but up until five years ago this was the case. Even today, you can open a web page from the cache even if you are not connected to the internet, because it is stored on your computer. And this difference between a previously opened web page and a brand-new one is down to your browser software; it has nothing to do with your hardware/transistors.

    We know that nature doesn't work quite like that; the hardware/software relation is more integrated in the brain than in a computer. It is so deep that if you think certain things more often, it causes physical changes in your neural paths.

    There is nothing "concrete" when it comes to "concepts"; concepts are representations. Moreover, do not take neurons as "concrete" or absolute either; beyond being temporal/strategic physical activity, they can re-wire themselves over time and conditions, as the brain has a plastic structure rather than a concrete one. I recommend this 23-minute TED talk from the neuroscientist Michael Merzenich on plasticity and the re-wiring of the brain:

    http://www.ted.com/talks/lang/eng/michael_merzenich_on_the_elastic_brain.html

    There should be some regime going on behind both the reshaping of the architecture of this wiring and the registration of thoughts, emotions, concepts and other mind activities in the neural architecture.
     
    Last edited: Jun 17, 2010
  12. redwards I doubt it Registered Senior Member

    Messages:
    290
    That misses the point entirely. You don't need a grammatical connection to imagine a red apple.

    I've met Ramachandran recently.

    I would agree that the brain has a capacity for language, specifically, and that incoming language has to be processed before it can register with the brain. But unless an activity specifically involves language, that is, I need to speak or write something, or I am listening to someone speak or reading something, then language-processing is not necessary. When I see the word 'apple' written, I need to translate it. When I see an apple, my brain already knows what that is, and no language processing is done.

    Also, I think your distinction between software and hardware is probably not applicable for a brain. It's slightly more akin to 100% hardware that is constantly re-wiring itself to perform different specific functions, though that's also probably a bad analogy.

    I agree. The point is that language is unnecessary. They aren't 'translating'.


    But it doesn't have to be translated. That's the point. It's an input that generates an output. At no point in the process do we need to sit down and convert our input into some other format for processing.

    The speed issue is important. It's not about how fast neural signals move, it's about how the processing is done. My brain can recognize my wife's face instantly but, with all of its computational power, struggles mightily to solve mathematical equations. If it were simply an issue of how fast I can process information, a few numbers would be far simpler than processing an entire three-dimensional space to verify that it matches some preconceived pattern. What's happening is that my brain gets an input, it flies through a bunch of neurons, and the output I get is simply that this is my wife's face. There isn't any real processing going on; I'm just very good at recognizing faces (because of a few hundred thousand years of evolution that has wired our brains for particular activities).

    It's an amusing example, and I'm certainly familiar with caching, but it's not how a neural network works.
    Correct.

    Agreed.

    Where are we on your initial argument about the two different approaches?
     
  13. baftan ******* Valued Senior Member

    Messages:
    1,135
    A grammatical connection is not required for an apple (red or not); but read what I wrote: a grammatical connection is required when you need to construct a sentence such as "I want an apple". How are you planning to construct a sentence without a grammatical connection? How will you put "I", "want", "an" and "apple" together without grammar? Any suggestions?

    If you are talking about a "conscious" translation of the written word apple, then no, you don't have to make a "conscious" translation either: your brain will recognize the concept without you needing to translate it. Otherwise you wouldn't be able to read at all, or at least it would take ages. Seeing an apple and reading the word "apple" are registered by an educated brain more or less equally quickly.

    I think you find this analogy bad because of a misunderstanding of the software/hardware relation. You speak of a "distinction between software and hardware". This topic is discussed specifically in the thread called "Is software physical", if you are interested:

    http://www.sciforums.com/showthread.php?t=101991

    However, I can say this much: you cannot separate them. Can you imagine empty hardware (a bunch of transistors without any software installed), or software which is not executed on hardware? Would they make any sense? Similarly, would it be possible to design software without taking the capacity and architecture of the hardware into consideration?

    If you are seriously claiming that the hardware rewires itself constantly without corresponding software, we are in interesting waters. Imagine the implications: you think of something, and the mirror-neuron composition changes straight away, without any natural decoding process. Forget the examples that don't require any language use (like the image/concept of an apple), and think about linguistic entities (words/sentences), which also necessarily have neural equivalents. That would mean only one thing: an English speaker has neurons that physically understand English without the words being translated into electrochemical activity, and a Chinese speaker has neurons that directly decode Chinese words. What's going to happen with bilinguals? I know bilinguals activate more brain cells, but can you claim that their neurons (biological agents) are language speakers? Because if you are saying this, it must also be true of the other cells of the body: a toe cell that understands Russian, for instance.


    And I don't agree with what you agree with, as you are mistranslating what I mean by "language" and/or "translation". Read them as a software language (like C++) or compiling.

    What you are saying is contradictory. You are claiming that there is a "process" going on (your sentence: "It's an input that generates an output"). Therefore we need a kind of translation. Otherwise I will rightfully demand an example of any known process which takes one thing as an input and makes something different out of it (an output). Your example can be from nature or from human-made machines; I don't mind. And don't forget, in your "process", a word, a "mental image without a word", or an environmental signal gets into the brain as input (conscious or not, again it doesn't matter), and some neural activity (an electrochemical process) happens without any translation. This is a hard stance to defend, I guess...

    Here is the thing: you are still agreeing on a "process", and I don't think your brain cells/neurons are made of some different type of chemistry, yet you recognize a structural difference in your brain (not being able to solve mathematical equations). The question is this: how or why did your brain develop such a difference? Is it because of an accidental architecture of your brain-cell connections? And what "process", by the way?

    No matter how your brain is wired, you cannot think faster than the speed of electricity allows, given the resistance of your brain cells and synaptic connection paths. That is one thing; the same goes for your computer, no difference whatsoever. Secondly, the speed of your computer depends on the architecture of your software. What I am saying is this: your brain's architecture is also important in terms of speed. Conventional understanding says this is the architecture of the neural pathways, and I don't refuse that. Yet I am asking: what is the regime behind the establishment of these neural pathways?

    I believe the entire community of neuroscientists and/or philosophers would disagree with the claim that "there isn't any real processing going on". They prefer to say "we don't know exactly what is going on yet". I agree with the evolution part, and I don't think of anything outside evolution. My assumed translation process (thought to electricity) must also have evolved in parallel, within the mind regime. No doubt about it.

    This example was not designed to explain the entire neural network system. It was intended to show how certain things can be recognized more easily without any requirement for a physical connection, using only the memory of a piece of software.


    I don't know about you, but I still stand at the same point...
     
    Last edited: Jun 18, 2010
  14. baftan ******* Valued Senior Member

    Messages:
    1,135
    It may or may not be helpful, but I would like to introduce another example to clarify my point.

    BCI (brain-computer interface) devices work on a simple principle: a human-made computer reads neural activity, and the signals are then translated into hearing, sight, or the movement of a robot arm; mostly "motor functions", given the current level of the technology (Wikipedia article on the subject: http://en.wikipedia.org/wiki/Brain-computer_interface).

    We know that there is special software behind these computers which is able to read the neural signals and translate them into commands. We also know that what the computer's sensors read is not the concept itself (not what we consciously think), but the electromagnetic signature of the brain activity. Here is the might-be-confusing bit: when I think of moving my arm, I am aware of moving my arm, and this command/request comes to my consciousness as a concept ("I want to move my arm to the right/up/left/down"); however, the computer does not understand that. It reads the mirroring neural activity and directs the robot arm accordingly, thanks to the human-made software. If I didn't know about this process, I might think "Oh, I thought of moving my arm and the computer understood me, magic!" No, the computer didn't understand me at all; it's not magic, its software translated a signal into an action, that's it.
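    A heavily simplified sketch of what such BCI software might do (the electrode readings, templates and command names below are all invented for illustration; real BCI decoders use trained statistical models, not hand-written templates):

```python
# The software never sees the *thought* "move my arm"; it only sees a vector
# of electrode readings, which it maps to the nearest known signal signature.
TEMPLATES = {
    "move_left":  [0.9, 0.1, 0.2],
    "move_right": [0.1, 0.8, 0.3],
    "rest":       [0.1, 0.1, 0.1],
}

def classify(reading):
    # nearest-template decoding: pick the command whose stored signature
    # has the smallest squared distance to the incoming signal
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda cmd: dist(TEMPLATES[cmd], reading))

print(classify([0.85, 0.15, 0.25]))  # move_left
print(classify([0.05, 0.05, 0.05]))  # rest
```

    Nothing in this pipeline resembles the concept "move my arm"; the mapping from signal to command is pure codification, which is exactly the role the thread is proposing for a natural in-brain translator.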

    I know it sounds like repeating the same thing again and again, but what if we have naturally built-in software in our brains that translates our thoughts into neural activity and vice versa? At the end of the day, a conscious thought can be understood as a shaped end product, an action. When we ask someone to "try to imagine an apple", this person will start to think about the concept and the mirroring neural activity will take place. This person might not be physically moving a robot arm, but a conscious effort to think about a concept is also work, an action, a labour. Moreover, if I push the idea and say "now think of a blue apple", "a square apple", or "a walking apple", the mind starts to combine weird thoughts. Therefore, thoughts themselves (even without being put into practice) can be analysed as "actions requiring processing", or as "products which are categorically different from neural activity" within the mind.

    So, couldn't there be some natural translator between concepts and neural activity, just like the BCI software?
     
    Last edited: Jun 18, 2010
