Why do neurons send their information thingys to so many other neurons?

Discussion in 'General Science & Technology' started by Betrayer0fHope, Dec 28, 2008.

Thread Status:
Not open for further replies.
  1. Betrayer0fHope MY COHERENCE! IT'S GOING AWAYY Registered Senior Member

    Messages:
    2,311
    Why? It hardly seems efficient for neurons to be sending so much information, so why does this happen? Also, we don't have a neuroscience board, and I wasn't sure whether to post this in Biology or here.
     
  2. EntropyAlwaysWins TANSTAAFL. Registered Senior Member

    Messages:
    1,123
    Multiple redundancy is good; it makes a system more resilient to damage.
     
  3. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    Analog field-programmable gate array: the more possible connections, the more it can do.
     
  4. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Same reason that the computers called "neural networks" fan out the output of each input unit to all of the intermediate-layer units (cells). Digital computers that do not do this require a control system and a "program" it can run. The brain has no corresponding control system, at least not a specific one. AFAIK, only with this fan-out can any system learn stimulus-response reactions without a controlling program.
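
    A minimal sketch of this fan-out in Python (the layer sizes are illustrative assumptions, nothing more): each of the 10 input units feeds every one of the 1000 intermediate units, with no central program deciding who connects to whom.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_hidden = 10, 1000                # every input unit fans out to all 1000 hidden units
    W = rng.normal(size=(n_hidden, n_inputs))    # one connection weight per (hidden, input) pair

    x = rng.normal(size=n_inputs)                # a single input pattern
    h = np.tanh(W @ x)                           # all 1000 intermediate units respond at once
    print(h.shape)                               # (1000,) - learning would adjust W, not a program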

    Also, error-correcting coding does something quite related. I forget just now* what it is called, but will give a simple example: suppose you want to send a msg of 10 bits. You could send just those 10 bits, but if any one is corrupted en route, the entire msg is lost. However, if you send, say, 13 bits, none of which corresponds to an individual bit of the msg, you can lose a sent bit (probably even 2, if the losses are random) and still recover the entire original msg with 100% accuracy.

    This is achieved, for example, by making the + or - values of sent bits 1, 4, 7, and 10 each be determined 25% by the value of msg bit 1.
    The other 75% of sent bit 1 is made up of three other msg bits, say 3, 6, and 9.

    Thus if the first sent bit was + but got changed by a transmission error to -, then when the sent message is processed at the receiving end, sent bits 4, 7, & 10 "vote" that msg bit 1 is + while sent bit 1 votes that it is -, but it is overruled by the other three, which are still correct. I.e., they contain, in distributed fashion, the greater part of the information of msg bit 1.
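
    The 13-for-10 mixing above is only sketched; a plain 3x repetition code is a simpler stand-in that shows the same "voting" recovery (a minimal Python sketch, illustrative only, not the mixing scheme itself):

    Code:
    def encode(bits):
        # Send each msg bit three times (far cruder than a real distributed
        # code, but the majority-vote recovery below is the same idea).
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(received):
        # Majority vote over each group of three copies: one flipped copy
        # is outvoted by the two that survived transmission.
        out = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            out.append(1 if sum(group) >= 2 else 0)
        return out

    msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # a 10-bit msg
    sent = encode(msg)                     # 30 bits go on the wire
    sent[4] ^= 1                           # one bit gets corrupted en route
    assert decode(sent) == msg             # the vote still recovers the msg exactly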

    I hope you get the idea and see that by distributing the output discharge of each nerve to approximately 1000 others as inputs, not only can the system learn (by making new connections and/or changing the strength** of existing ones), but it is also immune to huge failure rates. (Every second, thousands of nerves are dying.)

    Memory is still a great mystery (although H.M., who died last week, advanced it a great deal, even though he could not remember anything new for more than about 10 seconds). Once, decades ago, some suggested that each nerve held a fact / a bit, much like a conventional digital computer does with its memory. This came to be known as the "grandmother cell" concept of memory. Fortunately, each bit of memory is actually distributed over thousands of cells, so if a hundred cells in that memory net die, the memory lives on, much like the msg coding system I described above can restore the full msg with 100% accuracy.

    Why memory fails and/or cannot be recalled at will is complex and not well understood.
    ------------------------
    *Can someone tell me what this error-correction coding method is called? I think it is very widely used nowadays, for example in digital TV – which is why there is never any "snow," though occasionally a whole block of the picture is lost when the decoding cannot recover a section of the information correctly.

    I once spent a few hours, as homework, manually making a simple distributed-coded msg – all I tell above is from that exercise, as I remember it. I really would like to know the name associated with this clever scheme; I have not been able to recall it for more than two decades. I bet that when someone does tell me the name, I will recognize it – thus indicating a clear case of "recall failure," not "storage failure."

    **One of the early methods used for connection machines (what I prefer to call man-made "neural networks") was the Hebb rule. Hebb was a psychologist who investigated learning at the neurological level in animals. Basically, the rule is that if the outputs of nerves "A" & "B" both stimulate nerve "C" at essentially the same time, then their connection strengths to "C" are increased. In biological systems this might be by greater release of the neurotransmitter; in connection machines, typically the resistance in A's & B's fan-out paths to "C" is decreased.
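
    A minimal sketch of the Hebb rule (the learning rate and unit names are illustrative assumptions):

    Code:
    import numpy as np

    rate = 0.1                      # learning rate (made up for illustration)
    w = np.zeros(2)                 # connection strengths from A and B onto C

    for _ in range(20):
        pre = np.array([1.0, 1.0])  # outputs of A and B: both fire...
        post = 1.0                  # ...and C fires at essentially the same time
        w += rate * pre * post      # Hebb: co-active connections get stronger

    print(w)                        # both connections onto C have grown to 2.0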
     
    Last edited by a moderator: Dec 28, 2008
  5. EntropyAlwaysWins TANSTAAFL. Registered Senior Member

    Messages:
    1,123
    A parity bit, perhaps? Not entirely sure what you mean.
     
  6. Read-Only Valued Senior Member

    Messages:
    10,296
    My question to you is one that I think should be obvious: since we are still so far from understanding just how the whole system operates, what makes you feel certain that the amount of information sent is somehow excessive?
     
  7. Crunchy Cat F-in' *meow* baby!!! Valued Senior Member

    Messages:
    8,423
    EntropyAlwaysWins's answer is quite correct.
     
  8. Giambattista sssssssssssssssssssssssss sssss Valued Senior Member

    Messages:
    4,878
    Like .par (parity) files for large data transfers. It's interesting, in theory and application!

     
  9. Giambattista sssssssssssssssssssssssss sssss Valued Senior Member

    Messages:
    4,878
    I know very little about neurological processes, but is there a differentiation here between individual neurons and neuronal pathways? From what I know, just about any input/output goes to more than one area of the brain. Each area of the brain functions like a separate processor with coprocessors, and all input is sent to at least one or more areas, each of which functions under its own programming.
    Is that not what you're referring to? That you get as much in/out from as many specialized areas as possible, and that somehow the sum of the values from the different areas determines the overall result?
    Obviously some regions of the brain aren't designed to handle each and every bit of sensory input that they might receive. I'm guessing there are also restrictions on neural impulses relayed to certain areas.

    I do know that this reminds me of the story (it was an article I read and found fascinating) about a guy with above-average intelligence but a considerably smaller-than-average brain, because of undiagnosed hydrocephalus or something like that. The thing was that scientists were puzzled, because most people in his position would be impaired in some way. I'm sure I've made reference to that case before on Sciforums, but I haven't been able to relocate the original story from several years ago.

    Maybe you're referring to something else, in which case I know nothing! :bawl:
     
  10. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    If this is about my post, no. Not a parity bit. Parity bits only tell you that an error exists (and not even that, if an even number of errors have occurred). They do not allow perfect recovery of the msg despite errors during transmission (or even during transfer within one processor). Read my numerical illustration about using 13 bits to send a 10-bit msg again, and ask specific questions if you still do not understand.
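
    A quick Python sketch of the difference (even-parity convention assumed): the parity bit flags a single flipped bit but cannot locate it, and two flips cancel out and pass unnoticed.

    Code:
    def add_parity(bits):
        # Append one bit so the total number of 1s is even.
        return bits + [sum(bits) % 2]

    def parity_ok(word):
        # The check only tells you THAT something flipped, never WHERE.
        return sum(word) % 2 == 0

    word = add_parity([1, 0, 1, 1])
    word[1] ^= 1             # one transmission error
    print(parity_ok(word))   # False: error detected, but not correctable
    word[2] ^= 1             # a second error...
    print(parity_ok(word))   # True: two errors slip through undetected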
     
  11. river-wind Valued Senior Member

    Messages:
    2,671
  12. EntropyAlwaysWins TANSTAAFL. Registered Senior Member

    Messages:
    1,123
    Sorry, I'm not familiar with this technique. It merely reminded me of a parity bit and I wondered if, perhaps, the two techniques were related in some way.
     
  13. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Only very vaguely. River-wind's link, if followed through several sublinks, leads to:

    http://en.wikipedia.org/wiki/Hamming_code

    This is probably more complex than most will want to read, so I suggest you just go to the last section, the 7/11 example. I.e., a seven-bit message is converted into an 11-bit transmitted word which, even though the transmission channel is bad (has frequent errors), gives a very high probability of recovering the original 7-bit message error-free.

    BTW, my speculation about my memory failure (that it was one of recall, not storage) was correct. As soon as I read the name "Hamming" in some of the other sub-links leading to the above Hamming code link, I remembered that this error-correction coding was invented by Hamming. AFAIK, it is still among the very best, and I continue to think it is widely used in many digital transmission systems, such as digital TV – which is why there are never any single-bit errors that would show up as "snow" in the picture.

    Hamming coding is a case of "some garbage in, but perfection out." His contribution (circa 1950) to modern life in the information age is too little recognized. (Too complex for most to understand, I guess.)
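
    For the curious, a minimal Python sketch of the smaller Hamming(7,4) code (same idea as the 7-bit-to-11-bit example on the wiki page; the matrices are the standard ones from that article). The 3-bit syndrome names the corrupted position, so any single flipped bit is located and corrected exactly.

    Code:
    import numpy as np

    G = np.array([[1,1,0,1],    # generator: rows 1, 2, 4 are parity bits,
                  [1,0,1,1],    # rows 3, 5, 6, 7 carry the 4 msg bits
                  [1,0,0,0],
                  [0,1,1,1],
                  [0,1,0,0],
                  [0,0,1,0],
                  [0,0,0,1]])
    H = np.array([[1,0,1,0,1,0,1],   # parity-check matrix
                  [0,1,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])

    def encode(msg4):
        return G @ np.array(msg4) % 2

    def correct(word7):
        s = H @ word7 % 2                    # syndrome = binary "address" of the error
        pos = int(s[0] + 2 * s[1] + 4 * s[2])
        if pos:                              # nonzero syndrome: flip the named bit back
            word7[pos - 1] ^= 1
        return word7

    sent = encode([1, 0, 1, 1])
    sent[2] ^= 1                   # corrupt one bit en route
    fixed = correct(sent)
    print(fixed[[2, 4, 5, 6]])     # msg bits recovered error-free: [1 0 1 1]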
     
  14. Betrayer0fHope MY COHERENCE! IT'S GOING AWAYY Registered Senior Member

    Messages:
    2,311
    Thanks for the answers. To Read-Only: I don't know, I just saw some science-channel special and noticed that something seemed odd. I was more wondering what would happen if we didn't send all the "extra" information, the "3 bits" that Billy mentioned. Ah well, thanks guys.
     
  15. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    The fan-out is typically on the order of 1000, not 3 extra bits. I just talked about Hamming coding to illustrate that an information-transfer system (that is what the brain is) can be very self-correcting with lots of redundancy. If the brain were more like a digital computer – efficiently using its bits instead of throwing them to the four winds, as the brain does to see if anything useful will develop – then whenever a cell died, you would lose something.
     
  16. Betrayer0fHope MY COHERENCE! IT'S GOING AWAYY Registered Senior Member

    Messages:
    2,311
    Ah, cool. Thanks, this thread has been entirely answered.
     
  17. gluon Banned

    Messages:
    512
    I'll give you a few reasons why. The first reason is coherence. The wave function of a system must have some particular coherence with another system, even if it is controlled by uncertainty or any complementarity. Taking this into account, the wave function of my body has practically deflated, or if you like, each particle in my body has decohered in the presence of the others.

    The matter we are made of is largely fermionic, and therefore these fermion particles must obey Fermi-Dirac statistics. These statistics depend on a notion called spin; the spin is actually simply the angular momentum of the particle through space and time. But energy can only ever be transported via angular momentum, so in a time of \(c<t<r<0\) one atom could exchange energy with another atom via, say, a photon or an electron. But even information is exchanged upon the spin of the particle.

    It seems that neurons are simply large clusters of living matter, made up of billions upon billions of little systems all working together to form one thing. And, constructed as they are, each neuron is able to act like a tiny computer, processing, receiving and sending messages in the form of electrical signals. Because of the way these little neurons are designed, with the genetic makeup of the human as a blueprint, they are able to cohere with each other quite well, by giving up or sharing some of their energies.
     
  18. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    To Gluon:

    Your post 17 is very largely (>90%) nonsense, because the brain is not a quantum system but an entirely classical one.

    Read a little about how nerves fire: i.e., about the "sodium pump" restoring the "resting potential" (~ -70 mV) after the Na influx has propagated down the axon, etc.; the capacitor-like function of the fatty glial cells (insulation)* that wrap around the axons, and the function of their nodes. Etc.
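
    A toy "leaky integrate-and-fire" model makes the classical point concrete. This Python sketch (all constants illustrative, not measured physiology) gets spiking from nothing but RC-circuit dynamics:

    Code:
    dt, tau = 0.1, 10.0                   # time step and membrane time constant (ms)
    v_rest, v_thresh = -70.0, -55.0       # resting and threshold potentials (mV)
    v, current = v_rest, 20.0             # start at rest; constant input drive

    for step in range(1000):              # simulate 100 ms
        v += dt / tau * (v_rest - v + current)   # leak toward rest + input charging
        if v >= v_thresh:                        # threshold crossed: the cell fires
            print(f"spike at t = {step * dt:.1f} ms")
            v = v_rest                           # repolarize back to rest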

    Neural discharges are well understood in classical electrical terms, although the moving charges are ions, not fermions (electrons) as you falsely assert.

    The specific (selective) sites, mainly on the cell soma, for the various neurotransmitters that can either trigger or inhibit firing in collective net action are also reasonably well known now in detail. These neurotransmitters simply diffuse across the synaptic gaps. Etc.

    SUMMARY: No quantum physics anywhere in this system.

    ------
    *Multiple sclerosis is often associated with a failure of the glial cells to do their insulation job.
     
    Last edited by a moderator: Feb 3, 2009
  19. Hercules Rockefeller Beatings will continue until morale improves. Moderator

    Messages:
    2,828
    Well called, Billy T. :bravo: You're completely correct and, as per usual, gluon/Reiku is talking a load of nonsense.


    Yes, multiple sclerosis most often occurs as a result of an auto-immune response against your own oligodendrocytes (the glial cell type that ensheathes axons and provides insulation for saltatory conduction).
     
    Last edited: Feb 4, 2009
  20. Cyperium I'm always me Valued Senior Member

    Messages:
    3,058
    The quantum laws are different from the classical in that the classical laws are simpler (they are the quantum laws made simpler, since most of the quantum detail isn't needed to describe classical behavior). Is it impossible, then, to imagine still higher laws that are emergent from the classical in the same way, just even more complex?

    That would suggest we are in the middle of two very complex worlds, where our world is made simple by ignoring much of the complexity of the others, since it is not needed to explain our world.

    What I'm trying to say is that there is one world of the quantum (which we can't describe with merely classical methods); one world of the classical (where we don't fully understand how it emerges from the quantum, or we choose to strip the quantum away because the accuracy is high enough anyway, and keeping it would only lead to very complex calculations); and another world of more complex classical situations (in which quantum effects might not matter much – though perhaps, if the signal is distributed, the very, very small quantum effects could show up, for instance via the brainwave frequencies, which could act as a zero point against which subtle differences are measured).

    This is only speculation, of course, but without speculation we won't find anything new either.
     
    Last edited: Feb 7, 2009