Yes. The understanding of human perception accepted by at least 99% of cognitive science professionals and communities
is entirely wrong, IMHO.
That standard POV is that the sensory neural impulses come to the brain, where they are separated into their different "characteristics," which are sent to different regions of the brain for more computational processing* (up to this point I agree 100%), and then, after considerable computational processing, the unified perception of a 3D world out there "emerges."
No explanation is ever given as to how or where these characteristics, processed in entirely different regions* of the brain, are ever "reassembled" into the unified perception we all experience. - It is just an act of faith that it "must happen" somehow.
---------------
* For example, if memory serves me correctly, for visual neural inputs:
(1) The visual cortex region called V5 is where the three different retinal color-sensitive signals (peak responses in red, green and blue) are computationally transformed (by well-known equations) into the three-axis color-space vectors used in the brain. (I.e. the yellow/blue axis, the green/red axis, and the white/black axis are computationally constructed in V5. For example, nerve cells of the "yellow/blue axis" fire above or below their baseline "random activity" rate to indicate yellow or blue. That baseline rate is about halfway between the maximum and minimum possible firing rates.) This is why, if you stare at a bright red spot for a few minutes, those cells will become fatigued, and then, when looking at a white wall, they are not able to return to that baseline firing rate, so you see a green spot of the same shape that is not actually there on the white wall.
(2) The speed of motion is computed in V3. (I hope my memory does not have V5 and V3 reversed; it has been many years since I was an expert in all this.)
(3) The location of objects is computed mainly in V2 (but some in V1 also). V1 is mainly used to parse** objects from the continuous visual field and to separate their various "characteristics."
For example, since motion and location are processed in different regions of the brain, it is possible to experience motion in an object which is stationary! (This is called the "Waterfall effect.") I.e. if you look at a set of horizontal bars steadily moving down on a computer screen for a few minutes, you will fatigue the cells in V3 that respond to that speed. Then, when the motion of the bars is stopped, for up to 30 seconds or so you will experience them as moving up at that same speed, even if you put your finger on a now stationary bar.
Physicists compute motion as the change in position in a fixed amount of time (v = dx/dt), but that is NOT how the brain does it. (The brain works more like a speedometer in a car, directly detecting speed.) I.e. the brain has motion detection cells in V3; different sets with peak responses at different speeds. Just as you can see / experience green spots that do not exist after fatiguing certain red detection cells, you can see / experience motion that does not exist while looking at a stationary object after fatiguing certain motion detection cells in V3.
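The shared fatigue mechanism behind both illusions can be sketched numerically. This is my own toy illustration, not anything from the original post or paper; all gains, rates and preferred speeds are invented:

```python
# Toy model: a fatigued channel in an opponent pair biases the read-out
# toward the opposite pole. Same mechanism, two illusions.

def percept(red_gain, green_gain, light_red, light_green):
    """Opponent read-out: positive -> red, negative -> green, zero -> neutral."""
    return red_gain * light_red - green_gain * light_green

# Green after-image: staring at a bright red spot fatigues the
# red-preferring cells; then look at a white wall (equal red and green light).
red_gain, green_gain = 0.6, 1.0            # red channel fatigued
signal = percept(red_gain, green_gain, 1.0, 1.0)
print(signal)                              # negative -> the wall looks green there

# Waterfall effect: "speedometer-style" detectors tuned to preferred speeds,
# one population per direction. Watching bars move down at ~10 px/s for a
# few minutes fatigues the matching down-detector.
down_gains = {5.0: 1.0, 10.0: 0.5, 20.0: 1.0}   # 10 px/s detector fatigued
up_gains   = {5.0: 1.0, 10.0: 1.0, 20.0: 1.0}

def decode(stationary_drive=0.1):
    """A stationary image drives every detector weakly and equally; the
    percept is the difference between total up and down drive."""
    down = sum(g * stationary_drive for g in down_gains.values())
    up = sum(g * stationary_drive for g in up_gains.values())
    return up - down

print(decode())   # positive -> the stationary bars appear to drift upward
```

With fresh (equal) gains both read-outs are zero; only the imbalance created by fatigue produces the illusory green and the illusory upward drift.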
Note I also agree 100% with the standard cognitive science facts in this footnote above, and have empirically confirmed many of them, including the waterfall effect. I simply think that these physically separately processed neural computation results about an object's various characteristics are never and nowhere "reassembled." - I have a different theory of how our unified perception is achieved. I do not need any act of faith that they are "reassembled" somewhere, somehow, and there is no evidence that they are, despite a great deal of searching by neuro-scientists.
Not only do I offer very specific proofs that the standard cognitive science POV about perception is very wrong, I also offer a new theory of how human perception is achieved, and even show why one should expect it to have evolved, as it gives a great survival advantage to the humanoids who first perfected it. I.e. perhaps the bigger-brained, stronger Neanderthals did perceive the world as described by modern cognitive science's "emergent perception" instead of my theory's Real Time Simulation (RTS) model, but they would be at a great disadvantage in ducking a thrown rock, etc., if they only perceived it where the rock was a fraction of a second earlier. (Up to 0.3 seconds of delay is required for the many successive stages of "neural computations" to complete before perception can "emerge.")
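To make the timing argument concrete, here is a back-of-envelope sketch. The 0.3 second delay comes from the text above; the 20 m/s rock, the constant-speed assumption, and all function names are my own invented illustration:

```python
NEURAL_DELAY = 0.3        # seconds of neural computation before perception

def true_position(t, speed=20.0):
    """Rock thrown at a constant 20 m/s along a straight line."""
    return speed * t

def emergent_percept(t):
    """Standard view: you perceive where the rock WAS, NEURAL_DELAY ago."""
    return true_position(t - NEURAL_DELAY)

def rts_percept(t, speed=20.0):
    """RTS view: extrapolate the delayed position forward using the
    directly sensed speed (speedometer-style, per the footnote above)."""
    return true_position(t - NEURAL_DELAY) + speed * NEURAL_DELAY

t = 1.0
print(true_position(t) - emergent_percept(t))   # 6.0 m of lag
print(true_position(t) - rts_percept(t))        # 0.0 at constant speed
```

Six meters is far more than a body width, so a perceiver stuck with the lagged "emergent" position would duck the wrong spot; the forward projection cancels the lag exactly whenever the rock's velocity is roughly constant over the delay.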
For a more complete description of my RTS model of perception see:
http://www.sciforums.com/showpost.php?p=905778&postcount=66
That article is mainly about how the RTS model allows genuine free will to exist without being in conflict with the laws of physics, which determine the firing of every nerve in your body. The RTS is explained and evidence supporting it is given. Why I suggest the RTS is located in the parietal cortex is also presented. I also give, at the above link, several proofs that the standard "emergent perception" POV is nonsense.
Note that post 66 is from 11/11/2005, but the idea was published in the Johns Hopkins Technical Digest years earlier. (See ref. 1 of the linked post.)
A few young cognitive scientists have accepted my POV, but most have too much invested (many papers, etc., which become nonsense if I am correct) to do so.
I.e. I offer a major paradigm shift.
** That Johns Hopkins Technical Digest paper, Ref. 1 of post 66, also explains how objects are parsed from the continuous visual field of retinal stimulation AT THE NEURONAL LEVEL, not, as was then usually done, with some higher-level hand-waving jargon. I.e. using only known properties of V1 neurons: their receptive fields and detailed characteristics of their pair-wise mutual interactions. (Similarly oriented receptive fields mutually reinforce firing rates and orthogonal receptive fields mutually inhibit firing rates.) This aspect of my paper was not so threatening to the established cognitive scientists and is now generally accepted, but others more well known in the field than this physicist also came to this conclusion at nearly the same time, shortly after the coherence of the then so-called "50 hertz oscillations" was observed with fine micro-electrodes in neuron pairs a few mm apart in V1. My neuronal explanation of parsing naturally explains why camouflage works to hinder object parsing, and also the Gestalt law of "Good Continuation."
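That pair-wise rule (similarly oriented receptive fields reinforce, orthogonal ones inhibit) can be sketched as a tiny rate model. This is my own construction, not the paper's actual equations; the cos(2*dtheta) coupling and all constants are invented for illustration:

```python
import math

def coupling(theta_i, theta_j):
    """Pair-wise interaction between preferred orientations: +1 for
    parallel, -1 for orthogonal (cos flips sign at 45 degrees)."""
    return math.cos(2.0 * (theta_i - theta_j))

def settle(orientations, drives, steps=20, k=0.05):
    """Let firing rates evolve under mutual reinforcement/inhibition,
    clipped to [0, 1]; synchronous update each step."""
    rates = list(drives)
    for _ in range(steps):
        rates = [
            max(0.0, min(1.0, rates[i] + k * sum(
                coupling(th_i, th_j) * rates[j]
                for j, th_j in enumerate(orientations) if j != i)))
            for i, th_i in enumerate(orientations)
        ]
    return rates

# Three collinear edge segments (0 rad) forming a smooth contour, plus one
# orthogonal distractor (pi/2 rad), all with equal feed-forward drive.
rates = settle([0.0, 0.0, 0.0, math.pi / 2], [0.5] * 4)
print(rates)  # aligned segments pumped up, the orthogonal one suppressed
```

Run this way, the aligned segments pump one another toward maximal firing while the distractor is suppressed, a toy version of "Good Continuation"; and since camouflage breaks exactly this kind of alignment, randomizing the orientations destroys the mutual reinforcement and the grouping with it.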