Edge detection

Discussion in 'Intelligence & Machines' started by domesticated om, Jul 4, 2010.

Thread Status:
Not open for further replies.
  1. domesticated om Stickler for details Valued Senior Member

    Messages:
    3,277
    Since I'm working on my own robotics project right now (which is actually turning into a full-blown AI the more I progress on it), one of the tasks I'm facing is edge detection. I initially thought it wouldn't be that hard: I've written programs that can detect the edges of still images, provided the "edge" in question is white. One example is the ASCII-art BBCode program I was writing a while back. Here is one example:

    Original Image:

    [image attachment not shown]



    ASCII version (minus the registered trademark):
    ***************************************************ZZZZ
    **************************************************ZZZZ
    *************************************************ZZZZ
    *************************************************ZZZZ*
    *************************************************ZZZZZ
    *************************************************ZZZZZZ
    *************************************************ZZZZZZZ
    ***********ZZZZZZZZZZZZZZ*****************Z********ZZZZZZ*******************ZZZ
    *******ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ*ZZZZZ****ZZ**ZZZZZZZ**********ZZZ**ZZZZZ
    ******ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ*ZZZZZZ*ZZZZZZ**ZZZZZZZZ******ZZZZZZ*ZZZZZ
    ***********************ZZZZZZZZZZZZ*****ZZZZZZZZZZZ****ZZZZZZZZ*****ZZZZZ**ZZZZ
    *************************ZZZZZZZ********ZZ****ZZZZZ*****ZZZZZZZZ****ZZZZ**ZZZ*****************************ZZZZZ
    **********************ZZZZZZZZ**********ZZZ***ZZZZZ*******ZZZZZZZZ**Z*ZZZ**ZZZZ***ZZZZZZZ*******ZZZZZZ*ZZZZZZZZZ
    ******************ZZZZZZZZ***********ZZZZZZ***ZZZZ*****ZZZZZZZZZZZZ*ZZZZZZZZZZZ*ZZZZZZZZZZZ****ZZZZZZZZZZZZZZZZZZ
    *************ZZZZZZZZZ**************ZZZZZZZ**ZZZZZ***ZZZZZZ***ZZZZZZZZZZZZZZZZZ*****ZZZZZZZZ*****ZZZZZZZ***ZZZZZZ*
    ********ZZZZZZZZZZZ*******************ZZZZZ**ZZZZZ**ZZZZZZ****ZZZZZZZZZZ**ZZZZZ***ZZZZZZZZZZZZ***ZZZZZZ****ZZZZZZZZ*ZZZ
    *****ZZZZZZZZZZZZZZ*******************ZZZZZ**ZZZZZ**ZZZZZZ***ZZZZZZZZZZZ**ZZZZZ*ZZZZZ**ZZZZZZZZZZZZZZZZ*****ZZZZZZZZZZZZ
    ***ZZZZZZZZZZZZZZZZZZZZZZZ************ZZZZZZZZZZZZZZZZZZZZZZZZZZZ**ZZZZZ**ZZZZZZZZZZZZZZZZZZZZZZZZZZZZ*******ZZZZZZZZZZ
    *ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ***ZZZZZZ***ZZZZZZZZZZZZZZZ*ZZZZZZZZZZ**********ZZZZZZ
    ZZZZ**********ZZZZZZZZZZZZZZZZZZZZZZZZZ**ZZ*************ZZ********ZZZZZ**************************Z
    ***********************ZZZZZZZZZZZZZ******************************ZZZZZ
    *****************************************************************ZZZZZ
    ****************************************************************ZZZZZ
    ********************************************************ZZZZZZZZZZZ
    *******************************************************ZZZZZZZZZ


    This program uses an image (JPEG, PNG, bitmap, etc.) as input and treats each pixel of that image as an analyzed segment. Furthermore, the program detects the right-most edge of the image and performs a behavior: in this case, everything to the right of the last dark pixel is excluded from the output (you can highlight the ASCII image and see this).
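
    In rough terms, that per-pixel pass might look like the Python sketch below. This is only a minimal illustration of the idea as described; the threshold value, the character choices, and the trim-everything-right-of-the-last-dark-pixel rule are assumptions, since the original program isn't shown.

    from PIL import Image  # pip install Pillow

    def image_to_ascii(path, threshold=128):
        """Map dark pixels to 'Z' and light pixels to '*', then drop
        everything to the right of the last dark pixel in each row."""
        img = Image.open(path).convert("L")  # grayscale, one value per pixel
        w, h = img.size
        px = img.load()
        lines = []
        for y in range(h):
            row = ["Z" if px[x, y] < threshold else "*" for x in range(w)]
            last_dark = max((x for x in range(w) if row[x] == "Z"), default=-1)
            lines.append("".join(row[:last_dark + 1]))
        return "\n".join(lines)

    print(image_to_ascii("input.png"))  # "input.png" is a placeholder path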

    When applied to moving images (video), you can say that this image represents a single frame (or "sample") of the video. If the input video is black and white, then this will effectively perform edge detection on it too. My current hypothesis is that the overall edge-detection problem is simply a matter of developing the right algorithm, and isn't necessarily a machine-intelligence problem.

    The reason I posted this thread, however, is to find SciForumers who've actually worked with computer vision before and see what you guys have to say about the matter. I was reading the wiki article on it, and it explained that edge detection is actually extremely difficult.

    PS -- I welcome input from anybody that wants to post (even people who don't have any knowledge of computer vision or programming and just want to share their ideas)
     
  2. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Real sight is based upon a neural network that applies an overlay of various visual directions or patterns. What I mean is that one node will view all horizontal lines while another views vertical and another might view diagonal; there is then of course colour matching, which works similarly, and all of this is duplicated for the other eye (since we usually have two).

    A proof of concept for this is looking at a colour-blindness test and identifying what numbers you can see, or scanning a sheet of numbers for a particular numeric value, or playing a computer game where you need to rotate coloured blocks to generate pattern matches.

    It's possible to apply the same method to artificial sight; obviously, though, an enriched method will require neural networking, which means multiple processes combining their output to generate a "fuzzy" outcome. That is probably a little more work than you care to put in.
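
    A crude sketch of that idea in code, for the programmers here (my own illustration with NumPy/SciPy, using classic textbook line-detection kernels; the three orientations and the kernel weights are standard image-processing choices, not anything specific to biology):

    import numpy as np
    from scipy.signal import convolve2d  # pip install scipy

    # Each "node" responds to lines of one orientation.
    KERNELS = {
        "horizontal": np.array([[-1, -1, -1],
                                [ 2,  2,  2],
                                [-1, -1, -1]]),
        "vertical":   np.array([[-1,  2, -1],
                                [-1,  2, -1],
                                [-1,  2, -1]]),
        "diagonal":   np.array([[ 2, -1, -1],
                                [-1,  2, -1],
                                [-1, -1,  2]]),
    }

    def orientation_maps(gray):
        """Run every orientation 'node' over a grayscale image and
        return one response map per orientation."""
        return {name: np.abs(convolve2d(gray, k, mode="same"))
                for name, k in KERNELS.items()}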
     
  4. Google AdSense Guest Advertisement



    to hide all adverts.
  3. domesticated om Stickler for details Valued Senior Member

    Messages:
    3,277

    That is a really interesting idea actually. I could attach each of those tasks to a different dedicated process that runs simultaneously and outputs its data to a central "analysis process" (I guess I can call that the "chalkboard"). I would only need to write one "line finder" node process, then instantiate it however many times with the obligatory parameters... like a single LineFinder class with parameters that tell it how to look for stuff per axis.

    Only three are needed (horizontal, vertical, diagonal), right? That seems a bit odd... the vertical and horizontal processes would rarely be used, and the diagonal process would be working all the time. Would you happen to know whether the horizontal/vertical axes have wider margins in how they are defined? Example: a line whose angle is within +/- 3 steps of the horizontal axis is still considered horizontal... or is it strict?
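
    One way to picture the tolerance question in code (a hypothetical sketch; the LineFinder name comes from the post above, and everything else is an assumption): give each instance a target orientation and an explicit angular tolerance, so "horizontal" means "within so many degrees of horizontal."

    import numpy as np

    class LineFinder:
        """One 'node': claims a line segment whose angle falls within
        tolerance_deg of its target orientation."""

        def __init__(self, target_deg, tolerance_deg=3.0):
            self.target_deg = target_deg
            self.tolerance_deg = tolerance_deg

        def matches(self, p0, p1):
            dx, dy = p1[0] - p0[0], p1[1] - p0[1]
            angle = np.degrees(np.arctan2(dy, dx)) % 180.0  # fold to [0, 180)
            diff = abs(angle - self.target_deg)
            return min(diff, 180.0 - diff) <= self.tolerance_deg

    # One class, instantiated once per orientation, as suggested above:
    nodes = [LineFinder(0), LineFinder(45), LineFinder(90), LineFinder(135)]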
     
  4. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    To domesticated om
    I post to make you aware that humans don't do "oriented edge detection," and perhaps your robotic application should not either. (Mother Nature has often found the best way to process information, especially for recognition regardless of lateral displacement or rotation.)

    Here are two pictures of President Lincoln:

    [image attachments not shown]



    From: http://www.ajronline.org/cgi/content/full/190/5/1396. The authors do not know much about how the brain functions, but this is a good introduction to the transformation of our normal 2D images, with some discussion of its use in medical image processing.

    The one on the right is quite like the one constructed in V1 of your brain. Actually, rather than Fourier's trigonometric basis set (infinite in extent), the brain uses a basis set much more like the Gabor functions.
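
    For the programmers in the thread: a Gabor function is just a sinusoid windowed by a Gaussian, so, unlike Fourier's infinite sinusoids, it is localized in space. A minimal sketch in Python/NumPy of the standard definition (my own illustration, with arbitrary parameter defaults):

    import numpy as np

    def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
        """2D Gabor function: a cosine grating of the given wavelength and
        orientation, windowed by a Gaussian envelope, so it is localized
        in space as well as tuned in spatial frequency."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)  # rotate to orientation
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * xr / wavelength)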

    Hubel and Wiesel got the Nobel Prize for discovering that V1 processing is to a large extent "oriented edge detection,"* but their experimental samples (sharp-contrast single linear boundaries) presented to monkeys are a special case that allowed them to draw that conclusion.

    I will not attempt to fully explain here why we do not do "oriented edge detection" but instead immediately convert the image data into a frequency-space distribution, similar to the Fourier (or Gabor) transform.

    The absolutely definitive experimental proofs of this have to do with samples that have the same edges but different phase relationships. They would not produce any different responses if it were an edge-detection process, but they do, as the phase of the set of linear edges is important. If you want to fully understand how we know it is not line detection but a transform of the retinal information into a spatial-frequency space, read the chapter on that here:

    Spatial Vision, first edition, Oxford Psychology Series No. 14, by R. L. De Valois & K. K. De Valois, Oxford University Press (ISBN 0-19-505019-3).

    -------------
    *And most people who know anything about visual processing still at least speak in those easier-to-understand terms.
     
    Last edited by a moderator: Jul 10, 2010
  5. domesticated om Stickler for details Valued Senior Member

    Messages:
    3,277
    Thanks Billy T.
    I also noticed that the AForge.NET framework appears to have a ready-made Fourier transform to work with, in case I shouldn't bother trying to reinvent the wheel... but I think I'll write my own anyway, just to get a good understanding of what can be done.
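
    For anyone who wants to try the same thing, a naive 2D discrete Fourier transform is only a few lines of Python (educational only; it is far slower than a real FFT such as numpy.fft.fft2, which the last line uses as a cross-check):

    import numpy as np

    def dft2(image):
        """Naive 2D DFT via separability: transform every row, then
        every column of the result."""
        def dft1(v):
            n = len(v)
            k = np.arange(n)
            basis = np.exp(-2j * np.pi * np.outer(k, k) / n)  # exp(-2*pi*i*k*m/n)
            return basis @ v
        rows = np.array([dft1(r) for r in image.astype(complex)])
        return np.array([dft1(c) for c in rows.T]).T

    img = np.random.rand(8, 8)
    assert np.allclose(dft2(img), np.fft.fft2(img))  # agrees with the library FFT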
     
  6. kriminal99 Registered Senior Member

    Messages:
    292
    That's not true. Our eyes do edge detection mechanically. It's like having separate inputs of an outline of the image and the actual image before our brain even begins to process the image.

    Anyway, this information about human beings isn't necessarily useful to this guy.

    You can take all the pixel values and apply a filter to detect edges. One option is to just replace each pixel with the result of cross-subtracting 4 pixels, i.e. if you had

    1|5
    ---
    4|8

    Just replace it with |5-4| + |8 - 1|
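
    (That cross-subtraction is essentially the classic Roberts cross operator. Here is a short NumPy version of exactly the recipe above, vectorized over every 2x2 block:)

    import numpy as np

    def roberts_cross(gray):
        """For each 2x2 block  a|b
                               ---
                               c|d
        output |b - c| + |d - a| (the Roberts cross operator)."""
        g = gray.astype(float)
        a, b = g[:-1, :-1], g[:-1, 1:]
        c, d = g[1:, :-1], g[1:, 1:]
        return np.abs(b - c) + np.abs(d - a)

    # The worked example above: |5 - 4| + |8 - 1| = 8
    print(roberts_cross(np.array([[1, 5], [4, 8]])))  # [[8.]]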
     
    Last edited: Nov 16, 2010
  7. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    I would like to know more specifically what part of my post you quoted is "not true," but I agree that a great deal of processing takes place in the eye and that much of it is like "edge detection."

    This retinal processing is needed, as the information-transfer capacity of the optic nerve is much lower than the rate at which information is captured by the retinal cells - something like 50 times less! I have always considered this retinal processing to be "very efficient data compression," but I do agree that something like "edge detection" is a major part of it, as in retinal image regions where there is little intensity or color variation, most of the retinal impulses from that region contribute nothing to the optic-nerve data stream. There are some very interesting "filling in" psychological experiments that prove this "not sent" information is later "filled in" by the brain. (More than just the well-known "blind spot" fill-in.)

    As I believe what we visually perceive is created in parietal tissue, not the "emergent" end result of many stages of neural, computational transforms of the retinal input signals (or their compressed version in the optic nerve) as the accepted POV states, I can explain why one's perception of the space in front of them is essentially equally sharp over a wide angle, instead of only over the narrow cone (less than 2 degrees) that the high-resolution fovea is processing.

    The accepted "perception emerges" idea of visual perception has the impossible task of explaining how "garbage in" for all but less than 2 degree solid angle makes the same quality perception as the fovea processed area does over about a 100 degree solid angle. My "crackpot" POV about how perception is achieved, easily explains this and dozens of other psychological and neurological facts that the accepted POV about perception cannot.

    It also offers a very plausible explanation of several somewhat mysterious historical events, such as the "Out of Africa explosion" (when one small group of hominids quickly dominated all others) and how our ancestors were able to kill off the bigger-brained and much stronger Neanderthals.

    To quickly understand some serious flaws of the accepted theory of perception, I hope you will first read my short post at: http://www.sciforums.com/showpost.php?p=2502342&postcount=12

    Then to understand my alternative POV about perception read my longer “RTS essay” at: http://www.sciforums.com/showpost.php?p=905778&postcount=66

    You can ignore what it has to say about opening the possibility that genuine free will need not be inconsistent with the deterministic laws of nature – that is a minor point that just "falls out" of my RTS theory of perception.

    You obviously know a great deal about visual processing so I hope you will read both and comment.
     
  8. kriminal99 Registered Senior Member

    Messages:
    292
    A little... My main area of interest is the classification problem, and I studied image processing for a while, as images are one of the most interesting types of data to classify. I have also studied neural networks, and learned some about the human brain while learning to process functional MRI data, etc. I also studied statistics and philosophy in the past.

    A professor I knew had a hypothesis that brain sulci were caused by white-matter-generated tension, implying that when parts of the brain became connected, they pulled themselves together somewhat.

    Hebb, as in Hebb's rule of neural networks, hypothesized that neural networks based on the principle of his rule (aka long-term potentiation) would generate "engrams," or representations of inputs in the neural networks.

    However, he was not implying those engrams would have a certain size or shape, meaning there could be a local representation of an input that would then get pulled apart as that input was related to various other percepts/concepts.

    How far away are the various visual processing centers in terms of signal-transmission time? Maybe they don't have to be reassembled, because maybe they were never really apart? Maybe their separation is in fact data-driven. Unless you believe in intelligent design, there is little other explanation.

    Regarding 3d perception, the tiny high resolution circle etc. I am confused, because to me both of these things seem like illusions.

    We know that it is possible to create a 3D model by having two cameras a short distance from each other taking pictures in the same direction. We also know that humans like to associate things. The strength of association between the image of an object and its likely appearance from other angles is strong, as we see this ALL THE TIME. Even if it is a new object, it is probably similar to a geometric shape we have seen rotated before.

    Association explains a lot of things, like the size correction you mentioned in the case of the father and son. Other similar examples with other aspects of vision, and even other senses, exist. There is an optical illusion where there is a chessboard with pieces on it, and the shadow of a piece over a white square is exactly the same shade of gray as one of the dark squares outside the shadow. However, it doesn't look that way at all. Saving the image to Paint, extracting the color, then painting a path from the shadowed white square to the dark square just about makes your brain explode. IMO, paranoia is another example of the same thing happening with sound: someone a short distance away says "Ow, my leg hurts," but your brain processes it as "the guy in the blue shirt" when you happen to be wearing a blue shirt and turn to look at them to see if they are looking your way.

    Picture: http://en.wikipedia.org/wiki/File:Grey_square_optical_illusion.PNG


    So we are looking at 2 slightly different angles of an object at the same time, and a view from any angle invokes memories of the others. Other than these two things, I don't feel like I have a 3D perception. I know how long/far to move my hand out to interact with something because I see my 2D hand from 2 different angles get smaller at the same rate it always does, the same amount I always do before I feel whatever is at a given distance. I know the distances by looking at the object's current shape, guessing its up-close size from past experience, etc. But who says I have a real 3D image at any given instant?

    I also don't see a wide angle of high-resolution imagery. I don't really think about this fact normally, but when put to the test it's obvious I have a narrow angle of high-res view. As I type this with sight fixated on this word, I can't read the text on either edge of the box, or more than a couple of lines above. However, I can quickly move my focus up there, I can still make out large shapes up there, and I have a strong instinct of what is probably up there because of association.

    I am not really sure if I am disagreeing with you or not, but it seems to me that we have all those limitations you infer from study of the physical properties of the eye and brain. We are simply inclined not to notice those limitations or think of them that way.
     
    Last edited: Nov 19, 2010
  9. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Thanks for reading my RTS essay and telling some of your background. I have a Ph.D. in physics from Johns Hopkins and was employed for 30 years in their Applied Physics Lab. Perhaps 1/3 of my work was related to a dozen or so biomedical projects (mainly implanted devices). I assisted several brain surgeons at the hospital, especially in the last half of my career, usually supplying electrodes, stimulators, etc. For more than a year, I spent my weekends as a volunteer working with one of them at his primate lab (~50 rhesus monkeys in several neurological studies). On one occasion he was called back to the OR as a head-injury car-accident patient was inbound, and I had to complete the half-done brain surgery in progress.

    There is little to support this POV [that sulci are caused by white-matter tension] and much against it. For example, the largest of all bundles of white fibers, the corpus callosum, is highly curved - clearly without significant tension in it. "... Its anterior portion (genu) arches forward and downward thinning out to form the rostrum. … The posterior thick portion (splenium) is bent acutely …" from Frank Netter's classic CIBA collection of medical illustrations. (A 1953 edition that belonged to my MD father.)
    (By "acutely" they mean with a radius of curvature of about 3 cm!)

    Another simple proof that this POV is wrong is the fact that the brain is very soft. It cannot support tension any more than a bowl of jello can support the tension of a stretched rubber band inside it.

    I believe the accepted POV is correct: the entire cortex has essentially the same six layers of distinct cells. It is only a few mm thick, with up to 100 cells arranged in columns. It is a late-in-evolution addition to the brain, with increasing area as the creatures became more complex. With the deep sulci found in man, evolution packed at least four times more cortex inside the skull than the simple surface area of the brain would allow. Fitting more cortex inside the limited skull is why the sulci exist.
    I am not completely sure, but I think many of the separated regions that process various "characteristics" of the visual signals have no direct connections. If there are any, the signal speed would depend (by at least a factor of two) on whether or not the white-fiber connections were myelinated. I believe that if information were to go, say, from V4 to V5, it could only do so by going back through V2 (or V1); i.e., I think the "tree model" of these branching connections is correct.
    In some sense everything is "illusion," as we cannot be 100% sure there even is an "external world" out there, but, like you, I assume there is. I will compress much of your remaining post about image resolution into this part of it:
    That is completely correct. For example, move printed text 5 degrees off the fovea's direction and you cannot read it. You lack there the high resolution that you have in the part of the visual field the fovea is looking at. My statement that your PERCEPTION is equally clear over a wide (~100-degree) field of view does not contradict this fact. It is your perception of the world that I believe is created in the parietal brain.
    There is little reason to believe we store any image of any object. Instead, it is more likely that, using the deconstructed "characteristics," sort of like a list, we reconstruct the image to perceive it.

    Before we can do that, we must parse the continuous visual field into "objects." I discuss how that is done in detail in my original JHU/APL paper (ref 1 of my RTS essay).

    Briefly, it trades on the fact that like-orientation "edge detector" cells are mutually stimulating and orthogonal ones are mutually inhibiting (that is a known fact). As all unobstructed objects have a closed perimeter edge, the cells responding to that edge (I suggest) fall into synchronized firing patterns,* producing the "40 Hz" synchronized oscillations that were observed (with micro-electrode pairs) in cells of V1 several mm apart. I put this observation together with my knowledge of the difference in mutual interactions between like and orthogonal edge detectors to suggest that this is how one parses the uninterrupted visual field into separate but still unidentified "objects."

    In the original JHU/APL paper, I also discuss how this works for parsing an object (such as a black cat behind a picket fence) which is partially obscured by a foreground object, and I give a set of photos demonstrating my theory. The first photo is a field of separated objects for you to count before turning the page. On the next page one more object is added, but your object count dramatically decreases, as the new object is now understood to be in the foreground, obscuring parts of continuous objects you previously counted as separate. I.e., I explained at the neural-interaction level how and why the Gestalt "law of good continuation" works. You may want to request a copy of Ref 1 of the RTS essay to read these details, and also my ideas as to how the once-parsed objects are identified.
    Yes, This “local contrast” effect is well known and applies to colors also. For example two identical pink squares, one inside a larger deep red area and he other inside a white area look entirely different. (The one inside the red area appear much lighter.) Both color and “greyness” are perceived relative to what is nearby. For second example: a black sheet of paper in full sunlight appears black even though it is reflecting much more sunlight to your eye than a white sheet of paper in a dark shadow region is. Land (of Polaroid fame) built his “retenx” theory of color mainly on this.
    You have about a dozen other ways to judge 3D. The one-eyed person cannot use this one (except as he moves his head). Also, the angular difference between the two eyes is too small to sense beyond about 25 feet, in part because your eyes, even when "staring," have a constant fine-scale jitter. Images artificially fixed on the retina disappear in pieces without this jitter. I.e., a line drawing of a square will become a "U" or a "square C," etc., as the various side lines cease to be seen for a few seconds. This is due to the illuminated retinal cells becoming fatigued (and to the fact, discussed above, that these co-aligned edge detectors are mutually reinforcing: they all fire together when the line is seen, and none are seen when they are tired and firing more slowly at random. Never will only part of one edge of the square be seen; it is "all or none"). Many "psycho-neurological" facts become clear when you understand how they are created at the neurological level. This "all or none" seeing of linear segments of retina-fixed objects is, I think, strong evidence for my POV as to how the visual field is parsed into objects.
    Yes, but I would put it slightly differently. We are creating what we perceive and use our prior knowledge in doing this. This both compensates for some of the true sensor defects / limitations and also allows us to see what is not there – i.e. illusions.

    -----------------
    * This is exactly like what happens if you connect two AC generators together. If one were running at 60.01 Hz and the other at 59.99 Hz, they will rapidly settle to one common frequency when mutually interacting. The "40 Hz" oscillation just got that name because that was the frequency mentioned in the original paper reporting this synchronized oscillation of cells several mm apart. In fact, the oscillation of the perimeter loops is typically a frequency between 25 and 70 Hz. I.e., I suggest objects are parsed and "separately tagged" by a unique frequency in this range as the first step in recognizing "objects" in the continuous field of visual-cell stimulation in V1.
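
    For the programming-minded, the pulling-to-a-common-frequency behavior is easy to see in a toy simulation. Here is a minimal Python sketch of two weakly coupled oscillators (a generic Kuramoto-style coupling of my own choosing, just to illustrate the generator analogy, not a model of neurons):

    import numpy as np

    # Two oscillators with slightly different natural frequencies (Hz),
    # each nudged toward the other by a sinusoidal coupling term.
    f = np.array([60.01, 59.99])
    K = 0.5                       # coupling strength, 1/s
    phase = np.array([0.0, 2.0])  # arbitrary starting phases, rad
    dt = 1e-3

    for _ in range(20000):        # simulate 20 seconds
        coupling = K * np.sin(phase[::-1] - phase)  # each feels the other
        phase = phase + dt * (2 * np.pi * f + coupling)

    # Once locked, both run at the mean frequency, ~60.00 Hz:
    print((2 * np.pi * f + K * np.sin(phase[::-1] - phase)) / (2 * np.pi))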
     
    Last edited by a moderator: Nov 19, 2010
  10. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Note on 3D:

    Looking Around Corners using Femto-Photography
    http://web.media.mit.edu/~raskar/femto/

    Interestingly enough, one scenario they've missed is the application to creating 3D cameras: through one lens you could layer both a standard 2D photograph and a femto-photograph at the same time, generating a 3D input. (There is another scenario involving observing the perspective of an atom, but that's a little off-topic and still needs a proper discourse.)
     
  11. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Radars do something quite like this to make 3D data of the airspace around a ship, including, to some extent, beyond the line-of-sight horizon. (The near-sea-surface beams especially bend towards the center of the Earth as they travel away from the ship. You can see the sun setting even though it is already well below the straight-line horizon, but this visual effect, unlike beyond-horizon radar, does not depend upon precise timing.)

    Bats do it even better. They build a 3D understanding so well with their sonar that at about 30 feet they can tell which of dozens of tiny bugs is the good-tasting one. When I worked at APL, there was an extra-secure large room full of fast computers which processed passively recorded sounds of the USSR's subs buried in huge amounts of natural sea noise - I think with the object of hull identification, to aid in keeping track of them, etc. A friend who worked in that room once told me that it took them more than a day to process the same type of sonic information a bat processes in real time with his "computer," which weighs less than 2 grams!
     
    Last edited by a moderator: Nov 19, 2010
  12. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Here, related to post 9, is a little more on the fall-off of resolution as you move away from the fovea, and the fact that you don't perceive that decrease in resolution:

    When reading a book you perceive all of the letters on the page as clear and well defined, but they are not well resolved except in the foveal view area.

    There is an interesting demonstration of this which uses a fast computer and accurate eye-position sensors. The subject is asked to read text displayed on a computer screen and usually knows that some of the letters will be constantly changing. Anyone but him looking at the screen can read nothing, as the screen seems to be just a sea or jumble of rapidly changing letters.

    It is not uncommon for the subject to say after reading some part of the text: “OK, I’m ready. Start scrambling the letters.” When they all, except for a few* in the line his fovea is looking at, are already continuously and rapidly changing! – His perception is of a static page full of steady letters.

    *This has been used extensively in reading research to learn that you only see clearly a couple of letters to the left of fixation and perhaps four to the right of fixation. If you are a rapid reader, with large saccades, more than half the page you never saw clearly!

    ------------------
    Also, almost everyone can read at least 10 times faster than they do, as the saccades take up most of the time. (Vision is totally suppressed during a saccade.)** I.e., a computer can rapidly display the text to be read at the same point on the screen so you don't make any saccades. In one case, the subject's reading rate was faster than the testing capacity of the high-speed display system (a few letters of the text were displayed in only one frame, then a blank frame, before the next single frame's few letters. Why a blank frame is needed between sets of letter displays, I don't know, but it is.)

    If you want to read more about this search on Rapid Sequential Reading. A lot of the original work on this was done at JHU hospital, I think, back when I was a frequent visitor there.

    ** As a simple demonstration of this total suppression, look at one eye while standing close to a mirror. You can see in that image which eye you are looking at. Then switch to look at the other eye. Never will you see either eye move - not even a blur as they do so!
     
    Last edited by a moderator: Nov 19, 2010
  13. kriminal99 Registered Senior Member

    Messages:
    292
    Well, this person was fully aware of the corpus callosum, as we talked about it often. You would have to speak to him for details; however, I can see various ways the theory could be compatible with your specific points. I didn't say how much tension (he was obviously talking about a tiny amount), and axons can connect to more than one neuron, so a curved pattern is not out of the question. He intended to study a fly brain to watch it develop. He worked at Johns Hopkins as well, BTW; perhaps you met him.

    The "accepted" POV you mention doesn't explain anything. It's more like a single weak observation - compatible with just about every other theory out there including the one I mentioned. How does this explain similar foldings and connectivity? Just copies of how the original brain folded in on itself? So if it folded differently then there would be different wiring in the brain? Or they would just find alternative awkward ways to wire to where they needed to go? Is the brain spaghetti wired or is it pretty well organized?

    The tree structure you mention does not contradict a hypothesis about local representations of inputs. Nor does a lack of neuronal resolution - lessening resolution in computer image processing is known to make the task of image registration easier. Exactly how a local representation would be disseminated in such a situation would depend on what each part of the representation was associated with. Different visual "processing" regions would simply represent visual information associated with a different combination of other regions responsible for things like detecting if something was going to hit you, or if it was good to eat, etc etc.

    People with one eye have problems with depth perception. If there are so many ways to perceive things in 3d, why would this be the case? When I mentioned the inability to read the text, I wasn't implying that I found a trick to fool the illusions. I don't really have this illusion. It's obvious I can't see anything with high resolution outside of that tiny angle. I just don't pay much attention to that fact. I do however have the ability to determine what is going on in my peripheral vision when I focus on doing so, just like you might use logic to score a little higher on a vision test than you could from obvious sight.

    The personal problem I have with your point of view is: where does all the brain complexity and design come from? I am not a religious man. I see a theory where the brain represents a data-mining algorithm capable of automatically generating the brain's level of complexity when operated on worldly inputs, capable of explaining all that brains are capable of, like adapting to different inputs or outputs, regenerating lost functions using remaining tissue, etc.

    I know angular difference is minimal at 25 feet. But then, at that distance why would we need to perceive the object in 3d? It's not going to immediately hit, scratch or bite us...
     
    Last edited: Nov 19, 2010
  14. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    Again I will quote small part of your post and respond to it.
    The quick answer is: from information coded in your DNA, for all of the "gross structure." By that I mean structure visible to the unaided eye, although even at that level there are individual differences (exact locations and shapes of minor sulci) if you look carefully.

    In this the brain is no more complex than, say, the heart. Four chambers, specific flow paths, many valves with the same number of "flaps" oriented the same way. The same arterial structure (right descending artery, etc., all in essentially the same locations). The "bundle of His" nerve fibers sending the electrical impulse down from the sinoatrial node to trigger the other chambers, and lots of other things, all specified by the DNA, which are essentially the same for all humans. This is not my field, but I am sure a heart surgeon could match the complexity of the brain's gross structure point for point with the DNA-specified gross structure of the heart.

    As for how nerve axons make their connection in the fetus that is complex, but also under DNA control (achieved by chemical signals) to a large extent. Many of the connections will be eliminated in the first years of life as they prove not to be useful.

    I remember reading about an interesting experiment on salamanders years ago. As memory is not perfect, I will invent some of the details:

    The left eye of a young one and its entire optic nerve tract were excised and re-implanted where the right eye had been. Months later it was fully functional, just like a salamander blind in one eye. Then the animal was carefully dissected, and they found that the optic nerve had grown to the brain exactly as if it were still in the left eye socket.

    I think they too send half of the retinal field of each eye to opposite sides of the brain. (Outside half of the retina to the same side of the brain, and the nasal half of the left retina to the right brain, etc.) I.e., despite being on the wrong side of the head, the eye connected up to the visual cortex as it should have if still in its original position! There are chemicals that lead the growing nerves to roughly where they should connect. The secretion of these unique chemicals is also under DNA control.

    SUMMARY: The gross structure of human brains being identical is no more strange or complex than the gross structure of other organs, like the heart or the eye as that is all programed development using information of the DNA.
     
  15. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    To see that the heart's detailed gross structure (everything you can see with the unaided eye) is prescribed by DNA in just as complex and detailed a way as the brain's gross structure, consider this new discovery:

    "... Discovery of a fifth gene defect and the identification of 47 DNA regions linked to thoracic aortic disease are the subject of studies released this month involving researchers at The University of Texas Health Science Center at Houston ..."

    From: http://www.therapeuticsdaily.com/ne...e=745717&contentType=newsarchive&channelID=26

    Note, in my ignorance, I did not list any heart muscles among the detailed features specified by the DNA. I know they are a unique type, like no others in the body, and given that they don't attach to any bones, how they contract to make a life-long efficient pump is amazing. I wish I knew more, but the more I think about it, the more it seems to me that the gross structure of the brain is simple compared to that of the heart.
     
    Last edited by a moderator: Nov 22, 2010
  16. Blindman Valued Senior Member

    Messages:
    1,425
    The brain is vastly more complex than the heart. Simply consider the weight difference: the heart is a quarter the weight of the brain.

    Secondly, the brain has a much lower entropy than the heart, even if they had the same mass. Information processing requires low entropy.
     
  17. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    that "logic by the pound" is a very strange way to judge complexity. By that logic the skin is the body's most complex organ as it weighs more than any other organ. Is that your only argument for the brain's "gross structure" being more complex than the gross structure of the heart?

    I think there are many ways to compare gross levels of complexity*, but central to most would be the number of different types of gross structures specified by the DNA. If man understood better what the various parts of the DNA were specifying, I would accept, as a means of comparing complexity, how many "codons" of DNA are needed to specify a heart vs. how many are needed to specify a brain.

    ---------------
    * "Gross level" again being defined as visible to the unaided eye. Not the brain's very numerous interconnections between neurons, that can be seen with staining and a microscope. I agree on that fine scale the brain is much more complex than man's biggest computer network, but that set of connections is not DNA specified.
     
  18. Blindman Valued Senior Member

    Messages:
    1,425
    You have missed the point, Billy. It is the lower entropy of the brain that gives it its complexity. We don't measure complexity by counting atoms; we measure entropy. The brain is by far the biggest energy user of the human body. We decrease the entropy of the brain by increasing the entropy in such organs as the skin.
     
  19. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Messages:
    23,198
    I don’t want to get too far into this off-thread point, but generally speaking, low-entropy objects don’t make a lot of heat. Objects that do are normally making the greatest INCREASE in entropy. You have just noted that the brain, on a pound-for-pound basis, makes much more heat than any other part of the body. Thus, I could turn your statements around to claim the brain is the highest entropy-producing organ of the entire body.

    What is your basis for claiming the brain is the lowest-entropy organ? Can you show that all that heat production is because energy is being used to lower entropy? I don’t think so, but you can try. I think that heat production occurs because almost all brain processes are IRREVERSIBLE.*

    Your last sentence makes no sense to me. How does increasing entropy of skin decrease entropy of the brain?

    ------------------
    * For example, consider the most basic property of a nerve: it fires. Each such cell has a "sodium pump" which pumps Na+ ions from the interior to the outside. That pump takes a lot of energy to make the interior go to about -70 mV - to fight the natural diffusion which would make the Na+ concentrations on both sides of the cell wall equal. When a nerve fires, the Na+ ions rush back into the interior (briefly driving the voltage there slightly positive, until the Na pump can restore it to -70 mV again).

    This is like one team of men carefully stacking up a tall tower of bricks only to have another knock it over again - a huge waste of energy and great entropy production in both cases. Of course the brain is an "energy hog": it is making entropy increase with EVERY NERVE DISCHARGE! (Every one a thermodynamically irreversible, entropy-increasing process.)
     
    Last edited by a moderator: Nov 22, 2010
  20. Blindman Valued Senior Member

    Messages:
    1,425
    The skin is the heat exchanger. The brain does a lot of work because it has low entropy; heat is moved to the skin. The body maintains the low entropy of the brain by exchanging heat into the environment via the blood and skin.

    BTW, a very hot object in a cool environment has lower entropy than two objects at the same temperature.

    I think you should understand what entropy is before responding.

    Back to edge detection: most edge detection is done in the first few layers of the retina. Specific optic-nerve fibers carry only edge information into the brain. Rods and cones are not directly connected to the brain; signals are processed in the first four layers of nerves in the retina before they travel to the optic bundle. Each nerve in the optic bundle will carry information derived from thousands of rods and cones.

    The brain does not process pixels.
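
    That retinal front end is often modelled as a center-surround filter, i.e. a difference of Gaussians. A minimal Python sketch of that standard textbook model (my own illustration, not a claim about the exact retinal circuitry):

    import numpy as np
    from scipy.ndimage import gaussian_filter  # pip install scipy

    def center_surround(gray, sigma_center=1.0, sigma_surround=3.0):
        """Difference of Gaussians: output is 'center minus surround,'
        so uniform regions give ~0 and edges stand out - roughly the
        kind of compressed signal sent up the optic nerve."""
        g = gray.astype(float)
        return gaussian_filter(g, sigma_center) - gaussian_filter(g, sigma_surround)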
     