Cellular Computing

Discussion in 'Computer Science & Culture' started by ElectricFetus, Jun 4, 2003.

Thread Status:
Not open for further replies.
  1. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    I have been reading up on the Playstation 3 and found this very interesting; here is an article on it:
    http://news.com.com/2100-1001-948493.html?tag=fd_lede

    Let's do a brief run-over here: most CPUs are serial, following only one path and executing only one instruction at a time per step of their pipelines. This is very inefficient; take the human brain, which runs at a millionth the speed of an average computer but performs a million different operations per instant on a million different pathways, while the computer does only one per instant. To get around this there is parallel computing, in which many CPUs work together in a server or big-ass mainframe. That, though, is very expensive and space-consuming. A more recent idea is to put many processors on one slab of silicon (in one chip); this way you could have, say, 16 tiny 4 GHz CPUs doing 16 instructions per instant instead of one, thus multiplying the performance. The concept can be extended beyond single-die parallel computing: why not have the chip share computing power with other compatible systems (this is called distributed computing)? These combined ideas are cellular computing, which provides many other advantages:
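    The split-the-work idea above can be sketched in Python (my illustration, not anything from the article; Python threads share one interpreter lock, so this shows the decomposition of work across "cores" rather than a real hardware speedup):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def partial_sum(chunk):
        # each "core" handles only its own slice of the data
        return sum(chunk)

    def parallel_sum(data, workers=4):
        # split the job into one chunk per worker, run the chunks
        # concurrently, then combine the partial results
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(partial_sum, chunks))
    ```

    The same shape (scatter the work, gather the results) is what distributed-computing projects do across whole machines instead of cores.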

    Makes production cheaper and adds redundancy: As is, more than 30 percent of CPUs manufactured are flawed and rejected. As CPUs become ever smaller and more complex, their rejection rate goes up, reducing yield and increasing cost. With a single-die parallel processor that has redundancy, any failed core can be routed around, so a 16-processor CPU can survive the total failure of one or more of its processors (of course it would no longer be a 16-processor CPU, but a 15-or-fewer one, and would have to be sold as a lower-grade CPU).
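    The yield argument can be made concrete with a toy calculation (my numbers, purely illustrative; real defect models are per-area, not per-core, and the 5% per-core defect rate below is an assumption): if core failures are independent, the chance a die is sellable jumps once you can bin a 16-core part with one dead core as a 15-core part.

    ```python
    from math import comb

    def binning_yield(cores=16, p_defect=0.05, min_good=15):
        # probability that at least `min_good` of `cores` cores are
        # defect-free, assuming independent per-core failures
        # (hypothetical defect rate, for illustration only)
        p_good = 1.0 - p_defect
        return sum(comb(cores, k) * p_good**k * p_defect**(cores - k)
                   for k in range(min_good, cores + 1))
    ```

    With these made-up numbers, requiring all 16 cores good gives roughly a 44% yield, while accepting 15-of-16 raises it substantially, which is exactly the "sell it as a lower grade CPU" trick.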

    Efficiency and Redundancy: With each processor having its own memory bus, and perhaps its own controller, say goodbye to the northbridge motherboard controller, or even the southbridge; scarier still, say goodbye to the graphics card! Yes, the PS3 will have nothing but a cellular CPU and no GPU. So Nvidia and ATI might be in real trouble when this stuff hits.

    Down with Monopolies? I already mentioned how this might make the video card obsolete, unless Intel (and AMD) come up with their own competing parallel-processing CPUs (Intel's latest P4s already have built-in parallelism on a single CPU, though it only emulates 2 processors). Parallel processing is something most programmers have been preparing for; even so, Unix and Linux are much better suited to arrays beyond 32 processors. Still, I doubt Monopolosoft, er, I mean Microsoft, would be caught with their trousers down; they will most likely be totally ready for this when it's out in 1-2 years.
     
  3. Blindman Valued Senior Member

    Messages:
    1,425
    Ahh Parallel software. Shudder……

    In the early 1990s I was lucky enough to work on an array of 16 INMOS T800s (Transputers), delivering a top speed of 320 MHz (each T800 ran at 20 MHz; very fast for its day).

    Yet we never got a processing rate of 320 MHz. It always took time to move data around, so there were always processes waiting for data and CPU cycles lost.

    Also, I found that increasing or decreasing the processor count required rewrites to make the most effective use of the new setup. It was not as simple as just adding CPUs to increase the processing speed.

    Programming was also more difficult. I used a language called OCCAM, plus assembly.
    It had some great semantics, such as:

    PAR -- run the following lines in parallel (concurrently, on one or more processors)
      A := A + 1
      B := B + 1

    SEQ -- run the following lines in sequence
      A := A + 1
      B := B + 1
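    For anyone who hasn't seen occam, the PAR/SEQ distinction maps onto thread fork-join in mainstream languages. A rough Python sketch (my translation, not occam; `par` and `seq` are hypothetical helper names):

    ```python
    import threading

    def par(*tasks):
        # PAR: start every task concurrently, then wait for all to finish
        threads = [threading.Thread(target=t) for t in tasks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def seq(*tasks):
        # SEQ: run each task to completion before starting the next
        for t in tasks:
            t()
    ```

    The key difference from occam is that occam made this choice a first-class part of the language, checked by the compiler, rather than a library bolted on afterwards.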

    But it could produce some horrific bugs.
    The worst to find were communication deadlocks, in which the complete network would stop because one process halted while waiting for data from another process that in turn needed data from the first (the simplest possible deadlock). A software solution could run for hours and then suddenly halt, and it was almost impossible to trace.
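    The circular-wait shape described above, and one standard cure for it, can be shown with locks in Python (my sketch, not from the transputer work; `transfer` is a hypothetical name):

    ```python
    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    # The classic deadlock: thread 1 holds lock_a and waits on lock_b,
    # while thread 2 holds lock_b and waits on lock_a; both wait forever.
    # One standard cure is a global lock order: if every thread acquires
    # its locks in the same fixed order, the circular wait cannot form.

    def transfer(first, second, work):
        # acquire both locks in a fixed global order (here by id(),
        # an arbitrary but consistent tiebreak), regardless of the
        # order the caller passed them in
        lo, hi = sorted((first, second), key=id)
        with lo:
            with hi:
                work()
    ```

    With this discipline, two threads calling `transfer(lock_a, lock_b, ...)` and `transfer(lock_b, lock_a, ...)` at the same time both complete instead of hanging.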

    I think that parallel processing has not really taken off due to the difficulty of programming scalable software.

    A 1 GHz processor will always outrun an array of 1,000 1 MHz CPUs, or even two 500 MHz CPUs, with the same total clock. Not to mention the vastly different software solutions required to take advantage of the different CPU counts.
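    There is a standard way to quantify why the single fast processor wins: Amdahl's law (my framing, not the poster's). If a fraction s of a program is inherently serial, N processors can speed it up by at most 1/(s + (1-s)/N), so even a tiny serial fraction caps what a thousand slow CPUs can deliver.

    ```python
    def amdahl_speedup(serial_fraction, n_procs):
        # maximum speedup achievable with n_procs processors when
        # `serial_fraction` of the work cannot be parallelised
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)
    ```

    For example, with just 10% of the work serial, 1,000 processors give under a 10x speedup, no matter how many more you add.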

    Anyway, this was 10 years ago and I have not followed the progress of parallel languages since. Things could be a little easier these days.

    Some apps are much more suited to parallel solutions, especially rendering pipelines.
     
  5. ElectricFetus Sanity going, going, gone Valued Senior Member

    Messages:
    18,523
    Parallel computing is quite common today: many Macs are dual-CPU (2 processors), the newest P4s have hyperthreading and can run two threads at once, emulating 2 processors, and almost all servers are built from arrays of processors. Yes, it is true that things are more complicated than that, and more processors also mean more inefficiencies; even so, it is a major performance enhancement.
     