Atom Chip

Discussion in 'Computer Science & Culture' started by Sci-Phenomena, Mar 2, 2006.

Thread Status:
Not open for further replies.
  1. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    Yep, double the exponent and square the storage

    Unfortunately cheap memory seems far more difficult than cheap disk storage. Maybe quantum technology will come to the rescue there ... but until then, having the ability to address 2^64 bytes won't benefit most people.
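The "double the exponent and square the storage" remark can be checked with a couple of lines of Python (my own illustration, not part of the original post): doubling the address width from 32 to 64 bits squares the addressable space.

```python
# Going from 32-bit to 64-bit addressing squares the addressable space,
# since 2**64 == (2**32)**2.
addr_32 = 2**32          # 4 GiB addressable with 32 bits
addr_64 = 2**64          # 16 EiB addressable with 64 bits
assert addr_64 == addr_32 ** 2
print(addr_32)           # 4294967296
print(addr_64)           # 18446744073709551616
```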
     
  3. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Well, in my view the problem with memory isn't so much that it isn't cheap, because really it is, compared to what it used to be. I can easily buy a gig for 100 dollars, which is not really expensive for a larger-scale system. The real problem is that beyond a certain size a computer simply can't access that LARGE an amount of RAM in a reasonable amount of time.

    My prediction, for desktop-type machines, is that we will start seeing processors similar to the Niagara chip from Sun Microsystems. In these types of systems (which I may be putting a paper out on very soon), you would essentially have between 4 and 16 processors in the computer. Some of these may be on the same chip, others may not. The point is that we could run parallel computers: essentially separate CPUs, separate RAM, separate buses, etc., all in the same box. What ties them together would be a single small core and RAM which handle input and output.

    Basically, humans only interface through I/O devices; they don't much care what happens in between. I don't care whether the RAM for my Office application is on the same chip or processor as my copy of Photoshop. What is important to a user is that he can use the same keyboard, mouse, etc., and that it all outputs to the same screen. With recent advances in operating system display technologies, we're actually sending less and less data to the graphics chip, which then does a lot of work to decompress and display it properly. This means that the only place the separate computers would interact would be at input and output. The end user wouldn't know the difference, and they'd have the computing power and responsiveness of running only a single app, or very few apps, on each computer. This is high-throughput computing: useless for parallel applications, but the fact is we don't run a lot of parallel applications on a home computer.

    -AntonK
     
  5. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    I was under the impression that RAM capacity hasn't been growing nearly as fast as hard disk capacity and processor speed. Is this incorrect?
     
  7. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    You're correct, but you have to look at it from a relative perspective. It's expensive compared to hard drive space, but you don't need nearly as much of it. In truth, the major bottleneck for RAM isn't how much of it we have, but how much of it we can get to. It's awfully far away from the processor (in terms of access times). Think about it: we measure cycles in fractions of a nanosecond... that's really small compared to RAM latencies.

    "The term 'memory wall,' first officially coined in Hitting the Memory Wall: Implications of the Obvious, refers to the growing disparity between CPU and memory speed. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance."
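The "cycles vs. RAM latency" gap is easy to put numbers on (back-of-the-envelope figures of my own; the ~60 ns DRAM latency is an assumed value typical of the era, not from the post):

```python
# A 2 GHz core has a 0.5 ns cycle time, so a ~60 ns DRAM access
# costs on the order of 120 cycles of stall.
clock_hz = 2e9
cycle_ns = 1e9 / clock_hz        # 0.5 ns per cycle
dram_latency_ns = 60             # assumed typical DRAM access latency
stall_cycles = dram_latency_ns / cycle_ns
print(stall_cycles)              # 120.0
```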

    A second thing is that the bus is a huge bottleneck. Consider that the processor must access memory to fetch every single instruction, and every third instruction is a load or store. Take these numbers and compute based on a 2 GHz clock speed and you'll see we quickly fill the bus with data. If we try to add a second processor -- whoops. No more bus bandwidth. What do we do now? These are the types of problems I'm trying to tackle for a modern desktop computer.
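Working through that arithmetic (my own numbers, following the post's assumptions of 2 GHz, one instruction per cycle, 4-byte fetches, and a load or store every third instruction):

```python
# Rough bus-bandwidth demand for a single core under the stated assumptions.
ips = 2e9                              # instructions per second (2 GHz, 1 IPC)
fetch_bytes = ips * 4                  # instruction-fetch traffic, 4 B each
data_bytes = (ips / 3) * 4             # load/store traffic, every 3rd instr.
total_gb_per_s = (fetch_bytes + data_bytes) / 1e9
print(round(total_gb_per_s, 2))        # ~10.67 GB/s for one core
```

A second processor would double that demand, which is the point about running out of bus bandwidth.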

    -AntonK
     
  8. Huwy Secular Humanist Registered Senior Member

    Messages:
    890
    Actually both light and alex have good points.

    Major increases in speed take research and innovation. However, alex's point is a good one: manufacturers produce huge "batches" of chips of the same category, and those that perform better at higher speeds are set at the higher minor increments, whereas those that don't perform at the top speeds are set and sold at a lower speed - it's all down to the performance of each batch for CPUs.

    Often the highest-performing video card in a series is designed first, and then to reach the cheaper market they simply manufacture the same board with lower set processing speeds, lower-performance memory, and lower-performance cooling.

    Do you really think CD/DVD burners were all made in tiny incremental steps?
    Like "wow, we've got to 32 speed!"? Nope, the technology was there; they just release it slowly to milk the market, along with ironing out bugs in firmware.

    This is why overclocking is so popular: products are often capable of a little bit more than what they are sold as.
     
  9. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Do you understand the manufacturing processes that allow for overclocking? CMOS is not an exact manufacturing process. Large batches of chips are made, and a certain percentage of them have too many defects to work at all; these are thrown away. Another percentage are rated to run at a certain clock rate, another percentage higher, and so on. A lot of the time it is not worth it to evaluate each chip and clock it accordingly, so they will clock them all at the common denominator. If you happen to get one that can be clocked higher - and there's a pretty good chance of that - then overclocking works. If you don't, it will not. Why do you think some people have so much trouble with overclocking?

    You are making claims without any evidence or reasoning to back them up. What is your reasoning or evidence that manufacturers are holding back technology in order to "milk the market"? It's really simple economics as to why it took so long, but I want to hear your reasoning.

    -AntonK
     
  10. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    Well, atom chips happen to be one of my areas of expertise, and I can tell you that they definitely aren't fake.
     
  11. daktaklakpak God is irrelevant! Registered Senior Member

    Messages:
    710
    Last edited: Mar 5, 2006
  12. Singularity Banned Banned

    Messages:
    1,287
    So why not create hundreds of buses, and why not use multiplexing on existing buses?

    Sorry in advance for being naive.
     
  13. leopold Valued Senior Member

    Messages:
    17,455
    What is being discussed here is processor speed.
    Multiplexing buses will not increase processor speed, but it will increase throughput.
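That latency/throughput distinction can be made concrete (the bandwidth and latency figures below are my own assumed numbers, purely illustrative): adding buses multiplies aggregate throughput, but the time for any single memory access stays the same.

```python
# More buses scale aggregate bandwidth, but a single access is no faster.
single_bus_gb_s = 6.4      # assumed bandwidth of one bus
access_latency_ns = 60     # assumed latency of one memory access
for n_buses in (1, 4, 8):
    throughput = single_bus_gb_s * n_buses   # scales with bus count
    latency = access_latency_ns              # unchanged by extra buses
    print(n_buses, throughput, latency)
```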
     
  14. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    I don't think a lot of things will increase processor speed at this point. We may have a few generations left with current manufacturing techniques, but clock speeds won't keep going up as they have. This is why we're seeing the emergence of multiprocessors even in desktop boxes.

    Scientific and engineering applications still require cooperative multiprocessing algorithms to compute complex tasks. Home users, on the other hand, I believe do not. If you ran NOTHING on your computer except one program, do you think you would have any speed issues? If your entire 2-3 GHz machine were dedicated to Office, or to Photoshop, or Mozilla, or whatever, would you even want it any faster? Probably not a great deal. Computers start to slow down nowadays when we run multiple software titles: we have to deal with context switching, bus contention, I/O contention, swapping programs in and out of RAM, etc. All of these things are what slow it down. In a parallel computing environment you would essentially have an entire computer dedicated to each program. No slowdown, and perfect interactive speeds.

    In this respect, we are basically only trying to increase throughput, not the execution speed of a single application. That is still mostly the realm of science and engineering tasks.

    -AntonK
     
  15. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    I of course agree with you, but I was curious what evidence you had. If only so I could bring up said evidence to others that have asked me similar questions.

    -AntonK
     
  16. daktaklakpak God is irrelevant! Registered Senior Member

    Messages:
    710
    It's kind of off topic, but what the hell: it seems the tech for a 4-second car getting 50 MPG running on soybean oil does exist, just not manufactured by any known automaker.

    http://www.cbsnews.com/stories/2006/02/17/eveningnews/main1329941.shtml
     
  17. leopold Valued Senior Member

    Messages:
    17,455
    Yeah, it is off topic, but do you realize how many soybeans it would take to replace oil?
    And if we did, do you realize how many people would bitch about the starving people of the world?
     
  18. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    Look up any scientific citation index with the term "atom chip", and you'll find hundreds of scientific papers explaining what atom chips are and how they work.
     
  19. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Ahh yes, I've seen research on this before. I was under the impression that the "Atom Chip" in question was only an Atom Chip in name alone, not that they were claiming to do the sort of things atom-chips do. I thought that Atom Chip was simply a brand name.

    -AntonK
     
  20. James R Just this guy, you know? Staff Member

    Messages:
    39,426
    I wasn't aware that "Atom Chip" was a brand name. It wouldn't surprise me if makers of dodgy equipment meant to improve the sound of high-end stereo systems boxed a microprocessor and called it an "Atom Chip", or if New Agers invented something and gave it a scientific-sounding name.

    To me, an atom chip is some kind of substrate with either electrical or magnetic paths on it, which create magnetic fields above the surface that trap and confine ultra-cold atoms and allow them to move around, suspended above the surface.
     
  21. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Just to give you an idea of the product that we are referring to.

    http://atomchip.com/
    http://us.gizmodo.com/gadgets/pcs/live-from-ces-atom-photo-swirl-147155.php

    You don't need to read much of it to get an immediate idea that this is just a swindler putting a label on some COTS (commercial off-the-shelf) items and calling it a breakthrough. The problem at times is proving to people why it is a hoax. People want to believe that companies have had these things for years and are just sitting on them to try to get more money.

    -Anton
     
  22. Sci-Phenomena Reality is in the Minds Eye Registered Senior Member

    Messages:
    869
    Ahhh shiiiiit, and I was REALLY hoping Atom Chip was the real deal.
     
  23. Singularity Banned Banned

    Messages:
    1,287
    I heard that the Core 2 Duo uses a shared L2 cache. So what happened to cache coherence?


    And why does a 1.8 GHz C2D without HT equal the performance of a 3 GHz P4?

    Someone should sue Intel for cheating us all these years and
    get compensation in the form of free new processors.
     