Atom Chip

Discussion in 'Computer Science & Culture' started by Sci-Phenomena, Mar 2, 2006.

Thread Status:
Not open for further replies.
  1. Sci-Phenomena Reality is in the Minds Eye Registered Senior Member

    Messages:
    869
    Some people say that Atom Chip (do a Google search if you don't know what I'm talking about) is fake and that no such computer is going to come onto the market.
     
  2. alexb123 The Amish web page is fast! Valued Senior Member

    Messages:
    2,238
    I had never heard of this, but I have been looking it up and it's very interesting. It's about time this happened. I have a post on this forum about what a con 'Moore's law' is, and how it has only served to hold back computers, because as long as the law holds the chip industry praises itself. However, anyone with any sense would know that 1.1 GHz, then 1.2 GHz, and so on is not the computer industry's best effort, but rather the best way to get money out of us for very little effort on their part.

    I really hope these Atom Chip products go mainstream; it's about time the computer industry had a kick up the backside.
     
  3. Light Registered Senior Member

    Messages:
    2,258
    Alex, for one thing you need to realize that Moore's law isn't supposed to be taken as a "law" at all. It was (and is) just a generalized observation of the rate of technology advances in computing speed. And it has held up pretty well over the years.

    You also need to understand that it's not just a simple matter of going from "1.1 GHz to 1.2 GHz" as you suggest. You seem to think it's just a matter of pushing harder on the gas in a car. Every major increase in speed has taken a lot of innovative research and development to squeeze more parts into the same space, improve the data transfer rate between components inside the SAME chip, reduce EM interference between parallel bus lines, and many other things.

    You cannot simply tell a 486 chip to "speed up" and run like a Pentium. I have a strong feeling that you know very, very little about actual processor chip technology and design. (Thus my attempt here to explain a small part of it to you.)
     
  4. alexb123 The Amish web page is fast! Valued Senior Member

    Messages:
    2,238
    Light, I understand what you are saying. But the computer industry never makes any great leaps in processor speed. Just look at dual-core if you want proof. Dual-core could have given us far bigger gains than the 0.1 GHz or so at a time we usually get, but this has not happened. Chips are still creeping forward slowly. The computer industry is out to make money, so why should we expect anything less of them?

    But fingers crossed someone will come along and rock the boat and we will get truly cutting-edge chips. Maybe the Atom Chip is the answer, but maybe not.

    http://www.theregister.co.uk/2005/09/07/atom_chip_miracle_machine/
     
  5. leopold Valued Senior Member

    Messages:
    17,455
    as stated by light it is not a law but an observation
    and as also stated by light it has held reasonably true

    i think the law says chip complexity will double every 18 months
    correct me if i am wrong

    chips were originally made using light
    which gave a certain density

    manufacturers found that by rearranging components they could get more parts on a chip

    when the parts got so small that light couldn't do the job they started using what is called electron beam lithography, which made the parts even smaller, therefore increasing the complexity

    they are using even smaller wavelengths to achieve even denser designs

    when the first chips came out they had something on the order of 40 to 100 parts per chip
    today's density is on the order of 6 million parts per chip
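
    A quick sanity check of the numbers above, as a minimal sketch: using only the component counts quoted in this post and the 18/24-month doubling periods mentioned elsewhere in this thread, the implied number of doublings works out to about sixteen.

```python
import math

# Component counts quoted in the post above: ~100 parts on an early chip,
# ~6 million parts today. These are the poster's figures, used as-is.
early_parts = 100
recent_parts = 6_000_000

doublings = math.log2(recent_parts / early_parts)   # about 15.9 doublings
print(f"implied doublings: {doublings:.1f}")

# Moore's observation is quoted in this thread as one doubling every
# 18 or 24 months; both are shown for comparison.
for months in (18, 24):
    years = doublings * months / 12
    print(f"one doubling every {months} months -> roughly {years:.0f} years of progress")
```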
     
  6. leopold Valued Senior Member

    Messages:
    17,455
    the answer is not one processor with incredible speed
    the answer is multiple processors of average speed
     
  7. Zephyr Humans are ONE Registered Senior Member

    Messages:
    3,371
    It's true that it started as an observation, but as the Wikipedia article says, it's possible there is a self-fulfilling aspect to it. If Moore's law is well known and respected, chip manufacturers may be tempted to use it as a gauge of what their competitors are likely to manage and work to match that.

    -----quote-----
    Although Moore's law was initially made in the form of an observation and prediction, the more widely it became accepted, the more it served as a goal for an entire industry. This drove both marketing and engineering departments of semiconductor manufacturers to focus enormous energy aiming for the specified increase in processing power that it was presumed one or more of their competitors would soon actually attain. In this regard it can be viewed as a self-fulfilling prophecy.
    -----

    Actually I think they're both answers, just to different questions.


    Parallel systems are definitely cheaper at the moment, but some algorithms don't work well on them.
     
  8. Light Registered Senior Member

    Messages:
    2,258
    You are still forgetting one tiny but VERY important aspect of the whole business, and it IS a business - competition. No company would purposely slow their development because that would allow someone else to get ahead in the race.

    Granted, Intel is the biggest maker in the field, but they are far from alone. How could you even begin to think they would drag around and let someone leap ahead of them? That would be the business equivalent of someone stupidly shooting themselves in the foot! No company, no matter how big, is going to allow that to happen.
     
  9. alexb123 The Amish web page is fast! Valued Senior Member

    Messages:
    2,238
    Light, your statement isn't true. Price fixing is an example of this, and that involves separate companies. The same thing is happening in the chip industry, but it's speed fixing.

    If AMD released a chip that was far better than an Intel chip, as opposed to just slightly better as is common, then Intel would have to hit back with an equal or better chip, and so on. This would push R&D costs much higher for both companies, therefore it is not in their interests to make a big leap. If there really was a chip war, it would be good for the consumers but bad for the chip makers.

    Maybe the difference here is that if AMD and Intel both know the other one is holding back, then no one wants to go first, because they know that any lead they take will be very short-lived.
     
  10. leopold Valued Senior Member

    Messages:
    17,455
    this reminds me of the audio wars of the 70s
    power amplifier makers would claim something like 0.0001 THD for their amps
    anything less than about 0.01 is totally inaudible, therefore useless, but the amp makers kept going further for some reason
    why would they do that? hoping that the other guys couldn't match it perhaps?
     
  11. daktaklakpak God is irrelevant! Registered Senior Member

    Messages:
    710
    Speed != Performance.

    What's the clock speed difference between an AMD Sempron 2800+ (Socket A) and an AMD Sempron 3400+ (Socket 754)? None. They both run at 2 GHz.
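
    A rough way to see the point, as an illustrative sketch: useful work per second is roughly instructions-per-cycle (IPC) times clock speed, so two chips at the same clock can still perform differently. The IPC values below are made up for illustration, not measured Sempron figures.

```python
# Rough model: throughput ~ IPC x clock frequency.
# The IPC values are illustrative placeholders, NOT measured Sempron numbers;
# they only show how two 2 GHz chips can deliver different performance.
chips = {
    "Sempron 2800+ (Socket A)":   {"clock_ghz": 2.0, "ipc": 0.9},
    "Sempron 3400+ (Socket 754)": {"clock_ghz": 2.0, "ipc": 1.1},
}

for name, c in chips.items():
    instructions_per_sec = c["clock_ghz"] * 1e9 * c["ipc"]
    print(f"{name}: ~{instructions_per_sec / 1e9:.1f} billion instructions/sec")
```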
     
  12. Light Registered Senior Member

    Messages:
    2,258
    Sorry, Alex, but you don't understand how cutthroat the competition really is in this business. If one company made a great leap ahead in processing power, it would reward them with billions in profits, and plenty left over with which to make the next significant leap, which would keep them ahead - and make even more money. I understand what you're trying to say, but it simply doesn't work as you've described it. (And there are several more companies involved in the R&D of this stuff besides just those two.)
     
  13. alexb123 The Amish web page is fast! Valued Senior Member

    Messages:
    2,238
    Light, I also take your point, and I really don't know for sure what the situation is. But I do base my whole theory on the fact that neither AMD nor Intel nor anyone else has ever really taken a big step forward. Chips get billions of dollars in research, yet no one takes a big step forward.

    AMD went 64-bit first, but really that was of very little benefit: 64-bit Windows followed a year later, and by then Intel had their 64-bit chip.

    However, if you look at the console market, they have to make big steps forward because their new products only come out every 4-5 years. Consoles can make big advances, so why can't desktops?
     
  14. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Alexb123,

    I only wish you could see past your simple observations of 1 core to 2 cores, or 32 bits to 64 bits, and understand the kinds of complexity and problems that come up. Simply plopping two chips on a die and calling it a dual core is not how it is done. There are hundreds of issues that need to be tackled: how to remain coherent between the two cores, how to deal with the added bus overhead needed to keep both cores processing, and how to deal with the myriad of issues that come up with split-transaction buses which WEREN'T there in a single-core system.

    As for 32 to 64 bits, this is purely an issue of space versus return. No one said doing 64 bits would be difficult; the question is, is it worth it? You don't get twice the processing power from 64 bits, you only get to store larger numbers. Do we need larger numbers? For storing larger numbers this helps, because we don't need to use software emulation for numbers over 2^32, but how much does it help us in real-world applications? The answer is still up in the air. I'd say the consensus is that it helps a bit in some cases and in others is useless. Also take into consideration that programs that aren't compiled with the 64-bit extensions can't make use of this new hardware. So in effect, if you pop a 64-bit chip into your computer and your software doesn't use it, then you've just taken up space on your processor die that could have been used for something else - perhaps for caches, which is almost definitely going to improve performance.
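
    To make the point about software emulation concrete, here is a minimal sketch of what 32-bit code effectively has to do to add two values wider than its registers: split them into 32-bit halves and propagate the carry by hand, where a 64-bit chip does the same add in one instruction. (The function name and the choice of Python are purely illustrative.)

```python
MASK32 = 0xFFFFFFFF

def add64_using_32bit_ops(a, b):
    """Add two 64-bit values using only 32-bit-sized pieces, the way a
    32-bit machine has to emulate 64-bit arithmetic (illustrative sketch)."""
    a_lo, a_hi = a & MASK32, (a >> 32) & MASK32
    b_lo, b_hi = b & MASK32, (b >> 32) & MASK32

    lo = a_lo + b_lo
    carry = lo >> 32                      # carry out of the low 32-bit add
    hi = (a_hi + b_hi + carry) & MASK32   # second add, plus the carry

    return (hi << 32) | (lo & MASK32)     # result modulo 2^64

# On a 64-bit chip this whole dance is a single add instruction.
print(hex(add64_using_32bit_ops(0xFFFFFFFF, 1)))   # 0x100000000
```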

    This brings us to the issue of size. Making processors is really hard. Not designing them; I am talking about the actual CMOS photolithography process of manufacturing a processor. For many years there was more we wanted to do, but we ran out of room on the processor. Moore's law says NOTHING about processor speed or processor performance. It says one thing and one thing only:

    "that at our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost, will double in about 18 months."

    What this essentially means is that the number of transistors on a die will double every 18 months (Moore actually says he was quoted wrong and said 24). This has held basically true (the 24, actually, not the 18). The problem is that speed and number of transistors are not exactly correlated. We are now using VERY advanced lithography techniques just to get below 65 nanometers. We are having to use quantum interference in order to achieve sub-wavelength sizes in these chips. If there were a way to get smaller transistors, we'd do it right now. Eventually I'm sure we will have ways, and when they come about we'll use them.

    With your comment about consoles and PCs you're showing a lack of knowledge about the market in general. The PC market and the console market are not comparable. Not in the least. Not even close. The fact is, if the console makers could sell you a new console every year, they would. But there isn't enough of a market to sell that many. PCs, on the other hand, are a huge market and it's growing, especially in developing countries. Not to mention that companies such as Google, or any other large-scale computing corporation, buy thousands and thousands of computers; they don't buy consoles. So to keep up with demand, the PC makers can create new computers basically constantly, using whatever the latest technology is. Console makers can only do this every few years, and they do the same: they use whatever the latest technology is.

    You sound like a bit of a conspiracy theorist. What you're saying is essentially akin to saying that automobile makers have a car that runs on water, but they don't want to sell it so they can slowly increase gas mileage and make us buy new cars.

    Take a look at the details of CPU design and computer architecture in general. Ask me questions if you want. I'm 100% sure that if you take a serious look at it you'll change your mind.

    -AntonK
     
  15. leopold Valued Senior Member

    Messages:
    17,455
    you also increase the memory size you can access in one memory cycle
    going from 32 bits to 64 bits doubles the size of memory that can be accessed in one cycle

    granted, with memory managers you can access a 64 bit memory with 32 bits, but it does take extra cycles to do so
     
  16. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    You make a good point. This only matters if you're on a machine with more than 4 GB of RAM. But indeed, you are correct. Good call. But I don't think it invalidates any of my points, really.

    -AntonK
     
  17. Sci-Phenomena Reality is in the Minds Eye Registered Senior Member

    Messages:
    869
    Just when I think I know everything about computers, AntonK comes along...
     
  18. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    I know a bit about some things and love discussing them with people. I chime in when I think I can contribute.

    -AntonK
     
  19. leopold Valued Senior Member

    Messages:
    17,455
    this is wrong, and i am surprised that anton didn't pick up on it

    going from 32 to 33 bits would double the memory
    going from 32 bits to 64 bits would far more than double the memory
    bytes for 32 bits would be 2^32
    bytes for 64 bits would be 2^64
     
  20. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    You are absolutely correct! I did miss that he said double. It doesn't double it; it is actually over 4 billion times larger! With 64 bits you can potentially address 1.844 x 10^19 bytes of RAM. That is about 17,592,186,044,416 megabytes -- if I did my counting correctly, that's 16 exabytes! As you can guess, we won't be having that much RAM any time soon unless some kind of revolution in materials technology happens. Because of this fact, even 64-bit systems don't always have 64 bits of address. For instance, the multiprocessor I am working on at this second only uses 40 bits of address space. This is a very respectable 1,099,511,627,776 bytes. This saves a lot of space on the physical board because we don't need as many address bus lines.
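
    For anyone who wants to check the arithmetic, here is a quick sketch of the address-space sizes mentioned above (nothing beyond 2^32, 2^40 and 2^64 worked out):

```python
# Address-space sizes for the bit widths discussed above.
for bits in (32, 40, 64):
    total_bytes = 2 ** bits
    print(f"{bits}-bit addresses: {total_bytes:,} bytes "
          f"(~{total_bytes / 2**20:,.0f} MB, ~{total_bytes / 2**60:.2f} EiB)")

# 64-bit vs 32-bit addressing is a factor of 2**32, i.e. over 4 billion times larger.
print(f"2**64 / 2**32 = {2**32:,}")
```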

    -AntonK
     