Moore's Law

Discussion in 'General Science & Technology' started by kmguru, Jul 28, 2001.

Thread Status:
Not open for further replies.
  1. kmguru Staff Member

    Messages:
    11,757
    Somewhere we have a topic on this subject that the search engine could not locate. Here is some of the latest news.


    Trained for years to deal with scarcity, microprocessor designers are wondering what to do with a ton of new real estate on the surface of the newest chips. So they're putting more than one microprocessor onto each piece of silicon.

    Moore's law predicts that chips will double their performance and number of transistors every 18 months -- everyone knows that. But it's easy to overlook the massive gains in chip real estate occurring with the latest doublings. Most high-end chips today are made with 0.18-micron manufacturing processes. The features on these chips are pretty narrow. It's as if they were drawn with a pencil that can write circuit lines at less than 1/500th the width of a human hair. By the end of this year, chip makers will begin their scheduled shift to 0.13-micron manufacturing processes, which can create circuits that are as thin as 1/700th the width of a human hair. (A human hair weighs in at a fat 100 microns wide.)
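
    Just to sanity-check those fractions, here is a rough Python sketch (my own aside, not from the article) that reproduces the 1/500th and 1/700th figures, taking the article's 100-micron hair at face value:

    Code:
    # Feature size compared with a human hair, assuming the article's
    # figure of a 100-micron-wide hair.
    hair_width_um = 100.0

    for process_um in (0.18, 0.13):
        ratio = hair_width_um / process_um
        print(f"A {process_um}-micron feature is about 1/{ratio:.0f} "
              f"the width of a hair")

    # Prints roughly:
    #   A 0.18-micron feature is about 1/556 the width of a hair
    #   A 0.13-micron feature is about 1/769 the width of a hair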

    These doublings are changing the economics of chip design. What was once scarce real estate -- the space available on a chip -- is now becoming abundant. Intel could barely cram 3 million transistors on a Pentium chip in 1994; now it can put 42 million transistors on a Pentium 4 with a 0.18-micron process. That's 14 times the transistors. Many of the new chips, using a 0.13-micron process, are expected to have around 100 million transistors -- 33 times more than in 1994 and almost two and a half times more than today's chips.
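
    For comparison, here is a small sketch of the 18-month doubling arithmetic, starting from the 3 million transistors quoted for the 1994 Pentium. The years I attach to the 42-million and ~100-million-transistor chips are my own guess (the article only says "now" and "expected"), so treat the output as a rough illustration rather than a real projection:

    Code:
    # 18-month doubling rule applied to the article's 1994 baseline.
    base_year, base_transistors = 1994, 3_000_000

    def projected(year, doubling_period_years=1.5):
        """Transistor count projected by an 18-month doubling rule."""
        doublings = (year - base_year) / doubling_period_years
        return base_transistors * 2 ** doublings

    # (assumed year, transistor count quoted in the article)
    for year, quoted in [(2001, 42_000_000), (2002, 100_000_000)]:
        print(f"{year}: projection {projected(year) / 1e6:.0f}M, "
              f"article's figure {quoted / 1e6:.0f}M")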
     
  2. wet1 Wanderer Registered Senior Member

    Messages:
    8,616
    The Moore's Law thread can be found at:
    http://www.sciforums.com/t3312/s/thread.html

    In the earlier days of chip manufacture, around the days of the 286, the architecture was such that the upper memory area was considered unusable. There was not enough contiguous memory there to link into programs, so it was ignored. The amount of space on the chip was highly limited, and so was the chip's performance. Later, chips came to be labeled with the designations DX or SX.

    The DX designation meant the chip had a math co-processor to help with calculations: the main CPU could pass calculations off to the co-processor, which freed the CPU to execute other instructions while it awaited the results. SX meant the chip could not pass the tests needed to assure that the co-processor worked, and therefore it shipped without one. Later the 386s and 486s came out, and the designations continued. By the time the 486 was in production, however, getting the chips to pass was no longer a problem; all of them were designed with the co-processor on board. That meant that if you had a 486SX, the co-processor had been disabled at the factory.

    During the development of the 386 and 486, uses were found for attaching tags to the upper memory area. This provided a door through which to access memory beyond the first meg, and the first meg was the critical point. 640 KB was available for launching programs, and deducted from that was whatever space went to the tags and to the drivers loaded from the autoexec.bat and config.sys files. If enough of that space was taken, the amount left to run programs shrank to the point where problems were encountered loading and running them. With each step up in chip designation, the "highways" used to pass information and data were widened as well. 128-bit paths are now common; in the 286 days it was 16 bits (if I remember correctly).
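
    To put rough numbers on that memory picture, here is a loose sketch -- the driver names and sizes are made-up examples, and real DOS bookkeeping is messier than this:

    Code:
    # Real-mode addresses are built from a 16-bit segment and a 16-bit
    # offset, which only reaches the first megabyte. The 640 KB below
    # the upper memory area is what programs had to live in once the
    # drivers from config.sys / autoexec.bat were loaded.

    def linear_address(segment, offset):
        """Real-mode segment:offset -> 20-bit linear address."""
        return (segment << 4) + offset

    ONE_MEG = 1024 * 1024
    CONVENTIONAL = 640 * 1024            # 0x00000-0x9FFFF: program area
    UPPER = ONE_MEG - CONVENTIONAL       # 0xA0000-0xFFFFF: video, BIOS, UMBs

    print(hex(linear_address(0xA000, 0x0000)))  # 0xa0000, start of upper memory

    drivers_kb = {"himem.sys": 45, "mouse": 17, "cdrom": 28}  # made-up sizes
    free_kb = CONVENTIONAL // 1024 - sum(drivers_kb.values())
    print(f"Conventional memory left for programs: about {free_kb} KB")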

    While the data presented above doesn't reflect the true architecture of the chip, it does show that how the chip's capability is used is a separate issue, one that couples to the architecture to present the user with a total package of capabilities independent of whatever programs you might run. The bottleneck still exists in going through the first meg to reach the rest of the memory residing in the computer you use at present, whether that is extended to 64 megs or higher. This will have to be addressed sooner or later to increase the capabilities of computing in general.
     
  3. Chagur .Seeker. Registered Senior Member

    Messages:
    2,235
    Just a quick point of information ...

    I think that, along with the greater number of transistors on a chip, how much closer together the transistors are is what affects the execution speed of the CPU (along with a couple of other factors).

    What has increased the 'power' of computers is how much memory can be addressed, and that in turn depends on how many bits wide the addresses are, not on how many bits make up a byte. The Z-80 was an 8-bit chip with 16-bit addresses, so only 64 KB worth of memory locations could be addressed. With the '86 architecture the addresses grew to 20 bits (1 MB), then 24 bits on the 286 (16 MB), and 32 bits from the 386 on (4 GB) - and there's where part of the 'speed' comes from, since more of the program(s), and the data, can sit in memory rather than having to be fetched from the hard disk.

    I think ... I'm pretty rusty too on this stuff
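
    For what it's worth, the arithmetic is just powers of two -- a minimal sketch using the address widths as I remember them for those chips:

    Code:
    # Addressable memory is 2**(address width) bytes. The address
    # widths listed here are the commonly cited figures for each chip.
    chips = [("Z-80", 16), ("8086", 20), ("80286", 24), ("80386+", 32)]

    for name, bits in chips:
        size = 2 ** bits
        print(f"{name:>8}: {bits}-bit addresses -> {size // 1024:,} KB")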

     
    Last edited: Jul 29, 2001