Wednesday 9 September 2009

The Chip of the Future, AD 2020

Since I'm blogging here about the future of computing, I thought I'd have a look today at what the next ten years will bring to the CPU, the processor chip at the heart of all modern computers. Futurology is often a difficult subject, but when it comes to silicon the future has been pretty well mapped out by maths. Back in the 1960s, when the silicon chip was first invented, Gordon Moore of Intel predicted the following:

  • The number of transistors on a chip will double every two years.
  • Or equivalently, the width of a transistor will shrink by a factor of 1.414 every two years (a transistor's area scales as the square of its width, so a 1.414 shrink doubles the count).
  • And, since signals then have less distance to travel (they're limited by the speed of light), the frequency will increase by 1.414 every two years.

Taking these at face value, and starting with 2010's model CPU (8 cores, each running at 4GHz, with 12MB of cache), we can extrapolate ten years ahead: five die shrinks of 1.414 apiece. Let's make that six, to make the maths easier, so we're really looking at the chip of 2022. Here's what we get (there's a quick sanity check in code after the list):

  • 512 cores on a chip, each running at 32GHz
  • 512MB of cache
  • 35 TeraFLOPS, or 35 million million floating point operations per second
  • RAM modules will be 128GB and run at about 10GHz
  • Flash memory will be around 1TB per module
  • Hard disks? They might well not keep pace, and might only reach tens of TBs
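
To check the sums, here's the extrapolation as a small Python sketch. The 2010 baseline is the one above; the figure of 2 floating point operations per cycle per core is my assumption, picked to land near 35 TFLOPS, and note the cache actually scales to 768MB before rounding down.

  SHRINKS = 6                # six die shrinks, 2010 -> 2022
  density = 2 ** SHRINKS     # transistor count multiplies by 64
  speed = 1.414 ** SHRINKS   # clock frequency multiplies by ~8

  cores = 8 * density        # 512 cores
  ghz = 4 * speed            # ~32 GHz
  cache_mb = 12 * density    # 768 MB (rounded down to 512 in the list)
  tflops = cores * ghz * 2 / 1000   # assuming 2 FLOPs/cycle/core: ~33 TFLOPS

  print(cores, "cores at", round(ghz), "GHz,", cache_mb, "MB cache,",
        round(tflops), "TFLOPS")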

Are we really going to get a chip like that? In particular, does the average user need a 500-way multicore chip? The multicore trend only started in 2004, when chip designers found it was the most efficient way to increase the power of their processors. If it is to continue, software needs to change in order to use many processors at the same time. That will be a big change in programming style, and it will require new operating systems that can intelligently schedule big tasks across many processors while keeping small regular tasks easily started on other spare processors. I'm sure it can be done, but will it? If not, manufacturers will find it difficult to sell such massively parallel many-core chips.
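
To give a flavour of that change in programming style, here's a minimal sketch using Python's multiprocessing module; crunch is just a hypothetical stand-in for one slice of some big task.

  from multiprocessing import Pool

  def crunch(chunk):
      # a hypothetical stand-in for one slice of a big task
      return sum(x * x for x in chunk)

  if __name__ == "__main__":
      data = range(1000000)
      chunks = [data[i::8] for i in range(8)]   # one slice per core
      with Pool() as pool:                      # one worker per processor
          total = sum(pool.map(crunch, chunks))
      print(total)

The hard part is that most everyday tasks don't split into independent slices this neatly, which is exactly why the operating system and the programming style both have to change.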

The ATOM version


If multi-cores don't catch on, then it's more likely manufacturers will opt to use the space on the silicon to build complete computers on a chip, with I/O, graphics and memory all integrated into a single chip, continuing the trend started with AMD's Fusion chips (a graphics processor and CPU on the same chip). For gamers, a big GPU and a few CPU cores, perhaps with a dedicated physics unit, might be the preferred option.

This might be the final chip too


Our chip of 2022 is made at 4nm lithography (six shrinks from the 32nm Intel and AMD will be using in 2010), and this is about the limit of how small we can make chips. In a silicon crystal, atoms are spaced about 0.5nm apart, so our 4nm chip has wires just 8 atoms thick. Quantum tunnelling between adjacent wires already becomes a problem at 16nm (scheduled for 2013-2016). Once we can't shrink processors any further, we can only make more powerful computers by increasing the size of the chip itself or by building vertically, and either way the chips cost more to build. Once we stop shrinking chips, the price of each transistor stops going down. So it really looks like the end point for conventional silicon chips is reached in the 2020s.
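
For reference, here's the shrink schedule that arithmetic implies, as a short Python sketch (the 0.5nm atomic spacing is the rough figure quoted above):

  SI_SPACING = 0.5   # nm between atoms in a silicon crystal, roughly
  width = 32.0       # nm: the 2010 starting point
  for year in range(2012, 2024, 2):
      width /= 1.414
      print(year, ":", round(width, 1), "nm, wires about",
            round(width / SI_SPACING), "atoms thick")

Run it and 16nm lands around 2014 and 4nm (8-atom wires) around 2022, matching the dates above.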

The revolution is over and still no AI


Our computer of 2020 is still a bit short of a human brain, which is estimated to need about 10^16 computations per second, and 10TB of memory, to emulate. We're about 300 times too slow to match a human brain, with about 100 times too little memory. To get there we'd need another six die shrinks, but wires one atom thick just aren't possible. Having said that, a 2020s supercomputer made with hundreds of these processors is there, so big organisations can start playing god with brain-rate supercomputers, if they're so funded. In fact, though, for modelling the brain we've very much got the wrong architecture above. The brain is made of 100 billion slow but parallel-operating neurons connected by 100 trillion interconnections. To emulate it, we really want chips which are vastly more parallel, integrated with a rewireable transport system able to connect any segment to any other in real time. Such a chip might only be made once AI or mind-uploading is already standard.
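
For the record, the shortfall arithmetic as a Python sketch, using the rough brain estimates quoted above:

  BRAIN_OPS = 1e16    # computations per second to emulate a brain (estimate)
  BRAIN_MEM = 10e12   # bytes: 10 TB of memory
  CHIP_FLOPS = 35e12  # the 2022 chip extrapolated above
  CHIP_RAM = 128e9    # bytes: one 128 GB RAM module

  print("speed shortfall:", round(BRAIN_OPS / CHIP_FLOPS))   # ~286, call it 300
  print("memory shortfall:", round(BRAIN_MEM / CHIP_RAM))    # ~78, call it 100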

Conclusions: The singularity is delayed


Once we reach the end of Moore's Law, I'm sure innovation will continue in computing, certainly in software, but hardware will have stalled, improving only gradually from then on. It might be a long wait before the next revolution, nanotech, starts. Nanotech rewrites the rules on how to build computers (and everything else), but to start it will require three-dimensional, atom-by-atom construction of self-replicating robots. That's not an easy task to carry out, nor an easy one to design; nanotech might come anywhere in the next century, or never. Some futurologists would like nanotech to start in the 2020s just to keep Moore's Law working, but Moore's Law is just a line drawn through existing trends. Hard work keeps it true, but hard work can't get past the limits of the laws of physics. The computer of much of the current century may have specs like those I've described above, and it will take revolutionary new technology to better it.
