
Where are we going after 7nm Chips?


Everyone knows Moore’s law. In 1965, Gordon Moore, who would go on to co-found Intel, observed that the number of components in integrated circuits was doubling every 12 months or so. In 1965, this meant that 50 transistors per chip offered the lowest per-transistor cost; Moore predicted that by 1970 this figure would rise to 1,000 components per chip, and that the price per transistor would drop by 90 percent. Of course, Moore did not base this “law” on any physical principle or rigorous research; it was simply an extrapolation of what he had observed. Yet since then, Moore’s law has held up remarkably well, and we can see its effects in the technology we use today.

Currently, light with a 193-nanometer wavelength is used to create chips with features just 14 nanometers wide. This is an extremely complex manufacturing process, and only a handful of players in the world have the expertise to engineer it. Still, there are limits to how far this empirical law can be extrapolated. At 2nm we approach the physical limits of atoms: transistors would be just 10 atoms wide, and it’s unlikely they would operate reliably at such a small scale. Even if those problems were solved, the specter of power usage and dissipation looms large: as transistors are packed ever tighter, dissipating the energy they use becomes ever harder. Sooner or later we will reach the physical limits of what we can fit on silicon. And this is a problem.
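To put that 2nm figure in perspective, here is a quick back-of-the-envelope calculation in Python. The ~0.2nm silicon atom diameter is an approximation used for illustration, not a number from the article:

```python
# Rough sanity check: how many silicon atoms fit across a transistor
# feature at various process nodes? Assumes a silicon atomic diameter
# of roughly 0.2 nm (an approximation).

SILICON_ATOM_DIAMETER_NM = 0.2  # approximate diameter of a Si atom

for feature_nm in (14, 7, 2):
    atoms_across = feature_nm / SILICON_ATOM_DIAMETER_NM
    print(f"{feature_nm} nm feature ~ {atoms_across:.0f} atoms wide")

# 2 nm / 0.2 nm ~ 10 atoms: the scale at which quantum effects
# start to make reliable switching very hard.
```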

With all of our dreams of AI, VR and smart cities powered by supercomputers, we cannot simply hit a wall ten years from now without innovation in the computing sector. Intel is currently on its 8th-generation processors, built on a 10nm process. Sooner or later, Intel will need to introduce a new type of processor for the world to move to, and attention is already turning to technologies beyond the silicon CMOS process in use today. Intel has already announced that it will drop silicon at 7nm. Indium antimonide (InSb) and indium gallium arsenide (InGaAs) have both shown promise, offering much higher switching speeds at much lower power than silicon. Let’s take a look at what the possibilities for next-generation, smarter and faster computing might look like.

One possibility, albeit far in the future, is quantum computing. Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at a time. Because of the way the tiniest particles behave, operations can be performed much more quickly and with less energy than on classical computers. In classical computing, a bit is a single piece of information that can exist in one of two states, 1 or 0. Quantum computing uses quantum bits, or “qubits”, instead. These are also two-state quantum systems, but unlike an ordinary bit, a qubit can carry much richer information than just 1 or 0, because it can exist in any superposition of those two values. This opens up possibilities for computing far beyond what a purely binary system allows. Of course, harnessing qubits is much harder than it sounds, and it’s even harder to place stable qubits on a chip resembling anything we use today. Quantum computers need super-cooling and enormous amounts of control circuitry just to perform the most basic calculations. It will take at least 15 to 20 years for this technology to mature enough to reach our homes, and then possibly even our phones.
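The idea of a superposition can be sketched in a few lines of plain NumPy. This is a textbook simulation on an ordinary computer, not any real quantum hardware API; the state vector and Hadamard gate are standard definitions:

```python
import numpy as np

# A qubit's state is a 2-component complex vector: amplitudes for |0> and |1>.
# Measurement probabilities are the squared magnitudes of those amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)   # definite state |0>, like a classical bit

# Hadamard gate: rotates a basis state into an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0               # superposition: (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2     # Born rule: probability of measuring 0 or 1

print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")  # 0.50 each
```

Until it is measured, the qubit is genuinely in both states at once; measurement collapses it to 0 or 1 with the probabilities printed above.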

The other, seen as more of a near-term contender, is the neuromorphic chip. These chips work by mimicking the behavior of ion channels and synapses in the brain. Their transistors allow current to flow through them continuously, unlike the on-off design common to most other chips, but much like the synaptic cleft “gate” in brain cells. By tweaking the chip’s transistors, engineers can mimic specific ion channels in neurons; in this way, they can recreate any synaptic situation that would lead to the firing of an electrical impulse, much as you would trigger a process or initiate communication in a conventional computer. The difference is that these chips are up to 100 times faster at certain tasks than a conventional silicon chip is today. Once they are refined and produced for consumer-level electronics, in the vein of the Bionic chip in the new Apple iPhone X, we will see them replace more and more parts of our computers. Intel has a couple of products in testing, but nothing mainstream yet, and test chips have been engineered at MIT in collaboration with IBM.
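The integrate-and-fire behavior described above can be sketched with the classic leaky integrate-and-fire neuron model. This is a textbook simplification in plain Python, not Intel’s or IBM’s actual chip design, and every parameter value here is illustrative:

```python
# Minimal leaky integrate-and-fire neuron: incoming current charges a
# membrane potential that leaks over time; when the potential crosses a
# threshold, the neuron "fires" a spike and resets. This is the behavior
# neuromorphic chips emulate in hardware. All values are illustrative.

LEAK = 0.9        # fraction of membrane potential retained each time step
THRESHOLD = 1.0   # potential at which the neuron fires
RESET = 0.0       # potential immediately after a spike

def simulate(input_currents):
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_currents):
        potential = potential * LEAK + current   # leak, then integrate input
        if potential >= THRESHOLD:               # threshold crossed: fire
            spike_times.append(t)
            potential = RESET
    return spike_times

# A steady weak input accumulates until the neuron fires, over and over.
print(simulate([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

The key design point is that computation and memory live in the same place, as in a real synapse, rather than shuttling data between a separate processor and RAM.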

We have yet to see whether our traditional computers will be phased out by one of these two cutting-edge technologies, or by a combination of both. What is clear is that in the coming years we are about to see a computing revolution as new technology enters the sector.

———————————————————————————————————————

Credits to sources cited and quoted:
[https://arstechnica.com/information-technology/2016/02/moores-law-really-is-dead-this-time/]
[http://www.wired.co.uk/article/quantum-computing-explained]
[https://www.engadget.com/2017/09/26/intel-loihi-neuromorphic-chip-human-brain/]