How does a biologist, or a computational neuroscientist, possibly have the wherewithal to stay current on all aspects of their field?
Nature, one of the world's top journals for peer-reviewed scientific breakthroughs, does what it can to encourage cross-discipline knowledge sharing by publishing non-technical essays from the leading lights in particular fields. For a lay person, this is often the best way to become current, very quickly, on very difficult subjects.
This week's topic, when boiled down to its essence, is: how small, how fast, how powerful can computers possibly get? The writer who answers these questions is Igor Markov, a professor of electrical engineering and computer science at the University of Michigan. I made my way through his prose rather slowly. It's quite dense because he packs an incredible amount of detail into an essay that spans only seven printed pages. By the time I finished, I had unlearned a number of basic beliefs I'd had about computers.
His conclusions, reflecting the current state of computer science, are illuminating for anyone who wants to figure out where the world is going, so I thought I'd share some of them. (This is dense stuff, so I may get some of the technical language a bit wrong. Please forgive me in advance.)
1. Quantum computing is probably not going to be as revolutionary as the popular press would have us believe. Such devices "hold promise only in niche applications and do not offer faster general-purpose computing because they are no faster for sorting and other specific tasks." He makes this analogy: the field of computer science is a decathlon, but quantum computing can only make the sprint much faster. Two examples he uses: searching the web and computer-aided design are tasks that conventional digital computers handle as well as, if not better than, quantum computers. Quantum effects, like entanglement, which is useful for hard-to-break codes, remain subject to the laws of the universe.
Here's something else I didn't know:
2. The relationship between an integrated circuit and the power it consumes is all out of whack. This is known as the "voltage scaling" problem. Markov writes: "Power consumption of transistors available in modern integrated circuits reduces more slowly than their size."
The one "law" that most of us know about these chips was laid down by Gordon Moore, a co-founder of Intel, who predicted that the number of transistors in a circuit would double roughly every two years. So if Moore's law holds (and so far it has, because the entire industry has adopted it as a target), a very large imbalance emerges between the size of the chip and the amount of power it needs to work. The chips keep getting tinier; the power they use shrinks more slowly. Practically, this means power becomes a limiting factor in shrinking solid-state circuits. Moore's law might soon be "broken" because, beyond a certain point, so much power will be packed into so small an area that the resulting heat will damage the transistors themselves. In English: your computer will get too hot, and scientists haven't figured out how to cool it properly. (And data centers already consume 2 percent of all energy used in the U.S.)
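That imbalance can be sketched with back-of-envelope arithmetic. The figures below (transistor area halving each generation, switching energy falling only 35 percent) are illustrative assumptions of mine, not Markov's numbers, but they show why power density climbs even as chips shrink:

```python
# Back-of-envelope sketch of the scaling imbalance: transistor count per
# unit area doubles each process generation, but the energy each
# transistor uses shrinks more slowly. The shrink factors are assumptions
# chosen for illustration, not measured data.

def power_density_growth(generations, area_shrink=0.5, energy_shrink=0.65):
    """Relative power density after N process generations, assuming
    transistor area halves each generation while switching energy
    falls by only 35 percent."""
    density = 1.0
    for _ in range(generations):
        transistors_per_area = 1 / area_shrink  # twice as many transistors
        density *= transistors_per_area * energy_shrink  # each at 65% energy
    return density

for gen in range(1, 6):
    print(f"after {gen} generation(s): {power_density_growth(gen):.2f}x power density")
```

Under these made-up numbers, power density grows 30 percent per generation, which is exactly the "too hot to cool" trajectory the essay describes.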
A big corollary:
3. Only a tiny fraction of a circuit can operate at once, because switching transistors generate heat that modern chips can only dissipate if most of the rest of the silicon stays "dark." The higher a chip's performance, the smaller the fraction of it the computer can actually use to compute at any given moment. And since the tiny signals are so weak, "gates," or junctures in the circuits, are built much like radio repeaters: they take the fading signal and amplify it. This development has mammoth ramifications for the field.
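The "dark silicon" fraction is just a ratio of two numbers: what the chip would draw if everything switched at once, versus what its cooling can actually carry away. All the figures in this toy calculation are assumptions for illustration:

```python
# Toy "dark silicon" calculation: with a fixed cooling (thermal) budget,
# what fraction of a chip's transistors can switch at once?
# Every number here is a made-up assumption for illustration.

def active_fraction(thermal_budget_w, transistors, power_per_transistor_w):
    """Fraction of transistors that can be active simultaneously
    without exceeding the chip's thermal budget."""
    full_power = transistors * power_per_transistor_w
    return min(1.0, thermal_budget_w / full_power)

# A hypothetical chip whose 5 billion transistors would draw 500 W if all
# switched at once, in a package that can only dissipate 100 W:
frac = active_fraction(thermal_budget_w=100,
                       transistors=5e9,
                       power_per_transistor_w=1e-7)
print(f"only {frac:.0%} of the chip can be lit at a time")
```

With these invented numbers, four-fifths of the chip must sit idle at any instant, which is the point of the corollary above.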
4. Energy efficiency is a casualty of speed. We love fast computers, but there are walls here, too. A computer's pace is set by a clock signal derived from a vibrating crystal (like a quartz crystal), and that signal has to travel through the chip's wires. The smaller the chip, the thinner the wires; the thinner the wire, the higher the "delay" in transmitting the signal. So if the clock ticks too quickly, the signal can't make it across the circuit without being boosted by repeaters, and repeaters cost energy, too. Computer manufacturers have therefore stopped simply trying to make a single processor faster, and have turned instead to a temporary solution: they just add another CPU to the hardware. (This is the "dual core" or "quad core" that the blue shirts at the Apple store talk about.) The work can be distributed across different cores, which means things can get done quickly without bumping up against the energy and physics problems of one really fast processor.
You would be correct in assuming that Silicon Valley is not going to let alleged physical constraints on technology get in its way; there are all sorts of potential ways to mitigate the problems that crop up at tiny scales, and billions upon billions of dollars are being spent on this basic research. As Ars Technica's John Timmer writes: