The Far Limits of Datacenter Compute Efficiency and RSFQ
Octopart Staff
Oct 10, 2017

The energy efficiency of computation (instructions per joule, or FLOPS per watt) is quickly replacing the upfront cost of computation (FLOPS per dollar) as the most important metric of microprocessor fabrication technology, and it has been the dominant metric for data center and mobile device processors for some time now. This shift in priorities might contribute to the tapering off of Moore's Law (the doubling of transistors in integrated circuits roughly every 18 months), but Jonathan Koomey has recently proposed a different exponential rule of thumb, noting that computing efficiency has also been doubling every 18 months for the past few decades. Instead of pursuing brute clock speed at an aggressive pace, processor manufacturers are already focusing their research and development on new, more energy-efficient designs, and it might be "Koomey's Law" that sets the pace of the industry over the next 50 years.
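To put that rule of thumb in perspective, here is a quick back-of-the-envelope sketch in Python. The 18-month doubling period is the only input; everything else is arithmetic:

```python
# Back-of-the-envelope: how much does efficiency grow if it
# doubles every 18 months (the trend Koomey observed)?

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def efficiency_multiplier(years):
    """Factor by which computations-per-joule grows over `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 25, 50):
    print(f"{years:>2} years -> ~{efficiency_multiplier(years):.3g}x "
          f"more computations per joule")
```

At that pace, 50 years of doublings works out to roughly a ten-billion-fold improvement, which is why the question of where the trend must eventually stop matters.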

Figure from "Implications of Historical Trends in the Electrical Efficiency of Computing" by J. Koomey, S. Berard, M. Sanchez, and H. Wong (2011, DOI link). The full paper is unfortunately behind a paywall; try this earlier work (2009).

There are physical limits to the efficiency of non-reversible computation (see "The Feynman Lectures on Computation" for a great introduction), so even with exotic technology the 18-month doubling can't continue forever. But especially in our fossil-fuel-driven society, there is both an economic and an environmental mandate to drastically reduce the electrical power expended on number crunching in the cloud. Thus far, technical advances in efficiency and processing power have mostly been tied to transistor feature size and have improved together, but if transistors stop shrinking we'll need to look for new innovations to squeeze out better efficiency. A handful of start-ups are exploring interesting new silicon-based chips, including the massively parallel, Connection Machine-like GreenArrays chips and Lyric Semiconductor's probabilistic processors, but neither of these can run existing software out of the box. FPGAs (reconfigurable logic) and domain-specific ASIC chips are starting to get more attention, but they require even more radical changes in programming techniques.
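For a sense of how much headroom remains before those physical limits bite, Landauer's principle puts a floor of kT ln 2 on the energy needed to irreversibly erase one bit. A minimal sketch; the processor figures in the final comment are made-up illustrative numbers, not measurements of any real chip:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temp_kelvin):
    """Minimum energy (J) to irreversibly erase one bit at temperature T."""
    return K_B * temp_kelvin * math.log(2)

for temp in (300, 4):  # room temperature vs. the liquid-helium range
    print(f"T = {temp:>3} K: {landauer_limit(temp):.3g} J per bit erased")

# A hypothetical 100 W processor performing 1e18 bit operations per second
# spends 1e-16 J per bit -- tens of thousands of times above the 300 K floor.
```

Today's hardware sits orders of magnitude above the Landauer floor, so the limit itself isn't the near-term problem; waste heat and power bills are.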

There is one proven technology for data-center-style computation that provides a significant leap in efficiency without major changes in architecture, and delivers a large bump in clock speed at the same time: Rapid Single Flux Quantum computation, or RSFQ. Using Josephson junctions etched into simple low-temperature superconductors (like lead, tin, or niobium), digital circuits can be implemented using individual flux quanta to represent bits, with efficiency approaching the information-theoretic cost of computation. This is not to be confused with traditional quantum computation, which involves operations on qubits and can bust through gnarly problems like prime factorization: RSFQ circuits behave almost exactly like silicon circuits, they just use electromagnetic flux quanta for signaling instead of small currents of many electrons. Because there is no resistive dissipation of heat, circuits can be laid out very close together and run very fast (up to hundreds of gigahertz), and an entire superconducting supercomputer would directly consume only a few watts of power.

The downside is that the circuits must be cryogenically cooled to a few kelvin above absolute zero. The contemporary closed-cycle pulse-tube coolers and vacuum systems required to reach these temperatures consume several kilowatts of power and will not be miniaturized to fit in your pocket any time soon, but they could easily be integrated into the infrastructure of modern data centers. The capital, electricity, and maintenance costs of the cryogenics should easily be paid off by the megawatts of electricity saved compared to contemporary silicon-based supercomputers.
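As a rough plausibility check on that trade-off, here is a sketch of the energy accounting. The cryocooler "specific power" figure and the two system power draws are illustrative assumptions, not measurements; real 4 K pulse-tube coolers are commonly quoted at hundreds to around a thousand wall watts per watt of heat removed (the ideal Carnot ratio from 4 K to 300 K is about 74 W/W):

```python
# Rough energy accounting: cryo-cooled RSFQ system vs. a room-temperature
# silicon system. All numbers below are illustrative assumptions.

SPECIFIC_POWER_4K = 1000.0  # wall watts per watt of heat lifted at ~4 K

def total_wall_power(chip_watts, specific_power=SPECIFIC_POWER_4K):
    """Chip dissipation plus the cryocooler power needed to remove it."""
    return chip_watts + chip_watts * specific_power

rsfq_chip_watts = 5.0        # "a few watts" dissipated at 4 K
rsfq_total = total_wall_power(rsfq_chip_watts)
silicon_total = 5e6          # a multi-megawatt silicon supercomputer

print(f"RSFQ system draw:    {rsfq_total / 1e3:.1f} kW (cryogenics included)")
print(f"Silicon system draw: {silicon_total / 1e6:.1f} MW")
print(f"Ratio: ~{silicon_total / rsfq_total:.0f}x")
```

Even with a thousand-fold cooling penalty, a few watts dissipated at 4 K translates to only kilowatts at the wall, still roughly three orders of magnitude below a multi-megawatt silicon installation.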

Photo of a packaged device on a cryogenic cooler, from hypres.com.

RSFQ technology has been under development for decades, both in academia and in industry, and most of the difficult "glue" circuits (like memory and optical interconnections) have been demonstrated. The US National Security Agency (NSA) published a technology assessment in 2005 recommending the development of the technology for future code-cracking supercomputers, and a group at Stony Brook University researched petaflop-scale supercomputers even further back, in 1999 (their website also has a dated but enthusiastic overview of the technology). A company named Hypres even sells rack-mountable multi-GHz direct-digital-sampling equipment using the technology and offers foundry services to other research groups.

We look forward to the day we can host our services on ultra-efficient computers powered by renewable energy sources. We don't list any RSFQ microprocessors on Octopart yet, but we certainly do list the types of components and tools used to build supporting electronics for experimental technologies like this, and we hope we can save researchers hours of comparison shopping for optoisolators or obscure card connectors.

Octopart is hiring! We are looking for an analytics engineer, a front-end engineer, and a business operations person. If you are enthusiastic about the future of electronics and how Octopart can help, get in touch!
