Reversible Computing Has Potential For 4000x More Energy Efficient Computation

Michael Frank has spent more than three decades as an academic researcher working in a peculiar niche of computer engineering. According to Frank, that peculiar niche’s time has finally come. “I decided earlier this year that it was the right time to try to commercialize this stuff,” Frank says. In July 2024, he left his position as a senior engineering scientist at Sandia National Laboratories to join the U.S.- and U.K.-based startup Vaire Computing.

Frank argues that it’s the right time to bring his life’s work—called reversible computing—out of academia and into the real world because the computing industry is running out of energy. “We keep getting closer and closer to the end of scaling energy efficiency in conventional chips,” Frank says. According to an IEEE semiconductor industry road map report Frank helped edit, the fundamental energy efficiency of conventional digital logic is going to plateau by late in this decade, and “it’s going to require more unconventional approaches like what we’re pursuing,” he says.

As Moore’s Law stumbles and its energy-themed cousin Koomey’s Law slows, a new paradigm might be necessary to meet the increasing computing demands of today’s world. According to Frank’s research at Sandia, in Albuquerque, reversible computing may offer up to a 4,000x energy-efficiency gain compared to traditional approaches.

“Moore’s Law has kind of collapsed, or it’s really slowed down,” says Erik DeBenedictis, founder of Zettaflops, who isn’t affiliated with Vaire. “Reversible computing is one of just a small number of options for reinvigorating Moore’s Law, or getting some additional improvements in energy efficiency.”

Vaire’s first prototype, expected to be fabricated in the first quarter of 2025, is less ambitious: a chip that, for the first time, recovers the energy used in an arithmetic circuit. The next chip, projected to hit the market in 2027, will be an energy-saving processor specialized for AI inference. The 4,000x energy-efficiency improvement is on Vaire’s road map, but it is probably 10 or 15 years out.

“I feel that the technology has promise,” says Himanshu Thapliyal, associate professor of electrical engineering and computer science at the University of Tennessee, Knoxville, who isn’t affiliated with Vaire. “But there are some challenges also, and hopefully, Vaire Computing will be able to overcome some of the challenges.”

What Is Reversible Computing?

Intuitively, information may seem like an ephemeral, abstract concept. But in 1961, Rolf Landauer at IBM discovered a surprising fact: Erasing a bit of information in a computer necessarily costs energy, which is lost as heat. It occurred to Landauer that if you were to do computation without erasing any information, or “reversibly,” you could, at least theoretically, compute without using any energy at all.
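
Landauer’s bound can be put into numbers directly: erasing one bit at temperature T releases at least kT ln 2 of heat. The short Python sketch below evaluates that expression; the room-temperature value is an assumption chosen only for illustration.

import math

k_B = 1.380649e-23   # Boltzmann constant, in joules per kelvin
T = 300.0            # assumed operating temperature (roughly room temperature), in kelvin

# Landauer's bound: minimum heat released when one bit is erased
landauer_limit = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {landauer_limit:.2e} J")
# Prints roughly 2.87e-21 J -- tiny, but unavoidable for irreversible logic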

Landauer himself considered the idea impractical. If you were to store every input and intermediate computation result, you would quickly fill up memory with unnecessary data. But Landauer’s successor, IBM’s Charles Bennett, discovered a workaround for this issue. Instead of just storing intermediate results in memory, you could reverse the computation, or “decompute,” once that result was no longer needed. This way, only the original inputs and final result need to be stored.

Take a simple example, such as the exclusive-OR, or XOR gate. Normally, the gate is not reversible—there are two inputs and only one output, and knowing the output doesn’t give you complete information about what the inputs were. The same computation can be done reversibly by adding an extra output, a copy of one of the original inputs. Then, using the two outputs, the original inputs can be recovered in a decomputation step.

A traditional exclusive-OR (XOR) gate is not reversible—you cannot recover the inputs just by knowing the output. Adding an extra output, just a copy of one of the inputs, makes it reversible. Then, the two outputs can be used to “decompute” the XOR gate and recover the inputs, and with it, the energy used in computation.
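
The reversible XOR described above (known in the reversible-logic literature as a controlled-NOT, or Feynman, gate) can be sketched in a few lines of Python. The function names here are ours, chosen purely for illustration.

def reversible_xor(a: int, b: int) -> tuple[int, int]:
    """Forward step: map (a, b) to (a, a XOR b).
    Carrying a copy of input a alongside the XOR result makes the
    mapping one-to-one, and therefore reversible."""
    return a, a ^ b

def decompute_xor(a: int, result: int) -> tuple[int, int]:
    """Reverse step: recover the original inputs from the two outputs.
    Because a XOR (a XOR b) equals b, no information was destroyed."""
    return a, a ^ result

# Every input pair is recovered exactly from the outputs.
for a in (0, 1):
    for b in (0, 1):
        outputs = reversible_xor(a, b)
        assert decompute_xor(*outputs) == (a, b)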

The idea kept gaining academic traction, and in the 1990s, several students working under MIT’s Thomas Knight embarked on a series of proof-of-principle demonstrations of reversible computing chips. One of these students was Frank. While these demonstrations showed that reversible computation was possible, the wall-plug power usage was not necessarily reduced: Although power was recovered within the circuit itself, it was subsequently lost within the external power supply. That’s the problem that Vaire set out to solve.

Computing Reversibly in CMOS

Landauer’s limit gives a theoretical minimum for how much energy information erasure costs, but there is no maximum. Today’s CMOS implementations use more than a thousand times as much energy to erase a bit as is theoretically necessary. That’s mostly because transistors need to maintain high signal energies for reliability, and under normal operation all of that signal energy gets dissipated as heat.
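
To get a feel for the size of that gap, compare a conventional gate’s switching energy, roughly ½CV² for the capacitance it drives, against the Landauer bound. The capacitance and supply voltage below are order-of-magnitude assumptions, not figures from Vaire or from the road map report.

import math

k_B = 1.380649e-23                # Boltzmann constant, J/K
T = 300.0                         # room temperature, K
landauer = k_B * T * math.log(2)  # about 2.9e-21 J per erased bit

# Assumed, order-of-magnitude values for a modern CMOS gate
C_load = 1e-15   # roughly a femtofarad of switched capacitance (assumption)
V_dd = 0.7       # supply voltage in volts (assumption)
signal_energy = 0.5 * C_load * V_dd**2

print(f"Conventional switching energy: {signal_energy:.2e} J")
print(f"Ratio to Landauer bound: {signal_energy / landauer:,.0f}x")
# With these assumed values the ratio lands in the tens of thousands --
# comfortably "more than a thousand times" the theoretical minimum.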

To avoid this problem, many alternative physical implementations of reversible circuits have been considered, including superconducting computers, molecular machines, and even living cells. However, to make reversible computing practical, Vaire’s team is sticking with conventional CMOS techniques. “Reversible computing is disruptive enough as it is,” says Vaire chief technology officer and cofounder Hannah Earley. “We don’t want to disrupt everything else at the same time.”

To make CMOS play nicely with reversibility, researchers had to come up with clever ways to recover and recycle this signal energy. “It’s kind of not immediately clear how you make CMOS operate reversibly,” Earley says.

The main way to reduce unnecessary heat generation in transistors—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly. This can be done without adding extra compute time, Earley argues, because transistor switching times are currently kept comparatively slow to avoid generating too much heat. So, you could keep the switching time the same and just change the waveform that does the switching, saving energy. However, adiabatic switching does require something to generate the more complex ramping waveforms.
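
The payoff of ramping slowly can be made concrete with the textbook adiabatic-charging relation: driving a capacitance C through an effective resistance R with a ramp of duration T dissipates roughly (RC/T)·CV², versus ½CV² for an abrupt step. All of the component values and the ramp time below are illustrative assumptions.

# Abrupt vs. slow-ramp (adiabatic) charging of a logic node -- illustrative values only
R = 10e3          # effective channel resistance, ohms (assumption)
C = 1e-15         # switched load capacitance, farads (assumption)
V = 0.7           # voltage swing, volts (assumption)
ramp_time = 1e-9  # 1-nanosecond ramp instead of an abrupt step (assumption)

abrupt_loss = 0.5 * C * V**2                     # conventional switching loss
adiabatic_loss = (R * C / ramp_time) * C * V**2  # slow-ramp loss, valid for ramp_time >> R*C

print(f"Abrupt switching loss:    {abrupt_loss:.2e} J")
print(f"Adiabatic switching loss: {adiabatic_loss:.2e} J")
print(f"Reduction: about {abrupt_loss / adiabatic_loss:.0f}x")
# The slower the ramp relative to the RC time constant, the smaller
# the fraction of the signal energy that is turned into heat.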

It still takes energy to flip a bit from 0 to 1, changing the gate voltage on a transistor from its low to high state. The trick is that, as long as you don’t convert energy to heat but store most of it in the transistor itself, you can recover most of that energy during the decomputation step, where any no-longer-needed computation is reversed. The way to recover that energy, Earley explains, is by embedding the whole circuit into a resonator.

A resonator is kind of like a swinging pendulum. If there were no friction from the pendulum’s hinge or the surrounding air, the pendulum would swing forever, going up to the same height with each swing. Here, the swing of the pendulum is a rise and fall in voltage powering the circuit. On each upswing, one computational step is performed. On each downswing, a decomputation is performed, recovering the energy.

In every real implementation, some amount of energy is still lost with each swing, so the pendulum requires some power to keep it going. But Vaire’s approach paves the way to minimizing that friction. Embedding the circuit in a resonator simultaneously creates the more complex waveforms needed for adiabatic transistor switching and provides the mechanism for recovering the saved energy.

The Long Road to Commercial Viability

Although the idea of embedding reversible logic inside a resonator has been developed before, no one has yet built one that integrates the resonator on chip with the computing core. Vaire’s team is hard at work on their first version of this chip. The simplest resonator to implement, and the one the team is tackling first, is an inductive-capacitive (LC) resonator, where the role of the capacitor is played by the whole circuit and an on-chip inductor serves to keep the voltage oscillating.
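
For the LC arrangement described above, the oscillation frequency follows the familiar resonance formula f = 1/(2π√(LC)). The inductance and capacitance below are placeholder assumptions, not Vaire’s design values; they simply show how the tank sets the power-clock frequency.

import math

# Placeholder values: the logic itself acts as the capacitor and an
# on-chip inductor completes the tank (both numbers are assumptions).
L_tank = 10e-9       # 10 nanohenries of on-chip inductance
C_circuit = 100e-12  # 100 picofarads of total circuit capacitance

f_res = 1.0 / (2 * math.pi * math.sqrt(L_tank * C_circuit))
print(f"Resonant (power-clock) frequency: {f_res / 1e6:.0f} MHz")
# With these numbers the supply rails would swing at roughly 160 MHz,
# and that oscillation doubles as the waveform that clocks the logic.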

The chip Vaire plans to send for fabrication in early 2025 will be a reversible adder embedded in an LC resonator. The team is also working on a chip that will perform the multiply-accumulate operation, the basic computation in most machine learning applications. In the following years, Vaire plans to design the first reversible chip specialized for AI inference.

“Some of our early test chips might be lower-end systems, especially power-constrained environments, but not long after that, we’re addressing higher-end markets as well,” Frank says.

LC resonators are the most straightforward type to implement in CMOS, but they come with comparatively low quality factors, meaning the voltage pendulum will run with some friction. The Vaire team is also working on integrating a microelectromechanical systems (MEMS) resonator version, which is much more difficult to integrate on chip but promises much higher quality factors (less friction). Earley expects a MEMS-based resonator to eventually provide 99.97 percent friction-free operation.
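
One way to read the 99.97 percent figure (our interpretation, not a statement from Vaire) is as the share of circulating energy retained on each swing. Using the standard relation that a resonator of quality factor Q loses about 2π/Q of its stored energy per cycle, that retention level implies a Q in the low tens of thousands, and the same relation shows why a modest on-chip LC tank leaks far more.

import math

# Assumed interpretation: "99.97 percent friction-free" means only
# 0.03 percent of the circulating energy is lost on each cycle.
loss_fraction_per_cycle = 0.0003

# Standard relation: a resonator loses roughly 2*pi/Q of its stored energy per cycle
required_Q = 2 * math.pi / loss_fraction_per_cycle
print(f"Implied quality factor: about {required_Q:,.0f}")

# For comparison, an on-chip LC tank with an assumed, modest Q
lc_Q = 30
print(f"LC tank loss per cycle: about {2 * math.pi / lc_Q:.0%}")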

Along the way, the team is designing new reversible logic gate architectures and electronic-design-automation tools for reversible computation. “Most of our challenges will be, I think, in custom manufacturing and hetero-integration in order to combine efficient resonator circuits together with the logic in one integrated product,” Frank says.

Earley hopes that these are challenges the company will overcome. “In principle, this allows [us], over the next 10 to 15 years, to get to 4,000x improvement in performance,” she says. “Really it is going to be down to how good a resonator you can get.”
