Analogue computers use less energy than digital ones
Analogue computers that rapidly solve a key type of equation used in training artificial intelligence models could offer a potential solution to the growing energy consumption in data centres caused by the AI boom.
Laptops, smartphones and other familiar devices are known as digital computers, because they store and process data as a series of binary digits, either 0 or 1, and can be programmed to solve a range of problems. In contrast, analogue computers are normally designed to solve just one specific problem. They store and process data using quantities that can vary continuously such as electrical resistance, rather than discrete 0s and 1s.
Analogue computers can excel at speed and energy efficiency, but have previously lacked the accuracy of their digital counterparts. Now, Zhong Sun at Peking University, China, and his colleagues have created a pair of analogue chips that work together to accurately solve matrix equations – a fundamental part of sending data over telecom networks, running large scientific simulations or training AI models.
The first chip rapidly outputs a low-precision solution to a matrix equation, while the second runs an iterative refinement algorithm that measures the first chip's error and corrects it. Sun says that the first chip produces results with an error rate of around 1 per cent, but that after three cycles of the second chip, this drops to 0.0000001 per cent – which he says matches the precision of standard digital calculations.
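The paper's hardware isn't public, but the two-chip scheme follows the classic iterative-refinement pattern. A minimal NumPy sketch, in which artificial 1 per cent noise stands in for the analogue chip's error (all names and the noise model are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def rough_solve(A, b):
    # Stand-in for the analogue "first chip": an exact digital solve
    # perturbed by roughly 1 per cent relative noise per element.
    x = np.linalg.solve(A, b)
    return x * (1 + 0.01 * rng.standard_normal(x.shape))

def refine(A, b, n_passes=3):
    # "Second chip": each pass measures the residual of the current
    # answer and feeds it back through the rough solver as a correction.
    x = rough_solve(A, b)
    for _ in range(n_passes):
        r = b - A @ x              # how far off the current answer is
        x = x + rough_solve(A, r)  # correction computed at low precision
    return x

# A well-conditioned 16 by 16 system, matching the chip size in the article
A = rng.standard_normal((16, 16)) + 16 * np.eye(16)
b = rng.standard_normal(16)
x = refine(A, b)
```

Each pass shrinks the remaining error by roughly the rough solver's own error level, which is why a handful of cycles is enough to take a 1 per cent answer down toward the tiny error figure quoted above.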
So far, the researchers have built chips capable of solving 16 by 16 matrix equations – involving 256 matrix entries – which could have applications for some small problems. But Sun admits that tackling the questions used in today’s large AI models would require far larger circuits, perhaps a million by a million.
One advantage analogue chips have over digital ones, however, is that larger matrices don’t take any longer to solve, whereas the cost on digital chips climbs steeply – roughly cubically, for solving dense matrix equations – as the matrix size increases. That means the throughput – the amount of data crunched per second – of a 32 by 32 matrix chip would beat that of an Nvidia H100 GPU, one of the high-end chips used to train AI today.
Theoretically, scaling further could see throughput reach 1000 times that of digital chips like GPUs, while using 100 times less energy, says Sun. But he is quick to point out that real-world tasks may stray outside the chips' extremely narrow capabilities, leading to smaller gains.
“It’s only a comparison of speed, and for real applications, the problem may be different,” says Sun. “Our chip can only do matrix computations. If matrix computation occupies most of the computing task, it represents a very significant acceleration for the problem, but if not, it will be a limited speed-up.”
Sun says that because of this, the most likely outcome is the creation of hybrid chips, where a GPU features some analogue circuits that handle very specific parts of problems – but even that is likely some years away.
James Millen at King’s College London says that matrix calculations are a key process in training AI models and that analogue computing offers a potential boost.
“The modern world is built on digital computers. These incredible machines are universal computers, which means they can be used to calculate absolutely anything, but not everything can necessarily be computed efficiently or fast,” says Millen. “Analogue computers are tailored to specific tasks, and in this way can be incredibly fast and efficient. This work uses an analogue computing chip to speed up a process called matrix inversion, which is a key process in training certain AI models. Doing this more efficiently could help reduce the huge energy demands of our ever-growing reliance on AI.”