An R&D lab under the U.S. Energy Department announced this week that “Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges.”
Phys.org publishes the announcement from Sandia National Lab:
In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world’s first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond…
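As an illustrative aside (not drawn from the Sandia paper): one of the simplest PDEs is the heat equation, ∂u/∂t = α∇²u, which describes how a temperature field u spreads through a material over time, with α the material’s thermal diffusivity. Conventional supercomputers solve equations like this by dividing space and time into fine grids and stepping the solution forward numerically; the Sandia result indicates that spiking neuromorphic hardware can tackle the same class of problems, and do so efficiently.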
“We’re just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly,” Theilman said.

For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren’t expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren’t surprising. The researchers believe the brain itself performs complex computations constantly, even if we don’t consciously realize it. “Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball,” Aimone said. “These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply…”
Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. “We based our circuit on a relatively well-known model in the computational neuroscience world,” Theilman said. “We’ve shown the model has a natural but non-obvious link to PDEs, and that link hasn’t been made until now — 12 years after the model was introduced.” The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. “Diseases of the brain could be diseases of computation,” Aimone said. “But we don’t have a solid grasp on how the brain performs computations yet.” If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer’s and Parkinson’s.