Brain-Inspired Computers Now Solve Physics Equations at a Fraction of the Energy Cost
A computer that thinks like a brain just proved it can do something nobody expected – solve the fiendishly complex mathematics that underpins weather forecasting, fluid dynamics, and nuclear physics simulations. And it can do so while consuming dramatically less energy than the massive supercomputers that have traditionally owned this territory.
The breakthrough comes from Sandia National Laboratories, where computational neuroscientists Brad Theilman and Brad Aimone published a pivotal paper in Nature Machine Intelligence describing a novel algorithm called NeuroFEM. Their work demonstrates that neuromorphic hardware – circuitry designed to mimic the architecture of the human brain – can tackle partial differential equations (PDEs), the mathematical backbone of virtually every physics simulation that matters. For decades, the prevailing wisdom held that brain-inspired computers were useful for pattern recognition and little else. That assumption has now been shattered.
The implications stretch from national security to climate science, and the energy savings alone could reshape how the world approaches large-scale scientific computing.
What Makes Neuromorphic Computing Fundamentally Different
Traditional computers follow the von Neumann architecture conceived at the end of World War II: a central processor connected to separate memory via a digital bus. Every unit in the system stays active at all times, regardless of whether it has useful work to do. This design has powered nearly eight decades of computing progress, but it has also created an enormous energy problem. The best supercomputers in the world are roughly eight orders of magnitude less computationally efficient than the human brain, maxing out at around 100 picojoules per multiply-accumulate operation. The human brain, by contrast, operates at less than one attojoule per operation and executes an astonishing 10^18 operations per second on just 20 watts of power – the same amount needed to run a light bulb.
Neuromorphic systems flip this paradigm entirely. Instead of keeping all circuits active continuously, they use spiking neural networks where individual units fire only when they receive or emit information, operating in a purely event-driven manner on sparse binary signals. This mirrors how biological neurons work: silent until needed, then firing brief electrical impulses. The result is inherent energy efficiency baked into the hardware’s fundamental design, not bolted on as an afterthought.
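The "silent until needed" behavior can be seen in a minimal leaky integrate-and-fire neuron, the standard textbook model of a spiking unit. This is an illustrative sketch, not code from any neuromorphic platform; the time constant and threshold are arbitrary choices.

```python
def lif_step(v, i_in, tau=20.0, v_thresh=1.0, dt=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    Returns the updated membrane potential and whether a spike fired.
    """
    v = v + dt * (-v / tau + i_in)   # leak toward rest, integrate input
    if v >= v_thresh:                # threshold crossing -> emit a spike
        return 0.0, True             # reset after spiking
    return v, False

# Drive the neuron with a constant input: it stays silent while the
# membrane potential charges up, then fires a brief, discrete spike.
v, spikes = 0.0, []
for t in range(100):
    v, fired = lif_step(v, i_in=0.06)
    if fired:
        spikes.append(t)
print(spikes)   # sparse list of spike times, not continuous output
```

The key property is that downstream work happens only at the handful of timesteps in `spikes` – the event-driven sparsity that gives neuromorphic hardware its efficiency.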
The Sandia Breakthrough: NeuroFEM
The algorithm at the center of this advance is called NeuroFEM – a neuromorphic finite element method. It works by constructing a spiking neural network from the sparse linear systems that arise when PDEs are discretized using the finite element method. Each mesh node in the discretization contains a population of spiking neurons, with the weights between neural populations determined by the elements of the linear system matrix. Each population receives a bias from the right-hand side of the linear system, and the network’s construction ensures that spiking activity flows toward the solution over time.
The mechanism is elegantly adversarial. Half the neurons in each node project with a positive readout weight and half with a negative weight. This tug-of-war between opposing neural populations pulls the readout variable to the required value, with individual spikes adding exponentially decaying kernels to the output. The sum of these kernels across all neurons produces a fluctuating readout that converges on the correct solution.
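A toy sketch of that readout mechanism, for a single scalar value rather than a full mesh (this is not Sandia's implementation; the firing probabilities, weight, and kernel time constant are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

target = 0.3     # value the tug-of-war should settle on (assumed)
tau = 50.0       # decay constant of each spike's kernel, in timesteps
base = 0.05      # baseline per-step firing probability (assumed)
w = 0.2          # readout weight chosen so the mean settles on `target`
T = 20_000

# Opposing populations: the positive one fires slightly more often
# than the negative one, in proportion to the encoded value.
p_pos = base * (1 + target)
p_neg = base * (1 - target)

decay = np.exp(-1.0 / tau)
readout, trace = 0.0, []
for _ in range(T):
    readout *= decay              # earlier kernels decay away
    if rng.random() < p_pos:
        readout += w              # +kernel from a positive-population spike
    if rng.random() < p_neg:
        readout -= w              # -kernel from a negative-population spike
    trace.append(readout)

# Averaging the fluctuating readout over many timesteps sharpens the estimate.
estimate = np.mean(trace[T // 2:])
print(estimate)   # hovers near 0.3
```

The instantaneous readout jitters with every spike, but its time average converges on the encoded value – which is also why, as noted below, accuracy improves with the number of readout timesteps averaged together.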
Theilman and Aimone demonstrated NeuroFEM solving the Poisson equation on a disk – a benchmark PDE problem – and showed close agreement with ground-truth solutions generated by conventional linear solvers. Critically, the relative residual per mesh point remains constant as the mesh scales up, and accuracy improves linearly with the number of readout timesteps averaged together. The system also demonstrated the ability to dynamically switch between different right-hand sides mid-simulation, adapting in real time.
Why PDEs Matter So Much
Partial differential equations describe how physical quantities change across multiple variables like space and time. They are not abstract mathematical curiosities – they are the essential language for modeling nearly every real-world system that scientists and engineers care about.
- Weather and climate forecasting relies on PDEs to model atmospheric fluid dynamics across the entire planet
- Structural mechanics uses PDEs to predict how materials respond to stress, vibration, and thermal loads
- Electromagnetic field modeling depends on Maxwell’s equations, a system of PDEs
- Nuclear weapons physics requires solving PDEs to simulate detonation dynamics without physical testing
- Fluid dynamics – from aircraft design to blood flow in arteries – is governed by the Navier-Stokes equations, among the most computationally demanding PDEs in existence
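Discretizing any of these equations turns them into large sparse linear systems – exactly the objects NeuroFEM maps onto a spiking network. A minimal example with the 1D Poisson equation (a simpler cousin of the 2D benchmark Sandia used), solved here with an ordinary dense solver for clarity:

```python
import numpy as np

# Discretize -u'' = f on (0, 1) with u(0) = u(1) = 0.
# A uniform mesh reduces the PDE to a sparse tridiagonal system A u = f.
n = 99                       # interior mesh points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Stiffness matrix: 2 on the diagonal, -1 off the diagonal, scaled by 1/h^2.
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)   # chosen so the exact solution is sin(pi x)
u = np.linalg.solve(A, f)

error = np.max(np.abs(u - np.sin(np.pi * x)))
print(error)   # discretization error, on the order of 1e-4
```

In NeuroFEM's scheme, the entries of `A` would become weights between neural populations and `f` the bias each population receives; here the point is simply that "solving a PDE" in practice means solving systems like this one, at vastly larger scale.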
Solving these equations at scale traditionally requires supercomputers consuming megawatts of electricity. The prospect of achieving comparable results on brain-inspired hardware using a fraction of that power represents a potential sea change in scientific computing.
Energy Implications for National Security
The practical significance of this work is perhaps most acute for the National Nuclear Security Administration (NNSA), which maintains the United States’ nuclear deterrent. Since the end of underground nuclear testing, the nation’s nuclear stockpile has been certified through computational simulation – an endeavor that demands some of the most powerful supercomputers on Earth, all consuming vast amounts of electricity.
“You can solve real physics problems with brain-like computation,” Aimone explained. “That’s something you wouldn’t expect because people’s intuition goes the opposite way. And in fact, that intuition is often wrong.”
The energy argument becomes even more compelling when you consider the brain’s own computational feats. As Aimone pointed out, everyday motor control tasks – hitting a tennis ball, swinging a bat at a baseball – are exascale-level problems that human brains solve cheaply and continuously. If neuromorphic hardware can capture even a fraction of that efficiency for formal mathematical computation, the savings for energy-intensive national security simulations could be transformative. Specific wattage comparisons between neuromorphic and conventional systems for PDE workloads have not yet been published, but researchers consistently describe the reductions as dramatic.
The Unexpected Bridge Between Neuroscience and Mathematics
One of the most fascinating dimensions of this research is what it reveals about the brain itself. The NeuroFEM algorithm was built on a relatively well-known cortical network model from computational neuroscience – a model that had existed for 12 years without anyone recognizing its connection to PDEs.
“We based our circuit on a relatively well-known model in the computational neuroscience world,” Theilman said. “We’ve shown the model has a natural but non-obvious link to PDEs, and that link hasn’t been made until now.”
This raises provocative questions. If brain-like circuits naturally solve the same equations that describe physical reality, what does that tell us about how biological brains process information? Aimone has speculated that diseases of the brain could be diseases of computation – a hypothesis that, if validated, could open entirely new approaches to understanding and treating neurological conditions like Alzheimer’s and Parkinson’s disease. The research creates a feedback loop: insights from neuroscience improve computing hardware, and that hardware in turn generates new hypotheses about neural function.
Complementary Advances Across the Field
Sandia’s work does not exist in isolation. A broader ecosystem of neuromorphic research is converging from multiple directions.
At the University of California San Diego, engineers have developed a brain-inspired hardware platform using hydrogen-doped perovskite nickelate – a quantum material with unusual electronic properties. Their device combines memory and computation on the same chip, with nodes interacting collectively through a shared substrate that loosely resembles the ionic fluid surrounding biological neurons. The system processes information using spatiotemporal computing, analyzing signals both over time and through spatial interactions. It operates on the scale of hundreds of nanoseconds and consumes approximately 0.2 nanojoules per operation, making it a candidate for edge AI applications in wearable health monitors and smart sensors.
Meanwhile, Los Alamos National Laboratory is pursuing an ambitious long-term vision for neuromorphic AI that could operate on just 20 watts – the brain's own power budget. Their near-term goal is a neuromorphic computer fitting within a two-square-meter box, housing as many neurons as the human cerebral cortex and operating between 250,000 and 1,000,000 times faster than a biological brain on approximately 10 kilowatts of power. The CHIPS and Science Act of 2022 allocated $280 billion toward reestablishing American dominance in computing, with neuromorphic technology among the prioritized areas.
Current Neuromorphic Hardware Landscape
| Platform | Neuron Capacity | Energy Profile | Primary Use Case |
|---|---|---|---|
| Intel Loihi 2 | 1 million neurons per chip | 0.1-1 picojoule per spike | Differential equations, real-time physics |
| IBM TrueNorth | 1 million neurons, 4,096 cores | ~70 milliwatts per chip | Event-driven particle simulations |
| Stanford Neurogrid | 1 million neurons | Less than 1 watt full system | Biological physics models |
| Lava Simulator (Software) | Scalable to 10 million+ | GPU-hosted | Prototyping any equation type |
Challenges and the Road to Neuromorphic Supercomputers
For all its promise, neuromorphic computing for scientific applications remains in its early stages. No direct kilowatt-hour-per-simulation comparisons between neuromorphic and conventional systems have been published. Deployment timelines for production-scale neuromorphic supercomputers are not yet established. The technology is firmly in the proof-of-concept phase.
Several technical hurdles remain:
- Scaling beyond lab prototypes – Current demonstrations work on benchmark problems; extending to full-scale industrial simulations requires orders-of-magnitude increases in network size
- Precision management – Mapping continuous physics variables to discrete spikes introduces quantization challenges that must be carefully managed through encoding strategies like rate coding at 10-100 Hz spike rates or temporal coding with 1-5 millisecond inter-spike intervals
- Software ecosystem maturity – Tools like Intel’s open-source Lava framework exist but are far less mature than the decades-old software stacks supporting conventional HPC
- Hybrid integration – Near-term practical deployments will likely pair neuromorphic hardware with traditional GPUs, using the former for inference-heavy workloads and the latter for training
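The precision-management hurdle is easy to see in a toy rate-coding experiment (an illustrative sketch, not any platform's actual encoder; the maximum rate and window lengths are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_encode_decode(value, rate_max=100.0, window=0.5, dt=0.001):
    """Encode a value in [0, 1] as a Poisson-like spike train at up to
    `rate_max` Hz, then decode it by counting spikes over a finite window.
    Shorter windows mean coarser quantization of the continuous value."""
    steps = int(window / dt)
    p = value * rate_max * dt                  # per-step spike probability
    spikes = rng.random(steps) < p
    return spikes.sum() / (rate_max * window)  # decoded estimate of `value`

# Decoding error shrinks as the observation window grows.
for window in (0.05, 0.5, 5.0):
    print(window, rate_encode_decode(0.37, window=window))
```

A 50-millisecond window can only resolve a handful of distinct spike counts, while a 5-second window recovers the value closely – which is why encoding strategy and readout duration trade directly against simulation speed.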
Theilman has framed the central research question going forward: “If we’ve already shown that we can import this relatively basic but fundamental applied math algorithm into neuromorphic – is there a corresponding neuromorphic formulation for even more advanced applied math techniques?” The answer to that question will determine whether neuromorphic computing remains a niche curiosity or becomes a pillar of next-generation scientific infrastructure.
What This Means for the Future of Computing
The convergence of these developments – Sandia’s NeuroFEM algorithm, UC San Diego’s spatiotemporal hardware, Los Alamos’s vision for brain-scale neuromorphic machines – points toward a future where computing power is no longer synonymous with energy consumption. The human brain has always been the proof that extraordinary computation can happen on a modest energy budget. Neuromorphic engineering is finally beginning to translate that biological reality into silicon.
For the scientific community, the immediate opportunity lies in energy-constrained environments: national security simulations where electricity costs are a strategic concern, edge devices that must process physics in real time with limited power, and autonomous systems where every watt matters. For the longer term, the vision is nothing less than the world’s first neuromorphic supercomputer – a machine that thinks like a brain and solves the equations that describe the physical universe, all while consuming a fraction of the energy that today’s systems demand.
“We have a foot in the door for understanding the scientific questions, but also we have something that solves a real problem,” Theilman said. That combination of fundamental insight and practical utility is rare in computing research – and it is exactly what makes this moment so significant.
Sources
- Brain-Inspired Machines Are Better at Math Than Expected – ScienceDaily
- Nature-Inspired Computers Are Shockingly Good at Math – Sandia
- Brain-Inspired Computers Are Shockingly Good at Math – LabNews
- Neuromorphic Computing: The Future of AI – LANL
- Brain-Inspired Device for Faster AI Hardware – UC San Diego
- Neuromorphic Computing: An Overview – arXiv