
Breakthrough in Teaching Hard Physics to Neural Networks Reported by Los Alamos Physicists

From the immensity of cosmic ray halos like those enveloping the Milky Way and other distant galaxies, to our understanding of the forces underlying the expansion of the universe, the physics of charged particles that constitutes the field of electrodynamics is at the heart of some of the most complex phenomena scientists observe in our universe.

The branch of physics that examines interactions between the motion of charged particles and variations in electric and magnetic fields, electrodynamics has its origins in the early studies of Michael Faraday and James Clerk Maxwell, whose observations were among the first to suggest that aspects of reality exist apart from tangible matter.

Today, high-performance computing has greatly advanced our ability to study electrodynamic phenomena since Maxwell and Faraday’s time. Yet physicists still face challenges in using numerical simulations to calculate the relativistic charged particle dynamics that govern chaotic conditions such as plasma turbulence, as well as synchrotron radiation and other phenomena.

Now, scientists with the Department of Energy’s Los Alamos National Laboratory say they have produced a new method that allows them to introduce hard physics constraints to the structure of neural networks, an approach that they say could help resolve some of the lingering questions about the electromagnetic fields of high-energy charged particle beams.

In a new paper appearing in the journal APL Machine Learning, Los Alamos researchers Alexander Scheinker and Reeju Pokharel present “a physics-constrained neural network (PCNN) approach to solving Maxwell’s equations for the electromagnetic fields of intense relativistic charged particle beams.”

In an email to The Debrief, Scheinker explained how physical laws relate to data in a learning machine under the PCNN approach he and Pokharel present.

“The main principle behind the physics-constrained neural network (PCNN) approach is to generate potential functions rather than field functions directly,” Scheinker says. As an example, suppose a team of researchers shows a neural network a particular magnetic field, labeled B, that they are trying to reproduce, and the network generates a field that approximately matches it (called B’ for the purposes of this example).

“In that case, the network’s approximation B’ can get arbitrarily close to the correct field B,” Scheinker says. However, closer inspection would reveal that B’ is, in reality, quite unstable.

“If on the other hand, you create a potential function and use that to then generate B’, it will respect the physics,” Scheinker says. “Not only does the network then try to match point-by-point, but it actually matches it in a physical sense in terms of derivatives and smoothness.”

This is the same process Scheinker and Pokharel applied to the problem in their recent study, which they achieved “by forcing the neural networks to create the vector and scalar potentials and then generate the electromagnetic fields from those potentials according to Maxwell’s equations,” hard-coding the physics constraints into the structure of the approach.
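In the standard notation of electromagnetism (a general fact, not anything specific to the paper), those relations are B = ∇×A for the vector potential A and E = −∇φ − ∂A/∂t for the scalar potential φ. Any field pair generated this way automatically satisfies ∇·B = 0 and Faraday’s law, which is precisely what makes the constraint “hard.”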

But how did they manage to enforce these physical constraints, so that the magnetic field the network generates is guaranteed to have zero divergence, as Maxwell’s equations require?

“The way to enforce that as a hard constraint is that the neural network generates the vector potential A and then we create B as the curl of A,” Scheinker explained. “This way, by definition B has zero divergence based on vector calculus because the divergence of the curl of a vector field is always zero.”
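A minimal numerical sketch of that identity, assuming nothing about the paper’s actual implementation (the grid, the test potential, and all names below are illustrative): whatever vector potential a network emits, the field constructed as its curl comes out divergence-free.

```python
import numpy as np

# Toy 3D grid standing in for a network's output domain (illustrative
# resolution and extent; nothing here comes from the paper).
n = 32
pts = np.linspace(-1.0, 1.0, n)
h = pts[1] - pts[0]  # grid spacing
x, y, z = np.meshgrid(pts, pts, pts, indexing="ij")

def curl(Ax, Ay, Az, h):
    """B = curl A via central finite differences along each axis."""
    g = lambda f, ax: np.gradient(f, h, axis=ax)
    return (g(Az, 1) - g(Ay, 2),   # Bx = dAz/dy - dAy/dz
            g(Ax, 2) - g(Az, 0),   # By = dAx/dz - dAz/dx
            g(Ay, 0) - g(Ax, 1))   # Bz = dAy/dx - dAx/dy

# An arbitrary smooth stand-in for a network's vector potential output.
Ax, Ay, Az = np.sin(np.pi * y) * z, np.cos(np.pi * z) * x, x * y

Bx, By, Bz = curl(Ax, Ay, Az, h)

# The divergence of B cancels to machine precision, whatever A was,
# because difference operators along different axes commute -- the
# discrete mirror of the identity div(curl A) = 0.
divB = (np.gradient(Bx, h, axis=0)
        + np.gradient(By, h, axis=1)
        + np.gradient(Bz, h, axis=2))
print(np.abs(divB).max())  # effectively zero
```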

In the past, complex charged particle behaviors and related phenomena have been studied with some success thanks to the help of physics-informed neural networks (PINNs), which introduce soft constraints through a neural network’s cost function. While both flexible and powerful, a PINN’s soft constraints offer no guarantee that the constraints researchers introduce will always be met.

“PINN-type approaches do not build any hard constraints into the structures of the neural networks, they simply add an additional term to the cost function which gently pushes the network’s output towards satisfying some desired constraint,” Scheinker told The Debrief.

“In contrast to that, the PCNN approach has guaranteed built-in hard physics constraints that cannot be violated by the network,” he adds. By its design, the PCNN he and Pokharel utilized in their research is only capable of generating fields that will satisfy such hard constraints.

“Intuitively, if you think about a neural network generating a magnetic field and you know the divergence should be zero (divB=0), a PINN approach would add an additional term to the cost function which penalizes non-zero values of divergence,” Scheinker explains.

In statistics and machine learning, a cost function (also called a loss function) maps values of one or more variables onto a real number representing a “cost” associated with a given outcome; training a neural network amounts to minimizing this cost.
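As a generic illustration of the PINN-style cost Scheinker describes (not the authors’ code; the array shapes, helper names, and the weight lam are assumptions), the divergence penalty might be added to the data-mismatch term like this:

```python
import numpy as np

def divergence(Bx, By, Bz, h):
    """Central-difference divergence of a field sampled on a cubic grid."""
    return (np.gradient(Bx, h, axis=0)
            + np.gradient(By, h, axis=1)
            + np.gradient(Bz, h, axis=2))

def pinn_cost(B_pred, B_true, h, lam=1.0):
    """PINN-style cost: data mismatch plus a soft physics penalty.

    B_pred, B_true: arrays of shape (3, n, n, n) holding the predicted
    and target fields; lam weighs the physics term against the fit term.
    """
    mismatch = np.mean((B_pred - B_true) ** 2)  # match the target field
    div = divergence(*B_pred, h)                # should be zero physically
    penalty = np.mean(div ** 2)                 # soft divergence penalty
    return mismatch + lam * penalty
```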

In the case of a PINN approach, “this comes at a cost, because as you penalize divergence,” Scheinker adds, “the easiest way for a network to make a field that has zero divergence is to just make the field equal to some arbitrary constant, which has no relation to the correct value of the field.” Two parts of the cost function are thus placed at odds: one attempts to match the magnetic field point by point, but is incapable of gauging constraints like how the field’s derivatives should behave.

Meanwhile, “the other part tries to respect the physics,” Scheinker says, “but does not respect the fact that the output should match the correct value.”

“In the PCNN approach there is no such tradeoff,” he says. “By construction, the networks can only create fields that satisfy the hard constraints and the only cost function is the accuracy of the match.” However, in their research, Scheinker and Pokharel were able to utilize both approaches by finding a way to combine the PCNN and PINN methods.
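By contrast, a PCNN-style cost could reduce to the match alone, since the physics lives in the model’s structure. A hedged sketch, reusing the import and the curl helper from the earlier snippet (again illustrative, not the paper’s implementation):

```python
def pcnn_cost(A_pred, B_true, h):
    """PCNN-style cost: the network emits a vector potential A rather
    than the field itself. B = curl A is divergence-free by identity,
    so the only term left is the accuracy of the match."""
    B_pred = np.stack(curl(*A_pred, h))  # hard constraint in the model
    return np.mean((B_pred - B_true) ** 2)
```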

“We used the PCNN approach to build fields that are guaranteed to satisfy physics constraints so that we could trust them for beam dynamics simulations,” Scheinker explained, adding that this meant they “would be guaranteed to satisfy other physics such as energy conservation and Liouville’s theorem.”

“However, we also added a PINN-type soft constraint that would gently push our PCNN’s potential functions to satisfy the Lorenz gauge.

“In my mind, this method of combination is something anyone could and should use when trying to build physics into their neural networks,” Scheinker told The Debrief.
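One way to picture that combination, purely as a sketch under assumed shapes and names (none of this is from the paper): the Lorenz gauge condition in SI units reads ∇·A + (1/c²)∂φ/∂t = 0, and its residual can be penalized softly, PINN-style, on top of the PCNN’s hard structure.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def lorenz_gauge_penalty(A_t, phi_t, h, dt):
    """Soft, PINN-style term nudging the potentials toward the Lorenz
    gauge, div A + (1/c^2) dphi/dt = 0.

    A_t: shape (T, 3, n, n, n); phi_t: shape (T, n, n, n) -- the
    network's potentials sampled over T time steps (shapes assumed).
    """
    divA = (np.gradient(A_t[:, 0], h, axis=1)     # dAx/dx
            + np.gradient(A_t[:, 1], h, axis=2)   # dAy/dy
            + np.gradient(A_t[:, 2], h, axis=3))  # dAz/dz
    dphi_dt = np.gradient(phi_t, dt, axis=0)
    residual = divA + dphi_dt / C**2
    return np.mean(residual ** 2)  # added to the cost with a small weight
```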

Scheinker also described several potential practical outcomes of combining the approaches in this way. One involves advanced particle accelerators, where strong space charge forces and coherent synchrotron radiation can significantly distort both the shape and the energy of the short particle beams they produce.

“In order to design accelerators and components such as bunch compressors for such short and intense beams you really have to start taking into account these collective effects,” Scheinker told The Debrief.

“You can no longer get away with much simpler and faster simulations,” he says, noting that tens of thousands of hours could be required on even one of the most powerful supercomputers available today just to simulate one of these bunches.

“Now imagine simulating trains of hundreds of closely spaced intense bunches,” Scheinker says. “The problems become incredibly computationally expensive.

“Having a model that is aided by a PCNN which you can trust because it is guaranteed to satisfy physics constraints,” he says, “would be a game changer.

“It would speed these things up by orders of magnitude and would allow you to quickly study such designs in your office with a powerful workstation,” he added.

Scheinker and Pokharel’s new paper, “Physics-constrained 3D convolutional neural networks for electrodynamics,” was published in APL Machine Learning on April 14, 2023, and can be read in its entirety online.

Micah Hanks is the Editor-in-Chief and Co-Founder of The Debrief. He can be reached by email at micah@thedebrief.org. Follow his work at micahhanks.com and on Twitter: @MicahHanks