
Throwing a pebble into a flowing stream of water may not change the flow pattern much. But throwing the pebble somewhere else can make a big difference. Who can predict the difference?
The answer: neural networks can. A team of computer scientists and mathematicians at the California Institute of Technology (Caltech) in Pasadena, USA, has opened a new chapter for artificial intelligence (AI) by showing that neural networks can teach themselves to solve a broad class of fluid flow problems faster and more accurately than any previous computer program.
Animashree Anandkumar, professor of computing and mathematical sciences at Caltech and co-leader of its AI for Science (AI4Science) initiative, said: “When our group got together two years ago, we discussed which fields of science were ripe for AI to disrupt. We thought that if we could identify a robust framework for solving partial differential equations, we could have a broad impact.”
Their first target was the two-dimensional Navier-Stokes equations, which describe the motion of an infinitely thin layer of water (Figure 1). Their neural network, which they call a “Fourier neural operator,” solved such problems far better than any previous differential equation solver: 400 times faster and 30% more accurate.
Figure 1. Water flows in a thin sheet over a fountain. The Caltech scientific AI team reports that its neural network predicts this kind of two-dimensional fluid flow faster and more accurately than computer programs that solve differential equations by standard methods. The team is now running experiments on three-dimensional fluid flow, which could have broad implications for advancing science through improved modeling of natural phenomena such as nuclear fusion. Image source: Pixabay (public domain).
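For reference, one common form of these equations (this explicit statement is our addition; the article itself does not write them out) is the two-dimensional incompressible Navier-Stokes system in vorticity form:

\[
\partial_t \omega + u \cdot \nabla \omega = \nu \, \Delta \omega + f, \qquad \nabla \cdot u = 0, \qquad \omega = \nabla \times u,
\]

where \(u\) is the velocity field, \(\omega\) its vorticity, \(\nu\) the viscosity, and \(f\) an external forcing term. The solver’s task is to map an initial field to the field at a later time.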
Partial differential equations (PDEs) are a class of equations that arise naturally from Newton’s laws of motion. As such, they are part of the foundation of science, and any significant progress in solving them has widespread implications. “We are in discussions with many teams in various industries, as well as in academia and national laboratories,” Anandkumar said. “We are already conducting experiments on three-dimensional fluid flow.”
Anandkumar says one good use case is the equations that model nuclear fusion. “Another application is materials design, especially the design of plastic and elastic materials,” she added. “Team member Kaushik Bhattacharya, a professor of mechanics and materials science, has extensive experience in this field.”
During World War II, computers came into being in part to solve differential equations that predict the motion of artillery shells. Computers have been solving differential equations, with varying degrees of accuracy and success, ever since. But previous approaches, whether based on traditional programming or on artificial intelligence, have always dealt with one equation at a time. A computer could, for example, work out how a pebble thrown at one spot affects the flow of water; it could then learn how a pebble thrown elsewhere changes the flow. But it went no further: it never understood how a pebble thrown anywhere changes the flow. That understanding is the grand goal behind Caltech’s Fourier neural operator.
There is, of course, a reason why previous methods could not handle many equations at once. Neural networks excel at learning what mathematicians call mappings between finite-dimensional spaces. For example, AlphaGo, Google’s artificial intelligence program that beat the best human Go players, learned a functional relationship between Go positions (an astronomical but finite number of them) and moves. In contrast, the Fourier neural operator takes the initial velocity field of a fluid as input and produces as output the velocity field after a certain time. Both of these velocity fields live in infinite-dimensional spaces, which is just a mathematical way of saying that there are infinitely many ways to throw a pebble into a stream.
The Caltech team trained the Fourier neural operator on thousands of instances of the Navier-Stokes equations solved by traditional methods. The network was then evaluated by a “cost function,” which measures how far its predictions fall from the correct solutions, and it evolved in a way that gradually improved those predictions. Because the network starts from a curated set of inputs and outputs, this approach is called “supervised learning.” The original version of Google’s AlphaGo combined supervised and unsupervised learning (later versions used unsupervised learning only), and other neural network programs for image processing often employ supervised learning as well.
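As an illustration, here is a minimal, self-contained sketch of the kind of supervised training loop described above. It is not the team’s code: the model is a stand-in (the real network is the Fourier neural operator sketched later), and the random tensors stand in for input/output field pairs generated by a conventional solver.

```python
import torch
import torch.nn as nn

# Stand-in model and data; a real run would use the Fourier neural operator
# and velocity fields computed by a traditional Navier-Stokes solver.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.GELU(),
                      nn.Conv2d(16, 1, 3, padding=1))
inputs = torch.randn(64, 1, 32, 32)   # initial velocity fields (synthetic here)
targets = torch.randn(64, 1, 32, 32)  # reference solutions at a later time

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def relative_l2(pred, ref):
    # The "cost function": distance of the prediction from the correct
    # solution, normalized by the size of the solution.
    return torch.linalg.vector_norm(pred - ref) / torch.linalg.vector_norm(ref)

for epoch in range(10):
    optimizer.zero_grad()
    loss = relative_l2(model(inputs), targets)
    loss.backward()   # gradient of the cost with respect to the weights
    optimizer.step()  # adjust the weights to gradually improve predictions
```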
But no matter how much training data you have, you can explore only the tiniest fraction of an infinite-dimensional space. You cannot try throwing pebbles into every spot in the stream. And without some prior assumptions, there is no guarantee that the network will correctly predict what happens when a pebble is thrown at a new location.
For this and other reasons, Andrew Stuart, another member of the scientific AI team and a professor of computing and mathematical sciences, said: “We wanted to take the relevant parts of neural networks and combine them with domain-specific understanding from mathematics.”
In particular, Stuart knew that linear partial differential equations (the simplest type) can be solved by the well-known method of Green’s functions, a strategy that works on many common problems where other methods fail. Essentially, a Green’s function provides a template for solutions of the equation. The template can be approximated in a finite-dimensional space, reducing the problem from infinite to finite dimensions.
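For background (our addition, not spelled out in the article): for a linear equation written abstractly as \(\mathcal{L}u = f\), with suitable boundary conditions, the Green’s function \(G\) turns solving the equation into evaluating an integral template:

\[
\mathcal{L}u(x) = f(x) \quad\Longrightarrow\quad u(x) = \int G(x, y)\, f(y)\, \mathrm{d}y .
\]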
The Navier-Stokes equations are nonlinear, so no such template exists for them yet. But if something analogous to a Green’s function exists for the nonlinear Navier-Stokes equations, something that still admits a finite-dimensional template, then a neural network should be able to learn it. There is no guarantee this will work; Stuart calls it an informed gamble. But experience has shown time and again, he said, that neural networks are very good at learning nonlinear mappings between finite-dimensional spaces.
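To make this concrete, here is a minimal sketch of the kind of “Fourier layer” the method’s name refers to, in the spirit of the published Fourier neural operator but not the team’s released code (shapes, initialization, and names here are illustrative assumptions). The input field is taken to Fourier space with an FFT, its low-frequency modes are multiplied by learned complex weights, acting as a learned, data-driven analogue of a Green’s-function template, and the result is transformed back to physical space.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Multiply the lowest Fourier modes of the input by learned complex weights."""

    def __init__(self, in_ch, out_ch, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2  # low-frequency modes to keep
        scale = 1.0 / (in_ch * out_ch)
        # Two weight tensors: one for non-negative, one for negative row frequencies.
        self.w1 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes1, modes2,
                                                   dtype=torch.cfloat))
        self.w2 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes1, modes2,
                                                   dtype=torch.cfloat))

    def forward(self, x):                  # x: (batch, channels, height, width)
        h, w = x.shape[-2:]
        x_ft = torch.fft.rfft2(x)          # to Fourier space
        out_ft = torch.zeros(x.shape[0], self.w1.shape[1], h, w // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Keep only the retained modes; this truncation is the finite-dimensional
        # approximation of an infinite-dimensional operator.
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :self.modes1, :self.modes2], self.w1)
        out_ft[:, :, -self.modes1:, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, -self.modes1:, :self.modes2], self.w2)
        return torch.fft.irfft2(out_ft, s=(h, w))  # back to physical space

# The layer applies to a field sampled on any sufficiently fine grid:
layer = SpectralConv2d(1, 1, modes1=12, modes2=12)
y = layer(torch.randn(4, 1, 64, 64))       # works equally on a 128x128 grid
```

Because the learned weights live on a fixed set of Fourier modes rather than on a fixed grid, the same trained layer can be evaluated at resolutions it never saw during training.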
Learning nonlinear operators between infinite-dimensional spaces is the “holy grail” of computational science, says Daniele Venturi, an assistant professor of applied mathematics at the University of California, Santa Cruz, whose research involves differential equations and infinite-dimensional function spaces. He does not believe the Caltech team has quite achieved that. “In general, it is not possible to learn a nonlinear mapping between infinite-dimensional spaces on the basis of a finite number of input-output pairs, but it can be done approximately,” he said. “The main issue is really the computational cost, and the accuracy and efficiency of the approximation. The results they showed are really impressive.”
In addition to its unprecedented speed and accuracy, Caltech’s method has other notable properties. By design, it predicts fluid flow even at locations where it had no initial data, and it predicts the outcomes of disturbances it has never seen before. The program also reproduced an emergent behavior of solutions to the Navier-Stokes equations: over time, they redistribute energy from long wavelengths to short wavelengths. This phenomenon, called the “energy cascade,” was proposed by Andrei Kolmogorov in the 1940s to explain turbulence in fluids.
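As background (our addition): Kolmogorov’s 1941 theory quantifies this cascade, predicting that in the so-called inertial range the turbulent kinetic energy at wavenumber \(k\) follows the famous five-thirds law

\[
E(k) = C \, \varepsilon^{2/3} k^{-5/3},
\]

where \(\varepsilon\) is the rate of energy dissipation and \(C\) is a dimensionless constant. Seeing cascade behavior of this kind emerge from the trained network suggests it captured genuine physics of the equations rather than merely fitting curves.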
A future research frontier for the Fourier neural operator is three-dimensional fluid flow, where turbulence and chaos are the main obstacles. Can neural networks tame chaos? “We know that chaos means that fluid motion over long periods of time cannot be accurately predicted,” Anandkumar said. “But we also know from theory that there are statistical invariants, such as invariant measures and stable attractors.” With such invariants, it may be possible to make good probabilistic predictions even when accurate deterministic predictions are impossible. Anandkumar also points out that neural networks could be used to control chaotic systems, steering them away from undesirable attractor states. “In nuclear fusion, for example, the ability to control disruptions, such as plasma instabilities, becomes very important,” she said.
Adapted from the original article:
Dana Mackenzie. Pushing Mathematical Limits, a Neural Network Learns Fluid Flow. Engineering, 2021, 7(5): 550–551.
This article is from the WeChat public account Journal of the Chinese Academy of Engineering (ID: CAE-Engineering), by Dana Mackenzie.