NeuralFlow
NeuralFlow applies neural networks within a fully conservative formulation rather than relying on traditional PINN residual minimization.
NeuralFlow is a next-generation CFD solver that harnesses the power of artificial intelligence to enhance and accelerate fluid simulations. Traditional solvers rely on discretizing and iterating on the governing partial differential equations (PDEs). While effective, these methods can be extremely time-consuming for complex geometries or high-fidelity turbulence modeling.
By incorporating a physics-constrained neural network, NeuralFlow integrates domain knowledge directly into the learning process. Rather than training purely on data, the neural network minimizes a PDE residual (or flux) alongside any available simulation data, yielding physically consistent solutions that converge more rapidly than purely data-driven approaches.
NeuralFlow leverages two forms of automatic differentiation (AD) to couple the neural network with our CFD solver: forward-mode AD inside the solver, which supplies exact flux Jacobians, and reverse-mode AD (backpropagation), which trains the network parameters.
This dual-AD approach enables end-to-end learning of fluid dynamics: each iteration drives the neural network to produce field variables (e.g., velocity or correction factors) that minimize the PDE residual within the solver.
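The forward-mode half of this coupling can be sketched in a few lines. The snippet below is an illustrative Python toy, not NeuralFlow's C++ implementation: it uses dual numbers (the mechanism behind forward-mode AD) to carry an exact derivative through a flux evaluation. The Burgers flux \( F(v) = v^2/2 \) is a hypothetical stand-in for the solver's real fluxes.

```python
# Forward-mode AD via dual numbers: each number carries a value and a
# derivative ("dot") that is propagated exactly through arithmetic.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def flux(v):
    # Burgers' flux F(v) = v^2 / 2, an illustrative stand-in
    return v * v * 0.5

v = Dual(3.0, 1.0)   # seed dv/dv = 1
F = flux(v)
print(F.val, F.dot)  # F = 4.5, exact dF/dv = v = 3.0
```

Seeding the input's derivative with 1 makes `F.dot` the exact flux Jacobian entry, with no finite-difference error, which is why forward-mode AD is attractive inside a solver.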
A classical physics-informed neural network directly uses differential equations in its loss. The network must exactly match derivatives of mass, momentum, and energy at every point. This works for smooth flows but struggles with shocks, boundary layers, or steep gradients. In these regions, derivatives become large or undefined, causing instability and oscillations during training.
NeuralFlow adopts the integral (weak) form of these equations instead. It asks the network to minimize the total flux passing through each control-volume face. These fluxes come from advanced numerical flux methods—like AUSM⁺-up—used in real CFD solvers. During training, NeuralFlow uses the solver's exact Jacobians (flux derivatives), ensuring accuracy and stability even near shocks and thin boundary layers.
In short, classical PINNs differentiate equations and struggle at shocks and boundary layers. NeuralFlow integrates fluxes, uses solver-proven numerical fluxes and Jacobians, and remains stable and accurate in challenging flow regions.
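The contrast above can be made concrete with a minimal finite-volume sketch. In the weak form, each cell's residual is the net numerical flux through its faces rather than a point-wise derivative. The Rusanov (local Lax–Friedrichs) flux below is a deliberately simple stand-in for the AUSM⁺-up flux mentioned above, and the 1D periodic Burgers setup is purely illustrative.

```python
# Weak-form residual on a 1D periodic grid: residual_i is the net
# numerical flux through the two faces of cell i.

def physical_flux(v):
    # Burgers' flux f(v) = v^2 / 2 (illustrative stand-in)
    return 0.5 * v * v

def rusanov_flux(vl, vr):
    # average of the two cell fluxes minus an upwind dissipation term
    a = max(abs(vl), abs(vr))  # local wave-speed bound
    return 0.5 * (physical_flux(vl) + physical_flux(vr)) - 0.5 * a * (vr - vl)

def cell_residuals(v, dx):
    # residual_i = (F_{i+1/2} - F_{i-1/2}) / dx, periodic boundaries
    n = len(v)
    faces = [rusanov_flux(v[i], v[(i + 1) % n]) for i in range(n)]
    return [(faces[i] - faces[i - 1]) / dx for i in range(n)]

print(cell_residuals([1.0, 2.0, 3.0, 2.0], dx=0.25))
```

Note that a uniform state yields exactly zero residuals, and the built-in dissipation keeps the face fluxes well defined across a discontinuity, which is the stability property the weak form buys near shocks.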
NeuralFlow is formulated as an implicit optimization problem where a neural network is trained to minimize the residuals of a governing system of partial differential equations (PDEs). Given a set of state variables \( V \), the solver constructs a numerical representation of fluxes \( F \), ensuring conservation laws and physical consistency through automatic differentiation.
The state variables \( V \) satisfy a general system of governing equations expressed as:
\( \mathcal{R}(V) = 0 \)
where \( \mathcal{R} \) represents the residuals derived from conservation laws:
\( \frac{\partial V}{\partial t} + \nabla \cdot F(V) = S(V) \)
where \( V \) is the vector of conserved state variables, \( F(V) \) is the flux function, and \( S(V) \) is the source term.
NeuralFlow approximates the state variables using a neural network \( \mathcal{N}_\theta \):
\( V = \mathcal{N}_\theta(X) \)
where \( X \) represents input features such as spatial coordinates, initial conditions, or auxiliary parameters. The objective is to train the network to minimize the residuals \( \mathcal{R}(V) \).
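A minimal sketch of \( V = \mathcal{N}_\theta(X) \) in pure Python: a one-hidden-layer network mapping an input feature (here a single coordinate) to a state variable. The weights and tanh activation are illustrative placeholders; NeuralFlow's actual networks are built in C++ with LibTorch.

```python
import math

def mlp(x, w1, b1, w2, b2):
    # hidden layer with tanh activation, followed by a linear output layer
    hidden = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    return sum(wo * h for wo, h in zip(w2, hidden)) + b2

# hypothetical parameters theta = (w1, b1, w2, b2)
theta = ([0.5, -1.0], [0.0, 0.1], [1.0, 1.0], 0.0)
print(mlp(0.3, *theta))
```

Training then consists of adjusting every entry of `theta` so that the network's output fields drive the PDE residuals toward zero.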
Using automatic differentiation, the Jacobian of the flux function is computed as:
\( J_F = \frac{\partial F}{\partial V} \)
allowing for a local linearization:
\( \delta F \approx J_F \delta V \)
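This linearization is easy to verify numerically. The sketch below uses a small two-component flux \( F(v) = (v_1 v_2,\, v_1^2) \), chosen purely for illustration, and checks that \( J_F \,\delta V \) matches the true flux change to second order in \( \delta V \).

```python
def F(v):
    # illustrative two-component flux
    return [v[0] * v[1], v[0] ** 2]

def J_F(v):
    # analytic Jacobian entries dF_i / dv_j
    return [[v[1],       v[0]],
            [2 * v[0],   0.0]]

v  = [1.0, 2.0]
dv = [1e-4, -2e-4]
J  = J_F(v)

# linear prediction J_F * dv versus the exact change F(v + dv) - F(v)
lin   = [sum(J[i][j] * dv[j] for j in range(2)) for i in range(2)]
exact = [F([v[k] + dv[k] for k in range(2)])[i] - F(v)[i] for i in range(2)]
print(lin, exact)  # the two agree to O(|dv|^2)
```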
The optimization objective is to minimize the total residual loss over the computational domain:
\( \mathcal{J}(\theta) = \sum_{\Omega} \|\mathcal{R}(V)\|^2 \)
The network is trained with a gradient-based optimizer, where the loss gradients are propagated through the flux Jacobians to enforce physics consistency:
\( \frac{\partial \mathcal{J}}{\partial \theta} = \sum_{i} \frac{\partial \mathcal{J}}{\partial V_i} \frac{\partial V_i}{\partial \theta} \)
The parameter updates are performed iteratively:
\( \theta \leftarrow \theta - \eta \frac{\partial \mathcal{J}}{\partial \theta} \)
where \( \eta \) is the learning rate.
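The update rule above can be exercised on a toy residual problem. In this hedged sketch, the "network" is simply \( u(x) = \theta_0 + \theta_1 x \), trained so that the PDE residual \( du/dx - 2 = 0 \) and the boundary residual \( u(0) - 1 = 0 \) both vanish; all names and values are illustrative, not NeuralFlow's actual training loop.

```python
def loss_and_grad(t0, t1):
    # residuals of the toy problem
    r_pde = t1 - 2.0          # PDE residual: du/dx - 2
    r_bc  = t0 - 1.0          # boundary-condition residual: u(0) - 1
    J = r_pde ** 2 + r_bc ** 2
    # analytic gradients dJ/dt0, dJ/dt1
    return J, (2 * r_bc, 2 * r_pde)

t0, t1, eta = 0.0, 0.0, 0.1   # initial parameters and learning rate
for _ in range(200):
    J, (g0, g1) = loss_and_grad(t0, t1)
    t0, t1 = t0 - eta * g0, t1 - eta * g1  # theta <- theta - eta * dJ/dtheta

print(t0, t1)  # converges toward (1.0, 2.0)
```

In NeuralFlow, the same loop runs with the flux-Jacobian-corrected gradients described above in place of these hand-written derivatives.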
By coupling numerical discretization with differentiable programming, NeuralFlow ensures that the learned state variables adhere to the governing physics while enabling a data-driven approach to solving PDEs.
What Is Completed So Far?
✓ Forward-mode AD integration for flux calculations.
✓ Implementation of robust neural network architectures in C++ (LibTorch) for in-solver training.
✓ Incorporation of boundary conditions and additional physics constraints (e.g., energy equation) directly in the training loop.
✓ Demonstrated efficiency for compressible high-speed flows.
Future Plans & Expectations
NeuralFlow is currently in active development. In the next phase, we aim to provide benchmark comparisons against traditional CFD solvers, focusing on convergence speed and physical accuracy in complex flow regimes.