Where Physics Constraints Meet Machine Learning
A new frontier in Computational Fluid Dynamics
NeuralFlow is a next-generation CFD solver that harnesses the power of artificial intelligence to enhance and accelerate fluid simulations. Traditional solvers rely on discretizing and iterating on the governing partial differential equations (PDEs). While effective, these methods can be extremely time-consuming for complex geometries or high-fidelity turbulence modeling.
By incorporating a physics-constrained neural network, NeuralFlow integrates domain knowledge directly into the learning process. Rather than training purely on data, the neural network minimizes a PDE residual (or flux) alongside any available simulation data, yielding physically consistent solutions that converge more rapidly than purely data-driven approaches.
NeuralFlow leverages two forms of automatic differentiation (AD) to couple the neural network with our CFD solver:
Forward-mode AD: Used in the flux/residual calculations, computing derivatives of the numerical fluxes with respect to the neural network's outputs.
Reverse-mode AD: Applied within the neural network itself, propagating the residual gradient back through the layers so parameters can be updated via chain-rule backpropagation.
This dual-AD approach enables end-to-end learning of fluid dynamics: each iteration drives the neural network to produce field variables (e.g., velocity or correction factors) that minimize the PDE residual within the solver.
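The forward-mode half of this pairing can be sketched with a minimal dual number. This is an illustration only, not NeuralFlow's code; the solver side uses a dedicated AD tool such as CoDiPack, and the flux shown is a deliberately trivial stand-in.

```cpp
#include <cassert>
#include <cmath>

// Minimal dual-number sketch of forward-mode AD. 'val' carries the value;
// 'dot' carries the derivative with respect to whichever input was seeded
// with dot = 1.
struct Dual {
    double val;
    double dot;
};

Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dot + b.dot}; }
Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.dot * b.val + a.val * b.dot}; }

// Toy 1D mass flux f(rho, u) = rho * u. Seeding u with dot = 1 yields the
// flux value and df/du = rho in a single forward pass -- the kind of flux
// Jacobian entry the solver hands back to the network.
Dual massFlux(Dual rho, Dual u) { return rho * u; }
```

Seeding one network output at a time in this fashion recovers, column by column, the Jacobian of the residual with respect to the network's predictions.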
Classical PINNs
A classical physics-informed neural network embeds the differential (strong-form) equations directly in its loss: the network must drive the pointwise residuals of the mass, momentum, and energy equations to zero everywhere. This works for smooth flows but struggles with shocks, boundary layers, and other steep gradients, where derivatives become large or undefined, causing instability and oscillations during training.
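The blow-up near steep gradients is easy to see with a one-sided difference standing in for the pointwise derivative a strong-form loss penalizes (an illustration, not NeuralFlow code): across a jump of fixed height resolved on spacing dx, the derivative grows like 1/dx, so refining the grid makes the strong-form residual worse, not better.

```cpp
#include <cassert>
#include <cmath>

// Illustrative only: a one-sided difference across a discontinuity of fixed
// jump (uR - uL) scales like 1/dx, so the pointwise residual a strong-form
// PINN loss sees diverges under grid refinement.
double pointwiseGradient(double uL, double uR, double dx) {
    return (uR - uL) / dx;
}
```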
NeuralFlow
NeuralFlow adopts the integral (weak) form of these equations instead. It asks the network to minimize the net flux imbalance across each control volume's faces. These fluxes come from advanced numerical flux schemes, such as AUSM⁺-up, used in production CFD solvers. During training, NeuralFlow uses the solver's exact flux Jacobians, preserving accuracy and stability even near shocks and thin boundary layers.
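The weak-form idea fits in a few lines: each cell contributes the square of the net numerical flux through its faces to the loss. In this sketch a local Lax-Friedrichs flux for the 1D Burgers equation stands in for AUSM⁺-up purely to keep the example short; the function names are illustrative, not NeuralFlow's API.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Simple local Lax-Friedrichs (Rusanov) flux for the 1D Burgers equation,
// a toy stand-in for the AUSM+-up flux used in the actual solver.
double llfFlux(double uL, double uR) {
    auto f = [](double u) { return 0.5 * u * u; };    // Burgers physical flux
    double a = std::max(std::abs(uL), std::abs(uR));  // local wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL);
}

// Weak-form loss over a 1D field (e.g., a network's predicted cell states):
// each interior cell's residual is the net flux through its two faces, and
// the loss is the sum of squared residuals. A constant (conserved) field
// gives exactly zero loss.
double fluxLoss(const std::vector<double>& u) {
    double loss = 0.0;
    for (std::size_t i = 1; i + 1 < u.size(); ++i) {
        double r = llfFlux(u[i], u[i + 1]) - llfFlux(u[i - 1], u[i]);
        loss += r * r;
    }
    return loss;
}
```

Because each face flux is shared by its two neighboring cells (leaving one, entering the other), driving this loss to zero enforces discrete conservation by construction, which is the property the next section highlights.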
Why it matters
Exact conservation: the flux-based loss conserves mass, momentum, and energy discretely by construction, since every face flux that leaves one control volume enters its neighbor.
Shock and boundary layer capturing: NeuralFlow directly inherits shock-capturing and boundary-layer-resolving capabilities from proven CFD numerical fluxes.
Stable and fast training: Solver-provided exact Jacobians accelerate convergence and stability.
Efficient inference: Trained networks run at neural-network speeds but with CFD-grade fidelity.
In short, classical PINNs differentiate equations and struggle at shocks and boundary layers. NeuralFlow integrates fluxes, uses solver-proven numerical fluxes and Jacobians, and remains stable and accurate in challenging flow regions.
What Is Completed So Far?
✓ Forward-mode AD integration for flux calculations.
✓ Implementation of robust neural network architectures in C++ (LibTorch) for in-solver training.
✓ Incorporation of boundary conditions and additional physics constraints (e.g., energy equation) directly in the training loop.
✓ Demonstrated efficiency for compressible high-speed flows.
Future Plans & Expectations
NeuralFlow is currently in active development, and we anticipate:
GUI integration
Multi-physics integration
Transient problems
In the next phase, we aim to provide benchmark comparisons against traditional CFD solvers, focusing on convergence speed and physical accuracy in complex flow regimes.
References
Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics.
Hansen, D., Maddix Robinson, D., Alizadeh, S., Gupta, G., & Mahoney, M. (2023). Learning physical models that can respect conservation laws. Physica D: Nonlinear Phenomena.
Paszke, A., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems.
CoDiPack Documentation: https://scicomp.rptu.de/codi/index.html