We propose and analyse a hybrid high-order (HHO) scheme for the stationary incompressible magnetohydrodynamics equations. The scheme has an arbitrary order of accuracy and is applicable on generic polyhedral meshes. For sources that are small enough, we prove error estimates in the energy norm for the velocity and magnetic field, and in the $L^2$-norm for the pressure; these estimates are fully robust with respect to small faces and of optimal order with respect to the mesh size. Using compactness techniques, we also prove that the scheme converges to a solution of the continuous problem, irrespective of the source being small or large. Finally, we illustrate our theoretical results through 3D numerical tests on tetrahedral and Voronoi mesh families.
The paper focuses on a new error analysis of a class of mixed FEMs for the stationary incompressible magnetohydrodynamics equations, combining standard inf-sup stable velocity-pressure pairs for the Navier-Stokes part with the N\'ed\'elec edge element for the magnetic field. These methods have been widely used in numerical simulations over the last several decades, yet the existing analysis is not optimal, owing to the strong coupling of the system and the pollution of the lower-order N\'ed\'elec edge approximation in the analysis. By means of a newly modified Maxwell projection, we establish new and optimal error estimates. In particular, we prove that the method based on the commonly used Taylor-Hood/lowest-order N\'ed\'elec edge element pair is efficient and provides second-order accuracy for the numerical velocity. Two numerical examples, posed in both convex and nonconvex polygonal domains, are presented. Numerical results confirm our theoretical analysis.
Neural Radiance Fields (NeRF) coupled with GANs represent a promising direction in the area of 3D reconstruction from a single view, owing to their ability to efficiently model arbitrary topologies. Recent work in this area, however, has mostly focused on synthetic datasets where exact ground-truth poses are known, and has overlooked pose estimation, which is important for certain downstream applications such as augmented reality (AR) and robotics. We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available. Our approach recovers an SDF-parameterized 3D shape, pose, and appearance from a single image of an object, without exploiting multiple views during training. More specifically, we leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution which is then refined via optimization. Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios. We demonstrate state-of-the-art results on a variety of real and synthetic benchmarks.
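The hybrid inversion idea above (a model's first guess refined by a few optimization steps) can be illustrated on a toy problem. The sketch below is not the paper's model: the linear "generator" `W`, the perturbed pseudo-inverse "encoder", and the learning rate are all made-up stand-ins chosen only to show the two-stage structure.

```python
import numpy as np

# Toy "generator": a fixed linear map from a 2-D latent code to a 4-D "image".
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, -0.5]])
g = lambda z: W @ z

z_true = np.array([0.7, -0.3])
x_target = g(z_true)

# Stage 1: an "encoder" produces a first guess of the latent code (here a
# deliberately perturbed pseudo-inverse, standing in for a learned model).
z = np.linalg.pinv(W) @ x_target + np.array([0.5, -0.4])

# Stage 2: refine the guess with a few gradient steps on the
# reconstruction loss 0.5 * ||g(z) - x_target||^2.
lr = 0.3
for _ in range(10):
    z = z - lr * W.T @ (g(z) - x_target)

loss = 0.5 * np.sum((g(z) - x_target) ** 2)
```

Because the encoder already lands near the solution, ten refinement steps suffice here; starting the same optimization from a random code would need many more.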
In this paper we analyze a pressure-robust method based on divergence-free mixed finite element methods with continuous interior penalty stabilization. The main goal is to prove an $O(h^{k+1/2})$ error estimate for the $L^2$ norm of the velocity in the convection dominated regime. This bound is pressure robust (the error bound of the velocity does not depend on the pressure) and also convection robust (the constants in the error bounds are independent of the Reynolds number).
Motivated by numerical modeling of ultrasound waves, we investigate robust conforming finite element discretizations of quasilinear and possibly nonlocal equations of Westervelt type. These wave equations involve either strong dissipation or damping of fractional-derivative type, and we unify them into one class by introducing a memory kernel that satisfies non-restrictive regularity and positivity assumptions. As the damping parameter involved is relatively small and can become negligible in certain (inviscid) media, it is important to develop methods that remain stable as this parameter vanishes. To this end, the contributions of this work are twofold. First, we determine sufficient conditions under which conforming finite element discretizations of (non)local Westervelt equations can be made robust with respect to the dissipation parameter. Second, we establish the rate of convergence of the semi-discrete solutions in the singular vanishing-dissipation limit. The analysis hinges upon devising appropriate energy functionals for the semi-discrete solutions that remain uniformly bounded with respect to the damping parameter.
Numerous physics theories are rooted in partial differential equations (PDEs). However, increasingly intricate physics equations, especially those that lack analytic solutions or closed forms, have impeded the further development of physics. Computationally solving PDEs by classic numerical approaches suffers from the trade-off between accuracy and efficiency and is not applicable to empirical data generated by unknown latent PDEs. To overcome this challenge, we present KoopmanLab, an efficient module of the Koopman neural operator family for learning PDEs without analytic solutions or closed forms. Our module consists of multiple variants of the Koopman neural operator (KNO), a family of mesh-independent, neural-network-based PDE solvers developed following dynamical systems theory. The compact variants of KNO can accurately solve PDEs with small model sizes, while the large variants are more competitive in predicting highly complicated dynamic systems governed by unknown, high-dimensional, and non-linear PDEs. All variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation in fluid mechanics) and on ERA5 (one of the largest high-resolution global-scale climate data sets in Earth physics). These demonstrations suggest the potential of KoopmanLab to become a fundamental tool in diverse physics studies related to equations or dynamic systems.
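The core idea behind Koopman-style operator learning is that a nonlinear system can be evolved linearly in a suitable observable space. The minimal NumPy sketch below is not the KoopmanLab API; it illustrates the classical finite-dimensional estimate (dynamic mode decomposition), where a linear operator is fitted to snapshot pairs by least squares and then iterated for long-term prediction.

```python
import numpy as np

# Snapshots of a simple linear dynamical system x_{t+1} = A x_t.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
X = [np.array([1.0, 1.0])]
for _ in range(20):
    X.append(A @ X[-1])
X = np.stack(X, axis=1)                      # shape (2, 21)

# Least-squares fit of the (finite-dimensional) Koopman operator:
# K = argmin ||X[:, 1:] - K X[:, :-1]||_F, i.e. K = Y X^+ (the DMD estimate).
K = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

# Long-term prediction is then a repeated application of the linear operator.
pred = np.linalg.matrix_power(K, 20) @ X[:, 0]
```

For a genuinely nonlinear PDE the state is first lifted to learned observables (in KNO, by a neural encoder) so that the fitted operator acts linearly there; this toy skips the lifting because the system is already linear.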
Traditional signal processing methods relying on mathematical data generation models have been cast aside in favour of deep neural networks, which require vast amounts of data. Since the theoretical sample complexity is nearly impossible to evaluate, these amounts of examples are usually estimated with crude rules of thumb. However, these rules only suggest when the networks should work; they do not relate the networks to the traditional methods. In particular, an interesting question is: how much data is required for neural networks to be on par with, or outperform if possible, the traditional model-based methods? In this work, we empirically investigate this question in two simple examples, where the data is generated according to precisely defined mathematical models and where well-understood optimal or state-of-the-art data-agnostic mathematical solutions are known. The first problem is deconvolving one-dimensional Gaussian signals, and the second is estimating a circle's radius and location in random grayscale images of disks. By training various networks, either naive custom-designed or well-established ones, with various amounts of training data, we find that the networks require tens of thousands of examples to match the traditional methods, whether they are trained from scratch or even with transfer learning or fine-tuning.
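The second benchmark (recovering a disk's centre and radius from an image) admits exactly the kind of training-free estimator the abstract alludes to. The sketch below is an illustrative stand-in, not the paper's setup: it renders a binary disk rather than a grayscale one, and uses the classical centroid/area estimator (area = pi * r^2) that needs no training data at all.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_disk_image(size=64):
    # Render an image of a single filled disk with random centre and radius.
    cx, cy = rng.uniform(16, 48, size=2)
    r = rng.uniform(5, 15)
    yy, xx = np.mgrid[0:size, 0:size]
    img = ((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(float)
    return img, (cx, cy, r)

def estimate_disk(img):
    # Classical, training-free baseline: centroid of the foreground pixels
    # gives the centre; the pixel count gives the radius via area = pi * r^2.
    ys, xs = np.nonzero(img)
    cx, cy = xs.mean(), ys.mean()
    r = np.sqrt(img.sum() / np.pi)
    return cx, cy, r

img, (cx, cy, r) = make_disk_image()
ecx, ecy, er = estimate_disk(img)
```

Both estimates are accurate to well under a pixel here, which is the kind of zero-data baseline the trained networks are compared against.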
In this paper, we consider numerical approximations for solving the inductionless magnetohydrodynamic (MHD) equations. By utilizing the scalar auxiliary variable (SAV) approach to deal with the convective and coupling terms, we propose some first- and second-order schemes for this system. These schemes are linear, decoupled, unconditionally energy stable, and only require solving a sequence of differential equations with constant coefficients at each time step. We further derive a rigorous error analysis for the first-order scheme, establishing optimal convergence rates for the velocity, pressure, current density and electric potential in the two-dimensional case. Numerical examples are presented to verify the theoretical findings and demonstrate the performance of the schemes.
Large-scale dynamics of the oceans and the atmosphere are governed by the primitive equations (PEs). Due to their nonlinearity and nonlocality, the numerical study of the PEs is generally challenging. Neural networks have been shown to be a promising machine learning tool to tackle this challenge. In this work, we employ physics-informed neural networks (PINNs) to approximate the solutions to the PEs and study the error estimates. We first establish higher-order regularity for the global solutions to the PEs with either full viscosity and diffusivity or with only the horizontal ones. Such a result for the case with only horizontal viscosity and diffusivity is new and is required in the analysis under the PINNs framework. Then we prove the existence of two-layer tanh PINNs whose training error can be made arbitrarily small by taking the width of the PINNs sufficiently large, and such that the error between the true solution and its approximation is arbitrarily small provided that the training error is small enough and the sample set is large enough. In particular, all the estimates are a priori, and our analysis includes higher-order (in spatial Sobolev norm) error estimates. Numerical results on prototype systems are presented to further illustrate the advantage of using the $H^s$ norm during training.
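The expressivity claim above rests on two-layer tanh networks of the form $c^\top \tanh(wx + b)$ becoming arbitrarily accurate as the width grows. A minimal sketch of that approximation power, under strong simplifying assumptions (a 1-D target function, random hidden weights, and only the output layer fitted by least squares on function values rather than full PINN training against a PDE residual):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer tanh network u_hat(x) = c^T tanh(w x + b) approximating
# u(x) = sin(pi x) on [0, 1]. Hidden weights are drawn at random; only the
# output layer c is fitted (a random-feature simplification of training).
n_hidden = 50
w = rng.normal(scale=4.0, size=n_hidden)
b = rng.normal(scale=4.0, size=n_hidden)

x = np.linspace(0.0, 1.0, 200)
features = np.tanh(np.outer(x, w) + b)        # shape (200, n_hidden)
target = np.sin(np.pi * x)
c, *_ = np.linalg.lstsq(features, target, rcond=None)

err = np.max(np.abs(features @ c - target))   # uniform approximation error
```

Increasing `n_hidden` drives `err` down further, which is the qualitative content of the width-dependent bound; the paper's analysis additionally controls the error in higher-order Sobolev norms.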
This paper develops efficient preconditioned iterative solvers for incompressible flow problems discretised by an enriched Taylor-Hood mixed approximation, in which the usual pressure space is augmented by a piecewise constant pressure to ensure local mass conservation. This enrichment process causes over-specification of the pressure, which complicates the design and implementation of efficient solvers for the resulting linear systems. We first describe the impact of this choice of pressure space on the matrices involved. Next, we show how to recover effective solvers for Stokes problems, with a preconditioner based on the singular pressure mass matrix, and for Oseen systems arising from linearised Navier-Stokes equations, by using a two-stage pressure convection-diffusion strategy. The codes used to generate the numerical results are available online.
Recently, graph neural networks have been gaining attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. A growing body of literature attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these models. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.