This work concerns the implementation of the hybridizable discontinuous Galerkin (HDG) method for solving the linear anisotropic elastic equation in the frequency domain. A first-order formulation with the compliance tensor and Voigt notation is employed to provide a compact description of the discretized problem and flexibility for highly heterogeneous media. We further focus on the question of the optimal choice of stabilization in the definition of the HDG numerical traces. For this purpose, we construct a hybridized Godunov-upwind flux for anisotropic elasticity possessing three distinct wavespeeds. This stabilization removes the need to choose scaling factors, contrary to the identity- and Kelvin-Christoffel-based stabilizations that are popular choices in the literature. We carry out comparisons among these families for isotropic and anisotropic materials, with constant and highly heterogeneous backgrounds, in two and three dimensions. They establish the optimality of the Godunov stabilization, which can be used as a reference choice for generic materials and different types of waves.
Uncertainty quantification (UQ) in scientific machine learning (SciML) combines the predictive power of SciML with methods for quantifying the reliability of the learned models. However, two major challenges remain: limited interpretability and expensive training procedures. We provide a new interpretation for UQ problems by establishing a new theoretical connection between some Bayesian inference problems arising in SciML and viscous Hamilton-Jacobi partial differential equations (HJ PDEs). Namely, we show that the posterior mean and covariance can be recovered from the spatial gradient and Hessian of the solution to a viscous HJ PDE. As a first exploration of this connection, we specialize to Bayesian inference problems with linear models, Gaussian likelihoods, and Gaussian priors. In this case, the associated viscous HJ PDEs can be solved using Riccati ODEs, and we develop a new Riccati-based methodology that provides computational advantages when continuously updating the model predictions. Specifically, our Riccati-based approach can efficiently add or remove data points from the training set in a manner invariant to the order of the data, and it can continuously tune hyperparameters. Moreover, neither update requires retraining on, or access to, previously incorporated data. We provide several examples from SciML involving noisy data and \textit{epistemic uncertainty} to illustrate the potential advantages of our approach. In particular, this approach's amenability to data-streaming applications demonstrates its potential for real-time inference, which, in turn, allows for applications in which the predicted uncertainty is used to dynamically alter the learning process.
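The continuous-update property described above can be illustrated on the linear-Gaussian special case. Below is a minimal numpy sketch; the class name, feature map, and prior are illustrative assumptions, and it uses direct rank-one updates of the posterior precision rather than the paper's Riccati-ODE machinery:

```python
import numpy as np

def features(x):
    # hypothetical feature map: quadratic polynomial basis
    return np.array([1.0, x, x**2])

class GaussianLinearPosterior:
    """Running Gaussian posterior for y = features(x)^T w + N(0, sigma^2) noise.

    Stores the natural parameters (precision P, shift b), so data points can
    be added or removed by rank-one updates, in any order, without access to
    previously incorporated data.
    """
    def __init__(self, prior_mean, prior_cov, sigma):
        self.P = np.linalg.inv(prior_cov)   # posterior precision
        self.b = self.P @ prior_mean        # precision-weighted mean
        self.s2 = sigma**2

    def add(self, x, y):
        phi = features(x)
        self.P += np.outer(phi, phi) / self.s2
        self.b += phi * y / self.s2

    def remove(self, x, y):
        # exact inverse of add(): no retraining, order-invariant
        phi = features(x)
        self.P -= np.outer(phi, phi) / self.s2
        self.b -= phi * y / self.s2

    def mean_cov(self):
        cov = np.linalg.inv(self.P)
        return cov @ self.b, cov

post = GaussianLinearPosterior(np.zeros(3), np.eye(3), sigma=0.5)
post.add(1.0, 2.0)
post.add(0.5, 1.2)
mean, cov = post.mean_cov()
```

Because the sufficient statistics are sums over data points, adding then removing a point reproduces the posterior that would have been obtained without it.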
The method of fundamental solutions (MFS), also known as the method of auxiliary sources (MAS), is a well-known computational method for the solution of boundary-value problems. The final solution ("MAS solution") is obtained once we have found the amplitudes of $N$ auxiliary "MAS sources." Past studies have demonstrated that it is possible for the MAS solution to converge to the true solution even when the $N$ auxiliary sources diverge and oscillate. The present paper extends the past studies by demonstrating this possibility within the context of Laplace's equation with Neumann boundary conditions. One can thus obtain the correct solution from sources that, when $N$ is large, must be considered unphysical. We carefully explain the underlying reasons for the unphysical results, distinguish them from other difficulties that might concurrently arise, and point to significant differences from the time-dependent problems that were studied in the past.
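As a concrete illustration of the setup (not of the divergence phenomenon itself), here is a minimal MFS/MAS sketch for the interior Neumann problem in the unit disk, with data manufactured from the harmonic function $u = x^2 - y^2$; the numbers of collocation points and sources and the source radius are illustrative choices:

```python
import numpy as np

# MFS for Laplace's equation in the unit disk with Neumann data from
# the harmonic function u = x^2 - y^2 (so du/dn = 2x^2 - 2y^2 on the circle).
M, N, R = 200, 60, 1.8                      # collocation pts, sources, source radius
t = 2 * np.pi * np.arange(M) / M
bx, by = np.cos(t), np.sin(t)               # boundary points; unit normal is (bx, by)
s = 2 * np.pi * np.arange(N) / N
sx, sy = R * np.cos(s), R * np.sin(s)       # auxiliary sources outside the domain

# Normal derivative of ln|x - y_j| at boundary point x with normal n:
# ((x - y_j) . n) / |x - y_j|^2
dx, dy = bx[:, None] - sx[None, :], by[:, None] - sy[None, :]
r2 = dx**2 + dy**2
A = (dx * bx[:, None] + dy * by[:, None]) / r2
g = 2 * bx**2 - 2 * by**2                   # exact Neumann data

c, *_ = np.linalg.lstsq(A, g, rcond=None)   # MAS source amplitudes

# Recovered gradient at an interior point; should match grad u = (2x, -2y)
px, py = 0.3, 0.2
ex, ey = px - sx, py - sy
er2 = ex**2 + ey**2
grad = np.array([np.sum(c * ex / er2), np.sum(c * ey / er2)])
```

With Neumann data the solution is determined only up to an additive constant, so the check is on the gradient rather than on the potential itself.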
We give generators and relations for the hypergraph props of Gaussian relations and positive affine Lagrangian relations. The former extends Gaussian probabilistic processes by completely uninformative priors, and the latter extends Gaussian quantum mechanics with infinitely squeezed states. These presentations are given by adding a generator that freely codiscards effects, as well as certain rotations, to the presentations of real affine relations and of real affine Lagrangian relations. The presentation of positive affine Lagrangian relations provides a rigorous justification for many common yet informal calculations in the quantum physics literature involving infinite squeezing. Our presentation naturally extends Menicucci et al.'s graph-theoretic representation of Gaussian quantum states with a representation for Gaussian transformations. Using this graphical calculus, we give a graphical proof of Braunstein and Kimble's continuous-variable quantum teleportation protocol. We also interpret the LOv-calculus, a diagrammatic calculus for reasoning about passive linear-optical quantum circuits, in our graphical calculus. Moreover, we show how our presentation allows for additional optical operations such as active squeezing.
We exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization where the amount of regularization varies from component to component. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so that the norm of the data fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter would yield a solution that fits the noise, while too large a value would lead to excessive penalization of the solution. In many applications, it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, allowing different regularization for different components of the unknown, or for groups of them. A distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with Maximum A Posteriori (MAP) estimation, requires no statistical tools. A combination of numerical linear algebra and optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available.
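The Morozov discrepancy principle invoked above can be made concrete in the scalar-parameter case: since the residual norm is monotonically increasing in the regularization parameter, the parameter matching the noise level can be found by bisection. A minimal numpy sketch (function names and the log-space bracket are illustrative assumptions, and a dense direct solve stands in for the matrix-free setting):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov(A, b, noise_norm, lo=1e-12, hi=1e4, iters=100):
    """Bisect in log-space for lam such that ||A x(lam) - b|| ~ noise_norm.

    The residual norm increases monotonically with lam, so the bracket
    [lo, hi] shrinks geometrically onto the Morozov parameter.
    """
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov_solve(A, b, mid) - b)
        if r < noise_norm:
            lo = mid   # residual too small: under-regularized, increase lam
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

The distributed scheme of the abstract replaces the single `lam` by a componentwise vector, which this scalar search does not cover; it only illustrates the baseline principle being generalized.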
In this work we consider the Allen--Cahn equation, a prototypical model problem in nonlinear dynamics that exhibits bifurcations corresponding to variations of a deterministic bifurcation parameter. Going beyond the state-of-the-art, we introduce a random coefficient function in the linear reaction part of the equation, thereby accounting for random, spatially heterogeneous effects. Importantly, we assume a spatially constant, deterministic mean value of the random coefficient. We show that this mean value is in fact a bifurcation parameter in the Allen--Cahn equation with random coefficients. Moreover, we show that the bifurcation points and bifurcation curves become random objects. We consider two distinct modelling situations: (i) for a spatially homogeneous coefficient we derive analytical expressions for the distribution of the bifurcation points and show that the bifurcation curves are random shifts of a fixed reference curve; (ii) for a spatially heterogeneous coefficient we employ a generalized polynomial chaos expansion to approximate the statistical properties of the random bifurcation points and bifurcation curves. We present numerical examples in 1D physical space, where we combine the popular software package Continuation Core and Toolboxes (CoCo) for numerical continuation and the Sparse Grids Matlab Kit for the polynomial chaos expansion. Our exposition addresses both dynamical systems and uncertainty quantification, highlighting how analytical and numerical tools from both areas can be combined efficiently for the challenging uncertainty quantification analysis of bifurcations in random differential equations.
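The random-shift phenomenon in case (i) can be seen in a heuristic toy reduction (not the paper's PDE computation): for a spatially homogeneous random coefficient, the constant-in-space reduction of the Allen--Cahn equation is the pitchfork normal form, whose bifurcation point is shifted by the sampled coefficient. A minimal Monte Carlo sketch, with an assumed Gaussian coefficient of standard deviation 0.3:

```python
import numpy as np

# Constant-in-space reduction: u' = (lam + xi) u - u^3, a pitchfork normal
# form with bifurcation point lam* = -xi. The bifurcation point is thus a
# random shift of the deterministic one and inherits xi's distribution, and
# the nontrivial branch u(lam) = sqrt(lam + xi) is the reference curve
# sqrt(lam) shifted horizontally by -xi.
rng = np.random.default_rng(1)
sigma = 0.3                                  # assumed std of the coefficient
xi = sigma * rng.standard_normal(100_000)
lam_star = -xi                               # sampled random bifurcation points
print(lam_star.mean(), lam_star.std())       # close to 0 and to sigma
```

The spatially heterogeneous case (ii) has no such closed form, which is where the polynomial chaos surrogate of the abstract comes in.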
We present and analyze an a posteriori error estimator for a space-time hybridizable discontinuous Galerkin discretization of the time-dependent advection-diffusion problem. The residual-based error estimator is proven to be reliable and locally efficient. In the reliability analysis we combine a Péclet-robust coercivity-type result and a saturation assumption, while the local efficiency analysis is based on bubble functions. The analysis considers both local space and time adaptivity and is verified by numerical simulations on problems that include boundary and interior layers.
This work presents a comparative review and classification of some well-known thermodynamically consistent models of hydrogel behavior in a large deformation setting, specifically focusing on solvent absorption/desorption and its impact on mechanical deformation and network swelling. The proposed discussion addresses formulation aspects, general mathematical classification of the governing equations, and numerical implementation issues based on the finite element method. The theories are presented in a unified framework demonstrating that, despite not being evident in some cases, all of them follow equivalent thermodynamic arguments. A detailed numerical analysis is carried out where Taylor-Hood elements are employed in the spatial discretization to satisfy the inf-sup condition and to prevent spurious numerical oscillations. The resulting discrete problems are solved using the FEniCS platform through consistent variational formulations, employing both monolithic and staggered approaches. We conduct benchmark tests on various hydrogel structures, demonstrating that major differences arise from the chosen volumetric response of the hydrogel. The significance of this choice is frequently underestimated in the state-of-the-art literature but is shown here to have substantial implications for the resulting hydrogel behavior.
This work performs the convergence analysis of the polytopal nodal discretisation of contact-mechanics (with Tresca friction) recently introduced in [18] in the framework of poro-elastic models in fractured porous media. The scheme is based on a mixed formulation, using face-wise constant approximations of the Lagrange multipliers along the fracture network and a fully discrete first-order nodal approximation of the displacement field. The displacement field is enriched with additional bubble degrees of freedom along the fractures to ensure the inf-sup stability with the Lagrange multiplier space. It is presented in a fully discrete formulation, which makes its study more straightforward, but it also has a Virtual Element interpretation. The analysis establishes an abstract error estimate accounting for the fully discrete framework and the non-conformity of the discretisation. A first-order error estimate is deduced for sufficiently smooth solutions, both for the gradient of the displacement field and for the Lagrange multiplier. A key difficulty of the numerical analysis is the proof of a discrete inf-sup condition, which is based on a non-standard $H^{-1/2}$-norm (to deal with fracture networks) and involves the jump of the displacements, not their traces. The analysis also requires the proof of a discrete Korn inequality for the discrete displacement field that takes fracture networks into account. Numerical experiments based on analytical solutions confirm our theoretical findings.
We study overlapping Schwarz methods for the Helmholtz equation posed in any dimension with large, real wavenumber and smooth variable wave speed. The radiation condition is approximated by a Cartesian perfectly-matched layer (PML). The domain-decomposition subdomains are overlapping hyperrectangles with Cartesian PMLs at their boundaries. The overlaps of the subdomains and the widths of the PMLs are all taken to be independent of the wavenumber. For both parallel (i.e., additive) and sequential (i.e., multiplicative) methods, we show that after a specified number of iterations -- depending on the behaviour of the geometric-optic rays -- the error is smooth and smaller than any negative power of the wavenumber. For the parallel method, the specified number of iterations is less than the maximum number of subdomains, counted with their multiplicity, that a geometric-optic ray can intersect. These results, which are illustrated by numerical experiments, are the first wavenumber-explicit results about convergence of overlapping Schwarz methods for the Helmholtz equation, and the first wavenumber-explicit results about convergence of any domain-decomposition method for the Helmholtz equation with a non-trivial scatterer (here a variable wave speed).
We characterize the solution to the entropically regularized optimal transport problem by a well-posed ordinary differential equation (ODE). Our approach works for discrete marginals and general cost functions, and in addition to two-marginal problems, applies to multi-marginal problems and those with additional linear constraints. Solving the ODE gives a new numerical method to solve the optimal transport problem, which has the advantage of yielding the solution for all intermediate values of the ODE parameter (which is equivalent to the usual regularization parameter). We illustrate this method with several numerical simulations. The formulation of the ODE also allows one to compute derivatives of the optimal cost when the ODE parameter is $0$, corresponding to the fully regularized limit problem in which only the entropy is minimized.
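For context, the discrete entropically regularized problem referenced above is conventionally solved by Sinkhorn iterations at a single fixed regularization parameter; the ODE method instead traces the solution across all parameter values at once. A minimal numpy sketch of the conventional baseline (the marginals, cost, and sizes are illustrative):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, iters=5000):
    """Standard Sinkhorn iterations for discrete entropic optimal transport.

    Alternately rescales the Gibbs kernel to match the two marginals;
    this is the fixed-eps baseline, not the ODE-in-eps method.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match second marginal
        u = mu / (K @ v)                 # match first marginal
    return u[:, None] * K * v[None, :]   # approximate optimal coupling

# two uniform marginals on a small grid with squared-distance cost
n = 5
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2
mu = nu = np.ones(n) / n
P = sinkhorn(mu, nu, C, eps=0.1)
```

Each run of this baseline yields the coupling for one value of `eps`; obtaining the whole family of solutions in `eps` is exactly what the ODE formulation provides in a single solve.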