Helmholtz decomposition of elastic fields is a common approach to the solution of Navier scattering problems. Used in the context of Boundary Integral Equations (BIE), this approach affords solutions of Navier problems via the simpler Helmholtz boundary integral operators (BIOs). Approximations of the Helmholtz Dirichlet-to-Neumann (DtN) map can be employed within a regularizing combined field strategy to deliver BIE formulations of the second kind for the solution of Navier scattering problems in two dimensions with Dirichlet boundary conditions, at least in the case of smooth boundaries. Unlike the case of Helmholtz scattering and transmission problems, the approximations of the DtN maps we use in the Helmholtz decomposition BIE in the Navier case require the incorporation of lower-order terms in their pseudodifferential asymptotic expansions. The presence of these lower-order terms in the regularized Navier BIE formulations complicates the stability analysis of their Nystr\"om discretizations in the framework of global trigonometric interpolation and the Kussmaul-Martensen kernel singularity splitting strategy. The main difficulty stems from compositions of pseudodifferential operators of opposite orders, whose Nystr\"om discretizations must be performed with care via pseudodifferential expansions beyond the principal symbol. The error analysis is significantly simpler in the case of arclength boundary parametrizations and considerably more involved in the case of the general smooth parametrizations typically encountered in the description of one-dimensional closed curves.
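For orientation, a minimal sketch of the decomposition in question, in a standard two-dimensional form (the paper's precise normalization of the wavenumbers is an assumption here): the elastic displacement field $\mathbf{u}$ is split into two scalar potentials, each solving a Helmholtz equation,
\[
\mathbf{u} = \nabla \varphi + \overrightarrow{\operatorname{curl}}\,\psi,
\qquad \Delta \varphi + k_p^2\,\varphi = 0,
\qquad \Delta \psi + k_s^2\,\psi = 0,
\]
where $\overrightarrow{\operatorname{curl}}\,\psi = (\partial_2 \psi, -\partial_1 \psi)^\top$ and $k_p$, $k_s$ denote the compressional and shear wavenumbers. The Dirichlet boundary condition on $\mathbf{u}$ then couples the two potentials on the boundary, which is where the Helmholtz BIOs and the DtN approximations enter.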
Riemannian optimization is concerned with problems where the independent variable lies on a smooth manifold. A number of problems from numerical linear algebra fall into this category, where the manifold is usually specified by special matrix structures, such as orthogonality or definiteness. Following this line of research, we investigate tools for Riemannian optimization on the symplectic Stiefel manifold. We complement the existing set of numerical optimization algorithms with a Riemannian trust region method tailored to the symplectic Stiefel manifold. To this end, we derive a matrix formula for the Riemannian Hessian under a right-invariant metric. Moreover, we propose a novel retraction for approximating the Riemannian geodesics. Finally, we conduct a comparative study juxtaposing the performance of the Riemannian variants of the steepest descent, conjugate gradient, and trust region methods on selected matrix optimization problems that feature symplectic constraints.
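As a point of reference, here is a minimal numpy sketch of the constraint defining the symplectic Stiefel manifold $\mathrm{Sp}(2k,2n) = \{X \in \mathbb{R}^{2n\times 2k} : X^\top J_{2n} X = J_{2k}\}$; the trust region method, right-invariant metric, and novel retraction of the paper are not reproduced here, and the frame constructed below is only an illustrative feasible point.

```python
import numpy as np

def J(m):
    # Standard symplectic matrix J_{2m} = [[0, I_m], [-I_m, 0]].
    I, Z = np.eye(m), np.zeros((m, m))
    return np.block([[Z, I], [-I, Z]])

def symplectic_residual(X, n, k):
    # X lies on the symplectic Stiefel manifold iff X^T J_{2n} X = J_{2k}.
    return np.linalg.norm(X.T @ J(n) @ X - J(k))

# Illustrative feasible point: columns e_1..e_k and e_{n+1}..e_{n+k}
# of the 2n x 2n identity form a symplectic frame.
n, k = 4, 2
E = np.zeros((2 * n, 2 * k))
E[:k, :k] = np.eye(k)
E[n:n + k, k:] = np.eye(k)
print(symplectic_residual(E, n, k))  # ~0, so E is on the manifold
```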
Specifying a prior distribution is an essential part of solving Bayesian inverse problems. The prior encodes a belief about the nature of the solution, which regularizes the problem. In this article we completely characterize a Gaussian prior that encodes the belief that the solution is a structured tensor. We first define the notion of (A,b)-constrained tensors and show that they describe a large variety of different structures such as Hankel, circulant, triangular, symmetric, and so on. We then completely characterize the Gaussian probability distribution of such tensors by specifying its mean vector and covariance matrix. Furthermore, explicit expressions are proved for the covariance matrix of tensors whose entries are invariant under a permutation. These results unlock a whole new class of priors for Bayesian inverse problems. We illustrate how new kernel functions can be designed and efficiently computed, and we apply our results to two particular Bayesian inverse problems: completing a Hankel matrix from a few noisy measurements and learning an image classifier of handwritten digits. The effectiveness of the proposed priors is demonstrated for both problems. All applications have been implemented as reactive Pluto notebooks in Julia.
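To make the structured-prior idea concrete, here is a sketch in Python of the Hankel special case (the (A,b)-constrained formalism of the paper is more general, and the identity covariance chosen for the generators below is purely illustrative): a Gaussian prior on the $2m-1$ anti-diagonal generators of an $m\times m$ Hankel matrix induces a Gaussian on the vectorized matrix with a structured, rank-deficient covariance.

```python
import numpy as np

def hankel_map(m):
    # Columns are vectorized anti-diagonal indicator matrices: entry (i, j)
    # of the s-th basis matrix is 1 exactly when i + j == s.
    cols = []
    for s in range(2 * m - 1):
        B = np.zeros((m, m))
        for i in range(m):
            if 0 <= s - i < m:
                B[i, s - i] = 1.0
        cols.append(B.ravel())
    return np.stack(cols, axis=1)           # shape (m^2, 2m - 1)

m, rng = 4, np.random.default_rng(0)
A = hankel_map(m)
C = A @ A.T                                  # covariance of vec(H), rank 2m - 1
H = (A @ rng.standard_normal(2 * m - 1)).reshape(m, m)
print(np.allclose(H[1:, :-1], H[:-1, 1:]))   # True: H is Hankel by construction
```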
Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except those that are directly measured. Instead of focusing on a single method of estimation, we use and compare results from several broadly used methods, including maximum likelihood and Markov Chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions are improved when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition for the susceptible population S(0) improved parameter estimates, while this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study corroborates that the quality of parameter estimates changes depending on whether the pre-peak or post-peak portion of the time series is used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
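A minimal sketch of the kind of synthetic-data experiment described, using a standard SEIR model with the infectious compartment observed under multiplicative noise; the noise level, observation model, initial guesses, and optimizer are illustrative assumptions, not the paper's exact setup (which also includes MCMC estimation).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Standard SEIR model; beta, sigma, gamma and the initial conditions
# E(0), I(0) are treated as unknown, with S(0) = N - E(0) - I(0).
def seir_rhs(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    return [-beta * S * I / N, beta * S * I / N - sigma * E,
            sigma * E - gamma * I, gamma * I]

def simulate(theta, t, N):
    beta, sigma, gamma, E0, I0 = theta
    y0 = [N - E0 - I0, E0, I0, 0.0]
    sol = solve_ivp(seir_rhs, (t[0], t[-1]), y0, t_eval=t,
                    args=(beta, sigma, gamma, N), rtol=1e-8)
    return sol.y[2]                    # observe only the infectious compartment

rng = np.random.default_rng(1)
N, t = 1e5, np.linspace(0, 120, 61)
true = np.array([0.5, 0.2, 0.1, 10.0, 5.0])
data = simulate(true, t, N) * (1 + 0.05 * rng.standard_normal(t.size))

fit = least_squares(lambda th: simulate(th, t, N) - data,
                    x0=[0.3, 0.3, 0.2, 50.0, 50.0],
                    bounds=(1e-6, [5, 5, 5, 1e3, 1e3]))
print(dict(zip(["beta", "sigma", "gamma", "E0", "I0"], fit.x)))
```

Variants that fix rather than fit individual initial conditions, as compared in the study, can be obtained by moving entries of `theta` into fixed constants.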
Structural identifiability is an important property of parametric ODE models. When conducting an experiment and inferring a parameter value from the time-series data, we want to know whether the value is globally identifiable, locally identifiable, or non-identifiable. Global identifiability of the parameter indicates that there exists only one possible solution to the inference problem; local identifiability suggests that there could be several (but finitely many) possibilities, while non-identifiability implies that there are infinitely many possibilities for the value. Having this information is useful since one would, for example, only perform inference for the parameters that are identifiable. Given the current significance of and widespread research conducted in this area, we decided to create a database of linear compartment models and their identifiability results. This facilitates the process of checking theorems and conjectures and drawing conclusions on identifiability. By storing models only up to symmetries and isomorphisms, we optimize memory efficiency and reduce query time. We conclude by applying our database to real problems. We tested a conjecture about deleting one leak of the model stated in the paper 'Linear compartmental models: Input-output equations and operations that preserve identifiability' by E. Gross et al., and managed to produce a counterexample. We also compute some interesting statistics related to the identifiability of linear compartment model parameters.
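The abstract does not spell out how models are stored up to symmetries; as one hedged illustration of the idea, here is a brute-force canonical form under relabeling of compartments (adequate only for small models; all names below are hypothetical, not the database's actual implementation).

```python
import itertools

def canonical_key(adj, inputs, outputs, leaks):
    # A linear compartment model is a digraph on compartments plus
    # input/output/leak sets; two models are stored once if they differ
    # only by a relabeling of compartments (brute force over permutations).
    n = len(adj)
    best = None
    for p in itertools.permutations(range(n)):
        a = tuple(adj[p[i]][p[j]] for i in range(n) for j in range(n))
        relabel = lambda s: tuple(sorted(p.index(v) for v in s))
        key = (a, relabel(inputs), relabel(outputs), relabel(leaks))
        if best is None or key < best:
            best = key
    return best

# Two relabelings of the same 3-compartment catenary model get the same key:
m1 = ([[0, 1, 0], [1, 0, 1], [0, 1, 0]], {0}, {0}, {2})
m2 = ([[0, 1, 1], [1, 0, 0], [1, 0, 0]], {1}, {1}, {2})
print(canonical_key(*m1) == canonical_key(*m2))  # True
```

Two models are stored once exactly when their canonical keys coincide, which is what enables the deduplication and fast querying mentioned above.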
The present work concerns the derivation of a numerical scheme to approximate weak solutions of the Euler equations with a gravitational source term. The designed scheme is proved to be fully well-balanced, since it exactly preserves all moving equilibrium solutions, as well as the corresponding steady solutions at rest obtained when the velocity vanishes. Moreover, the proposed scheme is entropy-preserving, since it satisfies all fully discrete entropy inequalities. In addition, to ensure the required admissibility of the approximate solutions, the positivity of both the approximate density and pressure is established. Several numerical experiments attest to the relevance of the developed numerical method.
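For context, a standard one-dimensional form of the system and of the equilibria that a fully well-balanced scheme must preserve (the paper's exact setting and dimensionality may differ):
\[
\partial_t \rho + \partial_x(\rho u) = 0,\qquad
\partial_t(\rho u) + \partial_x(\rho u^2 + p) = -\rho\,\partial_x\phi,\qquad
\partial_t E + \partial_x\big((E+p)u\big) = -\rho u\,\partial_x\phi,
\]
where $\phi$ is the gravitational potential. Smooth moving equilibria are characterized by constant mass flux $\rho u$, constant entropy, and a constant Bernoulli-type invariant $u^2/2 + e + p/\rho + \phi$; the steady solutions at rest reduce to $u = 0$ together with the hydrostatic balance $\partial_x p = -\rho\,\partial_x\phi$.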
Precision matrices are crucial in many fields, such as social networks, neuroscience, and economics, since they represent the edge structure of Gaussian graphical models (GGMs): a zero in an off-diagonal position of the precision matrix indicates conditional independence between the corresponding nodes. In high-dimensional settings, where the dimension $p$ of the precision matrix exceeds the sample size $n$ and the matrix is sparse, methods such as the graphical Lasso, graphical SCAD, and CLIME are popular for estimating GGMs. While frequentist methods are well studied, Bayesian approaches for (unstructured) sparse precision matrices are less explored. The graphical horseshoe estimator of \citet{li2019graphical}, which applies the global-local horseshoe prior, shows superior empirical performance, but theoretical work on sparse precision matrix estimation using shrinkage priors is limited. This paper addresses these gaps by providing concentration results for the tempered posterior with the fully specified horseshoe prior in high-dimensional settings. Moreover, we also provide novel theoretical results under model misspecification, offering a general oracle inequality for the posterior.
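The graphical horseshoe itself requires a Gibbs sampler, but the sparsity structure at stake is easy to illustrate; here is a minimal sketch with a tridiagonal precision matrix and the frequentist graphical lasso baseline mentioned above (the regularization level `alpha=0.1` and the threshold are arbitrary illustrative choices).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
p, n = 20, 200

# Sparse tridiagonal precision matrix: node i is conditionally
# independent of node j given the rest whenever |i - j| > 1.
Omega = np.eye(p) + 0.45 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

fit = GraphicalLasso(alpha=0.1).fit(X)
support = np.abs(fit.precision_) > 1e-3
print("recovered off-diagonal nonzeros:",
      (support & ~np.eye(p, dtype=bool)).sum())  # truth has 2(p-1) = 38
```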
We prove the existence of signed combinatorial interpretations for several large families of structure constants. These families include standard bases of symmetric and quasisymmetric polynomials, as well as various bases in Schubert theory. The results are stated in the language of computational complexity, while the proofs are based on the effective M\"obius inversion.
A new type of systematic approach for studying the incompressible Euler equations numerically via the vanishing viscosity limit is proposed in this work. We show that the new strategy is unconditionally stable, in the sense that the $L^2$-energy dissipates and the $H^s$-norm is uniformly bounded in time without any restriction on the time step. Moreover, first-order convergence of the proposed method is established, including both low-regularity and high-regularity error estimates. The proposed method is extended to a full discretization with a newly developed iterative Fourier spectral scheme. Another main contribution of this work is a new integration by parts technique that lowers the regularity requirement from $H^4$ to $H^3$ in the $L^2$-error estimate. To the best of our knowledge, this is one of the very first works to study the incompressible Euler equations by designing stable numerical schemes via the inviscid limit with rigorous analysis. Furthermore, we present both low- and high-regularity errors from numerical experiments and demonstrate the dynamics in several benchmark examples.
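For context, the vanishing viscosity (Navier-Stokes type) regularization behind the strategy, in standard form; the abstract does not specify how the artificial viscosity $\nu$ is tied to the discretization parameters, so no such coupling is asserted here:
\[
\partial_t u^{\nu} + (u^{\nu}\cdot\nabla)u^{\nu} + \nabla p^{\nu} = \nu\,\Delta u^{\nu},
\qquad \nabla\cdot u^{\nu} = 0,
\qquad u^{\nu}(0) = u_0,
\]
with $u^{\nu} \to u$, the solution of the incompressible Euler equations, as $\nu \to 0$.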
We present a new, monolithic first-order (in both time and space) BSSNOK formulation of the coupled Einstein-Euler equations. The entire system of hyperbolic PDEs is solved in a completely unified manner via one single numerical scheme applied both to the conservative sector of the matter part and to the first-order strictly non-conservative sector of the spacetime evolution. The coupling between matter and spacetime is achieved via algebraic source terms. The numerical scheme used for the solution of the new monolithic first-order formulation is a path-conservative central WENO (CWENO) finite difference scheme, with suitable insertions to account for the presence of the non-conservative terms. By solving several crucial tests of numerical general relativity, including a stable neutron star, Riemann problems in relativistic matter with shock waves, and the stable long-time evolution of single and binary puncture black holes up to and beyond the binary merger, we show that CWENO schemes, introduced two decades ago for the compressible Euler equations of gas dynamics, can also be successfully applied to numerical general relativity, solving all equations at the same time with one single numerical method. In the future, the new monolithic approach proposed in this paper may become an attractive alternative to traditional methods, which couple central finite difference schemes with Kreiss-Oliger dissipation for the spacetime part with totally different TVD schemes for the matter evolution, and which are currently the state of the art in the field.
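Schematically, and in hedged form (the concrete flux, non-conservative, and source terms are those of the paper's first-order BSSNOK plus relativistic Euler system, not reproduced here), the monolithic formulation fits the generic template handled by path-conservative schemes:
\[
\partial_t \mathbf{Q} + \nabla\cdot\mathbf{F}(\mathbf{Q}) + \mathbf{B}(\mathbf{Q})\cdot\nabla\mathbf{Q} = \mathbf{S}(\mathbf{Q}),
\]
where $\mathbf{F}$ collects the conservative matter fluxes, $\mathbf{B}(\mathbf{Q})\cdot\nabla\mathbf{Q}$ the strictly non-conservative spacetime sector, and $\mathbf{S}$ the algebraic coupling sources.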
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial data, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm setting. In the case of smooth initial data, two error estimates are established in the framework of general spatial $L^q$-norms.
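For reference, the equation in question in its standard form (the precise assumptions on the domain, the noise operator $G$, and the Wiener process $W$ are those of the paper and are not restated here):
\[
\mathrm{d}u(t) = \big(\Delta u(t) + u(t) - u(t)^3\big)\,\mathrm{d}t + G(u(t))\,\mathrm{d}W(t),
\qquad u(0) = u_0,
\]
posed on a bounded spatial domain $D \subset \mathbb{R}^3$.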