It is known that the solution of a conservative steady-state two-sided fractional diffusion problem can exhibit singularities near the boundaries. As a consequence, and due to the conservative nature of the problem, we adopt a finite volume element discretization over a generic non-uniform mesh. We focus on grids mapped by a smooth function, consisting of a graded mesh near the singularity combined with a uniform mesh where the solution is smooth. Such a choice gives rise to Toeplitz-like discretization matrices and thus allows both a low-cost matrix-vector product and a detailed spectral analysis. The obtained spectral information is used to develop an ad hoc parameter-free multigrid preconditioner for GMRES, which is numerically shown to yield good convergence results in the presence of graded meshes mapped by power functions that accumulate points near the singularity. The approximation order of the considered graded meshes is numerically compared with that of a composite mesh from the literature, which still leads to Toeplitz-like linear systems and is therefore also well-suited for our multigrid method. Several numerical tests confirm that power graded meshes result in lower approximation errors than composite ones and that our solver has a wide range of applicability.
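A minimal sketch of such a power-graded mesh (assuming the singular endpoint is $x=0$; the function name and grading exponent are illustrative, not taken from the paper) is:

```python
import numpy as np

def power_graded_mesh(n, r, a=0.0, b=1.0):
    """Mesh on [a, b] graded toward a via the power map x_i = a + (b-a)*(i/n)**r.

    r = 1 gives a uniform mesh; r > 1 accumulates points near a, where the
    solution of the two-sided fractional problem may be singular.
    """
    i = np.arange(n + 1)
    return a + (b - a) * (i / n) ** r

# cell widths grow monotonically away from the singular endpoint x = 0
mesh = power_graded_mesh(8, r=3.0)
```

In practice the graded part near the singularity would be matched with a uniform part where the solution is smooth, which is what preserves the Toeplitz-like structure exploited above.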
In this paper, we analyze the numerical approximation of the Navier-Stokes problem over a bounded polygonal domain in $\mathbb{R}^2$, where the initial condition is modeled by a log-normal random field. This problem typically arises in uncertainty quantification. We aim to compute the expected value of linear functionals of the solution to the Navier-Stokes equations and perform a rigorous error analysis for the problem. In particular, our method combines a fully discrete finite element discretization, a truncated Karhunen-Lo\'eve expansion for the realizations of the initial condition, and a lattice-based quasi-Monte Carlo (QMC) method to estimate the expected values over the parameter space. Our QMC analysis is based on randomly shifted lattice rules for the high-dimensional integration over the parameter space, which guarantees that the error decays as $\mathcal{O}(N^{-1+\delta})$, where $N$ is the number of sampling points, $\delta>0$ is arbitrarily small, and the constant in the decay estimate is independent of the dimension of integration.
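The randomly shifted lattice rules underpinning the QMC analysis can be sketched as follows (the generating vector below is arbitrary and for illustration only; vectors achieving the stated $\mathcal{O}(N^{-1+\delta})$ rate are typically obtained by component-by-component construction):

```python
import numpy as np

def shifted_lattice_estimate(f, z, N, n_shifts=8, rng=None):
    """Randomly shifted rank-1 lattice rule for integrating f over [0,1]^d.

    z is the generating vector (length d). For each random shift Delta,
    the cubature points are frac(i*z/N + Delta), i = 0..N-1; averaging
    over independent shifts gives an unbiased estimate together with a
    practical standard-error indicator.
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(z)
    i = np.arange(N)[:, None]
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(z.size)
        pts = np.mod(i * z / N + shift, 1.0)   # N x d lattice points
        estimates.append(np.mean([f(p) for p in pts]))
    return np.mean(estimates), np.std(estimates) / np.sqrt(n_shifts)

# toy check: the integral of x1 + x2 + x3 over [0,1]^3 equals 1.5
est, err = shifted_lattice_estimate(lambda x: x.sum(), z=[1, 27, 69], N=101, rng=0)
```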
In this paper we study the convergence rate of a finite volume approximation of the compressible Navier--Stokes--Fourier system. To this end, we first show the local existence of a highly regular unique strong solution and analyse its global extension in time for as long as the density and temperature remain bounded. We make the physically reasonable assumption that the numerical density and temperature are uniformly bounded from above and below. The relative energy provides us with an elegant way to derive a priori error estimates between finite volume solutions and the strong solution.
Preferential sampling is a common feature in geostatistics and occurs when the locations to be sampled are chosen based on information about the phenomenon under study. In this case, point pattern models are commonly used as the probability law for the distribution of the locations. However, the analytic intractability of the point process likelihood prevents its direct calculation. Many Bayesian (and non-Bayesian) approaches in non-parametric model specifications handle this difficulty with approximation-based methods. These approximations involve errors that are difficult to quantify and can lead to biased inference. This paper presents an approach for performing exact Bayesian inference in this setting without the need for model approximation. A qualitatively minor change to the traditional model is proposed to circumvent the intractability of the likelihood. This change enables the use of an augmented-model strategy. Recent work on Bayesian inference for point pattern models can be adapted to the geostatistical setting and renders exact inference computationally tractable for the proposed methodology. Estimation of model parameters and prediction of the response at unsampled locations can then be obtained from the joint posterior distribution of the augmented model. Simulation studies show that the proposed model performs well for estimation and prediction in a variety of preferentiality scenarios. The performance of our approach is illustrated in the analysis of real datasets and compares favourably against approximation-based approaches. The paper concludes with comments regarding extensions of and improvements to the proposed methodology.
This paper is concerned with developing an efficient numerical algorithm for fast implementation of the sparse grid method for computing the $d$-dimensional integral of a given function. The new algorithm, called the MDI-SG ({\em multilevel dimension iteration sparse grid}) method, implements the sparse grid method based on a dimension iteration/reduction procedure. It neither stores the integration points nor computes the function values independently at each integration point; instead, it reuses function evaluations as much as possible by performing them at all integration points in a cluster and iteratively along coordinate directions. It is shown numerically that the computational complexity (in terms of CPU time) of the proposed MDI-SG method is of polynomial order $O(Nd^3)$ or better, compared to the order $O(N(\log N)^{d-1})$, exponential in $d$, of the standard sparse grid method, where $N$ denotes the maximum number of integration points in each coordinate direction. As a result, the proposed MDI-SG method effectively circumvents the curse of dimensionality suffered by the standard sparse grid method in high-dimensional numerical integration.
The local discontinuous Galerkin (LDG) method is studied for a third-order singularly perturbed problem of convection-diffusion type. Based on a regularity assumption for the exact solution, we prove almost $O(N^{-(k+1/2)})$ (up to a logarithmic factor) energy-norm convergence, uniformly in the perturbation parameter. Here, $k\geq 0$ is the maximum degree of the piecewise polynomials used in the discrete space, and $N$ is the number of mesh elements. The results are valid for three types of layer-adapted meshes: Shishkin-type, Bakhvalov-Shishkin-type, and Bakhvalov-type. Numerical experiments are conducted to test the theoretical results.
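For reference, a Shishkin-type mesh of the kind referred to above can be generated as follows (a generic sketch with a single boundary layer at $x=1$ and an illustrative mesh parameter $\sigma$; the actual transition point for the third-order problem depends on the analysis in the paper):

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.5):
    """Piecewise-uniform Shishkin mesh on [0, 1] with a boundary layer at x = 1.

    The transition point tau = min(1/2, sigma * eps * log(N)) splits [0, 1]
    into a coarse part [0, 1 - tau] and a fine part [1 - tau, 1], each with
    N/2 uniform cells. In practice sigma is tied to the polynomial degree k
    (e.g. sigma = k + 1); the default here is purely illustrative.
    """
    tau = min(0.5, sigma * eps * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

mesh = shishkin_mesh(N=64, eps=1e-4)
```

Bakhvalov-type meshes differ only in the grading function used inside the layer region; the piecewise structure is analogous.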
We consider a high-dimensional random constrained optimization problem in which a set of binary variables is subject to a linear system of equations. The cost function is a simple linear cost, measuring the Hamming distance with respect to a reference configuration. Despite its apparent simplicity, this problem exhibits a rich phenomenology. We show that different situations arise depending on the random ensemble of linear systems. When each variable is involved in at most two linear constraints, the problem can be partially solved analytically; in particular, we show that upon convergence, the zero-temperature limit of the cavity equations returns the optimal solution. We then study the geometrical properties of more general random ensembles. In particular, we observe a range of constraint densities at which the system enters a glassy phase where the cost function has many minima. Interestingly, the algorithmic performance is only sensitive to another phase transition affecting the structure of configurations allowed by the linear constraints. We also extend our results to variables belonging to $\text{GF}(q)$, the Galois field of order $q$. We show that increasing the value of $q$ allows one to achieve a better optimum, which is confirmed by the replica-symmetric cavity method predictions.
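A brute-force sketch of the underlying optimization problem in the binary case is given below (illustrative only: it enumerates all $2^n$ configurations, whereas the paper analyzes the problem with the cavity method; the instance and function name are made up):

```python
from itertools import product

def min_hamming_solution(A, b, ref):
    """Minimize the Hamming distance to `ref` over x in {0,1}^n
    subject to the linear system A x = b (mod 2).

    Exponential in n; meant only to state the problem precisely,
    not to compete with message-passing approaches.
    """
    n = len(A[0])
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) % 2 == rhs
               for row, rhs in zip(A, b)):
            d = sum(xi != ri for xi, ri in zip(x, ref))
            if best is None or d < best:
                best, best_x = d, x
    return best, best_x

# tiny instance: two XOR constraints on four variables,
# reference configuration all zeros
A = [[1, 1, 0, 0], [0, 1, 1, 1]]
b = [1, 0]
ref = [0, 0, 0, 0]
dist, x = min_hamming_solution(A, b, ref)
```

Here each variable appears in at most two constraints, the regime in which the abstract states the zero-temperature cavity equations return the optimal solution.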
This paper investigates a new class of fractional-order Runge-Kutta (FORK) methods for the numerical approximation of the solution of fractional differential equations (FDEs). Using the Caputo generalized Taylor formula and the total differential for the Caputo fractional derivative, we construct explicit and implicit FORK methods, analogous to the well-known Runge-Kutta schemes for ordinary differential equations. In the proposed methods, due to the dependence of fractional derivatives on a fixed base point $t_0$, we must modify the right-hand side of the given equation at every step of the FORK methods. Coefficients for explicit and implicit FORK schemes are presented, and the convergence analysis of the proposed methods is discussed. Numerical experiments demonstrate the effectiveness and robustness of the method.
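To fix ideas, the simplest one-stage explicit scheme of this type is the fractional Euler method; the sketch below is a naive marching scheme that ignores the base-point modification described above for $0<\alpha<1$, and it reduces to classical forward Euler for $\alpha=1$:

```python
import math

def caputo_euler(f, t0, y0, T, h, alpha):
    """Fractional (Caputo) Euler method for D^alpha y = f(t, y), y(t0) = y0.

    One step: y_{n+1} = y_n + h**alpha / Gamma(alpha + 1) * f(t_n, y_n).
    For alpha = 1 this is the classical forward Euler method; for
    0 < alpha < 1 it is only a sketch that omits the right-hand-side
    modification required by the base-point dependence.
    """
    n = int(round((T - t0) / h))
    w = h ** alpha / math.gamma(alpha + 1)
    y = y0
    for k in range(n):
        y = y + w * f(t0 + k * h, y)
    return y

# sanity check: alpha = 1 recovers forward Euler for y' = y, y(0) = 1,
# so the result approximates e at T = 1
approx = caputo_euler(lambda t, y: y, 0.0, 1.0, 1.0, 1e-3, alpha=1.0)
```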
Projection-based model order reduction allows for the parsimonious representation of full order models (FOMs), typically obtained through the discretization of certain partial differential equations (PDEs) using conventional techniques where the discretization may contain a very large number of degrees of freedom. As a result of this more compact representation, the resulting projection-based reduced order models (ROMs) can achieve considerable computational speedups, which are especially useful in real-time or multi-query analyses. One known deficiency of projection-based ROMs is that they can suffer from a lack of robustness, stability and accuracy, especially in the predictive regime, which ultimately limits their useful application. Another research gap that has prevented the widespread adoption of ROMs within the modeling and simulation community is the lack of theoretical and algorithmic foundations necessary for the "plug-and-play" integration of these models into existing multi-scale and multi-physics frameworks. This paper describes a new methodology that has the potential to address both of the aforementioned deficiencies by coupling projection-based ROMs with each other as well as with conventional FOMs by means of the Schwarz alternating method. Leveraging recent work that adapted the Schwarz alternating method to enable consistent and concurrent multi-scale coupling of finite element FOMs in solid mechanics, we present a new extension of the Schwarz formulation that enables ROM-FOM and ROM-ROM coupling in nonlinear solid mechanics. In order to maintain efficiency, we employ hyper-reduction via the Energy-Conserving Sampling and Weighting approach. We evaluate the proposed coupling approach in the reproductive as well as in the predictive regime on a canonical test case that involves the dynamic propagation of a traveling wave in a nonlinear hyper-elastic material.
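The classical Schwarz alternating method on which the coupling is built can be illustrated on a 1D Poisson model problem with two overlapping subdomains (a generic finite-difference sketch, not the paper's ROM-FOM formulation; all names and parameters are illustrative):

```python
import numpy as np

def poisson_dirichlet(f, a, b, ua, ub, n):
    """Finite-difference solve of -u'' = f on [a, b] with u(a)=ua, u(b)=ub."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    m = n - 2
    A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    rhs = f(x[1:-1]) * h ** 2
    rhs[0] += ua
    rhs[-1] += ub
    return x, np.concatenate(([ua], np.linalg.solve(A, rhs), [ub]))

def schwarz_alternating(f, iters=30):
    """Overlapping Schwarz for -u'' = f on [0, 1], u(0) = u(1) = 0, with
    subdomains [0, 0.6] and [0.4, 1] (overlap [0.4, 0.6]). Each sweep
    solves one subdomain using the other's latest trace as Dirichlet
    data at the artificial interface."""
    g1 = 0.0                              # interface value at x = 0.6
    for _ in range(iters):
        x1, u1 = poisson_dirichlet(f, 0.0, 0.6, 0.0, g1, 61)
        g2 = np.interp(0.4, x1, u1)       # trace passed to subdomain 2
        x2, u2 = poisson_dirichlet(f, 0.4, 1.0, g2, 0.0, 61)
        g1 = np.interp(0.6, x2, u2)       # trace passed back to subdomain 1
    return (x1, u1), (x2, u2)

(x1, u1), (x2, u2) = schwarz_alternating(lambda x: np.ones_like(x))
# exact solution of -u'' = 1 with zero boundary data is u(x) = x(1-x)/2
```

In the ROM-FOM setting described above, each subdomain solve is replaced by a (possibly hyper-reduced) reduced or full order model, while the alternating exchange of interface data is unchanged in spirit.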
In this paper, we present algorithms and implementations for the end-to-end GPU acceleration of matrix-free low-order-refined preconditioning of high-order finite element problems. The methods described here allow for the construction of effective preconditioners for high-order problems with optimal memory usage and computational complexity. The preconditioners are based on the construction of a spectrally equivalent low-order discretization on a refined mesh, which is then amenable to, for example, algebraic multigrid preconditioning. The constants of equivalence are independent of mesh size and polynomial degree. For vector finite element problems in $H({\rm curl})$ and $H({\rm div})$ (e.g. for electromagnetic or radiation diffusion problems) a specially constructed interpolation-histopolation basis is used to ensure fast convergence. Detailed performance studies are carried out to analyze the efficiency of the GPU algorithms. The kernel throughput of each of the main algorithmic components is measured, and the strong and weak parallel scalability of the methods is demonstrated. The different relative weighting and significance of the algorithmic components on GPUs and CPUs is discussed. Results on problems involving adaptively refined nonconforming meshes are shown, and the use of the preconditioners on a large-scale magnetic diffusion problem using all spaces of the finite element de Rham complex is illustrated.
This study presents a theoretical framework for the monocular pose estimation problem using total least squares. The unit-vector line-of-sight observations of the features are extracted from the monocular camera images. First, the optimization framework is formulated for the pose estimation problem, with observation vectors given by unit vectors from the camera center of projection pointing towards the image features. The attitude and position solutions obtained via the derived optimization framework are proven to reach the Cram\'er-Rao lower bound under the small-angle approximation of the attitude errors. Specifically, the Fisher information matrix and the Cram\'er-Rao bounds are evaluated and compared to the analytical derivations of the error-covariance expressions to rigorously prove the optimality of the estimates. The sensor data for the measurement model are provided through a series of vector observations, and two fully populated noise-covariance matrices are assumed for the body and reference observation data. The inverses of these matrices appear as a series of weight matrices in the cost function. The proposed solution is simulated in a Monte Carlo framework with 10,000 samples to validate the error-covariance analysis.
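For comparison, the classical SVD solution of Wahba's problem estimates the attitude from vector observations with scalar weights (Markley's SVD method, shown here only as a baseline; the total least squares framework above instead accommodates fully populated noise covariances for both vector sets):

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """Estimate the rotation R mapping reference vectors to body vectors
    by solving Wahba's problem via SVD.

    Minimizes sum_i w_i * ||b_i - R r_i||^2 over rotation matrices R,
    with scalar weights w_i only.
    """
    if weights is None:
        weights = np.ones(len(body_vecs))
    B = np.zeros((3, 3))
    for w, b, r in zip(weights, body_vecs, ref_vecs):
        B += w * np.outer(b, r)           # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt  # proper rotation (det = +1)

# noise-free sanity check with a known rotation about the z-axis
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
refs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]
bodies = [R_true @ r for r in refs]
R_est = svd_attitude(bodies, refs)
```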