Complex interval arithmetic is a powerful tool for the analysis of computational errors. The naturally arising rectangular, polar, and circular (together called primitive) interval types are not closed under simple arithmetic operations, and their use yields overly relaxed bounds. The later-introduced polygonal type, on the other hand, allows arbitrarily precise representation of the above operations at a higher computational cost. We propose the polyarcular interval type as an effective extension of the previous types. The polyarcular interval can represent all primitive intervals and most of their arithmetic combinations precisely, and it has an approximation capability competing with that of the polygonal interval. In particular, in antenna tolerance analysis it can achieve perfect accuracy at a lower computational cost than the polygonal type, which we show in a relevant case study. In this paper, we present a rigorous analysis of the arithmetic properties of all five interval types, involving a new algebro-geometric method of boundary analysis.
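As a concrete illustration of why primitive types yield relaxed bounds, the sketch below (a minimal example in circular arithmetic, not the paper's polyarcular construction) multiplies two disc intervals with the standard enclosing-disc formula: the exact image of the product is not a disc, so the result necessarily over-covers it.

```python
# Minimal sketch: circular (disc) complex intervals with the standard
# enclosing product. The exact image {z*w : z in a, w in b} has a curved,
# non-circular boundary, so the returned disc is an over-enclosure --
# the relaxation the polyarcular type is designed to avoid.
from dataclasses import dataclass

@dataclass
class Disc:
    center: complex
    radius: float

    def __mul__(self, other: "Disc") -> "Disc":
        c = self.center * other.center
        r = (abs(self.center) * other.radius
             + abs(other.center) * self.radius
             + self.radius * other.radius)
        return Disc(c, r)

a = Disc(1 + 1j, 0.1)
b = Disc(2 - 1j, 0.2)
print(a * b)  # enclosing disc, strictly larger than the exact image set
```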
We introduce a 2-dimensional stochastic dominance (2DSD) index to characterize both strict and almost stochastic dominance. Based on this index, we derive an estimator for the minimum violation ratio (MVR), also known as the critical parameter, of the almost stochastic ordering condition between two variables. We determine the asymptotic properties of the empirical 2DSD index and MVR for the most frequently used stochastic orders. We also provide conditions under which the bootstrap estimators of these quantities are strongly consistent. As an application, we develop consistent bootstrap testing procedures for almost stochastic dominance. The performance of the tests is checked via simulations and the analysis of real data.
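For readers unfamiliar with violation ratios, the sketch below computes the classical Leshno-Levy violation ratio for almost first-order stochastic dominance from empirical CDFs; it is a simplified stand-in for the paper's 2DSD-based MVR estimator, and the grid-based integration is an assumption made only for this illustration.

```python
# Minimal sketch: empirical violation ratio for almost first-order
# stochastic dominance, eps = int (F_X - F_Y)_+ / int |F_X - F_Y|.
# eps = 0 corresponds to strict dominance of X over Y; small eps to
# "almost" dominance. Not the paper's 2DSD index or MVR estimator.
import numpy as np

def empirical_cdf(sample, grid):
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def violation_ratio(x, y, n_grid=2000):
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), n_grid)
    d = empirical_cdf(x, grid) - empirical_cdf(y, grid)
    dx = grid[1] - grid[0]
    num = np.clip(d, 0.0, None).sum() * dx   # area where dominance is violated
    den = np.abs(d).sum() * dx               # total area between the CDFs
    return num / den if den > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(0.1, 1.0, 500)   # slightly shifted, "almost" dominating sample
y = rng.normal(0.0, 1.0, 500)
print(violation_ratio(x, y))    # small value close to 0
```

Bootstrap versions of such statistics are obtained by recomputing the ratio on resampled pairs, which is the setting in which the consistency results above apply.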
We consider the equilibrium equations for a linearized Cosserat material. We identify their structure in terms of a differential complex, which is isomorphic to six copies of the de Rham complex through an algebraic isomorphism. Moreover, we show how Cosserat materials can be analyzed by inheriting results from linearized elasticity. Both perspectives give rise to mixed finite element methods, which we refer to as strongly and weakly coupled, respectively. We prove convergence of both classes of methods, with particular attention to improved convergence rate estimates, and stability in the limit of vanishing Cosserat material parameters. The theoretical results are fully reflected in the actual performance of the methods, as shown by the numerical verifications.
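For reference, the equilibrium system in question consists, in standard micropolar index notation (a sketch of the usual conventions; signs and transposition conventions vary across the literature), of the balance laws for the stress \(\sigma\) and the couple stress \(m\) with body force \(f\) and body couple \(g\):
\[
\partial_j \sigma_{ji} + f_i = 0,
\qquad
\partial_j m_{ji} + \epsilon_{ijk}\,\sigma_{jk} + g_i = 0,
\]
where \(\epsilon_{ijk}\) is the permutation symbol; the constitutive laws close the system by expressing \(\sigma\) and \(m\) in terms of the displacement and microrotation fields and the Cosserat material parameters.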
We consider the problem of estimating log-determinants of large, sparse, positive definite matrices. Our algorithm is based on sparse approximate inverses, with a key focus on reducing computational cost. The algorithm can be implemented adaptively, and it uses graph spline approximation to improve accuracy. We illustrate our approach on classes of large sparse matrices.
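As a point of reference for any approximate estimator, the sketch below computes the exact log-determinant of a sparse SPD matrix via a sparse LU factorization; it is a baseline only and does not use the sparse approximate inverses or the graph spline approximation described above.

```python
# Minimal baseline sketch: exact log det of a sparse SPD matrix from a
# sparse LU factorization (det > 0 for SPD, so pivot signs cancel).
# Useful for validating approximate estimators on moderate sizes.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def logdet_spd(A):
    lu = splu(A.tocsc())
    return float(np.sum(np.log(np.abs(lu.U.diagonal()))))

# Example: shifted 2D Laplacian, a standard large sparse SPD test matrix.
n = 50
L1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(L1d, L1d) + 0.1 * sp.eye(n * n)
print(logdet_spd(A))
```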
Parameters of differential equations are essential to characterize the intrinsic behaviors of dynamic systems. Numerous methods for estimating parameters in dynamic systems are computationally and/or statistically inadequate, especially for complex systems with general-order differential operators, such as motion dynamics. This article presents Green's matching, a computationally tractable and statistically efficient two-step method that only needs to approximate the trajectories of dynamic systems, not their derivatives, because the differential operators are inverted via Green's functions. This yields a statistically optimal guarantee for parameter estimation in general-order equations, a feature not shared by existing methods, and provides an efficient framework for broad statistical inference in complex dynamic systems.
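The core mechanism can be seen on a toy second-order equation (an assumed example, not the paper's general algorithm): for \(x''(t) = \theta\, g(t)\) with known initial conditions, inverting the differential operator with its Green's function gives \(x(t) = x(0) + x'(0)\,t + \theta \int_0^t (t-s)\,g(s)\,ds\), so \(\theta\) can be fit to (smoothed) trajectory data by least squares without ever differentiating the data numerically.

```python
# Minimal toy sketch of the Green's-function idea behind Green's matching:
# fit theta in x''(t) = theta * g(t) by matching trajectories, not derivatives.
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def repeated_integral(t, g):
    """G(t_i) = int_0^{t_i} (t_i - s) g(s) ds, the Green's-function feature."""
    return np.array([trap((ti - t[: i + 1]) * g[: i + 1], t[: i + 1])
                     for i, ti in enumerate(t)])

rng = np.random.default_rng(1)
theta_true, x0, v0 = 2.5, 1.0, 0.0
t = np.linspace(0.0, 5.0, 200)
g = np.sin(t)                                      # known forcing term
G = repeated_integral(t, g)
y = x0 + v0 * t + theta_true * G + 0.05 * rng.standard_normal(t.size)

# One-dimensional least squares for theta (real data would be pre-smoothed first).
theta_hat = np.dot(G, y - x0 - v0 * t) / np.dot(G, G)
print(theta_hat)   # close to 2.5
```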
The numerical treatment of fluid-particle systems is a very challenging problem because of the complex coupling phenomena between the two phases. Although accurate mathematical models are available for this kind of application, the computational cost of the numerical simulations is very high. The most modern high-performance computing infrastructures can mitigate this issue but not completely resolve it. In this work, we develop a non-intrusive data-driven reduced order model (ROM) for Computational Fluid Dynamics (CFD) - Discrete Element Method (DEM) simulations. The ROM is built using proper orthogonal decomposition (POD) for the computation of the reduced basis space and a Long Short-Term Memory (LSTM) network for the computation of the reduced coefficients. We address both system identification and prediction. The main novelties are (i) a filtering procedure applied to the full-order snapshots to reduce the dimensionality of the reduced problem and (ii) a preliminary treatment of the particle phase. The accuracy of our ROM approach is assessed against the classic Goldschmidt fluidized bed benchmark problem. Finally, we also provide some insights into the efficiency of our ROM approach.
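The generic POD-LSTM pipeline underlying such ROMs is sketched below (a minimal, schematic version with synthetic snapshots; the snapshot filtering and particle-phase treatment that constitute the novelties above are not reproduced here).

```python
# Minimal sketch of a POD-LSTM reduced order model: POD basis from an SVD
# of the snapshot matrix, then an LSTM that advances the reduced coefficients.
import numpy as np
import torch
import torch.nn as nn

n_dof, n_t, r, window = 1000, 200, 8, 10
snapshots = np.random.rand(n_dof, n_t)          # columns = full-order states (toy data)

# POD: the leading left singular vectors span the reduced basis space.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                                # (n_dof, r)
coeffs = basis.T @ snapshots                    # (r, n_t) reduced coefficients

class CoeffLSTM(nn.Module):
    """Maps a window of past reduced coefficients to the next coefficient vector."""
    def __init__(self, r, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=r, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, r)

    def forward(self, x):                       # x: (batch, window, r)
        h, _ = self.lstm(x)
        return self.out(h[:, -1, :])

X = torch.tensor(np.stack([coeffs[:, i:i + window].T for i in range(n_t - window)]),
                 dtype=torch.float32)
Y = torch.tensor(coeffs[:, window:].T, dtype=torch.float32)
model = CoeffLSTM(r)
print(nn.MSELoss()(model(X), Y))                # training loop omitted
# A full-order state is reconstructed as basis @ predicted_coefficients.
```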
The subject of this work is an adaptive stochastic Galerkin finite element method for parametric or random elliptic partial differential equations, which generates sparse product polynomial expansions with respect to the parametric variables of solutions. For the corresponding spatial approximations, an independently refined finite element mesh is used for each polynomial coefficient. The method relies on multilevel expansions of input random fields and achieves error reduction with uniform rate. In particular, the saturation property for the refinement process is ensured by the algorithm. The results are illustrated by numerical experiments, including cases with random fields of low regularity.
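In the usual model-problem notation (assumed here for concreteness), the method targets parametric problems with an affine, multilevel expansion of the coefficient,
\[
-\nabla\cdot\bigl(a(y)\,\nabla u(y)\bigr) = f \ \text{ in } D,
\qquad
a(y,x) = a_0(x) + \sum_{m\ge 1} y_m\,\psi_m(x),
\qquad y=(y_m)_{m\ge 1}\in[-1,1]^{\mathbb N},
\]
and produces sparse product polynomial approximations
\[
u_\Lambda(y,x) = \sum_{\nu\in\Lambda} u_\nu(x)\,P_\nu(y),
\qquad
P_\nu(y)=\prod_{m\ge 1} P_{\nu_m}(y_m),
\]
with a finite index set \(\Lambda\) chosen adaptively and each coefficient \(u_\nu\) discretized on its own independently refined finite element mesh.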
Polyhedral affinoid algebras have been introduced by Einsiedler, Kapranov and Lind to connect rigid analytic geometry (analytic geometry over non-archimedean fields) and tropical geometry. In this article, we present a theory of Gr{\"o}bner bases for polytopal affinoid algebras that extends both Caruso et al.'s theory of Gr{\"o}bner bases on Tate algebras and Pauer et al.'s theory of Gr{\"o}bner bases on Laurent polynomials. We provide effective algorithms to compute Gr{\"o}bner bases for both ideals of Laurent polynomials and ideals in polytopal affinoid algebras. Experiments with a SageMath implementation are provided.
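For orientation, the sketch below shows a Gr{\"o}bner basis computation in the classical polynomial setting using SymPy; it is only a baseline reference, since the algorithms of this article additionally handle non-archimedean valuations and the Laurent/polytopal term orders that classical software does not.

```python
# Minimal classical baseline: a Groebner basis over an ordinary polynomial
# ring with SymPy. The polytopal affinoid setting of the paper requires
# tracking valuations and different term orders, which is not shown here.
from sympy import symbols, groebner

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order="lex")
print(list(G))
```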
Mixed methods for linear elasticity with strongly symmetric stresses of lowest order are studied in this paper. On each simplex, the stress space has piecewise linear components with respect to its Alfeld split (which connects the vertices to the barycenter), generalizing the Johnson-Mercier two-dimensional element to higher dimensions. Further reductions of the stress space in the three-dimensional case (to 24 degrees of freedom per tetrahedron) are possible when the displacement space is reduced to local rigid displacements. Proofs of optimal error estimates for the numerical solutions and improved error estimates via postprocessing and a duality argument are presented.
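In standard notation, such elements discretize the Hellinger-Reissner mixed formulation with strongly imposed stress symmetry (a sketch of the usual setting, with \(A\) the compliance tensor and \(\mathbb S\) the symmetric matrices): find \(\sigma\in H(\mathrm{div},\Omega;\mathbb S)\) and \(u\in L^2(\Omega;\mathbb R^n)\) such that
\[
\int_\Omega A\sigma:\tau \,dx + \int_\Omega u\cdot\operatorname{div}\tau \,dx = 0
\quad \forall\,\tau\in H(\mathrm{div},\Omega;\mathbb S),
\qquad
\int_\Omega v\cdot\operatorname{div}\sigma \,dx = -\int_\Omega f\cdot v \,dx
\quad \forall\,v\in L^2(\Omega;\mathbb R^n).
\]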
Incomplete factorizations have long been popular general-purpose algebraic preconditioners for solving large sparse linear systems of equations. Guaranteeing that the factorization is breakdown-free while computing a high-quality preconditioner is challenging. A resurgence of interest in using low precision arithmetic makes the search for robustness both more urgent and more difficult. In this paper, we focus on symmetric positive definite problems and explore a number of approaches: a look-ahead strategy to anticipate breakdown as early as possible, the use of global shifts, and a modification of an idea developed in the field of numerical optimization for the complete Cholesky factorization of dense matrices. Our numerical simulations target highly ill-conditioned sparse linear systems, with the goal of computing the factors in half precision arithmetic and then achieving double precision accuracy using mixed precision refinement.
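To fix ideas, the sketch below implements only the simplest of the strategies mentioned above, a global diagonal shift: attempt a factorization of A + alpha*I and grow alpha until no breakdown occurs. For brevity it uses a complete dense Cholesky factorization rather than an incomplete one in half precision; the look-ahead and optimization-inspired modifications are not shown.

```python
# Minimal sketch of the global-shift strategy: factorize A + alpha*I,
# increasing alpha geometrically until the factorization does not break down.
import numpy as np

def shifted_cholesky(A, alpha0=1e-3, growth=10.0, max_tries=20):
    alpha = 0.0
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(A + alpha * np.eye(A.shape[0]))
            return L, alpha                      # factor of the shifted matrix
        except np.linalg.LinAlgError:            # breakdown: pivot not positive
            alpha = alpha0 if alpha == 0.0 else growth * alpha
    raise RuntimeError("no successful shift found")

A = np.random.rand(50, 50)
A = (A + A.T) / 2                                # symmetric but generally indefinite
L, alpha = shifted_cholesky(A)
print(alpha)                                     # nonzero: a shift was needed
```

In a mixed precision setting, the (incomplete) factor of the shifted matrix would be computed and stored in half precision and then used as a preconditioner inside double precision iterative refinement.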
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
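The quantitative ingredient named above, Shepard's universal law of generalization, is sketched below in a deliberately simplified form: similarity decays exponentially with distance in a similarity space, and the predicted agreement between the participant's and the AI's decision grows with the similarity between the AI's saliency map and the one the participant would produce. The distance measure and the array shapes are placeholders; the actual similarity space and inference model are the paper's contribution.

```python
# Caricature sketch of Shepard's universal law of generalization applied to
# saliency-map comparison; the real model's similarity space is not simply a
# Euclidean distance between pixel maps.
import numpy as np

def shepard_similarity(d, k=1.0):
    """Generalization/similarity decays exponentially with psychological distance."""
    return np.exp(-k * d)

def predicted_agreement(own_saliency, ai_saliency, k=1.0):
    d = np.linalg.norm(own_saliency - ai_saliency)   # placeholder distance
    return shepard_similarity(d, k)

own = np.random.rand(8, 8)            # saliency map the participant would give
ai = own + 0.1 * np.random.rand(8, 8) # AI's saliency map, close to the participant's
print(predicted_agreement(own, ai))   # higher similarity -> stronger expected agreement
```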