We solve the comparison problem for generalized $\psi$-estimators introduced in Barczy and P\'ales (2022). Namely, we derive several necessary and sufficient conditions under which a generalized $\psi$-estimator is less than or equal to another generalized $\psi$-estimator for any sample. We also solve the corresponding equality problem for generalized $\psi$-estimators. As applications, we solve the two problems in question for Bajraktarevi\'c-type and quasi-arithmetic-type estimators. We also apply our results to some known statistical estimators, such as empirical expectiles and Mathieu-type estimators, and to solving likelihood equations in the case of normal, Beta-type, Gamma, Lomax (Pareto type II), lognormal and Laplace distributions.
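For intuition on the setting (our illustration, not taken from the paper): a $\psi$-estimator assigns to a sample $x_1,\dots,x_n$ the solution $\theta$ of $\sum_i \psi(x_i,\theta)=0$. A minimal sketch with the hypothetical choice $\psi(x,t)=x-t$, whose solution is the sample mean:

```python
# Illustrative sketch (not from the paper): a psi-estimator assigns to a
# sample x_1, ..., x_n the root theta of sum_i psi(x_i, theta) = 0.
# Here psi(x, t) = x - t is a hypothetical choice; its root is the sample mean.

def psi_estimator(sample, psi, lo, hi, tol=1e-10):
    """Find theta in [lo, hi] with sum(psi(x, theta)) = 0 by bisection,
    assuming the score is decreasing in theta (true for psi(x, t) = x - t)."""
    def score(t):
        return sum(psi(x, t) for x in sample)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid  # score still positive: root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

sample = [1.0, 2.0, 6.0]
theta = psi_estimator(sample, lambda x, t: x - t, lo=0.0, hi=10.0)
# theta is (approximately) the sample mean of 1, 2, 6
```

The comparison problem studied in the paper asks, for two such functions $\psi_1$ and $\psi_2$, when the resulting estimators are ordered for every sample.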
We review recent results on the connection between the Hermite-Pad\'e approximation problem, multiple orthogonal polynomials, and multidimensional Toda equations in continuous and discrete time. In order to motivate interest in the subject, we first present a pedagogical introduction to the by now classical relation between the Pad\'e approximation problem, orthogonal polynomials, and the Toda lattice equations. We also briefly describe generalizations of the connection to interpolation problems and to the non-commutative algebra level.
In this paper we consider partial integro-differential equations (PIDEs) with gradient-independent Lipschitz continuous nonlinearities and prove that deep neural networks with ReLU activation function can approximate solutions of such semilinear PIDEs without the curse of dimensionality, in the sense that the required number of parameters in the deep neural networks increases at most polynomially in both the dimension $ d $ of the corresponding PIDE and the reciprocal of the prescribed accuracy $\epsilon $.
This paper suggests a new finite element method to find a $P^4$-velocity and a $P^3$-pressure solving the incompressible Stokes equations at low cost. The method first solves the decoupled equation for a $P^4$-velocity. Then, using the calculated velocity, a locally calculable $P^3$-pressure is defined componentwise. The resulting $P^3$-pressure is shown to have the optimal order of convergence. Since the pressure is obtained by local computation only, the chief time cost of the new method lies in solving the decoupled equation for the $P^4$-velocity. Besides, the method overcomes the problem of singular vertices or corners.
In this paper we propose a local projector for truncated hierarchical B-splines (THB-splines). The local THB-spline projector is an adaptation of the B\'ezier projector proposed by Thomas et al. (Comput Methods Appl Mech Eng 284, 2015) for B-splines and analysis-suitable T-splines (AS T-splines). For THB-splines, contrary to B-splines and AS T-splines, there are elements on which the restrictions of THB-splines are linearly dependent. Therefore, we cluster certain local mesh elements together such that the THB-splines with support over these clusters are linearly independent, and we adapt the B\'ezier projector to use these clusters. We introduce general extensions for which optimal convergence is shown theoretically and numerically. In addition, a simple adaptive refinement scheme is introduced and compared to that of Giust et al. (Comput Aided Geom Des 80, 2020), and we find that our simple approach shows promise.
We show that, for all $k\geq 1$, there exists a $k$-uniform $3^+$-free binary morphism. Furthermore, we revisit an old result of Currie and Rampersad on $3$-free binary morphisms and reprove it in a conceptually simpler (but computationally more intensive) way. Our proofs use the theorem-prover Walnut as an essential tool.
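For intuition (an illustration, not the Walnut-based proof from the paper): a binary word is cube-free ($3$-free) if it contains no factor of the form $www$. A brute-force check, applied to a prefix of the Thue-Morse word, which is overlap-free and hence cube-free:

```python
# Illustrative brute-force cube-freeness check (not the Walnut-based proof):
# a word is cube-free (3-free) if no factor has the form www.

def has_cube(w):
    n = len(w)
    for i in range(n):
        # p is the period length of a candidate cube starting at position i
        for p in range(1, (n - i) // 3 + 1):
            if w[i:i+p] == w[i+p:i+2*p] == w[i+2*p:i+3*p]:
                return True
    return False

# Thue-Morse word: fixed point of the 2-uniform morphism 0 -> 01, 1 -> 10.
# (The placeholder "a" avoids rewriting freshly substituted letters.)
tm = "0"
for _ in range(8):
    tm = tm.replace("0", "a").replace("1", "10").replace("a", "01")
```

Here `has_cube(tm)` is `False` for the length-256 prefix, while a word such as `"010010010"` (the cube $(010)^3$) is detected.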
In this work, we introduce a new acquisition function for sequential sampling to efficiently quantify rare-event statistics of an input-to-response (ItR) system with given input probability and expensive function evaluations. Our acquisition is a generalization of the likelihood-weighted (LW) acquisition that was initially designed for the same purpose and then extended to many other applications. The improvement in our acquisition comes from the generalized form with two additional parameters, by varying which one can target and address two weaknesses of the original LW acquisition: (1) the input space associated with rare-event responses is not sufficiently stressed in sampling; (2) the surrogate model (generated from samples) may deviate significantly from the true ItR function, especially for cases with a complex ItR function and a limited number of samples. In addition, we develop a critical procedure for Monte Carlo discrete optimization of the acquisition function, which achieves orders-of-magnitude acceleration compared to existing approaches for problems of this type. The superior performance of our new acquisition over the original LW acquisition is demonstrated in a number of test cases, including some that were designed to showcase the effectiveness of the original LW acquisition. We finally apply our method to an engineering example to quantify the rare-event roll-motion statistics of a ship in a random sea.
In this paper we propose a variant of enriched Galerkin methods for second order elliptic equations with over-penalization of interior jump terms. The bilinear form with interior over-penalization gives a non-standard norm which is different from the discrete energy norm in the classical discontinuous Galerkin methods. Nonetheless, we prove that optimal a priori error estimates in the standard discrete energy norm can be obtained by combining a priori and a posteriori error analysis techniques. We also show that the interior over-penalization is advantageous for constructing preconditioners robust to mesh refinement, by analyzing the spectral equivalence of the bilinear forms. Numerical results are included to illustrate the convergence and preconditioning results.
Measurement-based quantum computation (MBQC) offers a fundamentally unique paradigm to design quantum algorithms. Indeed, due to the inherent randomness of quantum measurements, the natural operations in MBQC are not deterministic and unitary, but are rather augmented with probabilistic byproducts. Yet, the main algorithmic use of MBQC so far has been to completely counteract this probabilistic nature in order to simulate unitary computations expressed in the circuit model. In this work, we propose designing MBQC algorithms that embrace this inherent randomness and treat the random byproducts in MBQC as a resource for computation. As a natural application where randomness can be beneficial, we consider generative modeling, a task in machine learning centered around generating complex probability distributions. To address this task, we propose a variational MBQC algorithm equipped with control parameters that allow one to directly adjust the degree of randomness admitted in the computation. Our numerical findings indicate that this additional randomness can lead to significant gains in learning performance in certain generative modeling tasks. These results highlight the potential advantages of exploiting the inherent randomness of MBQC and motivate further research into MBQC-based algorithms.
In two and three dimensions, we design and analyze a posteriori error estimators for the mixed Stokes eigenvalue problem. The unknowns in this mixed formulation are the pseudostress, velocity and pressure. With a lowest-order mixed finite element scheme, together with a postprocessing technique, we prove that the proposed estimator is reliable and efficient. We illustrate the results with several numerical tests in two and three dimensions in order to assess the performance of the estimator.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
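Shepard's universal law of generalization states that perceived similarity decays exponentially with distance in a psychological space, $s(x, y) = e^{-c\, d(x, y)}$. A minimal sketch of this formula (our illustration; the function and parameter names are not from the paper):

```python
import math

# Shepard's universal law of generalization: similarity between two stimuli
# decays exponentially with their distance in a psychological space.
def shepard_similarity(x, y, c=1.0):
    """s(x, y) = exp(-c * d(x, y)), with Euclidean distance d
    and sensitivity parameter c > 0."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-c * d)

# Identical stimuli are maximally similar; similarity falls off with distance.
s_same = shepard_similarity((0.0, 0.0), (0.0, 0.0))
s_near = shepard_similarity((0.0, 0.0), (1.0, 0.0))
s_far = shepard_similarity((0.0, 0.0), (3.0, 4.0))
```

In the theory above, an explainee's interpretation of a saliency map would be modeled by comparing it, via such a similarity measure, to the explanations the explainee would give.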