Functional Ordinary Kriging is the most widely used method to predict a curve at a given spatial point. However, quantifying the uncertainty of such predictions remains an open issue. In this article, a distribution-free prediction method based on two different modulation functions and two conformity scores is proposed. Through simulations and benchmark data analyses, we demonstrate the advantages of our approach over standard methods.
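The distribution-free flavor of such prediction bands can be illustrated with a generic split-conformal sketch (assumptions: synthetic sinusoidal curves, a sup-norm conformity score around the mean curve; the paper's modulation functions and kriging predictor are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional data: each "curve" is sampled on a common grid.
grid = np.linspace(0, 1, 50)
curves = np.sin(2 * np.pi * grid) + 0.2 * rng.standard_normal((200, grid.size))

# Split-conformal band around the mean curve (illustrative score:
# sup-norm residual; the paper's scores and modulation functions differ).
train, calib = curves[:100], curves[100:]
center = train.mean(axis=0)
scores = np.max(np.abs(calib - center), axis=1)   # conformity scores

alpha = 0.1
k = int(np.ceil((1 - alpha) * (len(calib) + 1)))  # conformal quantile index
q = np.sort(scores)[k - 1]
lower, upper = center - q, center + q             # (1 - alpha) prediction band

new_curve = np.sin(2 * np.pi * grid) + 0.2 * rng.standard_normal(grid.size)
covered = np.all((new_curve >= lower) & (new_curve <= upper))
print(q, covered)
```

The guarantee is distribution-free in the same sense as the abstract: the band's coverage relies only on exchangeability of the curves, not on a parametric model.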
We consider a class of stochastic gradient optimization schemes. Assuming that the objective function is strongly convex, we prove weak error estimates, uniform in time, between the solution of the numerical scheme and the solutions of continuous-time modified (or high-resolution) differential equations at first and second order in the time-step size. At first order, the modified equation is deterministic, whereas at second order it is stochastic and depends on a modified objective function. We go beyond existing results, where the error estimates were established only on finite time intervals and were not uniform in time. This allows us to provide a rigorous complexity analysis of the method in the regimes of large time and small time-step size.
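The modified-equation idea can be seen in a minimal deterministic sketch (assuming plain gradient descent on the quadratic $f(x) = x^2/2$, not the paper's stochastic schemes): the first-order modified flow tracks the iterates to a higher order in the step size than the naive gradient flow.

```python
import numpy as np

# Gradient descent on f(x) = x^2/2 (strongly convex), step size h.
# For this f, the first-order modified flow
#   x' = -f'(x) - (h/2) f''(x) f'(x) = -(1 + h/2) x
# tracks the iterates to O(h^2), while plain gradient flow x' = -x
# is only O(h) accurate.
h, steps, x0 = 0.1, 50, 1.0

k = np.arange(steps + 1)
iterates = x0 * (1 - h) ** k                 # exact GD iterates
t = k * h
flow = x0 * np.exp(-t)                       # plain gradient flow
modified = x0 * np.exp(-(1 + h / 2) * t)     # first-order modified flow

err_flow = np.max(np.abs(iterates - flow))
err_modified = np.max(np.abs(iterates - modified))
print(err_flow, err_modified)  # modified flow is markedly closer
```

Uniformity in time is visible here because both errors stay bounded as $k \to \infty$ (all three trajectories decay to the minimizer); the paper's contribution is proving such uniform estimates in the stochastic, second-order setting.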
The local dependence function is important in many applications of probability and statistics. We extend the bivariate local dependence function introduced by Bairamov and Kotz (2000), and further developed by Bairamov et al. (2003), to trivariate and multivariate local dependence functions characterizing the dependence among three or more random variables at a given point. The definition and properties of the trivariate local dependence function are discussed. An example of a trivariate local dependence function for an underlying trivariate normal distribution is presented, with graphs and tables of numerical values. The multivariate extension, which can characterize the dependence among multiple random variables at a specific point, is also discussed.
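As an illustration of what "dependence at a point" means, here is a generic kernel-weighted local correlation estimate for a trivariate normal sample (an illustrative stand-in only; the Bairamov--Kotz function is defined differently, and the bandwidth and sample sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from a trivariate normal with pairwise correlation 0.6.
rho = 0.6
cov = np.full((3, 3), rho) + (1 - rho) * np.eye(3)
X = rng.multivariate_normal(np.zeros(3), cov, size=20000)

def local_corr(X, i, j, point, bandwidth=0.7):
    """Kernel-weighted correlation of coordinates i, j near `point`.
    Generic localized-dependence estimate for illustration only;
    not the Bairamov-Kotz local dependence function."""
    w = np.exp(-np.sum((X - point) ** 2, axis=1) / (2 * bandwidth ** 2))
    w /= w.sum()
    mi, mj = w @ X[:, i], w @ X[:, j]
    cij = w @ ((X[:, i] - mi) * (X[:, j] - mj))
    cii = w @ ((X[:, i] - mi) ** 2)
    cjj = w @ ((X[:, j] - mj) ** 2)
    return cij / np.sqrt(cii * cjj)

print(local_corr(X, 0, 1, np.zeros(3)))  # dependence of X1, X2 near the origin
```

Unlike a single global correlation coefficient, such a localized quantity can vary across the support, which is exactly the kind of pointwise behavior the local dependence function is designed to capture.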
In this paper, we propose high-order numerical methods to solve a 2D advection-diffusion equation in the highly oscillatory regime. We use an integrator strategy that allows the construction of arbitrarily high-order schemes, leading to an accurate approximation of the solution without any time-step restriction. This paper focuses on the multiscale challenges in time of the problem, which come from the velocity, an $\varepsilon$-periodic function whose expression is explicitly known. $\varepsilon$-uniform third-order-in-time numerical approximations are obtained. For the space discretization, this strategy is combined with high-order finite difference schemes. Numerical experiments show that the proposed methods achieve the expected order of accuracy, as validated by several tests across diverse domains and boundary conditions. The novelty of the paper consists in introducing a numerical scheme that is high-order accurate in space and time, with particular attention to the dependence on a small parameter in the time scale. The high order in space is obtained by enlarging the interpolation stencil already established in [44], and further refined in [46], with special emphasis on the boundary of the square domain, especially when a Dirichlet condition is assigned. In that case, we compute an \textit{ad hoc} Taylor expansion of the solution to ensure that there is no degradation of the accuracy order at the boundary. On the other hand, the high accuracy in time is obtained by extending the work proposed in [19]. The combination of high-order accuracy in both space and time is particularly significant due to the presence of two small parameters, $\delta$ and $\varepsilon$, in space and time, respectively.
In many real-world optimization problems, we have prior information about what objective function values are achievable. In this paper, we study the scenario in which we have either exact knowledge of the minimum value or a possibly inexact lower bound on it. We propose bound-aware Bayesian optimization (BABO), a Bayesian optimization method that uses a new surrogate model and acquisition function to exploit such prior information. Specifically, we present SlogGP, a new surrogate model that incorporates the bound information, and adapt the Expected Improvement (EI) acquisition function accordingly. Empirical results on a variety of benchmarks demonstrate the benefit of taking prior information about the optimal value into account and show that the proposed approach significantly outperforms existing techniques. Furthermore, even in the absence of prior information on the bound, the proposed SlogGP surrogate model still performs better than the standard GP model in most cases, which we attribute to its greater expressiveness.
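For reference, a minimal implementation of the vanilla EI criterion for minimization, the starting point that BABO adapts to its SlogGP surrogate (the adapted version is not reproduced here):

```python
import math

def expected_improvement(mu, sigma, best):
    """Standard EI for minimization at a point with GP posterior mean `mu`
    and standard deviation `sigma`, given incumbent value `best`."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    return (best - mu) * Phi + sigma * phi

# A point predicted below the incumbent with some uncertainty scores
# higher than one predicted above it.
a = expected_improvement(0.2, 0.3, 0.5)
b = expected_improvement(0.8, 0.3, 0.5)
print(a, b)
```

The closed form makes clear why a known lower bound is valuable: if the surrogate's predictive distribution is reshaped so that it cannot fall below the bound (as SlogGP does), the trade-off between `mu` and `sigma` in this formula changes.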
In this paper, we introduce a highly accurate and efficient numerical solver for the radial Kohn--Sham equation. The equation is discretized using a high-order finite element method, with its performance further improved by a parameter-free moving mesh technique. This approach greatly reduces the number of elements required to achieve the desired precision. In practice, the mesh redistribution takes no more than three steps, so the algorithm remains computationally efficient. Remarkably, with at most $13$ elements, we successfully reproduce the NIST database results for atoms with atomic numbers ranging from $1$ to $92$.
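The moving-mesh ingredient can be sketched with a classic equidistribution iteration (a generic de Boor-style sketch with an arc-length monitor and a hydrogen-like radial orbital; the paper's parameter-free technique for the Kohn--Sham problem is more involved):

```python
import numpy as np

# One equidistribution step of a moving-mesh strategy: nodes are moved
# so that the integral of a monitor function M is equal over every element.
def equidistribute(x, monitor):
    mid = 0.5 * (x[:-1] + x[1:])
    M = monitor(mid)                                          # M > 0 on midpoints
    cum = np.concatenate(([0.0], np.cumsum(M * np.diff(x))))  # cumulative monitor
    targets = np.linspace(0.0, cum[-1], len(x))
    return np.interp(targets, cum, x)                         # invert the map

# Example: cluster nodes where a hydrogen-like orbital u(r) = r*exp(-r)
# varies fastest, via the arc-length monitor sqrt(1 + u'(r)^2).
x = np.linspace(0.0, 10.0, 14)                # 13 elements, the abstract's budget
du = lambda r: np.exp(-r) * (1 - r)           # u'(r)
for _ in range(3):                            # <= 3 redistribution steps
    x = equidistribute(x, lambda r: np.sqrt(1 + du(r) ** 2))
print(np.round(x, 3))
```

After the redistribution, elements near the origin (where the orbital varies fastest) are smaller than those in the tail, which is what lets a fixed, tiny element budget reach high precision.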
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
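For orientation, in the classical pointwise setting a functional $\Phi$ is called maxitive when (standard definition; the article's contribution is the extension of this notion to general preorders):

```latex
\Phi(f \vee g) = \max\{\Phi(f),\, \Phi(g)\},
```

where $f \vee g$ denotes the pointwise maximum. This sup-preserving structure is what makes maxitive functionals natural for worst-case evaluations.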
We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for these estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates, due to a slower rate of convergence of the parameter estimators. Nevertheless, the suggested change point test is asymptotically distribution-free and consistent against one-change-point alternatives. Under such alternatives, we also show consistency of a change point estimator.
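A CUSUM-type statistic built from sequential empirical distribution functions can be sketched as follows (scalar synthetic data standing in for residuals; the paper's statistic is computed from estimated residuals of the functional linear model, where the expansions are the hard part):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sup over time s and level z of the difference between the sequential
# ECDF sum up to s and its proportional share of the full-sample ECDF.
def cusum_ecdf_stat(e):
    n = len(e)
    ind = (e[:, None] <= e[None, :]).astype(float)  # 1{e_i <= z_j}
    partial = np.cumsum(ind, axis=0)                # sequential ECDF counts
    total = partial[-1]
    k = np.arange(1, n + 1)[:, None]
    return np.max(np.abs(partial - (k / n) * total)) / np.sqrt(n)

no_change = rng.standard_normal(300)
with_change = np.concatenate([rng.standard_normal(150),
                              rng.standard_normal(150) + 1.5])
s0 = cusum_ecdf_stat(no_change)
s1 = cusum_ecdf_stat(with_change)
print(s0, s1)  # the statistic is much larger when the distribution shifts
```

Under the null the statistic stays bounded in probability (this is where the asymptotically distribution-free limit enters), while a one-change-point alternative makes it diverge, giving consistency.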
Privacy protection methods, such as differentially private mechanisms, introduce noise into resulting statistics which often produces complex and intractable sampling distributions. In this paper, we propose a simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests, which builds on the work of Xie and Wang (2022). We show that this methodology is applicable to a wide variety of private inference problems, appropriately accounts for biases introduced by privacy mechanisms (such as by clamping), and improves over other state-of-the-art inference methods such as the parametric bootstrap in terms of the coverage and type I error of the private inference. We also develop significant improvements and extensions for the repro sample methodology for general models (not necessarily related to privacy), including 1) modifying the procedure to ensure guaranteed coverage and type I errors, even accounting for Monte Carlo error, and 2) proposing efficient numerical algorithms to implement the confidence intervals and $p$-values.
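The test-inversion logic behind repro samples can be sketched for a toy privatized mean (heavily simplified: fixed noise seeds reused for every candidate parameter, a Laplace mechanism, no clamping or bias correction; all constants here are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: a sample mean released through a Laplace mechanism.
n, scale, alpha = 100, 0.5, 0.05
data = rng.normal(1.0, 1.0, n)                      # true mean is 1.0
released = data.mean() + rng.laplace(0, scale / n)  # privatized statistic

# Repro samples: draw the randomness once, reuse it for every candidate mu.
seeds_z = rng.standard_normal(2000)
seeds_lap = rng.laplace(0, scale / n, 2000)

accepted = []
for mu in np.linspace(0.0, 2.0, 201):
    # Regenerate the privatized statistic under candidate mu with the same seeds.
    repro = mu + seeds_z / np.sqrt(n) + seeds_lap
    lo, hi = np.quantile(repro, [alpha / 2, 1 - alpha / 2])
    if lo <= released <= hi:
        accepted.append(mu)

ci = (min(accepted), max(accepted))
print(ci)
```

Because the simulation regenerates the full pipeline, including the privacy noise, the interval automatically reflects the intractable sampling distribution that closed-form methods struggle with; the paper's refinements additionally control Monte Carlo error and mechanism-induced bias.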
We prove that in the algebraic metacomplexity framework, the decomposition of metapolynomials into their isotypic components can be implemented efficiently, namely with only a quasipolynomial blowup in the circuit size. This means that many existing algebraic complexity lower bound proofs can be efficiently converted into isotypic lower bound proofs via highest weight metapolynomials, a notion studied in geometric complexity theory. In the context of algebraic natural proofs, our result means that without loss of generality algebraic natural proofs can be assumed to be isotypic. Our proof is built on the Poincar\'e--Birkhoff--Witt theorem for Lie algebras and on Gelfand--Tsetlin theory, for which we give the necessary comprehensive background.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
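Shepard's law, the formal core of the comparison step, says that generalization strength decays exponentially with distance in psychological similarity space. A minimal sketch (the paper's full model additionally conditions on the explainee's own saliency-based explanations; the sensitivity parameter is illustrative):

```python
import numpy as np

# Shepard's universal law of generalization: the probability that a
# response learned for stimulus s generalizes to stimulus t decays
# exponentially with their distance in psychological similarity space.
def generalization(s, t, sensitivity=1.0):
    d = np.linalg.norm(np.asarray(s, float) - np.asarray(t, float))
    return np.exp(-sensitivity * d)

# A nearby point in similarity space inherits the response more strongly.
p_near = generalization([0, 0], [0.5, 0])
p_far = generalization([0, 0], [3.0, 0])
print(p_near, p_far)
```

In the theory, the "stimuli" are explanations: an AI explanation close (in similarity space) to the explanation the human would give leads the human to predict the AI behaves as they would.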