In this work, we consider the problem of building distribution-free prediction intervals with finite-sample conditional coverage guarantees. Conformal prediction (CP) is an increasingly popular framework for building prediction intervals with distribution-free guarantees, but these guarantees ensure only marginal coverage: the probability of coverage is averaged over a random draw of both the training and test data, so there may be substantial undercoverage within certain subpopulations. Ideally, we would instead want local coverage guarantees that hold for each possible value of the test point's features. While the impossibility of achieving pointwise local coverage is well established in the literature, many variants of the conformal prediction algorithm show favorable local coverage properties empirically. Relaxing the definition of local coverage allows for a theoretical understanding of this empirical phenomenon. We aim to bridge this gap between theoretical validation and empirical performance by proving achievable and interpretable guarantees for a relaxed notion of local coverage. Building on the localized CP method of Guan (2023) and the weighted CP framework of Tibshirani et al. (2019), we propose a new method, randomly-localized conformal prediction (RLCP), which returns prediction intervals that are not only marginally valid but also achieve a relaxed local coverage guarantee, as well as coverage guarantees under covariate shift. Through a series of simulations and real data experiments, we validate these coverage guarantees of RLCP and compare it with other local conformal prediction methods.
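As context for the coverage notions discussed above, here is a minimal sketch of standard split conformal prediction, the marginally valid baseline that localized methods build on; the toy data, point predictor, and coverage level are illustrative assumptions, not the paper's RLCP procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): Y = X + Gaussian noise
n = 2000
X = rng.uniform(0, 5, n)
Y = X + rng.normal(0, 0.5, n)

# Split into a proper training set and a calibration set
X_tr, Y_tr, X_cal, Y_cal = X[:1000], Y[:1000], X[1000:], Y[1000:]

# Fit any point predictor on the training split (here: least squares)
slope, intercept = np.polyfit(X_tr, Y_tr, deg=1)
predict = lambda x: slope * x + intercept

# Conformity scores on the calibration split: absolute residuals
scores = np.abs(Y_cal - predict(X_cal))

# Finite-sample-valid quantile for 90% marginal coverage
alpha = 0.1
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# Marginal prediction interval for a new feature value
x_new = 2.5
interval = (predict(x_new) - q, predict(x_new) + q)
```

Note that the interval has the same half-width `q` for every feature value, which is exactly why marginal coverage can be uneven across subpopulations, the gap that localized methods target.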
In semi-supervised learning, the prevailing understanding suggests that observing additional unlabeled samples improves estimation accuracy for linear parameters only in the case of model misspecification. This paper challenges this notion, demonstrating its inaccuracy in high dimensions. Initially focusing on a dense scenario, we introduce robust semi-supervised estimators for the regression coefficient without relying on sparse structures in the population slope. Even when the true underlying model is linear, we show that leveraging information from large-scale unlabeled data improves both estimation accuracy and inference robustness. Moreover, we propose semi-supervised methods with further enhanced efficiency in scenarios with a sparse linear slope. Diverging from the standard semi-supervised literature, we also allow for covariate shift. The performance of the proposed methods is illustrated through extensive numerical studies, including simulations and a real-data application to the AIDS Clinical Trials Group Protocol 175 (ACTG175).
In this paper, we prove the strong consistency of an estimator based on the truncated singular value decomposition for a multivariate errors-in-variables linear regression model with collinearity. This result extends Gleser's proof of the strong consistency of total least squares (TLS) solutions to the case with modern rank constraints. While the usual discussion of consistency in the absence of solution uniqueness deals with the minimal-norm solution, the contribution of this study is to develop a theory that shows the strong consistency of a set of solutions. The proof is based on properties of orthogonal projections, specifically properties of the Rayleigh-Ritz procedure for computing eigenvalues, which makes it suitable for problems in which some row vectors of the matrices do not contain noise. Accordingly, this paper gives a proof for the regression model with the above condition on the row vectors, resulting in a natural generalization of the strong consistency of the standard TLS estimator.
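For intuition, the following sketch computes the classical TLS estimator from the smallest right singular vector of the augmented matrix $[A \mid b]$; the data sizes and noise level are illustrative, and this is the generic uniquely-solvable case, not the collinear, set-valued setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Errors-in-variables data (illustrative): both A and b observed with noise
n, p = 500, 3
A_true = rng.normal(size=(n, p))
x_true = np.array([1.0, -2.0, 0.5])
A = A_true + 0.05 * rng.normal(size=(n, p))
b = A_true @ x_true + 0.05 * rng.normal(size=n)

# Classical TLS: right singular vector of [A | b] for the smallest
# singular value (numpy returns singular values in descending order)
Z = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]
x_tls = -v[:p] / v[p]   # assumes v[p] != 0, i.e. a generic, uniquely solvable problem
```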
In this paper, we provide bounds in Wasserstein and total variation distances between the distributions of the successive iterates of two functional autoregressive processes with isotropic Gaussian noise of the form $Y_{k+1} = \mathrm{T}_\gamma(Y_k) + \sqrt{\gamma\sigma^2} Z_{k+1}$ and $\tilde{Y}_{k+1} = \tilde{\mathrm{T}}_\gamma(\tilde{Y}_k) + \sqrt{\gamma\sigma^2} \tilde{Z}_{k+1}$. More precisely, we give non-asymptotic bounds on $\rho(\mathcal{L}(Y_{k}),\mathcal{L}(\tilde{Y}_k))$, where $\rho$ is an appropriate weighted Wasserstein distance or a $V$-distance, uniformly in the parameter $\gamma$, and on $\rho(\pi_{\gamma},\tilde{\pi}_{\gamma})$, where $\pi_{\gamma}$ and $\tilde{\pi}_{\gamma}$ are the respective stationary measures of the two processes. The class of processes considered encompasses the Euler-Maruyama discretization of Langevin diffusions and its variants. The bounds we derive are of order $\gamma$ as $\gamma \to 0$. To obtain our results, we rely on the construction of a discrete sticky Markov chain $(W_k^{(\gamma)})_{k \in \mathbb{N}}$ that bounds the distance between an appropriate coupling of the two processes. We then establish stability and quantitative convergence results for this chain, uniformly in $\gamma$. In addition, we show that it converges in distribution to the continuous sticky process studied in previous work. Finally, we apply our results to Bayesian inference of ODE parameters and numerically illustrate them on two particular problems.
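To make the recursion concrete, the following sketch simulates two such autoregressive chains whose drift maps $\mathrm{T}_\gamma$ are Euler-Maruyama steps for two slightly different Langevin diffusions, and compares their long-run behaviour empirically; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

gamma, sigma2 = 0.01, 2.0       # step size and noise scale (illustrative)
K, n_chains = 2000, 5000

# Drift maps T_gamma(y) = y - gamma * grad U(y): Euler-Maruyama steps for
# Langevin diffusions targeting N(0, 1) and N(0.1, 1) respectively
T = lambda y: y - gamma * y
T_tilde = lambda y: y - gamma * (y - 0.1)

Y = np.zeros(n_chains)
Y_tilde = np.zeros(n_chains)
for _ in range(K):
    Y = T(Y) + np.sqrt(gamma * sigma2) * rng.normal(size=n_chains)
    Y_tilde = T_tilde(Y_tilde) + np.sqrt(gamma * sigma2) * rng.normal(size=n_chains)

# The stationary means differ by roughly the drift perturbation (0.1),
# consistent with a control on the distance between the two laws that is
# uniform in the step size gamma
gap = Y_tilde.mean() - Y.mean()
```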
Statistical analysis of extremes can be used to predict the probability of future extreme events, such as large rainfalls or devastating windstorms. The quality of these forecasts can be measured through scoring rules. Locally scale invariant scoring rules give equal importance to the forecasts at different locations regardless of differences in the prediction uncertainty. This is a useful feature when computing average scores but can be an unnecessarily strict requirement when mostly concerned with extremes. We propose the concept of local weight-scale invariance, describing scoring rules fulfilling local scale invariance in a certain region of interest, and as a special case local tail-scale invariance, for large events. Moreover, a new version of the weighted Continuous Ranked Probability Score (wCRPS), called the scaled wCRPS (swCRPS), that possesses this property is developed and studied. The score is a suitable alternative for scoring extreme-value models over areas with varying scales of extreme events, and we derive explicit formulas for the score for the Generalised Extreme Value distribution. The scoring rules are compared through simulation, and their usage is illustrated in the modelling of extreme water levels and annual maximum rainfalls, and in a non-extreme forecasting application to the prediction of air pollution.
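As background for the weighting idea, the following sketch numerically evaluates a threshold-weighted CRPS, $\int w(z)\,(F(z) - \mathbf{1}\{y \le z\})^2\,dz$ with an upper-tail indicator weight, for a Gaussian forecast. This is the generic twCRPS, not the paper's scaled swCRPS, and all parameter values are illustrative.

```python
import numpy as np
from math import erf, sqrt

# Gaussian forecast N(mu, s^2), observation y, tail threshold t (illustrative)
mu, s, t, y = 0.0, 1.0, 1.0, 2.0
F = lambda z: 0.5 * (1 + erf((z - mu) / (s * sqrt(2))))  # forecast CDF

# Riemann-sum approximation of the integral on a fine grid
z = np.linspace(-10, 10, 20001)
dz = z[1] - z[0]
Fz = np.array([F(zi) for zi in z])
obs = (z >= y).astype(float)

crps = np.sum((Fz - obs) ** 2) * dz           # unweighted CRPS
w = (z >= t).astype(float)                    # weight: upper tail only
wcrps = np.sum(w * (Fz - obs) ** 2) * dz      # threshold-weighted CRPS
```

The indicator weight discards the score contribution below the threshold, so the weighted score is never larger than the unweighted one.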
We construct an efficient class of increasingly high-order (up to 17th-order) essentially non-oscillatory schemes with multi-resolution (ENO-MR) for solving hyperbolic conservation laws. The candidate stencils for constructing ENO-MR schemes range from a first-order one-point stencil up to the designed very-high-order stencil. The proposed ENO-MR schemes adopt a very simple and efficient strategy that requires only the computation of the highest-order derivatives of part of the candidate stencils. Besides simplicity and high efficiency, ENO-MR schemes are completely parameter-free and essentially scale-invariant. Theoretical analysis and numerical computations show that ENO-MR schemes achieve the designed high-order convergence in smooth regions, which may contain high-order critical points (local extrema), and retain the ENO property for strong shocks. In addition, ENO-MR schemes capture complex flow structures very well.
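The adaptive-stencil idea can be illustrated with the classic ENO selection rule: grow the stencil toward the side with the smaller Newton divided difference, so that it avoids crossing a discontinuity. This sketch is the original ENO procedure for a point-value stencil, not the multi-resolution strategy of the paper, and the grid and jump location are illustrative.

```python
import numpy as np

def eno_stencil(x, f, i, r):
    """Grow an ENO stencil of r+1 points starting from node i, always
    extending toward the side with the smaller Newton divided difference."""
    dd = {(j, j): f[j] for j in range(len(f))}   # memoised divided differences

    def divdiff(a, b):
        if (a, b) not in dd:
            dd[(a, b)] = (divdiff(a + 1, b) - divdiff(a, b - 1)) / (x[b] - x[a])
        return dd[(a, b)]

    left = right = i
    for _ in range(r):
        d_left = abs(divdiff(left - 1, right)) if left > 0 else np.inf
        d_right = abs(divdiff(left, right + 1)) if right < len(f) - 1 else np.inf
        if d_left < d_right:
            left -= 1        # the leftward extension is smoother
        else:
            right += 1
    return left, right

x = np.linspace(0.0, 1.0, 11)
f = (x > 0.55).astype(float)            # step discontinuity between x=0.5 and x=0.6
stencil = eno_stencil(x, f, i=3, r=3)   # stays on the smooth side of the jump
```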
In this paper we introduce a novel statistical framework based on the first two quantile conditional moments that facilitates effective goodness-of-fit testing for one-sided L\'evy distributions. The scale-ratio framework introduced here extends our previous results, in which we showed how to extract unique distribution features using the conditional variance ratio for the generic class of $\alpha$-stable distributions. We show that the conditional moment-based goodness-of-fit statistics are a good alternative to other methods introduced in the literature tailored to one-sided L\'evy distributions. The usefulness of our approach is verified through an empirical power study. For completeness, we also derive the asymptotic distributions of the test statistics and show how to apply our framework to real data.
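The scale-invariance idea behind such a framework can be illustrated with a simple empirical statistic: a ratio of variances conditional on inter-quantile sets is unchanged when the sample is rescaled. The quantile levels and the heavy-tailed toy sample below are illustrative choices, not the paper's test statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

def cond_var_ratio(x):
    """Ratio of sample variances conditional on two inter-quantile sets."""
    q1, q2, q3 = np.quantile(x, [0.25, 0.5, 0.75])
    v_low = x[(x >= q1) & (x < q2)].var()
    v_high = x[(x >= q2) & (x < q3)].var()
    return v_high / v_low

# Heavy-tailed one-sided toy sample (an illustrative stand-in)
x = rng.pareto(3.0, size=200000)

r1 = cond_var_ratio(x)
r2 = cond_var_ratio(10.0 * x)   # rescaling the sample leaves the ratio unchanged
```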
In this paper, we propose a robust low-order stabilization-free virtual element method on quadrilateral meshes for linear elasticity that is based on the stress-hybrid principle. We refer to this approach as the Stress-Hybrid Virtual Element Method (SH-VEM). In this method, the Hellinger-Reissner variational principle is adopted, wherein both the equilibrium equations and the strain-displacement relations are variationally enforced. We consider small-strain deformations of linear elastic solids in the compressible and near-incompressible regimes over quadrilateral (convex and nonconvex) meshes. Within an element, the displacement field is approximated as a linear combination of canonical shape functions that are \textit{virtual}. The stress field, similar to the stress-hybrid finite element method of Pian and Sumihara, is represented using a linear combination of symmetric tensor polynomials. A 5-parameter expansion of the stress field is used in each element, with stress transformation equations applied on distorted quadrilaterals. In the variational statement of the strain-displacement relations, the divergence theorem is invoked to express the stress coefficients in terms of the nodal displacements. This results in a formulation with solely the nodal displacements as unknowns. Numerical results are presented for several benchmark problems from linear elasticity. We show that SH-VEM is free of volumetric and shear locking, and it converges optimally in the $L^2$ norm and energy seminorm of the displacement field, and in the $L^2$ norm of the hydrostatic stress.
We propose a new reduced order modeling strategy for tackling parametrized Partial Differential Equations (PDEs) with linear constraints, in particular Darcy flow systems in which the constraint is given by mass conservation. Our approach employs classical neural network architectures and supervised learning, but it is constructed in such a way that the resulting Reduced Order Model (ROM) is guaranteed to satisfy the linear constraints exactly. The procedure is based on a splitting of the PDE solution into a particular solution satisfying the constraint and a homogeneous solution. The homogeneous solution is approximated by mapping a suitable potential function, generated by a neural network model, onto the kernel of the constraint operator; for the particular solution, instead, we propose an efficient spanning tree algorithm. Within this paradigm, we present three approaches, obtained by exploring different choices of the potential spaces: from empirical ones, derived via Proper Orthogonal Decomposition (POD), to more abstract ones based on differential complexes. All proposed approaches combine computational efficiency with rigorous mathematical interpretation, thus guaranteeing the explainability of the model outputs. To demonstrate the efficacy of the proposed strategies and to emphasize their advantages over vanilla black-box approaches, we present a series of numerical experiments on fluid flows in porous media, ranging from mixed-dimensional problems to nonlinear systems. This research lays the foundation for further exploration and development in the realm of model order reduction, potentially unlocking new capabilities and solutions in computational geosciences and beyond.
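The spanning-tree construction of a particular solution can be sketched on a small graph: set the flux to zero on non-tree edges, then sweep the tree from the leaves to the root, routing each node's residual divergence through its parent edge. The graph, orientation convention, and source data below are illustrative, not the paper's discretization.

```python
from collections import deque
import numpy as np

# Illustrative oriented graph: edge k = (tail, head); nodes 0..4
edges = [(0, 1), (1, 2), (1, 3), (3, 4), (0, 3)]
n_nodes = 5

# Prescribed net outflow at each node (must sum to zero for solvability)
f = np.array([2.0, -1.0, 1.0, -3.0, 1.0])

# BFS spanning tree rooted at node 0
adj = {v: [] for v in range(n_nodes)}
for k, (a, b) in enumerate(edges):
    adj[a].append((b, k))
    adj[b].append((a, k))
parent, order, queue = {0: None}, [], deque([0])
while queue:
    v = queue.popleft()
    order.append(v)
    for w, k in adj[v]:
        if w not in parent:
            parent[w] = (v, k)
            queue.append(w)

# Particular solution: zero flux off the tree; sweep leaves-to-root,
# pushing each node's residual divergence through its parent edge
u = np.zeros(len(edges))
residual = f.copy()
for v in reversed(order[1:]):                   # children before parents
    p, k = parent[v]
    sign = 1.0 if edges[k][0] == v else -1.0    # +1 if edge k leaves node v
    u[k] += sign * residual[v]
    residual[p] += residual[v]
    residual[v] = 0.0

# Node-edge incidence matrix B, so (B @ u)[v] is the net outflow at node v
B = np.zeros((n_nodes, len(edges)))
for k, (a, b) in enumerate(edges):
    B[a, k], B[b, k] = 1.0, -1.0
```

The sweep solves the constraint exactly in one pass because restricting to tree edges makes the incidence system square and triangular in the leaf-to-root order.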
Missing values are unavoidable in many applications of machine learning and present challenges both during training and at test time. When variables are missing in recurring patterns, fitting separate pattern submodels has been proposed as a solution. However, fitting models independently does not make efficient use of all available data. Conversely, fitting a single shared model to the full data set relies on imputation, which often leads to biased results when missingness depends on unobserved factors. We propose an alternative approach, called sharing pattern submodels, which i) makes predictions that are robust to missing values at test time, ii) maintains or improves the predictive power of pattern submodels, and iii) has a short description, enabling improved interpretability. Parameter sharing is enforced through sparsity-inducing regularization which we prove leads to consistent estimation. Finally, we give conditions for when a sharing model is optimal, even when both missingness and the target outcome depend on unobserved variables. Classification and regression experiments on synthetic and real-world data sets demonstrate that our models achieve a favorable tradeoff between pattern specialization and information sharing.
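The parameter-sharing mechanism can be sketched with a fused-lasso-style L1 penalty on the difference between two pattern submodels' coefficients, minimized by proximal gradient descent. The data, penalty form, and step sizes below are illustrative assumptions, not the paper's estimator or its regularizer.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two missingness patterns whose true slopes agree except in the last coordinate
n, p = 400, 3
beta1_true = np.array([1.0, 2.0, -1.0])
beta2_true = np.array([1.0, 2.0, 0.0])
X1 = rng.normal(size=(n, p)); y1 = X1 @ beta1_true + 0.1 * rng.normal(size=n)
X2 = rng.normal(size=(n, p)); y2 = X2 @ beta2_true + 0.1 * rng.normal(size=n)

# Joint least squares with an L1 penalty lam * ||b1 - b2||_1 that induces
# exact sharing; solved by proximal gradient descent, where the prox step
# soft-thresholds the coefficient difference
lam, step, iters = 0.05, 1e-3, 5000
b1, b2 = np.zeros(p), np.zeros(p)
for _ in range(iters):
    b1 -= step * X1.T @ (X1 @ b1 - y1) / n
    b2 -= step * X2.T @ (X2 @ b2 - y2) / n
    d = b1 - b2
    d = np.sign(d) * np.maximum(np.abs(d) - 2 * step * lam, 0.0)  # prox step
    mid = (b1 + b2) / 2
    b1, b2 = mid + d / 2, mid - d / 2
```

Coefficients whose per-pattern estimates differ only by noise fuse exactly, while the genuinely different coefficient stays distinct (shrunk toward sharing by about `2 * lam`).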
Refinement calculus provides a structured framework for the progressive and modular development of programs, ensuring their correctness throughout the refinement process. This paper introduces a refinement calculus tailored for quantum programs. To this end, we first study the partial correctness of nondeterministic programs within a quantum while-language featuring prescription statements. Orthogonal projectors, which are in one-to-one correspondence with subspaces of the state Hilbert space, are taken as assertions for quantum states. In addition to the denotational semantics, where a nondeterministic program is associated with a set of trace-nonincreasing super-operators, we also present their semantics as transforming a postcondition to the weakest liberal precondition and, conversely, a precondition to the strongest postcondition. Subsequently, refinement rules are introduced based on these dual semantics, offering a systematic approach to the incremental development of quantum programs applicable in various contexts. To illustrate the practical application of the refinement calculus, we examine examples such as the implementation of a $Z$-rotation gate, the repetition code, and the quantum-to-quantum Bernoulli factory. Furthermore, we present Quire, a Python-based interactive prototype tool that provides practical support to programmers engaged in the stepwise development of correct quantum programs.
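For the special case of a unitary (deterministic, trace-preserving) program $U$, the weakest liberal precondition of a projector postcondition $P$ has the closed form $U^\dagger P U$. The sketch below checks this for a Hadamard gate, a stand-in example rather than the paper's $Z$-rotation or repetition-code case studies.

```python
import numpy as np

# Assertions are projectors: a state |psi> satisfies P iff P|psi> = |psi>.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # the program: one Hadamard
P = np.array([[1.0, 0.0], [0.0, 0.0]])                  # postcondition: span{|0>}

# Weakest liberal precondition of P under the unitary program H
wlp = H.conj().T @ P @ H                                # equals |+><+|

# A state satisfying the precondition is mapped into the postcondition subspace
plus = np.array([1.0, 1.0]) / np.sqrt(2)
out = H @ plus                                          # equals |0>
```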