Existing frameworks for probabilistic inference assume that the quantity of interest is the parameter of a posited statistical model. In machine learning applications, however, there is often no statistical model or parameter; the quantity of interest is a statistical functional, a feature of the underlying distribution. Model-based methods can handle such problems only indirectly, via marginalization from a model parameter to the real quantity of interest. Here we develop a generalized inferential model (IM) framework for direct probabilistic uncertainty quantification on the quantity of interest. In particular, we construct a data-dependent, bootstrap-based possibility measure for uncertainty quantification and inference. We then prove that this new approach provides approximately valid inference, in the sense that the plausibility values assigned to hypotheses about the unknowns are asymptotically well calibrated in a frequentist sense. Among other things, this implies that confidence regions for the underlying functional derived from our proposed IM are approximately valid. The method is shown to perform well in key examples, including quantile regression, and in a personalized medicine application.
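As an illustration of the bootstrap-based idea, the following sketch builds a generic possibility (plausibility) contour for a statistical functional. The functional, data, and contour definition here are illustrative stand-ins, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)  # observed data

def functional(sample):
    # quantity of interest: a statistical functional (here, the median)
    return np.median(sample)

theta_hat = functional(x)
B = 2000
boot = np.array([functional(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])

def plausibility(theta0):
    # possibility contour: fraction of bootstrap estimates at least as far
    # from theta_hat as the hypothesized value theta0
    return float(np.mean(np.abs(boot - theta_hat) >= np.abs(theta0 - theta_hat)))
```

The contour equals 1 at the point estimate and decays for hypotheses far from it; cutting it at a level alpha gives an approximate confidence region for the functional.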
We study the problem of variance estimation in general graph-structured problems. First, we develop a linear-time estimator for the homoscedastic case that consistently estimates the variance in general graphs. We show that our estimator attains minimax rates for the chain and 2D grid graphs when the total variation of the mean signal obeys the canonical scaling. Furthermore, we provide general upper bounds on the mean squared error of the fused lasso estimator in general graphs under a moment condition and a bound on the tail behavior of the errors. These upper bounds allow us to generalize, to broader classes of distributions such as the sub-exponential, many existing results on the fused lasso that were previously known to hold only under the assumption of sub-Gaussian errors. Exploiting our upper bounds, we then study a simple total variation regularization estimator for the signal of variances in the heteroscedastic case. Our results show that this variance estimator attains minimax rates for estimating signals of bounded variation in grid graphs and, under very mild assumptions, in $K$-nearest neighbor graphs, and that it is consistent for estimating the variances in any connected graph. In addition, extensive numerical results show that our proposed estimators perform reasonably well in a variety of graph-structured models.
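The homoscedastic, linear-time idea can be sketched on a chain graph with a simple difference-based estimator over the edges; the signal, noise level, and graph below are toy choices, not the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# chain graph: edges connect consecutive nodes
mu = np.linspace(0.0, 1.0, n)        # smooth mean signal (small total variation)
sigma = 0.7
y = mu + rng.normal(scale=sigma, size=n)

# difference-based estimator over the edges of the chain:
# E[(y_i - y_j)^2] = (mu_i - mu_j)^2 + 2*sigma^2 for an edge (i, j),
# and the first term is negligible when the mean has small total variation
diffs = np.diff(y)
sigma2_hat = np.mean(diffs ** 2) / 2.0
```

Averaging squared differences over all edges takes time linear in the number of edges, which is what makes the estimator linear-time on general graphs.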
Linear mixed-effects models are considered excellent predictors of cluster-level parameters in various domains. However, previous work has shown that their performance can be seriously affected by departures from modelling assumptions. Since such departures are common in applied studies, there is a need for inferential methods that are robust, to a certain extent, to misspecifications, yet simple enough to be appealing to practitioners. We construct statistical tools for cluster-wise and simultaneous inference on mixed effects under model misspecification using a straightforward semiparametric random-effect bootstrap. In our theoretical analysis, we show that our methods are asymptotically consistent under general regularity conditions. In simulations, our intervals were robust to severe departures from model assumptions and outperformed their competitors in terms of empirical coverage probability.
The median absolute deviation is a widely used robust measure of statistical dispersion. Multiplied by a scale constant, it becomes an asymptotically consistent estimator of the standard deviation under normality. For finite samples, the scale constant should be corrected in order to obtain an unbiased estimator. The bias-correction factor depends on the sample size and on the median estimator. When the traditional sample median is used, the factor values are well known, but this approach does not provide optimal statistical efficiency. In this paper, we present bias-correction factors for the median absolute deviation based on the Harrell-Davis quantile estimator and its trimmed modification, which allow us to achieve better statistical efficiency in estimating the standard deviation. The obtained estimators are especially useful for small samples.
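A minimal sketch of the MAD-based standard deviation estimator, using the traditional sample median and the asymptotic scale constant; the small-sample bias-correction factors and the Harrell-Davis-based variants discussed above are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=3.0, size=10000)

# median absolute deviation around the sample median
mad = np.median(np.abs(x - np.median(x)))

# asymptotic scale constant under normality: 1 / Phi^{-1}(3/4) ~ 1.4826
C = 1.4826
sd_hat = C * mad
```

For small n, `C` must be replaced by a sample-size-dependent bias-correction factor, which is exactly what the paper tabulates for the alternative median estimators.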
The convex rope problem is to find a counterclockwise or clockwise convex rope starting at a vertex a and ending at a vertex b of a simple polygon P, where a is a vertex of the convex hull of P and b is visible from infinity. The convex rope in question is the shortest path joining a and b that does not enter the interior of P. In this paper, the problem is recast as that of finding a shortest path in a simple polygon and solved by the method of multiple shooting. We then show that if the collinearity condition of the method holds at all shooting points, these shooting points form the shortest path. Otherwise, the sequence of paths obtained by the method's update procedure converges to the shortest path. The algorithm is implemented in C++ for numerical experiments.
We consider a causal inference model in which individuals interact in a social network and may not comply with the assigned treatments. Estimating causal parameters is challenging in the presence of network interference of unknown form, as each individual may be influenced in complex ways both by close individuals and by distant ones. Noncompliance with treatment assignment further complicates the problem, and prior methods that deal with network spillovers but disregard noncompliance may underestimate the effect of treatment receipt on the outcome. To estimate meaningful causal parameters, we introduce a new concept of exposure mapping, which summarizes potentially complicated spillover effects into a fixed-dimensional statistic of instrumental variables. We investigate identification conditions for the intention-to-treat effect and the average causal effect for compliers, while explicitly allowing for possible misspecification of the exposure mapping. Based on our identification results, we develop nonparametric estimation procedures via inverse probability weighting. Their asymptotic properties, including consistency and asymptotic normality, are investigated using an approximate neighborhood interference framework, which is convenient for handling unknown forms of spillovers between individuals. For an empirical illustration, we apply our method to experimental data from an anti-conflict school intervention program.
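The inverse-probability-weighting idea under noncompliance can be sketched in the simplest no-interference setting with one-sided noncompliance; all quantities below are simulated stand-ins, and the network spillovers and exposure mappings that are the paper's focus are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
p = 0.5                          # known assignment probability
z = rng.binomial(1, p, size=n)   # randomized assignment (instrument)
# one-sided noncompliance: 70% of assigned units take the treatment,
# no control unit does
complier = rng.binomial(1, 0.7, size=n)
d = z * complier                 # treatment actually received
# outcome: treatment receipt raises the mean by 2
y = 1.0 + 2.0 * d + rng.normal(size=n)

# Horvitz-Thompson style IPW estimate of the intention-to-treat effect
itt = np.mean(z * y / p) - np.mean((1 - z) * y / (1 - p))
# Wald ratio: ITT on the outcome over ITT on receipt = complier average effect
itt_d = np.mean(z * d / p) - np.mean((1 - z) * d / (1 - p))
late = itt / itt_d
```

Here `itt` should be close to 2 times the 70% compliance rate, and `late` should recover the receipt effect of 2; with interference, the paper replaces the scalar instrument by an exposure-mapping statistic inside the same weighting scheme.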
We propose a novel inference procedure for linear combinations of high-dimensional regression coefficients in generalized estimating equations, which have been widely used for correlated data analysis for decades. Our estimator, obtained by constructing a system of projected estimating equations, is shown to be asymptotically normally distributed under certain regularity conditions. We also introduce a data-driven cross-validation procedure to select the tuning parameter for estimating the projection direction, an issue not addressed by existing procedures. We demonstrate the robust finite-sample performance of the proposed method, especially in estimation bias and confidence interval coverage, via extensive simulations, and apply it to gene expression data on riboflavin production with Bacillus subtilis.
We develop a new approach to drifting games, a class of two-person games with many applications to boosting and online learning settings, including Prediction with Expert Advice and the Hedge game. Our approach involves (a) guessing an asymptotically optimal potential by solving an associated partial differential equation (PDE); then (b) justifying the guess, by proving upper and lower bounds on the final-time loss whose difference scales like a negative power of the number of time steps. The proofs of our potential-based upper bounds are elementary, using little more than Taylor expansion. The proofs of our potential-based lower bounds are also rather elementary, combining Taylor expansion with probabilistic or combinatorial arguments. Most previous work on asymptotically optimal strategies has used potentials obtained by solving a discrete dynamic programming principle; the arguments are complicated by their discrete nature. Our approach is facilitated by the fact that the potentials we use are explicit solutions of PDEs; the arguments are based on basic calculus. Not only is our approach more elementary; it also gives new potentials and corresponding upper and lower bounds that match each other in the asymptotic regime.
We consider the problem of joint simultaneous confidence band (JSCB) construction for regression coefficient functions of time series scalar-on-function linear regression when the regression model is estimated by a roughness penalization approach with flexible choices of orthonormal basis functions. A simple and unified multiplier bootstrap methodology is proposed for the JSCB construction and is shown to achieve the correct coverage probability asymptotically. Furthermore, the JSCB is asymptotically robust to inconsistently estimated standard deviations of the model. The proposed methodology is applied to a time series data set from an electricity market to visually investigate and formally test the overall regression relationship, as well as to perform model validation. Along the way, a uniform Gaussian approximation and comparison result over all Euclidean convex sets is established for normalized sums of a class of moderately high-dimensional stationary time series. Finally, the proposed methodology can also be applied to simultaneous inference for scalar-on-function linear regression with independent cross-sectional data.
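A minimal multiplier-bootstrap sketch for a simultaneous critical value over a moderate number of coordinates, with i.i.d. data standing in for the time series and functional-regression structure above.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 400, 10
x = rng.normal(size=(n, p))              # observations, one row per time point
s = x.sum(axis=0) / np.sqrt(n)           # normalized sum (target statistic)
sigma_hat = x.std(axis=0, ddof=1)

# multiplier bootstrap: perturb each (centered) observation by an
# i.i.d. N(0,1) weight and track the maximal studentized coordinate
B = 2000
w = rng.normal(size=(B, n))
x_c = x - x.mean(axis=0)
boot_max = np.max(np.abs(w @ x_c) / np.sqrt(n) / sigma_hat, axis=1)
crit = np.quantile(boot_max, 0.95)       # simultaneous 95% critical value

# the band [-crit * sigma_hat, crit * sigma_hat] is simultaneous over
# all p coordinates, so crit exceeds the pointwise normal quantile 1.96
covered = np.all(np.abs(s / sigma_hat) <= crit)
```

The simultaneous critical value sits between the pointwise quantile and the Bonferroni one; in the paper the same scheme is applied to the penalized coefficient-function estimates with dependent multipliers adapted to the time series.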
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, with a relaxation of the interpolation constraints outside some ranges of interest: the mean of the predictive distribution no longer necessarily interpolates observed values that lie outside the ranges of interest, but is simply constrained to remain outside them. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in the ranges of interest, especially when a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used for sequentially choosing evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed (provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process). Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
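The expected improvement criterion mentioned above has a closed form in terms of a GP's predictive mean and standard deviation at a candidate point; this is the standard EI formula for minimization, not the reGP construction itself.

```python
from math import erf, sqrt, exp, pi

def expected_improvement(mu, sd, f_min):
    # standard EI for minimization, given the GP predictive mean `mu` and
    # standard deviation `sd` at a point, and the current best value `f_min`:
    # EI = (f_min - mu) * Phi(z) + sd * phi(z),  z = (f_min - mu) / sd
    if sd <= 0.0:
        return max(f_min - mu, 0.0)      # degenerate (noise-free, known) case
    z = (f_min - mu) / sd
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi) # standard normal PDF
    return (f_min - mu) * Phi + sd * phi
```

In reGP-based Bayesian optimization the same criterion is evaluated, but with the predictive mean and standard deviation coming from the relaxed model, which is what improves its behavior for low function values.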
This paper proposes a data-driven machine learning framework for parameter estimation and uncertainty quantification in epidemic models, based on two key ingredients: (i) prior parameter learning via the cross-entropy method and (ii) update of the model calibration and uncertainty propagation through approximate Bayesian computation. The effectiveness of the new methodology is illustrated with actual data from the COVID-19 epidemic in the city of Rio de Janeiro, Brazil, employing an ordinary differential equation-based model with a generalized SEIR-type mechanistic structure that includes a time-dependent transmission rate, asymptomatic individuals, and hospitalizations. A minimization problem with two cost terms (numbers of hospitalizations and deaths) is formulated, and twelve parameters are identified. The calibrated model provides a consistent description of the available data and is able to extrapolate forecasts over a few weeks, which makes the proposed methodology very appealing for real-time epidemic modeling.
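The cross-entropy method used in the prior learning step can be sketched on a toy quadratic cost, a stand-in for the hospitalization/death misfit of the epidemic model; the sample size, elite fraction, and two-parameter setting are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy cost: squared distance to the "true" parameters (stand-in for the
# misfit between model output and hospitalization/death data)
true_theta = np.array([0.3, 1.5])

def cost(theta):
    return float(np.sum((theta - true_theta) ** 2))

# cross-entropy method: sample candidates, keep the elite fraction,
# refit the Gaussian sampler to the elites, repeat
mu = np.zeros(2)
sigma = np.ones(2) * 2.0
for _ in range(30):
    samples = rng.normal(mu, sigma, size=(200, 2))
    scores = np.array([cost(s) for s in samples])
    elite = samples[np.argsort(scores)[:20]]   # best 10%
    mu = elite.mean(axis=0)
    sigma = elite.std(axis=0) + 1e-8           # floor avoids degenerate sampler
```

The final `mu` concentrates near the cost minimizer and can then seed the approximate Bayesian computation stage, as in the two-step framework described above.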