Constructing asymptotically valid confidence intervals through a valid central limit theorem is crucial for A/B tests, where a classical goal is to statistically assert whether a treatment plan is significantly better than a control plan. In some emerging applications for online platforms, the treatment plan is not a single plan, but instead encompasses an infinite continuum of plans indexed by a continuous treatment parameter. As such, the experimenter not only needs to provide valid statistical inference, but also desires to effectively and adaptively find the optimal value of the treatment parameter to use for the treatment plan. However, we find that classical optimization algorithms, despite their fast convergence rates under convexity assumptions, do not come with a central limit theorem that can be used to construct asymptotically valid confidence intervals. We fix this issue by providing a new optimization algorithm that maintains the same fast convergence rate while permitting the establishment of a valid central limit theorem. We discuss practical implementations of the proposed algorithm and conduct numerical experiments to illustrate the theoretical findings.
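As background for the inference goal described above, the sketch below shows the standard plug-in normal confidence interval that a central limit theorem licenses; the estimator is a generic stand-in (a sample mean of noisy observations at a fixed treatment-parameter value), not the optimization algorithm proposed in the paper.

```python
# Illustrative only: a plug-in normal confidence interval built from an
# asymptotically normal estimator, the kind of object a central limit
# theorem is meant to justify. The data-generating setup is a placeholder.
import numpy as np
from scipy import stats

def normal_ci(theta_hat, sigma_hat, n, alpha=0.05):
    """CI of the form theta_hat +/- z_{1-alpha/2} * sigma_hat / sqrt(n)."""
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    half_width = z * sigma_hat / np.sqrt(n)
    return theta_hat - half_width, theta_hat + half_width

# Example: n noisy evaluations of a treatment effect at a fixed parameter value.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.3, scale=1.0, size=5000)
lo, hi = normal_ci(samples.mean(), samples.std(ddof=1), samples.size)
print(f"95% CI for the effect: ({lo:.3f}, {hi:.3f})")
```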
We propose a new approach to portfolio optimization that utilizes a unique combination of synthetic data generation and a CVaR constraint. We formulate the portfolio optimization problem as an asset allocation problem in which each asset class is accessed through a passive (index) fund. The asset-class weights are determined by solving an optimization problem which includes a CVaR constraint. Synthetic return scenarios are generated by a modified CTGAN algorithm that incorporates features (contextual information); these scenarios are, in turn, fed into the optimization engine. For contextual information we rely on several points along the U.S. Treasury yield curve. The merits of this approach are demonstrated with an example based on ten asset classes (covering stocks, bonds, and commodities) over a fourteen-and-a-half-year period (January 2008-June 2022). We show that the synthetic generation process captures the key characteristics of the original data well, and that the optimization scheme results in portfolios that exhibit satisfactory out-of-sample performance. We also show that this approach outperforms the conventional equal-weights (1/N) asset allocation strategy and other optimization formulations based on historical data only.
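As an illustration of how a scenario-based CVaR constraint enters such an allocation step, the sketch below uses the standard Rockafellar-Uryasev reformulation; the return matrix is a hypothetical placeholder for the CTGAN-generated scenarios, and the mean-return objective and CVaR cap are illustrative assumptions rather than the paper's formulation.

```python
# A minimal sketch of a scenario-based, CVaR-constrained allocation step
# (Rockafellar-Uryasev reformulation). The synthetic return scenarios below
# are placeholders; in the paper they would come from the scenario generator.
import cvxpy as cp
import numpy as np

def cvar_constrained_weights(returns, alpha=0.95, cvar_cap=0.05):
    n_scen, n_assets = returns.shape
    w = cp.Variable(n_assets, nonneg=True)      # long-only asset-class weights
    zeta = cp.Variable()                        # VaR-level auxiliary variable
    losses = -returns @ w                       # per-scenario portfolio loss
    cvar = zeta + cp.sum(cp.pos(losses - zeta)) / ((1 - alpha) * n_scen)
    prob = cp.Problem(
        cp.Maximize(returns.mean(axis=0) @ w),  # maximize mean scenario return
        [cp.sum(w) == 1, cvar <= cvar_cap],     # fully invested, CVaR bounded
    )
    prob.solve()
    return w.value

rng = np.random.default_rng(1)
synthetic_returns = rng.normal(0.0005, 0.01, size=(1000, 10))  # placeholder scenarios
print(cvar_constrained_weights(synthetic_returns))
```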
This paper introduces an $hp$-adaptive multi-element stochastic collocation method, which additionally allows existing model evaluations to be re-used during either $h$- or $p$-refinement. The collocation method is based on weighted Leja nodes. After $h$-refinement, local interpolations are stabilized by adding and sorting Leja nodes on each newly created sub-element in a hierarchical manner. For $p$-refinement, the local polynomial approximations are based on total-degree or dimension-adaptive bases. The method is applied in the context of forward and inverse uncertainty quantification to handle non-smooth or strongly localised response surfaces. The performance of the proposed method is assessed in several test cases, also in comparison to competing methods.
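For readers unfamiliar with Leja points, the sketch below shows the greedy construction of one-dimensional weighted Leja nodes on a candidate grid: each new node maximizes the weight times the product of distances to the nodes already selected. The Gaussian weight function and candidate interval are illustrative assumptions; the hierarchical, per-sub-element sorting described in the paper is not reproduced.

```python
# A minimal sketch of greedy 1D weighted Leja node selection on a candidate grid.
import numpy as np

def weighted_leja_nodes(n_nodes, weight, candidates):
    nodes = [candidates[np.argmax(weight(candidates))]]   # start at the weight maximum
    for _ in range(n_nodes - 1):
        # product of distances to all nodes chosen so far, damped by the weight
        score = weight(candidates) * np.prod(
            np.abs(candidates[:, None] - np.array(nodes)[None, :]), axis=1
        )
        nodes.append(candidates[np.argmax(score)])
    return np.array(nodes)

grid = np.linspace(-5.0, 5.0, 2001)
gauss_weight = lambda x: np.exp(-0.5 * x**2)   # illustrative Gaussian-type weight
print(weighted_leja_nodes(7, gauss_weight, grid))
```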
Gradient methods have become mainstream techniques for Bi-Level Optimization (BLO) in learning fields. The validity of existing works heavily relies on either a restrictive Lower-Level Strong Convexity (LLSC) condition, on solving a series of approximation subproblems with high accuracy, or on both. In this work, by averaging the upper- and lower-level objectives, we propose a single-loop Bi-level Averaged Method of Multipliers (sl-BAMM) for BLO that is simple yet efficient for large-scale BLO and free of the limiting LLSC restriction. We further provide a non-asymptotic convergence analysis of sl-BAMM towards KKT stationary points, and the comparative advantage of our analysis lies in the absence of the strong gradient boundedness assumption, which is always required by others. Thus our theory safely captures a wider variety of applications in deep learning, especially those where the upper-level objective is quadratic w.r.t. the lower-level variable. Experimental results demonstrate the superiority of our method.
We investigate the extent to which offline demonstration data can improve online learning. It is natural to expect some improvement, but the question is how, and by how much? We show that the degree of improvement must depend on the quality of the demonstration data. To generate portable insights, we focus on Thompson sampling (TS) applied to a multi-armed bandit as a prototypical online learning algorithm and model. The demonstration data is generated by an expert with a given competence level, a notion we introduce. We propose an informed TS algorithm that utilizes the demonstration data in a coherent way through Bayes' rule and derive a prior-dependent Bayesian regret bound. This offers insight into how pretraining can greatly improve online performance and how the degree of improvement increases with the expert's competence level. We also develop a practical, approximate informed TS algorithm through Bayesian bootstrapping and show substantial empirical regret reduction through experiments.
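A minimal sketch of the informed-prior idea for a Bernoulli bandit, assuming a Beta-Bernoulli model: offline demonstration data update the arm priors via Bayes' rule before online Thompson sampling starts. The expert competence model, the prior-dependent regret bound, and the Bayesian-bootstrap approximation of the paper are not reproduced here.

```python
# Informed Thompson sampling sketch: Beta priors are updated with offline
# demonstrations (arm, reward) pairs, then standard online TS proceeds.
import numpy as np

def informed_thompson_sampling(true_means, demo, horizon, rng):
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)           # uniform Beta(1,1) priors
    for arm, reward in demo:                       # Bayes update from offline demos
        alpha[arm] += reward
        beta[arm] += 1 - reward
    rewards = []
    for _ in range(horizon):                       # standard online TS afterwards
        arm = int(np.argmax(rng.beta(alpha, beta)))
        reward = int(rng.random() < true_means[arm])
        alpha[arm] += reward
        beta[arm] += 1 - reward
        rewards.append(reward)
    return np.mean(rewards)

rng = np.random.default_rng(2)
means = [0.3, 0.5, 0.7]
demos = [(2, 1)] * 20 + [(2, 0)] * 8               # expert mostly pulls the best arm
print(informed_thompson_sampling(means, demos, horizon=1000, rng=rng))
```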
In Bayesian analysis, the selection of a prior distribution is typically done by considering each parameter in the model. While this can be convenient, in many scenarios it may be desirable to place a prior on a summary measure of the model instead. In this work, we propose a prior on the model fit, as measured by a Bayesian coefficient of determination ($R^2$), which then induces a prior on the individual parameters. We achieve this by placing a beta prior on $R^2$ and then deriving the induced prior on the global variance parameter for generalized linear mixed models. We derive closed-form expressions in many scenarios and present several approximation strategies for when an analytic form is not available or to allow for easier computation. In these situations, we suggest approximating the prior with a generalized beta prime distribution and provide a simple default prior construction scheme. This approach is quite flexible and can be easily implemented in standard Bayesian software. Lastly, we demonstrate the performance of the method on simulated data, where it particularly shines in high-dimensional examples, as well as on real-world data, where it shows its ability to model spatial correlation in the random effects.
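A minimal Monte Carlo sketch of the induced-prior idea, assuming the standard Gaussian linear-model relation $R^2 = W/(W + \sigma^2)$ between the global variance $W$ of the linear predictor and the noise variance: a Beta prior on $R^2$ maps to $W = \sigma^2 R^2/(1-R^2)$, so $W/\sigma^2$ follows a beta prime distribution. The hyperparameters are illustrative; the paper's GLMM derivations are not reproduced.

```python
# Sampling the prior on the global variance W induced by a Beta prior on R^2.
import numpy as np

def sample_induced_global_variance(a, b, sigma2, size, rng):
    r2 = rng.beta(a, b, size=size)           # Beta prior on the coefficient of determination
    return sigma2 * r2 / (1.0 - r2)          # induced draws of the global variance W

rng = np.random.default_rng(3)
draws = sample_induced_global_variance(a=1.0, b=2.0, sigma2=1.0, size=100_000, rng=rng)
print("median induced variance:", np.median(draws))
print("P(W > 1):", np.mean(draws > 1.0))
```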
Solving high-dimensional random parametric PDEs poses a challenging computational problem. It is well known that numerical methods can greatly benefit from adaptive refinement algorithms, in particular when functional approximations in polynomials are computed as in stochastic Galerkin and stochastic collocation methods. This work investigates a residual-based adaptive algorithm used to approximate the solution of the stationary diffusion equation with lognormal coefficients. It is known that the refinement procedure is reliable, but the theoretical convergence of the scheme for this class of unbounded coefficients has long been an open question. This paper fills this gap and, in particular, provides a convergence result for the adaptive solution of the lognormal stationary diffusion problem. A computational example supports the theoretical statement.
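One standard ingredient of such residual-based adaptive loops is the marking step; the sketch below shows Dörfler (bulk-chasing) marking of error indicators as a generic illustration only, since the residual estimators for the lognormal diffusion problem are not computed here.

```python
# Dörfler marking: select the smallest set of elements whose squared error
# indicators account for a fraction `theta` of the total estimated error.
import numpy as np

def doerfler_marking(indicators, theta=0.5):
    order = np.argsort(-indicators**2)                 # largest contributions first
    cumulative = np.cumsum(indicators[order]**2)
    n_marked = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:n_marked]                            # indices of marked elements

estimated_errors = np.array([0.02, 0.30, 0.05, 0.22, 0.01, 0.10])
print(doerfler_marking(estimated_errors, theta=0.6))
```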
The integration of discrete algorithmic components in deep learning architectures has numerous applications. Recently, Implicit Maximum Likelihood Estimation (IMLE, Niepert, Minervini, and Franceschi 2021), a class of gradient estimators for discrete exponential family distributions, was proposed by combining implicit differentiation through perturbation with the path-wise gradient estimator. However, due to the finite difference approximation of the gradients, it is especially sensitive to the choice of the finite difference step size, which needs to be specified by the user. In this work, we present Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information with the degree of bias in the gradient estimates. We empirically evaluate our estimator on synthetic examples, as well as on Learning to Explain, Discrete Variational Auto-Encoders, and Neural Relational Inference tasks. In our experiments, we show that our adaptive gradient estimator can produce faithful estimates while requiring orders of magnitude fewer samples than other gradient estimators.
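A minimal sketch of the finite-difference, perturb-and-MAP gradient estimate underlying IMLE, with top-k selection as the discrete component: the gradient with respect to the logits is approximated by comparing the MAP state with the MAP state of a target obtained by nudging the logits against the downstream gradient. The Gumbel noise model and the fixed step size `lam` are simplifications; adapting that step size is what the proposed AIMLE estimator addresses.

```python
# Finite-difference IMLE-style gradient estimate for a top-k selection.
import numpy as np

def top_k_map(theta, k):
    z = np.zeros_like(theta)
    z[np.argsort(-theta)[:k]] = 1.0            # one-hot mask of the k largest logits
    return z

def imle_gradient(theta, downstream_grad, k, lam, rng):
    eps = rng.gumbel(size=theta.shape)         # perturb-and-MAP noise
    z = top_k_map(theta + eps, k)              # forward (perturbed MAP) sample
    z_target = top_k_map(theta - lam * downstream_grad + eps, k)  # target state
    return (z - z_target) / lam                # finite-difference gradient estimate

rng = np.random.default_rng(4)
theta = rng.normal(size=8)
downstream_grad = rng.normal(size=8)           # stand-in for dL/dz from the decoder
print(imle_gradient(theta, downstream_grad, k=3, lam=10.0, rng=rng))
```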
This article tackles the old problem of prediction via a nonparametric transformation model (NTM) in a new Bayesian way. Estimation of NTMs is known to be challenging due to model unidentifiability, yet the model is appealing because of its robust prediction capability in survival analysis. Inspired by the uniqueness of the posterior predictive distribution, we achieve efficient prediction via the aforementioned NTM under the Bayesian paradigm. Our strategy is to assign weakly informative priors to the nonparametric components rather than to identify the model by adding the complicated constraints used in the existing literature. The success of the Bayesian approach rests on i) a subtle recasting of NTMs via an exponential transformation, which compresses the infinite-dimensional parameter spaces to positive quadrants in view of the non-negativity of the failure time; and ii) a newly constructed, weakly informative quantile-knots I-splines prior for the recast transformation function, together with a Dirichlet process mixture model assigned to the error distribution. In addition, we provide a convenient and precise estimator for the identified parameter component subject to the general unit-norm restriction through posterior modification, enabling effective estimation of relative risks. Simulations and applications on real datasets reveal that our method is robust and outperforms the competing methods. An R package, BuLTM, is available to predict survival curves, estimate relative risks, and facilitate posterior checking.
Modern statistical problems often involve linear inequality constraints on model parameters. Ignoring natural parameter constraints usually results in less efficient statistical procedures. To this end, we define a notion of `sparsity' for such restricted sets using lower-dimensional features. Our framework is flexible enough that the number of restrictions may exceed the number of parameters. One such situation arises in the estimation of a monotone curve using a nonparametric approach, e.g., splines. We show that the proposed notion of sparsity agrees with the usual notion of sparsity in the unrestricted case, which establishes the validity of the proposed definition as a measure of sparsity. The proposed sparsity measure also allows us to generalize popular priors for sparse vector estimation to the constrained case.
An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.