In practice, optimal screening designs for arbitrary run sizes are traditionally generated using the D-criterion with factor settings fixed at +/- 1, even when considering continuous factors with levels in [-1, 1]. This paper identifies cases of undesirable estimation variance properties for such D-optimal designs and argues that A-optimal designs generally tend to push variances closer to their minimum possible values. New insights about the behavior of the criteria are found through a study of their respective coordinate-exchange formulas. The study confirms the existence of D-optimal designs composed only of settings +/- 1 for both main effect and interaction models, for blocked and un-blocked experiments. Scenarios are also identified for which arbitrary manipulation of a coordinate within [-1, 1] leads to infinitely many D-optimal designs, each having different variance properties. For the same conditions, the A-criterion is shown to have a unique optimal coordinate value for improvement. We also compare Bayesian versions of the A- and D-criteria in how they balance minimization of estimation variance and bias. Multiple examples of screening designs are considered for various models under Bayesian and non-Bayesian versions of the A- and D-criteria.
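To make the contrast between the criteria concrete, the sketch below computes the D-criterion (determinant of the information matrix) and the A-criterion (trace of its inverse) for a main-effects model and performs one grid-based coordinate-exchange step over [-1, 1]. This is a minimal illustration under assumed settings (an 8-run, 3-factor design and a 201-point grid), not the paper's algorithm, whose coordinate-exchange updates are derived analytically.

```python
import numpy as np

def model_matrix(D):
    """Main-effects model matrix: an intercept column plus the factor columns."""
    return np.column_stack([np.ones(D.shape[0]), D])

def d_value(D):
    X = model_matrix(D)
    return np.linalg.det(X.T @ X)             # D-criterion: larger is better

def a_value(D):
    X = model_matrix(D)
    return np.trace(np.linalg.inv(X.T @ X))   # A-criterion: smaller is better

def exchange_coordinate(D, i, j, objective, grid=np.linspace(-1, 1, 201)):
    """Try every grid value for design coordinate (i, j); keep the value that
    maximizes the supplied objective (negate the A-criterion to minimize it)."""
    best_D, best_val = D.copy(), objective(D)
    for v in grid:
        trial = D.copy()
        trial[i, j] = v
        val = objective(trial)
        if val > best_val:
            best_D, best_val = trial, val
    return best_D, best_val

# 2^3 full factorial as an 8-run, 3-factor starting design with settings +/- 1
levels = np.array([-1.0, 1.0])
D0 = np.array([[a, b, c] for a in levels for b in levels for c in levels])
D_d, _ = exchange_coordinate(D0, 0, 0, d_value)
D_a, _ = exchange_coordinate(D0, 0, 0, lambda D: -a_value(D))
print("best first coordinate under D:", D_d[0, 0], "under A:", D_a[0, 0])
```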
Wearable Cognitive Assistance (WCA) applications present a challenge to benchmark and characterize due to their human-in-the-loop nature. Employing user testing to optimize system parameters is generally not feasible, given the scope of the problem and the number of observations needed to detect small but important effects in controlled experiments. Considering the intended mass-scale deployment of WCA applications in the future, there exists a need for tools enabling human-independent benchmarking. We present in this paper the first model for the complete end-to-end emulation of humans in WCA. We build this model through statistical analysis of data collected from previous work in this field, and demonstrate its utility by studying application task durations. Compared to first-order approximations, our model shows a ~36% larger gap between step execution times at high versus low system impairment. We further introduce a novel framework for stochastic optimization of resource consumption-responsiveness tradeoffs in WCA, and show that by combining this framework with our realistic model of human behavior, significant reductions of up to 50% in the number of processed frame samples and 20% in energy consumption can be achieved with respect to the state of the art.
Actuaries use predictive modeling techniques to assess the loss cost on a contract as a function of observable risk characteristics. State-of-the-art statistical and machine learning methods are not well equipped to handle hierarchically structured risk factors with a large number of levels. In this paper, we demonstrate the data-driven construction of an insurance pricing model when hierarchically structured risk factors are available alongside contract-specific and externally collected risk factors. We examine the pricing of a workers' compensation insurance product with a hierarchical credibility model (Jewell, 1975), Ohlsson's combination of a generalized linear model and a hierarchical credibility model (Ohlsson, 2008), and mixed models. We compare the predictive performance of these models and evaluate the effect of the distributional assumption on the target variable by comparing linear mixed models with Tweedie generalized linear mixed models. For our case study, the Tweedie distribution is well suited to model and predict the loss cost on a contract. Moreover, incorporating contract-specific risk factors in the model improves the predictive performance and the risk differentiation in our workers' compensation insurance portfolio.
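As a simplified illustration of the distributional choice discussed above, the sketch below fits a Tweedie GLM with a log link to simulated loss costs, treating one hierarchical level ("sector") as a fixed effect. The variable names and simulated data are assumptions for illustration only; the credibility models and Tweedie GLMMs compared in the paper require dedicated mixed-model software and are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "sector": rng.choice(["construction", "retail", "services"], n),  # one hierarchy level
    "payroll": rng.lognormal(3.0, 0.5, n),                            # exposure-like covariate
    "claims_hist": rng.poisson(1.0, n),                               # contract-specific covariate
})
# Simulated loss cost: many zeros and a right-skewed continuous part,
# the shape a Tweedie target with 1 < var_power < 2 is meant to capture
mu = np.exp(0.5 + 0.3 * (df["sector"] == "construction") + 0.02 * df["claims_hist"])
df["loss_cost"] = np.where(rng.uniform(size=n) < 0.6, 0.0, rng.gamma(2.0, mu / 2.0))

X = pd.get_dummies(df[["sector", "payroll", "claims_hist"]], drop_first=True).astype(float)
X = sm.add_constant(X)
# Tweedie GLM with the (default) log link; var_power = 1.5 gives a compound Poisson-gamma
model = sm.GLM(df["loss_cost"], X, family=sm.families.Tweedie(var_power=1.5))
fit = model.fit()
print(fit.summary())
```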
The theoretical advances on the properties of scoring rules over the past decades have broadened the use of scoring rules in probabilistic forecasting. In meteorological forecasting, statistical postprocessing techniques are essential to improve the forecasts made by deterministic physical models. Numerous state-of-the-art statistical postprocessing techniques are based on distributional regression evaluated with the Continuous Ranked Probability Score (CRPS). However, theoretical properties of such evaluation with the CRPS have so far been studied only in the unconditional framework (i.e., without covariates) and for infinite sample sizes. We extend these results and study the rate of convergence, in terms of CRPS, of distributional regression methods. We find the optimal minimax rate of convergence for a given class of distributions and show that the k-nearest neighbor method and the kernel method reach this optimal minimax rate.
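The two estimators highlighted above can be illustrated with a small simulation: k-nearest neighbor distributional regression predicts with the empirical distribution of the k nearest responses, and its quality is measured by the CRPS of that empirical predictive distribution. The data-generating process, sample sizes, and choice of k below are assumptions made for the sketch.

```python
import numpy as np

def crps_ensemble(samples, y):
    """CRPS of the empirical CDF built from `samples`, evaluated at observation y."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def knn_predictive_sample(x_train, y_train, x0, k=20):
    """k-NN distributional regression: the predictive law at x0 is the
    empirical distribution of the k nearest training responses."""
    d = np.linalg.norm(x_train - x0, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx]

rng = np.random.default_rng(2)
x_train = rng.uniform(-1, 1, size=(2000, 1))
y_train = np.sin(np.pi * x_train[:, 0]) + rng.normal(0, 0.3, 2000)
x_test = rng.uniform(-1, 1, size=(200, 1))
y_test = np.sin(np.pi * x_test[:, 0]) + rng.normal(0, 0.3, 200)

scores = [crps_ensemble(knn_predictive_sample(x_train, y_train, x0), y)
          for x0, y in zip(x_test, y_test)]
print("mean CRPS of the k-NN distributional regression:", np.mean(scores))
```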
Causal inference is the process of using assumptions, study designs, and estimation strategies to draw conclusions about the causal relationships between variables based on data. This allows researchers to better understand the underlying mechanisms at work in complex systems and make more informed decisions. In many settings, we may not fully observe all the confounders that affect both the treatment and outcome variables, complicating the estimation of causal effects. To address this problem, a growing literature in both causal inference and machine learning proposes the use of Instrumental Variables (IV). This paper serves as the first effort to systematically and comprehensively introduce and discuss IV methods and their applications in both causal inference and machine learning. First, we provide the formal definition of IVs and discuss the identification problem of IV regression methods under different assumptions. Second, we categorize the existing work on IV methods into three streams according to the focus of the proposed methods: two-stage least squares with IVs, control function with IVs, and evaluation of IVs. For each stream, we present both the classical causal inference methods and recent developments in the machine learning literature. Then, we introduce a variety of applications of IV methods in real-world scenarios and provide a summary of the available datasets and algorithms. Finally, we summarize the literature, discuss the open problems, and suggest promising future research directions for IV methods and their applications. We also develop a toolkit of the IV methods reviewed in this survey, available at //github.com/causal-machine-learning-lab/mliv.
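As a minimal illustration of the first stream (two-stage least squares), the sketch below simulates a treatment confounded by an unobserved variable together with a valid instrument, and compares the ordinary least squares estimate with the 2SLS estimate. The data-generating process and coefficient values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # instrument: affects t, not y directly
t = 0.8 * z + u + rng.normal(size=n)         # treatment, confounded by u
y = 2.0 * t + 3.0 * u + rng.normal(size=n)   # outcome; true causal effect of t is 2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), t])
Z = np.column_stack([np.ones(n), z])

beta_ols = ols(X, y)                         # biased upward by the confounder u
t_hat = Z @ ols(Z, t)                        # first stage: project t onto the instrument
beta_2sls = ols(np.column_stack([np.ones(n), t_hat]), y)  # second stage

print("OLS estimate of the effect of t: ", beta_ols[1])
print("2SLS estimate of the effect of t:", beta_2sls[1])
```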
We derive minimax testing errors in a distributed framework where the data is split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms reaching the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. We also observe that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and that the corresponding test outperforms the best local test using only the information available at a single local machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
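A toy version of distributed testing with a single bit per machine is sketched below: each machine runs a local chi-squared test of a zero mean signal and transmits only its accept/reject bit, and the central machine rejects when the number of local rejections is binomially improbable under the null. This is meant only to convey the communication constraint; it is not the minimax-optimal protocol derived in the paper, and the sample sizes and levels are assumptions.

```python
import numpy as np
from scipy import stats

def distributed_one_bit_test(data_per_machine, alpha=0.05):
    """Each machine sends one bit: the outcome of a local chi-squared test of
    H0: the d-dimensional mean is zero. The center rejects when the count of
    local rejections exceeds a binomial(m, local_alpha) threshold."""
    local_alpha = 0.25                       # deliberately liberal local level
    bits = []
    for X in data_per_machine:               # X has shape (n_local, d)
        stat = X.shape[0] * np.sum(X.mean(axis=0) ** 2)   # ~ chi2_d under H0
        bits.append(stat > stats.chi2.ppf(1 - local_alpha, df=X.shape[1]))
    m = len(bits)
    threshold = stats.binom.ppf(1 - alpha, m, local_alpha)
    return sum(bits) > threshold

rng = np.random.default_rng(4)
m, n_local, d, signal = 50, 100, 20, 0.05
null_data = [rng.normal(0, 1, (n_local, d)) for _ in range(m)]
alt_data = [rng.normal(signal, 1, (n_local, d)) for _ in range(m)]
print("reject under H0:", distributed_one_bit_test(null_data))
print("reject under H1:", distributed_one_bit_test(alt_data))
```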
We consider the problem of comparing several samples of stochastic processes with respect to their second-order structure, and of describing the main modes of variation in this second-order structure, if present. These tasks can be seen as an Analysis of Variance (ANOVA) and a Principal Component Analysis (PCA) of covariance operators, respectively. They arise naturally in functional data analysis, where several populations are to be contrasted relative to the nature of their dispersion around their means, rather than relative to their means themselves. We contribute a novel approach based on optimal (multi)transport, where each covariance can be identified with a centred Gaussian process of corresponding covariance. By constructing the optimal simultaneous coupling of these Gaussian processes, we contrast the (linear) maps that achieve it with the identity map, with respect to a norm-induced distance. The resulting test statistic, calibrated by permutation, is seen to distinctly outperform the state of the art, and to furnish considerable power even under local alternatives. This effect is seen to be genuinely functional, and is related to the potential for perfect discrimination in infinite dimensions. In the event of a rejection of the null hypothesis stipulating equality, a geometric interpretation of the transport maps allows us to construct a (tangent space) PCA revealing the main modes of variation. As a necessary step in developing our methodology, we prove results on the existence and boundedness of optimal multitransport maps. These are of independent interest in the theory of transport of Gaussian processes. The transportation ANOVA and PCA are illustrated on a variety of simulated and real examples.
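A simplified two-sample version of the covariance comparison can be sketched on a discretized grid: the test statistic below is the Bures-Wasserstein distance between the two empirical covariances (the 2-Wasserstein distance between the corresponding centred Gaussians), calibrated by permuting curve labels. The paper's statistic instead contrasts the optimal (multi)transport maps with the identity, so this is an assumed simplification, as are the grid, kernels, and sample sizes.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A, B):
    """Squared 2-Wasserstein distance between centred Gaussians N(0, A) and N(0, B)."""
    rA = sqrtm(A)
    cross = sqrtm(rA @ B @ rA)
    return np.trace(A) + np.trace(B) - 2 * np.real(np.trace(cross))

def covariance_permutation_test(curves1, curves2, n_perm=500, seed=5):
    """Two-sample test of equal covariance operators, calibrated by permuting
    curve labels. Curves are rows of an (n_curves, n_gridpoints) array."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([curves1, curves2])
    n1 = curves1.shape[0]
    stat = bures_wasserstein(np.cov(curves1.T), np.cov(curves2.T))
    perm_stats = []
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        g1, g2 = pooled[idx[:n1]], pooled[idx[n1:]]
        perm_stats.append(bures_wasserstein(np.cov(g1.T), np.cov(g2.T)))
    return stat, np.mean(np.array(perm_stats) >= stat)   # statistic and p-value

rng = np.random.default_rng(6)
grid = np.linspace(0, 1, 30)
K1 = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.2)   # group-1 covariance kernel
K2 = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.4)   # group-2: different length scale
curves1 = rng.multivariate_normal(np.zeros(30), K1, size=60)
curves2 = rng.multivariate_normal(np.zeros(30), K2, size=60)
print(covariance_permutation_test(curves1, curves2))
```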
Engineers and scientists have been collecting and analyzing fatigue data since the 1800s to ensure the reliability of life-critical structures. Applications include (but are not limited to) bridges, building structures, aircraft and spacecraft components, ships, ground-based vehicles, and medical devices. Engineers need to estimate S-N relationships (Stress or Strain versus Number of cycles to failure), typically with a focus on estimating small quantiles of the fatigue-life distribution. Estimates from this kind of model are used as input to models (e.g., cumulative damage models) that predict failure-time distributions under varying stress patterns. Also, design engineers need to estimate lower-tail quantiles of the closely related fatigue-strength distribution. The history of applying incorrect statistical methods is nearly as long and such practices continue to the present. Examples include treating the applied stress (or strain) as the response and the number of cycles to failure as the explanatory variable in regression analyses (because of the need to estimate strength distributions) and ignoring or otherwise mishandling censored observations (known as runouts in the fatigue literature). The first part of the paper reviews the traditional modeling approach where a fatigue-life model is specified. We then show how this specification induces a corresponding fatigue-strength model. The second part of the paper presents a novel alternative modeling approach where a fatigue-strength model is specified and a corresponding fatigue-life model is induced. We explain and illustrate the important advantages of this new modeling approach.
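To illustrate the traditional approach described in the first part, the sketch below fits a (base-10) lognormal fatigue-life regression by maximum likelihood, treating runouts as right-censored observations rather than discarding or mishandling them. The simulated S-N data, censoring threshold, and starting values are assumptions; the sketch does not implement the paper's new fatigue-strength-first specification.

```python
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, log_stress, log_cycles, runout):
    """Lognormal (base-10) fatigue-life regression with right-censored runouts."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (log_cycles - (b0 + b1 * log_stress)) / sigma
    ll_failure = stats.norm.logpdf(z) - np.log(sigma)   # exact failures
    ll_runout = stats.norm.logsf(z)                      # right-censored runouts
    return -np.sum(np.where(runout, ll_runout, ll_failure))

rng = np.random.default_rng(7)
n = 80
log_stress = rng.uniform(2.0, 2.6, n)                    # log10 of applied stress
log_cycles = 14.0 - 4.0 * log_stress + rng.normal(0, 0.25, n)
runout = log_cycles > 6.0                                # tests suspended at 10^6 cycles
log_cycles = np.minimum(log_cycles, 6.0)

# Crude starting values from an ordinary least-squares fit that ignores censoring
b1_0, b0_0 = np.polyfit(log_stress[~runout], log_cycles[~runout], 1)
fit = optimize.minimize(neg_loglik, x0=[b0_0, b1_0, np.log(0.3)],
                        args=(log_stress, log_cycles, runout), method="Nelder-Mead")
b0, b1, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])
# Estimated 0.01 quantile of fatigue life (log10 scale) at a chosen stress level
q01 = b0 + b1 * 2.3 + sigma * stats.norm.ppf(0.01)
print("estimated 0.01 life quantile at log10(S) = 2.3:", q01)
```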
This note complements the upcoming paper "One-Way Ticket to Las Vegas and the Quantum Adversary" by Belovs and Yolcu, to be presented at QIP 2023. I develop the ideas behind the duality between the adversary bound and universal algorithms developed therein in a different form, using the same perspective as Barnum-Saks-Szegedy, in which query algorithms are defined as sequences of feasible reduced density matrices rather than sequences of unitaries. This form may be faster to understand for a general quantum information audience: it avoids defining the "unidirectional relative $\gamma_{2}$-bound" and relating it to query algorithms explicitly. This proof is also more general because the lower bound (and the universal query algorithm) apply to a class of optimal control problems rather than just query problems. That is in addition to the advantages to be discussed in Belovs-Yolcu, namely the more elementary algorithm and correctness proof that avoids phase estimation and spectral analysis, allows for a limited treatment of noise, and removes another $\Theta(\log(1/\epsilon))$ factor from the runtime compared to the previous discrete-time algorithm.
This paper investigates the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems in the context of channel estimation with jointly Gaussian inputs. We analyze the relationship of the generally nonlinear CME to the linear Bussgang estimator, a well-known method based on Bussgang's theorem. We highlight a novel observation that the Bussgang estimator is equal to the CME in different special cases, including the case of univariate Gaussian inputs and the case of multiple observations in the absence of additive noise prior to the quantization. For general cases, we conduct numerical simulations to quantify the gap between the Bussgang estimator and the CME. This gap increases for higher dimensions and longer pilot sequences. We propose an optimal pilot sequence, motivated by insights from the CME, and derive a novel closed-form expression of the MSE for that case. Afterwards, we find a closed-form limit of the MSE in the regime of an asymptotically large number of pilots that also holds for the Bussgang estimator. Lastly, we present numerical experiments for various system parameters and for different performance metrics which illuminate the behavior of the optimal channel estimator in the quantized regime. In this context, the well-known stochastic resonance effect that appears in quantized systems can be quantified.
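The linear Bussgang estimator referred to above can be sketched for a single one-bit observation of a correlated Gaussian channel: the arcsine law gives the covariance of the quantized output, Bussgang's theorem gives the cross-covariance, and the estimator is the resulting linear MMSE filter. The channel covariance, noise level, and dimensions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
d, sigma2, n_mc = 8, 0.1, 20000

# Correlated Gaussian channel and one-bit observations y = sign(h + n)
C_h = np.array([[0.9 ** abs(i - j) for j in range(d)] for i in range(d)])
C_r = C_h + sigma2 * np.eye(d)                 # covariance of the unquantized receive signal
Dinv_sqrt = np.diag(1.0 / np.sqrt(np.diag(C_r)))

# Arcsine law for the quantized covariance, Bussgang gain for the cross-covariance
C_y = (2 / np.pi) * np.arcsin(Dinv_sqrt @ C_r @ Dinv_sqrt)
C_hy = np.sqrt(2 / np.pi) * C_h @ Dinv_sqrt
W = C_hy @ np.linalg.inv(C_y)                  # linear Bussgang (LMMSE) filter

# Monte Carlo evaluation of the estimator's MSE
L = np.linalg.cholesky(C_h)
h = L @ rng.normal(size=(d, n_mc))
y = np.sign(h + np.sqrt(sigma2) * rng.normal(size=(d, n_mc)))
h_hat = W @ y
print("Monte Carlo MSE of the Bussgang estimator:", np.mean((h - h_hat) ** 2))
```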
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the performance of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial if the causal effects are accounted for correctly.
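The abstract does not specify the form of the proposed estimators, so the sketch below uses a generic inverse-propensity-weighting adjustment purely to illustrate how accounting for confounding in a lending-style setting can shrink the error relative to a naive comparison of approved and rejected borrowers. It should not be read as the paper's method, and all variable names and the data-generating process are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 20000
credit_score = rng.normal(size=n)                                # observed confounder
approve = rng.binomial(1, 1 / (1 + np.exp(-2 * credit_score)))   # lender's credit decision
repay = 0.5 * credit_score + 1.0 * approve + rng.normal(size=n)  # true effect of approval = 1

# Naive comparison of approved vs rejected borrowers is confounded by credit score
naive = repay[approve == 1].mean() - repay[approve == 0].mean()

# Inverse-propensity weighting: reweight by the estimated probability of approval
ps = LogisticRegression().fit(credit_score.reshape(-1, 1), approve).predict_proba(
    credit_score.reshape(-1, 1))[:, 1]
ipw = np.mean(approve * repay / ps) - np.mean((1 - approve) * repay / (1 - ps))

print("naive estimate:", naive)   # noticeably above 1 due to confounding
print("IPW estimate:  ", ipw)     # close to the true effect of 1
```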