In reliability and life data analysis, the Weibull distribution is widely used because varying its parameters accommodates a broad range of data characteristics. In reliability and life testing experiments, we frequently observe many zero or near-zero data points, a phenomenon we call nearly instantaneous failure. Many researchers have modified commonly used univariate parametric models, such as the exponential, gamma, Weibull, and log-normal distributions, to appropriately fit data containing such instantaneous failure observations. Researchers also encounter bivariate correlated life testing data in which many observations cluster near a particular point while the remaining observations follow some continuous distribution; we refer to this situation as early failure of the bivariate responses. If the point is the origin, we call the situation a nearly instantaneous failure of the responses. Here, we propose a modified bivariate Weibull distribution that allows for early failure by combining a bivariate uniform distribution with a bivariate Weibull distribution. The bivariate Weibull distribution is constructed using a two-dimensional copula, with the two marginal distributions assumed to be parametric Weibull distributions. We derive several properties of the modified bivariate Weibull distribution, chiefly the joint probability density function, the survival (reliability) function, and the hazard (failure rate) function. The model's unknown parameters are estimated using maximum likelihood estimation (MLE) combined with a machine learning clustering algorithm. Numerical examples based on simulated data illustrate and test the performance of the proposed methodologies. The method is also applied to real data and compared with existing approaches in the literature for modeling such data.
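To make the mixture construction concrete, one minimal parameterization of the joint density is given below; the mixing weight $p$, the uniform support bounds $\delta_1, \delta_2$, and the copula density $c$ are illustrative notation rather than the paper's exact choices:
\[
f(x,y) \;=\; p\,\frac{\mathbf{1}\{0 \le x \le \delta_1,\; 0 \le y \le \delta_2\}}{\delta_1\,\delta_2}
\;+\;(1-p)\, c\bigl(F_1(x),F_2(y)\bigr)\, f_1(x)\, f_2(y), \qquad 0<p<1,
\]
where $F_j(t) = 1-\exp\{-(t/\lambda_j)^{k_j}\}$ and $f_j$ denote Weibull marginal distribution and density functions. The first component captures the nearly instantaneous failures concentrated near the origin, while the copula component models the remaining correlated lifetimes; the joint survival and hazard functions then follow from this mixture form.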
We propose the first near-optimal quantum algorithm for estimating, in Euclidean norm, the mean of a vector-valued random variable with finite mean and covariance. Our result aims to extend the theory of multivariate sub-Gaussian estimators to the quantum setting. Unlike the classical setting, where any univariate estimator can be turned into a multivariate estimator with at most a logarithmic overhead in the dimension, no similar result can be proved in the quantum setting. Indeed, Heinrich ruled out the existence of a quantum advantage for the mean estimation problem when the sample complexity is smaller than the dimension. Our main result shows that, outside this low-precision regime, there is a quantum estimator that outperforms any classical estimator. Our approach is substantially more involved than in the univariate setting, where most quantum estimators rely only on phase estimation. We exploit a variety of additional algorithmic techniques, such as amplitude amplification, the Bernstein-Vazirani algorithm, and quantum singular value transformation. Our analysis also uses concentration inequalities for multivariate truncated statistics. We develop our quantum estimators in two different input models that have previously appeared in the literature. The first provides coherent access to the binary representation of the random variable and encompasses the classical setting. In the second model, the random variable is directly encoded into the phases of quantum registers. This model arises naturally in many quantum algorithms, but it is often incomparable to having classical samples. We adapt our techniques to these two settings and show that the second model is strictly weaker for solving the mean estimation problem. Finally, we describe several applications of our algorithms, notably in measuring the expectation values of commuting observables and in the field of machine learning.
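For context on the classical point of comparison only: one well-known robust multivariate mean estimator in the classical line of work referenced above is the geometric median-of-means. The plain NumPy sketch below (the group count k and the Weiszfeld tolerances are arbitrary choices) illustrates that classical baseline and is unrelated to the paper's quantum constructions.

```python
import numpy as np

def geometric_median_of_means(samples, k):
    """Classical baseline: split the samples into k groups, average each group,
    and return the geometric median of the group means (Weiszfeld iteration)."""
    groups = np.array_split(np.asarray(samples), k)
    means = np.array([g.mean(axis=0) for g in groups])
    x = means.mean(axis=0)                          # initial guess
    for _ in range(100):
        d = np.linalg.norm(means - x, axis=1)
        d = np.maximum(d, 1e-12)                    # avoid division by zero
        x_new = (means / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(x_new - x) < 1e-9:
            break
        x = x_new
    return x
```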
As opaque predictive models increasingly impact many areas of modern life, interest has grown in quantifying the importance of a given input variable for making a specific prediction. Recently, there has been a proliferation of model-agnostic methods to measure variable importance (VI) that analyze the difference in predictive power between a full model trained on all variables and a reduced model that excludes the variable(s) of interest. A bottleneck common to these methods is the estimation of the reduced model for each variable (or subset of variables), an expensive process that often comes without theoretical guarantees. In this work, we propose a fast and flexible method for approximating the reduced model with important inferential guarantees. We replace the need to fully retrain a wide neural network with a linearization initialized at the full-model parameters. By adding a ridge-like penalty to make the problem convex, we prove that when the ridge penalty parameter is sufficiently large, our method estimates the variable importance measure with an error rate of $O(\frac{1}{\sqrt{n}})$, where $n$ is the number of training samples. We also show that our estimator is asymptotically normal, enabling us to provide confidence bounds for the VI estimates. We demonstrate through simulations that our method is fast and accurate under several data-generating regimes, and we illustrate its real-world applicability on a seasonal climate forecasting example.
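A minimal sketch of the linearization idea, assuming a scalar-output PyTorch network and squared-error loss; the function names and the choice of replacing the dropped variable by its mean are illustrative, not the paper's exact recipe:

```python
import numpy as np
import torch

def jacobian_features(model, X):
    """Per-example gradient of the scalar network output w.r.t. all parameters,
    evaluated at the full-model parameters (the linearization point)."""
    feats = []
    for x in X:
        model.zero_grad()
        model(torch.as_tensor(x, dtype=torch.float32)).sum().backward()
        feats.append(torch.cat([p.grad.flatten() for p in model.parameters()]).numpy())
    return np.stack(feats)

def linearized_reduced_predictions(model, X_reduced, y, lam):
    """Approximate the reduced model without retraining: ridge-penalized least
    squares in the linearization f(x; theta) ~ f(x; theta_0) + J(x)(theta - theta_0),
    solved in dual (kernel) form."""
    with torch.no_grad():
        f0 = np.array([model(torch.as_tensor(x, dtype=torch.float32)).item()
                       for x in X_reduced])
    Phi = jacobian_features(model, X_reduced)        # n x p Jacobian at theta_0
    K = Phi @ Phi.T
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y - f0)
    return f0 + K @ alpha                            # reduced-model fitted values

# Dropped-variable VI (illustrative names): with reduced_preds obtained from this
# function on X with column j replaced by its mean, and full_preds from the network,
# vi_j = np.mean((y - reduced_preds) ** 2) - np.mean((y - full_preds) ** 2)
```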
In recent years, change point detection for high-dimensional data has become increasingly important in many scientific fields. Most of the literature develops separate methods designed for specific models (e.g., the mean shift model, vector autoregressive model, or graphical model). In this paper, we provide a unified framework for structural break detection that is suitable for a large class of models. Moreover, the proposed algorithm automatically achieves consistent parameter estimates during the change point detection process, without the need to refit the model. Specifically, we introduce a three-step procedure. The first step combines a block segmentation strategy with a fused-lasso-based estimation criterion, leading to significant computational gains without compromising the statistical accuracy of identifying the number and location of the structural breaks. This procedure is further coupled with hard-thresholding and exhaustive search steps to consistently estimate the number and location of the break points. Strong guarantees are proved for both the number of estimated change points and the rates of convergence of their locations. Consistent estimates of the model parameters are also provided. Numerical studies provide further support for the theory and validate the method's competitive performance for a wide range of models. The developed algorithm is implemented in the R package LinearDetect.
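A hedged sketch of the three-step logic for the simplest (mean-shift) special case, using cvxpy for the fused-lasso step; the block size, penalty lam, and threshold tau are illustrative tuning constants, and this is not the LinearDetect implementation:

```python
import numpy as np
import cvxpy as cp

def detect_mean_shifts(y, block_size, lam, tau):
    """Three-step sketch: (1) block segmentation + fused lasso on block means,
    (2) hard-thresholding of the estimated jumps to screen candidate breaks,
    (3) local exhaustive search around each surviving candidate."""
    n = len(y)
    blocks = np.array_split(np.arange(n), n // block_size)
    y_block = np.array([y[b].mean() for b in blocks])

    # Step 1: fused lasso on the block-level means
    theta = cp.Variable(len(y_block))
    obj = cp.sum_squares(y_block - theta) + lam * cp.norm1(cp.diff(theta))
    cp.Problem(cp.Minimize(obj)).solve()

    # Step 2: hard-threshold the jumps between consecutive blocks
    jumps = np.abs(np.diff(theta.value))
    candidates = [i + 1 for i in np.flatnonzero(jumps > tau)]

    # Step 3: exhaustive search within the two blocks adjacent to each candidate
    breaks = []
    for i in candidates:
        lo, hi = blocks[i - 1][0], blocks[i][-1]
        costs = [np.var(y[lo:t]) * (t - lo) + np.var(y[t:hi + 1]) * (hi + 1 - t)
                 for t in range(lo + 1, hi + 1)]
        breaks.append(lo + 1 + int(np.argmin(costs)))
    return sorted(breaks)
```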
Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial in applications where online experimentation is limited. However, because it depends entirely on logged data, OPE/L is sensitive to environment distribution shifts -- discrepancies between the data-generating environment and the one in which policies are deployed. \citet{si2020distributional} proposed distributionally robust OPE/L (DROPE/L) to address this, but the proposal relies on inverse-propensity weighting, whose estimation error and regret deteriorate if propensities are nonparametrically estimated and whose variance is suboptimal even if they are not. For standard, non-robust OPE/L, this is solved by doubly robust (DR) methods, but they do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR$^2$OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR$^2$OPE only requires fitting a small number of regressions, just like DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR$^2$OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of $\mathcal{O}\left(N^{-1/2}\right)$ even when unknown propensities are nonparametrically estimated. We empirically validate our algorithms in simulations and further extend our results to general $f$-divergence uncertainty sets.
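For reference, a KL-ball worst-case expectation of the policy's reward admits the standard dual representation $\sup_{\alpha>0}\{-\alpha\log \mathbb{E}[w\,e^{-R/\alpha}]-\alpha\delta\}$ with importance weight $w$. The sketch below is the simple inverse-propensity-weighted plug-in of that dual (the kind of baseline whose deficiencies motivate the paper), not the proposed LDR$^2$OPE or CDR$^2$OPL estimators:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_drope_ipw(rewards, target_prop, logging_prop, delta):
    """IPW plug-in estimate of the KL-distributionally-robust policy value via
    the dual  sup_{a>0} -a*log E[w*exp(-R/a)] - a*delta.
    rewards are assumed bounded (e.g. in [0, 1]) for numerical stability."""
    w = target_prop / logging_prop              # importance weights pi(A|X)/pi_b(A|X)

    def neg_dual(alpha):
        return alpha * np.log(np.mean(w * np.exp(-rewards / alpha))) + alpha * delta

    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun                             # maximized dual value
```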
We develop a model-based boosting approach for multivariate distributional regression within the framework of generalized additive models for location, scale, and shape. Our approach enables the simultaneous modelling of all distribution parameters of an arbitrary parametric distribution of a multivariate response conditional on explanatory variables, while being applicable to potentially high-dimensional data. Moreover, the boosting algorithm incorporates data-driven variable selection, taking different types of effects into account. A particular merit of our approach is that it allows for modelling the association between multiple continuous or discrete outcomes through the relevant covariates. After a detailed simulation study investigating estimation and prediction performance, we demonstrate the full flexibility of our approach in three diverse biomedical applications. The first is based on high-dimensional genomic cohort data from the UK Biobank, considering a bivariate binary response (chronic ischemic heart disease and high cholesterol). Here, we are able to identify genetic variants that are informative for the association between cholesterol and heart disease. The second application considers the demand for health care in Australia, with the number of consultations and the number of prescribed medications as a bivariate count response. The third application analyses two dimensions of childhood undernutrition in Nigeria as a bivariate response, and we find that the correlation between the two undernutrition scores differs considerably depending on the child's age and the region in which the child lives.
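A minimal sketch of the componentwise boosting loop for one concrete case, a bivariate Gaussian response with parameters (mu1, mu2, log sigma1, log sigma2, Fisher-z rho) and simple linear base learners; gradients are taken numerically for brevity, and all names and defaults are illustrative rather than the implementation used in the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def nll_i(y, mu1, mu2, ls1, ls2, zr):
    """Negative log-likelihood of one bivariate-normal observation, with the
    scale parameters on log scale and the correlation on Fisher-z scale."""
    s1, s2, r = np.exp(ls1), np.exp(ls2), np.tanh(zr)
    cov = [[s1 ** 2, r * s1 * s2], [r * s1 * s2, s2 ** 2]]
    return -multivariate_normal.logpdf(y, mean=[mu1, mu2], cov=cov)

def boost_bivariate_gaussian(X, Y, steps=200, nu=0.1, eps=1e-5):
    """Noncyclical componentwise gradient boosting: in each step, compute the
    negative gradient of the loss for every distribution parameter, fit a linear
    base learner per covariate, and update only the best-fitting
    (parameter, covariate) combination by a small step nu."""
    n, p = X.shape
    theta = np.tile([Y[:, 0].mean(), Y[:, 1].mean(),
                     np.log(Y[:, 0].std()), np.log(Y[:, 1].std()), 0.0], (n, 1))
    for _ in range(steps):
        best = None
        for k in range(5):
            # negative gradient w.r.t. parameter k, by forward finite differences
            u = np.array([-(nll_i(Y[i], *(theta[i] + eps * np.eye(5)[k]))
                            - nll_i(Y[i], *theta[i])) / eps for i in range(n)])
            for j in range(p):
                coef = np.polyfit(X[:, j], u, 1)
                fit = np.polyval(coef, X[:, j])
                rss = np.sum((u - fit) ** 2)
                if best is None or rss < best[0]:
                    best = (rss, k, fit)
        theta[:, best[1]] += nu * best[2]
    return theta            # fitted distribution parameters on their link scales
```

The best-fitting update per iteration is what provides the data-driven variable selection mentioned above: covariates whose base learners are never selected receive no effect.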
Convex model predictive controllers (MPCs) with a single rigid body model have demonstrated strong performance on real legged robots. However, convex MPCs are restricted by assumptions such as small rotation angles and a pre-defined gait, which limit the richness of potential solutions. We remove those assumptions and solve the complete mixed-integer non-convex program with the single rigid body model. We first collect datasets of pre-solved problems offline, then learn the problem-solution map to solve this optimization quickly for MPC. When warm starts can be found, the offline problems can be solved close to global optimality. The proposed controller is tested by generating various gaits and behaviors depending on the initial conditions. Hardware tests demonstrate online gait generation and adaptation running at more than 50 Hz based on sensor feedback.
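A hedged sketch of one simple way to realize a problem-solution map: store the offline pre-solved problems and retrieve the nearest one online as a warm start. The abstract does not specify the learned map, so the nearest-neighbor lookup and all names below are assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class WarmStartLibrary:
    """Offline problem-solution map: store (initial condition, integer gait
    sequence, continuous trajectory) triples from pre-solved problems and,
    online, look up the nearest pre-solved problem as a warm start."""
    def __init__(self, x0s, int_sols, cont_sols):
        self.nn = NearestNeighbors(n_neighbors=1).fit(np.asarray(x0s))
        self.int_sols = np.asarray(int_sols)    # e.g. contact/gait binaries per step
        self.cont_sols = np.asarray(cont_sols)  # e.g. contact forces, body trajectory

    def warm_start(self, x0):
        _, idx = self.nn.kneighbors([x0])
        i = idx[0, 0]
        # fix the integer variables to the retrieved gait and seed the remaining
        # non-convex continuous problem with the retrieved trajectory
        return self.int_sols[i], self.cont_sols[i]
```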
Functional linear regression has gained popularity as a statistical tool for studying the relationship between a function-valued response and exogenous explanatory variables. However, in practice, it is often unrealistic to expect the explanatory variables of interest to be perfectly exogenous, due to, for example, the presence of omitted variables or measurement error. Despite its empirical relevance, it was not until recently that this issue of endogeneity was studied in the literature on functional regression, and the developments in this direction do not yet seem to sufficiently meet practitioners' needs; for example, the issue has been discussed with particular attention to consistent estimation, so the distributional properties of the proposed estimators remain to be further explored. To fill this gap, this paper proposes new consistent FPCA-based instrumental variable estimators and develops their asymptotic properties in detail. We also provide a novel test for examining whether various characteristics of the response variable depend on the explanatory variable in our model. Simulation experiments under a wide range of settings show that the proposed estimators and test perform well. We apply our methodology to estimate the impact of immigration on native wages.
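A minimal sketch of the FPCA-plus-instrumental-variable idea for a function-valued response observed on a common grid, with a scalar endogenous regressor x and instrument z; the paper's estimators are more general, and the two-stage construction below is only illustrative:

```python
import numpy as np

def fpca_iv(Y, x, z, K):
    """Expand the centered response curves in their leading K empirical
    eigenfunctions and apply two-stage least squares to each FPC score."""
    n, T = Y.shape                                   # n curves on a grid of T points
    mu = Y.mean(axis=0)
    Yc = Y - mu
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    phi = Vt[:K]                                     # K leading eigenfunctions (rows)
    scores = Yc @ phi.T                              # n x K FPC scores

    # first stage: project the endogenous regressor on the instrument
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

    # second stage: regress each score on the fitted regressor
    Xh = np.column_stack([np.ones(n), x_hat])
    coefs = np.linalg.lstsq(Xh, scores, rcond=None)[0]   # (intercept, slope) per score

    beta_fun = coefs[1] @ phi                        # slope function beta(t) on the grid
    return mu, beta_fun
```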
Risk-adjusted quality measures are used to evaluate healthcare providers while controlling for factors beyond their control. Existing healthcare provider profiling approaches typically assume that the risk adjustment is perfect and that the between-provider variation in quality measures is entirely due to the quality of care. However, in practice, even with very good models for risk adjustment, some between-provider variation will be due to incomplete risk adjustment, which should be recognized in assessing and monitoring providers. Otherwise, conventional methods disproportionately identify larger providers as outliers, even though their provider effects need not be "extreme." Motivated by efforts to evaluate the quality of care provided by transplant centers, we develop a composite evaluation score based on a novel individualized empirical null method, which robustly accounts for overdispersion due to unobserved risk factors, models the marginal variance of standardized scores as a function of the effective center size, and only requires publicly available center-level statistics. The evaluations of United States kidney transplant centers based on the proposed composite score differ substantially from those based on conventional methods. Simulations show that the proposed empirical null approach classifies centers in terms of quality of care more accurately than existing methods.
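A hedged sketch of the flagging logic only: estimate the null center robustly, let the marginal variance of the standardized scores depend on effective center size (here crudely, by binning on size), and re-standardize before flagging. The paper's individualized empirical null and its composite score are more refined than this illustration:

```python
import numpy as np
from scipy.stats import norm

def flag_centers(z, eff_size, crit=norm.ppf(0.975), n_bins=10):
    """z: standardized center scores; eff_size: effective center sizes.
    Returns a boolean array marking centers flagged as outlying."""
    mu0 = np.median(z)                               # robust null center
    var_fit = np.empty_like(z, dtype=float)
    for b in np.array_split(np.argsort(eff_size), n_bins):
        var_fit[b] = np.mean((z[b] - mu0) ** 2)      # variance as a function of size
    z_star = (z - mu0) / np.sqrt(var_fit)            # re-standardized scores
    return np.abs(z_star) > crit
```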
This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that this error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and the proposed estimators in terms of their power for estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
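To make the contrast concrete, the sketch below compares a naive difference-in-means estimator with one standard confounding-adjusted estimator, augmented inverse-propensity weighting (AIPW); AIPW is used here as a familiar stand-in and is not claimed to be the construction proposed in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def naive_and_aipw(X, d, y):
    """X: confounders (n x p array); d: binary credit decision (0/1 array);
    y: repayment outcome. Returns (naive estimate, AIPW estimate)."""
    naive = y[d == 1].mean() - y[d == 0].mean()      # ignores confounding

    e = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[d == 1], y[d == 1]).predict(X)
    m0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)
    aipw = np.mean(m1 - m0
                   + d * (y - m1) / e
                   - (1 - d) * (y - m0) / (1 - e))   # doubly robust adjustment
    return naive, aipw
```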
Model-agnostic meta-learners aim to acquire meta-learned parameters from similar tasks so as to adapt to novel tasks from the same distribution with few gradient updates. Owing to the flexibility in the choice of models, these frameworks demonstrate appealing performance in a variety of domains, such as few-shot image classification and reinforcement learning. However, one important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions they are able to learn from. In this paper, we augment model-agnostic meta-learning (MAML) with the capability to identify the mode of tasks sampled from a multimodal task distribution and to adapt quickly through gradient updates. Specifically, we propose a multimodal MAML (MMAML) framework, which is able to modulate its meta-learned prior parameters according to the identified mode, allowing more efficient fast adaptation. We evaluate the proposed model on a diverse set of few-shot learning tasks, including regression, image classification, and reinforcement learning. The results not only demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks, but also show that training on a multimodal distribution can produce an improvement over unimodal training.
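A minimal sketch of the modulate-then-adapt idea for a linear-regression base model: task_embed_fn and modulation_fn stand in for the modulation network, and multiplicative modulation of the prior is only one of several possibilities, so none of this should be read as the paper's exact architecture:

```python
import numpy as np

def mmaml_adapt(theta, X_support, y_support, task_embed_fn, modulation_fn,
                inner_lr=0.01, inner_steps=5):
    """Identify the task mode from the support set, modulate the meta-learned
    prior parameters, then fast-adapt with a few gradient steps (MAML inner loop)
    on the support-set mean squared error."""
    tau = modulation_fn(task_embed_fn(X_support, y_support))  # per-parameter scaling
    w = theta * tau                                           # modulated prior
    for _ in range(inner_steps):
        grad = 2 * X_support.T @ (X_support @ w - y_support) / len(y_support)
        w = w - inner_lr * grad
    return w                                                  # task-adapted parameters
```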