A new mixture vector autoregressive model based on Gaussian and Student's $t$ distributions is introduced. The G-StMVAR model incorporates conditionally homoskedastic linear Gaussian vector autoregressions and conditionally heteroskedastic linear Student's $t$ vector autoregressions as its mixture components, and mixing weights that, for a $p$th order model, depend on the full distribution of the preceding $p$ observations. A structural version of the model with a time-varying B-matrix and statistically identified shocks is also proposed. We derive the stationary distribution of $p+1$ consecutive observations and show that the process is ergodic. It is also shown that the maximum likelihood estimator is strongly consistent, and thereby has the conventional limiting distribution under standard high-level conditions. Finally, this paper is accompanied by the CRAN-distributed R package gmvarkit, which provides easy-to-use tools for estimating the models and applying the methods.
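The defining mixing-weight mechanism is easy to sketch in code. The following is a univariate toy illustration with $p=1$ and two regimes, written in Python rather than with the gmvarkit R package; it uses Gaussian approximations to the regimes' stationary densities and a simplified $t$ component, so it is a sketch of the idea rather than the multivariate G-StMVAR specification itself:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two AR(1) regimes: regime 0 has Gaussian errors, regime 1 has Student's t errors.
phi0  = np.array([0.5, -0.5])   # intercepts
phi1  = np.array([0.7,  0.3])   # AR coefficients
sig   = np.array([1.0,  2.0])   # error scales
nu    = 5.0                     # degrees of freedom of the t regime
alpha = np.array([0.6,  0.4])   # unconditional mixing weights

# Gaussian approximations to each regime's stationary mean and variance
# (the actual model uses the exact stationary distributions of the last p observations).
mu    = phi0 / (1.0 - phi1)
gamma = sig**2 / (1.0 - phi1**2)
gamma[1] *= nu / (nu - 2.0)     # t errors inflate the stationary variance

T = 500
y = np.zeros(T + 1)
for t in range(1, T + 1):
    # Regime weights: prior weight times (approximate) stationary density of y_{t-1}.
    d = stats.norm.pdf(y[t - 1], mu, np.sqrt(gamma))
    w = alpha * d / np.sum(alpha * d)
    m = rng.choice(2, p=w)
    eps = rng.standard_normal() if m == 0 else rng.standard_t(nu)
    y[t] = phi0[m] + phi1[m] * y[t - 1] + sig[m] * eps
```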
Let $X$ and $Y$ be two real-valued random variables. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots$ be independent identically distributed copies of $(X,Y)$. Suppose there are two players A and B. Player A has access to $X_{1},X_{2},\ldots$ and player B has access to $Y_{1},Y_{2},\ldots$. Without communication, what joint probability distributions can players A and B jointly simulate? That is, if $k,m$ are fixed positive integers, which probability distributions on $\{1,\ldots,m\}^{2}$ are equal to the distribution of $(f(X_{1},\ldots,X_{k}),\,g(Y_{1},\ldots,Y_{k}))$ for some $f,g\colon\mathbb{R}^{k}\to\{1,\ldots,m\}$? When $X$ and $Y$ are standard Gaussians with fixed correlation $\rho\in(-1,1)$, we show that the set of probability distributions that can be noninteractively simulated from $k$ Gaussian samples is the same for any $k\geq m^{2}$. Previously, it was not known whether any finite number of samples suffices, except when $m\leq 2$. Consequently, a straightforward brute-force search deciding whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ correlated Gaussian samples has run time bounded by $(5/\epsilon)^{m(\log(\epsilon/2) / \log|\rho|)^{m^{2}}}$, improving a bound of Ghazi, Kamath and Raghavendra. A nonlinear central limit theorem (i.e., invariance principle) of Mossel then generalizes this result to decide whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ samples of a given finite discrete distribution $(X,Y)$, in run time that does not depend on $k$ and with constants that again improve on the bound of Ghazi, Kamath and Raghavendra.
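As a toy instance of the setup (not the brute-force search analyzed above), take $k=1$, $m=2$ and $f=g=\mathrm{sign}$: by Sheppard's formula the two players noninteractively simulate a distribution on $\{1,2\}^{2}$ with agreement probability $1-\arccos(\rho)/\pi$, which the following Monte Carlo sketch verifies.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.5, 10**6

# Correlated standard Gaussian pairs (X_i, Y_i) with correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Each player outputs the sign of its own sample: k = 1, m = 2, f = g = sign.
a = (x > 0).astype(int)
b = (y > 0).astype(int)

# Empirical agreement probability versus Sheppard's formula 1 - arccos(rho)/pi.
agree_hat = np.mean(a == b)
agree_true = 1.0 - np.arccos(rho) / np.pi
print(agree_hat, agree_true)
```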
Methods to correct class imbalance, i.e., the imbalance between the frequency of outcome events and non-events, are receiving increasing interest for developing prediction models. We examined the effect of imbalance correction on the performance of standard and penalized (ridge) logistic regression models in terms of discrimination, calibration, and classification. We examined random undersampling, random oversampling and SMOTE using Monte Carlo simulations and a case study on ovarian cancer diagnosis. The results indicated that all imbalance correction methods led to poor calibration (a strong overestimation of the probability of belonging to the minority class), but not to better discrimination in terms of the area under the receiver operating characteristic curve. Imbalance correction improved classification in terms of sensitivity and specificity, but similar results were obtained by shifting the probability threshold instead. Our study shows that outcome imbalance is not a problem in itself, and that imbalance correction may even worsen model performance.
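A minimal sketch of the phenomenon, assuming the scikit-learn and imbalanced-learn packages and an illustrative synthetic data set rather than the simulation design of the study: discrimination (AUC) is essentially unchanged by SMOTE, the average predicted risk is inflated well above the event rate, and shifting the probability threshold of the uncorrected model gives a similar sensitivity/specificity trade-off.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)            # no correction
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)          # oversampled data
smote = LogisticRegression(max_iter=1000).fit(X_sm, y_sm)

p_plain = plain.predict_proba(X_te)[:, 1]
p_smote = smote.predict_proba(X_te)[:, 1]

# Discrimination: essentially the same with and without imbalance correction.
print("AUC plain %.3f, AUC SMOTE %.3f" % (roc_auc_score(y_te, p_plain),
                                          roc_auc_score(y_te, p_smote)))
# Calibration-in-the-large: SMOTE inflates the average predicted risk.
print("event rate %.3f, mean risk plain %.3f, mean risk SMOTE %.3f"
      % (y_te.mean(), p_plain.mean(), p_smote.mean()))
# Classification: a lowered threshold on the uncorrected model mimics SMOTE at 0.5.
sens = lambda p, thr: ((p >= thr) & (y_te == 1)).sum() / (y_te == 1).sum()
print("sensitivity SMOTE@0.5 %.3f, plain@0.1 %.3f" % (sens(p_smote, 0.5),
                                                      sens(p_plain, 0.1)))
```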
Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. The existing literature views a random forest prediction as an infinite-order incomplete U-statistic in order to quantify its uncertainty. However, these methods focus on a small subsample size for each tree, which is theoretically valid but practically limiting. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the subsample size of each tree to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of the proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
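To fix the structure being referenced, a subsampling ensemble prediction at a query point is an incomplete U-statistic: the average of a base-learner "kernel" over $B$ randomly drawn subsamples of size $s$. The Python sketch below (with illustrative choices of $n$, $s$, and $B$; the paper's variance estimator itself is not reproduced here) shows this structure with $s$ comparable to $n$.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, s, B = 1000, 500, 200          # sample size, subsample size (comparable to n), trees
X = rng.uniform(-1, 1, size=(n, 3))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)
x0 = np.zeros((1, 3))             # query point

# Incomplete U-statistic: average the tree kernel over B random subsamples of size s.
preds = np.empty(B)
for b in range(B):
    idx = rng.choice(n, size=s, replace=False)
    tree = DecisionTreeRegressor(random_state=b).fit(X[idx], y[idx])
    preds[b] = tree.predict(x0)[0]

ensemble_prediction = preds.mean()
print(ensemble_prediction)
```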
The asymptotic distribution of a wide class of V- and U-statistics with estimated parameters is derived in the case where the kernel is not necessarily differentiable with respect to the parameter. The results have applications in goodness-of-fit problems.
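For concreteness, in the simplest case of a symmetric bivariate kernel $h$ depending on a parameter estimated by $\hat\theta_{n}$ (notation introduced here for illustration), the statistics in question take the form
\[
V_{n}(\hat\theta_{n})=\frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{j=1}^{n}h\bigl(X_{i},X_{j};\hat\theta_{n}\bigr),
\qquad
U_{n}(\hat\theta_{n})=\binom{n}{2}^{-1}\sum_{1\le i<j\le n}h\bigl(X_{i},X_{j};\hat\theta_{n}\bigr),
\]
and the class treated here allows kernels of general degree that need not be differentiable in $\theta$.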
Logistic regression models for binomial responses are routinely used in statistical practice. However, the maximum likelihood estimate may not exist due to data separability. We address this issue by considering a conjugate prior penalty which always produces finite estimates. Such a specification has a clear Bayesian interpretation and enjoys several invariance properties, making it an appealing prior choice. We show that the proposed method leads to an accurate approximation of the reduced-bias approach of Firth (1993), resulting in estimators with smaller asymptotic bias than maximum likelihood estimators and whose existence is always guaranteed. Moreover, the considered penalized likelihood can be expressed as a genuine likelihood in which the original data are replaced with a collection of pseudo-counts. Hence, our approach can leverage well-established and scalable algorithms for logistic regression. We compare our estimator with alternative reduced-bias methods, vastly improving on their computational performance while achieving appealing inferential results.
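A minimal sketch of the pseudo-count idea, assuming a simple uniform augmentation in which every observation contributes an additional $c$ pseudo-successes and $c$ pseudo-failures (a hypothetical choice made for illustration; the conjugate prior discussed above implies its own, specific augmentation). With scikit-learn (version 1.2 or later for penalty=None), this reduces to an ordinary logistic regression fit on weighted pseudo-data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_count_logit(X, y, c=0.05):
    """Logistic regression on data augmented with c pseudo-successes and
    c pseudo-failures per observation (illustrative augmentation only)."""
    n = len(y)
    X_aug = np.vstack([X, X, X])
    y_aug = np.concatenate([y, np.ones(n), np.zeros(n)])
    w_aug = np.concatenate([np.ones(n), np.full(n, c), np.full(n, c)])
    # The pseudo-counts guarantee finite estimates even under complete separation.
    model = LogisticRegression(penalty=None, max_iter=1000)
    return model.fit(X_aug, y_aug, sample_weight=w_aug)

# Completely separated toy data: plain ML estimates diverge, the penalized fit does not.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
print(pseudo_count_logit(X, y).coef_)
```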
In traditional logistic regression models, the effect of the predictors is often assumed to be linear and continuous. Here, we consider a threshold model in which all continuous features are discretized into ordinal levels, which in turn determine the binary response. Both the threshold points and the regression coefficients are unknown and must be estimated. For high-dimensional data, we propose a fusion penalized logistic threshold regression (FILTER) model, in which a fused lasso penalty is employed to control the total variation and shrink coefficients to zero as a means of variable selection. Under mild conditions on the estimates of the unknown threshold points, we establish non-asymptotic error bounds for coefficient estimation and model selection consistency. With a careful characterization of the error propagation, we also show that tree-based methods, such as CART, fulfill the threshold estimation conditions. We find the FILTER model well suited to the early detection and prediction of chronic diseases such as diabetes using physical examination data. The finite-sample behavior of our proposed method is also explored and compared in extensive Monte Carlo studies, which support our theoretical findings.
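Schematically, and with notation introduced here rather than taken from the paper, the FILTER estimator can be viewed as minimizing a penalized logistic negative log-likelihood over the coefficients attached to the ordinal levels of each discretized feature, with a fused lasso (total variation) penalty along adjacent levels:
\[
\hat\beta \in \operatorname*{arg\,min}_{\beta}\;
-\frac{1}{n}\sum_{i=1}^{n}\Bigl\{y_{i}\,x_{i}(\hat\tau)^{\top}\beta-\log\bigl(1+e^{x_{i}(\hat\tau)^{\top}\beta}\bigr)\Bigr\}
+\lambda\sum_{j}\sum_{k}\bigl|\beta_{j,k+1}-\beta_{j,k}\bigr|,
\]
where $x_{i}(\hat\tau)$ encodes the ordinal levels of the $i$th observation induced by the estimated threshold points $\hat\tau$ (obtained, e.g., from CART).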
There is a rich literature on Bayesian methods for density estimation, which characterize the unknown density as a mixture of kernels. Such methods have advantages in terms of providing uncertainty quantification in estimation, while being adaptive to a rich variety of densities. However, relative to frequentist locally adaptive kernel methods, Bayesian approaches can be slow and unstable to implement because they rely on Markov chain Monte Carlo algorithms. To maintain most of the strengths of Bayesian approaches without the computational disadvantages, we propose a class of nearest neighbor-Dirichlet mixtures. The approach starts by grouping the data into neighborhoods based on standard algorithms. Within each neighborhood, the density is characterized via a Bayesian parametric model, such as a Gaussian with unknown parameters. Assigning a Dirichlet prior to the weights on these local kernels, we obtain a pseudo-posterior for the weights and kernel parameters. A simple and embarrassingly parallel Monte Carlo algorithm is proposed to sample from the resulting pseudo-posterior for the unknown density. Desirable asymptotic properties are shown, and the methods are evaluated in simulation studies and applied to a motivating data set in the context of classification.
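A minimal univariate sketch of the construction, assuming plug-in Gaussian kernels within $k$-nearest-neighbor neighborhoods and symmetric Dirichlet draws of the weights (the data, neighborhood size, and Dirichlet concentration are illustrative, and the exact pseudo-posterior for the weights and kernel parameters is not reproduced):

```python
import numpy as np
from scipy import stats
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.0, 1.0, 350)])  # toy data
n, k = x.size, 25

# Step 1: neighborhoods from the k nearest neighbors of each observation.
_, idx = NearestNeighbors(n_neighbors=k).fit(x[:, None]).kneighbors(x[:, None])

# Step 2: local kernel parameters by plug-in within each neighborhood
# (the method described above uses a Bayesian parametric model instead).
mu = x[idx].mean(axis=1)
sd = x[idx].std(axis=1) + 1e-6

# Step 3: Monte Carlo draws of the mixture weights from a symmetric Dirichlet,
# used here purely for illustration of the embarrassingly parallel sampling step.
grid = np.linspace(-4.0, 4.0, 400)
K = stats.norm.pdf(grid[None, :], mu[:, None], sd[:, None])    # n x grid kernel matrix
draws = np.stack([rng.dirichlet(np.full(n, 1.0 / n)) @ K for _ in range(200)])

density_mean = draws.mean(axis=0)                              # pointwise mean density
density_band = np.percentile(draws, [2.5, 97.5], axis=0)       # pointwise uncertainty band
```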
Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE). More precisely, we give nonasymptotic expected $l^2$-distance bounds between the LSE and the true regression function, where the expectation is evaluated on a fresh, counterfactual trajectory. We leverage recently developed information-theoretic methods to establish the optimality of the LSE for nonparametric hypothesis classes in terms of supremum norm metric entropy and a subgaussian parameter. Next, we relate this subgaussian parameter to the stability of the underlying process using notions from dynamical systems theory. When combined, these developments lead to rate-optimal error bounds that scale as $T^{-1/(2+q)}$ for suitably stable processes and hypothesis classes with metric entropy growth of order $\delta^{-q}$. Here, $T$ is the length of the observed trajectory, $\delta \in \mathbb{R}_+$ is the packing granularity, and $q\in (0,2)$ is a complexity term. Finally, we specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS).
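As a concrete instance of the object being analyzed, the sketch below fits a least squares regression of $x_{t+1}$ on $x_{t}$ over an RKHS from a single trajectory, using kernel ridge regression as a computational stand-in for the norm-constrained LSE; the dynamics, noise level, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T = 500

# One trajectory of a stable nonlinear system x_{t+1} = f(x_t) + noise.
f = lambda z: 0.8 * np.tanh(2.0 * z)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = f(x[t]) + 0.1 * rng.standard_normal()

# Nonparametric least squares over an RKHS: regress x_{t+1} on x_t.
lse = KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0).fit(x[:-1, None], x[1:])

# Evaluate the fitted map against the truth on a grid.
grid = np.linspace(-1.5, 1.5, 200)[:, None]
print("sup-norm error on the grid:", np.max(np.abs(lse.predict(grid) - f(grid.ravel()))))
```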
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
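The permutation language modeling objective behind point (1) can be written as follows, with $\mathcal{Z}_T$ denoting the set of all permutations of the index sequence $[1,\ldots,T]$ and $z_t$, $\mathbf{z}_{<t}$ the $t$-th element and first $t-1$ elements of a permutation $\mathbf{z}$:
\[
\max_{\theta}\;\mathbb{E}_{\mathbf{z}\sim\mathcal{Z}_T}\Bigl[\sum_{t=1}^{T}\log p_{\theta}\bigl(x_{z_t}\mid \mathbf{x}_{\mathbf{z}_{<t}}\bigr)\Bigr].
\]
Because the same parameters $\theta$ are shared across all factorization orders, each position is conditioned, in expectation, on every other position, which yields bidirectional context without input corruption.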
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the original problem of learning the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these, we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.