A new class of copulas, termed the MGL copula class, is introduced. The new copula originates from extracting the dependence function of the multivariate generalized log-Moyal-gamma distribution, whose marginals follow the univariate generalized log-Moyal-gamma (GLMGA) distribution introduced in \citet{li2019jan}. The MGL copula can capture nonelliptical, exchangeable, and asymmetric dependencies among marginal coordinates and provides a simple formulation for regression applications. We discuss the probabilistic characteristics of the MGL copula and obtain the corresponding extreme-value copula, named the MGL-EV copula. While the survival MGL copula can also be regarded as a special case of the MGB2 copula of \citet{yang2011generalized}, we show that the proposed model is effective in regression modelling of dependence structures. In addition to a simulation study, we present two applications illustrating the usefulness of the proposed model. The method is implemented in a user-friendly R package: \texttt{rMGLReg}.
We consider the problem of testing for long-range dependence in time-varying coefficient regression models. The covariates and errors are assumed to be locally stationary, which allows for complex temporal dynamics and heteroscedasticity. We develop KPSS-, R/S-, V/S-, and K/S-type statistics based on the nonparametric residuals and propose bootstrap approaches equipped with a difference-based long-run covariance matrix estimator for practical implementation. Under the null hypothesis, under local alternatives, and under fixed alternatives, we derive the limiting distributions of the test statistics, establish the uniform consistency of the difference-based long-run covariance estimator, and justify the bootstrap algorithms theoretically. In particular, the exact local asymptotic power of our testing procedure is of order $O(\log^{-1} n)$, the same as that of the classical KPSS test for long memory in strictly stationary series without covariates. We demonstrate the effectiveness of our tests through extensive simulation studies. The proposed tests are applied to a COVID-19 dataset, where they support long-range dependence in the cumulative confirmed-case series of several countries, and to a Hong Kong circulatory and respiratory dataset, where they identify a new type of ``spurious long memory''.
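For reference, the first of the four statistics can be sketched in its classical scalar form on a generic residual series. This is a hedged illustration only: it uses a simple Bartlett-kernel long-run variance in place of the paper's difference-based estimator, and omits the locally stationary covariate structure.

```python
import numpy as np

def kpss_statistic(e):
    """Classical KPSS-type statistic: n^{-2} * sum_t S_t^2 / sigma^2,
    where S_t are partial sums of the demeaned residuals e and sigma^2
    is a Bartlett-kernel long-run variance estimate (a stand-in for the
    paper's difference-based long-run covariance estimator)."""
    e = np.asarray(e, dtype=float)
    n = e.size
    u = e - e.mean()
    S = np.cumsum(u)
    # Bartlett-kernel long-run variance with a standard bandwidth rule.
    q = int(np.floor(4 * (n / 100.0) ** 0.25))
    lrv = u @ u / n
    for k in range(1, q + 1):
        w = 1.0 - k / (q + 1.0)
        lrv += 2.0 * w * (u[:-k] @ u[k:]) / n
    return (S @ S) / (n ** 2 * lrv)
```

Short-memory residuals yield a small statistic, while strongly persistent residuals (e.g., a random walk) inflate it, which is the behaviour the tests exploit.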
Georeferenced compositional data are prominent in many scientific fields and in spatial statistics. This work addresses the problem of proposing models and methods to analyze and predict, through kriging, this type of data. To this end, a novel class of transformations, named the Isometric $\alpha$-transformation ($\alpha$-IT), is proposed, which encompasses the traditional Isometric Log-Ratio (ILR) transformation. It is shown that the ILR is the limiting case of the $\alpha$-IT as $\alpha$ tends to 0 and that $\alpha=1$ corresponds to a linear transformation of the data. Unlike the ILR, the proposed transformation accepts 0s in the compositions when $\alpha>0$. Maximum likelihood estimation of the parameter $\alpha$ is established. Prediction using kriging on $\alpha$-IT transformed data is validated on synthetic spatial compositional data, using prediction scores computed either in the geometry induced by the $\alpha$-IT or in the simplex. An application to land cover data shows that the relative superiority of the various approaches with respect to a prediction objective depends on whether the compositions contain any zero components. When all components are positive, the limiting cases (ILR or linear transformations) are optimal for none of the considered metrics; an intermediate geometry, corresponding to the $\alpha$-IT with the maximum likelihood estimate of $\alpha$, better describes the dataset in a geostatistical setting. When the number of compositions with 0s is not negligible, some side effects of the transformation get amplified as $\alpha$ decreases, leading to poor kriging performance both within the $\alpha$-IT geometry and for metrics in the simplex.
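The two limiting cases can be checked numerically. The sketch below assumes the $\alpha$-IT takes the common form of a Helmert projection of the $\alpha$-power-rescaled composition; this is an assumption for illustration, and the paper's exact definition may differ in constants.

```python
import numpy as np

def helmert_submatrix(D):
    # (D-1) x D matrix with orthonormal rows, each orthogonal to 1_D.
    H = np.zeros((D - 1, D))
    for i in range(1, D):
        H[i - 1, :i] = 1.0 / np.sqrt(i * (i + 1))
        H[i - 1, i] = -i / np.sqrt(i * (i + 1))
    return H

def alpha_it(x, alpha):
    """Isometric alpha-transformation of a composition x (assumed form:
    Helmert projection of (D * x^alpha / sum(x^alpha) - 1) / alpha).
    Zeros in x are allowed whenever alpha > 0."""
    x = np.asarray(x, dtype=float)
    D = x.size
    u = x ** alpha
    u = u / u.sum()
    z = (D * u - 1.0) / alpha
    return helmert_submatrix(D) @ z

def ilr(x):
    # Isometric log-ratio: Helmert projection of the clr transform.
    x = np.asarray(x, dtype=float)
    c = np.log(x) - np.log(x).mean()
    return helmert_submatrix(x.size) @ c
```

As $\alpha \to 0$ the $\alpha$-IT converges to the ILR, while compositions with zero parts remain finite under the $\alpha$-IT for $\alpha > 0$, where the ILR is undefined.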
$d$-dimensional efficient range-summability ($d$D-ERS) of a long list of random variables (RVs) is a fundamental algorithmic problem with applications to two important families of database problems, namely fast approximate wavelet tracking (FAWT) on data streams and approximate answering of range-sum queries over a data cube. In this work, we propose a novel solution framework for $d$D-ERS with $d>1$ on RVs that follow a Gaussian or Poisson distribution. Our solutions are the first to compute any rectangular range-sum of the RVs in polylogarithmic time. Furthermore, we develop a novel $k$-wise independence theory that allows our $d$D-ERS solutions to achieve both high computational efficiency and strong provable independence guarantees. Finally, we generalize existing DST-based solutions for 1D-ERS to 2D and characterize a sufficient, and likely necessary, condition on the target distribution for this generalization to be feasible.
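The dyadic-decomposition idea behind DST-based ERS can be sketched in 1D for Gaussian RVs: the sum over any dyadic range is generated consistently top-down (a child's sum is drawn conditionally on its parent's), so an arbitrary range-sum is assembled from $O(\log n)$ dyadic nodes. The node-seeding scheme below is illustrative, not the paper's construction; $n$ is assumed to be a power of two.

```python
import numpy as np

def node_value(level, idx, n):
    """Sum of the N(0,1) RVs in the dyadic block [idx*L, (idx+1)*L),
    L = n / 2^level, generated consistently top-down."""
    L = n >> level
    if level == 0:
        rng = np.random.default_rng([0, 0, n])  # illustrative master seed
        return rng.normal(0.0, np.sqrt(n))
    parent = node_value(level - 1, idx // 2, n)
    # If X, Y ~ N(0, L) i.i.d. and X + Y = parent, then
    # X | parent ~ N(parent / 2, L / 2).
    rng = np.random.default_rng([level, idx // 2, n])
    left = parent / 2.0 + np.sqrt(L / 2.0) * rng.standard_normal()
    return left if idx % 2 == 0 else parent - left

def range_sum(lo, hi, n, level=0, idx=0):
    """Sum over [lo, hi), assembled from O(log n) dyadic nodes; each
    node value costs O(log n) to regenerate, hence polylog per query."""
    L = n >> level
    a, b = idx * L, (idx + 1) * L
    if hi <= a or b <= lo:
        return 0.0
    if lo <= a and b <= hi:
        return node_value(level, idx, n)
    return (range_sum(lo, hi, n, level + 1, 2 * idx)
            + range_sum(lo, hi, n, level + 1, 2 * idx + 1))
```

The key invariant is consistency: sums over adjacent dyadic blocks add up exactly to the sum over their union, no matter which decomposition a query uses.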
Quantile regression has been successfully used to study heterogeneous and heavy-tailed data. Varying-coefficient models are frequently used to capture changes in the effect of input variables on the response as a function of an index or time. In this work, we study high-dimensional varying-coefficient quantile regression models and develop new tools for statistical inference. We focus on the development of valid confidence intervals and honest tests for nonparametric coefficients at a fixed time point and quantile, while allowing for a high-dimensional setting where the number of input variables exceeds the sample size. Performing statistical inference in this regime is challenging due to the use of model selection techniques during estimation. Nevertheless, we develop valid inferential tools that are applicable to a wide range of data-generating processes and do not suffer from biases introduced by model selection. We perform numerical simulations to demonstrate the finite-sample performance of our method, and we illustrate its application with a real-data example.
We revisit the classical problem of nonparametric density estimation, but impose local differential privacy constraints. Under such constraints, the original multivariate data $X_1,\ldots,X_n \in \mathbb{R}^d$ cannot be directly observed, and all estimators are functions of the randomised output of a suitable privacy mechanism. The statistician is free to choose the form of the privacy mechanism, and in this work we propose to add Laplace-distributed noise to a discretisation of the location of each vector $X_i$. Based on these randomised data, we design a novel estimator of the density function, which can be viewed as a privatised version of the well-studied histogram density estimator. Our theoretical results include universal pointwise consistency and strong universal $L_1$-consistency. In addition, a convergence rate over classes of Lipschitz functions is derived, which is complemented by a matching minimax lower bound. We illustrate the trade-off between data utility and privacy with a small simulation study.
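A minimal sketch of such a privatised histogram, assuming data on $[0,1)^d$ and the standard $\varepsilon$-local-DP Laplace mechanism on one-hot bin vectors (sensitivity 2, hence scale $2/\varepsilon$); the binning scheme and constants are illustrative, not necessarily those of the paper.

```python
import numpy as np

def private_histogram_density(X, bins, eps, rng):
    """Locally private histogram density estimate on [0,1)^d.
    Each individual reports their one-hot bin-membership vector plus
    i.i.d. Laplace(2/eps) noise per coordinate; the curator aggregates
    the noisy reports and rescales by bin volume."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    m = bins ** d
    # Bin index of each point on a regular grid over [0,1)^d.
    coords = np.clip((X * bins).astype(int), 0, bins - 1)
    flat = np.ravel_multi_index(coords.T, (bins,) * d)
    onehot = np.zeros((n, m))
    onehot[np.arange(n), flat] = 1.0
    noisy = onehot + rng.laplace(scale=2.0 / eps, size=(n, m))
    counts = noisy.sum(axis=0).clip(min=0.0)  # negative counts truncated
    vol = (1.0 / bins) ** d
    return (counts / (n * vol)).reshape((bins,) * d)
```

The per-individual noise dominates for small $n$ or small $\varepsilon$, which is exactly the utility-privacy trade-off the abstract refers to.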
We introduce a comprehensive Bayesian multivariate predictive inference framework. The basis for our framework is a hierarchical Bayesian model that is a mixture of finite Polya trees corresponding to multiple dyadic partitions of the unit cube. Given a sample of observations from an unknown multivariate distribution, the posterior predictive distribution is used to model and generate future observations from the unknown distribution. We illustrate the implementation of our methodology and study its performance on simulated examples. Using our Bayesian model, we also introduce an algorithm for constructing conformal prediction sets, which provide finite-sample probability guarantees for future observations.
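The generic split-conformal recipe behind such prediction sets can be sketched with any fitted (e.g., posterior predictive) density as the conformity score; this is the textbook construction, not the paper's Polya-tree-specific algorithm.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold for the 'high density' set
    {y : score(y) >= t}: t is the floor((n+1)*alpha)-th smallest
    calibration score, which gives >= 1 - alpha marginal coverage
    for a future observation, assuming exchangeability."""
    s = np.sort(np.asarray(cal_scores, dtype=float))
    k = max(int(np.floor((s.size + 1) * alpha)), 1)
    return s[k - 1]
```

The finite-sample guarantee holds regardless of whether the fitted density is correct; a better density model only makes the set smaller, not less valid.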
We consider the problem of automatically proving resource bounds. That is, we study how to prove that an integer-valued resource variable is bounded by a given program expression. Automatic resource-bound analysis has recently received significant attention because of a number of important applications (e.g., detecting performance bugs, preventing algorithmic-complexity attacks, identifying side-channel vulnerabilities), where the focus has often been on developing precise amortized reasoning techniques to infer the tightest possible resource usage. While such innovations remain critical, we observe that fully precise amortization is not always necessary to prove a bound of interest. In fact, by amortizing selectively, the supporting invariants needed can be simpler, making the invariant inference task more feasible and predictable. We present a framework for selectively amortized analysis that mixes worst-case and amortized reasoning via a property decomposition and a program transformation. We show that proving bounds in any such decomposition yields a sound resource bound in the original program, and we give an algorithm for selecting a reasonable decomposition.
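The gap between worst-case and amortized reasoning can be seen in the textbook stack-with-multipop example (an illustration of the trade-off the framework navigates, not the paper's own analysis): a single operation may cost $O(|stack|)$, yet the total cost is bounded by twice the number of pushes, because each element pays once on push and once on pop.

```python
def run(ops):
    """Stack with multipop, instrumented with a resource counter.
    Worst case per operation is O(len(stack)), but the amortized
    bound 'ticks <= 2 * #pushes' holds for every run: each pushed
    element is charged once when pushed and once when popped."""
    stack, ticks = [], 0
    for op in ops:
        if op == "push":
            stack.append(1)
            ticks += 1
        else:  # "popall": pop every remaining element
            while stack:
                stack.pop()
                ticks += 1
    return ticks
```

A purely worst-case analysis would multiply the per-operation maximum by the number of operations and grossly overestimate the total; an amortized invariant recovers the tight linear bound.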
Customer behavior is often assumed to follow weak rationality, which implies that adding a product to an assortment will not increase the choice probability of another product in that assortment. However, a growing body of research has revealed that customers are not necessarily rational when making decisions. In this paper, we propose a new nonparametric choice model that relaxes this assumption and can model a wider range of customer behavior, such as decoy effects between products. In this model, each customer type is associated with a binary decision tree, which represents a decision process for making a purchase based on checking whether specific products are present in the assortment. Together with a probability distribution over customer types, we show that the resulting model -- a decision forest -- is able to represent any customer choice model, including models that are inconsistent with weak rationality. We theoretically characterize the depth of the forest needed to fit a data set of historical assortments and prove that, with high probability, a forest whose depth scales logarithmically in the number of assortments is sufficient to fit most data sets. We also propose two practical algorithms -- one based on column generation and one based on random sampling -- for estimating such models from data. Using synthetic data and real transaction data exhibiting non-rational behavior, we show that the model outperforms both rational and non-rational benchmark models in out-of-sample predictive ability.
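A minimal implementation of this decision-forest idea, with hypothetical product names, shows how a mixture of purchase trees can violate weak rationality (adding a decoy product raises another product's choice probability):

```python
class Node:
    """Internal node: branch on whether `product` is in the assortment
    (left = present, right = absent). Leaf: `choice` is the purchased
    product, or None for no purchase."""
    def __init__(self, product=None, left=None, right=None, choice=None):
        self.product, self.left, self.right, self.choice = product, left, right, choice

def tree_choice(node, assortment):
    # Walk the tree until a leaf is reached.
    while node.product is not None:
        node = node.left if node.product in assortment else node.right
    return node.choice

def forest_choice_probs(forest, assortment):
    """Choice probabilities under a distribution over customer types,
    each type (tree, weight) being one purchase decision tree."""
    probs = {}
    for tree, w in forest:
        c = tree_choice(tree, assortment)
        probs[c] = probs.get(c, 0.0) + w
    return probs
```

For example, a customer type that buys 'A' only when a decoy 'D' is offered makes P(choose 'A') increase when 'D' is added to the assortment, which no weakly rational (e.g., ranking-based) model can represent.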
Confounding variables are a recurrent challenge for causal discovery and inference. In many situations, complex causal mechanisms only manifest themselves in extreme events, or take simpler forms in the extremes. Motivated by data on extreme river flows and precipitation, we introduce a new causal discovery methodology for heavy-tailed variables that uses a known potential confounder as a covariate: the confounder's effect is almost entirely removed when the variables have comparable tails, and is reduced sufficiently to enable correct causal inference when the confounder has a heavier tail. We introduce a new parametric estimator for the existing causal tail coefficient, together with a permutation test. Simulations show that the methods work well, and we apply the ideas to the motivating dataset.
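For orientation, the existing (nonparametric) causal tail coefficient referred to above can be sketched in its basic form, without the confounder adjustment or the paper's new parametric estimator: it averages the rescaled rank of $Y$ over the observations where $X$ is largest, and is close to 1 in the causal direction for heavy-tailed data.

```python
import numpy as np

def causal_tail_coefficient(x, y, k):
    """Empirical causal tail coefficient Gamma_{X->Y}: the average
    rescaled rank of y over the k observations with the largest x.
    Values near 1 indicate that extremes of x propagate to y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    ranks_y = np.argsort(np.argsort(y)) + 1   # ranks 1..n of y
    top_x = np.argsort(x)[-k:]                # indices of the k largest x
    return ranks_y[top_x].mean() / n
```

Asymmetry between the two directions ($\Gamma_{X\to Y}$ vs. $\Gamma_{Y\to X}$) is what identifies the causal direction.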
Finding a suitable density function is essential for density-based clustering algorithms such as DBSCAN and DPC. A naive density, corresponding to the indicator function of a unit $d$-dimensional Euclidean ball, is commonly used in these algorithms, but it fails to capture local features in complex datasets. To tackle this issue, we propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness. Furthermore, we develop a surrogate that can be computed efficiently in linear time and space, and we prove that it is asymptotically equivalent to the kernel diffusion density function. Extensive empirical experiments on benchmark and large-scale face-image datasets show that the proposed approach not only achieves a significant improvement over classic density-based clustering algorithms but also outperforms state-of-the-art face clustering methods by a large margin.
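The contrast between the naive ball-indicator density and a locally adaptive one can be illustrated with a simple $k$-NN-bandwidth Gaussian kernel, used here only as a stand-in for the proposed kernel diffusion density (whose exact form is not reproduced):

```python
import numpy as np

def ball_density(X, r):
    """Naive density used in DBSCAN/DPC-style algorithms: the number
    of points within a Euclidean ball of radius r around each point."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (D <= r).sum(axis=1).astype(float)

def adaptive_kernel_density(X, k=8):
    """Gaussian kernel density with a per-point bandwidth equal to the
    distance to the k-th nearest neighbour -- a simple adaptive density
    that responds to varying local scale (an illustrative stand-in,
    not the paper's kernel diffusion density)."""
    n, d = X.shape
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    h = np.sort(D, axis=1)[:, k]          # k-NN distance per point
    K = np.exp(-(D / h[:, None]) ** 2)    # per-point-scaled kernel
    return K.sum(axis=1) / (h ** d * n)   # normalise by local volume
```

With a fixed radius, the ball density uses one global scale for all regions; the adaptive kernel instead shrinks its bandwidth inside tight clusters and widens it in sparse regions, which is the behaviour the abstract attributes to the diffusion density.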