We consider the problem of testing for long-range dependence in time-varying coefficient regression models. The covariates and errors are assumed to be locally stationary, which allows complex temporal dynamics and heteroscedasticity. We develop KPSS, R/S, V/S, and K/S-type statistics based on the nonparametric residuals, and propose bootstrap approaches equipped with a difference-based long-run covariance matrix estimator for practical implementation. Under the null hypothesis, under local alternatives, and under fixed alternatives, we derive the limiting distributions of the test statistics, establish the uniform consistency of the difference-based long-run covariance estimator, and justify the bootstrap algorithms theoretically. In particular, the exact local asymptotic power of our testing procedure is of order $O(\log^{-1} n)$, the same as that of the classical KPSS test for long memory in strictly stationary series without covariates. We demonstrate the effectiveness of our tests through extensive simulation studies. The proposed tests are applied to a COVID-19 dataset, finding evidence of long-range dependence in the cumulative confirmed-case series of several countries, and to a Hong Kong circulatory and respiratory dataset, identifying a new type of 'spurious long memory'.
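The KPSS-type idea referenced above can be sketched in its classical form: normalized squared partial sums of demeaned residuals, scaled by a Bartlett-kernel long-run variance estimate. This is a textbook illustration only, not the paper's locally stationary procedure; the bandwidth rule and the example series are our own choices.

```python
import random

def kpss_statistic(resid, bandwidth=None):
    """Classical KPSS-type statistic: sum of squared partial sums of the
    demeaned residuals, scaled by n^2 times a Bartlett-kernel long-run
    variance estimate."""
    n = len(resid)
    mean = sum(resid) / n
    e = [r - mean for r in resid]
    partial, s = [], 0.0
    for v in e:                      # partial sums S_t = e_1 + ... + e_t
        s += v
        partial.append(s)
    # a common rule-of-thumb bandwidth (an assumption, not from the paper)
    q = bandwidth if bandwidth is not None else int(round(4 * (n / 100) ** 0.25))
    lrv = sum(v * v for v in e) / n  # lag-0 autocovariance
    for k in range(1, q + 1):        # Bartlett-weighted autocovariances
        w = 1 - k / (q + 1)
        lrv += 2 * w * sum(e[t] * e[t - k] for t in range(k, n)) / n
    return sum(S * S for S in partial) / (n * n * lrv)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(200)]   # short-memory benchmark
trend = [t / 100 for t in range(200)]              # highly persistent residuals
```

Persistent residuals drive the statistic up, while short-memory noise keeps it small, which is what makes rejection for large values a test against long-range dependence.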
Fitting regression models with many multivariate responses and covariates can be challenging, but such responses and covariates sometimes have tensor-variate structure. We extend the classical multivariate regression model to exploit such structure in two ways: first, we impose four types of low-rank tensor formats on the regression coefficients. Second, we model the errors using the tensor-variate normal distribution, which imposes a Kronecker-separable format on the covariance matrix. We obtain maximum likelihood estimators via block-relaxation algorithms and derive their computational complexity and asymptotic distributions. Our regression framework enables us to formulate tensor-variate analysis of variance (TANOVA) methodology. This methodology, applied in a one-way TANOVA layout, enables us to identify cerebral regions significantly associated with the interaction between suicide attempters or non-attempter ideators and positive-, negative- or death-connoting words in a functional Magnetic Resonance Imaging study. Another application uses three-way TANOVA on the Labeled Faces in the Wild image dataset to distinguish facial characteristics related to ethnic origin, age group and gender.
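The Kronecker-separable format can be illustrated with a small sketch (made-up 2×2 covariance factors, not the authors' code): for a matrix-variate normal error with row covariance U and column covariance V, the covariance of the vectorized error is the Kronecker product V ⊗ U, so two small factors describe a much larger covariance matrix.

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

# Hypothetical factors: row covariance U and column covariance V.
U = [[2.0, 1.0], [1.0, 2.0]]
V = [[3.0, 0.0], [0.0, 3.0]]
# Covariance of the vectorized 2x2 error matrix: a 4x4 matrix built from
# 2 * 3 = 6 distinct factor entries instead of 10 unrestricted ones.
cov_vec = kron(V, U)
```

The parameter savings grow quickly with dimension, which is the point of the separable format.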
Motivated by an investigation of the relationship between progesterone and the day of the menstrual cycle in a longitudinal study, we propose a multi-kink quantile regression model for longitudinal data. The model relaxes the linearity condition and assumes different regression forms in different regions of the domain of the threshold covariate. Two estimation procedures are proposed for the regression coefficients and the kink locations: one is a computationally efficient profile estimator under the working-independence framework, while the other accounts for within-subject correlations through an unbiased generalized estimating equation approach. The selection consistency of the number of kink points and the asymptotic normality of both proposed estimators are established. We further construct a rank score test, based on partial subgradients, for the existence of a kink effect in longitudinal studies, and derive both the null distribution and the local alternative distribution of the test statistic. Simulation studies show that the proposed methods have excellent finite-sample performance. In the application to the longitudinal progesterone data, we identify two kink points in the progesterone curves over different quantiles and observe that the progesterone level remains stable before the day of ovulation, increases quickly for five to six days after ovulation, and then stabilizes again or even drops slightly.
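A multi-kink model is a continuous piecewise-linear model: the regression is linear between kink locations and the slope changes at each kink. A minimal sketch of the corresponding design row (the hinge form is standard; the kink locations below are hypothetical):

```python
def kink_basis(x, kinks):
    """Design row for a continuous piecewise-linear (multi-kink) model:
    intercept, x, and one hinge term (x - k)_+ per kink location k.
    The coefficient on each hinge is the slope change at that kink."""
    return [1.0, x] + [max(x - k, 0.0) for k in kinks]
```

For example, with kinks at 2 and 7, the point x = 5 sits past the first kink only, so its row is [1, 5, 3, 0].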
We introduce a procedure for conditional density estimation under logarithmic loss, which we call SMP (Sample Minmax Predictor). This estimator minimizes a new general excess risk bound for statistical learning. On standard examples, this bound scales as $d/n$, with $d$ the model dimension and $n$ the sample size, and critically remains valid under model misspecification. Being an improper (out-of-model) procedure, SMP improves over within-model estimators such as the maximum likelihood estimator, whose excess risk degrades under misspecification. Compared to approaches that reduce to the sequential problem, our bounds remove suboptimal $\log n$ factors and can handle unbounded classes. For the Gaussian linear model, the predictions and risk bound of SMP are governed by the leverage scores of the covariates, nearly matching the optimal risk in the well-specified case without conditions on the noise variance or the approximation error of the linear model. For logistic regression, SMP provides a non-Bayesian approach to the calibration of probabilistic predictions that relies on virtual samples and can be computed by solving two logistic regressions. It achieves a non-asymptotic excess risk of $O((d + B^2R^2)/n)$, where $R$ bounds the norm of the features and $B$ that of the comparison parameter; by contrast, no within-model estimator can achieve a better rate than $\min({B R}/{\sqrt{n}}, {d e^{BR}}/{n})$ in general. This provides a more practical alternative to Bayesian approaches, which require approximate posterior sampling, thereby partly addressing a question raised by Foster et al. (2018).
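Since SMP's Gaussian-linear-model behavior is governed by leverage scores, it may help to recall what those are. The sketch below uses the textbook closed form for simple linear regression with an intercept (a standard illustration, not SMP itself):

```python
def leverage_scores(x):
    """Leverage scores for simple linear regression with an intercept:
    h_i = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2.
    They measure how unusual each covariate value is; they sum to the
    number of parameters (here 2)."""
    n = len(x)
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    return [1.0 / n + (xi - xbar) ** 2 / ss for xi in x]
```

Points far from the covariate mean carry higher leverage, which is why leverage-dependent corrections matter most for atypical covariates.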
We develop a new method to fit the multivariate response linear regression model that exploits a parametric link between the regression coefficient matrix and the error covariance matrix. Specifically, we assume that the correlations between entries in the multivariate error random vector are proportional to the cosines of the angles between their corresponding regression coefficient matrix columns, so that, as the angle between two regression coefficient matrix columns decreases, the correlation between the corresponding errors increases. We highlight two models under which this parameterization arises: the latent variable reduced-rank regression model and the errors-in-variables regression model. We propose a novel non-convex weighted residual sum of squares criterion that exploits this parameterization and admits a new class of penalized estimators. The criterion is minimized with an accelerated proximal gradient descent algorithm. Our method is used to study the association between microRNA expression and cancer drug activity measured on the NCI-60 cell lines. An R package implementing our method, MCMVR, is available at github.com/ajmolstad/MCMVR.
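The cosine parameterization can be sketched directly (illustrative pure Python, not the MCMVR implementation): the (j, k) error correlation is the cosine of the angle between columns j and k of the coefficient matrix.

```python
import math

def error_correlations(B):
    """Correlation matrix whose (j, k) entry is the cosine of the angle
    between columns j and k of the coefficient matrix B (list of rows)."""
    q = len(B[0])
    cols = [[row[j] for row in B] for j in range(q)]
    norms = [math.sqrt(sum(x * x for x in c)) for c in cols]
    return [[sum(a * b for a, b in zip(cols[j], cols[k])) / (norms[j] * norms[k])
             for k in range(q)]
            for j in range(q)]
```

Orthogonal columns give uncorrelated errors, and nearly parallel columns give correlations near one, matching the stated link.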
We assume a nonparametric regression model with signals given by the sum of a piecewise constant function and a smooth function. To detect the change-points and estimate the regression functions, we propose PCpluS, a combination of the fused Lasso and kernel smoothing. In contrast to existing approaches, it explicitly uses the assumption that the signal can be decomposed into a piecewise constant and a smooth function when detecting change-points. This is motivated by several applications and by theoretical results about the partial linear model. Tuning parameters are selected by cross-validation. We argue that in this setting minimizing the L1-loss is superior to minimizing the L2-loss. We also highlight important consequences for cross-validation in piecewise constant change-point regression. Simulations demonstrate that our approach has a small average mean squared error and detects change-points well, and we apply the methodology to genome sequencing data to detect copy number variations. Finally, we demonstrate its flexibility by combining it with smoothing splines and by proposing extensions to multivariate and filtered data.
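One of the two ingredients, the kernel smoother for the smooth component, can be sketched as a Nadaraya-Watson estimator (an illustrative stand-in; PCpluS pairs a kernel smoother with the fused Lasso for the piecewise constant part):

```python
import math

def kernel_smooth(x, y, bandwidth):
    """Nadaraya-Watson estimator with a Gaussian kernel: returns a function
    that evaluates the locally weighted average of y at any point x0."""
    def fit(x0):
        w = [math.exp(-0.5 * ((xi - x0) / bandwidth) ** 2) for xi in x]
        return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return fit
```

In the decomposition, this smoother would be applied after subtracting the current piecewise constant estimate, and the two fits alternated.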
Univariate and multivariate general linear regression models subject to linear inequality constraints arise in many scientific applications. The linear inequality restrictions on model parameters are often available from phenomenological knowledge and are motivated by machine learning applications to high-consequence engineering systems (Agrell, 2019; Veiga and Marrel, 2012). Some studies of multiple linear models consider known linear combinations of the regression coefficients restricted between upper and lower bounds. In the present paper, we consider both univariate and multivariate general linear models subject to this kind of linear restriction. So far, Bayesian research on the univariate case has assumed that the coefficient matrix of the linear restrictions is a square matrix of full rank; this condition, however, does not always hold. Another difficulty arises at the estimation step: the Gibbs algorithm exhibits, in most cases, slow convergence. This paper presents a Bayesian method to estimate the regression parameters that places no condition on the matrix of constraints defining the linear inequality restrictions. For the multivariate case, our Bayesian method estimates the regression parameters when the number of constraints is less than the number of regression coefficients in each multiple linear model. We examine the efficiency of our Bayesian method through simulation studies for both univariate and multivariate regressions, and then illustrate that our algorithm converges faster than previous methods. Finally, we use our approach to analyze two real datasets.
We propose three novel consistent specification tests for quantile regression models that generalize existing tests in three ways. First, we allow the covariate effects to be quantile-dependent and nonlinear. Second, we allow the conditional quantile functions to be approximated by appropriate basis functions rather than specified parametrically. We are hence able to test for functional forms beyond linearity, while retaining linear effects as special cases. In both cases, the induced class of conditional distribution functions is tested with a Cram\'{e}r-von Mises type test statistic, for which we derive the theoretical limit distribution and propose a bootstrap method. Third, to increase the power of the tests, we suggest a modified test statistic. We highlight the merits of our tests in a detailed Monte Carlo study and two real data examples. Our first application, to conditional income distributions in Germany, indicates that significant differences persist not only between East and West but also across the quantiles of the conditional income distributions, when conditioning on age and year. The second application, to data from the Australian national electricity market, reveals the importance of using interaction effects for modelling the highly skewed and heavy-tailed distributions of energy prices conditional on day, time of day and demand.
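The Cramér-von Mises idea can be illustrated with the classical one-sample statistic (a textbook sketch, not the paper's residual-process version): it aggregates the squared distance between a hypothesized distribution function and the empirical one.

```python
def cramer_von_mises(sample, cdf):
    """One-sample Cramer-von Mises statistic for H0: data ~ cdf:
    W^2 = 1/(12n) + sum_i (cdf(x_(i)) - (2i - 1)/(2n))^2,
    computed over the order statistics x_(1) <= ... <= x_(n)."""
    xs = sorted(sample)
    n = len(xs)
    return 1.0 / (12 * n) + sum(
        (cdf(x) - (2 * i - 1) / (2 * n)) ** 2 for i, x in enumerate(xs, 1))
```

Under a perfect fit the sum vanishes and only the 1/(12n) term remains, so large values signal misspecification.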
We propose optimal Bayesian two-sample tests for testing the equality of high-dimensional mean vectors and covariance matrices between two populations. In many applications, including genomics and medical imaging, it is natural to assume that only a few entries of the two mean vectors or covariance matrices differ. Many existing tests that rely on aggregating the differences between empirical means or covariance matrices are not optimal or yield low power under such setups. Motivated by this, we develop Bayesian two-sample tests employing a divide-and-conquer idea, which are powerful especially when the difference between the two populations is sparse but large. The proposed two-sample tests admit closed-form Bayes factors and allow scalable computation even in high dimensions. We prove that the proposed tests are consistent under relatively mild conditions compared to existing tests in the literature. Furthermore, the testable regions of the proposed tests turn out to be rate-optimal. Simulation studies show clear advantages of the proposed tests over other state-of-the-art methods in various scenarios. Our tests are also applied to the analysis of gene expression data from two cancer datasets.
Serverless computing is leading the way to a simplified and general-purpose programming model for the cloud. A key enabler behind serverless is efficient load balancing, which routes continuous workloads to appropriate backend resources. However, the current load balancing algorithms implemented in Kubernetes-native serverless platforms are simple heuristics without performance guarantees. Although policies such as Pod or JFIQ yield asymptotically optimal mean response time, the information they depend on is usually unavailable. In addition, dispatching jobs with strict deadlines, fractional workloads, and maximum-parallelism bounds to limited resources online is difficult because the resource allocation decisions for jobs are intertwined. To design an online load balancing algorithm that makes no distributional assumptions while maximizing the social welfare, we construct several pseudo-social welfare functions and cost functions, where the latter estimate the marginal cost of provisioning service to each newly arrived job based on the present resource surplus. The proposed algorithm, named OnSocMax, works by following the solutions of several convex pseudo-social welfare maximization problems. It is proved to be $\alpha$-competitive for some $\alpha \geq 2$. We also validate OnSocMax with simulations, and the results show that it distinctly outperforms several handcrafted benchmarks.
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one per stage of intervention, that find effective treatments for individual patients according to patient information history. DTRs can be estimated from models that include the interaction between treatment and a small number of covariates, which are often chosen a priori. However, with increasingly large and complex data being collected, it is difficult to know which prognostic factors might be relevant in the treatment rule. A more data-driven approach to selecting these covariates might therefore improve the estimated decision rules and simplify the models, making them easier to interpret. We propose a variable selection method for DTR estimation using penalized dynamic weighted least squares. Our method has the strong heredity property, that is, an interaction term can be included in the model only if the corresponding main terms have also been selected. Through simulations, we show that our method has both the double robustness property and the oracle property, and that it compares favorably with other variable selection approaches.
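The strong heredity property can be stated as a simple filter (a toy sketch with hypothetical covariate names, not the penalized estimation procedure itself):

```python
def enforce_strong_heredity(selected_mains, candidate_interactions):
    """Strong heredity: keep an interaction (j, k) only when both main
    effects j and k are themselves selected."""
    mains = set(selected_mains)
    return [(j, k) for (j, k) in candidate_interactions
            if j in mains and k in mains]
```

In the penalized estimator this constraint is built into the penalty rather than applied as a post-hoc filter, but the admissible models are the same.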