We consider the problem of jointly modeling and clustering populations of tensors by introducing a high-dimensional tensor mixture model with heterogeneous covariances. To effectively tackle the high dimensionality of tensor objects, we employ plausible dimension reduction assumptions that exploit the intrinsic structures of tensors, such as low-rankness in the mean and separability in the covariance. For estimation, we develop an efficient high-dimensional expectation-conditional-maximization (HECM) algorithm that breaks the intractable optimization in the M-step into a sequence of much simpler conditional optimization problems, each of which is convex, admits regularization, and has a closed-form updating formula. Our theoretical analysis is challenged both by the non-convexity of the EM-type estimation and by having access only to the solutions of the conditional maximizations in the M-step, leading to the notion of dual non-convexity. We demonstrate that the proposed HECM algorithm, with an appropriate initialization, converges geometrically to a neighborhood that is within statistical precision of the true parameter. The efficacy of our proposed method is demonstrated through comparative numerical experiments and an application to a medical study, where our proposal achieves improved clustering accuracy over existing benchmark methods.
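As a concrete illustration of the expectation-conditional-maximization pattern that HECM builds on, the sketch below runs ECM on an ordinary Gaussian mixture, splitting the M-step into closed-form conditional updates for the weights, means, and covariances. The function name, the Gaussian simplification, and all hyperparameters are illustrative assumptions, not the authors' tensor algorithm.

```python
import numpy as np

def ecm_gaussian_mixture(X, K=2, n_iter=50, seed=0):
    """Toy ECM loop: the M-step is split into conditional updates
    (weights, then means, then covariances), each in closed form.
    Illustrates the ECM pattern only, not the tensor HECM itself."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, K, replace=False)]
    Sigma = np.stack([np.eye(d)] * K)
    for _ in range(n_iter):
        # E-step: posterior responsibilities under current parameters.
        logp = np.stack([
            -0.5 * np.sum((X - mu[k]) @ np.linalg.inv(Sigma[k]) * (X - mu[k]), axis=1)
            - 0.5 * np.linalg.slogdet(Sigma[k])[1] + np.log(pi[k])
            for k in range(K)], axis=1)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        # CM-step 1: update mixing weights with means/covariances fixed.
        pi = nk / n
        # CM-step 2: update means with covariances held fixed.
        mu = (r.T @ X) / nk[:, None]
        # CM-step 3: update covariances with the new means held fixed.
        for k in range(K):
            Z = X - mu[k]
            Sigma[k] = (r[:, k, None] * Z).T @ Z / nk[k] + 1e-6 * np.eye(d)
    return pi, mu, Sigma
```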
Hierarchical forecasting problems arise when time series have a natural group structure, and predictions at multiple levels of aggregation and disaggregation across the groups are needed. In such problems, it is often desired to satisfy the aggregation constraints in a given hierarchy, referred to as hierarchical coherence in the literature. Maintaining hierarchical coherence while producing accurate forecasts can be a challenging problem, especially in the case of probabilistic forecasting. We present a novel method, the Deep Poisson Mixture Network (DPMN), capable of accurate and coherent probabilistic forecasts for hierarchical time series. It relies on the combination of neural networks and a statistical model for the joint distribution of the hierarchical multivariate time series structure. By construction, the model guarantees hierarchical coherence and provides simple rules for aggregation and disaggregation of the predictive distributions. We perform an extensive empirical evaluation comparing the DPMN to other state-of-the-art methods that produce hierarchically coherent probabilistic forecasts on multiple public datasets. Compared to existing coherent probabilistic models, we obtained a relative improvement in the overall Continuous Ranked Probability Score (CRPS) of 17.1% on Australian domestic tourism data, 24.2% on the Favorita grocery sales dataset, and 6.9% on a San Francisco Bay Area highway traffic dataset.
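A minimal sketch of how coherence by construction can work: sample bottom-level counts from a Poisson mixture and obtain every aggregate level by summation, so each sample path satisfies the hierarchy exactly. The hierarchy, rates, and variable names below are illustrative assumptions, not the DPMN architecture.

```python
import numpy as np

# Sample bottom-level series from a Poisson mixture, then obtain every
# aggregate level by summation, so all aggregation constraints hold
# automatically for each sample path.
rng = np.random.default_rng(0)
n_bottom, n_mix, n_samples = 4, 3, 1000

w = rng.dirichlet(np.ones(n_mix))                   # mixture weights
lam = rng.gamma(5.0, 1.0, size=(n_mix, n_bottom))   # Poisson rates per component

comp = rng.choice(n_mix, size=n_samples, p=w)       # pick a mixture component
bottom = rng.poisson(lam[comp])                     # bottom-level draws, (n_samples, n_bottom)

# Summation matrix for a 2-level hierarchy: total, two mid nodes, four leaves.
S = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
upper = bottom @ S.T                                # coherent aggregate samples
assert np.all(upper[:, 0] == bottom.sum(axis=1))    # coherence holds exactly
```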
Inverse problems in which we seek to reconstruct a piecewise constant field, modeled using multiple level sets, pose several challenges. Adopting a Bayesian viewpoint, we impose prior distributions both on the level set functions that determine the piecewise constant regions and on the parameters that determine their magnitudes. We develop a Gauss-Newton approach with a backtracking line search to efficiently compute the maximum a posteriori (MAP) estimate as a solution to the inverse problem. We use the Gauss-Newton Laplace approximation to construct a Gaussian approximation of the posterior distribution and use preconditioned Krylov subspace methods to sample from the resulting approximation. To visualize the uncertainty associated with the parameter reconstructions, we compute the approximate posterior variance using a matrix-free Monte Carlo diagonal estimator, which we develop in this paper. We demonstrate the benefits of our approach and solvers on synthetic test problems (photoacoustic and hydraulic tomography, respectively a linear and a nonlinear inverse problem) as well as on an application to X-ray imaging with real data.
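The matrix-free Monte Carlo diagonal estimator can be sketched generically: with Rademacher probes z, diag(A) is estimated from the elementwise products z * (Az), touching A only through matrix-vector products. This is the standard Hutchinson/Bekas-style estimator, shown as a plain sketch rather than the paper's exact preconditioned implementation.

```python
import numpy as np

def mc_diagonal(matvec, n, n_probe=200, seed=0):
    """Matrix-free Monte Carlo diagonal estimator: diag(A) ~ E[z * (A z)]
    for Rademacher probes z, using only products A z supplied by `matvec`."""
    rng = np.random.default_rng(seed)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        Az = matvec(z)
        num += z * Az
        den += z * z                          # = 1 for Rademacher, kept general
    return num / den

# Usage: estimate the diagonal of a matrix we only touch via products.
A = np.diag([1.0, 2.0, 3.0]) + 0.1
est = mc_diagonal(lambda v: A @ v, n=3, n_probe=5000)
```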
We investigate a clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We start by considering Gaussian mixtures with two equally sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We prove that its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence that supports the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, encompassing Gaussian and strongly log-concave distributions.
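For intuition, a toy version of the spectral approach: the sample covariance of a balanced two-component mixture with means $\pm\mu$ is $\Sigma + \mu\mu^\top$, so the top eigenvector is tilted toward the mean separation, and thresholding projections onto it splits the clusters. The sketch below assumes a near-identity covariance; handling an unknown, ill-conditioned shared $\Sigma$ is precisely what the paper's algorithm adds.

```python
import numpy as np

def spectral_two_cluster(X):
    """Toy spectral split for a two-component mixture: project onto the top
    eigenvector of the sample covariance and threshold at zero. Assumes
    means +/- mu and near-identity covariance."""
    Xc = X - X.mean(axis=0)            # remove the global mean
    M = Xc.T @ Xc / len(X)             # sample covariance: Sigma + mu mu^T
    _, vecs = np.linalg.eigh(M)
    v = vecs[:, -1]                    # direction inflated by the mean gap
    return (Xc @ v >= 0).astype(int)   # sign of the projection = cluster label

# Usage on synthetic data with means +/- mu.
rng = np.random.default_rng(1)
mu = np.array([2.0, 0.0, 0.0])
X = np.vstack([rng.normal(mu, 1.0, (200, 3)), rng.normal(-mu, 1.0, (200, 3))])
labels = spectral_two_cluster(X)
```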
Detecting change-points in a sequence of high-dimensional observations is a challenging problem, and it becomes even more challenging when the sample size (i.e., the sequence length) is small. In this article, we propose change-point detection methods based on clustering, which can be conveniently used in such high-dimension, low-sample-size situations. First, we consider the single change-point problem. Using $k$-means clustering based on suitable dissimilarity measures, we propose methods for testing the existence of a change-point and estimating its location. The high-dimensional behavior of the proposed methods is investigated under appropriate regularity conditions. Next, we extend our methods to the detection of multiple change-points. We carry out extensive numerical studies to compare the performance of our proposed methods with that of some state-of-the-art methods.
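A minimal sketch of the clustering idea for a single change-point: a contiguity-constrained 2-means fit that scans candidate splits and picks the one minimizing total within-segment squared deviation. The squared-Euclidean cost here is an illustrative stand-in for the paper's dissimilarity measures and test statistics.

```python
import numpy as np

def single_changepoint(X):
    """Estimate one change-point by a contiguity-constrained 2-means fit:
    choose the split minimizing total within-segment squared deviation."""
    n = len(X)
    best_t, best_cost = None, np.inf
    for t in range(1, n):              # candidate change at time t
        left, right = X[:t], X[t:]
        cost = (((left - left.mean(axis=0)) ** 2).sum()
                + ((right - right.mean(axis=0)) ** 2).sum())
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Usage: mean shift at t = 30 in a high-dimensional, short sequence.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 50)), rng.normal(0.8, 1, (20, 50))])
t_hat = single_changepoint(X)
```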
The $k$-center problem is to choose a subset of size $k$ from a set of $n$ points such that the maximum distance from each point to its nearest center is minimized. Let $Q=\{Q_1,\ldots,Q_n\}$ be a set of polygons or segments in the region-based uncertainty model, in which each $Q_i$ is an uncertain point whose exact location within $Q_i$ is unknown. Segments and polygons can thus serve as models of an uncertain point set. We define the uncertain version of the $k$-center problem as a generalization in which the objective is to find $k$ points from $Q$ to cover the remaining regions of $Q$, with minimum or maximum cluster radius, so as to cover at least one or all exact instances of each $Q_i$, respectively. We modify the region-based model to allow multiple points to be chosen from a region and call the resulting model the aggregated uncertainty model. All these problems contain the point version as a special case, so they are all NP-hard, with an approximation lower bound of 1.822. We give approximation algorithms for uncertain $k$-center on sets of segments and polygons. We have also implemented some of our algorithms on a dataset to show that our theoretical performance guarantees can be achieved in practice.
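For reference, the standard building block in the certain-point setting is Gonzalez's greedy 2-approximation for $k$-center, sketched below; the paper's contribution is extending such algorithms to uncertain regions, which this sketch does not model.

```python
import numpy as np

def gonzalez_k_center(P, k, seed=0):
    """Classic greedy 2-approximation for (certain) k-center: repeatedly
    pick the point farthest from the current set of centers."""
    rng = np.random.default_rng(seed)
    centers = [rng.integers(len(P))]
    dist = np.linalg.norm(P - P[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                        # farthest point so far
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(P - P[nxt], axis=1))
    return centers, dist.max()                            # indices and radius

# Usage on random points in the plane.
rng = np.random.default_rng(3)
P = rng.uniform(0, 10, size=(100, 2))
idx, radius = gonzalez_k_center(P, k=5)
```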
Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence of and recent interest in this problem, its theoretical and algorithmic foundations in function approximation settings remain under-developed. In this paper, we consider this problem along the axes of distributional shift, optimization, and generalization in offline contextual bandits with neural networks. In particular, we propose a provably efficient offline contextual bandit algorithm with neural network function approximation that does not require any functional assumption on the reward. We show that our method provably generalizes over unseen contexts under a milder distributional-shift condition than existing OPL works. Notably, unlike other OPL methods, ours learns from the offline data in an online manner using stochastic gradient descent, allowing us to bring the benefits of online learning into the offline setting. Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than an online counterpart. Finally, we demonstrate the empirical effectiveness of our method on a range of synthetic and real-world OPL problems.
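To make the online-manner training concrete, here is a heavily simplified sketch: one SGD pass over logged (context, action, reward) triples fits a small reward network, after which the learned policy acts greedily. The single-hidden-layer architecture, squared loss, and all hyperparameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def offline_bandit_sgd(contexts, actions, rewards, n_actions, width=32,
                       lr=0.05, seed=0):
    """One SGD pass over logged data to fit a reward network, then act
    greedily. A minimal stand-in for neural offline policy learning."""
    rng = np.random.default_rng(seed)
    d = contexts.shape[1] + n_actions                # context + one-hot action
    W1 = rng.normal(0, 1 / np.sqrt(d), (d, width))
    w2 = rng.normal(0, 1 / np.sqrt(width), width)
    def features(x, a):
        z = np.zeros(d); z[:len(x)] = x; z[len(x) + a] = 1.0
        return z
    for x, a, r in zip(contexts, actions, rewards):  # online-style pass
        z = features(x, a)
        h = np.maximum(W1.T @ z, 0.0)                # ReLU hidden layer
        err = h @ w2 - r                             # squared-loss residual
        mask = (h > 0).astype(float)
        W1 -= lr * err * np.outer(z, w2 * mask)      # backprop through ReLU
        w2 -= lr * err * h
    def policy(x):                                   # greedy w.r.t. fitted rewards
        scores = [np.maximum(W1.T @ features(x, a), 0.0) @ w2
                  for a in range(n_actions)]
        return int(np.argmax(scores))
    return policy
```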
This paper studies distributed policy gradient methods in collaborative multi-agent reinforcement learning (MARL), where agents connected over a communication network aim to find the optimal policy that maximizes the average of all agents' local returns. Because the performance function in policy gradient methods is non-concave, existing distributed stochastic optimization methods for convex problems cannot be directly applied to policy gradient in MARL. This paper proposes a distributed policy gradient method with variance reduction and gradient tracking to address the high variance of policy gradients, and it uses importance weights to handle the non-stationarity of the sampling process. We then provide an upper bound on the mean-squared stationary gap that depends on the number of iterations, the mini-batch size, the epoch size, the problem parameters, and the network topology. We further establish the sample and communication complexity required to obtain an $\epsilon$-approximate stationary point. Numerical experiments on a control problem in MARL validate the effectiveness of the proposed algorithm.
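The gradient-tracking component can be sketched in isolation: each agent mixes its iterate and its gradient tracker with its neighbors via a doubly stochastic matrix W, so the trackers follow the network-average gradient. Deterministic gradients of toy local losses stand in below for the variance-reduced, importance-weighted policy gradients.

```python
import numpy as np

def gradient_tracking(grads, W, d, n_iter=200, lr=0.05, seed=0):
    """Decentralized gradient tracking: x <- W x - lr * y and
    y <- W y + g_new - g_old, so each agent's tracker y follows the
    network-average gradient."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.normal(size=(n, d))
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                                  # initialize tracker at local grads
    for _ in range(n_iter):
        x = W @ x - lr * y                        # consensus + descent step
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = W @ y + g_new - g                     # track the average gradient
        g = g_new
    return x

# Usage: 3 agents with quadratic local losses centered at different targets;
# the agents agree near the average of the targets.
targets = [np.zeros(2), np.ones(2), -np.ones(2)]
grads = [(lambda t: (lambda v: v - t))(t) for t in targets]
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
x = gradient_tracking(grads, W, d=2)
```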
This study introduces a new spline dimensional decomposition (SDD) for uncertainty quantification of high-dimensional functions, including those exhibiting high nonlinearity and nonsmoothness. The decomposition creates a hierarchical expansion of an output random variable of interest with respect to measure-consistent orthonormalized basis splines (B-splines) in independent input random variables. A dimensionwise decomposition of a spline space into orthogonal subspaces, each spanned by a reduced set of such orthonormal splines, results in SDD. Exploiting the modulus of smoothness, the SDD approximation is shown to converge in mean-square to the correct limit. The computational complexity of the SDD method is polynomial, as opposed to exponential, thus alleviating the curse of dimensionality to the extent possible. Analytical formulae are proposed to calculate the second-moment properties of a truncated SDD approximation for a general output random variable in terms of the expansion coefficients involved. Numerical results indicate that a low-order SDD approximation of nonsmooth functions computes the probabilistic characteristics of an output variable with an accuracy matching or surpassing that obtained by high-order approximations from several existing methods. Finally, a 34-dimensional random eigenvalue analysis demonstrates the utility of SDD in solving practical problems.
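One ingredient can be illustrated simply: orthonormalizing a spline basis with respect to the input measure by whitening with the Cholesky factor of the Gram matrix. The sketch below uses piecewise-linear hat functions under a uniform measure as stand-ins for measure-consistent B-splines.

```python
import numpy as np

def orthonormalize_basis(B, w):
    """Orthonormalize basis columns w.r.t. a discrete approximation of the
    input measure: given basis values B (n_points x n_basis) and quadrature
    weights w, whiten by the inverse Cholesky factor of the Gram matrix."""
    G = B.T @ (w[:, None] * B)          # Gram matrix under the measure
    L = np.linalg.cholesky(G)
    return B @ np.linalg.inv(L).T       # columns now orthonormal under w

# Piecewise-linear "hat" basis on [0, 1] under the uniform measure.
x = np.linspace(0, 1, 2001)
knots = np.linspace(0, 1, 6)
B = np.stack([np.maximum(1 - np.abs(x - t) / (knots[1] - knots[0]), 0.0)
              for t in knots], axis=1)
w = np.full(len(x), 1.0 / len(x))       # uniform quadrature weights
Q = orthonormalize_basis(B, w)
assert np.allclose(Q.T @ (w[:, None] * Q), np.eye(B.shape[1]), atol=1e-8)
```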
Attributed graph clustering is challenging as it requires jointly modelling graph structure and node attributes. Recent progress on graph convolutional networks has shown that graph convolution is effective in combining structural and content information, and several methods based on it have achieved promising clustering performance on real attributed networks. However, there is limited understanding of how graph convolution affects clustering performance and of how to properly use it to optimize performance for different graphs. Existing methods essentially use graph convolution of a fixed, low order that takes into account only neighbours within a few hops of each node, which underutilizes node relations and ignores the diversity of graphs. In this paper, we propose an adaptive graph convolution method for attributed graph clustering that exploits high-order graph convolution to capture global cluster structure and adaptively selects the appropriate order for different graphs. We establish the validity of our method by theoretical analysis and extensive experiments on benchmark datasets. Empirical results show that our method compares favourably with state-of-the-art methods.
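A minimal sketch of high-order graph convolution as a low-pass filter: apply $(I - L_{\mathrm{sym}}/2)^k$ to the node features so that larger orders $k$ smooth features over wider neighbourhoods before clustering. The adaptive selection of $k$, central to the proposed method, is elided here.

```python
import numpy as np

def high_order_graph_conv(A, X, k):
    """Apply the k-order low-pass graph filter (I - L_sym / 2)^k to node
    features, where L_sym is the symmetric normalized Laplacian; larger k
    aggregates information from more distant neighbours."""
    deg = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv   # symmetric normalized Laplacian
    G = np.eye(len(A)) - 0.5 * L           # low-pass filter I - L/2
    H = X.copy()
    for _ in range(k):
        H = G @ H                          # k-hop smoothing of features
    return H                               # cluster H, e.g. with k-means

# Usage: two 4-node cliques joined by one edge; noisy features get
# smoothed toward their cluster means.
A = np.kron(np.eye(2), np.ones((4, 4)) - np.eye(4))
A[3, 4] = A[4, 3] = 1
X = (np.vstack([np.ones((4, 2)), -np.ones((4, 2))])
     + np.random.default_rng(4).normal(0, 0.5, (8, 2)))
H = high_order_graph_conv(A, X, k=3)
```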
Robust estimation is much more challenging in high dimensions than in one dimension: most techniques either lead to intractable optimization problems or to estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial-time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal up to logarithmic factors, as well as by giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and make high-dimensional robust estimation a realistic possibility.
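A simplified sketch of the spectral filtering paradigm behind such algorithms: repeatedly find the direction of largest empirical variance and discard the points most extreme along it until the covariance looks inlier-like. The fixed tail fraction and eigenvalue threshold below are illustrative assumptions; the actual algorithms use data-driven scores and thresholds.

```python
import numpy as np

def filtered_mean(X, n_rounds=10, tail_frac=0.02):
    """Iterative spectral filtering for robust mean estimation: find the
    direction of largest empirical variance and drop the points with the
    most extreme projections along it, then repeat."""
    X = X.copy()
    for _ in range(n_rounds):
        mu = X.mean(axis=0)
        C = np.cov(X.T)
        vals, vecs = np.linalg.eigh(C)
        if vals[-1] <= 1.5:                       # covariance looks inlier-like
            break
        proj = np.abs((X - mu) @ vecs[:, -1])     # scores along worst direction
        keep = proj <= np.quantile(proj, 1 - tail_frac)
        X = X[keep]                               # drop the most extreme points
    return X.mean(axis=0)

# Usage: 5% of samples are far-away outliers in 20 dimensions;
# the filtered mean lands close to the true mean 0.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (950, 20)), rng.normal(8, 1, (50, 20))])
mu_hat = filtered_mean(X)
```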