In the geosciences, classical Euclidean methods are unsuitable for treating and analyzing some types of data, as such data may not belong to a vector space. This is the case for correlation matrices, which belong to a subfamily of symmetric positive definite matrices and in turn form a cone-shaped Riemannian manifold. We propose two novel applications that account for the non-linear behavior usually present in multivariate geological data by exploiting the manifold structure of correlation matrices. First, we employ an extension of the linear model of coregionalization (LMC) that relaxes the linear mixture, conventionally assumed fixed over the domain, and makes it locally varying according to the local strength of the dependency among the coregionalized variables. Once this relaxation of the LMC is adopted, the main challenge is to interpolate the known correlation matrices throughout the domain in a reliable and coherent fashion. The present work adopts a non-Euclidean framework to achieve this objective, locally averaging and interpolating the correlations between the variables while retaining the intrinsic geometry of correlation matrices. A second application deals with the problem of clustering multivariate data.
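To make the manifold-aware averaging concrete, below is a minimal sketch of a weighted log-Euclidean mean of correlation matrices. The function names and the choice of the log-Euclidean metric (rather than, say, the affine-invariant metric) are our own illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def _logm_spd(m):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T

def _expm_sym(m):
    """Matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.exp(w)) @ v.T

def log_euclidean_mean(mats, weights):
    """Weighted log-Euclidean mean: average the matrix logarithms
    (which live in a vector space) and map back with the matrix
    exponential, so the result is guaranteed to stay SPD, unlike a
    naive entrywise Euclidean average."""
    log_sum = sum(w * _logm_spd(m) for w, m in zip(weights, mats))
    return _expm_sym(log_sum)

# Two correlation matrices observed at nearby sample locations.
c1 = np.array([[1.0, 0.8], [0.8, 1.0]])
c2 = np.array([[1.0, -0.3], [-0.3, 1.0]])

# Midpoint of the log-Euclidean geodesic between c1 and c2.
mid = log_euclidean_mean([c1, c2], weights=[0.5, 0.5])

# The mean is SPD but need not have a unit diagonal; renormalize
# to return to the correlation manifold.
d = np.sqrt(np.diag(mid))
corr = mid / np.outer(d, d)
print(np.round(corr, 3))
```

Replacing the weights with kriging or inverse-distance weights turns the same routine into a spatial interpolator of correlation matrices.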
Motivated by applications in combinatorial geometry, we consider the following question: Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ be an $m$-partition of a positive integer $n$, $S_i \subseteq \mathbb{C}^{\lambda_i}$ be finite sets, and let $S:=S_1 \times S_2 \times \ldots \times S_m \subset \mathbb{C}^n$ be the multi-grid defined by the $S_i$. Suppose $p$ is an $n$-variate polynomial of degree $d$. How many zeros does $p$ have on $S$? We first develop a multivariate generalization of the Combinatorial Nullstellensatz that certifies the existence of a point $t \in S$ such that $p(t) \neq 0$. Then we show that a natural multivariate generalization of the DeMillo-Lipton-Schwartz-Zippel lemma holds, except for a special family of polynomials that we call $\lambda$-reducible. This yields a simultaneous generalization of the Szemer\'edi-Trotter theorem and the Schwartz-Zippel lemma to higher dimensions, and has applications in incidence geometry. Finally, we develop a symbolic algorithm that identifies certain $\lambda$-reducible polynomials. More precisely, our symbolic algorithm detects polynomials that include a Cartesian product of hypersurfaces in their zero set. It is likely that, using Chow forms, the algorithm can be generalized to handle arbitrary $\lambda$-reducible polynomials, which we leave as an open problem.
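For orientation, the classical case being generalized corresponds to the partition $\lambda = (1,1,\ldots,1)$ with all $S_i$ equal, where the zero count is governed by the well-known bound below.

```latex
% Classical case (m = n, each \lambda_i = 1): the
% DeMillo--Lipton--Schwartz--Zippel lemma. For a nonzero n-variate
% polynomial p of total degree d over a field F and a finite set
% S \subseteq F,
\[
  \bigl|\{\, t \in S^n : p(t) = 0 \,\}\bigr| \;\le\; d\,|S|^{\,n-1};
\]
% equivalently, a uniformly random t \in S^n satisfies
% Pr[ p(t) = 0 ] \le d / |S|.
```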
Riemannian manifold Hamiltonian Monte Carlo (RMHMC) is a sampling algorithm that seeks to adapt proposals to the local geometry of the posterior distribution. The specific form of the Hamiltonian used in RMHMC necessitates {\it implicitly-defined} numerical integrators in order to sustain reversibility and volume-preservation, two properties that are necessary to establish detailed balance of RMHMC. In practice, these implicit equations are solved to a non-zero convergence tolerance via fixed-point iteration. However, the effect of these convergence thresholds on the ergodicity and computational efficiency of RMHMC is not well understood. The purpose of this research is to elucidate these relationships through numerous case studies. Our analysis reveals circumstances wherein the RMHMC algorithm is sensitive, and insensitive, to these convergence tolerances. Our empirical analysis examines several aspects of the computation: (i) we examine the ergodicity of the RMHMC Markov chain by employing statistical methods for comparing probability measures based on collections of samples; (ii) we investigate the degree to which detailed balance is violated by measuring errors in reversibility and volume-preservation; (iii) we assess the efficiency of the RMHMC Markov chain in terms of time-normalized effective sample size (ESS). In each of these cases, we investigate the sensitivity of these metrics to the convergence threshold and further contextualize our results by comparison against Euclidean HMC. We propose a method by which one may select the convergence tolerance within a Bayesian inference application using techniques of stochastic approximation, and we examine Newton's method, an alternative to fixed-point iteration that can eliminate much of the sensitivity of RMHMC to the convergence threshold.
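To make the role of the convergence tolerance concrete, here is a minimal sketch of the fixed-point solve inside one implicitly defined integrator step, shown for an implicit-midpoint-style update on a toy Hamiltonian; the function names, tolerance handling, and toy target are illustrative assumptions, not the paper's exact integrator.

```python
import numpy as np

def fixed_point_solve(f, x0, tol=1e-10, max_iter=100):
    """Solve x = f(x) by fixed-point iteration to tolerance `tol`.

    RMHMC integrators require such solves at every step; `tol` is the
    convergence threshold whose effect on reversibility, volume
    preservation, and ergodicity is under study."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x  # returned without meeting tol: a source of detailed-balance error

def implicit_midpoint_step(q, p, h):
    """One implicit midpoint step for H(q, p) = p^2/2 + q^2/2,
    i.e. solve z_new = z + h * J * grad H((z + z_new) / 2)."""
    def update(z):
        q_mid, p_mid = (q + z[0]) / 2.0, (p + z[1]) / 2.0
        return np.array([q + h * p_mid, p - h * q_mid])
    return fixed_point_solve(update, np.array([q, p]))

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = implicit_midpoint_step(q, p, h=0.1)
print(q, p, 0.5 * (q**2 + p**2))  # energy stays near 0.5
```

Loosening `tol` speeds up each step but degrades the exactness of the time-reversal, which is precisely the trade-off the case studies quantify.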
For multivariate spatial Gaussian process (GP) models, customary specifications of cross-covariance functions do not exploit relational inter-variable graphs to ensure process-level conditional independence among the variables. This is undesirable, especially in highly multivariate settings, where popular cross-covariance functions such as the multivariate Mat\'ern suffer from a "curse of dimensionality": the number of parameters and floating-point operations scale quadratically and cubically, respectively, in the number of variables. We propose a class of multivariate "Graphical Gaussian Processes" using a general construction called "stitching" that crafts cross-covariance functions from graphs and ensures process-level conditional independence among variables. For the Mat\'ern family of functions, stitching yields a multivariate GP whose univariate components are Mat\'ern GPs and that conforms to the process-level conditional independence specified by the graphical model. For highly multivariate settings and decomposable graphical models, stitching offers massive computational gains and parameter dimension reduction. We demonstrate the utility of the graphical Mat\'ern GP for jointly modelling highly multivariate spatial data using simulation examples and an application to air-pollution modelling.
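As background for the Mat\'ern family referenced above, here is a minimal sketch of the univariate Mat\'ern covariance with smoothness $\nu = 3/2$, a common closed-form case. This is the building block that stitching composes across a graph, not the stitched construction itself; parameter names are illustrative.

```python
import numpy as np

def matern_32(dists, sigma2=1.0, phi=1.0):
    """Matern covariance with smoothness nu = 3/2:
    C(h) = sigma2 * (1 + sqrt(3) h / phi) * exp(-sqrt(3) h / phi),
    where sigma2 is the marginal variance and phi the range parameter."""
    r = np.sqrt(3.0) * dists / phi
    return sigma2 * (1.0 + r) * np.exp(-r)

# Covariance matrix of one Matern GP component at a few 1-d locations.
locs = np.array([0.0, 0.5, 1.0, 2.0])
dists = np.abs(locs[:, None] - locs[None, :])
K = matern_32(dists, sigma2=1.0, phi=0.7)
print(np.round(K, 3))
```

A stitched multivariate model would keep each variable's marginal process of this form while deriving the cross-covariances from the inter-variable graph.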
A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them -- an objective broadly known as domain generalization. A common belief is that empirical risk minimization (ERM) can interpolate but not extrapolate and that the latter is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions. Under an existing notion of inter- and extrapolation based on reweighting of sub-group likelihoods, we rigorously demonstrate that extrapolation is computationally much harder than interpolation, although their statistical complexities do not differ significantly. Furthermore, we show that ERM -- or a noisy variant -- is provably minimax-optimal for both tasks. Our framework presents a new avenue for the formal analysis of domain generalization algorithms, which may be of independent interest.
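To make the reweighting-based game concrete, the toy sketch below pits a risk-minimizing player against an adversary that reweights sub-group losses via exponentiated gradient, in the style of group distributionally robust optimization; the setup, update rules, and step sizes are illustrative assumptions, not the paper's formal game.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sub-groups: 1-d linear regression tasks with different slopes.
slopes = [1.0, 1.5, 2.0]
xs = [rng.normal(size=50) for _ in slopes]
data = [(x, s * x + 0.1 * rng.normal(size=50)) for x, s in zip(xs, slopes)]

def group_losses(w):
    """Mean squared error of the scalar model y = w * x on each sub-group."""
    return np.array([np.mean((w * x - y) ** 2) for x, y in data])

w, q, eta = 0.0, np.ones(3) / 3, 0.5
for _ in range(200):
    # Adversary: exponentiated-gradient step toward the worst-case
    # mixture of sub-group distributions (weights live on the simplex).
    q = q * np.exp(eta * group_losses(w))
    q = q / q.sum()
    # Player: gradient step on the adversarially reweighted risk.
    grad = sum(qk * np.mean(2 * (w * x - y) * x) for qk, (x, y) in zip(q, data))
    w = w - 0.05 * grad
print(round(w, 3), np.round(q, 3))  # w settles between the group-optimal slopes
```

Restricting the adversary's weights to the simplex corresponds to interpolation between sub-groups; extrapolation allows mixtures outside it, which is where the computational hardness gap arises.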
On the one hand, artificial neural networks (ANNs) are commonly labelled as black boxes, lacking interpretability; an issue that hinders human understanding of ANNs' behaviors. There is a need to extract the ANN's underlying sequential logic so that the process by which it produces a specific output can be interpreted. On the other hand, decision trees exhibit better interpretability and expressive power due to their representation language and the existence of efficient algorithms to transform the trees into rules. However, growing a decision tree based on the available data could produce larger-than-necessary trees or trees that do not generalise well. In this paper, we introduce two novel multivariate decision tree (MDT) algorithms for rule extraction from ANNs: an Exact-Convertible Decision Tree (EC-DT) and an Extended C-Net algorithm. Both transform a neural network with Rectified Linear Unit activation functions into a representative tree, which can further be used to extract multivariate rules for reasoning. While EC-DT translates an ANN in a layer-wise manner to represent exactly the decision boundaries implicitly learned by the hidden layers of the network, Extended C-Net combines the decompositional approach of EC-DT with a C5 tree-learning algorithm to form decision rules. The results suggest that while EC-DT is superior in preserving the structure and the fidelity of the ANN, Extended C-Net generates the most compact and highly effective trees from the ANN. Both proposed MDT algorithms generate rules that combine multiple attributes for precise interpretation in decision-making.
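The layer-wise translation exploits the fact that a ReLU network is piecewise linear: each activation pattern fixes a set of linear inequalities (a polytope) and an affine rule on it. A minimal sketch of recovering that local rule for one input, under an assumed one-hidden-layer toy network, is shown below; it illustrates the principle rather than the EC-DT algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed toy network: one hidden ReLU layer, scalar output.
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def local_linear_rule(x):
    """For input x, return the active ReLU pattern, the half-space
    constraints defining its linear region, and the affine rule
    y = a @ x + c that the network computes on that region."""
    pre = W1 @ x + b1
    pattern = pre > 0                        # which hidden units fire
    signs = np.where(pattern, 1.0, -1.0)
    # Region: s_i * (w_i . x + b_i) >= 0 for every hidden unit i.
    constraints = (signs[:, None] * W1, signs * b1)
    # On this region the ReLU mask is fixed, so the network is affine.
    a = (W2 * pattern) @ W1
    c = (W2 * pattern) @ b1 + b2
    return pattern, constraints, a, c

x = np.array([0.3, -0.7])
pattern, _, a, c = local_linear_rule(x)
y_net = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
print(np.allclose(a @ x + c, y_net))  # True: the rule matches the network locally
```

Enumerating such regions layer by layer is what makes an exact, multivariate-rule translation of the network possible.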
Time series forecasting is widely used in business intelligence, e.g., to forecast stock market prices and sales and to support trend analysis. Most time series of interest are macroscopic time series that are aggregated from microscopic data. However, little of the literature has studied forecasting macroscopic time series by leveraging data at the microscopic level, rather than directly modeling the macroscopic series. In this paper, we assume that the microscopic time series follow some unknown mixture of probabilistic distributions. We show theoretically that as we identify the ground-truth latent mixture components, the estimation of the time series from each component improves because of lower variance, thus benefiting the estimation of the macroscopic time series as well. Inspired by the power of Seq2seq and its variants in modeling time series data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model to cluster microscopic time series, where all components come from a family of Seq2seq models with different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
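To illustrate the mixture-of-models clustering idea in a minimal form, the sketch below assigns sequences to components via EM-style responsibilities; we substitute simple per-component AR(1) likelihoods for Seq2seq networks purely for brevity, so this is an illustrative stand-in, not MixSeq itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_loglik(x, a, sigma=0.3):
    """Log-likelihood of sequence x under x_t = a * x_{t-1} + noise."""
    resid = x[1:] - a * x[:-1]
    return -0.5 * np.sum(resid**2) / sigma**2 - (len(x) - 1) * np.log(sigma)

def simulate(a, n=40):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + 0.3 * rng.normal()
    return x

# Two clusters of microscopic series with different AR coefficients.
series = [simulate(0.9) for _ in range(20)] + [simulate(-0.5) for _ in range(20)]

params, pis = np.array([0.5, -0.1]), np.array([0.5, 0.5])
for _ in range(20):
    # E-step: posterior responsibility of each component for each series.
    logp = np.array([[np.log(pi) + ar1_loglik(x, a) for pi, a in zip(pis, params)]
                     for x in series])
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: mixture weights and responsibility-weighted AR(1) fits.
    pis = resp.mean(axis=0)
    for k in range(2):
        num = sum(r[k] * np.dot(x[1:], x[:-1]) for r, x in zip(resp, series))
        den = sum(r[k] * np.dot(x[:-1], x[:-1]) for r, x in zip(resp, series))
        params[k] = num / den
print(np.round(params, 2))  # recovers coefficients near 0.9 and -0.5
```

In MixSeq the AR(1) likelihood is replaced by a Seq2seq model per component, trained end-to-end, but the cluster-then-forecast-per-component logic is the same.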
Spatio-temporal forecasting has numerous applications in analyzing wireless, traffic, and financial networks. Classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data. Recent advances in deep learning allow for better modelling of spatial and temporal dependencies. While most of these models focus on obtaining accurate point forecasts, they do not characterize the prediction uncertainty. In this work, we consider the time-series data as a random realization from a nonlinear state-space model and target Bayesian inference of the hidden states for probabilistic forecasting. We use particle flow as the tool for approximating the posterior distribution of the states, as it has been shown to be highly effective in complex, high-dimensional settings. Thorough experimentation on several real-world time-series datasets demonstrates that our approach provides better characterization of uncertainty while maintaining comparable accuracy to state-of-the-art point forecasting methods.
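As a simpler reference point for state inference in a nonlinear state-space model, here is a minimal bootstrap particle filter (a sequential importance resampling method; note this is not particle flow itself, which transports particles deterministically), on a standard toy model of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy nonlinear state-space model:
#   x_t = 0.5 * x_{t-1} + 25 * x_{t-1} / (1 + x_{t-1}^2) + process noise
#   y_t = x_t^2 / 20 + observation noise
def transition(x):
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(scale=1.0, size=x.shape)

def loglik(y, x):
    return -0.5 * (y - x**2 / 20.0) ** 2 / 0.5**2

# Simulate observations, then filter with N particles.
T, N = 50, 1000
x_true, ys = 0.1, []
for _ in range(T):
    x_true = transition(np.array([x_true]))[0]
    ys.append(x_true**2 / 20.0 + rng.normal(scale=0.5))

particles = rng.normal(size=N)
means = []
for y in ys:
    particles = transition(particles)         # propagate through the dynamics
    logw = loglik(y, particles)               # weight by observation likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)          # resample to avoid degeneracy
    particles = particles[idx]
    means.append(particles.mean())            # posterior-mean state estimate
print(np.round(means[-5:], 2))
```

Particle flow replaces the weight-and-resample step with a continuous migration of particles toward the posterior, which is what makes it effective in high dimensions where importance weights degenerate.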
There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the component spaces are estimated from the sectional curvatures of the underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. In addition, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning on temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be modeled more appropriately by the evolution of embeddings on Riemannian manifolds.
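To illustrate evolving a representation along a tangent-space velocity on a curved space, the sketch below moves a point on the unit sphere via its closed-form exponential map. The sphere stands in for one factor of the paper's product of Riemannian manifolds, and the constant velocity field is an arbitrary illustrative choice.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from point x (|x| = 1)
    along tangent vector v (with v . x = 0) for unit time."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def project_tangent(x, v):
    """Project an ambient vector v onto the tangent space at x."""
    return v - np.dot(v, x) * x

# An entity embedding evolving over timestamps under a velocity field.
x = np.array([1.0, 0.0, 0.0])
velocity = np.array([0.0, 0.2, 0.1])    # per-timestep velocity (illustrative)
for t in range(5):
    v_t = project_tangent(x, velocity)  # keep the step tangential at x
    x = sphere_exp(x, v_t)
    print(t, np.round(x, 3), round(float(np.linalg.norm(x)), 6))  # stays on the manifold
```

Hyperbolic factors work analogously with the hyperbolic exponential map; the product manifold simply applies the appropriate map per factor.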
Leveraging biased click data for optimizing learning to rank systems has been a popular approach in information retrieval. Because click data is often noisy and biased, a variety of methods have been proposed to construct unbiased learning to rank (ULTR) algorithms for the learning of unbiased ranking models. Among them, automatic unbiased learning to rank (AutoULTR) algorithms that jointly learn user bias models (i.e., propensity models) with unbiased rankers have received a lot of attention due to their superior performance and low deployment cost in practice. Despite their differences in theory and algorithm design, existing studies on ULTR usually use uni-variate ranking functions to score each document or result independently. On the other hand, recent advances in context-aware learning-to-rank models have shown that multivariate scoring functions, which read multiple documents together and predict their ranking scores jointly, are more powerful than uni-variate ranking functions in ranking tasks with human-annotated relevance labels. Whether such superior performance would hold in ULTR with noisy data, however, is mostly unknown. In this paper, we investigate existing multivariate scoring functions and AutoULTR algorithms in theory and prove that permutation invariance is a crucial factor that determines whether a context-aware learning-to-rank model can be applied within the existing AutoULTR framework. Our experiments with synthetic clicks on two large-scale benchmark datasets show that AutoULTR models with permutation-invariant multivariate scoring functions significantly outperform those with uni-variate scoring functions and permutation-variant multivariate scoring functions.
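The permutation-invariance property can be checked mechanically: shuffling the input document list must shuffle the per-document scores the same way. The sketch below demonstrates this for a simple self-attention scorer without positional encodings; the architecture is an illustrative example of the property, not a specific model from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
w_out = rng.normal(size=d)

def attention_scores(X):
    """Score a list of documents jointly: one self-attention layer
    (no positional encoding) followed by a shared linear readout."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = Q @ K.T / np.sqrt(d)
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)      # row-wise softmax attention
    return (A @ V) @ w_out                 # one score per document

X = rng.normal(size=(5, d))                # 5 candidate documents
perm = rng.permutation(5)
s, s_perm = attention_scores(X), attention_scores(X[perm])
print(np.allclose(s[perm], s_perm))        # True: permuting inputs permutes scores
```

Adding position-dependent components (e.g., positional encodings over the presented ranking) breaks this property, which is what disqualifies permutation-variant scorers from the AutoULTR framework.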
While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from lack of diversity or mode collapse. The theoretical work of Arora et al.~\cite{AroraGeLiMaZh17} suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes such as mixtures of Gaussians, exponential families, and invertible neural network generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence, indicating that the lack of diversity may be caused by sub-optimality in optimization rather than statistical inefficiency.
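For reference, the Integral Probability Metric induced by a discriminator class $\mathcal{F}$, the central quantity that the designed discriminators make approximate the Wasserstein distance or KL divergence, is the standard one:

```latex
% Integral Probability Metric induced by a discriminator class F.
% Taking F = all 1-Lipschitz functions recovers the Wasserstein-1
% distance, by Kantorovich--Rubinstein duality.
\[
  d_{\mathcal{F}}(p, q)
  \;=\;
  \sup_{f \in \mathcal{F}}
  \Bigl|\,
    \mathbb{E}_{x \sim p}\bigl[f(x)\bigr]
    \;-\;
    \mathbb{E}_{x \sim q}\bigl[f(x)\bigr]
  \,\Bigr|.
\]
```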