The maximum likelihood threshold (MLT) of a graph $G$ is the minimum number of samples needed to almost surely guarantee the existence of the maximum likelihood estimate in the corresponding Gaussian graphical model. Recently, a new characterization of the MLT in terms of rigidity-theoretic properties of $G$ was proved \cite{Betal}. This characterization was then used to give new combinatorial lower bounds on the MLT of any graph. We continue this line of research by exploiting combinatorial rigidity results to compute the MLT precisely for several families of graphs. These include graphs with at most $9$ vertices, graphs with at most $24$ edges, every graph sufficiently close to a complete graph, and graphs with bounded degree.
The ever-growing size of modern space-time data sets, such as those collected by remote sensing, requires new techniques for their efficient and automated processing, including gap-filling of missing values. CUDA-based parallelization on GPUs has become a popular way to dramatically increase the computational efficiency of various approaches. Recently, we proposed a computationally efficient and competitive, yet simple, spatial prediction approach inspired by statistical physics models, called the modified planar rotator (MPR) method. Its GPU implementation provided an additional speedup of more than two orders of magnitude compared with CPU calculations. In the current study, we propose a general approach to modelling spatial heterogeneity in GPU-implemented spatial prediction methods for two-dimensional gridded data by introducing spatial variability into the model parameters. Predictions of unknown values are obtained from non-equilibrium conditional simulations, assuming ``local'' equilibrium conditions. We demonstrate that the proposed method leads to significant improvements in both prediction performance and computational efficiency.
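To make the "local equilibrium" conditional-simulation idea concrete, below is a minimal sketch of planar-rotator-style gap filling on a grid with a spatially varying coupling field. The function and parameter names (`mpr_gap_fill`, `j_field`) and the interaction form are illustrative assumptions; the actual MPR method uses a specific modified Hamiltonian and a CUDA implementation described in the paper.

```python
import numpy as np

def mpr_gap_fill(grid, mask, j_field, n_sweeps=200, rng=None):
    """Toy conditional simulation in the spirit of planar-rotator gap filling.

    grid    : 2D array of data values (NaN where missing)
    mask    : boolean 2D array, True where the value is observed
    j_field : 2D array of local coupling strengths (spatial heterogeneity)
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = np.nanmin(grid), np.nanmax(grid)
    # Map observed data linearly to spin angles; initialize missing sites at random.
    theta = np.where(mask, (grid - lo) / (hi - lo) * np.pi,
                     rng.uniform(0, np.pi, grid.shape))
    rows, cols = np.where(~mask)
    for _ in range(n_sweeps):
        for r, c in zip(rows, cols):
            # Nearest-neighbour angles on the grid.
            nbrs = [theta[r + dr, c + dc]
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < grid.shape[0] and 0 <= c + dc < grid.shape[1]]
            # Local "equilibrium" move: Metropolis acceptance under an
            # XY-model-like cost weighted by the local coupling.
            cost = lambda a: -j_field[r, c] * sum(np.cos(2 * (a - b)) for b in nbrs)
            prop = rng.uniform(0, np.pi)
            if (cost(prop) <= cost(theta[r, c])
                    or rng.random() < np.exp(cost(theta[r, c]) - cost(prop))):
                theta[r, c] = prop
    # Map angles back to the data scale; observed sites stay fixed.
    return np.where(mask, grid, lo + theta / np.pi * (hi - lo))
```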
In many applications, we want to influence the decisions of independent agents by designing incentives for their actions. We revisit a fundamental problem in this area, called GAME IMPLEMENTATION: Given a game in standard form and a set of desired strategies, can we design a set of payment promises such that if the players take the payment promises into account, then all undominated strategies are desired? Furthermore, we aim to minimize the cost, that is, the worst-case amount of payments. We study the tractability of computing such payment promises and determine more closely what obstructions we may have to overcome in doing so. We show that GAME IMPLEMENTATION is NP-hard even for two players, in particular solving a long-standing open question (Eidenbenz et al. 2011) and suggesting that further restrictions are necessary to obtain tractability results. We thus study the regime in which players have only a small constant number of strategies and obtain the following. First, this case remains NP-hard even if each player's utility depends on only three other players. Second, we repair a flawed efficient algorithm for the case of both a small number of strategies and a small number of players. Among further results, we characterize the sets of desired strategies that can be implemented at zero cost as a kind of stable core of the game.
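To make the role of undominated strategies and payment promises concrete, here is a small brute-force sketch for the row player of a bimatrix game, restricted for simplicity to strict dominance by pure strategies (the paper's notion may be more general); the payoff values are illustrative.

```python
import numpy as np

def undominated_rows(payoff):
    """Row strategies not strictly dominated by another pure row.

    payoff[i, j] = row player's utility for row i against column j.
    """
    n = payoff.shape[0]
    return [i for i in range(n)
            if not any(np.all(payoff[k] > payoff[i]) for k in range(n) if k != i)]

# Prisoner's dilemma payoffs for the row player: 'cooperate' is dominated.
pd = np.array([[3, 0],   # cooperate
               [5, 1]])  # defect
print(undominated_rows(pd))                          # [1]: only 'defect' survives
# A payment promise of +3 for cooperating flips the dominance relation,
# implementing {'cooperate'} as the unique undominated strategy.
print(undominated_rows(pd + np.array([[3], [0]])))   # [0]
```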
We propose a method for obtaining maximum likelihood estimates (MLEs) of a Markov-Modulated Jump-Diffusion Model (MMJDM) when the data is a discrete-time sample of the diffusion process, the jumps follow a Laplace distribution, and the parameters of the diffusion are controlled by a Markov Jump Process (MJP). The data can be viewed as an incomplete observation of a model with a tractable likelihood function. Therefore, we use the EM algorithm to obtain MLEs of the parameters, and we validate our method on simulated data. The motivation for estimating this model is that stock prices have distinct drift and volatility during distinct periods of time. The assumption is that these phases are modulated by macroeconomic environments whose changes manifest as discontinuities, or jumps, in prices. This model improves on the representation of stock prices given by classical models such as the Black-Scholes model or Merton's Jump-Diffusion Model (JDM). We fit the model to the stock prices of Amazon and Netflix over a 15-year period, using our method to estimate the MLEs.
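As an illustration of the model itself (not of the EM estimation procedure), the following sketch simulates a toy two-regime MMJDM path with Laplace-distributed jumps; all parameter values are assumed for the example.

```python
import numpy as np

def simulate_mmjdm(T=1.0, n=1000, s0=100.0, seed=0):
    """Simulate prices under a toy Markov-modulated jump diffusion.

    Two regimes with distinct drift/volatility; regime switches follow a
    continuous-time Markov chain; jump sizes are Laplace distributed.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    mu = np.array([0.05, -0.10])       # regime drifts (assumed)
    sigma = np.array([0.15, 0.40])     # regime volatilities (assumed)
    q = np.array([[-2.0, 2.0],         # MJP generator matrix (assumed)
                  [3.0, -3.0]])
    lam, b = 5.0, 0.02                 # jump intensity, Laplace scale (assumed)
    x = np.empty(n + 1)
    x[0] = np.log(s0)
    state = 0
    for t in range(n):
        # Regime switch with probability approximately -q[state, state] * dt.
        if rng.random() < -q[state, state] * dt:
            state = 1 - state
        jump = rng.laplace(0.0, b) if rng.random() < lam * dt else 0.0
        x[t + 1] = (x[t] + (mu[state] - 0.5 * sigma[state] ** 2) * dt
                    + sigma[state] * np.sqrt(dt) * rng.normal() + jump)
    return np.exp(x)
```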
We consider a simple one-way averaging protocol on graphs. Initially, every node of the graph has a value. A node $u$ is chosen uniformly at random, and $u$ samples $k$ neighbours $v_1, v_2, \dots, v_k \in N(u)$ uniformly at random. Then, $u$ averages its value with those of the sampled neighbours as follows: $\xi_u(t+1) = \alpha \xi_u(t) + \frac{(1-\alpha)}{k} \sum_{i=1}^k \xi_{v_i}(t)$ for some $\alpha \in (0,1)$, where $\xi_u(t)$ is the value of node $u$ at time $t$. Note that, in contrast to neighbourhood value balancing, only $u$ changes its value. Hence, the sum (and the average) of the values of all nodes changes over time. Our results are two-fold. First, we show a bound on the convergence time (the time it takes until all values are roughly the same) that is asymptotically tight for some initial assignments of values to the nodes. Our second set of results concerns the ability of this protocol to approximate the initial average of all values well: we bound the probability that the final outcome is significantly away from the initial average. Interestingly, the variance of the outcome does not depend on the graph structure. The proof introduces an interesting generalisation of the duality between coalescing random walks and the voter model.
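The update rule can be simulated directly; the sketch below assumes the $k$ neighbours are sampled with replacement, a detail the description above leaves open.

```python
import random

def one_way_averaging(adj, xi, alpha=0.5, k=2, steps=10000, seed=0):
    """Run the one-way averaging protocol described above.

    adj : dict mapping each node to a list of its neighbours
    xi  : dict of initial values, modified in place
    """
    rng = random.Random(seed)
    nodes = list(adj)
    for _ in range(steps):
        u = rng.choice(nodes)                # node chosen uniformly at random
        sampled = rng.choices(adj[u], k=k)   # k neighbours, with replacement (assumed)
        xi[u] = alpha * xi[u] + (1 - alpha) * sum(xi[v] for v in sampled) / k
    return xi

# Cycle on 6 nodes with values 0..5: the values converge towards consensus,
# but the running sum of all values is not preserved.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(one_way_averaging(ring, {i: float(i) for i in range(6)}))
```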
Surrogate models have been shown to be an extremely efficient aid in solving engineering problems that require repeated evaluations of an expensive computational model. They are built by sparsely evaluating the costly original model and have provided a way to solve otherwise intractable problems. A crucial aspect of surrogate modelling is the assumption of smoothness and regularity of the model to approximate. This assumption, however, is not always met in reality. For instance, in civil or mechanical engineering, some models may present discontinuities or non-smoothness, e.g., in the case of instability patterns such as buckling or snap-through. Building a single surrogate model capable of accounting for these fundamentally different behaviors or discontinuities is not an easy task. In this paper, we propose a three-stage approach for the approximation of non-smooth functions which combines clustering, classification and regression. The idea is to split the space following the localized behaviors or regimes of the system and to build local surrogates that are eventually assembled. A sequence of well-known machine learning techniques is used: Dirichlet process mixture models (DPMM), support vector machines and Gaussian process modelling. The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
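A minimal sketch of the three-stage pipeline using off-the-shelf scikit-learn components (a truncated Dirichlet process mixture, an SVM classifier, and per-regime Gaussian processes). Details such as clustering in the joint input-output space are assumptions made for the example, not necessarily the paper's exact choices.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_local_surrogates(X, y, max_clusters=5, seed=0):
    """Cluster -> classify -> regress, in the spirit of the three-stage approach."""
    # Stage 1: Dirichlet process mixture on (input, output) pairs to find regimes.
    Z = np.column_stack([X, y])
    dpmm = BayesianGaussianMixture(
        n_components=max_clusters,
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(Z)
    labels = dpmm.predict(Z)
    # Stage 2: an SVM learns the boundary between regimes in the input space only.
    clf = SVC().fit(X, labels)
    # Stage 3: one Gaussian process surrogate per discovered regime.
    gps = {c: GaussianProcessRegressor().fit(X[labels == c], y[labels == c])
           for c in np.unique(labels)}
    return clf, gps

def predict(clf, gps, X_new):
    """Route each new point to its regime's local surrogate."""
    labels = clf.predict(X_new)
    return np.array([gps[c].predict(x[None])[0] for c, x in zip(labels, X_new)])
```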
We study differentially private (DP) stochastic optimization (SO) with loss functions whose worst-case Lipschitz parameter over all data points may be extremely large. To date, the vast majority of work on DP SO assumes that the loss is uniformly Lipschitz continuous over data (i.e., stochastic gradients are uniformly bounded over all data points). While this assumption is convenient, it often leads to pessimistic excess risk bounds. In many practical problems, the worst-case Lipschitz parameter of the loss is extremely large due to outliers, and the error bounds for DP SO, which scale with this parameter, are vacuous. To address these limitations, this work provides near-optimal excess risk bounds that do not depend on the uniform Lipschitz parameter of the loss. Building on a recent line of work [WXDX20, KLZ22], we instead assume that stochastic gradients have bounded $k$-th order moments for some $k \geq 2$. Compared with works on uniformly Lipschitz DP SO, our excess risk scales with the $k$-th moment bound instead of the uniform Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers and/or heavy-tailed data. For convex and strongly convex loss functions, we provide the first asymptotically optimal excess risk bounds (up to a logarithmic factor). In contrast to [WXDX20, KLZ22], our bounds do not require the loss function to be differentiable/smooth. We also devise an accelerated algorithm for smooth losses that runs in linear time and has excess risk that is tight in certain practical parameter regimes. Additionally, our work is the first to address non-convex non-uniformly Lipschitz loss functions satisfying the Proximal-PL inequality; this covers some practical machine learning models. Our Proximal-PL algorithm has near-optimal excess risk.
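For intuition, the following toy sketch shows the standard per-sample clipping idea that removes the dependence on a uniform Lipschitz bound: clipping caps the sensitivity of each update even when outliers produce huge gradients. This is a generic illustration, not the algorithm analyzed in the paper, and the privacy calibration of `sigma` is omitted.

```python
import numpy as np

def clipped_dp_sgd(grad_fn, w0, data, clip=1.0, lr=0.1, sigma=1.0, epochs=5, seed=0):
    """Toy DP-SGD with per-sample gradient clipping and Gaussian noise.

    grad_fn(w, x) returns the per-sample gradient. Clipping at a fixed
    threshold bounds sensitivity even when some samples (outliers) have
    very large gradients, so no uniform Lipschitz constant is needed.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        grads = np.array([grad_fn(w, x) for x in data])
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        # Scale each per-sample gradient down to norm at most `clip`.
        clipped = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        # Gaussian noise scaled to the clipping threshold (calibration omitted).
        noise = rng.normal(0.0, sigma * clip, size=w.shape) / len(data)
        w -= lr * (clipped.mean(axis=0) + noise)
    return w
```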
The adaptive processing of structured data is a long-standing research topic in machine learning that investigates how to automatically learn a mapping from a structured input to outputs of various nature. Recently, there has been an increasing interest in the adaptive processing of graphs, which has led to the development of different neural network-based methodologies. In this thesis, we take a different route and develop a Bayesian Deep Learning framework for graph learning. The dissertation begins with a review of the principles on which most of the methods in the field are built, followed by a study of reproducibility issues in graph classification. We then proceed to bridge the basic ideas of deep learning for graphs with the Bayesian world by building our deep architectures in an incremental fashion. This framework allows us to consider graphs with discrete and continuous edge features, producing unsupervised embeddings rich enough to reach the state of the art on several classification tasks. Our approach is also amenable to a Bayesian nonparametric extension that automates the choice of almost all of the model's hyper-parameters. Two real-world applications demonstrate the efficacy of deep learning for graphs. The first concerns the prediction of information-theoretic quantities for molecular simulations with supervised neural models. After that, we exploit our Bayesian models to solve a malware-classification task while remaining robust to intra-procedural code obfuscation techniques. We conclude the dissertation with an attempt to blend the best of the neural and Bayesian worlds. The resulting hybrid model is able to predict multimodal distributions conditioned on input graphs, with the consequent ability to model stochasticity and uncertainty better than most works. Overall, we aim to provide a Bayesian perspective on the articulated research field of deep learning for graphs.
Spectral clustering (SC) is a popular clustering technique for finding strongly connected communities in a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. Based on the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
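A minimal sketch of a relaxed objective consistent with the normalized minCUT formulation described above: the cut term rewards assignments aligned with densely connected groups, while an orthogonality term discourages degenerate clusters. The exact regularizer used in the paper may differ in detail.

```python
import torch

def mincut_losses(S, A):
    """Relaxed normalized minCUT loss plus an orthogonality regularizer.

    S : (N, K) soft cluster assignments (rows sum to 1, e.g. via softmax)
    A : (N, N) dense adjacency matrix
    """
    D = torch.diag(A.sum(dim=1))
    # Maximize within-cluster connectivity relative to cluster degree volume.
    cut = -torch.trace(S.T @ A @ S) / torch.trace(S.T @ D @ S)
    # Push S^T S towards a scaled identity: balanced, non-overlapping clusters.
    StS = S.T @ S
    K = S.shape[1]
    ortho = torch.norm(StS / torch.norm(StS) - torch.eye(K) / K ** 0.5)
    return cut, ortho

# In training, a GNN would produce S = softmax(MLP(node_embeddings)) and be
# optimized on cut + ortho, possibly jointly with a supervised task loss.
```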
Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
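The core training loop can be summarized in a few lines. This is a simplified "vanilla" variant that trains on one cluster at a time (the paper also proposes stochastically merging multiple clusters per batch), and `train_step` is a placeholder for the framework-specific forward/backward pass.

```python
import numpy as np

def cluster_gcn_epoch(partition, adj, feats, labels, train_step):
    """One epoch of cluster-wise mini-batch training, Cluster-GCN style.

    partition : list of node-index arrays from a graph clustering algorithm
                (e.g. METIS); each cluster becomes one mini-batch.
    train_step(sub_adj, sub_feats, sub_labels) runs a forward/backward pass
    restricted to the sampled subgraph.
    """
    for block in np.random.permutation(len(partition)):
        nodes = partition[block]
        # Restrict the neighborhood search to the cluster: all
        # between-cluster edges are dropped from this mini-batch.
        sub_adj = adj[np.ix_(nodes, nodes)]
        train_step(sub_adj, feats[nodes], labels[nodes])
```

Because each mini-batch touches only one subgraph, memory usage scales with the cluster size rather than with the full neighborhood expansion, which is what makes deeper GCNs affordable.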
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted, but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
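A sketch of the inner evaluation step: sample an adjacency matrix from independent Bernoulli edge variables and score a GCN on it. The straight-through estimator used here is a common simplification for illustration; the paper itself optimizes the bilevel program with hypergradient estimates.

```python
import torch

def sample_graph_and_loss(theta, feats, labels, gcn):
    """Sample a graph from learned edge probabilities and evaluate a GCN on it.

    theta : (N, N) logits parameterizing independent Bernoulli edge variables
    gcn   : a model taking (adjacency, features) and returning class logits
    """
    probs = torch.sigmoid(theta)
    # Draw a discrete graph; the straight-through trick keeps the forward
    # pass discrete while letting gradients flow to the edge probabilities.
    adj = torch.bernoulli(probs)
    adj = adj + probs - probs.detach()
    preds = gcn(adj, feats)
    return torch.nn.functional.cross_entropy(preds, labels)
```

Averaging this loss over several sampled graphs approximates the expected task loss under the edge distribution, which is the quantity the structure-learning outer problem seeks to minimize.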