Binary spatio-temporal data are common in many application areas. Such data can be considered from many perspectives, including via deterministic or stochastic cellular automata (CA), where local rules govern the transition probabilities that describe the evolution of the 0 and 1 states across space and time. One implementation of a stochastic cellular automaton for such data is a spatio-temporal generalized linear model (or mixed model), with the local-rule covariates included in the transformed mean response. However, in real-world applications we seldom have a complete understanding of the local rules, and it is helpful to augment the transformed linear predictor with a latent spatio-temporal dynamic process. Here, we demonstrate for the first time that an echo state network (ESN) latent process can be used to enhance the local-rule covariates. We implement this in a hierarchical Bayesian framework with regularized horseshoe priors on the ESN output weight matrices, which extends the ESN literature as well. Finally, we gain added expressiveness from the ESNs by considering an ensemble of ESN reservoirs, which we accommodate through model averaging; this, too, is new to the ESN literature. We demonstrate our methodology on a simulated process in which we assume we do not know all of the local CA rules, as well as on a fire-evolution data set and on data describing the spread of raccoon rabies in Connecticut, USA.
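To make the ensemble-of-reservoirs idea concrete, here is a minimal numpy sketch: several fixed random reservoirs are driven by the covariates, a readout is fitted per reservoir (ridge regression stands in for the paper's regularized-horseshoe Bayesian output weights), and the resulting probabilities are averaged. All dimensions, data, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Draw fixed random input and recurrent weights for one ESN reservoir."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale for the echo-state property
    return W_in, W

def run_reservoir(W_in, W, U):
    """Drive the reservoir with inputs U (T x n_in); return hidden states (T x n_res)."""
    h = np.zeros(W.shape[0])
    H = []
    for u in U:
        h = np.tanh(W_in @ u + W @ h)
        H.append(h)
    return np.array(H)

# Ensemble of reservoirs; readouts fitted by ridge regression as a stand-in
# for the paper's hierarchical Bayesian output weights.
T, n_in, n_res, n_models = 200, 3, 50, 5
U = rng.normal(size=(T, n_in))                    # local-rule covariates (toy data)
y = rng.binomial(1, 0.5, size=T).astype(float)    # binary responses (toy data)

preds = []
for _ in range(n_models):
    W_in, W = make_reservoir(n_in, n_res)
    H = run_reservoir(W_in, W, U)
    beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_res), H.T @ y)
    preds.append(1 / (1 + np.exp(-H @ beta)))     # logistic link on the linear predictor

p_avg = np.mean(preds, axis=0)                    # simple model averaging across reservoirs
```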
Generative Adversarial Networks (GANs) have shown immense potential in fields such as text and image generation. Only very recently have attempts to apply GANs to statistical-mechanics models been reported. Here we quantitatively test this approach by applying it to a prototypical stochastic process on a lattice. By suitably adding noise to the original data we succeed in bringing both the Generator and the Discriminator loss functions close to their ideal values. Importantly, the discreteness of the model is retained despite the noise. As is typical for adversarial approaches, oscillations around the convergence limit persist even at large epochs. This undermines model selection and the quality of the generated trajectories. We demonstrate that a simple multi-model procedure, in which stochastic trajectories are advanced at each step by randomly selecting a Generator, leads to a remarkable increase in accuracy. This is illustrated by quantitative analysis of both the predicted equilibrium probability distribution and the escape-time distribution. Based on the reported findings, we believe that GANs are a promising tool for tackling complex statistical dynamics with machine learning techniques.
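The multi-model procedure itself is simple to state; the following hedged Python sketch advances a trajectory by drawing a Generator uniformly at random at each step. The toy single-spin "generators" only stand in for trained GAN Generators.

```python
import random

def advance_multi_model(x0, generators, n_steps, rng=random.Random(0)):
    """Advance a lattice trajectory by randomly choosing a Generator at each step.

    `generators` is a list of callables mapping the current configuration to
    the next one (each would be a trained GAN Generator in practice).
    """
    traj = [x0]
    for _ in range(n_steps):
        g = rng.choice(generators)      # uniform choice over the ensemble
        traj.append(g(traj[-1]))
    return traj

# Toy stand-ins for trained Generators on a single binary spin: each flips
# the state with a slightly different probability.
def make_toy_generator(p_flip, rng=random.Random(1)):
    return lambda x: 1 - x if rng.random() < p_flip else x

gens = [make_toy_generator(p) for p in (0.10, 0.12, 0.09)]
trajectory = advance_multi_model(0, gens, n_steps=1000)
```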
We are interested in numerically solving a transitional model derived from the Bloch model. The Bloch equation describes the time evolution of the density matrix of a quantum system forced by an electromagnetic wave. In a high frequency and low amplitude regime, it asymptotically reduces to a non-stiff rate equation. As a middle ground, the transitional model governs the diagonal part of the density matrix. It fits in a general setting of linear problems with a high-frequency quasi-periodic forcing and an exponentially decaying forcing. The numerical resolution of such problems is challenging. Adapting high-order averaging techniques to this setting, we separate the slow (rate) dynamics from the fast (oscillatory and decay) dynamics to derive a new micro-macro problem. We derive estimates for the size of the micro part of the decomposition, and of its time derivatives, showing that this new problem is non-stiff. As such, we may solve this micro-macro problem with uniform accuracy using standard numerical schemes. To validate this approach, we present numerical results first on a toy problem and then on the transitional Bloch model.
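Schematically, the micro-macro decomposition referred to above takes the following form for a linear problem with fast forcing (the notation here is generic, not the paper's):

```latex
% Consider u'(t) = A u(t) + f(t/\varepsilon, t), with f quasi-periodic in its
% fast first argument (plus, in the setting above, an exponentially decaying
% component). Writing u = v + w,
\[
  v'(t) = A\,v(t) + \bar f(t),
  \qquad
  \bar f(t) = \lim_{T \to \infty} \frac{1}{T} \int_0^T f(\tau, t)\,\mathrm{d}\tau,
\]
\[
  w'(t) = A\,w(t) + f(t/\varepsilon, t) - \bar f(t),
  \qquad
  w(0) = u(0) - v(0),
\]
% the macro part v carries the slow (rate) dynamics, while the micro part w
% carries the fast scales; the estimates mentioned above show that w and its
% time derivatives stay small, so both equations are non-stiff and can be
% solved by standard schemes with uniform accuracy.
```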
Stock and flow diagrams are widely used in epidemiology to model the dynamics of populations. Although tools already exist for building these diagrams and simulating the systems they describe, we have created a new package called StockFlow, part of the AlgebraicJulia ecosystem, which uses ideas from category theory to overcome notable limitations of existing software. Compositionality is provided by the theory of decorated cospans: stock and flow diagrams can be composed to form larger ones in an intuitive way formalized by the operad of undirected wiring diagrams. Our approach also cleanly separates the syntax of stock and flow diagrams from the semantics they can be assigned. We consider semantics in ordinary differential equations, although others are possible. As an example, we explain code in StockFlow that implements a simplified version of a COVID-19 model used in Canada.
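As an illustration of the ODE semantics (not of the StockFlow.jl API itself), the following Python sketch assigns a vector field to a stock-and-flow diagram and applies it to a toy SIR model; all names and parameter values are made up for the example.

```python
# Stocks are state variables; each flow moves material between stocks at a
# rate that may depend on the current stock values.

def vector_field(stocks, flows):
    """Build du/dt from a stock-and-flow diagram.

    stocks: list of stock names
    flows:  list of (source, target, rate_fn) where rate_fn(state) -> float
            and source/target may be None for boundary flows.
    """
    def f(state):
        du = {s: 0.0 for s in stocks}
        for src, tgt, rate_fn in flows:
            r = rate_fn(state)
            if src is not None:
                du[src] -= r
            if tgt is not None:
                du[tgt] += r
        return du
    return f

# Toy SIR model expressed as a stock-and-flow diagram (parameters illustrative).
beta, gamma, N = 0.3, 0.1, 1000.0
sir = vector_field(
    ["S", "I", "R"],
    [("S", "I", lambda s: beta * s["S"] * s["I"] / N),  # infection flow
     ("I", "R", lambda s: gamma * s["I"])],             # recovery flow
)

state = {"S": 990.0, "I": 10.0, "R": 0.0}
for _ in range(100):                                    # forward Euler, dt = 1
    du = sir(state)
    state = {k: state[k] + du[k] for k in state}
```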
To quantify uncertainties in inverse problems of partial differential equations (PDEs), we formulate them as statistical inference problems using Bayes' formula. Recently, well-justified infinite-dimensional Bayesian analysis methods have been developed to construct dimension-independent algorithms. However, these infinite-dimensional Bayesian methods face three challenges: prior measures usually act as regularizers and cannot incorporate prior information efficiently; complex noise, such as the more practical non-i.i.d. noise, is rarely considered; and time-consuming forward PDE solvers are needed to estimate posterior statistical quantities. To address these issues, we propose an infinite-dimensional inference framework based on the infinite-dimensional variational inference method and deep generative models. Specifically, by introducing measure-equivalence assumptions, we derive the evidence lower bound in the infinite-dimensional setting and provide possible parametric strategies that yield a general inference framework called the Variational Inverting Network (VINet). This framework can encode prior and noise information from training examples. In addition, relying on the power of deep neural networks, the posterior mean and variance can be generated efficiently and explicitly in the inference stage. In numerical experiments, we design specific network structures that yield a computable VINet from the general framework. Numerical examples of linear inverse problems for an elliptic equation and the Helmholtz equation illustrate the effectiveness of the proposed framework.
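For orientation, the evidence lower bound mentioned above has the familiar schematic form below (the paper derives it rigorously in the infinite-dimensional setting under the measure-equivalence assumptions):

```latex
\[
  \log p(d) \;\ge\; \mathcal{L}(q)
  \;=\; \mathbb{E}_{q(u)}\!\left[\log p(d \mid u)\right]
        \;-\; D_{\mathrm{KL}}\!\left(q(u) \,\|\, \mu_0(u)\right),
\]
% where d is the data, u the unknown function, \mu_0 the prior measure, and
% q the variational approximation parametrized by a deep generative model.
```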
For exchangeable data, mixture models are an extremely useful tool for density estimation due to their attractive balance between smoothness and flexibility. When additional covariate information is present, mixture models can be extended for flexible regression by modeling the mixture parameters, namely the weights and atoms, as functions of the covariates. These models are interpretable and highly flexible, allowing not only the mean but the whole density of the response to change with the covariates, a setting known as density regression. This article reviews Bayesian covariate-dependent mixture models and highlights which data types can be accommodated by the different models, along with the methodological and applied areas where they have been used. In addition to being highly flexible, these models are also numerous; we focus on nonparametric constructions and broadly organize them into three categories: 1) joint models of the responses and covariates, 2) conditional models with single weights and covariate-dependent atoms, and 3) conditional models with covariate-dependent weights. The diversity of available models raises the question of how to choose among them for the application at hand. We attempt to shed light on this question through a careful analysis of the predictive equations for the conditional mean and density function, as well as predictive comparisons in three simulated data examples.
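In schematic form, all three categories target the covariate-dependent mixture below; they differ in whether the weights, the atoms, or both are allowed to depend on the covariates:

```latex
\[
  f(y \mid x) \;=\; \sum_{k=1}^{\infty} w_k(x)\, K\!\left(y \mid \theta_k(x)\right),
  \qquad
  \sum_{k=1}^{\infty} w_k(x) = 1,
\]
% where K is a kernel density (e.g., Gaussian), and the weights w_k and
% atoms \theta_k are given nonparametric priors; single-weights models fix
% w_k(x) \equiv w_k.
```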
Theorems from universal algebra such as that of Murskiĭ from the 1970s have a striking similarity to universal approximation results for neural nets along the lines of Cybenko's from the 1980s. We consider here a discrete analogue of the classical notion of a neural net which places these results in a unified setting. We introduce a learning algorithm based on polymorphisms of relational structures and show how to use it for a classical learning task.
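For readers unfamiliar with the notion, a polymorphism of a relational structure is an operation that preserves its relations coordinatewise; the short Python check below makes this concrete (the example is illustrative, not the paper's algorithm):

```python
from itertools import product

def is_polymorphism(f, arity, relation):
    """Check whether f : A^arity -> A is a polymorphism of `relation`.

    `relation` is a set of equal-length tuples over some finite domain.
    f preserves the relation if applying it coordinatewise to any `arity`
    tuples from the relation again yields a tuple in the relation.
    """
    for tuples in product(relation, repeat=arity):
        image = tuple(f(*coords) for coords in zip(*tuples))
        if image not in relation:
            return False
    return True

# Example: binary minimum on {0, 1} preserves the "less-than-or-equal"
# relation, so it is a polymorphism of that structure.
leq = {(0, 0), (0, 1), (1, 1)}
print(is_polymorphism(min, 2, leq))  # True
```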
In recurrent neural networks, learning long-term dependencies is the main difficulty, owing to the vanishing and exploding gradient problem. Many algorithms have been proposed to address this issue, and although they have achieved great success, understanding how information decays remains an open problem. In this paper, we study the dynamics of the hidden state in recurrent neural networks. We propose a new perspective for analyzing the hidden state space based on an eigendecomposition of the weight matrix. We begin the analysis with a linear state-space model and explain the role of activation functions in preserving information. We provide an explanation of long-term dependency based on this eigen analysis, and we point out that the eigenvalues behave differently in regression tasks and classification tasks. Based on observations of well-trained recurrent neural networks, we propose a new initialization method for recurrent neural networks that consistently improves performance and can be applied to vanilla RNNs, LSTMs, and GRUs. We test on many datasets, including Tomita grammars, pixel-by-pixel MNIST, and machine translation (Multi30k), where the method outperforms the Xavier and Kaiming initializers, as well as RNN-specific initializers such as IRNN and sp-RNN, in several tasks.
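The abstract does not spell out the proposed initializer, so the sketch below only illustrates the underlying eigen-analysis view: inspecting the eigenvalue moduli of the recurrent weight matrix, together with an IRNN-style identity-like initialization (one of the baselines named above) whose spectrum clusters near 1.

```python
import numpy as np

def eigen_spectrum(W):
    """Moduli of the eigenvalues of a recurrent weight matrix.

    Eigenvalues with modulus near 1 correspond to slowly decaying modes of
    the linearized hidden-state dynamics, i.e., long memory.
    """
    return np.sort(np.abs(np.linalg.eigvals(W)))[::-1]

def identity_like_init(n, scale=1.0, noise=0.01, rng=np.random.default_rng(0)):
    """IRNN-style initialization: eigenvalues clustered near `scale`.

    A generic eigenvalue-targeted initializer for illustration, not the
    specific method proposed in the paper.
    """
    return scale * np.eye(n) + noise * rng.standard_normal((n, n))

W = identity_like_init(64)
print(eigen_spectrum(W)[:5])  # leading moduli sit close to 1
```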
We study a portioning setting in which a public resource such as time or money is to be divided among a given set of candidates, and each agent proposes a division of the resource. We consider two families of aggregation rules for this setting - those based on coordinate-wise aggregation and those that optimize some notion of welfare - as well as the recently proposed Independent Markets mechanism. We provide a detailed analysis of these rules from an axiomatic perspective, both for classic axioms, such as strategyproofness and Pareto optimality, and for novel axioms, which aim to capture proportionality in this setting. Our results indicate that a simple rule that computes the average of all proposals satisfies many of our axioms, including some that are violated by more sophisticated rules.
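The average rule highlighted in the last sentence is easy to state exactly; here is a minimal Python version (names illustrative):

```python
def average_rule(proposals):
    """Coordinate-wise mean of the agents' proposed divisions.

    Each proposal is a list of nonnegative shares summing to 1 over the
    candidates, so the output is again a valid division of the resource.
    """
    n = len(proposals)
    m = len(proposals[0])
    return [sum(p[j] for p in proposals) / n for j in range(m)]

# Three agents dividing a budget over three candidates.
print(average_rule([[1.0, 0.0, 0.0],
                    [0.5, 0.5, 0.0],
                    [0.0, 0.5, 0.5]]))  # [0.5, 0.333..., 0.166...]
```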
Developments in machine learning, together with the increasing use of sensor data, challenge the reliance on deterministic logs, requiring new process mining solutions for uncertain, and in particular stochastically known, logs. In this work we formulate trace recovery, the task of generating a deterministic log from stochastically known logs that is as faithful to reality as possible. An effective trace recovery algorithm would be a powerful aid for maintaining credible process mining tools in uncertain settings. We propose an algorithmic framework for this task that recovers the best alignment between a stochastically known log and a process model, with three innovative features. Our algorithm, SKTR, 1) handles both Markovian and non-Markovian processes; 2) offers a quality-based balance between a process model and a log, depending on the available process information, sensor quality, and the predictive power of the machine learning models; and 3) offers a novel use of a synchronous product multigraph to create the log. An empirical analysis using five publicly available datasets, three of which use predictive models over standard video-capturing benchmarks, shows an average relative accuracy improvement of more than 10% over a common baseline.
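SKTR's synchronous product multigraph is beyond a snippet, but the underlying alignment idea can be illustrated with a toy dynamic program: align a stochastically known trace (one probability distribution over activities per event) against a linear model trace, with synchronous moves costing one minus the predicted probability. This is a hedged illustration only, not the SKTR algorithm.

```python
def recover_trace(stochastic_trace, model_trace):
    """Toy trace recovery by alignment (illustration only, not SKTR).

    stochastic_trace: list of dicts, each mapping activity -> probability.
    model_trace: list of activities from a linear process model.
    Matching event i to model activity a costs 1 - P_i(a); skipping an
    event or a model activity (log/model moves) costs 1.
    """
    n, m = len(stochastic_trace), len(model_trace)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    back = {}
    for i in range(n + 1):        # forward DP; predecessors are final in row-major order
        for j in range(m + 1):
            if i < n and j < m:   # synchronous move
                c = cost[i][j] + 1.0 - stochastic_trace[i].get(model_trace[j], 0.0)
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[(i + 1, j + 1)] = c, "sync"
            if i < n and cost[i][j] + 1.0 < cost[i + 1][j]:      # log move
                cost[i + 1][j], back[(i + 1, j)] = cost[i][j] + 1.0, "log"
            if j < m and cost[i][j] + 1.0 < cost[i][j + 1]:      # model move
                cost[i][j + 1], back[(i, j + 1)] = cost[i][j] + 1.0, "model"
    # Backtrack, keeping the model activities chosen by synchronous moves.
    i, j, recovered = n, m, []
    while (i, j) != (0, 0):
        move = back[(i, j)]
        if move == "sync":
            recovered.append(model_trace[j - 1]); i, j = i - 1, j - 1
        elif move == "log":
            i -= 1
        else:
            j -= 1
    return recovered[::-1], cost[n][m]

trace = [{"a": 0.9, "b": 0.1}, {"b": 0.6, "c": 0.4}]
print(recover_trace(trace, ["a", "b"]))  # (['a', 'b'], 0.5)
```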
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution over the edges of the graph. This allows GCNs to be applied not only in scenarios where the given graph is incomplete or corrupted but also in those where no graph is available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
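As a hedged sketch of the core device, the snippet below samples graphs from independent Bernoulli edge probabilities; the actual method wraps this sampling in a bilevel program (inner problem: GCN weights; outer problem: edge-probability parameters trained against validation loss), which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_adjacency(theta):
    """Sample a symmetric adjacency matrix from independent Bernoulli
    edge probabilities sigmoid(theta[i, j]) over the upper triangle."""
    n = theta.shape[0]
    probs = 1.0 / (1.0 + np.exp(-theta))
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each candidate edge
    return (upper | upper.T).astype(float)            # symmetrize, zero diagonal

# theta parametrizes the learned edge distribution; in the bilevel scheme it
# would be updated by the outer optimization (e.g., via hypergradients).
theta = rng.normal(size=(5, 5))
A = sample_adjacency(theta)
print(A)
```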