Vector autoregressions (VARs) are popular for analyzing economic time series, but they can be over-parameterized when the numbers of variables and lags are even moderately large. Tensor VAR, a recent solution to over-parameterization, treats the coefficient matrix as a third-order tensor and estimates a corresponding tensor decomposition to achieve parsimony. In this paper, we employ the Tensor VAR structure with a CANDECOMP/PARAFAC (CP) decomposition and conduct Bayesian inference to estimate the parameters. First, we determine the rank by imposing the Multiplicative Gamma Prior on the margins, i.e. the elements of the decomposition, and accelerate computation with an adaptive inferential scheme. Second, to obtain interpretable margins, we propose an interweaving algorithm that improves their mixing. In an application to US macroeconomic data, our models outperform standard VARs in point and density forecasting and yield a summary of US economic dynamics.
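As a minimal sketch of the parsimony argument only (not the paper's Bayesian estimator, and with illustrative names N, P, R, beta1, beta2, beta3), the snippet below assembles a VAR coefficient tensor from CP margins and computes a one-step forecast; the multiplicative gamma prior, adaptive scheme, and interweaving step are not reproduced.

    # Minimal sketch: assemble a VAR coefficient tensor from CP margins and
    # compute a one-step-ahead forecast.  The margins need only
    # (2*N + P) * R parameters instead of the N * N * P of an unrestricted VAR.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P, R = 5, 4, 2          # variables, lags, assumed CP rank (illustrative)

    # CP margins: response loadings, predictor loadings, lag loadings.
    beta1 = rng.standard_normal((N, R))
    beta2 = rng.standard_normal((N, R))
    beta3 = rng.standard_normal((P, R))

    # Coefficient tensor A[i, j, l] = sum_r beta1[i,r] * beta2[j,r] * beta3[l,r].
    A = np.einsum('ir,jr,lr->ijl', beta1, beta2, beta3)

    # One-step forecast: y_t = sum_{l=1}^{P} A[:, :, l-1] @ y_{t-l} (noise omitted).
    Y_past = rng.standard_normal((P, N))          # rows: y_{t-1}, ..., y_{t-P}
    y_next = sum(A[:, :, l] @ Y_past[l] for l in range(P))
    print(y_next)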
Temporal irreversibility, often referred to as the arrow of time, is a fundamental concept in statistical mechanics. Markers of irreversibility also provide a powerful characterisation of information processing in biological systems. However, current approaches tend to describe temporal irreversibility in terms of a single scalar quantity, without disentangling the underlying dynamics that contribute to irreversibility. Here we propose a broadly applicable information-theoretic framework to characterise the arrow of time in multivariate time series, which yields qualitatively different types of irreversible information dynamics. This multidimensional characterisation reveals previously unreported high-order modes of irreversibility, and establishes a formal connection between recent heuristic markers of temporal irreversibility and metrics of information processing. We demonstrate the prevalence of high-order irreversibility in the hyperactive regime of a biophysical model of brain dynamics, showing that our framework is both theoretically principled and empirically useful. This work challenges the view of the arrow of time as a monolithic entity, enhancing both our theoretical understanding of irreversibility and our ability to detect it in practical applications.
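For orientation, the sketch below estimates one of the standard scalar markers that the paper argues should be decomposed: the KL divergence between the empirical distributions of forward and time-reversed pairs of a discretized series. It is not the paper's multivariate, multidimensional framework; bin counts and test signals are illustrative.

    # A standard scalar irreversibility marker: KL divergence between the
    # empirical distributions of (x_t, x_{t+1}) and (x_{t+1}, x_t) after
    # discretizing the series into a few symbols.
    import numpy as np

    def irreversibility(x, n_bins=4, eps=1e-12):
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        symbols = np.digitize(x, edges)
        joint = np.zeros((n_bins, n_bins))
        for a, b in zip(symbols[:-1], symbols[1:]):
            joint[a, b] += 1.0
        joint /= joint.sum()
        fwd, bwd = joint + eps, joint.T + eps
        return float(np.sum(fwd * np.log(fwd / bwd)))

    rng = np.random.default_rng(1)
    reversible = rng.standard_normal(10_000)        # i.i.d. noise: marker near 0
    ramp = np.tile(np.linspace(0, 1, 50), 200)      # sawtooth: clearly irreversible
    print(irreversibility(reversible), irreversibility(ramp))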
We show how to learn discrete field theories from observational data of fields on a space-time lattice. To this end, we train a neural network model of a discrete Lagrangian density such that the discrete Euler--Lagrange equations are consistent with the given training data. We thus obtain a structure-preserving machine learning architecture. Because Lagrangian densities are not uniquely defined by the solutions of a field theory, we introduce a technique to derive regularisers for the training process which optimise the numerical regularity of the discrete field theory. Minimising the regularisers guarantees that, close to the training data, the discrete field theory behaves robustly and efficiently when used in numerical simulations. Further, we show how to identify structurally simple solutions of the underlying continuous field theory, such as travelling waves, even when such solutions are not present in the training data. This contrasts with approaches based on data-driven model order reduction, which struggle to identify suitable latent spaces containing structurally simple solutions when these are absent from the training data. The ideas are demonstrated on examples based on the wave equation and the Schr\"odinger equation.
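To make the fitting principle concrete, here is a minimal sketch that replaces the paper's neural network with a three-parameter quadratic ansatz for the discrete Lagrangian density, so that the discrete Euler--Lagrange residuals are linear in the parameters and can be fitted by least squares to lattice data from the wave equation; the regularisers and the travelling-wave identification are not reproduced, and all names (dt, dx, a, b, g) are illustrative.

    # Fit a discrete Lagrangian density so that its discrete Euler-Lagrange
    # equations are consistent with lattice data from u_tt = c^2 u_xx.
    import numpy as np

    c_true, dt, dx = 2.0, 0.01, 0.05
    t = np.arange(0, 1, dt)[:, None]
    x = np.arange(0, 2, dx)[None, :]
    u = np.sin(x - c_true * t) + 0.5 * np.cos(2 * (x + c_true * t))  # exact solution

    # Ansatz: L_d(u0,u1,u2) = a/2*((u1-u0)/dt)^2 - b/2*((u2-u0)/dx)^2 - g/2*u0^2,
    # with u0 = u[i,j], u1 = u[i+1,j], u2 = u[i,j+1].  Its discrete
    # Euler-Lagrange equation at an interior node (i, j) is
    #   -a*(u[i+1,j]-2u[i,j]+u[i-1,j])/dt^2
    #   + b*(u[i,j+1]-2u[i,j]+u[i,j-1])/dx^2 - g*u[i,j] = 0,
    # linear in (a, b, g).  L_d is only defined up to scale, so fix a = 1.
    utt = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt**2
    uxx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    uc = u[1:-1, 1:-1]

    # Residual with a = 1:  b*uxx - g*uc = utt  (least-squares fit for b, g).
    A = np.column_stack([uxx.ravel(), -uc.ravel()])
    b_hat, g_hat = np.linalg.lstsq(A, utt.ravel(), rcond=None)[0]
    print("recovered wave speed:", np.sqrt(b_hat), " (true:", c_true, ")")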
This article re-examines Lawvere's abstract, category-theoretic proof of the fixed-point theorem whose contrapositive is a `universal' diagonal argument. The main result is that the necessary axioms for both the fixed-point theorem and the diagonal argument can be stripped back further, to a semantic analogue of a weak substructural logic lacking weakening or exchange.
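For reference, the classical form of the result being re-examined is recalled below (stated, as usual, for a cartesian closed category; the article's point is that much weaker, substructural assumptions suffice).

    % Classical statement of Lawvere's fixed-point theorem.
    \textbf{Theorem (Lawvere).}
    Let $\mathcal{C}$ be a cartesian closed category and let
    $\varphi \colon A \to B^A$ be weakly point-surjective, i.e.\ for every
    $f \colon A \to B$ there is a point $a \colon 1 \to A$ such that
    $\mathrm{ev} \circ \langle \varphi \circ a,\, x \rangle = f \circ x$
    for all $x \colon 1 \to A$. Then every $t \colon B \to B$ has a fixed
    point: there exists $b \colon 1 \to B$ with $t \circ b = b$.
    % Contrapositive (the `universal' diagonal argument): if some
    % $t \colon B \to B$ has no fixed point, then no map $A \to B^A$ is
    % weakly point-surjective.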
Very different strategies can be deployed to recognize and characterize an unknown environment or shape. A recent and promising approach, especially in robotics, is to reduce the complexity of the exploratory units to a minimum. Here, we show that this frugal strategy can be taken to the extreme by exploiting the power of statistical geometry and introducing new invariant features. We show that an elementary robot devoid of any orientation or observation system, exploring randomly, can access global information about an environment, such as the values of the explored area and perimeter. The explored shapes are of arbitrary geometry and may even be non-connected. Using a dictionary, this minimal robot can thus identify various shapes, such as famous monuments, and even read a text.
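As the simplest reminder of how purely random probing can recover a global quantity such as area, here is plain hit-or-miss Monte Carlo on a unit disk; this is not the paper's orientation-free exploration scheme or its invariant features, just the baseline intuition.

    # Random probes, with no orientation or mapping, already recover the area.
    import numpy as np

    rng = np.random.default_rng(2)
    inside = lambda p: np.hypot(p[:, 0], p[:, 1]) < 1.0   # unit disk as test shape
    box_area = 4.0                                        # bounding box [-1, 1]^2
    probes = rng.uniform(-1, 1, size=(100_000, 2))
    print("estimated area:", box_area * inside(probes).mean(), " (true:", np.pi, ")")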
Linear statistics of point processes yield Monte Carlo estimators of integrals. While the simplest approach relies on a homogeneous Poisson point process, more regularly spread point processes, such as scrambled low-discrepancy sequences or determinantal point processes, can yield Monte Carlo estimators with fast-decaying mean square error. Following the intuition that more regular configurations result in lower integration error, we introduce the repulsion operator, which reduces clustering by slightly pushing the points of a configuration away from each other. Our main theoretical result is that applying the repulsion operator to a homogeneous Poisson point process yields an unbiased Monte Carlo estimator with lower variance than under the original point process. On the computational side, the evaluation of our estimator is only quadratic in the number of integrand evaluations and can be easily parallelized without any communication across tasks. We illustrate our variance reduction result with numerical experiments and compare it to popular Monte Carlo methods. Finally, we numerically investigate a few open questions on the repulsion operator. In particular, the experiments suggest that the variance reduction also holds when the operator is applied to other motion-invariant point processes.
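The sketch below shows only the baseline that the repulsion operator is designed to improve: for a homogeneous Poisson point process of intensity lam on the unit cube, the linear statistic (1/lam) * sum_i f(x_i) is an unbiased estimator of the integral of f. The repulsion operator itself is not reproduced; intensity and integrand are illustrative.

    # Baseline Poisson-point-process Monte Carlo estimator on [0,1]^d.
    import numpy as np

    rng = np.random.default_rng(3)
    d, lam = 2, 500.0
    f = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]   # exact integral: (2/pi) * (1/2)

    estimates = []
    for _ in range(2000):
        n = rng.poisson(lam)                          # |W| = 1, so the mean count is lam
        pts = rng.uniform(size=(n, d))
        estimates.append(f(pts).sum() / lam)
    print("mean:", np.mean(estimates), " exact:", 2 / np.pi * 0.5,
          " variance:", np.var(estimates))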
Stochastic inversion problems typically arise when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, signal-to-noise constraints that formalize this meaningfulness should be accounted for in the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, using the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints on the input covariance and can be made explicit for linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and they offer hope for future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
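A toy illustration of the kind of constraint involved, in the simplest linear Gaussian case y = a*x + eps only: the input variance is identified from the output variance, and a "meaningfulness" requirement asks the input to explain at least a share rho of the output variance. The article's Bayesian inference and sensitivity/information-theoretic machinery are not reproduced; all names (a, sigma_eps, rho) are illustrative.

    # Moment-based inversion of the input variance from noisy outputs,
    # plus a signal-to-noise ("meaningfulness") check.
    import numpy as np

    rng = np.random.default_rng(4)
    a, sigma_x, sigma_eps, rho = 1.5, 0.8, 0.5, 0.5   # rho: required explained-variance share

    y = a * rng.normal(0.0, sigma_x, 20_000) + rng.normal(0.0, sigma_eps, 20_000)
    var_y = y.var()

    # Identifiability: Var(y) = a^2 * Var(x) + sigma_eps^2.
    var_x_hat = max((var_y - sigma_eps**2) / a**2, 0.0)

    # Constraint: the input must explain at least a share rho of Var(y).
    explained = a**2 * var_x_hat / var_y
    print("estimated Var(x):", var_x_hat,
          " explained share:", explained, " meaningful:", explained >= rho)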
This article revisits the fundamental problem of parameter selection for Gaussian process interpolation. By choosing the mean and the covariance functions of a Gaussian process within parametric families, the user obtains a family of Bayesian procedures for predicting the unknown function and must choose a member of the family that will hopefully provide good predictive performance. We base our study on the general concept of scoring rules, which provides an effective framework for building leave-one-out selection and validation criteria, and on a notion of extended likelihood criteria based on an idea proposed by Fasshauer and co-authors in 2009, which makes it possible to recover standard selection criteria such as, for instance, the generalized cross-validation criterion. In this setting, we empirically show on several test problems from the literature that the choice of an appropriate family of models is often more important than the choice of a particular selection criterion (e.g., the likelihood versus a leave-one-out selection criterion). Moreover, our numerical results show that the regularity parameter of a Mat{\'e}rn covariance can be selected effectively by most selection criteria.
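As a minimal sketch of one leave-one-out criterion of the kind discussed, the snippet below selects the lengthscale of a Matérn 3/2 covariance by the negative log predictive density, using the standard closed-form LOO formulas for Gaussian process prediction; the grid of lengthscales, jitter, and test function are illustrative, and this is not the article's full comparison of criteria and model families.

    # Leave-one-out scoring for GP interpolation with a Matern-3/2 covariance.
    import numpy as np

    def matern32(X, ell, var=1.0):
        r = np.abs(X[:, None] - X[None, :]) / ell
        return var * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

    def loo_nlpd(y, K, jitter=1e-10):
        Kinv = np.linalg.inv(K + jitter * np.eye(len(y)))
        mu = y - (Kinv @ y) / np.diag(Kinv)    # closed-form LOO predictive means
        s2 = 1.0 / np.diag(Kinv)               # closed-form LOO predictive variances
        return float(np.mean(0.5 * np.log(2 * np.pi * s2) + 0.5 * (y - mu) ** 2 / s2))

    rng = np.random.default_rng(5)
    X = np.sort(rng.uniform(0, 1, 30))
    y = np.sin(6 * X) + 0.3 * X

    for ell in [0.02, 0.1, 0.5, 2.0]:
        print(f"ell = {ell:4}:  LOO NLPD = {loo_nlpd(y, matern32(X, ell)):.3f}")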
Consider minimizing the entropy of a mixture of states by choosing each state subject to constraints. If the spectrum of each state is fixed, we expect that in order to reduce the entropy of the mixture, we should make the states less distinguishable in some sense. Here, we study a class of optimization problems that are inspired by this situation and shed light on the relevant notions of distinguishability. The motivation for our study is the recently introduced spin alignment conjecture. In the original version of the underlying problem, each state in the mixture is constrained to be a freely chosen state on a subset of $n$ qubits tensored with a fixed state $Q$ on each of the qubits in the complement. According to the conjecture, the entropy of the mixture is minimized by choosing the freely chosen state in each term to be a tensor product of projectors onto a fixed maximal eigenvector of $Q$, which maximally "aligns" the terms in the mixture. We generalize this problem in several ways. First, instead of minimizing entropy, we consider maximizing arbitrary unitarily invariant convex functions such as Fan norms and Schatten norms. To formalize and generalize the conjectured required alignment, we define alignment as a preorder on tuples of self-adjoint operators that is induced by majorization. We prove the generalized conjecture for Schatten norms of integer order, for the case where the freely chosen states are constrained to be classical, and for the case where only two states contribute to the mixture and $Q$ is proportional to a projector. The last case fits into a more general situation where we give explicit conditions for maximal alignment. The spin alignment problem has a natural "dual" formulation, versions of which have further generalizations that we introduce.
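A quick numerical sanity check (not a proof) of the conjectured minimizer described above, for the smallest non-trivial instance: two qubits and two terms, rho1 tensored with Q and Q tensored with rho2, mixed with equal weights; the aligned choice takes each free state to be a projector onto a maximal eigenvector of Q. The paper's generalizations (Fan and Schatten norms, the majorization preorder) are not touched.

    # Compare the entropy of the aligned mixture with random free states.
    import numpy as np

    rng = np.random.default_rng(6)

    def rand_state(d=2):                       # random density matrix (Ginibre)
        G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
        rho = G @ G.conj().T
        return rho / np.trace(rho).real

    def entropy(rho):                          # von Neumann entropy (nats)
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return float(-(w * np.log(w)).sum())

    Q = rand_state()
    w, v = np.linalg.eigh(Q)
    P = np.outer(v[:, -1], v[:, -1].conj())    # projector onto a maximal eigenvector of Q

    mix = lambda r1, r2: 0.5 * np.kron(r1, Q) + 0.5 * np.kron(Q, r2)
    aligned = entropy(mix(P, P))
    best_random = min(entropy(mix(rand_state(), rand_state())) for _ in range(2000))
    print("aligned:", aligned, " best over random free states:", best_random)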
We present the Continuous Empirical Cubature Method (CECM), a novel algorithm for empirically devising efficient integration rules. The CECM aims to improve on existing cubature methods by producing rules that are close to optimal, featuring far fewer points than the number of functions to integrate. The CECM consists of a two-stage strategy. First, a point selection strategy is applied to obtain an initial approximation to the cubature rule, featuring as many points as functions to integrate. The second stage is a sparsification strategy in which, alongside the indices and corresponding weights, the spatial coordinates of the points are also treated as design variables. The positions of the initially selected points are changed so as to drive their associated weights to zero, and in this way the minimum number of points is achieved. Although originally conceived within the framework of hyper-reduced order models (HROMs), we present the method's formulation in terms of generic vector-valued functions, thereby accentuating its versatility across various problem domains. To demonstrate the broad applicability of the method, we conduct numerical validations using univariate and multivariate Lagrange polynomials; in these cases, we show the method's capacity to retrieve the optimal Gaussian rule. We also assess the method on an arbitrary exponential-sinusoidal function in a 3D domain, and finally consider an application of the method to the hyperreduction of a multiscale finite element model, showcasing notable computational performance gains. A secondary contribution of the current paper is the Sequential Randomized SVD (SRSVD) approach for computing the Singular Value Decomposition (SVD) in a column-partitioned format. The SRSVD is particularly advantageous when matrix sizes approach memory limitations.
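The sketch below illustrates only the point-selection idea behind the first stage: picking, from a fine candidate grid, non-negative weights that reproduce the exact integrals of a small polynomial basis, via non-negative least squares. The second, sparsification stage of the CECM, in which the point coordinates themselves become design variables, is not reproduced, so this sketch does not reach the 2-point Gauss rule that the CECM can recover; grid and basis are illustrative.

    # First-stage-style point selection by non-negative least squares.
    import numpy as np
    from scipy.optimize import nnls

    degree = 3
    exact = np.array([1.0 / (k + 1) for k in range(degree + 1)])   # integrals of x^k on [0,1]
    candidates = np.linspace(0.0, 1.0, 201)
    A = np.vstack([candidates ** k for k in range(degree + 1)])    # (n_functions, n_candidates)

    weights, _ = nnls(A, exact)
    selected = np.flatnonzero(weights > 1e-12)
    print("selected points:", candidates[selected])
    print("weights:        ", weights[selected])
    # The resulting rule integrates a cubic (up to the NNLS residual):
    p = lambda x: 1.0 - 2 * x + 3 * x**2 - 4 * x**3
    print("rule:", weights[selected] @ p(candidates[selected]), " exact:", 1 - 1 + 1 - 1)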
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
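A schematic rendering of the comparison step only, not the study's fitted model: similarity between the AI's saliency map and the explanations the participant would give for each candidate label decays exponentially with distance, as in Shepard's law, and the predicted inference about the AI's decision follows those similarities; the distance measure, decay rate, and all names are illustrative assumptions.

    # Predict which label a participant infers the AI chose, given its saliency map.
    import numpy as np

    def predicted_inference(ai_saliency, own_saliency_by_label, c=1.0):
        labels = list(own_saliency_by_label)
        d = np.array([np.linalg.norm(ai_saliency - own_saliency_by_label[k]) for k in labels])
        g = np.exp(-c * d)                     # Shepard's universal law of generalization
        return dict(zip(labels, g / g.sum()))  # predicted P(participant infers AI chose label)

    rng = np.random.default_rng(7)
    ai_map = rng.random((8, 8))
    own = {"cat": ai_map + 0.1 * rng.random((8, 8)),   # close to the AI's explanation
           "dog": rng.random((8, 8))}                  # unrelated explanation
    print(predicted_inference(ai_map, own))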