Probabilistic programming combines general-purpose programming, statistical inference, and formal semantics to help systems make decisions under uncertainty. Probabilistic programs are ubiquitous and have a significant impact on machine intelligence. While many probabilistic algorithms have been used in practice in different domains, their automated verification based on formal semantics is still a relatively new research area. It has attracted much interest over the last two decades, but many challenges remain. The work presented in this paper, probabilistic unifying relations (ProbURel), takes a step towards our vision of tackling these challenges. Our work is based on Hehner's predicative probabilistic programming, but several obstacles hinder the broader adoption of his work. Our contributions here include (1) the formalisation of its syntax and semantics by introducing an Iverson bracket notation to separate relations from arithmetic; (2) the formalisation of relations using Unifying Theories of Programming (UTP) and of probabilities outside the brackets using summation over the topological space of the real numbers; (3) a constructive semantics for probabilistic loops using Kleene's fixed-point theorem; (4) the enrichment of the semantics from distributions to subdistributions and superdistributions to support the constructive semantics; (5) a unique fixed-point theorem that simplifies reasoning about probabilistic loops; and (6) the mechanisation of our theory in Isabelle/UTP, an implementation of UTP in Isabelle/HOL, for automated reasoning by theorem proving. We demonstrate our work with six examples, including problems in robot localisation, classification in machine learning, and the termination of probabilistic loops.
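For orientation, a minimal illustration (ours, in the style of Hehner's predicative semantics described above, not an excerpt from the paper) of how an Iverson bracket separates relations from arithmetic: the bracket $[b]$ maps a relation $b$ to $1$ if it holds and to $0$ otherwise, so a probabilistic choice between two assignments becomes a real-valued expression over final states:
\[
  (x := 0) \oplus_{1/2} (x := 1)
  \;\equiv\;
  \tfrac{1}{2}\,[x' = 0] \;+\; \tfrac{1}{2}\,[x' = 1],
\]
a (sub)distribution over the final value $x'$, with the relations kept inside the brackets and the probabilities outside.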
The study of diffeomorphism groups and their applications to problems in analysis and geometry has a long history. In geometric hydrodynamics, pioneered by V.~Arnold in the 1960s, one considers an ideal fluid flow as the geodesic motion on the infinite-dimensional group of volume-preserving diffeomorphisms of the fluid domain with respect to the metric defined by the kinetic energy. Similar considerations on the space of densities lead to a geometric description of optimal mass transport and the Kantorovich-Wasserstein metric. Likewise, information geometry associated with the Fisher-Rao metric and the Hellinger distance has an equally beautiful infinite-dimensional geometric description and can be regarded as a higher-order Sobolev analogue of optimal transportation. In this work, we review various metrics on diffeomorphism groups relevant to this approach and introduce appropriate topologies, smooth structures, and dynamics on the corresponding infinite-dimensional manifolds. Our main goal is to demonstrate how, alongside topological hydrodynamics, Hamiltonian dynamics and optimal mass transport, information geometry with its elaborate toolbox has become yet another exciting field for applications of geometric analysis on diffeomorphism groups.
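For reference, the Fisher-Rao metric and the Hellinger distance mentioned above have standard expressions on the space of densities (a general background fact, not specific to this survey; conventions on constant factors vary):
\[
  G^{\mathrm{FR}}_{\rho}(\delta\rho, \delta\rho)
  = \frac{1}{4}\int_M \frac{(\delta\rho)^2}{\rho}\, d\mu,
  \qquad
  d_{\mathrm{H}}(\rho_1, \rho_2)^2
  = \int_M \bigl(\sqrt{\rho_1} - \sqrt{\rho_2}\bigr)^2\, d\mu,
\]
and under the square-root map $\rho \mapsto 2\sqrt{\rho}$ the Fisher-Rao metric becomes the flat $L^2$ metric, which is the source of the explicit infinite-dimensional geometry alluded to above.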
The training of modern machine learning models often consists of solving high-dimensional non-convex optimisation problems over large-scale data. In this context, momentum-based stochastic optimisation algorithms have become particularly widespread. The stochasticity arises from data subsampling, which reduces computational cost; both momentum and stochasticity help the algorithm to converge globally. In this work, we propose and analyse a continuous-time model for stochastic gradient descent with momentum. This model is a piecewise-deterministic Markov process that represents the optimiser as an underdamped dynamical system and the data subsampling as a stochastic switching. We investigate long-time limits, the subsampling-to-no-subsampling limit, and the momentum-to-no-momentum limit. We are particularly interested in the case of reducing the momentum over time. Under convexity assumptions, we show convergence of our dynamical system to the global minimiser when reducing momentum over time and letting the subsampling rate go to infinity. We then propose a stable, symplectic discretisation scheme to construct an algorithm from our continuous-time dynamical system. In experiments, we study our scheme on convex and non-convex test problems. Additionally, we train a convolutional neural network on an image classification problem. Our algorithm attains competitive results compared to stochastic gradient descent with momentum.
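The abstract does not reproduce the scheme itself; as a rough illustration, here is a minimal Python sketch of a semi-implicit (symplectic) Euler discretisation of underdamped momentum dynamics with exponentially timed batch switching, mimicking the piecewise-deterministic structure described above. All names and parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def momentum_sgd_pdmp(grad_batch, x0, n_batches, steps=2000, dt=1e-2,
                      friction=1.0, switch_rate=10.0, seed=0):
    """Semi-implicit (symplectic) Euler step for the underdamped dynamics
        dx = p dt,   dp = -(friction * p + grad_i(x)) dt,
    with the active batch i resampled at exponential times, mimicking a
    piecewise-deterministic Markov process (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    p = np.zeros_like(x)
    i = rng.integers(n_batches)                      # currently active batch
    t, next_switch = 0.0, rng.exponential(1.0 / switch_rate)
    for _ in range(steps):
        if t >= next_switch:                         # stochastic switching
            i = rng.integers(n_batches)
            next_switch = t + rng.exponential(1.0 / switch_rate)
        p -= dt * (friction * p + grad_batch(i, x))  # momentum update first...
        x += dt * p                                  # ...then position: symplectic
        t += dt
    return x

# toy objective: average of two shifted quadratics, global minimiser at the origin
targets = [np.array([1.0, -1.0]), np.array([-1.0, 1.0])]
grad = lambda i, x: x - targets[i]   # gradient of 0.5*||x - target_i||^2
print(momentum_sgd_pdmp(grad, [0.0, 0.0], n_batches=2))
```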
We consider a nonlocal functional equation that generalizes a mathematical model used in the behavioral sciences. The equation is built upon an operator that introduces a convex combination and a nonlinear mixing of the function arguments. We show that, provided the coefficients satisfy certain growth conditions, there exists a unique solution in the natural Lipschitz space. Furthermore, we prove that the solution inherits its regularity from the smoothness of the coefficients. As a natural numerical method for the general case, we consider a collocation scheme based on piecewise linear functions. We prove that the method converges with an error bounded by the error of projecting the Lipschitz solution onto the piecewise linear polynomial space. Moreover, provided sufficient regularity of the coefficients, the scheme is of second order as measured in the supremum norm. A series of numerical experiments verifies the proven claims and shows that the implementation is computationally cheap and outperforms the frequently used Picard iteration by orders of magnitude in computation time.
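To illustrate the numerical idea on a much simpler instance (our toy example, not the paper's general operator), the Python sketch below solves the fixed-point functional equation $u(x) = x + \tfrac{1}{2}u(x/2)$ on $[0,1]$, whose exact solution is $u(x) = 4x/3$, by piecewise linear collocation, alongside the Picard iteration it is compared against:

```python
import numpy as np

def collocation(n=33):
    """Piecewise linear collocation: interpolation at the shifted points x/2
    is linear in the nodal values, so assemble its matrix and solve directly."""
    xs = np.linspace(0.0, 1.0, n)
    M = np.column_stack([np.interp(xs / 2, xs, e) for e in np.eye(n)])
    u = np.linalg.solve(np.eye(n) - 0.5 * M, xs)   # (I - 0.5 M) u = x
    return xs, u

def picard(n=33, iters=60):
    """Plain fixed-point (Picard) iteration on the same grid, for comparison."""
    xs, u = np.linspace(0.0, 1.0, n), np.zeros(n)
    for _ in range(iters):
        u = xs + 0.5 * np.interp(xs / 2, xs, u)
    return xs, u

xs, u = collocation()
print(np.max(np.abs(u - 4 * xs / 3)))   # ~1e-16: exact for a linear solution
```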
We extend prior work comparing linear multilevel models (MLM) and fixed effects (FE) models to the generalized linear model (GLM) setting, where the coefficient on a treatment variable is of primary interest. This leads to three key insights. (i) As in the linear setting, MLM can be thought of as a regularized form of FE, which explains why MLM can show large biases in its treatment coefficient estimates when group-level confounding is present. However, unlike the linear setting, there is no exact equivalence between MLM and regularized FE coefficient estimates in GLMs. (ii) We study a generalization of "bias-corrected MLM" (bcMLM) to the GLM setting. Neither FE nor bcMLM entirely solves MLM's bias problem in GLMs, but bcMLM tends to show less bias than FE. (iii) Finally, just as in the linear setting, MLM's default standard errors can rest on a misspecified intragroup dependence structure in the GLM setting, which can bias them downward. A cluster bootstrap is a more agnostic alternative. Ultimately, for non-linear GLMs, we recommend bcMLM for estimating the treatment coefficient, and a cluster bootstrap for standard errors and confidence intervals. If a bootstrap is not computationally feasible, then we recommend FE with cluster-robust standard errors.
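To make the recommended cluster bootstrap concrete, here is a hedged Python sketch using statsmodels (the data-generating process, threshold of column 1 as the treatment, and all names are our illustrative assumptions): resample whole groups with replacement, refit the GLM, and take percentile intervals for the treatment coefficient.

```python
import numpy as np
import statsmodels.api as sm

def cluster_bootstrap_ci(y, X, groups, n_boot=500, alpha=0.05, seed=0):
    """Cluster bootstrap: resample whole groups with replacement, refit the
    logistic GLM, and return a percentile CI for the treatment coefficient
    (assumed to sit in column 1 of X)."""
    rng = np.random.default_rng(seed)
    ids = np.unique(groups)
    coefs = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=ids.size, replace=True)
        idx = np.concatenate([np.flatnonzero(groups == k) for k in chosen])
        fit = sm.GLM(y[idx], X[idx], family=sm.families.Binomial()).fit()
        coefs.append(fit.params[1])
    return np.percentile(coefs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# toy grouped logistic data with group-level confounding (illustrative only)
rng = np.random.default_rng(1)
g = np.repeat(np.arange(30), 20)                    # 30 groups of 20
u = rng.normal(size=30)[g]                          # group effect
d = (rng.uniform(size=g.size) < 1 / (1 + np.exp(-u))).astype(float)
y = (rng.uniform(size=g.size) < 1 / (1 + np.exp(-(-0.5 + d + u)))).astype(float)
X = sm.add_constant(np.column_stack([d]))           # treatment in column 1
print(cluster_bootstrap_ci(y, X, g))
```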
Many models require integrals of high-dimensional functions, for instance to obtain marginal likelihoods. Such integrals may be intractable or too expensive to compute numerically. Instead, we can use the Laplace approximation (LA). The LA is exact if the function is proportional to a normal density, so its effectiveness depends on the function's true shape. Here, we propose the use of the probabilistic numerical framework to develop a diagnostic for the LA and its underlying shape assumptions, modelling the function and its integral as a Gaussian process and devising a "test" by conditioning on a finite number of function values. The test is decidedly non-asymptotic and is not intended as a full substitute for numerical integration; rather, it is simply intended to test the feasibility of the assumptions underpinning the LA with minimal computation. We discuss approaches to optimize and design the test, apply it to known sample functions, and highlight the challenges of high dimensions.
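For background, a minimal Python sketch of the LA itself in one dimension, compared against quadrature on a deliberately non-Gaussian integrand; this is the baseline that the proposed GP-based diagnostic would interrogate, not the diagnostic itself (the function and step size below are our illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def laplace_1d(loglik, h=1e-4):
    """Laplace approximation of Z = int exp(loglik(x)) dx: fit a Gaussian at
    the mode x*, giving Z_LA = exp(loglik(x*)) * sqrt(2*pi / -loglik''(x*))."""
    res = minimize_scalar(lambda x: -loglik(x))                     # mode x*
    xs = res.x
    curv = (loglik(xs - h) - 2 * loglik(xs) + loglik(xs + h)) / h**2
    return np.exp(loglik(xs)) * np.sqrt(2 * np.pi / -curv)

loglik = lambda x: -x**4 / 4 - x**2 / 2            # non-Gaussian shape
Z_true, _ = quad(lambda x: np.exp(loglik(x)), -np.inf, np.inf)
print(laplace_1d(loglik), Z_true)                  # the gap reveals the misfit
```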
Aperiodic autocorrelation is an important indicator of the performance of sequences used in communications, remote sensing, and scientific instrumentation. Knowing a sequence's autocorrelation function, which reports the autocorrelation at every possible translation, is equivalent to knowing the magnitude of the sequence's Fourier transform. The phase problem is the difficulty of recovering the missing phase information. We say that two sequences are equicorrelational to mean that they have the same aperiodic autocorrelation function. Sequences used in technological applications often have restrictions on their terms: they are not arbitrary complex numbers, but come from a more restricted alphabet. For example, binary sequences have terms equal to only $+1$ and $-1$. We investigate necessary and sufficient conditions for two sequences to be equicorrelational, taking their alphabet into consideration. There are trivial forms of equicorrelationality arising from modifications that predictably preserve the autocorrelation, for example, negating a binary sequence or reversing the order of its terms. By a search of binary sequences up to length $44$, we find that nontrivial equicorrelationality among binary sequences does occur, but is rare. An integer $n$ is said to be equivocal when there are binary sequences of length $n$ that are nontrivially equicorrelational; otherwise $n$ is unequivocal. For $n \leq 44$, we found that the unequivocal lengths are $1$--$8$, $10$, $11$, $13$, $14$, $19$, $22$, $23$, $26$, $29$, $37$, and $38$. We pose open questions about the finitude of unequivocal numbers and the probability of nontrivial equicorrelationality occurring among binary sequences.
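The basic objects here are easy to compute; a short Python sketch (ours, for illustration) of the aperiodic autocorrelation function and the trivial autocorrelation-preserving moves mentioned above:

```python
import numpy as np

def apac(seq):
    """Aperiodic autocorrelation of a real (e.g. +/-1) sequence at all shifts:
    C(s) = sum_j a_j * a_{j+s}, i.e. np.correlate of a with itself, 'full' mode."""
    a = np.asarray(seq)
    return np.correlate(a, a, mode="full")

def trivially_equicorrelational(a, b):
    """Trivial moves for binary sequences: negation, reversal, and their
    composition all preserve the aperiodic autocorrelation function."""
    a, b = np.asarray(a), np.asarray(b)
    return any(np.array_equal(b, v) for v in (a, -a, a[::-1], -a[::-1]))

a = np.array([1, 1, -1])                       # the length-3 Barker sequence
print(apac(a))                                 # [-1, 0, 3, 0, -1]
print(trivially_equicorrelational(a, -a[::-1]))  # True: negation + reversal
```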
Methods for analyzing representations in neural systems are increasingly popular tools in neuroscience and mechanistic interpretability. Measures comparing neural activations across conditions, architectures, and species give scalable ways to understand information transformation within different neural networks. However, recent findings show that some metrics respond to spurious signals, leading to misleading results. Establishing benchmark test cases is thus essential for identifying the most reliable metric and potential improvements. We propose that compositional learning in recurrent neural networks (RNNs) can provide a test case for dynamical representation alignment metrics. Implementing this case allows us to evaluate whether metrics can identify representations that develop throughout learning and to determine whether the representations identified by metrics reflect the network's actual computations. Building both attractor-based and RNN-based test cases, we show that the recently proposed Dynamical Similarity Analysis (DSA) is more robust to noise and identifies behaviorally relevant representations more reliably than prior metrics (Procrustes, CKA). We also demonstrate how such test cases can extend beyond metric evaluation to study new architectures. Specifically, testing DSA on modern (Mamba) state space models suggests that these models, unlike RNNs, may not require changes in recurrent dynamics due to their expressive hidden states. Overall, we develop test cases that showcase how DSA's enhanced ability to detect dynamical motifs makes it highly effective for identifying ongoing computations in RNNs and revealing how networks learn tasks.
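As a pointer to the baselines involved, here is a minimal Python sketch of linear CKA (in the standard Kornblith et al. formulation, one of the prior metrics named above; DSA itself is more involved and not reproduced here), illustrating the rotation invariance that makes such static metrics blind to some dynamical differences:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x p) and Y (n x q):
    ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F) after column-centering."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                  # e.g. RNN hidden states
R = np.linalg.qr(rng.normal(size=(32, 32)))[0]  # random rotation
print(linear_cka(X, X @ R))                     # = 1.0: CKA ignores rotations
```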
In modern data analysis, it is common to select a model before performing statistical inference. Selective inference tools adjust for the model selection process in order to ensure reliable inference post selection. In this paper, we introduce an asymptotic pivot to infer the effects of selected variables on conditional quantile functions. Utilizing estimators from smoothed quantile regression, our proposed pivot is easy to compute and yields asymptotically exact selective inference without making strict distributional assumptions about the response variable. At the core of our pivot is the use of external randomization variables, which allows us to utilize all available samples for both selection and inference without partitioning the data into independent subsets or discarding any samples at any step. From simulation studies, we find that: (i) the asymptotic confidence intervals based on our pivot achieve the desired coverage rates, even in cases where sample splitting fails due to insufficient sample size for inference; (ii) our intervals are consistently shorter than those produced by sample splitting across various models and signal settings. We report similar findings when we apply our approach to study risk factors for low birth weights in a publicly accessible dataset of US birth records from 2022.
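The pivot itself is beyond an abstract-level sketch, but the external-randomization idea can be illustrated: perturb the selection statistic with independent noise so that all $n$ samples remain usable for inference, instead of splitting them. The toy Python below (marginal screening; all thresholds and names are our illustrative choices, not the paper's selection procedure) shows only the randomized selection step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.normal(size=n)     # only variable 0 matters
omega = rng.normal(size=p)                 # external randomization variables
score = X.T @ y / np.sqrt(n) + omega       # noise-perturbed selection statistic
selected = np.flatnonzero(np.abs(score) > 5.0)
print(selected)                            # variables passing the cut, typically [0]
```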
Speaker anonymization aims to conceal cues to speaker identity while preserving linguistic content. Current machine-learning-based approaches require substantial computational resources, hindering real-time streaming applications. To address these concerns, we propose a streaming model that achieves speaker anonymization with low latency. The system is trained in an end-to-end autoencoder fashion using a lightweight content encoder that extracts HuBERT-like information, a pretrained speaker encoder that extracts speaker identity, and a variance encoder that injects pitch and energy information. These three disentangled representations are fed to a decoder that re-synthesizes the speech signal. We present evaluation results from two implementations of our system: a full model that achieves a latency of 230 ms, and a lite version (0.1x in size) that further reduces latency to 66 ms while maintaining state-of-the-art performance in naturalness, intelligibility, and privacy preservation.
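A hypothetical PyTorch sketch of the described three-encoder/one-decoder layout (all module choices, names, and sizes are our illustrative assumptions; the actual system uses HuBERT-like content features and a pretrained speaker encoder rather than these stand-ins):

```python
import torch
import torch.nn as nn

class StreamingAnonymizer(nn.Module):
    """Illustrative stand-in: content, speaker, and variance streams are
    summed and fed to a recurrent decoder that re-synthesizes features."""
    def __init__(self, d_content=256, d_spk=192, d_var=2, d_mel=80):
        super().__init__()
        # left-padded ("causal") convolution keeps the model streamable
        self.content = nn.Conv1d(d_mel, d_content, kernel_size=5, padding=4)
        self.variance = nn.Conv1d(d_var, d_content, kernel_size=1)
        self.spk_proj = nn.Linear(d_spk, d_content)
        self.decoder = nn.GRU(d_content, d_mel, batch_first=True)

    def forward(self, mel, pitch_energy, spk_embed):
        c = self.content(mel)[..., : mel.shape[-1]]   # trim causal padding
        v = self.variance(pitch_energy)
        s = self.spk_proj(spk_embed).unsqueeze(-1)    # broadcast over time
        h = (c + v + s).transpose(1, 2)               # (B, T, d_content)
        out, _ = self.decoder(h)
        return out.transpose(1, 2)                    # re-synthesized features

model = StreamingAnonymizer()
mel = torch.randn(1, 80, 100)      # 100 frames of input features
pe = torch.randn(1, 2, 100)        # pitch + energy tracks
spk = torch.randn(1, 192)          # e.g. a pseudo-speaker embedding
print(model(mel, pe, spk).shape)   # torch.Size([1, 80, 100])
```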
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
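For context, the classic input-output information bound that prediction-based bounds are typically compared against (a standard result of Xu and Raginsky, not this paper's bound) states that, for a $\sigma$-sub-Gaussian loss,
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(S, W)]\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}},
\]
where $S$ is the $n$-sample training set and $W$ the output of the training algorithm. For deterministic algorithms $I(W; S)$ can be infinite, which is exactly challenge (a) above that measuring the information in predictions avoids.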