The Oldenburger-Kolakoski sequence is the only infinite sequence over the alphabet $\{1,2\}$ that starts with $1$ and is its own run-length encoding. In the present work, we take a step back from this widely known and studied sequence by introducing some randomness into the choice of the letters written. This allows us to establish results on the convergence of the density of $1$'s in the resulting sequence. When the choice of the letters is governed by an infinite sequence of i.i.d. random variables or by a Markov chain, the average densities of the letters converge. Moreover, in the i.i.d. case, we prove that the densities converge not only on average but almost surely.
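For concreteness, here is a minimal Python sketch that generates the deterministic sequence; the randomized variants studied in the abstract replace the fixed alternation of letters by i.i.d. or Markovian draws.

```python
def kolakoski(n):
    """Return the first n terms of the Oldenburger-Kolakoski sequence.

    The sequence starts 1, 2, 2, 1, 1, 2, ... and equals its own
    run-length encoding: the k-th term is the length of the k-th run."""
    seq = [1, 2, 2]
    k = 2                                   # index of the term dictating the next run length
    while len(seq) < n:
        next_letter = 3 - seq[-1]           # alternate between 1 and 2
        seq.extend([next_letter] * seq[k])  # run length read off the sequence itself
        k += 1
    return seq[:n]

print(kolakoski(12))   # [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2]
```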
Mathematical models are essential for understanding and making predictions about systems arising in nature and engineering. Yet mathematical models are a simplification of the true phenomena, making predictions subject to uncertainty. Hence, the ability to quantify uncertainties is essential to any modelling framework: it enables the user to assess the influence of individual parameters on quantities of interest and to control the quality of the model output through a rigorous understanding of uncertainty. Peridynamic models are a particular class of mathematical models that have proven to be remarkably accurate and robust for a large class of material failure problems. However, the high computational expense of peridynamic models remains a major limitation, hindering outer-loop applications that require a large number of simulations, such as uncertainty quantification. This contribution provides a framework to make such computations feasible. By employing a Multilevel Monte Carlo (MLMC) framework, in which the majority of simulations are performed on a coarse mesh and relatively few simulations are performed on a fine mesh, a significant reduction in computational cost can be realised and statistics of structural failure can be estimated. The results show a speed-up factor of 16x over a standard Monte Carlo estimator, enabling the forward propagation of uncertain parameters in a computationally expensive peridynamic model. Furthermore, the multilevel method provides an estimate of both the discretisation error and the sampling error, thus improving the confidence in numerical predictions. The performance of the approach is demonstrated through an examination of the statistical size effect in quasi-brittle materials.
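As a point of reference for readers unfamiliar with MLMC, the sketch below implements the standard telescoping-sum estimator with a placeholder simulate(level, seed) routine; it is a generic illustration, not the peridynamic code used in the paper.

```python
import numpy as np

def mlmc_estimate(simulate, samples_per_level, rng=np.random.default_rng(0)):
    """Standard MLMC telescoping-sum estimator.

    E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}],
    with each correction term estimated from its own (mostly coarse) samples.
    `simulate(level, seed)` is a placeholder for one model run on mesh `level`.
    Returns the estimate and an estimate of its sampling standard error."""
    estimate, variance = 0.0, 0.0
    for level, n in enumerate(samples_per_level):
        seeds = rng.integers(0, 2**31, size=n)
        if level == 0:
            diffs = np.array([simulate(0, s) for s in seeds])
        else:
            # Coupled pair: same random input on the fine and coarse mesh.
            diffs = np.array([simulate(level, s) - simulate(level - 1, s)
                              for s in seeds])
        estimate += diffs.mean()
        variance += diffs.var(ddof=1) / n   # sampling error of this level
    return estimate, np.sqrt(variance)

# Toy stand-in for an expensive solver: finer levels carry smaller bias.
toy = lambda level, seed: (1.0 + 2.0 ** -level
                           + np.random.default_rng(seed).normal() * (0.5 + 0.1 * 2.0 ** -level))
print(mlmc_estimate(toy, samples_per_level=[1000, 100, 10]))
```

Note how most samples sit on the coarsest level, mirroring the cost distribution described in the abstract.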
Statistical models are central to machine learning, with broad applicability across a range of downstream tasks. The models are controlled by free parameters that are typically estimated from data by maximum-likelihood estimation or approximations thereof. However, when faced with real-world datasets, many of the models run into a critical issue: they are formulated in terms of fully-observed data, whereas in practice the datasets are plagued with missing data. The theory of statistical model estimation from incomplete data is conceptually similar to the estimation of latent-variable models, where powerful tools such as variational inference (VI) exist. However, in contrast to standard latent-variable models, parameter estimation with incomplete data often requires estimating exponentially many conditional distributions of the missing variables, hence making standard VI methods intractable. We address this gap by introducing variational Gibbs inference (VGI), a new general-purpose method for estimating the parameters of statistical models from incomplete data. We validate VGI on a set of synthetic and real-world estimation tasks, estimating important machine learning models such as VAEs and normalising flows from incomplete data. The proposed method, whilst general-purpose, achieves competitive or better performance than existing model-specific estimation methods.
We study the secretary problem in which rank-ordered lists are generated by the Mallows model and the goal is to identify the highest-ranked candidate through a sequential interview process that does not allow rejected candidates to be revisited. The main difference between our formulation and existing models is that, during the selection process, we are given a fixed number of opportunities to query an infallible expert about whether the current candidate is the highest-ranked. If the response is positive, the selection process terminates; otherwise, the search continues until a new potentially optimal candidate is identified. Our optimal interview strategy, as well as the expected number of candidates interviewed and the expected number of queries used, can be determined through the evaluation of well-defined recurrence relations. Specifically, if we are allowed to query $s-1$ times and to make a final selection without querying (thus making $s$ selections in total), then the optimal scheme is characterized by $s$ thresholds that depend on the parameter $\theta$ of the Mallows distribution but are independent of the maximum number of queries.
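The following simulation sketch only illustrates the mechanics of a multi-threshold query strategy, run here under a uniformly random ranking (a special case of the Mallows model); the thresholds used are arbitrary, whereas the paper's recurrences yield the optimal ones.

```python
import random

def run_once(n, thresholds, rng=random):
    """One run of a multi-threshold strategy on a uniformly random ranking.

    thresholds[j] is the earliest position at which slot j may be spent on a
    new best-so-far candidate; the first len(thresholds)-1 slots are expert
    queries, the last one is the final selection made without querying.
    Ad-hoc illustration only.  Returns (success, queries_used, interviews)."""
    ranks = list(range(1, n + 1))          # 1 = truly best candidate
    rng.shuffle(ranks)
    best_so_far, used = n + 1, 0
    for position, rank in enumerate(ranks, start=1):
        if rank >= best_so_far:
            continue                       # not a record, never considered
        best_so_far = rank
        if used < len(thresholds) and position >= thresholds[used]:
            used += 1
            if used == len(thresholds):    # final selection, no query left
                return rank == 1, used - 1, position
            if rank == 1:                  # expert confirms this is the best
                return True, used, position
    return False, used, n

wins = sum(run_once(100, [20, 45, 70])[0] for _ in range(20000))
print(wins / 20000)                        # empirical success probability
```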
Barycenters (also known as Fr\'echet means) were introduced in statistics in the 1940s and popularized in the fields of shape statistics and, later, optimal transport and matrix analysis. They provide the most natural extension to non-Euclidean geometries of linear averaging, which is perhaps the most basic and widely used tool in data science. In various setups, their asymptotic properties, such as laws of large numbers and central limit theorems, have been established, but their non-asymptotic behaviour is still not well understood. In this work, we prove finite-sample concentration inequalities (namely, generalizations of Hoeffding's and Bernstein's inequalities) for barycenters of i.i.d. random variables in metric spaces with non-positive curvature in Alexandrov's sense. As a byproduct, we also obtain PAC guarantees for a stochastic online algorithm that computes the barycenter of a finite collection of points in a non-positively curved space. We also discuss extensions of our results to spaces with possibly positive curvature.
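For reference, the classical scalar Hoeffding inequality that these results generalize states that for i.i.d. real-valued $X_1,\dots,X_n$ with $a \le X_i \le b$,
\[
  \mathbb{P}\left( \left| \frac{1}{n}\sum_{i=1}^{n} X_i - \mathbb{E}[X_1] \right| \ge t \right) \le 2\exp\left( -\frac{2 n t^2}{(b-a)^2} \right),
\]
and the inequalities discussed above replace the empirical mean by the barycenter of the sample in a metric space of non-positive curvature.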
Hyperuniformity is the study of stationary point processes with a sub-Poisson variance of the number of points falling in a large window. In other words, counting the points of a hyperuniform point process that fall in a given large region yields a small-variance Monte Carlo estimation of the volume. Hyperuniform point processes have received a lot of attention in statistical physics, both for the investigation of natural organized structures and for the synthesis of materials. Unfortunately, rigorously proving that a point process is hyperuniform is usually difficult. A common practice in statistical physics and chemistry is to use a few samples to estimate a spectral measure called the structure factor, whose decay around zero provides a diagnostic of hyperuniformity. However, different applied fields use different estimators, and important algorithmic choices proceed from each field's lore. This paper provides a systematic survey and derivation of known, or otherwise natural, estimators of the structure factor. We also leverage the consistency of these estimators to contribute the first asymptotically valid statistical test of hyperuniformity. We benchmark all estimators and hyperuniformity diagnostics on a set of examples. In an effort to make investigations of the structure factor and hyperuniformity systematic and reproducible, we further provide the Python toolbox structure_factor, containing all the estimators and tools that we discuss.
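As an illustration of the kind of estimator surveyed, here is a minimal NumPy sketch of the scattering-intensity estimator of the structure factor for a point pattern in a box; it is a generic textbook estimator and does not reproduce the API of the structure_factor toolbox.

```python
import numpy as np

def scattering_intensity(points, k_vectors):
    """Scattering-intensity estimator of the structure factor,
        S(k) ~ (1/N) | sum_j exp(-i k . x_j) |^2 ,
    evaluated at the given wavevectors.  For a homogeneous Poisson process
    S(k) is about 1 everywhere; hyperuniformity shows up as S(k) -> 0
    when ||k|| -> 0."""
    phases = np.exp(-1j * k_vectors @ points.T)     # shape (n_k, N)
    return np.abs(phases.sum(axis=1)) ** 2 / points.shape[0]

# Poisson sample in a [0, L]^2 window, evaluated on allowed wavevectors 2*pi*m/L.
rng = np.random.default_rng(0)
L, N = 50.0, 2500
points = rng.uniform(0.0, L, size=(N, 2))
m = np.arange(1, 20)
k = np.stack([2 * np.pi * m / L, np.zeros_like(m, dtype=float)], axis=1)
print(scattering_intensity(points, k))              # values scattered around 1
```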
We introduce a new algorithm and software for solving linear equations in symmetric diagonally dominant matrices with non-positive off-diagonal entries (SDDM matrices), including Laplacian matrices. We use pre-conditioned conjugate gradient (PCG) to solve the system of linear equations. Our preconditioner is a variant of the Approximate Cholesky factorization of Kyng and Sachdeva (FOCS 2016). Our factorization approach is simple: we eliminate matrix rows/columns one at a time and update the remaining matrix using sampling to approximate the outcome of complete Cholesky factorization. Unlike earlier approaches, our sampling always maintains connectivity in the remaining non-zero structure. Our algorithm comes with a tuning parameter that upper bounds the number of samples made per original entry. We implement our algorithm in Julia, providing two versions, AC and AC2, that respectively use 1 and 2 samples per original entry. We compare their single-threaded performance to that of current state-of-the-art solvers: Combinatorial Multigrid (CMG), BoomerAMG-preconditioned Krylov solvers from HyPre and PETSc, Lean Algebraic Multigrid (LAMG), and MATLAB's PCG with Incomplete Cholesky Factorization (ICC). Our evaluation uses a broad class of problems, including all large SDDM matrices from the SuiteSparse collection and diverse programmatically generated instances. Our experiments suggest that our algorithm attains a level of robustness and reliability not seen before in SDDM solvers, while retaining good performance across all instances. Our code and data are public, and we provide a tutorial on how to replicate our tests. We hope that others will adopt this suite of tests as a benchmark, which we refer to as SDDM2023. Our solver code is available at: //github.com/danspielman/Laplacians.jl/ Our benchmarking data and tutorial are available at: //rjkyng.github.io/SDDM2023/
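The solver itself is not reproduced here; the generic SciPy sketch below only illustrates the overall pattern the abstract describes, namely PCG applied to an SDDM system with an incomplete-factorization-style preconditioner (SciPy's ILU standing in for the approximate Cholesky factor).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small SDDM system: a 2-D grid Laplacian plus a tiny diagonal shift,
# standing in for the much larger instances discussed in the abstract.
n = 40
G = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n))
A = (sp.kronsum(G, G) + 1e-3 * sp.eye(n * n)).tocsc()
b = np.ones(A.shape[0])

# Generic incomplete-factorization preconditioner (ILU here), applied
# through a LinearOperator so PCG can call it as M^{-1} v.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```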
We study the spectral convergence of a symmetrized graph Laplacian matrix induced by a Gaussian kernel evaluated on pairs of embedded data sampled from a manifold with boundary, a sub-manifold of $\mathbb{R}^m$. Specifically, we deduce convergence rates for the eigenpairs of the discrete graph Laplacian matrix to the eigensolutions of the Laplace-Beltrami operator that are well-defined on manifolds with boundary, including the homogeneous Neumann and Dirichlet boundary conditions. For the Dirichlet problem, we deduce the convergence of the \emph{truncated graph Laplacian}, which has recently been observed numerically in applications, and provide a detailed numerical investigation on simple manifolds. Our method of proof relies on a min-max argument over a compact and symmetric integral operator, leveraging the RKHS theory for spectral convergence of integral operators and a recent pointwise asymptotic result for a Gaussian kernel integral operator on manifolds with boundary.
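A minimal NumPy sketch of the object under study, with ad-hoc normalization: a Gaussian-kernel graph Laplacian built from points sampled on the interval $[0,1]$, a one-dimensional manifold with boundary, whose smallest eigenvalues approximate (up to a kernel-dependent constant, and at the rates the paper makes precise) the Neumann Laplace-Beltrami spectrum $0, \pi^2, (2\pi)^2, \dots$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 1000, 0.01                       # sample size and kernel bandwidth
x = rng.uniform(0.0, 1.0, size=n)         # manifold with boundary: [0, 1]

# Gaussian kernel matrix and the symmetrically normalized graph Laplacian.
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4.0 * eps))
d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
L_sym = (np.eye(n) - d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]) / eps

# Smallest eigenpairs; their ordering and spacing mirror the Neumann
# spectrum of the interval as n grows and eps shrinks appropriately.
evals, evecs = np.linalg.eigh(L_sym)
print(np.round(evals[:4], 3))
```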
A common technique in reinforcement learning is to evaluate the value function from Monte Carlo simulations of a given policy, and then use the estimated value function to obtain a new policy that is greedy with respect to it. A well-known, longstanding open problem in this context is to prove the convergence of such a scheme when the value function of a policy is estimated from data collected along a single sample path obtained by implementing the policy (see page 99 of [Sutton and Barto, 2018], page 8 of [Tsitsiklis, 2002]). We present a solution to this open problem by showing that a first-visit version of such a policy iteration scheme indeed converges to the optimal policy, provided that the policy improvement step uses lookahead [Silver et al., 2016, Mnih et al., 2016, Silver et al., 2017b] rather than a simple greedy policy improvement. We provide results both for the original open problem in the tabular setting and for extensions to the function approximation setting, where we show that the policy resulting from the algorithm performs close to the optimal policy, up to a function approximation error.
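To make the two ingredients concrete, the tabular sketch below combines first-visit Monte Carlo evaluation from a single sample path with a multi-step lookahead improvement computed from a known model (P, R); this is one common way lookahead is realized and is not the paper's algorithm verbatim.

```python
import numpy as np

def first_visit_mc(path, gamma, n_states):
    """First-visit Monte Carlo estimate of V^pi from a single sample path,
    given as a list of (state, reward) pairs collected under policy pi."""
    returns = np.zeros(len(path))
    G = 0.0
    for t in reversed(range(len(path))):          # discounted return-to-go
        G = path[t][1] + gamma * G
        returns[t] = G
    V, visited = np.zeros(n_states), set()
    for t, (s, _) in enumerate(path):
        if s not in visited:                      # first visit only
            visited.add(s)
            V[s] = returns[t]
    return V

def lookahead_improvement(P, R, V, gamma, depth):
    """Policy improvement with depth-step lookahead (depth >= 1): run
    `depth` value-iteration backups on top of the estimated V, then act
    greedily.  P has shape (states, actions, states), R (states, actions)."""
    W = V.copy()
    for _ in range(depth):
        Q = R + gamma * np.einsum('san,n->sa', P, W)
        W = Q.max(axis=1)
    return Q.argmax(axis=1)                       # improved policy
```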
With apparently all research on estimation-of-distribution algorithms (EDAs) concentrated on pseudo-Boolean optimization and permutation problems, we take the first steps towards using EDAs for problems in which the decision variables can take more than two values but which are not permutation problems. To this end, we propose a natural way to extend the known univariate EDAs to such variables. Unlike a naive reduction to the binary case, our extension avoids additional constraints. Since understanding genetic drift is crucial for an optimal parameter choice, we extend the known quantitative analysis of genetic drift to EDAs for multi-valued variables. Roughly speaking, when the variables take $r$ different values, the time for genetic drift to become significant is $r$ times shorter than in the binary case. Consequently, the update strength of the probabilistic model has to be chosen $r$ times lower. To investigate how desired model updates take place in this framework, we undertake a mathematical runtime analysis of the $r$-valued LeadingOnes problem. We prove that with the right parameters, the multi-valued UMDA solves this problem efficiently in $O(r (\log r)^2 n^2 \log n)$ function evaluations. Overall, our work shows that EDAs can be adjusted to multi-valued problems, and it gives advice on how to set the main parameters.
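To fix ideas, here is a toy sketch of a UMDA-style univariate EDA for $r$-valued variables on an $r$-valued LeadingOnes objective; the selection scheme and the border (margin) values are ad-hoc assumptions, not the exact variant analysed in the paper.

```python
import numpy as np

def r_leading_ones(x, target):
    """r-valued LeadingOnes: length of the longest prefix matching `target`."""
    for i, (a, b) in enumerate(zip(x, target)):
        if a != b:
            return i
    return len(x)

def umda_multivalued(n, r, lam, mu, evals, rng=np.random.default_rng(0)):
    """UMDA-style univariate EDA for r-valued variables (a sketch).

    The model is one frequency vector over the r values per position.
    Each generation: sample lam individuals, keep the mu best, and set the
    frequencies to the selected empirical frequencies, clamped to ad-hoc
    margins (the precise border values are a design choice)."""
    target = np.zeros(n, dtype=int)                   # optimum of the toy problem
    freq = np.full((n, r), 1.0 / r)
    lo, hi = 1.0 / (r * n), 1.0 - 1.0 / n             # hypothetical margins
    for _ in range(evals // lam):
        pop = np.stack([[rng.choice(r, p=freq[i]) for i in range(n)]
                        for _ in range(lam)])
        fitness = np.array([r_leading_ones(x, target) for x in pop])
        selected = pop[np.argsort(-fitness)[:mu]]
        for i in range(n):
            counts = np.bincount(selected[:, i], minlength=r) / mu
            freq[i] = np.clip(counts, lo, hi)
            freq[i] /= freq[i].sum()                  # renormalize after clamping
        if fitness.max() == n:
            break
    return freq, fitness.max()

print(umda_multivalued(n=20, r=3, lam=100, mu=25, evals=20000)[1])
```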
This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.