In previous work, we introduced a method for determining convergence rates of integration methods for the kinetic Langevin equation for $M$-$\nabla$Lipschitz $m$-log-concave densities [arXiv:2302.10684, 2023]. In this article, we exploit this method to treat several additional schemes, including the method of Brünger, Brooks and Karplus (BBK) and stochastic position/velocity Verlet. We introduce a randomized midpoint scheme for kinetic Langevin dynamics, inspired by the recent scheme of Bou-Rabee and Marsden [arXiv:2211.11003, 2022]. We also extend our approach to stochastic gradient variants of these schemes under minimal extra assumptions. We provide convergence rates of $\mathcal{O}(m/M)$, with explicit stepsize restrictions, which are of the same order as the stability thresholds for Gaussian targets and are valid for a large interval of the friction parameter. We compare the contraction rate estimates of many kinetic Langevin integrators from molecular dynamics and machine learning. Finally, we present numerical experiments for a Bayesian logistic regression example.
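To make the BBK scheme concrete, here is a minimal sketch of one common form of its update for the kinetic Langevin equation, assuming unit mass and unit temperature; the gradient grad_U, stepsize h, and friction gamma are user-supplied, and the implicit final half-kick is solved in closed form.

\begin{verbatim}
import numpy as np

def bbk_step(x, v, grad_U, h, gamma, rng):
    # One BBK step: explicit half-kick with friction and noise,
    # full position drift, then an implicit half-kick solved exactly.
    sigma = np.sqrt(2.0 * gamma)
    xi1 = rng.standard_normal(x.shape)
    xi2 = rng.standard_normal(x.shape)
    v_half = v + 0.5 * h * (-grad_U(x) - gamma * v) \
             + np.sqrt(0.5 * h) * sigma * xi1
    x_new = x + h * v_half
    v_new = (v_half - 0.5 * h * grad_U(x_new)
             + np.sqrt(0.5 * h) * sigma * xi2) / (1.0 + 0.5 * h * gamma)
    return x_new, v_new
\end{verbatim}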
Interface problems have long been a major focus of scientific computing, leading to the development of various numerical methods. Traditional mesh-based methods often employ time-consuming body-fitted meshes with standard discretization schemes, or unfitted meshes with tailored schemes, to achieve controllable accuracy and convergence rates. Along another line, mesh-free methods bypass mesh generation but lack robustness in terms of convergence and accuracy due to the low regularity of solutions. In this study, we propose a novel method for solving interface problems within the framework of the random feature method. This approach uses random feature functions in conjunction with a partition of unity as approximation functions. It enforces the partial differential equation, boundary conditions, and interface conditions at collocation points on an equal footing, and solves a linear least-squares system to obtain the approximate solution. To address the issue of low regularity, two sets of random feature functions are used to approximate the solution on each side of the interface, which are then coupled together via the interface conditions. We validate our method through a series of increasingly complex numerical examples. Our findings show that, despite the solution often being only continuous or even discontinuous, our method not only eliminates the need for mesh generation but also maintains high accuracy, akin to the spectral collocation method for smooth solutions. Remarkably, for the same accuracy requirement, our method requires two to three orders of magnitude fewer degrees of freedom than traditional methods, demonstrating its significant potential for solving interface problems with complex geometries.
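The following toy sketch illustrates the core idea in one dimension, stripped of the partition of unity and local feature patches used in the full method: $-u'' = f$ on $(0,1)$ with an interface at $s = 0.5$, one random tanh-feature set per side, and a single least-squares solve over PDE, boundary, and interface (jump) conditions. The source term, jump values, and feature counts are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
J, Npts, s = 50, 120, 0.5            # features per side; points; interface
a, b = 1.0, 0.0                      # assumed jumps: [u] = a, [u'] = b
W = rng.uniform(-8, 8, (2, J))       # random weights, one row per side
C = rng.uniform(-8, 8, (2, J))       # random biases

def feats(k, x, order):
    # Value / first / second derivative of the tanh features on side k.
    t = np.tanh(np.outer(x, W[k]) + C[k])
    if order == 0:
        return t
    if order == 1:
        return W[k] * (1 - t**2)
    return -2 * W[k]**2 * t * (1 - t**2)

f = lambda x: np.pi**2 * np.sin(np.pi * np.asarray(x))  # hypothetical source
xl, xr, Z = np.linspace(0, s, Npts), np.linspace(s, 1, Npts), np.zeros

rows = [
    np.hstack([-feats(0, xl, 2), Z((Npts, J))]),       # PDE residual, left
    np.hstack([Z((Npts, J)), -feats(1, xr, 2)]),       # PDE residual, right
    np.hstack([feats(0, [0.0], 0), Z((1, J))]),        # u(0) = 0
    np.hstack([Z((1, J)), feats(1, [1.0], 0)]),        # u(1) = 0
    np.hstack([-feats(0, [s], 0), feats(1, [s], 0)]),  # [u]  = a couples sides
    np.hstack([-feats(0, [s], 1), feats(1, [s], 1)]),  # [u'] = b couples sides
]
rhs = np.concatenate([f(xl), f(xr), [0.0], [0.0], [a], [b]])
coef, *_ = np.linalg.lstsq(np.vstack(rows), rhs, rcond=None)
\end{verbatim}

The approximate solution on each side is the feature expansion with the corresponding half of coef; note that all conditions enter the same least-squares system on an equal footing.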
In this work, we propose methods for speeding up linear regression in a distributed setting, while ensuring security. We leverage randomized sketching techniques and improve straggler resilience in asynchronous systems. Specifically, we apply a random orthonormal matrix and then subsample \textit{blocks}, to simultaneously secure the information and reduce the dimension of the regression problem. In our setup, the transformation corresponds to an encoded encryption in an \textit{approximate gradient coding scheme}, and the subsampling corresponds to the responses of the non-straggling workers in a centralized coded computing network. This results in a distributed \textit{iterative sketching} approach for an $\ell_2$-subspace embedding, \textit{i.e.}, a new sketch is considered at each iteration. We also focus on the special case of the \textit{Subsampled Randomized Hadamard Transform}, which we generalize to block sampling, and discuss how it can be modified in order to secure the data.
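As an illustration of the block-sampling variant, the sketch below (function name and scaling conventions are ours, not the paper's) applies random signs and the orthonormal Hadamard transform, then keeps a few whole blocks of rows, mimicking the responses of non-straggling workers who each hold one block.

\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

def block_srht(A, block_size, num_blocks, rng):
    # Block-sampled SRHT of A (n rows, n a power of 2): random signs D
    # (the "encryption" step), orthonormal Hadamard H, then keep whole
    # blocks of rows -- one block per surviving worker.
    n = A.shape[0]
    H = hadamard(n) / np.sqrt(n)
    D = rng.choice([-1.0, 1.0], size=n)
    G = H @ (D[:, None] * A)
    blocks = G.reshape(n // block_size, block_size, -1)
    keep = rng.choice(n // block_size, size=num_blocks, replace=False)
    scale = np.sqrt(n / (num_blocks * block_size))
    return scale * blocks[keep].reshape(num_blocks * block_size, -1)
\end{verbatim}

In the iterative sketching scheme, a fresh draw of the signs and of the sampled blocks would be used at every iteration.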
The ensemble data assimilation of computational fluid dynamics simulations based on the lattice Boltzmann method (LBM) and the local ensemble transform Kalman filter (LETKF) is implemented and optimized on a GPU supercomputer based on NVIDIA A100 GPUs. To connect the LBM and LETKF parts, data transpose communication is optimized by overlapping computation, file I/O, and communication based on the data dependencies of each LETKF kernel. In two-dimensional forced isotropic turbulence simulations with an ensemble size of $M=64$ and $N_x=128^2$ grid points, the optimized implementation achieved a $3.80\times$ speedup over the naive implementation, in which the LETKF part is not parallelized. The main computing kernel of the local problem is the eigenvalue decomposition (EVD) of $M\times M$ real symmetric dense matrices, which is computed by a newly developed batched EVD in $\verb|EigenG|$. The batched EVD in $\verb|EigenG|$ outperforms that in $\verb|cuSOLVER|$, achieving a $65.3\times$ speedup.
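The batched-EVD pattern itself is easy to demonstrate; the sketch below uses NumPy's stacked eigh as a CPU stand-in for the GPU-batched routines in $\verb|EigenG|$ and $\verb|cuSOLVER|$ (sizes are illustrative only).

\begin{verbatim}
import numpy as np

# CPU stand-in for the batched EVD kernel of the local LETKF problems:
# decompose a batch of M x M real symmetric matrices in a single call.
# numpy.linalg.eigh accepts the stacked (K, M, M) layout directly.
M, K = 64, 1024                          # illustrative batch dimensions
rng = np.random.default_rng(0)
S = rng.standard_normal((K, M, M))
S = 0.5 * (S + np.swapaxes(S, 1, 2))     # symmetrize each matrix
eigvals, eigvecs = np.linalg.eigh(S)     # one call, all K decompositions
\end{verbatim}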
In this work, we study the problem of finding the maximum value of a non-negative submodular function subject to a limit on the number of items selected, a ubiquitous problem that appears in many applications, such as data summarization and nonlinear regression. We provide the first deterministic, linear-time approximation algorithms for this problem that do not assume the objective is monotone. We present three deterministic, linear-time algorithms: a single-pass streaming algorithm with a ratio of $23.313 + \epsilon$, which is the first linear-time streaming algorithm; a simpler deterministic linear-time algorithm with a ratio of $11.657$; and a $(4 + O(\epsilon))$-approximation algorithm. Finally, we present a deterministic algorithm that obtains a ratio of $e + \epsilon$ in $O_{\epsilon}(n \log(n))$ time, close to the best known expected ratio of $e - 0.121$ achieved in polynomial time.
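For intuition, here is the classic single-pass thresholding building block for cardinality-constrained submodular maximization; the deterministic algorithms above layer additional machinery on top of this to handle non-monotone objectives, and the threshold tau is typically set from guesses of the optimal value (e.g., $\tau \approx \mathrm{OPT}/2k$, with several guesses maintained in parallel).

\begin{verbatim}
def threshold_stream(stream, f, k, tau):
    # Single pass: keep an element when its marginal gain against the
    # current solution S meets the threshold tau and |S| < k.
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= tau:
            S.append(e)
    return S
\end{verbatim}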
In this paper, we explore two fundamental first-order algorithms in convex optimization, namely gradient descent (GD) and the proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions. We propose adaptive versions of GD and ProxGD that are based on observed gradient differences and, thus, incur no added computational cost. Moreover, we prove convergence of our methods assuming only local Lipschitzness of the gradient. In addition, the proposed versions allow for even larger stepsizes than those initially suggested in [MM20].
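A minimal sketch of a stepsize rule in the spirit of [MM20], estimating local curvature from consecutive gradients at no extra cost; the adaptive versions proposed here permit even larger steps than this rule.

\begin{verbatim}
import numpy as np

def adaptive_gd(grad, x0, iters=1000, lam0=1e-6):
    # Stepsize adapts to local curvature estimated from consecutive
    # gradients; no function values or extra gradient calls needed.
    x_prev, g_prev = x0, grad(x0)
    x, lam, theta = x0 - lam0 * g_prev, lam0, np.inf
    for _ in range(iters):
        g = grad(x)
        diff = np.linalg.norm(g - g_prev)
        local = np.linalg.norm(x - x_prev) / (2.0 * diff) if diff > 0 else np.inf
        lam_new = min(np.sqrt(1.0 + theta) * lam, local)  # [MM20]-style rule
        theta = lam_new / lam
        x_prev, g_prev, lam = x, g, lam_new
        x = x - lam * g
    return x
\end{verbatim}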
Here, we introduce a new definition of regular point for piecewise-linear (PL) functions on combinatorial (PL triangulated) manifolds. This definition is given in terms of the restriction of the function to the link of the point. We show that our definition of regularity is distinct from other definitions that exist in the combinatorial topology literature. Next, we stratify the Jacobi set/critical locus of such a map as a poset-stratified space. As an application, we consider the Reeb space of a PL function, stratify the Reeb space as well as the target of the function, and show that the Stein factorization is a map of stratified spaces.
We consider statistical inference for equality-constrained stochastic nonlinear optimization problems. We develop a fully online stochastic sequential quadratic programming (StoSQP) method to solve these problems, which can be regarded as applying Newton's method to the first-order optimality conditions (i.e., the KKT conditions). Motivated by recent designs of numerical second-order methods, we allow StoSQP to adaptively select any random stepsize $\bar{\alpha}_t$, as long as $\beta_t\leq \bar{\alpha}_t \leq \beta_t+\chi_t$ for some control sequences $\beta_t$ and $\chi_t=o(\beta_t)$. To reduce the dominant computational cost of second-order methods, we additionally allow StoSQP to solve the quadratic programs inexactly via efficient randomized iterative solvers that utilize sketching techniques. Notably, we do not require the approximation error to diminish as the iterations proceed. For the developed method, we show that under mild assumptions (i) computationally, it takes at most $O(1/\epsilon^4)$ iterations (equivalently, samples) to attain $\epsilon$-stationarity; (ii) statistically, its primal-dual sequence $1/\sqrt{\beta_t}\cdot (x_t - x^\star, \lambda_t - \lambda^\star)$ converges to a mean-zero Gaussian distribution with a nontrivial covariance matrix depending on the underlying sketching distribution. Additionally, we establish the almost-sure convergence rate of the iterate $(x_t, \lambda_t)$ along with a Berry-Esseen bound; the latter quantitatively measures the convergence rate of the distribution function. We analyze a plug-in estimator of the limiting covariance matrix, and demonstrate the performance of the method both on benchmark nonlinear problems from the CUTEst test set and on linearly/nonlinearly constrained regression problems.
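To illustrate the kind of randomized inexact solver meant here, the following sketch applies sketch-and-project iterations to a Newton/KKT system $Kz = r$; a Gaussian sketch is shown, and the limiting covariance above depends on which sketching distribution is used. The function name and interface are our own illustration.

\begin{verbatim}
import numpy as np

def sketch_and_project(K, r, z0, sketch_dim, iters, rng):
    # Each iteration projects z onto the solution set of the randomly
    # sketched system S^T K z = S^T r (Gaussian sketch shown; other
    # sketching distributions fit the same template).
    z = z0.copy()
    for _ in range(iters):
        S = rng.standard_normal((K.shape[0], sketch_dim))
        SK = S.T @ K
        res = S.T @ (K @ z - r)
        z -= SK.T @ np.linalg.lstsq(SK @ SK.T, res, rcond=None)[0]
    return z
\end{verbatim}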
Recent contrastive representation learning methods rely on estimating the mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but suffer from a strong underestimation bias when the MI is large. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and applying the chain rule for MI to the decomposed views. The resulting expression is a sum of unconditional and conditional MI terms, each measuring a modest chunk of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI that can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and that it learns better representations in a vision domain and for dialogue generation.
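For context, here is the standard InfoNCE bound, which saturates at $\log N$ for batch size $N$ (the source of the underestimation bias); DEMI's chain-rule decomposition replaces one such bound with a sum of smaller unconditional and conditional bounds. The critic and score-matrix conventions below are illustrative.

\begin{verbatim}
import torch

def infonce(scores):
    # scores[i, j]: critic value for the pair (x_i, y_j); the diagonal
    # holds the positive pairs. The bound cannot exceed log N, which
    # motivates splitting a large MI into smaller chunks.
    n = scores.shape[0]
    bound = scores.diag() - torch.logsumexp(scores, dim=1)
    return bound.mean() + torch.log(torch.tensor(float(n)))

# Chain-rule decomposition (schematic):
#   I(x; y1, y2) = I(x; y1) + I(x; y2 | y1),
# with each term estimated by its own (conditional) critic:
#   total = infonce(scores_xy1) + infonce(scores_xy2_given_y1)
\end{verbatim}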
We describe the new field of mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding what features are learned, why deep architectures perform exceptionally well in physical problems, and how fine aspects of an architecture affect the behavior of a learning task. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.
Multi-relation Question Answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of the input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual intervention in predicting the final answer.
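A schematic sketch of the hop-by-hop loop just described, in PyTorch-like code; all module names, dimensions, and update choices here are hypothetical illustrations, not the published model's parameterization.

\begin{verbatim}
import torch
import torch.nn as nn

class HopByHopReasoner(nn.Module):
    # Each hop: attend to part of the question, predict a relation,
    # then use it to update the question representation and the state.
    def __init__(self, dim, num_relations, num_hops):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.attn = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, num_relations)
        self.update = nn.GRUCell(dim, dim)
        self.num_hops = num_hops

    def forward(self, q_tokens, state):
        # q_tokens: (batch, len, dim); state: (batch, dim)
        relations = []
        for _ in range(self.num_hops):
            # decide which part of the question to analyze at this hop
            logits = (self.attn(q_tokens) @ state.unsqueeze(-1)).squeeze(-1)
            att = torch.softmax(logits, dim=1)
            focus = (att.unsqueeze(-1) * q_tokens).sum(dim=1)
            rel = self.score(focus + state).argmax(dim=-1)  # predict relation
            relations.append(rel)                            # traceable output
            r = self.rel_emb(rel)
            # update the question representation and the reasoning state
            q_tokens = q_tokens - att.unsqueeze(-1) * r.unsqueeze(1)
            state = self.update(r, state)
        return relations
\end{verbatim}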