Focusing on hybrid dynamics that couple continuous diffusion with discrete events, this article investigates explicit approximations for nonlinear switching diffusion systems modulated by a Markov chain. Several easily implementable explicit schemes are proposed to approximate the dynamical behavior of switching diffusion systems with locally Lipschitz continuous drift and diffusion coefficients on both finite and infinite intervals. Without restrictions beyond those guaranteeing that the exact solutions possess the relevant dynamical properties, the numerical solutions converge strongly to the exact solutions on a finite horizon and, moreover, reproduce long-time dynamical properties including moment boundedness, stability, and ergodicity. Simulations and examples are provided to support the theoretical results and demonstrate the validity of the approach.
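As a toy illustration of the kind of explicit scheme discussed above (not the paper's specific construction), the sketch below simulates a two-regime switching diffusion with an Euler–Maruyama step; the drift and diffusion functions and the generator `Q` are illustrative choices, and the Markov chain is sampled on the same time grid.

```python
import numpy as np

# Minimal explicit Euler-Maruyama sketch for a Markov-switching diffusion
#   dX = f(X, r) dt + g(X, r) dW,
# where r is a two-state continuous-time Markov chain, discretized on the
# same grid via the one-step transition matrix I + Q*dt.
# All model ingredients below are toy assumptions, not from the paper.

def simulate(T=1.0, n=1000, x0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    Q = np.array([[-1.0, 1.0], [2.0, -2.0]])   # generator of the chain
    P = np.eye(2) + Q * dt                      # one-step transition matrix
    f = [lambda x: -x, lambda x: -2.0 * x]      # mode-dependent drift
    g = [lambda x: 0.5, lambda x: 0.2]          # mode-dependent diffusion
    x, r = x0, 0
    for _ in range(n):
        r = rng.choice(2, p=P[r])               # sample the regime switch
        x = x + f[r](x) * dt + g[r](x) * np.sqrt(dt) * rng.normal()
    return x

print(simulate())
```

Both modes here are mean-reverting, so the trajectory stays bounded; the explicit schemes analysed in the paper address the harder case of only locally Lipschitz coefficients.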
A singularly perturbed parabolic problem of convection-diffusion type with a discontinuous initial condition is examined. An analytic function is identified which matches the discontinuity in the initial condition and also satisfies the homogeneous parabolic differential equation associated with the problem. The difference between this analytical function and the solution of the parabolic problem is approximated numerically, using an upwind finite difference operator combined with an appropriate layer-adapted mesh. The numerical method is shown to be parameter-uniform. Numerical results are presented to illustrate the theoretical error bounds established in the paper.
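To make the two main ingredients concrete, here is a hedged sketch of an upwind finite difference operator on a layer-adapted (piecewise-uniform Shishkin) mesh, applied to a simpler steady analogue $-\varepsilon u'' + u' = 1$, $u(0)=u(1)=0$, rather than the paper's parabolic problem; the transition-point formula and problem data are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's scheme): first-order upwinding for
# -eps*u'' + u' = 1 with u(0) = u(1) = 0, on a Shishkin mesh that places
# half of the mesh points inside the boundary layer near x = 1.

def solve(eps=1e-3, N=64):
    sigma = min(0.5, 2.0 * eps * np.log(N))      # layer-transition point
    left = np.linspace(0.0, 1.0 - sigma, N // 2 + 1)
    right = np.linspace(1.0 - sigma, 1.0, N // 2 + 1)
    x = np.concatenate([left, right[1:]])
    h = np.diff(x)
    n = len(x)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                    # Dirichlet boundary rows
    for i in range(1, n - 1):
        hm, hp = h[i - 1], h[i]
        # non-uniform central second difference + backward (upwind) first
        A[i, i - 1] = -2.0 * eps / (hm * (hm + hp)) - 1.0 / hm
        A[i, i]     =  2.0 * eps / (hm * hp)      + 1.0 / hm
        A[i, i + 1] = -2.0 * eps / (hp * (hm + hp))
        b[i] = 1.0
    return x, np.linalg.solve(A, b)
```

The upwind choice makes the discrete operator an M-matrix, which is what yields a parameter-uniform (robust in $\varepsilon$) approximation once the mesh resolves the layer.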
Let $P$ be a linear differential operator over $\mathcal{D} \subset \mathbb{R}^d$ and $U = (U_x)_{x \in \mathcal{D}}$ a second-order stochastic process. In the first part of this article, we prove a new necessary and sufficient condition for all the trajectories of $U$ to satisfy the partial differential equation (PDE) $P(U) = 0$. This condition is formulated in terms of the covariance kernel of $U$. Compared with similar previous results, the novelty lies in that the equality $P(U) = 0$ is understood in the \textit{sense of distributions}, which is a relevant framework for PDEs. This theorem provides valuable insight for the second part of the article, devoted to performing "physically informed" machine learning for the homogeneous three-dimensional free-space wave equation. We perform Gaussian process regression (GPR) on pointwise observations of a solution of this PDE. To do so, we propagate Gaussian process (GP) priors over its initial conditions through the wave equation. We obtain explicit formulas for the covariance kernel of the propagated GP, which can then be used for GPR. We then explore the particular cases of radial symmetry and a point source. For the former, we derive convolution-free GPR formulas; for the latter, we show a direct link between GPR and the classical triangulation method for point-source localization used in GPS systems. Additionally, this Bayesian framework provides a new answer to the ill-posed inverse problem of reconstructing initial conditions for the wave equation with a limited number of sensors, and simultaneously enables the inference of physical parameters from these data. Finally, we illustrate this physically informed GPR on a number of practical examples.
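The GPR step can be sketched generically: below is a minimal posterior-mean/covariance computation with a squared-exponential kernel on 1D inputs. In the paper's setting, the covariance kernel of the GP propagated through the wave equation would take the place of `k`; everything here is a standard textbook sketch, not the paper's formulas.

```python
import numpy as np

# Generic Gaussian process regression sketch. The kernel `k` is a toy
# squared-exponential; the paper derives explicit wave-equation kernels
# that would be substituted here.

def k(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gpr(x_train, y_train, x_test, noise=1e-4):
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)                 # posterior mean
    cov = k(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)  # posterior cov
    return mean, cov

x_obs = np.linspace(0.0, 1.0, 10)
y_obs = np.sin(2 * np.pi * x_obs)
mu, cov = gpr(x_obs, y_obs, x_obs)
```

At the observation points the posterior mean nearly interpolates the data, while the posterior covariance quantifies the remaining uncertainty, which is what makes the framework attractive for the ill-posed initial-condition reconstruction problem.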
We resolve an open problem posed by Joswig et al. by providing an $\tilde{O}(N)$-time, $O(\log^2(N))$-factor approximation algorithm for the min-Morse unmatched problem (MMUP). Let $\Lambda$ be the number of critical cells of the optimal discrete Morse function and $N$ the total number of cells of a regular cell complex $K$. The goal of MMUP is to find $\Lambda$ for a given complex $K$. To begin with, we apply an approximation-preserving graph reduction to MMUP to obtain a new problem, the min-partial order problem (min-POP), a strict generalization of the min-feedback arc set problem. The reduction introduces rigid edges, i.e., edges that must be included in the output solution. To solve min-POP, we use the Leighton-Rao divide-and-conquer paradigm, which provides solutions to SDP-formulated instances of min-directed balanced cut with rigid edges (min-DBCRE). Our first algorithm for min-DBCRE extends Agarwal et al.'s rounding procedure for the digraph formulation of the ARV algorithm to handle rigid edges. Our second algorithm solves the min-DBCRE SDP by adapting Arora et al.'s primal-dual multiplicative weights update method. In terms of applications, under the mild assumption that the size of topological features is significantly smaller than the size of the complex, we obtain (a) an $\tilde{O}(N)$ algorithm for computing the homology groups $H_i(K,A)$ of a simplicial complex $K$, where $A$ is an arbitrary Abelian group, (b) an $\tilde{O}(N^2)$ algorithm for computing persistent homology, and (c) an $\tilde{O}(N)$ algorithm for computing the optimal discrete Morse-Witten function compatible with an input scalar function, as simple consequences of our approximation algorithm for MMUP, thereby giving the best known complexity bounds for each of these applications under the aforementioned assumption. Such an assumption is realistic in applied settings and is often a characteristic of modern massive datasets.
The existence of the {\em typical set} is key for data compression strategies and for the emergence of robust statistical observables in macroscopic physical systems. Standard approaches derive its existence from a restricted set of dynamical constraints. However, given the enormous consequences for the understanding of a system's dynamics, and its role underlying the presence of stable, almost deterministic statistical patterns, the question arises whether typical sets exist in much more general scenarios. We demonstrate here that the typical set can be defined and characterized from general forms of entropy for a much wider class of stochastic processes than previously thought. This includes processes showing arbitrary path dependence, long-range correlations, or dynamic sampling spaces, suggesting that typicality is a generic property of stochastic processes, regardless of their complexity. Our results directly impact the understanding of the stability of complex systems, open the door to new data compression strategies, and point to the existence of statistical-mechanics-like approaches to systems arbitrarily far from equilibrium with dynamic phase spaces. We argue that the potential emergence of robust properties in complex stochastic systems, provided by the existence of typical sets, has special relevance for biological systems.
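For intuition, the classical (Shannon) typical set for i.i.d. sequences can be exhibited numerically: by the asymptotic equipartition property, the per-symbol log-probability of a Bernoulli($p$) sequence concentrates around the entropy $H(p)$, so almost all sampled sequences are typical. This is only the baseline i.i.d. case that the paper generalizes beyond; the parameters below are arbitrary.

```python
import numpy as np

# Empirical AEP check: fraction of sampled Bernoulli(p) sequences whose
# per-symbol log-probability lies within eps of the entropy H(p).

def typical_fraction(p=0.3, n=2000, eps=0.05, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy
    hits = 0
    for _ in range(trials):
        x = rng.random(n) < p                         # one sample sequence
        ones = x.sum()
        logp = ones * np.log2(p) + (n - ones) * np.log2(1 - p)
        if abs(-logp / n - H) < eps:                  # typicality test
            hits += 1
    return hits / trials

print(typical_fraction())
```

For these parameters nearly every sampled sequence is typical, even though any individual typical sequence has exponentially small probability.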
Learning rate schedules are ubiquitously used to speed up and improve optimisation. Many different policies have been introduced on an empirical basis, and theoretical analyses have been developed for convex settings. However, in many realistic problems the loss-landscape is high-dimensional and non-convex -- a case for which results are scarce. In this paper we present a first analytical study of the role of learning rate scheduling in this setting, focusing on Langevin optimization with a learning rate decaying as $\eta(t)=t^{-\beta}$. We begin by considering models where the loss is a Gaussian random function on the $N$-dimensional sphere ($N\rightarrow \infty$), featuring an extensive number of critical points. We find that to speed up optimization without getting stuck in saddles, one must choose a decay rate $\beta<1$, contrary to convex setups where $\beta=1$ is generally optimal. We then add to the problem a signal to be recovered. In this setting, the dynamics decompose into two phases: an \emph{exploration} phase where the dynamics navigate through rough parts of the landscape, followed by a \emph{convergence} phase where the signal is detected and the dynamics enter a convex basin. In this case, it is optimal to keep a large learning rate during the exploration phase to escape the non-convex region as quickly as possible, then use the convex criterion $\beta=1$ to converge rapidly to the solution. Finally, we demonstrate that our conclusions hold in a common regression task involving neural networks.
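The Langevin dynamics with a polynomially decaying step can be sketched in a few lines; the quadratic loss below is a toy stand-in for the random high-dimensional landscapes analysed above, and the step size, temperature, and dimension are illustrative choices.

```python
import numpy as np

# Langevin optimization with step eta(t) = eta0 * t^(-beta) on the toy
# loss L(x) = |x|^2 / 2: gradient step plus sqrt(2*eta*T) Gaussian noise.

def langevin(beta=0.5, eta0=0.1, temperature=1e-4, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=10)
    for t in range(1, steps + 1):
        eta = eta0 * t ** (-beta)
        grad = x                                  # gradient of |x|^2 / 2
        x = x - eta * grad + np.sqrt(2 * eta * temperature) * rng.normal(size=10)
    return float(np.sum(x**2) / 2)

print(langevin())
```

With $\beta<1$ the step sizes are square-summable-violating but sum to infinity, so the dynamics keep making progress; the paper's point is that in rough landscapes this slower decay is essential to avoid stalling at saddles.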
We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set. Our algorithm assumes access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle satisfying certain standard unbiasedness and second moment assumptions. We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm, to obtain an $\epsilon$-stationary solution, are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$ respectively, where $\mathcal{O}_T$ hides constants in $T$. Notably, the dependencies of these complexity bounds on $\epsilon$ and $T$ are separate in the sense that changing one does not impact the dependence of the bounds on the other. Moreover, our algorithm is parameter-free and does not require any (increasing) order of mini-batches to converge, unlike the common practice in the analysis of stochastic conditional gradient-type algorithms.
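To illustrate the two oracles involved, here is a hedged single-level stochastic Frank-Wolfe (conditional gradient) sketch on the probability simplex; the paper's algorithm handles nested $T$-level compositions, which this toy deliberately omits, and the objective, noise level, and step rule are illustrative.

```python
import numpy as np

# Single-level stochastic conditional gradient on the simplex for the
# toy objective f(x) = |x - target|^2 / 2, using a noisy first-order
# oracle and the simplex linear-minimization oracle (LMO).

def lmo_simplex(g):
    # LMO over the simplex: the vertex minimizing <g, v>
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def stochastic_fw(target, steps=2000, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = np.ones_like(target) / len(target)        # start at the barycenter
    for t in range(1, steps + 1):
        grad = (x - target) + noise * rng.normal(size=len(x))  # noisy oracle
        v = lmo_simplex(grad)
        gamma = 2.0 / (t + 2)                     # classical FW step size
        x = (1 - gamma) * x + gamma * v           # stays in the simplex
    return x

target = np.array([0.6, 0.3, 0.1])
x = stochastic_fw(target)
```

Because every iterate is a convex combination of simplex vertices, the method is projection-free: feasibility is maintained by the LMO and the convex averaging step, never by projecting.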
We present a novel hybrid strategy based on machine learning to improve curvature estimation in the level-set method. The proposed inference system couples enhanced neural networks with standard numerical schemes to compute curvature more accurately. The core of our hybrid framework is a switching mechanism that relies on well-established numerical techniques to gauge curvature. If the curvature magnitude is larger than a resolution-dependent threshold, it instead uses a neural network to yield a better approximation. Our networks are multilayer perceptrons fitted to synthetic data sets composed of sinusoidal- and circular-interface samples in various configurations. To reduce data set size and training complexity, we exploit the problem's characteristic symmetry and build our models on just half of the curvature spectrum. These savings lead to a powerful inference system able to outperform either of its numerical or neural components alone. Experiments with stationary, smooth interfaces show that our hybrid solver is notably superior to conventional numerical methods on coarse grids and along steep interface regions. Compared to prior research, we observe outstanding gains in precision after training the regression model with data pairs from more than a single interface type and transforming the data with specialized input preprocessing. In particular, our findings confirm that machine learning is a promising avenue for reducing or removing mass loss in the level-set method.
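The switching idea can be sketched as follows: curvature is first computed with the standard finite-difference formula for a 2D level set, and a learned model is consulted only when the magnitude exceeds a resolution-dependent threshold. The `mlp` argument below is a hypothetical placeholder for a trained network, and the threshold rule is an illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

# Hybrid curvature sketch for a 2D level-set field phi on a uniform grid
# of spacing h. The numerical branch is the standard second-order formula
# kappa = (pxx*py^2 - 2*px*py*pxy + pyy*px^2) / |grad phi|^3.

def numerical_curvature(phi, i, j, h):
    px = (phi[i + 1, j] - phi[i - 1, j]) / (2 * h)
    py = (phi[i, j + 1] - phi[i, j - 1]) / (2 * h)
    pxx = (phi[i + 1, j] - 2 * phi[i, j] + phi[i - 1, j]) / h**2
    pyy = (phi[i, j + 1] - 2 * phi[i, j] + phi[i, j - 1]) / h**2
    pxy = (phi[i + 1, j + 1] - phi[i + 1, j - 1]
           - phi[i - 1, j + 1] + phi[i - 1, j - 1]) / (4 * h**2)
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / \
           (px**2 + py**2) ** 1.5

def hybrid_curvature(phi, i, j, h, mlp, threshold):
    kappa = numerical_curvature(phi, i, j, h)
    if abs(kappa) * h > threshold:   # steep region: defer to the network
        kappa = mlp(phi[i - 1:i + 2, j - 1:j + 2], h)
    return kappa

# Toy check on a circle of radius 0.5, whose exact curvature is 1/0.5 = 2.
h = 0.01
x = np.arange(-1, 1 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5
i = int(np.argmin(np.abs(x - 0.5)))   # grid point near the interface
kappa = hybrid_curvature(phi, i, len(x) // 2, h,
                         mlp=lambda stencil, h: 0.0, threshold=0.1)
```

On this well-resolved circle the dimensionless curvature $|\kappa| h$ stays below the threshold, so the numerical branch is used and recovers the exact value closely; the learned branch would take over near under-resolved, high-curvature regions.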
We examine global non-asymptotic convergence properties of policy gradient methods for multi-agent reinforcement learning (RL) problems in Markov potential games (MPG). To learn a Nash equilibrium of an MPG in which the size of the state space and/or the number of players can be very large, we propose new independent policy gradient algorithms that are run by all players in tandem. When there is no uncertainty in the gradient evaluation, we show that our algorithm finds an $\epsilon$-Nash equilibrium with $O(1/\epsilon^2)$ iteration complexity which does not explicitly depend on the state space size. When the exact gradient is not available, we establish an $O(1/\epsilon^5)$ sample complexity bound in a potentially infinitely large state space for a sample-based algorithm that utilizes function approximation. Moreover, we identify a class of independent policy gradient algorithms that enjoys convergence for both zero-sum Markov games and Markov cooperative games with players that are oblivious to the types of games being played. Finally, we provide computational experiments to corroborate the merits and effectiveness of our theoretical developments.
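The independence of the updates can be illustrated on the simplest possible MPG: a one-state, two-player cooperative matrix game where each player ascends its own exact softmax policy gradient without observing the other's policy. The payoff matrix, learning rate, and tie-breaking tilt below are toy assumptions, not the paper's setting.

```python
import numpy as np

# Independent softmax policy gradient in a two-player cooperative matrix
# game (a one-state Markov potential game) with shared payoff R[a1, a2].

R = np.array([[1.0, 0.0], [0.0, 0.5]])

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def run(steps=3000, eta=0.5):
    th1 = np.zeros(2)
    th2 = np.array([0.1, 0.0])          # slight tilt to break symmetry
    for _ in range(steps):
        p1, p2 = softmax(th1), softmax(th2)
        q1 = R @ p2                      # player 1's action values
        q2 = R.T @ p1                    # player 2's action values
        # exact gradient of the shared value w.r.t. each player's logits
        th1 += eta * p1 * (q1 - p1 @ q1)
        th2 += eta * p2 * (q2 - p2 @ q2)
    return softmax(th1), softmax(th2)

p1, p2 = run()
```

Both players converge to the payoff-1 equilibrium (action 0, action 0) even though neither update uses the other player's policy, which is the "independent" property the abstract emphasizes.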
Many practical problems need the output of a machine learning model to satisfy a set of constraints, $K$. Nevertheless, there is no known guarantee that classical neural network architectures can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any non-convex compact set $K$ and any continuous function $f:\mathbb{R}^n\rightarrow K$, there is a probabilistic transformer $\hat{F}$ whose randomized outputs all lie in $K$ and whose expected output uniformly approximates $f$. Our second main result is a "deep neural version" of Berge's Maximum Theorem (1963). The result guarantees that given an objective function $L$, a constraint set $K$, and a family of soft constraint sets, there is a probabilistic transformer $\hat{F}$ that approximately minimizes $L$ and whose outputs belong to $K$; moreover, $\hat{F}$ approximately satisfies the soft constraints. Our results imply the first universal approximation theorem for classical transformers with exact convex constraint satisfaction. They also yield a chart-free universal approximation theorem for Riemannian-manifold-valued functions subject to suitable geodesically convex constraints.
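For the convex case mentioned above, the simplest way to see what "exact constraint satisfaction" means is to compose an unconstrained model with a Euclidean projection onto $K$, so every output lies in $K$ by construction. This sketch uses the unit ball as $K$ and a fixed linear map as the model; it only illustrates the constraint mechanism, not the probabilistic transformers of the paper, which handle general (even non-convex) compact $K$.

```python
import numpy as np

# Exact convex constraint satisfaction by construction: any model output
# is projected onto K = {y : |y| <= radius} before being returned.

def project_ball(y, radius=1.0):
    n = np.linalg.norm(y)
    return y if n <= radius else y * (radius / n)

def constrained_model(x, W):
    return project_ball(W @ x)           # every output lies in K

W = np.array([[3.0, 0.0], [0.0, 3.0]])   # toy "trained" weights
out = constrained_model(np.array([1.0, 1.0]), W)
```

The projection is itself continuous, so composing it with a universal approximator preserves approximation power on convex $K$; the paper's contribution is extending such guarantees quantitatively and beyond convexity.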
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of undesirable spectrum, and then discuss practical solutions including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods and distributed methods, and theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.