Our goal is to settle the persistent problem in Diophantine approximation of finding best inhomogeneous linear approximates. Classical results from the theory of continued fractions solve the special homogeneous case in the form of a complete sequence of normal approximates. Real expansions that allow the notion of normality to percolate into the inhomogeneous setting will provide us with the general solution.
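As background for the homogeneous case mentioned above, the classical best-approximation property of continued fraction convergents (standard material, not this paper's new construction) can be stated as follows:

```latex
% Let \alpha be irrational with continued fraction convergents p_n/q_n.
% For n \ge 1, the convergents are exactly the best homogeneous
% approximations: no fraction with a denominator up to q_n does better.
\[
  \lvert q\alpha - p \rvert \;>\; \lvert q_n\alpha - p_n \rvert
  \qquad \text{for all integers } p,q \text{ with }
  1 \le q \le q_n,\ (p,q) \neq (p_n, q_n).
\]
```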
In this paper, we consider a class of possibly nonconvex, nonsmooth and non-Lipschitz optimization problems arising in many contemporary applications such as machine learning, variable selection and image processing. To solve this class of problems, we propose a proximal gradient method with extrapolation and line search (PGels). This method is built on a special potential function and incorporates both extrapolation and non-monotone line search, two simple and efficient techniques for accelerating the proximal gradient method. Thanks to the line search, the method allows more flexibility in choosing the extrapolation parameters, updating them adaptively at each iteration whenever a certain line search criterion is not satisfied. Moreover, with proper choices of parameters, PGels reduces to many existing algorithms. We also show that, under some mild conditions, our line search criterion is well defined and any cluster point of the sequence generated by PGels is a stationary point of our problem. In addition, under an assumption on the Kurdyka-${\L}$ojasiewicz exponent of the objective, we analyze the local convergence rate of two special cases of PGels, including the widely used non-monotone proximal gradient method. Finally, we conduct numerical experiments on the $\ell_1$ regularized logistic regression problem and the $\ell_{1\text{-}2}$ regularized least squares problem. The results illustrate the efficiency of PGels and show the potential advantage of combining the two accelerating techniques.
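To make the extrapolation idea concrete, here is a minimal sketch of a proximal gradient step with a fixed extrapolation parameter, applied to the $\ell_1$ regularized least squares problem. This is an illustrative simplification: PGels itself chooses the extrapolation parameter adaptively via its line search, which is not reproduced here, and the function name and interface are our own.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_extrapolated(A, b, lam, beta=0.5, step=None, iters=200):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 using proximal gradient
    # steps with a *fixed* extrapolation parameter beta (PGels instead
    # tunes beta adaptively with a non-monotone line search).
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the smooth part
    x = x_prev = np.zeros(n)
    for _ in range(iters):
        y = x + beta * (x - x_prev)             # extrapolation step
        grad = A.T @ (A @ y - b)                # gradient of smooth part at y
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)
    return x
```

With `A` the identity the method recovers the closed-form solution `soft_threshold(b, lam)` in a single step, which is a convenient sanity check.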
When the sizes of the state and action spaces are large, solving MDPs can be computationally prohibitive even if the probability transition matrix is known. So in practice, a number of techniques are used to approximately solve the dynamic programming problem, including lookahead, approximate policy evaluation using an m-step return, and function approximation. In a recent paper, Efroni et al. (2019) studied the impact of lookahead on the convergence rate of approximate dynamic programming. In this paper, we show that these convergence results change dramatically when function approximation is used in conjunction with lookahead and approximate policy evaluation using an m-step return. Specifically, we show that when linear function approximation is used to represent the value function, a certain minimum amount of lookahead and multi-step return is needed for the algorithm to even converge. When this condition is met, we characterize the finite-time performance of policies obtained using such approximate policy iteration. Our results are presented for two different procedures to compute the function approximation: linear least-squares regression and gradient descent.
We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types of convergence, owing to weak assumptions on the problem. In particular, there are no results that show convergence with a rate. In this article, we bridge these two worlds by assuming more regularity of the optimization problem, which allows us to prove convergence with an (optimal) sub-linear rate also in an infinite-dimensional setting. In particular, we assume that the objective function is the expected value of a family of convex differentiable functions. While we require that the full objective function is strongly convex, we do not assume that its constituent parts are so. Further, we require that the gradient satisfies a weak local Lipschitz continuity property, where the Lipschitz constant may grow polynomially given certain guarantees on the variance and higher moments near the minimum. We illustrate these results by discretizing a concrete infinite-dimensional classification problem with varying degrees of accuracy.
This paper establishes the (nearly) optimal approximation error characterization of deep rectified linear unit (ReLU) networks for smooth functions in terms of both width and depth simultaneously. To that end, we first prove that multivariate polynomials can be approximated by deep ReLU networks of width $\mathcal{O}(N)$ and depth $\mathcal{O}(L)$ with an approximation error $\mathcal{O}(N^{-L})$. Through local Taylor expansions and their deep ReLU network approximations, we show that deep ReLU networks of width $\mathcal{O}(N\ln N)$ and depth $\mathcal{O}(L\ln L)$ can approximate $f\in C^s([0,1]^d)$ with a nearly optimal approximation error $\mathcal{O}(\|f\|_{C^s([0,1]^d)}N^{-2s/d}L^{-2s/d})$. Our estimate is non-asymptotic in the sense that it is valid for arbitrary width and depth specified by $N\in\mathbb{N}^+$ and $L\in\mathbb{N}^+$, respectively.
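The building block behind such results is the efficient ReLU approximation of polynomials. The classical construction (due to Yarotsky; used here only as background, not as this paper's specific network) approximates $x^2$ on $[0,1]$ with error $2^{-2m-2}$ using $m$ compositions of a three-ReLU "sawtooth" unit:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # triangular "tooth" on [0, 1] built from three ReLUs:
    # hat(0) = hat(1) = 0, hat(0.5) = 1
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def relu_square(x, m=6):
    # Depth-m ReLU approximation of x**2 on [0, 1]:
    #   f_m(x) = x - sum_{s=1}^{m} hat^{(s)}(x) / 4**s,
    # with uniform error at most 2**(-2*m - 2).
    g, approx = x, x
    for s in range(1, m + 1):
        g = hat(g)                  # s-fold composition of the tooth
        approx = approx - g / 4.0**s
    return approx
```

The exponential error decay in depth is the mechanism behind approximation rates like the $\mathcal{O}(N^{-L})$ bound for polynomials quoted above.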
We present a framework that allows for the non-asymptotic study of the $2$-Wasserstein distance between the invariant distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation in the strongly log-concave case. This allows us to study in a unified way a number of different integrators proposed in the literature for the overdamped and underdamped Langevin dynamics. In addition, we analyse a novel splitting method for the underdamped Langevin dynamics which only requires one gradient evaluation per time step. Under an additional smoothness assumption on a $d$-dimensional strongly log-concave distribution with condition number $\kappa$, the algorithm is shown to produce, with $\mathcal{O}\big(\kappa^{5/4} d^{1/4}\epsilon^{-1/2} \big)$ complexity, samples from a distribution that is at most $\epsilon>0$ away from the target distribution in Wasserstein distance.
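As a point of reference for the integrators analysed in this framework, the simplest example is the Euler-Maruyama discretisation of the overdamped Langevin dynamics (the unadjusted Langevin algorithm). This is standard background, not the paper's novel splitting scheme:

```python
import numpy as np

def ula(grad_logpi, x0, step, n_steps, rng):
    # Unadjusted Langevin algorithm: Euler-Maruyama discretisation of
    #   dX_t = grad(log pi)(X_t) dt + sqrt(2) dW_t,
    # whose invariant law is a biased (O(step)) approximation of pi.
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_logpi(x) + np.sqrt(2.0 * step) * noise
    return x
```

For a standard Gaussian target the bias of the invariant law can be computed exactly, which makes this scheme a convenient test case for Wasserstein-distance bounds of the type established in the paper.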
We present an a posteriori error estimate based on equilibrated stress reconstructions for the finite element approximation of a unilateral contact problem with weak enforcement of the contact conditions. We start by proving a guaranteed upper bound for the dual norm of the residual. This norm is shown to control the natural energy norm up to a boundary term, which can be removed under a saturation assumption. The basic estimate is then refined to distinguish the different components of the error, and is used as a starting point to design an algorithm including adaptive stopping criteria for the nonlinear solver and automatic tuning of a regularization parameter. We then discuss a practical way of computing the stress reconstruction based on the Arnold-Falk-Winther finite elements. Finally, after briefly discussing the efficiency of our estimators, we showcase their performance on a panel of numerical tests.
We consider n robots with limited visibility: each robot can observe other robots only up to a constant distance denoted as the viewing range. The robots operate in discrete rounds that are either fully synchronous (FSync) or semi-synchronous (SSync). Most previously studied formation problems in this setting seek to bring the robots closer together (e.g., Gathering or Chain-Formation). In this work, we introduce the Max-Line-Formation problem, which has a contrary goal: to arrange the robots on a straight line of maximal length. First, we prove that the problem is impossible to solve by robots with a constant-sized circular viewing range. The impossibility holds under comparably strong assumptions: robots that agree on both axes of their local coordinate systems in FSync. On the positive side, we show that the problem is solvable by robots with a constant square viewing range, i.e., the robots can observe other robots that lie within a constant-sized square centered at their position. In this case, the robots need to agree on only one axis of their local coordinate systems. We derive two algorithms: the first algorithm considers oblivious robots and converges to the optimal configuration in time $\mathcal{O}(n^2 \cdot \log (n/\varepsilon))$ under the SSync scheduler. The other algorithm makes use of locally visible lights (LUMI). It is designed for the FSync scheduler and can solve the problem exactly in optimal time $\Theta(n)$. Afterward, we show that both the algorithmic and the analysis techniques can also be applied to the Gathering and Chain-Formation problem: we introduce an algorithm with a reduced viewing range for Gathering and give new and improved runtime bounds for the Chain-Formation problem.
Piecewise deterministic Markov processes (PDMPs) are a class of stochastic processes with applications in several fields of applied mathematics spanning from mathematical modeling of physical phenomena to computational methods. A PDMP is specified by three characteristic quantities: the deterministic motion, the law of the random event times, and the jump kernels. The applicability of PDMPs to real-world scenarios is currently limited by the fact that these processes can be simulated only when all three characteristics of the process can be simulated exactly. In order to overcome this problem, we introduce discretisation schemes for PDMPs which make their approximate simulation possible. In particular, we design both first order and higher order schemes that rely on approximations of one or more of the three characteristics. For the proposed approximation schemes we study both pathwise convergence to the continuous PDMP as the step size converges to zero and convergence in law to the invariant measure of the PDMP in the long time limit. Moreover, we apply our theoretical results to several PDMPs that arise from the computational statistics and mathematical biology literature.
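To illustrate what a first-order scheme for a PDMP can look like, here is a simple frozen-rate discretisation of the one-dimensional Zig-Zag process with standard Gaussian target. The event rate is held constant over each step, so the event times are only approximately simulated. This is a sketch in the spirit of such schemes and is not the paper's construction.

```python
import numpy as np

def zigzag_frozen_rate(n_steps, dt, rng):
    # 1D Zig-Zag process targeting N(0, 1):
    #   deterministic motion  x' = v  with  v in {-1, +1},
    #   event rate            lambda(x, v) = max(0, v * x),
    #   jump kernel           v -> -v (velocity flip).
    # First-order approximation: freeze the rate over each step of
    # length dt and flip with probability 1 - exp(-rate * dt).
    x, v = 0.0, 1.0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        rate = max(0.0, v * x)
        if rng.random() < 1.0 - np.exp(-rate * dt):
            v = -v                 # random event: apply the jump kernel
        x += v * dt                # follow the deterministic motion
        xs[k] = x
    return xs
```

Long-run averages of this chain should approximate moments of the target up to an $\mathcal{O}(dt)$ bias, which is the kind of invariant-measure error the convergence results above quantify.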
This paper presents new existence, dual representation and approximation results for the information projection in the infinite-dimensional setting for moment inequality models. These results are established under a general specification of the moment inequality model, nesting both conditional and unconditional models, and allowing for an infinite number of such inequalities. An important innovation of the paper is the representation of the dual variable as a weak vector-valued integral, which is used to formulate an approximation scheme for the $I$-projection's equivalent Fenchel dual problem. In particular, it is shown under suitable assumptions that the dual problem's optimum value can be approximated by the values of finite-dimensional programs, and that, in addition, every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the dual problem. This paper illustrates the verification of assumptions and the construction of the approximation scheme's parameters for the cases of unconditional and conditional first-order stochastic dominance constraints.
Finding approximate Nash equilibria in zero-sum imperfect-information games is challenging when the number of information states is large. Policy Space Response Oracles (PSRO) is a deep reinforcement learning algorithm grounded in game theory that is guaranteed to converge to an approximate Nash equilibrium. However, PSRO requires training a reinforcement learning policy at each iteration, making it too slow for large games. We show through counterexamples and experiments that DCH and Rectified PSRO, two existing approaches to scaling up PSRO, fail to converge even in small games. We introduce Pipeline PSRO (P2SRO), the first scalable general method for finding approximate Nash equilibria in large zero-sum imperfect-information games. P2SRO is able to parallelize PSRO with convergence guarantees by maintaining a hierarchical pipeline of reinforcement learning workers, each training against the policies generated by lower levels in the hierarchy. We show that unlike existing methods, P2SRO converges to an approximate Nash equilibrium, and does so faster as the number of parallel workers increases, across a variety of imperfect-information games. We also introduce an open-source environment for Barrage Stratego, a variant of Stratego with an approximate game tree complexity of $10^{50}$. P2SRO is able to achieve state-of-the-art performance on Barrage Stratego and beats all existing bots.
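The game-theoretic core that PSRO builds on is the tabular double oracle algorithm: maintain a restricted set of strategies per player, solve the restricted game, and add each player's best response to the opponent's restricted-game equilibrium. The sketch below implements this for matrix games, using fictitious play as a simple restricted-game solver (our choice for the sketch; PSRO instead trains reinforcement learning policies as approximate best responses).

```python
import numpy as np

def solve_zero_sum(M, iters=2000):
    # Fictitious play on payoff matrix M (row player maximizes);
    # returns mixed strategies approximating a Nash equilibrium.
    n, m = M.shape
    row_counts, col_counts = np.zeros(n), np.zeros(m)
    row_counts[0] = col_counts[0] = 1
    for _ in range(iters):
        row_counts[np.argmax(M @ col_counts)] += 1   # best reply to col history
        col_counts[np.argmin(row_counts @ M)] += 1   # best reply to row history
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def double_oracle(M):
    # Start with one strategy per player; repeatedly solve the restricted
    # game and add exact best responses until neither player gains a new one.
    rows, cols = [0], [0]
    while True:
        p, q = solve_zero_sum(M[np.ix_(rows, cols)])
        br_row = int(np.argmax(M[:, cols] @ q))
        br_col = int(np.argmin(p @ M[rows, :]))
        grew = False
        if br_row not in rows:
            rows.append(br_row); grew = True
        if br_col not in cols:
            cols.append(br_col); grew = True
        if not grew:
            return rows, cols, p, q
```

On rock-paper-scissors the algorithm grows both strategy sets to the full game and returns an approximately uniform equilibrium, which is a useful correctness check.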