In this article, a local convergence analysis of a multi-step seventh-order method for solving nonlinear equations is presented. A noteworthy feature of our analysis is that it requires only a weak hypothesis, namely that the Fr\'echet derivative of the nonlinear operator satisfies a $\psi$-continuity condition; it thereby extends the applicability of the method to cases where both the Lipschitz and H\"{o}lder conditions fail. Convergence is established under hypotheses on the first-order derivative alone, without involving higher-order derivatives. A strategy is devised to find a subset of the original convergence domain; the resulting Lipschitz constants are at least as tight as the original ones, allowing for a more precise local convergence analysis. Numerical examples are provided to compare the performance of the presented method with some existing schemes.
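For concreteness, the $\psi$-continuity hypothesis alluded to above is commonly stated as follows (a standard formulation; the paper's exact assumptions may differ in detail): there is a nondecreasing function $\psi:[0,\infty)\rightarrow[0,\infty)$ with $\psi(0)=0$ such that
\[
\|F'(x)-F'(x^{*})\|\le\psi(\|x-x^{*}\|)\quad\text{for all }x\text{ in the domain,}
\]
so that choosing $\psi(t)=Lt$ recovers the Lipschitz condition and $\psi(t)=Lt^{p}$ with $0<p\le 1$ the H\"{o}lder condition.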
We show $\textsf{EOPL}=\textsf{PLS}\cap\textsf{PPAD}$. Here $\textsf{EOPL}$ is the class of all total search problems that reduce to the End-of-Potential-Line problem, introduced by Hubacek and Yogev (SICOMP 2020) and by Fearnley et al. (JCSS 2020). In particular, our result yields a new, simpler proof of the breakthrough collapse $\textsf{CLS}=\textsf{PLS}\cap\textsf{PPAD}$ of Fearnley et al. (STOC 2021). We also prove the companion result $\textsf{SOPL}=\textsf{PLS}\cap\textsf{PPADS}$, where $\textsf{SOPL}$ is the class associated with the Sink-of-Potential-Line problem.
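To make the solution types concrete, here is a brute-force checker for one common formulation of the End-of-Potential-Line problem (circuit details are abstracted as Python callables, which is our simplification; the exact promise and solution types vary slightly across the cited works):

```python
from itertools import product

def solve_eopl(S, P, V, n):
    """Brute-force search for an End-of-Potential-Line solution.

    S, P : successor/predecessor maps on n-bit strings (tuples of 0/1),
    V    : integer-valued potential. Assumes the standard promise
           P(0^n) = 0^n at the designated start vertex.
    """
    zero = (0,) * n
    for x in product((0, 1), repeat=n):
        # End of line: x starts a line other than 0^n, or x is a sink.
        if (S(P(x)) != x and x != zero) or P(S(x)) != x:
            return ("end-of-line", x)
        # Potential violation: S moves along the line but V fails to increase.
        if S(x) != x and P(S(x)) == x and V(S(x)) <= V(x):
            return ("potential-violation", x)
    return None  # unreachable on valid instances: the problem is total
```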
In this paper we consider the generalized inverse iteration for computing ground states of the Gross-Pitaevskii eigenvector problem (GPE). To this end, we prove explicit linear convergence rates that depend on the maximum eigenvalue in magnitude of a weighted linear eigenvalue problem. Furthermore, we show that this eigenvalue can be bounded by the first spectral gap of a linearized Gross-Pitaevskii operator, recovering the same rates as for linear eigenvector problems. With this we establish the first local convergence result for the basic inverse iteration for the GPE without damping. We also show how our findings directly generalize to extended inverse iterations, such as the Gradient Flow Discrete Normalized (GFDN) method proposed in [W. Bao, Q. Du, SIAM J. Sci. Comput., 25 (2004)] and the damped inverse iteration suggested in [P. Henning, D. Peterseim, SIAM J. Numer. Anal., 53 (2020)]. Our analysis also reveals why the inverse iteration for the GPE does not react favourably to spectral shifts: this empirical observation can now be explained by a blow-up of a weighting function that crucially enters the convergence rates. Our findings are illustrated by numerical experiments.
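As a concrete illustration of the basic (undamped) inverse iteration, the following sketch treats a 1D GPE discretized by finite differences; the harmonic potential, interaction strength $\beta$, and grid are our illustrative choices, not the paper's setting:

```python
import numpy as np

# Toy 1D GPE ground state via the basic (undamped) inverse iteration:
# A(u)v = -v'' + V v + beta*|u|^2 v; iterate u <- normalize(A(u)^{-1} u).
N, beta = 200, 50.0
x = np.linspace(-8.0, 8.0, N)
h = x[1] - x[0]
V = 0.5 * x**2                                    # harmonic trap (illustrative)
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2      # Dirichlet Laplacian

u = np.exp(-x**2)
u /= np.sqrt(h) * np.linalg.norm(u)               # normalize in L2
for _ in range(100):
    A = -lap + np.diag(V + beta * u**2)           # operator frozen at current u
    u = np.linalg.solve(A, u)                     # inverse iteration step
    u /= np.sqrt(h) * np.linalg.norm(u)           # renormalize

A = -lap + np.diag(V + beta * u**2)
lam = h * u @ (A @ u)                             # eigenvalue via Rayleigh quotient
print(f"ground-state eigenvalue: {lam:.4f}")
```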
When the regression function belongs to the standard smooth classes, consisting of univariate functions with derivatives up to the $(\gamma+1)$th order bounded in absolute value by a common constant everywhere or a.e., it is well known that the minimax optimal rate of convergence in mean squared error (MSE) is $\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ when $\gamma$ is finite and the sample size $n\rightarrow\infty$. From a nonasymptotic viewpoint that does not take $n$ to infinity, this paper shows that, for the standard H\"older and Sobolev classes, the minimax optimal rate is $\frac{\sigma^{2}\left(\gamma+1\right)}{n}$ ($\succsim\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$) when $\frac{n}{\sigma^{2}}\precsim\left(\gamma+1\right)^{2\gamma+3}$, and $\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ ($\succsim\frac{\sigma^{2}\left(\gamma+1\right)}{n}$) when $\frac{n}{\sigma^{2}}\succsim\left(\gamma+1\right)^{2\gamma+3}$. To establish these results, we derive upper and lower bounds on the covering and packing numbers for the generalized H\"older class, where the absolute value of the $k$th ($k=0,\ldots,\gamma$) derivative is bounded by a parameter $R_{k}$ and the $\gamma$th derivative is $R_{\gamma+1}$-Lipschitz (and also for the generalized ellipsoid class of smooth functions). Our bounds sharpen the classical metric entropy results for the standard classes and give the general dependence on $\gamma$ and $R_{k}$. By deriving the minimax optimal MSE rates under various (well-motivated) choices of $R_{k}$ for the smooth classes with the help of our new entropy bounds, we show several interesting results that cannot be shown with the existing entropy bounds in the literature.
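A quick consistency check: the two rates coincide at the boundary between the two regimes. Setting $\frac{n}{\sigma^{2}}=(\gamma+1)^{2\gamma+3}$ gives
\[
\frac{\sigma^{2}(\gamma+1)}{n}=(\gamma+1)^{-(2\gamma+2)}=\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}},
\]
so the nonasymptotic rate transitions continuously from the $\frac{\sigma^{2}(\gamma+1)}{n}$ regime into the classical asymptotic regime.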
We consider the evolution of curve networks in two dimensions (2d) and surface clusters in three dimensions (3d). The motion of the interfaces is described by surface diffusion, with boundary conditions at the triple junction points/lines, where three interfaces meet, and at the boundary points/lines, where an interface meets a fixed planar boundary. We propose a parametric finite element method based on a suitable variational formulation. The constructed method is semi-implicit and can be shown to satisfy volume conservation of each enclosed bubble and unconditional energy stability, thus preserving the two fundamental geometric structures of the flow. Moreover, the method has very good properties with respect to the distribution of mesh points, so that no mesh smoothing or regularization technique is required. A generalization of the introduced scheme to the case of anisotropic surface energies and non-neutral external boundaries is also considered. Numerical results are presented for the evolution of two-dimensional curve networks and three-dimensional surface clusters in the cases of both isotropic and anisotropic surface energies.
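For orientation, the underlying evolution law and the two geometric structures that the scheme preserves can be summarized as follows (a standard formulation; the paper's variational setting may differ in detail): each interface $\Gamma_j$ evolves by surface diffusion,
\[
\mathcal{V}=\Delta_{s}\varkappa,
\]
where $\mathcal{V}$ is the normal velocity, $\varkappa$ the (mean) curvature, and $\Delta_s$ the surface Laplacian; along the flow the volume of each enclosed bubble is conserved, while the total surface energy $\sum_j \gamma_j\,|\Gamma_j|$ (with $\gamma_j$ the energy density of $\Gamma_j$) is non-increasing.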
A singularly perturbed parabolic problem of convection-diffusion type with a discontinuous initial condition is examined. An analytic function is identified which matches the discontinuity in the initial condition and also satisfies the homogeneous parabolic differential equation associated with the problem. The difference between this analytic function and the solution of the parabolic problem is approximated numerically, using an upwind finite difference operator combined with an appropriate layer-adapted mesh. The numerical method is shown to be parameter-uniform. Numerical results are presented to illustrate the theoretical error bounds established in the paper.
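The following sketch illustrates the kind of layer-adapted (Shishkin) mesh and upwind operator referred to above, on a simpler 1D steady model problem $-\varepsilon u'' + u' = 1$, $u(0)=u(1)=0$; the parabolic problem and the analytic matching function of the paper are not reproduced here.

```python
import numpy as np

def shishkin_upwind(eps, N):
    """Upwind FD for -eps*u'' + u' = 1, u(0)=u(1)=0, on a Shishkin mesh.

    The mesh is piecewise uniform with transition point tau, placing half
    of the points inside the boundary layer of width O(eps ln N) at x = 1.
    """
    tau = min(0.5, 2.0 * eps * np.log(N))          # transition point
    xl = np.linspace(0.0, 1.0 - tau, N // 2 + 1)   # coarse part
    xr = np.linspace(1.0 - tau, 1.0, N // 2 + 1)   # fine (layer) part
    x = np.concatenate([xl, xr[1:]])
    h = np.diff(x)
    n = len(x)
    A = np.zeros((n, n)); b = np.ones(n)
    A[0, 0] = A[-1, -1] = 1.0; b[0] = b[-1] = 0.0  # Dirichlet BCs
    for i in range(1, n - 1):
        hm, hp = h[i - 1], h[i]
        # -eps*u'' (nonuniform second difference) + u' (upwind, b = 1 > 0)
        A[i, i - 1] = -2 * eps / (hm * (hm + hp)) - 1.0 / hm
        A[i, i]     =  2 * eps / (hm * hp)        + 1.0 / hm
        A[i, i + 1] = -2 * eps / (hp * (hm + hp))
    return x, np.linalg.solve(A, b)

x, u = shishkin_upwind(eps=1e-6, N=64)
```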
In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions. We assume access to noisy evaluations of the functions and their gradients through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates and analyze their convergence to an $\epsilon$-stationary point of the problem. We show that the first algorithm, which is a generalization of \cite{GhaRuswan20} to the $T$-level case, achieves a sample complexity of $\mathcal{O}(1/\epsilon^6)$ by using mini-batches of samples in each iteration. By modifying this algorithm with linearized stochastic estimates of the function values, we improve the sample complexity to $\mathcal{O}(1/\epsilon^4)$. This modification not only removes the requirement of a mini-batch of samples in each iteration, but also makes the algorithm parameter-free and easy to implement. To the best of our knowledge, this is the first time that such an online algorithm, designed for the (un)constrained multi-level setting, obtains the same sample complexity as the smooth single-level setting under standard assumptions (unbiasedness and boundedness of the second moments) on the stochastic first-order oracle.
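A minimal sketch of the moving-average idea for the two-level case $\min_x f(g(x))$, with synthetic quadratics standing in for the stochastic oracle (illustrative only; the paper's algorithms handle $T$ levels and come with step-size rules not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # well-conditioned inner map

# Noisy first-order oracle for g(x) = Bx and f(u) = 0.5*||u||^2.
def g_noisy(x):  return B @ x + 0.01 * rng.standard_normal(d)
def Jg_noisy(x): return B + 0.01 * rng.standard_normal((d, d))
def grad_f(u):   return u

x = rng.standard_normal(d)
u = g_noisy(x)                       # moving-average tracker for g(x)
eta, tau = 0.1, 0.2                  # step size and averaging weight
for _ in range(2000):
    u = (1 - tau) * u + tau * g_noisy(x)   # average out inner-function noise
    x -= eta * Jg_noisy(x).T @ grad_f(u)   # chain rule with the tracked value
print(np.linalg.norm(x))             # small: the minimizer of 0.5*||Bx||^2 is 0
```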
Smooth minimax games often proceed by simultaneous or alternating gradient updates. Although algorithms with alternating updates are commonly used in practice, most existing theoretical analyses focus on simultaneous algorithms for convenience of analysis. In this paper, we study alternating gradient descent-ascent (Alt-GDA) in minimax games and show that Alt-GDA is superior to its simultaneous counterpart~(Sim-GDA) in many settings. We prove that Alt-GDA achieves a near-optimal local convergence rate for strongly convex-strongly concave (SCSC) problems, while Sim-GDA converges at a much slower rate. To our knowledge, this is the \emph{first} result in any setting showing that Alt-GDA converges faster than Sim-GDA by more than a constant factor. We further adapt the theory of integral quadratic constraints (IQC) and show that Alt-GDA attains the same rate \emph{globally} for a subclass of SCSC minimax problems. Empirically, we demonstrate that alternating updates speed up GAN training significantly and that the use of optimism helps only for simultaneous algorithms.
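A self-contained toy comparison on the SCSC quadratic $f(x,y)=\frac{\mu}{2}x^{2}+xy-\frac{\mu}{2}y^{2}$ (our illustrative test problem, not the paper's experiments); the alternating update lets the $y$-player react to the fresh $x$-iterate:

```python
import numpy as np

mu, eta, T = 0.2, 0.3, 200   # strong convexity/concavity, step size, steps

def sim_gda(x, y):
    """Simultaneous GDA: both players use the old iterate."""
    return x - eta * (mu * x + y), y + eta * (x - mu * y)

def alt_gda(x, y):
    """Alternating GDA: the y-player sees the freshly updated x."""
    x_new = x - eta * (mu * x + y)
    return x_new, y + eta * (x_new - mu * y)

for name, step in [("Sim-GDA", sim_gda), ("Alt-GDA", alt_gda)]:
    x, y = 1.0, 1.0
    for _ in range(T):
        x, y = step(x, y)
    print(f"{name}: distance to saddle = {np.hypot(x, y):.2e}")
# Alt-GDA reaches the saddle point (0, 0) markedly faster here.
```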
We present a variational characterization of the R\'{e}nyi divergence of order infinity. Our characterization is related to guessing: the objective functional is a ratio of maximal expected values of a gain function applied to the probability of correctly guessing an unknown random variable. An important aspect of our variational characterization is that it remains agnostic to the particular gain function considered, as long as it satisfies some regularity conditions. We also define two variants of a tunable measure of information leakage, the maximal $\alpha$-leakage, and obtain closed-form expressions for these information measures by leveraging our variational characterization.
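For reference, the order-infinity R\'{e}nyi divergence that the characterization targets has the simple closed form $D_\infty(P\|Q)=\log\max_x P(x)/Q(x)$; a short numerical check (the distributions are illustrative):

```python
import numpy as np

def renyi_inf(P, Q):
    """D_inf(P||Q) = log max_x P(x)/Q(x), maximized over the support of P."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    mask = P > 0
    return np.log(np.max(P[mask] / Q[mask]))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.25, 0.25, 0.5])
print(renyi_inf(P, Q))  # log(0.5/0.25) = log 2
```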
Physical systems are usually modeled by differential equations, but solving these equations analytically is often intractable. Instead, the differential equations can be solved numerically by discretization on a finite computational domain. The discretized problem reduces to a large linear system, whose solution is typically found using an iterative solver. We start with an initial guess $x_0$ and iterate the algorithm to obtain a sequence of solution vectors $x_m$. The iterative algorithm is said to converge to the solution $x$ if and only if $x_m$ converges to $x$. Accuracy of the numerical solutions is important, especially in the design of safety-critical systems such as airplanes, cars, or nuclear power plants. It is therefore important to formally guarantee that the iterative solvers converge to the "true" solution of the original differential equation. In this paper, we first formalize the necessary and sufficient conditions for iterative convergence in the Coq proof assistant. We then extend this result to two classical iterative methods: the Gauss--Seidel iteration and the Jacobi iteration. We formalize conditions for the convergence of the Gauss--Seidel method, based on positive definiteness of the iteration matrix. We then formally state conditions for the convergence of the Jacobi iteration and instantiate them with an example to demonstrate convergence of the iterative solution to the direct solution of the linear system. We leverage recent developments of the Coq linear algebra and MathComp libraries for our formalization.
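As an informal numerical counterpart of the formalized statements (a sketch only, not the Coq development): an iteration $x_{m+1} = M x_m + c$ converges for every initial guess iff the spectral radius of the iteration matrix $M$ is below one, and for Jacobi and Gauss--Seidel $M$ takes the familiar forms below.

```python
import numpy as np

def iteration_matrix(A, method):
    """M such that x_{m+1} = M x_m + c; the iteration converges for every
    initial guess x_0 iff the spectral radius rho(M) is below one."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)                     # strict lower triangle of A
    if method == "jacobi":
        return np.linalg.solve(D, D - A)     # M_J  = I - D^{-1} A
    return np.linalg.solve(D + L, D + L - A) # M_GS = -(D+L)^{-1} U

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])              # symmetric positive definite
for m in ("jacobi", "gauss-seidel"):
    rho = max(abs(np.linalg.eigvals(iteration_matrix(A, m))))
    print(m, rho)                            # both < 1: iterations converge
```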
The problem of finding a nonzero solution of a linear recurrence $Ly = 0$ with polynomial coefficients where $y$ has the form of a definite hypergeometric sum, related to the Inverse Creative Telescoping Problem of [14, Sec. 8], has now been open for three decades. Here we present an algorithm (implemented in a SageMath package) which, given such a recurrence and a quasi-triangular, shift-compatible factorial basis $\mathcal{B} = \langle P_k(n)\rangle_{k=0}^\infty$ of the polynomial space $\mathbb{K}[n]$ over a field $\mathbb{K}$ of characteristic zero, computes a recurrence satisfied by the coefficient sequence $c = \langle c_k\rangle_{k=0}^\infty$ of the solution $y_n = \sum_{k=0}^\infty c_kP_k(n)$ (where, thanks to the quasi-triangularity of $\mathcal{B}$, the sum on the right terminates for each $n \in \mathbb{N}$). More generally, if $\mathcal{B}$ is $m$-sieved for some $m \in \mathbb{N}$, our algorithm computes a system of $m$ recurrences satisfied by the $m$-sections of the coefficient sequence $c$. If an explicit nonzero solution of this system can be found, we obtain an explicit nonzero solution of $Ly = 0$.
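A toy instance of the input-output behaviour (not the algorithm itself): take $L = E - 2$, i.e. $y_{n+1} - 2y_n = 0$, and the binomial basis $P_k(n) = \binom{n}{k}$, which is quasi-triangular since $\binom{n}{k} = 0$ for $k > n$. Writing $y_n = \sum_{k=0}^\infty c_k \binom{n}{k}$ and using Pascal's rule $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$,
\[
y_{n+1} - 2y_n \;=\; \sum_{k=0}^{\infty} \binom{n}{k}\,(c_{k+1} - c_k) \;=\; 0,
\]
so the coefficient sequence satisfies the recurrence $c_{k+1} - c_k = 0$; its explicit solution $c_k \equiv 1$ recovers the explicit nonzero solution $y_n = \sum_{k=0}^{n} \binom{n}{k} = 2^n$ of $Ly = 0$.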