In this paper we apply the ideas of New Q-Newton's method directly to a system of equations, exploiting the special structure of the cost function $f=||F||^2$, where $F=(f_1,\ldots ,f_m)$. The first algorithm proposed here is a modification of the Levenberg-Marquardt algorithm, for which we prove new results on global convergence and avoidance of saddle points. The second algorithm is a modification of New Q-Newton's method Backtracking, in which we use the operator $\nabla ^2f(x)+\delta ||F(x)||^{\tau}$ instead of $\nabla ^2f(x)+\delta ||\nabla f(x)||^{\tau}$. This new version is better suited to systems of equations than New Q-Newton's method Backtracking itself, and it currently has a stronger saddle-point avoidance guarantee than Levenberg-Marquardt algorithms. A general scheme for second-order methods for solving systems of equations is also proposed. We also discuss a way to prevent the limit of the constructed sequence from being a solution of $H(x)^{\intercal}F(x)=0$ but not of $F(x)=0$.
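For orientation, here is a minimal sketch of the kind of iteration involved, assuming (as in New Q-Newton's method) that the regularization term multiplies the identity and writing $H(x)$ for the Jacobian of $F$; the eigenvalue corrections and the precise choice of $\delta$ are omitted. Since $f(x)=||F(x)||^2$ has gradient $\nabla f(x)=2H(x)^{\intercal}F(x)$, the step of the second algorithm solves
$$\big(\nabla^2 f(x_k)+\delta\,||F(x_k)||^{\tau}\,\mathrm{Id}\big)\,v_k=-\nabla f(x_k),\qquad x_{k+1}=x_k+\gamma_k v_k,$$
with the step size $\gamma_k$ chosen by Armijo backtracking, whereas the classical Levenberg-Marquardt step instead uses the Gauss-Newton model $H(x_k)^{\intercal}H(x_k)$ with a damping term.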
The Symmetric Tensor Approximation problem (STA) consists of approximating a symmetric tensor, or a homogeneous polynomial, by a linear combination of symmetric rank-1 tensors, or powers of linear forms, of low symmetric rank. We present two new Riemannian Newton-type methods for low-rank approximation of symmetric tensors with complex coefficients. The first method uses the parametrization of the set of tensors of rank at most $r$ by weights and unit vectors. Exploiting the properties of the apolar product on homogeneous polynomials, combined with efficient tools from complex optimization, we provide an explicit and tractable formulation of the Riemannian gradient and Hessian, leading to Newton iterations with local quadratic convergence. We prove that, under some regularity conditions on non-defective tensors in the neighborhood of the initial point, the Newton iteration (completed with a trust-region scheme) converges to a local minimum. The second method is a Riemannian Gauss--Newton method on the Cartesian product of Veronese manifolds. An explicit orthonormal basis of the tangent space of this Riemannian manifold is described. We deduce the Riemannian gradient and the Gauss--Newton approximation of the Riemannian Hessian. We present a new retraction operator on the Veronese manifold. We analyze the numerical behavior of these methods, with an initial point provided by Simultaneous Matrix Diagonalisation (SMD). Numerical experiments show the good numerical behavior of the two methods in different cases and in comparison with existing state-of-the-art methods.
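For reference, one standard way to write the underlying approximation problem (in notation of my own choosing, which may differ from the paper's) is
$$\min_{\substack{w_1,\ldots,w_r\in\mathbb{C}\\ ||v_1||=\cdots=||v_r||=1}}\ \Big\|\,T-\sum_{i=1}^{r}w_i\,v_i^{\otimes d}\,\Big\|^2,$$
where, on the homogeneous-polynomial side, $v_i^{\otimes d}$ corresponds to the $d$-th power of a linear form and the norm is the one induced by the apolar product; the first method optimizes directly over the weights $w_i$ and unit vectors $v_i$, while the second works on the product of Veronese manifolds.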
We define a complexity class $\mathsf{IB}$ as the class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. We relate $\mathsf{IB}$ to other standard complexity classes, and demonstrate its applicability by finding natural $\mathsf{IB}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
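As a toy illustration of the kind of problem captured by $\mathsf{IB}$ (my own example, not one of the paper's complete problems), the sketch below iterates a polynomial-time bijection on 64-bit words; since $n$ is given in binary, the naive loop takes time exponential in the input size.

```python
# Toy illustration (not from the paper): iterating a polynomial-time bijection.
K = 64
MASK = (1 << K) - 1

def f(x: int) -> int:
    """A simple bijection on K-bit integers: affine map with an odd multiplier."""
    return (6364136223846793005 * x + 1442695040888963407) & MASK

def iterate_naive(x: int, n: int) -> int:
    """Compute f^(n)(x) by direct repetition -- exponential in the bit length of n."""
    for _ in range(n):
        x = f(x)
    return x
```

For this particular affine map, $f^{(n)}$ can be composed symbolically in $O(\log n)$ arithmetic operations, so computing its $n$-th iterate is easy; any hardness in the class must come from bijections without such shortcuts.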
This paper focuses on the strong convergence of the truncated $\theta$-Milstein method for a class of nonautonomous stochastic differential delay equations whose drift and diffusion coefficients can grow polynomially. The convergence rate, which is close to one, is obtained under an assumption weaker than the monotone condition. To verify our theoretical findings, we present a numerical example.
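For reference, one common form of the (untruncated) $\theta$-Milstein step for a scalar equation with a single discrete delay $\tau=m\Delta$ is, in notation of my own choosing,
$$X_{k+1}=X_k+\big[\theta\,b(t_{k+1},X_{k+1},X_{k+1-m})+(1-\theta)\,b(t_k,X_k,X_{k-m})\big]\Delta+\sigma(t_k,X_k,X_{k-m})\,\Delta W_k+\tfrac{1}{2}\big(\sigma\,\partial_x\sigma\big)(t_k,X_k,X_{k-m})\big((\Delta W_k)^2-\Delta\big),$$
where the additional correction term involving the delayed argument is omitted for brevity; the truncated variant first maps the arguments through a truncation function so that the polynomially growing coefficients are controlled.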
Fourth-order differential equations play an important role in many applications in science and engineering. In this paper, we present a three-field mixed finite-element formulation for fourth-order problems, with a focus on the effective treatment of the different boundary conditions that arise naturally in a variational formulation. Our formulation is based on introducing the gradient of the solution as an explicit variable, constrained using a Lagrange multiplier. The essential boundary conditions are enforced weakly, using Nitsche's method where required. As a result, the problem is rewritten as a saddle-point system, requiring analysis of the resulting finite-element discretization and the construction of optimal linear solvers. Here, we discuss the analysis of the well-posedness and accuracy of the finite-element formulation. Moreover, we develop monolithic multigrid solvers for the resulting linear systems. Two- and three-dimensional numerical results are presented to demonstrate the accuracy of the discretization and the efficiency of the proposed multigrid solvers.
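As a concrete illustration (using the biharmonic model problem $\Delta^2 u=f$ as a stand-in, and ignoring boundary terms and the Nitsche modifications), the three-field idea is to minimize $\tfrac12(\nabla\boldsymbol{\phi},\nabla\boldsymbol{\phi})-(f,u)$ subject to $\boldsymbol{\phi}=\nabla u$; the stationarity conditions of the Lagrangian give the saddle-point system: find $(u,\boldsymbol{\phi},\boldsymbol{\lambda})$ such that
$$(\nabla\boldsymbol{\phi},\nabla\boldsymbol{\psi})-(\boldsymbol{\lambda},\boldsymbol{\psi})=0,\qquad(\boldsymbol{\lambda},\nabla v)=(f,v),\qquad(\nabla u-\boldsymbol{\phi},\boldsymbol{\mu})=0$$
for all admissible test functions $(v,\boldsymbol{\psi},\boldsymbol{\mu})$.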
We apply the method of penalization to the Dirichlet problem for the Navier-Stokes-Fourier system governing the motion of a general viscous compressible fluid confined to a bounded Lipschitz domain. The physical domain is embedded into a large cube on which the periodic boundary conditions are imposed. The original boundary conditions are enforced through a singular friction term in the momentum equation and a heat source/sink term in the internal energy balance. The solutions of the penalized problem are shown to converge to the solution of the limit problem. Numerical experiments are performed to illustrate the efficiency of the method.
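Schematically, and in my own notation rather than the paper's, such penalizations add a Brinkman-type term of the form
$$\frac{1}{\varepsilon}\,\mathbb{1}_{\widetilde{\Omega}\setminus\Omega}\,\mathbf{u}$$
to the momentum balance on the large periodic cube $\widetilde{\Omega}\supset\Omega$, driving the velocity to zero outside the physical domain as the penalization parameter $\varepsilon\to 0$; the prescribed boundary temperature is imposed analogously through a penalized source/sink term in the internal energy balance.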
This study proposes an efficient Newton-type method for the optimal control of switched systems under a given mode sequence. A mesh-refinement-based approach is utilized to discretize continuous-time optimal control problems (OCPs) and formulate a nonlinear program (NLP), which guarantees the local convergence of a Newton-type method. A dedicated structure-exploiting algorithm (Riccati recursion) is proposed to perform the Newton-type method for the NLP efficiently, because its sparsity structure differs from that of a standard OCP. The proposed method computes each Newton step with time complexity linear in the total number of discretization grids, as does the standard Riccati recursion algorithm. Additionally, the computation is always successful if the solution is sufficiently close to a local minimum. Conversely, general quadratic programming (QP) solvers cannot accomplish this because the Hessian matrix is inherently indefinite. Moreover, a modification of the reduced Hessian matrix is proposed, exploiting the fact that the Riccati recursion acts as dynamic programming for a QP subproblem, to enhance convergence. A numerical comparison is conducted with off-the-shelf NLP solvers, which demonstrates that the proposed method is up to two orders of magnitude faster. Whole-body optimal control of quadrupedal gaits is also demonstrated, showing that the proposed method can achieve whole-body model predictive control (MPC) of robotic systems with rigid contacts.
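As background for the structure exploited here, the sketch below shows a plain backward Riccati recursion for a time-varying LQR subproblem (my own minimal illustration, not the paper's algorithm, which additionally handles the switching-time sparsity and the Hessian modification); its cost is linear in the horizon length, which is the property referred to above.

```python
import numpy as np

def riccati_lqr(A, B, Q, R, P_terminal):
    """Backward Riccati recursion for min sum_k 0.5 x'Qx + 0.5 u'Ru
    subject to x_{k+1} = A_k x_k + B_k u_k; returns the feedback gains."""
    P = P_terminal
    gains = []
    for k in reversed(range(len(A))):
        # Feedback gain K_k = (R_k + B_k' P B_k)^{-1} B_k' P A_k
        K = np.linalg.solve(R[k] + B[k].T @ P @ B[k], B[k].T @ P @ A[k])
        # Update of the quadratic cost-to-go matrix
        P = Q[k] + A[k].T @ P @ (A[k] - B[k] @ K)
        gains.append(K)
    gains.reverse()
    return gains
```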
The global minimum point of an optimization problem is of interest in engineering fields, and it is difficult to find, especially for a nonconvex large-scale optimization problem. In this article, we consider a new memetic algorithm for this problem. That is to say, we use deterministically computed points (the stationary points of the function) as the initial seeds of the evolutionary algorithm, rather than the random initial seeds used by known evolutionary algorithms. Firstly, we revise the continuation Newton method with the deflation technique to find as many stationary points as possible from several deterministic initial points. Then, we use those found stationary points as the initial evolutionary seeds of the quasi-genetic algorithm. After it evolves for several generations, we obtain a suboptimal point of the optimization problem. Finally, we use the continuation Newton method with this suboptimal point as the initial point to obtain a stationary point, and output the better of this final stationary point and the suboptimal point found by the quasi-genetic algorithm. We then compare the proposed method with the multi-start method (the built-in subroutine GlobalSearch.m of the MATLAB R2020a environment), the differential evolution algorithm (the DE method, the subroutine de.m of the MATLAB Central File Exchange 2021), and the branch-and-bound method (Couenne, a state-of-the-art open-source solver for mixed-integer nonlinear programming problems). Numerical results show that the proposed method performs well on large-scale global optimization problems, especially those that are difficult to solve with known global optimization methods.
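The overall pipeline can be sketched as follows; the local solver, the crossover/mutation rule, and all parameters below are stand-ins of my own (in particular, scipy's default quasi-Newton solver replaces the continuation Newton method with deflation), so this is only an outline of the three stages described above.

```python
import numpy as np
from scipy.optimize import minimize

def memetic_minimize(f, starts, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    # Stage 1: stationary points computed from deterministic initial points.
    pop = np.array([minimize(f, x0).x for x0 in starts])
    # Stage 2: quasi-genetic evolution (averaging crossover + Gaussian mutation).
    for _ in range(generations):
        i, j = rng.integers(len(pop)), rng.integers(len(pop))
        child = 0.5 * (pop[i] + pop[j]) + 0.1 * rng.standard_normal(pop.shape[1])
        worst = max(range(len(pop)), key=lambda k: f(pop[k]))
        if f(child) < f(pop[worst]):
            pop[worst] = child
    # Stage 3: polish the best evolved point with a final local solve and
    # return whichever of the two candidates has the smaller function value.
    best = min(pop, key=f)
    refined = minimize(f, best).x
    return refined if f(refined) < f(best) else best
```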
We study the problem of maximizing Nash welfare (MNW) while allocating indivisible goods to asymmetric agents. The Nash welfare of an allocation is the weighted geometric mean of agents' utilities, and the allocation with maximum Nash welfare is known to satisfy several desirable fairness and efficiency properties. However, computing such an MNW allocation is APX-hard (hard to approximate) in general, even when agents have additive valuation functions. Hence, we aim to identify tractable classes which either admit a polynomial-time approximation scheme (PTAS) or an exact polynomial-time algorithm. To this end, we design a PTAS for finding an MNW allocation for the case of asymmetric agents with identical, additive valuations, thus generalizing a similar result for symmetric agents. Our techniques can also be adapted to give a PTAS for the problem of computing the optimal $p$-mean welfare. We also show that an MNW allocation can be computed exactly in polynomial time for identical agents with $k$-ary valuations when $k$ is a constant, where every agent has at most $k$ different values for the goods. Next, we consider the special case where every agent finds at most two goods valuable, and show that this class admits an efficient algorithm, even for general monotone valuations. In contrast, we show that when agents can value three or more goods, maximizing Nash welfare is APX-hard, even when agents are symmetric and have additive valuations. Finally, we show that for constantly many asymmetric agents with additive valuations, the MNW problem admits a fully polynomial-time approximation scheme (FPTAS).
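For concreteness, with an allocation $A=(A_1,\ldots,A_n)$ and agent weights $w_i>0$ (assumed normalized here), the objective being maximized is
$$\mathrm{NW}(A)=\prod_{i=1}^{n}u_i(A_i)^{w_i},$$
which reduces to the geometric mean of utilities in the symmetric case $w_i=1/n$; the $p$-mean welfare mentioned above replaces this objective by $\big(\sum_{i=1}^{n}w_i\,u_i(A_i)^{p}\big)^{1/p}$.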
In this work, we give provable sieving algorithms for the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP) on lattices in the $\ell_p$ norm ($1\leq p\leq\infty$). The running time we obtain is better than that of existing provable sieving algorithms. We give a new linear sieving procedure that works for all $\ell_p$ norms ($1\leq p\leq\infty$). The main idea is to divide the space into hypercubes such that each vector can be mapped efficiently to a sub-region. We achieve a time complexity of $2^{2.751n+o(n)}$, which is much less than the $2^{3.849n+o(n)}$ complexity of the previous best algorithm. We also introduce a mixed sieving procedure, where a point is mapped to a hypercube within a ball and then a quadratic sieve is performed within each hypercube. This improves the running time, especially in the $\ell_2$ norm, where we achieve a time complexity of $2^{2.25n+o(n)}$, while the List Sieve Birthday algorithm has a running time of $2^{2.465n+o(n)}$. We adapt our sieving techniques to approximation algorithms for SVP and CVP in the $\ell_p$ norm ($1\leq p\leq\infty$) and show that our algorithm has a running time of $2^{2.001n+o(n)}$, while previous algorithms have a time complexity of $2^{3.169n+o(n)}$.
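A toy version of the cell-mapping step is sketched below (my own illustration, with hypothetical names; the actual procedure controls the cube side length to balance the number of cells against the cost of sieving inside each cell): vectors inside a box of radius $R$ are bucketed coordinate-wise into hypercubes of side $c$, so candidate reductions are only searched within a cell.

```python
import numpy as np

def cube_index(v, R, c):
    """Index of the hypercube of side c (inside the box [-R, R]^n) containing v."""
    return tuple(np.floor((np.asarray(v) + R) / c).astype(int))

def bucket_by_cube(vectors, R, c):
    """Group vectors by the hypercube they fall into."""
    cells = {}
    for v in vectors:
        cells.setdefault(cube_index(v, R, c), []).append(v)
    return cells
```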
Mesh adaptivity is a useful tool for the efficient solution of partial differential equations in very complex geometries. In the present paper we discuss the use of polygonal mesh refinement to tackle two common issues: first, adaptively refining a provided good-quality polygonal mesh while preserving its quality; second, improving the quality of a coarse, poor-quality polygonal mesh during the refinement process on very complex domains. For finite element methods and triangular meshes, convergence of a posteriori mesh refinement algorithms and optimality properties have been widely investigated, whereas convergence and optimality are still open problems for polygonal adaptive methods. In this article, we propose a new refinement method for convex cells with the aim of introducing some properties useful for tackling convergence and optimality of adaptive methods. The key issues in refining general convex polygons are: a refinement that depends only on the cells marked for refinement at each refinement step; a partial quality improvement, or at least a non-degenerate mesh quality, during the refinement iterations; and a bound on the number of unknowns of the discrete problem with respect to the number of cells in the mesh. Although these properties are quite common for refinement algorithms of triangular meshes, these issues are still open problems for polygonal meshes.
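As a point of comparison only (this is a classical midpoint-to-centroid rule written for illustration, not the refinement method proposed in the paper), the sketch below refines a single convex cell into quadrilaterals and touches nothing outside the marked cell, which is the kind of locality the first property above asks for.

```python
import numpy as np

def refine_convex_cell(vertices):
    """Split a convex polygon (vertices in counter-clockwise order) into one
    quadrilateral per vertex by joining edge midpoints to the centroid."""
    V = np.asarray(vertices, dtype=float)
    centroid = V.mean(axis=0)                  # interior point for a convex cell
    mids = 0.5 * (V + np.roll(V, -1, axis=0))  # midpoint of edge (i, i+1)
    children = []
    for i in range(len(V)):
        # Quad: vertex i, midpoint of the outgoing edge, centroid, midpoint of the incoming edge.
        children.append([V[i], mids[i], centroid, mids[i - 1]])
    return children
```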