Fourth-order differential equations play an important role in many applications in science and engineering. In this paper, we present a three-field mixed finite-element formulation for fourth-order problems, with a focus on the effective treatment of the different boundary conditions that arise naturally in a variational formulation. Our formulation is based on introducing the gradient of the solution as an explicit variable, constrained using a Lagrange multiplier. The essential boundary conditions are enforced weakly, using Nitsche's method where required. As a result, the problem is rewritten as a saddle-point system, requiring analysis of the resulting finite-element discretization and the construction of optimal linear solvers. Here, we discuss the analysis of the well-posedness and accuracy of the finite-element formulation. Moreover, we develop monolithic multigrid solvers for the resulting linear systems. Two- and three-dimensional numerical results are presented to demonstrate the accuracy of the discretization and the efficiency of the proposed multigrid solvers.
We introduce the Conic Blackwell Algorithm$^+$ (CBA$^+$) regret minimizer, a new parameter- and scale-free regret minimizer for general convex sets. CBA$^+$ is based on Blackwell approachability and attains $O(\sqrt{T})$ regret. We show how to efficiently instantiate CBA$^+$ for many decision sets of interest, including the simplex, $\ell_{p}$ norm balls, and ellipsoidal confidence regions in the simplex. Based on CBA$^+$, we introduce SP-CBA$^+$, a new parameter-free algorithm for solving convex-concave saddle-point problems, which achieves an $O(1/\sqrt{T})$ ergodic rate of convergence. In our simulations, we demonstrate the wide applicability of SP-CBA$^+$ on several standard saddle-point problems, including matrix games, extensive-form games, distributionally robust logistic regression, and Markov decision processes. In each setting, SP-CBA$^+$ achieves state-of-the-art numerical performance and outperforms classical methods, without the need for any choice of step sizes or other algorithmic parameters.
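On the probability simplex, Blackwell-approachability-based minimizers of this kind are closely related to regret matching$^+$. As an illustrative sketch only (the function name is ours, and this is not the paper's CBA$^+$ implementation), one regret-matching$^+$ round can be written as:

```python
import numpy as np

def rm_plus_step(Q, loss):
    """One regret-matching+ update on the probability simplex.

    Q    : nonnegative cumulative-regret-like state vector of the minimizer
    loss : loss vector observed this round
    Returns the updated state and the next strategy (a point in the simplex).
    """
    # Current strategy: normalize the state, or play uniformly if it is zero.
    x = Q / Q.sum() if Q.sum() > 0 else np.full_like(Q, 1.0 / len(Q))
    inst_regret = x @ loss - loss          # regret of each pure action vs. the played mix
    Q = np.maximum(Q + inst_regret, 0.0)   # the "+" truncation keeps the state in the cone
    s = Q.sum()
    x_next = Q / s if s > 0 else np.full_like(Q, 1.0 / len(Q))
    return Q, x_next
```

Run against a fixed loss vector, the strategy concentrates on the lowest-loss action; no step size or other parameter appears anywhere, which is the "parameter-free" property the abstract refers to.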
Recent low-thrust space missions have highlighted the importance of designing trajectories that are robust against uncertainties. In its complete form, this process is formulated as a nonlinear constrained stochastic optimal control problem. This problem is among the most complex in control theory, and no method practically applicable to low-thrust trajectory optimization problems has been proposed to date. This paper presents a new algorithm to solve stochastic optimal control problems with nonlinear systems and constraints. The proposed algorithm uses the unscented transform to convert a stochastic optimal control problem into a deterministic problem, which is then solved by trajectory optimization methods such as differential dynamic programming. Two numerical examples, one of which applies the proposed method to low-thrust trajectory design, illustrate that it automatically introduces margins that improve robustness. Finally, Monte Carlo simulations are used to evaluate the robustness and optimality of the solution.
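As context for the unscented-transform step, a generic sketch (with an assumed simple scaling parameter, not the authors' implementation): the transform replaces a Gaussian random variable with 2n+1 deterministic sigma points, so that propagating uncertainty through a nonlinear map becomes a deterministic computation.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f via sigma points.

    2n+1 deterministically chosen sigma points replace Monte Carlo sampling,
    turning a stochastic propagation into a deterministic one.
    """
    n = len(mean)
    lam = kappa  # simple scaling choice; many parameterizations exist
    S = np.linalg.cholesky((n + lam) * cov)  # columns scale the perturbations
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w0 = lam / (n + lam)
    wi = 1.0 / (2 * (n + lam))
    weights = np.array([w0] + [wi] * (2 * n))
    ys = np.array([f(p) for p in pts])       # push each sigma point through f
    y_mean = weights @ ys
    diffs = ys - y_mean
    y_cov = (weights[:, None] * diffs).T @ diffs
    return y_mean, y_cov
```

For a linear map the transform is exact, recovering the usual mean and covariance propagation; for nonlinear dynamics it captures the mean and covariance to second order.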
The optimization of parallel kinematic manipulators (PKMs) involves several constraints that are difficult to formalize, making the optimal synthesis problem highly challenging. The presence of passive joint limits, as well as singularities and self-collisions, leads to a complicated relation between the input and output parameters. In this article, a novel optimization methodology is proposed that combines a local search, the Nelder-Mead algorithm, with global search methodologies such as low-discrepancy distributions, for faster and more efficient exploration of the optimization space. The effect of the dimension of the optimization problem and of the different constraints is discussed to highlight the complexities of closed-loop kinematic chain optimization. The work also presents the approaches used to handle constraints on passive joint boundaries as well as singularities, and to avoid internal collisions in such mechanisms. The proposed algorithm can also optimize the length of the prismatic actuators, and constraints can be added in a modular fashion, allowing one to understand the impact of a given criterion on the final result. The presented approach is applied to optimize two PKMs with different degrees of freedom.
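To illustrate the low-discrepancy global-exploration idea in isolation (a generic sketch; the function names and the choice of a Halton set are ours, not necessarily the authors'), one can evaluate the objective on a Halton point set and hand the best point to a local Nelder-Mead search:

```python
import numpy as np

def halton(n_points, dim, primes=(2, 3, 5, 7, 11, 13)):
    """Low-discrepancy Halton points in the unit cube [0, 1]^dim."""
    def vdc(i, base):  # van der Corput radical inverse in the given base
        v, denom = 0.0, 1.0
        while i > 0:
            i, rem = divmod(i, base)
            denom *= base
            v += rem / denom
        return v
    return np.array([[vdc(i, primes[d]) for d in range(dim)]
                     for i in range(1, n_points + 1)])

def global_seed(objective, lower, upper, n_points=256):
    """Evaluate the objective on a Halton set covering the design box and return
    the best point, to be used as the start of a local Nelder-Mead search."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    pts = lower + halton(n_points, len(lower)) * (upper - lower)
    vals = np.array([objective(p) for p in pts])
    return pts[np.argmin(vals)]
```

Because the Halton set covers the box far more evenly than random sampling, the local search starts near the basin of the global optimum even in moderately high-dimensional design spaces.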
The aim of this paper is to propose an efficient adaptive finite element method for eigenvalue problems, based on the multilevel correction scheme and the inverse power method. The method involves solving associated boundary value problems on each adaptive partition and very low-dimensional eigenvalue problems on special meshes controlled by the proposed algorithm. Hence, the efficiency of solving eigenvalue problems can be improved to be comparable to that of the adaptive finite element method for the associated boundary value problems. The convergence and optimal complexity are theoretically verified and numerically demonstrated.
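The inverse power method at the heart of such correction schemes can be sketched generically (this is the textbook iteration on a model matrix, not the paper's adaptive algorithm): each step costs one linear solve, mirroring the "one boundary value problem per correction step" structure.

```python
import numpy as np

def inverse_power(A, tol=1e-10, max_iter=500):
    """Inverse power iteration: converges to the eigenpair of A whose eigenvalue
    has smallest magnitude (the smallest eigenvalue of an SPD matrix A).
    Each step solves one linear system with A."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam_old = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)   # the linear ("boundary value problem") solve
        x = y / np.linalg.norm(y)
        lam = x @ A @ x             # Rayleigh quotient estimate of the eigenvalue
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x
```

On the 1-D discrete Laplacian (tridiagonal with 2 on the diagonal and -1 off it), the iteration recovers the known smallest eigenvalue 4 sin^2(pi / (2(n+1))).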
Previous papers have shown the impact of partial convergence of discretized PDEs on the accuracy of tangent and adjoint linearizations. A series of papers suggested linearizing the fixed-point iteration used in the solution process as a means of computing the sensitivities, rather than linearizing the discretized PDE, since the lack of convergence of the nonlinear problem indicates that the discretized form of the governing equations has not been satisfied. These works showed that the accuracy of an approximate linearization depends in part on the convergence of the nonlinear system. This work presents an error analysis of the impact of the approximate linearization and of the convergence of the nonlinear problem for both the tangent and adjoint modes, and provides a series of results for an exact Newton solver, an inexact Newton solver, and a low-storage explicit Runge-Kutta scheme to confirm the error analyses.
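A minimal scalar sketch of the idea (a toy example of ours, not the solvers analyzed in the paper): differentiate the fixed-point iteration itself, so that the tangent sensitivity is built from the same partially converged iterates as the nonlinear solve, and can be compared against a finite-difference check once both are converged.

```python
import numpy as np

def solve_fixed_point(p, n_iter):
    """Solve x = G(x, p) with G(x, p) = cos(p*x) by plain fixed-point iteration."""
    x = 0.5
    for _ in range(n_iter):
        x = np.cos(p * x)
    return x

def tangent_sensitivity(p, n_iter):
    """Tangent mode via linearization of the iteration itself:
    x_{k+1} = G(x_k, p),  dx_{k+1} = G_x(x_k, p) dx_k + G_p(x_k, p),
    with G_x = -p*sin(p*x) and G_p = -x*sin(p*x)."""
    x, dx = 0.5, 0.0
    for _ in range(n_iter):
        s = np.sin(p * x)
        dx = -p * s * dx - x * s   # chain rule through one fixed-point step
        x = np.cos(p * x)
    return x, dx
```

The sensitivity iteration inherits the contraction rate of the nonlinear iteration, which is exactly the coupling between nonlinear convergence and sensitivity accuracy that the error analysis quantifies.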
We develop a kernel projected Wasserstein distance for the two-sample test, an essential building block in statistics and machine learning: given two sets of samples, determine whether they are drawn from the same distribution. The method operates by finding the nonlinear mapping in the data space that maximizes the distance between the projected distributions. In contrast to existing work on the projected Wasserstein distance, the proposed method circumvents the curse of dimensionality more efficiently. We present practical algorithms for computing this distance, together with non-asymptotic uncertainty quantification of the empirical estimates. Numerical examples validate our theoretical results and demonstrate the good performance of the proposed method.
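For intuition, the linear-projection special case can be sketched as follows (the paper's method instead optimizes over a kernel-induced nonlinear family of mappings; the random-direction surrogate below is ours): project both samples onto a direction and compute the one-dimensional Wasserstein-1 distance, which for equal-size samples is just the mean absolute difference of sorted projections.

```python
import numpy as np

def projected_w1(X, Y, theta):
    """W1 distance between the two samples projected onto direction theta.
    For equal-size samples, 1-D W1 is the mean |difference| of sorted values."""
    theta = theta / np.linalg.norm(theta)
    px, py = np.sort(X @ theta), np.sort(Y @ theta)
    return np.mean(np.abs(px - py))

def max_projected_w1(X, Y, n_dirs=200, seed=0):
    """Crude surrogate for maximizing over projections: sample random directions
    and keep the best (the paper optimizes over a nonlinear kernel family)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    return max(projected_w1(X, Y, d) for d in dirs)
```

When the two distributions differ only along one direction, the maximizing projection isolates that direction, which is the mechanism by which projection-based distances avoid the sample-complexity blow-up of the full high-dimensional Wasserstein distance.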
This paper is devoted to discrete mechanical systems subject to external forces. We introduce a discrete version of systems with Rayleigh-type forces, obtain the equations of motion, and characterize the equivalence for these systems. Additionally, we obtain a Noether's theorem and another theorem characterizing the Lie subalgebra of symmetries of a forced discrete Lagrangian system. Moreover, we develop a Hamilton-Jacobi theory for forced discrete Hamiltonian systems. These results are useful for the construction of so-called variational integrators, which, as we illustrate with some examples, are remarkably superior to the usual numerical integrators such as the Runge-Kutta method.
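To see why variational integrators outperform generic schemes, a minimal textbook comparison (our own illustration, not the paper's examples): Störmer-Verlet, which arises as a variational integrator and is symplectic, keeps the energy of a harmonic oscillator bounded over long times, while explicit Euler's energy drifts.

```python
import numpy as np

def verlet(q, p, h, n, force):
    """Stormer-Verlet: the variational (symplectic) integrator obtained from a
    simple discretization of the Lagrangian L = p^2/2 - V(q)."""
    for _ in range(n):
        p_half = p + 0.5 * h * force(q)
        q = q + h * p_half
        p = p_half + 0.5 * h * force(q)
    return q, p

def explicit_euler(q, p, h, n, force):
    """Generic first-order scheme, for comparison; not symplectic."""
    for _ in range(n):
        q, p = q + h * p, p + h * force(q)
    return q, p
```

For the harmonic oscillator (force = -q, energy (q^2 + p^2)/2) with step 0.05 over 2000 steps, the Verlet energy stays within a small bounded oscillation of its initial value, whereas the explicit Euler energy grows by a factor (1 + h^2) per step.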
This paper presents a new parameter-free partially penalized immersed finite element method, with convergence analysis, for solving second-order elliptic interface problems. A lifting operator is introduced on interface edges to ensure the coercivity of the method without requiring an ad hoc stabilization parameter. The optimal approximation capabilities of the immersed finite element space are proved via a novel approach that is much simpler than those in the literature. A new trace inequality, necessary for proving the optimal convergence of immersed finite element methods, is established on interface elements. Optimal error estimates are derived rigorously, with the constant independent of the interface location relative to the mesh. The new method and analysis are also extended to variable coefficients and three-dimensional problems. Numerical examples confirm the theoretical analysis and the efficiency of the new method.
We present an O(n^6) linear programming model for the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). The basic model is developed within the framework of the TSP. It does not involve the traditional TSP polytope based on city-to-city variables, referred to in the literature as "the TSP polytope." We do not model explicit Hamiltonian cycles of the cities. Instead, we use a time-dependent abstraction of TSP tours and develop a direct extended formulation of the linear assignment problem (LAP) polytope. The model is exact in the sense that it has integral extreme points which are in one-to-one correspondence with TSP tours. It can be solved optimally using any linear programming (LP) solver, hence offering a new (incidental) proof of the equality of the computational complexity classes "P" and "NP." The extensions of the model to the time-dependent traveling salesman problem (TDTSP) as well as the quadratic assignment problem (QAP) are straightforward. The reasons for the non-applicability of existing negative extended-formulation results for "the TSP polytope" to the model in this paper, as well as our software implementation and the computational experiments we conducted, are briefly discussed.
This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
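As a point of contrast with the Lagrangian-relaxation bounds, the simplest certified-bound technique, interval bound propagation, can be sketched as follows (illustrative only; the paper's relaxation yields tighter bounds). It propagates a box of inputs through affine layers and ReLUs, giving a sound but loose outer bound on the reachable outputs.

```python
import numpy as np

def ibp_bounds(lo, hi, layers):
    """Interval bound propagation through (affine, ReLU) layers.

    layers : list of (W, b) pairs; for this sketch a ReLU follows every layer.
    Returns elementwise lower/upper bounds valid for ALL inputs in [lo, hi].
    """
    for W, b in layers:
        mid, rad = (lo + hi) / 2, (hi - lo) / 2     # box center and half-widths
        mid_out = W @ mid + b
        rad_out = np.abs(W) @ rad                   # worst-case growth of the box
        lo, hi = mid_out - rad_out, mid_out + rad_out
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    return lo, hi
```

Any specification violation can then be bounded by evaluating it on the output box; as with the Lagrangian bounds, soundness holds at every intermediate step, so the computation can be stopped early with a valid (if looser) certificate.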