We construct a symplectic integrator for non-separable Hamiltonian systems by combining the extended phase space approach of Pihajoki with the symmetric projection method. The resulting method is semiexplicit in the sense that the main time evolution step is explicit whereas the symmetric projection step is implicit. The symmetric projection binds potentially diverging copies of solutions, thereby remedying the main drawback of the extended phase space approach. Moreover, our semiexplicit method is symplectic in the original phase space. This is in contrast to existing extended phase space integrators, which are symplectic only in the extended phase space. We demonstrate that our method exhibits excellent long-time preservation of invariants, and also that it tends to be as fast as Tao's explicit modified extended phase space integrator, particularly with higher-order implementations and for higher-dimensional problems.
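To make the extended phase space idea concrete, here is a minimal sketch of the explicit scheme the abstract compares against (Tao's modified extended phase space integrator, with an $\omega$-binding term), not the semiexplicit method proposed in the paper. The toy non-separable Hamiltonian $H(q,p) = \tfrac12(q^2+1)(p^2+1)$, the step size, and the binding strength $\omega$ are all illustrative choices. The state is doubled to $(q,p,x,y)$, the extended Hamiltonian $H(q,y) + H(x,p) + \tfrac{\omega}{2}\bigl((q-x)^2 + (p-y)^2\bigr)$ is split into three exactly solvable pieces, and the pieces are composed symmetrically (Strang splitting):

```python
import math

# Toy non-separable Hamiltonian H(q, p) = (q^2 + 1)(p^2 + 1) / 2 (1-D, illustrative).
def H(q, p):
    return 0.5 * (q * q + 1.0) * (p * p + 1.0)

def dHdq(q, p):
    return q * (p * p + 1.0)

def dHdp(q, p):
    return p * (q * q + 1.0)

def extended_step(q, p, x, y, dt, omega):
    """One Strang-composed step on the extended phase space:
    phi_A(dt/2) phi_B(dt/2) phi_C(dt) phi_B(dt/2) phi_A(dt/2)."""
    def phi_A(q, p, x, y, d):   # exact flow of H(q, y): updates p and x
        return q, p - d * dHdq(q, y), x + d * dHdp(q, y), y

    def phi_B(q, p, x, y, d):   # exact flow of H(x, p): updates q and y
        return q + d * dHdp(x, p), p, x, y - d * dHdq(x, p)

    def phi_C(q, p, x, y, d):   # exact flow of the omega-binding term:
        # (q - x, p - y) rotates with frequency 2*omega; (q + x, p + y) is fixed
        c, s = math.cos(2 * omega * d), math.sin(2 * omega * d)
        u, v = q - x, p - y
        u2, v2 = c * u + s * v, -s * u + c * v
        sq, sp = q + x, p + y
        return (sq + u2) / 2, (sp + v2) / 2, (sq - u2) / 2, (sp - v2) / 2

    q, p, x, y = phi_A(q, p, x, y, dt / 2)
    q, p, x, y = phi_B(q, p, x, y, dt / 2)
    q, p, x, y = phi_C(q, p, x, y, dt)
    q, p, x, y = phi_B(q, p, x, y, dt / 2)
    q, p, x, y = phi_A(q, p, x, y, dt / 2)
    return q, p, x, y

q = x = 1.0          # the two copies start identical
p = y = 0.5
e0 = H(q, p)
for _ in range(1000):
    q, p, x, y = extended_step(q, p, x, y, dt=0.01, omega=20.0)

energy_error = abs(H(q, p) - e0)       # drift of the original Hamiltonian
copy_gap = abs(q - x) + abs(p - y)     # divergence between the two copies
```

Over moderate times the binding term keeps the copies close and the energy drift small; the abstract's point is that without an additional mechanism (here, the paper's symmetric projection) the copies can eventually diverge, and the scheme above is symplectic only on the extended space.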
This paper addresses theory and applications of $\ell_p$-based Laplacian regularization in semi-supervised learning. The graph $p$-Laplacian for $p>2$ has been proposed recently as a replacement for the standard ($p=2$) graph Laplacian in semi-supervised learning problems with very few labels, where Laplacian learning is degenerate. In the first part of the paper we prove new discrete-to-continuum convergence results for $p$-Laplace problems on $k$-nearest neighbor ($k$-NN) graphs, which are more commonly used in practice than random geometric graphs. Our analysis shows that, on $k$-NN graphs, the $p$-Laplacian retains information about the data distribution as $p\to \infty$ and Lipschitz learning ($p=\infty$) is sensitive to the data distribution. This situation can be contrasted with random geometric graphs, where the $p$-Laplacian forgets the data distribution as $p\to \infty$. We also present a general framework for proving discrete-to-continuum convergence results in graph-based learning that only requires pointwise consistency and monotonicity. In the second part of the paper, we develop fast algorithms for solving the variational and game-theoretic $p$-Laplace equations on weighted graphs for $p>2$. We present several efficient and scalable algorithms for both formulations, and report numerical results on synthetic data indicating their convergence properties. Finally, we conduct extensive numerical experiments on the MNIST, FashionMNIST and EMNIST datasets that illustrate the effectiveness of the $p$-Laplacian formulation for semi-supervised learning with few labels. In particular, we find that Lipschitz learning ($p=\infty$) performs well with very few labels on $k$-NN graphs, which experimentally validates our theoretical findings that Lipschitz learning retains information about the data distribution (the unlabeled data) on $k$-NN graphs.
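A minimal sketch of the $p=\infty$ (Lipschitz learning) case the abstract discusses, on a toy unweighted path graph rather than a $k$-NN graph, and solved by naive fixed-point iteration rather than the paper's scalable algorithms. The graph infinity-Laplace equation requires each unlabeled node's value to be the midpoint of the largest and smallest neighboring values:

```python
# Lipschitz learning (p = infinity) on a toy path graph 0-1-2-3-4 with
# labels only at the two endpoints.  The solution of the graph
# infinity-Laplace equation takes, at each unlabeled node, the midpoint
# of the max and min neighboring values; we solve by Gauss-Seidel-style
# fixed-point iteration with the labels held fixed.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = {0: 0.0, 4: 1.0}                      # very few labels
u = {i: labels.get(i, 0.0) for i in neighbors}

for _ in range(200):
    for i in neighbors:
        if i in labels:
            continue
        vals = [u[j] for j in neighbors[i]]
        u[i] = 0.5 * (max(vals) + min(vals))

# On an unweighted path, the infinity-harmonic extension is the linear
# interpolation between the two labels: 0, 0.25, 0.5, 0.75, 1.
solution = [u[i] for i in range(5)]
```

On a weighted $k$-NN graph built from data, the weights enter these max/min comparisons, which is one way to see how the $p=\infty$ solution can remain sensitive to the data distribution.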
We consider the problem of finding nearly optimal solutions of optimization problems with random objective functions. Two concrete problems we consider are (a) optimizing the Hamiltonian of a spherical or Ising $p$-spin glass model, and (b) finding a large independent set in a sparse Erd\H{o}s-R\'{e}nyi graph. The following families of algorithms are considered: (a) low-degree polynomials of the input; (b) low-depth Boolean circuits; (c) the Langevin dynamics algorithm. We show that these families of algorithms fail to produce nearly optimal solutions with high probability. For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory (although we consider the search problem as opposed to the decision problem). Our proof uses the fact that these models are known to exhibit a variant of the overlap gap property (OGP) of near-optimal solutions. Specifically, for both models, every two solutions whose objectives are above a certain threshold are either close or far from each other. The crux of our proof is that the classes of algorithms we consider exhibit a form of stability. We show by an interpolation argument that stable algorithms cannot overcome the OGP barrier. The stability of Langevin dynamics is an immediate consequence of the well-posedness of stochastic differential equations. The stability of low-degree polynomials and Boolean circuits is established using tools from Gaussian and Boolean analysis -- namely hypercontractivity and total influence, as well as a novel lower bound for random walks avoiding certain subsets. In the case of Boolean circuits, the result also makes use of the classical theorem of Linial, Mansour, and Nisan. Our techniques apply more broadly to low-influence functions and may be applicable in other settings.
We consider the problem of recovering a signal from the magnitudes of affine measurements, also known as {\em affine phase retrieval}. In this paper, we formulate affine phase retrieval as an optimization problem and develop a second-order algorithm based on Newton's method to solve it. Although affine phase retrieval can be converted into a standard phase retrieval problem, solving it directly has unique advantages. For example, the linear information in the observations makes it possible to apply second-order algorithms under complex measurements. Another advantage is that our algorithm imposes no special requirements on the initial point, whereas an appropriate initial value is essential for most non-convex phase retrieval algorithms. Starting from zero, our algorithm generates iterates by Newton's method, and we prove that it converges quadratically to the true signal, without any ambiguity, for both Gaussian measurements and CDP measurements. Finally, we present numerical simulations that verify these conclusions and demonstrate the effectiveness of the algorithm.
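The phrase "without any ambiguity" is the key structural difference from classical phase retrieval, where $x$ and $-x$ (or any global phase $e^{i\theta}x$) produce identical magnitude measurements. A small hypothetical example (random Gaussian data, real case, with a fixed seed) illustrates how the affine offsets $c_i$ break this symmetry; this is a sketch of the problem setup, not of the paper's Newton algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 12
A = rng.standard_normal((m, n))        # measurement vectors a_i (rows)
c = rng.standard_normal(m)             # affine offsets c_i
x = rng.standard_normal(n)             # true signal

affine = lambda z: np.abs(A @ z + c)   # affine phase retrieval: b_i = |<a_i, z> + c_i|
linear = lambda z: np.abs(A @ z)       # classical phase retrieval: b_i = |<a_i, z>|

# Classical magnitude measurements cannot distinguish x from -x ...
classical_gap = np.max(np.abs(linear(x) - linear(-x)))
# ... but the affine offsets break the global sign symmetry:
affine_gap = np.max(np.abs(affine(x) - affine(-x)))
```

Since $|\langle a_i, x\rangle + c_i| \ne |{-\langle a_i, x\rangle} + c_i|$ whenever $\langle a_i, x\rangle c_i \ne 0$, generic affine measurements pin down the signal itself rather than an equivalence class, which is what makes unambiguous recovery possible.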
A variational integrator of arbitrarily high order on the special orthogonal group $SO(n)$ is constructed using the polar decomposition and the constrained Galerkin method. It has the advantage of avoiding the second-order derivative of the exponential map that arises in traditional Lie group variational methods. In addition, a reduced Lie--Poisson integrator is constructed, and the resulting algorithms can naturally be implemented by fixed-point iteration. The proposed methods are validated by numerical simulations on $SO(3)$, which demonstrate that they are comparable to variational Runge--Kutta--Munthe-Kaas methods in terms of computational efficiency. However, the methods we have proposed preserve the Lie group structure much more accurately and exhibit better near-energy preservation.
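As a small illustration of the central tool, here is the orthogonal factor of the polar decomposition computed via the SVD, used to map a matrix near the group back onto $SO(3)$. This is only the basic building block under an assumed small perturbation (e.g. numerical drift off the group), not the paper's constrained Galerkin construction:

```python
import numpy as np

def polar_factor(M):
    """Orthogonal factor Q of the polar decomposition M = Q P (P symmetric
    positive semidefinite): the closest orthogonal matrix to M in the
    Frobenius norm, lying in SO(n) whenever det(M) > 0."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(1)
theta = 0.3                      # a sample rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
M = R + 1e-3 * rng.standard_normal((3, 3))   # drift off the group
Q = polar_factor(M)

orthogonality_error = np.linalg.norm(Q.T @ Q - np.eye(3))
det_Q = np.linalg.det(Q)
```

Because the polar factor has a smooth closed-form characterization, differentiating through it avoids the second-order derivative of the exponential map mentioned in the abstract.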
Understanding natural symmetries is key to making sense of our complex and ever-changing world. Recent work has shown that neural networks can learn such symmetries directly from data using Hamiltonian Neural Networks (HNNs). But HNNs struggle when trained on datasets where energy is not conserved. In this paper, we ask whether it is possible to identify and decompose conservative and dissipative dynamics simultaneously. We propose Dissipative Hamiltonian Neural Networks (D-HNNs), which parameterize both a Hamiltonian and a Rayleigh dissipation function. Taken together, they represent an implicit Helmholtz decomposition which can separate dissipative effects such as friction from symmetries such as conservation of energy. We train our model to decompose a damped mass-spring system into its friction and inertial terms and then show that this decomposition can be used to predict dynamics for unseen friction coefficients. Then we apply our model to real-world data, including a large, noisy ocean current dataset, where decomposing the velocity field yields useful scientific insights.
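The decomposition a D-HNN is trained to discover can be written down analytically for the damped mass-spring example. The sketch below (plain Python, unit mass and stiffness, a toy friction coefficient) verifies that the damped-oscillator vector field is exactly the symplectic gradient of a Hamiltonian plus a term derived from a Rayleigh dissipation function; a D-HNN would instead learn the two scalar functions from trajectory data:

```python
import random

gamma = 0.2                        # toy friction coefficient

def field(q, p):
    # Damped unit mass-spring system: dq/dt = p, dp/dt = -q - gamma * p
    return (p, -q - gamma * p)

def conservative(q, p):
    # Symplectic gradient of H(q, p) = (q^2 + p^2) / 2: (dH/dp, -dH/dq)
    return (p, -q)

def dissipative(q, p):
    # From the Rayleigh function D(p) = gamma * p^2 / 2: force -dD/dp
    return (0.0, -gamma * p)

# Check that the two learned-in-principle pieces sum to the full field.
random.seed(0)
max_residual = 0.0
for _ in range(100):
    q, p = random.uniform(-2, 2), random.uniform(-2, 2)
    fq, fp = field(q, p)
    cq, cp = conservative(q, p)
    dq, dp = dissipative(q, p)
    max_residual = max(max_residual, abs(fq - cq - dq), abs(fp - cp - dp))
```

Once the two pieces are separated, changing `gamma` in the dissipative term alone is what allows prediction for unseen friction coefficients.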
This paper is devoted to the study of non-homogeneous Bingham flows. We introduce a second-order, divergence-conforming discretization for the Bingham constitutive equations, coupled with a discontinuous Galerkin scheme for the mass density. One of the main challenges when analyzing viscoplastic materials is the treatment of the yield stress. In order to overcome this issue, in this work we propose a local regularization, based on a Huber smoothing step. We also take advantage of the properties of the divergence-conforming and discontinuous Galerkin formulations to incorporate upwind discretizations that stabilize the formulation. The stability of the continuous problem and of the fully discrete scheme is analyzed. Further, a semismooth Newton method is proposed for solving the fully discretized system of equations at each time step. Finally, several numerical examples that illustrate the main features of the problem and the properties of the numerical scheme are presented.
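To illustrate the idea behind the Huber smoothing of the yield stress, here is a scalar stand-in for the Bingham yield term $\tau_y\, \mathrm{D}u/|\mathrm{D}u|$, which is set-valued where the shear rate vanishes. A Huber-type regularization (the particular form below is a common choice and an assumption on our part, not necessarily the paper's exact formula) replaces it by a linear ramp inside a small threshold $\varepsilon$, giving a Lipschitz function that semismooth Newton methods can handle:

```python
def yield_term_huber(z, tau_y, eps):
    """Huber-type local regularization of the Bingham yield term
    tau_y * z / |z| (scalar stand-in for tau_y * Du / |Du|):
    exact above the threshold eps, linear in z below it."""
    if abs(z) >= eps:
        return tau_y * z / abs(z)
    return tau_y * z / eps

tau_y, eps = 1.5, 1e-3
# The regularized term is continuous across the switch |z| = eps ...
jump = abs(yield_term_huber(eps, tau_y, eps)
           - yield_term_huber(eps * 0.999999, tau_y, eps))
# ... and agrees with the exact term away from the yield surface.
exact_agreement = abs(yield_term_huber(0.5, tau_y, eps) - tau_y)
```

The kink at $|z| = \varepsilon$ is exactly the kind of piecewise-smooth nonlinearity for which a semismooth Newton method, as used in the paper, retains fast local convergence.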
In this paper we present a method for the solution of $\ell_1$-regularized convex quadratic optimization problems. It is derived by suitably combining a proximal method of multipliers strategy with a semi-smooth Newton method. The resulting linear systems are solved using a Krylov-subspace method, accelerated by appropriate general-purpose preconditioners, which are shown to be optimal with respect to the proximal parameters. Practical efficiency is further improved by warm-starting the algorithm using a proximal alternating direction method of multipliers. We show that the method achieves global convergence under feasibility assumptions. Furthermore, under additional standard assumptions, the method can achieve global linear and local superlinear convergence. The effectiveness of the approach is numerically demonstrated on $L^1$-regularized PDE-constrained optimization problems.
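For orientation on the problem class, the sketch below solves a tiny instance of the same $\ell_1$-regularized convex quadratic problem, $\min_x \tfrac12 x^\top Q x + q^\top x + \lambda \|x\|_1$, with plain proximal gradient (ISTA) built on the soft-thresholding operator. This is a generic first-order baseline chosen for brevity, not the paper's proximal method of multipliers with semismooth Newton steps; the matrix, vector, and $\lambda$ are arbitrary illustrative values:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Q, q, lam, iters=5000):
    """Proximal gradient on f(x) = x^T Q x / 2 + q^T x + lam * ||x||_1."""
    L = np.linalg.eigvalsh(Q).max()    # Lipschitz constant of the smooth gradient
    x = np.zeros_like(q)
    for _ in range(iters):
        x = soft_threshold(x - (Q @ x + q) / L, lam / L)
    return x

Q = np.array([[3.0, 0.5],              # small SPD example
              [0.5, 2.0]])
q = np.array([-3.0, 0.1])
lam = 0.5
x = ista(Q, q, lam)

# Verify the subgradient optimality conditions of the nonsmooth problem:
# (Qx + q)_i = -lam * sign(x_i) where x_i != 0, |(Qx + q)_i| <= lam otherwise.
g = Q @ x + q
kkt_violation = max(
    abs(g[i] + lam * np.sign(x[i])) if abs(x[i]) > 1e-10
    else max(abs(g[i]) - lam, 0.0)
    for i in range(2)
)
```

The piecewise-linear structure of `soft_threshold` is also what makes the optimality system semismooth, which is the opening the paper's Newton strategy exploits at scale.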
We consider the scenario where important signals are not strong enough to be separable from a large amount of noise. Such weak signals commonly exist in large-scale data analysis and play vital roles in many biomedical applications. Existing methods, however, are mostly underpowered for such weak signals. We address the challenge from the perspective of false negative control and develop a new method to efficiently regulate the false negative proportion at a user-specified level. The new method is developed in a realistic setting with arbitrary covariance dependence between variables. We calibrate the overall dependence through a parameter whose scale is compatible with the existing phase diagram in high-dimensional sparse inference. Utilizing the new calibration, we asymptotically explicate the joint effect of covariance dependence, signal sparsity, and signal intensity on the proposed method. We interpret the results using a new phase diagram, which shows that the proposed method can efficiently retain a high proportion of signals even when they cannot be well separated from noise. Finite-sample performance of the proposed method is compared to that of several existing methods in simulation studies. The proposed method outperforms the others in adapting to a user-specified false negative control level. We apply the new method to analyze an fMRI dataset to locate voxels that are functionally relevant to saccadic eye movements. The new method strikes a good balance between identifying functionally relevant regions and avoiding excessive noise voxels.
Finding a \emph{single} best solution is the most common objective in combinatorial optimization problems. However, such a single solution may not be applicable to real-world problems, as objective functions and constraints are only "approximately" formulated for the original real-world problems. To solve this issue, finding \emph{multiple} solutions is a natural direction, and diversity of solutions is an important concept in this context. Unfortunately, finding diverse solutions is much harder than finding a single solution. To cope with this difficulty, we investigate the approximability of finding diverse solutions. As a main result, we propose a framework to design approximation algorithms for finding diverse solutions, which yields several outcomes, including constant-factor approximation algorithms for finding diverse matchings in graphs and diverse common bases in two matroids, as well as PTASes for finding diverse minimum cuts and interval schedulings.
Hamiltonian cycles in graphs were first studied in the 1850s. Since then, an impressive amount of research has been dedicated to identifying classes of graphs that admit Hamiltonian cycles, and to related questions. The corresponding decision problem, which asks whether a given graph is Hamiltonian (i.\,e.\ admits a Hamiltonian cycle), is one of Karp's famous NP-complete problems. In this paper we study graphs of bounded degree that are \emph{far} from being Hamiltonian, where a graph $G$ on $n$ vertices is \emph{far} from being Hamiltonian if modifying a constant fraction of $n$ edges is necessary to make $G$ Hamiltonian. We give an explicit deterministic construction of a class of graphs of bounded degree that are locally Hamiltonian, but (globally) far from being Hamiltonian. Here, \emph{locally Hamiltonian} means that every subgraph induced by the neighbourhood of a small vertex set appears in some Hamiltonian graph. More precisely, we obtain graphs which differ in $\Theta(n)$ edges from any Hamiltonian graph, but non-Hamiltonicity cannot be detected in the neighbourhood of $o(n)$ vertices. Our class of graphs yields a class of hard instances for one-sided error property testers with linear query complexity. It is known that any property tester (even with two-sided error) requires a linear number of queries to test Hamiltonicity (Yoshida, Ito, 2010). This is proved via a randomised construction of hard instances. In contrast, our construction is deterministic. So far only very few deterministic constructions of hard instances for property testing are known. We believe that our construction may lead to future insights in graph theory and towards a characterisation of the properties that are testable in the bounded-degree model.