In this paper, we use composite optimization algorithms to train sigmoid networks. We equivalently reformulate the training of sigmoid networks as a convex composite optimization problem and propose composite optimization algorithms based on the linearized proximal algorithm and the alternating direction method of multipliers. Under the assumptions of weak sharp minima and a regularity condition, the algorithms are guaranteed to converge to a globally optimal solution of the objective function even for non-convex and non-smooth problems. Furthermore, the convergence results can be directly related to the amount of training data and provide a general guide for setting the size of sigmoid networks. Numerical experiments on Franke's function fitting and handwritten digit recognition show that the proposed algorithms perform satisfactorily and robustly.
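To give a concrete feel for the flavor of such methods, the following is a minimal sketch (our illustrative assumption, not the paper's exact algorithm) of one linearized proximal step for a composite objective $\min_x \|F(x)\|_1$: the inner map is linearized at the current iterate, and the resulting convex subproblem is solved with a few ADMM iterations.

\begin{verbatim}
import numpy as np

def soft(v, lam):
    """Soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_linear_step(F, J, x, t=1.0, rho=1.0, admm_iters=100):
    """One linearized proximal step for min_x ||F(x)||_1:
    x+ = argmin_u ||F(x) + J(x) (u - x)||_1 + ||u - x||^2 / (2 t),
    i.e. min_u ||A u - b||_1 + ||u - x||^2 / (2 t) with A = J(x) and
    b = A x - F(x); the convex subproblem is solved by ADMM with the
    splitting z = A u - b."""
    A = J(x)
    b = A @ x - F(x)
    u, z, w = x.copy(), np.zeros(b.size), np.zeros(b.size)
    M = np.eye(x.size) / t + rho * A.T @ A      # u-update system matrix
    for _ in range(admm_iters):
        u = np.linalg.solve(M, x / t + rho * A.T @ (b + z - w))
        z = soft(A @ u - b + w, 1.0 / rho)      # prox of ||.||_1 / rho
        w = w + A @ u - b - z                   # multiplier (dual) update
    return u
\end{verbatim}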
Minimizing the weight of an edge set subject to parity constraints is a challenging branch of combinatorial optimization, as witnessed by the binary hypergraph chapter of Alexander Schrijver's book ``Combinatorial Optimization'' (Chapter 80). This area contains relevant graph-theoretic problems, including open cases of the Max Cut problem and some multiflow problems. We clarify the interconnections among some of these problems and establish three levels of difficulty. On the one hand, we prove that the Shortest Odd Path problem in an undirected graph without cycles of negative total weight, along with several related problems, is NP-hard, settling a long-standing open question asked by Lov\'asz (Open Problem 27 in Schrijver's book ``Combinatorial Optimization''). On the other hand, we provide a polynomial-time algorithm for the closely related and well-studied Minimum-weight Odd $\{s,t\}$-Join problem with non-negative weights, whose complexity was previously unknown; more generally, we solve the Minimum-weight Odd $T$-Join problem in FPT time when parameterized by $|T|$. If negative weights are also allowed, then finding a minimum-weight odd $\{s,t\}$-join is equivalent to the Minimum-weight Odd $T$-Join problem for arbitrary weights, which is only conjectured to be polynomial-time solvable. The analogous problems for digraphs are also considered.
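For reference, the underlying notion is the classical $T$-join; under one natural reading, consistent with the Shortest Odd Path problem (where paths must use an odd number of edges), the qualifier ``odd'' refers to the parity of the cardinality of the join:
\[
J \subseteq E \text{ is a } T\text{-join} \iff \{v \in V : \deg_J(v) \text{ is odd}\} = T,
\qquad
J \text{ is an odd } T\text{-join} \iff J \text{ is a } T\text{-join with } |J| \text{ odd}.
\]
In particular, an odd $\{s,t\}$-join generalizes an odd $s$--$t$ path.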
In this paper we investigate formal verification problems for neural network computations. We focus on various reachability problems, such as: given symbolic specifications of allowed inputs and outputs in the form of linear programming instances, does a valid input exist such that the given network computes a valid output? Does this property hold for all valid inputs? The complexity of the former question has recently been investigated by S\"alzer and Lange for nets using the Rectified Linear Unit and the identity function as their activation functions. We complement their achievements by showing that the problem is NP-complete for piecewise linear functions with rational coefficients that are not linear, NP-hard for almost all suitable activation functions including non-linear ones that are continuous on an interval, complete for the Existential Theory of the Reals $\exists \mathbb R$ for every non-linear polynomial, and $\exists \mathbb R$-hard for the exponential function and various sigmoidal functions. For the completeness results, linking the verification tasks with the theory of Constraint Satisfaction Problems turns out to be helpful.
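To make the existential reachability question concrete, here is a minimal sketch (our illustrative assumption, not the paper's construction) for a one-hidden-layer ReLU network: fixing any of the $2^h$ activation patterns makes the network affine, and one linear program per pattern decides whether some input in a box yields an output above a threshold.

\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def reachable(W1, b1, w2, b2, lb, ub, c):
    """Does some x with lb <= x <= ub give w2 @ relu(W1 @ x + b1) + b2 >= c?"""
    h, n = W1.shape
    for pattern in itertools.product([0, 1], repeat=h):
        s = np.array(pattern)
        A_ub, b_ub = [], []
        for i in range(h):
            if s[i]:                      # unit i active: W1[i] @ x + b1[i] >= 0
                A_ub.append(-W1[i]); b_ub.append(b1[i])
            else:                         # unit i inactive: W1[i] @ x + b1[i] <= 0
                A_ub.append(W1[i]); b_ub.append(-b1[i])
        a = (w2 * s) @ W1                 # with the pattern fixed, the net is affine
        A_ub.append(-a)                   # output spec: a @ x >= c - b2 - (w2*s) @ b1
        b_ub.append((w2 * s) @ b1 + b2 - c)
        res = linprog(np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=list(zip(lb, ub)))
        if res.success:                   # LP feasible: a valid input exists
            return True
    return False
\end{verbatim}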
We consider minimizing functions for which it is expensive to compute the (possibly stochastic) gradient. Such functions are prevalent in reinforcement learning, imitation learning, and adversarial training. Our target optimization framework uses the (expensive) gradient computation to construct surrogate functions in a \emph{target space} (e.g., the logits output by a linear model for classification) that can be minimized efficiently. This allows for multiple parameter updates to the model, amortizing the cost of gradient computation. In the full-batch setting, we prove that our surrogate is a global upper bound on the loss and can be (locally) minimized using a black-box optimization algorithm. We prove that the resulting majorization-minimization algorithm ensures convergence to a stationary point of the loss. Next, we instantiate our framework in the stochastic setting and propose the $SSO$ algorithm, which can be viewed as projected stochastic gradient descent in the target space. This connection enables us to prove theoretical guarantees for $SSO$ when minimizing convex functions. Our framework allows the use of standard stochastic optimization algorithms to construct surrogates that can be minimized by any deterministic optimization method. To evaluate our framework, we consider a suite of supervised learning and imitation learning problems. Our experiments indicate the benefits of target optimization and the effectiveness of $SSO$.
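As a minimal illustration of the target-space idea (names, step sizes, and the quadratic surrogate are our illustrative assumptions rather than the paper's exact construction), a single expensive gradient in logit space can be amortized over many cheap parameter updates on a surrogate built around the current logits.

\begin{verbatim}
import numpy as np

def expensive_loss_grad(Z, Y):
    """Stand-in for an expensive gradient in target (logit) space;
    here, softmax cross-entropy with one-hot labels Y."""
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    return (P - Y) / Z.shape[0]

def surrogate_update(X, Y, W, eta=1.0, lr=0.1, inner_steps=25):
    """One expensive gradient call, amortized over many cheap steps on
    the quadratic target-space surrogate
    <G, Z - Z0> + ||Z - Z0||^2 / (2 eta)  with  Z = X @ W."""
    Z0 = X @ W
    G = expensive_loss_grad(Z0, Y)           # the single expensive call
    for _ in range(inner_steps):             # many cheap parameter updates
        W = W - lr * X.T @ (G + (X @ W - Z0) / eta)
    return W
\end{verbatim}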
We consider the consensus problem in a decentralized network, focusing on a compact submanifold that acts as a nonconvex constraint set. By leveraging the proximal smoothness of the compact submanifold, which encompasses the local singleton property and the local Lipschitz continuity of the projection operator onto the manifold, and by establishing the connection between the projection operator and general retractions, we show that Riemannian gradient descent with a unit step size converges locally linearly if the network has a satisfactory level of connectivity. Moreover, based on the geometry of the compact submanifold, we prove that a convexity-like regularity condition, referred to as the restricted secant inequality, always holds in an explicitly characterized neighborhood around the solution set of the nonconvex consensus problem. By leveraging this restricted secant inequality and imposing a weaker connectivity requirement on the decentralized network, we present a comprehensive analysis of the linear convergence of Riemannian gradient descent, taking into consideration appropriate initialization and step size. Furthermore, if the network is well connected, we demonstrate that the local Lipschitz continuity endowed by proximal smoothness is a sufficient condition for the restricted secant inequality, thus contributing to the local error bound. We believe that our results will find further application in consensus problems over more general proximally smooth sets. Numerical experiments are conducted to validate our theoretical findings.
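For intuition, the following is a minimal sketch of one decentralized Riemannian gradient step on the unit sphere, a simple compact submanifold where the projection $x/\|x\|$ serves as the retraction; the mixing matrix $W$ and all names are illustrative assumptions, not the paper's exact setup.

\begin{verbatim}
import numpy as np

def consensus_step(X, W, alpha=1.0):
    """One decentralized Riemannian gradient step on the unit sphere.
    X: (n_agents, d) local iterates (unit norm); W: doubly-stochastic
    mixing matrix of the network.  Each agent forms the Euclidean
    consensus gradient, projects it onto its tangent space, and
    retracts by normalizing; alpha = 1 is the unit step size."""
    X_new = np.empty_like(X)
    for i in range(X.shape[0]):
        euc = X[i] - W[i] @ X                 # Euclidean consensus gradient
        rgrad = euc - (euc @ X[i]) * X[i]     # tangent-space projection
        v = X[i] - alpha * rgrad
        X_new[i] = v / np.linalg.norm(v)      # retraction = projection
    return X_new
\end{verbatim}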
A parametric class of trust-region algorithms for unconstrained nonconvex optimization is considered in which the value of the objective function is never computed. The class contains a deterministic version of the first-order Adagrad method typically used for the minimization of noisy functions, but also allows the use of (possibly approximate) second-order information when available. The rate of convergence of methods in the class is analyzed and shown to be identical to that known for first-order optimization methods using both function and gradient values, recovering existing results for purely first-order variants and improving the explicit dependence on problem dimension. This rate is shown to be essentially sharp. A new class of methods is also presented, for which a slightly worse yet essentially sharp complexity result holds. Limited numerical experiments show that the new methods' performance may be comparable to that of standard steepest descent, despite using significantly less information, and that this performance is relatively insensitive to noise.
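To illustrate the objective-function-free principle, here is a minimal sketch in the spirit of deterministic Adagrad (parameter names are illustrative assumptions): every step is accepted, and its size is governed by accumulated gradient information rather than by comparing objective values.

\begin{verbatim}
import numpy as np

def offo_adagrad(grad, x0, iters=500, eta=1.0, eps=1e-8):
    """Objective-function-free first-order iteration: f(x) is never
    evaluated; only gradients drive the (always accepted) steps."""
    x = x0.astype(float)
    acc = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x)
        acc += g * g                        # accumulate squared gradients
        x -= eta * g / np.sqrt(acc + eps)   # Adagrad-scaled step
    return x
\end{verbatim}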
Recently, graph neural networks have been gaining a lot of attention for simulating dynamical systems, thanks to their inductive nature, which enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing body of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems, comparing their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and a decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
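As a small sketch of the kind of rollout-error metric used in such comparisons (the exact normalization is our illustrative assumption, not necessarily the one used in the study): the learned simulator is rolled forward from an initial state and its deviation from the ground-truth trajectory is accumulated.

\begin{verbatim}
import numpy as np

def rollout_error(step_fn, x0, truth):
    """step_fn: learned one-step simulator; x0: initial state;
    truth: (T, ...) ground-truth trajectory after the initial state."""
    x, errs = x0, []
    for t in range(len(truth)):
        x = step_fn(x)                       # autoregressive rollout
        errs.append(np.linalg.norm(x - truth[t]) / np.linalg.norm(truth[t]))
    return float(np.mean(errs))
\end{verbatim}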
Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining the two concepts, leading to architectures that are more robust, better performing, more interpretable, more computationally efficient, and more data efficient. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, so that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
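To make the relaxation idea tangible, here is a toy sketch (a simplified illustration under our own assumptions, not the thesis's full construction): under logistic perturbation of a comparison $a > b$, the expected outcome has the closed form $\sigma((b-a)/\tau)$, which turns the conditional swap of a sorting network into a differentiable operation.

\begin{verbatim}
import numpy as np

def soft_swap(a, b, tau=1.0):
    """Differentiable conditional swap: under logistic noise, the
    expected outcome of the comparison is sigmoid((b - a) / tau)."""
    p = 1.0 / (1.0 + np.exp(-(b - a) / tau))   # P(b > a), in closed form
    lo = p * a + (1 - p) * b                   # soft minimum
    hi = p * b + (1 - p) * a                   # soft maximum
    return lo, hi

def soft_sort(x, tau=1.0):
    """Odd-even transposition sorting network with relaxed comparators."""
    x = list(x)
    n = len(x)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            x[i], x[i + 1] = soft_swap(x[i], x[i + 1], tau)
    return x
\end{verbatim}

As $\tau \to 0$, \texttt{soft\_sort} recovers the exact sorting network while remaining differentiable for any $\tau > 0$.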
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
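One of the practical remedies surveyed for gradient explosion/vanishing is careful initialization; a standard example (our choice of illustration, not necessarily the survey's running example) is He initialization for ReLU layers, which keeps activation variance stable across depth.

\begin{verbatim}
import numpy as np

def he_init(fan_in, fan_out, rng=None):
    """He initialization: variance 2 / fan_in preserves the scale of
    ReLU activations from layer to layer."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
\end{verbatim}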
Meta-learning extracts common knowledge acquired from learning different tasks and uses it for unseen tasks. It demonstrates a clear advantage on tasks with insufficient training data, e.g., few-shot learning. In most meta-learning methods, tasks are implicitly related via a shared model or optimizer. In this paper, we show that a meta-learner that explicitly relates tasks on a graph describing the relations of their output dimensions (e.g., classes) can significantly improve the performance of few-shot learning. This type of graph is usually free or cheap to obtain but has rarely been explored in previous works. We study prototype-based few-shot classification, in which a prototype is generated for each class such that nearest-neighbor search among the prototypes produces an accurate classification. We introduce the "Gated Propagation Network (GPN)", which learns to propagate messages between prototypes of different classes on the graph, so that learning the prototype of each class benefits from the data of other related classes. In GPN, an attention mechanism is used to aggregate messages from neighboring classes, and a gate is deployed to choose between the aggregated messages and the message from the class itself. GPN is trained on a sequence of tasks, from many-shot to few-shot, generated by subgraph sampling. During training, it is able to reuse and update previously computed prototypes from memory in a life-long learning cycle. In experiments, we vary the training-test discrepancy and the test-task generation settings for a thorough evaluation. GPN outperforms recent meta-learning methods on two benchmark datasets in all studied cases.
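The following is a minimal sketch of one gated propagation step between class prototypes on a class-relation graph, in the spirit of GPN; the dot-product softmax attention, the scalar gate, and all shapes are our illustrative assumptions, not the paper's exact parameterization.

\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_propagation(P, adj, g=0.5):
    """P: (C, d) class prototypes; adj: {class: [neighbor classes]};
    g: gate weight between the class's own message and the aggregate."""
    P_new = P.copy()
    for c, nbrs in adj.items():
        if not nbrs:
            continue
        att = softmax(np.array([P[c] @ P[j] for j in nbrs]))  # attention
        msg = att @ P[nbrs]                 # aggregate neighbor messages
        P_new[c] = g * P[c] + (1 - g) * msg # gate: self vs. neighbors
    return P_new
\end{verbatim}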
We propose a Bayesian convolutional neural network built upon Bayes by Backprop and elaborate how this known method can serve as the fundamental construct of our novel, reliable variational inference method for convolutional neural networks. First, we show how Bayes by Backprop can be applied to convolutional layers, where the weights in filters have probability distributions instead of point estimates; and second, how, across various network architectures, our proposed framework achieves performance comparable to that of convolutional neural networks with point-estimate weights. In the past, Bayes by Backprop has been successfully utilised in feedforward and recurrent neural networks, but not in convolutional ones. This work thus extends the family of Bayesian neural networks to encompass all three aforementioned types of network architecture.
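For concreteness, here is a minimal sketch of the Bayes-by-Backprop building block applied to a convolutional filter (names and the standard-normal prior are illustrative assumptions): each weight carries a Gaussian posterior $q(w) = \mathcal N(\mu, \sigma^2)$, sampled via the reparameterization trick, with $\sigma = \mathrm{softplus}(\rho)$ and the KL term entering the variational objective.

\begin{verbatim}
import numpy as np

def sample_filter(mu, rho, rng=None):
    """Reparameterized sample of a conv filter: w = mu + softplus(rho) * eps."""
    rng = rng or np.random.default_rng(0)
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma > 0
    eps = rng.standard_normal(mu.shape)    # reparameterization noise
    return mu + sigma * eps

def kl_to_std_normal(mu, rho):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over filter weights."""
    sigma = np.log1p(np.exp(rho))
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))
\end{verbatim}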