In computational geometry, the construction of essential primitives such as convex hulls, Voronoi diagrams, and Delaunay triangulations requires evaluating the signs of determinants, which are sums of products. The same signs are needed for the exact solution of linear programming problems and systems of linear inequalities. Computing these signs exactly with inexact floating point arithmetic is challenging, and we present yet another algorithm for this task. Our algorithm is efficient and uses only floating point arithmetic, which is much faster than exact arithmetic. We prove that the algorithm is correct and provide efficient and tested \texttt{C++} code for it.
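A minimal sketch of the general floating-point filtering idea (not the paper's algorithm, whose details and \texttt{C++} code are in the paper): the sign of a $2\times 2$ orientation determinant is certified whenever the computed value exceeds a conservative rounding-error bound; the error constant below is an illustrative, deliberately conservative choice.

```python
import sys

EPS = sys.float_info.epsilon  # 2^-52 for IEEE double precision

def orientation_sign(ax, ay, bx, by, cx, cy):
    """Certified sign of the 2x2 orientation determinant
    (bx-ax)*(cy-ay) - (by-ay)*(cx-ax), a sum of products.
    Returns +1 or -1 when floating point can certify the sign,
    and None when a more precise fallback is required."""
    detleft = (bx - ax) * (cy - ay)
    detright = (by - ay) * (cx - ax)
    det = detleft - detright
    # If |det| exceeds the accumulated rounding error of the six
    # floating point operations above, its sign is provably exact.
    errbound = 4 * EPS * (abs(detleft) + abs(detright))
    if det > errbound:
        return 1
    if det < -errbound:
        return -1
    return None  # sign not certified: fall back to a slower exact method
```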
We present a new local-search algorithm for the $k$-median clustering problem. We show that local optima for this algorithm give a $(2.836+\epsilon)$-approximation; our result improves upon the $(3+\epsilon)$-approximate local-search algorithm of Arya et al. [STOC 01]. Moreover, a computer-aided analysis of a natural extension suggests that this approach may lead to an improvement over the best-known approximation guarantee for the problem. The new ingredient in our algorithm is a potential function based on both the closest and second-closest facilities to each client. Specifically, the potential is the sum, over all clients, of the distance from the client to its closest facility, plus (a small constant times) the truncated distance to its second-closest facility. We move from one solution to another only if the latter can be obtained by swapping a constant number of facilities and has a smaller potential than the former. This refined potential allows us to avoid the bad local optima exhibited by Arya et al. for the local-search algorithm based only on the cost of the solution.
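A minimal sketch of the described potential, assuming Euclidean distances and treating the small constant and the truncation threshold as illustrative placeholders rather than the paper's values; a swap move would then be accepted only if it lowers this potential rather than the raw cost.

```python
import math

def potential(clients, facilities, c=0.1, trunc=float("inf")):
    """Sum, over all clients, of the distance to the closest open
    facility plus c times the (truncated) distance to the
    second-closest one. c and trunc are illustrative placeholders."""
    total = 0.0
    for p in clients:
        d1, d2 = sorted(math.dist(p, f) for f in facilities)[:2]
        total += d1 + c * min(d2, trunc)
    return total
```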
We propose and analyze a pressure-stabilized projection Lagrange--Galerkin scheme for the transient Oseen problem. The proposed scheme inherits the following advantages from the projection Lagrange--Galerkin scheme. The first advantage is computational efficiency: the scheme decouples the computation of each component of the velocity and the pressure. The second advantage is essentially unconditional stability. Here we also use equal-order approximation for the velocity and pressure, and add a symmetric pressure stabilization term. This enriched pressure space enables us to obtain accurate solutions for small viscosity. First, we show an error estimate for the velocity for small viscosity. Then we show convergence results for the pressure. Numerical examples of a test problem show the higher accuracy of the proposed scheme for small viscosity.
We introduce a new type of programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis, and release an open-source dataset of Python Programming Puzzles (P3). Each puzzle is defined by a short Python program $f$, and the goal is to find an input that makes $f$ return True. The puzzles are objective in that each one is specified entirely by the source code of its verifier $f$, so evaluating $f$ is all that is needed to test a candidate solution. They do not require an answer key or input/output examples, nor do they depend on natural language understanding. The dataset is comprehensive in that it spans problems of a range of difficulties and domains, from trivial string manipulation problems, to classic programming puzzles (e.g., Tower of Hanoi), to interview/competitive-programming problems (e.g., dynamic programming), to longstanding open problems in algorithms and mathematics (e.g., factoring). We develop baseline solvers based on enumerative program synthesis, GPT-3, and Codex that are capable of solving puzzles -- even without access to any reference solutions -- by learning from their own past solutions. Codex performs best, solving up to 18% of the 397 test problems with a single try and 80% of the problems with 1,000 tries per problem. In a small user study, we find a positive correlation between puzzle-solving performance and coding experience, and between puzzle difficulty for humans and for AI solvers. Therefore, further improvements on P3 could have a significant impact on many program synthesis areas.
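For concreteness, a toy puzzle in the described format (illustrative, not necessarily drawn from the dataset): the verifier $f$ is a short Python program, and checking a candidate solution only requires evaluating $f$.

```python
def f(x: str) -> bool:
    # Puzzle: find a string whose reversal is "olleh".
    return x[::-1] == "olleh"

assert f("hello")  # "hello" solves the puzzle; no answer key is needed
```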
We propose a deterministic Kaczmarz algorithm for solving linear systems $A\mathbf{x}=\mathbf{b}$. Unlike previous Kaczmarz algorithms, ours uses reflections in each step of the iteration. This generates a sequence of points distributed in patterns on a sphere centered at a solution. First, we prove that taking the average of $O(\eta/\epsilon)$ such points yields an effective approximation of the solution up to relative error $\epsilon$, where $\eta$ is a parameter depending on $A$ that can be bounded above by the square of the condition number. We also show how to select these points efficiently. In numerical tests, our Kaczmarz algorithm usually converges more quickly than the (block) randomized Kaczmarz algorithms. Second, when the linear system is consistent, the Kaczmarz algorithm returns the solution that has the minimal distance to the initial vector; this gives a method to solve the least-norm problem. Finally, we prove that our Kaczmarz algorithm in fact solves the linear system $A^T W^{-1} A \mathbf{x} = A^T W^{-1} \mathbf{b}$, where $W$ is the lower-triangular matrix such that $W+W^T=2AA^T$. The relationship between this linear system and the original one is studied.
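A minimal sketch of the reflection iteration, assuming a cyclic row order (the paper's deterministic selection rule may differ) and returning the average of the generated points:

```python
import numpy as np

def reflection_kaczmarz(A, b, n_iters, x0=None):
    """Kaczmarz-type iteration that, instead of projecting onto the
    hyperplane a_i . x = b_i, reflects the current point across it,
    and returns the running average of the iterates. The cyclic row
    order here is an illustrative choice."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    avg = np.zeros(n)
    for t in range(n_iters):
        a, beta = A[t % m], b[t % m]
        x = x + 2.0 * (beta - a @ x) / (a @ a) * a  # reflection step
        avg += x
    return avg / n_iters
```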
We consider the query complexity of finding a local minimum of a function defined on a graph, where at most $k$ rounds of interaction with the oracle are allowed. Rounds model parallel settings, where each query takes resources to complete and is executed on a separate processor. Thus the query complexity in $k$ rounds indicates how many processors are needed to achieve a parallel time of $k$. We focus on the $d$-dimensional grid $[n]^d$, where the dimension $d$ is a constant, and consider two regimes for the number of rounds: constant and polynomial in $n$. We give algorithms and lower bounds that characterize the trade-off between the number of rounds of adaptivity and the query complexity of local search. When the number of rounds $k$ is constant, we show that the query complexity of local search in $k$ rounds is $\Theta\bigl(n^{\frac{d^{k+1} - d^k}{d^k - 1}}\bigr)$, for both deterministic and randomized algorithms. When the number of rounds is polynomial, i.e., $k = n^{\alpha}$ for $0 < \alpha < d/2$, the randomized query complexity is $\Theta\left(n^{d-1 - \frac{d-2}{d}\alpha}\right)$ for all $d \geq 5$. For $d=3$ and $d=4$, we show that the same upper bound expression holds and give almost matching lower bounds. The local search analysis also enables us to characterize the query complexity of computing a Brouwer fixed point in rounds. Our proof technique for lower bounding the query complexity in rounds may be of independent interest as an alternative to the classical relational adversary method of Aaronson from the fully adaptive setting.
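To make the query model concrete, the sketch below is the fully adaptive greedy baseline on $[n]^d$ with query counting; it only illustrates what a query is and what counts as a local minimum, not the $k$-round algorithms of the paper.

```python
def greedy_local_search(f, n, d):
    """Fully adaptive greedy local search on the grid [n]^d: move to the
    smallest strictly-improving neighbor until none exists. Returns a
    local minimum and the number of oracle queries used."""
    queries = 0

    def query(v):
        nonlocal queries
        queries += 1
        return f(v)

    v = (0,) * d
    fv = query(v)
    while True:
        best_w, best_fw = None, fv
        for i in range(d):
            for step in (-1, 1):
                if 0 <= v[i] + step < n:
                    w = v[:i] + (v[i] + step,) + v[i + 1:]
                    fw = query(w)
                    if fw < best_fw:
                        best_w, best_fw = w, fw
        if best_w is None:
            return v, queries  # no smaller neighbor: local minimum found
        v, fv = best_w, best_fw
```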
This paper studies quasi-Newton methods for solving strongly-convex-strongly-concave saddle point problems (SPP). We propose a variant of the general greedy Broyden family update for SPP, which has an explicit local superlinear convergence rate of ${\mathcal O}\big(\big(1-\frac{1}{n\kappa^2}\big)^{k(k-1)/2}\big)$, where $n$ is the dimension of the problem, $\kappa$ is the condition number, and $k$ is the number of iterations. The design and analysis of the proposed algorithm are based on estimating the square of the indefinite Hessian matrix, which differs from classical quasi-Newton methods in convex optimization. We also present two specific Broyden family algorithms with BFGS-type and SR1-type updates, which enjoy the faster local convergence rate of $\mathcal O\big(\big(1-\frac{1}{n}\big)^{k(k-1)/2}\big)$.
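For reference, the classical SR1 update that the SR1-type variant builds on (background only; per the abstract, the paper's method instead estimates the square of the indefinite Hessian):

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Classical SR1 quasi-Newton update: B+ = B + uu^T / (u^T s)
    with u = y - B s, where s is the step and y the gradient
    difference. The skip rule is the standard numerical safeguard."""
    u = y - B @ s
    denom = u @ s
    if abs(denom) < r * np.linalg.norm(u) * np.linalg.norm(s):
        return B  # skip the update when the denominator is too small
    return B + np.outer(u, u) / denom
```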
This paper focuses on developing an energy-efficient online data processing strategy for wireless powered mobile edge computing (MEC) systems under stochastic fading channels. In particular, we consider a hybrid access point (HAP) transmitting RF energy to, and processing the sensing data offloaded from, multiple wireless devices (WDs). Under an average power constraint at the HAP, we aim to maximize the long-term average data sensing rate of the WDs while maintaining task data queue stability. We formulate the problem as a multi-stage stochastic optimization that controls the energy transfer and task data processing in sequential time slots. Without knowledge of future channel fading, it is very challenging to determine the sequential control actions, which are tightly coupled by the battery and data buffer dynamics. To solve the problem, we propose an online algorithm named LEESE that applies the perturbed Lyapunov optimization technique to decompose the multi-stage stochastic problem into per-slot deterministic optimization problems. We show that each per-slot problem can be equivalently transformed into a convex optimization problem. To facilitate online implementation in large-scale MEC systems, instead of solving the per-slot problem with off-the-shelf convex algorithms, we propose a block coordinate descent (BCD)-based method that produces a close-to-optimal solution in less than 0.04\% of their computation delay. Simulation results demonstrate that the proposed LEESE algorithm can provide a 21.9\% higher data sensing rate than the representative benchmark methods considered, while incurring sub-millisecond computation delay suitable for real-time control under fading channels.
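The decomposition relies on the standard drift-plus-penalty rule of Lyapunov optimization; a generic single-queue sketch (LEESE applies a perturbed, multi-queue variant with battery and data-buffer dynamics) is:

```python
def drift_plus_penalty(actions, Q, V, penalty, arrival, service):
    """Textbook per-slot drift-plus-penalty rule: choose the action
    minimizing V*penalty(a) + Q*(arrival(a) - service(a)), trading the
    objective (weighted by V) against queue stability (queue length Q)."""
    return min(actions, key=lambda a: V * penalty(a) + Q * (arrival(a) - service(a)))
```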
This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and of specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation is still obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
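The anytime property follows from weak duality: writing verification as $\max_x v(x)$ subject to $c(x) \le 0$, every multiplier $\lambda \ge 0$ gives a valid upper bound
$$\max_{x}\,\bigl[v(x) - \lambda^\top c(x)\bigr] \;\ge\; \max_{x:\, c(x) \le 0} v(x),$$
so the dual optimization over $\lambda$ can be stopped at any point and still report a sound bound.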
We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On the capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality, with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on solution quality. Our proposed framework can be applied to other variants of the VRP, such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
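A minimal sketch of the policy-gradient (REINFORCE) step such a framework relies on, assuming PyTorch; tensor names and the baseline are illustrative, not the paper's exact training code:

```python
import torch

def reinforce_loss(log_probs, rewards, baseline):
    """Surrogate loss whose gradient is the policy gradient estimate.
    log_probs: summed log-probabilities of each sampled action sequence
    (one route per instance); rewards: e.g. negative route lengths;
    baseline: a variance-reducing estimate of the expected reward."""
    advantage = (rewards - baseline).detach()  # no gradient through the advantage
    return -(advantage * log_probs).mean()
```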
We introduce a new neural architecture to learn the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence models and Neural Turing Machines, because the number of target classes at each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences, and various combinatorial optimization problems, belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention attempts in that, instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage broader exploration of neural learning for discrete problems.
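A minimal sketch of the pointer attention in PyTorch, following the $u_j = v^\top \tanh(W_1 e_j + W_2 d)$ form of the Ptr-Net formulation; tensor shapes and parameter names are illustrative:

```python
import torch
import torch.nn.functional as F

def pointer_attention(enc, dec, W1, W2, v):
    """Pointer-network attention: the softmax over the attention
    scores is used directly as a distribution over input positions,
    rather than to blend encoder states into a context vector.
    enc: (seq_len, hidden) encoder states e_j; dec: (hidden,) decoder
    state d; W1, W2: (att, hidden) matrices; v: (att,) vector."""
    scores = torch.tanh(enc @ W1.T + dec @ W2.T) @ v  # (seq_len,)
    return F.softmax(scores, dim=0)  # probability of pointing at each input
```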