We describe a three-precision variant of Newton's method for nonlinear equations. We evaluate the nonlinear residual in double precision, store the Jacobian matrix in single precision, and solve the linear equation for the Newton step by iterative refinement, using a factorization computed in half precision. We analyze the method as an inexact Newton method. This analysis shows that, except for very poorly conditioned Jacobians, the number of nonlinear iterations needed is the same as one would get if one stored and factored the Jacobian in double precision. In many ill-conditioned cases one can use the low-precision factorization as a preconditioner for a GMRES iteration. That approach can recover fast convergence of the nonlinear iteration. We present an example to illustrate the results.
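A minimal sketch of the three-precision structure described above, in NumPy/SciPy. The residual F and Jacobian J are user-supplied placeholders, and the half-precision factorization is emulated by rounding the single-precision Jacobian to float16 before factoring, since LAPACK has no native half-precision LU.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_three_precision(F, J, x0, tol=1e-10, max_newton=50, max_ir=10):
    """Residual in float64, Jacobian stored in float32, Newton step solved
    by iterative refinement around an (emulated) float16 factorization."""
    x = np.asarray(x0, dtype=np.float64)
    for _ in range(max_newton):
        r = F(x)                                   # residual in double
        if np.linalg.norm(r) < tol:
            break
        A32 = J(x).astype(np.float32)              # Jacobian stored in single
        # Emulate a half-precision factorization: round to float16 first.
        lu, piv = lu_factor(A32.astype(np.float16).astype(np.float32))
        s = np.zeros_like(x)                       # Newton step, refined iteratively
        for _ in range(max_ir):
            res = (r + A32 @ s).astype(np.float64)     # refinement residual in double
            d = lu_solve((lu, piv), -res.astype(np.float32))
            s += d.astype(np.float64)
            if np.linalg.norm(d) < tol * max(np.linalg.norm(s), 1.0):
                break
        x = x + s
    return x
```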
We provide a variety of lower bounds for the well-known shortcut set problem: how much can one decrease the diameter of a directed graph on $n$ vertices and $m$ edges by adding $O(n)$ or $O(m)$ shortcuts from the transitive closure of the graph? Our results are based on a vast simplification of the recent construction of Bodwin and Hoppenworth [FOCS 2023], which was used to show an $\widetilde{\Omega}(n^{1/4})$ lower bound for the $O(n)$-sized shortcut set problem. We highlight that our simplification completely removes the use of the convex sets of B\'ar\'any and Larman [Math. Ann. 1998] that appear in all previous lower bound constructions. Our simplification also removes the need for randomness and further shaves some log factors. This allows us to generalize the construction to higher dimensions, which in turn can be used to show the following results. For $O(m)$-sized shortcut sets, we show an $\Omega(n^{1/5})$ lower bound, improving on the previous best $\Omega(n^{1/8})$ lower bound. For all $\varepsilon > 0$, we show that there exists a $\delta > 0$ such that there are $n$-vertex $O(n)$-edge graphs $G$ where adding any shortcut set of size $O(n^{2-\varepsilon})$ keeps the diameter of $G$ at $\Omega(n^\delta)$. This improves the sparsity of the constructed graph compared to a similar known result by Hesse [SODA 2003]. We also consider the sourcewise setting for shortcut sets: given a graph $G=(V,E)$ and a set $S\subseteq V$, how much can we decrease the sourcewise diameter of $G$, $\max_{(s, v) \in S \times V, \text{dist}(s, v) < \infty} \text{dist}(s,v)$, by adding a set of edges $H$ from the transitive closure of $G$? We show that for any integer $d \ge 2$, there exists a graph $G=(V, E)$ on $n$ vertices and a set $S \subseteq V$ with $|S| = \widetilde{\Theta}(n^{3/(d+3)})$, such that after adding $O(n)$ or $O(m)$ shortcuts, the sourcewise diameter remains $\widetilde{\Omega}(|S|^{1/3})$.
We develop the mathematical theory of a novel variational approximation scheme for general second- and fourth-order partial differential equations \begin{equation}\label{eq: A} \partial_t u - \nabla\cdot\Big(u\nabla\frac{\delta\phi}{\delta u}(u)\Big|\nabla\frac{\delta\phi}{\delta u}(u)\Big|^{q-2}\Big) = 0, \qquad u\geq0, \end{equation} with $q\in(1, +\infty)$.
The ability to measure the satisfaction of (groups of) voters is a crucial prerequisite for formulating proportionality axioms in approval-based participatory budgeting elections. Two common - but very different - ways to measure the satisfaction of a voter consider (i) the number of approved projects and (ii) the total cost of approved projects, respectively. In general, it is difficult to decide which measure of satisfaction best reflects the voters' true utilities. In this paper, we study proportionality axioms with respect to large classes of approval-based satisfaction functions. We establish logical implications among our axioms and related notions from the literature, and we ask whether outcomes can be achieved that are proportional with respect to more than one satisfaction function. We show that this is impossible for the two commonly used satisfaction functions when considering proportionality notions based on extended justified representation, but achievable for a notion based on proportional justified representation. For the latter result, we introduce a strengthening of priceability and show that it is satisfied by several polynomial-time computable rules, including the Method of Equal Shares and Phragm\`en's sequential rule.
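A toy illustration of how the two satisfaction measures discussed above can disagree; the projects, costs, and approval sets are made up. The two candidate outcomes give the voter the same number of approved funded projects but very different approved cost.

```python
cost = {"library": 4, "park": 1, "bikelane": 2}    # project costs
approved = {"library", "park"}                     # one voter's approval set

for funded in [{"library"}, {"park", "bikelane"}]:
    won = approved & funded
    card_sat = len(won)                            # (i) number of approved funded projects
    cost_sat = sum(cost[p] for p in won)           # (ii) total cost of approved funded projects
    print(sorted(funded), card_sat, cost_sat)      # same count (1), but costs 4 vs. 1
```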
Online algorithms with predictions have become a trending topic in the field of beyond worst-case analysis of algorithms. These algorithms incorporate predictions about the future to obtain performance guarantees that are of high quality when the predictions are good, while still maintaining bounded worst-case guarantees when predictions are arbitrarily poor. In general, the algorithm is assumed to be unaware of the prediction's quality. However, recent developments in the machine learning literature have studied techniques for providing uncertainty quantification on machine-learned predictions, describing how confident a model is in its predictions. This paper examines the question of how to optimally utilize uncertainty-quantified predictions in the design of online algorithms. In particular, we consider predictions augmented with uncertainty quantification describing the likelihood of the ground truth falling in a certain range, designing online algorithms with these probabilistic predictions for two classic online problems: ski rental and online search. In each case, we demonstrate that non-trivial modifications to algorithm design are needed to fully leverage the probabilistic predictions. Moreover, we consider how to utilize more general forms of uncertainty quantification, proposing a framework based on online learning that learns to exploit uncertainty quantification to make optimal decisions in multi-instance settings.
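For intuition, here is a naive baseline for ski rental with a probabilistic prediction (not the paper's algorithm; as noted above, non-trivial modifications are needed beyond this): given a predicted distribution over the total number of ski days and a buy price, choose the buy day that minimizes expected cost under the prediction.

```python
def expected_cost(buy_day, buy_price, day_probs):
    """Cost if we rent (1/day) up to buy_day-1 and buy on day buy_day.
    day_probs[n] = predicted probability the season lasts exactly n days."""
    total = 0.0
    for n, p in enumerate(day_probs):
        c = n if n < buy_day else (buy_day - 1) + buy_price
        total += p * c
    return total

def best_buy_day(buy_price, day_probs):
    horizon = len(day_probs)
    return min(range(1, horizon + 1),
               key=lambda d: expected_cost(d, buy_price, day_probs))

# Example: buy price 10, predicted season length uniform over 1..20 days.
probs = [0.0] + [1 / 20] * 20
print(best_buy_day(10, probs))
```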
We propose a parallel-in-time method based on preconditioning for Biot's consolidation model in poroelasticity. To achieve fast and stable convergence for the matrix system of Biot's model, we design two preconditioners with approximations of the Schur complement. The parallel-in-time method employs an inverted time-stepping scheme that iterates to solve the preconditioned linear system in the outer loop and advances the time step in the inner loop. This allows us to parallelize the iterations, with a theoretical parallel efficiency that approaches 1 as the numbers of time steps and spatial steps grow. We demonstrate the stability, accuracy, and linear speedup of our method on an HPC platform through numerical experiments.
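The loop inversion can be pictured with a purely schematic toy (a backward-Euler discretization of the scalar ODE $u' = -au$, not the Biot system, and an exact block preconditioner rather than the paper's Schur-complement approximations): the outer loop runs the preconditioned iteration, the inner loop sweeps the time steps, and because every update within one sweep uses only the previous iterate, the inner loop parallelizes across time steps.

```python
import numpy as np

n_space, n_time, dt, a = 50, 40, 0.01, 1.0
M = np.eye(n_space) * (1 + dt * a)       # per-step system: M u_t = u_{t-1}
Minv = np.linalg.inv(M)                  # toy "preconditioner" (exact block solve)

u0 = np.ones(n_space)                    # initial condition feeds step 0's RHS
u = [np.zeros(n_space) for _ in range(n_time)]

for k in range(30):                                  # outer loop: iterations
    new_u = []
    for t in range(n_time):                          # inner loop: time steps
        rhs = u0 if t == 0 else u[t - 1]             # coupling to previous step
        r = rhs - M @ u[t]                           # residual of step t's equation
        new_u.append(u[t] + Minv @ r)                # uses only old iterates, so all
    u = new_u                                        # t-updates can run in parallel
```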
We develop a Bayesian inference method for discretely observed stochastic differential equations (SDEs). Inference is challenging for most SDEs due to the analytical intractability of the likelihood function. Nevertheless, forward simulation via numerical methods is straightforward, motivating the use of approximate Bayesian computation (ABC). We propose a conditional simulation scheme for SDEs that is based on lookahead strategies for sequential Monte Carlo (SMC) and particle smoothing using backward simulation. This leads to the simulation of trajectories that are consistent with the observed trajectory, thereby increasing the ABC acceptance rate. We additionally employ an invariant neural network, previously developed for Markov processes, to learn the summary statistics function required in ABC. The neural network is incrementally retrained by exploiting an ABC-SMC sampler, which provides new training data at each round. Since the SDE simulation scheme differs from standard forward simulation, we propose a suitable importance sampling correction, which has the added advantage of guiding the parameters towards regions of high posterior density, especially in the first ABC-SMC round. Our approach achieves accurate inference and is about three times faster than standard (forward-only) ABC-SMC. We illustrate our method in four simulation studies, including three examples from the Chan-Karolyi-Longstaff-Sanders (CKLS) SDE family.
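A minimal sketch of the standard forward-simulation side of ABC for a CKLS-type SDE, $dX_t = \kappa(\theta - X_t)\,dt + \sigma X_t^\gamma\,dW_t$, with an ad hoc summary and distance; the paper's lookahead/backward-simulation scheme and learned summaries are considerably more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, kappa, theta, sigma, gamma, dt, n_steps):
    """Forward-simulate a CKLS-type path with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        drift = kappa * (theta - x[i])
        diff = sigma * max(x[i], 0.0) ** gamma       # clip to keep the power real
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.normal()
    return x

def abc_accept(observed, params, eps,
               summary=lambda p: np.array([p.mean(), p.std()])):
    """Accept params iff the simulated summary is eps-close to the observed one."""
    sim = euler_maruyama(observed[0], *params, dt=0.01, n_steps=len(observed) - 1)
    return np.linalg.norm(summary(sim) - summary(observed)) < eps
```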
Optimal algorithms are developed for the robust detection of changes in non-stationary processes, i.e., processes in which the distribution of the data after the change varies with time. The decision-maker does not have access to precise information on the post-change distribution. It is shown that if the post-change non-stationary family has a distribution that is least favorable in a well-defined sense, then the algorithms designed using the least favorable distributions are robust and optimal. Non-stationary processes are encountered in public health monitoring and in space and military applications. The robust algorithms are applied to real and simulated data to demonstrate their effectiveness.
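As a deliberately simple illustration of the design principle above, here is a CUSUM-style detector driven by the log-likelihood ratio of an assumed least favorable post-change distribution (LFD) against the pre-change density. The stationary Gaussian choices are purely illustrative and do not reproduce the paper's non-stationary setting.

```python
import numpy as np
from scipy.stats import norm

def cusum_stop(data, f0=norm(0, 1), lfd=norm(0.5, 1), threshold=5.0):
    """Return the first time the CUSUM statistic, driven by the log-likelihood
    ratio of the LFD against the pre-change density, crosses the threshold."""
    w = 0.0
    for t, x in enumerate(data, start=1):
        w = max(0.0, w + lfd.logpdf(x) - f0.logpdf(x))
        if w >= threshold:
            return t
    return None  # no change declared within the observed data
```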
Precise arbitrary trajectory tracking for quadrotors is challenging due to unknown nonlinear dynamics, trajectory infeasibility, and actuation limits. To tackle these challenges, we present Deep Adaptive Trajectory Tracking (DATT), a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world. DATT builds on a novel feedforward-feedback-adaptive control structure trained in simulation using reinforcement learning. When deployed on real hardware, DATT is augmented with a disturbance estimator using L1 adaptive control in closed loop, without any fine-tuning. DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers on both feasible smooth and infeasible trajectories in unsteady wind fields, including challenging scenarios where the baselines fail completely. Moreover, DATT runs efficiently online, with an inference time under 3.2 ms, less than a quarter of that of the adaptive nonlinear model predictive control baseline.
We investigate the dimension-parametric complexity of the reachability problem in vector addition systems with states (VASS) and its extension with a pushdown stack (pushdown VASS). Up to now, the problem has been known to be $\mathcal{F}_k$-hard for VASS of dimension $3k+2$ (the complexity class $\mathcal{F}_k$ corresponds to the $k$th level of the fast-growing hierarchy), and no essentially better bound is known for pushdown VASS. We provide a new construction that improves the lower bound for VASS: $\mathcal{F}_k$-hardness in dimension $2k+3$. Furthermore, building on our new insights, we show a new lower bound for pushdown VASS: $\mathcal{F}_k$-hardness in dimension $\frac{k}{2} + 4$. This dimension-parametric lower bound is strictly stronger than the upper bound for VASS, which suggests that the (still unknown) complexity of the reachability problem in pushdown VASS is higher than in plain VASS (where it is Ackermann-complete).
In the literature on Kleene algebra, a number of variants have been proposed which impose additional structure specified by a theory, such as Kleene algebra with tests (KAT) and the recent Kleene algebra with observations (KAO), or make specific assumptions about certain constants, as for instance in NetKAT. Many of these variants fit within the unifying perspective offered by Kleene algebra with hypotheses, which comes with a canonical language model constructed from a given set of hypotheses. For the case of KAT, this model corresponds to the familiar interpretation of expressions as languages of guarded strings. A relevant question therefore is whether Kleene algebra together with a given set of hypotheses is complete with respect to its canonical language model. In this paper, we revisit, combine and extend existing results on this question to obtain tools for proving completeness in a modular way. We showcase these tools by giving new and modular proofs of completeness for KAT, KAO and NetKAT, and we prove completeness for new variants of KAT: KAT extended with a constant for the full relation, KAT extended with a converse operation, and a version of KAT where the collection of tests only forms a distributive lattice.
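For readers unfamiliar with the canonical language model of KAT mentioned above, here is a toy sketch of guarded strings and their coalesced sequential composition, for a single test b with atoms written "b" and "!b"; the representation choices are ours.

```python
# A guarded string alternates atoms (complete truth assignments to the tests)
# and actions: atom, action, atom, ..., atom.
atoms = ["b", "!b"]

def seq(L1, L2):
    """Coalesced product: gs1;gs2 is defined iff gs1 ends with the atom that
    gs2 starts with; the shared boundary atom then appears only once."""
    return {g1 + g2[1:] for g1 in L1 for g2 in L2 if g1[-1] == g2[0]}

test_b = {("b",)}                                        # the test b: atoms satisfying b
act_p = {(a1, "p", a2) for a1 in atoms for a2 in atoms}  # the primitive action p
print(sorted(seq(test_b, act_p)))   # [('b', 'p', '!b'), ('b', 'p', 'b')]:
                                    # p may only start from states where b holds
```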