This work deals with the numerical solution of systems of oscillatory second-order differential equations which often arise from the semi-discretization in space of partial differential equations. Since these differential equations exhibit pronounced, highly oscillatory behavior, standard numerical methods are known to perform poorly. Our approach consists in directly discretizing the problem by means of Gautschi-type integrators based on $\operatorname{sinc}$ matrix functions. The novelty lies in the use of a suitable rational approximation formula for the $\operatorname{sinc}$ matrix function, which makes it possible to apply a rational Krylov-like approximation method with suitable choices of poles. In particular, we discuss the application of the whole strategy to a finite element discretization of the wave equation.
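To make the Gautschi-type building block concrete, below is a minimal dense-matrix sketch of one step of a classical Gautschi-type two-step method for $x'' = -\Omega^2 x + g(x)$. The function names and the use of an exact eigendecomposition are illustrative assumptions; the paper instead evaluates the $\operatorname{sinc}$ matrix function via a rational Krylov-like approximation.

```python
import numpy as np

def gautschi_step(Omega2, g, x_prev, x_curr, h):
    """One step of a classical Gautschi-type two-step integrator for
    x'' = -Omega2 @ x + g(x):
        x_{n+1} = 2 cos(h*Omega) x_n - x_{n-1} + h^2 sinc^2(h*Omega/2) g(x_n).
    Matrix functions are evaluated here via an eigendecomposition of the
    symmetric matrix Omega2 (illustration only; the paper uses a rational
    Krylov-like approximation of the sinc matrix function instead)."""
    w2, V = np.linalg.eigh(Omega2)              # Omega2 = V diag(w2) V^T
    w = np.sqrt(np.maximum(w2, 0.0))            # frequencies
    cos_h = V @ np.diag(np.cos(h * w)) @ V.T
    sinc_half = np.sinc(0.5 * h * w / np.pi)    # unnormalized sinc(z) = sin(z)/z
    filt = V @ np.diag(sinc_half**2) @ V.T
    return 2.0 * cos_h @ x_curr - x_prev + h**2 * (filt @ g(x_curr))
```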
The Periodic Event Scheduling Problem (PESP) is the central mathematical tool for periodic timetable optimization in public transport. PESP can be formulated in several ways as a mixed-integer linear program, typically with general integer variables. We investigate the split closure of these formulations and show that split inequalities are identical to the recently introduced flip inequalities. While split inequalities are a general mixed-integer programming technique, flip inequalities are defined in purely combinatorial terms, namely cycles and arc sets of the digraph underlying the PESP instance. It is known that flip inequalities can be separated in pseudo-polynomial time. We prove that this is best possible unless P $=$ NP, but also observe that the complexity becomes linear-time if the cycle defining the flip inequality is fixed. Moreover, introducing mixed-integer-compatible maps, we compare the split closures of different formulations, and show that neither reformulation nor binarization by subdivision leads to stronger split closures. Finally, we estimate computationally how much of the optimality gap of the instances of the benchmark library PESPlib can be closed exclusively by split cuts, and provide better dual bounds for five instances.
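For orientation, one standard arc-based MIP formulation of PESP reads as follows (notation assumed here rather than taken from the paper: period $T$, event times $\pi_i$, arc bounds $[\ell_{ij}, u_{ij}]$, weights $w_{ij}$, and integer offsets $p_{ij}$); the split and flip inequalities discussed above are cutting planes for formulations of this kind:
\begin{align*}
\min_{\pi,\,p}\quad & \sum_{(i,j)\in A} w_{ij}\,\bigl(\pi_j-\pi_i+T p_{ij}-\ell_{ij}\bigr)\\
\text{s.t.}\quad & \ell_{ij} \;\le\; \pi_j-\pi_i+T p_{ij} \;\le\; u_{ij} && (i,j)\in A,\\
& \pi_i\in[0,T),\quad p_{ij}\in\mathbb{Z} && i\in V,\ (i,j)\in A.
\end{align*}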
A stable and high-order accurate solver for linear and nonlinear parabolic equations is presented. An additive Runge-Kutta method is used for the time stepping, which integrates the stiff linear terms by an explicit singly diagonally implicit Runge-Kutta (ESDIRK) method and the nonlinear terms by an explicit Runge-Kutta (ERK) method. In each time step, the implicit solve is performed by the recently developed Hierarchical Poincar\'e-Steklov (HPS) method. This is a fast direct solver for elliptic equations that decomposes the space domain into a hierarchical tree of subdomains and builds spectral collocation solvers locally on the subdomains. These ideas combine naturally in the presented method, since the singly diagonal coefficient in ESDIRK and a fixed time step ensure that the coefficient matrix in the implicit solve of HPS remains the same for all time stages. This means that the precomputed inverse can be efficiently reused, leading to a scheme with complexity (in two dimensions) $\mathcal{O}(N^{1.5})$ for the precomputation, where the solution operator for the elliptic problems is built, and then $\mathcal{O}(N)$ for each time step. The stability of the method is proved for first order in time and any order in space, and numerical evidence substantiates a claim of stability for a much broader class of time discretization methods. Numerical experiments supporting the accuracy and efficiency of the method in one and two dimensions are presented.
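The reuse argument can be illustrated with a minimal sketch (assumed notation, not the paper's implementation): because ESDIRK has a single diagonal coefficient $\gamma$ and the time step $h$ is fixed, every implicit stage solves a system with the same matrix $I - \gamma h L$, so one precomputed factorization serves all stages and all time steps. Here a sparse LU stands in for the HPS direct solver.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def precompute(L, h, gamma):
    """Factorize I - gamma*h*L once; L is the (sparse) discrete elliptic operator."""
    n = L.shape[0]
    A = (sp.identity(n, format="csc") - gamma * h * L).tocsc()
    return spla.splu(A)          # stands in for the HPS solution operator

def implicit_stage(solver, rhs):
    """Every ESDIRK stage reuses the stored factorization."""
    return solver.solve(rhs)
```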
Quantum algorithms for optimization problems are of general interest. Despite recent progress in classical lower bounds for nonconvex optimization under different settings and quantum lower bounds for convex optimization, quantum lower bounds for nonconvex optimization are still widely open. In this paper, we conduct a systematic study of quantum query lower bounds on finding $\epsilon$-approximate stationary points of nonconvex functions, and we consider the following two important settings: 1) having access to $p$-th order derivatives; or 2) having access to stochastic gradients. The classical query lower bound is $\Omega\big(\epsilon^{-\frac{1+p}{p}}\big)$ in the first setting, and $\Omega(\epsilon^{-4})$ in the second setting (or $\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth). In this paper, we extend all these classical lower bounds to the quantum setting. They match the classical algorithmic results respectively, demonstrating that there is no quantum speedup for finding $\epsilon$-stationary points of nonconvex functions with $p$-th order derivative inputs or stochastic gradient inputs, whether with or without the mean-squared smoothness assumption. Technically, our quantum lower bounds are obtained by showing that the sequential nature of classical hard instances in all these settings also applies to quantum queries, preventing any quantum speedup other than revealing information about the stationary points sequentially.
We consider the problem of recovering conditional independence relationships between $p$ jointly distributed Hilbertian random elements given $n$ realizations thereof. We operate in the sparse high-dimensional regime, where $n \ll p$ and no element is related to more than $d \ll p$ other elements. In this context, we propose an infinite-dimensional generalization of the graphical lasso. We prove model selection consistency under natural assumptions and extend many classical results to infinite dimensions. In particular, we do not require finite truncation or additional structural restrictions. The plug-in nature of our method makes it applicable to any observational regime, whether sparse or dense, and indifferent to serial dependence. Importantly, our method can be understood as naturally arising from a coherent maximum likelihood philosophy.
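For reference, the classical finite-dimensional graphical lasso, of which the paper develops an infinite-dimensional analogue, estimates the precision matrix by (notation assumed: $S$ the sample covariance of the $p$ variables, $\lambda$ a regularization parameter)
\[
\hat\Theta \;=\; \operatorname*{arg\,min}_{\Theta \succ 0}\; \operatorname{tr}(S\Theta) \;-\; \log\det\Theta \;+\; \lambda \sum_{j\neq k}\lvert\Theta_{jk}\rvert,
\]
with conditional independence of elements $j$ and $k$ read off from $\hat\Theta_{jk}=0$.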
This work studies minimization problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth and possibly satisfies additional properties. We consider two kinds of zero-order projected gradient descent algorithms, which differ in the form of the gradient estimator. The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016). We present an improved analysis of this algorithm on the class of highly smooth and strongly convex functions studied in the prior work, and we derive rates of convergence for two more general classes of non-convex functions. Namely, we consider highly smooth functions satisfying the Polyak-{\L}ojasiewicz condition and the class of highly smooth functions with no additional property. The second algorithm is based on randomization over the $\ell_1$ sphere, and it extends to the highly smooth setting the algorithm that was recently proposed for Lipschitz convex functions in Akhavan et al. (2022). We show that, in the case of a noiseless oracle, this novel algorithm enjoys better bounds on bias and variance than the $\ell_2$ randomization and the commonly used Gaussian randomization algorithms, while in the noisy case both $\ell_1$ and $\ell_2$ algorithms benefit from similarly improved theoretical guarantees. The improvements are achieved thanks to a new proof technique based on Poincar\'e-type inequalities for uniform distributions on the $\ell_1$ or $\ell_2$ spheres. The results are established under weak (almost adversarial) assumptions on the noise. Moreover, we provide minimax lower bounds proving optimality or near optimality of the obtained upper bounds in several cases.
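As a point of reference for the estimators discussed above, the following is a minimal sketch of a basic two-point zero-order gradient estimator with randomization over the $\ell_2$ sphere (illustrative only; the estimators analyzed in the paper additionally use smoothing kernels to exploit higher-order smoothness, and the $\ell_1$ variant samples from the $\ell_1$ sphere instead).

```python
import numpy as np

def l2_sphere_two_point_grad(f, x, h, rng=None):
    """Basic two-point zero-order gradient estimate at x with step h,
    using a direction zeta drawn uniformly from the unit l2 sphere."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)               # uniform on the unit l2 sphere
    return (d / (2.0 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * zeta
```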
We investigate error bounds for numerical solutions of divergence structure linear elliptic PDEs on compact manifolds without boundary. Our focus is on a class of monotone finite difference approximations, which provide a strong form of stability that guarantees the existence of a bounded solution. In many settings including the Dirichlet problem, it is easy to show that the resulting solution error is proportional to the formal consistency error of the scheme. We make the surprising observation that this need not be true for PDEs posed on compact manifolds without boundary. We propose a particular class of approximation schemes built around an underlying monotone scheme with consistency error $O(h^\alpha)$. By carefully constructing barrier functions, we prove that the solution error is bounded by $O(h^{\alpha/(d+1)})$ in dimension $d$. We also provide a specific example where this predicted convergence rate is observed numerically. Using these error bounds, we further design a family of provably convergent approximations to the solution gradient.
We initiate the study of the algorithmic problem of certifying lower bounds on the discrepancy of random matrices: given an input matrix $A \in \mathbb{R}^{m \times n}$, output a value that is a lower bound on $\mathsf{disc}(A) = \min_{x \in \{\pm 1\}^n} ||Ax||_\infty$ for every $A$, but is close to the typical value of $\mathsf{disc}(A)$ with high probability over the choice of a random $A$. This problem is important because of its connections to conjecturally-hard average-case problems such as negatively-spiked PCA, the number-balancing problem and refuting random constraint satisfaction problems. We give the first polynomial-time algorithms with non-trivial guarantees for two main settings. First, when the entries of $A$ are i.i.d. standard Gaussians, it is known that $\mathsf{disc} (A) = \Theta (\sqrt{n}2^{-n/m})$ with high probability. Our algorithm certifies that $\mathsf{disc}(A) \geq \exp(- O(n^2/m))$ with high probability. As an application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein on the computational hardness of the detection problem in the negatively-spiked Wishart model. Second, we consider the integer partitioning problem: given $n$ uniformly random $b$-bit integers $a_1, \ldots, a_n$, certify the non-existence of a perfect partition, i.e. certify that $\mathsf{disc} (A) \geq 1$ for $A = (a_1, \ldots, a_n)$. Under the scaling $b = \alpha n$, it is known that the probability of the existence of a perfect partition undergoes a phase transition from 1 to 0 at $\alpha = 1$; our algorithm certifies the non-existence of perfect partitions for some $\alpha = O(n)$. We also give efficient non-deterministic algorithms with significantly improved guarantees. Our algorithms involve a reduction to the Shortest Vector Problem.
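To fix the quantity being certified, here is a brute-force computation of $\mathsf{disc}(A)$ for tiny instances (exponential in $n$, so purely illustrative; the certification algorithms above are polynomial-time and never enumerate signings).

```python
import itertools
import numpy as np

def disc(A):
    """Discrepancy disc(A) = min over x in {-1,+1}^n of ||A x||_inf,
    computed by exhaustive enumeration (illustration only)."""
    m, n = A.shape
    return min(np.max(np.abs(A @ np.array(x)))
               for x in itertools.product((-1, 1), repeat=n))
```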
The problem of distributed optimization requires a group of networked agents to compute a parameter that minimizes the average of their local cost functions. While there are a variety of distributed optimization algorithms that can solve this problem, they are typically vulnerable to ``Byzantine'' agents that do not follow the algorithm. Recent attempts to address this issue focus on single-dimensional functions, or assume certain statistical properties of the functions at the agents. In this paper, we provide two resilient, scalable, distributed optimization algorithms for multi-dimensional functions. Our schemes involve two filters, (1) a distance-based filter and (2) a min-max filter, each of which removes, at every iteration, neighborhood states that are extreme (defined precisely in our algorithms). We show that these algorithms can mitigate the impact of up to $F$ (unknown) Byzantine agents in the neighborhood of each regular agent. In particular, we show that if the network topology satisfies certain conditions, all of the regular agents' states are guaranteed to converge to a bounded region that contains the minimizer of the average of the regular agents' functions.
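As a rough illustration of the filtering idea (a hedged sketch, not the paper's algorithm), one coordinate-wise reading of a min-max filter discards the $F$ largest and $F$ smallest neighbor values in each coordinate before aggregating; the paper defines its two filters precisely.

```python
import numpy as np

def minmax_filter(neighbor_states, F):
    """Coordinate-wise min-max filter sketch: in each coordinate, drop the F
    largest and F smallest neighbor values, then average what remains.
    Requires more than 2F neighbors; illustrative only."""
    X = np.asarray(neighbor_states, dtype=float)   # shape (num_neighbors, dim)
    assert X.shape[0] > 2 * F, "need more than 2F neighbor states"
    X_sorted = np.sort(X, axis=0)
    kept = X_sorted[F:X_sorted.shape[0] - F]       # drop extremes per coordinate
    return kept.mean(axis=0)
```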
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
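The claim that residual networks are discretisations can be seen directly: an explicit Euler step of the neural ODE $\mathrm{d}x/\mathrm{d}t = f_\theta(x)$ is exactly a residual update. The toy vector field and names below are illustrative assumptions.

```python
import numpy as np

def f_theta(x, W1, W2):
    """A tiny two-layer vector field standing in for a learned f_theta."""
    return W2 @ np.tanh(W1 @ x)

def euler_neural_ode(x0, W1, W2, h=0.1, steps=10):
    """Explicit Euler discretisation of dx/dt = f_theta(x):
    each step  x <- x + h * f_theta(x)  is one residual block."""
    x = x0
    for _ in range(steps):
        x = x + h * f_theta(x, W1, W2)
    return x
```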
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
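The re-weighting scheme can be sketched directly from the stated formula; the rescaling of the weights to sum to the number of classes is a common convention assumed here, as are the function and argument names.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples
    E_n = (1 - beta**n) / (1 - beta): weights are proportional to 1/E_n and
    rescaled to sum to the number of classes (normalization assumed)."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()
```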