We introduce a new class of numerical schemes which allow for low regularity approximations to the expectation $ \mathbb{E}(|u_{k}(\tau, v^{\eta})|^2)$, where $u_k$ denotes the $k$-th Fourier coefficient of the solution $u$ of the dispersive equation and $ v^{\eta}(x) $ the associated random initial data. This quantity plays an important role in physics, in particular in the study of wave turbulence, where one needs to adopt a statistical approach in order to obtain insight into the generic long-time behaviour of solutions to dispersive equations. Our new class of schemes is based on Wick's theorem and Feynman diagrams together with a resonance-based discretisation (see arXiv:2005.01649) set in a more general context: we introduce a novel combinatorial structure called paired decorated forests, which consist of two decorated trees whose decorations on the leaves come in pairs. The construction of the scheme draws its inspiration from the treatment of singular stochastic partial differential equations via Regularity Structures. In contrast to classical approaches, we do not discretise the PDE itself, but rather the expectation $ \mathbb{E}(|u_{k}(\tau, v^{\eta})|^2)$. This allows us to heavily exploit the optimal resonance structure and the underlying gain in regularity on the finite-dimensional (discrete) level.
We say that a Hamilton cycle $C=(x_1,\ldots,x_n)$ in a graph $G$ is $k$-symmetric if the mapping $x_i\mapsto x_{i+n/k}$ for all $i=1,\ldots,n$, where indices are considered modulo $n$, is an automorphism of $G$. In other words, if we lay out the vertices $x_1,\ldots,x_n$ equidistantly on a circle and draw the edges of $G$ as straight lines, then the drawing of $G$ has $k$-fold rotational symmetry, i.e., all information about the graph is compressed into a $360^\circ/k$ wedge of the drawing. The maximum $k$ for which there exists a $k$-symmetric Hamilton cycle in $G$ is referred to as the Hamilton compression of $G$. We investigate the Hamilton compression of four different families of vertex-transitive graphs, namely hypercubes, Johnson graphs, permutahedra and Cayley graphs of abelian groups. In several cases we determine their Hamilton compression exactly, and in other cases we provide close lower and upper bounds. The constructed cycles have a much higher compression than several classical Gray codes known from the literature. Our constructions also yield Gray codes for bitstrings, combinations and permutations that have few tracks and/or that are balanced.
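To make the definition above concrete, the following minimal Python sketch (our own illustration, not code from the paper) checks whether a given Hamilton cycle of a graph is $k$-symmetric; the adjacency-dictionary representation and the function name are choices of the sketch.

```python
def is_k_symmetric(adj, cycle, k):
    """adj: dict mapping each vertex to the set of its neighbours;
    cycle: list (x_1, ..., x_n) of the vertices of a Hamilton cycle of the graph;
    k: candidate symmetry order."""
    n = len(cycle)
    if n % k != 0:
        return False
    shift = n // k
    # phi maps each vertex to the vertex n/k positions further along the cycle
    phi = {cycle[i]: cycle[(i + shift) % n] for i in range(n)}
    # phi is an automorphism iff it preserves adjacency and non-adjacency
    return all((v in adj[u]) == (phi[v] in adj[phi[u]])
               for u in adj for v in adj)

# Example: in the 6-cycle every rotation is an automorphism, so the Hamilton
# cycle (0, 1, ..., 5) is k-symmetric for every divisor k of 6.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
assert is_k_symmetric(adj, list(range(6)), 3)
```

The Hamilton compression of $G$ is then the largest $k$ for which some Hamilton cycle passes this test.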
We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function~$f$ we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure depends on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, between smooth and piecewise-smooth functions, and between monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
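The function-dependent flavour of the problem can be illustrated by the following sketch (our own illustration of the structure provided by monotonicity, not the GreedyBox algorithm): for a non-decreasing $f$, the queried values certify piecewise-constant lower and upper envelopes of $f$, and the $L^p(\mu)$ norm of their gap is a computable upper bound on the error of any approximation sandwiched between them. The example function, the uniform measure and all names below are placeholders.

```python
import numpy as np

# For non-decreasing f : [0, 1] -> [0, 1] and query points 0 = t_0 < ... < t_m = 1,
# monotonicity certifies f(t_i) <= f(x) <= f(t_{i+1}) on each cell [t_i, t_{i+1}].
# With mu the Lebesgue measure, the L^p norm of the gap between the two staircase
# envelopes bounds the error of any approximation lying between them.
def certified_lp_gap(f, ts, p=2):
    ts = np.asarray(ts, dtype=float)
    vals = np.array([f(t) for t in ts])
    widths = np.diff(ts)                 # mu-mass of each cell
    gaps = vals[1:] - vals[:-1]          # upper minus lower envelope, constant per cell
    return float(np.sum(widths * gaps ** p) ** (1.0 / p))

f = lambda x: x ** 3                     # an example monotone function
print(certified_lp_gap(f, np.linspace(0.0, 1.0, 11)))
```

An adaptive method can then spend its next query in the cell contributing most to this bound, which is one way a sample complexity can adapt to the particular $f$.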
The solutions of scalar ordinary differential equations become more complex as their coefficients increase in magnitude. As a consequence, when a standard solver is applied to such an equation, its running time grows with the magnitudes of the equation's coefficients. It is well known, however, that scalar ordinary differential equations with slowly-varying coefficients admit slowly-varying phase functions whose cost to represent via standard techniques is largely independent of the magnitude of the equation's coefficients. This observation is the basis of most methods for the asymptotic approximation of the solutions of ordinary differential equations, including the WKB method. Here, inspired by the classical Levin method for the calculation of oscillatory integrals, we introduce two numerical algorithms for constructing phase functions for scalar ordinary differential equations. For a large class of scalar ordinary differential equations with slowly-varying coefficients, the running times of our algorithms are independent of the magnitude of the equation's coefficients. The results of extensive numerical experiments demonstrating the properties of our algorithms are presented.
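For context, we recall a standard fact about phase functions (not specific to the algorithms of the paper above): for the model equation $u''(t) + q(t)\,u(t) = 0$, writing $u(t) = \exp(r(t))$ shows that the phase $r$ satisfies the Riccati-type equation $r''(t) + \bigl(r'(t)\bigr)^2 + q(t) = 0$; when $q$ is large, positive and slowly varying, $r'$ stays close to $\pm i\sqrt{q(t)}$ and is itself slowly varying, which is why phase functions can be represented cheaply even when the solutions themselves oscillate rapidly.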
We show that unless P=NP, there cannot be a polynomial-time algorithm that finds a point within Euclidean distance $c^n$ (for any constant $c \ge 0$) of a local minimizer of an $n$-variate quadratic function over a polytope. This result (even with $c=0$) answers a question of Pardalos and Vavasis that appeared in 1992 on a list of seven open problems in complexity theory for numerical optimization. Our proof technique also implies that the problem of deciding whether a quadratic function has a local minimizer over an (unbounded) polyhedron, and that of deciding whether a quartic polynomial has a local minimizer, are NP-hard.
We propose reinforcement learning to control the dynamical self-assembly of the dodecagonal quasicrystal (DDQC) from patchy particles. The patchy particles have anisotropic interactions with other particles and form the DDQC. However, their structures at steady states are significantly influenced by the kinetic pathways of their structural formation. We estimate the best policy for temperature control, trained by the Q-learning method, and demonstrate that we can generate DDQCs with few defects using the estimated policy. The temperature schedule obtained by reinforcement learning can reproduce the desired structure more efficiently than conventional pre-fixed temperature schedules, such as annealing. To clarify why the learning succeeds, we also analyse a simple model describing the kinetics of structural changes through the motion in a triple-well potential. We find that reinforcement learning autonomously discovers the critical temperature at which structural fluctuations enhance the chance of forming a globally stable state. The estimated policy guides the system toward that critical temperature to assist the formation of the DDQC.
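For readers unfamiliar with the method named above, the following is a generic tabular Q-learning sketch (our own schematic, not the paper's code); the state and action discretisations, the toy `step` environment and all constants are placeholders standing in for the particle simulation and its temperature levels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 5          # placeholder discretisations of structure / temperature
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

def step(state, action):
    """Toy stand-in for the particle simulation: the structure improves when the
    chosen temperature level is near a 'critical' one, and degrades otherwise."""
    move = 1 if abs(action - 2) <= 1 else -1
    next_state = int(np.clip(state + move, 0, n_states - 1))
    reward = 1.0 if next_state == n_states - 1 else 0.0   # reward a defect-free structure
    return next_state, reward

for episode in range(200):
    state = 0
    for _ in range(50):
        # epsilon-greedy choice of the temperature level
        action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # standard Q-learning update of the action-value table
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
```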
We are interested in numerical algorithms for computing the electrical field generated by a charge distribution localized on scale $l$ in an infinite heterogeneous correlated random medium, in a situation where the medium is only known in a box of diameter $L\gg l$ around the support of the charge. We show that the algorithm of Lu, Otto and Wang, which suggests optimal Dirichlet boundary conditions motivated by the multipole expansion of Bella, Giunti and Otto, still performs well in correlated media. With overwhelming probability, we obtain a convergence rate in terms of $l$, $L$ and the size of the correlations, whose optimality is supported by numerical simulations. These estimates are provided for ensembles which satisfy a multi-scale logarithmic Sobolev inequality, where our main tool is an extension of the semigroup estimates established by the first author. As part of our strategy, we construct sublinear second-order correctors in this correlated setting, which is of independent interest.
Consider the point-line geometry ${\mathcal P}_t(n,k)$ whose points are all the $[n,k]$-linear codes with minimum dual distance at least $t+1$, and in which two points $X$ and $Y$ are collinear whenever $X\cap Y$ is an $[n,k-1]$-linear code with minimum dual distance at least $t+1$. We are interested in the collinearity graph $\Lambda_t(n,k)$ of ${\mathcal P}_t(n,k).$ The graph $\Lambda_t(n,k)$ is a subgraph of the Grassmann graph and also a subgraph of the graph $\Delta_t(n,k)$ of the linear codes having minimum dual distance at least $t+1$ introduced in~[M. Kwiatkowski, M. Pankov, On the distance between linear codes, Finite Fields Appl. 39 (2016), 251--263, doi:10.1016/j.ffa.2016.02.004, arXiv:1506.00215]. We study the structure of $\Lambda_t(n,k)$ in relation to that of $\Delta_t(n,k)$ and characterize the set of its isolated vertices. We then focus on $\Lambda_1(n,k)$ and $\Lambda_2(n,k)$, providing necessary and sufficient conditions for them to be connected.
This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair their effectiveness; our methods reliably return quasi-best low-rank approximations.
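A minimal sketch of the constant-DRM idea for the randomized SVD (our own illustration; the matrix sizes, the oversampling and the assumed affine form $A(t) = A_0 + t A_1$ are placeholders): a single Gaussian DRM $\Omega$ is drawn once and reused for every parameter value, so the sketch $A(t)\Omega$ varies smoothly with $t$ and, under the affine decomposition, the products $A_0\Omega$ and $A_1\Omega$ could even be precomputed.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, rank = 200, 150, 10
A0 = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))   # rank-5 pieces, so
A1 = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))   # rank(A(t)) <= 10
A = lambda t: A0 + t * A1                                        # assumed affine dependence on t

Omega = rng.standard_normal((n, rank + 5))    # one Gaussian DRM, reused for all t

def rand_svd_constant_drm(t):
    Y = A(t) @ Omega                          # sketch; A0 @ Omega and A1 @ Omega could be precomputed
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the approximate range
    B = Q.T @ A(t)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U, s, Vt                       # low-rank approximation of A(t)

for t in np.linspace(0.0, 1.0, 5):
    U, s, Vt = rand_svd_constant_drm(t)
    print(f"t = {t:.2f}, residual = {np.linalg.norm(A(t) - (U * s) @ Vt):.2e}")
```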
Given a Boolean formula $\Phi(X, Y, Z)$, the Max\#SAT problem asks for a partial model on the set of variables $X$ that maximizes the number of its projected models over the set of variables $Y$. We investigate a strict generalization of Max\#SAT that allows dependencies for variables in $X$, effectively turning it into a synthesis problem. We show that this new problem, called DQMax\#SAT, subsumes both the DQBF and DSSAT problems. We provide a general resolution method, based on a reduction to Max\#SAT, together with two improvements for dealing with its inherent complexity. We further discuss a concrete application of DQMax\#SAT to the symbolic synthesis of adaptive attackers in the field of program security. Finally, we report preliminary results obtained on the resolution of benchmark problems using a prototype DQMax\#SAT solver implementation.
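To fix the semantics of the base problem, here is a brute-force sketch of Max\#SAT (our own illustration, far from a practical solver; the formula is given as a Python predicate and all names are placeholders): for each assignment to $X$ we count the assignments to $Y$ that can be extended by some assignment to $Z$ satisfying $\Phi$, and we keep an $X$-assignment maximizing this count.

```python
from itertools import product

def max_count_sat(phi, X, Y, Z):
    """phi: predicate taking a dict {variable name: bool};
    X, Y, Z: disjoint lists of variable names."""
    best, best_count = None, -1
    for xs in product([False, True], repeat=len(X)):
        sigma_x = dict(zip(X, xs))
        count = 0
        for ys in product([False, True], repeat=len(Y)):
            sigma_y = dict(zip(Y, ys))
            # sigma_y is a projected model if some assignment to Z satisfies phi
            if any(phi({**sigma_x, **sigma_y, **dict(zip(Z, zs))})
                   for zs in product([False, True], repeat=len(Z))):
                count += 1
        if count > best_count:
            best, best_count = sigma_x, count
    return best, best_count

# Example: phi = (x or y1) and (y2 or z); choosing x = True yields 4 projected models.
phi = lambda a: (a["x"] or a["y1"]) and (a["y2"] or a["z"])
print(max_count_sat(phi, ["x"], ["y1", "y2"], ["z"]))   # ({'x': True}, 4)
```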
Partial differential equations (PDEs) are used to describe a variety of physical phenomena. Often these equations do not have analytical solutions and numerical approximations are used instead. One of the common methods to solve PDEs is the finite element method. Computing derivative information of the solution with respect to the input parameters is important in many tasks in scientific computing. We extend the JAX automatic differentiation library with an interface to the Firedrake finite element library. The high-level symbolic representation of PDEs allows us to bypass differentiating through the low-level, possibly numerous, iterations of the underlying nonlinear solvers. Differentiation through Firedrake solvers is done using tangent-linear and adjoint equations. This enables the efficient composition of finite element solvers with arbitrary differentiable programs. The code is available at github.com/IvanYashchuk/jax-firedrake.
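As a schematic illustration of the mechanism described above (not the actual jax-firedrake API; `forward_solve` and `adjoint_solve` are hypothetical placeholders standing in for the Firedrake forward and adjoint solves), a PDE solve can be exposed to JAX as a single differentiable primitive whose reverse-mode rule is supplied by an adjoint solve, so that JAX never has to trace the solver's inner iterations:

```python
import jax
import jax.numpy as jnp

def forward_solve(params):
    # Placeholder for a (nonlinear) Firedrake solve; a trivial stand-in here.
    return jnp.tanh(params)

def adjoint_solve(params, u, u_bar):
    # Placeholder for the adjoint solve; the exact derivative of the stand-in above.
    return (1.0 - u ** 2) * u_bar

@jax.custom_vjp
def solve_pde(params):
    return forward_solve(params)

def solve_pde_fwd(params):
    u = forward_solve(params)
    return u, (params, u)            # residuals saved for the backward pass

def solve_pde_bwd(res, u_bar):
    params, u = res
    return (adjoint_solve(params, u, u_bar),)

solve_pde.defvjp(solve_pde_fwd, solve_pde_bwd)

# The wrapped solver now composes with arbitrary differentiable JAX code.
loss = lambda p: jnp.sum(solve_pde(p) ** 2)
print(jax.grad(loss)(jnp.array([0.1, 0.2, 0.3])))
```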