We introduce a new class of numerical schemes which allow for low regularity approximations to the expectation $ \mathbb{E}(|u_{k}(\tau, v^{\eta})|^2)$, where $u_k$ denotes the $k$-th Fourier coefficient of the solution $u$ of the dispersive equation and $ v^{\eta}(x) $ the associated random initial data. This quantity plays an important role in physics, in particular in the study of wave turbulence, where one needs to adopt a statistical approach in order to obtain deep insight into the generic long-time behaviour of solutions to dispersive equations. Our new class of schemes is based on Wick's theorem and Feynman diagrams together with a resonance-based discretisation (see arXiv:2005.01649) set in a more general context: we introduce a novel combinatorial structure called paired decorated forests, which are pairs of decorated trees whose leaf decorations come in pairs. The construction of the scheme draws its inspiration from the treatment of singular stochastic partial differential equations via Regularity Structures. In contrast to classical approaches, we do not discretize the PDE itself, but rather its expectation. This allows us to heavily exploit the optimal resonance structure and the underlying gain in regularity at the finite-dimensional (discrete) level.
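For orientation, the pairing mechanism underlying both Wick's theorem and the paired leaf decorations is the classical Wick/Isserlis identity for centred jointly Gaussian random variables; this is a standard fact, stated here for context rather than as part of the scheme itself.

```latex
% Wick/Isserlis theorem: for centred jointly Gaussian X_1, \dots, X_{2n},
\mathbb{E}\big(X_1 X_2 \cdots X_{2n}\big)
  = \sum_{\pi \in P^2_{2n}} \prod_{\{i,j\} \in \pi} \mathbb{E}(X_i X_j),
% where P^2_{2n} denotes the set of pairings of \{1, \dots, 2n\}. For n = 2:
\mathbb{E}(X_1 X_2 X_3 X_4)
  = \mathbb{E}(X_1 X_2)\,\mathbb{E}(X_3 X_4)
  + \mathbb{E}(X_1 X_3)\,\mathbb{E}(X_2 X_4)
  + \mathbb{E}(X_1 X_4)\,\mathbb{E}(X_2 X_3).
```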
We study a class of fully discrete schemes for the numerical approximation of solutions of stochastic Cahn--Hilliard equations with cubic nonlinearity, driven by additive noise. The spatial (resp. temporal) discretization is performed with a spectral Galerkin method (resp. a tamed exponential Euler method). We consider two situations: space-time white noise in dimension $d=1$ and trace-class noise in dimensions $d=1,2,3$. In both situations, we prove weak error estimates, where the weak order of convergence is twice the strong order of convergence with respect to the spatial and temporal discretization parameters. To prove these results, we show appropriate regularity estimates for solutions of the Kolmogorov equation associated with the stochastic Cahn--Hilliard equation; these estimates have not been established previously and may be of interest in other contexts.
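A minimal sketch of the type of scheme described, for the 1d equation with space-time white noise; the concrete taming factor, noise scaling, and parameters below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Illustrative sketch (not the paper's exact scheme): spectral Galerkin in
# space plus a tamed exponential Euler step in time for the 1d stochastic
# Cahn--Hilliard equation du = (-Delta^2 u - Delta u + Delta(u^3)) dt + dW
# on the torus. Taming factor and noise scaling are assumptions.
rng = np.random.default_rng(0)
N, dt, nsteps = 128, 1e-4, 1000
k = np.fft.fftfreq(N, d=1.0 / N)            # integer Fourier modes
L = -k**4 + k**2                            # symbol of -Delta^2 - Delta
E = np.exp(dt * L)                          # exact flow of the linear part

uh = np.fft.fft(0.1 * rng.standard_normal(N))   # random initial data
for _ in range(nsteps):
    u = np.real(np.fft.ifft(uh))
    Nh = -k**2 * np.fft.fft(u**3)           # symbol of Delta(u^3)
    tame = 1.0 / (1.0 + dt * np.linalg.norm(Nh) / N)  # one common taming choice
    dW = np.sqrt(dt) * rng.standard_normal(N)         # simplified noise increment
    uh = E * (uh + dt * tame * Nh + np.fft.fft(dW))
u = np.real(np.fft.ifft(uh))                # approximate sample at t = nsteps * dt
```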
The classical Hennessy-Milner theorem is an important tool in the analysis of concurrent processes; it guarantees that any two non-bisimilar states in finitely branching labelled transition systems can be distinguished by a modal formula. Numerous variants of this theorem have since been established for a wide range of logics and system types, including quantitative versions where lower bounds on behavioural distance (e.g.~in weighted, metric, or probabilistic transition systems) are witnessed by quantitative modal formulas. Both the qualitative and the quantitative versions have been accommodated within the framework of coalgebraic logic, with distances taking values in quantales, subject to certain restrictions, such as being so-called value quantales. While previous quantitative coalgebraic Hennessy-Milner theorems apply only to liftings of set functors to (pseudo-)metric spaces, in the present work we provide a quantitative coalgebraic Hennessy-Milner theorem that applies more widely to functors native to metric spaces; notably, we thus cover, for the first time, the well-known Hennessy-Milner theorem for continuous probabilistic transition systems, where transitions are given by Borel measures on metric spaces, as an instance. In the process, we also relax the restrictions imposed on the quantale, and additionally parametrize the technical account over notions of closure and, hence, density, providing associated variants of the Stone-Weierstrass theorem; this allows us to cover, for instance, behavioural ultrametrics.
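For reference, the classical qualitative statement that this line of work generalizes: in a finitely branching labelled transition system, two states are bisimilar if and only if they satisfy the same formulas of Hennessy-Milner logic. This is the textbook theorem, not the paper's quantitative coalgebraic version.

```latex
% Hennessy-Milner logic over a set A of action labels:
\varphi ::= \top \mid \varphi_1 \wedge \varphi_2 \mid \neg\varphi \mid \langle a \rangle \varphi \quad (a \in A),
% and the classical theorem: for states s, t of a finitely branching LTS,
s \sim t \iff \big( \forall \varphi:\ s \models \varphi \Leftrightarrow t \models \varphi \big).
```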
Structure learning via MCMC sampling is known to be very challenging because of the enormous search space and the existence of Markov equivalent DAGs. Theoretical results on the mixing behavior are lacking. In this work, we prove the rapid mixing of a random walk Metropolis-Hastings algorithm, which reveals that the complexity of Bayesian learning of sparse equivalence classes grows only polynomially in $n$ and $p$, under some high-dimensional assumptions. A series of high-dimensional consistency results is obtained, including the strong selection consistency of an empirical Bayes model for structure learning. Our proof is based on two new results. First, we derive a general mixing time bound on finite state spaces, which can be applied to various local MCMC schemes for other model selection problems. Second, we construct greedy search paths on the space of equivalence classes with node degree constraints by proving a combinatorial property of the comparison between two DAGs. Simulation studies on the proposed MCMC sampler are conducted to illustrate the main theoretical findings.
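The following skeleton illustrates the generic shape of such a random walk Metropolis-Hastings sampler on a finite state space; `score` and `neighbors` are placeholders for the paper's posterior score over sparse equivalence classes and its degree-constrained local moves, not the paper's implementation.

```python
import math
import random

def metropolis_hastings(score, neighbors, init, n_steps, seed=0):
    """Generic random walk Metropolis-Hastings on a finite state space.

    score(s): log posterior score of state s (e.g. of a sparse equivalence
    class); neighbors(s): states reachable by one local move. Both are
    placeholders for the paper's specific model and move set.
    """
    rng = random.Random(seed)
    state, logp = init, score(init)
    for _ in range(n_steps):
        nbrs = neighbors(state)
        prop = rng.choice(nbrs)
        prop_logp = score(prop)
        # Hastings correction for unequal neighborhood sizes
        log_acc = prop_logp - logp + math.log(len(nbrs)) - math.log(len(neighbors(prop)))
        if math.log(rng.random()) < log_acc:
            state, logp = prop, prop_logp
    return state
```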
Dependency graphs have proven to be a very successful model for representing the syntactic structure of sentences of human languages. In these graphs, which are widely accepted to be trees, vertices are words and arcs connect syntactically dependent words. The tendency of these dependencies to be short has been demonstrated using random baselines for the sum of the lengths of the edges or its variants. A ubiquitous baseline is the expected sum in projective orderings (wherein edges do not cross and the root word of the sentence is not covered by any edge). It has been shown that this expected value can be computed in $O(n)$ time. In this article we focus on planar orderings (where the root word can be covered) and present two main results. First, we show the relationship between the expected sum in planar arrangements and the expected sum in projective arrangements. Second, we derive an $O(n)$-time algorithm to calculate the expected value of the sum of edge lengths. These two results stem from another contribution of the present article, namely a characterization of planarity that, given a sentence, yields either the number of planar permutations or an efficient algorithm to generate uniformly random planar permutations of the words. Our research paves the way for replicating past research on dependency distance minimization using random planar linearizations as the random baseline.
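To make the notion of a random baseline concrete, here is a naive Monte Carlo estimate over unrestricted random orderings; the paper's contribution is an exact $O(n)$ computation for the harder planar case, which this sketch does not implement.

```python
import random

def sum_edge_lengths(order, edges):
    """Sum of dependency distances |pos(u) - pos(v)| under a linear order."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def mc_baseline(words, edges, samples=10000, seed=0):
    """Monte Carlo estimate of the expected sum over unrestricted random
    orderings; for m edges and n words this converges to m * (n + 1) / 3."""
    rng = random.Random(seed)
    order = list(words)
    total = 0.0
    for _ in range(samples):
        rng.shuffle(order)
        total += sum_edge_lengths(order, edges)
    return total / samples
```

The closed form $m(n+1)/3$ for the unrestricted case is classical and serves as a sanity check; the projective and planar cases replace sampling with exact $O(n)$ formulas.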
We consider the approximation of some optimal control problems for the Navier-Stokes equation via a Dynamic Programming approach. These control problems arise in many industrial applications and are very challenging from the numerical point of view, since the semi-discretization of the dynamics corresponds to an evolution system of ordinary differential equations in very high dimension. The typical approach is based on the Pontryagin maximum principle and leads to a two-point boundary value problem. Here we present a different approach based on the value function and the solution of a Bellman equation, a challenging problem in high dimension. We mitigate the curse of dimensionality via a recent multilinear approximation of the dynamics coupled with a dynamic programming scheme on a tree structure. We discuss several aspects related to the implementation of this new approach and present some numerical examples to illustrate the results on classical control problems studied in the literature.
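To illustrate the dynamic-programming-on-a-tree ingredient in isolation, here is a toy backward Bellman recursion on a tree of trajectories; the dynamics, costs, and control set are invented for the example and have nothing to do with the Navier-Stokes setting.

```python
# Toy sketch of dynamic programming on a tree of trajectories: from each
# state, one branch per discrete control, with the value computed by the
# backward Bellman recursion. All model ingredients are illustrative.
controls = [-1.0, 0.0, 1.0]
dt, n_levels = 0.1, 5

def f(x, a):            # toy scalar dynamics
    return x + dt * (a - x)

def ell(x, a):          # toy running cost
    return dt * (x**2 + 0.1 * a**2)

def value(x, level):
    """Value at state x, `level` steps into the tree (Bellman recursion)."""
    if level == n_levels:
        return x**2     # terminal cost
    return min(ell(x, a) + value(f(x, a), level + 1) for a in controls)

print(value(1.0, 0))    # optimal cost-to-go from x = 1.0
```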
TSP is a classical NP-hard combinatorial optimization problem with many practical variants. LKH is one of the state-of-the-art local search algorithms for the TSP, and LKH-3 is a powerful extension of LKH that can solve many TSP variants. Both LKH and LKH-3 associate a candidate set with each city to improve efficiency, and offer two different methods, the $\alpha$-measure and POPMUSIC, to decide the candidate sets. In this work, we first propose a Variable Strategy Reinforced LKH (VSR-LKH) algorithm for the TSP, which combines three reinforcement learning methods (Q-learning, Sarsa, and Monte Carlo) with LKH. We further propose a new algorithm called VSR-LKH-3 that combines the variable strategy reinforcement learning method with LKH-3 for typical TSP variants, including the TSP with time windows (TSPTW) and the Colored TSP (CTSP). The proposed algorithms replace the inflexible traversal operations in LKH and LKH-3 and let the algorithms learn to make a choice at each search step by reinforcement learning. Both LKH and LKH-3, with either the $\alpha$-measure or POPMUSIC, can be significantly improved by our methods. Extensive experiments on 236 widely-used TSP benchmarks with up to 85,900 cities demonstrate the excellent performance of VSR-LKH, and VSR-LKH-3 also significantly outperforms the state-of-the-art heuristics for TSPTW and CTSP.
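As an illustration of the reinforcement learning ingredient, here is a generic epsilon-greedy tabular Q-learning fragment for choosing among a city's candidates; the state/action encoding, reward, and hyperparameters are illustrative assumptions, not VSR-LKH's exact design.

```python
import random

# Generic epsilon-greedy tabular Q-learning for picking the next candidate
# city; encoding, reward, and hyperparameters are illustrative assumptions.
Q = {}                                  # Q[(city, candidate)] -> value estimate
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = random.Random(0)

def choose(city, candidates):
    """Epsilon-greedy selection among the candidate set of `city`."""
    if rng.random() < eps:
        return rng.choice(candidates)
    return max(candidates, key=lambda c: Q.get((city, c), 0.0))

def update(city, cand, reward, next_city, next_cands):
    """One Q-learning update; `reward` could be the tour-length improvement."""
    best_next = max((Q.get((next_city, c), 0.0) for c in next_cands), default=0.0)
    old = Q.get((city, cand), 0.0)
    Q[(city, cand)] = old + alpha * (reward + gamma * best_next - old)
```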
In this work, we propose a novel framework for the numerical solution of time-dependent conservation laws with implicit schemes via primal-dual hybrid gradient methods. We solve an initial value problem (IVP) for the partial differential equation (PDE) by casting it as a saddle point of a min-max problem and using iterative optimization methods to find the saddle point. Our approach is flexible with respect to the choice of both the time and the spatial discretization scheme. It benefits from the implicit structure, gaining large regions of stability, and overcomes the time-step restriction that Courant--Friedrichs--Lewy (CFL) conditions (more precisely, von Neumann stability analysis) impose on explicit schemes. Moreover, it is highly parallelizable and easy to implement; in particular, no nonlinear inversions are required. Specifically, we illustrate our approach using finite difference and discontinuous Galerkin methods for the spatial discretization, and backward Euler and backward differentiation formulas for the implicit discretization in time. Numerical experiments illustrate the effectiveness and robustness of the approach. In future work, we will demonstrate that our idea of replacing an initial-value evolution equation with this primal-dual hybrid gradient approach has advantages in many other situations.
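The following toy sketch conveys the core idea on the simplest possible instance: one backward-Euler/upwind step of linear advection, written as the bilinear saddle-point problem $\min_u \max_p \, p^{\top}(Au - b)$ and solved with primal-dual hybrid gradient (Chambolle-Pock) iterations. The grid, time step, and step sizes are illustrative, and the paper's framework of course targets nonlinear conservation laws.

```python
import numpy as np

# Toy instance: one backward-Euler/upwind step of u_t + c u_x = 0, i.e. the
# linear system A u = b, recast as min_u max_p p^T (A u - b) and solved by
# primal-dual hybrid gradient (Chambolle-Pock). Parameters are illustrative.
n, c, dx, dt = 100, 1.0, 0.01, 0.05          # note dt >> dx/c: no CFL restriction
lam = c * dt / dx
A = (1 + lam) * np.eye(n) - lam * np.roll(np.eye(n), -1, axis=1)  # periodic upwind
b = np.exp(-100 * (np.linspace(0, 1, n) - 0.5) ** 2)              # state at time t_n

tau = sigma = 0.9 / np.linalg.norm(A, 2)     # ensures tau * sigma * ||A||^2 < 1
u = np.zeros(n); p = np.zeros(n); u_bar = u.copy()
for _ in range(20000):
    p = p + sigma * (A @ u_bar - b)          # dual ascent
    u_new = u - tau * (A.T @ p)              # primal descent
    u_bar = 2 * u_new - u                    # extrapolation
    u = u_new
print(np.linalg.norm(A @ u - b))             # small residual: implicit step solved
```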
Finite element methods based on cut cells are becoming increasingly popular because of their advantages over formulations based on body-fitted meshes for problems with moving interfaces. In such methods, the cells (or elements) which are cut by the interface between two different domains need to be integrated using special techniques in order to obtain optimal convergence rates and accurate fluxes across the interface. The adaptive integration technique, in which the cells are recursively subdivided, is one of the popular techniques for the numerical integration of cut cells due to its advantages over tessellation, particularly for problems involving complex geometries in three dimensions. Although adaptive integration does not impose any limitations on the representation of the geometry of immersed solids, as it requires only point location algorithms, it becomes computationally expensive when optimal convergence rates are to be recovered. This paper presents a comprehensive assessment of the adaptive integration of cut cells for applications in computational fluid dynamics and fluid-structure interaction (FSI). We assess the effect of the accuracy of the integration of cut cells on the convergence rates of the velocity and pressure fields, and then on the forces and displacements for FSI problems, by studying several examples in two and three dimensions. Taking the computational cost and the accuracy of the forces and displacements into account, we demonstrate that numerical results of acceptable accuracy for FSI problems involving laminar flows can be obtained with only a few levels of refinement. In particular, we show that three levels of adaptive refinement are sufficient for obtaining forces and displacements of acceptable accuracy for laminar FSI problems.
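A minimal sketch of the recursive-subdivision idea in two dimensions, integrating over the part of a cell where a level-set function is negative; the corner-sign classification and midpoint rule below are simplifications for illustration, where production codes use proper quadrature on leaf cells.

```python
# Sketch of adaptive (recursive-subdivision) integration of a cut cell:
# integrate f over the part of the square cell [x0, x0+h] x [y0, y0+h]
# where the level set phi < 0. Corner-sign classification is a heuristic.
def adaptive_integrate(f, phi, x0, y0, h, depth, max_depth=3):
    corners = [(x0, y0), (x0 + h, y0), (x0, y0 + h), (x0 + h, y0 + h)]
    inside = [phi(x, y) < 0 for (x, y) in corners]
    if all(inside):                      # cell (heuristically) fully inside
        return f(x0 + h / 2, y0 + h / 2) * h * h
    if not any(inside):                  # cell (heuristically) fully outside
        return 0.0
    if depth == max_depth:               # cut cell at finest level: midpoint rule
        xm, ym = x0 + h / 2, y0 + h / 2
        return f(xm, ym) * h * h if phi(xm, ym) < 0 else 0.0
    h2 = h / 2                           # cut cell: recurse into four children
    return sum(adaptive_integrate(f, phi, x, y, h2, depth + 1, max_depth)
               for (x, y) in [(x0, y0), (x0 + h2, y0),
                              (x0, y0 + h2), (x0 + h2, y0 + h2)])

# Example: area of the quarter disk x^2 + y^2 < 1 inside the unit square
area = adaptive_integrate(lambda x, y: 1.0,
                          lambda x, y: x * x + y * y - 1.0,
                          0.0, 0.0, 1.0, 0, max_depth=6)
print(area)   # approaches pi/4 ~ 0.7854 as max_depth grows
```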
In this work we address the reduction of face degrees of freedom (DOFs) for discrete elasticity complexes. Specifically, using serendipity techniques, we develop a reduced version of a recently introduced two-dimensional complex arising from traces of the three-dimensional elasticity complex. The keystone of the reduction process is a new estimate of symmetric tensor-valued polynomial fields in terms of boundary values, completed with suitable projections of internal values for higher degrees. We prove an extensive set of new results for the original complex and show that the reduced complex has the same homological and analytical properties as the original one. This paper also contains an appendix with proofs of general Poincar\'e--Korn-type inequalities for hybrid fields.
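For orientation, the classical prototype behind such Poincar\'e--Korn-type inequalities is the second Korn inequality, stated below in its standard form; the appendix proves versions adapted to hybrid fields.

```latex
% Second Korn inequality on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^d$:
% there exists $C > 0$ such that, for all $v \in H^1(\Omega)^d$,
\| \nabla v \|_{L^2(\Omega)}
  \le C \left( \| v \|_{L^2(\Omega)} + \| \boldsymbol{\varepsilon}(v) \|_{L^2(\Omega)} \right),
\qquad
\boldsymbol{\varepsilon}(v) := \tfrac12 \left( \nabla v + (\nabla v)^{\mathsf{T}} \right).
```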
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the classical and proposed estimators in terms of their power to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
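The abstract does not spell out the proposed estimators, so as a stand-in for the general idea of correcting for confounding, here is a sketch of one standard adjustment, inverse propensity weighting, with all names hypothetical.

```python
import numpy as np

# Illustrative sketch of one standard confounding adjustment, inverse
# propensity weighting (IPW), for the expected repayment under a changed
# credit decision. This is a generic textbook estimator, not the paper's.
def ipw_effect(y, t, propensity):
    """y: repayment outcomes; t: binary decision actually taken (1 = approve);
    propensity: estimated P(t = 1 | confounders) for each borrower."""
    y, t, e = map(np.asarray, (y, t, propensity))
    mu1 = np.mean(t * y / e)              # estimated mean repayment if all approved
    mu0 = np.mean((1 - t) * y / (1 - e))  # estimated mean repayment if all denied
    return mu1 - mu0                      # estimated causal effect of the decision
```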