In this paper we consider the spatial semi-discretization of conservative PDEs. Such finite-dimensional approximations of infinite-dimensional dynamical systems can be described as flows in suitable matrix spaces, which in turn leads to the need to solve polynomial matrix equations, a classical and important topic in both theoretical and applied mathematics. Solving these equations numerically is challenging due to the presence of several conservation laws, which our finite models incorporate and which must be retained while integrating the equations of motion. In the last thirty years, the theory of geometric integration has provided a variety of techniques to tackle this problem. These numerical methods require solving both direct and inverse problems in matrix spaces. We present two algorithms to solve a cubic matrix equation arising in the geometric integration of isospectral flows. This class of ODEs includes finite models of ideal hydrodynamics, plasma dynamics, and spin particles, which we use as test problems for our algorithms.
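A minimal sketch (not one of the paper's two algorithms) of why isospectral integrators work with similarity transforms: for a flow $L'=[B(L),L]$ the exact solution evolves by conjugation, so a Lie-group step $L \mapsto e^{hB}Le^{-hB}$ conserves the spectrum up to round-off, unlike a plain forward-Euler step. The choice $B(L)=[L,N]$ with an illustrative diagonal $N$ and step size $h$ are assumptions for the demo.

```python
# Generic illustration: one Lie-Euler step of an isospectral flow
# L' = [B(L), L] with B(L) = [L, N].  For symmetric L, N the bracket
# B is skew-symmetric, so Q = expm(h*B) is orthogonal and the update
# L <- Q L Q^T is a similarity transform that conserves the spectrum.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
L = M + M.T                      # symmetric initial matrix
N = np.diag(np.arange(5.0))      # illustrative fixed matrix
spec0 = np.sort(np.linalg.eigvalsh(L))

h = 1e-2
for _ in range(200):
    B = L @ N - N @ L            # B = [L, N], skew-symmetric here
    Q = expm(h * B)              # orthogonal, since B is skew
    L = Q @ L @ Q.T              # isospectral update

spec = np.sort(np.linalg.eigvalsh(L))
print(np.max(np.abs(spec - spec0)))   # round-off level: spectrum conserved
```

A naive explicit step `L + h*(B@L - L@B)` would instead let the eigenvalues drift at a rate proportional to `h`.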
We study algorithmic applications of a natural discretization for the hard-sphere model and the Widom-Rowlinson model in a region $\mathbb{V}\subset\mathbb{R}^d$. These models are used in statistical physics to describe mixtures of one or multiple particle types subject to hard-core interactions. For each type, particles follow a Poisson point process with a type-specific activity parameter (fugacity). The Gibbs distribution is the mixture of these point processes conditioned on no two particles being closer than a type-dependent distance threshold. A key quantity for understanding the Gibbs distribution is its normalizing constant, called the partition function. We give sufficient conditions under which the partition function of a discrete hard-core model on a geometric graph based on a point set $X \subset \mathbb{V}$ closely approximates that of such continuous models. Previously, this was only shown for the hard-sphere model on cubic regions $\mathbb{V}=[0, \ell)^d$ when the size of $X$ is exponential in the volume $\nu(\mathbb{V})$ of the region, limiting algorithmic applications. In the same setting, our refined analysis only requires a quadratic number of points, which we argue to be tight. We use our improved discretization results to approximate the partition functions of the hard-sphere model and the Widom-Rowlinson model efficiently in $\nu(\mathbb{V})$. For the hard-sphere model, we obtain the first quasi-polynomial deterministic approximation algorithm for the entire fugacity regime, for which, so far, only randomized approximations were known. Furthermore, we simplify a recently introduced fully polynomial randomized approximation algorithm. Similarly, we obtain the best known deterministic and randomized approximation bounds for the Widom-Rowlinson model. Moreover, we obtain approximate sampling algorithms for the respective spin systems within the same fugacity regimes.
Computing the top eigenvectors of a matrix is a problem of fundamental interest to various fields. While the majority of the literature has focused on analyzing the reconstruction error of low-rank matrices associated with the retrieved eigenvectors, in many applications one is interested in finding a single vector with a high Rayleigh quotient. In this paper we study the problem of approximating the top eigenvector. Given a symmetric matrix $\mathbf{A}$ with largest eigenvalue $\lambda_1$, our goal is to find a vector $\hat{\mathbf{u}}$ that approximates the leading eigenvector $\mathbf{u}_1$ with high accuracy, as measured by the ratio $R(\hat{\mathbf{u}})=\lambda_1^{-1}\,\hat{\mathbf{u}}^T\mathbf{A}\hat{\mathbf{u}}/(\hat{\mathbf{u}}^T\hat{\mathbf{u}})$. We present a novel analysis of the randomized SVD algorithm of \citet{halko2011finding} and derive tight bounds in many cases of interest. Notably, this is the first work that provides non-trivial bounds on $R(\hat{\mathbf{u}})$ for randomized SVD with any number of iterations. Our theoretical analysis is complemented with a thorough experimental study that confirms the efficiency and accuracy of the method.
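A hedged sketch of the randomized SVD idea of Halko et al. for a symmetric matrix (sample a Gaussian test matrix, run a few power iterations, orthonormalize, and diagonalize the small projected matrix), together with the ratio $R(\hat{\mathbf{u}})$ from the abstract. The sketch size `k`, iteration count `q`, and test matrix below are illustrative choices, not the paper's settings.

```python
# Sketch of randomized SVD (Halko et al. style) for the top eigenvector
# of a symmetric matrix A, evaluated by the Rayleigh-quotient ratio R.
import numpy as np

def rand_top_eigvec(A, k=5, q=4, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = A @ rng.standard_normal((n, k))   # random range sketch
    for _ in range(q):                    # power iterations sharpen the sketch
        Y, _ = np.linalg.qr(Y)            # re-orthonormalize for stability
        Y = A @ Y
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A @ Q                       # small k x k projected matrix
    _, V = np.linalg.eigh(B)              # Rayleigh-Ritz on the subspace
    return Q @ V[:, -1]                   # lift the top Ritz vector back

# Test matrix with a clear spectral gap at the top (lambda_1 = 1).
rng = np.random.default_rng(1)
n = 300
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.0, 0.2, n)
lam[-1] = 1.0
A = (W * lam) @ W.T

lam1 = np.linalg.eigvalsh(A)[-1]
u = rand_top_eigvec(A)
R = (u @ A @ u) / (u @ u) / lam1
print(R)   # close to 1 when the subspace captures u_1
```

Note that $R(\hat{\mathbf{u}})$ is insensitive to first-order errors in the direction of $\hat{\mathbf{u}}$: the Rayleigh quotient is off from $\lambda_1$ only quadratically in the angle to $\mathbf{u}_1$.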
Fractional differential equations (FDEs) describe the subdiffusive behavior of dynamical systems. Their non-local structure requires taking the whole evolution history into account during time integration, which causes additional memory costs for storing the history that grow in time. An alternative to a quadrature of the history integral is to approximate the fractional kernel by a sum of exponentials, which is equivalent to representing the FDE solution as a sum of solutions to a system of ODEs. One way to construct this system is to approximate the Laplace spectrum of the fractional kernel with a rational function. In this paper, we use the adaptive Antoulas--Anderson (AAA) algorithm for the rational approximation of the kernel spectrum, which yields only a small number of real-valued poles. We propose a numerical scheme based on this idea and study its stability and convergence properties. In addition, we apply the algorithm to a time-fractional Cahn-Hilliard problem.
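A toy illustration (not the paper's AAA-based scheme) of why the sum-of-exponentials form removes the growing memory: once the kernel is written as $k(t)=\sum_i w_i e^{-\lambda_i t}$, the history integral $I(t)=\int_0^t k(t-s)f(s)\,ds$ equals $\sum_i w_i z_i(t)$, where each auxiliary variable solves the local ODE $z_i'=-\lambda_i z_i + f(t)$, $z_i(0)=0$. The weights, poles, forcing, and step size below are illustrative assumptions.

```python
# Replace a history integral by local ODEs via a sum-of-exponentials kernel.
import numpy as np
from scipy.integrate import quad

w = np.array([0.7, 0.3])      # hypothetical weights
lam = np.array([1.0, 10.0])   # hypothetical real, positive poles
f = np.sin                    # illustrative forcing
T, h = 2.0, 1e-3
n_steps = int(round(T / h))

# March the auxiliary ODEs z_i' = -lam_i z_i + f(t) with an
# exponential-trapezoidal update; only z (fixed size) is stored.
E = np.exp(-lam * h)
z = np.zeros(2)
for k in range(n_steps):
    t = k * h
    z = E * z + 0.5 * h * (E * f(t) + f(t + h))
I_ode = float(w @ z)

# Reference: direct quadrature of the full history integral at t = T.
I_ref = sum(
    wi * quad(lambda s, li=li: np.exp(-li * (T - s)) * np.sin(s), 0.0, T)[0]
    for wi, li in zip(w, lam)
)
print(abs(I_ode - I_ref))   # small: the two computations agree
```

The ODE march costs O(1) memory per step, whereas a direct quadrature of the history integral must revisit all past values of $f$, which is exactly the trade-off the abstract describes.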
We present a novel energy-based numerical analysis of semilinear diffusion-reaction boundary value problems. Based on a suitable variational setting, the proposed computational scheme can be seen as an energy minimisation approach. More specifically, this procedure aims to generate a sequence of numerical approximations, which results from the iterative solution of related (stabilised) linearised discrete problems, and tends to a local minimum of the underlying energy functional. Simultaneously, the finite-dimensional approximation spaces are adaptively refined; this is implemented in terms of a new mesh refinement strategy in the context of finite element discretisations, which again relies on the energy structure of the problem under consideration, and does not involve any a posteriori error indicators. In combination, the resulting adaptive algorithm consists of an iterative linearisation procedure on a sequence of hierarchically refined discrete spaces, which we prove to converge towards a solution of the continuous problem in an appropriate sense. Numerical experiments demonstrate the robustness and reliability of our approach for a series of examples.
A key advantage of isogeometric discretizations is their accurate and well-behaved eigenfrequencies and eigenmodes. For degree two and higher, however, optical branches of spurious outlier frequencies and modes may appear due to boundaries or reduced continuity at patch interfaces. In this paper, we introduce a variational approach based on perturbed eigenvalue analysis that eliminates outlier frequencies without negatively affecting the accuracy in the remainder of the spectrum and modes. We then propose a pragmatic iterative procedure that estimates the perturbation parameters in such a way that the outlier frequencies are effectively reduced. We demonstrate that our approach allows for a much larger critical time-step size in explicit dynamics calculations. In addition, we show that the critical time-step size obtained with the proposed approach does not depend on the polynomial degree of spline basis functions.
In micro-fluidics not only does capillarity dominate but also thermal fluctuations become important. On the level of the lubrication approximation, this leads to a quasi-linear fourth-order parabolic equation for the film height $h$ driven by space-time white noise. The gradient flow structure of its deterministic counterpart, the thin-film equation, which encodes the balance between driving capillary and limiting viscous forces, provides the guidance for the thermodynamically consistent introduction of fluctuations. We follow this route on the level of a spatial discretization of the gradient flow structure. Starting from an energetically conformal finite-element (FE) discretization, we point out that the numerical mobility function introduced by Gr\"un and Rumpf can be interpreted as a discretization of the metric tensor in the sense of a mixed FE method with lumping. While this discretization was devised in order to preserve the so-called entropy estimate, we use it to show that the resulting high-dimensional stochastic differential equation (SDE) preserves pathwise and pointwise strict positivity, at least in the case of the physically relevant mobility function arising from the no-slip boundary condition. As a consequence, this discretization gives rise to a consistent invariant measure, namely a discretization of the Brownian excursion (up to the volume constraint), and thus features an entropic repulsion. The price to pay over more naive discretizations is that when writing the SDE in It\^o's form, which is the basis for the Euler-Maruyama time discretization, a correction term appears. To conclude, we perform various numerical experiments to compare the behavior of our discretization to that of the more naive finite difference discretization of the equation.
We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded degree graphs under a vertex-percolation subcriticality condition. We show that this subcriticality condition is optimal in the sense that the problem of (approximately) sampling weighted rooted graphlets becomes impossible for infinite graphs and intractable for finite graphs if the condition does not hold. We apply our rooted graphlet sampling algorithm as a subroutine to give a fast perfect sampling algorithm for polymer models and a fast perfect sampling algorithm for weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. We apply this polymer model algorithm to give improved sampling algorithms for spin systems at low temperatures on expander graphs and other structured families of graphs: under the least restrictive conditions known we give near linear-time algorithms, while previous algorithms in these regimes required large polynomial running times.
The theory of reinforcement learning currently suffers from a mismatch between its empirical performance and the theoretical characterization of its performance, with consequences for, e.g., the understanding of sample efficiency, safety, and robustness. The linear quadratic regulator with unknown dynamics is a fundamental reinforcement learning setting with significant structure in its dynamics and cost function, yet even in this setting there is a gap between the best known regret lower-bound of $\Omega_p(\sqrt{T})$ and the best known upper-bound of $O_p(\sqrt{T}\,\text{polylog}(T))$. The contribution of this paper is to close that gap by establishing a novel regret upper-bound of $O_p(\sqrt{T})$. Our proof is constructive in that it analyzes the regret of a concrete algorithm, and simultaneously establishes an estimation error bound on the dynamics of $O_p(T^{-1/4})$ which is also the first to match the rate of a known lower-bound. The two keys to our improved proof technique are (1) a more precise upper- and lower-bound on the system Gram matrix and (2) a self-bounding argument for the expected estimation error of the optimal controller.
Multi-material problems often exhibit complex geometries along with physical responses presenting large spatial gradients or discontinuities. In these cases, providing high-quality body-fitted finite element analysis meshes and obtaining accurate solutions remain challenging. Immersed boundary techniques provide elegant solutions for such problems. Enrichment methods alleviate the need for generating conforming analysis grids by capturing discontinuities within mesh elements. Additionally, increased accuracy of physical responses and geometry description can be achieved with higher-order approximation bases. In particular, using B-splines has become popular with the development of IsoGeometric Analysis. In this work, an eXtended IsoGeometric Analysis (XIGA) approach is proposed for multi-material problems. The computational domain geometry is described implicitly by level set functions. A novel generalized Heaviside enrichment strategy is employed to accommodate an arbitrary number of materials without artificially stiffening the physical response. Higher-order B-spline functions are used for both geometry representation and analysis. Boundary and interface conditions are enforced weakly via Nitsche's method, and a new face-oriented ghost stabilization methodology is used to mitigate numerical instabilities arising from small material integration subdomains. Two- and three-dimensional heat transfer and elasticity problems are solved to validate the approach. Numerical studies provide insight into the ability to handle multiple materials considering sharp-edged and curved interfaces, as well as the impact of higher-order bases and stabilization on the solution accuracy and conditioning.
In 2017, Krenn reported that certain problems related to the perfect matchings and colourings of graphs emerge out of studying the constructability of general quantum states using modern photonic technologies. He realized that if we can prove that the \emph{weighted matching index} of a graph, a parameter defined in terms of perfect matchings and colourings of the graph, is at most 2, that could lead to exciting insights into the potential of resources of quantum interference. Motivated by this, he conjectured that the weighted matching index of any graph is at most 2. The first result on this conjecture was by Bogdanov, who proved that the \emph{(unweighted) matching index} of graphs (non-isomorphic to $K_4$) is at most 2, thus classifying graphs non-isomorphic to $K_4$ into Type 0, Type 1 and Type 2. By definition, the weighted matching index of Type 0 graphs is 0. We give a structural characterization for Type 2 graphs, using which we settle Krenn's conjecture for Type 2 graphs. This characterization also yields a simple $O(|V||E|)$ time algorithm to find the unweighted matching index of any graph. In view of our work, Krenn's conjecture remains to be proved only for Type 1 graphs. We give upper bounds for the weighted matching index in terms of connectivity parameters for such graphs. Using these bounds, for a slightly simplified version, we settle Krenn's conjecture for the class of graphs with vertex connectivity at most 2 and the class of graphs with maximum degree at most 4. Krenn has been publicizing his conjecture in various ways since 2017. He has even declared a reward for a resolution of his conjecture. We hope that this article will popularize the problem among computer scientists.