We design, analyze, and implement a new conservative Discontinuous Galerkin (DG) method for the simulation of solitary wave solutions to the generalized Korteweg-de Vries (KdV) equation. The key feature of our method is the conservation, at the numerical level, of the mass, energy, and Hamiltonian that are conserved by exact solutions of all KdV equations. To our knowledge, this is the first DG method that conserves all three of these quantities, a property critical for the accurate long-time evolution of solitary waves. To achieve the desired conservation properties, our novel idea is to introduce two stabilization parameters in the numerical fluxes as new unknowns, which then allow us to enforce the conservation of energy and Hamiltonian in the formulation of the numerical scheme. We prove the conservation properties of the scheme and corroborate them with numerical tests. This idea of achieving conservation properties by implicitly defining penalization parameters, which are traditionally specified a priori, can serve as a framework for designing physics-preserving numerical methods for other types of problems.
Data discovery systems help users identify relevant data among large table collections. Users express their discovery needs either with a program or with a set of keywords. Programs support complex queries but require expertise; keyword search is accessible to a larger audience but limits the types of queries supported. An appealing alternative is learned discovery systems, which find tables given natural language questions. Unfortunately, these systems require a training dataset for each table collection, and because collecting training data is expensive, this limits their adoption. In this paper, we introduce a self-supervised approach to assemble training datasets and train learned discovery systems without human intervention. Doing so requires addressing several challenges, including the design of self-supervised strategies for data discovery, table representation strategies to feed to the models, and relevance models that work well with the synthetically generated questions. We combine all of the above contributions into a system, S2LD, that solves the problem end to end. The evaluation results demonstrate that the new techniques outperform state-of-the-art approaches on well-known benchmarks. All in all, the technique is a stepping stone towards building learned discovery systems. The code is open-sourced at https://github.com/TheDataStation/open_table_discovery.
The ability to deal with complex geometries and to go to higher orders of accuracy is the main advantage of space-time finite element methods. We therefore want to develop a solid background from which appropriate space-time methods can be constructed. In this paper, we treat time as another space direction, which is the central idea of space-time methods. First, we briefly discuss how the vectorial wave equation is derived from Maxwell's equations in a space-time structure, taking Ohm's law into account. Then we derive a space-time variational formulation for the vectorial wave equation using different trial and test spaces. This paper has two main goals. First, we prove unique solvability of the resulting Galerkin--Petrov variational formulation. Second, we analyze the discrete equivalent of the equation in a tensor-product setting and show conditional stability, i.e., a CFL condition. Understanding the vectorial wave equation and the corresponding space-time finite element methods is crucial for improving the existing theory of Maxwell's equations and paves the way to computations of more complicated electromagnetic problems.
Consider a Urysohn integral equation $x - \mathcal{K}(x) = f$, where $f$ and the integral operator $\mathcal{K}$, with a kernel of the type of Green's function, are given. When approximate solutions of the integral equation are computed by the Galerkin method, all integrals need to be evaluated by some numerical integration formula; this gives rise to the discrete version of the Galerkin method. For $r \geq 1$, a space of piecewise polynomials of degree $\leq r-1$ with respect to a uniform partition is chosen as the approximating space. For an appropriate choice of numerical integration formula, an asymptotic series expansion of the discrete iterated Galerkin solution is obtained at the partition points, and Richardson extrapolation is used to improve the order of convergence. In this way, the rate of convergence available in the continuous (exactly integrated) case is restored. A numerical example illustrates the theory.
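The extrapolation step above is classical Richardson extrapolation: given an asymptotic error expansion in the step size $h$, two approximations at $h$ and $h/2$ can be combined to cancel the leading error term. The following is a minimal sketch of that mechanism, using the composite trapezoid rule as a simple stand-in for the discrete Galerkin solution (the function names and the test integrand are illustrative, not from the paper):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n uniform subintervals (O(h^2) error)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def richardson(coarse, fine, order):
    """Combine approximations at step sizes h and h/2 to eliminate the
    leading O(h^order) term of the asymptotic error expansion."""
    return (2 ** order * fine - coarse) / (2 ** order - 1)

f = math.sin
exact = 1.0 - math.cos(1.0)            # integral of sin over [0, 1]
A_h = trapezoid(f, 0.0, 1.0, 8)        # step h
A_h2 = trapezoid(f, 0.0, 1.0, 16)      # step h/2
improved = richardson(A_h, A_h2, order=2)
# The extrapolated value is markedly closer to the exact integral than A_h2.
```

The same combination, applied to the discrete iterated Galerkin solution at the partition points, is what raises the order of convergence in the abstract above.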
When solving time-dependent hyperbolic conservation laws on cut cell meshes, one has to overcome the small cell problem: standard explicit time stepping is not stable on small cut cells if the time step is chosen with respect to the larger background cells. The domain of dependence (DoD) stabilization is designed to solve this problem in a discontinuous Galerkin framework. It adds a penalty term to the space discretization that restores proper domains of dependence. In this contribution, we introduce the DoD stabilization for solving the advection equation in 2d at higher orders. We show an $L^2$ stability result for the stabilized semi-discrete scheme for arbitrary polynomial degree $p$ and provide numerical results for convergence tests, indicating orders of $p+1$ in the $L^1$ norm and between $p+\frac{1}{2}$ and $p+1$ in the $L^{\infty}$ norm.
Stochastic alternating algorithms for bi-objective optimization are considered when optimizing two conflicting functions for which optimization steps have to be applied separately to each function. Such algorithms consist of applying a certain number of gradient or subgradient descent steps to each single objective at every iteration. In this paper, we show that stochastic alternating algorithms achieve a sublinear convergence rate of $\mathcal{O}(1/T)$, under strong convexity, for the determination of a minimizer of a weighted sum of the two functions, parameterized by the number of steps applied to each of them. An extension to the convex case is presented, for which the rate weakens to $\mathcal{O}(1/\sqrt{T})$. These rates also hold in the non-smooth case. Importantly, by varying the proportion of steps applied to each function, one can determine an approximation to the Pareto front.
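A minimal sketch of the alternating scheme, under illustrative assumptions (two strongly convex quadratics, additive Gaussian gradient noise as a stand-in for the stochastic gradients; the function names and step counts are hypothetical, not the paper's notation):

```python
import random

def grad_f1(x):  # gradient of f1(x) = (x - 1)^2
    return 2.0 * (x - 1.0)

def grad_f2(x):  # gradient of f2(x) = (x + 1)^2
    return 2.0 * (x + 1.0)

def alternating_descent(n1, n2, iters=4000, lr=0.01, seed=0):
    """Per iteration: n1 noisy gradient steps on f1, then n2 on f2.
    The iterates approach (approximately) the minimizer of the weighted
    sum n1*f1 + n2*f2, so the step proportion selects a Pareto point."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(iters):
        for _ in range(n1):
            x -= lr * (grad_f1(x) + 0.01 * rng.gauss(0.0, 1.0))
        for _ in range(n2):
            x -= lr * (grad_f2(x) + 0.01 * rng.gauss(0.0, 1.0))
    return x
```

For these quadratics, the weighted-sum minimizer is $(n_1 - n_2)/(n_1 + n_2)$, so sweeping the proportion $n_1 : n_2$ traces an approximation to the Pareto front, as the abstract describes.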
The property that the velocity $\boldsymbol{u}$ belongs to $L^\infty(0,T;L^2(\Omega)^d)$ is an essential requirement in the definition of energy solutions of models for incompressible fluids; it is, therefore, highly desirable that the solutions produced by discretisation methods are uniformly stable in the $L^\infty(0,T;L^2(\Omega)^d)$-norm. In this work, we establish that this is indeed the case for Discontinuous Galerkin (DG) discretisations (in time and space) of non-Newtonian implicitly constituted models with $p$-structure in general, assuming that $p\geq \frac{3d+2}{d+2}$; the time discretisation is equivalent to a Radau IIA Implicit Runge-Kutta method. To aid in the proof, we derive Gagliardo-Nirenberg-type inequalities on DG spaces, which might be of independent interest.
Flow in variably saturated porous media is typically modelled by the Richards equation, a nonlinear elliptic-parabolic equation that is notoriously challenging to solve numerically. In this paper, we propose a robust and fast iterative solver for the Richards equation. The solver relies on an adaptive algorithm, based on rigorously derived a posteriori indicators, for switching between two linearization methods: the L-scheme and Newton's method. Although a combined L-scheme/Newton strategy was introduced previously in [List & Radu (2016)], here, for the first time, we propose a reliable and robust criterion for switching between these schemes. The performance of the solver, which can in principle be applied to any spatial discretization and any linearization method, is illustrated through several numerical examples.
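The switching idea can be illustrated on a scalar model problem. This is a sketch only: the L-scheme below is the standard fixed-point iteration with a constant $L$ bounding the derivative, the switching rule is a simple residual-contraction test standing in for the paper's rigorously derived a posteriori indicators, and all names and parameters are assumptions:

```python
def solve_switching(F, dF, x0, L, tol=1e-10, max_iter=100, switch_ratio=0.5):
    """Hybrid L-scheme/Newton iteration for a scalar equation F(x) = 0.
    Start with the robust (but linearly convergent) L-scheme; switch to
    Newton once the residual contracts fast enough, and fall back to the
    L-scheme if a Newton step increases the residual."""
    x, use_newton = x0, False
    res = abs(F(x))
    for _ in range(max_iter):
        if res < tol:
            break
        x_new = x - F(x) / dF(x) if use_newton else x - F(x) / L
        res_new = abs(F(x_new))
        if use_newton and res_new > res:
            use_newton = False          # Newton step failed: revert to L-scheme
            continue
        if not use_newton and res_new < switch_ratio * res:
            use_newton = True           # fast contraction: trust Newton now
        x, res = x_new, res_new
    return x

# Model problem: F(x) = x^3 - 2, with L an upper bound for F' near the root.
root = solve_switching(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2,
                       x0=0.5, L=6.0)
```

The L-scheme carries the iterate safely towards the root; once Newton's fast local convergence kicks in, the criterion hands over to it, which is the robustness/speed trade-off the solver exploits.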
In this paper we consider distributed Linear-Quadratic Optimal Control Problems governed by Advection-Diffusion PDEs at high values of the P\'eclet number. In this situation, computational instabilities occur, in both the steady and unsteady cases. A Streamline Upwind Petrov-Galerkin technique is used in the optimality system to overcome these unpleasant effects. We apply a finite element discretization in an optimize-then-discretize approach. For the parabolic case, a space-time framework is considered, and stabilization is also applied in the bilinear forms involving time derivatives. We then build Reduced Order Models on this discretization procedure and analyze two possible settings: with and without stabilization in the online phase. To build the reduced bases for the state, control, and adjoint variables, we use a Proper Orthogonal Decomposition algorithm in a partitioned approach. The discussion is supported by computational experiments, where relative errors between the FEM and ROM solutions are studied together with the respective computational times.
The degree to which subjects differ from each other with respect to certain properties measured by a set of variables plays an important role in many statistical methods. For example, classification, clustering, and data visualization methods all require a quantification of differences in the observed values; we refer to the quantification of such differences as distance. An appropriate definition of a distance depends on the nature of the data and the problem at hand. For distances between numerical variables, many definitions exist that depend on the size of the observed differences. For categorical data, defining a distance is more complex, as there is no straightforward quantification of the size of the observed differences; consequently, many proposals exist for measuring differences based on categorical variables. In this paper, we introduce a general framework that allows for an efficient and transparent implementation of distances between observations on categorical variables. We show that several existing distances can be incorporated into the framework. Moreover, the framework quite naturally leads to the introduction of new distance formulations and allows for the implementation of flexible, case- and data-specific distance definitions. Furthermore, in a supervised classification setting, the framework can be used to construct distances that incorporate the association between the response and predictor variables and hence improve the performance of distance-based classifiers.
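To make the framework idea concrete, here is a minimal sketch, not the paper's actual formulation: a total distance assembled from per-variable dissimilarities, each of which may consult the variable's observed column. The overlap (simple matching) and Eskin measures are well-known categorical dissimilarities used as plug-in examples; the `distance` function and data are illustrative assumptions:

```python
def overlap(a, b, column):
    """Simple matching dissimilarity: 0 on a match, 1 on a mismatch."""
    return 0.0 if a == b else 1.0

def eskin(a, b, column):
    """Eskin dissimilarity: mismatches on variables with many observed
    categories n are penalised less, via 2 / (n^2 + 2)."""
    if a == b:
        return 0.0
    n = len(set(column))
    return 2.0 / (n * n + 2.0)

def distance(x, y, data, per_var):
    """Framework sketch: sum per-variable dissimilarities, each computed
    from the two values and the variable's full observed column."""
    return sum(per_var(a, b, col) for a, b, col in zip(x, y, zip(*data)))

data = [("red", "small"), ("blue", "small"), ("red", "large")]
d_overlap = distance(data[0], data[1], data, overlap)  # one mismatch
d_eskin = distance(data[0], data[1], data, eskin)      # 2/(2^2+2) = 1/3
```

Because each per-variable function sees the whole column, data-specific definitions (e.g. frequency-weighted, or response-association-weighted in the supervised setting) slot into the same interface.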
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case, and many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
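The "residual networks are discretisations" observation admits a very small sketch. Under illustrative assumptions (a toy one-hidden-layer vector field with untrained random weights, a fixed-step explicit Euler solver rather than the adaptive or reversible solvers the thesis surveys), a neural ODE looks like:

```python
import math
import random

def mlp(params, x):
    """A tiny one-hidden-layer network f_theta(x), the learned vector field."""
    W1, b1, W2, b2 = params
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

def odeint_euler(f, params, x0, t0, t1, steps):
    """Explicit Euler for dx/dt = f_theta(x): each update
    x_{k+1} = x_k + dt * f_theta(x_k) has exactly the form of a residual
    block, which is the 'ResNets are discretised ODEs' observation."""
    dt = (t1 - t0) / steps
    x = list(x0)
    for _ in range(steps):
        dx = f(params, x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

# Random (untrained) weights for a 2 -> 3 -> 2 vector field.
rng = random.Random(0)
W1 = [[rng.gauss(0.0, 1.0) for _ in range(2)] for _ in range(3)]
W2 = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(2)]
params = (W1, [0.0] * 3, W2, [0.0] * 2)
y = odeint_euler(mlp, params, x0=[0.5, -0.5], t0=0.0, t1=1.0, steps=100)
```

Training would fit `params` by backpropagating through the solve (directly or via the adjoint method), and production libraries replace the Euler loop with adaptive, reversible solvers; both are among the numerical-methods topics the thesis covers.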