We introduce a novel minimal-order hybrid Discontinuous Galerkin (HDG) method and a novel mass-conserving mixed stress (MCS) method for the approximation of incompressible flows. For this we employ the $H(\operatorname{div})$-conforming linear Brezzi-Douglas-Marini space and the lowest order Raviart-Thomas space for the approximation of the velocity and the vorticity, respectively. Our methods are based on the physically correct diffusive flux $-\nu \varepsilon(u)$ and provide exactly divergence-free discrete velocity solutions, optimal (pressure-robust) error estimates and a minimal number of coupling degrees of freedom. For the stability analysis we introduce a new Korn-like inequality for vector-valued, element-wise $H^1$ functions with continuous normal components. Numerical examples conclude the work: the theoretical findings are validated and the novel methods are compared in terms of condition numbers with respect to discrete stability parameters.
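For orientation, the symmetric-gradient diffusive flux mentioned above enters the incompressible (Navier--)Stokes momentum balance as follows; the notation is standard and, up to scaling conventions, not specific to this work:
$$
\varepsilon(u) = \tfrac{1}{2}\big(\nabla u + (\nabla u)^{\mathsf T}\big), \qquad
-\operatorname{div}\big(\nu\,\varepsilon(u)\big) + \nabla p = f, \qquad
\operatorname{div} u = 0 .
$$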
Recently, a machine learning approach to Monte Carlo simulations called Neural Markov Chain Monte Carlo (NMCMC) has been gaining traction. In its most popular form it uses neural networks to construct normalizing flows, which are then trained to approximate the desired target distribution. As this distribution is usually defined via a Hamiltonian or action, the standard learning algorithm requires estimation of the action gradient with respect to the fields. In this contribution we present another gradient estimator (and the corresponding PyTorch implementation) that avoids this calculation, thus potentially speeding up training for models with more complicated actions. We also study the statistical properties of several gradient estimators and show that our formulation leads to better training results.
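As a hedged sketch of how a gradient estimator can avoid differentiating the action, the following PyTorch snippet uses a REINFORCE-style (score-function) surrogate loss for the reverse Kullback-Leibler divergence to $p \propto e^{-S}$. The diagonal Gaussian "flow" and the quartic action are placeholders of ours, not the models studied in this contribution:

```python
import torch
from torch.distributions import Normal

# Toy stand-in for a normalizing flow: a diagonal Gaussian q_theta with
# learnable mean and log standard deviation (placeholders, not the paper's model).
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)

def action(x):
    # Toy quartic action; a lattice application would use e.g. a phi^4 action.
    return (0.5 * x**2 + 0.25 * x**4).sum(dim=1)

# Draw samples without tracking gradients through the sampling path.
with torch.no_grad():
    x = mu + torch.exp(log_sigma) * torch.randn(1024, 2)

# log q_theta(x) for the fixed samples x (differentiable w.r.t. mu, log_sigma).
log_q = Normal(mu, torch.exp(log_sigma)).log_prob(x).sum(dim=1)

# Score-function (REINFORCE-like) surrogate for the reverse KL to p ~ exp(-S):
# the weight (log q + S) is detached, so backpropagation never touches the action.
weight = (log_q + action(x)).detach()
loss = ((weight - weight.mean()) * log_q).mean()  # batch mean subtracted as a baseline
loss.backward()
print(mu.grad, log_sigma.grad)
```

Because the weight $(\log q + S)$ is detached, the action is only evaluated, never differentiated with respect to the fields, which is the property that makes such estimators attractive for models with complicated actions.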
We show how probabilistic numerics can be used to convert an initial value problem into a Gauss--Markov process parametrised by the dynamics of the initial value problem. Consequently, the often difficult problem of parameter estimation in ordinary differential equations is reduced to hyperparameter estimation in Gauss--Markov regression, which tends to be considerably easier. The method's relation to, and benefits over, classical numerical integration and gradient matching approaches are elucidated. In particular, the method can, in contrast to gradient matching, handle partial observations, and it has certain routes for escaping local optima that are not available to classical numerical integration. Experimental results demonstrate that the method is on par with or moderately better than competing approaches.
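As a hedged illustration (our notation, not necessarily that of this work), the construction can be summarised as placing a Gauss--Markov prior on the state, encoding the initial value problem
$$
\dot x(t) = f\big(x(t), \theta\big), \qquad x(0) = x_0,
$$
as information on which the prior is conditioned, and linking the resulting process to the (possibly partial) data via $y_j = H\, x(t_j) + \epsilon_j$. The unknown parameters $\theta$ then appear as hyperparameters of a Gauss--Markov regression model and can be estimated, e.g., by maximising (an approximation of) the marginal likelihood $p(y_{1:N} \mid \theta)$.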
For almost twenty years, modified Patankar--Runge--Kutta (MPRK) methods have proven to be efficient and robust numerical schemes that preserve positivity and conservativity of the production-destruction system irrespective of the chosen time step size. Due to these advantageous properties they are used in a wide variety of applications. Nevertheless, an analytic investigation of the stability of MPRK schemes has so far been missing, since the usual approach by means of Dahlquist's equation is not feasible. Therefore, we consider a positive and conservative 2D test problem and provide statements usable for a stability analysis of general positive and conservative time integration schemes based on center manifold theory. We use this approach to investigate the Lyapunov stability of the second-order MPRK22($\alpha$) and MPRK22ncs($\alpha$) schemes. We prove that MPRK22($\alpha$) schemes are unconditionally stable and derive the stability regions of MPRK22ncs($\alpha$) schemes. Finally, numerical experiments are presented, which confirm the theoretical results.
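For concreteness, a simple positive and conservative two-dimensional production-destruction system of the kind considered here is, for instance (a representative example of ours, not necessarily the exact test problem analysed in this work),
$$
y_1'(t) = y_2(t) - a\,y_1(t), \qquad y_2'(t) = a\,y_1(t) - y_2(t), \qquad a > 0,
$$
which is conservative since $(y_1 + y_2)' = 0$ and whose exact solution stays positive for positive initial data.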
We propose and explore a new, general-purpose method for the implicit time integration of elastica. Key to our approach is the use of a mixed variational principle. In turn, its finite element discretization leads to an efficient alternating projections solver with a superset of the desirable properties of many previous fast solution strategies. This framework fits a range of elastic constitutive models and remains stable across a wide span of timestep sizes and material parameters (including problems that are quasi-static and approximately rigid). It is efficient to evaluate and easily applicable to volume, surface, and rod models. We demonstrate the efficacy of our approach on a number of simulated examples across all three codomains.
We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, where the warping is attained by some continuous cumulative distribution function. This can be seen as a general framework in which conventional orthonormal wavelet estimation is the special case where the warping distribution is the standard uniform c.d.f. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings which are easy to interpret in practice. An application with a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating a power of the density (for instance, its square root) further improves the results.
Singular source terms in sub-diffusion equations may lead to unbounded solutions, which severely reduces the convergence order of existing time-stepping schemes. In this work, we propose two efficient time-stepping schemes for solving sub-diffusion equations with a class of source terms that are mildly singular in time. The first discretization is based on the Gr{\"u}nwald-Letnikov and backward Euler methods; a first-order error estimate in time is rigorously established for singular source terms and nonsmooth initial data. The second scheme, derived from the second-order backward differentiation formula (BDF), is proved to possess second-order accuracy in time. Furthermore, piecewise linear finite element and lumped mass finite element discretizations in space are applied and analyzed rigorously. Numerical investigations confirm our theoretical results.
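As a hedged illustration of the setting (our notation, not necessarily that of this work), a model sub-diffusion problem with a source term that is mildly singular in time reads
$$
\partial_t^{\alpha} u(x,t) - \Delta u(x,t) = f(t)\, g(x), \qquad 0 < \alpha < 1, \qquad
f(t) \sim t^{-\mu} \ \text{as } t \to 0^{+}, \quad 0 < \mu < 1,
$$
where $\partial_t^{\alpha}$ denotes a fractional time derivative of order $\alpha$ (e.g. of Caputo type); the blow-up of $f$ at $t = 0$ is what degrades the convergence order of standard time-stepping schemes.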
Normalizing Flows (NFs) are universal density estimators based on neural networks. However, this universality is limited: the density's support needs to be diffeomorphic to a Euclidean space. In this paper, we propose a novel method to overcome this limitation without sacrificing universality. The proposed method inflates the data manifold by adding noise in the normal space, trains an NF on this inflated manifold, and, finally, deflates the learned density. Our main result provides sufficient conditions on the manifold and the specific choice of noise under which the corresponding estimator is exact. Our method has the same computational complexity as NFs and does not require computing an inverse flow. We also show that, if the embedding dimension is much larger than the manifold dimension, noise in the normal space can be well approximated by Gaussian noise. This allows our method to be used to approximate arbitrary densities on unknown manifolds, provided that the manifold dimension is known.
In the present work, we first introduce a general framework for modelling complex multiscale fluids and then focus on the derivation and analysis of a new hybrid continuum-kinetic model. In particular, we combine conservation of mass and momentum for an isentropic macroscopic model with a kinetic representation of the microscopic behaviour. After introducing a small scale of interest, we compute the complex stress tensor by means of the Irving-Kirkwood formula. The latter requires an expansion of the kinetic distribution around an equilibrium state and a subsequent homogenization over the dynamics that are fast in time and small in space. For the new hybrid continuum-kinetic model, the results of a linear stability analysis indicate conditional stability in the relevant low-speed regimes and instability of higher modes in high-speed regimes. Extensive numerical experiments confirm that the proposed multiscale model can capture new phenomena of complex fluids that are not present in standard Newtonian fluids. Consequently, the proposed general technique can be successfully used to derive new, interesting systems combining the macro and micro structure of a given physical problem.
In this work, we propose a method that approximates an activation function over some domain by polynomials of a prescribed low degree. The main idea behind this method can be seen as an extension of the ordinary least squares method: the gradient of the activation function is included in the cost function to be minimized.
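A minimal sketch of such a gradient-augmented least-squares fit is given below, using NumPy; the activation (tanh), interval, degree, and weight lam are illustrative placeholders of ours, not values from this work:

```python
import numpy as np

def fit_poly_with_gradient(f, df, degree=4, a=-4.0, b=4.0, n=201, lam=1.0):
    """Fit a polynomial p to f on [a, b] by minimizing
    sum_i (p(x_i) - f(x_i))^2 + lam * sum_i (p'(x_i) - f'(x_i))^2."""
    x = np.linspace(a, b, n)
    # Vandermonde matrix for p(x) in the monomial basis c_0 + c_1 x + ... + c_d x^d.
    V = np.vander(x, degree + 1, increasing=True)
    # Matching matrix for p'(x): column k holds d/dx x^k = k x^(k-1).
    dV = np.zeros_like(V)
    for k in range(1, degree + 1):
        dV[:, k] = k * x ** (k - 1)
    # Stack value and (weighted) gradient residuals into a single least-squares system.
    A = np.vstack([V, np.sqrt(lam) * dV])
    rhs = np.concatenate([f(x), np.sqrt(lam) * df(x)])
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs

# Example: degree-4 fit of tanh on [-4, 4], matching both values and derivatives.
c = fit_poly_with_gradient(np.tanh, lambda x: 1.0 - np.tanh(x) ** 2)
print(c)
```

Setting lam = 0 recovers the ordinary least-squares fit, while larger values of lam trade pointwise accuracy of $p$ for accuracy of its derivative.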
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.