The focus of the present research is on the analysis of local energy stability of high-order (including split-form) summation-by-parts methods, e.g., with two-point entropy-conserving fluxes, approximating non-linear conservation laws. Our main finding is that local energy stability, i.e., the property that the numerical growth rate does not exceed the growth rate of the continuous problem, is not guaranteed even when the scheme is non-linearly stable, and that this may have adverse implications for simulation results. We show that entropy-conserving two-point fluxes are inherently locally energy unstable, as they can be dissipative or anti-dissipative. Unfortunately, these fluxes are at the core of many commonly used high-order entropy-stable extensions, including split-form summation-by-parts discontinuous Galerkin spectral element methods (or spectral collocation methods). For the non-linear Burgers equation, we further demonstrate numerically that such schemes cause exponential growth of errors during the simulation. Furthermore, we encounter similar abnormal behaviour for the compressible Euler equations, for a smooth exact solution of a density wave. Finally, for the same case, we demonstrate numerically that other commonly known split forms, such as the Kennedy and Gruber splitting, are also locally energy unstable.
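As a concrete illustration of the kind of two-point flux discussed in this abstract, the sketch below (an illustration, not the paper's code) evaluates the standard entropy-conservative flux for Burgers' equation, $f^*(u_L,u_R)=(u_L^2+u_Lu_R+u_R^2)/6$, and checks Tadmor's entropy-conservation condition for the square entropy; the random states are arbitrary.

```python
import numpy as np

def ec_flux(uL, uR):
    """Entropy-conservative two-point flux for Burgers' equation f = u^2/2,
    with the square entropy U = u^2/2 (Tadmor's condition)."""
    return (uL**2 + uL*uR + uR**2) / 6.0

# Tadmor's condition: (v_L - v_R) * f* = psi_L - psi_R,
# where v = u (entropy variable) and psi = u^3/6 (entropy flux potential).
rng = np.random.default_rng(0)
uL, uR = rng.normal(size=10), rng.normal(size=10)
lhs = (uL - uR) * ec_flux(uL, uR)
rhs = uL**3 / 6.0 - uR**3 / 6.0
assert np.allclose(lhs, rhs)
print("entropy-conservation condition satisfied for random states")
```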
In this paper we introduce a new approach to rigorously compute solutions of Cauchy problems for a class of semi-linear parabolic partial differential equations. Expanding solutions with Chebyshev series in time and Fourier series in space, we introduce a zero-finding problem $F(a)=0$ on a Banach algebra $X$ of Fourier-Chebyshev sequences, whose solution solves the Cauchy problem. The challenge lies in the fact that the linear part $\mathcal{L} = DF(0)$ has an infinite block diagonal structure with blocks becoming less and less diagonally dominant at infinity. We introduce analytic estimates to show that $\mathcal{L}$ is an invertible linear operator on $X$, and we obtain explicit, rigorous and computable bounds for the operator norm $\| \mathcal{L}^{-1}\|_{B(X)}$. These bounds are then used to verify the hypotheses of a Newton-Kantorovich type argument, which shows that the (Newton-like) operator $\mathcal{T}(a)=a - \mathcal{L}^{-1} F(a)$ is a contraction on a small ball centered at a numerical approximation of the solution of the Cauchy problem. The contraction mapping theorem yields a fixed point which corresponds to a classical (strong) solution of the Cauchy problem. The approach is simple to implement, numerically stable and applicable to a class of PDE models, which includes, for instance, Fisher's equation and the Swift-Hohenberg equation. We apply our approach to each of these models.
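In its simplest form, a contraction argument of this Newton-Kantorovich type reduces to checking a scalar inequality with computable bounds. The snippet below is a generic illustration (not the paper's implementation): given assumed bounds Y on the residual of the numerical approximation and Z(r) on the operator norm of the derivative of the Newton-like map over a ball of radius r, it searches for a radius at which the map is a self-mapping contraction; the numerical values are hypothetical.

```python
import numpy as np

def contraction_radius(Y, Z, radii):
    """Return the smallest r among `radii` with Y + Z(r)*r < r.

    Y    : bound on ||T(a_bar) - a_bar|| (residual of the numerical approximation)
    Z(r) : bound on sup ||DT(a)|| over the ball of radius r around a_bar
    If such an r exists, T maps the closed ball of radius r into itself and is
    a contraction there, so the Banach fixed-point theorem applies.
    """
    for r in radii:
        if Y + Z(r) * r < r:
            return r
    return None

# Toy illustration with hypothetical bounds (not values from the paper):
Y = 1e-8
Z = lambda r: 0.3 + 50.0 * r          # e.g. Z0 + Z2*r from a Lipschitz bound on DF
r = contraction_radius(Y, Z, np.logspace(-10, -2, 200))
print("validated radius:", r)
```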
We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1. This extends the domain of convergence of these methods significantly. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question of whether the Broyden matrices converge by showing that this is indeed the case in the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results for convergence of the Broyden matrices cannot be applied. High-precision numerical experiments confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results can be extended to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
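For context, a minimal implementation of the classical ("good") Broyden update is sketched below on a regular toy system; the singular setting, the preceding Newton-like step, and the accelerated schemes studied in the paper are not reproduced here.

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-12, max_iter=100):
    """Classical ('good') Broyden method: solve F(x) = 0 with rank-one Jacobian updates."""
    x, B = np.asarray(x0, dtype=float), np.asarray(B0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)                 # quasi-Newton step
        x = x + s
        y = F(x) - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)    # secant (rank-one) update
    return x, B

# Regular toy system with root (1, 1); B0 is the exact Jacobian at x0.
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
x0 = np.array([2.0, 0.5])
B0 = np.array([[2 * x0[0], 1.0], [1.0, 2 * x0[1]]])
x, _ = broyden(F, x0, B0)
print(x)    # should approach (1, 1)
```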
This paper studies generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD). In this regime, we derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple randomness sources of initialization, label noise, and data sampling (as well as stochastic gradients) with no closed-form solution, and also goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well for interpolation learning, and is able to characterize the double descent behavior by the unimodality of variance and monotonic decrease of bias. Besides, we also prove that the constant step-size SGD setting incurs no loss in convergence rate when compared to the exact minimal-norm interpolator, which provides a theoretical justification for using SGD in practice.
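To make the setting concrete, the snippet below sketches random features regression trained with constant step-size SGD (single pass, one sample per step). The data model, ReLU feature map, and step size are illustrative choices and not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 2000, 50, 400                    # samples, input dimension, random features

X = rng.normal(size=(n, d))
w_star = rng.normal(size=d) / np.sqrt(d)   # planted linear target
y = X @ w_star + 0.1 * rng.normal(size=n)  # labels with additive noise

W = rng.normal(size=(d, N)) / np.sqrt(d)   # fixed random first layer
phi = lambda Z: np.maximum(Z @ W, 0.0) / np.sqrt(N)   # ReLU random features

theta = np.zeros(N)                        # trainable second-layer weights
step = 0.5                                 # constant step size
for t in range(n):                         # single pass, one sample per SGD step
    f = phi(X[t:t+1])[0]
    theta -= step * (f @ theta - y[t]) * f # stochastic gradient of the squared loss

print("training MSE after one pass:", np.mean((phi(X) @ theta - y) ** 2))
```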
Asymmetric Numeral Systems (ANS) is a class of entropy encoders introduced by Duda that has had an immense impact on data compression, supplanting arithmetic and Huffman coding. The optimality of ANS was studied by Duda et al., but the precise asymptotic behaviour of its redundancy (in comparison to the entropy) was not completely understood. In this paper we establish an optimal bound on the redundancy for the tabled ANS (tANS), the most popular ANS variant. Given a sequence $a_1,\ldots,a_n$ of letters from an alphabet $\{0,\ldots,\sigma-1\}$ such that each letter $a$ occurs in it $f_a$ times and $n=2^r$, the tANS encoder using Duda's ``precise initialization'' to fill tANS tables transforms this sequence into a bit string of length (frequencies are not included in the encoding size): $$ \sum\limits_{a\in [0..\sigma)}f_a\cdot\log\frac{n}{f_a}+O(\sigma+r), $$ where $O(\sigma + r)$ can be bounded by $\sigma\log e+r$. The $r$-bit term is an encoder artifact indispensable to ANS; the rest incurs a redundancy of $O(\frac{\sigma}{n})$ bits per letter. We complement this bound by a series of examples showing that an $\Omega(\sigma+r)$ redundancy is necessary when $\sigma > n/3$, where $\Omega(\sigma + r)$ is at least $\frac{\sigma-1}{4}+r-2$. We argue that similar examples exist for any method that distributes letters in tANS tables using only the knowledge of frequencies. Thus, we refute Duda's conjecture that the redundancy is $O(\frac{\sigma}{n^2})$ bits per letter. We also propose a new variant of range ANS (rANS), called rANS with fixed accuracy, that is parameterized by $k \ge 1$. In this variant the integer division, which is unavoidable in rANS, is performed only in cases when its result belongs to $[2^k..2^{k+1})$. Hence, the division can be computed by faster methods provided $k$ is small. We bound the redundancy of rANS with fixed accuracy $k$ by $\frac{n}{2^k-1}\log e+r$.
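For orientation, the sketch below evaluates only the encoding-length bound stated above, $\sum_a f_a\log(n/f_a)+\sigma\log e+r$, for a given frequency vector and compares it with the empirical entropy term; it does not implement the tANS encoder itself, and the example frequencies are arbitrary.

```python
import math

def tans_length_bound(freqs):
    """Upper bound on the tANS code length (in bits) from the stated result:
    sum_a f_a*log2(n/f_a) + sigma*log2(e) + r, where n = sum(freqs) = 2^r."""
    n = sum(freqs)
    r = n.bit_length() - 1
    assert n == 1 << r, "the bound is stated for n = 2^r"
    sigma = len(freqs)
    empirical_entropy = sum(f * math.log2(n / f) for f in freqs if f > 0)
    return empirical_entropy, empirical_entropy + sigma * math.log2(math.e) + r

# Example: 4-letter alphabet, n = 2^10 symbols.
H, bound = tans_length_bound([512, 256, 128, 128])
print(f"empirical entropy: {H:.1f} bits, length bound: {bound:.1f} bits")
```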
The immersed boundary (IB) method is a non-body conforming approach to fluid-structure interaction (FSI) that uses an Eulerian description of the momentum, viscosity, and incompressibility of a coupled fluid-structure system and a Lagrangian description of the deformations, stresses, and resultant forces of the immersed structure. Integral transforms with Dirac delta function kernels couple Eulerian and Lagrangian variables. In practice, discretizations of these integral transforms use regularized delta function kernels, and although a number of different types of regularized delta functions have been proposed, there has been limited prior work to investigate the impact of the choice of kernel function on the accuracy of the methodology. This work systematically studies the effect of the choice of regularized delta function in several fluid-structure interaction benchmark tests using the immersed finite element/difference (IFED) method, which is an extension of the IB method that uses finite element structural discretizations combined with a Cartesian grid finite difference method for the incompressible Navier-Stokes equations. Further, many IB-type methods evaluate the delta functions at the nodes of the structural mesh, and this requires the Lagrangian mesh to be relatively fine compared to the background Eulerian grid to avoid leaks. The IFED formulation offers the possibility to avoid leaks with relatively coarse structural meshes by evaluating the delta function on a denser collection of interaction points. This study investigates the effect of varying the relative mesh widths of the Lagrangian and Eulerian discretizations. Although this study is done within the context of the IFED method, the effect of different kernels could be important not just for this method, but also for other IB-type methods more generally.
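As one concrete example of the regularized delta functions compared in such studies, the snippet below implements Peskin's standard four-point kernel (a commonly used choice in IB-type methods; the specific kernels examined in this paper may differ) and checks its discrete partition-of-unity property.

```python
import numpy as np

def ib4(r):
    """Peskin's four-point regularized delta function (1-D kernel, grid units)."""
    r = np.abs(np.asarray(r, dtype=float))
    phi = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    phi[inner] = (3.0 - 2.0*r[inner] + np.sqrt(1.0 + 4.0*r[inner] - 4.0*r[inner]**2)) / 8.0
    phi[outer] = (5.0 - 2.0*r[outer] - np.sqrt(-7.0 + 12.0*r[outer] - 4.0*r[outer]**2)) / 8.0
    return phi

# Discrete partition of unity: the kernel weights over the four nearest grid
# points sum to 1 for any position of the Lagrangian point within a cell.
for shift in np.linspace(0.0, 1.0, 11):
    weights = ib4(shift - np.array([-1.0, 0.0, 1.0, 2.0]))
    assert np.isclose(weights.sum(), 1.0)
print("four-point kernel satisfies the zeroth moment condition")
```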
In transient simulations of particulate Stokes flow, to accurately capture the interaction between the constituent particles and the confining wall, the discretization of the wall often needs to be locally refined in the region approached by the particles. Consequently, standard fast direct solvers lose their efficiency, since the linear system changes at each time step. This manuscript presents a new computational approach that avoids this issue by constructing a fast direct solver for the wall ahead of time, computing a low-rank factorization to capture the changes due to the refinement, and solving the problem on the refined discretization via a Woodbury formula. Numerical results illustrate the efficiency of the solver in accelerating particulate Stokes simulations.
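The core linear-algebra idea can be sketched generically: reuse a precomputed factorization of the original wall operator and fold the low-rank change due to refinement into a Woodbury solve. The snippet below is a generic illustration in which a dense LU factorization stands in for the fast direct solver; the matrices and dimensions are arbitrary.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, k = 500, 20                                   # system size, rank of the update

A = np.eye(n) + 0.1 * rng.normal(size=(n, n))    # original operator
lu = lu_factor(A)                                # stand-in for the fast direct solver

U = rng.normal(size=(n, k)) / np.sqrt(n)         # low-rank factors capturing the
V = rng.normal(size=(k, n)) / np.sqrt(n)         # change due to local refinement
b = rng.normal(size=n)

# Woodbury: (A + U V)^{-1} b = A^{-1} b - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1} b
Ainv_b = lu_solve(lu, b)
Ainv_U = lu_solve(lu, U)
x = Ainv_b - Ainv_U @ np.linalg.solve(np.eye(k) + V @ Ainv_U, V @ Ainv_b)

assert np.allclose((A + U @ V) @ x, b)
print("Woodbury solve matches the refined system")
```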
For the general class of residual distribution (RD) schemes, including many finite element (such as continuous/discontinuous Galerkin) and flux reconstruction methods, an approach to construct entropy conservative/dissipative semidiscretizations by adding suitable correction terms has been proposed by Abgrall (J.~Comp.~Phys. 372: pp. 640--666, 2018). In this work, the correction terms are characterized as solutions of certain optimization problems and are adapted to the SBP-SAT framework, focusing on discontinuous Galerkin methods. Novel generalizations to entropy inequalities, multiple constraints, and kinetic energy preservation for the Euler equations are developed and tested in numerical experiments. For all of these optimization problems, explicit solutions are provided. Additionally, the correction approach is applied for the first time to obtain a fully discrete entropy conservative/dissipative RD scheme. Here, the application of the deferred correction (DeC) method for the time integration is essential. This paper can be seen as describing a systematic method to construct structure-preserving discretizations, at least for the considered example.
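To give a flavour of such optimization problems, the snippet below solves a prototypical least-norm correction problem: find the smallest correction c (in the Euclidean norm) that is conservative (its components sum to zero) and produces a prescribed change r in the discrete entropy production, with explicit solution $c_i = r\,(v_i-\bar v)/\sum_j (v_j-\bar v)^2$. This is a generic scalar sketch, not the specific constraints, norms, or explicit solutions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=8)          # entropy variables at the degrees of freedom
r = 0.3                         # prescribed change in entropy production

# Explicit solution of: minimize ||c||^2  s.t.  sum(c) = 0  and  v @ c = r
vbar = v.mean()
c = r * (v - vbar) / np.sum((v - vbar) ** 2)

# Cross-check against the minimum-norm solution of the two linear constraints.
A = np.vstack([np.ones_like(v), v])
c_pinv = np.linalg.pinv(A) @ np.array([0.0, r])
assert np.allclose(c, c_pinv) and np.isclose(c.sum(), 0.0) and np.isclose(v @ c, r)
print("explicit least-norm correction verified")
```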
The aim of this paper is to study the recovery of a spatially dependent potential in a (sub)diffusion equation from overposed final time data. We construct a monotone operator one of whose fixed points is the unknown potential. The uniqueness of the identification is theoretically verified by using the monotonicity of the operator and a fixed point argument. Moreover, we show conditional stability in Hilbert spaces under suitable conditions on the problem data. Next, a fully discrete scheme is developed, using the Galerkin finite element method in space and a finite difference method in time, and a fixed point iteration is then applied to reconstruct the potential. We prove the linear convergence of the iterative algorithm by the contraction mapping theorem, and present a thorough error analysis for the reconstructed potential. Our derived \textsl{a priori} error estimate provides a guideline for choosing the discretization parameters according to the noise level. The analysis relies heavily on some suitable nonstandard error estimates for the direct problem as well as the aforementioned conditional stability. Numerical experiments are provided to illustrate and complement our theoretical analysis.
When assessing the performance of wireless communication systems operating over fading channels, one often encounters the problem of computing expectations of some functional of sums of independent random variables (RVs). The outage probability (OP) at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers is among the most important performance metrics that fall within this framework. In general, closed-form expressions of expectations of functionals applied to sums of RVs are out of reach. A naive Monte Carlo (MC) simulation is of course an alternative approach. However, this method requires a large number of samples for rare event problems (small OP values, for instance). Therefore, it is of paramount importance to use variance reduction techniques to develop fast and efficient estimation methods. In this work, we use importance sampling (IS), which is known for achieving a given accuracy requirement with fewer computations. Along these lines, we propose a state-dependent IS scheme based on a stochastic optimal control (SOC) formulation to calculate rare-event quantities that can be written as an expectation of some functional of sums of independent RVs. Our proposed algorithm is generic and applicable without any restriction on the univariate distributions of the different fading envelopes/gains or on the functional that is applied to the sum. We apply our approach to the Log-Normal distribution to compute the OP at the output of diversity receivers with and without co-channel interference. For each case, we show numerically that the proposed state-dependent IS algorithm compares favorably to most of the well-known estimators dealing with similar problems.
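As a point of reference for the kind of estimator being accelerated, the snippet below compares naive MC with a simple fixed mean-shift IS for the outage probability P(sum of i.i.d. Log-Normal gains <= gamma). This is only a simplified illustration, not the state-dependent SOC-based scheme proposed in the paper, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
L, mu, sigma, gamma = 4, 0.0, 1.0, 0.4   # diversity branches, Log-Normal parameters, threshold
M = 10**5                                # number of simulation runs

# Naive Monte Carlo: OP = P(sum_i exp(Y_i) <= gamma), Y_i ~ N(mu, sigma^2)
Y = rng.normal(mu, sigma, size=(M, L))
op_mc = np.mean(np.exp(Y).sum(axis=1) <= gamma)

# Fixed mean-shift importance sampling: sample Y_i ~ N(mu - theta, sigma^2),
# which pushes the sum toward the rare (outage) region, and reweight by the
# likelihood ratio prod_i exp(theta*(Y_i - mu)/sigma^2 + theta^2/(2*sigma^2)).
theta = 2.5
Yis = rng.normal(mu - theta, sigma, size=(M, L))
lr = np.exp((theta * (Yis - mu) / sigma**2 + theta**2 / (2 * sigma**2)).sum(axis=1))
weights = lr * (np.exp(Yis).sum(axis=1) <= gamma)
op_is, err_is = weights.mean(), weights.std(ddof=1) / np.sqrt(M)

print(f"naive MC estimate: {op_mc:.2e}")
print(f"IS estimate:       {op_is:.2e} +/- {err_is:.1e}")
```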
The positive definiteness of discrete time-fractional derivatives is fundamental to the numerical stability (in the energy sense) of time-fractional phase-field models. A novel technique is proposed to estimate the minimum eigenvalue of discrete convolution kernels generated by the nonuniform L1, half-grid based L1 and time-averaged L1 formulas of the fractional Caputo derivative. The main discrete tools are the discrete orthogonal convolution kernels and discrete complementary convolution kernels. Certain variational energy dissipation laws at the discrete levels of the variable-step L1-type methods are then established for the time-fractional Cahn-Hilliard model. They are shown to be asymptotically compatible, in the fractional order limit $\alpha\rightarrow1$, with the associated energy dissipation law for the classical Cahn-Hilliard equation. Numerical examples together with an adaptive time-stepping procedure are provided to demonstrate the effectiveness of the proposed methods.
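For concreteness, the sketch below assembles the convolution kernels of the standard variable-step (nonuniform) L1 formula for the Caputo derivative of order alpha in (0,1) and checks it on u(t) = t, for which the L1 approximation is exact; the half-grid based and time-averaged variants and the eigenvalue estimates from the paper are not reproduced.

```python
import numpy as np
from math import gamma

def l1_kernels(t, n, alpha):
    """Convolution kernels a_{n-k} of the nonuniform L1 formula at t_n:
    D^alpha u(t_n) ~= sum_{k=1}^{n} a_{n-k} * (u(t_k) - u(t_{k-1}))."""
    k = np.arange(1, n + 1)
    tau = t[k] - t[k - 1]
    return ((t[n] - t[k - 1])**(1 - alpha) - (t[n] - t[k])**(1 - alpha)) / (gamma(2 - alpha) * tau)

alpha = 0.5
t = np.sort(np.concatenate(([0.0], np.random.default_rng(0).uniform(0, 1, 30), [1.0])))
u = t.copy()                        # u(t) = t, so Caputo D^alpha u = t^(1-alpha)/Gamma(2-alpha)

n = len(t) - 1
a = l1_kernels(t, n, alpha)
approx = np.sum(a * (u[1:] - u[:-1]))
exact = t[n]**(1 - alpha) / gamma(2 - alpha)
assert np.isclose(approx, exact)
print("nonuniform L1 formula is exact for linear u:", approx, exact)
```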