The Fourier extension method, also known as the Fourier continuation method, approximates a non-periodic function on an interval by a truncated Fourier series whose period is larger than the length of the interval on which the function is defined. When the function being approximated is known at only finitely many points, the approximation is constructed as a projection based on this discrete set of points. In this paper we address the problem of estimating the absolute error of the approximation. The error can be expressed in terms of a system of discrete orthogonal polynomials on an arc of the unit circle, and these polynomials are then evaluated asymptotically using Riemann--Hilbert methods.
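To make the setup concrete, here is a minimal sketch of a discrete least-squares Fourier extension, not the construction analysed in the paper: the function `fourier_extension`, the test function $f(x)=x$, and the parameter choices ($T=2$, $n=20$, $m=200$ samples) are all illustrative assumptions.

```python
import numpy as np

def fourier_extension(f, n, T, m):
    """Least-squares Fourier extension of f on [-1, 1].

    Approximates f by a truncated Fourier series of period 2*T (T > 1),
    fitted by discrete least squares on m equispaced sample points.
    Returns a callable evaluating the approximation.
    """
    x = np.linspace(-1.0, 1.0, m)                 # sample points on the interval
    k = np.arange(-n, n + 1)                      # Fourier modes -n..n
    A = np.exp(1j * np.pi * np.outer(x, k) / T)   # collocation matrix
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)  # discrete projection
    return lambda t: np.exp(1j * np.pi * np.outer(np.atleast_1d(t), k) / T) @ c

# f(x) = x is non-periodic on [-1, 1]: a plain Fourier series (T = 1)
# suffers from the Gibbs phenomenon, while the extension (T = 2) does not.
f = lambda x: x
approx = fourier_extension(f, n=20, T=2.0, m=200)
t = np.linspace(-0.9, 0.9, 50)
err = np.max(np.abs(approx(t).real - f(t)))
```

The least-squares system is typically ill-conditioned for large $n$, which is one reason rigorous error estimates of the kind studied here are of interest.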
We examine a family of discrete probability distributions that describes the "spillage number" in the extended balls-in-bins model. The spillage number is defined as the number of balls that occupy their bins minus the total number of occupied bins. This probability distribution can be characterised as a normalised version of the expansion of the noncentral Stirling numbers of the second kind in terms of the central Stirling numbers of the second kind. Alternatively, it can be derived in a natural way from the extended balls-in-bins model. We derive generating functions for this distribution and its key moments. We also give a recursive algorithm for computing the probability mass values of the distribution. Finally, we examine the asymptotic behaviour of the spillage distribution and the performance of an approximation to the distribution.
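The expansion above is built on Stirling numbers of the second kind; as background, this is the classical triangular recurrence used to tabulate the central numbers $S(n,k)$ (the paper's recursive algorithm for the mass values themselves is specific to the spillage distribution and is not reproduced here):

```python
def stirling2(n_max):
    """Tabulate Stirling numbers of the second kind S(n, k) for
    0 <= k <= n <= n_max via the standard recurrence
        S(n, k) = k * S(n-1, k) + S(n-1, k-1),   S(0, 0) = 1.
    S(n, k) counts the partitions of an n-set into k non-empty blocks,
    i.e. the ways n distinguishable balls can occupy exactly k bins."""
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]
    return S

S = stirling2(6)
# e.g. S(4, 2) = 7 and S(6, 3) = 90
```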
In this paper, we present two variations of an algorithm for signal reconstruction from one-bit or two-bit noisy observations of the discrete Fourier transform (DFT). The one-bit observations of the DFT correspond to the sign of its real part, whereas the two-bit observations correspond to the signs of both the real and imaginary parts. We focus on images for analysis and simulations, thus using the sign of the 2D-DFT; this choice of signal class is inspired by previous work on this problem. The samples are affected by additive zero-mean noise of known distribution. We solve this signal estimation problem by designing an algorithm that uses contraction mapping, based on the Banach fixed point theorem, and we show that the expected mean squared error (MSE) in signal reconstruction is asymptotically proportional to the inverse of the sampling rate. Numerical tests with four benchmark images are provided to show the effectiveness of our algorithm. Various metrics for image reconstruction quality assessment, such as PSNR, SSIM, ESSIM, and MS-SSIM, are employed. On all four benchmark images, our algorithm outperforms the state of the art in all of these metrics by a significant margin.
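The Banach fixed point principle underlying the reconstruction can be sketched generically; the map `T`, the toy contraction `cos`, and the tolerances below are illustrative assumptions, not the paper's reconstruction operator.

```python
import numpy as np

def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- T(x) until successive iterates differ by < tol.
    If T is a contraction with Lipschitz constant q < 1, the Banach
    fixed point theorem guarantees convergence to the unique fixed
    point at the geometric rate q**n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.asarray(T(x))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction: cos maps [0, 1] into itself with Lipschitz constant
# sin(1) < 1, so the iteration converges to the unique fixed point of
# cos (the Dottie number, about 0.739085).
root = fixed_point(np.cos, x0=1.0)
```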
The FEAST eigensolver is extended to the computation of the singular triplets of a large matrix $A$ whose singular values lie in a given interval. The method is, in nature, a subspace iteration applied to an approximate spectral projector associated with the cross-product matrix $A^TA$; it constructs approximate left and right singular subspaces corresponding to the desired singular values, onto which $A$ is projected to obtain approximations to the desired singular triplets. The approximate spectral projectors are constructed using a Chebyshev--Jackson series expansion rather than contour integration and quadrature rules, and they are proven to be always symmetric positive semi-definite with eigenvalues in $[0,1]$. Compact estimates are established for the pointwise approximation errors of a specific step function that corresponds to the exact spectral projector, the accuracy of the approximate spectral projector, the number of desired singular triplets, the distance between the desired right singular subspace and the subspace generated at each iteration, and the convergence of the FEAST SVDsolver. Practical selection strategies are proposed for the series degree and the subspace dimension. Numerical experiments illustrate that the FEAST SVDsolver is robust and efficient.
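As a scalar illustration of the kind of approximation involved, the following sketch expands the step function of an interval in damped Chebyshev polynomials; the Jackson damping factors are written in one common form from the literature, and the interval and degree are illustrative, not the paper's choices.

```python
import numpy as np

def cheb_jackson_step(a, b, p):
    """Degree-p Chebyshev expansion, with Jackson damping, of the step
    function h(t) = 1 on [a, b] and 0 elsewhere in [-1, 1].  Applying h
    to a scaled cross-product matrix yields a spectral projector; the
    damping keeps the scalar approximation (essentially) within [0, 1]."""
    k = np.arange(p + 1)
    # Chebyshev coefficients of the indicator of [a, b] (exact integrals).
    c = np.empty(p + 1)
    c[0] = (np.arccos(a) - np.arccos(b)) / np.pi
    kk = k[1:]
    c[1:] = 2.0 * (np.sin(kk * np.arccos(a)) - np.sin(kk * np.arccos(b))) / (kk * np.pi)
    # Jackson damping factors (one common form).
    alpha = np.pi / (p + 2)
    g = ((1 - k / (p + 2)) * np.sin(alpha) * np.cos(k * alpha)
         + np.cos(alpha) * np.sin(k * alpha) / (p + 2)) / np.sin(alpha)
    def h(t):
        theta = np.arccos(np.clip(t, -1.0, 1.0))
        return np.cos(np.outer(np.atleast_1d(theta), k)) @ (g * c)
    return h

h = cheb_jackson_step(-0.3, 0.4, p=100)
# h(t) is close to 1 well inside [-0.3, 0.4] and close to 0 well outside.
```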
We utilize Cauchy's argument principle, in combination with the Jacobian of a holomorphic function in several complex variables and the first moment of a ratio of two correlated complex normal random variables, to prove explicit formulas for the density and the mean distribution of complex zeros of random polynomials spanned by orthogonal polynomials on the unit circle and on the unit disk. We then inquire into the consequences of their asymptotic evaluation.
The immersed boundary (IB) method is a non-body conforming approach to fluid-structure interaction (FSI) that uses an Eulerian description of the momentum, viscosity, and incompressibility of a coupled fluid-structure system and a Lagrangian description of the deformations, stresses, and resultant forces of the immersed structure. Integral transforms with Dirac delta function kernels couple Eulerian and Lagrangian variables. In practice, discretizations of these integral transforms use regularized delta function kernels, and although a number of different types of regularized delta functions have been proposed, there has been limited prior work to investigate the impact of the choice of kernel function on the accuracy of the methodology. This work systematically studies the effect of the choice of regularized delta function in several fluid-structure interaction benchmark tests using the immersed finite element/difference (IFED) method, which is an extension of the IB method that uses finite element structural discretizations combined with a Cartesian grid finite difference method for the incompressible Navier-Stokes equations. Further, many IB-type methods evaluate the delta functions at the nodes of the structural mesh, and this requires the Lagrangian mesh to be relatively fine compared to the background Eulerian grid to avoid leaks. The IFED formulation offers the possibility to avoid leaks with relatively coarse structural meshes by evaluating the delta function on a denser collection of interaction points. This study investigates the effect of varying the relative mesh widths of the Lagrangian and Eulerian discretizations. Although this study is done within the context of the IFED method, the effect of different kernels could be important not just for this method, but also for other IB-type methods more generally.
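For concreteness, here is one widely used regularized delta function of the kind compared in such studies, Peskin's standard four-point kernel; the offset `r = 0.37` in the usage example is an arbitrary illustrative choice.

```python
import numpy as np

def ib4(r):
    """Peskin's standard four-point regularized delta function kernel,
    one common choice of kernel in IB-type methods.  It is supported on
    [-2, 2], is non-negative, and satisfies sum_j ib4(r - j) = 1 for
    every real offset r (a discrete partition of unity)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 1.0
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1] ** 2)) / 8.0
    m2 = (r > 1.0) & (r <= 2.0)
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2] ** 2)) / 8.0
    return out

# Discrete partition of unity: the kernel weights over the four nearest
# grid points sum to one for any offset r in [0, 1).
r = 0.37
w = ib4(r - np.array([-1, 0, 1, 2]))
```

Different kernels trade support width, smoothness, and discrete moment conditions against each other, which is precisely the design space the study above explores.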
The Fr\'{e}chet distance is a well-studied similarity measure between curves that is widely used throughout computer science. Motivated by applications where curves stem from paths and walks on an underlying graph (such as a road network), we define and study the Fr\'{e}chet distance for paths and walks on graphs. When provided with a distance oracle of $G$ with $O(1)$ query time, the classical quadratic-time dynamic program can compute the Fr\'{e}chet distance between two walks $P$ and $Q$ in a graph $G$ in $O(|P| \cdot |Q|)$ time. We show that there are situations where the graph structure helps with computing Fr\'{e}chet distance: when the graph $G$ is planar, we apply existing (approximate) distance oracles to compute a $(1+\varepsilon)$-approximation of the Fr\'{e}chet distance between any shortest path $P$ and any walk $Q$ in $O(|G| \log |G| / \sqrt{\varepsilon} + |P| + \frac{|Q|}{\varepsilon } )$ time. We generalise this result to near-shortest paths, i.e., $\kappa$-straight paths, as we show how to compute a $(1+\varepsilon)$-approximation between a $\kappa$-straight path $P$ and any walk $Q$ in $O(|G| \log |G| / \sqrt{\varepsilon} + |P| + \frac{\kappa|Q|}{\varepsilon } )$ time. Our algorithmic results hold for both the strong and the weak discrete Fr\'{e}chet distance over the shortest path metric in $G$. Finally, we show that additional assumptions on the input, such as our assumption on path straightness, are indeed necessary to obtain truly subquadratic running time. We provide a conditional lower bound showing that the Fr\'{e}chet distance, or even its $1.01$-approximation, between arbitrary \emph{paths} in a weighted planar graph cannot be computed in $O((|P|\cdot|Q|)^{1-\delta})$ time for any $\delta > 0$ unless the Orthogonal Vectors Hypothesis fails. For walks, this lower bound holds even when $G$ is planar, unit-weight and has $O(1)$ vertices.
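The classical quadratic-time dynamic program mentioned above can be sketched as follows; here `dist` stands in for the $O(1)$-time distance oracle, and the one-dimensional usage example is purely illustrative.

```python
def discrete_frechet(P, Q, dist):
    """Classical O(|P|*|Q|) dynamic program for the strong discrete
    Frechet distance between point sequences P and Q, given a metric
    dist(p, q) (playing the role of the shortest-path distance oracle).
    D[i][j] is the Frechet distance between P[:i+1] and Q[:j+1]."""
    n, m = len(P), len(Q)
    INF = float("inf")
    D = [[INF] * m for _ in range(n)]
    D[0][0] = dist(P[0], Q[0])
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(
                D[i - 1][j] if i > 0 else INF,                # advance on P
                D[i][j - 1] if j > 0 else INF,                # advance on Q
                D[i - 1][j - 1] if i > 0 and j > 0 else INF,  # advance both
            )
            D[i][j] = max(best, dist(P[i], Q[j]))
    return D[n - 1][m - 1]

# Example with points on the real line and dist(a, b) = |a - b|:
d = discrete_frechet([0, 1, 2], [0, 2], lambda a, b: abs(a - b))
```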
In this paper, a nonlinear system of fractional ordinary differential equations with multiple time scales is investigated. We are interested in the efficient long-term computation of the solution. The main challenge is how to obtain the solution of the coupled problem at a lower computational cost. We analyse a multiscale method for the nonlinear system in which the fast system has a periodic applied force and the slow equation contains fractional derivatives, as a simplified model of atherosclerosis with plaque growth. A local periodic equation is derived to approximate the original system, and error estimates are given. A finite difference method is then designed to approximate both the original and the approximate problems. We construct four examples, including three with exact solutions and one following the original problem setting, to test the accuracy and computational efficiency of the proposed method. We observe that the computational time is greatly reduced and that the multiscale method performs very well in comparison with fully resolved simulation, even for small time scale separation. The larger the time scale separation is, the more effective the multiscale method becomes.
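As background on discretizing the fractional derivatives in the slow equation, here is the standard L1 finite difference approximation of a Caputo derivative of order $\alpha \in (0,1)$; this is a generic textbook scheme, not necessarily the one designed in the paper, and the parameters in the example are illustrative.

```python
import numpy as np
from math import gamma

def caputo_l1(u, tau, alpha):
    """L1 finite difference approximation of the Caputo fractional
    derivative of order alpha in (0, 1) at t_n = n*tau, given the
    samples u = [u(t_0), ..., u(t_n)] on a uniform grid of step tau."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights b_j
    diffs = u[1:][::-1] - u[:-1][::-1]              # u(t_{n-j}) - u(t_{n-j-1})
    return (b @ diffs) / (gamma(2 - alpha) * tau ** alpha)

# For u(t) = t the Caputo derivative is t^(1-alpha) / Gamma(2-alpha);
# the L1 scheme reproduces it exactly, since u is piecewise linear.
alpha, tau, n = 0.5, 0.01, 100
t = tau * np.arange(n + 1)
approx = caputo_l1(t, tau, alpha)
exact = t[-1] ** (1 - alpha) / gamma(2 - alpha)
```

The sum over the full history at every step is what makes long-term computation expensive and motivates multiscale acceleration.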
For the general class of residual distribution (RD) schemes, including many finite element (such as continuous/discontinuous Galerkin) and flux reconstruction methods, an approach to construct entropy conservative/dissipative semidiscretizations by adding suitable correction terms has been proposed by Abgrall (J.~Comp.~Phys. 372: pp. 640--666, 2018). In this work, the correction terms are characterized as solutions of certain optimization problems and are adapted to the SBP-SAT framework, focusing on discontinuous Galerkin methods. Novel generalizations to entropy inequalities, multiple constraints, and kinetic energy preservation for the Euler equations are developed and tested in numerical experiments. For all of these optimization problems, explicit solutions are provided. Additionally, the correction approach is applied for the first time to obtain a fully discrete entropy conservative/dissipative RD scheme. Here, the application of the deferred correction (DeC) method for the time integration is essential. This paper can be seen as describing a systematic method to construct structure-preserving discretizations, at least for the considered example.
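The flavour of such optimization problems can be sketched generically: find the minimal-norm correction satisfying a few linear constraints (e.g. conservation and a prescribed entropy contribution). The matrix `A`, the "entropy variables" `v`, and the defect `c` below are illustrative stand-ins, not the paper's actual constraints.

```python
import numpy as np

def min_norm_correction(A, c):
    """Minimal Euclidean-norm vector r solving the linear constraints
    A r = c, via the explicit formula r = A^T (A A^T)^{-1} c (the
    closed-form solution of min ||r|| subject to A r = c when the rows
    of A are linearly independent)."""
    return A.T @ np.linalg.solve(A @ A.T, c)

# Toy example: 5 nodal corrections required to sum to zero (row of
# ones: conservation) while producing a prescribed entropy contribution
# against illustrative "entropy variables" v.
v = np.array([0.2, -0.1, 0.4, 0.0, 0.3])
A = np.vstack([np.ones(5), v])
c = np.array([0.0, -0.05])   # conservation target, entropy defect
r = min_norm_correction(A, c)
```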
The aim of this paper is to study the recovery of a spatially dependent potential in a (sub)diffusion equation from overposed final time data. We construct a monotone operator one of whose fixed points is the unknown potential. The uniqueness of the identification is theoretically verified using the monotonicity of the operator and a fixed point argument. Moreover, we show conditional stability in Hilbert spaces under suitable conditions on the problem data. Next, a fully discrete scheme is developed, using the Galerkin finite element method in space and a finite difference method in time, and a fixed point iteration is then applied to reconstruct the potential. We prove the linear convergence of the iterative algorithm by the contraction mapping theorem and present a thorough error analysis for the reconstructed potential. Our derived \textsl{a priori} error estimate provides a guideline for choosing the discretization parameters according to the noise level. The analysis relies heavily on some suitable nonstandard error estimates for the direct problem, as well as on the aforementioned conditional stability. Numerical experiments are provided to illustrate and complement our theoretical analysis.
We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data. This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers which propose local updates.
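For binary variables, the gradient-informed proposal can be sketched as follows: a first-order Taylor expansion estimates the change in the log-likelihood $f$ from flipping each bit, and flips are proposed from a softmax over these estimates inside a Metropolis-Hastings correction. The pairwise model, its parameters, and the factor $1/2$ in the softmax are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_and_grad(x, W, b):
    """Unnormalized log-probability f(x) = x^T W x + b^T x of a binary
    pairwise model, with its gradient in the continuous relaxation."""
    return x @ W @ x + b @ x, (W + W.T) @ x + b

def gwg_step(x, W, b):
    """One Metropolis-Hastings step with a gradient-informed proposal:
    d_i = (1 - 2 x_i) * grad_i is a first-order estimate of the change
    in f from flipping bit i; flips are proposed from softmax(d / 2),
    and the acceptance ratio includes the reverse proposal probability."""
    fx, gx = f_and_grad(x, W, b)
    d = (1.0 - 2.0 * x) * gx
    q = np.exp(d / 2 - np.max(d / 2)); q /= q.sum()
    i = rng.choice(len(x), p=q)
    y = x.copy(); y[i] = 1.0 - y[i]
    fy, gy = f_and_grad(y, W, b)
    d_rev = (1.0 - 2.0 * y) * gy
    q_rev = np.exp(d_rev / 2 - np.max(d_rev / 2)); q_rev /= q_rev.sum()
    accept = np.exp(fy - fx) * q_rev[i] / q[i]
    return (y, fy) if rng.random() < accept else (x, fx)

# Toy run on a random 10-variable model.
D = 10
W = rng.normal(size=(D, D)) * 0.1
b = rng.normal(size=D)
x = (rng.random(D) < 0.5).astype(float)
for _ in range(200):
    x, fx = gwg_step(x, W, b)
```

Because the proposal only needs one gradient evaluation per step rather than a sweep over all coordinates, it scales to the high-dimensional settings described above.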