We present the numerical analysis of a finite element method (FEM) for one-dimensional Dirichlet problems involving the logarithmic Laplacian (the pseudo-differential operator that appears as a first-order expansion of the fractional Laplacian as the exponent $s\to 0^+$). Our analysis exhibits new phenomena in this setting; in particular, using recently obtained regularity results, we prove rigorous error estimates and provide a logarithmic order of convergence in the energy norm using suitable \emph{log}-weighted spaces. Numerical evidence suggests that this type of rate cannot be improved. Moreover, we show that the stiffness matrix of logarithmic problems can be obtained as the derivative of the fractional stiffness matrix evaluated at $s=0$. Lastly, we investigate the relationship between the discrete eigenvalue problem and its convergence to the continuous one.
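The observation that the logarithmic stiffness matrix is the $s$-derivative of the fractional stiffness matrix at $s=0$ can be probed numerically with a central difference in $s$. The matrix family below is a toy surrogate (entries $(1+|i-j|)^{-2s}$, chosen only because its derivative at $s=0$ is known in closed form), not the actual FEM matrices of the paper:

```python
import numpy as np

def toy_fractional_stiffness(s, n=5):
    # Toy surrogate for an s-dependent stiffness matrix K(s):
    # entries (1 + |i - j|)^(-2s).  NOT the actual FEM matrix,
    # just a family whose s-derivative at 0 is known exactly.
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return (1.0 + d) ** (-2.0 * s)

def log_stiffness_via_derivative(K, h=1e-5):
    # Central finite difference of s -> K(s) at s = 0.
    return (K(h) - K(-h)) / (2.0 * h)

L = log_stiffness_via_derivative(toy_fractional_stiffness)

# Exact derivative of the toy family at s = 0: -2 log(1 + |i - j|).
d = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
L_exact = -2.0 * np.log(1.0 + d)
```

The same finite-difference probe, applied to genuinely fractional stiffness matrices, is one way to cross-check an implementation of the logarithmic stiffness matrix against the derivative characterisation.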
We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated/assimilated into the solution through a nudging term added to the Picard iteration that penalizes the difference between the coarse-mesh interpolants of the true solution and of the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This was considered in the paper [Li et al. {\it CMAME} 2023], and we extend the methodology by improving the analysis to hold in the $L^2$ norm instead of a weighted $H^1$ norm whose weight depended on the coarse mesh width, and by covering the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results, and show that a very good strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
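The nudged iteration can be sketched on a toy fixed-point problem $x = g(x)$ standing in for the Navier-Stokes Picard step: each sweep relaxes the observed components toward the measurements. The mask, the nudging weight mu, and the contraction g below are all illustrative choices, not the paper's discretisation:

```python
import numpy as np

def cda_picard(g, y_obs, mask, mu=10.0, x0=None, iters=80):
    # Schematic CDA-Picard iteration for a fixed-point problem x = g(x):
    # each sweep relaxes the observed components (mask) toward the
    # measurements y_obs, mimicking the nudging penalty.
    x = np.zeros_like(y_obs) if x0 is None else x0.copy()
    for _ in range(iters):
        gx = g(x)
        # nudged update: (gx + mu * y_obs) / (1 + mu) on observed dofs
        x = np.where(mask, (gx + mu * y_obs) / (1.0 + mu), gx)
    return x

# Toy contraction with known fixed point, observed on half the components.
c = np.array([0.1, 0.2, 0.3, 0.4])
g = lambda x: 0.5 * np.cos(x) + c
x_true = np.zeros(4)
for _ in range(100):
    x_true = g(x_true)               # reference fixed point
mask = np.array([True, False, True, False])
x = cda_picard(g, x_true, mask, mu=10.0)
```

With noise-free data the true solution remains a fixed point of the nudged map, mirroring the convergence statement; noisy measurements shift the limit by an amount proportional to the noise.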
Covariance matrices of random vectors contain information that is crucial for modelling. Certain structures and patterns of the covariances (or correlations) may be used to justify parametric models, e.g., autoregressive models. Until now, there have been only a few approaches for testing such covariance structures systematically and in a unified way. In the present paper, we propose such a unified testing procedure, and we exemplify the approach with a large variety of covariance structure models. This includes common structures such as diagonal matrices, Toeplitz matrices, and compound symmetry, but also the more involved autoregressive matrices. We propose hypothesis tests for these structures, and we use bootstrap techniques for better small-sample approximation. The structure of the proposed tests invites adaptation to other covariance patterns by choosing the hypothesis matrix appropriately. We prove their correctness for large sample sizes. The proposed methods require only weak assumptions. With the help of a simulation study, we assess the small-sample properties of the tests. We also analyze a real data set to illustrate the application of the procedure.
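As a concrete (and much simplified) instance of such a test, the sketch below checks a compound-symmetry hypothesis with a parametric bootstrap. The Frobenius-distance statistic and Gaussian resampling are our illustrative choices, not the paper's actual procedure:

```python
import numpy as np

def cs_fit(S):
    # Project a covariance matrix onto compound symmetry:
    # common variance v on the diagonal, common covariance c off it.
    p = S.shape[0]
    v = np.mean(np.diag(S))
    c = S[~np.eye(p, dtype=bool)].mean()
    return v * np.eye(p) + c * (np.ones((p, p)) - np.eye(p))

def cs_bootstrap_test(X, B=200, seed=0):
    # Parametric bootstrap test of H0: compound symmetry.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    S0 = cs_fit(S)                            # fitted null model
    stat = np.linalg.norm(S - S0)             # Frobenius distance to H0
    null = []
    for _ in range(B):
        Xb = rng.multivariate_normal(np.zeros(p), S0, size=n)
        Sb = np.cov(Xb, rowvar=False)
        null.append(np.linalg.norm(Sb - cs_fit(Sb)))
    pval = (1 + sum(s >= stat for s in null)) / (B + 1)
    return stat, pval

rng = np.random.default_rng(42)
# Data that clearly violates compound symmetry (one variance much larger).
X_alt = rng.multivariate_normal(np.zeros(4),
                                np.diag([1.0, 1.0, 1.0, 16.0]), size=200)
stat, pval = cs_bootstrap_test(X_alt)
```

Swapping `cs_fit` for a projection onto another structure (diagonal, Toeplitz, ...) adapts the sketch to other hypotheses, echoing the role of the hypothesis matrix in the paper.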
Nurmuhammad et al. developed the Sinc-Nystr\"{o}m methods for initial value problems in which the solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
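The Sinc approximation underlying all of these methods can be demonstrated in its most basic form on the real line. This is the plain truncated sinc expansion, not the authors' SE/DE-transformed collocation schemes, and the step size h and truncation N are ad-hoc choices:

```python
import numpy as np

def sinc_approx(f, h, N):
    # Truncated Sinc expansion on the real line:
    #   f(x) ~ sum_{j=-N}^{N} f(jh) sinc((x - jh)/h),
    # where np.sinc(t) = sin(pi t)/(pi t).
    j = np.arange(-N, N + 1)
    fj = f(j * h)
    return lambda x: np.sum(
        fj * np.sinc((np.asarray(x, dtype=float)[..., None] - j * h) / h),
        axis=-1)

f = lambda x: np.exp(-x ** 2)          # rapidly decaying test function
fa = sinc_approx(f, h=0.4, N=25)
x = np.array([0.3, 1.1])
err = np.max(np.abs(fa(x) - f(x)))
```

For functions with suitable decay, refining h and N jointly yields the characteristic root-exponential convergence; the SE and DE transformations extend this behaviour to problems posed on finite or semi-infinite intervals.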
Variational quantum algorithms are crucial for the application of NISQ computers. Such algorithms require short quantum circuits, which are more amenable to implementation on near-term hardware, and many such methods have been developed. One of particular interest is the so-called variational quantum state diagonalization method, which constitutes an important algorithmic subroutine and can be used directly to work with data encoded in quantum states. In particular, it can be applied to discern the features of quantum states, such as entanglement properties of a system, or in quantum machine learning algorithms. In this work, we tackle the problem of designing a very shallow quantum circuit, required in the quantum state diagonalization task, by utilizing reinforcement learning (RL). We use a novel encoding method for the RL-state, a dense reward function, and an $\epsilon$-greedy policy to achieve this. We demonstrate that the circuits proposed by the reinforcement learning methods are shallower than the standard variational quantum state diagonalization algorithm and thus can be used in situations where hardware capabilities limit the depth of quantum circuits. The methods we propose in the paper can be readily adapted to address a wide range of variational quantum algorithms.
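Of the ingredients listed, the $\epsilon$-greedy policy is the easiest to isolate. The bandit toy below shows the explore/exploit rule on its own; the Gaussian rewards, fixed $\epsilon$, and incremental value estimates are illustrative and detached from the paper's circuit-design RL-state and dense reward:

```python
import numpy as np

def epsilon_greedy_bandit(means, eps=0.1, steps=5000, seed=0):
    # Epsilon-greedy action selection on a toy Gaussian bandit:
    # with probability eps explore uniformly at random, otherwise
    # exploit the arm with the highest running value estimate.
    rng = np.random.default_rng(seed)
    k = len(means)
    Q = np.zeros(k)      # value estimates
    n = np.zeros(k)      # pull counts
    for _ in range(steps):
        a = rng.integers(k) if rng.random() < eps else int(np.argmax(Q))
        r = rng.normal(means[a], 1.0)        # noisy reward
        n[a] += 1
        Q[a] += (r - Q[a]) / n[a]            # incremental mean update
    return Q, n

Q, n = epsilon_greedy_bandit([0.0, 0.5, 1.0])
```

The residual exploration rate is what lets the agent escape early misestimates, the same trade-off the RL circuit designer faces when a shallow candidate circuit looks deceptively good.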
In this paper, we prove convergence for contractive time discretisation schemes for semi-linear stochastic evolution equations with irregular Lipschitz nonlinearities, initial values, and additive or multiplicative Gaussian noise on $2$-smooth Banach spaces $X$. The leading operator $A$ is assumed to generate a strongly continuous semigroup $S$ on $X$, and the focus is on non-parabolic problems. The main result concerns convergence of the uniform strong error $$E_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|_X^p\Big)^{1/p} \to 0\quad (k \to 0),$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$ for final time $T>0$. This generalises previous results to a larger class of admissible nonlinearities and noise, as well as rough initial data from the Hilbert space case to more general spaces. We present a proof based on a regularisation argument. Within this scope, we extend previous quantified convergence results for more regular nonlinearity and noise from Hilbert to $2$-smooth Banach spaces. The uniform strong error cannot be estimated in terms of the simpler pointwise strong error $$E_k := \bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|_X^p\bigg)^{1/p},$$ which most of the existing literature is concerned with. Our results are illustrated for a variant of the Schr\"odinger equation, for which previous convergence results were not applicable.
Spatial regression models are central to the field of spatial statistics. Nevertheless, their estimation in the case of large and irregularly gridded spatial datasets presents considerable computational challenges. To tackle these computational problems, Arbia \citep{arbia_2014_pairwise} introduced a pseudo-likelihood approach (called pairwise likelihood, say PL) which required the identification of pairs of observations that are internally correlated, but mutually conditionally uncorrelated. However, while the PL estimators enjoy optimal theoretical properties, their practical implementation when dealing with data observed on irregular grids suffers from dramatic computational issues (connected with the identification of the pairs of observations) that, in most empirical cases, negatively counterbalance its advantages. In this paper we introduce an algorithm specifically designed to streamline the computation of the PL in large and irregularly gridded spatial datasets, dramatically simplifying the estimation phase. In particular, we focus on the estimation of Spatial Error models (SEM). Our proposed approach efficiently identifies pairs of spatial observations by exploiting the KD-tree data structure, and uses them to derive closed-form expressions for fast parameter approximation. To showcase the efficiency of our method, we provide an illustrative example using simulated data, demonstrating that the computational advantages over full-likelihood inference do not come at the expense of accuracy.
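The pairing step can be sketched with SciPy's KD-tree: greedily match each observation to its nearest not-yet-paired neighbour. This is an illustrative stand-in for the paper's pairing algorithm, not a reproduction of it:

```python
import numpy as np
from scipy.spatial import cKDTree

def greedy_nn_pairs(coords):
    # Greedy nearest-neighbour pairing via a KD-tree: each point is
    # matched with its closest not-yet-paired neighbour.  A KD-tree
    # query costs O(log n) on average, versus O(n) for a linear scan.
    tree = cKDTree(coords)
    n = len(coords)
    used = np.zeros(n, dtype=bool)
    pairs = []
    for i in range(n):
        if used[i]:
            continue
        # query progressively more neighbours until an unused one is found
        k = 2
        while True:
            _, idx = tree.query(coords[i], k=min(k, n))
            for j in np.atleast_1d(idx):
                if j != i and not used[j]:
                    pairs.append((i, int(j)))
                    used[i] = used[j] = True
                    break
            if used[i] or k >= n:
                break
            k *= 2
    return pairs

coords = np.random.default_rng(0).random((6, 2))   # 6 irregular locations
pairs = greedy_nn_pairs(coords)
```

For an even number of observations every point ends up in exactly one pair; each pair then contributes one closed-form term to the pairwise likelihood.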
The numerical solution of the generalized eigenvalue problem for a singular matrix pencil is challenging due to the discontinuity of its eigenvalues. Classically, such problems are addressed by first extracting the regular part through the staircase form and then applying a standard solver, such as the QZ algorithm, to that regular part. Recently, several novel approaches have been proposed to transform the singular pencil into a regular pencil by relatively simple randomized modifications. In this work, we analyze three such methods by Hochstenbach, Mehl, and Plestenjak that modify, project, or augment the pencil using random matrices. All three methods rely on the normal rank and do not alter the finite eigenvalues of the original pencil. We show that the eigenvalue condition numbers of the transformed pencils are unlikely to be much larger than the $\delta$-weak eigenvalue condition numbers, introduced by Lotz and Noferini, of the original pencil. This not only indicates favorable numerical stability but also reconfirms that these condition numbers are a reliable criterion for detecting simple finite eigenvalues. We also provide evidence that, from a numerical stability perspective, the use of complex instead of real random matrices is preferable even for real singular matrix pencils and real eigenvalues. As a side result, we provide sharp left tail bounds for a product of two independent random variables distributed with the generalized beta distribution of the first kind or Kumaraswamy distribution.
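The modification approach can be illustrated on a small singular pencil with known finite eigenvalues. Under our reading of the rank-completing idea, the same random complex vectors perturb both matrices; the particular pencil, seed, and weights below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)

# A singular 3x3 pencil A - lam*B of normal rank 2; its regular part
# carries the finite eigenvalues 1 and 2.
A = np.diag([1.0, 2.0, 0.0]).astype(complex)
B = np.diag([1.0, 1.0, 0.0]).astype(complex)

# Random rank-completing modification (k = n - normal rank = 1):
# the same complex vectors u, v perturb A and B with random weights.
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a = rng.standard_normal() + 1j * rng.standard_normal()
b = rng.standard_normal() + 1j * rng.standard_normal()
A_mod = A + a * np.outer(u, v.conj())
B_mod = B + b * np.outer(u, v.conj())

# QZ on the (now regular) modified pencil.
lam = eig(A_mod, B_mod, right=False)
```

The two finite eigenvalues of the original singular pencil reappear among the three eigenvalues of the regular modified pencil; the remaining eigenvalue is an artefact of the modification and must be discarded.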
A numerical algorithm has been developed for regularizing the solution of the source problem for the diffusion-logistic model, based on integral-type information about the process at fixed moments of time. The peculiarity of the problem under study is its discrete formulation in space and the impossibility of applying classical algorithms for its numerical solution. The regularization of the problem is based on A.N. Tikhonov's approach and on a priori information about the source of the process. The problem was posed in a variational formulation and solved by the global tensor optimization method. It is shown that, in the case of noisy data, regularization improves the accuracy of the reconstructed source.
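The effect of Tikhonov regularization with a priori information can be seen already in a linear least-squares toy. The forward operator, noise level, prior, and weight $\alpha$ below are invented for illustration; the paper's problem is nonlinear, discrete in space, and solved by tensor optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately ill-conditioned 6x6 forward operator
# with singular values 1 ... 1e-6.
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = U @ np.diag(np.logspace(0, -6, 6)) @ V.T

x_true = np.ones(6)
b = A @ x_true + 1e-3 * rng.standard_normal(6)    # noisy data

def tikhonov(A, b, alpha, x_prior):
    # min_x ||A x - b||^2 + alpha ||x - x_prior||^2, solved via the
    # normal equations (A^T A + alpha I) x = A^T b + alpha x_prior.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n),
                           A.T @ b + alpha * x_prior)

x_naive = np.linalg.solve(A, b)                   # no regularization
x_reg = tikhonov(A, b, alpha=1e-4, x_prior=np.zeros(6))
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The unregularized solve amplifies the data noise along the small singular directions, while the Tikhonov solution trades a modest bias toward the prior for stability, which is the qualitative behaviour reported for the noisy-data reconstructions.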
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, as well as theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.