In this paper, we study two kinds of structure-preserving splitting methods, namely a Lie--Trotter type splitting method and a finite difference type method, for the stochastic logarithmic Schr\"odinger equation (SlogS equation) via a regularized energy approximation. We first introduce a regularized SlogS equation with a small parameter $0<\epsilon\ll1$ which approximates the SlogS equation and avoids the singularity near zero density. Then we present a priori estimates, the regularized entropy and energy, and the stochastic symplectic structure of the proposed numerical methods. Furthermore, we derive both the strong convergence rates and the convergence rates of the regularized entropy and energy. To the best of our knowledge, this is the first result concerning the construction and analysis of numerical methods for stochastic Schr\"odinger equations with logarithmic nonlinearities.
In graph analysis, a classic task is computing similarity measures between (groups of) nodes. In latent space random graphs, nodes are associated with unknown latent variables. One may then seek to compute distances directly in the latent space, using only the graph structure. In this paper, we show that it is possible to consistently estimate entropic-regularized Optimal Transport (OT) distances between groups of nodes in the latent space. We provide a general stability result for entropic OT with respect to perturbations of the cost matrix. We then apply it to several examples of random graphs, such as graphons or $\epsilon$-graphs on manifolds. Along the way, we prove new concentration results for the so-called Universal Singular Value Thresholding estimator, and for the estimation of geodesic distances on a manifold.
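For concreteness, entropic-regularized OT on a given cost matrix is typically computed with Sinkhorn's algorithm. The following is a minimal sketch of that computation, not the paper's estimator; the cost matrix, marginal weights, regularization parameter, and iteration count are illustrative choices.

```python
import numpy as np

def sinkhorn_plan(C, a, b, eps=0.5, n_iter=1000):
    """Entropic-regularized OT: alternately rescale the Gibbs kernel
    K = exp(-C/eps) so that the transport plan matches the marginals a, b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # scale columns to match marginal b
        u = a / (K @ v)     # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
C = rng.random((4, 6))        # cost between two groups of nodes (illustrative)
a = np.full(4, 1 / 4)         # uniform weights on the first group
b = np.full(6, 1 / 6)         # uniform weights on the second group
P = sinkhorn_plan(C, a, b)
cost = float((P * C).sum())   # transport cost under the entropic plan
```

The stability result in the abstract is then about how `P` (and the associated cost) varies when `C` is replaced by an estimate built from the observed graph.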
In this article, we propose a higher order approximation to the Caputo fractional (C-F) derivative using a graded mesh, together with the standard central difference approximation for space derivatives, in order to obtain the approximate solution of time fractional partial differential equations (TFPDEs). The proposed approximation for the C-F derivative tackles the singularity at the origin effectively and is easily applicable to diverse problems. The stability analysis and truncation error bounds of the proposed scheme are discussed, and the required regularity of the solution is analyzed. A few numerical examples are presented to support the theory.
A Multiplicative-Exponential Linear Logic (MELL) proof-structure can be expanded into a set of resource proof-structures: its Taylor expansion. We introduce a new criterion characterizing (and deciding in the finite case) those sets of resource proof-structures that are part of the Taylor expansion of some MELL proof-structure, through a rewriting system acting both on resource and MELL proof-structures. We also prove semi-decidability of the type inhabitation problem for cut-free MELL proof-structures.
Solutions of time fractional partial differential equations generally exhibit a weak singularity near the initial time. In this article, we propose a method for solving the time fractional diffusion equation with a nonlocal diffusion term. The proposed method combines the L1 scheme on a graded mesh, the finite element method, and Newton's method. We discuss the well-posedness of the weak formulation at the discrete level and derive \emph{a priori} error estimates for the fully-discrete formulation in the $L^2(\Omega)$ and $H^1(\Omega)$ norms. Finally, some numerical experiments are conducted to validate the theoretical findings.
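To illustrate the first ingredient, the sketch below applies the L1 scheme on a graded mesh $t_k = T(k/N)^r$ to the scalar Caputo relaxation ODE $D_t^\alpha u = -u$, $u(0)=1$, rather than to the paper's nonlocal diffusion problem; the grading exponent $r=(2-\alpha)/\alpha$, mesh size, and test equation are illustrative choices. The graded mesh clusters points near $t=0$, where the solution has its weak singularity.

```python
import math
import numpy as np

def l1_graded_relaxation(alpha=0.5, T=1.0, N=400, r=None):
    """L1 scheme on the graded mesh t_k = T*(k/N)^r for D_t^alpha u = -u,
    u(0) = 1.  The exact solution is the Mittag-Leffler function
    E_alpha(-t^alpha); grading r = (2-alpha)/alpha restores the optimal
    O(N^{-(2-alpha)}) rate despite the weak singularity at t = 0."""
    if r is None:
        r = (2 - alpha) / alpha
    t = T * (np.arange(N + 1) / N) ** r
    u = np.empty(N + 1)
    u[0] = 1.0
    g = math.gamma(2 - alpha)
    for n in range(1, N + 1):
        # L1 weights: a_k = [(t_n-t_k)^{1-a} - (t_n-t_{k+1})^{1-a}] / (tau_k G(2-a))
        k = np.arange(n)
        a = ((t[n] - t[k]) ** (1 - alpha) - (t[n] - t[k + 1]) ** (1 - alpha)) \
            / ((t[k + 1] - t[k]) * g)
        # implicit step: sum_k a_k (u^{k+1} - u^k) = -u^n, solved for u^n
        hist = np.sum(a[:-1] * (u[1:n] - u[0:n - 1]))
        u[n] = (a[-1] * u[n - 1] - hist) / (a[-1] + 1.0)
    return t, u

t, u = l1_graded_relaxation(alpha=0.5, T=1.0, N=400)
```

For $\alpha = 1/2$ and $T = 1$ the exact value is $E_{1/2}(-1) = e\,\mathrm{erfc}(1) \approx 0.4276$, which the scheme reproduces to a few digits at this resolution.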
This paper focuses on stochastic saddle point problems with decision-dependent distributions in both the static and time-varying settings. These are problems whose objective is the expected value of a stochastic payoff function, where random variables are drawn from a distribution induced by a distributional map. For general distributional maps, finding saddle points is computationally burdensome, even if the distribution is known. To enable a tractable solution approach, we introduce the notion of equilibrium points -- which are saddle points for the stationary stochastic minimax problem that they induce -- and provide conditions for their existence and uniqueness. We demonstrate that the distance between the two classes of solutions is bounded provided that the objective has a strongly-convex-strongly-concave payoff and Lipschitz continuous distributional map. We develop deterministic and stochastic primal-dual algorithms and demonstrate their convergence to the equilibrium point. In particular, by modeling errors emerging from a stochastic gradient estimator as sub-Weibull random variables, we provide error bounds in expectation and in high probability that hold for each iteration; moreover, we show convergence to a neighborhood in expectation and almost surely. Finally, we investigate a condition on the distributional map -- which we call opposing mixture dominance -- that ensures the objective is strongly-convex-strongly-concave. Under this assumption, we show that primal-dual algorithms converge to the saddle points in a similar fashion.
This paper considers the temporal discretization of an inverse problem subject to a time fractional diffusion equation. Firstly, the convergence of the L1 scheme is established for an arbitrary sectorial operator of spectral angle $< \pi/2 $, that is, the resolvent set of this operator contains $ \{z\in\mathbb C\setminus\{0\}:\ |\operatorname{Arg} z|< \theta\}$ for some $ \pi/2 < \theta < \pi $. The relationship between the time fractional order $\alpha \in (0, 1)$ and the constants in the error estimates is precisely characterized, revealing that the L1 scheme is robust as $ \alpha $ approaches $ 1 $. Then an inverse problem of a fractional diffusion equation is analyzed, and the convergence analysis of a temporal discretization of this inverse problem is given. Finally, numerical results are provided to confirm the theoretical results.
This paper is concerned with efficient spectral solutions for weakly singular nonlocal diffusion equations with Dirichlet-type volume constraints. This type of equation contains an integral operator which typically has a singularity at the midpoint of the integral domain, and the approximation of such an integral operator is one of the essential difficulties in solving nonlocal equations. To overcome this difficulty, two-sided Jacobi spectral quadrature rules are proposed to develop a Jacobi spectral collocation method for the nonlocal diffusion equations. Rigorous convergence analysis of the proposed method is presented in $L^\infty$ norms, and we further prove that the Jacobi collocation solution converges to its corresponding local limit as nonlocal interactions vanish. Numerical examples are given to verify the theoretical results.
Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a basic constituent of statistical inference and machine learning. A modern method for estimating such divergences relies on parametrizing an empirical variational form by a neural network (NN) and optimizing over the parameter space. Such neural estimators are abundantly used in practice, but corresponding performance guarantees are partial and call for further exploration. In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. We explore this tradeoff for an estimator based on a shallow NN by means of non-asymptotic error bounds, focusing on four popular $\mathsf{f}$-divergences -- Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. The bounds reveal the tension between the NN size and the number of samples, and make it possible to characterize scaling rates thereof that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above with appropriate NN growth-rate are near minimax rate-optimal, achieving the parametric rate up to logarithmic factors.
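The variational form underlying such neural estimators can be illustrated with the Donsker-Varadhan representation of the KL divergence, $\mathrm{KL}(P\|Q) = \sup_f \mathbb{E}_P[f] - \log \mathbb{E}_Q[e^f]$. The sketch below evaluates this objective at the optimal witness $f^* = \log(dP/dQ)$ for two Gaussians, where the supremum is known in closed form; in a neural estimator the witness is instead a trained shallow NN. The distributions and sample sizes are illustrative choices, not the paper's setup.

```python
import numpy as np

def dv_objective(f, xp, xq):
    """Donsker-Varadhan objective: E_P[f] - log E_Q[exp(f)].
    Every f yields a lower bound on KL(P || Q); the sup over f attains it."""
    return f(xp).mean() - np.log(np.exp(f(xq)).mean())

rng = np.random.default_rng(0)
xp = rng.normal(0.5, 1.0, size=500_000)   # samples from P = N(0.5, 1)
xq = rng.normal(0.0, 1.0, size=500_000)   # samples from Q = N(0, 1)

# Optimal witness: f*(x) = log(dP/dQ)(x) = 0.5*x - 0.125; true KL = 0.125
kl_opt = dv_objective(lambda x: 0.5 * x - 0.125, xp, xq)
# A suboptimal witness only produces a smaller lower bound
kl_sub = dv_objective(lambda x: 0.1 * x, xp, xq)
```

The approximation/estimation tradeoff in the abstract concerns exactly this gap: a richer NN class brings the trained witness closer to $f^*$, while the Monte Carlo averages above carry the empirical estimation error.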
In this paper we analyze the Schwarz alternating method for unconstrained elliptic optimal control problems. We first discuss the convergence properties of the method in the continuous case and then apply the arguments to the finite difference discretization. In both cases, we prove that the Schwarz alternating method is convergent if its counterpart for an elliptic equation is convergent. Moreover, the convergence rate of the method for the elliptic equation in the maximum norm gives a uniform upper bound (with respect to the regularization parameter $\alpha$) on the convergence rate of the method for the optimal control problem, measured in the maximum norm of suitable error merit functions in the continuous case or vectors in the discrete case. Our numerical results corroborate the theory and show that as $\alpha$ decreases to zero, the method converges faster. We also offer an explanation of this phenomenon.
The gradient noise of Stochastic Gradient Descent (SGD) is considered to play a key role in its properties (e.g., escaping low potential points and regularization). Past research has indicated that the covariance of the SGD error arising from minibatching plays a critical role in determining its regularization and its escape from low potential points. However, how much the distribution of the error influences the behavior of the algorithm remains largely unexplored. Motivated by some new research in this area, we prove universality results by showing that noise classes that have the same mean and covariance structure as SGD via minibatching have similar properties. We mainly consider the Multiplicative Stochastic Gradient Descent (M-SGD) algorithm introduced by Wu et al., which has a much more general noise class than SGD via minibatching. We establish nonasymptotic bounds for the M-SGD algorithm mainly with respect to the Stochastic Differential Equation corresponding to SGD via minibatching. We also show that the M-SGD error is approximately a scaled Gaussian distribution with mean $0$ at any fixed point of the M-SGD algorithm. Finally, we establish bounds for the convergence of the M-SGD algorithm in the strongly convex regime.
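The mean/covariance matching behind such universality statements can be made concrete: for minibatch sampling without replacement, the covariance of the minibatch gradient estimator has a closed form, and a Gaussian surrogate with the same mean and covariance lies in the same noise class. The sketch below checks this numerically for fixed synthetic per-sample gradients at one point; it is an illustration of the matching, not Wu et al.'s full M-SGD algorithm, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, b = 50, 3, 10
G = rng.normal(size=(n, d))          # per-sample gradients g_i at a fixed point
g_bar = G.mean(axis=0)               # full-batch gradient

# Closed-form covariance of the minibatch mean (without replacement):
# Cov = (n-b)/(b(n-1)) * (1/n) * sum_i (g_i - g_bar)(g_i - g_bar)^T
S = (G - g_bar).T @ (G - g_bar) / n
C_analytic = (n - b) / (b * (n - 1)) * S

# Empirical covariance of the minibatch gradient estimator
M = 60_000
idx = np.argsort(rng.random((M, n)), axis=1)[:, :b]  # M random b-subsets
est = G[idx].mean(axis=1)
C_mb = np.cov(est.T)

# Gaussian surrogate noise with matched mean and covariance
L = np.linalg.cholesky(C_analytic + 1e-12 * np.eye(d))
gauss = g_bar + rng.normal(size=(M, d)) @ L.T
C_gauss = np.cov(gauss.T)
```

Both `C_mb` and `C_gauss` concentrate around `C_analytic`, so the two noise sources agree to second order; the universality results quantify how far this agreement carries over to the algorithm's dynamics.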