In this paper, we study power series with coefficients equal to a product of a generic sequence and an explicitly given function of a positive parameter expressible in terms of the Pochhammer symbols. Four types of such series are treated. We show that logarithmic concavity (convexity) of the generic sequence leads to logarithmic concavity (convexity) of the sum of the series with respect to the argument of the explicitly given function. The logarithmic concavity (convexity) is derived from a stronger property, namely, positivity (negativity) of the power series coefficients of the so-called generalized Tur\'{a}nian. Applications to special functions such as the generalized hypergeometric function and the Fox-Wright function are also discussed.
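For orientation, one common normalization of the objects involved (our paraphrase, not a quotation from the paper): writing $f_\mu(x)=\sum_{n\ge 0} a_n c_n(\mu)\, x^n$ with $(a_n)$ the generic sequence and $c_n(\mu)$ the explicit Pochhammer-type factor, the generalized Tur\'{a}nian takes the form
$$
\Delta_f(\alpha,\beta;\mu,x)=f_{\mu+\alpha}(x)\,f_{\mu+\beta}(x)-f_{\mu}(x)\,f_{\mu+\alpha+\beta}(x),\qquad \alpha,\beta>0,
$$
so that positivity (negativity) of its power series coefficients in $x$ implies logarithmic concavity (convexity) of $\mu\mapsto f_\mu(x)$.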
In this paper we analyze a method for approximating the first-passage-time density and the corresponding distribution function for a CIR process. The approximation is obtained by truncating a series expansion involving the generalized Laguerre polynomials and the gamma probability density. The suggested approach raises a number of numerical issues that depend strongly on the coefficient of variation of the first-passage-time random variable. These issues are examined, and solutions, also involving the first-passage-time distribution function, are proposed. Numerical results and comparisons with alternative approximation methods show the strengths and weaknesses of the proposed method. A general acceptance-rejection-like procedure that makes use of the approximation is presented; it allows the generation of first-passage-time data even when the distribution is unknown.
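As a rough illustration of this type of approximation (the coefficients, parameterization, and normalization below are hypothetical placeholders, not the paper's actual formulas), a truncated Laguerre-gamma expansion of a density can be coded as:

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def laguerre_gamma_density(t, coeffs, alpha, beta):
    """Gamma(alpha, beta) reference density multiplied by a truncated
    generalized-Laguerre correction series; coeffs are hypothetical."""
    x = np.asarray(t) / beta
    gamma_pdf = np.exp((alpha - 1.0) * np.log(x) - x - gammaln(alpha)) / beta
    series = sum(c * eval_genlaguerre(n, alpha - 1.0, x)
                 for n, c in enumerate(coeffs))
    return gamma_pdf * series
```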
This paper develops a flexible and computationally efficient multivariate volatility model that allows for dynamic conditional correlations and volatility spillover effects among financial assets. The new model has desirable properties such as identifiability and computational tractability for many assets. A sufficient condition for strict stationarity is derived for the new process. Two quasi-maximum likelihood estimation methods are proposed for the new model, with and without low-rank constraints on the coefficient matrices, and the asymptotic properties of both estimators are established. Moreover, a Bayesian information criterion with selection consistency is developed for order selection, and testing for volatility spillover effects is discussed in detail. The finite-sample performance of the proposed methods is evaluated in simulation studies for small and moderate dimensions. The usefulness of the new model and its inference tools is illustrated by two empirical examples involving 5 stock markets and 17 industry portfolios, respectively.
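To make the order-selection step concrete, a minimal sketch under generic assumptions (here `fit` is a hypothetical routine returning the maximized log-likelihood and parameter count for a candidate order, not the paper's estimator):

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: -2 log-likelihood + k log n."""
    return -2.0 * loglik + n_params * np.log(n_obs)

def select_order(fit, candidate_orders, n_obs):
    """Return the candidate order minimizing BIC; fit is a
    user-supplied (hypothetical) fitting routine."""
    return min(candidate_orders, key=lambda p: bic(*fit(p), n_obs))
```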
In this paper, we introduce a general constructive method to compute solutions of initial value problems of semilinear parabolic partial differential equations via semigroup theory and computer-assisted proofs. Once a numerical candidate for the solution is obtained via a finite-dimensional projection, Chebyshev series expansions are used to solve the linearized equations about the approximation, from which a solution map operator is constructed. Using this solution operator (which exists by semigroup theory), we define an infinite-dimensional contraction operator whose unique fixed point, together with its rigorous bounds, provides a local inclusion of the solution. Applying this technique over multiple time steps leads to constructive proofs of existence of solutions over long time intervals. As a first application, we study the Swift-Hohenberg equation in two and three dimensions, where we combine our method with explicit constructions of trapping regions to prove global existence of solutions of initial value problems converging asymptotically to nontrivial equilibria. A second application is the 2D Ohta-Kawasaki equation, for which we provide a framework for handling derivatives in the nonlinear terms.
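As a minimal illustration of the Chebyshev representation used in the time direction (a generic interpolation sketch, not the paper's rigorous-numerics implementation):

```python
import numpy as np

def cheb_coeffs(f, n, tau):
    """Chebyshev coefficients of f on [0, tau] from values at
    Chebyshev-Lobatto nodes (illustrative only)."""
    nodes = np.cos(np.pi * np.arange(n + 1) / n)   # nodes on [-1, 1]
    t = 0.5 * tau * (nodes + 1.0)                  # rescaled to [0, tau]
    return np.polynomial.chebyshev.chebfit(nodes, f(t), n)

c = cheb_coeffs(np.exp, 16, 1.0)   # e.g. expand exp(t) on [0, 1]
```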
Numerical solutions for flows in partially saturated porous media pose challenges related to the nonlinearity and elliptic-parabolic degeneracy of the governing Richards' equation. Iterative methods are therefore required to manage the complexity of the flow problem. Norms of successive corrections in the iterative procedure form sequences of positive numbers, so definitions of computational orders of convergence and theoretical results for abstract convergent sequences can be used to evaluate and compare different iterative methods. Within this framework, we analyze Newton's method and the $L$-scheme for an implicit finite element method (FEM), as well as the $L$-scheme for an explicit finite difference method (FDM). We also investigate the effect of Anderson Acceleration (AA) on both the implicit and the explicit $L$-schemes. For a two-dimensional test problem, we find that AA halves the number of iterations and makes the FEM scheme converge twice as fast. For the FDM approach, by contrast, AA does not reduce the number of iterations and even increases the computational effort. However, being explicit, the FDM $L$-scheme without AA is faster than, and as accurate as, the FEM $L$-scheme with AA.
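For concreteness, the basic $L$-scheme fixed-point update for a nonlinear system $F(u)=0$ can be sketched as follows (a generic template with an illustrative stopping rule based on the norms of successive corrections; the paper's FEM/FDM realizations are more elaborate):

```python
import numpy as np

def l_scheme(F, u0, L, tol=1e-8, max_iter=200):
    """Fixed-point iteration u_{k+1} = u_k - F(u_k)/L with a
    user-chosen stabilization constant L (illustrative sketch)."""
    u = np.asarray(u0, dtype=float)
    correction_norms = []
    for _ in range(max_iter):
        du = F(u) / L
        u = u - du
        correction_norms.append(np.linalg.norm(du))  # successive corrections
        if correction_norms[-1] < tol:
            break
    return u, correction_norms
```

The recorded correction norms are exactly the positive sequences to which the computational orders of convergence mentioned above can be applied.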
In this paper, we propose a novel approach for testing the equality of high-dimensional mean vectors of several populations via a weighted $L_2$-norm. We establish the asymptotic normality of the test statistics under the null hypothesis. We also explain theoretically why our test statistics can be highly useful in weakly dense cases, where a weak but dense nonzero signal is present in the mean vectors. Furthermore, we compare the proposed test with existing tests in simulation studies, demonstrating that the weighted $L_2$-norm-based test statistic exhibits favorable properties in terms of both size and power.
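A minimal two-sample sketch of a weighted $L_2$-norm statistic (the weights and the lack of centering/studentization here are illustrative simplifications; the paper's statistic differs):

```python
import numpy as np

def weighted_l2_stat(X, Y, w):
    """Weighted squared L2-norm of the difference of sample means;
    X, Y are (n, p) data matrices, w an illustrative length-p weight."""
    diff = X.mean(axis=0) - Y.mean(axis=0)
    return float(np.sum(w * diff**2))
```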
In this paper, we provide an analysis of a recently proposed multicontinuum homogenization technique. The analysis differs from those used in classical homogenization methods for several reasons. First, the cell problems in multicontinuum homogenization are constraint problems and cannot be directly substituted into the differential operator. Second, the problem contains high contrast that persists in the homogenized problem: the homogenized problem averages the microstructure while still containing the small parameter. In this analysis, we first build on our previous technique, CEM-GMsFEM, to define a CEM-downscaling operator that maps the multicontinuum quantities to an approximate microscopic solution. Under a regularity assumption on the multicontinuum quantities, we construct the downscaling operator and the homogenized multicontinuum equations using linear approximations of the multicontinuum quantities. The error analysis is based on a residual estimate for the homogenized equations and a well-posedness assumption on them.
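Schematically, the multicontinuum ansatz expands the microscopic solution in smooth macroscopic variables $U_i$ and cell functions obtained from constrained local problems; in its simplest form (our paraphrase of the general ansatz, stated here as an assumption),
$$
u(x)\approx\sum_i \phi^{i}(x)\,U_i(x)+\sum_{i,m}\phi^{i}_{m}(x)\,\partial_{x_m}U_i(x),
$$
where the cell functions $\phi^{i},\phi^{i}_{m}$ are computed in representative volumes subject to constraints. This constrained structure is precisely why the cell problems cannot be substituted directly into the differential operator as in classical homogenization.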
In this work we demonstrate that SVD-based model reduction techniques known from ordinary differential equations, such as the proper orthogonal decomposition, can be extended to stochastic differential equations in order to reduce the computational cost arising from both the high dimension of the considered stochastic system and the large number of independent Monte Carlo runs. We also extend the proper symplectic decomposition method to stochastic Hamiltonian systems, both with and without external forcing, and argue that preserving the underlying symplectic or variational structures results in more accurate and stable solutions that conserve energy better than those obtained with non-geometric approaches. We validate the proposed techniques with numerical experiments for a semi-discretization of the stochastic nonlinear Schr\"odinger equation and the Kubo oscillator.
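The SVD step itself is simple; a minimal POD sketch (the snapshot matrix, names, and projection below are illustrative, and the stochastic and symplectic variants in the paper require additional structure):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading r left singular vectors of the snapshot matrix
    (columns = states, e.g. gathered over Monte Carlo runs)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Reduce and lift a state x: z = basis.T @ x, x_approx = basis @ z.
```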
In this paper, we establish the partial correlation graph for multivariate continuous-time stochastic processes, assuming only that the underlying process is stationary and mean-square continuous with zero mean and an existing spectral density function. In the partial correlation graph, the vertices are the components of the process and the undirected edges represent partial correlations between the vertices. To define this graph, we first introduce the partial correlation relation for continuous-time processes and provide several equivalent characterisations. In particular, we establish that the partial correlation relation defines a graphoid. The partial correlation graph additionally satisfies the usual Markov properties, and the edges can be determined very easily via the inverse of the spectral density function. Throughout the paper, we compare and relate the partial correlation graph to the mixed (local) causality graph of Fasen-Hartmann and Schenk (2023a). Finally, as an example, we explicitly characterise and interpret the edges in the partial correlation graph for the popular multivariate continuous-time AR (MCAR) processes.
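As a numerical caricature of the edge criterion (the function `f` returning the spectral density matrix and the tolerance are hypothetical; the underlying criterion is the vanishing of inverse spectral density entries at almost all frequencies):

```python
import numpy as np

def pc_edges(f, freqs, tol=1e-10):
    """Edges (i, j) where the (i, j) entry of the inverse spectral
    density matrix exceeds tol at some sampled frequency."""
    d = f(freqs[0]).shape[0]
    present = np.zeros((d, d), dtype=bool)
    for w in freqs:
        present |= np.abs(np.linalg.inv(f(w))) > tol
    return [(i, j) for i in range(d) for j in range(i + 1, d)
            if present[i, j]]
```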
This paper studies entropic regularization in optimal transport as a smoothing method for Wasserstein estimators, through the prism of the classical tradeoff between approximation and estimation errors in statistics. Wasserstein estimators are defined as solutions of variational problems whose objective function involves an optimal transport cost between probability measures. Such estimators can be regularized by replacing the optimal transport cost with its regularized version, obtained by adding an entropy penalty on the transport plan. This regularization has a potentially significant smoothing effect on the resulting estimators. In this work, we investigate its potential benefits for the approximation and estimation properties of regularized Wasserstein estimators. Our main contribution is to discuss how entropic regularization may reach, at a lower computational cost, statistical performance comparable to that of unregularized Wasserstein estimators in statistical learning problems involving distributional data analysis. To this end, we present new theoretical results on the convergence of regularized Wasserstein estimators. We also study their numerical performance using simulated and real data in the supervised learning problem of proportion estimation in mixture models via optimal transport.
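The regularized cost at the heart of this tradeoff is computed by Sinkhorn iterations; a minimal sketch for discrete measures $a, b$ with cost matrix $C$ and regularization $\varepsilon$ (fixed iteration count and no log-domain stabilization, for brevity):

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps, n_iter=500):
    """Entropy-regularized OT: alternate scaling updates on the
    Gibbs kernel, then evaluate the cost of the resulting plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * C))
```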
We propose a local modification of the standard subdiffusion model by introducing an initial Fickian diffusion stage, which results in a multiscale diffusion model. The developed model resolves the incompatibility between the nonlocal operators in subdiffusion and the local initial conditions, and thus eliminates the initial singularity of the solutions of the subdiffusion model while retaining its heavy-tail behavior away from the initial time. The well-posedness of the model and high-order regularity estimates for its solutions are established via resolvent estimates, based on which the numerical discretization and its analysis are carried out. Numerical experiments are performed to substantiate the theoretical findings.
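For context on the discretization side, the classical L1 approximation of a Caputo derivative of order $0<\alpha<1$ on a uniform grid reads as follows (a textbook scheme shown for orientation; the multiscale model above calls for its own tailored discretization and analysis):

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    """Standard L1 approximation of the Caputo derivative of order
    alpha at the last grid point, given values u[0..n] with step dt."""
    n = len(u) - 1
    k = np.arange(n)
    b = ((k + 1)**(1 - alpha) - k**(1 - alpha)) / (gamma(2 - alpha) * dt**alpha)
    return float(np.sum(b[::-1] * np.diff(u)))
```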