Palm distributions are critical in the study of point processes. In the present paper we focus on a point process $\Phi$ defined as the superposition, i.e., sum, of two independent point processes, say $\Phi = \Phi_1 + \Phi_2$, and we characterize its Palm distribution. In particular, we show that the Palm distribution of $\Phi$ admits a simple mixture representation depending only on the Palm distributions of $\Phi_j$, $j=1, 2$, and the associated moment measures. Extensions to the superposition of multiple point processes, and to higher-order Palm distributions, are treated analogously.
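
As a schematic illustration (in our notation, not necessarily the paper's): writing $M_j$ for the first moment (intensity) measure of $\Phi_j$ and $M = M_1 + M_2$ for that of $\Phi$, a mixture representation of this type reads

```latex
P^{x}_{\Phi}
  \;=\; \frac{\mathrm{d}M_{1}}{\mathrm{d}M}(x)\,
        \bigl(P^{x}_{\Phi_{1}} \ast P_{\Phi_{2}}\bigr)
  \;+\; \frac{\mathrm{d}M_{2}}{\mathrm{d}M}(x)\,
        \bigl(P_{\Phi_{1}} \ast P^{x}_{\Phi_{2}}\bigr),
```

where $P^{x}$ denotes the Palm distribution at $x$, $\ast$ denotes superposition of independent realizations, and the mixture weights are Radon-Nikodym densities of the moment measures.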

We study conditional linear factor models in the context of asset pricing panels. Our analysis focuses on conditional means and covariances to characterize the cross-sectional and inter-temporal properties of returns and factors as well as their interrelationships. We also review the conditions outlined in Kozak and Nagel (2024) and show how the conditional mean-variance efficient portfolio of an unbalanced panel can be spanned by low-dimensional factor portfolios, even without assuming invertibility of the conditional covariance matrices. Our analysis provides a comprehensive foundation for the specification and estimation of conditional linear factor models.
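
In standard notation (a sketch; the paper's exact specification may differ), a conditional linear factor model and the conditional moments it implies take the form

```latex
r_{t+1} = \beta_{t}\, f_{t+1} + \varepsilon_{t+1},
\qquad
\mathbb{E}_{t}[r_{t+1}] = \beta_{t}\,\mathbb{E}_{t}[f_{t+1}],
\qquad
\operatorname{Var}_{t}(r_{t+1}) = \beta_{t}\,\Omega_{t}\,\beta_{t}^{\top} + \Gamma_{t},
```

where $\Omega_{t}$ and $\Gamma_{t}$ are the conditional covariance matrices of the factors and residuals, and the cross-sectional dimension of $r_{t+1}$ may vary with $t$ in an unbalanced panel.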

We show that the parameters of a $k$-mixture of inverse Gaussian or gamma distributions are algebraically identifiable from the first $3k-1$ moments, and rationally identifiable from the first $3k+2$ moments. Our proofs are based on Terracini's classification of defective surfaces, careful analysis of the intersection theory of moment varieties, and a recent result on sufficient conditions for rational identifiability of secant varieties by Massarenti--Mella.
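
The moment count $3k-1$ matches a parameter count: each inverse Gaussian or gamma component carries two parameters, and the mixing weights contribute $k-1$ free parameters, so

```latex
\#\{\text{parameters}\}
  \;=\; \underbrace{2k}_{\text{component parameters}}
  \;+\; \underbrace{k-1}_{\text{weights}}
  \;=\; 3k-1,
```

which is exactly the number of moments required for algebraic identifiability.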

This paper presents a convolution tensor decomposition based model reduction method for solving the Allen-Cahn equation. The Allen-Cahn equation is usually used to characterize phase separation or the motion of anti-phase boundaries in materials. Solving it is computationally expensive when high-resolution meshes and long-time integration are involved. To resolve these issues, the convolution tensor decomposition method is developed, in conjunction with a stabilized semi-implicit scheme for time integration. The development enables a powerful computational framework for high-resolution solutions of Allen-Cahn problems, and allows the use of relatively large time increments for time integration without violating the discrete energy law. To further improve the efficiency and robustness of the method, an adaptive algorithm is also proposed. Numerical examples have confirmed the efficiency of the method in both 2D and 3D problems. Orders-of-magnitude speedups were obtained with the method for high-resolution problems, compared to the finite element method. The proposed computational framework opens numerous opportunities for simulating complex microstructure formation in materials on large-volume high-resolution meshes at a deeply reduced computational cost.
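
As a minimal illustration of the stabilized semi-implicit time integration mentioned above (a 1D periodic spectral sketch, not the paper's convolution tensor decomposition method; the grid size, stabilization constant, and physical parameters are illustrative):

```python
import numpy as np

def allen_cahn_step(u, dt, eps2, S, k2):
    """One stabilized semi-implicit step for u_t = eps2*u_xx - (u^3 - u)
    on a periodic grid, solved in Fourier space.  The nonlinear term is
    treated explicitly; the stabilization S (assumed large enough,
    e.g. S = 2 for the quartic potential) permits large time increments
    without violating the discrete energy law."""
    rhs = np.fft.fft((1.0 + S * dt) * u - dt * (u**3 - u))
    u_hat = rhs / (1.0 + S * dt + dt * eps2 * k2)
    return np.real(np.fft.ifft(u_hat))

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k2 = np.fft.fftfreq(n, d=1.0 / n) ** 2     # integer wavenumbers, squared
u = 0.05 * np.cos(3 * x)                   # small initial perturbation
for _ in range(200):                       # integrate to t = 20 with dt = 0.1
    u = allen_cahn_step(u, dt=0.1, eps2=0.01, S=2.0, k2=k2)
# u has phase-separated into plateaus near +1 and -1
```

Note the relatively large increment dt = 0.1; a fully explicit scheme would require a far smaller step for stability.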

Any discrete distribution with support on $\{0,\ldots, d\}$ can be constructed as the distribution of sums of Bernoulli variables. We prove that the class of $d$-dimensional Bernoulli variables $\boldsymbol{X}=(X_1,\ldots, X_d)$ whose sums $\sum_{i=1}^dX_i$ have the same distribution $p$ is a convex polytope $\mathcal{P}(p)$ and we analytically find its extremal points. Our main result is to prove that the Hausdorff measure of the polytopes $\mathcal{P}(p), p\in \mathcal{D}_d,$ is a continuous function $l(p)$ over $\mathcal{D}_d$ and that it is the density of a finite measure $\mu_s$ on $\mathcal{D}_d$ that is Hausdorff absolutely continuous. We also prove that the measure $\mu_s$ normalized over the simplex $\mathcal{D}$ belongs to the class of Dirichlet distributions. We observe that the symmetric binomial distribution is the mean of the Dirichlet distribution on $\mathcal{D}$ and that, as $d$ increases, it converges to the mode.
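
The first claim can be checked constructively: spreading the mass $p(s)$ uniformly over all binary vectors with exactly $s$ ones yields an exchangeable element of $\mathcal{P}(p)$. A small sketch (function and variable names are ours):

```python
import itertools
from math import comb
import numpy as np

def exchangeable_joint(p):
    """Given a pmf p on {0, ..., d}, build the exchangeable joint pmf of
    d Bernoulli variables whose sum is distributed as p: the mass p[s]
    is spread uniformly over all binary vectors with exactly s ones.
    This exhibits one (in general non-extremal) point of the polytope
    P(p)."""
    d = len(p) - 1
    return {x: p[sum(x)] / comb(d, sum(x))
            for x in itertools.product((0, 1), repeat=d)}

p = [0.1, 0.2, 0.4, 0.3]                 # a pmf on {0, 1, 2, 3}
joint = exchangeable_joint(p)
# marginal distribution of X1 + ... + Xd, recovered from the joint pmf
sums = np.zeros(len(p))
for x, q in joint.items():
    sums[sum(x)] += q
```

The recovered distribution of the sum coincides with `p`, confirming that every pmf on $\{0,\ldots,d\}$ arises this way.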

We present a novel spatial discretization for the Cahn-Hilliard equation including transport. The method is given by a mixed discretization for the two elliptic operators, with the phase field and chemical potential discretized in discontinuous Galerkin spaces, and two auxiliary flux variables discretized in a divergence-conforming space. This allows for the use of an upwind-stabilized discretization for the transport term, while still ensuring a consistent treatment of structural properties including mass conservation and energy dissipation. Further, we couple the novel spatial discretization to an adaptive time stepping method in view of the Cahn-Hilliard equation's distinct slow and fast time scale dynamics. The resulting implicit stages are solved with a robust preconditioning strategy, which is derived for our novel spatial discretization based on an existing one for continuous Galerkin based discretizations. Our overall scheme's accuracy, robustness, efficient time adaptivity as well as structure preservation and stability with respect to advection dominated scenarios are demonstrated in a series of numerical tests.
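
In one common notation (schematic; the paper's exact mixed formulation may differ), the Cahn-Hilliard equation with transport and the two auxiliary fluxes read

```latex
\partial_{t}\varphi + \nabla\!\cdot(\varphi\,\boldsymbol{u})
  = \nabla\!\cdot\boldsymbol{q},
\qquad
\boldsymbol{q} = M\,\nabla\mu,
\qquad
\mu = \Psi'(\varphi) - \varepsilon^{2}\,\nabla\!\cdot\boldsymbol{\sigma},
\qquad
\boldsymbol{\sigma} = \nabla\varphi,
```

with the phase field $\varphi$ and chemical potential $\mu$ discretized in discontinuous Galerkin spaces and the fluxes $\boldsymbol{q}, \boldsymbol{\sigma}$ in a divergence-conforming space.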

The computational cost for inference and prediction of statistical models based on Gaussian processes with Mat\'ern covariance functions scales cubically with the number of observations, limiting their applicability to large data sets. The cost can be reduced in certain special cases, but there are currently no generally applicable exact methods with linear cost. Several approximate methods have been introduced to reduce the cost, but most of these lack theoretical guarantees for the accuracy. We consider Gaussian processes on bounded intervals with Mat\'ern covariance functions and for the first time develop a generally applicable method with linear cost and with a covariance error that decreases exponentially fast in the order $m$ of the proposed approximation. The method is based on an optimal rational approximation of the spectral density and results in an approximation that can be represented as a sum of $m$ independent Gaussian Markov processes, which facilitates efficient implementation in general statistical inference software. Besides the theoretical justifications, we demonstrate the accuracy empirically through carefully designed simulation studies, which show that the method outperforms all state-of-the-art alternatives in terms of accuracy for a fixed computational cost in statistical tasks such as Gaussian process regression.
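
For reference, the half-integer Mat\'ern covariances have simple closed forms; $\nu = 1/2$ gives the exponential kernel, whose Gaussian process is already Markov (an Ornstein-Uhlenbeck process) and can be viewed as the $m = 1$ building block of a sum-of-Markov-processes representation (a sketch in our notation):

```python
import numpy as np

def matern(r, ell=1.0, nu=0.5, sigma2=1.0):
    """Matérn covariance at distance r, for half-integer smoothness nu
    (closed forms exist for nu = 1/2, 3/2, 5/2).  nu = 0.5 is the
    exponential kernel, whose Gaussian process is Markov."""
    r = np.abs(r)
    if nu == 0.5:
        return sigma2 * np.exp(-r / ell)
    if nu == 1.5:
        a = np.sqrt(3.0) * r / ell
        return sigma2 * (1.0 + a) * np.exp(-a)
    if nu == 2.5:
        a = np.sqrt(5.0) * r / ell
        return sigma2 * (1.0 + a + a**2 / 3.0) * np.exp(-a)
    raise NotImplementedError("only half-integer nu in this sketch")

r = np.linspace(0.0, 3.0, 50)
c = matern(r, nu=1.5)        # smooth, monotonically decaying covariance
```

For general $\nu$ no such Markov structure is available directly, which is what motivates the rational approximation of the spectral density.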

Motivated by information geometry, a distance function on the space of stochastic matrices is proposed. Starting with sequences of Markov chains, the Bhattacharyya angle is advocated as the natural tool for comparing both short- and long-term Markov chain runs. Bounds on the convergence of the distance and on mixing times are derived. Guided by the desire to compare different Markov chain models, especially in the setting of healthcare processes, a new distance function on the space of stochastic matrices is presented. It is a true distance measure with a closed form that is efficient to evaluate numerically. In the case of ergodic Markov chains, it is shown that considering either the Bhattacharyya angle on Markov sequences or the new stochastic matrix distance leads to the same distance between models.
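
The Bhattacharyya angle between discrete distributions is straightforward to compute; for illustration we also form a naive row-wise distance on stochastic matrices (this is not the paper's new closed-form distance, which is not given in the abstract):

```python
import numpy as np

def bhattacharyya_angle(p, q):
    """Bhattacharyya angle arccos(BC(p, q)), where the Bhattacharyya
    coefficient is BC = sum_i sqrt(p_i * q_i); the angle is 0 iff
    p == q and pi/2 for disjoint supports."""
    bc = np.sum(np.sqrt(np.asarray(p, float) * np.asarray(q, float)))
    return float(np.arccos(np.clip(bc, 0.0, 1.0)))

def row_angle_distance(P, Q):
    """Illustrative only: the maximal Bhattacharyya angle between
    corresponding rows of two stochastic matrices."""
    return max(bhattacharyya_angle(P[i], Q[i]) for i in range(len(P)))

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # two 2-state transition matrices
Q = np.array([[0.7, 0.3], [0.2, 0.8]])
d = row_angle_distance(P, Q)             # positive: P and Q differ in row 0
```
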

The incompressible Euler equations are an important model system in computational fluid dynamics. Fast high-order methods for the solution of this time-dependent system of partial differential equations are of particular interest: due to their exponential convergence in the polynomial degree they can make efficient use of computational resources. To address this challenge we describe a novel timestepping method which combines a hybridised Discontinuous Galerkin method for the spatial discretisation with IMEX timestepping schemes, thus achieving high-order accuracy in both space and time. The computational bottleneck is the solution of a (block-) sparse linear system to compute updates to pressure and velocity at each stage of the IMEX integrator. Following Chorin's projection approach, this update of the velocity and pressure fields is split into two stages. As a result, the hybridised equation for the implicit pressure-velocity problem is reduced to the well-known system which arises in hybridised mixed formulations of the Poisson- or diffusion problem and for which efficient multigrid preconditioners have been developed. Splitting errors can be reduced systematically by embedding this update into a preconditioned Richardson iteration. The accuracy and efficiency of the new method is demonstrated numerically for two time-dependent testcases that have been previously studied in the literature.
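
The core of Chorin's projection approach is the pressure correction that makes the velocity field divergence-free. A minimal sketch on a 2D periodic grid in Fourier space (not the hybridised DG setting of the paper; the grid and test field are illustrative):

```python
import numpy as np

def project_divergence_free(u, v):
    """Pressure-projection step à la Chorin on a 2D periodic grid:
    solve a Poisson problem lap(phi) = div(u, v) in Fourier space,
    then subtract grad(phi) so the returned field is exactly
    divergence-free."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                          # mean mode: avoid 0/0
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat
    phi_hat = div_hat / (-k2)               # solves lap(phi) = div
    u_new = np.real(np.fft.ifft2(u_hat - 1j * kx * phi_hat))
    v_new = np.real(np.fft.ifft2(v_hat - 1j * ky * phi_hat))
    return u_new, v_new

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u0, v0 = np.cos(X) + 0.3 * np.sin(Y), np.sin(Y)   # not divergence-free
u_df, v_df = project_divergence_free(u0, v0)
```

In the paper's setting the analogous Poisson solve is the hybridised mixed system, which is where the multigrid preconditioners enter.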

We prove that multilevel Picard approximations are capable of approximating solutions of semilinear heat equations in the $L^{p}$-sense, ${p}\in [2,\infty)$, in the case of gradient-dependent, Lipschitz-continuous nonlinearities, in the sense that the computational effort of the multilevel Picard approximations grows at most polynomially in both the dimension $d$ and the reciprocal $1/\epsilon$ of the prescribed accuracy $\epsilon$.
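
Schematically, the equations in question and the claimed complexity bound are of the form

```latex
\partial_{t} u(t,x) \;=\; \Delta u(t,x) + f\bigl(u(t,x), \nabla u(t,x)\bigr),
\qquad
\operatorname{cost}(d,\epsilon) \;\le\; C\, d^{c}\, \epsilon^{-c},
```

with $f$ Lipschitz continuous in both arguments and constants $C, c$ independent of the dimension $d$ and the accuracy $\epsilon$; it is this dimension-independence of the exponents that breaks the curse of dimensionality.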

Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to deal with paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at: https://github.com/savranciati/pdglasso.
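
The scalar building block of ADMM solvers for fused-lasso penalties is the proximal operator of $\lambda_1(|a|+|b|) + \lambda_2|a-b|$ for a pair of entries, which admits a fuse-then-shrink closed form (Friedman et al., 2007). A minimal sketch, in Python rather than the paper's R, and not the $\texttt{pdglasso}$ algorithm itself:

```python
import numpy as np

def fused_pair_prox(x, y, lam1, lam2):
    """Proximal operator of lam1*(|a| + |b|) + lam2*|a - b| at (x, y):
    first fuse the pair (pull the two values together by lam2, or
    average them if they are within 2*lam2), then soft-threshold each
    result by lam1.  This identity underlies coordinate-wise updates
    in fused graphical lasso ADMM schemes."""
    if abs(x - y) <= 2.0 * lam2:
        a = b = 0.5 * (x + y)                    # fully fused
    else:
        s = np.sign(x - y)
        a, b = x - lam2 * s, y + lam2 * s        # pulled together by lam2
    soft = lambda t: np.sign(t) * max(abs(t) - lam1, 0.0)
    return soft(a), soft(b)
```

For example, `fused_pair_prox(3.0, 1.0, 0.5, 0.5)` pulls the pair to (2.5, 1.5) and then shrinks both toward zero by 0.5.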
