
Motivated by applications to the study of depth functions for tree-indexed random variables generated by point processes, we describe functional limit theorems for the intensity measure of point processes. Specifically, we establish uniform laws of large numbers and uniform central limit theorems, over a class of bounded measurable functions, for estimates of the intensity measure. Using these results, we derive the uniform asymptotic properties of half-space depth and, as corollaries, obtain the asymptotic behavior of medians and other quantiles of the standardized intensity measure. Additionally, we obtain a uniform concentration upper bound for the estimator of half-space depth. As a consequence of our results, we also derive uniform consistency and uniform asymptotic normality of Lotka-Nagaev- and Harris-type estimators for the Laplace transform of the point processes in a branching random walk.
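As a point of reference for the depth results above, the empirical half-space (Tukey) depth of a point can be approximated by minimising, over a finite set of random directions, the fraction of sample mass in the closed half-space through the point. The sketch below is a generic numerical illustration; the direction count and the Gaussian sample are assumptions, not the paper's intensity-measure estimator:

```python
import numpy as np

def halfspace_depth(x, sample, n_dirs=500, rng=None):
    """Approximate the half-space (Tukey) depth of point x with respect
    to an empirical sample, by minimising over random directions the
    fraction of sample points in the closed half-space through x."""
    rng = np.random.default_rng(rng)
    u = rng.normal(size=(n_dirs, sample.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = sample @ u.T      # projections of the sample, shape (n, n_dirs)
    xproj = x @ u.T          # projections of x, shape (n_dirs,)
    frac = (proj >= xproj).mean(axis=0)
    return frac.min()

rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 2))
# the centre of a symmetric sample has depth close to 1/2,
# a far-away point has depth close to 0
d_center = halfspace_depth(np.zeros(2), pts, rng=1)
d_far = halfspace_depth(np.array([5.0, 5.0]), pts, rng=1)
```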

Related content

In this paper, we introduce a framework for the discretization of a class of constrained Hamilton-Jacobi equations, a system coupling a Hamilton-Jacobi equation with a Lagrange multiplier determined by the constraint. The equation is non-local, and the constraint has bounded variation. We show that, under a set of general hypotheses, the approximation obtained with a monotone finite-difference scheme converges to the viscosity solution of the constrained Hamilton-Jacobi equation. Constrained Hamilton-Jacobi equations often arise as the long-time and small-mutation asymptotics of population models in quantitative genetics. As an example, we detail the construction of a scheme for the limit of an integral Lotka-Volterra equation. We also construct and analyze an asymptotic-preserving (AP) scheme for the model outside of the asymptotic regime. We prove that it is stable along the transition towards the asymptotic regime. The theoretical analysis of the schemes is illustrated and discussed with numerical simulations. The AP scheme is also used to conjecture the asymptotic behavior of the integral Lotka-Volterra equation when the environment varies in time.
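To fix ideas, a monotone (Lax-Friedrichs) finite-difference scheme for an unconstrained model Hamilton-Jacobi equation can be sketched as follows. The constrained, non-local problem treated in the paper additionally updates a Lagrange multiplier at each step, which is not shown here, and all discretization parameters below are illustrative assumptions:

```python
import numpy as np

# Lax-Friedrichs monotone scheme for the model equation
#   u_t + H(u_x) = 0,  H(p) = p**2 / 2,  periodic in x.
# (The constrained, non-local problem of the paper adds a Lagrange
#  multiplier update on top of a monotone scheme of this type.)
nx, L = 200, 2 * np.pi
dx = L / nx
x = np.arange(nx) * dx
alpha = 1.5                  # bound on |H'| over the relevant slopes
dt = 0.5 * dx / alpha        # CFL condition ensuring monotonicity
u = np.cos(x)                # illustrative initial condition

H = lambda p: 0.5 * p**2
for _ in range(100):
    up = np.roll(u, -1)      # u_{j+1}
    um = np.roll(u, 1)       # u_{j-1}
    p = (up - um) / (2 * dx) # centred slope
    # numerical Hamiltonian: H(centred slope) minus artificial viscosity
    u = u - dt * (H(p) - alpha * (up - 2 * u + um) / (2 * dx))

# since H >= 0, the viscosity solution is non-increasing in time,
# so the discrete maximum should never exceed the initial maximum 1
```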

In this work, we extend the data-driven It\^{o} stochastic differential equation (SDE) framework for the pathwise assessment of short-term forecast errors to account for the time-dependent upper bound that naturally constrains the observable historical data and forecasts. We propose a new nonlinear and time-inhomogeneous SDE model with a Jacobi-type diffusion term for the phenomenon of interest, simultaneously driven by the forecast and the constraining upper bound. We rigorously demonstrate the existence and uniqueness of a strong solution to the SDE model by imposing a condition on the time-varying mean-reversion parameter appearing in the drift term. The normalized forecast function is thresholded to keep this mean-reversion parameter bounded. The SDE model parameter calibration also covers the thresholding parameter of the normalized forecast, by applying a novel iterative two-stage optimization procedure to user-selected approximations of the likelihood function. Another novel contribution is estimating the transition density of the forecast-error process, which is not known analytically in closed form, through a tailored kernel smoothing technique with the control variate method. We fit the model to 2019 photovoltaic (PV) solar power daily production and forecast data from Uruguay, estimating the daily maximum solar PV production. Two statistical versions of the constrained SDE model are fit, with the beta and truncated normal distributions as proxies for the transition density. Empirical results include simulations of the normalized solar PV power production and pathwise confidence bands generated through an indirect inference method. An objective comparison of the optimal parametric points associated with the two selected statistical approximations is provided by applying the novel kernel density estimation technique for the transition function of the forecast-error process.
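As a simplified illustration of the modelling ingredient, a time-homogeneous Jacobi diffusion on [0,1] can be simulated with a clipped Euler-Maruyama step. The paper's model is time-inhomogeneous and driven by the forecast and the upper bound, so the constant parameters below are purely illustrative assumptions:

```python
import numpy as np

def simulate_jacobi(theta, mu, sigma, x0, T=1.0, n=1000, rng=None):
    """Euler-Maruyama for the Jacobi diffusion
       dX_t = theta*(mu - X_t) dt + sigma*sqrt(X_t*(1 - X_t)) dW_t,
    a [0,1]-valued prototype of the time-inhomogeneous, forecast-driven
    model.  Values are clipped to [0,1] so the square root stays real
    (a standard numerical fix, not an exact discretisation)."""
    rng = np.random.default_rng(rng)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(scale=np.sqrt(dt))
        drift = theta * (mu - x[i])
        diff = sigma * np.sqrt(max(x[i] * (1 - x[i]), 0.0))
        x[i + 1] = min(max(x[i] + drift * dt + diff * dw, 0.0), 1.0)
    return x

# illustrative parameters only
path = simulate_jacobi(theta=4.0, mu=0.6, sigma=0.8, x0=0.2, rng=0)
```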

We propose a new Riemannian gradient descent method for computing spherical area-preserving mappings of topological spheres, using a retraction-based framework with theoretically guaranteed convergence. The objective function is based on the stretch energy functional, and the minimization is constrained to a power manifold of unit spheres embedded in 3-dimensional Euclidean space. Numerical experiments on several mesh models demonstrate the accuracy and stability of the proposed framework. Comparisons with two existing state-of-the-art methods for computing area-preserving mappings show that our algorithm is both competitive and more efficient. Finally, we present a concrete application to the problem of landmark-aligned surface registration of two brain models.
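The retraction-based iteration has the following generic shape, shown here on a toy Rayleigh-quotient problem on a single unit sphere rather than the paper's stretch-energy functional on a power manifold; the objective, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def sphere_rgd(A, x0, step=0.1, iters=500):
    """Riemannian gradient descent on the unit sphere for
    f(x) = x^T A x, using the projection retraction
    R_x(v) = (x + v) / ||x + v||.  A toy instance of the
    retraction-based framework (the paper instead minimises a
    stretch energy on a power manifold of spheres)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2 * A @ x                 # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x   # project onto the tangent space
        x = x - step * rgrad              # gradient step in the tangent space
        x = x / np.linalg.norm(x)         # retraction back onto the sphere
    return x

A = np.diag([3.0, 2.0, 1.0])
x = sphere_rgd(A, np.array([1.0, 1.0, 1.0]))
# the minimiser of the Rayleigh quotient is the eigenvector of the
# smallest eigenvalue, here +/- e_3
```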

Moment models with suitable closures can lead to accurate and computationally efficient solvers for particle transport. Hence, we propose a new asymptotic-preserving scheme for the M1 model of linear transport that works uniformly for any Knudsen number. Our idea is to apply the M1 closure at the numerical level to an existing asymptotic-preserving scheme for the corresponding kinetic equation, namely the Unified Gas Kinetic Scheme (UGKS) originally proposed in [27] and extended to linear transport in [24]. In order to ensure moment realizability in this new scheme, the positivity of the UGKS needs to be maintained. We propose a new density reconstruction in time to obtain this property. A second-order extension is also suggested and validated. Several test cases demonstrate the performance of this new scheme.
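For context, the M1 model closes the moment hierarchy with the standard Eddington factor, and realizability of the moment pair (E, F) amounts to |F| <= E. A minimal sketch of the closure formula (the scheme itself, including the density reconstruction in time, is not reproduced here):

```python
import numpy as np

def eddington_factor(f):
    """Standard M1 Eddington factor
       chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)),
    where f = |F|/E in [0, 1] is the flux factor.  Realizability of
    the moment pair (E, F) is exactly |F| <= E, i.e. f <= 1."""
    f = np.asarray(f, dtype=float)
    return (3 + 4 * f**2) / (5 + 2 * np.sqrt(4 - 3 * f**2))

f = np.linspace(0.0, 1.0, 101)
chi = eddington_factor(f)
# isotropic limit chi(0) = 1/3, free-streaming limit chi(1) = 1,
# and chi increases monotonically between the two
```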

Recent advances in neuroimaging have enabled studies of functional connectivity (FC) in the human brain, alongside investigation of the neuronal basis of cognition. One important FC topic is the representation of vision in the human brain. The release of the publicly available dataset BOLD5000 has made it possible to study brain dynamics during visual tasks in greater detail. In this paper, a comprehensive analysis of fMRI time series (TS) is performed to explore different types of visual brain networks (VBNs). The novelty of this work lies in (1) constructing VBNs with consistently significant direct connectivity using both marginal and partial correlation, which are further analyzed using graph-theoretic measures, and (2) classifying VBNs formed from image-complexity-specific TS using graphical features. In image-complexity-specific VBN classification, XGBoost yields an average accuracy in the range of 86.5% to 91.5% for positively correlated VBNs, which is 2% greater than that obtained using negative correlation. This result not only reflects the distinguishing graphical characteristics of each image-complexity-specific VBN, but also highlights the importance of studying both positively and negatively correlated VBNs to understand how differently the brain functions while viewing real-world images of different complexities.
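The network-construction step can be sketched on synthetic time series: threshold the marginal correlation matrix to obtain a binary positively correlated network, then read off simple graph features. The significance testing, partial-correlation filtering, and XGBoost classification of the paper are omitted, and all signals and thresholds below are synthetic assumptions:

```python
import numpy as np

def correlation_network(ts, thresh=0.5):
    """Build a binary positively-correlated network from ROI time
    series: nodes are regions, and an edge joins two regions whose
    marginal Pearson correlation exceeds `thresh`.  (The paper also
    requires significant *partial* correlation and then feeds graph
    features to XGBoost; both steps are omitted here.)"""
    corr = np.corrcoef(ts)              # regions x regions
    adj = (corr > thresh).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def graph_features(adj):
    """Two simple graph-theoretic descriptors: degrees and edge density."""
    n = adj.shape[0]
    degrees = adj.sum(axis=1)
    density = adj.sum() / (n * (n - 1))
    return degrees, density

rng = np.random.default_rng(0)
shared = rng.normal(size=200)           # common driving signal
# six regions driven by the shared signal, four independent regions
ts = np.vstack([shared + 0.5 * rng.normal(size=200) for _ in range(6)]
               + [rng.normal(size=(4, 200))])
adj = correlation_network(ts)
deg, dens = graph_features(adj)
# the six driven regions form a dense cluster; the rest stay isolated
```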

Adaptiveness is a key principle in information processing, including statistics and machine learning. We investigate the usefulness of adaptive methods in the framework of asymptotic binary hypothesis testing, when each hypothesis represents asymptotically many independent instances of a quantum channel, and the tests are based on using the unknown channel and observing outputs. Unlike the familiar setting of quantum states as hypotheses, there is a fundamental distinction between adaptive and non-adaptive strategies with respect to the channel uses, and we introduce a number of further variants of the discrimination tasks by imposing different restrictions on the test strategies. The following results are obtained: (1) We prove that for classical-quantum channels, adaptive and non-adaptive strategies lead to the same error exponents in both the symmetric (Chernoff) and asymmetric (Hoeffding, Stein) settings. (2) We obtain the first separation between adaptive and non-adaptive symmetric hypothesis testing exponents for quantum channels, which we derive from a general lower bound on the error probability for non-adaptive strategies; the concrete example we analyze is a pair of entanglement-breaking channels. (3) We prove, in a sense generalizing the previous statement, that for general channels, adaptive strategies restricted to classical feed-forward and product-state channel inputs are not superior in the asymptotic limit to non-adaptive product-state strategies. (4) As an application of our findings, we address the discrimination power of an arbitrary quantum channel and show that adaptive strategies with classical feedback and no quantum memory at the input do not increase the discrimination power of the channel beyond non-adaptive tensor-product input strategies.

This work presents a cubature scheme for the fitting of spatio-temporal Poisson point processes. The methodology is implemented in the R (R Core Team, 2024) package stopp (D'Angelo and Adelfio, 2023), published on the Comprehensive R Archive Network (CRAN) and available from https://CRAN.R-project.org/package=stopp. Since the number of dummy points should be sufficient for an accurate estimate of the likelihood, numerical experiments are currently under development to give guidelines on this aspect.

Bayesian inference for complex models with an intractable likelihood can be tackled using algorithms performing many calls to computer simulators. These approaches are collectively known as "simulation-based inference" (SBI). Recent SBI methods have made use of neural networks (NNs) to provide approximate yet expressive constructs for the unavailable likelihood function and the posterior distribution. However, they do not generally achieve an optimal trade-off between accuracy and computational demand. In this work, we propose an alternative that provides approximations to both the likelihood and the posterior distribution, using structured mixtures of probability distributions. Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, while exhibiting a much smaller computational footprint. We illustrate our results on several benchmark models from the SBI literature.
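The general recipe — fit a tractable joint density to simulator draws of (parameter, data) and condition on the observation — can be sketched with a single Gaussian standing in for the paper's structured mixtures. The simulator and all numbers below are made-up assumptions for illustration:

```python
import numpy as np

# Toy simulation-based inference: approximate the posterior p(theta | x)
# by fitting a joint Gaussian to simulator draws and conditioning on the
# observation.  (A single Gaussian stands in for structured mixtures;
# the simulator below is a made-up linear-Gaussian example, chosen so
# the exact posterior is known: mean 0.8 * x_obs, variance 0.2.)
rng = np.random.default_rng(0)
n = 100_000
theta = rng.normal(size=n)              # draws from the prior
x = theta + 0.5 * rng.normal(size=n)    # one simulator call per draw

mu = np.array([theta.mean(), x.mean()])
cov = np.cov(np.vstack([theta, x]))     # fitted 2x2 joint covariance

x_obs = 1.0
# condition the fitted joint Gaussian on x = x_obs
post_mean = mu[0] + cov[0, 1] / cov[1, 1] * (x_obs - mu[1])
post_var = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]
```

With a mixture of Gaussians instead of a single component, the same conditioning step is applied per component and the mixture weights are reweighted, which is what makes the mixture construct expressive yet cheap.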

We establish an invariance principle for polynomial functions of $n$ independent high-dimensional random vectors, and also show that the obtained rates are nearly optimal. Both the dimension of the vectors and the degree of the polynomial are permitted to grow with $n$. Specifically, we obtain a finite sample upper bound for the error of approximation by a polynomial of Gaussians, measured in Kolmogorov distance, and extend it to functions that are approximately polynomial in a mean squared error sense. We give a corresponding lower bound that shows the invariance principle holds up to polynomial degree $o(\log n)$. The proof is constructive and adapts an asymmetrisation argument due to V. V. Senatov. As applications, we obtain a higher-order delta method with possibly non-Gaussian limits, and generalise a number of known results on high-dimensional and infinite-order U-statistics, and on fluctuations of subgraph counts.
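The phenomenon can be checked empirically: a low-degree polynomial with no dominant coordinate has nearly the same law under standardised non-Gaussian inputs as under Gaussian inputs. The quadratic, the dimension, and the sample sizes below are illustrative assumptions:

```python
import numpy as np

# Empirical illustration of the invariance principle: a degree-2
# polynomial of n i.i.d. standardised inputs has approximately the
# same distribution for Rademacher and for Gaussian inputs.
rng = np.random.default_rng(0)
n, reps = 200, 20_000

def poly(z):
    # the normalised quadratic sum_{i<j} z_i z_j / n,
    # which has no dominant coordinate
    s = z.sum(axis=1)
    return (s**2 - (z**2).sum(axis=1)) / (2 * n)

rademacher = poly(rng.choice([-1.0, 1.0], size=(reps, n)))
gaussian = poly(rng.normal(size=(reps, n)))

# empirical Kolmogorov distance between the two samples on a grid
grid = np.linspace(-2, 2, 401)
F1 = (rademacher[:, None] <= grid).mean(axis=0)
F2 = (gaussian[:, None] <= grid).mean(axis=0)
ks = float(np.abs(F1 - F2).max())
# ks shrinks as n grows, consistent with the invariance principle
```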

Motivated by a heat radiative transport equation, we consider a particle undergoing collisions in a space-time domain and propose a method to sample its escape time, position, and direction from the domain. The first step of the procedure is an estimate of how many elementary collisions it is safe to take before the chances of exiting the domain become too high; these collisions are then aggregated into a single movement. The method does not rely on any specific model or parameter regime. We give theoretical results both under the normal approximation and without it, and we test the method on benchmarks from the literature. The results confirm the theoretical predictions and show that the proposal is an efficient method for sampling the escape distribution of the particle.
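The aggregation idea can be sketched in one dimension: rather than simulating k exponential free flights one by one, draw the net displacement in a single shot from its central-limit normal approximation. The safety estimate of how many collisions to aggregate before exit becomes likely is not shown, and all parameters below are illustrative assumptions:

```python
import numpy as np

# Aggregation in 1-D: each collision is an exponential flight of mean
# 1/sigma in a uniformly random direction (+1 or -1).  After k steps the
# net displacement has mean 0 and variance k * E[l^2] with E[l^2] =
# 2/sigma^2, so it can be drawn in one shot from N(0, k * 2/sigma^2).
rng = np.random.default_rng(0)
sigma, k, reps = 1.0, 400, 20_000

# reference: simulate all k elementary collisions explicitly
steps = (rng.exponential(1 / sigma, size=(reps, k))
         * rng.choice([-1.0, 1.0], size=(reps, k)))
exact = steps.sum(axis=1)

# aggregated movement: one normal draw per particle
approx = rng.normal(0.0, np.sqrt(k * 2 / sigma**2), size=reps)

# the two displacement distributions should have matching spread
```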
