
In the present paper, we consider $N$ diffusion processes $X^1,\dots,X^N$ observed on $[0,T]$, where $T$ is fixed and $N$ grows to infinity. Unlike most recent works, we no longer assume that the processes are independent. The dependency is modeled through correlations between the Brownian motions driving the diffusion processes. A nonparametric estimator of the drift function, which does not require knowledge of the correlation matrix, is proposed and studied. Its integrated mean squared risk is bounded and an adaptive procedure is proposed. Few theoretical tools are available to handle this kind of dependency, which makes our results new. Numerical experiments show that the procedure works in practice.
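
To make the setting concrete, the sketch below simulates $N$ correlated Ornstein-Uhlenbeck paths and estimates the drift with a plain Nadaraya-Watson smoother that, like the estimator above, does not use the correlation matrix. The function name, the Gaussian kernel and the fixed bandwidth `h` are illustrative assumptions; the paper's adaptive nonparametric estimator is not this particular smoother.

```python
import numpy as np

def nadaraya_watson_drift(paths, dt, x_grid, h=0.2):
    """Naive kernel estimate of the drift b(x) from N discretely observed paths."""
    X = paths[:, :-1].ravel()                    # states at the left grid points
    dX = np.diff(paths, axis=1).ravel() / dt     # increments / dt ~ drift + noise
    est = np.empty_like(x_grid, dtype=float)
    for j, x in enumerate(x_grid):
        w = np.exp(-0.5 * ((X - x) / h) ** 2)    # Gaussian kernel weights
        est[j] = np.sum(w * dX) / max(np.sum(w), 1e-12)
    return est

# Toy usage: N correlated Ornstein-Uhlenbeck paths with drift b(x) = -x on [0, T].
rng = np.random.default_rng(0)
N, n_steps, dt, rho = 50, 400, 0.01, 0.5
cov = rho * np.ones((N, N)) + (1 - rho) * np.eye(N)   # equicorrelated Brownian motions
L = np.linalg.cholesky(cov)
X = np.zeros((N, n_steps + 1))
for k in range(n_steps):
    X[:, k + 1] = X[:, k] - X[:, k] * dt + L @ rng.standard_normal(N) * np.sqrt(dt)
print(nadaraya_watson_drift(X, dt, np.linspace(-1.5, 1.5, 7)))  # roughly -x on the grid
```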

Related Content

The present study extends the work done in [16] and [10], where a two-level Parareal method with averaging was examined. The method proposed in this paper is a multi-level Parareal method with arbitrarily many levels, not restricted to the two-level case. We give an asymptotic error estimate that reduces to the two-level estimate when only two levels are considered. Introducing more than two levels has important consequences for the averaging procedure, as we choose separate averaging windows for each of the different levels, which is an additional new feature of the present study. The different averaging windows make the proposed method especially appropriate for multi-scale problems, because we can introduce a level for each intrinsic scale of the problem and adapt the averaging procedure so that it reproduces the behavior of the model on the particular scale resolved by that level.
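
For orientation, the classical two-level Parareal correction that the multi-level method generalises is sketched below; `coarse` and `fine` are user-supplied propagators over one time slice, and all names are illustrative. The level-dependent averaging that is central to the paper is not shown here.

```python
import numpy as np

def parareal(u0, coarse, fine, n_slices, n_iter):
    """Two-level Parareal: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)."""
    U = [u0]
    for n in range(n_slices):                       # initial guess from the coarse propagator
        U.append(coarse(U[-1]))
    for _ in range(n_iter):
        F = [fine(U[n]) for n in range(n_slices)]         # fine solves (parallel in practice)
        G_old = [coarse(U[n]) for n in range(n_slices)]
        U_new = [u0]
        for n in range(n_slices):                          # sequential coarse correction
            U_new.append(coarse(U_new[n]) + F[n] - G_old[n])
        U = U_new
    return np.array(U)

# Toy usage: u' = -u on [0, 1]; fine = 100 Euler substeps per slice, coarse = 1 Euler step.
dt_slice = 0.1
fine = lambda u: u * (1 - dt_slice / 100) ** 100
coarse = lambda u: u * (1 - dt_slice)
print(parareal(1.0, coarse, fine, n_slices=10, n_iter=3))   # approaches exp(-t) at the slice ends
```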

A recently developed measure-theoretic framework solves a stochastic inverse problem (SIP) for models where uncertainties in model output data are predominantly due to aleatoric (i.e., irreducible) uncertainties in model inputs (i.e., parameters). The subsequent inferential target is a distribution on parameters. Another type of inverse problem is to quantify uncertainties in estimates of "true" parameter values under the assumption that such uncertainties should be reduced as more data are incorporated into the problem, i.e., the uncertainty is considered epistemic. A major contribution of this work is the formulation and solution of such a parameter identification problem (PIP) within the measure-theoretic framework developed for the SIP. The approach is novel in that it utilizes a solution to a stochastic forward problem (SFP) to update an initial density only in the parameter directions informed by the model output data. In other words, this method performs "selective regularization" only in the parameter directions not informed by data. The solution is defined by a maximal updated density (MUD) point where the updated density defines the measure-theoretic solution to the PIP. Another significant contribution of this work is the full theory of existence and uniqueness of MUD points for linear maps with Gaussian distributions. Data-constructed Quantity of Interest (QoI) maps are also presented and analyzed for solving the PIP within this measure-theoretic framework as a means of reducing uncertainties in the MUD estimate. We conclude with a demonstration of the general applicability of the method on two problems involving either spatial or temporal data for estimating uncertain model parameters.
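
In generic notation, the updated density described above is $\pi_{\mathrm{up}}(\lambda) = \pi_{\mathrm{init}}(\lambda)\,\pi_{\mathrm{obs}}(Q(\lambda))/\pi_{\mathrm{pred}}(Q(\lambda))$, where $\pi_{\mathrm{pred}}$ is the push-forward of the initial density through the QoI map $Q$. The sketch below locates a MUD point numerically for a linear map with Gaussian densities; the concrete matrices and the use of a generic optimizer are illustrative assumptions (the paper gives closed-form results for this case).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal as mvn

A = np.array([[1.0, 0.5]])                       # linear QoI map Q(lam) = A @ lam (1 datum, 2 params)
mu0, Sigma0 = np.zeros(2), np.eye(2)             # initial density N(mu0, Sigma0)
d, Sigma_d = np.array([1.0]), np.array([[0.05]]) # observed density N(d, Sigma_d)

init = mvn(mean=mu0, cov=Sigma0)
obs = mvn(mean=d, cov=Sigma_d)
pred = mvn(mean=A @ mu0, cov=A @ Sigma0 @ A.T)   # push-forward of the initial density through A

def neg_log_updated(lam):
    q = A @ lam
    return -(init.logpdf(lam) + obs.logpdf(q) - pred.logpdf(q))

mud = minimize(neg_log_updated, x0=mu0).x        # maximal updated density (MUD) point
print("MUD point:", mud)                         # moved only in the direction informed by A
```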

Temporally consistent depth estimation is crucial for online applications such as augmented reality. While stereo depth estimation has received substantial attention as a promising way to generate 3D information, there is relatively little work focused on maintaining temporal stability. Indeed, based on our analysis, current techniques still suffer from poor temporal consistency. Stabilizing depth temporally in dynamic scenes is challenging due to concurrent object and camera motion. In an online setting, this process is further aggravated because only past frames are available. We present a framework named Consistent Online Dynamic Depth (CODD) to produce temporally consistent depth estimates in dynamic scenes in an online setting. CODD augments per-frame stereo networks with novel motion and fusion networks. The motion network accounts for dynamics by predicting a per-pixel SE3 transformation and aligning the observations. The fusion network improves temporal depth consistency by aggregating the current and past estimates. We conduct extensive experiments and demonstrate quantitatively and qualitatively that CODD outperforms competing methods in terms of temporal consistency and performs on par in terms of per-frame accuracy.
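
As a rough, non-learned illustration of the align-then-fuse idea (not CODD's motion and fusion networks), the sketch below warps a previous depth map into the current frame under a single rigid SE3 motion with pinhole intrinsics, then blends it with the current per-frame estimate. The global transform, the nearest-neighbour splatting and the fixed blending weight are all simplifying assumptions.

```python
import numpy as np

def warp_depth(depth_prev, K, R, t):
    """Warp a previous depth map into the current camera frame under a rigid motion (R, t)."""
    H, W = depth_prev.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T      # 3 x HW homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth_prev.reshape(1, -1)               # back-project to 3D
    pts = R @ pts + t[:, None]                                             # move into the current frame
    proj = K @ pts
    z = proj[2]
    ok = z > 1e-6
    u_new = np.round(proj[0, ok] / z[ok]).astype(int)
    v_new = np.round(proj[1, ok] / z[ok]).astype(int)
    inb = (u_new >= 0) & (u_new < W) & (v_new >= 0) & (v_new < H)
    warped = np.full((H, W), np.nan)
    warped[v_new[inb], u_new[inb]] = z[ok][inb]                            # nearest-neighbour splat
    return warped

def fuse(depth_cur, depth_warped, w_prev=0.5):
    """Blend the current estimate with the aligned previous estimate where the warp is valid."""
    fused = depth_cur.copy()
    valid = ~np.isnan(depth_warped)
    fused[valid] = (1 - w_prev) * depth_cur[valid] + w_prev * depth_warped[valid]
    return fused
```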

This manuscript makes two contributions to the field of change-point detection. In a general change-point setting, we provide a generic algorithm for aggregating local homogeneity tests into an estimator of change-points in a time series. Interestingly, we establish that the error rates of the collection of tests directly translate into detection properties of the change-point estimator. This generic scheme is then applied to various problems including covariance change-point detection, nonparametric change-point detection and sparse multivariate mean change-point detection. For the latter, we derive minimax optimal rates that are adaptive to the unknown sparsity and to the distance between change-points when the noise is Gaussian. For sub-Gaussian noise, we introduce a variant that is optimal in almost all sparsity regimes.
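
A heavily simplified version of the aggregation idea, for univariate mean changes, is sketched below: local two-sample statistics over symmetric windows are thresholded and the local maximum of each excursion is reported as a change-point. The window scheme, the statistic and the threshold are illustrative choices, not the paper's calibrated collection of tests.

```python
import numpy as np

def scan_change_points(x, half_window=20, threshold=3.0):
    """Aggregate local two-sample mean tests into change-point estimates."""
    n = len(x)
    stat = np.zeros(n)
    for t in range(half_window, n - half_window):
        left, right = x[t - half_window:t], x[t:t + half_window]
        pooled = np.sqrt((left.var(ddof=1) + right.var(ddof=1)) / half_window)
        stat[t] = abs(right.mean() - left.mean()) / max(pooled, 1e-12)
    change_points, t = [], 0
    while t < n:
        if stat[t] > threshold:                 # excursion above the threshold
            end = t
            while end < n and stat[end] > threshold:
                end += 1
            change_points.append(t + int(np.argmax(stat[t:end])))
            t = end
        else:
            t += 1
    return change_points

# Toy usage: a single mean shift at t = 100.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(scan_change_points(x))                    # a single estimate near 100
```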

Results on the spectral behavior of random matrices as the dimension increases are applied to the problem of detecting the number of sources impinging on an array of sensors. A common strategy to solve this problem is to estimate the multiplicity of the smallest eigenvalue of the spatial covariance matrix $R$ of the sensed data from the sample covariance matrix $\widehat{R}$. Existing approaches, such as that based on information theoretic criteria, rely on the closeness of the noise eigenvalues of $\widehat R$ to each other and, therefore, the sample size has to be quite large when the number of sources is large in order to obtain a good estimate. The analysis presented in this report focuses on the splitting of the spectrum of $\widehat{R}$ into noise and signal eigenvalues. It is shown that, when the number of sensors is large, the number of signals can be estimated with a sample size considerably less than that required by previous approaches. The practical significance of the main result is that detection can be achieved with a number of samples comparable to the number of sensors in large dimensional array processing.
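
The flavour of the spectral-splitting argument can be conveyed with a simple rule: with $p$ sensors and $n$ snapshots, the noise eigenvalues of $\widehat R$ concentrate below the Marchenko-Pastur edge $\sigma^2(1+\sqrt{p/n})^2$, so counting eigenvalues above that edge gives a crude estimate of the number of sources. The known noise variance and this particular threshold are simplifying assumptions for the sketch below, not the detection rule analyzed in the report.

```python
import numpy as np

def estimate_num_sources(snapshots, sigma2=1.0):
    """Count sample-covariance eigenvalues above the Marchenko-Pastur noise edge."""
    n, p = snapshots.shape                        # n snapshots from a p-sensor array
    R_hat = snapshots.conj().T @ snapshots / n    # sample covariance matrix
    eigvals = np.linalg.eigvalsh(R_hat)
    edge = sigma2 * (1 + np.sqrt(p / n)) ** 2     # asymptotic upper edge of the noise bulk
    return int(np.sum(eigvals > edge))

# Toy usage: p = 100 sensors, n = 120 snapshots, 3 strong sources in unit-variance noise.
rng = np.random.default_rng(2)
p, n, k = 100, 120, 3
A = rng.standard_normal((p, k)) / np.sqrt(p)      # random steering vectors
S = 10 * rng.standard_normal((n, k))              # source signals
X = S @ A.T + rng.standard_normal((n, p))         # snapshots = signal + noise
print(estimate_num_sources(X))                    # typically 3 (sample size ~ number of sensors)
```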

In off-policy reinforcement learning, a behaviour policy performs exploratory interactions with the environment to obtain state-action-reward samples which are then used to learn a target policy that optimises the expected return. This leads to a problem of off-policy evaluation, where one needs to evaluate the target policy from samples collected by the often unrelated behaviour policy. Importance sampling is a traditional statistical technique that is often applied to off-policy evaluation. While importance sampling estimators are unbiased, their variance increases exponentially with the horizon of the decision process due to computing the importance weight as a product of action probability ratios, yielding estimates with low accuracy for domains involving long-term planning. This paper proposes state-based importance sampling (SIS), which drops the action probability ratios of sub-trajectories with "negligible states" -- roughly speaking, those for which the chosen actions have no impact on the return estimate -- from the computation of the importance weight. Theoretical results show that this results in a reduction of the exponent in the variance upper bound as well as improving the mean squared error. An automated search algorithm based on covariance testing is proposed to identify a negligible state set which has minimal MSE when performing state-based importance sampling. Experiments are conducted on a lift domain, which includes "lift states" where the action has no impact on the following state and reward. The results demonstrate that using the search algorithm, SIS yields reduced variance and improved accuracy compared to traditional importance sampling, per-decision importance sampling, and incremental importance sampling.
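
A minimal sketch of the weight computation is given below, assuming the negligible state set is already known: ordinary importance sampling multiplies the action probability ratio at every step, whereas the state-based variant simply skips steps whose state is in that set. The trajectory format and all names are illustrative, and the covariance-testing search is not shown.

```python
import numpy as np

def is_weight(trajectory, target_prob, behaviour_prob, negligible_states=frozenset()):
    """Importance weight of one trajectory; ratios at negligible states are dropped (SIS)."""
    w = 1.0
    for state, action in trajectory:
        if state in negligible_states:
            continue                               # skip: the action does not affect the return
        w *= target_prob(state, action) / behaviour_prob(state, action)
    return w

def off_policy_value(trajectories, returns, target_prob, behaviour_prob, negligible_states=frozenset()):
    """Importance-sampling estimate of the target policy's expected return."""
    weights = [is_weight(tr, target_prob, behaviour_prob, negligible_states) for tr in trajectories]
    return float(np.mean(np.array(weights) * np.array(returns)))

# Toy usage: state 1 is a "lift state" whose action changes neither the next state nor the reward.
behaviour = lambda s, a: 0.5
target = lambda s, a: 0.9 if a == 0 else 0.1
trajectories = [[(0, 0), (1, 1), (0, 0)], [(0, 1), (1, 0), (0, 0)]]
returns = [1.0, 0.0]
print(off_policy_value(trajectories, returns, target, behaviour, negligible_states={1}))
```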

We investigate an anisotropic weakly over-penalised symmetric interior penalty method for the Stokes equation. The method is a discontinuous Galerkin method, simple and similar to the Crouzeix--Raviart finite element method. The main contribution of this paper is a new proof for the consistency term, which allows us to obtain an anisotropic consistency error estimate. The idea of the proof is to use the relation between the Raviart--Thomas finite element space and the discontinuous space. In many papers, inf-sup stable schemes of the discontinuous Galerkin method have been discussed on shape-regular mesh partitions. Our result shows that the Stokes pair treated in this paper satisfies the inf-sup condition on anisotropic meshes. Furthermore, we show an error estimate in an energy norm on anisotropic meshes. In numerical experiments, we compare results for standard and anisotropic mesh partitions; the effectiveness of using anisotropic meshes can be confirmed for problems with boundary layers.
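
For reference, the discrete inf-sup condition referred to above can be written, in generic notation, as
\[
\inf_{q_h \in Q_h \setminus \{0\}} \; \sup_{v_h \in V_h \setminus \{0\}}
\frac{\int_{\Omega} q_h \, (\nabla_h \cdot v_h) \, dx}{\|v_h\|_{V_h} \, \|q_h\|_{L^2(\Omega)}} \;\ge\; \beta > 0,
\]
where the constant $\beta$ must not depend on the mesh, in particular not on the aspect ratio of the anisotropic elements; the concrete discrete spaces, norms and divergence operator used in the paper differ in detail.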

This paper derives asymptotic theory for Breitung's (2002, Journal of Econometrics 108, 343-363) nonparametric variance ratio unit root test when applied to regression residuals. The test requires neither the specification of the correlation structure in the data nor the choice of tuning parameters. Compared with popular residuals-based no-cointegration tests, the variance ratio test is less prone to size distortions but has smaller local asymptotic power. However, this paper shows that local asymptotic power properties do not serve as a useful indicator for the power of residuals-based no-cointegration tests in finite samples. In terms of size-corrected power, the variance ratio test performs relatively well and, in particular, does not suffer from power reversal problems detected for, e.g., the frequently used augmented Dickey-Fuller type no-cointegration test. An application to daily prices of cryptocurrencies illustrates the usefulness of the variance ratio test in practice.
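
To fix ideas, a variance-ratio-type statistic on regression residuals can be sketched as follows: it compares the variation of the partial-sum process of the residuals with the variation of the residuals themselves, and it involves no correlation-structure specification or tuning parameters. The normalization used here is only illustrative; the exact statistic and its critical values should be taken from Breitung (2002).

```python
import numpy as np

def variance_ratio_stat(y, X):
    """Variance-ratio-type statistic on OLS residuals (illustrative normalization)."""
    Z = np.column_stack([np.ones(len(y)), X])            # regressors with an intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    u = y - Z @ beta                                      # regression residuals
    T = len(u)
    U = np.cumsum(u)                                      # partial-sum process of the residuals
    return (np.sum(U ** 2) / T ** 2) / np.sum(u ** 2)     # small values indicate stationary residuals

# Toy usage: a cointegrated pair versus two independent random walks.
rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(500))
y_coint = 2 * x + rng.standard_normal(500)                # residuals stationary
y_indep = np.cumsum(rng.standard_normal(500))             # residuals integrated
print(variance_ratio_stat(y_coint, x), variance_ratio_stat(y_indep, x))
```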

We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, where warping is attained by some continuous cumulative distribution function. A special case is the conventional orthonormal wavelet estimation, where the warping distribution is the standard continuous uniform. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings which are easy to interpret in practice. An application with a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating the power of a density (for instance, its square root) further improves the results.
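
A stripped-down sketch of the warped linear estimator is shown below for the length-biased case (sampling weight proportional to $x$), using a Haar basis at a single resolution level and estimating the density itself rather than a power of it. The warping distribution, the resolution level and all names are illustrative choices; the paper's nonlinear (thresholded) estimators and the power transformation are not reproduced here.

```python
import numpy as np

def warped_haar_density(x_obs, G, G_pdf, level=4):
    """Linear Haar wavelet estimate of f from length-biased data, warped by the cdf G."""
    # Length-biased sampling: observed density g(x) = x f(x) / mu, so expectations under f
    # are estimated with self-normalized Horvitz-Thompson weights proportional to 1/x.
    w = 1.0 / x_obs
    w /= w.sum()
    u = G(x_obs)                                    # warped observations in [0, 1]
    m = 2 ** level
    # Haar scaling coefficients c_k = E_f[phi_{j,k}(G(X))], phi_{j,k} = 2^{j/2} 1_{[k/m,(k+1)/m)}
    bins = np.minimum((u * m).astype(int), m - 1)
    c = np.array([w[bins == k].sum() for k in range(m)]) * np.sqrt(m)

    def f_hat(x):
        k = np.minimum((G(x) * m).astype(int), m - 1)
        h_warped = c[k] * np.sqrt(m)                # estimated density of G(X) under the target law
        return h_warped * G_pdf(x)                  # back-transform: f(x) = h(G(x)) G'(x)
    return f_hat

# Toy usage: target f = Exp(1); length-biased draws follow Gamma(2, 1); warp with the Exp(1) cdf.
rng = np.random.default_rng(4)
x_obs = rng.gamma(shape=2.0, scale=1.0, size=5000)
G = lambda x: 1 - np.exp(-x)
G_pdf = lambda x: np.exp(-x)
f_hat = warped_haar_density(x_obs, G, G_pdf)
print(f_hat(np.array([0.5, 1.0, 2.0])))             # compare with e^{-x}: about 0.61, 0.37, 0.14
```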

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation. However, previous works paid little attention to modeling the backward network of MI (i.e., the dependency from the target to the source), which is crucial to the tightness of the variational information maximization lower bound. In this paper, we propose Adversarial Mutual Information (AMI): a text generation framework formulated as a novel saddle-point (min-max) optimization aiming to identify joint interactions between the source and target. Within this framework, the forward and backward networks are able to iteratively promote or demote each other's generated instances by comparing the real and synthetic data distributions. We also develop a latent noise sampling strategy that leverages random variations in the high-level semantic space to enhance long-term dependency in the generation process. Extensive experiments on different text generation tasks demonstrate that the proposed AMI framework significantly outperforms several strong baselines, and we also show that AMI has the potential to yield a tighter lower bound of maximum mutual information for the variational information maximization problem.
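
The role of the backward network can be seen from the standard variational (Barber-Agakov) lower bound on mutual information, written here in generic notation: with source $S$, target $T$ and backward model $q_\phi(s\mid t)$,
\[
I(S;T) \;\ge\; H(S) + \mathbb{E}_{p(s,t)}\bigl[\log q_\phi(s \mid t)\bigr],
\]
and the bound is tight exactly when $q_\phi(s\mid t)$ equals the true posterior $p(s\mid t)$, which is why the quality of the backward (target-to-source) network governs how tight the maximized objective can be.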
