
This paper introduces SPDE bridges with observation noise and analyzes their spatially semidiscrete approximations. The SPDEs are considered in the form of mild solutions in an abstract Hilbert space framework suitable for parabolic equations. They are assumed to be linear with additive noise in the form of a cylindrical Wiener process. The observation noise is also cylindrical, and SPDE bridges are formulated via conditional distributions of Gaussian random variables in Hilbert spaces. A general framework for the spatial discretization of these bridge processes is introduced, and explicit convergence rates are derived for a spectral method and a finite element based method. It is shown that for sufficiently rough observation noise, the rates are essentially the same as those of the corresponding discretization of the original SPDE.
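The bridge construction rests on Gaussian conditioning. As a sketch (stated in finite dimensions for readability; the symbols below are generic notation, not taken from the paper), conditioning a jointly Gaussian pair on an observation gives

```latex
\begin{pmatrix} X \\ Y \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} m_X \\ m_Y \end{pmatrix},
\begin{pmatrix} C_{XX} & C_{XY} \\ C_{YX} & C_{YY} \end{pmatrix}
\right)
\;\Longrightarrow\;
X \mid Y = y \sim \mathcal{N}\!\left(
m_X + C_{XY} C_{YY}^{-1} (y - m_Y),\;
C_{XX} - C_{XY} C_{YY}^{-1} C_{YX}
\right).
```

In the Hilbert space setting the inverse $C_{YY}^{-1}$ must be interpreted as a suitable operator (pseudo-)inverse, which is where the roughness of the observation noise enters.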

Related content

This paper is devoted to a discrete adaptive finite element approximation result for the isotropic two-dimensional Griffith energy arising in fracture mechanics. The problem is addressed in the geometric measure theoretic framework of generalized special functions of bounded deformation, which is the natural energy space for this functional. The Griffith energy is proved to be approximated, in the sense of $\Gamma$-convergence, by a sequence of discrete integral functionals defined on continuous piecewise affine functions. The main feature of this result is that the mesh is part of the unknowns of the problem, which gives enough flexibility to recover isotropic surface energies.
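For orientation, a commonly used normalization of the planar Griffith energy (with Lamé coefficients $\mu, \lambda$ and fracture toughness $\kappa$; the exact constants are an assumption here) reads

```latex
E(u) \;=\; \int_{\Omega \setminus J_u} \left( \mu\, |e(u)|^2 + \frac{\lambda}{2}\, (\operatorname{div} u)^2 \right) dx \;+\; \kappa\, \mathcal{H}^{1}(J_u),
```

where $e(u) = \tfrac{1}{2}(\nabla u + \nabla u^{\mathsf T})$ is the symmetrized gradient of the displacement $u$ and $J_u$ is its jump set, whose length $\mathcal{H}^{1}(J_u)$ penalizes the crack.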

We study the entropic Gromov-Wasserstein distance and its unbalanced version between (unbalanced) Gaussian distributions with different dimensions. When the metric is the inner product, which we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions. Via an application of von Neumann's trace inequality, we obtain closed-form expressions for the entropic IGW between these Gaussian distributions. Finally, we consider an entropic inner product Gromov-Wasserstein barycenter of multiple Gaussian distributions. We prove that the barycenter is a Gaussian distribution when the entropic regularization parameter is small. We further derive a closed-form expression for the covariance matrix of the barycenter.
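The key tool in the derivation, von Neumann's trace inequality $|\operatorname{tr}(AB)| \le \sum_i \sigma_i(A)\,\sigma_i(B)$, is easy to check numerically; a minimal sketch on random matrices (illustrative only, unrelated to the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = abs(np.trace(A @ B))
# numpy returns singular values sorted in decreasing order,
# which is the ordering the inequality requires
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
rhs = float(np.sum(sA * sB))
print(lhs <= rhs + 1e-12)  # True: von Neumann's trace inequality
```

Equality holds when $A$ and $B$ share singular vectors appropriately, which is what pins down the optimal coupling in the Gaussian case.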

We propose inferential tools for functional linear quantile regression where the conditional quantile of a scalar response is assumed to be a linear functional of a functional covariate. In contrast to conventional approaches, we employ kernel convolution to smooth the original loss function. The coefficient function is estimated under a reproducing kernel Hilbert space framework. A gradient descent algorithm is designed to minimize the smoothed loss function with a roughness penalty. With the aid of the Banach fixed-point theorem, we show the existence and uniqueness of our proposed estimator as the minimizer of the regularized loss function in an appropriate Hilbert space. Furthermore, we establish the convergence rate as well as the weak convergence of our estimator. To our knowledge, this is the first weak convergence result for a functional quantile regression model. Pointwise confidence intervals and a simultaneous confidence band for the true coefficient function are then developed based on these theoretical properties. Numerical studies including both simulations and a data application are conducted to investigate the performance of our estimator and inference tools in finite samples.
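The smoothing step can be illustrated directly: convolving the non-differentiable check (pinball) loss with a kernel yields a smooth surrogate. A hedged sketch with a Gaussian kernel and numerical quadrature (the paper's kernel and bandwidth choices may differ):

```python
import numpy as np

def check_loss(u, tau):
    # standard quantile (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def smoothed_check_loss(u, tau, h, n=20001):
    # kernel convolution: integral of rho_tau(u - h*v) * K(v) dv with Gaussian K
    v = np.linspace(-8.0, 8.0, n)
    K = np.exp(-0.5 * v**2) / np.sqrt(2.0 * np.pi)
    dv = v[1] - v[0]
    return float(np.sum(check_loss(u - h * v, tau) * K) * dv)

# smoothing rounds off the kink at 0 while staying close to the original loss
tau, h = 0.5, 0.1
away = smoothed_check_loss(1.0, tau, h)     # ~ check_loss(1.0, tau) = 0.5
at_kink = smoothed_check_loss(0.0, tau, h)  # > 0, unlike the unsmoothed loss
```

The surrogate is differentiable everywhere, which is what makes the gradient descent step well defined.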

We establish a novel convergent iteration framework for the weak approximation of general switching diffusions. The key theoretical basis of the proposed approach is a restriction on the maximum number of switches, which untangles a challenging system of weakly coupled partial differential equations into a collection of independent partial differential equations, for which a variety of accurate and efficient numerical methods are available. Upper and lower bounding functions for the solutions are constructed from the iterative approximate solutions. We provide a rigorous convergence analysis for the iterative approximate solutions, as well as for the upper and lower bounding functions. Numerical results are provided to illustrate our theoretical findings and the effectiveness of the proposed framework.
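For context, a standard form of the weakly coupled system for a switching diffusion with regimes $i = 1, \dots, m$ and switching rate matrix $Q = (q_{ij})$ (generic notation, assumed here rather than taken from the paper) is

```latex
\partial_t u_i + \mathcal{L}_i u_i + \sum_{j \neq i} q_{ij}\,(u_j - u_i) = 0, \qquad i = 1, \dots, m,
```

where $\mathcal{L}_i$ is the generator of the diffusion in regime $i$. Capping the number of switches at level $n$ turns the coupling term into a known source built from the level-$(n-1)$ solutions, so each $u_i^{(n)}$ solves an independent scalar PDE.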

Quantum privacy amplification is a central task in quantum cryptography. Given shared randomness, which is initially correlated with a quantum system held by an eavesdropper, the goal is to extract uniform randomness which is decoupled from the latter. The optimal rate for this task is known to satisfy the strong converse property, and we provide a lower bound on the corresponding strong converse exponent. In the strong converse region, the distance of the final state of the protocol from the desired decoupled state converges exponentially fast to its maximal value, in the asymptotic limit. We show that this necessarily leads to totally insecure communication by establishing that the eavesdropper can infer any sent messages with certainty, when given very limited extra information. In fact, we prove that in the strong converse region, the eavesdropper has an exponential advantage in inferring the sent message correctly, compared to the achievability region. Additionally, we establish the following technical result, which is central to our proofs and of independent interest: the smoothing parameter for the smoothed max-relative entropy satisfies the strong converse property.
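For reference, the max-relative entropy and its smoothed version are defined (in standard notation, which we assume matches the paper's) as

```latex
D_{\max}(\rho \,\|\, \sigma) = \log \min \{ \lambda : \rho \le \lambda \sigma \},
\qquad
D_{\max}^{\varepsilon}(\rho \,\|\, \sigma) = \min_{\tilde{\rho} \in B^{\varepsilon}(\rho)} D_{\max}(\tilde{\rho} \,\|\, \sigma),
```

where $B^{\varepsilon}(\rho)$ is an $\varepsilon$-ball of states around $\rho$ in a chosen distance measure; the smoothing parameter $\varepsilon$ is the quantity whose strong converse behavior is established.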

We introduce the "inverse bandit" problem of estimating the rewards of a multi-armed bandit instance from observing the learning process of a low-regret demonstrator. Existing approaches to the related problem of inverse reinforcement learning assume the execution of an optimal policy, and thereby suffer from an identifiability issue. In contrast, we propose to leverage the demonstrator's behavior en route to optimality, and in particular, the exploration phase, for reward estimation. We begin by establishing a general information-theoretic lower bound under this paradigm that applies to any demonstrator algorithm, which characterizes a fundamental tradeoff between reward estimation and the amount of exploration of the demonstrator. Then, we develop simple and efficient reward estimators for upper-confidence-based demonstrator algorithms that attain the optimal tradeoff, showing in particular that consistent reward estimation -- free of identifiability issues -- is possible under our paradigm. Extensive simulations on both synthetic and semi-synthetic data corroborate our theoretical results.
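A toy illustration of the idea (emphatically not the paper's estimator): when a UCB demonstrator effectively stops pulling a suboptimal arm, that arm's confidence bonus roughly matches its reward gap, so the gap can be read off from pull counts alone. The arm means, noise level, and constants below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.9, 0.5])   # hypothetical arm means; true gap = 0.4
T = 20000

counts = np.zeros(2)
sums = np.zeros(2)
for t in range(T):
    if t < 2:
        a = t  # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
        a = int(np.argmax(ucb))
    r = rng.normal(means[a], 0.1)
    counts[a] += 1
    sums[a] += r

# heuristic inversion: the suboptimal arm stops being pulled once its
# confidence bonus sqrt(2 log T / N) drops below the gap, so invert for Delta
worse = int(np.argmin(counts))
gap_hat = float(np.sqrt(2.0 * np.log(T) / counts[worse]))
print(gap_hat)  # roughly the true gap of 0.4
```

Note that this uses only the demonstrator's action counts, not its observed rewards, which is the sense in which exploration itself carries reward information.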

We introduce and discuss shape-based models for finding the best interpolation data in the compression of images with noise. The aim is to reconstruct missing regions by means of minimizing a data fitting term in the $L^2$-norm between the images and their reconstructed counterparts using time-dependent PDE inpainting. We analyze the proposed models in the framework of the $\Gamma$-convergence from two different points of view. First, we consider a continuous stationary PDE model, obtained by focusing on the first iteration of the discretized time-dependent PDE, and get pointwise information on the "relevance" of each pixel by a topological asymptotic method. Second, we introduce a finite dimensional setting of the continuous model based on "fat pixels" (balls with positive radius), and we study by $\Gamma$-convergence the asymptotics when the radius vanishes. Numerical computations are presented that confirm the usefulness of our theoretical findings for non-stationary PDE-based image compression.
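A minimal sketch of the simplest stationary PDE inpainting model discussed in this line of work: harmonic inpainting by Jacobi relaxation, with known pixels held fixed as Dirichlet data. The toy image, mask density, and iteration count are assumptions, not the paper's setup:

```python
import numpy as np

def harmonic_inpaint(img, mask, n_iter=2000):
    # solve Laplace's equation on the unknown pixels; mask pixels are kept fixed
    u = np.where(mask, img, 0.0)
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')  # Neumann-style boundary handling
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(mask, img, avg)   # keep stored pixels, relax the rest
    return u

# a harmonic toy image f(x, y) = x*y is recovered well from ~20% of its pixels
x = np.linspace(0.0, 1.0, 32)
img = np.outer(x, x)
rng = np.random.default_rng(0)
mask = rng.random(img.shape) < 0.2
rec = harmonic_inpaint(img, mask)
err = float(np.abs(rec - img).mean())
```

The compression question in the abstract is then which 20% of pixels to store, which is exactly what the shape-based models select.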

Light field (LF) cameras record both the intensity and the directions of light rays, and encode 3D scenes into 4D LF images. Recently, many convolutional neural networks (CNNs) have been proposed for various LF image processing tasks. However, it is challenging for CNNs to effectively process LF images since the spatial and angular information is highly intertwined with varying disparities. In this paper, we propose a generic mechanism to disentangle this coupled information for LF image processing. Specifically, we first design a class of domain-specific convolutions to disentangle LFs along their different dimensions, and then leverage these disentangled features in task-specific modules. Our disentangling mechanism incorporates the LF structure prior and effectively handles 4D LF data. Based on the proposed mechanism, we develop three networks (i.e., DistgSSR, DistgASR and DistgDisp) for spatial super-resolution, angular super-resolution and disparity estimation. Experimental results show that our networks achieve state-of-the-art performance on all three tasks, which demonstrates the effectiveness, efficiency, and generality of our disentangling mechanism. Project page: //yingqianwang.github.io/DistgLF/.
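The spatial/angular coupling can be made concrete with plain array reshapes (a sketch with assumed toy sizes; the actual Distg networks operate on learned feature maps, not raw arrays):

```python
import numpy as np

U = V = 5   # angular resolution: 5x5 views (toy sizes, assumed)
H = W = 8   # spatial resolution of each view
lf = np.arange(U * V * H * W, dtype=np.float32).reshape(U, V, H, W)

# sub-aperture image (SAI) array: views tiled block-wise; a spatial
# convolution here sees neighboring pixels within one view
sai = lf.transpose(0, 2, 1, 3).reshape(U * H, V * W)

# macro-pixel image: each spatial location carries its UxV angular patch;
# an angular convolution here sees the same scene point across all views
mpi = lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)

# the angular patch at spatial location (h, w) equals the LF slice lf[:, :, h, w]
h, w = 3, 2
patch = mpi[h * U:(h + 1) * U, w * V:(w + 1) * V]
print(np.array_equal(patch, lf[:, :, h, w]))  # True
```

Domain-specific convolutions then amount to ordinary 2D convolutions applied on one layout or the other, each "seeing" only one kind of neighborhood.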

Overparametrized neural networks tend to perfectly fit noisy training data yet generalize well on test data. Inspired by this empirical observation, recent work has sought to understand this phenomenon of benign overfitting or harmless interpolation in the much simpler linear model. Previous theoretical work critically assumes that either the data features are statistically independent or the input data is high-dimensional; this precludes general nonparametric settings with structured feature maps. In this paper, we present a general and flexible framework for upper bounding regression and classification risk in a reproducing kernel Hilbert space. A key contribution is that our framework describes precise sufficient conditions on the data Gram matrix under which harmless interpolation occurs. Our results recover prior independent-features results (with a much simpler analysis), but they furthermore show that harmless interpolation can occur in more general settings such as features that are a bounded orthonormal system. Furthermore, our results show an asymptotic separation between classification and regression performance in a manner that was previously only shown for Gaussian features.
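A minimal numerical sketch of the interpolation phenomenon in an RKHS (Gaussian kernel, synthetic 1D data; all sizes and the kernel width are assumptions): the minimum-RKHS-norm interpolant fits the noisy labels exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(3.0 * X) + 0.3 * rng.standard_normal(n)   # noisy labels

def k(a, b, gamma=200.0):
    # Gaussian (RBF) kernel matrix between 1D point sets a and b
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# minimum-RKHS-norm interpolant: f(t) = k(t, X) @ alpha with K alpha = y
alpha = np.linalg.solve(k(X, X), y)
f = lambda t: k(t, X) @ alpha

max_err = float(np.max(np.abs(f(X) - y)))  # ~0: training noise is interpolated
```

Whether such exact fitting of noise is harmless for test risk is precisely what the conditions on the data Gram matrix $K = k(X, X)$ in the abstract characterize.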

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
