
In this paper, we investigate imaging inverse problems using an infinite-dimensional Bayesian inference method with a general fractional total variation-Gaussian (GFTG) prior. This novel hybrid prior extends the total variation-Gaussian (TG) prior and the non-local total variation-Gaussian (NLTG) prior: it combines a Gaussian prior with a general fractional total variation regularization term that covers a wide class of fractional derivatives. Compared to the TG prior, the GFTG prior effectively reduces the staircase effect and enhances the texture details of the images, while still admitting a complete theoretical analysis in the infinite-dimensional setting, similar to that of the TG prior. Since separability of the state space is essential for developing probability and integration theory in the infinite-dimensional setting, we first introduce the corresponding general fractional Sobolev space and prove that it is a separable Banach space. We then establish the well-posedness and finite-dimensional approximation of the posterior measure of the Bayesian inverse problem based on the GFTG prior, and draw samples from the posterior distribution using the preconditioned Crank-Nicolson (pCN) algorithm. Finally, we present several numerical examples of image reconstruction under linear and nonlinear models to illustrate the advantages of the proposed prior.
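For concreteness, here is a minimal Python sketch of the pCN sampler named above, assuming a finite-dimensional discretization; `phi` (the negative log-likelihood potential) and `prior_sample` (a draw from the discretized Gaussian reference prior) are user-supplied placeholders, not the paper's specific model.

```python
import numpy as np

def pcn_sampler(phi, prior_sample, u0, beta=0.2, n_iters=10000, rng=None):
    """Preconditioned Crank-Nicolson MCMC (a minimal sketch).

    phi          -- negative log-likelihood potential Phi(u) (placeholder)
    prior_sample -- callable drawing one sample from the Gaussian prior
    u0           -- initial state (NumPy array)
    beta         -- pCN step-size parameter in (0, 1]
    """
    rng = rng or np.random.default_rng()
    u, phi_u = u0.copy(), phi(u0)
    samples = []
    for _ in range(n_iters):
        xi = prior_sample(rng)
        # pCN proposal: leaves the Gaussian prior measure invariant
        v = np.sqrt(1.0 - beta**2) * u + beta * xi
        phi_v = phi(v)
        # acceptance probability depends only on the likelihood potential
        if np.log(rng.uniform()) < phi_u - phi_v:
            u, phi_u = v, phi_v
        samples.append(u.copy())
    return np.asarray(samples)
```

Because the prior-dependent terms cancel in the acceptance ratio, the acceptance rate is robust under mesh refinement, which is why pCN is the natural sampler for posteriors defined on function spaces.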

Related content

The IEEE Transactions on Games (T-G) publishes high-quality original articles on the scientific, technical, and engineering aspects of games. Articles in this journal are peer-reviewed in accordance with the requirements of the IEEE PSPB Operations Manual (sections 8.2.1.C and 8.2.2.A). Each published article is reviewed by at least two independent reviewers in a single-blind peer-review process, in which the reviewers' identities are unknown to the authors but the reviewers know the authors' identities. Articles are screened for plagiarism before acceptance. Official website:

Spectral clustering has been one of the most widely used methods for community detection in networks. However, large-scale networks pose computational challenges for the eigenvalue decomposition it requires. In this paper, we study spectral clustering with randomized sketching algorithms from a statistical perspective, where we assume the network data are generated from a stochastic block model that is not necessarily of full rank. To do this, we first use recently developed sketching algorithms to obtain two randomized spectral clustering algorithms, namely the random projection-based and the random sampling-based spectral clustering. We then derive theoretical bounds for the resulting algorithms in terms of the approximation error for the population adjacency matrix, the misclassification error, and the estimation error for the link probability matrix. It turns out that, under mild conditions, the randomized spectral clustering algorithms enjoy the same theoretical bounds as the original spectral clustering algorithm. We also extend the results to degree-corrected stochastic block models. Numerical experiments support our theoretical findings and show the efficiency of the randomized methods. A new R package called Rclust is developed and made available to the public.
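The random projection variant can be sketched in a few lines of Python via a standard randomized range finder. The function below is an illustrative sketch, not the Rclust implementation; the `oversample` parameter and the use of scikit-learn's KMeans are our own choices.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def randomized_spectral_clustering(A, k, oversample=10, rng=None):
    """Random-projection-based spectral clustering (illustrative sketch).

    A -- n x n symmetric adjacency matrix
    k -- target number of communities
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    # randomized range finder: sketch the column space of A
    omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ omega)
    # Rayleigh-Ritz step on the small projected matrix
    vals, vecs = np.linalg.eigh(Q.T @ A @ Q)
    idx = np.argsort(np.abs(vals))[::-1][:k]
    U = Q @ vecs[:, idx]  # approximate leading eigenvectors of A
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```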

We establish summability results for coefficient sequences of Wiener-Hermite polynomial chaos expansions for countably-parametric solutions of linear elliptic and parabolic divergence-form partial differential equations with Gaussian random field inputs. The novel proof technique developed here is based on analytic continuation of parametric solutions into the complex domain. It differs from previous works that used bootstrap arguments and induction on the differentiation order of solution derivatives with respect to the parameters. The present holomorphy-based argument allows a unified, "differentiation-free" sparsity analysis of Wiener-Hermite polynomial chaos expansions in various scales of function spaces. The analysis also implies corresponding results for posterior densities in Bayesian inverse problems subject to Gaussian priors on uncertain inputs from function spaces. Our results furthermore yield dimension-independent convergence rates of various constructive high-dimensional deterministic numerical approximation schemes such as single-level and multi-level versions of anisotropic sparse-grid Hermite-Smolyak interpolation and quadrature in both forward and inverse computational uncertainty quantification.
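To make the objects concrete, the following Python sketch computes the coefficients of a one-dimensional Wiener-Hermite (probabilists' Hermite) expansion by Gauss-Hermite quadrature; it is a toy illustration of the expansions analyzed above, not the paper's countably-parametric machinery.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_chaos_coeffs(f, degree, n_quad=64):
    """Coefficients c_k of f(y) ~ sum_k c_k He_k(y) for y ~ N(0, 1),
    computed with Gauss-Hermite quadrature (probabilists' convention)."""
    nodes, weights = hermegauss(n_quad)       # weight function exp(-y^2/2)
    weights = weights / np.sqrt(2.0 * np.pi)  # renormalize to the N(0,1) law
    fx = f(nodes)
    coeffs = []
    for k in range(degree + 1):
        ek = np.zeros(k + 1); ek[k] = 1.0
        hk = hermeval(nodes, ek)              # He_k at the quadrature nodes
        coeffs.append(np.sum(weights * fx * hk) / math.factorial(k))  # E[He_k^2] = k!
    return np.array(coeffs)

# quick sanity check: for f = exp the exact coefficients are e^(1/2) / k!
print(hermite_chaos_coeffs(np.exp, 4))
```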

Optimization under uncertainty and risk is indispensable in many practical situations. Our paper addresses the stability of optimization problems that use composite risk functionals subject to measure perturbations. Our main focus is the asymptotic behavior of data-driven formulations with empirical or smoothing estimators, such as kernels or wavelets, applied to some or all functions of the compositions. We analyze the properties of the new estimators and establish a strong law of large numbers, consistency, and bias-reduction potential under fairly general assumptions. Our results are germane to risk-averse optimization and to data science in general.
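As a toy instance of the estimators in question, the sketch below contrasts the plain empirical estimator of a simple composite risk functional, the mean-semideviation, with a Monte Carlo version of its Gaussian-kernel smoothing counterpart; both the functional and the smoothing device are our illustrative choices, not the paper's general setting.

```python
import numpy as np

def mean_semideviation(sample, kappa=0.5, bandwidth=None, rng=None):
    """Estimate rho(X) = E[X] + kappa * E[(X - E[X])_+] from a sample.

    bandwidth=None  -> plain empirical (plug-in) estimator;
    bandwidth=h > 0 -> Gaussian-kernel smoothing of the empirical measure,
                       realized here by perturbing each draw with N(0, h^2).
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(sample, dtype=float)
    if bandwidth is not None:
        x = x + bandwidth * rng.standard_normal(x.shape)
    mu = x.mean()
    return mu + kappa * np.maximum(x - mu, 0.0).mean()
```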

The goal of this paper is to reduce the total complexity of gradient-based methods for two classes of problems: affine-constrained composite convex optimization and bilinear saddle-point structured non-smooth convex optimization. Our technique is based on a double-loop inexact accelerated proximal gradient (APG) method for minimizing the sum of a non-smooth but proximable convex function and two smooth convex functions with different smoothness constants and computational costs. Compared to the standard APG method, the inexact APG method can reduce the total computation cost when one smooth component has a higher computational cost but a smaller smoothness constant than the other. With this property, the inexact APG method can be applied to approximately solve the subproblems of a proximal augmented Lagrangian method for affine-constrained composite convex optimization, and the smooth approximation of bilinear saddle-point structured non-smooth convex optimization, where the smooth function with the smaller smoothness constant has a significantly higher computational cost. It can thus reduce the total complexity of finding an approximately optimal or stationary solution. This technique is similar to the gradient sliding technique in the literature; the difference is that our inexact APG method can stop the inner loop early using a computable condition based on a measure of stationarity violation, whereas the gradient sliding methods must pre-specify the number of inner-loop iterations. Numerical experiments demonstrate the significantly higher efficiency of our methods over an optimal primal-dual first-order method and the gradient sliding methods.
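The double-loop structure can be sketched schematically as follows, for min_x h(x) + f(x) + g(x) with f expensive but smoother and g cheap. This is an illustrative sketch of the idea (Nesterov-style outer loop, proximal-gradient inner loop with a computable stopping test), not the paper's exact algorithm or step-size rules.

```python
import numpy as np

def inexact_apg(grad_f, L_f, grad_g, L_g, prox_h, x0,
                n_outer=100, max_inner=50, tol_inner=1e-6):
    """Schematic double-loop inexact APG for min_x h(x) + f(x) + g(x):
    f is expensive with small smoothness constant L_f, g is cheap with
    large L_g, and h is non-smooth but proximable via prox_h(v, step)."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_outer):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation step
        gf = grad_f(y)            # one expensive gradient per outer iteration
        # inner loop: proximal gradient on the subproblem
        #   min_u h(u) + g(u) + <gf, u> + (L_f / 2) * ||u - y||^2
        u, step = y.copy(), 1.0 / (L_f + L_g)
        for _ in range(max_inner):
            u_new = prox_h(u - step * (gf + grad_g(u) + L_f * (u - y)), step)
            done = np.linalg.norm(u_new - u) / step < tol_inner  # stationarity test
            u = u_new
            if done:
                break
        x_prev, x, t = x, u, t_next
    return x
```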

We study the class of first-order locally-balanced Metropolis--Hastings algorithms introduced in Livingstone & Zanella (2021). To choose a specific algorithm within the class, the user must select a balancing function $g:\mathbb{R} \to \mathbb{R}$ satisfying $g(t) = tg(1/t)$ and a noise distribution for the proposal increment. Popular choices within the class are the Metropolis-adjusted Langevin algorithm and the recently introduced Barker proposal. We first establish a universal limiting optimal acceptance rate of 57% and a scaling of $n^{-1/3}$, as the dimension $n$ tends to infinity, among all members of the class, under mild smoothness assumptions on $g$ and when the target distribution of the algorithm is of product form. In particular, we obtain an explicit expression for the asymptotic efficiency of an arbitrary algorithm in the class, as measured by expected squared jumping distance. We then consider how to optimise this expression under various constraints. We derive the optimal choice of noise distribution for the Barker proposal, the optimal choice of balancing function under a Gaussian noise distribution, and the optimal first-order locally-balanced algorithm among the entire class, which turns out to depend on the specific target distribution. Numerical simulations confirm our theoretical findings and, in particular, show that a bi-modal choice of noise distribution in the Barker proposal gives rise to a practical algorithm that is consistently more efficient than the original Gaussian version.
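For reference, a single iteration of the Barker proposal with Gaussian noise can be written as follows; the coordinate-wise sign flip implements the balancing function g(t) = t/(1+t), and the log-density, gradient, and step size are user-supplied placeholders.

```python
import numpy as np

def barker_step(x, log_pi, grad_log_pi, sigma=1.0, rng=None):
    """One Metropolis-Hastings iteration with the Barker proposal (sketch)."""
    rng = rng or np.random.default_rng()
    g = grad_log_pi(x)
    z = sigma * rng.standard_normal(x.shape)
    # keep the sign of z_i with probability 1 / (1 + exp(-z_i * g_i)),
    # i.e. the locally-balanced choice g(t) = t / (1 + t)
    b = np.where(rng.uniform(size=x.shape) < 1.0 / (1.0 + np.exp(-z * g)), 1.0, -1.0)
    y = x + b * z
    # Metropolis-Hastings correction with the induced proposal densities
    log_q_xy = -np.logaddexp(0.0, -(y - x) * g).sum()
    log_q_yx = -np.logaddexp(0.0, -(x - y) * grad_log_pi(y)).sum()
    log_alpha = log_pi(y) - log_pi(x) + log_q_yx - log_q_xy
    return y if np.log(rng.uniform()) < log_alpha else x
```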

The positive definiteness of discrete time-fractional derivatives is fundamental to numerical stability (in the energy sense) for time-fractional phase-field models. A novel technique is proposed to estimate the minimum eigenvalue of discrete convolution kernels generated by the nonuniform L1, half-grid based L1, and time-averaged L1 formulas of the Caputo fractional derivative. The main discrete tools are the discrete orthogonal convolution kernels and the discrete complementary convolution kernels. Certain variational energy dissipation laws at the discrete level are then established for the variable-step L1-type methods applied to the time-fractional Cahn-Hilliard model. They are shown to be asymptotically compatible, in the fractional-order limit $\alpha\rightarrow1$, with the associated energy dissipation law for the classical Cahn-Hilliard equation. Numerical examples, together with an adaptive time-stepping procedure, are provided to demonstrate the effectiveness of the proposed methods.
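A small Python sketch of the nonuniform L1 kernels and the minimum-eigenvalue check may help fix ideas; the kernel formula follows the standard nonuniform L1 discretization of the Caputo derivative, while the brute-force eigenvalue computation below stands in for the paper's analytical estimates.

```python
import numpy as np
from math import gamma

def l1_kernels(t, alpha):
    """Nonuniform L1 convolution kernels a^{(n)}_{n-k}, k = 1..n, of the
    Caputo derivative of order alpha in (0, 1) on the time mesh t[0..n]."""
    n = len(t) - 1
    tau = np.diff(t)
    a = np.empty(n)
    for k in range(1, n + 1):
        # (1/Gamma(1-alpha)) * int_{t_{k-1}}^{t_k} (t_n - s)^{-alpha} ds / tau_k
        a[n - k] = ((t[n] - t[k - 1]) ** (1 - alpha)
                    - (t[n] - t[k]) ** (1 - alpha)) / (gamma(2 - alpha) * tau[k - 1])
    return a

def min_eigenvalue(t, alpha):
    """Brute-force minimum eigenvalue of the symmetric part of the kernel
    matrix B[n, k] = a^{(n)}_{n-k}; positivity of this eigenvalue implies
    positive definiteness of the discrete quadratic form."""
    n = len(t) - 1
    B = np.zeros((n, n))
    for m in range(1, n + 1):
        B[m - 1, :m] = l1_kernels(t[: m + 1], alpha)[::-1]
    return np.linalg.eigvalsh(0.5 * (B + B.T)).min()

# toy usage on a graded mesh
t = np.linspace(0.0, 1.0, 21) ** 2
print(min_eigenvalue(t, alpha=0.5))
```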

With the development of data acquisition technology, multi-channel data are collected and widely used in many fields, and most such data can be expressed as various types of vector functions. Feature extraction from vector functions for identifying patterns of interest is a critical but challenging task. In this paper, we focus on constructing moment invariants of general vector functions. Specifically, we define the rotation-affine transform to describe real deformations of general vector functions, and then design a structural framework to systematically generate Gaussian-Hermite moment invariants to this transform model. This is the first time a unified framework has been proposed in the literature to construct orthogonal moment invariants of general vector functions. Given a certain type of multi-channel data, we demonstrate how to use the new method to derive all possible invariants and to eliminate various dependences among them. For RGB images and 2D and 3D flow fields, we obtain complete and independent sets of invariants with low orders and low degrees. Experiments on synthetic and popular datasets of vector-valued data evaluate the stability and discriminability of these invariants, as well as their robustness to noise. The results clearly show that the moment invariants proposed in this paper outperform previously used moment invariants of vector functions in RGB image classification, vortex detection in 2D vector fields, and template matching for 3D flow fields.
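As a concrete building block, the sketch below computes 2D Gaussian-Hermite moments of a single-channel image; the grid, the scale parameter sigma, and the omitted normalization are our illustrative simplifications, and the invariant combinations themselves are not reproduced here.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_hermite_moments(img, max_order=3, sigma=0.4):
    """2D Gaussian-Hermite moments M[p, q] of a single-channel image on
    [-1, 1]^2 (unnormalized, illustrative)."""
    h, w = img.shape
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)

    def basis(order, u):
        c = np.zeros(order + 1); c[order] = 1.0
        # Hermite polynomial (physicists') modulated by a Gaussian envelope
        return hermval(u / sigma, c) * np.exp(-u**2 / (2.0 * sigma**2))

    M = np.empty((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            M[p, q] = np.sum(img * np.outer(basis(p, ys), basis(q, xs)))
    return M
```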

We develop a theoretical analysis for special neural network architectures, termed operator recurrent neural networks, for approximating nonlinear functions whose inputs are linear operators. Such functions commonly arise in solution algorithms for inverse boundary value problems. Traditional neural networks treat input data as vectors, and thus they do not effectively capture the multiplicative structure associated with the linear operators that correspond to the data in such inverse problems. We therefore introduce a new family that resembles a standard neural network architecture, but where the input data acts multiplicatively on vectors. Motivated by compact operators appearing in boundary control and the analysis of inverse boundary value problems for the wave equation, we promote structure and sparsity in selected weight matrices in the network. After describing this architecture, we study its representation properties as well as its approximation properties. We furthermore show that an explicit regularization can be introduced that can be derived from the mathematical analysis of the mentioned inverse problems, and which leads to certain guarantees on the generalization properties. We observe that the sparsity of the weight matrices improves the generalization estimates. Lastly, we discuss how operator recurrent networks can be viewed as a deep learning analogue to deterministic algorithms such as boundary control for reconstructing the unknown wavespeed in the acoustic wave equation from boundary measurements.
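A toy forward pass illustrates the multiplicative role of the input: the operator A (here a plain matrix) multiplies the hidden state at every layer, in contrast to a standard network that would only see A as a flattened feature vector. The layer sizes, the ReLU non-linearity, and the random parameters are illustrative choices, not the paper's architecture.

```python
import numpy as np

def operator_recurrent_forward(A, h0, weights, biases):
    """Toy forward pass of an operator recurrent network: the input
    operator A acts multiplicatively on the hidden state at each layer."""
    h = h0
    for W, b in zip(weights, biases):
        # data acts by multiplication (A @ h), then a learned affine map + ReLU
        h = np.maximum(W @ (A @ h) + b, 0.0)
    return h

# toy usage: a 5x5 input operator and three layers of random parameters
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
weights = [0.1 * rng.standard_normal((5, 5)) for _ in range(3)]
biases = [np.zeros(5) for _ in range(3)]
print(operator_recurrent_forward(A, np.ones(5), weights, biases))
```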

In this paper, we study, from a theoretical perspective, how powerful graph neural networks (GNNs) can be at learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNNs can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding a coloring or weak-coloring to each node feature improves these approximation ratios, indicating that preprocessing and feature engineering theoretically strengthen model capabilities.
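For intuition, the classical maximal-matching 2-approximation for minimum vertex cover, shown below, is the kind of distributed local algorithm whose approximation ratio such results connect to GNN expressivity; this sequential Python version is ours, for illustration only.

```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for minimum vertex cover: scan the
    edges, and whenever neither endpoint is covered, add both endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # endpoints of a greedily matched edge
    return cover

# toy usage on a 4-cycle; any vertex cover of it needs at least 2 vertices
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3), (3, 0)]))
```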

Graph Neural Networks (GNNs) come in many flavors, but should always be either invariant (a permutation of the nodes of the input graph does not affect the output) or equivariant (a permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or an equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
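The elementary building blocks can be illustrated in a few lines: for first-order node features, a permutation-equivariant linear map can combine the identity with the node average, and summing over nodes gives an invariant readout. The parameter shapes below are our illustrative choices, not the networks analyzed in the paper.

```python
import numpy as np

def equivariant_layer(X, alpha, beta, bias):
    """Permutation-equivariant linear layer on node features X (n x d):
    a weighted identity term plus a broadcast node-average term."""
    mean = X.mean(axis=0, keepdims=True)               # permutation-invariant
    return X @ alpha + np.ones((X.shape[0], 1)) @ (mean @ beta) + bias

def invariant_readout(X):
    """Permutation-invariant readout: sum the node features."""
    return X.sum(axis=0)

# equivariance check: permuting the rows of X permutes the output rows
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
alpha, beta, bias = rng.standard_normal((3, 3)), rng.standard_normal((3, 3)), 0.0
perm = rng.permutation(4)
out = equivariant_layer(X, alpha, beta, bias)
assert np.allclose(equivariant_layer(X[perm], alpha, beta, bias), out[perm])
```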
