Image acquisition and segmentation are likely to introduce noise, and further image processing such as image registration and parameterization can introduce additional noise. It is thus imperative to suppress noise and boost signal. To increase the signal-to-noise ratio (SNR) and the smoothness of the data required for the subsequent random field theory based statistical inference, some type of smoothing is necessary. Among the many image smoothing methods, Gaussian kernel smoothing has emerged as the de facto technique among brain imaging researchers due to its simplicity of numerical implementation. Gaussian kernel smoothing also increases statistical sensitivity and statistical power, as well as Gaussianness: it can be viewed as a weighted averaging of voxel values, so by the central limit theorem the weighted average should be more Gaussian.
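The weighted-averaging view of Gaussian kernel smoothing is easy to make concrete. The sketch below is a minimal NumPy implementation, not tied to any particular neuroimaging package; the image, kernel width, and noise level are illustrative. It smooths a noisy 2D image with a separable Gaussian kernel and checks that the mean squared error against the true signal drops:

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalized 1-D Gaussian weights, truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_smooth(image, sigma=2.0):
    """Separable Gaussian kernel smoothing: each output voxel is a
    Gaussian-weighted average of its neighbours, applied axis by axis."""
    k = gaussian_kernel_1d(sigma)
    out = np.apply_along_axis(np.convolve, 0, image, k, mode="same")
    out = np.apply_along_axis(np.convolve, 1, out, k, mode="same")
    return out

rng = np.random.default_rng(0)
signal = np.zeros((64, 64))
signal[24:40, 24:40] = 1.0                       # a toy "activation" region
noisy = signal + 0.5 * rng.standard_normal(signal.shape)
smoothed = gaussian_smooth(noisy, sigma=2.0)

mse_noisy = float(np.mean((noisy - signal) ** 2))
mse_smooth = float(np.mean((smoothed - signal) ** 2))
```

Because the Gaussian factorizes across axes, the 2D smoothing is implemented as two 1D convolutions; the same trick extends to 3D volumes.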

Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nystr\"om method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard $n^{-1/2}$ rate while reducing computational costs. We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules, and illustrate our theoretical findings with numerical experiments.
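As a rough illustration of the idea (not the paper's exact estimator), the sketch below approximates the empirical kernel mean embedding by projecting it onto the span of a small random subset of landmark points; the RBF kernel, landmark count, and regularizer are illustrative assumptions:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_mean_embedding(X, m, gamma=0.5, reg=1e-6, seed=0):
    """Approximate the empirical kernel mean embedding
    mu_n = (1/n) sum_i k(x_i, .) by its projection onto the span of m
    random landmarks: mu_n ~ sum_j alpha_j k(z_j, .)."""
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), size=m, replace=False)]
    K_mm = rbf(Z, Z, gamma)            # m x m landmark Gram matrix
    K_mn = rbf(Z, X, gamma)            # landmark-to-data kernel values
    alpha = np.linalg.solve(K_mm + reg * np.eye(m), K_mn.mean(axis=1))
    return Z, alpha

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 2))
Z, alpha = nystrom_mean_embedding(X, m=60)

# Squared RKHS norms of the exact and approximate embeddings:
# the full n x n Gram matrix vs. the m x m landmark representation.
exact = float(rbf(X, X).mean())
approx = float(alpha @ rbf(Z, Z) @ alpha)
```

Only the m x m landmark system is ever factorized, which is the source of the computational savings; the projection makes the approximate squared norm a lower bound on the exact one.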

Stochastic gradient Markov chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior's functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, in which parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. The scheme can be further modified in a "dropout" manner, leading to even more scalability. We test our scheme for ResNet-20 on CIFAR-10, SVHN, and FMNIST. In all cases, we find improvements in convergence speed and/or final accuracy compared to SGMCMC and VI.
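For context, a minimal Langevin-type update on a toy target is sketched below. The paper's algorithm modifies the energy function so that parts of the latent variables are averaged over earlier samples of the chain; this sketch deliberately shows only the base update on an unmodified energy, with illustrative step size and target:

```python
import numpy as np

def langevin_step(theta, grad_log_post, step, rng):
    """One (stochastic gradient) Langevin update:
    theta <- theta + (step / 2) * grad log p(theta | data) + sqrt(step) * xi,
    with xi ~ N(0, I). In the paper, the gradient would come from a
    modified energy with some latent variables averaged over past samples."""
    noise = np.sqrt(step) * rng.standard_normal(np.shape(theta))
    return theta + 0.5 * step * grad_log_post(theta) + noise

# Toy target: standard normal posterior, so grad log p(theta) = -theta.
rng = np.random.default_rng(0)
theta, samples = 0.0, []
for t in range(50_000):
    theta = langevin_step(theta, lambda th: -th, step=0.1, rng=rng)
    if t >= 5_000:                     # discard burn-in
        samples.append(theta)
samples = np.asarray(samples)
```

After burn-in, the chain's empirical mean and variance should approximate the target's (0 and 1), up to a small discretization bias controlled by the step size.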

We present a new adaptive resource optimization strategy that jointly allocates the subwindow and transmit power in multi-device terahertz (THz) band Internet of Things (Tera-IoT) networks. Unlike prior studies, which focus mostly on maximizing the sum rate, we incorporate both rate and transmission distance into the objective function of our problem formulation, along with key features of THz bands including the spreading and molecular absorption losses. More specifically, as a performance metric of Tera-IoT networks, we adopt the transport capacity (TC), defined as the sum of the rate-distance products over all users. This metric has been widely adopted in large-scale ad hoc networks and is also appropriate for evaluating the performance of various Tera-IoT applications. We then formulate an optimization problem that aims at maximizing the TC. Moreover, motivated by the importance of the transmission distance, which is severely limited by the high path loss in THz bands, we extend our optimization problem to the case of jointly allocating the subwindow, transmit power, and transmission distance. We show how to solve our problems via an effective two-stage resource allocation strategy, and demonstrate the superiority of our adaptive solution over benchmark methods through extensive numerical evaluations for various environmental setups of large-scale Tera-IoT networks.
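The TC metric itself is straightforward to compute. The sketch below evaluates it under a textbook THz link model with spreading and molecular absorption losses; all numeric parameters (carrier frequency, bandwidth, absorption coefficient, noise power) are illustrative placeholders, not values from the paper:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def thz_rate(p_tx_w, d_m, f_hz=1.0e12, bw_hz=1.0e9, k_abs=0.1, noise_w=1.0e-12):
    """Shannon rate under the usual THz channel model: free-space spreading
    loss (c / (4 pi f d))^2 multiplied by molecular absorption loss
    exp(-k_abs * d). The absorption coefficient k_abs is frequency- and
    humidity-dependent in practice; a fixed placeholder is used here."""
    gain = (C / (4.0 * np.pi * f_hz * d_m)) ** 2 * np.exp(-k_abs * d_m)
    return bw_hz * np.log2(1.0 + p_tx_w * gain / noise_w)

def transport_capacity(powers_w, dists_m):
    """TC = sum over devices of (rate x distance), in bit-meters per second."""
    return sum(thz_rate(p, d) * d for p, d in zip(powers_w, dists_m))

tc = transport_capacity([0.1, 0.1], [1.0, 5.0])
```

Note how quickly the rate decays with distance: the absorption term multiplies the quadratic spreading loss exponentially, which is why the distance allocation matters so much in THz bands.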

A Bayes point machine is a single classifier that approximates the majority decision of an ensemble of classifiers. This paper observes that kernel interpolation is a Bayes point machine for Gaussian process classification. This observation facilitates the transfer of results from both ensemble theory and an area of convex geometry known as Brunn-Minkowski theory to derive PAC-Bayes risk bounds for kernel interpolation. Since large margin, infinite width neural networks are kernel interpolators, the paper's findings may help to explain generalisation in neural networks more broadly. Supporting this idea, the paper finds evidence that large margin, finite width neural networks behave like Bayes point machines too.
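A minimal sketch of kernel interpolation in the classification setting described above; the RBF kernel, data, and regularizer are illustrative choices, while the paper's analysis concerns Gaussian process and neural network kernels:

```python
import numpy as np

def fit_kernel_interpolator(X, y, gamma=2.0, reg=1e-8):
    """Minimum-RKHS-norm kernel interpolator: solve K alpha = y so the
    predictor f(x) = sum_i alpha_i k(x_i, x) fits every +/-1 label.
    The tiny ridge term is only for numerical stability."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, gamma=2.0):
    """Classify by the sign of the interpolant at the test points."""
    K = np.exp(-gamma * ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1))
    return np.sign(K @ alpha)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sign(X[:, 0])                   # a simple linearly separable task
alpha = fit_kernel_interpolator(X, y)
train_preds = predict(X, alpha, X)
```

By construction the interpolant passes through every training label, so the training error is zero; the interesting question, which the paper addresses, is why such interpolators nevertheless generalise.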

Wasserstein gradient flow has emerged as a promising approach to solve optimization problems over the space of probability distributions. A recent trend is to use the well-known JKO scheme in combination with input convex neural networks to numerically implement the proximal step. The most challenging step, in this setup, is to evaluate functions involving density explicitly, such as entropy, in terms of samples. This paper builds on these recent works with a slight but crucial difference: we propose to utilize a variational formulation of the objective function, expressed as a maximization over a parametric class of functions. Theoretically, the proposed variational formulation allows the construction of gradient flows directly for empirical distributions with a well-defined and meaningful objective function. Computationally, this approach replaces the computationally expensive step in existing methods, needed to handle objective functions involving density, with inner loop updates that only require a small batch of samples and scale well with the dimension. The performance and scalability of the proposed method are illustrated with the aid of several numerical experiments involving high-dimensional synthetic and real datasets.

The conventional data-driven inversion framework is based on Gaussian statistics, which presents serious difficulties, especially in the presence of outliers in the measurements. In this work, we present maximum likelihood estimators associated with generalized Gaussian distributions in the context of R\'enyi, Tsallis and Kaniadakis statistics. In this regard, we analyse the outlier resistance of each proposal analytically through the so-called influence function, and we formulate inverse problems by constructing objective functions linked to the maximum likelihood estimators. To demonstrate the robustness of the generalized methodologies, we consider an important geophysical inverse problem with highly noisy data contaminated by spikes. The results reveal that the best data inversion performance occurs when the entropic index of each generalized statistic is associated with objective functions proportional to the inverse of the error amplitude. We argue that in this limit the three approaches are resistant to outliers and are also equivalent, which suggests a lower computational cost for the inversion process due to the reduction in the number of numerical simulations to be performed and the fast convergence of the optimization process.

A generalization of L{\"u}roth's theorem expresses that every transcendence degree 1 subfield of the rational function field is a simple extension. In this note we show that a classical proof of this theorem also holds to prove this generalization.

This paper surveys the machine learning literature and presents machine learning approaches as optimization models. Such models can benefit from advances in numerical optimization techniques, which have already played a distinctive role in several machine learning settings. In particular, mathematical optimization models are presented for commonly used machine learning approaches to regression, classification, clustering, and deep neural networks, as well as for new emerging applications in machine teaching and empirical model learning. The strengths and shortcomings of these models are discussed and potential research directions are highlighted.

Multi-source translation is an approach that exploits multiple inputs (e.g. in two different languages) to increase translation accuracy. In this paper, we examine approaches for multi-source neural machine translation (NMT) using an incomplete multilingual corpus in which some translations are missing. In practice, many multilingual corpora are incomplete due to the difficulty of providing translations in all of the relevant languages (for example, in TED Talks, most English talks only have subtitles for a small portion of the languages that TED supports). Existing studies on multi-source translation did not explicitly handle such situations. This study focuses on the use of incomplete multilingual corpora in multi-encoder NMT and mixture of NMT experts, and examines a very simple implementation in which missing source translations are replaced by a special symbol <NULL>. This method allows us to use incomplete corpora both at training time and test time. In experiments with real incomplete multilingual corpora of TED Talks, the multi-source NMT with the <NULL> tokens achieved higher translation accuracies, measured by BLEU, than any one-to-one NMT system.
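The <NULL> replacement is simple to implement in preprocessing. The sketch below is illustrative; the field names and data layout are assumptions, not taken from the paper:

```python
NULL_TOKEN = "<NULL>"

def fill_missing_sources(example, languages):
    """Replace missing source-side translations with the special <NULL>
    symbol, so that a multi-encoder NMT model (or a mixture of NMT
    experts) can consume incomplete multilingual data at training and
    test time alike."""
    return {lang: example.get(lang, NULL_TOKEN) for lang in languages}

corpus = [
    {"en": "Hello .", "fr": "Bonjour ."},                     # German missing
    {"en": "Thank you .", "fr": "Merci .", "de": "Danke ."},  # complete
]
filled = [fill_missing_sources(ex, ["en", "fr", "de"]) for ex in corpus]
```

After this step every example has a value for every source language, so the same multi-encoder architecture can be used whether or not the original corpus entry was complete.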

The Pachinko Allocation Machine (PAM) is a deep topic model that represents rich correlation structures among topics by a directed acyclic graph over topics. Because of the flexibility of the model, however, approximate inference is very difficult. Perhaps for this reason, only a small number of potential PAM architectures have been explored in the literature. In this paper we present an efficient and flexible amortized variational inference method for PAM, using a deep inference network to parameterize the approximate posterior distribution in a manner similar to the variational autoencoder. Our inference method produces more coherent topics than state-of-the-art inference methods for PAM while being an order of magnitude faster, which allows exploration of a wider range of PAM architectures than have previously been studied.
