
A metric tensor for Riemann manifold Monte Carlo particularly suited for non-linear Bayesian hierarchical models is proposed. The metric tensor is built from symmetric positive semidefinite log-density gradient covariance (LGC) matrices, which are also proposed and further explored here. The LGCs generalize the Fisher information matrix by measuring the joint information content and dependence structure of both a random variable and the parameters of said variable. Consequently, positive definite Fisher/LGC-based metric tensors may be constructed not only from the observation likelihoods, as is current practice, but also from arbitrarily complicated non-linear prior/latent variable structures, provided the LGC may be derived for each conditional distribution used to construct said structures. The proposed methodology is highly automatic and allows for exploitation of any sparsity associated with the model in question. When implemented in conjunction with a Riemann manifold variant of the recently proposed numerical generalized randomized Hamiltonian Monte Carlo processes, the proposed methodology is highly competitive, in particular for the more challenging target distributions associated with Bayesian hierarchical models.
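For orientation, the classical Fisher information is the covariance of the score taken with respect to the parameters only; a natural reading of the LGC construction (the notation and exact form here are our illustrative assumptions, not necessarily the paper's definition) extends the gradient to the random variable as well:

$$\mathcal{F}(\theta)=\mathbb{E}_{x\sim p(\cdot\mid\theta)}\!\left[\nabla_{\theta}\log p(x\mid\theta)\,\nabla_{\theta}\log p(x\mid\theta)^{\top}\right],\qquad G=\mathbb{E}_{x\sim p(\cdot\mid\theta)}\!\left[\nabla_{(x,\theta)}\log p(x\mid\theta)\,\nabla_{(x,\theta)}\log p(x\mid\theta)^{\top}\right].$$

Any matrix of this outer-product form is automatically symmetric positive semidefinite, consistent with the properties claimed above.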

Related Content

We introduce a new Projected Rayleigh Quotient Iteration aimed at improving the convergence behaviour of classic Rayleigh quotient iteration (RQI) by incorporating approximate information about the target eigenvector at each step. While classic RQI exhibits local cubic convergence for Hermitian matrices, its global behaviour can be unpredictable: it may converge to an eigenvalue far away from the target, even when started with accurate initial conditions. This problem is exacerbated when the eigenvalues are closely spaced. The key idea of the new algorithm is, at each step, to add a complex-valued projection (depending on the current eigenvector approximation) to the original matrix, such that the unwanted eigenvalues are lifted into the complex plane while the target stays close to the real line, thereby increasing the spacing between the target eigenvalue and the rest of the spectrum. Making better use of the eigenvector approximation leads to more robust convergence behaviour, and the new method converges reliably to the correct target eigenpair for a significantly wider range of initial vectors than classic RQI does. We prove that the method converges locally cubically, and we present several numerical examples demonstrating the improved global convergence behaviour. In particular, we apply it to compute eigenvalues in a band-gap spectrum of a Sturm-Liouville operator used to model photonic crystal fibres, where the target and unwanted eigenvalues are closely spaced. The examples show that the new method converges to the desired eigenpair even when the eigenvalue spacing is very small, often succeeding where classic RQI fails.
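A minimal numerical sketch of the idea follows; the exact form of the complex projection and the shift strength beta are our illustrative assumptions rather than the paper's construction.

    import numpy as np

    def projected_rqi(A, v0, beta=1.0, tol=1e-12, max_iter=50):
        """Rayleigh quotient iteration with a hypothetical complex
        projection that lifts the unwanted spectrum off the real line."""
        v = v0 / np.linalg.norm(v0)
        n = len(v)
        for _ in range(max_iter):
            mu = np.vdot(v, A @ v).real               # Rayleigh quotient
            # Eigenvectors (nearly) orthogonal to the current iterate
            # receive an imaginary shift of size ~beta.
            P = np.eye(n) - np.outer(v, v.conj())
            M = A + 1j * beta * P
            w = np.linalg.solve(M - mu * np.eye(n), v)
            v = w / np.linalg.norm(w)
            if np.linalg.norm(A @ v - mu * v) < tol:  # converged eigenpair
                break
        return mu, v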

A semi-implicit in time, entropy stable finite volume scheme for the compressible barotropic Euler system is designed and analyzed, and its weak convergence to a dissipative measure-valued (DMV) solution [E. Feireisl et al., Dissipative measure-valued solutions to the compressible Navier-Stokes system, Calc. Var. Partial Differential Equations, 2016] of the Euler system is shown. Entropy stability is achieved by introducing a shifted velocity in the convective fluxes of the mass and momentum balances, provided a CFL-like condition is satisfied. A consistency analysis is performed in the spirit of Lax's equivalence theorem under some physically reasonable boundedness assumptions. The concept of K-convergence [E. Feireisl et al., K-convergence as a new tool in numerical analysis, IMA J. Numer. Anal., 2020] is used to obtain some strong convergence results, which are then illustrated via rigorous numerical case studies. Finally, the convergence of the scheme to a DMV solution, a weak solution, and a strong solution of the Euler system is established using the weak-strong uniqueness principle and relative entropy arguments.
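For reference, the compressible barotropic Euler system in question reads (in standard notation, with an isentropic pressure law):

$$\partial_{t}\varrho+\operatorname{div}(\varrho\boldsymbol{u})=0,\qquad\partial_{t}(\varrho\boldsymbol{u})+\operatorname{div}(\varrho\boldsymbol{u}\otimes\boldsymbol{u})+\nabla p(\varrho)=0,\qquad p(\varrho)=a\varrho^{\gamma},\ a>0,\ \gamma>1,$$

where $\varrho$ is the density and $\boldsymbol{u}$ the velocity; the shifted velocity mentioned above enters the discrete counterparts of the convective fluxes $\varrho\boldsymbol{u}$ and $\varrho\boldsymbol{u}\otimes\boldsymbol{u}$.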

We prove asymptotic results for a modification of the cross-entropy estimator originally introduced by Ziv and Merhav in the Markovian setting in 1993. Our results concern a more general class of decoupled measures. In particular, our results imply strong asymptotic consistency of the modified estimator for all pairs of functions of stationary, irreducible, finite-state Markov chains satisfying a mild decay condition. Our approach is based on the study of a rescaled cumulant-generating function called the cross-entropic pressure, importing to information theory some techniques from the study of large deviations within the thermodynamic formalism.
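As background, for stationary measures $p$ and $q$ on sequences over a finite alphabet, the quantity such estimators target is the cross-entropy rate (standard definition, our notation):

$$h(p\,\|\,q)=-\lim_{n\to\infty}\frac{1}{n}\,\mathbb{E}_{p}\!\left[\log q(X_{1},\dots,X_{n})\right],$$

which reduces to the entropy rate of $p$ when $q=p$, and whose excess over that entropy rate is the relative-entropy (divergence) rate.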

In this series of studies, we establish homogenized lattice Boltzmann methods (HLBM) for simulating fluid flow through porous media. Our contributions in part I are twofold. First, we assemble the targeted partial differential equation system by formally unifying the governing equations for nonstationary fluid flow in porous media. A matrix of regularly arranged, equally sized obstacles is placed into the domain to model fluid flow through porous structures governed by the incompressible nonstationary Navier--Stokes equations (NSE). Depending on the ratio of geometric parameters in the matrix arrangement, several homogenized equations are obtained. We review existing methods for homogenizing the nonstationary NSE for specific porosities and discuss the applicability of the resulting model equations. Consequently, the homogenized NSE are expressed as targeted partial differential equations that jointly incorporate the derived aspects. Second, we propose a kinetic model, the homogenized Bhatnagar--Gross--Krook Boltzmann equation, which approximates the homogenized nonstationary NSE. We formally prove that the zeroth and first order moments of the kinetic model provide solutions to the mass and momentum balance variables of the macroscopic model up to specific orders in the scaling parameter. Based on the present contributions, in the sequel (part II), the homogenized NSE are consistently approximated by deriving a limit-consistent HLBM discretization of the homogenized Bhatnagar--Gross--Krook Boltzmann equation.
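For orientation, the underlying Bhatnagar--Gross--Krook collision model has the standard form (our notation)

$$\partial_{t}f+\boldsymbol{v}\cdot\nabla_{\boldsymbol{x}}f=-\frac{1}{\tau}\left(f-f^{\mathrm{eq}}\right),$$

where $f^{\mathrm{eq}}$ is the local equilibrium distribution and $\tau$ the relaxation time; the zeroth and first velocity moments $\int f\,\mathrm{d}\boldsymbol{v}$ and $\int\boldsymbol{v}f\,\mathrm{d}\boldsymbol{v}$ recover the mass and momentum variables referred to above.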

Injecting external knowledge can improve the performance of pre-trained language models (PLMs) on various downstream NLP tasks. However, massive retraining is required to deploy new knowledge injection methods or knowledge bases for downstream tasks. In this work, we are the first to study how to improve the flexibility and efficiency of knowledge injection by reusing existing downstream models. To this end, we explore a new paradigm, plug-and-play knowledge injection, where knowledge bases are injected into frozen existing downstream models by a knowledge plugin. Correspondingly, we propose a plug-and-play injection method, map-tuning, which trains a mapping of knowledge embeddings to enrich model inputs with mapped embeddings while keeping model parameters frozen. Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models. Moreover, we show that a frozen downstream model can be well adapted to different domains with different mapping networks of domain knowledge. Our code and models are available at //github.com/THUNLP/Knowledge-Plugin.
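The sketch below illustrates the frozen-model aspect of map-tuning as described above; the class and argument names, the choice of a linear mapping, the decision to prepend the mapped embeddings, and the Hugging Face style forward signature are our assumptions for illustration, not the repository's code.

    import torch
    import torch.nn as nn

    class MapTuning(nn.Module):
        """A trainable mapping from knowledge-embedding space into the
        PLM's input-embedding space; all PLM parameters stay frozen."""
        def __init__(self, plm, knowledge_dim, plm_dim):
            super().__init__()
            self.plm = plm
            for p in self.plm.parameters():
                p.requires_grad = False       # plug-and-play: model is frozen
            self.mapping = nn.Linear(knowledge_dim, plm_dim)  # only trainable part

        def forward(self, input_embeds, knowledge_embeds, attention_mask):
            mapped = self.mapping(knowledge_embeds)            # (B, K, plm_dim)
            enriched = torch.cat([mapped, input_embeds], 1)    # (B, K+T, plm_dim)
            pad = torch.ones(mapped.shape[:2], dtype=attention_mask.dtype,
                             device=attention_mask.device)
            mask = torch.cat([pad, attention_mask], 1)
            # Assumes a Hugging Face style encoder accepting inputs_embeds.
            return self.plm(inputs_embeds=enriched, attention_mask=mask)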

Evolutionary algorithms face significant challenges when dealing with dynamic multi-objective optimization because Pareto optimal solutions and/or Pareto optimal fronts change. This paper proposes a unified paradigm, which combines kernelized autoencoding evolutionary search and centroid-based prediction (denoted by KAEP), for solving dynamic multi-objective optimization problems (DMOPs). Specifically, whenever a change is detected, KAEP reacts effectively to it by generating two subpopulations. The first subpopulation is generated by a simple centroid-based prediction strategy. For the second initial subpopulation, the kernel autoencoder is derived to predict the movement of the Pareto-optimal solutions based on the historical elite solutions. In this way, an initial population is predicted by the proposed combination strategies with good convergence and diversity, which can be effective for solving DMOPs. The performance of our proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems. Empirical results fully demonstrate the superiority of our proposed method on most test instances.
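A minimal sketch of the centroid-based prediction step described above follows; the variable names and the diversity noise term are our assumptions.

    import numpy as np

    def centroid_prediction(pop_prev, pop_curr, rng, sigma=0.01):
        """Shift the current population by the movement of the population
        centroid between the two most recent environments."""
        delta = pop_curr.mean(axis=0) - pop_prev.mean(axis=0)  # centroid shift
        # Translate along the predicted direction; small Gaussian noise
        # preserves diversity in the new subpopulation.
        return pop_curr + delta + rng.normal(0.0, sigma, pop_curr.shape)

For instance, calling it with the populations from the two most recent environments and `np.random.default_rng(0)` yields the first initial subpopulation after a detected change.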

We introduce the new setting of open-vocabulary object 6D pose estimation, in which a textual prompt is used to specify the object of interest. In contrast to existing approaches, in our setting (i) the object of interest is specified solely through the textual prompt, (ii) no object model (e.g. CAD or video sequence) is required at inference, (iii) the object is imaged from two different viewpoints of two different scenes, and (iv) the object was not observed during the training phase. To operate in this setting, we introduce a novel approach that leverages a Vision-Language Model to segment the object of interest from two distinct scenes and to estimate its relative 6D pose. The key to our approach is a carefully devised strategy to fuse object-level information provided by the prompt with local image features, resulting in a feature space that can generalize to novel concepts. We validate our approach on a new benchmark based on two popular datasets, REAL275 and Toyota-Light, which collectively encompass 39 object instances appearing in four thousand image pairs. The results demonstrate that our approach outperforms both a well-established hand-crafted method and a recent deep learning-based baseline in estimating the relative 6D pose of objects in different scenes. Project website: //jcorsetti.github.io/oryon-website/.

We propose a theoretically justified and practically applicable slice sampling based Markov chain Monte Carlo (MCMC) method for approximate sampling from probability measures on Riemannian manifolds. The latter naturally arise as posterior distributions in Bayesian inference of matrix-valued parameters, for example belonging to either the Stiefel or the Grassmann manifold. Our method, called geodesic slice sampling, is reversible with respect to the distribution of interest, and generalizes Hit-and-run slice sampling on $\mathbb{R}^{d}$ to Riemannian manifolds by using geodesics instead of straight lines. Through extensive numerical experiments on both synthetic and real data, we demonstrate that our sampler is robust compared to other MCMC methods for manifold-valued distributions. In particular, we illustrate its remarkable ability to cope with anisotropic target densities without using gradient information or preconditioning.
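To make the geodesic idea concrete, here is a minimal sketch specialized to the unit sphere, where great circles are the geodesics; this is our illustrative reduction with shrinkage on the full circle, not the authors' implementation.

    import numpy as np

    def geodesic_slice_step(x, log_density, rng):
        """One transition of geodesic slice sampling on the unit sphere."""
        # Random unit tangent direction at x (orthogonal to x).
        v = rng.standard_normal(x.shape)
        v -= np.dot(v, x) * x
        v /= np.linalg.norm(v)
        geodesic = lambda t: x * np.cos(t) + v * np.sin(t)
        log_level = log_density(x) + np.log(rng.uniform())  # slice height
        # Shrinkage over the full great circle, as in elliptical slice sampling.
        t = rng.uniform(0.0, 2.0 * np.pi)
        lo, hi = t - 2.0 * np.pi, t
        while log_density(geodesic(t)) < log_level:
            if t < 0.0:
                lo = t
            else:
                hi = t
            t = rng.uniform(lo, hi)
        return geodesic(t)

Note that no gradients of the target density are evaluated, consistent with the gradient-free property highlighted above.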

We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multi-lingual scenarios. ParrotTTS outperforms state-of-the-art multi-lingual TTS models while using only a fraction of the paired data required by the latter.

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
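As a small illustration of the "one additional output layer" claim, a sentence-level classifier only needs a linear head on top of the pretrained encoder (sketch using the Hugging Face transformers library; the checkpoint name and label count are placeholders):

    import torch.nn as nn
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    bert = BertModel.from_pretrained("bert-base-uncased")
    classifier = nn.Linear(bert.config.hidden_size, 2)   # the one new layer

    inputs = tokenizer("BERT is conceptually simple.", return_tensors="pt")
    outputs = bert(**inputs)
    logits = classifier(outputs.pooler_output)           # (1, 2) class scores

Fine-tuning then updates both the encoder and the new head end-to-end on the task data.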
