
Approximate Bayesian computation (ABC) is a class of Bayesian inference algorithms that targets problems with intractable or unavailable likelihood functions. It uses synthetic data drawn from a simulation model to approximate the posterior distribution. However, ABC is computationally intensive for complex models in which simulating synthetic data is very expensive. In this article, we propose an early-rejection Markov chain Monte Carlo (ejMCMC) sampler based on Gaussian processes to accelerate inference. In the first stage of the kernel, samples are rejected early using a discrepancy model, in which the discrepancy between the simulated and observed data is modeled by a Gaussian process (GP). Hence, synthetic data are generated only when the corresponding region of the parameter space is worth exploring. We demonstrate through theory, simulation experiments, and real data analysis that the new algorithm significantly improves inference efficiency compared to existing early-rejection MCMC algorithms. In addition, we employ the proposed method within an ABC sequential Monte Carlo (SMC) sampler. In our numerical experiments, we use examples of ordinary differential equations, stochastic differential equations, and delay differential equations to demonstrate the effectiveness of the proposed algorithm. We develop an R package that is available at https://github.com/caofff/ejMCMC.
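The two-stage kernel can be made concrete with a short sketch. Below is a minimal Python rendering of one early-rejection step; `propose`, `simulate`, `discrepancy`, and `gp_mean` (the fitted GP's predictive mean of the discrepancy) are placeholder callables, not the interface of the ejMCMC R package, and a symmetric proposal is assumed so the acceptance ratio reduces to the prior ratio.

```python
# Minimal sketch of an early-rejection ABC-MCMC step in the spirit of
# ejMCMC; all names are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def abc_mcmc_step(theta, prior_logpdf, propose, simulate, discrepancy,
                  gp_mean, y_obs, eps):
    """One ABC-MCMC step with GP-based early rejection (sketch)."""
    theta_new = propose(theta)              # symmetric proposal assumed
    log_u = np.log(rng.uniform())
    # Stage 0: the prior ratio needs no simulation, so test it first.
    if log_u > prior_logpdf(theta_new) - prior_logpdf(theta):
        return theta                        # early rejection (prior)
    # Stage 1: if the GP surrogate predicts a discrepancy above the ABC
    # tolerance, reject without ever calling the expensive simulator.
    if gp_mean(theta_new) > eps:
        return theta                        # early rejection (GP)
    # Stage 2: simulate only in promising regions of parameter space.
    y_sim = simulate(theta_new)
    if discrepancy(y_sim, y_obs) <= eps:
        return theta_new                    # accept
    return theta                            # ordinary rejection
```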

Related Content

Current evaluation of text-to-image models typically relies on statistical metrics that inadequately represent real human preferences. Although recent work attempts to learn these preferences from human-annotated images, it reduces the rich tapestry of human preference to a single overall score. However, preference results vary when humans evaluate images along different aspects. Therefore, to learn multi-dimensional human preferences, we propose the Multi-dimensional Preference Score (MPS), the first multi-dimensional preference scoring model for the evaluation of text-to-image models. The MPS introduces a preference condition module on top of the CLIP model to learn these diverse preferences. It is trained on our Multi-dimensional Human Preference (MHP) Dataset, which comprises 918,315 human preference choices across four dimensions (i.e., aesthetics, semantic alignment, detail quality, and overall assessment) on 607,541 images. The images are generated by a wide range of recent text-to-image models. The MPS outperforms existing scoring methods across 3 datasets in 4 dimensions, making it a promising metric for evaluating and improving text-to-image generation.
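The general shape of a preference-conditioned scorer can be sketched as follows. This PyTorch sketch gates precomputed CLIP embeddings with a learned per-dimension condition embedding; module names, shapes, and the gating scheme are illustrative assumptions, not the released MPS architecture.

```python
# Schematic preference-conditioned scorer: a condition embedding for the
# evaluated dimension (aesthetics, alignment, detail, overall) gates the
# image-text similarity. Shapes and names are assumptions.
import torch
import torch.nn as nn

class PreferenceConditionedScore(nn.Module):
    def __init__(self, dim=512, n_conditions=4):
        super().__init__()
        # One learned embedding per preference dimension.
        self.condition = nn.Embedding(n_conditions, dim)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, img_emb, txt_emb, cond_id):
        # img_emb, txt_emb: (batch, dim) precomputed CLIP embeddings;
        # cond_id: (batch,) long tensor selecting the dimension.
        c = self.condition(cond_id)                      # (batch, dim)
        g = self.gate(torch.cat([txt_emb, c], dim=-1))   # (batch, dim)
        # Score image-text agreement only on condition-relevant features.
        return nn.functional.cosine_similarity(img_emb * g, txt_emb * g, dim=-1)
```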

Hybrid Gibbs samplers represent a prominent class of approximate Gibbs algorithms that utilize Markov chains to approximate conditional distributions, with the Metropolis-within-Gibbs algorithm standing out as a well-known example. Despite their widespread use in both statistical and non-statistical applications, very little is known about their convergence properties. This article introduces novel methods for establishing bounds on the convergence rates of hybrid Gibbs samplers. In particular, we examine the convergence characteristics of hybrid random-scan Gibbs and data augmentation algorithms. Our analysis confirms that the absolute spectral gap of a hybrid chain can be bounded in terms of the absolute spectral gap of the exact Gibbs chain and the absolute spectral gaps of the Markov chains employed for the conditional distribution approximations. As applications, we study the convergence properties of four practical hybrid Gibbs algorithms: a random-scan Metropolis-within-Gibbs sampler, a hybrid proximal sampler, random-scan Gibbs samplers with block updates, and a hybrid slice sampler.
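For concreteness, here is a minimal sketch of the best-known example, a random-scan Metropolis-within-Gibbs update: a coordinate is picked at random and updated with a single Metropolis move whose target is its full conditional. `log_target` is any user-supplied joint log-density; the Gaussian proposal and names are illustrative assumptions.

```python
# One random-scan Metropolis-within-Gibbs step: the exact conditional
# draw is replaced by a single Metropolis move on one coordinate.
import numpy as np

rng = np.random.default_rng(1)

def hybrid_random_scan_step(x, log_target, step=0.5):
    i = rng.integers(len(x))            # random coordinate (random scan)
    prop = x.copy()
    prop[i] += step * rng.normal()      # Metropolis proposal for x[i]
    # Because only x[i] changes, the joint density ratio equals the
    # full-conditional ratio, so this accept/reject targets p(x[i] | rest).
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        return prop
    return x
```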

Domain-specific hardware for solving computationally hard optimization problems has generated tremendous excitement recently. Here, we evaluate probabilistic-bit (p-bit) based Ising machines (IMs), or p-computers, on a benchmark combinatorial optimization problem, namely 3-regular 3-XOR Satisfiability (3R3X). The 3R3X problem has a glassy energy landscape and has recently been used to benchmark various IMs and other solvers. We introduce a multiplexed architecture in which p-computers emulate all-to-all (complete) graph functionality despite being interconnected in sparse networks, enabling highly parallelized chromatic Gibbs sampling. We implement this architecture on FPGAs and show that p-bit networks running an adaptive version of the powerful parallel tempering algorithm (APT) demonstrate competitive algorithmic and prefactor advantages over alternative IMs by D-Wave, Toshiba, and Fujitsu, trailing only a greedy algorithm accelerated on a GPU. We further extend our APT results using higher-order interactions in FPGAs and show that while higher-order interactions lead to prefactor advantages, they do not provide any algorithmic scaling advantage for the XORSAT problem, settling an open conjecture. Even though FPGA implementations of p-bits are still not quite as fast as the best greedy algorithms implemented on GPUs, scaled magnetic versions of p-computers could lead to orders-of-magnitude improvements over such algorithms according to experimentally established projections.
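To illustrate what chromatic Gibbs sampling with p-bits means algorithmically, here is a hedged NumPy sketch: spins within one color class share no couplings, so a whole class can be updated simultaneously, which is the parallelism the FPGA architecture exploits in hardware. `J`, `h`, and the graph coloring are assumed given; this is a software caricature, not the hardware design.

```python
# One chromatic Gibbs sweep with p-bit dynamics over an Ising model
# E(m) = -m' J m / 2 - h' m; same-color spins are uncoupled by
# construction, so each color class updates in parallel.
import numpy as np

rng = np.random.default_rng(2)

def chromatic_gibbs_sweep(m, J, h, colors, beta):
    """m: spins in {-1, +1}; colors: a valid graph coloring per spin."""
    for c in np.unique(colors):
        idx = np.where(colors == c)[0]
        # Local fields for one independent set, computed at once.
        I = J[idx] @ m + h[idx]
        # p-bit update rule: m_i = sgn(tanh(beta * I_i) - U(-1, 1)).
        m[idx] = np.sign(np.tanh(beta * I) - rng.uniform(-1, 1, idx.size))
    return m
```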

The Fisher-Rao distance between two probability distributions of a statistical model is defined as the Riemannian geodesic distance induced by the Fisher information metric. In order to calculate the Fisher-Rao distance in closed form, we need (1) to elicit a formula for the Fisher-Rao geodesics, and (2) to integrate the Fisher length element along those geodesics. We consider several numerically robust approximation and bounding techniques for the Fisher-Rao distance. First, we report generic upper bounds on Fisher-Rao distances based on closed-form 1D Fisher-Rao distances of submodels. Second, we describe several generic approximation schemes depending on whether the Fisher-Rao geodesics or pregeodesics are available in closed form or not. In particular, we obtain a generic method that guarantees an arbitrarily small additive error on the approximation, provided that Fisher-Rao pregeodesics and tight lower and upper bounds are available. Third, we consider the case of Fisher metrics that are Hessian metrics and report generic tight upper bounds on the Fisher-Rao distances using techniques of information geometry. Uniparametric and biparametric statistical models always have Fisher Hessian metrics, and in general a simple test allows one to check whether the Fisher information matrix yields a Hessian metric or not. Fourth, we consider elliptical distribution families and show how to apply the above techniques to these models. We also propose two new distances, based either on the Fisher-Rao lengths of curves serving as proxies for Fisher-Rao geodesics, or on the Birkhoff/Hilbert projective cone distance. Last, we consider an alternative group-theoretic approach for statistical transformation models based on the notion of maximal invariant, which yields insights into the structure of Fisher-Rao distance formulas that may be used fruitfully in applications.
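The "Fisher-Rao length of a proxy curve" idea can be illustrated numerically. The sketch below uses the univariate normal family, whose Fisher metric in $(\mu, \sigma)$ coordinates is $ds^2 = (d\mu^2 + 2\,d\sigma^2)/\sigma^2$ and whose Fisher-Rao distance has a known hyperbolic closed form; the Fisher length of a straight line in $(\mu, \sigma)$ then upper-bounds the exact distance. Function names are illustrative.

```python
# The Fisher-Rao length of any curve joining two distributions upper
# bounds their Fisher-Rao distance; here the proxy curve is a straight
# line in (mu, sigma) coordinates of the univariate normal model.
import numpy as np

def fisher_length_line(p0, p1, n=10_000):
    """Riemannian length of the segment p0 -> p1, p = (mu, sigma)."""
    t = np.linspace(0.0, 1.0, n + 1)[:, None]
    path = (1 - t) * np.asarray(p0) + t * np.asarray(p1)
    d_mu, d_sigma = (np.asarray(p1) - np.asarray(p0)) / n
    sigma = 0.5 * (path[:-1, 1] + path[1:, 1])    # midpoint rule
    # length element: ds = sqrt(d_mu^2 + 2 d_sigma^2) / sigma
    return np.sum(np.sqrt(d_mu**2 + 2.0 * d_sigma**2) / sigma)

def fisher_rao_normal(p0, p1):
    """Closed-form Fisher-Rao distance between univariate normals."""
    (m0, s0), (m1, s1) = p0, p1
    a = 1.0 + ((m0 - m1) ** 2 + 2.0 * (s0 - s1) ** 2) / (4.0 * s0 * s1)
    return np.sqrt(2.0) * np.arccosh(a)

print(fisher_length_line((0, 1), (1, 2)))   # ~1.2006 (proxy upper bound)
print(fisher_rao_normal((0, 1), (1, 2)))    # ~1.1894 (exact geodesic)
```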

Similar to the notion of h-adaptivity, where the discretization resolution is adaptively changed, I propose the notion of model adaptivity, where the underlying model (the governing equations) is adaptively changed in space and time. Specifically, this work introduces a hybrid and adaptive coupling of a 3D bulk fluid flow model with a 2D thin-film flow model. As a result, it extends the applicability of existing thin-film flow models to complex scenarios where, for example, bulk flow develops into thin films after striking a surface. At each location in space and time, the proposed framework automatically decides whether a 3D model or a 2D model must be applied. Using a meshless approach for both the 3D and 2D models, the decision to apply a 2D or 3D model at each particle is based on the user-prescribed resolution and a local principal component analysis. When a particle needs to be changed from a 3D model to a 2D one, or vice versa, the discretization is changed and all relevant data mapping is done on the fly. Appropriate two-way coupling conditions and mass conservation considerations between the 3D and 2D models are also developed. Numerical results show that this model-adaptive framework offers greater flexibility and compares well against finely resolved 3D simulations. In an actual application scenario, a speed-up by a factor of 3 is obtained while maintaining the accuracy of the solution.
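A hedged sketch of the local PCA criterion described above: if the smallest principal axis of a particle's neighbourhood is much thinner than the user-prescribed resolution, the flow there is locally sheet-like and the 2D thin-film model is selected. The threshold and function names are assumptions for illustration, not the paper's exact rule.

```python
# Per-particle model selection from a local PCA of neighbour positions.
import numpy as np

def choose_model(neigh_pos, resolution, ratio=1.0):
    """neigh_pos: (k, 3) positions of one particle's neighbourhood."""
    X = neigh_pos - neigh_pos.mean(axis=0)
    # Eigenvalues of the local covariance = principal variances,
    # returned in ascending order by eigvalsh.
    cov = X.T @ X / max(len(neigh_pos) - 1, 1)
    thickness = np.sqrt(np.linalg.eigvalsh(cov)[0])   # smallest axis
    # Thin, sheet-like neighbourhood -> 2D thin-film model suffices.
    return "2D" if thickness < ratio * resolution else "3D"
```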

This article introduces a general mesh intersection algorithm that exactly computes the so-called Weiler model and uses it to implement Boolean operations with arbitrary multi-operand expressions, CSG (constructive solid geometry), and some mesh repair operations. From an input polygon soup, the algorithm first computes the co-refinement, with an exact representation of the intersection points. Then, the decomposition of 3D space into volumetric regions (the Weiler model) is constructed by sorting the facets around the non-manifold intersection edges (radial sort), using specialized exact predicates. Finally, based on the input Boolean expression, the triangular facets that belong to the boundary of the result are classified. This is, to our knowledge, the first algorithm that computes an exact Weiler model. To implement all the involved predicates and constructions, two geometric kernels are proposed, tested, and discussed (arithmetic expansions and multi-precision floating-point). As a guiding principle, the combinatorial information shared between each step is kept as simple as possible. This is made possible by treating all the particular cases in the kernel. In particular, triangles with intersections are remeshed using the (uniquely defined) constrained Delaunay triangulation, with symbolic perturbations to disambiguate configurations with co-cyclic points. This makes it easy to discard the duplicated triangles that appear when remeshing overlapping facets. The method is tested and compared with previous work on the existing "thingi10K" dataset (to test co-refinement and mesh repair) and on a new "thingiCSG" dataset made publicly available (to test the full CSG pipeline), covering a variety of interesting examples featuring different types of "pathologies".
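As a toy illustration of why exact kernels matter for the radial sort and classification above, here is a 3D orientation predicate evaluated with Python's exact rational type. Real implementations use arithmetic expansions or multi-precision floats, as the article discusses; `Fraction` is just the simplest exact stand-in with the same exact-sign guarantee, free of the floating-point sign flips that break combinatorial sorting.

```python
# Exact 3D orientation predicate via rational arithmetic.
from fractions import Fraction

def orient3d(a, b, c, d):
    """Sign of det[b-a; c-a; d-a]: +1, -1, or 0 (coplanar)."""
    m = [[Fraction(p[i]) - Fraction(a[i]) for i in range(3)]
         for p in (b, c, d)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return (det > 0) - (det < 0)   # exact sign, never a rounding artifact
```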

We consider the fundamental problem of constructing fast and small circuits for binary addition. We propose a new algorithm with running time $\mathcal O(n \log_2 n)$ for constructing linear-size $n$-bit adder circuits with a significantly better depth guarantee compared to previous approaches: Our circuits have a depth of at most $\log_2 n + \log_2 \log_2 n + \log_2 \log_2 \log_2 n + \text{const}$, improving upon the previous best circuits by [12] with a depth of at most $\log_2 n + 8 \sqrt{\log_2 n} + 6 \log_2 \log_2 n + \text{const}$. Hence, we decrease the gap to the lower bound of $\log_2 n + \log_2 \log_2 n + \text{const}$ by [5] significantly, from $\mathcal O (\sqrt{\log_2 n})$ to $\mathcal O(\log_2 \log_2 \log_2 n)$. Our core routine is a new algorithm for the construction of a circuit for a single carry bit or, more generally, for an And-Or path, i.e., a Boolean function of the form $t_0 \lor ( t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$. We compute linear-size And-Or path circuits with a depth of at most $\log_2 m + \log_2 \log_2 m + 0.65$ in time $\mathcal O(m \log_2 m)$. These are the first known And-Or path circuits that, up to an additive constant, match the lower bound by [5] while at the same time having linear size. The previously fastest And-Or path circuits are worse in depth only by an additive constant, but have a much larger size, on the order of $\mathcal O (m \log_2 m)$.
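As a reference point, the And-Or path function itself is easy to state in code. The sketch below evaluates $t_0 \lor (t_1 \land (t_2 \lor \dots))$ right to left, which corresponds to the trivial depth-$(m-1)$ circuit that the construction above restructures to depth roughly $\log_2 m + \log_2 \log_2 m$; it is useful only as a correctness oracle when testing restructured low-depth circuits.

```python
# Naive right-to-left evaluation of an And-Or path over booleans;
# gates alternate OR at even indices and AND at odd indices.
def and_or_path(t):
    """Evaluate t0 | (t1 & (t2 | (t3 & ...)))."""
    acc = t[-1]
    for i in range(len(t) - 2, -1, -1):
        acc = (t[i] | acc) if i % 2 == 0 else (t[i] & acc)
    return acc

# Carry bits of binary addition are And-Or paths: with generate g_i and
# propagate p_i signals, the carry is g_k | (p_k & (g_{k-1} | ...)).
assert and_or_path([False, True, True]) is True   # t0 | (t1 & t2)
```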

Multilingual large language models (MLLMs) are jointly trained on data from many different languages such that the representation of each individual language can benefit from other languages' data. Impressive performance in zero-shot cross-lingual transfer shows that these models are capable of exploiting data from other languages. Yet, it remains unclear to what extent, and under which conditions, languages rely on each other's data. In this study, we use TracIn (Pruthi et al., 2020), a training data attribution (TDA) method, to retrieve the most influential training samples seen during multilingual fine-tuning for a particular test language. This allows us to analyse cross-lingual sharing mechanisms of MLLMs from a new perspective. While previous work studied cross-lingual sharing at the level of model parameters, we present the first approach to study cross-lingual sharing at the data level. We find that MLLMs rely on data from multiple languages from the early stages of fine-tuning and that this reliance gradually increases as fine-tuning progresses. We further study how different fine-tuning languages influence model performance on a given test language and find that they can both reinforce and complement the knowledge acquired from data of the test language itself.
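TracIn itself is compact enough to sketch: the influence of a training example on a test example is the sum, over saved checkpoints, of the learning rate times the dot product of the two loss gradients. The PyTorch sketch below elides checkpoint loading, batching, and the last-layer approximations often used in practice; all names are placeholders.

```python
# TracInCP-style influence score (Pruthi et al., 2020), schematic form.
import torch

def grad_vector(model, loss_fn, x, y):
    """Flattened gradient of the loss at one example."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def tracin_score(checkpoints, lrs, loss_fn, z_train, z_test):
    """Sum over checkpoints of lr * <grad(train), grad(test)>."""
    score = 0.0
    for model, lr in zip(checkpoints, lrs):
        g_train = grad_vector(model, loss_fn, *z_train)
        g_test = grad_vector(model, loss_fn, *z_test)
        score += lr * torch.dot(g_train, g_test).item()
    return score
```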

Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow one to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a persistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation to the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and insights into theoretical quantum computation.
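The claim that training sidesteps classical simulation can be seen from the loss itself: a text-conditioned DM only needs noisy versions of circuit encodings, never the circuits' unitaries. Below is a schematic DDPM-style noise-prediction loss under that setup; `denoiser`, the circuit embedding, and the text embedding are illustrative assumptions, not the paper's architecture.

```python
# Text-conditioned DDPM noise-prediction loss over circuit embeddings;
# note that nothing here simulates quantum dynamics.
import torch

def diffusion_loss(denoiser, circuit_emb, text_emb, alphas_bar):
    """Standard noise-prediction objective with text conditioning."""
    b = circuit_emb.shape[0]
    t = torch.randint(0, len(alphas_bar), (b,))          # random timestep
    a = alphas_bar[t].view(b, *([1] * (circuit_emb.dim() - 1)))
    noise = torch.randn_like(circuit_emb)
    noisy = a.sqrt() * circuit_emb + (1 - a).sqrt() * noise
    # The network predicts the injected noise, given timestep and text.
    return torch.nn.functional.mse_loss(denoiser(noisy, t, text_emb), noise)
```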

We propose a local, past-oriented fragment of propositional dynamic logic to reason about concurrent scenarios modelled as Mazurkiewicz traces, and prove it to be expressively complete with respect to regular trace languages. Because of locality, specifications in this logic are efficiently translated into asynchronous automata, in a way that reflects the structure of formulas. In particular, we obtain a new proof of Zielonka's fundamental theorem and we prove that any regular trace language can be implemented by a cascade product of localized asynchronous automata, which essentially operate on a single process. These results refine earlier results by Adsul et al. which involved a larger fragment of past propositional dynamic logic and used Mukund and Sohoni's gossip automaton. Our new results avoid using this automaton, or Zielonka's timestamping mechanism and, in particular, they show how to implement a gossip automaton as a cascade product.
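For readers unfamiliar with the underlying objects: two words denote the same Mazurkiewicz trace exactly when one can be rewritten into the other by repeatedly swapping adjacent independent letters. The brute-force check below is purely didactic (worst-case exponential) and only illustrates the semantics; it has none of the efficiency of the asynchronous-automata constructions above.

```python
# Didactic trace-equivalence check: explore all words reachable by
# swapping adjacent independent letters.
def trace_equivalent(u, v, independent):
    """independent: set of frozensets {a, b} of commuting letters."""
    if sorted(u) != sorted(v):
        return False
    seen, frontier = {u}, [u]
    while frontier:
        w = frontier.pop()
        if w == v:
            return True
        for i in range(len(w) - 1):
            if frozenset((w[i], w[i + 1])) in independent:
                s = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                if s not in seen:
                    seen.add(s)
                    frontier.append(s)
    return False

# With only a and c on different processes (independent):
I = {frozenset("ac")}
print(trace_equivalent("ac", "ca", I))    # True: a and c commute
print(trace_equivalent("abc", "cab", I))  # False: b blocks the swap
```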
