
For exchangeable data, mixture models are an extremely useful tool for density estimation due to their attractive balance between smoothness and flexibility. When additional covariate information is present, mixture models can be extended for flexible regression by modeling the mixture parameters, namely the weights and atoms, as functions of the covariates. These types of models are interpretable and highly flexible, allowing not only the mean but the whole density of the response to change with the covariates, which is also known as density regression. This article reviews Bayesian covariate-dependent mixture models and highlights which data types can be accommodated by the different models, along with the methodological and applied areas where they have been used. In addition to being highly flexible, these models are also numerous; we focus on nonparametric constructions and broadly organize them into three categories: 1) joint models of the responses and covariates, 2) conditional models with single-weights and covariate-dependent atoms, and 3) conditional models with covariate-dependent weights. The diversity and variety of the available models in the literature raises the question of how to choose among them for the application at hand. We attempt to shed light on this question through a careful analysis of the predictive equations for the conditional mean and density function, as well as predictive comparisons in three simulated data examples.
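
As a concrete illustration of the dependent-mixture structure reviewed here, a generic covariate-dependent mixture density can be written as follows (this display is a sketch of the common form, not a formula quoted from the article):

$$
f(y \mid x) \;=\; \sum_{k=1}^{K} w_k(x)\, \mathcal{K}\big(y \mid \theta_k(x)\big), \qquad \sum_{k=1}^{K} w_k(x) = 1,
$$

where the $w_k(x)$ are covariate-dependent weights and $\mathcal{K}(\cdot \mid \theta_k(x))$ is a kernel (e.g., Gaussian) with covariate-dependent atoms $\theta_k(x)$. Single-weights conditional models fix $w_k(x) \equiv w_k$ and let only the atoms vary with $x$, whereas covariate-dependent-weights models allow both components to change with the covariates.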

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.

September 20, 2023

Generative models offer a direct way to model complex data. Among them, energy-based models provide us with a neural network model that aims to accurately reproduce all statistical correlations observed in the data at the level of the Boltzmann weight of the model. However, one challenge is to understand the physical interpretation of such models. In this study, we propose a simple solution by implementing a direct mapping between the energy function of the Restricted Boltzmann Machine (RBM) and an effective Ising spin Hamiltonian that includes high-order interactions between spins. This mapping includes interactions of all possible orders, going beyond the conventional pairwise interactions typically considered in the inverse Ising approach, and allowing the description of complex datasets. Earlier works attempted to achieve this goal, but the proposed mappings did not properly treat the complexity of the problem or did not contain direct prescriptions for practical application. To validate our method, we performed several controlled numerical experiments where we trained the RBMs using equilibrium samples of predefined models containing local external fields, two-body and three-body interactions in various low-dimensional topologies. The results demonstrate the effectiveness of our proposed approach in learning the correct interaction network and pave the way for its application in modeling interesting datasets. We also evaluate the quality of the inferred model based on different training methods.
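
The mapping described above rests on marginalizing the hidden units, which turns the RBM energy into an effective Hamiltonian over the visible spins containing interactions of all orders. The following is a minimal sketch of that marginalization for a binary {0,1} RBM; the weights W and biases a, b are random placeholders, not parameters from the paper:

```python
import numpy as np

def rbm_free_energy(v, W, a, b):
    """Effective energy of a visible configuration v after summing out hidden units.

    For a binary RBM with energy E(v, h) = -a.v - b.h - v.W.h, marginalizing h gives
    F(v) = -a.v - sum_j log(1 + exp(b_j + (v @ W)_j)).
    Expanding the log term in products of the v_i generates couplings of all orders,
    which is the basis of the RBM-to-Ising mapping discussed above.
    """
    v = np.asarray(v, dtype=float)
    return -a @ v - np.sum(np.logaddexp(0.0, b + v @ W))

# Toy example with random parameters (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 3))   # visible-to-hidden weights
a = rng.normal(scale=0.1, size=5)        # visible biases
b = rng.normal(scale=0.1, size=3)        # hidden biases
print(rbm_free_energy([1, 0, 1, 1, 0], W, a, b))
```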

Ordinary state-based peridynamic (OSB-PD) models have an unparalleled capability to simulate crack propagation phenomena in solids with arbitrary Poisson's ratio. However, their non-locality also leads to prohibitively high computational cost. In this paper, a fast solution scheme for OSB-PD models based on matrix operations is introduced, with which graphics processing units (GPUs) are used to accelerate the computation. For the purpose of comparison and verification, a commonly used solution scheme based on loop operations is also presented. In-house software is developed in MATLAB. Firstly, the vibration of a cantilever beam is solved to validate the loop- and matrix-based schemes by comparing the numerical solutions to those produced by FEM software. Subsequently, two typical dynamic crack propagation problems are simulated to illustrate the effectiveness of the proposed schemes in solving dynamic fracture problems. Finally, the simulation of the Brokenshire torsion experiment is carried out using the matrix-based scheme, and the similarity in the shapes of the experimental and numerical broken specimens further demonstrates the ability of the proposed approach to deal with 3D non-planar fracture problems. In addition, the speed-up of the matrix-based scheme with respect to the loop-based scheme and the performance of the GPU acceleration are investigated. The results emphasize the high computational efficiency of the matrix-based implementation scheme.
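
To make the loop-based versus matrix-based distinction concrete, here is a minimal, generic sketch (NumPy, not the authors' MATLAB implementation) of evaluating bond stretches over a neighbor list both ways; the whole-array form is the kind of computation that maps naturally onto GPU arrays:

```python
import numpy as np

def stretches_loop(x, u, bonds):
    """Bond stretches computed bond-by-bond (loop-based scheme)."""
    s = np.empty(len(bonds))
    for k, (i, j) in enumerate(bonds):
        ref = np.linalg.norm(x[j] - x[i])                    # reference bond length
        cur = np.linalg.norm((x[j] + u[j]) - (x[i] + u[i]))  # deformed bond length
        s[k] = (cur - ref) / ref
    return s

def stretches_matrix(x, u, bonds):
    """Same quantity with whole-array (matrix) operations, GPU-friendly."""
    i, j = bonds[:, 0], bonds[:, 1]
    ref = np.linalg.norm(x[j] - x[i], axis=1)
    cur = np.linalg.norm((x[j] + u[j]) - (x[i] + u[i]), axis=1)
    return (cur - ref) / ref

# Toy example: 4 points on a line, 3 bonds, small uniform stretch.
x = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
u = 0.01 * x
bonds = np.array([[0, 1], [1, 2], [2, 3]])
assert np.allclose(stretches_loop(x, u, bonds), stretches_matrix(x, u, bonds))
```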

We examine a stochastic formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution, but knows that it lies in some hypothesis set and possesses a historical data set from which information about it can be gleaned. We define a prescriptive solution as a decision rule mapping such a data set to decisions. As prescriptive solutions that are generalizable over the entire hypothesis set do not exist, we define out-of-sample optimality as a local average over a neighbourhood of hypotheses, averaged over the sampling distribution. We prove sufficient conditions for local out-of-sample optimality, which reduce to functions of the sufficient statistic of the hypothesis family. We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently by a combination of sampling and bisection search algorithms. Finally, we illustrate our approach on the newsvendor model and find strong performance when compared against alternatives in the literature. Our results have potential implications for end-to-end learning and Bayesian optimization.
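
For orientation, the classical data-driven newsvendor baseline orders at the empirical demand quantile corresponding to the critical fractile; the sketch below shows that baseline (a standard reference point, not the out-of-sample-optimal rule proposed above), with placeholder prices and synthetic demand data:

```python
import numpy as np

def newsvendor_order(demand_samples, price, cost):
    """Empirical-quantile newsvendor rule: order at the critical fractile.

    Under a known demand distribution F, the optimal order quantity is
    F^{-1}((price - cost) / price); here F is replaced by the empirical
    distribution of the historical demand samples.
    """
    critical_fractile = (price - cost) / price
    return np.quantile(demand_samples, critical_fractile)

rng = np.random.default_rng(1)
demand = rng.gamma(shape=5.0, scale=20.0, size=200)      # synthetic demand history
print(newsvendor_order(demand, price=10.0, cost=6.0))    # order at the 40th percentile
```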

Inference principles are postulated within statistics; they are not usually derived from any underlying physical constraints on real-world observers. An exception to this rule is that, in the context of partially observable information engines, decision making can be based solely on physical arguments. An inference principle can be derived from minimization of the lower bound on average dissipation [Phys. Rev. Lett., 124(5), 050601], which is achievable with a quasi-static process. Thermodynamically rational decision strategies can then be computed algorithmically with the resulting approach. Here, we use this to study an example of binary decision making under uncertainty that is very simple, yet just interesting enough to be non-trivial: observations are either entirely uninformative, or they carry complete certainty about the variable that needs to be known for successful energy harvesting. Solutions found algorithmically can be expressed in terms of parameterized soft partitions of the observable space. This allows for their interpretation, as well as for the analytical calculation of all quantities that characterize the decision problem and the thermodynamically rational strategies.

Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training and a test dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and comparing them to the intra-observer and inter-observer variability of manual annotation and to the semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 (+/-1.15) mm was comparable to the inter-observer variability (1.31 +/-0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69% of cases. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
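
The precision metrics reported above reduce to per-landmark Euclidean distances; a minimal sketch of that evaluation is shown below (the array shapes and the 2 mm threshold are the only assumptions, and the data here are synthetic):

```python
import numpy as np

def landmark_errors(automated, manual):
    """Euclidean distance per landmark between two (n_landmarks, 3) arrays, in mm."""
    return np.linalg.norm(automated - manual, axis=1)

def precision_report(automated, manual, threshold_mm=2.0):
    d = landmark_errors(automated, manual)
    return {
        "mean_mm": d.mean(),
        "sd_mm": d.std(),
        "within_threshold": np.mean(d <= threshold_mm),  # fraction within 2 mm
    }

# Toy example: 10 landmarks with small synthetic offsets standing in for model error.
rng = np.random.default_rng(2)
manual = rng.uniform(0, 100, size=(10, 3))
automated = manual + rng.normal(scale=1.0, size=(10, 3))
print(precision_report(automated, manual))
```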

Diffusion-based generative models have recently gained attention in speech enhancement (SE), providing an alternative to conventional supervised methods. These models transform clean speech training samples into Gaussian noise centered at the noisy speech, and subsequently learn a parameterized model to reverse this process, conditioned on the noisy speech. Unlike supervised methods, generative SE approaches usually rely solely on an unsupervised loss, which may result in less efficient incorporation of the conditioning noisy speech. To address this issue, we propose augmenting the original diffusion training objective with a mean squared error (MSE) loss, measuring the discrepancy between the estimated enhanced speech and the ground-truth clean speech at each reverse process iteration. Experimental results demonstrate the effectiveness of our proposed methodology.
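
A minimal sketch of such an augmented training objective is given below; the weighting `lam`, the variance-exploding forward process, the score-network interface, and the one-step clean-speech estimate are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def augmented_diffusion_loss(score_model, clean, noisy, sigma, lam=1.0):
    """Denoising score-matching loss plus an MSE term on the implied clean estimate.

    Illustrative VE-style forward perturbation: x_t = clean + sigma * z, conditioned
    on the noisy speech. The score target is -z / sigma, and x_t + sigma**2 * score
    is a one-step denoised estimate (Tweedie's formula). Both `lam` and this
    estimator are assumptions for illustration.
    """
    z = torch.randn_like(clean)
    x_t = clean + sigma * z
    score = score_model(x_t, noisy, sigma)          # score network conditioned on noisy speech
    score_loss = F.mse_loss(sigma * score, -z)      # standard denoising score-matching term
    clean_hat = x_t + sigma ** 2 * score            # one-step clean-speech estimate
    mse_loss = F.mse_loss(clean_hat, clean)         # auxiliary MSE term on the estimate
    return score_loss + lam * mse_loss

# Toy usage with a dummy "score model" (illustrative placeholder).
clean = torch.randn(4, 16000)
noisy = clean + 0.3 * torch.randn_like(clean)
dummy_score = lambda x_t, y, s: (y - x_t) / (s ** 2 + 1e-4)
print(augmented_diffusion_loss(dummy_score, clean, noisy, sigma=0.5).item())
```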

Recently, conditional score-based diffusion models have gained significant attention in the field of supervised speech enhancement, yielding state-of-the-art performance. However, these methods may face challenges when generalising to unseen conditions. To address this issue, we introduce an alternative approach that operates in an unsupervised manner, leveraging the generative power of diffusion models. Specifically, in a training phase, a clean speech prior distribution is learnt in the short-time Fourier transform (STFT) domain using score-based diffusion models, allowing it to unconditionally generate clean speech from Gaussian noise. Then, we develop a posterior sampling methodology for speech enhancement by combining the learnt clean speech prior with a noise model for speech signal inference. The noise parameters are simultaneously learnt along with clean speech estimation through an iterative expectation-maximisation (EM) approach. To the best of our knowledge, this is the first work exploring diffusion-based generative models for unsupervised speech enhancement, demonstrating promising results compared to a recent variational auto-encoder (VAE)-based unsupervised approach and a state-of-the-art diffusion-based supervised method. It thus opens a new direction for future research in unsupervised speech enhancement.

Recent advancements in evaluating matrix-exponential functions have opened the door to the practical use of exponential time-integration methods in numerical weather prediction (NWP). The success of exponential methods in shallow-water simulations has led to the question of whether they can be beneficial in a 3D atmospheric model. In this paper, we take the first step forward by evaluating the behavior of exponential time-integration methods in the Navy's compressible deep-atmosphere nonhydrostatic global model (NEPTUNE: Navy Environmental Prediction sysTem Utilizing a Nonhydrostatic Engine). Simulations are conducted on a set of idealized test cases designed to assess key features of a nonhydrostatic model and demonstrate that exponential integrators capture the desired large- and small-scale traits, yielding results comparable to those found in the literature. We propose a new upper-boundary absorbing layer that is independent of the reference state and is shown to be effective in both idealized and real-data simulations. A real-data forecast using an exponential method with full physics is presented, providing a positive outlook for using exponential integrators for NWP.
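
As a toy illustration of exponential time integration (unrelated to the NEPTUNE implementation), the exponential Euler step for a semilinear system u' = A u + N(u) advances the solution using matrix-exponential functions; the small dense example below is only meant to show the structure of such a step:

```python
import numpy as np
from scipy.linalg import expm

def phi1(A, dt):
    """phi_1(dt*A) = (dt*A)^{-1} (expm(dt*A) - I); a direct solve, fine for a toy problem."""
    n = A.shape[0]
    return np.linalg.solve(dt * A, expm(dt * A) - np.eye(n))

def exponential_euler_step(u, A, nonlinear, dt):
    """One exponential Euler step: u_{n+1} = e^{dt A} u_n + dt * phi_1(dt A) N(u_n)."""
    return expm(dt * A) @ u + dt * phi1(A, dt) @ nonlinear(u)

# Toy example: damped oscillator with a small cubic nonlinearity.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
N = lambda u: np.array([0.0, -0.01 * u[0] ** 3])
u = np.array([1.0, 0.0])
for _ in range(100):
    u = exponential_euler_step(u, A, N, dt=0.1)
print(u)
```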

Nonparametric varying coefficient (NVC) models are useful for modeling time-varying effects on responses that are measured repeatedly for the same subjects. When the number of covariates is moderate or large, it is desirable to perform variable selection from the varying coefficient functions. However, existing methods for variable selection in NVC models either fail to account for within-subject correlations or require the practitioner to specify a parametric form for the correlation structure. In this paper, we introduce the nonparametric varying coefficient spike-and-slab lasso (NVC-SSL) for Bayesian high-dimensional NVC models. Through the introduction of functional random effects, our method allows for flexible modeling of within-subject correlations without needing to specify a parametric covariance function. We further propose several scalable optimization and Markov chain Monte Carlo (MCMC) algorithms. For variable selection, we propose an Expectation Conditional Maximization (ECM) algorithm to rapidly obtain maximum a posteriori (MAP) estimates. Our ECM algorithm scales linearly in the total number of observations $N$ and the number of covariates $p$. For uncertainty quantification, we introduce an approximate MCMC algorithm that also scales linearly in both $N$ and $p$. We demonstrate the scalability, variable selection performance, and inferential capabilities of our method through simulations and a real data application. These algorithms are implemented in the publicly available R package NVCSSL on the Comprehensive R Archive Network.
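
For context, the NVC model underlying this framework regresses repeatedly measured responses on covariates through smooth, time-varying coefficient functions; the display below is the generic textbook form, not copied from the paper:

$$
y_{ij} \;=\; \sum_{k=1}^{p} x_{ijk}\, \beta_k(t_{ij}) + \varepsilon_{ij}, \qquad j = 1, \dots, n_i, \quad i = 1, \dots, n,
$$

where $y_{ij}$ is the $j$th response of subject $i$ observed at time $t_{ij}$, the $\beta_k(\cdot)$ are unknown smooth coefficient functions to be selected by the spike-and-slab lasso prior, and within-subject correlation in the errors $\varepsilon_{ij}$ is captured through the functional random effects described above.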

The investigation of mixture models is key to understanding and visualizing the distribution of multivariate data. Most mixture model approaches are based on likelihoods and are not adapted to distributions with finite support or without a well-defined density function. This study proposes the Augmented Quantization method, a reformulation of the classical quantization problem that uses the p-Wasserstein distance. This metric can be computed in very general distribution spaces, in particular with varying supports. The clustering interpretation of quantization is revisited in a more general framework. The performance of Augmented Quantization is first demonstrated through analytical toy problems. Subsequently, it is applied to a practical case study involving river flooding, wherein mixtures of Dirac and Uniform distributions are built in the input space, enabling the identification of the most influential variables.
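
As background on the metric used above, the p-Wasserstein distance between two one-dimensional empirical distributions with equal sample sizes reduces to a comparison of sorted samples; a minimal sketch (generic, not the Augmented Quantization algorithm itself) is:

```python
import numpy as np

def wasserstein_p_1d(x, y, p=2):
    """p-Wasserstein distance between two 1D empirical distributions.

    With equal sample sizes, the optimal coupling matches sorted samples,
    so W_p^p is the mean of |x_(i) - y_(i)|^p over the order statistics.
    """
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.shape == y.shape, "equal sample sizes assumed for this sketch"
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

# Toy example: a Uniform sample versus a Dirac-like (constant) sample.
rng = np.random.default_rng(3)
u = rng.uniform(0.0, 1.0, size=1000)
d = np.full(1000, 0.5)
print(wasserstein_p_1d(u, d, p=2))  # ~ sqrt(1/12) ≈ 0.289 for U(0,1) vs a point mass at 0.5
```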
