
We theoretically analyze the original version of the denoising diffusion probabilistic models (DDPMs) presented in Ho, J., Jain, A., and Abbeel, P., Advances in Neural Information Processing Systems, 33 (2020), pp. 6840-6851. Our main theorem states that the sequence constructed by the original DDPM sampling algorithm converges weakly to a given data distribution as the number of time steps goes to infinity, under asymptotic conditions on the parameters of the variance schedule, the $L^2$-based score estimation error, and the noise-estimating function with respect to the number of time steps. In proving the theorem, we show that the sampling sequence can be viewed as an exponential-integrator-type approximation of a reverse-time stochastic differential equation over a finite time interval.
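To make the object of the analysis concrete, the following is a minimal sketch of the original DDPM ancestral sampler from Ho et al. (2020). The function `eps_model` stands in for a trained noise-estimating network and is a hypothetical placeholder; only its call signature matters here.

```python
import numpy as np

def ddpm_sample(eps_model, betas, shape, rng):
    """Minimal DDPM ancestral sampling loop (Ho et al., 2020).

    eps_model(x, t) is a hypothetical noise-estimating function; betas is
    the variance schedule (beta_1, ..., beta_T) indexed here from 0.
    """
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    T = len(betas)
    x = rng.standard_normal(shape)                # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        z = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        eps = eps_model(x, t)
        # mean of x_{t-1} given x_t and the predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        x = x + np.sqrt(betas[t]) * z             # sigma_t^2 = beta_t choice
    return x
```

The theorem concerns the limit of the distribution of the final iterate `x` as `T` grows, which the paper interprets as an exponential-integrator discretization of the reverse-time SDE.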

Related content

Combining microstructural mechanical models with experimental data enhances our understanding of the mechanics of soft tissue, such as tendons. In previous work, a Bayesian framework was used to infer constitutive parameters from uniaxial stress-strain experiments on horse tendons, specifically the superficial digital flexor tendon (SDFT) and common digital extensor tendon (CDET), on a per-experiment basis. Here, we extend this analysis to investigate the natural variation of these parameters across a population of horses. Using a Bayesian mixed effects model, we infer population distributions of these parameters. Given that the chosen hyperelastic model does not account for tendon damage, careful data selection is necessary. Avoiding ad hoc methods, we introduce a hierarchical Bayesian data selection method. This two-stage approach selects data per experiment, and integrates data weightings into the Bayesian mixed effects model. Our results indicate that the CDET is stiffer than the SDFT, likely due to a higher collagen volume fraction. The modes of the parameter distributions yield estimates of the product of the collagen volume fraction and Young's modulus as 811.5 MPa for the SDFT and 1430.2 MPa for the CDET. This suggests that positional tendons have stiffer collagen fibrils and/or higher collagen volume density than energy-storing tendons.

In recent years, human mobility research has discovered universal patterns capable of describing how people move. These regularities have been shown to partly depend on individual and environmental characteristics (e.g., gender, rural/urban, country). In this work, we show that life-course events, such as job loss, can disrupt individual mobility patterns. We show that job loss, which adversely affects individuals' well-being and can increase the risk of social and economic inequality, drives a significant change in the exploratory behaviour of individuals, with changes that intensify over time since job loss. Our findings shed light on the dynamics of employment-related behavior at scale, providing a deeper understanding of key components in human mobility regularities. Understanding these drivers can facilitate targeted social interventions to support the most vulnerable populations.

Extremal graphical models encode the conditional independence structure of multivariate extremes and provide a powerful tool for quantifying the risk of rare events. Prior work on learning these graphs from data has focused on the setting where all relevant variables are observed. For the popular class of H\"usler-Reiss models, we propose the \texttt{eglatent} method, a tractable convex program for learning extremal graphical models in the presence of latent variables. Our approach decomposes the H\"usler-Reiss precision matrix into a sparse component encoding the graphical structure among the observed variables after conditioning on the latent variables, and a low-rank component encoding the effect of a few latent variables on the observed variables. We provide finite-sample guarantees of \texttt{eglatent} and show that it consistently recovers the conditional graph as well as the number of latent variables. We highlight the improved performances of our approach on synthetic and real data.
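The "sparse plus low-rank" structure that \texttt{eglatent} exploits mirrors the classical latent-variable graphical-model identity: marginalizing latent variables out of a joint precision matrix is a Schur complement, so the observed-variable precision is a sparse conditional component minus a component whose rank equals the number of latent variables. A toy numerical illustration (in the Gaussian analogy, with made-up sizes `p`, `h`):

```python
import numpy as np

rng = np.random.default_rng(1)
p, h = 5, 1                       # toy sizes: observed and latent variables
S = 2.0 * np.eye(p)               # sparse precision among observed variables
S[0, 1] = S[1, 0] = 0.5           # a single observed-observed edge
B = rng.normal(size=(p, h))       # observed-latent block of the precision
Theta_LL = 3.0 * np.eye(h)        # latent-latent block

# Marginalizing out the latent variables is a Schur complement: the
# observed precision becomes "sparse minus low-rank", and the rank of the
# low-rank part equals the number of latent variables.
low_rank = B @ np.linalg.inv(Theta_LL) @ B.T
marginal_precision = S - low_rank
```

\texttt{eglatent} recovers exactly these two components (sparse graph and rank, hence the number of latents) from data, in the H\"usler-Reiss setting rather than the Gaussian one sketched here.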

Climate models struggle to accurately simulate precipitation, particularly extremes and the diurnal cycle. Here, we present a hybrid model that is trained directly on satellite-based precipitation observations. Our model runs at 2.8$^\circ$ resolution and is built on the differentiable NeuralGCM framework. The model demonstrates significant improvements over existing general circulation models, the ERA5 reanalysis, and a global cloud-resolving model in simulating precipitation. Our approach yields reduced biases, a more realistic precipitation distribution, improved representation of extremes, and a more accurate diurnal cycle. Furthermore, it outperforms the mid-range precipitation forecast of the ECMWF ensemble. This advance paves the way for more reliable simulations of current climate and demonstrates how training on observations can be used to directly improve GCMs.

Motivated by the Iowa Fluoride Study (IFS) dataset, which comprises zero-inflated multi-level ordinal responses on tooth fluorosis, we develop an estimation scheme leveraging generalized estimating equations (GEEs) and James-Stein shrinkage. Previous analyses of this cohort study primarily focused on caries (count response) or employed a Bayesian approach to the ordinal fluorosis outcome. This study is based on the expanded dataset that now includes observations for age 23, whereas earlier works were restricted to ages 9, 13, and/or 17 according to the participants' ages at the time of measurement. The adoption of a frequentist perspective enhances the interpretability to a broader audience. Over a choice of several covariance structures, separate models are formulated for the presence (zero versus non-zero score) and severity (non-zero ordinal scores) of fluorosis, which are then integrated through shared regression parameters. This comprehensive framework effectively identifies risk or protective effects of dietary and non-dietary factors on dental fluorosis.

In this paper, we focus on efficiently and flexibly simulating the Fokker-Planck equation associated with the Nonlinear Noisy Leaky Integrate-and-Fire (NNLIF) model, which reflects the dynamic behavior of neuron networks. We apply the Galerkin spectral method to discretize the spatial domain by constructing a variational formulation that satisfies complex boundary conditions. Moreover, the boundary conditions in the variational formulation include only zeroth-order terms, with first-order conditions being naturally incorporated. This allows the numerical scheme to be further extended to an excitatory-inhibitory population model with synaptic delays and refractory states. Additionally, we establish the consistency of the numerical scheme. Experimental results, including accuracy tests, blow-up events, and periodic oscillations, validate the properties of our proposed method.

The Cox proportional hazards model (Cox model) is a popular model for survival data analysis. When the sample size is small relative to the dimension of the model, the standard maximum partial likelihood inference is often problematic. In this work, we propose the Cox catalytic prior distributions for Bayesian inference on Cox models, which is an extension of a general class of prior distributions originally designed for stabilizing complex parametric models. The Cox catalytic prior is formulated as a weighted likelihood of the regression coefficients based on synthetic data and a surrogate baseline hazard constant. This surrogate hazard can be either provided by the user or estimated from the data, and the synthetic data are generated from the predictive distribution of a fitted simpler model. For point estimation, we derive an approximation of the marginal posterior mode, which can be computed conveniently as a regularized log partial likelihood estimator. We prove that our prior distribution is proper and the resulting estimator is consistent under mild conditions. In simulation studies, our proposed method outperforms standard maximum partial likelihood inference and is on par with existing shrinkage methods. We further illustrate the application of our method to a real dataset.
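The point estimator described above can be sketched as a penalized partial-likelihood objective. The partial-likelihood part below is standard (Breslow form, assuming no ties); the synthetic-data term is an illustrative rendering of "weighted likelihood under a constant surrogate baseline hazard `h0`", with the weight `tau` and the treatment of synthetic observations as events being simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np

def neg_log_partial_lik(beta, X, time, event):
    """Cox negative log partial likelihood (Breslow form, no ties)."""
    order = np.argsort(time)
    X, event = X[order], event[order]
    eta = X @ beta
    # reverse cumulative sum gives the risk-set denominator at each time
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(event * (eta - log_risk))

def catalytic_objective(beta, X, time, event, X_syn, time_syn, tau, h0):
    """Real-data partial likelihood plus a down-weighted synthetic-data
    log-likelihood under a constant surrogate baseline hazard h0
    (illustrative form; the paper's exact weighting may differ)."""
    eta_s = X_syn @ beta
    # synthetic full log-likelihood, treating synthetic points as events
    syn_ll = np.sum(eta_s + np.log(h0) - h0 * time_syn * np.exp(eta_s))
    return neg_log_partial_lik(beta, X, time, event) - (tau / len(X_syn)) * syn_ll
```

Minimizing `catalytic_objective` over `beta` (e.g., with any smooth optimizer) yields the regularized log-partial-likelihood estimator that approximates the marginal posterior mode.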

Deep Neural Networks are vulnerable to adversarial examples, i.e., carefully crafted input samples that can cause models to make incorrect predictions with high confidence. To mitigate these vulnerabilities, adversarial training and detection-based defenses have been proposed to strengthen models in advance. However, most of these approaches focus on a single data modality, overlooking the relationships between visual patterns and textual descriptions of the input. In this paper, we propose a novel defense, Multi-Shield, designed to combine and complement these defenses with multi-modal information to further enhance their robustness. Multi-Shield leverages multi-modal large language models to detect adversarial examples and abstain from uncertain classifications when there is no alignment between textual and visual representations of the input. Extensive evaluations on CIFAR-10 and ImageNet datasets, using robust and non-robust image classification models, demonstrate that Multi-Shield can be easily integrated to detect and reject adversarial examples, outperforming the original defenses.
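The core abstention rule can be sketched in a few lines: classify by the best-aligned class-description embedding, and reject when even the best image-text alignment is weak. Here the embeddings are assumed to come from some multimodal encoder (not shown), and the `threshold` value is a made-up parameter, not one taken from the paper.

```python
import numpy as np

def multimodal_abstain(image_emb, text_embs, labels, threshold=0.25):
    """Abstention rule in the spirit of Multi-Shield: predict the class
    whose textual description best aligns with the image embedding, but
    abstain when the best cosine similarity falls below a threshold."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                  # cosine similarity per class prompt
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None                   # reject: likely adversarial input
    return labels[best]
```

In the full defense, this check complements (rather than replaces) the underlying robust classifier, which keeps its own prediction when the modalities agree.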

This paper presents an innovative method for predicting shape errors in 5-axis machining using graph neural networks. The graph structure is defined with nodes representing workpiece surface points and edges denoting the neighboring relationships. The dataset encompasses data from a material removal simulation, process data, and post-machining quality information. Experimental results show that the presented approach can generalize the shape error prediction for the investigated workpiece geometry. Moreover, by modelling spatial and temporal connections within the workpiece, the approach copes with a small number of labels, in contrast to non-graph methods such as Support Vector Machines.

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
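The alternation of (i) cross-patch linear mixing and (ii) per-patch feed-forward layers can be sketched as a single block's forward pass. This is a simplified numpy rendering under stated assumptions: the learnable affine normalizations are folded into scalar `alpha`/`beta` parameters, and a tanh-approximate GELU stands in for the nonlinearity; the real model stacks many such blocks with per-channel affine parameters.

```python
import numpy as np

def resmlp_block(x, A, W1, W2, alpha=1.0, beta=0.0):
    """One ResMLP-style block on x of shape (patches, channels).

    A mixes patches (applied identically to every channel); W1/W2 form
    the per-patch two-layer feed-forward network.
    """
    def affine(z):          # stand-in for ResMLP's learnable affine transform
        return alpha * z + beta
    def gelu(z):            # tanh approximation of GELU
        return 0.5 * z * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z**3)))
    x = x + A @ affine(x)                 # (i) cross-patch linear interaction
    x = x + gelu(affine(x) @ W1) @ W2     # (ii) per-patch feed-forward
    return x
```

Note how the patch-mixing matrix `A` acts on the patch dimension only, so every channel is transformed identically, while `W1`/`W2` act on the channel dimension independently for each patch, matching the (i)/(ii) description above.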
