
Optimization is a key tool for scientific and engineering applications; however, when models are affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations. The cost of OUU is proportional to the cost of performing a forward uncertainty analysis at each design location visited, which makes the computational burden too high for high-fidelity simulations with significant computational cost. From a high-level standpoint, an OUU workflow typically has two main components: an inner-loop strategy for the computation of statistics of the quantity of interest, and an outer-loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner-loop statistics. In this work, we propose to alleviate the cost of the inner-loop uncertainty analysis by leveraging the so-called Multilevel Monte Carlo (MLMC) method. MLMC has the potential to drastically reduce the computational cost by allocating resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our novel approach with respect to the single-fidelity counterpart, based on standard Monte Carlo evaluation of statistics.
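
To make the allocation step concrete, below is a minimal Python sketch of a generic MLMC mean estimator with the cost-optimal sample allocation for a target estimator variance. This is not the Dakota/SNOWPAC implementation; the names `level_samplers`, `costs`, and `eps2` are illustrative assumptions.

```python
import numpy as np

def mlmc_mean(level_samplers, costs, eps2, n_pilot=50):
    """Generic MLMC estimate of E[Q] with target estimator variance eps2.

    level_samplers[l](n) returns n coupled samples of Y_l = Q_l - Q_{l-1}
    (with Q_{-1} := 0); costs[l] is the cost of one sample on level l.
    """
    L = len(level_samplers)
    # Pilot samples to estimate the per-level variances V_l.
    pilot = [np.asarray(level_samplers[l](n_pilot), dtype=float) for l in range(L)]
    V = np.array([p.var(ddof=1) for p in pilot])
    C = np.asarray(costs, dtype=float)
    # Cost-optimal allocation: N_l ~ sqrt(V_l / C_l), scaled so that sum_l V_l / N_l <= eps2.
    N = np.ceil(np.sqrt(V / C) * np.sum(np.sqrt(V * C)) / eps2).astype(int)
    N = np.maximum(N, n_pilot)
    # Telescoping sum of per-level sample means.
    estimate = 0.0
    for l in range(L):
        samples = pilot[l]
        if N[l] > n_pilot:
            extra = np.asarray(level_samplers[l](N[l] - n_pilot), dtype=float)
            samples = np.concatenate([samples, extra])
        estimate += samples.mean()
    return estimate, N
```

In an OUU setting, an inner-loop estimate of this kind (and analogous ones for higher-order statistics) would be evaluated at every design visited by the outer-loop optimizer.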

Related content

The limitations of turbulence closure models in the context of Reynolds-averaged Navier-Stokes (RANS) simulations play a significant part in contributing to the uncertainty of Computational Fluid Dynamics (CFD). Perturbing the spectral representation of the Reynolds stress tensor within physical limits is common practice in several commercial and open-source CFD solvers, in order to obtain estimates for the epistemic uncertainties of RANS turbulence models. Recent research revealed that the amount of Reynolds stress tensor perturbation needs to be moderated due to stability issues of the solver. In this paper we point out that the consequent common implementation can lead to unintended states of the resulting perturbed Reynolds stress tensor. The combination of eigenvector perturbation and moderation factor may actually result in moderated eigenvalues that are no longer linearly dependent on the originally unperturbed and fully perturbed eigenvalues. Hence, the computational implementation is no longer in accordance with the conceptual idea of the Eigenspace Perturbation Framework. We verify the implementation of the conceptual description with respect to its self-consistency. Adequately representing the basic concept results in a computational implementation that improves the self-consistency of the Reynolds stress tensor perturbation.
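
As a minimal illustration of the self-consistency requirement, the sketch below blends the unperturbed eigenvalues linearly with a fully perturbed target state, so that any moderation factor keeps the result on the straight line between the two. The names `lam_target` (limiting-state eigenvalues) and `delta` (moderation factor) are assumed notation, not the paper's code.

```python
import numpy as np

def moderated_eigenvalues(lam_unperturbed, lam_target, delta):
    """Blend the unperturbed Reynolds stress eigenvalues linearly with the
    fully perturbed (limiting-state) eigenvalues.  delta in [0, 1] is the
    moderation factor: 0 recovers the baseline, 1 the fully perturbed state,
    and intermediate values stay on the line between the two, which is the
    self-consistency property discussed above."""
    lam_u = np.asarray(lam_unperturbed, dtype=float)
    lam_t = np.asarray(lam_target, dtype=float)
    return (1.0 - delta) * lam_u + delta * lam_t
```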

Conversational AI systems exhibit a level of human-like behavior that promises to have profound impacts on many aspects of daily life -- how people access information, create content, and seek social support. Yet these models have also shown a propensity for biases, offensive language, and conveying false information. Consequently, understanding and moderating safety risks in these models is a critical technical and social challenge. Perception of safety is intrinsically subjective, where many factors -- often intersecting -- could determine why one person may consider a conversation with a chatbot safe and another person could consider the same conversation unsafe. In this work, we focus on demographic factors that could influence such diverse perceptions. To this end, we contribute an analysis using Bayesian multilevel modeling to explore the connection between rater demographics and how raters report safety of conversational AI systems. We study a sample of 252 human raters stratified by gender, age group, race/ethnicity group, and locale. This rater pool provided safety labels for 1,340 human-chatbot conversations. Our results show that intersectional effects involving demographic characteristics such as race/ethnicity, gender, and age, as well as content characteristics, such as degree of harm, all play significant roles in determining the safety of conversational AI systems. For example, race/ethnicity and gender show strong intersectional effects, particularly among South Asian and East Asian women. We also find that conversational degree of harm impacts raters of all race/ethnicity groups, but that Indigenous and South Asian raters are particularly sensitive to this harm. Finally, we observe the effect of education is uniquely intersectional for Indigenous raters, highlighting the utility of multilevel frameworks for uncovering underrepresented social perspectives.
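
A schematic of the kind of multilevel (varying-intercept) model such an analysis relies on, with group-level effects for the demographic factors and an intersectional race-by-gender term, might read as follows; the exact likelihood, predictors, and grouping structure used in the study may differ.

$$
\text{safe}_i \sim \operatorname{Bernoulli}\bigl(\operatorname{logit}^{-1}(\eta_i)\bigr), \qquad
\eta_i = \beta_0 + \beta_{\text{harm}}\, x_i + u_{\text{gender}[i]} + u_{\text{race}[i]} + u_{\text{age}[i]} + u_{(\text{race}\times\text{gender})[i]},
$$
$$
u_g \sim \mathcal{N}(0, \sigma_g^2) \quad \text{for each grouping factor } g .
$$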

Double generalized linear models provide a flexible framework for modeling data by allowing the mean and the dispersion to vary across observations. Common members of the exponential dispersion family, including the Gaussian, Poisson, compound Poisson-gamma (CP-g), gamma, and inverse-Gaussian distributions, are known to admit such models. Their limited use can be attributed to ambiguities in model specification under a large number of covariates and to complications that arise when data display complex spatial dependence. In this work we consider a hierarchical specification for the CP-g model with a spatial random effect. The spatial effect is targeted at performing uncertainty quantification by modeling dependence within the data arising from location-based indexing of the response. We focus on a Gaussian process specification for the spatial effect. Simultaneously, we tackle the problem of model specification for such models using Bayesian variable selection, effected through a continuous spike-and-slab prior on the model parameters, specifically the fixed effects. The novelty of our contribution lies in the Bayesian frameworks developed for such models. We perform various synthetic experiments to showcase the accuracy of our frameworks, and then apply them to analyze automobile insurance premiums in Connecticut for the year 2008.
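
A schematic hierarchical specification consistent with the description above, assuming the dispersion is modeled through its own regression and the spike-and-slab prior acts on the fixed effects (all symbols illustrative):

$$
y(s_i) \mid \mu_i, \phi_i \sim \mathrm{CP\text{-}g}(\mu_i, \phi_i), \qquad
\log \mu_i = x_i^\top \beta + w(s_i), \qquad
\log \phi_i = z_i^\top \gamma,
$$
$$
w(\cdot) \sim \mathcal{GP}\bigl(0, \sigma^2 \rho_\theta(\cdot,\cdot)\bigr), \qquad
\beta_j \mid \pi_j \sim \pi_j\, \mathcal{N}(0, \tau_1^2) + (1-\pi_j)\, \mathcal{N}(0, \tau_0^2), \quad \tau_0 \ll \tau_1 .
$$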

Monte Carlo (MC) sampling is a popular method for estimating the statistics (e.g. expectation and variance) of a random variable. Its slow convergence has led to the emergence of advanced techniques to reduce the variance of the MC estimator for the outputs of computationally expensive solvers. The control variates (CV) method corrects the MC estimator with a term derived from auxiliary random variables that are highly correlated with the original random variable. These auxiliary variables may come from surrogate models. Such a surrogate-based CV strategy is extended here to the multilevel Monte Carlo (MLMC) framework, which relies on a sequence of levels corresponding to numerical simulators with increasing accuracy and computational cost. MLMC combines output samples obtained across levels into a telescoping sum of differences between MC estimators for successive fidelities. In this paper, we introduce three multilevel variance reduction strategies that rely on surrogate-based CV and MLMC. MLCV is presented as an extension of CV in which the correction terms devised from surrogate models for simulators of different levels add up. MLMC-CV improves the MLMC estimator by using a CV based on a surrogate of the correction term at each level. Further variance reduction is achieved by using the surrogate-based CVs of all the levels in the MLMC-MLCV strategy. Alternative solutions that reduce the subset of surrogates used for the multilevel estimation are also introduced. The proposed methods are tested on a test case from the literature consisting of a spectral discretization of an uncertain 1D heat equation, where the statistic of interest is the expected value of the integrated temperature along the domain at a given time. The results are assessed in terms of the accuracy and computational cost of the multilevel estimators, depending on whether the construction of the surrogates, and the associated computational cost, precede the evaluation of the estimator. We show that when the lower-fidelity outputs are strongly correlated with the high-fidelity outputs, a significant variance reduction is obtained by using surrogate models for the coarser levels only. We also show that taking advantage of pre-existing surrogate models is an even more efficient strategy.
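
The following sketch illustrates the MLMC-CV idea in its simplest form: a control-variate correction, built from a surrogate whose mean is known (or precomputed), is applied to the MC estimator of each level difference before summing the telescoping series. Function and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def cv_corrected_mean(f, g, g_mean):
    """CV-corrected MC mean: E[f] ~ mean(f) - beta * (mean(g) - E[g]),
    with beta chosen to minimize the variance of the corrected estimator.
    g holds surrogate outputs evaluated on the same inputs as f."""
    f, g = np.asarray(f, dtype=float), np.asarray(g, dtype=float)
    beta = np.cov(f, g, ddof=1)[0, 1] / g.var(ddof=1)
    return f.mean() - beta * (g.mean() - g_mean)

def mlmc_cv_mean(level_diffs, surrogate_diffs, surrogate_means):
    """MLMC-CV sketch: apply a surrogate-based CV to every level of the
    telescoping sum and add up the corrected per-level estimates."""
    return sum(cv_corrected_mean(f, g, m)
               for f, g, m in zip(level_diffs, surrogate_diffs, surrogate_means))
```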

We present an adaptive algorithm for the computation of quantities of interest involving the solution of a stochastic elliptic PDE where the diffusion coefficient is parametrized by means of a Karhunen-Lo\`eve expansion. The approximation of the equivalent parametric problem requires a restriction of the countably infinite-dimensional parameter space to a finite-dimensional parameter set, a spatial discretization, and an approximation in the parametric variables. We consider a sparse grid approach between these approximation directions in order to reduce the computational effort and propose a dimension-adaptive combination technique. In addition, a sparse grid quadrature for the high-dimensional parametric approximation is employed and simultaneously balanced with the spatial and stochastic approximation. Our adaptive algorithm constructs a sparse grid approximation based on the benefit-cost ratio, so that the regularity, and thus the decay of the Karhunen-Lo\`eve coefficients, is not required beforehand. The decay is detected and exploited as the algorithm adjusts to the anisotropy in the parametric variables. We include numerical examples for the Darcy problem with a lognormal permeability field, which illustrate good performance of the algorithm: for sufficiently smooth random fields, we essentially recover the rate of the spatial discretization as the asymptotic convergence rate with respect to the computational cost.
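
Schematically, the dimension-adaptive combination technique builds the approximation from hierarchical differences across the spatial, stochastic, and parametric directions and refines the admissible index with the largest benefit-cost ratio; the notation below is illustrative rather than the paper's.

$$
Q_{\mathcal{I}} = \sum_{\mathbf{k} \in \mathcal{I}} \Delta_{\mathbf{k}} Q, \qquad
\Delta_{\mathbf{k}} = \bigotimes_{j} \bigl( U^{(j)}_{k_j} - U^{(j)}_{k_j - 1} \bigr), \qquad
\mathbf{k}^{*} = \arg\max_{\mathbf{k}\ \text{admissible}} \frac{\bigl| \Delta_{\mathbf{k}} Q \bigr|}{C_{\mathbf{k}}},
$$
where $U^{(j)}_{k}$ denotes the level-$k$ approximation operator in direction $j$ (spatial mesh, parameter truncation, or quadrature level) and $C_{\mathbf{k}}$ the cost of evaluating the difference.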

We establish sparsity and summability results for coefficient sequences of Wiener-Hermite polynomial chaos expansions of countably-parametric solutions of linear elliptic and parabolic divergence-form partial differential equations with Gaussian random field inputs. The novel proof technique developed here is based on analytic continuation of parametric solutions into the complex domain. It differs from previous works that used bootstrap arguments and induction on the differentiation order of solution derivatives with respect to the parameters. The present holomorphy-based argument allows a unified, ``differentiation-free'' proof of sparsity (expressed in terms of $\ell^p$-summability or weighted $\ell^2$-summability) of sequences of Wiener-Hermite coefficients in polynomial chaos expansions in various scales of function spaces. The analysis also implies corresponding analyticity and sparsity results for posterior densities in Bayesian inverse problems subject to Gaussian priors on uncertain inputs from function spaces. Our results furthermore yield dimension-independent convergence rates of various \emph{constructive} high-dimensional deterministic numerical approximation schemes such as single-level and multi-level versions of Hermite-Smolyak anisotropic sparse-grid interpolation and quadrature in both forward and inverse computational uncertainty quantification.
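
For orientation, the sparsity statements concern coefficient sequences of expansions of the following form (notation schematic, with $\mathcal{F}$ the set of finitely supported multi-indices and $X$ the relevant solution space):

$$
u(\boldsymbol{y}) = \sum_{\nu \in \mathcal{F}} u_\nu\, H_\nu(\boldsymbol{y}), \qquad
H_\nu(\boldsymbol{y}) = \prod_{j \ge 1} H_{\nu_j}(y_j), \qquad
\bigl( \lVert u_\nu \rVert_X \bigr)_{\nu \in \mathcal{F}} \in \ell^p(\mathcal{F}) \ \text{for some } 0 < p < 1 ,
$$
and such $\ell^p$- (or weighted $\ell^2$-) summability is what yields the dimension-independent convergence rates of the sparse-grid schemes mentioned above.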

The prevalence and importance of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings. Algorithmic decisions are used in assigning students to schools, users to advertisers, and applicants to job interviews. These decisions should heed the preferences of individuals, and simultaneously be fair with respect to their merits (synonymous with fit, future performance, or need). Merits conditioned on observable features are always \emph{uncertain}, a fact that is exacerbated by the widespread use of machine learning algorithms to infer merit from the observables. As our key contribution, we carefully axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits; indeed, it simultaneously recognizes uncertainty as the primary potential cause of unfairness and an approach to address it. We design a linear programming framework to find fair utility-maximizing distributions over allocations, and we show that the linear program is robust to perturbations in the estimated parameters of the uncertain merit distributions, a key property in combining the approach with machine learning techniques.
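
As a toy illustration of the linear programming viewpoint (not the paper's axiomatized fairness constraints), the sketch below maximizes expected utility over marginal assignment probabilities, subject to doubly substochastic constraints and per-pair lower bounds that stand in for an uncertainty-aware fairness guarantee; all names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fair_assignment_lp(utility, min_share):
    """Toy LP over marginal assignment probabilities p[i, j]:
    maximize expected utility subject to each applicant and each slot being
    assigned with total probability at most 1, and p[i, j] >= min_share[i, j]
    as a stand-in fairness lower bound derived from uncertain merits."""
    n, m = utility.shape
    c = -utility.ravel()                       # linprog minimizes, so negate
    A_ub, b_ub = [], []
    for i in range(n):                         # row sums (applicants) <= 1
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_ub.append(row)
        b_ub.append(1.0)
    for j in range(m):                         # column sums (slots) <= 1
        row = np.zeros(n * m)
        row[j::m] = 1.0
        A_ub.append(row)
        b_ub.append(1.0)
    bounds = [(min_share[i, j], 1.0) for i in range(n) for j in range(m)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x.reshape(n, m)
```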

Accurate prediction of origin-destination (O-D) travel demand is crucial for transportation management. However, traditional spatial-temporal deep learning models struggle to address the sparse and long-tailed characteristics of high-resolution O-D matrices and to quantify prediction uncertainty. This difficulty arises from the numerous zeros and over-dispersed demand patterns within these matrices, which challenge the Gaussian assumption inherent to deterministic deep learning models. To address these challenges, we propose a novel approach: the Spatial-Temporal Tweedie Graph Neural Network (STTD). STTD introduces the Tweedie distribution as a compelling alternative to the traditional 'zero-inflated' model and leverages spatial and temporal embeddings to parameterize travel demand distributions. Our evaluations using real-world datasets highlight STTD's superiority in providing accurate predictions and precise confidence intervals, particularly in high-resolution scenarios.
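
The key modeling ingredient is the Tweedie likelihood with power parameter between 1 and 2, which places positive probability mass exactly at zero while allowing over-dispersed positive demand. A minimal sketch of the corresponding training loss (up to terms independent of the predicted mean), written in PyTorch purely for illustration and not taken from the STTD code, is:

```python
import torch

def tweedie_nll(y, mu, p=1.5, eps=1e-8):
    """Tweedie negative log-likelihood (dropping terms independent of mu)
    for power parameter 1 < p < 2.  y are observed demands (many zeros),
    mu are predicted means, e.g. produced from spatial-temporal embeddings."""
    mu = torch.clamp(mu, min=eps)
    return torch.mean(-y * mu.pow(1.0 - p) / (1.0 - p) + mu.pow(2.0 - p) / (2.0 - p))
```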

As neural networks are increasingly deployed, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates, or suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for the calibration of neural networks, and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainties in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards broader usage of such methods is given.
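
One concrete instance of the reducible/irreducible split discussed above is the entropy-based decomposition for an ensemble classifier, sketched below as a generic recipe rather than any particular method surveyed.

```python
import numpy as np

def ensemble_uncertainty(member_probs, eps=1e-12):
    """Decompose the predictive uncertainty of an ensemble classifier:
    total (predictive entropy) = data/aleatoric (mean member entropy)
                               + model/epistemic (mutual information).
    member_probs: array of shape (n_members, n_classes), one softmax per member."""
    p = np.asarray(member_probs, dtype=float)
    mean_p = p.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))
    aleatoric = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```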

Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but also over hyperparameters, to improve the state of the art in both settings. For the best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
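
A toy sketch of the search-then-stratify-then-select flow described above, assuming user-supplied `train_fn`, `score_fn`, and `hp_sampler` callables; this is only a schematic, not the paper's exact algorithm.

```python
import random

def hyper_deep_ensemble(train_fn, score_fn, hp_sampler, n_hp=10, n_seeds=3, k=5):
    """Schematic hyper-deep ensemble: random search over hyperparameters,
    stratified over random initializations (seeds), followed by greedy
    selection of the k members that most improve the ensemble's validation score.
    train_fn(hp, seed) -> model; score_fn(list_of_models) -> validation metric
    (higher is better); hp_sampler(rng) -> one hyperparameter configuration."""
    rng = random.Random(0)
    candidates = [train_fn(hp, seed)
                  for hp in (hp_sampler(rng) for _ in range(n_hp))
                  for seed in range(n_seeds)]
    ensemble = []
    for _ in range(k):
        best = max(candidates, key=lambda m: score_fn(ensemble + [m]))
        ensemble.append(best)
        candidates.remove(best)
    return ensemble
```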
