
The R package GeoAdjust (https://github.com/umut-altay/GeoAdjust-package) implements fast empirical Bayesian geostatistical inference for household survey data from the Demographic and Health Surveys Program (DHS) using Template Model Builder (TMB). DHS household surveys are an important source of data for tracking demographic and health indicators, but positional uncertainty has been intentionally introduced into the GPS coordinates to preserve privacy. GeoAdjust accounts for such positional uncertainty in geostatistical models containing both spatial random effects and raster- and distance-based covariates. The package supports Gaussian, binomial, and Poisson likelihoods with identity, logit, and log link functions, respectively. The user defines the desired model structure by setting a small number of function arguments and can easily experiment with different hyperparameters for the priors. GeoAdjust is the first software package specifically designed to address positional uncertainty in the GPS coordinates of point-referenced household survey data. The package provides inference for model parameters and can predict values at unobserved locations.
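To make the jittering problem concrete, here is a minimal sketch of one common way to handle positional uncertainty: averaging a covariate surface over the displacement distribution of each cluster by Monte Carlo. This illustrates the general idea only, not GeoAdjust's API; the function names and the uniform angle-and-distance displacement model are stand-ins, with the 5 km radius reflecting the usual DHS policy of displacing rural clusters by up to 5 km (urban by up to 2 km).

```python
import numpy as np

def sample_true_locations(obs_xy, max_km, n=200, rng=None):
    """Sample candidate true cluster locations around a jittered DHS
    coordinate, assuming angle ~ Uniform(0, 2*pi) and displacement
    distance ~ Uniform(0, max_km).  Coordinates are in km."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    dist = rng.uniform(0.0, max_km, size=n)
    return obs_xy[0] + dist * np.cos(theta), obs_xy[1] + dist * np.sin(theta)

def jitter_adjusted_covariate(obs_xy, covariate_at, max_km=5.0, n=200):
    """Monte Carlo average of a covariate surface over the
    positional-uncertainty distribution of one cluster."""
    xs, ys = sample_true_locations(obs_xy, max_km, n)
    return np.mean([covariate_at(x, y) for x, y in zip(xs, ys)])

# Example with a synthetic distance-to-point covariate:
covariate = lambda x, y: np.hypot(x - 10.0, y - 10.0)
print(jitter_adjusted_covariate(np.array([0.0, 0.0]), covariate))
```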

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
May 12, 2023

The use of deep learning approaches for image reconstruction is of contemporary interest in radiology, especially for approaches that solve inverse problems associated with imaging. In deployment, these models may be exposed to input distributions that are widely shifted from the training data, due in part to data biases or drifts. We propose a metric based on the local Lipschitz constant, determined from a single trained model, that can be used to estimate model uncertainty for image reconstructions. We demonstrate a monotonic relationship between the local Lipschitz value and the Mean Absolute Error, and show that this method can provide a threshold for determining whether a given DL reconstruction approach is well suited to the task. Our uncertainty estimation method can be used to identify out-of-distribution test samples, convey information about epistemic uncertainty, and guide proper data augmentation. Quantifying the uncertainty of learned reconstruction approaches is especially pertinent in the medical domain, where reconstructed images must remain diagnostically accurate.
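As a rough illustration of the kind of quantity involved, the sketch below estimates a local Lipschitz value at a single input by finite differences over random small perturbations. This is a generic proxy written for this summary, not the paper's estimator; `model`, the perturbation scale `eps`, and the number of directions are placeholders.

```python
import torch

def local_lipschitz(model, x, eps=1e-3, n_dirs=32):
    """Monte Carlo estimate of the local Lipschitz value of `model`
    at input `x`: the largest ratio of output change to input change
    over random small perturbations (a lower bound in general)."""
    model.eval()
    with torch.no_grad():
        y = model(x)
        ratios = []
        for _ in range(n_dirs):
            delta = torch.randn_like(x)
            delta = eps * delta / delta.norm()
            ratios.append(((model(x + delta) - y).norm() / delta.norm()).item())
    return max(ratios)

# Toy usage: for a linear model this lower-bounds the spectral
# norm of the weight matrix.
lin = torch.nn.Linear(8, 4)
print(local_lipschitz(lin, torch.randn(1, 8)))
```

In-distribution data could then be used to calibrate a threshold on this value, flagging test inputs that exceed it as potentially out of distribution.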

Deep learning models, including modern systems such as large language models, are well known to offer unreliable estimates of the uncertainty of their decisions. To improve the quality of a model's confidence levels, also known as its calibration, common approaches add either data-dependent or data-independent regularization terms to the training loss. Data-dependent regularizers have recently been introduced in the context of conventional frequentist learning to penalize deviations between confidence and accuracy. In contrast, data-independent regularizers are at the core of Bayesian learning, enforcing adherence of the variational distribution in the model parameter space to a prior density. The former approach is unable to quantify epistemic uncertainty, while the latter is severely affected by model misspecification. In light of the limitations of both methods, this paper proposes an integrated framework, referred to as calibration-aware Bayesian neural networks (CA-BNNs), that applies both regularizers while optimizing over a variational distribution as in Bayesian learning. Numerical results validate the advantages of the proposed approach in terms of expected calibration error (ECE) and reliability diagrams.
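A loss of the combined form described above might look roughly as follows. This is a schematic sketch, not the paper's objective: the squared confidence/accuracy gap stands in for the data-dependent calibration regularizer, and `kl_term` is assumed to be supplied by whatever variational layer library computes the posterior-to-prior KL divergence.

```python
import torch
import torch.nn.functional as F

def ca_bnn_loss(logits, labels, kl_term, lam_cal=1.0, lam_kl=1.0):
    """Illustrative calibration-aware variational loss:
    NLL + data-independent KL regularizer (Bayesian prior term)
    + data-dependent penalty on the gap between mean confidence
    and batch accuracy (a soft calibration proxy)."""
    nll = F.cross_entropy(logits, labels)
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)
    acc = (pred == labels).float()
    cal_penalty = (conf - acc).pow(2).mean()
    return nll + lam_kl * kl_term + lam_cal * cal_penalty

# Toy usage with random logits and a placeholder KL value:
logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
print(ca_bnn_loss(logits, labels, kl_term=torch.tensor(0.1)))
```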

A Bayesian treatment of deep learning allows for the computation of uncertainties associated with the predictions of deep neural networks. We show how the concept of Errors-in-Variables can be used in Bayesian deep regression to also account for the uncertainty associated with the input of the employed neural network. The presented approach thereby exploits a relevant, but generally overlooked, source of uncertainty and yields a decomposition of the predictive uncertainty into an aleatoric and an epistemic part that is more complete and, in many cases, more consistent from a statistical perspective. We illustrate the approach on various simulated and real examples and observe that using an Errors-in-Variables model increases the estimated uncertainty while preserving the prediction performance of models without Errors-in-Variables. For examples with a known regression function, we observe that this ground truth is covered substantially better by the Errors-in-Variables model, indicating that the presented approach leads to more reliable uncertainty estimation.
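The decomposition can be caricatured with plain sampling: propagate an assumed input-noise distribution through an ensemble of models and split the predictive variance into a between-model (epistemic) part and a within-model part inflated by the input noise. This toy sketch is not the paper's Bayesian treatment; the ensemble, the Gaussian input-noise model, and `sigma_x` are illustrative assumptions.

```python
import numpy as np

def eiv_predictive_uncertainty(models, x_obs, sigma_x, n_input=100):
    """Propagate input noise x ~ N(x_obs, sigma_x^2) through an
    ensemble.  Epistemic: variance of per-model means; input-induced
    (aleatoric-like): mean of per-model variances over input draws."""
    rng = np.random.default_rng(0)
    xs = x_obs + sigma_x * rng.standard_normal(n_input)
    preds = np.array([[m(x) for x in xs] for m in models])  # (M, N)
    per_model_mean = preds.mean(axis=1)
    epistemic = per_model_mean.var()
    input_induced = preds.var(axis=1).mean()
    return per_model_mean.mean(), epistemic, input_induced

# Toy ensemble of slightly different fitted functions:
models = [lambda x, a=a: a * np.sin(x) for a in (0.9, 1.0, 1.1)]
print(eiv_predictive_uncertainty(models, x_obs=1.0, sigma_x=0.1))
```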

Physical models with uncertain inputs are commonly represented as parametric partial differential equations (PDEs), that is, PDEs whose inputs are expressed as functions of parameters with an associated probability distribution. Developing efficient and accurate solution strategies that account for errors on the space, time, and parameter domains simultaneously is highly challenging. Indeed, it is well known that standard polynomial-based approximations on the parameter domain can incur errors that grow in time. In this work, we focus on advection-diffusion problems with parameter-dependent wind fields. A novel adaptive solution strategy is proposed that allows users to combine stochastic collocation on the parameter domain with off-the-shelf adaptive timestepping algorithms with local error control. This is a non-intrusive strategy that builds a polynomial-based surrogate that is adapted sequentially in time. The algorithm is driven by a so-called hierarchical estimator for the parametric error, which it balances against an estimate of the global timestepping error derived from a scaling argument.
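The non-intrusive structure is easy to mimic on a toy problem: solve independently at each collocation node with an adaptive time integrator, interpolate in the parameter, and estimate the parametric error hierarchically by comparing surrogates on nested node sets. The scalar ODE below is a stand-in for a semi-discretized advection-diffusion problem, and the node counts and tolerances are arbitrary; none of this reproduces the paper's estimator.

```python
import numpy as np
from scipy.integrate import solve_ivp
from numpy.polynomial import chebyshev as C

def solve_at_param(y, t_end, rtol=1e-6):
    """Toy stand-in for a parameter-dependent PDE solve:
    u'(t) = -(1 + 0.5*y) * u, with adaptive timestepping
    (local error control) handled by solve_ivp."""
    sol = solve_ivp(lambda t, u: -(1.0 + 0.5 * y) * u, (0.0, t_end),
                    [1.0], rtol=rtol, atol=1e-9)
    return sol.y[0, -1]

def surrogate(n_nodes, t_end):
    """Stochastic collocation on Chebyshev nodes in y in [-1, 1]."""
    nodes = np.cos(np.pi * (2 * np.arange(n_nodes) + 1) / (2 * n_nodes))
    vals = [solve_at_param(y, t_end) for y in nodes]
    return C.Chebyshev.fit(nodes, vals, deg=n_nodes - 1)

# Hierarchical-style parametric error estimate: coarse vs enriched.
t_end = 1.0
coarse, fine = surrogate(4, t_end), surrogate(7, t_end)
yy = np.linspace(-1, 1, 200)
print(f"estimated parametric error: {np.max(np.abs(fine(yy) - coarse(yy))):.2e}")
```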

We derive exact reconstruction methods for cracks consisting of unions of Lipschitz hypersurfaces in the context of Calderón's inverse conductivity problem. Our first method obtains upper bounds for the unknown cracks, bounds that can be shrunk to obtain the exact crack locations upon verifying certain operator inequalities for differences of the local Neumann-to-Dirichlet maps. This method can simultaneously handle perfectly insulating and perfectly conducting cracks, and it appears to be the first rigorous reconstruction method capable of this. Our second method assumes that only perfectly insulating cracks or only perfectly conducting cracks are present. Once more using operator inequalities, this method generates approximate cracks that are guaranteed to be subsets of the unknown cracks that are being reconstructed.
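For readers unfamiliar with the notation, the operator inequalities here are meant in the usual quadratic-form (Loewner) sense for self-adjoint operators acting on boundary data. The statement below records only this standard meaning, not the paper's specific inequalities, which differ between the insulating and conducting cases.

```latex
\[
  A \preceq B
  \quad\Longleftrightarrow\quad
  \langle g, (B - A)\, g \rangle \ge 0
  \quad \text{for all admissible boundary data } g .
\]
```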

In causal inference, sensitivity analysis is important for assessing the robustness of study conclusions to key assumptions. We perform sensitivity analysis of the assumption that missing outcomes are missing completely at random. We follow a Bayesian approach, which is nonparametric for the outcome distribution and can be combined with an informative prior on the sensitivity parameter. We give insight into the posterior and provide theoretical guarantees in the form of Bernstein-von Mises theorems for estimating the mean outcome. We study different parametrisations of the model involving Dirichlet process priors on the distribution of the outcome and on the distribution of the outcome conditional on the subject being treated. We show that these parametrisations incorporate a prior on the sensitivity parameter in different ways and discuss their relative merits. We also present a simulation study showing the performance of the methods in finite-sample scenarios.
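In the simplest setting, a single mean with missing outcomes, the flavor of such an analysis can be sketched with a Bayesian bootstrap standing in for the Dirichlet process and an explicit prior on a mean-shift sensitivity parameter. The parametrisation and the normal prior below are illustrative choices, not the paper's models.

```python
import numpy as np

def posterior_mean_outcome(y_obs, n_missing, delta_prior, n_draws=4000):
    """Illustrative Bayesian-bootstrap sensitivity analysis for a mean
    with missing outcomes.  The observed-outcome distribution gets a
    nonparametric (Dirichlet-weights) posterior; the sensitivity
    parameter delta is the assumed mean shift of missing outcomes
    relative to observed ones, drawn from an informative prior."""
    rng = np.random.default_rng(1)
    n_obs = len(y_obs)
    p_missing = n_missing / (n_obs + n_missing)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        w = rng.dirichlet(np.ones(n_obs))   # Bayesian bootstrap weights
        mu_obs = w @ y_obs
        delta = delta_prior(rng)            # prior draw for the shift
        draws[i] = (1 - p_missing) * mu_obs + p_missing * (mu_obs + delta)
    return draws

y = np.random.default_rng(0).normal(1.0, 1.0, size=200)
post = posterior_mean_outcome(y, n_missing=50,
                              delta_prior=lambda r: r.normal(0.0, 0.5))
print(post.mean(), np.quantile(post, [0.025, 0.975]))
```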

Artistic pieces can be studied from several perspectives, one example being their reception among readers over time. In the present work, we approach this topic from the standpoint of literary works, specifically the task of predicting whether a book will become a best seller. Unlike previous approaches, we focused on the full content of books and considered both visualization and classification tasks. We employed visualization for preliminary exploration of the data structure and properties, using SemAxis and linear discriminant analysis. Then, to obtain quantitative and more objective results, we employed various classifiers. These approaches were applied to a dataset containing (i) books published from 1895 to 1924 and recognized as best sellers by the Publishers Weekly Bestseller Lists and (ii) literary works published in the same period but not mentioned in those lists. Our comparison of methods revealed that the best result, obtained by combining a bag-of-words representation with a logistic regression classifier, was an average accuracy of 0.75 under both leave-one-out and 10-fold cross-validation. This outcome suggests that it is infeasible to predict the success of books with high accuracy using only the full content of the texts. Nevertheless, our findings provide insights into the factors contributing to the relative success of a literary work.
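The best-performing pipeline in the comparison (bag-of-words plus logistic regression, evaluated by leave-one-out and 10-fold cross-validation) maps directly onto a few lines of scikit-learn. The corpus below is synthetic filler so the snippet runs on its own; the study's actual book texts and best-seller labels would take its place.

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in corpus; in the study, `texts` holds the full content of
# each book and `labels` marks Publishers Weekly best sellers.
rng = random.Random(0)
vocab_a, vocab_b = ["love", "fortune", "city"], ["whale", "moor", "sea"]
texts = [" ".join(rng.choices(vocab_a if i % 2 else vocab_b, k=50))
         for i in range(40)]
labels = [i % 2 for i in range(40)]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
print("10-fold acc:", cross_val_score(clf, texts, labels, cv=10).mean())
print("LOO acc:   ", cross_val_score(clf, texts, labels, cv=LeaveOneOut()).mean())
```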

A long tradition of studies in psycholinguistics has examined the formation and generalization of ad hoc conventions in reference games, showing how newly acquired conventions for a given target transfer to new referential contexts. However, another axis of generalization remains understudied: how do conventions formed for one target transfer to completely distinct targets, when specific lexical choices are unlikely to repeat? This paper presents two dyadic studies (N = 240) that address this axis of generalization, focusing on the role of nameability: the a priori likelihood that two individuals will share the same label. We leverage the recently released KiloGram dataset, a collection of abstract tangram images that is orders of magnitude larger than those previously available and exhibits high diversity in properties such as nameability. Our first study asks how nameability shapes convention formation, while the second asks how new conventions generalize to entirely new targets of reference. Our results raise new questions about how ad hoc conventions extend beyond target-specific reuse of specific lexical choices.

We consider the problem of learning a directed graph $G^\star$ from observational data. We assume that the distribution which gives rise to the samples is Markov and faithful to the graph $G^\star$ and that there are no unobserved variables. We do not rely on any further assumptions regarding the graph or the distribution of the variables. In particular, we allow for directed cycles in $G^\star$ and work in the fully non-parametric setting. Given the set of conditional independence statements satisfied by the distribution, we aim to find a directed graph which satisfies the same $d$-separation statements as $G^\star$. We propose a hybrid approach consisting of two steps. We first find a partially ordered partition of the vertices of $G^\star$ by optimizing a certain score in a greedy fashion. We prove that any optimal partition uniquely characterizes the Markov equivalence class of $G^\star$. Given an optimal partition, we propose an algorithm for constructing a graph in the Markov equivalence class of $G^\star$ whose strongly connected components correspond to the elements of the partition, and which are partially ordered according to the partial order of the partition. Our algorithm comes in two versions: one that is provably correct, and another that performs fast in practice.
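The first step has the shape of a standard greedy search, sketched generically below. The score is deliberately left abstract (a plain callable): in the paper it is defined through the conditional independence statements of the distribution, which this skeleton does not attempt to reproduce, and restricting candidate blocks to singletons is a simplification for brevity.

```python
def greedy_partition(vertices, score):
    """Generic greedy construction of an ordered partition: grow the
    partition one block at a time, at each step choosing the block
    (from candidate subsets of the remaining vertices) that maximizes
    a user-supplied score of the partial partition."""
    remaining, partition = set(vertices), []
    while remaining:
        # Candidate blocks: singletons only, for brevity.
        candidates = [frozenset([v]) for v in remaining]
        best = max(candidates, key=lambda b: score(partition, b))
        partition.append(best)
        remaining -= best
    return partition

# Toy usage with an arbitrary stand-in score:
print(greedy_partition(range(4), score=lambda p, b: -min(b)))
```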

Several precise and computationally efficient results for pointing error models in two asymptotic cases are derived in this paper. The normalized mean-squared error (NMSE) performance metric is employed to quantify the accuracy of the different models. For the case where the beam width is relatively larger than the detection aperture, we propose three kinds of models of the form $c_1\exp(-c_2 r^2)$. It is shown that the modified intensity uniform model not only achieves accuracy comparable to that of the best linearized model, but is also expressed in a more elegant mathematical form than the traditional Farid model. This indicates that the modified intensity uniform model is preferable for the performance analysis of free-space optical (FSO) systems in the presence of pointing errors. By treating the beam spot as a point in the case where the beam width is smaller than the detection aperture, the solution of the pointing error model is transformed into a smooth function approximation problem, and we find that the proposed point approximation model can achieve a more accurate approximation than the model induced from the Vasylyev model in some scenarios.
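The $c_1\exp(-c_2 r^2)$ form is easy to reproduce numerically: compute the fraction of a Gaussian beam collected by a displaced circular aperture, fit the two constants, and score the fit with the NMSE. The aperture radius, beam width, and fitting grid below are arbitrary choices for illustration; the snippet shows the shape of the calculation, not the paper's specific models.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.optimize import curve_fit

def collected_fraction(r, a=0.5, w=2.0):
    """Fraction of a Gaussian beam (1/e^2 radius w) collected by a
    circular aperture of radius a whose center is displaced by r."""
    intensity = lambda y, x: (2 / (np.pi * w**2)) * np.exp(
        -2 * ((x - r)**2 + y**2) / w**2)
    val, _ = dblquad(intensity, -a, a,
                     lambda x: -np.sqrt(a**2 - x**2),
                     lambda x: np.sqrt(a**2 - x**2))
    return val

# Fit the c1*exp(-c2*r^2) form (beam wider than the aperture) and
# evaluate the fit with the NMSE used in the paper's comparisons.
r_grid = np.linspace(0.0, 3.0, 25)
h = np.array([collected_fraction(r) for r in r_grid])
(c1, c2), _ = curve_fit(lambda r, c1, c2: c1 * np.exp(-c2 * r**2),
                        r_grid, h, p0=(h[0], 1.0))
nmse = np.mean((h - c1 * np.exp(-c2 * r_grid**2))**2) / np.mean(h**2)
print(f"c1={c1:.4f}, c2={c2:.4f}, NMSE={nmse:.2e}")
```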
