In Bayesian theory, the role of information is central. The influence exerted by prior information on posterior outcomes can undermine a Bayesian analysis, owing to the potentially subjective nature of the prior choice. When the studied model is not enriched with sufficient a priori information, reference prior theory emerges as a proficient tool. Based on the mutual information criterion, the theory handles the construction of a noninformative prior whose choice can be called objective. We unveil an original analogy between reference prior theory and global sensitivity analysis, from which we propose a natural generalization of the definition of mutual information. A class of our generalized metrics is studied, and our results reinforce the choice of the Jeffreys prior, which satisfies our extended definition of a reference prior.
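
For reference, the mutual information criterion underlying the construction can be stated in its standard form (the paper's generalized metrics are not reproduced here):
\[
I(\pi) = \int \pi(\theta) \int p(y \mid \theta)\, \log \frac{\pi(\theta \mid y)}{\pi(\theta)} \, dy \, d\theta,
\]
the expected Kullback-Leibler divergence between posterior and prior. A reference prior maximizes this quantity in a suitable asymptotic sense, and in regular one-dimensional models the maximizer is the Jeffreys prior.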

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
January 30, 2024

A modification of Amdahl's law for the case of an increment in the number of processor elements in a computer system is considered. The coefficient $k$ linking the accelerations of parallel and parallel specialized computer systems is determined. The limiting values of the coefficient are investigated and its theoretical maximum is calculated. It is proved that $k > 1$ for any positive increment of processor elements. The obtained formulas are combined into a single method that allows determining the maximum theoretical acceleration of a parallel specialized computer system in comparison with the acceleration of a minimal parallel computer system. The method is tested on the Apriori, k-nearest neighbors, CDF 9/7, fast Fourier transform, and naive Bayesian classifier algorithms.
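
The paper's exact definition of $k$ is not given above; as a hedged illustration, here is a minimal sketch under the classical form of Amdahl's law, assuming $k$ is read as the ratio of the speedup after adding $\Delta$ processor elements to the speedup before:

```python
def amdahl_speedup(p, n):
    """Classical Amdahl's law: speedup with n processors when a
    fraction p of the work is parallelizable (0 <= p < 1)."""
    return 1.0 / ((1.0 - p) + p / n)

def speedup_ratio_k(p, n, delta):
    """Illustrative coefficient k: ratio of the speedup of a system
    with n + delta processor elements to that of a system with n.
    This is an assumed reading of the paper's coefficient, not its
    exact definition."""
    return amdahl_speedup(p, n + delta) / amdahl_speedup(p, n)

# For any positive increment delta, the ratio exceeds 1:
print(speedup_ratio_k(p=0.9, n=8, delta=4))   # ~1.214
```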

A popular method for variance reduction in observational causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Yang and Ding, 2018; Crump et al., 2009), but in modern settings, where heteroscedastic data are analyzed with non-parametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are threefold: first, we describe novel procedures for selecting which units to trim. Our procedures differ from previous work in that we trim not only units with small propensities but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates; instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey, and on a semi-synthetic Medicare dataset, finding promising results in all settings.
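
As a hedged illustration of conventional propensity-based trimming (the paper's variance-based procedures are not reproduced here), a minimal sketch assuming a fitted propensity model and the common fixed-threshold rule of Crump et al. (2009):

```python
import numpy as np

def trim_by_propensity(X, propensity_scores, alpha=0.1):
    """Keep only units whose estimated propensity lies in
    [alpha, 1 - alpha]; alpha = 0.1 is the rule of thumb from
    Crump et al. (2009). Returns the retained rows and the mask."""
    mask = (propensity_scores >= alpha) & (propensity_scores <= 1 - alpha)
    return X[mask], mask

# Illustrative use with synthetic scores:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
e_hat = rng.uniform(size=1000)          # stand-in for fitted propensities
X_trim, keep = trim_by_propensity(X, e_hat)
print(keep.mean())                       # fraction of sample retained
```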

In a world filled with data, a nation is expected to make decisions informed by data. However, countries first need to collect and publish such data in a way that is meaningful to both citizens and policy makers. A good thematic classification can be instrumental in helping users navigate and find the right resources in a rich data repository such as the one maintained by Colombia's National Administrative Department of Statistics (DANE). The visual analytics framework developed by T. Munzner [T. Munzner, Visualization Analysis and Design, A K Peters Visualization Series, 1, 2014] is a methodology for conducting visual analysis that can help with this task. This paper presents a case study applying this framework to help DANE better visualize its data repository and present a more understandable classification of it. It describes the three main analysis tasks identified, the proposed solutions, and the insights generated from them.

This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom), focusing on creating a causal understanding of AI model outputs. We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation. The framework aims to revolutionize how AI processes are understood, making them more transparent and interpretable. While the initial application of this model is demonstrated on synthetic data, the implications of this research extend beyond these simple applications. This general approach has the potential to redefine interactions with AI across multiple domains, fostering trust and informed decision-making in healthcare and in various sectors where AI's decision-making processes are critical. The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems in an increasingly digital world.

Managing infinite- or high-dimensional data is a challenge when storage, transmission, or computation resources are constrained. In the simplest scenario, where the data consist of a noisy infinite-dimensional signal, we introduce the notion of local \emph{effective dimension} (i.e., pertinent to the underlying signal) and formulate and study the problem of its recovery from noisy data. This problem is connected to the problems of adaptive quantization, (lossy) data compression, and oracle signal estimation. We apply a Bayesian approach and study the frequentist properties of the resulting posterior; a purely frequentist version of the results is also proposed. We derive upper and lower bounds for identifying the local effective dimension, which show that only so-called \emph{one-sided inference} on the local effective dimension can be ensured, whereas \emph{two-sided inference} is in general impossible. We establish the \emph{minimal} conditions under which two-sided inference can be made. Finally, a connection to the problem of smoothness estimation for some traditional smoothness scales (Sobolev scales) is considered.
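
As a hedged illustration of the simplest scenario mentioned above, assuming the standard Gaussian sequence model (the paper's precise definition of the local effective dimension is not reproduced here):
\[
X_i = \theta_i + \varepsilon\,\xi_i, \qquad \xi_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(0,1), \qquad i = 1, 2, \ldots,
\]
where one may informally regard the effective dimension of $\theta$ as the smallest $d$ at which the tail energy $\sum_{i>d}\theta_i^2$ falls below the cost $d\,\varepsilon^2$ of estimating the first $d$ coordinates, i.e., the dimension an oracle projection estimator would choose.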

A discrete spatial lattice can be cast as a network structure over which spatially correlated outcomes are observed. A second network structure may also capture similarities among measured features, when such information is available. Incorporating these network structures when analyzing such doubly structured data can improve predictive power and lead to better identification of important features in the data-generating process. Motivated by applications in spatial disease mapping, we develop a new doubly regularized regression framework that incorporates these network structures for analyzing high-dimensional datasets. Our estimators can easily be implemented with standard convex optimization algorithms. In addition, we describe a procedure for obtaining asymptotically valid confidence intervals and hypothesis tests for our model parameters. We show empirically that our framework provides improved predictive accuracy and inferential power compared to existing high-dimensional spatial methods. These advantages hold given fully accurate network information, and also with networks that are partially misspecified or uninformative. The application of the proposed method to modeling COVID-19 mortality data suggests that it can improve prediction of deaths beyond standard spatial models, and that it selects relevant covariates more often.
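
The estimator itself is not spelled out above; as a hypothetical illustration of how a network structure can enter a regularized regression, a graph-Laplacian penalty of the following form is common in this literature:
\[
\hat{\beta} = \arg\min_{\beta}\; \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2\, \beta^{\top} L_{\mathrm{feat}}\, \beta,
\]
where $L_{\mathrm{feat}}$ is the Laplacian of the feature-similarity network (so the penalty shrinks coefficients of linked features toward each other), and the spatial network can enter analogously through a Laplacian penalty on spatially indexed effects. The paper's doubly regularized estimator may differ in detail.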

We analyze the optimized adaptive importance sampler (OAIS) for performing Monte Carlo integration with general proposals. We leverage a classical result showing that the bias and the mean-squared error (MSE) of importance sampling scale with the $\chi^2$-divergence between the target and the proposal, and we develop a scheme that performs global optimization of the $\chi^2$-divergence. While this quantity is known to be convex for exponential-family proposals, the case of general proposals has been an open problem. We close this gap by utilizing nonasymptotic bounds for stochastic gradient Langevin dynamics (SGLD) for the global optimization of the $\chi^2$-divergence, and we derive nonasymptotic bounds for the MSE by leveraging recent results from the non-convex optimization literature. The resulting AIS schemes have explicit theoretical guarantees that are uniform-in-time.
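
As a hedged sketch of the optimization ingredient named above, here is a generic stochastic gradient Langevin dynamics (SGLD) loop; the objective, its stochastic gradient, and the step sizes are placeholders, not the paper's scheme:

```python
import numpy as np

def sgld(grad_estimate, theta0, step=1e-3, beta=1.0, n_iters=10_000, seed=0):
    """Generic SGLD: gradient step on a (possibly non-convex) objective U
    plus Gaussian noise scaled so the chain targets exp(-beta * U).
    grad_estimate(theta) returns an unbiased stochastic gradient of U."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        noise = rng.normal(size=theta.shape)
        theta = theta - step * grad_estimate(theta) + np.sqrt(2.0 * step / beta) * noise
    return theta

# Toy usage: minimize the non-convex U(theta) = (theta^2 - 1)^2 in 1-D,
# standing in for a chi-squared-divergence objective over proposal parameters.
grad = lambda th: 4.0 * th * (th**2 - 1.0)
print(sgld(grad, theta0=np.array([3.0])))
```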

We consider the problem of inferring a contact network from nodal states observed during an epidemiological process. In a black-box Bayesian optimisation framework, this problem reduces to a discrete likelihood optimisation over the set of possible networks. The high dimensionality of this set, which grows quadratically with the number of network nodes, makes this optimisation computationally challenging. Moreover, computing the likelihood of a network requires estimating the probability that the observed data are realised during the evolution of the epidemiological process on that network. A stochastic simulation algorithm struggles to estimate the rare events of observing the data (corresponding to the ground-truth network) during the evolution with a significantly different network, and hence prevents optimisation of the likelihood. We replace stochastic simulation with solving the chemical master equation for the probabilities of all network states. This equation also suffers from the curse of dimensionality, owing to the exponentially large number of network states. We overcome this by approximating the probabilities of states in the tensor-train decomposition. This enables fast and accurate computation of small probabilities and likelihoods. Numerical simulations demonstrate efficient black-box Bayesian inference of the network.
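
For reference, the chemical master equation mentioned above in its standard form (the epidemiological transition propensities of the paper are not reproduced here):
\[
\frac{d p(x, t)}{dt} = \sum_{r} \big[ a_r(x - \nu_r)\, p(x - \nu_r, t) - a_r(x)\, p(x, t) \big],
\]
where $p(x, t)$ is the probability of state $x$ at time $t$, $a_r$ is the propensity of transition $r$, and $\nu_r$ is its state-change vector. The tensor-train decomposition approximates the exponentially large array $p(\cdot, t)$ by a product of low-rank factors, which is what makes storing and propagating all state probabilities tractable.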

We analyze the effects of enforcing vs. exempting access ISPs from net neutrality regulations when platforms are present and employ two-sided pricing in their business models. The study is conducted in a scenario where users and Content Providers (CPs) access the internet through their serving ISPs and use a platform that intermediates and matches users and CPs, among other service offerings. Our hypothesis is that platform two-sided pricing interacts in a relevant manner with the access ISP, which may be allowed (a hypothetical non-neutrality scenario) or not (the current neutrality regulatory status) to apply two-sided pricing in its service business model. We preliminarily conclude that the platforms extract surplus from the CPs under the current net neutrality regime for the ISPs, and that the platforms would not be able to do so under the counterfactual situation where the ISPs could apply two-sided prices.

Typical pipelines for model geometry generation in computational biomedicine start from images, which are usually assumed to depict the object at rest, even though it is in mechanical equilibrium under several forces. We refer to the computation of the stress-free geometry as the reference configuration problem, and in this work we extend such a formulation to the theory of fully nonlinear poroelastic media. The main steps are (i) writing the equations in terms of the reference porosity and (ii) defining a time-dependent problem whose steady-state solution is the reference porosity. This problem can be computationally challenging, as it can require several hundred iterations to converge, so we propose the use of Anderson acceleration to speed up the procedure. Our evidence shows that this strategy can reduce the number of iterations by up to 80\%. In addition, we note that a primal formulation of the nonlinear mass conservation equations is not consistent due to the presence of second-order derivatives of the displacement, which we alleviate through adequate mixed formulations. All claims are validated through numerical simulations in both idealized and realistic scenarios.
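
Anderson acceleration itself is a generic fixed-point technique; a minimal sketch follows, assuming the textbook windowed least-squares form applied to a toy contraction rather than to the paper's poroelastic solve:

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, n_iters=100, tol=1e-10):
    """Anderson acceleration for the fixed-point iteration x = g(x),
    using a window of the last m residuals. A generic sketch; the paper
    applies the technique to the poroelastic reference-configuration
    problem, not to this toy setting."""
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []              # g(x) values and residuals f = g(x) - x
    for _ in range(n_iters):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m:
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                       # plain Picard step to start
        else:
            F = np.column_stack(F_hist)
            dF = F[:, 1:] - F[:, :-1]    # residual differences
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            G = np.column_stack(G_hist)
            dG = G[:, 1:] - G[:, :-1]
            x = gx - dG @ gamma          # Anderson-mixed update
    return x

# Toy usage: accelerate the contraction g(x) = cos(x) toward its
# fixed point (approximately 0.739085).
print(anderson_accelerate(np.cos, x0=np.array([1.0])))
```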
