
Since the emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), many contact surveys have been conducted to measure changes in human interactions in the face of the pandemic and non-pharmaceutical interventions. These surveys were typically conducted longitudinally, using protocols that differ from those used in the pre-pandemic era. We present a model-based statistical approach that can reconstruct contact patterns at 1-year resolution even when the age of contacts is reported coarsely, in 5- or 10-year age bands. This innovation is rooted in population-level consistency constraints on how contacts between groups must add up, which prompts us to call the approach presented here the Bayesian rate consistency model. The model incorporates computationally efficient Hilbert space Gaussian process priors to infer the dynamics of age- and gender-structured social contacts, and is designed to adjust for reporting fatigue in longitudinal surveys. In simulations, we demonstrate that contact patterns by gender and 1-year age interval can be reconstructed from coarse data with adequate accuracy, within a fully Bayesian framework that quantifies uncertainty. We investigate the patterns in social contact data collected in Germany from April to June 2020 across five longitudinal survey waves. We reconstruct the fine age structure of social contacts during the early stages of the pandemic and demonstrate that social contacts rebounded in a structured, non-homogeneous manner. We also show that by July 2020, social contact intensities remained well below pre-pandemic values despite a considerable easing of non-pharmaceutical interventions. This model-based inference approach is open access, computationally tractable (enabling full Bayesian uncertainty quantification), and readily applicable to contemporary survey data as long as the exact age of survey participants is reported.
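The population-level consistency constraint underpinning the model is, in essence, a reciprocity condition: the total number of contacts from one age-gender group to another must equal the total in the reverse direction. A minimal sketch in our own notation (the paper's parameterization may differ):

$$m_{ab}\,N_a \;=\; m_{ba}\,N_b,$$

where $m_{ab}$ is the expected number of contacts an individual in age-gender group $a$ has with individuals in group $b$, and $N_a$, $N_b$ are the corresponding population sizes. Constraints of this form tie together contact reports across coarse age bands and allow strength to be borrowed when disaggregating to 1-year resolution.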

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modelling, from languages and methods to tools and applications. Participants in MODELS come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modelling and model-driven software and systems. This year's edition will give the modelling community an opportunity to further advance the foundations of modelling, and to present innovative applications of modelling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 2, 2022

The ever-growing size of modern space-time data sets, such as those collected by remote sensing, requires new techniques for their efficient and automated processing, including gap-filling of missing values. CUDA-based parallelization on GPUs has become a popular way to dramatically increase the computational efficiency of various approaches. Recently, we proposed a computationally efficient and competitive, yet simple, spatial prediction approach inspired by statistical physics models, called the modified planar rotator (MPR) method. Its GPU implementation provided additional computational acceleration exceeding two orders of magnitude in comparison with CPU calculations. In the current study we propose a rather general approach to modelling spatial heterogeneity in GPU-implemented spatial prediction methods for two-dimensional gridded data by introducing spatial variability into the model parameters. Predictions of unknown values are obtained from non-equilibrium conditional simulations, assuming "local" equilibrium conditions. We demonstrate that the proposed method leads to significant improvements in both prediction performance and computational efficiency.
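As a rough illustration of the spin-angle mapping and local relaxation idea behind MPR-style gap filling (a plain CPU sketch under our own simplifications, not the spatially heterogeneous GPU method proposed here; the simple neighbour-averaging update and function name are ours):

    import numpy as np

    def mpr_like_fill(grid, missing_mask, n_iter=200):
        """Toy gap filler: map data to spin angles in [0, pi], then repeatedly
        set each missing cell to the circular mean of its four neighbours
        (periodic borders via np.roll). Illustrative only."""
        vmin, vmax = np.nanmin(grid), np.nanmax(grid)
        theta = (grid - vmin) / (vmax - vmin) * np.pi   # data -> angles
        theta[missing_mask] = np.pi / 2                 # neutral initial angle
        for _ in range(n_iter):
            s = np.zeros_like(theta)
            c = np.zeros_like(theta)
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                s += np.roll(np.sin(theta), shift, axis=axis)
                c += np.roll(np.cos(theta), shift, axis=axis)
            theta[missing_mask] = np.arctan2(s, c)[missing_mask]  # relax gaps only
        return vmin + theta / np.pi * (vmax - vmin)     # angles -> data values

The heterogeneity proposed in the study would correspond, loosely, to letting the interaction parameters of such an update vary across the grid rather than being global constants.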

Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks are often unavailable in several applications, restricting the applicability of the above methods. We introduce a novel metric termed the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework is a generalization of the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, which makes it a better alternative to GSURE, which is unbiased only for the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
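For orientation, the classical SURE identity that the framework generalizes can be stated as follows (standard form, our notation): for measurements $y = x + \epsilon$ with $\epsilon \sim \mathcal N(0, \sigma^2 I_d)$ and an estimator $f(y)$ of the clean image $x$,

$$\mathbb{E}\,\|f(y)-x\|^2 \;=\; \mathbb{E}\,\|f(y)-y\|^2 \;-\; d\,\sigma^2 \;+\; 2\sigma^2\,\mathbb{E}\big[\nabla_y\!\cdot f(y)\big],$$

where $d$ is the number of measurements and $\nabla_y\!\cdot f(y)$ is the divergence of the network output with respect to its input (in practice often approximated with a Monte Carlo trace estimator). The ENSURE construction described above then averages projected GSURE-type losses over the randomly drawn sampling operators to recover an unbiased estimate of the full mean-square error.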

Background: The COVID-19 pandemic has had a profound impact on health, everyday life and economies around the world. An important complication that can arise in connection with a COVID-19 infection is acute kidney injury. A recent observational cohort study of COVID-19 patients treated at multiple sites of a tertiary care center in Berlin, Germany, identified risk factors for the development of (severe) acute kidney injury. Since inferring results from a single study can be tricky, we validate these findings and potentially adjust the results by including external information from other studies on acute kidney injury and COVID-19. Methods: We synthesize the results of the main study with those of other trials via a Bayesian meta-analysis. The external information is used to construct a predictive distribution and to derive posterior estimates for the study of interest. We focus on various important potential risk factors for the development of acute kidney injury, such as mechanical ventilation, use of vasopressors, hypertension, obesity, diabetes, gender and smoking. Results: Our results show that, depending on the degree of heterogeneity in the data, the estimated effect sizes may be refined considerably by the inclusion of external data. Our findings confirm that mechanical ventilation and use of vasopressors are important risk factors for the development of acute kidney injury in COVID-19 patients. Hypertension also appears to be a risk factor that should not be ignored. Shrinkage weights depended to a large extent on the estimated heterogeneity in the model. Conclusions: Our work shows how external information can be used to adjust the results of a primary study using a Bayesian meta-analytic approach. How much information is borrowed from external studies depends on the degree of heterogeneity present in the model.
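A minimal sketch of the kind of random-effects model that underlies such a Bayesian meta-analysis (our notation; the priors and parameterization used in the paper may differ):

$$\hat\theta_k \mid \theta_k \sim \mathcal N(\theta_k,\, s_k^2), \qquad \theta_k \mid \mu, \tau \sim \mathcal N(\mu,\, \tau^2), \qquad k = 1,\dots,K,$$

where $\hat\theta_k$ is the estimated effect (e.g., a log odds ratio) for a risk factor in study $k$, $s_k$ its standard error, $\mu$ the overall effect and $\tau$ the between-study heterogeneity. The posterior for the study of interest shrinks its estimate toward the external evidence, with the amount of shrinkage governed largely by $\tau$, consistent with the dependence of the shrinkage weights on the estimated heterogeneity noted above.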

We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted. Since optimal mechanisms typically achieve these high success probabilities, our results imply that optimal private mechanisms for many basic statistics problems are robust. We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems. Assuming the Brennan-Bresler secret-leakage planted clique conjecture, we demonstrate a fundamental tradeoff between computational efficiency, privacy leakage, and success probability for sparse mean estimation. Private algorithms which match this tradeoff are not yet known -- we achieve that (up to polylogarithmic factors) in a polynomially-large range of parameters via the Sum-of-Squares method. To establish an information-computation gap for private sparse mean estimation, we also design new (exponential-time) mechanisms using fewer samples than efficient algorithms must use. Finally, we give evidence for privacy-induced information-computation gaps for several other statistics and learning problems, including PAC learning parity functions and estimation of the mean of a multivariate Gaussian.
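One way to see the robustness claim, sketched informally in our notation: let $M$ be an $\varepsilon$-differentially private mechanism that fails to be accurate on a clean sample of size $n$ with probability at most $\delta$. If an adversary corrupts an $\eta$ fraction of the sample, the clean and corrupted datasets differ in at most $\eta n$ entries, so group privacy gives

$$\Pr\big[M(D_{\text{corrupted}})\ \text{fails}\big] \;\le\; e^{\varepsilon \eta n}\,\Pr\big[M(D_{\text{clean}})\ \text{fails}\big] \;\le\; e^{\varepsilon \eta n}\,\delta,$$

which remains small whenever $\delta \ll e^{-\varepsilon \eta n}$, i.e., whenever the private mechanism succeeds with very high probability on clean data.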

In preclinical investigations, e.g. in vitro, in vivo and in silico studies, the pharmacokinetic, pharmacodynamic and toxicological characteristics of a drug are evaluated before advancing to a first-in-man trial. Usually, each study is analyzed independently, and the choice of the human dose range does not leverage the knowledge gained from all studies. Taking the preclinical data into account through inferential procedures can be particularly valuable for obtaining a more precise and reliable starting dose and dose range. We propose a Bayesian framework for integrating multi-source data from preclinical study results extrapolated to humans, which allows us to predict the quantities of interest (e.g. the minimum effective dose, the maximum tolerated dose, etc.) in humans. We build an approach, divided into four main steps, based on sequential parameter estimation for each study, extrapolation to humans, commensurability checking between posterior distributions, and a final merging of information to increase the precision of estimation. The new framework is evaluated via an extensive simulation study based on a real-life example in oncology, inspired by the preclinical development of galunisertib. Our approach makes better use of all the available information than a standard framework, reducing uncertainty in the predictions and potentially leading to a more efficient dose selection.
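As a rough illustration of the final merging step (our own simplification, not necessarily the procedure used in the paper), suppose the commensurability check retains a set $\mathcal K$ of studies whose extrapolated posteriors for a quantity of interest $\phi$ (e.g. the minimum effective dose) are approximately normal with means $m_k$ and variances $v_k$; precision-weighted pooling then gives

$$\phi \;\sim\; \mathcal N\!\left(\frac{\sum_{k\in\mathcal K} m_k / v_k}{\sum_{k\in\mathcal K} 1/v_k},\ \Big(\sum_{k\in\mathcal K} 1/v_k\Big)^{-1}\right),$$

so that merging commensurate sources increases the total precision and narrows the predicted human dose range relative to using any single study alone.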

This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP), given access to a generative model (simulator). When the ground-truth MDP is weakly communicating, we prove an upper bound of $\widetilde O(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy and $\delta$ is the failure probability. This bound improves on the best-known mixing-time-based approaches in [Jin & Sidford 2021], which assume that the mixing time of every deterministic policy is bounded. The core of our analysis is a proper reduction bound from AMDP problems to discounted MDP (DMDP) problems, which may be of independent interest since it allows the application of DMDP algorithms to AMDPs in other settings. We complement our upper bound by proving a minimax lower bound of $\Omega(|\mathcal S| |\mathcal A| H \varepsilon^{-2} \ln \frac{1}{\delta})$ total samples, showing that a linear dependence on $H$ is necessary and that our upper bound matches the lower bound in all parameters of $(|\mathcal S|, |\mathcal A|, H, \ln \frac{1}{\delta})$ up to logarithmic factors.
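The reduction rests on the standard link between discounted values and average reward, stated here only roughly and in our notation: for a weakly communicating MDP with optimal gain $\rho^*$ and optimal bias $h^*$, one has, up to constant factors,

$$\big|(1-\gamma)\,V^*_\gamma(s) \;-\; \rho^*\big| \;\lesssim\; (1-\gamma)\,\mathrm{sp}(h^*) \qquad \text{for all states } s,$$

so choosing the effective horizon $\tfrac{1}{1-\gamma}$ on the order of $H/\varepsilon$ makes the discounted problem an $O(\varepsilon)$-accurate proxy for the average-reward problem, after which a sample-efficient DMDP solver can be invoked.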

Deep transfer learning (DTL) reflects a long-term quest to enable deep neural networks (DNNs) to reuse historical experience as efficiently as humans do. This ability is called knowledge transferability. A commonly used paradigm for DTL is to first learn general knowledge (pre-training) and then reuse it (fine-tuning) for a specific target task. There are two consensuses about the transferability of pre-trained DNNs: (1) a larger domain gap between pre-training and downstream data brings lower transferability; (2) transferability gradually decreases from lower layers (near the input) to higher layers (near the output). However, these consensuses were basically drawn from experiments on natural images, which limits their scope of application. This work aims to study and complement them from a broader perspective by proposing a method to measure the transferability of pre-trained DNN parameters. Our experiments on twelve diverse image classification datasets reach conclusions similar to the previous consensuses. More importantly, two new findings are presented: (1) in addition to the domain gap, a larger amount of data and greater dataset diversity in the downstream target task also inhibit transferability; (2) although the lower layers learn basic image features, they are usually not the most transferable layers, due to their domain sensitivity.
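For concreteness, a common way to probe layer-wise transferability in practice is to freeze a prefix of a pre-trained backbone and fine-tune only the remaining layers on the target task. A hedged PyTorch sketch of this generic recipe (not the measurement method proposed above; the backbone choice and the number of frozen blocks are arbitrary):

    import torch
    import torch.nn as nn
    from torchvision import models

    def freeze_prefix(model: nn.Module, n_frozen: int) -> nn.Module:
        # Freeze the first n_frozen top-level children (lower layers, near the input).
        for i, child in enumerate(model.children()):
            if i < n_frozen:
                for p in child.parameters():
                    p.requires_grad = False
        return model

    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head for the target task
    model = freeze_prefix(backbone, n_frozen=6)           # keep conv1 through layer2 fixed
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
    )

Sweeping n_frozen and comparing target-task accuracy is one crude way to see which layers carry transferable features and which are too domain-sensitive.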

Bayesian clinical trials can benefit from available historical information through the elicitation of informative prior distributions. Concerns are, however, often raised about the potential for prior-data conflict and the impact of Bayesian test decisions on frequentist operating characteristics, with particular attention paid to inflation of the type I error rate. This motivates the development of principled borrowing mechanisms that strike a balance between frequentist and Bayesian decisions. Ideally, the trust assigned to historical information defines the degree of robustness to prior-data conflict one is willing to sacrifice. However, such a relationship is often not directly available when explicitly considering inflation of the type I error rate. We build on available literature relating frequentist and Bayesian test decisions, and investigate a rationale for type I error rate inflation which explicitly and linearly relates the amount of borrowing to the amount of type I error rate inflation in one-arm studies. A novel dynamic borrowing mechanism tailored to hypothesis testing is additionally proposed. We show that, while dynamic borrowing precludes a simple closed-form computation of the type I error rate, an explicit upper bound can still be enforced. Connections with the robust mixture prior approach, particularly in relation to the choice of the mixture weight and robust component, are made. Simulations are performed to show the properties of the approach for normal and binomial outcomes.
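The robust mixture prior referenced above has, in its simplest form, the structure (our notation)

$$\pi(\theta) \;=\; w\,\pi_{\mathrm{hist}}(\theta) \;+\; (1-w)\,\pi_{\mathrm{vague}}(\theta),$$

where $\pi_{\mathrm{hist}}$ is an informative component derived from the historical data, $\pi_{\mathrm{vague}}$ is a weakly informative robust component, and the weight $w \in [0,1]$ encodes the trust placed in the historical information. After observing trial data, the posterior mixture weight shifts toward whichever component explains the data better; this is what makes the borrowing dynamic, and it is also why a simple closed-form type I error rate is no longer available and only an upper bound can be enforced.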

We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE) fail to satisfy basic properties like continuity. We present a rigorous framework for analyzing calibration measures, inspired by the literature on property testing. We propose a ground-truth notion of distance from calibration: the $\ell_1$ distance to the nearest perfectly calibrated predictor. We define a consistent calibration measure as one that is a polynomial-factor approximation to this distance. Applying our framework, we identify three calibration measures that are consistent and can be estimated efficiently: smooth calibration, interval calibration, and Laplace kernel calibration. The former two give quadratic approximations to the ground-truth distance, which we show is information-theoretically optimal. Our work thus establishes fundamental lower and upper bounds on measuring the distance to calibration, and also provides theoretical justification for preferring certain metrics (like Laplace kernel calibration) in practice.
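In symbols, the ground-truth notion above is (our notation, glossing over exactly which predictors $g$ are admissible):

$$\mathrm{dCE}(f) \;=\; \inf_{g\ \text{perfectly calibrated}} \ \mathbb{E}\,\big|f(x) - g(x)\big|,$$

the $\ell_1$ distance from $f$ to the nearest perfectly calibrated predictor; a consistent calibration measure is then one bounded above and below by polynomial functions of $\mathrm{dCE}(f)$.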

Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and that it learns better representations in a vision domain and for dialogue generation.
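The decomposition leans on the chain rule of mutual information: writing $X$ for one view and splitting the other into progressively more informed subviews $Y_1, \dots, Y_K$ (our notation),

$$I(X;\, Y_1, \dots, Y_K) \;=\; I(X;\, Y_1) \;+\; \sum_{k=2}^{K} I\big(X;\, Y_k \mid Y_1, \dots, Y_{k-1}\big),$$

so each term on the right measures only a modest chunk of the total MI and can therefore be lower-bounded more tightly by a contrastive (InfoNCE-style) estimator, whose bounds are known to saturate when the MI being estimated is large.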
