
Experimental designs with hierarchically structured errors are pervasive in many biomedical areas; this hierarchical architecture must be taken into account in order to model the dispersion properly and draw reliable inferences from the data. This paper addresses the question of estimating a proportion or a ratio from positive/negative count data, akin to those generated by droplet digital polymerase chain reaction (ddPCR) experiments, when the number of biological or technical replicates is limited. We present and discuss a Bayesian framework, for which we provide and implement a Gibbs sampler in R, and compare it to a random-effects model.
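
The abstract mentions a Gibbs sampler implemented in R; as a rough illustration of the idea only (not the paper's actual model or code), the sketch below runs a Metropolis-within-Gibbs sampler for a simple hierarchical beta-binomial model of positive-droplet counts, with made-up replicate counts and a fixed concentration parameter `phi` assumed for simplicity.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Hypothetical droplet counts: n_j droplets per replicate, k_j positive droplets.
n = np.array([15000, 14800, 15200, 15100])
k = np.array([1480, 1390, 1620, 1555])

def log_cond_mu(mu, p, phi):
    """Log conditional density of mu given the replicate proportions p
    (flat prior on mu, fixed concentration phi)."""
    return beta.logpdf(p, mu * phi, (1 - mu) * phi).sum()

def gibbs(n, k, n_iter=5000, phi=200.0, step=0.01):
    """Metropolis-within-Gibbs for a hierarchical beta-binomial model:
    k_j ~ Binomial(n_j, p_j), p_j ~ Beta(mu*phi, (1-mu)*phi)."""
    mu = k.sum() / n.sum()
    draws = np.empty(n_iter)
    for it in range(n_iter):
        # 1. Conjugate update of each replicate-level proportion p_j.
        p = rng.beta(mu * phi + k, (1 - mu) * phi + n - k)
        # 2. Random-walk Metropolis update of the population proportion mu.
        prop = mu + step * rng.normal()
        if 0 < prop < 1 and np.log(rng.uniform()) < log_cond_mu(prop, p, phi) - log_cond_mu(mu, p, phi):
            mu = prop
        draws[it] = mu
    return draws

draws = gibbs(n, k)
print("posterior mean of mu:", draws[1000:].mean())   # discard burn-in
```

In practice the concentration parameter would also be given a prior and sampled rather than fixed, but the two-level structure above captures why replicate-to-replicate dispersion matters for the estimated proportion.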

Related content

Digital twins (DTs) are often defined as a pairing of a physical entity and a corresponding virtual entity that mimics certain aspects of the former, depending on the use case. In recent years, this concept has facilitated numerous use cases ranging from design to validation and predictive maintenance of large and small high-tech systems. Although growing in popularity in both industry and academia, digital twins and the methodologies for developing and maintaining them differ vastly. To better understand these differences and similarities, we performed a semi-structured interview study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of digital twins. In this paper, we present our analysis and findings from this study, organized around eight research questions (RQs). In general, we identified an overall lack of uniformity in the understanding of digital twins and in the tools, techniques, and methodologies used for their development and maintenance. Furthermore, considering that digital twins are software-intensive systems, we recognize significant potential for adopting more software engineering practices, processes, and expertise in the various stages of a digital twin's lifecycle.

Digital twins hold substantial promise in many applications, but rigorous procedures for assessing their accuracy are essential for their widespread deployment in safety-critical settings. By formulating this task within the framework of causal inference, we show that attempts to certify the correctness of a twin using real-world observational data are unsound unless potentially tenuous assumptions are made about the data-generating process. To avoid these assumptions, we propose an assessment strategy that instead aims to find cases where the twin is not correct, and present a general-purpose statistical procedure for doing so that may be used across a wide variety of applications and twin models. Our approach yields reliable and actionable information about the twin under minimal assumptions about the twin and the real-world process of interest. We demonstrate the effectiveness of our methodology via a large-scale case study involving sepsis modelling within the Pulse Physiology Engine, which we assess using the MIMIC-III dataset of ICU patients.

We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is "close" to $p$. Our main result is the first data structure that, given a sublinear (in $n$) number of samples from $p$, identifies $v_i$ in time sublinear in $k$. We also give an improved version of the algorithm of Acharya et al. (2018) that reports $v_i$ in time linear in $k$. The experimental evaluation of the latter algorithm shows that it achieves a significant reduction in the number of operations needed to achieve a given accuracy compared to prior work.
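
For context on the selection task itself, the following is a minimal sketch of the classical Scheffé-style tournament, the quadratic-in-$k$ baseline that faster selection methods (such as the sublinear data structure and the linear-in-$k$ algorithm described above) improve upon; the candidate distributions and sample sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def scheffe_select(V, samples):
    """Classic Scheffé tournament: pick the candidate in V (rows = candidate
    distributions over {0, ..., n-1}) that wins the most pairwise tests against
    the empirical distribution of `samples`.  Runs in O(k^2 * n) time, i.e. the
    quadratic baseline rather than the accelerated algorithms in the paper."""
    k, n = V.shape
    p_hat = np.bincount(samples, minlength=n) / len(samples)
    wins = np.zeros(k, dtype=int)
    for i in range(k):
        for j in range(i + 1, k):
            A = V[i] > V[j]                       # Scheffé set A_ij
            vi, vj, ph = V[i, A].sum(), V[j, A].sum(), p_hat[A].sum()
            wins[i if abs(vi - ph) <= abs(vj - ph) else j] += 1
    return int(np.argmax(wins))

# Tiny illustration with k = 3 candidates over a domain of size n = 6.
V = np.array([[.3, .3, .1, .1, .1, .1],
              [.1, .1, .3, .3, .1, .1],
              [.1, .1, .1, .1, .3, .3]])
samples = rng.choice(6, size=2000, p=V[1])
print("selected candidate:", scheffe_select(V, samples))   # expected: 1
```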

With the rise in popularity of digital atlases for communicating spatial variation, there is an increasing need for robust small-area estimates. However, current small-area estimation methods suffer from various modelling problems when data are very sparse or when estimates are required for areas with very small populations. These issues are particularly heightened when modelling proportions. Additionally, recent work has shown significant benefits in modelling at both the individual and area levels. We propose a two-stage Bayesian hierarchical small-area estimation model for proportions that can: account for survey design; use both individual-level survey-only covariates and area-level census covariates; reduce direct-estimate instability; and generate prevalence estimates for small areas with no survey data. Using a simulation study, we show that, compared with existing Bayesian small-area estimation methods, our model can provide optimal predictive performance (Bayesian mean relative root mean squared error, mean absolute relative bias, and coverage) of proportions under a variety of data conditions, including very sparse and unstable data. To assess the model in practice, we compare modelled estimates of current smoking prevalence for 1,630 small areas in Australia, using the 2017-2018 National Health Survey data combined with 2016 census data.
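
As a much-simplified illustration of why hierarchical pooling stabilises direct estimates in sparse areas (it omits the survey design, the covariates, and the two-stage structure of the proposed model), the sketch below applies method-of-moments beta-binomial shrinkage to simulated area-level counts.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical area-level survey counts: n_i respondents and y_i smokers per area.
n = rng.integers(5, 200, size=50)
p_true = rng.beta(4, 20, size=50)
y = rng.binomial(n, p_true)

# Direct estimates y_i / n_i are unstable where n_i is tiny.
direct = y / n

# Method-of-moments fit of a Beta(a, b) distribution across areas, then
# shrinkage of each direct estimate towards the overall mean (simplified pooling).
m, v = direct.mean(), direct.var()
s = m * (1 - m) / v - 1          # common "prior sample size" a + b
a, b = m * s, (1 - m) * s
shrunk = (y + a) / (n + a + b)

i = int(np.argmin(n))
print(f"smallest area (n={n[i]}): direct={direct[i]:.3f}, shrunk={shrunk[i]:.3f}")
```

The smallest areas move furthest towards the overall mean, which is the basic stabilising behaviour the full Bayesian hierarchical model provides in a principled, covariate-aware way.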

We examine the problem of variance components testing in general mixed-effects models using the likelihood ratio test. We account for the presence of nuisance parameters, i.e. the fact that some untested variances might also be equal to zero. Two main issues arise in this context, leading to a non-regular setting. First, under the null hypothesis the true parameter value lies on the boundary of the parameter space; moreover, due to the presence of nuisance parameters, the exact location of these boundary points is not known, which prevents the use of classical asymptotic theory of maximum likelihood estimation. Second, in the specific context of nonlinear mixed-effects models, the Fisher information matrix is singular at the true parameter value. We address these two points by proposing a shrunken parametric bootstrap procedure, which is straightforward to apply even for nonlinear models. We show that the procedure is consistent, solving both the boundary and the singularity issues, and we provide a verifiable criterion for the applicability of our theoretical results. We show through a simulation study that, compared to the asymptotic approach, our procedure has better small-sample performance and is more robust to the presence of nuisance parameters. A real data application is also provided.
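
A plain parametric bootstrap for a boundary likelihood ratio test (without the shrinkage step the paper adds) can be sketched as follows; the random-intercept example, the simulated data, and the use of statsmodels are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical data: 10 groups, 8 observations each, one covariate,
# generated with zero between-group variance (i.e. under the null).
g = np.repeat(np.arange(10), 8)
x = rng.normal(size=80)
y = 1.0 + 0.5 * x + rng.normal(0, 1, 80)
df = pd.DataFrame({"y": y, "x": x, "g": g})

def lrt_stat(data):
    """LRT statistic for H0: group variance = 0 in a random-intercept model."""
    ll0 = smf.ols("y ~ x", data).fit().llf
    ll1 = smf.mixedlm("y ~ x", data, groups=data["g"]).fit(reml=False).llf
    return max(0.0, 2 * (ll1 - ll0))

obs = lrt_stat(df)

# Parametric bootstrap: simulate from the fitted null model and refit both models.
null_fit = smf.ols("y ~ x", df).fit()
sigma = np.sqrt(null_fit.scale)
boot = []
for _ in range(200):
    df_b = df.copy()
    df_b["y"] = null_fit.fittedvalues + rng.normal(0, sigma, len(df))
    boot.append(lrt_stat(df_b))

print(f"observed LRT = {obs:.3f}, bootstrap p-value = {np.mean(np.array(boot) >= obs):.3f}")
```

The bootstrap distribution replaces the non-standard (mixture-of-chi-squared) asymptotic reference distribution; the paper's shrinkage step additionally pulls the estimated nuisance variances towards the boundary before simulating, which this sketch does not do.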

Network coding has been widely used as a technology to ensure efficient and reliable communication. The ability to recode packets at intermediate nodes is a major benefit of network coding implementations: it allows intermediate nodes to choose a different code rate and fine-tune the outgoing transmission to the channel conditions, removing the need for the source node to compensate for cumulative losses over a multi-hop network. Block network coding solutions already have practical recoders, but an on-the-fly recoder for sliding window network coding has not been studied in detail. In this paper, we present the implementation details of a practical recoder for sliding window network coding for the first time, along with a comprehensive performance analysis of a multi-hop network using the recoder. The sliding window recoder ensures that the network performs close to its capacity and that each node can use its outgoing links efficiently.
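
To make the recoding idea concrete, the toy sketch below recombines buffered coded packets at a relay over GF(2), so no decoding is needed at the intermediate node; a real sliding-window codec would typically operate over GF(2^8) and manage the coding window and feedback, which this example omits.

```python
import numpy as np

rng = np.random.default_rng(3)

def recode(coeffs, payloads, n_out):
    """Recoding over GF(2): each outgoing packet is a random XOR combination of
    the coded packets currently buffered at the relay.  Both the payload and the
    coding-coefficient vector are combined, so the header stays consistent."""
    out_c, out_p = [], []
    for _ in range(n_out):
        mask = rng.integers(0, 2, size=len(payloads), dtype=np.uint8)
        if not mask.any():
            mask[rng.integers(len(payloads))] = 1   # avoid the all-zero combination
        out_c.append(np.bitwise_xor.reduce(coeffs[mask == 1], axis=0))
        out_p.append(np.bitwise_xor.reduce(payloads[mask == 1], axis=0))
    return np.array(out_c), np.array(out_p)

# Three buffered coded packets over a window of 4 source packets, 16-byte payloads.
coeffs = rng.integers(0, 2, size=(3, 4), dtype=np.uint8)
payloads = rng.integers(0, 256, size=(3, 16), dtype=np.uint8)
new_coeffs, new_payloads = recode(coeffs, payloads, n_out=2)
print(new_coeffs)
```

Because the relay only recombines what it has received, it can adapt how many outgoing packets it emits (its code rate) to the conditions of its own outgoing link, which is the benefit the abstract highlights.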

This white paper introduces Interactive Digital Narratives (IDN) as a powerful tool for tackling the complex challenges we face in today's society. Within the scope of COST Action 18230 - Interactive Narrative Design for Complexity Representation, a group of researchers dedicated to studying media systematically selected six case studies of IDNs, including educational games, news media, and social media content, that confront and challenge the existing traditional media landscape. These case studies cover a wide range of important societal issues, such as racism, coloniality, feminist social movements, cultural heritage, war, and disinformation. By exploring this broad range of examples, we aim to demonstrate how IDN can effectively address social complexity in an interactive, participatory, and engaging manner. We encourage you to examine these case studies and discover for yourself how IDN can be used as a creative tool to address complex societal issues. This white paper may be of interest to journalists, digital content creators, game designers, developers, educators using information and communication technologies in the classroom, or anyone interested in learning how to use IDN tools to tackle complex societal issues. To that end, along with key scientific references, we offer key takeaways at the end of this paper that may be helpful for media practitioners at large, in two main ways: 1) designing IDNs to address complex societal issues and 2) using IDNs to engage audiences with complex societal issues.

Estimating dynamic treatment effects is essential across various disciplines, offering nuanced insights into the time-dependent causal impact of interventions. However, this estimation presents challenges due to the "curse of dimensionality" and time-varying confounding, which can lead to biased estimates. Additionally, correctly specifying the growing number of treatment-assignment and outcome models across multiple exposures becomes overly complex. Given these challenges, the concept of double robustness, under which some model misspecification is permitted, is extremely valuable, yet it has remained out of reach in practical applications. This paper introduces a new approach by proposing novel, robust estimators for both the treatment-assignment and outcome models. We present a "sequential model double robust" solution, demonstrating that double robustness over multiple time points can be achieved when the exposure at each time point is doubly robust. This approach improves the robustness and reliability of dynamic treatment effect estimation, addressing a significant gap in this field.
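
As a reminder of the single-time-point building block, the sketch below computes the standard augmented inverse-probability-weighted (AIPW) estimator on simulated data; it is not the paper's sequentially doubly robust estimator, and the data-generating process and nuisance models are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(4)

# Simulated single-period data: confounders X, binary treatment A, outcome Y.
n = 5000
X = rng.normal(size=(n, 2))
ps_true = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, ps_true)
Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # true ATE = 2

# Nuisance models: propensity score and treatment-specific outcome regressions.
ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW estimator: consistent if either the propensity or the outcome model is correct.
aipw = (mu1 - mu0
        + A * (Y - mu1) / ps
        - (1 - A) * (Y - mu0) / (1 - ps))
print("AIPW estimate of the ATE:", aipw.mean())
```

The difficulty the paper targets is extending this property to many time points with time-varying confounding, where the number of treatment and outcome models to specify grows with the length of the treatment sequence.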

Sequential recommendation aims to leverage users' historical behaviors to predict their next interaction. Existing works have not yet addressed two main challenges in sequential recommendation. First, user behaviors in rich historical sequences are often implicit and noisy preference signals, so they cannot sufficiently reflect users' actual preferences. In addition, users' dynamic preferences often change rapidly over time, and hence it is difficult to capture user patterns in their historical sequences. In this work, we propose a graph neural network model called SURGE (short for SeqUential Recommendation with Graph neural nEtworks) to address these two issues. Specifically, SURGE integrates different types of preferences in long-term user behaviors into clusters in the graph by re-constructing loose item sequences into tight item-item interest graphs based on metric learning. This helps explicitly distinguish users' core interests by forming dense clusters in the interest graph. Then, we perform cluster-aware and query-aware graph convolutional propagation and graph pooling on the constructed graph, which dynamically fuses and extracts users' currently activated core interests from noisy behavior sequences. We conduct extensive experiments on both public and proprietary industrial datasets. Experimental results demonstrate significant performance gains of our proposed method compared to state-of-the-art methods. Further studies on sequence length confirm that our method can model long behavioral sequences effectively and efficiently.
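
To illustrate the graph-construction step in a highly simplified form, the sketch below builds a sparse item-item interest graph from one behavior sequence by thresholding cosine similarities between (randomly initialised, hypothetical) item embeddings; in SURGE itself the metric is learned jointly with the recommender, and the cluster-aware convolution and pooling steps follow on top of this graph.

```python
import numpy as np

rng = np.random.default_rng(5)

def build_interest_graph(emb, seq, keep=0.3):
    """Score every pair of items in a behavior sequence by cosine similarity of
    their embeddings and keep only the strongest fraction of edges, yielding a
    sparse item-item interest graph whose dense regions correspond to core
    interests (simplified stand-in for metric-learning-based construction)."""
    E = emb[seq]                                   # (L, d) embeddings of the behaviors
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T                                  # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)                 # no self-loops
    thresh = np.quantile(sim[np.isfinite(sim)], 1 - keep)
    adj = (sim >= thresh).astype(float)
    return np.maximum(adj, adj.T)                  # symmetrise

emb = rng.normal(size=(1000, 32))                  # hypothetical item embedding table
seq = rng.integers(0, 1000, size=20)               # one user's behavior sequence
A = build_interest_graph(emb, seq)
print("edges kept:", int(A.sum() // 2))
```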
