
Modelling disease progression of iron deficiency anaemia (IDA) following oral iron supplement prescriptions is a prerequisite for evaluating the cost-effectiveness of oral iron supplements. Electronic health records (EHRs) from the Clinical Practice Research Datalink (CPRD) provide rich longitudinal data on IDA disease progression in patients registered with 663 General Practitioner (GP) practices in the UK, but they also create challenges for statistical analysis. First, the CPRD data are clustered at multiple levels (GP practices and patients), but their large volume makes it computationally difficult to fit standard random effects models for multi-level data. Second, observation times in the CPRD data are irregular and may be informative about disease progression; for example, shorter/longer gaps between GP visits could be associated with deteriorating/improving IDA. Existing methods for informative observation times are mostly based on complex joint models, which add further computational burden. To tackle these challenges, we develop a computationally efficient approach to modelling disease progression with EHR data that accounts for variability across multi-level clusters and for informative observation times. We apply the proposed method to the CPRD data to investigate IDA improvement and treatment intolerance following oral iron prescriptions in UK primary care.


In the geosciences, classical Euclidean methods are unsuitable for treating and analyzing some types of data, which may not belong to a vector space. This is the case for correlation matrices, a subfamily of symmetric positive definite matrices, which form a cone-shaped Riemannian manifold. We propose two novel applications that account for the non-linear behaviour typically present in multivariate geological data by exploiting the manifold structure of correlation matrices. First, we employ an extension of the linear model of coregionalization (LMC) that relaxes the linear mixture, usually assumed fixed over the domain, letting it vary locally according to the local strength of dependency between the coregionalized variables. Once this relaxation of the LMC is adopted, the main challenge is to interpolate the known correlation matrices throughout the domain in a reliable and coherent fashion. The present work adopts a non-Euclidean framework to achieve this, locally averaging and interpolating the correlations between variables while retaining the intrinsic geometry of correlation matrices. A second application deals with the clustering of multivariate data.
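The abstract does not spell out the averaging procedure; the sketch below is a minimal illustration of weighted averaging under the log-Euclidean metric, one common choice of non-Euclidean geometry for symmetric positive definite matrices (the paper's exact metric may differ, and a log-Euclidean mean of correlation matrices need not retain a unit diagonal in general):

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(mats, weights=None):
    """Weighted mean of symmetric positive definite matrices under the
    log-Euclidean metric: expm(sum_i w_i * logm(M_i)).
    Weights default to uniform."""
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    acc = sum(w * logm(m) for w, m in zip(weights, mats))
    return expm(acc)
```

With spatially varying weights (e.g., interpolation weights at an unsampled location), the same formula interpolates local correlation matrices across the domain while staying inside the SPD cone.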

We introduce a simple diagnostic test for assessing the goodness of fit of linear regression, and in particular for detecting hidden confounding. We propose to evaluate the sensitivity of the regression coefficient with respect to changes in the marginal distribution of the covariates by comparing the so-called higher-order least squares estimates with the usual least squares estimates. Despite its simplicity, this strategy is extremely general and powerful. Specifically, we show that it allows us to distinguish between confounded and unconfounded predictor variables, as well as to determine ancestor variables in structural equation models.
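The precise higher-order least squares construction is given in the paper; as a hypothetical one-covariate illustration of the underlying moment-comparison idea, the sketch below contrasts the OLS ratio E[xy]/E[x²] with a higher-order ratio E[x³y]/E[x⁴]. Both estimate the same slope when the noise is independent of x, but they diverge under non-Gaussian hidden confounding:

```python
import numpy as np

def slope_estimates(x, y):
    """Return (OLS slope, higher-order slope) for one centred covariate.
    Under an unconfounded model y = b*x + noise (noise independent of x),
    both ratios estimate b; a non-Gaussian hidden confounder drives
    them apart."""
    ols = np.mean(x * y) / np.mean(x ** 2)
    ho = np.mean(x ** 3 * y) / np.mean(x ** 4)
    return ols, ho

rng = np.random.default_rng(0)
n = 500_000
h = rng.exponential(1.0, n) - 1.0   # centred, non-Gaussian hidden variable
u = rng.normal(0.0, 1.0, n)
eps = rng.normal(0.0, 1.0, n)

# Unconfounded: noise independent of x -> the two slopes agree (both near 2)
x0 = h + u
y0 = 2.0 * x0 + eps
# Confounded: h affects both x and y -> the slopes diverge
# (asymptotically OLS ~ 2.5 vs higher-order ~ 2.67 in this setup)
x1 = h + u
y1 = 2.0 * x1 + h + eps
```

Agreement of the two estimates is consistent with an unconfounded linear model; a clear gap flags hidden confounding.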

The growing availability of observational databases like electronic health records (EHR) provides unprecedented opportunities for secondary use of such data in biomedical research. However, these data can be error-prone and need to be validated before use. It is usually unrealistic to validate the whole database due to resource constraints. A cost-effective alternative is to implement a two-phase design that validates a subset of patient records that are enriched for information about the research question of interest. Herein, we consider odds ratio estimation under differential outcome and exposure misclassification. We propose optimal designs that minimize the variance of the maximum likelihood odds ratio estimator. We develop a novel adaptive grid search algorithm that can locate the optimal design in a computationally feasible and numerically accurate manner. Because the optimal design requires specification of unknown parameters at the outset and thus is unattainable without prior information, we introduce a multi-wave sampling strategy to approximate it in practice. We demonstrate the efficiency gains of the proposed designs over existing ones through extensive simulations and two large observational studies. We provide an R package and Shiny app to facilitate the use of the optimal designs.
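The paper's adaptive grid search targets the variance of the maximum likelihood odds ratio estimator over candidate two-phase designs; the algorithmic details are not in the abstract. As a rough sketch of the zooming idea only, the toy below minimises a made-up one-dimensional design objective by repeatedly refining the grid around the current best point:

```python
import numpy as np

def adaptive_grid_search(objective, lo, hi, n_points=11, n_rounds=4):
    """Minimise a 1-D objective by evaluating a coarse grid and repeatedly
    zooming into the bracket around the current best grid point."""
    for _ in range(n_rounds):
        grid = np.linspace(lo, hi, n_points)
        vals = [objective(g) for g in grid]
        i = int(np.argmin(vals))
        lo = grid[max(i - 1, 0)]            # shrink the search bracket
        hi = grid[min(i + 1, n_points - 1)]
    return grid[i], vals[i]

# Toy stand-in for the design variance: phase-two sampling fraction p with
# variance ~ 1/p + 1/(1-p), minimised at p = 0.5
best_p, best_var = adaptive_grid_search(lambda p: 1 / p + 1 / (1 - p), 0.01, 0.99)
```

Each round reuses all previous function evaluations' resolution only implicitly; in exchange, the number of objective evaluations grows linearly rather than exponentially in the target precision.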

A benchmark study of modern distributed databases is an important source of information for selecting the right technology for managing data in cloud-edge paradigms. To make the right decision, an extensive experimental study across a variety of hardware infrastructures is required. While most state-of-the-art studies have investigated only the response time and scalability of distributed databases, examining other metrics (e.g., energy, bandwidth, and storage consumption) is essential to fully understand their resource consumption. Moreover, existing studies have explored the response time and scalability of these databases only in private or public clouds; there is thus a paucity of work evaluating these databases in a hybrid cloud, i.e., the seamless integration of public and private clouds. To address these research gaps, in this paper we investigate the energy, bandwidth, and storage consumption of widely used distributed databases. For this purpose, we evaluated four open-source databases (Cassandra, MongoDB, Redis, and MySQL) on a hybrid cloud spanning a local OpenStack deployment and Microsoft Azure, and on a variety of edge computing nodes including a Raspberry Pi, a cluster of Raspberry Pis, and low- and high-power servers. Our extensive experimental results reveal several helpful insights for the deployment selection of modern distributed databases in edge-cloud environments.

Airborne electromagnetic surveys may consist of hundreds of thousands of soundings. In most cases, this makes 3D inversions unfeasible even when the subsurface is characterized by a high level of heterogeneity. Instead, approaches based on 1D forwards are routinely used because of their computational efficiency. However, it is relatively easy to fit 3D responses with 1D forward modelling and retrieve apparently well-resolved conductivity models, whose detailed features may simply be artifacts of fitting the modelling error introduced by the approximate forward. In practice, such artifacts are difficult to identify because the modelling error is correlated. The present study demonstrates how to assess the modelling error introduced by the 1D approximation and how to include this additional piece of information in a probabilistic inversion. Not surprisingly, this simple modification not only provides much better reconstructions of the targets but, perhaps more importantly, guarantees a correct estimation of the corresponding reliability.
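The abstract gives no formulas; one standard way to fold a correlated modelling error into a probabilistic inversion (a sketch under that assumption, not necessarily the authors' exact scheme) is to estimate the bias and covariance of the 1D-vs-3D forward discrepancy from a set of training models and add them to the Gaussian data likelihood:

```python
import numpy as np

def modelling_error_stats(d3d, d1d):
    """Bias and covariance of the 1D-approximation error, estimated from
    paired full (3D) and approximate (1D) forward responses on training
    models: rows are models, columns are data channels."""
    diff = d3d - d1d
    return diff.mean(axis=0), np.cov(diff, rowvar=False)

def log_likelihood(resid, c_obs, bias, c_app):
    """Gaussian log-likelihood with the modelling error folded in:
    residuals are shifted by the bias, covariances add."""
    c = c_obs + c_app
    r = resid - bias
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * c)
    return -0.5 * (logdet + r @ np.linalg.solve(c, r))
```

Because `c_app` is a full covariance matrix, the correlated structure of the approximation error enters the likelihood directly, rather than being absorbed into spuriously detailed model features.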

Understanding where and when human mobility is associated with disease infection is crucial for implementing location-based health care policy and interventions. Previous studies on COVID-19 have revealed the correlation between human mobility and COVID-19 cases. However, the spatiotemporal heterogeneity of this correlation is not yet fully understood. In this study, we aim to identify spatiotemporal heterogeneities in the relationship between human mobility flows and COVID-19 cases in U.S. counties. Using anonymized mobile device location data, we compute an aggregate measure of mobility that includes flows within and into each county. We then compare the trends in human mobility and COVID-19 cases of each county using dynamic time warping (DTW). The DTW results highlight the time periods and locations (counties) where mobility may have influenced disease transmission. They also show that the correlation between human mobility and infections varies substantially across geographic space and time in terms of relationship, strength, and similarity.
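DTW itself is standard; a minimal self-contained implementation of the dynamic-programming recurrence for two 1-D series (in the study's setting, county-level mobility and case counts would be normalised before comparison) looks like:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series:
    cumulative cost of the cheapest monotone alignment path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch series b
                                 cost[i, j - 1],      # stretch series a
                                 cost[i - 1, j - 1])  # match step
    return cost[n, m]
```

A smaller distance indicates more similar mobility and case trends; computed per county and per time window, such distances surface the spatiotemporal heterogeneity discussed above.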

Item Response Theory (IRT) models have received growing interest in the health sciences for analyzing latent constructs such as depression, anxiety, quality of life, or cognitive functioning from the information provided by each individual's item responses. However, in the presence of repeated item measures, IRT methods usually assume that the measurement occasions occur at exactly the same times for all patients. In this paper, we show how the IRT methodology can be combined with mixed model theory to provide a dynamic IRT model which exploits the item-level information of a measurement scale while simultaneously handling observation times that may vary across individuals. The latent construct is a latent process defined in continuous time that is linked to the observed item responses through a measurement model at each individual- and occasion-specific observation time; we focus here on a Graded Response Model for binary and ordinal items. Maximum likelihood estimation of the dynamic IRT model is available in the R package lcmm. The proposed approach is illustrated in a clinical example in end-stage renal disease, the PREDIALA study. The objective is to study trajectories of depressive symptomatology (measured by 7 items of the Hospital Anxiety and Depression scale) according to time on the renal transplant waiting list and the renal replacement therapy. We also illustrate how the method can be used to assess Differential Item Functioning and lack of measurement invariance over time.
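Estimation is done with the R package lcmm; for intuition only, here is a minimal Python sketch of the Graded Response Model used as the measurement model, where logistic cumulative probabilities are differenced into category probabilities (parameter names are illustrative, not lcmm's):

```python
import numpy as np

def grm_probs(theta, a, b):
    """Graded Response Model category probabilities for one ordinal item.
    theta: latent trait value; a: discrimination; b: increasing array of
    K-1 thresholds for a K-category item. P(Y >= k) = logistic(a*(theta - b_k)),
    and category probabilities are successive differences."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(Y >= k), k = 1..K-1
    upper = np.concatenate(([1.0], cum))           # flank with P(Y >= 0) = 1
    lower = np.concatenate((cum, [0.0]))           # ... and P(Y >= K) = 0
    return upper - lower
```

In the dynamic IRT model, `theta` would be the value of the continuous-time latent process at each individual's own observation time, which is what allows irregular visit schedules.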

What are the pathways for spreading disinformation on social media platforms? This article addresses this question by collecting, categorising, and situating an extensive body of research on how application programming interfaces (APIs) provided by social media platforms facilitate the spread of disinformation. We first examine the landscape of official social media APIs, then perform quantitative research on the open-source code repositories GitHub and GitLab to understand the usage patterns of these APIs. By inspecting the code repositories, we classify developers' usage of the APIs as official or unofficial, and develop a four-stage framework characterising pathways for spreading disinformation on social media platforms. We further highlight how the stages in the framework were activated during the 2016 US Presidential Elections, before providing policy recommendations on access to APIs, algorithmic content, and advertisements, and suggesting rapid responses to coordinated campaigns, the development of collaborative and participatory approaches, and government stewardship in the regulation of social media platforms.

Mathematical modelling heavily employs differential equations to describe the macroscopic or global behaviour of systems. The dynamics of complex systems is, in contrast, more efficiently described by local rules, and thus in an algorithmic, local, or microscopic manner. The theory of such an approach is still to be established. We recently presented the so-called allagmatic method, which includes a system metamodel providing a framework for describing, modelling, simulating, and interpreting complex systems. Its development and programming were guided by philosophy, especially Gilbert Simondon's philosophy of individuation, and by concepts from cybernetics. Here, a mathematical formalism is presented to more precisely describe and define the system metamodel of the allagmatic method, further generalising it and extending its reach to a more formal treatment, thereby enabling more theoretical studies. Using the formalism, an example of such a study is provided, with mathematical definitions and proofs for model creation and for the equivalence of cellular automata and artificial neural networks.
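As a concrete, simplified instance of the cellular automaton / neural network equivalence mentioned at the end (an illustration, not the paper's formalism), an elementary CA step can be reproduced exactly by a two-layer threshold network with one hidden unit per neighbourhood pattern:

```python
import numpy as np

RULE = 110
RULE_BITS = [(RULE >> k) & 1 for k in range(8)]  # output for neighbourhood 4l+2c+r

def ca_step_lookup(state):
    """One step of an elementary CA via the rule's truth table (periodic boundary)."""
    l, c, r = np.roll(state, 1), state, np.roll(state, -1)
    return np.array([RULE_BITS[4 * a + 2 * b + d] for a, b, d in zip(l, c, r)])

def ca_step_network(state):
    """The same step as a two-layer threshold network: each hidden unit fires
    only on its exact neighbourhood pattern; output weights are the rule bits."""
    l, c, r = np.roll(state, 1), state, np.roll(state, -1)
    x = np.stack([l, c, r], axis=1)                              # (n, 3) inputs
    patterns = np.array([[k >> 2 & 1, k >> 1 & 1, k & 1] for k in range(8)])
    w_hidden = 2 * patterns - 1          # +1 where the pattern has a 1, else -1
    thresh = patterns.sum(axis=1)        # activation reaches this only on exact match
    hidden = (x @ w_hidden.T >= thresh).astype(int)              # (n, 8) one-hot
    return hidden @ np.array(RULE_BITS)
```

The hidden layer one-hot-encodes the neighbourhood, so any of the 256 elementary rules is obtained just by changing the output weights.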

Like any large software system, a full-fledged DBMS offers an overwhelming number of configuration knobs. These range from static initialisation parameters like buffer sizes, degree of concurrency, or level of replication to complex runtime decisions like creating a secondary index on a particular column or reorganising the physical layout of the store. To simplify configuration, industry-grade DBMSs usually ship with various advisory tools that provide recommendations for given workloads and machines. However, in practice the actual configuration, tuning, and maintenance is usually still done by a human administrator relying on intuition and experience. Recent work on deep reinforcement learning has shown very promising results in solving problems that require such a sense of intuition; for instance, it has been applied very successfully to learning how to play complicated games with enormous search spaces. Motivated by these achievements, in this work we explore how deep reinforcement learning can be used to administer a DBMS. First, we describe how deep reinforcement learning can be used to automatically tune an arbitrary software system like a DBMS by defining a problem environment. Second, we showcase our concept of NoDBA using the concrete example of index selection, and evaluate how well it recommends indexes for given workloads.
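The abstract does not detail the problem environment; the toy below sketches what such an environment could look like for index selection, with a made-up cost model (an index turns a scan of cost 1.0 into a lookup of cost 0.1) standing in for a real workload estimator:

```python
import numpy as np

class IndexSelectionEnv:
    """Toy RL environment for index selection: the agent adds at most
    `budget` single-column indexes; reward is the estimated workload
    cost saved under a hypothetical per-column cost model."""

    def __init__(self, query_col_freq, budget=2):
        self.freq = np.asarray(query_col_freq, dtype=float)  # filter frequency per column
        self.budget = budget
        self.reset()

    def reset(self):
        self.indexes = np.zeros(len(self.freq), dtype=bool)
        self.steps = 0
        return self.indexes.copy()

    def workload_cost(self):
        # Hypothetical model: indexed column -> lookup (0.1), else scan (1.0)
        per_col = np.where(self.indexes, 0.1, 1.0)
        return float(self.freq @ per_col)

    def step(self, action):
        before = self.workload_cost()
        self.indexes[action] = True
        self.steps += 1
        reward = before - self.workload_cost()
        done = self.steps >= self.budget
        return self.indexes.copy(), reward, done
```

A learning agent would observe the index set, pick a column, and be rewarded for cost reduction; a real system would replace `workload_cost` with optimizer estimates or measured query latencies.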
