
A sentinel network, Obépine, has been designed to monitor SARS-CoV-2 viral load in wastewater arriving at several tens of wastewater treatment plants (WWTPs) in France as an indirect macro-epidemiological parameter. The sources of uncertainty in such a monitoring system are numerous, and the concentration measurements it provides are left-censored and contain numerous outliers, which biases the results of usual smoothing methods. Hence the need for an adapted pre-processing step in order to evaluate the real daily amount of virus arriving at each WWTP. We propose a method based on an auto-regressive model adapted to censored data with outliers. Inference and prediction are produced via a discretised smoother, which makes it a very flexible tool. This method is validated both on simulations and on real data from Obépine. The resulting smoothed signal shows a good correlation with other epidemiological indicators and currently contributes to the construction of the wastewater indicators provided each week by Obépine.
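
As an illustration of the general idea, the sketch below smooths a left-censored AR(1) signal with a discretised forward-backward smoother: censored observations contribute only the probability of lying below a limit of quantification (LOQ). All parameter values are toy assumptions, and the paper's outlier handling is omitted; this is not the Obépine implementation.

```python
# Discretised smoother for a left-censored AR(1) signal (toy example).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- simulate a toy censored AR(1) series ---
T, phi, sig_x, sig_y, loq = 200, 0.95, 0.3, 0.5, -0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sig_x * rng.standard_normal()
y = x + sig_y * rng.standard_normal(T)
censored = y < loq                      # left-censoring at the LOQ

# --- discretised forward-backward smoother ---
grid = np.linspace(x.min() - 1, x.max() + 1, 300)   # state grid
trans = norm.pdf(grid[None, :], loc=phi * grid[:, None], scale=sig_x)
trans /= trans.sum(axis=1, keepdims=True)           # row-stochastic transitions

def obs_lik(t):
    # censored points only tell us the signal is below the LOQ
    if censored[t]:
        return norm.cdf(loq, loc=grid, scale=sig_y)
    return norm.pdf(y[t], loc=grid, scale=sig_y)

fwd = np.zeros((T, grid.size))
fwd[0] = obs_lik(0) / obs_lik(0).sum()
for t in range(1, T):
    fwd[t] = obs_lik(t) * (fwd[t - 1] @ trans)
    fwd[t] /= fwd[t].sum()

bwd = np.ones_like(fwd)
for t in range(T - 2, -1, -1):
    bwd[t] = trans @ (obs_lik(t + 1) * bwd[t + 1])
    bwd[t] /= bwd[t].sum()

post = fwd * bwd
post /= post.sum(axis=1, keepdims=True)
x_smooth = post @ grid                  # posterior mean of the latent signal
print("RMSE of smoother vs. truth:", np.sqrt(np.mean((x_smooth - x) ** 2)))
```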

Related content

We introduce a novel procedure to perform Bayesian non-parametric inference with right-censored data, the beta-Stacy bootstrap. This approximates the posterior law of summaries of the survival distribution (e.g. the mean survival time). More precisely, our procedure approximates the joint posterior law of functionals of the beta-Stacy process, a non-parametric process prior that generalizes the Dirichlet process and that is widely used in survival analysis. The beta-Stacy bootstrap generalizes and unifies other common Bayesian bootstraps for complete or censored data based on non-parametric priors. It is defined by an exact sampling algorithm that does not require tuning of Markov chain Monte Carlo steps. We illustrate the beta-Stacy bootstrap by analyzing survival data from a real clinical trial.
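
For intuition, here is a minimal sketch of the classical Bayesian bootstrap (Rubin, 1981) for complete, uncensored data, one of the special cases the beta-Stacy bootstrap is said to generalize; it is not the beta-Stacy sampling algorithm itself, and the survival times are made up.

```python
# Classical Bayesian bootstrap for the mean of complete data (toy example).
import numpy as np

rng = np.random.default_rng(1)
surv_times = np.array([4.5, 7.2, 1.1, 9.8, 3.3, 6.0, 2.7, 8.1])  # toy, uncensored

B = 5000
# Dirichlet(1,...,1) weights play the role of a posterior over the CDF
weights = rng.dirichlet(np.ones(surv_times.size), size=B)
post_mean = weights @ surv_times     # posterior draws of the mean survival time

lo, hi = np.percentile(post_mean, [2.5, 97.5])
print(f"posterior mean: {post_mean.mean():.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
```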

Computational Fluid Dynamics (CFD) is used to assist in designing artificial valves and planning procedures, focusing on local flow features. However, assessing the impact on overall cardiovascular function or predicting longer-term outcomes may require more comprehensive whole heart CFD models. Fitting such models to patient data requires numerous computationally expensive simulations, and depends on specific clinical measurements to constrain model parameters, hampering clinical adoption. Surrogate models can help to accelerate the fitting process while accounting for the added uncertainty. We create a validated patient-specific four-chamber heart CFD model based on the Navier-Stokes-Brinkman (NSB) equations and test Gaussian Process Emulators (GPEs) as a surrogate model for performing a variance-based global sensitivity analysis (GSA). GSA identified preload as the dominant driver of flow on both the right and left sides of the heart. Left-right differences were seen in the vascular outflow resistances, with pulmonary artery resistance having a much larger impact on flow than aortic resistance. Our results suggest that GPEs can be used to identify parameters in personalized whole heart CFD models, and highlight the importance of accurate preload measurements.
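
As a sketch of the surrogate-plus-GSA workflow (not the paper's NSB model), the snippet below fits a Gaussian Process Emulator to a cheap stand-in function and estimates first-order Sobol indices on the emulator via a pick-freeze Monte Carlo estimator; the cfd_model function and parameter names are hypothetical placeholders.

```python
# GP emulator + first-order Sobol indices via pick-freeze (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

def cfd_model(theta):
    # stand-in for an expensive whole-heart simulation: "flow" driven
    # mostly by preload, with weaker effects of the outflow resistances
    preload, r_pulm, r_aorta = theta[:, 0], theta[:, 1], theta[:, 2]
    return 3.0 * preload + 0.8 * r_pulm + 0.2 * r_aorta + 0.1 * preload * r_pulm

# fit the emulator on a small design (the expensive step in practice)
X_train = rng.uniform(0, 1, size=(60, 3))
gpe = GaussianProcessRegressor(ConstantKernel() * RBF([0.5] * 3), normalize_y=True)
gpe.fit(X_train, cfd_model(X_train))

# pick-freeze Monte Carlo estimate of first-order Sobol indices on the GPE
N = 4000
A, B = rng.uniform(0, 1, (N, 3)), rng.uniform(0, 1, (N, 3))
yA, yB = gpe.predict(A), gpe.predict(B)
var_y = np.var(np.concatenate([yA, yB]))
for i, name in enumerate(["preload", "R_pulmonary", "R_aortic"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # vary only parameter i
    S1 = np.mean(yB * (gpe.predict(ABi) - yA)) / var_y
    print(f"first-order Sobol index for {name}: {S1:.2f}")
```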

Diagnosing changes in structural behavior using monitoring data is an important objective of structural health monitoring (SHM). Changes in structural behavior usually manifest as feature changes in the monitored structural responses; thus, developing effective methods for automatically detecting such changes is of considerable significance. Existing methods for change detection in SHM are mainly designed for scalar or vector data and are thus incapable of detecting changes in features represented by more complex data, e.g., probability density functions (PDFs). Detecting abrupt changes in the distributions (represented by PDFs) associated with feature variables extracted from SHM data is usually of crucial interest for structural condition assessment; however, the SHM community still lacks effective diagnostic tools for detecting such changes. In this study, a change-point detection method is developed in a functional data-analytic framework for PDF-valued sequences, and it is leveraged to diagnose the distributional breaks encountered in structural condition assessment. A major challenge in modeling or analyzing PDF-valued data is that PDFs are special functional data subject to nonlinear constraints. To tackle this issue, the PDFs are embedded into the Bayes space, and the associated change-point model is constructed using the linear structure of the Bayes space; a hypothesis testing procedure is then presented for distributional change-point detection based on the isomorphic mapping between the Bayes space and a functional linear space. Comprehensive simulation studies are conducted to validate the effectiveness of the proposed method and demonstrate its superiority over a competing method. Finally, an application to real SHM data illustrates its practical utility in structural condition assessment.
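
A minimal sketch of the key idea, under toy assumptions: discretised PDFs are mapped into L2 through the centred log-ratio (clr) transform, which realizes the isomorphism out of the Bayes space, and a CUSUM-type statistic then locates a distributional change point. This illustrates the embedding, not the authors' full testing procedure.

```python
# clr embedding of PDFs + CUSUM change-point localisation (toy example).
import numpy as np
from scipy.stats import norm

grid = np.linspace(-6, 6, 200)
rng = np.random.default_rng(3)

# sequence of PDFs: the mean shifts from 0 to 1 after t = 60 (plus noise)
pdfs = [norm.pdf(grid, loc=(0.0 if t < 60 else 1.0)
                 + 0.05 * rng.standard_normal(), scale=1.0)
        for t in range(100)]

def clr(p):
    logp = np.log(np.maximum(p, 1e-300))
    return logp - logp.mean()            # centred log-ratio, lives in L2

Z = np.array([clr(p) for p in pdfs])     # functional observations in L2
T = len(Z)

# CUSUM statistic: L2 norm of the cumulative deviation from the mean curve
S = np.cumsum(Z - Z.mean(axis=0), axis=0) / np.sqrt(T)
cusum = np.linalg.norm(S, axis=1)
print("estimated change point:", int(np.argmax(cusum)))   # close to 60
```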

Several queries and scores have recently been proposed to explain individual predictions of ML models. Given the need for flexible, reliable, and easy-to-apply interpretability methods for ML models, we foresee the need for declarative languages to naturally specify different explainability queries. We do this in a principled way by rooting such a language in a logic, called FOIL, that allows for expressing many simple but important explainability queries and might serve as a core for more expressive interpretability languages. We study the computational complexity of FOIL queries over two classes of ML models often deemed to be easily interpretable: decision trees and OBDDs. Since the number of possible inputs to an ML model is exponential in its dimension, the tractability of the FOIL evaluation problem is delicate, but it can be achieved by restricting either the structure of the models or the fragment of FOIL being evaluated. We also present a prototype implementation of FOIL wrapped in a high-level declarative language and perform experiments showing that such a language can be used in practice.
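
To make the flavour of such queries concrete, the sketch below checks one explainability query over a toy decision tree with Boolean features: whether a partial assignment is sufficient, i.e., every completion yields class 1. The tree and query are hypothetical examples, not FOIL syntax, and the brute-force check is exponential in the number of free features, which is precisely why the tractability analysis matters.

```python
# Brute-force "sufficiency" query over a toy decision tree.
from itertools import product

# tree as nested tuples: (feature, subtree_if_0, subtree_if_1) or a leaf class
tree = ("x1", ("x2", 0, 1), ("x3", 1, 1))

def classify(node, inst):
    while isinstance(node, tuple):
        feat, lo, hi = node
        node = hi if inst[feat] else lo
    return node

def sufficient(partial, features):
    # does every completion of the partial assignment get class 1?
    free = [f for f in features if f not in partial]
    for bits in product([0, 1], repeat=len(free)):
        inst = {**partial, **dict(zip(free, bits))}
        if classify(tree, inst) != 1:
            return False
    return True

features = ["x1", "x2", "x3"]
print(sufficient({"x1": 1}, features))   # True: the x3 subtree always outputs 1
print(sufficient({"x1": 0}, features))   # False: the outcome still depends on x2
```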

The synthpop package for R (https://www.synthpop.org.uk) provides tools to allow data custodians to create synthetic versions of confidential microdata that can be distributed with fewer restrictions than the original. The synthesis can be customized to ensure that relationships evident in the real data are reproduced in the synthetic data. A number of measures have been proposed to assess this aspect, commonly known as the utility of the synthetic data. We show that all these measures, including those calculated from tabulations, can be derived from a propensity score model. The measures are reviewed and compared, and the relations between them illustrated. All the measures compared are highly correlated and some are shown to be identical. The method used to define the propensity score model is more important than the choice of measure. These measures and methods are incorporated into utility modules in the synthpop package that include methods to visualize the results and thus provide immediate feedback to allow the person creating the synthetic data to improve its quality. The utility functions were originally designed to be used for synthetic data objects of class synds, created by the synthpop functions syn() or syn.strata(), but they can now be used to compare one or more synthesised data sets with the original records, where the records are R data frames or lists of data frames.
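
A minimal sketch of the propensity-score idea behind utility measures such as pMSE (synthpop itself is an R package; this Python version only illustrates the computation): stack real and synthetic rows, fit a model to tell them apart, and score how far the fitted propensities are from chance. The toy data are assumptions.

```python
# Propensity-score utility (pMSE) for synthetic data (toy example).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
real = rng.normal(0, 1, size=(500, 3))
synth = rng.normal(0.1, 1.1, size=(500, 3))      # imperfect synthetic copy

X = np.vstack([real, synth])
t = np.r_[np.zeros(len(real)), np.ones(len(synth))]   # 1 = synthetic row

p = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
c = len(synth) / len(X)                          # expected propensity, 0.5 here
pMSE = np.mean((p - c) ** 2)
print(f"pMSE = {pMSE:.4f}  (0 would mean indistinguishable)")
```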

The recent proliferation of medical data, such as genetics and electronic health records (EHR), offers new opportunities to find novel predictors of health outcomes. Presented with a large set of candidate features, interest often lies in selecting the ones most likely to be predictive of an outcome for further study, such that the goal is to control the false discovery rate (FDR) at a specified level. Knockoff filtering is an innovative strategy for FDR-controlled feature selection. However, existing knockoff methods make strong distributional assumptions that hinder their applicability to real-world data. We propose Bayesian models for generating high-quality knockoff copies that utilize available knowledge about the data structure, thus improving the resolution of prognostic features. Applications to two feature sets are considered: those with categorical and/or continuous variables possibly having a population substructure, such as in EHR; and those with microbiome features having a compositional constraint and phylogenetic relatedness. Through simulations and real data applications, these methods are shown to identify important features with good FDR control and power.
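
For context, the sketch below shows the generic knockoff+ selection step: given importance differences W_j (original feature minus its knockoff), it computes the data-dependent threshold that controls the FDR at level q. Constructing the knockoff copies themselves, which is what the proposed Bayesian models do, is the hard part and is not shown; the W values are simulated.

```python
# Generic knockoff+ thresholding step (toy W statistics).
import numpy as np

def knockoff_plus_threshold(W, q=0.1):
    # smallest t with (1 + #{W_j <= -t}) / #{W_j >= t} <= q
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf

rng = np.random.default_rng(5)
W = np.r_[rng.normal(3, 1, 10),      # 10 truly important features
          rng.normal(0, 1, 90)]      # 90 nulls, symmetric around zero

t = knockoff_plus_threshold(W, q=0.1)
selected = np.flatnonzero(W >= t)
print(f"threshold {t:.2f}, selected {len(selected)} features")
```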

Background: The Bland-Altman plot method is a widely cited graphical approach to assess the equivalence of quantitative measurement techniques. It has been widely applied, but it is often misinterpreted because it lacks inferential statistical support. We aim to develop and distribute a statistical method in R that adds robust and suitable inferential statistics of equivalence. Methods: Three nested tests based on structural regressions are proposed to assess the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements of data pairs obtained from the same subject), in order to reach statistical support for the equivalence of measurement techniques. Graphical outputs illustrating these three tests were added, following Bland and Altman's principles of easy communication. Results: Statistical p-values and a robust bootstrap approach, with corresponding graphs, provide objective measures of equivalence. Five pairs of data sets were analyzed in order to critically reassess previously published articles that applied Bland and Altman's principles, showing the suitability of the present statistical approach. One case demonstrated strict equivalence, three cases showed partial equivalence, and one case showed poor equivalence. A package containing open code and data is available with installation instructions on GitHub for free distribution. Conclusions: Statistical p-values and a robust bootstrap approach assess the equivalence of accuracy, precision, and agreement of measurement techniques. Decomposition into three tests helps locate any disagreement, as a means to improve a new technique.
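
For reference, here is a minimal sketch of the classical Bland-Altman plot with 95% limits of agreement, the starting point that the proposed structural-regression tests build on (the three nested tests themselves are not reproduced here); the data are simulated.

```python
# Classical Bland-Altman plot with 95% limits of agreement (toy data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
method_a = rng.normal(100, 15, 80)
method_b = method_a + rng.normal(2, 5, 80)       # small bias plus noise

mean_ab = (method_a + method_b) / 2
diff_ab = method_a - method_b
bias, sd = diff_ab.mean(), diff_ab.std(ddof=1)

plt.scatter(mean_ab, diff_ab, s=12)
plt.axhline(bias, color="k", label=f"bias = {bias:.2f}")
for loa in (bias - 1.96 * sd, bias + 1.96 * sd):
    plt.axhline(loa, color="k", linestyle="--")  # limits of agreement
plt.xlabel("mean of the two methods")
plt.ylabel("difference (A - B)")
plt.legend()
plt.show()
```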

Autoregressive (AR) models are used to represent time-varying random processes in which the output depends linearly on previous terms and a stochastic term (the innovation). In the classical version, AR models are based on the normal distribution. However, this distribution does not allow for describing data with outliers and asymmetric behavior. In this paper, we study AR models with normal inverse Gaussian (NIG) innovations. The NIG distribution belongs to the class of semi-heavy-tailed distributions with a wide range of shapes and thus allows for describing real-life data with possible jumps. The expectation-maximization (EM) algorithm is used to estimate the parameters of the considered model. The efficacy of the estimation procedure is shown on simulated data. A comparative study is presented in which classical estimation algorithms, namely the Yule-Walker and conditional least squares methods, are also incorporated alongside the EM method for model parameter estimation. The applications of the introduced model are demonstrated on real-life financial data.
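
A small sketch under toy parameter values: simulate an AR(1) series with NIG innovations using scipy's norminvgauss and recover phi with the Yule-Walker estimator mentioned in the comparison; the paper's EM algorithm is not reproduced here.

```python
# AR(1) with normal inverse Gaussian innovations + Yule-Walker (toy example).
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(7)
T, phi = 5000, 0.7
# a > |b| required; b != 0 gives skewed, semi-heavy-tailed innovations
eps = norminvgauss.rvs(a=1.0, b=0.6, size=T, random_state=rng)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

# Yule-Walker for AR(1): phi_hat is the lag-1 autocorrelation
xc = x - x.mean()
phi_hat = (xc[1:] @ xc[:-1]) / (xc @ xc)
print(f"true phi = {phi}, Yule-Walker estimate = {phi_hat:.3f}")
```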

In this paper, we propose a time-series stochastic model based on a scale mixture distribution with Markov transitions to detect epileptic seizures in electroencephalography (EEG). In the proposed model, an EEG signal at each time point is assumed to be a random variable following a Gaussian distribution. The covariance matrix of the Gaussian distribution is weighted with a latent scale parameter, which is also a random variable, resulting in stochastic fluctuations of the covariance. By introducing a latent state variable governed by a Markov chain behind this stochastic relationship, time-series changes in the distribution of the latent scale parameters can be represented according to the state of epileptic seizures. In an experiment, we evaluated the performance of the proposed model for seizure detection using EEGs with multiple frequency bands decomposed from a clinical dataset. The results demonstrated that the proposed model can detect seizures with high sensitivity and outperforms several baselines.
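
The sketch below illustrates the core mechanism with made-up parameters: multichannel Gaussian samples whose covariance is inflated by a latent scale tied to a two-state Markov chain (normal vs. seizure-like), decoded with a plain Viterbi pass rather than the paper's full scale-mixture inference.

```python
# Two-state Markov-switching scaled-Gaussian model + Viterbi decoding (toy).
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(8)
T, D = 400, 4
Sigma = np.eye(D)
scales = np.array([1.0, 6.0])               # state 1 inflates the covariance
P = np.array([[0.98, 0.02], [0.05, 0.95]])  # sticky transitions

states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
X = np.array([rng.multivariate_normal(np.zeros(D), scales[s] * Sigma)
              for s in states])

# Viterbi decoding in log space
logB = np.stack([mvn.logpdf(X, np.zeros(D), sc * Sigma) for sc in scales], axis=1)
logP = np.log(P)
delta = logB[0] + np.log([0.5, 0.5])
back = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    cand = delta[:, None] + logP            # cand[i, j] = best path ending i -> j
    back[t] = cand.argmax(axis=0)
    delta = cand.max(axis=0) + logB[t]
path = np.zeros(T, dtype=int)
path[-1] = delta.argmax()
for t in range(T - 2, -1, -1):
    path[t] = back[t + 1][path[t + 1]]
print("decoding accuracy:", (path == states).mean())
```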

Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features, which occur regularly in real-world input domains and within the hidden layers of GNNs, and we demonstrate the requirement for multiple aggregation functions in this setting. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the ability of different models to capture and exploit the graph structure via a benchmark containing multiple tasks taken from classical graph theory, which demonstrates the capacity of our model.
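
A minimal numpy sketch of PNA-style aggregation for a single node: several aggregators over neighbour features, each combined with logarithmic degree scalers (identity, amplification, attenuation). A full PNA layer, e.g. in a GNN library, would also learn towers and MLPs on top; the feature values here are random placeholders.

```python
# PNA-style multi-aggregator, multi-scaler neighbourhood aggregation (toy).
import numpy as np

def pna_aggregate(neigh_feats, avg_deg_log):
    d = neigh_feats.shape[0]                       # node degree
    aggs = [neigh_feats.mean(0), neigh_feats.std(0),
            neigh_feats.min(0), neigh_feats.max(0)]
    scalers = [1.0,                                # identity
               np.log(d + 1) / avg_deg_log,        # amplification
               avg_deg_log / np.log(d + 1)]        # attenuation
    return np.concatenate([s * a for a in aggs for s in scalers])

rng = np.random.default_rng(9)
neigh = rng.normal(size=(5, 8))                    # 5 neighbours, 8 features
out = pna_aggregate(neigh, avg_deg_log=np.log(4 + 1))
print(out.shape)                                   # (96,) = 8 feats x 4 aggs x 3 scalers
```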
