
In health cohort studies, repeated measures of markers are often used to describe the natural history of a disease. Joint models make it possible to study their evolution while taking into account possible informative dropout, usually due to clinical events. However, joint modeling developments have mostly focused on continuous Gaussian markers while, in an increasing number of studies, the actual marker of interest is not directly measurable; it constitutes a latent quantity evaluated by a set of observed indicators from questionnaires or measurement scales. Classical examples include anxiety, fatigue, and cognition. In this work, we explain how joint models can be extended to the framework of a latent quantity measured over time by markers of different natures (e.g., continuous, binary, ordinal). The longitudinal submodel describes the evolution over time of the quantity of interest, defined as a latent process in a structural mixed model, and links the latent process to the repeated observations of each marker through appropriate measurement models. Simultaneously, the risk of an event due to multiple causes is modelled via a proportional cause-specific hazard model that includes a function of the mixed-model elements as a linear predictor to account for the association between the latent process and the risk of event. Estimation, carried out in the maximum likelihood framework and implemented in the R package JLPM, has been validated by simulations. The methodology is illustrated in the French cohort on Multiple-System Atrophy (MSA), a rare and fatal neurodegenerative disease, with the study of dysphagia progression over time truncated by the occurrence of death.
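As a schematic of the kind of model structure described above (notation chosen here for illustration, not taken from the paper or the JLPM package):

```latex
% Schematic joint model structure (illustrative notation):
% structural linear mixed model for the latent process of subject i
\Lambda_i(t) = X_i(t)^\top \beta + Z_i(t)^\top b_i, \qquad b_i \sim \mathcal{N}(0, B)
% measurement model linking marker k (continuous, binary or ordinal) to the latent process
Y_{ijk} \sim g_k\!\left( \Lambda_i(t_{ijk});\, \eta_k \right)
% cause-specific proportional hazard for cause p, sharing a function of the mixed-model elements
\lambda_{ip}(t) = \lambda_{0p}(t)\, \exp\!\left( W_i^\top \gamma_p + \alpha_p\, \varphi\!\big( \Lambda_i(t), b_i \big) \right)
```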

Related Content

The journal 《計算機信息》 publishes high-quality papers that expand the scope of operations research and computing, seeking original research papers on theory, methods, experiments, systems and applications, novel survey and tutorial papers, and papers describing new and useful software tools.
November 30, 2021

In many applications, geodesic hierarchical models are adequate for the study of temporal observations. We apply such a model, derived for manifold-valued data, to Kendall's shape space. In particular, instead of the Sasaki metric, we adapt a functional-based metric, which increases the computational efficiency and does not require the implementation of the curvature tensor. We propose the corresponding variational time discretization of geodesics and employ the approach for longitudinal analysis of 2D rat skull shapes as well as 3D shapes derived from an imaging study on osteoarthritis. In particular, we perform hypothesis tests and estimate the mean trends.
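For orientation, a variational time discretization of geodesics typically replaces the continuous path energy by a sum over consecutive points; the display below is a generic sketch with notation chosen here, not the paper's specific functional-based construction:

```latex
% Generic variational time discretization of a geodesic (illustrative notation):
% a discrete path (x_0, \dots, x_K) with fixed endpoints approximates a geodesic by minimizing
E[x_0, \dots, x_K] \;=\; K \sum_{k=1}^{K} d^2\!\left( x_{k-1}, x_k \right),
% whose minimizers converge to continuous geodesics as K \to \infty.
```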

Data in non-Euclidean spaces are commonly encountered in many fields of Science and Engineering. For instance, in Robotics, attitude sensors capture orientation which is an element of a Lie group. In the recent past, several researchers have reported methods that take into account the geometry of Lie Groups in designing parameter estimation algorithms in nonlinear spaces. Maximum likelihood estimators (MLE) are quite commonly used for such tasks and it is well known in the field of statistics that Stein's shrinkage estimators dominate the MLE in a mean-squared sense assuming the observations are from a normal population. In this paper, we present a novel shrinkage estimator for data residing in Lie groups, specifically, abelian or compact Lie groups. The key theoretical results presented in this paper are: (i) Stein's Lemma and its proof for Lie groups and, (ii) proof of dominance of the proposed shrinkage estimator over MLE for abelian and compact Lie groups. We present examples of simulation studies of the dominance of the proposed shrinkage estimator and an application of shrinkage estimation to multiple-robot localization.
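For context, the Euclidean result that the paper extends to Lie groups is the classical James–Stein estimator; the display below states this standard Euclidean form (not the proposed Lie-group estimator), with notation chosen here:

```latex
% Classical Euclidean James--Stein shrinkage estimator (the analogue generalized in the paper):
% for a single observation X \sim \mathcal{N}_p(\theta, \sigma^2 I_p) with p \ge 3,
\hat{\theta}_{\mathrm{JS}} = \left( 1 - \frac{(p-2)\,\sigma^2}{\lVert X \rVert^2} \right) X,
% which dominates the MLE \hat{\theta}_{\mathrm{MLE}} = X in mean-squared error.
```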

We present a novel class of projected methods to perform statistical analysis on a data set of probability distributions on the real line, with the 2-Wasserstein metric. We focus in particular on Principal Component Analysis (PCA) and regression. To define these models, we exploit a representation of the Wasserstein space closely related to its weak Riemannian structure, by mapping the data to a suitable linear space and using a metric projection operator to constrain the results to the Wasserstein space. By carefully choosing the tangent point, we are able to derive fast empirical methods, exploiting a constrained B-spline approximation. As a byproduct of our approach, we are also able to derive faster routines for previous work on PCA for distributions. By means of simulation studies, we compare our approaches to previously proposed methods, showing that our projected PCA has similar performance for a fraction of the computational cost and that the projected regression is extremely flexible even under misspecification. Several theoretical properties of the models are investigated and asymptotic consistency is proven. Two real-world applications to Covid-19 mortality in the US and wind speed forecasting are discussed.
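As a rough illustration of this kind of construction (a minimal sketch, not the paper's implementation): one-dimensional distributions can be represented by their quantile functions, analysed with ordinary PCA in that linear space, and mapped back by enforcing monotonicity; all function names and modelling choices below are made here for illustration.

```python
# Illustrative sketch (not the paper's method): PCA on 1-D distributions via their
# quantile functions, with a monotone (isotonic) projection back into the space of
# valid quantile functions. Assumes NumPy and scikit-learn are available.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.isotonic import IsotonicRegression

def quantile_embedding(samples, grid):
    """Map each sample set to its empirical quantile function on a common probability grid."""
    return np.stack([np.quantile(s, grid) for s in samples])

def project_to_quantile_space(curves):
    """Projection-like step: make each reconstructed curve non-decreasing."""
    iso = IsotonicRegression(increasing=True)
    x = np.arange(curves.shape[1])
    return np.stack([iso.fit_transform(x, c) for c in curves])

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 0.99, 50)
# toy data: 30 Gaussian samples with varying means and spreads
samples = [rng.normal(loc=m, scale=s, size=500)
           for m, s in zip(rng.normal(0, 1, 30), rng.uniform(0.5, 2.0, 30))]

Q = quantile_embedding(samples, grid)          # distributions as points in a linear space
pca = PCA(n_components=2).fit(Q)
scores = pca.transform(Q)                      # principal scores of each distribution
recon = project_to_quantile_space(pca.inverse_transform(scores))  # valid quantile functions
```

On the real line, the quantile-function representation is isometric to a convex subset of L2, which is why a monotone projection is a natural way to realize the constraint step in this toy setting.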

Precision medicine is a clinical approach for disease prevention, detection and treatment, which considers each individual's genetic background, environment and lifestyle. The development of this tailored avenue has been driven by the increased availability of omics methods, large cohorts of temporal samples, and their integration with clinical data. Despite this immense progress, existing computational methods for data analysis fail to provide appropriate solutions for such complex, high-dimensional and longitudinal data. In this work, we develop a new method, termed TCAM, a dimensionality reduction technique for multi-way data that overcomes major limitations of trajectory analysis of longitudinal omics data. Using real-world data, we show that TCAM outperforms traditional methods, as well as state-of-the-art tensor-based approaches, for longitudinal microbiome data analysis. Moreover, we demonstrate the versatility of TCAM by applying it to several different omics datasets, and its applicability as a drop-in replacement in standard machine-learning tasks.
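For concreteness, the sketch below shows only a naive "unfold and reduce" baseline of the kind tensor methods such as TCAM are compared against, not TCAM itself; the data layout (subjects x timepoints x features) and all names are assumptions made here for illustration.

```python
# Illustrative baseline only (not TCAM): unfold a 3-way longitudinal data array and apply
# ordinary PCA, then regroup the scores into per-subject trajectories over time.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_subjects, n_time, n_features = 40, 5, 200
X = rng.normal(size=(n_subjects, n_time, n_features))   # toy longitudinal omics-like tensor

# Unfold so each row is one subject-timepoint observation, then reduce dimensionality.
X_unfolded = X.reshape(n_subjects * n_time, n_features)
scores = PCA(n_components=2).fit_transform(X_unfolded)

# Regroup the low-dimensional scores into trajectories (one curve per subject).
trajectories = scores.reshape(n_subjects, n_time, 2)
```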

It is necessary to use more general models than the classical Fourier heat conduction law to describe small-scale thermal conductivity processes. The effects of heat-flux memory and heat-capacity (internal energy) memory in solids are considered in first-order integrodifferential evolutionary equations with difference-type kernels. The main difficulties in applying such nonlocal-in-time mathematical models are associated with the need to work with the solution over the entire history of the process. The paper develops an approach to transforming a nonlocal problem into a computationally simpler local problem for a system of first-order evolution equations. Such a transition is applicable to heat conduction problems with memory if the relaxation functions of the heat flux and heat capacity are represented as a sum of exponentials. The correctness of the auxiliary linear problem is ensured by the obtained estimates of the stability of the solution with respect to the initial data and the right-hand side in the corresponding Hilbert spaces. The study's main result is a proof of the unconditional stability of the proposed two-level scheme with weights for the evolutionary system of equations modeling heat conduction in solid media with memory. In this case, finding an approximate solution on a new time level is no more complicated than for the classical heat equation. The numerical solution of a model heat conduction problem with memory effects, one-dimensional in space, is presented.
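The key reduction mentioned above, from a nonlocal memory term to a local system, can be sketched as follows when the kernel is a sum of exponentials (notation chosen here for illustration, not taken from the paper):

```latex
% Exponential-sum reduction of a memory term to local equations (illustrative notation):
% if the relaxation kernel is a sum of exponentials,
k(t) = \sum_{j=1}^{m} a_j e^{-b_j t}, \qquad a_j, b_j > 0,
% then the history integral splits into auxiliary variables
\int_0^t k(t-s)\, w(s)\, ds = \sum_{j=1}^{m} u_j(t), \qquad u_j(t) = \int_0^t a_j e^{-b_j (t-s)}\, w(s)\, ds,
% each of which satisfies a local first-order evolution equation
\frac{d u_j}{d t} = -\, b_j\, u_j + a_j\, w, \qquad u_j(0) = 0 .
```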

This work deals with the analysis of longitudinal ordinal responses. The novelty of the proposed approach lies in simultaneously modeling the temporal dynamics of a latent trait of interest, measured via the observed ordinal responses, and the answering behaviors influenced by response styles, through hidden Markov models (HMMs) with two latent components. This approach enables the modeling of (i) the substantive latent trait, controlling for response styles; and (ii) the change over time of the latent trait and answering behavior, also allowing for dependence on individual characteristics. For the proposed HMMs, estimation procedures, methods for standard error calculation, measures of goodness of fit and classification, and full-conditional residuals are discussed. The proposed model is fitted to ordinal longitudinal data from the Survey on Household Income and Wealth (Bank of Italy) to give insights into the evolution of Italian households' financial capability.
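Schematically, the two-component hidden Markov structure described above can be written with a bivariate hidden state combining the substantive trait and the response style (generic HMM form with notation chosen here, not the paper's specification):

```latex
% Generic two-component HMM likelihood (illustrative notation): hidden state (U_t, V_t) combines
% the substantive latent trait U_t and the response-style component V_t; y_t are ordinal responses.
p(y_1, \dots, y_T) = \sum_{u_{1:T},\, v_{1:T}} \pi_{(u_1, v_1)}
  \prod_{t=2}^{T} \Pi_{(u_{t-1}, v_{t-1}) \to (u_t, v_t)}
  \prod_{t=1}^{T} p\!\left( y_t \mid U_t = u_t, V_t = v_t \right)
```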

In this work, we look at a two-sample problem within the framework of Gaussian graphical models. When the global hypothesis of equality of two distributions is rejected, the interest is usually in localizing the source of difference. Motivated by the idea that diseases can be seen as system perturbations, and by the need to distinguish between the origin of a perturbation and the components affected by it, we introduce the concept of a minimal seed set and its graphical counterpart, the graphical seed set. They intuitively consist of the variables driving the difference between the two conditions. We propose a simple testing procedure, linear in the number of nodes, to estimate the graphical seed set from data, and study its finite-sample behavior with a simulation study. We illustrate our approach in the context of gene set analysis by means of a publicly available gene expression dataset.

Statistical models are central to machine learning with broad applicability across a range of downstream tasks. The models are typically controlled by free parameters that are estimated from data by maximum-likelihood estimation. However, when faced with real-world datasets many of the models run into a critical issue: they are formulated in terms of fully-observed data, whereas in practice the datasets are plagued with missing data. The theory of statistical model estimation from incomplete data is conceptually similar to the estimation of latent-variable models, where powerful tools such as variational inference (VI) exist. However, in contrast to standard latent-variable models, parameter estimation with incomplete data often requires estimating exponentially-many conditional distributions of the missing variables, hence making standard VI methods intractable. We address this gap by introducing variational Gibbs inference (VGI), a new general-purpose method to estimate the parameters of statistical models from incomplete data. We validate VGI on a set of synthetic and real-world estimation tasks, estimating important machine learning models, VAEs and normalising flows, from incomplete data. The proposed method, whilst general-purpose, achieves competitive or better performance than existing model-specific estimation methods.
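To make the estimation problem concrete, the display below gives the standard incomplete-data formulation and variational bound (notation chosen here, not taken from the paper); needing a separate conditional of the missing variables for each missingness pattern is what makes naive VI intractable.

```latex
% Standard incomplete-data formulation (illustrative notation): split each data point into
% observed x_o and missing x_m parts; maximum likelihood targets the marginal likelihood
\log p_\theta(x_o) = \log \int p_\theta(x_o, x_m)\, dx_m ,
% and a variational lower bound introduces an approximate conditional of the missing part,
\log p_\theta(x_o) \;\ge\; \mathbb{E}_{q(x_m \mid x_o)}\!\left[ \log p_\theta(x_o, x_m) - \log q(x_m \mid x_o) \right],
% in principle one such conditional per missingness pattern.
```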

Modern longitudinal studies collect feature data at many timepoints, often of the same order as the sample size. Such studies are typically affected by dropout and positivity violations. We tackle these problems by generalizing effects of recent incremental interventions (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental intervention effects when dropout is conditionally ignorable (without requiring treatment positivity), and derive the nonparametric efficiency bound for estimating such effects. Then we present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the variance ratio of incremental intervention effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints can grow with sample size, and show that incremental intervention effects yield near-exponential gains in statistical precision in this setup. Finally, we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.
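For reference, the propensity-score shift referred to above takes the following standard form from earlier work on incremental interventions (notation chosen here for illustration): the intervention multiplies the odds of treatment by a user-specified factor rather than fixing the treatment value.

```latex
% Incremental propensity-score intervention (standard form; illustrative notation):
% with \pi_t(h_t) = P(A_t = 1 \mid H_t = h_t) the time-t propensity score, the shifted
% treatment distribution multiplies the odds of treatment by a factor \delta > 0:
q_t(1 \mid h_t) \;=\; \frac{\delta\, \pi_t(h_t)}{\delta\, \pi_t(h_t) + 1 - \pi_t(h_t)}, \qquad
q_t(0 \mid h_t) = 1 - q_t(1 \mid h_t).
```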

With the increasing spread of neural networks, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates or suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for the calibration of neural networks, and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainties in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
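As a small illustration of one family of approaches mentioned above, the predictive uncertainty of a classifier ensemble is commonly decomposed into a data (aleatoric) part and a model (epistemic) part via entropies; the sketch below is a generic NumPy computation with all names chosen here, not code from any specific surveyed method.

```python
# Illustrative sketch (generic, not from any specific surveyed method): entropy-based
# decomposition of an ensemble classifier's predictive uncertainty into data (aleatoric)
# and model (epistemic) parts. `probs` holds per-member class probabilities.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(probs):
    """probs: array of shape (n_members, n_samples, n_classes)."""
    mean_probs = probs.mean(axis=0)              # ensemble predictive distribution
    total = entropy(mean_probs)                  # total predictive uncertainty
    aleatoric = entropy(probs).mean(axis=0)      # expected per-member entropy (data uncertainty)
    epistemic = total - aleatoric                # mutual information (model uncertainty)
    return total, aleatoric, epistemic

# toy example: 5 ensemble members, 3 inputs, 4 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
total, aleatoric, epistemic = uncertainty_decomposition(probs)
```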
