
In the study of life tables the random variable of interest is usually assumed discrete, since mortality rates are studied for integer ages. In dynamic life tables a time domain is included to account for the evolution of the hazard rates over time. In this article we follow a survival analysis approach and use a nonparametric description of the hazard rates. We construct a discrete-time stochastic process that reflects dependence across age as well as over time. This process is used as a Bayesian nonparametric prior distribution on the hazard rates for the study of evolutionary life tables. Prior properties of the process are studied and posterior distributions are derived. We present a simulation study, with the inclusion of right-censored observations, as well as a real data analysis, to show the performance of our model.
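A minimal sketch of the kind of dependent discrete-time hazard prior the abstract describes. The beta-binomial (Markov beta) chain below is one standard way to link Beta-distributed hazards across adjacent ages while preserving the Beta marginal; the paper's actual two-dimensional age-time construction may differ, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_beta_hazards(n_ages, a=1.0, b=9.0, c=20):
    """Simulate discrete-time hazards h_1, ..., h_n with Beta(a, b) marginals
    and dependence across ages via beta-binomial conjugate links (a
    Nieto-Barajas/Walker-style Markov beta chain; the paper's exact
    construction may differ)."""
    h = np.empty(n_ages)
    h[0] = rng.beta(a, b)
    for x in range(1, n_ages):
        u = rng.binomial(c, h[x - 1])      # latent count links adjacent ages
        h[x] = rng.beta(a + u, b + c - u)  # conjugate update keeps Beta marginal
    return h

hazards = markov_beta_hazards(n_ages=100)
survival = np.cumprod(1.0 - hazards)       # discrete-time survival from the hazards
print(hazards[:5], survival[-1])
```

A second copy of the same link, run across calendar years at each age, would give the age-by-time dependence the abstract refers to.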

Related Content

A key problem in robotic locomotion is finding optimal shape changes to effectively displace systems through the world. Variational techniques for gait optimization require estimates of body displacement per gait cycle; however, these estimates introduce error due to omitted higher-order terms. In this paper, we formulate existing estimates for displacement and describe the contribution of low-order terms to these estimates. We additionally describe the magnitude of higher- (third-) order effects, and identify that the choice of body coordinate, gait diameter, and starting phase influences these effects. We demonstrate that varying such parameters on two example systems (the differential drive car and the Purcell swimmer) effectively manages third-order contributions.
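To make the displacement-estimate error concrete, here is a hedged sketch for the differential-drive car: it compares the exact per-cycle displacement (numerical integration of the pose in SE(2)) against the leading-order estimate that simply integrates the body velocity with orientation held fixed, dropping the Lie-bracket and higher-order terms the paper analyzes. The gait, wheel radius, and axle width are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

R, W = 1.0, 1.0   # wheel radius and half axle width (illustrative)

def body_velocity(t, A=0.5):
    # hypothetical circular gait in the wheel angles: a1 = A sin t, a2 = A cos t
    da1, da2 = A * np.cos(t), -A * np.sin(t)
    xi_x = R * (da1 + da2) / 2.0            # forward body velocity
    xi_th = R * (da2 - da1) / (2.0 * W)     # angular body velocity
    return xi_x, xi_th

def pose_ode(t, g):
    x, y, th = g
    xi_x, xi_th = body_velocity(t)
    return [np.cos(th) * xi_x, np.sin(th) * xi_x, xi_th]

T = 2.0 * np.pi
exact = solve_ivp(pose_ode, (0.0, T), [0.0, 0.0, 0.0], rtol=1e-10).y[:, -1]

# leading-order estimate: integrate the body velocity as if orientation were
# fixed, which drops the second- and third-order (bracket) contributions
ts = np.linspace(0.0, T, 2001)
xi = np.array([body_velocity(t) for t in ts])
lead = np.array([trapezoid(xi[:, 0], ts), 0.0, trapezoid(xi[:, 1], ts)])

print("exact per-cycle displacement:", exact)
print("leading-order estimate:      ", lead)
print("gap due to higher-order terms:", exact - lead)
```

For this gait the leading-order estimate vanishes while the exact integration yields a small net displacement, exactly the higher-order effect the abstract discusses.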

Bayesian nonparametric methods are a popular choice for analysing survival data due to their ability to flexibly model the distribution of survival times. These methods typically employ a nonparametric prior on the survival function that is conjugate with respect to right-censored data. Eliciting these priors, particularly in the presence of covariates, can be challenging and inference typically relies on computationally intensive Markov chain Monte Carlo schemes. In this paper, we build on recent work that recasts Bayesian inference as assigning a predictive distribution on the unseen values of a population conditional on the observed samples, thus avoiding the need to specify a complex prior. We describe a copula-based predictive update which admits a scalable sequential importance sampling algorithm to perform inference that properly accounts for right-censoring. We provide theoretical justification through an extension of Doob's consistency theorem and illustrate the method on a number of simulated and real data sets, including an example with covariates. Our approach enables analysts to perform Bayesian nonparametric inference through only the specification of a predictive distribution.
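The flavor of a copula-based predictive update can be sketched on a grid. The recursion below uses the bivariate Gaussian copula density to tilt the current predictive towards each new observation, in the spirit of known recursive copula updates (e.g., Hahn, Martin and Walker); the paper's actual update additionally handles right-censoring and sits inside a sequential importance sampler, both of which this uncensored sketch omits. The weight sequence alpha_n = 1/(n+1) and rho = 0.7 are illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import cumulative_trapezoid, trapezoid

def gaussian_copula_density(u, v, rho):
    zu, zv = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (zu**2 + zv**2) - 2 * rho * zu * zv)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def predictive_update(p, P, grid, y_new, n, rho=0.7):
    """One copula step: tilt the current predictive density towards y_new."""
    alpha = 1.0 / (n + 1)                            # simple weight sequence
    u = np.clip(P, 1e-6, 1 - 1e-6)
    v = np.clip(np.interp(y_new, grid, P), 1e-6, 1 - 1e-6)
    p = p * ((1 - alpha) + alpha * gaussian_copula_density(u, v, rho))
    P = cumulative_trapezoid(p, grid, initial=0.0)
    return p / P[-1], P / P[-1]                      # renormalise density and CDF

grid = np.linspace(-5.0, 8.0, 2001)
p, P = norm.pdf(grid), norm.cdf(grid)                # initial predictive: N(0, 1)
data = np.random.default_rng(1).normal(2.0, 0.5, size=200)
for n, y in enumerate(data, start=1):
    p, P = predictive_update(p, P, grid, y, n)
print("predictive mean ~", trapezoid(grid * p, grid))  # drifts towards 2.0
```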

In this study, we investigate the fundamental trade-off among identification, secrecy, template, and privacy-leakage rates in biometric identification systems. Ignatenko and Willems (2015) studied this system under the assumption that the channel in the enrollment process is noiseless, and they did not consider the template rate. In practice, however, noise is likely to arise when bio-data are scanned during enrollment. In this paper, we impose a noisy channel in the enrollment process and characterize the capacity region of the rate tuples. The capacity region is proved by a novel technique via two auxiliary random variables, which has not appeared in previous studies. As special cases, the obtained result shows that the characterization reduces to the one given by Ignatenko and Willems (2015) when the enrollment channel is noiseless and there is no constraint on the template rate, and it also coincides with the result derived by G\"unl\"u and Kramer (2018) when there is only one individual.

We present a new method for analyzing stochastic epidemic models under minimal assumptions. The method, dubbed DSA, is based on a simple yet powerful observation: population-level mean-field trajectories described by a system of ODEs may also approximate individual-level times of infection and recovery. This idea gives rise to a certain non-Markovian agent-based model and provides an agent-level likelihood function for a random sample of infection and/or recovery times. Extensive numerical analyses on both synthetic and real epidemic data, from the foot-and-mouth disease (FMD) outbreak in the United Kingdom and the COVID-19 epidemic in India, show good accuracy and confirm the method's versatility in likelihood-based parameter estimation. The accompanying software package gives prospective users a practical tool for modeling, analyzing and interpreting epidemic data with the help of the DSA approach.
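A hedged sketch of the core DSA idea for a simple SIR model: the mean-field susceptible trajectory s(t) doubles as the survival function of a random individual's infection time, so -ds/dt = beta*s*i, normalised over the observation window, is a density yielding an individual-level likelihood. Parameter values and observation times are illustrative, and the accompanying package's actual interface is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.6, 0.25          # illustrative transmission/recovery rates
s0, T = 0.99, 60.0

def sir(t, y):
    s, i = y
    return [-beta * s * i, beta * s * i - gamma * i]

sol = solve_ivp(sir, (0.0, T), [s0, 1.0 - s0], dense_output=True, rtol=1e-8)

def infection_time_loglik(times):
    """Log-likelihood of sampled infection times under the DSA view: the
    density of an infection time is -ds/dt = beta*s*i, normalised by the
    mass of individuals infected within the observation window [0, T]."""
    s_T = sol.sol(T)[0]
    ll = 0.0
    for t in times:
        s, i = sol.sol(t)
        ll += np.log(beta * s * i) - np.log(s0 - s_T)
    return ll

print(infection_time_loglik([3.0, 7.5, 12.0, 20.0]))
```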

Explicit knowledge of total community-level immune seroprevalence is critical to developing policies to mitigate the social and clinical impact of SARS-CoV-2. Publicly available vaccination data are frequently cited as a proxy for population immunity, but this metric ignores the effects of naturally-acquired immunity, which varies broadly throughout the country and world. Without broad or random sampling of the population, accurate measurement of persistent immunity after natural infection is generally unavailable. To enable tracking of both naturally-acquired and vaccine-induced immunity, we set up a synthetic random proxy based on routine hospital testing for estimating total Immunoglobulin G (IgG) prevalence in the sampled community. Our approach analyzes viral IgG testing data from asymptomatic patients who present for elective procedures within a hospital system. We apply multilevel regression and poststratification to adjust for demographic and geographic discrepancies between the sample and the community population. We then apply state-based vaccination data to categorize immune status as driven by natural infection or by vaccination. We have validated the model using verified clinical metrics of viral and symptomatic disease incidence, showing the expected biological correlation of these quantities with the timing, rate, and magnitude of seroprevalence. In mid-July 2021, the estimated immunity level was 74%, against an administered vaccination rate of 45%, in the two counties studied. The metric improves real-time understanding of immunity to COVID-19 as it evolves, and the coordination of policy responses to the disease, moving toward an inexpensive and easily operational surveillance system that transcends the limits of vaccination datasets alone.
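The poststratification half of the approach is easy to sketch: cell-level positivity estimates (which in the paper come from a multilevel regression fit to the hospital IgG tests) are reweighted by each cell's census share to correct sample-versus-community discrepancies. The cells, estimates, and counts below are entirely illustrative.

```python
import numpy as np

# Poststratification step of MRP (hypothetical numbers): model-estimated IgG
# positivity per demographic cell, reweighted by that cell's census count.
cells = {
    # (age_band, sex): (estimated IgG positivity, census count)
    ("18-49", "F"): (0.32, 41000),
    ("18-49", "M"): (0.28, 39000),
    ("50+",   "F"): (0.22, 27000),
    ("50+",   "M"): (0.19, 25000),
}
est = np.array([v[0] for v in cells.values()])
pop = np.array([v[1] for v in cells.values()], dtype=float)

community_prevalence = np.sum(est * pop) / pop.sum()
print(f"poststratified IgG prevalence: {community_prevalence:.1%}")
```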

Observational studies can play a useful role in assessing the comparative effectiveness of competing treatments. In a clinical trial, the randomization of participants to treatment and control groups generally results in groups that are well balanced with respect to possible confounders, which makes the analysis straightforward. When analysing observational data, however, the potential for unmeasured confounding makes comparing treatment effects much more challenging. Causal inference methods such as the Instrumental Variable and Prior Event Rate Ratio approaches make it possible to circumvent the need to adjust for confounding factors that have not been measured in the data, or that are measured with error. Direct confounder adjustment via multivariable regression and propensity score matching also have considerable utility. Each method relies on a different set of assumptions and leverages different parts of the data. In this paper, we describe the assumptions of each method and assess the impact of violating these assumptions in a simulation study. We propose the prior outcome augmented Instrumental Variable method, which leverages data from both before and after treatment initiation and is robust to the violation of key assumptions. Finally, we propose the use of a heterogeneity statistic to decide whether two or more estimates are statistically similar, taking into account their correlation. We illustrate our causal framework by assessing the risk of genital infection in patients prescribed Sodium-glucose Co-transporter-2 inhibitors versus Dipeptidyl Peptidase-4 inhibitors as second-line treatment for type 2 diabetes, using observational data from the Clinical Practice Research Datalink.
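One standard way to build such a heterogeneity statistic for correlated estimates is a Wald-type generalisation of Cochran's Q: pool the estimates by generalised least squares using their full covariance matrix, then compare the residuals to a chi-squared reference. The sketch below takes that route; the paper's exact statistic may differ, and the numbers are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def heterogeneity_test(theta, Sigma):
    """Wald-type heterogeneity statistic for k correlated estimates of the
    same effect: pool by GLS, then test departures from the pooled value
    against chi2(k - 1)."""
    theta = np.asarray(theta, dtype=float)
    k = len(theta)
    ones = np.ones(k)
    Sinv = np.linalg.inv(Sigma)
    pooled = ones @ Sinv @ theta / (ones @ Sinv @ ones)  # GLS pooled estimate
    r = theta - pooled
    Q = r @ Sinv @ r                  # ~ chi2(k - 1) under homogeneity
    return pooled, Q, chi2.sf(Q, df=k - 1)

# e.g. IV, PERR, and regression-adjusted estimates with estimated covariance
theta = [0.41, 0.35, 0.52]
Sigma = np.array([[0.010, 0.004, 0.003],
                  [0.004, 0.012, 0.005],
                  [0.003, 0.005, 0.009]])
print(heterogeneity_test(theta, Sigma))
```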

Identifying personalized interventions for an individual is an important task. Recent work has shown that interventions that do not consider the demographic background of individual consumers can, in fact, produce the reverse effect, strengthening opposition to electric vehicles. In this work, we focus on methods for personalizing interventions based on an individual's demographics, in order to shift consumer preferences to be more positive towards Battery Electric Vehicles (BEVs). One constraint in building models to suggest preference-shifting interventions is that each intervention can influence the effectiveness of later interventions, which in turn requires many subjects to evaluate the effectiveness of each possible intervention. To address this, we propose to identify personalized factors influencing BEV adoption, such as barriers and motivators. We present a method for predicting these factors and show that its performance is better than always predicting the most frequent factors. We then present a Reinforcement Learning (RL) model that learns the most effective interventions, and compare the number of subjects required for each approach.
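As a toy illustration of learning demographic-conditioned interventions, here is an epsilon-greedy contextual bandit that tracks the average preference shift of each intervention per demographic profile. The paper's RL formulation (its states, rewards, and intervention set) is not specified here, so the profiles, arms, and simulated responses are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

n_profiles, n_interventions = 4, 3       # e.g. demographic bins, message types
counts = np.zeros((n_profiles, n_interventions))
value = np.zeros((n_profiles, n_interventions))   # running mean preference shift

def simulate_shift(profile, arm):
    # hypothetical stand-in for a subject's measured preference change
    true = np.array([[0.1, 0.3, 0.0], [0.2, 0.1, 0.4],
                     [0.0, 0.2, 0.1], [0.3, 0.0, 0.2]])
    return rng.normal(true[profile, arm], 0.1)

for step in range(5000):
    profile = rng.integers(n_profiles)
    if rng.random() < 0.1:                           # explore
        arm = rng.integers(n_interventions)
    else:                                            # exploit current estimate
        arm = int(np.argmax(value[profile]))
    shift = simulate_shift(profile, arm)
    counts[profile, arm] += 1
    value[profile, arm] += (shift - value[profile, arm]) / counts[profile, arm]

print("learned best intervention per profile:", value.argmax(axis=1))
```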

Spatio-temporal forecasting has numerous applications in analyzing wireless, traffic, and financial networks. Many classical statistical models fall short in handling the complexity and high non-linearity present in such time-series data. Recent advances in deep learning allow for better modelling of spatial and temporal dependencies. While most of these models focus on obtaining accurate point forecasts, they do not characterize the prediction uncertainty. In this work, we consider the time-series data as a random realization from a nonlinear state-space model and target Bayesian inference of the hidden states for probabilistic forecasting. We use particle flow as the tool for approximating the posterior distribution of the states, as it has been shown to be highly effective in complex, high-dimensional settings. Thorough experimentation on several real-world time-series datasets demonstrates that our approach provides better characterization of uncertainty while maintaining accuracy comparable to state-of-the-art point forecasting methods.
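For orientation, here is a minimal bootstrap particle filter on a one-dimensional nonlinear state-space model. This is the generic sequential Monte Carlo baseline: the paper's contribution replaces the weight-and-resample step with particle flow, which migrates particles deterministically towards the posterior and scales better in high dimensions; that machinery is beyond a short sketch, and the model and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

T, N = 100, 500
f = lambda x: 0.9 * x + 2.0 * np.sin(x)        # hypothetical state transition

x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + rng.normal(0, 0.5)
    y[t] = x_true[t] + rng.normal(0, 1.0)       # noisy observation

particles = rng.normal(0, 1, N)
means = np.zeros(T)
for t in range(1, T):
    particles = f(particles) + rng.normal(0, 0.5, N)      # propagate
    logw = -0.5 * (y[t] - particles) ** 2                 # Gaussian obs. loglik
    w = np.exp(logw - logw.max()); w /= w.sum()
    means[t] = w @ particles                              # filtering mean
    particles = rng.choice(particles, size=N, p=w)        # multinomial resample

print("RMSE of filtering mean:", np.sqrt(np.mean((means - x_true) ** 2)))
```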

This work focuses on combining nonparametric topic models with Auto-Encoding Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the topics are treated as trainable parameters and the document-specific topic proportions are obtained by a stick-breaking construction. Inference in iTM-VAE is carried out by neural networks, so that it can be computed in a simple feed-forward manner. We also describe how to introduce a hyper-prior into iTM-VAE so as to model the uncertainty of the prior parameter. The hyper-prior technique is quite general, and we show that it can be applied to other AEVB-based models to alleviate the {\it collapse-to-prior} problem elegantly. Moreover, we propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner. HiTM-VAE is even more flexible and can generate topic distributions with better variability. Experimental results on the 20News and Reuters RCV1-V2 datasets show that the proposed models outperform state-of-the-art baselines significantly. The advantages of the hyper-prior technique and the hierarchical model construction are also confirmed by experiments.
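The stick-breaking construction at the heart of iTM-VAE is compact enough to sketch directly: each topic proportion takes whatever fraction a Beta draw breaks off the remaining stick. In the VAE itself the Beta draws would be reparameterised (e.g., via Kumaraswamy samples produced by the encoder network); the plain NumPy version below, with an illustrative concentration and truncation level, just shows the construction.

```python
import numpy as np

rng = np.random.default_rng(11)

def stick_breaking(alpha=5.0, truncation=50):
    """Draw topic proportions pi_k = v_k * prod_{j<k} (1 - v_j) with
    v_k ~ Beta(1, alpha), truncated at a fixed number of topics."""
    v = rng.beta(1.0, alpha, size=truncation)           # stick fractions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    pi = v * remaining
    pi[-1] = 1.0 - pi[:-1].sum()    # fold leftover stick into the last topic
    return pi

pi = stick_breaking()
print(pi[:5], pi.sum())             # proportions decay stochastically, sum to 1
```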

Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
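A finite-dimensional sketch of the additive construction: each group's random probability measure normalises the sum of one shared and one group-specific gamma CRM supported on a common grid of atoms, so the shared jumps induce dependence across groups without forcing full exchangeability. The truncation level, total masses, and base measure below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

K = 100                                   # truncation level (number of atoms)
atoms = rng.normal(0, 1, K)               # shared atom locations
# finite gamma-CRM approximation: K iid Gamma(mass/K, 1) jumps sum to mass
shared = rng.gamma(2.0 / K, 1.0, K)       # common component
groups = []
for g in range(2):
    own = rng.gamma(2.0 / K, 1.0, K)      # group-specific component
    w = shared + own
    groups.append(w / w.sum())            # normalise -> random probability measure

p1, p2 = groups
corr = np.corrcoef(p1, p2)[0, 1]
print(f"dependence across groups via the shared jumps: corr = {corr:.2f}")
```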
