
This paper concerns the construction of confidence intervals in standard seroprevalence surveys. In particular, we discuss methods for constructing confidence intervals for the proportion of individuals in a population infected with a disease using a sample of antibody test results and measurements of the test's false positive and false negative rates. We begin by documenting erratic behavior in the coverage probabilities of standard Wald and percentile bootstrap intervals when applied to this problem. We then consider two alternative sets of intervals constructed by test inversion. The first set of intervals is approximate, using either an asymptotic or a bootstrap approximation to the finite-sample distribution of a chosen test statistic. We consider several choices of test statistic, including maximum likelihood estimators and generalized likelihood ratio statistics. We show via simulation that, at empirically relevant parameter values and sample sizes, the coverage probabilities of these intervals are close to their nominal level and are approximately equi-tailed. The second set of intervals is shown to contain the true parameter value with probability at least equal to the nominal level, but can be conservative in finite samples.
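
For concreteness, the following is a minimal sketch of the Rogan-Gladen-corrected Wald interval, i.e. the kind of standard interval whose coverage the paper scrutinizes; it treats the estimated false positive and false negative rates as fixed, and all function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def wald_interval_prevalence(pos, n, fp_rate, fn_rate, alpha=0.05):
    """Wald-type interval for prevalence with the Rogan-Gladen correction.

    pos, n           : positive antibody tests and total tests in the survey
    fp_rate, fn_rate : estimated false positive / false negative rates
    (Sampling error in the rate estimates is ignored here for brevity.)
    """
    p_hat = pos / n                                        # raw positive rate
    sens, spec = 1.0 - fn_rate, 1.0 - fp_rate              # sensitivity, specificity
    pi_hat = (p_hat - (1.0 - spec)) / (sens + spec - 1.0)  # corrected prevalence
    se = np.sqrt(p_hat * (1.0 - p_hat) / n) / (sens + spec - 1.0)
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return max(pi_hat - z * se, 0.0), min(pi_hat + z * se, 1.0)  # clipped to [0, 1]

print(wald_interval_prevalence(pos=50, n=3000, fp_rate=0.005, fn_rate=0.10))
```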

Related content

It is well known that an algorithm exists which approximates the NP-complete problem of Set Cover within a factor of ln(n), and it was recently proven that this approximation ratio is optimal unless P = NP. This optimality result is the product of many advances in characterizations of NP, in terms of interactive proof systems and probabilistically checkable proofs (PCP), and of improvements to the analyses thereof. As a result, however, it is difficult to extract the development of Set Cover approximation bounds from the broader scope of proof-system analysis. This paper presents a chronological progression of results on lower-bounding the approximation ratio of Set Cover. We analyze a series of proofs of progressively better bounds and unify the results under common terminology and frameworks to provide an accurate comparison of proof techniques and their results. We also treat many preliminary results as black boxes to better focus our analysis on the core reductions to Set Cover instances. The result is a set of alternative versions of several hardness proofs, beginning with the initial inapproximability results and culminating in a version of the proof that ln(n) is a tight lower bound.
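
The hardness results discussed above concern the guarantee achieved by the classical greedy algorithm; a minimal sketch of that algorithm, with illustrative names, is given below.

```python
def greedy_set_cover(universe, subsets):
    """Greedy Set Cover: repeatedly pick the subset covering the most
    still-uncovered elements; achieves an H(n) <= ln(n) + 1 approximation.
    Returns the indices of the chosen subsets.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the subsets do not cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```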

Replication analysis is widely used in many fields of study. Once a study is published, other researchers often conduct the same or very similar analyses to confirm the reliability of the published findings. But what if the data are confidential? If the data sets used in these studies are confidential, the results of replication analyses cannot be released to any entity without permission to access the data sets; doing so may result in serious privacy leakage, especially when the published study and the replication studies use similar or common data sets. For example, examining the influence of the treatment on outliers can cause serious leakage of information about those outliers. In this paper, we build two frameworks for replication analysis based on a differentially private Bayesian approach. We formalize the questions of interest and illustrate the properties of our methods through a combination of theoretical analysis and simulation, demonstrating the feasibility of our approach. We also provide guidance on the choice of parameters and the interpretation of the results.
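
As a point of reference, the sketch below shows the generic Laplace mechanism for releasing a single replication statistic under epsilon-differential privacy; this is only a standard building block with illustrative names, not the Bayesian frameworks proposed in the paper.

```python
import numpy as np

def laplace_release(statistic_value, sensitivity, epsilon, rng=None):
    """Release a replication statistic under epsilon-differential privacy
    via the Laplace mechanism.

    sensitivity : worst-case change in the statistic when one record changes
    epsilon     : privacy budget; smaller means more noise and more privacy
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return statistic_value + noise

# e.g. privatize a replicated mean difference bounded in [-1, 1] over n = 500 records
print(laplace_release(statistic_value=0.12, sensitivity=2.0 / 500, epsilon=1.0))
```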

Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted: weekly, daily, or even many times a day. The micro-randomized trial (MRT) has emerged as an experimental design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review primary and secondary analysis methods for MRTs. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least-squares (WCLS) estimator, which provides consistent causal excursion effect estimates from MRT data, and describe how the WCLS estimator and associated test statistics can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate the MRT design and analyses using the HeartSteps MRT, which was designed to develop a JITAI for increasing physical activity among sedentary individuals. We supplement HeartSteps with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT and experimental design considerations that might arise.
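
The sketch below illustrates the weighted-and-centered least-squares idea for a single, fully marginal excursion-effect term, using illustrative names and ignoring availability indicators and effect moderators; it is a simplified stand-in for dedicated MRT analysis routines, not a replacement for them.

```python
import numpy as np
import statsmodels.api as sm

def wcls_fit(y, a, p, X, ids, p_tilde=0.5):
    """Simplified weighted-and-centered least-squares (WCLS-style) fit.

    y   : proximal outcomes in long format, one row per decision point
    a   : binary treatment indicators, randomized with probability p
    X   : control covariates; ids : participant identifiers for clustering
    The coefficient on the centered treatment term estimates a marginal
    causal excursion effect (availability and moderators omitted).
    """
    w = np.where(a == 1, p_tilde / p, (1 - p_tilde) / (1 - p))   # stabilizing weights
    design = np.column_stack([np.ones(len(y)), X, a - p_tilde])  # controls + centered treatment
    return sm.WLS(y, design, weights=w).fit(
        cov_type="cluster", cov_kwds={"groups": ids})            # person-clustered SEs
```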

Meta-analyses of survival studies aim to reveal the variation of an effect measure of interest over different studies and to present a meaningful summary. They must address between-study heterogeneity in several dimensions and eliminate spurious sources of variation. Forest plots of the usual (adjusted) hazard ratios are fraught with difficulties from this perspective, since both the magnitude and the interpretation of these hazard ratios depend on factors ancillary to the true study-specific exposure effect. These factors generally include the study duration, the censoring patterns within studies, the covariates adjusted for, and their distribution over exposure groups. Ignoring these features and accepting implausible hidden assumptions may critically affect the interpretation of the pooled effect measure. Risk differences or restricted mean effects over a common follow-up interval, with a balanced distribution of a covariate set, are natural candidates for exposure evaluation and possible treatment choice. In this paper, we propose differently standardized survival curves over a fitting time horizon, targeting various estimands with their own transportability. With each type of standardization comes a given interpretation, within studies and overall, under stated assumptions. These curves can in turn be summarized by standardized study-specific contrasts, including hazard ratios with a more consistent meaning. We prefer forest plots of risk differences at well-chosen time points. Our case study examines overall survival among anal squamous cell carcinoma patients, expressing the tumor marker $p16^{INK4a}$ or not, based on the individual patient data of six studies.
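
As an illustration of the preferred forest-plot summary, the sketch below computes an unadjusted risk difference at a chosen time point from Kaplan-Meier curves; the standardized versions proposed in the paper would additionally balance a covariate set, and all names here are illustrative.

```python
import numpy as np

def km_survival(times, events, t):
    """Kaplan-Meier survival probability at time t (event = 1, censored = 0)."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.lexsort((1 - events, times))   # sort by time, events before censorings
    at_risk, surv = len(times), 1.0
    for time, event in zip(times[order], events[order]):
        if time > t:
            break
        if event:
            surv *= 1.0 - 1.0 / at_risk       # step down at each event time
        at_risk -= 1                          # event or censoring leaves the risk set
    return surv

def risk_difference_at(t, times1, events1, times0, events0):
    """Difference in cumulative event risk by time t between two exposure groups."""
    return (1.0 - km_survival(times1, events1, t)) - (1.0 - km_survival(times0, events0, t))
```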

Modern longitudinal studies collect feature data at many timepoints, often of the same order as the sample size. Such studies are typically affected by dropout and positivity violations. We tackle these problems by generalizing the recently proposed incremental intervention effects (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental intervention effects when dropout is conditionally ignorable (without requiring treatment positivity) and derive the nonparametric efficiency bound for estimating such effects. We then present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the variance ratio of incremental intervention effects relative to more conventional deterministic effects in a novel infinite-time-horizon setting, where the number of timepoints can grow with the sample size, and show that incremental intervention effects yield near-exponential gains in statistical precision in this setup. Finally, we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.
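
For a single time point without dropout, the incremental intervention multiplies the odds of treatment by a factor delta, shifting the propensity pi(x) to delta*pi(x) / (delta*pi(x) + 1 - pi(x)), and its effect admits the simple inverse-probability-weighted plug-in sketched below; the paper's estimators are the efficient, multi-timepoint generalizations of this idea, and the names here are illustrative.

```python
import numpy as np

def incremental_effect_ipw(y, a, pi, delta):
    """IPW estimate of E[Y] under the incremental intervention that multiplies
    the odds of treatment by delta (single time point, no dropout).

    y  : outcomes; a : observed binary treatments; pi : estimated propensity scores
    """
    w = (delta * a + (1 - a)) / (delta * pi + (1 - pi))  # density ratio q(A|X) / pi(A|X)
    return np.mean(w * y)
```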

We present an overview of Temporal Logic Programming from the perspective of its application to Knowledge Representation and declarative problem solving. Such programs are the result of combining usual rules with temporal modal operators, as in Linear-time Temporal Logic (LTL). We focus on recent results for the non-monotonic formalism called Temporal Equilibrium Logic (TEL), which is defined for the full syntax of LTL but applies a model selection criterion based on Equilibrium Logic, a well-known logical characterization of Answer Set Programming (ASP). We obtain a proper extension of the stable model semantics for the general case of arbitrary temporal formulas. We recall the basic definitions of TEL and its monotonic basis, the temporal logic of Here-and-There (THT), and study the differences between infinite and finite traces. We also provide other useful results, such as translations into other formalisms like Quantified Equilibrium Logic or Second-order LTL, and some automata-based techniques for computing temporal stable models. In the second part, we focus on practical aspects, defining a syntactic fragment called temporal logic programs that is closer to ASP, and explain how this fragment has been exploited in the construction of the solver TELINGO.

Constrained tensor and matrix factorization models make it possible to extract interpretable patterns from multiway data, so identifiability properties and efficient algorithms for constrained low-rank approximations are important research topics. This work deals with low-rank approximations whose factor-matrix columns are sparse in a known, possibly overcomplete, basis, a model coined Dictionary-based Low-Rank Approximation (DLRA). While earlier contributions focused on finding factor columns inside a dictionary of candidate columns, i.e. one-sparse approximations, this work is the first to tackle DLRA with sparsity larger than one. I propose to focus on the sparse-coding subproblem, coined Mixed Sparse-Coding (MSC), that emerges when solving DLRA with an alternating optimization strategy. Several algorithms based on sparse-coding heuristics (greedy methods, convex relaxations) are provided to solve MSC, and their performance is evaluated on simulated data. I then show how to adapt an efficient LASSO-based MSC solver to compute Dictionary-based Matrix Factorization and Canonical Polyadic Decomposition in the context of hyperspectral image processing and chemometrics. These experiments suggest that DLRA extends the modeling capabilities of low-rank approximations, helps reduce estimation variance, and enhances the identifiability and interpretability of the estimated factors.
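
The sketch below shows a plain columnwise LASSO relaxation, one of the convex heuristics mentioned above, applied to sparse-coding factor columns in a known dictionary; the actual mixed sparse-coding problem couples the columns through the low-rank model, and all names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_sparse_coding(Y, D, lam=0.1):
    """Columnwise l1 sparse coding: for each column y of Y, solve
    min_g ||y - D g||_2^2 + lam * ||g||_1 over the dictionary D.
    Returns the reconstructed columns D @ G and the sparse codes G.
    """
    def code(y):
        return Lasso(alpha=lam, fit_intercept=False, max_iter=5000).fit(D, y).coef_
    G = np.column_stack([code(y) for y in Y.T])
    return D @ G, G
```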

I propose a new type of confidence interval for correct asymptotic inference after using the data to select a model of interest, without assuming that any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage, uniformly over a large class of probability distributions that do not bound scaled model parameters. I illustrate the use of these confidence intervals for inference after using the LASSO objective function to select a regression model of interest, and I provide evidence of their desirable length and coverage properties in small samples via a set of Monte Carlo experiments covering a variety of data distributions, as well as through an empirical application to predictors of diabetes disease progression.
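
For contrast with the hybrid interval, the sketch below implements the familiar sample-splitting baseline, which is valid because selection and inference use disjoint halves of the data but pays a price in efficiency; it is not the paper's method, and the names are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

def split_sample_post_lasso_ci(X, y, alpha=0.05, seed=0):
    """Sample-splitting baseline (not the hybrid interval): select a model
    with the LASSO on one half of the data, then report ordinary Wald
    intervals for the selected coefficients from an OLS fit on the other half.
    X, y : NumPy arrays.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    sel, inf = idx[: len(y) // 2], idx[len(y) // 2:]
    support = np.flatnonzero(LassoCV(cv=5).fit(X[sel], y[sel]).coef_)  # selected columns
    fit = sm.OLS(y[inf], sm.add_constant(X[inf][:, support])).fit()
    return support, fit.conf_int(alpha=alpha)  # selected columns and their intervals
```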

We study the link between generalization and interference in temporal-difference (TD) learning. Interference is defined as the inner product of two different gradients, representing their alignment. This quantity arises as an object of interest from a variety of observations about neural networks, parameter sharing, and the dynamics of learning. We find that TD easily leads to low-interference, under-generalizing parameters, while the effect appears to be reversed in supervised learning. We hypothesize that the cause can be traced back to the interplay between the dynamics of interference and bootstrapping. This is supported empirically by several observations: the negative relationship between the generalization gap and interference in TD, the negative effect of bootstrapping on interference and on the local coherence of targets, and the contrast between the information propagation rate in TD(0) versus TD($\lambda$) and regression tasks such as Monte Carlo policy evaluation. We hope that these new findings can guide the future discovery of better bootstrapping methods.
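
Under linear value-function approximation, the interference between two TD(0) updates reduces to the inner product of their semi-gradients; the sketch below computes this simplified stand-in for the neural-network gradients studied in the paper, with illustrative names.

```python
import numpy as np

def td_interference(phi_i, r_i, phi_i_next, phi_j, r_j, phi_j_next, theta, gamma=0.99):
    """Interference between two TD(0) updates with linear value approximation:
    the inner product of their semi-gradients.
    phi_* : feature vectors of states; r_* : rewards; theta : current weights.
    """
    def semi_gradient(phi, r, phi_next):
        td_error = r + gamma * theta @ phi_next - theta @ phi
        return td_error * phi             # semi-gradient of the TD(0) update

    g_i = semi_gradient(phi_i, r_i, phi_i_next)
    g_j = semi_gradient(phi_j, r_j, phi_j_next)
    return float(g_i @ g_j)               # > 0: aligned updates; < 0: conflicting
```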

Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods offers the best of both worlds. We demonstrate that both off-policy and on-policy methods benefit from this approach through an experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete-action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than both traditional RL with action-space noise and evolutionary strategies.
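
A minimal version of parameter-space exploration is sketched below for a PyTorch policy: Gaussian noise is added to every parameter before a rollout and removed afterwards. The adaptive noise scaling and layer normalization used in practice are omitted, and the names are illustrative.

```python
import torch

def perturb_parameters(policy, sigma=0.05):
    """Add spherical Gaussian noise to every parameter of a policy network
    before collecting a rollout. Returns a backup of the original parameters
    so they can be restored after the rollout.
    """
    backup = [p.detach().clone() for p in policy.parameters()]
    with torch.no_grad():
        for p in policy.parameters():
            p.add_(sigma * torch.randn_like(p))   # in-place parameter perturbation
    return backup

def restore_parameters(policy, backup):
    """Undo the perturbation by copying the saved parameters back."""
    with torch.no_grad():
        for p, saved in zip(policy.parameters(), backup):
            p.copy_(saved)
```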
