
The study of treatment effects is often complicated by noncompliance and missing data. In the one-sided noncompliance setting where the complier and noncomplier average causal effects (CACE and NACE) are of interest, we address outcome missingness of the \textit{latent missing at random} type (LMAR, also known as \textit{latent ignorability}). That is, conditional on covariates and treatment assigned, the missingness may depend on compliance type. Within the instrumental variable (IV) approach to noncompliance, methods have been proposed for handling LMAR outcomes that additionally invoke an exclusion restriction type assumption on missingness, but no solution has been proposed for when a non-IV approach is used. This paper focuses on effect identification in the presence of LMAR outcomes, with a view to flexibly accommodating different principal identification approaches. We show that under treatment assignment ignorability and LMAR only, effect nonidentifiability boils down to a set of two connected mixture equations involving unidentified stratum-specific response probabilities and outcome means. This clarifies that (except for a special case) effect identification generally requires two additional assumptions: a \textit{specific missingness mechanism} assumption and a \textit{principal identification} assumption. This provides a template for identifying effects based on separate choices of these assumptions. We consider a range of specific missingness assumptions, including those that have appeared in the literature and some new ones. Incidentally, we find an issue with the existing assumptions and propose a modification that avoids it. Results under different assumptions are illustrated using data from the Baltimore Experience Corps Trial.
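As a point of reference only, the sketch below shows the standard Wald-type (IV) estimator of the CACE under one-sided noncompliance, assuming randomized assignment, the exclusion restriction, and fully observed outcomes. It does not implement the paper's LMAR identification machinery, and the data and function names are hypothetical.

```python
# Minimal sketch (NOT the paper's method): Wald/IV estimator of the CACE under
# one-sided noncompliance, assuming complete outcomes and the exclusion restriction.
import numpy as np

def wald_cace(z, d, y):
    """z: assignment (0/1), d: treatment received (0/1), y: outcome (no missingness)."""
    z, d, y = map(np.asarray, (z, d, y))
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # intention-to-treat effect on Y
    p_comply = d[z == 1].mean()                   # P(D=1 | Z=1); one-sided: d=0 when z=0
    return itt_y / p_comply

# hypothetical toy data with a true CACE of 1.0
rng = np.random.default_rng(0)
n = 1000
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.6
d = z * complier
y = 1.0 * d + rng.normal(size=n)
print(wald_cace(z, d, y))                          # should be close to 1.0
```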

Related content

We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the Laplace-Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, showing the pros and cons of the different approaches.
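For orientation, here is a minimal sketch of the first classical approach mentioned above, computing the curvature directly from the level set as kappa = div(grad(phi)/|grad(phi)|) with central differences on a hypothetical uniform grid; the CLS transport, the Laplace-Beltrami variant, and the curvature evolution equation are not shown.

```python
# Direct curvature from a signed-distance-like level set, checked on a circle of
# radius R whose exact curvature is 1/R.
import numpy as np

def curvature(phi, h):
    gy, gx = np.gradient(phi, h)                  # grad(phi); axis 0 is y, axis 1 is x
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    _, dnx_dx = np.gradient(gx / norm, h)         # d/dx of the unit normal's x-component
    dny_dy, _ = np.gradient(gy / norm, h)         # d/dy of the unit normal's y-component
    return dnx_dx + dny_dy                        # divergence of the unit normal

h = 0.01
x = np.arange(-1, 1, h)
X, Y = np.meshgrid(x, x)
R = 0.5
phi = np.sqrt(X**2 + Y**2) - R                    # signed distance to a circle
kappa = curvature(phi, h)
near_interface = np.abs(phi) < h
print(kappa[near_interface].mean(), 1.0 / R)      # approximately equal
```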

We have developed an efficient and unconditionally energy-stable method for simulating droplet formation dynamics. Our approach involves a novel time-marching scheme based on the scalar auxiliary variable technique, specifically designed for solving the Cahn-Hilliard-Navier-Stokes phase field model with variable density and viscosity. We have successfully applied this method to simulate droplet formation in scenarios where a Newtonian fluid is injected through a vertical tube into another immiscible Newtonian fluid. To tackle the challenges posed by nonhomogeneous Dirichlet boundary conditions at the tube entrance, we have introduced additional nonlocal auxiliary variables and associated ordinary differential equations. These additions effectively eliminate the influence of boundary terms. Moreover, we have incorporated stabilization terms into the scheme to enhance its numerical effectiveness. Notably, our resulting scheme is fully decoupled, requiring the solution of only linear systems at each time step. We have also demonstrated the energy decaying property of the scheme, with suitable modifications. To assess the accuracy and stability of our algorithm, we have conducted extensive numerical simulations. Additionally, we have examined the dynamics of droplet formation and explored the impact of dimensionless parameters on the process. Overall, our work presents a refined method for simulating droplet formation dynamics, offering improved efficiency, energy stability, and accuracy.
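To convey the idea only, the sketch below applies the scalar auxiliary variable (SAV) construction to a toy scalar gradient flow; the paper's scheme treats the full Cahn-Hilliard-Navier-Stokes system with variable density and viscosity, nonlocal auxiliary variables for the boundary terms, and stabilization, none of which appear here.

```python
# SAV idea on the toy gradient flow d(phi)/dt = -F'(phi), F(phi) = (phi^2 - 1)^2 / 4.
# The auxiliary variable r = sqrt(F(phi) + C) gives linear updates whose modified
# energy r^2 decays unconditionally in the time step.
import numpy as np

F  = lambda p: 0.25 * (p**2 - 1.0)**2
dF = lambda p: p**3 - p
C  = 1.0                                    # constant keeping F + C positive

def sav_step(phi, r, dt):
    b = dF(phi) / np.sqrt(F(phi) + C)       # coefficient frozen at the old time level
    r_new = r / (1.0 + 0.5 * dt * b * b)    # linear update for the auxiliary variable
    phi_new = phi - dt * b * r_new          # linear update for phi
    return phi_new, r_new

phi, r = 2.0, np.sqrt(F(2.0) + C)
for _ in range(200):
    phi, r = sav_step(phi, r, dt=0.5)       # large step; r^2 still decays monotonically
print(phi)                                   # approaches a minimizer of F (phi -> 1)
```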

We introduce a method to construct a stochastic surrogate model from the results of dimensionality reduction in forward uncertainty quantification. The hypothesis is that the high-dimensional input augmented by the output of a computational model admits a low-dimensional representation. This assumption can be met by numerous uncertainty quantification applications with physics-based computational models. The proposed approach differs from a sequential application of dimensionality reduction followed by surrogate modeling, as we "extract" a surrogate model from the results of dimensionality reduction in the input-output space. This feature becomes desirable when the input space is genuinely high-dimensional. The proposed method also diverges from the Probabilistic Learning on Manifolds approach, as a reconstruction mapping from the feature space to the input-output space is circumvented. The final product of the proposed method is a stochastic simulator that propagates a deterministic input into a stochastic output, preserving the convenience of a sequential "dimensionality reduction + Gaussian process regression" approach while overcoming some of its limitations. The proposed method is demonstrated through two uncertainty quantification problems characterized by high-dimensional input uncertainties.
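For contrast with the proposed method, the following is a minimal sketch of the sequential "dimensionality reduction + Gaussian process regression" baseline mentioned above, with hypothetical toy data; the proposed extraction of a surrogate from the joint input-output reduction is not shown.

```python
# Baseline pipeline: reduce the input dimension, then regress the output on features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
n, d = 200, 50                                   # high-dimensional input
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)   # toy computational-model output

pca = PCA(n_components=5).fit(X)                 # dimensionality reduction of the input
Z = pca.transform(X)
gp = GaussianProcessRegressor(alpha=1e-2).fit(Z, y)  # small nugget for noisy outputs

X_new = rng.normal(size=(3, d))
mean, std = gp.predict(pca.transform(X_new), return_std=True)
print(mean, std)                                 # predictive mean and uncertainty
```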

The cross-validated risk scores (CVRS) design was previously proposed for developing and testing the efficacy of a treatment in a high-efficacy patient group (the sensitive group) using high-dimensional data (such as genetic data). The design is based on computing a risk score for each patient and dividing the patients into clusters using a non-parametric clustering procedure. In some settings it is desirable to consider the trade-off between two outcomes, such as efficacy and toxicity, or cost and effectiveness. With this motivation, we extend the CVRS design to consider two outcomes (CVRS2). The design employs bivariate risk scores that are divided into clusters. We assess the properties of CVRS2 using simulated data and illustrate its application on a randomised psychiatry trial. We show that CVRS2 is able to reliably identify the sensitive group (the group for which the new treatment provides benefit on both outcomes) in the simulated data. We apply the CVRS2 design to a psychology clinical trial that had offender status and substance use status as the two outcomes and collected a large number of baseline covariates. The CVRS2 design yields a significant treatment effect for both outcomes, while the CVRS approach identified a significant effect for the offender status only after pre-filtering the covariates.
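A loose sketch of the core idea only: each patient receives a pair of risk scores (one per outcome), and the score pairs are clustered to propose a sensitive group. The actual CVRS2 design specifies how the scores are built with cross-validation and which non-parametric clustering procedure is used; the choices below (ridge-based scores, k-means with two clusters) are illustrative assumptions.

```python
# Bivariate risk scores followed by clustering, on hypothetical data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n, p = 300, 100
X = rng.normal(size=(n, p))                      # high-dimensional baseline covariates
y1 = X[:, 0] + rng.normal(size=n)                # outcome 1 (e.g., efficacy)
y2 = X[:, 1] + rng.normal(size=n)                # outcome 2 (e.g., toxicity or cost)

score1 = RidgeCV().fit(X, y1).predict(X)         # risk score for outcome 1
score2 = RidgeCV().fit(X, y2).predict(X)         # risk score for outcome 2
scores = np.column_stack([score1, score2])       # bivariate risk scores

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                       # candidate sensitive vs. non-sensitive group
```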

We investigate the multiplicity model with $m$ values of some test statistic independently drawn from a mixture of no effect (null) and positive effect (alternative), where we seek to identify the alternative test results with a controlled error rate. We are interested in the case where the alternatives are rare. A number of multiple testing procedures filter the set of ordered p-values in order to eliminate the nulls. Such an approach can only work if the p-values originating from the alternatives form one or several identifiable clusters. The Benjamini and Hochberg (BH) method, for example, assumes that this cluster occurs in a small interval $(0,\Delta)$ and filters out all or most of the ordered p-values $p_{(r)}$ above a linear threshold $s \times r$. In repeated applications this filter controls the false discovery rate (FDR) via the slope $s$. We propose a new adaptive filter that deletes the p-values from regions of uniform distribution. In cases where a single cluster remains, the p-values in an interval are declared alternatives, with the mid-point and the length of the interval chosen by controlling the data-dependent FDR at a desired level.
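For reference, here is a minimal sketch of the BH filter described above, applied to a hypothetical rare-alternatives mixture: the ordered p-values $p_{(r)}$ are compared with the linear threshold $s \times r$ (slope $s = q/m$ for FDR level $q$). The adaptive filter proposed in the paper is not shown.

```python
# Benjamini-Hochberg step-up procedure on a toy mixture with rare alternatives.
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m   # compare p_(r) with (q/m) * r
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])              # largest rank crossing the line
        rejected[order[:k + 1]] = True
    return rejected

rng = np.random.default_rng(3)
p_null = rng.random(950)                     # nulls: p-values uniform on (0, 1)
p_alt = rng.beta(0.1, 10.0, size=50)         # rare alternatives: p-values clustered near 0
rej = benjamini_hochberg(np.concatenate([p_null, p_alt]), q=0.1)
print(rej.sum(), rej[950:].sum())            # total rejections vs. true discoveries
```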

We develop an anytime-valid permutation test, where the dataset is fixed and the permutations are sampled sequentially one by one, with the objective of saving computational resources by sampling fewer permutations and stopping early. The core technical advance is the development of new test martingales (nonnegative martingales with initial value one) for testing exchangeability against a very particular alternative. These test martingales are constructed using new and simple betting strategies that smartly bet on the relative ranks of permuted test statistics. The betting strategies are guided by the derivation of a simple log-optimal betting strategy, and display excellent power in practice. In contrast to a well-known method by Besag and Clifford, our method yields a valid e-value or a p-value at any stopping time, and with particular stopping rules, it yields computational gains under both the null and the alternative without compromising power.
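As context, the following is a minimal sketch of the Besag-Clifford sequential permutation p-value that the paper contrasts with (not the paper's test-martingale construction, which additionally yields valid e-values and p-values at arbitrary stopping times). Permutations are drawn one at a time and sampling stops once a fixed number of exceedances has been observed; test statistic and data are hypothetical.

```python
# Besag-Clifford sequential Monte Carlo p-value for a two-sample permutation test.
import numpy as np

def besag_clifford_pvalue(x, y, stat, h=10, max_perms=1000, seed=None):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    t_obs = stat(x, y)
    exceed = 0
    for l in range(1, max_perms + 1):
        perm = rng.permutation(pooled)
        if stat(perm[:n_x], perm[n_x:]) >= t_obs:
            exceed += 1
            if exceed == h:                       # early stop under the null
                return h / l
    return (exceed + 1) / (max_perms + 1)         # exhausted the permutation budget

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 50)
y = rng.normal(0.5, 1.0, 50)
diff_in_means = lambda a, b: abs(a.mean() - b.mean())
print(besag_clifford_pvalue(x, y, diff_in_means, seed=5))
```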

When extending inferences from a randomized trial to a new target population, an assumption of transportability of difference effect measures (e.g., conditional average treatment effects) -- or even stronger assumptions of transportability in expectation or distribution of potential outcomes -- is invoked to identify the marginal causal mean difference in the target population. However, many clinical investigators believe that relative effect measures conditional on covariates, such as conditional risk ratios and mean ratios, are more likely to be ``transportable'' across populations compared with difference effect measures. Here, we examine the identification and estimation of the marginal counterfactual mean difference and ratio under a transportability assumption for conditional relative effect measures. We obtain identification results for two scenarios that often arise in practice when individuals in the target population (1) only have access to the control treatment, or (2) have access to the control and other treatments but not necessarily the experimental treatment evaluated in the trial. We then propose multiply robust and nonparametric efficient estimators that allow for the use of data-adaptive methods (e.g., machine learning techniques) to model the nuisance parameters. We examine the performance of the methods in simulation studies and illustrate their use with data from two trials of paliperidone for patients with schizophrenia. We conclude that the proposed methods are attractive when background knowledge suggests that the transportability assumption for conditional relative effect measures is more plausible than alternative assumptions.
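A naive plug-in sketch (hypothetical data and outcome models) of scenario (1), in which the target population only has the control treatment: under transportability of the conditional mean ratio estimated from the trial, the counterfactual treated mean in the target is obtained by multiplying the transported ratio by the target control mean and averaging over target covariates. The paper's estimators are multiply robust and nonparametric efficient; this sketch is neither.

```python
# Plug-in transport of a conditional mean ratio from a trial to a control-only target.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

n_trial = 2000                                                # hypothetical trial data
X_trial = rng.normal(size=(n_trial, 2))
A_trial = rng.integers(0, 2, n_trial)                         # randomized treatment
Y_trial = np.exp(0.2 * X_trial[:, 0] + 0.5 * A_trial) + rng.gamma(1.0, 0.1, n_trial)

n_target = 2000                                               # target: control only
X_target = rng.normal(loc=0.3, size=(n_target, 2))
Y_target = np.exp(0.2 * X_target[:, 0]) + rng.gamma(1.0, 0.1, n_target)

mu1 = LinearRegression().fit(X_trial[A_trial == 1], Y_trial[A_trial == 1])
mu0 = LinearRegression().fit(X_trial[A_trial == 0], Y_trial[A_trial == 0])
ratio = mu1.predict(X_target) / mu0.predict(X_target)         # transported mean ratio r(X)

mu0_target = LinearRegression().fit(X_target, Y_target).predict(X_target)
mean_treated = np.mean(ratio * mu0_target)                    # plug-in counterfactual mean
mean_control = Y_target.mean()
print(mean_treated - mean_control, mean_treated / mean_control)  # marginal difference, ratio
```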

In social choice theory with ordinal preferences, a voting method satisfies the axiom of positive involvement if adding to a preference profile a voter who ranks an alternative uniquely first cannot cause that alternative to go from winning to losing. In this note, we prove a new impossibility theorem concerning this axiom: there is no ordinal voting method satisfying positive involvement that also satisfies the Condorcet winner and loser criteria, resolvability, and a common invariance property for Condorcet methods, namely that the choice of winners depends only on the ordering of majority margins by size.
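To make the objects in the theorem concrete, here is a minimal sketch computing pairwise majority margins from a profile of linear orders and checking for a Condorcet winner (an alternative with a positive margin against every other); the invariance property in the theorem says that winners may depend only on the ordering of these margins by size. The profile below is a hypothetical example.

```python
# Majority margins and Condorcet winner from a profile of rankings (best first).
from itertools import combinations

def majority_margins(profile, alternatives):
    """margin[a][b] = (# voters ranking a above b) - (# ranking b above a)."""
    margin = {a: {b: 0 for b in alternatives} for a in alternatives}
    for ranking in profile:
        pos = {x: i for i, x in enumerate(ranking)}
        for a, b in combinations(alternatives, 2):
            sign = 1 if pos[a] < pos[b] else -1
            margin[a][b] += sign
            margin[b][a] -= sign
    return margin

def condorcet_winner(profile, alternatives):
    m = majority_margins(profile, alternatives)
    for a in alternatives:
        if all(m[a][b] > 0 for b in alternatives if b != a):
            return a
    return None                                   # no Condorcet winner exists

profile = [("a", "b", "c"), ("a", "c", "b"), ("b", "c", "a")]
print(condorcet_winner(profile, ("a", "b", "c")))  # 'a' beats both 'b' and 'c' by 2-1
```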

Building prediction models from mass-spectrometry data is challenging due to the abundance of correlated features with varying degrees of zero-inflation, leading to a common interest in reducing the features to a concise predictor set with good predictive performance. In this study, we formally establish and examine regularized regression approaches designed to address zero-inflated and correlated predictors. In particular, we describe a novel two-stage regularized regression approach (ridge-garrote) that explicitly models zero-inflated predictors using two component variables, applying a ridge estimator in the first stage and a nonnegative garrote estimator in the second stage. We contrasted ridge-garrote with one-stage methods (ridge, lasso) and other two-stage regularized regression approaches (lasso-ridge, ridge-lasso) for zero-inflated predictors. We assessed the predictive performance and predictor selection properties of these methods in a comparative simulation study and a real-data case study to predict kidney function using peptidomic features derived from mass-spectrometry. In the simulation study, the predictive performance of all assessed approaches was comparable, yet the ridge-garrote approach consistently selected more parsimonious models compared to its competitors in most scenarios. While lasso-ridge achieved higher predictive accuracy than its competitors, it exhibited high variability in the number of selected predictors. Ridge-lasso exhibited slightly higher predictive accuracy than ridge-garrote but at the expense of selecting more noise predictors. Overall, ridge emerged as a favourable option when variable selection is not a primary concern, while ridge-garrote demonstrated notable practical utility in selecting a parsimonious set of predictors, with only minimal compromise in predictive accuracy.
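An illustrative sketch of the two-stage ridge-garrote idea: stage 1 fits ridge coefficients, and stage 2 rescales them with nonnegative garrote weights (solved here without the garrote penalty, via nonnegative least squares, for brevity). The paper's method additionally codes each zero-inflated predictor with two component variables and tunes the penalties; those details, and the data below, are not from the paper.

```python
# Two-stage ridge-garrote sketch on hypothetical zero-inflated predictors.
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(7)
n, p = 200, 30
X = rng.normal(size=(n, p)) * (rng.random((n, p)) > 0.4)   # crude zero-inflated features
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(size=n)

beta_ridge = RidgeCV().fit(X, y).coef_                      # stage 1: ridge estimator
Xc, yc = X - X.mean(axis=0), y - y.mean()                   # center for the garrote stage
c, _ = nnls(Xc * beta_ridge, yc)                            # stage 2: nonnegative garrote weights
beta_garrote = c * beta_ridge                               # rescaled, typically sparse coefficients
print(np.flatnonzero(np.abs(beta_garrote) > 1e-8))          # indices of selected predictors
```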

We develop and study a statistical test to detect synchrony in spike trains. Our test is based on the number of coincidences between two trains of spikes. The data are supplied in the form of \(n\) pairs (assumed to be independent) of spike trains. The aim is to assess whether the two trains in a pair are also independent. Our approach is based on previous results of Albert et al. (2015, 2019) and Kim et al. (2022), which we extend to our setting, focusing on the construction of a non-asymptotic criterion ensuring the detection of synchronization in the framework of permutation tests. Our criterion is constructed so that it ensures control of the Type II error, while the Type I error is controlled by construction. We illustrate our results within two classical models of interacting neurons: the jittering Poisson model and Hawkes processes with \(M\) components interacting in a mean-field framework and evolving in a stationary regime. For the latter model, we obtain a lower bound on the sample size \(n\) necessary to detect the dependency between two neurons.
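A minimal sketch (with hypothetical window and data) of the ingredients above: the coincidence count between two spike trains within a window delta, and a permutation test that shuffles which trial of the second neuron is paired with which trial of the first, so that the permuted pairs are independent by construction. The paper's non-asymptotic power and sample-size analysis is not reproduced here.

```python
# Coincidence count and trial-shuffling permutation test for spike-train synchrony.
import numpy as np

def coincidences(train1, train2, delta=0.005):
    """Number of spike pairs (s, t) with |s - t| <= delta."""
    return sum(np.sum(np.abs(train2 - s) <= delta) for s in train1)

def permutation_test(trains1, trains2, delta=0.005, n_perm=999, seed=None):
    rng = np.random.default_rng(seed)
    n = len(trains1)
    t_obs = sum(coincidences(a, b, delta) for a, b in zip(trains1, trains2))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)                # re-pair the trials at random
        t_perm = sum(coincidences(trains1[i], trains2[perm[i]], delta) for i in range(n))
        count += t_perm >= t_obs
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(8)
trains1 = [np.sort(rng.uniform(0, 1, 30)) for _ in range(20)]
trains2 = [np.sort(np.clip(t + rng.normal(0, 0.002, t.size), 0, 1)) for t in trains1]
print(permutation_test(trains1, trains2))        # jittered copies: small p-value expected
```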
