
In causal inference studies, interest often lies in understanding the mechanisms through which a treatment affects an outcome. One approach is principal stratification (PS), which introduces well-defined causal effects in the presence of confounded post-treatment variables, or mediators, and clearly defines the assumptions for identification and estimation of those effects. The goal of this paper is to extend the PS framework to studies with continuous treatments and continuous post-treatment variables, which introduces a number of unique challenges both in terms of defining causal effects and performing inference. This manuscript provides three key methodological contributions: 1) we introduce novel principal estimands for continuous treatments that provide valuable insights into different causal mechanisms, 2) we utilize Bayesian nonparametric approaches to model the joint distribution of the potential mediating variables based on both Gaussian processes and Dirichlet process mixtures to ensure our approach is robust to model misspecification, and 3) we provide theoretical and numerical justification for utilizing a model for the potential outcomes to identify the joint distribution of the potential mediating variables. Lastly, we apply our methodology to a novel study of the relationship between the economy and arrest rates, and how this is potentially mediated by police capacity.
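The abstract names two Bayesian nonparametric ingredients, Gaussian processes and Dirichlet process mixtures, for modeling the mediating variable under a continuous treatment. The following is a minimal sketch, on synthetic data, of what those two model classes look like in practice; it is not the paper's estimator, and the joint law of the *potential* mediators that the paper targets is only partially observed, so here we simply fit to observed (treatment, mediator) pairs for illustration.

```python
# Illustrative sketch only: (i) a Gaussian-process regression of the mediator
# on a continuous treatment, and (ii) a truncated Dirichlet-process Gaussian
# mixture fit to the (treatment, mediator) pairs. Synthetic data throughout.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
n = 300
treatment = rng.uniform(0.0, 2.0, size=n)                        # continuous treatment T
mediator = np.sin(2 * treatment) + 0.3 * rng.standard_normal(n)  # mediator M

# (i) Gaussian-process model for the mediator given the treatment.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(0.1),
                              normalize_y=True)
gp.fit(treatment.reshape(-1, 1), mediator)
grid = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
m_mean, m_sd = gp.predict(grid, return_std=True)
print("E[M | T = 1.0] is approximately", round(float(m_mean[25]), 2))

# (ii) Dirichlet-process (stick-breaking) mixture for a joint distribution,
# here of (T, M); the paper instead models the joint law of the potential
# mediators across treatment levels.
dpm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior_type="dirichlet_process",
                              covariance_type="full", random_state=0)
dpm.fit(np.column_stack([treatment, mediator]))
print("effective number of mixture components:", int(np.sum(dpm.weights_ > 1e-2)))
```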

Related content

A new feature that lets iOS 8 and OS X Yosemite hand off work between devices seamlessly. > Apple products have always been designed to work together beautifully. But now they may really surprise you. With iOS 8 and OS X Yosemite, you’ll be able to do more wonderful things than ever before.

Source:

Case-based explanations are an intuitive method to gain insight into the decision-making process of deep learning models in clinical contexts. However, medical images cannot be shared as explanations due to privacy concerns. To address this problem, we propose a novel method for disentangling identity and medical characteristics of images and apply it to anonymize medical images. The disentanglement mechanism replaces some feature vectors in an image while ensuring that the remaining features are preserved, obtaining independent feature vectors that encode the images' identity and medical characteristics. We also propose a model to manufacture synthetic privacy-preserving identities to replace the original image's identity and achieve anonymization. The models are applied to medical and biometric datasets, demonstrating their capacity to generate realistic-looking anonymized images that preserve their original medical content. Additionally, the experiments show the network's inherent capacity to generate counterfactual images through the replacement of medical features.
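As a rough illustration of the disentangle-and-replace idea described above, the sketch below uses a toy PyTorch autoencoder whose latent code is split into an identity block and a medical block; anonymization swaps the identity block for a synthetic code, and a counterfactual swaps the medical block instead. The architecture, dimensions, and class names are hypothetical and do not reproduce the paper's model.

```python
# Minimal sketch, not the paper's architecture: an autoencoder whose latent
# code is split into identity and medical parts; anonymization replaces the
# identity part with a synthetic code before decoding.
import torch
import torch.nn as nn

LATENT, ID_DIM = 64, 32          # first ID_DIM latent units encode identity

class DisentangledAE(nn.Module):
    def __init__(self, in_dim=784):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def split(self, x):
        z = self.enc(x)
        return z[:, :ID_DIM], z[:, ID_DIM:]      # (identity, medical)

    def anonymize(self, x, synthetic_id):
        """Decode with the original medical code but a synthetic identity."""
        _, z_med = self.split(x)
        return self.dec(torch.cat([synthetic_id, z_med], dim=1))

    def counterfactual(self, x, donor_x):
        """Keep the identity, borrow the donor's medical code."""
        z_id, _ = self.split(x)
        _, donor_med = self.split(donor_x)
        return self.dec(torch.cat([z_id, donor_med], dim=1))

# Toy usage on random tensors standing in for flattened images.
model = DisentangledAE()
x = torch.randn(4, 784)
fake_id = torch.randn(4, ID_DIM)   # would come from the synthetic-identity model
anon = model.anonymize(x, fake_id)
print(anon.shape)                  # torch.Size([4, 784])
```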

The optimization of open-loop shallow geothermal systems, which includes both design and operational aspects, is an important research area aimed at improving their efficiency and sustainability and at the effective management of groundwater as a shallow geothermal resource. This paper investigates various approaches to the optimization problems arising from these research and implementation questions for groundwater heat pump (GWHP) systems. The identified optimization approaches are thoroughly analyzed based on criteria such as computational cost and applicability. Moreover, a novel classification scheme is introduced that categorizes the approaches according to the type of groundwater simulation model and the type of optimization algorithm used. Simulation models are divided into two types: numerical and simplified (analytical or data-driven) models, while optimization algorithms are divided into gradient-based and derivative-free algorithms. Finally, a comprehensive review of existing approaches in the literature is provided, highlighting their strengths and limitations and offering recommendations both for the use of existing approaches and for the development of new, improved ones in this field.
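To make the algorithm side of this classification concrete, the toy sketch below optimizes a made-up simplified (analytical) objective for well spacing once with a gradient-based method and once with a derivative-free method. The objective function and its coefficients are purely illustrative and are not taken from the review.

```python
# Toy illustration of the two algorithm classes in the taxonomy, applied to a
# hypothetical simplified (analytical) objective: choose the spacing d between
# extraction and injection wells to balance thermal feedback against piping cost.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(d):
    d = np.atleast_1d(d)[0]
    thermal_feedback = np.exp(-d / 15.0)     # decays with well spacing [m]
    piping_cost = 0.004 * d                  # grows linearly with spacing
    return thermal_feedback + piping_cost

# Gradient-based (derivatives approximated numerically here).
grad_res = minimize(objective, x0=[20.0], method="L-BFGS-B", bounds=[(5.0, 200.0)])

# Derivative-free (population-based global search).
df_res = differential_evolution(objective, bounds=[(5.0, 200.0)], seed=0)

print(f"gradient-based optimum:  d = {grad_res.x[0]:6.1f} m")
print(f"derivative-free optimum: d = {df_res.x[0]:6.1f} m")
```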

The task of community detection, which aims to partition a network into clusters of nodes to summarize its large-scale structure, has spawned the development of many competing algorithms with varying objectives. Some community detection methods are inferential, explicitly deriving the clustering objective through a probabilistic generative model, while other methods are descriptive, dividing a network according to an objective motivated by a particular application, making it challenging to compare these methods on the same scale. Here we present a solution to this problem that associates any community detection objective, inferential or descriptive, with its corresponding implicit network generative model. This allows us to compute the description length of a network and its partition under arbitrary objectives, providing a principled measure to compare the performance of different algorithms without the need for "ground truth" labels. Our approach also gives access to instances of the community detection problem that are optimal to any given algorithm, and in this way reveals intrinsic biases in popular descriptive methods, explaining their tendency to overfit. Using our framework, we compare a number of community detection methods on artificial networks, and on a corpus of over 500 structurally diverse empirical networks. We find that more expressive community detection methods exhibit consistently superior compression performance on structured data instances, without having degraded performance on a minority of situations where more specialized algorithms perform optimally. Our results undermine the implications of the "no free lunch" theorem for community detection, both conceptually and in practice, since it is confined to unstructured data instances, unlike relevant community detection problems which are structured by requirement.
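The sketch below illustrates the description-length idea with a deliberately simplified two-part code in the spirit of a microcanonical stochastic block model: bits to state each node's group plus, for every pair of groups, bits to say which of the possible edges are present. It is offered only to make the quantity concrete; the paper derives the exact implicit code associated with each community detection objective.

```python
# Simplified two-part description length of (network, partition); illustrative
# only, not the exact encodings derived in the paper.
from math import lgamma, log, log2
from collections import Counter

def log2_binom(n, k):
    """log2 of the binomial coefficient C(n, k)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

def description_length(n_nodes, edges, partition):
    B = max(partition) + 1
    group_size = Counter(partition)
    e_rs = Counter()                                  # edges between group pairs
    for u, v in edges:
        r, s = sorted((partition[u], partition[v]))
        e_rs[(r, s)] += 1
    dl = n_nodes * log2(B)                            # encode the partition itself
    for r in range(B):
        for s in range(r, B):
            n_pairs = (group_size[r] * (group_size[r] - 1) // 2 if r == s
                       else group_size[r] * group_size[s])
            dl += log2_binom(n_pairs, e_rs[(r, s)])   # which edges are present
    return dl

# Toy comparison: two triangles joined by one edge, good vs. poor partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = [0, 0, 0, 1, 1, 1]
poor = [0, 1, 0, 1, 0, 1]
print(description_length(6, edges, good), "<", description_length(6, edges, poor))
```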

Complete observation of event histories is often impossible due to sampling effects such as right-censoring and left-truncation, but also due to reporting delays and incomplete event adjudication. This is for example the case during interim stages of clinical trials and for health insurance claims. In this paper, we develop a parametric method that takes the aforementioned effects into account, treating the latter two as partially exogenous. The method, which takes the form of a two-step M-estimation procedure, is applicable to multistate models in general, including competing risks and recurrent event models. The effect of reporting delays is derived via thinning, extending existing results for Poisson models. To address incomplete event adjudication, we propose an imputed likelihood approach which, compared to existing methods, has the advantage of allowing for dependencies between the event history and adjudication processes as well as allowing for unreported events and multiple event types. We establish consistency and asymptotic normality under standard identifiability, integrability, and smoothness conditions, and we demonstrate the validity of the percentile bootstrap. Finally, a simulation study shows favorable finite sample performance of our method compared to other alternatives, while an application to disability insurance data illustrates its practical potential.
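The following is a minimal numeric sketch of two of the ingredients mentioned above, the thinning correction for reporting delays and the percentile bootstrap, in a toy Poisson setting with exponential reporting delays. The rates, follow-up window, and estimator are hypothetical simplifications; the paper's two-step M-estimator for general multistate models is considerably richer.

```python
# Toy setting: events occur at rate `lam` on [0, tau]; an event at time t is
# reported by the interim cut only if its Exp(mu) reporting delay fits before
# tau. The reported process is a thinned Poisson process, so the rate estimate
# divides by the reporting probability; a percentile bootstrap gives a CI.
import numpy as np

rng = np.random.default_rng(1)
lam, mu, tau, n_subjects = 0.8, 2.0, 1.0, 400

def reported_counts(rng):
    counts = np.zeros(n_subjects)
    for i in range(n_subjects):
        n_events = rng.poisson(lam * tau)
        times = rng.uniform(0, tau, n_events)
        delays = rng.exponential(1 / mu, n_events)
        counts[i] = np.sum(times + delays <= tau)      # reported by the cut
    return counts

# Probability that an event occurring uniformly on [0, tau] is reported by tau.
p_report = 1 - (1 - np.exp(-mu * tau)) / (mu * tau)

counts = reported_counts(rng)
lam_hat = counts.mean() / (tau * p_report)              # thinning-corrected rate

# Percentile bootstrap over subjects.
boot = np.array([rng.choice(counts, size=n_subjects, replace=True).mean()
                 / (tau * p_report) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda-hat = {lam_hat:.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```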

In the context of clinical and biomedical studies, joint frailty models have been developed to study the joint temporal evolution of recurrent and terminal events, capturing both the heterogeneous susceptibility to experiencing a new episode and the dependence between the two processes. While discretely-distributed frailty is usually more exploitable by clinicians and healthcare providers, the existing literature on joint frailty models predominantly assumes continuous distributions for the random effects. In this article, we present a novel joint frailty model that assumes bivariate discretely-distributed non-parametric frailties with an unknown finite number of mass points. This approach facilitates the identification of latent structures among subjects, grouping them into sub-populations defined by a shared frailty value. We propose an estimation routine based on the Expectation-Maximization algorithm, which not only estimates the number of subgroups but also serves as an unsupervised classification tool. This work is motivated by a study of patients with Heart Failure (HF) receiving ACE inhibitor treatment in the Lombardia region of Italy. The recurrent events of interest are hospitalizations due to HF, and the terminal event is death from any cause.
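To illustrate how a discrete frailty is fitted by Expectation-Maximization and how subjects end up classified into latent subgroups, the sketch below uses a deliberately simplified model with only the recurrent part (counts that are Poisson given the frailty), a known number of mass points, and no terminal event. It is not the paper's bivariate joint model; it only shows the E-step/M-step mechanics.

```python
# Illustrative EM for a simplified recurrent-event-only model:
# counts N_i ~ Poisson(u_i * T_i), with the frailty u_i taking one of K
# unknown mass points with unknown weights.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Synthetic data: two latent subgroups with hospitalization rates 0.5 and 2.0.
n = 500
true_u = rng.choice([0.5, 2.0], size=n, p=[0.6, 0.4])
T = rng.uniform(1, 3, size=n)                    # follow-up times (years)
N = rng.poisson(true_u * T)                      # recurrent-event counts

K = 2
u = np.array([0.2, 1.0])                         # initial mass points
pi = np.full(K, 1 / K)                           # initial weights

for _ in range(200):
    # E-step: posterior probability that subject i belongs to mass point k.
    logw = np.log(pi) + poisson.logpmf(N[:, None], u[None, :] * T[:, None])
    logw -= logw.max(axis=1, keepdims=True)
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)
    # M-step: update weights and mass points.
    pi = w.mean(axis=0)
    u = (w * N[:, None]).sum(axis=0) / (w * T[:, None]).sum(axis=0)

groups = w.argmax(axis=1)                        # unsupervised classification
print("estimated mass points:", np.round(u, 2), "weights:", np.round(pi, 2))
```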

Social behavior, defined as the process by which individuals act and react in response to others, is crucial for the function of societies and holds profound implications for mental health. To fully grasp the intricacies of social behavior and identify potential therapeutic targets for addressing social deficits, it is essential to understand its core principles. Although machine learning algorithms have made it easier to study specific aspects of complex behavior, current methodologies tend to focus primarily on single-animal behavior. In this study, we introduce LISBET (seLf-supervIsed Social BEhavioral Transformer), a model designed to detect and segment social interactions. Our model eliminates the need for feature selection and extensive human annotation by using self-supervised learning to detect and quantify social behaviors from dynamic body parts tracking data. LISBET can be used in hypothesis-driven mode to automate behavior classification using supervised finetuning, and in discovery-driven mode to segment social behavior motifs using unsupervised learning. We found that motifs recognized using the discovery-driven approach not only closely match the human annotations but also correlate with the electrophysiological activity of dopaminergic neurons in the Ventral Tegmental Area (VTA). We hope LISBET will help the community improve our understanding of social behaviors and their neural underpinnings.
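As a very rough stand-in for the discovery-driven workflow described above, the sketch below turns two animals' keypoint tracks into windowed joint features, embeds them (PCA here, in place of the self-supervised transformer), and clusters the embeddings into candidate motifs. The data, window length, and feature choices are hypothetical; this is not LISBET itself.

```python
# Simplified stand-in for discovery-driven motif segmentation, not LISBET.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_frames, n_keypoints = 2000, 7
# Synthetic (x, y) tracks for two animals: shape (frames, 2 animals, keypoints, 2).
tracks = rng.standard_normal((n_frames, 2, n_keypoints, 2)).cumsum(axis=0) * 0.01

win = 20
def window_features(tracks, win):
    """Concatenate both animals' poses and their relative pose over each window."""
    feats = []
    for start in range(0, tracks.shape[0] - win, win):
        w = tracks[start:start + win]                    # (win, 2, K, 2)
        rel = w[:, 0] - w[:, 1]                          # relative pose
        feats.append(np.concatenate([w.reshape(win, -1),
                                     rel.reshape(win, -1)], axis=1).ravel())
    return np.array(feats)

X = window_features(tracks, win)
emb = PCA(n_components=8).fit_transform(X)               # stand-in embedding
motifs = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(emb)
print("motif label per window:", motifs[:20])
```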

Hesitant fuzzy sets are widely used in instances of uncertainty and hesitation. The inclusion relationship is an important and foundational notion for sets, so hesitant fuzzy sets, as a kind of set, require an explicit definition of inclusion. Based on the discrete form of hesitant fuzzy membership degrees, several kinds of inclusion relationships for hesitant fuzzy sets are proposed. Some foundational propositions of hesitant fuzzy sets and of families of hesitant fuzzy sets are then presented. Finally, some foundational propositions of hesitant fuzzy information systems with respect to parameter reduction are put forward, and an example and an algorithm are given to illustrate the process of parameter reduction.
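For readers unfamiliar with the objects involved, the sketch below represents a hesitant fuzzy element as a finite set of membership degrees in [0, 1] and checks one natural candidate inclusion relation: compare sorted degrees componentwise after padding the shorter element with its smallest value. This is only one of several possible definitions and is not necessarily one of those studied in the paper.

```python
# One candidate inclusion relation for hesitant fuzzy sets, for illustration.
def normalize(hfe, length):
    """Sort ascending and pad with the smallest degree up to `length`."""
    h = sorted(hfe)
    return [h[0]] * (length - len(h)) + h

def hfe_leq(h1, h2):
    """Componentwise 'h1 included in h2' for two hesitant fuzzy elements."""
    L = max(len(h1), len(h2))
    return all(a <= b for a, b in zip(normalize(h1, L), normalize(h2, L)))

def hfs_included(A, B):
    """A is included in B iff every object's element in A is included in B's."""
    return all(hfe_leq(A[x], B[x]) for x in A)

A = {"x1": {0.2, 0.4}, "x2": {0.3}}
B = {"x1": {0.3, 0.5, 0.6}, "x2": {0.3, 0.7}}
print(hfs_included(A, B), hfs_included(B, A))    # True False
```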

Donoho and Kipnis (2022) showed that the higher criticism (HC) test statistic has a non-Gaussian phase transition in the detection of sparse differences between two large frequency tables when the counts are low, but remarked that it is probably not optimal. The setting can be considered heterogeneous, with cells containing larger total counts better able to detect smaller differences. We provide a general study of sparse detection arising from such heterogeneous settings and show that optimality of the HC test statistic requires thresholding; in the case of frequency table comparison, for example, one restricts to p-values of cells with total counts exceeding a threshold. The use of thresholding also leads to optimality of the HC test statistic when it is applied to the sparse Poisson means model of Arias-Castro and Wang (2015). The phase transitions we consider here are non-Gaussian and involve an interplay between the rate functions of the response and sample size distributions. We also show, both theoretically and in a numerical study, that applying thresholding to the Bonferroni test statistic results in better sparse mixture detection in heterogeneous settings.
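The sketch below shows what thresholded higher criticism looks like operationally for a toy two-table comparison: each cell gets an exact binomial p-value conditional on its total count, but only cells whose total exceeds a threshold enter the HC statistic. The data generation and the per-cell test are illustrative choices, not the paper's exact setup.

```python
# Thresholded higher criticism for comparing two frequency tables (toy setup).
import numpy as np
from scipy.stats import binomtest

def hc_statistic(pvals, alpha0=0.5):
    """Higher criticism over the smallest alpha0-fraction of sorted p-values."""
    p = np.clip(np.sort(np.asarray(pvals, dtype=float)), 1e-12, 1 - 1e-12)
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))
    return hc[:k].max()

def thresholded_hc(table_x, table_y, thresh, alpha0=0.5):
    """Exact binomial p-values per cell (conditional on the cell total),
    restricted to cells whose total count exceeds `thresh`."""
    pvals = []
    for x, y in zip(table_x, table_y):
        m = x + y
        if m > thresh:
            pvals.append(binomtest(int(x), int(m), p=0.5).pvalue)
    return hc_statistic(pvals, alpha0) if pvals else -np.inf

# Toy example: 2000 cells, a sparse shift confined to 20 high-count cells.
rng = np.random.default_rng(4)
base = rng.poisson(0.5, size=2000)
base[:20] += 30
x = rng.poisson(base)
y = rng.poisson(base)
y[:20] = rng.poisson(base[:20] * 1.8)
print("HC, all cells:       ", round(thresholded_hc(x, y, thresh=0), 2))
print("HC, totals > 10 only:", round(thresholded_hc(x, y, thresh=10), 2))
```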

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM encompasses a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtain the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung has iteration complexity $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
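To make the NMF application concrete, the sketch below runs Lee-Seung multiplicative updates with a small L2 (Tikhonov) penalty on both factors: each factor update minimizes a standard separable majorizer of the regularized Frobenius objective in that block, which is exactly the BMM structure. The penalty value and data are illustrative, and this is not claimed to be the precise regularized variant analyzed in the paper.

```python
# Regularized Lee-Seung multiplicative updates for NMF as a BMM instance.
import numpy as np

rng = np.random.default_rng(5)
V = np.abs(rng.standard_normal((60, 40)))         # nonnegative data matrix
r, lam, eps = 5, 1e-3, 1e-10                      # rank, L2 penalty, safeguard

W = np.abs(rng.standard_normal((60, r)))
H = np.abs(rng.standard_normal((r, 40)))

def objective(V, W, H, lam):
    return 0.5 * np.linalg.norm(V - W @ H) ** 2 + 0.5 * lam * (
        np.linalg.norm(W) ** 2 + np.linalg.norm(H) ** 2)

for it in range(300):
    # Block 1: update H with W fixed (minimizes a separable majorizer in H).
    H *= (W.T @ V) / (W.T @ W @ H + lam * H + eps)
    # Block 2: update W with H fixed.
    W *= (V @ H.T) / (W @ H @ H.T + lam * W + eps)

print("final regularized objective:", round(objective(V, W, H, lam), 3))
```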

The fraction nonconforming is a key quality measure used in the design of statistical quality control in clinical laboratory medicine. For normally distributed populations of measurements, confidence bounds for the fraction nonconforming to each of the lower and upper quality specification limits, when both the random and the systematic error are unknown, can be calculated using the noncentral t-distribution; this is described in detail and illustrated with examples.
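A numeric sketch of the calculation for the upper specification limit U only, with illustrative numbers: for X ~ N(mu, sigma^2) and a sample of size n with mean xbar and standard deviation s, the statistic sqrt(n)(U - xbar)/s follows a noncentral t with n - 1 degrees of freedom and noncentrality sqrt(n) * z_{1-p}, where p = P(X > U) is the fraction nonconforming; inverting the CDF in the noncentrality parameter gives confidence bounds for p. Bounds for the lower limit are obtained symmetrically.

```python
# Confidence bounds for the fraction nonconforming the upper specification
# limit via the noncentral t-distribution. Illustrative numbers only.
import numpy as np
from scipy.stats import nct, norm
from scipy.optimize import brentq

def fraction_nonconforming_bounds(xbar, s, n, U, alpha=0.05):
    t_obs = np.sqrt(n) * (U - xbar) / s
    df = n - 1

    def delta_solve(target):
        # Noncentrality delta with P(T_{df, delta} <= t_obs) = target.
        return brentq(lambda d: nct.cdf(t_obs, df, d) - target, 1e-8, 60.0)

    delta_lo = delta_solve(1 - alpha / 2)         # small delta -> large p
    delta_hi = delta_solve(alpha / 2)             # large delta -> small p
    p_hat = 1 - norm.cdf((U - xbar) / s)          # naive plug-in estimate
    p_upper = 1 - norm.cdf(delta_lo / np.sqrt(n))
    p_lower = 1 - norm.cdf(delta_hi / np.sqrt(n))
    return p_hat, p_lower, p_upper

# Hypothetical example: n = 20 measurements, U set two sample sds above the mean.
print(fraction_nonconforming_bounds(xbar=100.0, s=5.0, n=20, U=110.0))
```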
