Cure rate models are mostly used to study data arising from cancer clinical trials; their use in the context of infectious diseases has not been well explored. In 2008, Tournoud and Ecochard first proposed a mechanistic formulation of the cure rate model in the context of infectious diseases with multiple exposures to infection. However, they assumed a simple Poisson distribution to capture the unobserved pathogens at each exposure time. In this paper, we propose a new cure rate model to study infectious diseases with discrete multiple exposures to infection. Our formulation captures both over-dispersion and under-dispersion in the pathogen count at each time of exposure. We also propose a new estimation method based on the expectation maximization (EM) algorithm to compute the maximum likelihood estimates of the model parameters. We carry out a detailed Monte Carlo simulation study to demonstrate the performance of the proposed model and estimation algorithm. The flexibility of our proposed model also allows us to carry out model discrimination; for this purpose, we use both the likelihood ratio test and information-based criteria. Finally, we illustrate our proposed model using recently collected data on COVID-19.
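To make the estimation step concrete, here is a minimal EM sketch for the classical two-component mixture cure model with an exponential latency distribution. This is a simplified stand-in for intuition only, not the multiple-exposure, over/under-dispersed formulation proposed in the paper; all data-generating values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
cured = rng.random(n) < 0.3                        # true cure fraction pi = 0.3
t_event = rng.exponential(1 / 0.8, size=n)         # latency for susceptibles, rate 0.8
c = rng.exponential(2.0, size=n)                   # censoring times
t = np.where(cured, c, np.minimum(t_event, c))     # cured subjects are always censored
d = (~cured) & (t_event <= c)                      # event indicator

pi, lam = 0.5, 1.0                                 # initial values
for _ in range(200):
    # E-step: posterior probability of being susceptible, given censoring at t
    s = (1 - pi) * np.exp(-lam * t)
    w = np.where(d, 1.0, s / (pi + s))
    # M-step: closed-form updates for the exponential mixture cure model
    pi = 1 - w.mean()
    lam = d.sum() / (w * t).sum()
print("pi_hat, lambda_hat:", pi, lam)
```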
In recent years, large-scale clinical data such as patient surveys and medical records have been playing an increasing role in medical data science. These large-scale clinical data, collectively referred to as real-world data (RWD), are expected to be widely used in large-scale observational studies of specific diseases, in personalized or precision medicine, and in identifying responders to drugs or treatments. Applying RWD to estimate heterogeneous treatment effects (HTE) has already become a trending topic. HTE has the potential to considerably impact the development of precision medicine by helping doctors make more informed, precise treatment decisions and provide more personalized medical care. The statistical models used to estimate HTE are called treatment effect models. Powers et al. proposed several treatment effect models for observational studies, and pointed out that the bagging causal MARS (BCM) method performs outstandingly well compared to other models. While BCM has excellent performance, it still has room for improvement. In this paper, we propose a new treatment effect model, the shrinkage causal bagging MARS method, which improves their shared-basis conditional mean regression framework in the following ways: first, we estimate the basis functions using the transformed outcome; we then apply the group LASSO method to optimize the model and estimate its parameters. We also focus on pursuing better model interpretability to improve ethical acceptance. We design simulations to verify the performance of the proposed method, which is superior in terms of mean squared error and bias in most simulation settings. We also apply it to the real data set ACTG 175 to verify its usability, where our results are supported by previous studies.
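As a sketch of the two ingredients named above, the snippet below builds a transformed outcome whose conditional mean equals the treatment effect, regresses it on MARS-style hinge basis functions, and applies an L1 penalty. A plain LASSO stands in for the group LASSO step, the propensity score is assumed known (randomized, e = 0.5), and the knot grid and settings are illustrative, not the paper's estimated bases.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
T = rng.binomial(1, 0.5, size=n)              # randomized treatment, so e(x) = 0.5
tau = np.maximum(0, X[:, 0])                  # true heterogeneous treatment effect
Y = X[:, 1] + tau * T + rng.normal(0, 0.5, size=n)

e = 0.5
Y_star = T * Y / e - (1 - T) * Y / (1 - e)    # transformed outcome: E[Y* | X] = tau(X)

# MARS-style hinge basis on a fixed knot grid (stand-in for estimated basis functions)
knots = np.linspace(-0.8, 0.8, 9)
B = np.hstack([np.maximum(0, X[:, [j]] - k) for j in range(2) for k in knots])

model = Lasso(alpha=0.05).fit(B, Y_star)      # plain LASSO as a stand-in for group LASSO
tau_hat = model.predict(B)
print("MSE of the CATE estimate:", np.mean((tau_hat - tau) ** 2))
```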
Deep learning models are being adopted and applied to various critical decision-making tasks, yet they are trained to provide point predictions without degrees of confidence. The trustworthiness of deep learning models can be increased when they are paired with uncertainty estimates. Conformal Prediction has emerged as a promising method to pair machine learning models with prediction intervals, allowing for a view of the model's uncertainty. However, popular uncertainty estimation methods for conformal prediction fail to provide heteroskedastic intervals that are equally accurate for all samples. In this paper, we propose a method to estimate the uncertainty of each sample by calculating the variance obtained from a Deep Regression Forest. We show that the deep regression forest variance improves the efficiency and coverage of normalized inductive conformal prediction on a drug response prediction task.
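Here is a minimal sketch of normalized inductive conformal prediction with a per-sample difficulty estimate taken from the spread of per-tree predictions. A scikit-learn random forest stands in for the deep regression forest, and the synthetic data and miscoverage level are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.2 * np.abs(X[:, 0]))  # heteroskedastic noise

X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def mean_and_sigma(forest, X):
    per_tree = np.stack([t.predict(X) for t in forest.estimators_])
    return per_tree.mean(axis=0), per_tree.std(axis=0) + 1e-6  # sigma: spread across trees

mu_cal, sig_cal = mean_and_sigma(forest, X_cal)
scores = np.abs(y_cal - mu_cal) / sig_cal                      # normalized nonconformity
alpha = 0.1
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

mu_te, sig_te = mean_and_sigma(forest, X_te)
lo, hi = mu_te - q * sig_te, mu_te + q * sig_te                # heteroskedastic intervals
print("coverage:", np.mean((y_te >= lo) & (y_te <= hi)))
```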
When constructing parametric models to predict the cost of future claims, several important details have to be taken into account: (i) models should be designed to accommodate deductibles, policy limits, and coinsurance factors; (ii) parameters should be estimated robustly to control the influence of outliers on model predictions; and (iii) all point predictions should be augmented with estimates of their uncertainty. The methodology proposed in this paper provides a framework for addressing all these aspects simultaneously. Using payment-per-payment and payment-per-loss variables, we construct the adaptive version of method of winsorized moments (MWM) estimators for the parameters of the truncated and censored lognormal distribution. Further, the asymptotic distributional properties of this approach are derived and compared with those of the maximum likelihood estimator (MLE) and method of trimmed moments (MTM) estimators, the latter being a primary competitor to MWM. Moreover, the theoretical results are validated with extensive simulation studies and risk measure sensitivity analysis. Finally, the practical performance of these methods is illustrated using the well-studied data set of 1500 U.S. indemnity losses. With this real data set, it is also demonstrated that the composite models do not provide much improvement in the quality of predictive models compared to a stand-alone fitted distribution, especially for truncated and censored sample data.
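For intuition on the method-of-winsorized-moments step, here is a minimal sketch that matches empirical and theoretical winsorized moments for the normal model on the log scale (the log of a lognormal). It ignores the payment-type truncation and censoring adjustments that the paper handles, and the winsorizing proportions and parameter values are illustrative.

```python
import numpy as np
from scipy import stats, optimize, integrate
from scipy.stats.mstats import winsorize

a, b = 0.05, 0.10                      # lower/upper winsorizing proportions
rng = np.random.default_rng(0)
x = np.log(stats.lognorm(s=0.8, scale=np.exp(5.0)).rvs(5000, random_state=rng))

w = winsorize(x, limits=(a, b))        # empirical winsorized sample (log scale)
m1_hat, m2_hat = w.mean(), (w**2).mean()

def theo_moments(mu, sigma):
    # theoretical winsorized moments of N(mu, sigma) winsorized at its a / (1-b) quantiles
    qa, qb = stats.norm.ppf([a, 1 - b], loc=mu, scale=sigma)
    f = stats.norm(loc=mu, scale=sigma).pdf
    mids = [integrate.quad(lambda t, k=k: t**k * f(t), qa, qb)[0] for k in (1, 2)]
    return (a * qa + b * qb + mids[0], a * qa**2 + b * qb**2 + mids[1])

# solve for (mu, sigma) equating sample and theoretical winsorized moments
sol = optimize.root(lambda p: np.subtract(theo_moments(*p), (m1_hat, m2_hat)),
                    x0=[x.mean(), x.std()])
print("MWM estimates (mu, sigma):", sol.x)
```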
With the increasing volume of data used across machine learning tasks, the capability to target specific subsets of data becomes more important. To aid in this capability, the recently proposed Submodular Mutual Information (SMI) has been effectively applied across numerous tasks in the literature to perform targeted subset selection with the aid of an exemplar query set. However, none of these works provides theoretical guarantees on the sensitivity of SMI to a subset's relevance to, and coverage of, the targeted data. For the first time, we provide such guarantees by deriving similarity-based bounds on quantities related to relevance and coverage of the targeted data. With these bounds, we show that the SMI functions, which have empirically shown success in multiple applications, are theoretically sound in achieving good query relevance and query coverage.
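As a concrete instance of SMI-based targeted selection, the sketch below greedily maximizes a facility-location-style SMI objective combining query coverage with a query-relevance term. The objective form, weighting, and cosine-similarity features are illustrative, not the exact family of functions analyzed in the paper.

```python
import numpy as np

def smi_greedy(S_qd, k, eta=1.0):
    """Greedy selection for a facility-location-style SMI objective:
    sum_q max_{i in A} S[q, i]  (coverage)  +  eta * sum_{i in A} max_q S[q, i]  (relevance)."""
    cover = np.zeros(S_qd.shape[0])          # cover[q]: best similarity of query q to A so far
    selected = []
    relevance = S_qd.max(axis=0)             # modular relevance term per candidate
    for _ in range(k):
        gains = np.maximum(S_qd - cover[:, None], 0).sum(axis=0) + eta * relevance
        gains[selected] = -np.inf            # no repeats
        i = int(np.argmax(gains))
        selected.append(i)
        cover = np.maximum(cover, S_qd[:, i])
    return selected

rng = np.random.default_rng(0)
D = rng.normal(size=(500, 16)); Q = rng.normal(size=(10, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
print(smi_greedy(Q @ D.T, k=20))             # S[q, i] = cosine similarity
```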
Class imbalance is a pervasive issue in the field of disease classification from medical images. It is necessary to balance out the class distribution while training a model for decent results. However, in the case of rare medical diseases, images from affected patients are much harder to come by than images from non-affected patients, resulting in unwanted class imbalance. Various approaches to tackling class imbalance have been explored so far, each with its own drawbacks. In this research, we propose an outlier-detection-based binary medical image classification technique that can handle even the most extreme cases of class imbalance. We utilize a dataset of malaria-parasitized and uninfected cells. An autoencoder model titled AnoMalNet is first trained with only the uninfected cell images and is then used to classify both the affected and non-affected cell images by thresholding a loss value. We achieve an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52% respectively, performing better than large deep learning models and other published works. As our proposed approach can provide competitive results without needing disease-positive samples during training, it should prove useful in binary disease classification on imbalanced datasets.
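The snippet below sketches the train-on-negatives, threshold-the-reconstruction-loss idea with a small convolutional autoencoder in PyTorch. The architecture and threshold choice are illustrative stand-ins, not the actual AnoMalNet model.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Minimal convolutional autoencoder (illustrative, not AnoMalNet)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

# Training uses only uninfected images, e.g. minimizing per-batch MSE:
#   opt = torch.optim.Adam(model.parameters(), 1e-3)
#   loss = ((model(batch) - batch) ** 2).mean(); loss.backward(); opt.step()

def classify(model, x, threshold):
    """Flag images whose reconstruction loss exceeds the threshold as parasitized."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2, 3))  # per-image MSE
    return err > threshold
```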
Due to its optimal complexity, the multigrid (MG) method is one of the most popular approaches for solving large-scale linear systems arising from the discretization of partial differential equations. However, the parallel implementation of standard MG methods, which are inherently multiplicative, suffers from increasing communication complexity. In such cases, the additive variants of MG methods provide a good alternative due to their inherently parallel nature, although they exhibit slower convergence. This work combines the additive multigrid method with the multipreconditioned conjugate gradient (MPCG) method. In the proposed approach, the MPCG method employs the corrections from the different levels of the MG hierarchy as separate preconditioned search directions and updates the current iterate using a linear combination of these directions, where the optimal coefficients of the combination are computed by exploiting the energy-norm minimization of the CG method. The idea behind our approach is to combine the $A$-conjugacy of the search directions of the MPCG method with the quasi $H_1$-orthogonality of the corrections from the MG hierarchy. In the numerical section, we study the performance of the proposed method compared to the standard additive and multiplicative MG methods used as preconditioners for the CG method.
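A compact sketch of the core iteration described here: each level contributes one preconditioned direction per iteration, and the energy-norm-optimal combination is obtained from a small Gram system. The two "levels" in the usage example (Jacobi plus one aggregation-based coarse correction) are a toy stand-in for a full MG hierarchy, and full reorthogonalization is used for simplicity.

```python
import numpy as np

def mpcg(A, b, preconds, tol=1e-8, maxit=200):
    """Multipreconditioned CG: each preconditioner supplies one search direction
    per iteration; the step is the energy-norm-optimal linear combination."""
    x, r = np.zeros(len(b)), b.copy()
    P_hist, AP_hist = [], []
    for _ in range(maxit):
        Z = np.column_stack([M(r) for M in preconds])   # one direction per level
        for P, AP in zip(P_hist, AP_hist):              # A-orthogonalize vs. past blocks
            Z -= P @ np.linalg.lstsq(P.T @ AP, AP.T @ Z, rcond=None)[0]
        AZ = A @ Z
        alpha = np.linalg.lstsq(Z.T @ AZ, Z.T @ r, rcond=None)[0]  # small Gram system
        x += Z @ alpha
        r -= AZ @ alpha
        P_hist.append(Z); AP_hist.append(AZ)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

n = 127
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson matrix
R = np.zeros((n // 2, n))                               # aggregation restriction
for i in range(n // 2):
    R[i, 2 * i:2 * i + 2] = 0.5
Ac = R @ A @ R.T
preconds = [lambda r: r / np.diag(A),                   # fine-level Jacobi
            lambda r: R.T @ np.linalg.solve(Ac, R @ r)] # coarse-grid correction
x = mpcg(A, np.ones(n), preconds)
```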
Proxy causal learning (PCL) is a method for estimating the causal effect of treatments on outcomes in the presence of unobserved confounding, using proxies (structured side information) for the confounder. This is achieved via two-stage regression: in the first stage, we model relations among the treatment and proxies; in the second stage, we use this model to learn the effect of treatment on the outcome, given the context provided by the proxies. PCL guarantees recovery of the true causal effect, subject to identifiability conditions. We propose a novel method for PCL, the deep feature proxy variable method (DFPV), to address the case where the proxies, treatments, and outcomes are high-dimensional and have complex nonlinear relationships, as represented by deep neural network features. We show that DFPV outperforms recent state-of-the-art PCL methods on challenging synthetic benchmarks, including settings involving high-dimensional image data. Furthermore, we show that PCL can be applied to off-policy evaluation for the confounded bandit problem, in which DFPV also exhibits competitive performance.
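For intuition, here is the two-stage idea in its simplest linear form: stage 1 regresses the outcome proxy on the treatment and treatment proxy, and stage 2 regresses the outcome on the treatment and the stage-1 prediction, recovering the causal coefficient despite the unobserved confounder. DFPV replaces these linear maps with learned deep features; the simulation design here is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20000
U = rng.normal(size=n)                        # unobserved confounder
Z = U + rng.normal(scale=0.5, size=n)         # treatment-side proxy
W = U + rng.normal(scale=0.5, size=n)         # outcome-side proxy
A = U + rng.normal(scale=0.5, size=n)         # treatment
Y = A + 2 * U + rng.normal(scale=0.5, size=n) # true causal effect of A on Y is 1

naive = LinearRegression().fit(A[:, None], Y).coef_[0]   # confounded estimate

# Stage 1: model the outcome proxy from treatment and treatment proxy
stage1 = LinearRegression().fit(np.column_stack([A, Z]), W)
W_hat = stage1.predict(np.column_stack([A, Z]))

# Stage 2: regress the outcome on treatment and the stage-1 prediction
stage2 = LinearRegression().fit(np.column_stack([A, W_hat]), Y)
print("naive:", naive, " proximal two-stage:", stage2.coef_[0])  # ~1.65 vs. ~1.0
```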
Accurate and interpretable diagnostic models are crucial in the safety-critical field of medicine. We investigate the interpretability of our proposed biomarker-based lung ultrasound diagnostic pipeline to enhance clinicians' diagnostic capabilities. The objective of this study is to assess whether explanations from a decision tree classifier, utilizing biomarkers, can improve users' ability to identify inaccurate model predictions compared to conventional saliency maps. Our findings demonstrate that decision tree explanations, based on clinically established biomarkers, can assist clinicians in detecting false positives, thus improving the reliability of diagnostic models in medicine.
Understanding causality helps to structure interventions aimed at specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data to deep learning-based approaches rooted in pattern recognition. The rapid accumulation of massive data has promoted the emergence of causal discovery methods with excellent scalability. Existing surveys of causal discovery methods mainly focus on traditional methods based on constraints, scores, and functional causal models (FCMs); they lack a thorough sorting out and elaboration of deep learning-based methods, and also lack consideration and exploration of causal discovery methods from the perspective of variable paradigms. Therefore, we divide the possible causal discovery tasks into three types according to the variable paradigm and give a definition of each task; we define and instantiate the relevant datasets and the final causal model constructed for each task, and then review the main existing causal discovery methods for the different tasks. Finally, we propose several roadmaps, from different perspectives, for the current research gaps in the field of causal discovery and point out future research directions.
Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT_BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.
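In the spirit of the shared-projection idea described here, the sketch below pairs a student layer with trainable up/down projections to the teacher's hidden size and penalizes the mismatch in both directions. The dimensions, the combination of the two losses, and all names are illustrative assumptions, not the exact training recipe of the paper.

```python
import torch
import torch.nn as nn

d_t, d_s = 768, 192                               # teacher / student hidden sizes (illustrative)

class SharedProjection(nn.Module):
    """Trainable projections aligning student and teacher hidden states layer-wise."""
    def __init__(self, d_s, d_t):
        super().__init__()
        self.up = nn.Linear(d_s, d_t, bias=False)     # student -> teacher space
        self.down = nn.Linear(d_t, d_s, bias=False)   # teacher -> student space
    def forward(self, h_s, h_t):
        # match projected student states to (frozen) teacher states, and vice versa
        loss_up = nn.functional.mse_loss(self.up(h_s), h_t.detach())
        loss_down = nn.functional.mse_loss(h_s, self.down(h_t.detach()))
        return loss_up + loss_down

proj = SharedProjection(d_s, d_t)
h_s = torch.randn(8, 128, d_s)                    # (batch, seq, hidden) student layer output
h_t = torch.randn(8, 128, d_t)                    # matching teacher layer output
print(proj(h_s, h_t))                             # layer-wise distillation loss term
```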