The focus of precision medicine is on decision support, often in the form of dynamic treatment regimes (DTRs), which are sequences of decision rules. At each decision point, the decision rules determine the next treatment according to the patient's baseline characteristics, the information on treatments and responses accrued by that point, and the patient's current health status, including symptom severity and other measures. However, DTR estimation with ordinal outcomes is rarely studied, and rarer still in the context of interference, where one patient's treatment may affect another's outcome. In this paper, we propose the weighted proportional odds model (WPOM): a regression-based, doubly robust approach to single-stage DTR estimation for ordinal outcomes. The method also accounts for possible interference between individuals sharing a household through the use of covariate balancing weights derived from joint propensity scores. Examining different types of balancing weights, we verify the double robustness of WPOM with our adjusted weights via simulation studies. We further extend WPOM to multi-stage DTR estimation with household interference. Lastly, we demonstrate the proposed methodology in an analysis of longitudinal survey data from the Population Assessment of Tobacco and Health study, which motivates this work.
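The abstract does not give the fitting details; as a rough illustration of the regression backbone, the sketch below maximizes a weighted cumulative-logit (proportional odds) log-likelihood with SciPy, assuming the balancing weights have already been derived (e.g. from joint propensity scores). The function name, parameterization, and toy data are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a weighted proportional odds fit, assuming precomputed balancing weights w.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_weighted_prop_odds(X, y, w, n_levels):
    """y takes values 0..n_levels-1; w are balancing weights (e.g. from joint propensity scores)."""
    n, p = X.shape
    K = n_levels

    def unpack(theta):
        beta = theta[:p]
        # strictly increasing cutpoints via cumulative exponentials
        alpha = np.cumsum(np.concatenate([theta[p:p + 1], np.exp(theta[p + 1:])]))
        return beta, alpha

    def neg_loglik(theta):
        beta, alpha = unpack(theta)
        eta = X @ beta
        # cumulative probabilities P(Y <= k | x); the last category has cumulative probability 1
        cum = np.column_stack([expit(a - eta) for a in alpha] + [np.ones(n)])
        lower = np.column_stack([np.zeros(n), cum[:, :-1]])
        prob = np.clip((cum - lower)[np.arange(n), y], 1e-12, None)
        return -np.sum(w * np.log(prob))

    res = minimize(neg_loglik, np.zeros(p + K - 1), method="BFGS")
    return unpack(res.x)

# toy example with a simulated ordinal outcome and unit weights standing in for balancing weights
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
latent = X @ np.array([1.0, -0.5]) + rng.logistic(size=500)
y = np.digitize(latent, bins=[0.0, 1.5])        # ordinal outcome in {0, 1, 2}
beta_hat, alpha_hat = fit_weighted_prop_odds(X, y, np.ones(500), n_levels=3)
```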
In biomedical and public health association studies, binary outcome variables may be subject to misclassification, resulting in substantial bias in effect estimates. The feasibility of addressing binary outcome misclassification in regression models is often hindered by model identifiability issues. In this paper, we characterize the identifiability problems in this class of models as a specific case of "label switching" and leverage a pattern in the resulting parameter estimates to resolve the permutation invariance of the complete-data log-likelihood. Our proposed algorithm for binary outcome misclassification models does not require gold-standard labels and relies only on the assumption that outcomes are correctly classified at least 50% of the time. A label-switching correction is applied within the estimation methods to recover unbiased effect estimates and to estimate misclassification rates. Open-source software is provided to implement the proposed methods. We present a detailed simulation study of the proposed methodology and apply the methods to data from the 2020 Medical Expenditure Panel Survey (MEPS).
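As an illustration of the label-switching issue and its correction, the following is a minimal sketch, assuming a generic EM fit of a logistic regression whose binary outcome is observed with misclassification; the flip at the end applies the at-least-50%-correct assumption. The authors' algorithm and released software may differ in detail.

```python
# Hedged sketch: EM for logistic regression with a misclassified outcome, plus a label-switching fix.
import numpy as np
from sklearn.linear_model import LogisticRegression

def em_misclassified_logistic(X, y, n_iter=100):
    n = len(y)
    # initialize by largely trusting the observed labels, which breaks the label-switching symmetry
    gamma = np.where(y == 1, 0.9, 0.1)          # P(true outcome = 1 | data)
    X_aug = np.vstack([X, X])                   # each subject contributes a row for true outcome 1 and 0
    y_aug = np.concatenate([np.ones(n), np.zeros(n)])
    model = LogisticRegression(C=1e6, max_iter=1000)   # effectively unpenalized
    for _ in range(n_iter):
        # M-step: weighted logistic regression for the true-outcome model, plus misclassification rates
        model.fit(X_aug, y_aug, sample_weight=np.concatenate([gamma, 1 - gamma]))
        p = model.predict_proba(X)[:, 1]
        sens = np.sum(gamma * y) / np.sum(gamma)                    # P(observe 1 | true 1)
        spec = np.sum((1 - gamma) * (1 - y)) / np.sum(1 - gamma)    # P(observe 0 | true 0)
        # E-step: posterior probability that the true outcome is 1
        lik1 = np.where(y == 1, sens, 1 - sens)
        lik0 = np.where(y == 1, 1 - spec, spec)
        gamma = p * lik1 / (p * lik1 + (1 - p) * lik0)
    # label-switching correction: outcomes are assumed correctly classified >50% of the time,
    # so a solution with sens + spec < 1 is the permuted copy and gets flipped back
    if sens + spec < 1:
        sens, spec = 1 - spec, 1 - sens
        model.coef_, model.intercept_ = -model.coef_, -model.intercept_
    return model, sens, spec
```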
Medical studies of chronic disease are often interested in the relationship between longitudinal risk factor profiles and individuals' later-life disease outcomes. These profiles are typically subject to intermediate structural changes due to treatment or environmental influences. Analysis of such studies can be handled within the joint model framework. However, current joint models neither accommodate structural changes in the residual variability of the risk profile nor the influence of subject-specific residual variability on the time-to-event outcome. In this paper, we extend the joint model framework to address these two sources of heterogeneous intra-individual variability. A Bayesian approach is used to estimate the unknown parameters, and simulation studies are conducted to investigate the performance of the method. The proposed joint model is applied to the Framingham Heart Study to investigate the influence of anti-hypertensive medication on systolic blood pressure variability, together with the effect of that variability on the risk of developing cardiovascular disease. We show that anti-hypertensive medication is associated with elevated systolic blood pressure variability and that increased variability elevates the risk of developing cardiovascular disease.
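A much-simplified sketch of only the longitudinal "location-scale" piece of such a model is given below in PyMC: each subject gets both a mean level and a residual standard deviation, with medication allowed to shift both. The survival submodel linking subject-specific variability to the time-to-event outcome, which is central to the paper, is omitted, and all variable names, priors, and toy data are illustrative assumptions.

```python
# Simplified location-scale longitudinal submodel for SBP with subject-specific residual SD.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_subj, n_visits = 80, 6
subj = np.repeat(np.arange(n_subj), n_visits)
med = rng.binomial(1, 0.4, size=n_subj)[subj]                    # anti-hypertensive use
time = np.tile(np.arange(n_visits), n_subj)
sbp = 130 + 0.8 * time - 5 * med + rng.normal(0, 8 + 4 * med)    # toy SBP trajectories

with pm.Model() as location_scale:
    b = pm.Normal("b", 130.0, 20.0, shape=n_subj)                # subject-specific mean level
    beta_t = pm.Normal("beta_t", 0.0, 1.0)
    beta_med = pm.Normal("beta_med", 0.0, 5.0)
    mu = b[subj] + beta_t * time + beta_med * med

    omega = pm.Normal("omega", 2.0, 1.0, shape=n_subj)           # subject-specific log residual SD
    alpha_med = pm.Normal("alpha_med", 0.0, 1.0)                 # medication effect on variability
    sigma = pm.math.exp(omega[subj] + alpha_med * med)

    pm.Normal("sbp_obs", mu, sigma, observed=sbp)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
```

In the full joint model described in the abstract, the subject-specific residual variability (omega here) would additionally enter the hazard of the time-to-event submodel.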
Objective: Identifying study-eligible patients within clinical databases is a critical step in clinical research. However, accurate query design typically requires extensive technical and biomedical expertise. We sought to create a system capable of generating data model-agnostic queries while also providing novel logical reasoning capabilities for complex clinical trial eligibility criteria. Materials and Methods: The task of query creation from eligibility criteria requires solving several text-processing problems, including named entity recognition and relation extraction, sequence-to-sequence transformation, normalization, and reasoning. We incorporated hybrid deep learning and rule-based modules for these tasks, as well as a knowledge base built from the Unified Medical Language System (UMLS) and linked ontologies. To enable data model-agnostic query creation, we introduce a novel method for tagging database schema elements using UMLS concepts. To evaluate our system, called LeafAI, we compared it with a human database programmer on the task of identifying patients who had been enrolled in 8 clinical trials conducted at our institution. We measured performance by the number of actually enrolled patients matched by the generated queries. Results: LeafAI matched a mean of 43% of enrolled patients, with 27,225 patients deemed eligible across the 8 clinical trials, compared with 27% matched and 14,587 deemed eligible for the queries written by the human database programmer. The human programmer spent a total of 26 hours crafting queries, compared with several minutes for LeafAI. Conclusions: Our work contributes a state-of-the-art data model-agnostic query generation system capable of conditional reasoning using a knowledge base. We demonstrate that LeafAI can rival an experienced human programmer in finding patients eligible for clinical trials.
A causal decomposition analysis allows researchers to determine whether the difference in a health outcome between two groups can be attributed to differences in each group's distribution of one or more modifiable mediator variables. With this knowledge, researchers and policymakers can focus on designing interventions that target these mediator variables. Existing methods for causal decomposition analysis either focus on a single mediator variable or assume that the mediator variables are conditionally independent of one another given the group label and the mediator-outcome confounders. In this paper, we propose a flexible causal decomposition analysis method that can accommodate multiple correlated and interacting mediator variables, which are frequently seen in studies of health behaviors and studies of environmental pollutants. We extend a Monte Carlo-based causal decomposition analysis method to this setting by using a multivariate mediator model that can accommodate any combination of binary and continuous mediator variables. Furthermore, we state the causal assumptions needed to identify both joint and path-specific decomposition effects through each mediator variable. We perform a simulation study to illustrate the reduction in bias and confidence interval width of the decomposition effects under our proposed method. We also apply our approach to examine whether differences in smoking status and dietary inflammation score explain any of the Black-White differences in incident diabetes using data from a national cohort study.
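The Monte Carlo idea can be sketched as follows: fit group-specific mediator models that allow the mediators to be correlated, repeatedly draw both mediators for one group from the other group's fitted distribution, and compare the resulting predicted risk with the observed disparity. The models, variable names, and toy data below are illustrative assumptions, not the authors' specification, which also handles path-specific effects and confounder adjustment more carefully.

```python
# Rough Monte Carlo sketch of a causal decomposition with two correlated mediators (smoke, diet).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def decompose(df, n_draws=200, seed=0):
    """Total disparity in diabetes risk and the part attributable to shifting both mediators jointly."""
    rng = np.random.default_rng(seed)
    # mediator models fitted within each group given a confounder (age); modelling diet
    # conditional on smoking keeps the two mediators correlated
    m_smoke = {g: smf.logit("smoke ~ age", d).fit(disp=0) for g, d in df.groupby("group")}
    m_diet = {g: smf.ols("diet ~ age + smoke", d).fit() for g, d in df.groupby("group")}
    m_out = smf.logit("diabetes ~ group + age + smoke + diet", df).fit(disp=0)

    target = df[df["group"] == 1].copy()     # group whose mediator distribution is intervened on
    risks = []
    for _ in range(n_draws):
        cf = target.copy()
        # draw both mediators from the comparison group's (group 0) fitted distribution
        cf["smoke"] = rng.binomial(1, m_smoke[0].predict(cf))
        cf["diet"] = m_diet[0].predict(cf) + rng.normal(0, np.sqrt(m_diet[0].scale), len(cf))
        risks.append(m_out.predict(cf).mean())
    ref = df.loc[df.group == 0, "diabetes"].mean()
    total = df.loc[df.group == 1, "diabetes"].mean() - ref
    remaining = np.mean(risks) - ref
    return total, total - remaining          # total disparity, part explained by the mediators

# toy data: group (0/1), confounder age, mediators smoke/diet, outcome diabetes
n, rng = 2000, np.random.default_rng(1)
group = rng.binomial(1, 0.5, n)
age = rng.normal(50, 10, n)
smoke = rng.binomial(1, 0.2 + 0.2 * group)
diet = 1.0 + 0.5 * smoke + 0.3 * group + rng.normal(0, 1, n)
diabetes = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.3 * group + 0.5 * smoke + 0.3 * diet))))
df = pd.DataFrame(dict(group=group, age=age, smoke=smoke, diet=diet, diabetes=diabetes))
print(decompose(df))
```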
Finding similar patients is a common objective in precision medicine, facilitating treatment outcome assessment and clinical decision support. Choosing widely available patient features and appropriate mathematical methods for similarity calculations is crucial. International Statistical Classification of Diseases and Related Health Problems (ICD) codes are used worldwide to encode diseases and are available for nearly all patients. Aggregated as sets consisting of primary and secondary diagnoses, they can reflect a patient's degree of comorbidity and reveal comorbidity patterns. The similarity of patients can be computed from their ICD codes using semantic similarity algorithms, which have traditionally been evaluated on a single-term, expert-rated data set. However, real-world patient data often display varying degrees of documented comorbidity, which might impair algorithm performance. To account for this, we present a scale term that considers documented comorbidity variance. In this work, we compared the performance of 80 combinations of established algorithms for semantic similarity based on ICD-code sets. The sets were extracted from patients with a C25.X (pancreatic cancer) primary diagnosis and provide a variety of different combinations of ICD codes. Using our scale term, the best results were obtained with a combination of level-based information content, Leacock & Chodorow concept similarity, and bipartite graph matching for the set similarities, reaching a correlation of 0.75 with our expert's ground truth. Our results highlight the importance of accounting for comorbidity variance while demonstrating how well current semantic similarity algorithms perform.
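As a hedged illustration of the set-level step, the sketch below computes the similarity of two ICD-code sets by maximum-weight bipartite matching over pairwise code similarities, normalized by the larger set size as a crude stand-in for the comorbidity-variance scale term; the prefix-based code similarity is only a placeholder for the level-based information content and Leacock & Chodorow measures used in the paper.

```python
# Set-level ICD similarity via maximum-weight bipartite matching over pairwise code similarities.
import numpy as np
from scipy.optimize import linear_sum_assignment

def code_similarity(a, b):
    """Toy pairwise similarity: shared prefix length relative to the longer code."""
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return shared / max(len(a), len(b))

def set_similarity(codes_a, codes_b):
    sim = np.array([[code_similarity(a, b) for b in codes_b] for a in codes_a])
    rows, cols = linear_sum_assignment(-sim)              # maximum-weight bipartite matching
    # normalize by the larger set so unmatched codes lower the similarity
    return sim[rows, cols].sum() / max(len(codes_a), len(codes_b))

patient_1 = ["C25.0", "E11.9", "I10"]
patient_2 = ["C25.1", "E11.65", "K80.20"]
print(round(set_similarity(patient_1, patient_2), 3))
```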
Quantile treatment effects (QTEs) can characterize the potentially heterogeneous causal effect of a treatment at different points of the outcome distribution. Propensity score (PS) methods are commonly employed for estimating QTEs in non-randomized studies. Empirical and theoretical studies have shown that both insufficient and unnecessary adjustment for covariates in PS models can lead to bias and efficiency loss in estimating treatment effects. Striking a balance between bias and efficiency through variable selection is therefore a crucial concern in causal inference. It is essential to acknowledge that the covariates related to treatment and outcome may vary across different quantiles of the outcome distribution. However, previous studies have overlooked the need to adjust for different covariates separately in the PS models when estimating different QTEs. In this article, we propose the quantile regression outcome-adaptive lasso (QROAL) method to select covariates that can provide unbiased and efficient estimates of QTEs. A distinctive feature of our method is the use of linear quantile regression models to construct penalty weights, enabling covariate selection in the PS model separately for each QTE of interest. We conducted simulation studies to show the superiority of the proposed method over the outcome-adaptive lasso (OAL) method in variable selection. Moreover, the proposed method exhibited favorable performance compared with OAL in terms of root mean square error in a range of settings, including both homogeneous and heterogeneous scenarios. Additionally, we applied the QROAL method to data from the China Health and Retirement Longitudinal Study (CHARLS) to explore the impact of smoking status on the severity of depression symptoms.
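The core idea can be pictured as an adaptive lasso whose penalty weights come from a linear quantile regression of the outcome, so the propensity score model keeps the covariates relevant to the particular quantile being estimated. The sketch below is a simplified stand-in for QROAL using scikit-learn; the function name, tuning choices, and toy data are assumptions, and the fitted propensity scores would then feed a weighted QTE estimator.

```python
# Quantile-regression-weighted adaptive lasso for a propensity score model (simplified sketch).
import numpy as np
from sklearn.linear_model import QuantileRegressor, LogisticRegression

def quantile_adaptive_lasso_ps(X, A, Y, tau, gamma=1.0, C=1.0):
    # step 1: linear quantile regression of the outcome on treatment and covariates
    qr = QuantileRegressor(quantile=tau, alpha=0.0).fit(np.column_stack([A, X]), Y)
    coef = qr.coef_[1:]                                   # drop the treatment coefficient
    w = 1.0 / np.maximum(np.abs(coef), 1e-6) ** gamma     # adaptive penalty weights
    # step 2: adaptive lasso for the PS model = ordinary lasso on covariates rescaled by 1/w
    Xs = X / w
    ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xs, A)
    beta = ps_model.coef_.ravel() / w                     # coefficients on the original scale
    selected = np.flatnonzero(beta != 0)
    ps = ps_model.predict_proba(Xs)[:, 1]
    return ps, selected

# toy usage: covariates relevant to the 0.75 quantile get small penalties and tend to be kept
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0] + X[:, 1])))
Y = 1 + A + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=400)
ps, selected = quantile_adaptive_lasso_ps(X, A, Y, tau=0.75)
print("covariates kept in the PS model for the 0.75 quantile:", selected)
```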
Medical data mainly comprise various types of biomedical signals and medical images, which physicians use to assess patients' health conditions. However, interpreting medical data requires substantial human effort and is prone to misjudgment, so many researchers have applied neural networks and deep learning to classify and analyze medical data, which can improve clinicians' efficiency and accuracy and enable earlier detection and diagnosis of disease; these methods therefore have broad application prospects. Traditional neural networks, however, suffer from high energy consumption and high latency (slow computation speed). This paper reviews recent research on signal classification and disease diagnosis based on a third-generation neural network, the spiking neural network, applied to medical data including EEG signals, ECG signals, EMG signals, and MRI images. The advantages and disadvantages of spiking neural networks compared with traditional networks are summarized, and future directions for their development are discussed.
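To make the contrast with conventional networks concrete, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the basic unit of spiking neural networks, whose sparse binary spikes underlie the energy and latency advantages discussed in the reviewed work; the parameters and the stand-in input signal are illustrative only.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: continuous input in, sparse binary spikes out.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v, spikes, trace = 0.0, [], []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)      # leaky integration of the input current
        if v >= v_thresh:               # threshold crossing emits a binary spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 3.0, size=200)   # stand-in for an encoded EEG/ECG segment
spikes, _ = lif_neuron(current)
print("spike count:", spikes.sum())
```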
Detailed anatomical information is essential to optimize medical decisions for surgical and pre-operative planning in patients with congenital heart disease. The visualization techniques commonly used in clinical routine for exploring complex cardiac data are based on multi-planar reformations, maximum intensity projection, and volume rendering, which rely on basic lighting models prone to image distortion. In contrast, cinematic rendering (CR), a three-dimensional visualization technique based on physically based rendering methods, can create volumetric images with high fidelity. However, CR involves many parameters that affect the visualization results, making it dependent on the user's experience and requiring detailed evaluation protocols to compare available solutions. In this study, we analyzed the impact of the most relevant parameters in a CR pipeline developed in the open-source version of the MeVisLab framework for visualizing the heart anatomy of three patients with congenital heart disease and two adults from CT images. The resulting visualizations were compared to those of a commercial tool used in the clinic through a questionnaire completed by clinical users, showing similar definition of structures, depth perception, texture appearance, realism, and diagnostic ability.
The idea of a systematic digital representation of the entire known human pathophysiology, which we could call the Virtual Human Twin, has been around for decades. To date, most research groups have focused instead on developing highly specialised, narrowly focused patient-specific models able to predict specific quantities of clinical relevance. While this narrow focus has facilitated harvesting the low-hanging fruit, in the long run it leaves significant challenges that slow the adoption of digital twins in healthcare. This position paper lays the conceptual foundations for developing the Virtual Human Twin (VHT). The VHT is intended as a distributed and collaborative infrastructure: a collection of technologies and resources (data, models) that enable it, and a collection of Standard Operating Procedures (SOPs) that regulate its use. The VHT infrastructure aims to help academic researchers, public organisations, and the biomedical industry develop and validate new digital twins in healthcare solutions, with the possibility of integrating multiple resources if required by the specific context of use. Healthcare professionals and patients can also use the VHT infrastructure for clinical decision support or personalised health forecasting. As the European Commission has launched the EDITH coordination and support action to develop a roadmap for the Virtual Human Twin, this position paper is intended as a starting point for the consensus process and a call to arms for all stakeholders.
While there is wide agreement that physical activity is an important component of a healthy lifestyle, it is unclear how many people adhere to public health recommendations on physical activity. The Physical Activity Guidelines (PAG), published by the U.S. Department of Health and Human Services, provide activity recommendations for American adults, but it is difficult to assess compliance with these guidelines. The PAG further complicate adherence assessment by recommending that activity occur in bouts of at least 10 minutes. To better understand the measurement capabilities of various instruments for quantifying activity, and to propose an approach to evaluating activity relative to the PAG, researchers at Iowa State University administered the Physical Activity Measurement Survey (PAMS) to over 1,000 participants in four Iowa counties. In this paper, we develop a two-part Bayesian measurement error model and apply it to the PAMS data in order to assess compliance with the PAG in the Iowa adult population. The model explicitly accounts for the 10-minute bout requirement put forth in the PAG. The measurement error model corrects biased estimates and accounts for day-to-day variation in activity. The model is also applied to the nationally representative National Health and Nutrition Examination Survey.
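A heavily simplified sketch of the measurement error idea, in PyMC, is shown below: each person has a latent usual level of weekly bout-accumulated activity, observed days measure it with error, and compliance is the posterior share of people whose latent level exceeds 150 minutes per week. The two-part (zero-activity) structure, the bout mechanics, and the survey design are all omitted, and the data and priors are illustrative only.

```python
# Toy latent-usual-activity measurement error model; compliance = P(usual activity >= 150 min/week).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_people, n_days = 100, 2
true_weekly = rng.lognormal(mean=np.log(120), sigma=0.6, size=n_people)     # latent usual minutes/week
person = np.repeat(np.arange(n_people), n_days)
observed = true_weekly[person] * rng.lognormal(mean=0.0, sigma=0.5, size=n_people * n_days)

with pm.Model() as me_model:
    mu = pm.Normal("mu", np.log(120), 1.0)
    tau = pm.HalfNormal("tau", 1.0)                       # between-person SD on the log scale
    log_T = pm.Normal("log_T", mu, tau, shape=n_people)   # latent usual activity per person
    sigma = pm.HalfNormal("sigma", 1.0)                   # day-to-day measurement error SD
    pm.Normal("log_W", log_T[person], sigma, observed=np.log(observed))
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# posterior estimate of the population share whose usual activity meets the 150 min/week guideline
share = (idata.posterior["log_T"] > np.log(150)).mean().item()
print("estimated compliance:", round(share, 2))
```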