Identifying disease phenotypes from electronic health records (EHRs) is critical for numerous secondary uses. Manually encoding physician knowledge into rules is particularly challenging for rare diseases due to inadequate EHR coding, necessitating review of clinical notes. Large language models (LLMs) offer promise in text understanding but may not efficiently handle real-world clinical documentation. We propose a zero-shot LLM-based method enriched by retrieval-augmented generation and MapReduce, which pre-identifies disease-related text snippets to be used in parallel as queries for the LLM to establish a diagnosis. We show that this method, as applied to pulmonary hypertension (PH), a rare disease characterized by elevated arterial pressures in the lungs, significantly outperforms physician logic rules ($F_1$ score of 0.75 vs. 0.62). This method has the potential to enhance rare disease cohort identification, expanding the scope of robust clinical research and care gap identification.
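The retrieval-plus-MapReduce pattern can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm` is a hypothetical callable mapping a prompt to a completion string, and the retrieval step that produces `snippets` (e.g., keyword or embedding search over the notes) is assumed to run upstream.

```python
from concurrent.futures import ThreadPoolExecutor

def map_step(llm, snippets, disease):
    """Map: query the LLM independently on each retrieved snippet."""
    prompts = [f"Does this note excerpt support a diagnosis of {disease}? "
               f"Answer yes/no with a brief rationale.\n\n{s}"
               for s in snippets]
    with ThreadPoolExecutor() as pool:  # per-snippet queries run in parallel
        return list(pool.map(llm, prompts))

def reduce_step(llm, verdicts, disease):
    """Reduce: aggregate the per-snippet verdicts into a single diagnosis call."""
    joined = "\n".join(verdicts)
    return llm(f"Given these per-excerpt assessments, does the patient have "
               f"{disease}? Answer yes/no.\n\n{joined}")
```

Because each map query is bounded to a short snippet, this decomposition also sidesteps context-length limits on long clinical notes.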
In credence goods markets such as health care or repair services, consumers rely on experts with superior information to adequately diagnose and treat them. Experts, however, are constrained in their diagnostic abilities, which hurts market efficiency and consumer welfare. Technological breakthroughs that substitute for or complement expert judgments have the potential to alleviate consumer mistreatment. This article studies how competitive experts adopt novel diagnostic technologies when skills are heterogeneously distributed and hidden from consumers. We differentiate between novel technologies that increase expert abilities and algorithmic decision aids that complement expert judgments but do not affect an expert's personal diagnostic precision. We show that high-ability experts may be incentivized to forego the decision aid in order to escape a pooling equilibrium by differentiating themselves from low-ability experts. Results from an online experiment support our hypothesis, showing that high-ability experts are significantly less likely than low-ability experts to invest in an algorithmic decision aid. Furthermore, we document pervasive under-investment and no effect on expert honesty.
Detecting early-stage ovarian cancer accurately and efficiently is crucial for timely treatment. Various methods for early diagnosis have been explored, including a focus on features derived from protein mass spectra, but these tend to overlook the complex interplay across protein expression levels. We propose an innovative method to automate the search for diagnostic features in these spectra by analyzing their inherent scaling characteristics. We compare two techniques for estimating the self-similarity in a signal using the scaling behavior of its wavelet packet decomposition. The methods are applied to the mass spectra using a rolling window approach, yielding a collection of self-similarity indexes that capture protein interactions potentially indicative of ovarian cancer. Then, the most discriminatory scaling descriptors from this collection are selected for use in classification algorithms. To assess their effectiveness for early diagnosis of ovarian cancer, the techniques are applied to two datasets from the American National Cancer Institute. Comparative evaluation shows that one wavelet packet-based technique improved diagnostic performance over an existing wavelet-based method on one of the analyzed datasets (96.78% vs. 95.67% test accuracy, respectively). This highlights the potential of wavelet packet-based methods to capture novel diagnostic information related to ovarian cancer, offering promise for better early detection and improved patient outcomes.
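To illustrate, one can estimate a scaling descriptor from the decay of average wavelet-packet energies across decomposition levels and slide it along the spectrum. The sketch below, using PyWavelets, is a generic slope-based estimator under assumed settings (`db4` wavelet, window and step sizes); it is not the paper's exact pair of techniques.

```python
import numpy as np
import pywt

def scaling_slope(signal, wavelet="db4", max_level=6):
    """Estimate a self-similarity index from the scaling of average
    wavelet-packet energies across decomposition levels."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=max_level)
    levels = list(range(1, max_level + 1))
    log_energy = []
    for j in levels:
        coeffs = np.concatenate([node.data for node in wp.get_level(j)])
        log_energy.append(np.log2(np.mean(coeffs ** 2)))
    # The slope of log2-energy vs. level reflects the scaling behavior
    slope, _ = np.polyfit(levels, log_energy, 1)
    return slope

def rolling_descriptors(spectrum, window=256, step=64):
    """Apply the estimator over a rolling window across a mass spectrum."""
    return np.array([scaling_slope(spectrum[i:i + window])
                     for i in range(0, len(spectrum) - window + 1, step)])
```

The resulting descriptor vectors can then be screened for their most discriminatory entries and passed to a classifier, as described in the abstract.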
The accurate recognition of symptoms in clinical reports is critically important in the fields of healthcare and biomedical natural language processing. These entities serve as essential building blocks for clinical information extraction, enabling retrieval of critical medical insights from vast amounts of textual data. Furthermore, the ability to identify and categorize these entities is fundamental for developing advanced clinical decision support systems, aiding healthcare professionals in diagnosis and treatment planning. In this study, we participated in SympTEMIST, a shared task on the detection of symptoms, signs and findings in Spanish medical documents. We combine a set of large language models fine-tuned on the data released by the organizers.
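The abstract does not specify how the models are combined; one common choice is token-level logit averaging. The sketch below is purely illustrative: the checkpoint names are hypothetical, and it assumes the fine-tuned checkpoints share a tokenizer and label set.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical fine-tuned checkpoints; not the team's actual model names
CHECKPOINTS = ["team/symptemist-model-a", "team/symptemist-model-b"]

def ensemble_predict(text, checkpoints=CHECKPOINTS):
    """Average token-level logits across fine-tuned models
    (shared tokenizer and label set assumed)."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoints[0])
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    summed = None
    for name in checkpoints:
        model = AutoModelForTokenClassification.from_pretrained(name).eval()
        with torch.no_grad():
            logits = model(**inputs).logits       # (1, seq_len, n_labels)
        summed = logits if summed is None else summed + logits
    return summed.argmax(-1).squeeze(0).tolist()  # one label id per token
```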
Parkinson's disease (PD) is a debilitating neurological disorder that necessitates precise and early diagnosis for effective patient care. This study aims to develop a diagnostic model that achieves both high accuracy and a low false-negative rate, a critical factor in clinical practice. Given the limited training data, a feature selection strategy based on ANOVA is employed to identify the most informative features. Subsequently, various machine learning methods, including Echo State Networks (ESN), Random Forest, k-Nearest Neighbors, Support Vector Classifier, Extreme Gradient Boosting, and Decision Tree, are employed and thoroughly evaluated. Statistical analysis of the results highlights ESN's exceptional performance, showcasing not only superior accuracy but also the lowest false-negative rate among all methods; ESN maintains a false-negative rate of less than 8% in 83% of cases. ESN's capacity to strike a delicate balance between diagnostic precision and minimizing misclassifications positions it as an exemplary choice for PD diagnosis, especially in scenarios characterized by limited data. This research marks a significant step towards more efficient and reliable PD diagnosis, with potential implications for enhanced patient outcomes.
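A compact version of this screen-then-classify workflow can be written with scikit-learn, as sketched below; ESN is omitted because it has no scikit-learn implementation, and the number of selected features `k` is an illustrative choice.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def evaluate(X, y, k=10):
    """Compare classifiers after ANOVA F-test feature selection."""
    candidates = {
        "random_forest": RandomForestClassifier(),
        "svc": SVC(),
        "knn": KNeighborsClassifier(),
    }
    # Feature selection is fit inside each CV fold via the pipeline,
    # avoiding leakage from the held-out data
    return {name: cross_val_score(
                make_pipeline(SelectKBest(f_classif, k=k), clf),
                X, y, cv=5).mean()
            for name, clf in candidates.items()}
```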
Predicting next visit diagnosis using Electronic Health Records (EHR) is an essential task in healthcare, critical for devising proactive future plans for both healthcare providers and patients. Nonetheless, many preceding studies have not sufficiently addressed the heterogeneous and hierarchical characteristics inherent in EHR data, inevitably leading to sub-optimal performance. To this end, we propose NECHO, a novel medical code-centric multimodal contrastive EHR learning framework with hierarchical regularisation. First, we integrate multifaceted information encompassing medical codes, demographics, and clinical notes using a tailored network design and a pair of bimodal contrastive losses, all of which pivot around the medical-code representation. We also regularise the modality-specific encoders using parent-level information from the medical ontology to learn the hierarchical structure of EHR data. A series of experiments on MIMIC-III data demonstrates the effectiveness of our approach.
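The code-centric pairing can be illustrated with a standard symmetric InfoNCE objective. This is a generic sketch assuming paired embeddings share a batch order; it is not NECHO's exact loss or network design.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, other, temperature=0.1):
    """Symmetric InfoNCE between two modality embeddings (rows are paired)."""
    a = F.normalize(anchor, dim=-1)
    b = F.normalize(other, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def code_centric_loss(z_code, z_demo, z_note):
    """Two bimodal losses, each pivoting on the medical-code embedding."""
    return info_nce(z_code, z_demo) + info_nce(z_code, z_note)
```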
Polygenic Risk Scores (PRS) developed from genome-wide association studies (GWAS) are of increasing interest for various clinical and research applications. Bayesian methods have been particularly popular for building PRS at genome-wide scale because of their natural ability to regularize models and borrow information in high dimensions. In this article, we present new theoretical results, methods, and extensive numerical studies to advance Bayesian methods for PRS applications. We conduct theoretical studies to identify causes of convergence issues in some Bayesian methods when the required input GWAS summary statistics and linkage disequilibrium (LD) (genetic correlation) data are derived from distinct samples. We propose a remedy to the problem: projecting the summary-statistics data into the column space of the genetic correlation matrix. We further implement a PRS development algorithm under the Bayesian Bridge prior, which allows more flexible specification of effect-size distributions than popular alternative methods. Finally, we conduct careful benchmarking studies of alternative Bayesian methods using both simulation studies and real datasets, where we investigate both the effect of prior specification and estimation strategies for LD parameters. These studies show that the proposed algorithm, equipped with the projection approach, the flexible prior specification, and an efficient numerical algorithm, leads to the development of the most robust PRS across a wide variety of scenarios.
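The projection step itself is plain linear algebra: map the summary-statistics vector onto the span of the LD matrix's numerically nonzero eigenvectors. The sketch below is a straightforward NumPy rendering, assuming the LD matrix R is symmetric positive semi-definite; the tolerance is an illustrative choice rather than the paper's.

```python
import numpy as np

def project_to_ld_column_space(beta_hat, R, tol=1e-8):
    """Project GWAS summary statistics onto the column space of the
    LD (genetic correlation) matrix R."""
    eigvals, eigvecs = np.linalg.eigh(R)     # R is symmetric PSD
    keep = eigvals > tol * eigvals.max()     # numerically nonzero directions
    U = eigvecs[:, keep]
    return U @ (U.T @ beta_hat)              # component of beta_hat in col(R)
```

Discarding the component of the summary statistics orthogonal to the column space of R removes the inconsistency between the two data sources that drives the convergence issues described above.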
Maintaining a high quality of life through physical activity (PA) to prevent health decline is crucial. However, the relationship between an individual's health status, PA preferences, and motion factors is complex. PA discussions consistently show a positive correlation with healthy aging experiences, but no explicit relation to specific types of musculoskeletal exercise. Taking advantage of the increasingly widespread adoption of smartphones, especially in Indonesia, this research utilizes embedded sensors for Human Activity Recognition (HAR). Based on data from 25 participants performing nine selected types of motion, this study identifies the sensor attributes that play important roles in the right and left hands for muscle-strength motions, which serve as the basis for developing machine learning models with the LSTM algorithm.
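A minimal PyTorch LSTM classifier over fixed-length sensor windows might look as follows. The six input channels (e.g., triaxial accelerometer and gyroscope) and the hidden size are assumptions; the nine output classes mirror the nine motion types.

```python
import torch
import torch.nn as nn

class HARLSTM(nn.Module):
    """LSTM over windows of smartphone sensor readings."""

    def __init__(self, n_channels=6, hidden=64, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)   # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])    # class logits per window
```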
Recently, the emergence of a large number of Synthetic Aperture Radar (SAR) sensors and target datasets has made it possible to unify downstream tasks with self-supervised learning techniques, which can pave the way for building a foundation model in the SAR target recognition field. The major challenge of self-supervised learning for SAR target recognition lies in learning generalizable representations from low-quality, noisy data. To address this problem, we propose a knowledge-guided predictive architecture that uses local masked patches to predict the multiscale SAR feature representations of unseen context. The core of the proposed architecture lies in combining traditional SAR domain feature extraction with state-of-the-art scalable self-supervised learning for accurate generalized feature representations. The proposed framework is validated on various downstream datasets (MSTAR, FUSAR-Ship, SAR-ACD and SSDD) and brings consistent performance improvements for SAR target recognition. The experimental results demonstrate that the self-supervised learning technique yields unified performance improvements for SAR target recognition across diverse targets, scenes, and sensors.
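The predictive objective can be illustrated generically in the spirit of masked feature prediction. The sketch below is an assumption-laden simplification: `context_encoder`, `predictor`, and `target_encoder` are placeholder modules, and the paper's knowledge-guided, multiscale SAR feature extraction is not modeled.

```python
import torch
import torch.nn as nn

def predictive_loss(context_encoder, predictor, target_encoder, patches, mask):
    """Predict target-branch features of masked patches from visible context.

    patches: (batch, n_patches, d_in); mask: boolean (batch, n_patches),
    True where a patch is hidden from the context branch.
    """
    with torch.no_grad():                        # targets are not backpropagated
        targets = target_encoder(patches)        # (batch, n_patches, d)
    visible = patches * (~mask).unsqueeze(-1)    # zero out masked patches
    preds = predictor(context_encoder(visible))  # (batch, n_patches, d)
    return nn.functional.mse_loss(preds[mask], targets[mask])
```

Predicting features rather than raw pixels is what lets the target branch inject domain knowledge: the target encoder can be any fixed SAR feature extractor.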
Background: Cognitive biases in clinical decision-making significantly contribute to errors in diagnosis and suboptimal patient outcomes. Addressing these biases presents a formidable challenge in the medical field. This study explores the role of large language models (LLMs) in mitigating these biases through the utilization of a multi-agent framework. We simulate the clinical decision-making process through multi-agent conversation and evaluate its efficacy in improving diagnostic accuracy. Methods: A total of 16 published and unpublished case reports in which cognitive biases resulted in misdiagnoses were identified from the literature. In the multi-agent system, we leveraged GPT-4 Turbo to facilitate interactions among four simulated agents to replicate clinical team dynamics. Each agent has a distinct role: 1) making the initial and final diagnosis after considering the discussion, 2) acting as devil's advocate to correct confirmation and anchoring bias, 3) tutoring and facilitating the discussion to reduce premature closure bias, and 4) recording and summarizing the findings. A total of 80 simulations were evaluated for the accuracy of the initial diagnosis, the top differential diagnosis, and the final two differential diagnoses. Findings: Across the 80 responses evaluating both initial and final diagnoses, the initial diagnosis had an accuracy of 0% (0/80), but following multi-agent discussions, accuracy for the top differential diagnosis increased to 71.3% (57/80), and for the final two differential diagnoses, to 80.0% (64/80). The system demonstrated an ability to reevaluate and correct misconceptions, even in scenarios with misleading initial investigations. Interpretation: The LLM-driven multi-agent conversation system shows promise in enhancing diagnostic accuracy in diagnostically challenging medical scenarios.
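The conversation protocol can be sketched as a simple round-robin loop. Here `chat` is a placeholder for an LLM chat call (e.g., to GPT-4 Turbo), and the role instructions and round count are illustrative, not the study's exact prompts.

```python
# Hypothetical role instructions mirroring the four agents described above
ROLES = {
    "diagnostician": "Propose a diagnosis; revise it in light of the discussion.",
    "devils_advocate": "Challenge the leading diagnosis to counter "
                       "confirmation and anchoring bias.",
    "facilitator": "Keep the discussion open and guard against premature closure.",
    "scribe": "Record and summarize the findings so far.",
}

def simulate(case, chat, rounds=3):
    """Run a multi-agent discussion; `chat(instruction, history)` wraps one LLM call."""
    history = [f"Case: {case}"]
    for _ in range(rounds):
        for role, instruction in ROLES.items():
            reply = chat(instruction, history)
            history.append(f"{role}: {reply}")
    return chat(ROLES["diagnostician"] +
                " Give the final two differential diagnoses.", history)
```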
Object tracking is challenging as target objects often undergo drastic appearance changes over time. Recently, adaptive correlation filters have been successfully applied to object tracking. However, tracking algorithms relying on highly adaptive correlation filters are prone to drift due to noisy updates. Moreover, as these algorithms do not maintain long-term memory of target appearance, they cannot recover from tracking failures caused by heavy occlusion or the target moving out of the camera view. In this paper, we propose to learn multiple adaptive correlation filters with both long-term and short-term memory of target appearance for robust object tracking. First, we learn a kernelized correlation filter with an aggressive learning rate for locating target objects precisely. We take into account the appropriate size of surrounding context and the feature representations. Second, we learn a correlation filter over a feature pyramid centered at the estimated target position for predicting scale changes. Third, we learn a complementary correlation filter with a conservative learning rate to maintain long-term memory of target appearance. We use the output responses of this long-term filter to determine if tracking failure occurs. In the case of tracking failures, we apply an incrementally learned detector to recover the target position in a sliding window fashion. Extensive experimental results on large-scale benchmark datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of efficiency, accuracy, and robustness.
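At the heart of the approach is a correlation filter learned in the Fourier domain. A minimal single-channel, linear (MOSSE-style) sketch is shown below; the paper's kernelized, multi-filter design is not reproduced, and the learning rates merely illustrate the short-term versus long-term memory trade-off.

```python
import numpy as np

def train_filter(patch, target, lam=1e-2):
    """Learn a linear correlation filter in the Fourier domain.
    `target` is the desired response map, typically a Gaussian
    peaked at the target center."""
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(target)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)  # ridge solution

def detect(H, patch):
    """Correlate the filter with a new patch; the response peak
    gives the estimated target translation."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

def update(H_old, patch, target, lr=0.1, lam=1e-2):
    """Online update: a high lr adapts quickly (short-term memory),
    a low lr preserves appearance over time (long-term memory)."""
    return (1 - lr) * H_old + lr * train_filter(patch, target, lam)
```

Running two such filters with different `lr` values, one aggressive for localization and one conservative for failure detection, mirrors the division of labor described in the abstract.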