In contemporary healthcare, electronic health records have become invaluable repositories of patient data, creating vast opportunities to leverage deep learning techniques for predictive analysis. Deep learning has shown promising results in classifying diverse datasets, including retinal fundus images, cirrhosis stages, and heart disease diagnostics. This study proposes a novel deep learning predictive analysis framework for classifying multiple datasets by pre-processing data from three distinct sources. A hybrid model combining Residual Networks and Artificial Neural Networks is proposed to detect acute and chronic diseases such as heart disease, cirrhosis, and retinal conditions, outperforming existing models. Dataset preparation involves categorical data transformation, dimensionality reduction, and missing data synthesis. Feature extraction is performed using scaler transformation for the categorical datasets and a ResNet architecture for the image datasets, and the resulting features are integrated into a unified classification model. Rigorous experimentation and evaluation yielded high accuracies of 93%, 99%, and 95% for retinal fundus images, cirrhosis stages, and heart disease diagnostic predictions, respectively. The efficacy of the proposed method is further demonstrated through a detailed analysis of F1-score, precision, and recall. This study offers a comprehensive exploration of methodologies and experiments, providing in-depth knowledge of deep learning predictive analysis for electronic health records.
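The fusion described above, ResNet-extracted image features concatenated with scaler-transformed tabular features and fed to a unified classifier, can be sketched in PyTorch as follows. This is a minimal illustration only; the backbone choice (resnet18), layer widths, and class count are assumptions, not the paper's configuration.

import torch
import torch.nn as nn
from torchvision import models

class HybridEHRClassifier(nn.Module):
    """Minimal sketch of a ResNet + ANN fusion model (illustrative sizes)."""
    def __init__(self, num_tabular_features: int, num_classes: int):
        super().__init__()
        # ResNet backbone extracts image features; its final fc layer is dropped.
        backbone = models.resnet18(weights=None)
        self.image_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.image_branch = backbone
        # Small ANN for scaler-transformed categorical/tabular features.
        self.tabular_branch = nn.Sequential(
            nn.Linear(num_tabular_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Unified classification head over the concatenated features.
        self.classifier = nn.Linear(self.image_dim + 32, num_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.classifier(fused)

# Example forward pass with random inputs (batch of 4).
model = HybridEHRClassifier(num_tabular_features=20, num_classes=3)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 20))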
In causal inference, treatment effects are typically estimated under the ignorability, or unconfoundedness, assumption, which is often unrealistic in observational data. By relaxing this assumption and conducting a sensitivity analysis, we introduce novel bounds and derive confidence intervals for the Average Potential Outcome (APO), a standard metric for evaluating continuous-valued treatment or exposure effects. We demonstrate that these bounds are sharp under a continuous sensitivity model, in the sense that they give the smallest possible interval under this model, and propose a doubly robust version of our estimators. In a comparative analysis with the method of Jesson et al. (2022, arXiv:2204.10022), using both simulated and real datasets, we show that our approach not only yields sharper bounds but also achieves good coverage of the true APO, with significantly reduced computation times.
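For reference, the quantity being bounded can be written in standard potential-outcomes notation as follows; the partial-identification display is only schematic, since the paper's specific continuous sensitivity model and sharp bound construction are not reproduced here.

% APO for a continuous treatment level t:
\[
  \mu(t) = \mathbb{E}\!\left[ Y(t) \right],
  \qquad
  \text{under unconfoundedness:}\quad
  \mu(t) = \mathbb{E}_{X}\!\left[ \mathbb{E}\!\left[ Y \mid T = t,\, X \right] \right].
\]
% Relaxing unconfoundedness via a sensitivity model (generic parameter
% $\Lambda \ge 1$, form assumed here) leaves the APO only partially identified:
\[
  \mu^{-}_{\Lambda}(t) \;\le\; \mu(t) \;\le\; \mu^{+}_{\Lambda}(t).
\]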
Many critical decisions, such as personalized medical diagnoses and product pricing, are made based on insights gained from designing, observing, and analyzing a series of experiments. This highlights the crucial role of experimental design, which goes beyond merely collecting information on system parameters, as in traditional Bayesian experimental design (BED), and also plays a key part in facilitating downstream decision-making. Most recent BED methods use an amortized policy network to rapidly design experiments. However, the information gathered through these methods is suboptimal for downstream decision-making, as the experiments are not inherently designed with downstream objectives in mind. In this paper, we present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility. We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design whilst inferring the downstream decision, thus effectively amortizing both tasks within a unified workflow. We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.
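The dual-head idea, a single amortized network that both proposes the next design and infers the downstream decision, can be illustrated with a schematic PyTorch sketch; the tokenization of (design, outcome) pairs, the pooling, head sizes, and any training objective are placeholders here, not the actual TNDP architecture.

import torch
import torch.nn as nn

class TwoHeadDesignNetwork(nn.Module):
    """Schematic dual-head network over an experiment history (illustrative only)."""
    def __init__(self, token_dim, d_model=64, n_heads=4, n_layers=2,
                 design_dim=1, n_decisions=3):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.design_head = nn.Linear(d_model, design_dim)     # proposes the next design
        self.decision_head = nn.Linear(d_model, n_decisions)  # infers the downstream decision

    def forward(self, history):
        # history: (batch, n_steps, token_dim) of past (design, outcome) tokens
        h = self.encoder(self.embed(history))
        summary = h.mean(dim=1)  # simple pooling over the experiment history
        return self.design_head(summary), self.decision_head(summary)

# Example: propose a design and a decision from 10 past experiments (3-dim tokens).
net = TwoHeadDesignNetwork(token_dim=3)
next_design, decision_logits = net(torch.randn(8, 10, 3))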
Federated Learning (FL) has emerged as a transformative approach in healthcare, enabling collaborative model training across decentralized data sources while preserving user privacy. However, the performance of FL degrades rapidly in practical scenarios due to the inherent bias in non-Independent and Identically Distributed (non-IID) data among participating clients, which poses significant challenges to model accuracy and generalization. Therefore, we propose the Bias-Aware Client Selection Algorithm (BACSA), which detects user bias and strategically selects clients based on their bias profiles. In addition, the proposed algorithm considers privacy preservation, fairness, and the constraints of wireless network environments, making it suitable for sensitive healthcare applications where Quality of Service (QoS), privacy, and security are paramount. Our approach begins with a novel method for detecting user bias by analyzing model parameters and correlating them with the distribution of class-specific data samples. We then formulate a mixed-integer non-linear client selection problem that leverages the detected bias, alongside wireless network constraints, to optimize FL performance. We demonstrate that BACSA improves convergence and accuracy compared to existing benchmarks through evaluations on various data distributions, including Dirichlet and class-constrained scenarios. Additionally, we explore the trade-offs between accuracy, fairness, and network constraints, demonstrating the adaptability and robustness of BACSA across diverse healthcare applications.
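A minimal sketch of the kind of parameter-based bias detection described above is given below; the heuristic of reading a class distribution off per-class output-layer weight norms, the total-variation bias score, and the greedy selection stand-in for the mixed-integer program are illustrative assumptions, not BACSA's actual rules.

import numpy as np

def estimate_bias_from_weights(final_layer_weights):
    """Infer a bias score from a client's output-layer weights alone
    (class counts stay private). Per-class weight norms serve as a proxy
    for the class distribution -- an illustrative heuristic.
    final_layer_weights: (n_classes, feature_dim) array."""
    per_class_norm = np.linalg.norm(final_layer_weights, axis=1)
    est_dist = per_class_norm / per_class_norm.sum()
    uniform = np.full_like(est_dist, 1.0 / len(est_dist))
    # Total variation distance from the uniform distribution as the bias score.
    return 0.5 * np.abs(est_dist - uniform).sum()

def select_clients(bias_scores, k):
    """Greedy stand-in for the mixed-integer selection: pick the k least-biased clients."""
    return np.argsort(bias_scores)[:k]

# Example: score 5 clients with random 10-class heads and select 2 of them.
scores = [estimate_bias_from_weights(np.random.randn(10, 32)) for _ in range(5)]
chosen = select_clients(np.array(scores), k=2)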
Monitoring unexpected health events and taking actionable measures to avert them beforehand is central to maintaining health and preventing disease. Therefore, a tool capable of predicting adverse health events and offering users actionable feedback about how to change their diet, exercise, and medication to prevent abnormal health events could have significant societal impact. Counterfactual explanations can provide insights into why a model made a particular prediction by generating hypothetical instances that are similar to the original input but lead to a different prediction outcome. Counterfactuals can therefore be viewed as a means to design AI-driven health interventions that not only predict but also prevent adverse health outcomes such as blood glucose spikes, diabetes, and heart disease. In this paper, we design \textit{\textbf{ExAct}}, a novel model-agnostic framework for generating counterfactual explanations for chronic disease prevention and management. Leveraging insights from adversarial learning, ExAct characterizes the decision boundary for high-dimensional data and performs a grid search to generate actionable interventions. ExAct is unique in integrating prior knowledge about user preferences for feasible explanations into the process of counterfactual generation. ExAct is evaluated extensively using four real-world datasets and external simulators. With $82.8\%$ average validity in the simulation-aided validation, ExAct surpasses state-of-the-art techniques for generating counterfactual explanations by at least $10\%$. Moreover, counterfactuals from ExAct exhibit at least $6.6\%$ improved proximity compared to previous research.
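The grid-search step can be illustrated with a short model-agnostic sketch; the adversarially learned boundary characterization and the user-preference weighting are omitted, and the function names and the L1 proximity measure are illustrative assumptions rather than ExAct's implementation.

import itertools
import numpy as np

def grid_search_counterfactual(predict, x, feature_grids, desired_label):
    """Model-agnostic grid search over user-specified feasible values
    of actionable features (illustrative stand-in for ExAct's search).
    predict: callable mapping a (n_features,) array to a class label
    x: original instance as a (n_features,) array
    feature_grids: dict {feature_index: iterable of feasible values}
    desired_label: target prediction the counterfactual should achieve."""
    best, best_dist = None, np.inf
    indices = list(feature_grids)
    for combo in itertools.product(*(feature_grids[i] for i in indices)):
        candidate = x.copy()
        candidate[indices] = combo
        if predict(candidate) == desired_label:
            dist = np.abs(candidate - x).sum()  # L1 proximity to the original input
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

# Example with a toy threshold model: flip the prediction by changing features 0 and 2 only.
toy_predict = lambda v: int(v.sum() > 3.0)
cf = grid_search_counterfactual(toy_predict, np.array([1.0, 1.0, 0.5]),
                                {0: np.linspace(0, 3, 7), 2: np.linspace(0, 3, 7)}, 1)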
The integration of AI into modern critical infrastructure systems, such as healthcare, has introduced new vulnerabilities that can significantly impact workflow, efficiency, and safety. Additionally, the increased connectivity has made traditional human-driven penetration testing insufficient for assessing risks and developing remediation strategies. Consequently, there is a pressing need for a distributed, adaptive, and efficient automated penetration testing framework that not only identifies vulnerabilities but also provides countermeasures to enhance security posture. This work presents ADAPT, a game-theoretic and neuro-symbolic framework for automated distributed adaptive penetration testing, specifically designed to address the unique cybersecurity challenges of AI-enabled healthcare infrastructure networks. We use a healthcare system case study to illustrate the methodologies within ADAPT. The proposed solution enables a learning-based risk assessment. Numerical experiments are used to demonstrate effective countermeasures against various tactical techniques employed by adversarial AI.
For most health or well-being interventions, the process of evaluation is distinct from the activity itself, both in terms of who is involved, and how the actual data is collected and analyzed. Tangible interaction affords the opportunity to combine direct and embodied collaboration with a holistic approach to data collection and evaluation. We demonstrate this potential by describing our experiences designing and using the Communal Loom, an artifact for art therapy that translates quantitative data to collectively woven artifacts.
The problem of attacks on new-generation network infrastructures is becoming increasingly relevant, given the widening of the attack surface of these networks resulting from the greater number of devices that will access them in the future (sensors, actuators, vehicles, household appliances, etc.). Approaches to the design of intrusion detection systems must evolve and go beyond the traditional concept of perimeter control, building instead on new paradigms that exploit the typical characteristics of future 5G and 6G networks, such as in-network computing and intelligent programmable data planes. The aim of this research is to propose a disruptive paradigm in which devices in a typical data plane of a future programmable network have classification and anomaly detection capabilities and cooperate in a fully distributed fashion to act as an ML-enabled Active Intrusion Detection System "embedded" into the network. The reported proof-of-concept experiments demonstrate that the proposed paradigm works effectively and with a good level of precision while consuming fewer overall CPU and RAM resources on the devices involved.
In public health, it is critical for policymakers to assess the relationship between disease prevalence and associated risk factors or clinical characteristics, facilitating effective resource allocation. However, for diseases like female breast cancer (FBC), reliable prevalence data at specific geographical levels, such as the county level, are limited because the gold-standard data typically come from long-term cancer registries, which do not necessarily collect the needed risk factors. In addition, it remains unclear whether fitting each model separately or jointly results in better estimation. In this paper, we identify two data sources to produce reliable county-level prevalence estimates in Missouri, USA: the population-based Missouri Cancer Registry (MCR) and the survey-based Missouri County-Level Study (CLS). We propose a two-stage Bayesian model to synthesize these sources, accounting for their differences in methodological design, case definitions, and collected information. The first stage estimates the county-level FBC prevalence using the raking method for the CLS data and the counting method for the MCR data, calibrating the differences in methodological design and case definition. The second stage synthesizes the two sources, which carry different sets of covariates, using a Bayesian generalized linear mixed model with a Zellner-Siow prior on the coefficients. Our data analyses demonstrate that using both data sources yields better results than using either source alone, and that including a data-source membership indicator matters when there are systematic differences between the sources. Finally, we translate the results into policy making and discuss methodological differences in synthesizing registry and survey data.
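A schematic form of the second-stage model is shown below, assuming a logistic link, a county random effect, and a source-membership term, with the Zellner-Siow prior written in its usual mixture-of-g-priors form; the exact covariates, calibration terms, and hyperparameters are those specified in the paper, not these.

% Stage-two synthesis model (schematic): p_{is} is the stage-one FBC prevalence
% estimate for county i from source s (CLS survey or MCR registry).
\[
  \operatorname{logit}\!\left( p_{is} \right)
    = \mathbf{x}_{is}^{\top} \boldsymbol{\beta} + u_i + \gamma\, s,
  \qquad u_i \sim \mathcal{N}\!\left( 0, \sigma_u^2 \right),
\]
% Zellner-Siow prior on the coefficients as a scale mixture of g-priors:
\[
  \boldsymbol{\beta} \mid g \sim \mathcal{N}\!\left( \mathbf{0},\;
      g\, \sigma^2 \left( \mathbf{X}^{\top}\mathbf{X} \right)^{-1} \right),
  \qquad g \sim \text{Inverse-Gamma}\!\left( \tfrac{1}{2}, \tfrac{n}{2} \right).
\]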
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at the two levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our proposed approach using multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
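The adversarial domain classifiers and the consistency term can be sketched in PyTorch as below; gradient reversal is one standard way to realize the adversarial training, and the channel sizes and the simplified per-image consistency loss are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Predicts source/target domain from the backbone feature map (per location)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 1), nn.ReLU(),
            nn.Conv2d(256, 1, 1),  # per-location domain logit
        )
    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

def consistency_loss(img_domain_logits, ins_domain_logits):
    """Simplified per-image consistency term: the image-level domain probability
    (averaged over spatial locations) should agree with each instance-level
    (per-proposal) domain probability for the same image."""
    img_prob = torch.sigmoid(img_domain_logits).mean()
    ins_prob = torch.sigmoid(ins_domain_logits)
    return torch.mean((ins_prob - img_prob) ** 2)

An instance-level classifier would apply the same gradient-reversal pattern to pooled ROI features; both domain losses and the consistency term are then added to the standard Faster R-CNN detection losses.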