Emotional well-being significantly influences mental health and overall quality of life. As therapy chatbots become increasingly prevalent, their ability to comprehend and respond empathetically to users' emotions remains limited. This paper addresses this limitation by proposing an approach to enhance therapy chatbots with auditory perception, enabling them to understand users' feelings and provide human-like empathy. The proposed method incorporates speech emotion recognition (SER) techniques using Convolutional Neural Network (CNN) models and the ShEMO dataset to accurately detect and classify negative emotions, including anger, fear, and sadness. The SER model achieves a validation accuracy of 88%, demonstrating its effectiveness in recognizing emotional states from speech signals. Furthermore, a recommender system is developed that leverages the SER model's output to generate personalized recommendations for managing negative emotions; because no dataset exists for this task, a new bilingual dataset was also created. The recommender model achieves an accuracy of 98% using a combination of global vectors for word representation (GloVe) and LSTM models. To provide a more immersive and empathetic user experience, a text-to-speech model, GlowTTS, is integrated, enabling the therapy chatbot to audibly communicate the generated recommendations to users in both English and Persian. The proposed approach shows promise for enhancing therapy chatbots with the ability to recognize and respond to users' emotions, ultimately improving the delivery of mental health support for both English- and Persian-speaking users.
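As a concrete illustration of the SER component, the following is a minimal sketch of a CNN speech-emotion classifier operating on log-mel spectrograms; the layer sizes, input shape, and three-class target (anger, fear, sadness) are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a CNN-based speech emotion recognizer on
# log-mel spectrograms. Layer sizes and the 3-class target are
# illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class SERCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, n_frames)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SERCNN()
logits = model(torch.randn(8, 1, 64, 128))  # dummy batch of spectrograms
print(logits.shape)  # torch.Size([8, 3])
```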
Nowadays, detecting abnormal health events is a difficult process. Falling, especially among the elderly, is a severe concern worldwide. Falls can result in deadly consequences, including unconsciousness, internal bleeding, and, in many cases, death. A practical, efficient, and smart approach to fall detection is therefore needed. Vision-based fall monitoring is becoming more common among researchers, as it enables senior citizens and people with other health conditions to live independently. For tracking, surveillance, and rescue, unmanned aerial vehicles use video or image segmentation and object detection methods. We used a camera-equipped Tello drone to record normal and abnormal behaviors among our participants, and falls are classified autonomously using a convolutional neural network (CNN) classifier. The results demonstrate that the system can identify falls with a precision of 0.9948.
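A hedged sketch of how per-frame fall classification might be wired up is shown below; `FallCNN`, the video source, and the preprocessing are illustrative stand-ins, not the authors' actual Tello pipeline.

```python
# Sketch of a per-frame fall/no-fall classifier on drone video.
# FallCNN and the video file are placeholders for the trained
# model and the live Tello feed.
import cv2
import numpy as np
import torch
import torch.nn as nn

class FallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # classes: 0 = normal, 1 = fall
        )

    def forward(self, x):
        return self.net(x)

model = FallCNN().eval()
cap = cv2.VideoCapture("flight.mp4")  # placeholder for the drone stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (128, 128)).astype(np.float32) / 255.0
    x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
    with torch.no_grad():
        label = model(x).argmax(1).item()
    if label == 1:
        print("possible fall detected")
```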
Backpropagation (BP) is widely used for calculating gradients in deep neural networks (DNNs). Often applied along with stochastic gradient descent (SGD) or its variants, BP is considered the de facto choice in a variety of machine learning tasks, including DNN training and adversarial attack/defense. Recently, Guo et al. introduced a linear variant of BP, named LinBP, for generating more transferable adversarial examples in black-box attacks. Although LinBP has been shown empirically effective in black-box attacks, theoretical studies and convergence analyses of the method are lacking. This paper serves as a complement, and somewhat an extension, to Guo et al.'s paper, providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training. We demonstrate that, somewhat surprisingly, LinBP can lead to faster convergence in these tasks than BP under the same hyper-parameter settings. We confirm our theoretical results with extensive experiments.
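The core of LinBP is to keep the nonlinear forward pass but backpropagate as if the ReLU were linear. The sketch below implements that idea as a custom autograd function; which layers to linearize, and the surrounding attack or training loop, are left out.

```python
# Sketch of the LinBP idea: standard ReLU in the forward pass,
# identity in the backward pass (no ReLU gradient masking).
import torch

class LinearBPReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)  # standard ReLU forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients through unchanged

x = torch.randn(5, requires_grad=True)
y = LinearBPReLU.apply(x).sum()
y.backward()
print(x.grad)  # all ones, even where x < 0
```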
We describe how hierarchical concepts can be represented in three types of layered neural networks. The aim is to support recognition of the concepts when partial information about the concepts is presented, and also when some of the neurons in the network might fail. Our failure model involves initial random failures. The three types of networks are: feed-forward networks with high connectivity, feed-forward networks with low connectivity, and layered networks with low connectivity and with both forward edges and "lateral" edges within layers. In order to achieve fault-tolerance, the representations all use multiple representative neurons for each concept. We show how recognition can work in all three of these settings, and quantify how the probability of correct recognition depends on several parameters, including the number of representatives and the neuron failure probability. We also discuss how these representations might be learned, in all three types of networks. For the feed-forward networks, the learning algorithms are similar to ones used in [4], whereas for networks with lateral edges, the algorithms are generally inspired by work on the assembly calculus [3, 6, 7].
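To make the fault-tolerance intuition concrete, the snippet below computes the probability that recognition succeeds when each of r representative neurons fails independently with probability p and recognition requires at least k survivors; the threshold rule k is an assumed simplification, not the paper's exact recognition criterion.

```python
# Back-of-the-envelope reliability calculation: with r representatives
# per concept, each failing independently with probability p_fail,
# recognition succeeds if at least k representatives survive.
from scipy.stats import binom

def p_recognized(r: int, p_fail: float, k: int) -> float:
    # P(at least k of r representatives survive)
    return binom.sf(k - 1, r, 1.0 - p_fail)

for r in (1, 5, 10, 20):
    # illustrative threshold: a majority of the representatives
    print(r, round(p_recognized(r, p_fail=0.1, k=max(1, r // 2)), 6))
```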
AI assistants such as Alexa, Google Assistant, and Siri are making their way into the healthcare sector, offering a convenient way for users to access different healthcare services. Trust is a vital factor in the uptake of healthcare services, but the factors affecting trust in voice assistants used for healthcare are under-explored, and this specialist domain introduces additional requirements. This study explores the effects of different functional, personal, and risk factors on trust in and adoption of healthcare voice AI assistants (HVAs), generating a partial least squares structural model from a survey of 300 voice assistant users. Our results indicate that trust in HVAs can be significantly explained by functional factors (usefulness, content credibility, quality of service relative to a healthcare professional), together with security and privacy risks and personal stance toward technology. We also discuss differences in trust between HVAs and general-purpose voice assistants, as well as implications unique to HVAs.
The relationship between acute kidney injury (AKI) prediction and nephrotoxic drugs, i.e., drugs that adversely affect kidney function, has yet to be explored in the critical care setting. One contributing factor to this gap is the limited investigation of drug modalities in the intensive care unit (ICU) context, owing to the challenges of processing prescription data into corresponding drug representations and a lack of comprehensive understanding of these representations. This study addresses this gap by proposing a novel approach that leverages patient prescription data as a modality to improve existing models for AKI prediction. We base our research on Electronic Health Record (EHR) data, extracting the relevant patient prescription information and converting it into our selected drug representation, the extended-connectivity fingerprint (ECFP). Furthermore, we adopt a multimodal approach, developing machine learning models and 1D Convolutional Neural Networks (CNNs) applied to clinical drug representations, a procedure not used in previous studies predicting AKI. The findings show a notable improvement in AKI prediction from integrating drug embeddings with other patient cohort features. Using drug features represented as ECFP molecular fingerprints along with common cohort features such as demographics and lab test values, we achieved a considerable improvement in model performance over a baseline model that does not include the drug representations, indicating that our approach enhances existing baseline techniques and highlighting the relevance of drug data for predicting AKI in the ICU setting.
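For reference, the following is a minimal sketch of converting a drug (as a SMILES string) into an ECFP bit vector with RDKit; radius 2 with 2048 bits is the common "ECFP4" setting, though the study's exact parameters may differ.

```python
# Minimal sketch: drug SMILES -> ECFP bit vector with RDKit,
# usable as input features for a 1D CNN or classical ML model.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CC(=O)Nc1ccc(O)cc1"  # acetaminophen, as an example drug
mol = Chem.MolFromSmiles(smiles)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)  # ECFP4-style
features = np.array(fp)  # 0/1 feature vector of length 2048
print(features.shape, int(features.sum()))
```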
Many epidemiological and clinical studies aim to analyze a time-to-event endpoint. A common complication is right censoring. In some cases, censoring arises because subjects are still surviving when the study terminates or because they move out of the study area; in these cases, right censoring is typically treated as independent or non-informative. This assumption can be relaxed to conditionally independent censoring by leveraging possibly time-varying covariate information, if available, assuming censoring and failure times are independent within covariate strata. In other instances, events may be censored by competing events, such as death, that are associated with censoring, possibly through prognoses. Realistically, measured covariates can rarely capture all such associations with certainty; under such dependent censoring, covariate measurements are often at best proxies of the underlying prognoses. In this paper, we establish a nonparametric identification framework by formally admitting that conditionally independent censoring may fail in practice and by accounting for covariate measurements as imperfect proxies of the underlying association. The framework suggests adaptive estimators, for which we give generic assumptions under which they are consistent, asymptotically normal, and doubly robust. We illustrate our framework in concrete settings, where we examine the finite-sample performance of the proposed estimators via a Monte Carlo simulation and apply them to the SEER-Medicare dataset.
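To fix ideas, the hierarchy of censoring assumptions discussed above can be written as follows, using notation we assume for illustration: failure time T, censoring time C, observed covariates X, and latent prognosis U for which X is only a proxy.

```latex
% Notation (assumed): failure time T, censoring time C,
% observed covariates X, latent prognosis U.
\text{Independent censoring: } T \perp\!\!\!\perp C
\qquad
\text{Conditionally independent censoring: } T \perp\!\!\!\perp C \mid X
\qquad
\text{Proxy setting: } T \perp\!\!\!\perp C \mid U \ \text{may hold while}\ T \perp\!\!\!\perp C \mid X \ \text{fails.}
```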
Cure rate models are mostly used to study data arising from cancer clinical trials; their use in the context of infectious diseases has not been explored well. In 2008, Tournoud and Ecochard first proposed a mechanistic formulation of the cure rate model for infectious diseases with multiple exposures to infection; however, they assumed a simple Poisson distribution for the unobserved pathogen count at each exposure time. In this paper, we propose a new cure rate model to study infectious diseases with discrete multiple exposures to infection. Our formulation captures both over-dispersion and under-dispersion in the pathogen count at each exposure time. We also propose a new estimation method based on the expectation maximization (EM) algorithm to compute the maximum likelihood estimates of the model parameters. We carry out a detailed Monte Carlo simulation study to demonstrate the performance of the proposed model and estimation algorithm. The flexibility of the proposed model also allows us to carry out model discrimination, for which we use both the likelihood ratio test and information-based criteria. Finally, we illustrate the proposed model using recently collected data on COVID-19.
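For context, the single-exposure Poisson promotion-time cure model that such formulations generalize can be written as follows, where M is the latent pathogen count and each pathogen causes infection at an i.i.d. time with distribution F; the proposed model replaces the Poisson law with a count distribution flexible enough to capture both over- and under-dispersion.

```latex
% Baseline single-exposure promotion-time cure model,
% with M ~ Poisson(theta) latent pathogens and i.i.d.
% infection times W_i ~ F:
S_{\text{pop}}(t)
  = \sum_{m=0}^{\infty} P(M = m)\,\{1 - F(t)\}^{m}
  = e^{-\theta F(t)},
\qquad
\text{cure fraction} = S_{\text{pop}}(\infty) = e^{-\theta}.
```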
Understanding causality helps structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data toward deep learning-based pattern recognition. The rapid accumulation of massive data has promoted the emergence of highly scalable causal discovery methods. Existing surveys of causal discovery mainly cover traditional methods based on constraints, scores, and functional causal models (FCMs); deep learning-based methods lack a systematic organization and elaboration, and causal discovery has rarely been considered from the perspective of variable paradigms. Therefore, we divide possible causal discovery tasks into three types according to the variable paradigm and give definitions for each; we define and instantiate the relevant datasets for each task and the final causal model constructed, and then review the main existing causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives for the current research gaps in causal discovery and point out future research directions.
Graph neural networks (GNNs) have been proven effective in various network-related tasks. Most existing GNNs exploit only the low-frequency signals of node features, which raises one fundamental question: is low-frequency information all we need in real-world applications? In this paper, we first present an experimental investigation assessing the roles of low-frequency and high-frequency signals; the results clearly show that exploiting only the low-frequency signal is far from sufficient for learning effective node representations in different scenarios. How can GNNs adaptively learn information beyond the low-frequency band? A well-informed answer can help GNNs enhance their adaptability. We tackle this challenge and propose a novel Frequency Adaptation Graph Convolutional Network (FAGCN) with a self-gating mechanism, which can adaptively integrate different signals during message passing. For a deeper understanding, we theoretically analyze the roles of low-frequency and high-frequency signals in learning node representations, which further explains why FAGCN can perform well on different types of networks. Extensive experiments on six real-world networks validate that FAGCN not only alleviates the over-smoothing problem but also outperforms state-of-the-art methods.
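A hedged sketch of a frequency-adaptive layer in the spirit of FAGCN is given below: a self-gate produces signed edge coefficients in (-1, 1), so negative coefficients act as high-pass (difference) filters and positive ones as low-pass (averaging) filters. The dense adjacency, the residual weight eps, and the toy graph are illustrative simplifications, not the paper's exact implementation.

```python
# Frequency-adaptive (FAGCN-style) layer sketch: a tanh self-gate
# yields signed edge coefficients, mixed with a symmetric degree
# normalization and a residual connection to the input features.
import torch
import torch.nn as nn

class FrequencyAdaptiveLayer(nn.Module):
    def __init__(self, dim: int, eps: float = 0.3):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)
        self.eps = eps  # weight of the raw-feature residual

    def forward(self, h, h0, adj, deg):
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1),
             h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        alpha = torch.tanh(self.gate(pairs)).squeeze(-1)  # signed, in (-1, 1)
        norm = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))
        return self.eps * h0 + (alpha * norm) @ h

# toy usage: 4 nodes on a ring
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0],
                    [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
deg = adj.sum(1)
h = torch.randn(4, 8)
layer = FrequencyAdaptiveLayer(8)
out = layer(h, h, adj, deg)
print(out.shape)  # torch.Size([4, 8])
```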
Human doctors with well-structured medical knowledge can diagnose a disease via only a few conversations with patients about symptoms. In contrast, existing knowledge-grounded dialogue systems often require a large number of dialogue instances to learn, as they fail to capture the correlations between different diseases and neglect the diagnostic experience shared among them. To address this issue, we propose a more natural and practical paradigm, i.e., low-resource medical dialogue generation, which can transfer diagnostic experience from source diseases to target ones with only a handful of data for adaptation. It capitalizes on a commonsense knowledge graph to characterize the prior disease-symptom relations. Besides, we develop a Graph-Evolving Meta-Learning (GEML) framework that learns to evolve the commonsense graph for reasoning about disease-symptom correlations in a new disease, which effectively alleviates the need for a large number of dialogues. More importantly, by dynamically evolving disease-symptom graphs, GEML also addresses the real-world challenge that the disease-symptom correlations of each disease may vary or evolve as more diagnostic cases accumulate. Extensive experimental results on the CMDD dataset and our newly collected Chunyu dataset attest to the superiority of our approach over state-of-the-art approaches. Besides, GEML can generate an enriched dialogue-sensitive knowledge graph in an online manner, which could benefit other tasks grounded on knowledge graphs.
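To illustrate the low-resource transfer idea, the sketch below shows a generic MAML-style inner-loop adaptation step on a handful of target-disease examples; it is a toy stand-in for GEML's meta-learning component and omits the graph-evolving part entirely.

```python
# Generic meta-learning (MAML-style) inner-loop sketch: adapt a
# shared initialization with a few target-domain examples. The
# linear model is a toy stand-in for the dialogue model.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)           # stand-in for the dialogue model
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.01

# A handful of target-disease examples (the "support set").
x_support = torch.randn(8, 16)
y_support = torch.randint(0, 4, (8,))

# One inner-loop step: compute adapted parameters functionally.
loss = loss_fn(model(x_support), y_support)
grads = torch.autograd.grad(loss, model.parameters())
adapted = {name: p - inner_lr * g
           for (name, p), g in zip(model.named_parameters(), grads)}

# The adapted weights would then be scored on a query set, and the
# outer loop would update the shared initialization.
x_query = torch.randn(4, 16)
logits = nn.functional.linear(x_query, adapted["weight"], adapted["bias"])
print(logits.shape)  # torch.Size([4, 4])
```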