The intricate connection between daily behaviours and health necessitates robust behaviour monitoring, particularly with the advent of IoT systems. This study introduces an innovative approach that exploits the synergy of information from various IoT sources to assess how well behaviour routines align with health guidelines. We grouped routines based on guideline compliance and used a clustering method to identify similarities in behaviours and the key characteristics within each cluster. Applied to an elderly care case study, our approach unveils patterns leading to physical inactivity by categorising days according to whether the recommended daily step count was met. Utilising data from wristbands, smartphones, and ambient sensors, the study provides insights not achievable with single-source data. Visualisation in a calendar view helps health experts understand patient behaviours, enabling precise interventions. Notably, the approach facilitates early detection of behaviour changes during events such as COVID-19 and Ramadan, both of which are covered by our dataset. This work marks a promising path for behavioural analysis and the discovery of behavioural variations to empower smart healthcare, offering insights into patient health, personalised interventions, and healthier routines through continuous IoT-driven data analysis.
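As a concrete illustration of the grouping-then-clustering step, the sketch below clusters per-day activity features separately within guideline-compliant and non-compliant days. The 10,000-step threshold, the feature names, and the use of k-means are illustrative assumptions, not the exact pipeline of the study.

```python
# Minimal sketch (not the study's implementation): group days by compliance
# with a hypothetical 10,000-step guideline, then cluster each group to find
# recurring behaviour patterns. Feature names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

STEP_GUIDELINE = 10_000  # assumed daily-step recommendation

# One row per day, with features fused from wristband, phone, and ambient sensors
days = pd.DataFrame({
    "steps":           np.random.randint(2_000, 15_000, 120),
    "sedentary_hours": np.random.uniform(4, 14, 120),
    "time_outside_h":  np.random.uniform(0, 5, 120),
})
days["compliant"] = days["steps"] >= STEP_GUIDELINE

for compliant, group in days.groupby("compliant"):
    X = StandardScaler().fit_transform(group[["steps", "sedentary_hours", "time_outside_h"]])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    # Per-cluster means expose the key characteristics of each behaviour pattern
    summary = group.assign(cluster=labels).groupby("cluster").mean(numeric_only=True)
    print(f"compliant={compliant}\n{summary}\n")
```

A calendar view can then be produced by colouring each day with its cluster label, which is the kind of visualisation the abstract describes for health experts.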
The calibration of MEMS triaxial gyroscopes is crucial for achieving precise attitude estimation in various wearable health monitoring applications. However, gyroscope calibration poses greater challenges than the calibration of accelerometers and magnetometers. This paper introduces an efficient method for calibrating MEMS triaxial gyroscopes using only a servo motor, making it well-suited for field environments. The core strategy of the method is to exploit the fact that the dot product of the measured gravity and the rotational speed in a fixed frame remains constant. To eliminate the influence of rotational centrifugal force on the accelerometer, the accelerometer data are measured while stationary. The proposed calibration experiment scheme takes gyroscopic measurements while each axis operates at a specific rotation speed, making it easy to evaluate linearity across the speed range spanned by a series of rotation speeds. Moreover, the classical least-squares algorithm alone proves adequate for estimating the scale factor, notably streamlining the analysis of the calibration process. Extensive numerical simulations were conducted to analyze the proposed method's performance in calibrating a triaxial gyroscope model. Experimental validation was also carried out using a commercially available MEMS inertial measurement unit (the LSM9DS1 on an Arduino Nano 33 BLE Sense) and a servo motor capable of precise speed control. The experimental results effectively demonstrate the efficacy of the proposed calibration approach.
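The per-axis scale-factor estimation described above reduces to an ordinary least-squares fit of gyro readings against the known servo speeds. The sketch below illustrates this for one axis; the speed values and readings are made up, and the bias term is an assumption included only to show the linear model, not necessarily part of the paper's formulation.

```python
# Minimal sketch (assumptions, not the paper's exact procedure): per-axis
# scale-factor estimation by ordinary least squares from gyro readings taken
# while a servo motor spins the axis at a series of known reference speeds.
import numpy as np

omega_ref  = np.array([30.0, 60.0, 90.0, 120.0, 150.0])   # servo speeds, deg/s (assumed)
omega_meas = np.array([29.1, 58.4, 87.9, 117.0, 146.3])   # raw gyro readings for one axis (made up)

# Linear model: omega_meas = k * omega_ref + b  (scale factor k, bias b)
A = np.column_stack([omega_ref, np.ones_like(omega_ref)])
(k, b), residuals, *_ = np.linalg.lstsq(A, omega_meas, rcond=None)

print(f"scale factor k = {k:.4f}, bias b = {b:.3f} deg/s")
# The residuals over the speed range give a direct check of linearity, which is
# what the per-speed measurement scheme is designed to expose.
```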
The degree centrality of a node, defined as the number of nodes adjacent to it, is often used as a measure of the node's importance to the structure of a network. This metric can be extended to paths in a network, where the degree centrality of a path is defined as the number of nodes adjacent to it, i.e., adjacent to at least one node of the path. In this paper, we reconsider the problem of finding the most degree-central shortest path in an unweighted network. We propose a polynomial-time algorithm with a worst-case running time of $O(|E||V|^2\Delta(G))$, where $|V|$ is the number of vertices in the network, $|E|$ is the number of edges, and $\Delta(G)$ is the maximum degree of the graph. We conduct a numerical study of our algorithm on synthetic and real-world networks and compare our results to the existing literature. In addition, we show that the same problem is NP-hard when a weighted graph is considered. Furthermore, we consider other centrality measures, such as betweenness and closeness centrality, showing that the problem of finding the most betweenness-central shortest path is solvable in polynomial time, whereas finding the most closeness-central shortest path is NP-hard regardless of whether the graph is weighted.
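For readers who want to experiment with the notion of path degree centrality, the snippet below is a brute-force baseline over all shortest paths between one fixed pair of nodes; it is not the proposed $O(|E||V|^2\Delta(G))$ algorithm, and the convention of excluding the path's own nodes from the count is an assumption.

```python
# Brute-force illustration of the most degree-central shortest path between
# two nodes (exponential in the worst case; for intuition only).
import networkx as nx

def path_degree_centrality(G, path):
    """Number of distinct nodes adjacent to at least one node of the path
    (excluding the path's own nodes -- one possible convention)."""
    path_set = set(path)
    neighbours = set()
    for v in path:
        neighbours.update(G.neighbors(v))
    return len(neighbours - path_set)

G = nx.karate_club_graph()
s, t = 0, 33
best = max(nx.all_shortest_paths(G, s, t),
           key=lambda p: path_degree_centrality(G, p))
print(best, path_degree_centrality(G, best))
```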
Design optimization problems, e.g., shape optimization, that involve deformable bodies in unilateral contact are challenging: they require robust contact solvers, complex optimization methods that are typically gradient-based, and sensitivity derivations. Notably, the problems are nonsmooth, adding significant difficulty to the optimization process. We study design optimization problems in frictionless unilateral contact subject to pressure constraints, using both gradient-based and gradient-free optimization methods, namely Bayesian optimization. The contact simulation problem is solved via the mortar contact and finite element methods. For the gradient-based method, we use the direct differentiation method to compute the sensitivities of the cost and constraint functions with respect to the design variables, and we then use Ipopt to solve the optimization problems. For the gradient-free approach, we use a constrained Bayesian optimization algorithm based on a standard Gaussian process surrogate model. We present numerical examples that control the contact pressure, inspired by real-life engineering applications, to demonstrate the effectiveness, strengths, and shortcomings of both methods. Our results suggest that both optimization methods perform reasonably well on these nonsmooth problems.
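To make the gradient-free branch concrete, the sketch below shows a constrained Bayesian optimization loop with Gaussian-process surrogates for the cost and the constraint, using expected improvement weighted by the probability of feasibility. It is only an illustrative stand-in: the expensive mortar-contact/FEM simulation is replaced by cheap analytic functions, and the acquisition strategy is an assumption rather than the paper's exact algorithm.

```python
# Illustrative constrained Bayesian optimization loop (1-D toy problem,
# not the paper's contact/FEM pipeline).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):        return np.sin(3 * x) + x**2 - 0.7 * x   # stand-in objective
def constraint(x):  return 0.5 - x                           # feasible where g(x) <= 0

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(6, 1))                      # initial design
y_f, y_g = cost(X).ravel(), constraint(X).ravel()

gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp_g = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
grid = np.linspace(-1.0, 2.0, 400).reshape(-1, 1)

for _ in range(20):
    gp_f.fit(X, y_f)
    gp_g.fit(X, y_g)
    mu_f, sd_f = gp_f.predict(grid, return_std=True)
    mu_g, sd_g = gp_g.predict(grid, return_std=True)
    feasible = y_g <= 0
    best = y_f[feasible].min() if feasible.any() else y_f.min()
    z = (best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)     # expected improvement (minimization)
    pof = norm.cdf(-mu_g / np.maximum(sd_g, 1e-9))            # probability of feasibility
    x_next = grid[np.argmax(ei * pof)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y_f = np.append(y_f, cost(x_next).ravel())
    y_g = np.append(y_g, constraint(x_next).ravel())

feasible = y_g <= 0
print("best feasible design:", X[feasible][np.argmin(y_f[feasible])])
```

In the paper's setting, evaluating `cost` and `constraint` would mean running the mortar-contact finite element simulation, which is what makes a sample-efficient surrogate approach attractive.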
The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets that allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers, and we perform a human evaluation study upon which we base our choice of the final metric. We apply our benchmark to a suite of vision-language models and present a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
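The hierarchy-aware follow-up step can be sketched as follows: if the model's answer is a correct but coarse ancestor of the ground-truth class, a follow-up question one level deeper is generated automatically. The toy taxonomy, the question template, and the exact matching rule below are illustrative assumptions, not the benchmark's actual label hierarchy.

```python
# Minimal sketch of hierarchy-aware follow-up questioning.
PARENT = {                       # child -> parent, toy taxonomy (assumed)
    "golden retriever": "dog",
    "beagle": "dog",
    "dog": "animal",
    "sparrow": "bird",
    "bird": "animal",
}

def ancestors(label):
    chain = []
    while label in PARENT:
        label = PARENT[label]
        chain.append(label)
    return chain

def follow_up(ground_truth, model_answer):
    """Return a follow-up question if the answer is a coarse ancestor, else None."""
    if model_answer == ground_truth:
        return None                          # exact match, nothing to refine
    if model_answer in ancestors(ground_truth):
        return f"What kind of {model_answer} is shown in the image?"
    return None                              # incorrect answer, no follow-up

print(follow_up("golden retriever", "dog"))
# -> "What kind of dog is shown in the image?"
```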
Disease prediction holds considerable significance in modern healthcare because of its crucial role in facilitating early intervention and implementing effective prevention measures. However, most recent disease prediction approaches rely heavily on laboratory test outcomes (e.g., blood tests and medical imaging from X-rays). Gaining access to such data for precise disease prediction is often complex from a patient's standpoint, and such data become available only after a consultation. To make disease prediction available from the patient's side, we propose Personalized Medical Disease Prediction (PoMP), which predicts diseases using patient health narratives comprising textual descriptions and demographic information. By applying PoMP, patients can gain a clearer understanding of their conditions, empowering them to seek the appropriate medical specialists directly and thereby reducing the time spent navigating healthcare communication to locate suitable doctors. We conducted extensive experiments using real-world data from Haodf to showcase the effectiveness of PoMP.
Pathology reports are rich in clinical and pathological details but are often presented in free-text format. The unstructured nature of these reports poses a significant challenge that limits the accessibility of their content. In this work, we present a practical approach based on large multimodal models (LMMs) for automatically extracting information from scanned images of pathology reports, with the goal of generating a standardised report that specifies the value of each field along with an estimated confidence in the accuracy of the extracted value. The proposed approach overcomes a limitation of existing methods, which do not assign confidence scores to extracted fields, restricting their practical use. The proposed framework uses two stages of prompting a Large Multimodal Model (LMM): one for information extraction and one for validation. The framework generalises to textual reports from multiple medical centres as well as to scanned images of legacy pathology reports. We show that the estimated confidence is an effective indicator of the accuracy of the extracted information and can be used to select only accurately extracted fields. We also examine the prognostic significance of structured and unstructured data from pathology reports and show that the automatically extracted field values carry significant prognostic value for patient stratification. The framework is available for evaluation via the URL: //labieb.dcs.warwick.ac.uk/.
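The two-stage prompting idea can be sketched as extraction followed by validation with per-field confidence. In the snippet below, `call_lmm` is a hypothetical placeholder for whatever multimodal model client is used, and the field names, prompts, and confidence scheme are illustrative assumptions rather than the framework's actual prompts.

```python
# Minimal sketch of two-stage prompting: extract fields, then validate them
# with confidence scores.  `call_lmm` is a hypothetical placeholder.
import json

FIELDS = ["specimen_type", "tumour_grade", "margin_status"]   # example fields (assumed)

def call_lmm(prompt, image):
    # Placeholder: wire up the multimodal model API of your choice here.
    raise NotImplementedError

def extract_report(image):
    # Stage 1: ask for structured JSON with one value per field.
    extraction_prompt = (
        "Read this scanned pathology report and return JSON with the fields "
        + ", ".join(FIELDS) + ". Use null when a field is absent."
    )
    extracted = json.loads(call_lmm(extraction_prompt, image))

    # Stage 2: ask the model to validate each value against the image and
    # attach a confidence score; low-confidence fields can then be dropped
    # or routed to manual review.
    validation_prompt = (
        "For each field/value pair below, check it against the report image and "
        "return JSON mapping each field to {'value': ..., 'confidence': 0-1}.\n"
        + json.dumps(extracted)
    )
    return json.loads(call_lmm(validation_prompt, image))
```

Thresholding on the returned confidence is what allows the framework to keep only accurately extracted fields, as described above.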
Knowing whether vaccine protection wanes over time is important for health policy and drug development. However, quantifying waning effects is difficult. A simple contrast of vaccine efficacy at two different times compares different populations of individuals: those who were uninfected at the first time versus those who remain uninfected until the second time. Thus, the contrast of vaccine efficacy at early and late times cannot be interpreted as a causal effect. We propose to quantify vaccine waning using the challenge effect, which is a contrast of outcomes under controlled exposures to the infectious agent following vaccination. We identify sharp bounds on the challenge effect under non-parametric assumptions that are broadly applicable in vaccine trials using routinely collected data. We demonstrate that the challenge effect can differ substantially from the conventional vaccine efficacy due to depletion of susceptible individuals from the risk set over time. Finally, we apply the methods to derive bounds on the waning of the BNT162b2 COVID-19 vaccine using data from a placebo-controlled randomized trial. Our estimates of the challenge effect suggest waning protection beyond two months after administration of the second vaccine dose.
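To make the selection issue explicit, one schematic way to write the period-specific efficacy contrast (with notation chosen here, not necessarily the paper's) is

$$\mathrm{VE}_{(t_1,t_2)} \;=\; 1 - \frac{P\{Y_{t_2}=1 \mid A=1,\; Y_{t_1}=0\}}{P\{Y_{t_2}=1 \mid A=0,\; Y_{t_1}=0\}},$$

where $A$ is the randomized vaccination indicator and $Y_t$ is infection status by time $t$. The conditioning event $Y_{t_1}=0$ selects individuals who remained uninfected through $t_1$; because the unvaccinated arm is depleted of susceptible individuals faster than the vaccinated arm, comparing $\mathrm{VE}_{(0,t_1)}$ with $\mathrm{VE}_{(t_1,t_2)}$ contrasts differently selected populations rather than measuring a causal waning effect, which motivates the challenge-effect estimand.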
A/B testing is generally performed by private companies to increase user engagement and satisfaction with online features. Its usage is far from transparent and may undermine user autonomy (e.g., by polarizing individual opinions or spreading mis- and disinformation). For our analysis we leverage a crucial case-study dataset (Upworthy), in which news headlines were allocated to users and reshuffled to optimize clicks. Our focus is to determine how and under which conditions A/B testing affects the distribution of content at the collective level, specifically on different social network structures. To achieve this, we set up an agent-based model reproducing social interaction together with an individual decision-making model. Our preliminary results indicate that A/B testing has a substantial influence on the qualitative dynamics of information dissemination on a social network. Moreover, our modeling framework promisingly lends itself to conjecturing policy interventions (e.g., nudging, boosting).
Reconfigurable Intelligent Surface (RIS) technology is increasingly becoming a potential component of next-generation wireless networks, offering enhanced performance in terms of throughput, spectral efficiency, and energy efficiency. However, the broadcast nature of RIS-assisted wireless communication makes it vulnerable to malicious attacks at the physical layer. At the same time, physical layer authentication is gaining popularity as a solution for securing wireless networks, thwarting attacks such as cloning, spoofing, and impersonation by using the random features of the physical layer. In this paper, we investigate RIS-assisted wireless communication systems to unlock the potential of using RIS for physical layer authentication (PLA). In particular, we exploit two distinct features of the physical layer, pathloss and channel impulse response (CIR), for PLA in RIS-assisted wireless communication. We construct hypothesis tests for the estimated features and derive closed-form error expressions. Further, we take the critical error, i.e., missed detection, as the objective function to minimize by optimizing the phase shift of the RIS panel. We compare the performance of our proposed mechanisms with PLA schemes that use the same features but no RIS. Furthermore, we thoroughly evaluate our proposed schemes using performance metrics such as the probability of false alarm (PFA), the probability of missed detection (PMD), and receiver operating characteristic (ROC) curves. The results demonstrate a clear positive impact of RIS on PLA, as it effectively reduces PMD values to zero when the optimal phase shift is used.
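Feature-based PLA of this kind amounts to a binary hypothesis test on an estimated physical-layer feature. The toy sketch below uses an assumed Gaussian model for a single feature to compute PFA and PMD for a threshold test; the actual paper derives closed-form error expressions for pathloss and CIR in the RIS-assisted channel, not for this simplified setup.

```python
# Toy binary hypothesis test for physical layer authentication (PLA):
# H0 = legitimate transmitter, H1 = attacker.  Gaussian feature model assumed.
import numpy as np
from scipy.stats import norm

mu_legit, mu_attacker, sigma = 0.0, 1.0, 0.4   # assumed feature statistics
tau = 0.6                                      # acceptance threshold: accept H0 if |x - mu_legit| <= tau

# Probability of false alarm: rejecting H0 although the transmitter is legitimate
pfa = 2 * norm.sf(tau / sigma)
# Probability of missed detection: accepting H0 although the signal comes from the attacker
delta = mu_attacker - mu_legit
pmd = norm.cdf((tau - delta) / sigma) - norm.cdf((-tau - delta) / sigma)

print(f"PFA = {pfa:.4f}, PMD = {pmd:.4f}")
# Sweeping tau traces the ROC curve; optimizing the RIS phase shifts effectively
# increases the feature separation delta, driving PMD toward zero.
```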
Autonomous systems are soon to be ubiquitous, from manufacturing autonomy to agricultural field robots, and from health care assistants to the entertainment industry. The majority of these systems are developed with modular sub-components for decision-making, planning, and control that may be hand-engineered or learning-based. While these existing approaches have been shown to perform well in the situations they were specifically designed for, they can perform especially poorly in rare, out-of-distribution scenarios that will undoubtedly arise at test time. The rise of foundation models trained on multiple tasks with impressively large datasets from a variety of fields has led researchers to believe that these models may provide the common-sense reasoning that existing planners are missing. Researchers posit that this common-sense reasoning will bridge the gap between algorithm development and deployment to out-of-distribution tasks, much as humans adapt to unexpected scenarios. Large language models have already penetrated the robotics and autonomous systems domains, as researchers scramble to showcase their potential use cases in deployment. While this application direction is very promising empirically, foundation models are known to hallucinate and to generate decisions that may sound reasonable but are in fact poor. We argue there is a need to step back and simultaneously design systems that can quantify the certainty of a model's decision and detect when it may be hallucinating. In this work, we discuss current use cases of foundation models for decision-making tasks, provide a general definition of hallucinations with examples, discuss existing approaches to hallucination detection and mitigation with a focus on decision problems, and explore areas for further research in this exciting field.