Data-driven applications and services are increasingly deployed in all aspects of life, including healthcare and medical services, where a huge amount of personal data is collected, aggregated, and processed on a centralised server from various sources. As a consequence, preserving the privacy and security of this data is of paramount importance. Since May 2018, the new data protection legislation in the EU/UK, the General Data Protection Regulation (GDPR), has been in force, creating a critical need for modelling compliance with the GDPR's sophisticated requirements. Existing threat modelling techniques are not designed to model GDPR compliance, particularly in complex systems where personal data is collected, processed, manipulated, and shared with third parties. In this paper, we present a novel, comprehensive solution for developing a threat modelling technique that addresses and mitigates threats of non-compliance, taking the GDPR requirements as the baseline and combining them with existing security and privacy modelling techniques (\textit{STRIDE} and \textit{LINDDUN}, respectively). To this end, we propose a new data flow diagram integrated with the GDPR principles, develop a knowledge base of non-compliance threats, and leverage an inference engine to reason about GDPR non-compliance threats over the knowledge base. Finally, we demonstrate our solution on threats of non-compliance with legal basis and accountability in a telehealth system to show its feasibility and effectiveness.
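The knowledge-base-plus-inference-engine idea can be pictured with a minimal sketch: rules over annotated data-flow-diagram elements fire non-compliance threats such as a missing legal basis or an accountability gap. The element fields, rule names, and threat labels below are illustrative assumptions, not the paper's actual schema.

```python
# Illustrative sketch (not the paper's implementation): a tiny rule-based
# inference step that flags GDPR non-compliance threats from annotated
# data-flow-diagram elements. Element fields and rule names are hypothetical.

DFD_ELEMENTS = [
    {"id": "collect_vitals", "type": "process", "personal_data": True,
     "legal_basis": None, "processing_log": False},
    {"id": "share_with_lab", "type": "data_flow", "personal_data": True,
     "legal_basis": "consent", "processing_log": True},
]

# Knowledge base: each rule maps a condition over an element to a threat.
RULES = [
    ("MissingLegalBasis",
     lambda e: e["personal_data"] and e["legal_basis"] is None),
    ("AccountabilityGap",
     lambda e: e["personal_data"] and not e["processing_log"]),
]

def infer_threats(elements, rules):
    """Return (element id, threat name) pairs for every rule that fires."""
    return [(e["id"], name) for e in elements for name, cond in rules if cond(e)]

if __name__ == "__main__":
    for elem_id, threat in infer_threats(DFD_ELEMENTS, RULES):
        print(f"{elem_id}: potential GDPR non-compliance threat -> {threat}")
```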
The development of health indicators to measure healthy life expectancy (HLE) is an active field of research aimed at summarizing the health of a population. Although many health indicators have emerged in the literature as critical metrics in public health assessments, the methods and data used to conduct this evaluation vary considerably in nature and quality. Traditionally, health data collection relies on population surveys. However, these surveys, typically of limited size, encompass only a small yet representative segment of the population. This limitation can necessitate the separate estimation of incidence and mortality rates, significantly restricting the available analysis methods. In this article, we leverage an extract from the French National Hospital Discharge database to define health indicators. Our analysis focuses on the resulting Disease-Free Life Expectancy (Dis-FLE) indicator, which provides insights based on the hospital trajectory of each patient admitted to hospital in France during 2008-2013. Through this research, we illustrate the advantages and disadvantages of employing large clinical datasets as the foundation for more robust health indicators. We shed light on the opportunities that such data offer for a more comprehensive understanding of the health status of a population. In particular, we estimate age-dependent hazard rates associated with sex, alcohol abuse, tobacco consumption, and obesity, as well as geographic location. Simultaneously, we delve into the challenges and limitations that arise when adopting such a data-driven approach.
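As a rough illustration of the kind of covariate-dependent hazard modelling mentioned above, the sketch below fits a Cox proportional-hazards model on synthetic, hospital-style records (not the French discharge data); the column names, the simulated hazard, and the use of lifelines are assumptions for the example.

```python
# Minimal sketch (synthetic data, not the hospital-discharge extract): fitting a
# proportional-hazards model for a disease-onset event as a function of age,
# sex, tobacco use, alcohol abuse, and obesity. Column names are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "age_at_entry": rng.uniform(50, 80, n),
    "sex": rng.integers(0, 2, n),
    "tobacco": rng.integers(0, 2, n),
    "alcohol": rng.integers(0, 2, n),
    "obesity": rng.integers(0, 2, n),
})
# Simulate time-to-first-hospitalisation with a covariate-dependent hazard.
hazard = 0.02 * np.exp(0.03 * (df["age_at_entry"] - 50)
                       + 0.4 * df["tobacco"] + 0.3 * df["obesity"])
time_to_event = rng.exponential(1.0 / hazard)
censor_time = rng.uniform(1, 6, n)          # administrative censoring (years)
df["duration"] = np.minimum(time_to_event, censor_time)
df["event"] = (time_to_event <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # hazard ratios for each covariate
```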
Affordances, which describe what actions can be taken in a given situation, can aid robotic manipulation. However, learning affordances requires expensive, large annotated datasets of interactions or demonstrations. In this work, we show that active learning can mitigate this problem and propose using uncertainty to drive an interactive affordance discovery process. We show that our method enables the efficient discovery of visual affordances for several action primitives, such as grasping, stacking objects, and opening drawers, strongly improving data efficiency and allowing us to learn grasping affordances on a real-world setup with an xArm 6 robot arm in a small number of trials.
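A minimal sketch of an uncertainty-driven acquisition step of this kind is shown below, assuming an MC-dropout-style uncertainty estimate over candidate interaction points; the network, feature dimensions, and acquisition rule are illustrative and may differ from the paper's method.

```python
# Sketch of an uncertainty-driven acquisition step (assumed MC-dropout style).
# Given a batch of candidate interaction points, pick the one whose predicted
# affordances are most uncertain and interact with it next.
import torch
import torch.nn as nn

class AffordanceHead(nn.Module):
    def __init__(self, feat_dim=128, n_primitives=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, n_primitives),
        )
    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def most_uncertain(model, candidates, n_samples=20):
    """Return the index of the candidate with the highest predictive entropy."""
    model.train()                      # keep dropout active for MC sampling
    probs = torch.stack([torch.sigmoid(model(candidates))
                         for _ in range(n_samples)]).mean(0)
    entropy = -(probs * probs.clamp_min(1e-8).log()
                + (1 - probs) * (1 - probs).clamp_min(1e-8).log()).sum(-1)
    return int(entropy.argmax())

model = AffordanceHead()
candidate_feats = torch.randn(32, 128)     # visual features of candidate points
next_trial = most_uncertain(model, candidate_feats)
print("interact with candidate", next_trial)
```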
Among the various aspects of ensuring the responsible design of AI tools for healthcare applications, addressing fairness concerns has been a key focus area. Specifically, given the widespread availability of electronic health record (EHR) data and its huge potential to inform a wide range of clinical decision support tasks, improving fairness in this category of health AI tools is of key importance. While this broad problem (that is, mitigating fairness concerns in EHR-based AI models) has been tackled using various methods, task- and model-agnostic methods are noticeably rare. In this study, we aim to close this gap by presenting a new pipeline that generates synthetic EHR data that is not only consistent with (faithful to) the real EHR data but, when combined with the real data, can also reduce the fairness concerns (defined by the end user) in downstream tasks. We demonstrate the effectiveness of our proposed pipeline across various downstream tasks and two different EHR datasets. Our pipeline adds a widely applicable and complementary tool to the existing toolbox of methods for addressing fairness in health AI applications, such as those modifying the design of a downstream model. The codebase for our project is available at //github.com/healthylaife/FairSynth
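The evaluation idea, augmenting real records with synthetic ones and auditing a user-chosen fairness metric downstream, can be sketched as follows; the random data, logistic-regression classifier, and demographic-parity metric are stand-ins, not the proposed generation pipeline itself.

```python
# Hedged illustration of the downstream audit (not the generation pipeline):
# augment real EHR training data with synthetic records and check whether a
# user-chosen fairness metric (here, the demographic parity gap) improves.
import numpy as np
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(y_pred, group):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(1)
X_real, y_real = rng.normal(size=(2000, 20)), rng.integers(0, 2, 2000)
g_real = rng.integers(0, 2, 2000)                  # sensitive attribute
X_syn, y_syn = rng.normal(size=(2000, 20)), rng.integers(0, 2, 2000)
g_syn = rng.integers(0, 2, 2000)                   # fairness-aware synthetic data

def fit_and_audit(X, y, g):
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return demographic_parity_gap(clf.predict(X), g)

print("real only       :", fit_and_audit(X_real, y_real, g_real))
print("real + synthetic:", fit_and_audit(np.vstack([X_real, X_syn]),
                                          np.concatenate([y_real, y_syn]),
                                          np.concatenate([g_real, g_syn])))
```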
Educational resource understanding is vital to online learning platforms, which have seen growing adoption recently. However, researchers and developers often struggle with existing general-purpose natural language toolkits or domain-specific models, which raises the need for an effective and easy-to-use library that benefits AI-for-education research and applications. To bridge this gap, we present EduNLP, a unified, modularized, and extensive library focused on educational resource understanding. In the library, we decouple the whole workflow into four key modules with consistent interfaces: data configuration, processing, model implementation, and model evaluation. We also provide a configurable pipeline that unifies data usage and model usage in standard ways, which users can customize to their own needs. The current version provides 10 typical models from four categories and 5 common downstream evaluation tasks in the education domain across 8 subjects. The project is released at: //github.com/bigdata-ustc/EduNLP.
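As a rough picture of the four-module decomposition (data configuration, processing, model implementation, evaluation) behind a configurable pipeline, consider the sketch below; all class and method names are hypothetical and are not EduNLP's actual API.

```python
# Rough illustration of a four-module NLP workflow wired through a single
# configurable pipeline. All class and method names here are hypothetical,
# NOT EduNLP's actual interface.

class DataConfig:
    def __init__(self, subject, path):
        self.subject, self.path = subject, path

class Processor:
    def tokenize(self, item):
        return item.lower().split()

class Model:
    def encode(self, tokens):
        return [hash(t) % 1000 for t in tokens]    # stand-in for embeddings

class Evaluator:
    def score(self, vectors):
        return sum(len(v) for v in vectors) / max(len(vectors), 1)

class Pipeline:
    """Glue the four modules together behind one run() call."""
    def __init__(self, config, processor, model, evaluator):
        self.config, self.processor = config, processor
        self.model, self.evaluator = model, evaluator
    def run(self, items):
        vectors = [self.model.encode(self.processor.tokenize(x)) for x in items]
        return self.evaluator.score(vectors)

pipe = Pipeline(DataConfig("math", "items.json"), Processor(), Model(), Evaluator())
print(pipe.run(["Solve for x: 2x + 3 = 7", "Factor x^2 - 1"]))
```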
Vehicle motion planning is an essential component of autonomous driving technology. Current rule-based vehicle motion planning methods perform satisfactorily in common scenarios but struggle to generalize to long-tailed situations. Meanwhile, learning-based methods have yet to achieve superior performance over rule-based approaches in large-scale closed-loop scenarios. To address these issues, we propose PlanAgent, the first mid-to-mid planning system based on a Multi-modal Large Language Model (MLLM). The MLLM serves as a cognitive agent that introduces human-like knowledge, interpretability, and common-sense reasoning into closed-loop planning. Specifically, PlanAgent leverages the power of the MLLM through three core modules. First, an Environment Transformation module constructs a Bird's Eye View (BEV) map and a lane-graph-based textual description of the environment as inputs. Second, a Reasoning Engine module introduces a hierarchical chain of thought from scene understanding to lateral and longitudinal motion instructions, culminating in planner code generation. Last, a Reflection module simulates and evaluates the generated planner to reduce the MLLM's uncertainty. Endowed with the common-sense reasoning and generalization capability of the MLLM, PlanAgent can effectively tackle both common and complex long-tailed scenarios. We evaluate PlanAgent on the large-scale and challenging nuPlan benchmarks. A comprehensive set of experiments demonstrates that PlanAgent outperforms the existing state of the art on the closed-loop motion planning task. Code will be released soon.
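The three-module loop can be sketched at a high level as follows, with the MLLM call and the simulator stubbed out; every function name here is illustrative rather than PlanAgent's code.

```python
# High-level sketch of the three-module loop described above (Environment
# Transformation -> Reasoning Engine -> Reflection). The MLLM and the simulator
# are stubbed; all names are illustrative, not PlanAgent's implementation.

def environment_transformation(scene):
    """Render a BEV image and a lane-graph text description from the scene."""
    return {"bev": scene.get("bev_image"),
            "text": "ego on lane L3; lead vehicle 20 m ahead"}

def reasoning_engine(mllm, observation):
    """Chain of thought: scene understanding -> motion instructions -> planner code."""
    prompt = ("Describe the scene, decide lateral/longitudinal behaviour, "
              "then emit planner parameters.\nScene: " + observation["text"])
    return mllm(prompt)           # returns a candidate planner (e.g., a parameter dict)

def reflection(simulate, planner, scene):
    """Roll the candidate planner out in simulation; accept it only if it scores well."""
    return simulate(planner, scene) > 0.8

def plan(mllm, simulate, scene, max_retries=3):
    obs = environment_transformation(scene)
    for _ in range(max_retries):
        planner = reasoning_engine(mllm, obs)
        if reflection(simulate, planner, scene):
            return planner
    return {"fallback": "conservative_planner"}   # default if all candidates fail

# Toy stubs so the sketch runs end to end.
planner = plan(mllm=lambda p: {"target_speed": 10.0, "lateral_offset": 0.0},
               simulate=lambda pl, sc: 0.9,
               scene={"bev_image": None})
print(planner)
```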
Spatiotemporal data is prevalent on a wide range of edge devices, such as those used in personal communication and financial transactions. Recent advancements have sparked growing interest in integrating spatiotemporal analysis with large-scale language models. However, spatiotemporal data often contains sensitive information, making it unsuitable for open third-party access. To address this challenge, we propose a Graph-GAN-based model for generating privacy-protected spatiotemporal data. Our approach incorporates spatial and temporal attention blocks in the discriminator and a spatiotemporal deconvolution structure in the generator. These enhancements enable efficient training under Gaussian noise to achieve differential privacy. Extensive experiments on three real-world spatiotemporal datasets validate the efficacy of our model: it provides a privacy guarantee while maintaining data utility, and a prediction model trained on our generated data remains competitive with one trained on the original data.
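A simplified sketch of the privacy mechanism (clipping discriminator gradients and adding Gaussian noise during training) is given below; proper (epsilon, delta)-DP accounting requires per-sample clipping and a privacy accountant (e.g., Opacus), so this batch-level version only illustrates the idea.

```python
# Simplified sketch of Gaussian-noise training for the discriminator.
# Real DP-SGD needs per-sample clipping and a privacy accountant; this
# batch-level version only illustrates the mechanism.
import torch

def noisy_discriminator_step(discriminator, loss, optimizer,
                             clip_norm=1.0, noise_multiplier=1.1):
    optimizer.zero_grad()
    loss.backward()
    # Bound gradient sensitivity, then perturb with calibrated Gaussian noise.
    torch.nn.utils.clip_grad_norm_(discriminator.parameters(), clip_norm)
    with torch.no_grad():
        for p in discriminator.parameters():
            if p.grad is not None:
                p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
    optimizer.step()

# Toy usage with a stand-in discriminator.
disc = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(disc.parameters(), lr=1e-2)
loss = disc(torch.randn(16, 8)).mean()
noisy_discriminator_step(disc, loss, opt)
```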
Flapping-fin underwater vehicle propulsion systems provide an alternative to propeller-driven systems in situations that involve a constrained environment or require high maneuverability. Testing new configurations through experiments or high-fidelity simulations is an expensive process, slowing the development of new systems; this is especially true when introducing new fin geometries. In this work, we propose machine learning approaches for predicting thrust from the system's fin geometries and kinematics. We introduce data-efficient fin-shape parameterization strategies that enable our network to predict thrust profiles for unseen fin geometries given only a limited set of fin shapes in the input data. In addition to enabling faster development of systems, generalizable surrogate models offer fast, accurate predictions that could be used in an unmanned underwater vehicle control system.
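The surrogate-modelling setup can be illustrated with a small regressor that maps fin-shape parameters plus kinematic inputs to thrust; the feature layout and the synthetic training targets below are assumptions for the sketch, not the paper's parameterization.

```python
# Sketch of a thrust-prediction surrogate: a small regressor over assumed
# fin-shape parameters and kinematic inputs. Training data here is synthetic,
# standing in for CFD or experimental thrust measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 4_000
shape_params = rng.uniform(0, 1, (n, 4))       # e.g., aspect ratio, taper, area, flexibility
amplitude = rng.uniform(10, 40, (n, 1))        # stroke amplitude (deg)
frequency = rng.uniform(0.5, 2.0, (n, 1))      # flapping frequency (Hz)
phase = rng.uniform(0, 2 * np.pi, (n, 1))      # position within the flapping cycle
X = np.hstack([shape_params, amplitude, frequency, phase])
# Placeholder target with a plausible shape/kinematics dependence.
thrust = (amplitude * frequency).ravel() * np.sin(phase).ravel() * (1 + shape_params[:, 0])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, thrust)
print("predicted thrust samples:", model.predict(X[:5]))
```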
Unmanned aerial vehicles (UAVs) have been attracting significant attention because line-of-sight links to terrestrial nodes can be obtained with high probability in high-rise urban areas. In this work, we investigate cognitive radio networks (CRNs) by jointly designing the UAV's three-dimensional (3D) trajectory, its transmit power, and user scheduling. Taking the UAV's onboard energy consumption into account, we formulate an optimization problem that maximizes the average achievable rate of the considered system by jointly optimizing the UAV's 3D trajectory, transmit power, and user scheduling. Because the problem is non-convex, a lower bound on the average achievable rate is used to reduce the complexity of the solution. The original problem is then decoupled into four subproblems via block coordinate descent, and each subproblem is transformed into a manageable convex optimization problem by introducing slack variables and applying successive convex approximation. Numerical results validate the effectiveness of the proposed algorithm and demonstrate that 3D trajectories enable UAVs to enhance the average achievable rate of aerial CRNs.
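The block-coordinate-descent structure can be sketched as an alternating loop over the four blocks; the subproblem solvers below are placeholders, standing in for the convexified problems obtained via slack variables and successive convex approximation.

```python
# Skeleton of the block-coordinate-descent loop described above: the four
# blocks are optimized in turn, with the others held fixed, until the
# objective stops improving. Solvers and the objective are placeholders.

def block_coordinate_descent(blocks, objective, solvers, tol=1e-3, max_iter=50):
    prev = objective(blocks)
    for _ in range(max_iter):
        for name, solver in solvers.items():
            blocks[name] = solver(blocks)      # solve one subproblem, others fixed
        cur = objective(blocks)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return blocks, cur

# Placeholder blocks/solvers just to show the control flow.
blocks = {"scheduling": None, "xy_trajectory": None, "altitude": None, "power": None}
solvers = {k: (lambda b, k=k: b[k]) for k in blocks}
_, value = block_coordinate_descent(blocks, objective=lambda b: 0.0, solvers=solvers)
print("converged objective value:", value)
```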
Planning is a fundamental property of human intelligence. Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs. Can large language models (LLMs) succeed at this task? Here, we present the first large-scale study investigating this question. We find that a representative set of closed and open-source LLMs, including GPT-4 and LLaMA-2, perform poorly on our benchmark AsyncHow when not supplied with illustrations of the task-solving process. We propose a novel technique called Plan Like a Graph (PLaG) that combines graphs with natural language prompts and achieves state-of-the-art results. We show that although PLaG can boost model performance, LLMs still suffer from drastic degradation as task complexity increases, highlighting the limits of utilizing LLMs for simulating digital devices. We see our study as an exciting step towards using LLMs as efficient autonomous agents. Our code and data are available at //github.com/fangru-lin/graph-llm-asynchow-plan.
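A toy example in the spirit of PLaG (not its exact prompt format) is to serialize the task's dependency graph as text and append it to the prompt so that the model can reason about which steps may run in parallel; the tea-making tasks and the networkx check below are illustrative.

```python
# Toy illustration (in the spirit of PLaG, not its exact prompt format): encode
# an asynchronous task's dependency graph as text and append it to the prompt.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("boil water", "brew tea", 3),      # weight = minutes needed by the source step
    ("rinse teapot", "brew tea", 1),
    ("brew tea", "serve", 5),
])

edge_lines = [f"- '{u}' ({w} min) must finish before '{v}'"
              for u, v, w in g.edges(data="weight")]
prompt = ("You are planning tasks that can run in parallel when independent.\n"
          "Dependencies:\n" + "\n".join(edge_lines) +
          "\nWhat is the minimum total time to serve the tea?")
print(prompt)
# The optimum equals the longest (critical) path, which networkx can verify:
print("critical path length:", nx.dag_longest_path_length(g))   # 8 minutes
```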
Automatic and precise fitness activity recognition can be beneficial in aspects ranging from promoting a healthy lifestyle to personalized preventative healthcare. While IMUs are currently the predominant fitness tracking modality, we show with iMove that bio-impedance can help improve IMU-based fitness tracking through sensor fusion and contrastive learning. To evaluate our methods, we conducted an experiment with six upper-body fitness activities performed by ten subjects over five days, collecting synchronized data from bio-impedance across the two wrists and an IMU on the left wrist. The contrastive learning framework uses the two modalities to train a better IMU-only classification model, where bio-impedance is required only during the training phase; the average Macro F1 score with a single IMU as input improved by 3.22 \% to 84.71 \%, compared to 81.49 \% for the IMU baseline model. We also show how bio-impedance can improve human activity recognition (HAR) directly through sensor fusion, reaching an average Macro F1 score of 89.57 \% (both modalities required for training and inference), even though bio-impedance alone reaches only 75.36 \% and is outperformed by the IMU alone. In addition, similar results were obtained in an extended study on lower-body fitness activity classification, demonstrating the generalisability of our approach. Our findings underscore the potential of sensor fusion and contrastive learning as valuable tools for advancing fitness activity recognition, with bio-impedance playing a pivotal role in augmenting the capabilities of IMU-based systems.
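The cross-modal contrastive idea can be sketched with an InfoNCE-style loss that aligns embeddings of time-aligned IMU and bio-impedance windows; the encoder sizes and window shapes below are assumptions, not the paper's architecture.

```python
# Minimal sketch of cross-modal contrastive training: embeddings of time-aligned
# IMU and bio-impedance windows are pulled together (and pushed apart across the
# batch) with an InfoNCE-style loss. Encoder sizes and window shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

imu_encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 100, 128))   # 6-axis IMU window
bio_encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 100, 128))   # two-wrist bio-impedance

def infonce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # similarity of every IMU/bio pair
    targets = torch.arange(z1.size(0))            # matching windows lie on the diagonal
    return F.cross_entropy(logits, targets)

imu_batch = torch.randn(32, 6, 100)               # (batch, channels, samples)
bio_batch = torch.randn(32, 2, 100)
loss = infonce(imu_encoder(imu_batch), bio_encoder(bio_batch))
loss.backward()                                   # only the IMU encoder is kept at inference
print(float(loss))
```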