Testing, contact tracing, and isolation (TTI) is an epidemic management and control approach that is difficult to implement at scale. Here we demonstrate a scalable improvement to TTI that uses data assimilation (DA) on a contact network to learn about individual risks of infection. Network DA exploits diverse sources of health data together with proximity data from mobile devices. In simulations of the early COVID-19 epidemic in New York City, network DA identifies up to a factor of 2 more infections than contact tracing when harnessing the same diagnostic test data. Targeting contact interventions with network DA reduces deaths by up to a factor of 4 relative to TTI, provided compliance reaches around 75%. Network DA can be implemented by expanding the backend of existing exposure notification apps, thus greatly enhancing their capabilities. Implemented at scale, it has the potential to control ongoing or future epidemics precisely and effectively while minimizing economic disruption.
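To make the approach concrete, the following is a minimal illustrative sketch, not the authors' implementation, of how diagnostic test evidence might be propagated over a contact network to update individual infection risks. The update rule, the transmission weight, and the perfect-test assumption are all simplifications chosen for exposition.

```python
import numpy as np

def update_risks(contacts, prior_risk, test_results, transmission_w=0.3, n_iter=5):
    """Toy network 'data assimilation': diffuse evidence from test results
    over a contact graph to refine per-individual infection risks.

    contacts     : dict mapping person index -> list of contact indices
    prior_risk   : array of prior infection probabilities
    test_results : dict person index -> True (positive) / False (negative)
    """
    risk = prior_risk.copy()
    # Clamp risks at tested individuals (simplified: tests are assumed perfect).
    for person, positive in test_results.items():
        risk[person] = 1.0 if positive else 0.0
    for _ in range(n_iter):
        new_risk = risk.copy()
        for person, nbrs in contacts.items():
            if person in test_results or not nbrs:
                continue  # keep observed nodes fixed
            # Probability of escaping infection from all current contacts.
            escape = np.prod([1.0 - transmission_w * risk[n] for n in nbrs])
            new_risk[person] = 1.0 - (1.0 - prior_risk[person]) * escape
        risk = new_risk
    return risk

# Tiny example: person 0 tests positive; contacts 1 and 2 inherit elevated risk,
# and person 3 picks up a smaller second-hand risk through person 1.
contacts = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
prior = np.full(4, 0.01)
print(update_risks(contacts, prior, {0: True}))
```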
In recent years, physiological-signal-based authentication has shown great promise owing to its inherent robustness against forgery. The electrocardiogram (ECG), the most widely studied biosignal, has received the most attention in this regard. Numerous studies have shown that individuals can be identified from their ECG signals with acceptable accuracy. In this work, we present EDITH, a deep learning-based framework for ECG biometric authentication. Moreover, we hypothesize and demonstrate that Siamese architectures can be used in place of typical distance metrics for improved performance. We have evaluated EDITH on 4 commonly used datasets and outperformed prior works while using fewer beats. EDITH performs competitively using just a single heartbeat (96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture reduces the identity verification Equal Error Rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
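As an illustration of the verification idea, here is a minimal PyTorch sketch of a Siamese verifier with a learned similarity head standing in for a fixed distance metric. The encoder, layer sizes, beat length, and training setup are assumptions for exposition and do not reflect EDITH's actual architecture.

```python
import torch
import torch.nn as nn

class SiameseVerifier(nn.Module):
    """Twin encoder plus a small learned similarity head, used instead of a
    fixed distance metric for identity verification (illustrative only)."""

    def __init__(self, beat_len=256, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(beat_len, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )
        # Learned head scores |e1 - e2| rather than a plain Euclidean distance.
        self.head = nn.Sequential(nn.Linear(emb_dim, 1), nn.Sigmoid())

    def forward(self, beat_a, beat_b):
        e1, e2 = self.encoder(beat_a), self.encoder(beat_b)
        return self.head(torch.abs(e1 - e2)).squeeze(-1)  # P(same identity)

model = SiameseVerifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
a, b = torch.randn(8, 256), torch.randn(8, 256)     # pairs of heartbeats
labels = torch.randint(0, 2, (8,)).float()          # 1 = same person
loss = nn.functional.binary_cross_entropy(model(a, b), labels)
opt.zero_grad(); loss.backward(); opt.step()
```

Thresholding the head's output on a held-out set of genuine and impostor pairs is how one would then sweep out the ROC curve and read off an EER.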
Service Workers (SWs) are a powerful feature at the core of Progressive Web Apps, namely web applications that can continue to function when the user's device is offline and that have access to device sensors and capabilities previously accessible only to native applications. During the past few years, researchers have found a number of ways in which SWs may be abused to achieve different malicious purposes. For instance, SWs may be abused to build a web-based botnet, launch DDoS attacks, or perform cryptomining; they may be hijacked to create persistent cross-site scripting (XSS) attacks; they may be leveraged in the context of side-channel attacks to compromise users' privacy; or they may be abused for phishing or social engineering attacks using malvertising delivered via web push notifications. In this paper, we reproduce and analyze known attack vectors related to SWs and explore new abuse paths that have not previously been considered. We systematize the attacks into different categories, then analyze whether and how these attacks have been mitigated by different browser vendors, and estimate when those mitigations were published. We then discuss a number of open SW security problems that are currently unmitigated, and propose SW behavior monitoring approaches and new browser policies that we believe should be implemented by browsers to further improve SW security. Furthermore, we implement a proof-of-concept version of several policies in the Chromium code base, and measure the behavior of SWs used by highly popular web applications with respect to these new policies. Our measurements show that it should be feasible to implement and enforce stricter SW security policies without a significant impact on most legitimate production SWs.
Balancing social utility and equity in distributing limited vaccines represents a critical policy concern for protecting against the prolonged COVID-19 pandemic. What is the nature of the trade-off between maximizing collective welfare and minimizing disparities between more and less privileged communities? To evaluate vaccination strategies, we propose a novel epidemic model that explicitly accounts for both demographic and mobility differences among communities and their association with heterogeneous COVID-19 risks, then calibrate it with large-scale data. Using this model, we find that social utility and equity can be simultaneously improved when vaccine access is prioritized for the most disadvantaged communities, and this holds even when such communities manifest considerable vaccine reluctance. Nevertheless, equity across distinct demographic dimensions can be in tension because of their complex correlations in society. We design two behavior-and-demography-aware indices, community risk and societal harm, which capture the risks communities face, and those they impose on society, when they are not vaccinated, to inform the design of comprehensive vaccine distribution strategies. Our study provides a framework for uniting utility-based and equity-based considerations in vaccine distribution, and sheds light on how to balance multiple ethical values in complex settings for epidemic control.
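To illustrate the kind of trade-off such a model captures, below is a toy two-community SIR simulation with mobility coupling. It is not the paper's calibrated model: the community structure, parameters, and allocation splits are invented for exposition.

```python
import numpy as np

def simulate(vax_share, days=180, gamma=0.1,
             beta=np.array([0.35, 0.25]),
             mix=np.array([[0.9, 0.1], [0.1, 0.9]])):
    """Toy two-community SIR with mobility coupling. Community 0 plays the
    role of the more disadvantaged community: its higher contact rate stands
    in for mobility and work patterns that raise exposure. `vax_share` is the
    fraction of each community vaccinated at day 0."""
    I = np.full(2, 0.001)
    S = 1.0 - vax_share - I
    cumulative = I.copy()
    for _ in range(days):
        foi = beta * (mix @ I)      # force of infection incl. cross-mixing
        new_inf = foi * S
        S -= new_inf
        I += new_inf - gamma * I
        cumulative += new_inf
    return cumulative               # attack rate per community

# Same total vaccine budget, two splits: uniform vs. prioritizing community 0.
print("uniform     :", simulate(np.array([0.20, 0.20])))
print("prioritized :", simulate(np.array([0.30, 0.10])))
```

Comparing the two printouts shows how shifting a fixed budget toward the higher-risk community can lower infections in both communities at once, the qualitative effect the abstract describes.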
In the context of digital therapy interventions, such as internet-delivered Cognitive Behavioral Therapy (iCBT) for the treatment of depression and anxiety, extensive research has shown how the involvement of a human supporter or coach, who assists the person undergoing treatment, improves user engagement in therapy and leads to more effective health outcomes than unsupported interventions. Seeking to maximize the effects and outcomes of this human support, this research investigates how new opportunities provided through recent advances in the fields of AI and machine learning (ML) can contribute useful data insights to effectively support the work practices of iCBT supporters. This paper reports detailed findings of an interview study with 15 iCBT supporters that deepens understanding of their existing work practices and information needs, with the aim of meaningfully informing the development of useful, implementable ML applications, particularly in the context of iCBT treatment for depression and anxiety. The analysis contributes (1) a set of six themes that summarize the strategies and challenges iCBT supporters encounter in providing effective, personalized feedback to their mental health clients; and, in response to these learnings, (2) concrete opportunities for each theme describing how ML methods could help address the identified challenges and information needs. The paper closes with reflections on the potential social, emotional, and pragmatic implications of introducing new machine-generated data insights within supporter-led client review practices.
Recent advances in multi-agent reinforcement learning (MARL) provide a variety of tools that support the ability of agents to adapt to unexpected changes in their environment, and to operate successfully given their environment's dynamic nature (which may be intensified by the presence of other agents). In this work, we highlight the relationship between a group's ability to collaborate effectively and the group's resilience, which we measure as the group's ability to adapt to perturbations in the environment. To promote resilience, we suggest facilitating collaboration via a novel confusion-based communication protocol, according to which agents broadcast observations that are misaligned with their previous experiences. We allow decisions regarding the width and frequency of messages to be learned autonomously by the agents, which are incentivized to reduce confusion. We present an empirical evaluation of our approach in a variety of MARL settings.
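A minimal sketch of the confusion signal might look as follows. The running-statistics definition of confusion, the fixed broadcast threshold, and the message content are all assumptions made for illustration; in the actual protocol, broadcast decisions are learned by the agents themselves.

```python
import numpy as np

class ConfusionBroadcaster:
    """Broadcast an observation only when it is 'confusing', i.e. it deviates
    strongly from the agent's accumulated experience (illustrative sketch)."""

    def __init__(self, obs_dim, z_threshold=3.0):
        self.mean = np.zeros(obs_dim)
        self.var = np.ones(obs_dim)
        self.count = 0
        self.z_threshold = z_threshold

    def step(self, obs):
        self.count += 1
        # Welford-style running update of per-dimension mean and variance.
        delta = obs - self.mean
        self.mean += delta / self.count
        self.var += (delta * (obs - self.mean) - self.var) / self.count
        z = np.abs(obs - self.mean) / np.sqrt(self.var + 1e-8)
        confused = z.max() > self.z_threshold
        return obs if confused else None  # broadcast only when confused

agent = ConfusionBroadcaster(obs_dim=4)
for t in range(100):
    obs = np.random.randn(4) + (10.0 if t == 50 else 0.0)  # perturbation at t=50
    if agent.step(obs) is not None:
        print(f"t={t}: broadcasting surprising observation")
```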
We present a flexible public transit network design model which optimizes a social access objective while guaranteeing that the system's costs and transit times remain within a preset margin of their current levels. The purpose of the model is to find a set of minor, immediate modifications to an existing bus network that can give more communities access to the chosen services while having a minimal impact on the current network's operator costs and user costs. Design decisions consist of reallocation of existing resources in order to adjust line frequencies and capacities. We present a hybrid tabu search/simulated annealing algorithm for the solution of this optimization-based model. As a case study we apply the model to the problem of improving equity of access to primary health care facilities in the Chicago metropolitan area. The results of the model suggest that it is possible to achieve better primary care access equity through reassignment of existing buses and implementation of express runs, while leaving overall service levels relatively unaffected.
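For readers unfamiliar with the metaheuristic, here is a generic hybrid tabu search/simulated annealing skeleton applied to a toy bus-reallocation objective. The move structure, tabu tenure, cooling schedule, and cost function are illustrative and not the paper's exact algorithm.

```python
import math
import random

def hybrid_tabu_sa(init, neighbors, cost, iters=2000, tenure=20,
                   temp=1.0, cooling=0.995):
    """Generic hybrid: a tabu list forbids recently visited solutions, while a
    simulated-annealing criterion occasionally accepts worsening candidates."""
    current, best = init, init
    tabu = []
    for _ in range(iters):
        cand = random.choice(neighbors(current))
        if cand in tabu and cost(cand) >= cost(best):
            continue  # tabu, and not good enough to trigger aspiration
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = cand
            tabu.append(cand)
            if len(tabu) > tenure:
                tabu.pop(0)
            if cost(current) < cost(best):
                best = current
        temp *= cooling
    return best

# Toy: reallocate 10 buses across 4 lines to match a target frequency profile.
target = (4, 3, 2, 1)
cost = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))

def neighbors(x):  # each move shifts one bus from line i to line j
    out = []
    for i in range(4):
        for j in range(4):
            if i != j and x[i] > 0:
                y = list(x); y[i] -= 1; y[j] += 1
                out.append(tuple(y))
    return out

print(hybrid_tabu_sa((10, 0, 0, 0), neighbors, cost))
```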
This paper presents a novel strategy for autonomous teamed exploration of subterranean environments using legged and aerial robots. Tailored to the fact that subterranean settings, such as cave networks and underground mines, often involve complex, large-scale, and multi-branched topologies, while wireless communication within them can be particularly challenging, this work is structured around the synergy of an onboard exploration path planner that allows for resilient long-term autonomy, and a multi-robot coordination framework. The onboard path planner is unified across legged and flying robots and enables navigation in environments with steep slopes and diverse geometries. When a communication link is available, each robot of the team shares submaps with a centralized location, where a multi-robot coordination framework identifies global frontiers of the exploration space to inform each system about where it should re-position to best continue its mission. The strategy is verified through a field deployment inside an underground mine in Switzerland, using a legged and a flying robot collectively exploring for 45 minutes, as well as through a longer simulation study with three systems.
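As a sketch of the coordination layer's frontier step, the following identifies global frontiers, i.e. free cells bordering unknown space, in a merged 2-D occupancy grid. The grid encoding and 4-connectivity are assumptions; a real subterranean system would operate on far richer 3-D submaps.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def global_frontiers(grid):
    """Return coordinates of free cells adjacent to unknown space: candidate
    re-positioning targets for the multi-robot coordination layer (sketch)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Toy merged map: an explored corridor (free) meeting unexplored space.
grid = np.full((5, 5), UNKNOWN)
grid[2, :3] = FREE
grid[1, 1] = OCCUPIED
print(global_frontiers(grid))  # cells on the boundary of the unknown region
```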
Object stores are widely used software stacks that achieve excellent scale-out with a well-defined interface and robust performance. However, their traditional get/put interface is unable to exploit data locality to its fullest, which prevents them from reaching peak performance. In particular, there is one way to improve data locality that has not yet achieved mainstream adoption: the active object store. Although some projects have implemented the main idea of the active object store, such as Swift's Storlets or Ceph Object Classes, the scope of these implementations is limited. We believe that there is huge potential for active object stores in the current status quo. Hyper-converged nodes are bringing more computing capabilities to storage nodes, and vice versa. The proliferation of non-volatile memory (NVM) technology is blurring the line between system memory (fast and scarce) and block devices (slow and abundant). More and more applications need to manage vast amounts of data (data analytics, Big Data, Machine Learning & AI, etc.), demanding bigger clusters and more complex computations. All these elements are potential game changers that need to be evaluated in the scope of active object stores. In this article we propose an active object store software stack and evaluate it on an NVM-populated node. We show how this setup reduces execution times by between 10% and more than 90% in a variety of representative application scenarios. Our discussion focuses on the active aspect of the system as well as on the implications of the memory configuration.
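To make the get/put versus active contrast concrete, here is a toy in-process model of the idea. The `execute` interface is invented for illustration and is not the API of Swift's Storlets, Ceph Object Classes, or the stack proposed in the article.

```python
class ActiveObjectStore:
    """Toy in-process model of an active object store: besides get/put, the
    store can run a function next to the data and return only the result."""

    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        # Classic path: the whole object crosses the wire to the client.
        return self._objects[key]

    def execute(self, key, fn):
        # Active path: the computation moves to the storage node; only the
        # (usually much smaller) result is transferred back.
        return fn(self._objects[key])

store = ActiveObjectStore()
store.put("readings", list(range(1_000_000)))

# Traditional: transfer ~1M values, then reduce on the client side.
total_classic = sum(store.get("readings"))
# Active: ship the reduction to the data, transfer a single number.
total_active = store.execute("readings", sum)
assert total_classic == total_active
```

The win comes precisely from avoiding the bulk transfer on the classic path, which is why hyper-converged nodes and NVM, both of which cheapen computation next to the data, make the active model more attractive.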
Active inference is a unifying theory of perception and action resting upon the idea that the brain maintains an internal model of the world by minimizing free energy. From a behavioral perspective, active inference agents can be seen as self-evidencing beings that act to fulfill their optimistic predictions, namely preferred outcomes or goals. In contrast, reinforcement learning requires human-designed rewards to accomplish any desired outcome. Although active inference could provide a more natural self-supervised objective for control, its applicability has been limited because of shortcomings in scaling the approach to complex environments. In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden of learning the agent's generative model and planning future actions. Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train. We compare against reinforcement learning agents that have access to human-designed reward functions, showing that our approach closely matches their performance. Finally, we also show that contrastive methods perform significantly better when there are distractors in the environment, and that our method is able to generalize goals to variations in the background.
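A minimal sketch of such a contrastive objective, in the style of InfoNCE, is shown below. The embedding shapes, temperature, and use of in-batch negatives are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_score(pred_latents, obs_embeddings, temperature=0.1):
    """InfoNCE-style objective: each predicted latent should match the
    embedding of its own observation and mismatch the rest of the batch,
    sidestepping an expensive pixel-space likelihood model (illustrative)."""
    pred = F.normalize(pred_latents, dim=-1)
    obs = F.normalize(obs_embeddings, dim=-1)
    logits = pred @ obs.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(len(pred))         # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Batch of 32 predicted latent states vs. embedded image observations.
pred = torch.randn(32, 64, requires_grad=True)
obs = torch.randn(32, 64)
loss = contrastive_score(pred, obs)
loss.backward()  # gradients flow into the generative model's predictions
```

Because distractor pixels mostly affect negatives and positives alike, a score of this form plausibly degrades more gracefully than a pixel-space likelihood, consistent with the distractor results the abstract reports.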
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations can cause proficient but narrowly-learned policies to fail at test time. In this work, we propose to learn how to quickly and effectively adapt online to new situations as well as to perturbations. To enable sample-efficient meta-learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach trains a global model such that, when combined with recent data, the model can be rapidly adapted to the local context. Our experiments demonstrate that our approach can enable simulated agents to adapt their behavior online to novel terrains, to a crippled leg, and in highly-dynamic environments.
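A simplified sketch of the online adaptation step follows: a few gradient steps on the most recent transitions specialize a copy of the globally trained dynamics model to the local context. The optimizer, step count, and the use of plain fine-tuning (rather than the paper's meta-learned adaptation) are simplifications for illustration.

```python
import copy
import torch
import torch.nn as nn

def adapt_online(model, states, actions, next_states, steps=5, lr=1e-2):
    """Run a few gradient steps on the freshest transitions so that a
    globally trained dynamics model tracks the local context, e.g. a
    newly crippled leg or a change of terrain (illustrative sketch)."""
    adapted = copy.deepcopy(model)                  # keep the global model intact
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    inp = torch.cat([states, actions], dim=-1)
    for _ in range(steps):
        loss = nn.functional.mse_loss(adapted(inp), next_states)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted  # use for short-horizon planning until the next adaptation

# Toy dynamics model: (state dim 4, action dim 2) -> next state (dim 4).
global_model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 4))
s, a, s_next = torch.randn(16, 4), torch.randn(16, 2), torch.randn(16, 4)
local_model = adapt_online(global_model, s, a, s_next)
```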