Implicit bias may perpetuate healthcare disparities for marginalized patient populations. Such bias is expressed in communication between patients and their providers. With guidance from providers, we design an ecosystem that makes this bias explicit in patient-provider communication. Our end users are providers seeking to improve their quality of care for patients who are Black, Indigenous, People of Color (BIPOC) and/or Lesbian, Gay, Bisexual, Transgender, and Queer (LGBTQ). We present wireframes, divided into three categories (digital nudge, dashboard, and guided reflection), that display communication metrics which negatively impact patient-centered care. Our wireframes provide quantitative, real-time, and conversational feedback that promotes provider reflection on interactions with patients. This is the first design iteration toward a tool that raises providers' awareness of their own implicit biases.
Recently, a growing number of researchers, especially in the realm of political redistricting, have proposed sampling-based techniques to generate a subset of plans from the vast space of districting plans. These techniques have been increasingly adopted by U.S. courts of law and independent commissions as a tool for identifying partisan gerrymanders. Motivated by these developments, we develop a set of similar sampling techniques for designing school boundaries based on the flip proposal, which changes a districting plan by reassigning a single unit. These sampling-based techniques serve a dual purpose. They can be used as a baseline for comparing redistricting algorithms based on local search, and they can help infer problem characteristics that may in turn be used to develop efficient redistricting methods. We empirically examine both of these aspects for the school redistricting problem.
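To make the flip proposal concrete, below is a minimal sketch of a flip-based sampler on a toy grid of attendance units. The grid geometry, the contiguity-only validity check, and the blind acceptance rule are illustrative assumptions; real school-redistricting samplers would also enforce constraints such as population balance and building capacity.

```python
import random
from collections import deque

# Toy geometry: a 4x4 grid of school-attendance units, 4-neighbour adjacency.
N = 4
units = [(r, c) for r in range(N) for c in range(N)]

def neighbours(u):
    r, c = u
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < N and 0 <= c + dc < N]

# Initial plan: two vertical districts.
plan = {u: (0 if u[1] < N // 2 else 1) for u in units}

def contiguous(plan, district):
    """BFS check that all units of `district` form one connected block."""
    members = [u for u in units if plan[u] == district]
    if not members:                       # emptying a district is invalid
        return False
    seen, queue = {members[0]}, deque([members[0]])
    while queue:
        for v in neighbours(queue.popleft()):
            if plan[v] == district and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(members)

def flip_step(plan, rng):
    """One flip proposal: reassign a single boundary unit to an adjacent district."""
    boundary = [(u, plan[v]) for u in units for v in neighbours(u)
                if plan[v] != plan[u]]
    u, new_d = rng.choice(boundary)
    old_d = plan[u]
    plan[u] = new_d
    if not contiguous(plan, old_d):       # reject flips that disconnect a district
        plan[u] = old_d
    return plan

rng = random.Random(0)
for _ in range(1000):
    plan = flip_step(plan, rng)
print(sorted(set(plan.values())))         # both districts survive the walk
```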
The Covid-19 pandemic has caused extensive damage and disruption to social, economic, and health systems, among others, and posed unprecedented challenges to public health and to policy- and decision-makers concerning the design and implementation of measures to mitigate its strong negative impacts. The Portuguese health authorities currently use decision-analysis-like techniques to assess the impact of the pandemic and to implement measures for each county, region, or the whole country. These decision tools drew criticism, and many stakeholders asked for novel approaches, in particular ones that take into consideration dynamic changes in pandemic behavior arising, e.g., from new virus variants or vaccines. A multidisciplinary team of researchers from the Covid-19 Committee of Instituto Superior Técnico at Universidade de Lisboa (the CCIST analyst team) and medical doctors from the Crisis Office of the Portuguese Medical Association (the GCOM expert team) joined efforts to propose a new tool to help politicians and decision-makers combat the pandemic. This paper presents the main steps and elements that led to the construction of a pandemic impact assessment composite indicator, applied to the particular case of Covid-19 in Portugal. A multiple criteria approach based on an additive multi-attribute value theory (MAVT) aggregation model was used to construct the pandemic assessment composite indicator (PACI). The parameters of the additive model were built through a sociotechnical, co-constructive, interactive process between CCIST and GCOM team members. The deck of cards method was the technical tool adopted to help build the value functions and assess the criteria weights.
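As a minimal illustration of the additive MAVT aggregation behind such a composite indicator, the sketch below combines per-criterion value functions with weights. The criteria, readings, value-function anchors, and weights are invented placeholders, not the parameters elicited from the CCIST/GCOM deck-of-cards process.

```python
# Hypothetical criteria with linear value functions mapping raw readings to a
# 0-100 value scale (worst level -> 0, best level -> 100), plus weights that
# sum to 1. All numbers are illustrative, not the elicited PACI parameters.

def linear_value(x, worst, best):
    """Value function: 0 at `worst`, 100 at `best`, clipped to [0, 100]."""
    v = 100 * (x - worst) / (best - worst)
    return max(0.0, min(100.0, v))

criteria = {
    # name: (current reading, worst level, best level, weight)
    "incidence_per_100k": (240.0, 960.0, 0.0, 0.45),
    "Rt":                 (1.05,  1.6,  0.6, 0.35),
    "icu_occupancy_pct":  (38.0, 100.0, 0.0, 0.20),
}

# Additive MAVT aggregation: weighted sum of single-criterion values.
paci = sum(w * linear_value(x, worst, best)
           for x, worst, best, w in criteria.values())
print(f"PACI = {paci:.1f} (0 = worst pandemic impact, 100 = best)")
```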
When a neural network model and data are outsourced to a cloud server for inference, it is desirable to preserve the confidentiality of both the model and the data, as the involved parties (i.e., the cloud server, the model-providing client, and the data-providing client) may not trust one another. Solutions have been proposed based on multi-party computation, trusted execution environments (TEEs), and leveled or fully homomorphic encryption (LHE/FHE), but their limitations hamper practical application. We propose a new framework based on the synergistic integration of LHE and TEE, which enables collaboration among three mutually untrusting parties while minimizing the involvement of the (relatively) resource-constrained TEE and fully utilizing the untrusted but more resource-rich part of the server. We also propose a generic and efficient LHE-based inference scheme as an important performance-determining component of the framework. We implemented and evaluated the proposed system on a moderate platform and show that our scheme is more applicable and scalable to various settings, and has better performance, than the state-of-the-art LHE-based solutions.
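As a rough sketch of the division of labour such a framework describes, the toy below lets an untrusted server evaluate the linear layers on masked data while a simulated TEE strips masks and applies the non-linear activations. The additive masking stands in for a real LHE scheme (it is not secure), and the exact layer split and key handling are assumptions for illustration only, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy "LHE": additive masking. NOT secure; a stand-in for a real scheme. ---
def encrypt(x, mask):                       # data client masks its input
    return x + mask

# --- Untrusted (resource-rich) server part: evaluates linear layers only. ---
def server_linear(W, b, ct):
    return W @ ct + b                       # W@(x+m)+b = (W@x+b) + W@m

# --- TEE: the small trusted part; removes masks, applies non-linearities. ---
class TEE:
    def __init__(self, mask):
        self.mask = mask                    # assumes the mask is shared with TEE
    def relu_and_remask(self, W, ct, new_mask):
        x = ct - W @ self.mask              # strip the propagated mask
        self.mask = new_mask
        return np.maximum(x, 0.0) + new_mask   # cheap trusted step, re-masked

# Model weights (plaintext here for brevity; the framework also protects them).
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

x = rng.normal(size=4)                      # data client's private input
tee = TEE(rng.normal(size=4))

ct = encrypt(x, tee.mask)
ct = server_linear(W1, b1, ct)                          # untrusted compute
ct = tee.relu_and_remask(W1, ct, rng.normal(size=8))    # trusted, cheap step
ct = server_linear(W2, b2, ct)                          # untrusted compute
logits = ct - W2 @ tee.mask                             # final unmasking in TEE
print(logits)
```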
Machine-learning-based recommender systems (RSs) have become an effective means of helping people automatically discover their interests. Existing models often represent the rich information relevant to recommendation, such as items, users, and contexts, as embedding vectors and leverage them to predict users' feedback. From the perspective of causal analysis, the associations between these embedding vectors and users' feedback are a mixture of a causal part, which describes why an item is preferred by a user, and a non-causal part, which merely reflects statistical dependencies between users and items arising from, for example, the exposure mechanism, public opinion, or display position. However, existing RSs mostly ignore the striking differences between the causal and non-causal parts when using these embedding vectors. In this paper, we propose a model-agnostic framework named IV4Rec that can effectively decompose the embedding vectors into these two parts and hence enhance recommendation results. Specifically, we jointly consider users' behaviors in search and recommendation scenarios. Adopting concepts from causal analysis, we embed users' search behaviors as instrumental variables (IVs) to help decompose the original embedding vectors in recommendation, i.e., the treatments. IV4Rec then combines the two parts through deep neural networks and uses the combined results for recommendation. IV4Rec is model-agnostic and can be applied to a number of existing RSs, such as DIN and NRHUB. Experimental results on both public and proprietary industrial datasets demonstrate that IV4Rec consistently enhances RSs and outperforms a framework that jointly considers search and recommendation.
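A minimal sketch of the IV-style decomposition described above: regress the recommendation embeddings (treatments) on the search-behaviour embeddings (instruments) by least squares, take the fitted part as the causal component and the residual as the non-causal component. The dimensions, synthetic data, and the linear recombination with weights alpha and beta are simplifying assumptions; IV4Rec recombines the parts with learned deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_t, d_z = 512, 16, 16        # samples, treatment dim, instrument dim

Z = rng.normal(size=(n, d_z))    # IVs: search-behaviour embeddings
# Treatments: recommendation embeddings, partly driven by the instruments.
T = Z @ rng.normal(size=(d_z, d_t)) + 0.5 * rng.normal(size=(n, d_t))

# First-stage least-squares projection of T onto the column space of Z:
# fitted part ~ causal component, residual ~ non-causal component.
B, *_ = np.linalg.lstsq(Z, T, rcond=None)
T_causal = Z @ B
T_noncausal = T - T_causal

# IV4Rec recombines the parts with deep networks; a linear placeholder with
# hypothetical weights alpha, beta stands in for that combination here.
alpha, beta = 1.0, 0.3
T_recombined = alpha * T_causal + beta * T_noncausal
print(T_causal.shape, T_noncausal.shape, T_recombined.shape)
```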
Reconfigurable intelligent surfaces (RISs) can effectively control the wavefront of impinging signals and have emerged as a cost-effective, promising solution for improving the spectrum and energy efficiency of wireless systems. Most existing research on RISs assumes that hardware operations are perfect. However, both the physical transceiver and the RIS suffer from inevitable hardware impairments in practice, which can lead to severe system performance degradation and increase the complexity of beamforming optimization. Consequently, existing results on RISs, including channel estimation, beamforming optimization, and spectrum and energy efficiency analysis, cannot be directly applied under hardware impairments. In this paper, taking hardware impairments into consideration, we conduct joint transmit and reflect beamforming optimization and reevaluate the system performance. First, we characterize closed-form estimators of the direct and cascaded channels in both single-user and multi-user cases, and analyze the impact of hardware impairments on channel estimation accuracy. Then, we derive the optimal transmit beamforming solution and propose a gradient-descent-based algorithm to optimize the reflect beamforming. Moreover, we analyze three types of asymptotic channel capacity, with respect to the transmit power, the number of antennas, and the number of reflecting elements. Finally, in terms of system energy consumption, we analyze the power scaling law and the energy efficiency. Our experimental results also reveal an encouraging phenomenon: an RIS-assisted wireless system with massive reflecting elements can achieve both high spectrum and energy efficiency without massive antennas and without allocating excessive resources to reflect beamforming optimization.
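To illustrate the reflect-beamforming step, here is a sketch of gradient ascent on the RIS phase shifts for a single-user, single-antenna received-power objective. The ideal-hardware channel model, the objective, and the step size are simplifying assumptions; the paper's algorithm additionally accounts for hardware impairments.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 32                                          # number of reflecting elements

# Rayleigh-fading direct channel and cascaded BS-RIS-user coefficients.
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
a = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)

def received_power(theta):
    s = h_d + np.sum(a * np.exp(1j * theta))    # direct + reflected paths
    return np.abs(s) ** 2                       # proxy for the SNR

theta = rng.uniform(0, 2 * np.pi, size=K)
lr = 0.02
for _ in range(2000):
    s = h_d + np.sum(a * np.exp(1j * theta))
    # d|s|^2/d theta_k = -2 Im( conj(s) * a_k * exp(j theta_k) )
    grad = -2 * np.imag(np.conj(s) * a * np.exp(1j * theta))
    theta += lr * grad                          # gradient ascent on the phases

# Closed-form optimum: align every reflected path with the direct path.
theta_opt = np.angle(h_d) - np.angle(a)
print(received_power(theta), received_power(theta_opt))
```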
When individuals arrive to receive help from mental health providers, they do not always have well-specified and well-established goals. It is the mental health provider's responsibility to work collaboratively with patients to clarify their goals, in therapy sessions as well as in life in general, through clinical interviews, diagnostic assessments, and thorough observation. However, recognizing an individual's important life goals is not always straightforward. Here we introduce a novel method that gauges a patient's important goal pursuits from their relative sensitivity to goal-related words. Past research has shown that a person's active goal pursuits make them more sensitive to the presence of goal-related stimuli in the environment, enabling them to consciously report those stimuli when others cannot see them. By presenting words related to a variety of different life-goal pursuits very quickly, for 50 msec or less, the patient would be expected to notice and be aware of words related to their strongest motivations but not the other goal-related words. These may or may not be among the goals they have identified in therapy sessions, and those not previously identified can be fertile ground for further discussion and exploration in subsequent sessions. Results from eight patient volunteers are described and discussed in terms of the potential utility of this supplemental personal therapy aid.
While data are the primary fuel for machine learning models, they often suffer from missing values, especially when collected in real-world scenarios. However, many off-the-shelf machine learning models, including artificial neural networks, are unable to handle missing values directly. Therefore, extra data preprocessing and curation steps, such as data imputation, are inevitable before learning and prediction. In this study, we propose a simple, intuitive, yet effective method for pruning missing values (PROMISSING) during the learning and inference steps of neural networks. In this method, there is no need to remove or impute missing values; instead, missing values are treated as a new source of information (representing what we do not know). Our experiments on simulated data, several classification and regression benchmarks, and a multi-modal clinical dataset show that PROMISSING yields prediction performance similar to various imputation techniques. In addition, our experiments show that models trained with PROMISSING become less decisive in their predictions when facing incomplete samples with many unknowns. We hope this finding advances machine learning models from being pure prediction machines toward more realistic thinkers that can also say "I do not know" when facing incomplete sources of information.
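A minimal sketch of the pruning idea: in a first linear layer, the weights attached to missing entries are dropped for that sample, so missing values contribute nothing and no imputation is needed. The bias scaling below is a simplified stand-in for PROMISSING's compensation step, not the paper's exact formulation.

```python
import numpy as np

def linear_with_pruning(X, W, b):
    """First-layer forward pass that prunes weights tied to missing inputs.

    Missing entries (NaN) are masked out, so their weights are effectively
    pruned for that sample; the bias contribution is scaled by the fraction
    of observed inputs as a simplified compensation (an assumption here).
    """
    observed = ~np.isnan(X)                       # mask of known features
    X_filled = np.where(observed, X, 0.0)         # pruned inputs contribute 0
    frac = observed.mean(axis=1, keepdims=True)   # share of non-missing inputs
    return X_filled @ W.T + frac * b              # scaled bias compensation

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
X[0, 2] = np.nan                                  # one missing value, no imputation
W, b = rng.normal(size=(3, 5)), rng.normal(size=3)
print(linear_with_pruning(X, W, b))
```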
Spatial connectivity is an important consideration when modelling infectious disease data across a geographical region. Connectivity can arise for many reasons, including shared characteristics between regions and human or vector movement. Bayesian hierarchical models can include structured random effects to account for spatial connectivity. However, conventional approaches require the spatial structure to be fully defined prior to model fitting. By applying penalised smoothing splines to coordinates, we create two-dimensional smooth surfaces describing the spatial structure of the data whilst making minimal assumptions about that structure. The result is a non-stationary, setting-specific surface. These surfaces can be incorporated into a hierarchical modelling framework and interpreted similarly to traditional random effects. Through simulation studies we show that the splines can be applied to any continuous connectivity measure, including measures of human movement, and that the models can be extended to explore multiple sources of spatial structure in the data. Using Bayesian inference and simulation, the relative contribution of each spatial structure can be computed and used to generate hypotheses about the drivers of disease. These models were found to perform at least as well as existing modelling frameworks, whilst allowing for future extensions and multiple sources of spatial connectivity.
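A sketch of a two-dimensional penalised spline surface over coordinates: a tensor product of one-dimensional spline bases fitted with a ridge penalty. The truncated-power basis, knot placement, penalty form, and synthetic data are illustrative assumptions; the paper's models embed such surfaces in a full Bayesian hierarchical framework, which is not shown here.

```python
import numpy as np

def tp_basis(x, knots):
    """Truncated-power cubic spline basis in one dimension."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 400
lon, lat = rng.uniform(0, 1, n), rng.uniform(0, 1, n)   # region coordinates
true_surface = np.sin(2 * np.pi * lon) * np.cos(2 * np.pi * lat)
y = true_surface + 0.3 * rng.normal(size=n)             # noisy spatial signal

knots = np.linspace(0.1, 0.9, 6)
Bx, By = tp_basis(lon, knots), tp_basis(lat, knots)
# Tensor product: every pair of 1-D basis functions gives a 2-D surface basis.
B = np.einsum("ni,nj->nij", Bx, By).reshape(n, -1)

lam = 1.0                                   # penalty strength (smoothing)
beta = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
fit = B @ beta                              # smooth non-stationary surface
print(f"correlation with true surface: {np.corrcoef(fit, true_surface)[0, 1]:.2f}")
```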
Aggregating signals from a collection of noisy sources is a fundamental problem in many domains, including crowd-sourcing, multi-agent planning, sensor networks, signal processing, voting, ensemble learning, and federated learning. The core question is how to aggregate signals from multiple sources (e.g., experts) in order to reveal an underlying ground truth. While a full answer depends on the type of signal, the correlation between signals, and the desired output, a problem common to all of these applications is that of differentiating sources based on their quality and weighting them accordingly. It is often assumed that this differentiation and aggregation is done by a single, accurate central mechanism or agent (e.g., a judge). We complicate this model in two ways. First, we investigate settings with a single judge as well as with multiple judges. Second, given this multi-agent interaction among judges, we investigate various constraints on the judges' reporting space. We build on known results for the optimal weighting of experts and prove that an ensemble of sub-optimal mechanisms can perform optimally under certain conditions. We then show empirically that the ensemble approximates the performance of the optimal mechanism under a broader range of conditions.
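The known result on optimal expert weighting that the abstract builds on (for conditionally independent binary experts) weights each expert by the log-odds of their accuracy. The sketch below compares this weighting to a simple majority vote; the accuracies and the independence assumption are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
accuracies = np.array([0.8, 0.7, 0.65, 0.6, 0.55])   # per-expert P(correct)
w_opt = np.log(accuracies / (1 - accuracies))        # optimal log-odds weights

n_trials = 20_000
truth = rng.integers(0, 2, n_trials) * 2 - 1         # ground truth in {-1, +1}
# Each expert independently reports the truth with its own accuracy.
correct = rng.random((n_trials, len(accuracies))) < accuracies
reports = np.where(correct, truth[:, None], -truth[:, None])

majority = np.sign(reports.sum(axis=1))              # unweighted aggregation
weighted = np.sign(reports @ w_opt)                  # log-odds weighted vote
print("majority vote accuracy:   ", (majority == truth).mean())
print("log-odds weighted accuracy:", (weighted == truth).mean())
```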
Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors may propagate to event trigger and argument role recognition. A few recent works have addressed this problem by jointly predicting entity mentions, event triggers, and arguments. However, such work is limited to discrete, manually engineered features for representing contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly predicts entity mentions, event triggers, and arguments based on shared hidden representations from deep learning. Experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.
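A schematic of the shared-representation idea: one encoder produces hidden states that separate heads read for entity mentions, event triggers, and argument roles, so joint training shapes a single common representation. The dimensions, the dense-layer encoder, the untrained random weights, and the pairwise role scoring are placeholders standing in for the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_emb, d_hid = 12, 32, 64
n_entity, n_trigger, n_role = 5, 9, 7       # illustrative label-set sizes

tokens = rng.normal(size=(seq_len, d_emb))  # word embeddings, one sentence

# Shared encoder (a dense layer + tanh stands in for a real RNN/CNN encoder).
W_enc = rng.normal(size=(d_emb, d_hid)) * 0.1
H = np.tanh(tokens @ W_enc)                 # shared hidden representations

def head(H, n_labels):
    """Task-specific softmax head over the SAME shared hidden states."""
    W = rng.normal(size=(H.shape[1], n_labels)) * 0.1
    scores = H @ W
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True) # per-token label distribution

entity_probs = head(H, n_entity)            # entity mention tagging
trigger_probs = head(H, n_trigger)          # event trigger tagging

# Argument-role head scores (trigger, entity) pairs; hypothetical positions.
pair = np.concatenate([H[3], H[7]])
W_role = rng.normal(size=(2 * d_hid, n_role)) * 0.1
role_scores = pair @ W_role
print(entity_probs.shape, trigger_probs.shape, role_scores.shape)
```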