Informed consent has become increasingly salient for data privacy and its regulation. Entities from governments to for-profit companies have addressed concerns about data privacy with policies that enumerate the conditions for personal data storage and transfer. However, increased enumeration of and transparency in data privacy policies has not improved end-users' comprehension of how their data might be used: not only are privacy policies written in legal language that users may struggle to understand, but elements of these policies may compose in such a way that the consequences of the policy are not immediately apparent. We present a framework that uses Answer Set Programming (ASP) -- a type of logic programming -- to formalize privacy policies. Privacy policies thus become constraints on a narrative planning space, allowing end-users to forward-simulate possible consequences of the policy in terms of actors having roles and taking actions in a domain. We demonstrate through the example of the Health Insurance Portability and Accountability Act (HIPAA) how to use the system in various ways, including asking questions about possibilities and identifying which clauses of the law are broken by a given sequence of events.
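To make the idea concrete, the sketch below shows how a HIPAA-style clause could be written as an ASP constraint and checked against a trace of events using the clingo Python API. This is not the paper's actual encoding; every predicate, fact, and clause identifier here is hypothetical and purely illustrative.

```python
# A minimal, hypothetical sketch of checking a privacy-policy clause with ASP.
# Predicate names, facts, and the clause identifier are illustrative only.
import clingo

PROGRAM = """
% hypothetical domain: roles, an action trace, and recorded consent
role(hospital, covered_entity).
role(acme_ads, third_party).
occurs(disclose(hospital, record1, acme_ads, marketing)).
consent(patient1, record1, treatment).

consented(R, P) :- consent(_, R, P).

% hypothetical clause: a covered entity may not disclose a record to a
% third party for a purpose the patient never consented to
broken(clause_disclosure) :-
    occurs(disclose(E, R, T, P)),
    role(E, covered_entity), role(T, third_party),
    not consented(R, P).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(
    "broken clauses:",
    [str(a) for a in m.symbols(atoms=True) if a.name == "broken"]))
```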
Login notifications are intended to inform users about recent sign-ins and help them protect their accounts from unauthorized access. The notifications are usually sent if a login occurs from a new location or device, which could indicate malicious activity. They typically contain information such as the location, date, time, and device used to sign in. Users are challenged to verify whether they recognize the login (because it was them or someone they know) or to proactively protect their account from unwanted access by changing their password. In two user studies, we explore users' comprehension, reactions, and expectations of login notifications. We use two treatments to measure users' behavior in response to login notifications sent for a login they initiated themselves or for one by a malicious actor based on statistical sign-in information. Users feel relatively confident identifying legitimate logins but demonstrate various risky and insecure behaviors when it comes to malicious sign-ins. We discuss the identified problems and give recommendations for service providers to ensure usable and secure logins for everyone.
Predicting trajectories of pedestrians based on goal information in highly interactive scenes is a crucial step toward Intelligent Transportation Systems and Autonomous Driving. The challenges of this task come from two key sources: (1) complex social interactions in high pedestrian density scenarios and (2) limited utilization of goal information to effectively associate it with past motion information. To address these difficulties, we integrate social forces into a Transformer-based stochastic generative model backbone and propose a new goal-based trajectory predictor called ForceFormer. Unlike most prior works that simply use the destination position as an input feature, we leverage the driving force from the destination to efficiently simulate the guidance of a target on a pedestrian. Additionally, repulsive forces are used as another input feature to describe avoidance behavior among neighboring pedestrians. Extensive experiments on widely used pedestrian datasets show that our proposed method achieves performance on par with state-of-the-art models in terms of distance errors while markedly reducing collisions, especially in dense pedestrian scenarios.
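As a rough illustration of the kind of force features involved, the sketch below assumes a standard social-force formulation; the parameter values and function names are illustrative, not ForceFormer's exact definitions.

```python
import numpy as np

def goal_force(pos, vel, goal, v_desired=1.3, tau=0.5):
    """Attractive force steering a pedestrian toward its goal
    (standard social-force relaxation term; parameters are illustrative)."""
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return (v_desired * direction - vel) / tau

def repulsive_force(pos, neighbor_pos, A=2.0, B=0.3):
    """Exponential repulsion exerted by a neighboring pedestrian."""
    diff = pos - neighbor_pos
    dist = np.linalg.norm(diff) + 1e-8
    return A * np.exp(-dist / B) * (diff / dist)

# Per time step, such force vectors could be concatenated with the observed
# trajectory and fed to the Transformer backbone as additional input features.
```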
To prevent implicit privacy disclosure when sharing gradients among data owners (DOs) under federated learning (FL), differential privacy (DP) and its variants have become a common practice for offering formal privacy guarantees with low overheads. However, individual DOs generally tend to inject larger DP noise for stronger privacy provisions (which entails severe degradation of model utility), while the curator (i.e., the aggregation server) aims to minimize the overall effect of the added random noise for satisfactory model performance. To reconcile these conflicting goals, we propose a novel dynamic privacy pricing (DyPP) game that allows DOs to sell individual privacy (by lowering the scale of locally added DP noise) for differentiated economic compensation (offered by the curator), thereby enhancing FL model utility. Considering the multi-dimensional information asymmetry among players (e.g., DOs' data distributions and privacy preferences, and the curator's maximum affordable payment) as well as their varying private information across distinct FL tasks, it is hard to directly attain the Nash equilibrium of the mixed-strategy DyPP game. Instead, we devise a fast two-layer reinforcement learning algorithm to quickly learn the optimal mixed noise-saving strategy of DOs and the optimal mixed pricing strategy of the curator without prior knowledge of players' private information. Experiments on real datasets validate the feasibility and effectiveness of the proposed scheme in terms of faster convergence and enhanced FL model utility at lower payment cost.
Existing statistical methods can estimate a policy, or a mapping from covariates to decisions, which can then instruct decision makers (e.g., whether to administer hypotension treatment based on covariates blood pressure and heart rate). There is great interest in using such data-driven policies in healthcare. However, it is often important to explain to the healthcare provider, and to the patient, how a new policy differs from the current standard of care. This end is facilitated if one can pinpoint the aspects of the policy (i.e., the parameters for blood pressure and heart rate) that change when moving from the standard of care to the new, suggested policy. To this end, we adapt ideas from Trust Region Policy Optimization (TRPO). In our work, however, unlike in TRPO, the difference between the suggested policy and standard of care is required to be sparse, aiding with interpretability. This yields ``relative sparsity," where, as a function of a tuning parameter, $\lambda$, we can approximately control the number of parameters in our suggested policy that differ from their counterparts in the standard of care (e.g., heart rate only). We propose a criterion for selecting $\lambda$, perform simulations, and illustrate our method with a real, observational healthcare dataset, deriving a policy that is easy to explain in the context of the current standard of care. Our work promotes the adoption of data-driven decision aids, which have great potential to improve health outcomes.
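One way to picture the objective is a value term penalized by the L1 distance between the new policy's coefficients and those of the standard of care. The sketch below is a simplified illustration under assumed choices (a logistic policy and an importance-weighted value estimate); it is not the authors' exact estimator, and all names are placeholders.

```python
import numpy as np

def relative_sparsity_objective(beta, beta0, X, A, R, lam):
    """Illustrative relative-sparsity objective: an importance-weighted value
    estimate of the logistic policy `beta`, minus an L1 penalty on its
    deviation from the standard-of-care coefficients `beta0`.
    X: covariates, A: observed treatments (0/1), R: outcomes, lam: tuning parameter."""
    p_new = 1.0 / (1.0 + np.exp(-X @ beta))    # P(treat | covariates) under new policy
    p_soc = 1.0 / (1.0 + np.exp(-X @ beta0))   # behavior (standard-of-care) policy
    w = np.where(A == 1, p_new / p_soc, (1 - p_new) / (1 - p_soc))
    value_estimate = np.mean(w * R)             # importance-weighted value
    return value_estimate - lam * np.sum(np.abs(beta - beta0))
```

Increasing lam shrinks more coefficients of beta exactly onto their standard-of-care counterparts, so only a few aspects of the policy (e.g., heart rate only) end up differing.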
Autonomous navigation in crowded environments is an open problem with many applications, essential for the coexistence of robots and humans in the smart cities of the future. In recent years, deep reinforcement learning approaches have proven to outperform model-based algorithms. Nevertheless, although the reported results are promising, these works do not take full advantage of the capabilities their models offer. They usually get trapped in local optima during training, which prevents them from learning the optimal policy, and they fail to visit and interact appropriately with every possible state, such as states near the goal or near dynamic obstacles. In this work, we propose using intrinsic rewards to balance exploration and exploitation, exploring based on the uncertainty of the states rather than on how long the agent has been trained, and encouraging the agent to become more curious about unknown states. We explain the benefits of the approach and compare it with other exploration algorithms that may be used for crowd navigation. Extensive simulation experiments modifying several state-of-the-art algorithms show that the use of intrinsic rewards makes the robot learn faster and reach higher rewards and success rates (fewer collisions) in shorter navigation times, outperforming the state of the art.
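One common way to turn state uncertainty into an intrinsic reward is Random Network Distillation (RND); the sketch below is one possible instantiation of that idea, not necessarily the estimator used in this work.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """RND-style uncertainty bonus: states the predictor has not yet fit
    (high error against a fixed random target network) are treated as
    uncertain and receive a larger intrinsic reward."""
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)  # target stays fixed

    def forward(self, obs):
        # prediction error serves both as intrinsic reward and predictor loss
        return (self.predictor(obs) - self.target(obs)).pow(2).mean(dim=-1)

# reward = extrinsic_reward + beta * rnd(next_obs)  # beta trades off exploration
```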
Search systems on the Web rely on user input to generate relevant results. Since early information retrieval systems, users have been trained to issue keyword searches and adapt to the language of the system. Recent research has shown that users often withhold detailed information about their initial information need, although they are able to express it in natural language. We therefore conduct a user study (N = 139) to investigate how four different design variants of search interfaces can encourage the user to reveal more information. Our results show that a chatbot-inspired search interface can increase the number of mentioned product attributes by 84% and promote natural language formulations by 139% in comparison to a standard search bar interface.
Context: Trustworthiness of software has become a first-class concern of users (e.g., to understand software-made decisions). There is also increasing demand to demonstrate regulatory compliance of software, and end users want to understand how software-intensive systems make decisions that affect them. Objective: We aim to provide a step towards understanding the provenance needs of the software industry to support trustworthy software. Provenance is information about the entities, activities, and people involved in producing data, software, or the output of software, and it is used to assess software quality, reliability, and the trustworthiness of digital products and services. Method: Based on data from in-person and questionnaire-based interviews with professionals in leadership roles, we develop an ``influence map'' to analyze who drives provenance, when provenance is relevant, what is impacted by provenance, and how provenance can be managed. Results: The influence map helps decision makers navigate concerns related to provenance. It can also act as a checklist for initial provenance analyses of systems. It is empirically grounded and designed bottom-up (based on the perceptions of practitioners) rather than top-down (from regulations or policies). Conclusion: We present an imperfect first step towards understanding provenance based on current perceptions and offer a preliminary view ahead.
Back-translation is widely known for its effectiveness in neural machine translation when there is little to no parallel data. In this approach, a source-to-target model is coupled with a target-to-source model trained in parallel. The target-to-source model generates noisy sources, while the source-to-target model is trained to reconstruct the targets, and vice versa. Recently developed multilingual pre-trained sequence-to-sequence models for programming languages have proven very effective for a broad spectrum of downstream software engineering tasks. Hence, training them to build programming language translation systems via back-translation is compelling. However, these models cannot be further trained via back-translation because they learn to output sequences in the same language as the inputs during pre-training. As an alternative, we propose performing back-translation via code summarization and generation. In code summarization, a model learns to generate natural language (NL) summaries given code snippets. In code generation, the model learns to do the opposite. Therefore, target-to-source generation in back-translation can be viewed as target-to-NL-to-source generation. We show that our proposed approach performs competitively with state-of-the-art methods. We have made the code publicly available.
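A minimal sketch of one training step under this scheme, assuming two HuggingFace-style seq2seq models, `summarizer` (code to NL) and `generator` (NL to code); the function and batch names are illustrative placeholders, not the paper's implementation.

```python
import torch

def back_translation_step(summarizer, generator, java_ids, python_ids):
    """One hypothetical back-translation step via natural language:
    summarize code into NL, then train the generator to reconstruct the
    original code from that (noisy) NL summary."""
    with torch.no_grad():
        nl_from_java = summarizer.generate(java_ids)       # code -> NL summary
        nl_from_python = summarizer.generate(python_ids)

    # NL -> code reconstruction loss in each direction
    loss_java = generator(input_ids=nl_from_java, labels=java_ids).loss
    loss_python = generator(input_ids=nl_from_python, labels=python_ids).loss
    return loss_java + loss_python
```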
While the maximum entropy (MaxEnt) reinforcement learning (RL) framework -- often touted for its exploration and robustness capabilities -- is usually motivated from a probabilistic perspective, the use of deep probabilistic models has not gained much traction in practice due to their inherent complexity. In this work, we propose the adoption of latent variable policies within the MaxEnt framework, which we show can provably approximate any policy distribution, and additionally, naturally emerges under the use of world models with a latent belief state. We discuss why latent variable policies are difficult to train, how naive approaches can fail, then subsequently introduce a series of improvements centered around low-cost marginalization of the latent state, allowing us to make full use of the latent state at minimal additional cost. We instantiate our method under the actor-critic framework, marginalizing both the actor and critic. The resulting algorithm, referred to as Stochastic Marginal Actor-Critic (SMAC), is simple yet effective. We experimentally validate our method on continuous control tasks, showing that effective marginalization can lead to better exploration and more robust training. Our implementation is open sourced at //github.com/zdhNarsil/Stochastic-Marginal-Actor-Critic.
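To give a flavor of the marginalization step, the log of the marginal policy pi(a|s) = E_z[pi(a|s, z)] can be estimated by averaging over a few sampled latents. This is a sketch only; `sample_latent` and `log_prob` are hypothetical helpers, and the paper's actual estimator may differ.

```python
import math
import torch

def marginal_log_prob(policy, obs, action, num_latents=8):
    """Monte Carlo estimate of log pi(a|s) for a latent-variable policy
    pi(a|s) = E_z[pi(a|s, z)], averaging over K sampled latents.
    `sample_latent` and `log_prob` are hypothetical helpers."""
    log_probs = []
    for _ in range(num_latents):
        z = policy.sample_latent(obs)               # one sample from the latent distribution
        log_probs.append(policy.log_prob(obs, z, action))
    log_probs = torch.stack(log_probs)              # shape: [K, batch]
    # log of the average of exp(log_probs) over the K samples
    return torch.logsumexp(log_probs, dim=0) - math.log(num_latents)
```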
Making sure that users understand privacy policies that impact them is a key challenge for a real GDPR deployment. Research studies are mostly carried out in English, but in Europe and elsewhere, users speak languages other than English. Replicating studies in different languages requires the availability of comparable cross-language privacy policy corpora. This work provides a methodology for building comparable cross-language corpora in a national language and a reference study language. We provide an example application of our methodology comparing English and Italian, extending the corpus of one of the first studies about users' understanding of technical terms in privacy policies. We also investigate other open issues that can make replication harder.