European lawmakers have ruled that users on different platforms should be able to exchange messages with each other. Yet messaging interoperability opens up a Pandora's box of security and privacy challenges. While championed not just as an anti-trust measure but as a means of providing a better experience for the end user, interoperability runs the risk of making the user experience worse if poorly executed. There are two fundamental questions: how to enable the actual message exchange, and how to handle the numerous residual challenges arising from encrypted messages passing from one service provider to another -- including but certainly not limited to content moderation, user authentication, key management, and metadata sharing between providers. In this work, we identify specific open questions and challenges around interoperable communication in end-to-end encrypted messaging, and present high-level suggestions for tackling these challenges.
Neural temporal point processes (TPPs) have shown promise for modeling continuous-time event sequences. However, capturing the interactions between events is challenging yet critical for performing inference tasks like forecasting on event sequence data. Existing TPP models have focused on parameterizing the conditional distribution of future events but struggle to model event interactions. In this paper, we propose a novel approach that leverages Neural Relational Inference (NRI) to learn a relation graph that infers interactions while simultaneously learning dynamics patterns from observational data. Our approach, the Contrastive Relational Inference-based Hawkes Process (CRIHP), reasons about event interactions under a variational inference framework. It utilizes intensity-based learning to search for prototype paths that contrast relationship constraints. Extensive experiments on three real-world datasets demonstrate the effectiveness of our model in capturing event interactions for event sequence modeling tasks.
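For background, the classical (non-neural) Hawkes process that CRIHP builds on captures event interactions through a self-exciting conditional intensity. The exponential-kernel form below is the textbook version, given only as context; the abstract does not specify the paper's neural parameterization.

```latex
% Classical Hawkes process conditional intensity (exponential kernel).
% \mu is the base rate, \alpha the excitation weight, \beta the decay.
\lambda^{*}(t) \;=\; \mu \;+\; \sum_{t_i < t} \alpha \, e^{-\beta (t - t_i)}
```

In multivariate versions each pair of event types (u, v) carries its own excitation weight alpha_{uv}; that pairwise structure is exactly what a learned relation graph of event interactions can represent and sparsify.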
The use of behavioural data in insurance is loaded with promises and unresolved issues. This paper explores the related opportunities and challenges by analysing the use of telematics data in third-party liability motor insurance. Behavioural data are used not only to refine the risk profile of policyholders, but also to implement innovative coaching strategies, feeding back to the drivers the aggregated information obtained from the data. The purpose is to encourage an improvement in their driving style. Our research explores the effectiveness of coaching on the basis of an empirical investigation of the dataset of a company selling telematics motor insurance policies. The results of our quantitative analysis show that this effectiveness crucially depends on the propensity of policyholders to engage with the telematics app. We observe engagement as an additional kind of behaviour, producing second-order behavioural data that can also be recorded and strategically used by insurance companies. The conclusions discuss potential advantages and risks connected with this extended interpretation of behavioural data.
Recommender systems are widely used to help people find items tailored to their interests. These interests are often influenced by social networks, making it important to use social network information effectively in recommender systems. This is especially true for demographic groups whose interests differ from those of the majority. This paper introduces STUDY, a Socially-aware Temporally caUsal Decoder recommender sYstem. STUDY is a new socially-aware recommender architecture that is significantly more efficient to learn and train than existing methods, performing joint inference over socially connected groups in a single forward pass of a modified transformer decoder network. We demonstrate the benefits of STUDY in recommending books for students who are dyslexic or otherwise struggling readers. Dyslexic students often have difficulty engaging with reading material, making it critical to recommend books matched to their interests. We worked with our non-profit partner Learning Ally to evaluate STUDY on a dataset of struggling readers, where it generated recommendations that more accurately predicted student engagement than existing methods.
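One plausible reading of "joint inference over socially connected groups in a single forward pass" is that member sequences are concatenated and the decoder's causal mask is relaxed so that any event may attend to any strictly earlier event in the group. The sketch below illustrates that idea only; the function and variable names are ours, not from the STUDY codebase.

```python
import numpy as np

def temporally_causal_mask(timestamps: np.ndarray, user_ids: np.ndarray) -> np.ndarray:
    """Attention mask over a group's concatenated event sequence.

    Position i may attend to position j if j's event happened strictly
    earlier, or if i and j belong to the same user and j is not in the
    future (standard causal masking within a user's own history).
    Illustrative sketch only -- not the STUDY implementation.
    """
    t_i = timestamps[:, None]                  # (n, 1) query times
    t_j = timestamps[None, :]                  # (1, n) key times
    same_user = user_ids[:, None] == user_ids[None, :]
    cross_user = (t_j < t_i) & ~same_user      # strictly earlier peer events
    own_causal = (t_j <= t_i) & same_user      # own past (and current) events
    return cross_user | own_causal

# Toy group: two socially connected users with interleaved interactions.
ts = np.array([1, 3, 4, 2, 5])
uid = np.array([0, 0, 0, 1, 1])
print(temporally_causal_mask(ts, uid).astype(int))
```

Under such a mask, one forward pass scores every member of the group while letting each user's predictions condition on peers' earlier behavior, instead of running separate per-user inference.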
Code clones are code snippets that are identical or similar to other snippets within the same or different files. They are often created through copy-and-paste practices during development and maintenance activities. Since code clones may require consistent updates and coherent management, they present a challenging issue in software maintenance. Many studies have therefore been conducted to detect various types of clones with high accuracy, scalability, or performance. However, the nature of code clones themselves has received little exploration. Even the fundamental question of whether code snippets in the same clone set were written by the same author or by different authors has not been thoroughly investigated. In this paper, we investigate the characteristics of code clones with a focus on authorship. We analyzed the authorship of code clones at line-level granularity for Java files in 153 Apache projects stored on GitHub and addressed three research questions. We found that there are a substantial number of clone lines across all projects (an average of 18.5% across all projects). Furthermore, authors who contribute many non-clone lines also contribute many clone lines. Additionally, we found that one-third of clone sets are primarily contributed to by multiple leading authors. These results confirm our intuitive understanding of clone characteristics, although no previous publication has provided empirical validation across multiple projects. As the results could assist in designing better clone management techniques, we will explore their implications for building an effective clone management tool.
Smart contracts manage blockchain assets. While smart contracts embody business processes, their platforms are not process-aware. Mainstream smart contract programming languages such as Solidity do not have explicit notions of roles, action dependencies, and time. Instead, these concepts are implemented in program code. This makes it very hard to design and analyze smart contracts. We argue that DCR graphs are a suitable formalization tool for smart contracts because they explicitly and visually capture these features. We utilize this expressiveness to show that many common high-level design patterns in smart-contract applications can be naturally modeled this way. Applying these patterns shows that DCR graphs facilitate the development and analysis of correct and reliable smart contracts by providing a clear and easy-to-understand specification.
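To make the formalism concrete, the sketch below implements a minimal interpreter for the core DCR relations (condition, response, include, exclude) and runs it on a hypothetical escrow-style contract. Both the simplified semantics and the event names are our own illustration, not a pattern taken from the paper.

```python
# Minimal DCR graph interpreter: an illustrative simplification only.
from dataclasses import dataclass, field

@dataclass
class DCRGraph:
    events: set
    conditions: dict = field(default_factory=dict)  # e -> prerequisite events
    responses: dict = field(default_factory=dict)   # e -> events made pending
    excludes: dict = field(default_factory=dict)    # e -> events removed
    includes: dict = field(default_factory=dict)    # e -> events added back

    def __post_init__(self):
        self.included = set(self.events)
        self.executed = set()
        self.pending = set()

    def enabled(self, e):
        # Only *included* condition sources block execution.
        prereqs = self.conditions.get(e, set()) & self.included
        return e in self.included and prereqs <= self.executed

    def execute(self, e):
        assert self.enabled(e), f"{e} is not enabled"
        self.executed.add(e)
        self.pending.discard(e)
        self.pending |= self.responses.get(e, set())
        self.included -= self.excludes.get(e, set())
        self.included |= self.includes.get(e, set())

# Hypothetical escrow workflow: 'release' and 'refund' require a prior
# 'deposit', a deposit obliges an eventual payout, and either payout
# excludes the other.
g = DCRGraph(
    events={"deposit", "release", "refund"},
    conditions={"release": {"deposit"}, "refund": {"deposit"}},
    responses={"deposit": {"release"}},
    excludes={"release": {"refund"}, "refund": {"release"}},
)
assert not g.enabled("release")  # action dependency: no payout before deposit
g.execute("deposit")
g.execute("release")
assert not g.enabled("refund")   # mutual exclusion after release
```

Even this toy model makes the relations' roles visible: conditions encode action dependencies, responses encode obligations, and exclusions encode terminal choices, which is exactly the structure that ordinary Solidity code leaves implicit.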
Expanding the benefits of quantum computing to new domains remains a challenging task. Quantum applications are concentrated in only a few domains, and driven by these few, the quantum stack is limited in supporting the development or execution demands of new applications. In this work, we address this problem by identifying both a new application domain and new directions to shape the quantum stack. We introduce computational cognitive models as a new class of quantum applications. Such models have been crucial in understanding and replicating human intelligence, and our work connects them with quantum computing for the first time. Next, we analyze these applications to make the case for redesigning the quantum stack for programmability and better performance. Among the research opportunities we uncover, we study two simple ideas for quantum cloud scheduling, using data from gate-based and annealing-based quantum computers. On the respective systems, these ideas can enable parallel execution and improve throughput. Our work is a contribution towards realizing versatile quantum systems that can broaden the impact of quantum computing on science and society.
This manuscript portrays optimization as a process. In many practical applications, the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary, as well as beneficial, to take a robust approach: apply an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular successes in modeling and systems that are now part of our daily lives.
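This "learning as one goes along" view is usually formalized as online optimization, where the standard yardstick (stated here for illustration; the abstract itself names no metric) is regret: the gap between the learner's cumulative loss and that of the best fixed decision in hindsight.

```latex
% Regret of an online learner choosing x_t against loss functions f_t,
% relative to the best fixed point in the decision set \mathcal{K}.
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
```

An algorithm learns from experience in this sense when its regret grows sublinearly in T, so that its average per-round loss approaches that of the best fixed decision in hindsight.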
Traffic forecasting is an important factor for the success of intelligent transportation systems. Deep learning models, including convolutional neural networks and recurrent neural networks, have been applied to traffic forecasting problems to model the spatial and temporal dependencies. In recent years, to model the graph structures in transportation systems as well as the contextual information, graph neural networks (GNNs) have been introduced as new tools and have achieved state-of-the-art performance in a series of traffic forecasting problems. In this survey, we review the rapidly growing body of recent research using different GNNs, e.g., graph convolutional and graph attention networks, in various traffic forecasting problems, e.g., road traffic flow and speed forecasting, passenger flow forecasting in urban rail transit systems, demand forecasting in ride-hailing platforms, etc. We also present a collection of open data and source resources for each problem, as well as future research directions. To the best of our knowledge, this paper is the first comprehensive survey that explores the application of graph neural networks for traffic forecasting problems. We have also created a public GitHub repository to update the latest papers, open data and source resources.
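For readers unfamiliar with the two GNN families named above, the graph convolutional layer of Kipf and Welling updates node features (in traffic settings, the nodes are typically road segments or sensors) by a normalized average over graph neighborhoods:

```latex
% Graph convolutional layer (Kipf & Welling, 2017).
% \tilde{A} = A + I adds self-loops; \tilde{D} is its degree matrix;
% H^{(l)} are node features and W^{(l)} the learned layer weights.
H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)
```

Graph attention networks differ by replacing the fixed normalization with attention weights over neighbors that are learned from the node features themselves.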
The chronological order of user-item interactions can reveal time-evolving and sequential user behaviors in many recommender systems. The items that users will interact with may depend on the items accessed in the past. However, the substantial growth in the numbers of users and items means that sequential recommender systems still face non-trivial challenges: (1) the difficulty of modeling short-term user interests; (2) the difficulty of capturing long-term user interests; (3) the effective modeling of item co-occurrence patterns. To tackle these challenges, we propose a memory augmented graph neural network (MA-GNN) to capture both the long- and short-term user interests. Specifically, we apply a graph neural network to model the item contextual information within a short-term period and utilize a shared memory network to capture the long-range dependencies between items. In addition to the modeling of user interests, we employ a bilinear function to capture the co-occurrence patterns of related items. We extensively evaluate our model on five real-world datasets, comparing it with several state-of-the-art methods and using a variety of performance metrics. The experimental results demonstrate the effectiveness of our model for the task of Top-K sequential recommendation.
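The abstract does not spell the bilinear function out; the generic form of a bilinear score over item embeddings, written in our own notation rather than necessarily the paper's, is:

```latex
% Generic bilinear co-occurrence score between items i and j,
% with item embeddings e_i, e_j and a learned weight matrix W.
s(i, j) = \mathbf{e}_i^{\top} W \, \mathbf{e}_j
```

Unlike a plain inner product, the learned matrix W lets the model weight interactions between embedding dimensions asymmetrically when judging whether item j tends to co-occur with item i.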
Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms -- especially the collaborative filtering (CF) based approaches with shallow or deep models -- usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various forms of implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is considered for recommendation, the system can provide highly customized recommendations based on users' historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. In this work, we propose to reason over knowledge base embeddings for explainable recommendation. Specifically, we propose a knowledge base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verify the superior recommendation performance and explainability of our approach compared with state-of-the-art baselines.
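The abstract does not name the specific embedding model; a translation-based objective such as TransE is a common choice for embedding heterogeneous knowledge bases and illustrates the idea, scoring a triple of head entity h, relation r, and tail entity t by:

```latex
% TransE-style scoring for a knowledge triple (h, r, t):
% a relation is modeled as a translation in embedding space,
% so plausible triples satisfy h + r \approx t.
f(h, r, t) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert_2
```

Under such a model, a soft matching procedure can rank candidate explanation paths by how small this distance is, surfacing plausible user-item connections even when no exact path exists in the raw knowledge graph.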