We present a flexible public transit network design model that optimizes a social access objective while guaranteeing that the system's costs and transit times remain within a preset margin of their current levels. The purpose of the model is to find a set of minor, immediate modifications to an existing bus network that give more communities access to the chosen services while having minimal impact on the current network's operator and user costs. Design decisions consist of reallocating existing resources to adjust line frequencies and capacities. We present a hybrid tabu search/simulated annealing algorithm to solve this optimization-based model. As a case study, we apply the model to the problem of improving equity of access to primary health care facilities in the Chicago metropolitan area. The results suggest that better primary care access equity can be achieved by reassigning existing buses and introducing express runs, while leaving overall service levels relatively unaffected.
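As a rough illustration of the kind of hybrid tabu search/simulated annealing loop this abstract describes, the following minimal Python sketch combines a tabu list with a simulated-annealing acceptance rule. The `neighbors`, `objective`, and `is_feasible` callbacks are placeholders standing in for the paper's actual move set, access objective, and cost/travel-time budget constraints, so this is a generic metaheuristic skeleton rather than the authors' algorithm.

```python
import math
import random

def hybrid_tabu_sa(initial_solution, neighbors, objective, is_feasible,
                   iterations=1000, tabu_tenure=20, t0=1.0, cooling=0.995):
    """Generic hybrid tabu search / simulated annealing loop.

    `neighbors(sol)` yields (move, candidate) pairs, `objective` is maximized,
    and `is_feasible` enforces budget-style constraints on the candidate.
    """
    current = best = initial_solution
    tabu = {}                      # move -> iteration until which it stays tabu
    temperature = t0
    for it in range(iterations):
        candidates = [(move, cand) for move, cand in neighbors(current)
                      if is_feasible(cand)
                      and (tabu.get(move, -1) < it
                           or objective(cand) > objective(best))]   # aspiration criterion
        if not candidates:
            break
        move, cand = max(candidates, key=lambda mc: objective(mc[1]))
        delta = objective(cand) - objective(current)
        # SA-style acceptance: always take improving moves, occasionally accept worse ones
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = cand
            tabu[move] = it + tabu_tenure
        if objective(current) > objective(best):
            best = current
        temperature *= cooling
    return best
```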
Structured capability access ("SCA") is an emerging paradigm for the safe deployment of artificial intelligence (AI). Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems. The aim is to prevent dangerous AI capabilities from being widely accessible, whilst preserving access to AI capabilities that can be used safely. The developer must both restrict how the AI system can be used, and prevent the user from circumventing these restrictions through modification or reverse engineering of the AI system. SCA is most effective when implemented through cloud-based AI services, rather than by disseminating AI software that runs locally on users' hardware. Cloud-based interfaces give the AI developer greater scope for controlling how the AI system is used, and for protecting against unauthorized modifications to the system's design. This chapter expands the discussion of "publication norms" in the AI community, which to date has focused on the question of how the informational content of AI research projects should be disseminated (e.g., code and models). Although this is an important question, there are limits to what can be achieved through the control of information flows. SCA views AI software not only as information that can be shared but also as a tool with which users can have arm's length interactions. There are early examples of SCA being practiced by AI developers, but there is much room for further development, both in the functionality of cloud-based interfaces and in the wider institutional framework.
Online mobile advertising ecosystems provide advertising and analytics services that collect, aggregate, process, and trade rich amounts of consumers' personal data and carry out interest-based ad targeting, raising serious privacy risks and a growing sense of discomfort among users of internet services. In this paper, we address users' privacy concerns by developing a cost-effective dynamic optimisation framework that preserves user privacy against profiling, ad-based inferencing, temporal app-usage behavioral patterns, and interest-based ad targeting. A major challenge in solving this dynamic model is the lack of knowledge of time-varying updates during the profiling process. We formulate a mixed-integer optimisation problem and develop an equivalent problem to show that the proposed algorithm does not require knowledge of time-varying updates in user behavior. We then develop an online control algorithm that solves the equivalent problem using Lyapunov optimisation and overcomes the difficulty of solving the nonlinear program by decomposing it into cases, achieving a trade-off between user privacy, cost, and targeted ads. We carry out extensive experiments and demonstrate the framework's applicability by implementing its critical components in a proof-of-concept `System App'. We compare the proposed framework with other privacy-protecting approaches and find that it achieves better privacy and functionality across various performance parameters.
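The Lyapunov optimisation technique named in the abstract is commonly instantiated as a drift-plus-penalty controller. The sketch below is a generic version of that pattern, not the paper's formulation: the `penalty` and `violation` callbacks and the toy obfuscation example are placeholder assumptions used only to show how virtual queues trade a per-slot cost against a long-term constraint.

```python
import numpy as np

def drift_plus_penalty(horizon, actions, penalty, violation, V=10.0, num_constraints=1):
    """Generic Lyapunov drift-plus-penalty controller.

    At each slot t, pick the action minimizing
        V * penalty(action, t) + sum_k Q_k(t) * violation_k(action, t),
    then update the virtual queues Q_k that track long-term constraint violations.
    """
    Q = np.zeros(num_constraints)          # one virtual queue per time-average constraint
    chosen = []
    for t in range(horizon):
        best = min(actions,
                   key=lambda a: V * penalty(a, t) + float(Q @ violation(a, t)))
        Q = np.maximum(Q + violation(best, t), 0.0)   # queue update
        chosen.append(best)
    return chosen

# Toy usage: trade a per-slot privacy loss against a budget-like constraint.
if __name__ == "__main__":
    actions = [0, 1, 2]                                # hypothetical obfuscation levels
    penalty = lambda a, t: (2 - a) ** 2                # privacy loss shrinks with more obfuscation
    violation = lambda a, t: np.array([a - 1.0])       # average obfuscation cost must stay <= 1
    print(drift_plus_penalty(50, actions, penalty, violation))
```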
The trend towards the cloudification of the 3GPP LTE mobile network architecture and the emergence of federated cloud infrastructures call for alternative service delivery strategies for improved user experience and efficient resource utilization. We propose Follow-Me Cloud (FMC), a design tailored to this environment but with broader applicability, which allows mobile users to always be connected via the optimal data anchor and mobility gateways, while cloud-based services follow them and are delivered via the optimal service point inside the cloud infrastructure. FMC applies a Markov-Decision-Process-based algorithm for cost-effective, performance-optimized service migration decisions, while two alternative schemes, based either on Software Defined Networking technologies or on the Locator/Identifier Separation Protocol, are proposed to ensure service continuity and disruption-free operation. Numerical results from our analytic model for FMC, as well as testbed experiments with the two alternative FMC implementations we have developed, demonstrate quantitatively and qualitatively the advantages it can bring.
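To make the Markov-Decision-Process angle concrete, here is a toy value-iteration sketch for a migration decision. It is only an illustration in the spirit of the abstract: the state (hop distance between the user's attachment point and the service), the costs, and the mobility probability are all invented placeholders, not the paper's model.

```python
import numpy as np

MAX_DIST = 5          # states: distance (hops) between user and service location
GAMMA = 0.9
MIGRATION_COST = 3.0
P_MOVE = 0.4          # probability the user moves one hop further away per step

def service_cost(d):
    """Per-step service delivery cost grows with distance (placeholder model)."""
    return 1.0 * d

def value_iteration(iters=500):
    V = np.zeros(MAX_DIST + 1)
    policy = np.zeros(MAX_DIST + 1, dtype=int)
    for _ in range(iters):
        V_new = np.zeros_like(V)
        for d in range(MAX_DIST + 1):
            nxt = min(d + 1, MAX_DIST)
            # action 0: keep the service where it is; distance may grow as the user moves
            stay = service_cost(d) + GAMMA * ((1 - P_MOVE) * V[d] + P_MOVE * V[nxt])
            # action 1: migrate the service next to the user (distance resets to 0)
            migrate = MIGRATION_COST + service_cost(0) + GAMMA * ((1 - P_MOVE) * V[0] + P_MOVE * V[1])
            V_new[d], policy[d] = min((stay, 0), (migrate, 1))
        V = V_new
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    print("optimal action per distance (0 = stay, 1 = migrate):", policy)
```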
With the increasing number of Internet of Things (IoT) devices, Machine Type Communication (MTC) has become an important use case of Fifth Generation (5G) communication systems. Since MTC devices are mostly disconnected from the Base Station (BS) to save power, a random access procedure is required before they can transmit data. If many devices attempt random access simultaneously, preamble collisions occur and latency increases. In an environment where delay-sensitive and delay-tolerant devices coexist, the contention-based random access procedure cannot satisfy the latency requirements of delay-sensitive devices. Therefore, we propose RAPID, a novel random access procedure that is completed through two message exchanges for delay-sensitive devices. We also develop an Access Pattern Analyzer (APA), which estimates the traffic characteristics of MTC devices. When UEs performing RAPID and UEs performing contention-based random access coexist, it is important to determine the number of preambles allocated to RAPID so as to reduce the random access load. We therefore analyze the random access load using a Markov chain model to obtain the optimal number of preambles for RAPID. Simulation results show that RAPID achieves 99.999% reliability with 80.8% shorter uplink latency, and decreases the random access load by 30.5%, compared with state-of-the-art techniques.
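The abstract's central trade-off, how many preambles to reserve, can be illustrated with a small Monte-Carlo experiment. This is not the paper's Markov-chain analysis; the device counts and pool sizes below are arbitrary placeholders used only to show how collision probability depends on the size of the reserved preamble pool.

```python
import random

def success_probability(n_devices, n_preambles, trials=10000):
    """Monte-Carlo estimate of the fraction of devices whose random-access
    attempt succeeds, i.e., they picked a preamble nobody else picked."""
    success = 0
    for _ in range(trials):
        picks = [random.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        success += sum(1 for p in picks if counts[p] == 1)
    return success / (trials * n_devices)

# Toy usage: splitting a preamble pool, with 10 delay-sensitive devices
# contending for the reserved portion.
if __name__ == "__main__":
    for reserved in (6, 12, 18):
        p = success_probability(n_devices=10, n_preambles=reserved)
        print(f"{reserved} reserved preambles -> success probability {p:.3f}")
```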
Online peer-to-peer support platforms enable conversations between millions of people who seek and provide mental health support. If successful, web-based mental health conversations could improve access to treatment and reduce the global disease burden. Psychologists have repeatedly demonstrated that empathy, the ability to understand and feel the emotions and experiences of others, is a key component leading to positive outcomes in supportive conversations. However, recent studies have shown that highly empathic conversations are rare in online mental health platforms. In this paper, we work towards improving empathy in online mental health support conversations. We introduce a new task of empathic rewriting which aims to transform low-empathy conversational posts to higher empathy. Learning such transformations is challenging and requires a deep understanding of empathy while maintaining conversation quality through text fluency and specificity to the conversational context. Here we propose PARTNER, a deep reinforcement learning agent that learns to make sentence-level edits to posts in order to increase the expressed level of empathy while maintaining conversation quality. Our RL agent leverages a policy network, based on a transformer language model adapted from GPT-2, which performs the dual task of generating candidate empathic sentences and adding those sentences at appropriate positions. During training, we reward transformations that increase empathy in posts while maintaining text fluency, context specificity and diversity. Through a combination of automatic and human evaluation, we demonstrate that PARTNER successfully generates more empathic, specific, and diverse responses and outperforms NLP methods from related tasks like style transfer and empathic dialogue generation. Our work has direct implications for facilitating empathic conversations on web-based platforms.
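The reward structure described above (rewarding empathy gains while preserving fluency and context specificity) can be sketched schematically as follows. All three scorers and the weights are hypothetical placeholders; the actual reward design and the sentence-level editing policy are specific to the paper.

```python
def rewrite_reward(original_post, rewritten_post, context,
                   empathy_scorer, fluency_scorer, specificity_scorer,
                   w_empathy=1.0, w_fluency=0.5, w_specificity=0.5):
    """Combine the reward terms mentioned in the abstract: reward gains in
    expressed empathy while penalizing losses in fluency and context
    specificity. A diversity term could be added in the same way.
    All three scorers are placeholders for learned models."""
    empathy_gain = empathy_scorer(rewritten_post) - empathy_scorer(original_post)
    fluency = fluency_scorer(rewritten_post)                   # e.g. LM log-likelihood
    specificity = specificity_scorer(rewritten_post, context)  # relevance to the seeker's post
    return w_empathy * empathy_gain + w_fluency * fluency + w_specificity * specificity
```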
Precise user and item embedding learning is key to building a successful recommender system. Traditionally, Collaborative Filtering (CF) provides a way to learn user and item embeddings from the user-item interaction history. However, its performance is limited by the sparseness of user behavior data. With the emergence of online social networks, social recommender systems have been proposed, which utilize each user's local neighbors' preferences to alleviate data sparsity and improve user embedding modeling. We argue that, for each user of a social platform, her embedding is influenced by the users she trusts. As social influence recursively propagates and diffuses in the social network, each user's interests change during this recursive process. Nevertheless, current social recommendation models are simply static models that leverage each user's local neighbors without simulating the recursive diffusion in the global social network, leading to suboptimal recommendation performance. In this paper, we propose a deep influence propagation model to simulate how users are influenced by the recursive social diffusion process for social recommendation. For each user, the diffusion process starts with an initial embedding that fuses the related features and a free user latent vector capturing latent behavior preferences. The key idea of our proposed model is a layer-wise influence propagation structure that models how users' latent embeddings evolve as the social diffusion process continues. We further show that the proposed model is general and can be applied when the user (item) attributes or the social network structure are not available. Finally, extensive experimental results on two real-world datasets clearly show the effectiveness of our proposed model, with more than 13% performance improvement over the best baselines.
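A minimal numerical sketch of layer-wise influence propagation is given below. The residual-style aggregation and inner-product scoring are one simple choice made for illustration; the paper's exact layer update and fusion of features with free latent vectors may differ.

```python
import numpy as np

def propagate_influence(initial_emb, trust_matrix, num_layers=2):
    """Layer-wise social influence propagation, in the spirit of the abstract:
    at each layer a user's embedding is updated by aggregating the embeddings
    of the users she trusts, so influence diffuses further with every layer.

    initial_emb:  (num_users, dim) fused feature + free latent embeddings
    trust_matrix: (num_users, num_users) row-normalized social adjacency matrix
    """
    emb = initial_emb
    for _ in range(num_layers):
        neighbor_emb = trust_matrix @ emb     # aggregate trusted users' embeddings
        emb = emb + neighbor_emb              # residual-style layer update (illustrative choice)
    return emb

def predict_score(user_emb, item_emb):
    """Preference prediction as an inner product of the final embeddings."""
    return float(user_emb @ item_emb)
```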
Although recommender systems have been comprehensively studied in the past decade in both industry and academia, most current recommender systems suffer from the following issues: 1) The sparsity of the user-item matrix seriously affects recommendation quality. As a result, most traditional recommender system approaches cannot handle users who have rated few items, which is known as the cold start problem. 2) Traditional recommender systems assume that users are independently and identically distributed and ignore the social relations between users. However, in real-life scenarios, due to the exponential growth of social networking services such as Facebook and Twitter, social connections between users play a significant role in the recommendation task. In this work, aiming to provide a better recommender system by incorporating user social network information, we propose a matrix factorization framework with user social connection constraints. Experimental results on a real-life dataset show that the proposed method performs significantly better than state-of-the-art approaches in terms of MAE and RMSE, especially for cold start users.
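A generic sketch of matrix factorization with a social-regularization constraint follows: each user's latent vector is pulled toward the average of her friends' vectors during SGD. The loss weights and update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def social_mf_sgd(ratings, friends, num_users, num_items, dim=16,
                  lr=0.01, reg=0.05, beta=0.1, epochs=20, seed=0):
    """SGD for matrix factorization with a social-regularization term.

    ratings: list of (user, item, rating) triples
    friends: friends[u] is a list of user u's social connections
    beta:    weight of the social constraint pulling P[u] toward its friends' mean
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, dim))   # user latent factors
    Q = 0.1 * rng.standard_normal((num_items, dim))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            social = P[u] - P[friends[u]].mean(axis=0) if friends[u] else 0.0
            P[u] += lr * (err * Q[i] - reg * P[u] - beta * social)
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```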
To address the sparsity and cold start problems of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Similar to actual ripples propagating on the surface of water, Ripple Network simulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the user's preference distribution with respect to a candidate item, which can then be used to predict the final click probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains in a variety of scenarios, including movie, book, and news recommendation, over several state-of-the-art baselines.
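The preference-propagation idea can be sketched as follows: expand a user's clicked items hop by hop along knowledge-graph links, then weight each hop's tail entities by similarity to the candidate item. The softmax weighting and sigmoid scoring below are a simplified analogue of the attention-style superposition the abstract describes, chosen for illustration; `kg` and `entity_emb` (a dict of entity embeddings) are assumed inputs.

```python
import numpy as np

def ripple_sets(kg, seed_entities, num_hops=2):
    """Iteratively expand a user's clicked items along knowledge-graph links.
    kg maps a head entity to a list of (relation, tail) pairs.
    Returns one list of (head, relation, tail) triples per hop."""
    hops, frontier = [], set(seed_entities)
    for _ in range(num_hops):
        triples = [(h, r, t) for h in frontier for r, t in kg.get(h, [])]
        hops.append(triples)
        frontier = {t for _, _, t in triples}
    return hops

def score_item(hops, entity_emb, item_entity):
    """Superpose the 'ripples': weight each hop's tail entities by similarity
    to the candidate item, accumulate a user preference vector, and score."""
    v = entity_emb[item_entity]
    user_vec = np.zeros_like(v)
    for triples in hops:
        if not triples:
            continue
        tails = np.stack([entity_emb[t] for _, _, t in triples])
        weights = np.exp(tails @ v)
        weights /= weights.sum()
        user_vec += weights @ tails
    return 1.0 / (1.0 + np.exp(-user_vec @ v))   # predicted click probability
```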
The robust and efficient recognition of visual relations in images is a hallmark of biological vision. Here, we argue that, despite recent progress in visual recognition, modern machine vision algorithms are severely limited in their ability to learn visual relations. Through controlled experiments, we demonstrate that visual-relation problems strain convolutional neural networks (CNNs). The networks eventually break altogether when rote memorization becomes impossible, such as when the intra-class variability exceeds their capacity. We further show that another type of feedforward network, the relational network (RN), which was shown to successfully solve seemingly difficult visual question answering (VQA) problems on the CLEVR dataset, suffers from similar limitations. Motivated by the comparable success of biological vision, we argue that feedback mechanisms, including working memory and attention, are the key computational components underlying abstract visual reasoning.
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely, when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex only, smooth only, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
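A minimal numerical sketch of the dual approach follows, under strong simplifying assumptions: quadratic local functions $f_i(x_i) = \tfrac{1}{2}(x_i - a_i)^2$, a ring graph, and the consensus constraint written as $L\mathbf{x} = 0$ with $L$ the graph Laplacian. Nesterov's accelerated gradient is run on the dual, and every step uses only neighbor sums ($L\lambda$ and $L\mathbf{x}$). The step size and momentum schedule are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

m = 8
rng = np.random.default_rng(0)
a = rng.normal(size=m)                               # each agent's private target a_i

L = 2 * np.eye(m)                                    # ring-graph Laplacian
for i in range(m):
    L[i, (i - 1) % m] = L[i, (i + 1) % m] = -1.0

def x_star(lam):
    """Local primal minimizers of f_i(x_i) + (L lam)_i * x_i (closed form here)."""
    return a - L @ lam

eta = 1.0 / np.linalg.eigvalsh(L)[-1] ** 2           # step size ~ 1 / Lipschitz const of dual gradient
lam = y = np.zeros(m)
for k in range(2000):
    grad = L @ x_star(y)                             # dual gradient = constraint violation L x*(y)
    lam_next = y + eta * grad                        # ascent step on the (concave) dual
    y = lam_next + (k / (k + 3)) * (lam_next - lam)  # Nesterov momentum
    lam = lam_next

print("consensus value per node:", np.round(x_star(lam), 4))
print("centralized minimizer (average of a_i):", round(a.mean(), 4))
```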