This paper describes the process of developing data visualisations to enhance a commercial software platform for combating insider threat. The platform's existing UI, while functional, limited analysts' ability to easily spot the patterns and outliers that visualisation naturally reveals. We describe the design and development process, proceeding from initial task and requirements gathering, through understanding the platform's data formats and the rationale behind the visualisation's design, to refining the prototype through feedback from representative domain experts who are also current users of the software. Through a number of example scenarios, we show that the visualisation can support the identified tasks and aid analysts in discovering and understanding potentially risky insider activity within a large user base.
With the continuous growth of online 3D printing communities and the democratization of 3D printers, a growing number of users have started sharing their own 3D designs on open platforms, enabling a wide audience to search, download, and 3D print models for free. Although sharing is mostly altruistic at first, open platforms have also created potential job opportunities that compensate creative labor. This paper analyzes the new job opportunities that have emerged on online 3D printing social platforms and patterns of seeking compensation, and reveals various motivations for posting creative content online. We find that offering exclusive membership through subscriptions, selling final products or printing services through web stores, and using affiliate links are the primary means of earning profits, although gaps remain between creators' expectations and realities. We show that various socio-economic promises have emerged, leading to a win-win situation in which creators gain extra income and audiences gain access to more quality content. We also discuss future challenges that need to be addressed, such as the ethical use of open-source content.
A good data visualization is not only a distortion-free graphical representation of data but also a way to reveal underlying statistical properties of the data. Despite its common use across various stages of data analysis, selecting a good visualization is often a manual process involving many iterations. There has recently been interest in reducing this effort by developing models that recommend visualizations, but these are of limited use since they require large training samples (data and visualization pairs) and focus primarily on design aspects rather than on assessing the effectiveness of the selected visualization. In this paper, we present VizAI, a generative-discriminative framework that first generates various statistical properties of the data from a number of alternative visualizations of the data. It is linked to a discriminative model that selects the visualization that best matches the true statistics of the data being visualized. VizAI can be trained with minimal supervision and adapts easily to settings with varying degrees of supervision. Using crowd-sourced judgements and a large repository of publicly available visualizations, we demonstrate that VizAI outperforms state-of-the-art methods that learn to recommend visualizations.
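To make the generative-discriminative idea concrete, here is a minimal sketch under strong simplifying assumptions: each candidate visualization "generates" the statistics a viewer could recover from it (e.g., a histogram preserves spread only up to bin resolution; a single bar of means loses spread entirely), and a discriminative step selects the candidate whose recoverable statistics are closest to the true statistics of the data. The chart types, binning scheme, and distance function below are illustrative assumptions, not VizAI's actual models.

```python
import statistics

def summarize(values):
    # true statistics of the raw data: (mean, population std. dev.)
    return (statistics.mean(values), statistics.pstdev(values))

def stats_from_histogram(values, bins=4):
    # statistics recoverable from a binned view: each value is
    # replaced by the midpoint of its histogram bin
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    mids = [lo + (min(int((v - lo) / width), bins - 1) + 0.5) * width
            for v in values]
    return summarize(mids)

def stats_from_bar_of_means(values):
    # a single-bar "mean only" view loses the spread entirely
    return (statistics.mean(values), 0.0)

def select_visualization(values):
    """Discriminative step: pick the candidate whose generated
    statistics best match the true statistics of the data."""
    true_stats = summarize(values)
    candidates = {
        "histogram": stats_from_histogram(values),
        "bar_of_means": stats_from_bar_of_means(values),
    }
    def distance(stats):
        return sum(abs(a - b) for a, b in zip(stats, true_stats))
    return min(candidates, key=lambda name: distance(candidates[name]))
```

On spread-out data the histogram wins because the bar chart discards variance; on constant data the bar of means is a perfect match. VizAI's actual generative and discriminative components are learned models rather than these hand-written rules.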
University campuses are essentially a microcosm of a city. They comprise diverse facilities such as residences, sport centres, lecture theatres, parking spaces, and public transport stops. Universities are under constant pressure to improve efficiencies while offering a better experience to various stakeholders including students, staff, and visitors. Nonetheless, anecdotal evidence indicates that campus assets are not being utilised efficiently, often due to the lack of data collection and analysis, thereby limiting the ability to make informed decisions on the allocation and management of resources. Advances in Internet of Things (IoT) technologies that can sense and communicate data from the physical world, coupled with data analytics and Artificial Intelligence (AI) that can predict usage patterns, have opened up new opportunities for organisations to lower costs and improve user experience. This thesis explores this opportunity via theory and experimentation using UNSW Sydney as a living laboratory.
Increasing urbanization and increasingly ambitious sustainability goals threaten the operational efficiency of current transportation systems and confront cities with complex choices that will have a huge impact on future generations. At the same time, the rise of private, profit-maximizing Mobility Service Providers leveraging public resources, such as ride-hailing companies, complicates current regulation schemes. This calls for tools to study such complex socio-technical problems. In this paper, we provide a game-theoretic framework to study interactions between stakeholders of the mobility ecosystem, modeling regulatory aspects such as taxes and public transport prices, as well as operational matters for Mobility Service Providers such as pricing strategy, fleet sizing, and vehicle design. Our framework is modular and can readily accommodate different types of Mobility Service Providers, actions of municipalities, and low-level models of customers' choices in the mobility system. Through both an analytical and a numerical case study for the city of Berlin, Germany, we showcase the ability of our framework to compute equilibria of the problem, to study fundamental tradeoffs, and to inform stakeholders and policy makers on the effects of interventions. Among other findings, we show tradeoffs between customer satisfaction, environmental impact, and public revenue, as well as the impact of strategic decisions on these metrics.
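The kind of stakeholder interaction such a framework captures can be sketched with a toy two-player game: a profit-maximizing Mobility Service Provider best-responds with a fare to any per-trip tax, and the municipality chooses the tax anticipating that response, trading off ridership (a proxy for customer satisfaction) against tax revenue. All functional forms, parameter values, and the linear demand model below are illustrative assumptions, not the paper's actual Berlin model.

```python
def demand(total_price, a=100.0, b=2.0):
    # linear demand: trips fall as the out-of-pocket price rises
    return max(a - b * total_price, 0.0)

def msp_best_fare(tax, cost=5.0, a=100.0, b=2.0):
    # MSP maximizes (fare - cost) * demand(fare + tax);
    # for linear demand the first-order condition gives a closed form
    return (a / b - tax + cost) / 2.0

def find_equilibrium(tax_grid, welfare_weight=0.5, cost=5.0):
    """Municipality picks the tax anticipating the MSP's best response,
    weighting ridership (satisfaction proxy) against tax revenue."""
    def objective(tax):
        fare = msp_best_fare(tax, cost)
        trips = demand(fare + tax)
        revenue = tax * trips
        return welfare_weight * trips + (1 - welfare_weight) * revenue
    best_tax = max(tax_grid, key=objective)
    fare = msp_best_fare(best_tax, cost)
    return best_tax, fare, demand(fare + best_tax)
```

Sweeping `welfare_weight` traces exactly the kind of satisfaction-versus-revenue tradeoff frontier the abstract mentions: a municipality that cares only about ridership sets a zero tax, while a revenue-maximizing one pushes the tax up at the cost of trips served.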
In designing distributed and parallel systems, there are several approaches to programming interactions in a multiprocess environment. Usually, these approaches handle only synchronization or communication in two-party interactions. This paper is concerned with a more general concept: multiparty interactions. In a multiparty interaction, several executing threads somehow "come together" to produce an intermediate and temporary combined state, use this state as a well-defined starting point for some joint activity, and then leave this interaction and continue their separate execution. The concept of multiparty interactions has been investigated by several researchers, but to the best of our knowledge, none have considered how faults in one or more participants of the multiparty interaction can best be dealt with. The goal of this paper is twofold: to show how an existing specification language can be extended in order to allow dependable multiparty interactions (DMIs) to be declared, and to present an object-oriented framework for implementing DMIs in distributed systems. To show how our scheme can be used to program a system in which multiparty interactions are more than simple synchronizations or communications, we use a case study based on an industrial production cell model developed by Forschungszentrum Informatik, Karlsruhe, Germany.
There is a perennial need in the online advertising industry to refresh ad creatives, i.e., the images and text used to entice online users towards a brand. Such refreshes are required to reduce the likelihood of ad fatigue among online users, and to incorporate insights from other successful campaigns in related product categories. Given a brand, coming up with themes for a new ad is a painstaking and time-consuming process for creative strategists. Strategists typically draw inspiration from the images and text used in past ad campaigns, as well as world knowledge on the brands. To automatically infer ad themes from such multimodal sources of information in past ad campaigns, we propose a theme (keyphrase) recommender system for ad creative strategists. The theme recommender is based on aggregating results from a visual question answering (VQA) task, which ingests: (i) ad images, (ii) text associated with the ads as well as Wikipedia pages on the brands in the ads, and (iii) questions around the ad. We leverage transformer-based cross-modality encoders to train visual-linguistic representations for our VQA task. We study two formulations of the VQA task, classification and ranking; via experiments on a public dataset, we show that cross-modal representations lead to significantly better classification accuracy and ranking precision-recall metrics than separate image and text representations, and that using multimodal information gives a significant lift over using only textual or visual information.
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
With the emergence of Web 2.0, tag recommenders have become important tools that aim to support users in finding descriptive tags for their bookmarked resources. Although current algorithms provide good results in terms of tag prediction accuracy, they are often designed in a data-driven way and thus lack a thorough understanding of the cognitive processes at play when people assign tags to resources. This thesis aims at modeling these cognitive dynamics in social tagging in order to improve tag recommendations and to better understand the underlying processes. As a first attempt in this direction, we have implemented an interplay between individual micro-level processes (e.g., categorizing resources or temporal dynamics) and collective macro-level processes (e.g., imitating other users' tags) in the form of a novel tag recommender algorithm. Preliminary results on datasets gathered from BibSonomy, CiteULike and Delicious show that our proposed approach can outperform current state-of-the-art algorithms, such as Collaborative Filtering, FolkRank or Pairwise Interaction Tensor Factorization. We conclude that recommender systems can be improved by incorporating related principles of human cognition.
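The micro/macro interplay can be illustrated with a small sketch: a micro-level score derived from the user's own tagging history with a power-law recency decay (in the spirit of activation equations from cognitive architectures), mixed with a macro-level score from the crowd's tag popularity. The decay exponent, the mixing weight `beta`, and the normalization below are illustrative assumptions, not the thesis's exact recommender.

```python
from collections import Counter

def bll_scores(user_history, now, decay=0.5):
    # user_history: list of (tag, timestamp) pairs; tags used
    # recently and frequently accumulate a higher activation
    scores = {}
    for tag, t in user_history:
        scores[tag] = scores.get(tag, 0.0) + (now - t + 1) ** (-decay)
    return scores

def recommend(user_history, crowd_tags, now, beta=0.5, k=3):
    """Mix micro-level (personal history) and macro-level (imitation
    of the crowd) signals; beta weights the personal component."""
    micro = bll_scores(user_history, now)
    macro = Counter(crowd_tags)
    def norm(d):  # scale each signal to [0, 1] before mixing
        m = max(d.values()) if d else 1.0
        return {t: v / m for t, v in d.items()}
    micro_n, macro_n = norm(micro), norm(dict(macro))
    combined = {t: beta * micro_n.get(t, 0.0) + (1 - beta) * macro_n.get(t, 0.0)
                for t in set(micro_n) | set(macro_n)}
    return sorted(combined, key=combined.get, reverse=True)[:k]
```

A tag the user applied both recently and repeatedly can outrank a tag that is merely popular in the crowd, which is the behaviour the cognitive-modeling argument above predicts.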
In this paper, we describe a solution to tackle a common set of challenges in e-commerce, which arise from the fact that new products are continually being added to the catalogue. The challenges involve properly personalising the customer experience, forecasting demand and planning the product range. We argue that the foundational piece to solve all of these problems is having consistent and detailed information about each product, information that is rarely available or consistent given the multitude of suppliers and types of products. We describe in detail the architecture and methodology implemented at ASOS, one of the world's largest fashion e-commerce retailers, to tackle this problem. We then show how this quantitative understanding of the products can be leveraged to improve recommendations in a hybrid recommender system approach.
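The cold-start role of detailed product information in such a hybrid approach can be sketched as follows: when a newly added product has no interaction history, ranking falls back on attribute similarity to products we do have signals for, while established products additionally benefit from their collaborative signal. The attribute representation, the Jaccard similarity, and the blending rule are illustrative assumptions, not the architecture actually deployed at ASOS.

```python
def jaccard(a, b):
    # set overlap between two attribute lists
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def score(product, candidate, interactions):
    """Blend a collaborative signal (observed purchase counts) with
    content similarity from product attributes."""
    collab = interactions.get(candidate["id"], 0)
    content = jaccard(product["attrs"], candidate["attrs"])
    # a brand-new candidate (collab == 0) is still rankable via attributes
    return content * (1 + collab)

def recommend_similar(product, catalogue, interactions, k=2):
    ranked = sorted(catalogue,
                    key=lambda c: score(product, c, interactions),
                    reverse=True)
    return [c["id"] for c in ranked if c["id"] != product["id"]][:k]
```

Note that a dissimilar product scores zero regardless of its popularity, while among attribute-similar products the interaction history breaks ties; real hybrid recommenders use learned embeddings rather than raw attribute sets, but the blending principle is the same.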
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset.