In developing regions, a substantial number of users rely on legacy and ultra-low-cost mobile devices. Unfortunately, many of these devices are not equipped to run the standard authentication or identity apps available for smartphones. Increasingly, apps that display Quick Response (QR) codes are being used to communicate personal credentials (e.g., COVID-19 vaccination certificates). This paper describes a novel interface for QR code credentials that is compatible with legacy mobile devices. Our solution, which we have released under an open-source license, allows web-application-enabled legacy mobile devices to load and display standard QR codes. This technique makes modern identity platforms available to previously excluded and economically disadvantaged populations.
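To make the idea concrete, below is a minimal sketch (not the paper's released implementation) of one way such an interface could work: a server renders the credential as a standard QR PNG, so a legacy handset's built-in browser only has to load a plain image. The endpoint, port, and the third-party Python `qrcode` package are illustrative assumptions.

```python
# Minimal sketch: serve a credential string as a QR PNG that even a
# basic WAP/XHTML browser can display via a plain <img> tag.
# Assumes the third-party `qrcode` package (with its PIL backend).
import io
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

import qrcode

class QRHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        data = query.get("data", ["hello"])[0]   # credential payload
        buf = io.BytesIO()
        qrcode.make(data).save(buf, format="PNG")  # render a standard QR code
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(buf.getvalue())

if __name__ == "__main__":
    # A legacy device then only needs to load
    # http://<host>:8000/?data=<credential> in its built-in browser.
    HTTPServer(("", 8000), QRHandler).serve_forever()
```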
In today's digital world, interaction with online platforms is ubiquitous, and content moderation is therefore important for protecting users from content that does not comply with pre-established community guidelines. Having a robust content moderation system at every stage of planning is particularly important. We study the short-term planning problem of allocating human content reviewers to different harmful content categories. We use tools from fair division and study the application of competitive equilibrium and leximin allocation rules. Furthermore, we incorporate into the traditional Fisher market setup novel aspects that are of practical importance. The first aspect is the forecasted workload of different content categories. We show how a formulation inspired by the celebrated Eisenberg-Gale program allows us to find an allocation that not only satisfies the forecasted workload but also fairly allocates the remaining reviewing hours among all content categories. The resulting allocation is also robust, as the additional allocation provides a guardrail in cases where the actual workload deviates from the forecast. The second practical consideration is time-dependent allocation, motivated by the fact that partners need scheduling guidance for reviewers across days to achieve efficiency. To address the time component, we introduce new extensions of the various fair allocation approaches from the single-period setting, and we show that many properties carry over in essence, albeit with some modifications. Related to the time component, we additionally investigate how to produce smooth allocations over time (e.g., the partners who staff content reviewers prefer an allocation that does not vary much from period to period, to minimize staffing switches). We demonstrate the performance of our proposed approaches on real-world data obtained from Meta.
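As a concrete illustration of the Eisenberg-Gale-inspired formulation, the toy program below allocates a budget of reviewer hours so that each category's forecasted workload is met as a hard floor, and the leftover hours are split in a proportionally fair way by category weight. This is our own simplified single-period instantiation, not the paper's exact program; the numbers, weights, and use of `cvxpy` are assumptions.

```python
# Toy Eisenberg-Gale-style allocation: meet each category's forecasted
# workload m_i, then divide the remaining reviewer hours fairly
# (proportional fairness) according to weights w_i.
# Assumes the `cvxpy` and `numpy` packages.
import cvxpy as cp
import numpy as np

H = 100.0                                  # total reviewer hours available
m = np.array([20.0, 35.0, 10.0])           # forecasted workload per category
w = np.array([1.0, 2.0, 1.0])              # category budgets/weights

x = cp.Variable(len(m))                    # hours allocated per category
objective = cp.Maximize(cp.sum(cp.multiply(w, cp.log(x))))  # EG objective
constraints = [cp.sum(x) <= H, x >= m]     # capacity + workload floors
cp.Problem(objective, constraints).solve()

print(np.round(x.value, 2))  # meets every floor, splits the rest by weight
```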
The process industry is one of the leading sectors of the world economy, characterized by difficult environmental conditions, low innovation, and very high electricity consumption. There is a strong worldwide push towards the dual objective of improving the efficiency of plants and products while significantly reducing electricity consumption and CO2 emissions. Digital transformation can be the enabling driver to achieve these goals while ensuring better working conditions for workers. Currently, digital sensors in plants produce a large amount of data whose potential value goes unexploited. Digital technologies, with process design using digital twins, can combine the physical and the virtual worlds, bringing innovation with great efficiency and a drastic reduction of waste. In accordance with the guidelines of Industrie 4.0, the H2020-funded CAPRI project aims to innovate the process industry with a modular and scalable reference architecture, based on open-source software, which can be implemented in both brownfield and greenfield scenarios. The ability to distribute processing between the edge, where the data is created, and the cloud, where the greatest computational resources are available, facilitates the development of integrated digital solutions with cognitive capabilities. The reference architecture will ultimately be validated in the asphalt, steel, and pharma pilot plants, allowing the development of integrated planning solutions, with scheduling and control of the plants, optimizing the efficiency and reliability of the supply chain and balancing energy efficiency.
While research continues to investigate and improve the accuracy, fairness, and normative appropriateness of content moderation processes on large social media platforms, even the best process cannot be effective if users reject its authority as illegitimate. We present a survey experiment comparing the perceived institutional legitimacy of four popular content moderation processes. We conducted a within-subjects experiment in which we showed US Facebook users moderation decisions and randomized the description of whether those decisions were made by paid contractors, algorithms, expert panels, or juries of users. Prior work suggests that juries will have the highest perceived legitimacy due to the benefits of judicial independence and democratic representation. However, expert panels had greater perceived legitimacy than algorithms or juries. Moreover, outcome alignment - agreement with the decision - played a larger role than process in determining perceived legitimacy. These results suggest benefits to incorporating expert oversight in content moderation and underscore that any process will face legitimacy challenges derived from disagreement about outcomes.
Despite the benefits of interactive storytelling for children's skill development and parent-child bonding, many parents do not often engage their child in story-related dialogue, owing to limited availability or to the difficulty of coming up with appropriate questions. While recent advances have made AI generation of questions from stories possible, a fully automated approach excludes parent involvement, disregards educational goals, and under-optimizes for child engagement. Informed by need-finding interviews and participatory design (PD) results, we developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences. StoryBuddy's design accommodates dynamic user needs, balancing the desire for parent involvement and parent-child bonding against the goal of minimizing parent intervention when parents are busy. The PD revealed parents' varied assessment and educational goals, which StoryBuddy addresses by letting parents configure question types and track their child's progress. A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
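For readers unfamiliar with the underlying capability, the sketch below shows generic question generation from a story passage with an off-the-shelf instruction-tuned model. This is only an illustration of the kind of AI generation the abstract refers to, not StoryBuddy's actual model or prompts; the checkpoint choice and prompt wording are assumptions.

```python
# Generic question-generation sketch using a public instruction-tuned
# seq2seq model. Assumes the Hugging Face `transformers` package.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

passage = ("The little fox could not reach the grapes, "
           "so he said they were probably sour anyway.")
prompt = ("Write one comprehension question a parent could ask a child "
          f"about this story passage: {passage}")

question = generator(prompt, max_new_tokens=40)[0]["generated_text"]
print(question)
```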
The face is one of the most widely employed traits for person recognition, even in many large-scale applications. Despite technological advancements, face recognition systems still face obstacles caused by pose, expression, occlusion, and aging variations. Owing to the COVID-19 pandemic, contactless identity verification has become exceedingly vital. Recently, a few studies have examined the effect of face masks on adult face recognition systems (FRS). However, the combined impact of aging and face masks on the recognition of child subjects has not been adequately explored. Thus, the main objective of this study is to analyze the longitudinal impact of aging, together with face masks and other covariates, on FRS for children. Specifically, we performed a comparative investigation of three top-performing publicly available face matchers and a post-COVID-19 commercial-off-the-shelf (COTS) system under child cross-age verification and identification settings, using our generated synthetic mask and no-mask samples. Furthermore, we investigated the longitudinal effect of eyeglasses, with and without masks. The study exploited a no-mask longitudinal child face dataset (the extended Indian Child Longitudinal Face Dataset), which contains 26,258 face images of 7,473 subjects in the age group of [2, 18] over an average time span of 3.35 years. Due to the combined effects of face masks and face aging, the verification accuracies of FaceNet, PFE, ArcFace, and the COTS system decrease by approximately 25%, 22%, 18%, and 12%, respectively.
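For context, the sketch below shows how face verification is typically scored with embedding models such as FaceNet or ArcFace: embed the enrollment and probe images, compare with cosine similarity, and accept the pair if the score clears a tuned threshold. This is a generic illustration, not the paper's evaluation code; the `embed` function and threshold value are assumptions.

```python
# Generic embedding-based face verification sketch. Assumes `numpy`
# and an `embed(image) -> np.ndarray` function supplied by whichever
# face matcher is being evaluated.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_enroll: np.ndarray, emb_probe: np.ndarray,
           threshold: float = 0.5) -> bool:
    # Genuine pair if the enrollment image (e.g., a no-mask image at
    # age 5) and the probe (e.g., a synthetic-mask image at age 8)
    # embed close enough together despite aging and occlusion.
    return cosine_similarity(emb_enroll, emb_probe) >= threshold
```

Accuracy drops like those reported above correspond to genuine cross-age, masked pairs falling below the threshold more often than their no-mask counterparts.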
The introduction of machine learning (ML) components in software projects has created the need for software engineers to collaborate with data scientists and other specialists. While collaboration can always be challenging, ML introduces additional challenges with its exploratory model development process, the additional skills and knowledge needed, difficulties in testing ML systems, the need for continuous evolution and monitoring, and non-traditional quality requirements such as fairness and explainability. Through interviews with 45 practitioners from 28 organizations, we identified key collaboration challenges that teams face when building and deploying ML systems into production. We report on common collaboration points in the development of production ML systems for requirements, data, and integration, as well as corresponding team patterns and challenges. We find that most of these challenges center on communication, documentation, engineering, and process, and we collect recommendations to address them.
In this paper, we present ML-Quadrat, an open-source research prototype based on the Eclipse Modeling Framework (EMF) and the state of the art in the literature of Model-Driven Software Engineering (MDSE) for smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT). Its envisioned users are mostly software developers who might not have deep knowledge and skills in the heterogeneous IoT platforms and the diverse Artificial Intelligence (AI) technologies, specifically regarding Machine Learning (ML). ML-Quadrat is released under the terms of the Apache 2.0 license on GitHub. Additionally, we demonstrate an early tool prototype of DriotData, a web-based low-code platform targeting citizen data scientists and citizen/end-user software developers. DriotData brings ML-Quadrat to industry by offering an extended version of it as a subscription-based service to companies, mainly Small- and Medium-sized Enterprises (SMEs). The current preliminary version of DriotData offers three web-based model editors: text-based, tree-/form-based, and diagram-based. The latter is designed for domain experts in the problem or use-case domains (namely the IoT vertical domains) who might not have knowledge and skills in the field of IT. Finally, a short video demonstrating the tools is available on YouTube: https://youtu.be/VAuz25w0a5k
New interactive physical-digital play technologies are shaping the way children play. These technologies are digital play technologies that engage children in analogue forms of behaviour, either alone or with others. Current interactive physical-digital play technologies include robots, digital agents, mixed or augmented reality devices, and smart-eye based gaming. Little is known, however, about the ways in which these technologies could promote or damage child development. This systematic review aimed to understand whether and how these physical-digital play technologies promote developmentally relevant behaviour in typically developing 0- to 12-year-olds. Psychology, Education, and Computer Science databases were searched, producing 635 papers. A total of 31 papers met the inclusion criteria, of which 17 were of high enough quality to be included for synthesis. Results indicate that these new interactive play technologies could have a positive effect on children's developmentally relevant behaviour. The review identified specific ways in which different behaviours were promoted. Providing information about children's own performance promoted self-monitoring. Slowing interactivity, play interdependency, and joint object accessibility promoted collaboration. Offering delimited choices promoted decision making. Problem solving and physical activity were promoted by requiring children to engage in them to keep playing. Four principles underpinned the ways in which physical-digital play technologies afforded child behaviour: the social expectations framing play situations, the directiveness of action regulations (inviting, guiding, or forcing behaviours), the technical features of the play technologies (digital play mechanics and physical characteristics), and the alignment between play goals, play technology, and the play behaviours promoted.
In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers but also protects users' sensitive information. Another benefit is bandwidth reduction, as many kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
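One well-known instance of this architecture is federated averaging, sketched below in a deliberately simplified form (our illustration, not any specific surveyed system): each device takes a few gradient steps on its private data and uploads only model parameters, which the server averages. The toy linear model, data, and hyperparameters are assumptions.

```python
# Toy federated-averaging sketch: devices train locally on private
# data; the server averages the returned parameters, so raw data
# never leaves a device. Assumes `numpy`.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """One device: a few gradient steps of linear regression on local data."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w  # only these parameters are uploaded, never (X, y)

def server_round(w_global, device_data):
    """Server: average the parameter vectors returned by all devices."""
    updates = [local_update(w_global.copy(), X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five phones, each holding its own private samples
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(50):
    w = server_round(w, devices)
print(w)  # approaches true_w without any device sharing raw data
```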
We propose an Active Learning approach to image segmentation that exploits geometric priors to streamline the annotation process. We demonstrate this for both background-foreground and multi-class segmentation tasks in 2D images and 3D image volumes. Our approach combines geometric smoothness priors in the image space with more traditional uncertainty measures to estimate which pixels or voxels are most in need of annotation. For multi-class settings, we additionally introduce two novel criteria for uncertainty. In the 3D case, we use the resulting uncertainty measure to show the annotator voxels lying on the same planar patch, which makes batch annotation much easier than if they were randomly distributed in the volume. The planar patch is found using a branch-and-bound algorithm that finds a patch with the most informative instances. We evaluate our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on regular images of horses and faces. We demonstrate a substantial performance increase over state-of-the-art approaches.
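The stripped-down sketch below conveys the core selection idea: score pixels by predictive entropy, spatially smooth the scores as a crude surrogate for the geometric prior, and request annotations for the highest-scoring pixels. The paper's actual method combines richer priors and a branch-and-bound search for planar batch patches; this simplified version, its parameters, and the random example data are assumptions.

```python
# Simplified uncertainty-driven active-learning selection for
# segmentation. Assumes `numpy` and `scipy`.
import numpy as np
from scipy.ndimage import gaussian_filter

def select_pixels_to_annotate(probs: np.ndarray, k: int, sigma: float = 2.0):
    """probs: (H, W, C) per-pixel class probabilities from the current model."""
    # Per-pixel predictive entropy as the uncertainty measure.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)  # (H, W)
    # Spatial smoothing: prefer uncertain pixels whose neighbours are
    # also uncertain, a crude geometric-smoothness surrogate.
    score = gaussian_filter(entropy, sigma=sigma)
    flat = np.argsort(score.ravel())[::-1][:k]   # k highest-scoring pixels
    return np.unravel_index(flat, score.shape)   # (rows, cols) to annotate

# Example: request the 100 most informative pixels of a 64x64, 3-class map.
probs = np.random.dirichlet(alpha=[1, 1, 1], size=(64, 64))
rows, cols = select_pixels_to_annotate(probs, k=100)
```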