Online reviews in the form of user-generated content (UGC) significantly influence consumer decision-making. However, the pervasive presence of fake content, written by humans and increasingly generated by machines, undermines UGC's reliability. Recent advances in Large Language Models (LLMs) make it possible to fabricate indistinguishable fake content at a much lower cost. Leveraging OpenAI's GPT-4-Turbo and DALL-E-2 models, we craft AiGen-FoodReview, a multimodal dataset of 20,144 restaurant review-image pairs divided into authentic and machine-generated samples. We explore unimodal and multimodal detection models, achieving 99.80% multimodal accuracy with FLAVA. We score reviews and images using attributes from readability and photographic theories, respectively, and demonstrate their utility as hand-crafted features in scalable and interpretable detection models with comparable performance. The paper contributes by open-sourcing the dataset, releasing fake review detectors, recommending the dataset's use in unimodal and multimodal fake review detection tasks, and evaluating linguistic and visual features in synthetic versus authentic data.
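To make the hand-crafted feature idea concrete, the following is a minimal sketch (in Python, not the authors' released code) of scoring reviews with simple readability-style proxies and fitting an interpretable linear detector; the feature choices, toy reviews, and labels are illustrative assumptions rather than the paper's exact attribute set.

# Minimal sketch: readability-style proxies as hand-crafted features for an
# interpretable fake-review detector. Features here (sentence length, word
# length, lexical diversity) are illustrative, not the paper's exact set.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def readability_features(text: str) -> np.ndarray:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    n_sent, n_words = max(len(sentences), 1), max(len(words), 1)
    avg_sent_len = n_words / n_sent                  # words per sentence
    avg_word_len = sum(map(len, words)) / n_words    # characters per word
    ttr = len(set(words)) / n_words                  # type-token ratio
    return np.array([avg_sent_len, avg_word_len, ttr])

# Hypothetical toy data: 1 = machine-generated, 0 = authentic.
reviews = ["The ambience was delightful and every dish arrived promptly.",
           "Great food, rude waiter, would still come back for the tacos."]
labels = [1, 0]
X = np.stack([readability_features(r) for r in reviews])
clf = LogisticRegression().fit(X, labels)
print(clf.coef_)  # per-feature weights keep the detector easy to interpret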
Despite recent advances in unified image segmentation (IS), developing a unified video segmentation (VS) model remains a challenge. This is mainly because generic category-specified VS tasks need to detect all objects and track them across consecutive frames, while prompt-guided VS tasks require re-identifying the target with visual/text prompts throughout the entire video, making it hard to handle the different tasks with the same architecture. We attempt to address these issues and present a novel unified VS architecture, namely UniVS, by using prompts as queries. UniVS averages the prompt features of the target from previous frames as its initial query to explicitly decode masks, and introduces a target-wise prompt cross-attention layer in the mask decoder to integrate prompt features in the memory pool. By taking the predicted masks of entities from previous frames as their visual prompts, UniVS converts different VS tasks into prompt-guided target segmentation, eliminating the heuristic inter-frame matching process. Our framework not only unifies the different VS tasks but also naturally achieves universal training and testing, ensuring robust performance across different scenarios. UniVS shows a commendable balance between performance and universality on 10 challenging VS benchmarks, covering video instance, semantic, panoptic, object, and referring segmentation tasks. Code can be found at \url{https://github.com/MinghanLi/UniVS}.
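As a rough illustration of the prompt-as-query idea, the sketch below (runnable PyTorch, but not the official UniVS implementation) averages a target's prompt features from previous frames into an initial query and refines it with a cross-attention layer over a memory pool; all module names, shapes, and dimensions are assumptions.

# Minimal sketch of "prompts as queries": average per-frame prompt features
# of one target into an initial query, then cross-attend to a memory pool.
import torch
import torch.nn as nn

class PromptAsQuery(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.prompt_xattn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, prompt_feats_prev: torch.Tensor, memory_pool: torch.Tensor):
        # prompt_feats_prev: (B, T_prev, C) prompt features of one target
        # memory_pool:       (B, M, C) pooled prompt features across frames
        init_query = prompt_feats_prev.mean(dim=1, keepdim=True)     # (B, 1, C)
        attn_out, _ = self.prompt_xattn(init_query, memory_pool, memory_pool)
        return self.norm(init_query + attn_out)   # refined target query (B, 1, C)

# Toy usage with random features
B, T, M, C = 2, 4, 16, 256
query = PromptAsQuery()(torch.randn(B, T, C), torch.randn(B, M, C))
print(query.shape)  # torch.Size([2, 1, 256])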
Generative AI models are increasingly powering software applications, offering the capability to produce expressive content across varied contexts. However, unlike previous iterations of human-AI design, the emerging design process for generative capabilities primarily hinges on prompt engineering strategies. Given this fundamental shift in approach, our work aims to understand how collaborative software teams set up and apply design guidelines and values, iteratively prototype prompts, and evaluate prompts to achieve desired outcomes. We conducted design studies with 39 industry professionals, including designers, software engineers, and product managers. Our findings reveal a content-centric prototyping approach in which teams begin with the content they want to generate, then identify specific attributes, constraints, and values, and explore methods to give users the ability to influence and interact with those attributes. Based on associated challenges, such as the lack of model interpretability and overfitting the design to examples, we outline considerations for generative AI prototyping.
As more users turn to video-sharing platforms like YouTube as an information source, they may consume misinformation despite their best efforts. In this work, we investigate ways that users can better assess the credibility of videos, first by exploring how users currently determine credibility using existing signals on platforms and then by introducing and evaluating new credibility-based signals. We conducted 12 contextual inquiry interviews with YouTube users, finding that participants used a combination of existing signals, such as the channel name, the production quality, and prior knowledge, to evaluate credibility, yet sometimes stumbled in their efforts to do so. Based on our participants' needs, we then developed Viblio, a prototype system that enables YouTube users to view and add citations and related information while watching a video. In an evaluation with 12 people, all participants found Viblio intuitive and useful for evaluating a video's credibility and could see themselves using Viblio in the future.
Blind or Low-Vision (BLV) users often rely on audio descriptions (AD) to access video content. However, conventional static ADs can leave out detailed information in videos, impose a high mental load, neglect the diverse needs and preferences of BLV users, and lack immersion. To tackle these challenges, we introduce SPICA, an AI-powered system that enables BLV users to interactively explore video content. Informed by prior empirical studies on BLV video consumption, SPICA offers novel interactive mechanisms for supporting temporal navigation of frame captions and spatial exploration of objects within key frames. Leveraging an audio-visual machine learning pipeline, SPICA augments existing ADs by adding interactivity, spatial sound effects, and individual object descriptions without requiring additional human annotation. Through a user study with 14 BLV participants, we evaluated the usability and usefulness of SPICA and explored user behaviors, preferences, and mental models when interacting with augmented ADs.
Researchers use information about the amount of time people spend on digital media for numerous purposes. Since social media platforms commonly do not allow external access to measure usage time directly, a common alternative is to rely on participants' self-estimates. However, doubts have been raised about the accuracy of these self-estimates, raising questions about the cognitive factors that underlie people's perceptions of the time they spend on social media. In this work, we build on prior studies and explore a novel social media platform in the context of use time: TikTok. We conduct platform-independent measurements of people's self-reported and server-logged TikTok usage (n=255) to understand how users' demographics and platform engagement influence their perceptions of the time they spend on the platform and their estimation accuracy. Our work adds to the body of work seeking to understand time estimations in different digital contexts and identifies new influential engagement factors.
In this letter, we design a downlink multi-user communication framework based on Rate-Splitting Multiple Access (RSMA) for semantic-aware networks. First, we formulate an optimization problem to jointly obtain the optimal user scheduling, precoding, and power allocation schemes. We use the Age of Incorrect Information (AoII) metric in the objective function of the formulated problem to maximize the freshness of the overall information to be transmitted. Using big-M and Successive Convex Approximation (SCA) methods, we convert the resulting non-convex problem with conditional objective and constraints into a convex one and propose an iterative algorithm to solve it. Through numerical results, we show that RSMA achieves a lower AoII than Space Division Multiple Access (SDMA) owing to its superior performance under multi-user interference.
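For readers unfamiliar with AoII or the big-M technique, the LaTeX snippet below shows one common discrete-time AoII formulation and a generic big-M reformulation of a conditional constraint; the symbols and the exact forms used in the letter may differ.

% Hedged illustration, not the letter's exact formulation.
% AoII grows while the receiver's estimate \hat{X}_t differs from the source X_t:
\Delta^{\mathrm{AoII}}_t \;=\; \bigl(t - V_t\bigr)\,\mathbf{1}\{X_t \neq \hat{X}_t\},
\qquad V_t = \max\{\tau \le t : X_\tau = \hat{X}_\tau\}.
% Generic big-M reformulation of "if s_k = 1 then g(\mathbf{p}_k) \le 0",
% with binary scheduling variable s_k and a sufficiently large constant M:
g(\mathbf{p}_k) \;\le\; M\,(1 - s_k), \qquad s_k \in \{0,1\}.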
IoT devices face continuous malicious attacks due to their widespread use. Among these devices, web-related vulnerabilities are widely exploited because of inherent weaknesses such as improper permission controls and insecure interfaces. Recently, web interface frameworks for embedded systems have become highly diverse, and specific vulnerabilities can arise if developers fail to validate user input parameters or if the validation is not strict enough. Accurately and comprehensively discovering vulnerabilities in the web interfaces of IoT devices through an automated method is therefore a major challenge. This paper addresses this challenge. We have developed an automated vulnerability detection system called LuaTaint for LuCI, a typical web interface framework. The system employs static taint analysis to address web security issues on mobile terminal platforms and to ensure detection coverage. It integrates rules pertaining to page handler control logic within the taint detection process to improve its extensibility. We also implemented a post-processing step with the assistance of large language models to enhance accuracy and reduce the need for manual analysis. We have created a prototype of LuaTaint and tested it on 92 IoT firmware images from 8 well-known vendors. LuaTaint has discovered 68 unknown vulnerabilities.
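To illustrate the underlying taint-analysis idea (not LuaTaint's actual implementation), the following Python sketch propagates taint from a hypothetical user-input source through assignments and flags flows into a hypothetical command-execution sink; the statement format and the source/sink names are illustrative assumptions.

# Minimal taint-propagation sketch over a toy straight-line program.
SOURCES = {"http_formvalue"}   # hypothetical source: user input from a web form
SINKS = {"os_execute"}         # hypothetical sink: command execution

# Toy program as (lhs, op, payload) tuples
program = [
    ("cmd",  "call",   ("http_formvalue", ["req"])),   # cmd <- user input
    ("line", "concat", (["'ping '", "cmd"],)),          # line <- 'ping ' .. cmd
    (None,   "call",   ("os_execute", ["line"])),       # sink reached
]

tainted = set()
for lhs, op, payload in program:
    if op == "call":
        fn, args = payload
        if fn in SOURCES and lhs:
            tainted.add(lhs)                             # introduce taint at a source
        if fn in SINKS and any(a in tainted for a in args):
            print(f"potential vulnerability: tainted data reaches {fn}")
    elif op == "concat":
        (operands,) = payload
        if lhs and any(o in tainted for o in operands):
            tainted.add(lhs)                             # taint flows through concat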
This chapter explores the practice of conducting user research studies and design assessments in virtual reality (VR). An overview of key VR hardware and software tools is provided, including game engines, such as Unity and Unreal Engine. Qualitative and quantitative research methods, along with their various synergies with VR, are likewise discussed, and some of the challenges associated with VR, such as limited sensory stimulation, are reflected upon. VR is proving particularly useful in the context of space systems development, where its utilisation offers a cost-effective and secure method for simulating extraterrestrial environments, allowing for rapid prototyping and evaluation of innovative concepts under representative operational conditions. To illustrate this, we present a case study detailing the application of VR to aid aerospace engineers testing their ideas with end-users and stakeholders during early design stages of the European Space Agency's (ESA) prospective Argonaut lunar lander. This case study demonstrates the effectiveness of VR simulations in gathering important feedback concerning the operability of the Argonaut lander in poor lighting conditions as well as surfacing relevant ergonomics considerations and constraints. The chapter concludes by discussing the strengths and weaknesses associated with VR-based user studies and proposes future research directions, emphasising the necessity for novel VR interfaces to overcome existing technical limitations.
Governments use propaganda, including through visual content -- or Politically Salient Image Patterns (PSIP) -- on social media, to influence and manipulate public opinion. In the present work, we collected the Telegram post-history of 989 Russian milbloggers to better understand the social and political narratives that circulated online in the months surrounding Russia's 2022 full-scale invasion of Ukraine. Overall, we found an 8,925% increase (p<0.001) in the number of posts and a 5,352% increase (p<0.001) in the number of images posted by these accounts in the two weeks prior to the invasion. We also observed a similar increase in the number and intensity of politically salient manipulated images that circulated on Telegram. Although this paper does not evaluate malice or coordination in these activities, we do conclude with a call for further research into the role that manipulated visual media plays in the lead-up to instability events and armed conflict.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For the techniques in each category, we analyze their accuracy, advantages, and disadvantages, and discuss potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
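As a concrete example of category (1), the snippet below sketches symmetric post-training int8 quantization of a weight tensor in Python; it is an illustrative baseline under simple assumptions, not a method proposed in the survey.

# Minimal sketch: symmetric post-training quantization of weights to int8.
import numpy as np

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0          # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale            # approximate original weights

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"int8 storage is 4x smaller than float32; mean reconstruction error = {err:.5f}")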