Background: Given that VR is applied in multiple domains, understanding the effects of cybersickness on human cognition and motor skills, as well as the factors contributing to cybersickness, gains urgency. This study aimed to explore the predictors of cybersickness and its interplay with cognitive and motor skills. Methods: Thirty participants, aged 20-45 years, completed the MSSQ and the CSQ-VR and were immersed in VR. During immersion, they were exposed to a roller coaster ride. Before and after the ride, participants responded to the CSQ-VR and performed VR-based cognitive and psychomotor tasks. After the VR session, participants completed the CSQ-VR again. Results: Motion sickness susceptibility during adulthood was the most prominent predictor of cybersickness. Pupil dilation emerged as a significant predictor of cybersickness. Experience with videogaming was a significant predictor of both cybersickness and cognitive/motor functions. Cybersickness negatively affected visuospatial working memory and psychomotor skills. The intensities of overall cybersickness, nausea, and vestibular symptoms significantly decreased after removal of the VR headset. Conclusions: In order of importance, motion sickness susceptibility and gaming experience are significant predictors of cybersickness. Pupil dilation appears to be a biomarker of cybersickness. Cybersickness negatively affects visuospatial working memory and psychomotor skills. Cybersickness and its effects on performance should be examined during, and not after, immersion.
Macro-level modeling is still the dominant approach in many demographic applications because of its simplicity. Individual-level models, on the other hand, provide a more comprehensive understanding of observed patterns; however, their estimation using real data has remained a challenge. The approach we introduce in this article attempts to overcome this limitation. Using likelihood-free inference techniques, we show that it is possible to estimate the parameters of a simple but demographically interpretable individual-level model of the reproductive process from a set of aggregate fertility rates. By estimating individual-level quantities from widely available aggregate data, this approach can contribute to a better understanding of reproductive behavior and its driving mechanisms. It also allows for a more direct link between individual-level and population-level processes. We illustrate our approach using data from three natural fertility populations.
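To give a concrete sense of how likelihood-free inference can recover individual-level parameters from aggregate rates, the sketch below implements a plain ABC rejection sampler around a toy reproductive-process simulator. The simulator, its parameters (a monthly fecundability and a postpartum non-susceptible period), the priors, and the tolerance are illustrative assumptions, not the model estimated in the article.

import numpy as np

rng = np.random.default_rng(0)
AGES = np.arange(15, 50)  # reproductive ages 15-49

def simulate_asfr(fecundability, nonsusceptible_months, n_women=1000):
    """Simulate a cohort month by month; return age-specific fertility rates
    (births per woman-year at each age). Pure-Python loop kept for clarity."""
    births = np.zeros(len(AGES))
    for _ in range(n_women):
        wait = 0  # months left in the postpartum non-susceptible state
        for i in range(len(AGES)):
            for _ in range(12):
                if wait > 0:
                    wait -= 1
                elif rng.random() < fecundability:
                    births[i] += 1
                    wait = nonsusceptible_months
    return births / n_women

def abc_rejection(observed_asfr, n_draws=200, tol=0.6):
    """ABC rejection: keep prior draws whose simulated aggregate rates fall
    within an (arbitrary) Euclidean distance `tol` of the observed rates."""
    accepted = []
    for _ in range(n_draws):
        theta = (rng.uniform(0.05, 0.35), int(rng.integers(6, 30)))
        sim = simulate_asfr(*theta, n_women=100)
        if np.linalg.norm(sim - observed_asfr) < tol:
            accepted.append(theta)
    return accepted  # crude sample from the approximate posterior

# Usage: pretend the "observed" rates come from known parameters, then recover them.
observed = simulate_asfr(0.15, 12, n_women=1000)
posterior = abc_rejection(observed)
print(f"accepted {len(posterior)} of 200 draws")
if posterior:
    print("posterior mean fecundability:", np.mean([t[0] for t in posterior]).round(3))

In practice one would replace plain rejection with a more efficient likelihood-free scheme (e.g., sequential Monte Carlo ABC or neural posterior estimation), but the logic of comparing simulated and observed aggregate rates is the same.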
The past decade has witnessed the rapid development of geospatial artificial intelligence (GeoAI), primarily due to ground-breaking achievements in deep learning and machine learning. A growing number of scholars in cartography have successfully demonstrated that GeoAI can accelerate previously complex cartographic design tasks and even enable cartographic creativity in new ways. Despite the promise of GeoAI, researchers and practitioners have growing concerns about its ethical issues for cartography. In this paper, we conducted a systematic content analysis and narrative synthesis of research studies integrating GeoAI and cartography to summarize current research and development trends regarding the usage of GeoAI for cartographic design. Based on this review and synthesis, we first identify dimensions of GeoAI methods for cartography, such as data sources, data formats, and map evaluations, along with six contemporary GeoAI models, each of which serves a variety of cartographic tasks. These models include decision trees, knowledge graph and semantic web technologies, deep convolutional neural networks, generative adversarial networks, graph neural networks, and reinforcement learning. Further, we summarize seven cartographic design applications where GeoAI has been effectively employed: generalization, symbolization, typography, map reading, map interpretation, map analysis, and map production. We also raise five potential ethical challenges that need to be addressed in the integration of GeoAI for cartography: commodification, responsibility, privacy, bias, and (together) transparency, explainability, and provenance. We conclude by identifying four potential research directions for future cartographic research with GeoAI: GeoAI-enabled active cartographic symbolism, human-in-the-loop GeoAI for cartography, GeoAI-based mapping-as-a-service, and generative GeoAI for cartography.
Accurately estimating Origin-Destination (OD) matrices is a topic of increasing interest for efficient transportation network management and sustainable urban planning. Traditionally, travel surveys have supported this process; however, their availability and comprehensiveness can be limited. Moreover, the recent COVID-19 pandemic has triggered unprecedented shifts in mobility patterns, underscoring the urgency of accurate and dynamic mobility data to support policies and decisions with data-driven evidence. In this study, we tackle these challenges by introducing an innovative pipeline for estimating dynamic OD matrices. The motivating real-world problem is based on the Trenord railway network in Lombardy, Italy. We apply a novel approach that integrates ticket and subscription sales data with passenger counts obtained from Automated Passenger Counting (APC) systems, making use of the Iterative Proportional Fitting (IPF) algorithm. Our work effectively addresses the complexities posed by incomplete and diverse data sources, showcasing the adaptability of our pipeline across various transportation contexts. Ultimately, this research bridges the gap between available data sources and the escalating need for precise OD matrices. The proposed pipeline fosters a comprehensive grasp of transportation network dynamics, providing a valuable tool for transportation operators, policymakers, and researchers. Indeed, to highlight the potential of dynamic OD matrices, we showcase methods for detecting anomalies in mobility trends in the network through such matrices and interpret them in light of events that occurred in the last months of 2022.
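Since the IPF algorithm is central to this kind of pipeline, the following is a minimal sketch of the standard IPF iteration on a toy three-station example: a seed OD matrix (e.g., derived from ticket-sales shares) is alternately rescaled so that its row and column sums match margins such as APC-style boardings and alightings. The station counts and numbers are invented for illustration and are not taken from the Trenord data.

import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-6, max_iter=1000):
    """Iterative Proportional Fitting: rescale a seed OD matrix until its row
    and column sums match the given margins (which must share the same total)."""
    od = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale rows to match origin totals (e.g., boardings per station).
        row_sums = od.sum(axis=1, keepdims=True)
        od *= np.divide(row_targets[:, None], row_sums,
                        out=np.zeros_like(row_sums), where=row_sums > 0)
        # Scale columns to match destination totals (e.g., alightings per station).
        col_sums = od.sum(axis=0, keepdims=True)
        od *= np.divide(col_targets[None, :], col_sums,
                        out=np.zeros_like(col_sums), where=col_sums > 0)
        if (np.abs(od.sum(axis=1) - row_targets).max() < tol and
                np.abs(od.sum(axis=0) - col_targets).max() < tol):
            break
    return od

# Toy example with three stations: the seed encodes assumed OD shares,
# the margins come from (hypothetical) APC counts.
seed = np.array([[0.0, 40.0, 10.0],
                 [30.0, 0.0, 20.0],
                 [15.0, 25.0, 0.0]])
boardings = np.array([60.0, 45.0, 35.0])    # row margins (origins), total 140
alightings = np.array([50.0, 55.0, 35.0])   # column margins (destinations), total 140
print(ipf(seed, boardings, alightings).round(2))

A dynamic OD estimate is then obtained by running this fitting step separately for each time slice (e.g., per day or per service period) with the corresponding margins.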
Perception of offensiveness is inherently subjective, shaped by the lived experiences and socio-cultural values of the perceivers. Recent years have seen substantial efforts to build AI-based tools that can detect offensive language at scale, as a means to moderate social media platforms and to ensure the safety of conversational AI technologies such as ChatGPT and Bard. However, existing approaches treat this task as a technical endeavor, built on top of data annotated for offensiveness by a global crowd workforce without any attention to the crowd workers' provenance or the values their perceptions reflect. We argue that cultural and psychological factors play a vital role in the cognitive processing of offensiveness, and that this is critical to consider in this context. We re-frame the task of determining offensiveness as essentially a matter of moral judgment: deciding the boundaries of ethically wrong vs. right language within an implied set of socio-cultural norms. Through a large-scale cross-cultural study based on 4309 participants from 21 countries across 8 cultural regions, we demonstrate substantial cross-cultural differences in perceptions of offensiveness. More importantly, we find that individual moral values play a crucial role in shaping these variations: moral concerns about Care and Purity are significant mediating factors driving cross-cultural differences. These insights are of crucial importance as we build AI models for a pluralistic world, where the values these models espouse should respect and account for moral values in diverse geo-cultural contexts.
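To make the notion of a "mediating factor" concrete, here is a minimal sketch of a regression-based mediation analysis on synthetic data, in which a moral-concern score partially mediates the effect of cultural region on offensiveness ratings. The variable names (`region`, `care`, `offense`) and effect sizes are illustrative assumptions, not the study's measures or results, and the decomposition shown is the simple Baron-Kenny style one rather than the authors' analysis.

import numpy as np

rng = np.random.default_rng(0)
n = 4000
region = rng.integers(0, 2, size=n).astype(float)        # two regions, coded 0/1
care = 0.8 * region + rng.normal(size=n)                  # path a: region -> mediator
offense = 0.5 * care + 0.2 * region + rng.normal(size=n)  # paths b and c'

def ols_slopes(y, predictors):
    """Return OLS coefficients of y on the predictors (intercept prepended)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols_slopes(offense, [region])[0]                  # path c: total effect
a = ols_slopes(care, [region])[0]                         # region -> care
cprime, b = ols_slopes(offense, [region, care])           # direct effect and care -> offense
print(f"total effect c      = {total:.3f}")
print(f"direct effect c'    = {cprime:.3f}")
print(f"indirect effect a*b = {a * b:.3f}  (mediated through 'care')")

The indirect effect a*b is the part of the cross-regional difference in perceived offensiveness that flows through the moral-concern score; a nonzero value is what "significant mediating factor" refers to.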
Monocular Re-Localization (MRL) is a critical component in numerous autonomous applications; it estimates the 6-degree-of-freedom pose with respect to a scene map from a single monocular image. In recent decades, significant progress has been made in the development of MRL techniques. Numerous landmark algorithms have achieved extraordinary success in terms of localization accuracy and robustness against visual interference. In MRL research, scene maps are represented in various forms, and these representations determine how MRL methods work and even how they perform. However, to the best of our knowledge, existing surveys do not provide systematic reviews of MRL from the perspective of the map. This survey fills the gap by comprehensively reviewing MRL methods that employ monocular cameras as the main sensor, promoting further research. 1) We commence by delving into the problem definition of MRL and exploring current challenges, while also comparing our survey with previously published ones. 2) MRL methods are then categorized into five classes according to the representation form of the utilized map, including geo-tagged frames, visual landmarks, point clouds, and vectorized semantic maps, and we review the milestone MRL works of each category. 3) To quantitatively and fairly compare MRL methods with various map representations, we also review some public datasets and report the performance of some typical MRL methods. The strengths and weaknesses of different types of MRL methods are analyzed. 4) We finally introduce some topics of interest in this field and give our personal opinions. This survey can serve as valuable reference material for newcomers and researchers interested in MRL, and a continuously updated summary of this survey, including reviewed papers and datasets, is publicly available to the community at: //github.com/jinyummiao/map-in-mono-reloc.
Quantum computing provides a new dimension in computation, utilizing the principles of quantum mechanics to potentially solve complex problems that are currently intractable for classical computers. However, little research has been conducted on the architecture decisions made in quantum software development, which have a significant influence on the functionality, performance, scalability, and reliability of these systems. This study aims to empirically investigate and analyze architecture decisions made during the development of quantum software systems, identifying prevalent challenges and limitations using posts and issues from Stack Exchange and GitHub. We used a qualitative approach to analyze the data obtained from Stack Exchange sites and GitHub projects. Specifically, we collected data from 151 issues (from 47 GitHub projects) and 43 posts (from three Stack Exchange sites) related to architecture decisions in quantum software development. The results show that in quantum software development (1) architecture decisions are articulated in six linguistic patterns, the most common of which are Solution Proposal and Information Giving, (2) the two major categories of architectural decisions are Implementation Decision and Technology Decision, (3) Quantum Programming Framework is the most common application domain among the sixteen application domains identified, (4) Maintainability is the most frequently considered quality attribute, and (5) Design Issue and Performance Issue are the major limitations and challenges that practitioners face when making architecture decisions in quantum software development. Our results show that the limitations and challenges encountered in architecture decision-making during the development of quantum software systems are strongly linked to the particular features (e.g., quantum entanglement, superposition, and decoherence) of those systems.
Chain-of-thought reasoning, a cognitive process fundamental to human intelligence, has garnered significant attention in the realm of artificial intelligence and natural language processing. However, a comprehensive survey of this area is still lacking. To this end, we take the first step and present a thorough and broad survey of this research field. We use X-of-Thought to refer to Chain-of-Thought in a broad sense. In detail, we systematically organize the current research according to a taxonomy of methods, including XoT construction, XoT structure variants, and enhanced XoT. Additionally, we describe frontier applications of XoT, covering planning, tool use, and distillation. Furthermore, we address challenges and discuss some future directions, including faithfulness, multi-modality, and theory. We hope this survey serves as a valuable resource for researchers seeking to innovate within the domain of chain-of-thought reasoning.
In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, accurately estimating the MI in DNNs is intractable, so most previous works have had to relax the MI bound, which in turn weakens the information-theoretic explanation of generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation of generalization and derive a tighter generalization bound than the state-of-the-art relaxations.
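As a point of reference for what "estimating the MI" involves, the sketch below computes a plug-in (binning-based) estimate of I(T; Y) between a discretized hidden representation T and the labels Y on synthetic data. This is the standard binning estimator used in information-plane analyses, shown only to make the quantity concrete; it is not the probabilistic representation or estimator proposed in the paper.

import numpy as np

def discrete_mi(t_codes, y):
    """Plug-in estimate of I(T; Y) in nats from paired discrete samples."""
    t_vals, t_idx = np.unique(t_codes, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(t_vals), len(y_vals)))
    np.add.at(joint, (t_idx, y_idx), 1.0)
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pt @ py)[nz])).sum())

def bin_activations(h, n_bins=10):
    """Uniformly quantize activations and hash each row to one discrete code."""
    edges = np.linspace(h.min(), h.max(), n_bins + 1)
    digitized = np.digitize(h, edges[1:-1])
    return np.array([hash(row.tobytes()) for row in digitized])

# Synthetic "hidden layer": 2 units whose activations shift with the label,
# so T carries information about Y. Note that the plug-in estimator is biased
# when bins are sparsely populated: one reason accurate MI estimation in DNNs
# is hard in the first place.
rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=5000)
h = rng.normal(size=(5000, 2)) + 0.5 * y[:, None]
print(f"estimated I(T; Y) = {discrete_mi(bin_activations(h), y):.3f} nats "
      f"(upper-bounded by H(Y) = {np.log(10):.3f} nats)")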
Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical findings for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
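The following toy sketch illustrates the linearized-GNN setting (not the proofs): a two-layer GCN with no activations is trained by full-batch gradient descent on a small synthetic graph whose targets are, by construction, representable by the model, so the loss can approach the global minimum. The graph, features, targets, and hyperparameters are invented for illustration.

import numpy as np

# Build a small random graph with self-loops and a row-normalized propagation matrix.
rng = np.random.default_rng(0)
n, d, h = 50, 8, 16
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
Ahat = A / A.sum(axis=1, keepdims=True)

# Node features and a target that a linear two-layer GCN can represent exactly
# (up to a small noise term), so the global minimum is near zero training loss.
X = rng.normal(size=(n, d))
w_true = rng.normal(size=(d, 1))
y = Ahat @ Ahat @ X @ w_true + 0.01 * rng.normal(size=(n, 1))

# Linearized two-layer GCN: pred = Ahat @ (Ahat @ X @ W1) @ W2 (no activations).
P = Ahat @ Ahat @ X
W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, 1))
lr, steps = 0.05, 2000

for step in range(steps + 1):
    pred = P @ W1 @ W2
    err = pred - y
    if step % 400 == 0:
        print(f"step {step:4d}  training loss {float((err ** 2).mean()):.6f}")
    g_pred = 2.0 * err / n  # gradient of the mean squared error w.r.t. predictions
    W1, W2 = W1 - lr * (P.T @ g_pred @ W2.T), W2 - lr * ((P @ W1).T @ g_pred)

Despite the non-convexity of the two-layer product parameterization, the printed loss trajectory on this toy instance decreases steadily toward the noise floor, which is the qualitative behavior the linear-rate convergence result describes.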