We consider the problem of how a platform designer, owner, or operator can improve the design and operation of a digital platform by leveraging a computational cognitive model that represents users' folk theories about a platform as a sociotechnical system. We do so in the context of Reddit, a social media platform whose owners and administrators make extensive use of shadowbanning, a non-transparent content moderation mechanism that filters a user's posts and comments so that they cannot be seen by fellow community members or the public. After demonstrating that the design and operation of Reddit have led to an abundance of spurious suspicions of shadowbanning in cases where the mechanism was not in fact invoked, we develop a computational cognitive model of users' folk theories about the antecedents and consequences of shadowbanning that predicts when users will attribute their on-platform observations to a shadowban. The model is then used to evaluate the capacity of interventions available to platform designers, owners, and operators to reduce the incidence of these false suspicions. We conclude by considering the implications of this approach for the design and operation of digital platforms at large.
Digital twins hold substantial promise in many applications, but rigorous procedures for assessing their accuracy are essential for their widespread deployment in safety-critical settings. By formulating this task within the framework of causal inference, we show that attempts to certify the correctness of a twin using real-world observational data are unsound unless potentially tenuous assumptions are made about the data-generating process. To avoid these assumptions, we propose an assessment strategy that instead aims to find cases where the twin is not correct, and present a general-purpose statistical procedure for doing so that may be used across a wide variety of applications and twin models. Our approach yields reliable and actionable information about the twin under minimal assumptions about the twin and the real-world process of interest. We demonstrate the effectiveness of our methodology via a large-scale case study involving sepsis modelling within the Pulse Physiology Engine, which we assess using the MIMIC-III dataset of ICU patients.
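To make the falsification viewpoint concrete, the following minimal sketch contrasts real outcomes with twin-simulated ones per patient group and flags groups where they demonstrably differ. Here `twin_simulate` is a hypothetical placeholder, and the test shown (a Bonferroni-corrected two-sample Kolmogorov-Smirnov comparison) is an illustrative stand-in, not the paper's actual procedure.

```python
# Illustrative falsification check for a digital twin (not the paper's procedure).
# `twin_simulate` is a hypothetical callable returning outcome samples from the twin.
import numpy as np
from scipy import stats

def falsify_twin(real_outcomes_by_group, twin_simulate, alpha=0.05):
    """For each observed group, compare real outcomes against outcomes
    simulated by the twin; flag groups where the twin looks wrong."""
    groups = list(real_outcomes_by_group)
    flagged = []
    for g in groups:
        real = np.asarray(real_outcomes_by_group[g])
        sim = np.asarray(twin_simulate(g, n=len(real)))  # twin's outcome samples
        res = stats.ks_2samp(real, sim)
        # Bonferroni correction across groups; rejection = evidence of incorrectness
        if res.pvalue < alpha / len(groups):
            flagged.append((g, res.pvalue))
    return flagged
```

Note that this style of check can only surface cases where the twin is wrong; a lack of rejections does not certify correctness, which is the asymmetry the abstract emphasizes.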
The Plackett--Luce model is a popular approach for ranking data analysis, where a utility vector is employed to determine the probability of each outcome based on Luce's choice axiom. In this paper, we investigate the asymptotic theory of utility vector estimation by maximizing different types of likelihood, such as the full-, marginal-, and quasi-likelihood. We provide a rank-matching interpretation for the estimating equations of these estimators and analyze their asymptotic behavior as the number of items being compared tends to infinity. In particular, we establish the uniform consistency of these estimators under conditions characterized by the topology of the underlying comparison graph sequence and demonstrate that the proposed conditions are sharp for common sampling scenarios such as the nonuniform random hypergraph model and the hypergraph stochastic block model; we also obtain the asymptotic normality of these estimators and discuss the trade-off between statistical efficiency and computational complexity for practical uncertainty quantification. Both results allow for nonuniform and inhomogeneous comparison graphs with varying edge sizes and different asymptotic orders of edge probabilities. We verify our theoretical findings by conducting detailed numerical experiments.
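For concreteness, below is a minimal sketch of full-likelihood estimation in the Plackett--Luce model: it maximizes the standard full-ranking log-likelihood with a generic optimizer, and is illustrative only rather than the estimators analyzed in the paper.

```python
# Minimal Plackett--Luce full-likelihood fit; rankings are tuples of item
# indices from most to least preferred (edges of a comparison hypergraph).
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_likelihood(u, rankings):
    nll = 0.0
    for r in rankings:
        w = u[list(r)]
        for k in range(len(r) - 1):
            # log P(item at position k is chosen first among the remaining items)
            nll -= w[k] - logsumexp(w[k:])
    return nll

def fit_plackett_luce(n_items, rankings):
    res = minimize(neg_log_likelihood, np.zeros(n_items),
                   args=(rankings,), method="BFGS")
    u = res.x
    return u - u.mean()  # utilities are identifiable only up to a common shift

# Toy example: 4 items compared via rankings over subsets of varying edge size
rankings = [(0, 1, 2), (2, 1, 3), (0, 3), (1, 0, 2, 3)]
print(fit_plackett_luce(4, rankings))
```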
Beyond-100G passive optical networks (PONs) will be required to meet ever-increasing traffic demand in the future. Coherent optical technologies are a competitive solution for future beyond-100G PONs, but they face challenges such as the high computational complexity of digital signal processing (DSP), which stems largely from the high oversampling rate of coherent receivers. Therefore, DSP running at a non-integer oversampling rate below 2 samples per symbol (sps) is preferred, as it not only reduces computational complexity but also significantly relaxes the requirements on the analog-to-digital converter. In this paper, we propose a non-integer-oversampling DSP that meets the requirements of coherent PONs. The proposed DSP working at 9/8-sps and 5/4-sps oversampling rates reduces computational complexity by 44.04% and 40.78%, respectively, compared with DSP working at the 2-sps oversampling rate. Moreover, a 400-Gb/s-net-rate coherent PON based on digital subcarrier multiplexing was demonstrated to verify the feasibility of the non-integer-oversampling DSP, which incurs almost no penalty in receiver sensitivity. In conclusion, non-integer-oversampling DSP shows great potential for future coherent PONs.
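As a toy illustration of the rate conversion involved (not the proposed receiver DSP), the sketch below uses standard polyphase resampling to move a 2-sps waveform to the 9/8-sps and 5/4-sps rates discussed above.

```python
# Hedged sketch: rational-rate conversion from 2 sps to 9/8 sps and 5/4 sps
# via polyphase filtering. Illustrative only; the toy waveform stands in for
# a properly pulse-shaped coherent signal.
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
symbols = rng.choice([-1, 1], size=4096) + 1j * rng.choice([-1, 1], size=4096)
x_2sps = np.repeat(symbols, 2)  # crude waveform at 2 samples per symbol

x_98sps = resample_poly(x_2sps, up=9, down=16)  # 2 * 9/16 = 9/8 sps
x_54sps = resample_poly(x_2sps, up=5, down=8)   # 2 * 5/8  = 5/4 sps
print(len(x_2sps), len(x_98sps), len(x_54sps))
```

The complexity saving in the paper comes from running all subsequent DSP blocks on these shorter sample streams rather than at 2 sps.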
The accelerated change in our planet due to human activities has led to grand societal challenges, including health crises, intensified extreme weather events, food insecurity, and environmental injustice. Digital twin systems, combined with emerging technologies such as artificial intelligence and edge computing, provide opportunities to support planning and decision-making to address these challenges. Digital twins for Earth systems (DT4ES) are defined as digital representations of the complex integrated Earth system, encompassing both natural processes and human activities. They have the potential to enable a diverse range of users to explore what-if scenarios across spatial and temporal scales to improve our understanding of, prediction of, mitigation of, and adaptation to grand societal challenges. The 4th NOAA AI Workshop convened around 100 members who are developing, or are interested in participating in the development of, DT4ES to discuss a shared community vision and a path forward for fostering a future ecosystem of interoperable DT4ES. This paper summarizes the workshop discussions around DT4ES. We first define the foundational features of a viable digital twin for Earth systems, which can be used to guide the development of various DT4ES use cases. We then make practical recommendations for the community on the aspects of collaboration needed to enable a future ecosystem of interoperable DT4ES, including equity-centered use case development, community-driven investigation of interoperability for DT4ES, trust-oriented co-development, and developing a community of practice.
Social networks have been widely studied over the last century across multiple disciplines to understand societal issues such as inequality in employment rates, managerial performance, and epidemic spread. Today, these and many more issues can be studied at a global scale thanks to the digital footprints we generate when browsing the Web or using social media platforms. Unfortunately, scientists often struggle to access such data, primarily because it is proprietary, and even when it is shared with privacy guarantees, it is often either not representative or too large. In this tutorial, we discuss recent advances and future directions in network modeling. In particular, we focus on how to exploit synthetic networks to study real-world problems such as data privacy, spreading dynamics, algorithmic bias, and ranking inequalities. We start by reviewing different types of generative models for social networks, including node-attributed and scale-free networks. Then, we showcase how to perform a model selection analysis to characterize the mechanisms of edge formation of any given real-world network.
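As a minimal illustration of this workflow (with `karate_club_graph` standing in for a real-world network), the sketch below generates two candidate synthetic networks and compares each to the observed one via a crude degree-distribution statistic; real model selection would use richer statistics or likelihoods.

```python
# Sketch: generate candidate synthetic networks and score each against an
# observed network by a simple degree-distribution distance.
import networkx as nx
import numpy as np

G_obs = nx.karate_club_graph()  # stand-in for a real-world network
n, m = G_obs.number_of_nodes(), G_obs.number_of_edges()

candidates = {
    "erdos_renyi": nx.gnm_random_graph(n, m, seed=1),
    "barabasi_albert": nx.barabasi_albert_graph(n, max(1, m // n), seed=1),  # scale-free
}

def degree_gap(G, H):
    d_g = np.sort([d for _, d in G.degree()])
    d_h = np.sort([d for _, d in H.degree()])
    return np.abs(d_g - d_h).mean()  # both graphs have n nodes

for name, H in candidates.items():
    print(name, round(degree_gap(G_obs, H), 2))
```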
This paper introduces Artificial Intelligence Clinics on Mobile (AICOM), an open-source project devoted to answering the United Nations Sustainable Development Goal 3 (SDG3) on health, which represents a universal recognition that health is fundamental to human capital and to social and economic development. The core motivation for the AICOM project is the fact that over 80% of the people in the least developed countries (LDCs) own a mobile phone, even though less than 40% of them have internet access. Hence, enabling AI-based disease diagnostics and screening on affordable mobile phones without connectivity is a critical first step toward addressing healthcare access problems. The technologies developed in the AICOM project achieve exactly this goal, and we have demonstrated the effectiveness of AICOM on monkeypox screening tasks. We plan to continue expanding and open-sourcing the AICOM platform, aiming for it to evolve into a universal AI doctor for the underserved and hard-to-reach.
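To illustrate the general pattern of offline, on-device inference that such a system relies on (not AICOM's actual code; the model file name below is hypothetical), consider this sketch using the TensorFlow Lite runtime, which runs a compact classifier entirely on the phone with no network access.

```python
# Illustrative on-device inference pattern with the TFLite runtime.
# "monkeypox_classifier.tflite" is a hypothetical model file.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="monkeypox_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # replace with a real photo
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()                                # runs fully offline
print(interpreter.get_tensor(out["index"]))         # class scores
```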
In recent years, online social networks have been the target of adversaries who seek to introduce discord into societies, undermine democracies, and destabilize communities. Often the goal is not to favor a certain side of a conflict but to increase disagreement and polarization. To gain a mathematical understanding of such attacks, researchers use opinion-formation models from sociology, such as the Friedkin--Johnsen model, and formally study how much discord the adversary can produce when altering the opinions of only a small set of users. In this line of work, it is commonly assumed that the adversary has full knowledge of the network topology and the opinions of all users. However, the latter assumption is often unrealistic in practice, where user opinions are unavailable or difficult to estimate accurately. To address this concern, we raise the following question: can an attacker sow discord in a social network even when only the network topology is known? We answer this question affirmatively. We present approximation algorithms for detecting a small set of users who are highly influential for the disagreement and polarization in the network. We show that when the adversary radicalizes these users, and if the initial disagreement/polarization in the network is not very high, then our method gives a constant-factor approximation relative to the setting in which the user opinions are known. To find the set of influential users, we provide a novel approximation algorithm for a variant of MaxCut in graphs with positive and negative edge weights. We experimentally evaluate our methods, which have access only to the network topology, and find that they perform similarly to methods that have access to both the topology and all user opinions. We further present an NP-hardness proof, resolving an open question posed by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
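For reference, the Friedkin--Johnsen equilibrium and the disagreement and polarization indices it induces can be computed directly: with innate opinions s and graph Laplacian L, the expressed opinions at equilibrium are z = (I + L)^{-1} s. The sketch below computes these standard model quantities (it is not our approximation algorithm); radicalizing a user set corresponds to pushing the corresponding entries of s to extreme values.

```python
# Standard Friedkin--Johnsen quantities on a random graph (illustrative only).
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
s = np.random.default_rng(0).uniform(-1, 1, G.number_of_nodes())  # innate opinions

z = np.linalg.solve(np.eye(len(s)) + L, s)  # equilibrium expressed opinions
disagreement = z @ L @ z                    # sum over edges of (z_u - z_v)^2
polarization = ((z - z.mean()) ** 2).sum()  # variance-style polarization index
print(disagreement, polarization)
```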
The social metaverse is a shared digital space combining a series of interconnected virtual worlds for users to play, shop, work, and socialize. In parallel with advances in artificial intelligence (AI) and growing awareness of data privacy concerns, federated learning (FL) is promoted as a paradigm shift towards a privacy-preserving, AI-empowered social metaverse. However, challenges including the privacy-utility tradeoff, learning reliability, and AI model theft hinder the deployment of FL in real metaverse applications. In this paper, we exploit the pervasive social ties among users/avatars to advance a social-aware hierarchical FL framework, SocialFL, which offers a better privacy-utility tradeoff in the social metaverse. Then, an aggregator-free robust FL mechanism based on blockchain is devised, with a new block structure and an improved consensus protocol featuring on/off-chain collaboration. Furthermore, based on smart contracts and digital watermarks, an automatic federated AI (FedAI) model ownership provenance mechanism is designed to prevent AI model theft and collusive avatars in the social metaverse. Experimental findings validate the feasibility and effectiveness of the proposed framework. Finally, we envision promising future research directions in this emerging area.
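As a minimal sketch of the general cluster-then-global aggregation pattern behind social-aware hierarchical FL (not the SocialFL protocol itself, which additionally involves blockchain consensus and watermarking), consider:

```python
# Hedged sketch of hierarchical federated averaging: aggregate within each
# social cluster of avatars first, then across clusters.
import numpy as np

def fedavg(updates, weights):
    w = np.asarray(weights, dtype=float)
    return sum(u * wi for u, wi in zip(updates, w / w.sum()))

def hierarchical_round(clusters):
    """clusters: list of social groups, each a list of (model_update, num_samples)."""
    cluster_models, cluster_sizes = [], []
    for group in clusters:
        updates, sizes = zip(*group)
        cluster_models.append(fedavg(updates, sizes))  # within-cluster aggregation
        cluster_sizes.append(sum(sizes))
    return fedavg(cluster_models, cluster_sizes)       # cross-cluster aggregation

# Toy example: two social clusters with scalar "model" updates
print(hierarchical_round([[(np.array(1.0), 10), (np.array(3.0), 30)],
                          [(np.array(5.0), 20)]]))
```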
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia is committed to Article 36 reviews of all new means and methods of warfare to ensure that weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI and has adopted a set of national AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the technical report 'A Method for Ethical AI in Defence' (MEAID), which includes a framework and pragmatic tools for managing ethical and legal risks in military applications of AI.
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. Heterogeneity can exert a huge influence on the FL training process, e.g., rendering a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two likely causes of the degradation. Our study provides insightful implications for FL practitioners: our findings suggest that FL algorithm designers account for heterogeneity during evaluation, and they urge system providers to design specific mechanisms to mitigate its impacts.
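As a toy illustration of how correlated device failure can bias the aggregate model (illustrative only, not our platform or data), the simulation below gives failure-prone devices systematically different local data, so that dropouts skew the federated average away from its heterogeneity-free value.

```python
# Toy simulation of participant bias from device failure in federated averaging.
import numpy as np

rng = np.random.default_rng(0)
n_devices = 100
failure_prob = np.linspace(0.0, 0.6, n_devices)           # low-end devices fail more
local_means = failure_prob + rng.normal(0, 0.1, n_devices)  # their data also differs

def one_round(heterogeneity_aware):
    selected = rng.choice(n_devices, size=20, replace=False)
    if heterogeneity_aware:
        completed = selected  # idealized setting: every selected device finishes
    else:
        completed = selected[rng.random(len(selected)) > failure_prob[selected]]
    return local_means[completed].mean() if len(completed) else np.nan

ideal = np.nanmean([one_round(True) for _ in range(2000)])
real = np.nanmean([one_round(False) for _ in range(2000)])
print(f"ideal aggregate {ideal:+.3f}, with failures {real:+.3f}")  # visible bias gap
```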