Quality control is an ongoing concern in citizen science and is often managed through replication to consensus in online tasks such as image classification. Numerous factors can lead to disagreement, including image quality problems, interface specifics, and the complexity of the content itself. We conducted trace ethnography, combining statistical and qualitative analyses, across six Snapshot Safari projects to understand the content characteristics that lead to uncertainty and low consensus. This study contributes a content categorization based on aggregate classifications that characterizes image complexity, analysis confirming that these categories affect classification efficiency, and an inductively generated set of additional image quality issues that also impair volunteers' ability to confidently classify content. The results suggest that different types of content may require different conceptualizations and measures of consensus, and that aggregate responses offer a way to identify content that needs different handling when complexity cannot be determined \textit{a priori}.
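The study does not publish its analysis code, but the core idea of using aggregate classifications to flag complex or low-consensus content can be illustrated with a minimal sketch. Everything below (the function name, the example labels, and the 0.8 plurality threshold) is hypothetical rather than taken from the paper:

import math
from collections import Counter

def consensus_stats(labels):
    """Summarize agreement among volunteer labels for one image.

    labels: list of labels submitted by volunteers,
            e.g. ["wildebeest", "wildebeest", "hartebeest"].
    Returns (plurality_fraction, normalized_entropy).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    plurality = max(probs)
    # Shannon entropy, normalized so 0 = unanimous, 1 = maximally split.
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    return plurality, entropy / max_entropy

# Example: an image whose aggregate responses suggest it is "hard".
plurality, ent = consensus_stats(["wildebeest"] * 6 + ["hartebeest"] * 4)
if plurality < 0.8:  # illustrative threshold, not from the paper
    print(f"low consensus: plurality={plurality:.2f}, entropy={ent:.2f}")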
We consider the problem of achieving fair classification in Federated Learning (FL) under data heterogeneity. Most approaches proposed for fair classification require diverse data representing the different demographic groups involved. In FL, however, it is common for each client to hold data from only a single demographic group, so the existing approaches cannot be adopted to train fair classification models at the client level. To resolve this challenge, we propose several aggregation techniques. We empirically validate these techniques by comparing the resulting fairness metrics and accuracy on the CelebA, UTK, and FairFace datasets.
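The abstract does not spell out the aggregation techniques; the sketch below shows one plausible instance of the general idea, reweighting client contributions so that each demographic group carries equal weight in the global model. The rule and all names here are our assumptions, not the paper's method:

import numpy as np

def group_balanced_aggregate(client_weights, client_groups, client_sizes):
    """Aggregate client models so each demographic group contributes
    equally, regardless of how many clients or samples it has.

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_groups:  group id per client (each client holds data from a
                    single demographic group, as in the paper's setting)
    client_sizes:   list of local dataset sizes
    """
    per_group = []
    for g in set(client_groups):
        idx = [i for i, cg in enumerate(client_groups) if cg == g]
        sizes = np.array([client_sizes[i] for i in idx], dtype=float)
        # FedAvg within the group, weighted by local dataset size...
        group_avg = sum(
            (s / sizes.sum()) * client_weights[i] for i, s in zip(idx, sizes)
        )
        per_group.append(group_avg)
    # ...then a plain mean across groups, so minority groups are not drowned out.
    return sum(per_group) / len(per_group)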
Learning from continuous data streams via classification/regression is prevalent in many domains. Adapting to evolving data characteristics (concept drift) while protecting data owners' private information is an open challenge. We present a differentially private ensemble solution to this problem with two distinguishing features: it allows an \textit{unbounded} number of ensemble updates to deal with potentially never-ending data streams under a fixed privacy budget, and it is \textit{model agnostic}, in that it treats any pre-trained differentially private classification/regression model as a black box. Our method outperforms competitors on real-world and simulated datasets across varying settings of privacy, concept drift, and data distribution.
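One way to reconcile unbounded updates with a fixed privacy budget is to train each ensemble member on a disjoint window of the stream, so that parallel composition caps the total privacy cost at the per-model epsilon. The sketch below illustrates that reading; it is our illustrative reconstruction, not the paper's actual algorithm:

from collections import deque

class DPStreamEnsemble:
    """Sketch of a model-agnostic DP ensemble for data streams.

    Each member is a black-box differentially private model trained on a
    disjoint window of the stream; since no record is used by more than
    one member, parallel composition keeps the overall budget at epsilon
    no matter how many updates occur. Names and details are illustrative.
    """

    def __init__(self, train_dp_model, epsilon, max_members=5):
        self.train_dp_model = train_dp_model  # any (X, y, eps) -> model
        self.epsilon = epsilon
        self.members = deque(maxlen=max_members)  # oldest member evicted first

    def update(self, window_X, window_y):
        # Train on the newest disjoint window; unbounded updates are fine.
        model = self.train_dp_model(window_X, window_y, self.epsilon)
        self.members.append(model)

    def predict(self, X):
        # Majority vote over members (a regression variant would average).
        votes = [m.predict(X) for m in self.members]
        return [max(set(col), key=col.count) for col in zip(*votes)]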
Over the years, web content has evolved from simple text and static images hosted on a single server to complex, interactive, multimedia-rich content hosted across many servers. As a result, a modern website, while loading, fetches content not only from its owner's domain but also from a range of third-party domains providing additional functionalities and services. Here we infer the network of third-party domains by observing the domains' interactions within users' browsers from all over the globe. We find that this network possesses structural properties commonly found in other complex networks in nature and society, such as a power-law degree distribution, strong clustering, and the small-world property. These properties imply that a hyperbolic geometry underlies the ecosystem's topology, and we use statistical inference methods to find the domains' coordinates in this geometry, which abstract how popular and how similar the domains are. The resulting hyperbolic map is meaningful, revealing previously unreported collaborations between controversial services and social networks. Furthermore, the map can facilitate applications such as predicting which third-party domains are co-hosted on the same physical machine or will merge through company acquisition; such predictions cannot be made by merely observing the domains' interactions within users' browsers.
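The abstract leaves the underlying model implicit; in the hyperbolic network-geometry literature this is typically the popularity-similarity model, in which each domain $i$ receives polar coordinates $(r_i, \theta_i)$, with the radius encoding popularity and the angle encoding similarity, and two domains connect with a probability that decreases with their hyperbolic distance. As a hedged reconstruction of that standard formulation (the paper's exact variant may differ):

\[
x_{ij} = \operatorname{arccosh}\!\left(\cosh r_i \cosh r_j - \sinh r_i \sinh r_j \cos \Delta\theta_{ij}\right) \approx r_i + r_j + 2\ln\frac{\Delta\theta_{ij}}{2},
\qquad
p(x_{ij}) = \frac{1}{1 + e^{(x_{ij} - R)/(2T)}},
\]

where $R$ sets the scale of the hyperbolic disk and the temperature $T$ controls clustering. Inferring the coordinates then amounts to maximizing the likelihood of the observed domain-domain connections under $p(x_{ij})$.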
Corruption has a huge impact on economic growth, democracy, and inequality; its consequences at the human level are incalculable. Public procurement, where public resources are used to purchase goods or services from the private sector, is particularly susceptible to corrupt practices. A government turnover, however, may bring significant changes in the way public contracting is done, and thus in the levels and types of corruption involved in public procurement. In this respect, M\'exico underwent a historic government transition in 2018, with the new government promising a crackdown on corruption. In this work, we analyze data from more than 1.5 million contracts awarded between 2013 and 2020 to study the extent to which this change of government affected the characteristics of public contracting, and we try to determine whether these changes affect how corruption takes place. To do so, we propose a statistical framework to compare the contracting practices within each administration, separating the contracts into different classes depending on whether or not they were made with companies that have since been identified as involved in corrupt practices. We find that while the amount of resources spent on companies that turned out to be corrupt has decreased substantially, many of the patterns followed when contracting these companies were maintained, and some of the changes that did occur suggest a larger risk of corruption.
Modern cars are evolving in many ways. Technologies such as infotainment systems and companion mobile applications collect a variety of personal data from drivers to enhance the user experience. This paper investigates the extent to which car drivers understand the implications for their privacy, including that car manufacturers must treat such data in compliance with the relevant regulations. It does so by distilling drivers' privacy concerns and relating them to their perceptions of trust in car cyber-security. A questionnaire was designed for these purposes and answered by 1101 participants, a sample large enough for statistically meaningful results. In short, privacy concerns are modest, perhaps because general awareness of the personal data involved, both for in-vehicle processing and for transmission over the Internet, is still insufficient. Trust perceptions of cyber-security are modest too (lower than those of car safety), a surprising contradiction of our research hypothesis that privacy concerns and trust perceptions of car cyber-security are opposed. We interpret this as a clear demand for information and awareness-building campaigns for car drivers, as well as for technical cyber-security and privacy measures that are truly considerate of the human factor.
What happens when a machine learning dataset is deprecated for legal, ethical, or technical reasons, but continues to be widely used? In this paper, we examine the public afterlives of several prominent deprecated or redacted datasets, including ImageNet, 80 Million Tiny Images, MS-Celeb-1M, Duke MTMC, Brainwash, and HRT Transgender, in order to inform a framework for more consistent, ethical, and accountable dataset deprecation. Building on prior research, we find that there is a lack of consistency, transparency, and centralized sourcing of information on the deprecation of datasets, and as a result, these datasets and their derivatives continue to be cited in papers and circulate online. These datasets that never die -- which we term "zombie datasets" -- continue to inform the design of production-level systems, causing technical, legal, and ethical challenges; in so doing, they risk perpetuating the harms that prompted their supposed withdrawal, including concerns around bias, discrimination, and privacy. Based on this analysis, we propose a Dataset Deprecation Framework that includes considerations of risk, mitigation of impact, appeal mechanisms, timeline, post-deprecation protocol, and publication checks that can be adapted and implemented by the machine learning community. Drawing on work on datasheets and checklists, we further offer two sample dataset deprecation sheets and propose a centralized repository that tracks which datasets have been deprecated and that could be incorporated into the publication protocols of venues like NeurIPS.
Deep neural networks have contributed significantly to advances in predictive accuracy for classification tasks. However, they tend to make over-confident predictions in real-world settings, where domain shift and out-of-distribution (OOD) examples exist. Most research on uncertainty estimation focuses on computer vision, where uncertainty quality can be validated visually; few methods have been presented for the natural language processing domain. Unlike Bayesian methods that infer uncertainty indirectly through weight uncertainties, current evidential methods explicitly model the uncertainty of class probabilities through subjective opinions. They further distinguish inherent uncertainty in data by its root cause: vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence). In this paper, we are the first to apply evidential uncertainty to OOD detection for text classification tasks. We propose an inexpensive framework that adopts both auxiliary outliers and pseudo off-manifold samples to train the model with prior knowledge of a certain class, so that it yields high vacuity for OOD samples. Extensive experiments demonstrate that our evidential-uncertainty-based model outperforms its counterparts at detecting OOD examples. Our approach can be easily deployed to traditional recurrent neural networks and fine-tuned pre-trained transformers.
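Vacuity and dissonance have closed forms in subjective logic: with per-class evidence $e_k$ and Dirichlet strength $S = \sum_k (e_k + 1)$, belief masses are $b_k = e_k / S$ and vacuity is $K / S$. The sketch below computes both measures from an evidential classifier's outputs; the formulas follow Josang's subjective logic, while the function and variable names are ours:

import numpy as np

def vacuity_dissonance(evidence):
    """Uncertainty measures from per-class evidence (non-negative
    outputs of an evidential classifier).

    vacuity    -> high when total evidence is low (e.g. OOD inputs)
    dissonance -> high when evidence conflicts across classes
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K          # Dirichlet strength, alpha_k = e_k + 1
    belief = evidence / S
    vacuity = K / S

    def balance(bj, bk):
        return 1.0 - abs(bj - bk) / (bj + bk) if bj + bk > 0 else 0.0

    dissonance = 0.0
    for k in range(K):
        others = [j for j in range(K) if j != k]
        denom = sum(belief[j] for j in others)
        if denom > 0:
            dissonance += belief[k] * sum(
                belief[j] * balance(belief[j], belief[k]) for j in others
            ) / denom
    return vacuity, dissonance

print(vacuity_dissonance([0.1, 0.1, 0.1]))    # little evidence: high vacuity
print(vacuity_dissonance([50.0, 50.0, 0.0]))  # conflicting evidence: high dissonance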
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
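As a deliberately loose schematic of that definition (the paper's actual formalization is far more careful, conditioning on a scope of tasks and using Algorithmic Information Theory to quantify each term), skill-acquisition efficiency can be caricatured as

\[
\text{Intelligence} \;\propto\; \operatorname{avg}_{T \in \text{scope}} \frac{\text{Skill}(T) \times \text{GeneralizationDifficulty}(T)}{\text{Priors} + \text{Experience}},
\]

so that skill obtained by buying large priors or large amounts of training experience contributes little to the measure.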
Embedding models for deterministic Knowledge Graphs (KGs) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating structured knowledge into machine learning. However, many KGs model uncertain knowledge, typically attaching a confidence score to each relation fact, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications, such as question answering and semantic search, by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both the structural and the uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks: confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results and consistently outperforming baselines on all three tasks.
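To make the learning objective concrete, the sketch below shows a UKGE-style confidence prediction as we understand it: a DistMult-style plausibility score mapped to $[0, 1]$ and regressed against the gold confidence. The logistic mapping and the placeholder values of w and b are our assumptions; the paper's variants may differ in detail:

import numpy as np

def predicted_confidence(h, r, t, w=1.0, b=0.0):
    """Predict the confidence of a relation fact (h, r, t).

    h, r, t: embedding vectors of head entity, relation, tail entity.
    The plausibility score is an element-wise (DistMult-style) product,
    mapped to [0, 1]; w and b would be learned scalars in practice.
    """
    plausibility = np.sum(r * h * t)                       # g(h, r, t)
    return 1.0 / (1.0 + np.exp(-(w * plausibility + b)))   # logistic mapping

def confidence_loss(h, r, t, gold_confidence):
    """Squared error against the fact's observed confidence score;
    PSL-inferred scores could stand in for unseen facts."""
    return (predicted_confidence(h, r, t) - gold_confidence) ** 2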
We introduce an algorithmic method for population anomaly detection based on Gaussianization through an adversarial autoencoder. The method detects `soft' anomalies in arbitrarily distributed, high-dimensional data. A soft, or population, anomaly is characterized by a shift in the distribution of the data set, where certain elements appear with higher probability than anticipated; such anomalies must be detected by considering a sufficiently large sample set rather than a single sample. Applications include, but are not limited to, payment fraud trends, data exfiltration, disease clusters and epidemics, and social unrest. We evaluate the method on several domains and obtain both quantitative results and qualitative insights.
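The Gaussianization step makes the downstream test simple: once the adversarial autoencoder maps normal data to latent codes that are approximately standard normal, a distribution shift in a batch shows up as a deviation from N(0, I). The sketch below uses a basic mean-shift statistic for illustration; the encoder, the function names, and this particular test are our assumptions, not the paper's exact statistic:

import numpy as np

def population_anomaly_score(encode, samples):
    """Score a *set* of samples for a population (soft) anomaly.

    encode: a trained adversarial-autoencoder encoder whose latent codes
            are approximately N(0, I) on normal data (the Gaussianization
            step described in the abstract).
    """
    z = np.array([encode(x) for x in samples])   # (n, d) latent codes
    n, d = z.shape
    # Under H0 (no anomaly) the batch mean is ~ N(0, I/n), so
    # n * ||mean||^2 follows a chi-squared law with d degrees of freedom.
    stat = n * np.sum(z.mean(axis=0) ** 2)
    # Large values indicate a distribution shift across the whole batch.
    return stat, d  # compare stat against a chi2(d) critical value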