There exists a growing discourse around the domination of Big Tech on the landscape of artificial intelligence (AI) research, yet our comprehension of this phenomenon remains cursory. This paper aims to broaden and deepen our understanding of Big Tech's reach and power within AI research. It examines this dominance not merely in terms of sheer publication volume but in the propagation of new ideas, or \textit{memes}. Current studies often oversimplify the concept of influence to the share of affiliations in academic papers, typically sourced from limited databases such as arXiv or specific academic conferences. The main goal of this paper is to unravel the specific nuances of such influence, determining which AI ideas are predominantly driven by Big Tech entities. By employing network and memetic analysis on AI-oriented paper abstracts and their citation network, we gain deeper insight into this phenomenon. By drawing on two databases, OpenAlex and S2ORC, we perform this analysis at a much larger scale than previous attempts. Our findings suggest that while Big Tech-affiliated papers are disproportionately more cited in some areas, the most cited papers are those affiliated with both Big Tech and Academia. Focusing on the most contagious memes, we find their attribution to specific affiliation groups (Big Tech, Academia, mixed affiliation) to be roughly evenly distributed among the three groups. This suggests that the notion of Big Tech domination over AI research is oversimplified in the discourse. Ultimately, this more nuanced understanding of Big Tech's and Academia's influence could inform a more symbiotic alliance between these stakeholders that would better serve the dual goals of societal welfare and the scientific integrity of AI research.
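As a concrete illustration of the memetic part of such an analysis, the sketch below computes a simplified propagation statistic for a candidate meme on a citation graph: how much more often a paper carries the meme when it cites at least one paper that already carries it. The function name, the NetworkX representation, and the exact ratio are illustrative assumptions; the paper's actual memetic score may be defined differently.
\begin{verbatim}
import networkx as nx

def propagation_ratio(citations: nx.DiGraph, has_meme: dict, eps: float = 1e-9) -> float:
    """Edge u -> v means paper u cites paper v; has_meme maps paper -> bool."""
    sticks = inherits = spontaneous = no_source = 0
    for paper in citations.nodes:
        cites_a_carrier = any(has_meme.get(v, False) for v in citations.successors(paper))
        if cites_a_carrier:
            inherits += 1
            sticks += int(has_meme.get(paper, False))
        else:
            no_source += 1
            spontaneous += int(has_meme.get(paper, False))
    # Ratio > 1 suggests the meme tends to propagate along citation links.
    sticking = sticks / (inherits + eps)
    sporadic = spontaneous / (no_source + eps)
    return sticking / (sporadic + eps)
\end{verbatim}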
In this paper we formally define the hierarchical clustering network problem (HCNP) as the problem of finding a good hierarchical partition of a network. This new problem focuses on the dynamic process of the clustering rather than on the final picture of the clustering process. To address it, we introduce a new hierarchical clustering algorithm for networks, based on a new shortest path betweenness measure. To calculate it, the communication between each pair of nodes is weighted by the importance of the nodes that establish this communication. The weights, or importance, associated with each pair of nodes are calculated as the Shapley value of a game, which we call the linear modularity game. This new measure (the node-game shortest path betweenness measure) is used to obtain a hierarchical partition of the network by eliminating the link with the highest value. To evaluate the performance of our algorithm, we introduce several criteria that allow us to compare different dendrograms of a network from two points of view: modularity and homogeneity. Finally, we propose a faster algorithm based on a simplification of the node-game shortest path betweenness measure, whose complexity is quadratic on sparse networks. This fast version is computationally competitive with other fast hierarchical algorithms and, in general, it provides better results.
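A minimal sketch of the divisive loop this kind of algorithm builds on, using NetworkX's standard edge betweenness as a stand-in; the node-game shortest path betweenness with Shapley-value weights from the linear modularity game is the paper's own measure and is not reproduced here.
\begin{verbatim}
import networkx as nx

def divisive_dendrogram(G):
    """Repeatedly remove the highest-betweenness link and record each new partition level."""
    H = G.copy()
    levels = [list(nx.connected_components(H))]
    while H.number_of_edges() > 0:
        betweenness = nx.edge_betweenness_centrality(H)
        H.remove_edge(*max(betweenness, key=betweenness.get))
        components = list(nx.connected_components(H))
        if len(components) > len(levels[-1]):
            levels.append(components)
    return levels  # one entry per level of the dendrogram

if __name__ == "__main__":
    for level in divisive_dendrogram(nx.karate_club_graph())[:3]:
        print([sorted(c) for c in level])
\end{verbatim}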
We study variants of the average treatment effect on the treated with population parameters replaced by their sample counterparts. For each estimand, we derive the limiting distribution with respect to a semiparametrically efficient estimator of the population effect and provide guidance on variance estimation. Included in our analysis is the well-known sample average treatment effect on the treated, for which we obtain some unexpected results. Unlike the ordinary sample average treatment effect, we find that the asymptotic variance for the sample average treatment effect on the treated is point-identified and consistently estimable, but it potentially exceeds that of the population estimand. To address this shortcoming, we propose a modification that yields a new estimand, the mixed average treatment effect on the treated, which is always estimated more precisely than both the population and sample effects. We also introduce a second new estimand that arises from an alternative interpretation of the treatment effect on the treated, in which all individuals are weighted by the propensity score.
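For reference, the standard population and sample versions of the effect on the treated, together with a propensity-weighted representation of the kind alluded to in the last sentence, can be written as follows (notation ours; the paper's mixed estimand is its own contribution and is not reproduced here):
\begin{align*}
  \tau_{\mathrm{ATT}}  &= \mathbb{E}\bigl[Y(1) - Y(0) \mid D = 1\bigr], \\
  \tau_{\mathrm{SATT}} &= \frac{1}{n_1} \sum_{i \colon D_i = 1} \bigl(Y_i(1) - Y_i(0)\bigr),
    \qquad n_1 = \sum_{i=1}^{n} D_i, \\
  \tau_{e}             &= \frac{\mathbb{E}\bigl[e(X)\,\{Y(1) - Y(0)\}\bigr]}{\mathbb{E}\bigl[e(X)\bigr]},
    \qquad e(X) = \Pr(D = 1 \mid X),
\end{align*}
where the last expression weights all individuals by the propensity score and coincides with $\tau_{\mathrm{ATT}}$ under unconfoundedness.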
This paper presents an experimental study on radio frequency (RF) fingerprinting of Bluetooth Classic devices. Our research aims to provide a practical evaluation of the possibilities for RF fingerprinting of everyday Bluetooth-connected devices that may cause privacy risks. We have built an experimental setup for recording Bluetooth connections in an RF-isolated environment using commercially available software-defined radio (SDR) systems, extracted fingerprints of the Bluetooth radio data in the form of carrier frequency offset and scaling factor from 6 different devices, and performed k-nearest neighbors (kNN) classification, achieving 84\% accuracy. The experiment demonstrates that no matter what privacy measures are taken at the protocol layer, the physical layer leaks significant information about the device to unauthorized listeners. In the context of the ever-growing Bluetooth device market, this research serves as a clarion call for device manufacturers, regulators, and end-users to acknowledge the privacy risks posed by RF fingerprinting, and it lays a foundation for larger-scale Bluetooth fingerprinting research.
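A minimal sketch of the classification step, assuming the two per-capture features (carrier frequency offset and scaling factor) have already been extracted from the SDR recordings; the synthetic feature clusters below merely stand in for real extracted fingerprints.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_devices, samples_per_device = 6, 200
# Placeholder data: each device gets a characteristic (CFO, scaling-factor) cluster.
X = np.vstack([rng.normal(loc=(dev * 2.0, dev * 0.1), scale=(0.5, 0.05),
                          size=(samples_per_device, 2)) for dev in range(n_devices)])
y = np.repeat(np.arange(n_devices), samples_per_device)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print(f"kNN accuracy: {clf.score(X_test, y_test):.2%}")
\end{verbatim}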
We consider clustering in group decision making where the opinions are given by pairwise comparison matrices. In particular, the k-medoids model is suggested to classify the matrices as it admits a linear programming formulation. Its objective function depends on the measure of dissimilarity between the matrices but not on the weights derived from them. With one cluster, our methodology provides an alternative to conventional aggregation procedures. It can also be used to quantify the reliability of the aggregation. The proposed theoretical framework is applied to a large-scale experimental dataset, on which it is able to automatically detect some mistakes made by the decision-makers.
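The paper's exact linear programming formulation is not reproduced here; the sketch below illustrates the same clustering objective with a plain alternating k-medoids over a precomputed dissimilarity matrix, using a log-ratio Frobenius distance between comparison matrices as an illustrative (not necessarily the paper's) dissimilarity.
\begin{verbatim}
import numpy as np

def dissimilarity(A, B):
    """Frobenius distance between element-wise log-ratios of two pairwise comparison matrices."""
    return np.linalg.norm(np.log(A) - np.log(B))

def k_medoids(D, k, n_iter=100, seed=0):
    """Alternating k-medoids on a precomputed dissimilarity matrix D,
    where D[i, j] = dissimilarity(matrices[i], matrices[j])."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)    # assign each matrix to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:                          # keep the old medoid if a cluster empties
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
\end{verbatim}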
In the ever-expanding landscape of Artificial Intelligence (AI), where innovation thrives and new products and services are continuously being delivered, ensuring that AI systems are designed and developed responsibly throughout their entire lifecycle is crucial. To this end, several AI ethics principles and guidelines have been issued to which AI systems should conform. Nevertheless, relying solely on high-level AI ethics principles is far from sufficient to ensure the responsible engineering of AI systems. In this field, AI professionals often navigate by sight. Indeed, while recommendations promoting Trustworthy AI (TAI) exist, these are often high-level statements that are difficult to translate into concrete implementation strategies. There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals. To address this challenge, our work presents an experience report where we develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice - and report insights from its application in an industrial case study. The framework is built on the results of a systematic review of the state of the practice, a survey, and think-aloud interviews with 34 AI practitioners. The framework, unlike most of those already in the literature, is designed to provide actionable guidelines and tools to support different types of stakeholders throughout the entire Software Development Life Cycle (SDLC). Our goal is to empower AI professionals to confidently navigate the ethical dimensions of TAI through practical insights, ensuring that the vast potential of AI is exploited responsibly for the benefit of society as a whole.
We present MIPS, a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm. As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub. We discuss opportunities and challenges for scaling up this approach to make machine-learned models more interpretable and trustworthy.
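A hedged sketch of the RNN-to-finite-state-machine step: here the paper's integer autoencoder is replaced by plain k-means quantization of the hidden states, and the subsequent symbolic regression over the transition table is not shown.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def extract_fsm(hidden_states, inputs, n_states=8, seed=0):
    """Quantize hidden states and tabulate (state, input symbol) -> next state.

    hidden_states: array of shape (T + 1, d), the hidden state before each step and after the last.
    inputs:        array of shape (T,) with integer input symbols fed to the RNN.
    """
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit(hidden_states)
    states = km.labels_
    transitions = {}
    for t, symbol in enumerate(inputs):
        transitions.setdefault((int(states[t]), int(symbol)), []).append(int(states[t + 1]))
    # Keep the majority next state for each (state, symbol) pair.
    return {key: max(set(nexts), key=nexts.count) for key, nexts in transitions.items()}
\end{verbatim}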
Artificial intelligence (AI) has driven many information and communication technology (ICT) breakthroughs. Nonetheless, the scope of ICT systems has expanded far beyond AI since the Turing test proposal. Critically, recent AI regulation proposals adopt AI definitions affecting ICT techniques, approaches, and systems that are not AI. In some cases, even works from mathematics, statistics, and engineering would be affected. Worryingly, AI misdefinitions are observed from Western societies to the Global South. In this paper, we propose a framework to score how \textit{validated as appropriately-defined for regulation} (VADER) an AI definition is. Our online, publicly available VADER framework scores the coverage of premises that should underlie AI definitions for regulation, which aim to (i) reproduce principles observed in other successful technology regulations, and (ii) include all AI techniques and approaches while excluding non-AI works. Regarding the latter, our score is based on a dataset of representative AI, non-AI ICT, and non-ICT examples. We demonstrate our contribution by reviewing the AI regulation proposals of key players, namely the United States, United Kingdom, European Union, and Brazil. Importantly, none of the assessed proposals achieves an adequate appropriateness score, with shortcomings ranging from a need for revision to a concrete risk to ICT systems and works from other fields.
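As a toy illustration of premise-coverage scoring (the premises listed below are placeholders, not the framework's actual rubric), an equally weighted checklist score could look like this:
\begin{verbatim}
# Placeholder premises; the real VADER rubric is defined by the framework itself.
PREMISES = [
    "technology-neutral wording",
    "covers learning-based AI techniques",
    "covers symbolic and search-based AI",
    "excludes ordinary statistics and mathematics",
    "excludes conventional non-AI ICT systems",
]

def coverage_score(satisfied):
    """Fraction of premises a given AI definition satisfies (equal weights assumed)."""
    return sum(p in satisfied for p in PREMISES) / len(PREMISES)

print(coverage_score({"technology-neutral wording", "excludes ordinary statistics and mathematics"}))
\end{verbatim}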
In this paper, we address one of the most important topics in the field of Social Network Analysis: the community detection problem with additional information. That additional information is modeled by a fuzzy measure that represents the risk of polarization. In particular, we are interested in taking into account the polarization of nodes in the community detection problem. Adding this type of information makes the problem more realistic, as a community is more likely to be formed if its members are willing to maintain a peaceful dialogue. The polarization capacity is modeled by a fuzzy measure based on the JDJ$_{pol}$ measure of polarization related to two poles. We also present an efficient algorithm for finding groups whose elements are not polarized. We then apply the methodology to a real case: a network obtained from Twitter, concerning the political position against the Spanish government taken by several influential users. We analyze how the resulting partitions change when additional information about the polarization of that society is added to the problem.
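A hedged sketch of the general idea: the fuzzy polarization measure based on JDJ$_{pol}$ is replaced here by a simple per-node pole score in $[-1, 1]$, and links between strongly opposed nodes are down-weighted before running a standard modularity-based community detection.
\begin{verbatim}
import networkx as nx

def depolarized_communities(G, pole, alpha=1.0):
    """pole: dict node -> score in [-1, 1]; alpha controls the penalty for opposed links."""
    H = G.copy()
    for u, v, data in H.edges(data=True):
        opposition = abs(pole[u] - pole[v]) / 2.0            # 0 = same side, 1 = opposite poles
        data["weight"] = max(1e-6, data.get("weight", 1.0) * (1.0 - alpha * opposition))
    return nx.community.greedy_modularity_communities(H, weight="weight")
\end{verbatim}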
This report on axisymmetric ultraspherical/Gegenbauer polynomials and their use in Ambisonic directivity design in 2D and 3D presents an alternative mathematical formalism to what can be read in, e.g., my and Matthias Frank's book on Ambisonics, J\'er\^ome Daniel's thesis, Gary Elko's differential array book chapters, or Boaz Rafaely's spherical microphone array book. Ultraspherical/Gegenbauer polynomials are highly valuable when designing axisymmetric beams and understanding spherical t-designs, and this report will shed some light on what circular, spherical, and ultraspherical axisymmetric polynomials are. While mathematically interesting in themselves, they can be useful in spherical beamforming as described in the literature on spherical and differential microphone arrays. In this report, these ultraspherical/Gegenbauer polynomials are used to uniformly derive, for arbitrary dimensions D, the various directivity designs or Ambisonic order weightings known from the literature: max-DI/basic, max-rE, supercardioid, and cardioid/inphase. Is there a way to relate higher-order cardioids and supercardioids? How could one define directivity patterns with an on-axis flatness constraint?
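A small numerical check of the correspondence assumed throughout: in D dimensions, the axisymmetric basis functions are the Gegenbauer polynomials $C_n^{(\alpha)}(\cos\theta)$ with $\alpha = (D-2)/2$, which for D = 3 reduce to the Legendre polynomials. The basic (max-DI) pattern below is only an example; the other order weightings derived in the report are not hard-coded here.
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer, eval_legendre

theta = np.linspace(0.0, np.pi, 181)
x = np.cos(theta)
alpha = (3 - 2) / 2.0   # D = 3 -> alpha = 1/2; the 2D case needs the Chebyshev limit

for n in range(5):
    assert np.allclose(eval_gegenbauer(n, alpha, x), eval_legendre(n, x))

# Basic/max-DI axisymmetric pattern of order N = 3 in 3D, normalized to unity on-axis.
N = 3
pattern = sum((2 * n + 1) * eval_legendre(n, x) for n in range(N + 1)) / (N + 1) ** 2
\end{verbatim}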
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
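A hedged sketch of the two-stage inference cascade, where coarse_fcn and fine_fcn stand in for the trained 3D FCNs (assumed to map a volume to per-class voxel probabilities) and the bounding-box margin is an illustrative choice:
\begin{verbatim}
import numpy as np

def cascade_predict(volume, coarse_fcn, fine_fcn, margin=16):
    """volume: 3D array (z, y, x). Returns a full-size label map from the coarse-to-fine cascade."""
    # Stage 1: coarse candidate region from the first FCN.
    coarse_labels = coarse_fcn(volume).argmax(axis=0)
    foreground = np.argwhere(coarse_labels > 0)
    if foreground.size == 0:
        return coarse_labels
    lo = np.maximum(foreground.min(axis=0) - margin, 0)
    hi = np.minimum(foreground.max(axis=0) + margin + 1, volume.shape)

    # Stage 2: the second FCN classifies only the voxels inside the (much smaller) candidate box.
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine_labels = fine_fcn(crop).argmax(axis=0)

    full = np.zeros_like(coarse_labels)
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_labels
    return full
\end{verbatim}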