In the intricate landscape of social media, genuine content dissemination may be altered by a number of threats. Coordinated Behavior (CB), defined as orchestrated efforts by entities to deceive or mislead users about their identity and intentions, emerges as a tactic to exploit or manipulate online discourse. This study delves into the relationship between CB and toxic conversations on Twitter. Using a dataset of 11 million tweets from 1 million users preceding the 2019 UK General Elections, we show that users displaying CB typically disseminate less harmful content, irrespective of political affiliation. However, distinct toxicity patterns emerge among different CB cohorts. Compared to their non-CB counterparts, CB participants show marginally elevated toxicity levels only when considering their original posts. We further examine the effects of CB-driven toxic content on non-CB users, gauging its impact by political leaning. Our findings suggest a nuanced but statistically significant influence of CB on digital discourse.
We consider the asymptotic properties of Approximate Bayesian Computation (ABC) for the realistic case of summary statistics with heterogeneous rates of convergence. We allow some statistics to converge faster than the ABC tolerance, other statistics to converge slower, and cover the case where some statistics do not converge at all. We give conditions for the ABC posterior to converge, and provide an explicit representation of the shape of the ABC posterior distribution in our general setting; in particular, we show how the shape of the posterior depends on the number of slow statistics. We then quantify the gain brought by the local linear post-processing step.
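For intuition, a minimal rejection-ABC sketch in Python on a hypothetical toy problem (inferring a Gaussian mean), not the paper's setting; the two summaries (sample mean and sample maximum) are chosen only to mimic statistics with different convergence behaviour, and the local linear post-processing step is omitted.

    import numpy as np

    def abc_rejection(y_obs, n_draws=50_000, eps=0.5, seed=0):
        rng = np.random.default_rng(seed)
        def summaries(y):
            # sample mean converges at the usual sqrt(n) rate; the sample maximum behaves
            # differently, standing in for summaries with heterogeneous rates
            return np.array([y.mean(), y.max()])
        s_obs = summaries(y_obs)
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(-5.0, 5.0)                      # prior draw
            y_sim = rng.normal(theta, 1.0, size=len(y_obs))     # simulate data under theta
            if np.linalg.norm(summaries(y_sim) - s_obs) < eps:  # keep if within the tolerance
                accepted.append(theta)
        return np.array(accepted)

    y_obs = np.random.default_rng(1).normal(1.5, 1.0, size=200)
    draws = abc_rejection(y_obs)
    print(len(draws), draws.mean(), draws.std())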
Quantum networks crucially rely on the availability of high-quality entangled pairs of qubits, known as entangled links, distributed across distant nodes. Maintaining the quality of these links is a challenging task due to the presence of time-dependent noise, also known as decoherence. Entanglement purification protocols offer a solution by converting multiple low-quality entangled states into a smaller number of higher-quality ones. In this work, we introduce a framework to analyse the performance of entanglement buffering setups that combine entanglement consumption, decoherence, and entanglement purification. We propose two key metrics: the availability, which is the steady-state probability that an entangled link is present, and the average consumed fidelity, which quantifies the steady-state quality of consumed links. We then investigate a two-node system, where each node possesses two quantum memories: one for long-term entanglement storage, and another for entanglement generation. We model this setup as a continuous-time stochastic process and derive analytical expressions for the performance metrics. Our findings unveil a trade-off between the availability and the average consumed fidelity. We also bound these performance metrics for a buffering system that employs the well-known bilocal Clifford purification protocols. Importantly, our analysis demonstrates that, in the presence of noise, consistently purifying the buffered entanglement increases the average consumed fidelity, even when some buffered entanglement is discarded due to purification failures.
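A hedged Monte Carlo sketch of the two metrics for the simplest possible setting: a single buffered link with no purification, exponential generation and consumption times, and depolarizing decay of a Werner-like state; all rates and the new-link fidelity are illustrative values, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    gen_rate, con_rate, t_coh, f_new = 2.0, 1.0, 5.0, 0.95

    t_total, t_available = 0.0, 0.0
    consumed_fidelities = []
    for _ in range(50_000):
        t_gen = rng.exponential(1 / gen_rate)    # waiting time until a fresh link is ready
        t_hold = rng.exponential(1 / con_rate)   # time the link sits in memory before consumption
        # Werner-state depolarization of the stored link while it waits
        fid = 0.25 + (f_new - 0.25) * np.exp(-t_hold / t_coh)
        consumed_fidelities.append(fid)
        t_total += t_gen + t_hold
        t_available += t_hold

    print("availability ~", t_available / t_total)
    print("average consumed fidelity ~", np.mean(consumed_fidelities))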
Orthogonal meta-learners, such as DR-learner, R-learner and IF-learner, are increasingly used to estimate conditional average treatment effects. They improve convergence rates relative to naïve meta-learners (e.g., T-, S- and X-learner) through de-biasing procedures that involve applying standard learners to specifically transformed outcome data. This leads them to disregard the possibly constrained outcome space, which can be particularly problematic for dichotomous outcomes: these typically get transformed to values that are no longer constrained to the unit interval, making it difficult for standard learners to guarantee predictions within the unit interval. To address this, we construct orthogonal meta-learners for the prediction of counterfactual outcomes which respect the outcome space. As such, the obtained i-learner or imputation-learner is more generally expected to outperform existing learners, even when the outcome is unconstrained, as we confirm empirically in simulation studies and an analysis of critical care data. Our development also sheds broader light on the construction of orthogonal learners for other estimands.
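To make the constraint issue concrete, here is a sketch of the standard DR-learner pseudo-outcome regression (the construction being critiqued, not the proposed i-learner), using scikit-learn; all variable names and models are illustrative, and cross-fitting of the nuisance estimates is omitted for brevity.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 3))
    A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))          # treatment assignment
    p1 = 1 / (1 + np.exp(-(0.5 * X[:, 1] + A)))              # true outcome probability
    Y = rng.binomial(1, p1)                                   # dichotomous outcome

    # Nuisance models: propensity score and outcome regressions (ideally cross-fitted).
    pi = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    mu1 = GradientBoostingClassifier().fit(X[A == 1], Y[A == 1]).predict_proba(X)[:, 1]
    mu0 = GradientBoostingClassifier().fit(X[A == 0], Y[A == 0]).predict_proba(X)[:, 1]

    # DR pseudo-outcome: unbiased for the CATE, but not constrained to [-1, 1] for binary Y,
    # which is exactly the problem highlighted above.
    mu_A = np.where(A == 1, mu1, mu0)
    phi = (A - pi) / (pi * (1 - pi)) * (Y - mu_A) + mu1 - mu0
    cate_model = GradientBoostingRegressor().fit(X, phi)     # second-stage regression on phi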
In recent years, the prevalence of home NATs and the widespread deployment of Carrier-Grade NATs (CGNATs) have posed significant challenges to various applications, particularly those relying on peer-to-peer communication. This paper addresses these issues by conducting a thorough review of the related literature and exploring potential techniques to mitigate the problems. The literature review focuses on the disruptive effects of home NATs and CGNATs on application performance. Additionally, the study examines existing approaches used to alleviate these disruptions. Furthermore, this paper presents a comprehensive guide on how to puncture a NAT and facilitate direct communication between two peers behind any type of NAT. The techniques outlined in the guide are rigorously tested using a simple application running the IPv8 network overlay, along with its built-in NAT penetration procedures. To evaluate the effectiveness of the proposed techniques, 5G communication is established between two phones using four different Dutch telephone carriers. The results indicate successful cross-connectivity with three out of the four carriers tested, showcasing the practical applicability of the suggested methods.
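A minimal illustration of UDP hole punching between two peers, assuming each peer has already learned the other's public (NAT-mapped) endpoint out of band, e.g. via a rendezvous server; the address and port below are placeholders, and this is not the IPv8 procedure itself.

    import socket, time

    LOCAL_PORT = 9000
    PEER_ADDR = ("203.0.113.7", 9000)   # peer's public NAT-mapped endpoint (placeholder)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(1.0)

    # Both peers send simultaneously so each NAT installs an outbound mapping that
    # subsequently lets the other side's packets through.
    for attempt in range(10):
        sock.sendto(b"punch", PEER_ADDR)
        try:
            data, addr = sock.recvfrom(1024)
            print("hole punched, received", data, "from", addr)
            break
        except socket.timeout:
            time.sleep(0.5)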
Selection models are ubiquitous in statistics. In recent years, they have regained considerable popularity as the working inferential models in many selective inference problems. In this paper, we derive an asymptotic expansion of the local likelihood ratios of selection models. We show that under mild regularity conditions, they are asymptotically equivalent to a sequence of Gaussian selection models. This generalizes the Local Asymptotic Normality framework of Le Cam (1960). Furthermore, we derive the asymptotic shape of Bayesian posterior distributions constructed from selection models, and show that they can be significantly miscalibrated in a frequentist sense.
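For reference, the classical Local Asymptotic Normality expansion of Le Cam (1960) that this work generalizes: under regularity conditions, with Fisher information $I(\theta_0)$, the local log-likelihood ratio satisfies

    \log \frac{dP^{n}_{\theta_0 + h/\sqrt{n}}}{dP^{n}_{\theta_0}}
      = h^\top \Delta_{n,\theta_0} - \tfrac{1}{2}\, h^\top I(\theta_0)\, h + o_{P_{\theta_0}}(1),
    \qquad \Delta_{n,\theta_0} \xrightarrow{d} \mathcal{N}\bigl(0, I(\theta_0)\bigr).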
Analyzing a unique real-time dataset from across 26 social media platforms, we show why the hate-extremism ecosystem now has unprecedented reach and recruitment paths online; why it is now able to exert instant and massive global mainstream influence, e.g. following the October 7 Hamas attack; why it will become increasingly robust in 2024 and beyond; why recent E.U. laws fall short because the effect of many smaller, lesser-known platforms outstrips larger ones like Twitter; and why law enforcement should expect increasingly hard-to-understand paths ahead of offline mass attacks. This new picture of online hate and extremism challenges current notions of a niche activity at the 'fringe' of the Internet driven by specific news sources. But it also suggests a new opportunity for system-wide control akin to adaptive vs. extinction treatments for cancer.
We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose the Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure to avoid large matrix formulation and computation in gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets -- movie ratings, hyperspectral images, and high-dimensional functions -- suggest that the proposed algorithms are more efficient and have better reconstruction ability than other candidates.
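As a structural illustration only, a small Python sketch of reconstructing a tensor from tensor ring (TR) cores and evaluating the completion cost on observed entries; the Riemannian metric, preconditioning, and the proposed optimization algorithms are not reproduced here.

    import numpy as np

    def tr_reconstruct(cores):
        # cores[k] has shape (r_k, n_k, r_{k+1}), with the ring closed by r_0 = r_d.
        full = cores[0]
        for core in cores[1:]:
            # contract the trailing rank index with the next core's leading rank index
            full = np.tensordot(full, core, axes=([-1], [0]))
        # close the ring: trace over the first and last rank indices
        return np.trace(full, axis1=0, axis2=-1)

    rng = np.random.default_rng(0)
    shape, rank = (8, 9, 10), 3
    cores = [rng.normal(size=(rank, n, rank)) for n in shape]
    X = tr_reconstruct(cores)                  # full tensor of shape (8, 9, 10)

    mask = rng.random(shape) < 0.3             # observed entries
    X_obs = X * mask

    def completion_cost(cores, X_obs, mask):
        # least-squares misfit on the observed entries only
        return 0.5 * np.sum(mask * (tr_reconstruct(cores) - X_obs) ** 2)

    print(completion_cost(cores, X_obs, mask))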
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than two qubits. In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices that exclude challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, leads to good model accuracy even for PPTES, which were outside the training data. To further enhance the model's generalization to PPTES, we apply entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, improving the model's accuracy and ability to recognize PPTES. Moreover, by constructing an ensemble of Siamese models, even better generalization is observed, in analogy with the idea of finding separate types of entanglement witnesses for different classes of states. The neural models' code and training schemes, as well as data generation procedures, are available at github.com/Maticraft/quantum_correlations.
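A generic sketch of the kind of convolutional classifier described, assuming the 8x8 density matrix is fed as real and imaginary channels with one output logit per bipartition; this is an illustrative PyTorch architecture, not the authors' exact model (see the linked repository).

    import torch
    import torch.nn as nn

    class BipartitionCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
                nn.Linear(128, 3),          # one logit per bipartition: A|BC, B|AC, C|AB
            )

        def forward(self, rho):             # rho: (batch, 2, 8, 8) real/imaginary parts
            return self.head(self.features(rho))

    model = BipartitionCNN()
    rho_batch = torch.randn(4, 2, 8, 8)     # placeholder input; real data would be density matrices
    logits = model(rho_batch)               # (4, 3); apply a sigmoid for per-bipartition probabilities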
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
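A tiny sketch of the comparison step via Shepard's law, where similarity decays exponentially with distance in a similarity space; the feature vectors standing in for the AI's and the participant's explanations are purely hypothetical.

    import numpy as np

    def shepard_similarity(x, y, sensitivity=1.0):
        # Shepard's universal law: generalization ~ exp(-c * d(x, y)) in psychological space
        return np.exp(-sensitivity * np.linalg.norm(np.asarray(x) - np.asarray(y), ord=1))

    ai_explanation = [0.9, 0.1, 0.4]        # e.g. pooled saliency over image regions (illustrative)
    human_explanation = [0.8, 0.2, 0.5]     # the explanation the participant would have given
    print(shepard_similarity(ai_explanation, human_explanation))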
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
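A hedged sketch of one ingredient described above: an agent that learns peer-allocation values by reinforcement and scales its exploration by how close to optimal it believes its current strategy is; the class, reward model, and adaptation rule are illustrative only, not the paper's four algorithms.

    import random

    class AllocatorAgent:
        def __init__(self, peers, lr=0.1):
            self.q = {p: 0.0 for p in peers}    # estimated value of allocating a subtask to each peer
            self.lr = lr
            self.epsilon = 1.0                   # exploration rate

        def choose_peer(self):
            if random.random() < self.epsilon:
                return random.choice(list(self.q))      # explore
            return max(self.q, key=self.q.get)          # exploit current best peer

        def update(self, peer, reward):
            self.q[peer] += self.lr * (reward - self.q[peer])
            # explore less as the best estimated value approaches the reward ceiling
            # (rewards assumed normalized to [0, 1]; purely a heuristic for illustration)
            self.epsilon = max(0.05, 1.0 - max(self.q.values()))

    agent = AllocatorAgent(peers=["a", "b", "c"])
    for _ in range(100):
        p = agent.choose_peer()
        agent.update(p, reward=random.gauss(0.5 if p == "b" else 0.2, 0.1))
    print(agent.q, agent.epsilon)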