Social media platforms have witnessed a substantial increase in social bot activity, significantly affecting online discourse. Our study explores the dynamic nature of bot engagement related to Extinction Rebellion climate change protests from 18 November 2019 to 10 December 2019. We find that bots exert a greater influence on human behavior than vice versa during heated online periods. To assess the causal impact of human-bot communication, we compared communication histories between human users who directly interacted with bots and matched human users who did not. Our findings demonstrate a consistent negative impact of bot interactions on subsequent human sentiment, with exposed users displaying significantly more negative sentiment than their counterparts. Furthermore, the nature of the bot interaction influences human tweeting activity and sentiment towards the protests: political astroturfing bots increase activity, whereas other bots decrease it. Sentiment changes towards the protests depend on the user's original level of support, indicating targeted manipulation. However, bot interactions do not change activists' engagement with the protests. Despite the seemingly minor impact of individual bot encounters, the cumulative effect is profound due to the large volume of bot communication. Our findings underscore the importance of unrestricted access to social media data for studying the prevalence and influence of social bots, as new technological advancements make distinguishing between bots and humans nearly impossible.
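A minimal sketch of the matched-comparison design described above (our own illustration: the features, matching rule, and numbers are invented, and real features would normally be standardized before matching):

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical per-user pre-exposure features: [tweet rate, mean sentiment]
exposed_pre = np.array([[5.0, 0.1], [2.0, -0.2], [8.0, 0.3]])
control_pre = np.array([[4.8, 0.12], [2.1, -0.25], [7.5, 0.28], [1.0, 0.0]])
exposed_post_sent = np.array([-0.15, -0.30, 0.05])   # sentiment after bot contact
control_post_sent = np.array([0.05, -0.10, 0.20, 0.02])

# Match each bot-exposed user to the most similar unexposed user
# (1-nearest-neighbor on pre-exposure features).
nn = NearestNeighbors(n_neighbors=1).fit(control_pre)
_, idx = nn.kneighbors(exposed_pre)

# Effect estimate: exposed minus matched-control post-exposure sentiment.
att = (exposed_post_sent - control_post_sent[idx.ravel()]).mean()
print(f"Estimated effect of bot exposure on sentiment: {att:.3f}")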
Online communities offer their members various benefits, such as information access, social and emotional support, and entertainment. Despite the important role that founders play in shaping communities, prior research has focused primarily on what drives users to participate and contribute; the motivations and goals of founders remain underexplored. To uncover how and why online communities get started, we present findings from a survey of 951 recent founders of Reddit communities. We find that topical interest is the most common motivation for community creation, followed by motivations to exchange information, connect with others, and self-promote. Founders have heterogeneous goals for their nascent communities, but they tend to privilege community quality and engagement over sheer growth. These differences in founders' early attitudes towards their communities help predict not only the community-building actions that they pursue, but also the ability of their communities to attract visitors, contributors, and subscribers over the first 28 days. We end with a discussion of the implications for researchers, designers, and founders of online communities.
Synthetic data are an attractive concept to enable privacy in data sharing. A fundamental question is how similar the privacy-preserving synthetic data are to the true data. Using metric privacy, an effective generalization of differential privacy beyond the discrete setting, we raise the problem of characterizing the optimal privacy-accuracy tradeoff by the metric geometry of the underlying space. We provide a partial solution to this problem in terms of the "entropic scale", a quantity that captures the multiscale geometry of a metric space via the behavior of its packing numbers. We illustrate the applicability of our privacy-accuracy tradeoff framework via a diverse set of examples of metric spaces.
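For readers unfamiliar with the notion, the standard definition of metric privacy (also known as $d_{\mathcal{X}}$-privacy; this is general background, not necessarily the paper's exact formulation) is: a randomized mechanism $M$ on a metric space $(\mathcal{X}, d)$ is $\epsilon$-metrically private if for all $x, x' \in \mathcal{X}$ and all measurable sets $S$,

\[ \Pr[M(x) \in S] \le e^{\epsilon\, d(x, x')}\, \Pr[M(x') \in S]. \]

Taking $d$ to be the Hamming metric on databases recovers standard $\epsilon$-differential privacy. The packing number $N(\mathcal{X}, r)$, the largest number of points of $\mathcal{X}$ with pairwise distances exceeding $r$, is the multiscale quantity whose behavior the entropic scale summarizes.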
Contraction coefficients give a quantitative strengthening of the data processing inequality. As such, they have many natural applications whenever a closer analysis of information processing is required. However, it is often challenging to calculate these coefficients. As a remedy, we discuss a quantum generalization of Doeblin coefficients. These give an efficiently computable upper bound on many contraction coefficients. We prove several properties and discuss generalizations and applications. In particular, we give additional, stronger bounds for PPT channels and introduce reverse Doeblin coefficients that bound certain expansion coefficients.
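As background (this is the classical definition that the quantum generalization builds on, not the paper's new quantity): for a classical channel $W$ with transition probabilities $W(y|x)$, the Doeblin coefficient is

\[ \alpha(W) = \sum_{y} \min_{x} W(y|x), \]

and it yields the bound $\eta_{\mathrm{TV}}(W) \le 1 - \alpha(W)$ on the total-variation contraction coefficient. Its efficiency is apparent from the formula: evaluating it requires only a minimization per output symbol rather than an optimization over all pairs of input distributions.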
Reachability and other path-based measures on temporal graphs can be used to understand the spread of infection, information, and people in modelled systems. Due to delays and errors in reporting, temporal graphs derived from data are unlikely to perfectly reflect reality, especially with respect to the precise times at which edges appear. To reflect this uncertainty, we consider a model in which some number $\zeta$ of edge appearances may have their timestamps perturbed by $\pm\delta$ for some $\delta$. Within this model, we investigate temporal reachability and consider the problem of determining the maximum number of vertices any vertex can reach under these perturbations. We show that this problem is intractable in general but is efficiently solvable when $\zeta$ is sufficiently large. We also give algorithms that solve this problem in several restricted settings. We complement this with some contrasting results concerning the complexity of related temporal eccentricity problems under perturbation.
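As a baseline for the quantity being perturbed, here is a minimal Python sketch of earliest-arrival temporal reachability (our own illustration, assuming undirected edge appearances and strict time-respecting paths; the paper's problem additionally allows up to $\zeta$ timestamps to shift by $\pm\delta$):

# Earliest-arrival temporal reachability from a source, given a temporal graph
# as (u, v, t) edge appearances. A single pass over the time-sorted edges
# suffices because assigned arrival times are non-decreasing.
import math

def temporal_reach(edges, source):
    arrival = {source: -math.inf}            # earliest arrival time per vertex
    for u, v, t in sorted(edges, key=lambda e: e[2]):
        if u in arrival and arrival[u] < t:  # must arrive strictly before t
            arrival[v] = min(arrival.get(v, math.inf), t)
        if v in arrival and arrival[v] < t:  # undirected appearance
            arrival[u] = min(arrival.get(u, math.inf), t)
    return set(arrival)

edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1)]    # (2,3) appears before 2 is reached
print(len(temporal_reach(edges, 0)))          # -> 3; vertex 3 is unreachable

In this toy instance, perturbing the timestamp of $(2, 3)$ forward would make vertex 3 reachable, which is exactly the sensitivity that the perturbation model probes.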
In backbone networks, it is fundamental to quickly protect traffic against any unexpected event, such as failures or congestion, which may impact Quality of Service (QoS). Standard solutions based on Segment Routing (SR), such as Topology-Independent Loop-Free Alternate (TI-LFA), are used in practice to handle failures, but no distributed solutions exist for tactical congestion mitigation. A promising approach leveraging SR has recently been proposed to quickly steer traffic away from congested links over alternative paths. As the pre-computation of alternative paths plays a paramount role in efficiently mitigating congestion, we investigate the associated path computation problem, aiming to maximize both the amount of traffic that can be rerouted and the resilience against any 1-link failure. In particular, we focus on two variants of this problem. First, we maximize the residual flow after all possible failures. We show that this problem is NP-hard, and we solve it via a Benders decomposition algorithm. Then, to provide a practical and scalable solution, we solve a relaxed variant that maximizes, instead of the flow, the number of surviving alternative paths after all possible failures, and we provide a polynomial algorithm for it. Through numerical experiments, we compare the two variants and show that they increase the amount of rerouted traffic and the resilience of the network after any 1-link failure.
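To make the relaxed objective concrete, a small sketch (our own, with hypothetical paths; it only scores a given set of precomputed alternative paths and is not the paper's polynomial-time construction algorithm) that counts how many paths survive the worst-case single-link failure:

# Each alternative path is a list of links; a path survives a failure if it
# avoids the failed link. We report the worst case over all 1-link failures.
def worst_case_surviving_paths(paths):
    links = {l for p in paths for l in p}
    return min(sum(1 for p in paths if failed not in p) for failed in links)

paths = [
    [("a", "b"), ("b", "d")],
    [("a", "c"), ("c", "d")],
    [("a", "b"), ("b", "c"), ("c", "d")],
]
print(worst_case_surviving_paths(paths))  # -> 1: failing ("a","b") or ("c","d") kills two paths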
Numerous online services are data-driven: the behavior of users affects the system's parameters, and the system's parameters affect the users' experience of the service, which in turn affects the way users may interact with the system. For example, people may choose to use a service only for tasks it already handles well, or they may choose to switch to a different service. These adaptations influence the ability of a system to learn about a population of users and tasks in order to improve its performance broadly. In this work, we analyze a class of such dynamics -- where users allocate their participation amongst services to reduce the individual risk they experience, and services update their model parameters to reduce the service's risk on their current user population. We refer to these dynamics as \emph{risk-reducing}, which cover a broad class of common model updates including gradient descent and multiplicative weights. For this general class of dynamics, we show that asymptotically stable equilibria are always segmented, with sub-populations allocated to a single learner. Under mild assumptions, the utilitarian social optimum is a stable equilibrium. In contrast to previous work, which shows that repeated risk minimization by a single learner can result in representation disparity (Hashimoto et al., 2018; Miller et al., 2021), we find that repeated myopic updates with multiple learners lead to better outcomes. We illustrate the phenomena via a simulated example initialized from real data.
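A minimal simulation of these dynamics, written as a sketch with an invented one-dimensional population and squared-loss risk (not the paper's experimental setup): users repeatedly pick the service with lower individual risk, and each service takes a gradient step on the average risk of its current users.

import numpy as np

rng = np.random.default_rng(0)
# Two sub-populations of users, centered at -2 and +2.
users = rng.normal(loc=[-2.0, 2.0], scale=0.5, size=(200, 2)).ravel()
theta = np.array([0.0, 0.1])              # two services' model parameters

for _ in range(500):
    # Users allocate to the service with lower individual risk (squared loss).
    risks = (users[:, None] - theta[None, :]) ** 2
    choice = risks.argmin(axis=1)
    # Each service takes a gradient step on the mean risk of its current users.
    for s in range(2):
        mine = users[choice == s]
        if mine.size:
            theta[s] -= 0.1 * 2 * (theta[s] - mine.mean())

print(theta)  # approx. [-2, 2]: each service settles on one sub-population

In this toy run the equilibrium is segmented, with each service serving one sub-population, consistent with the characterization of stable equilibria described above.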
Traversing complex 3-D environments has always been a significant challenge for legged locomotion. Existing methods typically rely on external sensors such as vision and lidar to acquire environmental information and react to obstacles preemptively. However, in scenarios like nighttime or dense forests, external sensors often fail to function properly, and robots must instead rely on proprioceptive sensors to perceive the diverse obstacles in the environment and respond promptly. This task is undeniably challenging. Our research finds that methods based on collision detection can enhance a robot's perception of environmental obstacles. In this work, we propose an end-to-end learning-based motion controller for quadruped robots that relies solely on proprioceptive sensing. The controller can accurately detect, localize, and agilely respond to collisions in unknown and complex 3-D environments, thereby improving the robot's traversability. We demonstrate in both simulation and real-world experiments that our method enables quadruped robots to successfully traverse challenging obstacles in various complex environments.
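The controller above learns its collision responses end to end; purely as background on what proprioceptive collision detection can look like, here is a generic hand-crafted torque-residual heuristic (a common baseline in the robotics literature, not the method proposed in this work):

# Flag a collision when measured joint torques deviate from model-predicted
# torques by more than a threshold (background illustration only).
import numpy as np

def detect_collision(tau_measured, tau_expected, threshold=5.0):
    residual = np.linalg.norm(tau_measured - tau_expected)
    return residual > threshold

tau_expected = np.array([1.0, -0.5, 2.0])   # e.g. from inverse dynamics
tau_measured = np.array([1.2, -0.4, 9.0])   # torque spike on one joint
print(detect_collision(tau_measured, tau_expected))  # -> True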
Model averaging (MA), a technique for combining estimators from a set of candidate models, has attracted increasing attention in machine learning and statistics. In the existing literature, there is an implicit understanding that MA can be viewed as a form of shrinkage estimation that draws the response vector towards the subspaces spanned by the candidate models. This paper explores this perspective by establishing connections between MA and shrinkage in a linear regression setting with multiple nested models. We first demonstrate that the optimal MA estimator is the best linear estimator with monotonically non-increasing weights in a Gaussian sequence model. The Mallows MA (MMA), which estimates weights by minimizing Mallows' $C_p$ over the unit simplex, can be viewed as a variation of the sum of a set of positive-part Stein estimators. Indeed, the latter estimator differs from the MMA only in that its optimization of Mallows' $C_p$ is over a suitably relaxed weight set. Motivated by these connections, we develop a novel MA procedure based on blockwise Stein estimation. The resulting Stein-type MA estimator is asymptotically optimal across a broad parameter space when the variance is known. Numerical results support our theoretical findings. The connections established in this paper may open up new avenues for investigating MA from different perspectives. A discussion on some topics for future research concludes the paper.
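For reference, the Mallows criterion mentioned above takes the following standard form for nested least-squares candidates (stated as general background; the notation is ours, not necessarily the paper's): with candidate fits $\hat{\mu}_m = P_m y$, where $P_m$ projects onto the $m$-th model's column space of rank $k_m$, MMA selects

\[ \hat{w} = \operatorname*{arg\,min}_{w \in \Delta_M} \; \Big\| y - \sum_{m=1}^{M} w_m \hat{\mu}_m \Big\|^2 + 2\sigma^2 \sum_{m=1}^{M} w_m k_m, \]

where $\Delta_M = \{ w \in \mathbb{R}^M : w_m \ge 0, \ \sum_{m=1}^{M} w_m = 1 \}$ is the unit simplex. The positive-part Stein connection arises when this same criterion is instead optimized over a relaxed weight set, as described above.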
The relevance of shallow-depth quantum circuits has recently increased, mainly due to their applicability to near-term devices. In this context, one of the main goals of quantum circuit complexity is to find problems that can be solved by shallow quantum circuits but require more computational resources classically. Our first contribution in this work is to prove new separations between classical and quantum constant-depth circuits. First, we show a separation between constant-depth quantum circuits with quantum advice, $\mathsf{QNC}^0/\mathsf{qpoly}$, and $\mathsf{AC}^0[p]$, the class of classical constant-depth circuits with unbounded fan-in and $\mathrm{mod}\ p$ gates. In addition, we show a separation between $\mathsf{QAC}^0$, which additionally has Toffoli gates with unbounded control, and $\mathsf{AC}^0[p]$. This establishes the first such separation for a shallow-depth quantum class that does not involve quantum fan-out gates. Second, we consider $\mathsf{QNC}^0$ circuits with infinite-size gate sets. We show that these circuits, together with (classical or quantum) prime modular gates, can implement threshold gates, showing that $\mathsf{QNC}^0[p]=\mathsf{QTC}^0$. Finally, we show that in the infinite-size gate-set case, these quantum circuit classes for higher-dimensional Hilbert spaces offer no advantage over standard qubit implementations.
Social media platforms can quickly disseminate STEM content to diverse audiences, but their operation can be mysterious. We used open-source machine learning methods, including clustering, regression, and sentiment analysis, to analyze over 1,000 videos, and their associated metrics, from six social media STEM creators. Our data provide insights into how audiences generate interest signals (likes, bookmarks, comments, shares) and how these signals correlate with views, and suggest that content from newer creators is disseminated differently. We also share insights on how to optimize dissemination by analyzing data available exclusively to content creators, as well as via sentiment analysis of comments.
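A sketch of the kind of open-source analysis described (the column names, data, and model choices here are hypothetical stand-ins, not the study's actual pipeline):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

videos = pd.DataFrame({
    "likes":     [120, 3400, 80, 950, 4100, 60],
    "bookmarks": [10, 410, 5, 90, 520, 4],
    "comments":  [15, 260, 9, 70, 300, 6],
    "shares":    [8, 190, 3, 55, 240, 2],
    "views":     [2000, 61000, 1500, 18000, 75000, 1100],
})
signals = videos[["likes", "bookmarks", "comments", "shares"]]

# Cluster videos by their engagement-signal profile.
videos["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(signals)

# Regress views on the interest signals to see which correlate most with reach.
reg = LinearRegression().fit(signals, videos["views"])
print(dict(zip(signals.columns, reg.coef_.round(2))))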