The spread of online misinformation on social media is increasingly perceived as a problem for societal cohesion and democracy. The role of political leaders in this process has attracted less research attention, even though politicians who "speak their mind" are perceived by segments of the public as authentic and honest even when their statements are unsupported by evidence. Analyzing communications by members of the U.S. Congress on Twitter between 2011 and 2022, we show that politicians' conception of honesty has undergone a distinct shift, with authentic belief-speaking that may be decoupled from evidence becoming more prominent and more differentiated from explicitly evidence-based truth seeking. We show that for Republicans, but not Democrats, a 10% increase in belief-speaking is associated with a 12.8-point decrease in the quality (NewsGuard scoring system) of the sources shared in a tweet. Conversely, an increase in truth-seeking language is associated with an increase in source quality for both parties. The results support the hypothesis that the current dissemination of misinformation in political discourse is in part driven by an alternative understanding of truth and honesty that emphasizes invocation of subjective belief at the expense of reliance on evidence.
Wireless communication is enabling billions of people to connect to each other and the internet, transforming every sector of the economy, and building the foundations for powerful new technologies that hold great promise to improve lives at an unprecedented rate and scale. The rapid increase in the number of devices and the associated demands for higher data rates and broader network coverage fuel the need for more robust wireless technologies. The key technology identified to address this problem is Cell-Free Massive MIMO (CF-mMIMO). CF-mMIMO comes with many challenges, one of which is efficiently allocating limited resources. In this paper, we focus on a major resource allocation problem in wireless networks, namely the Pilot Assignment problem (PA). We show that PA is strongly NP-hard and does not admit a polynomial-time constant-factor approximation algorithm. Further, we show that PA cannot be approximated in polynomial time within a factor of $\mathcal{O}(K^2)$ (where $K$ is the number of users) when the system consists of at least three pilots. Finally, we present approximation lower bounds of $1.058$ (resp. $\epsilon K^2$, for any $\epsilon > 0$) in the special cases where the system consists of exactly two (resp. three) pilots.
To track trends in the perception of literary translation around the political transformation of 1989 in Hungary, a coding system was developed on the paragraphs of the 1980-1999 issues of the literary journal Alf\"old. This paper describes how we trained BERT models to carry the coding system over to the 1980-1999 issues of the literary journal Nagyvil\'ag. We use extensive hyperparameter tuning, loss functions robust to label imbalance, 10-fold cross-validation for precise evaluation, a model ensemble for prediction, manual validation on the prediction set, and a new calibration method to better predict label counts for sections of the Nagyvil\'ag corpus; to study the relations between labels, we construct label relation networks.
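To make the training setup concrete, below is a minimal sketch (not the authors' code) of fine-tuning a BERT classifier with a class-weighted cross-entropy loss to counter label imbalance; the model name, number of labels, and label counts are illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # assumption: any Hungarian-capable BERT works
NUM_LABELS = 5                               # assumption: number of codes in the coding system

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Class weights inversely proportional to label frequency make the loss robust to imbalance.
label_counts = torch.tensor([500., 120., 80., 40., 10.])   # illustrative counts
class_weights = label_counts.sum() / (NUM_LABELS * label_counts)
loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(texts, labels):
    """One gradient step on a batch of paragraph strings and integer label ids."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**enc).logits
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In the setup described above, such a training step would be wrapped in 10-fold cross-validation, with the per-fold models combined into an ensemble at prediction time.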
Personalized recommendations form an important part of today's internet ecosystem, helping artists and creators to reach interested users, and helping users to discover new and engaging content. However, many users today are skeptical of platforms that personalize recommendations, in part due to historically careless treatment of personal data and data privacy. Businesses that rely on personalized recommendations are now entering a new paradigm in which many of their systems must be overhauled to be privacy-first. In this article, we propose an algorithm for personalized recommendations that facilitates both precise and differentially private measurement. We consider advertising as an example application, and conduct offline experiments to quantify how the proposed privacy-preserving algorithm affects key metrics related to user experience, advertiser value, and platform revenue, compared to the two extremes of a private, non-personalized implementation and a non-private, personalized one.
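As a rough illustration of what differentially private measurement can look like in this setting (a generic Laplace-mechanism sketch, not the paper's algorithm; the epsilon and clipping bound are assumptions), consider releasing an aggregate advertising metric with calibrated noise:

import numpy as np

def dp_sum(per_user_values, epsilon=1.0, clip=1.0, rng=None):
    """Release the sum of per-user contributions with epsilon-differential privacy.

    Each contribution is clipped to [0, clip], so the L1 sensitivity of the sum
    is `clip`, and Laplace noise with scale clip/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(per_user_values, 0.0, clip)
    return clipped.sum() + rng.laplace(loc=0.0, scale=clip / epsilon)

# Hypothetical example: conversions attributed to personalized recommendations.
conversions = np.array([1, 0, 1, 1, 0, 1], dtype=float)
noisy_total = dp_sum(conversions, epsilon=0.5)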
We study Lindstr\"om quantifiers that satisfy certain closure properties motivated by the study of polymorphisms in the context of constraint satisfaction problems (CSPs). When the algebra of polymorphisms of a finite structure B satisfies certain equations, this gives rise to a natural closure condition on the class of structures that map homomorphically to B. The collection of quantifiers satisfying closure conditions arising from a fixed set of equations is rather more general than those arising as CSPs. For any such condition P, we define a pebble game that delimits the distinguishing power of the infinitary logic with all quantifiers that are P-closed. We use the pebble game to show that the problem of deciding whether a system of linear equations is solvable in $\mathbb{Z}_2$ is not expressible in the infinitary logic with all quantifiers closed under a near-unanimity condition.
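For reference, the near-unanimity condition in the final result is the standard one: an $n$-ary operation $f$ (with $n \ge 3$) is a near-unanimity operation if it satisfies
\[
f(y, x, \ldots, x) = f(x, y, x, \ldots, x) = \cdots = f(x, \ldots, x, y) = x \qquad \text{for all } x, y,
\]
that is, whenever all but at most one argument agree, the operation returns the majority value.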
Text summarization is an essential task in natural language processing, and researchers have developed various approaches over the years, ranging from rule-based systems to neural networks. However, no single model or approach performs well on every type of text. We propose a system that recommends the most suitable summarization model for a given text. The proposed system employs a fully connected neural network that analyzes the input content and predicts which summarizer should achieve the best ROUGE score for the given input. The meta-model selects among four different summarization models, developed for the Slovene language, using different properties of the input, in particular its Doc2Vec document representation. The four Slovene summarization models address different challenges associated with text summarization in a less-resourced language. We evaluate the performance of the proposed SloMetaSum model automatically and parts of it manually. The results show that the system successfully automates the step of manually selecting the best model.
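A minimal sketch of such a meta-model (all names and dimensions below are assumptions, not the SloMetaSum implementation): a fully connected network maps the Doc2Vec vector of the input to a predicted ROUGE score for each of the four candidate summarizers, and the highest-scoring one is recommended.

import torch
import torch.nn as nn

DOC2VEC_DIM = 300        # assumption: dimensionality of the Doc2Vec vectors
NUM_SUMMARIZERS = 4      # the four Slovene summarization models

class MetaSelector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DOC2VEC_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_SUMMARIZERS),   # one predicted ROUGE score per summarizer
        )

    def forward(self, doc_vec):
        return self.net(doc_vec)

def recommend(meta_model, doc_vec):
    """Return the index of the summarizer predicted to score best on this document."""
    with torch.no_grad():
        return int(meta_model(doc_vec).argmax())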
Aiming for a mixbiotic society that combines freedom and solidarity among people with diverse values, I focused on nonviolent communication (NVC), which enables compassionate giving in various situations of social division and conflict, and explored the use of generative AI for it. Specifically, ChatGPT was used in place of a traditional certified trainer to test the possibility of mediating (modifying) input sentences in four processes: observation, feelings, needs, and requests. The results indicate that there is potential for applying generative AI, although it is not yet at a practical level. Suggested improvement guidelines include adding model responses, relearning revised responses, specifying appropriate terminology for each process, and re-asking for required information. The use of generative AI will initially be useful for assisting certified trainers and for preparing and reviewing events and workshops, and in the future for supporting consensus building and cooperative behavior in digital democracy, platform cooperatives, and cyber-human social co-operating systems. It is hoped that the widespread use of NVC mediation with generative AI will lead to the early realization of a mixbiotic society.
Particle methods based on evolving the spatial derivatives of the solution were originally introduced to simulate reaction-diffusion processes, inspired by vortex methods for the Navier--Stokes equations. Such methods, referred to as gradient random walk methods, were extensively studied in the '90s and have several interesting features: they are grid free, they automatically adapt to the solution by concentrating elements where the gradient is large, and they significantly reduce the variance of the standard random walk approach. In this work, we revive these ideas by showing how to generalize the approach to a larger class of partial differential equations, including hyperbolic systems of conservation laws. To achieve this goal, we first extend the classical Monte Carlo method to the relaxation approximation of systems of conservation laws, and subsequently consider a novel particle dynamics based on the spatial derivatives of the solution. The methodology, combined with an asymptotic-preserving splitting discretization, yields a new class of gradient-based Monte Carlo methods for hyperbolic systems of conservation laws. Several results in one spatial dimension for scalar equations and systems of conservation laws show that the new methods are very promising and yield remarkable improvements over standard Monte Carlo approaches, both in terms of variance reduction and in describing the shock structure.
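To give an idea of the first ingredient, a standard relaxation approximation of a scalar conservation law $\partial_t u + \partial_x f(u) = 0$ (a Jin--Xin type system, shown here only as a sketch of the kind of relaxation such approaches build on) reads
\[
\partial_t u + \partial_x v = 0, \qquad
\partial_t v + a\,\partial_x u = -\frac{1}{\varepsilon}\bigl(v - f(u)\bigr),
\]
where $a > 0$ satisfies the subcharacteristic condition $a \ge f'(u)^2$; as the relaxation parameter $\varepsilon \to 0$, $v \to f(u)$ and the original conservation law is recovered. The linear transport structure of the relaxed system is what admits a Monte Carlo particle interpretation, and hence a gradient-based variant of it.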
We propose and analyze a nonlinear dynamic model of continuous-time multi-dimensional belief formation over signed social networks. Our model accounts for the effects of a structured belief system, self-appraisal, internal biases, and various sources of cognitive dissonance posited by recent theories in social psychology. We prove that strong beliefs emerge on the network as a consequence of a bifurcation. We analyze how the balance of social network effects in the model controls the nature of the bifurcation and, therefore, the belief-forming limit-set solutions. Our analysis provides constructive conditions on how multi-stable network belief equilibria and belief oscillations emerging at a belief-forming bifurcation depend on the communication network graph and the belief system network graph. Our model and analysis provide new theoretical insights into the dynamics of social systems and a new principled framework for designing decentralized decision-making on engineered networks in the presence of structured relationships among alternatives.
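As a caricature of how a bifurcation can produce strong beliefs in models of this kind (this generic form is for illustration only and is not the paper's model), consider agent beliefs $x_i$ evolving as
\[
\dot{x}_i = -d\, x_i + u\, S\Bigl(\sum_j A_{ij} x_j\Bigr) + b_i,
\]
where $S$ is a saturating, sigmoidal nonlinearity, $A$ encodes the signed communication network, $d > 0$ is a damping rate, $b_i$ is an input bias, and $u$ is a social-effort or attention gain. For small $u$ the neutral state is the only stable equilibrium; as $u$ crosses a critical value determined by the spectrum of $A$, the neutral state loses stability and strong, structured belief configurations emerge. The paper analyzes this qualitative mechanism in its multi-dimensional, belief-system setting.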
Neural network pruning is a highly effective technique for reducing the computational and memory demands of large neural networks. In this paper, we present a novel approach to pruning neural networks using Bayesian inference, which integrates seamlessly into the training procedure. Our method leverages the posterior probabilities of the neural network prior to and following pruning, enabling the calculation of Bayes factors, which in turn guide the iterative pruning. Through comprehensive evaluations on multiple benchmarks, we demonstrate that our method achieves the desired levels of sparsity while maintaining competitive accuracy.
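The following sketch illustrates the general idea of Bayes-factor-guided pruning using a crude BIC approximation to the marginal likelihoods; it is not the paper's procedure (which works with posterior probabilities obtained during training), and all numbers are hypothetical.

import math

def log_marginal_bic(log_likelihood, num_params, num_data):
    """BIC approximation to the log model evidence: logL - (k/2) * log(n)."""
    return log_likelihood - 0.5 * num_params * math.log(num_data)

def bayes_factor_pruned_vs_full(logL_pruned, k_pruned, logL_full, k_full, n):
    """Bayes factor comparing the pruned network to the full one."""
    log_bf = (log_marginal_bic(logL_pruned, k_pruned, n)
              - log_marginal_bic(logL_full, k_full, n))
    return math.exp(log_bf)

# Hypothetical iteration: accept the pruning step only if the evidence favors
# the sparser model (Bayes factor > 1), then repeat with a new candidate mask.
bf = bayes_factor_pruned_vs_full(logL_pruned=-1025.0, k_pruned=8_000,
                                 logL_full=-1010.0, k_full=10_000, n=50_000)
accept_pruning = bf > 1.0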
Software-defined networking (SDN) is anticipated to become the preferred platform for deploying diverse networks. Compared to traditional networks, SDN separates the control and data planes for efficient domain-wide traffic routing and management. The controllers in the control plane are responsible for programming the data plane forwarding devices, while the top layer, the application plane, enforces policies and programs the network. The different levels of the SDN communicate through interfaces. However, SDN faces challenges with traffic distribution, such as load imbalance, which can negatively affect network performance. Consequently, various SDN load-balancing solutions have been developed to enhance SDN effectiveness. In addition, given the fast growth of the AI field, researchers are considering the potential of implementing artificial intelligence (AI) approaches in SDN to improve network resource usage and overall performance. This survey focuses on the following: firstly, analyzing the SDN architecture and investigating the problem of load balancing in SDN; secondly, categorizing AI-based load-balancing methods and thoroughly assessing these mechanisms from various perspectives, such as the algorithm/technique employed, the problem tackled, and their strengths and weaknesses; thirdly, summarizing the metrics used to measure the effectiveness of these techniques; and finally, identifying the trends and challenges of AI-based load balancing for future research.