Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations extends beyond the moderated users to the bystanders who witness these explanations. We conduct a quasi-experimental study on two popular Reddit communities (r/askreddit and r/science) by collecting their data spanning 13 months, a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels compared to a matched control set of users. Our findings suggest that explanations clarify and reinforce the social norms of online spaces, enhance community engagement, and benefit many more members than previously understood. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more effort in post-removal explanations can help build thriving online communities.
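To make the quasi-experimental design concrete, here is a minimal sketch of a matched-control comparison of the kind described above. The column names, the synthetic data, and the simple 1:1 nearest-neighbour matching on pre-period activity are illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of a matched-control comparison; the columns, synthetic data,
# and 1:1 nearest-neighbour matching are illustrative assumptions only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic per-user activity: 'bystander' marks users who witnessed a removal
# explanation; 'pre'/'post' are posting counts before/after the exposure window.
users = pd.DataFrame({
    "bystander": rng.integers(0, 2, 2000).astype(bool),
    "pre": rng.poisson(5, 2000),
})
users["post"] = users["pre"] + rng.poisson(1, 2000) + users["bystander"] * 1  # built-in effect

treated = users[users["bystander"]].reset_index(drop=True)
controls = users[~users["bystander"]].reset_index(drop=True)

# 1:1 nearest-neighbour matching on pre-period activity (with replacement).
match_idx = [(controls["pre"] - p).abs().idxmin() for p in treated["pre"]]
matched_controls = controls.loc[match_idx]

# Difference in post-period change between bystanders and their matched controls.
att = (treated["post"] - treated["pre"]).mean() - \
      (matched_controls["post"] - matched_controls["pre"]).mean()
print(f"Estimated effect on posting activity: {att:.2f} posts")
```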
Autonomous systems, including generative AI, have been adopted faster than previous digital innovations. Their impact on society may well be more profound, with a radical restructuring of the economy of knowledge and dramatic consequences for social and institutional balances. Different approaches to controlling these systems have emerged, rooted in the classical pillars of legal systems, proprietary rights, and social responsibility. We show how an illusion of control might be guiding governments and regulators, while autonomous systems might be driving us toward inescapable delusion.
As cyber attacks continue to increase in frequency and sophistication, detecting malware has become a critical task for maintaining the security of computer systems. Traditional signature-based methods of malware detection have limitations in detecting complex and evolving threats. In recent years, machine learning (ML) has emerged as a promising solution to detect malware effectively. ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to discern. This paper presents a comprehensive review of the state-of-the-art ML techniques used in malware detection, including supervised and unsupervised learning, deep learning, and reinforcement learning. We also examine the challenges and limitations of ML-based malware detection, such as the potential for adversarial attacks and the need for large amounts of labeled data. Furthermore, we discuss future directions in ML-based malware detection, including the integration of multiple ML algorithms and the use of explainable AI techniques to enhance the interpretability of ML-based detection systems. Our research highlights the potential of ML-based techniques to improve the speed and accuracy of malware detection and to contribute to enhancing cybersecurity.
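As a concrete illustration of the supervised setting reviewed above, the sketch below trains a random-forest classifier on synthetic feature vectors; the feature semantics (e.g., static or behavioural features extracted per binary) and the data are assumptions for illustration, not a real malware corpus.

```python
# A minimal sketch of supervised ML for malware detection; the feature set and
# the data are synthetic stand-ins, not a real corpus or the reviewed systems.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_samples, n_features = 5000, 64          # e.g., 64 static/behavioural features per binary
X = rng.normal(size=(n_samples, n_features))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)  # 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "malware"]))
```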
We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features. The specific dictionary depends on the activation and depth. We consider 2-layer networks with piecewise linear activations, deep narrow ReLU networks with up to 4 layers, and rectangular and tree networks with sign activation and arbitrary depth. Interestingly, in ReLU networks a fourth layer creates features that represent reflections of the training data about themselves. The Lasso representation sheds light on globally optimal networks and the solution landscape.
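To give a flavour of the result, the sketch below fits 1-D data with a Lasso over a fixed dictionary of ReLU ramp features anchored at the training points. This is an illustrative dictionary in the spirit of the 2-layer ReLU case, not necessarily the paper's exact construction.

```python
# A rough illustration only: Lasso over a fixed dictionary of ReLU ramps
# anchored at the training points; the paper's dictionary depends on the
# activation and depth and may differ from this choice.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 30))
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

# Dictionary columns: ReLU ramps (x - x_i)_+ and (x_i - x)_+, plus bias and linear terms.
D = np.column_stack(
    [np.maximum(x[:, None] - x[None, :], 0),
     np.maximum(x[None, :] - x[:, None], 0),
     np.ones_like(x)[:, None], x[:, None]]
)

model = Lasso(alpha=1e-3, max_iter=50000).fit(D, y)
print("non-zero dictionary atoms:", np.count_nonzero(model.coef_))
```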
Most of the existing research on degrees-of-freedom (DoF) with imperfect channel state information at the transmitter (CSIT) assumes that the messages are private, which may not reflect reality, as the two receivers can request the same content. To overcome this limitation, we consider hybrid unicast and multicast messages. In particular, we characterize the optimal DoF region for the two-user multiple-input multiple-output (MIMO) broadcast channel (BC) with imperfect CSIT and hybrid messages. For the converse, we establish a three-step procedure to obtain the tightest possible relaxation. For the achievability, since the DoF region has a specific three-dimensional structure determined by antenna configurations and CSIT qualities, we verify the existence or non-existence of corner-point candidates via a categorization of antenna configurations and CSIT qualities, and we provide a hybrid-message-aware rate-splitting scheme. In addition, we show that to achieve the strictly positive corner points, it is unnecessary to split the unicast messages into private and common parts. This implies that adding a multicast message may reduce rate-splitting complexity.
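For context, the standard DoF and CSIT-quality notions commonly used in this line of work are recalled below; the paper's exact parameterization may differ.

```latex
% Standard definitions, stated here for context only.
\[
  d_k \;=\; \lim_{P \to \infty} \frac{R_k(P)}{\log P},
  \qquad
  \alpha \;=\; \lim_{P \to \infty}
  \frac{-\log \mathbb{E}\!\left[\lVert \mathbf{h} - \hat{\mathbf{h}} \rVert^2\right]}{\log P},
\]
% d_k: degrees-of-freedom of user k's message at transmit power P;
% alpha in [0,1]: CSIT quality exponent (alpha = 0: no CSIT, alpha = 1: perfect CSIT
% in the DoF sense), with h the true channel and \hat{h} its estimate at the transmitter.
```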
Online media has revolutionized the way political information is disseminated and consumed on a global scale, and this shift has compelled political figures to adopt new strategies of capturing and retaining voter attention. These strategies often rely on emotional persuasion and appeal, and as visual content becomes increasingly prevalent in virtual space, much of political communication too has come to be marked by evocative video content and imagery. The present paper offers a novel approach to analyzing material of this kind. We apply a deep-learning-based computer-vision algorithm to a sample of 220 YouTube videos depicting political leaders from 15 different countries, which is based on an existing trained convolutional neural network architecture provided by the Python library fer. The algorithm returns emotion scores representing the relative presence of 6 emotional states (anger, disgust, fear, happiness, sadness, and surprise) and a neutral expression for each frame of the processed YouTube video. We observe statistically significant differences in the average score of expressed negative emotions between groups of leaders with varying degrees of populist rhetoric as defined by the Global Party Survey (GPS), indicating that populist leaders tend to express negative emotions to a greater extent during their public performance than their non-populist counterparts. Overall, our contribution provides insight into the characteristics of visual self-representation among political leaders, as well as an open-source workflow for further computational studies of their non-verbal communication.
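A minimal sketch of the per-frame scoring step with fer is shown below; the frame-sampling rate, the single-face aggregation, and the file name are illustrative assumptions rather than the paper's exact workflow.

```python
# A minimal sketch of per-frame emotion scoring with the fer library; sampling
# rate, aggregation, and the input file name are illustrative assumptions.
import cv2                      # pip install opencv-python
import pandas as pd
from fer import FER             # pip install fer

detector = FER(mtcnn=True)      # pre-trained CNN with MTCNN face detection
cap = cv2.VideoCapture("leader_speech.mp4")   # hypothetical input video

rows = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:     # sample roughly one frame per second at 30 fps
        faces = detector.detect_emotions(frame)
        if faces:               # keep the first detected face's emotion scores
            rows.append(faces[0]["emotions"])
    frame_idx += 1
cap.release()

scores = pd.DataFrame(rows)     # columns: angry, disgust, fear, happy, sad, surprise, neutral
print(scores.mean())            # average emotion profile for the video
```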
Hex-dominant mesh generation has received significant attention in recent research due to its superior robustness compared to pure hex-mesh generation techniques. In this work, we introduce the first structure for analyzing hex-dominant meshes. This structure builds on the base complex of pure hex-meshes but incorporates the non-hex elements for a more comprehensive and complete representation. We provide its definition and describe its construction steps. Based on this structure, we present an extraction and categorization of sheets using advanced graph matching techniques to handle the non-hex elements. This enables us to develop an enhanced visual analysis of the structure of any hex-dominant mesh. We apply this structure-based visual analysis to compare hex-dominant meshes generated by different methods and study their advantages and disadvantages. This complements the standard quality metric based on the percentage of non-hex elements in hex-dominant meshes. Moreover, we propose a strategy to extract a cleaned (optimized) valence-based singularity graph wireframe to analyze the structure of both the mesh and its sheets. Our results demonstrate that the proposed hybrid base complex provides a coarse representation of the mesh elements, and the proposed valence singularity graph wireframe provides a better internal visualization of hex-dominant meshes.
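As a small illustration of the kind of valence-based analysis mentioned above, the sketch below counts how many hex cells share each edge of a mesh; the VTK-style vertex ordering and the omission of boundary handling are simplifying assumptions, and the paper's actual wireframe extraction is more involved.

```python
# A minimal sketch of valence counting on hex-mesh edges; VTK-style vertex
# ordering is assumed and boundary detection is omitted for brevity.
from collections import Counter

# Local edges of a hexahedron in VTK vertex ordering (bottom 0-3, top 4-7).
HEX_EDGES = [(0, 1), (1, 2), (2, 3), (3, 0),
             (4, 5), (5, 6), (6, 7), (7, 4),
             (0, 4), (1, 5), (2, 6), (3, 7)]

def edge_valences(hex_cells):
    """Count how many hex cells are incident to each edge of the mesh."""
    valence = Counter()
    for cell in hex_cells:                       # cell: 8 global vertex indices
        for a, b in HEX_EDGES:
            edge = tuple(sorted((cell[a], cell[b])))
            valence[edge] += 1
    return valence

# Toy example: two hexes glued along one face.
cells = [[0, 1, 2, 3, 4, 5, 6, 7],
         [1, 8, 9, 2, 5, 10, 11, 6]]
val = edge_valences(cells)
shared = [e for e, v in val.items() if v > 1]
print("edges shared by both hexes:", shared)     # candidates for valence analysis
```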
The development of the internet has allowed for the global distribution of content, redefining media communication and property structures through various streaming platforms. Previous studies have successfully clarified the factors contributing to trends within each streaming service, yet the similarities and differences between platforms remain largely unexplored; moreover, the influence of social connections and cultural similarity is usually overlooked. We examine the social aspects of three significant streaming services--Netflix, Spotify, and YouTube--with an emphasis on the dissemination of content across countries. Using two-year-long trending chart datasets, we find that streaming content can be divided into two types: video-oriented (Netflix) and audio-oriented (Spotify). This characteristic is differentiated by accounting for the significance of social connectedness and linguistic similarity: audio-oriented content travels via social links, whereas video-oriented content tends to spread throughout linguistically akin countries. Interestingly, user-generated content on YouTube exhibits a dual character, integrating both visual and auditory characteristics, which indicates that the platform is evolving into a unique medium rather than simply occupying a midpoint between video and audio media.
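The sketch below illustrates the type of analysis described above, relating cross-country chart overlap to social connectedness and linguistic similarity; all three country-pair measures are synthetic placeholders rather than the paper's datasets.

```python
# A minimal sketch relating chart overlap to country-pair covariates; all three
# pairwise measures below are synthetic placeholders, not the paper's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_countries = 20
pairs = [(i, j) for i in range(n_countries) for j in range(i + 1, n_countries)]

# Hypothetical pairwise measures (one value per unordered country pair).
social_connectedness = rng.lognormal(size=len(pairs))       # e.g., an SCI-style index
linguistic_similarity = rng.uniform(size=len(pairs))        # e.g., shared-language score
chart_overlap = 0.5 * np.log(social_connectedness) + rng.normal(scale=0.5, size=len(pairs))

for name, x in [("social connectedness", social_connectedness),
                ("linguistic similarity", linguistic_similarity)]:
    rho, p = spearmanr(x, chart_overlap)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```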
We present Bluebell, a program logic for reasoning about probabilistic programs in which unary and relational styles of reasoning come together to create new reasoning tools. Unary-style reasoning is very expressive and is powered by foundational mechanisms for reasoning about probabilistic behaviour, such as independence and conditioning. The relational style of reasoning, on the other hand, naturally shines when the properties of interest compare the behaviour of similar programs (e.g., when proving differential privacy), managing to avoid having to characterize the output distributions of the individual programs. So far, the two styles of reasoning have largely remained separate in the many program logics designed for the deductive verification of probabilistic programs. In Bluebell, we unify these styles of reasoning through the introduction of a new modality called "joint conditioning" that can encode and illuminate the rich interaction between conditional independence and relational liftings, the two powerhouses of the two styles of reasoning.
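As a toy illustration of the relational idea (in plain Python, not Bluebell's logic), the snippet below shows two syntactically different coin-flipping programs proven equivalent by pairing their random inputs through a coupling, without ever writing down their output distributions.

```python
# A toy illustration of couplings, not Bluebell's formalism: two differently
# written samplers are shown equidistributed by pairing their random inputs.
import random

def program_a(u: float) -> int:
    # Returns 1 when the draw lands in the left half of [0, 1).
    return 1 if u < 0.5 else 0

def program_b(u: float) -> int:
    # A differently written Bernoulli(1/2) sampler: 1 in the right half.
    return 1 if u > 0.5 else 0

# Coupling: pair the draw u for program_a with the draw 1 - u for program_b.
# Both marginals are uniform, and under this pairing the outputs always agree,
# which witnesses that the two output distributions are identical.
random.seed(0)
agree = all(program_a(u) == program_b(1.0 - u)
            for u in (random.random() for _ in range(10_000)))
print("outputs agree under the coupling:", agree)
```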
This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
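As a minimal example of one traditional approach such a survey covers, the sketch below estimates a causal effect by adjusting for an observed confounder (backdoor adjustment) on synthetic data with a known true effect.

```python
# A minimal sketch of backdoor adjustment on synthetic data with a known
# treatment effect of 2.0; purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 100_000
z = rng.integers(0, 2, n)                         # observed confounder
t = rng.binomial(1, 0.2 + 0.6 * z)                # treatment depends on z
y = 2.0 * t + 3.0 * z + rng.normal(size=n)        # outcome; true effect of t is 2.0
df = pd.DataFrame({"z": z, "t": t, "y": y})

# Naive contrast is biased because z drives both treatment and outcome.
naive = df[df.t == 1].y.mean() - df[df.t == 0].y.mean()

# Backdoor adjustment: average within-stratum contrasts, weighted by P(z).
adjusted = sum(
    (g[g.t == 1].y.mean() - g[g.t == 0].y.mean()) * (len(g) / len(df))
    for _, g in df.groupby("z")
)

print(f"naive difference: {naive:.2f}, adjusted estimate: {adjusted:.2f}")  # ~2.0 after adjustment
```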
Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive to collect. To address this challenge, many efforts have been made to train complex models with small data in an unsupervised or semi-supervised fashion. In this paper, we review the recent progress on these two major categories of methods. A wide spectrum of small data models will be categorized in a big picture, where we show how they interplay with each other to motivate explorations of new ideas. We review the criteria for learning transformation-equivariant, disentangled, self-supervised, and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs), and other deep networks by exploring the distribution of unlabeled data for more powerful representations. While we focus on unsupervised and semi-supervised methods, we also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible to write an exhaustive encyclopedia covering all related work. Instead, we aim to explore the main ideas, principles, and methods in this area to reveal where we are heading on the journey towards addressing the small data challenges in this big data era.
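To make one of the self-supervised criteria concrete, the sketch below implements a transformation-prediction pretext task (predict which rotation was applied to an unlabeled image); the tiny synthetic images and linear model are stand-ins for real data and deep networks, purely for illustration.

```python
# A minimal sketch of a rotation-prediction pretext task; the synthetic 8x8
# images and linear classifier are illustrative stand-ins for real pipelines.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Unlabeled "images": 8x8 arrays whose intensity increases from top to bottom.
base = np.tile(np.linspace(0, 1, 8)[:, None], (1, 8))
images = base[None, :, :] + 0.1 * rng.normal(size=(2000, 8, 8))

# Self-supervision: rotate each image by k * 90 degrees and use k as the label.
ks = rng.integers(0, 4, len(images))
rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])

X = rotated.reshape(len(rotated), -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, ks, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("rotation-prediction accuracy:", clf.score(X_te, y_te))
```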