Social networks are usually considered positive sources of social support, a role that has been extensively studied in the context of domestic violence. To victims of abuse, social networks often provide initial emotional and practical help, as well as useful information, ahead of formal institutions. Recently, however, attention has been paid to the negative responses of social networks. In this article, we advance the theoretical debate on social networks as a source of social support by moving beyond the distinction between positive and negative ties. We do so by proposing the concepts of relational ambivalence and consistency, which describe the interactive processes by which people, intentionally or inadvertently, disregard or align with each other's relational role expectations, thereby undermining or reinforcing individual choices of action. We analyse the qualitative accounts of nineteen female victims of domestic violence in Sweden, who described the responses of their personal networks during and after the abuse. We observe how the relationships embedded in these networks were described in ambivalent and consistent terms, and how they played a role in supporting or undermining women in reframing their loving relationships as abusive; in acknowledging or dismissing perpetrators' responsibility for the abuse; in relieving women of role expectations and obligations or burdening them with further responsibilities; and in supporting or challenging their pathways out of domestic abuse. Our analysis suggests that social isolation cannot be considered simply a result of a lack of support, but rather of the complex dynamics through which support is offered and accepted, or withdrawn and refused.
Stratonovich's value of information (VoI) is a quantity that measures how much inferential gain can be obtained from a perturbed sample under an information leakage constraint. In this paper, we introduce a generalized VoI for a general loss function and general information leakage. We then derive an upper bound on the generalized VoI. Moreover, for a classical loss function, we provide an achievability condition for the upper bound that is weaker than those in previous studies. Since VoI can be viewed as a formulation of a privacy-utility trade-off (PUT) problem, we provide an interpretation of the achievability condition in the PUT context.
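The core intuition behind a value-of-information quantity can be illustrated with a toy Bayes-decision example (this is an illustrative sketch, not Stratonovich's exact formulation): the value of observing a noisy signal Y about a hidden variable X, measured as the reduction in minimum expected 0-1 loss. All distributions below are made-up assumptions.

```python
# Toy value-of-information: how much does observing Y reduce the
# minimum expected 0-1 loss when inferring X?
p_x = {0: 0.7, 1: 0.3}                      # prior over hidden X
p_y_given_x = {0: {0: 0.9, 1: 0.1},         # noisy channel P(Y | X)
               1: {0: 0.2, 1: 0.8}}

# Bayes risk with no observation: guess the prior mode.
risk_prior = 1.0 - max(p_x.values())

# Bayes risk after observing Y: for each y, guess the posterior mode.
# P(y) * (1 - max_x P(x | y)) simplifies to P(y) - max_x P(x, y).
risk_post = 0.0
for y in (0, 1):
    p_y = sum(p_x[x] * p_y_given_x[x][y] for x in p_x)
    joint_mode = max(p_x[x] * p_y_given_x[x][y] for x in p_x)
    risk_post += p_y - joint_mode

value_of_information = risk_prior - risk_post
print(round(value_of_information, 4))
```

A leakage constraint would then cap how informative the channel P(Y | X) may be, turning the computation above into the utility side of a privacy-utility trade-off.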
5G wireless networks have the potential to revolutionize future technologies. 5G is expected to meet the demands of diverse vertical applications with diverse requirements, including high traffic volume, massive connectivity, high quality of service, and low latency. To fulfill such requirements in 5G and beyond, new emerging technologies such as software-defined networking (SDN), network function virtualization (NFV), multi-access edge computing (MEC), and cloud computing (CC) are being deployed. However, these technologies raise several issues regarding transparency, decentralization, and reliability. Furthermore, 5G networks are expected to connect many heterogeneous devices and machines, which will raise several security concerns regarding users' confidentiality, data privacy, and trustworthiness. To work seamlessly and securely in such scenarios, future 5G networks need to deploy smarter and more efficient security functions. Motivated by these issues, researchers have proposed blockchain as a way to overcome them, owing to its capacity to ensure transparency, data reliability, trustworthiness, and immutability in a distributed environment. Indeed, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized applications. In this chapter, we discuss the integration of blockchain with 5G networks and beyond. We then present how blockchain applications in 5G networks and beyond could facilitate enabling various services at the edge and in the core.
Inequality in capital or resource distribution is among the important phenomena observed in populations. The sources of inequality and methods for controlling it are of practical interest. To study this phenomenon, we introduce a model of interaction between agents in a network designed to reduce inequality in the distribution of capital. To achieve this effect, we interpret the outcome of the elementary game played in the network such that winning the game is translated into a reduction of inequality. We study different interpretations of the introduced scheme and their impact on the behaviour of agents in terms of the capital distribution, and we provide examples based on the capital-dependent Parrondo's paradox. The results presented in this study provide insight into the mechanics of inequality formation in society.
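For readers unfamiliar with the underlying game, a minimal simulation of the classic capital-dependent Parrondo's paradox (with the standard textbook parameters, not the paper's inequality-reducing scheme) shows two individually losing games, A and B, that produce a winning trend when combined at random:

```python
import random

EPS = 0.005  # small bias that makes each game losing on its own

def play_A(capital, rng):
    # Game A: nearly fair coin, slightly biased against the player.
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_B(capital, rng):
    # Game B: capital-dependent coin (bad when capital is divisible by 3).
    p = (0.10 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + (1 if rng.random() < p else -1)

def play_mixed(capital, rng):
    # Choose A or B uniformly at random each round.
    return play_A(capital, rng) if rng.random() < 0.5 else play_B(capital, rng)

def average_final_capital(strategy, trials=2000, steps=500, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        c = 0
        for _ in range(steps):
            c = strategy(c, rng)
        total += c
    return total / trials

b_mean = average_final_capital(play_B)
mix_mean = average_final_capital(play_mixed)
print(round(b_mean, 2), round(mix_mean, 2))
```

Game B alone drifts downward (about -0.009 capital per step for these parameters), while the random mixture drifts upward (about +0.016 per step); the paper's scheme reinterprets such winnings as reductions of inequality rather than individual gains.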
Recent years have witnessed remarkable progress towards computational fake news detection. To mitigate its negative impact, we argue that it is critical to understand what user attributes potentially cause users to share fake news. The key to this causal-inference problem is to identify confounders -- variables that cause spurious associations between treatments (e.g., user attributes) and outcomes (e.g., user susceptibility). In fake news dissemination, confounders can be characterized by fake news sharing behavior, which inherently relates to user attributes and online activities. Learning such user behavior is typically subject to selection bias, since only users who are inclined to share news on social media are observed. Drawing on causal inference theories, we first propose a principled approach to alleviating selection bias in fake news dissemination. We then treat the learned unbiased fake news sharing behavior as a surrogate confounder that can fully capture the causal links between user attributes and user susceptibility. We theoretically and empirically characterize the effectiveness of the proposed approach and find that it could be useful in protecting society from the perils of fake news.
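The selection-bias problem can be sketched with synthetic data (this is a generic inverse-propensity-weighting illustration under made-up assumptions, not the paper's model): users with high activity are over-represented among observed sharers, so a naive average of observed sharing behavior is biased; reweighting by the inverse selection probability recovers the population-level rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
activity = rng.uniform(0, 1, n)                 # a user attribute
share = rng.random(n) < 0.2 + 0.3 * activity    # true sharing behavior

# Selection into the observed sample depends on activity (assumed known).
propensity = 0.1 + 0.8 * activity
observed = rng.random(n) < propensity

true = share.mean()                              # population sharing rate
naive = share[observed].mean()                   # biased: active users over-sampled
# Hajek-style inverse-propensity-weighted estimate over the observed users.
w = 1.0 / propensity[observed]
ipw = np.sum(share[observed] * w) / np.sum(w)
print(round(true, 3), round(naive, 3), round(ipw, 3))
```

The naive estimate overshoots the true rate because activity drives both selection and sharing; the reweighted estimate lands much closer, which is the role the paper's debiasing step plays before the behavior is used as a surrogate confounder.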
There has recently been a surge of interest in developing a new class of deep learning (DL) architectures that integrate an explicit time dimension as a fundamental building block of learning and representation mechanisms. In turn, many recent results show that topological descriptors of the observed data, encoding information on the shape of the dataset in a topological space at different scales -- that is, the persistent homology of the data -- may contain important complementary information, improving both the performance and robustness of DL. As a convergence of these two emerging ideas, we propose to enhance DL architectures with the most salient time-conditioned topological information in the data and introduce the concept of zigzag persistence into time-aware graph convolutional networks (GCNs). Zigzag persistence provides a systematic and mathematically rigorous framework for tracking the most important topological features of the observed data that tend to manifest themselves over time. To integrate the extracted time-conditioned topological descriptors into DL, we develop a new topological summary, the zigzag persistence image, and derive its theoretical stability guarantees. We validate the new GCNs with a time-aware zigzag topological layer (Z-GCNETs) on traffic forecasting and Ethereum blockchain price prediction. Our results indicate that Z-GCNET outperforms 13 state-of-the-art methods on 4 time series datasets.
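To make the "topological summary" idea concrete, here is a minimal persistence-image computation in the standard (non-zigzag) setting: each birth-death pair is mapped to birth-persistence coordinates and smeared with a persistence-weighted Gaussian onto a grid, yielding a fixed-size array that a neural layer can consume. This is a generic sketch; the paper's zigzag persistence image and its stability analysis are more involved.

```python
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.1, span=(0.0, 1.0)):
    """diagram: sequence of (birth, death) pairs with death > birth."""
    diagram = np.asarray(diagram, dtype=float)
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]          # birth-persistence coordinates
    grid = np.linspace(span[0], span[1], resolution)
    gx, gy = np.meshgrid(grid, grid, indexing="ij")
    img = np.zeros((resolution, resolution))
    for b, p in zip(birth, pers):
        # Weight each Gaussian bump by its persistence, so long-lived
        # (topologically salient) features dominate the image.
        img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img

img = persistence_image([(0.1, 0.5), (0.2, 0.9), (0.4, 0.6)])
print(img.shape)
```

The resulting array can be flattened or convolved like any image feature, which is what makes it convenient to wire into a GCN layer.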
Stock trend forecasting, which aims to predict future stock trends, is crucial for investors seeking to maximize profits from the stock market. In recent years, many event-driven methods have utilized events extracted from news, social media, and discussion boards to forecast stock trends. However, existing event-driven methods have two main shortcomings: 1) overlooking the influence of event information as differentiated by stock-dependent properties; 2) neglecting the effect of event information from other related stocks. In this paper, we propose a relational event-driven stock trend forecasting (REST) framework, which addresses the shortcomings of existing methods. To remedy the first shortcoming, we propose to model the stock context and learn the effect of event information on stocks under different contexts. To address the second shortcoming, we construct a stock graph and design a new propagation layer to propagate the effect of event information from related stocks. Experimental studies on real-world data demonstrate the effectiveness of our REST framework. The results of an investment simulation show that our framework can achieve a higher return on investment than the baselines.
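The cross-stock propagation idea can be sketched with a generic symmetrically normalized graph convolution (an assumption for illustration, not REST's actual propagation layer): each stock's event-effect vector is mixed with those of its neighbors on the stock graph.

```python
import numpy as np

def propagate(features, adjacency):
    """features: (n_stocks, d) event-effect vectors; adjacency: (n, n) 0/1 relations."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                   # add self-loops so each
    deg = a_hat.sum(axis=1)                         # stock keeps its own signal
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt          # D^-1/2 (A + I) D^-1/2
    return norm @ features

# Three stocks; stock 1 is related to both stocks 0 and 2.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
out = propagate(feats, adj)
print(out.shape)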
In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media. The widespread dissemination of disinformation and fake news can cause detrimental societal effects. Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking and annotation. The goal of this chapter is to pave the way for appreciating the challenges and advancements by: (1) introducing the types of information disorder on social media and examining their differences and connections; (2) describing important and emerging tasks for combating disinformation through characterization, detection, and attribution; and (3) discussing a weak supervision approach to detecting disinformation with limited labeled data. We then provide an overview of the chapters in this book, which represent recent advancements in three related parts: (1) user engagement in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, and clickbait. We hope this book serves as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains.
Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors may propagate to event trigger and argument role recognition. A few recent studies have addressed this problem by jointly predicting entity mentions, event triggers, and arguments. However, such work has been limited to discrete, hand-engineered features for representing contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly predicts entity mentions, event triggers, and arguments based on shared hidden representations from deep learning. Experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.
We propose a novel approach to multimodal sentiment analysis using deep neural networks that combine visual analysis and natural language processing. Our goal differs from the standard sentiment analysis goal of predicting whether a sentence expresses positive or negative sentiment; instead, we aim to infer the latent emotional state of the user. Thus, we focus on predicting the emotion word tags attached by users to their Tumblr posts, treating these as "self-reported emotions." We demonstrate that our multimodal model combining both text and image features outperforms separate models based solely on either images or text. Our model's results are interpretable, automatically yielding sensible word lists associated with emotions. We explore the structure of emotions implied by our model, compare it to what has been posited in the psychology literature, and validate our model on a set of images that have been used in psychology studies. Finally, our work also provides a useful tool for the growing academic study of images - both photographs and memes - on social networks.
The Everyday Sexism Project documents everyday examples of sexism reported by volunteer contributors from all around the world. It collected 100,000 entries in 13+ languages within the first 3 years of its existence. The content of reports in various languages submitted to Everyday Sexism is a valuable source of crowdsourced information with great potential for feminist and gender studies. In this paper, we take a computational approach to analyzing the content of these reports. We use topic-modelling techniques to extract emerging topics and concepts from the reports, and to map the semantic relations between those topics. The resulting picture closely resembles, and adds to, that arrived at through qualitative analysis, showing that this form of topic modeling could be useful for sifting through datasets that had not previously been subject to any analysis. More precisely, we produce a map of topics at two different resolutions of our topic model and discuss the connections between the identified topics. In the low-resolution picture, for instance, we found the topics Public space/Street, Online, Work related/Office, Transport, School, Media harassment, and Domestic abuse. Among these, the strongest connection is between Public space/Street harassment and Domestic abuse and sexism in personal relationships. The strength of the relationships between topics illustrates the fluid and ubiquitous nature of sexism, with no single experience being unrelated to another.