Advances in peer-to-peer wireless power transfer (P2P-WPT) have empowered portable and mobile devices to wirelessly replenish their batteries by directly interacting with other nearby devices. Existing works unrealistically assume that users exchange energy with any other user at every such opportunity. However, due to the users' mobility, the inter-node meetings in such opportunistic mobile networks vary, and P2P energy exchange in such scenarios remains uncertain. Additionally, the social interests and interactions of the users influence their mobility as well as the energy exchange between them. Existing P2P-WPT methods do not consider the joint problem of energy exchange under users' inevitable mobility and the influence of sociality on that mobility. As a result of computing with imprecise information, these works achieve energy balance at a slower rate and suffer energy loss for the crowd. Motivated by this problem scenario, in this work we present a wireless crowd charging method, MoSaBa, which leverages mobility prediction and social information for improved energy balancing. MoSaBa incorporates two dimensions of social information, namely social context and social relationships, as additional features for predicting contact opportunities. The method explores different pairs of peers such that energy balance is achieved at a faster rate and its quality improves, i.e., energy loss for the crowd remains low. We justify the peer selection method in MoSaBa through a detailed performance evaluation. Compared to the existing state-of-the-art, the proposed method achieves better trade-offs between energy efficiency, energy balance quality, and convergence time.
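To make the peer-selection idea concrete, here is a minimal Python sketch (not the authors' implementation) of mobility-aware energy balancing: pairs are scored by predicted contact probability times energy gap, and each exchange loses a fraction of the transferred energy. The scoring rule, the `contact_prob` table, and the efficiency value are illustrative assumptions.

```python
import random

def select_pair(nodes, contact_prob):
    """Pick the donor/receiver pair with the best expected balancing gain."""
    best, best_score = None, float("-inf")
    for a in nodes:
        for b in nodes:
            if a is b:
                continue
            # Favor pairs that are likely to meet (mobility/social prediction)
            # and whose energy gap is large, so each exchange moves the crowd
            # closer to the mean energy level.
            gap = a["energy"] - b["energy"]
            score = contact_prob[(a["id"], b["id"])] * gap
            if score > best_score:
                best, best_score = (a, b), score
    return best

def exchange(donor, receiver, amount=1.0, efficiency=0.8):
    """Transfer `amount` units; (1 - efficiency) is lost in conversion."""
    donor["energy"] -= amount
    receiver["energy"] += amount * efficiency

random.seed(0)
nodes = [{"id": i, "energy": random.uniform(20, 100)} for i in range(5)]
contact_prob = {(a["id"], b["id"]): random.random() for a in nodes for b in nodes}
for _ in range(10):
    donor, receiver = select_pair(nodes, contact_prob)
    exchange(donor, receiver)
print([round(n["energy"], 1) for n in nodes])
```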
The most prominent tasks in emotion analysis are to assign emotions to texts and to understand how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, appealing to an empathetic, intersubjective understanding of events, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims at explaining the link between events and emotions. Appraisals can be formalized as variables that measure a cognitive evaluation by people living through an event that they consider relevant. They include the assessment of whether an event is novel, whether the person considers themselves to be responsible, whether it is in line with their own goals, and many others. Such appraisals explain which emotions are developed based on an event, e.g., that a novel situation can induce surprise or one with uncertain consequences could evoke fear. We analyze the suitability of appraisal theories for emotion analysis in text with the goal of understanding whether appraisal concepts can reliably be reconstructed by annotators, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve that, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. Then, we ask readers to reconstruct emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline against which to judge models' performance. Our comparison of text classification methods to human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the categorization of emotions in text.
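As a rough illustration of the two-stage idea, the following sketch first predicts appraisal variables from text and then feeds them, alongside text features, into an emotion classifier. The three toy event descriptions and the appraisal dimensions (novelty, own responsibility, goal conduciveness) are placeholders, not the paper's corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

texts = ["A car suddenly cut me off on the highway.",
         "I finally passed the exam I studied months for.",
         "My colleague took credit for my work."]
emotions = ["fear", "joy", "anger"]
# Placeholder appraisal annotations:
# columns = [novelty, own_responsibility, goal_conduciveness]
appraisals = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])

vec = TfidfVectorizer()
X_text = vec.fit_transform(texts).toarray()

# Stage 1: predict each appraisal dimension from text.
appraisal_models = [LogisticRegression().fit(X_text, appraisals[:, j])
                    for j in range(appraisals.shape[1])]
X_appr = np.column_stack([m.predict(X_text) for m in appraisal_models])

# Stage 2: the emotion classifier sees text features plus predicted appraisals.
clf = LogisticRegression().fit(np.hstack([X_text, X_appr]), emotions)
print(clf.predict(np.hstack([X_text, X_appr])))
```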
Recurring outbreaks of COVID-19 have had enduring effects on global society, which calls for a predictor of pandemic waves that uses various data with early availability. Existing prediction models that forecast the first outbreak wave using mobility data may not be applicable to multiwave prediction, because evidence from the USA and Japan shows that mobility patterns across different waves exhibit varying relationships with fluctuations in infection cases. Therefore, to predict the multiwave pandemic, we propose a Social Awareness-Based Graph Neural Network (SAB-GNN) that considers the decay of symptom-related web search frequency to capture the changes in public awareness across multiple waves. Our model combines a GNN and an LSTM to model the complex relationships among urban districts, inter-district mobility patterns, web search history, and future COVID-19 infections. We train our model to predict future pandemic outbreaks in the Tokyo area using its mobility and web search data from April 2020 to May 2021, spanning four pandemic waves, collected by Yahoo Japan Corporation under strict privacy protection rules. Results demonstrate that our model outperforms state-of-the-art baselines such as ST-GNN, MPNN, and GraphLSTM. Despite being computationally inexpensive (only 3 layers and 10 hidden neurons), the proposed model enables public agencies to anticipate and prepare for future pandemic outbreaks.
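A minimal PyTorch sketch of the GNN+LSTM combination described above: district features, including an exponentially decayed web-search channel, are mixed over a mobility graph and fed to an LSTM that predicts next-step infections. Layer sizes, the decay form, and the random inputs are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SABGNN(nn.Module):
    def __init__(self, n_feats, hidden=10):
        super().__init__()
        self.gnn = nn.Linear(n_feats, hidden)   # one graph-conv-style layer
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # next-step infection count

    def forward(self, x, adj):
        # x: (districts, time, feats); adj: (districts, districts) mobility.
        h = torch.relu(self.gnn(torch.einsum("ij,jtf->itf", adj, x)))
        out, _ = self.lstm(h)
        return self.head(out[:, -1])

T, N, F = 12, 23, 4                     # weeks, districts, feature channels
x = torch.randn(N, T, F)
decay = torch.exp(-0.1 * torch.arange(T, dtype=torch.float))
x[..., 0] *= decay                      # decayed symptom-search frequency
adj = torch.softmax(torch.randn(N, N), dim=1)
print(SABGNN(F)(x, adj).shape)          # torch.Size([23, 1])
```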
Machine learning (ML) and artificial intelligence (AI) tools increasingly permeate every possible social, political, and economic sphere; sorting, taxonomizing, and predicting complex human behaviour and social phenomena. However, from fallacious and naive assumptions about complex adaptive systems to the datasets underlying models, these systems are beset by problems, challenges, and limitations. They remain opaque and unreliable, and fail to consider societal and structural oppressive systems, disproportionately harming those at the margins of society while benefiting the most powerful. The various challenges, problems, and pitfalls of these systems are a hot topic of research in various areas, such as critical data/algorithm studies, science and technology studies (STS), embodied and enactive cognitive science, complexity science, Afro-feminism, and the broadly construed emerging field of Fairness, Accountability, and Transparency (FAccT). Yet, these fields of enquiry often proceed in silos. This thesis weaves together seemingly disparate fields of enquiry to examine core scientific and ethical challenges, pitfalls, and problems of AI. In this thesis, I a) review the historical and cultural ecology from which AI research emerges, b) examine the shaky scientific grounds of machine prediction of complex behaviour, illustrating how predicting complex behaviour with precision is impossible in principle, c) audit large-scale datasets behind current AI, demonstrating how they embed societal, historical, and structural injustices, d) study the seemingly neutral values of ML research and put forward 67 prominent values underlying it, e) examine some of the insidious and worrying applications of computer vision research, and f) put forward a framework for approaching challenges, failures, and problems surrounding ML systems, as well as alternative ways forward.
We present a systematic refactoring of the conventional treatment of privacy analyses, basing it on mathematical concepts from the framework of Quantitative Information Flow (QIF). The approach we suggest brings three principal advantages: it is flexible, allowing for precise quantification and comparison of privacy risks for attacks both known and novel; it can be computationally tractable for very large, longitudinal datasets; and its results are explainable both to politicians and to the general public. We apply our approach to a very large case study: the Educational Censuses of Brazil, curated by the governmental agency INEP, which comprise over 90 attributes of approximately 50 million individuals released longitudinally every year since 2007. These datasets have only very recently (2018-2021) attracted legislation to regulate their privacy -- while at the same time continuing to maintain the openness that had been sought in Brazilian society. INEP's reaction to that legislation was the genesis of our project with them. In our conclusions here we share the scientific, technical, and communication lessons we learned in the process.
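For readers unfamiliar with QIF, the following small worked example computes the core quantities such an analysis rests on: prior and posterior Bayes vulnerability for a channel from secrets to observations, and their ratio as multiplicative leakage. The prior and channel matrix are illustrative, not taken from the census data.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])           # prior over 3 secret values
C = np.array([[0.8, 0.2],                # C[x, y] = P(observe y | secret x)
              [0.5, 0.5],
              [0.1, 0.9]])

prior_vuln = pi.max()                    # adversary's best guess beforehand
joint = pi[:, None] * C                  # P(x, y)
post_vuln = joint.max(axis=0).sum()      # best guess per observation y
print(f"prior={prior_vuln:.3f} posterior={post_vuln:.3f} "
      f"leakage={post_vuln / prior_vuln:.3f}x")
```

Here the observation raises the adversary's chance of guessing the secret from 0.5 to 0.58, a multiplicative leakage of 1.16; comparing such numbers across attack models is what makes the analysis both quantitative and explainable.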
Cell-free massive MIMO is a variant of multiuser MIMO and massive MIMO in which the total number of antennas $LM$ is distributed among the $L$ remote radio units (RUs) in the system, enabling macrodiversity and joint processing. Due to pilot contamination and system scalability, each RU can only serve a limited number of users. Obtaining the number of users simultaneously served on one resource block (RB) by the $L$ RUs that maximizes the sum spectral efficiency (SE) is, however, not simple, as many of the system parameters are intertwined. For example, the dimension $\tau_p$ of orthogonal Demodulation Reference Signal (DMRS) pilots limits the number of users that an RU can serve. Thus, depending on $\tau_p$, the optimal user load yielding the maximum sum SE will vary. Another key parameter is the users' uplink transmit power $P^{\rm ue}_{\rm tx}$, for which a trade-off exists between users in outage, interference, and energy inefficiency. We study the effect of multiple parameters in cell-free massive MIMO on the sum SE and user outage, as well as the performance of different levels of RU antenna distribution. We provide extensive numerical investigations to illuminate the behavior of the system SE with respect to the various parameters, including the effect of the system load, i.e., the number of active users to be served on any RB. The results show that, in general, a system with many RUs and few RU antennas yields the largest sum SE, although the benefits of distributed antennas diminish in very dense networks.
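The following toy Monte Carlo sketch illustrates the antenna-distribution trade-off in the simplest possible terms: with the total $LM$ fixed, it compares few large RUs against many small ones via the proxy $\mathrm{SE}_k = \log_2(1 + \mathrm{SINR}_k)$. The path-loss and SINR models are deliberately crude placeholders, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)
K, total, noise = 16, 64, 1e-2          # users, total antennas L*M, noise power

def sum_se(L, M, trials=100):
    se = 0.0
    for _ in range(trials):
        ru = rng.uniform(0, 1, size=(L, 2))      # RU positions (unit square)
        ue = rng.uniform(0, 1, size=(K, 2))      # user positions
        d = np.linalg.norm(ru[:, None] - ue[None], axis=-1)
        beta = 1.0 / (1.0 + (d / 0.1) ** 3.7)    # toy large-scale fading
        g = np.repeat(beta, M, axis=0)           # every RU carries M antennas
        h = rng.normal(size=(L * M, K)) + 1j * rng.normal(size=(L * M, K))
        h *= np.sqrt(g / 2)
        G = np.abs(h.conj().T @ h) ** 2          # gains after MRC combining
        sig = np.diag(G)
        interf = G.sum(axis=1) - sig
        se += np.log2(1 + sig / (interf + noise)).sum()
    return se / trials

for L in (1, 4, 16, 64):
    M = total // L
    print(f"L={L:2d}, M={M:2d}: sum SE ~ {sum_se(L, M):.1f} bit/s/Hz")
```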
End-to-End (E2E) network slicing enables wireless networks to provide diverse services on a common infrastructure. Each E2E slice, including resources of the radio access network (RAN) and the core network, is rented to mobile virtual network operators (MVNOs) to provide a specific service to end-users. RAN slicing, which is realized through wireless network virtualization, involves sharing the frequency spectrum and base station antennas in the RAN. Similarly, in core slicing, which is achieved by network function virtualization, data center resources such as commodity servers and physical links are shared between users of different MVNOs. In this paper, we study E2E slicing with the aim of minimizing the total energy consumption. The stated optimization problem is non-convex; we solve it with a sub-optimal algorithm proposed here. The simulation results show that our proposed joint power control, server and link allocation (JPSLA) algorithm achieves a 30% improvement over the disjoint scheme, in which the RAN and core are sliced separately.
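As a hedged illustration of how such a sub-optimal solver can be structured, the sketch below alternates between a greedy server/link allocation step and a power-reduction step. Every model in it (the saturating rate function, capacities, rate requirements) is a stand-in, not the paper's JPSLA formulation.

```python
import random

random.seed(1)
users = [{"rate_req": random.uniform(0.5, 1.2), "power": 1.0} for _ in range(8)]
servers = [{"cap": 6.0, "load": 0.0} for _ in range(3)]

def rate(u):
    # Toy saturating rate model: more power, diminishing returns.
    return 2.0 * u["power"] / (u["power"] + 0.5)

for _ in range(20):                     # alternate between the two blocks
    # Block 1: server/link allocation -- least-loaded feasible server first.
    for s in servers:
        s["load"] = 0.0
    for u in users:
        feasible = [s for s in servers if s["load"] + u["rate_req"] <= s["cap"]]
        s = min(feasible, key=lambda s: s["load"])
        s["load"] += u["rate_req"]
        u["server"] = servers.index(s)
    # Block 2: power control -- shrink power while the rate target holds.
    for u in users:
        while rate(u) > u["rate_req"] and u["power"] > 0.05:
            u["power"] *= 0.9

print(f"total transmit power ~ {sum(u['power'] for u in users):.2f} W")
```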
Heart disease has become one of the most serious conditions affecting human life, and has emerged as one of the leading causes of mortality across the globe during the last decade. To prevent further damage to patients, an accurate and timely diagnosis of heart disease is essential. Recently, non-invasive procedures such as artificial intelligence-based techniques have seen increasing use in medicine. In particular, machine learning offers several algorithms and techniques that are widely used and highly useful for accurately diagnosing heart disease in less time. However, predicting heart disease is not an easy task. The increasing size of medical datasets has made it complicated for practitioners to understand complex feature relations and make disease predictions. Accordingly, the aim of this research is to identify the most important risk factors from a high-dimensional dataset, which helps in the accurate classification of heart disease with fewer complications. For a broader analysis, we use two heart disease datasets with various medical features. The classification results of the benchmarked models show that relevant features have a high impact on classification accuracy. Even with a reduced number of features, the performance of the classification models improved significantly, with reduced training time compared with models trained on the full feature set.
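A minimal sketch of the feature-selection-then-classify pipeline, using a synthetic stand-in for a heart-disease dataset (the study itself uses two real medical datasets): random-forest importances pick the top risk factors, and a classifier trained on the reduced set is compared against the full feature set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import numpy as np

# Synthetic placeholder: 30 "medical features", 8 of which are informative.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

forest = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
top = np.argsort(forest.feature_importances_)[::-1][:8]   # top-8 risk factors

full = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
reduced = RandomForestClassifier(random_state=42).fit(X_tr[:, top], y_tr)
print("full set :", accuracy_score(y_te, full.predict(X_te)))
print("top-8    :", accuracy_score(y_te, reduced.predict(X_te[:, top])))
```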
Since real-world objects and their interactions are often multi-modal and multi-typed, heterogeneous networks have been widely used as a more powerful, realistic, and generic superclass of traditional homogeneous networks (graphs). Meanwhile, representation learning (a.k.a. embedding) has recently been intensively studied and shown effective for various network mining and analytical tasks. In this work, we aim to provide a unified framework to deeply summarize and evaluate existing research on heterogeneous network embedding (HNE), which includes but goes beyond a normal survey. Since there is already a broad body of HNE algorithms, as the first contribution of this work, we provide a generic paradigm for the systematic categorization and analysis of the merits of various existing HNE algorithms. Moreover, existing HNE algorithms, though mostly claimed to be generic, are often evaluated on different datasets. Understandably, given the application-driven development of HNE, such indirect comparisons largely hinder the proper attribution of improved task performance to effective data preprocessing versus novel technical design, especially considering the various possible ways to construct a heterogeneous network from real-world application data. Therefore, as the second contribution, we create four benchmark datasets with various properties regarding scale, structure, attribute/label availability, etc., from different sources, towards handy and fair evaluations of HNE algorithms. As the third contribution, we carefully refactor and amend the implementations of, and create friendly interfaces for, 13 popular HNE algorithms, and provide all-around comparisons among them over multiple tasks and experimental settings.
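The kind of unified evaluation such a benchmark enables can be sketched in a few lines: any HNE method reduces to a node-to-vector map, which is then scored on downstream node classification and link prediction. The random embeddings, labels, and edges below are placeholders standing in for a real algorithm's output and a real dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 16))            # one embedding vector per node
labels = rng.integers(0, 4, size=200)       # node classification labels

Z_tr, Z_te, y_tr, y_te = train_test_split(emb, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("micro-F1:", f1_score(y_te, clf.predict(Z_te), average="micro"))

# Link prediction: score a pair by the dot product of endpoint embeddings.
pos = rng.integers(0, 200, size=(300, 2))   # observed edges (placeholder)
neg = rng.integers(0, 200, size=(300, 2))   # sampled non-edges
scores = np.concatenate([(emb[p[:, 0]] * emb[p[:, 1]]).sum(1)
                         for p in (pos, neg)])
truth = np.concatenate([np.ones(300), np.zeros(300)])
print("link AUC:", roc_auc_score(truth, scores))
```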
Knowledge graph completion aims to predict missing relations between entities in a knowledge graph. While many different methods have been proposed, there is a lack of a unifying framework that would lead to state-of-the-art results. Here we develop PathCon, a knowledge graph completion method that harnesses four novel insights to outperform existing methods. PathCon predicts relations between a pair of entities by: (1) considering the Relational Context of each entity, i.e., the relation types adjacent to the entity, modeled through a novel edge-based message passing scheme; (2) considering the Relational Paths capturing all paths between the two entities; and (3) adaptively integrating the Relational Context and Relational Paths through a learnable attention mechanism. Importantly, (4) in contrast to conventional node-based representations, PathCon represents context and paths using only the relation types, which makes it applicable in an inductive setting. Experimental results on knowledge graph benchmarks as well as our newly proposed dataset show that PathCon outperforms state-of-the-art knowledge graph completion methods by a large margin. Finally, PathCon is able to provide interpretable explanations by identifying relations that provide the context and paths that are important for a given predicted relation.
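A condensed sketch of the scoring idea, with toy inputs: entities are represented by the relation types around them, relation paths are embedded and attended over with the context as query, and candidate relations are scored against the combined vector. Dimensions, the aggregation functions, and the inputs are illustrative, not PathCon's actual implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_rel, dim = 5, 8
rel_emb = nn.Embedding(n_rel, dim)      # one vector per relation type

def context_vector(adjacent_rels):
    # Relational Context: aggregate relation types adjacent to an entity
    # (a stand-in for the paper's edge-based message passing).
    return rel_emb(torch.tensor(adjacent_rels)).mean(0)

def path_vector(paths, query):
    # Relational Paths: embed each head-to-tail relation path, then attend
    # over paths with the context acting as the attention query.
    vecs = torch.stack([rel_emb(torch.tensor(p)).mean(0) for p in paths])
    attn = torch.softmax(vecs @ query, dim=0)
    return attn @ vecs

ctx = context_vector([0, 2]) + context_vector([1, 2])   # head + tail context
paths = [[0, 1], [2], [3, 4, 1]]                        # toy relation paths
joint = ctx + path_vector(paths, ctx)
scores = torch.softmax(rel_emb.weight @ joint, dim=0)
print(scores)                           # distribution over candidate relations
```

Note that nothing here touches entity identities: both representations are built purely from relation types, which is what allows the inductive setting.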
Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at //github.com/uclnlp/gntp.
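The greedy step at the heart of GNTPs can be illustrated in a few lines: rather than softly unifying a goal with every fact in the KB (as NTPs do), a nearest-neighbour search keeps only the top-k most promising facts per proof step. The random embeddings and the RBF-style unification score below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
kb = rng.normal(size=(10_000, 16))        # embedded KB facts (placeholder)
goal = rng.normal(size=16)                # embedded goal to prove

def unification_score(a, b):
    # RBF-style soft unification: close embeddings unify with high score.
    return np.exp(-np.linalg.norm(a - b))

# Greedy step: nearest-neighbour search keeps the k most promising facts,
# turning an O(|KB|) proof expansion into an O(k) one per step.
k = 5
dists = np.linalg.norm(kb - goal, axis=1)
top_k = np.argsort(dists)[:k]
for idx in top_k:
    print(idx, round(unification_score(goal, kb[idx]), 4))
```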