Impact and originality are two critical dimensions for evaluating scientific publications, measured by citation and disruption metrics respectively. Despite the extensive effort made to understand the statistical properties and evolution of each of these metrics, the relation between the two remains unclear. In this paper, we study the evolution over the last 70 years of the correlation between scientific papers' citation and disruption, and surprisingly find a decreasing trend from positive to negative correlations over the years. Consequently, over the years there are fewer and fewer disruptive works among the highly cited papers. These results suggest that highly disruptive studies nowadays attract less attention from the scientific community. An analysis of papers' references supports this trend, showing that papers citing older, less popular, and more diverse references have come to receive fewer citations. A possible explanation for this declining attention is the increasing information overload in science, under which citations become an ever more prominent signal of impact. This is supported by the evidence that research fields with more papers exhibit a more negative correlation between citation and disruption. Finally, we show the generality of our findings by analyzing and comparing six disciplines.
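To make the two metrics concrete, the sketch below computes a CD-style disruption index alongside citation counts and their Pearson correlation. The counts and variable names are invented for illustration, and the formula follows the widely used Funk and Owen-Smith formulation, which may differ in detail from the measure the paper actually employs.

```python
import numpy as np

def disruption_index(citing_focal_only, citing_both, citing_refs_only):
    """CD-style disruption index (Funk & Owen-Smith / Wu et al.):
    approaches +1 when later work cites the focal paper but ignores its
    references (disruptive), -1 when later work cites both (consolidating)."""
    n_i, n_j, n_k = citing_focal_only, citing_both, citing_refs_only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Hypothetical per-paper counts: (cites focal only, cites both, cites refs only)
papers = [(40, 5, 10), (3, 30, 20), (12, 12, 6), (1, 50, 40)]
citations = [c_i + c_j for c_i, c_j, _ in papers]  # citations to the focal paper
disruption = [disruption_index(*p) for p in papers]

# Pearson correlation between citation count and disruption
r = np.corrcoef(citations, disruption)[0, 1]
print(f"citation-disruption correlation: {r:.3f}")
```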
Adversarial attack research in natural language processing (NLP) has made significant progress in designing powerful attack methods and defence approaches. However, few efforts have sought to identify which source samples are the most attackable or robust, i.e., whether we can determine, for an unseen target model, which samples are the most vulnerable to an adversarial attack. This work formally extends the definition of sample attackability/robustness for NLP attacks. Experiments on two popular NLP datasets, four state-of-the-art models, and four different NLP adversarial attack methods demonstrate that sample uncertainty is insufficient for describing the characteristics of attackable/robust samples, and hence a deep-learning-based detector can perform much better at identifying the most attackable and robust samples for an unseen target model. Nevertheless, further analysis finds that there is little agreement on which samples are considered the most attackable/robust across different NLP attack methods, explaining the lack of portability of attackability detection methods across attack methods.
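As a minimal sketch of the uncertainty baseline the abstract finds insufficient, the following ranks samples by the predictive entropy of a model's softmax outputs; the probabilities and the two-class setup are hypothetical, and this is only the heuristic a learned detector is compared against, not the detector itself.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a model's output distribution: the simple uncertainty
    heuristic for ranking how attackable a sample might be."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

# Hypothetical softmax outputs for four samples (e.g., a sentiment model)
probs = np.array([[0.98, 0.02],   # confident -> presumed robust
                  [0.55, 0.45],   # uncertain -> presumed attackable
                  [0.70, 0.30],
                  [0.51, 0.49]])

# Rank samples from most to least "attackable" under the uncertainty heuristic
ranking = np.argsort(-predictive_entropy(probs))
print("uncertainty-based attackability ranking:", ranking)
```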
In this manuscript, we discuss the substantial importance of Bayesian reasoning in social science research. In particular, we focus on the foundational elements of fitting models under the Bayesian paradigm. We aim to offer a frame of reference for a broad audience, not necessarily with specialized knowledge in Bayesian statistics, yet interested in incorporating such methods into the study of social phenomena. We illustrate Bayesian methods through case studies on political surveys, population dynamics, and standardized educational testing. Specifically, we provide technical details on topics such as conjugate and non-conjugate modeling, hierarchical modeling, Bayesian computation, goodness of fit, and model testing.
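As a minimal illustration of the conjugate modeling the manuscript covers, the sketch below performs Beta-Binomial updating for a hypothetical political survey; the prior, sample size, and support counts are invented for the example.

```python
import numpy as np

# Conjugate Beta-Binomial updating, the simplest conjugate case:
# prior Beta(a, b) on a candidate's support rate, Binomial survey likelihood.
a_prior, b_prior = 1.0, 1.0          # uniform prior on the support rate
n_respondents, n_support = 500, 270  # hypothetical survey data

# Conjugacy gives the posterior in closed form: Beta(a + successes, b + failures)
a_post = a_prior + n_support
b_post = b_prior + (n_respondents - n_support)

posterior_mean = a_post / (a_post + b_post)
# 95% credible interval via Monte Carlo draws from the posterior
draws = np.random.default_rng(0).beta(a_post, b_post, size=100_000)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean {posterior_mean:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```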
Blockchain technology has transformed the digital sphere by providing a transparent, secure, and decentralized platform for data security across a range of industries, including cryptocurrencies and supply chain management. However, blockchain's integrity and dependability have been jeopardized by a rising number of security threats, making it a target for cybercriminals. By summarizing suggested fixes, this research aims to offer a thorough analysis of mitigating blockchain attacks. The objectives of the paper include identifying attacks that exploit blockchain weaknesses, evaluating various solutions, and determining how effective and efficient they are at preventing these attacks. The study also highlights how crucial it is to take into account the particular needs of every blockchain application. This study provides beneficial perspectives and insights for blockchain researchers and practitioners, making it essential reading for those interested in current and future trends in blockchain security research.
In recent years, online social networks have been the target of adversaries who seek to introduce discord into societies, to undermine democracies and to destabilize communities. Often the goal is not to favor a certain side of a conflict but to increase disagreement and polarization. To get a mathematical understanding of such attacks, researchers use opinion-formation models from sociology, such as the Friedkin--Johnsen model, and formally study how much discord the adversary can produce when altering the opinions of only a small set of users. In this line of work, it is commonly assumed that the adversary has full knowledge of the network topology and the opinions of all users. However, the latter assumption is often unrealistic in practice, where user opinions are unavailable or simply difficult to estimate accurately. To address this concern, we raise the following question: Can an attacker sow discord in a social network, even when only the network topology is known? We answer this question affirmatively. We present approximation algorithms for detecting a small set of users who are highly influential for the disagreement and polarization in the network. We show that when the adversary radicalizes these users, and if the initial disagreement/polarization in the network is not very high, then our method gives a constant-factor approximation relative to the setting in which the user opinions are known. To find the set of influential users, we provide a novel approximation algorithm for a variant of MaxCut in graphs with positive and negative edge weights. We experimentally evaluate our methods, which have access only to the network topology, and we find that they perform similarly to methods that have access to both the network topology and all user opinions. We further present an NP-hardness proof, resolving an open question posed by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
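For readers unfamiliar with the Friedkin--Johnsen model, the following is a minimal sketch of its equilibrium opinions z = (I + L)^{-1} s and the edge-wise disagreement objective, together with a toy "radicalization" of one user. The graph, innate opinions, and choice of user are hypothetical, and this is not the paper's approximation algorithm, only the model it builds on.

```python
import numpy as np

def fj_equilibrium(adj, innate):
    """Friedkin--Johnsen equilibrium opinions: z = (I + L)^{-1} s,
    where L is the graph Laplacian and s the innate opinions."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.solve(np.eye(len(innate)) + L, innate)

def disagreement(adj, z):
    """Sum over edges of squared expressed-opinion differences."""
    diff = z[:, None] - z[None, :]
    return 0.5 * np.sum(adj * diff**2)  # 0.5 since each edge appears twice

# Toy 4-node path graph with innate opinions in [-1, 1]
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
s = np.array([-0.8, -0.2, 0.3, 0.9])

z = fj_equilibrium(adj, s)
print("equilibrium opinions:", np.round(z, 3))
print("disagreement:", round(disagreement(adj, z), 3))

# An adversary "radicalizes" a user by pushing their innate opinion
# to an extreme; the chosen index is an arbitrary illustration.
s_attacked = s.copy()
s_attacked[1] = -1.0
z2 = fj_equilibrium(adj, s_attacked)
print("disagreement after attack:", round(disagreement(adj, z2), 3))
```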
Technical debt is a well-known challenge in software development, and its negative impact on software quality, maintainability, and performance is widely recognized. In recent years, artificial intelligence (AI) has proven to be a promising approach to assist in managing technical debt. This paper presents a comprehensive literature review of existing research on the use of AI-powered tools for technical debt avoidance in software development. In this literature review, we analyzed 15 related research papers covering various AI-powered techniques, such as code analysis and review, automated testing, code refactoring, predictive maintenance, code generation, and code documentation, and explored their effectiveness in addressing technical debt. The review also discusses the benefits and challenges of using AI for technical debt management, provides insights into the current state of research, and highlights gaps and opportunities for future research. The findings of this review suggest that AI has the potential to significantly improve technical debt management in software development, and that existing research provides valuable insights into how AI can be leveraged to address technical debt effectively and efficiently. However, the review also highlights several challenges and limitations of current approaches, such as the need for high-quality data and ethical considerations, and underscores the importance of further research to address these issues. The paper provides a comprehensive overview of the current state of research on AI for technical debt avoidance and offers practical guidance for software development teams seeking to leverage AI in their development processes to mitigate technical debt effectively.
In recent years, journalists and other researchers have used web archives as an important resource for their study of disinformation. This paper provides several examples of this use and also brings together some of the work that the Old Dominion University Web Science and Digital Libraries (WS-DL) research group has done in this area. We will show how web archives have been used to investigate changes to webpages, study archived social media including deleted content, and study known disinformation that has been archived.
Digital twins (DT) are often defined as a pairing of a physical entity and a corresponding virtual entity that mimics certain aspects of the former, depending on the use-case. In recent years, this concept has facilitated numerous use-cases ranging from design to validation and predictive maintenance of large and small high-tech systems. Although growing in popularity in both industry and academia, digital twins and the methodologies for developing and maintaining them differ vastly. To better understand these differences and similarities, we performed a semi-structured interview study with 19 professionals from industry and academia who are closely associated with different lifecycle stages of digital twins. In this paper, we present our analysis and findings from this study, which is structured around eight research questions (RQs), and we present our findings per RQ. In general, we identified an overall lack of uniformity in the understanding of digital twins and in the tools, techniques, and methodologies used for their development and maintenance. Furthermore, considering that digital twins are software-intensive systems, we recognize a significant growth potential for adopting more software engineering practices, processes, and expertise in various stages of a digital twin's lifecycle.
As a unifying concept in economics, game theory, and operations research, and even in the Robotics and AI fields, utility is used to evaluate the level of individual needs, preferences, and interests. Especially for decision-making and learning in multi-agent/robot systems (MAS/MRS), a suitable utility model can guide agents in choosing reasonable strategies to achieve their current needs and in learning to cooperate and organize their behaviors, optimizing the system's utility, building stable and reliable relationships, and guaranteeing each group member's sustainable development, much as in human society. Although these systems' complex, large-scale, and long-term behaviors are strongly determined by the fundamental characteristics of the underlying relationships, there has been little discussion of the theoretical aspects of such mechanisms and their fields of application in Robotics and AI. This paper introduces a utility-oriented needs paradigm to describe and evaluate the inner and outer relationships among agents' interactions. We then survey existing literature in relevant fields to support it and propose several promising research directions, along with some open problems deemed necessary for further investigation.
This paper serves as a survey of recent advances in large-margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which are probably the most prominent machine learning models for large-scale data in the community over the past decade. We generalize the formulation of classification margins from classical research to the latest DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and comprehensively introduce recent efforts to enlarge the margins of DNNs. Since different methods take disparate viewpoints, we categorize them into groups for ease of comparison and discussion. We hope our discussion and overview inspire new research in the community aimed at improving the performance of DNNs, and we also point to directions in which the large-margin principle could be verified to provide theoretical evidence for why certain regularizations of DNNs function well in practice. We have kept the paper concise so that the crucial spirit of large-margin learning and related methods is better emphasized.
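As a minimal sketch of the classification margin being surveyed, the code below computes the multiclass logit margin f_y(x) - max_{k≠y} f_k(x). The logits and labels are hypothetical, and geometric (input-space) margins would additionally require normalizing by input gradients, which this sketch does not do.

```python
import numpy as np

def logit_margin(logits, labels):
    """Multiclass classification margin: f_y(x) - max_{k != y} f_k(x).
    Positive means correct classification; larger means farther (in output
    space) from the decision boundary."""
    n = len(labels)
    true_logit = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf  # exclude the true class
    return true_logit - masked.max(axis=1)

# Hypothetical logits from a 3-class DNN for four inputs
logits = np.array([[ 2.0,  0.1, -1.0],
                   [ 0.4,  0.5,  0.2],
                   [-0.3,  1.8,  1.7],
                   [ 0.0, -0.2,  2.5]])
labels = np.array([0, 1, 1, 2])
print("margins:", logit_margin(logits, labels))  # small values = near boundary
```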
Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes how a model's output would have differed had its input been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing for fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric of desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against it. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to the major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
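As a minimal sketch of one common counterfactual formulation (in the spirit of Wachter et al., not any particular algorithm from the rubric), the code below gradient-descends on a prediction-plus-proximity loss for a hand-coded logistic model; the weights, input, and hyperparameters are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=10.0, lr=0.1, steps=500):
    """Gradient-descent counterfactual search: minimize
    lam * (f(x') - target)^2 + ||x' - x||^2, so the model's prediction
    flips while x' stays close to the original input x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # gradient of lam*(p - target)^2 + ||x' - x||^2 w.r.t. x'
        grad = 2 * lam * (p - target) * p * (1 - p) * w + 2 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

# Hypothetical loan-approval model: two features, fixed weights
w, b = np.array([1.5, -2.0]), -0.5
x = np.array([0.2, 0.8])  # rejected applicant: f(x) < 0.5
x_cf = counterfactual(x, w, b)
print("original prediction:", round(float(sigmoid(w @ x + b)), 3))
print("counterfactual input:", np.round(x_cf, 3),
      "-> prediction:", round(float(sigmoid(w @ x_cf + b)), 3))
```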