Corruption has a huge impact on economic growth, democracy, and inequality, and its consequences at the human level are incalculable. A government turnover may be expected to generate significant changes in the way public contracting is done and, thus, in the levels and types of corruption involved in public procurement. In this respect, México underwent a historic government transition in 2018. In this work, we analyze data from more than 1.5 million contracts awarded between 2013 and 2020 to study to what extent this change of government affected the characteristics of public contracting, and we try to determine whether these changes affect how corruption takes place. To do this, we propose a statistical framework to compare the characteristics of contracting practices within each administration, separating the contracts into different classes depending on whether or not they were made with companies that have since been identified as involved in corrupt practices. We found that, even though the total number of contracts and the amount of resources spent on contracts with corrupt companies decreased after the government transition, many of the patterns followed when contracting suppliers labeled as corrupt were maintained, and those in which changes did occur are suggestive of a larger risk of corruption.
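To make the comparison concrete, the following is a minimal illustrative sketch of one way such a per-administration class comparison could be set up; the test choice (a two-sample Kolmogorov-Smirnov test on contract amounts) and the column names are assumptions for illustration, not the framework actually proposed in the paper.

```python
# Minimal sketch (not the paper's exact framework): compare the distribution of
# contract amounts between suppliers later flagged as corrupt and all others,
# separately for each administration. Column names ("administration", "amount",
# "supplier_flagged") are hypothetical placeholders.
import pandas as pd
from scipy.stats import ks_2samp

def compare_classes(contracts: pd.DataFrame) -> pd.DataFrame:
    """Two-sample KS test of contract amounts, per administration."""
    rows = []
    for admin, group in contracts.groupby("administration"):
        flagged = group.loc[group["supplier_flagged"], "amount"]
        clean = group.loc[~group["supplier_flagged"], "amount"]
        stat, pval = ks_2samp(flagged, clean)
        rows.append({"administration": admin, "ks_stat": stat,
                     "p_value": pval, "n_flagged": len(flagged),
                     "n_clean": len(clean)})
    return pd.DataFrame(rows)
```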
In the 21st century, the industry around drones, also known as Unmanned Aerial Vehicles (UAVs), has grown rapidly, bringing a large number of new airspace users. The tremendous benefits of this technology in civilian applications, such as hostage rescue and parcel delivery, will be integrated into the smart cities of the future. Nowadays, the affordability of commercial drones has expanded their use at large scale. However, the development of drone technology is accompanied by vulnerabilities and threats due to the lack of efficient security implementations. Moreover, the complexity of UAVs in both software and hardware gives rise to potential security and privacy issues, posing significant challenges for industry, academia, and governments. In this paper, we extensively survey the security and privacy issues of UAVs by providing a systematic classification at four levels: Hardware-level, Software-level, Communication-level, and Sensor-level. In particular, for each level, we thoroughly investigate (1) common vulnerabilities affecting UAVs that malicious actors can exploit, (2) existing threats that jeopardize the civilian application of UAVs, (3) active and passive attacks performed by adversaries to compromise the security and privacy of UAVs, and (4) possible countermeasures and mitigation techniques to protect UAVs from such malicious activities. In addition, we summarize the takeaways and lessons learned about UAV security and privacy issues. Finally, we conclude our survey by presenting critical pitfalls and suggesting promising future research directions for the security and privacy of UAVs.
While the digitization of power distribution grids brings many benefits, it also introduces new vulnerabilities for cyber-attacks. To maintain secure operations in the emerging threat landscape, detecting cyber-attacks and implementing countermeasures against them are paramount. However, due to the lack of publicly available attack data against Smart Grids (SGs) for countermeasure development, simulation-based data generation approaches offer the potential to provide the needed data foundation. Therefore, our proposed approach provides flexible and scalable replication of multi-stage cyber-attacks in an SG Co-Simulation Environment (COSE). The COSE consists of an energy grid simulator, simulators for Operation Technology (OT) devices, and a network emulator for realistic IT process networks. To support both offensive and defensive use cases in the COSE, our simulated attacker can perform network scans, find vulnerabilities, exploit them, gain administrative privileges, and execute malicious commands on OT devices. As an exemplary countermeasure, we present a built-in Intrusion Detection System (IDS) that analyzes the generated network traffic using anomaly detection with Machine Learning (ML) approaches. In this work, we provide an overview of the SG COSE, present a multi-stage attack model with the potential to disrupt grid operations, and show exemplary performance evaluations of the IDS in specific scenarios.
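As a rough illustration of the kind of anomaly-based detection described for the built-in IDS, the sketch below fits an unsupervised detector to flow-level features and flags deviating traffic; the feature set and the choice of IsolationForest are assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: an anomaly-based IDS over flow-level features, in
# the spirit of the built-in IDS described above. Feature names and the model
# choice (IsolationForest) are assumptions, not the authors' exact approach.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical flow features: [bytes, packets, duration_s, distinct_ports]
rng = np.random.default_rng(0)
benign_flows = rng.normal(loc=[500, 10, 1.0, 2],
                          scale=[100, 3, 0.3, 1],
                          size=(1000, 4))

ids = make_pipeline(StandardScaler(),
                    IsolationForest(contamination=0.01, random_state=0))
ids.fit(benign_flows)                     # train on traffic assumed benign

suspicious = np.array([[50_000, 400, 0.2, 60]])  # a port-scan-like flow
print(ids.predict(suspicious))            # -1 flags an anomaly, +1 normal
```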
Security assurance provides the confidence that the security features, practices, procedures, and architecture of software systems mediate and enforce the security policy and are resilient against security failures and attacks. Alongside the significant benefits of security assurance, the evolution of new information and communication technology (ICT) introduces new challenges regarding information protection. Security assurance methods based on traditional tools, techniques, and procedures may fail to account for new challenges due to poor requirement specifications, their static nature, and poor development processes. The Common Criteria (CC), commonly used for the security evaluation and certification process, also come with many limitations and challenges. In this paper, extensive efforts have been made to study the state of the art, limitations, and future research directions for security assurance of ICT and cyber-physical systems (CPS) in a wide range of domains. We systematically review the requirements, processes, and activities involved in system security assurance, including security requirements, security metrics, systems and environments, and assurance methods. We shed light on the challenges and gaps that have been identified in the existing literature related to system security assurance and the corresponding solutions. Finally, we discuss the limitations of present methods and future research directions.
Social platforms are heavily used by individuals to share their thoughts and personal information. However, some past posts may come to pose serious privacy concerns for them, due to regret over posting inappropriate content, embarrassment, or even life or relationship changes. To cope with these privacy concerns, social platforms offer deletion mechanisms that allow users to remove their content. Quite naturally, these deletion mechanisms are very useful for removing past posts as and when needed. However, the same mechanisms also leave users potentially vulnerable to adversaries who specifically seek out users' damaging content and exploit the act of deletion as a strong signal for identifying such content. Unfortunately, users' experiences and contextual expectations regarding such attacks on deletion privacy, and deletion privacy in general, are not well understood today. To that end, in this paper, we conduct a user survey-based exploration involving 191 participants to unpack their prior deletion experiences, their expectations of deletion privacy, and how effective they find current deletion mechanisms. We find that more than 80% of users have deleted at least one social media post, and users self-reported that, on average, around 35% of their deletions happened more than a week after posting. While participants identified irrelevancy (due to the passage of time) as the main reason for content removal, most of them believed that deletions indicate that the deleted content contained information damaging to its owner. Importantly, participants are significantly more concerned about their deletions being noticed by large-scale data collectors (e.g., the government) than by individuals from their social circle. Finally, participants felt that popular deletion mechanisms are not very effective in protecting the privacy of those deletions.
Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework covering important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading approaches to these aspects in industry. To unify the currently fragmented approaches to trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. Within this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, where we see the need for a paradigm shift towards comprehensively trustworthy AI systems.
This paper addresses the question of how, and with what consequences, anonymity is used as a creative element in experiments dealing with alternative forms of knowledge production. More specifically, the focus is on a particular current in contemporary dance: experimental dance. This current developed over the course of the 1970s. Its guiding idea is to shift the focus away from finished choreography and performance to the processuality, spontaneity, and experience of dance practice itself. Dance is seen as a laboratory in which to experiment with gravity, body boundaries, group dynamics, and perception. The focus is on testing, touching, and trying. The corresponding events are called either 'Dance Labs' or 'Movement Research Workshops', depending on their regularity. An instrument that is often used in the context of such events to create specific social situations and group dynamics, or to channel perceptions in certain directions, is anonymity. Anonymity is broadly understood here as the capping of connections that results in a degree of unaccountability with respect to various aspects of a person, situation, or activity. For example, unaccountability can arise from the fact that people can no longer see each other, so that visual identification becomes impossible, even if only temporarily. By breaking familiar modes of identification, other forms of relating can sometimes be emphasized. This is what makes this form of anonymity so interesting for experimental dance.
Geographically dispersed teams often face challenges in coordination and collaboration, lowering their productivity. Understanding the relationship between team dispersion and productivity is critical for supporting such teams. Extensive prior research has studied these relations in lab settings or using qualitative measures. This paper extends prior work by contributing an empirical case study in a real-world organization, using quantitative measures. We studied 117 new research project teams within a company for 6 months. During this time, all teams shared one goal: submitting a research paper to a conference. We analyzed these teams' dispersion-related characteristics as well as team productivity. Interestingly, we found little statistical evidence that geographic and time differences relate to team productivity. However, organizational and functional distances are highly predictive of the productivity of the dispersed teams we studied. We discuss the open research questions these findings revealed and their implications for future research.
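A minimal sketch of the kind of quantitative analysis this study implies, regressing a productivity measure on dispersion-related distances; the variable names, the synthetic data, and the OLS model are placeholders for illustration, not the study's actual measures or results.

```python
# Hedged sketch: relate team productivity to dispersion measures with OLS.
# All variables below are hypothetical stand-ins for the study's measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
teams = pd.DataFrame({
    "geo_distance": rng.uniform(0, 10_000, 117),     # km between members
    "tz_difference": rng.integers(0, 12, 117),       # max time-zone gap (hours)
    "org_distance": rng.integers(1, 6, 117),         # hops in the org chart
    "functional_distance": rng.integers(0, 4, 117),  # distinct job functions
})
# Synthetic outcome driven mainly by organizational/functional distance,
# mirroring the qualitative finding described above.
teams["productivity"] = (5 - 0.6 * teams["org_distance"]
                         - 0.4 * teams["functional_distance"]
                         + rng.normal(0, 0.5, 117))

model = smf.ols("productivity ~ geo_distance + tz_difference "
                "+ org_distance + functional_distance", data=teams).fit()
print(model.summary())  # inspect which distances carry predictive weight
```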
Among the various modes of communication on social media, Internet memes have emerged as a powerful means to convey political, psychological, and socio-cultural opinions. Although memes are typically humorous in nature, recent times have witnessed a proliferation of harmful memes targeted at abusing various social entities. As most harmful memes are highly satirical and abstruse without appropriate context, off-the-shelf multimodal models may not be adequate to understand their underlying semantics. In this work, we propose two novel problem formulations: detecting harmful memes and identifying the social entities that these harmful memes target. To this end, we present HarMeme, the first benchmark dataset for these tasks, containing 3,544 memes related to COVID-19. Each meme went through a rigorous two-stage annotation process. In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to: individual, organization, community, or society/general public/other. The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks. We further discuss the limitations of these models and argue that more research is needed to address these problems.
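As a hedged illustration of why multimodal signals matter for these tasks, the sketch below fuses placeholder text and image embeddings of each meme and trains a simple 3-way harmfulness classifier; it is not one of the ten models evaluated in the paper, and the embedding sources are assumptions.

```python
# Illustrative late-fusion baseline (not a model from the paper): concatenate
# precomputed text and image embeddings of a meme and train a linear classifier
# for the 3-way harmfulness task. Embeddings here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_memes = 3544
text_emb = rng.normal(size=(n_memes, 256))   # e.g., sentence-encoder output
image_emb = rng.normal(size=(n_memes, 512))  # e.g., pooled CNN features
labels = rng.integers(0, 3, n_memes)         # 0 harmless, 1 partially, 2 very harmful

X = np.concatenate([text_emb, image_emb], axis=1)  # simple multimodal fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```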
When we humans look at a video of human-object interaction, we can not only infer what is happening but can even extract actionable information and imitate those interactions. Current recognition and geometric approaches, on the other hand, lack the physicality of action representation. In this paper, we take a step towards a more physical understanding of actions. We address the problem of inferring contact points and physical forces from videos of humans interacting with objects. One of the main challenges in tackling this problem is obtaining ground-truth labels for forces. We sidestep this problem by instead using a physics simulator for supervision. Specifically, we use a simulator to predict effects and enforce that the estimated forces must lead to the same effect as depicted in the video. Our quantitative and qualitative results show that (a) we can predict meaningful forces from videos whose effects lead to accurate imitation of the observed motions, (b) by jointly optimizing for contact point and force prediction, we can improve performance on both tasks compared to independent training, and (c) we can learn a representation from this model that generalizes to novel objects using few-shot examples.
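The core idea of using a simulator as supervision can be illustrated with a toy example: fit forces so that the simulated effect matches the observed one. The sketch below recovers a constant force on a point mass from an "observed" trajectory; the actual method operates on full physics simulations and video-derived observations, so this is only an illustration of the effect-matching objective.

```python
# Toy sketch of "the simulator as supervision": recover a constant force on a
# point mass so that its simulated trajectory matches an observed one.
import numpy as np
from scipy.optimize import minimize

DT, STEPS, MASS = 0.05, 40, 1.0

def simulate(force_xy):
    """Forward-simulate a point mass under a constant 2D force."""
    pos, vel = np.zeros(2), np.zeros(2)
    traj = []
    for _ in range(STEPS):
        vel = vel + (np.asarray(force_xy) / MASS) * DT
        pos = pos + vel * DT
        traj.append(pos.copy())
    return np.array(traj)

observed = simulate([2.0, -0.5])  # stands in for the effect observed in a video

def loss(force_xy):
    # Effect-matching objective: estimated forces must reproduce the observation.
    return np.sum((simulate(force_xy) - observed) ** 2)

estimate = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
print("recovered force:", estimate.x)  # approximately [2.0, -0.5]
```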
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
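A highly simplified schematic of this view of intelligence as skill-acquisition efficiency is given below; it only gestures at the formal definition, which the paper states precisely over task scopes with explicit generalization-difficulty, prior, and experience terms, and should not be read as the paper's actual formula.

```latex
% Schematic only: a simplified rendering of "intelligence as skill-acquisition
% efficiency", not the paper's full Algorithmic-Information-Theoretic definition.
\[
  \text{Intelligence}
  \;\propto\;
  \underset{\text{tasks in scope}}{\mathbb{E}}
  \left[
    \frac{\text{skill attained} \times \text{generalization difficulty}}
         {\text{priors} + \text{experience}}
  \right]
\]
```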