The energy consumption and related carbon emissions of cryptocurrencies such as Bitcoin are the subject of extensive discussion in public, academia, and industry. As cryptocurrencies continue their journey into mainstream finance, the incentives to participate in these networks, and to consume energy in doing so, remain significant. First guidance on how to allocate the carbon footprint of the Bitcoin network to individual investors exists; however, a holistic framework capturing a wider range of cryptocurrencies and tokens remains absent. This white paper explores different approaches to allocating the emissions caused by cryptocurrencies and tokens. Based on our analysis of the strengths and limitations of the potential approaches, we propose a framework that combines key drivers of emissions in Proof of Work and Proof of Stake networks.
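The idea of combining emission drivers can be illustrated with a minimal sketch. Everything here (the linear blend, the `alpha` weight, the share-based drivers) is a hypothetical simplification for illustration, not the framework proposed in the paper:

```python
def allocate_emissions(network_emissions_t, holdings_share, tx_share, alpha=0.5):
    """Attribute a slice of a network's annual emissions (tCO2e) to one
    participant by blending a holdings-based driver (the dominant cost
    driver in Proof of Stake) with a transaction-based driver (relevant
    in Proof of Work). `alpha` weights the two drivers; both shares are
    fractions in [0, 1]. Illustrative only."""
    return network_emissions_t * (alpha * holdings_share + (1 - alpha) * tx_share)

# An investor holding 1% of tokens and causing 2% of transactions on a
# network emitting 100,000 tCO2e per year:
footprint = allocate_emissions(100_000, 0.01, 0.02, alpha=0.5)  # 1500.0 tCO2e
```

In practice, the choice of drivers and weights per consensus mechanism is exactly what such a framework must justify; this sketch only shows the shape of the allocation.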
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that have recently been put forward in the area. Afterward, through a review of more than 160 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to certain research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and of what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.
This paper considers the phenomenon where a single probe to a target generates multiple, sometimes numerous, packets in response -- which we term "blowback". Understanding blowback is important because attackers can leverage it to launch amplified denial of service attacks by redirecting blowback towards a victim. Blowback also has serious implications for Internet researchers since their experimental setups must cope with bursts of blowback traffic. We find that tens of thousands, and in some protocols, hundreds of thousands, of hosts generate blowback, with orders of magnitude amplification on average. In fact, some prolific blowback generators produce millions of response packets in the aftermath of a single probe. We also find that blowback generators are fairly stable over periods of weeks, so once identified, many of these hosts can be exploited by attackers for a long time.
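As a toy illustration of the kind of measurement involved, a per-host amplification factor (response packets per probe) might be computed as follows; the data layout and host addresses are hypothetical, not the paper's actual methodology:

```python
from collections import defaultdict

def amplification_factors(events):
    """Given (host, response_packet_count) records, one record per probe
    sent, return each host's amplification factor: the average number of
    response packets elicited per probe."""
    probes = defaultdict(int)
    responses = defaultdict(int)
    for host, n_responses in events:
        probes[host] += 1
        responses[host] += n_responses
    return {h: responses[h] / probes[h] for h in probes}

factors = amplification_factors([
    ("10.0.0.1", 120), ("10.0.0.1", 80),  # a prolific blowback generator
    ("10.0.0.2", 1),                       # a benign single-reply host
])
# factors -> {"10.0.0.1": 100.0, "10.0.0.2": 1.0}
```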
Since the inception of human research studies, researchers have often needed to interact with participants on a set schedule to collect data. While some human research is automated, most is not, which costs researchers both time and money. Usually, user-provided data collection consists of surveys administered via telephone or email. While these methods are the simplest, they are tedious for the survey administrators, which can induce fatigue and potentially lead to collection mistakes. One solution to this was the creation of "chatbots". Early developments relied mostly on rule-based tactics (e.g., ELIZA), which were suitable for uniform input. However, as the complexity of interactions increases, rule-based systems begin to break down, since there are many ways for a user to express the same intention. This is especially true when tracking states within a research study (or protocol). Recently, natural language processing (NLP) models and, subsequently, virtual assistants have become increasingly sophisticated in communicating with users. Examples of these efforts range from research studies to commercial health products. This project leverages recent advancements in conversational artificial intelligence (AI), speech-to-text, natural language understanding (NLU), and finite-state machines to automate protocols, specifically in research settings. Such an application must be generalized, fully customizable, and independent of any particular research study; these properties allow new research protocols to be created quickly once envisioned. With this in mind, I present SmartState, a fully customizable, state-driven protocol manager combined with supporting AI components that autonomously manages user data and intelligently determines the intention of users through chat and end-device interactions to drive protocols.
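The finite-state-machine idea behind such a protocol manager can be sketched as follows; the class, state names, and intents are illustrative assumptions, not SmartState's actual API:

```python
class ProtocolFSM:
    """Minimal state machine for a survey protocol: each state maps a
    recognized user intent (as produced by an upstream NLU component)
    to the next state. All names here are hypothetical."""

    def __init__(self, transitions, start):
        self.transitions = transitions  # {state: {intent: next_state}}
        self.state = start

    def handle(self, intent):
        nxt = self.transitions.get(self.state, {}).get(intent)
        if nxt is None:
            return self.state  # unrecognized intent: stay put and re-prompt
        self.state = nxt
        return self.state

fsm = ProtocolFSM(
    {"greet": {"consent_yes": "survey", "consent_no": "done"},
     "survey": {"answer": "done"}},
    start="greet",
)
fsm.handle("consent_yes")  # -> "survey"
```

Keeping the transition table as data, rather than code, is what lets a new protocol be defined without touching the manager itself.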
Increased demand for less invasive procedures has accelerated the adoption of Intraluminal Procedures (IP) and Endovascular Interventions (EI) performed through body lumens and vessels. As navigation through lumens and vessels is quite complex, interest is growing in establishing autonomous navigation techniques for IP and EI to reach the target area. Current research efforts are directed toward increasing the Level of Autonomy (LoA) during the navigation phase. One key ingredient for autonomous navigation is Motion Planning (MP) techniques. This paper provides an overview of MP techniques, categorizing them based on LoA. Our analysis investigates advances for the different clinical scenarios. Through a systematic literature analysis using the PRISMA method, the study summarizes relevant works and investigates the clinical aim, LoA, adopted MP techniques, and validation types. We identify the limitations of the corresponding MP methods and provide directions to improve the robustness of the algorithms in dynamic intraluminal environments. MP techniques for IP and EI can be classified into four subgroups: node-, sampling-, optimization-, and learning-based techniques, with a notable rise in learning-based approaches in recent years. One of the review's contributions is the identification of the limiting factors in IP and EI robotic systems that hinder higher levels of autonomous navigation. In the future, navigation is bound to become more autonomous, placing the clinician in a supervisory position to improve control precision and reduce workload.
In recent years, decentralized finance (DeFi) has experienced remarkable growth, with various protocols such as lending protocols and automated market makers (AMMs) emerging. Traditionally, these protocols employ off-chain governance, where token holders vote to modify parameters. However, manual parameter adjustment, often conducted by the protocol's core team, is vulnerable to collusion, compromising the integrity and security of the system. Furthermore, purely deterministic, algorithm-based approaches may expose the protocol to novel exploits and attacks. In this paper, we present "Auto.gov", a learning-based on-chain governance framework for DeFi that enhances security and reduces susceptibility to attacks. Our model leverages a deep Q-network (DQN) reinforcement learning approach to propose semi-automated, intuitive governance proposals with quantitative justifications. This methodology enables the system to adapt to and mitigate the negative impact of malicious behaviors, such as price oracle attacks, more effectively than benchmark models. Our evaluation demonstrates that Auto.gov offers a more reactive, objective, efficient, and resilient solution than existing manual processes, thereby significantly bolstering the security and, ultimately, enhancing the profitability of DeFi protocols.
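As a rough illustration of the learning signal involved, the following shows a single tabular Q-learning update, the temporal-difference rule that a DQN approximates with a neural network. The state and action names are hypothetical placeholders; the paper's actual state, action, and reward design is not reproduced here:

```python
def q_update(Q, state, action, reward, next_state,
             actions=("raise", "hold", "lower"), alpha=0.1, gamma=0.95):
    """One temporal-difference update. In a governance setting, states
    could encode protocol health metrics, actions candidate parameter
    changes (e.g. raising or lowering a collateral factor), and the
    reward a protocol safety/profitability signal. Hypothetical sketch."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Q = {}
# One observed transition: raising a parameter under high oracle
# deviation led to a stable state and a positive reward.
q_update(Q, "oracle_deviation_high", "raise", reward=1.0, next_state="stable")
```

A DQN replaces the table `Q` with a network so the same rule scales to continuous protocol states.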
Digital forensics and cloud forensics are increasingly important fields that face a range of challenges. This study aims to assess the general challenges faced in these fields. A literature review was conducted to identify the major challenges in digital and cloud forensics, including data acquisition, data analysis, data preservation, privacy concerns, and legal issues. The challenges were analyzed in detail, considering the reasons why they are challenges, the impact they have on digital and cloud forensics, and any potential solutions. The study concludes that the challenges faced in digital and cloud forensics are significant and varied, and that addressing these challenges is critical for the effective and efficient use of digital and cloud forensics in investigations. This study provides a valuable overview of the current state of digital and cloud forensic challenges and can help guide future research in this important field.
A triangulation of a polytope into simplices is refined recursively. In every refinement round, some simplices that have been marked by an external algorithm are bisected, and some others around them must also be bisected to retain the regularity of the triangulation. The ratio of the total number of bisected simplices to the total number of marked simplices is bounded from above. Binev, Dahmen and DeVore proved, under a certain initial condition, a bound that depends only on the initial triangulation. This thesis proposes a new way to obtain a better bound in any dimension. Furthermore, the result is proven for a weaker initial condition, invented by Alk\"amper, Gaspoz and Kl\"ofkorn, who also found an algorithm to realise this condition for any regular initial triangulation. Presumably, this is the first proof of a Binev-Dahmen-DeVore theorem in any dimension with initial conditions that are always practically realisable without an initial refinement. Additionally, the initialisation refinement proposed by Kossaczk\'y and Stevenson is generalised, and the number of recursive bisections of one single simplex in one refinement round is bounded from above by twice the dimension, sharpening a result of Gallistl, Schedensack and Stevenson.
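The elementary operation underlying such refinement, newest-vertex bisection, can be sketched in two dimensions; the storage convention below (vertices ordered so that the last one is newest and the first edge is the refinement edge) is one common simplification, not necessarily the one used in the thesis:

```python
def bisect(tri):
    """Newest-vertex bisection of a triangle given as (z0, z1, z2),
    where z2 is the newest vertex and z0-z1 is the refinement edge.
    Both children get the edge midpoint m as their newest vertex, so
    the refinement-edge rule can be applied recursively."""
    z0, z1, z2 = tri
    m = tuple((a + b) / 2 for a, b in zip(z0, z1))  # midpoint of z0-z1
    return (z0, z2, m), (z1, z2, m)

children = bisect(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
# midpoint of the refinement edge is (0.5, 0.0)
```

The conformity closure discussed in the abstract arises because `m` also lies on a neighbouring triangle's edge, which must then be bisected too.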
Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning through integrating multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper is designed to provide an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining the two key principles of modality heterogeneity and interconnection that have driven subsequent innovations, and propose a taxonomy of six core technical challenges: representation, alignment, reasoning, generation, transference, and quantification, covering historical and recent trends. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.
Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc., which not only degrades user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading approaches to these aspects in the industry. To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items to practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, highlighting the need for a paradigm shift towards comprehensive trustworthy AI systems.
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This embraces a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, with a scope covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.