The properties of money commonly referenced in the economics literature were originally identified by Jevons (1876) and Menger (1892) in the late 1800s and were intended to describe physical currencies, such as commodity money, metallic coins, and paper bills. In the digital era, many non-physical currencies have either entered circulation or are under development, including demand deposits, cryptocurrencies, stablecoins, central bank digital currencies (CBDCs), in-game currencies, and quantum money. These forms of money have novel properties that have not been studied extensively within the economics literature, but may be important determinants of the monetary equilibrium that emerges in the forthcoming era of heightened currency competition. This paper makes the first exhaustive attempt to identify and define the properties of all physical and digital forms of money. It reviews both the economics and computer science literatures and categorizes properties within an expanded version of the original functions-and-properties framework of money that includes societal and regulatory objectives.
While many works exploiting an existing Lie group structure have been proposed for state estimation, in particular the Invariant Extended Kalman Filter (IEKF), few papers address the construction of a group structure that allows casting a given system into the IEKF framework, namely making the dynamics group affine and the observations invariant. In this paper we introduce a large class of systems encompassing most problems involving a navigating vehicle encountered in practice. For those systems we introduce a novel methodology that systematically provides a group structure for the state space, including vectors of the body frame such as biases. We use it to derive observers having properties akin to those of linear observers or filters. The proposed unifying and versatile framework encompasses all systems where the IEKF has proved successful, improves the state-of-the-art "imperfect" IEKF for inertial navigation with sensor biases, and allows novel examples to be addressed, such as GNSS antenna lever arm estimation.
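For reference, the two conditions mentioned above have standard formulations in the IEKF literature (the notation below is generic, not taken from this paper): dynamics \(\frac{d}{dt}\chi_t = f_{u_t}(\chi_t)\) on a matrix Lie group are group affine if, for all group elements \(\chi_1, \chi_2\),
\[
f_u(\chi_1 \chi_2) \;=\; f_u(\chi_1)\,\chi_2 \;+\; \chi_1\, f_u(\chi_2) \;-\; \chi_1\, f_u(\mathrm{Id})\,\chi_2,
\]
and observations are invariant when they take the form \(y = \chi\, b\) or \(y = \chi^{-1} b\) for a known vector \(b\).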
A software pattern is a reusable solution to a commonly occurring problem within a given context when designing software. Using patterns is a common practice for software architects to ensure software quality. Many pattern collections have been proposed for a large number of application domains. However, because blockchain technology is so recent, only a few collections are available, and they lack extensive testing in industrial blockchain applications. It is also difficult for software architects to apply blockchain patterns adequately in their applications, as doing so requires deep knowledge of blockchain technology. Through a systematic literature review, this paper identifies 120 unique blockchain-related patterns and proposes a pattern taxonomy, composed of multiple categories, built from the extracted pattern collection. The purpose of this collection is to map, classify, and describe all patterns available across the literature, helping readers make informed decisions about blockchain pattern selection. This study also shows potential applications of those patterns and identifies the relationships between blockchain patterns and other non-blockchain software patterns.
Due to its critical role in cybersecurity, digital forensics has received significant attention from researchers and practitioners alike. The ever-increasing sophistication of modern cyberattacks is directly related to the complexity of evidence acquisition, which often requires the use of several technologies. To date, researchers have presented many surveys and reviews of the field. However, such articles focus on the advances of each particular domain of digital forensics individually. Therefore, while each of these surveys helps researchers and practitioners keep up with the latest advances in a particular domain of digital forensics, the global perspective is missing. Aiming to fill this gap, we performed a qualitative review of reviews in the field of digital forensics, determined the main topics in digital forensics, and identified their main challenges. Our analysis provides strong evidence that the digital forensics community could benefit from closer collaboration and cross-topic research, since it is apparent that researchers and practitioners are trying to solve the same problems in parallel, sometimes without noticing it.
We revisit constructions based on triads of conics with foci at pairs of vertices of a reference triangle. We find that their six vertices lie on well-known conics, whose type we analyze. We give conditions for these to be circles and/or degenerate; in the latter case, we study the locus of their centers.
The immersed boundary (IB) method is a non-body conforming approach to fluid-structure interaction (FSI) that uses an Eulerian description of the momentum, viscosity, and incompressibility of a coupled fluid-structure system and a Lagrangian description of the deformations, stresses, and resultant forces of the immersed structure. Integral transforms with Dirac delta function kernels couple Eulerian and Lagrangian variables. In practice, discretizations of these integral transforms use regularized delta function kernels, and although a number of different types of regularized delta functions have been proposed, there has been limited prior work to investigate the impact of the choice of kernel function on the accuracy of the methodology. This work systematically studies the effect of the choice of regularized delta function in several fluid-structure interaction benchmark tests using the immersed finite element/difference (IFED) method, which is an extension of the IB method that uses finite element structural discretizations combined with a Cartesian grid finite difference method for the incompressible Navier-Stokes equations. Further, many IB-type methods evaluate the delta functions at the nodes of the structural mesh, and this requires the Lagrangian mesh to be relatively fine compared to the background Eulerian grid to avoid leaks. The IFED formulation offers the possibility to avoid leaks with relatively coarse structural meshes by evaluating the delta function on a denser collection of interaction points. This study investigates the effect of varying the relative mesh widths of the Lagrangian and Eulerian discretizations. Although this study is done within the context of the IFED method, the effect of different kernels could be important not just for this method, but also for other IB-type methods more generally.
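To make the role of these kernels concrete, the following one-dimensional sketch spreads a Lagrangian point force onto an Eulerian grid using Peskin's standard four-point kernel. It illustrates the kernel itself, not the IFED implementation; the function names, grid spacing, and point location are placeholders chosen for this example.

```python
import numpy as np

def ib4(r):
    """Peskin's four-point regularized delta function in one dimension,
    evaluated at r = (x - X) / h; its support is |r| < 2."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    phi[inner] = (3.0 - 2.0 * r[inner] + np.sqrt(1.0 + 4.0 * r[inner] - 4.0 * r[inner] ** 2)) / 8.0
    phi[outer] = (5.0 - 2.0 * r[outer] - np.sqrt(-7.0 + 12.0 * r[outer] - 4.0 * r[outer] ** 2)) / 8.0
    return phi

def spread_force(F, X, x_grid, h):
    """Spread a Lagrangian point force F located at X onto the Eulerian nodes x_grid."""
    return F * ib4((x_grid - X) / h) / h  # delta_h(x - X) = phi((x - X) / h) / h in 1D

# Placeholder example: a unit force at X = 0.53 on a grid with spacing h = 0.1.
h = 0.1
x_grid = np.arange(0.0, 1.0 + h, h)
f_eulerian = spread_force(1.0, 0.53, x_grid, h)
print(f_eulerian.sum() * h)  # the discrete integral of the spread force is approximately 1
```

The same kernel is used in the reverse (interpolation) step that evaluates the Eulerian velocity at Lagrangian points; swapping in a different regularized delta function, as the study above does, changes the support width and smoothness of this coupling.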
In recent years, optimization problems have become increasingly prevalent, driving the need for more powerful computational methods. With the advent of technologies such as artificial intelligence, new metaheuristics are needed to extend the capabilities of classical algorithms. Researchers have therefore turned to Charles Darwin's theory of natural selection and evolution as a means of enhancing current machine learning approaches. In the 1960s, John H. Holland and his students developed the first genetic algorithms. We explore the mathematical intuition behind the genetic algorithm and its use of Gaussian mutation to develop systems capable of evolving, as well as its implications for solving optimization problems.
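As a concrete sketch of the algorithm described above (under assumptions of our own choosing: a sphere objective, truncation selection, and uniform crossover, none of which are prescribed by the paper), the following genetic algorithm evolves a population with zero-mean Gaussian mutation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Illustrative objective, minimized at the origin (not taken from the paper)."""
    return np.sum(x ** 2, axis=-1)

def genetic_algorithm(objective, dim=5, pop_size=40, generations=200,
                      sigma=0.1, elite_frac=0.25):
    """Minimal GA: truncation selection, uniform crossover, Gaussian mutation."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, dim))   # random initial population
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        fitness = objective(pop)
        elite = pop[np.argsort(fitness)[:n_elite]]     # keep the fittest individuals
        # Uniform crossover between two randomly chosen elite parents ...
        parents_a = elite[rng.integers(n_elite, size=pop_size)]
        parents_b = elite[rng.integers(n_elite, size=pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents_a, parents_b)
        # ... followed by Gaussian mutation: perturb every gene with N(0, sigma^2) noise.
        children += rng.normal(0.0, sigma, size=children.shape)
        pop = children
    best = pop[np.argmin(objective(pop))]
    return best, objective(best)

best, value = genetic_algorithm(sphere)
print(best, value)  # the best individual should lie close to the origin
```

The Gaussian mutation step is what keeps the population exploring the search space once selection has concentrated it around good candidates.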
In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as complexity in physical systems (entropy), algorithmic measures of information complexity, and the field of "complex systems". We apply a series of different complexity measures to three different evolutionary art datasets and look at the correlations between complexity and either individual aesthetic judgement by the artist (in the case of two datasets) or the physically measured complexity of generative 3D forms (in the third). Our results show that the degree of correlation differs for each dataset and measure, indicating that there is no overall "better" measure. However, specific measures do perform well on individual datasets, indicating that careful choice can increase the value of using such measures. We then assess the value of complexity measures for the audience by undertaking a large-scale survey on the perception of complexity and aesthetics. We conclude by discussing the value of such direct measures in generative and evolutionary art, reinforcing recent findings from neuroimaging and psychology which suggest that human aesthetic judgement is informed by many extrinsic factors beyond the measurable properties of the object being judged.
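As one illustration of the kind of algorithmic information measure referred to above (a generic compression-based proxy, not necessarily one of the measures evaluated in the paper), the complexity of a grayscale image can be approximated by how poorly it compresses:

```python
import zlib
import numpy as np

def compression_complexity(image):
    """Approximate complexity of a grayscale image (values in 0..255) as the ratio
    of compressed size to raw size: a flat image scores near 0, pure noise near 1."""
    raw = np.asarray(image, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

# Illustrative inputs only.
rng = np.random.default_rng(0)
flat = np.full((128, 128), 128)                  # minimal structure
noise = rng.integers(0, 256, size=(128, 128))    # maximal unpredictability
print(compression_complexity(flat), compression_complexity(noise))
```

Structured images fall between these two extremes, which is one reason a single compression-style score cannot be expected to track aesthetic judgement on its own.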
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers in multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) until the advent of 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, the article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
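For non-experts, a classic example of such a perturbation, the fast gradient sign method (which predates the period surveyed here), illustrates the mechanism: given a model with parameters \(\theta\), loss \(J(\theta, x, y)\), and a perturbation budget \(\epsilon\),
\[
x_{\mathrm{adv}} \;=\; x \;+\; \epsilon \cdot \operatorname{sign}\!\big(\nabla_{x} J(\theta, x, y)\big),
\]
which changes each pixel by at most \(\epsilon\) yet can be enough to flip the model's prediction.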
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d. assumption for source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Since it was first introduced in 2011, research in DG has made great progress. In particular, intensive research in this topic has led to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few, and has covered various vision applications such as object recognition, segmentation, action recognition, and person re-identification. In this paper, we provide, for the first time, a comprehensive literature review summarizing the developments in DG for computer vision over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other research fields such as domain adaptation and transfer learning. Second, we conduct a thorough review of existing methods and present a categorization based on their methodologies and motivations. Finally, we conclude this survey with insights and discussions on future research directions.
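As a brief formalization of the setting described above (the notation is ours, not necessarily that of the survey): given \(K\) source domains with distinct joint distributions \(P^{1}_{XY}, \dots, P^{K}_{XY}\), DG seeks a predictor \(f\) with low risk on an unseen target domain whose distribution \(P^{\mathcal{T}}_{XY}\) differs from every source,
\[
\min_{f} \; \mathbb{E}_{(x,y) \sim P^{\mathcal{T}}_{XY}} \big[ \ell(f(x), y) \big],
\]
even though only samples drawn from the source distributions are available during training.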
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. As we do this, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.
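As background for the loss-variant discussion (the objective below is the original formulation of Goodfellow et al., 2014, not a result of this review), a GAN trains a generator \(G\) and a discriminator \(D\) through the minimax game
\[
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big],
\]
and the loss-variant GANs surveyed here modify or replace this objective to improve training stability, sample quality, and diversity.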