The properties of money commonly referenced in the economics literature were originally identified by Jevons (1876) and Menger (1892) in the late 1800s and were intended to describe physical currencies, such as commodity money, metallic coins, and paper bills. In the digital era, many non-physical currencies have either entered circulation or are under development, including demand deposits, cryptocurrencies, stablecoins, central bank digital currencies (CBDCs), in-game currencies, and quantum money. These forms of money have novel properties that have not been studied extensively within the economics literature, but may be important determinants of the monetary equilibrium that emerges in the forthcoming era of heightened currency competition. This paper makes the first exhaustive attempt to identify and define the properties of all physical and digital forms of money. It reviews both the economics and computer science literatures and categorizes properties within an expanded version of the original functions-and-properties framework of money that includes societal and regulatory objectives.
While many works exploiting an existing Lie group structure have been proposed for state estimation, in particular the Invariant Extended Kalman Filter (IEKF), few papers address the construction of a group structure that allows casting a given system into the IEKF framework, namely making the dynamics group affine and the observations invariant. In this paper we introduce a large class of systems encompassing most problems involving a navigating vehicle encountered in practice. For those systems we introduce a novel methodology that systematically provides a group structure for the state space, including vectors of the body frame such as biases. We use it to derive observers having properties akin to those of linear observers or filters. The proposed unifying and versatile framework encompasses all systems where the IEKF has proved successful, improves on the state-of-the-art "imperfect" IEKF for inertial navigation with sensor biases, and allows addressing novel examples, such as GNSS antenna lever-arm estimation.
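For readers unfamiliar with the notion, the group-affine condition at the heart of the IEKF framework can be stated as follows, in the standard notation of the invariant-filtering literature (not necessarily the paper's own notation):

```latex
\frac{d}{dt}\chi_t = f_{u_t}(\chi_t), \qquad
f_u(\chi_1 \chi_2) = f_u(\chi_1)\,\chi_2 + \chi_1\, f_u(\chi_2)
  - \chi_1\, f_u(\mathrm{Id})\,\chi_2
  \quad \forall\, \chi_1, \chi_2 \in G,
```

where $\chi_t$ is the state on the Lie group $G$, $u_t$ the input, and $\mathrm{Id}$ the group identity. The key consequence is that the (left- or right-invariant) error between two trajectories then evolves autonomously, independently of the actual state trajectory, which is what yields the linear-like observer properties mentioned above.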
A software pattern is a reusable solution to a commonly occurring problem within a given context in software design. Using patterns is common practice for software architects to ensure software quality, and many pattern collections have been proposed for a large number of application domains. However, because blockchain technology is still young, only a few collections are available, and they lack extensive testing in industrial blockchain applications. It is also difficult for software architects to apply blockchain patterns adequately, as doing so requires deep knowledge of blockchain technology. Through a systematic literature review, this paper identifies 120 unique blockchain-related patterns and proposes a pattern taxonomy, composed of multiple categories, built from the extracted pattern collection. The purpose of this collection is to map, classify, and describe all patterns available across the literature, helping readers make informed decisions about blockchain pattern selection. This study also shows potential applications of those patterns and identifies the relationships between blockchain patterns and other non-blockchain software patterns.
In materials science and many other application domains, 3D information can often only be inferred from 2D slices. In topological data analysis, persistence vineyards have emerged as a powerful tool for capturing topological features that stretch over several slices. In the present paper, we illustrate how persistence vineyards can be used to design rigorous statistical hypothesis tests for 3D microstructure models based on data from 2D slices. More precisely, by establishing the asymptotic normality of suitable longitudinal and cross-sectional summary statistics, we devise goodness-of-fit tests that become asymptotically exact in large sampling windows. We illustrate the testing methodology through a detailed simulation study and a prototypical example from materials science.
Due to its critical role in cybersecurity, digital forensics has received significant attention from researchers and practitioners alike. The ever-increasing sophistication of modern cyberattacks is directly reflected in the complexity of evidence acquisition, which often requires the use of several technologies. To date, researchers have presented many surveys and reviews of the field. However, such articles have focused on the advances of each particular domain of digital forensics individually. Thus, while each of these surveys helps researchers and practitioners keep up with the latest advances in a particular domain, the global perspective is missing. To fill this gap, we performed a qualitative review of reviews in the field of digital forensics, determined its main topics, and identified their principal challenges. Our analysis provides strong evidence that the digital forensics community could benefit from closer collaboration and cross-topic research: researchers and practitioners are evidently working on the same problems in parallel, sometimes without noticing it.
We revisit constructions based on triads of conics with foci at pairs of vertices of a reference triangle. We find that their 6 vertices lie on well-known conics, whose type we analyze. We give conditions for these to be circles and/or degenerate. In the latter case, we study the locus of their center.
In recent years, optimization problems have become increasingly prevalent, driving the need for more powerful computational methods. With the advent of technologies such as artificial intelligence, new metaheuristics are needed to enhance the capabilities of classical algorithms. Researchers have long drawn on Charles Darwin's theory of natural selection and evolution as a means of enhancing current machine learning approaches; the first genetic algorithms were developed by John H. Holland and his students in the 1960s and 1970s. We explore the mathematical intuition of the genetic algorithm for developing systems capable of evolving via Gaussian mutation, as well as its implications for solving optimization problems.
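To make the mechanism concrete, here is a minimal real-valued genetic algorithm with truncation selection, uniform crossover, and Gaussian mutation. It is an illustrative sketch, not the paper's implementation; all names and parameter values (population size, mutation scale `sigma`, elite fraction) are assumptions.

```python
import random

def genetic_algorithm(fitness, dim, pop_size=50, generations=200,
                      sigma=0.3, elite_frac=0.2, seed=0):
    """Minimize `fitness` over R^dim with a simple GA.
    Gaussian mutation perturbs each gene by N(0, sigma)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        pop.sort(key=fitness)              # truncation selection
        elites = pop[:n_elite]             # survive unchanged (elitism)
        children = []
        while len(children) < pop_size - n_elite:
            p1, p2 = rng.sample(elites, 2)
            # uniform crossover: each gene taken from either parent
            child = [g1 if rng.random() < 0.5 else g2
                     for g1, g2 in zip(p1, p2)]
            # Gaussian mutation: add N(0, sigma) noise to every gene
            child = [g + rng.gauss(0.0, sigma) for g in child]
            children.append(child)
        pop = elites + children
    return min(pop, key=fitness)

# usage: minimize the sphere function sum(x_i^2), optimum at the origin
best = genetic_algorithm(lambda x: sum(g * g for g in x), dim=3)
```

With elitism, the best-so-far fitness is monotone non-increasing, while Gaussian mutation keeps exploring the neighborhood of the elites.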
In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as complexity in physical systems (entropy), algorithmic measures of information complexity, and the field of "complex systems". We apply a series of different complexity measures to three evolutionary art datasets and examine the correlations between complexity and either the artist's individual aesthetic judgement (for two of the datasets) or the physically measured complexity of generative 3D forms (for the third). Our results show that the degree of correlation differs by dataset and measure, indicating that there is no overall "best" measure. However, specific measures do perform well on individual datasets, indicating that careful choice can increase the value of using such measures. We then assess the value of complexity measures for the audience by undertaking a large-scale survey on the perception of complexity and aesthetics. We conclude by discussing the value of direct measures in generative and evolutionary art, reinforcing recent findings from neuroimaging and psychology which suggest that human aesthetic judgement is informed by many extrinsic factors beyond the measurable properties of the object being judged.
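One widely used family of algorithmic complexity measures approximates Kolmogorov complexity by compressibility. The sketch below (not necessarily one of the measures used in this paper) scores a byte string by its zlib compression ratio: regular structures compress well, while near-random data do not.

```python
import random
import zlib

def compression_complexity(data: bytes) -> float:
    """Ratio of compressed to original size, a crude proxy for
    algorithmic (Kolmogorov-style) complexity.
    Near 0 = highly regular; near (or above) 1 = incompressible."""
    if not data:
        return 0.0
    return len(zlib.compress(data, 9)) / len(data)

# usage: a repetitive pattern vs. pseudo-random noise
ordered = b"ab" * 500                                   # highly regular
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(1000))  # near-random
```

For image-based artworks, such a score is typically computed on the serialized pixel data; more refined variants normalize against a randomized version of the same input.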
Computer architecture and systems have long been optimized to enable efficient execution of machine learning (ML) algorithms and models. It is now time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This transformation is twofold: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which falls into two major categories: ML-based modelling, which involves predicting performance metrics or some other criterion of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here. These are: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on its progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. As we do this, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.
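As a common reference point for the loss-variant GANs surveyed, recall the original minimax objective of Goodfellow et al. (2014), which most variants modify (this is the standard formulation, not specific to any single variant discussed here):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
```

where $G$ maps latent samples $z \sim p_z$ to the data space and the discriminator $D$ outputs the probability that its input came from the real data distribution $p_{\mathrm{data}}$. Loss-variant GANs (e.g. Wasserstein- or least-squares-based objectives) replace $V(D,G)$ to improve training stability and sample diversity.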
The focus of Part I of this monograph has been on the fundamental properties, graph topologies, and spectral representations of graphs. Part II builds on these concepts to address the algorithmic and practical issues centered around data/signal processing on graphs, that is, the analysis and estimation of both deterministic and random data on graphs. The fundamental ideas related to graph signals are introduced through a simple and intuitive, yet illustrative and sufficiently general, case study of multisensor temperature field estimation. The concept of systems on graphs is defined using graph signal shift operators, which generalize the corresponding principles from traditional linear systems. At the core of the spectral-domain representation of graph signals and systems is the Graph Discrete Fourier Transform (GDFT). The spectral-domain representations are then used as the basis to introduce graph signal filtering concepts and to address filter design, including Chebyshev polynomial approximation series. Ideas related to the sampling of graph signals are presented and further linked with compressive sensing. Localized graph signal analysis in the joint vertex-spectral domain is referred to as vertex-frequency analysis, since it can be considered an extension of classical time-frequency analysis to the graph domain of a signal. Important topics related to the local graph Fourier transform (LGFT) are covered, together with its various forms, including graph spectral- and vertex-domain windows and the inversion conditions and relations. A link between the LGFT with a spectrally varying window and the spectral graph wavelet transform (SGWT) is also established. Realizations of the LGFT and SGWT using polynomial (Chebyshev) approximations of the spectral functions are further considered. Finally, energy versions of the vertex-frequency representations are introduced.
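The GDFT mentioned above can be sketched in a few lines using the standard Laplacian-based construction: the eigenvectors of the graph Laplacian play the role of Fourier harmonics and the eigenvalues act as graph frequencies. This is a generic illustration, not code from the monograph.

```python
import numpy as np

def gdft_basis(A):
    """Graph Fourier basis from the Laplacian L = D - A of an
    undirected graph with adjacency matrix A. Eigenvectors (columns
    of U, ordered by ascending eigenvalue) are the graph harmonics;
    eigenvalues `lam` act as graph frequencies."""
    L = np.diag(A.sum(axis=1)) - A
    lam, U = np.linalg.eigh(L)   # symmetric L: real spectrum, orthonormal U
    return lam, U

def gdft(x, U):
    """Forward GDFT: project the graph signal onto the harmonics."""
    return U.T @ x

def igdft(X, U):
    """Inverse GDFT: reconstruct the vertex-domain signal."""
    return U @ X

# usage: 4-node path graph; a constant signal is maximally "smooth",
# so its spectrum concentrates at graph frequency 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lam, U = gdft_basis(A)
x = np.ones(4)
X = gdft(x, U)
```

Graph filtering then amounts to reshaping `X` with a spectral response `h(lam)` before applying the inverse GDFT, and the Chebyshev approximations mentioned above avoid the eigendecomposition entirely for large graphs.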