Once you have invented digital money, you may need a ledger to track who owns what -- and an interface to that ledger so that users of your money can transact. On the Tezos blockchain this implies a smart contract (a distributed program) storing in its state a ledger that maps owner addresses to token quantities, together with standardised entrypoints for transacting on accounts. A bank does a similar job -- it maps account numbers to account balances and permits users to transact -- but in return the bank demands trust, incurs the expense of maintaining centralised servers and staff, uses a proprietary interface ... and it may speculate with your money and/or display rent-seeking behaviour. A blockchain ledger is by design decentralised, inexpensive, and open, and it won't just bet your tokens on risky derivatives (unless you ask). The FA1.2 standard is an open standard for ledger-keeping smart contracts on the Tezos blockchain. Several FA1.2 implementations already exist. Or do they? Is the standard sensible and complete? Are the implementations correct? And what are they implementations \emph{of}? The FA1.2 standard is written in English, a specification language favoured by wet human brains but notorious for its incompleteness and ambiguity when rendered into dry and unforgiving code. In this paper we report on a formalisation of the FA1.2 standard as a Coq specification, and on a formal verification of three FA1.2-compliant smart contracts against that specification. Errors were found and ambiguities were resolved; but also, there now exists a \emph{mathematically precise} and battle-tested specification of the FA1.2 ledger standard. We will describe FA1.2 itself, outline the structure of the Coq theories -- which in itself captures some non-trivial and novel design decisions of the development -- and review the detailed verification of the implementations.
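To make the ledger-plus-entrypoints picture concrete, the following Python sketch models the state and operations that an FA1.2-style (TZIP-7) contract maintains: a balance map, an allowance map, and transfer/approve/getter entrypoints. It is an illustrative simplification, not the verified Coq specification or any of the three verified Michelson implementations; in particular, FA1.2's restriction on unsafe allowance changes is omitted.

\begin{verbatim}
# Minimal Python sketch of an FA1.2-style ledger (simplified; the verified
# Coq specification and the real contracts handle failure modes and the
# allowance-change restriction in full).

class FA12Ledger:
    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)   # owner address -> token quantity
        self.allowances = {}                     # (owner, spender) -> approved amount
        self.total_supply = sum(self.balances.values())

    def transfer(self, sender, frm, to, amount):
        # A third party may spend from `frm` only up to its approved allowance.
        if sender != frm:
            allowed = self.allowances.get((frm, sender), 0)
            if allowed < amount:
                raise ValueError("NotEnoughAllowance")
            self.allowances[(frm, sender)] = allowed - amount
        if self.balances.get(frm, 0) < amount:
            raise ValueError("NotEnoughBalance")
        self.balances[frm] = self.balances.get(frm, 0) - amount
        self.balances[to] = self.balances.get(to, 0) + amount

    def approve(self, sender, spender, amount):
        self.allowances[(sender, spender)] = amount

    def get_balance(self, owner):
        return self.balances.get(owner, 0)

    def get_allowance(self, owner, spender):
        return self.allowances.get((owner, spender), 0)

    def get_total_supply(self):
        return self.total_supply
\end{verbatim}

Properties of the kind a formal development proves include, for instance, that transfer preserves the total supply and never creates tokens out of thin air.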
Created by volunteers since 2004, OpenStreetMap (OSM) is a global geographic database available under an open access license and currently used by a multitude of actors worldwide. This chapter describes the role played by OSM during the early months (from January to July 2020) of the ongoing COVID-19 pandemic, which - in contrast to past disasters and epidemics - is a global event impacting both developed and developing countries. A large number of COVID-19-related OSM use cases were collected and grouped into a number of categories, which are analyzed separately: dashboards and services simply using OSM as a basemap, applications using raw OSM data, initiatives to collect new OSM data, imports of authoritative data into OSM, and traditional academic research on OSM in the COVID-19 response. The wealth of examples provided in the chapter, including an analysis of OSM tile usage in two countries (Italy and China) deeply affected in the earliest months of 2020, proves that OSM has been and still is heavily used to address the COVID-19 crisis, although through types of use and mechanisms that often differ depending on the affected area or country and the related communities.
It has been known for decades that computer-based systems cannot be understood without a concept of modularization and decomposition. We suggest a universal, expressive, and intuitively attractive composition operator for Petri nets, combined with a refinement concept and an algebraic representation of nets and their composition. Case studies exemplify how large systems can be composed from tiny net snippets. In the future, more field studies are needed to better understand the consequences of the proposed ideas in the real world.
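The paper's composition operator is not reproduced here. Purely as a rough illustration of the general idea of gluing net snippets, the following Python sketch composes two labelled Petri nets by identifying equally labelled interface places; the operator proposed in the paper is more general and comes with a refinement concept and an algebraic representation.

\begin{verbatim}
# Rough illustration only: composing two labelled Petri nets by fusing
# places that carry the same (interface) label.

from dataclasses import dataclass, field

@dataclass
class Net:
    places: set = field(default_factory=set)        # place labels
    transitions: set = field(default_factory=set)   # transition names
    arcs: set = field(default_factory=set)          # (source, target) pairs

def compose(n1: Net, n2: Net) -> Net:
    # Shared place labels act as the interface and are identified (fused);
    # everything else is kept side by side.
    return Net(places=n1.places | n2.places,
               transitions=n1.transitions | n2.transitions,
               arcs=n1.arcs | n2.arcs)

# Two tiny snippets sharing the interface place "buffer":
producer = Net({"idle", "buffer"}, {"produce"},
               {("idle", "produce"), ("produce", "buffer")})
consumer = Net({"buffer", "done"}, {"consume"},
               {("buffer", "consume"), ("consume", "done")})
system = compose(producer, consumer)
\end{verbatim}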
In recent years, public discourse has blamed digital technologies for making people feel "alone together," distracting us from engaging with one another even when we are interacting in person. We argue that in order to design technologies that foster and augment co-located interactions, we first need to understand the context in which enjoyable co-located socialization takes place. We address this gap by surveying and interviewing over 1,000 U.S.-based participants to understand what, where, with whom, how, and why people enjoy spending time in person. Our findings suggest that people enjoy engaging in everyday activities with individuals with whom they have strong social ties because it helps enable nonverbal cues, facilitate spontaneity, support authenticity, encourage undivided attention, and leverage the physicality of their bodies and the environment. We conclude by providing a set of recommendations for designers interested in creating co-located technologies that encourage social engagement and relationship building.
Presently, the practice of distributed computing is such that problems exist in a mathematical realm different from their solutions: a problem is presented as a set of requirements on possible process or system behaviors, and its solution is presented as algorithmic pseudocode satisfying the requirements. Here, we present a novel mathematical realm, termed \emph{multiagent transition systems with faults}, that aims to accommodate both distributed computing problems and their solutions. A problem is presented as a specification -- a multiagent transition system -- and a solution as an implementation of the specification by another, lower-level multiagent transition system, which may be proven to be resilient to a given set of faults. This duality of roles of a multiagent transition system can be exploited all the way from a high-level distributed computing problem description down to an agreed-upon base layer, say TCP/IP, resulting in a mathematical protocol stack where each protocol in the stack both implements the protocol above it and serves as a specification for the protocol below it. Correct implementations are compositional and thus also provide an implementation of the protocol stack as a whole. The framework also offers formal -- yet natural and expressive -- notions of faults, fault-resilient implementations, and their composition.
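As a minimal sketch of the specification/implementation duality (not the paper's formal definitions), the following Python fragment represents a transition system by its transition relation and checks, on a finite run, that a lower-level system implements a higher-level specification under a chosen abstraction map, allowing stuttering steps; agents and faults are omitted.

\begin{verbatim}
# Minimal sketch (not the paper's formal definitions): a transition system
# is a transition relation over states; an implementation maps low-level
# states to high-level ones, and we check on a finite run that every
# low-level step maps to a high-level step or a stutter.

class TransitionSystem:
    def __init__(self, transitions):
        self.transitions = set(transitions)        # set of (state, state) pairs

    def allows(self, s, t):
        return (s, t) in self.transitions or s == t   # s == t: stuttering step

def implements(low_run, abstraction, high_spec):
    """Check that the abstraction of a low-level run is a run of the spec."""
    mapped = [abstraction(s) for s in low_run]
    return all(high_spec.allows(a, b) for a, b in zip(mapped, mapped[1:]))

# Toy example: a high-level counter specified to increase by 1, implemented
# by a low-level system that also records a phase flag.
high = TransitionSystem({(n, n + 1) for n in range(10)})
low_run = [(0, 'work'), (0, 'commit'), (1, 'work'), (1, 'commit'), (2, 'work')]
assert implements(low_run, lambda s: s[0], high)
\end{verbatim}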
This EPTCS volume contains the proceedings of the ThEdu'21 workshop, held on 11 July 2021 as a satellite event of CADE-28. Due to the COVID-19 pandemic, CADE-28 and all its co-located events took place virtually. ThEdu'21 was a vibrant workshop, with an invited talk by Gilles Dowek (ENS Paris-Saclay), eleven contributions, and one demonstration. After the workshop, an open call for papers was issued and attracted ten submissions, seven of which were accepted by the reviewers and are collected in the present post-proceedings volume. The ThEdu series pursues a smooth transition from an intuitive way of doing mathematics at secondary school to a more formal approach to the subject in STEM education, while favouring software support for this transition by exploiting the power of theorem-proving technologies. The volume editors hope that this collection of papers will further promote the development of theorem-proving-based software, and that it will help improve mutual understanding between computer scientists, mathematicians, and stakeholders in education.
To reduce the spread of misinformation, social media platforms may take enforcement actions against offending content, such as adding informational warning labels, reducing distribution, or removing content entirely. However, both their actions and their inactions have been controversial and plagued by allegations of partisan bias. The controversy can in part be explained by a lack of clarity about which actions should be taken, as such decisions may not neatly reduce to questions of factual accuracy. When decisions are contested, the legitimacy of decision-making processes becomes crucial to public acceptance. Platforms have tried to legitimize their decisions by following well-defined procedures through rules and codebooks. In this paper, we consider an alternate source of legitimacy -- the will of the people. Surprisingly little is known about what ordinary people want the platforms to do about specific content. We provide empirical evidence about lay raters' preferences for platform actions on 368 news articles. Our results confirm that on many items there is no clear consensus on which actions to take. There is no partisan difference in how many items deserve platform actions, but liberals do prefer somewhat more action on content from conservative sources, and vice versa. We find a clear hierarchy of perceived severity, with inform being the least severe action, followed by reduce, and then remove. We also find that judgments about two holistic properties, misleadingness and harm, could serve as an effective proxy to determine what actions would be approved by a majority of raters. We conclude by discussing the promise of the will of the people, while acknowledging the practical details that would still have to be worked out.
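Purely as a hypothetical illustration of how the two holistic judgments could serve as a proxy, the following Python rule maps misleadingness and harm ratings to one of the actions none/inform/reduce/remove. The cutoffs are invented for illustration and are not the thresholds estimated from the 368 articles in the study, whose analysis is over panels of raters rather than a single rule.

\begin{verbatim}
# Hypothetical illustration only: a simple rule mapping holistic ratings of
# misleadingness and harm (say, on 1-7 scales) to a platform action, in
# increasing order of severity: none < inform < reduce < remove.
# The cutoffs below are invented and are not the paper's estimates.

def proxy_action(misleading: float, harmful: float) -> str:
    if misleading >= 6 and harmful >= 6:
        return "remove"    # most severe action
    if misleading >= 5:
        return "reduce"    # limit distribution
    if misleading >= 4:
        return "inform"    # attach a warning label
    return "none"

print(proxy_action(6.5, 6.2))  # -> "remove"
print(proxy_action(4.5, 2.0))  # -> "inform"
\end{verbatim}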
Machine learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency of machine learning models. The development of transparent classification models has received much attention, but there have been few developments towards transparent Reinforcement Learning (RL) models. In this study we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions into a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action, as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, we construct a format for generating explanations. A pilot survey study was conducted to explore users' preferences for different explanation properties. Results indicate that human users tend to favor explanations about the policy rather than about single actions.
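A minimal sketch of the contrastive idea follows, under assumptions that are not from the paper: an environment exposing reset/step, policy functions, and a describe function that translates a state into user-readable outcome descriptions. The sketch rolls out the learned policy and the user-queried (foil) policy from the same state and reports which consequences become more or less frequent.

\begin{verbatim}
# Minimal sketch of the contrastive idea (names and interfaces are
# illustrative assumptions, not the paper's implementation): simulate the
# agent's learned policy and a user-queried foil policy from the same start
# state, translate visited states into human-readable outcome descriptions,
# and contrast the expected consequences of the two policies.

from collections import Counter

def rollout_outcomes(env, start_state, policy, describe, horizon=20, episodes=50):
    """Collect human-readable outcome descriptions seen under a policy."""
    counts = Counter()
    for _ in range(episodes):
        state = env.reset(start_state)
        for _ in range(horizon):
            state, done = env.step(policy(state))
            counts.update(describe(state))      # describe(state) -> list of outcome strings
            if done:
                break
    return counts

def contrastive_explanation(env, state, learned_policy, foil_policy, describe):
    fact = rollout_outcomes(env, state, learned_policy, describe)
    foil = rollout_outcomes(env, state, foil_policy, describe)
    gained = {o for o in fact if fact[o] > foil.get(o, 0)}
    avoided = {o for o in foil if foil[o] > fact.get(o, 0)}
    return (f"Following my policy rather than yours, I more often reach "
            f"{sorted(gained)} and more often avoid {sorted(avoided)}.")
\end{verbatim}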
Topic models are among the most widely used methods in natural language processing, allowing researchers to estimate the underlying themes in a collection of documents. Most topic models are unsupervised and hence require the additional step of attaching meaningful labels to the estimated topics. This process of manual labeling is not scalable and often problematic because it depends on the domain expertise of the researcher and may be affected by the subjectivity of human decision making. As a consequence, insights drawn from a topic model are difficult to replicate. We present a semi-automatic transfer topic labeling method that seeks to remedy some of these problems. We take advantage of the fact that domain-specific codebooks exist in many areas of research and can be exploited for automated topic labeling. We demonstrate our approach with a dynamic topic model analysis of the complete corpus of UK House of Commons speeches from 1935 to 2014, using the coding instructions of the Comparative Agendas Project to label topics. We show that our method works well for the majority of the topics we estimate, but we also find institution-specific topics, in particular on subnational governance, that require manual input. The method proposed in the paper can easily be extended to other areas with existing domain-specific knowledge bases, such as party manifestos, open-ended survey questions, social media data, and legal documents, in ways that add knowledge to research programs.
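One plausible realisation of codebook-based transfer labeling is sketched below in Python; it is not necessarily the exact procedure used in the paper. Each estimated topic's top words are scored against keyword lists derived from coding instructions such as those of the Comparative Agendas Project, the best-matching label is assigned, and weak matches (e.g., institution-specific topics) are flagged for manual input. The keyword lists are invented toy examples.

\begin{verbatim}
# One plausible realisation of semi-automatic transfer topic labeling (not
# necessarily the paper's exact procedure): match each topic's top words
# against keyword lists derived from a domain codebook, and flag weak
# matches for manual labeling.

def label_topics(topics, codebook, min_overlap=2):
    """
    topics:   {topic_id: [top words]} from a fitted topic model
    codebook: {label: {keywords}} derived from coding instructions
    Returns   {topic_id: label or "MANUAL REVIEW"}.
    """
    labels = {}
    for topic_id, top_words in topics.items():
        scores = {label: len(set(top_words) & kws) for label, kws in codebook.items()}
        best = max(scores, key=scores.get)
        labels[topic_id] = best if scores[best] >= min_overlap else "MANUAL REVIEW"
    return labels

# Toy example with invented keyword lists:
codebook = {"Health": {"hospital", "nhs", "doctor", "patients"},
            "Defence": {"army", "navy", "troops", "weapons"}}
topics = {0: ["nhs", "hospital", "waiting", "patients"],
          1: ["council", "devolution", "mayor", "regions"]}
print(label_topics(topics, codebook))  # {0: 'Health', 1: 'MANUAL REVIEW'}
\end{verbatim}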
Although intensively studied, visual object tracking has witnessed great advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features), while real-time and high-accuracy tracking algorithms remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, taking advantage of the ubiquity of multi-thread techniques and borrowing ideas from the success of parallel tracking and mapping in visual SLAM. The proposed PTAV framework is composed of two components, a (base) tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V validates the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon requests from T; on the other hand, T may adjust its tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. Meanwhile, to adapt V to object appearance changes over time, we maintain a dynamic target template pool for adaptive verification, resulting in further performance improvements. In our extensive experiments on popular benchmarks including OTB2015, TC128, UAV20L and VOT2016, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact even outperforms many deep-learning-based algorithms. Moreover, as a general framework, PTAV is very flexible, with great potential for future improvement and generalization.
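The following Python sketch shows the structural collaboration described above; it is not the authors' released implementation. A fast tracker T runs on every frame and occasionally sends its result to a verifier V on a separate thread, and V sends a correction back whenever it rejects a result. The stand-in track and verify functions are placeholders for a real-time tracker and a more expensive verifier (e.g., one using deep features and a dynamic template pool).

\begin{verbatim}
# Structural sketch of the PTAV collaboration (not the released code):
# tracker T runs on every frame; verifier V runs on its own thread, only on
# the frames T asks about, and feeds corrections back to T.

import queue, threading

requests, feedback = queue.Queue(), queue.Queue()

def verifier(verify):
    while True:
        frame_id, box = requests.get()
        if frame_id is None:                      # sentinel: stop the thread
            break
        ok, corrected = verify(frame_id, box)
        if not ok:
            feedback.put((frame_id, corrected))

def tracker(frames, track, verify_every=10):
    box = None
    for frame_id, frame in enumerate(frames):
        box = track(frame, box)                   # super real-time tracking step
        if frame_id % verify_every == 0:
            requests.put((frame_id, box))         # ask V to validate this result
        while not feedback.empty():               # apply any pending correction
            _, box = feedback.get()
    requests.put((None, None))

# Toy stand-ins so the sketch runs end to end:
frames = range(100)
track = lambda frame, box: (frame, frame)                  # pretend bounding box
verify = lambda fid, box: (fid % 30 != 0, (fid, fid))      # occasionally "correct"
t = threading.Thread(target=verifier, args=(verify,)); t.start()
tracker(frames, track)
t.join()
\end{verbatim}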