Algorithms have permeated civil government and society, where they are used to make high-stakes decisions about human lives. In this paper, we first develop a cohesive framework of algorithmic decision-making adapted for the public sector (ADMAPS) that reflects the complex socio-technical interactions between \textit{human discretion}, \textit{bureaucratic processes}, and \textit{algorithmic decision-making} by synthesizing disparate bodies of work in the fields of Human-Computer Interaction (HCI), Science and Technology Studies (STS), and Public Administration (PA). We then apply the ADMAPS framework to conduct a qualitative analysis of an in-depth, eight-month ethnographic case study of the algorithms in daily use within a child-welfare agency that serves approximately 900 families and 1300 children in the mid-western United States. Overall, we find that there is a need to focus on strength-based algorithmic outcomes centered in social-ecological frameworks. In addition, algorithmic systems need to support existing bureaucratic processes and augment human discretion rather than replace it. Finally, collective buy-in for algorithmic systems requires trust in the target outcomes at both the practitioner and bureaucratic levels. Based on our study, we propose guidelines for the design of high-stakes algorithmic decision-making tools in the child-welfare system and, more generally, in the public sector. We empirically validate the theoretically derived ADMAPS framework and demonstrate how it can be used to systematically make pragmatic decisions about the design of algorithms for the public sector.
In this paper, we provide a new theoretical framework of pyramid Markov processes to solve several open and fundamental problems of blockchain selfish mining in a rigorous mathematical setting. We first describe a more general model of blockchain selfish mining with both a two-block leading competitive criterion and a new economic incentive mechanism. We then establish a pyramid Markov process and show that it is irreducible and positive recurrent, and that its stationary probability vector is matrix-geometric with an explicitly representable rate matrix. We also use the stationary probability vector to study how orphan blocks waste computing resources. Next, we set up a pyramid Markov reward process to investigate the long-run average profits of the honest and dishonest mining pools, respectively. As a by-product, we build three approximate Markov processes and provide new interpretations of the Markov chain and the revenue analysis reported in the seminal work of Eyal and Sirer (2014). The pyramid Markov (reward) processes open up a new avenue for the study of blockchain selfish mining, and we hope that the methodology and results developed in this paper will stimulate a series of promising future studies.
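To make the matrix-geometric structure concrete, the sketch below computes the rate matrix of a generic level-independent quasi-birth-death (QBD) process via the classical fixed-point iteration; the block matrices are illustrative placeholders, not the pyramid Markov process analysed in the paper.

```python
import numpy as np

# Illustrative QBD blocks (generator form): A0 = one level up, A2 = one
# level down, A1 = local transitions chosen so rows of A0+A1+A2 sum to 0.
# These are placeholder values, NOT the pyramid Markov process of the paper.
A0 = np.array([[0.0, 2.0], [1.0, 0.0]])
A2 = np.array([[3.0, 0.0], [0.0, 3.0]])
A1 = -np.diag((A0 + A2).sum(axis=1))

# Fixed-point iteration for the minimal nonnegative solution of
# A0 + R A1 + R^2 A2 = 0 (Neuts' matrix-geometric method); the stationary
# vector then satisfies pi_{n+1} = pi_n R level by level.
R = np.zeros_like(A0)
for _ in range(1000):
    R_new = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)
    if np.max(np.abs(R_new - R)) < 1e-12:
        R = R_new
        break
    R = R_new

print("rate matrix R:\n", R)
print("spectral radius (should be < 1 for stability):", max(abs(np.linalg.eigvals(R))))
```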
Blockchains are gaining momentum due to the interest of industries and people in \emph{decentralized applications} (Dapps), particularly those for trading assets through digital certificates secured on blockchain, called tokens. As a consequence, providing a clear, unambiguous description of the activities carried out on blockchains has become crucial, and we feel the urgency of achieving such a description, at least for trading. This paper reports on how to leverage the \emph{Ontology for Agents, Systems, and Integration of Services} ("\ONT{}") as a general means for the semantic representation of smart contracts stored on the blockchain as software agents. Special attention is paid to non-fungible tokens (NFTs), whose management through the ERC721 standard is presented as a case study.
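As a rough illustration of what such a semantic representation might look like, the Python/rdflib sketch below encodes an ERC721 token transfer as RDF triples describing actions of a contract agent; the namespace, classes, and properties are hypothetical placeholders and do not reproduce the actual \ONT{} vocabulary.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

# Hypothetical namespace standing in for the OASIS/ONT vocabulary;
# every class and property name below is illustrative only.
ONT = Namespace("http://example.org/oasis#")
EX = Namespace("http://example.org/nft-demo#")

g = Graph()
g.bind("ont", ONT)

contract = EX["ERC721Contract_0xABC"]   # the smart contract, modelled as a software agent
token, alice, bob = EX["token_42"], EX["alice"], EX["bob"]

g.add((contract, RDF.type, ONT.Agent))
g.add((token, RDF.type, ONT.NonFungibleToken))
g.add((token, ONT.mintedBy, contract))
g.add((token, ONT.ownedBy, alice))

# A transfer is modelled as an action performed by the contract agent.
transfer = EX["transfer_1"]
g.add((transfer, RDF.type, ONT.TokenTransfer))
g.add((transfer, ONT.performedBy, contract))
g.add((transfer, ONT.fromAgent, alice))
g.add((transfer, ONT.toAgent, bob))
g.add((transfer, ONT.involvesToken, token))

print(g.serialize(format="turtle"))
```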
In this contribution we extend an ontology for modelling agents and their interactions, called the Ontology for Agents, Systems, and Integration of Services (in short, OASIS), with conditionals and ontological smart contracts (in short, OSCs). OSCs are ontological representations of smart contracts that allow one to establish responsibilities and authorizations among agents and to set agreements, whereas conditionals allow one to restrict and limit agent interactions, define activation mechanisms that trigger agent actions, and define constraints and contract terms on OSCs. Conditionals and OSCs, as defined in OASIS, are applied to endow digital public ledgers such as the blockchain, and the smart contracts implemented on them, with ontological capabilities. We also sketch the architecture of a framework based on the OASIS definition of OSCs that exploits the Ethereum platform and the InterPlanetary File System.
We study the emergence of cooperation in large spatial public goods games. Without severe social pressure on "defectors", or alternatively significant rewards for "cooperators", theoretical models typically predict a system collapse reminiscent of the "tragedy of the commons" metaphor. Drawing on a dynamic network model, this paper demonstrates how cooperation can emerge when the social pressure is mild. This is achieved with the aid of an additional behavior called "hypocritical", which appears cooperative from an external observer's perspective but in fact hardly contributes to the social welfare. Our model assumes that social pressure is exerted on both defectors and hypocritical players, though possibly to different extents. Our main result indicates that the emergence of cooperation depends strongly on the extent of the social pressure applied to hypocritical players. Setting it within an intermediate range below the level applied to defectors allows a system composed almost exclusively of defectors to quickly transform into a fully cooperative one. Conversely, when the social pressure on hypocritical players is either too low or too high, the system remains locked in a degenerate configuration.
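The toy Python sketch below caricatures the three behaviors and the role of the social-pressure parameters in a well-mixed population; it is not the paper's dynamic network model, and all parameter values are illustrative.

```python
import random

N = 1000        # population size
r = 3.0         # public-good multiplication factor
cost = 1.0      # contribution cost paid by cooperators
p_def = 2.0     # social pressure applied to defectors
p_hyp = 1.0     # social pressure applied to hypocritical players (the key knob)

def payoff(strategy, frac_coop):
    share = r * cost * frac_coop        # everyone enjoys the produced public good
    if strategy == "C":
        return share - cost             # cooperators pay the contribution cost
    if strategy == "H":
        return share - p_hyp            # looks cooperative, contributes nothing
    return share - p_def                # defectors bear the full social pressure

pop = ["D"] * N                         # start from a fully defecting system
for _ in range(20000):
    frac_coop = pop.count("C") / N
    i = random.randrange(N)
    trial = random.choice(["C", "D", "H"])
    if payoff(trial, frac_coop) >= payoff(pop[i], frac_coop):
        pop[i] = trial                  # adopt the trial strategy if it pays at least as well

print({s: pop.count(s) for s in "CDH"})
```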
Rapidly identifying effective therapeutics requires the efficient use of available resources in clinical trials. Covariate adjustment can yield statistical estimates with improved precision, reducing the number of participants required to draw futility or efficacy conclusions. We focus on time-to-event and ordinal outcomes. A key question for covariate adjustment in randomized studies is how to fit a model relating the outcome to the baseline covariates so as to maximize precision. We present a novel theoretical result establishing conditions for asymptotic normality of a variety of covariate-adjusted estimators that rely on machine learning (e.g., l1-regularization, Random Forests, XGBoost, and Multivariate Adaptive Regression Splines), under the assumption that outcome data are missing completely at random. We further present a consistent estimator of the asymptotic variance. Importantly, the conditions do not require the machine learning methods to converge to the true outcome distribution conditional on baseline variables, as long as they converge to some (possibly incorrect) limit. We conducted a simulation study to evaluate the performance of the aforementioned prediction methods in COVID-19 trials, using longitudinal data from over 1,500 patients hospitalized with COVID-19 at Weill Cornell Medicine New York Presbyterian Hospital. We found that using l1-regularization led to estimators, and corresponding hypothesis tests, that control type 1 error and are more precise than an unadjusted estimator across all sample sizes tested. We also show that when covariates are not prognostic of the outcome, l1-regularization remains as precise as the unadjusted estimator, even at small sample sizes (n = 100). We provide an R package, adjrct, that performs model-robust covariate adjustment for ordinal and time-to-event outcomes.
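For intuition, the Python sketch below shows generic covariate adjustment via model standardization for a binary outcome with an l1-penalized working model; it is a simplified illustration only, not the adjrct package, and it omits the ordinal and time-to-event machinery as well as the variance estimation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated randomized trial with baseline covariates X, treatment A, outcome Y.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))                 # baseline covariates
A = rng.integers(0, 2, size=n)               # randomized treatment assignment
logit = 0.5 * A + X[:, 0] - 0.5 * X[:, 1]
Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit an l1-regularized outcome working model on (treatment, covariates).
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(np.column_stack([A, X]), Y)

# Standardization: average predictions over everyone, once with A=1, once with A=0.
mu1 = model.predict_proba(np.column_stack([np.ones(n), X]))[:, 1].mean()
mu0 = model.predict_proba(np.column_stack([np.zeros(n), X]))[:, 1].mean()
print("covariate-adjusted risk difference:", mu1 - mu0)
print("unadjusted risk difference:", Y[A == 1].mean() - Y[A == 0].mean())
```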
Spatial drawing using ruled-surface brush strokes is a popular mode of content creation in immersive VR, yet little is known about the usability of existing spatial drawing interfaces or how they could be improved. We address these questions in a three-phase study. (1) Our exploratory need-finding study (N=8) indicates that popular spatial brushes require users to perform large wrist motions, causing physical strain. We speculate that this is partly because users are constrained to align their 3D controllers with the intended normal orientation of the stroke. (2) We designed and implemented StripBrush, a new brush interface that significantly reduces the physical effort and wrist motion involved in VR drawing, with the additional benefit of increased drawing accuracy. We achieve this by relaxing the normal-alignment constraint: users instead control the stroke rulings, from which normals are estimated. (3) Our comparative evaluation of StripBrush (N=17) against the traditional brush shows that StripBrush requires significantly less physical effort and allows users to depict their intended shapes more accurately while offering competitive ease-of-use and speed.
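The short Python sketch below illustrates the general geometric idea of estimating stroke normals from user-controlled rulings (taking each normal orthogonal to the ruling direction and the local stroke tangent); it is a simplified illustration, not StripBrush's actual implementation.

```python
import numpy as np

# Placeholder stroke data: midpoints of successive rulings and the
# user-controlled ruling directions along the stroke.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.5, 0.1]])
rulings = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.1], [0.0, 0.9, 0.3]])

tangents = np.gradient(centers, axis=0)                  # finite-difference stroke tangents
normals = np.cross(tangents, rulings)                    # orthogonal to tangent and ruling
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(normals)
```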
In engineering practice, it is often necessary to increase the effectiveness of existing protective constructions for ports and coasts (i.e., breakwaters) by extending their configuration, because the existing configurations do not provide the appropriate environmental conditions. This extension task can be considered an optimisation problem. In this paper, a multi-objective evolutionary approach to breakwater optimisation is proposed. In addition, a greedy heuristic is implemented and incorporated into the algorithm, which allows an appropriate solution to be reached faster. The identification of the optimal variant of attached breakwaters that provides safe ship parking and manoeuvring in the large Black Sea Port of Sochi is used as a case study. The experimental results demonstrate that the proposed multi-objective evolutionary approach can be applied to real-world engineering problems: it identifies a Pareto-optimal set of possible configurations, which can be analysed by decision makers and used for the final construction.
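As a minimal illustration of the multi-objective selection step, the Python sketch below extracts the non-dominated (Pareto-optimal) breakwater configurations from a random population; the two objectives and their surrogate evaluation are placeholders, not a real hydrodynamic model.

```python
import random

def evaluate(config):
    # Placeholder surrogate: only the first two segments shelter the harbour,
    # while the cost grows with the total breakwater length.  Both objectives
    # are minimised.
    wave_height = 1.0 / (1.0 + config[0] + config[1])
    cost = sum(config)
    return wave_height, cost

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

population = [[random.uniform(0.0, 5.0) for _ in range(3)] for _ in range(50)]
scores = [evaluate(c) for c in population]
pareto = [c for c, s in zip(population, scores)
          if not any(dominates(t, s) for t in scores if t is not s)]
print(f"{len(pareto)} non-dominated configurations out of {len(population)}")
```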
The emergence of Industry 4.0 is making production systems more flexible and also more dynamic. In these settings, schedules often need to be adapted in real time by dispatching rules. Although substantial progress was made on such rules up to the 1990s, their performance is still rather limited. The machine learning literature is developing a variety of methods to improve them, but the resulting rules are difficult to interpret and do not generalise well across a wide range of settings. This paper is the first major attempt at combining machine learning with domain problem reasoning for scheduling. The idea is to use the insights obtained from the latter to guide the empirical search of the former. Our hypothesis is that this guided empirical learning process should result in dispatching rules that are effective, interpretable, and generalise well to different instance classes. We test our approach on the classical dynamic job shop scheduling problem minimising tardiness, one of the most well-studied scheduling problems. Despite how extensively this problem has been studied, our approach was able to find new state-of-the-art rules, which significantly outperform the existing literature in the vast majority of settings, from loose to tight due dates and from low-utilisation conditions to congested shops. Overall, the average improvement is 19%. Moreover, the rules are compact, interpretable, and generalise well to extreme, unseen scenarios.
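For readers unfamiliar with dispatching rules, the Python sketch below shows how a classical baseline rule for tardiness, Modified Due Date (MDD), selects the next job whenever a machine becomes free; it is a textbook baseline, not one of the new rules found by the guided empirical learning described above.

```python
def mdd_priority(job, now):
    # job = (processing_time, due_date); a smaller value means higher priority.
    p, d = job
    return max(d, now + p)

def next_job(queue, now):
    # Pick the queued job with the tightest modified due date.
    return min(queue, key=lambda job: mdd_priority(job, now))

queue = [(5, 20), (3, 8), (7, 30)]   # (processing_time, due_date) of waiting jobs
print(next_job(queue, now=6))        # -> (3, 8)
```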
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.
The novel coronavirus disease (COVID-19) has upended daily routines and is still rampaging through the world. Existing solutions for nonpharmaceutical interventions usually need to select, in a timely and precise manner, a subset of residential urban areas for containment or even quarantine, where the spatial distribution of confirmed cases has been considered a key criterion for the selection. While such containment measures have successfully stopped or slowed down the spread of COVID-19 in some countries, they are criticized for being inefficient or ineffective, because the statistics of confirmed cases are usually time-delayed and coarse-grained. To tackle these issues, we propose C-Watcher, a novel data-driven framework that aims to screen every neighborhood in a target city and predict infection risks prior to the spread of COVID-19 from epicenters to that city. In terms of design, C-Watcher collects large-scale, long-term human mobility data from Baidu Maps and then characterizes every residential neighborhood in the city using a set of features based on urban mobility patterns. Furthermore, to transfer firsthand knowledge learned in epicenters to the target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features, enabling precise early detection of high-risk neighborhoods in the target city even before any confirmed cases are known. We carried out extensive experiments on C-Watcher using real-world records from the early stage of COVID-19 outbreaks, and the results demonstrate the efficiency and effectiveness of C-Watcher for the early detection of high-risk neighborhoods across a large number of cities.
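The Python sketch below illustrates the general idea of learning "city-invariant" representations with a domain-adversarial (gradient-reversal) objective; the layer sizes, feature dimensions, and loss weighting are hypothetical and do not reproduce C-Watcher's actual architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Illustrative sizes: 32 mobility-based features, 16-dimensional representation,
# 5 source cities in the training pool.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
risk_head = nn.Linear(16, 1)    # predicts neighborhood infection risk
city_head = nn.Linear(16, 5)    # tries to identify the source city of a sample

def loss(x, risk_y, city_y, lam=0.1):
    z = encoder(x)
    risk_loss = nn.functional.binary_cross_entropy_with_logits(
        risk_head(z).squeeze(-1), risk_y)
    # The city classifier sees reversed gradients, pushing the encoder toward
    # representations that do NOT reveal which city a sample came from.
    city_loss = nn.functional.cross_entropy(city_head(GradReverse.apply(z, lam)), city_y)
    return risk_loss + city_loss
```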