Little research has been directed to the forensic analysis of Bitcoin mixing, or 'tumbling', services themselves. This work examines effective tooling and methodology for recovering forensic artifacts from two privacy-focused mixing services: Obscuro, which uses the secure enclave on Intel chips to provide enhanced confidentiality, and Wasabi Wallet, which uses CoinJoin to mix and obfuscate cryptocurrencies. These wallets were set up on virtual machines, and several forensic tools were then used to examine the VM images for relevant artifacts. The tools recovered a broad range of forensic artifacts, and both network forensics and logging files proved to be useful sources of artifacts for deanonymizing these mixing services.
The Internet of Things (IoT) comprises a heterogeneous mix of smart devices that vary widely in size, usage, energy capacity, and computational power. IoT devices are typically connected to the Cloud via Fog nodes for fast processing and response times. In the rush to deploy devices quickly and maximize market share, manufacturers often treat security as an afterthought. Well-known security concerns of the IoT include data confidentiality, device authentication, location privacy, and device integrity. We believe that the majority of security schemes proposed to date are too heavyweight to be of practical value for the IoT. In this paper we propose a lightweight encryption scheme loosely based on the classic one-time pad, making use of hash functions for the generation and management of keys. Our scheme imposes minimal computational and storage requirements on the network nodes, which makes it a viable candidate for the encryption of data transmitted by IoT devices in the Fog.
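To make the key-management idea concrete, the following is a minimal sketch of a one-time-pad-style cipher whose pad is derived from a hash chain; the function names and the use of SHA-256 are illustrative assumptions, and the paper's actual key generation and refresh mechanism may differ.

```python
# Minimal sketch of a one-time-pad-style scheme whose keystream is derived
# from a hash chain (an assumption for illustration; the paper's actual key
# generation and management may differ).
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Derive `length` keystream bytes by iterating SHA-256 over a shared seed."""
    out = bytearray()
    block = seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:length])

def encrypt(seed: bytes, plaintext: bytes) -> bytes:
    # XOR with the hash-derived pad; decryption is the same operation.
    pad = keystream(seed, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Usage: both the node and the Fog gateway hold the seed; each message should
# use a fresh seed (e.g., ratcheted by hashing) so the pad is never reused.
shared_seed = hashlib.sha256(b"device-id||session-nonce").digest()
ct = encrypt(shared_seed, b"sensor reading: 21.5C")
assert encrypt(shared_seed, ct) == b"sensor reading: 21.5C"
```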
Contact patterns play a key role in the spread of respiratory infectious diseases in human populations. During the COVID-19 pandemic, the regular contact patterns of the population were disrupted due to social distancing, both imposed by the authorities and adopted by individual choice. Here we present the results of a contact survey conducted in Chinese provinces outside Hubei in March 2020, right after lockdowns were lifted. We then leveraged the estimated mixing patterns to calibrate a model of SARS-CoV-2 transmission, which was used to estimate different metrics of COVID-19 burden by age. Study participants reported 2.3 contacts per day (IQR: 1.0-3.0), and the mean per-contact duration was 7.0 hours (IQR: 1.0-10.0). No significant differences were observed between provinces, the number of recorded contacts did not show a clear-cut trend by age, and most of the recorded contacts occurred with family members (about 78%). Our findings suggest that, although the lockdown was no longer in place at the time of the survey, people were still heavily limiting their contacts compared with the pre-pandemic situation. Moreover, the modeling results highlight the importance of considering age-specific contact patterns when estimating COVID-19 burden.
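As an illustration of why age-contact patterns matter for burden estimates (a generic textbook-style sketch, not the transmission model calibrated in the study; all parameter values are placeholders), the estimated contact matrix enters a next-generation matrix whose dominant eigenvalue scales the reproduction number:

```python
# Illustrative sketch (not the paper's model): an age-stratified contact
# matrix feeds a next-generation matrix, whose dominant eigenvalue scales
# the reproduction number. All numbers below are placeholders.
import numpy as np

contacts = np.array([[1.2, 0.8, 0.3],   # mean daily contacts, rows = age of
                     [0.8, 1.5, 0.5],   # participant, columns = age of contact
                     [0.3, 0.5, 0.9]])
susceptibility = np.array([0.4, 1.0, 1.3])   # relative susceptibility by age
beta, infectious_period = 0.05, 5.0          # per-contact transmission, days

# Next-generation matrix K[i, j]: expected infections in age group i caused by
# one infected individual in age group j over the infectious period.
K = beta * infectious_period * susceptibility[:, None] * contacts
R = max(abs(np.linalg.eigvals(K)))
print(f"Reproduction number implied by these placeholder inputs: {R:.2f}")
```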
Popular cryptocurrencies continue to face serious scalability issues due to their ever-growing blockchains. Thus, modern blockchain designs began to prune old blocks and rely on recent snapshots for their bootstrapping processes instead. Unfortunately, established systems are often considered incapable of adopting these improvements. In this work, we present CoinPrune, our block-pruning scheme with full Bitcoin compatibility, to revise this popular belief. CoinPrune bootstraps joining nodes via snapshots that are periodically created from Bitcoin's set of unspent transaction outputs (UTXO set). Our scheme establishes trust in these snapshots by relying on CoinPrune-supporting miners to mutually reaffirm a snapshot's correctness on the blockchain. This way, snapshots remain trustworthy even if adversaries attempt to tamper with them. Our scheme maintains its retrospective deployability by relying on positive feedback only, i.e., blocks containing invalid reaffirmations are not rejected, but invalid reaffirmations are outpaced by the benign ones created by an honest majority among CoinPrune-supporting miners. Already today, CoinPrune reduces the storage requirements for Bitcoin nodes by two orders of magnitude, as joining nodes need to fetch and process only 6 GiB instead of 271 GiB of data in our evaluation, reducing the synchronization time of powerful devices from currently 7 h to 51 min, with even larger potential drops for less powerful devices. CoinPrune is further aware of higher-level application data, i.e., it conserves otherwise pruned application data and allows nodes to obfuscate objectionable and potentially illegal blockchain content from their UTXO set and the snapshots they distribute.
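The reaffirmation mechanism can be pictured with a simplified tally (an illustrative sketch only; CoinPrune's actual on-chain encoding, window length, and acceptance rule are not reproduced here): a joining node counts the snapshot reaffirmations observed in recent blocks and trusts the snapshot backed by the honest majority, so invalid reaffirmations are simply outvoted rather than rejected.

```python
# Illustrative sketch of the reaffirmation idea (simplified; not CoinPrune's
# actual on-chain encoding): a joining node tallies snapshot reaffirmations
# found in recent blocks and trusts the snapshot hash with a clear majority.
from collections import Counter

def select_trusted_snapshot(blocks: list[dict], window: int, threshold: float) -> str | None:
    """Return the snapshot hash reaffirmed by more than `threshold` of the
    reaffirming blocks within the last `window` blocks, or None otherwise."""
    recent = blocks[-window:]
    tallies = Counter(b["reaffirmed_snapshot"] for b in recent
                      if b.get("reaffirmed_snapshot"))
    if not tallies:
        return None
    best_hash, votes = tallies.most_common(1)[0]
    total = sum(tallies.values())
    return best_hash if votes / total > threshold else None

# Usage: invalid reaffirmations are not rejected, they are simply outvoted.
chain = [{"reaffirmed_snapshot": "utxo-snap-a1"}] * 80 + \
        [{"reaffirmed_snapshot": "utxo-snap-evil"}] * 20
print(select_trusted_snapshot(chain, window=100, threshold=0.5))  # utxo-snap-a1
```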
Object-oriented programming (OOP) is one of the most popular paradigms for building software systems. However, despite its industrial and academic popularity, OOP still lacks a formal apparatus similar to the lambda calculus on which functional programming is based. There have been a number of attempts to formalize OOP, but none of them covers all the features available in modern OO programming languages such as C++ or Java. We have made yet another attempt and created phi-calculus. We have also created EOLANG (also called EO), an experimental programming language based on phi-calculus.
Meta-analyses of survival studies aim to reveal the variation of an effect measure of interest over different studies and present a meaningful summary. They must address between-study heterogeneity in several dimensions and eliminate spurious sources of variation. Forest plots of the usual (adjusted) hazard ratios are fraught with difficulties from this perspective, since both the magnitude and interpretation of these hazard ratios depend on factors ancillary to the true study-specific exposure effect. These factors generally include the study duration, the censoring patterns within studies, the covariates adjusted for, and their distribution over exposure groups. Ignoring these features and accepting implausible hidden assumptions may critically affect interpretation of the pooled effect measure. Risk differences or restricted mean effects over a common follow-up interval and a balanced distribution of a covariate set are natural candidates for exposure evaluation and possible treatment choice. In this paper, we propose differently standardized survival curves over a fitting time horizon, targeting various estimands with their own transportability. With each type of standardization comes a given interpretation within studies and overall, under stated assumptions. These curves can in turn be summarized by standardized study-specific contrasts, including hazard ratios with a more consistent meaning. We prefer forest plots of risk differences at well-chosen time points. Our case study examines overall survival among anal squamous cell carcinoma patients, expressing the tumor marker $p16^{INK4a}$ or not, based on the individual patient data of six studies.
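For illustration, one common form of covariate standardization averages a fitted conditional survival model over a reference covariate sample; risk differences and restricted-mean contrasts are then read off the standardized curves. The notation below is a generic sketch under stated modeling assumptions, not necessarily the exact estimands adopted in the paper.

```latex
% Covariate-standardized survival in exposure group a, averaging a fitted
% conditional survival model over the pooled covariate sample Z_1,...,Z_n
% (an illustrative form of standardization, under stated modeling assumptions):
\[
  \hat S_a^{\mathrm{std}}(t) \;=\; \frac{1}{n}\sum_{i=1}^{n} \hat S\bigl(t \mid A = a,\, Z = Z_i\bigr),
  \qquad a \in \{0, 1\}.
\]
% Standardized contrasts at a chosen time point t* within the common follow-up,
% and over a common restriction time tau:
\[
  \widehat{\mathrm{RD}}(t^\ast) \;=\; \hat S_0^{\mathrm{std}}(t^\ast) - \hat S_1^{\mathrm{std}}(t^\ast),
  \qquad
  \widehat{\Delta\mathrm{RMST}}(\tau) \;=\; \int_0^{\tau} \bigl[\hat S_1^{\mathrm{std}}(t) - \hat S_0^{\mathrm{std}}(t)\bigr]\, dt .
\]
```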
Blockchain has been widely deployed in various sectors, such as finance, education, and public services. Since blockchain runs as an immutable distributed ledger, it provides decentralized mechanisms with persistency, anonymity, and auditability, where transactions are jointly performed through cryptocurrency-based consensus algorithms by worldwide distributed nodes. There have been many survey papers reviewing blockchain technologies from different perspectives, e.g., digital currencies, consensus algorithms, and smart contracts. However, none of them has focused on blockchain data management systems. To fill this gap, we have conducted a comprehensive survey on data management systems, based on three typical types of blockchain, i.e., standard blockchain, hybrid blockchain, and DAG (Directed Acyclic Graph)-based blockchain. We categorize their data management mechanisms into three layers: blockchain architecture, blockchain data structure, and blockchain storage engine, where blockchain architecture indicates how transactions are recorded on a distributed ledger, blockchain data structure refers to the internal structure of each block, and blockchain storage engine specifies the storage form of data on the blockchain system. For each layer, the works advancing the state of the art are discussed together with their technical challenges. Furthermore, we lay out future research directions for blockchain data management systems.
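As a generic illustration of the "blockchain data structure" layer (not the exact block format of any particular system surveyed), a block typically pairs a header, which links to the previous block and commits to the transactions via a Merkle root, with a body holding the raw transactions:

```python
# Generic sketch of a block's internal structure: a header linking to the
# previous block and committing to the body via a Merkle root, plus a body of
# raw transactions. This is an illustration, not any system's exact format.
import hashlib
from dataclasses import dataclass

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txs: list[bytes]) -> bytes:
    layer = [sha256d(tx) for tx in txs] or [sha256d(b"")]
    while len(layer) > 1:
        if len(layer) % 2:                   # duplicate the last hash if odd
            layer.append(layer[-1])
        layer = [sha256d(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

@dataclass
class BlockHeader:
    prev_hash: bytes
    merkle_root: bytes
    timestamp: int
    nonce: int

@dataclass
class Block:
    header: BlockHeader
    transactions: list[bytes]                # the block body

txs = [b"alice->bob:1", b"bob->carol:2"]
block = Block(BlockHeader(prev_hash=b"\x00" * 32, merkle_root=merkle_root(txs),
                          timestamp=1700000000, nonce=0), txs)
```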
In the era of big data, standard analysis tools may be inadequate for making inference, and there is a growing need for more efficient and innovative ways to collect, process, analyze, and interpret massive and complex data. We provide an overview of challenges in big data problems and describe how innovative analytical methods, machine learning tools, and metaheuristics can tackle general healthcare problems, with a focus on the current pandemic. In particular, we give applications of modern digital technology, statistical methods, data platforms, and data integration systems to improve diagnosis and treatment of diseases in clinical research, and novel epidemiologic tools to tackle infection source problems, such as finding Patient Zero in the spread of epidemics. We make the case that analyzing and interpreting big data is a very challenging task that requires a multi-disciplinary effort to continuously create more effective methodologies and powerful tools to translate data into knowledge that enables informed decision making.
In recent years, misinformation on the Web has become increasingly rampant. The research community has responded by proposing systems and challenges that are beginning to be useful for (various subtasks of) detecting misinformation. However, most proposed systems are based on deep learning techniques that are fine-tuned to specific domains, are difficult to interpret, and produce results that are not machine readable. This limits their applicability and adoption, as they can only be used by a select expert audience in very specific settings. In this paper we propose an architecture based on the core concept of Credibility Reviews (CRs), which can be used to build networks of distributed bots that collaborate on misinformation detection. The CRs serve as building blocks to compose graphs of (i) web content, (ii) existing credibility signals --fact-checked claims and reputation reviews of websites--, and (iii) automatically computed reviews. We implement this architecture on top of lightweight extensions to Schema.org and services providing generic NLP tasks for semantic similarity and stance detection. Evaluations on existing datasets of social-media posts, fake news, and political speeches demonstrate several advantages over existing systems: extensibility, domain independence, composability, explainability, and transparency via provenance. Furthermore, we obtain competitive results without requiring fine-tuning and establish a new state of the art on the Clef'18 CheckThat! Factuality task.
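A minimal sketch of how Credibility Reviews might compose into a graph with provenance is given below; the class and field names, as well as the confidence-weighted aggregation, are illustrative assumptions, not the paper's actual Schema.org extension.

```python
# Illustrative sketch (not the paper's actual Schema.org extension) of how
# Credibility Reviews might compose: each review rates an item, cites the
# signals it was derived from, and records which bot produced it.
from dataclasses import dataclass, field

@dataclass
class CredibilityReview:
    item_reviewed: str                      # URL or claim text
    rating: float                           # -1.0 (not credible) .. 1.0 (credible)
    confidence: float                       # 0.0 .. 1.0
    author_bot: str                         # which bot produced this review
    based_on: list["CredibilityReview"] = field(default_factory=list)

def aggregate(reviews: list[CredibilityReview], item: str, bot: str) -> CredibilityReview:
    """Compose sub-reviews into a higher-level review, confidence-weighted."""
    total_conf = sum(r.confidence for r in reviews) or 1.0
    rating = sum(r.rating * r.confidence for r in reviews) / total_conf
    return CredibilityReview(
        item_reviewed=item, rating=rating,
        confidence=min(r.confidence for r in reviews),
        author_bot=bot, based_on=reviews)   # provenance kept via `based_on`

claim = CredibilityReview("the checked claim text", rating=-0.8, confidence=0.9,
                          author_bot="factcheck-matcher")
site = CredibilityReview("https://example.org", rating=-0.3, confidence=0.6,
                         author_bot="site-reputation")
article = aggregate([claim, site], item="https://example.org/article",
                    bot="article-reviewer")
print(article.rating)   # confidence-weighted composite rating
```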
Detection and recognition of text in natural images are two main problems in the field of computer vision, with a wide variety of applications in the analysis of sports videos, autonomous driving, and industrial automation, to name a few. They share challenging problems arising from how text is represented and from the environmental conditions affecting it. The current state-of-the-art scene text detection and/or recognition methods have exploited recent advances in deep learning architectures and report superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several challenges remain for text in the wild that cause existing methods to underperform, because their models are unable to generalize to unseen data and the available labeled data are insufficient. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, to offer the reader not only a review of the recent advances in scene text detection and recognition, but also the results of extensive experiments using a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to all of these techniques. Second, to identify several existing challenges for detecting or recognizing text in the wild, namely in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts, and special characters. Finally, the paper also presents insight into potential research directions in this field to address some of the challenges that scene text detection and recognition techniques still face.
In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media. The widespread dissemination of disinformation and fake news can cause detrimental societal effects. Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking or annotation. The goal of this chapter is to pave the way for appreciating the challenges and advancements by: (1) introducing the types of information disorder on social media and examining their differences and connections; (2) describing important and emerging tasks to combat disinformation through characterization, detection, and attribution; and (3) discussing a weak supervision approach to detecting disinformation with limited labeled data. We then provide an overview of the chapters in this book that represent recent advancements in three related parts: (1) user engagement in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, and clickbait. We hope this book serves as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains.