Cross-blockchain communication has gained traction due to the increasing fragmentation of blockchain networks and scalability solutions such as side-chaining and sharding. With SmartSync, we propose a novel concept for cross-blockchain smart contract interactions that creates client contracts on arbitrary blockchain networks supporting the same execution environment. Client contracts mirror the logic and state of the original instance and enable seamless on-chain function executions on recent states. Synchronized contracts offer instant read-only function calls to other applications hosted on the target blockchain. This alleviates current limitations in cross-chain communication and enables new forms of contract interactions. State updates are transmitted in a verifiable manner using Merkle proofs and do not require trusted intermediaries. To permit lightweight synchronizations, we introduce transition confirmations that facilitate the application of verifiable state transitions without re-executing transactions of the source blockchain. We demonstrate the concept's soundness with a prototypical implementation that enables smart contract forks, state synchronizations, and on-chain validation on EVM-compatible blockchains. Our evaluation demonstrates SmartSync's applicability for the presented use cases, providing third-party contracts on the target blockchain with access to recent states. Execution costs scale sub-linearly with the number of value updates and depend on the depth and index of the corresponding Merkle proofs.
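To make the verification step concrete, the following minimal Python sketch checks a single value against a Merkle root. It assumes a simple binary hash tree with illustrative names; the actual prototype verifies proofs over Ethereum's Merkle-Patricia trie on-chain.

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
        # Walk from the leaf up to the root; each proof element is the
        # sibling hash plus a flag saying whether the sibling sits on the left.
        node = sha256(leaf)
        for sibling, sibling_is_left in proof:
            node = sha256(sibling + node) if sibling_is_left else sha256(node + sibling)
        # Accept the transmitted state update only if the recomputed root matches.
        return node == root

Because only the root is trusted (e.g., taken from a block header), no intermediary is needed: any party can submit the proof and the contract can check it on-chain.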
Blockchain-based smart contracts lack privacy, since contract state and instruction code are exposed to the public. Combining smart-contract execution with Trusted Execution Environments (TEEs) provides an efficient solution, called TEE-assisted smart contracts, for protecting the confidentiality of contract states. However, the combination approaches vary widely, and a systematic study is absent. Newly released systems may fail to draw upon the experience learned from existing protocols, repeating known design mistakes or applying TEE technology in insecure ways. In this paper, we first investigate the existing systems and categorize them into two types: layer-one solutions and layer-two solutions. We then establish an analysis framework to capture their commonalities, covering the desired properties (for contract services), threat models, and security considerations (for underlying systems). Based on our taxonomy, we identify the ideal functionalities of these systems and uncover the fundamental flaws and challenges in each design. We believe this work provides a guide for the development of TEE-assisted smart contracts, as well as a framework for evaluating future TEE-assisted confidential contract systems.
Emerging distributed cloud architectures, e.g., fog and mobile edge computing, are playing an increasingly important role in the efficient delivery of real-time stream-processing applications such as augmented reality, multiplayer gaming, and industrial automation. While such applications require processed streams to be shared and simultaneously consumed by multiple users/devices, existing technologies lack efficient mechanisms to deal with their inherent multicast nature, leading to unnecessary traffic redundancy and network congestion. In this paper, we establish a unified framework for distributed cloud network control with generalized (mixed-cast) traffic flows that allows optimizing the distributed execution of the required packet processing, forwarding, and replication operations. We first characterize the enlarged multicast network stability region under the new control framework (with respect to its unicast counterpart). We then design a novel queuing system that allows scheduling data packets according to their current destination sets, and leverage Lyapunov drift-plus-penalty theory to develop the first fully decentralized, throughput- and cost-optimal algorithm for multicast cloud network flow control. Numerical experiments validate analytical results and demonstrate the performance gain of the proposed design over existing cloud network control techniques.
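For context, the drift-plus-penalty technique underlying such algorithms takes the following standard form (generic notation assumed here, not the paper's exact symbols). With queue backlogs Q_i(t) and a quadratic Lyapunov function,

    L(\mathbf{Q}(t)) = \frac{1}{2} \sum_i Q_i(t)^2, \qquad
    \Delta(t) = \mathbb{E}\!\left[ L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \,\middle|\, \mathbf{Q}(t) \right],

the controller greedily minimizes, in each time slot, a bound on

    \Delta(t) + V \, \mathbb{E}\!\left[ c(t) \,\middle|\, \mathbf{Q}(t) \right],

where c(t) is the operational cost and the parameter V \ge 0 trades an O(1/V) cost-optimality gap against O(V) queue backlog, which is what makes the resulting policy simultaneously throughput- and cost-optimal.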
The fundamental tradeoff between transactions per second (TPS) and security in blockchain systems persists despite numerous prior attempts to boost TPS. To increase TPS without compromising security, we propose a bodyless block propagation (BBP) scheme in which the block body is neither validated nor transmitted during the block propagation process. Rather, the nodes in the blockchain network anticipate the transactions and their ordering in the next upcoming block so that these transactions can be pre-executed and pre-validated before the birth of the block. It is critical, however, that all nodes reach a consensus on the transaction content of the next block. This paper puts forth a transaction selection, ordering, and synchronization algorithm to drive the nodes to such a consensus. Yet, the coinbase address of the miner of the next block cannot be anticipated, so transactions that depend on the coinbase address cannot be pre-executed and pre-validated. This paper further puts forth an algorithm to handle such unresolvable transactions for an overall consistent and TPS-efficient scheme. With our scheme, most transactions do not need to be validated or transmitted during block propagation, removing the dependence of propagation time on the number of transactions in the block and making the system fully TPS-scalable. Experimental results show that our protocol reduces propagation time by a factor of four relative to the current Ethereum blockchain, and that its TPS performance is limited by node hardware rather than by block propagation.
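As a rough illustration of the anticipation idea (the selection rule and names below are hypothetical, not the paper's algorithm), every node can apply the same deterministic rule to a synchronized transaction pool, so that all nodes derive an identical anticipated block body to pre-execute:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tx:
        sender: str
        nonce: int
        gas_price: int
        tx_hash: str

    def anticipate_block_body(pool: set[Tx], max_txs: int) -> list[Tx]:
        # A deterministic ordering over a synchronized pool yields the same
        # result on every node, so the body can be pre-executed and
        # pre-validated before the block itself arrives.
        ordered = sorted(pool, key=lambda tx: (-tx.gas_price, tx.sender, tx.nonce, tx.tx_hash))
        return ordered[:max_txs]

The hard part, which the paper's synchronization algorithm addresses, is keeping the pools themselves consistent across nodes and handling transactions whose outcome depends on the unknown coinbase address.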
Numerical solution of heterogeneous Helmholtz problems presents various computational challenges, with descriptive theory remaining out of reach for many popular approaches. Robustness and scalability are key for practical and reliable solvers in large-scale applications, especially for large wave number problems. In this work we explore the use of a GenEO-type coarse space to build a two-level additive Schwarz method applicable to highly indefinite Helmholtz problems. Through a range of numerical tests on a 2D model problem, discretised by finite elements on pollution-free meshes, we observe robust convergence, iteration counts that do not increase with the wave number, and good scalability of our approach. We further provide results showing a favourable comparison with the DtN coarse space. Our numerical study shows promise that our solver methodology can be effective for challenging heterogeneous applications.
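In standard notation (assumed here for illustration, not quoted from the paper), the two-level additive Schwarz preconditioner combines local solves on overlapping subdomains with a coarse solve:

    M^{-1}_{\mathrm{AS},2} = R_0^{T} A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,
    \qquad A_i = R_i A R_i^{T},

where R_i restricts to the i-th overlapping subdomain and R_0 restricts to the coarse space. In a GenEO-type method, the coarse space is spanned by selected eigenvectors of local generalized eigenproblems, which is what allows iteration counts to remain robust with respect to heterogeneity and, as observed in the numerical tests, the wave number.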
In today's computing environment, where Artificial Intelligence (AI) and data processing are moving toward the Internet of Things (IoT) and the Edge computing paradigm, benchmarking resource-constrained devices is a critical task for evaluating their suitability and performance. The literature has extensively explored the performance of IoT devices running high-level benchmarks specialized for particular application scenarios, such as AI or medical applications. However, lower-level benchmarking applications and datasets that analyze the hardware components of each device are needed. This low-level device understanding enables new AI solutions for network, system, and service management based on device performance, such as individual device identification, so it is an area worth exploring in more detail. In this paper, we present LwHBench, a low-level hardware benchmarking application for Single-Board Computers that measures the performance of the CPU, GPU, memory, and storage, taking into account the component constraints of these types of devices. LwHBench has been implemented for Raspberry Pi devices and run for 100 days on a set of 45 devices to generate an extensive dataset that enables the use of AI techniques in different application scenarios. Finally, to demonstrate the inter-scenario capability of the created dataset, a series of AI-enabled use cases concerning device identification and the impact of context on performance are presented as examples and explorations of the published data.
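To give a flavour of what such low-level measurements look like (a minimal sketch, not LwHBench's actual code), timing a fixed arithmetic loop and a large in-memory copy already yields simple per-device performance features:

    import time

    def cpu_benchmark(iterations: int = 10_000_000) -> float:
        # Time a fixed integer-arithmetic loop; lower elapsed time
        # indicates faster single-core performance.
        start = time.perf_counter_ns()
        x = 0
        for i in range(iterations):
            x += i * i
        return (time.perf_counter_ns() - start) / 1e6  # milliseconds

    def memory_benchmark(size_mb: int = 64) -> float:
        # Time a full copy of a large buffer as a rough memory-throughput proxy.
        buf = bytearray(size_mb * 1024 * 1024)
        start = time.perf_counter_ns()
        _ = bytes(buf)  # forces a complete copy
        elapsed_s = (time.perf_counter_ns() - start) / 1e9
        return size_mb / elapsed_s  # MB/s

Repeating such measurements over many days and devices, as done for the published dataset, is what makes statistical fingerprinting of individual devices possible.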
Linear mixed models (LMMs) are instrumental for regression analysis with structured dependence, such as grouped, clustered, or multilevel data. However, selection among the covariates, while accounting for this structured dependence, remains a challenge. We introduce a Bayesian decision analysis for subset selection with LMMs. Using a Mahalanobis loss function that incorporates the structured dependence, we derive optimal linear coefficients for (i) any given subset of variables and (ii) all subsets of variables that satisfy a cardinality constraint. Crucially, these estimates inherit shrinkage or regularization and uncertainty quantification from the underlying Bayesian model, and apply to any well-specified Bayesian LMM. More broadly, our decision analysis strategy deemphasizes the role of a single "best" subset, which is often unstable and limited in its information content, and instead favors a collection of near-optimal subsets. This collection is summarized by key member subsets and variable-specific importance metrics. Customized subset search and out-of-sample approximation algorithms are provided for more scalable computing. These tools are applied to simulated data and a longitudinal physical activity dataset, and demonstrate excellent prediction, estimation, and selection ability.
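Schematically (with generic notation assumed for illustration), a Mahalanobis loss compares a fitted response \hat{y} to a linear summary X\delta via

    \mathcal{L}(\hat{y}, \delta) = (\hat{y} - X\delta)^{T} \Sigma^{-1} (\hat{y} - X\delta),

where \Sigma encodes the within-group dependence implied by the LMM. For a fixed subset S of columns, the loss-minimizing coefficients take the generalized-least-squares form

    \hat{\delta}_S = (X_S^{T} \Sigma^{-1} X_S)^{-1} X_S^{T} \Sigma^{-1} \hat{y},

with \hat{y} and \Sigma supplied by posterior functionals of the Bayesian model, which is why the resulting point estimates inherit the model's shrinkage and uncertainty quantification.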
Fog computing shifts cloud resources toward the users' proximity to mitigate the limitations of cloud computing. The fog environment makes its limited resources available to a large number of users, who deploy serverless applications composed of several serverless functions. One of the primary intentions behind introducing the fog environment is to fulfil, through its limited resources, the demands of latency- and location-sensitive serverless applications. Recent research mainly focuses on assigning maximum resources to such applications from the fog node without taking full advantage of the cloud environment, which negatively impacts the provisioning of resources to the maximum number of connected users. To address this issue, in this paper we investigate the optimal share of a user's request that should be fulfilled by the fog and by the cloud. As a result, we propose DeF-DReL, a Systematic Deployment of Serverless Functions in Fog and Cloud environments using Deep Reinforcement Learning, driven by several real-life parameters, such as the users' distance and latency from the nearby fog node, the users' priority, the priority of the serverless applications, and their resource demands. The performance of the DeF-DReL algorithm is compared with that of recent related algorithms. The simulation and comparison results clearly show its superiority over the other algorithms and its applicability to real-life scenarios.
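A minimal sketch of how such a deployment decision can be cast for a deep reinforcement learning agent is shown below; the feature names follow the parameters listed above, but the encoding and reward are assumptions for illustration, not DeF-DReL's exact design:

    import numpy as np

    def make_state(distance, latency, user_priority, app_priority,
                   cpu_demand, mem_demand):
        # Observation vector for one serverless deployment request.
        return np.array([distance, latency, user_priority, app_priority,
                         cpu_demand, mem_demand], dtype=np.float32)

    def reward(latency_ms: float, fog_fraction: float) -> float:
        # Hypothetical reward: favour low user-perceived latency while
        # penalising consumption of the scarce fog-node resources, so the
        # agent learns what share of each request to keep in the fog
        # and what share to offload to the cloud.
        return -(latency_ms + 10.0 * fog_fraction)

The agent's action is then the fog/cloud split for the request, and training trades off latency-sensitive users' needs against serving the maximum number of connected users.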
Blockchain and smart contract technology are novel approaches to data and code management that facilitate trusted computing by allowing development in a distributed and decentralized manner. Testing smart contracts comes with its own set of challenges, which have not yet been fully identified and explored. Although existing tools can identify and discover known vulnerabilities and their interactions on the Ethereum blockchain through random search or symbolic execution, these tools generally do not produce test suites suitable for human oracles. In this paper, we present AGSOLT (Automated Generator of Solidity Test Suites) and demonstrate its efficiency by implementing two search algorithms that automatically generate test suites for stand-alone Solidity smart contracts, taking into account some of the blockchain-specific challenges. To test AGSOLT, we compared a random search algorithm and a genetic algorithm on a set of 36 real-world smart contracts. We found that AGSOLT is capable of achieving high branch coverage with both approaches and even discovered some errors in some of the most popular Solidity smart contracts on GitHub.
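For intuition, a genetic algorithm over test suites can be sketched as follows. This is a generic GA with `random_test` and `coverage` as assumed callables (a test-case generator and a branch-coverage fitness function); AGSOLT's actual operators are tailored to Solidity and the blockchain-specific challenges:

    import random

    def genetic_search(random_test, coverage, suite_size=5, pop_size=50, generations=100):
        # Population of candidate test suites; fitness = branch coverage.
        population = [[random_test() for _ in range(suite_size)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=coverage, reverse=True)
            survivors = population[: pop_size // 2]   # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, suite_size)  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:              # mutation: fresh test case
                    child[random.randrange(suite_size)] = random_test()
                children.append(child)
            population = survivors + children
        return max(population, key=coverage)

Random search corresponds to the degenerate case of repeatedly sampling fresh suites and keeping the best, which is the baseline the genetic algorithm was compared against.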
The Accumulate Protocol ("Accumulate") is an identity-based, Delegated Proof of Stake (DPoS) blockchain designed to power the digital economy through interoperability with Layer-1 blockchains, integration with enterprise tech stacks, and interfacing with the World Wide Web. Accumulate bypasses the trilemma of security, scalability, and decentralization by implementing a chain-of-chains architecture in which digital identities with the ability to manage keys, tokens, data, and other identities are treated as their own independent blockchains. This architecture allows these identities, known as Accumulate Digital Identifiers (ADIs), to be processed and validated in parallel over the Accumulate network. Each ADI also possesses a hierarchical set of keys with different priority levels that allow users to manage their security over time and create complex signature authorization schemes that expand the utility of multi-signature transactions. A two-token system provides predictable costs for enterprise users, while anchoring all transactions to Layer-1 blockchains provides enterprise-grade security to everyone.
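The hierarchical key model can be pictured with a small sketch; the priority convention and authorization rule below are illustrative assumptions, not Accumulate's specification:

    from dataclasses import dataclass, field

    @dataclass
    class KeyPage:
        priority: int                      # lower value = higher authority (assumed convention)
        keys: set = field(default_factory=set)
        threshold: int = 1                 # m-of-n multi-signature requirement

    def authorized(pages: list[KeyPage], required_priority: int, signing_keys: set) -> bool:
        # A transaction is authorized if some key page at or above the
        # required priority collects at least `threshold` valid signatures,
        # illustrating hierarchical, multi-signature key management.
        for page in pages:
            if page.priority <= required_priority:
                if len(page.keys & signing_keys) >= page.threshold:
                    return True
        return False

Rotating keys within a low-authority page while reserving a high-authority page for recovery is the kind of security-over-time management the hierarchy is meant to enable.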
The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms by using a variational approach with parameterized quantum circuits. We find that the simulations converge well in both the noiseless and noisy scenarios, for all examples considered. Furthermore, the noisy simulations exhibit a parameter noise resilience.
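For reference, the two underlying measures for states, in one common convention, are

    T(\rho, \sigma) = \frac{1}{2} \lVert \rho - \sigma \rVert_1,
    \qquad
    F(\rho, \sigma) = \left( \mathrm{Tr} \sqrt{\sqrt{\rho}\, \sigma \sqrt{\rho}} \right)^{2},

and the trace distance admits the operational (Helstrom) characterization

    T(\rho, \sigma) = \max_{0 \le \Lambda \le I} \mathrm{Tr}\!\left[ \Lambda (\rho - \sigma) \right],

an optimization over measurement operators that is naturally suited to variational estimation with parameterized quantum circuits, with analogous optimization characterizations extending to channels and strategies.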