Blockchain protocols typically aspire to run in the permissionless setting, in which nodes are owned and operated by a large number of diverse and unknown entities, with each node free to start or stop running the protocol at any time. This setting is more challenging than the traditional permissioned setting, in which the set of nodes that will run the protocol is fixed and known at the time of protocol deployment. The goal of this paper is to provide a framework for reasoning about the rich design space of blockchain protocols and their capabilities and limitations in the permissionless setting. This paper offers a hierarchy of settings with different "degrees of permissionlessness", specified by the amount of knowledge that a protocol has about the current participants: the fully permissionless, dynamically available, and quasi-permissionless settings. The paper also proves several results illustrating the utility of our analysis framework for reasoning about blockchain protocols in these settings. For example: (1) In the fully permissionless setting, even with synchronous communication and with severe restrictions on the total size of the Byzantine players, every deterministic protocol for Byzantine agreement has an infinite execution. (2) In the dynamically available and partially synchronous setting, no protocol can solve the Byzantine agreement problem with high probability, even if there are no Byzantine players at all. (3) In the quasi-permissionless and partially synchronous setting, by contrast, assuming a bound on the total size of the Byzantine players, there is a deterministic protocol guaranteed to solve the Byzantine agreement problem in a finite amount of time. (4) In the quasi-permissionless and synchronous setting, every proof-of-stake protocol that does not use advanced cryptography is vulnerable to long-range attacks.
It is challenging to guide neural network (NN) learning with prior knowledge. In contrast, many known properties, such as spatial smoothness or seasonality, are straightforward to model by choosing an appropriate kernel in a Gaussian process (GP). Many deep learning applications could be enhanced by modeling such known properties. For example, convolutional neural networks (CNNs) are frequently used in remote sensing, which is subject to strong seasonal effects. We propose to blend the strengths of deep learning and the clear modeling capabilities of GPs by using a composite kernel that combines a kernel implicitly defined by a neural network with a second kernel function chosen to model known properties (e.g., seasonality). We implement this idea by combining a deep network with an efficient mapping based on the Nyström approximation, which we call the Implicit Composite Kernel (ICK). We then adopt a sample-then-optimize approach to approximate the full GP posterior distribution. We demonstrate that ICK has superior performance and flexibility on both synthetic and real-world data sets. We believe that the ICK framework can be used to incorporate prior information into neural networks in many applications.
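A minimal sketch of the composite-kernel idea, assuming a Nyström feature map for a periodic (seasonal) kernel combined elementwise with an NN latent vector of matching width; the function names, the periodic-kernel choice, and the elementwise-product combination are illustrative stand-ins rather than the paper's exact construction:

```python
import numpy as np

def periodic_kernel(t1, t2, period=12.0, lengthscale=1.0):
    """Exponential-sine-squared kernel capturing seasonality (illustrative choice)."""
    d = np.abs(t1[:, None] - t2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def nystrom_feature_map(t, inducing, kernel):
    """Turn a kernel into an explicit finite-dimensional feature map via the Nystrom approximation."""
    K_mm = kernel(inducing, inducing)                # m x m
    K_nm = kernel(t, inducing)                       # n x m
    eigval, eigvec = np.linalg.eigh(K_mm)
    eigval = np.clip(eigval, 1e-12, None)
    return K_nm @ eigvec @ np.diag(eigval ** -0.5)   # n x m features

def composite_prediction(nn_features, kernel_features):
    """ICK-style combination: pair each sample's NN latent vector with its kernel features
    (here an elementwise product summed over dimensions, i.e., an inner product)."""
    return (nn_features * kernel_features).sum(axis=1)

# Toy usage: timestamps feed the seasonal kernel; nn_features stand in for a CNN's latent output.
rng = np.random.default_rng(0)
n, m = 200, 16
timestamps = rng.uniform(0, 48, size=n)
inducing = np.linspace(0, 48, m)
kernel_features = nystrom_feature_map(timestamps, inducing, periodic_kernel)
nn_features = rng.normal(size=(n, m))   # placeholder for a trainable network's output
predictions = composite_prediction(nn_features, kernel_features)
```

In a real pipeline the `nn_features` would come from a trainable CNN and the whole composition would be optimized end to end, e.g., with a sample-then-optimize objective as described above.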
This paper presents BBCA-LEDGER, a Byzantine log replication technology for partially synchronous networks that enables blocks to be broadcast in parallel, such that each broadcast is finalized independently and instantaneously into an individual slot in the log. Every finalized broadcast is eventually committed to the total ordering, so that all network bandwidth has utility in disseminating blocks. Finalizing log slots in parallel achieves both high throughput and low latency. BBCA-LEDGER is composed of two principal protocols that interweave: a low-latency/high-throughput happy path and a high-throughput DAG-based fallback path. The happy path employs a novel primitive called BBCA, a consistent broadcast enforcing unique slot numbering. In steady state, BBCA ensures that a transaction can be committed with low latency, in just 3 network steps. Under network partitions or faults, we harness recent advances in BFT and build a fallback mechanism on a directed acyclic graph (DAG) created by BBCA broadcasts. In this manner, BBCA-LEDGER exhibits the throughput benefits of DAG-based BFT in the face of gaps.
In this paper, we develop a hybrid multiple access (MA) protocol for an intelligent reflecting surface (IRS) aided uplink transmission network, which incorporates the IRS-aided time-division MA (I-TDMA) protocol and the IRS-aided non-orthogonal MA (I-NOMA) protocol as special cases. Two typical communication scenarios are considered, namely the transmit-power-limited case and the transmit-energy-limited case, in which the devices' transmission order, the time and power allocation, and the dynamic IRS beamforming patterns over time are jointly optimized to minimize the sum transmission delay. To shed light on the superiority of the proposed IRS-aided hybrid MA (I-HMA) protocol over conventional protocols, the conditions under which I-HMA outperforms I-TDMA and I-NOMA are revealed by characterizing their corresponding optimal solutions. Then, a computationally efficient algorithm is proposed to obtain a high-quality solution to the corresponding optimization problems. Simulation results validate our theoretical findings, demonstrate the superiority of the proposed design, and draw some useful insights. Specifically, it is found that the proposed protocol can significantly reduce the sum transmission delay by combining the additional gain of dynamic IRS beamforming with the high spectral efficiency of NOMA, which reveals that integrating the IRS into the proposed HMA protocol is an effective solution for delay-aware optimization. Furthermore, the results reveal that the proposed design reduces the time consumption not only from the system-centric view but also from the device-centric view.
The Byzantine consensus problem involves $n$ processes, out of which $t < n$ can be faulty and behave arbitrarily. Three properties characterize consensus: (1) termination, requiring correct (non-faulty) processes to eventually reach a decision, (2) agreement, preventing them from deciding different values, and (3) validity, precluding ``unreasonable'' decisions. But what is a reasonable decision? Strong validity, a classical property, stipulates that, if all correct processes propose the same value, only that value can be decided. Weak validity, another established property, stipulates that, if all processes are correct and they propose the same value, that value must be decided. The space of possible validity properties is vast, yet their impact on consensus remains unclear. This paper addresses the question of which validity properties allow Byzantine consensus to be solvable with partial synchrony, and at what cost. First, we determine necessary and sufficient conditions for a validity property to make the consensus problem solvable; we say that such validity properties are solvable. Notably, we prove that, if $n \leq 3t$, all solvable validity properties are trivial (there exists an always-admissible decision). Furthermore, we show that, with any non-trivial (and solvable) validity property, consensus requires $\Omega(t^2)$ messages. This extends the seminal Dolev-Reischuk bound, originally proven for strong validity, to all non-trivial validity properties. Lastly, we give a general Byzantine consensus algorithm, which we call Universal, for any solvable (and non-trivial) validity property. Importantly, Universal incurs $O(n^2)$ message complexity. Thus, together with our lower bound, Universal implies a fundamental result in partial synchrony: with $t \in \Omega(n)$, the message complexity of all (non-trivial) consensus variants is $\Theta(n^2)$.
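For concreteness, the two classical validity properties above can be written as follows; this is a standard formalization with notation introduced here ($\Pi$ the set of processes, $C \subseteq \Pi$ the correct ones, $\mathit{prop}$ and $\mathit{dec}$ the proposed and decided values), not taken verbatim from the paper:

```latex
% \Pi: set of processes; C \subseteq \Pi: correct processes;
% prop(p): value proposed by p; dec(p): value decided by a correct process p.
\begin{align*}
\text{Strong validity:}\quad & \forall v:\;
  \bigl(\forall p \in C:\ \mathit{prop}(p) = v\bigr)
  \;\Longrightarrow\;
  \bigl(\forall p \in C:\ \mathit{dec}(p) = v\bigr),\\
\text{Weak validity:}\quad & \forall v:\;
  \bigl(C = \Pi \,\wedge\, \forall p \in \Pi:\ \mathit{prop}(p) = v\bigr)
  \;\Longrightarrow\;
  \bigl(\forall p \in \Pi:\ \mathit{dec}(p) = v\bigr).
\end{align*}
```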
Optimal design is a critical yet challenging task in many applications, as it typically requires extensive trial and error through simulations or field experiments. Fortunately, sequential optimal design, also referred to as Bayesian optimization when using surrogates with a Bayesian flavor, has played a key role in accelerating the design process through efficient sequential sampling strategies. A key opportunity now exists, however: the increased connectivity of edge devices sets forth a new collaborative paradigm for Bayesian optimization, one in which different clients borrow strength from each other by effectively distributing their experimentation efforts, improving and fast-tracking their optimal design processes. To this end, we bring the notion of consensus to Bayesian optimization, where clients agree (i.e., reach a consensus) on their next-to-sample designs. Our approach provides a generic and flexible framework that can incorporate different collaboration mechanisms. Within this framework, we propose transitional collaborative mechanisms in which clients initially rely more on each other to maneuver through the early stages with scant data and then, at the later stages, focus on their own objectives to obtain client-specific solutions. Theoretically, we show sub-linear growth in regret for the proposed framework. Empirically, through simulated datasets and a real-world collaborative material-discovery experiment, we show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.
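A minimal sketch of the transitional-collaboration idea under illustrative assumptions: two clients with scikit-learn GP surrogates, a UCB acquisition, and a consensus weight on the other clients' proposals that decays to zero halfway through the horizon. The schedule, weights, and objective functions below are hypothetical, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb(gp, X, beta=2.0):
    mu, sd = gp.predict(X, return_std=True)
    return mu + beta * sd  # acquisition to maximize

def consensus_bo_round(clients, candidates, t, T):
    """One round: each client proposes a point via its own GP/UCB, then clients blend
    their proposal with the others' (weight on others decays over rounds: transitional collaboration)."""
    w_other = max(0.0, 1.0 - t / (0.5 * T))          # illustrative decay schedule
    proposals = []
    for X, y in clients:
        gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
        proposals.append(candidates[np.argmax(ucb(gp, candidates))])
    proposals = np.array(proposals)
    next_points = []
    for i in range(len(clients)):
        others = np.delete(proposals, i, axis=0).mean(axis=0)
        next_points.append((1 - w_other) * proposals[i] + w_other * others)
    return np.array(next_points)

# Toy usage: two clients maximizing closely related 1-D objectives.
rng = np.random.default_rng(1)
f = [lambda x: -(x - 0.60) ** 2, lambda x: -(x - 0.55) ** 2]
clients = [(rng.uniform(0, 1, (3, 1)),) for _ in range(2)]
clients = [(X, f[i](X).ravel()) for i, (X,) in enumerate(clients)]
cands = np.linspace(0, 1, 101).reshape(-1, 1)
for t in range(10):
    for i, x_next in enumerate(consensus_bo_round(clients, cands, t, 10)):
        X, y = clients[i]
        clients[i] = (np.vstack([X, x_next]), np.append(y, f[i](x_next).ravel()))
```

Early rounds pull both clients toward a shared design, which helps when each has only a handful of observations; late rounds let each client converge to its own optimum.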
Quantum Key Distribution (QKD) enables secure communications via the exchange of cryptographic keys, exploiting the properties of quantum mechanics. The related technology is now mature enough for production systems; thus, field deployments of QKD networks are expected to appear in the near future, starting from local/metropolitan settings, where edge computing is already a thriving reality. In this paper, we investigate the interplay between resource allocation in the QKD network and at the edge nodes, which creates unique research challenges. After modeling the problem mathematically, we propose practical online policies for admitting edge application requests, which also select the edge node for processing and the path in the QKD network. Our simulation results provide initial insights into this emerging topic and lead the way to upcoming studies on the subject.
Offering a viable alternative architecture to centrally controlled global digital platforms for social networking is an open challenge. Here we present a grassroots architecture for serverless, permissionless, peer-to-peer social networks, termed grassroots social networking. The architecture is geared towards roaming (address-changing) agents communicating over an unreliable network, e.g., smartphones communicating via UDP. The architecture incorporates (i) a decentralized social graph, in which each member controls, maintains, and stores only their local neighbourhood in the graph; (ii) member-created feeds, with authors and followers; and (iii) a novel grassroots dissemination protocol, in which communication occurs only along the edges of the social graph. The architecture realizes these components using the blocklace data structure -- a distributed, partially-ordered counterpart of the replicated, totally-ordered blockchain. We provide two example grassroots social networking protocols -- Twitter/LinkedIn-like and WhatsApp-like -- and address their safety, liveness, privacy, and spam/deep-fake resistance, demonstrating how centrally controlled social networks could be supplanted by a grassroots architecture.
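A minimal sketch of the blocklace idea, assuming content-addressed blocks in which each block may reference any number of predecessor hashes; the field names and the use of SHA-256 over canonical JSON are illustrative, not the protocol's actual format:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Content address of a block: hash of its canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(author: str, payload: str, predecessors: list[str]) -> dict:
    """A blocklace block may point to any number of predecessor blocks (a partial order),
    unlike a blockchain block, which points to exactly one predecessor (a total order)."""
    return {"author": author, "payload": payload, "preds": sorted(predecessors)}

# Toy usage: two members publish independently, then one acknowledges both histories.
genesis_a = make_block("alice", "hello from alice", [])
genesis_b = make_block("bob", "hello from bob", [])
merge = make_block("alice", "ack", [block_hash(genesis_a), block_hash(genesis_b)])
assert len(merge["preds"]) == 2   # concurrent histories merge without requiring a total order
```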
Byzantine consensus is fundamental for building consistent and fault-tolerant distributed systems. In traditional quorum-based consensus protocols, quorums are defined using globally known assumptions shared among all participants. Motivated by decentralized applications on open networks, the Stellar blockchain relaxes these global assumptions by allowing each participant to define its quorums using local information. A similar model, called Consensus with Unknown Participants (CUP), studies the minimal knowledge required to solve consensus in ad-hoc networks in which each participant knows only a subset of the other participants of the system. We prove that Stellar cannot solve consensus using the initial knowledge provided to participants in the CUP model, even though CUP can. We propose an oracle called a sink detector that augments this knowledge, enabling Stellar participants to solve consensus.
We introduce a system called Amorphous Fortress -- an abstract, yet spatial, open-ended artificial life simulation. In this environment, agents are represented as finite-state machines (FSMs), which allow for multi-agent interaction within a constrained space. These agents are created by randomly generating and evolving FSMs, sampling from pre-defined states and transitions. The environment was designed to explore the emergent AI behaviors found implicitly in simulation games such as Dwarf Fortress or The Sims. We apply a hill-climber evolutionary search algorithm to this environment to explore the varying levels of depth and interaction exhibited by the generated FSMs.
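A minimal sketch of the generate-and-hill-climb loop, assuming a toy vocabulary of states and events and a placeholder fitness that rewards behavioral variety; the real Amorphous Fortress state/transition sets and objectives differ:

```python
import random

STATES = ["idle", "move", "chase", "flee", "die"]
EVENTS = ["see_agent", "collide", "timeout"]

def random_fsm():
    """An FSM as a dict mapping (state, event) -> next state, sampled from the predefined sets."""
    return {(s, e): random.choice(STATES) for s in STATES for e in EVENTS}

def mutate(fsm):
    """Hill-climber mutation: rewire a single transition."""
    child = dict(fsm)
    child[random.choice(list(child))] = random.choice(STATES)
    return child

def fitness(fsm):
    """Placeholder objective: number of distinct states visited from 'idle' under random events
    (a crude stand-in for the behavioral depth/interaction measured in the real simulation)."""
    state, seen = "idle", {"idle"}
    for _ in range(50):
        state = fsm[(state, random.choice(EVENTS))]
        seen.add(state)
    return len(seen)

def hill_climb(generations=200):
    best, best_fit = random_fsm(), 0
    best_fit = fitness(best)
    for _ in range(generations):
        child = mutate(best)
        child_fit = fitness(child)
        if child_fit >= best_fit:   # accept ties to keep drifting across fitness plateaus
            best, best_fit = child, child_fit
    return best, best_fit

if __name__ == "__main__":
    fsm, score = hill_climb()
    print("reachable-state score:", score)
```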
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., causing a device to become unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including an accuracy drop of up to 9.2%, training time lengthened by 2.32x, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two likely causes of the degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate its impacts.
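To make the availability effect concrete, the following is a hypothetical FedAvg-style simulation in which a fraction of the sampled devices fails to return updates each round; the dropout rate, model, and data are illustrative and not drawn from the paper's 136k-device measurements:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Simple local linear-regression gradient steps standing in for on-device training."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(weights, devices, participation=10, dropout=0.3, rng=None):
    """One FedAvg round: sample devices, drop a fraction (heterogeneity-induced failures),
    and average whatever updates actually arrive."""
    rng = rng or np.random.default_rng()
    sampled = rng.choice(len(devices), size=participation, replace=False)
    survived = [i for i in sampled if rng.random() > dropout]
    if not survived:
        return weights                       # nothing arrived this round
    updates = [local_update(weights, *devices[i]) for i in survived]
    return np.mean(updates, axis=0)

# Toy usage: 50 devices with slightly shifted local data distributions (participant bias).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(50):
    X = rng.normal(size=(20, 2)) + rng.normal(scale=0.5, size=2)   # device-specific shift
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))
w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, devices, dropout=0.3, rng=rng)
```

Varying `dropout` and the per-device data shift in such a simulation illustrates, in miniature, how device failure and participant bias can slow convergence and skew the global model.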