Decentralised organisations use blockchains as a basis for governance: they use on-chain transactions to allocate voting weight, publish proposals, cast votes, and enact the results. However, blockchain-based governance structures face challenges, most notably the need to align the short-term outlook of pseudonymous voters with the long-term growth and success of the decentralised organisation. The Vote-Escrowed Token (veToken) model attempts to resolve this tension by requiring voters to escrow or lock tokens of value for an extended period in exchange for voting weight. In this paper, we describe the veToken model and analyse its emergent outcomes. We show that voting behaviour follows the bribes set by higher-level protocols, and that the cost per vote varies depending on how it is acquired. We describe the implementation of the veToken model by Curve Finance, a popular automated market maker for stablecoins, and the ecosystem of protocols that has arisen on top of this implementation. We show that voting markets such as Votium largely determine the outcome of the fortnightly votes held by Convex Finance, and that Frax Finance, a stablecoin issuer, plays a central role in the ecosystem even though it directly locks relatively few tokens with Curve. Instead, it locks tokens indirectly through yield aggregators such as Convex Finance and purchases voting weight through voting markets such as Votium. Although the veToken model in isolation is straightforward and easily explained, it leads to many complex and emergent outcomes; decentralised organisations should consider these outcomes before adopting the model.
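To make the locking mechanism concrete, the following is a minimal sketch of vote-escrow accounting in the style of Curve's veCRV, where voting weight scales with both the amount locked and the remaining lock time; the four-year maximum and linear decay mirror Curve's published design, but the function names and units are ours.

    MAX_LOCK = 4 * 365 * 24 * 3600  # maximum lock of four years, in seconds

    def voting_weight(amount: float, unlock_time: int, now: int) -> float:
        """Weight = amount * remaining lock time / MAX_LOCK, decaying
        linearly to zero as the lock approaches expiry."""
        remaining = max(unlock_time - now, 0)
        return amount * remaining / MAX_LOCK

    # Locking 100 tokens for the full four years yields weight 100;
    # the same lock observed two years later has decayed to weight 50.

The linear decay is what ties voting power to long-term commitment: weight can only be maintained by repeatedly extending the lock.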
We explore the fairness of a redistricting game introduced by Mixon and Villar, which provides a two-party protocol for dividing a state into electoral districts, without the participation of an independent authority. We analyze the game in an abstract setting that ignores the geographic distribution of voters and assumes that voter preferences are fixed and known. We show that the minority player can always win at least $p-1$ districts, where $p$ is proportional to the percentage of minority voters. We give an upper bound on the number of districts won by the minority based on a "cracking" strategy for the majority.
Blockchains are decentralized systems that provide trustable execution guarantees. Smart contracts are programs written in specialized programming languages that run on blockchains and govern how tokens and cryptocurrency are sent and received. Smart contracts can invoke other smart contracts during the execution of transactions, which are always initiated by external users. Once deployed, smart contracts cannot be modified, so techniques like runtime verification are very appealing for improving their reliability. However, the conventional model of computation of smart contracts is transactional: once operations commit, their effects are permanent and cannot be undone. In this paper, we propose the concept of future monitors, which allows a monitor to keep waiting for future transactions to occur before committing or aborting. This is inspired by optimistic rollups, modern blockchain implementations that increase efficiency (and reduce cost) by delaying transaction effects. We exploit this delay to propose a model of computation that allows (bounded) future monitors. We show that our monitors correctly respect legacy transactions, that they implement bounded future monitors, and that they guarantee progress. We illustrate the use of bounded future monitors to correctly implement multi-transaction flash loans.
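As a toy illustration of the idea, the sketch below implements a bounded-future monitor for flash loans: the verdict on a borrow transaction stays pending until the loan is repaid or until a fixed budget of future transactions is exhausted. The transaction encoding and all names are hypothetical, not the paper's formal model.

    from dataclasses import dataclass, field

    @dataclass
    class FlashLoanMonitor:
        bound: int  # how many future transactions a verdict may wait for
        pending: dict = field(default_factory=dict)  # loan_id -> budget left

        def observe(self, tx: dict) -> dict:
            """Process one transaction; return the verdicts it resolves."""
            verdicts = {}
            # A repayment resolves its pending loan with a commit.
            if tx["kind"] == "repay" and tx["loan_id"] in self.pending:
                del self.pending[tx["loan_id"]]
                verdicts[tx["loan_id"]] = "commit"
            # Every transaction consumes one unit of each remaining budget,
            # which guarantees progress: no verdict stays pending forever.
            for loan_id in list(self.pending):
                self.pending[loan_id] -= 1
                if self.pending[loan_id] < 0:
                    del self.pending[loan_id]
                    verdicts[loan_id] = "abort"  # not repaid within the bound
            # A borrow opens a new pending verdict with a fresh budget.
            if tx["kind"] == "borrow":
                self.pending[tx["loan_id"]] = self.bound
            return verdicts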
Memory bandwidth is known to be a performance bottleneck for FPGA accelerators, especially when they deal with large multi-dimensional datasets. A large body of work focuses on reducing off-chip transfers, but few authors try to improve the efficiency of the transfers themselves. This paper addresses the latter issue by proposing (i) a compiler-based approach to the accelerator's data layout that maximizes contiguous access to off-chip memory, and (ii) data packing and runtime compression techniques that take advantage of this layout to further improve memory performance. We show that our approach can decrease I/O cycles by up to $7\times$ compared to unoptimized memory accesses.
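As a small illustration of the layout idea (not the paper's compiler pass), the snippet below reorders a row-major array tile-by-tile so that a blocked kernel reads each tile as one contiguous burst instead of many short strided ones; the 8x8 tile size is arbitrary.

    import numpy as np

    T = 8
    A = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)  # row-major layout

    # Row-major: one 8x8 tile spans 8 separate bursts of 8 words each.
    # Tiled layout: each tile becomes a single contiguous 64-word block.
    tiled = (A.reshape(64 // T, T, 64 // T, T)
              .transpose(0, 2, 1, 3)
              .reshape(-1, T * T))

    tile = tiled[0]  # tile (0, 0), read back as one contiguous burst
    assert np.array_equal(tile.reshape(T, T), A[:T, :T])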
We propose a novel stereo-confidence measure that can be computed externally to various stereo-matching networks, offering an alternative to the cost volume as an input modality for learning-based approaches, especially in safety-critical systems. Grounded in the foundational concepts of the disparity definition and the disparity plane sweep, the proposed stereo-confidence method is built on the idea that any shift applied to a stereo-image pair should be reflected by a corresponding shift in the disparity map. Based on this idea, the method can be summarized in three steps. 1) Using the disparity plane sweep, multiple disparity maps are obtained and treated as a 3-D volume (the predicted disparity volume), analogous to how a cost volume is constructed. 2) One of these disparity maps serves as an anchor, allowing us to define a desirable (or ideal) disparity profile at every spatial point. 3) By comparing the desirable and predicted disparity profiles, we quantify the level of matching ambiguity between the left and right images as a confidence measure. Extensive experimental results using various stereo-matching networks and datasets demonstrate that the proposed stereo-confidence method not only shows competitive performance on its own but also yields consistent performance improvements when used as an input modality for learning-based stereo-confidence methods.
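A minimal sketch of the shift-and-compare idea follows, assuming a generic callable stereo_net(left, right) -> disparity (a hypothetical stand-in for any stereo-matching network); the sign of the ideal profile depends on the shift and disparity conventions in use.

    import numpy as np

    def predicted_disparity_volume(stereo_net, left, right, shifts):
        """Run the network on horizontally shifted pairs: an ideal network
        answers a k-pixel shift with a disparity map offset by k.
        Wrap-around at the image border is ignored in this sketch."""
        maps = [stereo_net(left, np.roll(right, k, axis=1)) for k in shifts]
        return np.stack(maps, axis=0)  # shape: (num_shifts, H, W)

    def confidence(volume, shifts, anchor_idx):
        """Compare each pixel's predicted profile to the ideal linear one."""
        shifts = np.asarray(shifts, dtype=float)
        ideal = (volume[anchor_idx][None]
                 - (shifts - shifts[anchor_idx])[:, None, None])
        deviation = np.abs(volume - ideal).mean(axis=0)
        return np.exp(-deviation)  # near 1 where the profiles agree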
We consider the message complexity of verifying whether a given subgraph of the communication network forms a tree with specific properties, both in the KT-$\rho$ model (nodes know their $\rho$-hop neighborhood, including node IDs) and in the KT-$0$ model (nodes do not have this knowledge). We develop a rather general framework that helps establish tight lower bounds for various tree verification problems. We also consider two different verification requirements: that every node detects when the input is incorrect, and that at least one node detects. Our results are stronger than previous ones in the sense that we assume each node knows the number $n$ of nodes in the graph (in some cases) or an $\alpha$-approximation of $n$ (in other cases). For spanning tree verification, we show that the message complexity inherently depends on the quality of the given approximation of $n$: we show a tight lower bound of $\Omega(n^2)$ for the case $\alpha \ge \sqrt{2}$ and a much better upper bound of $O(n \log n)$ when nodes are given a tighter approximation. On the other hand, our framework also yields an $\Omega(n^2)$ lower bound on the message complexity of verifying a minimum spanning tree (MST), which reveals a polynomial separation between ST verification and MST verification. This result holds for randomized algorithms with perfect knowledge of the network size, and even when just one node must detect illegal inputs, thus improving over the work of Kor, Korman, and Peleg (2013). For verifying a $d$-approximate BFS tree, we show that the same lower bound holds even if nodes know $n$ exactly; however, the lower bound is sensitive to the stretch parameter $d$.
The Pickands estimator for the extreme value index is attractive due to its universal consistency and its location and scale invariance, which set it apart from other estimators. However, like many extreme value index estimators, it suffers from poor asymptotic efficiency. Chen (2021) introduces a Conditional Value-at-Risk (CVaR)-based Pickands estimator, establishes its consistency, and demonstrates through simulations that this estimator significantly reduces the mean squared error while preserving location and scale invariance. The initial focus of this paper is on demonstrating the weak convergence of the empirical CVaR in function space. Based on the established weak convergence, the paper then presents the asymptotic normality of the CVaR-based Pickands estimator, and supports these theoretical findings with empirical evidence from simulation studies.
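For reference, the classical Pickands estimator and a CVaR-based variant are sketched below. The classical formula is standard; the CVaR version is a hypothetical reading of the construction (replacing tail order statistics by empirical CVaRs at matching levels), not necessarily Chen's exact definition.

    import numpy as np

    def pickands(x, k):
        """Classical Pickands estimator from ascending order statistics;
        requires k <= n / 4."""
        xs = np.sort(x)
        n = len(xs)
        num = xs[n - k] - xs[n - 2 * k]
        den = xs[n - 2 * k] - xs[n - 4 * k]
        return np.log(num / den) / np.log(2)

    def empirical_cvar(x, alpha):
        """Empirical CVaR at level alpha: mean of the upper (1 - alpha) tail."""
        xs = np.sort(x)
        return xs[int(np.ceil(alpha * len(xs))):].mean()

    def cvar_pickands(x, k):
        """CVaR-based variant: order statistics replaced by empirical CVaRs."""
        n = len(x)
        c1 = empirical_cvar(x, 1 - k / n)
        c2 = empirical_cvar(x, 1 - 2 * k / n)
        c4 = empirical_cvar(x, 1 - 4 * k / n)
        return np.log((c1 - c2) / (c2 - c4)) / np.log(2)

Averaging over the tail rather than reading off single order statistics is what drives the variance reduction reported in the simulations.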
The logic of information flows (LIF) has recently been proposed as a general framework in the field of knowledge representation. In this framework, tasks of a procedural nature can still be modeled in a declarative, logic-based fashion. In this paper, we focus on the task of query processing under limited access patterns, a well-studied problem in the database literature, and we show that LIF is well-suited for modeling this task. Toward this goal, we introduce a variant of LIF called "forward" LIF (FLIF), in a first-order setting. FLIF takes a novel graph-navigational approach; it is an XPath-like language that nevertheless turns out to be equivalent to the "executable" fragment of first-order logic defined by Nash and Lud\"ascher. One can also classify the variables in FLIF expressions as inputs and outputs. Expressions in which inputs and outputs are disjoint, referred to as io-disjoint FLIF expressions, allow a particularly transparent translation into algebraic query plans that respect the access limitations. Finally, we show that general FLIF expressions can always be put into io-disjoint form.
We develop an analytical framework to characterize the set of optimal ReLU neural networks by reformulating the non-convex training problem as a convex program. We show that the global optima of the convex parameterization are given by a polyhedral set and then extend this characterization to the optimal set of the non-convex training objective. Since all stationary points of the ReLU training problem can be represented as optima of sub-sampled convex programs, our work provides a general expression for all critical points of the non-convex objective. We then leverage our results to provide an optimal pruning algorithm for computing minimal networks, establish conditions for the regularization path of ReLU networks to be continuous, and develop sensitivity results for minimal ReLU networks.
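To make the convex reformulation concrete, here is a small sketch in the spirit of convex two-layer ReLU programs, with activation patterns sampled from random hyperplane arrangements and cvxpy as the solver interface; the group-regularized parameterization is the well-known one, but the sampling, constants, and problem size are illustrative.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, d = 20, 3
    X = rng.standard_normal((n, d))
    y = rng.standard_normal(n)

    # Sample candidate ReLU activation patterns from random hyperplanes.
    G = rng.standard_normal((d, 50))
    patterns = np.unique((X @ G >= 0).astype(float).T, axis=0)

    lam = 1e-3
    residual, penalty, constraints = -y, 0, []
    for D in patterns:
        v, w = cp.Variable(d), cp.Variable(d)
        DX = D[:, None] * X           # rows of X masked by the pattern
        residual = residual + DX @ (v - w)
        # (2D - I) X u >= 0 keeps each neuron consistent with its pattern.
        S = (2 * np.diag(D) - np.eye(n)) @ X
        constraints += [S @ v >= 0, S @ w >= 0]
        penalty = penalty + cp.norm(v, 2) + cp.norm(w, 2)

    prob = cp.Problem(cp.Minimize(cp.sum_squares(residual) + lam * penalty),
                      constraints)
    prob.solve()
    print(prob.value)  # global optimum of the sampled convex program

The solution set of such a convex program is the polyhedral object whose structure the paper transfers back to the non-convex training objective.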
Objective: The objective of this work is to introduce and demonstrate the effectiveness of a novel sensing modality for contact detection between an off-the-shelf aspiration catheter and a thrombus. Methods: A custom robotic actuator with a pressure sensor was used to generate an oscillatory vacuum excitation and sense the pressure inside the extracorporeal portion of the catheter. Vacuum pressure profiles and robotic motion data were used to train a support vector machine (SVM) classification model to detect contact between the aspiration catheter tip and a mock thrombus. Validation consisted of benchtop accuracy verification, as well as a user study comparing the method to the current standard of angiographic presentation. Results: The benchtop accuracy of the sensing modality was shown to be 99.67%. The user study demonstrated a statistically significant improvement in identifying catheter-thrombus contact compared to the current standard: the odds ratio of successfully detecting clot contact was 2.86 (p=0.03) with the proposed sensory method compared to without it. Conclusion: The results of this work indicate that the proposed sensing modality can offer intraoperative feedback to interventionalists that improves their ability to detect contact between the distal tip of a catheter and a thrombus. Significance: By offering a relatively low-cost technology that lets off-the-shelf aspiration catheters serve as clot-detecting sensors, interventionalists can improve the first-pass effect of the mechanical thrombectomy procedure while reducing procedural times and mental burden.
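A minimal sketch of the classification step, with synthetic pressure windows standing in for the recorded vacuum profiles; the summary features, the assumed damping effect of contact, and all names are illustrative rather than the paper's exact pipeline.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8 * np.pi, 200)

    def features(window):
        """Summary features of one oscillatory vacuum pressure window."""
        return [window.mean(), window.std(), window.max() - window.min()]

    # Synthetic data: assume contact damps the pressure oscillation.
    free = [features(np.sin(t) + 0.1 * rng.standard_normal(t.size))
            for _ in range(100)]
    contact = [features(0.4 * np.sin(t) + 0.1 * rng.standard_normal(t.size))
               for _ in range(100)]
    X = np.array(free + contact)
    y = np.array([0] * 100 + [1] * 100)  # 1 = catheter tip touching thrombus

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print(clf.score(X, y))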
Residual networks (ResNets) have displayed impressive results in pattern recognition and have recently garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function of the layer index as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
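The kind of experiment involved can be sketched as follows: train toy residual networks of increasing depth and track how the layer-to-layer weight increments scale. If the weights converged to a smooth function of depth, the largest increment would shrink like 1/depth. The architecture, data, and 1/depth residual scaling here are illustrative choices, not the paper's setup.

    import torch
    import torch.nn as nn

    def train_resnet(depth, width=16, steps=200):
        layers = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])
        x, y = torch.randn(256, width), torch.randn(256, width)
        opt = torch.optim.SGD(layers.parameters(), lr=1e-2)
        for _ in range(steps):
            h = x
            for layer in layers:
                h = h + torch.tanh(layer(h)) / depth  # residual update
            loss = ((h - y) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return layers

    for depth in (4, 16, 64):
        layers = train_resnet(depth)
        W = torch.stack([l.weight.detach() for l in layers])
        increments = (W[1:] - W[:-1]).flatten(1).norm(dim=1)
        # A smooth ODE-limit regime would show max increment ~ 1/depth.
        print(depth, increments.max().item())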