Permissionless blockchains such as Bitcoin have excelled at financial services. Yet, opportunistic traders extract monetary value from the mesh of decentralized finance (DeFi) smart contracts through so-called blockchain extractable value (BEV). The recent emergence of centralized BEV relayers portrays BEV as a positive additional revenue source. Because BEV has been quantitatively shown to deteriorate the blockchain's consensus security, BEV relayers endanger ledger security by incentivizing rational miners to fork the chain. For example, a rational miner with a 10% hashrate will fork Ethereum if a BEV opportunity exceeds 4x the block reward. However, related work currently lacks the quantitative insights into past BEV extraction needed to assess the practical risks of BEV objectively. In this work, we quantify the BEV danger by deriving the USD value extracted from sandwich attacks, liquidations, and decentralized exchange arbitrage. We estimate that over 32 months, BEV yielded 540.54M USD in profit, divided among 11,289 addresses, across 49,691 cryptocurrencies and 60,830 on-chain markets. The largest single BEV instance we find amounts to 4.1M USD, 616.6x the Ethereum block reward. Moreover, while the practitioner community has discussed the existence of generalized trading bots, we are, to our knowledge, the first to provide a concrete algorithm. Our algorithm can replace unconfirmed transactions without needing to understand the victim transactions' underlying logic; we estimate that it would have yielded a profit of 57,037.32 ETH (35.37M USD) over 32 months of past blockchain data. Finally, we formalize and analyze emerging BEV relay systems, in which miners accept BEV transactions from a centralized relay server instead of the peer-to-peer (P2P) network. We find that such relay systems aggravate consensus-layer attacks and therefore further endanger blockchain security.
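To make the transaction-replacement idea concrete, the following is a minimal Python sketch of a generalized front-running loop, assuming hypothetical helpers (fetch_pending_transactions, simulate_on_fork, broadcast) and a hypothetical chain-state object; it only illustrates the principle of naively substituting the beneficiary and keeping simulated-profitable replacements, not the paper's actual implementation.

```python
# Hedged sketch of a generalized transaction-replacement bot: it copies a pending victim
# transaction, substitutes the adversary's address, simulates the result on a local fork,
# and only replays the transaction if the adversary's balance grows. fetch_pending_transactions,
# simulate_on_fork, broadcast, and the state object are hypothetical helpers, not a real API.

ADVERSARY = "0xAdversaryAddress"            # assumed adversary-controlled account
MIN_PROFIT_WEI = 10**16                     # ignore opportunities below ~0.01 ETH

def try_replace(victim_tx, state):
    """Return a profitable replacement transaction, or None."""
    candidate = dict(victim_tx)
    candidate["from"] = ADVERSARY                       # naive substitution, no semantic analysis
    candidate["gasPrice"] = victim_tx["gasPrice"] + 1   # outbid the victim in the mempool

    balance_before = state.balance(ADVERSARY)
    outcome = simulate_on_fork(state, candidate)        # execute on a private chain fork
    if not outcome.success:
        return None
    profit = outcome.state.balance(ADVERSARY) - balance_before - outcome.gas_cost
    return candidate if profit > MIN_PROFIT_WEI else None

def run(state):
    for tx in fetch_pending_transactions():             # observe the P2P mempool
        replacement = try_replace(tx, state)
        if replacement is not None:
            broadcast(replacement)                       # front-run the victim transaction
```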
The concept of blockchain has emerged as an effective solution for data-sensitive domains such as healthcare and financial services, owing to attributes like immutability, non-repudiation, and availability. As a result, adoption of this technology across various domains has risen exponentially; one such field is the healthcare supply chain. Managing healthcare supply chain processes effectively is crucial for the healthcare system. Despite various innovations in treatment methodologies, the healthcare supply chain management system remains inefficient and falls short of expectations. The traditional healthcare supply chain is time-consuming and lacks synergy among the various stakeholders. Thus, in this paper we propose a framework based on blockchain smart contracts and decentralized storage to connect all supply chain stakeholders. Smart contracts in the framework enforce and capture the various interactions and transactions among the stakeholders, thereby helping to automate these processes, promote transparency, improve efficiency, and minimize service time. Preliminary results show that the proposed framework is more efficient, secure, and economically feasible.
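As a rough illustration of the kind of stakeholder hand-offs such smart contracts could enforce, the following Python sketch models a shipment as a small state machine; the roles, states, and transitions are hypothetical and not taken from the proposed framework.

```python
# Hypothetical sketch of shipment hand-offs a supply-chain contract could enforce.
# Roles, states, and transitions are illustrative only, not the paper's framework.

ALLOWED = {
    ("manufacturer", "CREATED"): "SHIPPED",
    ("distributor", "SHIPPED"): "RECEIVED",
    ("hospital", "RECEIVED"): "DELIVERED",
}

class Shipment:
    def __init__(self, item_id, owner="manufacturer"):
        self.item_id = item_id
        self.state = "CREATED"
        self.history = [(owner, self.state)]      # append-only audit trail

    def advance(self, caller_role):
        """Advance the shipment state if caller_role is authorized for it."""
        nxt = ALLOWED.get((caller_role, self.state))
        if nxt is None:
            raise PermissionError(f"{caller_role} cannot act on state {self.state}")
        self.state = nxt
        self.history.append((caller_role, nxt))
        return nxt

# Usage: a full hand-off from manufacturer to hospital.
s = Shipment("vaccine-batch-42")
for role in ("manufacturer", "distributor", "hospital"):
    s.advance(role)
print(s.history)
```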
Smart Cities are emerging everywhere around us, and yet they remain incomprehensibly far from directly impacting everyday life. What needs to happen to make cities truly smart? Digital Twins (DTs) represent their Physical Twin (PT) in the real world through models, sensed data, context awareness, and interactions. A Digital Twin of a city appears to offer the right combination to make the Smart City accessible and thus usable. However, without appropriate interfaces, the complexity of a city cannot be represented. Ultimately, fully leveraging the potential of Smart Cities requires going beyond the Digital Twin. Can this issue be addressed? I advance embedding the Digital Twin into the Physical Twin, i.e., Fused Twins. This fusion allows data to be accessed where it is generated, in a context that makes it easily understandable. The Fused Twins paradigm is the formalization of this vision. Prototypes of Fused Twins are appearing at breakneck speed across different domains, but Smart Cities will be the context in which Fused Twins will predominantly be seen in the future. This paper reviews Digital Twins to understand how Fused Twins can be constructed from Augmented Reality, Geographic Information Systems, Building/City Information Models, and Digital Twins, and provides an overview of current research and future directions.
In today's digital world, interaction with online platforms is ubiquitous, and content moderation is therefore important for protecting users from content that does not comply with pre-established community guidelines. Having a robust content moderation system at every stage of planning is particularly important. We study the short-term planning problem of allocating human content reviewers to different harmful content categories. We use tools from fair division and study the application of competitive equilibrium and leximin allocation rules. Furthermore, we incorporate into the traditional Fisher market setup novel aspects of practical importance. The first aspect is the forecasted workload of different content categories. We show how a formulation inspired by the celebrated Eisenberg-Gale program allows us to find an allocation that not only satisfies the forecasted workload but also fairly allocates the remaining reviewing hours among all content categories. The resulting allocation is also robust, as the additional allocation provides a guardrail in cases where the actual workload deviates from the predicted workload. The second practical consideration is time-dependent allocation, motivated by the fact that partners need scheduling guidance for reviewers across days to achieve efficiency. To address the time component, we introduce new extensions of the various fair allocation approaches from the single-time-period setting, and we show that many properties extend in essence, albeit with some modifications. Related to the time component, we additionally investigate how to satisfy the markets' desire for smooth allocation (e.g., partners staffing content reviewers prefer an allocation that does not vary much from period to period, to minimize staffing switches). We demonstrate the performance of our proposed approaches on real-world data obtained from Meta.
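For intuition, a minimal sketch of an Eisenberg-Gale-style convex program with an added workload lower bound is given below, using the cvxpy modelling library; the pool capacities, category weights, workload figures, and utility matrix are illustrative assumptions, and the paper's exact formulation may differ.

```python
# Minimal sketch of an Eisenberg-Gale-style allocation of reviewer hours to harmful-content
# categories. All numbers, the workload lower bound, and the utility matrix are illustrative
# assumptions, not the paper's exact formulation.
import cvxpy as cp
import numpy as np

hours = np.array([40.0, 60.0])           # capacity of two reviewer pools (hours)
w = np.array([1.0, 1.0, 2.0])            # priority weight ("budget") per content category
workload = np.array([10.0, 15.0, 20.0])  # forecasted workload per category (effective hours)
V = np.array([[1.0, 0.5],                # effectiveness of pool j on category i
              [0.8, 1.0],
              [0.6, 0.9]])

x = cp.Variable((3, 2), nonneg=True)         # hours of pool j allocated to category i
u = cp.sum(cp.multiply(V, x), axis=1)        # effective hours received per category

objective = cp.Maximize(w @ cp.log(u))       # Eisenberg-Gale: weighted log utilities
constraints = [
    cp.sum(x, axis=0) <= hours,              # do not exceed each pool's capacity
    u >= workload,                           # cover the forecasted workload first
]
cp.Problem(objective, constraints).solve()
print(np.round(x.value, 2), np.round(u.value, 2))
```

The log-utility objective distributes the hours left over after the workload constraint is met in a proportionally fair way, which is the "guardrail" behaviour described above.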
A recent line of work in mechanism design has focused on guaranteeing incentive compatibility for agents without contingent reasoning skills: obviously strategyproof mechanisms guarantee that it is "obvious" for these imperfectly rational agents to behave honestly, whereas non-obviously manipulable (NOM) mechanisms take a more optimistic view and ensure that these agents will only misbehave when it is "obvious" for them to do so. Technically, obviousness requires comparing certain extrema (defined over the actions of the other agents) of an agent's utilities for honest behaviour against dishonest behaviour. We present a technique, based on cycle monotonicity, for designing NOM mechanisms in settings where monetary transfers are allowed, which allows us to disentangle the specification of the mechanism's allocation from its payments. By leveraging this framework, we completely characterise both the allocation and payment functions of NOM mechanisms for single-parameter agents. We then look at the classical setting of bilateral trade and study whether, and how much, subsidy is needed to guarantee NOM, efficiency, and individual rationality. We prove a stark dichotomy: no finite subsidy suffices if agents look only at best-case extremes, whereas no subsidy at all is required when agents focus on worst-case extremes. We conclude the paper by characterising the NOM mechanisms that require no subsidies whilst satisfying individual rationality.
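For concreteness, one common way to formalise this extrema comparison, in the spirit of Troyan and Morrill's obvious manipulations, is sketched below; the paper's setting with monetary transfers may phrase it slightly differently.

```latex
% A mechanism f is non-obviously manipulable (NOM) if, for every agent i,
% every true type t_i and every misreport t_i', truth-telling is weakly better
% both in the best case and in the worst case over the other agents' reports:
\begin{align*}
\max_{t_{-i}} u_i\bigl(f(t_i, t_{-i}), t_i\bigr) &\;\ge\; \max_{t_{-i}} u_i\bigl(f(t_i', t_{-i}), t_i\bigr),\\
\min_{t_{-i}} u_i\bigl(f(t_i, t_{-i}), t_i\bigr) &\;\ge\; \min_{t_{-i}} u_i\bigl(f(t_i', t_{-i}), t_i\bigr).
\end{align*}
```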
Energy forecasting has attracted enormous attention over the last few decades, with novel proposals related to the use of heterogeneous data sources, probabilistic forecasting, online learning, etc. A key insight that has emerged is that learning and forecasting may benefit greatly from distributed data, though not only in the geographical sense: various agents collect and own data that may be useful to others. In contrast to recent proposals that look into distributed and privacy-preserving learning (incentive-free), we explore here a framework called regression markets. There, agents aiming to improve their forecasts post a regression task, to which other agents may contribute by sharing data on their features, being monetarily rewarded in return. The market design is for regression models that are linear in their parameters, and possibly separable, with estimation performed via either batch or online learning. Both in-sample and out-of-sample aspects are considered, with markets for fitting models in-sample, and then for improving genuine forecasts out-of-sample. Such regression markets rely on recent concepts from the interpretability of machine learning and from cooperative game theory, namely Shapley additive explanations. Besides introducing the market design and proving its desirable properties, application results are shown based on simulation studies (to highlight the salient features of the proposal) and on real-world case studies.
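As a toy illustration of how Shapley additive attributions could split a market payment among feature-owning agents, the following Python sketch computes each agent's Shapley value of the in-sample loss reduction of a least-squares fit; the synthetic data and the payment rule are illustrative assumptions, not the paper's full market design.

```python
# Hedged sketch of a Shapley-style payment split in a regression market: each external agent
# owns one candidate feature, and its payment share is its Shapley value of the reduction in
# mean-squared error of a least-squares fit (intercept-only model as the baseline).
from itertools import combinations
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # features owned by three agents
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

def mse_with(features):
    """In-sample MSE of least squares using an intercept plus the given feature columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in sorted(features)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ beta) ** 2))

agents = [0, 1, 2]
value = {}                                        # Shapley value of MSE reduction per agent
for i in agents:
    others = [j for j in agents if j != i]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(len(agents) - k - 1) / factorial(len(agents))
            marginal = mse_with(S) - mse_with(set(S) | {i})   # loss reduction from adding i
            total += weight * marginal
    value[i] = total

print({i: round(v, 4) for i, v in value.items()}, "baseline MSE:", round(mse_with([]), 4))
```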
An idealised decentralised exchange (DEX) provides a medium in which players wishing to exchange one token for another can interact with other such players and liquidity providers at a price which reflects the true exchange rate, without the need for a trusted third party. Unfortunately, extractable value is an inherent flaw in existing blockchain-based DEX implementations. This extractable value takes the form of monetizable opportunities that allow blockchain participants to extract money from a DEX without adding demand or liquidity to the DEX, the two functions for which DEXs are intended. This money is taken directly from the intended DEX participants. As a result, the cost of participation in existing DEXs is much larger than the upfront fees required to post a transaction on a blockchain and/or into a smart contract. We present FairTraDEX, a decentralised variant of a frequent batch auction (FBA), a DEX protocol which provides formal game-theoretic guarantees against extractable value. FBAs, when run by a trusted third party, admit unique game-theoretically optimal strategies that ensure players are shown prices equal to the liquidity provider's fair price, excluding explicit, pre-determined fees. FairTraDEX replicates the key features of an FBA that provide these game-theoretic guarantees using a combination of zero-knowledge set-membership protocols and an escrow-enforced commit-reveal protocol. We extend the results of FBAs to handle monopolistic and/or malicious liquidity providers, and provide a detailed pseudo-code implementation of FairTraDEX based on existing mainstream blockchain protocols.
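A stripped-down Python sketch of the commit-reveal, uniform-price batching idea is shown below; it deliberately omits FairTraDEX's zero-knowledge set-membership and escrow components, and the clearing rule is an illustrative assumption rather than the protocol's specification.

```python
# Hedged sketch of the commit-reveal batch-auction idea: traders first commit to a salted hash
# of their order, then reveal; revealed orders clear in one batch at a single uniform price.
import hashlib
import json

def commitment(order, salt):
    return hashlib.sha256((json.dumps(order, sort_keys=True) + salt).encode()).hexdigest()

commitments = {}                            # trader -> committed hash (commit phase)
revealed = []                               # orders accepted in the reveal phase

def commit(trader, digest):
    commitments[trader] = digest

def reveal(trader, order, salt):
    if commitments.get(trader) == commitment(order, salt):   # reject mismatching reveals
        revealed.append(order)

def clear(orders):
    """Uniform clearing price maximizing executable volume across the batch."""
    buys = [o for o in orders if o["side"] == "buy"]
    sells = [o for o in orders if o["side"] == "sell"]
    best_price, best_volume = None, 0
    for p in {o["price"] for o in orders}:
        demand = sum(o["qty"] for o in buys if o["price"] >= p)
        supply = sum(o["qty"] for o in sells if o["price"] <= p)
        if min(demand, supply) > best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

# Usage: two traders commit, reveal, and the batch clears at one price.
a = {"side": "buy", "price": 101, "qty": 5}
b = {"side": "sell", "price": 99, "qty": 5}
commit("alice", commitment(a, "s1")); commit("bob", commitment(b, "s2"))
reveal("alice", a, "s1"); reveal("bob", b, "s2")
print(clear(revealed))                      # (99, 5) or (101, 5) depending on tie-break
```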
This paper presents the design of an alternative secure blockchain network framework to prevent damage from an attacker. The concept of a strategic management alliance is applied on top of a recently developed stochastic game framework. This new enhanced hybrid theoretical model combines conventional game theory, fluctuation theory, and the Blockchain Governance Game to find the best strategies for preventing a network malfunction caused by an attacker, by forming a strategic alliance with other genuine miners. Analytically tractable results for the decision-making parameters are fully obtained, which make it possible to predict the moment to act and to deliver the optimal number of allied nodes needed to protect the blockchain network. This research is helpful for anyone considering an initial coin offering or launching new blockchain-based services, enhancing security by allying with trusted miners within the decentralized network.
Securing safe driving for connected and autonomous vehicles (CAVs) continues to be a widespread concern, despite various sophisticated functions delivered by artificial intelligence for in-vehicle devices. Diverse malicious network attacks are ubiquitous, along with the worldwide implementation of the Internet of Vehicles, which exposes a range of reliability and privacy threats for managing data in CAV networks. Combined with the fact that existing CAVs have limited capability to handle intensive computation tasks, this implies a need for designing an efficient assessment system to guarantee autonomous driving safety without compromising data security. In this article, we propose a novel framework, namely Blockchain-enabled intElligent Safe-driving assessmenT (BEST), which offers a smart and reliable approach for conducting safe driving supervision while protecting vehicular information. Specifically, a promising solution that exploits a long short-term memory model is introduced to assess the safety level of the moving CAVs. Then we investigate how a distributed blockchain obtains adequate trustworthiness and robustness for CAV data by adopting a Byzantine fault tolerance-based delegated proof-of-stake consensus mechanism. Simulation results demonstrate that our presented BEST achieves better data credibility and higher prediction accuracy for vehicular safety assessment compared with existing schemes. Finally, we discuss several open challenges that need to be addressed in future CAV networks.
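As an illustration of the kind of LSTM-based assessment module described above, the following PyTorch sketch classifies a window of vehicle telemetry into a safety level; the input dimensions, sequence length, and number of classes are assumptions, not the BEST framework's actual configuration.

```python
# Hedged sketch of an LSTM-based safety-level classifier for CAV telemetry sequences.
# Feature count, window length, and the three safety classes are illustrative assumptions.
import torch
import torch.nn as nn

class SafetyLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)       # safety-level logits

    def forward(self, x):                               # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])                 # classify from the last time step

model = SafetyLSTM()
telemetry = torch.randn(8, 50, 6)                       # 8 vehicles, 50 steps, 6 signals
logits = model(telemetry)
print(logits.argmax(dim=1))                             # predicted safety level per vehicle
```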
Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use.
Contrastive learning relies on constructing a collection of negative examples that are sufficiently hard to discriminate against positive queries when their representations are self-trained. Existing contrastive learning methods either maintain a queue of negative samples over minibatches, of which only a small portion is updated in each iteration, or use only the other examples from the current minibatch as negatives. The former cannot closely track the change of the learned representation over iterations, since the entire queue is never updated as a whole, while the latter discards useful information from past minibatches. Alternatively, we propose to directly learn a set of negative adversaries playing against the self-trained representation. Two players, the representation network and the negative adversaries, are alternately updated to obtain the most challenging negative examples, against which the representation of positive queries is trained to discriminate. We further show that the negative adversaries are updated towards a weighted combination of positive queries by maximizing the adversarial contrastive loss, thereby allowing them to closely track the change of representations over time. Experimental results demonstrate that the proposed Adversarial Contrastive (AdCo) model not only achieves superior performance (a top-1 accuracy of 73.2% over 200 epochs and 75.7% over 800 epochs with linear evaluation on ImageNet), but also can be pre-trained more efficiently with fewer epochs.
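The adversarial update can be sketched as follows in PyTorch: a learnable bank of negative embeddings is trained to maximize the same contrastive loss the encoder minimizes; the backbone, data, and hyperparameters here are stand-ins rather than the paper's full AdCo training recipe.

```python
# Hedged sketch of the adversarial negative update: a learnable bank of negative embeddings is
# pushed to maximize the contrastive loss while the encoder minimizes it.
import torch
import torch.nn.functional as F

dim, n_neg, tau = 128, 4096, 0.1
encoder = torch.nn.Sequential(torch.nn.Linear(512, dim))       # stand-in for a real backbone
negatives = torch.nn.Parameter(F.normalize(torch.randn(n_neg, dim), dim=1))

opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.03)
opt_neg = torch.optim.SGD([negatives], lr=3.0)                 # the adversary has its own optimizer

def contrastive_loss(q, k, neg):
    q, k, neg = F.normalize(q, dim=1), F.normalize(k, dim=1), F.normalize(neg, dim=1)
    pos = (q * k).sum(dim=1, keepdim=True)                     # positive similarities
    logits = torch.cat([pos, q @ neg.t()], dim=1) / tau        # positives vs. negative bank
    return F.cross_entropy(logits, torch.zeros(q.size(0), dtype=torch.long))

for step in range(10):                                         # toy loop on random features
    v1, v2 = torch.randn(32, 512), torch.randn(32, 512)        # two augmented views (stand-in)
    loss = contrastive_loss(encoder(v1), encoder(v2).detach(), negatives)

    opt_enc.zero_grad(); opt_neg.zero_grad()
    loss.backward()
    opt_enc.step()                                             # encoder: minimize the loss
    negatives.grad.neg_()                                      # adversary: ascend the same loss
    opt_neg.step()
```

Flipping the sign of the negatives' gradient before stepping turns the shared backward pass into gradient ascent for the adversary, which is what drives the negatives towards the hardest examples for the current representation.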