
Off-chain protocols constitute one of the most promising approaches to solving the inherent scalability issue of blockchain technologies. The core idea is to let parties transact on-chain only once to establish a channel between them, later leveraging the resulting channel paths to perform arbitrarily many peer-to-peer transactions off-chain. While significant progress has been made in terms of proof techniques for off-chain protocols, existing approaches do not capture the game-theoretic incentives at the core of their design, an omission that has in the past led to significant attack vectors, such as the Wormhole attack, being overlooked. In this work we take a first step towards a principled game-theoretic security analysis of off-chain protocols by introducing the first game-theoretic model that is expressive enough to reason about their security. We advocate the use of Extensive Form Games (EFGs) and introduce two instances of EFGs to capture security properties of the closing and the routing of the Lightning Network. Specifically, we model the closing protocol, which relies on punishment mechanisms to disincentivize parties from uploading old channel states on-chain. Moreover, we model the routing protocol, thereby formally characterizing the Wormhole attack, a vulnerability that undermines the fee-based incentive mechanism underlying the Lightning Network.
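
To make the EFG approach concrete, the following is a minimal sketch, in Python, of a closing game with a punishment branch, solved by backward induction. The two-move structure and the payoff values are illustrative assumptions for exposition, not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    player: str = None                       # None marks a leaf
    payoffs: tuple = (0.0, 0.0)              # (payoff_A, payoff_B) at a leaf
    children: dict = field(default_factory=dict)  # action name -> child Node

def backward_induction(node):
    """Return the subgame-perfect equilibrium payoffs of this subgame."""
    if node.player is None:
        return node.payoffs
    idx = 0 if node.player == "A" else 1
    # The moving player picks the action maximizing its own payoff.
    return max((backward_induction(c) for c in node.children.values()),
               key=lambda p: p[idx])

# Channel with 10 coins: the latest state pays (4, 6); a stale state pays (8, 2).
# If A publishes the stale state, B can punish within the timelock and take all 10.
closing_game = Node(player="A", children={
    "close with latest state": Node(payoffs=(4.0, 6.0)),
    "publish stale state": Node(player="B", children={
        "punish": Node(payoffs=(0.0, 10.0)),
        "ignore": Node(payoffs=(8.0, 2.0)),
    }),
})

print(backward_induction(closing_game))
# (4.0, 6.0): since B prefers punishing, honest closing is A's best response.
```

The punishment branch is what makes honest closing an equilibrium: cheating only pays if the counterparty fails to react within the timelock.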

Related Content

DeFi, or Decentralized Finance, is built on a distributed ledger known as blockchain technology. Using blockchain, DeFi can automate the execution of predetermined operations between parties. DeFi systems use blockchain technology to execute user transactions, such as lending and exchanging. The total value locked in DeFi decreased from \$200 billion in April 2022 to \$80 billion in July 2022, indicating that security in this area remains problematic. In this paper, we address the deficiency in DeFi security studies. To the best of our knowledge, our paper is the first to provide a systematic analysis of DeFi security. First, we summarize the DeFi-related vulnerabilities in each blockchain layer; application-level vulnerabilities are also analyzed. Then we classify and analyze real-world DeFi attacks based on the principles that correlate with these vulnerabilities. In addition, we collect optimization strategies at the data, network, consensus, smart contract, and application layers, and describe the weaknesses and technical approaches each addresses. On the basis of this comprehensive analysis, we summarize several challenges and possible future directions in DeFi to offer ideas for further research.
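
As one example of the contract-layer vulnerability class behind several of the real-world attacks analyzed in such surveys, the following is a minimal Python simulation of the classic reentrancy pattern; the vault and attacker are illustrative toy constructions, not any specific protocol's code.

```python
class VulnerableVault:
    """Toy contract that makes the external call BEFORE updating balances."""
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0:
            user.receive(self, amount)   # external call first ...
            self.total -= amount         # ... state update second: the bug
            self.balances[user] = 0

class Honest:
    def receive(self, vault, amount):
        pass

class Attacker:
    def __init__(self):
        self.loot = 0
    def receive(self, vault, amount):
        self.loot += amount
        if self.loot + amount <= vault.total:   # vault still has funds to drain
            vault.withdraw(self)                 # re-enter before the state update

vault = VulnerableVault()
vault.deposit(Honest(), 90)
attacker = Attacker()
vault.deposit(attacker, 10)
vault.withdraw(attacker)
print(attacker.loot)  # 100: a 10-coin balance drains every deposit in the vault
```

Updating state before the external call (or using a reentrancy lock) removes the attack.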

The utilization of renewable energy technologies, particularly hydrogen, has seen a boom in interest and has spread throughout the world. Ethanol steam reforming is one of the primary methods capable of producing hydrogen efficiently and reliably. This paper provides an in-depth theoretical and numerical study of the reforming system, as well as a plan to explore the possibility of converting the system into its conservation form. Lastly, we offer an overview of several numerical approaches for solving the general first-order quasi-linear hyperbolic equation, applied to the particular model for ethanol steam reforming (ESR). We conclude by presenting some results that would enable these ODE/PDE solvers to be used in non-linear model predictive control (NMPC) algorithms, and discuss the limitations of our approach and directions for future work.
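
For intuition about the class of solvers surveyed, the following is a minimal sketch of a first-order Lax-Friedrichs scheme for a scalar conservation law u_t + f(u)_x = 0, using the Burgers flux f(u) = u^2/2 as an illustrative stand-in for the ESR model equations, which are not reproduced here.

```python
import numpy as np

def flux(u):
    return 0.5 * u**2                         # Burgers' flux (illustrative stand-in)

def lax_friedrichs_step(u, dx, dt):
    """One conservative Lax-Friedrichs step with periodic boundaries."""
    up, um = np.roll(u, -1), np.roll(u, 1)    # u_{i+1}, u_{i-1}
    return 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))

nx, length, t_end = 200, 1.0, 0.3
dx = length / nx
x = np.linspace(0.0, length, nx, endpoint=False)
u = 1.0 + np.sin(2.0 * np.pi * x)             # smooth initial profile

t = 0.0
while t < t_end:
    dt = min(0.8 * dx / max(np.abs(u).max(), 1e-12), t_end - t)  # CFL-limited step
    u = lax_friedrichs_step(u, dx, dt)
    t += dt

print(float(u.min()), float(u.max()))  # the profile steepens into a shock, stays bounded
```

Writing the system in conservation form is exactly what makes such finite-volume schemes applicable, since they discretize the flux rather than the quasi-linear coefficients.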

A broad spectrum of technologies is collectively referred to as physical layer security (PLS), ranging from wiretap coding, secret key generation (SKG), and authentication using physical unclonable functions (PUFs), to localization / RF fingerprinting and anomaly detection that monitors the physical layer (PHY) and hardware. Although the fundamental limits of PLS have long been characterized, incorporating PLS into future wireless security standards requires further steps in terms of channel engineering and pre-processing. Reflecting upon the growing discussion in our community, in this critical review paper, we ask some important questions with respect to the key hurdles in the practical deployment of PLS in 6G, and also present some research directions and possible solutions, in particular our vision for context-aware 6G security that incorporates PLS.
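
As a concrete illustration of one of these primitives, here is a minimal sketch of reciprocity-based secret key generation: two legitimate parties quantize correlated channel measurements into bit strings, while an eavesdropper observing an independent channel learns essentially nothing. The Gaussian noise model and the median-threshold quantizer are illustrative assumptions, not a deployable SKG pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
channel = rng.normal(0.0, 1.0, n)            # reciprocal fading coefficients
alice = channel + rng.normal(0.0, 0.1, n)    # Alice's noisy measurement
bob   = channel + rng.normal(0.0, 0.1, n)    # Bob's noisy measurement
eve   = rng.normal(0.0, 1.0, n)              # Eve sees an independent channel

def quantize(samples):
    """One bit per sample: above/below the median of the observed sequence."""
    return (samples > np.median(samples)).astype(int)

ka, kb, ke = quantize(alice), quantize(bob), quantize(eve)
print("Alice/Bob disagreement:", np.mean(ka != kb))  # small: fixable by reconciliation
print("Alice/Eve disagreement:", np.mean(ka != ke))  # ~0.5: no useful information
```

In a full SKG scheme, the residual Alice/Bob disagreements are removed by information reconciliation and the key is then distilled by privacy amplification.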

The Smart Grid (SG) is a cornerstone of modern society, providing the energy required to sustain billions of lives and thousands of industries. Unfortunately, as one of the most critical infrastructures of our world, the SG is an attractive target for attackers. The problem is aggravated by the increasing adoption of digitalisation, which further increases the SG's exposure to cyberthreats. Successful exploitation of such exposure can leave entire countries paralysed, which is an unacceptable -- but ultimately inescapable -- risk. This paper aims to mitigate this risk by elucidating the perspective of real practitioners on the cybersecurity of the SG. We interviewed 18 entities, operating in diverse countries in Europe and covering all domains of the SG -- from energy generation to its delivery. Our analysis highlights a stark contrast between (a) research and practice, but also between (b) public and private entities. For instance: some threats appear to be much less dangerous than what is claimed in related papers; some technological paradigms have dubious utility for practitioners, but are actively promoted by the literature; finally, practitioners may either under- or over-estimate their own cybersecurity capabilities. We derive four takeaways that enable future endeavours to improve the overall cybersecurity of the SG. We conjecture that most of the problems are due to poor communication between researchers, practitioners and regulatory bodies -- which, despite sharing a common goal, tend to neglect the viewpoints of the other `spheres'.

We analyze the utilization of publish-subscribe protocols in IoT and Fog Computing, and the challenges around security configuration, performance, and qualitative characteristics. Problems with security configuration lead to significant disruptions and high operational costs. Yet, these issues can be prevented by selecting the appropriate transmission technology for each configuration, considering the variations in sizing, installation, sensor profile, distribution, security, networking, and locality. This work presents a comparative qualitative and quantitative analysis across diverse configurations, focusing on a Smart Agriculture scenario, specifically the case of fish farming. As a result, we applied a data generation workbench to create datasets of relevant research data and compared the results in terms of performance, resource utilization, security, and resilience. We also provide a qualitative analysis of use case scenarios for the quantitative data produced. As a contribution, this analysis provides a blueprint for decision support to Fog Computing engineers analyzing the best protocol to apply in various configurations.
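
As a flavour of the kind of configuration under comparison, here is a minimal sketch of a TLS-secured MQTT publish from a sensor node. It assumes the paho-mqtt package (1.x API); the broker hostname, credentials, and topic are hypothetical placeholders, not values from this work.

```python
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x is installed

client = mqtt.Client(client_id="fish-farm-sensor-01")
client.tls_set()                              # use system CA certificates
client.username_pw_set("sensor", "sensor-password")

client.connect("broker.example.com", 8883)    # 8883: MQTT over TLS
client.loop_start()
# QoS 1: at-least-once delivery, a common choice for sensor telemetry.
client.publish("farm/tank1/oxygen", payload="7.4", qos=1)  # mg/L reading
client.loop_stop()
client.disconnect()
```

Swapping the port, the `tls_set` call, or the QoS level is precisely the kind of configuration change whose performance and security cost such an analysis quantifies.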

Extreme valuation and volatility of cryptocurrencies require investors to diversify often, which demands secure exchange protocols. A cross-chain swap protocol allows distrusting parties to securely exchange their assets. However, current models and protocols assume predefined user preferences for acceptable outcomes. This paper presents a generalized model of swaps that allows each party to specify its preferences on the subsets of its incoming and outgoing assets. It shows that existing swap protocols are not necessarily a strong Nash equilibrium in this model. It characterizes the class of swap graphs that have protocols that are safe, live, and a strong Nash equilibrium, and presents such a protocol for this class. Further, it shows that deciding whether a swap is in this class is NP-hard through a reduction from 3SAT, and moreover is $\Sigma_2^{\mathsf{P}}$-complete through a reduction from $\exists\forall\mathsf{DNF}$.
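
To illustrate the generalized preference model, the following is a minimal Python sketch in which each party declares a predicate over (received, released) asset subsets that it considers acceptable. The party names, assets, and acceptability rules are illustrative assumptions, not the paper's formalism.

```python
from itertools import chain, combinations

def powerset(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Proposed swap: Alice sends her BTC to Bob; Bob sends his ETH and SOL to Alice.
outgoing = {"Alice": {"btc"}, "Bob": {"eth", "sol"}}
incoming = {"Alice": {"eth", "sol"}, "Bob": {"btc"}}

# Preferences over subsets: Alice insists on receiving ETH whenever her BTC
# leaves; Bob releases his coins only together, and only if the BTC arrives.
prefs = {
    "Alice": lambda got, sent: "btc" not in sent or "eth" in got,
    "Bob":   lambda got, sent: sent in (set(), {"eth", "sol"})
                               and (not sent or "btc" in got),
}

def acceptable_outcomes(party):
    """Enumerate every (received, released) subset pair this party accepts."""
    return [(set(g), set(s))
            for g in powerset(incoming[party])
            for s in powerset(outgoing[party])
            if prefs[party](set(g), set(s))]

for p in ("Alice", "Bob"):
    print(p, acceptable_outcomes(p))
```

A swap protocol is safe for a party if every outcome it can be forced into is acceptable under its predicate; checking this over all adversarial strategies is the source of the hardness results.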

This paper presents a succinct review of attempts in the literature to use game theory to model decision-making scenarios relevant to defence applications. Game theory has proven to be a very effective tool for modelling the decision-making processes of intelligent agents, entities, and players. It has been used to model scenarios from diverse fields such as economics, evolutionary biology, and computer science. In defence applications, there is often a need to model and predict the actions of hostile actors, and of players who try to evade or outsmart each other. Modelling how the actions of competitive players shape each other's decision making is the forte of game theory. In past decades, several studies have applied different branches of game theory to model a range of defence-related scenarios. This paper provides a structured review of such attempts, and classifies the existing literature in terms of the kind of warfare modelled, the types of game used, and the players involved. The presented analysis provides a concise summary of the state of the art with regard to the use of game theory in defence applications, and highlights the benefits and limitations of game theory in the considered scenarios.
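
As a toy instance of the models such reviews survey, here is a minimal sketch of a two-player zero-sum "search vs. evade" game solved for its mixed-strategy equilibrium via the standard linear-programming formulation. The payoff numbers are illustrative assumptions, not drawn from any study.

```python
import numpy as np
from scipy.optimize import linprog

# Row player (searcher) chooses {patrol north, patrol south}; column player
# (evader) chooses {route north, route south}. Entries are the probability of
# detection, which the searcher maximizes and the evader minimizes.
A = np.array([[0.8, 0.2],
              [0.3, 0.6]])

# All entries are positive, so the classic LP applies directly: with u = x / v,
# minimize sum(u) subject to A^T u >= 1, u >= 0, then recover v = 1 / sum(u).
res = linprog(c=np.ones(2), A_ub=-A.T, b_ub=-np.ones(2), bounds=[(0, None)] * 2)
x = res.x / res.x.sum()            # searcher's equilibrium mixed strategy
value = 1.0 / res.x.sum()          # value of the game: detection probability
print("searcher mix:", x, "game value:", round(value, 3))
# searcher mix ~ [1/3, 2/3], value ~ 0.467: randomization is essential,
# since any pure patrol strategy is exploitable by the evader.
```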

Meta-learning has gained wide popularity as a training framework that is more data-efficient than traditional machine learning methods. However, its generalization ability in complex task distributions, such as multimodal tasks, has not been thoroughly studied. Recently, some studies on multimodality-based meta-learning have emerged. This survey provides a comprehensive overview of the multimodality-based meta-learning landscape in terms of the methodologies and applications. We first formalize the definition of meta-learning and multimodality, along with the research challenges in this growing field, such as how to enrich the input in few-shot or zero-shot scenarios and how to generalize the models to new tasks. We then propose a new taxonomy to systematically discuss typical meta-learning algorithms combined with multimodal tasks. We investigate the contributions of related papers and summarize them by our taxonomy. Finally, we propose potential research directions for this promising field.
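
For readers new to the framework being surveyed, the following is a minimal sketch of a meta-learning training loop: a Reptile-style first-order method on toy one-dimensional sine-regression tasks, not one of the surveyed multimodal algorithms. The feature map, task distribution, and learning rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a sine wave with random amplitude and phase (few-shot regression)."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def features(x):
    return np.stack([np.sin(x), np.cos(x), np.ones_like(x)], axis=1)

def inner_adapt(w, task, steps=5, lr=0.05, k=10):
    """Inner loop: a few gradient steps on k examples from a single task."""
    x = rng.uniform(-np.pi, np.pi, k)
    X, y = features(x), task(x)
    for _ in range(steps):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / k   # MSE gradient step
    return w

w = np.zeros(3)                     # meta-parameters (the shared initialization)
for _ in range(1000):               # outer loop over sampled tasks
    w_task = inner_adapt(w, sample_task())
    w += 0.1 * (w_task - w)         # Reptile meta-update toward adapted weights

print(w)  # an initialization from which a few steps fit any sampled sine task
```

Multimodal variants replace the feature map and task sampler with encoders over several modalities, but the inner/outer-loop structure is the same.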

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states across the participating devices. In theory, heterogeneity can exert a huge influence on the FL training process, e.g., by rendering a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two potential causes of performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate its impacts.
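
The participant-bias effect is easy to reproduce in simulation. Below is a minimal sketch, not the paper's platform: FedAvg on a toy linear-regression objective where one population of clients usually fails to report, so the surviving updates pull the global model toward the other population. Client counts, dropout rates, and data shifts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, n_local = 5, 50, 20
w_true = rng.normal(size=d)

# Two client populations whose local optima sit on opposite sides of w_true.
data = []
for i in range(n_clients):
    shift = 0.5 if i < n_clients // 2 else -0.5
    X = rng.normal(size=(n_local, d))
    y = X @ (w_true + shift) + rng.normal(scale=0.1, size=n_local)
    data.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few local gradient epochs on one client's data (full-batch for brevity)."""
    for _ in range(epochs):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg(biased_dropout, rounds=200):
    """FedAvg; if biased_dropout, the first population usually fails to report."""
    w = np.zeros(d)
    for _ in range(rounds):
        alive = [i for i in range(n_clients)
                 if not (biased_dropout and i < n_clients // 2
                         and rng.random() < 0.9)]
        w = np.mean([local_update(w, *data[i]) for i in alive], axis=0)
    return np.linalg.norm(w - w_true)

print("error, all devices report:  ", fedavg(False))
print("error, biased participation:", fedavg(True))   # noticeably larger
```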

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural thought, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area during the past few years. In this paper, we survey the recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, after which the other techniques are introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent, additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
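
As a concrete instance of the first scheme, the following is a minimal sketch of magnitude-based parameter pruning in numpy; the layer shape and the 90% sparsity target are illustrative assumptions, and real pipelines typically fine-tune after pruning to recover accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512))               # a dense layer's weight matrix

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    k = int(W.size * sparsity)
    threshold = np.partition(np.abs(W).ravel(), k)[k]  # k-th smallest magnitude
    mask = np.abs(W) >= threshold
    return W * mask, mask

W_pruned, mask = magnitude_prune(W, 0.9)
print("nonzero fraction:", mask.mean())       # ~0.1 of the weights survive

# How much does pruning perturb this layer's output on a random input?
x = rng.normal(size=512)
print("relative output drift:",
      np.linalg.norm(W @ x - W_pruned @ x) / np.linalg.norm(W @ x))
```

The resulting mask can be stored in sparse format to save memory; realizing speedups additionally requires structured sparsity or hardware support.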
