
Smart contracts are self-executing programs on a blockchain that enable immutable and transparent agreements without intermediaries. Despite the growing popularity of smart contracts on blockchain platforms such as Ethereum, smart contract developers cannot prevent competitors from copying their contracts because no technical means are available. Moreover, applying existing software watermarking techniques is challenging because of the unique properties of smart contracts, such as a code size constraint, non-free execution cost, and no support for dynamic allocation under a virtual machine environment. This paper introduces a novel software watermarking scheme, dubbed SmartMark, which aims to protect smart contracts from piracy. SmartMark builds the control flow graph of a target contract's runtime bytecode and locates a series of bytes, randomly selected from a collection of opcodes, to represent a watermark. We implement a full-fledged prototype for Ethereum and apply SmartMark to 27,824 unique smart contract bytecodes. Our empirical results demonstrate that SmartMark can effectively embed a watermark into smart contracts and verify its presence, meeting the requirements of credibility and imperceptibility while incurring only a slight performance degradation. Furthermore, our security analysis shows that SmartMark is resilient against foreseeable watermark-corruption attacks; e.g., a large number of dummy opcodes are needed to effectively disable a watermark, rendering illegitimate smart contract clones uneconomical.
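
The abstract does not spell out the embedding algorithm, so the following Python sketch only illustrates the general idea of a key-dependent opcode-byte watermark. The function names (`extract_watermark`, `verify_watermark`) and the offset-sampling scheme are our own hypothetical simplification; the real SmartMark operates over the contract's control flow graph rather than raw byte offsets.

```python
import hashlib
import random

def extract_watermark(runtime_bytecode: bytes, secret_key: bytes, length: int = 20) -> bytes:
    """Read `length` bytes at key-derived pseudo-random offsets; their
    concatenation serves as the watermark (hypothetical simplification)."""
    seed = int.from_bytes(hashlib.sha256(secret_key).digest(), "big")
    offsets = sorted(random.Random(seed).sample(range(len(runtime_bytecode)), length))
    return bytes(runtime_bytecode[i] for i in offsets)

def verify_watermark(runtime_bytecode: bytes, secret_key: bytes, expected: bytes) -> bool:
    """A contract carries the watermark iff the key re-derives the same bytes."""
    return extract_watermark(runtime_bytecode, secret_key, len(expected)) == expected
```

The key-derived selection mirrors the scheme's stated goals: only the key holder can re-derive the watermark positions (credibility), and the selected bytes are ordinary opcodes already in the contract (imperceptibility).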

Related Content

The widespread adoption of Internet of Things (IoT) devices in smart cities, intelligent healthcare systems, and various real-world applications has resulted in the generation of vast amounts of data, often analyzed using different Machine Learning (ML) models. Federated learning (FL) has been acknowledged as a privacy-preserving machine learning technology in which multiple parties cooperatively train ML models without exchanging raw data. However, the current FL architecture does not allow for an audit of the training process due to the various data-protection policies implemented by each FL participant, nor does it provide verifiability of the global model. This paper proposes smart contract-based policy control for securing the Federated Learning (FL) management system. First, we develop and deploy smart contract-based local training policy control on the FL participants' side. This policy control verifies the training process and ensures that the evaluation process follows the same rules for all FL participants. We then enforce a smart contract-based aggregation policy to manage the global model aggregation process. Upon completion, the aggregated model and policy are stored on blockchain-based storage. Subsequently, we distribute the aggregated global model and the smart contract to all FL participants. Our proposed method uses smart policy control to manage access and verify the integrity of machine learning models. We conducted multiple experiments with various machine learning architectures and datasets, such as MNIST and CIFAR-10, to evaluate our proposed framework.
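
As a rough illustration of what a smart contract-based aggregation policy might check, here is a minimal Python sketch, not the authors' implementation: it assumes each update is a dict of per-layer weight lists and that policy compliance reduces to matching a hash previously registered on chain (both assumptions are ours).

```python
import hashlib
from typing import Dict, List

def fed_avg(updates: List[Dict[str, list]]) -> Dict[str, list]:
    """Plain FedAvg: element-wise mean of the participants' weight lists."""
    agg = {}
    for name in updates[0]:
        layers = [u[name] for u in updates]
        agg[name] = [sum(vals) / len(vals) for vals in zip(*layers)]
    return agg

def aggregate_with_policy(updates: List[Dict[str, list]], registered_hashes: set) -> Dict[str, list]:
    """Admit only updates whose digest was registered on-chain beforehand,
    then aggregate the admitted set."""
    admitted = []
    for u in updates:
        digest = hashlib.sha256(repr(sorted(u.items())).encode()).hexdigest()
        if digest in registered_hashes:
            admitted.append(u)
    if not admitted:
        raise ValueError("no update satisfied the aggregation policy")
    return fed_avg(admitted)
```

In the paper's architecture the hash registry and the admission rule would live in the smart contract itself; this sketch only shows the shape of the check performed at aggregation time.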

The Smart Contract Weakness Classification Registry (SWC Registry) is a widely recognized list of smart contract weaknesses specific to the Ethereum platform. In recent years, significant research effort has been dedicated to building tools that detect SWC weaknesses. However, evaluating these tools has proven challenging due to the absence of a large, unbiased, real-world dataset. To address this issue, we recruited 22 participants and spent 44 person-months analyzing 1,322 open-source audit reports from 30 security teams. In total, we identified 10,016 weaknesses and developed two distinct datasets, i.e., DAppSCAN-Source and DAppSCAN-Bytecode. The DAppSCAN-Source dataset comprises 25,077 Solidity files, featuring 1,689 SWC vulnerabilities sourced from 1,139 real-world DApp projects; the Solidity files in this dataset may not be directly compilable. To make the dataset compilable, we developed a tool that automatically identifies dependency relationships within DApps and completes missing public libraries. Using this tool, we created the DAppSCAN-Bytecode dataset, which consists of 8,167 compiled smart contract bytecodes with 895 SWC weaknesses. Based on this second dataset, we conducted an empirical study to assess the performance of five state-of-the-art smart contract vulnerability detection tools. The evaluation revealed subpar performance for these tools in terms of both effectiveness and detection success rate, indicating that future development should prioritize real-world datasets over simplistic toy contracts.
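
The dependency-completion step can be pictured with a small Python sketch; this is our own simplification, not the authors' tool. It assumes relative-path Solidity imports and a pre-built index `known_libs` mapping library file names to source text.

```python
import re
from pathlib import Path

# Matches `import "path";` and `import {Symbol} from "path";`
# (other Solidity import forms are omitted for brevity).
IMPORT_RE = re.compile(r'import\s+(?:\{[^}]*\}\s+from\s+)?"([^"]+)";')

def find_missing_imports(project_dir: str, known_libs: dict) -> dict:
    """Map unresolved import paths to candidate sources from a library index.
    `known_libs` maps file names (e.g. 'SafeMath.sol') to their source text."""
    missing = {}
    for sol in Path(project_dir).rglob("*.sol"):
        for imp in IMPORT_RE.findall(sol.read_text(errors="ignore")):
            target = (sol.parent / imp).resolve()
            if not target.exists() and Path(imp).name in known_libs:
                missing[imp] = known_libs[Path(imp).name]
    return missing
```

Writing the recovered sources next to the importing files would then let a stock Solidity compiler process the project, which is the property the DAppSCAN-Bytecode dataset needs.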

Market research surveys are a powerful methodology for understanding consumer perspectives at scale, but they are limited in the depth of understanding and insight they provide. A virtual moderator can introduce elements of qualitative research into surveys, developing a rapport with survey participants and dynamically asking probing questions, ultimately eliciting more useful information for market researchers. In this work, we introduce ${\tt SmartProbe}$, an API that leverages the adaptive capabilities of large language models (LLMs) and incorporates domain knowledge from market research in order to generate effective probing questions in any market research survey. We outline the modular processing flow of $\tt SmartProbe$ and evaluate the quality and effectiveness of its generated probing questions. We believe our efforts will inspire industry practitioners to build real-world applications based on the latest advances in LLMs. Our demo is publicly available at //nexxt.in/smartprobe-demo
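
A minimal sketch of how one stage of such a processing flow might look, assuming a generic injected `llm_complete` callable rather than any specific LLM API; all names here are hypothetical, not the actual ${\tt SmartProbe}$ interface.

```python
def build_probe_prompt(question: str, answer: str) -> str:
    """Compose a market-research-flavored prompt for one follow-up probe."""
    return (
        "You are a qualitative market research moderator.\n"
        f"Survey question: {question}\n"
        f"Respondent answer: {answer}\n"
        "Ask one short, neutral probing question that elicits the reasoning "
        "behind the answer. Do not lead the respondent."
    )

def generate_probe(question: str, answer: str, llm_complete) -> str:
    # `llm_complete` is any text-completion callable, e.g. a thin wrapper
    # around a hosted LLM endpoint; it is injected to keep the sketch generic.
    return llm_complete(build_probe_prompt(question, answer)).strip()
```

The domain knowledge the abstract mentions would enter through the prompt template and any pre/post-processing modules around this call.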

Local search is a powerful heuristic in optimization and computer science, the complexity of which has been studied in the white box and black box models. In the black box model, we are given a graph $G = (V,E)$ and oracle access to a function $f : V \to \mathbb{R}$. The local search problem is to find a vertex $v$ that is a local minimum, i.e. with $f(v) \leq f(u)$ for all $(u,v) \in E$, using as few queries as possible. The query complexity is well understood on the grid and the hypercube, but much less is known beyond. We show that the query complexity of local search on $d$-regular expanders with constant degree is $\Omega\left(\frac{\sqrt{n}}{\log{n}}\right)$, where $n$ is the number of vertices. This matches, within a logarithmic factor, the upper bound of $O(\sqrt{n})$ for constant-degree graphs from Aldous (1983), implying that steepest descent with a warm start is an essentially optimal algorithm for expanders. The best lower bound known from prior work was $\Omega\left(\frac{\sqrt[8]{n}}{\log{n}}\right)$, shown by Santha and Szegedy (2004) for quantum and randomized algorithms. We obtain this result by considering a broader framework of graph features such as vertex congestion and separation number. We show that for each graph, the randomized query complexity of local search is $\Omega\left(\frac{n^{1.5}}{g}\right)$, where $g$ is the vertex congestion of the graph, and $\Omega\left(\sqrt[4]{\frac{s}{\Delta}}\right)$, where $s$ is the separation number and $\Delta$ is the maximum degree. For the separation number, the previous bound was $\Omega\left(\sqrt[8]{\frac{s}{\Delta}} /\log{n}\right)$, given by Santha and Szegedy for quantum and randomized algorithms. We also show a variant of the relational adversary method from Aaronson (2006), which is asymptotically at least as strong as the original version for all randomized algorithms and strictly stronger for some problems.
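
The upper-bound algorithm referenced above is simple enough to sketch: sample roughly $\sqrt{n}$ vertices, keep the best, then run steepest descent from it. The Python below assumes `graph` maps each vertex to its neighbor list; for constant-degree graphs, choosing `num_samples` on the order of $\sqrt{n}$ gives the $O(\sqrt{n})$ expected query bound of Aldous (1983).

```python
import random

def local_search_warm_start(graph: dict, f, num_samples: int):
    """Steepest descent with a warm start: query `num_samples` random
    vertices, start the descent from the best one found."""
    v = min(random.sample(list(graph), num_samples), key=f)
    while True:
        best_neighbor = min(graph[v], key=f, default=v)
        if f(best_neighbor) >= f(v):
            return v  # local minimum: no neighbor has a smaller value
        v = best_neighbor
```

Intuitively, the warm start lands near the global minimum of the sampled set, so the descent path that follows is short in expectation; the paper's contribution is showing this simple strategy is essentially optimal on expanders.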

In recent decades, due to emerging requirements for computation acceleration, cloud FPGAs have become popular in public clouds. Major cloud service providers, e.g., AWS and Microsoft Azure, have provided FPGA computing resources in their infrastructure and have enabled users to design and deploy their own accelerators on these FPGAs. Multi-tenancy FPGAs, where multiple users can share the same FPGA fabric with certain types of isolation to improve resource efficiency, have already been proven feasible. However, this also raises security concerns, and various types of side-channel attacks targeting multi-tenancy FPGAs have been proposed and validated. Awareness of these vulnerabilities has motivated cloud providers to take action to enhance the security of their cloud environments. In FPGA security research, attacks are typically performed under the assumption that attackers have successfully co-located with victims and are aware of the victims' presence on the same FPGA board. However, how attackers reach this point, i.e., how they secretly obtain information about accelerators on the same fabric, is routinely ignored, despite being non-trivial and important for attackers. In this paper, we present a novel fingerprinting attack that identifies the types of co-located FPGA accelerators. We utilize a seemingly non-malicious benchmark accelerator to sniff the FPGA-host communication link and collect its performance traces. By analyzing these traces, we achieve high classification accuracy when fingerprinting co-located accelerators, demonstrating that attackers can use our method to fingerprint cloud FPGA accelerators with a high success rate. To the best of our knowledge, this is the first paper targeting multi-tenant FPGA accelerator fingerprinting via the communication side channel.
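
The classification stage can be pictured with a deliberately simple nearest-centroid sketch over trace summary statistics; the paper's actual classifier and features are surely richer, and everything below (names, features, distance metric) is our own assumption.

```python
import statistics

def trace_features(trace: list) -> tuple:
    """Summary statistics of one communication-link performance trace
    (a list of bandwidth or latency samples)."""
    return (statistics.mean(trace), statistics.pstdev(trace), max(trace) - min(trace))

def fit_centroids(labeled_traces: dict) -> dict:
    """labeled_traces: accelerator type -> list of traces. Returns the
    per-type centroid in feature space."""
    centroids = {}
    for label, traces in labeled_traces.items():
        feats = [trace_features(t) for t in traces]
        centroids[label] = tuple(statistics.mean(dim) for dim in zip(*feats))
    return centroids

def classify(trace: list, centroids: dict):
    """Assign the trace to the accelerator type with the nearest centroid."""
    feat = trace_features(trace)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(feat, c))
    return min(centroids, key=lambda label: dist(centroids[label]))
```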

Millions of smart contracts have been deployed onto Ethereum to provide various services whose functions can be invoked. To invoke a function, the caller needs to know the function signature of the callee, which includes its function id and parameter types. Such signatures are critical to many applications focusing on smart contracts, e.g., reverse engineering, fuzzing, attack detection, and profiling. Unfortunately, it is challenging to recover function signatures from contract bytecode, since neither debug information nor type information is present in the bytecode. To address this issue, prior approaches rely on source code, or on collections of known signatures from incomplete databases or incomplete heuristic rules, which are far from adequate and cannot cope with the rapid growth of new contracts. In this paper, we propose a novel solution that leverages how functions are handled by the Ethereum virtual machine (EVM) to automatically recover function signatures. In particular, we exploit how smart contracts determine the functions to be invoked in order to locate and extract function ids, and we propose a new approach named type-aware symbolic execution (TASE) that utilizes the semantics of EVM operations on parameters to identify the number and types of parameters. Moreover, we develop SigRec, a new tool for recovering function signatures from contract bytecode without the need for source code or function signature databases. Extensive experimental results show that SigRec outperforms all existing tools, achieving an unprecedented 98.7 percent accuracy within 0.074 seconds. We further demonstrate that the recovered function signatures are useful in attack detection, fuzzing, and reverse engineering of EVM bytecode.
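
Locating function ids, though not the type recovery, can be illustrated without symbolic execution: Solidity dispatchers compare the first four bytes of calldata against hard-coded PUSH4 constants, so scanning the bytecode for the PUSH4 ... EQ opcode pattern (0x63 followed by 0x14 after the 4-byte immediate) yields candidate ids. The Python sketch below implements only this heuristic; it can misread data as code, which is one reason SigRec relies on TASE instead.

```python
def extract_function_ids(runtime_bytecode: bytes) -> list:
    """Scan for the Solidity dispatcher pattern PUSH4 <id> EQ.
    Returns candidate 4-byte function ids as hex strings."""
    ids = set()
    i = 0
    while i < len(runtime_bytecode) - 5:
        # 0x63 = PUSH4 (followed by a 4-byte immediate), 0x14 = EQ
        if runtime_bytecode[i] == 0x63 and runtime_bytecode[i + 5] == 0x14:
            ids.add(runtime_bytecode[i + 1 : i + 5].hex())
        i += 1
    return sorted(ids)

# Example usage with hex-encoded runtime bytecode:
# extract_function_ids(bytes.fromhex(runtime_hex))
```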

With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next-generation beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models while overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of system security transparent and comprehensible to stakeholders, thereby making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, along with the use cases that general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions building on currently conducted projects involving XAI.

Adversarial attacks are a technique for deceiving Machine Learning (ML) models and provide a way to evaluate adversarial robustness. In practice, attack algorithms are selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of \textbf{32 base attackers}. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (\textbf{6 $\times$ faster than AutoAttack}) and achieves the new state of the art on $l_{\infty}$, $l_{2}$ and unrestricted adversarial attacks.
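
CAA's sequence representation is easy to sketch: each attacker starts from the previous one's output, and a policy is scored on two objectives for the NSGA-II search. The Python below is a schematic of that idea; all names and the complexity measure (sequence length) are our own illustrative choices, not the paper's exact formulation.

```python
def composite_attack(x, label, attack_sequence):
    """Chain attackers: each receives the previous attacker's output as its
    starting point, mirroring the attack-sequence policy representation."""
    adv = x
    for attack in attack_sequence:
        adv = attack(adv, label)  # each attack: (input, label) -> perturbed input
    return adv

def policy_fitness(policy, eval_set, is_misclassified):
    """Two objectives for an NSGA-II style search: maximize the attack
    success rate, minimize policy complexity (here, sequence length)."""
    successes = sum(
        is_misclassified(composite_attack(x, y, policy), y) for x, y in eval_set
    )
    return successes / len(eval_set), -len(policy)
```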

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is, counterintuitively, to train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
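
To make the compression side of this trade-off concrete, here is a minimal sketch of unstructured magnitude pruning, one of the techniques named above, over a flat weight list; real experiments would prune per-tensor Transformer weights, so treat this purely as an illustration.

```python
def magnitude_prune(weights: list, sparsity: float) -> list:
    """Zero out roughly the smallest-magnitude fraction `sparsity` of the
    weights (ties at the threshold may prune slightly more)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# e.g. magnitude_prune([0.3, -0.01, 0.7, 0.05], 0.5) -> [0.3, 0.0, 0.7, 0.0]
```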

There is a large and growing interest in generative adversarial networks (GANs), which offer powerful features for generative modeling, density estimation, and energy function learning. GANs are difficult to train and evaluate, but they are capable of creating amazingly realistic, though synthetic, image data. Ideas stemming from GANs, such as adversarial losses, are creating research opportunities for other challenges such as domain adaptation. In this paper, we look at the field of GANs with emphasis on these areas of emerging research. To provide background for adversarial techniques, we survey the field of GANs, looking at the original formulation, training variants, evaluation methods, and extensions. Then we survey recent work on transfer learning, focusing on comparing different adversarial domain adaptation methods. Finally, we look ahead to identify open research directions for GANs and domain adaptation, including promising applications such as sensor-based human behavior modeling.
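
For reference, the original formulation the survey revisits is the two-player minimax game of Goodfellow et al. (2014), in which the discriminator $D$ and generator $G$ optimize:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Adversarial domain adaptation methods reuse this structure, with the discriminator distinguishing source-domain from target-domain features rather than real from generated samples.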
