
Systems and blockchains often have security vulnerabilities and can be attacked by adversaries, with potentially significant negative consequences. Therefore, organizations and blockchain infrastructure providers increasingly rely on bug bounty programs, where external individuals probe the system and report any vulnerabilities (bugs) in exchange for monetary rewards (bounties). We develop a contest model for bug bounty programs with an arbitrary number of agents who decide whether or not to undertake a costly search for bugs. Search costs are private information. Besides characterizing the ensuing equilibria, we show that even inviting an unlimited crowd does not guarantee that bugs are found. Adding paid agents can increase the efficiency of the bug bounty scheme, although the crowd that is attracted becomes smaller. Finally, adding (known) bugs increases the likelihood that unknown bugs are found, but to limit reward payments it may be optimal to add them only with some probability.
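
To see why even an unlimited crowd need not find the bug, consider the following illustrative cutoff formalization; the notation and the equal-split sharing rule are our own simplifying assumptions, not the paper's exact model. Each of $n$ agents draws a private cost $c_i \sim F$, finds the bug with probability $q$ if searching, and the bounty $R$ is split among all finders. An agent searches iff its cost is below a threshold $c^*$ that equates the marginal search cost with the expected bounty share, where $K$ counts rival agents who also find the bug:

```latex
% Sketch of a symmetric cutoff equilibrium (our own notation and
% simplifying assumptions, not the paper's exact model).
\[
  c^{*} \;=\; R\, q\, \mathbb{E}\!\left[\frac{1}{1+K}\right],
  \qquad K \sim \operatorname{Binomial}\!\bigl(n-1,\; q\,F(c^{*})\bigr),
\]
\[
  \Pr[\text{bug found}] \;=\; 1-\bigl(1-q\,F(c^{*})\bigr)^{n}.
\]
```

As $n$ grows, a larger crowd depresses the expected share and hence $c^*$, so the participation rate $F(c^*)$ shrinks; if $n\,F(c^*(n))$ stays bounded, the detection probability remains bounded away from 1.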

Related content

Internet

Recently, the emergence of non-fungible tokens (NFTs) has attracted great attention. NFTs can represent users' ownership on the blockchain and, owing to their popularity, have generated tremendous market sales. Unfortunately, the high value of NFTs also makes them a target for attackers. Defects in NFT smart contracts can be exploited by attackers to harm the security and reliability of the NFT ecosystem. Despite the significance of this issue, there is a lack of systematic work analyzing NFT smart contracts, which raises concerns about the security of users' NFTs. To address this gap, in this paper we introduce 5 defects in NFT smart contracts. Each defect is defined and illustrated with a code example highlighting its features and consequences, paired with possible solutions. Furthermore, we propose a tool named NFTGuard to detect our defined defects based on a symbolic execution framework. Specifically, NFTGuard extracts information about state variables from the contract abstract syntax tree (AST), which is critical for identifying variable-loading and storing operations during symbolic execution. It also recovers source-code-level features from the bytecode to effectively locate defects and report them based on predefined detection patterns. We run NFTGuard on 16,527 real-world smart contracts and evaluate it against manually labeled results. We find that 1,331 contracts contain at least one of the 5 defects, and the overall precision achieved by our tool is 92.6%.
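
As a loose illustration of the kind of pattern-based reporting such a tool performs, here is a minimal sketch that matches a predefined pattern over a simplified execution trace. The trace format, opcode names, and the "unchecked external call before state write" pattern are illustrative placeholders, not NFTGuard's actual internals:

```python
# Toy pattern-based defect reporter: flag state-variable writes that occur
# after an external CALL whose outcome was never checked by a branch.
# The trace format and the pattern itself are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Step:
    pc: int           # program counter of the operation
    op: str           # simplified opcode name
    var: str | None   # state variable touched, if any

def find_unchecked_call_before_write(trace: list[Step]) -> list[str]:
    reports, pending_call = [], None
    for step in trace:
        if step.op == "CALL":
            pending_call = step
        elif step.op == "JUMPI":          # a branch counts as checking the call
            pending_call = None
        elif step.op == "SSTORE" and pending_call is not None:
            reports.append(f"defect@pc={step.pc}: write to '{step.var}' "
                           f"after unchecked CALL at pc={pending_call.pc}")
    return reports

# Usage on a hand-written toy trace.
trace = [Step(10, "SLOAD", "balances"),
         Step(22, "CALL", None),
         Step(35, "SSTORE", "balances")]
print(find_unchecked_call_before_write(trace))
```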

Software design techniques are undoubtedly crucial in the process of designing good software. Over the years, a large number of design techniques have been proposed by both researchers and practitioners. Unfortunately, despite their uniqueness, it is not uncommon to find software products that make subpar design decisions, leading to design degradation challenges. One potential reason for this behavior is that developers do not have a clear vision of how much a code unit could grow; without this vision, a code unit can grow endlessly, even when developers are equipped with an arsenal of design practices. Unlike other design techniques, Cognitive Driven Development (CDD for short) focuses on 1) defining and 2) limiting the number of coding elements that developers can use in a given code unit. In this paper, we report on the experiences of a software development team using CDD to build, from scratch, a learning management tool at Zup Innovation, a Brazilian tech company. By curating commit traces left in the repositories, combined with the developers' perceptions, we organized a set of findings and lessons that could be useful for those interested in adopting CDD. For instance, we noticed that by using CDD, despite the evolution of the product, developers were able to keep code units small. Furthermore, although limiting complexity is at the heart of CDD, we also discovered that developers tend to relax this notion of a limit so that they can cope with the varying complexities of the software. Still, we noticed that CDD could also influence testing practices; limiting the size of code units makes testing easier to perform.
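
A minimal sketch of the CDD idea of counting and capping the coding elements in a code unit; the element types counted and the budget of 7 are our illustrative choices, not Zup Innovation's actual rule:

```python
# Toy CDD-style "cognitive load" counter for a Python code unit.
# Counting branches, loops, try blocks, and function definitions as load
# points, and the per-unit budget of 7, are illustrative assumptions.
import ast

COUNTED = (ast.If, ast.For, ast.While, ast.Try, ast.FunctionDef)
LIMIT = 7  # hypothetical per-unit budget

def cognitive_points(source: str) -> int:
    return sum(isinstance(node, COUNTED) for node in ast.walk(ast.parse(source)))

code = """
def handler(x):
    if x > 0:
        for i in range(x):
            print(i)
"""
points = cognitive_points(code)
print(f"{points} points; {'OK' if points <= LIMIT else 'over CDD budget'}")
```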

Competing strategies in an evolutionary game model, or species in a biosystem, can easily form a larger unit that protects them from the invasion of an external actor. Such a defensive alliance may have two, three, four, or even more members. But how effective is such a formation against an alternative group composed of other competitors? To address this question we study a minimal model in which a two-member and a four-member alliance fight in a symmetric and balanced way. By presenting representative phase diagrams, we systematically explore the whole parameter range that characterizes the inner dynamics of the alliances and the intensity of their interactions. The group formed by a pair, who can exchange their neighboring positions, prevails in the majority of the parameter region. The rival quartet can only win if its inner cyclic invasion rate is significant while the mixing rate of the pair is extremely low. At specific parameter values, when neither alliance is strong enough, new four-member solutions emerge in which a rock-paper-scissors-like trio is extended by one member of the pair. These new solutions coexist, hence all six competitors can survive. The evolutionary process is accompanied by serious finite-size effects, which can be mitigated by appropriately prepared initial states.
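
A toy Monte Carlo sketch of this kind of pair-versus-quartet competition on a periodic lattice; the rates and the coin-flip inter-group invasion rule are simplified stand-ins for the paper's balanced interaction structure, not its actual dynamics:

```python
# Toy lattice simulation: pair {0,1} mixes by swapping neighboring sites;
# quartet {2,3,4,5} invades cyclically 2->3->4->5->2; between groups, a
# coin-flip invasion stands in for the paper's balanced interactions.
import numpy as np

rng = np.random.default_rng(0)
L = 64                     # lattice linear size
MU = 0.05                  # site-exchange (mixing) rate inside the pair
ALPHA = 1.0                # cyclic invasion rate inside the quartet
lattice = rng.integers(0, 6, size=(L, L))
dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step():
    x, y = rng.integers(0, L, size=2)
    dx, dy = dirs[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    a, b = lattice[x, y], lattice[nx, ny]
    if a == b:
        return
    if a < 2 and b < 2:                      # pair members mix (swap sites)
        if rng.random() < MU:
            lattice[x, y], lattice[nx, ny] = b, a
    elif a >= 2 and b >= 2:                  # quartet: cyclic invasion
        if b == 2 + (a - 1) % 4 and rng.random() < ALPHA:
            lattice[nx, ny] = a
    else:                                    # between groups: toy coin flip
        if rng.random() < 0.5:
            lattice[nx, ny] = a

for _ in range(100 * L * L):                 # 100 Monte Carlo sweeps
    step()
print("species densities:", np.bincount(lattice.ravel(), minlength=6) / L**2)
```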

Modern processors dynamically control their operating frequency to optimize resource utilization, maximize energy savings, and conform to system-defined constraints. If, during the execution of a software workload, the running average of any electrical or thermal parameter exceeds its corresponding predefined threshold, the power management architecture reactively adjusts the CPU frequency to ensure safe operating conditions. In this paper, we demonstrate how such power-management-based frequency throttling forms a source of timing side-channel information leakage, which an attacker can exploit to infer secret data even from a constant-cycle victim workload. The proposed frequency throttling side-channel attack can be launched by both kernel-space and user-space attackers, thus compromising the security guarantees provided by isolation boundaries. We validate our attack methodology across different systems and threat models by performing experiments on a constant-cycle implementation of the AES algorithm based on AES-NI instructions. Our experimental evaluations demonstrate that the attacker can successfully recover all bytes of an AES key by measuring encryption execution times. Finally, we discuss different options to mitigate the threat posed by frequency throttling side-channel attacks, along with their advantages and disadvantages.
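
A minimal sketch of the measurement side of such an attack: time large batches of encryptions under a fixed key and look at the distribution of durations, since throttling makes the average duration of even a constant-cycle workload data- and key-dependent. The `cryptography` package's AES call merely stands in for an AES-NI victim, and the paper's statistical key-recovery step is not reproduced here:

```python
# Timing-measurement harness: repeatedly encrypt under a fixed key and
# record wall-clock durations of each batch.
import os, time, statistics
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = os.urandom(16) * 4096          # 64 KiB per batch

def measure(rounds: int = 200) -> float:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    t0 = time.perf_counter()
    for _ in range(rounds):
        enc.update(plaintext)
    return time.perf_counter() - t0

samples = [measure() for _ in range(50)]
print(f"mean={statistics.mean(samples):.6f}s  stdev={statistics.stdev(samples):.6f}s")
```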

Benchmarking and co-design are essential for driving optimizations and innovation around ML models, ML software, and next-generation hardware. Full-workload benchmarks, e.g., MLPerf, play an essential role in enabling fair comparison across different software and hardware stacks, especially once systems are fully designed and deployed. However, the pace of AI innovation demands a more agile methodology for creating benchmarks and for consuming them in simulators and emulators during future system co-design. We propose Chakra, an open graph schema for standardizing workload specification that captures key operations and dependencies, also known as an Execution Trace (ET). In addition, we propose a complementary set of tools and capabilities to enable the collection, generation, and adoption of Chakra ETs by a wide range of simulators, emulators, and benchmarks. For instance, we use generative AI models to learn latent statistical properties across thousands of Chakra ETs and use these models to synthesize new Chakra ETs. Such synthetic ETs can obfuscate key proprietary information and also target future what-if scenarios. As an example, we demonstrate an end-to-end proof of concept that converts PyTorch ETs to Chakra ETs and uses them to drive an open-source training-system simulator (ASTRA-sim). Our end goal is to build a vibrant industry-wide ecosystem of agile benchmarks and tools to drive future AI system co-design.
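
A rough sketch of what an execution-trace graph looks like in practice: operations as nodes, dependencies as edges, serialized for consumption by a simulator. The field names below are illustrative placeholders and do not reproduce the actual Chakra schema:

```python
# Toy execution-trace (ET) graph serialized to JSON. Field names are
# illustrative placeholders, not the actual Chakra schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ETNode:
    node_id: int
    name: str                  # e.g. "matmul", "all_reduce"
    op_type: str               # "compute" or "communication"
    duration_us: float
    deps: list[int] = field(default_factory=list)  # node ids this one waits on

trace = [
    ETNode(0, "embedding_lookup", "compute", 120.0),
    ETNode(1, "matmul_fwd", "compute", 850.0, deps=[0]),
    ETNode(2, "all_reduce_grads", "communication", 2100.0, deps=[1]),
]
print(json.dumps([asdict(n) for n in trace], indent=2))
```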

Large language models (LLMs) like ChatGPT have exhibited unprecedented machine intelligence and also show great promise in assisting hardware engineers to realize higher-efficiency logic designs via natural language interaction. To assess the potential of LLM-assisted hardware design, this work demonstrates an automated design environment that uses LLMs to generate hardware logic designs from natural language specifications. Toward a more accessible and efficient chip development flow, we present a scalable four-stage zero-code logic design framework based on LLMs, without retraining or finetuning. First, the demo, ChipGPT, generates prompts for the LLM, which then produces initial Verilog programs. Second, an output manager corrects and optimizes these programs before collecting them into the final design space. Finally, ChipGPT searches through this space to select the optimal design under the target metrics. The evaluation sheds light on whether LLMs can generate correct and complete hardware logic designs from natural language specifications. It shows that ChipGPT improves programmability and controllability, and explores a broader design optimization space than prior work and native LLMs alone.
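
The four-stage flow can be pictured as a thin orchestration loop around an LLM. In this skeleton, `llm_complete`, `lint_and_fix`, and `estimate_area` are hypothetical placeholders for an LLM API, a correction pass, and a target metric; they are not ChipGPT's actual components:

```python
# Skeleton of a four-stage zero-code flow: 1) build a prompt from the spec,
# 2) generate candidate Verilog, 3) correct and collect candidates into a
# design space, 4) pick the best candidate under a target metric.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM API here")

def lint_and_fix(verilog: str) -> str:
    return verilog  # placeholder: syntax repair / style normalization

def estimate_area(verilog: str) -> float:
    return float(len(verilog))  # placeholder proxy for the target metric

def design(spec: str, n_candidates: int = 4) -> str:
    prompt = f"Write synthesizable Verilog for the following spec:\n{spec}"
    candidates = [lint_and_fix(llm_complete(prompt)) for _ in range(n_candidates)]
    return min(candidates, key=estimate_area)   # stage 4: search the space
```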

Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusion, where all the advertisers involved in an auction deliberately bid an extremely low price; (2) they cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling the multiple objectives of advertisers, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform-revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform show that both the single advertiser's objective and global profit are significantly improved compared to state-of-the-art methods.
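
To make the interplay of a global objective, a revenue constraint, and evolutionary optimization concrete, here is a deliberately simplified toy: a linear bid rule in a second-price auction, trained by a (1+1)-style evolutionary strategy, with a penalty standing in for the platform-revenue (anti-collusion) constraint. All of this is our illustrative simplification, not MACG's actual formulation:

```python
# Toy cooperative bidding: maximize winners' total surplus subject to a
# minimum-platform-revenue penalty; optimize bid multipliers by a simple ES.
import numpy as np

rng = np.random.default_rng(1)
N_ADV, N_AUCTIONS, MIN_REVENUE = 4, 1000, 150.0
values = rng.uniform(0, 1, size=(N_AUCTIONS, N_ADV))  # fixed auction sample

def objective(theta):
    """theta[i] scales advertiser i's bid relative to its private value."""
    bids = values * theta
    winner = bids.argmax(axis=1)
    price = np.sort(bids, axis=1)[:, -2]              # second-price payment
    idx = np.arange(N_AUCTIONS)
    surplus = (values[idx, winner] - price).sum()     # global profit of winners
    revenue = price.sum()
    return surplus - 10.0 * max(0.0, MIN_REVENUE - revenue)  # anti-collusion term

theta = np.ones(N_ADV)
for _ in range(300):                                  # (1+1)-ES mutation loop
    cand = np.clip(theta + rng.normal(0, 0.05, N_ADV), 0.1, 2.0)
    if objective(cand) > objective(theta):
        theta = cand
print("learned bid multipliers:", theta.round(2))
```

Without the penalty term, the multipliers collapse toward the lower bound (all advertisers shading bids together), which is exactly the collusion outcome the revenue constraint is meant to rule out.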

Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm that has drawn tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the varying hardware specifications and dynamic states of the participating devices. Heterogeneity can exert a huge influence on the FL training process, e.g., making a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol while taking heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments comparing the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, 2.32x longer training time, and undermined fairness. Furthermore, we analyze potential impact factors and find that device failure and participant bias are two factors behind the degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should account for heterogeneity during evaluation. On the other hand, they urge system providers to design specific mechanisms to mitigate the impacts of heterogeneity.
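
A toy heterogeneity-aware FedAvg round makes the failure modes concrete: selected devices may be offline, or too slow to upload before the deadline, so fewer updates are aggregated. The availability and speed distributions below are illustrative assumptions, not the paper's measured 136k-device data:

```python
# One FedAvg round with device dropouts and a straggler deadline.
import numpy as np

rng = np.random.default_rng(7)
N_CLIENTS, DIM, DEADLINE_S = 100, 10, 30.0
global_model = np.zeros(DIM)

def local_update(model):
    return model + rng.normal(0, 0.1, DIM)     # placeholder local training

def fl_round(model):
    selected = rng.choice(N_CLIENTS, size=20, replace=False)
    updates = []
    for _client in selected:
        if rng.random() < 0.15:                # device offline (heterogeneity)
            continue
        train_time = rng.lognormal(mean=2.5, sigma=0.8)
        if train_time > DEADLINE_S:            # straggler misses the deadline
            continue
        updates.append(local_update(model))
    if not updates:
        return model, 0
    return np.mean(updates, axis=0), len(updates)

global_model, n = fl_round(global_model)
print(f"aggregated {n}/20 selected clients this round")
```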

Adversarial attacks are a technique for deceiving machine learning (ML) models and provide a way to evaluate adversarial robustness. In practice, attack algorithms are selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6x faster than AutoAttack), and achieves a new state of the art on $l_{\infty}$, $l_{2}$, and unrestricted adversarial attacks.
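
The core "attacking sequence" idea can be shown in a few lines: each attacker starts from the previous attacker's output. The 1-D victim and the two toy attackers below are illustrative placeholders; the actual system searches over 32 real attackers with NSGA-II:

```python
# Toy composite attack: chain attackers so each consumes its predecessor's
# output as the starting point, then score the final adversarial input.
import numpy as np

def model_loss(x):                 # stand-in victim: loss peaks at x = 3
    return -(x - 3.0) ** 2

def fgsm_like(x, eps):             # one signed-gradient step (toy attacker)
    grad = -2.0 * (x - 3.0)
    return x + eps * np.sign(grad)

def random_search(x, eps):         # best of a few random perturbations
    cands = x + np.random.uniform(-eps, eps, size=8)
    return cands[np.argmax(model_loss(cands))]

def run_policy(policy, x0):
    """policy: list of (attack_fn, eps); output of one seeds the next."""
    x = x0
    for attack, eps in policy:
        x = attack(x, eps)
    return model_loss(x)

policy = [(random_search, 0.5), (fgsm_like, 0.25), (fgsm_like, 0.1)]
print("final loss reached:", run_policy(policy, x0=0.0))
```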

We study how to generate captions that are not only accurate in describing an image but also discriminative across different images. The problem is both fundamental and interesting, as most machine-generated captions, despite phenomenal research progress in the past several years, are expressed in a monotonous and featureless format. While such captions are normally accurate, they often lack important characteristics of human language: distinctiveness for each caption and diversity across different images. To address this problem, we propose a novel conditional generative adversarial network for generating diverse captions across images. Instead of estimating the quality of a caption solely on one image, the proposed comparative adversarial learning framework better assesses the quality of captions by comparing a set of captions within the image-caption joint space. By contrasting with human-written captions and image-mismatched captions, the caption generator effectively exploits the inherent characteristics of human language and generates more discriminative captions. We show that our proposed network is capable of producing accurate and diverse captions across images.
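
The comparative idea, in miniature: rather than judging one caption in isolation, let several captions compete for the same image in a shared embedding space. The hash-based "encoder" below is a placeholder for the paper's learned discriminator, used only to make the sketch self-contained:

```python
# Toy comparative scoring: a generated caption, a human caption, and a
# mismatched caption compete for one image via softmax over similarities.
import numpy as np

def embed(text_or_image) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text_or_image)) % 2**32)
    v = rng.normal(size=16)                    # placeholder encoder
    return v / np.linalg.norm(v)

def comparative_scores(image, captions):
    img = embed(image)
    sims = np.array([embed(c) @ img for c in captions])
    exp = np.exp(sims - sims.max())            # softmax over the caption set
    return exp / exp.sum()

caps = ["a dog catching a frisbee",            # generated caption
        "a brown dog leaps for a red frisbee", # human-written caption
        "a bowl of fruit on a table"]          # image-mismatched caption
print(comparative_scores("image_0042.jpg", caps).round(3))
```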
