
A Wasm runtime is a fundamental component of the Wasm ecosystem, as it directly determines whether Wasm applications execute as expected. Bugs in Wasm runtimes are frequently reported, and the research community has made several attempts to design automated testing frameworks for detecting them. However, existing testing frameworks are limited by the quality of their test cases: they struggle to generate Wasm binaries that are both semantically rich and syntactically correct, so complicated bugs cannot be triggered. In this work, we present WRTester, a novel differential testing framework that generates complicated Wasm test cases by disassembling and reassembling real-world Wasm binaries, which can trigger hidden inconsistencies among Wasm runtimes. To further pinpoint the root causes of unexpected behaviors, we design a runtime-agnostic root cause localization method to accurately locate bugs. Extensive evaluation suggests that WRTester outperforms state-of-the-art techniques in terms of both efficiency and effectiveness. We have uncovered 33 unique bugs in popular Wasm runtimes, among which 25 have been confirmed.
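
To make the differential-testing loop concrete, below is a minimal sketch in Python: one Wasm test case is executed on several runtimes, and any disagreement in observable behavior is flagged. The runtime list, CLI invocations, and output normalization are illustrative assumptions, not WRTester's actual harness.

```python
# A minimal differential-testing sketch: run one Wasm binary on several
# runtimes and flag inconsistencies. Runtime CLIs are assumptions.
import subprocess

RUNTIMES = {
    "wasmtime": ["wasmtime", "run"],
    "wasmer":   ["wasmer", "run"],
    "wamr":     ["iwasm"],
}

def execute(cmd, wasm_path, timeout=10):
    """Run one runtime on a Wasm binary and normalize the observable result."""
    try:
        proc = subprocess.run(cmd + [wasm_path], capture_output=True,
                              text=True, timeout=timeout)
        return (proc.returncode, proc.stdout)
    except subprocess.TimeoutExpired:
        return ("timeout", "")

def differential_test(wasm_path):
    """Return all results if the runtimes disagree (a candidate bug), else None."""
    results = {name: execute(cmd, wasm_path) for name, cmd in RUNTIMES.items()}
    if len(set(results.values())) > 1:
        return results
    return None
```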

Related Content

Existing research on malware detection focuses almost exclusively on the detection rate. In some cases, however, it is also important to interpret the results of the algorithm or to extract more information, such as where an analyst should look within the file. To this end, we propose a new model for analyzing Portable Executable (PE) files. Our method splits each file into its sections and transforms each section into an image, so that convolutional neural networks can be trained to treat each identified section specifically. We then combine the scores returned by the CNNs into a final detection score, using models that let us assess the importance of each section in that score.
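
As a rough illustration of the per-section pipeline, the sketch below splits a PE file into sections with the pefile library and renders each section's bytes as a grayscale image array suitable for a CNN; the fixed image width and zero-padding are assumptions for illustration, not the paper's exact preprocessing.

```python
# Sketch: one byte = one grayscale pixel, one image per PE section.
import numpy as np
import pefile

def section_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Render raw section bytes as a 2-D grayscale array."""
    buf = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(buf)) % width              # zero-pad so rows are complete
    buf = np.concatenate([buf, np.zeros(pad, dtype=np.uint8)])
    return buf.reshape(-1, width)

def pe_section_images(path: str, width: int = 256):
    """Yield (section_name, image) pairs for each section of a PE file."""
    pe = pefile.PE(path)
    for section in pe.sections:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        yield name, section_to_image(section.get_data(), width)

# A CNN per section type can then score each image; a final model
# fuses the per-section scores into one detection score.
```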

Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize this potential, it is essential to develop a principled way of designing and studying such structured interactions. For this purpose, we introduce the conceptual framework of Flows. Flows are self-contained building blocks of computation with isolated state, communicating through a standardized message-based interface. This modular design simplifies the creation of Flows by allowing them to be recursively composed into arbitrarily nested interactions, and it is inherently concurrency-friendly. Crucially, any interaction can be implemented using this framework, including prior work on AI-AI and human-AI interactions, prompt-engineering schemes, and tool augmentation. We demonstrate the potential of Flows on competitive coding, a challenging task on which even GPT-4 struggles. Our results suggest that structured reasoning and collaboration substantially improve generalization, with AI-only Flows adding +21 and human-AI Flows adding +54 absolute points in solve rate. To support rapid and rigorous research, we introduce the aiFlows library embodying Flows, available at https://github.com/epfl-dlab/aiflows. Data and Flows for reproducing our experiments are available at https://github.com/epfl-dlab/cc_flows.
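
The core abstraction lends itself to a compact sketch. The Python below mirrors the concept described above, a Flow with isolated state and a message-based interface that composes recursively; it is illustrative only and does not reproduce the aiFlows API.

```python
# Conceptual sketch of Flows: isolated state, message-based interface,
# recursive composition. Names are ours, not the aiFlows library's.
from dataclasses import dataclass

@dataclass
class Message:
    data: dict

class Flow:
    """A self-contained unit of computation with isolated state."""
    def __init__(self):
        self.state = {}                    # no state is shared across Flows

    def run(self, message: Message) -> Message:
        raise NotImplementedError

class SequentialFlow(Flow):
    """Composite Flow: recursion lets Flows nest to arbitrary depth."""
    def __init__(self, children: list[Flow]):
        super().__init__()
        self.children = children

    def run(self, message: Message) -> Message:
        for child in self.children:        # pipe the message through children
            message = child.run(message)
        return message
```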

Computer-systems techniques that have been successfully deployed for dense, regular workloads fall short of their scalability and efficiency goals when applied to irregular and dynamic applications. This is primarily due to the mismatch between the layers of the system design, from the hardware architecture and execution model to the programming model, data structures, and application code. This paper approaches the issue by addressing all layers of the system design. It presents and argues for key design principles needed for scalable and efficient dynamic graph processing, and from these it builds: 1) a fine-grain, memory-driven architecture that supports asynchronous active messages; 2) a programming and execution model that allows spawning tasks from within the data parallelism; and 3) a data structure that partitions each vertex object across many compute cells while still providing a single programming abstraction for the data object. Simulated experiments show a geomean performance gain of $2.38\times$ over a comparable state-of-the-art system for graph traversals, while natively supporting dynamic graph processing. The system uses programming abstractions called actions, and introduces a new dynamic graph storage scheme and message-delivery mechanisms with continuations that contain post-completion actions. Continuations seamlessly adjust prior or running execution to mutations in the input graph, enabling dynamic graph processing.
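
The sketch below gives a purely software rendering of two of these ideas: actions delivered as asynchronous messages to the compute cell that owns a vertex shard, and continuations that run after an action completes. All names and the queue-based scheduler are expository assumptions, not the simulated hardware.

```python
# Toy model of actions (active messages) with post-completion continuations.
from collections import deque

class ComputeCell:
    def __init__(self):
        self.vertices = {}                 # vertex id -> adjacency-list shard
        self.inbox = deque()

    def send(self, action, *args, continuation=None):
        """Asynchronously deliver an action to this cell."""
        self.inbox.append((action, args, continuation))

    def step(self):
        """Drain the inbox, running each action and then its continuation."""
        while self.inbox:
            action, args, continuation = self.inbox.popleft()
            result = action(self, *args)
            if continuation is not None:
                continuation(result)       # post-completion action

def insert_edge(cell, u, v):
    """Action: a graph mutation applied at the owning cell."""
    cell.vertices.setdefault(u, []).append(v)
    return (u, v)

cell = ComputeCell()
cell.send(insert_edge, 1, 2, continuation=lambda e: print("inserted", e))
cell.step()
```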

Serving generative inference for large-scale foundation models is a crucial component of contemporary AI applications. This paper focuses on deploying such services in a heterogeneous and decentralized setting to mitigate the substantial inference costs typically associated with centralized data centers. To this end, we propose HexGen, a flexible distributed inference engine that uniquely supports asymmetric partitioning of generative inference computations over both tensor model parallelism and pipeline parallelism, and allows effective deployment across diverse GPUs interconnected by a fully heterogeneous network. We further propose a scheduling algorithm grounded in constrained optimization that adaptively assigns asymmetric inference computation across the GPUs to fulfill inference requests while maintaining acceptable latency levels. We conduct an extensive evaluation of HexGen by serving the state-of-the-art Llama-2 (70B) model. The results suggest that, given the same budget, HexGen can achieve up to 2.3 times lower latency deadlines or tolerate up to 4 times higher request rates than the homogeneous baseline.
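
A toy version of the scheduling idea is sketched below: enumerate assignments of pipeline stages to a heterogeneous pool of GPUs and keep the lowest-latency assignment that satisfies per-GPU memory constraints. The GPU table, cost model, and deadline are invented placeholders; HexGen's actual algorithm solves a richer constrained-optimization problem that also covers tensor parallelism.

```python
# Toy constrained search: assign pipeline stages to heterogeneous GPUs.
from itertools import product

GPUS = {"A100": {"mem": 80, "tflops": 312}, "T4": {"mem": 16, "tflops": 65}}
MODEL_MEM = 140        # GB of weights (roughly a 70B model in fp16); assumed

def estimate_latency(stages):
    """Pipeline latency ~ slowest stage; stage time ~ its work / compute."""
    return max(MODEL_MEM / len(stages) / GPUS[g]["tflops"] for g in stages)

def schedule(inventory, max_stages=4, deadline=0.5):
    best = None
    for n in range(1, max_stages + 1):
        for stages in product(inventory, repeat=n):
            # memory constraint: each stage's shard must fit on its GPU
            if all(GPUS[g]["mem"] >= MODEL_MEM / n for g in stages):
                lat = estimate_latency(stages)
                if lat <= deadline and (best is None or lat < best[0]):
                    best = (lat, stages)
    return best

print(schedule(["A100", "T4"]))
```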

The success of retrieval-augmented language models on various natural language processing (NLP) tasks has been constrained in automatic speech recognition (ASR) applications by the difficulty of constructing fine-grained audio-text datastores. This paper presents kNN-CTC, a novel approach that overcomes this challenge by leveraging Connectionist Temporal Classification (CTC) pseudo labels to establish frame-level audio-text key-value pairs, circumventing the need for precise ground-truth alignments. We further introduce a skip-blank strategy, which ignores CTC blank frames, to reduce datastore size. By incorporating a k-nearest-neighbors retrieval mechanism into pre-trained CTC ASR systems and leveraging this fine-grained, pruned datastore, kNN-CTC consistently achieves substantial performance improvements under various experimental settings. Our code is available at https://github.com/NKU-HLT/KNN-CTC.
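
The two mechanisms combine naturally, as the sketch below illustrates: non-blank CTC frames populate a key-value datastore, and at inference the CTC posterior is interpolated with a distance-weighted vote over the k nearest stored frames. Shapes, the distance metric, and the interpolation weight are assumptions for illustration.

```python
# Sketch of kNN-CTC's datastore construction (with skip-blank) and retrieval.
import numpy as np

BLANK = 0

def build_datastore(encoder_states, pseudo_labels):
    """Keep (hidden state -> pseudo label) pairs for non-blank frames only."""
    keys, values = [], []
    for h, y in zip(encoder_states, pseudo_labels):
        if y != BLANK:                     # skip-blank: shrink the datastore
            keys.append(h)
            values.append(y)
    return np.stack(keys), np.array(values)

def knn_probs(query, keys, values, vocab_size, k=8, temp=1.0):
    """Distance-weighted label vote over the k nearest datastore entries."""
    d = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = np.exp(-d[nearest] / temp)
    p = np.zeros(vocab_size)
    for idx, weight in zip(nearest, w):
        p[values[idx]] += weight
    return p / p.sum()

def fused_probs(ctc_posterior, query, keys, values, lam=0.3):
    """Interpolate the CTC posterior with the retrieved distribution."""
    return (1 - lam) * ctc_posterior + lam * knn_probs(
        query, keys, values, vocab_size=len(ctc_posterior))
```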

Advances in deep learning (DL) have paved the way for automatic software vulnerability repair approaches, which effectively learn the mapping from vulnerable code to fixed code. Nevertheless, existing DL-based vulnerability repair methods face notable limitations: 1) they struggle to handle lengthy vulnerable code, 2) they treat code as natural-language text, neglecting its inherent structure, and 3) they do not tap into the valuable expert knowledge available in expert systems. To address these issues, we propose VulMaster, a Transformer-based neural network model that generates vulnerability repairs by comprehensively understanding the entire vulnerable code, irrespective of its length. The model also integrates diverse information, encompassing vulnerable code structures and expert knowledge from the CWE system. We evaluated VulMaster on a real-world C/C++ vulnerability repair dataset comprising 1,754 projects with 5,800 vulnerable functions. The experimental results demonstrate that VulMaster achieves substantial improvements over the learning-based state-of-the-art vulnerability repair approach. Specifically, VulMaster improves the EM, BLEU, and CodeBLEU scores from 10.2\% to 20.0\%, from 21.3\% to 29.3\%, and from 32.5\% to 40.9\%, respectively.
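
An illustrative sketch of the input-construction idea described above: split a long vulnerable function into chunks that fit the encoder window and pair each chunk with expert knowledge (e.g., the CWE description), so no part of the function is truncated away. The chunk size, prompt format, and knowledge lookup are placeholder assumptions, not VulMaster's exact pipeline.

```python
# Sketch: chunk long code and attach CWE expert knowledge to each chunk.
def build_model_inputs(vulnerable_code: str, cwe_id: str,
                       cwe_descriptions: dict, window: int = 512):
    knowledge = cwe_descriptions.get(cwe_id, "")
    tokens = vulnerable_code.split()       # stand-in for a real tokenizer
    chunks = [" ".join(tokens[i:i + window])
              for i in range(0, len(tokens), window)]
    # Each chunk carries the shared expert knowledge; an encoder-decoder
    # model can then attend over all chunks when generating the fix.
    return [f"CWE: {cwe_id}. {knowledge}\nCODE: {chunk}" for chunk in chunks]
```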

UMBRELLA is an open, large-scale IoT ecosystem deployed across South Gloucestershire, UK, intended to accelerate innovation across multiple technology domains. It is built to bridge the gap left by existing specialised testbeds and to address real-world technological challenges holistically, in a System-of-Systems (SoS) fashion. UMBRELLA provides open access to real-world devices and infrastructure, enabling researchers and industry to evaluate solutions for Smart Cities, Robotics, Wireless Communications, Edge Intelligence, and more. Key features include over 200 multi-sensor nodes installed on public infrastructure, a robotics arena with 20 mobile robots, a 5G network-in-a-box solution, and a unified backend platform for management, control, and secure user access. The heterogeneity of hardware components, including diverse sensors, communication interfaces, and GPU-enabled edge devices, coupled with tools such as digital twins, allows comprehensive experimentation and benchmarking of innovative solutions that are not viable in lab environments. This paper provides a comprehensive overview of UMBRELLA's multi-domain architecture and capabilities, which make it an ideal playground for Internet of Things (IoT) and Industrial IoT (IIoT) innovation. It discusses the challenges of designing, developing, and operating UMBRELLA as an open, sustainable testbed and shares lessons learned to guide similar future initiatives. With its unique openness, heterogeneity, realism, and tooling, UMBRELLA aims to keep accelerating cutting-edge technology research, development, and translation into real-world progress.

Reasoning about the resources used during the execution of programs, such as time, is one of the fundamental questions in computer science. When programming with probabilistic primitives, however, different samples may result in different resource usage, making the cost of a program not a single number but a distribution. The expected cost is an important metric for quantifying the efficiency of probabilistic programs. In this work we introduce $\mathbf{cert}$, a call-by-push-value (CBPV) metalanguage extended with primitives for probability, cost, and unbounded recursion, and give it a denotational semantics for reasoning about the average cost of programs. We justify the validity of the semantics through case studies ranging from randomized algorithms to stochastic processes, showing how the semantics captures their intended cost.
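
As a flavor of the kind of reasoning such a semantics is meant to support, consider a loop that charges one unit of cost per iteration and recurses on a fair coin flip; this textbook example is ours, not one of the paper's case studies. Its expected cost $C$ satisfies a simple recursive equation:

```latex
% Expected cost of a fair-coin loop: pay 1 per iteration, stop on heads
% (no further cost), recurse on tails. Solving the recurrence gives C = 2.
\[
  C \;=\; 1 \;+\; \tfrac{1}{2}\cdot 0 \;+\; \tfrac{1}{2}\cdot C
  \qquad\Longrightarrow\qquad C = 2 .
\]
```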

Face recognition technology has advanced significantly in recent years, due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and therefore have limited utility in more advanced security, forensics, and military applications, which require lower resolutions, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in whole-body recognition using methods based on gait and anthropometry. This paper describes the methods used to collect and curate the dataset and the dataset's characteristics at its current stage.

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality-of-service (QoS) requirements need to be represented in a unified specification to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among configuration knowledge representation models limit the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model for representing IoT resource configuration knowledge artifacts, together with IoT-CANE (Context-Aware recommendatioN systEm), which facilitates incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
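
To make the idea of a unified representation concrete, a minimal sketch of such a configuration record is shown below; the schema and field names are our assumptions, not the paper's actual data model.

```python
# Sketch: one record type spanning device, Edge, and Cloud resources with QoS.
from dataclasses import dataclass, field

@dataclass
class IoTResource:
    name: str
    layer: str                                  # "device" | "edge" | "cloud"
    capabilities: list[str] = field(default_factory=list)
    qos: dict[str, float] = field(default_factory=dict)   # e.g. latency bounds

# A device and an Edge node described in the same unified schema.
camera = IoTResource("gate-camera", "device",
                     capabilities=["video/h264"], qos={"latency_ms": 100})
gateway = IoTResource("edge-gw-1", "edge",
                      capabilities=["inference"], qos={"uptime_pct": 99.9})
```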
