Modern web applications are often built with application development frameworks such as React, Vue.js, and Angular. While these frameworks provide many useful components that facilitate development, the Single Page Applications (SPAs) they typically produce are vulnerable to unmanaged memory consumption. A web application can stay alive for hours or days running behavior loops, and in such cases even a single memory leak in an SPA can cause performance degradation on the client side. However, recent debugging techniques for web applications still focus on memory leak detection, which requires manual effort and produces imprecise results. We propose LeakPair, a technique to repair memory leaks in single page applications. Based on the insight that memory leaks are mostly non-functional bugs and fixing them should not change the behavior of an application, the technique proactively generates patches to fix memory leaks without prior leak detection, which is often heavy and tedious. To generate effective patches, LeakPair follows the idea of pattern-based program repair, since this repair strategy has shown successful results in many recent studies. We evaluate the technique on more than 20 open-source projects without using explicit leak detection. We also submit the patches generated by our technique to the projects as pull requests. The results show that LeakPair can generate effective patches that reduce memory consumption and are acceptable to developers. In addition, we run the test suites provided by the projects after applying the patches, and none of the patches breaks any functionality; this suggests that LeakPair can generate non-intrusive patches for memory leaks.
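As an illustration of the pattern-based repair idea, the toy sketch below scans JavaScript source for one common SPA leak pattern (an event listener that is added but never removed) and proposes a cleanup statement. It is a simplified, regex-based stand-in, not LeakPair's actual patterns or patch generator, and the function and variable names are assumptions.

```python
# Toy sketch of pattern-based patch generation for a common SPA leak pattern:
# addEventListener without a matching removeEventListener. Not LeakPair itself.
import re

ADD_LISTENER = re.compile(r"(\w+)\.addEventListener\(\s*['\"](\w+)['\"]\s*,\s*([\w.]+)")

def suggest_cleanup_patches(js_source: str) -> list:
    """Return cleanup statements for listeners that are never removed."""
    patches = []
    for target, event, handler in ADD_LISTENER.findall(js_source):
        if f"{target}.removeEventListener(" not in js_source:
            patches.append(f"{target}.removeEventListener('{event}', {handler});")
    return patches

component = """
  mounted() {
    window.addEventListener('resize', this.onResize);
  }
"""
print(suggest_cleanup_patches(component))
# ["window.removeEventListener('resize', this.onResize);"]
```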
The widespread deployment of Consumer Internet of Things devices in proximity to human activities makes them digital observers of our daily actions. This has led to a new field of digital forensics, known as IoT Forensics, in which digital traces generated by IoT devices can serve as key evidence for forensic investigations. Thus, there is a need for tools that can efficiently acquire and store network traces from IoT ecosystems. This paper presents IoTScent, an open-source IoT forensic tool that enables IoT gateways and Home Automation platforms to perform IoT traffic capture and analysis. Unlike other works that focus on IP-based protocols, IoTScent is specifically designed to operate over IEEE 802.15.4-based traffic, which is the basis for many IoT-specific protocols such as Zigbee, 6LoWPAN and Thread. IoTScent offers live traffic capture and feature extraction capabilities, providing a forensic data collection framework that simplifies the setup of a data collection pipeline, automates the data collection process, and provides ready-made features for forensic evidence extraction. This work gives a comprehensive description of the IoTScent tool, including a practical use case that demonstrates device identification from Zigbee traffic. The study contributes to ongoing research in IoT Forensics by addressing the challenges faced in the field and publicly releasing the IoTScent tool.
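To illustrate how ready-made traffic features can feed forensic device identification, the sketch below trains a simple classifier on per-frame features. It assumes the capture pipeline has already exported features such as frame length, inter-arrival time, and RSSI to a CSV labelled by source device; the file name and column names are illustrative assumptions, and the classifier choice is not prescribed by IoTScent.

```python
# Hypothetical device-identification sketch on exported Zigbee traffic features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

frames = pd.read_csv("zigbee_features.csv")            # assumed export: frame_len, inter_arrival, rssi, device_id
X = frames[["frame_len", "inter_arrival", "rssi"]]
y = frames["device_id"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("device identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```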
Increasing the context length of large language models (LLMs) unlocks fundamentally new capabilities, but also significantly increases the memory footprint of training. Previous model-parallel systems such as Megatron-LM partition and compute different attention heads in parallel, which incurs large communication volumes and cannot scale beyond the number of attention heads, thereby hindering adoption. In this paper, we introduce a new approach, LightSeq, for long-context LLM training. LightSeq has many notable advantages. First, LightSeq partitions over the sequence dimension, and is hence agnostic to the model architecture and readily applicable to models with varying numbers of attention heads, such as Multi-Head, Multi-Query and Grouped-Query attention. Second, LightSeq not only requires up to 4.7x less communication than Megatron-LM on popular LLMs but also overlaps the communication with computation. To further reduce the training time, LightSeq features a novel gradient checkpointing scheme that bypasses one forward computation for memory-efficient attention. We evaluate LightSeq on Llama-7B and its variants with sequence lengths from 32K to 512K. Through comprehensive experiments on single-node and cross-node training, we show that LightSeq achieves up to 1.24-2.01x end-to-end speedup, and a 2-8x longer sequence length on models with fewer heads, compared to Megatron-LM. Code will be available at https://github.com/RulinShao/LightSeq.
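The single-process sketch below only illustrates what partitioning over the sequence dimension means: each simulated rank owns a contiguous chunk of tokens and computes attention for its own queries against key/value chunks gathered from all ranks (the gather is the communication that a system like LightSeq would overlap with computation). It is not the LightSeq implementation, and the sizes are illustrative.

```python
# Illustration of sequence-dimension partitioning for attention (single process,
# no real communication); verifies that the chunked result matches full attention.
import torch

torch.manual_seed(0)
seq_len, d, world_size = 16, 8, 4
q, k, v = (torch.randn(seq_len, d) for _ in range(3))

q_chunks = q.chunk(world_size, dim=0)   # each rank keeps only its query chunk
k_chunks = k.chunk(world_size, dim=0)   # key/value chunks live on their owning rank
v_chunks = v.chunk(world_size, dim=0)

outputs = []
for rank in range(world_size):
    # In a real system, the other ranks' k/v chunks arrive via communication.
    k_full = torch.cat(k_chunks, dim=0)
    v_full = torch.cat(v_chunks, dim=0)
    attn = torch.softmax(q_chunks[rank] @ k_full.T / d ** 0.5, dim=-1)
    outputs.append(attn @ v_full)

out = torch.cat(outputs, dim=0)
reference = torch.softmax(q @ k.T / d ** 0.5, dim=-1) @ v
assert torch.allclose(out, reference, atol=1e-5)
```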
In this paper, we propose ``SimVLG'', a streamlined framework for pre-training computationally intensive vision-language generative models that leverages frozen pre-trained large language models (LLMs). The prevailing paradigm in vision-language pre-training (VLP) typically involves a two-stage optimization process: an initial resource-intensive phase dedicated to general-purpose vision-language representation learning, aimed at extracting and consolidating pertinent visual features, followed by a phase focusing on end-to-end alignment between the visual and linguistic modalities. Our one-stage, single-loss framework circumvents the computationally demanding first stage by gradually merging similar visual tokens during training. This gradual merging compacts the visual information while preserving the richness of the semantic content, leading to fast convergence without sacrificing performance. Our experiments show that our approach can speed up the training of vision-language models by a factor of $5\times$ without a noticeable impact on overall performance. Additionally, we show that our models can achieve performance comparable to current vision-language models with only $1/10$ of the data. Finally, we demonstrate how our image-text models can easily be adapted to video-language generative tasks through a novel soft attentive temporal token merging module.
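To make the idea of merging similar visual tokens concrete, the sketch below repeatedly averages the most similar pair of tokens (by cosine similarity) until a target token count is reached. This is a generic illustration of similarity-based token merging, not SimVLG's soft attentive merging module, and the token counts and dimensions are assumptions.

```python
# Minimal similarity-based visual token merging sketch (illustrative only).
import torch
import torch.nn.functional as F

def merge_tokens(tokens: torch.Tensor, target_len: int) -> torch.Tensor:
    """tokens: (num_tokens, dim) -> (target_len, dim); merged tokens are appended last."""
    tokens = tokens.clone()
    while tokens.shape[0] > target_len:
        sim = F.cosine_similarity(tokens.unsqueeze(1), tokens.unsqueeze(0), dim=-1)
        sim.fill_diagonal_(-float("inf"))                 # ignore self-similarity
        i, j = divmod(int(sim.argmax()), tokens.shape[0]) # most similar pair
        merged = (tokens[i] + tokens[j]) / 2              # merge by averaging
        keep = [t for t in range(tokens.shape[0]) if t not in (i, j)]
        tokens = torch.cat([tokens[keep], merged.unsqueeze(0)], dim=0)
    return tokens

visual_tokens = torch.randn(196, 768)                     # e.g. ViT patch tokens
print(merge_tokens(visual_tokens, 64).shape)              # torch.Size([64, 768])
```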
The Domain Name System (DNS) is a critical component of the Internet. DNS resolvers, which act as the cache between DNS clients and DNS nameservers, are the central piece of the DNS infrastructure and essential to the scalability of DNS. However, finding resolver vulnerabilities is non-trivial, and the problem is not well addressed by existing tools, for several reasons. First, most known resolver vulnerabilities are non-crash bugs that cannot be directly detected by existing oracles (or sanitizers). Second, there is a lack of rigorous specifications to serve as references for classifying a test case as a resolver bug. Third, DNS resolvers are stateful, and stateful fuzzing remains challenging due to the large input space. In this paper, we present a new fuzzing system termed ResolverFuzz that addresses the aforementioned challenges with a suite of new techniques. First, ResolverFuzz performs constrained stateful fuzzing by focusing on the short query-response sequence, which our study of published DNS CVEs shows to be the most effective way to find resolver bugs. Second, to generate test cases that are more likely to trigger resolver bugs, we combine probabilistic context-free grammar (PCFG) based input generation with byte-level mutation for both queries and responses. Third, we leverage differential testing and clustering to identify non-crash bugs such as cache poisoning bugs. We evaluated ResolverFuzz against 6 mainstream DNS implementations under 4 resolver modes. Overall, we identified 23 vulnerabilities that can result in cache poisoning, resource consumption, and crash attacks. After responsible disclosure, 19 of them have been confirmed or fixed, and 15 CVE numbers have been assigned.
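The toy sketch below illustrates the two-step input generation described above: sample a query string from a probabilistic grammar, then apply byte-level mutation. The grammar, its weights, and the mutation policy are illustrative assumptions, not ResolverFuzz's actual PCFG or mutator.

```python
# Toy PCFG-based query generation plus byte-level mutation (illustrative only).
import random

PCFG = {
    "query": [(["name", " ", "qtype"], 1.0)],
    "name":  [(["label", ".", "label", "."], 0.7), (["label", "."], 0.3)],
    "label": [(["example"], 0.5), (["attacker"], 0.5)],
    "qtype": [(["A"], 0.6), (["CNAME"], 0.2), (["NS"], 0.2)],
}

def expand(symbol: str) -> str:
    if symbol not in PCFG:                        # terminal symbol
        return symbol
    rules, weights = zip(*PCFG[symbol])
    rule = random.choices(rules, weights)[0]      # sample a production by probability
    return "".join(expand(s) for s in rule)

def mutate(data: bytes, n_flips: int = 2) -> bytes:
    out = bytearray(data)
    for _ in range(n_flips):                      # overwrite a few random bytes
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

query = expand("query").encode()
print(query, mutate(query))
```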
Tracking subjects in videos is one of the most widely used functions in camera-based IoT applications such as security surveillance, smart city traffic safety enhancement, and vehicle-to-pedestrian communication. In the computer vision domain, tracking is usually achieved by first detecting subjects with bounding boxes and then associating the detected bounding boxes across video frames. In many IoT systems, images captured by cameras are sent over the network to be processed at a different site that has more powerful computing resources than the edge devices. However, sending entire frames through the network causes significant bandwidth consumption that may exceed the system's bandwidth constraints. To tackle this problem, we propose ViFiT, a transformer-based model that reconstructs vision bounding box trajectories from phone data (IMU and Fine Time Measurements). It leverages the ability of transformers to better model long-term time-series data. ViFiT is evaluated on the Vi-Fi dataset, a large-scale multimodal dataset covering 5 diverse real-world scenes, including indoor and outdoor environments. To fill the gap of metrics that jointly capture both tracking quality and video bandwidth reduction, we propose a novel evaluation framework dubbed Minimum Required Frames (MRF) and Minimum Required Frames Ratio (MRFR). ViFiT achieves an MRFR of 0.65, outperforming the state-of-the-art cross-modal reconstruction approach based on an LSTM encoder-decoder architecture, X-Translator, which achieves 0.98, and yields a high frame reduction rate of 97.76%.
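As a rough illustration of cross-modal trajectory reconstruction, the sketch below uses a generic transformer encoder-decoder to map a window of phone measurements to a sequence of bounding boxes. It is not the ViFiT architecture; the feature dimensions, query tensor, and layer sizes are assumptions.

```python
# Generic phone-data-to-bounding-box reconstruction sketch (not ViFiT).
import torch
import torch.nn as nn

class PhoneToBBox(nn.Module):
    def __init__(self, phone_dim=10, box_dim=4, d_model=64):
        super().__init__()
        self.in_proj = nn.Linear(phone_dim, d_model)
        self.out_proj = nn.Linear(d_model, box_dim)
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)

    def forward(self, phone_seq, box_queries):
        # phone_seq: (batch, T, phone_dim); box_queries: (batch, T, d_model)
        decoded = self.transformer(self.in_proj(phone_seq), box_queries)
        return self.out_proj(decoded)              # (batch, T, 4) boxes, e.g. (x, y, w, h)

model = PhoneToBBox()
boxes = model(torch.randn(2, 30, 10), torch.randn(2, 30, 64))
print(boxes.shape)                                 # torch.Size([2, 30, 4])
```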
Markov Decision Processes (MDPs) are a formal framework for modeling and solving sequential decision-making problems. In finite-time horizons, such problems are relevant, for instance, for optimal stopping or specific supply chain problems, but also in the training of large language models. In contrast to infinite-horizon MDPs, optimal policies are not stationary; a policy must be learned for every single epoch. In practice, all parameters are often trained simultaneously, ignoring the inherent structure suggested by dynamic programming. This paper introduces a combination of dynamic programming and policy gradient, called dynamic policy gradient, in which the parameters are trained backwards in time. For the tabular softmax parametrisation, we carry out the convergence analysis of simultaneous and dynamic policy gradient towards global optima, both in the exact and sampled gradient settings, without regularisation. It turns out that dynamic policy gradient training much better exploits the structure of finite-time problems, which is reflected in improved convergence bounds.
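The sketch below illustrates the backward training scheme in the simplest setting: a tabular softmax policy with one parameter block per epoch, trained by exact policy gradient ascent against the value function induced by the already-trained later epochs. It is an illustration under assumed dynamics, rewards, step size, and state distribution, not the paper's code or analysis setting.

```python
# Exact-gradient sketch of dynamic policy gradient on a random finite-horizon MDP.
import numpy as np

rng = np.random.default_rng(0)
H, S, A = 5, 4, 3                                  # horizon, states, actions
P = rng.dirichlet(np.ones(S), size=(S, A))         # P[s, a, s'] transition kernel
R = rng.random((S, A))                             # R[s, a] rewards
mu = np.full(S, 1.0 / S)                           # assumed state distribution per epoch

theta = np.zeros((H, S, A))                        # one softmax table per epoch
V_next = np.zeros(S)                               # terminal value

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for h in reversed(range(H)):                       # train epochs backwards in time
    Q = R + P @ V_next                             # Q_h[s, a] under fixed later policies
    for _ in range(2000):                          # exact policy gradient ascent steps
        pi = softmax(theta[h])
        V = (pi * Q).sum(axis=1)
        grad = mu[:, None] * pi * (Q - V[:, None]) # softmax policy gradient (advantage form)
        theta[h] += 1.0 * grad
    V_next = (softmax(theta[h]) * Q).sum(axis=1)   # value passed to epoch h-1

print("value of the learned policy at epoch 0:", (mu * V_next).sum())
```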
Many software engineers develop, fine-tune, and deploy deep learning (DL) models. They use DL models in a variety of development frameworks and deploy them to a range of runtime environments. In this diverse ecosystem, engineers use DL model converters to move models from frameworks to runtime environments. Conversion errors compromise model quality and disrupt deployment. However, the failure modes and patterns of DL model converters are unknown. This knowledge gap adds engineering risk to DL interoperability technologies. In this paper, we conduct the first failure analysis of DL model converters. Specifically, we characterize failures in model converters associated with ONNX (Open Neural Network eXchange). We analyze failures in the ONNX converters for two major DL frameworks, PyTorch and TensorFlow, and report the symptoms, causes, and locations of failures for N=200 issues. We also evaluate why models fail by converting 5,149 models, both real-world and synthetically generated instances. In the course of this testing, we find 11 defects (5 new) across torch.onnx, tf2onnx, and ONNX Runtime. We evaluate two hypotheses about the relationship between model operators and converter failures, falsifying one and obtaining equivocal results on the other. We describe the current testing strategies for model converters and note their weaknesses. Our results motivate future research on making DL software simpler to maintain, extend, and validate.
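For readers unfamiliar with the workflow under study, the minimal example below exports a small PyTorch model to ONNX and runs it with ONNX Runtime, then compares outputs against the source framework, which is the kind of check that surfaces conversion errors. The model, file name, and tolerance check are illustrative, not the study's methodology.

```python
# Minimal PyTorch -> ONNX -> ONNX Runtime conversion example (illustrative).
import numpy as np
import torch
import onnxruntime as ort

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4))
model.eval()
dummy = torch.randn(1, 8)

# Convert with the torch.onnx exporter.
torch.onnx.export(model, dummy, "model.onnx", input_names=["x"], output_names=["y"])

# Run the converted model with ONNX Runtime.
session = ort.InferenceSession("model.onnx")
onnx_out = session.run(["y"], {"x": dummy.numpy()})[0]

# Compare converter output against the source framework to spot conversion errors.
torch_out = model(dummy).detach().numpy()
print("max abs difference:", np.abs(onnx_out - torch_out).max())
```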
In 2023, images make up 41% of data transmitted on the web, significantly impacting the performance of web apps. Fortunately, newer image formats like WEBP and AVIF offer advanced compression and faster page loading, but they may face performance disparities across browsers. We therefore conducted performance evaluations on five major browsers (Chrome, Edge, Safari, Opera, and Firefox) while comparing four image formats. The results indicate that the newer formats exhibit notable performance improvements across all browsers, leading to shorter loading times. Compared to the compressed JPEG format, WEBP and AVIF improved the Page Load Time by 21% and 15%, respectively. However, web scraping revealed that JPEG and PNG still dominate web image choices, with WEBP, at 4%, being the most used of the newer formats. Through web scraping and web performance evaluation, this research serves to (1) explore image format preferences in web applications and analyze their distribution and characteristics across frequently visited sites in 2023, and (2) assess the performance impact of distinct web image formats on application load times across popular web browsers.
Blind and low-vision (BLV) people rely on GPS-based systems for outdoor navigation. GPS's inaccuracy, however, causes them to veer off track, run into unexpected obstacles, and struggle to reach precise destinations. While prior work has made precise navigation possible indoors via additional hardware installations, enabling precise navigation outdoors remains a challenge. Ironically, many outdoor environments of interest such as downtown districts are already instrumented with hardware such as street cameras. In this work, we explore the idea of repurposing street cameras for outdoor navigation, and investigate the effectiveness of such an approach. Our resulting system, StreetNav, processes the cameras' video feeds using computer vision and gives BLV pedestrians real-time navigation assistance. Our user evaluations in the COSMOS testbed with eight BLV pedestrians show that StreetNav guides them more precisely than GPS, but its performance is sensitive to lighting conditions and environmental occlusions. We discuss future implications for deploying such systems at scale.
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks across different data modalities. A pretrained foundation model, such as BERT, GPT-3, MAE, DALL-E, or ChatGPT, is trained on large-scale data and provides a reasonable parameter initialization for a wide range of downstream applications. The idea of pretraining behind PFMs plays an important role in the application of large models. Different from previous methods that apply convolutional and recurrent modules for feature extraction, the generative pre-training (GPT) method applies a Transformer as the feature extractor and is trained on large datasets with an autoregressive paradigm. Similarly, BERT applies Transformers to train on large datasets as a contextual language model. Recently, ChatGPT has shown promising success with large language models, applying an autoregressive language model with zero-shot or few-shot prompting. With the extraordinary success of PFMs, AI has made waves in a variety of fields over the past few years. Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for an updated survey. This study provides a comprehensive review of recent research advancements, current and future challenges, and opportunities for PFMs in text, image, graph, and other data modalities. We first review the basic components and existing pretraining methods in natural language processing, computer vision, and graph learning. We then discuss advanced PFMs for other data modalities and unified PFMs that consider data quality and quantity. In addition, we discuss relevant research on the fundamentals of PFMs, including model efficiency and compression, security, and privacy. Finally, we lay out key implications, future research directions, challenges, and open problems.