Distributed computing frameworks such as MapReduce and Spark are often used to process large-scale data computing jobs. In wireless scenarios, exchanging data among distributed nodes suffers severely from a communication bottleneck due to limited communication resources such as bandwidth and power. To address this problem, we propose a coded parallel computing (CPC) scheme for distributed computing systems in which distributed nodes exchange information over a half-duplex wireless interference network. The CPC scheme achieves the multicast gain by utilizing coded computing to multicast coded symbols intended for multiple receiver nodes, and the cooperative transmission gain by allowing multiple transmitter nodes to jointly deliver messages via interference alignment. To measure communication performance, we apply the widely used latency-oriented metric: \emph{normalized delivery time (NDT)}. It is shown that CPC can significantly reduce the NDT by jointly exploiting parallel transmission and coded multicasting opportunities. Surprisingly, when the number of nodes $K$ tends to infinity and the computation load is fixed, CPC approaches zero NDT, while all state-of-the-art schemes achieve positive NDT values. Finally, we establish an information-theoretic lower bound on the NDT-computation load trade-off over the \emph{half-duplex} network, and prove that our scheme achieves the minimum NDT within a multiplicative gap of $3$, i.e., our scheme is order-optimal.
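For readers unfamiliar with the metric, the NDT used in this line of work is commonly defined as the delivery time normalized by that of an interference-free reference system; a hedged sketch of the usual definition (the symbols $T$, $L$, and $P$ below are ours, not taken from the abstract) is
\[
\tau \;=\; \lim_{P \to \infty} \lim_{L \to \infty} \frac{T(L, P)}{L / \log P},
\]
where $T(L, P)$ denotes the time needed to deliver the required $L$ bits per node at transmit SNR $P$, and $L / \log P$ is the time an interference-free point-to-point link would need at high SNR.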
AI-Generated Content (AIGC), as a novel means of providing Metaverse services in the forthcoming Internet paradigm, can help overcome the obstacles posed by immersion requirements. Concurrently, edge computing, as an evolving computing paradigm in communication systems, effectively supports real-time interactive services. To enhance the accessibility of AIGC services, deploying AIGC models (e.g., diffusion models) to edge servers and local devices has become a prevailing trend. Nevertheless, this approach faces constraints imposed by battery life and computational resources when tasks are offloaded to local devices, limiting the capacity to deliver high-quality content to users while adhering to stringent latency requirements. There is therefore a tradeoff between the utility of AIGC models and offloading decisions in the edge computing paradigm. This paper proposes a joint optimization algorithm for offloading decisions, computation time, and the number of diffusion steps of the diffusion models in the reverse diffusion stage. Moreover, we take the average error into consideration as the metric for evaluating the quality of the generated results. Experimental results demonstrate that the proposed algorithm achieves superior joint optimization performance compared to the baselines.
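To make the tradeoff concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) that enumerates offloading targets and reverse-diffusion step counts and picks the lowest-error plan under a latency budget; the monotone error model 1/steps is a placeholder assumption.
\begin{verbatim}
def choose_plan(latency_budget, step_options, devices):
    """devices: dict mapping offloading target -> seconds per reverse-diffusion step."""
    best = None
    for steps in step_options:
        for name, sec_per_step in devices.items():
            latency = steps * sec_per_step
            if latency > latency_budget:
                continue  # violates the latency requirement
            error = 1.0 / steps  # placeholder: more denoising steps, lower average error
            if best is None or error < best[0]:
                best = (error, steps, name, latency)
    return best  # (error, steps, offloading target, latency) or None

# e.g. choose_plan(2.0, [10, 20, 50], {"local": 0.08, "edge": 0.03})
\end{verbatim}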
In recent years, deep learning (DL)-based methods have been widely used in code vulnerability detection. DL-based methods typically extract structural information from source code, e.g., a code structure graph, and adopt neural networks such as Graph Neural Networks (GNNs) to learn the graph representations. However, these methods fail to consider the heterogeneous relations in the code structure graph, i.e., different types of edges connecting different types of nodes, which may hinder graph representation learning. Besides, these methods are limited in capturing long-range dependencies due to the deep levels of the code structure graph. In this paper, we propose a Meta-path based Attentional Graph learning model for code vulNErability deTection, called MAGNET. MAGNET constructs a multi-granularity meta-path graph for each code snippet, in which the heterogeneous relations are denoted as meta-paths to represent the structural information. A meta-path based hierarchical attentional graph neural network is also proposed to capture the relations between distant nodes in the graph. We evaluate MAGNET on three public datasets, and the results show that MAGNET outperforms the best baseline method in terms of F1 score by 6.32%, 21.50%, and 25.40%, respectively. MAGNET also achieves the best performance among all the baseline methods in detecting the Top-25 most dangerous Common Weakness Enumerations (CWEs), further demonstrating its effectiveness in vulnerability detection.
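As a rough, hedged illustration of hierarchical meta-path attention (the abstract does not give MAGNET's exact formulation; the function and variable names below are ours), one can first attend over the neighbors reached through each meta-path and then attend over the per-meta-path summaries:
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def metapath_attention(node_feat, neighbors_per_metapath, q_node, q_path):
    # level 1: attention over neighbors reachable via each meta-path
    summaries = []
    for neigh in neighbors_per_metapath:   # each neigh: (n_i, d) array
        w = softmax(neigh @ q_node)
        summaries.append(w @ neigh)        # weighted neighbor summary, shape (d,)
    summaries = np.stack(summaries)        # (n_metapaths, d)
    # level 2: attention over the meta-path level summaries
    w = softmax(summaries @ q_path)
    return node_feat + w @ summaries       # updated node representation
\end{verbatim}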
Collaborative edge computing (CEC) is an emerging paradigm in which heterogeneous devices collaborate on edge computation jobs. For congestible links and computing units, delay-optimal forwarding and offloading of service chain tasks (e.g., DNNs with vertical splits) in CEC remains an open problem. In this paper, we formulate service chain forwarding and offloading in CEC with arbitrary topology and heterogeneous transmission/computation capabilities, and aim to minimize the network aggregated cost. We consider congestion-aware nonlinear cost functions that cover various performance metrics and constraints, such as average queueing delay with limited processor capacity. We solve the non-convex optimization problem globally by analyzing the KKT conditions and proposing a sufficient optimality condition. We propose a polynomial-time distributed algorithm that converges to the global optimum. The algorithm adapts to changes in input rates and network topology, and can be implemented as an online algorithm. Numerical evaluation shows that our method significantly outperforms baselines in multiple network instances, especially in congested scenarios.
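As one hedged example of a congestion-aware nonlinear cost of the kind described (our own illustration, not the paper's exact formulation), an M/M/1-style average queueing delay on each link or computing unit grows without bound as the flow approaches capacity, and the network aggregated cost sums these terms:
\begin{verbatim}
def queueing_cost(flow, capacity):
    # average-queueing-delay style cost; finite only while flow < capacity
    if flow >= capacity:
        return float("inf")
    return flow / (capacity - flow)

def aggregated_cost(flows, capacities):
    # flows/capacities: dicts keyed by link or computing-unit id
    return sum(queueing_cost(flows[e], capacities[e]) for e in flows)

# e.g. aggregated_cost({"link1": 3.0, "cpu1": 5.0}, {"link1": 10.0, "cpu1": 6.0})
\end{verbatim}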
Sustainability in high performance computing (HPC) is a major challenge not only for HPC centers and their users, but also for society as climate goals become stricter. Considerable effort has gone into reducing the energy consumption of computing systems in general. Although efforts to optimize the energy efficiency of HPC workloads exist, most of them target CPUs. As HPC systems shift more and more to GPU-centric architectures, simulation codes increasingly adopt GPU-programming models, creating an urgent need to improve the energy efficiency of GPU-enabled codes. However, reducing the energy consumption of large-scale simulations executing on CPUs and GPUs has received insufficient attention. In this work, we enable accurate power and energy measurements using an open-source toolkit across a range of CPU+GPU node architectures. We apply this approach in SPH-EXA, an open-source GPU-centric astrophysical and cosmological simulation framework. We show that, with simple code instrumentation, users can accurately measure power- and energy-related data about their application, beyond the data provided by HPC systems alone. The accurate power and energy data provide users with significant insight for conducting energy-aware computational experiments and for future energy-aware code development.
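The abstract does not name the toolkit; as a hedged, generic illustration of the kind of instrumentation involved, GPU power can be sampled through NVML (via the pynvml bindings) around a code region and integrated into energy:
\begin{verbatim}
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

interval = 0.1           # seconds between samples
samples = []
for _ in range(100):     # sample the instrumented region for ~10 s
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
    samples.append(watts)
    time.sleep(interval)

energy_joules = sum(samples) * interval   # rectangle-rule integration of power
print(f"avg power: {sum(samples)/len(samples):.1f} W, energy: {energy_joules:.1f} J")

pynvml.nvmlShutdown()
\end{verbatim}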
Modern workloads demand increasingly large memory capacity. Compute Express Link (CXL)-based memory tiering has emerged as a promising solution for addressing this trend by utilizing traditional DRAM alongside slow-tier CXL-memory devices in the same system. Unfortunately, most prior tiering systems are recency-based and thus cannot accurately identify hot and cold pages, since a recently accessed page is not necessarily a hot page. On the other hand, more accurate frequency-based systems suffer from high memory and runtime overhead as a result of tracking large memories. In this paper, we propose FreqTier, a fast and accurate frequency-based tiering system for CXL memory. We observe that memory tiering systems can tolerate a small amount of tracking inaccuracy without compromising overall application performance. Based on this observation, FreqTier probabilistically tracks the access frequency of each page, enabling accurate identification of hot and cold pages while maintaining minimal memory overhead. Finally, FreqTier intelligently adjusts the intensity of tiering operations based on the application's memory access behavior, thereby significantly reducing migration traffic and application interference. We evaluate FreqTier on two emulated CXL memory devices with different bandwidths. On the high-bandwidth CXL device, FreqTier outperforms state-of-the-art tiering systems while using 4$\times$ less local DRAM for in-memory caching workloads. On GAP graph analytics and XGBoost workloads with a 1:32 local DRAM to CXL-memory ratio, FreqTier outperforms prior works by 1.04$-$2.04$\times$ (1.39$\times$ on average). Even on the low-bandwidth CXL device, FreqTier outperforms AutoNUMA by 1.14$\times$ on average.
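The abstract does not specify FreqTier's data structure; as a hedged sketch of what probabilistic per-page frequency tracking with small memory can look like, a count-min-sketch-style counter gives approximate access counts that are sufficient for hot/cold classification:
\begin{verbatim}
import random

class ApproxPageCounter:
    """Count-min-sketch style frequency tracker (illustrative, not FreqTier's design)."""
    def __init__(self, width=4096, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.getrandbits(32) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _slot(self, page, row):
        return hash((self.salts[row], page)) % self.width

    def record_access(self, page):
        for row in range(self.depth):
            self.table[row][self._slot(page, row)] += 1

    def estimate(self, page):
        # min over rows bounds the overestimation caused by hash collisions
        return min(self.table[row][self._slot(page, row)] for row in range(self.depth))

    def is_hot(self, page, threshold):
        return self.estimate(page) >= threshold
\end{verbatim}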
Various resources are the essential elements of data centers, and the completion time of jobs is vital to users. Given the persistence, periodicity, and spatial-temporal dependence of stream workloads, a new Storm scheduler based on Advantage Actor-Critic is proposed to improve resource utilization and minimize the completion time. A new weighted embedding with a Graph Neural Network is designed to comprehensively capture the features of a job, including the dependencies, types, and positions of its tasks. An improved Advantage Actor-Critic integrating task selection and executor assignment is proposed to schedule tasks to executors for better resource utilization. The status of tasks and executors is then updated for the next round of scheduling. Experimental results show that, compared to existing methods, the proposed Storm scheduler improves resource utilization. The completion time is reduced by almost 17\% on the TPC-H data set and by almost 25\% on the Alibaba data set.
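For concreteness, the advantage signal in such a scheduler could be the standard one-step estimate, with a reward tied to (negative) completion time; this is a hedged sketch of the general actor-critic ingredient, not the paper's exact formulation:
\begin{verbatim}
def one_step_advantage(reward, value_state, value_next_state, gamma=0.99):
    # A(s, a) = r + gamma * V(s') - V(s); the actor is updated to increase the
    # log-probability of actions (task choice, executor assignment) with A > 0
    return reward + gamma * value_next_state - value_state

# e.g. reward = -completion_time_of_the_scheduled_task
\end{verbatim}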
There are now over 20 commercial vector database management systems (VDBMSs), all produced within the past five years. Yet embedding-based retrieval has been studied for over ten years, and similarity search for more than half a century. Driving this shift from algorithms to systems are new data-intensive applications, notably large language models, that demand vast stores of unstructured data coupled with reliable, secure, fast, and scalable query processing capability. A variety of new data management techniques now exist for addressing these needs; however, there is no comprehensive survey that thoroughly reviews these techniques and systems. We start by identifying five main obstacles to vector data management, namely the vagueness of semantic similarity, the large size of vectors, the high cost of similarity comparison, the lack of a natural partitioning that can be used for indexing, and the difficulty of efficiently answering hybrid queries that require both attributes and vectors. Overcoming these obstacles has led to new approaches to query processing, storage and indexing, and query optimization and execution. For query processing, a variety of similarity scores and query types are now well understood; for storage and indexing, techniques include vector compression, namely quantization, and partitioning based on randomization, learned partitioning, and navigable partitioning; for query optimization and execution, we describe new operators for hybrid queries, as well as techniques for plan enumeration, plan selection, and hardware-accelerated execution. These techniques lead to a variety of VDBMSs across a spectrum of design and runtime characteristics, including native systems specialized for vectors and extended systems that incorporate vector capabilities into existing systems. We then discuss benchmarks, and finally we outline research challenges and point the direction for future work.
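As a hedged illustration of one of the surveyed indexing ideas (partitioning-based search in the spirit of an inverted-file index; the function names and parameters below are ours), vectors can be clustered into coarse lists so that a query probes only a few lists instead of scanning everything:
\begin{verbatim}
import numpy as np

def build_ivf(vectors, n_lists=16, n_iters=10, seed=0):
    # crude k-means to form the coarse partitions (inverted lists)
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), n_lists, replace=False)]
    for _ in range(n_iters):
        assign = np.argmin(((vectors[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_lists):
            members = vectors[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    lists = {c: np.where(assign == c)[0] for c in range(n_lists)}
    return centroids, lists

def ivf_search(query, vectors, centroids, lists, n_probe=4, k=5):
    # probe only the n_probe closest partitions instead of a full scan
    order = np.argsort(((centroids - query) ** 2).sum(-1))[:n_probe]
    candidates = np.concatenate([lists[c] for c in order if len(lists[c])])
    dists = ((vectors[candidates] - query) ** 2).sum(-1)
    return candidates[np.argsort(dists)[:k]]
\end{verbatim}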
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., that entities may exhibit diverse roles within task relations and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion that learns adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references can capture their fine-grained semantic meanings and thus render more expressive representations, which are more predictive for knowledge acquisition in the few-shot scenario. Evaluation on link prediction over two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
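A hedged sketch of the query-aware aggregation idea (the names are ours and the paper's actual aggregator is more elaborate): each few-shot reference embedding is weighted by its relevance to the current query before aggregation.
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_aware_aggregate(query_vec, reference_vecs):
    # reference_vecs: (n_references, d); query_vec: (d,)
    scores = reference_vecs @ query_vec     # relevance of each reference to the query
    weights = softmax(scores)               # different contributions per reference
    return weights @ reference_vecs         # aggregated relation representation
\end{verbatim}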
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems associated with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
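Two of the surveyed techniques are simple to sketch in a hedged, library-agnostic way (illustrative only, not any specific paper's method): unstructured magnitude pruning and symmetric uniform quantization of a weight tensor.
\begin{verbatim}
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    # zero out the smallest-magnitude weights (unstructured pruning)
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights, n_bits=8):
    # symmetric uniform quantization to signed n-bit integers
    scale = np.abs(weights).max() / (2 ** (n_bits - 1) - 1)
    scale = scale if scale > 0 else 1.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale   # dequantize with q.astype(np.float32) * scale
\end{verbatim}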
For better user experience and business effectiveness, Click-Through Rate (CTR) prediction has been one of the most important tasks in E-commerce. Although numerous CTR prediction models have been proposed, learning good representations of items from multimodal features is still under-investigated, considering that an item in E-commerce usually contains multiple heterogeneous modalities. Previous works either concatenate the multiple modality features, which is equivalent to assigning a fixed importance weight to each modality, or learn dynamic weights of different modalities for different items through techniques such as the attention mechanism. However, there usually exists common redundant information across multiple modalities, and dynamic weights computed from this redundant information may not correctly reflect the different importance of each modality. To address this, we explore the complementarity and redundancy of modalities by treating modality-specific and modality-invariant features differently. We propose a novel Multimodal Adversarial Representation Network (MARN) for the CTR prediction task. A multimodal attention network first calculates the weights of multiple modalities for each item according to its modality-specific features. Then a multimodal adversarial network learns modality-invariant representations, where a double-discriminator strategy is introduced. Finally, we obtain the multimodal item representations by combining both modality-specific and modality-invariant representations. We conduct extensive experiments on both public and industrial datasets, and the proposed method consistently achieves remarkable improvements over the state-of-the-art methods. Moreover, the approach has been deployed in an operational E-commerce system, and online A/B testing further demonstrates its effectiveness.
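A hedged sketch of the modality-attention step (illustrative only; MARN's adversarial component and exact architecture are beyond this snippet, and the names below are ours): per-item weights are computed from the modality-specific features and used to fuse them.
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_item_modalities(modality_feats, attention_params):
    # modality_feats, attention_params: lists of (d,) arrays, one per modality
    scores = np.array([f @ a for f, a in zip(modality_feats, attention_params)])
    weights = softmax(scores)                           # dynamic importance per modality
    modality_specific = sum(w * f for w, f in zip(weights, modality_feats))
    return modality_specific  # in MARN this is combined with a modality-invariant part
\end{verbatim}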