Splitting an inference model between the device, the edge server, and the cloud can greatly improve the performance of edge intelligence (EI). In addition, non-orthogonal multiple access (NOMA), a key enabling technology of B5G/6G, can support massive connectivity with high spectrum efficiency. Motivated by these benefits, integrating NOMA with model splitting in mobile edge computing (MEC) to further reduce inference latency becomes attractive. However, NOMA-based communication during split inference has not been properly considered in previous works. In this paper, we therefore integrate NOMA into split inference in MEC and propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge. Specifically, when a mobile user has a large model-inference task to be computed in the NOMA-based MEC system, it takes the energy consumption of both the device and the edge server, as well as the inference latency, into account to find the optimal model split strategy, subchannel allocation strategy (uplink and downlink), and transmission power allocation strategy (uplink and downlink). Since minimum inference delay and minimum energy consumption cannot be achieved simultaneously, and the subchannel allocation and model split variables are discrete, a gradient descent (GD) based algorithm is adopted to find the optimal tradeoff between them. Moreover, a loop-iteration GD approach (Li-GD) is proposed to reduce the complexity of the GD algorithm caused by the discrete parameters. The properties of the proposed algorithm are also analyzed, demonstrating its effectiveness.
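The structure of a loop-iteration approach (discrete variables handled by an outer enumeration loop, continuous transmit powers optimized by gradient descent inside) can be illustrated with a minimal sketch. The latency/energy models, the objective weight, and all function names below are illustrative assumptions and not the paper's formulation.

```python
import numpy as np

def latency(split, p_ul, p_dl):
    # Hypothetical latency model: device compute up to `split`, uplink/downlink
    # transfer at rates log2(1 + p), edge compute for the remaining layers.
    return split * 0.8 + 1.0 / np.log2(1 + p_ul) + 1.0 / np.log2(1 + p_dl) + (10 - split) * 0.2

def energy(split, p_ul, p_dl):
    # Hypothetical energy model: device compute plus transmit power cost.
    return split * 0.5 + 0.3 * (p_ul + p_dl)

def objective(split, p, w=0.5):
    # Weighted latency-energy tradeoff objective.
    p_ul, p_dl = p
    return w * latency(split, p_ul, p_dl) + (1 - w) * energy(split, p_ul, p_dl)

def loop_iteration_gd(num_layers=10, steps=200, lr=0.05):
    """Outer loop over discrete split points, inner gradient descent on powers."""
    best = (None, None, np.inf)
    for split in range(1, num_layers + 1):       # discrete variable handled by enumeration
        p = np.array([1.0, 1.0])                 # continuous transmit powers (uplink, downlink)
        for _ in range(steps):
            grad, eps = np.zeros(2), 1e-4
            for i in range(2):                   # numerical gradient of the weighted objective
                dp = p.copy(); dp[i] += eps
                grad[i] = (objective(split, dp) - objective(split, p)) / eps
            p = np.clip(p - lr * grad, 0.1, 5.0) # project onto the feasible power range
        val = objective(split, p)
        if val < best[2]:
            best = (split, p, val)
    return best

print(loop_iteration_gd())
```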
Customers' load profiles are critical resources for data analytics applications in modern power systems. However, historical load profiles are often insufficient for data analysis due to collection costs and data privacy concerns. To address this data shortage, load profile synthesis is an effective technique that provides synthetic training data with which customers can build high-performance data-driven models. Nonetheless, it remains challenging to synthesize high-quality load profiles for each customer using generation models trained on the respective customer's data, owing to the high heterogeneity of customer load. In this paper, we propose a novel customized load profile synthesis method based on conditional diffusion models for heterogeneous customers. Specifically, we first cast customized synthesis as a conditional data generation problem. We then extend traditional diffusion models to conditional diffusion models to realize conditional generation, which can synthesize exclusive load profiles for each customer according to the customer's load characteristics and application demands. In addition, to implement the conditional diffusion models, we design a noise estimation model with stacked residual layers, which improves generation performance through skip connections. An attention mechanism is also utilized to better capture the complex temporal dependencies of load profiles. Finally, numerical case studies on a public dataset validate the effectiveness and superiority of the proposed method.
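A minimal sketch of one conditional-diffusion training step may clarify the idea: noise is added to a load profile at a random timestep and a network conditioned on a customer-level vector regresses that noise. The MLP noise estimator, the 96-point daily profile length, and the 8-dimensional condition are illustrative assumptions; the paper's model uses stacked residual layers and attention instead.

```python
import torch
import torch.nn as nn

T = 100                                     # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class NoiseEstimator(nn.Module):
    """Toy stand-in for the residual/attention network: predicts the added
    noise from the noisy profile, the timestep, and the condition vector."""
    def __init__(self, profile_len=96, cond_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(profile_len + 1 + cond_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, profile_len),
        )

    def forward(self, x_t, t, cond):
        t_emb = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x_t, t_emb, cond], dim=-1))

def training_step(model, x0, cond):
    """One conditional-DDPM step: diffuse x0 to a random t, regress the noise."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion
    return nn.functional.mse_loss(model(x_t, t, cond), noise)

model = NoiseEstimator()
x0 = torch.randn(32, 96)     # normalized daily load profiles (96 x 15-min points, assumed)
cond = torch.randn(32, 8)    # customer-level condition (load characteristics, assumed)
loss = training_step(model, x0, cond)
loss.backward()
```

At sampling time the same condition vector steers the reverse process, which is what makes the synthesized profiles customer-specific.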
Optimal allocation of agricultural water during droughts is an important global problem. Addressing it requires considering many aspects, including the welfare of farmers, the economy, and the environment. Against this backdrop, our work focuses on several resource-matching problems that account for agents with multi-crop portfolios, geographic constraints, and fairness. First, we address a matching problem whose goal is to maximize a welfare function in two-sided markets where buyers' requirements and sellers' supplies are represented by value functions that assign prices (or costs) to specified volumes of water. For the setting in which the value functions satisfy certain monotonicity properties, we present an efficient algorithm that maximizes a social welfare function. When there are minimum water requirement constraints, we present a randomized algorithm that ensures the constraints are satisfied in expectation. For a single-seller, multiple-buyer setting with fairness constraints, we design an efficient algorithm that maximizes the minimum level of satisfaction of any buyer. We also present computational complexity results that highlight the limits on the generalizability of our results. We evaluate the algorithms developed in our work through experiments on both real-world and synthetic data sets with respect to drought severity, value functions, and seniority of agents.
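For intuition about the single-seller, multiple-buyer fairness objective, here is a simple illustrative sketch that maximizes the minimum satisfaction level under the simplifying assumption that a buyer's satisfaction is the fraction of its requested volume received. This is not the paper's value-function model or algorithm; the binary-search structure is only meant to show how a common satisfaction level can be searched for.

```python
def max_min_allocation(supply, demands, tol=1e-6):
    """Allocate `supply` among buyers to maximize the minimum satisfaction
    level t, where satisfaction = allocated / demanded (an assumption).
    Binary search on the common level t in [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        t = (lo + hi) / 2
        needed = sum(t * d for d in demands)   # water needed for everyone to reach level t
        if needed <= supply:
            lo = t
        else:
            hi = t
    return [lo * d for d in demands]

alloc = max_min_allocation(supply=100.0, demands=[80.0, 40.0, 60.0])
print(alloc, sum(alloc))   # every buyer receives the same fraction of its demand
```

With nonlinear value functions the same search would compare supply against the total volume needed to bring every buyer to level t under its own value function.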
The universality of the point cloud format enables many 3D applications, making point cloud compression a critical phase in practice. Sampled as discrete 3D points, a point cloud approximates 2D surface(s) embedded in 3D with a finite bit-depth. However, the point distribution of a practical point cloud changes drastically as its bit-depth increases, requiring different methodologies for effective consumption and analysis. In this regard, we propose a heterogeneous point cloud compression (PCC) framework. We unify typical point cloud representations -- point-based, voxel-based, and tree-based -- and their associated backbones under a learning-based framework to compress an input point cloud at different bit-depth levels. Recognizing the importance of voxel-domain processing, we augment the framework with a context-aware upsampling module for decoding and an enhanced voxel transformer for feature aggregation. Extensive experiments demonstrate the state-of-the-art performance of our proposal on a wide range of point clouds.
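The claim that point distribution changes drastically with bit-depth can be illustrated with a short sketch: quantizing the same surface-like point set at increasing bit-depths moves from a few densely occupied voxels to many isolated near-surface samples. This is a purely illustrative preprocessing example, not part of the proposed framework.

```python
import numpy as np

def voxelize(points, bit_depth):
    """Quantize unit-cube points to integer voxel coordinates at `bit_depth`
    bits per axis and return the set of occupied voxels."""
    n = 1 << bit_depth
    coords = np.clip((points * n).astype(np.int64), 0, n - 1)
    return np.unique(coords, axis=0)

# Points sampled near a 2D surface embedded in 3D (a tilted plane with noise).
rng = np.random.default_rng(0)
uv = rng.random((50_000, 2))
z = 0.3 * uv[:, 0] + 0.5 * uv[:, 1] + 0.01 * rng.standard_normal(50_000)
pts = np.clip(np.stack([uv[:, 0], uv[:, 1], z], axis=1), 0.0, 1.0)

for d in (3, 6, 9):   # occupancy shifts from dense blocks to sparse surface samples
    occ = voxelize(pts, d)
    print(f"bit-depth {d}: {len(occ)} occupied voxels out of {(1 << d) ** 3}")
```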
Adaptivity, reconfigurability, and intelligence are key features of next-generation wireless networks, which must meet the increasingly diverse quality of service (QoS) requirements of future applications. Conventional protocol designs, however, struggle to provide flexibility and agility in the face of changing radio environments, traffic types, and user service requirements. In this paper, we explore the potential of deep reinforcement learning (DRL), in particular Proximal Policy Optimization (PPO), to design and configure intelligent, application-specific medium access control (MAC) protocols. We propose a framework that enables the addition, removal, or modification of protocol features to meet individual application needs. The DRL-based channel access policy enables the protocol to adapt to and optimize for the network and radio environment. Through extensive simulations, we demonstrate the superior performance of the learned protocols over legacy IEEE 802.11ac in terms of throughput and latency.
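A minimal sketch of learning a channel-access policy with PPO is shown below, assuming the gymnasium and stable-baselines3 libraries. The toy environment (observe recent channel busyness and queue length, choose to transmit or defer each slot, reward throughput and penalize collisions) and its parameters are illustrative assumptions, not the paper's simulator or protocol model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO   # assumes stable-baselines3 is installed

class ChannelAccessEnv(gym.Env):
    """Toy slotted channel-access environment."""
    def __init__(self, busy_prob=0.4, horizon=200):
        super().__init__()
        self.action_space = spaces.Discrete(2)               # 0 = defer, 1 = transmit
        self.observation_space = spaces.Box(0.0, 1.0, shape=(2,), dtype=np.float32)
        self.busy_prob, self.horizon = busy_prob, horizon

    def _obs(self):
        return np.array([self.last_busy, min(self.queue / 10.0, 1.0)], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.queue, self.last_busy = 0, 5, 0.0
        return self._obs(), {}

    def step(self, action):
        busy = self.np_random.random() < self.busy_prob       # competing traffic this slot
        reward = 0.0
        if action == 1:
            if busy:
                reward = -1.0                                  # collision
            elif self.queue > 0:
                reward, self.queue = 1.0, self.queue - 1       # successful transmission
        self.queue = min(self.queue + int(self.np_random.random() < 0.3), 10)  # new arrival
        self.last_busy, self.t = float(busy), self.t + 1
        return self._obs(), reward, False, self.t >= self.horizon, {}

model = PPO("MlpPolicy", ChannelAccessEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```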
The wide adoption of mobile devices capable of multimedia services, the availability of better networks with higher bandwidths, and the emergence of platforms offering digital content have led to the increasing popularity of multimedia streaming services. However, these services are subject to factors that affect the quality perceived by users, such as service interruptions or quality oscillations due to changing network conditions, particularly in mobile networks. Dynamic Adaptive Streaming over HTTP (DASH) leverages content-distribution networks and the capabilities of multimedia devices to allow players to dynamically adapt the quality of the media stream to the available bandwidth and the device characteristics. While many elements of DASH are standardized, the algorithms providing the dynamic adaptation of the streaming are not; adaptation is often based on throughput estimation or a buffer control mechanism. In this paper, we present a new throughput-estimation adaptation algorithm based on a statistical method called the Adaptive Forgetting Factor (AFF). Using this method, the adaptation logic can react appropriately to the conditions of different types of networks. A set of experiments with different traffic profiles shows that the proposed algorithm improves video quality in both wired and wireless environments.
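The general idea of a forgetting-factor throughput estimator can be sketched as follows: stable measurements push the factor toward 1 (long memory), while volatile measurements shrink it so the estimate reacts quickly. The adaptation rule below is a simple heuristic stand-in for the statistical AFF method used in the paper, and the parameter values are assumptions.

```python
class AdaptiveThroughputEstimator:
    """Exponentially weighted throughput estimate with an adaptive forgetting
    factor (heuristic illustration only)."""
    def __init__(self, lam_min=0.7, lam_max=0.99):
        self.lam_min, self.lam_max = lam_min, lam_max
        self.estimate = None
        self.instability = 0.0

    def update(self, sample_kbps):
        if self.estimate is None:
            self.estimate = sample_kbps
            return self.estimate
        rel_err = abs(sample_kbps - self.estimate) / max(self.estimate, 1e-9)
        self.instability = 0.9 * self.instability + 0.1 * rel_err       # track volatility
        lam = self.lam_max - (self.lam_max - self.lam_min) * min(self.instability, 1.0)
        self.estimate = lam * self.estimate + (1.0 - lam) * sample_kbps  # forgetting-factor update
        return self.estimate

est = AdaptiveThroughputEstimator()
for s in (4000, 4100, 3900, 1500, 1400, 1600):   # a sudden bandwidth drop
    print(round(est.update(s)))
```

A DASH player would then request the highest representation whose bitrate stays below a safety margin of this estimate.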
Graph neural networks (GNNs) have delivered a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain their prediction mechanisms, for example through GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them according to the explanation methods they use. We further summarize the common performance metrics for GNN explanations and point out several future research directions.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing memory requirements, energy consumption, and the number of operations without significantly decreasing accuracy. This paper surveys progress in low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques fall into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
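Two techniques from category (1) are easy to illustrate on a single weight tensor: magnitude pruning (zero out the smallest weights) and symmetric uniform post-training quantization (map weights to low-bit integers plus a scale). The sparsity level and bit-width below are arbitrary choices for the sketch.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def uniform_quantize(w, bits=8):
    """Symmetric uniform quantization to signed `bits`-bit integers and back."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax if np.max(np.abs(w)) > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale, q.astype(np.int8), scale   # dequantized weights, int codes, scale

w = np.random.randn(256, 128).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.7)
w_deq, w_int8, scale = uniform_quantize(w_sparse, bits=8)
print("nonzeros:", np.count_nonzero(w_sparse), "max abs error:", np.abs(w_sparse - w_deq).max())
```

In practice, pruning is usually followed by fine-tuning, and quantization may be applied during training (quantization-aware training) to recover accuracy.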
Detecting 3D objects in point clouds plays a pivotal role in many real-world applications. However, 3D object detection performance lags behind that of 2D object detection due to the lack of powerful 3D feature extraction methods. To address this issue, we propose a 3D backbone network that learns rich 3D feature maps using sparse 3D CNN operations for 3D object detection in point clouds. The 3D backbone network inherently learns 3D features from nearly raw data, without compressing the point cloud into multiple 2D images, and generates rich feature maps for object detection. The sparse 3D CNN takes full advantage of the sparsity of 3D point clouds to accelerate computation and save memory, which makes the 3D backbone network feasible. Empirical experiments on the KITTI benchmark show that the proposed method achieves state-of-the-art performance for 3D object detection.
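The sparsity being exploited can be made concrete with a small preprocessing sketch: a sparse (coordinate-list) voxel representation stores only occupied voxels and their features, which is what a sparse 3D CNN consumes instead of a dense grid. The voxel size, feature choice, and LiDAR-like extent below are assumptions; this is not the paper's network.

```python
import numpy as np

def to_sparse_voxels(points, voxel_size=0.2):
    """Convert an (N, 3) point cloud to sparse voxel coordinates plus simple
    per-voxel features (mean offset from the voxel center and point count)."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    uniq, inv, counts = np.unique(coords, axis=0, return_inverse=True, return_counts=True)
    feats = np.zeros((len(uniq), 4), dtype=np.float32)
    centers = (uniq + 0.5) * voxel_size
    np.add.at(feats[:, :3], inv, (points - centers[inv]).astype(np.float32))
    feats[:, :3] /= counts[:, None]        # mean offset from voxel center
    feats[:, 3] = counts                   # point count per voxel
    return uniq, feats

pts = np.random.rand(100_000, 3) * np.array([70.0, 80.0, 4.0])   # LiDAR-like extent (assumed)
coords, feats = to_sparse_voxels(pts)
dense_cells = int(np.prod(np.ceil(pts.max(0) / 0.2)))
print(f"{len(coords)} occupied voxels vs {dense_cells} dense grid cells")
```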
Recommender systems play a crucial role in mitigating information overload by suggesting personalized items or services to users. The vast majority of traditional recommender systems treat recommendation as a static process and follow a fixed strategy. In this paper, we propose a novel recommender system capable of continuously improving its strategy during interactions with users. We model the sequential interactions between users and the recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategy by recommending items on a trial-and-error basis and receiving reinforcement signals from user feedback. In particular, we introduce an online user-agent interaction environment simulator, which can pre-train and evaluate model parameters offline before the model is applied online. Moreover, we validate the importance of list-wise recommendations during user-agent interactions and develop a novel approach to incorporate them into the proposed framework, LIRD. Experimental results on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.
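The MDP framing (state = recent interaction history, action = recommended item, reward = user feedback) and the role of an offline simulator can be sketched as follows. A tabular Q-learning agent and a synthetic click model stand in for the actor-critic architecture and data-driven simulator used in LIRD; all names and parameters are assumptions.

```python
import random
from collections import defaultdict

class UserSimulator:
    """Synthetic user: clicks an item with a hidden, item-dependent probability
    (stand-in for the offline pre-training/evaluation simulator)."""
    def __init__(self, n_items=20, seed=0):
        rng = random.Random(seed)
        self.click_prob = [rng.random() for _ in range(n_items)]

    def feedback(self, item):
        return 1.0 if random.random() < self.click_prob[item] else 0.0

def run(episodes=2000, n_items=20, hist_len=2, eps=0.1, alpha=0.1, gamma=0.9):
    sim, Q = UserSimulator(n_items), defaultdict(float)
    for _ in range(episodes):
        state = tuple([-1] * hist_len)                       # last clicked items (the MDP state)
        for _ in range(10):                                  # one browsing session
            if random.random() < eps:
                action = random.randrange(n_items)           # exploration
            else:
                action = max(range(n_items), key=lambda a: Q[(state, a)])
            reward = sim.feedback(action)                    # click / skip from the simulator
            next_state = (state + (action,))[-hist_len:] if reward else state
            best_next = max(Q[(next_state, a)] for a in range(n_items))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

Q = run()
```

Extending the action from a single item to a ranked list is what distinguishes the list-wise setting studied in the paper.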
Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention, and noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotation instructions are presented to better describe remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed data set demonstrate that the content of remote sensing images can be comprehensively described by the generated language descriptions. The data set is available at //github.com/2051/RSICD_optimal