Energy storage can play an important role in the energy management of end users. To promote efficient utilization of energy storage, we develop a novel business model that enables virtual storage sharing among a group of users. Specifically, a storage aggregator invests in and operates a central physical storage unit, virtualizing it into separable virtual capacities that are sold to users. Each user purchases virtual capacity and utilizes it to reduce its energy cost. We formulate the interaction between the aggregator and the users as a two-stage optimization problem. In Stage 1, over the investment horizon, the aggregator determines its investment and pricing decisions. In Stage 2, in each operational horizon, each user decides how much virtual capacity to purchase as well as how to operate the virtual storage. We characterize a stepwise form of the optimal solution of the Stage-2 problem and a piecewise linear structure of the optimal profit of the Stage-1 problem, both with respect to the virtual capacity price. Based on this solution structure, we design an algorithm that attains the optimal solution of the two-stage problem. In our simulations, the proposed storage virtualization model reduces the aggregator's physical energy storage investment by 54.3% and the users' total costs by 34.7%, compared with the case where users acquire their own physical storage.
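The stepwise-demand and piecewise-linear-profit structure described above implies that an optimal price can be found by scanning a finite set of breakpoints. A minimal sketch under a stylized model in which each user has a decreasing list of per-unit cost savings (all function names and numbers are hypothetical illustrations, not the paper's actual formulation):

```python
# Toy sketch of the two-stage structure: stepwise user demand in the price p,
# piecewise linear aggregator profit, optimum attained at a breakpoint.
# All modeling choices here are simplifying assumptions for illustration.

def user_demand(marginal_savings, p):
    """Units of virtual capacity one user buys at per-unit price p.
    marginal_savings: the user's decreasing per-unit savings for units 1, 2, ..."""
    return sum(1 for s in marginal_savings if s >= p)

def aggregator_profit(users, p, unit_invest_cost):
    """Stage-1 profit: per-unit margin times total stepwise demand."""
    total = sum(user_demand(u, p) for u in users)
    return (p - unit_invest_cost) * total

def best_price(users, unit_invest_cost):
    """Profit is piecewise linear in p, so only the users' marginal-value
    breakpoints need to be checked."""
    breakpoints = sorted({s for u in users for s in u})
    return max(breakpoints,
               key=lambda p: aggregator_profit(users, p, unit_invest_cost))
```

For example, with two users whose per-unit savings are [10, 8, 5] and [9, 4] and a unit investment cost of 3, scanning the breakpoints picks the price 8.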
Dual-Functional Radar-Communication systems enhance the benefits of communications and radar sensing by implementing both on the same hardware platform and sharing common RF resources. A key emerging concern in designing such Dual-Functional Radar-Communication systems is maximizing energy efficiency. In this paper, we consider a Dual-Functional Radar-Communication system performing simultaneous multi-user communications and radar sensing, and investigate its energy-efficiency behaviour with respect to the number of active transmission elements. Specifically, we formulate a problem to find the optimal precoders and the number of active RF chains for maximum energy efficiency, taking into consideration the power consumption of the low-resolution Digital-to-Analog Converters on each RF chain under communications and radar performance constraints. We consider Rate-Splitting Multiple Access to perform multi-user communications with both perfect and imperfect Channel State Information at the Transmitter. The formulated non-convex optimization problem is solved by means of a novel algorithm. We demonstrate by numerical results that Rate-Splitting Multiple Access achieves improved energy efficiency while employing a smaller number of RF chains than Space Division Multiple Access, owing to its generalized structure and improved interference management capabilities.
In this paper, we design a new smart software-defined radio access network (RAN) architecture with important properties such as flexibility and traffic awareness for sixth-generation (6G) wireless networks. In particular, we consider a hierarchical resource allocation framework for the proposed smart soft-RAN model, where the software-defined network (SDN) controller is the first and foremost layer of the framework. This unit dynamically monitors the network and intelligently selects a network operation type based on either a distributed or a centralized resource allocation architecture. Our aim is to make the network more scalable and more flexible in terms of achievable data rate, overhead, and complexity indicators. To this end, we introduce a new metric, throughput overhead complexity (TOC), for the proposed machine learning-based algorithm, which trades off these performance indicators. In particular, the TOC-based decision making is solved via deep reinforcement learning (DRL), which determines an appropriate resource allocation policy. Furthermore, we employ the soft actor-critic method, which is more accurate, scalable, and robust than other learning methods. Simulation results demonstrate that the proposed smart network achieves better performance in terms of TOC than fixed centralized or distributed resource management schemes that lack dynamism. Moreover, our proposed algorithm outperforms conventional learning methods employed in other state-of-the-art network designs.
Base stations in 5G and beyond use advanced antenna systems (AASs), where multiple passive antenna elements and radio units are integrated into a single box. A critical bottleneck of such a system is the digital fronthaul between the AAS and the baseband unit (BBU), which has limited capacity. In this paper, we study an AAS used for precoded downlink transmission over a multi-user multiple-input multiple-output (MU-MIMO) channel. First, we present the baseline quantization-unaware precoding scheme, obtained when a precoder is computed at the BBU and then quantized to be sent over the fronthaul. We then propose a new precoding design that is aware of the fronthaul quantization. We formulate an optimization problem to minimize the mean squared error at the receiver side and reformulate it as a mixed-integer program to solve it. The numerical results show that our proposed precoding greatly outperforms quantization-unaware precoding in terms of sum rate.
This paper proposes a generalised propulsion energy consumption model (PECM) for rotary-wing unmanned aerial vehicles (UAVs) that accounts for the practical thrust-to-weight ratio (TWR) as a function of the velocity, acceleration, and direction changes of the UAV. To verify the effectiveness of the proposed PECM, we consider a UAV-enabled communication system, where a rotary-wing UAV serves multiple ground users as an aerial base station. We aim to maximize the energy efficiency (EE) of the UAV by jointly optimizing the user scheduling and UAV trajectory variables. However, the formulated problem is a non-convex fractional integer programming problem whose optimal solution is challenging to obtain. To tackle this, we propose an efficient iterative algorithm that decomposes the original problem into two sub-problems and obtains a suboptimal solution based on the successive convex approximation technique. Simulation results show that the UAV trajectories optimized under the proposed PECM are smoother and that the corresponding EE is significantly improved compared to other benchmark schemes.
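For context, a widely used baseline rotary-wing propulsion power model (which a generalised PECM of this kind extends with velocity- and acceleration-dependent TWR terms) expresses power as the sum of blade-profile, induced, and parasite components. The parameter values below are illustrative, not the paper's:

```python
import math

# Baseline rotary-wing propulsion power model, P(V) for level flight at speed V.
# All parameter values are illustrative placeholders, not from this paper.
P0, Pi = 79.86, 88.63       # blade profile and induced power in hover (W)
U_tip, v0 = 120.0, 4.03     # rotor tip speed, mean rotor induced velocity (m/s)
d0, rho = 0.6, 1.225        # fuselage drag ratio, air density (kg/m^3)
s, A = 0.05, 0.503          # rotor solidity, rotor disc area (m^2)

def propulsion_power(V):
    """Propulsion power (W) at forward speed V (m/s)."""
    blade = P0 * (1 + 3 * V**2 / U_tip**2)
    induced = Pi * math.sqrt(math.sqrt(1 + V**4 / (4 * v0**4))
                             - V**2 / (2 * v0**2))
    parasite = 0.5 * d0 * rho * s * A * V**3
    return blade + induced + parasite
```

At V = 0 the model reduces to the hover power P0 + Pi, and energy efficiency studies trade the rate gained by moving closer to users against the extra propulsion power at higher speeds.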
Security analyses for consensus protocols in blockchain research have primarily focused on the synchronous model, where point-to-point communication delays are upper bounded by a known finite constant. Such models are unrealistic in noisy settings, where messages may be lost (i.e., incur infinite delay). In this work, we study the impact of message losses on the security of the proof-of-work longest-chain protocol. We introduce a new communication model, the $0$-$\infty$ model, to capture the impact of message loss, and derive a region of tolerable adversarial power under which the consensus protocol is secure. The guarantees are derived as a simple bound on the probability that a transaction violates desired security properties. Specifically, we show that this violation probability decays almost exponentially in the security parameter. Our approach involves constructing combinatorial objects from blocktrees and identifying random variables associated with them that are amenable to analysis. This approach improves existing bounds and extends the known regime of tolerable adversarial threshold to settings where messages may be lost.
Modern connected vehicles (CVs) frequently require diverse types of content for mission-critical decision-making and onboard users' entertainment. These contents must be fully delivered to the requesting CVs within stringent deadlines, which existing radio access technology (RAT) solutions may fail to ensure. Motivated by this consideration, this paper exploits content caching with a software-defined, user-centric virtual cell (VC) based RAT solution for delivering the requested contents from a proximity edge server. Moreover, to capture the heterogeneous demands of the CVs, we introduce a preference-popularity tradeoff in their content request model. We then formulate a joint optimization problem over content placement, CV scheduling, VC configuration, VC-CV association, and radio resource allocation to minimize the long-term content delivery delay. However, the joint problem is highly complex and cannot be solved efficiently in polynomial time. As such, we decompose the original problem into a cache placement problem and a content delivery delay minimization problem given the cache placement policy. We use deep reinforcement learning (DRL) as a learning solution for the first sub-problem. Furthermore, we transform the delay minimization problem into a priority-based weighted sum rate (WSR) maximization problem, which is solved by leveraging maximum weighted bipartite matching (MWBM) and a simple linear search algorithm. Our extensive simulation results demonstrate the effectiveness of the proposed method.
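The MWBM step above can be illustrated on a tiny assignment instance with a brute-force maximum-weight matching (the weight matrix is hypothetical; a real implementation would use the Hungarian algorithm rather than enumerating permutations):

```python
from itertools import permutations

# Brute-force maximum-weight bipartite matching for illustration only.
# weights[i][j] stands in for the priority-weighted rate of assigning
# CV i to radio resource j; the square shape is a simplifying assumption.
def max_weight_matching(weights):
    """Return (best total weight, assignment tuple), where assignment[i]
    is the resource matched to CV i."""
    n = len(weights)
    best, best_assign = float('-inf'), None
    for perm in permutations(range(n)):
        w = sum(weights[i][perm[i]] for i in range(n))
        if w > best:
            best, best_assign = w, perm
    return best, best_assign
```

For instance, with weights [[3, 1], [2, 4]], the matching (CV 0 to resource 0, CV 1 to resource 1) attains the maximum total weight 7.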
Network Function Virtualization (NFV) carries the potential for on-demand deployment of network algorithms in virtual machines (VMs). In large clouds, however, VM resource allocation incurs delays that hinder the dynamic scaling of such NFV deployment. Parallel resource management is a promising direction for boosting performance, but it may significantly increase the communication overhead and the decline ratio of deployment attempts. Our work analyzes the performance of various placement algorithms and provides empirical evidence that state-of-the-art parallel resource management dramatically increases the decline ratio of deterministic algorithms but hardly affects randomized algorithms. We, therefore, introduce APSR -- an efficient parallel random resource management algorithm that requires information only from a small number of hosts and dynamically adjusts the degree of parallelism to provide provable decline ratio guarantees. We formally analyze APSR, evaluate it on real workloads, and integrate it into the popular OpenStack cloud management platform. Our evaluation shows that APSR matches the throughput provided by other parallel schedulers, while achieving up to 13x lower decline ratio and a reduction of over 85% in communication overheads.
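A sketch of the kind of sampling-based placement step described above, assuming a power-of-d-choices style rule that queries only a few hosts per attempt (the function name, data layout, and the exact rule are our assumptions, not APSR itself):

```python
import random

# Randomized placement that queries only d sampled hosts per VM, in the
# spirit of parallel random schedulers that need state from few hosts.
# The data model (a dict of free capacities) is a simplifying assumption.
def place_vm(hosts, demand, d=2, rng=random):
    """hosts: dict host_id -> free capacity. Samples d hosts, places the VM
    on a random feasible one, and returns its host_id; returns None when no
    sampled host fits (a declined placement attempt)."""
    sampled = rng.sample(list(hosts), min(d, len(hosts)))
    feasible = [h for h in sampled if hosts[h] >= demand]
    if not feasible:
        return None
    h = rng.choice(feasible)
    hosts[h] -= demand
    return h
```

Because each attempt touches only d hosts, many such placements can run in parallel with low communication overhead; tuning d is one way to trade decline ratio against messaging cost.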
This paper studies the sample complexity of learning the $k$ unknown centers of a balanced Gaussian mixture model (GMM) in $\mathbb{R}^d$ with spherical covariance matrix $\sigma^2\mathbf{I}$. In particular, we are interested in the following question: what is the maximal noise level $\sigma^2$, for which the sample complexity is essentially the same as when estimating the centers from labeled measurements? To that end, we restrict attention to a Bayesian formulation of the problem, where the centers are uniformly distributed on the sphere $\sqrt{d}\mathcal{S}^{d-1}$. Our main results characterize the exact noise threshold $\sigma^2$ below which the GMM learning problem, in the large system limit $d,k\to\infty$, is as easy as learning from labeled observations, and above which it is substantially harder. The threshold occurs at $\frac{\log k}{d} = \frac12\log\left( 1+\frac{1}{\sigma^2} \right)$, which is the capacity of the additive white Gaussian noise (AWGN) channel. Thinking of the set of $k$ centers as a code, this noise threshold can be interpreted as the largest noise level for which the error probability of the code over the AWGN channel is small. Previous works on the GMM learning problem have identified the minimum distance between the centers as a key parameter in determining the statistical difficulty of learning the corresponding GMM. While our results are only proved for GMMs whose centers are uniformly distributed over the sphere, they hint that perhaps it is the decoding error probability associated with the center constellation as a channel code that determines the statistical difficulty of learning the corresponding GMM, rather than just the minimum distance.
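The stated threshold can be inverted in closed form: setting $\frac{\log k}{d} = \frac12\log\left(1+\frac{1}{\sigma^2}\right)$ and solving for the noise level gives $\sigma^2 = 1/(k^{2/d}-1)$. A quick numerical check of this identity:

```python
import math

# Critical noise level solving log(k)/d = (1/2) * log(1 + 1/sigma^2),
# i.e. the point where the AWGN channel capacity equals log(k)/d.
def critical_noise(k, d):
    return 1.0 / (k ** (2.0 / d) - 1.0)

def awgn_capacity(sigma2):
    """Capacity (nats per dimension) of the AWGN channel at noise sigma2."""
    return 0.5 * math.log(1.0 + 1.0 / sigma2)
```

At sigma2 = critical_noise(k, d), awgn_capacity(sigma2) recovers log(k)/d exactly, matching the channel-coding interpretation: below this noise level the k centers, viewed as a code of rate log(k)/d, are decodable with small error probability.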
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach that improves VQA performance by exploiting this connection: it jointly generates captions targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions with an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the Test-standard set using a single model) while simultaneously generating question-relevant captions.
Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A crucial problem in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related resource allocation issues for an Infrastructure Provider (InP) to efficiently embed the virtual networks requested by Virtual Network Operators (VNOs) who share the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
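A toy sketch of greedy node-and-link embedding in the spirit described above (the actual heuristic coordinates the two steps more tightly; the data structures, greedy ordering, and tie-breaking here are our assumptions):

```python
from collections import deque

# Toy VNE heuristic: map virtual nodes greedily onto the substrate nodes with
# the most residual CPU, then map virtual links onto BFS shortest paths.
# CPU is the only node resource modeled; link bandwidth is ignored for brevity.
def embed(sub_cpu, sub_links, vnr_cpu, vnr_links):
    """sub_cpu: {node: cpu}; sub_links: {node: set of neighbors};
    vnr_cpu: {vnode: cpu demand}; vnr_links: [(vnode_a, vnode_b)].
    Returns (node_mapping, link_paths) or None if the request is rejected."""
    mapping, cpu = {}, dict(sub_cpu)
    for v, need in sorted(vnr_cpu.items(), key=lambda x: -x[1]):
        cands = [n for n in cpu
                 if n not in mapping.values() and cpu[n] >= need]
        if not cands:
            return None                       # reject: no feasible node
        n = max(cands, key=lambda m: cpu[m])  # most residual CPU first
        mapping[v], cpu[n] = n, cpu[n] - need
    paths = {}
    for a, b in vnr_links:
        src, dst = mapping[a], mapping[b]
        prev, q = {src: None}, deque([src])   # BFS for a shortest path
        while q:
            u = q.popleft()
            if u == dst:
                break
            for w in sub_links[u]:
                if w not in prev:
                    prev[w] = u
                    q.append(w)
        if dst not in prev:
            return None                       # reject: substrate disconnected
        path, u = [], dst
        while u is not None:
            path.append(u)
            u = prev[u]
        paths[(a, b)] = path[::-1]
    return mapping, paths
```

Acceptance ratio and revenue in VNE simulations then come from running such an embedder over a stream of arriving requests and counting which are accepted.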