Virtual cell optimization clusters cells into neighborhoods and performs optimized resource allocation over each neighborhood. In prior work, we proposed resource allocation schemes that mitigate the interference caused by transmissions within the same virtual cell. This work aims at mitigating both the interference caused by transmissions of users in the same virtual cell and the interference between transmissions in different virtual cells. We propose a resource allocation technique that reduces the number of users that cannot achieve their constant guaranteed bit rate, i.e., the "unsatisfied users", in an uplink virtual cell system with cooperative decoding. The proposed scheme requires only knowledge of the number of users each base station serves and relies on creating an interference graph between base stations at the edges of virtual cells. Frequency bands are then allocated to users based on the number of users each base station would serve in a non-cooperative setup. We evaluate the performance of our scheme for a mmWave system. Our numerical results show that our scheme decreases the number of users in the system whose rate falls below the guaranteed rate, set to $128$ kbps, $256$ kbps, or $512$ kbps, compared with our previously proposed optimization methods.
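The interference-graph-based band assignment described above can be read as a greedy graph coloring in which base stations serving more users pick their band first. The sketch below is an illustrative interpretation under that reading, not the paper's exact algorithm; all names and the tie-breaking rule are assumptions.

```python
# Illustrative greedy coloring of a base-station interference graph.
# Stations with more users get first pick of a frequency band; if all
# bands are taken by neighbors, the least-used one is reused.
def assign_bands(interference_edges, users_per_bs, n_bands):
    neighbors = {}
    for a, b in interference_edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    band_of = {}
    # order base stations by user count, descending
    for bs in sorted(users_per_bs, key=users_per_bs.get, reverse=True):
        used = {band_of[n] for n in neighbors.get(bs, ()) if n in band_of}
        free = [b for b in range(n_bands) if b not in used]
        if free:
            band_of[bs] = free[0]
        else:  # all bands conflict: reuse the band fewest neighbors hold
            counts = {b: 0 for b in range(n_bands)}
            for n in neighbors.get(bs, ()):
                if n in band_of:
                    counts[band_of[n]] += 1
            band_of[bs] = min(counts, key=counts.get)
    return band_of
```

With enough bands, interfering edge base stations end up on distinct bands, which is the inter-virtual-cell interference mitigation the scheme targets.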
Network management is a fundamental ingredient for efficient operation of wireless networks. With increasing bandwidth, numbers of antennas, and numbers of users, the amount of information required for network management grows significantly. Therefore, distributed network management is key to the efficient operation of future networks. This paper focuses on the problem of distributed joint beamforming control and power allocation in ad-hoc mmWave networks. Over the shared spectrum, a number of multiple-input multiple-output (MIMO) links attempt to minimize their supply power by simultaneously finding the locally optimal power allocation and beamformers in a self-organized manner. Our design considers a family of non-convex quality-of-service constraints and utility functions characterized by monotonicity in the strategies of the various users. We propose a two-stage, decentralized optimization scheme in which power levels and beamformer coefficients are iteratively adapted by each link. We first prove that, given a set of receive beamformers, the power allocation stage converges to an optimal generalized Nash equilibrium of the generalized power allocation game. Then we prove that iterative minimum-mean-square-error (MMSE) adaptation of the receive beamformer results in an overall converging scheme. Several transmit beamforming schemes requiring different levels of information exchange are also compared within the proposed allocation framework. Our simulation results show that allowing each link to optimize its transmit filters using the direct channel yields near-optimal performance with very low computational complexity, even though the problem is highly non-convex.
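The MMSE receive-beamformer adaptation at the core of the second stage can be illustrated with a small numpy sketch. The covariance-inverse filter below is the standard MMSE (max-SINR) receiver; the surrounding power-allocation game and convergence machinery are omitted, and all dimensions are illustrative.

```python
# Standard MMSE receive filter: w_k = R_k^{-1} h_k (up to scaling),
# where R_k is the interference-plus-noise covariance seen by link k.
import numpy as np

def mmse_receiver(h_k, interferer_channels, powers, noise_var):
    n = h_k.shape[0]
    R = noise_var * np.eye(n, dtype=complex)
    for h_j, p_j in zip(interferer_channels, powers):
        R += p_j * np.outer(h_j, h_j.conj())
    w = np.linalg.solve(R, h_k)
    return w / np.linalg.norm(w)

def sinr(w, h_k, p_k, interferer_channels, powers, noise_var):
    sig = p_k * abs(w.conj() @ h_k) ** 2
    interf = sum(p_j * abs(w.conj() @ h_j) ** 2
                 for h_j, p_j in zip(interferer_channels, powers))
    return sig / (interf + noise_var * np.linalg.norm(w) ** 2)
```

By construction, the MMSE filter achieves at least the SINR of a simple matched filter, which is why its iterative adaptation can only improve each link's operating point.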
We consider online wireless network virtualization (WNV) in a multi-cell multiple-input multiple-output (MIMO) system with delayed feedback of channel state information (CSI). Multiple service providers (SPs) simultaneously share the base station resources of an infrastructure provider (InP). We aim at minimizing the accumulated precoding deviation of the InP's actual precoder from the SPs' virtualization demands via managing both inter-SP and inter-cell interference, subject to both long-term and short-term per-cell transmit power constraints. We develop an online coordinated precoding solution and show that it provides provable performance bounds. Our precoding solution is fully distributed at each cell, based only on delayed local CSI. Furthermore, it has a closed-form expression with low computational complexity. Finally, simulation results demonstrate the substantial performance gain of our precoding solution over the current best alternative.
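The core difficulty of optimizing with delayed CSI can be illustrated with a toy online-gradient sketch: the update at each slot uses the gradient observed one slot late, then projects onto a per-slot power budget. This is a loose quadratic-loss stand-in for intuition only, not the paper's precoder or its bounds.

```python
# Toy online update with one-slot-delayed feedback: descend along the
# previous slot's gradient, then project onto the power budget ||x||^2 <= B.
import math

def project(x, budget):
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= math.sqrt(budget):
        return x
    scale = math.sqrt(budget) / norm
    return [v * scale for v in x]

def delayed_ogd(targets, budget, step=0.5):
    """Track targets under losses 0.5*||x - target||^2 with delayed gradients."""
    x = [0.0, 0.0]
    delayed_grad = [0.0, 0.0]
    for tgt in targets:
        x = project([xi - step * g for xi, g in zip(x, delayed_grad)], budget)
        # this slot's gradient becomes available only at the next slot
        delayed_grad = [xi - ti for xi, ti in zip(x, tgt)]
    return x
```

For a slowly varying target the delayed iterate still converges, which mirrors why delayed local CSI can suffice for a provably good online precoder.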
In this paper, we propose a cell-free scheme for unmanned aerial vehicle (UAV) base stations (BSs) to manage the severe intercell interference between terrestrial users and UAV-BSs of neighboring cells. Since the cell-free scheme requires enormous bandwidth for backhauling, we propose to use the sub-terahertz (sub-THz) band for the backhaul links between the UAV-BSs and the central processing unit (CPU). Also, because the sub-THz band requires a reliable line-of-sight link, we propose to use a high-altitude platform station (HAPS) as the CPU. In the first time slot of the proposed scheme, users send their messages to the UAVs in the sub-6 GHz band. The UAVs then apply matched filtering and power allocation. In the second time slot, each UAV allocates orthogonal resource blocks to its users in the sub-THz band, and the signals are sent to the HAPS after analog beamforming. At the HAPS receiver, after analog beamforming, the message of each user is decoded. We formulate an optimization problem that maximizes the minimum signal-to-interference-plus-noise ratio (SINR) of the users by finding the optimal power allocation as well as the optimal locations of the UAVs. Simulation results demonstrate the superiority of the proposed scheme over aerial cellular and terrestrial cell-free baseline schemes.
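The max-min SINR power allocation subproblem can be sketched with the standard bisection-plus-fixed-point approach: bisect on a common SINR target and test feasibility by iterating the power update under per-transmitter caps. This is a generic technique shown under assumed scalar channel gains; the paper's problem additionally optimizes the UAV locations.

```python
# Bisection on a common SINR target t; feasibility of t is checked by
# the monotone fixed-point power iteration p_i = t*(noise + interference)/g_ii,
# declared infeasible if any power exceeds the cap p_max.
def min_sinr_feasible(t, gains, noise, p_max, iters=200):
    n = len(gains)
    p = [0.0] * n
    for _ in range(iters):
        p = [t * (noise + sum(gains[i][j] * p[j] for j in range(n) if j != i))
             / gains[i][i] for i in range(n)]
        if any(pi > p_max for pi in p):
            return False
    return True

def max_min_sinr(gains, noise, p_max, lo=0.0, hi=100.0, tol=1e-4):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_sinr_feasible(mid, gains, noise, p_max):
            lo = mid
        else:
            hi = mid
    return lo
```

Starting the fixed point from zero makes the iteration monotone, so a feasible target never overshoots the cap while an infeasible one eventually does.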
THz communication is regarded as one of the potential key enablers for next-generation wireless systems. While THz frequency bands provide abundant bandwidth and extremely high data rates, operation at THz bands is hampered by short communication ranges and narrow pencil beams, which are highly susceptible to user mobility, beam misalignment, and channel blockage. This raises the need for novel beam tracking methods that account for the tradeoff between enhancing the received signal strength by increasing beam directivity and increasing the coverage probability by widening the beam. To address these challenges, a multi-objective optimization problem is formulated with the goal of jointly maximizing the ergodic rate and minimizing the outage probability subject to transmit power and average overhead constraints. Then, a novel parameterized beamformer with dynamic beamwidth adaptation is proposed. In addition to the beamformer, an event-based beam tracking approach is introduced that enables reacting to outages caused by beam misalignment and dynamic blockage while maintaining a low pilot overhead. Simulation results show that the proposed beamforming scheme improves the average rate and reduces the number of communication outages caused by beam misalignment. Moreover, the proposed event-triggered channel estimation approach enables low-overhead yet reliable communication.
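The directivity-versus-coverage tradeoff that motivates beamwidth adaptation can be made concrete with a toy model: assume a flat-top beam whose gain scales inversely with beamwidth and a Gaussian pointing error. Both modeling choices are assumptions for illustration, not the paper's beamformer.

```python
# Toy model: gain ~ 1/beamwidth; the link is "aligned" when the Gaussian
# pointing error falls inside the beam, i.e. |error| < beamwidth/2.
import math

def expected_rate(beamwidth, pointing_std, snr0):
    gain = 1.0 / beamwidth
    p_aligned = math.erf(beamwidth / (2 * math.sqrt(2) * pointing_std))
    return p_aligned * math.log2(1.0 + snr0 * gain)

def best_beamwidth(pointing_std, snr0, grid=None):
    grid = grid or [w / 100 for w in range(1, 101)]
    return max(grid, key=lambda w: expected_rate(w, pointing_std, snr0))
```

Even this crude model reproduces the qualitative behavior the paper exploits: the noisier the pointing (higher mobility or blockage), the wider the rate-optimal beam.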
We consider the problem of service hosting, where an application provider can dynamically rent edge computing resources and serve user requests from the edge to deliver a better quality of service. A key novelty of this work is that we allow the service to be hosted partially at the edge, which enables a fraction of each user query to be served by the edge. We model the total cost for (partially) hosting a service at the edge as a combination of the latency in serving requests, the bandwidth consumption, and the time-varying cost for renting edge resources. We propose an online policy called $\alpha$-RetroRenting ($\alpha$-RR) which dynamically determines the fraction of the service to be hosted at the edge in any time slot, based on the history of the request arrivals and the rent cost sequence. As our main result, we derive an upper bound on $\alpha$-RR's competitive ratio with respect to the offline optimal policy that knows the entire request arrival and rent cost sequence in advance. We conduct extensive numerical evaluations to compare the performance of $\alpha$-RR with various benchmarks for synthetic and trace-based request arrival and rent cost processes, and find several parameter regimes where $\alpha$-RR's ability to store the service partially greatly improves cost-efficiency.
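The retrospective idea behind such policies can be sketched in a heavily simplified form: restrict hosting to all-or-nothing ($\alpha \in \{0, 1\}$) and switch state whenever hindsight says the other state would have saved more than the fetch cost since the last switch. The actual $\alpha$-RR policy chooses fractional hosting levels and carries a proven competitive ratio; the rule below is only an illustrative caricature.

```python
# Simplified retrospective renting rule (all-or-nothing hosting).
# delta accumulates the hindsight savings of being in the *other* state;
# when it exceeds the fetch cost, the policy switches.
def retro_rent(requests, rents, serve_remote_cost, fetch_cost):
    hosted, decisions, delta = False, [], 0.0
    for x, c in zip(requests, rents):
        gap = (x * serve_remote_cost - c) if not hosted else (c - x * serve_remote_cost)
        delta = max(delta + gap, 0.0)
        if delta > fetch_cost:
            hosted, delta = not hosted, 0.0
        decisions.append(hosted)
    return decisions
```

The hindsight threshold is what bounds how far any online decision can lag the offline optimum, which is the mechanism behind competitive-ratio guarantees for this policy family.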
Client selection strategies are widely adopted to handle the communication-efficiency problem in recent studies of Federated Learning (FL). However, due to the large variance of the selected subset's update, prior selection approaches with a limited sampling ratio cannot perform well on convergence and accuracy in heterogeneous FL. To address this problem, we propose a novel stratified client selection scheme that reduces this variance in pursuit of better convergence and higher accuracy. Specifically, to mitigate the impact of heterogeneity, we develop stratification based on clients' local data distributions to derive approximately homogeneous strata, enabling better selection within each stratum. Focusing on the limited-sampling-ratio scenario, we then present an optimized sample size allocation scheme that accounts for the diversity of each stratum's variability, with the promise of further variance reduction. Theoretically, we derive the explicit relation among different selection schemes with regard to variance and, under heterogeneous settings, demonstrate the effectiveness of our selection scheme. Experimental results confirm that our approach not only achieves better performance than state-of-the-art methods but is also compatible with prevalent FL algorithms.
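Variance-aware sample size allocation across strata is classically done with Neyman allocation, where each stratum's sample budget is proportional to its size times its standard deviation. The paper's optimized scheme may differ in detail; the sketch below shows the classical baseline.

```python
# Neyman allocation: n_h proportional to N_h * sigma_h, so strata that
# are larger or more variable receive more of the sampling budget.
def neyman_allocation(stratum_sizes, stratum_stds, total_samples):
    weights = [n * s for n, s in zip(stratum_sizes, stratum_stds)]
    total = sum(weights)
    alloc = [round(total_samples * w / total) for w in weights]
    # absorb rounding drift so allocations sum to the budget exactly
    alloc[-1] += total_samples - sum(alloc)
    return alloc
```

Under a fixed total sampling ratio this allocation minimizes the variance of the stratified estimator, which is precisely the lever the selection scheme uses to improve convergence.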
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP, based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation enables the computation of convex combinations of valid mappings, as it accounts for the structure of the request graphs. Accordingly, to capture this structure, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.
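The randomized rounding step can be illustrated in miniature: given, for each request, a convex combination of valid mappings (which is exactly what the novel LP formulation provides), sample one mapping per request and retry until the capacity constraints hold. The data layout below is a toy stand-in for the paper's decompositions.

```python
# Toy rounding: each request carries (probability, resource-usage) pairs
# summing to probability 1; sample one mapping per request, accept a
# trial only if all capacities are respected.
import random

def round_embeddings(request_decompositions, capacities, trials=100, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        load = dict.fromkeys(capacities, 0.0)
        choice, ok = [], True
        for mappings in request_decompositions:
            r, acc = rng.random(), 0.0
            for prob, usage in mappings:
                acc += prob
                if r <= acc:
                    break
            choice.append(usage)
            for res, u in usage.items():
                load[res] += u
                ok &= load[res] <= capacities[res]
        if ok:
            return choice
    return None
```

The analysis in such schemes bounds how often a trial violates capacities, turning the fractional LP optimum into an integral solution with provable approximation guarantees.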
Network Virtualization is one of the most promising technologies for future networking and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist on the same shared physical infrastructure simultaneously. One of the crucial problems in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related issues for resource allocation of an Infrastructure Provider (InP) to efficiently embed virtual networks requested by Virtual Network Operators (VNOs) who share the same infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding, enhancing the long-term average revenue, acceptance ratio, and resource utilization of virtual network requests compared to prior algorithms.
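Coordinated node-and-link embedding is often sketched in this heuristic family as: rank substrate nodes by a resource score, map virtual nodes in that order, then route each virtual link over a shortest path between the chosen hosts. The code below is a generic sketch of that family under assumed data structures, not the paper's specific algorithm.

```python
# Greedy VNE sketch: rank substrate nodes by CPU x degree, map virtual
# nodes to the best available hosts, then route virtual links via BFS.
from collections import deque

def bfs_path(adj, src, dst):
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, v = [dst], dst
            while v != src:
                v = prev[v]
                path.append(v)
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def embed(virtual_nodes, virtual_links, cpu, adj):
    rank = {n: cpu[n] * len(adj[n]) for n in cpu}
    order = sorted(rank, key=rank.get, reverse=True)
    node_map = {}
    for vn, demand in virtual_nodes.items():
        host = next((n for n in order
                     if n not in node_map.values() and cpu[n] >= demand), None)
        if host is None:
            return None  # node mapping failed: reject the request
        node_map[vn] = host
    link_map = {}
    for (a, b) in virtual_links:
        path = bfs_path(adj, node_map[a], node_map[b])
        if path is None:
            return None  # link mapping failed: reject the request
        link_map[(a, b)] = path
    return node_map, link_map
```

Coordinating the two stages (rather than mapping nodes blindly and hoping links fit) is what drives up acceptance ratio and long-term revenue in such heuristics.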
In this paper, an interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. In particular, each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing both the wireless latency and the interference level it causes on the ground network along its path. The problem is cast as a dynamic game among UAVs. To solve this game, a deep reinforcement learning algorithm based on echo state network (ESN) cells is proposed. The introduced deep ESN architecture is trained to allow each UAV to map each observation of the network state to an action, with the goal of minimizing a sequence of time-dependent utility functions. Each UAV uses the ESN to learn its optimal path, transmission power level, and cell association vector at different locations along its path. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, upper and lower bounds on the altitude of the UAVs are derived, thus reducing the computational complexity of the proposed algorithm. Simulation results show that the proposed scheme achieves lower wireless latency per UAV and a higher rate per ground user (UE) while requiring a number of steps comparable to a heuristic baseline that moves along the shortest path towards the corresponding destination. The results also show that the optimal altitude of the UAVs varies with the ground network density and the UE data rate requirements, and plays a vital role in minimizing both the interference level on the ground UEs and the wireless transmission delay of the UAV.
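The ESN building block the scheme relies on can be sketched generically: a fixed random reservoir with its spectral radius scaled below one (for the echo state property), a leaky tanh state update, and a linear readout trained separately. This is a minimal generic ESN for illustration; the paper's deep ESN architecture and RL wrapper are richer.

```python
# Minimal leaky echo state network cell:
#   x' = (1 - a) x + a * tanh(Win u + W x)
# W is scaled to spectral radius rho < 1; only the readout is trained.
import numpy as np

class ESN:
    def __init__(self, n_in, n_res, leak=0.3, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rho / max(abs(np.linalg.eigvals(W)))  # enforce spectral radius
        self.W, self.leak = W, leak
        self.x = np.zeros(n_res)

    def step(self, u):
        pre = self.Win @ np.asarray(u) + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x
```

Because only the linear readout is trained, ESNs are cheap to adapt online, which makes them attractive for per-UAV learning of paths and power levels.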
When deploying resource-intensive signal processing applications in wireless sensor or mesh networks, distributing processing blocks over multiple nodes becomes promising. Such distributed applications need to solve the placement problem (which block to run on which node), the routing problem (which link between blocks to map on which path between nodes), and the scheduling problem (which transmission is active when). We investigate a variant where the application graph may contain feedback loops and we exploit wireless networks' inherent multicast advantage. Thus, we propose Multicast-Aware Routing for Virtual network Embedding with Loops in Overlays (MARVELO) to find efficient solutions for scheduling and routing under a detailed interference model. We cast this as a mixed integer quadratically constrained optimisation problem and provide an efficient heuristic. Simulations show that our approach handles complex scenarios quickly.
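The wireless multicast advantage the scheme exploits is simple to state in a toy model: one broadcast transmission reaches all of a node's next hops, so forwarding to k neighbors costs one slot instead of k unicasts.

```python
# Toy count of transmission slots with and without the wireless
# multicast advantage. forward_sets: per-node lists of next-hop neighbors.
def transmissions_needed(forward_sets, multicast):
    if multicast:
        return sum(1 for nbrs in forward_sets if nbrs)  # one broadcast each
    return sum(len(nbrs) for nbrs in forward_sets)      # one unicast per hop
```

The saving grows with the out-degree of the forwarding nodes, which is why multicast-aware routing changes which placements and routes are optimal.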