Internet-of-Things (IoT) technology is envisioned to enable a variety of real-time applications by interconnecting billions of sensors/devices deployed to observe random physical processes. These IoT devices rely on low-power wide-area wireless connectivity for transmitting small, mostly fixed-size status updates of their associated random processes. Cellular networks are seen as a natural candidate for providing reliable wireless connectivity to IoT devices. However, conventional orthogonal multiple access (OMA) to this massive number of devices is expected to degrade spectral efficiency. As a promising alternative to OMA, cellular base stations (BSs) can employ non-orthogonal multiple access (NOMA) for the uplink transmissions of mobile users and IoT devices. In particular, uplink NOMA can be configured such that the mobile user adapts its transmission rate to its channel condition while the IoT device transmits at a fixed rate. For this setting, we analyze the ergodic capacity of mobile users and the mean local delay of IoT devices using stochastic geometry. Our analysis demonstrates that the above NOMA configuration can provide better ergodic capacity for mobile users compared to OMA when the IoT devices' delay constraint is strict. Furthermore, we also show that NOMA can support a larger packet size for IoT devices than OMA under the same delay constraint.
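To make the rate configuration concrete, the following Monte Carlo sketch (a toy single-cell model, not the paper's stochastic-geometry analysis) assumes Rayleigh fading that is i.i.d. across slots, perfect SIC, and arbitrary illustrative values for the powers and the IoT rate `R_iot`. The BS decodes the rate-adaptive mobile user first, treating the IoT signal as interference, then decodes the fixed-rate IoT packet; with i.i.d. slots, the mean local delay is the mean of a geometric retransmission count, i.e., one over the per-slot success probability.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                            # fading realizations (one per slot)
P_mu, P_iot, noise = 1.0, 0.5, 0.1     # transmit powers and noise (assumed values)
R_iot = 0.5                            # IoT fixed rate in bits/s/Hz (assumed)

# Rayleigh fading: exponentially distributed power gains on both uplinks
g_mu = rng.exponential(1.0, N)
g_iot = rng.exponential(1.0, N)

# Step 1: decode the mobile user, treating the IoT signal as interference.
# The mobile user adapts its rate, so its ergodic capacity is E[log2(1 + SINR)].
sinr_mu = P_mu * g_mu / (P_iot * g_iot + noise)
ergodic_capacity = np.mean(np.log2(1 + sinr_mu))

# Step 2: after (assumed perfect) SIC, decode the fixed-rate IoT packet.
snr_iot = P_iot * g_iot / noise
p = np.mean(np.log2(1 + snr_iot) >= R_iot)   # per-slot success probability

print(f"mobile-user ergodic capacity: {ergodic_capacity:.3f} b/s/Hz")
print(f"IoT success probability: {p:.3f}, mean local delay: {1/p:.2f} slots")
```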
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT)-based intelligent and ubiquitous computing. As applications and data volumes grow rapidly, distributed learning is a promising emerging paradigm, since it is often impractical or inefficient to share/aggregate data from distributed sources to a centralized location. This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices and the learning algorithm runs on-device, with the aim of relieving the burden on a central entity/server. Although gossip-based approaches have been used for this purpose in different use cases, they suffer from high communication costs, especially when the number of devices is large. To mitigate this, we propose incremental methods. We first introduce incremental block-coordinate descent (I-BCD) for decentralized ML, which reduces communication costs at the expense of running time. To accelerate convergence, we then propose an asynchronous parallel incremental BCD (API-BCD) method, in which multiple devices/agents are active in an asynchronous fashion. We derive convergence properties for the proposed methods. Simulation results also show that our API-BCD method outperforms the state of the art in terms of running time and communication costs.
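The incremental idea can be illustrated with a small sketch (a simplified serial variant written for illustration, not the paper's exact I-BCD or API-BCD): the model is handed around a ring of agents, and each agent updates only its own coordinate block using the gradient of its private least-squares loss, so raw data never leaves a device and only the model, or just one block of it, travels per hop.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, d, n_local = 5, 10, 20
blocks = np.array_split(np.arange(d), n_agents)  # one coordinate block per agent

# Each agent holds a private least-squares problem (A_i, b_i)
A = [rng.normal(size=(n_local, d)) for _ in range(n_agents)]
b = [rng.normal(size=n_local) for _ in range(n_agents)]

x = np.zeros(d)    # model carried around the ring
step = 0.01

# Incremental BCD: each agent, in turn, takes a gradient step restricted to its
# own block; communicating one block per hop is what cuts the cost relative
# to gossiping full models between neighbors.
for cycle in range(200):
    for i in range(n_agents):
        grad = A[i].T @ (A[i] @ x - b[i])   # gradient of the local loss
        blk = blocks[i]
        x[blk] -= step * grad[blk]          # update this agent's block only

loss = sum(0.5 * np.linalg.norm(A[i] @ x - b[i]) ** 2 for i in range(n_agents))
print(f"global loss after incremental BCD: {loss:.3f}")
```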
The Internet of Things (IoT) brings connectivity to a massive number of devices that demand energy-efficient solutions to deal with limited battery capacities, uplink-dominant traffic, and channel impairments. In this work, we explore the use of Unmanned Aerial Vehicles (UAVs) equipped with configurable antennas as a flexible solution for serving low-power IoT networks. We formulate an optimization problem to set the position and antenna beamwidth of the UAV, and the transmit power of the IoT devices, subject to average-Signal-to-average-Interference-plus-Noise Ratio ($\bar{\text{S}}\overline{\text{IN}}\text{R}$) Quality of Service (QoS) constraints. We minimize the worst-case average energy consumption of the latter, thus targeting the fairest allocation of the energy resources. The problem is non-convex and highly non-linear; therefore, we re-formulate it as a series of three geometric programs that can be solved iteratively. Results reveal the benefits of planning the network compared to a random deployment in terms of reducing the worst-case average energy consumption. Furthermore, we show that the target $\bar{\text{S}}\overline{\text{IN}}\text{R}$ is limited by the number of IoT devices, and highlight the dominant impact of the UAV hovering height when serving wider areas. Our proposed algorithm outperforms other optimization benchmarks both in minimizing the average energy consumption at the most energy-demanding IoT device and in convergence time.
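As a rough illustration of the planning problem (a minimal sketch, not the paper's iterative geometric programming), the snippet below grid-searches the UAV height and beamwidth above the device centroid, using the common directional-gain approximation $G \approx 2.2846/\theta^2$ and free-space path loss; the device layout, SNR target (interference is ignored here), and noise power are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
devices = rng.uniform(-100, 100, size=(10, 2))   # IoT device positions in m (assumed)
snr_target, noise, f_gain = 10.0, 1e-9, 2.2846   # linear SNR target; gain constant

def worst_case_power(h, theta):
    """Smallest worst-case transmit power so every device meets the SNR target.
    Devices outside the beam footprint make the configuration infeasible."""
    xy = devices.mean(axis=0)                    # UAV above the centroid (simplification)
    r = np.linalg.norm(devices - xy, axis=1)
    if np.any(r > h * np.tan(theta / 2)):        # outside the antenna footprint
        return np.inf
    d2 = r**2 + h**2                             # squared UAV-device distances
    gain = f_gain / theta**2                     # wider beam -> lower gain
    return np.max(snr_target * noise * d2 / gain)

# Plain grid search stands in for the three iterated geometric programs.
heights = np.linspace(20, 300, 60)
widths = np.linspace(0.2, 2.5, 60)               # beamwidth in radians
best = min((worst_case_power(h, t), h, t) for h in heights for t in widths)
print(f"power={best[0]:.2e} W at height={best[1]:.0f} m, beamwidth={best[2]:.2f} rad")
```

Even this toy version exposes the trade-off highlighted above: raising the UAV widens coverage for a given beamwidth but inflates the path loss that every device must overcome.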
This paper concerns a convex, stochastic zeroth-order optimization (S-ZOO) problem. The objective is to minimize the expectation of a cost function whose gradient is not directly accessible. For this problem, traditional optimization algorithms mostly yield query complexities that grow polynomially with dimensionality (the number of decision variables). Consequently, these methods may not perform well in solving massive-dimensional problems arising in many modern applications. Although more recent methods can be provably dimension-insensitive, almost all of them require arguably more stringent conditions, such as everywhere-sparse or compressible gradients. In this paper, we propose a sparsity-inducing stochastic gradient-free (SI-SGF) algorithm, which provably yields a dimension-free (up to a logarithmic term) query complexity in both the convex and strongly convex cases. Such insensitivity to dimensionality growth is proven, for the first time, to be achievable when neither gradient sparsity nor gradient compressibility is satisfied. Our numerical results demonstrate consistency between our theoretical predictions and the empirical performance.
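The flavor of a sparsity-inducing gradient-free step can be shown in a few lines (our own toy construction, not the paper's SI-SGF specification or its parameters): average a handful of two-point finite-difference gradient estimates along random directions, take a gradient step, and hard-threshold the iterate to its $s$ largest entries.

```python
import numpy as np

rng = np.random.default_rng(3)
d, s, mu, step, iters = 500, 10, 1e-2, 0.05, 300   # assumed toy constants

x_star = np.zeros(d); x_star[:s] = 1.0             # sparse minimizer of the toy problem

def oracle(x):
    """Stochastic zeroth-order oracle: noisy function values only, no gradients."""
    return 0.5 * np.sum((x - x_star) ** 2) + 1e-3 * rng.normal()

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x = np.zeros(d)
for _ in range(iters):
    g = np.zeros(d)
    for _ in range(20):                            # average a few two-point estimates
        u = rng.normal(size=d)
        g += (oracle(x + mu * u) - oracle(x - mu * u)) / (2 * mu) * u
    x = hard_threshold(x - step * g / 20, s)       # sparsity-inducing step

print(f"distance to sparse minimizer: {np.linalg.norm(x - x_star):.3f}")
```

The printed distance is illustrative only; the point is that the per-iteration query count (40 here) is fixed and does not scale with $d$.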
With the increasing number of wireless communication systems and the demand for bandwidth, the wireless medium has become a congested and contested environment. Operating in such an environment brings several challenges, especially for military communication systems, which need to guarantee reliable communication while avoiding interference to friendly or neutral systems and denying service to enemy systems. In this work, we investigate a novel application of Rate-Splitting Multiple Access (RSMA) for joint communications and jamming with a Multi-Carrier (MC) waveform in a multi-antenna Cognitive Radio (CR) system. RSMA is a robust multiple access scheme for downlink multi-antenna wireless networks; it relies on multi-antenna Rate-Splitting (RS) at the transmitter and Successive Interference Cancellation (SIC) at the receivers. Our aim is to simultaneously communicate with Secondary Users (SUs) and jam Adversarial Users (AUs) to disrupt their communications, while limiting the interference to Primary Users (PUs), in a setting where all users perform broadband communications by MC waveforms in their respective networks. We consider the practical setting of imperfect CSI at the transmitter (CSIT) for the SUs and PUs, and statistical CSIT for the AUs. We formulate a problem to obtain optimal precoders that maximize the mutual information under interference and jamming power constraints. We propose an Alternating Optimization-Alternating Direction Method of Multipliers (AOADMM) based algorithm for solving the resulting non-convex problem. We perform an analysis based on Karush-Kuhn-Tucker conditions to determine the optimal jamming and interference power thresholds that guarantee the feasibility of the problem, and propose a practical algorithm to calculate the interference power threshold. By simulations, we show that RSMA achieves a higher sum-rate than Space Division Multiple Access (SDMA).
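To see why rate-splitting can outperform SDMA, consider a stripped-down two-user MISO example (fixed heuristic matched-filter precoders and a single channel draw; the paper's AOADMM precoder optimization, jamming, CR interference constraints, and imperfect CSIT are all omitted): a fraction `alpha` of the power carries a common stream that both users decode and strip via SIC before decoding their private streams, and `alpha = 0` reduces to SDMA.

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, P, noise = 4, 100.0, 1.0                      # antennas, power budget, noise
h1 = (rng.normal(size=Nt) + 1j * rng.normal(size=Nt)) / np.sqrt(2)
h2 = (rng.normal(size=Nt) + 1j * rng.normal(size=Nt)) / np.sqrt(2)

def sum_rate(alpha):
    """RSMA sum-rate with heuristic precoders: power alpha*P on the common
    stream, the remainder split over matched-filter private streams."""
    pc = np.sqrt(alpha * P) * (h1 + h2) / np.linalg.norm(h1 + h2)
    p1 = np.sqrt((1 - alpha) * P / 2) * h1 / np.linalg.norm(h1)
    p2 = np.sqrt((1 - alpha) * P / 2) * h2 / np.linalg.norm(h2)
    # Common stream decoded first, with both private streams as interference;
    # its rate is set by the weaker user so that both can decode it.
    sinr_c = [abs(h @ pc.conj())**2 /
              (abs(h @ p1.conj())**2 + abs(h @ p2.conj())**2 + noise)
              for h in (h1, h2)]
    Rc = np.log2(1 + min(sinr_c))
    # After SIC removes the common stream, each user decodes its private stream.
    R1 = np.log2(1 + abs(h1 @ p1.conj())**2 / (abs(h1 @ p2.conj())**2 + noise))
    R2 = np.log2(1 + abs(h2 @ p2.conj())**2 / (abs(h2 @ p1.conj())**2 + noise))
    return Rc + R1 + R2

best = max(np.linspace(0, 0.9, 10), key=sum_rate)
print(f"SDMA (alpha=0.0): {sum_rate(0.0):.2f} b/s/Hz")
print(f"RSMA (alpha={best:.1f}): {sum_rate(best):.2f} b/s/Hz")
```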
Due to the increasing interest in blockchain technology for fostering secure, auditable, decentralized applications, a set of challenges associated with this technology needs to be addressed. In this letter, we focus on the delay associated with Proof-of-Work (PoW)-based blockchain networks, whereby participants validate the new information to be appended to a distributed ledger via consensus to confirm transactions. We propose a novel end-to-end latency model based on batch-service queuing theory that, for the first time, characterizes timers and forks. Furthermore, we analytically derive an estimate of the optimum block size. Endorsed by simulation results, we show that the optimal block size approximation is consistent and leads to close-to-optimal performance while significantly reducing the overheads associated with blockchain applications.
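The block-size trade-off can be reproduced with a toy batch-service simulation (our simplified stand-in for the letter's analytical model: Poisson transaction arrivals, exponential inter-block times whose mean grows linearly with block size to mimic propagation/validation overheads, and no explicit timers or forks): blocks that are too small let the queue build up, while blocks that are too large are slow to produce.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_confirmation_delay(block_size, lam=2.0, t0=1.0, c=0.05, n_tx=50_000):
    """Batch-service queue: each block confirms up to block_size of the oldest
    pending transactions; t0 + c*block_size is the assumed mean block interval."""
    arrivals = np.cumsum(rng.exponential(1 / lam, n_tx))  # transaction arrival times
    i, t, delay = 0, 0.0, 0.0
    while i < n_tx:
        t += rng.exponential(t0 + c * block_size)         # next block is found
        n_ready = np.searchsorted(arrivals[i:], t)        # transactions now pending
        served = min(block_size, n_ready)                 # batch service
        delay += np.sum(t - arrivals[i:i + served])
        i += served
    return delay / n_tx

for b in (2, 4, 8, 16, 32, 64, 128):
    print(f"block size {b:4d}: mean confirmation delay {mean_confirmation_delay(b):8.2f}")
```

Sweeping the block size makes the dip visible: delay explodes for undersized blocks (the queue becomes unstable) and creeps up again once blocks are oversized, which is the shape the optimal-block-size approximation targets.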
The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual network environments faces severe challenges. On the one hand, the construction of the Internet of Things (IoT) based on ICPSs requires a large amount of network resource support. On the other hand, ICPSs face severe network security problems. The integration of ICPSs and network virtualization (NV) can provide more efficient network resource support and security guarantees for IoT users. To address these two problems, we propose a virtual network embedding (VNE) algorithm with computing, storage, and security constraints to ensure the rationality and security of resource allocation in ICPSs. In particular, we use a reinforcement learning (RL) method to improve algorithm performance. We extract the important attribute characteristics of the underlying network as the training environment of the RL agent. Through training, the agent derives the optimal node embedding strategy, meeting the requirements of ICPSs for resource management and security. The embedding of virtual links is based on a breadth-first search (BFS) strategy. The result is a comprehensive two-stage RL-VNE algorithm that jointly considers the three-dimensional resource constraints of computing, storage, and security. Finally, we design extensive simulation experiments based on typical VNE performance indicators. The experimental results demonstrate the effectiveness of the algorithm in the application of ICPSs.
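The second (link-embedding) stage is easy to make concrete. Below is a minimal sketch of BFS-based virtual link embedding over a toy substrate (topology, bandwidths, and the demand are invented for illustration; the RL-driven node-mapping stage is not shown): BFS restricted to edges with enough spare bandwidth returns the hop-shortest feasible path.

```python
from collections import deque

# Toy substrate: adjacency map with available bandwidth on each edge
substrate = {
    'A': {'B': 8, 'C': 5},
    'B': {'A': 8, 'D': 6},
    'C': {'A': 5, 'D': 7},
    'D': {'B': 6, 'C': 7},
}

def bfs_embed(src, dst, demand):
    """Embed one virtual link: BFS over substrate edges with spare bandwidth
    >= demand; returns the hop-shortest feasible path, or None."""
    parent, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                       # reconstruct the path back to src
            path, n = [], dst
            while n is not None:
                path.append(n); n = parent[n]
            return path[::-1]
        for v, bw in substrate[u].items():
            if bw >= demand and v not in parent:
                parent[v] = u
                q.append(v)
    return None                            # no feasible path: request rejected

# Virtual link between virtual nodes already mapped onto substrate nodes A and D
path = bfs_embed('A', 'D', demand=6)
print("embedding path:", path)             # A-C-D is filtered out (A-C has only 5)
if path:                                   # reserve bandwidth along the chosen path
    for u, v in zip(path, path[1:]):
        substrate[u][v] -= 6; substrate[v][u] -= 6
```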
Internet of Vehicles (IoV) over Vehicular Ad-hoc Networks (VANETs) is an emerging technology enabling the development of smart city applications for safer, more efficient, and more pleasant travel. These applications have stringent requirements expressed in Service Level Agreements (SLAs). Considering vehicles' limited computational and storage capabilities, application requests are offloaded onto an integrated edge-cloud computing system. Existing offloading solutions focus on optimizing the applications' Quality of Service (QoS) while respecting a single SLA constraint. They do not consider the impact of processing overlapped requests, and very few contemplate the varying speed of a vehicle. This paper proposes a novel Artificial Intelligence (AI) QoS-SLA-aware genetic algorithm (GA) for multi-request offloading in a heterogeneous edge-cloud computing system that accounts for the impact of overlapping request processing and dynamic vehicle speed. The objective of the optimization algorithm is to improve the applications' QoS by minimizing the total execution time. The proposed algorithm integrates an adaptive penalty function to assimilate the SLA constraints in terms of latency, processing time, deadline, CPU, and memory requirements. Numerical experiments and comparative analysis are conducted between our proposed QoS-SLA-aware GA and random and GA baseline approaches. The results show that the proposed algorithm executes the requests 1.22 times faster on average compared to the random approach, with 59.9% fewer SLA violations. While the GA baseline approach improves request performance by a factor of 1.14, it incurs 19.8% more SLA violations than our approach.
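The core of such a GA is the adaptive penalty. The sketch below (a deliberately simplified model: one shared queue per node, a single deadline-style SLA, and invented constants, far from the paper's full latency/CPU/memory constraint set) tightens the penalty weight as generations progress, so early populations may explore infeasible offloading plans while later ones are pushed toward SLA compliance.

```python
import numpy as np

rng = np.random.default_rng(7)
n_req = 12
node_speed = np.array([2.0, 2.5, 1.0])     # per-node time factor (edge, edge, cloud)
work = rng.uniform(1, 5, n_req)            # request workloads (assumed units)
deadline = rng.uniform(4, 9, n_req)        # per-request SLA deadline

def fitness(assign, gen, max_gen):
    """Total execution time plus an adaptive penalty for deadline violations."""
    loads = np.zeros(len(node_speed))
    finish = np.zeros(n_req)
    for r in np.argsort(work):             # overlapping requests queue on a node
        n = assign[r]
        loads[n] += work[r] * node_speed[n]
        finish[r] = loads[n]
    penalty = 10.0 * (1 + gen / max_gen)   # weight grows with the generation
    return finish.sum() + penalty * np.sum(finish > deadline)

def ga(pop_size=40, max_gen=100, p_mut=0.1):
    pop = rng.integers(0, len(node_speed), (pop_size, n_req))
    for gen in range(max_gen):
        fit = np.array([fitness(ind, gen, max_gen) for ind in pop])
        elite = pop[np.argsort(fit)[: pop_size // 2]]       # selection
        cuts = rng.integers(1, n_req, pop_size // 2)
        kids = np.array([np.concatenate((a[:c], b[c:]))     # one-point crossover
                         for a, b, c in zip(elite, elite[::-1], cuts)])
        mask = rng.random(kids.shape) < p_mut               # mutation
        kids[mask] = rng.integers(0, len(node_speed), mask.sum())
        pop = np.vstack((elite, kids))
    return pop[np.argmin([fitness(i, max_gen, max_gen) for i in pop])]

print("best assignment (node index per request):", ga())
```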
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
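The smoothing device behind DRS is worth spelling out. For a Gaussian perturbation $Z$, the smoothed function $f_\mu(x) = \mathbb{E}[f(x + \mu Z)]$ is differentiable even when $f$ is not, and $\nabla f_\mu(x) = \mathbb{E}[(f(x + \mu Z) - f(x)) Z]/\mu$. The sketch below checks this single-machine mechanism on a non-smooth $\ell_1$ objective (no network or primal-dual machinery, so it is not DRS itself):

```python
import numpy as np

rng = np.random.default_rng(8)

def smoothed_grad(f, x, mu=0.1, n_samples=200):
    """Monte Carlo estimate of grad f_mu(x) = E[(f(x + mu Z) - f(x)) Z] / mu;
    subtracting f(x) is a variance-reducing control variate."""
    Z = rng.normal(size=(n_samples, x.size))
    return ((f(x + mu * Z) - f(x))[:, None] * Z).mean(axis=0) / mu

f = lambda X: np.abs(X).sum(axis=-1)       # non-smooth objective (L1 norm)
x = np.array([1.0, -2.0, 0.5])
for _ in range(100):                       # plain gradient descent on f_mu
    x -= 0.05 * smoothed_grad(f, x)
print("approximate minimizer:", np.round(x, 2))
```

The parameter $\mu$ trades smoothness against bias, a trade-off related to the $d^{1/4}$ factor quoted above.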
Network Virtualization is one of the most promising technologies for future networking; it is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. A key problem in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related resource-allocation issues for an Internet Provider (InP) to efficiently embed virtual networks that are requested by Virtual Network Operators (VNOs) sharing the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding, enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
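One common heuristic skeleton for such embedding (a toy greedy sketch provided for illustration only; the paper's actual node-link coordination is not reproduced here) ranks substrate nodes by local resources, maps virtual nodes greedily, and would then embed links along feasible paths:

```python
# Toy substrate: CPU per node and bandwidth per edge (invented values)
cpu = {'A': 50, 'B': 80, 'C': 60, 'D': 90}
bw = {('A', 'B'): 40, ('B', 'D'): 70, ('A', 'C'): 30, ('C', 'D'): 50}

def rank(node):
    """Greedy ranking: CPU times total adjacent bandwidth."""
    return cpu[node] * sum(b for e, b in bw.items() if node in e)

vnr = {'v1': 40, 'v2': 55}                 # virtual node CPU demands (toy request)
mapping, used = {}, set()
for v, demand in sorted(vnr.items(), key=lambda kv: -kv[1]):
    hosts = [n for n in cpu if n not in used and cpu[n] >= demand]
    if not hosts:
        mapping = None                     # rejection lowers the acceptance ratio
        break
    mapping[v] = max(hosts, key=rank)
    used.add(mapping[v])

print("node mapping:", mapping)
# A full algorithm would now embed the virtual links (e.g., shortest feasible
# paths) and track revenue (accepted CPU plus bandwidth demand) over time.
```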
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is: both strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions to the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
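The dual approach can be demonstrated on a consensus least-squares toy problem (a small sketch under our own quadratic local functions; edge-incidence equalities play the role of the affine communication constraints): the dual gradient is computable from purely local primal updates plus one exchange per edge, and its Lipschitz constant is the largest Laplacian eigenvalue of the graph, which is how the network spectrum enters the rate.

```python
import numpy as np

rng = np.random.default_rng(9)
m = 6
a = rng.normal(size=m)                         # node i holds f_i(x) = 0.5 (x - a_i)^2
edges = [(i, (i + 1) % m) for i in range(m)]   # ring communication graph
A = np.zeros((len(edges), m))                  # incidence matrix: A x = 0 <=> consensus
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

# Dual gradient: x(lam) = argmin_x sum_i f_i(x_i) + lam^T A x = a - A^T lam,
# so grad g(lam) = A x(lam), with Lipschitz constant lambda_max(A A^T).
L = np.linalg.eigvalsh(A @ A.T).max()
lam = np.zeros(len(edges))
y, t = lam.copy(), 1.0
for _ in range(100):                           # Nesterov's method on the dual
    x = a - A.T @ y                            # local primal updates
    lam_next = y + (A @ x) / L                 # dual gradient ascent step
    t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
    y = lam_next + (t - 1) / t_next * (lam_next - lam)  # momentum
    lam, t = lam_next, t_next

print("per-node values:", np.round(a - A.T @ lam, 4))
print("true consensus optimum (mean):", a.mean().round(4))
```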