Time Slotted Channel Hopping (TSCH) is a Medium Access Control (MAC) protocol introduced in the IEEE 802.15.4e standard to address the low-power requirements of the Internet of Things (IoT) and Low-Power and Lossy Networks (LLNs). The 6TiSCH Operation sublayer (6top) manages the TSCH schedule, which comprises the sleep, transmit, and receive routines of the nodes; however, how this schedule is designed is not specified by the standard. In this paper, we propose a contention-based proportional fairness (CBPF) transmission scheme for TSCH networks that maximizes system throughput while allocating resources fairly among nodes. We formulate a convex program to achieve the fairness and throughput objectives. We model the TSCH MAC as a multichannel slotted ALOHA system and analyse it for a schedule given by the 6top layer. Performance metrics such as throughput, delay, and energy spent per successful transmission are derived and validated through simulations. The proposed CBPF transmission scheme has been implemented on the IoT-LAB public testbed to evaluate its performance and compare it with existing scheduling algorithms.
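As a rough illustration of the kind of analysis involved (this is a generic multichannel slotted ALOHA model, not the paper's exact derivation; the node count N, channel count C, and transmission probability p are illustrative assumptions), the sketch below computes the expected number of successful transmissions per slot when each backlogged node transmits with probability p on a uniformly chosen channel, and sweeps p to locate the throughput-maximizing operating point:

```python
import numpy as np

def aloha_throughput(N, C, p):
    """Expected successful transmissions per slot for a multichannel
    slotted ALOHA model: N backlogged nodes, each transmitting with
    probability p on one of C channels chosen uniformly at random."""
    q = p / C  # probability that a given node transmits on a given channel
    # A channel carries a success iff exactly one node transmits on it.
    return C * N * q * (1.0 - q) ** (N - 1)

N, C = 20, 4
ps = np.linspace(0.01, 1.0, 200)
thr = np.array([aloha_throughput(N, C, p) for p in ps])
print(f"best p ~ {ps[thr.argmax()]:.2f}, throughput ~ {thr.max():.2f} packets/slot")
```

Proportional fairness in such a setting is typically obtained by maximizing the sum of the logarithms of the nodes' rates, a concave objective that is amenable to convex programming.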
Network side channels (NSCs) leak secrets through packet timing and packet sizes. They are of particular concern in public IaaS Clouds, where any tenant may be able to colocate with a victim and indirectly observe its traffic shape. We present Pacer, the first system that eliminates NSC leaks in public IaaS Clouds end-to-end. It builds on the principled technique of shaping guest traffic outside the guest to make the traffic shape independent of secrets by design. However, Pacer also addresses important concerns that have not been considered in prior work: it prevents internal side-channel leaks from affecting reshaped traffic, and it respects network flow control, congestion control, and loss recovery signals. Pacer is implemented as a paravirtualizing extension to the host hypervisor, requiring modest changes to the hypervisor and the guest kernel, and only optional, minimal changes to applications. We present Pacer's key abstraction of a cloaked tunnel, describe its design and implementation, prove the security of important design aspects through a formal model, and show through an experimental evaluation that Pacer imposes moderate overheads on bandwidth, client latency, and server throughput, while thwarting attacks based on state-of-the-art CNN classifiers.
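To convey only the underlying shaping idea (this toy sketch is not Pacer's hypervisor-level mechanism; the epoch length, packet size, and queue interface are illustrative assumptions), a shaped sender emits one fixed-size packet per fixed-length epoch, padding real payloads and substituting dummies when nothing is queued, so that packet timing and sizes reveal nothing about the guest's secrets:

```python
import queue
import time

EPOCH_S = 0.010   # illustrative fixed transmission epoch (10 ms)
PKT_SIZE = 1500   # illustrative fixed packet size (bytes)

def shaped_sender(tx_queue: "queue.Queue[bytes]", send) -> None:
    """Transmit one fixed-size packet per epoch regardless of workload.

    Real payloads are padded to PKT_SIZE; when nothing is queued, a dummy
    packet is sent instead, making timing and size secret-independent."""
    while True:
        deadline = time.monotonic() + EPOCH_S
        try:
            payload = tx_queue.get_nowait()
        except queue.Empty:
            payload = b""                        # dummy packet
        send(payload.ljust(PKT_SIZE, b"\x00"))   # pad to the fixed size
        time.sleep(max(0.0, deadline - time.monotonic()))
```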
Airborne diseases, including COVID-19, raise the question of transmission risk in public transportation systems. However, quantitative analysis of the effectiveness of transmission risk mitigation methods in public transportation is lacking. The paper develops a transmission risk modeling framework based on the Wells-Riley model, using as inputs transit operating characteristics, the schedule, Origin-Destination (OD) demand, and virus characteristics. The model is sensitive to various factors that operators can control, as well as external factors that may be the subject of broader policy decisions (e.g., mask wearing). The model is used to assess transmission risk as a function of OD flows, planned operations, and factors such as mask wearing, ventilation, and infection rates. Using actual data from the Massachusetts Bay Transportation Authority (MBTA) Red Line, the paper explores transmission risk under different infection rate scenarios, in both magnitude and spatial characteristics. The paper assesses the combined impact of viral-load-related factors and passenger load. Increasing frequency can mitigate transmission risk, but cannot fully compensate for increases in infection rates. An imbalanced passenger distribution across the cars of a train is shown to increase the overall system-wide infection probability. Spatial infection rate patterns should also be taken into account during policymaking, as they are shown to impact transmission risk. For lines with branches, the demand distribution among branches is important, and adjusting headway allocation across branches to balance train loads can help reduce risk.
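For reference, the classical Wells-Riley relation underlying such frameworks gives the infection probability of a susceptible passenger as P = 1 - exp(-I q p t / Q), where I is the number of infectors present, q the quanta generation rate, p the breathing rate, t the exposure time, and Q the ventilation rate. A minimal sketch (the parameter values are placeholders, not MBTA data):

```python
import math

def wells_riley(I, q, p, t, Q):
    """Classical Wells-Riley infection probability.

    I: number of infectious passengers sharing the space
    q: quanta generation rate per infector (quanta/hour)
    p: breathing (pulmonary ventilation) rate of a susceptible (m^3/hour)
    t: exposure time (hours)
    Q: clean-air (ventilation) supply rate of the space (m^3/hour)
    """
    return 1.0 - math.exp(-I * q * p * t / Q)

# Placeholder values for illustration only (not MBTA Red Line data).
print(f"P_infection = {wells_riley(I=1, q=10.0, p=0.5, t=0.5, Q=1200.0):.4f}")
```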
Serverless computing automates fine-grained resource scaling and simplifies the development and deployment of online services with stateless functions. However, it is still non-trivial for users to allocate appropriate resources due to varying function types, dependencies, and input sizes. Misconfigured resource allocations leave functions either under-provisioned or over-provisioned and lead to persistently low resource utilization. This paper presents Freyr, a new resource manager (RM) for serverless platforms that maximizes resource efficiency by dynamically harvesting idle resources from over-provisioned functions and reallocating them to under-provisioned functions. Freyr monitors each function's resource utilization in real time, detects over-provisioning and under-provisioning, and learns to harvest idle resources safely and accelerate functions efficiently by applying deep reinforcement learning together with a safeguard mechanism. We have implemented and deployed a Freyr prototype in a 13-node Apache OpenWhisk cluster. Experimental results show that 38.8% of function invocations have idle resources harvested by Freyr, and 39.2% of invocations are accelerated by the harvested resources. Freyr reduces the 99th-percentile function response latency by 32.1% compared with the baseline RMs.
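The harvesting intuition can be sketched as follows (a conceptual illustration under our own assumptions, not Freyr's actual reinforcement learning policy, safeguard, or OpenWhisk integration): slack is the gap between a function's allocation and its observed usage plus a safety margin, and pooled slack is redistributed to functions running close to their allocation.

```python
def harvest_and_redistribute(functions, margin=0.1):
    """Conceptual sketch: shift idle resources from over-provisioned to
    under-provisioned functions. `functions` maps a name to a dict with
    'alloc' (allocated units) and 'used' (observed peak usage)."""
    # Classify first: under-provisioned functions run close to their allocation.
    needy = [f for f in functions.values() if f["used"] > 0.9 * f["alloc"]]
    if not needy:
        return functions            # nothing to accelerate, leave allocations alone
    pool = 0.0
    for f in functions.values():
        idle = f["alloc"] - f["used"] * (1.0 + margin)  # slack beyond a safety margin
        if idle > 0:                                    # over-provisioned: harvest it
            f["alloc"] -= idle
            pool += idle
    for f in needy:                                     # redistribute pooled slack
        f["alloc"] += pool / len(needy)
    return functions

print(harvest_and_redistribute({
    "resize": {"alloc": 512, "used": 120},   # over-provisioned
    "encode": {"alloc": 256, "used": 250},   # under-provisioned
}))
```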
The accurate estimation of Channel State Information (CSI) is of crucial importance for the successful operation of Multiple-Input Multiple-Output (MIMO) communication systems, especially in a Multi-User (MU) time-varying environment and when employing the emerging technology of Reconfigurable Intelligent Surfaces (RISs). The predominantly passive nature of RISs renders the estimation of the channels involved in the user-RIS-base-station link quite challenging. Moreover, the time-varying nature of most realistic wireless channels drives up the cost of real-time channel tracking significantly, especially when RISs of massive size are deployed. In this paper, we develop a channel tracking scheme for the uplink of RIS-enabled MU MIMO systems in the presence of channel fading. The starting point is a tensor representation of the received signal, whose PARAllel FACtor (PARAFAC) analysis we exploit both to obtain the initial channel estimate and to track the channel's time variation. Simulation results for various system settings are reported, validating the feasibility and effectiveness of the proposed channel tracking approach.
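As a rough sketch of the kind of factorization involved (the tensor dimensions, the rank, and the use of TensorLy are illustrative assumptions, and this is not the paper's tracking algorithm), a rank-R PARAFAC/CP decomposition of a third-order received-signal tensor yields factor matrices from which the individual channel components can be recovered:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Illustrative sizes: K BS antennas x N RIS phase configurations x T time frames.
K, N, T, R = 8, 16, 10, 3
Y = tl.tensor(np.random.randn(K, N, T))  # stand-in for the received-signal tensor

# Rank-R CP/PARAFAC decomposition via alternating least squares.
cp = parafac(Y, rank=R, n_iter_max=200, tol=1e-8)
A, B, C = cp.factors
print(A.shape, B.shape, C.shape)         # (K, R), (N, R), (T, R)
```

In such PARAFAC-based formulations, the factor matrices are typically associated with the RIS-BS channel, the user-RIS channels, and the temporal variation, and tracking amounts to updating these factors as new frames arrive.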
Machine learning algorithms have recently been considered for many tasks in the field of wireless communications. Previously, we proposed a deep fully convolutional neural network (CNN) for receiver processing and showed that it provides considerable performance gains. In this study, we focus on machine learning algorithms for the transmitter. In particular, we consider beamforming and propose a CNN which, given an uplink channel estimate as input, outputs the downlink channel information to be used for beamforming. The CNN is trained in a supervised manner, considering both uplink and downlink transmissions, with a loss function based on the UE receiver performance. The main task of the neural network is to predict the channel evolution between the uplink and downlink slots, but it can also learn to compensate for inefficiencies and errors in the whole chain, including the actual beamforming phase. The numerical experiments demonstrate the improved beamforming performance.
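A minimal sketch of the kind of convolutional mapping described (the architecture, tensor layout, and the MSE surrogate loss below are our own assumptions; the paper trains with a loss based on UE receiver performance): the network takes an uplink channel estimate arranged as real/imaginary planes over antennas and subcarriers and outputs a downlink channel prediction of the same shape.

```python
import torch
import torch.nn as nn

class ULtoDLCNN(nn.Module):
    """Toy fully convolutional network mapping an uplink channel estimate
    (2 x antennas x subcarriers, real/imag planes) to a predicted downlink
    channel of the same shape."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, kernel_size=3, padding=1),
        )

    def forward(self, h_ul):
        return self.net(h_ul)

# One illustrative training step with an MSE surrogate loss.
model = ULtoDLCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h_ul = torch.randn(16, 2, 4, 64)   # batch x re/im x antennas x subcarriers
h_dl = torch.randn(16, 2, 4, 64)   # stand-in ground-truth downlink channel
loss = nn.functional.mse_loss(model(h_ul), h_dl)
opt.zero_grad(); loss.backward(); opt.step()
```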
A noteworthy aspect of blood flow modeling is the definition of the mechanical interaction between the fluid flow and the biological structure that contains it, namely the vessel wall. It has been demonstrated that adding a viscous contribution to the mechanical characterization of vessels yields results that compare favorably with in-vivo measurements. In this context, the numerical implementation of boundary conditions able to retain the memory of the viscoelastic contribution of vessel walls plays an important role, especially when dealing with large circulatory systems. In this work, viscoelasticity is taken into account in entire networks via the Standard Linear Solid Model. The viscoelastic contribution at boundaries (inlet, outlet, and junctions) is implemented by exploiting the hyperbolic nature of the mathematical model. A non-linear system is established based on the definition of the Riemann Problem at junctions, characterized by rarefaction waves separated by contact discontinuities, across which mass and total energy are conserved. Basic junction tests are analyzed, such as a trivial 2-vessel junction, for both a generic artery and a generic vein, and a simple 3-vessel junction representing an aortic bifurcation. The chosen asymptotic-preserving IMEX Runge-Kutta Finite Volume scheme is shown to be second-order accurate in the whole domain and well-balanced, even when junctions are included. Two different benchmark models of the arterial network are implemented, differing in the number of vessels and in the viscoelastic parameters. Comparison of the results obtained in the two networks underlines the high sensitivity of the model to the chosen viscoelastic parameters. The conservation of the contribution provided by the viscoelastic characterization of vessel walls is assessed in the whole network, including junctions and boundary conditions.
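For reference, one common form of the Standard Linear Solid (Zener) constitutive law, written here in our own notation for an elastic spring $E_1$ in parallel with a Maxwell branch consisting of a spring $E_2$ and a dashpot of viscosity $\eta$ (the paper's parameterization of the tube law may differ):
\[
\sigma + \frac{\eta}{E_2}\,\dot{\sigma} \;=\; E_1\,\varepsilon + \frac{\eta\,(E_1 + E_2)}{E_2}\,\dot{\varepsilon},
\]
which reduces to the purely elastic law $\sigma = E_1 \varepsilon$ as $\eta \to 0$ and is what allows the wall model to retain memory of its loading history.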
In this paper, a novel distributed scheduling algorithm is proposed to efficiently schedule both uplink and downlink backhaul traffic in relay-assisted mmWave backhaul networks with a tree topology. The handshaking of control messages, the calculation of local schedules, and the determination of the final valid schedule are all discussed. Simulation results show that the performance of the distributed algorithm comes very close to the maximum traffic demand of the backhaul network, and that it adapts quickly and accurately to dynamic traffic with sharp changes in the demand of small-cell BSs.
With the advent of the Internet of Things and 5G networks, edge computing has become a center of attention. Tasks demanding heavy computation are generally offloaded to the cloud, since the edge is resource-limited. The edge cloud is a promising platform to which devices can offload delay-sensitive workloads, and scheduling therefore plays a central role in offloading decisions within edge-cloud collaboration. The ultimate objectives of scheduling are improving the quality of experience, minimizing latency, and increasing performance. A substantial body of work on scheduling already exists. In this paper, we survey scheduling strategies proposed in the context of edge-cloud computing along various dimensions, such as advantages and drawbacks, QoS parameters, and fault tolerance. We also assess which approaches are feasible under which circumstances. We first classify the algorithms into heuristics and meta-heuristics, and then subcategorize the algorithms in each class further based on their extracted attributes. We hope that this survey will be helpful in the development of new scheduling techniques. Issues, challenges, and future directions are also examined.
This paper considers a mobile edge computing-enabled cell-free massive MIMO wireless network. An optimization problem for the joint allocation of uplink powers and remote computational resources is formulated, aimed at minimizing the total uplink power consumption under latency constraints while simultaneously maximizing the minimum spectral efficiency (SE) throughout the network. Since the considered problem is non-convex, an iterative algorithm based on sequential convex programming is devised. A detailed performance comparison between the proposed distributed architecture and its co-located counterpart, based on a multi-cell massive MIMO deployment, is provided. Numerical results reveal the natural suitability of cell-free massive MIMO for supporting computation-offloading applications, with benefits in terms of users' transmit power and energy consumption, the offloading latency experienced, and the total amount of allocated remote computational resources.
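In generic terms (this is the standard successive convex approximation template, not the paper's specific surrogate functions), sequential convex programming solves a sequence of convexified problems built around the current iterate,
\[
\mathbf{q}^{(k+1)} \in \arg\min_{\mathbf{q}} \; \tilde{f}\big(\mathbf{q};\mathbf{q}^{(k)}\big) \quad \text{s.t.} \quad \tilde{g}_i\big(\mathbf{q};\mathbf{q}^{(k)}\big) \le 0, \;\; i = 1,\dots,I,
\]
where $\tilde{f}$ and $\tilde{g}_i$ are convex surrogates that upper-bound the original objective and constraints and are tight at $\mathbf{q}^{(k)}$; under standard regularity conditions the objective value is non-increasing across iterations and the iterates converge to a stationary point of the original non-convex problem.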
In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex only, (iii) smooth only, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved dependence on the condition numbers.
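For completeness, applying Nesterov's method to the minimization of a smooth convex dual function $\phi$ (the negated dual of the affinely constrained problem) with gradient Lipschitz constant $L$ uses the standard accelerated iterations (notation ours):
\[
\lambda^{(k+1)} = \mu^{(k)} - \tfrac{1}{L}\,\nabla\phi\big(\mu^{(k)}\big), \qquad t_{k+1} = \tfrac{1 + \sqrt{1 + 4 t_k^2}}{2}, \qquad \mu^{(k+1)} = \lambda^{(k+1)} + \tfrac{t_k - 1}{t_{k+1}}\big(\lambda^{(k+1)} - \lambda^{(k)}\big),
\]
with $t_1 = 1$. In the distributed setting, each dual gradient evaluation roughly decomposes into local computations involving the conjugates of the $f_i$ followed by one round of communication with neighbors (a multiplication by the interaction matrix), which is what makes the accelerated scheme implementable over the network.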