This paper investigates the throughput performance of relay-assisted mmWave backhaul networks. The maximum traffic demand of the small-cell base stations (BSs) and the maximum throughput at the macro-cell BS are determined for a tree-style backhaul network through linear programming under different network settings, which vary both the number of radio chains available at the BSs and the interference relationship between logical links in the backhaul network. A novel interference model for relay-assisted mmWave backhaul networks in dense urban environments is proposed, which captures the limited interference footprint of directional mmWave communications. Moreover, a scheduling algorithm is developed to find optimal schedules for tree-style mmWave backhaul networks. Extensive numerical analysis and simulations are conducted to characterize the network throughput and validate the scheduling algorithm.
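To make the linear-programming flavor concrete, the minimal sketch below maximizes aggregate small-cell throughput in a hypothetical three-link tree (two small cells feeding a relay that forwards to the macro-cell BS) in which all links mutually conflict and must share airtime. The topology, link capacities, and single-conflict-clique interference model are illustrative assumptions, not the paper's actual formulation.

```python
from scipy.optimize import linprog

# Hypothetical tree: small cells A and B send to relay R; R forwards to macro BS M.
# Variables: x = [f_A, f_B, t_AR, t_BR, t_RM]  (flows in Gbps, link airtime fractions)
C_AR, C_BR, C_RM = 2.0, 2.0, 3.0       # assumed link capacities in Gbps

c = [-1, -1, 0, 0, 0]                  # maximize f_A + f_B  ->  minimize -(f_A + f_B)
A_ub = [
    [1, 0, -C_AR, 0, 0],               # f_A       <= C_AR * t_AR
    [0, 1, 0, -C_BR, 0],               # f_B       <= C_BR * t_BR
    [1, 1, 0, 0, -C_RM],               # f_A + f_B <= C_RM * t_RM
    [0, 0, 1, 1, 1],                   # all three links conflict: airtime shares sum to <= 1
]
b_ub = [0, 0, 0, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5, method="highs")
print("max throughput at macro BS:", -res.fun, "Gbps")   # 1.2 Gbps for these capacities
```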
Coordinated Multiple Views (CMVs) are a visualization technique that simultaneously presents multiple visualizations in separate but linked views. Many studies report the advantages (e.g., usefulness for finding hidden relationships) and disadvantages (e.g., cognitive load) of CMVs, but little empirical work exists on how the number of views affects visual analysis processes and results, leaving the relationship between the two uncertain. In this work, we investigate the relationship between the number of coordinated views and users' analytic processes and results. To this end, we implemented a CMV tool for visual analysis. The tool also provides visualization duplication, which helps users easily create a desired number of visualization views on the fly. We conducted a between-subjects study with 44 participants, asking them to solve five analytic problems with the tool. Through quantitative and qualitative analysis, we found a positive correlation between the number of views and the quality of analytic results. We also found that visualization duplication encourages users to create more views and to adopt a wider range of analysis strategies. Based on the results, we discuss the implications and limitations of our study.
Several emerging non-volatile memory (NVM) technologies are rising as interesting alternatives for building the last-level cache (LLC). Compared to SRAM, they offer higher density and lower static power, but each write operation slightly wears out a bitcell, eventually to the point of losing its storage capacity. In this context, this paper proposes a new LLC organization designed to extend the lifetime of the NVM data array, together with a methodology to forecast in detail the capacity and performance of NVM caches over their lifetimes. Data compression is one technique for coping with the degradation of an NV-LLC, as it decreases the write bandwidth delivered to the cache. The proposed NV-LLC organization takes advantage of compression, with a relevant contribution: as capacity is reduced by write wear, degraded cache frames can still allocate blocks whose compressed size fits. From a methodological point of view, although different approaches have been used in the literature to analyze the degradation of an NV-LLC, none of them allows a detailed study of its temporal evolution. This work therefore proposes a forecast procedure that combines detailed simulation and prediction, allowing an accurate analysis of the impact of different cache control policies and mechanisms (replacement, wear leveling, compression, etc.) on the temporal evolution of the indices of interest, such as the effective capacity of the NV-LLC or the system IPC. The proposed NV-LLC organization has a small added cost in terms of area, latency, and energy consumption compared to a baseline NV-LLC without compression, and it increases by a factor of 6 to 36 the time required to reach 50% effective capacity in an STT-RAM-based NV-LLC.
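As a rough illustration of the allocation idea (a worn frame remains useful as long as a compressed block fits in its remaining healthy bytes), the sketch below models frames whose usable byte count shrinks with wear; the names, sizes, and first-fit policy are assumptions for illustration, not the paper's actual mechanism.

```python
class Frame:
    """A cache frame whose usable (healthy) byte count shrinks with write wear."""
    def __init__(self, size=64):
        self.usable = size              # healthy bytes still available in this frame

    def degrade(self, worn_bytes):
        self.usable = max(0, self.usable - worn_bytes)

def place_block(frames, compressed_size):
    """First-fit placement: a degraded frame can still hold a block whose
    compressed size fits into its remaining healthy bytes."""
    for frame in frames:
        if frame.usable >= compressed_size:
            return frame
    return None                         # no frame can hold the block in this set

set_frames = [Frame() for _ in range(8)]
set_frames[0].degrade(40)               # frame 0 has only 24 healthy bytes left
target = place_block(set_frames, 20)    # a 20-byte compressed block still fits in frame 0
```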
Emerging distributed cloud architectures, e.g., fog and mobile edge computing, are playing an increasingly important role in the efficient delivery of real-time stream-processing applications such as augmented reality, multiplayer gaming, and industrial automation. While such applications require processed streams to be shared and simultaneously consumed by multiple users/devices, existing technologies lack efficient mechanisms to deal with their inherent multicast nature, leading to unnecessary traffic redundancy and network congestion. In this paper, we establish a unified framework for distributed cloud network control with generalized (mixed-cast) traffic flows that allows optimizing the distributed execution of the required packet processing, forwarding, and replication operations. We first characterize the enlarged multicast network stability region under the new control framework (with respect to its unicast counterpart). We then design a novel queuing system that allows scheduling data packets according to their current destination sets, and leverage Lyapunov drift-plus-penalty theory to develop the first fully decentralized, throughput- and cost-optimal algorithm for multicast cloud network flow control. Numerical experiments validate analytical results and demonstrate the performance gain of the proposed design over existing cloud network control techniques.
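The scheduling flavor of a drift-plus-penalty rule can be sketched in a few lines: at each slot, a node weighs queue backlogs against operating cost and serves the commodity with the largest positive weight. The weight expression and cost terms below are a generic max-weight-style illustration, not the paper's exact decentralized algorithm; note only that queues are keyed by destination set, as in the multicast formulation.

```python
def drift_plus_penalty_choice(backlog, service_rate, unit_cost, V):
    """One node, one time slot: pick the commodity (keyed by its destination
    set) maximizing Q * rate - V * cost; stay idle if no weight is positive."""
    best_key, best_weight = None, 0.0
    for key in backlog:
        weight = backlog[key] * service_rate[key] - V * unit_cost[key]
        if weight > best_weight:
            best_key, best_weight = key, weight
    return best_key

choice = drift_plus_penalty_choice(
    backlog={frozenset({"d1"}): 12, frozenset({"d1", "d2"}): 7},
    service_rate={frozenset({"d1"}): 1.0, frozenset({"d1", "d2"}): 1.0},
    unit_cost={frozenset({"d1"}): 2.0, frozenset({"d1", "d2"}): 2.0},
    V=1.0,
)
```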
The idea of approximating the Shapley value of an n-person game by Monte Carlo simulation was first suggested by Mann and Shapley (1960), who also introduced four heuristic methods to reduce the estimation error. Since 1960, several statistical methods have been developed to reduce the standard deviation of the estimate. In this paper, we develop an algorithm that uses pairs of negatively correlated samples to reduce the variance of the estimate. Although the observations generated are not independent, the sample is ergodic (it obeys the strong law of large numbers), hence the name "ergodic sampling". Unlike Mann and Shapley, we do not rely on heuristics; instead, the algorithm uses a small sample to learn the best ergodic transformation for a given game. We illustrate the algorithm on eight games with different characteristics to test its performance and understand how it works. The experiments show that the method has variance at least as low as that of an independent sample, and in five of the test games it significantly improves the quality of the estimation, by up to 75 percent.
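The sketch below conveys the flavor of estimating a Shapley value from negatively correlated permutation pairs. Here the pair is simply a random permutation and its reverse, whereas the paper learns the ergodic transformation per game, so the pairing rule (and the toy game) should be read as assumptions for illustration only.

```python
import random

def shapley_pair_estimate(n, v, i, m=1000):
    """Estimate the Shapley value of player i in an n-player game v by
    averaging marginal contributions over pairs of negatively correlated
    permutations (a random permutation and its reverse).
    v maps a frozenset of players to the coalition's value."""
    players = list(range(n))
    total = 0.0
    for _ in range(m):
        perm = players[:]
        random.shuffle(perm)
        for order in (perm, perm[::-1]):          # the negatively correlated pair
            pred = order[:order.index(i)]          # players preceding i in this order
            total += v(frozenset(pred + [i])) - v(frozenset(pred))
    return total / (2 * m)

# toy game: a coalition's value is the square of its size; the exact Shapley
# value of every player is 4, which the estimate should approach
phi0 = shapley_pair_estimate(4, lambda S: len(S) ** 2, i=0)
```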
Ethereum Improvement Proposal (EIP) 1559 was recently implemented to transform Ethereum's transaction fee market. EIP-1559 utilizes an algorithmic update rule with a constant learning rate to estimate a base fee. The base fee reflects prevailing network conditions and hence provides a more reliable oracle for current gas prices. Using on-chain data from the period after its launch, we evaluate the impact of EIP-1559 on the user experience and market performance. Our empirical findings suggest that although EIP-1559 achieves its goals on average, short-term behavior is marked by intense, chaotic oscillations in block sizes (as predicted by our recent theoretical dynamical-system analysis [1]) and slow adjustments during periods of demand bursts (e.g., NFT drops). Both phenomena lead to unwanted inter-block variability in mining rewards. To address this issue, we propose an alternative base fee adjustment rule in which the learning rate varies according to an additive-increase/multiplicative-decrease (AIMD) update scheme. Our simulations show that the AIMD rule robustly outperforms the default EIP-1559 protocol under various demand scenarios. These results provide evidence that variable-learning-rate mechanisms may constitute a promising alternative to the default EIP-1559 format and contribute to the ongoing discussion on the design of more efficient transaction fee markets.
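For reference, the default EIP-1559 rule multiplies the base fee by 1 + (1/8)·(gas_used − target)/target, with the target set to half the block gas limit. The AIMD variant below is an illustrative reading in which the learning rate grows additively while blocks stay far from target and shrinks multiplicatively once they return near it; the thresholds and step sizes are assumptions, not the paper's tuned values.

```python
TARGET = 15_000_000   # gas target per block (half of the 30M gas limit)

def eip1559_base_fee(base_fee, gas_used, lr=0.125):
    """Default EIP-1559 update with a constant learning rate of 1/8."""
    return base_fee * (1 + lr * (gas_used - TARGET) / TARGET)

def aimd_base_fee(base_fee, gas_used, lr, lr_add=0.01, lr_mult=0.5,
                  lr_min=0.01, lr_max=0.5, tol=0.1):
    """Variable-learning-rate variant (illustrative AIMD reading):
    additively increase lr while block sizes deviate strongly from target,
    multiplicatively decrease it once blocks are close to target again."""
    deviation = abs(gas_used - TARGET) / TARGET
    if deviation > tol:
        lr = min(lr + lr_add, lr_max)        # additive increase
    else:
        lr = max(lr * lr_mult, lr_min)       # multiplicative decrease
    return base_fee * (1 + lr * (gas_used - TARGET) / TARGET), lr
```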
Unlike conventional cars, connected and autonomous vehicles (CAVs) can cross intersections in a lane-free manner and utilise the whole area of the intersection. This paper presents a minimum-time optimal control problem to centrally coordinate CAVs so that they simultaneously cross an intersection in the shortest possible time. Duality theory is employed to convexify the constraints that prevent CAVs from colliding with each other and with road boundaries. The resulting formulation is smooth and solvable by gradient-based algorithms. Simulation results show that the proposed strategy reduces intersection crossing time by an average of 52% and 54% compared to state-of-the-art reservation-based and lane-free methods, respectively. Furthermore, the crossing time under the proposed strategy remains constant for a given intersection regardless of the number of CAVs.
Computing a maximum independent set (MaxIS) is a fundamental NP-hard problem in graph theory with important applications in a wide spectrum of fields. Since graphs in many applications change frequently over time, the problem of maintaining a MaxIS over dynamic graphs has attracted increasing attention in recent years. Due to the intractability of maintaining an exact MaxIS, this paper aims to develop efficient algorithms that maintain an approximate MaxIS with a theoretical accuracy guarantee. In particular, we propose a framework that maintains a $(\frac{\Delta}{2} + 1)$-approximate MaxIS over dynamic graphs and prove that it achieves a constant approximation ratio on many real-world networks. To the best of our knowledge, this is the first non-trivial approximability result for the dynamic MaxIS problem. Following the framework, we implement an efficient linear-time dynamic algorithm and a more effective dynamic algorithm with near-linear expected time complexity. Thorough experiments on real and synthetic graphs demonstrate the effectiveness and efficiency of the proposed algorithms, especially when the graph is highly dynamic.
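For intuition, a much simpler baseline maintains a maximal (not maximum) independent set by local repair after each update. The sketch below does exactly that and does not attain the $(\frac{\Delta}{2} + 1)$ guarantee claimed above, but it shows the kind of bounded, local work a dynamic algorithm performs per edge insertion or deletion.

```python
from collections import defaultdict

class DynamicMIS:
    """Maintain a maximal independent set under edge updates by local repair
    (an illustrative baseline, not the paper's algorithm)."""
    def __init__(self):
        self.adj = defaultdict(set)
        self.mis = set()

    def _covered(self, u):
        return any(w in self.mis for w in self.adj[u])

    def _try_add(self, u):
        if u not in self.mis and not self._covered(u):
            self.mis.add(u)

    def insert_edge(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        self._try_add(u); self._try_add(v)     # newly seen vertices may join the set
        if u in self.mis and v in self.mis:    # conflict on the new edge: evict one endpoint
            self.mis.discard(v)
            for w in self.adj[v]:              # neighbours of v may have lost their cover
                self._try_add(w)

    def delete_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        self._try_add(u); self._try_add(v)     # either endpoint may now be uncovered
```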
We study the performance of a phase-noise-impaired double reconfigurable intelligent surface (RIS)-aided multiuser (MU) multiple-input single-output (MISO) system with spatial correlation at both RISs and the base station (BS). The downlink achievable rate is derived in closed form under maximum ratio transmission (MRT) precoding. In addition, we obtain the optimal phase-shift design at both RISs in closed form for the considered channel and phase-noise models. Numerical results validate the analytical expressions and highlight the effects of different system parameters on the achievable rate. Our analysis shows that phase noise can severely degrade performance when users do not have direct links to both RISs and can only be served via the double-reflection link. We also show that high spatial correlation at the RISs is essential for high achievable rates.
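For readers unfamiliar with MRT, the precoder simply aligns each user's beam with its estimated effective channel. In generic notation (not the paper's exact channel model), the MRT precoder for user $k$ is $\mathbf{w}_k = \sqrt{p_k}\,\hat{\mathbf{h}}_k / \lVert \hat{\mathbf{h}}_k \rVert$, $k = 1,\dots,K$, where $\hat{\mathbf{h}}_k$ denotes the BS estimate of user $k$'s effective channel (direct plus reflected components) and $p_k$ its allocated power.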
We present DeepCSI, a novel approach to Wi-Fi radio fingerprinting (RFP) that leverages standard-compliant beamforming feedback matrices to authenticate MU-MIMO Wi-Fi devices on the move. By capturing unique imperfections in off-the-shelf radio circuitry, RFP techniques can identify wireless devices directly at the physical layer, enabling low-latency, low-energy, cryptography-free authentication. However, existing Wi-Fi RFP techniques are based on software-defined radios (SDRs), which may ultimately prevent their widespread adoption. Moreover, it is unclear whether existing strategies work in the presence of MU-MIMO transmitters, a key technology in modern Wi-Fi standards. In contrast to prior work, DeepCSI does not require SDR technology and can run on any low-cost Wi-Fi device to authenticate MU-MIMO transmitters. Our key intuition is that imperfections in the transmitter's radio circuitry percolate onto the beamforming feedback matrix, so RFP can be performed without explicit channel state information (CSI) computation. DeepCSI is robust to inter-stream and inter-user interference because the beamforming feedback is not affected by those phenomena. We extensively evaluate the performance of DeepCSI through a massive data collection campaign performed in the wild with off-the-shelf equipment, where 10 MU-MIMO Wi-Fi radios emit signals from different positions. Experimental results indicate that DeepCSI correctly identifies the transmitter with an accuracy of up to 98%. The identification accuracy remains above 82% when the device moves within the environment. To allow replicability and provide a performance benchmark, we pledge to share the 800 GB of datasets - collected in static and, for the first time, dynamic conditions - and the code database with the community.
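As an illustration of how such a fingerprint classifier might look, the sketch below feeds quantized beamforming-feedback angles to a small convolutional network; the input shape, layer sizes, and ten-device output are assumptions for illustration, not the actual DeepCSI architecture.

```python
import torch
import torch.nn as nn

class FeedbackFingerprinter(nn.Module):
    """Illustrative CNN classifying a transmitter from its compressed
    beamforming-feedback angles (shapes and sizes are assumptions)."""
    def __init__(self, n_devices=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 4)),
            nn.Flatten(),
            nn.Linear(32 * 8 * 4, n_devices),
        )

    def forward(self, x):           # x: (batch, 1, subcarriers, quantized angles)
        return self.net(x)

# one hypothetical sample: feedback angles over 234 subcarriers, 10 angles each
logits = FeedbackFingerprinter()(torch.randn(1, 1, 234, 10))
```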
Since deep neural networks were developed, they have made huge contributions to everyday life; in almost every aspect of daily life, machine learning can now offer advice that complements or even surpasses human judgment. However, despite these achievements, the design and training of neural networks remain challenging and unpredictable. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The research then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for modules designed by users. The paper concludes with open problems in applying HPO to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
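As a concrete anchor for the algorithm discussion, the simplest searcher covered by such reviews is random search: sample configurations from the space, evaluate each, and keep the best. The search space and the `evaluate` callback below are placeholders for illustration.

```python
import random

def random_search(evaluate, space, n_trials=20, seed=0):
    """Simplest HPO baseline: sample configurations uniformly from the
    search space and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = evaluate(cfg)           # e.g., train the model and return validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {"lr": [1e-4, 3e-4, 1e-3, 3e-3],
         "batch_size": [32, 64, 128],
         "dropout": [0.0, 0.1, 0.3, 0.5]}
# `evaluate` would train a model with the sampled configuration and report a metric.
```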