IEEE 802.11, i.e., Wi-Fi, has emerged as the prevailing technology for broadband wireless networking. To meet the tremendous rise in demand for next-generation wireless LANs, a robust and efficient MAC protocol is required for the Wi-Fi network. However, traditional MAC mechanisms are not suitable for next-generation communications due to inherent constraints. In this regard, OFDMA technology can be adopted to design an efficient MAC protocol for the Wi-Fi network. The purpose of this research is to provide a high-speed network for Wi-Fi users. The thesis presents three MAC protocols, namely HTFA (High Throughput and Fair Access), ERA (Efficient Resource Allocation), and PRS (Proportional Resource Scheduling), that employ OFDMA technology. The proposed protocols improve Wi-Fi communication under the latest IEEE 802.11ax standard, i.e., Wi-Fi 6. In particular, they improve several MAC performance metrics, increasing throughput, goodput, and fairness index while reducing packet retransmissions and collisions. Simulation results validate that the new protocols substantially outperform existing protocols. The protocols designed in this thesis are compliant with the latest IEEE 802.11ax standard, which promises at least a fourfold increase in per-user throughput and support for ten times as many users. Thus, the new protocols can ensure uninterrupted and smooth communication in highly dense environments. The thesis also provides extensive resources, including a survey of state-of-the-art MAC protocols, an analysis of contemporary protocols and their performance metrics, the architecture of the Wi-Fi system, OFDMA constraints and regulations, the protocol frameworks, analytical models, and relevant data, theory, and methods, which should be valuable to future researchers working on Wi-Fi networks.
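To illustrate the kind of OFDMA resource allocation such protocols build on, the following is a minimal sketch of proportional scheduling of 802.11ax resource units (RUs) among stations according to their queued traffic. It is an assumption-laden illustration, not the thesis' PRS algorithm; the station names, queue sizes, and RU count are hypothetical.

```python
# Minimal sketch of proportional OFDMA resource-unit (RU) scheduling.
# Hypothetical inputs; not the PRS algorithm from the thesis.

def proportional_ru_allocation(queued_bytes, num_rus):
    """Split num_rus resource units among stations proportionally to their
    queued traffic, using largest-remainder rounding."""
    total = sum(queued_bytes.values())
    if total == 0:
        return {sta: 0 for sta in queued_bytes}
    exact = {sta: num_rus * q / total for sta, q in queued_bytes.items()}
    alloc = {sta: int(x) for sta, x in exact.items()}
    leftover = num_rus - sum(alloc.values())
    # Hand out remaining RUs to the stations with the largest fractional parts.
    for sta in sorted(exact, key=lambda s: exact[s] - alloc[s], reverse=True)[:leftover]:
        alloc[sta] += 1
    return alloc

# Example: nine 26-tone RUs of a 20 MHz channel shared by three stations.
print(proportional_ru_allocation({"STA1": 6000, "STA2": 3000, "STA3": 1000}, 9))
```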
The main contribution reported in the paper is a novel paradigm through which mobile cellular traffic forecasting is made substantially more accurate. Specifically, by incorporating freely available road metrics, we characterise the data generation process and its spatial dependencies, which in turn improves the forecasting estimates. We employ highway flow and average speed variables, together with a cellular network traffic metric, in a light learning structure to predict the short-term future load on a cell covering a segment of a highway. This is in sharp contrast to prior art, which mainly studies urban scenarios (with pedestrian and limited vehicular speeds) and develops machine learning approaches that use exclusively network metrics and meta information to make mid-term and long-term predictions. The learning structure can be used at the cell or edge level, and can find application in both federated and centralised learning.
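As a minimal sketch of the idea (not the paper's actual learning structure), the snippet below predicts the next-interval cell load from the lagged load plus road metrics (highway flow and average speed) with a small ridge regression; the synthetic data and the model choice are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: short-term cell-load forecasting from lagged network load
# plus road metrics (highway flow, average speed). Synthetic data and the
# ridge-regression model are illustrative assumptions, not the paper's setup.
rng = np.random.default_rng(0)
T = 500
flow = 100 + 50 * np.sin(np.linspace(0, 20, T)) + rng.normal(0, 5, T)   # vehicles/min
speed = 100 - 0.2 * flow + rng.normal(0, 3, T)                           # km/h
load = 0.8 * flow + 0.1 * (120 - speed) + rng.normal(0, 4, T)            # toy cell load

# Features: previous load and road metrics; target: next-interval load.
X = np.column_stack([load[:-1], flow[:-1], speed[:-1], np.ones(T - 1)])
y = load[1:]
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

pred = X @ w
print("mean absolute error with road metrics:", np.abs(pred - y).mean())
```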
Field-Programmable Gate Array (FPGA)-based Software-Defined Radio (SDR) is well-suited for experimenting with advanced wireless communication systems, as it allows the architecture to be altered promptly while achieving high performance. However, programming the FPGA using a Hardware Description Language (HDL) is a time-consuming task for FPGA developers and difficult for software developers, which limits the potential of SDR. High-Level Synthesis (HLS) tools aid designers by allowing them to program at a higher level of abstraction. However, if a design is not crafted carefully, HLS may degrade computing performance or significantly increase resource utilization. This work shows that it is feasible to design modern Orthogonal Frequency Division Multiplexing (OFDM) baseband processing modules such as channel estimation and equalization using HLS without sacrificing performance, and to integrate them into an HDL design to form a fully operational FPGA-based Wi-Fi (IEEE 802.11a/g/n) transceiver. Starting with no HLS experience, we created, in less than one month, a design with minor overhead in terms of latency and resource utilization compared to the HDL approach. We show the readability of the sequential logic coded in HLS, and discuss the lessons learned from the approach taken and the benefits it brings for further design and experimentation. The FPGA design generated by HLS was verified in simulation to be bit-true with its MATLAB implementation. Furthermore, we show its practical performance when deployed on a System-on-Chip (SoC)-based SDR using a professional wireless connectivity tester.
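For a numerical reference on what such channel estimation and equalization modules compute, here is a minimal floating-point sketch of per-subcarrier least-squares channel estimation from a known training symbol followed by zero-forcing equalization; it illustrates the math only, not the fixed-point HLS/FPGA implementation, and all sizes are illustrative.

```python
import numpy as np

# Minimal numerical sketch of OFDM channel estimation and equalization:
# least-squares estimate from a known training symbol, then zero-forcing.
# Floating-point illustration only; not the fixed-point HLS design.
rng = np.random.default_rng(1)
n_sc = 52                                    # occupied subcarriers (e.g. 802.11a/g)
training = np.exp(1j * np.pi * rng.integers(0, 2, n_sc))   # known BPSK training symbol
h = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)  # channel (freq. domain)

rx_training = h * training + 0.01 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
h_est = rx_training / training               # per-subcarrier least-squares estimate

data = np.exp(1j * np.pi * rng.integers(0, 2, n_sc))        # BPSK data symbol
rx_data = h * data + 0.01 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
equalized = rx_data / h_est                  # zero-forcing equalization

print("symbol error rate:", np.mean(np.sign(equalized.real) != np.sign(data.real)))
```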
The (modern) arbitrary derivative (ADER) approach is a popular technique for the numerical solution of differential problems, based on iteratively solving an implicit discretization of their weak formulation. In this work, focusing on the ODE context, we investigate several strategies to improve this approach. Our initial emphasis is on the order of accuracy of the method in connection with the polynomial discretization of the weak formulation. We demonstrate that precise choices lead to higher-order convergence in comparison to the existing literature. Then, we cast ADER methods into a Deferred Correction (DeC) formalism. This allows us to determine the optimal number of iterations, which equals the formal order of accuracy of the method, and to introduce efficient $p$-adaptive modifications. These are defined by matching, at each iteration, the order of accuracy achieved and the degree of the polynomial reconstruction. We provide analytical and numerical results, including the stability analysis of the new modified methods, an investigation of the computational efficiency, an application to adaptivity, and an application to hyperbolic PDEs with a Spectral Difference (SD) space discretization.
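To make the iterative character concrete, the following is a minimal sketch of an ADER/DeC-type step for a scalar ODE: each iteration applies the quadrature update of the implicit collocation (weak) formulation built from the previous iterate, and the number of iterations is matched to the nominal order. The equispaced subtimenodes and the test problem are illustrative assumptions, not the specific variants studied in the work.

```python
import numpy as np

# Minimal sketch of an ADER/DeC-style iteration for u' = f(u):
# fixed-point (Picard-type) iterations of the collocation update, with
# #iterations equal to the nominal order. Node choice is illustrative.

def lagrange_integral_weights(tau):
    """theta[m, j] = integral from 0 to tau[m] of the j-th Lagrange basis on nodes tau."""
    M = len(tau)
    theta = np.zeros((M, M))
    for j in range(M):
        roots = np.delete(tau, j)
        poly = np.poly(roots) / np.prod(tau[j] - roots)   # coefficients of ell_j
        antider = np.polyint(poly)
        theta[:, j] = np.polyval(antider, tau) - np.polyval(antider, 0.0)
    return theta

def ader_dec_step(f, u0, dt, order):
    tau = np.linspace(0.0, 1.0, order)        # subtimenodes in [0, 1]
    theta = lagrange_integral_weights(tau)
    u = np.full(order, u0, dtype=float)       # 0th iterate: constant state
    for _ in range(order):                    # iterations matched to the order
        u = u0 + dt * theta @ f(u)
    return u[-1]                              # value at the end of the time step

# Example: u' = -u, u(0) = 1 integrated to t = 1 with a 4th-order variant.
f = lambda u: -u
u, dt = 1.0, 0.1
for _ in range(10):
    u = ader_dec_step(f, u, dt, order=4)
print(u, "vs exact", np.exp(-1.0))
```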
In the context of the upgrade of the Large Hadron Collider at CERN for high-luminosity operation, the particle detectors have to cope with much higher data rates and therefore need to upgrade their data acquisition systems. This upgrade is taken as an opportunity to replace the currently used, highly customized hardware with commercial solutions. Nevertheless, part of the data processing still needs to be done within Field Programmable Gate Arrays (FPGAs), requiring the transfer of data between the FPGAs and the commercial servers. This paper reports on a study of direct data transmission from FPGAs to servers via a commercial network. The large data buffers required by reliable data-transmission protocols are avoided by using an emerging technique called eXpress Data Path (XDP). Based on XDP, the transmission of 5.2 PB (i.e., 2.92 * 10^{12} packets) was achieved within 168 h without a single missing packet.
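For context, a back-of-the-envelope check of what the quoted figures (5.2 PB, 2.92 * 10^{12} packets, 168 h) imply in terms of average rates:

```python
# Back-of-the-envelope check of the reported transfer (5.2 PB, 2.92e12 packets, 168 h).
bytes_total = 5.2e15
packets = 2.92e12
seconds = 168 * 3600

print(f"average throughput : {bytes_total * 8 / seconds / 1e9:.1f} Gbit/s")   # ~68.8 Gbit/s
print(f"average packet rate: {packets / seconds / 1e6:.2f} Mpackets/s")       # ~4.83 Mpackets/s
print(f"average packet size: {bytes_total / packets:.0f} bytes")              # ~1781 bytes
```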
Over the past few years, audio classification on large-scale datasets such as AudioSet has been an important research area. Several deep convolution-based neural networks have shown compelling performance, notably VGGish, YAMNet, and the Pretrained Audio Neural Network (PANN). These models are available as pretrained architectures for transfer learning as well as adaptation to specific audio tasks. In this paper, we propose LEAN, a lightweight on-device deep learning-based model for audio classification. LEAN consists of a raw-waveform-based temporal feature extractor called the Wave Encoder and a log-mel-based pretrained YAMNet. We show that combining the trainable Wave Encoder and the pretrained YAMNet with cross-attention-based temporal realignment yields competitive performance on downstream audio classification tasks with a smaller memory footprint, making it suitable for resource-constrained devices such as mobile and edge devices. Our proposed system achieves an on-device mean average precision (mAP) of 0.445 with a memory footprint of a mere 4.5 MB on the FSD50K dataset, a 22% improvement over the baseline on-device mAP on the same dataset.
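The sketch below shows the generic scaled dot-product cross-attention through which one stream of frames can be realigned against another, with wave-encoder frames as queries and YAMNet frame embeddings as keys/values; the dimensions, frame counts, and single-head formulation are illustrative assumptions, not LEAN's exact configuration.

```python
import numpy as np

# Minimal sketch of cross-attention-based temporal realignment:
# wave-encoder frames (queries) attend over pretrained YAMNet frame
# embeddings (keys/values). Shapes and single-head form are assumptions.
rng = np.random.default_rng(0)
d = 64
wave_frames = rng.normal(size=(25, d))    # queries: trainable wave-encoder output
yamnet_frames = rng.normal(size=(20, d))  # keys/values: pretrained YAMNet embeddings

def cross_attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])              # (25, 20) alignment scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over YAMNet frames
    return weights @ v                                    # temporally realigned features

aligned = cross_attention(wave_frames, yamnet_frames, yamnet_frames)
print(aligned.shape)   # (25, 64); pooled features would feed the classifier head
```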
Most networks are not static objects; instead, they change over time. This observation has sparked rigorous research on temporal graphs in recent years. In temporal graphs, we have a fixed set of nodes, and the connections between them are available only at certain time steps. This gives rise to a plethora of algorithmic problems on such graphs, most prominently the problem of finding temporal spanners, i.e., the computation of subgraphs that guarantee all-pairs reachability via temporal paths. To the best of our knowledge, only centralized approaches for solving this problem are known. However, many real-world networks are not shaped by a central designer; instead, they emerge and evolve through the interaction of many strategic agents. This observation is the driving force behind the recent intensive research on game-theoretic network formation models. In this work we bring together these two recent research directions: temporal graphs and game-theoretic network formation. As a first step into this new realm, we focus on a simplified setting where a complete temporal host graph is given and the agents, corresponding to its nodes, selfishly create incident edges to ensure that they can reach all other nodes via temporal paths in the created network. This yields temporal spanners as equilibria of our game. We prove results on the convergence to and the existence of equilibrium networks, on the complexity of finding best agent strategies, and on the quality of the equilibria. By taking these first important steps, we uncover challenging open problems that call for an in-depth exploration of the creation of temporal graphs by strategic agents.
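To fix the central notion, here is a minimal sketch of temporal reachability: a single-source earliest-arrival computation over time-labelled edges (restricted to strict temporal paths, i.e., strictly increasing time labels), which is the property a temporal spanner must preserve for all pairs. The example graph is illustrative.

```python
# Minimal sketch of temporal reachability: earliest-arrival times from a source
# over edges labelled with availability time steps. A strict temporal path must
# use strictly increasing time labels. The example graph is illustrative.

def earliest_arrival(n, temporal_edges, source):
    """temporal_edges: list of (u, v, t), meaning undirected edge {u, v} exists at time t."""
    arrival = {v: float("inf") for v in range(n)}
    arrival[source] = 0
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):  # scan edges in time order
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] < t and t < arrival[u]:
            arrival[u] = t
    return arrival

edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (1, 3, 3)]
print(earliest_arrival(4, edges, source=0))  # node 3 is reached at time 3, not via (2, 3, 1)

# A temporal spanner is a subset of the labelled edges that keeps every such
# all-pairs reachability intact.
```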
The iterated learning model is an agent-based model of language evolution notable for demonstrating the emergence of compositional language. In its original form, it modelled language evolution along a single chain of teacher-pupil interactions; here we modify the model to allow more complex patterns of communication within a population and use the extended model to quantify the effect of within-community and between-community communication frequency on language development. We find that a small amount of between-community communication can lead to population-wide language convergence but that this global language amalgamation is more difficult to achieve when communities are spatially embedded.
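As a rough illustration of the population-level extension only, the sketch below samples, for each new learner, a teacher from its own community with probability 1 - p_between and otherwise from another community. The "language" here is a bare meaning-to-signal table that learners simply copy; the real iterated learning model (with generalisation and bottlenecked transmission) and all parameter values are far richer than this placeholder.

```python
import random

# Minimal sketch of within-/between-community communication scheduling in a
# population of agents. Languages are bare meaning-to-signal tables that are
# copied verbatim; this is a placeholder, not the full iterated learning model.
random.seed(0)
communities = 4
agents_per_community = 10
meanings = list(range(8))

def random_language():
    return {m: random.choice("abcd") for m in meanings}

population = [[random_language() for _ in range(agents_per_community)]
              for _ in range(communities)]

def step(p_between):
    c = random.randrange(communities)
    teacher_c = c if random.random() > p_between else random.choice(
        [k for k in range(communities) if k != c])
    teacher = random.choice(population[teacher_c])
    # A random agent in community c is replaced by a learner of the teacher's language.
    population[c][random.randrange(agents_per_community)] = dict(teacher)

for _ in range(5000):
    step(p_between=0.05)

# Crude convergence measure: how often two randomly chosen agents agree on a meaning.
agree = sum(random.choice(random.choice(population))[m] ==
            random.choice(random.choice(population))[m]
            for m in meanings for _ in range(100)) / (len(meanings) * 100)
print("population-wide pairwise agreement:", agree)
```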
Inferring protocol formats is critical for many security applications. However, existing format-inference techniques often miss many formats, because almost all of them rely on dynamic analysis driven by a limited number of network packets. If a feature is not present in the input packets, the feature will be missed in the resulting formats. We develop a novel static program analysis for format inference. It is well known that static analysis does not rely on any input packets and can achieve high coverage by scanning every piece of code. However, for efficiency and precision, we have to address two challenges, namely path explosion and disordered path constraints. To this end, our approach uses abstract interpretation to produce a novel data structure called the abstract format graph. It delimits precise but costly operations to small regions, thus ensuring precision and efficiency at the same time. Our inferred formats have high coverage and precisely specify both field boundaries and semantic constraints among packet fields. Our evaluation shows that we can infer the formats of a protocol in one minute with >95% precision and recall, much better than four baseline techniques. Our inferred formats can substantially enhance existing protocol fuzzers, improving coverage by 20% to 260% and discovering 53 zero-days with 47 assigned CVEs. We also provide case studies of adopting our inferred formats in other security applications, including traffic auditing and intrusion detection.
Normalization is known to help the optimization of deep neural networks. Curiously, different architectures require specialized normalization methods. In this paper, we study what normalization is effective for Graph Neural Networks (GNNs). First, we adapt and evaluate existing methods from other domains on GNNs and find that InstanceNorm achieves faster convergence than BatchNorm and LayerNorm. We provide an explanation by showing that InstanceNorm serves as a preconditioner for GNNs, whereas this preconditioning effect is weaker for BatchNorm due to the heavy batch noise in graph datasets. Second, we show that the shift operation in InstanceNorm degrades the expressiveness of GNNs on highly regular graphs. We address this issue by proposing GraphNorm, which uses a learnable shift. Empirically, GNNs with GraphNorm converge faster than GNNs using other normalization methods. GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
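For reference, a minimal numpy sketch of a GraphNorm-style operation: per-graph normalization over nodes with a learnable shift alpha (alpha = 1 removes the full mean as in InstanceNorm, alpha = 0 keeps it), plus the usual affine gamma/beta. The parameter values and sizes below are placeholders.

```python
import numpy as np

# Minimal sketch of GraphNorm: per-graph normalization over nodes with a
# learnable shift alpha; gamma/beta are the usual affine parameters.
def graph_norm(h, alpha, gamma, beta, eps=1e-5):
    """h: (num_nodes, d) features of a single graph; alpha, gamma, beta: (d,)."""
    mean = h.mean(axis=0)                        # per-feature mean over the graph's nodes
    shifted = h - alpha * mean                   # learnable degree of mean removal
    std = np.sqrt((shifted ** 2).mean(axis=0) + eps)
    return gamma * shifted / std + beta

rng = np.random.default_rng(0)
h = rng.normal(size=(7, 4))                      # one graph with 7 nodes, 4 features
d = h.shape[1]
out = graph_norm(h, alpha=np.full(d, 0.8), gamma=np.ones(d), beta=np.zeros(d))
print(out.shape)                                 # (7, 4)
```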
Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with limited memory or in applications with strict latency requirements. A natural approach, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area during the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis of the performance, related applications, advantages, and drawbacks. We then cover a few very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions for future work on this topic.
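As a concrete instance of the first scheme (parameter pruning), here is a minimal sketch of magnitude-based unstructured pruning of a convolutional weight tensor; the 70% sparsity target and the tensor shape are illustrative choices, not values from any surveyed method.

```python
import numpy as np

# Minimal sketch of magnitude-based unstructured pruning of a convolutional
# weight tensor. Sparsity target and tensor shape are illustrative.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 32, 3, 3))            # (out_ch, in_ch, kH, kW)

sparsity = 0.7
threshold = np.quantile(np.abs(weights), sparsity)   # keep the largest 30% by magnitude
mask = np.abs(weights) >= threshold
pruned = weights * mask

print(f"zeroed weights: {1 - mask.mean():.2%}")      # roughly 70% of parameters removed
# In practice the mask is kept fixed and the remaining weights are fine-tuned.
```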