With the emergence of new technologies and a growing number of wireless networks, we face the problem of radio spectrum shortage. As a result, identifying the state of the wireless channel in order to exploit its idle periods, while also boosting network security, is a pivotal issue. Detecting and classifying protocols in the MAC sublayer enables Cognitive Radio users to improve spectrum utilization and minimize potential interference. In this paper, we classify the Wi-Fi and Bluetooth protocols, the most widely used MAC-sublayer protocols in the ISM radio band. With the advent of various wireless technologies, especially in the 2.4 GHz band, the ISM spectrum has become crowded and heavily loaded, leading to a shortage of spectrum resources and to user interference. Identifying and classifying the active protocols is therefore an effective and useful approach. Leveraging machine learning techniques, known for their strong classification capabilities, we apply the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) algorithms to classify frames into three classes: Wi-Fi, Wi-Fi Beacon, and Bluetooth. To capture the signals, we use a USRP N210 software-defined radio and record real data in an indoor environment under different conditions, with transmitters and receivers of the two protocols present or absent. From this dataset, after studying the time- and frequency-domain characteristics of the protocols, we extract the frame width and the silence gap between consecutive frames as time features and the peak-to-average power ratio (PAPR) of each frame as a power feature. Comparing the classification results under different conditions, including added Gaussian noise, we find that the nonlinear SVM with an RBF kernel and the KNN classifier perform best, with 97.83% and 98.12% classification accuracy, respectively.
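A minimal sketch of the classification step described above, assuming frames are already represented by the three extracted features (frame width, inter-frame silence gap, PAPR): an RBF-kernel SVM and a KNN classifier are trained to separate Wi-Fi, Wi-Fi Beacon, and Bluetooth. The feature values below are random placeholders, not USRP N210 measurements.

```python
# Sketch only: placeholder features stand in for [frame_width, silence_gap, papr]
# extracted from captured frames; labels 0/1/2 stand for Wi-Fi / Wi-Fi Beacon / Bluetooth.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))                     # placeholder feature matrix
y = rng.integers(0, 3, size=600)                  # placeholder class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

for name, clf in [("SVM-RBF", svm_rbf), ("KNN", knn)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```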
Motivated by a radiative heat transport equation, we consider a particle undergoing collisions in a space-time domain and propose a method to sample its escape time, position, and direction from the domain. The first step of the procedure is an estimate of how many elementary collisions it is safe to take before the chance of exiting the domain becomes too high; these collisions are then aggregated into a single movement. The method does not rely on any specific model or on any particular parameter regime. We give theoretical results both under the normal approximation and without it, and test the method on benchmarks from the literature. The results confirm the theoretical predictions and show that the proposal is an efficient method for sampling the escape distribution of the particle.
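A rough sketch, under stated assumptions, of what aggregating collisions can look like: n isotropic flights with exponential free paths are replaced by one movement whose elapsed time is sampled exactly (a Gamma variable) and whose displacement is drawn from a normal approximation. The safety criterion for choosing n and the escape-time correction are the subject of the paper and are not reproduced; sigma, speed, and n below are illustrative.

```python
# Sketch only: aggregate n elementary collisions into a single movement.
import numpy as np

rng = np.random.default_rng(1)
sigma = 5.0        # collision rate (mean free path 1/sigma) -- assumed value
speed = 1.0        # particle speed -- assumed value
n = 50             # number of collisions aggregated per move (assumed "safe")

def aggregate_move(pos, rng):
    # Total path length of n Exp(sigma) flights is Gamma(n, 1/sigma) (exact).
    path = rng.gamma(shape=n, scale=1.0 / sigma)
    elapsed = path / speed
    # Each isotropic flight has zero-mean displacement with per-axis variance
    # E[l^2]/3 = 2/(3 sigma^2); the sum of n flights is approximately normal.
    std = np.sqrt(n * 2.0 / (3.0 * sigma**2))
    return pos + rng.normal(scale=std, size=3), elapsed

pos, t = np.zeros(3), 0.0
for _ in range(10):                 # ten aggregated moves
    pos, dt = aggregate_move(pos, rng)
    t += dt
print("position:", pos, "elapsed time:", t)
```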
We consider the task of data-driven identification of dynamical systems, specifically for systems whose behavior at large frequencies is non-standard, as encoded by a non-trivial relative degree of the transfer function or, alternatively, a non-trivial index of a corresponding realization as a descriptor system. We develop novel surrogate modeling strategies that allow state-of-the-art rational approximation algorithms (e.g., AAA and vector fitting) to better handle data coming from such systems with non-trivial relative degree. Our contribution is twofold. On one hand, we describe a strategy to build rational surrogate models with prescribed relative degree, with the objective of mirroring the high-frequency behavior of the high-fidelity problem, when known. The surrogate model's desired degree is achieved through constraints on its barycentric coefficients, rather than through ad-hoc modifications of the rational form. On the other hand, we present a degree-identification routine that allows one to estimate the unknown relative degree of a system from low-frequency data. By identifying the degree of the system that generated the data, we can build a surrogate model that, in addition to matching the data well (at low frequencies), has enhanced extrapolation capabilities (at high frequencies). We showcase the effectiveness and robustness of the newly proposed method through a suite of numerical tests.
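One standard way to see how a relative-degree condition becomes a linear constraint on barycentric coefficients (a small illustration, not the paper's full identification routine): in the barycentric form r(z) = (sum_j w_j f_j/(z - z_j)) / (sum_j w_j/(z - z_j)), the value at infinity is (sum_j w_j f_j)/(sum_j w_j), so enforcing sum_j w_j f_j = 0 yields a strictly proper surrogate. The support points, data, and weight adjustment below are illustrative only.

```python
# Sketch only: effect of the linear constraint sum_j w_j f_j = 0 on the
# high-frequency behavior of a barycentric rational surrogate.
import numpy as np

def bary_eval(z, zj, fj, wj):
    c = wj / (z - zj)                      # Cauchy terms w_j / (z - z_j)
    return np.dot(c, fj) / np.sum(c)

# Data from a strictly proper transfer function H(s) = 1/(s^2 + s + 1).
zj = 1j * np.linspace(0.1, 2.0, 6)         # support points on the imaginary axis
fj = 1.0 / (zj**2 + zj + 1.0)

w0 = np.ones_like(zj)                                       # unconstrained weights
w1 = w0 - fj.conj() * np.sum(fj * w0) / np.sum(np.abs(fj)**2)  # project onto sum w_j f_j = 0

z_far = 1j * 1e6
print("unconstrained |r| at large frequency:", abs(bary_eval(z_far, zj, fj, w0)))
print("constrained   |r| at large frequency:", abs(bary_eval(z_far, zj, fj, w1)))
```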
The proposed two-dimensional geometrically exact beam element extends our previous work by including the effects of shear distortion, and also of distributed forces and moments acting along the beam. The general flexibility-based formulation exploits the kinematic equations combined with the inverted sectional equations and the integrated form of equilibrium equations. The resulting set of three first-order differential equations is discretized by finite differences and the boundary value problem is converted into an initial value problem using the shooting method. Due to the special structure of the governing equations, the scheme remains explicit even though the first derivatives are approximated by central differences, leading to high accuracy. The main advantage of the adopted approach is that the error can be efficiently reduced by refining the computational grid used for finite differences at the element level while keeping the number of global degrees of freedom low. The efficiency is also increased by dealing directly with the global centerline coordinates and sectional inclination with respect to global axes as the primary unknowns at the element level, thereby avoiding transformations between local and global coordinates. Two formulations of the sectional equations, referred to as the Reissner and Ziegler models, are presented and compared. In particular, stability of an axially loaded beam/column is investigated and the connections to the Haringx and Engesser stability theories are discussed. Both approaches are tested in a series of numerical examples, which illustrate (i) high accuracy with quadratic convergence when the spatial discretization is refined, (ii) easy modeling of variable stiffness along the element (such as rigid joint offsets), (iii) efficient and accurate characterization of the buckling and post-buckling behavior.
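A generic illustration of the shooting idea mentioned above, not the authors' beam element: a two-point boundary value problem is converted into an initial value problem by guessing the unknown initial slope and adjusting it until the far-end condition is satisfied. The elastica-type equation, interval, and boundary values below are chosen only for illustration.

```python
# Sketch only: shooting method for a two-point BVP via an IVP solver plus root finding.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, y):
    # y[0] = rotation theta, y[1] = theta'
    return [y[1], -np.sin(y[0])]

def residual(slope_guess):
    # Integrate the IVP with theta(0) = 0 and theta'(0) = slope_guess,
    # then return the mismatch in the far-end condition theta(1) = 1.
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope_guess], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

slope = brentq(residual, 0.0, 5.0)       # root of the boundary residual
print("initial slope found by shooting:", slope)
```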
Nonparametric procedures are more powerful for detecting interaction in two-way ANOVA when the data are non-normal. In this paper, we compute null critical values for the aligned rank-based tests (APCSSA/APCSSM) when the number of levels of each factor is between 2 and 6. We compare the performance of these new procedures with the ANOVA F-test for interaction, the adjusted rank transform test (ART), Conover's rank transform procedure (RT), and a rank-based ANOVA test (raov) using Monte Carlo simulations. The new procedures APCSSA/APCSSM are comparable with existing competitors in all settings. Even though there is no single dominant test for detecting interaction effects in non-normal data, the nonparametric procedure APCSSM is the most highly recommended for settings with Cauchy errors.
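A minimal sketch of the Monte Carlo set-up used for such comparisons (the APCSSA/APCSSM, ART, RT, and raov procedures themselves are not reproduced here): the empirical rejection rate of the classical F-test for interaction is estimated in a 3x3 layout with Cauchy errors under the null. Cell size, number of replications, and significance level are illustrative choices.

```python
# Sketch only: empirical type-I error of the ANOVA F-test for interaction with Cauchy errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
a, b, n_cell, n_rep, alpha = 3, 3, 5, 500, 0.05
rejections = 0

for _ in range(n_rep):
    A = np.repeat(np.arange(a), b * n_cell)
    B = np.tile(np.repeat(np.arange(b), n_cell), a)
    y = rng.standard_cauchy(a * b * n_cell)          # null model: no effects
    df = pd.DataFrame({"y": y, "A": A.astype(str), "B": B.astype(str)})
    fit = ols("y ~ C(A) * C(B)", data=df).fit()
    table = sm.stats.anova_lm(fit, typ=2)
    if table.loc["C(A):C(B)", "PR(>F)"] < alpha:
        rejections += 1

print("empirical type-I error of the F-test:", rejections / n_rep)
```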
The purpose of this work is to investigate the soundness and utility of a neural network-based approach as a framework for exploring the impact of image enhancement techniques on visual cortex activation. In a preliminary study, we prepare a set of state-of-the-art brain encoding models, selected among the top 10 methods that participated in The Algonauts Project 2023 Challenge [16]. We analyze their ability to make valid predictions about the effects of various image enhancement techniques on neural responses. Given the impossibility of acquiring the actual data due to the high costs associated with brain imaging procedures, our investigation builds on a series of experiments. Specifically, we analyze the ability of brain encoders to estimate the cerebral response to various augmentations by evaluating the predicted response to augmentations targeting objects (i.e., faces and words) with a known impact on specific areas. Moreover, we study the predicted activation in response to objects unseen during training, exploring the impact of semantically out-of-distribution stimuli. We provide relevant evidence for the generalization ability of the models forming the proposed framework, which appears promising for identifying the optimal visual augmentation filter for a given task, for model-driven design strategies, and for AR and VR applications.
A central task in knowledge compilation is to compile a CNF-SAT instance into a succinct representation format that allows efficient operations such as testing satisfiability, counting, or enumerating all solutions. Useful representation formats studied in this area range from ordered binary decision diagrams (OBDDs) to circuits in decomposable negation normal form (DNNFs). While it is known that there exist CNF formulas that require exponential-size representations, the situation is less well studied for types of constraints other than Boolean disjunctive clauses. The constraint satisfaction problem (CSP) is a powerful framework that generalizes CNF-SAT by allowing arbitrary sets of constraints over any finite domain. The main goal of our work is to understand for which types of constraints (also called the constraint language) it is possible to efficiently compute representations of polynomial size. We answer this question completely and prove two tight characterizations of efficiently compilable constraint languages, depending on whether the target format is structured. We first identify the combinatorial property of ``strong blockwise decomposability'' and show that if a constraint language has this property, we can compute DNNF representations of linear size. For all other constraint languages we construct families of CSP instances that provably require DNNFs of exponential size. For a subclass of ``strong uniformly blockwise decomposable'' constraint languages we obtain a similar dichotomy for structured DNNFs. In fact, strong (uniform) blockwise decomposability even allows efficient compilation into multi-valued analogs of OBDDs and FBDDs, respectively. Thus, we get complete characterizations for all knowledge compilation classes between O(B)DDs and DNNFs.
Quantum computing holds unparalleled potential to enhance, speed up, and transform machine learning. However, an unambiguous demonstration of a quantum learning advantage has not been achieved so far. Here, we rigorously establish a noise-robust, unconditional quantum learning advantage in terms of expressivity, inference speed, and training efficiency, compared to commonly used classical machine learning models. Our proof is information-theoretic and pinpoints the origin of this advantage: quantum entanglement can be used to reduce the communication required by non-local machine learning tasks. In particular, we design a fully classical task that can be solved with unit accuracy by a quantum model with a constant number of variational parameters using entanglement resources, whereas commonly used classical models must scale at least linearly with the size of the task to achieve better than exponentially small accuracy. We further show that the quantum model can be trained in constant time and with a number of samples inversely proportional to the problem size. We prove that this advantage is robust against constant depolarization noise. We show through numerical simulations that, even though the classical models can achieve improved performance as their sizes are increased, they would suffer from overfitting. The constant-versus-linear separation, bolstered by the overfitting problem, makes it possible to demonstrate the quantum advantage with relatively small system sizes. We demonstrate the desired quantum-classical learning separation through both numerical simulations and trapped-ion experiments on IonQ Aria. Our results provide a valuable guide for demonstrating quantum learning advantages in practical applications with current noisy intermediate-scale quantum devices.
We introduce a novel neural network module that adeptly handles recursive data flow in neural network architectures. At its core, this module employs a self-consistent approach where a set of recursive equations is solved iteratively, halting when the difference between two consecutive iterations falls below a defined threshold. Leveraging this mechanism, we construct a new neural network architecture, an extension of the conformer transducer, which enriches automatic speech recognition systems with a stream of contextual information. Our method notably improves the accuracy of recognizing rare words without adversely affecting the word error rate for common vocabulary. We investigate the improvement in accuracy for these uncommon words using our novel model, both independently and in conjunction with shallow fusion with a context language model. Our findings reveal that the combination of both approaches can improve the accuracy of detecting rare words by as much as 4.5 times. Our proposed self-consistent recursive methodology is versatile and adaptable, compatible with many recently developed encoders, and has the potential to drive model improvements in speech recognition and beyond.
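A minimal sketch of the self-consistent mechanism described above, under stated assumptions: a recursive state is resolved by fixed-point iteration, stopping when two consecutive iterates differ by less than a tolerance. The recursive map here is a small linear layer mixing the input with the previous iterate; the conformer-transducer integration and the contextual stream from the paper are not reproduced.

```python
# Sketch only: a fixed-point ("self-consistent") block with an iteration-difference stopping rule.
import torch
import torch.nn as nn

class SelfConsistentBlock(nn.Module):
    def __init__(self, dim, tol=1e-4, max_iter=50):
        super().__init__()
        self.mix = nn.Linear(2 * dim, dim)   # assumed form of the recursive map
        self.tol = tol
        self.max_iter = max_iter

    def forward(self, x):
        h = torch.zeros_like(x)              # initial guess for the recursive state
        for _ in range(self.max_iter):
            h_next = torch.tanh(self.mix(torch.cat([x, h], dim=-1)))
            converged = (h_next - h).norm() < self.tol * (h.norm() + 1.0)
            h = h_next
            if converged:
                break
        return h

block = SelfConsistentBlock(dim=16)
print(block(torch.randn(4, 16)).shape)       # torch.Size([4, 16])
```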
Driven by the need to generate real-world evidence from multi-site collaborative studies, we introduce an efficient collaborative learning approach to evaluate the average treatment effect in a multi-site setting under data sharing constraints. Specifically, the proposed method operates in a federated manner, using individual-level data from a user-defined target population and summary statistics from other source populations, to construct an efficient estimator of the average treatment effect on the target population of interest. Our federated approach does not require iterative communication between sites, making it particularly suitable for research consortia with limited resources for developing automated data-sharing infrastructures. Compared to existing data integration methods in causal inference, it allows for distributional shifts in outcomes, treatments, and baseline covariates, and achieves the semiparametric efficiency bound under appropriate conditions. We illustrate the magnitude of the efficiency gains from incorporating extra data sources by examining the effect of insulin vs. non-insulin treatments on heart failure for patients with type II diabetes, using electronic health record data collected from the All of Us program.
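For context, a sketch of the standard augmented inverse-probability-weighted (AIPW) estimator of the average treatment effect computed from target-site individual-level data alone; the paper's contribution, augmenting such a target-only estimator with one-shot summary statistics from source sites, is not reproduced here. All data below are simulated placeholders.

```python
# Sketch only: target-site AIPW estimator of the ATE on simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))
A = rng.binomial(1, propensity)
Y = 1.0 * A + X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)   # true ATE = 1

e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]           # propensity model
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)       # outcome model, treated
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)       # outcome model, control

aipw = mu1 - mu0 + A * (Y - mu1) / e - (1 - A) * (Y - mu0) / (1 - e)
print("AIPW estimate of the ATE:", aipw.mean())
```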
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
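A structural sketch of the two-stage, coarse-to-fine cascade described above (the actual 3D FCN architectures and trained weights are not reproduced): stage 1 predicts a coarse foreground probability on the whole volume, a padded bounding box around the candidate region is extracted, and stage 2 is applied only inside that box. The two "models" below are random placeholders.

```python
# Sketch only: coarse-to-fine cascade with placeholder stage-1 and stage-2 predictors.
import numpy as np

def coarse_fcn(volume):                       # placeholder for the stage-1 3D FCN
    return np.random.rand(*volume.shape)

def fine_fcn(crop):                           # placeholder for the stage-2 3D FCN
    return np.random.rand(*crop.shape)

def cascade(volume, threshold=0.5, pad=8):
    prob = coarse_fcn(volume)                 # coarse candidate-region prediction
    idx = np.argwhere(prob > threshold)
    lo = np.maximum(idx.min(axis=0) - pad, 0)
    hi = np.minimum(idx.max(axis=0) + pad + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = fine_fcn(crop)                     # stage 2 runs only on the cropped region
    out = np.zeros(volume.shape, dtype=float)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return out

ct = np.random.rand(64, 64, 64)               # placeholder CT volume
print(cascade(ct).shape)
```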