For ultra-reliable, low-latency communications (URLLC) applications such as mission-critical industrial control and extended reality (XR), it is important to ensure the communication quality of individual packets. Prior studies have considered Probabilistic Per-packet Real-time Communications (PPRC) guarantees for single-cell, single-channel networks, but they have not considered real-world complexities such as inter-cell interference in large-scale networks with multiple communication channels and heterogeneous real-time requirements. To fill this gap, we propose a real-time scheduling algorithm based on \emph{local-deadline-partition (LDP)}, which ensures PPRC guarantees for large-scale, multi-channel networks with heterogeneous real-time constraints. We also address the associated challenge of schedulability testing. In particular, we propose the concept of the \emph{feasible set}, identify a closed-form sufficient condition for the schedulability of PPRC traffic, and then propose an efficient distributed algorithm for the schedulability test. We numerically study the properties of the LDP algorithm and observe that it significantly improves URLLC network capacity, for instance, by a factor of 5-20 compared with a typical method. Furthermore, the PPRC traffic supportable by the LDP algorithm is significantly higher than that of state-of-the-art comparison schemes. These findings demonstrate the potential of fine-grained scheduling algorithms for URLLC wireless systems operating under realistic interference scenarios.
The recent development of integrated sensing and communications (ISAC) technology offers new opportunities to meet high-throughput and low-latency communication as well as high-resolution localization requirements in vehicular networks. However, considering the limited transmit power of the roadside units (RSUs) and the relatively small radar cross section (RCS) of vehicles with random reflection coefficients, the power of echo signals may be too weak to be utilized for effective target detection and tracking. Moreover, high-frequency signals usually suffer from large fading loss when penetrating vehicles, which seriously degrades the quality of communication services inside the vehicles. To handle this issue, we propose a novel sensing-assisted communication mechanism that employs an intelligent omni-surface (IOS) on the surface of vehicles to enhance both sensing and communication (S&C) performance. To this end, we first propose a two-stage ISAC protocol, consisting of a joint S&C stage and a communication-only stage, so that communication performance can benefit more efficiently from sensing. The achievable-rate maximization problem is formulated by jointly optimizing the transmit beamforming, the IOS phase shifts, and the duration of the joint S&C stage. Solving this ISAC optimization problem is highly non-trivial, however, since inaccurate estimation and measurement information leaves the achievable rate without a closed-form expression. To address this challenge, we first derive a closed-form expression of the achievable rate under uncertain location information, and then unveil a necessary and sufficient condition for the existence of the joint S&C stage, offering useful insights for practical system design. Moreover, two typical scenarios, namely the interference-limited and noise-limited cases, are analyzed.
Age of information (AoI) is an effective performance metric measuring the freshness of information and is particularly suitable for applications involving status updates. In this paper, using the age violation probability as the metric, scheduling for heterogeneous multi-source systems is studied. Two queueing disciplines, namely the infinite packet queueing discipline and the single packet queueing discipline, are considered for scheduling packets within each source. A generalized round-robin (GRR) scheduling policy is then proposed to schedule the sources. Bounds on the exponential decay rate of the age violation probability for the proposed GRR scheduling policy under each queueing discipline are rigorously analyzed. Simulation results are provided, which show that the proposed GRR scheduling policy can efficiently serve many sources with heterogeneous arrivals and that our bounds capture the true decay rate quite accurately. When specialized to the homogeneous source setting, the analysis substantiates the common belief that the single packet queueing discipline has better AoI performance than the infinite packet queueing discipline. Moreover, simulations on this special case reveal that under the proposed scheduling policy, the two disciplines have similar asymptotic performance when the inter-arrival time is much larger than the total transmission time.
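To make the metric concrete, the sketch below estimates the per-source age violation probability P(AoI > threshold) by Monte Carlo simulation of a plain round-robin scheduler with single-packet queues (each source keeps only its freshest packet). It is not the paper's analysis: the generalized weights and general service times of GRR are not modeled, unit-slot transmissions are assumed, and all names are illustrative.

```python
import numpy as np

def simulate_age_violation(arrival_probs, threshold, horizon=100_000, seed=0):
    """Estimate P(AoI > threshold) per source under a round-robin scheduler
    with single-packet queues (each source keeps only its freshest packet).

    arrival_probs: per-source Bernoulli arrival probabilities per slot.
    """
    rng = np.random.default_rng(seed)
    n = len(arrival_probs)
    age = np.zeros(n)                      # current AoI at the destination
    gen_time = np.full(n, -1)              # generation time of the queued packet, -1 = empty
    violations = np.zeros(n)

    for t in range(horizon):
        # a new arrival replaces any older queued packet (single-packet discipline)
        arrived = rng.random(n) < np.asarray(arrival_probs)
        gen_time[arrived] = t

        src = t % n                        # round-robin service, one transmission per slot
        if gen_time[src] >= 0:
            age[src] = t - gen_time[src]   # delivery resets age to the packet's age
            gen_time[src] = -1

        age += 1                           # age grows by one slot everywhere
        violations += age > threshold

    return violations / horizon            # fraction of slots in violation, per source
```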
Solute transport in fluid-particle systems is a fundamental process in numerous scientific and engineering disciplines. Simulating it requires accounting for solid particles with intricate shapes and sizes. To address this challenge, this study proposes the Random-Walk Metaball-Imaging Discrete Element Lattice Boltzmann Method (RW-MI-DELBM). In this model, we reconstruct particle geometries with the Metaball-Imaging algorithm, capture the particle behavior using the Discrete Element Method (DEM), simulate fluid behavior with the Lattice Boltzmann Method (LBM), and represent solute behavior through the Random Walk Method (RWM). By integrating these techniques with specially designed boundary conditions, we are able to simulate solute transport in fluid-particle systems with complex particle morphologies. Thorough validations, including analytical solutions and experiments, are performed to assess the robustness and accuracy of this framework. The results demonstrate that the proposed framework can accurately capture the complex dynamics of solute transport under strict mass conservation. In particular, an investigation is carried out to assess the influence of particle morphologies on solute transport in a 3D oscillator, with a focus on identifying correlations between shape features and dispersion coefficients. Notably, all selected shape features exhibit strong correlations with the dispersion coefficient, indicating the significant influence of particle shapes on transport phenomena. However, due to the complexity of the relationship and the limited number of simulations, no clear patterns could be observed. Further comprehensive analyses incorporating a broader range of shape features and varying conditions are necessary to fully understand their collective influence on the dispersion coefficient.
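For reference, a minimal sketch of the particle update underlying the Random Walk Method is shown below: each solute tracer is advected by the local fluid velocity and perturbed by Brownian motion. The LBM velocity interpolation, DEM particle interactions, and the specially designed boundary conditions of the framework are not shown, and the function names are placeholders.

```python
import numpy as np

def random_walk_step(positions, velocity_at, diffusivity, dt, rng):
    """Advance solute tracer particles by one advection-diffusion step.

    positions: (N, 3) particle coordinates; velocity_at: callable returning the
    local fluid velocity (e.g. interpolated from an LBM grid); diffusivity:
    molecular diffusion coefficient; dt: time step.
    """
    drift = np.array([velocity_at(p) for p in positions]) * dt              # advection by the fluid
    noise = rng.normal(size=positions.shape) * np.sqrt(2.0 * diffusivity * dt)  # Brownian displacement
    return positions + drift + noise
```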
Dialect classification is used in a variety of applications, such as machine translation and speech recognition, to improve the overall performance of the system. In a real-world scenario, a deployed dialect classification model can encounter anomalous inputs that differ from the training data distribution, also called out-of-distribution (OOD) samples. These OOD samples can lead to unexpected outputs, as their dialects are unseen during model training. Out-of-distribution detection is a new research area that has received little attention in the context of dialect classification. To this end, we propose a simple yet effective unsupervised Mahalanobis distance feature-based method to detect out-of-distribution samples. We utilize the latent embeddings from all intermediate layers of a wav2vec 2.0 transformer-based dialect classifier model trained with multi-task learning. Our proposed approach significantly outperforms other state-of-the-art OOD detection methods.
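A minimal sketch of a Mahalanobis-distance OOD score for a single layer's embeddings is shown below, assuming class-conditional Gaussians with a shared (tied) covariance fitted on in-distribution features. The aggregation of scores across all intermediate wav2vec 2.0 layers is omitted here, and the function names are illustrative.

```python
import numpy as np

def fit_mahalanobis(train_feats, train_labels):
    """Fit class means and a shared (tied) covariance on in-distribution features.

    train_feats: (N, D) layer embeddings; train_labels: (N,) dialect class ids.
    """
    classes = np.unique(train_labels)
    means = {c: train_feats[train_labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([train_feats[train_labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(train_feats)
    prec = np.linalg.pinv(cov)               # shared precision matrix
    return means, prec

def ood_score(feat, means, prec):
    """Negative minimum class-conditional Mahalanobis distance for one embedding.

    Lower (more negative) scores indicate likely out-of-distribution inputs.
    """
    dists = [(feat - mu) @ prec @ (feat - mu) for mu in means.values()]
    return -min(dists)
```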
Visual perception is an important component for the autonomous navigation of unmanned surface vessels (USVs), particularly for tasks related to autonomous inspection and tracking. These tasks involve vision-based navigation techniques to identify the target for navigation. Reduced visibility under extreme weather conditions in marine environments makes it difficult for vision-based approaches to work properly. To overcome these issues, this paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions. The proposed framework consists of an integrated perception pipeline that uses a generative adversarial network (GAN) to remove noise and highlight the object features before passing them to the object detector (i.e., YOLOv5). The detected visual features are then used by the USV to track the target. The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog. The results are compared with those of state-of-the-art de-hazing methods on the benchmark MBZIRC simulation dataset, on which the proposed scheme outperforms existing methods across various metrics.
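The sketch below shows how such a perception pipeline might be wired together: a restoration step (the trained GAN generator, represented here by a placeholder callable) followed by a pretrained YOLOv5 detector loaded via torch.hub, with the centroid of the most confident detection returned as the tracking target. This is glue code under those assumptions, not the paper's implementation.

```python
import torch

def build_pipeline(denoiser):
    """Perception pipeline sketch: GAN-based image restoration followed by YOLOv5.

    denoiser: placeholder for the trained GAN generator (a callable mapping a hazy
    image array to a restored one); the actual network is not shown here.
    """
    detector = torch.hub.load("ultralytics/yolov5", "yolov5s")   # pretrained YOLOv5 detector

    def detect_target(raw_image):
        restored = denoiser(raw_image)            # remove sandstorm/fog degradation first
        results = detector(restored)
        boxes = results.xyxy[0]                   # (x1, y1, x2, y2, conf, class) per detection
        if len(boxes) == 0:
            return None                           # nothing detected in this frame
        x1, y1, x2, y2, conf, cls = boxes[boxes[:, 4].argmax()]  # most confident detection
        return (float(x1 + x2) / 2, float(y1 + y2) / 2)          # target centroid for the tracker

    return detect_target
```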
Preliminary trajectory design is a global search problem that seeks multiple qualitatively different solutions to a trajectory optimization problem. Due to the problem's high dimensionality and non-convexity, and the frequent adjustment of problem parameters, the global search is computationally demanding. In this paper, we exploit the clustering structure in the solutions and propose an amortized global search (AmorGS) framework. We use deep generative models to predict trajectory solutions that share similar structures with previously solved problems, which accelerates the global search for unseen parameter values. Our method is evaluated on De Jong's 5th function and a low-thrust circular restricted three-body problem.
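To make the test setup concrete, the sketch below implements De Jong's 5th (Shekel's foxholes) function and a warm-started local search in the spirit of amortized global search: a sampler stands in for the learned generative model (which is not shown) and proposes candidate initializations, and distinct local minima are collected. The names and the deduplication rule are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def dejong5(x):
    """De Jong's 5th (Shekel's foxholes) test function on [-65.536, 65.536]^2."""
    a = np.array([-32, -16, 0, 16, 32])
    a1 = np.tile(a, 5)                     # x-coordinates of the 25 foxholes
    a2 = np.repeat(a, 5)                   # y-coordinates of the 25 foxholes
    j = np.arange(1, 26)
    return 1.0 / (0.002 + np.sum(1.0 / (j + (x[0] - a1) ** 6 + (x[1] - a2) ** 6)))

def amortized_search(sample_candidates, n_candidates=64):
    """Warm-start local optimization from candidates proposed by a (learned) sampler.

    sample_candidates: placeholder for a generative model conditioned on problem
    parameters; here it only needs to return an (n, 2) array of initial guesses.
    """
    minima = {}
    for x0 in sample_candidates(n_candidates):
        res = minimize(dejong5, x0, method="Nelder-Mead")
        key = tuple(np.round(res.x, 2))    # deduplicate minima by rounded location
        minima[key] = min(res.fun, minima.get(key, np.inf))
    return sorted(minima.items(), key=lambda kv: kv[1])

# Example with a uniform sampler standing in for the trained generative model:
# amortized_search(lambda n: np.random.default_rng(0).uniform(-60, 60, size=(n, 2)))
```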
Automated synthesis of provably correct controllers for cyber-physical systems is crucial for deployment in safety-critical scenarios. However, hybrid features and stochastic or unknown behaviours make this problem challenging. We propose a method for synthesising controllers for Markov jump linear systems (MJLSs), a class of discrete-time models for cyber-physical systems, so that they certifiably satisfy probabilistic computation tree logic (PCTL) formulae. An MJLS consists of a finite set of stochastic linear dynamics and discrete jumps between these dynamics that are governed by a Markov decision process (MDP). We consider the cases where the transition probabilities of this MDP are either known up to an interval or completely unknown. Our approach is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS. We formalise this abstraction as an interval MDP (iMDP) for which we compute intervals of transition probabilities using sampling techniques from the so-called 'scenario approach', resulting in a probabilistically sound approximation. We apply our method to multiple realistic benchmark problems, in particular, a temperature control problem and an aerial vehicle delivery problem.
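As a simplified illustration of how transition-probability intervals can be obtained from samples of the stochastic dynamics, the sketch below computes a confidence interval from Monte Carlo successor samples using a Hoeffding bound. This stand-in is not the paper's scenario-approach bound (which is tighter and differs in form); it is only meant to convey the idea of a probabilistically sound interval, and the names are illustrative.

```python
import numpy as np

def empirical_probability_interval(hits, n_samples, confidence=0.99):
    """Confidence interval on a transition probability of the abstract iMDP.

    hits: number of sampled successor states that landed in the target abstract
    region out of n_samples draws. Uses a Hoeffding bound as a simple stand-in.
    """
    p_hat = hits / n_samples
    eps = np.sqrt(np.log(2.0 / (1.0 - confidence)) / (2.0 * n_samples))  # Hoeffding radius
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)
```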
Emerging non-volatile memories (NVMs) represent a disruptive technology that allows a paradigm shift from the conventional von Neumann architecture towards more efficient computing-in-memory (CIM) architectures. Several instrumentation platforms have been proposed to interface NVMs, allowing the characterization of single cells and crossbar structures. However, these platforms suffer from low flexibility and are not capable of performing CIM operations on NVMs. Therefore, we recently designed and built the NeuroBreakoutBoard, a highly versatile instrumentation platform capable of executing CIM on NVMs. We present our preliminary results demonstrating a relative error < 5% in the range of 1 k$\Omega$ to 1 M$\Omega$ and showcase the switching behavior of a HfO$_2$/Ti-based memristive cell.
Face recognition technology has advanced significantly in recent years due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.
Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. From the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
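The sketch below spells out a relaxed normalized minCUT objective of this kind for a single graph with a dense adjacency matrix: a cut term and an orthogonality term computed from soft cluster assignments, which a GNN can be trained to minimize. Batching, the GNN that produces the assignments, and the pooling operator itself are omitted, and the variable names are illustrative.

```python
import torch

def mincut_losses(adj, s):
    """Continuous relaxation of normalized minCUT for soft cluster assignments.

    adj: (N, N) dense adjacency matrix; s: (N, K) soft assignments whose rows sum
    to 1 (e.g. a softmax over a GNN's output). Returns (cut_loss, ortho_loss).
    """
    deg = torch.diag(adj.sum(dim=-1))                       # degree matrix
    cut_loss = -torch.trace(s.t() @ adj @ s) / torch.trace(s.t() @ deg @ s)

    ss = s.t() @ s                                          # cluster co-membership matrix
    k = s.size(-1)
    ortho_loss = torch.norm(ss / torch.norm(ss) - torch.eye(k, device=s.device) / k ** 0.5)
    return cut_loss, ortho_loss
```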