A theoretical study is performed to analyze the directional response of different types of microphone array designs. 1-D (linear) and 2-D (planar) microphone array types are considered, and delay-and-sum beamforming and conventional beamforming techniques are employed to localize the sound source. A non-dimensional parameter, G, is characterized to simplify and standardize the rejection performance of both 1-D and 2-D microphone arrays as a function of array geometry and sound source parameters. This parameter G is then used to determine an improved design of a 2-D microphone array for far-field sound localization. One such design, termed the Equi-area array, is introduced and analyzed in detail. The design is shown to offer superior rejection performance compared to other conventionally used 2-D planar microphone arrays.
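As a minimal sketch of the delay-and-sum idea referenced above (all array dimensions, the source frequency, and variable names here are illustrative assumptions, not values from the study), a far-field plane wave arriving at a 1-D array can be localized by phase-steering the array over candidate angles and picking the angle of maximum summed output:

```python
import numpy as np

# Narrowband delay-and-sum beamforming on a 1-D (linear) array.
# All parameter values below are illustrative assumptions.
c = 343.0          # speed of sound [m/s]
f = 1000.0         # source frequency [Hz]
M = 8              # number of microphones
d = 0.05           # inter-element spacing [m]
theta_src = 30.0   # true source direction [deg]

mic_x = np.arange(M) * d
k = 2 * np.pi * f / c

# Simulated snapshot: plane wave from theta_src (far-field phase model).
delays = mic_x * np.sin(np.deg2rad(theta_src)) / c
x = np.exp(-1j * 2 * np.pi * f * delays)

# Steer over candidate angles, compensate the delays, and sum.
angles = np.linspace(-90, 90, 361)
power = []
for th in angles:
    steer = np.exp(1j * k * mic_x * np.sin(np.deg2rad(th)))
    power.append(np.abs(steer @ x) ** 2)
power = np.array(power) / (M ** 2)   # normalized so the peak response is 1

est = angles[np.argmax(power)]
print(f"estimated direction: {est:.1f} deg")
```

The width of the main lobe and the height of the sidelobes in `power` are exactly the rejection characteristics that depend on the array geometry.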
One important application of Golay complementary sets (GCSs) is the reduction of the peak-to-mean envelope power ratio (PMEPR) in orthogonal frequency division multiplexing (OFDM) systems. OFDM has played a major role in modern wireless systems such as long-term evolution (LTE) and the 5th-generation (5G) wireless standards. This paper presents systematic constructions of GCSs of arbitrary lengths and alphabet sizes, based on extended Boolean functions (EBFs). For the first time, codes with independently chosen parameters can be generated.
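To make the PMEPR connection concrete, the following sketch verifies the defining property of a binary Golay complementary pair (the aperiodic autocorrelations cancel at every nonzero shift) and checks the well-known consequence that each member's OFDM envelope has PMEPR at most 2. The length-4 pair and the oversampling factor are standard textbook choices, not taken from this paper:

```python
import numpy as np

def pmepr(seq, oversample=16):
    """Peak-to-mean envelope power ratio of an OFDM symbol carrying `seq`."""
    n = len(seq)
    padded = np.concatenate([seq, np.zeros((oversample - 1) * n)])
    s = np.fft.ifft(padded) * len(padded)   # oversampled envelope approximation
    inst_power = np.abs(s) ** 2
    return inst_power.max() / inst_power.mean()

def acorr(x, k):
    """Aperiodic autocorrelation of x at shift k."""
    return np.sum(x[:len(x) - k] * x[k:])

# A binary Golay complementary pair of length 4 (standard example).
a = np.array([1, 1, 1, -1], dtype=float)
b = np.array([1, 1, -1, 1], dtype=float)

# Complementarity: autocorrelations cancel at every nonzero shift.
for k in range(1, 4):
    assert acorr(a, k) + acorr(b, k) == 0

print(f"PMEPR(a) = {pmepr(a):.3f}, PMEPR(b) = {pmepr(b):.3f}")  # both <= 2
```

Carrying either sequence on the OFDM subcarriers therefore caps the envelope peaks at 3 dB above the mean power, which is the motivation for constructing GCSs at flexible lengths and alphabet sizes.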
We refine the weighted type graph technique for proving termination of double pushout (DPO) graph transformation systems. We increase the power of the approach for graphs, generalize the technique to other categories, and allow for variations of DPO that occur in the literature.
Matrix-vector multiplication forms the basis of many iterative solution algorithms and as such is an important operation for hierarchical matrices as well. However, due to its low computational intensity, its performance is typically limited by the available memory bandwidth. By optimizing the storage representation of the data within such matrices, this limitation can be lifted and the performance increased. This applies not only to hierarchical matrices but also to other low-rank approximation schemes, e.g., block low-rank matrices.
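The storage saving behind such schemes can be sketched in a few lines (the block size and rank below are arbitrary assumptions): a rank-k block stored as factors U and V needs 2nk values instead of n^2, and its matrix-vector product costs O(nk) instead of O(n^2), which directly reduces the memory traffic that limits performance:

```python
import numpy as np

rng = np.random.default_rng(0)

# An n x n block with rank k << n, stored two ways.
n, k = 512, 8
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A_full = U @ V.T                      # dense storage: n*n values
x = rng.standard_normal(n)

# Factored matvec: two thin products, 2*n*k stored values and O(n*k) work,
# versus n*n values and O(n*n) work for the dense form.
y_lowrank = U @ (V.T @ x)
y_full = A_full @ x

print(np.allclose(y_lowrank, y_full))           # same result
print(f"memory ratio: {2 * n * k / n**2:.4f}")  # fraction of dense storage
```

Hierarchical matrices apply this blockwise: admissible off-diagonal blocks are kept in factored form, so optimizing how U and V are laid out in memory governs the achievable matvec bandwidth.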
With the increase in data volume, more types of data are being used and shared, especially in the power Internet of Things (IoT). However, data sharing may lead to unexpected information leakage because of the ubiquitous relevance among different data, so data owners should conduct preventive audits of data applications before sharing to avoid the risk of key information leakage. Because the same data may play completely different roles in different application scenarios, data owners should know the intended data applications of the data buyers in advance and provide modified data that are less relevant to the private information of the data owners and more relevant to the nonprivate information that the data buyers need. In this paper, data sharing in the power IoT is taken as the background, and the mutual information between the data and their implicit information is selected as the data feature parameter, indicating the relevance between the data and their implicit information, i.e., the ability to infer the implicit information from the data. Preventive audits are then conducted based on changes in the data feature parameters before and after data sharing. The probability exchange adjustment method is proposed as the theoretical basis of preventive audits under a simplified consumption scenario, and the corresponding optimization models are constructed and extended to more practical scenarios with multivariate characteristics. Finally, case studies validate the effectiveness of the proposed preventive audits.
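The mutual-information feature parameter used above can be computed directly for discrete variables. In this hedged toy example (the joint distributions are hypothetical, not from the paper's case studies), "data" X that is perfectly correlated with private implicit information Y carries 1 bit about it, while independent data carries none, which is the quantity a preventive audit would compare before and after modification:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a discrete joint probability table p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Hypothetical binary examples: shared data X vs. private implicit info Y.
correlated = [[0.5, 0.0],
              [0.0, 0.5]]    # Y fully recoverable from X
independent = [[0.25, 0.25],
               [0.25, 0.25]]  # X reveals nothing about Y

print(mutual_information(correlated))   # 1 bit leaked
print(mutual_information(independent))  # 0 bits leaked
```

A modification that drives this value toward zero for private attributes, while preserving it for the attributes the buyer needs, is exactly the trade-off the audit models optimize.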
By concatenating a polar transform with a convolutional transform, polarization-adjusted convolutional (PAC) codes can reach the dispersion approximation bound at certain code rates. However, the sequential nature of traditional PAC decoding algorithms results in high decoding latency. Owing to their parallel computing capability, deep neural network (DNN) decoders have emerged as a promising alternative. In this paper, we propose three types of DNN decoders for PAC codes: a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a recurrent neural network (RNN). The performance of these DNN decoders is evaluated through extensive simulations. Numerical results show that the MLP decoder achieves the best error-correction performance for a similar number of model parameters.
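The concatenation of transforms can be illustrated for a toy code (a sketch under stated assumptions: N = 8, K = 4, the convolutional generator 0o133 common in the PAC literature, and a Reed-Muller-style rate profile; none of these specifics are claimed from this paper). A message is rate-profiled onto the input vector, convolved over GF(2), and passed through the polar transform:

```python
import numpy as np

# PAC encoding chain sketch: rate profiling -> convolution -> polar transform.
N, K = 8, 4
g = np.array([1, 0, 1, 1, 0, 1, 1], dtype=np.uint8)  # generator 0o133 (assumed)
info_positions = [3, 5, 6, 7]  # indices of binary weight >= 2 (RM-style profile)

def rate_profile(msg):
    """Place K message bits at the info positions; freeze the rest to 0."""
    u = np.zeros(N, dtype=np.uint8)
    u[info_positions] = msg
    return u

def convolve_gf2(u):
    """Convolutional precoding over GF(2): v_i = XOR_j g_j * u_{i-j}."""
    v = np.zeros(N, dtype=np.uint8)
    for i in range(N):
        for j in range(len(g)):
            if i - j >= 0:
                v[i] ^= g[j] & u[i - j]
    return v

def polar_transform(v):
    """x = v * F^(kron n) over GF(2), with F = [[1, 0], [1, 1]]."""
    if len(v) == 1:
        return v
    half = len(v) // 2
    return np.concatenate([polar_transform(v[:half] ^ v[half:]),
                           polar_transform(v[half:])])

msg = np.array([1, 0, 1, 1], dtype=np.uint8)
codeword = polar_transform(convolve_gf2(rate_profile(msg)))
print(codeword)
```

A sequential decoder must undo this chain bit by bit, which is the latency bottleneck; a DNN decoder instead maps the whole received vector to bit estimates in one parallel forward pass.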
The development of connected autonomous vehicles (CAVs) facilitates the enhancement of traffic efficiency in complicated scenarios. In unsignalized roundabout scenarios, developing an effective and efficient coordination strategy for CAVs remains an unsolved difficulty. In this paper, we formulate the cooperative autonomous driving problem of CAVs in the roundabout scenario as a constrained optimal control problem and propose a computationally efficient parallel optimization framework that generates strategies for CAVs such that travel efficiency is improved with hard safety guarantees. All constraints involved in the roundabout scenario are addressed with appropriate convex approximations, so that the reformulated optimization problem is convex. A parallel optimization algorithm is then presented to solve the reformulated problem, in which an embedded iterative nearest-neighbor search strategy determines the optimal passing sequence in the roundabout scenario. Notably, these innovations enhance travel efficiency in the roundabout scenario while considerably alleviating the computational burden. We also evaluate the proposed method in the CARLA simulator and perform thorough comparisons with a rule-based baseline and the commonly used IPOPT optimization solver to demonstrate its effectiveness and efficiency.
The stability of the Ghurye-Olkin (GO) characterization of Gaussian vectors is analyzed using a partition of the vectors into equivalence classes defined by their matrix factors. The sum of the vectors in each class is near-Gaussian in the characteristic function (c.f.) domain if the GO independence condition is approximately met in the c.f. domain. All vectors have the property that any vector projection is near-Gaussian in the distribution function (d.f.) domain. The proofs of these c.f. and d.f. stabilities use tools that establish the stabilities of theorems by Kac-Bernstein and Cramér, respectively. The results are used to prove stability theorems for differential entropies of Gaussian vectors and blind source separation of non-Gaussian sources.
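For context, the two classical characterization theorems whose stability machinery is leveraged can be stated in their standard formulations (these are the textbook statements, not quoted from this paper):

```latex
\textbf{Kac--Bernstein.} Let $X$ and $Y$ be independent random vectors.
If $X+Y$ and $X-Y$ are independent, then $X$ and $Y$ are Gaussian.

\textbf{Cram\'er.} Let $X$ and $Y$ be independent random vectors.
If $X+Y$ is Gaussian, then both $X$ and $Y$ are Gaussian.
```

The stability versions assert that when the hypotheses hold only approximately, the conclusions hold approximately as well, which is the sense in which the sums and projections above are "near-Gaussian."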
Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use high-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey comprehensively analyzes LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.
The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution takes inspiration from information geometry to derive a new gradient update rule that alleviates catastrophic forgetting during adaptation.
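One common instantiation of the distributionally robust optimization idea can be sketched as follows. This is a generic, hedged illustration of DRO-style reweighting (exponentiating per-domain losses so the optimizer focuses on the worst-performing domain), not the thesis's specific parametric reformulation; the losses and temperature are made-up values:

```python
import numpy as np

def dro_weights(group_losses, temperature=1.0):
    """Exponentiated weights that upweight high-loss groups (softmax of losses)."""
    w = np.exp(np.asarray(group_losses, dtype=float) / temperature)
    return w / w.sum()

# Hypothetical per-domain losses for one training step.
losses = np.array([0.2, 1.5, 0.4])
w = dro_weights(losses)
robust_loss = float(w @ losses)   # exceeds the uniform average: hardest
                                  # domain dominates the objective
print(w, robust_loss)
```

Minimizing this reweighted objective, rather than the uniform average, is what makes the resulting model robust to shifts toward the harder domains.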
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, the large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small number of training samples.