In this article, we present a new construction of evaluation codes in the Hamming metric, which we call twisted Reed-Solomon codes. Whereas Reed-Solomon (RS) codes are MDS codes, this need not be the case for twisted RS codes. Nonetheless, we show that our construction yields several families of MDS codes. Further, for a large subclass of (MDS) twisted RS codes, we show that the new codes are not generalized RS codes. To achieve this, we use properties of Schur squares of codes as well as an explicit description of the dual of a large subclass of our codes. We conclude the paper with a description of a decoder that performs very well in practice, as shown by extensive simulation results.
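For concreteness, a common single-twist instance of such a construction from the literature (the notation here is an assumption, not a restatement of the paper's general definition) evaluates the polynomial space
\[
\mathcal{V}_{h,t,\eta} = \Big\{\, f = \sum_{i=0}^{k-1} f_i x^i + \eta\, f_h\, x^{k-1+t} \;:\; f_0,\dots,f_{k-1} \in \mathbb{F}_q \,\Big\},
\]
with hook $h \in \{0,\dots,k-1\}$, twist $t \geq 1$, and $\eta \in \mathbb{F}_q^{\ast}$, at $n$ distinct points $\alpha_1,\dots,\alpha_n \in \mathbb{F}_q$. Setting $\eta = 0$ recovers a classical RS code, while $\eta \neq 0$ perturbs the coefficient of a single high-degree monomial, which is why the MDS property can be lost.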
We previously proposed the first nontrivial examples of a code that has support $t$-designs for all weights obtained from the Assmus-Mattson theorem and that has support $t'$-designs for some weights with some $t'>t$. This suggests the possibility of generalizing the Assmus-Mattson theorem, which is of central importance in design theory and coding theory. In the present paper, we generalize this example, obtaining a strengthening of the Assmus-Mattson theorem along this direction. As a corollary, we provide a new characterization of the extended Golay code $\mathcal{G}_{24}$.
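For reference, the classical Assmus-Mattson theorem (stated here in its binary form; the paper's generalization is not reproduced) reads: let $C$ be a binary $[n,k,d]$ code, let $0 < t < d$, and let $s$ be the number of nonzero weights $w \leq n-t$ occurring in the dual code $C^{\perp}$. If $s \leq d-t$, then the supports of the codewords of any fixed nonzero weight in $C$ (and likewise in $C^{\perp}$) form a $t$-design.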
Thanks to the ubiquity of Wi-Fi access points and devices, Wi-Fi sensing enables transformative applications in remote health care, security, and surveillance. Existing work has explored the use of machine learning on channel state information (CSI) computed from Wi-Fi packets to classify events of interest. However, most of these algorithms require a significant amount of data collection, as well as extensive computational power for additional CSI feature extraction. Moreover, the majority of these models suffer from poor accuracy when tested in a new/untrained environment. In this paper, we propose ReWiS, a novel framework for robust and environment-independent Wi-Fi sensing. The key innovation of ReWiS is to leverage few-shot learning (FSL) as the inference engine, which (i) reduces the need for extensive data collection and application-specific feature extraction and (ii) can rapidly generalize to new tasks by leveraging only a few new samples. We prototype ReWiS using off-the-shelf Wi-Fi equipment and showcase its performance on a compelling use case, human activity recognition. To this end, we perform an extensive data collection campaign in three different propagation environments with two human subjects. We evaluate the impact of each diversity component on performance and compare ReWiS with a traditional convolutional neural network (CNN) approach. Experimental results show that ReWiS improves performance by about 40% with respect to existing single-antenna, low-resolution approaches. Moreover, compared to a CNN-based approach, ReWiS achieves 35% higher accuracy and suffers less than a 10% accuracy drop when tested in different environments, whereas the CNN's accuracy drops by more than 45%.
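The abstract does not fix a particular FSL algorithm; the following minimal sketch assumes a prototypical-network-style classifier over pre-computed CSI embeddings, which is one standard way to realize the few-shot inference engine described above.
\begin{verbatim}
import numpy as np

def proto_classify(support, support_labels, queries):
    """Few-shot classification by nearest class prototype.

    support: (n_support, d) embeddings of labeled CSI samples
    support_labels: (n_support,) integer activity labels
    queries: (n_query, d) embeddings of unlabeled CSI samples
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its few support embeddings.
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    # Assign each query to the class of the nearest prototype.
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
\end{verbatim}
Adapting to a new environment then requires only embedding a handful of new labeled samples, with no retraining of the backbone.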
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual-information-guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to human faces in datasets designed to obscure them. Finally, we use MILAN for editing, improving robustness in an image classifier by deleting neurons sensitive to text features spuriously correlated with class labels.
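A minimal sketch of the search step, assuming two externally trained (hypothetical) scoring models: \texttt{p\_cond} estimates the probability of a description given a neuron's exemplar regions, and \texttt{p\_prior} estimates its unconditional probability.
\begin{verbatim}
import math

def pmi(description, exemplars, p_cond, p_prior):
    """Pointwise mutual information between a candidate description d
    and the image regions E where the neuron is active:
        PMI(d; E) = log p(d | E) - log p(d)."""
    return math.log(p_cond(description, exemplars)) \
           - math.log(p_prior(description))

def label_neuron(candidates, exemplars, p_cond, p_prior):
    # Return the candidate description that maximizes PMI.
    return max(candidates, key=lambda d: pmi(d, exemplars, p_cond, p_prior))
\end{verbatim}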
Modern web services routinely provide REST APIs for clients to access their functionality. These APIs present unique challenges and opportunities for automated testing, driving the recent development of many techniques and tools that generate test cases for API endpoints using various strategies. Understanding how these techniques compare to one another is difficult, as they have been evaluated on different benchmarks and using different metrics. To fill this gap, we performed an empirical study aimed at understanding the landscape of automated testing of REST APIs and guiding future research in this area. We first identified, through a systematic selection process, a set of 10 state-of-the-art REST API testing tools that included tools developed by both researchers and practitioners. We then applied these tools to a benchmark of 20 real-world open-source RESTful services and analyzed their performance in terms of code coverage achieved and unique failures triggered. This analysis allowed us to identify the strengths, weaknesses, and limitations of the tools considered and of their underlying strategies, as well as the implications of our findings for future research in this area.
Recurrent models dominated the field of neural machine translation (NMT) until Transformers \citep{vaswani2017attention} radically changed it by proposing a novel architecture that relies on a feed-forward backbone and a self-attention mechanism. Although Transformers are powerful, they can fail to properly encode sequential/positional information due to their non-recurrent nature. To solve this problem, position embeddings are defined for each time step to enrich word information. However, such embeddings are fixed after training, regardless of the task and the word-ordering system of the source or target language. In this paper, we address this shortcoming with a novel architecture whose position embeddings depend on the input text, taking the order of target words into consideration. Instead of using predefined position embeddings, our solution \textit{generates} new embeddings to refine each word's position information. Since we do not dictate the positions of source tokens but rather learn them in an end-to-end fashion, we refer to our method as \textit{dynamic} position encoding (DPE). We evaluated the impact of our model on multiple datasets translating from English into German, French, and Italian and observed meaningful improvements over the original Transformer.
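The abstract does not specify the generator network; the PyTorch sketch below is one plausible reading, in which a small learned network refines a standard sinusoidal encoding conditioned on the input itself.
\begin{verbatim}
import torch
import torch.nn as nn

class DynamicPositionEncoding(nn.Module):
    """Sketch: input-dependent position information, as opposed to a
    fixed embedding table that is frozen after training."""
    def __init__(self, d_model):
        super().__init__()
        self.gen = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, x, pos_sin):
        # x: (batch, seq_len, d_model) token embeddings
        # pos_sin: (1, seq_len, d_model) sinusoidal base encoding
        # The generated offset depends on the tokens, so the effective
        # position encoding changes with the input sentence.
        return x + pos_sin + self.gen(x + pos_sin)
\end{verbatim}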
Symbol-pair codes, introduced by Cassuto and Blaum in 2010, are designed to protect against pair errors in symbol-pair read channels. One of the central themes in symbol-pair error correction is the construction of maximum distance separable (MDS) symbol-pair codes, which possess the best possible pair-error-correcting capability. In this paper, we construct more general generator polynomials for two classes of MDS symbol-pair codes of code length $lp$. Based on repeated-root cyclic codes, we derive all MDS symbol-pair codes of length $3p$ whose generator polynomials have degree at most 10. We also give two new classes of almost maximum distance separable (AMDS) symbol-pair codes of length $lp$ or $4p$ by virtue of repeated-root cyclic codes. For length $3p$, we derive all AMDS symbol-pair codes whose generator polynomials have degree less than 10. The main results are obtained by determining the solutions of certain equations over finite fields.
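For background, we recall the standard notions (well known in the literature, not restated from the paper): for $x=(x_0,\dots,x_{n-1})$, the symbol-pair read vector is
\[
\pi(x) = \big( (x_0,x_1),\, (x_1,x_2),\, \dots,\, (x_{n-1},x_0) \big),
\]
the pair distance is $d_p(x,y) = d_H\big(\pi(x),\pi(y)\big)$, and every $[n,k]$ code obeys the Singleton-type bound $d_p \leq n-k+2$. Codes attaining $d_p = n-k+2$ are MDS symbol-pair codes; those with $d_p = n-k+1$ are AMDS symbol-pair codes.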
Most existing works on polar codes focus on the analysis of the block error probability. However, in many scenarios the bit error probability is also important for evaluating the performance of channel codes. In this paper, we establish a new framework to analyze the bit error probability of polar codes. Specifically, by revisiting the error event of a bit-channel, we first introduce the conditional bit error probability as a metric to evaluate the reliability of each bit-channel, for both systematic and non-systematic polar codes. Guided by the concept of polar subcodes, we then derive an upper bound on the conditional bit error probability of each bit-channel and, accordingly, an upper bound on the bit error probability of polar codes. Based on these bounds, we propose two types of construction metrics aimed at minimizing the bit error probability of polar codes, both of which have linear computational complexity and explicit form. Simulation results show that polar codes constructed by the proposed methods can outperform those constructed by conventional methods.
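For context, under successive cancellation decoding the block error probability is classically upper-bounded by the union bound
\[
P_B \;\leq\; \sum_{i \in \mathcal{A}} P(E_i),
\]
where $\mathcal{A}$ is the information set and $E_i$ is the error event of the $i$-th bit-channel; the framework above refines this picture by bounding each bit's conditional error probability rather than only the block-level event.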
Deep learning has enabled a wide range of applications and has become increasingly popular in recent years. The goal of multimodal deep learning is to create models that can process and link information from various modalities. Despite the extensive development of unimodal learning, it still cannot cover all aspects of human learning. Multimodal learning helps machines understand and analyze better when multiple senses are engaged in the processing of information. This paper focuses on multiple types of modalities, i.e., image, video, text, audio, body gestures, facial expressions, and physiological signals. We provide a detailed analysis of past and current baseline approaches and an in-depth study of recent advancements in multimodal deep learning applications. We propose a fine-grained taxonomy of various multimodal deep learning applications, elaborating on different applications in depth. The architectures and datasets used in these applications are also discussed, along with their evaluation metrics. Finally, the main issues are highlighted separately for each domain, along with possible future research directions.
Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. From the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
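A minimal NumPy sketch of the relaxed objective (matching the relaxation described above up to implementation details; the trace-ratio cut term and the orthogonality regularizer are assumptions based on the standard formulation):
\begin{verbatim}
import numpy as np

def relaxed_mincut_loss(A, S):
    """Continuous relaxation of normalized minCUT.

    A: (n, n) symmetric adjacency matrix
    S: (n, K) soft cluster assignments (rows sum to 1,
       e.g. a softmax over the outputs of a GNN)
    """
    D = np.diag(A.sum(axis=1))
    # Cut term: reward within-cluster edge weight (hence the minus sign).
    cut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    # Orthogonality term: push clusters to be balanced and non-degenerate.
    StS = S.T @ S
    K = S.shape[1]
    ortho = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(K) / np.sqrt(K))
    return cut + ortho
\end{verbatim}
Because the loss is differentiable in $S$, a GNN producing $S$ can be trained by gradient descent with no eigendecomposition, and the learned assignment function transfers to unseen graphs.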
In structure learning, the output is generally a structure that is used as supervision information to achieve good performance. As the interpretability of deep learning models has attracted increasing attention in recent years, it would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), whose inner mechanism is still not clearly understood. We find that a Finite State Automaton (FSA), which processes sequential data, has a more interpretable inner mechanism and can be learned from an RNN as its interpretable structure. We propose two methods to learn an FSA from an RNN, based on two different clustering methods. We first give a graphical illustration of the FSA for humans to follow, which shows its interpretability. From the FSA's point of view, we then analyze how the performance of RNNs is affected by the number of gates, as well as the semantic meaning behind the transitions between numerical hidden states. Our results suggest that RNNs with a simple gated structure, such as the Minimal Gated Unit (MGU), are more desirable, and that the transitions in the FSA leading to a specific classification result are associated with corresponding words that are understandable by human beings.
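A minimal sketch of the extraction idea, assuming k-means as the clustering step (a hypothetical stand-in; the paper's two clustering methods are not reproduced here):
\begin{verbatim}
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def extract_fsa(hidden_seqs, token_seqs, n_states=10):
    """Abstract an RNN into an FSA: discretize hidden states by
    clustering, then record which input token drives each transition.

    hidden_seqs: list of (T_i, d) arrays of RNN hidden states
    token_seqs:  list of length-T_i input token sequences
    """
    km = KMeans(n_clusters=n_states, n_init=10).fit(np.vstack(hidden_seqs))
    transitions = Counter()
    for h_seq, tokens in zip(hidden_seqs, token_seqs):
        states = km.predict(h_seq)
        for t in range(1, len(states)):
            transitions[(states[t - 1], tokens[t], states[t])] += 1
    # A majority vote per (state, token) pair yields deterministic edges.
    return km, transitions
\end{verbatim}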