In this work we construct sequences of locally recoverable AG codes arising from a tower of function fields and give bounds on the parameters of the obtained codes. In the particular case of a tower over $\mathbb{F}_{q^2}$, for any odd $q$, defined by Garcia and Stichtenoth in [GS2007], we show that the bound is sharp for the first code in the sequence, and we include a detailed analysis for the subsequent codes in the sequence, based on the distribution of rational places that split completely in the considered function field extension.
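For orientation, locally recoverable codes are usually benchmarked against the classical Singleton-type bound for an $[n,k,d]$ code with locality $r$ (this is the standard general bound, not necessarily the specific bound derived in this work):
$$d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2.$$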
We introduce and analyse an efficient decoder for quantum Tanner codes that can correct adversarial errors of linear weight. Previous decoders for quantum low-density parity-check codes could only handle adversarial errors of weight $O(\sqrt{n \log n})$. We also examine the link between quantum Tanner codes and the Lifted Product codes of Panteleev and Kalachev, and show that our decoder can be adapted to the latter. The decoding algorithm alternates between sequential and parallel procedures and converges in linear time.
With the rapid growth of machine learning, deep neural networks (DNNs) are now being used in numerous domains. Unfortunately, DNNs are "black-boxes", and cannot be interpreted by humans, which is a substantial concern in safety-critical systems. To mitigate this issue, researchers have begun working on explainable AI (XAI) methods, which can identify a subset of input features that are the cause of a DNN's decision for a given input. Most existing techniques are heuristic, and cannot guarantee the correctness of the explanation provided. In contrast, recent and exciting attempts have shown that formal methods can be used to generate provably correct explanations. Although these methods are sound, the computational complexity of the underlying verification problem limits their scalability; and the explanations they produce might sometimes be overly complex. Here, we propose a novel approach to tackle these limitations. We (1) suggest an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation; (2) show how DNN verification can assist in calculating lower and upper bounds on the optimal explanation; (3) propose heuristics that significantly improve the scalability of the verification process; and (4) suggest the use of bundles, which allows us to arrive at more succinct and interpretable explanations. Our evaluation shows that our approach significantly outperforms state-of-the-art techniques, and produces explanations that are more useful to humans. We thus regard this work as a step toward leveraging verification technology in producing DNNs that are more reliable and comprehensible.
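As a rough illustration of the verification-based idea (a minimal sketch under our own assumptions, not the authors' algorithm), the following Python fragment greedily shrinks the set of fixed features using a hypothetical oracle verify_fixed that returns True iff fixing exactly those features to their input values guarantees the network's prediction:

def minimal_explanation(num_features, verify_fixed):
    # Start from the trivial explanation in which every input feature is fixed.
    fixed = set(range(num_features))
    for f in range(num_features):
        candidate = fixed - {f}          # tentatively free feature f
        if verify_fixed(candidate):      # prediction still guaranteed by the smaller set?
            fixed = candidate            # then f is not needed in the explanation
    return fixed                         # subset-minimal (assuming the guarantee is monotone in the fixed set)

Each call to verify_fixed stands in for a full DNN verification query, which is exactly where the scalability heuristics and the use of bundles described above come into play.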
The overheads of classical decoding for quantum error correction on superconducting quantum systems grow rapidly with the number of logical qubits and their correction code distance. Decoding at room temperature is bottlenecked by refrigerator I/O bandwidth, while cryogenic on-chip decoding is limited by the area/power/thermal budget. To overcome these overheads, we build on the observation that, in the common case, error signatures are fairly trivial, with high redundancy/sparsity, since the error correction codes are over-provisioned to correct for uncommon worst-case complex scenarios (to ensure sufficiently low logical error rates). If suitably exploited, these trivial signatures can be decoded and corrected with negligible overhead, thereby alleviating the bottlenecks described above, while still handling the worst-case complex signatures by state-of-the-art means. Our proposal, targeting Surface Codes, consists of: 1) Clique: A lightweight decoder for decoding and correcting trivial common-case errors, designed for the cryogenic domain and implemented in SFQ logic. 2) A statistical confidence-based technique for off-chip decoding bandwidth allocation, to efficiently handle rare complex decodes that are not covered by the on-chip decoder. 3) A method for stalling circuit execution in the worst-case scenarios in which the provisioned off-chip bandwidth is insufficient to complete all requested off-chip decodes. In all, our proposal enables 70-99+% off-chip bandwidth elimination across a range of logical and physical error rates, without significantly sacrificing the accuracy of state-of-the-art off-chip decoding. By doing so, it achieves 10-10000x bandwidth reduction over prior off-chip bandwidth reduction techniques. Furthermore, it achieves a 15-37x resource overhead reduction compared to prior on-chip-only decoding.
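The following Python sketch is only illustrative of the routing idea (it is not the Clique micro-architecture and ignores boundary checks): a syndrome is treated as trivial when its flipped checks form isolated nearest-neighbour pairs, and only non-trivial syndromes are sent to the off-chip decoder:

def is_trivial(flipped, neighbours):
    # flipped: set of flipped check indices; neighbours: dict mapping a check to its adjacent checks.
    for c in flipped:
        adjacent_flipped = [d for d in neighbours[c] if d in flipped]
        if len(adjacent_flipped) != 1:   # isolated defect or a longer chain: not a common-case pattern
            return False
    return True

def route(flipped, neighbours, local_decode, offchip_decode):
    # Correct common-case errors locally; fall back to the full off-chip decoder otherwise.
    return local_decode(flipped) if is_trivial(flipped, neighbours) else offchip_decode(flipped)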
This study suggests a new approach to calculating the finite Fourier transform. The method rests on the premise that the underlying series has been suitably modified and refined, and this assumption serves as the starting point of the investigation. The properties of the series were established by selecting an appropriate application of the Fourier series, applying it, and analysing it in relation to the finite Fourier transform. The results of this study provide a better understanding of the characteristics of this series.
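For reference, one standard form of the finite (discrete) Fourier transform of a sequence $f(0),\ldots,f(N-1)$ is the following (the normalisation and the interval-based variants used in the study may differ):
$$\hat{f}(k) = \sum_{j=0}^{N-1} f(j)\, e^{-2\pi i jk/N}, \qquad k = 0,1,\ldots,N-1.$$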
Minimal perfect hashing is the problem of mapping a static set of $n$ distinct keys into the address space $\{1,\ldots,n\}$ bijectively. It is well-known that $n\log_2 e$ bits are necessary to specify a minimal perfect hash function (MPHF) $f$, when no additional knowledge of the input keys is to be used. However, it is often the case in practice that the input keys have intrinsic relationships that we can exploit to lower the bit complexity of $f$. For example, consider a string and the set of all its distinct sub-strings of length $k$, the so-called $k$-mers of the string. Two consecutive $k$-mers in the string have a strong intrinsic relationship in that they share an overlap of $k-1$ symbols. Hence, it seems intuitively possible to beat the classic $\log_2 e$ bits/key barrier in this case. Moreover, we would like $f$ to map consecutive $k$-mers to consecutive addresses, so as to preserve as much as possible the relationships between the keys also in the co-domain $\{1,\ldots,n\}$. This is a useful feature in practice as it guarantees a certain degree of locality of reference for $f$, resulting in a better evaluation time when querying consecutive $k$-mers from a string. Motivated by these premises, we initiate the study of a new type of locality-preserving MPHF designed for $k$-mers extracted consecutively from a string (or collections of strings). We prove a theoretical lower bound on the bit complexity of any $(1-\varepsilon)$-locality-preserving MPHF, for a parameter $0 < \varepsilon < 1$. This bound is lower than $n\log_2 e$ bits for sufficiently small $\varepsilon$. We propose a construction that approaches the theoretical minimum space for growing $k$ and present a practical implementation of the method.
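The structural fact exploited here is easy to see in code; the snippet below (a basic illustration, not the proposed construction) extracts consecutive $k$-mers and checks that each pair overlaps in $k-1$ symbols:

def kmers(s, k):
    # All len(s) - k + 1 consecutive substrings of length k.
    return [s[i:i + k] for i in range(len(s) - k + 1)]

s, k = "ACGTACGGT", 4
ks = kmers(s, k)
for a, b in zip(ks, ks[1:]):
    assert a[1:] == b[:-1]   # suffix of length k-1 equals the next k-mer's prefix
print(ks)                    # ['ACGT', 'CGTA', 'GTAC', 'TACG', 'ACGG', 'CGGT']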
We propose coordinating guiding vector fields to achieve two tasks simultaneously with a team of robots: first, the guidance and navigation of multiple robots to possibly different paths or surfaces, typically embedded in 2D or 3D; second, their motion coordination while tracking their prescribed paths or surfaces. The motion coordination is defined by desired parametric displacements between robots on the path or surface. Such a desired displacement is achieved by controlling the virtual coordinates, which correspond to the path or surface parameters, between the guiding vector fields. Rigorous mathematical guarantees, underpinned by dynamical systems theory and Lyapunov theory, are provided for the effective distributed motion coordination and navigation of robots on paths or surfaces from all initial positions. As an example of a practical robotic application, we derive a control algorithm from the proposed coordinating guiding vector fields for a Dubins-car-like model with actuation saturation. Our proposed algorithm is distributed and scalable to an arbitrary number of robots. Furthermore, extensive illustrative simulations and fixed-wing aircraft outdoor experiments validate the effectiveness and robustness of our algorithm.
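As a single-robot illustration of a guiding vector field (a minimal sketch of the classical planar construction, not the coordinated multi-robot fields proposed here), the field below drives a point towards, and then along, a circular path given as the zero level set of $\varphi(x,y)=x^2+y^2-R^2$:

import numpy as np

def guiding_vector_field(p, R=1.0, k=1.0):
    x, y = p
    phi = x**2 + y**2 - R**2                 # zero exactly on the desired circular path
    grad = np.array([2.0 * x, 2.0 * y])      # gradient of phi
    tangent = np.array([-grad[1], grad[0]])  # 90-degree rotation of the gradient: propagation along the path
    return tangent - k * phi * grad          # the second term pulls the robot onto the path

# Forward-Euler rollout from an off-path initial position; the norm of p approaches R.
p = np.array([2.0, 0.5])
for _ in range(5000):
    v = guiding_vector_field(p)
    p = p + 0.001 * v / (np.linalg.norm(v) + 1e-9)
print(p, np.linalg.norm(p))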
The Galois inner product is a generalization of the Euclidean and Hermitian inner products. The Galois hull of a linear code is the intersection of the code with its Galois dual code, and it has attracted considerable attention from researchers in recent years. In this paper, we study Galois hulls of linear codes. First, we establish a symmetry property of the dimensions of Galois hulls and characterize new necessary and sufficient conditions for linear codes to be Galois self-orthogonal, Galois self-dual, or Galois linear complementary dual. Then, based on these properties, we develop the previous theory and propose explicit methods to construct Galois self-orthogonal codes of lengths $n+2i$ ($i\geq 0$) and $n+2i+1$ ($i\geq 1$) from Galois self-orthogonal codes of length $n$. As applications, linear codes of lengths $n+2i$ and $n+2i+1$ with Galois hulls of arbitrary dimensions are derived immediately. We also construct two new classes of Hermitian self-orthogonal MDS codes. Finally, applying all these results to the construction of entanglement-assisted quantum error-correcting codes (EAQECCs), we obtain many new EAQECCs and MDS EAQECCs with rates greater than or equal to $\frac{1}{2}$ and positive net rates.
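For reference, with the usual conventions, for $q=p^e$ and $0\le h<e$ the $h$-Galois inner product of $\mathbf{x},\mathbf{y}\in\mathbb{F}_q^n$ is
$$\langle \mathbf{x},\mathbf{y}\rangle_h = \sum_{i=1}^{n} x_i y_i^{p^h},$$
which recovers the Euclidean inner product for $h=0$ and the Hermitian inner product for $h=e/2$ when $e$ is even; the $h$-Galois hull of a linear code $C$ is $C\cap C^{\perp_h}$, where $C^{\perp_h}$ denotes the $h$-Galois dual of $C$.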
Sequences with low correlation have important applications in communications, cryptography, and compressed sensing. In the literature, many efforts have been made to construct good sequences of various lengths, among which binary sequences have attracted particular attention, and various constructions of good binary sequences have been proposed. However, most of the known constructions make use of the multiplicative cyclic group structure of the finite field $\mathbb{F}_{p^n}$ for a prime $p$ and a positive integer $n$. In fact, all $p^n+1$ rational places of the rational function field over $\mathbb{F}_{p^n}$, including the place at infinity, form a cyclic structure under an automorphism of order $p^n+1$. In this paper, we make use of this cyclic structure to provide an explicit construction of binary sequences of length $p^n+1$ with low correlation via cyclotomic function fields over $\mathbb{F}_{p^n}$ for any odd prime $p$. Each family of binary sequences has size $p^n-2$ and its correlation is upper bounded by $4+\lfloor 2\cdot p^{n/2}\rfloor$. To the best of our knowledge, this is the first construction of binary sequences of length $p^n+1$ with low correlation for an odd prime $p$. Moreover, our sequences can be constructed explicitly and have competitive parameters.
We study the rank of sub-matrices arising out of kernel functions $F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, where $\pmb{x},\pmb{y} \in \mathbb{R}^d$ and $F(\pmb{x},\pmb{y})$ is smooth everywhere except along $\pmb{x}=\pmb{y}$. Such kernel functions are frequently encountered in a wide range of applications such as $N$-body problems, Green's functions, integral equations, geostatistics, kriging, Gaussian processes, etc. One of the challenges in dealing with these kernel functions is that the corresponding matrix is large and dense, and thereby the computational cost of matrix operations is high. In this article, we prove new theorems bounding the numerical rank of sub-matrices arising out of these kernel functions. Under reasonably mild assumptions, we prove that certain sub-matrices are rank-deficient in finite precision. This rank depends on the dimension of the ambient space and also on the type of interaction between the hyper-cubes containing the corresponding sets of particles. This rank structure can be leveraged to reduce the computational cost of certain matrix operations such as matrix-vector products, solving linear systems, etc. We also present numerical results on the growth of the rank of certain sub-matrices in $1$D, $2$D, $3$D and $4$D, which, not surprisingly, agree with the theoretical results.
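As a quick numerical illustration of the kind of rank deficiency involved (our own toy example, not the experiments in the article), the Python snippet below estimates the numerical rank of an off-diagonal block of the 1D kernel $F(x,y)=\log|x-y|$ for two well-separated clusters of points:

import numpy as np

def numerical_rank(A, tol=1e-12):
    # Count singular values above a relative tolerance.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

n = 400
x = np.linspace(0.0, 1.0, n)            # source cluster
y = np.linspace(2.0, 3.0, n)            # well-separated target cluster
A = np.log(np.abs(x[:, None] - y[None, :]))
print(numerical_rank(A))                # small compared to n, reflecting the rank deficiency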
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
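A minimal sketch of the re-weighting scheme described above, assuming a plain softmax cross-entropy base loss (the paper also considers other base losses):

import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - beta ** samples_per_class) / (1.0 - beta)   # (1 - beta^n) / (1 - beta)
    weights = 1.0 / effective_num
    return weights * len(samples_per_class) / weights.sum()            # normalise to sum to the number of classes

def class_balanced_cross_entropy(logits, labels, weights):
    # logits: (batch, classes); labels: (batch,) integer class ids; weights: per-class weights.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels] * weights[labels]
    return per_sample.mean()

print(class_balanced_weights([5000, 500, 50], beta=0.999))   # under-represented classes get larger weights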