The tensor rank of some Gabidulin codes of small dimension is investigated. In particular, we determine the tensor rank of any rank-metric code equivalent to an $8$-dimensional $\mathbb{F}_q$-linear generalized Gabidulin code in $\mathbb{F}_{q}^{4\times4}$. This shows that such a code never has minimum tensor rank. In this way, we detect the first infinite family of Gabidulin codes that are not of minimum tensor rank.
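For orientation, recall one standard formulation of the tensor rank of a rank-metric code (the notation here is ours and may differ from the paper's): for a code $C \subseteq \mathbb{F}_q^{n\times m}$, the tensor rank is the smallest number of rank-one matrices whose $\mathbb{F}_q$-span contains $C$,
\[
\operatorname{trk}(C) \;=\; \min\bigl\{\, R \;:\; C \subseteq \langle A_1,\dots,A_R\rangle_{\mathbb{F}_q},\ \operatorname{rank}(A_r)=1 \text{ for all } r \,\bigr\},
\]
and a code is said to be of minimum tensor rank when its tensor rank attains the smallest value possible among codes with its parameters.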
We study a new two-time-scale stochastic gradient method for solving optimization problems where the gradients are computed with the aid of an auxiliary variable, under samples generated by time-varying Markov random processes parameterized by the underlying optimization variable. These time-varying samples make the gradient directions in our update biased and dependent, which can potentially lead to divergence of the iterates. In our two-time-scale approach, one iterate estimates the true gradient from these samples, and the other uses that estimate to update the approximation of the optimal solution. While these two iterates are implemented simultaneously, the former is updated "faster" (using larger step sizes) than the latter (using smaller step sizes). Our first contribution is to characterize the finite-time complexity of the proposed two-time-scale stochastic gradient method. In particular, we provide explicit formulas for the convergence rates of this method under different structural assumptions, namely, strong convexity, convexity, the Polyak-Łojasiewicz condition, and general non-convexity. We apply our framework to two problems in control and reinforcement learning. First, we study the standard online actor-critic algorithm over finite state and action spaces and derive a convergence rate of $O(k^{-2/5})$, which recovers the best known rate derived specifically for this problem. Second, we study an online actor-critic algorithm for the linear-quadratic regulator and show that a convergence rate of $O(k^{-2/3})$ is achieved. To our knowledge, this is the first such result in the literature. Finally, we support our theoretical analysis with numerical simulations in which the convergence rates are visualized.
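To fix ideas, here is a minimal sketch of the two-time-scale update (our illustration, not the paper's code; sample_markov and grad_sample are hypothetical placeholders for the state transition of the parameterized Markov process and the noisy gradient evaluation, and the step-size schedules are assumptions made for the sketch):
\begin{verbatim}
import numpy as np

def two_time_scale_sgd(x0, g0, sample_markov, grad_sample, K):
    # x: slow iterate (estimate of the optimal solution)
    # g: fast iterate (running estimate of the true gradient)
    x, g = np.array(x0, float), np.array(g0, float)
    s = None  # current state of the time-varying Markov process
    for k in range(1, K + 1):
        alpha = 1.0 / k            # slow step size
        beta = 1.0 / k ** (2 / 3)  # fast step size: beta >> alpha for large k
        s = sample_markov(s, x)    # next sample; the kernel depends on x
        g = g + beta * (grad_sample(x, s) - g)  # fast scale: track the gradient
        x = x - alpha * g          # slow scale: gradient step with the estimate
    return x
\end{verbatim}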
We construct six new explicit families of linear maximum sum-rank distance (MSRD) codes, each of which has the smallest field size among all known MSRD codes for some parameter regime. Using them, together with a previous result of the author, we provide two new explicit families of linear partial MDS (PMDS) codes with smaller field sizes than previous PMDS codes in some parameter regimes. Our approach is to characterize the evaluation points that turn extended Moore matrices into parity-check matrices of linear MSRD codes; we then produce such sequences of evaluation points from codes with good Hamming-metric parameters. The six new families of linear MSRD codes with smaller field sizes are obtained using MDS codes, Hamming codes, BCH codes, and three algebraic-geometry codes. The MSRD codes based on Hamming codes, of minimum sum-rank distance $3$, meet a recent bound by Byrne et al.
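For orientation, one common form of these objects in the skew-polynomial literature (the notation is our choice and may differ from the paper's): with $\sigma$ a field automorphism and truncated norms $N_j(a)=\sigma^{j-1}(a)\cdots\sigma(a)\,a$, an extended Moore matrix built from evaluation points $a_1,\dots,a_g$ and elements $\beta_{l,i}$ has entries
\[
\mathcal{D}_{a_l}^{\,j}(\beta_{l,i}) \;=\; \sigma^{j}(\beta_{l,i})\, N_j(a_l), \qquad j=0,1,\dots,k-1,
\]
and the characterization in question asks for which sequences of evaluation points such a matrix is the parity-check matrix of a linear MSRD code.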
We consider the question of adaptive data analysis within the framework of convex optimization. We ask how many samples are needed to compute $\epsilon$-accurate estimates of the $O(1/\epsilon^2)$ gradients queried by gradient descent, and we provide two intermediate answers to this question. First, we show that for a general analyst (not necessarily gradient descent), $\Omega(1/\epsilon^3)$ samples are required, which rules out the possibility of a foolproof mechanism. Our construction builds upon a new lower bound (which may be of interest in its own right) for an analyst that may ask several non-adaptive queries in each of a fixed and known number $T$ of rounds of adaptivity and requires a fraction of the answers to be true discoveries. We show that for such an analyst $\Omega(\sqrt{T}/\epsilon^2)$ samples are necessary. Second, we show that, under certain assumptions on the oracle, $\tilde{\Omega}(1/\epsilon^{2.5})$ samples are necessary in an interaction with gradient descent. Our assumptions are that the oracle has only \emph{first-order access} and is \emph{post-hoc generalizing}: first-order access means that it can only compute the gradients of the sampled functions at points queried by the algorithm, and our assumption of post-hoc generalization follows from existing lower bounds for statistical queries. More generally, we provide a generic reduction from the standard setting of statistical queries to the problem of estimating gradients queried by gradient descent. These results contrast with classical bounds showing that $O(1/\epsilon^2)$ samples suffice to optimize the population risk to accuracy $O(\epsilon)$, but, as it turns out, with spurious gradients.
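To make the interaction model concrete, here is a schematic sketch (our illustration; all names are placeholders, and the mechanism shown is the naive empirical one, without any noise-addition safeguards): a first-order-access oracle holds sampled functions and can only return gradients at the points the algorithm queries.
\begin{verbatim}
import numpy as np

class FirstOrderOracle:
    # Holds per-sample gradient callables; "first-order access" means it
    # only ever evaluates gradients at points chosen by the analyst.
    def __init__(self, grad_samples):
        self.grad_samples = grad_samples

    def query(self, x):
        # Naive mechanism: answer with the empirical mean gradient.
        return np.mean([g(x) for g in self.grad_samples], axis=0)

def gradient_descent(oracle, x0, eta=0.1, T=100):
    x = np.array(x0, float)
    for _ in range(T):
        x = x - eta * oracle.query(x)  # each step issues one adaptive query
    return x
\end{verbatim}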
Certain simplicial complexes are used to construct a subset $D$ of $\mathbb{F}_{2^n}^m$, and $D$, in turn, defines the linear code $C_{D}$ over $\mathbb{F}_{2^n}$ consisting of the words $(v\cdot d)_{d\in D}$ for $v\in \mathbb{F}_{2^n}^m$. Here we deal with the case $n=3$, that is, when $C_{D}$ is an octanary code. We establish a relation between $C_{D}$ and its binary subfield code $C_{D}^{(2)}$ with the help of a generator matrix. For a given length and dimension, a code is called distance optimal if it has the largest possible minimum distance. With respect to the Griesmer bound, five infinite families of distance-optimal codes are obtained, and sufficient conditions for certain linear codes to be minimal are established.
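As a toy illustration of the construction of $C_D$ (our sketch; the defining set below is an arbitrary example rather than one arising from a simplicial complex), the following enumerates the codewords $(v\cdot d)_{d\in D}$ over $\mathbb{F}_8$, with field elements represented as integers $0,\dots,7$ and multiplication performed modulo $x^3+x+1$:
\begin{verbatim}
from itertools import product

def gf8_mul(a, b):
    # Multiply in GF(2^3) = GF(2)[x]/(x^3 + x + 1), elements as 3-bit ints.
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):          # reduce terms of degree >= 3
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def dot(v, d):
    # v . d over GF(8); addition in characteristic 2 is XOR.
    s = 0
    for vi, di in zip(v, d):
        s ^= gf8_mul(vi, di)
    return s

def code_CD(D, m):
    # C_D = { (v . d)_{d in D} : v in GF(8)^m }
    return {tuple(dot(v, d) for d in D) for v in product(range(8), repeat=m)}

D = [(1, 0), (0, 1), (1, 1), (2, 3)]   # arbitrary toy defining set
C = code_CD(D, m=2)
d_min = min(sum(c != 0 for c in w) for w in C if any(w))
print(len(C), d_min)                   # code size and minimum distance
\end{verbatim}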
Generalized pair weights of linear codes, recently introduced by Liu and Pan \cite{LP}, are generalizations of minimum symbol-pair weights. Generalized pair weights can be used to characterize the ability to protect information in symbol-pair read wiretap channels of type II. In this paper, we introduce the notion of generalized $b$-symbol weights of linear codes over finite fields, which generalizes both generalized Hamming weights and generalized pair weights. We obtain some basic properties of generalized $b$-symbol weights and derive Singleton-like bounds for them. As examples, we calculate the generalized weight matrices of simplex codes and Hamming codes. We provide a necessary and sufficient condition, in terms of the generator matrix and the parity-check matrix, for a linear code to be a $b$-symbol MDS code. Finally, we obtain a necessary and sufficient condition for a linear isomorphism between two linear codes to preserve $b$-symbol weights. As a corollary, we recover the classical MacWilliams extension theorem when $b=1$.
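For concreteness, the underlying $b$-symbol weight on which these invariants are built is usually defined as follows (one standard formulation, with indices read cyclically modulo $n$): for $x=(x_0,\dots,x_{n-1})\in\mathbb{F}_q^n$,
\[
w_b(x) \;=\; \#\bigl\{\, i \;:\; (x_i, x_{i+1}, \dots, x_{i+b-1}) \neq (0,\dots,0) \,\bigr\},
\]
so that $w_1$ is the Hamming weight and $w_2$ is the pair weight, matching the two special cases being generalized.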
For any small positive real $\varepsilon$ and integer $t > \frac{1}{\varepsilon}$, we build a graph that admits a vertex deletion set of size $t$ to a tree and has twin-width greater than $2^{(1-\varepsilon) t}$. In particular, this shows that the twin-width is sometimes exponential in the treewidth, as well as in the so-called oriented twin-width and the grid number, and that adding an apex may multiply the twin-width by at least $2-\varepsilon$. Except for the lower bound in terms of oriented twin-width, these lower bounds are essentially tight.
The lossless compression of a single source $X^n$ was recently shown to be achievable with a notion of strong locality: any $X_i$ can be decoded from a \emph{constant} number of compressed bits, with a probability of error vanishing in $n$. In contrast with the single-source setup, we show that for two separately encoded sources $(X^n,Y^n)$, lossless compression with strong locality is generally not possible. More precisely, we show that for the class of "confusable" sources, strong locality cannot be achieved whenever one of the sources is compressed below its entropy. In this case, irrespective of $n$, the probability of error in decoding any $(X_i,Y_i)$ is lower bounded by $2^{-O(d_{\mathrm{loc}})}$, where $d_{\mathrm{loc}}$ denotes the number of compressed bits accessed by the local decoder. Conversely, if the sources are not confusable, strong locality is possible even if one of the sources is compressed below its entropy. Our results extend to any number of sources.
The minimum energy path (MEP) describes the mechanism of a reaction, and the energy barrier along the path can be used to calculate the reaction rate in thermal systems. The nudged elastic band (NEB) method is one of the most commonly used schemes for computing MEPs numerically. It approximates an MEP by a discrete set of configuration images, where the discretization size determines both the computational cost and the accuracy of the simulations. In this paper, we consider a discrete MEP as a stationary state of the NEB method and prove an optimal convergence rate of the discrete MEP with respect to the number of images. Numerical simulations of the transitions of several prototypical model systems are performed to support the theory.
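As a minimal sketch of the scheme being analyzed (our simplified illustration with a central-difference tangent; production NEB implementations use improved tangent estimates and climbing images), one plain relaxation sweep over the interior images looks as follows:
\begin{verbatim}
import numpy as np

def neb_step(images, grad_V, k_spring=1.0, dt=0.01):
    # images: (N, d) array of configurations; the two endpoints stay fixed.
    new = images.copy()
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]
        tau /= np.linalg.norm(tau)      # unit tangent along the band
        g = grad_V(images[i])
        g_perp = g - (g @ tau) * tau    # "nudging": keep only the force
                                        # component perpendicular to the band
        spring = k_spring * (np.linalg.norm(images[i + 1] - images[i])
                             - np.linalg.norm(images[i] - images[i - 1]))
        new[i] = images[i] - dt * g_perp + dt * spring * tau
    return new

# Toy usage on a double well V(x, y) = (x^2 - 1)^2 + y^2:
grad_V = lambda p: np.array([4 * p[0] * (p[0] ** 2 - 1), 2 * p[1]])
band = np.linspace([-1.0, 0.5], [1.0, 0.5], 11)  # initial straight band
for _ in range(2000):
    band = neb_step(band, grad_V)
\end{verbatim}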
Sparse coding has been proposed both as a theory of visual cortex and as an unsupervised algorithm for learning representations. We show empirically on the MNIST dataset that sparse codes can be very sensitive to image distortions, a behavior that may hinder invariant object recognition. A locally linear analysis suggests that the sensitivity is due to the existence of linear combinations of active dictionary elements with high cancellation. A nearest-neighbor classifier is shown to perform worse on sparse codes than on original images. For a linear classifier with a sufficiently large number of labeled examples, sparse codes are shown to yield higher accuracy than original images, but no higher than a representation computed by a random feedforward network. Sensitivity to distortions appears to be a basic property of sparse codes, and one should be aware of it when applying sparse codes to invariant object recognition.
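The kind of sensitivity probe we have in mind can be illustrated with a minimal sketch (ours, not the paper's code): compute sparse codes by ISTA, a standard solver for the lasso formulation of sparse coding, for an input and a slightly distorted copy, then compare the two codes.
\begin{verbatim}
import numpy as np

def sparse_code_ista(D, x, lam=0.1, n_iter=300):
    # Solve min_a 0.5 * ||x - D a||^2 + lam * ||a||_1 with ISTA.
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz const. of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L    # gradient step on the quadratic term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = rng.normal(size=64)
x_shift = x + 0.05 * rng.normal(size=64) # small distortion of the input
a, a_shift = sparse_code_ista(D, x), sparse_code_ista(D, x_shift)
print(np.linalg.norm(a - a_shift) / np.linalg.norm(a))  # relative code change
\end{verbatim}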
There are many important high-dimensional function classes that admit fast agnostic learning algorithms when strong assumptions, such as Gaussianity or uniformity over the domain, can be made on the distribution of examples. But how can one be sufficiently confident that the data indeed satisfy the distributional assumption, so that one can trust the output quality of the agnostic learning algorithm? We propose a model for systematically studying the design of tester-learner pairs $(\mathcal{A},\mathcal{T})$ such that, if the distribution on examples in the data passes the tester $\mathcal{T}$, then one can safely trust the output of the agnostic learner $\mathcal{A}$ on the data. To demonstrate the power of the model, we apply it to the classical problem of agnostically learning halfspaces under the standard Gaussian distribution and present a tester-learner pair with a combined run-time of $n^{\tilde{O}(1/\epsilon^4)}$. This qualitatively matches the run-time of the best known ordinary agnostic learning algorithms for this task; in contrast, finite-sample testers for Gaussianity with respect to the $L_1$ and EMD distance measures do not exist. A key step in the analysis is a novel characterization of the concentration and anti-concentration properties of a distribution whose low-degree moments approximately match those of a Gaussian; we also use tools from polynomial approximation theory. On the negative side, we show strong lower bounds on the combined run-times of tester-learner pairs for the problems of agnostically learning convex sets under the Gaussian distribution and monotone Boolean functions under the uniform distribution over $\{0,1\}^n$. Through these lower bounds we exhibit natural problems where there is a dramatic gap between the standard agnostic learning run-time and the run-time of the best tester-learner pair.
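A toy rendition of the moment-matching idea (ours; the paper's actual tester, query classes, and thresholds differ): accept the sample only if every monomial moment up to a fixed degree is close to its value under $N(0, I_d)$.
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement
from math import prod

def gaussian_moment_1d(k):
    # E[g^k] for g ~ N(0,1): zero for odd k, double factorial (k-1)!! for even k.
    if k % 2:
        return 0.0
    return float(prod(range(k - 1, 0, -2))) if k else 1.0

def moment_tester(X, max_degree=4, tol=0.1):
    # X: (n, d) sample. Under N(0, I_d) the coordinates are independent, so
    # each monomial moment factors into a product of 1-dimensional moments.
    n, d = X.shape
    for deg in range(1, max_degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            emp = np.mean(np.prod(X[:, list(idx)], axis=1))
            target = prod(gaussian_moment_1d(idx.count(i)) for i in set(idx))
            if abs(emp - target) > tol:
                return False  # some low-degree moment is off: reject
    return True
\end{verbatim}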