Insertion-deletion (insdel) codes were introduced to correct synchronization errors. In this paper we prove several coordinate-ordering-free upper bounds on the insdel distances of linear codes, based on the generalized Hamming weights and on the formation of minimum-Hamming-weight codewords. Our bounds are stronger than some previously known bounds. We apply these upper bounds to some cyclic codes and to an algebraic-geometric code under any rearrangement of coordinate positions. Strong upper bounds on the insdel distances of Reed-Muller codes under special coordinate orderings are also given.
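As a concrete reference point, the insdel distance of a small code can be brute-forced from longest common subsequences; the sketch below is a minimal illustration (not the paper's method), assuming the standard identity $d_I(u,v) = |u| + |v| - 2\,\mathrm{LCS}(u,v)$.

```python
def lcs(u, v):
    """Length of the longest common subsequence, via dynamic programming."""
    m, n = len(u), len(v)
    T = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            T[i + 1][j + 1] = (T[i][j] + 1 if u[i] == v[j]
                               else max(T[i][j + 1], T[i + 1][j]))
    return T[m][n]

def insdel_distance(code):
    """Minimum insdel distance over all distinct codeword pairs."""
    return min(len(u) + len(v) - 2 * lcs(u, v)
               for u in code for v in code if u != v)

# Example: the [3, 2] binary even-weight (single parity check) code.
code = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
print(insdel_distance(code))   # 2, e.g. attained by the pair 011, 110
```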
The Sum-of-Squares (SoS) hierarchy of semidefinite programs is a powerful algorithmic paradigm which captures state-of-the-art algorithmic guarantees for a wide array of problems. In the average-case setting, SoS lower bounds provide strong evidence of algorithmic hardness or information-computation gaps. Prior to this work, SoS lower bounds had been obtained for problems in the "dense" input regime, where the input is a collection of independent Rademacher or Gaussian random variables, while the sparse regime has remained out of reach. We make the first progress in this direction by obtaining strong SoS lower bounds for the problem of Independent Set on sparse random graphs. We prove that, with high probability over an Erd\H{o}s-R\'enyi random graph $G\sim G_{n,\frac{d}{n}}$ with average degree $d>\log^2 n$, degree-$D_{SoS}$ SoS fails to refute the existence of an independent set of size $k = \Omega\left(\frac{n}{\sqrt{d}(\log n)(D_{SoS})^{c_0}} \right)$ in $G$ (where $c_0$ is an absolute constant), whereas the true size of the largest independent set in $G$ is $O\left(\frac{n\log d}{d}\right)$. Our proof involves several significant extensions of the techniques used for proving SoS lower bounds in the dense setting. Previous lower bounds are based on the pseudo-calibration heuristic of Barak et al. [FOCS 2016], which produces a candidate SoS solution using a planted distribution indistinguishable from the input distribution via low-degree tests. In the sparse case the natural planted distribution does admit low-degree distinguishers, and we show how to adapt the pseudo-calibration heuristic to overcome this. Another notorious technical challenge in the sparse regime is obtaining matrix norm bounds; we obtain new norm bounds for graph matrices in the sparse setting.
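A quick numerical illustration of the scales involved (ours, not part of the proof): sample $G_{n,d/n}$ and run the greedy heuristic, which is known to find an independent set of size on the order of $\frac{n \log d}{d}$, i.e., within a constant factor of the true maximum.

```python
import math
import random

def greedy_independent_set(n, d, seed=0):
    """Sample an Erdos-Renyi graph G(n, d/n) and greedily grow an
    independent set by scanning vertices in a fixed order."""
    rng = random.Random(seed)
    p = d / n
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    indep = set()
    for u in range(n):
        if not (adj[u] & indep):   # u has no neighbor already selected
            indep.add(u)
    return indep

n, d = 3000, 100                   # d > log^2 n, the theorem's regime
I = greedy_independent_set(n, d)
print(len(I), "vs n*log(d)/d ~", round(n * math.log(d) / d))
```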
We show that Gallager's ensemble of Low-Density Parity Check (LDPC) codes achieves list-decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result opens up a potential avenue towards truly linear-time list-decodable codes that achieve list-decoding capacity. Our result on list decoding follows from a much more general result: any $\textit{local}$ property satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code from Gallager's distribution. Local properties are properties characterized by the exclusion of small sets of codewords, and include list-decodability, list-recoverability and average-radius list-decodability. In order to prove our results on LDPC codes, we establish sharp thresholds for when local properties are satisfied by a random linear code. More precisely, we show that for any local property $\mathcal{P}$, there is some $R^*$ so that random linear codes of rate slightly less than $R^*$ satisfy $\mathcal{P}$ with high probability, while random linear codes of rate slightly more than $R^*$, with high probability, do not. We also give a characterization of the threshold rate $R^*$.
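For concreteness, a parity-check matrix from Gallager's $(d_v, d_c)$-regular ensemble can be sampled by stacking random column permutations of a banded base block; the sketch below follows the classical construction (the ensemble details in the paper may differ slightly).

```python
import numpy as np

def gallager_ldpc(n, dv, dc, seed=0):
    """Sample a parity-check matrix from Gallager's (dv, dc)-regular ensemble.

    n must be divisible by dc.  H has dv * n / dc rows: dv stacked blocks,
    each an independent random column permutation of a base block whose
    i-th row has ones exactly in columns [i*dc, (i+1)*dc).
    """
    assert n % dc == 0
    rng = np.random.default_rng(seed)
    rows = n // dc
    base = np.zeros((rows, n), dtype=np.uint8)
    for i in range(rows):
        base[i, i * dc:(i + 1) * dc] = 1
    blocks = [base[:, rng.permutation(n)] for _ in range(dv)]
    return np.vstack(blocks)

H = gallager_ldpc(n=24, dv=3, dc=6)
print(H.shape)        # (12, 24): design rate 1 - dv/dc = 1/2
print(H.sum(axis=1))  # every row has weight dc
print(H.sum(axis=0))  # every column has weight dv
```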
Polar codes are normally designed based on the reliability of the sub-channels in the polarized vector channel. There are various methods, of diverse complexity and accuracy, for evaluating the reliability of the sub-channels. However, designing polar codes solely based on sub-channel reliability may result in poor Hamming distance properties. In this work, we propose a different approach to designing the information set for polar codes and PAC codes, where the objective is to reduce the number of minimum-weight codewords (a.k.a. the error coefficient) of a code designed for maximum reliability. This approach is based on a coset-wise characterization of the rows of the polar transform $\mathbf{G}_N$ involved in the formation of the minimum-weight codewords. Our analysis capitalizes on the properties of the polar transform based on its row and column indices. The numerical results show that the designed codes outperform PAC codes and CRC-Polar codes at practical block error rates of $10^{-2}$ to $10^{-3}$. Furthermore, a by-product of the combinatorial properties analyzed in this paper is an alternative method for enumerating the minimum-weight codewords.
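At short lengths the error coefficient can be obtained by exhaustive enumeration, which gives a baseline for any analytical counting method; the sketch below is our illustration, using a BEC-based Bhattacharyya recursion as the reliability rule and assuming the natural (non-bit-reversed) row ordering of $\mathbf{G}_N = \mathbf{F}^{\otimes n}$.

```python
import numpy as np
from itertools import product

def polar_error_coefficient(n, k):
    """Brute-force (dmin, A_dmin) for a length-2^n, dimension-k polar code.

    Reliabilities come from the Bhattacharyya recursion on a BEC(1/2):
    z -> (2z - z^2, z^2).  This is a standard design rule, used here
    purely as an illustration.
    """
    N = 2 ** n
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    z = [0.5]
    for _ in range(n):
        G = np.kron(G, F)                       # builds F^{(x) n}
        z = [t for zi in z for t in (2 * zi - zi * zi, zi * zi)]
    info = sorted(range(N), key=lambda i: z[i])[:k]   # k most reliable rows
    rows = G[sorted(info)]
    dmin, count = N + 1, 0
    for msg in product((0, 1), repeat=k):             # all 2^k codewords
        w = int((np.array(msg, dtype=np.uint8) @ rows % 2).sum())
        if 0 < w < dmin:
            dmin, count = w, 1
        elif w == dmin:
            count += 1
    return dmin, count

print(polar_error_coefficient(n=4, k=8))   # dmin and its multiplicity
```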
We give two approximation algorithms solving the Stochastic Boolean Function Evaluation (SBFE) problem for symmetric Boolean functions. The first is an $O(\log n)$-approximation algorithm, based on the submodular goal-value approach of Deshpande, Hellerstein, and Kletenik. Our second, simpler algorithm is based on the algorithm solving the SBFE problem for $k$-of-$n$ functions, due to Salloum, Breuer, and Ben-Dov. It achieves a $(B-1)$-approximation factor, where $B$ is the number of blocks of 0's and 1's in the standard vector representation of the symmetric Boolean function. As part of the design of the first algorithm, we prove that the goal value of any symmetric Boolean function is less than $n(n+1)/2$. Finally, we give an example showing that, for symmetric Boolean functions, the minimum expected verification cost and the minimum expected evaluation cost are not necessarily equal. This contrasts with a previous result of Das, Jafarpour, Orlitsky, Pan, and Suresh, which showed that equality holds in the unit-cost case.
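To make the parameter $B$ concrete: a symmetric Boolean function on $n$ variables is determined by a value vector of length $n+1$ indexed by Hamming weight, and $B$ counts the maximal runs of equal values in it. A minimal sketch (our illustration; the paper's vector convention may differ in details):

```python
def num_blocks(value_vector):
    """Number B of maximal runs ("blocks") of equal bits in the value
    vector of a symmetric Boolean function (entry i = f's value on
    inputs of Hamming weight i, so the vector has length n+1)."""
    B = 1
    for prev, cur in zip(value_vector, value_vector[1:]):
        if cur != prev:
            B += 1
    return B

# 2-of-4 threshold function: f = 1 iff at least 2 inputs are 1.
print(num_blocks([0, 0, 1, 1, 1]))   # B = 2, so a (B-1) = 1 factor
# Parity on 4 variables alternates with every weight.
print(num_blocks([0, 1, 0, 1, 0]))   # B = 5
```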
Efficient and robust iterative solvers for strongly anisotropic elliptic equations are very challenging. In this paper a block preconditioning method is introduced to solve the linear algebraic systems arising from a class of micro-macro asymptotic-preserving (MMAP) schemes. The MMAP scheme was developed by Degond {\it et al.} in 2012, and its discrete matrix has a $2\times2$ block structure. Using approximate Schur complements, a series of block preconditioners is constructed. We first analyze a natural approximate Schur complement, namely the coefficient matrix of the original non-AP discretization; however, it tends to be singular for very small anisotropy parameters. We then improve it by using a more suitable approximation for the boundary rows of the exact Schur complement. With these block preconditioners, a preconditioned GMRES iterative method is developed to solve the discrete equations. Several numerical tests show that block preconditioning can be a robust strategy with respect to grid refinement and anisotropy strength.
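The flavor of the approach can be reproduced on a toy $2\times2$ block system (ours, not the MMAP discretization): precondition GMRES with a block upper-triangular solve built from an approximate Schur complement $\widetilde{S} \approx A_{22} - A_{21}A_{11}^{-1}A_{12}$.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lap1d(m):
    """1D Dirichlet Laplacian as a sparse matrix."""
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csc")

m, eps = 200, 1e-6                      # eps plays the anisotropy parameter
A11 = (lap1d(m) / eps).tocsc()          # stiff block
A12 = sp.eye(m, format="csc")
A21 = sp.eye(m, format="csc")
A22 = (lap1d(m) + sp.eye(m)).tocsc()
A = sp.bmat([[A11, A12], [A21, A22]], format="csc")
b = np.ones(2 * m)

# Approximate Schur complement: S ~ A22 - A21 diag(A11)^{-1} A12.
S = (A22 - A21 @ sp.diags(1.0 / A11.diagonal()) @ A12).tocsc()
lu11, luS = spla.splu(A11), spla.splu(S)

def prec(r):
    """Apply the block upper-triangular preconditioner."""
    y2 = luS.solve(r[m:])
    y1 = lu11.solve(r[:m] - A12 @ y2)
    return np.concatenate([y1, y2])

M = spla.LinearOperator(A.shape, matvec=prec)
res = []
x, info = spla.gmres(A, b, M=M, callback=res.append)
print("converged:", info == 0, "in", len(res), "iterations")
```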
We study flow scheduling under node capacity constraints. We are given capacitated nodes and an online sequence of jobs, each with a release time and a demand to be routed between two nodes. A schedule specifies which jobs are routed in each step, guaranteeing that the total demand on a node in any step is at most its capacity. A key metric in this scenario is response time: the time between a job's release and its completion. Prior work shows that no algorithm without resource augmentation is competitive for average response time, and that a constant-factor competitive ratio is achievable with augmentation exceeding 2 (Dinitz-Moseley, Infocom 2020). For maximum response time, the best known result is a 2-competitive algorithm with augmentation 4 (Jahanjou et al., SPAA 2020). We improve these bounds under various response time objectives. We show that, without resource augmentation, the best competitive ratio for maximum response time is $\Omega(n)$, where $n$ is the number of nodes. Our Proportional Allocation algorithm uses $(1+\varepsilon)$ resource augmentation to achieve a $(1/\varepsilon)$-competitive ratio in the setting with general demands and capacities, and splittable jobs. Our Batch Decomposition algorithm is $2$-competitive (resp., optimal) for maximum response time using resource augmentation 2 (resp., 4) in the setting with unit demands and capacities, and unsplittable jobs. We also derive bounds for the simultaneous approximation of the average and maximum response time metrics.
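One plausible reading of proportional allocation (a hypothetical sketch; the paper's algorithm may differ): in each step, every node offers each active job a share of its $(1+\varepsilon)$-augmented capacity proportional to the job's remaining demand, and a splittable job is served at the smaller of its two endpoint shares.

```python
def proportional_allocation(jobs, cap, eps, max_steps=1000):
    """Toy simulation of proportionally allocated node capacity.

    jobs: list of (release, u, v, demand); splittable, unit time steps.
    Each step, every node splits its (1+eps)-augmented capacity among
    its active jobs in proportion to remaining demand; a job's rate is
    the smaller of its two endpoint shares, so no capacity is exceeded.
    Returns each job's completion time.
    """
    remaining = [d for (_, _, _, d) in jobs]
    finish = [None] * len(jobs)
    for t in range(max_steps):
        active = [j for j in range(len(jobs))
                  if jobs[j][0] <= t and remaining[j] > 1e-9]
        load = {}
        for j in active:                       # demand pressure per node
            for w in jobs[j][1:3]:
                load[w] = load.get(w, 0.0) + remaining[j]
        for j in active:
            u, v = jobs[j][1:3]
            rate = min((1 + eps) * cap[w] * remaining[j] / load[w]
                       for w in (u, v))
            remaining[j] = max(0.0, remaining[j] - rate)
            if remaining[j] <= 1e-9 and finish[j] is None:
                finish[j] = t + 1
        if all(f is not None for f in finish):
            break
    return finish

cap = {0: 1.0, 1: 1.0, 2: 1.0}
jobs = [(0, 0, 1, 2.0), (0, 1, 2, 2.0), (1, 0, 2, 1.0)]
print(proportional_allocation(jobs, cap, eps=0.5))
```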
We study the classical expander codes, introduced by Sipser and Spielman \cite{SS96}. Given any constants $0< \alpha, \varepsilon < 1/2$, and an arbitrary bipartite graph with $N$ vertices on the left, $M < N$ vertices on the right, and left degree $D$ such that any left subset $S$ of size at most $\alpha N$ has at least $(1-\varepsilon)|S|D$ neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly $\frac{\alpha N}{2 \varepsilon }$. This is strictly better than the best previously known result of $2(1-\varepsilon ) \alpha N$ \cite{Sudan2000note, Viderman13b} whenever $\varepsilon < 1/2$, and improves on the previous result significantly when $\varepsilon $ is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever $\varepsilon < \frac{1}{4}$. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
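At toy scale, both quantities in such a statement can be checked exhaustively: the code distance by enumerating codewords of the parity-check code, and the expansion by scanning small left subsets. The sketch below is illustrative only, since genuine expansion requires much larger parameters.

```python
import numpy as np
from itertools import combinations, product

def code_distance(H):
    """Minimum weight of a nonzero codeword of {x : Hx = 0 over GF(2)}."""
    M, N = H.shape
    best = N + 1
    for bits in product((0, 1), repeat=N):
        x = np.array(bits, dtype=np.uint8)
        if x.any() and not (H @ x % 2).any():
            best = min(best, int(x.sum()))
    return best

def expansion(adj, max_size):
    """min |N(S)| / (|S| * D) over left subsets S with |S| <= max_size."""
    D = len(adj[0])
    worst = 1.0
    for s in range(1, max_size + 1):
        for S in combinations(range(len(adj)), s):
            nbrs = set().union(*(adj[v] for v in S))
            worst = min(worst, len(nbrs) / (s * D))
    return worst

rng = np.random.default_rng(1)
N, M, D = 12, 9, 3
adj = [set(rng.choice(M, size=D, replace=False)) for _ in range(N)]
H = np.zeros((M, N), dtype=np.uint8)
for n, nbrs in enumerate(adj):
    for m in nbrs:
        H[m, n] = 1
print("distance:", code_distance(H),
      " worst expansion over |S|<=3:", expansion(adj, 3))
```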
Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, and then imposed an outer discrete-variable code such as the surface code on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands substantial resources as the distance grows. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted-product QLDPC code families, and then show the improvements in noise threshold when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information, and on improving decoder design and analysis.
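The analog information has a concrete form: a GKP position measurement returns a real value whose deviation from the nearest lattice point yields a soft log-likelihood ratio that can initialize the min-sum decoder. A minimal sketch under a Gaussian displacement model (our illustration of the standard GKP soft-output idea; the paper's decoder goes beyond this):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def gkp_analog_llr(q, sigma, terms=20):
    """Soft LLR for the bit encoded in a square-lattice GKP position
    measurement q, under Gaussian displacement noise of std sigma.

    Bit 0 corresponds to q near even multiples of sqrt(pi), bit 1 to
    odd multiples; the LLR sums Gaussian likelihoods over nearby
    lattice points.  These LLRs replace hard syndrome bits as the
    variable-node initialization of min-sum decoding.
    """
    def lik(parity):
        return sum(
            math.exp(-(q - (2 * k + parity) * SQRT_PI) ** 2
                     / (2 * sigma ** 2))
            for k in range(-terms, terms + 1))
    return math.log(lik(0) / lik(1))

# A measurement close to sqrt(pi) strongly suggests bit 1 ...
print(gkp_analog_llr(0.95 * SQRT_PI, sigma=0.3))   # large negative LLR
# ... while one midway between lattice points is uninformative.
print(gkp_analog_llr(0.50 * SQRT_PI, sigma=0.3))   # LLR = 0
```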
We prove the inequality $E[(X/\mu)^k] \le (\frac{k/\mu}{\log(k/\mu+1)})^k \le \exp(k^2/(2\mu))$ for sub-Poissonian random variables, such as binomially or Poisson distributed random variables with mean $\mu$. The asymptotics $1+O(k^2/\mu)$ can be shown to be tight for small $k$. This improves over previous uniform bounds for the raw moments of those distributions by a factor exponential in $k$.
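The inequality is easy to sanity-check numerically, since Poisson raw moments are Touchard polynomials, $E[X^k] = \sum_{j} S(k,j)\,\mu^j$ with $S(k,j)$ the Stirling numbers of the second kind. A small verification sketch (ours):

```python
import math

def stirling2_row(k):
    """Row [S(k,0), ..., S(k,k)] of Stirling numbers of the second
    kind, via the recurrence S(n,j) = j*S(n-1,j) + S(n-1,j-1)."""
    row = [1] + [0] * k
    for _ in range(k):
        row = [0] + [j * row[j] + row[j - 1] for j in range(1, k + 1)]
    return row

def poisson_norm_moment(mu, k):
    """E[(X/mu)^k] for X ~ Poisson(mu): Touchard polynomial over mu^k."""
    S = stirling2_row(k)
    return sum(S[j] * mu ** j for j in range(k + 1)) / mu ** k

for mu in (0.5, 2.0, 10.0):
    for k in (1, 2, 5, 10):
        lhs = poisson_norm_moment(mu, k)
        mid = ((k / mu) / math.log(k / mu + 1)) ** k
        rhs = math.exp(k ** 2 / (2 * mu))
        assert lhs <= mid <= rhs, (mu, k, lhs, mid, rhs)
print("both inequalities hold on all test points")
```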
We revisit the (block-angular) min-max resource sharing problem, which is a well-known generalization of fractional packing and the maximum concurrent flow problem. It consists of finding an $\ell_{\infty}$-minimal element in a Minkowski sum $\mathcal{X}= \sum_{C \in \mathcal{C}} X_C$ of non-empty closed convex sets $X_C \subseteq \mathbb{R}^{\mathcal{R}}_{\geq 0}$, where $\mathcal{C}$ and $\mathcal{R}$ are finite sets. We assume that an oracle for approximate linear minimization over $X_C$ is given. In this setting, the currently fastest known FPTAS is due to M\"uller, Radke, and Vygen. For $\delta \in (0,1]$, it computes a $\sigma(1+\delta)$-approximately optimal solution using $\mathcal{O}((|\mathcal{C}|+|\mathcal{R}|)\log |\mathcal{R}| (\delta^{-2} + \log \log |\mathcal{R}|))$ oracle calls, where $\sigma$ is the approximation ratio of the oracle. We describe an extension of their algorithm and improve on previous results in various ways. Our FPTAS, which, like previous approaches, is based on the multiplicative weight update method, computes close to optimal primal and dual solutions using $\mathcal{O}\left(\frac{|\mathcal{C}|+ |\mathcal{R}|}{\delta^2} \log |\mathcal{R}|\right)$ oracle calls. We prove that our running time is optimal under certain assumptions, implying that no warm-start analysis of the algorithm is possible. A major novelty of our analysis is the concept of local weak duality, which illustrates that the algorithm optimizes (close to) independent parts of the instance separately. Interestingly, this implies that the computed solution is not only approximately $\ell_{\infty}$-minimal, but also that, among such solutions, its second-highest entry is approximately minimal. We prove that this statement cannot be extended to the third-highest entry.
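The multiplicative-weight template behind the algorithm is easy to state on a toy instance (ours, far simpler than the paper's setting): blocks $X_C$ are simplices over allowed resources, the oracle returns a minimum-weight resource, and weights grow exponentially with load.

```python
def mwu_resource_sharing(choices, T=2000):
    """Toy multiplicative-weight scheme for min-max resource sharing.

    choices[c] lists the resources customer c may use; X_C is the
    simplex over them with unit demand.  Each round the block oracle
    returns a resource of minimum weight w_r = exp(eta * load_r); since
    exp is monotone, comparing cumulative loads directly gives the same
    argmin and avoids overflow.  The per-round average approximates an
    l_inf-minimal point of the Minkowski sum.
    """
    R = sorted({r for opts in choices for r in opts})
    load = dict.fromkeys(R, 0.0)
    for _ in range(T):
        for opts in choices:
            r = min(opts, key=lambda r: load[r])   # oracle call for block C
            load[r] += 1.0
    return {r: load[r] / T for r in R}

# Three customers sharing resources {0, 1}: the optimal max load is 1.5.
x = mwu_resource_sharing([[0, 1], [0, 1], [0, 1]])
print(max(x.values()))   # ~1.5
```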