Let $C$ be a two- or three-weight ternary code. Furthermore, we assume that the $C_\ell$ are $t$-designs for all $\ell$, as obtained from the Assmus--Mattson theorem. We show that $t \leq 5$. As a corollary, we provide a new characterization of the (extended) ternary Golay code.
We derive the first explicit bounds for the spectral gap of a random walk Metropolis algorithm on $\mathbb{R}^d$ for any value of the proposal variance, which when scaled appropriately recovers the correct $d^{-1}$ dependence on dimension for suitably regular invariant distributions. We also obtain explicit bounds on the ${\rm L}^2$-mixing time for a broad class of models. In obtaining these results, we refine the use of isoperimetric profile inequalities to obtain conductance profile bounds, which also enable the derivation of explicit bounds in a much broader class of models. We also obtain similar results for the preconditioned Crank--Nicolson Markov chain, obtaining dimension-independent bounds under suitable assumptions.
In this article, we continue the analysis started in \cite{CMT23} for the matrix code of quadratic relationships associated with a Goppa code. We provide new sparse and low-rank elements in the matrix code and categorize them according to their shape. Thanks to this description, we prove that the set of rank-2 matrices in the matrix codes associated with square-free binary Goppa codes, i.e. those used in Classic McEliece, is much larger than expected, at least in the case where the Goppa polynomial degree is 2. We build upon the algebraic determinantal modeling introduced in \cite{CMT23} to derive a structural attack on these instances. Our method breaks, in just a few seconds, several recent key-recovery challenges for the McEliece cryptosystem, consistently reducing their estimated security levels. We also provide a general method, valid for any Goppa polynomial degree, to transform a generic pair of support and multiplier into a pair of support and Goppa polynomial.
In this work, we propose two criteria for linear codes obtained from the Plotkin sum construction to be symplectic self-orthogonal (SO) or linear complementary dual (LCD). As specific constructions, several classes of symplectic SO codes with good parameters, including symplectic maximum distance separable (MDS) codes, are derived via $\ell$-intersection pairs of linear codes and generalized Reed--Muller codes. Symplectic LCD codes are also constructed from general linear codes. Furthermore, we obtain some binary symplectic LCD codes, which are equivalent to quaternary trace Hermitian additive complementary dual codes and outperform the best-known quaternary Hermitian LCD codes reported in the literature. In addition, we prove that the symplectic SO and LCD codes obtained in these ways are asymptotically good.
In the Maximum Independent Set of Objects problem, we are given an $n$-vertex planar graph $G$ and a family $\mathcal{D}$ of $N$ objects, where each object is a connected subgraph of $G$. The task is to find a subfamily $\mathcal{F} \subseteq \mathcal{D}$ of maximum cardinality that consists of pairwise disjoint objects. This problem is $\mathsf{NP}$-hard and is equivalent to the problem of finding the maximum number of pairwise disjoint polygons in a given family of polygons in the plane. As shown by Adamaszek et al. (J. ACM '19), the problem admits a \emph{quasi-polynomial time approximation scheme} (QPTAS): a $(1-\varepsilon)$-approximation algorithm whose running time is bounded by $2^{\mathrm{poly}(\log N,1/\varepsilon)} \cdot n^{\mathcal{O}(1)}$. Nevertheless, to the best of our knowledge, in the polynomial-time regime only the trivial $\mathcal{O}(N)$-approximation is known for the problem in full generality. In the restricted setting where the objects are pseudolines in the plane, Fox and Pach (SODA '11) gave an $N^{\varepsilon}$-approximation algorithm with running time $N^{2^{\tilde{\mathcal{O}}(1/\varepsilon)}}$, for any $\varepsilon>0$. In this work, we present an $\text{OPT}^{\varepsilon}$-approximation algorithm for the problem that runs in time $N^{\tilde{\mathcal{O}}(1/\varepsilon^2)} n^{\mathcal{O}(1)}$, for any $\varepsilon>0$, thus improving upon the result of Fox and Pach both in terms of generality and in terms of the running time. Our approach combines the methodology of Voronoi separators, introduced by Marx and Pilipczuk (TALG '22), with a new analysis of the approximation factor.
We give a simple and computationally efficient algorithm that, for any constant $\varepsilon>0$, obtains $\varepsilon T$-swap regret within only $T = \mathsf{polylog}(n)$ rounds; this is an exponential improvement compared to the super-linear number of rounds required by the state-of-the-art algorithm, and resolves the main open problem of [Blum and Mansour 2007]. Our algorithm has an exponential dependence on $\varepsilon$, but we prove a new, matching lower bound. Our algorithm for swap regret implies faster convergence to $\varepsilon$-Correlated Equilibrium ($\varepsilon$-CE) in several regimes: for normal-form two-player games with $n$ actions, it implies the first uncoupled dynamics that converges to the set of $\varepsilon$-CE in polylogarithmic rounds; a $\mathsf{polylog}(n)$-bit communication protocol for $\varepsilon$-CE in two-player games (resolving an open problem mentioned by [Babichenko-Rubinstein'2017, Goos-Rubinstein'2018, Ganor-CS'2018]); and an $\tilde{O}(n)$-query algorithm for $\varepsilon$-CE (resolving an open problem of [Babichenko'2020] and obtaining the first separation between $\varepsilon$-CE and $\varepsilon$-Nash equilibrium in the query complexity model). For extensive-form games, our algorithm implies a PTAS for \emph{normal-form correlated equilibria}, a solution concept often conjectured to be computationally intractable (e.g. [Stengel-Forges'08, Fujii'23]).
Based on a new Taylor-like formula, we derive an improved interpolation error estimate in $W^{1,p}$. We compare it with the classical error estimates based on the standard Taylor formula, as well as with the corresponding interpolation error estimate derived from the mean value theorem. We then assess the gain in accuracy obtained from this formula, which can lead to a significant reduction in finite element computation costs.
We construct a graph with $n$ vertices on which the smoothed runtime of the 3-FLIP algorithm for the 3-Opt Local Max-Cut problem can be as large as $2^{\Omega(\sqrt{n})}$. This provides the first example where a local search algorithm for the Max-Cut problem can fail to be efficient in the framework of smoothed analysis. We also give a new construction of graphs on which the runtime of the FLIP algorithm for the Local Max-Cut problem is $2^{\Omega(n)}$ for any pivot rule. These graphs are much smaller and structurally simpler than previous constructions.
We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.
A collection of graphs is \textit{nearly disjoint} if every pair of them intersects in at most one vertex. We prove that if $G_1, \dots, G_m$ are nearly disjoint graphs of maximum degree at most $D$, then the following holds. For every fixed $C$, if each vertex $v \in \bigcup_{i=1}^m V(G_i)$ is contained in at most $C$ of the graphs $G_1, \dots, G_m$, then the (list) chromatic number of $\bigcup_{i=1}^m G_i$ is at most $D + o(D)$. This result confirms a special case of a conjecture of Vu and generalizes Kahn's bound on the list chromatic index of linear uniform hypergraphs of bounded maximum degree. In fact, this result holds for the correspondence (or DP) chromatic number and thus implies a recent result of Molloy, and we derive this result from a more general list coloring result in the setting of `color degrees' that also implies a result of Reed and Sudakov.
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples can be difficult through standard methods. Inference can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. In this paper, we develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in this threshold choice and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation. We apply our method to the well-known, troublesome example of the River Nidd dataset.