We compute the weight distribution of the Reed-Muller code ${\mathcal R} (4,9)$ by combining the approach described in D. V. Sarwate's 1973 Ph.D. thesis with knowledge of the affine equivalence classification of Boolean functions. To solve this problem, posed, e.g., in the book of MacWilliams and Sloane [p. 447], we apply a refined approach based on the classification of Boolean quartic forms in $8$ variables due to Ph. Langevin and G. Leander, and on recent results on the classification of the quotient space ${\mathcal R} (4,7)/{\mathcal R} (2,7)$ due to V. Gillot and Ph. Langevin.
The hierarchical matrix ($\mathcal{H}^{2}$-matrix) formalism provides a way to reinterpret the Fast Multipole Method and related fast summation schemes in linear algebraic terms. The idea is to tessellate a matrix into blocks in such a way that each block is either small or of numerically low rank; this enables the storage of the matrix and its application to a vector in linear or close to linear complexity. A key motivation for the reformulation is to extend the range of dense matrices that can be represented. Additionally, $\mathcal{H}^{2}$-matrices in principle also extend the range of operations that can be executed to include matrix inversion and factorization. While such algorithms can be highly efficient for certain specialized formats (such as HBS/HSS matrices based on ``weak admissibility''), inversion algorithms for general $\mathcal{H}^{2}$-matrices tend to be based on nested recursions and recompressions, making them challenging to implement efficiently. An exception is the \textit{strong recursive skeletonization (SRS)} algorithm by Minden, Ho, Damle, and Ying, which involves a simpler algorithmic flow. However, SRS greatly increases the number of blocks of the matrix that need to be stored explicitly, leading to high memory requirements. This manuscript presents the \textit{randomized strong recursive skeletonization (RSRS)} algorithm, which is a reformulation of SRS that incorporates the randomized SVD (RSVD) to simultaneously compress and factorize an $\mathcal{H}^{2}$-matrix. RSRS is a ``black box'' algorithm that interacts with the matrix to be compressed only via its action on vectors; this extends the range of the SRS algorithm (which relied on the ``proxy source'' compression technique) to include dense matrices that arise in sparse direct solvers.
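The ``black box'' access model above can be illustrated with a minimal sketch of the basic randomized SVD, which touches the matrix only through its action on vectors; the function names and the toy test matrix are illustrative, not part of the RSRS algorithm itself.

```python
import numpy as np

def randomized_svd(matvec, rmatvec, n, k, p=5, seed=0):
    """Sketch of the basic randomized SVD for an n x n matrix A that is
    accessed only through matvec(X) = A @ X and rmatvec(X) = A.T @ X."""
    rng = np.random.default_rng(seed)
    # Sample the range of A with k + p Gaussian test vectors (oversampling).
    Y = matvec(rng.standard_normal((n, k + p)))
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sampled range
    B = rmatvec(Q).T                  # B = Q^T A, a small (k+p) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_small[:, :k], s[:k], Vt[:k]

# Usage on an explicitly rank-3 test matrix:
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 100))
U, s, Vt = randomized_svd(lambda X: A @ X, lambda X: A.T @ X, n=100, k=3)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

Since the test matrix is exactly rank 3, the reconstruction error is at the level of round-off, showing that a few random matrix-vector products suffice to recover a low-rank matrix.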
Two Latin squares of order $n$ are $r$-orthogonal if, when superimposed, there are exactly $r$ distinct ordered pairs. The spectrum of all values of $r$ for Latin squares of order $n$ is known. A Latin square $A$ of order $n$ is $r$-self-orthogonal if $A$ and its transpose are $r$-orthogonal. The spectrum of all values of $r$ is known for all orders $n\ne 14$. We develop randomized algorithms for computing pairs of $r$-orthogonal Latin squares of order $n$ and algorithms for computing $r$-self-orthogonal Latin squares of order $n$.
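The parameter $r$ in the definitions above can be computed directly by superimposing the two squares and counting distinct ordered pairs; a minimal sketch (the cyclic order-3 square is just a convenient example):

```python
def r_orthogonality(A, B):
    """Number of distinct ordered pairs when Latin squares A and B of the
    same order n are superimposed; A and B are r-orthogonal for this r."""
    n = len(A)
    return len({(A[i][j], B[i][j]) for i in range(n) for j in range(n)})

# The cyclic Latin square of order 3 and its transpose:
A = [[(i + j) % 3 for j in range(3)] for i in range(3)]
AT = [list(row) for row in zip(*A)]
r = r_orthogonality(A, AT)   # the r-self-orthogonality parameter of A
```

Here the cyclic square is symmetric, so superimposing it on its transpose yields only the $3$ diagonal pairs $(v,v)$, i.e. $r = 3$; a fully orthogonal pair of order $n$ would give $r = n^2$.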
We consider the Low Rank Approximation problem, where the input consists of a matrix $A \in \mathbb{R}^{n_R \times n_C}$ and an integer $k$, and the goal is to find a matrix $B$ of rank at most $k$ that minimizes $\| A - B \|_0$, which is the number of entries where $A$ and $B$ differ. For any constant $k$ and $\varepsilon > 0$, we present a polynomial time $(1 + \varepsilon)$-approximation algorithm for this problem, which significantly improves the previous best $poly(k)$-approximation. Our algorithm is obtained by viewing the problem as a Constraint Satisfaction Problem (CSP) where each row and column becomes a variable that can have a value from $\mathbb{R}^k$. In this view, we have a constraint between each row and column, which results in a {\em dense} CSP, a well-studied topic in approximation algorithms. While most previous algorithms focus on finite-size (or constant-size) domains and involve an exhaustive enumeration over the entire domain, we present a new framework that bypasses such an enumeration in $\mathbb{R}^k$. We also use tools from the rich literature of Low Rank Approximation in different objectives (e.g., $\ell_p$ with $p \in (0, \infty)$) or domains (e.g., finite fields/generalized Boolean). We believe that our techniques might be useful to study other real-valued CSPs and matrix optimization problems. On the hardness side, when $k$ is part of the input, we prove that Low Rank Approximation is NP-hard to approximate within a factor of $\Omega(\log n)$. This is the first superconstant NP-hardness of approximation for any $p \in [0, \infty]$ that does not rely on stronger conjectures (e.g., the Small Set Expansion Hypothesis).
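The $\ell_0$ objective described above can be evaluated directly: given a rank-$k$ candidate $B$, simply count the entries where $A$ and $B$ disagree. A minimal sketch with an illustrative $3 \times 3$ instance (not taken from the paper):

```python
import numpy as np

def l0_error(A, B):
    """The objective || A - B ||_0: the number of entries where A and B differ."""
    return int(np.count_nonzero(A != B))

A = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 5.]])
# A rank-1 candidate B = u v^T that reproduces the top-left 2x2 block exactly:
B = np.outer([1., 1., 0.], [1., 1., 0.])
err = l0_error(A, B)   # only the bottom-right entry differs
```

Here the best rank-1 matrix cannot also match the isolated entry $5$, so the candidate attains $\| A - B \|_0 = 1$, illustrating why the objective counts disagreements rather than measuring their magnitude.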
The $\boldsymbol{\beta}$-model for random graphs is commonly used for representing pairwise interactions in a network with degree heterogeneity. Going beyond pairwise interactions, Stasi et al. (2014) introduced the hypergraph $\boldsymbol{\beta}$-model for capturing degree heterogeneity in networks with higher-order (multi-way) interactions. In this paper we initiate the rigorous study of the hypergraph $\boldsymbol{\beta}$-model with multiple layers, which allows for hyperedges of different sizes across the layers. To begin with, we derive the rates of convergence of the maximum likelihood (ML) estimate and establish their minimax rate optimality. We also derive the limiting distribution of the ML estimate and construct asymptotically valid confidence intervals for the model parameters. Next, we consider the goodness-of-fit problem in the hypergraph $\boldsymbol{\beta}$-model. Specifically, we establish the asymptotic normality of the likelihood ratio (LR) test under the null hypothesis, derive its detection threshold, and also its limiting power at the threshold. Interestingly, the detection threshold of the LR test turns out to be minimax optimal, that is, all tests are asymptotically powerless below this threshold. The theoretical results are further validated in numerical experiments. In addition to developing the theoretical framework for estimation and inference for hypergraph $\boldsymbol{\beta}$-models, the above results fill a number of gaps in the graph $\boldsymbol{\beta}$-model literature, such as the minimax optimality of the ML estimates and the non-null properties of the LR test, which, to the best of our knowledge, have not been studied before.
In this paper, we examine a finite element approximation of the steady $p(\cdot)$-Navier-Stokes equations, where the power-law index $p(\cdot)$ depends on the spatial variable, and prove orders of convergence under natural fractional regularity assumptions on the velocity vector field and the kinematic pressure. Compared to previous results, we treat the convective term and employ a more practicable discretization of the power-law index $p(\cdot)$. Numerical experiments confirm the quasi-optimality of the a priori error estimates (for the velocity) with respect to fractional regularity assumptions on the velocity vector field and the kinematic pressure.
We present polynomial-time SDP-based algorithms for the following problem: For fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. For $k = 2, \ell = 3$, our algorithm achieves an approximation ratio $\alpha = 1$, which is the best possible. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming the Unique Games Conjecture, it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$.
Given a graph $G=(V,E)$ on $n$ vertices and an assignment of colours to its edges, a set of edges $S \subseteq E$ is said to be rainbow if edges from $S$ have pairwise different colours assigned to them. In this paper, we investigate rainbow spanning trees in randomly coloured random $G_{k-out}$ graphs.
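The rainbow property defined above is easy to check directly: an edge set is rainbow exactly when its multiset of colours has no repeats. A minimal sketch on a toy colouring (the edges and colours are illustrative):

```python
def is_rainbow(colouring, S):
    """True iff the edges in S receive pairwise different colours.
    `colouring` maps each edge (as a vertex pair) to its colour."""
    colours = [colouring[e] for e in S]
    return len(colours) == len(set(colours))

# A coloured triangle on vertices {0, 1, 2}:
colouring = {(0, 1): 'red', (1, 2): 'blue', (0, 2): 'red'}
tree_a = [(0, 1), (1, 2)]   # a spanning tree with two distinct colours
tree_b = [(0, 1), (0, 2)]   # a spanning tree whose edges are both red
```

Here `tree_a` is a rainbow spanning tree while `tree_b` is not, since both of its edges are coloured red.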
We introduce, motivate and study $\varepsilon$-almost collision-flat (ACFU) universal hash functions $f:\mathcal X\times\mathcal S\to\mathcal A$. Their main property is that the number of collisions in any given value is bounded. Each $\varepsilon$-ACFU hash function is an $\varepsilon$-almost universal (AU) hash function, and every $\varepsilon$-almost strongly universal (ASU) hash function is an $\varepsilon$-ACFU hash function. We study how the size of the seed set $\mathcal S$ depends on $\varepsilon,|\mathcal X|$ and $|\mathcal A|$. Depending on how these parameters are interrelated, seed-minimizing ACFU hash functions are equivalent to mosaics of balanced incomplete block designs (BIBDs) or to duals of mosaics of quasi-symmetric block designs; in a third case, mosaics of transversal designs and nets yield seed-optimal ACFU hash functions, but a full characterization is missing. By either extending $\mathcal S$ or $\mathcal X$, it is possible to obtain an $\varepsilon$-ACFU hash function from an $\varepsilon$-AU hash function or an $\varepsilon$-ASU hash function, generalizing the construction of mosaics of designs from a given resolvable design (Gnilke, Greferath, Pav{\v c}evi\'c, Des. Codes Cryptogr. 86(1)). The concatenation of an ASU and an ACFU hash function again yields an ACFU hash function. Finally, we motivate ACFU hash functions by their applicability in privacy amplification.
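The $\varepsilon$-almost universal property mentioned above requires that any two distinct inputs collide on at most an $\varepsilon$-fraction of the seeds, which can be checked by brute force on small families. A minimal sketch using the multiplicative family $f(x,s) = sx \bmod p$ over a prime field as a toy example (this family is not from the paper; it happens to be collision-free, i.e. $\varepsilon = 0$):

```python
from itertools import combinations

def max_collision_prob(f, X, S):
    """Empirical epsilon for which f is epsilon-almost universal: the largest
    fraction of seeds on which two distinct inputs collide."""
    return max(
        sum(f(x, s) == f(y, s) for s in S) / len(S)
        for x, y in combinations(X, 2)
    )

# Toy family f(x, s) = (s * x) mod p with nonzero inputs and seeds in F_p:
p = 7
X = range(1, p)   # inputs
S = range(1, p)   # seeds
eps = max_collision_prob(lambda x, s: (s * x) % p, X, S)
```

For $x \ne y$ and $s \not\equiv 0 \pmod p$, $sx \equiv sy \pmod p$ is impossible, so the brute-force check reports $\varepsilon = 0$ for this family.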
In the Maximum Independent Set of Objects problem, we are given an $n$-vertex planar graph $G$ and a family $\mathcal{D}$ of $N$ objects, where each object is a connected subgraph of $G$. The task is to find a subfamily $\mathcal{F} \subseteq \mathcal{D}$ of maximum cardinality that consists of pairwise disjoint objects. This problem is $\mathsf{NP}$-hard and is equivalent to the problem of finding the maximum number of pairwise disjoint polygons in a given family of polygons in the plane. As shown by Adamaszek et al. (J. ACM '19), the problem admits a \emph{quasi-polynomial time approximation scheme} (QPTAS): a $(1-\varepsilon)$-approximation algorithm whose running time is bounded by $2^{\mathrm{poly}(\log(N),1/\varepsilon)} \cdot n^{\mathcal{O}(1)}$. Nevertheless, to the best of our knowledge, in the polynomial-time regime only the trivial $\mathcal{O}(N)$-approximation is known for the problem in full generality. In the restricted setting where the objects are pseudolines in the plane, Fox and Pach (SODA '11) gave an $N^{\varepsilon}$-approximation algorithm with running time $N^{2^{\tilde{\mathcal{O}}(1/\varepsilon)}}$, for any $\varepsilon>0$. In this work, we present an $\text{OPT}^{\varepsilon}$-approximation algorithm for the problem that runs in time $N^{\tilde{\mathcal{O}}(1/\varepsilon^2)} n^{\mathcal{O}(1)}$, for any $\varepsilon>0$, thus improving upon the result of Fox and Pach both in terms of generality and in terms of the running time. Our approach combines the methodology of Voronoi separators, introduced by Marx and Pilipczuk (TALG '22), with a new analysis of the approximation factor.
Based on a new Taylor-like formula, we derive an improved interpolation error estimate in $W^{1,p}$. We compare it with the classical error estimates based on the standard Taylor formula, and with the corresponding interpolation error estimate derived from the mean value theorem. We then assess the gain in accuracy this formula provides, which leads to a significant reduction in finite element computation costs.