For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exist indices $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This generalizes the well-studied problem of monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad, and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutations $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $n^{o(1)}$ queries. This significantly improves upon the previous best upper bound of $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
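To make the definition of a $\pi$-appearance concrete, here is a minimal brute-force checker (illustrative only; it runs in $O(n^k)$ time and is unrelated to the sublinear tester of the paper). It enumerates all $k$-tuples of indices and tests the order-pattern condition directly.

```python
from itertools import combinations

def contains_pattern(f, pi):
    """Check whether the sequence f contains a pi-appearance.

    f  : list of real values, f[i] plays the role of f(i+1)
    pi : the permutation as a list, pi[s] plays the role of pi(s+1)

    Brute force over all k-tuples of indices; O(n^k), illustrative only.
    """
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        # The order pattern of f on idx matches pi iff, for all s != t,
        # f[idx[s]] < f[idx[t]] exactly when pi[s] < pi[t].
        if all((f[idx[s]] < f[idx[t]]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k) if s != t):
            return True
    return False

print(contains_pattern([3, 1, 4], [2, 1, 3]))  # True: f(2) < f(1) < f(3)
print(contains_pattern([1, 2, 3], [2, 1, 3]))  # False: monotone, hence (2,1,3)-free
```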
On the reference tetrahedron $K$, we construct, for each $k \in \mathbb{N}_0$, a right inverse for the trace operator $u \mapsto (u, \partial_{n} u, \ldots, \partial_{n}^k u)|_{\partial K}$. The operator is stable as a mapping from the trace space of $W^{s, p}(K)$ to $W^{s, p}(K)$ for all $p \in (1, \infty)$ and $s \in (k+1/p, \infty)$. Moreover, if the data is the trace of a polynomial of degree $N \in \mathbb{N}_0$, then the resulting lifting is a polynomial of degree $N$. One consequence of the analysis is a novel characterization for the range of the trace operator.
We consider the multivariate max-linear regression problem where the model parameters $\boldsymbol{\beta}_{1},\dotsc,\boldsymbol{\beta}_{k}\in\mathbb{R}^{p}$ need to be estimated from $n$ independent samples of the (noisy) observations $y = \max_{1\leq j \leq k} \boldsymbol{\beta}_{j}^{\mathsf{T}} \boldsymbol{x} + \mathrm{noise}$. The max-linear model vastly generalizes the conventional linear model, and it can approximate any convex function to arbitrary accuracy when the number of linear models $k$ is large enough. However, the inherent nonlinearity of the max-linear model renders the estimation of the regression parameters computationally challenging. In particular, no estimator based on convex programming is known in the literature. We formulate and analyze a scalable convex program given by anchored regression (AR) as the estimator for the max-linear regression problem. Under the standard Gaussian observation setting, we present a non-asymptotic performance guarantee showing that the convex program recovers the parameters with high probability. When the $k$ linear components are equally likely to achieve the maximum, our result shows that the number of noise-free observations sufficient for exact recovery scales as $k^{4}p$ up to a logarithmic factor. This sample complexity coincides with that of alternating minimization (Ghosh et al., 2021). Moreover, the same sample complexity applies when the observations are corrupted with arbitrary deterministic noise. We provide empirical results showing that our method performs as our theory predicts and is competitive with the alternating minimization algorithm, particularly in the presence of multiplicative Bernoulli noise. Furthermore, we show empirically that a recursive application of AR can significantly improve the estimation accuracy.
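The observation model can be summarized by the following NumPy sketch, which draws samples from the max-linear model under the standard Gaussian design; the function name and parameter choices are illustrative, and the anchored-regression estimator itself is not reproduced here.

```python
import numpy as np

def sample_max_linear(betas, n, noise_std=0.0, rng=None):
    """Draw n observations y_i = max_j <beta_j, x_i> + noise_i with standard
    Gaussian covariates x_i, matching the model described above.

    betas : (k, p) array holding the k linear components beta_1, ..., beta_k.
    Returns X of shape (n, p) and y of shape (n,).
    """
    rng = np.random.default_rng(rng)
    k, p = betas.shape
    X = rng.standard_normal((n, p))          # Gaussian design
    y = (X @ betas.T).max(axis=1)            # max over the k linear models
    y += noise_std * rng.standard_normal(n)  # additive noise (optional)
    return X, y

# Example: k = 3 components in dimension p = 5.
betas = np.random.default_rng(0).standard_normal((3, 5))
X, y = sample_max_linear(betas, n=1000, noise_std=0.1, rng=1)
```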
Given two $n$-element structures, $\mathcal{A}$ and $\mathcal{B}$, which can be distinguished by a sentence of $k$-variable first-order logic ($\mathcal{L}^k$), what is the minimum $f(n)$ such that there is guaranteed to be a sentence $\phi \in \mathcal{L}^k$ with at most $f(n)$ quantifiers such that $\mathcal{A} \models \phi$ but $\mathcal{B} \not \models \phi$? We present various results related to this question, obtained using the recently introduced QVT games. In particular, we show that when the number of variables is limited, there can be an exponential gap between the quantifier depth and the quantifier number needed to separate two structures. Through the lens of this question, we highlight some difficulties that arise in analysing the QVT game and some techniques that can help to overcome them. As a consequence, we show that $\mathcal{L}^{k+1}$ is exponentially more succinct than $\mathcal{L}^{k}$. We also show, in the setting of the existential-positive fragment, how to lift quantifier depth lower bounds to quantifier number lower bounds. This leads to almost tight bounds.
Given a graph $G$ with a vertex threshold function $\tau$, consider a dynamic process in which any inactive vertex $v$ becomes activated whenever at least $\tau(v)$ of its neighbors are activated. A vertex set $S$ is called a target set if all vertices of $G$ would be activated when initially activating the vertices of $S$. In the Minmax Target Set Reconfiguration problem, given a graph $G$ and two target sets $X$ and $Y$ of $G$, we wish to transform $X$ into $Y$ by repeatedly adding or removing a single vertex, using only target sets of $G$, so as to minimize the maximum size of any intermediate target set. We prove that it is NP-hard to approximate Minmax Target Set Reconfiguration within a factor of $2-o\left(\frac{1}{\operatorname{polylog} n}\right)$, where $n$ is the number of vertices. Our result establishes a tight lower bound on the approximability of Minmax Target Set Reconfiguration, which admits a $2$-factor approximation algorithm. The proof is based on a gap-preserving reduction from Target Set Selection to Minmax Target Set Reconfiguration, where NP-hardness of approximation for the former problem was proven by Chen (SIAM J. Discrete Math., 2009) and Charikar, Naamad, and Wirth (APPROX/RANDOM 2016).
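The activation dynamics underlying target sets can be simulated directly; the sketch below (using an illustrative adjacency-dictionary representation, unrelated to the reduction in the paper) checks whether a given set $S$ is a target set.

```python
def is_target_set(adj, tau, S):
    """Simulate the activation process: starting from S, an inactive vertex v
    becomes active once at least tau[v] of its neighbors are active.
    Returns True iff every vertex eventually activates, i.e. S is a target set.

    adj : dict mapping each vertex to an iterable of its neighbors.
    tau : dict mapping each vertex to its activation threshold.
    """
    active = set(S)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and sum(u in active for u in adj[v]) >= tau[v]:
                active.add(v)
                changed = True
    return len(active) == len(adj)

# Path a-b-c-d with all thresholds equal to 1: {"a"} is a target set.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
tau = {v: 1 for v in adj}
print(is_target_set(adj, tau, {"a"}))  # True
```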
An unbiased $m$-sparsification of a vector $p\in \mathbb{R}^n$ is a random vector $Q\in \mathbb{R}^n$ with mean $p$ that has at most $m<n$ nonzero coordinates. Unbiased sparsification compresses the original vector without introducing bias; it arises in various contexts, such as in federated learning and sampling sparse probability distributions. Ideally, unbiased sparsification should also minimize the expected value of a divergence function $\mathsf{Div}(Q,p)$ that measures how far away $Q$ is from the original $p$. If $Q$ is optimal in this sense, then we call it efficient. Our main results describe efficient unbiased sparsifications for divergences that are either permutation-invariant or additively separable. Surprisingly, the characterization for permutation-invariant divergences is robust to the choice of divergence function, in the sense that our class of optimal $Q$ for squared Euclidean distance coincides with our class of optimal $Q$ for Kullback-Leibler divergence, or indeed any of a wide variety of divergences.
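As a concrete example of an unbiased $m$-sparsification (the standard "rand-$m$" sparsifier, not necessarily efficient in the paper's sense), one can pick a uniformly random $m$-subset of coordinates and rescale the kept entries by $n/m$, so that each coordinate is correct in expectation.

```python
import numpy as np

def uniform_unbiased_sparsify(p, m, rng=None):
    """Unbiased m-sparsification via uniform coordinate sampling: choose a
    uniformly random m-subset S and output Q_i = (n/m) * p_i for i in S,
    Q_i = 0 otherwise. Then E[Q] = p and Q has at most m nonzeros.
    (Illustrative 'rand-m' baseline; the efficient sparsifiers characterized
    in the paper may differ.)
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(p, dtype=float)
    n = len(p)
    S = rng.choice(n, size=m, replace=False)
    Q = np.zeros(n)
    Q[S] = (n / m) * p[S]
    return Q

p = np.array([0.5, 0.3, 0.1, 0.1])
avg = np.mean([uniform_unbiased_sparsify(p, m=2, rng=i) for i in range(5000)], axis=0)
print(avg)  # approximately p, confirming unbiasedness
```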
We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs. CS-CNNs process multivector fields on pseudo-Euclidean spaces $\mathbb{R}^{p,q}$. They cover, for instance, $\mathrm{E}(3)$-equivariance on $\mathbb{R}^3$ and Poincar\'e-equivariance on Minkowski spacetime $\mathbb{R}^{1,3}$. Our approach is based on an implicit parametrization of $\mathrm{O}(p,q)$-steerable kernels via Clifford group equivariant neural networks. We significantly and consistently outperform baseline methods on fluid dynamics as well as relativistic electrodynamics forecasting tasks.
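For context, a kernel $K$ of a steerable convolution between feature fields with input and output representations $\rho_{\mathrm{in}}$ and $\rho_{\mathrm{out}}$ satisfies the generic steerability constraint below; this is the standard condition from the steerable-CNN literature, stated here only as background, while the paper's contribution is an implicit Clifford-algebraic parametrization of its solutions for $\mathrm{O}(p,q)$.

$$
K(g\,x) \;=\; \rho_{\mathrm{out}}(g)\, K(x)\, \rho_{\mathrm{in}}(g)^{-1}
\qquad \text{for all } g \in \mathrm{O}(p,q),\ x \in \mathbb{R}^{p,q}.
$$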
Sparse linear regression (SLR) is a well-studied problem in statistics where one is given a design matrix $X\in\mathbb{R}^{m\times n}$ and a response vector $y=X\theta^*+w$ for a $k$-sparse vector $\theta^*$ (that is, $\|\theta^*\|_0\leq k$) and small, arbitrary noise $w$, and the goal is to find a $k$-sparse $\widehat{\theta} \in \mathbb{R}^n$ that minimizes the mean squared prediction error $\frac{1}{m}\|X\widehat{\theta}-X\theta^*\|^2_2$. While $\ell_1$-relaxation methods such as basis pursuit, Lasso, and the Dantzig selector solve SLR when the design matrix is well-conditioned, no general algorithm is known, nor is there any formal evidence of hardness in an average-case setting with respect to all efficient algorithms. We give evidence of average-case hardness of SLR w.r.t. all efficient algorithms assuming the worst-case hardness of lattice problems. Specifically, we give an instance-by-instance reduction from a variant of the bounded distance decoding (BDD) problem on lattices to SLR, where the condition number of the lattice basis that defines the BDD instance is directly related to the restricted eigenvalue condition of the design matrix, which characterizes some of the classical statistical-computational gaps for sparse linear regression. Also, by appealing to worst-case to average-case reductions from the world of lattices, this shows hardness for a distribution of SLR instances; while the design matrices are ill-conditioned, the resulting SLR instances are in the identifiable regime. Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
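For illustration, the following sketch builds a small well-conditioned SLR instance and applies scikit-learn's Lasso as the $\ell_1$-relaxation mentioned above; all parameter choices are ad hoc and unrelated to the hardness results.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n, k = 200, 400, 5                        # samples, dimension, sparsity

# Well-conditioned (isotropic Gaussian) design and a k-sparse theta*.
X = rng.standard_normal((m, n)) / np.sqrt(m)
theta_star = np.zeros(n)
theta_star[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
w = 1e-3 * rng.standard_normal(m)            # small arbitrary noise
y = X @ theta_star + w

# l1-relaxation (Lasso) as a stand-in for a generic SLR solver.
theta_hat = Lasso(alpha=1e-3, max_iter=10000).fit(X, y).coef_

# Mean squared prediction error (1/m) * ||X theta_hat - X theta*||_2^2.
pred_err = np.mean((X @ theta_hat - X @ theta_star) ** 2)
print(pred_err)
```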
We investigate pseudo-polynomial time algorithms for Subset Sum. Given a multi-set $X$ of $n$ positive integers and a target $t$, Subset Sum asks whether some subset of $X$ sums to $t$. Bringmann proposed an $\tilde{O}(n + t)$-time algorithm [Bringmann SODA'17], and an open question has naturally arisen: can Subset Sum be solved in $O(n + w)$ time, where $w$ is the maximum integer in $X$? We make progress towards resolving this open question by proposing an $\tilde{O}(n + \sqrt{wt})$-time algorithm.
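For reference, the classical $O(nt)$ dynamic program for Subset Sum (written here in bitset form) is sketched below; the $\tilde{O}(n + t)$ and $\tilde{O}(n + \sqrt{wt})$ algorithms discussed above are substantially more involved.

```python
def subset_sum(X, t):
    """Classical O(n*t) dynamic program for Subset Sum, using a bitset:
    bit i of `reachable` is set iff some subset of the processed prefix
    of X sums to i. Shown only as a baseline.
    """
    reachable = 1                      # only the empty sum 0 is reachable initially
    mask = (1 << (t + 1)) - 1          # keep sums up to t
    for x in X:
        reachable |= (reachable << x) & mask   # either skip x or add it
    return bool((reachable >> t) & 1)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5 = 9)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```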
We show that there exist infinitely many $n \in \mathbb{Z}^+$ such that for any constant $\epsilon > 0$, any deterministic algorithm to solve $k$-\textsf{SAT} for $k \geq 3$ must perform at least $(2^{k-\frac{3}{2}-\epsilon})^{\frac{n}{k+1}}$ operations, where $n$ is the number of variables in the $k$\textsf{-SAT} instance.
In the Maximum Independent Set of Hyperrectangles problem, we are given a set of $n$ (possibly overlapping) $d$-dimensional axis-aligned hyperrectangles, and the goal is to find a subset of non-overlapping hyperrectangles of maximum cardinality. For $d=1$, this corresponds to the classical Interval Scheduling problem, where a simple greedy algorithm returns an optimal solution. In the offline setting, polynomial-time $(\log n)^{O(d)}$-approximation algorithms are known for $d$-dimensional hyperrectangles. However, the problem becomes notably challenging in the online setting, where the input objects (hyperrectangles) appear one by one in an adversarial order, and on the arrival of an object, the algorithm must make an immediate and irrevocable decision whether or not to select the object while maintaining feasibility. Even for Interval Scheduling, an $\Omega(n)$ lower bound on the competitive ratio is known. To circumvent these negative results, in this work we study the online maximum independent set of axis-aligned hyperrectangles in the random-order arrival model, where the adversary specifies the set of input objects, which then arrive in a uniformly random order. Starting from the prototypical secretary problem, the random-order model has received significant attention as a way to study algorithms beyond worst-case competitive analysis. Surprisingly, we show that the problem in the random-order model almost matches the best-known offline approximation guarantees, up to polylogarithmic factors. In particular, we give a simple $(\log n)^{O(d)}$-competitive algorithm for $d$-dimensional hyperrectangles in this model, which runs in $\tilde{O_d}(n)$ time. Our approach also yields $(\log n)^{O(d)}$-competitive algorithms in the random-order model for more general objects such as $d$-dimensional fat objects and ellipsoids.
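For the $d=1$ case mentioned above, the simple optimal offline greedy (earliest finish time) looks as follows; this is the classical Interval Scheduling algorithm, not the random-order online algorithm of the paper.

```python
def max_disjoint_intervals(intervals):
    """Earliest-finish-time greedy for Interval Scheduling (d = 1): repeatedly
    take the interval that ends first among those compatible with the current
    selection. Intervals are treated as half-open [start, end), so touching
    endpoints do not count as overlapping. Returns a maximum-cardinality set
    of pairwise disjoint intervals.
    """
    selected = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:          # compatible with everything chosen so far
            selected.append((start, end))
            last_end = end
    return selected

print(max_disjoint_intervals([(1, 3), (3, 6), (2, 5), (5, 8), (7, 9)]))
# [(1, 3), (3, 6), (7, 9)]
```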