We consider the problem of extending a function $f_P$ defined on a subset $P$ of an arbitrary set $X$ to all of $X$ strictly monotonically with respect to a preorder $\succcurlyeq$ defined on $X$, without imposing continuity constraints. We show that whenever $\succcurlyeq$ admits a utility representation, $f_P$ is extendable if and only if it is gap-safe increasing. A class of extensions involving an arbitrary utility representation of $\succcurlyeq$ is proposed and investigated. Connections to related topological results are discussed. Both the extendability condition and the form of the extension simplify when $P$ is a Pareto set.
We consider the problem of learning a graph that models the statistical relations among the $d$ variables of a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$. Standard approaches amount to searching for a precision matrix $\Theta$, representative of a Gaussian graphical model, that adequately explains the data. However, most maximum likelihood-based estimators require storing the $d^{2}$ values of the empirical covariance matrix, which can become prohibitive in high-dimensional settings. In this work, we adopt a compressive viewpoint and aim to estimate a sparse $\Theta$ from a sketch of the data, i.e., a low-dimensional vector of size $m \ll d^{2}$ carefully designed from $X$ using nonlinear random features. Under certain assumptions on the spectrum of $\Theta$ (or its condition number), we show that $\Theta$ can be estimated from a sketch of size $m=\Omega((d+2k)\log(d))$, where $k$ is the maximal number of edges of the underlying graph. These information-theoretic guarantees are inspired by compressed sensing theory and involve restricted isometry properties and instance-optimal decoders. We investigate the possibility of achieving practical recovery with an iterative algorithm based on the graphical lasso, viewed as a specific denoiser, and we compare it with the standard graphical lasso on synthetic datasets, demonstrating favorable performance even when the dataset is compressed.
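To make the pipeline concrete, the Python sketch below illustrates the two ingredients separately: a nonlinear random-feature sketch of size $m \ll d^2$, and a graphical-lasso step used as a denoiser. The cosine feature map, the synthetic tridiagonal model, and all sizes are our own illustrative assumptions; the paper's decoder recovers $\Theta$ from the sketch alone, which this toy code does not attempt.

```python
# Illustrative only: a random-feature sketch plus one graphical-lasso
# "denoising" step. Feature map, sizes, and model are assumptions.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
n, d, m = 2000, 20, 400                        # target regime: m << d**2

# Synthetic data from a sparse (tridiagonal) Gaussian graphical model.
Theta = np.eye(d) + 0.3 * (np.diag(np.ones(d - 1), 1) + np.diag(np.ones(d - 1), -1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Theta), size=n)

# Sketch: m nonlinear random features averaged over samples, e.g.
# random-Fourier-style moments z_j = mean_i cos(w_j . x_i).
W = rng.normal(size=(m, d))
sketch = np.cos(X @ W.T).mean(axis=0)
print(sketch.shape)                            # (m,) -- all that is stored

# Graphical lasso as a denoiser, here applied to the empirical covariance
# purely for illustration (the real decoder never forms this d x d matrix).
cov_hat, Theta_hat = graphical_lasso(X.T @ X / n, alpha=0.1)
print(np.count_nonzero(np.abs(Theta_hat) > 1e-3), "nonzero precision entries")
```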
We provide a simple online $\Delta(1+o(1))$-edge-coloring algorithm for bipartite graphs of maximum degree $\Delta=\omega(\log n)$ under adversarial vertex arrivals on one side of the graph. Our algorithm slightly improves upon the result of Cohen, Peng, and Wajc (FOCS'19), which was the first, and currently the only, algorithm to obtain an asymptotically optimal $\Delta(1+o(1))$ guarantee for an adversarial arrival model. More importantly, our algorithm provides a new, simpler approach to tackling online edge coloring.
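For intuition about the arrival model only (this is not the paper's algorithm), consider the trivial greedy baseline below, sketched in Python under our own minimal encoding of the input: online vertices arrive one by one with all their incident edges, and each edge immediately receives the smallest color unused at both endpoints, for a total of at most $2\Delta-1$ colors.

```python
# Greedy first-fit baseline for one-sided online edge coloring: uses at most
# 2*Delta - 1 colors, versus the Delta(1 + o(1)) guarantee of the paper.
from collections import defaultdict

def greedy_online_edge_coloring(arrivals):
    """arrivals: list of (online_vertex, list of offline neighbors)."""
    used = defaultdict(set)               # vertex -> colors already incident
    coloring = {}
    for v, neighbors in arrivals:         # online vertices arrive one by one
        for u in neighbors:               # all edges of v are colored now
            c = 0
            while c in used[v] or c in used[u]:
                c += 1                    # smallest color free at both ends
            coloring[(v, u)] = c
            used[v].add(c)
            used[u].add(c)
    return coloring

print(greedy_online_edge_coloring([("a", [1, 2]), ("b", [1, 2]), ("c", [2, 3])]))
```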
Suppose we observe a Poisson process in real time, whose intensity may take one of two possible values, $\lambda_0$ or $\lambda_1$. Suppose further that the prior probability of the true intensity is not given. We solve a minimax version of the Bayesian problem of sequentially testing two simple hypotheses, minimizing a linear combination of the probability of wrong detection and the expected waiting time under the worst case over all possible prior distributions. We derive an equivalent characterization of the least favorable distributions and obtain a sufficient condition for their existence.
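One plausible formalization, with cost weights $a$, $b$, $c$ that are our own illustrative notation: writing $\pi = \mathsf{P}(\lambda = \lambda_1)$ for the prior, $\tau$ for the stopping time, and $d \in \{0,1\}$ for the terminal decision, the Bayes risk is \[ R_\pi(\tau, d) = a\,\pi\,\mathsf{P}_1(d = 0) + b\,(1-\pi)\,\mathsf{P}_0(d = 1) + c\,\mathsf{E}_\pi[\tau], \] and the minimax problem is to attain $\inf_{(\tau, d)} \sup_{\pi \in [0,1]} R_\pi(\tau, d)$, a least favorable prior being a $\pi$ that achieves the inner supremum.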
Consider the triplet $(E, \mathcal{P}, \pi)$, where $E$ is a finite ground set, $\mathcal{P} \subseteq 2^E$ is a collection of subsets of $E$, and $\pi : \mathcal{P} \rightarrow [0,1]$ is a requirement function. Given a vector of marginals $\rho \in [0, 1]^E$, our goal is to find a distribution for a random subset $S \subseteq E$ such that $\operatorname{Pr}[e \in S] = \rho_e$ for all $e \in E$ and $\operatorname{Pr}[P \cap S \neq \emptyset] \geq \pi_P$ for all $P \in \mathcal{P}$, or to determine that no such distribution exists. Generalizing results of Dahan, Amin, and Jaillet, we devise a generic decomposition algorithm that solves the above problem when provided with a suitable sequence of admissible support candidates (ASCs). We show how to construct such ASCs for numerous settings, including supermodular requirements, Hoffman-Schwartz-type lattice polyhedra, and abstract networks where $\pi$ fulfills a conservation law. The resulting algorithm can be carried out efficiently when $\mathcal{P}$ and $\pi$ can be accessed via appropriate oracles. For any system allowing the construction of ASCs, our results imply a simple polyhedral description of the set of marginal vectors for which the decomposition problem is feasible. Finally, we characterize balanced hypergraphs as exactly the systems $(E, \mathcal{P})$ that allow the perfect decomposition of any marginal vector $\rho \in [0,1]^E$, i.e., where we can always find a distribution attaining the highest possible probability $\operatorname{Pr}[P \cap S \neq \emptyset] = \min \{ \sum_{e \in P} \rho_e, 1\}$ for all $P \in \mathcal{P}$.
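On tiny ground sets, the decomposition problem can be checked directly as a linear program over all $2^{|E|}$ subsets; avoiding exactly this exponential blow-up is the point of the ASC-based algorithm. A brute-force Python sketch, with purely illustrative inputs:

```python
# Brute-force feasibility check: one probability variable per subset S of E,
# matching the marginals rho and the hitting requirements pi.
from itertools import chain, combinations
from scipy.optimize import linprog

def feasible(E, P_list, pi, rho):
    subsets = list(chain.from_iterable(combinations(E, r) for r in range(len(E) + 1)))
    A_eq = [[1.0] * len(subsets)]                        # probabilities sum to 1
    b_eq = [1.0]
    for e in E:                                          # Pr[e in S] = rho_e
        A_eq.append([1.0 if e in S else 0.0 for S in subsets])
        b_eq.append(rho[e])
    # Pr[P cap S != empty] >= pi_P, flipped to <= form for linprog.
    A_ub = [[-1.0 if set(P) & set(S) else 0.0 for S in subsets] for P in P_list]
    b_ub = [-pi[P] for P in P_list]
    res = linprog(c=[0.0] * len(subsets), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
    return res.success

E = ("x", "y", "z")
print(feasible(E, [("x", "y")], {("x", "y"): 0.8}, {"x": 0.5, "y": 0.5, "z": 0.2}))
```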
We consider the problem of maintaining a $(1+\epsilon)\Delta$-edge coloring in a dynamic graph $G$ with $n$ nodes and maximum degree at most $\Delta$. The state-of-the-art update time is $O_\epsilon(\text{polylog}(n))$, by Duan, He and Zhang [SODA'19] and by Christiansen [STOC'23], and more precisely $O(\log^7 n/\epsilon^2)$, provided $\Delta = \Omega(\log^2 n / \epsilon^2)$. The following natural question arises: What is the best possible update time of an algorithm for this task? More specifically, \textbf{can we bring it all the way down to some constant} (for constant $\epsilon$)? This question coincides with the \emph{static} time barrier for the problem: even for $(2\Delta-1)$-coloring, only a naive $O(m \log \Delta)$-time algorithm is known. We answer this fundamental question in the affirmative by presenting a dynamic $(1+\epsilon)\Delta$-edge coloring algorithm with $O(\log^4 (1/\epsilon)/\epsilon^9)$ update time, provided $\Delta = \Omega_\epsilon(\text{polylog}(n))$. As a corollary, we also get the first linear-time (for constant $\epsilon$) \emph{static} algorithm for $(1+\epsilon)\Delta$-edge coloring; in particular, we achieve a running time of $O(m \log (1/\epsilon)/\epsilon^2)$. We obtain our results by carefully combining a variant of the \textsc{Nibble} algorithm of Bhattacharya, Grandoni and Wajc [SODA'21] with the subsampling technique of Kulkarni, Liu, Sah, Sawhney and Tarnawski [STOC'22].
We provide an algorithm which, with high probability, maintains a $(1-\epsilon)$-approximate maximum flow on an undirected graph undergoing $m$ edge additions in amortized $m^{o(1)} \epsilon^{-3}$ time per update. To obtain this result, we provide a more general algorithm that solves what we call the incremental, thresholded $p$-norm flow problem, which asks to determine the first edge insertion in an undirected graph that causes the minimum $\ell_p$-norm flow to decrease below a given threshold in value. Since we solve this thresholded problem, our data structure succeeds against an adaptive adversary that can only see the data structure's output. Furthermore, since our algorithm covers the case $p = 2$, we obtain improved algorithms for dynamically maintaining the effective resistance between a pair of vertices in an undirected graph undergoing edge insertions. Our algorithm builds upon previous dynamic algorithms for approximately solving the minimum-ratio cycle problem that underlie recent advances on the maximum flow problem [Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva, FOCS '22], as well as recent dynamic maximum flow algorithms [v.d.Brand-Liu-Sidford, STOC '23]. Instead of using interior point methods, which were a key component of these recent advances, our algorithm uses an optimization method based on $\ell_p$-norm iterative refinement and the multiplicative weight update method. This ensures a monotonicity property in the minimum-ratio cycle subproblems that allows us to apply known data structures and bypass issues arising from adaptive queries.
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations, ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are no constructive separations for detecting high $t$-time Kolmogorov complexity (a task known not to be in $P$) from any complexity class, unconditionally.
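One common way to make the refuter notion precise (quantifier details differ between papers, so this is a paraphrase rather than the exact definition used here): a separation $L \notin C$ is constructive if for every $C$-algorithm $A$ there exists an efficient refuter $R$ such that \[ R(1^n) \text{ outputs some } x \in \{0,1\}^{n} \text{ with } A(x) \neq L(x) \text{ for infinitely many } n. \]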
We provide the first \emph{constant-round} construction of post-quantum non-malleable commitments under the minimal assumption that \emph{post-quantum one-way functions} exist. We achieve the standard notion of non-malleability with respect to commitments. Prior constructions required $\Omega(\log^*\lambda)$ rounds under the same assumption. We achieve our results through a new technique for constant-round non-malleable commitments which is easier to use in the post-quantum setting. The technique also yields an almost elementary proof of security for constant-round non-malleable commitments in the classical setting, which may be of independent interest. When combined with existing work, our results yield the first constant-round quantum-secure multiparty computation for both classical and quantum functionalities \emph{in the plain model}, under the \emph{polynomial} hardness of quantum fully-homomorphic encryption and quantum learning with errors.
This work concerns elementwise transformations of spiked matrices: $Y_n = n^{-1/2} f(n^{1-1/(2\ell_*)} X_n + Z_n)$. Here, $f$ is a function applied elementwise, $X_n$ is a low-rank signal matrix, $Z_n$ is white noise, and $\ell_* \geq 1$ is an integer. We find that principal component analysis is powerful for recovering low-rank signal even under highly non-linear and discontinuous transformations. Specifically, in the high-dimensional setting where $Y_n$ is of size $n \times p$ with $n,p \rightarrow \infty$ and $p/n \rightarrow \gamma \in (0, \infty)$, we uncover a phase transition: for signal-to-noise ratios above a sharp threshold, which depends on $f$, the distribution of the elements of $Z_n$, and the limiting aspect ratio $\gamma$, the principal components of $Y_n$ (partially) recover those of $X_n$. Below this threshold, the principal components are asymptotically orthogonal to the signal. In contrast, in the standard setting where $X_n + n^{-1/2}Z_n$ is observed directly, the analogous phase transition depends only on $\gamma$. Analogous phenomena occur when $X_n$ is square and symmetric and $Z_n$ is a generalized Wigner matrix.
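A short simulation makes the transition visible; the choice of the discontinuous $f = \operatorname{sign}$, the exponent $\ell_* = 1$, the aspect ratio, and the SNR grid below are our own illustrative picks, not taken from the paper.

```python
# PCA on an elementwise sign-transformed rank-1 spiked matrix: below the
# threshold the overlap with the planted direction is near 0, above it is not.
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 1000                          # aspect ratio gamma = p/n = 0.5
u = rng.normal(size=n); u /= np.linalg.norm(u)
v = rng.normal(size=p); v /= np.linalg.norm(v)

for snr in (0.5, 1.0, 2.0, 4.0):
    X = snr * np.outer(u, v)               # low-rank signal X_n
    Z = rng.normal(size=(n, p))            # white noise Z_n
    Y = np.sign(np.sqrt(n) * X + Z) / np.sqrt(n)    # ell_* = 1 scaling
    u_hat = np.linalg.svd(Y, full_matrices=False)[0][:, 0]
    print(f"snr={snr:3.1f}  |<u_hat, u>| = {abs(u_hat @ u):.3f}")
```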
Classically, the edit distance of two length-$n$ strings can be computed in $O(n^2)$ time, whereas an $O(n^{2-\epsilon})$-time procedure would falsify the Orthogonal Vectors Hypothesis (OVH). If the edit distance does not exceed $k$, the running time can be improved to $O(n+k^2)$, which is near-optimal (conditioned on OVH) as a function of $n$ and $k$. Our first main contribution is a quantum $\tilde{O}(\sqrt{nk}+k^2)$-time algorithm that uses $\tilde{O}(\sqrt{nk})$ queries, where $\tilde{O}(\cdot)$ hides polylogarithmic factors. This query complexity is unconditionally optimal, and any significant improvement in the time complexity would resolve the long-standing open question of whether edit distance admits an $O(n^{2-\epsilon})$-time quantum algorithm. Our divide-and-conquer quantum algorithm reduces the edit distance problem to a case where the strings have small Lempel-Ziv factorizations. Then, it combines a quantum LZ compression algorithm with a classical edit-distance subroutine for compressed strings. The LZ factorization problem can be classically solved in $O(n)$ time, which is unconditionally optimal even in the quantum setting. We can, however, hope for a quantum speedup if we parameterize the complexity in terms of the factorization size $z$. Already a generic oracle identification algorithm yields the optimal query complexity of $\tilde{O}(\sqrt{nz})$, albeit at the price of exponential running time. Our second main contribution is a quantum algorithm that achieves the optimal time complexity of $\tilde{O}(\sqrt{nz})$. The key tool is a novel LZ-like factorization of size $O(z\log^2 n)$ whose subsequent factors can be efficiently computed through a combination of classical and quantum techniques. We can then obtain the string's run-length encoded Burrows-Wheeler Transform (BWT), construct the $r$-index, and solve many fundamental string processing problems in time $\tilde{O}(\sqrt{nz})$.
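For reference, the bounded-distance regime is easy to illustrate with the classical banded dynamic program below. Note that this simple version runs in $O(nk)$ time; the $O(n+k^2)$ bound cited above needs the more involved Landau-Vishkin technique, and the quantum algorithms are different again.

```python
# Banded DP: only cells with |i - j| <= k can hold a value <= k, so we
# restrict the edit-distance table to a band of width 2k + 1.
def edit_distance_at_most_k(a: str, b: str, k: int):
    """Return the edit distance of a and b if it is <= k, else None."""
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = {j: j for j in range(min(m, k) + 1)}          # DP row 0
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - k), min(m, i + k) + 1):
            if j == 0:
                best = i
            else:
                best = min(
                    prev.get(j, INF) + 1,                          # deletion
                    cur.get(j - 1, INF) + 1,                       # insertion
                    prev.get(j - 1, INF) + (a[i - 1] != b[j - 1]), # match/substitute
                )
            cur[j] = min(best, INF)                      # cap values at k + 1
        prev = cur
    d = prev.get(m, INF)
    return d if d <= k else None

print(edit_distance_at_most_k("kitten", "sitting", 3))   # -> 3
```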