We revisit the 3SUM problem in the \emph{preprocessed universes} setting. We present an algorithm that, given three sets $A$, $B$, $C$ of $n$ integers, preprocesses them in quadratic time, so that given any subsets $A' \subseteq A$, $B' \subseteq B$, $C' \subseteq C$, it can decide whether there exist $a \in A'$, $b \in B'$, $c \in C'$ with $a+b=c$ in time $O(n^{1.5} \log n)$. In contrast to both the first subquadratic $\tilde{O}(n^{13/7})$-time algorithm by Chan and Lewenstein (STOC 2015) and the current fastest $\tilde{O}(n^{11/6})$-time algorithm by Chan, Vassilevska Williams, and Xu (STOC 2023), which are based on the Balog--Szemer\'edi--Gowers theorem from additive combinatorics, our algorithm uses only standard 3SUM-related techniques, namely FFT and linear hashing modulo a prime. It is therefore not only faster but also simpler. Like the two previous algorithms, ours not only decides whether there is a single 3SUM solution but actually determines, for each $c \in C'$, whether there is a solution containing it. We also modify the algorithm to still work in the scenario where the set $C$ is unknown at the time of preprocessing. Finally, even though the simplest version of our algorithm is randomized, we show how to make it deterministic while losing only polylogarithmic factors in the running time.
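For intuition, here is a minimal sketch (our own illustration, not the paper's algorithm) of how the two named ingredients combine: hash the integers modulo a prime $p$ and use one FFT-based convolution to test, for every $c \in C'$, whether some $a \in A'$, $b \in B'$ satisfies $a+b \equiv c \pmod{p}$.

```python
import numpy as np

def fft_3sum_mod_p(A, B, C, p):
    """For each c in C, test whether some a in A, b in B has
    a + b == c (mod p).  True 3SUM solutions always survive the
    hashing; collisions may add false positives that a complete
    algorithm must verify.  A sketch, not the paper's method."""
    fa = np.zeros(p)
    fb = np.zeros(p)
    fa[[a % p for a in A]] = 1.0
    fb[[b % p for b in B]] = 1.0
    # Linear convolution via FFT (length 2p), folded into a cyclic one mod p.
    conv = np.fft.irfft(np.fft.rfft(fa, 2 * p) * np.fft.rfft(fb, 2 * p), 2 * p)
    counts = conv[:p] + conv[p:]
    return {c: bool(counts[c % p] > 0.5) for c in C}
```

One such convolution costs $O(p \log p)$ time; the paper's contribution lies in the quadratic-time preprocessing that makes subsequent subset queries fast.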
This work considers the problem of output-sensitive listing of occurrences of $2k$-cycles for fixed constant $k\geq 2$ in an undirected host graph with $m$ edges and $t$ $2k$-cycles. Recent work of Jin and Xu (and independently Abboud, Khoury, Leibowitz, and Safier) [STOC 2023] gives an $O(m^{4/3}+t)$ time algorithm for listing $4$-cycles, and recent work by Jin, Vassilevska Williams and Zhou [SOSA 2024] gives an $\widetilde{O}(n^2+t)$ time algorithm for listing $6$-cycles in $n$-node graphs. We focus on resolving the next natural question: obtaining listing algorithms for $6$-cycles in the sparse setting, i.e., in terms of $m$ rather than $n$. Previously, the best known result here was the better of Jin, Vassilevska Williams and Zhou's $\widetilde{O}(n^2+t)$ algorithm and Alon, Yuster and Zwick's $O(m^{5/3}+t)$ algorithm. We give an algorithm for listing $6$-cycles with running time $\widetilde{O}(m^{1.6}+t)$. Our algorithm is a natural extension of Dahlgaard, Knudsen and St\"ockel's [STOC 2017] algorithm for detecting a $2k$-cycle. Our main technical contribution is the analysis of the algorithm, which involves a type of ``supersaturation'' lemma relating the number of $2k$-cycles in a bipartite graph to the sizes of the parts in the bipartition and the number of edges. We also give a simplified analysis of Dahlgaard, Knudsen and St\"ockel's $2k$-cycle detection algorithm (at the cost of a small polylogarithmic increase in the running time), which is helpful in analyzing our listing algorithm.
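To illustrate what output-sensitive listing means in the simplest case $k=2$, here is the textbook wedge-bucketing baseline for $4$-cycles (our own illustration, not the $O(m^{4/3}+t)$ algorithm cited above); it runs in $O(\sum_v \deg(v)^2 + t)$ time.

```python
from collections import defaultdict
from itertools import combinations

def list_4cycles(adj):
    """Bucket the wedges u - v - w by their endpoint pair (u, w);
    two wedges sharing the same endpoints form a 4-cycle.  A naive
    baseline for output-sensitive listing, not the cited algorithm."""
    wedges = defaultdict(list)
    for v, nbrs in adj.items():
        for u, w in combinations(sorted(nbrs), 2):
            wedges[(u, w)].append(v)
    for (u, w), mids in wedges.items():
        for v1, v2 in combinations(mids, 2):
            # Each 4-cycle is reported twice, once per diagonal pair.
            yield (u, v1, w, v2)
```

For example, `list(list_4cycles({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}))` reports the single $4$-cycle on vertices $0,1,2,3$ twice, once for each diagonal.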
Graph similarity computation (GSC) is the task of computing the similarity between a pair of graphs, a fundamental problem with numerous applications in the graph community. In GSC, graph edit distance (GED) and maximum common subgraph (MCS) are two important similarity metrics, both of which are NP-hard to compute. Instead of calculating the exact values, recent solutions resort to leveraging graph neural networks (GNNs) to learn data-driven models for the estimation of GED and MCS. Most of them are built on components involving node-level interactions across graphs, which incur vast computational overhead but contribute little to effectiveness. In this paper, we present GraSP, a simple yet effective GSC approach for GED and MCS prediction. GraSP achieves high effectiveness through several key instruments: enhanced node features via positional encoding, and a GNN model augmented with a gating mechanism, residual connections, and multi-scale pooling. Theoretically, GraSP can surpass the 1-WL test, indicating its high expressiveness. Empirically, extensive experiments comparing GraSP against 10 competitors on multiple widely adopted benchmark datasets showcase the superiority of GraSP over prior art in terms of both effectiveness and efficiency. The code is available at //github.com/HaoranZ99/GraSP.
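The architectural ingredients named above are standard building blocks; as a hedged sketch (layer shapes and names are our own, not GraSP's actual implementation), a gated GNN layer with a residual connection could look like this:

```python
import torch
import torch.nn as nn

class GatedGNNLayer(nn.Module):
    """Illustrative layer combining two of the named ingredients:
    a gating mechanism and a residual connection."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (n, dim) node features; adj: dense (n, n) adjacency matrix.
        h = adj @ self.msg(x)  # aggregate transformed neighbor messages
        g = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        return x + g * h       # gated update plus a residual connection
```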
Martin-L\"{o}f type theory $\mathbf{MLTT}$ was extended by Setzer with the so-called Mahlo universe types. The extension of $\mathbf{MLTT}$ with one Mahlo universe is called $\mathbf{MLM}$ and was introduced to develop a variant of $\mathbf{MLTT}$ equipped with an analogue of a large cardinal. Another instance of constructive systems extended with an analogue of a large set was formulated in the context of Aczel's constructive set theory: $\mathbf{CZF}$. Rathjen, Griffor and Palmgren extended $\mathbf{CZF}$ with inaccessible sets of all transfinite orders. While Rathjen proved that this extended system of $\mathbf{CZF}$ is interpretable in an extension of $\mathbf{MLM}$ with one usual universe type above the Mahlo universe, it is unknown whether it can be interpreted by the Mahlo universe without a universe type above it. We extend $\mathbf{MLM}$ not by a universe type but by the accessibility predicate, and show that $\mathbf{CZF}$ with inaccessible sets can be interpreted in $\mathbf{MLM}$ with the accessibility predicate. Our interpretation of this extension of $\mathbf{CZF}$ is the same as that of Rathjen, Griffor and Palmgren formulated by $\mathbf{MLTT}$ with second-order universe operators, except that we construct the inaccessible sets by using the Mahlo universe and the accessibility predicate. We formalised the main part of our interpretation in the proof assistant Agda.
We present MathDSL, a Domain-Specific Language (DSL) for mathematical equation solving, which, when deployed in program synthesis models, outperforms state-of-the-art reinforcement-learning-based methods. We also introduce a quantitative metric for measuring the conciseness of a mathematical solution and demonstrate the improvement in the quality of generated solutions compared to other methods. Our system demonstrates that a program synthesis system (DreamCoder) using MathDSL can generate programs that solve linear equations with greater accuracy and conciseness than reinforcement learning systems. Additionally, we demonstrate that if we use the action spaces of previous reinforcement learning systems as DSLs, MathDSL outperforms these action-space DSLs. We use DreamCoder to store equation-solving strategies as learned abstractions in its program library and demonstrate that, by using MathDSL, these can be converted into human-interpretable solution strategies that could have applications in mathematical education.
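To make the idea concrete, here is a hypothetical three-primitive mini-DSL in the spirit described above; the primitive names and the equation encoding are our own illustration, not MathDSL itself. Under this reading, a solution is a program, and one natural conciseness measure is its length.

```python
# Hypothetical mini-DSL sketch: an equation a*x + b = c*x + d is the
# tuple (a, b, c, d); each primitive is one algebraic step, and a
# solution is a short program over these primitives.

def sub_rhs_x(eq):       # subtract c*x from both sides
    a, b, c, d = eq
    return (a - c, b, 0, d)

def sub_lhs_const(eq):   # subtract b from both sides
    a, b, c, d = eq
    return (a, 0, c, d - b)

def div_coeff(eq):       # divide both sides by the x-coefficient
    a, b, c, d = eq
    return (1, b / a, c / a, d / a)

# Solve 3x + 2 = x + 10: a 3-primitive program yields x = 4.
prog = [sub_rhs_x, sub_lhs_const, div_coeff]
eq = (3, 2, 1, 10)
for step in prog:
    eq = step(eq)
print(eq)  # (1, 0.0, 0.0, 4.0), i.e. x = 4; conciseness = 3 steps
```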
Given a point set $P$ in a metric space and a real number $t \geq 1$, an \emph{oriented $t$-spanner} is an oriented graph $\overrightarrow{G}=(P,\overrightarrow{E})$, where for every pair of distinct points $p$ and $q$ in $P$, the shortest oriented closed walk in $\overrightarrow{G}$ that contains $p$ and $q$ is at most a factor $t$ longer than the perimeter of the smallest triangle in $P$ containing $p$ and $q$. The \emph{oriented dilation} of a graph $\overrightarrow{G}$ is the minimum $t$ for which $\overrightarrow{G}$ is an oriented $t$-spanner. We present the first algorithm that computes, in Euclidean space, a sparse oriented spanner whose oriented dilation is bounded by a constant. More specifically, for any set of $n$ points in $\mathbb{R}^d$, where $d$ is a constant, we construct an oriented $(2+\varepsilon)$-spanner with $\mathcal{O}(n)$ edges in $\mathcal{O}(n \log n)$ time and $\mathcal{O}(n)$ space. Our construction uses the well-separated pair decomposition and an algorithm that computes a $(1+\varepsilon)$-approximation of the minimum-perimeter triangle in $P$ containing two given query points in $\mathcal{O}(\log n)$ time. While our algorithm is based on first computing a suitable undirected graph and then orienting it, we show that, in general, computing the orientation of an undirected graph that minimises its oriented dilation is NP-hard, even for point sets in the Euclidean plane. We further prove that even if the orientation is already given, computing the oriented dilation is APSP-hard for points in a general metric space. We complement this result with an algorithm that approximates the oriented dilation of a given graph in subcubic time for point sets in $\mathbb{R}^d$, where $d$ is a constant.
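As a concrete rendering of the definitions above, the following brute-force reference implementation (our own, for intuition; it is essentially directed APSP plus a triangle search, consistent with the APSP-hardness just stated) computes the oriented dilation of a given orientation, reading "smallest triangle in $P$ containing $p$ and $q$" as the minimum-perimeter triangle with vertices in $P$, two of which are $p$ and $q$:

```python
import itertools, math

def oriented_dilation(points, arcs):
    """Brute-force oriented dilation of the directed graph (points, arcs):
    the maximum over point pairs {p, q} of the shortest oriented closed
    walk through p and q, i.e. d(p, q) + d(q, p), divided by the minimum
    perimeter of a triangle in P having p and q as two of its vertices."""
    n = len(points)
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v in arcs:  # oriented edges, weighted by Euclidean length
        d[u][v] = math.dist(points[u], points[v])
    for k in range(n):  # Floyd-Warshall: all-pairs shortest oriented paths
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    t = 0.0
    for p, q in itertools.combinations(range(n), 2):
        walk = d[p][q] + d[q][p]
        tri = min(math.dist(points[p], points[q])
                  + math.dist(points[q], points[r])
                  + math.dist(points[r], points[p])
                  for r in range(n) if r not in (p, q))
        t = max(t, walk / tri)
    return t
```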
Despite the widespread use of the data augmentation (DA) algorithm, the theoretical understanding of its convergence behavior remains incomplete. We prove the first non-asymptotic polynomial upper bounds on mixing times of three important DA algorithms: the DA algorithm for Bayesian Probit regression (Albert and Chib, 1993, ProbitDA), Bayesian Logit regression (Polson, Scott, and Windle, 2013, LogitDA), and Bayesian Lasso regression (Park and Casella, 2008, Rajaratnam et al., 2015, LassoDA). Concretely, we demonstrate that with $\eta$-warm start, parameter dimension $d$, and sample size $n$, ProbitDA and LogitDA require $\mathcal{O}\left(nd\log \left(\frac{\log \eta}{\epsilon}\right)\right)$ steps to obtain samples with at most $\epsilon$ TV error, whereas LassoDA requires $\mathcal{O}\left(d^2(d\log d +n \log n)^2 \log \left(\frac{\eta}{\epsilon}\right)\right)$ steps. The results are generally applicable to settings with large $n$ and large $d$, including settings with highly imbalanced response data in the Probit and Logit regression. The proofs are based on Markov chain conductance and isoperimetric inequalities. Assuming that data are independently generated from either a bounded, sub-Gaussian, or log-concave distribution, we improve the guarantees for ProbitDA and LogitDA to $\tilde{\mathcal{O}}(n+d)$ with high probability, and compare them with the best known guarantees of Langevin Monte Carlo and the Metropolis-Adjusted Langevin Algorithm. We also discuss the mixing times of the three algorithms under feasible initialization.
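For concreteness, ProbitDA is the classical two-block Gibbs sampler of Albert and Chib (1993); a minimal sketch with a flat prior on the coefficients (priors and initialization simplified for illustration) is:

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da(X, y, n_iter=1000, seed=0):
    """Albert-Chib (1993) data augmentation for probit regression with
    a flat prior on beta; this is the ProbitDA chain whose mixing time
    is bounded above.  A simplified sketch for intuition only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    cov = np.linalg.inv(X.T @ X)  # posterior covariance of beta given z
    beta = np.zeros(d)
    for _ in range(n_iter):
        # I-step: z_i ~ N(x_i' beta, 1), truncated to the side fixed by y_i.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)  # z_i > 0 when y_i = 1
        hi = np.where(y == 1, np.inf, -mu)   # z_i < 0 when y_i = 0
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # P-step: beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
        beta = rng.multivariate_normal(cov @ (X.T @ z), cov)
    return beta
```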
The ideal estimand for comparing a new treatment $Rx$ with a control $C$ is the $\textit{counterfactual}$ efficacy $Rx:C$, the expected differential outcome between $Rx$ and $C$ if each patient were given $\textit{both}$. While counterfactual $\textit{point estimation}$ from $\textit{factual}$ Randomized Controlled Trials (RCTs) has been available, this article shows that $\textit{counterfactual}$ uncertainty quantification (CUQ), quantifying uncertainty for factual point estimates but in a counterfactual setting, is surprisingly achievable. We achieve CUQ, whose variability is typically smaller than that of factual UQ, by creating a new statistical modeling principle called ETZ, which is applicable to RCTs with $\textit{Before-and-After}$ treatment Repeated Measures, common in many therapeutic areas. We urge caution when the estimate of a patient's unobservable true condition before treatment has measurement error, because that violation of a standard regression assumption can cause attenuation in estimating treatment effects. Fortunately, we prove that, for traditional medicine in general, and for targeted therapy with efficacy defined as averaged over the population, counterfactual point estimation is unbiased. However, for targeted therapy, both Real Human and Digital Twins approaches should respect this limitation, lest predicted treatment effects in $\textit{subgroups}$ be biased.
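The attenuation warned about above is the classical errors-in-variables effect: regressing an outcome on an error-contaminated predictor shrinks the estimated slope by the reliability ratio $\sigma_x^2/(\sigma_x^2+\sigma_e^2)$. A small simulation (all numbers invented for illustration) makes this visible:

```python
import numpy as np

# Errors-in-variables attenuation demo: the true slope is 2, but the
# slope on the noisy predictor shrinks by var(x) / (var(x) + var(e)).
rng = np.random.default_rng(1)
n = 100_000
x_true = rng.normal(0, 1, n)               # unobservable true condition
y = 2.0 * x_true + rng.normal(0, 1, n)     # outcome, true slope = 2
x_obs = x_true + rng.normal(0, 1, n)       # measured with unit error variance
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs)
print(slope)  # about 1.0 = 2 * (1 / (1 + 1)): the effect is attenuated
```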
We construct a polynomial-time classical algorithm that samples from the output distribution of low-depth noisy Clifford circuits with any product-state inputs and final single-qubit measurements in any basis. This class of circuits includes Clifford-magic circuits and Conjugated-Clifford circuits, which are important candidates for demonstrating quantum advantage using non-universal gates. Additionally, our results generalize a simulation algorithm for IQP circuits [Rajakumar et al., SODA'25] to the case of IQP circuits augmented with CNOT gates, which is another class of non-universal circuits that are relevant to current experiments. Importantly, our results do not require randomness assumptions over the circuit families considered (such as anticoncentration properties) and instead hold for every circuit in each class. This allows us to place tight limitations on the robustness of these circuits to noise. In particular, we show that there is no quantum advantage at large depths with realistically noisy Clifford circuits, even with perfect magic state inputs, or IQP circuits with CNOT gates, even with arbitrary diagonal non-Clifford gates. The key insight behind the algorithm is that interspersed noise causes a decay of long-range entanglement, and at depths beyond a critical threshold, the noise builds up to the extent that most correlations can be classically simulated. To prove our results, we merge techniques from percolation theory with tools from Pauli path analysis.
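As context for why Clifford structure helps, purely stabilizer circuits, noisy or not, are already classically samplable by the Gottesman-Knill theorem; the sketch below does this with the stim package. The result above goes beyond this baseline by handling arbitrary product-state (e.g., magic-state) inputs at low depth under noise.

```python
import stim

# Gottesman-Knill baseline: sample measurement outcomes of a small
# noisy stabilizer circuit classically.  Gate choices are illustrative.
circuit = stim.Circuit()
circuit.append("H", [0])
circuit.append("CNOT", [0, 1])
circuit.append("DEPOLARIZE1", [0, 1], 0.05)  # single-qubit depolarizing noise
circuit.append("M", [0, 1])                  # final Z-basis measurements
sampler = circuit.compile_sampler()
print(sampler.sample(shots=5))               # 5 classical samples
```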
Deep learning-based algorithms have seen massive popularity in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded the computer vision field, where the self-attention mechanism has been utilized as a replacement for the popular convolution operator for capturing long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an increased exploration of vision transformers for a diverse set of tasks. Although a number of surveys have focused on transformers in computer vision in general, to the best of our knowledge we are the first to present a systematic review of recent advances based on transformers in remote sensing. Our survey covers more than 60 recent transformer-based methods for different remote sensing problems in sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI) and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing different challenges and open issues of transformers in remote sensing. Additionally, we intend to frequently update and maintain the latest transformers-in-remote-sensing papers with their respective code at: //github.com/VIROBO-15/Transformer-in-Remote-Sensing
In multi-turn dialog, utterances do not always take the full form of sentences \cite{Carbonell1983DiscoursePA}, which naturally makes understanding the dialog context more difficult. However, it is essential to fully grasp the dialog context to generate a reasonable response. Hence, in this paper, we propose to improve response generation performance by examining the model's ability to answer a reading comprehension question, where the question is focused on the omitted information in the dialog. Inspired by the multi-task learning scheme, we propose a joint framework that unifies these two tasks, sharing the same encoder to extract common, task-invariant features, with different decoders to learn task-specific features. To better fuse information from the question and the dialog history in the encoding part, we propose to augment the Transformer architecture with a memory updater, which is designed to selectively store and update the history dialog information so as to support downstream tasks. For the experiments, we employ human annotators to write and examine a large-scale dialog reading comprehension dataset. Extensive experiments are conducted on this dataset, and the results show that the proposed model brings substantial improvements over several strong baselines on both tasks. In this way, we demonstrate that reasoning can indeed help better response generation and vice versa. We release our large-scale dataset for further research.
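As a hedged sketch of what such a memory updater could look like (module shapes and the gating form are our own illustration, not the paper's exact architecture), the memory selectively absorbs the encoding of the current turn:

```python
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    """Illustrative gated memory updater: memory slots attend to the
    current turn's encoding, then a gate decides, per dimension, how
    much of the old memory to keep versus overwrite."""
    def __init__(self, dim):
        super().__init__()
        # dim must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, memory, turn):
        # memory: (batch, m, dim) history slots; turn: (batch, s, dim).
        upd, _ = self.attn(memory, turn, turn)    # read from the new turn
        g = torch.sigmoid(self.gate(torch.cat([memory, upd], dim=-1)))
        return g * memory + (1 - g) * upd         # selectively store/update
```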