The problem of identifying the satisfiability threshold of random $3$-SAT formulas has received considerable attention over the last few decades and has inspired the study of other threshold phenomena in random combinatorial structures. The classical assumption in this line of research is that, for a given set of $n$ Boolean variables, each clause is drawn uniformly at random among all sets of three literals from these variables, independently of the other clauses. Here, we keep the uniform distribution of each clause, but deviate significantly from the independence assumption and consider richer families of probability distributions. For integer parameters $n$, $m$, and $k$, we denote by $\DistFamily_k(n,m)$ the family of probability distributions that produce formulas with $m$ clauses, each selected uniformly at random from all sets of three literals from the $n$ variables, such that the clauses are $k$-wise independent. Our aim is to make general statements about the satisfiability or unsatisfiability of formulas produced by the distributions in $\DistFamily_k(n,m)$ for different values of the parameters $n$, $m$, and $k$.
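For reference, the classical fully independent model (the extreme case of the families above) is straightforward to sample. The following is a minimal Python sketch of that model only; the literal representation is illustrative, and the clauses here use three distinct variables (whether complementary literals may share a clause is a modelling detail we gloss over).

```python
import random

def random_3sat(n, m, rng=random):
    """Sample m clauses i.i.d., each uniform over sets of three literals
    on three distinct variables -- the classical independent-clauses model."""
    formula = []
    for _ in range(m):
        variables = rng.sample(range(n), 3)       # three distinct variables
        clause = frozenset((v, rng.random() < 0.5) for v in variables)
        formula.append(clause)                    # literal = (variable, sign)
    return formula
```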
We derive a new adaptive leverage score sampling strategy for solving the Column Subset Selection Problem (CSSP). The resulting algorithm, called Adaptive Randomized Pivoting, can be viewed as a randomization of Osinsky's recently proposed deterministic algorithm for CSSP. It guarantees, in expectation, an approximation error that matches the optimal existence result in the Frobenius norm. Although the same guarantee can be achieved with volume sampling, our sampling strategy is much simpler and less expensive. To show the versatility of Adaptive Randomized Pivoting, we apply it to select indices in the Discrete Empirical Interpolation Method, in cross/skeleton approximation of general matrices, and in the Nystroem approximation of symmetric positive semi-definite matrices. In all these cases, the resulting randomized algorithms are new, and they enjoy bounds on the expected error that match -- or improve on -- the best known deterministic results. A derandomization of the algorithm for the Nystroem approximation yields a new deterministic algorithm with a rather favorable error bound.
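As an illustration, here is a minimal NumPy sketch of one natural reading of such an adaptive leverage-score strategy (our interpretation for illustration, not necessarily the paper's exact algorithm): indices are sampled with probability proportional to the squared row norms of an orthonormal basis of the dominant right singular subspace, and the basis is deflated after each selection. All function and variable names are ours.

```python
import numpy as np

def adaptive_leverage_sampling(A, k, rng=None):
    """Sketch: select k column indices of A by sampling proportionally to
    the leverage scores (squared row norms) of an orthonormal basis V of
    the dominant k-dimensional right singular subspace, deflating V so
    that an already-selected row can never be sampled again."""
    rng = rng or np.random.default_rng()
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:k].T                                  # n x k, orthonormal columns
    selected = []
    for _ in range(k):
        scores = np.sum(V**2, axis=1)             # leverage scores of remaining rows
        j = rng.choice(A.shape[1], p=scores / scores.sum())
        selected.append(j)
        w = V[j] / np.linalg.norm(V[j])           # deflate: row j of V becomes zero
        V = V - np.outer(V @ w, w)
    return selected
```

After each deflation the squared row norms of $V$ sum to exactly one less than before, so the sampling probabilities remain well defined and already-selected indices have probability zero.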
We prove, for stably computably enumerable formal systems, direct analogues of G\"odel's first and second incompleteness theorems. A typical stably computably enumerable set is the set of Diophantine equations with no integer solutions; in particular, such sets are generally not computably enumerable. This therefore gives the first extension of the second incompleteness theorem to formal systems that are not classically computable. Let us motivate this with a somewhat physical application. Let $\mathcal{H}$ be the suitable infinite-time limit (stabilization in the sense of the paper) of the mathematical output of humanity, specialized to first-order sentences in the language of arithmetic (for simplicity) and understood as a formal system. Suppose that all the relevant physical processes in the formation of $\mathcal{H}$ are Turing computable. Then $\mathcal{H}$, as defined, may \emph{not} be computably enumerable, but it is stably computably enumerable. Thus, the classical G\"odel disjunction applied to $\mathcal{H}$ is meaningless, but applying our incompleteness theorems to $\mathcal{H}$ we obtain a sharper version of G\"odel's disjunction: assuming $\mathcal{H} \vdash PA$, either $\mathcal{H}$ is not stably computably enumerable, or $\mathcal{H}$ is not 1-consistent (in particular, not sound), or $\mathcal{H}$ cannot prove a certain true statement of arithmetic (and cannot disprove it if, in addition, $\mathcal{H}$ is 2-consistent).
For many problems, quantum algorithms promise speedups over their classical counterparts. However, these results predominantly rely on asymptotic worst-case analysis, which overlooks significant overheads due to error correction and the fact that real-world instances often contain exploitable structure. In this work, we employ the hybrid benchmarking method to evaluate the potential of quantum backtracking and Grover's algorithm against the 2023 SAT competition main track winner on random $k$-SAT instances with tunable structure, designed to represent industry-like scenarios, using both $T$-depth and $T$-count as cost metrics to estimate quantum run times. Our findings reproduce, via hybrid benchmarking, the results of Campbell, Khurana, and Montanaro (Quantum '19) in the unstructured case. However, we offer a more sobering perspective in practically relevant regimes: almost all quantum speedups vanish, even asymptotically, once minimal structure is introduced or once $T$-count is considered instead of $T$-depth. Moreover, when the algorithm is required to find a solution within a single day, we find that only Grover's algorithm has the potential to outperform classical algorithms, and then only in a very limited regime and only under the $T$-depth metric. We also discuss how more sophisticated heuristics could restore the asymptotic scaling advantage for quantum backtracking, but our findings suggest that the potential for practical quantum speedups in more structured $k$-SAT solving will remain limited.
We introduce the notion of the free product of $q$-matroids, which is the $q$-analogue of the free product of matroids. We study the properties of this noncommutative binary operation, making extensive use of the theory of cyclic flats. We show that the free product of two $q$-matroids $M_1$ and $M_2$ is maximal, with respect to the weak order, among $q$-matroids having $M_1$ as a restriction and $M_2$ as the complementary contraction. We characterise the $q$-matroids that are irreducible with respect to the free product, and we prove that the factorization of a $q$-matroid into a free product of irreducibles is unique up to isomorphism. We discuss the representability of the free product, with a particular focus on rank-one uniform $q$-matroids, and show that such a product is represented by clubs on the projective line.
Most of the scientific literature on causal modeling considers the structural framework of Pearl and the potential-outcome framework of Rubin to be formally equivalent, and therefore uses do-interventions and the potential-outcome subscript notation interchangeably to write counterfactual outcomes. In this paper, we agnostically superimpose the two causal models to specify under which mathematical conditions structural counterfactual outcomes and potential outcomes need to, do not need to, can, or cannot be equal (almost surely or in law). Our comparison serves as a reminder that a structural causal model and a Rubin causal model compatible with the same observations need not coincide, and it highlights real-world problems where they cannot even correspond. We then examine common claims and practices from the causal-inference literature in the light of these results. In doing so, we aim to clarify the relationship between the two causal frameworks and the interpretation of their respective counterfactuals.
A convergent numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly at almost every time.
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given $\epsilon > 0$ and a graph $G$ that admits a $k$-colouring in which a $\rho$-fraction of the edges is coloured properly, return an $\ell$-colouring of $G$ in which at least an $(\alpha \rho - \epsilon)$-fraction of the edges is coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (\UGC), it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$, as displayed below. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is \NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the \UGC).
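Side by side, with $\alpha_{\mathrm{alg}}$ and $\alpha_{\mathrm{hard}}$ as our shorthand for the algorithmic guarantee and the UGC-hardness threshold for large $k, \ell$ quoted above, the near-matching reads
\[
\alpha_{\mathrm{alg}} = 1 - \frac{1}{\ell} + \frac{2 \ln \ell}{k \ell} - o\!\left(\frac{\ln \ell}{k \ell}\right) - O\!\left(\frac{1}{k^2}\right),
\qquad
\alpha_{\mathrm{hard}} = 1 - \frac{1}{\ell} + \frac{2 \ln \ell}{k \ell} + o\!\left(\frac{\ln \ell}{k \ell}\right),
\]
so for fixed $d = \ell - k$ the two coincide up to the $o(\ln \ell / k \ell)$ term.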
We propose and analyse a boundary-preserving numerical scheme for the weak approximation of a class of stochastic partial differential equations (SPDEs) with bounded state space. We impose regularity assumptions on the drift and diffusion coefficients only locally on the state space. In particular, the drift and diffusion coefficients may be non-globally Lipschitz continuous and superlinearly growing. The scheme combines a finite difference discretisation in space with a Lie--Trotter splitting in time, in which the split subproblems are handled by exact simulation and exact integration. We prove weak convergence of the scheme of optimal order 1/4 for globally Lipschitz continuous test functions, by establishing strong convergence towards a strong solution driven by a different noise process. Boundary preservation is ensured by the Lie--Trotter time splitting together with the exact simulation and exact integration of the subproblems. Numerical experiments confirm the theoretical results and demonstrate the effectiveness of the proposed Lie--Trotter-Exact (LTE) scheme compared to existing methods for SPDEs.
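To illustrate just the Lie--Trotter structure (not the paper's scheme, which couples a finite difference space discretisation with exact simulation and integration of the split subproblems), here is a Python sketch for a toy scalar SDE whose drift and diffusion sub-flows are both available in closed form; the equation, coefficients, and names are illustrative only.

```python
import numpy as np

def lie_trotter_step(u, dt, lam, sigma, rng):
    """One Lie--Trotter step for the toy SDE du = -lam*u dt + sigma*u dW:
    first solve the drift ODE exactly, then the diffusion SDE exactly."""
    u = u * np.exp(-lam * dt)                             # exact drift flow
    dW = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
    return u * np.exp(sigma * dW - 0.5 * sigma**2 * dt)  # exact diffusion flow

rng = np.random.default_rng(0)
u, dt = 1.0, 1.0 / 100
for _ in range(100):
    u = lie_trotter_step(u, dt, lam=1.0, sigma=0.5, rng=rng)
```

For this linear toy problem the two sub-flows commute, so the splitting happens to be exact; in the setting of the abstract, solving each split subproblem exactly is what ensures that the iterates respect the bounded state space.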
We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in {\mathbb C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_i X_{ij} C_j$ of matrix products, where each $B_i \in {\mathbb C}^{m \times g_i}$ and $C_j \in {\mathbb C}^{h_j \times n}$ is known and where the unknown matrix kernels $X_{ij}$ are determined by minimizing the Frobenius norm of the error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq {\mathbb C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq {\mathbb C}^m$, defined respectively by the collective domains and ranges of the given matrices $C_1,\ldots,C_q$ and $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dag}AC^{\dag}$ and that the optimal approximation $BB^{\dag}AC^{\dag}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dag}$ and $C^{\dag}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices, which significantly reduces the computational burden.
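For the basic closed-form solution quoted above (before any reduction), the formulas translate directly into a few lines of NumPy; this brute-force sketch computes the full pseudo-inverses $B^{\dag}$ and $C^{\dag}$ that the proposed reduction method is designed to avoid, and the helper name is ours.

```python
import numpy as np

def optimal_kernel_approx(A, Bs, Cs):
    """Optimal Frobenius-norm approximation of A by sum_ij B_i X_ij C_j.

    Stacks B = [B_1 ... B_p] and C = [C_1; ...; C_q]; the optimal stacked
    kernel is X = B^+ A C^+ (containing the blocks X_ij) and the optimal
    approximation is B B^+ A C^+ C."""
    B = np.hstack(Bs)                    # m x (g_1 + ... + g_p)
    C = np.vstack(Cs)                    # (h_1 + ... + h_q) x n
    X = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)
    return X, B @ X @ C                  # kernel and approximation
```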
We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme uses continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to define the geometry approximation. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.