In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count the solutions of a Boolean formula with respect to a given set of projection variables, where multiple solutions that are identical when restricted to the projection variables count as only one solution. Inspired by the observation that the so-called treewidth is one of the most prominent structural parameters, our algorithm exploits small treewidth of the primal graph of the input instance. More precisely, it runs in time $\mathcal{O}(2^{2^{k+4}} n^2)$, where $k$ is the treewidth and $n$ is the input size of the instance. In other words, PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds for bounded-treewidth algorithms for PMC, yielding asymptotically tight runtime bounds for our algorithm. While the algorithm above serves as a first theoretical upper bound, and although it might be quite appealing for small values of $k$, a naive implementation adhering to this runtime bound unsurprisingly already struggles on instances of relatively small width. Therefore, we turn our attention to several measures that resolve this issue and make treewidth exploitable in practice: we present a technique called nested dynamic programming, where different levels of abstraction of the primal graph are used to (recursively) compute and refine tree decompositions of a given instance. Finally, we provide a nested dynamic programming algorithm and an implementation relying on database technology for PMC and for a prominent special case of PMC, namely model counting (#Sat). Experiments indicate that these advancements are promising, allowing us to solve instances whose treewidth upper bounds exceed 200.
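To make the PMC semantics concrete, here is a minimal brute-force counter (the function name and the example formula are ours; this is deliberately naive and bears no resemblance to the paper's nested dynamic programming, which avoids enumerating all $2^n$ assignments):

```python
from itertools import product

def projected_model_count(clauses, variables, proj_vars):
    """Brute-force projected model count: the number of distinct assignments
    to proj_vars that extend to a model of the CNF `clauses`.
    A clause is a set of non-zero ints (DIMACS-style literals)."""
    projections = set()
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        # A clause is satisfied if some literal evaluates to true.
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            projections.add(tuple(assignment[v] for v in proj_vars))
    return len(projections)

# (x1 v x2) & (-x1 v x3), projected onto {x1, x2}:
print(projected_model_count([{1, 2}, {-1, 3}], [1, 2, 3], [1, 2]))  # 3
```

Note that the formula has four models, but two of them agree on the projection variables, so the projected count is three rather than four.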
Covariate adjustment is desired by both practitioners and regulators of randomized clinical trials because it improves the precision of treatment effect estimates. However, covariate adjustment presents a particular challenge in time-to-event analysis. We propose to apply covariate-adjusted pseudovalue regression to estimate the between-treatment difference in restricted mean survival time (RMST). Our proposed method incorporates a prognostic covariate to increase the precision of the treatment effect estimate while maintaining strict type I error control and without introducing bias. In addition, the amount of the precision gain can be quantified and taken into account in sample size calculations at the study design stage. Consequently, our proposed method makes it possible to design smaller randomized studies at no expense to statistical power.
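As a rough illustration of the pseudovalue approach, the sketch below computes Kaplan-Meier-based RMST pseudo-values and regresses them on treatment and a prognostic covariate. The simulated data, the truncation time tau, and the plain least-squares fit are all our own assumptions; an actual analysis would use GEE with robust (sandwich) standard errors.

```python
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve on [0, tau] (ties handled one observation at a time)."""
    order = np.argsort(time)
    surv, rmst, prev_t, at_risk = 1.0, 0.0, 0.0, len(time)
    for t, d in zip(time[order], event[order]):
        rmst += surv * (min(t, tau) - prev_t)
        prev_t = min(t, tau)
        if t > tau:
            break
        if d:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return rmst + surv * max(tau - prev_t, 0.0)

def pseudo_values(time, event, tau):
    """Leave-one-out (jackknife) pseudo-values for the RMST."""
    n = len(time)
    full = km_rmst(time, event, tau)
    loo = np.array([km_rmst(np.delete(time, i), np.delete(event, i), tau)
                    for i in range(n)])
    return n * full - (n - 1) * loo

# Simulated trial (hypothetical data): treatment arm + prognostic covariate.
rng = np.random.default_rng(0)
n, tau = 200, 24.0
arm = rng.integers(0, 2, size=n)
covariate = rng.normal(size=n)
latent = rng.exponential(12.0 * np.exp(0.3 * arm + 0.5 * covariate))
censor = rng.uniform(6.0, 36.0, size=n)
time, event = np.minimum(latent, censor), (latent <= censor).astype(int)

# OLS on pseudo-values is GEE with an identity link; the arm coefficient
# estimates the covariate-adjusted RMST difference between arms.
X = np.column_stack([np.ones(n), arm, covariate])
beta, *_ = np.linalg.lstsq(X, pseudo_values(time, event, tau), rcond=None)
print("adjusted RMST difference:", beta[1])
```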
We study pseudo-polynomial time algorithms for the fundamental \emph{0-1 Knapsack} problem. In terms of $n$ and $w_{\max}$, previous algorithms for 0-1 Knapsack have cubic time complexities: $O(n^2w_{\max})$ (Bellman 1957), $O(nw_{\max}^2)$ (Kellerer and Pferschy 2004), and $O(n + w_{\max}^3)$ (Polak, Rohwedder, and W\k{e}grzycki 2021). On the other hand, fine-grained complexity only rules out $O((n+w_{\max})^{2-\delta})$ running time, and it is an important question in this area whether $\tilde O(n+w_{\max}^2)$ time is achievable. Our main result makes significant progress towards resolving this question:
- The 0-1 Knapsack problem has a deterministic algorithm running in $\tilde O(n + w_{\max}^{2.5})$ time.
Our techniques also apply to the easier \emph{Subset Sum} problem:
- The Subset Sum problem has a randomized algorithm running in $\tilde O(n + w_{\max}^{1.5})$ time. This improves (and simplifies) the previous $\tilde O(n + w_{\max}^{5/3})$-time algorithm by Polak, Rohwedder, and W\k{e}grzycki (2021) (based on Galil and Margalit (1991), and Bringmann and Wellnitz (2021)).
Similar to recent works on Knapsack (and integer programs in general), our algorithms also exploit the \emph{proximity} between optimal integral solutions and fractional solutions. Our new ideas are as follows:
- Previous works used an $O(w_{\max})$ proximity bound in the $\ell_1$-norm. As our main conceptual contribution, we use an additive-combinatorial theorem by Erd\H{o}s and S\'{a}rk\"{o}zy (1990) to derive an $\ell_0$-proximity bound of $\tilde O(\sqrt{w_{\max}})$.
- The main technical component of our Knapsack result is then a dynamic programming algorithm that exploits both the $\ell_0$- and the $\ell_1$-proximity. It is based on a vast extension of the ``witness propagation'' method, originally designed by Deng, Mao, and Zhong (2023) for the easier \emph{unbounded} setting only.
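For orientation, the first cubic baseline above is essentially Bellman's textbook dynamic program; a minimal sketch (with a toy instance of our own) is:

```python
def knapsack(items, capacity):
    """Bellman's dynamic program for 0-1 Knapsack: O(n * capacity) time,
    i.e. O(n^2 * w_max) once capacity <= n * w_max is assumed w.l.o.g.
    `items` is a list of (weight, profit) pairs."""
    best = [0] * (capacity + 1)        # best[c] = max profit with weight <= c
    for weight, profit in items:
        for c in range(capacity, weight - 1, -1):   # descending: item used once
            best[c] = max(best[c], best[c - weight] + profit)
    return best[capacity]

print(knapsack([(3, 4), (4, 5), (2, 3)], capacity=5))   # 7: weights 3 and 2
```

The paper's faster algorithms instead restrict attention to items where an optimal integral solution can differ from the greedy fractional one, which is where the $\ell_0$- and $\ell_1$-proximity bounds enter.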
We define the relative fractional independence number of two graphs, $G$ and $H$, as $$\alpha^*(G|H)=\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)},$$ where the maximum is taken over all graphs $W$, $G\boxtimes W$ is the strong product of $G$ and $W$, and $\alpha$ denotes the independence number. We give a non-trivial linear program to compute $\alpha^*(G|H)$ and discuss some of its properties. We show that $$\alpha^*(G|H)\geq \frac{X(G)}{X(H)},$$ where $X(G)$ can be the independence number, the zero-error Shannon capacity, the fractional independence number, the Lov\'{a}sz number, or Schrijver's or Szegedy's variants of the Lov\'{a}sz number of a graph $G$. This inequality is the first explicit non-trivial upper bound on the ratio of these invariants for two arbitrary graphs, and it can also be used to obtain upper or lower bounds for the invariants themselves. As explicit applications, we present new upper bounds on the ratio of the zero-error Shannon capacities of two Cayley graphs and compute new lower bounds on the Shannon capacity of certain Johnson graphs (yielding the exact value of their Haemers number). Moreover, we show that the relative fractional independence number can be used to present a stronger version of the well-known No-Homomorphism Lemma. The No-Homomorphism Lemma is widely used to show the non-existence of a homomorphism between two graphs and also gives an upper bound on the independence number of a graph. Our extension of the No-Homomorphism Lemma is computationally more accessible than its original version.
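Under the definition above, any single witness graph $W$ certifies a lower bound on $\alpha^*(G|H)$. A small sketch using networkx (the graphs and the choice of $W$ are ours; the paper's linear program, which computes $\alpha^*$ exactly, is not reproduced here):

```python
import networkx as nx

def independence_number(G):
    """alpha(G) = clique number of the complement (fine for tiny graphs)."""
    return max(len(c) for c in nx.find_cliques(nx.complement(G)))

def ratio_lower_bound(G, H, W):
    """Any fixed W certifies alpha(G x W) / alpha(H x W) <= alpha*(G|H)."""
    return (independence_number(nx.strong_product(G, W))
            / independence_number(nx.strong_product(H, W)))

G, H = nx.cycle_graph(5), nx.complete_graph(2)
# W = K1 recovers alpha(C5)/alpha(K2) = 2/1; richer W can only improve this.
print(ratio_lower_bound(G, H, nx.complete_graph(1)))  # 2.0
```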
We assume to be given structural equations over discrete variables inducing a directed acyclic graph, namely, a structural causal model, together with data about its internal nodes. The question we want to answer is how to compute bounds for partially identifiable counterfactual queries from such an input. We start by giving a map from structural causal models to credal networks. This allows us to compute exact counterfactual bounds via algorithms for credal networks on a subclass of structural causal models. Exact computation will be inefficient in general given that, as we show, causal inference is NP-hard even on polytrees. We therefore target approximate bounds via a causal EM scheme. We evaluate their accuracy by providing credible intervals on the quality of the approximation; we show through a synthetic benchmark that the EM scheme delivers accurate results in a fair number of runs. In the course of the discussion, we also point out what seems to be an overlooked limitation of the trending idea that counterfactual bounds can be computed without knowledge of the structural equations. We also present a real case study on palliative care to show how our algorithms can readily be used for practical purposes.
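For a fully specified SCM, i.e. known equations and a known exogenous distribution, a counterfactual is point-identified by the usual abduction-action-prediction steps; the toy sketch below (a made-up two-variable model) illustrates that pipeline. The bounds studied in the paper arise precisely because the exogenous distribution is not known, so one must range over all distributions consistent with the observed data.

```python
from itertools import product

# Toy SCM: X = U1, Y = X xor U2, with independent binary exogenous U1, U2.
p_u = {(u1, u2): [0.6, 0.4][u1] * [0.7, 0.3][u2]
       for u1, u2 in product([0, 1], repeat=2)}
f_x = lambda u1: u1
f_y = lambda x, u2: x ^ u2

def counterfactual(evidence_x, evidence_y, do_x):
    """P(Y_{X=do_x} = 1 | X=evidence_x, Y=evidence_y) via
    abduction (posterior over U), action (do(X)), prediction."""
    posterior = {u: p for u, p in p_u.items()
                 if f_x(u[0]) == evidence_x and f_y(evidence_x, u[1]) == evidence_y}
    z = sum(posterior.values())
    return sum(p for u, p in posterior.items() if f_y(do_x, u[1]) == 1) / z

# 1.0 here: the toy equations are invertible, so abduction pins U down exactly.
print(counterfactual(evidence_x=0, evidence_y=0, do_x=1))
```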
Treewidth is an important parameter that yields tractability for many problems. For example, graph problems expressible in Monadic Second Order (MSO) logic and QUANTIFIED SAT or, more generally, QUANTIFIED CSP, are fixed-parameter tractable parameterized by the treewidth of the input's (primal) graph plus the length of the MSO-formula [Courcelle, Information & Computation 1990] and the quantifier rank [Chen, ECAI 2004], respectively. The algorithms generated by these (meta-)results have running times whose dependence on treewidth is a tower of exponents. A conditional lower bound by Fichte et al. [LICS 2020] shows that, for QUANTIFIED SAT, the height of this tower is equal to the number of quantifier alternations. Lower bounds showing that at least double-exponential factors in the running time are necessary exhibit the extraordinary computational hardness of such problems, and they are rare: there are very few (for treewidth tw and vertex cover vc parameterizations), and they are for $\Sigma_2^p$-, $\Sigma_3^p$- or #NP-complete problems. We show, for the first time, that it is not necessary to go higher up in the polynomial hierarchy to obtain such lower bounds. Specifically, for the well-studied NP-complete metric graph problems METRIC DIMENSION, STRONG METRIC DIMENSION, and GEODETIC SET, we prove that they do not admit $2^{2^{o(tw)}} \cdot n^{O(1)}$-time algorithms, even on bounded-diameter graphs, unless the ETH fails. For STRONG METRIC DIMENSION, this lower bound holds even for vc. This is impossible for the other two, as they admit $2^{O({vc}^2)} \cdot n^{O(1)}$-time algorithms. We show that, unless the ETH fails, they do not admit $2^{o({vc}^2)}\cdot n^{O(1)}$-time algorithms, thereby adding to the short list of problems admitting such lower bounds. The latter results also yield lower bounds on the vertex-kernel sizes. We complement all our lower bounds with matching upper bounds.
Multi-core neuromorphic systems typically use on-chip routers to transmit spikes among cores. These routers require significant memory resources and consume a large part of the overall system's energy budget. A promising alternative to standard CMOS- and SRAM-based routers is to exploit the features of memristive crossbar arrays and use them as programmable switch-matrices that route spikes. However, scaling these crossbar arrays presents physical challenges, such as `IR drop' on the metal lines due to parasitic resistance, and the accumulation of leakage currents through the multiple `off' memristors on active lines. While reliability challenges of this type have been extensively studied in synchronous systems for compute-in-memory matrix-vector multiplication (MVM) accelerators and storage-class memory, little effort has been devoted so far to characterizing the scaling limits of memristor-based crossbar routers. In this paper, we study the challenges of memristive crossbar arrays when used as routing channels to transmit spikes in asynchronous Spiking Neural Network (SNN) hardware. We validate our analytical findings with experimental results obtained from a 4K-ReRAM chip, demonstrating its functionality as a routing crossbar. We determine the functionality bounds on the routing imposed by the IR-drop and leakage problems, based on experimental measurements, modeling, and circuit simulations in a 22nm FDSOI technology. This work highlights the constraints of this approach and provides useful guidelines for engineering memristor properties in memristive crossbar routers for building multi-core asynchronous neuromorphic systems.
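As a back-of-the-envelope illustration of the IR-drop effect (a toy 1D nodal-analysis model with made-up resistance values, not the paper's measured 22nm data):

```python
import numpy as np

def line_ir_drop(n_cells, v_in=1.0, r_wire=1.0, r_cell=1e4):
    """Node voltages along one crossbar line: driver -> r_wire -> node 1 -> ...,
    with each node also shunted to ground through a cell of resistance r_cell.
    Solves Kirchhoff's current law G v = i (simple nodal analysis)."""
    g_w, g_c = 1.0 / r_wire, 1.0 / r_cell
    G = np.zeros((n_cells, n_cells))
    for k in range(n_cells):
        G[k, k] = g_c + g_w + (g_w if k < n_cells - 1 else 0.0)
        if k > 0:
            G[k, k - 1] = G[k - 1, k] = -g_w
    i = np.zeros(n_cells)
    i[0] = g_w * v_in          # the driver injects current through segment 1
    return np.linalg.solve(G, i)

for n in (64, 256, 1024):
    print(n, line_ir_drop(n)[-1])   # far-end voltage sags as the line grows
```

Even with every cell nominally `off' (here 10 kOhm per cell), the distributed wire resistance and the accumulated shunt leakage attenuate the signal along the line, which is the scaling limit the paper quantifies.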
The criticality problem in nuclear engineering asks for the principal eigen-pair of a Boltzmann operator describing neutron transport in a reactor core. Reliably designing and controlling such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach is based on first contriving iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity properties, and that exploit only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigen-pair. This is carried out in full first for a Newton scheme. Since Newton's method is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but non-symmetric operators, which prevent the standard arguments used in the finite-dimensional case. Our main point is that we avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of the remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
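For readers who want the finite-dimensional picture, the power iteration in question is the classical scheme below; the paper formulates it in function space, and its convergence rate is governed by exactly the eigenvalue gap discussed at the end of the abstract.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10_000, seed=0):
    """Power iteration for the dominant eigen-pair of a matrix A; a
    finite-dimensional stand-in for the function-space iteration.
    Assumes a simple, real dominant eigenvalue."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        lam_new = v @ w                 # Rayleigh-quotient estimate
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, v

A = np.array([[2.0, 1.0], [0.0, 1.0]])  # eigenvalues 2 and 1: gap ratio 1/2
print(power_iteration(A)[0])            # converges to 2 at rate |lam2/lam1|
```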
In this paper we derive tight lower bounds resolving the hardness status of several fundamental weighted matroid problems. One notable example is budgeted matroid independent set, for which we show there is no fully polynomial-time approximation scheme (FPTAS), implying that the Efficient PTAS of [Doron-Arad, Kulik and Shachnai, SOSA 2023] is the best possible. Furthermore, we show that there is no pseudo-polynomial time algorithm for exact-weight matroid independent set, implying that the algorithm of [Camerini, Galbiati and Maffioli, J. Algorithms 1992] for representable matroids cannot be generalized to arbitrary matroids. Similarly, we show there is no FPTAS for constrained minimum basis of a matroid or for knapsack cover with a matroid, implying the existing Efficient PTAS for the former is optimal. For all of the above problems, we obtain unconditional lower bounds in the oracle model, where the independent sets of the matroid can be accessed only via a membership oracle. We complement these results by showing that the same lower bounds hold under standard complexity assumptions, even if the matroid is encoded as part of the instance. All of our bounds are based on a specifically structured family of paving matroids.
Computational methods for thermal radiative transfer problems exhibit high computational costs and a prohibitive memory footprint when the spatial and directional domains are finely resolved. A strategy to reduce such computational costs is dynamical low-rank approximation (DLRA), which represents and evolves the solution on a low-rank manifold, thereby significantly decreasing computational and memory requirements. Efficient discretizations for the DLRA evolution equations need to be carefully constructed to guarantee stability while enabling mass conservation. In this work, we focus on the Su-Olson closure and derive a stable discretization through an implicit coupling of energy and radiation density. Moreover, we propose a rank-adaptive strategy to preserve local mass conservation. Numerical results are presented which showcase the accuracy and efficiency of the proposed method.
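As a sketch of the kind of low-rank evolution DLRA performs, here is the classical first-order projector-splitting (KSL) integrator of Lubich and Oseledets with explicit-Euler substeps, applied to a made-up linear flow. The paper's scheme additionally couples energy and radiation density implicitly and adapts the rank for local mass conservation, none of which is shown here.

```python
import numpy as np

def ksl_step(U, S, Vt, F, h):
    """One first-order projector-splitting (KSL) step of dynamical low-rank
    approximation for dY/dt = F(Y), with Y ~ U @ S @ Vt kept at fixed rank."""
    K = U @ S + h * F(U @ S @ Vt) @ Vt.T        # K-step: evolve K = U S
    U, S = np.linalg.qr(K)
    S = S - h * U.T @ F(U @ S @ Vt) @ Vt.T      # S-step: integrated backwards
    L = Vt.T @ S.T + h * F(U @ S @ Vt).T @ U    # L-step: evolve L = V S^T
    V, St = np.linalg.qr(L)
    return U, St.T, V.T

# Hypothetical linear flow dY/dt = A Y + Y A, evolved at rank 5.
n, r, h = 128, 5, 1e-2
A = -np.diag(np.arange(n) / n)
F = lambda Y: A @ Y + Y @ A
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S, Vt = np.diag(1.0 / (1.0 + np.arange(r, dtype=float))), V.T
for _ in range(100):
    U, S, Vt = ksl_step(U, S, Vt, F, h)
print(np.linalg.matrix_rank(U @ S @ Vt))        # the solution stays at rank 5
```

The memory savings are the point: the full solution is an $n \times n$ array, while the factors $U$, $S$, $Vt$ require only $O(nr + r^2)$ storage.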
We characterise the classes of tournaments with tractable first-order model checking. For every hereditary class of tournaments $\mathcal T$, first-order model checking either is fixed-parameter tractable or is AW$[*]$-hard. This dichotomy coincides with the fact that $\mathcal T$ has either bounded or unbounded twin-width, and that the growth of $\mathcal T$ is either at most exponential or at least factorial. From the model-theoretic point of view, we show that NIP classes of tournaments coincide with those of bounded twin-width. Twin-width is also characterised by three infinite families of obstructions: $\mathcal T$ has bounded twin-width if and only if it excludes one tournament from each family. This generalises results of Bonnet et al. on ordered graphs. The key to these results is a polynomial-time algorithm which takes as input a tournament $T$ and computes a linear order $<$ on $V(T)$ such that the twin-width of the birelation $(T,<)$ is at most some function of the twin-width of $T$. Since approximating twin-width can be done in polynomial time for an ordered structure $(T,<)$, this provides a polynomial-time approximation of twin-width for tournaments. Our results extend to oriented graphs with stable sets of bounded size, which may also be augmented by arbitrary binary relations.