A technique inspired by radial basis function-generated finite differences (RBF-FD) is described for evaluating definite integrals over bounded volumes with smooth boundaries in three dimensions. Such methods are needed in many areas of applied mathematics, mathematical physics, and myriad other application areas. Previous approaches required restrictive uniformity in the node set, which the algorithm presented here does not. Using the RBF-FD approach, the proposed algorithm computes quadrature weights for $N$ arbitrarily scattered nodes in only $O(N \log N)$ operations while achieving high orders of accuracy.
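To illustrate how RBF quadrature weights arise in the simplest setting, the sketch below computes global RBF weights on $[0,1]$ with Gaussian kernels, whose moments are known in closed form; this is a one-dimensional toy in our own notation, not the paper's $O(N \log N)$ RBF-FD scheme.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def rbf_quadrature_weights(nodes, eps=8.0):
    """Global RBF quadrature weights on [0, 1] with Gaussian kernels.

    Solves Phi w = b, where Phi[i, j] = exp(-(eps*(x_i - x_j))^2) and
    b[j] = integral_0^1 exp(-(eps*(x - x_j))^2) dx, which has a
    closed form in terms of the error function."""
    x = np.asarray(nodes)
    Phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    b = np.sqrt(np.pi) / (2 * eps) * (erf(eps * (1 - x)) + erf(eps * x))
    # Phi is symmetric, so solving Phi w = b gives integral(f) ~ w . f.
    return np.linalg.solve(Phi, b)

x = np.sort(np.random.rand(15))       # arbitrarily scattered nodes
w = rbf_quadrature_weights(x)
print(w @ np.exp(x), np.e - 1)        # approximates integral of e^x
\end{verbatim}
The dense solve above costs $O(N^3)$ and becomes ill-conditioned as $N$ grows, which is precisely the kind of bottleneck a local RBF-FD construction is designed to avoid.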
Majority-SAT is the problem of determining whether an input $n$-variable formula in conjunctive normal form (CNF) has at least $2^{n-1}$ satisfying assignments. Majority-SAT and related problems have been studied extensively in various AI communities interested in the complexity of probabilistic planning and inference. Although Majority-SAT has been known to be PP-complete for over 40 years, the complexity of a natural variant has remained open: Majority-$k$SAT, where the input CNF formula is restricted to have clause width at most $k$. We prove that for every $k$, Majority-$k$SAT is in P. In fact, for any positive integer $k$ and rational $\rho \in (0,1)$ with bounded denominator, we give an algorithm that can determine whether a given $k$-CNF has at least $\rho \cdot 2^n$ satisfying assignments, in deterministic linear time (whereas the previous best-known algorithm ran in exponential time). Our algorithms have interesting positive implications for counting complexity and the complexity of inference, significantly reducing the known complexities of related problems such as E-MAJ-$k$SAT and MAJ-MAJ-$k$SAT. At the heart of our approach is an efficient method for solving threshold counting problems by extracting sunflowers from the set system corresponding to a $k$-CNF. We also show that the tractability of Majority-$k$SAT is somewhat fragile. For the closely related GtMajority-SAT problem (where we ask whether a given formula has more than $2^{n-1}$ satisfying assignments), which is known to be PP-complete, we show that GtMajority-$k$SAT is in P for $k \le 3$ but becomes NP-complete for $k \ge 4$. These results are counterintuitive, because the ``natural'' classification of these problems would have been PP-completeness, and because there is a stark difference between the complexities of GtMajority-$k$SAT and Majority-$k$SAT for all $k \ge 4$.
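For concreteness, the threshold-counting decision problem can be stated as a brute-force reference implementation, i.e., the exponential-time baseline that the linear-time algorithm replaces (DIMACS-style literal encoding, our own convention):
\begin{verbatim}
from itertools import product
from fractions import Fraction

def has_rho_fraction_sat(clauses, n, rho):
    """Decide whether a CNF over variables 1..n has at least rho * 2^n
    satisfying assignments by exhaustive enumeration (O(2^n) time).
    A clause is a tuple of nonzero ints: v means variable v is true,
    -v means variable v is false."""
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
               for c in clauses):
            count += 1
    return Fraction(count, 2 ** n) >= rho

# Majority-2SAT instance (x1 or x2) and (not x1 or x3): exactly half
# of the 8 assignments satisfy it, so the answer is True.
print(has_rho_fraction_sat([(1, 2), (-1, 3)], 3, Fraction(1, 2)))
\end{verbatim}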
Submodular function minimization (SFM) and matroid intersection are fundamental discrete optimization problems with applications in many fields. It is well known that both can be solved by making $\mathrm{poly}(N)$ queries to a relevant oracle (an evaluation oracle for SFM and a rank oracle for matroid intersection), where $N$ denotes the universe size. However, all known polynomial-query algorithms are highly adaptive, requiring at least $N$ rounds of querying the oracle. A natural question is whether these problems can be solved efficiently in a highly parallel manner, namely, with $\mathrm{poly}(N)$ queries using only poly-logarithmic rounds of adaptivity. An important step towards understanding the adaptivity needed for efficient parallel SFM was taken recently in the work of Balkanski and Singer, who showed that any SFM algorithm making $\mathrm{poly}(N)$ queries necessarily requires $\Omega(\log N/\log \log N)$ rounds. This left open the possibility of efficient SFM algorithms in poly-logarithmic rounds. For matroid intersection, even the possibility of a constant-round, $\mathrm{poly}(N)$-query algorithm had not previously been ruled out. In this work, we prove that any (possibly randomized) algorithm for submodular function minimization or matroid intersection making $\mathrm{poly}(N)$ queries requires $\tilde{\Omega}\left(N^{1/3}\right)$ rounds of adaptivity. In fact, we show a polynomial lower bound on the number of rounds of adaptivity even for algorithms that make at most $2^{N^{1-\delta}}$ queries, for any constant $\delta > 0$. Therefore, even though SFM and matroid intersection are efficiently solvable, they are not highly parallelizable in the oracle model.
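To make the notion of adaptivity concrete, the toy harness below (our own illustration, not the lower-bound construction) counts rounds: a round is one batch of oracle queries, and queries in a batch may depend only on answers from earlier batches.
\begin{verbatim}
class RoundCountingOracle:
    """Evaluation oracle that counts rounds of adaptivity: each call
    to ask() is one round, i.e., one batch of parallel queries."""

    def __init__(self, f):
        self.f, self.rounds, self.queries = f, 0, 0

    def ask(self, batch):
        self.rounds += 1
        self.queries += len(batch)
        return [self.f(frozenset(S)) for S in batch]

# A submodular function: the cut function of a path on {0, ..., 4}.
edges = [(i, i + 1) for i in range(4)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)

oracle = RoundCountingOracle(cut)
# A non-adaptive strategy queries all singletons in a single round.
vals = oracle.ask([{i} for i in range(5)])
print(vals, oracle.rounds, oracle.queries)  # 1 round, 5 queries
\end{verbatim}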
In this paper we obtain error bounds for fully discrete approximations of infinite horizon problems via the dynamic programming approach. It is well known that, for a time discretization with positive step size $h$, an error bound of size $h$ can be proved for the difference between the value function (the viscosity solution of the Hamilton-Jacobi-Bellman equation corresponding to the infinite horizon problem) and the value function of the discrete-time problem. However, when a spatial discretization based on elements of size $k$ is also included, only an error bound of size $O(k/h)$ can be found in the literature for the error between the value functions of the continuous problem and the fully discrete problem. In this paper we revise the error bound of the fully discrete method and prove, under assumptions similar to those of the time-discrete case, that the error of the fully discrete case is in fact $O(h+k)$, which gives first order in time and space for the method. This error bound matches the numerical experiments of many papers in the literature, in which the behaviour $1/h$ from the bound $O(k/h)$ has not been observed.
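Schematically, writing $v$ for the value function, $v_h$ for its time-discrete counterpart, and $v_h^k$ for the fully discrete one (standard notation assumed), the bounds discussed above read
\[
\|v - v_h\|_\infty \le C h \quad \text{(classical)}, \qquad
\|v - v_h^k\|_\infty \le C\Bigl(h + \frac{k}{h}\Bigr) \quad \text{(previous literature)}, \qquad
\|v - v_h^k\|_\infty \le C\,(h + k) \quad \text{(this paper)}.
\]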
We revisit Min-Mean-Cycle, the classical problem of finding a cycle in a weighted directed graph with minimum mean weight. Despite an extensive algorithmic literature, previous work falls short of a near-linear runtime in the number of edges $m$. We propose an approximation algorithm that, for graphs with polylogarithmic diameter, achieves a near-linear runtime. In particular, this is the first algorithm whose runtime scales in the number of vertices $n$ as $\tilde{O}(n^2)$ for the complete graph. Moreover, unconditionally on the diameter, the algorithm uses only $O(n)$ memory beyond reading the input, making it ``memory-optimal''. Our approach is based on solving a linear programming relaxation using entropic regularization, which reduces the problem to Matrix Balancing -- \`a la the popular reduction of Optimal Transport to Matrix Scaling. The algorithm is practical and simple to implement.
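For reference, the classical exact baseline is Karp's $O(nm)$ dynamic program, sketched below in our own notation; the algorithm proposed here instead approximates the LP relaxation via entropic regularization.
\begin{verbatim}
import math

def min_mean_cycle(n, edges):
    """Karp's O(nm) algorithm: minimum mean weight of a directed
    cycle, or None if the graph is acyclic.
    edges: list of (u, v, w) with u, v in {0, ..., n-1}."""
    INF = math.inf
    # d[k][v] = min weight of a walk with exactly k edges ending at v
    # (starting anywhere, emulated by d[0][v] = 0 for all v).
    d = [[0.0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return None if best == INF else best

# Two cycles with mean weights 2 and 3; the minimum mean is 2.
print(min_mean_cycle(4, [(0, 1, 1), (1, 0, 3),
                         (1, 2, 3), (2, 3, 3), (3, 1, 3)]))
\end{verbatim}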
Deterministic models for radiation transport describe the density of radiation particles moving through a background material. In radiation therapy applications, the phase space of this density is composed of energy, spatial position, and direction of flight. The resulting six-dimensional phase space prohibits the fine numerical discretizations that are essential for the construction of accurate and reliable treatment plans. In this work, we tackle the high-dimensional phase space through a dynamical low-rank approximation of the particle density. Dynamical low-rank approximation (DLRA) evolves the solution on a low-rank manifold in time. Interpreting the energy variable as a pseudo-time lets us employ the DLRA framework to represent the solution of the radiation transport equation on a low-rank manifold for every energy. Stiff scattering terms are treated through an efficient implicit energy discretization, and a rank-adaptive integrator is chosen to dynamically adapt the rank in energy. To facilitate the use of boundary conditions and reduce the overall rank, the radiation transport equation is split into collided and uncollided particles through a collision source method. Uncollided particles are described by a directed quadrature set guaranteeing low computational costs, whereas collided particles are represented by a low-rank solution. It can be shown that the presented method is $L^2$-stable under a time-step restriction that does not depend on the stiff scattering terms. Moreover, the implicit treatment of scattering does not require numerical inversions of matrices. Numerical results for radiation therapy configurations as well as the line source benchmark underline the efficiency of the proposed method.
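Schematically, with the energy $E$ playing the role of a pseudo-time, the collided part of the particle density $\psi$ is constrained to a rank-$r$ manifold of the form (standard DLRA notation assumed)
\[
\psi(E, x, \Omega) \approx \sum_{i=1}^{r}\sum_{j=1}^{r} X_i(E, x)\, S_{ij}(E)\, V_j(E, \Omega),
\]
where the rank-adaptive integrator evolves the factors $X$, $S$, $V$ in $E$ and adjusts the rank $r$ on the fly.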
We propose new query applications of the well-known randomized incremental construction of the Trapezoidal Search DAG (TSD) on a set of $n$ line segments in the plane, where queries are allowed to be any axis-aligned window. We show that our algorithm reports the $m$ trapezoids that are intersected by the query in $\mathcal{O}(m+\log n)$ expected time, regardless of the spatial location of the segment set and the query. In the case where the query is a {\em vertical segment}, the query time bound reduces to $\mathcal{O}(k +\log n)$, where $k$ is the number of segments that are intersected. This improves on the query and space bounds of the well-known Segment Tree based approach, which is to date the theoretical bottleneck for optimal query time. In the case where the set of segments is a connected planar subdivision, this method can easily be extended to an algorithm that reports the $k$ segments intersecting an axis-aligned query window in $\mathcal{O}(k + \log n)$ expected time. Our publicly available implementation handles degeneracies exactly, including segments with overlap and multi-intersections. Experiments show that the method is practical and provides more reliable query times than R-trees and the Segment Tree based data structure on real-world and synthetic data sets.
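To convey the flavor of window reporting, the sketch below (a deliberate simplification in our own notation, not the TSD algorithm itself) assumes one seed trapezoid intersecting the window has already been located, which the DAG provides in $\mathcal{O}(\log n)$ expected time, and collects the remaining intersected trapezoids by a flood fill over neighbor links; the trapezoid record and the geometric predicate are hypothetical.
\begin{verbatim}
from collections import deque

def report_window(seed, window, intersects):
    """Collect all trapezoids intersecting an axis-aligned window by
    flood fill from a seed trapezoid known to intersect it.
    `intersects(t, window)` is a geometric predicate (assumed given);
    each trapezoid exposes an iterable `neighbors` attribute."""
    seen, out, queue = {id(seed)}, [], deque([seed])
    while queue:
        t = queue.popleft()
        out.append(t)
        for nb in t.neighbors:
            if id(nb) not in seen and intersects(nb, window):
                seen.add(id(nb))
                queue.append(nb)
    return out
\end{verbatim}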
We investigate high-order Convolution Quadrature (CQ) methods for the solution of the wave equation in unbounded domains in two dimensions that rely on Nystr\"om discretizations for the solution of the ensemble of associated Laplace-domain modified Helmholtz problems. We consider two classes of CQ discretizations, one based on linear multistep methods and the other based on Runge-Kutta methods, in conjunction with Nystr\"om discretizations based on Alpert and QBX quadratures of Boundary Integral Equation (BIE) formulations of the Laplace-domain Helmholtz problems with complex wavenumbers. We present a variety of accuracy tests that showcase the high-order convergence in time (up to and including fifth order) that the Nystr\"om CQ discretizations are capable of delivering for a variety of two-dimensional scatterers and types of boundary conditions.
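For orientation, the linear-multistep flavor of CQ approximates the causal time-domain convolution by a discrete convolution whose weights are read off from the Laplace-domain operator evaluated at complex frequencies (a standard formula, stated here in our notation):
\[
\bigl(K(\partial_t)\, g\bigr)(t_n) \approx \sum_{j=0}^{n} \omega_j\, g(t_{n-j}),
\qquad
K\!\left(\frac{\gamma(\zeta)}{\Delta t}\right) = \sum_{j=0}^{\infty} \omega_j\, \zeta^j,
\]
where $\gamma$ is the quotient of the generating polynomials of the multistep method; evaluating $K(s)$ at the resulting complex frequencies is exactly the ensemble of Laplace-domain modified Helmholtz problems handled by the Nystr\"om discretizations.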
Quantified responsibility ascription in complex scenarios is of crucial importance in current debates regarding collective action, for example in the face of various environmental crises. Within this endeavor, we recently proposed considering a probabilistic view of causation, rather than the deterministic views employed in much of the previous formal responsibility literature, and presented a corresponding framework as well as initial candidate functions applicable to a range of scenarios. In the current paper, we extend this contribution by formally evaluating the qualities of the proposed functions through an axiomatic approach. First, we decompose responsibility ascription into distinct contributing functions, and then define a number of desirable properties, or axioms, for these. Afterwards, we evaluate the proposed functions with regard to compliance with these axioms. We find an incompatibility between the axioms determining upper and lower bounds in one of the contributing functions, which imposes a choice between two variants: one satisfying the upper-bound axiom and one satisfying the lower-bound axiom. For the other contributing function we are able to axiomatically characterize one specific function. As the aforementioned incompatibility extends to the combined responsibility function, we finally present maximally axiom-compliant combined functions for each variant.
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, the field of linear and nonlinear dimension reduction techniques has developed vastly in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located ``near'' the lower-dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
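A minimal sketch of one such projection step follows, assuming a degree-1 local fit for brevity (the method itself uses degree-$m$ local polynomials to reach the $O(h^{m+1})$ rates); all names are our own.
\begin{verbatim}
import numpy as np

def mls_project(p, data, h, d):
    """Project a noisy point p in R^n toward a d-dimensional manifold
    sampled by `data` (rows = points): weighted local PCA gives a
    tangent plane, then a weighted linear fit of the normal residuals
    is evaluated at p's local coordinates."""
    w = np.exp(-((data - p) ** 2).sum(1) / h ** 2)    # locality weights
    mu = (w[:, None] * data).sum(0) / w.sum()         # weighted centroid
    X = data - mu
    _, vecs = np.linalg.eigh(X.T @ (w[:, None] * X))  # weighted covariance
    U = vecs[:, -d:]                      # approximate tangent basis
    t = X @ U                             # local coordinates
    g = X - t @ U.T                       # normal residuals in R^n
    A = np.hstack([np.ones((len(t), 1)), t])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * A, sw * g, rcond=None)
    tp = (p - mu) @ U
    return mu + tp @ U.T + np.hstack([1.0, tp]) @ coef

# Noisy unit circle (d = 1) embedded in R^3 (n = 3).
th = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(th), np.sin(th), np.zeros_like(th)]
pts += 0.02 * np.random.randn(*pts.shape)
q = mls_project(np.array([1.05, 0.0, 0.03]), pts, h=0.3, d=1)
print(np.linalg.norm(q[:2]))   # close to 1, i.e., near the circle
\end{verbatim}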
We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm that does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in the rows and columns of $A$. We complement our algorithm with a matching query-complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property-testing framework for testing numerical properties of a real-valued matrix $A$ more generally, including the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded-entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
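For contrast, the submatrix-reading strategy that the $\Omega(d^2/\epsilon^2)$ lower bound applies to is easy to state; below is a hedged sketch over the reals (the sampling sizes are illustrative, and the paper's algorithm instead queries a structured non-submatrix pattern of rows and columns).
\begin{verbatim}
import numpy as np

def naive_rank_tester(A, d, eps, trials=10):
    """Reject (return False) if some randomly sampled submatrix
    certifies rank > d; otherwise accept. Each trial reads an
    s x s submatrix with s on the order of d / eps."""
    n = A.shape[0]
    s = min(n, int(np.ceil(2 * (d + 1) / eps)))
    rng = np.random.default_rng()
    for _ in range(trials):
        rows = rng.choice(n, size=s, replace=False)
        cols = rng.choice(n, size=s, replace=False)
        if np.linalg.matrix_rank(A[np.ix_(rows, cols)]) > d:
            return False   # witness found: rank exceeds d
    return True            # consistent with rank(A) <= d

A = np.outer(np.arange(6.0), np.ones(6))     # a rank-1 matrix
print(naive_rank_tester(A, d=1, eps=0.25))   # -> True
\end{verbatim}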