We introduce the problem of finding a satisfying assignment to a CNF formula that must further belong to a prescribed input subspace. Equivalent formulations of the problem include finding a point outside a union of subspaces (the Union-of-Subspace Avoidance (USA) problem) and finding a common zero of a system of polynomials over $\mathbb{F}_2$, each of which is a product of affine forms. We focus on the case of $k$-CNF formulas (the $k$-SUB-SAT problem). Clearly, it is no easier than $k$-SAT, and it might be harder. Indeed, via simple reductions we show NP-hardness for $k=2$ and W[1]-hardness parameterized by the co-dimension of the subspace. We also prove that the optimization version Max-2-SUB-SAT is NP-hard to approximate better than the trivial $3/4$ ratio, even on satisfiable instances. On the algorithmic front, we investigate fast exponential algorithms that give non-trivial savings over brute-force algorithms. We give a simple branching algorithm with running time $O^*(1.5^r)$ for 2-SUB-SAT, where $r$ is the subspace dimension, and an $O^*(1.4312^n)$-time algorithm, where $n$ is the number of variables. For $k > 2$, while known algorithms for solving a system of degree-$k$ polynomial equations already imply a solution with running time $2^{r(1-1/(2k))}$, we explore a more combinatorial approach. For instance, based on the notion of critical variables, we give an algorithm with running time $\binom{n}{\le t} 2^{n-n/k}$, where $n$ is the number of variables and $t$ is the co-dimension of the subspace. This improves upon the running time of the polynomial-equations approach for small co-dimension. Our algorithm also uses polynomial space, in contrast to the algebraic approach, which requires exponential space.
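To make the setup concrete, here is a minimal brute-force baseline for 2-SUB-SAT in Python: it enumerates all $2^r$ points of the input subspace and tests each against the 2-CNF. This is only the trivial baseline the abstract compares against, not the paper's $O^*(1.5^r)$ branching algorithm, and the encoding (a basis plus an offset for the subspace, signed integers for literals) is our own illustrative choice.

```python
from itertools import product

# Brute-force baseline for 2-SUB-SAT (illustrative sketch, not the
# paper's O*(1.5^r) branching algorithm): enumerate all 2^r points of
# the subspace spanned by `basis` (shifted by `offset`) and test each
# against the 2-CNF `clauses`. A literal +v / -v asks that variable
# v-1 be true / false.
def sub_sat_brute_force(clauses, basis, offset):
    for coeffs in product([0, 1], repeat=len(basis)):   # 2^r candidates
        x = list(offset)
        for c, b in zip(coeffs, basis):
            if c:
                x = [xi ^ bi for xi, bi in zip(x, b)]   # addition over F_2
        if all(any((x[abs(l) - 1] == 1) == (l > 0) for l in cl) for cl in clauses):
            return x                                    # satisfying point found
    return None

# (x1 or x2) and (not x1 or x3), subspace of dimension r = 2 inside F_2^3.
clauses = [[1, 2], [-1, 3]]
basis = [[1, 0, 1], [0, 1, 0]]
print(sub_sat_brute_force(clauses, basis, [0, 0, 0]))   # -> [0, 1, 0]
```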
We introduce and analyze various Regularized Combined Field Integral Equation (CFIER) formulations of the time-harmonic Navier equations in media with piecewise-constant material properties. These formulations can be derived systematically starting from suitable coercive approximations of Dirichlet-to-Neumann (DtN) operators, and we present a periodic pseudodifferential calculus framework within which the well-posedness of CFIER formulations can be established. We also use the DtN approximations to derive and analyze Optimized Schwarz (OS) methods for the solution of elastodynamics transmission problems. The pseudodifferential calculus we develop in this paper relies on careful singularity splittings of the kernels of Navier boundary integral operators, which are also the basis of high-order Nystr\"om quadratures for their discretizations. Based on these high-order discretizations, we investigate the rate of convergence of iterative solvers applied to CFIER and OS formulations of scattering and transmission problems. We present a variety of numerical results illustrating that the CFIER methodology leads to important computational savings over the classical CFIE one whenever iterative solvers are used for the solution of the ensuing discretized boundary integral equations. Finally, we show that the OS methods are competitive in the high-frequency, high-contrast regime.
In recent years, finite tensor products of reproducing kernel Hilbert spaces (RKHSs) of Gaussian kernels on the one hand, and of Hermite spaces on the other hand, have been considered in the tractability analysis of multivariate problems. In the present paper we study countably infinite tensor products for both types of spaces. We show that the incomplete tensor product in the sense of von Neumann may be identified with an RKHS whose domain is a proper subset of the sequence space $\mathbb{R}^\mathbb{N}$. Moreover, we show that each tensor product of spaces of Gaussian kernels with square-summable shape parameters is isometrically isomorphic to a tensor product of Hermite spaces; the corresponding isomorphism is given explicitly, respects point evaluations, and is also an $L^2$-isometry. This result transfers directly to the case of finite tensor products. Furthermore, we provide regularity results for Hermite spaces of functions of a single variable.
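For orientation, one common convention in this literature (and the one we assume here; the notation $\gamma_j$ for the shape parameters is ours) writes the univariate Gaussian kernel and the induced tensor-product kernel on sequences as follows, with the square-summability condition being the one referenced above.

```latex
% Univariate Gaussian kernel with shape parameter \gamma_j and the
% induced tensor-product kernel on sequences (notation ours).
K_{\gamma_j}(x,y) = e^{-\gamma_j^2 (x-y)^2}, \qquad
K(\boldsymbol{x},\boldsymbol{y}) = \prod_{j=1}^{\infty} K_{\gamma_j}(x_j,y_j),
\qquad \text{with } \sum_{j=1}^{\infty} \gamma_j^2 < \infty .
```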
Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and lower bounds on the optimal policy cost of discounted-cost Markov decision processes (MDPs). Formulating an ALP requires (i) basis functions, the linear combination of which defines the VFA, and (ii) a state-relevance distribution, which determines the relative importance of different states in the ALP objective for the purpose of minimizing VFA error. Both of these choices are typically heuristic: basis function selection relies on domain knowledge, while the state-relevance distribution is specified using the frequency of states visited by a heuristic policy. We propose a self-guided sequence of ALPs that embeds random basis functions obtained via inexpensive sampling and uses the VFA from the previous iteration to guide VFA computation in the current iteration. Self-guided ALPs mitigate the need for domain knowledge during basis function selection, as well as the impact of the initial choice of the state-relevance distribution, thus significantly reducing the ALP implementation burden. We establish high-probability error bounds on the VFAs from this sequence and show that a worst-case measure of policy performance is improved. We find that these favorable implementation and theoretical properties translate into encouraging numerical results on perishable inventory control and options pricing applications, where self-guided ALP policies improve upon policies from problem-specific methods. More broadly, our research takes a meaningful step toward application-agnostic policies and bounds for MDPs.
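As a reference point, a standard discounted-cost ALP (in the style of de Farias and Van Roy) takes the form below; the symbols (basis matrix $\Phi$, weight vector $\beta$, state-relevance distribution $\nu$, cost $c$, discount factor $\gamma$, transition kernel $p$) are our notation, and the self-guided variant described above modifies this basic template.

```latex
% Generic discounted-cost ALP (notation ours): maximize the nu-weighted
% VFA subject to the Bellman inequality at every state-action pair; any
% feasible Phi*beta lower-bounds the optimal cost-to-go pointwise.
\max_{\beta}\ \sum_{s} \nu(s)\,(\Phi\beta)(s)
\quad\text{s.t.}\quad
(\Phi\beta)(s) \;\le\; c(s,a) + \gamma \sum_{s'} p(s'\mid s,a)\,(\Phi\beta)(s')
\qquad \forall (s,a).
```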
In this paper we study systems of autonomous algebraic ODEs in several differential indeterminates. We develop a notion of algebraic dimension for such systems by considering them as algebraic systems. Afterwards, we apply differential elimination and analyze the behavior of the dimension in the resulting Thomas decomposition. For systems of algebraic dimension one, we show that all formal Puiseux series solutions can be approximated up to an arbitrary order by convergent solutions. We show that the existence of Puiseux series and algebraic solutions can be decided algorithmically. Moreover, we present a symbolic algorithm to compute all algebraic solutions. The output can be represented either by triangular systems or by their minimal polynomials.
Many real-world problems, like Social Influence Maximization, face the dilemma of choosing the best $K$ out of $N$ options at a given time instant. This setup can be modeled as a combinatorial bandit that chooses $K$ out of $N$ arms at each time step, with the aim of achieving an efficient trade-off between exploration and exploitation. This is the first work on combinatorial bandits in which the feedback received can be a non-linear function of the chosen $K$ arms. The direct use of a multi-armed bandit algorithm requires choosing among $\binom{N}{K}$ options, making the action space prohibitively large. In this paper, we present a novel algorithm that is computationally efficient and whose storage is linear in $N$. The proposed algorithm is a divide-and-conquer-based strategy, which we call CMAB-SM. Further, the proposed algorithm achieves a \textit{regret bound} of $\tilde O(K^{\frac{1}{2}}N^{\frac{1}{3}}T^{\frac{2}{3}})$ for a time horizon $T$, which is \textit{sub-linear} in all parameters $T$, $N$, and $K$. When applied to the problem of Social Influence Maximization, the performance of the proposed algorithm surpasses the UCB algorithm and some more sophisticated domain-specific methods.
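To see the blow-up the abstract refers to, here is the naive baseline in Python: UCB1 run directly over all $\binom{N}{K}$ "super-arms". It handles non-linear rewards, but its storage and per-round work grow combinatorially, which is precisely what CMAB-SM (not sketched here) avoids with storage linear in $N$. The function names and the toy reward are ours.

```python
import math
import random
from itertools import combinations

# Naive baseline (not CMAB-SM): treat each of the C(N, K) subsets as a
# single "super-arm" and run UCB1 over them. Storage and per-round work
# scale with C(N, K). `pull(S)` returns a stochastic reward in [0, 1],
# possibly a non-linear function of the chosen subset S.
def naive_superarm_ucb(N, K, T, pull):
    arms = list(combinations(range(N), K))
    counts = [0] * len(arms)
    means = [0.0] * len(arms)
    for t in range(1, T + 1):
        # UCB1 index; unplayed arms get priority via an infinite index.
        i = max(range(len(arms)),
                key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j])
                if counts[j] else float('inf'))
        r = pull(arms[i])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]          # running-mean update
    return arms[max(range(len(arms)), key=lambda j: means[j])]

# Toy run: reward is a non-linear (max-style) function of the chosen arms.
best = naive_superarm_ucb(6, 2, 2000,
                          lambda S: random.random() * (max(S) / 5.0))
print(best)   # tends toward a subset containing arm 5
```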
We give a number of approximation metatheorems for monotone maximization problems expressible in first-order logic, in substantially more general settings than previously known. We obtain (i) a constant-factor approximation algorithm in any class of graphs with bounded expansion, (ii) a QPTAS in any class with strongly sublinear separators, and (iii) a PTAS in any fractionally treewidth-fragile class (which includes all common classes with strongly sublinear separators). Moreover, our tools give an exact subexponential-time algorithm in any class with strongly sublinear separators.
We present a uniform description of sets of $m$ linear forms in $n$ variables over the field of rational numbers whose computation requires $m(n - 1)$ additions.
While the Quantum Approximate Optimization Algorithm (QAOA) is intended to provide a quantum advantage in finding approximate solutions to combinatorial optimization problems, noise in the system is a hurdle to exploiting its full potential. Several error mitigation techniques have been studied to lessen the effect of noise on this algorithm. Recently, Majumdar et al. proposed a Depth First Search (DFS) based method that eliminates $n-1$ CNOT gates in the ansatz design of QAOA for finding the Max-Cut of a graph $G = (V, E)$, $|V| = n$. However, this method tends to increase the depth of the circuit, making it more prone to relaxation error: the depth is proportional to the height of the DFS tree, which can be $n-1$ in the worst case. In this paper, we propose an $O(\Delta \cdot n^2)$ greedy heuristic algorithm, where $\Delta$ is the maximum degree of the graph, that finds a spanning tree of lower height, thus reducing the overall depth of the circuit while still retaining the $n-1$ reduction in the number of CNOT gates needed in the ansatz. We show numerically that this algorithm achieves a nearly 10-fold increase in the probability of success for each iteration of QAOA for Max-Cut. We further show that, although the average depth of the circuit produced by this heuristic algorithm still grows linearly with $n$, our algorithm reduces the slope of the linear increase from 1 to 0.11.
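As a point of reference, a shallow spanning tree can always be found within a comparable $O(\Delta \cdot n^2)$ budget by taking the best BFS tree over all roots; the Python sketch below (not necessarily the authors' greedy heuristic) illustrates how much shallower than a worst-case DFS tree such a tree can be.

```python
from collections import deque

# A simple alternative within an O(Delta * n^2) budget (not necessarily
# the paper's greedy heuristic): BFS from every vertex and keep the
# shallowest BFS tree. A BFS tree rooted at v has height ecc(v), so
# minimizing over roots yields a minimum-height spanning tree of a
# connected graph. `adj` maps each vertex to its list of neighbors.
def min_height_spanning_tree(adj):
    best_height, best_parent = float('inf'), None
    for root in adj:
        parent, depth = {root: None}, {root: 0}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w], depth[w] = u, depth[u] + 1
                    q.append(w)
        h = max(depth.values())
        if len(parent) == len(adj) and h < best_height:
            best_height, best_parent = h, parent
    return best_parent, best_height

# Path graph 0-1-2-3-4: a DFS tree from vertex 0 has height 4, while the
# best BFS tree (rooted at the center, vertex 2) has height 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(min_height_spanning_tree(adj)[1])   # -> 2
```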
We consider a Johnson-N\'ed\'elec FEM-BEM coupling, a direct and non-symmetric coupling of finite and boundary element methods, in order to solve interface problems for the magnetostatic Maxwell equations with the magnetic vector potential ansatz. In the FEM domain, the equations may be non-linear, whereas they are exclusively linear in the BEM part, so as to guarantee the existence of a fundamental solution. First, the weak problem is formulated in quotient spaces to avoid resorting to a saddle-point problem. Second, we establish well-posedness of the arising problem in this setting, using the framework of Lipschitz-continuous and strongly monotone operators, as well as a stability result for a special type of non-linearity typically considered in magnetostatic applications. Then, the discretization is performed in the isogeometric context, i.e., the same type of basis functions used for geometry design are taken as ansatz functions for the discrete setting. In particular, NURBS are employed for the geometry, and B-splines, which can be understood as a special type of NURBS, for analysis purposes. In this context, we derive a priori estimates w.r.t. $h$-refinement and point out an interesting behavior of the BEM, namely an improvement of the convergence rates when a functional of the solution is evaluated in the exterior BEM domain. This improvement may lead to a doubling of the convergence rate under certain assumptions. Finally, we end the paper with a numerical example illustrating the theoretical results, along with a conclusion and an outlook.
A recent literature considers causal inference using noisy proxies for unobserved confounding factors. The proxies are divided into two sets that are independent conditional on the confounders: one set consists of `negative control treatments' and the other of `negative control outcomes'. Existing work applies to low-dimensional settings with a fixed number of proxies and confounders. In this work we consider linear models with many proxy controls and possibly many confounders. A key insight is that if each group of proxies is strictly larger than the number of confounding factors, then a matrix of nuisance parameters has a low-rank structure and a vector of nuisance parameters has a sparse structure. We can exploit the rank restriction and sparsity to reduce the number of free parameters to be estimated. The number of unobserved confounders is not known a priori, but we show that it is identified, and we apply penalization methods to adapt to this quantity. We provide an estimator with a closed form as well as a doubly robust estimator that must be evaluated using numerical methods. We provide conditions under which our doubly robust estimator is uniformly root-$n$ consistent and asymptotically centered normal, and under which our suggested confidence intervals have asymptotically correct coverage. We provide simulation evidence that our methods achieve better performance than existing approaches in high dimensions, particularly when the number of proxies is substantially larger than the number of confounders.
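A minimal sketch of where such a low-rank structure can come from, under an assumed linear factor model with our own notation (loadings $\Lambda$, $\Gamma$; an illustration of the mechanism, not the paper's exact model): if both proxy groups load linearly on the $r$ confounders, any cross-moment between the groups factors through an $r$-dimensional object.

```latex
% Assumed linear factor model (notation ours): negative control outcomes
% W and negative control treatments Z load on r confounders U,
%   W = \Lambda U + \varepsilon_W, \quad Z = \Gamma U + \varepsilon_Z,
% with (\varepsilon_W, \varepsilon_Z) independent of U and of each other.
% Then the cross-covariance factors through U and has rank at most r:
\operatorname{Cov}(W, Z) \;=\; \Lambda\, \operatorname{Cov}(U, U)\, \Gamma^{\top},
\qquad \operatorname{rank}\bigl(\operatorname{Cov}(W, Z)\bigr) \;\le\; r .
```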