In $d$ dimensions, approximating an arbitrary function oscillating with frequency $\lesssim k$ requires $\sim k^d$ degrees of freedom. A numerical method for solving the Helmholtz equation (with wavenumber $k$) suffers from the pollution effect if, as $k\to \infty$, the total number of degrees of freedom needed to maintain accuracy grows faster than this natural threshold. While the $h$-version of the finite element method (FEM) (where accuracy is increased by decreasing the meshwidth $h$ and keeping the polynomial degree $p$ fixed) suffers from the pollution effect, the celebrated papers [Melenk, Sauter 2010], [Melenk, Sauter 2011], [Esterhazy, Melenk 2012], and [Melenk, Parsania, Sauter 2013] showed that the $hp$-FEM (where accuracy is increased by decreasing the meshwidth $h$ and increasing the polynomial degree $p$) applied to a variety of constant-coefficient Helmholtz problems does not suffer from the pollution effect. The heart of the proofs of these results is a PDE result splitting the solution of the Helmholtz equation into "high" and "low" frequency components. The main novelty of the present paper is that we prove this splitting for the constant-coefficient Helmholtz equation in full-space (i.e., in $\mathbb{R}^d$) using only integration by parts and elementary properties of the Fourier transform (this is in contrast to the proof for this set-up in [Melenk, Sauter 2010], which uses somewhat-involved bounds on Bessel and Hankel functions). We combine this splitting with (i) standard arguments about convergence of the FEM applied to the Helmholtz equation (the so-called "Schatz argument", which we reproduce here) and (ii) polynomial-approximation results (which we quote from the literature without proof) to give a simple proof that the $hp$-FEM does not suffer from the pollution effect for the constant-coefficient full-space Helmholtz equation.
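To indicate the type of frequency splitting involved (a sketch only; the cutoff value and the precise norms in which the two pieces are controlled are as stated in the paper), one can decompose the solution $u$ via a Fourier cutoff at frequencies comparable to $k$, e.g.
\[
u = u_{\mathrm{low}} + u_{\mathrm{high}}, \qquad
\widehat{u_{\mathrm{low}}}(\xi) := \mathbf{1}_{\{|\xi|\le 2k\}}\,\widehat{u}(\xi), \qquad
\widehat{u_{\mathrm{high}}}(\xi) := \mathbf{1}_{\{|\xi|> 2k\}}\,\widehat{u}(\xi);
\]
roughly speaking, the low-frequency part is analytic with $k$-explicit bounds on all its derivatives, while the high-frequency part satisfies bounds in low-order Sobolev norms with favourable $k$-dependence.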
In this paper we present an algebraic dimension-oblivious two-level domain decomposition solver for discretizations of elliptic partial differential equations. The proposed parallel solver is based on a space-filling curve partitioning approach that is applicable to any discretization, i.e. it operates directly on the assembled matrix equations. Moreover, it allows for the effective use of arbitrary processor numbers, independent of the dimension of the underlying partial differential equation, while maintaining optimal convergence behavior. This is the core property required to obtain a sparse grid based combination method with extreme scalability which can utilize exascale parallel systems efficiently. Furthermore, this approach provides a basis for the development of a fault-tolerant solver for the numerical treatment of high-dimensional problems. To achieve the required data redundancy, we employ large overlaps in our domain decomposition, which we construct via space-filling curves. We present our space-filling curve based domain decomposition solver together with its convergence properties and scaling behavior. The results of numerical experiments clearly show that our approach provides optimal convergence and scaling behavior in arbitrary dimension, utilizing arbitrary processor numbers.
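The following is a minimal Python sketch of the space-filling curve partitioning idea, using a Morton (Z-order) curve on a structured index set; the function names are ours, and the actual solver's curve construction, overlap sizes, and algebraic setup are not reproduced here.
\begin{verbatim}
import numpy as np

def morton_key_2d(ix, iy, bits=16):
    """Interleave the bits of the integer grid coordinates (ix, iy) into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def sfc_partition(int_coords, n_parts, overlap=2):
    """Sort the degrees of freedom along a Z-order curve, cut the curve into
    n_parts contiguous pieces, and extend each piece by `overlap` positions
    along the curve to obtain overlapping subdomains."""
    keys = [morton_key_2d(ix, iy) for ix, iy in int_coords]
    order = np.argsort(keys)                    # DOF indices ordered along the curve
    cuts = np.array_split(np.arange(len(order)), n_parts)
    subdomains = []
    for piece in cuts:
        lo = max(piece[0] - overlap, 0)
        hi = min(piece[-1] + overlap, len(order) - 1)
        subdomains.append(order[lo:hi + 1])     # overlapping index set for one processor
    return subdomains

# Example: an 8x8 grid of unknowns split among 4 processors.
coords = [(i, j) for i in range(8) for j in range(8)]
parts = sfc_partition(coords, n_parts=4, overlap=3)
print([len(p) for p in parts])
\end{verbatim}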
Two novel parallel Newton-Krylov Balancing Domain Decomposition by Constraints (BDDC) and Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) solvers are here constructed, analyzed and tested numerically for implicit time discretizations of the three-dimensional Bidomain system of equations. This model represents the most advanced mathematical description of cardiac bioelectrical activity; it consists of a degenerate system of two non-linear reaction-diffusion partial differential equations (PDEs) coupled with a stiff system of ordinary differential equations (ODEs). A finite element discretization in space and a segregated implicit discretization in time, based on decoupling the PDEs from the ODEs, require the solution of a non-linear algebraic system at each time step. The Jacobian linear system at each Newton iteration is solved by a Krylov method, accelerated by BDDC or FETI-DP preconditioners, both augmented with the recently introduced {\em deluxe} scaling of the dual variables. A polylogarithmic convergence rate bound is proven for the resulting parallel Bidomain solvers. Extensive numerical experiments on Linux clusters with up to two thousand processors confirm the theoretical estimates, showing that the proposed parallel solvers are scalable and quasi-optimal.
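For reference, a standard parabolic-parabolic form of the Bidomain system (sign and source-term conventions here are only indicative and may differ from those used in the paper) reads
\[
\begin{aligned}
\chi C_m\,\partial_t v - \nabla\cdot(D_i\nabla u_i) + \chi I_{\mathrm{ion}}(v,w) &= I^{i}_{\mathrm{app}},\\
-\chi C_m\,\partial_t v - \nabla\cdot(D_e\nabla u_e) - \chi I_{\mathrm{ion}}(v,w) &= -I^{e}_{\mathrm{app}},\\
\partial_t w - R(v,w) &= 0,
\end{aligned}
\]
with intra- and extracellular potentials $u_i, u_e$, transmembrane potential $v = u_i - u_e$, and gating variables $w$; the first two (degenerate) reaction-diffusion PDEs are the ones decoupled from the stiff ODE system for $w$ in the segregated time discretization.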
We give a fast algorithm for sampling uniform solutions of general constraint satisfaction problems (CSPs) in a local lemma regime. The expected running time of our algorithm is near-linear in $n$ and a fixed polynomial in $\Delta$, where $n$ is the number of variables and $\Delta$ is the maximum degree of the constraints. Previously, under similar conditions, sampling algorithms with running time polynomial in both $n$ and $\Delta$ existed only for the almost-atomic case, where each constraint is violated by a small number of forbidden local configurations. Our sampling approach departs from all previous fast algorithms for sampling in the LLL regime, which were based on Markov chains. A crucial step of our algorithm is a recursive marginal sampler that is of independent interest. Within a local lemma regime, this marginal sampler can draw a random value for a variable according to its marginal distribution, at a local cost independent of the size of the CSP.
In this paper we obtain error bounds for fully discrete approximations of infinite horizon problems via the dynamic programming approach. It is well known that, for a time discretization with positive step size $h$, an error bound of size $h$ can be proved for the difference between the value function (the viscosity solution of the Hamilton-Jacobi-Bellman equation corresponding to the infinite horizon problem) and the value function of the discrete-time problem. However, when a spatial discretization based on elements of size $k$ is also included, only an error bound of size $O(k/h)$ can be found in the literature for the error between the value functions of the continuous problem and the fully discrete problem. In this paper we revisit the error bound of the fully discrete method and prove, under assumptions similar to those of the time-discrete case, that the error of the fully discrete case is in fact $O(h+k)$, which gives first order in time and space for the method. This error bound matches the numerical experiments of many papers in the literature, in which the behaviour $1/h$ from the bound $O(k/h)$ has not been observed.
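To fix notation (this is the standard semi-Lagrangian setting in which such bounds are usually stated; the precise scheme and assumptions are those of the paper), with discount rate $\lambda$, dynamics $f$, running cost $\ell$, and a piecewise-linear interpolation operator $I_k$ on a grid with nodes $x_j$ of size $k$, the time-discrete and fully discrete value functions satisfy
\[
v_h(x) = \min_{a\in A}\Bigl\{ h\,\ell(x,a) + (1-\lambda h)\,v_h\bigl(x + h f(x,a)\bigr)\Bigr\},
\qquad
v_h^k(x_j) = \min_{a\in A}\Bigl\{ h\,\ell(x_j,a) + (1-\lambda h)\,I_k\bigl[v_h^k\bigr]\bigl(x_j + h f(x_j,a)\bigr)\Bigr\},
\]
and the result is a bound $\|v - v_h^k\|_\infty = O(h+k)$ in place of the $O(k/h)$ bound quoted above.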
Numerical solution of heterogeneous Helmholtz problems presents various computational challenges, with descriptive theory remaining out of reach for many popular approaches. Robustness and scalability are key for practical and reliable solvers in large-scale applications, especially for large wave number problems. In this work we explore the use of a GenEO-type coarse space to build a two-level additive Schwarz method applicable to highly indefinite Helmholtz problems. Through a range of numerical tests on a 2D model problem, discretised by finite elements on pollution-free meshes, we observe robust convergence, iteration counts that do not increase with the wave number, and good scalability of our approach. We further provide results showing a favourable comparison with the DtN coarse space. Our numerical study shows promise that our solver methodology can be effective for challenging heterogeneous applications.
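In general terms (the notation here is generic; the distinguishing ingredient is the GenEO-type coarse space defining $R_0$), the two-level additive Schwarz preconditioner for the discretized operator $A$ has the form
\[
M^{-1}_{\mathrm{AS},2} \;=\; R_0^{T} A_0^{-1} R_0 \;+\; \sum_{j=1}^{N} R_j^{T} A_j^{-1} R_j,
\qquad A_0 = R_0 A R_0^{T}, \quad A_j = R_j A R_j^{T},
\]
where $R_j$ restricts to the $j$-th overlapping subdomain and $R_0$ restricts to the coarse space spanned by the selected (GenEO-type) eigenfunctions.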
Static analysis tools have rules for many code quality issues, and these rules are created manually by experts. In this paper, we address the problem of automatically synthesizing code quality rules from examples. We formulate rule synthesis as synthesizing first-order logic formulas over graph representations of code. We present a new synthesis algorithm, RhoSynth, based on Integer Linear Programming-based graph alignment for identifying the code elements of interest to a rule. We bootstrap RhoSynth by leveraging code changes made by developers as the source of positive and negative examples. We also address rule refinement, in which rules are incrementally improved with additional user-provided examples. We validate RhoSynth by synthesizing more than 30 Java code quality rules. These rules have been deployed as part of a code review system in a company, and their precision exceeds 75% based on developer feedback collected during live code reviews. Through comparisons with recent baselines, we show that current state-of-the-art program synthesis approaches are unable to synthesize most of these rules.
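As a toy illustration of the target language (an invented example, not one of the synthesized rules), a code quality rule can be expressed as a first-order formula over the nodes of a program graph $G$, e.g.
\[
\varphi(G) \;=\; \exists\, n_1 \in G:\ \mathrm{call}(n_1,\texttt{open}) \;\wedge\; \neg\,\exists\, n_2 \in G:\ \mathrm{call}(n_2,\texttt{close}) \wedge \mathrm{reaches}(n_1, n_2),
\]
which would flag code on which an opened resource is never closed; rule synthesis then amounts to searching for such formulas that separate the positive from the negative examples.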
In the storied Colonel Blotto game, two colonels allocate $a$ and $b$ troops, respectively, across $k$ distinct battlefields. A colonel wins a battlefield if they assign more troops to it than their opponent, and each colonel seeks to maximize their total number of victories. Despite the problem's formulation in 1921, the first polynomial-time algorithm to compute Nash equilibrium (NE) strategies for this game was discovered only quite recently. In 2016, \citet{ahmadinejad_dehghani_hajiaghayi_lucier_mahini_seddighin_2019} formulated a breakthrough algorithm to compute NE strategies for the Colonel Blotto game\footnote{To the best of our knowledge, the algorithm from \citep{ahmadinejad_dehghani_hajiaghayi_lucier_mahini_seddighin_2019} has computational complexity $O(k^{14}\max\{a,b\}^{13})$.}, receiving substantial media coverage (e.g. \citep{Insider}, \citep{NSF}, \citep{ScienceDaily}). In this work, we present the first known $\epsilon$-approximation algorithm to compute NE strategies in the two-player Colonel Blotto game, with runtime $\widetilde{O}(\epsilon^{-4} k^8 \max\{a,b\}^2)$ for arbitrary settings of these parameters. Moreover, this algorithm computes approximate coarse correlated equilibrium strategies in the multiplayer (continuous and discrete) Colonel Blotto game (when there are $\ell > 2$ colonels) with runtime $\widetilde{O}(\ell \epsilon^{-4} k^8 n^2 + \ell^2 \epsilon^{-2} k^3 n (n+k))$, where $n$ is the maximum troop count. Before this work, no polynomial-time algorithm was known to compute exact or approximate equilibrium strategies (in any sense) for the multiplayer Colonel Blotto game with arbitrary parameters. Our algorithm computes these approximate equilibria via a novel (to the author's knowledge) sampling technique with which we implicitly perform multiplicative weights updates over the exponentially many strategies available to each player.
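For illustration, here is a minimal multiplicative weights update sketch in Python for an explicitly enumerated two-player zero-sum matrix game (our own toy code; it does not implement the paper's implicit sampling over the exponentially many Blotto strategies).
\begin{verbatim}
import numpy as np

def mwu_zero_sum(payoff, rounds=2000, eta=0.05):
    """Approximate NE of a zero-sum game via multiplicative weights updates.
    payoff[i, j] is the row player's payoff; the row player runs MWU, the
    column player best-responds, and the averaged strategies converge to an
    approximate equilibrium."""
    m, n = payoff.shape
    w = np.ones(m)
    avg_row, avg_col = np.zeros(m), np.zeros(n)
    for _ in range(rounds):
        p = w / w.sum()                      # current row mixed strategy
        j = np.argmin(p @ payoff)            # column player's best response
        w *= np.exp(eta * payoff[:, j])      # reward rows that do well against j
        avg_row += p
        avg_col[j] += 1
    return avg_row / rounds, avg_col / rounds

# Tiny example: matching pennies has value 0 with uniform strategies.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = mwu_zero_sum(A)
print(np.round(x, 2), np.round(y, 2))
\end{verbatim}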
This paper presents a density-based topology optimization approach that accounts for additive manufacturing limitations. The presented method considers the minimum size of parts, the minimum size of cavities, the inability to print overhanging parts without sacrificial support structures, and the printing direction. These constraints are addressed and implemented geometrically. The minimum size of solid and void zones is imposed through a well-known filtering technique. The sacrificial support material is reduced using a constraint that limits the maximum overhang angle of parts by comparing the structural gradient with a critical reference slope. Due to the local nature of the gradient, this restriction is prone to introducing parts that satisfy the slope constraint but may not be self-supporting. The restriction limits the maximum overhang angle for a user-defined printing direction, which can reduce structural performance if the orientation is not properly selected. To address these challenges, we present a new approach that reduces the introduction of such non-self-supporting parts and a novel method that includes different printing directions in the maximum overhang angle constraint. The proposed strategy for considering the minimum size of the solid and void phases, the maximum overhang angle, and the printing direction is illustrated by solving a set of 2D benchmark design problems, including stiff structures and compliant mechanisms. We also provide MATLAB codes in the appendix for educational purposes and for replication of the results.
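For instance, the minimum-size control referred to above is typically imposed with a standard density filter of the form (generic notation, not necessarily the exact variant used in the paper)
\[
\tilde\rho_i \;=\; \frac{\sum_{j\in N_i} w_{ij}\, \rho_j}{\sum_{j\in N_i} w_{ij}},
\qquad
w_{ij} = \max\bigl(0,\, r_{\min} - \lVert \mathbf{x}_i - \mathbf{x}_j \rVert\bigr),
\]
where $N_i$ collects the elements within the filter radius $r_{\min}$ of element $i$; the overhang constraint then compares the slope of the resulting density field with a critical angle for the chosen printing direction.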
We present a novel static analysis technique to derive higher moments of program variables for a large class of probabilistic loops with potentially uncountable state spaces. Our approach is fully automatic, meaning it does not rely on externally provided invariants or templates. We employ algebraic techniques based on linear recurrences and introduce program transformations that simplify probabilistic programs while preserving their statistical properties. We develop power-reduction techniques to further simplify the polynomial arithmetic of probabilistic programs and define the class of moment-computable probabilistic loops, for which higher moments can be computed precisely. Our work has applications in recovering probability distributions of random variables and computing tail probabilities. The empirical evaluation of our results demonstrates the applicability of our work on many challenging examples.
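To give the flavour of the moment recurrences involved (a toy example of ours, not taken from the paper): for a loop body that, with probability $1/2$, updates $x \mapsto 2x$ and otherwise $x \mapsto x + 1$, taking expectations of the first two powers yields linear recurrences in the moments,
\[
\mathbb{E}[x_{n+1}] = \tfrac{1}{2}\,\mathbb{E}[2x_n] + \tfrac{1}{2}\,\mathbb{E}[x_n + 1]
= \tfrac{3}{2}\,\mathbb{E}[x_n] + \tfrac{1}{2},
\qquad
\mathbb{E}[x_{n+1}^2] = \tfrac{1}{2}\,\mathbb{E}[4x_n^2] + \tfrac{1}{2}\,\mathbb{E}[(x_n+1)^2]
= \tfrac{5}{2}\,\mathbb{E}[x_n^2] + \mathbb{E}[x_n] + \tfrac{1}{2},
\]
which can be solved in closed form; the techniques in the paper automate and extend this idea to a large class of loops.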
Sufficient dimension reduction (SDR) is a successful tool in regression modeling, providing a feasible way to handle the nonlinear nature of regression problems. This paper introduces the \textbf{itdr} R package, which provides several functions based on integral transformation methods to estimate SDR subspaces in a comprehensive and user-friendly manner. In particular, the \textbf{itdr} package includes the Fourier method (FM) and the convolution method (CM) for estimating SDR subspaces such as the central mean subspace (CMS) and the central subspace (CS). In addition, the \textbf{itdr} package facilitates the recovery of the CMS and the CS by using the iterative Hessian transformation (IHT) method and the Fourier transformation approach for inverse dimension reduction (invFM), respectively. Moreover, the use of the package is illustrated on three datasets. Furthermore, this is the first package that implements integral transformation methods to estimate SDR subspaces; as such, the \textbf{itdr} package may be a valuable contribution to research in the SDR field.