This paper introduces discrete-holomorphic Perfectly Matched Layers (PMLs) designed specifically for high-order finite difference (FD) discretizations of the scalar wave equation. In contrast to standard PDE-based PMLs, the proposed method completely eliminates numerical reflections at the PML interface, in practice achieving errors at the level of machine precision. Our approach builds on the ideas of a recent publication [Journal of Computational Physics 381 (2019): 91-109], expanding their scope from the standard second-order FD method to arbitrary high-order schemes. This generalization uses additional localized PML variables to accommodate the larger stencils employed. We establish that the numerical solutions generated by the proposed schemes decay exponentially as they propagate within the PML domain. To showcase the effectiveness of the method, we present a variety of numerical examples, including waveguide problems. These examples highlight the importance of employing high-order schemes to minimize undesired numerical dispersion errors, underscoring the practical advantages and applicability of our approach.
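For orientation, the continuous mechanism that any PML construction, including this discrete-holomorphic one, emulates is the classical complex coordinate stretch. The following is a standard sketch in our own notation (time convention $e^{-i\omega t}$), not the paper's discrete construction:

```latex
% Complex stretching of the coordinate x inside the layer:
\tilde{x}(x) = x + \frac{i}{\omega}\int_{x_0}^{x}\sigma(s)\,ds,
\qquad \sigma \ge 0 \ \text{(absorption profile)}.
% A rightward-traveling mode e^{i(kx - \omega t)} with k = \omega/c becomes
e^{ik\tilde{x}}
  = e^{ikx}\exp\!\Big(-\frac{1}{c}\int_{x_0}^{x}\sigma(s)\,ds\Big),
% i.e. it decays exponentially inside the layer without being reflected.
```

The abstract's point is that reproducing this reflectionless decay exactly at the level of the high-order FD scheme is precisely what standard PDE-based PMLs fail to do after discretization.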
We study the problem of enumerating Tarski fixed points, focusing on the relational lattices of equivalences, quasiorders, and binary relations. We present a polynomial-space enumeration algorithm for Tarski fixed points on these lattices and on other lattices of polynomial height. It achieves polynomial delay when enumerating fixed points of increasing isotone maps on all three lattices, as well as of decreasing isotone maps on the lattice of binary relations. For the cases in which the enumeration algorithm does not guarantee polynomial delay on the three relational lattices, on the other hand, we prove exponential lower bounds for deciding the existence of three fixed points when the isotone map is given as an oracle, and we show that it is NP-hard to find three or more Tarski fixed points. More generally, we show that any deterministic or bounded-error randomized algorithm must perform a number of queries asymptotically at least as large as the lattice width to decide the existence of three fixed points when the isotone map is given as an oracle. Finally, we demonstrate that our findings yield a polynomial-delay and polynomial-space algorithm for listing bisimulations and instances of some related models of behavioral or role equivalence.
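For background, a Tarski fixed point of an isotone map $f$ on a complete lattice is any $x$ with $f(x) = x$; on a finite lattice the least one is reachable by the classical Kleene iteration from the bottom element. The sketch below runs this baseline on the lattice of binary relations, using a made-up increasing isotone map (a transitive-closure step); it is not the paper's enumeration algorithm:

```python
# Least Tarski fixed point of an isotone (monotone) map on the lattice
# of binary relations over a finite set, via Kleene iteration from the
# bottom element. The map `step` below is an illustrative example only.

def least_fixed_point(f, bottom: frozenset) -> frozenset:
    """Iterate x <- f(x) from the bottom element until x == f(x).

    For an isotone f on a finite lattice the iterates form an
    increasing chain, so the loop terminates within the lattice height.
    """
    x = bottom
    while True:
        fx = f(x)
        if fx == x:
            return x
        x = fx

# Example: transitive closure as a fixed point. `step` is increasing
# (step(r) contains r), matching the polynomial-delay case above.
base = {(1, 2), (2, 3)}

def step(r: frozenset) -> frozenset:
    comp = {(a, d) for (a, b) in r for (c, d) in r if b == c}
    return frozenset(base | set(r) | comp)

print(least_fixed_point(step, frozenset()))  # {(1,2),(2,3),(1,3)}, up to ordering
```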
Statistical models are at the heart of any empirical study for hypothesis testing. We present a new cross-platform, Python-based package that supports different likelihood prescriptions through a plug-in system. This framework empowers users to propose, examine, and publish new likelihood prescriptions without developing software infrastructure, ultimately unifying and generalising the different ways of constructing likelihoods and employing them for hypothesis testing, all in one place. Within this package, we propose a new simplified likelihood prescription that surpasses the approximation accuracy of its predecessors by incorporating asymmetric uncertainties. Furthermore, our package facilitates the inclusion of various likelihood combination routines, thereby broadening the scope of independent studies through meta-analysis. By remaining agnostic to the source of the likelihood prescription and to the signal hypothesis generator, our platform allows for the seamless implementation of packages with different likelihood prescriptions, fostering compatibility and interoperability.
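The plug-in architecture can be pictured with a minimal sketch. Everything below (the class names, the REGISTRY dict, the toy asymmetric-Gaussian form) is a hypothetical illustration of the design, not the package's actual API:

```python
# Sketch of a plug-in system for likelihood prescriptions: tests are
# written against an abstract interface, and prescriptions register
# themselves by name. All names here are hypothetical.
from abc import ABC, abstractmethod

class LikelihoodPrescription(ABC):
    """Interface that every likelihood plug-in must implement."""
    @abstractmethod
    def loglike(self, mu: float) -> float:
        """Log-likelihood as a function of the signal strength mu."""

class AsymmetricGaussian(LikelihoodPrescription):
    """Toy simplified likelihood with asymmetric uncertainties:
    a different Gaussian width above and below the expectation."""
    def __init__(self, observed, expected, sigma_plus, sigma_minus):
        self.observed, self.expected = observed, expected
        self.sigma_plus, self.sigma_minus = sigma_plus, sigma_minus

    def loglike(self, mu: float) -> float:
        resid = self.observed - mu * self.expected
        sigma = self.sigma_plus if resid >= 0 else self.sigma_minus
        return -0.5 * (resid / sigma) ** 2

# Plug-ins register under a name; downstream hypothesis-testing code
# never needs to know which prescription it is driving.
REGISTRY = {"asym_gauss": AsymmetricGaussian}
lh = REGISTRY["asym_gauss"](observed=12.0, expected=10.0,
                            sigma_plus=3.0, sigma_minus=2.0)
print(lh.loglike(mu=1.0))
```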
This paper introduces the Scaled Coordinate Transformation Boundary Element Method (SCTBEM), a novel boundary-type method for solving 3D potential problems. When applying the Boundary Element Method (BEM) to complex problems, it is common practice to establish the integral equation using the fundamental solution that corresponds to only part of the governing operator. However, this approach introduces a domain integral, which may jeopardize the dimensionality-reduction advantage of BEM. To preserve this advantage, this paper proposes a novel domain-integral transformation known as the Scaled Coordinate Transformation (SCT). The SCT is a purely mathematical operation that does not rely on particular solutions of the operator; it requires discretization only on the structure's surface while remaining analytical in the radial direction. A further novelty is that the lower-order singularity can be eliminated by a coordinate-translation technique. To facilitate wider adoption of BEM, the authors provide a 99-line MATLAB code. Numerical results confirm that the SCTBEM achieves high numerical accuracy even on complex models.
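To convey how a purely geometric rescaling can trade a domain integral for surface work, recall the classical identity for a domain that is star-shaped with respect to the origin (our notation; it conveys the flavor of the SCT, though the paper's transformation handles general geometries and singular integrands):

```latex
% For \Omega \subset \mathbb{R}^3 star-shaped w.r.t. the origin, set
% F(\mathbf{x}) = \mathbf{x}\int_0^1 f(s\mathbf{x})\, s^{2}\, ds,
% which satisfies \nabla\cdot F = f. The divergence theorem then gives
\int_{\Omega} f(\mathbf{x})\, d\Omega
  = \int_{\Gamma} \big(\mathbf{y}\cdot\mathbf{n}(\mathbf{y})\big)
    \int_0^1 f(s\mathbf{y})\, s^{2}\, ds \, d\Gamma(\mathbf{y}),
% so only the surface \Gamma needs to be discretized, while the radial
% (s) integral can be treated analytically.
```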
In this paper, we study the problem of estimating the autocovariance sequence of a reversible Markov chain. A motivating application is the estimation of the asymptotic variance in central limit theorems for Markov chains. We propose a novel shape-constrained estimator of the autocovariance sequence, based on the key observation that representability of the autocovariance sequence as a moment sequence imposes certain shape constraints. We examine the theoretical properties of the proposed estimator and provide strong consistency guarantees. In particular, for geometrically ergodic reversible Markov chains, we show that the estimator is strongly consistent for the true autocovariance sequence with respect to an $\ell_2$ distance, and that it leads to strongly consistent estimates of the asymptotic variance. Finally, we perform empirical studies that illustrate the theoretical properties of the proposed estimator and demonstrate its effectiveness relative to other state-of-the-art methods for Markov chain Monte Carlo variance estimation, including batch means, spectral variance estimators, and the initial convex sequence estimator.
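The moment-sequence observation rests on a standard spectral fact for reversible chains (notation ours):

```latex
% For a reversible Markov chain (X_k) with stationary distribution \pi,
% the autocovariances of g(X_k) admit a spectral representation
\gamma_k = \operatorname{Cov}_\pi\big(g(X_0),\, g(X_k)\big)
         = \int_{[-1,1]} x^{k}\, dF(x), \qquad k = 0, 1, 2, \dots,
% for a positive measure F on [-1,1], so (\gamma_k) is a Hausdorff
% moment sequence; this is the source of the shape constraints.
% The asymptotic variance targeted in MCMC applications is
\sigma^2 = \gamma_0 + 2\sum_{k=1}^{\infty} \gamma_k .
```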
We present the full approximation scheme constraint decomposition (FASCD) multilevel method for solving variational inequalities (VIs). FASCD is a common extension of both the full approximation scheme (FAS) multigrid technique for nonlinear partial differential equations, due to A.~Brandt, and the constraint decomposition (CD) method introduced by X.-C.~Tai for VIs arising in optimization. We extend the CD idea by exploiting the telescoping nature of certain function-space subset decompositions arising from multilevel mesh hierarchies. When a reduced-space (active set) Newton method is applied as a smoother, with work proportional to the number of unknowns on a given mesh level, FASCD V-cycles exhibit nearly mesh-independent convergence rates, and full multigrid cycles are optimal solvers. The example problems include unilateral and bilateral VIs with differential operators that are symmetric linear, nonsymmetric linear, and nonlinear.
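For readers new to the setting, the objects being combined can be summarized as follows; this is the textbook formulation in our notation, with the decomposition shown only schematically, not the paper's precise multilevel construction:

```latex
% Variational inequality: given a closed convex set K and operator F,
% find u \in K such that
\langle F(u),\, v - u \rangle \ge 0 \qquad \text{for all } v \in K .
% Constraint decomposition (Tai, schematically): split the admissible
% set and the iterate over subspaces,
K = \sum_{i} K_i, \qquad u = \sum_{i} u_i, \quad u_i \in K_i,
% and update the components u_i by constrained subproblems. FASCD
% extends this with a telescoping decomposition over the mesh hierarchy.
```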
We study the online graph exploration problem proposed by Kalyanasundaram and Pruhs (1994) and prove a constant competitive ratio on minor-free graphs. This result encompasses and significantly extends the graph classes previously known to admit a constant competitive ratio. The main ingredient of our proof is a connection between the performance of the particular exploration algorithm Blocking and the existence of light spanners. Conversely, we exploit this connection to construct light spanners of bounded-genus graphs. In particular, we achieve a lightness that improves on the best known upper bound for genus g>0 and recovers the known tight bound for the planar case (g=0).
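For reference, the two quantities that this connection relates are defined in the standard way:

```latex
% Competitive ratio of an online exploration algorithm ALG:
\rho(\mathrm{ALG}) = \sup_{G}\,
  \frac{\mathrm{cost}\big(\mathrm{ALG}(G)\big)}
       {\mathrm{cost}\big(\mathrm{OPT}(G)\big)},
% where OPT is an optimal offline tour visiting all vertices of G.
% Lightness of a spanner H of an edge-weighted graph G:
\ell(H) = \frac{w(H)}{w\big(\mathrm{MST}(G)\big)} ,
% the total spanner weight relative to a minimum spanning tree.
```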
We propose a distributed bundle adjustment (DBA) method using the exact Levenberg-Marquardt (LM) algorithm for super-large-scale datasets. Most existing methods partition the global map into small submaps and conduct bundle adjustment within them. To fit the parallel framework, they use approximate solutions instead of the exact LM algorithm, which often yields sub-optimal results. In contrast, we use the exact LM algorithm to conduct global bundle adjustment, with the formation of the reduced camera system (RCS) parallelized and executed in a distributed way. To store the large RCS, we compress it with a block-based sparse matrix compression format (BSMC) that fully exploits its block structure. The BSMC format also enables distributed storage and updating of the global RCS. The proposed method is extensively evaluated and compared with state-of-the-art pipelines on both synthetic and real datasets. The results demonstrate the efficient memory usage and vast scalability of the proposed method compared with the baselines. For the first time, we conduct parallel bundle adjustment using the LM algorithm on a real dataset with 1.18 million images and a synthetic dataset with 10 million images (about 500 times the size handled by state-of-the-art LM-based BA) on a distributed computing system.
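The block-based compression idea can be illustrated with SciPy's built-in block sparse row (BSR) format, which likewise stores a sparse matrix as dense blocks indexed at block granularity. This is only an analogy to convey the idea; BSMC itself, and its distributed storage and updating, are the paper's own format:

```python
# Sketch: a block-sparse reduced camera system stored in SciPy's BSR
# format. The RCS has one 6x6 block per camera pair with shared points
# (6 pose parameters per camera); the block pattern below is made up.
import numpy as np
from scipy.sparse import bsr_matrix

n_cams, bs = 4, 6                       # 4 cameras, 6x6 blocks
indptr = np.array([0, 2, 3, 4, 5])      # block-row pointers
indices = np.array([0, 2, 1, 2, 3])     # block-column indices
data = np.random.rand(5, bs, bs)        # the 5 stored dense 6x6 blocks

S = bsr_matrix((data, indices, indptr),
               shape=(n_cams * bs, n_cams * bs))
x = np.random.rand(n_cams * bs)
print(S @ x)  # block-sparse matrix-vector product, the kernel needed
              # when solving the RCS iteratively
```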
We propose a matrix-free parallel two-level deflation preconditioner combined with the Complex Shifted Laplacian Preconditioner (CSLP) for two-dimensional Helmholtz problems. The Helmholtz equation is widely studied in seismic exploration, antennas, and medical imaging, and it is notoriously hard to solve in terms of both accuracy and convergence because of the scalability issues of numerical solvers. Motivated by the observation that for large wavenumbers the eigenvalues of the CSLP-preconditioned system shift towards zero, deflation with multigrid vectors, and subsequently with higher-order vectors, has been incorporated to obtain wavenumber-independent convergence. For large-scale applications, high-performance parallel scalable methods are also indispensable. In our method, we use preconditioned Krylov subspace methods to solve the linear system obtained from finite-difference discretization. The CSLP preconditioner is approximated by one parallel geometric multigrid V-cycle. For the two-level deflation, both matrix-free Galerkin coarsening and higher-order re-discretization on the coarse grid are studied. The matrix-vector multiplications in the Krylov subspace methods and the interpolation/restriction operators are implemented directly on the finite-difference grids without constructing any coefficient matrix, which leads to direct improvements in memory consumption. Numerical experiments on model problems show that wavenumber independence is obtained for medium wavenumbers, and the matrix-free parallel framework shows satisfactory weak and strong parallel scalability.
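A minimal sketch of what "matrix-free" means here, assuming a uniform grid, homogeneous Dirichlet boundaries, and a CSLP shift of the form $(1 - i\beta)k^2$ (the authors' discretization and parallel implementation will differ):

```python
# Matrix-free application of the 2D Helmholtz operator
#   A = -Δ_h - (1 - i*beta) k^2 I   (5-point stencil),
# with beta = 0 for the Helmholtz operator itself and beta > 0 for the
# complex-shifted Laplacian; no coefficient matrix is ever assembled.
import numpy as np

def helmholtz_apply(u, k, h, beta=0.0):
    """Return A u at interior points (homogeneous Dirichlet boundary)."""
    v = np.zeros_like(u, dtype=complex)
    v[1:-1, 1:-1] = ((4 * u[1:-1, 1:-1]
                      - u[:-2, 1:-1] - u[2:, 1:-1]
                      - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
                     - (1 - 1j * beta) * k**2 * u[1:-1, 1:-1])
    return v

n = 64; h = 1.0 / n; k = 20.0
u = np.random.rand(n + 1, n + 1) + 0j
r = helmholtz_apply(u, k, h)                  # Helmholtz operator
r_cslp = helmholtz_apply(u, k, h, beta=0.5)   # shifted-Laplacian operator
```

A Krylov method only ever calls such an `apply` routine, which is why the memory footprint drops relative to storing the assembled matrix.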
We address the computational efficiency of solving A-optimal Bayesian experimental design problems in which the observational map is based on partial differential equations and is consequently expensive to evaluate. A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design: it seeks the design that minimizes the expected conditional variance, also known as the expected posterior variance. This study presents a novel likelihood-free approach to A-optimal experimental design that does not require sampling or integrating the Bayesian posterior distribution. The expected conditional variance is obtained from the variance of the conditional expectation via the law of total variance, and we exploit the orthogonal projection property to approximate the conditional expectation. We derive an asymptotic error estimate for the proposed estimator of the expected conditional variance and show that the intractability of the posterior distribution does not affect the performance of our approach. We use an artificial neural network (ANN) to approximate the nonlinear conditional expectation. We then extend our approach to the case of a continuous domain of experimental design parameters by integrating the training of the ANN into the minimization of the expected conditional variance. Through numerical experiments, we demonstrate that our method greatly reduces the number of observation-model evaluations compared with widely used importance-sampling-based approaches; this reduction is crucial given the high computational cost of the observational models. Code is available at https://github.com/vinh-tr-hoang/DOEviaPACE.
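The identity at the heart of this construction is the law of total variance together with the $L^2$-projection characterization of the conditional expectation, both standard facts (notation ours):

```latex
% Law of total variance for parameters \theta and data y:
\operatorname{Var}(\theta)
  = \mathbb{E}_{y}\!\big[\operatorname{Var}(\theta \mid y)\big]
  + \operatorname{Var}_{y}\!\big(\mathbb{E}[\theta \mid y]\big),
% so the A-criterion (expected posterior variance) can be written as
\mathbb{E}_{y}\!\big[\operatorname{Var}(\theta \mid y)\big]
  = \operatorname{Var}(\theta)
  - \operatorname{Var}_{y}\!\big(\mathbb{E}[\theta \mid y]\big),
% avoiding the posterior entirely. The conditional expectation is the
% orthogonal (L^2) projection
\mathbb{E}[\theta \mid y] = \arg\min_{g}\;
  \mathbb{E}\,\big\| \theta - g(y) \big\|^{2},
% with the minimization over g restricted to an ANN class in practice.
```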
Characterizing the shapes of high-dimensional objects via Ricci curvatures plays a critical role in many research areas of mathematics and physics. However, even though several discretizations of Ricci curvature for discrete combinatorial objects such as networks have been proposed and studied by mathematicians, the computational complexity of these discretizations has largely escaped the attention of theoretical computer scientists. In this paper, we study one such discretization, the Ollivier-Ricci curvature, from the perspective of efficient computation via fine-grained reductions and local query-based algorithms. Our main contributions are the following. (a) We relate the curvature computation problem to the minimum-weight perfect matching problem on complete bipartite graphs via a fine-grained reduction. (b) We formalize the computational aspects of the curvature computation problems in suitable frameworks so that they can be studied by researchers in local algorithms. (c) We provide the first known lower and upper bounds on the query complexity of the curvature computation problems in our local algorithms framework. En route, we also present a localized version of our fine-grained reduction. We believe that our results raise an intriguing set of research questions, motivated both in theory and practice, regarding the design of efficient algorithms for curvatures of objects.
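For concreteness, the discretization in question is defined, in its standard form, as follows:

```latex
% Ollivier-Ricci curvature of an edge (x, y) in a graph:
\kappa(x, y) = 1 - \frac{W_1(\mu_x, \mu_y)}{d(x, y)},
% where \mu_x is a probability measure spread over the neighborhood of
% x (e.g. uniform on its neighbors, possibly with a lazy component at x)
% and W_1 is the 1-Wasserstein (earth mover's) distance. On finite
% graphs W_1 is an optimal transport linear program, which is what links
% curvature computation to minimum-weight bipartite matching.
```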