We describe a new, adaptive solver for the two-dimensional Poisson equation in complicated geometries. Using classical potential theory, we represent the solution as the sum of a volume potential and a double layer potential. Rather than evaluating the volume potential over the given domain, we first extend the source data to a geometrically simpler region with high-order accuracy. This allows us to accelerate the evaluation of the volume potential using an efficient, geometry-unaware fast multipole-based algorithm. To impose the desired boundary condition, it remains only to solve the Laplace equation with suitably modified boundary data. This is accomplished with existing fast and accurate boundary integral methods. The novelty of our solver is the scheme used for creating the source extension, assuming the source data is provided on an adaptive quad-tree. For leaf boxes intersected by the boundary, we construct a universal "stencil" and require that the data be provided at the subset of those points that lie within the domain interior. This universality permits us to precompute and store an interpolation matrix, which is used to extrapolate the source data to an extended set of leaf nodes with full tensor-product grids on each. We demonstrate the method's speed, robustness, and high-order convergence with several examples, including domains with piecewise smooth boundaries.
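For orientation, the potential-theoretic splitting can be written out as follows (standard notation, introduced here for exposition): with $\Delta u = f$ in $\Omega$, $u = g$ on $\partial\Omega$, and $f_e$ an extension of $f$ to a simpler region $B \supseteq \Omega$, one seeks
\[
u(x) = \underbrace{\int_{B} \frac{1}{2\pi}\log|x-y|\, f_e(y)\,\mathrm{d}y}_{\mathcal{V}[f_e](x)} \;+\; \mathcal{D}[\sigma](x),
\]
where $\mathcal{D}[\sigma]$ is a harmonic double layer potential with unknown density $\sigma$. Since $\mathcal{V}[f_e]$ already satisfies the Poisson equation in $\Omega$, $\sigma$ is determined by the boundary integral equation enforcing $\mathcal{D}[\sigma] = g - \mathcal{V}[f_e]$ on $\partial\Omega$.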
We propose a matrix-free solver for the numerical solution of the cardiac electrophysiology model consisting of the monodomain nonlinear reaction-diffusion equation coupled with a system of ordinary differential equations for the ionic species. Our numerical approximation is based on the high-order Spectral Element Method (SEM), which achieves accurate numerical discretization with far fewer Degrees of Freedom than first-order Finite Elements. We combine vectorization with sum-factorization, allowing for very efficient use of high-order polynomials in a high performance computing framework. We validate the effectiveness of our matrix-free solver in a variety of applications and perform different electrophysiological simulations, ranging from a simple slab of cardiac tissue to a realistic four-chamber heart geometry. We compare SEM to SEM with Numerical Integration (SEM-NI), showing that they provide comparable results in terms of accuracy and efficiency. In both cases, increasing the local polynomial degree $p$ leads to better numerical results and smaller computational times than reducing the mesh size $h$. We also implement a matrix-free Geometric Multigrid preconditioner that requires a number of linear solver iterations comparable to that of a state-of-the-art matrix-based Algebraic Multigrid preconditioner. Overall, the matrix-free solver proposed here yields up to a 45$\times$ speed-up over a conventional matrix-based solver.
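To illustrate the kind of kernel such a matrix-free SEM solver is built on, here is a minimal sum-factorization sketch in Python/NumPy (an illustration with our own naming, not the solver's actual code): applying a 2D element mass matrix through one-dimensional contractions costs $O(p^3)$ per element instead of the $O(p^4)$ of a dense matrix-vector product with the assembled block.

```python
import numpy as np

def mass_apply_2d(u, B, w):
    """Apply the 2D element mass matrix M = (B kron B)^T diag(w) (B kron B)
    to nodal values u via sum-factorization (one 1D contraction per tensor
    direction) instead of forming the dense (p+1)^2 x (p+1)^2 block.

    u : (n, n) nodal coefficients on the reference element, n = p + 1
    B : (q, n) 1D basis values at q quadrature points
    w : (q, q) tensor-product quadrature weights (times the Jacobian)
    """
    Uq = B @ u @ B.T   # evaluate at quadrature points: O(q n^2 + q^2 n)
    Uq *= w            # multiply pointwise by quadrature weights
    return B.T @ Uq @ B  # test against basis functions: same cost

# Tiny usage example with made-up data (p = 3, q = 5):
n, q = 4, 5
rng = np.random.default_rng(0)
B = rng.standard_normal((q, n))
w = rng.random((q, q))
u = rng.standard_normal((n, n))
v = mass_apply_2d(u, B, w)

# Sanity check against the explicitly assembled matrix:
K = np.kron(B, B)
M = K.T @ np.diag(w.ravel()) @ K
assert np.allclose(v.ravel(), M @ u.ravel())
```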
Cutting planes are a crucial component of state-of-the-art mixed-integer programming solvers, and the choice of which subset of cuts to add is vital for solver performance. We propose new distance-based measures to judge the value of a cut by quantifying the extent to which it separates relevant parts of the relaxed feasible set. For this purpose, we use the analytic centers of the relaxation polytope or of its optimal face, as well as alternative optimal solutions of the linear programming relaxation. We assess the impact of the choice of distance measure on root node performance and throughout the whole branch-and-bound tree, comparing our measures against those prevalent in the literature. Finally, by a multi-output regression, we predict the relative performance of each measure, using static features readily available before the separation process. Our results indicate that analytic center-based methods help to significantly reduce the number of branch-and-bound nodes needed to explore the search space and that our multi-output regression approach can further improve on any individual method.
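To make the idea concrete (our sketch; the paper's exact definitions may differ in normalization): a cut $\pi^{T} x \le \pi_0$ violated by the LP optimum $x^{*}$ is classically scored by its efficacy, the Euclidean distance from $x^{*}$ to the cut hyperplane, while the measures above replace $x^{*}$ by alternative reference points such as the analytic center $x_{\mathrm{ac}}$ of the relaxation polytope or of its optimal face:
\[
\operatorname{eff}(\pi, \pi_0) = \frac{\pi^{T} x^{*} - \pi_0}{\lVert \pi \rVert_2},
\qquad
d_{\mathrm{ac}}(\pi, \pi_0) = \frac{\pi^{T} x_{\mathrm{ac}} - \pi_0}{\lVert \pi \rVert_2}.
\]
A large $d_{\mathrm{ac}}$ indicates that the cut separates a central part of the relaxed feasible set rather than only the neighborhood of a single vertex.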
Researchers have widely used exploratory factor analysis (EFA) to learn the latent structure underlying multivariate data. Rotation and regularised estimation are two classes of methods commonly used in EFA to find interpretable loading matrices. In this paper, we propose a new family of oblique rotations based on component-wise $L^p$ loss functions $(0 < p\leq 1)$ that is closely related to an $L^p$ regularised estimator. We develop model selection and post-selection inference procedures based on the proposed rotation method. When the true loading matrix is sparse, the proposed method tends to outperform traditional rotation and regularised estimation methods in terms of statistical accuracy and computational cost. Since the proposed loss functions are nonsmooth, we develop an iteratively reweighted gradient projection algorithm for solving the optimisation problem. We also develop theoretical results that establish the statistical consistency of the estimation, model selection, and post-selection inference. We evaluate the proposed method and compare it with regularised estimation and traditional rotation methods via simulation studies. We further illustrate it using an application to the Big Five personality assessment.
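Concretely, the rotation criterion can be sketched as follows (our paraphrase; $A$ denotes an initial loading estimate and the constraint set is the usual oblique one): minimise the component-wise $L^p$ loss
\[
\min_{T}\ \sum_{j,k} \bigl|\lambda_{jk}(T)\bigr|^{p}, \qquad 0 < p \le 1,
\]
over oblique rotations, where $\Lambda(T) = A (T^{T})^{-1}$ is the rotated loading matrix and $T$ has unit-length columns. For $p \le 1$ the loss rewards rotations whose loadings are sparse, which is what ties the rotation family to $L^p$ regularised estimation.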
Computing the distance between two time series is time-consuming for any elastic distance function that accounts for misalignments. Among these functions, dynamic time warping (DTW) is the most prominent. However, a recent extensive evaluation has shown that the move-split-merge (MSM) metric is superior to DTW regarding the classification accuracy of the 1-NN classifier. Unfortunately, the running time of the standard dynamic programming algorithm for the MSM distance is $\Omega(n^2)$, where $n$ is the length of the longest time series. In this paper, we provide approaches for reducing the cost of MSM distance computations by using lower and upper bounds for early pruning of paths in the underlying dynamic programming table. For the case in which one time series is constant, we present a linear-time algorithm. In addition, we propose new linear-time heuristics and adapt heuristics known from DTW to the computation of the MSM distance. One heuristic employs the metric property of MSM and the aforementioned linear-time algorithm. Our experimental studies demonstrate substantial speed-ups of our approaches over previous MSM algorithms. In particular, our MSM computation is faster than a state-of-the-art DTW distance computation on a majority of the popular UCR data sets.
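For reference, the quadratic-time dynamic program that the pruning techniques above accelerate is shown below (a direct Python transcription of the standard MSM recurrence of Stefan, Athitsos and Das; the bounds and heuristics proposed here are not included):

```python
def msm_distance(x, y, c=1.0):
    """Move-split-merge distance between time series x and y via the
    standard quadratic-time dynamic program. c is the constant
    split/merge cost."""
    def split_merge_cost(u, v, w):
        # Cost of splitting/merging point u between neighbors v and w.
        if v <= u <= w or w <= u <= v:
            return c
        return c + min(abs(u - v), abs(u - w))

    m, n = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(m + 1)]
    D[1][1] = abs(x[0] - y[0])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if i == 1 and j == 1:
                continue
            move  = D[i-1][j-1] + abs(x[i-1] - y[j-1]) if i > 1 and j > 1 else INF
            split = D[i-1][j] + split_merge_cost(x[i-1], x[i-2], y[j-1]) if i > 1 else INF
            merge = D[i][j-1] + split_merge_cost(y[j-1], y[j-2], x[i-1]) if j > 1 else INF
            D[i][j] = min(move, split, merge)
    return D[m][n]

# Example: msm_distance([4, 2, 5, 6], [1, 2, 3], c=0.5)
```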
In this paper, we consider a semiconducting device with an active zone made of a single-layer material. The associated Poisson equation for the electrostatic potential (to be solved in order to perform self-consistent computations) is characterized by a surface particle density and an out-of-plane dielectric permittivity in the region surrounding the single layer. To avoid mesh refinements in this region, we propose an interface problem based on the natural domain decomposition suggested by the physical device. Two different interface continuity conditions are discussed. We then write the corresponding variational formulations, adapting the so-called three-fields formulation for domain decomposition, and approximate them with a suitable finite element method. Finally, numerical experiments are performed to illustrate some specific features of this interface approach.
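To fix ideas, the natural transmission conditions take the following generic form (written here only for illustration; the two variants discussed in the paper may differ in detail): with $\Gamma$ the plane of the single layer carrying the surface density $n_s$, $\Omega_1$ and $\Omega_2$ the regions it separates, and $\varepsilon_i$ the corresponding permittivities,
\[
-\nabla\cdot(\varepsilon_i \nabla V_i) = \rho_i \ \text{in } \Omega_i, \qquad
V_1 = V_2 \ \text{on } \Gamma, \qquad
\varepsilon_1 \partial_n V_1 - \varepsilon_2 \partial_n V_2 = -q\, n_s \ \text{on } \Gamma,
\]
i.e., continuity of the potential together with a jump of the dielectric flux proportional to the surface charge.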
Summation-by-parts (SBP) operators allow us to systematically develop energy-stable and high-order accurate numerical methods for time-dependent differential equations. Until recently, the main idea behind existing SBP operators was that polynomials can accurately approximate the solution, and SBP operators should thus be exact for them. However, polynomials do not provide the best approximation for some problems, and other approximation spaces can be more appropriate. We recently addressed this issue and developed a theory for one-dimensional SBP operators based on general function spaces, coined function-space SBP (FSBP) operators. In this paper, we extend the theory of FSBP operators to multiple dimensions. We focus on their existence, connection to quadratures, construction, and mimetic properties. A more exhaustive numerical demonstration of multi-dimensional FSBP (MFSBP) operators and their applications will be provided in future work. As in the one-dimensional case, we demonstrate that most of the established results for polynomial-based multi-dimensional SBP (MSBP) operators carry over to the more general class of MFSBP operators. Our findings imply that the concept of SBP operators can be applied to a significantly larger class of methods than is currently done. This can increase the accuracy of the numerical solutions and/or provide stability to the methods.
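For orientation, the defining multi-dimensional SBP property that MFSBP operators retain reads as follows (standard notation from the polynomial MSBP literature): $D_i = P^{-1} Q_i$ approximates $\partial/\partial x_i$, $P$ is symmetric positive definite, and
\[
Q_i + Q_i^{T} = E_i, \qquad
\mathbf{u}^{T} P D_i \mathbf{v} + (D_i \mathbf{u})^{T} P \mathbf{v} = \mathbf{u}^{T} E_i \mathbf{v} \approx \oint_{\partial\Omega} u\, v\, n_i \,\mathrm{d}s,
\]
which mimics integration by parts (the divergence theorem), with $E_i$ a boundary quadrature weighted by the $i$-th component of the outward normal. The difference for MFSBP operators is that exactness is imposed on a general function space rather than on polynomials.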
Summation-by-parts (SBP) operators are popular building blocks for systematically developing stable and high-order accurate numerical methods for time-dependent differential equations. The main idea behind existing SBP operators is that the solution is assumed to be well approximated by polynomials up to a certain degree, and the SBP operator should therefore be exact for them. However, polynomials might not provide the best approximation for some problems, and other approximation spaces may be more appropriate. In this paper, a theory for SBP operators based on general function spaces is developed. We demonstrate that most of the established results for polynomial-based SBP operators carry over to this general class of SBP operators. Our findings imply that the concept of SBP operators can be applied to a significantly larger class of methods than currently known. We exemplify the general theory by considering trigonometric, exponential, and radial basis functions.
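In one dimension, the structure being generalized is the classical one (standard notation): $D = P^{-1} Q$ is an SBP operator on $[x_L, x_R]$ if $P$ is symmetric positive definite and
\[
Q + Q^{T} = B = \operatorname{diag}(-1, 0, \dots, 0, 1),
\quad\text{so that}\quad
\mathbf{u}^{T} P D \mathbf{v} + (D\mathbf{u})^{T} P \mathbf{v} = u_N v_N - u_1 v_1,
\]
mimicking $\int_{x_L}^{x_R} u v' \,\mathrm{d}x + \int_{x_L}^{x_R} u' v \,\mathrm{d}x = u v \big|_{x_L}^{x_R}$. A polynomial SBP operator additionally satisfies the exactness condition $D\mathbf{f} = \mathbf{f}'$ (on grid values) for all $f$ in some polynomial space $\mathbb{P}^d$; an FSBP operator instead imposes exactness on a general finite-dimensional space $\mathcal{F} = \operatorname{span}\{f_1, \dots, f_K\}$, e.g. of trigonometric, exponential, or radial basis functions.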
We develop a moment method based on Hermite series of arbitrary order to calculate the viscous-slip, thermal-slip, and temperature-jump coefficients for general gas-surface scattering kernels. Under the usual assumptions on scattering kernels, solvability is established by showing the positive definiteness of the symmetric coefficient matrix in the boundary conditions. For gas flows with the Cercignani-Lampis gas-surface interaction and inverse-power-law intermolecular potentials, the model captures the slip and jump coefficients accurately through compact analytic expressions. On the one hand, the proposed method applies to moments of arbitrary order, with accuracy increasing with the order. On the other hand, the explicit formulae for the low-order cases are simpler and more accurate than some existing results in the literature. One may therefore apply these formulae in slip and jump conditions to improve the accuracy of macroscopic fluid dynamic models for gas flows.
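For context, these coefficients enter first-order slip and jump boundary conditions of the generic form (written for illustration; normalizing conventions vary across the literature):
\[
u_{\mathrm{slip}} = \zeta_{P}\, \lambda\, \frac{\partial u_\tau}{\partial n}\bigg|_{\mathrm{w}} + \zeta_{T}\, \frac{\mu}{\rho T}\, \frac{\partial T}{\partial \tau}\bigg|_{\mathrm{w}},
\qquad
T - T_{\mathrm{w}} = \zeta_{J}\, \lambda\, \frac{\partial T}{\partial n}\bigg|_{\mathrm{w}},
\]
where $\lambda$ is the mean free path, $\zeta_P$, $\zeta_T$, and $\zeta_J$ are the viscous-slip, thermal-slip, and temperature-jump coefficients, and $n$ and $\tau$ denote the wall-normal and tangential directions.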
We consider optimal sensor placement for a family of linear Bayesian inverse problems characterized by a deterministic hyper-parameter. The hyper-parameter describes distinct configurations in which measurements can be taken of the observed physical system. To optimally reduce the uncertainty in the system's model with a single set of sensors, the initial sensor placement needs to account for the non-linear state changes across all admissible configurations. We address this requirement through an observability coefficient that links the posteriors' uncertainties directly to the choice of sensors. We propose a greedy sensor selection algorithm that iteratively improves the observability coefficient for all configurations through orthogonal matching pursuit. The algorithm accommodates explicitly correlated noise models even for large sets of candidate sensors, and it remains computationally efficient for high-dimensional forward models through model order reduction. We demonstrate our approach on a large-scale geophysical model of the Perth Basin, and we provide numerical studies regarding optimality and scalability with respect to classic optimal experimental design utility functions.
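As a schematic of the selection loop (a generic greedy/matching-pursuit sketch under simplifying assumptions, namely uncorrelated noise, a linear map from the reduced state to the candidate readings, and made-up names, so not the implementation used in the paper):

```python
import numpy as np

def greedy_sensor_selection(G, n_sensors):
    """Greedy sensor selection: G is (M, r), and row m maps the
    r-dimensional (reduced) state to the reading of candidate sensor m.
    Following an orthogonal-matching-pursuit idea, each step adds the
    candidate row with the largest component orthogonal to the span of the
    rows already chosen, greedily improving how observable the state is
    from the selected readings."""
    M, r = G.shape
    selected = []
    Qbasis = np.zeros((r, 0))  # orthonormal basis of the selected rows' span
    for _ in range(min(n_sensors, r)):
        # Residual of every candidate row w.r.t. the current span.
        R = G - (G @ Qbasis) @ Qbasis.T
        norms = np.linalg.norm(R, axis=1)
        norms[selected] = -np.inf          # never pick a sensor twice
        m = int(np.argmax(norms))
        selected.append(m)
        q = R[m] / np.linalg.norm(R[m])    # extend the orthonormal basis
        Qbasis = np.column_stack([Qbasis, q])
    return selected

# Usage with synthetic data: 200 candidate sensors, 10-dimensional state.
rng = np.random.default_rng(1)
G = rng.standard_normal((200, 10))
print(greedy_sensor_selection(G, 5))
```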
Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover the solution of a desired constrained estimation problem as the penalty parameter $\rho \rightarrow \infty$. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, which also shows that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
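Recall the deterministic template being made stochastic here (the classical proximal distance iteration of Lange and co-workers, in our notation): to minimize $f(x)$ subject to $x \in C$, one minimizes the penalized objective and iterates the MM update
\[
F_\rho(x) = f(x) + \frac{\rho}{2}\operatorname{dist}(x, C)^2,
\qquad
x_{k+1} = \operatorname{prox}_{\rho^{-1} f}\!\bigl(\Pi_C(x_k)\bigr),
\]
where $\Pi_C$ denotes projection onto $C$. The constrained solution is recovered as $\rho \to \infty$, and in the stochastic version $f$ is replaced by sampled summands, with $\rho^{-1}$ playing the role of the learning rate.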