The purpose of this paper is to develop the anti-Gauss cubature rule for approximating integrals defined on the square whose integrand function may have algebraic singularities at the boundaries. An application of such a rule to the numerical solution of second-kind Fredholm integral equations is also explored. The stability, convergence, and conditioning of the proposed Nystr\"om-type method are studied. The numerical solution of the resulting dense linear system is also investigated and several numerical tests are presented.
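A minimal sketch of the anti-Gauss construction in the unweighted (Legendre) case, extended to the square by a tensor product; the paper's rule for integrands with algebraic boundary singularities would instead use the recurrence coefficients of a suitable Jacobi weight, so everything below is illustrative. Following Laurie's construction, the $(n+1)$-point anti-Gauss rule is the Gauss rule of the Jacobi matrix $J_{n+1}$ with its last recurrence coefficient doubled:

```python
import numpy as np

def gauss_legendre(n):
    # Golub-Welsch: nodes and weights from the Jacobi matrix of the Legendre
    # weight, whose recurrence coefficients are beta_k = k^2 / (4k^2 - 1).
    k = np.arange(1, n)
    off = k / np.sqrt(4.0 * k**2 - 1.0)          # sqrt(beta_k)
    J = np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, 2.0 * V[0, :]**2               # mu_0 = integral of weight = 2

def anti_gauss_legendre(n):
    # Laurie's anti-Gauss rule: the (n+1)-point Gauss rule of the Jacobi
    # matrix J_{n+1} with the last recurrence coefficient beta_n doubled,
    # so its error is (to leading order) the negative of the Gauss error.
    k = np.arange(1, n + 1)
    beta = k**2 / (4.0 * k**2 - 1.0)
    beta[-1] *= 2.0
    off = np.sqrt(beta)
    J = np.diag(off, 1) + np.diag(off, -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, 2.0 * V[0, :]**2

# Tensor-product cubature on [-1,1]^2 for a smooth test integrand.
xg, wg = gauss_legendre(5)
xa, wa = anti_gauss_legendre(5)
f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0 / np.e)**2
G = wg @ f(xg[:, None], xg[None, :]) @ wg
A = wa @ f(xa[:, None], xa[None, :]) @ wa
print(G - exact, A - exact)          # errors of opposite sign
print(0.5 * (G + A) - exact)         # averaged rule is sharper
```

Since the Gauss and anti-Gauss errors have opposite signs on sufficiently smooth integrands, their average serves both as a sharper cubature and as a practical error estimate.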
We generalize the Poisson limit theorem to binary functions of random objects whose law is invariant under the action of an amenable group. Examples include stationary random fields, exchangeable sequences, and exchangeable graphs. A celebrated result of E. Lindenstrauss shows that normalized sums over certain increasing subsets of such groups approximate expectations. Our results show that the corresponding unnormalized sums of binary statistics are asymptotically Poisson, provided suitable mixing conditions hold. They extend to randomly subsampled sums and show, moreover, that strict invariance of the distribution is not needed if the requisite mixing condition defined by the group holds. We illustrate the results with applications to random fields, Cayley graphs, and Poisson processes on groups.
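A small simulation illustrating the statement in the simplest setting, the group $\mathbb{Z}$ acting by shifts on a stationary 1-dependent sequence; the threshold, dependence structure, and parameters below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_counts(n=2000, thresh=3.0, trials=3000):
    # Stationary 1-dependent sequence X_i = (Z_i + Z_{i+1}) / sqrt(2) with
    # Z_i i.i.d. standard normal; the binary statistic 1{X_i > thresh} is
    # rare and mixing, so the unnormalized sum over the window {1,...,n}
    # (a Folner sequence for the shift action of Z) should be ~ Poisson.
    counts = np.empty(trials, dtype=int)
    for t in range(trials):
        z = rng.standard_normal(n + 1)
        x = (z[:-1] + z[1:]) / np.sqrt(2.0)
        counts[t] = np.count_nonzero(x > thresh)
    return counts

S = rare_event_counts()
print(f"mean {S.mean():.3f}  variance {S.var():.3f}  (equal for Poisson)")
```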
We study discretizations of fractional fully nonlinear equations by powers of discrete Laplacians. Our problems are parabolic and of order $\sigma\in(0,2)$ since they involve fractional Laplace operators $(-\Delta)^{\sigma/2}$. They arise e.g.~in control and game theory as dynamic programming equations; solutions are non-smooth in general and should be interpreted as viscosity solutions. Our approximations are realized as finite-difference quadrature approximations and are second-order accurate for all values of $\sigma$. The accuracy of previous approximations depends on $\sigma$ and deteriorates as $\sigma$ approaches $2$. We show that the schemes are monotone, consistent, $L^\infty$-stable, and convergent using a priori estimates, viscosity solutions theory, and the method of half-relaxed limits. We present several numerical examples.
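For orientation, the object being approximated can be written down exactly in one dimension: the spectral power of the discrete Dirichlet Laplacian on $(0,1)$, whose eigenpairs are known in closed form. This is only the exact discrete operator, not the paper's second-order finite-difference quadrature scheme:

```python
import numpy as np

def fractional_discrete_laplacian(N, sigma):
    # Spectral power of the 1D discrete Dirichlet Laplacian on (0,1) with
    # N interior points: (-Delta_h)^{sigma/2} = V diag(lambda^{sigma/2}) V^T,
    # using the closed-form eigenpairs of tridiag(-1, 2, -1) / h^2.
    h = 1.0 / (N + 1)
    j = np.arange(1, N + 1)
    lam = (4.0 / h**2) * np.sin(j * np.pi * h / 2.0)**2
    x = h * np.arange(1, N + 1)
    V = np.sqrt(2.0 * h) * np.sin(np.pi * np.outer(x, j))   # orthonormal columns
    return V @ np.diag(lam**(sigma / 2.0)) @ V.T

A2 = fractional_discrete_laplacian(5, 2.0)   # sigma = 2: recovers tridiag(-1,2,-1)/h^2
A1 = fractional_discrete_laplacian(5, 1.0)   # sigma = 1: square root of the above
print(np.allclose(A1 @ A1, A2))              # True
```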
We consider approximation of the variable-coefficient Helmholtz equation in the exterior of a Dirichlet obstacle using perfectly-matched-layer (PML) truncation; it is well known that this approximation is exponentially accurate in the PML width and the scaling angle, and the approximation was recently proved to be exponentially accurate in the wavenumber $k$ in [Galkowski, Lafontaine, Spence, 2021]. We show that the $hp$-FEM applied to this problem does not suffer from the pollution effect, in that there exist $C_1,C_2>0$ such that if $hk/p\leq C_1$ and $p \geq C_2 \log k$ then the Galerkin solutions are quasioptimal (with constant independent of $k$), under the following two conditions: (i) the solution operator of the original Helmholtz problem is polynomially bounded in $k$ (which occurs for "most" $k$ by [Lafontaine, Spence, Wunsch, 2021]), and (ii) either there is no obstacle and the coefficients are smooth, or the obstacle is analytic and the coefficients are analytic in a neighbourhood of the obstacle and smooth elsewhere. This $hp$-FEM result is obtained via a decomposition of the PML solution into "high-" and "low-frequency" components, analogous to the decomposition for the original Helmholtz solution recently proved in [Galkowski, Lafontaine, Spence, Wunsch, 2022]. The decomposition is obtained using tools from semiclassical analysis (i.e., the PDE techniques specifically designed for studying Helmholtz problems with large $k$).
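The meshing consequence of the quasioptimality result is easy to state as code. The constants $C_1, C_2$ are the (unspecified) theorem constants; the values below are placeholders for illustration only:

```python
import math

def hp_mesh_for_wavenumber(k, C1=0.5, C2=1.0):
    # The pollution-free meshing rule of the abstract: degree p >= C2 log k
    # and mesh size h with h k / p <= C1. C1 and C2 are the unspecified
    # theorem constants; the values here are placeholders for illustration.
    p = max(1, math.ceil(C2 * math.log(k)))
    h = C1 * p / k
    return h, p

for k in (10, 100, 1000):
    h, p = hp_mesh_for_wavenumber(k)
    print(f"k={k:5d}  p={p:2d}  h={h:.4f}  dofs per dimension ~ {p / h:.0f}")
```

Since $h = C_1 p / k$ at the threshold, the degrees of freedom per dimension scale like $p/h = k/C_1$, i.e., only linearly in $k$, which is exactly the absence of pollution.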
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm. For smooth initial values, two error estimates in general spatial $L^q$-norms are established.
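A toy one-dimensional analogue of the equation being discretized (the paper treats the three-dimensional problem and proves error estimates rather than simulating): finite differences in space and explicit Euler-Maruyama in time, with all parameters chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def allen_cahn_multiplicative_1d(N=63, T=0.1, steps=4000):
    # Toy 1D analogue of du = (Laplace u + u - u^3) dt + u dW with Dirichlet
    # boundary conditions: second-order finite differences in space, explicit
    # Euler-Maruyama in time (dt/h^2 ~ 0.1 keeps the explicit step stable).
    h = 1.0 / (N + 1)
    dt = T / steps
    x = h * np.arange(1, N + 1)
    u = np.sign(np.sin(3.0 * np.pi * x))          # non-smooth initial value
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        lap[0] = u[1] - 2.0 * u[0]                # ghost values u_0 = u_{N+1} = 0
        lap[-1] = u[-2] - 2.0 * u[-1]
        dW = np.sqrt(dt) * rng.standard_normal(N)
        u = u + dt * (lap / h**2 + u - u**3) + u * dW
    return x, u

x, u = allen_cahn_multiplicative_1d()
print(u.min(), u.max())   # trajectories stay near the metastable states +-1
```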
We develop a novel and efficient discontinuous Galerkin spectral element method (DG-SEM) for the spherical rotating shallow water equations in vector invariant form. We prove that the DG-SEM is energy stable and discretely conserves mass, vorticity, and linear geostrophic balance on general curvilinear meshes. These theoretical results are made possible by our novel entropy stable numerical DG fluxes for the shallow water equations in vector invariant form. We experimentally verify these results on a cubed sphere mesh. Additionally, we show that our method is robust, that is, it can be run stably without any dissipation. The entropy stable fluxes are sufficient to control the grid scale noise generated by geostrophic turbulence without the need for artificial stabilisation.
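To illustrate the entropy-stability mechanism in the simplest setting, here is the classical entropy-conservative flux for the one-dimensional shallow water equations in conservative form (in the style of Fjordholm, Mishra and Tadmor); the paper's novel fluxes are for the vector-invariant form on the sphere and are not reproduced here:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def ec_flux(hL, uL, hR, uR):
    # Entropy-conservative flux for the 1D shallow water equations in
    # conservative form (arithmetic-mean form, cf. Fjordholm-Mishra-Tadmor).
    h_avg = 0.5 * (hL + hR)
    u_avg = 0.5 * (uL + uR)
    h2_avg = 0.5 * (hL**2 + hR**2)
    return np.array([h_avg * u_avg,                         # mass flux
                     h_avg * u_avg**2 + 0.5 * g * h2_avg])  # momentum flux

# Verify the entropy-conservation condition (w_R - w_L) . F = psi_R - psi_L
# for entropy variables w = (g h - u^2/2, u) and potential psi = (g/2) h^2 u.
w = lambda h, u: np.array([g * h - 0.5 * u**2, u])
psi = lambda h, u: 0.5 * g * h**2 * u
hL, uL, hR, uR = 1.0, 0.2, 1.3, -0.1
F = ec_flux(hL, uL, hR, uR)
print((w(hR, uR) - w(hL, uL)) @ F - (psi(hR, uR) - psi(hL, uL)))  # ~ 0
```

The printed residual checks the defining condition $(w_R - w_L)\cdot F^* = \psi_R - \psi_L$, which is what makes the flux entropy conservative; adding dissipation to such a flux yields an entropy stable scheme.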
Due to the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, solving inverse problems of nonlinear differential equations (DEs) with multiple solutions is a challenging task. To address this, we propose homotopy physics-informed neural networks (HomPINNs), a novel framework that leverages homotopy continuation and neural networks (NNs) to solve inverse problems. The proposed framework begins with the use of NNs to simultaneously approximate unlabeled observations across diverse solutions while adhering to DE constraints. Through homotopy continuation, the proposed method solves the inverse problem by tracing the observations and identifying multiple solutions. We test the performance of the proposed method on one-dimensional DEs and apply it to solve a two-dimensional Gray-Scott simulation. Our findings demonstrate that the proposed method is scalable and adaptable, providing an effective solution for solving DEs with multiple solutions and unknown parameters. Moreover, it has significant potential for various applications in scientific computing, such as modeling complex systems and solving inverse problems in physics, chemistry, biology, etc.
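The continuation mechanism at the heart of HomPINNs can be isolated from the NN component: deform an easy problem with known solutions into the target problem and track each solution branch. A scalar sketch with Newton's method standing in for network training (all functions and step counts are illustrative):

```python
import numpy as np

def newton(F, dF, x, tol=1e-12, itmax=50):
    # Plain Newton iteration for a scalar equation F(x) = 0.
    for _ in range(itmax):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_roots():
    # Deform the easy g(x) = x^3 - x (roots -1, 0, 1) into the target
    # f(x) = x^3 - 2x + 1/2 via H(x, t) = (1 - t) g(x) + t f(x), i.e.
    # H(x, t) = x^3 - (1 + t) x + t/2, tracking each root as t sweeps
    # from 0 to 1 and warm-starting from the previous t.
    roots = [-1.0, 0.0, 1.0]
    for t in np.linspace(0.05, 1.0, 20):
        H  = lambda x, t=t: x**3 - (1.0 + t) * x + 0.5 * t
        dH = lambda x, t=t: 3.0 * x**2 - (1.0 + t)
        roots = [newton(H, dH, r) for r in roots]
    return roots

print(homotopy_roots())   # three distinct solutions of x^3 - 2x + 1/2 = 0
```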
This paper studies an evolving bulk--surface finite element method for a model of tissue growth, which is a modification of the model of Eyles, King and Styles (2019). The model couples a Poisson equation on the domain with a forced mean curvature flow of the free boundary, with nontrivial bulk--surface coupling in both the velocity law of the evolving surface and the boundary condition of the Poisson equation. The numerical method discretizes evolution equations for the mean curvature and the outer normal, and uses a harmonic extension of the surface velocity into the bulk. The discretization admits a convergence analysis in the case of continuous finite elements of polynomial degree at least two. The stability of the discretized bulk--surface coupling is a major concern. The error analysis combines stability estimates and consistency estimates to yield optimal-order $H^1$-norm error bounds for the computed tissue pressure and for the surface position, velocity, normal vector and mean curvature. Numerical experiments illustrate and complement the theoretical results.
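One building block of the scheme, the harmonic extension of the surface velocity into the bulk, is easy to illustrate in a toy setting: solve Laplace's equation with the surface data as Dirichlet boundary values. A Jacobi-iteration sketch on the unit square (the paper of course uses evolving bulk--surface finite elements on the moving domain):

```python
import numpy as np

def harmonic_extension(boundary_values, N=64, iters=4000):
    # Harmonic extension on the unit square: solve Laplace's equation with
    # Dirichlet data by Jacobi iteration (a toy stand-in for the paper's
    # finite element harmonic extension on the evolving bulk domain).
    v = np.zeros((N, N))
    v[0, :], v[-1, :], v[:, 0], v[:, -1] = boundary_values
    for _ in range(iters):
        v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                + v[1:-1, :-2] + v[1:-1, 2:])
    return v

ext = harmonic_extension((1.0, 0.0, 0.5, 0.5))
print(ext[32, 32])   # interior values lie between the boundary extremes
```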
SARRIGUREN, a new complete algorithm for SAT based on counting clauses (which is also valid for Unique-SAT and #SAT), is described, analyzed and tested. Whereas existing complete algorithms for SAT slow down on clauses with many literals, such clauses are an advantage for SARRIGUREN: the more literals the clauses contain, the higher the probability of overlap among clauses, a property that makes the clause-counting process more efficient. The algorithm achieves $O(m^2 \times n/k)$ time complexity for random $k$-SAT instances of $n$ variables and $m$ relatively dense clauses, where density is measured relative to the number of variables $n$; clauses are relatively dense when $k\geq7\sqrt{n}$. Although worst cases with exponential complexity are theoretically possible, the probability of encountering them in random $k$-SAT with relatively dense clauses is practically zero. The algorithm has been tested empirically, and the polynomial time complexity is maintained for $k$-SAT instances with less dense clauses ($k\geq5\sqrt{n}$). That density can be as low as $0.049$, for example with $n=20000$ variables and $k=989$ literals. In addition, two complementary algorithms are presented that provide the solutions to $k$-SAT instances and valuable information about the number of solutions for each literal. Although this algorithm does not settle the NP=P question (it is not a polynomial algorithm for 3-SAT), it broadens the knowledge of the subject, since $k$-SAT with $k>3$ and dense clauses is no harder than 3-SAT. Moreover, the Python implementation of the algorithms, together with all input datasets and experimental results, is made available.
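The details of SARRIGUREN are in the paper, but the counting principle it builds on can be sketched: a clause with $k$ literals over $n$ variables is falsified by exactly $2^{n-k}$ assignments, and overlapping clauses share falsifying assignments. An illustrative (worst-case exponential) inclusion-exclusion counter, not the paper's algorithm:

```python
from itertools import combinations

def count_sat(n, clauses):
    # #SAT by inclusion-exclusion over falsifying sets: a clause with k
    # literals is falsified by exactly 2^(n-k) assignments; a set of clauses
    # is simultaneously falsified by 2^(n-s) assignments, where s is the
    # number of distinct variables in their union, unless two clauses
    # contain complementary literals (then the intersection is empty).
    union_size = 0
    for r in range(1, len(clauses) + 1):
        for subset in combinations(clauses, r):
            merged = set().union(*subset)
            if any(-lit in merged for lit in merged):   # clashing literals
                continue
            union_size += (-1) ** (r + 1) * 2 ** (n - len(merged))
    return 2 ** n - union_size

# Clauses as sets of signed variables: (x1 or not x2) -> {1, -2}.
clauses = [frozenset({1, -2}), frozenset({2, 3}), frozenset({-1, 3})]
print(count_sat(3, clauses))   # 3 satisfying assignments
```

When clauses are dense and overlap heavily, many merged sets clash and the surviving terms involve large unions, which is consistent with the abstract's observation that overlap makes the clause-counting process more efficient.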
High frequencies pose a notable challenge for numerical methods for partial differential equations. In this paper, a learning-based numerical method (LbNM) is proposed for the Helmholtz equation with high frequency. The main novelty is the use of Tikhonov regularization to stably learn the solution operator by utilizing relevant information, especially the fundamental solutions. Applying the solution operator to a new boundary input then quickly updates the solution. Based on the method of fundamental solutions and the quantitative Runge approximation, we give an error estimate, which indicates the interpretability and generalizability of the present method. Numerical results validate the error analysis and demonstrate the high-precision and high-efficiency features of the method.
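A sketch of two classical ingredients the method builds on, the method of fundamental solutions and Tikhonov regularization, for a 2D interior Dirichlet test problem; the paper's learned solution operator is not reproduced here, and all geometry and parameters below are illustrative:

```python
import numpy as np
from scipy.special import hankel1

def mfs_tikhonov(k, bdry_pts, bdry_data, src_pts, lam=1e-10):
    # Method of fundamental solutions for the 2D Helmholtz equation: expand
    # the solution in fundamental solutions Phi(x, y) = (i/4) H_0^(1)(k|x-y|)
    # centered at exterior source points, and fit the boundary data by
    # Tikhonov-regularized least squares.
    r = np.linalg.norm(bdry_pts[:, None, :] - src_pts[None, :, :], axis=2)
    A = 0.25j * hankel1(0, k * r)
    AH = A.conj().T
    c = np.linalg.solve(AH @ A + lam * np.eye(len(src_pts)), AH @ bdry_data)
    def u(x):
        rx = np.linalg.norm(x[None, :] - src_pts, axis=1)
        return (0.25j * hankel1(0, k * rx)) @ c
    return u

# Interior Dirichlet test on the unit disk with a plane-wave exact solution.
k = 5.0
t = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
bdry = np.c_[np.cos(t), np.sin(t)]
srcs = 1.5 * np.c_[np.cos(t[::2]), np.sin(t[::2])]   # sources outside the disk
exact = lambda p: np.exp(1j * k * p[..., 0])         # e^{ikx} solves Helmholtz
u = mfs_tikhonov(k, bdry, exact(bdry), srcs)
x = np.array([0.3, -0.2])
print(abs(u(x) - exact(x)))                          # small interior error
```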
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
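A minimal reading of the theory in code, assuming saliency maps are embedded as feature vectors: similarity to the participant's own candidate explanations follows Shepard's exponential law, and normalized similarities give the predicted distribution over the AI's labels. Everything concrete here (embeddings, distance, sensitivity) is a stand-in for the paper's similarity space:

```python
import numpy as np

def predicted_label_distribution(ai_expl, own_expls, sensitivity=1.0):
    # Shepard's universal law of generalization: similarity decays
    # exponentially with distance in psychological similarity space. The
    # AI's saliency map is compared to the explanations the participant
    # would give for each candidate label; normalized similarities give
    # the predicted distribution over the AI's decisions.
    d = np.array([np.linalg.norm(ai_expl - e) for e in own_expls])
    s = np.exp(-sensitivity * d)
    return s / s.sum()

# Hypothetical 2-D embeddings of saliency maps for labels "cat" and "dog".
ai_map = np.array([0.9, 0.1])
own_maps = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(predicted_label_distribution(ai_map, own_maps))   # [P(cat), P(dog)]
```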