We present a simple yet accurate method to compute the adjoint double layer potential, which is used to solve the Neumann boundary value problem for Laplace's equation in three dimensions. An expansion in curvilinear coordinates leads us to modify the expression for the adjoint double layer so that the singularity is reduced when evaluating the integral on the surface. We then regularize the Green's function, with a radial parameter $\delta$. We show that a natural regularization has error $O(\delta^3)$, and a simple modification improves the error to $O(\delta^5)$. The integral is evaluated numerically without the need for special coordinates. We use this treatment of the adjoint double layer to solve the classical integral equation for the interior Neumann problem and evaluate the solution on the boundary. Choosing $\delta = ch^{4/5}$, we observe approximately $O(h^4)$ convergence in our examples, where $h$ is the spacing in a background grid.
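The abstract does not state the exact regularization; a minimal sketch, assuming an error-function smoothing of the free-space kernel $1/(4\pi r)$ (a common choice for this kernel, not necessarily the authors' formula):

```python
import math

def G(r):
    """Free-space Green's function for the 3D Laplacian, 1/(4*pi*r)."""
    return 1.0 / (4.0 * math.pi * r)

def G_reg(r, delta):
    """Regularized Green's function with radial parameter delta.

    Uses erf(r/delta)/(4*pi*r), a standard smoothing (assumed here);
    the value is finite at r = 0 and converges to G(r) for r >> delta.
    """
    if r == 0.0:
        # limit of erf(r/delta)/(4*pi*r) as r -> 0
        return 1.0 / (2.0 * math.pi ** 1.5 * delta)
    return math.erf(r / delta) / (4.0 * math.pi * r)

# Scaling of the regularization parameter with grid spacing h,
# as in the abstract's choice delta = c * h^(4/5):
def delta_of_h(h, c=1.0):
    return c * h ** 0.8
```

For $r \gg \delta$ the regularized kernel agrees with the exact one to machine precision, so the smoothing only alters the integrand near the singularity.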
G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles, rendering it applicable to settings arbitrarily far from equilibrium. For Markovian coarse-grainings, we prove a number of algorithmic fluctuation inequalities. The most important of these is a very general formulation of the second law of thermodynamics. In the presence of a heat and work reservoir, it implies algorithmic versions of Jarzynski's equality and Landauer's principle. Finally, to demonstrate how a deficiency of algorithmic entropy can be used as a resource, we model an information engine powered by compressible strings.
There are many numerical methods for solving partial differential equations (PDEs) on manifolds, such as classical implicit, finite difference, and finite element methods, as well as isogeometric analysis, which aims to improve the interoperability between the finite element method and computer-aided design (CAD) software. However, these approaches have difficulty when the domain has singularities, since the solution at a singularity may be multivalued. This paper develops a novel numerical approach for solving elliptic PDEs on real, closed, connected, orientable, and almost smooth algebraic curves and surfaces. Our method integrates numerical algebraic geometry, differential geometry, and a finite difference scheme, and is demonstrated on several examples.
We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded degree graphs. Our algorithm utilizes a vertex-percolation process with a carefully chosen rejection filter and works under a percolation subcriticality condition. We show that this condition is optimal in the sense that the task of (approximately) sampling weighted rooted graphlets becomes impossible in finite expected time for infinite graphs and intractable for finite graphs when the condition does not hold. We apply our sampling algorithm as a subroutine to give near-linear-time perfect sampling algorithms for polymer models and weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. This new perfect sampling algorithm for polymer models gives improved sampling algorithms for spin systems at low temperatures on expander graphs and unbalanced bipartite graphs, among other applications.
We study the high-order local discontinuous Galerkin (LDG) method for the $p$-Laplace equation. We reformulate our spatial discretization as an equivalent convex minimization problem and use a preconditioned gradient descent method as the nonlinear solver. For the first time, a weighted preconditioner that provides $hk$-independent convergence is applied in the LDG setting. For polynomial order $k \geqslant 1$, we rigorously establish the solvability of our scheme and provide a priori error estimates in a mesh-dependent energy norm. Our error estimates are stated in a distance that differs from, and is not equivalent to, those used in existing LDG results. For arbitrarily high-order polynomials, under the assumption that the exact solution has sufficient regularity, the error estimates demonstrate the potential for high-order accuracy. Our numerical results exhibit the desired convergence speed facilitated by the preconditioner: we observe the best convergence rates in the gradient variables, in alignment with linear LDG, and optimal rates in the primal variable when $1 < p \leqslant 2$.
The main result of this paper is an edge-coloured version of Tutte's $f$-factor theorem. We give a necessary and sufficient condition for an edge-coloured graph $G^c$ to have a properly coloured $f$-factor. We state and prove our result in terms of an auxiliary graph $G_f^c$ which has a 1-factor if and only if $G^c$ has a properly coloured $f$-factor; this is analogous to the "short proof" of the $f$-factor theorem given by Tutte in 1954. An alternative statement, analogous to the original $f$-factor theorem, is also given. We show that our theorem generalises the $f$-factor theorem; that is, the former implies the latter. We consider other properties of edge-coloured graphs, and show that similar results are unlikely for $f$-factors with rainbow components and distance-$d$-coloured $f$-factors, even when $d=2$ and the number of colours used is asymptotically minimal.
The noncommutative sum-of-squares (ncSoS) hierarchy was introduced by Navascu\'{e}s-Pironio-Ac\'{i}n as a sequence of semidefinite programming relaxations for approximating values of noncommutative polynomial optimization problems, originally motivated by computing quantum values of nonlocal games. Recent work has started to analyze the hierarchy for approximating ground energies of local Hamiltonians, initially through rounding algorithms which output product states for degree-2 ncSoS applied to Quantum Max-Cut. Some rounding methods are known which output entangled states, but they use degree-4 ncSoS. Based on this, Hwang-Neeman-Parekh-Thompson-Wright conjectured that degree-2 ncSoS cannot beat product state approximations for Quantum Max-Cut and gave a partial proof relying on a conjectural generalization of Borell's inequality. In this work we consider a family of Hamiltonians (called the quantum rotor model in condensed matter literature or lattice $O(k)$ vector model in quantum field theory) with infinite-dimensional local Hilbert space $L^{2}(S^{k - 1})$, and show that a degree-2 ncSoS relaxation approximates the ground state energy better than any product state.
This paper develops a general asymptotic theory of local polynomial (LP) regression for spatial data observed at irregularly spaced locations in a sampling region $R_n \subset \mathbb{R}^d$. We adopt a stochastic sampling design that can generate irregularly spaced sampling sites in a flexible manner, including both the pure increasing domain and mixed increasing domain frameworks. We first introduce a nonparametric regression model for spatial data defined on $\mathbb{R}^d$ and then establish the asymptotic normality of LP estimators with general order $p \geq 1$. We also propose methods for constructing confidence intervals and establish uniform convergence rates of LP estimators. Our dependence structure conditions on the underlying processes cover a wide class of random fields such as L\'evy-driven continuous autoregressive moving average random fields. As an application of our main results, we discuss a two-sample testing problem for mean functions and their partial derivatives.
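A minimal one-dimensional sketch of the local polynomial estimator with order $p = 1$ (local linear regression with a Gaussian kernel; the paper's setting is $d$-dimensional with general order, so this is only an illustration):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear (p = 1) estimate of the mean function m at x0.

    Solves a kernel-weighted least-squares problem in the basis
    [1, x - x0]; the intercept estimates m(x0) and the slope
    estimates m'(x0). Gaussian kernel with bandwidth h.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
    return beta[0]                                # intercept = m_hat(x0)
```

Because the basis contains all polynomials of degree $\leq 1$, the estimator reproduces linear mean functions exactly, which is the source of its boundary-bias advantage over kernel regression.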
We present a method for the unsupervised segmentation of electron microscopy images, which are powerful descriptors of materials and chemical systems. Images are oversegmented into overlapping chips, and similarity graphs are generated from embeddings extracted from a domain-pretrained convolutional neural network (CNN). The Louvain method for community detection is then applied to perform segmentation. The graph representation provides an intuitive way of presenting the relationship between chips and communities. We demonstrate our method to track irradiation-induced amorphous fronts in thin films used for catalysis and electronics. This method has potential for "on-the-fly" segmentation to guide emerging automated electron microscopes.
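The chips-to-communities pipeline can be sketched as follows (a minimal version assuming cosine-similarity edges with a hypothetical threshold; the embeddings would come from the domain-pretrained CNN, which is treated as given here):

```python
import numpy as np
import networkx as nx

def segment_chips(embeddings, threshold=0.9, seed=0):
    """Cluster image chips by Louvain community detection.

    embeddings: (n_chips, dim) array of CNN feature vectors (assumed
    precomputed). Edges connect chip pairs whose cosine similarity
    exceeds `threshold` (an illustrative choice, not from the paper).
    Returns a list of sets of chip indices, one set per community.
    """
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T                       # pairwise cosine similarities
    n = len(X)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                G.add_edge(i, j, weight=float(sim[i, j]))
    return nx.community.louvain_communities(G, weight="weight", seed=seed)
```

Each returned community groups chips with mutually similar embeddings; mapping chip indices back to their image coordinates yields the segmentation.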
We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose the Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure to avoid large matrix formulation and computation in gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets -- movie ratings, hyperspectral images, and high-dimensional functions -- suggest that the proposed algorithms are more efficient and have better reconstruction ability than other candidates.
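The idea of preconditioning each factor's gradient by a diagonal block of the Hessian can be illustrated in the simpler two-factor setting $M \approx UV^{\top}$ (a sketch of the general principle, not the tensor-ring scheme itself; names and step size are illustrative):

```python
import numpy as np

def precond_gd_completion(M, mask, r, steps=50, lr=1.0):
    """Preconditioned gradient descent for matrix completion M ~ U V^T.

    Each factor's Euclidean gradient is multiplied by the inverse Gram
    matrix of the other factor, which approximates the corresponding
    diagonal block of the Hessian -- a two-factor analogue of the
    block-diagonal preconditioning used in the tensor-ring setting.
    """
    rng = np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    eye = 1e-8 * np.eye(r)              # small ridge for stability
    for _ in range(steps):
        R = mask * (U @ V.T - M)        # residual on observed entries
        U = U - lr * (R @ V) @ np.linalg.inv(V.T @ V + eye)
        R = mask * (U @ V.T - M)
        V = V - lr * (R.T @ U) @ np.linalg.inv(U.T @ U + eye)
    return U, V
```

The Gram-matrix preconditioner rescales the search direction so that convergence speed does not degrade with the conditioning of the factors, which is the role the new Riemannian metric plays for the mode-2 unfoldings of the core tensors.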
Clustering of extreme events can have profound and detrimental societal consequences. The extremal index, a number in the unit interval, is a key parameter in modelling the clustering of extremes. The study of the extremal index often assumes a local dependence condition known as the $D^{(d)}(u_n)$ condition. In this paper, we develop a hypothesis test for the $D^{(d)}(u_n)$ condition based on asymptotic results. We develop an estimator for the extremal index by leveraging the inference procedure based on the $D^{(d)}(u_n)$ condition, and we establish the asymptotic normality of this estimator. The finite-sample performance of the hypothesis test and the estimator is examined in a simulation study, where we consider both models that satisfy the $D^{(d)}(u_n)$ condition and models that violate it. In a simple case study, our statistical procedure shows that daily summer temperatures share a common clustering structure of extreme values, based on data observed at three weather stations in the Netherlands, Belgium and Spain.
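For intuition, the classical runs estimator of the extremal index (which also uses a run length $d$, though it is not the paper's $D^{(d)}(u_n)$-based estimator) can be written in a few lines:

```python
import numpy as np

def runs_estimator(x, u, d):
    """Runs estimator of the extremal index theta at threshold u.

    Exceedances of u separated by at least d consecutive
    non-exceedances are counted as distinct clusters;
    theta_hat = (#clusters) / (#exceedances). theta near 1 means
    isolated extremes, theta near 0 means strong clustering.
    """
    exceed = np.flatnonzero(np.asarray(x) > u)
    if exceed.size == 0:
        return np.nan
    # number of non-exceedances between consecutive exceedances
    between = np.diff(exceed) - 1
    n_clusters = 1 + int(np.sum(between >= d))
    return n_clusters / exceed.size
```

When every exceedance forms its own cluster the estimate is 1; when exceedances arrive in pairs it is 1/2, matching the interpretation of the extremal index as the reciprocal mean cluster size.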