Riesz potentials are well-known objects of study in the theory of singular integrals that have attracted increased recent interest from the numerical analysis community due to their connections with fractional Laplace problems and their proposed use in certain domain decomposition methods. While the L$^p$-mapping properties of Riesz potentials on flat geometries are well established, comparable results on rougher geometries for Sobolev spaces are very scarce. In this article, we study the continuity properties of the surface Riesz potential generated by the $1/\sqrt{x}$ singular kernel on a polygonal domain $\Omega \subset \mathbb{R}^2$. We prove that this surface Riesz potential maps L$^{2}(\partial\Omega)$ into H$^{+1/2}(\partial\Omega)$. Our proof is based on a careful analysis of the Riesz potential in the neighbourhood of the corners of the domain $\Omega$. The main tool we use for this corner analysis is the Mellin transform, which can be seen as a counterpart of the Fourier transform adapted to corner geometries.
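For orientation, one standard normalization of the Mellin transform on the half-line (the article's convention may differ) and its relation to the Fourier transform are $$ \mathcal{M}f(s) \;=\; \int_0^{\infty} t^{\,s-1} f(t)\,\mathrm{d}t, \qquad \mathcal{M}f(\sigma+i\xi) \;=\; \int_{-\infty}^{\infty} e^{\sigma u} f(e^{u})\, e^{i\xi u}\,\mathrm{d}u, $$ so that, after the substitution $t=e^{u}$, dilations of the half-line (the natural symmetries of a corner) become translations and the Mellin transform along a vertical line $\operatorname{Re} s = \sigma$ is a Fourier transform.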
We study the performance of a new block preconditioner for a class of $3\times3$ block saddle point problems which arise from finite element methods for solving time-dependent Maxwell equations and some other practical problems. We also estimate the lower and upper bounds of the eigenvalues of the preconditioned matrix. Finally, we apply the new preconditioner to accelerate the convergence of the GMRES method; the results demonstrate the effectiveness of the preconditioner.
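For illustration only, the sketch below shows how a generic block-diagonal preconditioner for a $3\times3$ block system can be supplied to GMRES through SciPy; the blocks, the preconditioner, and all parameters are placeholder assumptions and do not reproduce the preconditioner analyzed here.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 40
# Placeholder blocks: random stand-ins, NOT the Maxwell finite element
# blocks or the preconditioner proposed in the paper.
A = 4.0 * sp.eye(n) + sp.random(n, n, density=0.05, random_state=0)
D = 2.0 * sp.eye(n)
E = 2.0 * sp.eye(n)
B = sp.random(n, n, density=0.05, random_state=1)
C = sp.random(n, n, density=0.05, random_state=2)

# A generic 3x3 block system and the right-hand side.
K = sp.bmat([[A, B.T, C.T], [B, D, None], [C, None, E]], format="csc")
b = rng.standard_normal(3 * n)

# Block-diagonal preconditioner blkdiag(A, D, E), applied through sparse LU
# factorizations wrapped in a LinearOperator.
luA, luD, luE = (spla.splu(M.tocsc()) for M in (A, D, E))
def apply_block_diag_inverse(r):
    return np.concatenate([luA.solve(r[:n]),
                           luD.solve(r[n:2 * n]),
                           luE.solve(r[2 * n:])])
M_op = spla.LinearOperator((3 * n, 3 * n), matvec=apply_block_diag_inverse)

x, info = spla.gmres(K, b, M=M_op, restart=50)
print("GMRES info:", info, " residual:", np.linalg.norm(K @ x - b))
\end{verbatim}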
Spreading processes on graphs arise in a host of application domains, from the study of online social networks to viral marketing to epidemiology. Various discrete-time probabilistic models for spreading processes have been proposed. These are used for downstream statistical estimation and prediction problems, often involving messages or other information that is transmitted along with infections caused by the process. It is thus important to design models of cascade observation that take into account phenomena that lead to uncertainty about the process state at any given time. We highlight one such phenomenon -- temporal distortion -- caused by a misalignment between the rate at which observations of a cascade process are made and the rate at which the process itself operates, and argue that failure to correct for it results in degradation of performance on downstream statistical tasks. To address these issues, we formulate the clock estimation problem in terms of a natural distortion measure. We give a clock estimation algorithm, which we call FastClock, that runs in linear time in the size of its input and is provably statistically accurate for a broad range of model parameters when cascades are generated from the independent cascade process with known parameters and when the underlying graph is Erd\H{o}s-R\'enyi. We further give empirical results on the performance of our algorithm in comparison to the state-of-the-art estimator, which maximizes a likelihood proxy and is implemented via dynamic programming. We find that, in a broad parameter regime, our algorithm substantially outperforms the dynamic programming algorithm in terms of both running time and accuracy.
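To make the setting concrete, the following sketch (not the FastClock algorithm) simulates an independent cascade on an Erd\H{o}s-R\'enyi graph and records snapshots at a rate different from the process rate, which is the temporal distortion discussed above; graph size, edge probability, and infection probability are arbitrary illustrative choices.
\begin{verbatim}
# Illustrative sketch: independent cascade on an Erdos-Renyi graph, observed
# at a rate different from the process rate (temporal distortion).
# This is NOT the FastClock estimator; parameters are arbitrary.
import random

random.seed(0)
n, p_edge, p_infect = 200, 0.03, 0.4

# Sample G(n, p_edge) as an adjacency list.
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p_edge:
            adj[u].add(v)
            adj[v].add(u)

# Independent cascade: each newly infected node gets one chance to infect
# each susceptible neighbor with probability p_infect.
infected = {0}
frontier = {0}
states = [set(infected)]          # true process state after each step
while frontier:
    new = set()
    for u in frontier:
        for v in adj[u]:
            if v not in infected and random.random() < p_infect:
                new.add(v)
    infected |= new
    frontier = new
    states.append(set(infected))

# Observations taken every `stride` process steps: the observer's clock
# ticks more slowly than the process clock, so intermediate states are
# never seen.
stride = 3
observations = states[::stride]
print("process steps:", len(states), " observed snapshots:", len(observations))
\end{verbatim}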
We train neural networks to perform likelihood-free inference from $(25\,h^{-1}{\rm Mpc})^2$ 2D maps containing the total mass surface density from thousands of hydrodynamic simulations of the CAMELS project. We show that the networks can extract information beyond one-point functions and power spectra from all resolved scales ($\gtrsim 100\,h^{-1}{\rm kpc}$) while performing a robust marginalization over baryonic physics at the field level: the model can infer the value of $\Omega_{\rm m} (\pm 4\%)$ and $\sigma_8 (\pm 2.5\%)$ from simulations completely different from the ones used to train it.
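As a schematic sketch only (the architecture, the assumed $256\times 256$ map size, and the plain mean-squared-error objective are placeholders and not the networks used in this analysis), a convolutional regressor from a 2D map to $(\Omega_{\rm m},\sigma_8)$ might look as follows.
\begin{verbatim}
# Schematic only: a small CNN regressing (Omega_m, sigma_8) from a 2D map.
# Architecture and the 256x256 map size are illustrative assumptions.
import torch
import torch.nn as nn

class MapRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # outputs: (Omega_m, sigma_8)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MapRegressor()
maps = torch.randn(8, 1, 256, 256)   # stand-in for total-mass maps
params = torch.rand(8, 2)            # stand-in for target parameters
loss = nn.functional.mse_loss(model(maps), params)
loss.backward()
print(loss.item())
\end{verbatim}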
Defining shape and form as equivalence classes under translation, rotation and -- for shapes -- also scale, we extend generalized additive regression to models for the shape/form of planar curves or landmark configurations. The model respects the resulting quotient geometry of the response, employing the squared geodesic distance as loss function and a geodesic response function mapping the additive predictor to the shape/form space. For fitting the model, we propose a Riemannian $L_2$-Boosting algorithm well-suited for a potentially large number of possibly parameter-intensive model terms, which also yields automated model selection. We provide novel intuitively interpretable visualizations for (even non-linear) covariate effects in the shape/form space via suitable tensor-based factorizations. The usefulness of the proposed framework is illustrated in an analysis of 1) astragalus shapes of wild and domesticated sheep and 2) cell forms generated in a biophysical model, as well as 3) in a realistic simulation study with response shapes and forms motivated from a dataset on bottle outlines.
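For background, the geodesic (Procrustes) distance on Kendall's shape space for planar landmark configurations, i.e. the type of loss referred to above for shapes (forms additionally retain scale), can be computed as in the following standard sketch; this is not the authors' boosting implementation.
\begin{verbatim}
# Standard geodesic distance on Kendall's shape space for planar landmark
# configurations (translation, rotation and scale removed).
import numpy as np

def preshape(config):
    """Center and scale a (k, 2) configuration; return a complex k-vector."""
    z = config[:, 0] + 1j * config[:, 1]
    z = z - z.mean()                  # remove translation
    return z / np.linalg.norm(z)      # remove scale

def shape_distance(c1, c2):
    """Geodesic distance between the shapes of two (k, 2) configurations."""
    z, w = preshape(c1), preshape(c2)
    # Optimal rotation alignment reduces to the modulus of the Hermitian product.
    return np.arccos(np.clip(abs(np.vdot(w, z)), -1.0, 1.0))

rng = np.random.default_rng(1)
a = rng.standard_normal((10, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = 2.0 * a @ R.T + 5.0               # rotated, scaled, translated copy of a
print(shape_distance(a, b))           # ~0: same shape
print(shape_distance(a, rng.standard_normal((10, 2))))
\end{verbatim}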
Suppose $A \in \mathbb{R}^{n \times n}$ is invertible and we are looking for the solution of $Ax = b$. Given an initial guess $x_1 \in \mathbb{R}^n$, we show that by reflecting through hyperplanes generated by the rows of $A$, we can generate an infinite sequence $(x_k)_{k=1}^{\infty}$ such that all elements have the same distance to the solution, i.e. $\|x_k - x\| = \|x_1 - x\|$. If the hyperplanes are chosen at random, averages over the sequence converge and $$ \mathbb{E} \left\| x - \frac{1}{m} \sum_{k=1}^{m}{ x_k} \right\| \leq \frac{1 + \|A\|_F \|A^{-1}\|}{\sqrt{m}} \cdot\|x-x_1\|.$$ The bound does not depend on the dimension of the matrix. This introduces a purely geometric way of attacking the problem: are there fast ways of estimating the location of the center of a sphere from knowing many points on the sphere? Our convergence rate (coinciding with that of the Random Kaczmarz method) comes from averaging; can one do better?
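The iteration described above admits a very short implementation; the sketch below reflects the iterate through randomly chosen row hyperplanes and averages the resulting points, sampling rows with probabilities proportional to $\|a_i\|^2$ as in Random Kaczmarz, which is one natural choice rather than a prescription from the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
# A well-conditioned test matrix (chosen only so the averaging effect is
# visible after relatively few reflections).
A = np.eye(n) + 0.2 / np.sqrt(n) * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

row_norms2 = np.sum(A**2, axis=1)
probs = row_norms2 / row_norms2.sum()

x1 = rng.standard_normal(n)       # initial guess; every reflected point
x = x1.copy()                     # stays at distance ||x1 - x_true||
m = 20000
running_sum = np.zeros(n)
for _ in range(m):
    i = rng.choice(n, p=probs)
    # Reflect x through the hyperplane {y : <a_i, y> = b_i}, which contains
    # the true solution, so the distance to it is preserved.
    x = x - 2.0 * (A[i] @ x - b[i]) / row_norms2[i] * A[i]
    running_sum += x

x_avg = running_sum / m
print("error of x_1:    ", np.linalg.norm(x1 - x_true))
print("error of average:", np.linalg.norm(x_avg - x_true))
\end{verbatim}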
Given a fixed finite metric space $(V,\mu)$, the {\em minimum $0$-extension problem}, denoted as ${\tt 0\mbox{-}Ext}[\mu]$, is equivalent to the following optimization problem: minimize a function of the form $\sum_i f_i(x_i) + \sum_{ij}c_{ij}\,\mu(x_i,x_j)$ over $x\in V^n$, where $c_{ij},c_{vi}$ are given nonnegative costs and $f_i:V\rightarrow \mathbb R$ are functions given by $f_i(x_i)=\sum_{v\in V}c_{vi}\mu(x_i,v)$. The computational complexity of ${\tt 0\mbox{-}Ext}[\mu]$ has been recently established by Karzanov and by Hirai: if metric $\mu$ is {\em orientable modular} then ${\tt 0\mbox{-}Ext}[\mu]$ can be solved in polynomial time, otherwise ${\tt 0\mbox{-}Ext}[\mu]$ is NP-hard. To prove the tractability part, Hirai developed a theory of discrete convex functions on orientable modular graphs generalizing several known classes of functions in discrete convex analysis, such as $L^\natural$-convex functions. We consider a more general version of the problem in which unary functions $f_i(x_i)$ can additionally have terms of the form $c_{uv;i}\mu(x_i,\{u,v\})$ for $\{u,v\}\in F$, where set $F\subseteq\binom{V}{2}$ is fixed. We extend the complexity classification above by providing an explicit condition on $(\mu,F)$ for the problem to be tractable. In order to prove the tractability part, we generalize Hirai's theory and define a larger class of discrete convex functions. It covers, in particular, another well-known class of functions, namely submodular functions on an integer lattice. Finally, we improve the complexity of Hirai's algorithm for solving ${\tt 0\mbox{-}Ext}$ on orientable modular graphs.
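For concreteness, the brute-force sketch below evaluates the objective above on a toy instance (a three-point path metric and $n=4$ variables, chosen arbitrarily); it only illustrates the problem being classified, not the polynomial-time algorithm.
\begin{verbatim}
# Brute-force evaluation of the 0-extension objective on a toy instance.
from itertools import product

V = [0, 1, 2]
# A three-point metric mu (here: path metric on 0 - 1 - 2).
mu = {(a, b): abs(a - b) for a in V for b in V}

n = 4                                               # variables x_1..x_n
c_pair = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0}    # nonnegative costs c_ij
c_unary = {(v, i): 1.0 for v in V for i in range(n) if (v + i) % 2 == 0}

def f(i, xi):
    """Unary term f_i(x_i) = sum_v c_{v i} mu(x_i, v)."""
    return sum(c * mu[(xi, v)] for (v, j), c in c_unary.items() if j == i)

def objective(x):
    return (sum(f(i, x[i]) for i in range(n))
            + sum(c * mu[(x[i], x[j])] for (i, j), c in c_pair.items()))

best = min(product(V, repeat=n), key=objective)
print(best, objective(best))
\end{verbatim}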
We prove that for any integer $n\in\mathbb{N}$, $d\in\{1,\ldots,n\}$ and any $\varepsilon,\delta\in(0,1)$, a bounded function $f:\{-1,1\}^n\to[-1,1]$ of degree at most $d$ can be learned with probability at least $1-\delta$ and $L_2$-error $\varepsilon$ using $\log(\tfrac{n}{\delta})\,\varepsilon^{-d-1} C^{d^{3/2}\sqrt{\log d}}$ random queries for a universal finite constant $C>1$.
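As background only (this is not the query-efficient procedure behind the theorem), learning a function of degree at most $d$ on $\{-1,1\}^n$ from random queries can be made concrete as least squares over the monomial basis of degree at most $d$:
\begin{verbatim}
# Toy illustration: fit a degree-<=d function on {-1,1}^n from random queries
# by least squares over the monomial basis. Parameters are arbitrary.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, d = 8, 2
monomials = [S for k in range(d + 1) for S in combinations(range(n), k)]

# Hidden degree-<=d target with random Fourier coefficients.
coef_true = rng.standard_normal(len(monomials)) / len(monomials)
def f(x):
    return sum(c * np.prod(x[list(S)]) for c, S in zip(coef_true, monomials))

# Random queries and their design matrix over the monomial basis.
m = 400
X = rng.choice([-1.0, 1.0], size=(m, n))
Phi = np.array([[np.prod(x[list(S)]) for S in monomials] for x in X])
y = np.array([f(x) for x in X])

coef_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("max coefficient error:", np.max(np.abs(coef_hat - coef_true)))
\end{verbatim}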
This article investigates the approximation quality achievable for biobjective minimization problems with respect to the Pareto cone by solutions that are (approximately) optimal with respect to larger ordering cones. When simultaneously considering $\alpha$-approximations for all closed convex ordering cones of a fixed inner angle $\gamma \in [\frac \pi 2, \pi]$, an approximation guarantee between $\alpha$ and $2 \alpha$ is achieved, which depends continuously on $\gamma$. The analysis is best possible for any inner angle, and it generalizes and unifies the known results that the set of supported solutions is a 2-approximation and that the efficient set itself is a 1-approximation. Moreover, it is shown that, for maximization problems, no approximation guarantee is achievable by considering larger ordering cones in the described fashion, which again generalizes a known result about the set of supported solutions.
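For reference, one common convention for the approximation notion underlying these results (stated here for biobjective minimization with respect to the componentwise order and nonnegative objective values; the article's formal definitions for general ordering cones may differ in detail) is that a set $S$ of feasible solutions is an $\alpha$-approximation, $\alpha \ge 1$, of the Pareto set if $$ \forall\, y \ \text{feasible} \quad \exists\, s \in S: \qquad f_i(s) \;\le\; \alpha\, f_i(y), \quad i = 1,2. $$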
We present a product formula for the initial parts of the sparse resultant associated to an arbitrary family of supports, generalising a previous result by Sturmfels. This allows us to compute the homogeneities and degrees of the sparse resultant, and its evaluation at systems of Laurent polynomials with smaller supports. We obtain a similar product formula for some of the initial parts of the principal minors of the Sylvester-type square matrix associated to a mixed subdivision of a polytope. Applying these results, we prove that the sparse resultant can be computed as the quotient of the determinant of such a square matrix by a certain principal minor, under suitable hypotheses. This generalises the classical Macaulay formula for the homogeneous resultant, and confirms a conjecture of Canny and Emiris.
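Schematically, the quotient structure referred to above can be written (in notation introduced here only for illustration) as $$ \mathrm{Res} \;=\; \frac{\det \mathcal{M}}{\det \mathcal{M}'}, $$ where $\mathcal{M}$ is the Sylvester-type square matrix associated to the data (to a mixed subdivision of the polytope in the sparse case) and $\det \mathcal{M}'$ is the distinguished principal minor; the classical Macaulay formula for the homogeneous resultant is the special case recovered by the result above.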
For a multivariate normal distribution, the sparsity of the covariance and precision matrices encodes complete information about independence and conditional independence properties. For general distributions, the covariance and precision matrices reveal correlations and so-called partial correlations between variables, but, in general, these have no direct correspondence with independence or conditional independence properties. In this paper, we prove that, for a certain class of non-Gaussian distributions, these correspondences still hold, exactly for the covariance and approximately for the precision. The distributions -- sometimes referred to as "nonparanormal" -- are given by diagonal transformations of multivariate normal random variables. We provide several analytic and numerical examples illustrating these results.
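The following small numerical sketch illustrates the setting (the construction and the particular monotone transforms are arbitrary choices, not the paper's examples): a Gaussian vector with two independent blocks is pushed through diagonal monotone transforms, and the cross-block covariances of the transformed vector vanish up to sampling noise, reflecting the preserved independence.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Two independent Gaussian blocks: (X0, X1, X2) and (X3, X4).
Sigma_A = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]])
Sigma_B = np.array([[1.0, -0.4], [-0.4, 1.0]])
Sigma = np.block([[Sigma_A, np.zeros((3, 2))],
                  [np.zeros((2, 3)), Sigma_B]])
X = rng.multivariate_normal(np.zeros(5), Sigma, size=N)

# Monotone diagonal transforms, one per coordinate.
transforms = [np.tanh, np.cbrt, np.exp, np.arctan, lambda t: t**3]
Y = np.column_stack([g(X[:, j]) for j, g in enumerate(transforms)])

np.set_printoptions(precision=3, suppress=True)
cov_Y = np.cov(Y, rowvar=False)
print("covariance of Y (cross-block entries ~ 0):")
print(cov_Y)
print("precision of Y:")
print(np.linalg.inv(cov_Y))
\end{verbatim}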