In this article, we develop comprehensive frequency domain methods for estimating and inferring the second-order structure of spatial point processes. The main element is the use of the discrete Fourier transform (DFT) of the point pattern and its tapered counterpart. Under second-order stationarity, we show that both the DFTs and the tapered DFTs are asymptotically jointly independent Gaussian even when the DFTs share the same limiting frequencies. Based on these results, we establish an $\alpha$-mixing central limit theorem for a statistic formulated as a quadratic form of the tapered DFT. As applications, we derive the asymptotic distribution of the kernel spectral density estimator and establish a frequency domain inferential method for parametric stationary point processes. For the latter, the resulting model parameter estimator is computationally tractable and yields meaningful interpretations even under model misspecification. We investigate the finite sample performance of our estimator through simulations, considering scenarios of both correctly specified and misspecified models. Furthermore, we extend our proposed DFT-based frequency domain methods to a class of non-stationary spatial point processes.
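As a rough illustration of the DFT-based ingredients mentioned above, the sketch below computes a Bartlett-style DFT of a point pattern and the resulting raw periodogram for a simulated homogeneous Poisson pattern. The normalization and frequency grid are illustrative textbook choices, not the paper's exact tapered estimator.

```python
import numpy as np

def point_pattern_dft(points, omegas, window_area):
    """Bartlett-style DFT of a point pattern:
    J(w) = |W|^{-1/2} * sum_j exp(-i * w . x_j)."""
    phases = points @ omegas.T              # shape (n_points, n_freqs)
    return np.exp(-1j * phases).sum(axis=0) / np.sqrt(window_area)

def periodogram(points, omegas, window_area):
    """Raw periodogram I(w) = |J(w)|^2, a basic spectral density estimate."""
    return np.abs(point_pattern_dft(points, omegas, window_area)) ** 2

# Homogeneous Poisson pattern on the unit square with intensity ~200
rng = np.random.default_rng(0)
n = rng.poisson(200)
pts = rng.random((n, 2))
freqs = 2 * np.pi * np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
I = periodogram(pts, freqs, window_area=1.0)
```

For a Poisson process the expected periodogram is flat (approximately the intensity), which is what makes departures from flatness informative about second-order structure.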
Interpolation of data on non-Euclidean spaces is an active research area fostered by its numerous applications. This work considers the Hermite interpolation problem: finding a sufficiently smooth manifold curve that interpolates a collection of data points on a Riemannian manifold while matching a prescribed derivative at each point. We propose a novel procedure relying on the general concept of retractions to solve this problem on a large class of manifolds, including those for which computing the Riemannian exponential or logarithmic maps is not straightforward, such as the manifold of fixed-rank matrices. We analyze the well-posedness of the method by introducing and showing the existence of retraction-convex sets, a generalization of geodesically convex sets. We extend to the manifold setting a classical result on the asymptotic interpolation error of Hermite interpolation. We finally illustrate these results and the effectiveness of the method with numerical experiments on the manifold of fixed-rank matrices and the Stiefel manifold of matrices with orthonormal columns.
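The retraction concept central to this abstract can be illustrated with the classical QR-based retraction on the Stiefel manifold, one of the manifolds mentioned above. This is a standard construction given as a sketch; it is not necessarily the specific retraction the paper employs.

```python
import numpy as np

def qr_retraction(X, V):
    """QR-based retraction on the Stiefel manifold St(n, p): map a tangent
    step V at X back onto the manifold via the Q factor of X + V."""
    Q, R = np.linalg.qr(X + V)
    # Fix signs so the retraction is uniquely defined (positive diag of R);
    # sign(sign(d) + 0.5) maps zeros to +1.
    Q = Q * np.sign(np.sign(np.diag(R)) + 0.5)
    return Q

# A point on St(5, 2) and a small tangent direction X @ Omega, Omega skew
rng = np.random.default_rng(1)
X, _ = np.linalg.qr(rng.standard_normal((5, 2)))
A = rng.standard_normal((2, 2))
V = X @ (A - A.T)
Y = qr_retraction(X, 0.1 * V)       # Y again has orthonormal columns
```

Retraction-based interpolation replaces the (possibly expensive) Riemannian exponential with such cheaper maps while retaining first-order agreement with geodesics at the foot point.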
In this work, the high order accuracy and the well-balanced (WB) properties of some novel continuous interior penalty (CIP) stabilizations for the Shallow Water (SW) equations are investigated. The underlying arbitrary high order numerical framework is given by a Residual Distribution (RD)/continuous Galerkin (CG) finite element method (FEM) setting for the space discretization, coupled with a Deferred Correction (DeC) time integration, yielding a fully explicit scheme. If, on the one hand, the introduced CIP stabilizations are all specifically designed to guarantee the exact preservation of the lake at rest steady state, on the other hand, some of them make use of general structures to tackle the preservation of general steady states, whose explicit analytical expression is not known. Several basis functions have been considered in the numerical experiments and, in all cases, the numerical results confirm the high order accuracy and the ability of the novel stabilizations to exactly preserve the lake at rest steady state and to capture small perturbations of such an equilibrium. Moreover, some of them, based on the notions of space residual and global flux, have shown very good performance and superconvergence in the context of general steady solutions not known in closed form. Many elements introduced here can be extended to other hyperbolic systems, e.g., to the Euler equations with gravity.
This paper provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms within the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network. The efficiency, reliability, and scalability of a network rely heavily on the choice and optimization of its routing algorithm. The paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
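The shortest-path strategy named above is typically realized by Dijkstra's algorithm; a minimal sketch on a toy network (node names and weights are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` using Dijkstra's algorithm.
    `graph` maps each node to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Link state protocols such as OSPF run essentially this computation at every router over the flooded topology database.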
Quantum-inspired classical algorithms provide us with a new way to understand the computational power of quantum computers for practically relevant problems, especially in machine learning. In the past several years, numerous efficient algorithms for various tasks have been found, while an analysis of lower bounds is still missing. In this work, using communication complexity, we propose the first method to study lower bounds for these tasks. We mainly focus on lower bounds for solving linear regressions, supervised clustering, principal component analysis, recommendation systems, and Hamiltonian simulations. More precisely, we show that for linear regressions, in the row-sparse case, the lower bound is quadratic in the Frobenius norm of the underlying matrix, which is tight. In the dense case, with an extra assumption on the accuracy, we obtain that the lower bound is quartic in the Frobenius norm, matching the upper bound. For supervised clustering, we obtain a tight lower bound that is quartic in the Frobenius norm. For the other three tasks, we obtain a lower bound that is quadratic in the Frobenius norm, while the known upper bound is quartic in the Frobenius norm. Through this research, we find that large quantum speedups can exist for sparse, high-rank, well-conditioned matrix problems. Finally, we extend our method to the analysis of lower bounds for quantum query algorithms for matrix-related problems. Some applications are given.
Digital credentials represent a cornerstone of digital identity on the Internet. To achieve privacy, certain functionalities should be implemented in credentials. One is selective disclosure, which allows users to disclose only the claims or attributes they choose. This paper presents a novel approach to selective disclosure that combines Merkle hash trees and Boneh-Lynn-Shacham (BLS) signatures. With this combination, we achieve selective disclosure of claims in a single credential as well as the creation of a verifiable presentation containing selectively disclosed claims from multiple credentials signed by different parties. Beyond selective disclosure, the approach also enables issuing credentials signed by multiple issuers.
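The Merkle-tree half of such a construction can be sketched as follows: each claim becomes a leaf, the issuer signs only the root, and the holder discloses a single claim together with an inclusion proof. The claim names are made up for illustration, and the BLS signature over the root is omitted; only the hash-tree mechanics are shown.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(claims):
    level = [h(c) for c in claims]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(claims, index):
    """Sibling hashes (with position flags) proving claims[index] is in the tree."""
    level = [h(c) for c in claims]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, 1 if node is right child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_claim(claim, proof, root):
    node = h(claim)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# A credential with four claims; disclose only the nationality claim.
claims = [b"name=Alice", b"dob=1990-01-01", b"nationality=NL", b"id=12345"]
root = merkle_root(claims)       # in the full scheme the issuer would BLS-sign this root
proof = merkle_proof(claims, 2)
```

The verifier recomputes the root from the disclosed leaf and the proof, and checks the issuer's signature on the root; the undisclosed claims stay hidden behind their hashes.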
The availability of digital twins for the cardiovascular system will enable insightful computational tools both for research and clinical practice. This, however, demands robust and well defined models and methods for the different steps involved in the process. We present a vessel coordinate system (VCS) that enables the unambiguous definition of locations in a vessel section, by adapting the idea of cylindrical coordinates to the vessel geometry. Using the VCS model, point correspondence can be defined among different samples of a cohort, allowing data transfer, quantitative comparison, shape coregistration or population analysis. Furthermore, the VCS model allows for the generation of specific meshes (e.g. cylindrical grids, O-grids) necessary for an accurate reconstruction of the geometries used in fluid simulations. We provide the technical details for coordinate computation and discuss the assumptions made to guarantee that the coordinates are well defined. The VCS model is tested in a series of applications. We present a robust, low dimensional, patient specific vascular model and use it to study phenotype variability of the thoracic aorta within a cohort of patients. Point correspondence is exploited to build a haemodynamics atlas of the aorta for the same cohort. The atlas originates from fluid simulations (Navier-Stokes with the Finite Volume Method) conducted using OpenFOAMv10. We conclude with a discussion of the VCS model, covering its impact on important areas such as shape modeling and computational fluid dynamics (CFD).
We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
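The annealed Langevin sampler at the heart of this approach can be sketched generically: sweep a sequence of noise levels from large to small and, at each level, take noisy gradient steps along the score. The score function below is a toy Gaussian score standing in for the GNN-estimated graph score, and the schedule constants are illustrative.

```python
import numpy as np

def annealed_langevin(score, x0, sigmas, steps_per_level=50, eps=0.005, rng=None):
    """Annealed Langevin dynamics: sweep noise levels sigma from large to small;
    at each level run  x <- x + a*score(x, sigma) + sqrt(2a)*z,  z ~ N(0, I),
    with step size a proportional to sigma^2 (Song--Ermon style schedule)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for sigma in sigmas:
        a = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + a * score(x, sigma) + np.sqrt(2 * a) * z
    return x

# Toy stand-in for the learned score: the score of N(mu, sigma^2 I),
# i.e. a sigma-smoothed point mass at mu.
mu = 3.0
score = lambda x, sigma: -(x - mu) / sigma ** 2
sample = annealed_langevin(score, x0=np.zeros(2), sigmas=[2.0, 1.0, 0.5, 0.1])
```

In the paper's setting, `x` would encode a (relaxed) graph adjacency and `score` would be the graph neural network's estimate of the prior score, with the likelihood term added to sample from the posterior.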
In this work, we propose a method to learn the solution operators of PDEs defined on varying domains via MIONet, and we theoretically justify this method. We first extend the approximation theory of MIONet to metric spaces, establishing that MIONet can approximate mappings with multiple inputs in metric spaces. Subsequently, we construct a set consisting of appropriate regions and equip it with a metric, thus making it a metric space that satisfies the approximation condition of MIONet. Building upon this theoretical foundation, we are able to learn the solution mapping of a PDE with all of its parameters varying, including the parameters of the differential operator, the right-hand side term, the boundary condition, and the domain. As an illustration, we perform experiments on 2-d Poisson equations where the domains and the right-hand side terms vary. The results provide insights into the performance of this method across convex polygons, polar regions with smooth boundaries, and predictions at different levels of discretization on one task. We point out that this is a meshless method and can therefore be used flexibly as a general solver for this type of PDE.
Mass lumping techniques are commonly employed in explicit time integration schemes for problems in structural dynamics, where they both avoid solving costly linear systems with the consistent mass matrix and increase the critical time step. In isogeometric analysis, the critical time step is constrained by so-called "outlier" frequencies, which represent the inaccurate high frequency part of the spectrum. Removing or dampening these high frequencies is paramount for fast explicit solution techniques. In this work, we propose robust mass lumping and outlier removal techniques for nontrivial geometries, including multipatch and trimmed geometries. Our lumping strategies provably do not deteriorate (and often improve) the CFL condition of the original problem and are combined with deflation techniques to remove persistent outlier frequencies. Numerical experiments reveal the advantages of the method, especially for simulations covering large time spans, where it may halve the number of iterations with little or no effect on the numerical solution.
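For context, the classical baseline that such lumping strategies refine is row-sum mass lumping: replace the consistent mass matrix by a diagonal one with the row sums on the diagonal, preserving total mass. The sketch below shows this baseline on a tiny 1-D linear finite element mesh; the paper's specific lumping and deflation strategies for isogeometric discretizations are more involved.

```python
import numpy as np

def lump_mass(M):
    """Row-sum mass lumping: diagonal matrix of row sums of the consistent
    mass matrix (preserves the total mass, i.e. the sum of all entries)."""
    return np.diag(M.sum(axis=1))

# Consistent mass matrix of two linear 1-D elements of length h = 1
# (element matrix h/6 * [[2, 1], [1, 2]], assembled on 3 nodes).
h = 1.0
M = h / 6 * np.array([[2.0, 1.0, 0.0],
                      [1.0, 4.0, 1.0],
                      [0.0, 1.0, 2.0]])
ML = lump_mass(M)   # diagonal entries [0.5, 1.0, 0.5]
```

With a diagonal mass matrix, each explicit time step reduces to a cheap entrywise division instead of a linear solve, which is precisely what makes lumping attractive for explicit dynamics.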
This paper develops a nonparametric estimation approach for the interaction function within diffusion-type particle system models. We introduce two estimation methods based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors associated with both procedures, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the corresponding minimax estimation error of the interaction function converges to zero at a parametric rate. This result is rather surprising given the complexity of the underlying estimation problem and the rather large classes of interaction functions for which the above parametric rate holds.