In this note, we propose an approach to initializing the Iterative Closest Point (ICP) algorithm for matching unlabelled point clouds related by rigid transformations. The method is based on matching the ellipsoids defined by the points' covariance matrices and then testing the various matchings of principal half-axes, which differ by elements of a finite reflection group. We derive bounds on the robustness of our approach to noise, and numerical experiments confirm our theoretical findings.
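To make the construction concrete, here is a minimal numerical sketch of this kind of initialization, assuming 3D clouds stored as NumPy arrays; the candidate-scoring rule (mean nearest-neighbour distance) and all function names are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from itertools import product
from scipy.spatial import cKDTree

def principal_axes(P):
    """Centroid and covariance eigenvectors of a point cloud P (n x 3)."""
    c = P.mean(axis=0)
    _, V = np.linalg.eigh(np.cov((P - c).T))  # columns sorted by eigenvalue
    return c, V

def icp_initializations(P, Q):
    """Yield candidate rigid transforms (R, t) mapping P onto Q, one per sign
    flip of the principal half-axes (an element of a finite reflection group);
    only proper rotations (det R = +1) are kept."""
    cP, VP = principal_axes(P)
    cQ, VQ = principal_axes(Q)
    for signs in product([1.0, -1.0], repeat=3):
        R = VQ @ np.diag(signs) @ VP.T
        if np.linalg.det(R) > 0:              # rigid, orientation-preserving
            yield R, cQ - R @ cP

def best_initialization(P, Q):
    """Pick the candidate with the smallest mean nearest-neighbour distance."""
    tree = cKDTree(Q)
    def cost(Rt):
        R, t = Rt
        d, _ = tree.query(P @ R.T + t)
        return d.mean()
    return min(icp_initializations(P, Q), key=cost)
```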
Multiscale Finite Element Methods (MsFEMs) are now well-established finite element approaches dedicated to multiscale problems. They first compute local, oscillatory, problem-dependent basis functions that generate a suitable discretization space, and then perform a Galerkin approximation of the problem on that space. We investigate here how these approaches can be implemented in a non-intrusive way, in order to facilitate their dissemination within industrial codes and other non-academic environments. We develop an abstract framework that covers a wide variety of MsFEMs for linear second-order partial differential equations. Non-intrusive MsFEM approaches are developed within the full generality of this framework, which may moreover be beneficial for steering software development and for improving the theoretical understanding and analysis of MsFEMs.
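As a rough illustration of the two MsFEM steps named above (local oscillatory basis computation, then coarse Galerkin approximation), here is a sketch for the 1D problem $-(a(x)u')'=f$ with homogeneous Dirichlet conditions; it uses the simplest linear boundary conditions for the local problems and is purely didactic, not the paper's abstract non-intrusive framework.

```python
import numpy as np

def msfem_1d(a, f, n_coarse=8, n_fine=64):
    """Didactic 1D MsFEM for -(a(x) u')' = f on (0, 1), u(0) = u(1) = 0:
    (1) per coarse element, solve local problems on a fine sub-grid with
        boundary values (1, 0) and (0, 1) to get oscillatory basis functions;
    (2) Galerkin-assemble and solve the coarse system with those bases."""
    nodes = np.linspace(0.0, 1.0, n_coarse + 1)
    elems = []
    for k in range(n_coarse):
        x = np.linspace(nodes[k], nodes[k + 1], n_fine + 1)
        w = a(0.5 * (x[:-1] + x[1:])) / np.diff(x)   # per-interval weights
        K = np.zeros((n_fine + 1, n_fine + 1))       # fine stiffness matrix
        for i in range(n_fine):
            K[i, i] += w[i]; K[i + 1, i + 1] += w[i]
            K[i, i + 1] -= w[i]; K[i + 1, i] -= w[i]
        Ki = K[1:-1, 1:-1]
        phiL = np.concatenate(([1.0], np.linalg.solve(Ki, -K[1:-1, 0]), [0.0]))
        phiR = np.concatenate(([0.0], np.linalg.solve(Ki, -K[1:-1, -1]), [1.0]))
        elems.append((x, K, np.vstack([phiL, phiR])))
    A = np.zeros((n_coarse + 1, n_coarse + 1))
    b = np.zeros(n_coarse + 1)
    for k, (x, K, B) in enumerate(elems):
        A[k:k + 2, k:k + 2] += B @ K @ B.T           # coarse stiffness block
        wq = np.zeros_like(x)                        # trapezoidal quadrature
        wq[:-1] += np.diff(x) / 2; wq[1:] += np.diff(x) / 2
        b[k:k + 2] += B @ (f(x) * wq)                # load against each basis
    u = np.zeros(n_coarse + 1)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return nodes, u

# usage: highly oscillatory coefficient, constant load
nodes, u = msfem_1d(lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x / 0.01),
                    lambda x: np.ones_like(x))
```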
Global seismicity on all three solar-system bodies with in situ seismic measurements (the Earth, the Moon, and Mars) is due mainly to mechanical Rieger resonance (RR) of the macroscopic flapping of the solar wind, driven by the well-known $P_{Rg} \approx 154$-day Rieger period and detected commonly in most types of heliophysical data and in the interplanetary magnetic field (IMF). Thus, InSight mission marsquake rates are periodic with $P_{Rg}$, as characterized by a very high (>>12) fidelity $\Phi = 2.8 \times 10^{6}$ and by being the only >99%-significant spectral peak in the 385.8-64.3-nHz (1-180-day) band of highest planetary energies; the longest-span (v.9) release of raw data revealed the entire RR, ruling out a tectonically active Mars. As a check, I analyze the rates of Oct 2015-Feb 2019 Mw5.6+ earthquakes and of all (1969-1977) Apollo mission moonquakes. To decouple magnetosphere and IMF effects, I study Earth and Moon seismicity separately during traversals of the Earth's magnetotail and of the IMF. The analysis showed, with >99-67% confidence and $\Phi$>>12 fidelity, that (an unspecified majority of) moonquakes and Mw5.6+ earthquakes also recur at Rieger periods. About half of the spectral peaks split, but into clusters that average to the usual Rieger periodicities, and magnetotail reconnection clears the signal. Moonquakes are mostly forced at times of solar-wind resonance, and not just during tides as previously and simplistically believed. Earlier claims that solar plasma dynamics could be seismogenic are thus confirmed. This result calls for reinterpreting the phenomenon of seismicity and for reliance on global magnitude scales. Predictability of the macroscopic dynamics of the solar wind is now within reach, which paves the way for long-term, physics-based seismic and space-weather prediction and for the safety of space missions. Gauss-Vaníček spectral analysis revolutionizes geophysics by computing nonlinear global dynamics directly, rendering the approximation of dynamics obsolete.
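For readers who want to reproduce the flavour of such a periodicity search, the sketch below uses SciPy's Lomb-Scargle periodogram, a least-squares method in the same family as the Gauss-Vaníček analysis named in the abstract, on a synthetic event-rate series; the data, the 1-180-day band, and the detection rule are all illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical irregularly sampled event-rate series `t` (days) and `rate`;
# in practice these would be marsquake/earthquake/moonquake rates.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1000, 400))
rate = 1.0 + 0.5 * np.sin(2 * np.pi * t / 154.0) \
           + 0.3 * rng.standard_normal(t.size)     # planted 154-day cycle

periods = np.linspace(1.0, 180.0, 2000)            # 1-180-day band
omega = 2 * np.pi / periods                        # angular frequency (rad/day)
power = lombscargle(t, rate - rate.mean(), omega)  # least-squares periodogram
print("peak period: %.1f days" % periods[np.argmax(power)])
```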
In this paper, we develop a unified regression approach to model unconditional quantiles, M-quantiles and expectiles of multivariate dependent variables, exploiting the multidimensional Huber function. To assess the impact of changes in the covariates across the entire unconditional distribution of the responses, we extend the work of Firpo et al. (2009) by running a mean regression of the recentered influence function on the explanatory variables. We discuss the estimation procedure and establish the asymptotic properties of the derived estimators. A data-driven procedure is also presented for selecting the tuning constant of the Huber function. The validity of the proposed methodology is explored with simulation studies and through an application using the 2016 Survey of Household Income and Wealth conducted by the Bank of Italy.
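A minimal univariate sketch of the RIF-regression step (the Firpo et al. extension mentioned above) might look as follows; the kernel density estimator and variable names are illustrative, and the paper's multivariate M-quantile/expectile machinery built on the Huber function is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile_regression(y, X, tau=0.5):
    """Unconditional quantile (RIF) regression in the spirit of Firpo,
    Fortin & Lemieux (2009): regress the recentered influence function of
    the tau-th quantile on covariates by OLS. y: (n,), X: (n, p)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                 # density of y at the quantile
    rif = q + (tau - (y <= q)) / f_q            # recentered influence function
    Xc = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(Xc, rif, rcond=None)
    return beta
```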
In this paper, we investigate the properties of standard and multilevel Monte Carlo methods for the weak approximation of solutions of stochastic differential equations (SDEs) driven by an infinite-dimensional Wiener process and a Poisson random measure, with a Lipschitz payoff function. The error of the truncated-dimension randomized numerical scheme, which is determined by two parameters, i.e., the grid density $n \in \mathbb{N}_{+}$ and the truncation dimension parameter $M \in \mathbb{N}_{+}$, is of the order $n^{-1/2}+\delta(M)$, where $\delta(\cdot)$ is positive and decreasing to $0$. The paper introduces the complexity model and proves an upper complexity bound for the multilevel Monte Carlo method, which depends on two increasing sequences of parameters for both $n$ and $M$. The complexity is measured in terms of an upper bound for the mean-squared error and is compared with the complexity of the standard Monte Carlo algorithm. Results from numerical experiments, as well as a Python and CUDA C implementation, are also reported.
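A toy sketch of the multilevel idea, for a scalar SDE with Euler discretization and a Lipschitz payoff: fine and coarse paths at each level are coupled by the same Brownian increments so that the level corrections have small variance. The paper's setting (infinite-dimensional noise, Poisson random measure, and the truncation dimension $M$) is not reproduced here; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_estimate(payoff, L=5, m0=200_000, mu=0.05, sig=0.2, x0=1.0, T=1.0):
    """Toy MLMC estimator of E[payoff(X_T)] for dX = mu X dt + sig X dW,
    with n_l = 2^l Euler steps and sample sizes roughly halving per level."""
    m = m0                                        # level 0: one Euler step
    dW = np.sqrt(T) * rng.standard_normal(m)
    est = payoff(x0 + mu * x0 * T + sig * x0 * dW).mean()
    for l in range(1, L + 1):
        m = max(m0 >> l, 1000)
        n = 2 ** l
        dt = T / n
        xf = np.full(m, x0)                       # fine path, n steps
        xc = np.full(m, x0)                       # coarse path, n/2 steps
        for k in range(n):
            dW = np.sqrt(dt) * rng.standard_normal(m)
            xf += mu * xf * dt + sig * xf * dW
            if k % 2 == 0:
                dWc = dW                          # first half of coarse increment
            else:                                 # advance coarse with the sum
                xc += mu * xc * (2 * dt) + sig * xc * (dWc + dW)
        est += (payoff(xf) - payoff(xc)).mean()   # telescoping correction
    return est

print(mlmc_estimate(lambda x: np.maximum(x - 1.0, 0.0)))  # Lipschitz payoff
```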
Stabbing Planes (also known as Branch and Cut) is a recently introduced proof system which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far to prove size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system: size lower bounds are established by monotone circuit arguments, while depth lower bounds are found via communication complexity and protection lemmas. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches to prove size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, in turn relying on Alon's Combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions and the Linear Ordering principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
Purpose: Assessing the progression of retinal atrophy associated with age-related macular degeneration (AMD) requires accurate quantification of retinal atrophy changes on longitudinal OCT studies. This is based on finding, comparing, and delineating subtle atrophy changes on consecutive pairs (prior and current) of unregistered OCT scans. Methods: We present a fully automatic end-to-end pipeline for the simultaneous detection and quantification of time-related atrophy changes associated with dry AMD in pairs of OCT scans of a patient. It uses a novel simultaneous multi-channel column-based deep learning model, trained on registered pairs of OCT scans, that concurrently detects and segments retinal atrophy in consecutive OCT scans by classifying light-scattering patterns in matched pairs of vertical pixel-wide columns (A-scans) in registered prior and current OCT slices (B-scans). Results: Experimental results on 4,040 OCT slices with 5.2M columns from 40 scan pairs of 18 patients (66% training/validation, 33% testing), acquired 24.13+-14.0 months apart, in which complete RPE and outer retinal atrophy (cRORA) was identified in 1,998 OCT slices (735 atrophy lesions from 3,732 segments, 0.45M columns), yield a mean detection precision and recall of 0.90+-0.09 and 0.95+-0.06 for atrophy segments, and of 0.74+-0.18 and 0.94+-0.12 for atrophy lesions, with AUC=0.897, all above observer variability. Simultaneous classification outperforms standalone classification precision and recall by 30+-62% and 27+-0% for atrophy segments and lesions. Conclusions: Simultaneous column-based detection and quantification of retinal atrophy changes associated with AMD is accurate and outperforms standalone classification methods. Translational relevance: An automatic and efficient way to detect and quantify retinal atrophy changes associated with AMD.
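For orientation, a sketch of what such a two-channel column (A-scan) pair classifier could look like in PyTorch is given below; the architecture, the column height of 496 pixels, and the two-output head are our illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn

class ColumnPairNet(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): classify a
    matched pair of vertical pixel-wide columns (A-scans) from registered
    prior/current B-scans, stacked as 2 input channels, into atrophy
    presence at each time point (2 outputs)."""
    def __init__(self, column_height=496):   # 496: assumed OCT column height
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)          # atrophy in prior, in current

    def forward(self, x):                     # x: (batch, 2, column_height)
        return self.head(self.features(x).squeeze(-1))

# usage on a dummy batch of 8 column pairs
logits = ColumnPairNet()(torch.randn(8, 2, 496))
```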
In this work we introduce the lag irreversibility function as a method to assess time irreversibility in discrete time series. It quantifies the degree of time asymmetry of the joint probability function of the state variable under study and the state variable lagged in time. We test its performance on a time-irreversible Markov chain model for which theoretical results are known. Moreover, we use our approach to analyze electrocardiographic recordings of four groups of subjects: healthy young individuals, healthy elderly individuals, and persons with two different disease conditions, namely congestive heart failure and atrial fibrillation. We find that by jointly studying the variability of the amplitudes of the different waves in the electrocardiographic signals, one obtains an efficient method to discriminate between these groups. Finally, we test the accuracy of our method using ROC analysis.
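One plausible concrete reading of the lag irreversibility function, using the total-variation asymmetry of the estimated joint distribution under time reversal (the paper's exact asymmetry measure may differ), is sketched below together with a time-irreversible three-state Markov chain as a test case.

```python
import numpy as np

def lag_irreversibility(s, max_lag=20, n_states=None):
    """For each lag, estimate the joint distribution p(s_t, s_{t+lag}) of a
    discrete series s and measure its asymmetry under transposition (time
    reversal); total variation is used here as the asymmetry measure."""
    s = np.asarray(s)
    k = n_states or s.max() + 1
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        joint = np.zeros((k, k))
        np.add.at(joint, (s[:-lag], s[lag:]), 1.0)   # count (s_t, s_{t+lag})
        joint /= joint.sum()
        out[lag - 1] = 0.5 * np.abs(joint - joint.T).sum()
    return out

# example: a cyclically biased 3-state Markov chain is time-irreversible
rng = np.random.default_rng(2)
P = np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]])
x = [0]
for _ in range(20000):
    x.append(rng.choice(3, p=P[x[-1]]))
print(lag_irreversibility(np.array(x), max_lag=5))
```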
In this paper, we show that the constant-dimensional Weisfeiler-Leman algorithm for groups (Brachter & Schweitzer, LICS 2020) can be fruitfully used to improve parallel complexity upper bounds on isomorphism testing for several families of groups. In particular, we show: - Groups with an Abelian normal Hall subgroup whose complement is $O(1)$-generated are identified by constant-dimensional Weisfeiler-Leman using only a constant number of rounds. This places isomorphism testing for this family of groups into $\textsf{L}$; the previous upper bound for isomorphism testing was $\textsf{P}$ (Qiao, Sarma, & Tang, STACS 2011). - We use the individualize-and-refine paradigm to obtain a $\textsf{quasiSAC}^{1}$ isomorphism test for groups without Abelian normal subgroups, previously only known to be in $\textsf{P}$ (Babai, Codenotti, & Qiao, ICALP 2012). - We extend a result of Brachter & Schweitzer (arXiv, 2021) on direct products of groups to the parallel setting: Weisfeiler-Leman can identify direct products in parallel, provided it can identify each of the indecomposable direct factors in parallel. They previously showed the analogous result for $\textsf{P}$. Finally, we consider the count-free Weisfeiler-Leman algorithm, and show that count-free WL is unable to distinguish even Abelian groups in polynomial time. Nonetheless, we use count-free WL in tandem with bounded non-determinism and limited counting to obtain a new upper bound of $\beta_{1}\textsf{MAC}^{0}(\textsf{FOLL})$ for isomorphism testing of Abelian groups. This improves upon the previous $\textsf{TC}^{0}(\textsf{FOLL})$ upper bound due to Chattopadhyay, Tor\'an, & Wagner (ACM Trans. Comput. Theory, 2013).
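As a toy illustration of refinement-style isomorphism testing on groups (far weaker than the constant-dimensional Weisfeiler-Leman algorithm studied in the paper), the sketch below runs a naive colour refinement on Cayley tables, with colours initialized by element orders; it already separates $\mathbb{Z}_4$ from $\mathbb{Z}_2 \times \mathbb{Z}_2$.

```python
def identity_of(mult):
    n = len(mult)
    return next(e for e in range(n) if all(mult[e][j] == j for j in range(n)))

def element_order(mult, g, e):
    x, k = g, 1
    while x != e:
        x, k = mult[x][g], k + 1
    return k

def refine(mult):
    """Naive colour refinement on a Cayley table mult[i][j] = index of
    g_i * g_j: start from element orders, then repeatedly refine each colour
    by the multiset of (colour of y, colour of x*y) over all y, until stable.
    Returns the sorted colour multiset, an isomorphism invariant."""
    n, e = len(mult), identity_of(mult)
    col = [element_order(mult, g, e) for g in range(n)]
    while True:
        sig = [(col[x], tuple(sorted((col[y], col[mult[x][y]]) for y in range(n))))
               for x in range(n)]
        relabel = {s: c for c, s in enumerate(sorted(set(sig)))}
        new = [relabel[s] for s in sig]
        if new == col:
            return sorted(col)
        col = new

z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]  # cyclic group Z_4
k4 = [[i ^ j for j in range(4)] for i in range(4)]        # Klein group Z_2 x Z_2
print(refine(z4) != refine(k4))                           # True: distinguished
```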
Detecting the changes that occurred between a pair of 3D airborne LiDAR point clouds, acquired at two different times over the same geographical area, is a challenging task because of mismatched spatial supports and acquisition-system noise. Most recent attempts to detect changes in point clouds are based on supervised methods, which require large labelled datasets that are unavailable in real-world applications. To address these issues, we propose an unsupervised approach that comprises two components: a Neural Field (NF) for continuous shape reconstruction and a Gaussian Mixture Model for categorising changes. NFs offer a grid-agnostic representation for encoding bi-temporal point clouds with unmatched spatial support, and can be regularised to increase high-frequency details and reduce noise. The reconstructions at each timestamp are compared at arbitrary spatial scales, leading to a significant increase in detection capabilities. We apply our method to a benchmark dataset of simulated LiDAR point clouds for urban sprawl. The dataset offers several challenging scenarios with different resolutions, input modalities and noise levels, allowing a multi-scenario comparison of our method with the current state of the art. We beat the previous methods on this dataset by a 10% margin in the intersection-over-union metric. In addition, we apply our method to a real-world scenario to identify illegal excavation (looting) of archaeological sites, and confirm that its findings match those of field experts.
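The Gaussian Mixture Model categorisation step lends itself to a compact sketch (the neural-field reconstruction itself is omitted): given each query point's unsigned distance to the two reconstructed surfaces, cluster the differences and call the component nearest zero "unchanged". The feature choice and decision rule here are our assumptions, not the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def categorise_changes(d_t0, d_t1, n_classes=2):
    """d_t0, d_t1: per-point unsigned distances to the surfaces reconstructed
    at the two timestamps. Fit a GMM to the differences and flag as changed
    every point outside the component whose mean is closest to zero."""
    feats = (d_t1 - d_t0).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    unchanged = np.argmin(np.abs(gmm.means_.ravel()))  # hypothetical rule
    return labels != unchanged

# usage on dummy distance fields
changed = categorise_changes(np.random.rand(1000), np.random.rand(1000))
```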
Monte Carlo methods represent a cornerstone of computer science. They allow one to sample high-dimensional distribution functions in an efficient way. In this paper we consider the extension of Automatic Differentiation (AD) techniques to Monte Carlo processes, addressing the problem of obtaining derivatives (and, in general, the Taylor series) of expectation values. Borrowing ideas from the lattice field theory community, we examine two approaches. One is based on reweighting, while the other represents an extension of the Hamiltonian approach typically used by Hybrid Monte Carlo (HMC) and similar algorithms. We show that the Hamiltonian approach can be understood as a change of variables of the reweighting approach, resulting in much reduced variances of the coefficients of the Taylor series. This work opens the door to finding other variance-reduction techniques for derivatives of expectation values.
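The contrast between the two approaches can be seen in a small Gaussian example of our own (not taken from the paper): for $x \sim N(\theta, 1)$ and $O(x) = x^2$, both the reweighting (score-function) estimator and the change-of-variables estimator target $\frac{d}{d\theta}E[O] = 2\theta$, but the latter has far smaller variance, in line with the abstract's observation.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 1.5, 1_000_000
x = theta + rng.standard_normal(n)                 # samples x ~ N(theta, 1)

# (a) reweighting / score-function estimator of d/dtheta E[x^2] (exact: 2*theta):
#     E[O(x) * d log p_theta(x) / dtheta], with score = x - theta for a Gaussian
rw = (x ** 2) * (x - theta)
# (b) change-of-variables estimator: write x = theta + z, z ~ N(0, 1), and
#     differentiate O(theta + z) directly, giving E[2x]
cv = 2 * x
for name, est in [("reweighting", rw), ("change of variables", cv)]:
    print(f"{name}: {est.mean():.4f} +- {est.std() / np.sqrt(n):.4f}")
```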