Panoptic segmentation is the combination of semantic and instance segmentation: assign the points in a 3D point cloud to semantic categories and partition them into distinct object instances. It has many obvious applications for outdoor scene understanding, from city mapping to forest management. Existing methods struggle to segment nearby instances of the same semantic category, like adjacent pieces of street furniture or neighbouring trees, which limits their usability for inventory- or management-type applications that rely on object instances. This study explores the steps of the panoptic segmentation pipeline concerned with clustering points into object instances, with the goal of alleviating that bottleneck. We find that a carefully designed clustering strategy, which leverages multiple types of learned point embeddings, significantly improves instance segmentation. Experiments on the NPM3D urban mobile mapping dataset and the FOR-instance forest dataset demonstrate the effectiveness and versatility of the proposed strategy.
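The abstract does not spell out the clustering strategy, but a common embedding-based scheme, useful as a mental model, shifts each point by a predicted offset toward its instance center and then density-clusters within each "thing" class. The sketch below is a generic illustration with assumed names and parameters, not the paper's method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_instances(xyz, offsets, sem_labels, thing_classes, eps=0.3):
    """Group points into instances: shift each point by its predicted
    offset so points of one object collapse toward a common center,
    then run DBSCAN separately within each 'thing' semantic class."""
    shifted = xyz + offsets                    # center-offset embedding
    instance_ids = np.full(len(xyz), -1, dtype=int)
    next_id = 0
    for cls in thing_classes:
        mask = sem_labels == cls
        if not mask.any():
            continue
        labels = DBSCAN(eps=eps, min_samples=10).fit_predict(shifted[mask])
        keep = labels >= 0                     # DBSCAN noise stays unassigned
        idx = np.flatnonzero(mask)[keep]
        instance_ids[idx] = labels[keep] + next_id
        if keep.any():
            next_id += labels.max() + 1
    return instance_ids
```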
The joint retrieval of surface reflectances and atmospheric parameters in VSWIR imaging spectroscopy is a computationally challenging high-dimensional problem. In the context of NASA's Surface Biology and Geology mission, quantifying the uncertainty associated with the retrievals is crucial for the further use of the retrieved results in environmental applications. Although Markov chain Monte Carlo (MCMC) is a Bayesian method ideal for uncertainty quantification, a full-dimensional implementation of MCMC for the retrieval is computationally intractable. In this work, we develop a block Metropolis MCMC algorithm for the high-dimensional VSWIR surface reflectance retrieval that leverages the structure of the forward radiative transfer model to enable tractable fully Bayesian computation. We use the posterior distribution from this MCMC algorithm to assess the limitations of optimal estimation, the state-of-the-art Bayesian algorithm in operational retrievals, which is more computationally efficient but characterizes the posterior with a Gaussian approximation. By analyzing the differences between the posteriors computed by the two methods, we show that the MCMC algorithm gives more physically sensible results and reveals the non-Gaussian structure of the posterior, specifically in the atmospheric aerosol optical depth parameter and the low-wavelength surface reflectances.
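As a point of reference, the sketch below shows the generic Metropolis-within-blocks pattern the abstract refers to, for an arbitrary log-posterior and a user-supplied partition of the state vector (e.g., windows of reflectance channels plus the atmospheric parameters). The function names, proposal, and blocking are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def block_metropolis(log_post, x0, blocks, steps, prop_scale=0.01, seed=None):
    """Metropolis-within-blocks: propose a Gaussian random-walk update
    for one block of the state at a time and accept or reject it by
    the usual Metropolis ratio, keeping the other blocks fixed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    lp = log_post(x)
    chain = np.empty((steps, x.size))
    for t in range(steps):
        for blk in blocks:                 # e.g., reflectance windows + AOD, H2O
            prop = x.copy()
            prop[blk] += prop_scale * rng.standard_normal(len(blk))
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept
                x, lp = prop, lp_prop
        chain[t] = x
    return chain
```

Updating blocks rather than the full state keeps each proposal in a low-dimensional subspace, which is what makes acceptance rates workable in high dimension.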
As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to prevent wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in elaborating a potentially impredicative term into a predicative, universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the algorithm, we need to solve unification problems in the equational theory of universe levels. To do so, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in an algorithm which uses a constraint-postponement strategy to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate, semi-automatically, many non-trivial developments from Matita's arithmetic library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which, as far as we know, were not previously available in Agda.
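For intuition, universe-level expressions built from 0, successor, variables, and max are commonly handled through the normal form $\max(n, x_1+n_1, \dots, x_k+n_k)$; deciding equality of such normal forms is the starting point for the level unification the abstract mentions. The sketch below is an illustrative Python rendering of that normal form, not code from Predicativize.

```python
# A level is a pair (const, {var: offset}) denoting
# max(const, var_1 + off_1, ..., var_k + off_k).
def lvl_zero():  return (0, {})
def lvl_var(x):  return (0, {x: 0})

def lvl_suc(l):
    c, vs = l
    return (c + 1, {x: n + 1 for x, n in vs.items()})

def lvl_max(l1, l2):
    (c1, v1), (c2, v2) = l1, l2
    vs = dict(v1)
    for x, n in v2.items():
        vs[x] = max(vs.get(x, -1), n)
    return (max(c1, c2), vs)

def lvl_norm(l):
    # The constant is redundant when some atom x+n has n >= const,
    # since levels are nonnegative and hence x + n >= n >= const.
    c, vs = l
    if vs and max(vs.values()) >= c:
        c = 0
    return (c, vs)

def lvl_eq(l1, l2):
    # Equality of normal forms decides equality in the equational theory.
    return lvl_norm(l1) == lvl_norm(l2)
```

For example, `lvl_eq(lvl_max(lvl_var('x'), lvl_zero()), lvl_var('x'))` holds, reflecting the identity max(x, 0) = x.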
Introduction: Oblique target rotation in the context of exploratory factor analysis is a relevant method for investigating the oblique independent clusters model. It has been argued that minimizing single cross-loadings by means of target rotation may lead to large effects of sampling error on the target-rotated factor solutions. Method: In order to minimize the effects of sampling error on the results of target rotation, we propose to compute the mean cross-loadings for each block of salient loadings of the independent clusters model and to perform target rotation on the block-wise mean cross-loadings. The resulting transformation matrix is then applied to the complete unrotated loading matrix in order to produce mean target-rotated factors. Results: A simulation study based on correlated independent factor models revealed that mean oblique target rotation resulted in smaller negative bias of factor inter-correlations than conventional target rotation based on single loadings, especially when the sample size was small and the number of factors was large. An empirical example revealed that the similarity of target-rotated factors computed for small subsamples to the target-rotated factors of the total sample was more pronounced for mean target rotation than for conventional target rotation. Discussion: Mean target rotation can be recommended in the context of oblique independent factor models, especially for small samples. An R script and an SPSS script for this form of target rotation are provided in the Appendix.
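As a rough numpy illustration of the block-wise idea (the authoritative scripts are in the paper's Appendix), the sketch below averages the loadings within each block of salient variables, fits the resulting k x k matrix of means to a diagonal target by unconstrained least squares in place of a full oblique (Browne-type) target rotation, and applies the transformation matrix to the complete loading matrix; all names are illustrative.

```python
import numpy as np

def mean_target_rotation(A, blocks):
    """A: unrotated p x k loading matrix; blocks[j] lists the rows
    (variables) salient on factor j.  Average the loadings within each
    block, fit the k x k matrix of means to a diagonal target, and
    apply the transformation to the full loading matrix."""
    k = len(blocks)
    M = np.vstack([A[rows].mean(axis=0) for rows in blocks])   # k x k block means
    T, *_ = np.linalg.lstsq(M, np.eye(k), rcond=None)          # M @ T ~ identity
    S = np.linalg.inv(T)
    S /= np.linalg.norm(S, axis=1, keepdims=True)              # enforce diag((T'T)^{-1}) = I
    T = np.linalg.inv(S)
    L = A @ T                                                  # mean-target-rotated loadings
    Phi = S @ S.T                                              # factor inter-correlations
    return L, Phi
```

The row normalization of the inverse transformation fixes the oblique-rotation convention that factors have unit variance, so `Phi` has a unit diagonal.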
By a semi-Lagrangian change of coordinates, the hydrostatic Euler equations describing free-surface sheared flows are rewritten as a system of quasilinear equations, for which stability conditions can be determined by analyzing its hyperbolic structure. This new system can be written as a quasilinear system in time and the horizontal variables, and it involves no vertical derivatives. However, the coefficients in front of the horizontal derivatives include an integral operator acting on the new vertical variable. The spectrum of these operators is studied in detail; in particular, it includes a continuous part. Riemann invariants are then determined as conserved quantities along the characteristic curves. Examples of solutions are provided, in particular stationary solutions and solutions blowing up in finite time. Finally, we propose an exact multi-layer $\mathbb{P}_0$-discretization, which could be used to solve this semi-Lagrangian system numerically, and analyze the eigenvalues of the corresponding discretized operator to investigate the hyperbolic nature of the approximated system.
We study a family of distances between functions of a single variable. These distances are examples of integral probability metrics, and have been used previously for comparing probability measures. Special cases include the Earth Mover's Distance and the Kolmogorov Metric. We examine their properties for general signals, proving that they are robust to a broad class of perturbations and that the distance between one-dimensional tomographic projections of a two-dimensional function is bounded by the size of the difference in projection angles. We also establish error bounds for approximating the metric from finite samples, and prove that these approximations are robust to additive Gaussian noise. The results are illustrated in numerical experiments.
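For one-dimensional signals of equal mass, both named special cases reduce to simple functionals of the difference of cumulative sums, which is what makes finite-sample approximation straightforward. A minimal sketch, assuming nonnegative inputs on a uniform grid:

```python
import numpy as np

def cdf_distances(f, g, dx=1.0):
    """Distances between two nonnegative, equal-mass 1D signals via
    their cumulative sums: Earth Mover's Distance (L1 norm of the CDF
    difference) and Kolmogorov metric (sup norm of the CDF difference)."""
    F = np.cumsum(f) * dx           # discrete cumulative distribution of f
    G = np.cumsum(g) * dx           # discrete cumulative distribution of g
    emd = np.sum(np.abs(F - G)) * dx
    kol = np.max(np.abs(F - G))
    return emd, kol
```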
We aim to efficiently compute spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula for the principal eigenvalue (Lyapunov exponent) form the foundation of our method, implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originating in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as the 2D cellular flow and the 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with a semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains, where no truncation is needed.
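The generic Feynman-Kac particle pattern behind such genetic algorithms can be sketched as follows: particles advect and diffuse, carry multiplicative weights given by a fitness (potential) term, and are resampled so the population tracks the principal eigenfunction, with the eigenvalue recovered from the average growth rate. The operator $\kappa\Delta + v\cdot\nabla + c$, the time stepping, and all names below are illustrative stand-ins for the paper's specific setup.

```python
import numpy as np

def fk_lyapunov(velocity, potential, kappa, n_particles=10_000,
                dt=1e-2, steps=5_000, seed=None):
    """Estimate the principal eigenvalue of L = kappa*Laplacian +
    velocity . grad + potential via the Feynman-Kac formula: particles
    advect and diffuse, carry multiplicative weights exp(potential*dt)
    (mutation), and are resampled in proportion to weight (selection)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 2.0 * np.pi, size=(n_particles, 2))
    log_growth = 0.0
    for _ in range(steps):
        X = X + velocity(X) * dt                                   # advection
        X += np.sqrt(2.0 * kappa * dt) * rng.standard_normal(X.shape)  # diffusion
        w = np.exp(potential(X) * dt)                              # mutation / fitness
        log_growth += np.log(w.mean())
        X = X[rng.choice(n_particles, n_particles, p=w / w.sum())] # selection
    return log_growth / (steps * dt)
```

For a 2D cellular flow one could take, e.g., `velocity = lambda X: np.stack([np.sin(X[:, 0]) * np.cos(X[:, 1]), -np.cos(X[:, 0]) * np.sin(X[:, 1])], axis=1)`.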
We study the continuous multi-reference alignment model of estimating a periodic function on the circle from noisy and circularly rotated observations. Motivated by analogous high-dimensional problems that arise in cryo-electron microscopy, we establish minimax rates for estimating generic signals that are explicit in the dimension $K$. In a high-noise regime with noise variance $\sigma^2 \gtrsim K$, for signals with Fourier coefficients of roughly uniform magnitude, the rate scales as $\sigma^6$ and has no further dependence on the dimension. This rate is achieved by a bispectrum inversion procedure, and our analyses provide new stability bounds for bispectrum inversion that may be of independent interest. In a low-noise regime where $\sigma^2 \lesssim K/\log K$, the rate scales instead as $K\sigma^2$, and we establish this rate by a sharp analysis of the maximum likelihood estimator that marginalizes over the latent rotations. A complementary lower bound that interpolates between these two regimes is obtained using Assouad's hypercube lemma. We also extend these analyses to signals whose Fourier coefficients have a slow power-law decay.
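The key point of the bispectrum approach is that the DC term, power spectrum, and bispectrum are invariant to circular shifts, so they can be averaged across observations without knowing the rotations, and the signal is then recovered by inverting the averaged bispectrum. A minimal sketch of the invariant-estimation step (in practice the power spectrum and bispectrum must also be debiased for the noise contribution, which is omitted here):

```python
import numpy as np

def shift_invariants(observations):
    """Estimate shift-invariant features from noisy, circularly shifted
    copies of a signal (one observation per row): the mean DC term, the
    power spectrum, and the bispectrum
    B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2]).
    A circular shift multiplies F[k] by a phase e^{-2*pi*i*k*s/K},
    which cancels in each of these quantities, so averaging over
    observations removes the unknown rotations."""
    F = np.fft.fft(observations, axis=1)
    K = F.shape[1]
    k = np.arange(K)
    power = (np.abs(F) ** 2).mean(axis=0)
    B = (F[:, k[:, None]] * F[:, k[None, :]] *
         np.conj(F[:, (k[:, None] + k[None, :]) % K])).mean(axis=0)
    return F[:, 0].real.mean(), power, B
```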
This note shows how to compute, to high relative accuracy under mild assumptions, complex Jacobi rotations for the diagonalization of Hermitian matrices of order two, using the correctly rounded functions $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$, proposed for standardization in the C programming language as recommended by the IEEE-754 floating-point standard. Rounding to nearest (ties to even) and non-stop arithmetic are assumed. The numerical examples compare the observed relative errors in the rotations' elements with the theoretical bounds, and show that the maximal observed departure of the rotations' determinants from unity is smaller than that of the transformations computed by LAPACK.
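For reference, the rotation in question reduces, after factoring out the phase of the off-diagonal element, to the classical real Jacobi formulas. A Python sketch, using math.hypot and 1/math.sqrt as stand-ins for the correctly rounded $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$:

```python
import math

def complex_jacobi(a, c, b_re, b_im):
    """Rotation U = [[cs, -conj(ph)*sn], [ph*sn, cs]] diagonalizing the
    Hermitian matrix [[a, conj(b)], [b, c]], with real a, c and
    b = b_re + i*b_im.  math.hypot stands in for cr_hypot and
    1/math.sqrt for cr_rsqrt."""
    ab = math.hypot(b_re, b_im)            # |b|
    if ab == 0.0:                          # already diagonal
        return 1.0, complex(0.0, 0.0)
    ph = complex(b_re / ab, b_im / ab)     # e^{i*phi} = b / |b|
    zeta = (a - c) / (2.0 * ab)            # cot(2*theta)
    t = math.copysign(1.0, zeta) / (abs(zeta) + math.hypot(1.0, zeta))
    cs = 1.0 / math.sqrt(1.0 + t * t)      # cos(theta), via reciprocal sqrt
    sn = t * cs                            # sin(theta)
    return cs, sn * ph                     # real cosine, complex sine
```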
We study various aspects of the first-order transduction quasi-order, which provides a way of measuring the relative complexity of classes of structures based on whether one can encode the other using a formula of first-order (FO) logic. In contrast with the conjectured simplicity of the transduction quasi-order for monadic second-order logic, the FO-transduction quasi-order is very complex; in particular, we prove that the quotient partial order is not a lattice, although it is a bounded distributive join-semilattice, as is the subposet of additive classes. Many standard properties from structural graph theory and model theory appear naturally in this quasi-order. For example, we characterize transductions of paths, cubic graphs, and cubic trees in terms of bandwidth, bounded degree, and treewidth. We establish that the classes of all graphs with pathwidth at most~$k$, for $k\geq 1$, form a strict hierarchy in the FO-transduction quasi-order, and we leave open whether the same is true for treewidth. This leads to the question of whether properties admit maximum or minimum classes in this quasi-order. We prove that many properties do not admit a maximum class, and that star forests form the minimum class that is not a transduction of a class with bounded degree, which can be seen as an instance of transduction duality. We close with a notion of dense analogues of sparse classes and discuss several related conjectures. As a ubiquitous tool in our results, we prove a normal form for FO-transductions that manifests the locality of FO logic. This is one of several technical results about FO-transductions that we anticipate will be broadly useful.
There is a presumption in human-computer interaction that laying out menus and most other material in neat rows and columns helps users get work done. The rule has been so implicit in the field of design as to allow for no debate. However, the idea that perfect collinearity creates an advantage for either search or recall has rarely been tested. Drawing on separate branches of the cognitive literature, we tested a minimal brainstorming interface with either aligned or eccentrically arranged layouts on 96 college students. Incidental exact recall of recently worked locations improved in the eccentric condition, and in both conditions there were frequent near-miss recall errors involving neighboring aligned objects and groups of objects. Further analysis found only marginal performance advantages, specifically for females with the eccentric design. However, NASA-TLX subjective measures showed that in the eccentric condition, females reported higher performance, less effort, and yet also higher frustration, while males reported lower performance with about the same effort and lower frustration.