The eigenvalue method, proposed by the developer of the widely used Analytic Hierarchy Process methodology, exhibits right-left asymmetry: the priorities derived from the right eigenvector do not necessarily coincide with the priorities derived from the reciprocal left eigenvector. This paper offers a comprehensive numerical experiment comparing the two eigenvector-based weighting procedures and a reasonable alternative, the row geometric mean, with respect to four measures. The underlying pairwise comparison matrices are constructed randomly with different dimensions and levels of inconsistency. The disagreement between the two eigenvectors turns out not always to be a monotonic function of these important characteristics of the matrix. The ranking contradictions can affect alternatives with relatively distant priorities. The row geometric mean is found to lie almost at the midpoint between the right and inverse left eigenvectors, making it a straightforward compromise between them.
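A minimal numerical sketch of the three weighting procedures compared above, assuming a small hypothetical reciprocal pairwise comparison matrix: right-eigenvector priorities, priorities from the elementwise reciprocal of the left (i.e. transposed) principal eigenvector, and the row geometric mean.

```python
import numpy as np

def perron_vector(M):
    """Principal (Perron) right eigenvector of a positive matrix, normalised to sum 1."""
    vals, vecs = np.linalg.eig(M)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

# Hypothetical 4x4 reciprocal pairwise comparison matrix (mildly inconsistent)
A = np.array([
    [1.0, 2.0, 4.0, 3.0],
    [1/2, 1.0, 3.0, 2.0],
    [1/4, 1/3, 1.0, 1/2],
    [1/3, 1/2, 2.0, 1.0],
])

w_right = perron_vector(A)                 # right-eigenvector priorities
w_left = 1.0 / perron_vector(A.T)          # inverse (reciprocal) left-eigenvector priorities
w_left /= w_left.sum()
w_rgm = np.exp(np.log(A).mean(axis=1))     # row geometric mean priorities
w_rgm /= w_rgm.sum()

print(w_right, w_left, w_rgm)              # the three vectors coincide only if A is consistent
```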
A new numerical domain decomposition method is proposed for solving elliptic equations on compact Riemannian manifolds. The advantage of this method is that it avoids global triangulations or grids on manifolds. Our method is numerically tested on some $4$-dimensional manifolds such as the unit sphere $S^{4}$, the complex projective space $\mathbb{CP}^{2}$ and the product manifold $S^{2} \times S^{2}$.
Data pruning algorithms are commonly used to reduce the memory and computational cost of the optimization process. Recent empirical results reveal that random data pruning remains a strong baseline and outperforms most existing data pruning methods in the high compression regime, i.e., where a fraction of $30\%$ or less of the data is kept. This regime has recently attracted a lot of interest as a result of the role of data pruning in improving the so-called neural scaling laws; in [Sorscher et al.], the authors showed the need for high-quality data pruning algorithms in order to beat the sample power law. In this work, we focus on score-based data pruning algorithms and show theoretically and empirically why such algorithms fail in the high compression regime. We demonstrate ``No Free Lunch'' theorems for data pruning and present calibration protocols that use randomization to enhance the performance of existing pruning algorithms in this high compression regime.
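As a purely illustrative sketch (not the paper's protocols), the snippet below contrasts deterministic top-score pruning with the random baseline and with one generic randomised calibration that samples examples with probabilities derived from their scores; all function names and the softmax-style softening are assumptions.

```python
import numpy as np

def score_based_prune(scores, keep_frac):
    """Deterministic score-based pruning: keep the examples with the highest scores."""
    n_keep = int(len(scores) * keep_frac)
    return np.argsort(scores)[-n_keep:]

def random_prune(n, keep_frac, seed=None):
    """Random pruning baseline: keep a uniformly sampled subset."""
    rng = np.random.default_rng(seed)
    return rng.choice(n, size=int(n * keep_frac), replace=False)

def randomized_calibration(scores, keep_frac, temperature=1.0, seed=None):
    """Generic randomised softening of the top-score rule: sample examples with
    probabilities proportional to softmax(scores / temperature)."""
    rng = np.random.default_rng(seed)
    p = np.exp((scores - scores.max()) / temperature)
    p /= p.sum()
    return rng.choice(len(scores), size=int(len(scores) * keep_frac), replace=False, p=p)

# Toy usage at 30% compression with synthetic scores
scores = np.random.default_rng(1).standard_normal(1000)
kept_hard = score_based_prune(scores, 0.3)
kept_rand = random_prune(len(scores), 0.3, seed=1)
kept_soft = randomized_calibration(scores, 0.3, temperature=0.5, seed=1)
```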
We propose and analyze an extended Fourier pseudospectral (eFP) method for the spatial discretization of the Gross-Pitaevskii equation (GPE) with low regularity potential by treating the potential in an extended window for its discrete Fourier transform. The proposed eFP method maintains optimal convergence rates with respect to the regularity of the exact solution even if the potential is of low regularity, and its computational cost is similar to that of the standard Fourier pseudospectral method; it is thus both efficient and accurate. Furthermore, similar to the Fourier spectral/pseudospectral methods, the eFP method can be easily coupled with different popular temporal integrators, including finite difference methods, time-splitting methods and exponential-type integrators. Numerical results are presented to validate our optimal error estimates, to demonstrate that they are sharp, and to show the method's efficiency in practical computations.
Many of the tools available for robot learning were designed for Euclidean data. However, many applications in robotics involve manifold-valued data. A common example is orientation; this can be represented as a 3-by-3 rotation matrix or a quaternion, the spaces of which are non-Euclidean manifolds. In robot learning, manifold-valued data are often handled by relating the manifold to a suitable Euclidean space, either by embedding the manifold or by projecting the data onto one or several tangent spaces. These approaches can result in poor predictive accuracy and convoluted algorithms. In this paper, we propose an "intrinsic" approach to regression that works directly within the manifold. It involves taking a suitable probability distribution on the manifold, letting its parameter be a function of a predictor variable, such as time, and then estimating that function non-parametrically via a "local likelihood" method that incorporates a kernel. We name the method kernelised likelihood estimation. The approach is conceptually simple and generally applicable to different manifolds. We implement it for three different types of manifold-valued data that commonly appear in robotics applications. The results of these experiments show better predictive accuracy than projection-based algorithms.
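The abstract describes the method only at a high level; the sketch below instantiates the idea for one common case, directional (unit-vector) data, assuming a von Mises-Fisher distribution whose mean direction depends on time and a Gaussian kernel over the predictor. For a fixed concentration parameter, the kernel-weighted maximum-likelihood mean direction is simply the normalised weighted resultant, which keeps the example short; all names and the data are hypothetical.

```python
import numpy as np

def gaussian_kernel(t, t0, h):
    """Gaussian kernel weights in the predictor variable (here: time)."""
    return np.exp(-0.5 * ((t - t0) / h) ** 2)

def local_vmf_mean(times, points, t0, h):
    """Kernel-weighted maximum-likelihood estimate of the von Mises-Fisher mean
    direction at t0; for fixed concentration, the weighted MLE is the normalised
    weighted resultant of the observed unit vectors."""
    w = gaussian_kernel(times, t0, h)          # local-likelihood weights
    r = (w[:, None] * points).sum(axis=0)      # weighted resultant vector
    return r / np.linalg.norm(r)

# Hypothetical directional (unit-vector) trajectory observed at irregular times
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 1.0, 50))
raw = np.stack([np.cos(2 * np.pi * times),
                np.sin(2 * np.pi * times),
                0.3 * np.ones_like(times)], axis=1)
raw += 0.05 * rng.standard_normal(raw.shape)
points = raw / np.linalg.norm(raw, axis=1, keepdims=True)

mu_hat = local_vmf_mean(times, points, t0=0.5, h=0.1)  # estimated direction at t = 0.5
```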
Purpose: To develop an open-source, fully-automatic deep learning algorithm, DeepGPET, for choroid region segmentation in optical coherence tomography (OCT) data. Methods: We used a dataset of 715 OCT B-scans (82 subjects, 115 eyes) from 3 clinical studies related to systemic disease. Ground truth segmentations were generated using a clinically validated, semi-automatic choroid segmentation method, Gaussian Process Edge Tracing (GPET). We fine-tuned a UNet with a MobileNetV3 backbone pre-trained on ImageNet. Standard segmentation agreement metrics, as well as derived measures of choroidal thickness and area, were used to evaluate DeepGPET, alongside qualitative evaluation from a clinical ophthalmologist. Results: DeepGPET achieved excellent agreement with GPET on data from the 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s ($\pm$15.09) using GPET to 1.25s ($\pm$0.10) using DeepGPET. Both methods performed similarly according to a clinical ophthalmologist, who qualitatively judged a subset of segmentations from GPET and DeepGPET based on smoothness and accuracy. Conclusions: DeepGPET, a fully-automatic, open-source algorithm for choroidal segmentation, will enable researchers to efficiently extract choroidal measurements, even for large datasets. As no manual interventions are required, DeepGPET is less subjective than semi-automatic methods and could be deployed in clinical practice without necessitating a trained operator.
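The abstract does not state the implementation library; one common way to assemble a UNet with an ImageNet-pre-trained MobileNetV3 encoder is via segmentation_models_pytorch, sketched below with an illustrative fine-tuning step. The encoder identifier, loss, optimiser and image size are assumptions, not the released DeepGPET code.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="timm-mobilenetv3_large_100",  # MobileNetV3 backbone (assumed identifier)
    encoder_weights="imagenet",                 # ImageNet pre-training, as in the abstract
    in_channels=1,                              # single-channel OCT B-scans
    classes=1,                                  # binary choroid mask
)

loss_fn = torch.nn.BCEWithLogitsLoss()          # placeholder training loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative fine-tuning step on a dummy batch
x = torch.randn(4, 1, 256, 256)                 # dummy B-scans
y = (torch.rand(4, 1, 256, 256) > 0.5).float()  # dummy ground-truth masks
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```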
Contraction in Wasserstein 1-distance with explicit rates is established for generalized Hamiltonian Monte Carlo with stochastic gradients under possibly nonconvex conditions. The algorithms considered include splitting schemes of kinetic Langevin diffusion. As a consequence, quantitative Gaussian concentration bounds are provided for empirical averages. Convergence in Wasserstein 2-distance, total variation and relative entropy is also established, together with numerical bias estimates.
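The abstract does not specify which splitting schemes are analysed; as a purely illustrative sketch, the following implements one standard splitting of kinetic Langevin dynamics (an OBABO-type step at unit temperature) with the full gradient replaced by a stochastic estimate. Step size, friction and the gradient estimator are placeholders.

```python
import numpy as np

def obabo_step(x, v, stoch_grad, h, gamma, rng):
    """One OBABO splitting step of kinetic Langevin dynamics at unit temperature,
    using a stochastic estimate of the potential's gradient."""
    a = np.exp(-gamma * h / 2)
    b = np.sqrt(1 - a**2)
    v = a * v + b * rng.standard_normal(v.shape)   # O: exact Ornstein-Uhlenbeck half step
    v = v - (h / 2) * stoch_grad(x)                # B: half kick with stochastic gradient
    x = x + h * v                                  # A: full drift
    v = v - (h / 2) * stoch_grad(x)                # B: half kick
    v = a * v + b * rng.standard_normal(v.shape)   # O: Ornstein-Uhlenbeck half step
    return x, v

# Toy usage: quadratic potential with additive gradient noise (placeholder estimator)
rng = np.random.default_rng(0)
stoch_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x, v = np.ones(2), np.zeros(2)
for _ in range(1000):
    x, v = obabo_step(x, v, stoch_grad, h=0.1, gamma=1.0, rng=rng)
```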
Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect; as achieving such interpretable representations is challenging, many causal learning algorithms rely on elements that encode prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such elements and prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our algorithm utilizes a new differentiable parametrization to learn a directed acyclic graph (DAG) together with a latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixture components with an outcome of the DAG nodes; this identification enables feature discovery with causal relationships. We test causalPIMA on a synthetic dataset and a scientific dataset; the results demonstrate the capability to learn an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
We introduce a novel structure-preserving method to approximate the compressible ideal magnetohydrodynamics (MHD) equations. This technique addresses the MHD equations using a non-divergence formulation, where the contributions of the magnetic field to the momentum and total mechanical energy are treated as source terms. Our approach uses the Marchuk-Strang splitting technique and involves three distinct components: a compressible Euler solver, a source-system solver, and an update procedure for the total mechanical energy. The scheme allows significant freedom in the choice of the Euler solver, while the magnetic field is discretized using a curl-conforming finite element space, yielding exact preservation of the involution constraints. We prove that the method preserves invariant domain properties, including positivity of density, positivity of internal energy, and the minimum principle of the specific entropy. If the scheme used to solve Euler's equations conserves total energy, then the resulting MHD scheme can be proven to preserve total energy. Similarly, if the scheme used to solve Euler's equations is entropy-stable, then the resulting MHD scheme is entropy stable as well. In our approach, the CFL condition does not depend on magnetosonic wave speeds, but only on the usual maximum wave speed from Euler's system. To validate the effectiveness of our method, we solve a variety of ideal MHD problems, showing that the method delivers high-order accuracy in space for smooth problems while also offering unconditional robustness in the shock hydrodynamics regime.
We introduce a convergent hierarchy of lower bounds on the minimum value of a real homogeneous polynomial over the sphere. The main practical advantage of our hierarchy over the sum-of-squares (SOS) hierarchy is that the lower bound at each level of our hierarchy is obtained by a minimum eigenvalue computation, as opposed to the full semidefinite program (SDP) required at each level of SOS. In practice, this allows us to go to much higher levels than are computationally feasible for the SOS hierarchy. For both hierarchies, the underlying space at the $k$-th level is the set of homogeneous polynomials of degree $2k$. We prove that our hierarchy converges as $O(1/k)$ in the level $k$, matching the best-known convergence of the SOS hierarchy when the number of variables $n$ is less than the half-degree $d$ (the best-known convergence of SOS when $n \geq d$ is $O(1/k^2)$). More generally, we introduce a convergent hierarchy of minimum eigenvalue computations for minimizing the inner product between a real tensor and an element of the spherical Segre-Veronese variety, with similar convergence guarantees. As examples, we obtain hierarchies for computing the (real) tensor spectral norm, and for minimizing biquadratic forms over the sphere. Hierarchies of eigencomputations for more general constrained polynomial optimization problems are discussed.
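The construction behind the hierarchy is not spelled out in the abstract; the degree-one case below only records the standard fact motivating the replacement of SDPs by eigenvalue computations: for a homogeneous quadratic form, the sphere minimum is itself a minimum eigenvalue. For a symmetric matrix $A \in \mathbb{R}^{n \times n}$ and $p(x) = x^{\top} A x$,
$$\min_{\|x\|_2 = 1} p(x) \;=\; \lambda_{\min}(A),$$
attained at a unit eigenvector of $A$ associated with $\lambda_{\min}(A)$; the hierarchy described above performs analogous minimum eigenvalue computations on the space of homogeneous polynomials of degree $2k$ at level $k$.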
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of applications, divided into classification, regression, ranking, sample generation and energy-based modelling. Overall, we introduce 33 different loss functions and organise them into an intuitive taxonomy. Each loss function is given a theoretical backing, and we describe where it is best used. This survey aims to provide a reference for the most essential loss functions for both beginner and advanced machine learning practitioners.