This paper presents an application of artificial intelligence to mass spectrometry data for assessing the habitability potential of ancient Mars. Although the data were collected for Mars, the same approach can be replicated for any terrestrial body in our solar system, and the proposed methodology can be adapted to any domain that uses mass spectrometry. The research focuses on the analysis of data from two mass spectrometry techniques, evolved gas analysis (EGA-MS) and gas chromatography (GC-MS), which are used to identify specific chemical compounds in geological material samples, and demonstrates the applicability of EGA-MS and GC-MS data to extraterrestrial material analysis. The most important features of the proposed methodology include a square root transformation of mass spectrometry values, conversion of the raw data to 2D spectrograms, and the use of specific machine learning models and techniques to avoid overfitting on relatively small datasets. Both the EGA-MS and GC-MS datasets come from NASA and from two machine learning competitions in which the author participated. Complete running code for the GC-MS dataset/competition is available on GitHub.1 The raw training mass spectrometry data include binary (0/1) labels for specific chemical compounds, selected to provide valuable insights and contribute to our understanding of the potential past habitability of Mars.
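As a rough illustration of the preprocessing described above, the sketch below square-root transforms ion abundances and bins raw EGA-MS readings into a 2D (temperature × m/z) spectrogram; the column names, bin counts, and normalization are hypothetical choices, not details taken from the competition code.

```python
import numpy as np
import pandas as pd

def to_spectrogram(df: pd.DataFrame, n_temp_bins=64, n_mz_bins=128):
    """Bin raw EGA-MS readings into a 2D (temperature x m/z) spectrogram.

    Expects columns 'temp', 'mz', and 'abundance' (hypothetical names).
    Abundances are square-root transformed to compress the dynamic range.
    """
    values = np.sqrt(df["abundance"].clip(lower=0).to_numpy())
    grid, _, _ = np.histogram2d(
        df["temp"].to_numpy(),
        df["mz"].to_numpy(),
        bins=(n_temp_bins, n_mz_bins),
        weights=values,
    )
    # Normalize to [0, 1] so spectrograms are comparable across runs.
    return grid / (grid.max() + 1e-9)
```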
Brain projects require the collection of massive electrophysiological data for longitudinal, cross-sectional, or population-level neuroscience studies. Quality metrics automatically label the data after centralized preprocessing. However, although waveform-based metrics are partially useful, they may be unreliable because they neglect the spectral profiles. Here, we detected the phenomenon of parallel log spectra (PaLOS), in which the scalp EEG power spectra on a log scale are parallel to each other, in 10% of 2549 HBN EEG recordings. The phenomenon was reproduced in 8% of 412 PMDT EEG recordings from 4 databases. We designed the PaLOS index (PaLOSi) to quantify this phenomenon by decomposing the cross-spectra at different frequencies into common principal component spaces. We found that PaLOS biophysically implies a prominently dominant dipole in the source space, which is implausible for resting EEG, and that in practice it may result from excessive preprocessing. Compared with 1966 normative EEG cross-spectra, the HBN and PMDT EEG with PaLOS presented generally much higher pairwise electrode coherences and higher similarity of coherence-based network patterns, which goes against the known frequency-dependent characteristic of coherence networks. We suggest that PaLOSi should lie in the range 0.4-0.7 for quality assurance of large resting EEG datasets.
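To make the parallelism notion concrete, here is a simplified proxy in Python: it removes per-channel offsets from the log power spectra and measures how much of the remaining variance a single component explains. The published PaLOSi instead decomposes the cross-spectra into common principal component spaces, so this sketch only illustrates the underlying idea; the sampling rate and Welch settings are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import welch

def parallel_log_spectra_score(eeg, fs=250.0):
    """Crude proxy for PaLOS detection (not the published PaLOSi).

    eeg: array of shape (n_channels, n_samples).
    Returns a value in [0, 1]; values near 1 mean the channel log-power
    spectra differ only by per-channel offsets, i.e., they are parallel.
    """
    _, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
    log_psd = np.log(psd + 1e-20)
    # Remove each channel's mean log power; parallel spectra then
    # collapse onto a single common shape (a rank-1 matrix).
    centered = log_psd - log_psd.mean(axis=1, keepdims=True)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    return (s[0] ** 2) / ((s ** 2).sum() + 1e-20)
```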
This paper explores the phenomenon of avoided level crossings in quantum annealing (QA), a promising framework for quantum computing that may provide a quantum advantage for certain tasks. Quantum annealing involves letting a quantum system evolve according to the Schr\"odinger equation, with the goal of obtaining the optimal solution to an optimization problem through measurements of the final state. However, the continuous nature of quantum annealing makes analytical analysis challenging, particularly with regard to the instantaneous eigenenergies. The adiabatic theorem provides a theoretical bound on the annealing time required to obtain the optimal solution with high probability, which is inversely proportional to the square of the minimum spectral gap. Avoided level crossings can create exponentially closing gaps, which can lead to exponentially long running times for optimization problems. In this paper, we use a perturbative expansion to derive a condition for the occurrence of an avoided level crossing during the annealing process. We then apply this condition to the MaxCut problem on bipartite graphs. We show that no exponentially small gaps arise for regular bipartite graphs, implying that QA can efficiently solve MaxCut in that case. On the other hand, we show that irregularities in the vertex degrees can lead to the satisfaction of the avoided-level-crossing condition. We provide numerical evidence to support this theoretical development, and discuss the relation between the presence of exponentially closing gaps and the failure of quantum annealing.
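To make the role of the minimum spectral gap concrete, the following sketch numerically diagonalizes a toy transverse-field annealing Hamiltonian $H(s) = (1-s)H_0 + sH_P$ for MaxCut on a 4-cycle and reports the minimum gap; the instance, resolution, and symmetry-sector restriction are illustrative choices, not the paper's construction.

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(n, site, op):
    """Embed a single-qubit operator acting on `site` into n qubits."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # toy 4-cycle (bipartite)
H0 = -sum(op_on(n, i, X) for i in range(n))       # transverse-field mixer
HP = sum(op_on(n, i, Z) @ op_on(n, j, Z) for i, j in edges)  # MaxCut cost

# The evolution starts in the ground state of H0, which lies in the +1
# sector of the global spin flip P = X...X; restricting to that sector
# avoids the trivial end-point degeneracy of HP masking the relevant gap.
P = op_on(n, 0, X)
for i in range(1, n):
    P = P @ op_on(n, i, X)
w, V = np.linalg.eigh(P)
Vp = V[:, w > 0]

gaps = []
for s in np.linspace(0.0, 1.0, 201):
    H = Vp.T @ ((1 - s) * H0 + s * HP) @ Vp
    evals = np.linalg.eigvalsh(H)
    gaps.append(evals[1] - evals[0])
print("minimum spectral gap in the symmetric sector:", min(gaps))
```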
We propose an implementable, feedforward neural-network-based, structure-preserving probabilistic numerical approximation for a generalized obstacle problem describing the value of a zero-sum differential game of optimal stopping with asymmetric information. The target solution depends on three variables: the time, the spatial (or state) variable, and a variable from the standard $(I-1)$-simplex which represents the probabilities with which the $I$ possible configurations of the game are played. The proposed numerical approximation preserves the convexity of the continuous solution as well as the lower and upper obstacle bounds. We show convergence of the fully discrete scheme to the unique viscosity solution of the continuous problem and present a range of numerical studies to demonstrate its applicability.
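One simple way to hard-encode the obstacle bounds, as a hedged sketch rather than the paper's actual scheme, is to squash the network output into the interval between the obstacles; the architecture and widths below are arbitrary, and the paper's convexity preservation in the simplex variable is not reproduced here.

```python
import torch
import torch.nn as nn

class BoundedNet(nn.Module):
    """Feedforward net whose output is clamped between obstacle bounds.

    Enforces lower <= u(t, x, p) <= upper by construction via a sigmoid
    followed by an affine rescaling. Purely illustrative: the paper's
    structure-preserving scheme also preserves convexity in p.
    """
    def __init__(self, dim_in, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, txp, lower, upper):
        # Sigmoid maps raw output into (0, 1); rescaling places the
        # prediction inside [lower, upper] exactly.
        return lower + (upper - lower) * torch.sigmoid(self.net(txp))
```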
We present HoVer-UNet, an approach to distill the knowledge of the multi-branch HoVerNet framework for nuclei instance segmentation and classification in histopathology. We propose a compact, streamlined single-UNet network with a Mix Vision Transformer backbone, and equip it with a custom loss function to optimally encode the distilled knowledge of HoVerNet, reducing computational requirements without compromising performance. We show that our model achieves results comparable to HoVerNet on the public PanNuke and CoNSeP datasets with a three-fold reduction in inference time. We make the code of our model publicly available at https://github.com/DIAGNijmegen/HoVer-UNet.
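As a generic illustration of the distillation idea (not HoVer-UNet's actual multi-branch loss), a standard knowledge-distillation objective combines hard-label cross-entropy with a temperature-softened KL term against the teacher:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      alpha=0.5, temperature=2.0):
    """Generic knowledge-distillation loss (illustration only).

    Combines hard-label cross-entropy with a softened KL term matching
    the student to the teacher; HoVer-UNet's custom loss additionally
    encodes HoVerNet's horizontal/vertical map branches.
    """
    hard = F.cross_entropy(student_logits, targets)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft
```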
In many applications, it is desirable to obtain the extreme eigenvalues and eigenvectors of large Hermitian matrices with efficient and compact algorithms. In particular, orthogonalization-free methods are preferred for large-scale problems, since they find the eigenspaces of extreme eigenvalues without explicitly computing orthogonal vectors in each iteration. For the top $p$ eigenvalues, the simplest orthogonalization-free method is to find the best rank-$p$ approximation to a positive semi-definite Hermitian matrix by algorithms solving the unconstrained Burer-Monteiro formulation. We show that the nonlinear conjugate gradient method for the unconstrained Burer-Monteiro formulation is equivalent to a Riemannian conjugate gradient method on a quotient manifold with the Bures-Wasserstein metric, so its global convergence to a stationary point can be proven. Numerical tests suggest that the method is efficient for computing the largest $p$ eigenvalues of large-scale matrices when those eigenvalues are distributed nearly uniformly.
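For concreteness, a minimal sketch of the unconstrained Burer-Monteiro approach is given below, using plain gradient descent on $f(Y) = \|A - YY^{\mathsf{T}}\|_F^2$ for a real symmetric PSD matrix rather than the nonlinear conjugate gradient method analyzed in the paper; the step-size heuristic and Rayleigh-Ritz extraction are illustrative choices.

```python
import numpy as np

def top_eigs_burer_monteiro(A, p, steps=500, lr=None, seed=0):
    """Top-p eigenpairs of a real symmetric PSD matrix A via the
    unconstrained Burer-Monteiro objective f(Y) = ||A - Y Y^T||_F^2.

    Plain gradient descent for clarity; the paper analyzes nonlinear
    conjugate gradient, which converges faster in practice.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, p)) / np.sqrt(n)
    if lr is None:
        lr = 0.25 / np.linalg.norm(A, 2)  # crude step-size heuristic
    for _ in range(steps):
        Y -= lr * 4.0 * ((Y @ Y.T) @ Y - A @ Y)  # grad f = 4 (YY^T - A) Y
    # Rayleigh-Ritz: recover eigenpairs from the p-dim subspace span(Y).
    Q, _ = np.linalg.qr(Y)
    evals, V = np.linalg.eigh(Q.T @ A @ Q)
    return evals[::-1], Q @ V[:, ::-1]
```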
The unstructured nature of the data used in foundation model development poses a challenge to systematic analyses for making data use and documentation decisions. From a Responsible AI perspective, these decisions often rely on understanding how people are represented in the data. We propose a framework designed to guide the analysis of human representation in unstructured data and to identify downstream risks. We apply the framework in two toy examples using the C4 web text corpus (derived from Common Crawl) and LAION-400M. We also propose a set of hypothetical action steps in service of dataset use, development, and documentation.
We propose an innovative and generic methodology to analyse individual and collective behaviour through individual trajectory data. The work is motivated by the analysis of GPS trajectories of fishing vessels, collected from regulatory tracking data in the context of marine biodiversity conservation and ecosystem-based fisheries management. We build a low-dimensional latent representation of trajectories using convolutional neural networks as a non-linear mapping, by training a conditional variational auto-encoder that takes covariates into account. The posterior distributions of the latent representations can be linked to the characteristics of the actual trajectories, and the latent distributions of the trajectories are compared with the Bhattacharyya coefficient, which is well suited to comparing distributions. Using this coefficient, we analyse how the individual behaviour of each vessel varies over time. For collective behaviour analysis, we build proximity graphs and use an extension of the stochastic block model for multiple networks, which yields a clustering of the individuals based on their sets of trajectories. The application to French fishing vessels enables us to obtain groups of vessels whose individual and collective behaviours exhibit spatio-temporal patterns over the period 2014-2018.
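For reference, the Bhattacharyya coefficient between two multivariate Gaussian latent posteriors has a closed form, sketched below; the assumption that the compared latent distributions are Gaussian matches the usual variational auto-encoder setup, but the function name and interface are ours.

```python
import numpy as np

def bhattacharyya_coefficient(mu1, cov1, mu2, cov2):
    """Bhattacharyya coefficient BC = exp(-D_B) between two multivariate
    Gaussians N(mu1, cov1) and N(mu2, cov2).
    """
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    # Mahalanobis-type term of the Bhattacharyya distance.
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Log-determinant term: 0.5 * ln(det(cov) / sqrt(det(cov1) det(cov2))).
    term2 = 0.5 * (np.linalg.slogdet(cov)[1]
                   - 0.5 * (np.linalg.slogdet(cov1)[1]
                            + np.linalg.slogdet(cov2)[1]))
    return np.exp(-(term1 + term2))
```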
High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decomposition or factorization, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for estimating the various low-rank structures, which customarily play the role of signals in tensor factor models, is still at an early stage of development. In this paper, starting from tensor matricization, we aim to "decode" the estimation of a higher-order tensor factor model by recasting it into mode-wise traditional high-dimensional vector/fiber factor models, so as to deploy conventional principal component analysis (PCA). Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the most popular Tucker decomposition, we show that the estimators of the signal components are essentially mode-wise PCA techniques, and that involving projection and iteration enhances the signal-to-noise ratio to various extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments under TuTFaM, and illustrate how the proposed estimations work in tensor reconstruction and in clustering for video and economic datasets, respectively.
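A minimal sketch of the mode-wise PCA idea: matricize the tensor along one mode and take the leading eigenvectors of the resulting unfolding's Gram matrix. This omits the projection and iteration refinements discussed above; the function names and interface are hypothetical.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-k matricization: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_pca(T, mode, rank):
    """Estimate the mode-`mode` loading matrix by PCA on the unfolding:
    the top-`rank` eigenvectors of M M^T, where M is the unfolding.
    """
    M = mode_unfold(T, mode)
    evals, evecs = np.linalg.eigh(M @ M.T)  # ascending order
    return evecs[:, ::-1][:, :rank]         # leading eigenvectors
```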
Recently, graph neural networks (GNNs) have been gaining a lot of attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
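As a minimal single-particle sketch of the Hamiltonian inductive bias shared by several of the models above (the graph variants additionally share the learned energy across nodes and edges), one can parameterize a scalar $H(q, p)$ and derive the dynamics from Hamilton's equations; the architecture below is an arbitrary illustration.

```python
import torch
import torch.nn as nn

class HamiltonianDynamics(nn.Module):
    """Learn a scalar H(q, p) and derive the vector field from
    Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    """
    def __init__(self, dim):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2 * dim, 128), nn.Tanh(),
                               nn.Linear(128, 1))

    def forward(self, q, p):
        q = q.requires_grad_(True)
        p = p.requires_grad_(True)
        H = self.H(torch.cat([q, p], dim=-1)).sum()
        # Autograd gives exact partial derivatives of the learned energy.
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq  # (dq/dt, dp/dt)
```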
Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently also entered the domain of agriculture. In this paper, we survey 40 research efforts that employ deep learning techniques applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques, with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.