An increasingly common viewpoint is that protein dynamics data sets reside in a non-linear subspace of low conformational energy. Ideal data analysis tools for such data sets should therefore account for this non-linear geometry. The Riemannian geometry setting is suitable for a variety of reasons. First, it comes with a rich structure that can account for a wide range of geometries modelled after an energy landscape. Second, many standard data analysis tools initially developed for data in Euclidean space can be generalised to data on a Riemannian manifold. In the context of protein dynamics, a conceptual challenge comes from the lack of a suitable smooth manifold and of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computational feasibility in computing geodesics and related mappings poses a major challenge. This work addresses both challenges. The first part of the paper develops a novel local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold of point clouds modulo rigid body group actions, together with a Riemannian structure based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant to protein dynamics data. It performs exceptionally well on data from coarse-grained molecular dynamics simulations. In particular, geodesics with given start and end points approximately recover corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium-sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension, even for large deformations, within seconds on a laptop.
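The idea of geodesics under an energy-dependent metric can be illustrated in miniature. The following is a minimal numpy sketch, not the paper's method: it minimizes a discretized path energy under a toy conformal metric g(x) = exp(E(x))·I, where E is a hypothetical stand-in for an energy landscape, using finite-difference gradient descent with a backtracking step size.

```python
import numpy as np

def path_energy(path, energy):
    # Discrete path energy under a conformal metric g(x) = exp(energy(x)) * I:
    # sum over segments of exp(energy(midpoint)) * squared segment length.
    e = 0.0
    for a, b in zip(path[:-1], path[1:]):
        e += np.exp(energy(0.5 * (a + b))) * np.sum((b - a) ** 2)
    return e

def discrete_geodesic(x0, x1, energy, n_inner=8, n_iter=200, lr=0.05, h=1e-5):
    # Start from the straight line and gradient-descend the interior points;
    # gradients via central finite differences, steps accepted only when the
    # path energy strictly decreases (so the energy is monotone).
    ts = np.linspace(0.0, 1.0, n_inner + 2)[:, None]
    path = (1 - ts) * x0[None, :] + ts * x1[None, :]
    for _ in range(n_iter):
        grad = np.zeros_like(path)
        for i in range(1, n_inner + 1):
            for d in range(path.shape[1]):
                plus, minus = path.copy(), path.copy()
                plus[i, d] += h
                minus[i, d] -= h
                grad[i, d] = (path_energy(plus, energy)
                              - path_energy(minus, energy)) / (2 * h)
        trial = path.copy()
        trial[1:-1] -= lr * grad[1:-1]   # endpoints stay fixed
        if path_energy(trial, energy) < path_energy(path, energy):
            path = trial
        else:
            lr *= 0.5
    return path
```

For a flat landscape the straight line is already a stationary point and is returned unchanged; with an energy bump between the endpoints, the path bends away from the high-energy region.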
Certifying the positivity of trigonometric polynomials is of first importance for design problems in discrete-time signal processing. It is well known from the Riesz-Fej\'er spectral factorization theorem that any univariate trigonometric polynomial positive on the unit circle can be decomposed as a Hermitian square with complex coefficients. Here we focus on the case of polynomials with Gaussian integer coefficients, i.e., with real and imaginary parts being integers. We design, analyze and compare, theoretically and practically, three hybrid numeric-symbolic algorithms computing weighted sums of Hermitian squares decompositions for univariate trigonometric polynomials positive on the unit circle with Gaussian integer coefficients. The numerical steps that the first and second algorithms rely on are complex root isolation and semidefinite programming, respectively. An exact sum of Hermitian squares decomposition is then obtained thanks to compensation techniques. The third algorithm, also based on complex semidefinite programming, is an adaptation of the rounding and projection algorithm of Peyrl and Parrilo. For all three algorithms, we prove bit complexity and output size estimates that are polynomial in the degree of the input and linear in the maximum bitsize of its coefficients. We compare their performance on randomly chosen benchmarks, and further design a certified finite impulse response filter.
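The root-isolation route behind the first algorithm can be sketched in floating point (without the exact compensation step the paper performs): lift R to a degree-2n polynomial P(z) = z^n R(z), whose roots come in pairs (r, 1/conj(r)), and build the Hermitian-square factor from the roots inside the unit disk.

```python
import numpy as np

def spectral_factor(p):
    # Numeric Riesz-Fejer factorization sketch. `p` holds the coefficients of
    # P(z) = z^n * R(z), degree 2n, highest power first, Hermitian symmetric;
    # R > 0 on the unit circle is assumed, so no roots lie on |z| = 1.
    r = np.roots(p)
    inside = r[np.abs(r) < 1.0]           # one root from each pair (r, 1/conj(r))
    q = np.poly(inside)                   # monic factor with the inside roots
    # Rescale so that q * conj(reversed q) reproduces p (match the real,
    # positive middle coefficients, i.e. the mean values of |Q|^2 and R).
    a = np.convolve(q, np.conj(q[::-1]))
    scale = np.sqrt(np.real(p[len(p) // 2]) / np.real(a[len(a) // 2]))
    return scale * q
```

The factor Q satisfies R(θ) = |Q(e^{iθ})|², up to floating-point error; the paper's compensation techniques are what turn such an approximate factorization into an exact certificate over the Gaussian integers.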
Recently, advances in deep learning-based superpixel segmentation have improved both the efficiency and the performance of segmentation. However, a significant challenge remains in generating superpixels that strictly adhere to object boundaries while conveying rich visual significance, especially when cross-surface color correlations interfere with object perception. Drawing inspiration from neural structure and visual mechanisms, we propose a biologically inspired network architecture comprising an Enhanced Screening Module (ESM) and a novel Boundary-Aware Label (BAL) for superpixel segmentation. The ESM enhances semantic information by simulating the interactive projection mechanisms of the visual cortex. Additionally, the BAL emulates the spatial frequency characteristics of visual cortical cells to facilitate the generation of superpixels with strong boundary adherence. We demonstrate the effectiveness of our approach through evaluations on both the BSDS500 and NYUv2 datasets.
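Boundary adherence on benchmarks like BSDS500 is commonly scored with boundary recall. A minimal numpy sketch of that standard metric (an evaluation aid, not part of the proposed architecture):

```python
import numpy as np

def boundary_map(labels):
    # Mark pixels whose right or bottom neighbor carries a different label.
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(sp_labels, gt_labels, r=1):
    # Fraction of ground-truth boundary pixels that have a superpixel
    # boundary within a (2r+1) x (2r+1) window (BSDS-style boundary recall).
    sp_b, gt_b = boundary_map(sp_labels), boundary_map(gt_labels)
    pad = np.pad(sp_b, r)
    ys, xs = np.nonzero(gt_b)
    hits = sum(1 for y, x in zip(ys, xs)
               if pad[y:y + 2 * r + 1, x:x + 2 * r + 1].any())
    return hits / max(len(ys), 1)
```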
In clinical and biomedical research, multiple high-dimensional datasets are nowadays routinely collected from omics and imaging devices. Multivariate methods, such as Canonical Correlation Analysis (CCA), integrate two (or more) datasets to discover and understand underlying biological mechanisms. For an explorative method like CCA, interpretation is key. We present a sparse CCA method based on soft-thresholding that produces near-orthogonal components, allows browsing over various sparsity levels, and supports permutation-based hypothesis testing. Our soft-thresholding approach avoids tuning a penalty parameter; such tuning is computationally burdensome and may render unintelligible results. In addition, unlike alternative approaches, our method is less dependent on the initialisation. We examined the performance of our approach in simulations and illustrated its use on real cancer genomics data from drug sensitivity screens. Moreover, we compared its performance to Penalised Matrix Analysis (PMA), a popular sparse CCA alternative focused on yielding interpretable results. Compared to PMA, our method offers improved interpretability of the results, while not compromising, or even improving, signal discovery. The software and simulation framework are available at https://github.com/nuria-sv/toscca.
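The general idea of soft-thresholded sparse CCA can be sketched with alternating power iterations on the empirical cross-covariance. This is a generic sketch, not the toscca implementation, and the threshold value is a hypothetical sparsity level one would browse over in practice.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding operator: shrink towards zero by t, clip at zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_cca(X, Y, thresh=0.1, n_iter=100):
    # Alternating power iterations on C = X'Y / n with soft-thresholding of
    # the canonical weight vectors after each half-step.
    C = X.T @ Y / len(X)
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])   # deterministic init
    u = np.zeros(C.shape[0])
    for _ in range(n_iter):
        u = soft(C @ v, thresh)
        u /= np.linalg.norm(u) + 1e-12
        v = soft(C.T @ u, thresh)
        v /= np.linalg.norm(v) + 1e-12
    return u, v
```

On data where one feature in each set carries a shared latent signal, the thresholding zeroes out the noise features and the weights concentrate on the signal.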
High-quality samples generated with score-based reverse diffusion algorithms provide evidence that deep neural networks (DNNs) trained for denoising can learn high-dimensional densities, despite the curse of dimensionality. However, recent reports of memorization of the training set raise the question of whether these networks are learning the "true" continuous density of the data. Here, we show that two denoising DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, with a surprisingly small number of training images. This strong generalization demonstrates an alignment of powerful inductive biases in the DNN architecture and/or training algorithm with properties of the data distribution. We analyze these inductive biases, showing that the denoiser performs a shrinkage operation in a basis adapted to the underlying image. Examination of these bases reveals oscillating harmonic structures along contours and in homogeneous image regions. We show that trained denoisers are inductively biased towards these geometry-adaptive harmonic representations by demonstrating that they arise even when the network is trained on image classes supported on low-dimensional manifolds, for which the harmonic basis is suboptimal. Additionally, we show that the denoising performance of the networks is near-optimal when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic.
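The shrinkage view has a closed form in the linear (Gaussian) special case, which the geometry-adaptive harmonic bases above generalize: the MMSE denoiser projects onto the prior's eigenbasis, shrinks each coefficient by a factor depending on the signal-to-noise ratio, and projects back. A minimal sketch:

```python
import numpy as np

def wiener_denoise(y, U, lam, sigma):
    # MMSE denoiser for a zero-mean Gaussian prior with covariance
    # U @ diag(lam) @ U.T under additive noise of variance sigma^2:
    # each coefficient of y in the basis U is shrunk by lam_k / (lam_k + sigma^2).
    shrink = lam / (lam + sigma ** 2)
    return U @ (shrink * (U.T @ y))
```

High-variance directions (lam_k >> sigma^2) pass through nearly unchanged, while low-variance directions are suppressed towards zero; for real images the effective basis is signal-adaptive rather than fixed.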
Agent-based models are widely used to predict infectious disease spread. For these predictions, one needs to understand how each input parameter affects the result. Here, some parameters may affect the sensitivities of others, requiring the analysis of higher-order coefficients through, e.g., Sobol sensitivity analysis. The geographical structures of real-world regions are distinct in that they are difficult to reduce to single parameter values, making a unified sensitivity analysis intractable. Yet analyzing the importance of geographical structure for the sensitivity of other input parameters is important, because a strong effect would justify the use of models with real-world geographical representations, as opposed to stylized ones. Here we perform a grouped Sobol sensitivity analysis on COVID-19 spread simulations across a set of three diverse real-world geographical representations. We study the differences in both results and the sensitivity of non-geographical parameters across these geographies. By comparing Sobol indices of parameters across geographies, we find evidence that the infection rate may be more sensitive in regions where the population is segregated, while parameters like the recovery period of mild cases are more sensitive in regions with mixed populations. We also show how geographical structure affects the way parameter sensitivities change over time.
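A plain (ungrouped) first-order Sobol index can be estimated with the standard pick-and-freeze scheme; the grouped analysis used above works the same way but treats blocks of parameters, such as the geography, as single factors. A minimal numpy sketch on a toy model:

```python
import numpy as np

def first_order_sobol(f, d, n=100000, seed=0):
    # Pick-and-freeze (Saltelli-style) estimator of first-order Sobol
    # indices for f on [0, 1]^d. AB_i takes column i from B, the rest from A;
    # E[f(B) * (f(AB_i) - f(A))] estimates the partial variance V_i.
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return S
```

For the additive toy model f(x) = 2 x1 + x2 with uniform inputs, the exact indices are 0.8 and 0.2, which the estimator recovers to Monte Carlo accuracy.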
The elapsed time equation is an age-structured model that describes the dynamics of interconnected spiking neurons through the elapsed time since the last discharge, leading to many interesting questions on the evolution of the system from a mathematical and biological point of view. In this work, we deal both with the case when transmission after a spike is instantaneous and with the case when there is a distributed delay depending on the previous history of the system, which is a more realistic assumption. We then study the well-posedness and the numerical analysis of these elapsed time models. For existence and uniqueness we improve on previous works by relaxing some hypotheses on the nonlinearity, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the non-linear problem. We also present numerical simulations comparing the behavior of the system under instantaneous transmission with that under distributed delay for different parameters, leading to solutions with different asymptotic profiles.
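One step of the explicit upwind scheme for the instantaneous-transmission model can be sketched as follows, with a hypothetical constant hazard rate p and the discharge flux re-entering at elapsed time s = 0 (a simplified illustration of the discretization, not the paper's full analysis):

```python
import numpy as np

def upwind_step(n, s, p):
    # One explicit upwind step (dt = ds, CFL = 1) for the elapsed-time model
    #   d_t n + d_s n = -p(s) n,   n(t, 0) = N(t) = integral of p(s) n(t, s) ds,
    # on a uniform grid s. n and p are arrays sampled on that grid.
    ds = s[1] - s[0]
    dt = ds
    N = np.sum(p * n) * ds                     # total discharge flux
    new = np.empty_like(n)
    new[1:] = n[1:] - dt / ds * (n[1:] - n[:-1]) - dt * p[1:] * n[1:]
    new[0] = N                                 # neurons reset to s = 0
    return new
```

Because discharged mass re-enters at s = 0, total mass is conserved up to discretization and domain-truncation error, which the test below checks over many steps.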
Fitted finite element methods are constructed for a singularly perturbed convection-diffusion problem in two space dimensions. Exponential splines as basis functions are combined with Shishkin meshes to obtain a stable, parameter-uniform numerical method. These schemes satisfy a discrete maximum principle. In the classical case, the numerical approximations converge at second order in the maximum pointwise norm; for all values of the singular perturbation parameter, the approximations converge at first order.
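The piecewise-uniform Shishkin mesh underlying such schemes is simple to construct: half the mesh points are squeezed into a thin layer region whose width depends on the perturbation parameter. A 1-D sketch for a single boundary layer at x = 1 (the 2-D mesh is its tensor product; the constant sigma = 2 is a typical but assumed choice):

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    # Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at x = 1.
    # Transition point tau = min(1/2, sigma * eps * ln N); N must be even.
    # N/2 uniform cells on [0, 1 - tau] and N/2 on [1 - tau, 1].
    tau = min(0.5, sigma * eps * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])
```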
Preference-based optimization algorithms are iterative procedures that seek the optimal calibration of a decision vector based only on comparisons between pairs of different tunings. At each iteration, a human decision-maker expresses a preference between two calibrations (samples), highlighting which one, if any, is better than the other. The optimization procedure must use the observed preferences to find the tuning of the decision vector that is most preferred by the decision-maker, while also minimizing the number of comparisons. In this work, we formulate the preference-based optimization problem from a utility theory perspective. Then, we propose GLISp-r, an extension of a recent preference-based optimization procedure called GLISp. The latter uses a Radial Basis Function surrogate to describe the tastes of the decision-maker. Iteratively, GLISp proposes new samples to compare with the best calibration available by trading off exploitation of the surrogate model and exploration of the decision space. In GLISp-r, we propose a different criterion for selecting new candidate samples that is inspired by MSRS, a popular procedure in the black-box optimization framework. Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem. We motivate this claim theoretically, with a proof of global convergence, and empirically, by comparing the performance of GLISp and GLISp-r on several benchmark optimization problems.
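The exploitation-exploration trade-off in MSRS-style acquisition can be sketched generically: score random candidates by a convex combination of the rescaled surrogate prediction and a rescaled proximity-to-known-samples term. This is a generic illustration, not GLISp-r's actual criterion; the weight and candidate count are assumed values.

```python
import numpy as np

def next_sample(surrogate, X_known, bounds, weight=0.5, n_cand=1000, seed=0):
    # weight = 1: pure exploitation (minimize the surrogate);
    # weight = 0: pure exploration (maximize distance to known samples).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    cand = rng.uniform(lo, hi, size=(n_cand, X_known.shape[1]))
    s = surrogate(cand)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)         # scaled value
    d = np.min(np.linalg.norm(cand[:, None, :] - X_known[None, :, :],
                              axis=2), axis=1)
    d = 1.0 - (d - d.min()) / (d.max() - d.min() + 1e-12)   # scaled proximity
    return cand[np.argmin(weight * s + (1.0 - weight) * d)]
```

Cycling the weight across iterations (as MSRS-style methods do) alternates between refining the current optimum and probing unexplored regions, which is what reduces the risk of stalling at a local optimum.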
Network motifs are recurrent, small-scale patterns of interactions observed frequently in a system. They shed light on the interplay between the topology and the dynamics of complex networks across various domains. In this work, we focus on the problem of counting occurrences of small sub-hypergraph patterns in very large hypergraphs, where higher-order interactions connect arbitrary numbers of system units. We show how directly exploiting higher-order structures speeds up the counting process compared to traditional data mining techniques for exact motif discovery. Moreover, with hyperedge sampling, performance is further improved at the cost of small errors in the estimation of motif frequency. We evaluate our method on several real-world datasets describing face-to-face interactions, co-authorship and human communication. We show that our approximated algorithm allows us to extract higher-order motifs faster and on a larger scale, beyond the computational limits of an exact approach.
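The exact-versus-sampled trade-off can be illustrated on the simplest sub-hypergraph pattern, a pair of intersecting hyperedges (real higher-order motifs involve more hyperedges, but the estimator has the same shape): count exactly over all pairs, or sample pairs uniformly and rescale.

```python
import itertools
import random

def exact_overlaps(H):
    # Exact count of hyperedge pairs sharing at least one node.
    return sum(1 for e, f in itertools.combinations(H, 2) if e & f)

def sampled_overlaps(H, n_samples, seed=0):
    # Estimate the same count by sampling hyperedge pairs uniformly at
    # random and rescaling by the total number of pairs.
    rng = random.Random(seed)
    m = len(H)
    total_pairs = m * (m - 1) // 2
    hits = 0
    for _ in range(n_samples):
        i, j = rng.sample(range(m), 2)
        if H[i] & H[j]:
            hits += 1
    return hits / n_samples * total_pairs
```

The estimator is unbiased, and its relative error shrinks with the number of samples, which is the source of the speed/accuracy trade-off mentioned above.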
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval for its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close together. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
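A loss that encourages similar images to be projected close together is commonly written as a pairwise likelihood over code inner products; the numpy sketch below shows that generic form (a standard deep-supervised-hashing loss, not SRH's actual loss).

```python
import numpy as np

def pairwise_hash_loss(codes, sim):
    # Pairwise negative log-likelihood for hashing. codes: n x L matrix of
    # relaxed binary codes in [-1, 1]^L; sim: n x n 0/1 similarity labels.
    # With theta_ij = <b_i, b_j> / 2 and p(sim_ij = 1) = sigmoid(theta_ij),
    # the loss is mean(log(1 + exp(theta)) - sim * theta).
    theta = codes @ codes.T / 2.0
    return float(np.mean(np.logaddexp(0.0, theta) - sim * theta))
```

Minimizing this pushes inner products of similar pairs up (codes agree bitwise) and of dissimilar pairs down, so the loss is lower when the labels match the code geometry.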