Positron Emission Tomography (PET) enables functional imaging of deep brain structures, but the bulk and weight of current systems preclude their use during many natural human activities, such as locomotion. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head, and then move the system to accommodate natural motion. This requires a system to measure the motion of the head with respect to the imaging ring, for use by both the robotic system and the image reconstruction software. We report here the design, calibration, and experimental evaluation of a parallel string encoder mechanism for sensing this motion. Our results indicate that with kinematic calibration, the measurement system can achieve accuracy within 0.5 mm, especially for small motions.
We study the complexity (that is, the weight of the multiplication table) of the elliptic normal bases introduced by Couveignes and Lercier. We give an upper bound on the complexity of these elliptic normal bases, and we analyze the weight of some special vectors related to the multiplication table of those bases. This analysis leads us to some perspectives on the search for low complexity normal bases from elliptic periods.
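For reference, for a normal basis $\{\alpha, \alpha^q, \dots, \alpha^{q^{n-1}}\}$ of $\mathbb{F}_{q^n}$ over $\mathbb{F}_q$, writing $\alpha\,\alpha^{q^i} = \sum_{j=0}^{n-1} t_{ij}\,\alpha^{q^j}$ with $t_{ij}\in\mathbb{F}_q$, the complexity is the number of nonzero entries $t_{ij}$, i.e. the weight of the multiplication table $T=(t_{ij})$; this is the standard definition, restated here for convenience, and is not specific to the elliptic construction.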
Recently, a family of unconventional integrators for ODEs with polynomial vector fields was proposed, based on the polarization of vector fields. The simplest instance is the now-famous Kahan discretization for quadratic vector fields. All these integrators seem to possess remarkable conservation properties. In particular, it has been proved that, when the underlying ODE is Hamiltonian, its polarization discretization possesses an integral of motion and an invariant volume form. In this note, we propose a new algebraic approach to the derivation of integrals of motion for polarization discretizations.
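For concreteness, recall the standard form of the Kahan discretization (stated here for reference, not as a result of this note): for a quadratic vector field $\dot x = Q(x) + Bx + c$, where $Q$ is homogeneous quadratic with symmetric bilinear polarization $\bar Q(x,y)$ satisfying $\bar Q(x,x) = Q(x)$, the Kahan map with step size $h$ is defined implicitly by $\frac{x_{n+1}-x_n}{h} = \bar Q(x_n, x_{n+1}) + \tfrac{1}{2}B(x_n + x_{n+1}) + c$.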
It is well known that for singular inconsistent range-symmetric linear systems, the generalized minimal residual (GMRES) method determines a least squares solution without breakdown. The least squares solution reached, however, may or may not be the pseudoinverse solution. We show that a lift strategy can be used to obtain the pseudoinverse solution. In addition, we propose a new iterative method named RSMAR (minimum $\mathbf A$-residual) for range-symmetric linear systems $\mathbf A\mathbf x=\mathbf b$. At step $k$, RSMAR minimizes $\|\mathbf A\mathbf r_k\|$ in the $k$th Krylov subspace generated with $\{\mathbf A, \mathbf r_0\}$ rather than $\|\mathbf r_k\|$, where $\mathbf r_k$ is the $k$th residual vector and $\|\cdot\|$ denotes the Euclidean vector norm. We show that RSMAR and GMRES terminate with the same least squares solution when applied to range-symmetric linear systems. We provide two implementations of RSMAR. Our numerical experiments show that RSMAR is the most suitable method among GMRES-type methods for singular inconsistent range-symmetric linear systems.
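As a deliberately naive illustration of the minimization that defines RSMAR, the sketch below builds an orthonormal Krylov basis $\mathbf V_k$ and solves the small dense least squares problem $\min_y \|\mathbf A\mathbf r_0 - \mathbf A^2\mathbf V_k y\|$. It is not one of the two implementations proposed in the paper, and the function names are ours.

```python
import numpy as np

def krylov_basis(A, r0, k):
    """Orthonormal basis of the Krylov subspace K_k(A, r0) via Arnoldi (MGS)."""
    V = np.zeros((r0.size, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for i in range(j):
            w = w - (V[:, i] @ w) * V[:, i]
        nw = np.linalg.norm(w)
        if nw < 1e-14:
            return V[:, :j]          # subspace exhausted
        V[:, j] = w / nw
    return V

def rsmar_step(A, b, x0, k):
    """Naive dense illustration of the RSMAR objective:
    x_k = x0 + V_k y with y = argmin ||A (b - A (x0 + V_k y))||_2."""
    r0 = b - A @ x0
    V = krylov_basis(A, r0, k)
    # A r_k = A r0 - A^2 V y, so minimize ||A^2 V y - A r0|| over y
    y, *_ = np.linalg.lstsq(A @ (A @ V), A @ r0, rcond=None)
    return x0 + V @ y
```

In practice one would update the projected problem incrementally step by step rather than re-solving a dense least squares problem from scratch as above.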
We study the existence and uniqueness of $L^p$-bounded mild solutions for a class of semilinear stochastic evolution equations driven by a real Lévy process without Gaussian component that is not square integrable, for instance a stable process. The approach uses a truncation method separating the big and small jumps, together with the classical Banach fixed point theorem, under local Lipschitz, Hölder, and linear growth conditions on the coefficients.
The Concordance Index (C-index) is a commonly used metric in Survival Analysis for evaluating the performance of a prediction model. In this paper, we propose a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a finer-grained analysis of the relative strengths and weaknesses of different survival prediction methods. Its usefulness is demonstrated through benchmark comparisons against classical models and state-of-the-art methods, together with the new variational generative neural-network-based method (SurVED) proposed in this paper. The performance of the models is assessed using four publicly available datasets with varying levels of censoring. Using the C-index decomposition and synthetic censoring, the analysis shows that deep learning models utilize the observed events more effectively than other models, which allows them to maintain a stable C-index at different censoring levels. In contrast, classical machine learning models deteriorate as the censoring level decreases, owing to their inability to improve at ranking events versus other events.
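The pair-level bookkeeping behind such a decomposition can be sketched in a few lines: comparable pairs are split into event-versus-event and event-versus-censored pairs, each yielding its own concordance fraction, and the overall C-index is recovered as a harmonic mean of the two weighted by the concordant-pair counts. The snippet below illustrates this idea under that particular weighting (ties and the exact weights used in the paper are not reproduced), and the function name is ours.

```python
import numpy as np

def c_index_split(time, event, risk):
    """Split the C-index into event-vs-event and event-vs-censored parts.

    time  : observed times
    event : 1 = event observed, 0 = censored
    risk  : predicted risk scores (higher = earlier predicted event)
    """
    conc = {"ee": 0, "ec": 0}   # concordant pairs per group
    comp = {"ee": 0, "ec": 0}   # comparable pairs per group
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue            # the earlier member of a pair must be an event
        for j in range(n):
            if time[i] < time[j]:
                group = "ee" if event[j] == 1 else "ec"
                comp[group] += 1
                conc[group] += risk[i] > risk[j]
    c_ee = conc["ee"] / comp["ee"]
    c_ec = conc["ec"] / comp["ec"]
    # overall C-index, and the same number written as a weighted harmonic mean
    c_all = (conc["ee"] + conc["ec"]) / (comp["ee"] + comp["ec"])
    c_hm = (conc["ee"] + conc["ec"]) / (conc["ee"] / c_ee + conc["ec"] / c_ec)
    return c_ee, c_ec, c_all, c_hm   # c_all equals c_hm up to rounding
```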
Dynamical systems across the sciences, from electrical circuits to ecological networks, undergo qualitative and often catastrophic changes in behavior, called bifurcations, when their underlying parameters cross a threshold. Existing methods predict oncoming catastrophes in individual systems but are primarily time-series-based and struggle both to categorize qualitative dynamical regimes across diverse systems and to generalize to real data. To address this challenge, we propose a data-driven, physically informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries based on the extraction of topologically invariant features. We focus on the paradigmatic case of the supercritical Hopf bifurcation, which is used to model periodic dynamics across a wide range of applications. Our convolutional attention method is trained with data augmentations that encourage the learning of topological invariants, which can be used to detect bifurcation boundaries in unseen systems and to design models of biological systems such as oscillatory gene regulatory networks. We further demonstrate our method's use in analyzing real data by recovering distinct proliferation and differentiation dynamics along the pancreatic endocrinogenesis trajectory in gene expression space from single-cell data. Our method provides valuable insights into the qualitative, long-term behavior of a wide range of dynamical systems, and can detect bifurcations or catastrophic transitions in large-scale physical and biological systems.
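For orientation, the supercritical Hopf bifurcation is described by the textbook normal form in polar coordinates, $\dot r = r(\mu - r^2)$, $\dot\theta = \omega$: for $\mu < 0$ the origin is a stable equilibrium, while for $\mu > 0$ it loses stability and a stable limit cycle of radius $\sqrt{\mu}$ emerges. This standard form is stated here only for reference and is not one of the training systems used in the paper.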
Study Objectives: Polysomnography (PSG) currently serves as the benchmark for evaluating sleep disorders. Its discomfort, impracticality for home use, and introduction of bias in sleep quality assessment necessitate the exploration of less invasive, cost-effective, and portable alternatives. One promising contender is the in-ear-EEG sensor, which offers advantages in terms of comfort, fixed electrode positions, resistance to electromagnetic interference, and user-friendliness. This study aims to establish a methodology to assess the similarity between the in-ear-EEG signal and standard PSG. Methods: We assess the agreement between the PSG- and in-ear-EEG-derived hypnograms. We extract features in the time and frequency domains from PSG and in-ear-EEG 30-second epochs, considering only the epochs where the PSG scorers and the in-ear-EEG scorers were in agreement. We introduce a methodology to quantify the similarity between PSG derivations and the single-channel in-ear-EEG. The approach relies on a comparison of distributions of selected features, extracted for each sleep stage and subject on both the PSG and the in-ear-EEG signals, via a Jensen-Shannon Divergence Feature-based Similarity Index (JSD-FSI). Results: We found a high intra-scorer variability, mainly due to the uncertainty the scorers had in evaluating the in-ear-EEG signals. We show that the similarity between PSG and in-ear-EEG signals is high (JSD-FSI: 0.61 +/- 0.06 for wake, 0.60 +/- 0.07 for NREM and 0.51 +/- 0.08 for REM), and in line with the similarity values computed independently on standard PSG channel combinations. Conclusions: In-ear-EEG is a valuable solution for home-based sleep monitoring; however, further studies with a larger and more heterogeneous dataset are needed.
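The basic ingredient of such an index can be sketched as follows: estimate the distribution of a given feature on the two signals (here via histograms on a common support) and convert the Jensen-Shannon divergence between them into a similarity score. The exact JSD-FSI definition, including how scores are aggregated over features, sleep stages and subjects, is the paper's; the histogramming, the base-2 convention and the $1 - \mathrm{JSD}$ mapping below are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd_similarity(feature_a, feature_b, bins=30):
    """JSD-based similarity between the distributions of one feature
    computed on two signals (e.g. a PSG derivation and the in-ear EEG).

    Returns a value in [0, 1]; 1 means identical estimated distributions.
    Illustrative only; the JSD-FSI of the paper aggregates such terms
    over features, sleep stages and subjects.
    """
    lo = min(feature_a.min(), feature_b.min())
    hi = max(feature_a.max(), feature_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(feature_a, bins=edges)
    q, _ = np.histogram(feature_b, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    # scipy returns the JS distance (sqrt of the divergence); base 2 keeps it in [0, 1]
    jsd = jensenshannon(p, q, base=2) ** 2
    return 1.0 - jsd
```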
This paper analyzes the stability of the class of Time-Accurate and Highly-Stable Explicit Runge-Kutta (TASE-RK) methods, introduced in 2021 by Bassenne et al. (J. Comput. Phys.) for the numerical solution of stiff Initial Value Problems (IVPs). Such numerical methods are easy to implement and require the solution of a limited number of linear systems per step, whose coefficient matrices involve the exact Jacobian $J$ of the problem. To significantly reduce the computational cost of TASE-RK methods without altering their consistency properties, it is possible to replace $J$ with a matrix $A$ (not necessarily tied to $J$) in their formulation, for instance fixed for a certain number of consecutive steps or even constant. However, the stability properties of TASE-RK methods strongly depend on this choice, and have so far been studied only under the assumption $A=J$. In this manuscript, we theoretically investigate the conditional and unconditional stability of TASE-RK methods for arbitrary $A$. To this end, we first split the Jacobian as $J=A+B$. Then, through the use of stability diagrams and their connections with the field of values, we analyze both the case in which $A$ and $B$ are simultaneously diagonalizable and the case in which they are not. Numerical experiments, conducted on Partial Differential Equations (PDEs) arising from applications, show the correctness and utility of the theoretical results derived in the paper, as well as the good stability and efficiency of TASE-RK methods when $A$ is suitably chosen.
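The field of values (numerical range) $W(M) = \{x^* M x : \|x\| = 1\}$ mentioned above can be approximated numerically with the standard rotated-Hermitian-part construction; the sketch below is a generic utility of this kind, not code from the paper, and the function name is ours.

```python
import numpy as np

def field_of_values_boundary(M, n_angles=180):
    """Approximate boundary points of the field of values W(M) = {x^* M x : ||x|| = 1}.

    Standard approach: for each rotation angle theta, the eigenvector of the
    largest eigenvalue of the Hermitian part of exp(1j*theta)*M gives a
    boundary point of W(M).
    """
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        R = np.exp(1j * theta) * M
        H = (R + R.conj().T) / 2          # Hermitian part of the rotated matrix
        _, V = np.linalg.eigh(H)          # eigenvalues in ascending order
        x = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ M @ x)      # point on the boundary of W(M)
    return np.array(pts)
```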
We consider the problem of efficiently simulating stochastic models of chemical kinetics. The Gillespie Stochastic Simulation Algorithm (SSA) is often used to simulate these models; however, in many scenarios of interest, its computational cost quickly becomes prohibitive. This is further exacerbated in the Bayesian inference context when estimating parameters of chemical models, as the intractability of the likelihood requires multiple simulations of the underlying system. To deal with these issues of computational complexity, in this paper we propose a novel hybrid $\tau$-leap algorithm for simulating well-mixed chemical systems. In particular, the algorithm uses $\tau$-leaping when appropriate (high population densities) and SSA when necessary (low population densities, when discrete effects become non-negligible). In the intermediate regime, a combination of the two methods, which leverages the properties of the underlying Poisson formulation, is employed. As illustrated through a number of numerical experiments, the hybrid $\tau$-leap algorithm offers significant computational savings compared with SSA without, however, sacrificing overall accuracy. This feature is particularly welcome in the Bayesian inference context, as it allows for parameter estimation of stochastic chemical kinetics at reduced computational cost.
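The core switching logic can be sketched for a toy birth-death system: an exact SSA step is taken whenever the population is small enough for discrete effects to matter, and a Poisson $\tau$-leap step otherwise. The intermediate regime that blends the two methods via the underlying Poisson formulation is specific to the paper and is omitted here; the reactions, threshold, rates and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def propensities(x, k_prod, k_deg):
    """Birth-death system: 0 -> X at rate k_prod, X -> 0 at rate k_deg * x."""
    return np.array([k_prod, k_deg * x])

def hybrid_birth_death(x0, t_end, k_prod=10.0, k_deg=0.1, tau=0.05, threshold=30):
    """Toy hybrid simulation: exact SSA when the population is small
    (discrete effects matter), Poisson tau-leaping when it is large."""
    t, x = 0.0, x0
    while t < t_end:
        a = propensities(x, k_prod, k_deg)
        a0 = a.sum()
        if x < threshold:                       # SSA regime
            t += rng.exponential(1.0 / a0)      # time to the next reaction
            if rng.random() < a[0] / a0:
                x += 1                          # production fired
            else:
                x -= 1                          # degradation fired
        else:                                   # tau-leap regime
            n_prod = rng.poisson(a[0] * tau)    # Poisson firing counts over tau
            n_deg = rng.poisson(a[1] * tau)
            x = max(x + n_prod - n_deg, 0)
            t += tau
    return x
```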
We develop a novel and efficient discontinuous Galerkin spectral element method (DG-SEM) for the spherical rotating shallow water equations in vector invariant form. We prove that the DG-SEM is energy stable and discretely conserves mass, vorticity, and linear geostrophic balance on general curvilinear meshes. These theoretical results are possible due to our novel entropy stable numerical DG fluxes for the shallow water equations in vector invariant form. We verify these results experimentally on a cubed sphere mesh. Additionally, we show that our method is robust, that is, it can be run stably without any dissipation. The entropy stable fluxes are sufficient to control the grid scale noise generated by geostrophic turbulence without the need for artificial stabilisation.
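For reference, the vector invariant form of the rotating shallow water equations (flat bottom) is commonly written as $\partial_t \mathbf{u} + (\zeta + f)\,\hat{\mathbf{k}}\times\mathbf{u} + \nabla\!\left(gh + \tfrac{1}{2}|\mathbf{u}|^2\right) = 0$ together with $\partial_t h + \nabla\cdot(h\mathbf{u}) = 0$, where $\zeta = \hat{\mathbf{k}}\cdot(\nabla\times\mathbf{u})$ is the relative vorticity, $f$ the Coriolis parameter, $h$ the fluid depth and $g$ the gravitational acceleration; this standard statement is given for orientation and the paper's notation may differ.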