Longitudinal fMRI datasets hold great promise for the study of neurodegenerative diseases, but realizing their potential depends on extracting accurate fMRI-based brain measures in individuals over time. This is especially true for rare, heterogeneous and/or rapidly progressing diseases, which often involve small samples whose functional features may vary dramatically across subjects and over time, making traditional group-difference analyses of limited utility. One such disease is amyotrophic lateral sclerosis (ALS), which results in extreme motor function loss and eventual death. Here, we analyze a rich longitudinal dataset containing 190 motor task fMRI scans from 16 ALS patients and 22 age-matched healthy controls (HCs). We propose a novel longitudinal extension to our cortical surface-based spatial Bayesian GLM, which has high power and precision to detect activations in individuals. Using a series of longitudinal mixed-effects models to subsequently study the relationship between activation and disease progression, we observe an inverted U-shaped trajectory: at relatively mild disability we observe enlarging activations, while at higher disability we observe severely diminished activation, reflecting progression toward complete motor function loss. We observe distinct trajectories depending on clinical progression rate, with faster progressors exhibiting more extreme hyper-activation and subsequent hypo-activation. These differential trajectories suggest that the initial hyper-activation is likely attributable to loss of inhibitory neurons. By contrast, earlier studies employing more limited sampling designs and traditional group-difference analysis approaches were only able to observe the initial hyper-activation, which was assumed to reflect a compensatory process. This study provides a first example of how surface-based spatial Bayesian modeling furthers scientific understanding of neurodegenerative disease.
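The inverted-U relationship described above can be illustrated with a simplified quadratic fixed-effects fit on synthetic data; this is a hedged sketch only (the study itself uses full longitudinal mixed-effects models, and the variable names and numbers below are illustrative assumptions, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inverted-U: activation rises at mild disability, then collapses.
# All values are illustrative, not from the ALS study.
disability = rng.uniform(0, 10, size=200)             # hypothetical 0-10 severity score
activation = (4.0 + 1.2 * disability - 0.15 * disability**2
              + rng.normal(0, 0.5, size=200))         # quadratic trend + noise

# Quadratic fit; np.polyfit returns coefficients in descending powers.
b2, b1, b0 = np.polyfit(disability, activation, deg=2)
peak = -b1 / (2 * b2)          # disability level of maximal activation
print(f"quadratic coefficient {b2:.3f}, activation peaks at disability {peak:.1f}")
```

A negative quadratic coefficient with an interior peak is the signature of the inverted-U trajectory; a mixed-effects version would additionally include per-subject random intercepts and slopes.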
A Bayesian Discrepancy Test (BDT) is proposed to evaluate the discrepancy of a given hypothesis with respect to the available information (prior law and data). The proposed measure of evidence has the properties of consistency and invariance. After presenting the similarities and differences between the BDT and other Bayesian tests, we proceed with the analysis of several multiparametric case studies, showing the properties of the BDT, among them its conceptual and interpretative simplicity and its ability to deal with complex case studies.
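As a rough illustration, a discrepancy measure of this kind can be computed in closed form for a conjugate normal model. The sketch below assumes, as one common formulation of such measures, that the evidence against a point hypothesis θ₀ is twice the posterior mass between the posterior median and θ₀, i.e. |2F(θ₀) − 1| for posterior CDF F; this specific form and all prior/data values are illustrative assumptions, not necessarily the paper's exact definition.

```python
import numpy as np
from scipy import stats

def discrepancy(posterior_cdf, theta0):
    """Evidence against H: theta = theta0, taken here as twice the
    posterior mass between the posterior median and theta0:
    |2*F(theta0) - 1|, so 0 = no conflict, 1 = maximal conflict."""
    return abs(2.0 * posterior_cdf(theta0) - 1.0)

# Conjugate normal-normal model with known variance (illustrative values).
prior_mean, prior_sd = 0.0, 2.0
sigma = 1.0                                  # known data standard deviation
data = np.array([0.8, 1.2, 1.0, 0.9, 1.1])

n = data.size
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + data.sum() / sigma**2)
posterior = stats.norm(post_mean, np.sqrt(post_var))

d_near = discrepancy(posterior.cdf, theta0=1.0)   # near the posterior median
d_far = discrepancy(posterior.cdf, theta0=3.0)    # far in the posterior tail
print(f"discrepancy at 1.0: {d_near:.3f}, at 3.0: {d_far:.3f}")
```

A hypothesis close to the bulk of the posterior yields a discrepancy near 0, while one deep in the tail yields a value near 1.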
Digitalization in seaports dovetails the IT infrastructure of various actors (e.g., shipping companies, terminals, customs, port authorities) to process complex workflows for shipping containers. The security of these workflows relies not only on the security of each individual actor; actors must also provide additional guarantees to other actors, such as respecting obligations related to received data or checking the integrity of workflows observed so far. This paper analyses the global security requirements (e.g., accountability, confidentiality) of such workflows and decomposes them - according to the way workflow data is stored and distributed - into requirements and obligations for the individual actors. Security mechanisms are presented to satisfy the resulting requirements, which, together with the guarantees of all individual actors, ensure the security of the overall workflow.
We propose a method to detect outliers in empirically observed trajectories on a discrete or discretized manifold modeled by a simplicial complex. Our approach is similar to spectral embeddings such as diffusion maps and Laplacian eigenmaps, which construct vertex embeddings from the eigenvectors of the graph Laplacian associated with low eigenvalues. Here we consider trajectories as edge-flow vectors defined on a simplicial complex, a higher-order generalization of graphs, and use the Hodge 1-Laplacian of the simplicial complex to derive embeddings of these edge-flows. By projecting trajectory vectors onto the eigenspace of the Hodge 1-Laplacian associated with small eigenvalues, we can characterize the behavior of the trajectories relative to the homology of the complex, which corresponds to holes in the underlying space. This enables us to classify trajectories based on simply interpretable, low-dimensional statistics. We show how this technique can single out trajectories that behave (topologically) differently from typical trajectories, and illustrate the performance of our approach on both synthetic and empirical data.
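The projection step can be sketched with plain NumPy on a toy complex: four nodes, five edges, and one filled triangle, leaving a single hole. The edge orientations and the "harmonic fraction" statistic below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Toy simplicial complex: 4 nodes, 5 oriented edges, 1 filled triangle.
# The unfilled cycle 0-2-3 is a "hole" in the underlying space.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
triangles = [(0, 1, 2)]                     # triangle (0,1,2) is filled in

n_nodes = 4
B1 = np.zeros((n_nodes, len(edges)))        # node-to-edge incidence
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0

B2 = np.zeros((len(edges), len(triangles))) # edge-to-triangle incidence
edge_idx = {e: k for k, e in enumerate(edges)}
for t, (i, j, k) in enumerate(triangles):
    B2[edge_idx[(i, j)], t] = 1.0
    B2[edge_idx[(j, k)], t] = 1.0
    B2[edge_idx[(i, k)], t] = -1.0

# Hodge 1-Laplacian; its kernel (the harmonic space) encodes the holes.
L1 = B1.T @ B1 + B2 @ B2.T
eigvals, eigvecs = np.linalg.eigh(L1)
H = eigvecs[:, eigvals < 1e-10]             # harmonic eigenvectors
assert H.shape[1] == 1                      # exactly one hole (cycle 0-2-3)

def harmonic_fraction(flow):
    """Fraction of an edge-flow's energy lying in the harmonic subspace."""
    return np.linalg.norm(H.T @ flow) / np.linalg.norm(flow)

loop_flow = np.array([0., 0., 1., -1., 1.])    # circulates around the hole
grad_flow = B1.T @ np.array([0., 1., 2., 3.])  # gradient of a node potential
print(harmonic_fraction(loop_flow), harmonic_fraction(grad_flow))
```

Flows that circle the hole concentrate their energy in the harmonic subspace, while gradient flows are orthogonal to it, which is the low-dimensional statistic used to separate typical from topologically atypical trajectories.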
Contamination of groundwater in Bangladesh by arsenic (As) and other toxic elements poses a major daily threat to millions of people. Understanding the complex relationships between arsenic and other elements can provide useful insights for mitigating arsenic poisoning in drinking water, and requires multivariate modeling of the elements. However, environmental monitoring of such contaminants often involves a substantial proportion of left-censored observations falling below a minimum detection limit (MDL). This problem motivates us to propose a multivariate spatial Bayesian model for left-censored data to investigate the abundance of arsenic in Bangladesh groundwater and to create spatial maps of the contaminants. Inference about the model parameters is drawn using adaptive Markov chain Monte Carlo (MCMC) sampling. The computation time for the proposed model is of the same order as that of a multivariate Gaussian process model that does not impute the censored values. The proposed method is applied to the arsenic contamination dataset made available by the Bangladesh Water Development Board (BWDB). Spatial maps of arsenic, barium (Ba), and calcium (Ca) concentrations in groundwater are prepared using the posterior predictive means calculated on a fine lattice over Bangladesh. Our results indicate that the Chittagong and Dhaka divisions suffer from excessive concentrations of arsenic, and that only the Rajshahi and Rangpur divisions have safe drinking water based on recommendations by the World Health Organization (WHO).
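The core censoring mechanism can be sketched in a univariate toy: observations below the MDL are only known to lie below it, so each MCMC sweep re-imputes them from a suitably truncated distribution before updating the parameters. This is a minimal illustration of that idea under a flat prior with known variance, not the paper's multivariate spatial model; all numbers are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic left-censored data: values below the MDL are unobserved.
mdl = 1.0
true = rng.normal(loc=1.5, scale=1.0, size=500)
censored = true < mdl
observed = np.where(censored, np.nan, true)

mu, sigma = 2.0, 1.0                     # current parameter state (sigma known)
y = observed.copy()
for _ in range(200):                     # Gibbs-style imputation sweeps
    # Impute censored values from N(mu, sigma^2) truncated to (-inf, mdl].
    b = (mdl - mu) / sigma
    y[censored] = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                                      size=censored.sum(), random_state=rng)
    # Update mu given the completed data (flat prior, known sigma):
    # posterior is N(mean(y), sigma^2 / n).
    mu = y.mean() + rng.normal(0.0, sigma / np.sqrt(y.size))

print(f"estimated mean {mu:.2f} (true 1.5), {censored.sum()} censored values")
```

Because the censored values are imputed rather than discarded or replaced by the MDL, the parameter estimates are not biased upward by the detection limit.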
We propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics- or dynamics-based approaches, where the two components are used disjointly, we synergize the two approaches via dynamics-regulated training. At each timestep, a kinematic model is used to provide a target pose using video evidence and simulation state. Then, a prelearned dynamics model attempts to mimic the kinematic pose in a physics simulator. By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate, for the first time, the ability to estimate physically plausible 3D human-object interactions using a single wearable camera. We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios.
Poor laryngeal muscle coordination that results in abnormal glottal posturing is believed to be a primary etiologic factor in common voice disorders such as non-phonotraumatic vocal hyperfunction. Abnormal activity of antagonistic laryngeal muscles is hypothesized to play a key role in the alteration of normal vocal fold biomechanics that results in the dysphonia associated with such disorders. Current low-order models of the vocal folds are insufficient for testing this hypothesis, since they do not capture the co-contraction of antagonistic laryngeal muscle pairs. To address this limitation, a self-sustained triangular body-cover model with full intrinsic muscle control is introduced. The proposed scheme shows good agreement with prior studies using finite element models, excised larynges, and clinical studies in sustained and time-varying vocal gestures. Simulations of vocal fold posturing obtained with distinct antagonistic muscle activations yield clear differences in kinematic, aerodynamic, and acoustic measures. The proposed tool is deemed sufficiently accurate and flexible for future comprehensive investigations of non-phonotraumatic vocal hyperfunction and other laryngeal motor control disorders.
Probabilistic linear discriminant analysis (PLDA) has broad application in open-set verification tasks such as speaker verification. A key concern for PLDA is that the model is too simple (linear Gaussian) to deal with complicated data; however, this simplicity is itself a major advantage of PLDA, as it leads to desirable generalization. An interesting research question, therefore, is how to improve the modeling capacity of PLDA while retaining its simplicity. This paper presents a decoupling approach, which involves a global model that is simple and generalizable and a local model that is complex and expressive. While the global model holds a bird's-eye view of the entire data, the local model represents the details of individual classes. We conduct a preliminary study in this direction and investigate a simple decoupling model comprising both the global and local models. The new model, which we call decoupled PLDA, is tested on a speaker verification task. Experimental results show that it consistently outperforms the vanilla PLDA when the model operates on raw speaker vectors. However, when the speaker vectors are processed by length normalization, the advantage of decoupled PLDA is largely lost, suggesting future research on non-linear local models.
We develop a new method to find the number of volatility regimes in a nonstationary financial time series by applying unsupervised learning to its volatility structure. We use change point detection to partition a time series into locally stationary segments and then compute a distance matrix between segment distributions. The segments are clustered into a learned number of discrete volatility regimes via an optimization routine. Using this framework, we determine a volatility clustering structure for financial indices, large-cap equities, exchange-traded funds and currency pairs. Our method overcomes the rigid assumptions necessary to implement many parametric regime-switching models, while effectively distilling a time series into several characteristic behaviours. Our results provide a significant simplification of these time series and a strong descriptive analysis of their prior volatility behaviours. Finally, we create and validate a dynamic trading strategy that learns the optimal match between the current distribution of a time series and its past regimes, thereby making online risk-avoidance decisions.
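The segment-distance-then-cluster step might look as follows, with change points taken as given for brevity and the Wasserstein-1 distance on absolute returns standing in for whichever segment distance and optimization routine the method actually uses; this is an illustrative sketch on synthetic data, not the paper's implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)

# Synthetic return series alternating between low- and high-volatility
# segments; change points are assumed known here (the method described
# above detects them automatically).
sigmas = [0.5, 2.0, 0.6, 2.2, 0.4]
segments = [rng.normal(0.0, s, size=300) for s in sigmas]

# Distance matrix between segment distributions (Wasserstein-1 on
# absolute returns; an illustrative choice of distance).
n = len(segments)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(np.abs(segments[i]),
                                                 np.abs(segments[j]))

# Cluster segments into 2 volatility regimes via hierarchical clustering
# on the precomputed (condensed) distance matrix.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print("regime label per segment:", labels)
```

With a clear volatility gap, the three calm segments fall into one regime and the two turbulent segments into the other, recovering the alternating structure from the distance matrix alone.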
In this study, we propose a method to model the local and global features of a drawing/grinding trajectory with hierarchical variational autoencoders (VAEs). By combining two separately trained VAE models in a hierarchical structure, it is possible to generate trajectories that reproduce both local and global features with high fidelity. The hierarchical generation network enables the generation of higher-order trajectories from a relatively small amount of training data. Simulation and experimental results demonstrate the generalization performance of the proposed method. In addition, we confirmed that it is possible to generate new trajectories, never seen during training, by changing the combination of the learned models.
We develop a novel human trajectory prediction system that incorporates scene information (Scene-LSTM) as well as individual pedestrian movement (Pedestrian-LSTM), trained simultaneously within static crowded scenes. We superimpose a two-level grid structure (grid cells and subgrids) on the scene to encode spatial granularity and common human movements. The Scene-LSTM captures commonly traveled paths, which can significantly improve the accuracy of human trajectory prediction in local areas (i.e., grid cells). We further design scene data filters, consisting of a hard filter and a soft filter, to select the relevant scene information in a local region when necessary and combine it with the Pedestrian-LSTM for forecasting a pedestrian's future locations. Experimental results on several publicly available datasets demonstrate that our method outperforms related works and produces more accurate trajectory predictions in different scene contexts.