In dynamic Magnetic Resonance Imaging (MRI), k-space is typically undersampled due to limited scan time, resulting in aliasing artifacts in the image domain. Hence, dynamic MR reconstruction requires modeling not only the spatial frequency components along the x and y directions of k-space but also the temporal redundancy. Most previous works rely on image-domain regularizers (priors) to conduct MR reconstruction. In contrast, we focus on interpolating the undersampled k-space before obtaining images via the Fourier transform. In this work, we connect masked image modeling with k-space interpolation and propose a novel Transformer-based k-space Global Interpolation Network, termed k-GIN. Our k-GIN learns global dependencies among the low- and high-frequency components of 2D+t k-space and uses them to interpolate the unsampled data. Furthermore, we propose a novel k-space Iterative Refinement Module (k-IRM) to enhance the learning of high-frequency components. We evaluate our approach on 92 in-house 2D+t cardiac MR subjects and compare it to MR reconstruction methods with image-domain regularizers. Experiments show that our proposed k-space interpolation method quantitatively and qualitatively outperforms the baseline methods. Importantly, the proposed approach achieves substantially higher robustness and generalizability on highly undersampled MR data.
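For illustration, here is a minimal sketch of the core idea of casting k-space interpolation as masked token modeling with a Transformer encoder. This is not the authors' k-GIN: the column-wise tokenization, masking ratio, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KSpaceInterpolator(nn.Module):
    def __init__(self, n_freq=64, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        # Each k-space line (real+imag parts concatenated) becomes one token.
        self.embed = nn.Linear(2 * n_freq, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2 * n_freq)

    def forward(self, kspace, mask):
        # kspace: (B, T, n_freq) complex; mask: (B, T), 1 = sampled line.
        tokens = torch.cat([kspace.real, kspace.imag], dim=-1)
        tokens = tokens * mask.unsqueeze(-1)        # zero out unsampled lines
        out = self.head(self.encoder(self.embed(tokens)))
        re, im = out.chunk(2, dim=-1)
        return torch.complex(re, im)                # interpolated full k-space

kspace = torch.complex(torch.randn(1, 32, 64), torch.randn(1, 32, 64))
mask = (torch.rand(1, 32) < 0.3).float()            # ~30% of lines sampled
full_k = KSpaceInterpolator()(kspace, mask)
image = torch.fft.ifft2(full_k)                     # back to the image domain
```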
Accurate analytical and numerical modeling of multiscale systems is a daunting task. The need to properly resolve spatial and temporal scales spanning multiple orders of magnitude pushes the limits of both our theoretical models and our computational capabilities. Rigorous upscaling techniques enable efficient computation while bounding and tracking errors, helping to make informed cost-accuracy tradeoffs. The biggest challenges arise when the applicability conditions of upscaled models break down. Here, we present a non-intrusive two-way (iterative bottom-up/top-down) coupled hybrid model, applied to thermal runaway in battery packs, that combines fine-scale and upscaled equations in the same numerical simulation to achieve predictive accuracy while limiting computational costs. First, we develop two methods, with different orders of accuracy, to enforce continuity at the coupling boundary. Then, we derive weak (i.e., variational) formulations of the fine-scale and upscaled governing equations for finite element (FE) discretization and numerical implementation in FEniCS. We demonstrate that hybrid simulations can accurately predict the average temperature fields within error bounds determined a priori by homogenization theory. Finally, we demonstrate the computational efficiency of the hybrid algorithm relative to fine-scale simulations.
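A toy sketch of the two-way coupling idea follows, not the paper's FEniCS implementation: a fine-grid and a coarse (upscaled) model of steady 1D heat conduction are solved alternately, with the interface value updated from flux balance until it converges. Grid sizes, boundary temperatures, and the equal-conductivity assumption are all hypothetical.

```python
import numpy as np

def solve_laplace_1d(n, left, right):
    """Finite-difference solve of u'' = 0 on n interior points, Dirichlet ends."""
    A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    b = np.zeros(n)
    b[0], b[-1] = -left, -right
    return np.linalg.solve(A, b)

# Domain [0,1]: fine model on [0,0.5] (h=0.01), upscaled model on [0.5,1] (h=0.05).
h_fine, h_coarse = 0.01, 0.05
T_left, T_right = 100.0, 0.0
T_iface = 20.0                                          # initial interface guess
for it in range(500):
    u_fine = solve_laplace_1d(49, T_left, T_iface)      # bottom-up pass
    u_coarse = solve_laplace_1d(9, T_iface, T_right)    # top-down pass
    # Enforce flux continuity at the coupling boundary (equal conductivities).
    wf, wc = 1.0 / h_fine, 1.0 / h_coarse
    new_T = (wf * u_fine[-1] + wc * u_coarse[0]) / (wf + wc)
    if abs(new_T - T_iface) < 1e-6:
        break
    T_iface = new_T
print(f"interface temperature {T_iface:.3f} after {it + 1} iterations")  # ~50.0
```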
The application of Physics-Informed Neural Networks (PINNs) is investigated for the first time in solving the one-dimensional countercurrent spontaneous imbibition (COUCSI) problem at both early and late time (i.e., before and after the imbibition front meets the no-flow boundary). We introduce change-of-variables as a technique for improving the performance of PINNs and formulate the COUCSI problem in three equivalent forms by changing the independent variables: the first describes saturation as a function of normalized position X and time T; the second as a function of X and Y=T^0.5; and the third solely as a function of Z=X/T^0.5 (valid only at early time). The PINN model was generated using a feed-forward neural network and trained by minimizing a weighted loss function comprising the physics-informed loss term and terms corresponding to the initial and boundary conditions. All three formulations closely approximated the correct solutions, with water saturation mean absolute errors of around 0.019 and 0.009 for the XT and XY formulations, and 0.012 for the Z formulation at early time. The Z formulation perfectly captured the self-similarity of the system at early time, which was captured only partially by the XT and XY formulations. The total variation of saturation was preserved in the Z formulation and was better preserved by the XY formulation than by the XT formulation. Redefining the problem in terms of physics-inspired variables reduced its nonlinearity and enabled higher solution accuracy, greater loss-landscape convexity, fewer required collocation points, smaller networks, and more computationally efficient solutions.
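To make the PINN setup concrete, here is a minimal PyTorch sketch. A generic nonlinear diffusion form S_T = (D(S) S_X)_X stands in for the COUCSI equation, with a hypothetical D(S) = S; loss weights, sampling, and the boundary/initial values are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(X, T):
    X.requires_grad_(True); T.requires_grad_(True)
    # Change-of-variables would simply swap the inputs here, e.g. (X, T**0.5).
    S = net(torch.cat([X, T], dim=1))
    S_X = torch.autograd.grad(S.sum(), X, create_graph=True)[0]
    flux = S * S_X                                   # D(S) = S, a placeholder
    flux_X = torch.autograd.grad(flux.sum(), X, create_graph=True)[0]
    S_T = torch.autograd.grad(S.sum(), T, create_graph=True)[0]
    return S_T - flux_X

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    Xc, Tc = torch.rand(256, 1), torch.rand(256, 1)         # collocation points
    loss_pde = pde_residual(Xc, Tc).pow(2).mean()
    S0 = net(torch.cat([torch.rand(64, 1), torch.zeros(64, 1)], dim=1))
    loss_ic = S0.pow(2).mean()                              # S(X,0) = 0 (illustrative)
    Sb = net(torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1))
    loss_bc = (Sb - 1.0).pow(2).mean()                      # S(0,T) = 1 (inlet)
    loss = loss_pde + 10.0 * (loss_ic + loss_bc)            # weighted loss
    opt.zero_grad(); loss.backward(); opt.step()
```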
Cyber-Physical Systems (CPSs) play a central role in the behavior of a wide range of autonomous physical systems such as medical devices, autonomous vehicles, and smart homes, many of which are safety-critical. CPSs are often specified iteratively as a sequence of models at different levels of abstraction that can be tested via simulation early in the development cycle. One such model is the hybrid automaton, which is used frequently for CPS applications and has the advantage of encapsulating both continuous and discrete CPS behaviors. When testing CPSs, engineers can take advantage of these models to generate test cases that target both types of behavior. Moreover, because these models are constructed early in the development process, they allow test cases to be generated before simulation models of the CPSs have even been designed. One challenge in testing CPSs is that these systems may operate differently even under identical test scenarios. In such cases, we cannot employ test oracles based on predetermined deterministic behaviors; instead, test oracles should consider sets of desired behaviors when determining whether the CPS has behaved appropriately. In this paper we present HYTEST, a technique that generates test cases from hybrid models, accompanied by appropriate test oracles, for testing CPSs early in their development cycle. To evaluate the effectiveness and efficiency of HYTEST, we conducted an empirical study in which we applied the technique to several CPSs and measured its ability to detect faults and the time required to perform the testing process. The results show that HYTEST detected faults more effectively and efficiently than the baseline techniques we compared it to.
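As an illustration of the two ingredients named here (a hybrid automaton and a set-based oracle), consider the following sketch. It is not HYTEST itself: the thermostat automaton, its thresholds and rates, and the acceptance envelope are all hypothetical.

```python
def simulate_thermostat(t_end, dt=0.01, temp=20.0):
    """Hybrid automaton: continuous flow per mode, discrete guarded switches."""
    mode, trace = "heating", []
    for _ in range(int(t_end / dt)):
        rate = 2.0 if mode == "heating" else -1.5   # continuous dynamics per mode
        temp += rate * dt
        if mode == "heating" and temp >= 22.0:      # guard: heater off
            mode = "cooling"
        elif mode == "cooling" and temp <= 18.0:    # guard: heater on
            mode = "heating"
        trace.append(temp)
    return trace

def oracle(trace, low=17.5, high=22.5):
    """Set-based oracle: accept any trace staying inside the envelope,
    rather than matching one predetermined deterministic trace."""
    return all(low <= t <= high for t in trace)

print(oracle(simulate_thermostat(t_end=60.0)))      # True for a correct controller
```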
Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware, interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. GOKU-UI not only broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), but also integrates attention mechanisms and a novel multiple-shooting training strategy in the latent space. These modifications significantly increase performance in both reconstruction and forecasting tasks, as demonstrated by our evaluation on simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 16 times smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, with stochastic Stuart-Landau oscillators incorporated into its dynamical core, our proposed enhancements markedly increased the model's effectiveness in capturing complex brain dynamics. This augmented version not only surpassed all baseline methods in the reconstruction task, but also achieved lower prediction error when forecasting future brain activity up to 15 seconds ahead. By training GOKU-UI on resting-state fMRI data, we encoded whole-brain dynamics into a latent representation, learning a low-dimensional dynamical-system model that could offer insights into brain functionality and open avenues for practical applications such as the classification of mental states or psychiatric conditions. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancement when established scientific insights are interwoven with modern machine learning.
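A schematic sketch of multiple shooting in a latent space follows, as a way of unpacking that training strategy; it is not the exact GOKU-UI implementation. The trajectory is split into windows, each integrated from its own learnable initial state, with a continuity penalty gluing consecutive windows; the dynamics network, sizes, and weights are placeholder choices.

```python
import torch
import torch.nn as nn

latent_dim, n_windows, win_len, dt = 4, 5, 10, 0.1
f = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(), nn.Linear(32, latent_dim))
z0 = nn.Parameter(torch.randn(n_windows, latent_dim))   # one initial state per window

def integrate(z, steps):
    out = [z]
    for _ in range(steps - 1):
        z = z + dt * f(z)                               # explicit Euler step
        out.append(z)
    return torch.stack(out, dim=1)                      # (n_windows, steps, latent_dim)

target = torch.randn(n_windows, win_len, latent_dim)    # encoded observations (dummy)
opt = torch.optim.Adam(list(f.parameters()) + [z0], lr=1e-2)
for _ in range(200):
    traj = integrate(z0, win_len)
    recon = (traj - target).pow(2).mean()
    # Continuity: the end of window k must match the start of window k+1.
    continuity = (traj[:-1, -1] - z0[1:]).pow(2).mean()
    loss = recon + 5.0 * continuity
    opt.zero_grad(); loss.backward(); opt.step()
```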
In typical in-hand manipulation tasks such as object pivoting, real-time perception of rotational slippage has proven beneficial for improving the dexterity and stability of robotic hands. An effective strategy is to measure the rotation angle from contact properties obtained through visuotactile sensing. However, existing rotation-estimation methods do not consider the impact of incipient slip during the pivoting process, which introduces measurement errors and makes it hard to determine the boundary between stable contact and macro slip. This paper describes a generalized 2D contact model under pivoting and proposes a rotation measurement method based on line features in the stick region. The proposed method was applied to Tac3D vision-based tactile sensors using continuous marker patterns. Experiments show that the rotation measurement system achieves an average static measurement error of 0.17 degrees and an average dynamic measurement error of 1.34 degrees. Moreover, the proposed method requires no training data and achieves real-time sensing during in-hand object pivoting.
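For intuition, here is a standard way to recover an in-plane rotation angle from marker positions in the stick region, via a 2D least-squares rigid fit; this is an illustrative stand-in for the paper's line-feature method, and the marker data are synthetic.

```python
import numpy as np

def rotation_angle(p_before, p_after):
    """Least-squares rotation (radians) mapping p_before to p_after, both (N, 2)."""
    a = p_before - p_before.mean(axis=0)
    b = p_after - p_after.mean(axis=0)
    # 2D Kabsch: angle from the summed cross and dot products.
    num = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    den = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    return np.arctan2(num, den)

rng = np.random.default_rng(0)
markers = rng.uniform(-1, 1, size=(30, 2))              # stick-region marker positions
theta = np.deg2rad(3.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotated = markers @ R.T + rng.normal(0, 0.002, size=markers.shape)
print(np.rad2deg(rotation_angle(markers, rotated)))     # ~3.0 degrees
```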
Generalized metric spaces are obtained by weakening the requirements (e.g., symmetry) on the distance function and by allowing it to take values in structures (e.g., quantales) that are more general than the set of non-negative real numbers. Quantale-valued metric spaces have gained prominence due to their use in quantitative reasoning on programs/systems, and for defining various notions of behavioral metrics. We investigate imprecision and robustness in the framework of quantale-valued metric spaces, when the quantale is continuous. In particular, we study the relation between the robust topology, which captures robustness of analyses, and the Hausdorff-Smyth hemi-metric. To this end, we define a preorder-enriched monad $\mathsf{P}_S$, called the Hausdorff-Smyth monad, and when $Q$ is a continuous quantale and $X$ is a $Q$-metric space, we relate the topology induced by the metric on $\mathsf{P}_S(X)$ with the robust topology on the powerset $\mathsf{P}(X)$ defined in terms of the metric on $X$.
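For context, one common form of the directed Hausdorff-Smyth distance between subsets $A, B \subseteq X$ of a metric space $(X, d)$ is (conventions for the direction of the asymmetry vary across the literature, and in the quantale-valued setting sup and inf become joins and meets in $Q$):
$$
d_{HS}(A, B) \;=\; \sup_{b \in B} \, \inf_{a \in A} \, d(a, b),
$$
which satisfies the triangle inequality but not symmetry; taking the maximum of the two directed distances recovers the usual Hausdorff metric.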
Due to severe societal and environmental impacts, wildfire prediction using multi-modal sensing data has become a highly sought-after data-analytical tool for various stakeholders (such as state governments and power utility companies) seeking a more informed understanding of wildfire activity and better planning of preventive measures. A desirable algorithm should precisely predict fire risk and magnitude for a location in real time. In this paper, we develop a flexible spatio-temporal wildfire prediction framework using multi-modal time-series data. We first predict wildfire risk (the chance of a wildfire event) in real time, accounting for historical events using discrete mutually exciting point-process models. We then develop a wildfire magnitude prediction set method based on a flexible, distribution-free time-series conformal prediction (CP) approach. Theoretically, we prove a risk-model parameter recovery guarantee, as well as coverage and set-size guarantees for the CP sets. Through extensive experiments with real wildfire data from California, we demonstrate the effectiveness of our methods, as well as their flexibility and scalability over large regions.
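A schematic sketch of the two stages follows: a discrete-time self-exciting (Hawkes-style) intensity for event risk, and a split-conformal prediction interval for magnitude. The kernel, parameter values, and synthetic calibration data are illustrative, not the paper's fitted model.

```python
import numpy as np

def intensity(events, t, mu=0.05, alpha=0.3, beta=0.8):
    """Discrete Hawkes-style risk at day t given past event days."""
    past = np.asarray([s for s in events if s < t])
    return mu + alpha * np.sum(beta ** (t - past))      # exponentially fading excitation

events = [3, 4, 10]
print(intensity(events, 12))                            # elevated risk shortly after day 10

# Split conformal: calibrate on held-out residuals, then form a prediction set.
rng = np.random.default_rng(1)
cal_true = rng.gamma(2.0, 50.0, size=200)               # held-out magnitudes
cal_pred = cal_true + rng.normal(0, 20.0, size=200)     # outputs of some point predictor
scores = np.abs(cal_true - cal_pred)                    # nonconformity scores
alpha_cp = 0.1
q = np.quantile(scores, np.ceil((1 - alpha_cp) * (len(scores) + 1)) / len(scores))
new_pred = 120.0
print(f"90% prediction set: [{new_pred - q:.1f}, {new_pred + q:.1f}]")
```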
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained end-to-end, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
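The following sketch shows one simplified way such a homophily-adaptive propagation step could look; the scoring network and the signed weighting are illustrative stand-ins for the paper's mechanism. Each edge gets a learned homophily degree g in [0, 1] that scales between attraction (g near 1) and repulsion (g near 0).

```python
import torch
import torch.nn as nn

class AdaptiveProp(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.score = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, h, edge_index):
        src, dst = edge_index                               # source/target node ids, (E,)
        g = self.score(torch.cat([h[src], h[dst]], dim=1))  # homophily degree per edge
        sign = 2.0 * g - 1.0                                # in [-1, 1]
        msg = sign * self.lin(h)[src]                       # signed neighbor messages
        out = torch.zeros_like(h).index_add_(0, dst, msg)
        deg = torch.zeros(h.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1))
        return h + out / deg.clamp(min=1.0)                 # residual keeps self-information

h = torch.randn(5, 8)                                       # 5 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
print(AdaptiveProp(8)(h, edge_index).shape)                 # torch.Size([5, 8])
```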
In recent years, ever larger and deeper models have been springing up, continuously pushing state-of-the-art (SOTA) results across various fields such as natural language processing (NLP) and computer vision (CV). However, despite these promising results, it should be noted that the computations required by SOTA models have increased at an exponential rate. Massive computation not only has a surprisingly large carbon footprint but also negatively affects research inclusiveness and deployment in real-world applications. Green deep learning is an increasingly hot research field that calls on researchers to pay attention to energy usage and carbon emissions during model training and inference, with the goal of yielding novel results using lightweight and efficient technologies. Many techniques can be used to achieve this goal, such as model compression and knowledge distillation. This paper presents a systematic review of the development of Green deep learning technologies. We classify these approaches into four categories: (1) compact networks, (2) energy-efficient training strategies, (3) energy-efficient inference approaches, and (4) efficient data usage. For each category, we discuss the progress that has been achieved and the challenges that remain unresolved.
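As a concrete example of one technique the survey covers, here is a minimal sketch of knowledge distillation; the temperature and loss weighting are illustrative choices, not prescriptions from the survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft teacher targets with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10)                              # small student's logits
t = torch.randn(8, 10)                              # large teacher's logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```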
Few-shot Knowledge Graph (KG) completion is a focus of current research, in which each task aims to query unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties: entities may exhibit diverse roles within task relations, and references may contribute differently to queries. This work proposes an adaptive attentional network for few-shot KG completion that learns adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references capture their fine-grained semantic meanings, yielding more expressive representations that are more predictive for knowledge acquisition in the few-shot scenario. Evaluation on link prediction over two public datasets shows that our approach achieves new state-of-the-art results across different few-shot sizes.
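A schematic sketch of query-aware aggregation over few-shot references follows, as a simplified stand-in for the paper's adaptive aggregator; the dimensions and the scaled dot-product scoring are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def aggregate_references(refs, query):
    """refs: (K, d) embeddings of the K reference pairs; query: (d,) query embedding.
    Attention lets each reference contribute according to its relevance to the query."""
    scores = refs @ query / refs.size(1) ** 0.5     # scaled dot-product relevance
    weights = F.softmax(scores, dim=0)              # (K,) per-reference contribution
    return weights @ refs                           # (d,) query-adaptive relation prototype

refs = torch.randn(3, 16)                           # K=3 few-shot reference pairs
query = torch.randn(16)
prototype = aggregate_references(refs, query)
print(prototype.shape)                              # torch.Size([16])
```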