Forecasting natural gas consumption, accounting for seasonality and trends, is crucial for planning its supply and consumption and for optimizing the cost of obtaining it, particularly for industrial entities. When supplies are threatened, however, accurate forecasting also becomes critical to guaranteeing delivery of this raw material to individual consumers and thus to ensuring society's energy security. This article introduces a novel approach to multistep-ahead forecasting of natural gas consumption that integrates change point detection into model collection selection, with continual learning capabilities based on data stream processing. The performance of forecasting models built on the proposed approach is evaluated in a complex real-world use case of natural gas consumption forecasting. We employ Hoeffding tree predictors as forecasting models and the Pruned Exact Linear Time (PELT) algorithm for the change point detection procedure. Integrating change point detection makes it possible to select a different model collection for each successive time frame. Accordingly, three model collection selection procedures (with and without an error feedback loop) are defined and evaluated in forecasting scenarios with various densities of detected change points. These models are compared with change-point-agnostic baseline approaches. Our experiments show that fewer change points result in a lower forecasting error regardless of the model collection selection procedure employed. Moreover, simpler model collection selection procedures that omit forecasting-error feedback lead to more robust forecasting models suitable for continual learning tasks.
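A minimal sketch of the two building blocks named above, assuming the `ruptures` library for PELT and `river` for incremental Hoeffding tree regression; the paper's actual model collection selection procedures and error feedback loop are not reproduced, and the synthetic series, lag features, and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch: offline PELT change point detection (ruptures) combined with
# incremental Hoeffding tree regressors (river). Segment boundaries simply trigger a
# fresh forecaster here; the paper's selection procedures are not reproduced.
import numpy as np
import ruptures as rpt
from river import tree

rng = np.random.default_rng(0)
# Synthetic daily consumption with a level shift (stand-in for real gas demand data).
consumption = np.concatenate([rng.normal(100, 5, 200), rng.normal(140, 5, 200)])

# 1) Detect change points on the historical series with PELT.
change_points = rpt.Pelt(model="rbf", min_size=30).fit(consumption).predict(pen=10)

# 2) Train one incremental Hoeffding tree per segment (a minimal "model collection").
models, start = [], 0
for end in change_points:
    model = tree.HoeffdingTreeRegressor(grace_period=50)
    for t in range(start + 7, end):
        features = {f"lag_{k}": consumption[t - k] for k in range(1, 8)}
        model.learn_one(features, consumption[t])
    models.append(model)
    start = end

# 3) One-step-ahead forecast with the model of the most recent segment.
latest = {f"lag_{k}": consumption[-k] for k in range(1, 8)}
print(round(models[-1].predict_one(latest), 2))
```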
Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their abilities to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.
Pulsative signals such as the electrocardiogram (ECG) are extensively collected as part of routine clinical care. However, noisy and poor-quality recordings, which lead to missing values, are a major issue for signals collected with mobile health systems, degrading signal quality and hindering automated downstream tasks. Recent studies have explored imputing missing ECG values with probabilistic time-series models. Nevertheless, their performance still lags behind that of deterministic models, as variation across subjects and heart-beat relationships are not explicitly considered in the training objective. In this work, to improve ECG imputation and forecasting accuracy with probabilistic models, we present a template-guided denoising diffusion probabilistic model, PulseDiff, which is conditioned on an informative prior for a range of health conditions. Specifically, 1) we first extract a subject-level pulsative template from the observation as an informative prior for the missing values, capturing personal characteristics; 2) we then add beat-level stochastic shift terms to the template for prior augmentation, accounting for beat-level variance in position and amplitude; 3) we finally design a confidence score that reflects the subject's health condition, ensuring that our prior is provided safely. Experiments on the PTB-XL dataset reveal that PulseDiff improves the performance of two strong DDPM baselines, CSDI and SSSD$^{S4}$, verifying that our method guides the generation of DDPMs while managing uncertainty. When combined with SSSD$^{S4}$, PulseDiff outperforms the leading deterministic model for short-interval missing data and is comparable for long-interval data loss.
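A minimal numpy sketch of the prior-construction idea only (steps 1–3 above), not of the diffusion model itself; the beat segmentation, jitter scales, and confidence value are placeholder assumptions rather than PulseDiff's actual settings.

```python
# Sketch: subject-level template from averaged beats, beat-level stochastic shifts
# for augmentation, and a confidence-weighted prior. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
beat_len, n_beats = 100, 8
t = np.linspace(0, 1, beat_len)
# Toy "observed" beats: a bump plus noise, standing in for segmented ECG beats.
beats = np.exp(-((t - 0.5) ** 2) / 0.005) + 0.05 * rng.standard_normal((n_beats, beat_len))

# 1) Subject-level template: average of the observed (aligned) beats.
template = beats.mean(axis=0)

# 2) Beat-level augmentation: random positional and amplitude shifts of the template.
def augmented_prior(template, n_beats, pos_std=2.0, amp_std=0.05, rng=rng):
    priors = []
    for _ in range(n_beats):
        shift = int(round(rng.normal(0.0, pos_std)))   # positional jitter (samples)
        scale = 1.0 + rng.normal(0.0, amp_std)         # amplitude jitter
        priors.append(scale * np.roll(template, shift))
    return np.stack(priors)

# 3) Confidence-weighted prior: blend toward zero (uninformative) when confidence is low.
confidence = 0.8  # placeholder reflecting the subject's health condition
prior = confidence * augmented_prior(template, n_beats)
print(prior.shape)  # (8, 100)
```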
Hypergraphs naturally represent group interactions, which are omnipresent in many domains: collaborations of researchers, co-purchases of items, and joint interactions of proteins, to name a few. In this work, we propose tools for answering the following questions: (Q1) what are the structural design principles of real-world hypergraphs? (Q2) how can we compare local structures of hypergraphs of different sizes? (Q3) how can we identify the domains from which hypergraphs originate? We first define hypergraph motifs (h-motifs), which describe the overlapping patterns of three connected hyperedges. Then, we define the significance of each h-motif in a hypergraph as its number of occurrences relative to that in properly randomized hypergraphs. Lastly, we define the characteristic profile (CP) as the vector of the normalized significance of every h-motif. Regarding Q1, we find that h-motifs' occurrences in 11 real-world hypergraphs from 5 domains are clearly distinguished from those in randomized hypergraphs. We then demonstrate that CPs capture local structural patterns unique to each domain, and thus comparing CPs of hypergraphs addresses Q2 and Q3. The concept of CP is extended to represent the connectivity pattern of each node or hyperedge as a vector, which proves useful for node classification and hyperedge prediction. Our algorithmic contribution is MoCHy, a family of parallel algorithms for counting h-motifs' occurrences in a hypergraph. We theoretically analyze their speed and accuracy and show empirically that the advanced approximate version MoCHy-A+ is more accurate than the basic approximate version and faster than the exact version. Furthermore, we explore ternary hypergraph motifs, which extend h-motifs by taking into account not only the presence but also the cardinality of intersections among hyperedges. This extension proves beneficial for all of the aforementioned applications.
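A short sketch of the significance and characteristic-profile computation, assuming h-motif occurrence counts have already been obtained (e.g., by MoCHy) for the real and randomized hypergraphs; the counts below are made-up placeholders, and the normalization follows the usual network-motif-style form, which may differ in detail from the paper's exact definition.

```python
# Sketch: per-motif significance of real vs. randomized counts, then a unit-norm
# characteristic profile (CP) so hypergraphs of different sizes can be compared.
import numpy as np

n_motifs = 26                                  # number of h-motif types
real_counts = np.random.default_rng(0).integers(10, 1000, n_motifs).astype(float)
rand_counts = np.random.default_rng(1).integers(10, 1000, n_motifs).astype(float)

eps = 1.0                                      # stabilizer for rarely occurring motifs
significance = (real_counts - rand_counts) / (real_counts + rand_counts + eps)

# Characteristic profile: the significance vector normalized to unit length.
cp = significance / np.linalg.norm(significance)
print(cp.round(3))
```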
The problem of three-dimensional, convex rigid-body collisions with a plane is fully investigated; this includes bodies with sharp corners, which are resolved without recourse to nonsmooth convex analysis of tangent and normal cones. In particular, using nonsmooth Lagrangian mechanics, the equations of motion and jump equations are derived, which depend largely on the collision detection function. Following the variational approach, a Lie group variational collision integrator (LGVCI) is systematically derived that is symplectic, momentum-preserving, and exhibits excellent long-time near-conservation of energy. Furthermore, systems with corner impacts are resolved adeptly using $\epsilon$-rounding of the signed distance function (SDF) of the body. Extensive numerical experiments are conducted to demonstrate the conservation properties of the LGVCI.
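A minimal sketch of one standard way to $\epsilon$-round an SDF, shown on an axis-aligned box: shrink the body by $\epsilon$ and offset its exact SDF by $-\epsilon$ (a Minkowski sum with a ball), which smooths sharp corners. The box geometry and $\epsilon$ value are illustrative assumptions and may differ from the paper's specific construction.

```python
# Sketch: exact box SDF and its epsilon-rounded variant; rounding removes the sharp
# corner where the contact normal would otherwise be ill-defined.
import numpy as np

def sdf_box(p, half_extents):
    """Exact SDF of an axis-aligned box centered at the origin."""
    q = np.abs(p) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))
    inside = min(q.max(), 0.0)
    return outside + inside

def sdf_box_rounded(p, half_extents, eps):
    """Epsilon-rounded box: Minkowski sum of the shrunken box with a ball of radius eps."""
    return sdf_box(p, half_extents - eps) - eps

p_corner = np.array([1.0, 1.0, 1.0])        # query point at a sharp corner
half = np.array([1.0, 1.0, 1.0])
print(sdf_box(p_corner, half), sdf_box_rounded(p_corner, half, eps=0.05))
```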
Tracking inventory and rearranging misplaced items are among the most labor-intensive tasks in a retail environment. While there have been attempts to use vision-based techniques for these tasks, most rely on planogram compliance for anomaly detection, a technique that has been found lacking in robustness and scalability. Moreover, existing systems rely on human intervention to perform corrective actions after detection. In this paper, we present Co-AD, a Concept-based Anomaly Detection approach using a Vision Transformer (ViT) that is able to flag misplaced objects without relying on a prior knowledge base such as a planogram. It uses an auto-encoder architecture followed by outlier detection in the latent space. Co-AD achieves a peak success rate of 89.90% on anomaly detection image sets of retail objects drawn from the RP2K dataset, compared to 80.81% for the best-performing baseline, a standard ViT auto-encoder. To demonstrate its utility, we describe a robotic mobile manipulation pipeline that autonomously corrects the anomalies flagged by Co-AD. This work is ultimately aimed at developing autonomous mobile robot solutions that reduce the need for human intervention in retail store management.
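A minimal sketch of the second stage only, outlier detection in an auto-encoder's latent space; the random vectors below merely stand in for bottleneck embeddings of shelf-image patches, and the choice of IsolationForest, its contamination rate, and the data shapes are assumptions rather than Co-AD's actual design.

```python
# Sketch: flag anomalous latent vectors (e.g., misplaced items) with an off-the-shelf
# outlier detector fitted on latents of normal shelf patches.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
latents_train = rng.normal(0, 1, (500, 64))        # latents of "normal" shelf patches
latents_test = np.vstack([
    rng.normal(0, 1, (45, 64)),                    # normal patches
    rng.normal(4, 1, (5, 64)),                     # misplaced-item patches (shifted)
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(latents_train)
flags = detector.predict(latents_test)             # -1 = anomaly, +1 = normal
print((flags == -1).sum(), "patches flagged as potential misplacements")
```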
In public health applications, spatial data are often recorded at different spatial scales and over different correlated variables. Spatial change of support is a key inferential problem in these applications and has become standard in univariate settings; however, it remains less standard in multivariate settings. Several existing multivariate spatial models can be easily combined with a multiscale spatial approach to analyze multivariate multiscale spatial data. In this paper, we propose three new models arising from such combinations for bivariate multiscale spatial data in a Bayesian context. In particular, we extend spatial random effects models, multivariate conditional autoregressive models, and ordered hierarchical models through a multiscale spatial approach. We run simulation studies for the three models and compare them in terms of prediction performance and computational efficiency. We motivate our models through an analysis of 2015 Texas annual average percentages receiving two blood tests, using data from the Dartmouth Atlas Project.
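A minimal illustrative sketch, assuming a generic bivariate spatial random effects (SRE) specification with block averaging for the change of support; the paper's three specific model combinations, priors, and indexing are not reproduced here.

% Illustrative only: generic bivariate SRE model with change of support.
\begin{align*}
  Y_j(\mathbf{s}) &= \mathbf{x}(\mathbf{s})^\top \boldsymbol{\beta}_j
                     + \boldsymbol{\psi}(\mathbf{s})^\top \boldsymbol{\eta}_j
                     + \varepsilon_j(\mathbf{s}), \qquad j = 1, 2,\\
  Y_j(A) &= \frac{1}{|A|} \int_A Y_j(\mathbf{s})\, d\mathbf{s}
            \quad \text{(areal support } A \text{ at either spatial scale)},\\
  (\boldsymbol{\eta}_1^\top, \boldsymbol{\eta}_2^\top)^\top
         &\sim \mathrm{N}(\mathbf{0}, \boldsymbol{\Sigma}_\eta)
            \quad \text{(cross-variable dependence between the two outcomes)}.
\end{align*}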
Quantiles are useful characteristics of random variables that can provide substantially more information about distributions than commonly used summary statistics such as the mean. In this paper, we propose a Bayesian quantile trend filtering method to estimate non-stationary trends of quantiles. We introduce general shrinkage priors to induce locally adaptive Bayesian inference on trends, together with a mixture representation of the asymmetric Laplace likelihood. To compute the posterior distribution quickly, we develop calibrated mean-field variational approximations that guarantee the frequentist coverage of credible intervals obtained from the approximate posterior attains a specified nominal level. Simulation and empirical studies show that the proposed algorithm is computationally much more efficient than the Gibbs sampler and tends to provide stable inference results, especially for high and low quantiles.
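For reference, the check loss and the asymmetric Laplace (AL) likelihood with its standard normal-exponential mixture representation for quantile level $p$, which is what makes conditionally Gaussian Gibbs updates and mean-field variational approximations tractable; the paper's specific shrinkage priors are not shown here.

\begin{align*}
  \rho_p(u) &= u\bigl(p - \mathbf{1}\{u < 0\}\bigr), \qquad
  f_{\mathrm{AL}}(y \mid \mu, \sigma, p)
    = \frac{p(1-p)}{\sigma}\exp\!\left\{-\rho_p\!\left(\frac{y-\mu}{\sigma}\right)\right\},\\
  y &= \mu + \theta w + \tau\sqrt{\sigma w}\, z, \qquad
  w \sim \mathrm{Exp}(\text{mean } \sigma), \quad z \sim \mathrm{N}(0,1),\\
  \theta &= \frac{1-2p}{p(1-p)}, \qquad \tau^2 = \frac{2}{p(1-p)},
\end{align*}
so conditionally on $w$ the likelihood is Gaussian in $\mu$.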
We aim to maximize energy efficiency, gauged as the average energy cost per job, in a large-scale server farm with various storage and/or computing components modeled as parallel abstracted servers. Each server operates in multiple power modes characterized by potentially different service and energy consumption rates. The heterogeneity of the servers and the multiple power modes complicate the maximization problem, for which optimal solutions are generally intractable. Relying on the Whittle relaxation technique, we derive a near-optimal, scalable job-assignment policy. Under a mild condition on the service and energy consumption rates of the servers, we prove that our proposed policy approaches optimality as the size of the entire system tends to infinity; that is, it is asymptotically optimal. For the non-asymptotic regime, we show the effectiveness of the proposed policy through numerical simulations, where it outperforms all tested baselines, and we numerically demonstrate its robustness against heavy-tailed job-size distributions.
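A structural sketch of an index-based assignment rule of the kind a Whittle relaxation yields: each (server, power-mode) pair carries a priority index, and an arriving job goes to the admissible pair with the highest index. The index values, mode names, and admissibility rule below are placeholders; the paper's actual indices are not derived here.

```python
# Sketch: send each arriving job to the idle (server, power-mode) pair with the
# highest precomputed priority index.
from dataclasses import dataclass

@dataclass
class ServerMode:
    server_id: int
    mode: str
    index: float        # placeholder priority (would come from the Whittle relaxation)
    busy: bool = False

pool = [
    ServerMode(0, "low-power", index=1.8),
    ServerMode(0, "high-power", index=0.9),
    ServerMode(1, "low-power", index=2.3),
    ServerMode(1, "high-power", index=1.1),
]

def assign_job(pool):
    """Assign the job to the idle pair with the highest index, if any is available."""
    idle = [sm for sm in pool if not sm.busy]
    if not idle:
        return None                       # job blocked or queued; policy-dependent
    best = max(idle, key=lambda sm: sm.index)
    best.busy = True
    return best

print(assign_job(pool))                   # -> server 1 in low-power mode
```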
Textbooks are one of the main mediums for delivering high-quality education to students. In particular, explanatory and illustrative visuals play a key role in retention, comprehension, and the general transfer of knowledge. However, many textbooks lack such visuals to support student learning. In this paper, we investigate the effectiveness of vision-language models in automatically enhancing textbooks with images from the web. We collect a dataset of e-textbooks in the math, science, social science, and business domains. We then set up a text-image matching task that involves retrieving and appropriately assigning web images to textbooks, which we frame as a matching optimization problem. Through a crowd-sourced evaluation, we verify that (1) while the original textbook images are rated higher, automatically assigned ones are not far behind, and (2) the precise formulation of the optimization problem matters. We release the dataset of textbooks with an associated image bank to inspire further research in this intersectional area of computer vision and NLP for education.
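A hedged sketch of one way the matching step could be posed: given a text-image similarity matrix (e.g., from a vision-language model), assign at most one candidate web image to each textbook section by solving a linear assignment problem. The paper's exact objective and constraints are not reproduced; the similarity scores below are random placeholders.

```python
# Sketch: maximize total text-image similarity with a one-to-one assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
similarity = rng.uniform(0, 1, size=(4, 6))   # 4 textbook sections x 6 candidate images

# Maximize total similarity == minimize its negation.
rows, cols = linear_sum_assignment(-similarity)
for section, image in zip(rows, cols):
    print(f"section {section} <- image {image} (score {similarity[section, image]:.2f})")
```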
Early identification of abnormalities in plants is important for ensuring proper growth and achieving high crop yields. Precision agriculture can benefit significantly from modern computer vision tools that make farming strategies addressing these issues efficient and effective. Because farmlands are typically quite large, farmers must manually check vast areas to determine the status of the plants and apply proper treatments. In this work, we consider the problem of automatically identifying abnormal regions in maize plants from images captured by a UAV. Using deep learning techniques, we have developed a methodology that can detect different levels of abnormality (i.e., low, medium, high, or no abnormality) in maize plants independently of their growth stage. The primary goal is to identify anomalies at the earliest possible stage in order to maximize the effectiveness of potential treatments. At the same time, the proposed system can provide valuable information to human annotators for ground-truth data collection by helping them focus their attention on a much smaller set of images. We have experimented with two different but complementary approaches, the first treating abnormality detection as a classification problem and the second treating it as a regression problem. Both approaches can be generalized to different types of abnormalities and make no assumption that the abnormality occurs at an early plant growth stage, where it might be easier to detect because the plants are smaller and easier to separate. As a case study, we consider a publicly available dataset that exhibits mostly nitrogen deficiency in maize plants at various growth stages. We report promising preliminary results, with 88.89\% detection accuracy for low abnormality and 100\% for no abnormality.
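A small sketch of the two complementary framings on placeholder features: abnormality level as a four-way classification problem versus as a regression problem on an ordinal score. The random features stand in for descriptors of UAV image tiles, and the random-forest models are illustrative stand-ins rather than the deep models used in the paper.

```python
# Sketch: classification vs. regression framing of the abnormality level
# {none: 0, low: 1, medium: 2, high: 3} on placeholder features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                     # placeholder tile features
y = rng.integers(0, 4, size=200)                   # placeholder abnormality levels

# Framing 1: multi-class classification over the four discrete levels.
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Framing 2: regression on the ordinal score, rounded back to a level at prediction time.
reg = RandomForestRegressor(random_state=0).fit(X, y.astype(float))

x_new = rng.normal(size=(1, 32))
print("classified level:", int(clf.predict(x_new)[0]))
print("regressed level: ", int(np.clip(np.round(reg.predict(x_new)[0]), 0, 3)))
```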