Diagnosing changes in structural behavior using monitoring data is an important objective of structural health monitoring (SHM). Changes in structural behavior usually manifest as feature changes in the monitored structural responses; thus, developing effective methods for automatically detecting such changes is of considerable significance. Existing change-detection methods in SHM are mainly designed for scalar or vector data and are therefore incapable of detecting changes in features represented by complex data, e.g., probability density functions (PDFs). Detecting abrupt changes in the distributions (represented by PDFs) of feature variables extracted from SHM data is usually of crucial interest for structural condition assessment; however, the SHM community still lacks effective diagnostic tools for detecting such changes. In this study, a change-point detection method is developed in a functional data-analytic framework for PDF-valued sequences, and it is leveraged to diagnose the distributional information break encountered in structural condition assessment. A major challenge in modeling or analyzing PDF-valued data is that PDFs are special functional data subject to nonlinear constraints. To tackle this issue, the PDFs are embedded into the Bayes space, and the associated change-point model is constructed by using the linear structure of the Bayes space; then, a hypothesis testing procedure is presented for distributional change-point detection based on the isomorphic mapping between the Bayes space and a functional linear space. Comprehensive simulation studies are conducted to validate the effectiveness of the proposed method and demonstrate its superiority over a competing method. Finally, an application to real SHM data illustrates its practical utility in structural condition assessment.
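As a rough illustration of the embedding idea described above, the following Python sketch maps a sequence of PDFs into an L^2 space via the centred log-ratio (clr) transform, an isometry commonly used for Bayes spaces, and locates a distributional change with a simple functional CUSUM statistic; the grid, densities, and test statistic here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Minimal sketch: embed PDFs via the centred log-ratio (clr) transform and locate
# a distributional change point with a simple functional CUSUM statistic.

grid = np.linspace(-6, 6, 200)

def gaussian_pdf(mu, sigma):
    f = np.exp(-0.5 * ((grid - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return f / np.trapz(f, grid)                   # renormalise on the finite grid

def clr(f, eps=1e-12):
    logf = np.log(f + eps)
    return logf - np.trapz(logf, grid) / (grid[-1] - grid[0])  # centre the log-density

# Sequence of 60 PDFs with a mean shift after index 30 (the planted change point).
pdfs = [gaussian_pdf(0.0 + 0.05 * np.random.randn(), 1.0) for _ in range(30)] + \
       [gaussian_pdf(1.0 + 0.05 * np.random.randn(), 1.0) for _ in range(30)]
X = np.array([clr(f) for f in pdfs])               # each row is a clr-transformed PDF

# Functional CUSUM: squared L^2 norm of partial-sum deviations from the overall mean.
n = X.shape[0]
mean_fn = X.mean(axis=0)
cusum = np.array([
    np.trapz((X[:k].sum(axis=0) - k * mean_fn) ** 2, grid) / n
    for k in range(1, n)
])
print("estimated change point:", np.argmax(cusum) + 1)
```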
Accelerometers enable an objective measurement of physical activity levels among groups of individuals in free-living environments, providing high-resolution detail about physical activity changes at different time scales. Current approaches used in the literature for analyzing such data typically employ summary measures such as total inactivity time or compositional metrics. However, at the conceptual level, these methods have the potential disadvantage of discarding important information from the recorded data, since the summaries and metrics typically depend on cut-offs related to exercise intensity zones that are chosen subjectively or even arbitrarily. Furthermore, much of the data collected in these studies follow complex survey designs, so estimation strategies adapted to the particular sampling mechanism are mandatory. The aim of this paper is twofold. First, a new functional representation of accelerometer data of a distributional nature is introduced to build a complete individualized profile of each subject's physical activity levels. Second, we extend two nonparametric functional regression models, kernel smoothing and kernel ridge regression, to handle survey data and obtain reliable conclusions about the influence of physical activity in the analyses performed on the NHANES cohort, which follows a complex sampling design, thereby demonstrating the advantages of the proposed representation.
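The following hedged Python sketch illustrates one possible distributional representation, each subject summarized by the quantile function of their activity counts, combined with a survey-weighted functional Nadaraya-Watson (kernel smoothing) estimator; the simulated data, bandwidth, and weights are assumptions, not the paper's implementation or the NHANES design.

```python
import numpy as np

# Illustrative sketch: quantile-function profiles per subject plus a
# survey-weighted functional Nadaraya-Watson regression.

rng = np.random.default_rng(0)
p_grid = np.linspace(0.01, 0.99, 50)               # common probability grid

def quantile_profile(counts):
    return np.quantile(counts, p_grid)             # distributional representation

# Simulated minute-level counts for 100 subjects, toy outcomes, and survey weights.
n = 100
activity = [rng.gamma(shape=2.0, scale=50 + 5 * i / n, size=1440) for i in range(n)]
X = np.array([quantile_profile(a) for a in activity])
y = X.mean(axis=1) / 100 + rng.normal(0, 0.1, n)   # toy outcome
w = rng.uniform(0.5, 2.0, n)                       # survey (design) weights

def nw_predict(x_new, h=50.0):
    d = np.sqrt(np.trapz((X - x_new) ** 2, p_grid, axis=1))  # L2 distance between profiles
    k = np.exp(-0.5 * (d / h) ** 2)                           # Gaussian kernel
    return np.sum(w * k * y) / np.sum(w * k)                  # design weights enter the estimator

print("prediction for subject 0:", nw_predict(X[0]))
```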
Change point analyses are concerned with identifying positions of an ordered stochastic process that undergo abrupt local changes of some underlying distribution. When multiple processes are observed, it is often the case that information regarding the change point positions is shared across the different processes. This work describes a method that takes advantage of this type of information. Since the number and position of change points can be described through a partition with contiguous clusters, our approach develops a joint model for these types of partitions. We describe computational strategies associated with our approach and illustrate improved performance in detecting change points through a small simulation study. We then apply our method to a financial data set of emerging markets in Latin America and highlight interesting insights discovered due to the correlation between change point locations among these economies.
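As a minimal illustration of the correspondence between change points and contiguous partitions, the Python sketch below finds a single mean change by least-squares segmentation and reports the induced contiguous cluster labels; it is not the joint Bayesian partition model developed in the paper.

```python
import numpy as np

# Change points of an ordered sequence correspond one-to-one with a partition into
# contiguous clusters.  A simple least-squares split locates one change point here.

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])   # change at t = 60

def split_cost(y, k):
    # total squared error when splitting the series into y[:k] and y[k:]
    return ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()

costs = [split_cost(y, k) for k in range(5, len(y) - 5)]
tau = int(np.argmin(costs)) + 5
labels = np.where(np.arange(len(y)) < tau, 0, 1)   # contiguous partition {0,...,0,1,...,1}
print("estimated change point:", tau, "cluster sizes:", np.bincount(labels))
```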
Effective visualizations were evaluated to reveal relevant health patterns from multi-sensor real-time wearable devices that recorded vital signs from patients admitted to hospital with COVID-19. Furthermore, specific challenges associated with wearable health data visualizations are described, such as fluctuating data quality resulting from compliance problems, the time needed to charge the device, and technical problems. As a primary use case, we examined the detection and communication of relevant health patterns visible in the vital signs acquired by the technology. Customized heat maps and bar charts were used to specifically highlight medically relevant patterns in vital signs. A survey of two medical doctors, one clinical project manager, and seven health data science researchers was conducted to evaluate the visualization methods. From a dataset of 84 hospitalized COVID-19 patients, we extracted one typical COVID-19 patient history and, based on the visualizations, showcased the health histories of two noteworthy patients. The visualizations were shown to be effective, simple, and intuitive in deducing the health status of patients. For clinical staff who are time-constrained and responsible for numerous patients, such visualization methods can be an effective tool to enable continuous acquisition and monitoring of patients' health statuses, even remotely.
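A minimal Python/matplotlib sketch of the kind of heat-map view described above is shown below; the simulated heart-rate values, grid, and colour map are illustrative assumptions, not the study's actual dashboards or data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hours on one axis, days of a hospital stay on the other, colour encoding a vital
# sign (here a simulated heart-rate series with missing stretches mimicking
# compliance and charging gaps, left blank in the plot).

rng = np.random.default_rng(2)
days, hours = 10, 24
hr = 70 + 10 * rng.standard_normal((days, hours)) + np.linspace(0, 15, days)[:, None]
hr[rng.random((days, hours)) < 0.15] = np.nan      # data-quality gaps shown as blanks

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(hr, aspect="auto", cmap="viridis")
ax.set_xlabel("hour of day")
ax.set_ylabel("day of stay")
fig.colorbar(im, ax=ax, label="heart rate (bpm)")
plt.tight_layout()
plt.show()
```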
This paper considers a multiblock nonsmooth nonconvex optimization problem with nonlinear coupling constraints. Building on the idea of an information zone and an adaptive regime proposed in [J. Bolte, S. Sabach and M. Teboulle, Nonconvex Lagrangian-based optimization: Monitoring schemes and global convergence, Mathematics of Operations Research, 43: 1210--1232, 2018], we propose a multiblock alternating direction method of multipliers for solving this problem. We specify the update of the primal variables by employing a majorization-minimization procedure in each block update. An independent convergence analysis is conducted to prove subsequential as well as global convergence of the generated sequence to a critical point of the augmented Lagrangian. We also establish the iteration complexity and provide preliminary numerical results for the proposed algorithm.
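For orientation only, the Python sketch below shows the generic alternating-direction update structure (primal block updates plus a dual ascent step) on a toy convex lasso problem; the paper's algorithm additionally handles nonconvex multiblock objectives with nonlinear coupling, majorization-minimization block updates, and an adaptive monitoring regime, none of which are reproduced here.

```python
import numpy as np

# Toy ADMM: minimize 0.5*||Dx - d||^2 + lam*||z||_1 subject to x - z = 0.

rng = np.random.default_rng(3)
D = rng.standard_normal((50, 20)); d = rng.standard_normal(50)
lam, rho = 0.5, 1.0
x = z = u = np.zeros(20)

for _ in range(200):
    # x-block: minimize 0.5||Dx-d||^2 + (rho/2)||x - z + u||^2 (closed form)
    x = np.linalg.solve(D.T @ D + rho * np.eye(20), D.T @ d + rho * (z - u))
    # z-block: proximal step for the nonsmooth term (soft-thresholding)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # dual (multiplier) update on the coupling constraint x - z = 0
    u = u + x - z

print("nonzeros in z:", int(np.count_nonzero(z)))
```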
With the continued rise of COVID-19 cases worldwide, it is imperative to ensure that vulnerable countries lacking vaccine resources receive sufficient support to contain the risks. COVAX is one such initiative, operated by the WHO, to supply vaccines to the countries most in need. One critical problem faced by COVAX is how to distribute the limited amount of vaccines to these countries in the most efficient and equitable manner. This paper aims to address this challenge by first proposing a data-driven risk assessment and prediction model and then developing a decision-making framework to support strategic vaccine distribution. The machine learning-based risk prediction model characterizes how the risk is influenced by the underlying essential factors, e.g., the vaccination level of the population in each COVAX country. This predictive model is then leveraged to design an optimal vaccine distribution strategy that simultaneously minimizes the resulting risks and maximizes vaccination coverage in the countries targeted by COVAX. Finally, we corroborate the proposed framework using case studies with real-world data.
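The following hedged Python sketch illustrates the flavour of the allocation step only: given toy per-country risk scores, a small linear program spreads a limited supply to maximise risk-weighted coverage; the risk values, populations, and objective are assumptions and do not reproduce the paper's risk model or optimisation formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Allocate a limited vaccine supply to maximise risk-weighted coverage, subject
# to the supply budget and each country's remaining unvaccinated population.

risk = np.array([0.9, 0.6, 0.4, 0.8])            # predicted risk per country (toy values)
unvaccinated = np.array([5e6, 3e6, 8e6, 2e6])    # remaining eligible population
supply = 6e6                                     # total doses available

# Decision variable x_i = doses sent to country i; maximise sum(risk_i * x_i)
# by minimising its negative.
res = linprog(
    c=-risk,
    A_ub=np.ones((1, len(risk))), b_ub=[supply],  # total doses cannot exceed supply
    bounds=list(zip(np.zeros(len(risk)), unvaccinated)),
)
print("doses per country:", res.x)
```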
Depth quantile functions (DQF) encode geometric information about a point cloud via functions of a single variable, whereby each observation in a data set can be associated with a single function. These functions can then be easily plotted. This is true regardless of the dimension of the data, and in fact holds for object data as well, provided a mapping to an RKHS exists. This visualization aspect proves valuable in the case of anomaly detection, where a universal definition of what constitutes an anomaly is lacking. A relationship drawn between anomalies and antimodes provides a strategy for identifying anomalous observations through visual examination of the DQF plot. The DQF in one dimension is explored, providing intuition for its behavior in general, and connections to several existing methodologies are made clear. For higher dimensions and object data, the adaptive DQF is introduced and explored on several data sets with promising results.
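To convey the depth-and-anomaly intuition only (not the DQF or adaptive DQF construction itself), the Python sketch below flags one-dimensional observations with unusually low halfspace (Tukey) depth; the sample and planted anomalies are illustrative assumptions.

```python
import numpy as np

# In one dimension the halfspace (Tukey) depth of a point is min(F(x), 1 - F(x-)),
# and observations with unusually low depth are natural anomaly candidates.

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 200), [6.0, -5.5]])   # two planted anomalies

def halfspace_depth_1d(point, sample):
    left = np.mean(sample <= point)
    return min(left, 1.0 - left + np.mean(sample == point))

depth = np.array([halfspace_depth_1d(xi, x) for xi in x])
print("most anomalous observations:", x[np.argsort(depth)[:2]])
```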
Maximum likelihood estimates (MLEs) are asymptotically normally distributed, and this property is used in meta-analyses to test the heterogeneity of estimates, either for a single cluster or for several sub-groups. More recently, MLEs for associations between risk factors and diseases have been hierarchically clustered to search for diseases with shared underlying causes, but the approach needs an objective statistical criterion to determine the optimum number and composition of clusters. Conventional statistical tests are briefly reviewed, before considering the posterior distribution associated with partitioning data into clusters. The posterior distribution is calculated by marginalising out the unknown cluster centres, and is different from the likelihood associated with mixture models. The calculation is equivalent to that used to obtain the Bayesian Information Criterion (BIC), but is exact, without a Laplace approximation. The result includes a sum of squares term, and terms that depend on the number and composition of clusters and penalise the number of free parameters in the model. The usual BIC is shown to be unsuitable for clustering applications unless the number of items in each cluster is sufficiently large.
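A sketch of the exact marginalisation in the simplest conjugate setting is given below: for one-dimensional Gaussian data with known variance and a Gaussian prior on each cluster centre, the log marginal likelihood has a closed form containing a within-cluster sum-of-squares term plus size-dependent terms, mirroring the structure described above; the variance and prior values are illustrative assumptions.

```python
import numpy as np

# Closed-form log marginal likelihood of a cluster after integrating out its
# unknown centre (1-D Gaussian data, known variance sigma2, N(m0, tau2) prior).

def log_marginal_cluster(y, sigma2=1.0, m0=0.0, tau2=10.0):
    n, ybar = len(y), np.mean(y)
    ss = np.sum((y - ybar) ** 2)                       # sum-of-squares term
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.log(1.0 + n * tau2 / sigma2)    # penalty growing with cluster size
            - 0.5 * ss / sigma2
            - 0.5 * (ybar - m0) ** 2 / (sigma2 / n + tau2))

rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])

one_cluster = log_marginal_cluster(y)
two_clusters = log_marginal_cluster(y[:30]) + log_marginal_cluster(y[30:])
print("log marginal-likelihood difference (2 vs 1 clusters):", two_clusters - one_cluster)
```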
We say that a continuous real-valued function $x$ admits the Hurst roughness exponent $H$ if the $p^{\text{th}}$ variation of $x$ converges to zero if $p>1/H$ and to infinity if $p<1/H$. For the sample paths of many stochastic processes, such as fractional Brownian motion, the Hurst roughness exponent exists and equals the standard Hurst parameter. In our main result, we provide a mild condition on the Faber--Schauder coefficients of $x$ under which the Hurst roughness exponent exists and is given as the limit of the classical Gladyshev estimates $\widehat H_n(x)$. This result can be viewed as a strong consistency result for the Gladyshev estimators in an entirely model-free setting, because no assumption whatsoever is made on the possible dynamics of the function $x$. Nonetheless, our proof is probabilistic and relies on a martingale that is hidden in the Faber--Schauder expansion of $x$. Since the Gladyshev estimators are not scale-invariant, we construct several scale-invariant estimators that are derived from the sequence $(\widehat H_n)_{n\in\mathbb N}$. We also discuss how a dynamic change in the Hurst roughness parameter of a time series can be detected. Finally, we extend our results to the case in which the $p^{\text{th}}$ variation of $x$ is defined over a sequence of unequally spaced partitions. Our results are illustrated by means of high-frequency financial time series.
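As a hedged illustration, the Python sketch below computes one common form of the Gladyshev estimate on dyadic partitions of $[0,1]$, using the fact that the sum of squared dyadic increments scales like $2^{n(1-2H)}$, and applies it to a simulated Brownian path (true $H = 1/2$); the paper's exact definition and its scale-invariant variants may differ in detail.

```python
import numpy as np

# Gladyshev-type estimate on the dyadic grid t_k = k 2^{-n}:
# H_hat_n = (1 - log2(sum of squared increments) / n) / 2.

rng = np.random.default_rng(6)
N = 2 ** 16
x = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / N), N))])  # BM on [0, 1]

for n in range(8, 15):
    step = N // 2 ** n                                   # sub-sample to mesh 2^{-n}
    increments = np.diff(x[::step])
    s = np.sum(increments ** 2)
    h_hat = 0.5 * (1.0 - np.log2(s) / n)
    print(f"n = {n:2d}   H_hat_n = {h_hat:.3f}")
```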
Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. For instance, in autonomous driving, we would like the driving system to issue an alert and hand over control to humans when it detects unusual scenes or objects that it has never seen before and cannot make a safe decision about. This problem first emerged in 2017 and has since received increasing attention from the research community, leading to a plethora of methods, ranging from classification-based to density-based to distance-based ones. Meanwhile, several other problems are closely related to OOD detection in terms of motivation and methodology. These include anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD). Despite having different definitions and problem settings, these problems often confuse readers and practitioners, and as a result, some existing studies misuse these terms. In this survey, we first present a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. Under our framework, these five problems can be seen as special cases or sub-tasks, and are easier to distinguish. We then conduct a thorough review of each of the five areas by summarizing their recent technical developments. We conclude this survey with open challenges and potential research directions.
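As a concrete example of one classic classification-based score (a baseline in the literature, not specific to this survey), the Python sketch below computes the maximum softmax probability (MSP) of a classifier's logits and flags low-confidence inputs as OOD candidates; the logits and threshold are toy values.

```python
import numpy as np

# Maximum softmax probability (MSP): in-distribution inputs tend to receive
# confident predictions, so a low MSP flags a candidate OOD input.

def msp_score(logits):
    z = logits - logits.max(axis=-1, keepdims=True)      # stabilise the softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)                                # confidence of the top class

logits_in = np.array([[6.0, 0.5, -1.0], [5.0, -1.0, 0.2]])   # confident predictions
logits_ood = np.array([[0.3, 0.2, 0.1]])                     # near-uniform prediction
threshold = 0.7
for name, logits in [("in-dist", logits_in), ("ood", logits_ood)]:
    scores = msp_score(logits)
    print(name, scores, "flag OOD:", scores < threshold)
```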
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
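For reference, the Python sketch below implements the standard single-machine GCN propagation rule $H' = \sigma(\hat{A} H W)$ on a toy graph; in the distributed setting described above, the rows of the node-feature matrix and of the normalised adjacency would be split across agents (so computing $\hat{A} H$ requires exchanging features only along existing edges), which is only noted in the comments here and not implemented.

```python
import numpy as np

# Single GCN layer: H_next = ReLU(A_hat @ H @ W), with A_hat the symmetrically
# normalised adjacency including self-loops.  In a distributed setting, rows of H
# and A_hat would be partitioned across agents.

rng = np.random.default_rng(7)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)        # toy 4-node graph
H = rng.standard_normal((4, 3))                   # node features
W = rng.standard_normal((3, 2))                   # layer weights

A_tilde = A + np.eye(4)                           # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))         # D^{-1/2} (A + I) D^{-1/2}

H_next = np.maximum(A_hat @ H @ W, 0.0)           # ReLU(A_hat H W)
print(H_next)
```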