Causal inference with spatial environmental data is often challenging due to the presence of interference: outcomes for observational units depend on some combination of local and non-local treatment. This is especially relevant when estimating the effect of power plant emissions controls on population health, as pollution exposure is dictated by both (i) the location of point-source emissions and (ii) the transport of pollutants across space via dynamic physical-chemical processes. In this work, we estimate the effectiveness of air quality interventions at coal-fired power plants in reducing two adverse health outcomes in Texas in 2016: pediatric asthma ED visits and Medicare all-cause mortality. We develop methods for causal inference with interference when the underlying network structure is not known with certainty and instead must be estimated from ancillary data. We offer a Bayesian, spatial mechanistic model for the interference mapping, which we combine with a flexible non-parametric outcome model to marginalize estimates of causal effects over uncertainty in the structure of interference. Our analysis finds some evidence that emissions controls at upwind power plants reduce asthma ED visits and all-cause mortality; however, accounting for uncertainty in the interference structure renders the results largely inconclusive.
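The marginalization over interference uncertainty can be illustrated with a minimal Monte Carlo sketch: draw candidate interference networks, compute each unit's exposure to treated upwind plants, fit an outcome model per draw, and pool the resulting effect estimates. Everything below is hypothetical (random network draws, synthetic data, and a gradient-boosted regressor standing in for the paper's non-parametric outcome model); it shows the shape of the computation, not the paper's estimator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n_units, n_plants, n_draws = 200, 20, 50

# Hypothetical data: plant-level treatments and unit-level outcomes.
Z = rng.binomial(1, 0.5, n_plants)      # 1 = emissions control installed
X = rng.normal(size=(n_units, 3))       # unit-level covariates
y = rng.normal(size=n_units)            # health outcome (placeholder)

effects = []
for _ in range(n_draws):
    # One draw of the interference network: W[i, j] = weight of plant j's
    # influence on unit i (random here; the paper derives these weights
    # from a Bayesian mechanistic pollution-transport model).
    W = rng.dirichlet(np.ones(n_plants), size=n_units)
    exposure = W @ Z                    # treated upwind exposure per unit
    model = GradientBoostingRegressor().fit(
        np.column_stack([exposure, X]), y)
    # Contrast full vs. zero exposure to treated upwind plants.
    hi = model.predict(np.column_stack([np.ones(n_units), X]))
    lo = model.predict(np.column_stack([np.zeros(n_units), X]))
    effects.append((hi - lo).mean())

# Pooling over draws marginalizes the effect estimate over network
# uncertainty, instead of conditioning on one estimated network.
print(np.mean(effects), np.std(effects))
```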
Complex systems consisting of different kinds of entities that interact in different ways can be modeled as multilayer networks. This paper uses the tensor formalism with the Einstein tensor product to model this type of network. Several centrality measures that are well known for single-layer networks are extended to multilayer networks using tensors, and their properties are investigated. In particular, subgraph centrality measures based on the exponential and the resolvent of a tensor are considered. Krylov subspace methods are introduced for computing approximations of the different measures for large multilayer networks.
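A minimal sketch of the two subgraph centralities follows, using the common simplification of flattening a multilayer adjacency tensor into a supra-adjacency matrix (the paper instead works with the tensor and the Einstein product directly, and uses Krylov methods rather than dense matrix functions for large networks). The coupling weight and toy network are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy multilayer network: n nodes, L layers, one adjacency matrix per layer.
n, L = 4, 2
rng = np.random.default_rng(0)
A = (rng.random((L, n, n)) < 0.4).astype(float)
A = np.triu(A, 1) + np.transpose(np.triu(A, 1), (0, 2, 1))  # symmetrize

# Supra-adjacency: layer adjacencies on the diagonal blocks, identity
# coupling between layer copies of each node (gamma is an assumed weight).
gamma = 1.0
S = np.zeros((n * L, n * L))
for l in range(L):
    S[l*n:(l+1)*n, l*n:(l+1)*n] = A[l]
for l in range(L):
    for m in range(L):
        if l != m:
            S[l*n:(l+1)*n, m*n:(m+1)*n] = gamma * np.eye(n)

# Exponential subgraph centrality: diag(expm(S)); resolvent version:
# diag((I - alpha*S)^{-1}) with 0 < alpha < 1/rho(S).
exp_centrality = np.diag(expm(S))
rho = max(abs(np.linalg.eigvals(S)))
alpha = 0.5 / rho
res_centrality = np.diag(np.linalg.inv(np.eye(n * L) - alpha * S))

# Aggregate a per-node centrality by summing over layer copies.
print(exp_centrality.reshape(L, n).sum(axis=0))
print(res_centrality.reshape(L, n).sum(axis=0))
```

For large networks the dense `expm` and `inv` calls are infeasible, which is where the Krylov subspace approximations the paper introduces come in.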
Many public health interventions are conducted in settings where individuals are connected to one another and the intervention assigned to randomly selected individuals may spill over to other individuals they are connected to. In these spillover settings, the effects of such interventions can be quantified in several ways. The average individual effect measures the intervention effect among those directly treated, while the spillover effect measures the effect among those connected to those directly treated. In addition, the overall effect measures the average intervention effect across the study population, including both those directly treated and those to whom the intervention spills over but who are not directly treated. Here, we develop methods for study design with the aim of estimating individual, spillover, and overall effects. In particular, we consider an egocentric network-based randomized design in which a set of index participants is recruited from the population and randomly assigned to treatment, while data are also collected from their untreated network members. We use the potential outcomes framework to define two clustered regression modeling approaches and clarify the underlying assumptions required to identify and estimate causal effects. We then develop sample size formulas for detecting individual, spillover, and overall effects. We investigate the roles of the intra-class correlation coefficient and the probability of treatment allocation in the required number of egocentric networks with a fixed number of network members per network, and vice versa.
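As a minimal sketch of the kind of calculation such sample size formulas involve, the function below uses the standard cluster-randomized design-effect form, with variance inflation 1 + (m - 1)ρ from the intra-class correlation and a penalty for unequal treatment allocation. The function name and exact formula are assumptions; the paper's formulas for individual, spillover, and overall effects will differ in detail.

```python
from scipy.stats import norm

def n_networks(delta, sigma, m, icc, p_alloc=0.5, alpha=0.05, power=0.8):
    """Approximate number of egocentric networks needed to detect a mean
    difference `delta` with outcome SD `sigma`, `m` members per network,
    intra-class correlation `icc`, and treatment allocation probability
    `p_alloc` (a generic design-effect formula, not the paper's)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    deff = 1 + (m - 1) * icc                    # clustering inflation
    var_alloc = 1 / (p_alloc * (1 - p_alloc))   # allocation penalty
    return (z**2) * (sigma**2) * deff * var_alloc / (m * delta**2)

# Example: a larger ICC inflates the required number of networks.
for icc in (0.01, 0.05, 0.10):
    print(icc, round(n_networks(delta=0.3, sigma=1.0, m=5, icc=icc)))
```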
Markerless Human Pose Estimation (HPE) has proved its potential to support decision making and assessment in many fields of application. HPE is often preferred to traditional marker-based motion capture systems due to the ease of setup, portability, and affordable cost of the technology. However, the exploitation of HPE in biomedical applications is still under investigation. This review aims to provide an overview of current biomedical applications of HPE. We examine the main features of HPE approaches and discuss whether those features are of interest to biomedical applications. We also identify the areas where HPE is already in use and present the peculiarities and trends followed by researchers and practitioners. We include 25 approaches to HPE and more than 40 studies of HPE applied to motor development assessment, neuromuscular rehabilitation, and gait and posture analysis. We conclude that markerless HPE offers great potential for extending diagnosis and rehabilitation outside hospitals and clinics, toward the paradigm of remote medical care.
Navigation of terrestrial robots is typically addressed either with simultaneous localization and mapping (SLAM) followed by classical planning on the dynamically created maps, or with machine learning (ML), often through end-to-end training with reinforcement learning (RL) or imitation learning (IL). Recently, modular designs have achieved promising results, and hybrid algorithms that combine ML with classical planning have been proposed. Existing methods implement these combinations with hand-crafted functions, which cannot fully exploit the complementary nature of the policies or the complex regularities between scene structure and planning performance. Our work builds on the hypothesis that the strengths and weaknesses of neural planners and classical planners follow regularities that can be learned from training data, in particular from interactions. This is grounded in the assumption that both trained planners and the mapping algorithms underlying classical planning are subject to failure cases that depend on the semantics of the scene, and that this dependence is learnable: for instance, certain areas, objects, or scene structures can be reconstructed more easily than others. We propose a hierarchical method composed of a high-level planner that dynamically switches between a classical and a neural planner. We fully train all neural policies in simulation and evaluate the method in both simulation and real experiments with a LoCoBot robot, showing significant gains in performance, particularly in the real environment. We also offer qualitative conjectures about the nature of the data regularities exploited by the high-level planner.
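The switching structure can be sketched as a high-level policy that, at each step, predicts which low-level planner is more likely to succeed from scene features. All interfaces below are hypothetical, and a logistic regression stands in for the learned high-level policy the paper trains in simulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class HierarchicalPlanner:
    """Minimal sketch: a high-level policy switches per step between a
    classical and a neural low-level planner, based on learned
    regularities between scene features and planner success
    (hypothetical interfaces, not the paper's implementation)."""

    def __init__(self, classical_planner, neural_planner, switch_model):
        self.planners = {"classical": classical_planner,
                         "neural": neural_planner}
        self.switch_model = switch_model  # trained success classifier

    def act(self, observation, scene_features):
        # Predict which planner is more likely to succeed here.
        p_neural = self.switch_model.predict_proba(
            scene_features.reshape(1, -1))[0, 1]
        choice = "neural" if p_neural > 0.5 else "classical"
        return self.planners[choice](observation), choice

# Usage with stand-in components and random training data:
rng = np.random.default_rng(0)
switch = LogisticRegression().fit(rng.normal(size=(100, 8)),
                                  rng.binomial(1, 0.5, 100))
planner = HierarchicalPlanner(lambda obs: "waypoint_from_map",
                              lambda obs: "action_from_policy",
                              switch)
print(planner.act(observation=None, scene_features=rng.normal(size=8)))
```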
Poisson process models are defined in terms of their rates for the outage and restore processes in power system resilience events. These outage and restore processes readily yield the performance curves that track the evolution of resilience events, and the area, nadir, and duration of the performance curves are standard resilience metrics. This letter analyzes typical resilience events through the area, nadir, and duration of mean performance curves. Explicit and intuitive formulas for these metrics are derived in terms of the Poisson process model parameters, which can be estimated from utility data. This clarifies how metrics of typical resilience events are calculated and what they depend on. The metric formulas are derived for lognormal, exponential, or constant rates of restoration. The method is illustrated with a typical North American transmission event. Similarly simple formulas are obtained for the area metric for empirical power system data.
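The letter derives these metrics analytically; the sketch below only simulates one event to make the performance curve and its area, nadir, and duration concrete. The rates and lognormal restore-delay parameters are illustrative, not fitted to utility data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Outages arrive as a Poisson process over [0, T_out]; each outage is
# restored after a lognormal delay (one of the restore-rate choices
# considered in the letter; parameters here are illustrative).
T_out, outage_rate = 10.0, 3.0            # hours, outages per hour
n_out = rng.poisson(outage_rate * T_out)
t_outage = np.sort(rng.uniform(0, T_out, n_out))
t_restore = t_outage + rng.lognormal(mean=2.0, sigma=1.0, size=n_out)

# Performance curve P(t): components in service relative to pre-event
# level (0 = all up); -1 at each outage, +1 at each restore.
events = np.concatenate([np.column_stack([t_outage, -np.ones(n_out)]),
                         np.column_stack([t_restore, np.ones(n_out)])])
events = events[np.argsort(events[:, 0])]
times, perf = events[:, 0], np.cumsum(events[:, 1])

# Standard resilience metrics of the (step) performance curve.
nadir = perf.min()                        # worst simultaneous outage count
duration = times[-1] - times[0]           # event duration
area = np.sum(perf[:-1] * np.diff(times))  # area below pre-event level
print(nadir, duration, area)
```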
Structural condition identification based on monitoring data is important for automatic civil infrastructure asset management. Nevertheless, the monitoring data are almost always insufficient, because the real-time monitoring data of a structure reflect only a limited number of structural conditions, while the number of possible structural conditions is infinite. With insufficient monitoring data, identification performance may degrade significantly. This study tackles this challenge by proposing a deep transfer learning (TL) approach for structural condition identification. It effectively integrates physics-based and data-driven methods by generating varied training data from a calibrated finite element (FE) model, pretraining a deep learning (DL) network on these data, and transferring the embedded knowledge to the real monitoring/testing domain. Its performance is demonstrated in a challenging case: vibration-based condition identification of steel frame structures with bolted connection damage. The results show that even though the training data come from a different domain and carry different types of labels, intrinsic physics can be learned through the pretraining process, and the TL results are clearly improved, with identification accuracy increasing from 81.8% to 89.1%. Comparative studies show that SHMnet with three convolutional layers stands out as the pretraining DL architecture, with identification accuracy 21.8% and 25.5% higher than that of the other two networks, VGGnet-16 and ResNet-18. The findings of this study advance the potential application of the proposed approach toward expert-level condition identification based on limited real-world training data.
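The pretrain-then-transfer recipe can be sketched in PyTorch: pretrain a small 1-D CNN on FE-simulated vibration signals with simulation labels, then replace the classification head and fine-tune on the scarce real monitoring data. The network below only stands in for SHMnet (three conv layers, as in the paper's best architecture, but with assumed layer sizes), and the random tensors stand in for real signals.

```python
import torch
import torch.nn as nn

# Small 1-D CNN standing in for SHMnet (layer sizes are illustrative).
def make_net(n_classes):
    return nn.Sequential(
        nn.Conv1d(1, 16, 7), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, 5), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(32, 64, 3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        nn.Flatten(), nn.Linear(64, n_classes))

# 1) Pretrain on FE-simulated signals (different domain, different labels).
net = make_net(n_classes=10)
sim_x, sim_y = torch.randn(64, 1, 1024), torch.randint(0, 10, (64,))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    nn.functional.cross_entropy(net(sim_x), sim_y).backward()
    opt.step()

# 2) Transfer: swap the head for the real condition labels and fine-tune
#    on the (scarce) monitoring/testing data with a smaller learning rate.
net[-1] = nn.Linear(64, 4)                  # e.g., 4 damage conditions
real_x, real_y = torch.randn(16, 1, 1024), torch.randint(0, 4, (16,))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(5):
    opt.zero_grad()
    nn.functional.cross_entropy(net(real_x), real_y).backward()
    opt.step()
```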
Comparative effectiveness research frequently addresses a time-to-event outcome and can require unique considerations in the presence of treatment noncompliance. Motivated by the challenges in addressing noncompliance in the ADAPTABLE pragmatic trial, we develop a multiply robust estimator of the principal survival causal effects under the principal ignorability and monotonicity assumptions. The multiply robust estimator involves several working models, including models for treatment assignment, the compliance strata, censoring, and the time-to-event of interest. The proposed estimator is consistent even if one, and sometimes two, of the working models are misspecified. We apply the multiply robust method in the ADAPTABLE trial to evaluate the effect of low- versus high-dose aspirin assignment on patients' death and hospitalization from cardiovascular diseases. We find that, compared with low-dose assignment, high-dose assignment leads to differential effects among always high-dose takers, compliers, and always low-dose takers. Such treatment effect heterogeneity contributes to the null intention-to-treat effect and suggests that policy makers should design personalized strategies based on potential compliance patterns to maximize treatment benefits for the entire study population. We further perform a formal sensitivity analysis to investigate the robustness of our causal conclusions under violations of the two identification assumptions specific to noncompliance.
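The following is not the multiply robust estimator itself, only a minimal sketch of one of its ingredients: principal-score weighting for the complier stratum under monotonicity and principal ignorability, on synthetic data, with a binary event indicator standing in for the time-to-event outcome and censoring ignored.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))                   # baseline covariates
A = rng.binomial(1, 0.5, n)                   # assignment: 1 = high dose
D = np.where(A == 1, rng.binomial(1, 0.7, n), # dose actually taken
             rng.binomial(1, 0.2, n))
Y = rng.binomial(1, 0.1, n)                   # event by a fixed horizon

# Principal-score working models: P(D=1 | X), fit separately per arm.
# Under monotonicity: P(always-high|X) = p0, P(complier|X) = p1 - p0,
# P(always-low|X) = 1 - p1.
p1 = LogisticRegression().fit(X[A == 1], D[A == 1]).predict_proba(X)[:, 1]
p0 = LogisticRegression().fit(X[A == 0], D[A == 0]).predict_proba(X)[:, 1]
e_c = np.clip(p1 - p0, 1e-6, None)

# Principal-score weighting for compliers: high-dose takers under A=1 mix
# always-high takers and compliers; low-dose takers under A=0 mix
# always-low takers and compliers.
i1 = (A == 1) & (D == 1)
i0 = (A == 0) & (D == 0)
tau_complier = (np.average(Y[i1], weights=(e_c / p1)[i1])
                - np.average(Y[i0], weights=(e_c / (1 - p0))[i0]))
print(tau_complier)
```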
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
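As a minimal sketch of the kind of model such surveys cover, consider TransE, a widely used translation-based embedding model that scores a triple (h, r, t) by -||h + r - t||. The embeddings below are random rather than trained; a real model would learn them from the KG.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_entities, n_relations = 50, 1000, 20

# TransE represents entities and relations as vectors and scores a triple
# (h, r, t) by how well h + r approximates t (untrained here).
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def transe_score(h, r, t):
    """Higher (less negative) score = more plausible triple."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

print(transe_score(3, 7, 42))

# Link prediction: rank all candidate tails for a query (h, r, ?).
h, r = 3, 7
scores = -np.linalg.norm(E[h] + R[r] - E, axis=1)
print(np.argsort(scores)[::-1][:5])   # top-5 predicted tail entities
```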
A key requirement for the success of supervised deep learning is a large labeled dataset, a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework to segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (a domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (a problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements over other self-supervised and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training, corresponding to only 4% (for ACDC) of the training data used to train the benchmark.
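A minimal sketch of a local contrastive loss in the spirit of the problem-specific cue is given below, assuming corresponding spatial locations across two augmented views of the same image are positives and all other locations are negatives. This is a generic formulation, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(f1, f2, temperature=0.1):
    """Contrast corresponding local regions of two feature maps f1, f2
    (shape [B, C, H, W]) from two augmentations of the same image:
    matching locations are positives, all other locations in the same
    image are negatives (a simplified local contrastive objective)."""
    B, C, H, W = f1.shape
    z1 = F.normalize(f1.flatten(2).transpose(1, 2), dim=-1)  # [B, HW, C]
    z2 = F.normalize(f2.flatten(2).transpose(1, 2), dim=-1)
    logits = torch.bmm(z1, z2.transpose(1, 2)) / temperature  # [B, HW, HW]
    labels = torch.arange(H * W, device=f1.device).expand(B, -1)
    return F.cross_entropy(logits.reshape(B * H * W, H * W),
                           labels.reshape(-1))

# Usage with dummy decoder features from two augmented views of a batch:
f1, f2 = torch.randn(2, 32, 8, 8), torch.randn(2, 32, 8, 8)
print(local_contrastive_loss(f1, f2))
```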
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions for volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to roughly 10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at https://github.com/holgerroth/3Dunet_abdomen_cascade.
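The two-stage inference can be sketched as follows: stage 1 produces a coarse candidate mask on the full volume, a bounding box with a margin is taken around it, and stage 2 segments only the crop. The model callables and the margin parameter below are stand-ins, not the released models.

```python
import numpy as np

def cascade_inference(volume, fcn_stage1, fcn_stage2, margin=8):
    """Two-stage coarse-to-fine segmentation: stage 1 locates a candidate
    region on the full volume; stage 2 classifies only the cropped region
    (stand-in model interfaces; in the paper roughly 10% of the voxels
    remain for the second stage)."""
    coarse = fcn_stage1(volume)                  # binary candidate mask
    zz, yy, xx = np.where(coarse > 0)
    lo = [max(int(v.min()) - margin, 0) for v in (zz, yy, xx)]
    hi = [min(int(v.max()) + margin + 1, s)
          for v, s in zip((zz, yy, xx), volume.shape)]
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = fcn_stage2(crop)                      # detailed labels on crop
    out = np.zeros(volume.shape, dtype=fine.dtype)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return out

# Usage with dummy thresholding "models" on a toy volume:
vol = np.random.rand(64, 64, 64)
seg = cascade_inference(vol,
                        lambda v: (v > 0.95).astype(np.uint8),
                        lambda c: (c > 0.5).astype(np.uint8))
print(seg.shape, seg.sum())
```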