Understanding human mobility patterns is important in applications as diverse as urban planning, public health, and political organizing. One rich source of data on human mobility is taxi ride data. Using the city of Chicago as a case study, we examine data from taxi rides in 2016 with the goal of understanding how neighborhoods are interconnected. This analysis provides a sense of which neighborhoods individuals use taxis to travel between, suggesting regions on which to focus new public transit development efforts. It also maps traffic circulation patterns and shows where in the city people are traveling from and where they are heading to, which could inform traffic or road pollution mitigation efforts. For the first application, representing the data as an undirected graph suffices: transit lines run in both directions, so knowing which neighborhoods have high rates of taxi travel between them is enough to argue for placing public transit along those routes. To understand the flow of people throughout the city, however, we must distinguish the neighborhoods from which people depart from the areas at which they arrive, and this requires methods that can handle directed graphs. All code can be found at https://github.com/Nikunj-Gupta/Spectral-Clustering-Directed-Graphs.
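As a concrete illustration of the two representations, the following sketch (not the repository's code) builds both a directed and an undirected weighted graph from a trip table, assuming hypothetical pandas columns pickup_area and dropoff_area holding community-area labels.

```python
# Minimal sketch: directed vs. undirected weighted graphs from taxi trips.
# The DataFrame `trips` and its column names are assumptions for illustration.
import pandas as pd
import networkx as nx

def build_graphs(trips: pd.DataFrame):
    counts = (trips.groupby(["pickup_area", "dropoff_area"])
                   .size()
                   .reset_index(name="weight"))

    # Directed graph: keeps the origin/destination distinction needed to
    # study circulation (where people leave from vs. where they head to).
    G_dir = nx.DiGraph()
    G_dir.add_weighted_edges_from(
        counts[["pickup_area", "dropoff_area", "weight"]].itertuples(index=False))

    # Undirected graph: collapses both directions, which is enough for the
    # transit-planning use case since lines run both ways.
    G_undir = nx.Graph()
    for u, v, w in G_dir.edges(data="weight"):
        prev = G_undir.get_edge_data(u, v, {"weight": 0})["weight"]
        G_undir.add_edge(u, v, weight=prev + w)
    return G_dir, G_undir
```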
The analysis of compositional data has been dominated by the use of logratio transformations, which ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to allow the logarithmic transformation. An alternative new approach, called the 'chiPower' transformation, allows data zeros: it combines the standardization inherent in the chi-square distance of correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified because it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, and are then equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially for high-dimensional data, this alternative approach can achieve such a high level of coherence and isometry that it is a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, the power can be used as a tuning parameter to optimize predictive accuracy through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since each is identified with a single compositional part rather than a ratio.
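The exact chiPower definition is given in the paper; the sketch below only illustrates the Box-Cox ingredient it relies on, namely that a power transform of chi-square standardized proportions tends to a logarithmic transform, and hence to logratio-style distances, as the power parameter tends to zero. The closure, standardization and toy data here are illustrative assumptions, not the paper's code.

```python
# Illustration (not the chiPower definition itself) of the limiting behaviour
# stated in the abstract: Box-Cox distances approach log-based distances
# as the power parameter goes to zero.
import numpy as np

def closure(X):
    """Row-normalize a strictly positive data matrix to proportions."""
    return X / X.sum(axis=1, keepdims=True)

def box_cox(x, lam):
    """Box-Cox power transform; tends to log(x) as lam -> 0."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def chi_square_standardized(P):
    """Chi-square standardization as in correspondence analysis (simplified):
    row profiles divided by the square root of the column masses."""
    c = P.mean(axis=0)          # column masses (assuming equal row weights)
    return P / np.sqrt(c)

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, size=(5, 4))    # strictly positive toy data
S = chi_square_standardized(closure(X))
for lam in (0.5, 0.1, 0.01):
    d_power = np.linalg.norm(box_cox(S[0], lam) - box_cox(S[1], lam))
    print(lam, d_power)                  # approaches the log-based distance
print("log-based distance:", np.linalg.norm(np.log(S[0]) - np.log(S[1])))
```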
A causal decomposition analysis allows researchers to determine whether the difference in a health outcome between two groups can be attributed to a difference in each group's distribution of one or more modifiable mediator variables. With this knowledge, researchers and policymakers can focus on designing interventions that target these mediator variables. Existing methods for causal decomposition analysis either focus on one mediator variable or assume that each mediator variable is conditionally independent given the group label and the mediator-outcome confounders. In this paper, we propose a flexible causal decomposition analysis method that can accommodate multiple correlated and interacting mediator variables, which are frequently seen in studies of health behaviors and studies of environmental pollutants. We extend a Monte Carlo-based causal decomposition analysis method to this setting by using a multivariate mediator model that can accommodate any combination of binary and continuous mediator variables. Furthermore, we state the causal assumptions needed to identify both joint and path-specific decomposition effects through each mediator variable. To illustrate the reduction in bias and confidence interval width of the decomposition effects under our proposed method, we perform a simulation study. We also apply our approach to examine whether differences in smoking status and dietary inflammation score explain any of the Black-White differences in incident diabetes using data from a national cohort study.
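The sketch below illustrates the general Monte Carlo decomposition idea referred to above, not the authors' estimator: fit a mediator model and an outcome model, then repeatedly draw mediators as if they followed the other group's distribution and compare expected outcomes. The single continuous mediator, toy data and models are hypothetical simplifications of the multivariate setting in the paper.

```python
# Schematic Monte Carlo decomposition with a single simulated mediator.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 5000
g = rng.integers(0, 2, n)                      # group label
c = rng.normal(size=n)                         # mediator-outcome confounder
m = 0.8 * g + 0.5 * c + rng.normal(size=n)     # mediator depends on group
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.7 * m + 0.3 * c + 0.4 * g))))

out_model = LogisticRegression().fit(np.column_stack([g, m, c]), y)
med_model = LinearRegression().fit(np.column_stack([g, c]), m)
sigma = np.std(m - med_model.predict(np.column_stack([g, c])))

def expected_outcome(group, mediator_group, n_draws=200):
    """Mean predicted outcome for `group`, with mediators drawn as if their
    distribution were that of `mediator_group` (the Monte Carlo step)."""
    preds = []
    for _ in range(n_draws):
        m_draw = med_model.predict(np.column_stack([np.full(n, mediator_group), c])) \
                 + rng.normal(scale=sigma, size=n)
        Xd = np.column_stack([np.full(n, group), m_draw, c])
        preds.append(out_model.predict_proba(Xd)[:, 1].mean())
    return np.mean(preds)

# Part of the group disparity attributable to the mediator distribution:
print(expected_outcome(1, 1) - expected_outcome(1, 0))
```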
Numerical multi-phase fluid flow simulation can significantly aid the development of effective remediation strategies for groundwater systems contaminated with Dense Non-Aqueous Phase Liquid (DNAPL). Incorporating the lithological heterogeneities of the aquifer into the model domain is crucial for developing robust numerical simulators. Previous studies have attempted to incorporate lithological heterogeneities into the domain; however, most of these numerical simulators are based on the Finite Volume Method (FVM) and the Finite Difference Method (FDM), which have limited applicability to field-scale aquifers. The Finite Element Method (FEM) can be highly useful for field-scale simulation of DNAPL infiltration because of its consistent accuracy on irregular study domains and the availability of higher-order basis functions. In this work, an FEM-based model has been developed to simulate DNAPL infiltration in a hypothetical field-scale aquifer. The model results demonstrate the effect of meso-scale heterogeneities, specifically clay lenses, on the migration and accumulation of DNAPL within the aquifer. Furthermore, this research provides valuable insights for the development of appropriate remediation strategies for contaminated aquifers in general.
Formation control of multi-agent systems has been a prominent research topic, spanning both theoretical and practical domains, over the past two decades. Our study considers the leader-follower framework and addresses two critical, previously overlooked aspects. First, we investigate the impact of an unknown nonlinear manifold, which adds complexity to the formation control problem. Second, we address the practical constraint of a limited follower sensing range, which makes it difficult for followers to localize the leader accurately. Our core objective is to employ Koopman operator theory and Extended Dynamic Mode Decomposition (EDMD) to build a reliable algorithm with which the follower robot can predict the leader's position. Experiments on an elliptical paraboloid manifold with two omni-directional wheeled robots validate the effectiveness of the prediction algorithm.
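A minimal sketch of the EDMD ingredient mentioned above, not the paper's implementation: snapshots of the leader's state are lifted through a dictionary of observables, a finite-dimensional Koopman approximation is fitted by least squares, and the result is used to predict the next state. The monomial dictionary and toy trajectory are assumptions for illustration.

```python
# Extended Dynamic Mode Decomposition on a toy 2-D trajectory.
import numpy as np

def dictionary(x):
    """Hypothetical observable dictionary: monomials up to degree 2 of a 2-D state."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def edmd_fit(X, Y):
    """X, Y: (n_snapshots, state_dim) arrays with Y[k] the successor of X[k]."""
    PsiX = np.array([dictionary(x) for x in X])
    PsiY = np.array([dictionary(y) for y in Y])
    # Koopman matrix K minimizing ||PsiY - PsiX K||_F
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    # Map back from the lifted space to the state (the state coordinates are
    # themselves dictionary entries, so this is just a selection matrix).
    B = np.zeros((PsiX.shape[1], X.shape[1]))
    B[1, 0] = B[2, 1] = 1.0
    return K, B

def predict_next(K, B, x):
    return dictionary(x) @ K @ B

# Usage on a toy leader trajectory:
t = np.linspace(0, 10, 200)
traj = np.column_stack([np.cos(t), np.sin(2 * t)])
K, B = edmd_fit(traj[:-1], traj[1:])
print(predict_next(K, B, traj[100]), traj[101])   # prediction vs. true next state
```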
While there is wide agreement that physical activity is an important component of a healthy lifestyle, it is unclear how many people adhere to public health recommendations on physical activity. The Physical Activity Guidelines (PAG), published by the CDC, provide recommendations for American adults, but it is difficult to assess compliance with them. The PAG further complicate adherence assessment by recommending that activity occur in bouts of at least 10 minutes. To better understand the measurement capabilities of various instruments for quantifying activity, and to propose an approach to evaluating activity relative to the PAG, researchers at Iowa State University administered the Physical Activity Measurement Survey (PAMS) to over 1,000 participants in four Iowa counties. In this paper, we develop a two-part Bayesian measurement error model and apply it to the PAMS data to assess compliance with the PAG in the Iowa adult population. The model accounts for the 10-minute bout requirement put forth in the PAG, corrects biased estimates, and accounts for day-to-day variation in activity. The model is also applied to the nationally representative National Health and Nutrition Examination Survey.
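The following schematic, which is not the authors' Bayesian model, illustrates why a two-part structure with measurement error matters: daily activity is either zero or a positive amount, instrument readings are biased and noisy versions of the truth, and naive compliance estimates computed from the readings can differ noticeably from true compliance. All parameter values and the 150-minute weekly threshold used here are hypothetical choices for the illustration.

```python
# Schematic two-part activity process with an error-prone instrument.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_days = 1000, 7

# Latent person-level usual behaviour
p_active = rng.beta(2, 2, n_people)            # Pr(activity in >=10-min bouts on a day)
mu_active = rng.normal(3.4, 0.4, n_people)     # mean log-minutes when active

# True daily activity: part one (any activity) and part two (amount given activity)
active = rng.binomial(1, p_active[:, None], (n_people, n_days))
minutes = active * np.exp(rng.normal(mu_active[:, None], 0.5, (n_people, n_days)))

# Error-prone instrument: biased and noisy on the log scale for positive days
bias, noise_sd = 0.15, 0.35
observed = np.where(
    minutes > 0,
    np.exp(np.log(np.maximum(minutes, 1e-9)) + bias
           + rng.normal(0, noise_sd, minutes.shape)),
    0.0,
)

# Naive compliance estimate from readings vs. truth (hypothetical 150 min/week)
threshold = 150.0
print("naive:", (observed.sum(axis=1) >= threshold).mean())
print("true: ", (minutes.sum(axis=1) >= threshold).mean())
```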
The application of deep learning to non-stationary temporal datasets can lead to overfitted models that underperform under regime changes. In this work, we propose a modular machine learning pipeline for ranking predictions on temporal panel datasets that is robust under regime changes. The modularity of the pipeline allows the use of different models, including Gradient Boosting Decision Trees (GBDTs) and neural networks, with and without feature engineering. We evaluate our framework on financial data for stock portfolio prediction and find that GBDT models with dropout display high performance, robustness and generalisability with reduced complexity and computational cost. We then demonstrate how online learning techniques, which require no retraining of models, can be used post-prediction to enhance the results. First, we show that dynamic feature projection improves robustness by reducing drawdown during regime changes. Second, we demonstrate that dynamic model ensembling based on selecting models with good recent performance leads to improved Sharpe and Calmar ratios of out-of-sample predictions. We also evaluate the robustness of our pipeline across different data splits and random seeds and find good reproducibility.
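A minimal sketch of post-prediction, online model ensembling by recent performance, in the spirit of (but not identical to) the procedure described above: at each era, the predictions of the top-k models by trailing rank correlation with the realised target are averaged, and no model is retrained. The lookback window and top-k values are illustrative.

```python
# Dynamic model ensembling driven by trailing per-era performance.
import numpy as np
from scipy.stats import spearmanr

def dynamic_ensemble(preds_per_era, targets_per_era, lookback=20, top_k=3):
    """preds_per_era: list of (n_samples_t, n_models) prediction arrays, one per era.
    targets_per_era: list of (n_samples_t,) arrays of realised targets."""
    n_models = preds_per_era[0].shape[1]
    history = []       # per-era rank correlation of each model with the target
    ensembled = []
    for P, y in zip(preds_per_era, targets_per_era):
        if len(history) >= lookback:
            recent = np.mean(history[-lookback:], axis=0)
            chosen = np.argsort(recent)[-top_k:]    # best recent performers
        else:
            chosen = np.arange(n_models)            # warm-up: use all models
        ensembled.append(P[:, chosen].mean(axis=1))
        # Record this era's realised performance for future selection steps
        history.append([spearmanr(P[:, m], y)[0] for m in range(n_models)])
    return ensembled
```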
We present a robust deep incremental learning framework for regression tasks on financial temporal tabular datasets, built upon the incremental use of commonly available tabular and time series prediction models to adapt to the distributional shifts typical of financial data. The framework uses a simple basic building block (decision trees) to build self-similar models of any required complexity, delivering robust performance under adverse conditions such as regime changes, fat-tailed distributions, and low signal-to-noise ratios. As a detailed study, we demonstrate our scheme using XGBoost models trained on the Numerai dataset and show that a two-layer deep ensemble of XGBoost models over different model snapshots delivers high-quality predictions under different market regimes. We also show that the performance of XGBoost models with different numbers of boosting rounds in three scenarios (small, standard and large) increases monotonically with model size and converges towards the generalisation upper bound. We further evaluate the robustness of the model under variation of different hyperparameters, such as model complexity and data sampling settings. Our model has low hardware requirements, as no specialised neural architectures are used and each base model can be trained independently in parallel.
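A minimal sketch, not the authors' full framework, of snapshot ensembling with XGBoost: boosters are saved at increasing numbers of boosting rounds by continuing training from the previous snapshot, and a second layer averages their predictions. The hyperparameters and toy data below are assumptions.

```python
# Two-layer snapshot ensemble of XGBoost boosters on toy, low-signal data.
import numpy as np
import xgboost as xgb

def train_snapshots(X, y, rounds_per_snapshot=100, n_snapshots=5, params=None):
    params = params or {"objective": "reg:squarederror", "max_depth": 5, "eta": 0.01}
    dtrain = xgb.DMatrix(X, label=y)
    boosters, booster = [], None
    for _ in range(n_snapshots):
        booster = xgb.train(params, dtrain,
                            num_boost_round=rounds_per_snapshot,
                            xgb_model=booster)      # continue from previous snapshot
        boosters.append(booster.copy())
    return boosters

def ensemble_predict(boosters, X):
    dtest = xgb.DMatrix(X)
    # Second layer: equal-weight average over snapshots
    return np.mean([b.predict(dtest) for b in boosters], axis=0)

# Usage on toy data with a low signal-to-noise ratio
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = 0.1 * X[:, 0] + rng.normal(size=2000)
models = train_snapshots(X, y)
print(ensemble_predict(models, X[:5]))
```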
This study performs an ablation analysis of Vector Quantized Generative Adversarial Networks (VQGANs), concentrating on image-to-image synthesis using a single NVIDIA A100 GPU. We explore the nuanced effects of varying critical parameters, including the number of epochs, image count, and the attributes of codebook vectors and latent dimensions, under the constraint of limited resources. Notably, our focus is on the vector quantization loss, keeping other hyperparameters and loss components (the GAN loss) fixed. This was done to gain a deeper understanding of the discrete latent space and to explore how varying its size affects reconstruction. Although our results do not surpass existing benchmarks, our findings shed significant light on VQGAN's behaviour on a smaller dataset, particularly concerning artifacts, codebook size optimization, and a comparative analysis with Principal Component Analysis (PCA). The study also identifies a promising direction by introducing 2D positional encodings, revealing a marked reduction in artifacts and offering insights into balancing clarity and overfitting.
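For reference, the component varied in this ablation, the vector quantization step and its loss, can be written compactly in PyTorch as below; the reconstruction and GAN losses that were kept fixed are omitted, and the codebook size and latent dimension shown are just the knobs referred to in the text, not the study's settings.

```python
# Standard vector quantization layer with codebook and commitment losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, codebook_size=256, latent_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.codebook.weight.data.uniform_(-1.0 / codebook_size, 1.0 / codebook_size)
        self.beta = beta

    def forward(self, z_e):                        # z_e: (B, C, H, W) encoder output
        B, C, H, W = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, C)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # Codebook loss pulls codes toward encoder outputs; the commitment
        # term (scaled by beta) pulls encoder outputs toward their codes.
        vq_loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()           # straight-through estimator
        return z_q, vq_loss, idx

# Usage
vq = VectorQuantizer(codebook_size=512, latent_dim=64)
z = torch.randn(2, 64, 16, 16)
z_q, loss, idx = vq(z)
print(z_q.shape, loss.item())
```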
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion, or link prediction, i.e., to predict whether a relationship not in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
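As a small illustration of the kind of model such surveys cover (not code from the survey itself), the sketch below scores triples with TransE, one of the simplest embedding models, and ranks candidate tails for link prediction; the embedding sizes and random initialization are placeholders for learned parameters.

```python
# TransE-style triple scoring and tail ranking with placeholder embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 1000, 20, 50
E = rng.normal(scale=1.0 / np.sqrt(dim), size=(n_entities, dim))   # entity embeddings
R = rng.normal(scale=1.0 / np.sqrt(dim), size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    """Higher (less negative) score means the triple (h, r, t) is more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def rank_tails(h, r, true_t):
    """Link prediction: rank every entity as a candidate tail for (h, r, ?)."""
    scores = -np.linalg.norm(E[h] + R[r] - E, axis=1)
    return int((scores > scores[true_t]).sum()) + 1   # rank of the true tail

print(transe_score(3, 5, 42), rank_tails(3, 5, 42))
```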
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related, and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post-hoc categories, with sub-groups including sparsity, modularity and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often under-appreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.