With billions of people facing moderate or severe food insecurity, the resilience of the global food supply will be of increasing concern due to the effects of climate change and geopolitical events. In this paper we describe a framework to better identify food security hotspots using a combination of remote sensing, deep learning, crop yield modeling, and causal modeling of the food distribution system. While we believe the methods are adaptable to other regions of the world, we focus our analysis on the wheat breadbasket of northern India, which supplies food to a large percentage of the world's population. We present a quantitative analysis of deep learning domain adaptation methods for wheat farm identification based on curated remote sensing data from France. We model climate change impacts on crop yields using the existing crop yield modeling tool WOFOST, and we identify key drivers of crop simulation error using a longitudinal penalized functional regression. A description of a system dynamics model of the food distribution system in India is also presented, along with results of food insecurity identification based on seeding this model with the predicted crop yields.
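As a rough illustration of the regression component above, the following sketch fits a scalar-on-function penalized regression of yield-simulation error on a daily weather curve. The toy data, the choice of daily maximum temperature as the covariate, and the B-spline/ridge setup are assumptions for the example, not the paper's actual longitudinal penalized functional regression.

```python
# Sketch of a scalar-on-function penalized regression, loosely in the spirit of the
# longitudinal penalized functional regression described above. All names and data
# are illustrative placeholders, not the paper's pipeline.
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_fields, n_days = 200, 180                                  # fields x days in the growing season
tmax = rng.normal(30, 5, size=(n_fields, n_days))            # daily max temperature curves (toy)
sim_error = 0.02 * tmax[:, 60:90].mean(axis=1) + rng.normal(0, 0.1, n_fields)  # toy yield-simulation error

# Represent the functional coefficient beta(t) in a B-spline basis over the season.
time_grid = np.linspace(0, 1, n_days).reshape(-1, 1)
basis = SplineTransformer(n_knots=8, degree=3).fit_transform(time_grid)  # (n_days, n_basis)

# Numerical integration of x_i(t) * basis_k(t) gives the design matrix.
Z = tmax @ basis / n_days

# A ridge penalty on the basis coefficients acts as the smoothness penalty.
model = Ridge(alpha=1.0).fit(Z, sim_error)
beta_t = basis @ model.coef_        # recovered beta(t): when in the season temperature drives error
print(beta_t.shape)                 # (n_days,)
```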
In the rapidly evolving cryptocurrency landscape, trust is a critical yet underexplored factor shaping market behaviors and driving user preferences between centralized exchanges (CEXs) and decentralized exchanges (DEXs). Despite its importance, trust remains challenging to measure, limiting the study of its effects on market dynamics. The collapse of FTX, a major CEX, provides a unique natural experiment to examine the measurable impacts of trust and its sudden erosion on the cryptocurrency ecosystem. This pivotal event raised questions about the resilience of centralized trust systems and accelerated shifts toward decentralized alternatives. This research investigates the impacts of the FTX collapse on user trust, focusing on token valuation, trading flows, and sentiment dynamics. Employing causal inference methods, including Regression Discontinuity Design (RDD) and Difference-in-Differences (DID), we reveal significant declines in WETH prices and NetFlow from CEXs to DEXs, signaling a measurable transfer of trust. Additionally, natural language processing methods, including topic modeling and sentiment analysis, uncover the complexities of user responses, highlighting shifts from functional discussions to emotional fragmentation in Binance's community, while Uniswap's sentiment exhibits a gradual upward trend. Despite data limitations and external influences, the findings underscore the intricate interplay between trust, sentiment, and market behavior in the cryptocurrency ecosystem. By bridging blockchain analytics, behavioral finance, and decentralized finance (DeFi), this study contributes to interdisciplinary research, offering a deeper understanding of distributed trust mechanisms and providing critical insights for future investigations into the socio-technical dimensions of trust in digital economies.
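A minimal difference-in-differences sketch of the kind of estimate described above, assuming a daily panel with one row per venue and day; the file name, column names, and the choice of outcome are placeholders rather than the paper's dataset.

```python
# Minimal DiD sketch around the FTX collapse, assuming a panel with columns
# date, venue_type ('cex'/'dex'), and outcome. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("venue_daily_panel.csv", parse_dates=["date"])  # hypothetical file
event_date = pd.Timestamp("2022-11-11")                          # FTX bankruptcy filing

df["treated"] = (df["venue_type"] == "cex").astype(int)          # CEXs as the treated group
df["post"] = (df["date"] >= event_date).astype(int)

# Outcome ~ treated + post + treated:post; the interaction term is the DiD estimate.
did = smf.ols("outcome ~ treated + post + treated:post", data=df).fit(cov_type="HC1")
print(did.params["treated:post"], did.pvalues["treated:post"])
```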
Search and recommendation ecosystems exhibit competition among content creators. This competition has been tackled in a variety of game-theoretic frameworks. Content creators generate documents with the aim of being recommended by a content ranker for various information needs. For the ecosystem, modeled as a content ranking game, to be effective and maximize user welfare, it should guarantee stability, where stability is associated with the existence of a pure Nash equilibrium in the corresponding game. Moreover, if the content ranking algorithm induces a game in which any best-response learning dynamics of the content creators converge to an equilibrium of high welfare, the system is considered highly attractive. However, because classical content ranking algorithms employed by search and recommendation systems rank documents by their distance to information needs, it has been shown that they fail to provide such stability properties. As a result, novel content ranking algorithms have been devised. In this work, we offer an alternative approach: corpus enrichment with a small set of fixed dummy documents. It turns out that, with the right design, such enrichment can lead to a pure Nash equilibrium and even to the convergence of any best-response dynamics to a high-welfare outcome, while still employing the classical/current content ranking approach. We show two such corpus enrichment techniques with tight bounds on the number of documents needed to obtain the desired results. Interestingly, our study is a novel extension of Borel's Colonel Blotto game.
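The toy simulation below illustrates best-response dynamics in a one-dimensional content ranking game where a few fixed dummy documents are added to the corpus and the ranker assigns each information need to its nearest document. It is an illustration of the general setting only, not the paper's enrichment constructions or bounds.

```python
# Toy best-response dynamics in a 1-D content ranking game with corpus enrichment.
# Creators pick positions, the ranker assigns each need to the nearest document,
# and two fixed dummy documents are added. Illustrative only.
import numpy as np

needs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # information needs on a line
dummies = np.array([0.25, 0.75])               # fixed dummy documents (the enrichment)
positions = np.array([0.0, 1.0, 0.5])          # initial documents of 3 creators
grid = np.linspace(0, 1, 101)                  # candidate strategies

def utility(pos, i, positions):
    docs = np.concatenate([positions[:i], [pos], positions[i + 1:], dummies])
    owner = np.abs(needs[:, None] - docs[None, :]).argmin(axis=1)
    return np.sum(owner == i)                  # number of needs won by creator i

for _ in range(50):                            # best-response rounds (capped for the toy example)
    changed = False
    for i in range(len(positions)):
        best = max(grid, key=lambda p: utility(p, i, positions))
        if utility(best, i, positions) > utility(positions[i], i, positions):
            positions[i] = best
            changed = True
    if not changed:
        break
print("final positions:", positions)
```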
Consider public health officials aiming to spread awareness about a new vaccine in a community interconnected by a social network. How can they distribute information with minimal resources, so as to avoid polarization and ensure community-wide convergence of opinion? To tackle such challenges, we initiate the study of the sample complexity of opinion convergence in networks. Our framework builds on the well-studied opinion formation game, where we regard the opinion of each agent as a data-derived model, unlike previous works that treat opinions as data-independent scalars. The opinion model of every agent is initially learned from its local samples and evolves game-theoretically as all agents communicate with neighbors and revise their models towards an equilibrium. Our focus is on the sample complexity needed to ensure that the opinions converge to an equilibrium in which the final model of every agent has low generalization error. Our paper has two main technical results. First, we present a novel polynomial-time optimization framework to quantify the total sample complexity for arbitrary networks when the underlying learning problem is (generalized) linear regression. Second, we leverage this optimization to study the network gain, which measures the improvement in sample complexity when learning over a network compared to learning in isolation. Towards this end, we derive network gain bounds for various network classes, including cliques, star graphs, and random regular graphs. Additionally, our framework provides a method to study sample distribution within the network, suggesting that it is sufficient to allocate samples inversely to the degree. Empirical results on both synthetic and real-world networks strongly support our theoretical findings.
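The sketch below illustrates the flavor of this setup: samples are allocated inversely to degree, each agent fits a local linear regression, and the models are then combined through a Friedkin-Johnsen-style equilibrium z = (I + L)^{-1} s. The equilibrium map and the star-graph example are assumptions for illustration; the paper's game and guarantees may differ.

```python
# Rough sketch of opinion formation over learned models, assuming the quadratic
# opinion game with equilibrium z = (I + L)^{-1} s. Illustrative only.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.star_graph(9)                                  # 10 agents, one hub
n, d = G.number_of_nodes(), 5
w_true = rng.normal(size=d)                           # shared ground-truth regressor

# Allocate a fixed sample budget inversely to node degree.
deg = np.array([G.degree(v) for v in G.nodes()])
budget = 200
alloc = np.maximum(1, np.round(budget * (1 / deg) / (1 / deg).sum()).astype(int))

# Each agent's initial "opinion" is its locally fit least-squares model.
initial = np.zeros((n, d))
for v in range(n):
    X = rng.normal(size=(alloc[v], d))
    y = X @ w_true + rng.normal(scale=0.5, size=alloc[v])
    initial[v] = np.linalg.lstsq(X, y, rcond=None)[0]

# Equilibrium of the quadratic opinion formation game.
L = nx.laplacian_matrix(G).toarray()
equilibrium = np.linalg.solve(np.eye(n) + L, initial)
print("error before:", np.linalg.norm(initial - w_true, axis=1).mean())
print("error after :", np.linalg.norm(equilibrium - w_true, axis=1).mean())
```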
Fractures, particularly in the distal forearm, are among the most common injuries in children and adolescents, with approximately 800,000 cases treated annually in Germany. The AO/OTA system provides a structured fracture type classification, which serves as the foundation for treatment decisions. Although accurately classifying fractures can be challenging, current deep learning models have demonstrated performance comparable to that of experienced radiologists. While most existing approaches rely solely on radiographs, the potential impact of incorporating additional modalities, such as automatic bone segmentation, fracture location, and radiology reports, remains underexplored. In this work, we systematically analyse the contribution of these three additional information types, finding that combining them with radiographs increases the AUROC from 91.71 to 93.25. Our code is available on GitHub.
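A minimal PyTorch sketch of one way to fuse radiographs with the extra modalities mentioned above (segmentation mask, fracture location, report embedding); the architecture, feature dimensions, and fusion strategy are illustrative assumptions, not the model from the paper.

```python
# Late-fusion classifier sketch: radiograph + segmentation mask as image channels,
# concatenated with a report embedding and location features. Illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionClassifier(nn.Module):
    def __init__(self, n_classes: int, report_dim: int = 768, loc_dim: int = 4):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.conv1 = nn.Conv2d(2, 64, 7, 2, 3, bias=False)  # radiograph + mask as 2 channels
        self.backbone.fc = nn.Identity()                              # 512-d image features
        self.head = nn.Sequential(
            nn.Linear(512 + report_dim + loc_dim, 256), nn.ReLU(), nn.Linear(256, n_classes)
        )

    def forward(self, image, seg_mask, report_emb, location):
        img_feat = self.backbone(torch.cat([image, seg_mask], dim=1))
        return self.head(torch.cat([img_feat, report_emb, location], dim=1))

model = FusionClassifier(n_classes=5)
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 224, 224),
               torch.randn(2, 768), torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 5])
```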
In real-world driving scenarios, multiple driver states occur simultaneously due to individual differences and environmental factors, complicating the analysis and estimation of driver states. Previous studies, limited by experimental design and analytical methods, may not be able to disentangle the relationships among multiple driver states and environmental factors. This paper introduces the Double Machine Learning (DML) analysis method to the field of driver state analysis to tackle this challenge. To train and test the DML model, a driving simulator experiment with 42 participants was conducted. All participants drove SAE level-3 vehicles and performed three types of cognitive tasks during a 3-hour driving experiment. Drivers' subjective cognitive load and drowsiness levels were collected throughout the experiment. We then isolated the individual and environmental factors affecting driver state variations, as well as the factors affecting drivers' physiological and eye-tracking metrics when they are in specific states. The results show that our approach successfully decoupled and inferred the complex causal relationships between multiple types of drowsiness and cognitive load. Additionally, we identified key physiological and eye-tracking indicators in the presence of multiple driver states and under the influence of a single state, excluding the influence of other driver states, environmental factors, and individual characteristics. Our causal inference analytical framework can offer new insights for subsequent analyses of driver states. Furthermore, the updated causal relation graph based on the DML analysis can provide a theoretical basis for driver state monitoring based on physiological and eye-tracking measures.
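The following sketch shows the partialling-out double machine learning estimator with cross-fitting on synthetic data; the variable roles (T as cognitive load, Y as a physiological metric, X as the remaining states, environmental factors, and individual covariates) are assumptions for illustration.

```python
# Partialling-out DML sketch with 2-fold cross-fitting, using plain scikit-learn.
# Roles are illustrative: T ~ cognitive load, Y ~ a physiological metric, X ~ confounders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                       # confounders
T = X[:, 0] + rng.normal(size=n)                  # treatment, partly driven by X
Y = 0.8 * T + X[:, 1] + rng.normal(size=n)        # outcome with true effect 0.8

res_T, res_Y = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    res_T[test] = T[test] - RandomForestRegressor().fit(X[train], T[train]).predict(X[test])
    res_Y[test] = Y[test] - RandomForestRegressor().fit(X[train], Y[train]).predict(X[test])

theta = (res_T @ res_Y) / (res_T @ res_T)          # effect of T on Y net of X
print(round(theta, 3))                              # should be close to 0.8
```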
Interacting with the legal system and the government requires the assembly and analysis of various pieces of information that can be spread across different (paper) documents, such as forms, certificates and contracts (e.g., leases). This information is required in order to understand one's legal rights, as well as to fill out forms to file claims in court or obtain government benefits. However, finding the right information, locating the correct forms and filling them out can be challenging for laypeople. Large language models (LLMs) have emerged as a powerful technology with the potential to address this gap, but they still rely on the user to provide the correct information, which may be challenging and error-prone if that information is only available in complex paper documents. We present an investigation into utilizing multi-modal LLMs to analyze images of handwritten paper forms in order to automatically extract relevant information in a structured format. Our initial results are promising, but reveal some limitations (e.g., when the image quality is low). Our work demonstrates the potential of integrating multi-modal LLMs to support laypeople and self-represented litigants in finding and assembling relevant information.
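As a hedged sketch of the kind of pipeline described above, the snippet below sends a form photo to a vision-capable LLM and asks for a fixed set of fields as JSON. The provider, model name, field names, and file path are assumptions for the example; the paper does not prescribe a specific API.

```python
# Illustrative sketch of structured extraction from a photographed form with a
# vision-capable LLM. Client, model, schema, and file path are assumptions.
import base64, json
from openai import OpenAI

def extract_form_fields(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    prompt = (
        "This is a photo of a handwritten legal form. Return a JSON object with the "
        "keys 'claimant_name', 'address', 'case_number' and 'date_filed'; use null "
        "for any field you cannot read."
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",                      # placeholder vision-capable model
        messages=[{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ]}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# fields = extract_form_fields("lease_claim_form.jpg")
```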
Knowledge graphs represent factual knowledge about the world as relationships between concepts and are critical for intelligent decision making in enterprise applications. New knowledge is inferred from the existing facts in the knowledge graphs by encoding the concepts and relations into low-dimensional feature vector representations. The most effective representations for this task, called Knowledge Graph Embeddings (KGE), are learned through neural network architectures. Due to their impressive predictive performance, they are increasingly used in high-impact domains like healthcare, finance and education. However, are the black-box KGE models adversarially robust for use in domains with high stakes? This thesis argues that state-of-the-art KGE models are vulnerable to data poisoning attacks, that is, their predictive performance can be degraded by systematically crafted perturbations to the training knowledge graph. To support this argument, two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time. These adversarial attacks target the task of predicting the missing facts in knowledge graphs using KGE models, and the evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones. The thesis contributions not only highlight and provide an opportunity to fix the security vulnerabilities of KGE models, but also help to understand the black-box predictive behaviour of KGE models.
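A toy sketch of a deletion-based poisoning heuristic against a DistMult-style KGE model is shown below; it is a generic illustration of removing influential neighbouring triples, not the specific attacks proposed in the thesis.

```python
# Toy deletion heuristic: remove the neighbouring triple that the current model
# scores highest for the target entity, on the intuition that it most supports
# the target prediction. Generic illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 50, 5, 16
E = rng.normal(size=(n_entities, dim))          # entity embeddings (pretend: trained)
R = rng.normal(size=(n_relations, dim))         # relation embeddings

def distmult(h, r, t):
    return float(np.sum(E[h] * R[r] * E[t]))

target = (3, 1, 7)                               # triple whose prediction we want to degrade
train = [(3, 0, 12), (5, 1, 3), (3, 2, 7), (9, 4, 3)]   # toy training triples

# Candidate deletions: triples touching the target's head or tail entity.
neighbours = [tr for tr in train if target[0] in (tr[0], tr[2]) or target[2] in (tr[0], tr[2])]
to_delete = max(neighbours, key=lambda tr: distmult(*tr))
print("delete:", to_delete)
```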
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that our SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
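The snippet below sketches a permutation-sampling Monte Carlo estimate of a subgraph's Shapley value, with a placeholder standing in for the trained GNN's prediction; SubgraphX's actual MCTS search and approximation schemes are not reproduced here.

```python
# Monte Carlo Shapley value of a candidate subgraph: sample random permutations of
# the remaining nodes and average the subgraph's marginal contribution. The model
# call is a placeholder for a trained GNN's prediction on the masked graph.
import random

def model_score(node_set):
    # Placeholder: in practice, run the GNN on the graph restricted to `node_set`
    # and return the predicted probability of the explained class.
    return len(node_set & {0, 1, 2}) / 3.0

def shapley_of_subgraph(subgraph, other_nodes, n_samples=1000, seed=0):
    rng, players, total = random.Random(seed), list(other_nodes), 0.0
    for _ in range(n_samples):
        rng.shuffle(players)
        cut = rng.randrange(len(players) + 1)      # subgraph's slot in a random permutation
        coalition = set(players[:cut])             # players that precede the subgraph
        total += model_score(coalition | set(subgraph)) - model_score(coalition)
    return total / n_samples

print(shapley_of_subgraph(subgraph={0, 1}, other_nodes={2, 3, 4, 5}))
```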
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
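A small numerical sketch in this spirit: train a deep residual multilayer perceptron with SGD on toy data and inspect how the trained weights and their layer-to-layer increments behave with depth. The architecture, the 1/depth residual scaling, and the toy task are assumptions, not the paper's experimental protocol.

```python
# Toy experiment: train a deep residual MLP x_{k+1} = x_k + tanh(W_k x_k)/depth with SGD,
# then look at the norms of the trained weights and their layer-to-layer increments.
import torch
import torch.nn as nn

torch.manual_seed(0)
depth, width, n = 64, 32, 256
X, y = torch.randn(n, width), torch.randn(n, 1)

layers = nn.ModuleList([nn.Linear(width, width, bias=False) for _ in range(depth)])
readout = nn.Linear(width, 1)
opt = torch.optim.SGD(list(layers.parameters()) + list(readout.parameters()), lr=1e-2)

for _ in range(200):                               # brief SGD training on the toy regression task
    h = X
    for layer in layers:
        h = h + torch.tanh(layer(h)) / depth       # 1/depth residual scaling, one common convention
    loss = ((readout(h) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    W = torch.stack([l.weight for l in layers])
    increments = (W[1:] - W[:-1]).flatten(1).norm(dim=1)
    print("mean ||W_k||        :", W.flatten(1).norm(dim=1).mean().item())
    print("mean ||W_{k+1}-W_k||:", increments.mean().item())
```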
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. We present a novel approach to detect carried objects from a single video frame by incorporating features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. The human-like regions in the segmented area are then identified by matching a set of features extracted from the superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the superpixels' matching probabilities to human-like regions, together with background information. A group of superpixels with high carried object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and the results show that our method is competitive with or better than the state-of-the-art.
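A minimal sketch of the first step above, segmenting the foreground of a frame into superpixels at several scales with SLIC; the file paths and scale choices are placeholders, and the codebook matching and probability-map fusion steps are not shown.

```python
# Multi-scale superpixel segmentation of the foreground of a frame with SLIC.
# File paths and the scale choices are placeholders for illustration.
import numpy as np
from skimage import io
from skimage.segmentation import slic

frame = io.imread("frame.png")                                 # hypothetical video frame (H, W, 3)
fg_mask = io.imread("foreground_mask.png", as_gray=True) > 0   # hypothetical binary foreground mask

multi_scale_labels = []
for n_segments in (50, 150, 400):                              # coarse-to-fine superpixel scales
    labels = slic(frame, n_segments=n_segments, compactness=10,
                  mask=fg_mask, start_label=1)                 # superpixels restricted to the foreground
    multi_scale_labels.append(labels)

print([len(np.unique(l)) for l in multi_scale_labels])
```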