Gene therapies aim to address the root causes of diseases, particularly those stemming from rare genetic defects that can be life-threatening or severely debilitating. While there has been notable progress in the development of gene therapies in recent years, understanding their long-term effectiveness remains challenging due to a lack of data on long-term outcomes, especially during the early stages of their introduction to the market. To address the critical question of estimating long-term efficacy without waiting for the completion of lengthy clinical trials, we propose a novel Bayesian framework. The framework selects pertinent data from external sources, typically early-phase clinical trials with longer longitudinal follow-up of efficacy, to improve inference of the long-term efficacy outcome. We apply this methodology to predict the long-term factor IX (FIX) levels of HEMGENIX (etranacogene dezaparvovec), the first FDA-approved gene therapy to treat adults with severe hemophilia B, in a phase 3 study. The application showcases the framework's ability to estimate 5-year FIX levels following HEMGENIX therapy, demonstrating sustained FIX levels induced by HEMGENIX infusion. Additionally, we provide theoretical insights into the methodology by establishing its posterior convergence properties.
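As a minimal sketch of how borrowing external longitudinal data can work in a Bayesian setting, the snippet below uses a conjugate normal power prior: a discount factor a0 downweights the external early-phase observations before combining them with the current-trial data. This is an illustration only, not the paper's actual model; the FIX values, sample sizes, and the choice a0 = 0.5 are hypothetical.

```python
import numpy as np

def power_prior_posterior(y_current, y_external, a0, sigma2, mu0=0.0, tau2=100.0):
    """Posterior for a normal mean with known variance sigma2, borrowing external
    observations y_external through a power prior with discount a0
    (a0 = 0 ignores the external data, a0 = 1 pools it fully)."""
    n, m = len(y_current), len(y_external)
    # Precision from the vague initial prior, the discounted external likelihood,
    # and the current-trial likelihood.
    prec = 1.0 / tau2 + a0 * m / sigma2 + n / sigma2
    mean = (mu0 / tau2 + a0 * np.sum(y_external) / sigma2
            + np.sum(y_current) / sigma2) / prec
    return mean, 1.0 / prec  # posterior mean and variance

# Hypothetical FIX activity levels (% of normal) at a common follow-up visit.
rng = np.random.default_rng(0)
phase3 = rng.normal(37.0, 15.0, size=54)    # current phase 3 trial
phase12 = rng.normal(40.0, 18.0, size=10)   # external early-phase trial
post_mean, post_var = power_prior_posterior(phase3, phase12, a0=0.5, sigma2=15.0**2)
print(f"posterior mean {post_mean:.1f}, sd {np.sqrt(post_var):.1f}")
```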
Multiplex graphs, characterised by their layered structure, exhibit informative interdependencies across their layers that are crucial for understanding complex network dynamics. Quantifying the interaction and shared information among these layers is challenging due to the non-Euclidean structure of graphs. Our paper introduces a comprehensive theory of multivariate information measures for multiplex graphs. We introduce graphon mutual information for pairs of graphs and expand this to graphon interaction information for three or more graphs, including their conditional variants. We then define graphon total correlation and graphon dual total correlation, along with their conditional forms, and introduce graphon $O$-information. We discuss and quantify the concepts of synergy and redundancy in graphs for the first time, introduce consistent nonparametric estimators for these multivariate graphon information-theoretic measures, and provide their convergence rates. We also conduct a simulation study to illustrate our theoretical findings and demonstrate the relationship between the introduced measures, multiplex graph structure, and higher-order interdependencies. Applications to real-world multiplex graphs further show the utility of our estimators in revealing shared information and dependence structures. This work not only answers fundamental questions about information sharing across multiple graphs but also sets the stage for advanced pattern analysis in complex networks.
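The graphon-based estimators introduced in the paper are not reproduced here; as a toy illustration of the most basic quantity involved, the snippet below computes a naive plug-in estimate of the mutual information between the edge indicators of two aligned layers on a common node set. The edge probability and the 20% resampling rate are arbitrary choices for the example.

```python
import numpy as np

def edge_mutual_information(A, B):
    """Naive plug-in mutual information (in nats) between the edge indicators of
    two aligned layers A and B (symmetric 0/1 adjacency matrices on the same
    node set), computed over all unordered node pairs."""
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    x, y = A[iu].astype(int), B[iu].astype(int)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

# Two correlated layers: layer B copies layer A, then resamples 20% of pairs.
rng = np.random.default_rng(1)
n = 200
iu = np.triu_indices(n, k=1)
a_edges = (rng.random(len(iu[0])) < 0.1).astype(int)
resample = rng.random(len(iu[0])) < 0.2
b_edges = np.where(resample, (rng.random(len(iu[0])) < 0.1).astype(int), a_edges)
A = np.zeros((n, n), dtype=int); A[iu] = a_edges; A += A.T
B = np.zeros((n, n), dtype=int); B[iu] = b_edges; B += B.T
print(edge_mutual_information(A, B))
```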
Organizational decision-making is crucial for success, yet cognitive biases can significantly affect risk preferences, leading to suboptimal outcomes. Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences, including financial losses. This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences during decision-making, drawing on Cumulative Prospect Theory (CPT) from behavioral economics. The ABI approach transforms theoretical insights into actionable, real-time guidance, making them accessible to a broader range of organizations and decision-makers without requiring specialized personnel. By contextualizing CPT concepts in business language, the approach facilitates widespread adoption and enhances decision-making processes with deep behavioral insights. Our systematic literature review identified significant gaps in existing methods, especially the lack of automated solutions with a concrete mechanism for identifying risk-seeking preferences and the absence of formal knowledge representation, such as ontologies, for identifying and explaining such preferences. The ABI approach addresses these gaps, offering a significant contribution to decision-making research and practice. Furthermore, it enables automatic collection of historical decision data with risk preferences, providing valuable insights for enhancing strategic management and long-term organizational performance. An experiment provided preliminary evidence of its effectiveness in helping decision-makers recognize their risk-seeking preferences during decision-making in the loss domain.
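To make the underlying behavioral-economics machinery concrete, the sketch below uses the standard Tversky-Kahneman (1992) CPT value and probability-weighting functions, with their published median parameter estimates, to show why a decision-maker can prefer a risky loss over a smaller sure loss: the risk-seeking pattern in the loss domain that the ABI approach is designed to flag. The functional forms are textbook CPT; the specific prospect and the check itself are illustrative assumptions, not the paper's implementation.

```python
# Tversky & Kahneman (1992) functional forms and median parameter estimates.
ALPHA, BETA, LAMBDA, GAMMA, DELTA = 0.88, 0.88, 2.25, 0.61, 0.69

def value(x):
    """CPT value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p, gains=True):
    """Inverse-S probability weighting function."""
    g = GAMMA if gains else DELTA
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def cpt_value_simple(outcome, prob):
    """CPT value of a one-outcome prospect (receive `outcome` with prob, else 0)."""
    return weight(prob, gains=outcome >= 0) * value(outcome)

# A decision-maker choosing between a sure loss of 750 and an 80% chance of
# losing 1000 prefers the gamble under CPT: risk-seeking behavior in losses.
sure_loss = cpt_value_simple(-750, 1.0)
gamble = cpt_value_simple(-1000, 0.80)
print(sure_loss, gamble,
      "gamble preferred" if gamble > sure_loss else "sure loss preferred")
```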
Curvature serves as a potent and descriptive invariant, with its efficacy validated both theoretically and practically within graph theory. We employ the definition of generalized Ricci curvature proposed by Ollivier, which Lin and Yau later adapted to graph theory, known as Ollivier-Ricci curvature (ORC). ORC measures curvature using the Wasserstein distance, thereby integrating geometric concepts with probability theory and optimal transport. Jost and Liu previously derived a lower bound on ORC by establishing an upper bound on the Wasserstein distance. We extend the applicability of these bounds to discrete spaces with integer-valued metrics, specifically hypergraphs. Compared to prior work on ORC in hypergraphs by Coupette, Dalleiger, and Rieck, which faced computational challenges, our method introduces a simplified approach with linear computational complexity, making it particularly suitable for analyzing large-scale networks. Through extensive simulations and applications to synthetic and real-world datasets, we demonstrate the significant improvements our method offers in evaluating ORC.
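For readers unfamiliar with the underlying quantity, the sketch below computes the Ollivier-Ricci curvature $\kappa(x,y) = 1 - W_1(\mu_x, \mu_y)/d(x,y)$ of a single graph edge, using lazy random-walk measures and an exact Wasserstein distance obtained from a small transportation linear program. It illustrates the definition on an ordinary graph only; it is not the hypergraph extension or the linear-complexity bound proposed in the paper, and the idleness parameter alpha = 0.5 is a common but arbitrary choice.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci_edge(G, x, y, alpha=0.5):
    """ORC of edge (x, y): 1 - W1(mu_x, mu_y) / d(x, y), where mu_v places mass
    alpha on v and (1 - alpha)/deg(v) on each neighbor of v."""
    def measure(v):
        nbrs = list(G.neighbors(v))
        supp = [v] + nbrs
        mass = [alpha] + [(1 - alpha) / len(nbrs)] * len(nbrs)
        return supp, np.array(mass)

    sx, mx = measure(x)
    sy, my = measure(y)
    # Cost matrix of shortest-path distances between the two supports.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    C = np.array([[dist[u][v] for v in sy] for u in sx], dtype=float)

    # Exact W1 as a transportation LP: minimize <C, P> subject to marginals.
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums equal mu_x
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column sums equal mu_y
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mx, my]),
                  bounds=(0, None), method="highs")
    return 1.0 - res.fun / nx.shortest_path_length(G, x, y)

G = nx.cycle_graph(6)
print(ollivier_ricci_edge(G, 0, 1))   # approximately 0: a 6-cycle is flat
```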
Social norms play a crucial role in guiding agents towards understanding and adhering to standards of behavior, thus reducing social conflicts within multi-agent systems (MASs). However, current LLM-based (or generative) MASs lack the capability to be normative. In this paper, we propose a novel architecture, named CRSEC, to empower the emergence of social norms within generative MASs. Our architecture consists of four modules: Creation & Representation, Spreading, Evaluation, and Compliance. Together, these modules address several important aspects of the norm emergence process in one architecture: (i) where social norms come from, (ii) how they are formally represented, (iii) how they spread through agents' communications and observations, (iv) how they are examined with a sanity check and synthesized over the long term, and (v) how they are incorporated into agents' planning and actions. Experiments conducted in the Smallville sandbox game environment demonstrate the capability of our architecture to establish social norms and reduce social conflicts within generative MASs. The positive outcomes of our human evaluation, conducted with 30 evaluators, further affirm the effectiveness of our approach. Our project can be accessed via the following link: //github.com/sxswz213/CRSEC.
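As a rough, hypothetical skeleton of how the four modules could fit together (the actual CRSEC prompting and Smallville integration are not reproduced here), the snippet below threads a candidate norm from creation through spreading, evaluation, and compliance-aware planning; all class and method names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Norm:
    text: str
    support: int = 0          # how many agents have adopted the norm

@dataclass
class Agent:
    name: str
    adopted: list = field(default_factory=list)

    def observe_and_create(self, event):
        # Creation & Representation: turn an observed conflict into a candidate norm.
        return Norm(f"Avoid: {event}") if "conflict" in event else None

    def spread(self, norm, others):
        # Spreading: communicate the candidate norm; each adoption adds support.
        for other in [self] + others:
            if norm not in other.adopted:
                other.adopted.append(norm)
                norm.support += 1

    def evaluate(self):
        # Evaluation: sanity-check and keep norms that are adopted widely enough.
        return [n for n in self.adopted if n.support >= 2]

    def plan(self, goal):
        # Compliance: condition planning on the synthesized norms.
        rules = "; ".join(n.text for n in self.evaluate())
        return f"Plan for '{goal}' while respecting norms: {rules or 'none yet'}"

agents = [Agent("A"), Agent("B"), Agent("C")]
candidate = agents[0].observe_and_create("noise conflict in the tavern at night")
if candidate:
    agents[0].spread(candidate, agents)
print(agents[1].plan("host a party"))
```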
In the last two decades, single-arm trials (SATs) have been used effectively to study anticancer therapies in well-defined patient populations, using durable response rate as an objective and interpretable clinical endpoint. With a growing trend of regulatory accelerated approval (AA) requiring randomized controlled trials (RCTs), some confusion has arisen about the role of SATs in AA. This paper is intended to elucidate the conditions under which an SAT may be considered reasonable for AA. Specifically, the paper describes (1) two necessary conditions for designing an SAT, (2) three sufficient conditions that help either optimize the study design or interpret the study results, (3) four conditions that demonstrate substantial evidence of the clinical benefits of the drug, and (4) a plan for a confirmatory RCT to verify the clinical benefits. Some further considerations are discussed to help design a scientifically sound SAT and communicate with regulatory agencies. The conditions presented in this paper may serve as a set of references for sponsors using SATs for regulatory decisions.
Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information, amplifying critical details so that a model focuses on the essential aspects of the data. Deep learning has employed attention to boost performance for many applications. Interestingly, the same attention design can suit different data modalities and can easily be incorporated into large networks. Furthermore, multiple complementary attention mechanisms can be incorporated into one network. Hence, attention techniques have become extremely attractive. However, the literature lacks a comprehensive survey specific to attention techniques to guide researchers in employing attention in their deep models. Note that, besides being demanding in terms of training data and computational resources, transformers cover only a single category of attention, self-attention, out of the many categories available. We fill this gap and provide an in-depth survey of 50 attention techniques, categorizing them by their most prominent features. We initiate our discussion by introducing the fundamental concepts behind the success of the attention mechanism. Next, we furnish essentials such as the strengths and limitations of each attention category, describe their fundamental building blocks and basic formulations with primary usage, and review their applications specifically for computer vision. We also discuss the challenges and open questions related to attention mechanisms in general. Finally, we recommend possible future research directions for deep attention.
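As a concrete reference point, the snippet below implements scaled dot-product self-attention, the single category that transformers rely on, in plain NumPy: each position's output is a relevance-weighted mixture of all value vectors. The dimensions and random weights are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d_model).
    Each output position is a weighted mixture of all value vectors, with weights
    amplifying the positions most relevant to that position's query."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) pairwise relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d_model, d_k = 5, 16, 8
X = rng.normal(size=(n, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.sum(axis=-1))           # (5, 8) and a row of ones
```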
Knowledge-enhanced pre-trained language models (K-PLMs) have been shown to be effective for many public tasks in the literature, but few of them have been successfully applied in practice. To address this problem, we propose K-AID, a systematic approach that includes a low-cost knowledge acquisition process for acquiring domain knowledge, an effective knowledge infusion module for improving model performance, and a knowledge distillation component for reducing the model size and deploying K-PLMs on resource-restricted devices (e.g., CPU) for real-world applications. Importantly, instead of capturing entity knowledge like the majority of existing K-PLMs, our approach captures relational knowledge, which leads to larger improvements on sentence-level text classification and text matching tasks that play a key role in question answering (QA). We conducted a set of experiments on five text classification tasks and three text matching tasks from three domains, namely E-commerce, Government, and Film&TV, and performed online A/B tests in E-commerce. Experimental results show that our approach achieves substantial improvements on sentence-level question answering tasks and delivers business value in industrial settings.
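The knowledge distillation component can be illustrated with the standard temperature-softened distillation loss below, which blends hard-label cross-entropy with a KL term pulling the student toward the teacher's softened predictions. The temperature, mixing weight, and logits are hypothetical; the paper's exact loss may differ.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of hard-label cross-entropy and temperature-softened KL divergence
    from the teacher (a large K-PLM) to a small student deployable on CPU."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # T^2 keeps the soft-target gradient scale comparable to the hard loss.
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 3)) * 3.0   # confident teacher logits
student = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
print(distillation_loss(student, teacher, labels))
```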
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
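The contrast with a fixed prediction policy is easy to see in code: below, a single 0.5 cut-off is compared with per-label thresholds, the simplest form a learned prediction policy can take. The probabilities and thresholds are made up for illustration, and the meta-learner that would jointly learn training and prediction policies is not reproduced.

```python
import numpy as np

def predict_fixed(probs, threshold=0.5):
    """Standard policy: the same cut-off for every label."""
    return (probs >= threshold).astype(int)

def predict_per_label(probs, thresholds):
    """Per-label prediction policy: each label gets its own cut-off, e.g. chosen
    by a meta-learner or tuned on validation F1."""
    return (probs >= thresholds).astype(int)

# Hypothetical sigmoid outputs for 3 examples x 4 labels.
probs = np.array([[0.92, 0.41, 0.07, 0.55],
                  [0.30, 0.66, 0.48, 0.12],
                  [0.81, 0.35, 0.52, 0.44]])
# Rare or strongly dependent labels may warrant thresholds far from 0.5.
thresholds = np.array([0.60, 0.45, 0.30, 0.50])
print(predict_fixed(probs))
print(predict_per_label(probs, thresholds))
```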
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, one at the image level and one at the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
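Adversarial domain classifiers of this kind are commonly implemented with a gradient reversal layer, as in the hedged PyTorch sketch below: an image-level head sees backbone features through an identity map in the forward pass, while the backward pass flips the gradient so the backbone is pushed toward domain-invariant features. The channel sizes, lambda, and head architecture are illustrative, and the instance-level head and consistency regularizer are omitted.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, training the feature extractor adversarially against the
    domain classifier."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps."""
    def __init__(self, in_channels=256, lamb=0.1):
        super().__init__()
        self.lamb = lamb
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 1))

    def forward(self, feat):
        return self.net(GradReverse.apply(feat, self.lamb))

feat = torch.randn(2, 256, 38, 50, requires_grad=True)   # backbone features
logits = ImageLevelDomainClassifier()(feat)
domain = torch.tensor([[0.0], [1.0]])                     # 0 = source, 1 = target
loss = nn.functional.binary_cross_entropy_with_logits(logits, domain)
loss.backward()   # gradients reaching `feat` are reversed
print(logits.shape, feat.grad.shape)
```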
Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies through soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", so that each can benefit from the other. In ReSA, a hard attention module trims a sequence for a soft self-attention module to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention mechanism called "reinforced sequence sampling (RSS)", which selects tokens in parallel and is trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", based solely on ReSA. It achieves state-of-the-art performance on both the Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.
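A toy NumPy rendering of the hard-selects-then-soft-attends idea is given below: a parallel Bernoulli selection over tokens (in the spirit of RSS) produces a mask, and soft self-attention is restricted to the selected pairs while unselected tokens pass through unchanged. The selection parameters are random placeholders, and the policy-gradient training and reward feedback described above are not implemented.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rss_select(H, w, rng):
    """Hard attention in the spirit of RSS: independent Bernoulli selection of
    tokens, computed in parallel from each token's own features."""
    probs = 1.0 / (1.0 + np.exp(-(H @ w)))      # per-token keep probability
    keep = rng.random(len(probs)) < probs
    return keep, probs

def masked_soft_attention(H, keep):
    """Soft self-attention restricted to the selected token pairs."""
    scores = H @ H.T / np.sqrt(H.shape[-1])
    mask = np.outer(keep, keep)
    scores = np.where(mask, scores, -1e9)        # drop unselected pairs
    out = softmax(scores, axis=-1) @ H
    return np.where(keep[:, None], out, H)       # pass unselected tokens through

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 16))                     # 7 token vectors
keep, probs = rss_select(H, rng.normal(size=16), rng)
print(keep)
print(masked_soft_attention(H, keep).shape)
```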