It has become increasingly common to collect observations of feature and response pairs from different environments. As a consequence, learned predictors must often be applied to data whose distribution differs from that of the training data. One principled approach is to adopt structural causal models to describe the training and test distributions, following the invariance principle, which states that the conditional distribution of the response given its predictors remains the same across environments. However, this principle may be violated in practical settings when the response itself is intervened on. A natural question is whether it is still possible to identify other forms of invariance that facilitate prediction in unseen environments. To shed light on this challenging scenario, we introduce the invariant matching property (IMP), an explicit relation that captures interventions through an additional feature. This leads to an alternative form of invariance that enables a unified treatment of general interventions on the response. We analyze the asymptotic generalization errors of our method under both discrete and continuous environment settings, where the continuous case is handled by relating it to semiparametric varying-coefficient models. We present algorithms that show competitive performance compared to existing methods over various experimental settings, including a COVID dataset.
Transport coefficients, such as the mobility, thermal conductivity and shear viscosity, are quantities of prime interest in statistical physics. At the macroscopic level, transport coefficients relate an external forcing of magnitude $\eta$, with $\eta \ll 1$, acting on the system to an average response expressed through some steady-state flux. In practice, the steady-state averages involved in the linear response are computed as time averages over a realization of some stochastic differential equation. Variance reduction techniques are of paramount interest in this context, as the linear response is scaled by a factor of $1/\eta$, leading to large statistical errors. One way to limit the increase in the variance is to allow for larger values of $\eta$ by increasing the range of forcings for which the nonlinear part of the response is sufficiently small. In theory, one can add an extra forcing to the physical perturbation of the system, called a synthetic forcing, as long as this extra forcing preserves the invariant measure of the reference system. The aim is to find synthetic perturbations that reduce the nonlinear part of the response as much as possible. We present a mathematical framework for quantifying the quality of synthetic forcings in the context of linear response theory and discuss various possible choices for them. Our findings are illustrated with numerical results in low-dimensional systems.
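To make the $1/\eta$ scaling of the statistical error concrete, the following minimal sketch estimates a mobility-like coefficient for one-dimensional overdamped Langevin dynamics on a periodic potential subjected to a constant forcing of magnitude $\eta$: the steady-state velocity is time-averaged along a trajectory and divided by $\eta$. This illustrates only the baseline estimator, not the paper's synthetic-forcing framework; the potential, parameters and function name are assumptions made for illustration.

```python
import numpy as np

def estimate_mobility(eta=0.1, n_steps=200_000, dt=1e-3, beta=1.0, seed=0):
    """Estimate a mobility-like transport coefficient for overdamped Langevin
    dynamics with potential V(q) = cos(q), perturbed by a constant forcing eta.
    The linear-response estimate is the steady-state mean velocity divided by
    eta, whose variance grows like 1/eta^2 as eta -> 0 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    q = 0.0
    drift_sum = 0.0
    for _ in range(n_steps):
        drift = np.sin(q) + eta                                   # -V'(q) + eta
        q += drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        drift_sum += drift                                        # accumulate instantaneous velocity
    mean_velocity = drift_sum / n_steps                           # time-averaged flux
    return mean_velocity / eta                                    # linear-response estimator

print(estimate_mobility())
```

Running this with smaller values of `eta` illustrates the trade-off discussed above: the bias from the nonlinear part of the response shrinks, but the statistical error of the time average grows.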
We compare measures of concordance that arise as Pearson's linear correlation coefficient between two random variables transformed so that they follow so-called concordance-inducing distributions. The class of such transformed rank correlations includes Spearman's rho, Blomqvist's beta and van der Waerden's coefficient. When only the standard axioms of measures of concordance are required, it is not always clear which transformed rank correlation is most suitable. To address this question, we compare measures of concordance in terms of the best and worst asymptotic variances of certain canonical estimators over a given set of dependence structures. A simple criterion derived from this approach is that concordance-inducing distributions with smaller fourth moment are preferable. In particular, we show that Blomqvist's beta is the optimal transformed rank correlation in this sense, and that Spearman's rho outperforms van der Waerden's coefficient. Moreover, we find that Kendall's tau, although it is not a transformed rank correlation of this form, shares a certain optimal structure with Blomqvist's beta.
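The defining construction above has a direct sample analogue: push the pseudo-observations (normalized ranks) through the quantile function of a concordance-inducing distribution $G$ and compute Pearson's correlation. The sketch below, under the assumption that $G$ is uniform, standard normal, or a symmetric two-point distribution, recovers sample versions of Spearman's rho, van der Waerden's coefficient, and Blomqvist's beta, respectively; the function name is ours.

```python
import numpy as np
from scipy import stats

def transformed_rank_corr(x, y, quantile_fn):
    """Pearson correlation of pseudo-observations pushed through the quantile
    function G^{-1} of a concordance-inducing distribution G (sample analogue)."""
    n = len(x)
    u = stats.rankdata(x) / (n + 1)          # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (n + 1)
    return np.corrcoef(quantile_fn(u), quantile_fn(v))[0, 1]

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.6 * x + 0.8 * rng.normal(size=1000)

spearman  = transformed_rank_corr(x, y, lambda u: u)                  # G uniform
vdwaerden = transformed_rank_corr(x, y, stats.norm.ppf)               # G standard normal
blomqvist = transformed_rank_corr(x, y, lambda u: np.sign(u - 0.5))   # G two-point (signs)
print(spearman, vdwaerden, blomqvist)
```

The fourth-moment criterion can then be read off the three choices of $G$: the two-point distribution used for Blomqvist's beta has the smallest fourth moment among standardized concordance-inducing distributions.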
The rapid expansion of the Internet of Things and the emergence of edge computing-based applications have led to a new wave of cyber-attacks, with intensity and complexity never seen before. Historically, most research has focused on Intrusion Detection Systems (IDSs); however, due to the volume and speed of this new generation of cyber-attacks, it is no longer sufficient to solely detect attacks and leave the response to security analysts. Consequently, research into Intrusion Response Systems (IRSs) is accelerating rapidly, and new intrusion response approaches, methods and systems have been investigated, prototyped, and deployed. This paper provides a comprehensive review of the state of the art of IRSs. Specifically, a taxonomy characterizing the lifecycle of IRSs, ranging from response selection to response deployment and response implementation, is presented, along with a 10-phase structure organizing the core technical constituents of IRSs. Following this, an extensive review and analysis of the literature on IRSs published during the past decade is provided, classifying the surveyed works into the corresponding phases of the proposed taxonomy and phase structure. This study provides a new way of classifying IRS research, thus offering in-depth insights into the latest discoveries and findings. In addition, through critical analysis and comparison, expert views, guidance and best practices on intrusion response approaches, system development and standardization are presented, upon which future research challenges and directions are postulated.
Our goal is to improve the reliability of Machine Learning (ML) systems deployed in the wild. ML models perform exceedingly well when test examples are similar to training examples, yet real-world applications require them to perform on any distribution of test examples, and current ML systems can fail silently on test examples affected by distribution shifts. To improve the reliability of ML models under covariate or domain shift, we propose algorithms that enable models to: (a) generalize to a larger family of test distributions, (b) evaluate accuracy under distribution shifts, and (c) adapt to a target distribution. We study causes of impaired robustness to domain shifts and present algorithms for training domain-robust models. A key source of model brittleness is domain overfitting, which our new training algorithms suppress in favor of domain-general hypotheses. While we improve robustness over standard training methods for certain problem settings, the performance of ML systems can still vary drastically with domain shifts. It is crucial for developers and stakeholders to understand model vulnerabilities and operational ranges of input, which could be assessed on the fly during deployment, albeit at great cost. Instead, we advocate proactively estimating accuracy surfaces over any combination of prespecified and interpretable domain shifts for performance forecasting. We present a label-efficient estimation technique to address the combinatorial space of domain shifts. Further, when a model's performance on a target domain is found to be poor, traditional approaches adapt the model using the target domain's resources. Standard adaptation methods assume access to sufficient labeled resources, which may be impractical for deployed models. We initiate a study of lightweight adaptation techniques that rely only on unlabeled data resources, with a focus on language applications.
Adversarial training aims to reduce the problematic susceptibility of modern neural networks to small data perturbations. Surprisingly, overfitting is a major concern in adversarial training of neural networks despite being mostly absent in standard training. We provide theoretical evidence for this peculiar ``robust overfitting'' phenomenon. Subsequently, we advance a novel loss function which we show, both theoretically and empirically, to enjoy a certified level of robustness against data evasion and poisoning attacks while ensuring guaranteed generalization. We indicate through careful numerical experiments that the resulting holistic robust (HR) training procedure yields state-of-the-art (SOTA) performance in terms of adversarial error. Finally, we show that HR training can be interpreted as a direct extension of adversarial training and comes with a negligible additional computational burden.
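For context on the baseline that HR training extends, the sketch below shows the inner maximization of standard adversarial training: a projected gradient descent (PGD) attack in the $\ell_\infty$ ball, here for a plain linear logistic model so the input gradient is available in closed form. This is only the standard baseline, not the paper's HR loss; all names and parameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_adversarial_examples(w, X, y, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD attack on a linear logistic model with weights w and
    labels y in {-1, +1}. Standard adversarial training alternates this inner
    maximization with gradient steps on w; only the attack is sketched here."""
    X_adv = X.copy()
    for _ in range(steps):
        margin = y * (X_adv @ w)
        grad = -(y * sigmoid(-margin))[:, None] * w[None, :]   # d(logistic loss)/d(input)
        X_adv = X_adv + alpha * np.sign(grad)                  # ascent step on the loss
        X_adv = np.clip(X_adv, X - eps, X + eps)               # project back to the eps-ball
    return X_adv

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w = rng.normal(size=5)
y = np.sign(X @ w + 0.1 * rng.normal(size=100))
X_adv = pgd_adversarial_examples(w, X, y)
```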
Over the past few years, different types of data-driven Artificial Intelligence (AI) techniques have been widely adopted in various domains of science for generating predictive models. However, because of their black-box nature, it is crucial to establish trust in these models before accepting them as accurate. One way of achieving this goal is through a post-hoc interpretation scheme that can put forward the reasons behind a black-box model's predictions. In this work, we propose a classical thermodynamics-inspired approach for this purpose: Thermodynamically Explainable Representations of AI and other black-box Paradigms (TERP). TERP works by constructing a linear, local surrogate model that approximates the behavior of the black-box model within a small neighborhood around the instance being explained. By employing a simple forward feature selection algorithm, TERP assigns an interpretability score to all possible surrogate models and then selects an optimal interpretation from among them by drawing simple parallels with classical thermodynamics, improving interpretability over existing methods. To validate TERP as a generally applicable method, we demonstrate how it can be used to obtain interpretations of a wide range of black-box model architectures, including deep learning autoencoders, recurrent neural networks and convolutional neural networks, applied to domains including molecular simulations, image classification, and text classification, respectively.
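The surrogate construction described above can be sketched as follows: perturb the instance locally, query the black-box model, weight samples by proximity, and grow a weighted linear model by greedy forward feature selection. This is a generic sketch of that construction only; TERP's thermodynamics-inspired interpretability score is not reproduced, and the function names and toy black box are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def local_surrogate(black_box, x0, n_samples=500, sigma=0.5, max_features=3, seed=0):
    """Fit a local, proximity-weighted linear surrogate around instance x0,
    selecting features greedily (forward selection). Sketch of the surrogate
    step underlying methods like TERP; the thermodynamic score is omitted."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))   # local perturbations
    y = black_box(X)                                             # black-box predictions
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma**2))  # proximity weights
    selected, remaining = [], list(range(x0.size))
    for _ in range(max_features):
        scores = {j: LinearRegression()
                        .fit(X[:, selected + [j]], y, sample_weight=w)
                        .score(X[:, selected + [j]], y, sample_weight=w)
                  for j in remaining}
        best = max(scores, key=scores.get)                       # greedy forward step
        selected.append(best)
        remaining.remove(best)
    model = LinearRegression().fit(X[:, selected], y, sample_weight=w)
    return selected, model.coef_

# Toy usage with an assumed nonlinear black box.
f = lambda X: np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]
print(local_surrogate(f, np.array([0.5, -0.2, 1.0, 0.0])))
```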
Identifying a biomarker or treatment-dose threshold that marks a specified level of risk is an important problem, especially in clinical trials. We call this risk, viewed as a function of the threshold and possibly adjusted for covariates, the threshold-response function. Extending the work of Donovan, Hudgens and Gilbert (2019), we propose a nonparametric efficient estimator for the covariate-adjusted threshold-response function, which utilizes machine learning and Targeted Minimum-Loss Estimation (TMLE). We additionally propose a more general estimator, based on sequential regression, that also applies when there is outcome missingness. We show that the threshold-response for a given threshold may be viewed as the expected outcome under a stochastic intervention in which all participants are given a treatment dose above the threshold. We prove that the estimator is efficient and characterize its asymptotic distribution. A method to construct simultaneous 95% confidence bands for the threshold-response function and its inverse is given. Furthermore, we discuss how to adjust our estimator, using inverse probability weighting, when the treatment or biomarker is missing at random, as is the case in clinical trials with biased sampling designs. The methods are assessed in a diverse set of simulation settings with rare outcomes and cumulative case-control sampling, and are employed to estimate neutralizing antibody thresholds for virologically confirmed dengue risk in the CYD14 and CYD15 dengue vaccine trials.
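To fix ideas about the target parameter, the covariate-adjusted threshold-response at a threshold $v$ can be written as $E_W\big[E[Y \mid A \ge v, W]\big]$. The sketch below computes a naive G-computation plug-in of this quantity on simulated data; it is only an illustration of the estimand, not the paper's TMLE-based efficient estimator (no targeting step, cross-fitting, or missingness handling), and all variable names and the data-generating process are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def plugin_threshold_response(W, A, Y, thresholds):
    """Naive G-computation plug-in for E_W[ E[Y | A >= v, W] ] at each threshold v.
    Illustrates the target parameter only; the efficient TMLE adds a targeting
    step and handles missingness, which are omitted here."""
    out = []
    for v in thresholds:
        above = A >= v
        fit = LogisticRegression().fit(W[above], Y[above])   # outcome regression among A >= v
        out.append(fit.predict_proba(W)[:, 1].mean())        # average prediction over all W
    return np.array(out)

# Toy simulated trial: binary outcome Y, biomarker A, covariates W (assumptions).
rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))
A = rng.gamma(2.0, 1.0, size=n) + 0.5 * W[:, 0]
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * W[:, 0] - 0.8 * A)))
Y = rng.binomial(1, p)
print(plugin_threshold_response(W, A, Y, thresholds=[0.5, 1.0, 2.0]))
```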
Recent advances in representation learning have demonstrated an ability to represent information from different modalities such as video, text, and audio in a single high-level embedding vector. In this work we present a self-supervised learning framework that learns a representation capturing finer levels of granularity across modalities, such as concepts or events represented by visual objects or spoken words. Our framework relies on a discretized embedding space, created via vector quantization, that is shared across modalities. Beyond the shared embedding space, we propose a Cross-Modal Code Matching objective that forces the representations from different views (modalities) to have similar distributions over the discrete embedding space, such that cross-modal object/action localization can be performed without direct supervision. In our experiments we show that the proposed discretized multi-modal fine-grained representation (e.g., pixel/word/frame) can complement high-level summary representations (e.g., video/sentence/waveform) for improved performance on cross-modal retrieval tasks. We also observe that the discretized representation uses individual clusters to represent the same semantic concept across modalities.
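The two ingredients described above, a quantizer over a shared codebook and an objective that aligns the code distributions of different modalities, can be sketched as follows. This is a simplified numpy illustration in the spirit of the framework, not its exact objective or training procedure (straight-through estimation and codebook updates are omitted), and all names and sizes are assumptions.

```python
import numpy as np

def quantize(features, codebook):
    """Assign each continuous feature vector to its nearest codeword in a
    codebook shared across modalities (gradient tricks and EMA updates omitted)."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return d.argmin(axis=1)                                            # discrete code indices

def code_distribution(codes, n_codes):
    """Empirical distribution over the discrete code space for one modality."""
    counts = np.bincount(codes, minlength=n_codes).astype(float) + 1e-8
    return counts / counts.sum()

def cross_modal_code_matching_loss(codes_a, codes_b, n_codes):
    """Symmetric KL divergence between the code distributions of two modalities;
    a sketch of a code-matching objective, not the paper's exact formulation."""
    p = code_distribution(codes_a, n_codes)
    q = code_distribution(codes_b, n_codes)
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy usage with random "video" and "audio" embeddings in a shared 16-dim space.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))
video = rng.normal(size=(256, 16))
audio = rng.normal(size=(256, 16))
print(cross_modal_code_matching_loss(quantize(video, codebook),
                                     quantize(audio, codebook), n_codes=64))
```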
Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them. In this paper, we give a survey of MTL from the perspectives of algorithmic modeling, applications, and theoretical analyses. For algorithmic modeling, we give a definition of MTL and then classify different MTL algorithms into five categories, namely the feature learning, low-rank, task clustering, task relation learning and decomposition approaches, and discuss the characteristics of each. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, we review online, parallel and distributed MTL models, as well as dimensionality reduction and feature hashing, to reveal their computational and storage advantages. Many real-world applications use MTL to boost performance, and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.
Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural approach is therefore to perform model compression and acceleration in deep networks without significantly decreasing model performance, and tremendous progress has been made in this area during the past few years. In this paper, we survey the recently developed advanced techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.