We present a method for assessing the sensitivity of the true causal effect to unmeasured confounding. The method requires the analyst to set two intuitive parameters. Otherwise, the method is assumption-free. The method returns an interval that contains the true causal effect, and whose bounds are arbitrarily sharp, i.e., practically attainable. We show experimentally that our bounds can be tighter than those obtained by the method of Ding and VanderWeele (2016a), which, moreover, requires setting one more parameter than our method. Finally, we extend our method to bound the natural direct and indirect effects when there are measured mediators and unmeasured exposure-outcome confounding.
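For context, the comparison method of Ding and VanderWeele (2016a) is built on a bounding factor; as we recall it from that literature (stated here for orientation, not taken from this paper), with $\mathrm{RR}_{EU}$ and $\mathrm{RR}_{UD}$ denoting the confounder-exposure and confounder-outcome risk ratios,
$$\mathrm{RR}_{\mathrm{true}} \;\geq\; \frac{\mathrm{RR}_{\mathrm{obs}}}{B}, \qquad B = \frac{\mathrm{RR}_{EU}\,\mathrm{RR}_{UD}}{\mathrm{RR}_{EU} + \mathrm{RR}_{UD} - 1}.$$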
Let $\sigma$ be a first-order signature and let $\mathbf{W}_n$ be the set of all $\sigma$-structures with domain $[n] = \{1, \ldots, n\}$. We can think of each structure in $\mathbf{W}_n$ as representing a "possible (state of the) world". By an inference framework we mean a class $\mathbf{F}$ of pairs $(\mathbb{P}, L)$, where $\mathbb{P} = (\mathbb{P}_n : n = 1, 2, 3, \ldots)$, each $\mathbb{P}_n$ is a probability distribution on $\mathbf{W}_n$, and $L$ is a logic with truth values in the unit interval $[0, 1]$. From the point of view of probabilistic and logical expressivity, one may consider an inference framework optimal if it allows every pair $(\mathbb{P}, L)$ where $\mathbb{P} = (\mathbb{P}_n : n = 1, 2, 3, \ldots)$ is a sequence of probability distributions on the sets $\mathbf{W}_n$ and $L$ is a logic. But from the point of view of using a pair $(\mathbb{P}, L)$ from such an inference framework to make inferences on $\mathbf{W}_n$ when $n$ is large, we face the problem of computational complexity. This motivates looking for an "optimal" trade-off (in a given context) between expressivity and computational efficiency. We define a notion of one inference framework being "asymptotically at least as expressive" as another. This relation is a preorder, and we describe a (strict) partial order on the equivalence classes of some inference frameworks that, in our opinion, are natural in the context of machine learning and artificial intelligence. The results have a bearing on issues concerning efficient learning and probabilistic inference, but are also new instances of results in finite model theory about "almost sure elimination" of extra syntactic features (e.g., quantifiers) beyond the connectives. Often such a result has a logical convergence law as a corollary.
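A classical instance of such a pair, recalled here only for illustration: take each $\mathbb{P}_n$ to be the uniform distribution on $\mathbf{W}_n$ for a finite relational signature $\sigma$, and $L$ to be first-order logic. The zero-one law of Glebskii et al. and Fagin then gives the prototypical logical convergence law mentioned above: for every sentence $\varphi$,
$$\lim_{n \to \infty} \mathbb{P}_n\big(\{\mathcal{A} \in \mathbf{W}_n : \mathcal{A} \models \varphi\}\big) \in \{0, 1\}.$$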
Mixed-dimensional elliptic equations exhibiting a hierarchical structure are commonly used to model problems with high-aspect-ratio inclusions, such as flow in fractured porous media. We derive general abstract estimates based on the theory of functional a posteriori error estimates, yielding guaranteed upper bounds for the primal and dual variables and two-sided bounds for the primal-dual pair. We improve on the abstract results obtained with the functional approach by proposing four different ways of estimating the residual errors, depending on the extent to which the approximate solution satisfies conservation properties: (1) no conservation, (2) subdomain conservation, (3) grid-level conservation, and (4) exact conservation. This treatment results in sharper and fully computable estimates when mass is conserved either at the grid level or exactly, with a structure comparable to that of estimates obtained from grid-based a posteriori techniques. We demonstrate the practical effectiveness of our theoretical results through numerical experiments using four different discretization methods on synthetic problems and on applications based on benchmarks of flow in fractured porous media.
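For orientation, the prototypical functional a posteriori (majorant) estimate, in the form due to Repin for the model problem $-\Delta u = f$ (our illustrative recollection, not a result of this paper), reads
$$\|\nabla(u - v)\| \;\le\; \|\nabla v - y\| + C_\Omega\, \|f + \nabla \cdot y\| \qquad \text{for all } y \in H(\mathrm{div}),$$
where $v$ is any conforming approximation and $C_\Omega$ is a Friedrichs-type constant. The residual term $\|f + \nabla \cdot y\|$ is precisely the quantity whose estimation depends on the conservation cases (1)-(4) above; it vanishes under exact conservation.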
We study approaches for compressing the empirical measure in the context of finite-dimensional reproducing kernel Hilbert spaces (RKHSs). In this context, the empirical measure is contained within a natural convex set and can be approximated using convex optimization methods. Such an approximation gives rise, under certain conditions, to a coreset of data points. A key quantity that controls how large such a coreset has to be is the size of the largest ball around the empirical measure that is contained within the empirical convex set. The bulk of our work is concerned with deriving high-probability lower bounds on the size of such a ball under various conditions. We complement this derivation of the lower bound by developing techniques that allow us to apply the compression approach to concrete inference problems such as kernel ridge regression. We conclude with a construction of an infinite-dimensional RKHS for which the compression is poor, highlighting some of the difficulties one faces when trying to move to infinite-dimensional RKHSs.
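As a minimal, hedged sketch of one convex-optimization route to such a compression, here is greedy kernel herding (a Frank-Wolfe-type scheme for approximating the empirical kernel mean embedding; an illustrative choice, not necessarily the construction analyzed in the paper):

```python
import numpy as np

def gaussian_gram(X, bw=1.0):
    """Gram matrix of the Gaussian kernel k(x, y) = exp(-||x-y||^2 / (2 bw^2))."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

def herd_coreset(X, m, bw=1.0):
    """Greedily select m points whose uniform mixture tracks the empirical
    kernel mean embedding of X (kernel herding, a Frank-Wolfe variant)."""
    K = gaussian_gram(X, bw)            # n x n Gram matrix
    mean_emb = K.mean(axis=1)           # empirical mean embedding at the data points
    picked, acc = [], np.zeros(len(X))  # acc accumulates the coreset embedding
    for t in range(1, m + 1):
        i = int(np.argmax(mean_emb - acc / t))  # herding / Frank-Wolfe step
        picked.append(i)
        acc += K[:, i]
    return X[picked]

# usage: compress 1000 samples into a 50-point coreset
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
coreset = herd_coreset(X, 50)
```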
In a sports competition, a team might lose a powerful incentive to exert full effort if its final rank does not depend on the outcome of the matches still to be played. Therefore, the organiser should reduce the probability of such a situation to the extent possible. Our paper provides a classification scheme to identify these weakly (where one team is indifferent) or strongly (where both teams are indifferent) stakeless games. A statistical model is estimated to simulate the UEFA Champions League groups and compare the candidate schedules used in the 2021/22 season according to the competitiveness of the matches played in the last round(s). The option followed in four of the eight groups is found to be optimal under a wide set of parameters. Minimising the number of strongly stakeless matches is verified to be a likely goal of the computer draw of the fixture, which remains hidden from the public.
To minimize property loss and casualties in terror attacks and other emergency scenarios, timely and effective evacuation is of critical importance. Due to limited evacuation resources, i.e., the number of available exits, there exists interdependence among pedestrians, such as cooperation, competition, and the herd effect. Thus human factors, more specifically pedestrians' behavior in emergency evacuation, play a significant role in evacuation research. Effective evacuation can only be achieved when route planning is considered in conjunction with psychological dynamics, which are often ignored. As another drawback, previous research assumes the environment, including the available exits, to be stationary. However, we note that during an emergency some exits that are rarely used in normal times are opened, which potentially helps if pedestrians are aware of them. In this paper, we analyze the effect of pedestrians' behavior, i.e., the herd effect and knowledge of the changing environment, with a Cellular Automata (CA) simulation. Results of the simulation show the harmful impact of the herd effect and highlight the importance of informing pedestrians of environmental changes in a timely manner. Accordingly, we propose policy and procedural recommendations for the emergency management of large, crowded structures. Our future work includes considering more human factors and applying our model to log data extracted from videos of public venues, which can further demonstrate the effectiveness of our model in real scenarios.
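To make the simulation setup concrete, here is a toy CA sketch with a tunable herding probability (hypothetical parameter names; a drastically simplified stand-in for the paper's model, not its actual rules):

```python
import numpy as np

def simulate(grid=20, n_peds=80, herd=0.5, exits=((0, 10), (19, 10)), steps=300, seed=0):
    """Toy CA: each pedestrian either steps greedily toward its nearest
    known exit (Moore neighborhood) or, with probability `herd`, repeats
    the most recently observed move (herd effect). Returns the mean
    evacuation time; pedestrians still inside count as `steps`."""
    rng = np.random.default_rng(seed)
    ex = np.array(exits)
    pos = rng.integers(0, grid, size=(n_peds, 2))
    done = np.zeros(n_peds, dtype=bool)
    times = np.full(n_peds, steps)
    last = np.zeros(2, dtype=int)                    # most recently observed move
    for t in range(steps):
        for i in np.flatnonzero(~done):
            if rng.random() < herd and last.any():
                step = last                          # imitate the crowd
            else:
                target = ex[np.abs(ex - pos[i]).sum(1).argmin()]
                step = np.sign(target - pos[i])      # greedy step toward exit
            pos[i] = np.clip(pos[i] + step, 0, grid - 1)
            last = step
            if (np.abs(ex - pos[i]).sum(1) == 0).any():  # reached an exit
                done[i], times[i] = True, t
    return times.mean()

# compare evacuation times under weak vs. strong herding
print(simulate(herd=0.1), simulate(herd=0.9))
```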
We study the problem of testing whether a function $f: \mathbb{R}^n \to \mathbb{R}$ is a polynomial of degree at most $d$ in the \emph{distribution-free} testing model. Here, the distance between functions is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathbb{R}^n$ from which we can draw samples. In contrast to previous work, we do not assume that $\mathcal{D}$ has finite support. We design a tester that, given query access to $f$ and sample access to $\mathcal{D}$, makes $(d/\varepsilon)^{O(1)}$ many queries to $f$, accepts with probability $1$ if $f$ is a polynomial of degree at most $d$, and rejects with probability at least $2/3$ if every degree-$d$ polynomial $P$ disagrees with $f$ on a set of mass at least $\varepsilon$ with respect to $\mathcal{D}$. Our result also holds under mild assumptions when we receive only a polynomial number of bits of precision for each query to $f$, or when $f$ can only be queried on rational points representable using a logarithmic number of bits. Along the way, we prove a new stability theorem for multivariate polynomials that may be of independent interest.
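Explicitly, the distance underlying the rejection condition is the standard distribution-free testing distance
$$\mathrm{dist}_{\mathcal{D}}(f, P) \;=\; \Pr_{x \sim \mathcal{D}}\big[\, f(x) \neq P(x) \,\big],$$
so the tester must reject whenever $\inf_{\deg P \le d} \mathrm{dist}_{\mathcal{D}}(f, P) \ge \varepsilon$.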
We describe a numerical algorithm for approximating the equilibrium-reduced density matrix and the effective (mean force) Hamiltonian for a set of system spins coupled strongly to a set of bath spins, when the total system (system+bath) is held in canonical thermal equilibrium by weak coupling with a "super-bath". Our approach is a generalization of the now-standard typicality algorithms for computing the quantum expectation value of observables of bare quantum systems via trace estimators and Krylov subspace methods. In particular, our algorithm makes use of the fact that the reduced system density, when the bath is measured in a given random state, tends to concentrate about the corresponding thermodynamically averaged reduced system density. Theoretical error analysis and numerical experiments are given to validate the accuracy of our algorithm. Further numerical experiments demonstrate the potential of our approach for applications, including the study of quantum phase transitions and entanglement entropy for long-range interaction systems.
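For orientation, here is a minimal sketch of the underlying typicality trace-estimation idea for a bare system (an illustrative baseline; the paper's algorithm generalizes this to the reduced system+bath setting):

```python
import numpy as np
from scipy.sparse.linalg import expm_multiply

def thermal_expectation(H, A, beta, n_samples=20, seed=0):
    """Typicality estimate of Tr(A e^{-beta H}) / Tr(e^{-beta H}):
    for random Gaussian states z, the vector w = e^{-beta H / 2} z yields
    <w|A|w> / <w|w>, which concentrates around the canonical average."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    num = den = 0.0
    for _ in range(n_samples):
        z = rng.normal(size=n) + 1j * rng.normal(size=n)  # random state
        w = expm_multiply(-0.5 * beta * H, z)  # Krylov-type action of the exponential
        num += np.vdot(w, A @ w).real
        den += np.vdot(w, w).real
    return num / den
```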
Recently, numerous studies have demonstrated the presence of bias in machine learning powered decision-making systems. Although most definitions of algorithmic bias have solid mathematical foundations, the corresponding bias detection techniques often lack statistical rigor, especially for non-iid data. We fill this gap in the literature by presenting a rigorous non-parametric testing procedure for bias according to Predictive Rate Parity, a commonly considered notion of algorithmic bias. We adapt traditional asymptotic results for non-parametric estimators to test for bias in the presence of dependence commonly seen in user-level data generated by technology industry applications and illustrate how these approaches can be leveraged for mitigation. We further propose modifications of this methodology to address bias measured through marginal outcome disparities in classification settings and extend notions of predictive rate parity to multi-objective models. Experimental results on real data show the efficacy of the proposed detection and mitigation methods.
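As a simplified iid illustration of the quantity being tested (the paper's procedure is designed for dependent data; this sketch is not), predictive rate parity can be probed by comparing positive predictive values $\Pr(Y = 1 \mid \hat{Y} = 1)$ across two groups with a two-proportion z-test; all names here are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def ppv_parity_pvalue(y, yhat, group):
    """Two-sided two-proportion z-test for equality of
    PPV = P(Y=1 | Yhat=1) between groups 0 and 1 (iid simplification)."""
    y, yhat, group = map(np.asarray, (y, yhat, group))
    ppv, n = [], []
    for g in (0, 1):
        sel = (group == g) & (yhat == 1)
        n.append(sel.sum())
        ppv.append(y[sel].mean())
    pooled = (ppv[0] * n[0] + ppv[1] * n[1]) / (n[0] + n[1])
    se = np.sqrt(pooled * (1 - pooled) * (1 / n[0] + 1 / n[1]))
    z = (ppv[0] - ppv[1]) / se
    return 2 * norm.sf(abs(z))   # small p-value suggests a parity violation
```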
Estimating counterfactual outcomes over time from observational data is relevant for many applications (e.g., personalized medicine). Yet, state-of-the-art methods build upon simple long short-term memory (LSTM) networks, thus rendering inference for complex, long-range dependencies challenging. In this paper, we develop a novel Causal Transformer for estimating counterfactual outcomes over time. Our model is specifically designed to capture complex, long-range dependencies among time-varying confounders. For this, we combine three transformer subnetworks with separate inputs for time-varying covariates, previous treatments, and previous outcomes into a joint network with in-between cross-attentions. We further develop a custom, end-to-end training procedure for our Causal Transformer. Specifically, we propose a novel counterfactual domain confusion loss to address confounding bias: it aims to learn adversarially balanced representations, so that they are predictive of the next outcome but non-predictive of the current treatment assignment. We evaluate our Causal Transformer on synthetic and real-world datasets, where it achieves superior performance over current baselines. To the best of our knowledge, this is the first work to propose a transformer-based architecture for estimating counterfactual outcomes from longitudinal data.
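A schematic PyTorch sketch of a domain-confusion-style balancing term (our hedged reading of the idea, not the authors' implementation): the treatment head's predicted distribution is pushed toward uniform, so that the shared representation becomes non-predictive of the current treatment.

```python
import torch
import torch.nn.functional as F

def confusion_loss(treatment_logits):
    """Cross-entropy between the predicted treatment distribution and the
    uniform distribution; minimal when the representation carries no
    information about the current treatment assignment."""
    log_probs = F.log_softmax(treatment_logits, dim=-1)
    return -log_probs.mean(dim=-1).mean()

# schematic overall objective (lam is a hypothetical trade-off weight):
# loss = outcome_loss + lam * confusion_loss(treatment_head(representation))
```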
We recall some of the history of the information-theoretic approach to deriving core results in probability theory, and indicate parts of the recent resurgence of interest in this area, with current progress along several interesting directions. Then we give a new information-theoretic proof of a finite version of de Finetti's classical representation theorem for finite-valued random variables. We derive an upper bound on the relative entropy between the distribution of the first $k$ variables in a sequence of $n$ exchangeable random variables and an appropriate mixture over product distributions. The mixing measure is characterised as the law of the empirical measure of the original sequence, and de Finetti's result is recovered as a corollary. The proof is nicely motivated by the Gibbs conditioning principle in connection with statistical mechanics, and it proceeds along an appealing sequence of steps. The technical estimates required for these steps are obtained via the use of a collection of combinatorial tools known within information theory as `the method of types.'
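In symbols, writing $\hat{\mu}_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ for the empirical measure of the exchangeable sequence $X_1, \ldots, X_n$, the approximating mixture described above is
$$M_k(x_1, \ldots, x_k) \;=\; \mathbb{E}\Big[\prod_{i=1}^{k} \hat{\mu}_n(\{x_i\})\Big],$$
i.e., a mixture of product distributions with mixing measure the law of $\hat{\mu}_n$, and the main result bounds the relative entropy $D\big(P_{X_1^k} \,\big\|\, M_k\big)$.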