Causal reversibility blends reversibility and causality for concurrent systems. It states that an action can be undone provided that all of its consequences have been undone already, thus making it possible to bring the system back to a past consistent state. Time reversibility is instead studied in the field of stochastic processes, mostly for the sake of efficient analysis. A performance model based on a continuous-time Markov chain is time reversible if its stochastic behavior remains the same when the direction of time is reversed. We bridge these two theories of reversibility by showing the conditions under which causal reversibility and time reversibility are both ensured by construction. This is done in the setting of a stochastic process calculus, which is then equipped with a variant of stochastic bisimilarity accounting for both the forward and backward directions.
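To make the time-reversibility condition concrete, here is a minimal sketch (not taken from the paper) that checks detailed balance for a finite continuous-time Markov chain: an ergodic CTMC with generator matrix Q and stationary distribution pi is time reversible exactly when pi_i q_ij = pi_j q_ji for all states i, j.

```python
import numpy as np

def is_time_reversible(Q, tol=1e-9):
    """Check detailed balance for a finite CTMC with generator matrix Q.

    Time reversibility of an ergodic CTMC is equivalent to
    pi[i] * Q[i, j] == pi[j] * Q[j, i] for all states i, j,
    where pi is the stationary distribution (pi @ Q = 0, sum(pi) = 1).
    """
    n = Q.shape[0]
    # Solve pi @ Q = 0 together with the normalization constraint sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    flux = pi[:, None] * Q          # probability flux i -> j
    return np.allclose(flux, flux.T, atol=tol)

# Example: a birth-death chain, which is always time reversible.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])
print(is_time_reversible(Q))  # True
```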
Reasoning is a pivotal component of generalizable human intelligence, and it holds great potential for helping reinforcement learning (RL) agents generalize to varied goals by summarizing part-to-whole arguments and discovering cause-and-effect relations. However, how to discover and represent such causal relations remains a major obstacle to the development of causal RL. In this paper, we augment Goal-Conditioned RL (GCRL) with a Causal Graph (CG), a structure built on the relations between objects and events. We formulate the GCRL problem as variational likelihood maximization with the CG as a latent variable. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: using interventional data to estimate the posterior over the CG, and using the CG to learn generalizable models and interpretable policies. Because there is no public benchmark for verifying generalization capability under reasoning, we design nine tasks and empirically demonstrate the effectiveness of the proposed method against five baselines on them. Further theoretical analysis shows that the performance improvement is attributable to a virtuous cycle of causal discovery, transition modeling, and policy training, which is consistent with the experimental evidence from extensive ablation studies.
This paper deals with inference in a class of stable but nearly-unstable processes. Autoregressive processes are considered, in which the bridge between stability and instability is expressed by a time-varying companion matrix $A_{n}$ with spectral radius $\rho(A_{n}) < 1$ satisfying $\rho(A_{n}) \rightarrow 1$. This framework is particularly suitable for understanding unit root issues by approaching the unit circle from its inner boundary. Consistency is established for the empirical covariance and the OLS estimator, together with asymptotic normality under appropriate hypotheses, when $A$, the limit of $A_n$, has a real spectrum; a particular case is deduced when $A$ also contains complex eigenvalues. The limiting process is integrated with either one unit root (located at 1 or $-1$) or even two unit roots located at 1 and $-1$. Finally, a set of simulations illustrates the asymptotic behavior of the OLS estimator. The results are essentially proved by means of $L^2$ computations and the limit theory of triangular arrays of martingales.
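As an illustration of the nearly-unstable setting (not the paper's experiments), the following sketch simulates the scalar case with $\rho_n = 1 - c/n$ and computes the OLS estimator of the autoregressive coefficient; the choice of c and the Gaussian noise are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ols(n, c=5.0, sigma=1.0):
    """Nearly-unstable AR(1): X_t = rho_n * X_{t-1} + eps_t with rho_n = 1 - c/n < 1."""
    rho_n = 1.0 - c / n
    eps = rng.normal(scale=sigma, size=n)
    x = np.zeros(n + 1)
    for t in range(n):
        x[t + 1] = rho_n * x[t] + eps[t]
    # OLS estimator of rho_n: sum(X_{t-1} X_t) / sum(X_{t-1}^2)
    num = np.dot(x[:-1], x[1:])
    den = np.dot(x[:-1], x[:-1])
    return rho_n, num / den

for n in (100, 1000, 10000):
    rho_n, rho_hat = simulate_ols(n)
    print(f"n={n:6d}  rho_n={rho_n:.4f}  OLS estimate={rho_hat:.4f}")
```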
We provide a new variational definition for the spread of an orbital under periodic boundary conditions (PBCs) that is continuous with respect to the gauge, consistent in the thermodynamic limit, well suited to diffuse orbitals, and systematically adaptable to schemes for computing localized Wannier functions. Existing definitions do not satisfy all of these desiderata, partly because they depend on an "orbital center", a concept that is ill-defined under PBCs. Based on this theoretical development, we showcase a robust and efficient (10x-70x fewer iterations) localization scheme across a range of materials.
Given a randomized experiment with binary outcomes, exact confidence intervals for the average causal effect of the treatment can be computed through a series of permutation tests. This approach requires minimal assumptions and is valid for all sample sizes, as it does not rely on large-sample approximations such as the central limit theorem. We show that these confidence intervals can be found in $O(n \log n)$ permutation tests in the case of balanced designs, where the treatment and control groups have equal sizes, and $O(n^2)$ permutation tests in the general case. Prior to this work, the most efficient known constructions required $O(n^2)$ such tests in the balanced case [Li and Ding, 2016], and $O(n^4)$ tests in the general case [Rigdon and Hudgens, 2015]. Our results thus facilitate exact inference as a viable option for randomized experiments far larger than those accessible by previous methods.
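For intuition, here is a minimal Monte Carlo sketch of the kind of permutation test that such constructions invert repeatedly; it is not the papers' exact procedure, and the O(n log n) and O(n^2) results concern how many such tests must be performed, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_pvalue(y, z, n_perm=10000):
    """Monte Carlo permutation test for a randomized experiment with binary outcomes.

    y: observed binary outcomes; z: binary treatment assignment (same length).
    Test statistic: difference in means between treated and control units.
    Re-randomizes z uniformly over assignments with the same number of treated units.
    """
    y, z = np.asarray(y, float), np.asarray(z, int)
    obs = y[z == 1].mean() - y[z == 0].mean()
    count = 0
    for _ in range(n_perm):
        zp = rng.permutation(z)
        stat = y[zp == 1].mean() - y[zp == 0].mean()
        count += abs(stat) >= abs(obs)
    return (count + 1) / (n_perm + 1)

# Toy data: 10 units, 5 treated, binary outcomes.
y = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
z = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(permutation_pvalue(y, z))
```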
Geometrically continuous splines are piecewise polynomial functions defined on a collection of patches stitched together through transition maps. They are called $G^{r}$-splines if, after composition with the transition maps, they are continuously differentiable to order $r$ across each pair of patches with stitched boundaries. Splines of this type have been used to represent smooth shapes with complex topology, for which (parametric) spline functions on fixed partitions are not sufficient. In this article, we develop new algebraic tools to analyze $G^r$-spline spaces. We define $G^{r}$-domains and transition maps using an algebraic approach, and establish an algebraic criterion to determine whether a piecewise function is $G^r$-continuous on the given domain. In the proposed framework, we construct a chain complex whose top homology is isomorphic to the $G^{r}$-spline space. This complex generalizes the Billera-Schenck-Stillman homological complex used to study parametric splines. Additionally, we show how previous constructions of $G^r$-splines fit into this new algebraic framework, and present an algorithm to construct bases for $G^r$-spline spaces. We illustrate how our algebraic approach works with concrete examples, and prove a dimension formula for the $G^r$-spline space in terms of invariants of the chain complex. In some special cases, explicit dimension formulas in terms of the degree of the splines are also given.
Interpreting the inner workings of neural networks is crucial for the trustworthy development and deployment of these black-box models. Prior interpretability methods focus on correlation-based measures to attribute model decisions to individual examples. However, these measures are susceptible to noise and to spurious correlations encoded in the model during training (e.g., biased inputs, model overfitting, or misspecification), and they have proven to yield noisy and unstable attributions that prevent any transparent understanding of the model's behavior. In this paper, we develop a robust intervention-based method, grounded in causal analysis, to capture cause-effect mechanisms in pre-trained neural networks and their relation to the prediction. Our approach relies on path interventions to infer the causal mechanisms within hidden layers and to isolate the information that is relevant and necessary to the model's prediction while discarding noisy components. The result is a set of task-specific causal explanatory graphs that can audit model behavior and express the actual causes underlying its performance. We apply our method to vision models trained on image classification tasks and provide extensive quantitative experiments showing that our approach captures more stable and faithful explanations than standard attribution-based methods. Furthermore, the underlying causal graphs reveal the neural interactions within the model, making the method a valuable tool for other applications (e.g., model repair).
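To illustrate what an intervention on a hidden layer looks like in practice (a generic do-style edit, not the paper's path-intervention algorithm), the following PyTorch sketch clamps one hidden unit of a toy classifier and measures how the prediction shifts; the model, layer index, and unit are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a pre-trained vision model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 8)

def intervene_on_unit(model, x, layer_idx, unit, value):
    """Fix one hidden unit to a constant during the forward pass (a do-style intervention)."""
    def hook(_module, _inp, out):
        out = out.clone()
        out[:, unit] = value
        return out
    handle = model[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        y = model(x)
    handle.remove()
    return y

with torch.no_grad():
    base = model(x)
intervened = intervene_on_unit(model, x, layer_idx=1, unit=5, value=0.0)
# The shift in logits quantifies the causal effect of that hidden unit on the prediction.
print((intervened - base).abs().max())
```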
Obtaining human-interpretable explanations of large, general-purpose language models is an urgent goal for AI safety. However, it is just as important that our interpretability methods be faithful to the causal dynamics underlying model behavior and able to robustly generalize to unseen inputs. Distributed Alignment Search (DAS) is a powerful gradient-descent method, grounded in a theory of causal abstraction, that has uncovered perfect alignments between interpretable symbolic algorithms and small deep learning models fine-tuned for specific tasks. In the present paper, we scale DAS significantly by replacing the remaining brute-force search steps with learned parameters -- an approach we call Boundless DAS. This enables us to efficiently search for interpretable causal structure in large language models while they follow instructions. We apply Boundless DAS to the Alpaca model (7B parameters), which, off the shelf, solves a simple numerical reasoning problem. With Boundless DAS, we discover that Alpaca does this by implementing a causal model with two interpretable boolean variables. Furthermore, we find that the alignment of neural representations with these variables is robust to changes in inputs and instructions. These findings mark a first step toward a deep understanding of the inner workings of our largest and most widely deployed language models.
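The core computation in DAS is a distributed interchange intervention: hidden states from a "source" run are swapped into a "base" run inside a learned rotated subspace, and the rotation is trained so that the intervened output matches the counterfactual prediction of the hypothesized causal model. The sketch below is a toy reading of that idea only, with random tensors standing in for the frozen model's hidden states and labels, and a fixed subspace size k (whereas Boundless DAS also learns the subspace boundary).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, k = 16, 4  # hidden width, size of the aligned subspace (both hypothetical)

# Learned rotation; orthogonality is enforced via QR at every forward pass.
W = nn.Parameter(torch.randn(d, d))

def rotation():
    Q, _ = torch.linalg.qr(W)
    return Q

def interchange(h_base, h_source):
    """Swap the first k rotated coordinates of the base representation
    with those of the source, then rotate back."""
    R = rotation()
    rb, rs = h_base @ R, h_source @ R
    mixed = torch.cat([rs[:, :k], rb[:, k:]], dim=1)
    return mixed @ R.T

# Toy "upper layers" of a frozen model and counterfactual targets.
head = nn.Linear(d, 2)
for p in head.parameters():
    p.requires_grad_(False)

h_base, h_source = torch.randn(8, d), torch.randn(8, d)
target = torch.randint(0, 2, (8,))  # what the causal model predicts after the swap

opt = torch.optim.Adam([W], lr=1e-2)
for _ in range(100):
    logits = head(interchange(h_base, h_source))
    loss = nn.functional.cross_entropy(logits, target)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```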
"Data is the new oil", in short, data would be the essential source of the ongoing fourth industrial revolution, which has led some commentators to assimilate too quickly the quantity of data to a source of wealth in itself, and consider the development of big data as an quasi direct cause of profit. Human resources management is not escaping this trend, and the accumulation of large amounts of data on employees is perceived by some entrepreneurs as a necessary and sufficient condition for the construction of predictive models of complex work behaviors such as absenteeism or job performance. In fact, the analogy is somewhat misleading: unlike oil, there are no major issues here concerning the production of data (whose flows are generated continuously and at low cost by various information systems), but rather their ''refining'', i.e. the operations necessary to transform this data into a useful product, namely into knowledge. This transformation is where the methodological challenges of data valuation lie, both for practitioners and for academic researchers. Considerations on the methods applicable to take advantage of the possibilities offered by these massive data are relatively recent, and often highlight the disruptive aspect of the current ''data deluge'' to point out that this evolution would be the source of a revival of empiricism in a ''fourth paradigm'' based on the intensive and ''agnostic'' exploitation of massive amounts of data in order to bring out new knowledge, following a purely inductive logic. Although we do not adopt this speculative point of view, it is clear that data-driven approaches are scarce in quantitative HRM studies. However, there are well-established methods, particularly in the field of data mining, which are based on inductive approaches. This area of quantitative analysis with an inductive aim is still relatively unexplored in HRM ( apart from typological analyses). The objective of this paper is first to give an overview of data driven methods that can be used for HRM research, before proposing an empirical illustration which consists in an exploratory research combining a latent profile analysis and an exploration by Gaussian graphical models.
This paper systematizes log-based Transparency Enhancing Technologies. Based on established work on transparency from multiple disciplines, we outline the purpose, usefulness, and pitfalls of transparency. We describe the mechanisms that allow log-based transparency enhancing technologies to be implemented: logging mechanisms; sanitisation mechanisms and their trade-offs with privacy; data release and query mechanisms; and how transparency relates to the external mechanisms that make it possible to contest a system and hold system operators accountable. We illustrate the role these mechanisms play with two case studies, Certificate Transparency and cryptocurrencies, showing the role transparency plays in their function as well as the issues these systems face in delivering transparency.
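To make the logging mechanism concrete, here is a minimal sketch of a Merkle-tree inclusion proof of the kind used by Certificate Transparency; the leaf/node hash prefixes follow RFC 6962, but the construction below assumes a power-of-two number of entries for brevity and omits consistency proofs.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962-style domain separation: 0x00 for leaves, 0x01 for internal nodes.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def build_tree(leaves):
    """Return the levels of a Merkle tree (power-of-two leaf count assumed)."""
    level = [leaf_hash(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Audit path: the sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(leaf, path, root):
    h = leaf_hash(leaf)
    for sibling, sibling_is_left in path:
        h = node_hash(sibling, h) if sibling_is_left else node_hash(h, sibling)
    return h == root

entries = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
levels = build_tree(entries)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
print(verify(b"cert-2", proof, root))  # True: the log provably contains this entry
```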
Even when the causal graph underlying our data is unknown, we can use observational data to narrow down the possible values that an average treatment effect (ATE) can take by (1) identifying the graph up to a Markov equivalence class and (2) estimating the ATE for each graph in the class. While the PC algorithm can identify this class under strong faithfulness assumptions, it can be computationally prohibitive. Fortunately, only the local graph structure around the treatment is required to identify the set of possible ATE values, a fact exploited by local discovery algorithms to improve computational efficiency. In this paper, we introduce Local Discovery using Eager Collider Checks (LDECC), a new local causal discovery algorithm that leverages unshielded colliders to orient the treatment's parents differently from existing methods. We show that there exist graphs on which LDECC exponentially outperforms existing local discovery algorithms, and vice versa. Moreover, we show that LDECC and existing algorithms rely on different faithfulness assumptions, and we leverage this insight to weaken the assumptions needed to identify the set of possible ATE values.
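For intuition about the collider checks themselves (not the full LDECC procedure), the sketch below uses Fisher-z partial-correlation tests on synthetic data to detect the unshielded-collider signature: two variables that are marginally independent become dependent once their common effect is conditioned on.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fisher_z_indep(x, y, cond, alpha=0.05):
    """Test X independent of Y given cond via partial correlation and the Fisher z transform."""
    data = np.column_stack([x, y] + list(cond))
    corr = np.corrcoef(data, rowvar=False)
    prec = np.linalg.inv(corr)
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    n, k = len(x), len(cond)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha   # True -> independence not rejected

# Unshielded collider X -> Z <- Y: X and Y are marginally independent,
# but become dependent once we condition on their common effect Z.
n = 5000
X = rng.normal(size=n)
Y = rng.normal(size=n)
Z = X + Y + 0.5 * rng.normal(size=n)

marginally_indep = fisher_z_indep(X, Y, [])
dep_given_z = not fisher_z_indep(X, Y, [Z])
print("Z looks like an unshielded collider on X - Z - Y:", marginally_indep and dep_given_z)
```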