In traditional semantics for classical logic and its extensions, such as modal logic, propositions are interpreted as subsets of a set, as in discrete duality, or as clopen sets of a Stone space, as in topological duality. A point in such a set can be viewed as a "possible world," with the key property of a world being primeness--a world makes a disjunction true only if it makes one of the disjuncts true--which classically implies totality--for each proposition, a world either makes the proposition true or makes its negation true. This chapter surveys a more general approach to logical semantics, known as possibility semantics, which replaces possible worlds with possibly partial "possibilities." In classical possibility semantics, propositions are interpreted as regular open sets of a poset, as in set-theoretic forcing, or as compact regular open sets of an upper Vietoris space, as in the recent theory of "choice-free Stone duality." The elements of these sets, viewed as possibilities, may be partial in the sense of making a disjunction true without settling which disjunct is true. We explain how possibilities may be used in semantics for classical logic and modal logics and generalized to semantics for intuitionistic logics. The goals are to overcome or deepen incompleteness results for traditional semantics, to avoid the nonconstructivity of traditional semantics, and to provide richer structures for the interpretation of new languages.
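To make the partiality point concrete, here is one standard formulation from the possibility-semantics literature (a sketch of the usual definitions, with $\sqsubseteq$ as the refinement order on possibilities): propositions are exactly the sets $U$ satisfying $x \in U \iff \forall x' \sqsubseteq x \, \exists x'' \sqsubseteq x' \colon x'' \in U$, i.e., the regular open sets of the downset topology on the poset. The join of $U$ and $V$ is $\mathrm{int}(\mathrm{cl}(U \cup V))$, so a possibility can lie in the interpretation of $\varphi \vee \psi$, with every refinement further refinable to one settling a disjunct, without itself lying in $U$ or in $V$.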
We introduce causal responder detection (CARD), a novel method for responder analysis that identifies treated subjects who respond significantly to a treatment. Leveraging recent advances in conformal prediction, CARD employs machine learning techniques to accurately identify responders while controlling the false discovery rate in finite samples. Additionally, we incorporate a propensity score adjustment to mitigate the bias arising from non-random treatment allocation, enhancing the robustness of our method in observational settings. Simulation studies demonstrate that CARD detects responders with high power across diverse scenarios.
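As a rough illustration of the conformal machinery such a method builds on (not the paper's actual procedure, which additionally uses learned scores and the propensity adjustment): conformal p-values computed against a calibration set of non-responders can be fed to Benjamini-Hochberg, which controls the false discovery rate in finite samples under exchangeability.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Conformal p-value for each test subject against calibration scores
    drawn from the no-response (null) population."""
    cal = np.asarray(cal_scores)
    return np.array([(1 + np.sum(cal >= s)) / (len(cal) + 1)
                     for s in test_scores])

def benjamini_hochberg(pvals, alpha=0.1):
    """Boolean mask of rejections under the BH procedure at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Hypothetical data: larger score = stronger apparent response.
rng = np.random.default_rng(0)
cal = rng.normal(0, 1, 500)                      # calibration non-responders
test = np.concatenate([rng.normal(0, 1, 80),     # true non-responders
                       rng.normal(3, 1, 20)])    # true responders
flags = benjamini_hochberg(conformal_pvalues(cal, test), alpha=0.1)
print(flags.sum(), "subjects flagged as responders")
```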
As an alternative to Kripke models, simplicial complexes are a versatile semantic primitive on which to interpret epistemic logic. Given a set of vertices, a simplicial complex is a downward closed set of subsets, called simplexes, of the vertex set. A maximal simplex is called a facet. Impure simplicial complexes can represent situations in which some agents (processes) are dead. It is known that impure simplicial complexes categorically correspond to so-called partial epistemic (Kripke) models. In this contribution, we define a notion of bisimulation to compare impure simplicial complexes and show that it has the Hennessy-Milner property. These results are for a logical language including atoms that express whether agents are alive or dead. Without these atoms no reasonable standard notion of bisimulation exists, as we amply justify with counterexamples: such a restricted language is insufficiently expressive.
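For readers unfamiliar with the objects involved, the following toy sketch (our own illustration, with hypothetical agent names) encodes a simplicial complex in Python as a downward-closed family of frozensets of (agent, value) vertices and computes its facets; a facet containing no vertex of some agent models that agent being dead there, making the complex impure.

```python
from itertools import combinations

def is_complex(simplexes):
    """Check downward closure: every nonempty subset of a simplex is a simplex."""
    s = set(simplexes)
    return all(frozenset(sub) in s
               for simp in s
               for r in range(1, len(simp))
               for sub in combinations(simp, r))

def facets(simplexes):
    """Maximal simplexes: those not properly contained in another simplex."""
    s = set(simplexes)
    return {f for f in s if not any(f < g for g in s)}

# Vertices are (agent, local value) pairs.
C = {frozenset(x) for x in [
    [("a", 0)], [("b", 0)], [("b", 1)], [("c", 0)],
    [("a", 0), ("b", 0)], [("a", 0), ("c", 0)], [("b", 0), ("c", 0)],
    [("b", 1), ("c", 0)], [("a", 0), ("b", 0), ("c", 0)],
]}
assert is_complex(C)
print(facets(C))
# Two facets: {a0, b0, c0} and {b1, c0}; the second has no vertex for
# agent a, so a is dead there -- the complex is impure.
```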
We propose a novel approach for estimating conditional or parametric expectations in settings where obtaining samples or evaluating integrands is costly. Working in the framework of probabilistic numerical methods (such as Bayesian quadrature), our approach allows us to incorporate prior information about the integrands, in particular prior knowledge of the smoothness of the integrands and of the conditional expectation. As a result, our approach provides a way of quantifying uncertainty and achieves a fast convergence rate, which we confirm both theoretically and empirically on challenging tasks in Bayesian sensitivity analysis, computational finance, and decision making under uncertainty.
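As a minimal sketch of the probabilistic-numerics idea in the unconditional special case (our own toy example, not the paper's estimator): under a GP prior with an RBF kernel of lengthscale $\ell$ encoding smoothness, the posterior over $\int_0^1 f$ is Gaussian with closed-form mean and variance, giving both an estimate and its uncertainty.

```python
import numpy as np
from scipy.special import erf

def bq_posterior(x, y, ell=0.2, jitter=1e-10):
    """GP posterior mean/variance of I = int_0^1 f(t) dt from noiseless
    evaluations y = f(x): RBF kernel, uniform measure, all in closed form."""
    x = np.asarray(x, float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))
    K += jitter * np.eye(len(x))
    # Kernel mean z_i = int_0^1 k(t, x_i) dt
    z = ell * np.sqrt(np.pi / 2) * (erf((1 - x) / (np.sqrt(2) * ell))
                                    + erf(x / (np.sqrt(2) * ell)))
    # Prior variance of I: int_0^1 int_0^1 k(s, t) ds dt
    I2 = (np.sqrt(2 * np.pi) * ell * erf(1 / (np.sqrt(2) * ell))
          - 2 * ell**2 * (1 - np.exp(-1 / (2 * ell**2))))
    w = np.linalg.solve(K, z)          # Bayesian quadrature weights
    return w @ y, I2 - z @ np.linalg.solve(K, z)

x = np.linspace(0, 1, 8)
mean, var = bq_posterior(x, np.sin(2 * np.pi * x) + x)
print(mean, var)   # true integral is 0.5; the variance quantifies uncertainty
```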
This paper studies delegation in a model of discrete choice. In the delegation problem, an uninformed principal must consult an informed agent to make a decision. Both the agent and principal have preferences over the decided-upon action which vary based on the state of the world, and which may not be aligned. The principal may commit to a mechanism, which maps reports of the agent to actions. When this mechanism is deterministic, it can take the form of a menu of actions, from which the agent simply chooses upon observing the state. In this case, the principal is said to have delegated the choice of action to the agent. We consider a setting where the decision being delegated is a choice of a utility-maximizing action from a set of several options. We assume the shared portion of the agent's and principal's utilities is drawn from a distribution known to the principal, and that utility misalignment takes the form of a known bias for or against each action. We provide tight approximation analyses for simple threshold policies under three increasingly general sets of assumptions. With independently distributed utilities, we prove a $3$-approximation. When the agent has an outside option the principal cannot rule out, the constant approximation fails, but we prove a $\log \rho/\log\log \rho$-approximation, where $\rho$ is the ratio of the maximum value to the optimal utility. We also give a weaker but tight bound that holds for correlated values, and complement our upper bounds with hardness results. One special case of our model is utility-based assortment optimization, for which our results are new.
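To illustrate the model (a toy simulation with hypothetical numbers, not the paper's analysis): the principal restricts the agent to a menu of actions whose bias lies below a threshold, the agent picks the action maximizing shared value plus bias, and the principal collects only the shared value, so tightening the menu trades lost options against reduced distortion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_states = 5, 200_000
bias = np.array([0.0, 0.3, 0.6, 0.9, 1.2])        # hypothetical per-action biases
v = rng.exponential(1.0, (n_states, n_actions))    # shared (principal) values

def principal_value(menu):
    """Agent picks argmax of v + bias over the menu; the principal earns v."""
    menu = np.asarray(menu)
    pick = menu[(v[:, menu] + bias[menu]).argmax(axis=1)]
    return v[np.arange(n_states), pick].mean()

for tau in [0.0, 0.45, 0.95, 1.5]:                 # candidate bias thresholds
    menu = [a for a in range(n_actions) if bias[a] <= tau]
    print(f"threshold {tau}: menu {menu}, value {principal_value(menu):.3f}")
```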
Many problems in physics and chemistry are formulated as the minimization of a functional. Therefore, methods for solving these problems typically require differentiating maps whose input and/or output are functions -- commonly referred to as variational differentiation. Such maps are not addressed at the mathematical level by the chain rule, which underlies modern symbolic and algorithmic differentiation (AD) systems. Although there are algorithmic workarounds such as tracing and reverse accumulation, they do not provide human readability and introduce strict programming constraints that bottleneck performance, especially in high-performance computing (HPC) environments. In this manuscript, we propose a new computer-theoretic model of differentiation built on pullbacks of the $\mathbf{B}$ and $\mathbf{C}$ combinators from combinatory logic. Unlike frameworks based on the chain rule, this model differentiates a minimal complete basis for the space of computable functions. Consequently, the model is capable of analytic backpropagation and variational differentiation while supporting complex numbers. To demonstrate the generality of this approach, we build a system named CombDiff, which can differentiate nontrivial variational problems such as Hartree-Fock (HF) theory and multilayer perceptrons.
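To fix intuition (a minimal sketch of our own, not CombDiff's actual model): if every primitive carries its pullback, then the composition combinator $\mathbf{B}$ ($\mathbf{B}\,f\,g\,x = f(g\,x)$) only needs the single rule below, and expressions built from such combinators can be differentiated analytically.

```python
import math

# A differentiable map is a pair (forward, pullback):
#   forward : x -> y;   pullback : (x, dy) -> dx  (a vector-Jacobian product).

def B(f, g):
    """Composition combinator B f g = f . g; its pullback is the chain rule."""
    ff, fb = f
    gf, gb = g
    forward = lambda x: ff(gf(x))
    pullback = lambda x, dy: gb(x, fb(gf(x), dy))
    return forward, pullback

sin = (math.sin, lambda x, dy: dy * math.cos(x))
square = (lambda x: x * x, lambda x, dy: dy * 2 * x)

fwd, pb = B(sin, square)        # x -> sin(x^2)
print(fwd(1.0))                 # sin(1)
print(pb(1.0, 1.0))             # 2 * cos(1): the analytic derivative at x = 1
```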
Dataset distillation aims to condense large datasets into a small number of synthetic examples that can be used as drop-in replacements when training new models. It has applications to interpretability, neural architecture search, privacy, and continual learning. Despite strong successes in supervised domains, such methods have not yet been extended to reinforcement learning, where the lack of a fixed dataset renders most distillation methods unusable. Filling this gap, we formalize behaviour distillation, a setting that aims to discover and then condense the information required for training an expert policy into a synthetic dataset of state-action pairs, without access to expert data. We then introduce Hallucinating Datasets with Evolution Strategies (HaDES), a method for behaviour distillation that can discover datasets of just four state-action pairs which, under supervised learning, train agents to competitive performance levels in continuous control tasks. We show that these datasets generalize out of distribution to training policies with a wide range of architectures and hyperparameters. We also demonstrate an application to a downstream task, namely training multi-task agents in a zero-shot fashion. Beyond behaviour distillation, HaDES provides significant improvements in neuroevolution for RL over previous approaches and achieves SoTA results on one standard supervised dataset distillation task. Finally, we show that visualizing the synthetic datasets can provide human-interpretable task insights.
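As a self-contained caricature of the approach (hypothetical fitness function and dimensions, not the HaDES implementation): evolution strategies optimize a flattened synthetic dataset of four state-action pairs, where fitness is the performance of a policy behaviourally cloned from that dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
probes = rng.normal(size=(32, 2))              # fixed probe states
expert = probes @ np.array([[1.0], [-1.0]])    # hypothetical expert actions

def fitness(flat):
    """Fit a linear policy to the synthetic pairs by least squares
    (behavioural cloning), then score it against the expert on probe
    states. In HaDES proper this would be an environment-return rollout."""
    data = flat.reshape(4, 3)                  # 4 pairs: 2-d state, 1-d action
    S, A = data[:, :2], data[:, 2:]
    W, *_ = np.linalg.lstsq(S, A, rcond=None)
    return -np.mean((probes @ W - expert) ** 2)

# OpenAI-style evolution strategies over the dataset itself
theta, sigma, lr, pop = rng.normal(size=12), 0.1, 0.05, 64
for _ in range(300):
    eps = rng.normal(size=(pop, theta.size))
    f = np.array([fitness(theta + sigma * e) for e in eps])
    theta += lr / (pop * sigma) * eps.T @ (f - f.mean())
print(fitness(theta))                          # approaches 0 as cloning succeeds
```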
The generation of equilibrium samples of molecular systems has been a long-standing problem in statistical physics. Boltzmann Generators are a generative machine learning method that addresses this issue by learning a transformation via a normalizing flow from a simple prior distribution to the target Boltzmann distribution of interest. Recently, flow matching has been employed to train Boltzmann Generators for small molecular systems in Cartesian coordinates. We extend this work and propose a first framework for Boltzmann Generators that are transferable across chemical space, such that they predict zero-shot Boltzmann distributions for test molecules without being retrained for these systems. These transferable Boltzmann Generators allow approximate sampling from the target distribution of unseen systems, as well as efficient reweighting to the target Boltzmann distribution. The transferability of the proposed framework is evaluated on dipeptides, where we show that it generalizes efficiently to unseen systems. Furthermore, we demonstrate that our proposed architecture enhances the efficiency of Boltzmann Generators trained on single molecular systems.
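For orientation, here is the conditional flow-matching objective in its simplest form (a sketch assuming the common linear interpolation path; the transferable setting described above additionally conditions the vector field on the molecule, which we omit here):

```python
import torch

def cfm_loss(v_theta, x0, x1):
    """Conditional flow matching: regress the network's velocity field onto
    the target velocity x1 - x0 of the path x_t = (1 - t) x0 + t x1, where
    x0 is drawn from the simple prior and x1 from Boltzmann-distributed data."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1
    return ((v_theta(t, xt) - (x1 - x0)) ** 2).sum(dim=1).mean()

# Usage with any network mapping (t, x) -> a velocity of the same shape as x:
#   loss = cfm_loss(net, prior_batch, boltzmann_batch); loss.backward()
```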
We introduce a novel estimator for predicting outcomes in the presence of hidden confounding across different distributional settings, without relying on regularization or a known causal structure. Our approach is based on parametrizing the dependence between the covariates and the response noise, ensuring optimal prediction and favorable asymptotic properties. We achieve identifiability under lean assumptions with a direct empirical translation, enabling the incorporation of causal parameters into a generative model that replicates the true conditional distribution of a test environment. The method achieves probabilistic alignment with test distributions uniformly across interventions, offering robust predictions without worst-case optimization or specific assumptions about the strength of perturbations at test time. Our findings represent a significant advancement in the statistical understanding of causality, providing a robust and flexible framework for predictive modeling across varied domains.
Over the past century, the focus of scientific practice has shifted from purely intellectual exploration to problem-solving, leading to uneven development in scientific knowledge. Our analysis of 41 million research articles published over the past six decades reveals this trend: atypical papers, which represent complementary innovation, have become the majority, while displacement papers, which represent substitutive innovation, have declined to a minority. While AI can enhance human memory capacity, it may not necessarily accelerate progress on canonical concepts without changing the agenda of science and its organization.
Unclonable cryptography uses the principles of quantum mechanics to address cryptographic tasks that are impossible classically. We introduce a novel unclonable primitive in the context of secret sharing, called unclonable secret sharing (USS). In a USS scheme, there are $n$ shareholders, each holding a share of a classical secret represented as a quantum state. They can recover the secret once all parties (or at least $t$ parties) come together with their shares. Importantly, it should be infeasible for shareholders to copy their own shares and send the copies to two non-communicating parties, enabling both of them to recover the secret. Our work initiates a formal investigation into unclonable secret sharing, shedding light on its implications, constructions, and inherent limitations. **Connections:** We explore the connections between USS and other quantum cryptographic primitives, such as unclonable encryption and position verification, showing the difficulties of achieving USS in different scenarios. **Limited entanglement:** In the case where the adversarial shareholders share no or only limited entanglement, we demonstrate information-theoretic constructions for USS. **Large entanglement:** If we allow the adversarial shareholders unbounded entanglement resources (and unbounded computation), we prove that unclonable secret sharing is impossible. On the other hand, in the quantum random oracle model, where the adversary can only make a bounded polynomial number of queries, we show a construction secure even with unbounded entanglement. Furthermore, even when these adversaries possess only a polynomial amount of entanglement resources, we establish that any unclonable secret sharing scheme whose reconstruction function is implementable using Cliffords and logarithmically many T-gates is also unattainable.
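For the classical backdrop only (a standard textbook construction, not the USS scheme): Shamir's $t$-of-$n$ sharing shows what reconstruction from $t$ shares looks like. The point of USS is that classical shares like these can be copied freely, whereas shares encoded as quantum states can be made unclonable.

```python
import random

P = 2**61 - 1  # a Mersenne prime, giving a prime field for illustration

def share(secret, n, t):
    """Shamir t-of-n sharing: a random degree-(t-1) polynomial with
    constant term = secret, evaluated at the points 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

s = share(42, n=5, t=3)
assert reconstruct(s[:3]) == 42   # any 3 of the 5 shares suffice
```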