Bayesian model-averaged hypothesis testing is an important technique in regression because it addresses the problem that the evidence that one variable directly affects an outcome often depends on which other variables are included in the model. This problem is caused by confounding and mediation, and is pervasive in big data settings with thousands of variables. However, model-averaging is under-utilized in fields, such as epidemiology, where classical statistical approaches dominate. Here we show that simultaneous Bayesian and frequentist model-averaged hypothesis testing is possible in large samples, for a family of priors. We show that Bayesian model-averaged regression is a closed testing procedure, and use the theory of regular variation to derive interchangeable posterior odds and $p$-values that jointly control the Bayesian false discovery rate (FDR), the frequentist type I error rate, and the frequentist familywise error rate (FWER). These results arise from an asymptotic chi-squared distribution for the model-averaged deviance under the null hypothesis. We call the approach 'Doublethink'. In a related manuscript (Arning, Fryer and Wilson, 2024), we apply it to discovering direct risk factors for COVID-19 hospitalization in UK Biobank, and we discuss its broader implications for bridging the differences between Bayesian and frequentist hypothesis testing.
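As a rough illustration of the quantity being tested (generic notation, not necessarily the paper's exact construction), the model-averaged posterior inclusion odds for a variable $j$ average the evidence over all candidate models $M$ that do or do not contain $j$:
\[
\mathrm{PO}_j \;=\; \frac{\sum_{M \,:\, j \in M} p(y \mid M)\, p(M)}{\sum_{M \,:\, j \notin M} p(y \mid M)\, p(M)},
\]
so the decision to reject `no direct effect of variable $j$' does not hinge on a single choice of covariates. The paper's contribution is to calibrate such odds so that thresholding them simultaneously controls the Bayesian FDR and the frequentist type I error rate and FWER in large samples.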
Prediction is a central problem in Statistics, and there is currently renewed interest in the so-called predictive approach in Bayesian statistics. What is the latter about? One has to return to foundational concepts, which we do in this paper, starting from the role of exchangeability and reviewing forms of partial exchangeability for more structured data, with the aim of discussing their use and implications in Bayesian statistics. We highlight the underlying concept that, in Bayesian statistics, a predictive rule is meant as a learning rule - a way of conveying past information into information about future events. This concept has implications for the use of exchangeability and, more generally, pervades all statistical problems, including inference. It applies to classic contexts and to less explored situations, such as the use of predictive algorithms that can be read as Bayesian learning rules. The paper offers a historical overview, but also includes a few new results, presents some recent developments and poses some open questions.
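A classical example of a predictive rule acting as a learning rule (standard material, recalled here for illustration rather than a result of the paper) is the Blackwell-MacQueen / P\'olya urn scheme associated with the Dirichlet process: for an exchangeable sequence $(X_n)$ directed by a Dirichlet process with base measure $\alpha$ on a space $\mathbb{X}$,
\[
P\big(X_{n+1} \in \cdot \mid X_1, \dots, X_n\big) \;=\; \frac{\alpha(\cdot) + \sum_{i=1}^{n} \delta_{X_i}(\cdot)}{\alpha(\mathbb{X}) + n},
\]
so past observations are converted directly into a predictive distribution for the next observation, which is precisely the sense in which prediction encodes Bayesian learning.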
Evaluations of model editing currently rely only on the `next few token' completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing (\textbf{\textit{LEME}}), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier that correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings, including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings, including internal consistency, lexical cohesion, and locality issues.
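To make the shape of such a protocol concrete, the following sketch (hypothetical function and object names; this is not the authors' released code) scores a long-form continuation generated after an edit, first with a machine-rated survey and then with a consistency classifier:
\begin{verbatim}
# Hypothetical sketch of a long-form model-editing evaluation loop.
# `edited_model`, `survey_rater`, and `consistency_clf` are placeholder objects.
def evaluate_edit_long_form(edited_model, prompt, survey_rater, consistency_clf):
    # Generate a long-form continuation from the edited model.
    text = edited_model.generate(prompt, max_new_tokens=512)
    # Machine-rated survey: each question probes one dimension of interest.
    survey_scores = {q: survey_rater.rate(question=q, passage=text)
                     for q in ["edit_consistency", "internal_consistency",
                               "naturalness", "topicality"]}
    # Classifier flags spans that contradict the intended edit (factual drift).
    drift = consistency_clf.predict(text)
    return survey_scores, drift
\end{verbatim}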
While score-based generative models (SGMs) have achieved remarkable success in a wide range of image generation tasks, their mathematical foundations remain limited. In this paper, we analyze the approximation and generalization of SGMs in learning a family of sub-Gaussian probability distributions. We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure. We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution in total variation with a dimension-independent rate. We illustrate our theory through examples, which include certain mixtures of Gaussians. An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process, which is interesting in its own right.
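In generic notation (not the paper's exact statement), the complexity notion can be read as follows: writing $\gamma_d$ for the standard Gaussian measure on $\mathbb{R}^d$ with density $\varphi_d$, a target distribution $P$ with density $p$ is described through its log-relative density
\[
f(x) \;=\; \log \frac{\mathrm{d}P}{\mathrm{d}\gamma_d}(x) \;=\; \log \frac{p(x)}{\varphi_d(x)},
\]
and the assumption is, roughly, that $f$ admits a local neural network approximation with suitably bounded parameters; the total-variation guarantee for empirical score matching is then stated in terms of this approximability.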
Instrumental variables are widely used in econometrics and epidemiology for identifying and estimating causal effects when an exposure of interest is confounded by unmeasured factors. Despite this popularity, the assumptions invoked to justify the use of instruments differ substantially across the literature. Similarly, statistical approaches for estimating the resulting causal quantities vary considerably, and often rely on strong parametric assumptions. In this work, we compile and organize structural conditions that nonparametrically identify conditional average treatment effects, average treatment effects among the treated, and local average treatment effects, with a focus on identification formulae invoking the conditional Wald estimand. Moreover, we build upon existing work and propose nonparametric efficient estimators of functionals corresponding to marginal and conditional causal contrasts resulting from the various identification paradigms. We illustrate the proposed methods on an observational study examining the effects of operative care on adverse events for cholecystitis patients, and a randomized trial assessing the effects of market participation on political views.
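For concreteness (notation ours), with outcome $Y$, binary treatment $A$, binary instrument $Z$ and covariates $X$, the conditional Wald estimand referred to above is
\[
\beta(x) \;=\; \frac{E(Y \mid Z = 1, X = x) - E(Y \mid Z = 0, X = x)}{E(A \mid Z = 1, X = x) - E(A \mid Z = 0, X = x)},
\]
which, under the respective sets of structural conditions, identifies conditional average treatment effects, effects among the treated, or local average treatment effects.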
Based on interactions among individuals and their reference to social norms, this study reveals the impact of heterogeneity in time preference on wealth distribution and inequality. We present a novel approach that connects the interactions between microeconomic agents that generate heterogeneity to the dynamic equations for capital and consumption in macroeconomic models. Using this approach, we estimate the impact of changes in the discount rate due to microeconomic interactions on capital, consumption and utility, and on the degree of inequality. The results show that comparisons with others regarding consumption significantly affect capital (i.e. wealth) inequality. Furthermore, the impact on utility is far from small, and social norms can reduce it. Our supporting evidence shows that the quantitative results of the inequality calculations are consistent with survey data from cohort and cross-cultural studies. This study's micro-macro connection approach can be deployed to connect microeconomic interactions, such as exchange, interest and debt, redistribution, mutual aid and time preference, to dynamic macroeconomic models.
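As a point of reference (the textbook Ramsey-Cass-Koopmans dynamics, not necessarily the paper's exact specification), the capital and consumption equations to which such microeconomic interactions are connected take the form
\[
\dot{k} \;=\; f(k) - c - \delta k, \qquad \frac{\dot{c}}{c} \;=\; \frac{f'(k) - \delta - \rho}{\theta},
\]
where $\rho$ is the discount rate and $\theta$ the inverse elasticity of intertemporal substitution; heterogeneity then enters through agent-specific discount rates shaped by comparisons with others and by social norms.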
Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs. This work studies the approximation properties of inner products of neural networks. It is shown that the inner product of a multi-layer perceptron with itself is a universal approximator for symmetric positive-definite relation functions. In the case of asymmetric relation functions, it is shown that the inner product of two different multi-layer perceptrons is a universal approximator. In both cases, a bound is obtained on the number of neurons required to achieve a given accuracy of approximation. In the symmetric case, the function class can be identified with kernels of reproducing kernel Hilbert spaces, whereas in the asymmetric case the function class can be identified with kernels of reproducing kernel Banach spaces. Finally, these approximation results are applied to analyzing the attention mechanism underlying Transformers, showing that any retrieval mechanism defined by an abstract preorder can be approximated by attention through its inner product relations. This result uses the Debreu representation theorem in economics to represent preference relations in terms of utility functions.
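A minimal sketch of the object being studied (illustrative code, not the paper's implementation): an asymmetric relation $r(x, y)$ is modeled as the inner product $\langle \phi(x), \psi(y) \rangle$ of two separate MLP feature maps.
\begin{verbatim}
import numpy as np

def mlp(params, x):
    """Simple two-layer perceptron feature map."""
    (W1, b1), (W2, b2) = params
    h = np.tanh(x @ W1 + b1)       # hidden layer
    return h @ W2 + b2             # feature vector in R^m

def init_mlp(rng, d_in, d_hidden, d_out):
    return [(rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in),
             np.zeros(d_hidden)),
            (rng.normal(size=(d_hidden, d_out)) / np.sqrt(d_hidden),
             np.zeros(d_out))]

rng = np.random.default_rng(0)
phi_params = init_mlp(rng, d_in=3, d_hidden=32, d_out=8)   # feature map phi
psi_params = init_mlp(rng, d_in=3, d_hidden=32, d_out=8)   # feature map psi

x, y = rng.normal(size=3), rng.normal(size=3)
relation = mlp(phi_params, x) @ mlp(psi_params, y)         # <phi(x), psi(y)>
\end{verbatim}
Using the same MLP for both arguments ($\psi = \phi$) restricts the construction to symmetric positive-definite relations, matching the reproducing-kernel-Hilbert-space case; two distinct MLPs give the asymmetric, reproducing-kernel-Banach-space case.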
We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at //github.com/GFNOrg/gfn-diffusion as a base for future work on diffusion models for amortized inference.
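A schematic of the proposed exploration idea, in deliberately simplified form (not the released implementation at the repository above): candidate samples are refined by local search in the target space and stored in a replay buffer from which the off-policy sampler can later be trained.
\begin{verbatim}
import numpy as np

def energy(x):
    # Toy target energy; stands in for the unnormalized negative log-density.
    return 0.5 * np.sum(x**2)

def local_search(x, rng, n_steps=50, step=0.1):
    """Metropolis-style local search around a sample in the target space."""
    for _ in range(n_steps):
        proposal = x + step * rng.normal(size=x.shape)
        # Accept moves to lower energy (and occasionally uphill moves).
        if np.log(rng.uniform()) < energy(x) - energy(proposal):
            x = proposal
    return x

rng = np.random.default_rng(0)
replay_buffer = []
for _ in range(100):
    x0 = rng.normal(size=2)                    # e.g. a draw from the current sampler
    replay_buffer.append(local_search(x0, rng))  # refined sample kept for off-policy training

# During training, mix on-policy trajectories with terminal states from the buffer.
idx = rng.choice(len(replay_buffer), size=16, replace=False)
batch = [replay_buffer[i] for i in idx]
\end{verbatim}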
While diversity has become a debated issue in design, very little research exists on positive use cases for diversity beyond scholarly criticism. The current work addresses this gap through the case of a diversity-aware chatbot, exploring what benefits a diversity-aware chatbot could bring to people and how people interpret diversity when presented with it. In this paper, we motivate a Q&A chatbot as a technology probe and deploy it in two student communities as part of a study. During the study, we collected contextual data on people's expectations and perceptions when presented with diversity. Our key findings show that, when presented with diversity, people either seek out others with shared niche interests or are driven by exploration and inspiration. Although interaction with the chatbot was limited, participants found the engagement novel and interesting enough to motivate future research.
We adopt the integral definition of the fractional Laplace operator and study an optimal control problem on Lipschitz domains that involves a fractional elliptic partial differential equation (PDE) as the state equation and a control variable that enters the state equation as a coefficient; pointwise constraints on the control variable are considered as well. We establish the existence of optimal solutions and analyze first order as well as necessary and sufficient second order optimality conditions. Regularity estimates for optimal variables are also analyzed. We develop two finite element discretization strategies: a semidiscrete scheme in which the control variable is not discretized, and a fully discrete scheme in which the control variable is discretized with piecewise constant functions. For both schemes, we analyze convergence properties of the discretizations and derive error estimates.
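For reference, the integral (singular-integral) definition of the fractional Laplacian adopted here is, for $s \in (0,1)$ and a suitable normalization constant $C(d,s)$,
\[
(-\Delta)^s u(x) \;=\; C(d, s)\, \mathrm{p.v.}\!\int_{\mathbb{R}^d} \frac{u(x) - u(y)}{|x - y|^{d + 2s}} \, \mathrm{d}y,
\]
with the control entering the state equation as a coefficient, subject to pointwise bounds.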
Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN that is applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
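The following sketch conveys the flavor of scoring a variable-sized candidate hyperedge with self-attention (a simplified illustration, not the authors' exact architecture): node embeddings of the candidate hyperedge attend to one another, and the aggregated output is mapped to a probability that the tuple forms a hyperedge.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def score_hyperedge(node_embs, Wq, Wk, Wv, w_out):
    """Self-attention over the k nodes of a candidate hyperedge (any k)."""
    Q, K, V = node_embs @ Wq, node_embs @ Wk, node_embs @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (k, k) attention weights
    dynamic = attn @ V                               # context-dependent node features
    pooled = dynamic.mean(axis=0)                    # aggregate over the variable-size set
    return 1.0 / (1.0 + np.exp(-pooled @ w_out))     # probability the tuple is a hyperedge

rng = np.random.default_rng(0)
d, k = 16, 4                                         # embedding dim, hyperedge size
node_embs = rng.normal(size=(k, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
w_out = rng.normal(size=d)
print(score_hyperedge(node_embs, Wq, Wk, Wv, w_out))
\end{verbatim}
Because the attention and the mean-pooling are defined for any number of nodes, the same scoring head handles hyperedges of variable size, which is the property emphasized in the abstract.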