Since both the entropy-based market information and the Hurst exponent are useful tools for determining whether the efficient market hypothesis holds for a given asset, we study the link between the two approaches. We provide a theoretical expression for the market information when log-prices follow either a fractional Brownian motion or its stationary extension obtained via the Lamperti transform. In the latter model, we show that a Hurst exponent close to 1/2 can nevertheless coincide with a very high informativeness of the time series, because of the stationarity mechanism. In addition, we introduce a multiscale method that yields a deeper interpretation of the entropy and of the market information as functions of the size of the information set. Applications to Bitcoin, the CAC 40 index, the Nikkei 225 index, and the EUR/USD FX rate, using daily or intraday data, illustrate the methodology.
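As a minimal illustration of the two diagnostics (not the paper's exact estimators), the sketch below estimates the Hurst exponent from the scaling law $\mathrm{Var}(X_{t+\tau}-X_t)\propto \tau^{2H}$ of fractional Brownian motion, and a Shannon-entropy "information" proxy computed on binary-symbolized returns; the two-symbol discretization and block size are illustrative assumptions.

```python
import numpy as np

def hurst_exponent(log_price, lags=range(2, 64)):
    """Estimate H from Var(X_{t+tau} - X_t) ~ tau^(2H) (fBm scaling)."""
    lags = np.asarray(list(lags))
    variances = [np.var(log_price[lag:] - log_price[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0  # slope of the log-log regression equals 2H

def shannon_information(returns, block=4):
    """Entropy (bits) of binary-symbolized return blocks; `block` plays the
    role of the size of the information set. Values near `block` indicate a
    fully random (efficient) market; lower values indicate predictability."""
    symbols = (returns > 0).astype(int)
    words = np.array([symbols[i:i + block]
                      for i in range(len(symbols) - block + 1)])
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Toy usage on a random walk: H should be near 1/2, entropy near `block` bits.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(5000))
print(hurst_exponent(x), shannon_information(np.diff(x)))
```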
Matching is a popular nonparametric covariate-adjustment strategy in empirical health services research. Matching helps construct two groups that are comparable in many baseline covariates but differ in some key aspect under investigation. In health disparities research, it is desirable to understand the contributions of various modifiable factors, like income and insurance type, to the observed disparity in access to health services between different groups. To single out the contributions of the factors of interest, we propose a statistical matching methodology that constructs nested matched comparison groups from, for instance, White men, that resemble the target group, for instance, Black men, in some selected covariates while remaining identical to the White male population before matching in the remaining covariates. Using the proposed method, we investigated the disparity gap between White men and Black men in the US in prostate-specific antigen (PSA) screening based on the 2020 Behavioral Risk Factor Surveillance System (BRFSS) database. We found a widening gap in PSA screening rates as the White matched comparison group increasingly resembles the Black men group, and we quantified the contribution of modifiable factors like socioeconomic status. Finally, we provide code that replicates the case study and a tutorial that enables users to design customized matched comparison groups satisfying multiple criteria.
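A minimal sketch of the basic building block (pair matching on a selected subset of covariates via optimal assignment); the paper's nested, multi-criteria construction is more elaborate, and the covariate columns named here are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_on_selected(control_X, target_X, selected_cols):
    """Optimally pair each target unit (e.g., Black men) with a distinct
    control unit (e.g., White men), minimizing total Mahalanobis distance
    computed only on the selected covariates; the remaining covariates keep
    their before-matching distribution in the control group."""
    Xc = control_X[:, selected_cols]
    Xt = target_X[:, selected_cols]
    cov_inv = np.linalg.pinv(np.cov(np.vstack([Xc, Xt]).T))
    diff = Xt[:, None, :] - Xc[None, :, :]           # (n_target, n_control, p)
    cost = np.einsum('ijk,kl,ijl->ij', diff, cov_inv, diff)
    _, cols = linear_sum_assignment(cost)            # one control per target
    return cols                                      # indices of matched controls

# Hypothetical usage: match only on income and insurance type (columns 0, 1).
rng = np.random.default_rng(0)
controls, targets = rng.normal(size=(500, 4)), rng.normal(size=(100, 4))
matched = match_on_selected(controls, targets, selected_cols=[0, 1])
```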
Confidence intervals based on the central limit theorem (CLT) are a cornerstone of classical statistics. Despite being only asymptotically valid, they are ubiquitous because they permit statistical inference under very weak assumptions, and can often be applied to problems even when nonasymptotic inference is impossible. This paper introduces time-uniform analogues of such asymptotic confidence intervals. To elaborate, our methods take the form of confidence sequences (CS) -- sequences of confidence intervals that are uniformly valid over time. CSs provide valid inference at arbitrary stopping times, incurring no penalties for "peeking" at the data, unlike classical confidence intervals which require the sample size to be fixed in advance. Existing CSs in the literature are nonasymptotic, and hence do not enjoy the aforementioned broad applicability of asymptotic confidence intervals. Our work bridges the gap by giving a definition for "asymptotic CSs", and deriving a universal asymptotic CS that requires only weak CLT-like assumptions. While the CLT approximates the distribution of a sample average by that of a Gaussian at a fixed sample size, we use strong invariance principles (stemming from the seminal 1960s work of Strassen and improvements by Koml\'os, Major, and Tusn\'ady) to uniformly approximate the entire sample average process by an implicit Gaussian process. As an illustration of our theory, we derive asymptotic CSs for the average treatment effect using efficient estimators in observational studies (for which no nonasymptotic bounds can exist even in the fixed-time regime) as well as randomized experiments, enabling causal inference that can be continuously monitored and adaptively stopped.
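To give the flavor of such a boundary, here is a sketch based on the classical Robbins Gaussian-mixture construction, used as an illustrative stand-in for the paper's universal asymptotic CS: with plug-in standard deviation $\hat\sigma_t$ and tuning parameter $\rho$, the interval is $\bar X_t \pm \hat\sigma_t\sqrt{\tfrac{2(t\rho^2+1)}{t^2\rho^2}\log\tfrac{\sqrt{t\rho^2+1}}{\alpha}}$.

```python
import numpy as np

def asymptotic_cs(x, alpha=0.05, rho=1.0):
    """Running confidence sequence for the mean via a Robbins-style
    Gaussian-mixture boundary with plug-in standard deviation; asymptotic
    validity rests on the sample-average process being well approximated
    by a Gaussian process (a strong-invariance-principle assumption)."""
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x) + 1)
    mean = np.cumsum(x) / t
    sd = np.sqrt(np.maximum(np.cumsum(x**2) / t - mean**2, 1e-12))
    radius = sd * np.sqrt(2 * (t * rho**2 + 1) / (t**2 * rho**2)
                          * np.log(np.sqrt(t * rho**2 + 1) / alpha))
    return mean - radius, mean + radius

# The sequence can be inspected at any data-dependent stopping time
# without invalidating coverage ("peeking" is free).
lo, hi = asymptotic_cs(np.random.default_rng(0).standard_normal(10_000))
```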
Collaborative filtering (CF) has become a popular method for developing recommender systems (RSs), in which a user's ratings for new items are predicted from her past preferences and the available preference information of other users. Despite the popularity of CF-based methods, their performance is often greatly limited by the sparsity of observed entries. In this study, we explore the data augmentation and refinement aspects of Maximum Margin Matrix Factorization (MMMF), a widely accepted CF technique for rating prediction, an aspect that has not been investigated before. We exploit the inherent characteristics of CF algorithms to assess the confidence level of individual ratings and propose a semi-supervised approach to rating augmentation based on self-training. We hypothesize that a CF algorithm's low-confidence predictions are due to some deficiency in the training data, and hence that the algorithm's performance can be improved by a systematic data-augmentation strategy. We iteratively use some of the ratings predicted with high confidence to augment the training data and remove low-confidence entries through a refinement process. By repeating this process, the system learns to improve its prediction accuracy. Our method is experimentally evaluated on several state-of-the-art CF algorithms and leads to informative rating augmentation, improving the performance of the baseline approaches.
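A schematic of the self-training loop, with a plain squared-loss matrix factorization standing in for MMMF and distance-to-the-nearest-discrete-rating as a hypothetical confidence proxy; the paper's confidence measure and refinement rule may differ.

```python
import numpy as np

def factorize(R, mask, k=8, iters=200, lr=0.01, reg=0.1, seed=0):
    """Minimal SGD matrix factorization fit on observed entries only."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(R.shape[1], k))
    rows, cols = np.nonzero(mask)
    for _ in range(iters):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U @ V.T

def self_train(R, mask, rounds=5, n_add=50, n_drop=20):
    """Alternate between augmenting with high-confidence predictions and
    dropping low-confidence observed entries (the 'refinement' step)."""
    R, mask = R.astype(float).copy(), mask.astype(bool).copy()
    for _ in range(rounds):
        pred = factorize(R, mask)
        conf = -np.abs(pred - np.rint(pred))  # near a discrete rating = confident
        # Augment: most confident predictions among unobserved entries.
        ui, uj = np.nonzero(~mask)
        top = np.argsort(conf[ui, uj])[-n_add:]
        R[ui[top], uj[top]] = np.rint(pred[ui[top], uj[top]])
        mask[ui[top], uj[top]] = True
        # Refine: drop the least confident entries currently in the training set.
        oi, oj = np.nonzero(mask)
        low = np.argsort(conf[oi, oj])[:n_drop]
        mask[oi[low], oj[low]] = False
    return pred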
Making inference with spatial extremal dependence models can be computationally burdensome, since they involve intractable and/or censored likelihoods. Building on recent advances in likelihood-free inference with neural Bayes estimators, that is, neural networks that approximate Bayes estimators, we develop highly efficient estimators for censored peaks-over-threshold models that encode censoring information in the neural network architecture. Our new method provides a paradigm shift that challenges traditional censored-likelihood-based inference for spatial extremal dependence models. Our simulation studies highlight significant gains in both computational and statistical efficiency, relative to competing likelihood-based approaches, when applying our novel estimators to popular extremal dependence models, such as max-stable, $r$-Pareto, and random scale mixture process models. We also show that a single neural Bayes estimator can be trained for a general censoring level, obviating the need to retrain the network when the censoring level is changed. We illustrate the efficacy of our estimators by making fast inference on hundreds of thousands of high-dimensional spatial extremal dependence models to assess extreme concentrations of fine particulate matter (PM2.5, particles 2.5 microns or less in diameter) over the whole of Saudi Arabia.
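A minimal PyTorch sketch of the idea: a DeepSets-style permutation-invariant network over sites, with per-site censoring indicators and the censoring level appended as inputs so that one network serves a range of censoring levels. The architecture details are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CensoredNeuralBayes(nn.Module):
    """Maps a censored spatial dataset to a parameter estimate. Each site
    contributes (censored value, 0/1 censoring indicator); the censoring
    level tau is concatenated to the pooled summary."""
    def __init__(self, hidden=64, n_params=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_params))

    def forward(self, z, censored, tau):
        # z, censored: (batch, n_sites); tau: (batch,)
        per_site = torch.stack([z, censored], dim=-1)   # (batch, sites, 2)
        pooled = self.phi(per_site).mean(dim=1)         # permutation-invariant
        return self.rho(torch.cat([pooled, tau[:, None]], dim=-1))

# Training amortizes Bayes estimation: simulate (theta, data) pairs from the
# prior and the extremal model, censor observations below tau, and minimize
# an empirical Bayes risk, e.g. (net(z, c, tau) - theta).abs().mean().
```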
Causal regularization was introduced as a stable causal inference strategy in a two-environment setting by Kania and Wit [2022] for linear structural equation models (SEMs). We start by observing that the causal regularizer can be extended to several shifted environments and to non-linear SEMs. We derive the multi-environment causal regularizer in the population setting. We propose its plug-in estimator and study its concentration-of-measure behavior. Although the variance of the plug-in estimator is not well-defined in general, we instead study its conditional variance, both with respect to a natural filtration of the empirical process and conditionally on certain events. We also study generalizations in which we consider conditional expectations of higher central absolute moments of the estimator. The results presented here are also new in the original setting of Kania and Wit [2022], as well as in that of Rothenh\"ausler et al. [2021].
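For intuition, here is a hedged sketch of a closely related plug-in estimator: the anchor-regression estimator of Rothenh\"ausler et al. [2021], with environment labels used as anchors. The causal regularizer studied in this paper is related to, but not identical with, this estimator.

```python
import numpy as np

def anchor_regression(X, y, env, gamma):
    """Plug-in anchor regression: run least squares after rescaling the
    component of (X, y) explained by the environment indicators.
    gamma = 1 recovers OLS; larger gamma trades in-sample fit for
    stability across shifted environments."""
    A = np.eye(env.max() + 1)[env]                    # one-hot environment labels
    P = A @ np.linalg.pinv(A.T @ A) @ A.T             # projection onto anchors
    W = np.eye(len(y)) + (np.sqrt(gamma) - 1.0) * P   # anchor transform
    Xt, yt = W @ X, W @ y
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]
```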
Many models of learning in teams assume that team members can share solutions or learn concurrently. However, these assumptions break down in multidisciplinary teams where team members often complete distinct, interrelated pieces of larger tasks. Such contexts make it difficult for individuals to separate the performance effects of their own actions from the actions of interacting neighbors. In this work, we show that individuals can overcome this challenge by learning from network neighbors through mediating artifacts (like collective performance assessments). When neighbors' actions influence collective outcomes, teams with different networks perform relatively similarly to one another. However, varying a team's network can affect performance on tasks that weight individuals' contributions by network properties. Consequently, when individuals innovate (through ``exploring'' searches), dense networks hurt performance slightly by increasing uncertainty. In contrast, dense networks moderately help performance when individuals refine their work (through ``exploiting'' searches) by efficiently finding local optima. We also find that decentralization improves team performance across a battery of 34 tasks. Our results offer design principles for multidisciplinary teams within which other forms of learning prove more difficult.
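An illustrative toy version of the setting (not the paper's exact task model): each agent controls one component of a joint solution and observes only the collective performance, the mediating artifact, so the feedback it acts on is confounded by neighbors' simultaneous moves.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, p_explore = 10, 500, 0.2
weights = rng.normal(size=n_agents)              # hidden task structure
solution = rng.normal(size=n_agents)

def collective_score(s):
    """Mediating artifact: the only feedback any agent observes."""
    return -np.sum((weights - s) ** 2)

for _ in range(n_steps):
    before = collective_score(solution)
    explore = rng.random(n_agents) < p_explore   # exploring vs exploiting search
    proposal = np.where(explore,
                        rng.normal(size=n_agents),               # random jump
                        solution + 0.1 * rng.normal(size=n_agents))  # local refinement
    # All agents move at once, so each accept/reject decision relies on a
    # collective signal that confounds individual contributions.
    if collective_score(proposal) > before:
        solution = proposal
```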
Kleene's computability theory based on the S1-S9 computation schemes constitutes a model for computing with objects of any finite type and extends Turing's 'machine model', which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals, where the former compute the associated Kleene quantifier $\exists^n$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied, based on well-known theorems, the weakest among which seems to be the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^2$, while the former are computable in $\exists^3$ but not in weaker oracles. Of course, there is a great divide or abyss separating $\exists^2$ and $\exists^3$, and we identify slight variations of our new non-normal functionals that are again computable in $\exists^2$, i.e. fall on different sides of this abyss. Our examples are based on mainstream mathematical notions, like quasi-continuity, Baire classes, bounded variation, and semi-continuity from real analysis.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that the validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inferences conditioned on an explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
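A minimal sketch of the core quantitative step: Shepard's law states that generalization decays exponentially with distance in psychological similarity space. The vectorized saliency-map representation used here is a hypothetical placeholder for the study's actual feature space.

```python
import numpy as np

def shepard_generalization(d, c=1.0):
    """Shepard's universal law: the probability of generalizing a response
    decays exponentially with psychological distance d (c is a sensitivity
    parameter fit to participants)."""
    return np.exp(-c * d)

def predicted_agreement(ai_saliency, own_saliency, c=1.0):
    """Predict the explainee's inference about the AI's decision by comparing
    the AI's saliency map with the explanation the participant would give
    themselves, both flattened to feature vectors."""
    d = np.linalg.norm(ai_saliency.ravel() - own_saliency.ravel())
    return shepard_generalization(d, c)

# Toy usage: identical explanations -> predict the AI decides as I would.
m = np.random.default_rng(0).random((8, 8))
print(predicted_agreement(m, m))   # 1.0
```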
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
In this paper we develop a novel neural network model for predicting the implied volatility surface that takes prior financial domain knowledge into account. We propose a new activation function that incorporates the volatility smile, used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage and the boundary and asymptotic-slope constraints, are embedded into the loss function. This is one of the very first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms benchmark models on 20 years of S&P 500 index option data. More importantly, the embedded domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theory and conditions related to the implied volatility surface.
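A hedged PyTorch sketch of the two ingredients: a smile-shaped (convex) activation on the hidden nodes that process moneyness, and a soft no-arbitrage penalty added to the loss via finite differences. The exact functional forms in the paper differ; these are illustrative choices.

```python
import torch
import torch.nn as nn

class SmileActivation(nn.Module):
    """Convex, smile-shaped unit: an illustrative stand-in for an activation
    encoding the volatility smile (convex in moneyness, asymptotically
    linear wings)."""
    def forward(self, x):
        return torch.sqrt(1.0 + x ** 2)

class IVSurfaceNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.money = nn.Sequential(nn.Linear(1, hidden), SmileActivation())
        self.tenor = nn.Sequential(nn.Linear(1, hidden), nn.Softplus())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, m, t):  # moneyness, time-to-maturity (column vectors)
        return self.head(torch.cat([self.money(m), self.tenor(t)], dim=-1))

def loss_with_constraints(net, m, t, iv_obs, lam=1.0, eps=1e-2):
    fit = ((net(m, t) - iv_obs) ** 2).mean()
    # Soft calendar no-arbitrage penalty: total implied variance w = iv^2 * t
    # must be non-decreasing in maturity at fixed moneyness.
    w0 = net(m, t) ** 2 * t
    w1 = net(m, t + eps) ** 2 * (t + eps)
    calendar = torch.relu(w0 - w1).mean()
    return fit + lam * calendar
```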