We show how the relatively initial or relatively terminal fixed points for a well-behaved functor $F$ form a pair of adjoint functors between $F$-coalgebras and $F$-algebras. We use the language of locally presentable categories to find sufficient conditions for the existence of this adjunction. We show that relative fixed points may be characterized as (co)equalizers of the free (co)monad on $F$. In particular, when $F$ is a polynomial functor on $\mathsf{Set}$, the relative fixed points are a quotient or a subset of the free term algebra or the cofree term coalgebra. We give examples of relative fixed points for polynomial functors, including one that yields the Sierpinski carpet. Lastly, we prove a general preservation result for relative fixed points.
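As a concrete anchor for the categorical language, here is a minimal Python sketch (ours, not the paper's) of the free term algebra for the polynomial functor $F(X) = 1 + X \times X$ on $\mathsf{Set}$, together with the fold that any $F$-algebra induces out of it:

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

T = TypeVar("T")

@dataclass(frozen=True)
class Leaf:            # the "1" summand of F(X) = 1 + X x X
    pass

@dataclass(frozen=True)
class Node:            # the "X x X" summand
    left: "Term"
    right: "Term"

Term = Union[Leaf, Node]  # the free term algebra: binary trees

def cata(leaf: T, node: Callable[[T, T], T], t: Term) -> T:
    """Fold a term through the F-algebra (leaf, node): the unique
    algebra morphism out of the initial algebra."""
    if isinstance(t, Leaf):
        return leaf
    return node(cata(leaf, node, t.left), cata(leaf, node, t.right))

# Example: the F-algebra on the naturals that counts leaves.
t = Node(Node(Leaf(), Leaf()), Leaf())
print(cata(1, lambda l, r: l + r, t))  # -> 3
```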
The Collatz hypothesis is a theorem of the algorithmic theory of natural numbers. We prove the (algorithmic) formula that expresses the halting property of the Collatz algorithm. The observation that Collatz's theorem cannot be proved in any elementary number theory completes the main result.
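For reference, the Collatz algorithm in question is the familiar iteration below (a standard illustration of the halting property, not the paper's formal system):

```python
def collatz_steps(n: int) -> int:
    """Iterate the Collatz map n -> n/2 (n even), n -> 3n+1 (n odd),
    returning the number of steps until 1 is reached. The Collatz
    hypothesis asserts this loop halts for every positive integer n."""
    assert n >= 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 10)])  # [0, 1, 7, 2, 5, 8, 16, 3, 19]
```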
In this work we extend the shifted Laplacian approach to the elastic Helmholtz equation. The shifted Laplacian multigrid method is a common preconditioning approach for the discretized acoustic Helmholtz equation. In some cases, like geophysical seismic imaging, one needs to consider the elastic Helmholtz equation, which is harder to solve: it is three times larger and contains a nullity-rich grad-div term. These properties make the solution of the equation more difficult for multigrid solvers. The key idea in this work is to combine the shifted Laplacian with approaches for linear elasticity. We provide local Fourier analysis and numerical evidence that the convergence rate of our method is independent of Poisson's ratio. Moreover, to better handle the problem size, we complement our multigrid method with the domain decomposition approach, which works in synergy with the local nature of the shifted Laplacian, so that we enjoy the advantages of both methods without sacrificing performance. We demonstrate the efficiency of our solver on 2D and 3D problems in heterogeneous media.
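The underlying shifted Laplacian idea can be seen in a toy setting: precondition the indefinite Helmholtz operator by a complex-shifted copy of itself. Below is a minimal 1D acoustic sketch with assumed parameters (grid size, wavenumber, shift); the paper's contribution is an elastic, multigrid, domain-decomposed version of this idea.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1D acoustic illustration of the shifted Laplacian idea:
# precondition A = -Laplacian - k^2 I with M = -Laplacian - (1 + 0.5i) k^2 I.
n = 400
h = 1.0 / n
k = 40.0
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
A = (lap - k**2 * sp.identity(n, format="csc")).astype(complex)
M = (lap - (1 + 0.5j) * k**2 * sp.identity(n, format="csc")).tocsc()

# In practice M is only inverted approximately (e.g. by one multigrid
# cycle); here we factor it exactly for simplicity.
Minv = spla.splu(M)
prec = spla.LinearOperator(A.shape, matvec=Minv.solve, dtype=complex)

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0  # point source
x, info = spla.gmres(A, b, M=prec, atol=1e-8)
print("converged" if info == 0 else f"gmres info = {info}")
```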
Capturing the extremal behaviour of data often requires bespoke marginal and dependence models which are grounded in rigorous asymptotic theory, and hence provide reliable extrapolation into the upper tails of the data-generating distribution. We present a toolbox of four methodological frameworks, motivated by modern extreme value theory, that can be used to accurately estimate extreme exceedance probabilities or the corresponding return level, in either a univariate or multivariate setting. Our frameworks were used to facilitate the winning contribution of Team Yalla to the data competition organised for the 13th International Conference on Extreme Value Analysis (EVA2023). This competition comprised seven teams competing across four separate sub-challenges, each requiring the modelling of data simulated from known, yet highly complex, statistical distributions, and extrapolation far beyond the range of the available samples in order to predict probabilities of extreme events. The data were constructed to be representative of real environmental data, sampled from the fantasy country of "Utopia".
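To give a flavour of the univariate building blocks such frameworks rest on, here is a textbook peaks-over-threshold estimate of an exceedance probability via the generalized Pareto distribution, on simulated data (an illustration only, not Team Yalla's actual models):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=50_000)        # toy heavy-tailed sample

u = np.quantile(x, 0.95)                     # high threshold
exc = x[x > u] - u                           # threshold exceedances
xi, _, sigma = genpareto.fit(exc, floc=0.0)  # fit GPD to exceedances

# P(X > z) ~ P(X > u) * GPD survival function at z - u, for z > u
z = 8.0
p_hat = (x > u).mean() * genpareto.sf(z - u, xi, loc=0.0, scale=sigma)
print(f"estimated P(X > {z}) = {p_hat:.2e}")
```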
The strong convergence of the semi-implicit Euler-Maruyama (EM) method for stochastic differential equations with non-linear coefficients driven by a class of L\'evy processes is investigated. The dependence of the convergence order of the numerical scheme on the parameters of the class of L\'evy processes is revealed, which differs from existing results. In addition, the existence and uniqueness of the numerical invariant measure of the semi-implicit EM method are studied, and its convergence to the underlying invariant measure is proved. Numerical examples are provided to confirm our theoretical results.
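A minimal sketch of the scheme, under assumptions of ours (a scalar model with one-sided Lipschitz drift $f(x) = x - x^3$, and symmetric $\alpha$-stable increments standing in for the paper's class of L\'evy processes):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import levy_stable

# Semi-implicit (drift-implicit) EM:  X_{k+1} = X_k + f(X_{k+1}) h + dL_k.
alpha, h, T = 1.8, 1e-3, 1.0

def f(x):                # assumed one-sided Lipschitz drift
    return x - x**3

rng = np.random.default_rng(0)
n = int(T / h)
dL = h ** (1 / alpha) * levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)

x = 1.0
for k in range(n):
    rhs = x + dL[k]
    # the one-sided Lipschitz drift makes the implicit equation monotone,
    # so it has a unique root, found here by bisection
    x = brentq(lambda y: y - h * f(y) - rhs, -1e6, 1e6)
print("X(T) ~", x)
```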
We prove a tight parallel repetition theorem for $3$-message computationally secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove, under plausible assumptions, that the security of $4$-message computationally secure protocols does not generally decrease under parallel repetition. These results mirror the classical results of Bellare, Impagliazzo, and Naor [BIN97]. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent $3$-message argument system, mirroring the transformation for quantum proof systems [KW00, KKMV07]. As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan [Yan22]), EFI pairs (answering a question of Brakerski, Canetti, and Qian [BCQ23]), public-key quantum money schemes (answering a question of Aaronson and Christiano [AC13]), and quantum zero-knowledge argument systems. We also derive an XOR lemma [Yao82] for quantum predicates as a corollary.
Given an integer partition of $n$, we consider the impartial combinatorial game LCTR, in which moves consist of removing either the left column or the top row of its Young diagram. We show that for both normal and mis\`ere play, the optimal strategy can consist mostly of mirroring the opponent's moves. We also establish that both LCTR and Downright are domestic as well as returnable, and on the other hand neither tame nor forced. For both games, these structural observations allow for computing the Sprague-Grundy value of any position in $O(\log(n))$ time, assuming that reading an integer and performing a basic arithmetic operation each take unit time. This improves on the previously known bound of $O(n)$ due to Ili\'c (2019). We also cover some other complexity measures of both games, such as state-space complexity and the number of leaves and nodes in the corresponding game tree.
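For concreteness, here is a brute-force reference computation of the Sprague-Grundy values under normal play (memoized mex over the two moves); this is the slow baseline that the paper's $O(\log(n))$ result improves on:

```python
from functools import lru_cache

# A partition is a weakly decreasing tuple of row lengths of its
# Young diagram. Moves: remove the top row, or remove the left column.

@lru_cache(maxsize=None)
def grundy(p: tuple) -> int:
    if not p:                       # empty diagram: previous player wins
        return 0
    opts = {
        grundy(p[1:]),                                   # remove top row
        grundy(tuple(r - 1 for r in p if r > 1)),        # remove left column
    }
    m = 0                           # mex of the option values
    while m in opts:
        m += 1
    return m

print(grundy((4, 3, 1)))  # Grundy value of the partition 4+3+1
```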
We consider the asymptotic properties of Approximate Bayesian Computation (ABC) in the realistic case of summary statistics with heterogeneous rates of convergence. We allow some statistics to converge faster than the ABC tolerance, others to converge more slowly, and cover the case where some statistics do not converge at all. We give conditions for the ABC posterior to converge, and provide an explicit representation of the shape of the ABC posterior distribution in our general setting; in particular, we show how the shape of the posterior depends on the number of slow statistics. We then quantify the gain brought by the local linear post-processing step.
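A minimal sketch of the algorithm being analysed, on an assumed toy model (a Gaussian mean summarised by two statistics), including a local linear post-processing step in the style of Beaumont et al.:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200                                   # sample size per data set
y_obs = rng.normal(2.0, 1.0, size=m)
s_obs = np.array([y_obs.mean(), np.median(y_obs)])

n_sim = 20_000
theta = rng.uniform(-10, 10, size=n_sim)  # prior draws
sims = rng.normal(theta[:, None], 1.0, size=(n_sim, m))
s = np.column_stack([sims.mean(axis=1), np.median(sims, axis=1)])

dist = np.linalg.norm(s - s_obs, axis=1)
eps = np.quantile(dist, 0.01)             # ABC tolerance: keep closest 1%
keep = dist <= eps

# local linear post-processing: regress theta on (s - s_obs), then correct
X = np.column_stack([np.ones(keep.sum()), s[keep] - s_obs])
beta, *_ = np.linalg.lstsq(X, theta[keep], rcond=None)
theta_adj = theta[keep] - (s[keep] - s_obs) @ beta[1:]

print(f"raw ABC mean {theta[keep].mean():.3f}, adjusted {theta_adj.mean():.3f}")
```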
Consider the family of power divergence statistics based on $n$ trials, each leading to one of $r$ possible outcomes. This includes the log-likelihood ratio and Pearson's statistic as important special cases. It is known that in certain regimes (e.g., when $r$ is of order $n^2$ and the allocation is asymptotically uniform as $n\to\infty$) the power divergence statistic converges in distribution to a linear transformation of a Poisson random variable. We establish explicit error bounds in the Kolmogorov (or uniform) metric to complement this convergence result, which may be applied for any values of $n$, $r$ and the index parameter $\lambda$ for which such a finite-sample bound is meaningful. We further use this Poisson approximation result to derive error bounds in Gaussian approximation of the power divergence statistics.
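The family itself is directly computable; for instance, SciPy's `power_divergence` evaluates the statistic for any index $\lambda$ (toy counts with assumed $n$ and $r$):

```python
import numpy as np
from scipy.stats import power_divergence

rng = np.random.default_rng(0)
n, r = 500, 2000                       # many cells relative to trials
counts = np.bincount(rng.integers(r, size=n), minlength=r)

# f_exp defaults to the uniform allocation n/r across the r cells
for lam, name in [(1.0, "Pearson"), (0.0, "log-likelihood"), (2 / 3, "Cressie-Read")]:
    stat, _ = power_divergence(counts, lambda_=lam)
    print(f"{name:<15} (lambda={lam:.2f}): {stat:.1f}")
```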
Orthogonal meta-learners, such as the DR-learner, R-learner and IF-learner, are increasingly used to estimate conditional average treatment effects. They improve convergence rates relative to na\"{\i}ve meta-learners (e.g., the T-, S- and X-learner) through de-biasing procedures that involve applying standard learners to specifically transformed outcome data. This leads them to disregard the possibly constrained outcome space, which can be particularly problematic for dichotomous outcomes: these typically get transformed to values that are no longer constrained to the unit interval, making it difficult for standard learners to guarantee predictions within the unit interval. To address this, we construct orthogonal meta-learners for the prediction of counterfactual outcomes that respect the outcome space. The resulting i-learner, or imputation learner, is therefore expected to outperform existing learners more generally, even when the outcome is unconstrained, as we confirm empirically in simulation studies and in an analysis of critical care data. Our development also sheds broader light on the construction of orthogonal learners for other estimands.
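The motivating problem can be seen in a few lines: the DR-learner's pseudo-outcome transformation, sketched below with nuisance functions assumed known for illustration, produces regression targets outside the unit interval even when $Y$ is binary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
X = rng.uniform(size=n)
e = 0.3 + 0.4 * X                        # propensity P(A = 1 | X)
mu0, mu1 = 0.2 + 0.3 * X, 0.5 + 0.3 * X  # outcome regressions
A = rng.binomial(1, e)
Y = rng.binomial(1, np.where(A == 1, mu1, mu0))

mu_A = np.where(A == 1, mu1, mu0)
phi = (A - e) / (e * (1 - e)) * (Y - mu_A) + mu1 - mu0  # DR pseudo-outcome
print(np.round(phi, 2))  # regressing on these targets the CATE, but they
                         # are not confined to the unit interval
```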
When studying an outcome $Y$ that is weakly positive but can equal zero (e.g. earnings), researchers frequently estimate an average treatment effect (ATE) for a "log-like" transformation that behaves like $\log(Y)$ for large $Y$ but is defined at zero (e.g. $\log(1+Y)$, $\mathrm{arcsinh}(Y)$). We argue that ATEs for log-like transformations should not be interpreted as approximating percentage effects, since unlike a percentage, they depend on the units of the outcome. In fact, we show that if the treatment affects the extensive margin, one can obtain a treatment effect of any magnitude simply by re-scaling the units of $Y$ before taking the log-like transformation. This arbitrary unit-dependence arises because an individual-level percentage effect is not well-defined for individuals whose outcome changes from zero to non-zero when receiving treatment, and the units of the outcome implicitly determine how much weight the ATE for a log-like transformation places on the extensive margin. We further establish a trilemma: when the outcome can equal zero, there is no treatment effect parameter that is an average of individual-level treatment effects, unit-invariant, and point-identified. We discuss several alternative approaches that may be sensible in settings with an intensive and extensive margin, including (i) expressing the ATE in levels as a percentage (e.g. using Poisson regression), (ii) explicitly calibrating the value placed on the intensive and extensive margins, and (iii) estimating separate effects for the two margins (e.g. using Lee bounds). We illustrate these approaches in three empirical applications.
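The unit-dependence argument is easy to reproduce numerically; in the toy example below (our numbers, not the paper's data), rescaling $Y$ before applying $\mathrm{arcsinh}$ moves the estimated ATE by orders of magnitude once one unit crosses the extensive margin:

```python
import numpy as np

# One unit moves from 0 to positive under treatment (extensive margin);
# the others change only on the intensive margin.
y0 = np.array([0.0, 10.0, 20.0])     # control outcomes
y1 = np.array([5.0, 11.0, 22.0])     # treated outcomes (first: 0 -> 5)

for scale in [1.0, 100.0, 0.01]:     # e.g. dollars vs. cents vs. $100s
    ate = np.mean(np.arcsinh(scale * y1) - np.arcsinh(scale * y0))
    print(f"scale={scale:>6}: arcsinh-ATE = {ate:.3f}")
```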