For the product $L_1\times L_2$ of two Kripke complete consistent logics, local tabularity of $L_1$ and $L_2$ is necessary for local tabularity of $L_1\times L_2$. However, it is not sufficient: the product of two locally tabular logics can fail to be locally tabular. We provide additional semantic and axiomatic conditions that yield criteria for local tabularity of the product of two locally tabular logics, and we apply them to identify new families of locally tabular products.
Given a composite null $ \mathcal P$ and composite alternative $ \mathcal Q$, when and how can we construct a p-value whose distribution is exactly uniform under the null and stochastically smaller than uniform under the alternative? Similarly, when and how can we construct an e-value whose expectation equals exactly one under the null, while its expected logarithm is positive under the alternative? We answer these basic questions, and other related ones, when $ \mathcal P$ and $ \mathcal Q$ are convex polytopes (in the space of probability measures). We prove that such constructions are possible if and only if $ \mathcal Q$ does not intersect the span of $ \mathcal P$. If the p-value is allowed to be stochastically larger than uniform under $P\in \mathcal P$, and the e-value can have expectation at most one under $P\in \mathcal P$, then such constructions are achievable whenever $ \mathcal P$ and $ \mathcal Q$ are disjoint. More generally, even when $ \mathcal P$ and $ \mathcal Q$ are not polytopes, we characterize the existence of a bounded nontrivial e-variable whose expectation equals exactly one under any $P \in \mathcal P$. The proofs utilize recently developed techniques in simultaneous optimal transport. A key role is played by coarsening the filtration: sometimes, no such p-value or e-value exists in the richest data filtration, but it does exist in some reduced filtration, and our work provides the first general characterization of this phenomenon. We also provide an iterative procedure that explicitly constructs such processes and, under certain conditions, finds the one that grows fastest under a specific alternative $Q$. We discuss implications for the construction of composite nonnegative (super)martingales, and end with some conjectures and open problems.
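To make the polytope case concrete, here is a minimal sketch on a finite outcome space of searching for a bounded e-variable whose expectation is exactly one at every vertex of the null polytope, and hence, by linearity, at every point of the polytope. The toy vertices, the alternative, the bound `B`, and the use of a linear program maximizing $E_Q[e]$ as a surrogate for growth are all illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: the null polytope is the convex hull of the rows of P_null,
# the alternative is the vector q.  We look for a bounded e-variable e with
# E_P[e] = 1 at every vertex P (hence on the whole polytope, by linearity),
# maximizing E_Q[e] as a linear surrogate for growth.  Toy data throughout.
import numpy as np
from scipy.optimize import linprog

m = 4                                            # number of outcomes
P_null = np.array([[0.40, 0.30, 0.20, 0.10],     # vertices of the null polytope
                   [0.10, 0.20, 0.30, 0.40]])
q = np.array([0.05, 0.05, 0.10, 0.80])           # alternative Q
B = 10.0                                         # bound on the e-variable

res = linprog(c=-q,                              # maximize q @ e  <=>  minimize -q @ e
              A_eq=P_null, b_eq=np.ones(len(P_null)),  # E_P[e] = 1 at each vertex
              bounds=[(0.0, B)] * m)

if res.success:
    e = res.x
    print("e-variable:", e)
    print("E_Q[e] =", q @ e)                     # > 1 indicates a nontrivial e-variable
else:
    print("no bounded e-variable with exact null expectation 1 exists here")
```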
Many flexible families of positive random variables have density and distribution functions without closed-form expressions, a feature often considered unappealing for modelling purposes. However, such families are often characterized by a simple expression for the corresponding Laplace transform. Relying on the Laplace transform, we propose to carry out parameter estimation and goodness-of-fit testing for a general class of non-standard laws. We suggest a novel data-driven inferential technique, providing parameter estimators and goodness-of-fit tests whose large-sample properties are derived. The implementation of the method is considered specifically for the positive stable and Tweedie distributions. A Monte Carlo study shows good finite-sample performance of the proposed technique for these laws.
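As an illustration of the general idea, the following sketch fits the index $\alpha$ of a positive stable law by matching the empirical Laplace transform to its known closed form $\exp(-t^\alpha)$. The choice $\alpha = 1/2$ (the Lévy law, which `scipy` can sample directly), the evaluation grid, and the unweighted least-squares criterion are illustrative assumptions rather than the paper's exact procedure.

```python
# Laplace-transform-based fitting of the positive stable index alpha: minimize
# a distance between the empirical transform and exp(-t**alpha) over a grid.
# The positive 1/2-stable law equals Levy(0, 1/2), whose Laplace transform is
# exp(-sqrt(t)), so the true alpha below is 0.5.
import numpy as np
from scipy.stats import levy
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = levy.rvs(scale=0.5, size=2000, random_state=rng)   # positive 1/2-stable sample

t = np.linspace(0.05, 3.0, 60)                         # evaluation grid
emp_lt = np.exp(-np.outer(x, t)).mean(axis=0)          # empirical Laplace transform

def objective(alpha):
    return np.sum((emp_lt - np.exp(-t ** alpha)) ** 2)

fit = minimize_scalar(objective, bounds=(0.01, 0.99), method="bounded")
print("estimated alpha:", fit.x)                       # should be close to 0.5
```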
Inference for functional linear models in the presence of heteroscedastic errors has received insufficient attention given its practical importance; indeed, even a central limit theorem has not been established in this case. The difficulty is that conditional mean estimates have complicated sampling distributions due to the infinite-dimensional regressors, with truncation bias and scaling issues compounded by non-constant variance under heteroscedasticity. As a foundation for distributional inference, we establish a central limit theorem for the estimated conditional mean under general dependent errors, and we then develop a paired bootstrap method to provide better approximations of sampling distributions. The proposed paired bootstrap does not follow the standard bootstrap algorithm for finite-dimensional regressors, as that version fails outside a narrow window of implementation choices with functional regressors; the reason is a bias that arises with functional regressors in the naive bootstrap construction. Our bootstrap proposal incorporates debiasing and thereby attains much broader validity and flexibility with respect to truncation parameters for inference under heteroscedasticity; even when the naive approach is valid, the proposed bootstrap method performs better numerically. The bootstrap is applied to construct confidence intervals for centered projections and to conduct hypothesis tests for multiple conditional means. Our theoretical results on bootstrap consistency are demonstrated through simulation studies and illustrated with a real data example.
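For orientation, the following sketch shows the skeleton of a naive paired bootstrap for an FPCA-based estimate of the conditional mean at a new regressor. The toy data, the truncation level, and the percentile interval are illustrative assumptions, and the paper's debiasing modification, which is its key contribution, is deliberately not reproduced here.

```python
# Naive paired bootstrap for an FPCA-based functional linear regression
# estimate of E[Y | X = x0]: resample (X_i, Y_i) pairs and re-estimate the
# truncated projection estimator.  No debiasing is applied, so this is the
# construction the paper improves on.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 50, 3                       # samples, grid size, truncation level

def fpca_cond_mean(X, Y, x0, k):
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the empirical covariance operator (grid version).
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(X))
    evals, evecs = evals[::-1][:k], evecs[:, ::-1][:, :k]
    scores = Xc @ evecs                    # FPCA scores of the sample
    b = (scores * (Y - Y.mean())[:, None]).mean(axis=0) / evals
    return Y.mean() + ((x0 - X.mean(axis=0)) @ evecs) @ b

# Toy heteroscedastic data: smooth regressors X, scalar Y with non-constant
# error variance depending on the first regressor coefficient.
t = np.linspace(0, 1, p)
X = rng.normal(size=(n, 4)) @ np.array([np.sin(np.pi * j * t) for j in range(1, 5)])
Y = X @ np.sin(2 * np.pi * t) / p + (0.2 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=n)
x0 = np.sin(np.pi * t)

est = fpca_cond_mean(X, Y, x0, k)
boot = [fpca_cond_mean(X[idx], Y[idx], x0, k)
        for idx in (rng.integers(0, n, n) for _ in range(500))]
lo, hi = np.quantile(boot, [0.025, 0.975])   # percentile interval (no debiasing)
print(f"estimate {est:.3f}, naive paired-bootstrap CI ({lo:.3f}, {hi:.3f})")
```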
We introduce a simple natural deduction system for reasoning with judgments of the form "there exists a proof of $\varphi$" to explore the notion of judgmental existence, following Martin-L\"{o}f's methodology of distinguishing between judgments and propositions. In this system, the existential judgment can be internalized into a modal notion of propositional existence that is closely related to the truncation modality, a key tool for obtaining proof irrelevance, and to the lax modality. We provide a computational interpretation in the style of the Curry-Howard isomorphism for the existence modality and show that the corresponding system enjoys desirable properties such as strong normalization and subject reduction.
We prove lower bounds for the randomized approximation of the embedding $\ell_1^m \rightarrow \ell_\infty^m$ by algorithms that use arbitrary linear (hence non-adaptive) information provided by a (randomized) measurement matrix $N \in \mathbb{R}^{n \times m}$. These lower bounds reflect the increasing difficulty of the problem as $m \to \infty$, namely, a $\sqrt{\log m}$ term in the complexity $n$. This result implies that non-compact operators between arbitrary Banach spaces cannot be approximated using non-adaptive Monte Carlo methods. We also compare these lower bounds for non-adaptive methods with upper bounds based on adaptive, randomized recovery methods, for which the complexity $n$ exhibits only a $(\log\log m)$-dependence. In doing so, we give an example of a linear problem for which the errors of adaptive and non-adaptive Monte Carlo methods exhibit a gap of order $n^{1/2} ( \log n)^{-1/2}$.
We investigate the geometry of a family of log-linear statistical models called quasi-independence models. The toric fiber product (TFP) is useful for understanding the geometry of parameter inference in these models because the maximum likelihood degree is multiplicative under the TFP. We define the coordinate toric fiber product, or cTFP, and give necessary and sufficient conditions under which a quasi-independence model is a cTFP of lower-order models. We show that the vanishing ideal of every 2-way quasi-independence model with ML-degree 1 can be realized as an iterated toric fiber product of linear ideals. We also classify which Lawrence lifts of 2-way quasi-independence models are cTFPs and give a necessary condition for a $k$-way model to have ML-degree 1 in terms of its facial submodels.
Given an unconditional diffusion model for $\pi(x, y)$, how to use it to perform conditional simulation from $\pi(x \mid y)$ remains largely an open problem; it is typically addressed by learning conditional drifts for the denoising SDE after the fact. In this work, we express conditional simulation as an inference problem on an augmented space corresponding to a partial SDE bridge. This perspective allows us to implement efficient and principled particle Gibbs and pseudo-marginal samplers that marginally target the conditional distribution $\pi(x \mid y)$. Contrary to existing methodology, our methods do not introduce any additional approximation to the unconditional diffusion model beyond the Monte Carlo error. We showcase the benefits and drawbacks of our approach on a series of synthetic and real-data examples.
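To convey the flavour of particle-based conditional simulation, here is a heavily simplified sketch on a Gaussian toy model whose score is available in closed form, so the reverse SDE is exact and the sketch is self-contained. The annealed Gaussian potentials guiding the second coordinate toward the observation are a heuristic illustration of SMC conditioning, not the paper's partial-bridge particle Gibbs or pseudo-marginal constructions.

```python
# Guided SMC on a discretized reverse (denoising) SDE for a bivariate Gaussian
# toy model x = (x1, x2) ~ N(0, Sigma): simulate particles through the reverse
# SDE and anneal weights pulling x2 toward the observed value y.  The exact
# conditional is x1 | x2 = y ~ N(rho * y, 1 - rho**2), so we can check the output.
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, K = 2.0, 200, 2000             # horizon, Euler steps, particles
dt = T / n_steps
rho, y_obs = 0.8, 1.2
Sigma = np.array([[1.0, rho], [rho, 1.0]])

def score(x, t):
    # Exact score of the noised marginal under forward OU noising
    # dX = -X dt + sqrt(2) dW applied to Gaussian data: p_t = N(0, St).
    St = Sigma * np.exp(-2.0 * t) + (1.0 - np.exp(-2.0 * t)) * np.eye(2)
    return -x @ np.linalg.inv(St)

x = rng.normal(size=(K, 2))                # particles at reverse time t = T
logw, tau_prev = np.zeros(K), T
for i in range(n_steps):
    t = T - i * dt                         # time before the Euler step
    # Euler step of the reverse SDE dx = [x + 2 score(x, t)] ds + sqrt(2) dW.
    x = x + (x + 2.0 * score(x, t)) * dt + np.sqrt(2.0 * dt) * rng.normal(size=(K, 2))
    # Heuristic annealed potential N(y; x2, tau) with tau shrinking toward 0.
    tau = max(t - dt, 1e-2)
    logw += -0.5 * (x[:, 1] - y_obs) ** 2 * (1.0 / tau - 1.0 / tau_prev)
    tau_prev = tau
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < K / 2.0:     # resample when ESS drops
        x = x[rng.choice(K, size=K, p=w)]
        logw[:] = 0.0

w = np.exp(logw - logw.max()); w /= w.sum()
print("SMC estimate of E[x1 | x2 = y]:", np.sum(w * x[:, 0]), " exact:", rho * y_obs)
```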
Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae, in the modal logics characterizing classic behavioural equivalences and preorders, for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, the paper presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, the paper also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
We consider a wide class of generalized Radon transforms $\mathcal R$, which act in $\mathbb{R}^n$ for any $n\ge 2$ and integrate over submanifolds of any codimension $N$, $1\le N\le n-1$. We also allow for a fairly general reconstruction operator $\mathcal A$; the main requirement is that $\mathcal A$ be a Fourier integral operator with a phase function that is linear in the phase variable. We consider the task of image reconstruction from discrete data $g_{j,k} = (\mathcal R f)_{j,k} + \eta_{j,k}$, where the $\eta_{j,k}$ are independent, but not necessarily identically distributed, random variables. We show that the reconstruction error $N_\epsilon^{\text{rec}}=\mathcal A \eta$ satisfies $N^{\text{rec}}(\check x;x_0)=\lim_{\epsilon\to0}N_\epsilon^{\text{rec}}(x_0+\epsilon\check x)$, $\check x\in D$, where $x_0$ is a fixed point and $D\subset\mathbb{R}^n$ is a bounded domain. Here $N^{\text{rec}}$ and $N_\epsilon^{\text{rec}}$ are viewed as continuous random functions of the argument $\check x$ (random fields), and the limit is understood in the sense of probability distributions. Under some conditions on the first three moments of the $\eta_{j,k}$ (and some other, not very restrictive, conditions on $x_0$ and $\mathcal A$), we prove that $N^{\text{rec}}$ is a zero-mean Gaussian random field and explicitly compute its covariance. We also present a numerical experiment with a cone beam transform in $\mathbb{R}^3$, which shows an excellent match between theoretical predictions and simulated reconstructions.
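The Gaussian limit can be illustrated on a one-dimensional toy analogue, where the reconstruction operator is replaced by a fixed smoothing kernel and the noise is independent but not identically distributed; as the sampling step $\epsilon$ shrinks, the error at a fixed point aggregates more independent terms and its skewness and excess kurtosis decay toward the Gaussian values of zero. The kernel, the noise law, and the grid below are illustrative stand-ins for the paper's FIO setting.

```python
# Toy 1-D analogue of the Gaussian limit: the "reconstruction" at x0 = 0 is a
# kernel-weighted sum of independent, non-identically distributed, mean-zero,
# skewed noise samples.  Halving eps tenfold increases the number of effective
# terms, so skewness and excess kurtosis shrink toward 0.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 10000

for eps in (0.02, 0.002):                       # sampling steps, fine vs coarse
    grid = np.arange(-1.0, 1.0, eps)            # detector sample locations
    w = np.exp(-0.5 * (grid / 0.1) ** 2) * eps  # stand-in kernel of operator A
    scale = 1.0 + 0.5 * np.abs(grid)            # location-dependent noise scale
    # Centered exponential noise: skewed, so no single term is Gaussian.
    eta = rng.exponential(scale, size=(n_trials, grid.size)) - scale
    z = eta @ w
    z /= z.std()
    print(f"eps={eps}: skewness={np.mean(z**3):.3f}, "
          f"excess kurtosis={np.mean(z**4) - 3.0:.3f}")
```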
Analogous to the notion of h-adaptivity, where the discretization resolution is changed adaptively, I propose the notion of model adaptivity, where the underlying model (the governing equations) is changed adaptively in space and time. Specifically, this work introduces a hybrid and adaptive coupling of a 3D bulk fluid flow model with a 2D thin film flow model. As a result, this work extends the applicability of existing thin film flow models to complex scenarios where, for example, bulk flow develops into thin films after striking a surface. At each location in space and time, the proposed framework automatically decides whether a 3D or a 2D model should be applied. Using a meshless approach for both the 3D and 2D models, the decision to apply a 2D or 3D model at each particle is based on the user-prescribed resolution and a local principal component analysis, as sketched below. When a particle needs to be switched from the 3D model to the 2D one, or vice versa, the discretization is changed and all relevant data mapping is done on the fly. Appropriate two-way coupling conditions and mass conservation considerations between the 3D and 2D models are also developed. Numerical results show that this model-adaptive framework offers greater flexibility and compares well against finely resolved 3D simulations. In a realistic application scenario, a three-fold speed-up is obtained while maintaining the accuracy of the solution.
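A minimal sketch of the local PCA decision criterion mentioned above: if the spread of a particle's neighborhood along its thinnest principal direction is small relative to the user-prescribed resolution $h$, the neighborhood is locally flat and the 2D thin film model is selected; otherwise, the 3D bulk model is kept. The threshold value is an illustrative assumption.

```python
# Local PCA at a particle: eigenvalues of the covariance of neighbor positions
# measure the spread along each principal direction; a near-zero smallest
# eigenvalue signals a locally flat (thin-film) neighborhood.
import numpy as np

def select_model(positions, h, flatness_threshold=0.1):
    """positions: (k, 3) neighbor coordinates within radius h of a particle."""
    centered = positions - positions.mean(axis=0)
    evals = np.linalg.eigvalsh(centered.T @ centered / len(positions))
    thickness = np.sqrt(max(evals[0], 0.0))   # spread along the thinnest axis
    return "2D" if thickness < flatness_threshold * h else "3D"

rng = np.random.default_rng(4)
h = 1.0
film = rng.uniform(-h, h, size=(50, 3)); film[:, 2] *= 0.02   # thin sheet
bulk = rng.uniform(-h, h, size=(50, 3))                       # 3-D blob
print(select_model(film, h), select_model(bulk, h))           # -> 2D 3D
```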