Can the $\lambda$-calculus be considered a reasonable computational model? Can we use it for measuring the time \emph{and} space consumption of algorithms? While the literature contains positive answers about time, much less is known about space. This paper presents a new reasonable space cost model for the $\lambda$-calculus, based on a variant of the Krivine abstract machine. For the first time, this cost model is able to accommodate logarithmic space. Moreover, we study the time behavior of our machine and show how to transport our results to the call-by-value $\lambda$-calculus.
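As background for the machine mentioned above, the following is a minimal sketch of the standard call-by-name Krivine abstract machine on de Bruijn-indexed terms; it is the textbook machine only, not the space-aware variant introduced in the paper.

```python
# Minimal sketch of the standard call-by-name Krivine abstract machine on
# de Bruijn-indexed lambda-terms.  Textbook machine only, not the paper's variant.
from dataclasses import dataclass

@dataclass
class Var:          # de Bruijn index
    index: int

@dataclass
class Lam:          # lambda abstraction
    body: object

@dataclass
class App:          # application
    fun: object
    arg: object

def krivine(term):
    """Weak head reduction; a state is (code, environment, stack).

    Environment entries and stack entries are closures, i.e. (term, environment) pairs.
    """
    code, env, stack = term, [], []
    while True:
        if isinstance(code, App):               # push the argument as a closure
            stack.append((code.arg, env))
            code = code.fun
        elif isinstance(code, Lam) and stack:   # pop a closure into the environment
            env = [stack.pop()] + env
            code = code.body
        elif isinstance(code, Var):             # enter the closure bound to the variable
            code, env = env[code.index]
        else:                                   # abstraction with empty stack: WHNF reached
            return code, env

# (\x. x) (\y. y) reduces to the identity.
print(krivine(App(Lam(Var(0)), Lam(Var(0)))))
```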
The number of independent sets in regular bipartite expander graphs can be efficiently approximated by expressing it as the partition function of a suitable polymer model and truncating its cluster expansion. While this approach has been extensively used for graphs, surprisingly little is known about analogous questions in the context of hypergraphs. In this work, we apply this method to asymptotically determine the number of independent sets in regular $k$-partite $k$-uniform hypergraphs which satisfy natural expansion properties. The resulting formula depends only on the local structure of the hypergraph, making it computationally efficient. In particular, we provide a simple closed-form expression for linear hypergraphs.
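For orientation, the polymer-model machinery invoked above is the following generic setup (the standard abstract framework, not the specific polymer model constructed in the paper for hypergraph independent sets): given a set of polymers $\mathcal{P}$ with weights $w_\gamma$ and a symmetric compatibility relation, the polymer partition function and its cluster expansion are
\[
\Xi \;=\; \sum_{\substack{\Gamma \subseteq \mathcal{P} \\ \Gamma \text{ pairwise compatible}}} \; \prod_{\gamma \in \Gamma} w_\gamma,
\qquad
\log \Xi \;=\; \sum_{\Gamma \in \mathcal{C}} \phi(\Gamma) \prod_{\gamma \in \Gamma} w_\gamma,
\]
where $\mathcal{C}$ is the set of clusters (ordered tuples of polymers whose incompatibility graph is connected) and $\phi$ is the Ursell function. Truncating the second, convergent series at a fixed cluster size is what yields the efficient approximation scheme mentioned above.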
We address the regression problem for a general function $f:[-1,1]^d\to \mathbb R$ when the learner selects the training points $\{x_i\}_{i=1}^n$ to achieve a uniform error bound across the entire domain. In this setting, known historically as nonparametric regression, we aim to establish a sample complexity bound that depends solely on the function's degree of smoothness. Assuming periodicity at the domain boundaries, we introduce PADUA, an algorithm that, with high probability, provides performance guarantees optimal up to constant or logarithmic factors across all problem parameters. Notably, PADUA is the first parametric algorithm with optimal sample complexity for this setting. Thanks to this feature, we prove that, unlike the nonparametric state of the art, PADUA enjoys optimal space complexity in the prediction phase. To validate these results, we perform numerical experiments on functions derived from real audio data, where PADUA shows performance comparable to state-of-the-art methods while requiring only a fraction of the computational time.
We characterise the behaviour of the maximum Diaconis-Ylvisaker prior penalized likelihood estimator in high-dimensional logistic regression, where the number of covariates is a fraction $\kappa \in (0,1)$ of the number of observations $n$, as $n \to \infty$. We derive the estimator's aggregate asymptotic behaviour under this proportional asymptotic regime, when the covariates are independent zero-mean normal random variables and the linear predictor has asymptotic variance $\gamma^2$. From this foundation, we devise adjusted $Z$-statistics, penalized likelihood ratio statistics, and aggregate asymptotic results with arbitrary covariate covariance. While the maximum likelihood estimate asymptotically exists only for a narrow range of $(\kappa, \gamma)$ values, the maximum Diaconis-Ylvisaker prior penalized likelihood estimate not only always exists but is also directly computable using maximum likelihood routines. Thus, our asymptotic results also hold for $(\kappa, \gamma)$ values where results for maximum likelihood are not attainable, with no overhead in implementation or computation. We study the estimator's shrinkage properties, compare it to alternative estimation methods that remain valid under proportional asymptotics, and present procedures for estimating the unknown constants that describe the asymptotic behaviour of our estimator. We also provide a conjecture about the behaviour of our estimator when an intercept parameter is present in the model. We present results from extensive numerical studies that demonstrate the theoretical advances and provide strong evidence in support of the conjecture, and we illustrate the proposed methodology through the analysis of a real-world data set on digit recognition.
Intelligent robots need to interact with diverse objects across various environments. The appearance and state of objects frequently undergo complex transformations that depend on the object properties, e.g., phase transitions. However, in the vision community, segmenting dynamic objects undergoing phase transitions has been overlooked. In light of this, we introduce the concept of phase in segmentation, which categorizes real-world objects based on their visual characteristics and potential morphological and appearance changes. We then present a new benchmark, Multi-Phase, Multi-Transition, and Multi-Scenery Video Object Segmentation (M3-VOS), to verify the ability of models to understand object phases; it consists of 479 high-resolution videos spanning over 10 distinct everyday scenarios and provides dense instance mask annotations that capture both object phases and their transitions. We evaluate state-of-the-art methods on M3-VOS, yielding several key insights. Notably, current appearance-based approaches show significant room for improvement when handling objects with phase transitions. The inherent changes in disorder suggest that predictions for the forward, entropy-increasing process can be improved through a reverse, entropy-reducing process. These findings lead us to propose ReVOS, a new plug-and-play model that improves performance through reversal refinement. Our data and code will be publicly available.
We incorporate strong negation into the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we obtain strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
Tensor data are multi-dimensional arrays. Low-rank decomposition-based regression methods with tensor predictors exploit the structural information in tensor predictors while significantly reducing the number of parameters in tensor regression. We propose a method named NA$_0$CT$^2$ (Noise Augmentation for $\ell_0$ regularization on Core Tensor in Tucker decomposition) to regularize the parameters in tensor regression (TR) coupled with Tucker decomposition. We establish theoretically that NA$_0$CT$^2$ achieves exact $\ell_0$ regularization on the core tensor from the Tucker decomposition in both linear TR and generalized linear TR. To our knowledge, NA$_0$CT$^2$ is the first Tucker-decomposition-based regularization method in TR to achieve $\ell_0$ regularization on core tensors. NA$_0$CT$^2$ is implemented through an iterative procedure that involves two straightforward steps in each iteration: generating noisy data based on the core tensor from the Tucker decomposition of the updated parameter estimate, and running a regular GLM on the noise-augmented data with vectorized predictors. We demonstrate the implementation of NA$_0$CT$^2$ and its $\ell_0$ regularization effect in both simulation studies and real data applications. The results suggest that NA$_0$CT$^2$ improves predictions compared to other decomposition-based TR approaches, with or without regularization, and that it identifies important predictors even though it is not designed for that purpose.
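To fix ideas, the two-step iteration described above might be organized roughly as in the following sketch for linear TR. The Tucker decomposition comes from the tensorly library; `make_noise_data` is a hypothetical placeholder (here just Gaussian pseudo-rows with zero responses) standing in for the paper's core-tensor-based augmentation rule, which is not reproduced here.

```python
# Schematic skeleton of the two-step iteration sketched in the abstract (linear
# TR only).  `make_noise_data` is a dummy placeholder, NOT the paper's
# core-tensor-based noise-generation rule.
import numpy as np
from tensorly.decomposition import tucker
from sklearn.linear_model import LinearRegression

def make_noise_data(core, n_aug, p, rng):
    # Placeholder: Gaussian pseudo-predictors scaled by the core's magnitude,
    # paired with zero responses.  The actual augmentation scheme differs.
    scale = np.abs(core).mean() + 1e-8
    return rng.normal(scale=scale, size=(n_aug, p)), np.zeros(n_aug)

def na0ct2_skeleton(X, y, ranks, n_iter=10, n_aug=50, seed=0):
    """X: (n_samples, d1, ..., dK) tensor predictors; y: responses."""
    rng = np.random.default_rng(seed)
    n, p = X.shape[0], int(np.prod(X.shape[1:]))
    Xvec = X.reshape(n, p)                                   # vectorized predictors
    # Initialize the coefficient tensor with an ordinary least-squares fit.
    beta = LinearRegression(fit_intercept=False).fit(Xvec, y).coef_.reshape(X.shape[1:])
    for _ in range(n_iter):
        # Step 1: Tucker-decompose the current estimate and build augmentation
        # data from its core tensor.
        core, factors = tucker(beta, rank=ranks)
        X_aug, y_aug = make_noise_data(core, n_aug, p, rng)
        # Step 2: fit a regular (here: linear) regression on the augmented data.
        fit = LinearRegression(fit_intercept=False).fit(
            np.vstack([Xvec, X_aug]), np.concatenate([y, y_aug]))
        beta = fit.coef_.reshape(X.shape[1:])
    return beta
```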
We introduce and study a purely syntactic notion of lax cones and $(\infty,\infty)$-limits on finite computads in \texttt{CaTT}, a type theory for $(\infty,\infty)$-categories due to Finster and Mimram. Conveniently, finite computads are precisely the contexts in \texttt{CaTT}. We define a cone over a context to be a context, which is obtained by induction over the list of variables of the underlying context. In the case where the underlying context is globular we give an explicit description of the cone and conjecture that an analogous description continues to hold also for general contexts. We use the cone to control the types of the term constructors for the universal cone. The implementation of the universal property follows a similar line of ideas. Starting with a cone as a context, a set of context extension rules produce a context with the shape of a transfor between cones, i.e.~a higher morphism between cones. As in the case of cones, we use this context as a template to control the types of the term constructor required for universal property.
We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem, known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem), is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial-time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm for the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs has bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently, that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature, since almost all of them concern classes of hypergraphs of bounded VC-dimension. As a consequence, the hypergraph transversal problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time on general hypergraphs, and in polynomial time when the conformality of the hypergraph is bounded, which is one of the few known polynomial cases where the VC-dimension is unbounded.
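To make the enumerated objects concrete, here is a tiny brute-force illustration of minimal transversals on an explicit hypergraph; it is exponential in the worst case and is not the incremental polynomial algorithm proposed in the paper.

```python
# Brute-force enumeration of minimal transversals (minimal hitting sets) of a
# small hypergraph, for illustration only.
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate all inclusion-minimal vertex sets hitting every edge."""
    hits = lambda s: all(s & e for e in edges)
    found = []
    for k in range(len(vertices) + 1):          # by increasing size
        for cand in combinations(vertices, k):
            s = set(cand)
            # s is minimal iff it hits every edge and contains no smaller
            # transversal found earlier.
            if hits(s) and not any(t <= s for t in found):
                found.append(s)
    return found

# The hypergraph with edges {1,2} and {2,3} has minimal transversals {2} and {1,3}.
print(minimal_transversals([1, 2, 3], [{1, 2}, {2, 3}]))
```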
Let $P$ be a set of $n$ points in the plane. We consider a variation of the classical Erd\H{o}s-Szekeres problem, presenting efficient algorithms with $O(n^3)$ running time and $O(n^2)$ space complexity that compute: (1) a subset $S$ of $P$ such that the boundary of the rectilinear convex hull of $S$ contains the maximum number of points of $P$; (2) a subset $S$ of $P$ such that the boundary of the rectilinear convex hull of $S$ contains the maximum number of points of $P$ and its interior contains no element of $P$; (3) a subset $S$ of $P$ such that the rectilinear convex hull of $S$ has maximum area and its interior contains no element of $P$; and (4) when each point of $P$ is assigned a positive or negative weight, a subset $S$ of $P$ that maximizes the total weight of the points in the rectilinear convex hull of $S$. We also revisit the problems of computing a maximum-area orthoconvex polygon and a maximum-area staircase polygon amidst a point set in a rectangular domain. We obtain new and simpler algorithms for both problems with the same complexity as the state of the art.
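As background for the objects above: for points in general position, the vertices of the rectilinear convex hull are exactly the points that are maximal in one of the four quadrant dominance orders, so each can be extracted by a simple staircase sweep. The sketch below illustrates only this structure; it is not one of the paper's $O(n^3)$ optimization algorithms.

```python
# Extract the candidate vertices of the rectilinear convex hull of a point set
# as the union of the four maximal staircases (background illustration only).

def staircase(points):
    """NE staircase: points not dominated by another point that is >= in both
    coordinates (and strictly greater in at least one)."""
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))   # by decreasing x
    best_y = float("-inf")
    stair = []
    for x, y in pts:
        if y > best_y:            # no point to the right is also above
            stair.append((x, y))
            best_y = y
    return stair[::-1]

def rectilinear_hull_vertices(points):
    """Union of the four staircases, one per quadrant orientation."""
    verts = set()
    for sx, sy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        for x, y in staircase([(sx * px, sy * py) for px, py in points]):
            verts.add((sx * x, sy * y))
    return verts

# The point (1, 1) lies in the interior and is therefore not a hull vertex.
print(rectilinear_hull_vertices([(0, 0), (2, 1), (1, 2), (1, 1)]))
```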
On general $n$-dimensional simplicial meshes, we propose a family of interior penalty nonconforming finite element methods for $2m$-th order partial differential equations, where $m \geq 0$ and $n \geq 1$. For this family of nonconforming finite elements, the shape function space consists of polynomials of degree at most $m$, making it minimal. This family of finite element spaces exhibits natural inclusion properties, analogous to those of the corresponding Sobolev spaces in the continuous case. By adding interior penalty terms to the bilinear form, we establish quasi-optimal error estimates in the energy norm. Owing to the weak continuity of the nonconforming finite element spaces, the interior penalty terms in the bilinear form take a simple form, and an interesting property is that the penalty parameter only needs to be a positive constant of $\mathcal{O}(1)$. These theoretical results are further validated by numerical tests.
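For orientation, in the lowest-order case $m=1$ the classical symmetric interior penalty bilinear form for the Poisson model problem reads (shown only as a reminder of what interior penalty terms look like in the standard setting; it is not the bilinear form used in the paper, whose penalty terms are reported to be simpler):
\[
a_h(u,v) \;=\; \sum_{T \in \mathcal{T}_h} \int_T \nabla u \cdot \nabla v \, dx
\;-\; \sum_{F \in \mathcal{F}_h} \int_F \Big( \{\!\!\{\partial_n u\}\!\!\} [\![ v ]\!] + \{\!\!\{\partial_n v\}\!\!\} [\![ u ]\!] \Big) \, ds
\;+\; \sum_{F \in \mathcal{F}_h} \frac{\sigma}{h_F} \int_F [\![ u ]\!]\,[\![ v ]\!] \, ds,
\]
where $\mathcal{T}_h$ is the mesh, $\mathcal{F}_h$ the set of interior faces, $[\![\cdot]\!]$ and $\{\!\!\{\cdot\}\!\!\}$ denote jumps and averages across faces, $h_F$ is the face diameter, and $\sigma > 0$ is the penalty parameter.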