The notion of $\alpha$-equivalence between $\lambda$-terms is commonly used to identify terms that are considered equal. However, due to the primitive treatment of free variables, this notion falls short when comparing subterms occurring within a larger context. Depending on whether the Barendregt convention (choosing distinct variable names for all binders involved) is used, it equates either too few or too many subterms. We introduce a formal notion of context-sensitive $\alpha$-equivalence, where two open terms can be compared within a context that resolves their free variables. We show that this equivalence coincides exactly with the notion of bisimulation equivalence. Furthermore, we present an efficient $O(n\log n)$-time algorithm that identifies $\lambda$-terms modulo context-sensitive $\alpha$-equivalence, improving upon the previously established $O(n\log^2 n)$ bound for hashing modulo ordinary $\alpha$-equivalence by Maziarz et al. Hashing $\lambda$-terms is useful in many applications that require common subterm elimination and structure sharing. We employ the algorithm to obtain a large-scale, densely packed, interconnected graph of mathematical knowledge from the Coq proof assistant for machine learning purposes.
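To make the hashing setting concrete, the sketch below hashes $\lambda$-terms modulo ordinary $\alpha$-equivalence by converting to de Bruijn indices and hashing the canonical form. It is a toy illustration only, and is neither the paper's context-sensitive notion nor its $O(n\log n)$ algorithm.

\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def de_bruijn(term, env=()):
    """Canonical form: bound variables become binder depths, free names stay."""
    if isinstance(term, Var):
        return ('bound', env.index(term.name)) if term.name in env else ('free', term.name)
    if isinstance(term, Lam):
        return ('lam', de_bruijn(term.body, (term.param,) + env))
    return ('app', de_bruijn(term.fun, env), de_bruijn(term.arg, env))

def alpha_hash(term):
    """Hash a lambda-term modulo ordinary alpha-equivalence."""
    return hash(de_bruijn(term))

# \x. x y  and  \z. z y  are alpha-equivalent, so their hashes agree.
t1 = Lam('x', App(Var('x'), Var('y')))
t2 = Lam('z', App(Var('z'), Var('y')))
assert alpha_hash(t1) == alpha_hash(t2)
\end{verbatim}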
We develop a sparse hierarchical $hp$-finite element method ($hp$-FEM) for the Helmholtz equation with rotationally invariant variable coefficients posed on a two-dimensional disk or annulus. The mesh consists of an inner disk cell (omitted on an annulus domain) surrounded by concentric annulus cells. The discretization preserves the Fourier mode decoupling of rotationally invariant operators, such as the Laplacian, which manifests as block diagonal mass and stiffness matrices. Moreover, the matrices have a sparsity pattern independent of the order of the discretization and admit an optimal complexity factorization. The sparse $hp$-FEM can handle radial discontinuities in the right-hand side and in rotationally invariant Helmholtz coefficients. We consider examples such as a high-frequency Helmholtz equation with radial discontinuities, the time-dependent Schr\"odinger equation, and an extension to a three-dimensional cylinder domain with a quasi-optimal solve via the Alternating Direction Implicit (ADI) algorithm.
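To illustrate the Fourier mode decoupling mentioned above (a standard separation-of-variables computation, not a statement taken from the paper): writing $u(r,\theta)=\sum_m u_m(r)e^{im\theta}$ and $f(r,\theta)=\sum_m f_m(r)e^{im\theta}$, a rotationally invariant Helmholtz equation $\Delta u + k(r)^2 u = f$ splits into independent radial problems
\[
\frac{1}{r}\frac{d}{dr}\!\left(r\,\frac{du_m}{dr}\right)-\frac{m^2}{r^2}\,u_m + k(r)^2 u_m = f_m(r), \qquad m\in\mathbb{Z},
\]
so the mass and stiffness matrices are block diagonal with one block per Fourier mode.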
A graph $G=(V,E)$ is a star-$k$-PCG if there exist a weight function $w: V \rightarrow \mathbb{R}^+$ and $k$ mutually exclusive intervals $I_1, I_2, \ldots, I_k$ such that there is an edge $uv \in E$ if and only if $w(u)+w(v) \in \bigcup_i I_i$. These graphs are related to two important classes of graphs: PCGs and multithreshold graphs. It is known that for any graph $G$ there exists a $k$ such that $G$ is a star-$k$-PCG. Thus, for a given graph $G$, it is natural to ask for the minimum $k$ such that $G$ is a star-$k$-PCG. We define this minimum $k$ as the star number of the graph, denoted by $\gamma(G)$. Here we investigate the star number of simple graph classes, such as graphs of small size, caterpillars, cycles and grids. Specifically, we determine the exact value of $\gamma(G)$ for all graphs with at most 7 vertices. In doing so we show that the smallest graphs with star number 2 have exactly 5 vertices and there are only 4 of them, while the smallest graphs with star number 3 have exactly 7 vertices and there are only 3 of them. Next, we provide a construction showing that the star number of caterpillars is 1. Moreover, we show that the star number of cycles and two-dimensional grid graphs is 2 and that the star number of $4$-dimensional grids is at least 3. Finally, we conclude with numerous open problems.
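As a concrete illustration of the definition (with ad hoc weights and an ad hoc interval, not taken from the paper), the sketch below builds the graph induced by a weight function and $k$ intervals, i.e. checks the condition $w(u)+w(v)\in\bigcup_i I_i$, and shows that the path $P_4$ is a star-$1$-PCG.

\begin{verbatim}
from itertools import combinations

def star_k_pcg_edges(weights, intervals):
    """Edges of the star-k-PCG defined by vertex weights and k disjoint intervals.

    weights:   dict mapping vertex -> positive weight
    intervals: list of (lo, hi) pairs, interpreted as closed intervals
    """
    def hit(s):
        return any(lo <= s <= hi for lo, hi in intervals)
    return {frozenset((u, v))
            for u, v in combinations(weights, 2)
            if hit(weights[u] + weights[v])}

# Example: with these weights and the single interval [6, 7], the induced
# edges are exactly ab, bc, cd, i.e. the path a-b-c-d, so P_4 is a star-1-PCG.
w = {'a': 1, 'b': 5, 'c': 2, 'd': 4}
print(star_k_pcg_edges(w, [(6, 7)]))
\end{verbatim}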
We consider the online hitting set problem for the range space $\Sigma=(\cal X,\cal R)$, where the point set $\cal X$ is known beforehand, but the set $\cal R$ of geometric objects is not known in advance. Here, objects from $\cal R$ arrive one by one. The objective of the problem is to maintain a hitting set of minimum cardinality by making irrevocable decisions. In this paper, we consider the problem when objects are unit balls or unit hypercubes in $\mathbb{R}^d$, and the points from $\mathbb{Z}^d$ are used for hitting them. First, we address the case when objects are unit intervals in $\mathbb{R}$ and present an optimal deterministic algorithm with a competitive ratio of~$2$. Then, we consider the case when objects are unit balls. For hitting unit balls in $\mathbb{R}^2$ and $\mathbb{R}^3$, we present $4$- and $14$-competitive deterministic algorithms, respectively. On the other hand, for hitting unit balls in $\mathbb{R}^d$, we propose an $O(d^4)$-competitive deterministic algorithm, and we demonstrate that, for $d<4$, the competitive ratio of any deterministic algorithm is at least $d+1$. In the end, we explore the case where objects are unit hypercubes. For hitting unit hypercubes in $\mathbb{R}^2$ and $\mathbb{R}^3$, we obtain $4$- and $8$-competitive deterministic algorithms, respectively. For hitting unit hypercubes in $\mathbb{R}^d$ ($d\geq 3$), we present an $O(d^2)$-competitive randomized algorithm. Furthermore, we prove that the competitive ratio of any deterministic algorithm for the problem is at least $d+1$ for any $d\in\mathbb{N}$.
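The sketch below illustrates the online setting for unit intervals in $\mathbb{R}$ hit by points of $\mathbb{Z}$: decisions are irrevocable, and an uncovered arriving interval triggers the addition of a point. The specific rule used here (add the rightmost integer contained in the interval) is our own illustrative choice and is not claimed to be the paper's optimal $2$-competitive algorithm.

\begin{verbatim}
import math

class OnlineUnitIntervalHitter:
    """Online hitting of unit intervals in R by integer points of Z.

    Illustrative rule only (not necessarily the paper's algorithm): when an
    arriving interval is not yet hit, irrevocably add the rightmost integer
    point it contains.
    """
    def __init__(self):
        self.hitting_set = set()

    def arrive(self, a):
        """Process the unit interval [a, a+1]; return the point added (or None)."""
        lo, hi = a, a + 1.0
        if any(lo <= z <= hi for z in self.hitting_set):
            return None                # already hit; chosen points are never removed
        z = math.floor(hi)             # rightmost integer in [a, a+1]
        self.hitting_set.add(z)
        return z

hitter = OnlineUnitIntervalHitter()
for a in [0.5, 1.2, 1.9, 3.4]:
    hitter.arrive(a)
print(sorted(hitter.hitting_set))      # -> [1, 2, 4]
\end{verbatim}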
An independent set in a graph $G$ is a set of pairwise non-adjacent vertices. A graph $G$ is bipartite if its vertex set can be partitioned into two independent sets. In the Odd Cycle Transversal problem, the input is a graph $G$ along with a weight function $w$ associating a rational weight with each vertex, and the task is to find a minimum-weight vertex subset $S$ of $G$ such that $G - S$ is bipartite, where the weight of $S$ is $w(S) = \sum_{v\in S} w(v)$. We show that Odd Cycle Transversal is polynomial-time solvable on graphs excluding $P_5$ (a path on five vertices) as an induced subgraph. The problem was previously known to be polynomial-time solvable on $P_4$-free graphs and NP-hard on $P_6$-free graphs [Dabrowski, Feghali, Johnson, Paesani, Paulusma and Rz\k{a}\.zewski, Algorithmica 2020]. Bonamy, Dabrowski, Feghali, Johnson and Paulusma [Algorithmica 2019] posed the existence of a polynomial-time algorithm on $P_5$-free graphs as an open problem; the question was later restated by Rz\k{a}\.zewski [Dagstuhl Reports, 9(6): 2019] and by Chudnovsky, King, Pilipczuk, Rz\k{a}\.zewski, and Spirkl [SIDMA 2021], who gave an algorithm with running time $n^{O(\sqrt{n})}$.
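As a small companion to the problem definition (not the paper's algorithm), the sketch below checks whether a candidate set $S$ is an odd cycle transversal, i.e. whether $G-S$ is bipartite via BFS 2-coloring, and reports its weight $w(S)$.

\begin{verbatim}
from collections import deque

def is_bipartite(adj, removed):
    """2-color the graph induced on the vertices not in `removed` via BFS."""
    color = {}
    for s in adj:
        if s in removed or s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False       # odd cycle found
    return True

def odd_cycle_transversal_weight(adj, w, S):
    """Return w(S) if G - S is bipartite, else None."""
    return sum(w[v] for v in S) if is_bipartite(adj, set(S)) else None

# Example: a 5-cycle (an odd cycle); removing any single vertex makes it bipartite.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
weights = {v: 1 for v in C5}
print(odd_cycle_transversal_weight(C5, weights, {2}))    # -> 1
print(odd_cycle_transversal_weight(C5, weights, set()))  # -> None (C5 is not bipartite)
\end{verbatim}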
We consider information update systems on a gossip network, which consists of a single source and $n$ receiver nodes. The source encrypts the information into $n$ distinct keys with version stamps, sending a unique key to each node. For decryption in a $(k, n)$-Threshold Signature Scheme, each receiver node requires at least $k+1$ different keys with the same version, shared over peer-to-peer connections. We consider two different schemes: a memory scheme (in which the nodes keep the source's current and previous encrypted messages) and a memoryless scheme (in which the nodes keep only the source's current message). We measure the ``timeliness'' of information updates by using the version age of information. Our work focuses on determining closed-form expressions for the time average age of information in a heterogeneous random graph. Our work not only verifies the expected outcome that a memory scheme results in a lower average age than a memoryless scheme, but also quantifies the difference between the two. In our numerical results, we quantify the value of memory and demonstrate that the advantages of memory diminish with infrequent source updates, frequent gossiping between nodes, or a decrease in $k$ for a fixed number of nodes.
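For a rough feel of the version-age metric, the toy Monte Carlo sketch below simulates a memoryless scheme on a complete gossip graph with exponential timing; the topology, rates, and update semantics here are our own simplifying assumptions and do not reproduce the paper's heterogeneous random graph model or its closed-form expressions.

\begin{verbatim}
import random

def simulate_memoryless_version_age(n=10, k=3, rate_update=1.0,
                                     rate_source=1.0, rate_gossip=1.0,
                                     horizon=5000.0, seed=0):
    """Toy Gillespie simulation of version age under a memoryless scheme."""
    rng = random.Random(seed)
    version = 0
    keys = [set() for _ in range(n)]   # current-version keys held by each node
    decrypted = [0] * n                # latest version each node has decrypted
    t, area = 0.0, 0.0
    total_rate = rate_update + n * rate_source + n * rate_gossip
    while t < horizon:
        dt = rng.expovariate(total_rate)
        area += dt * sum(version - d for d in decrypted) / n
        t += dt
        u = rng.uniform(0.0, total_rate)
        if u < rate_update:                             # source publishes a new version
            version += 1
            keys = [set() for _ in range(n)]            # memoryless: old keys are dropped
        elif u < rate_update + n * rate_source:         # source delivers node i's own key
            i = rng.randrange(n)
            keys[i].add(i)
        else:                                           # node i gossips its keys to node j
            i = rng.randrange(n)
            j = rng.randrange(n - 1)
            j = j if j < i else j + 1
            keys[j] |= keys[i]
        for i in range(n):                              # >= k+1 keys allow decryption
            if len(keys[i]) >= k + 1:
                decrypted[i] = version
    return area / t

print(simulate_memoryless_version_age())
\end{verbatim}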
We show, in one dimension, that an $hp$-Finite Element Method ($hp$-FEM) discretisation can be solved in optimal complexity because the discretisation has a special sparsity structure that ensures that the \emph{reverse Cholesky factorisation} -- Cholesky starting from the bottom right instead of the top left -- remains sparse. Moreover, computing and inverting the factorisation parallelises almost trivially across the different elements. By incorporating this approach into an Alternating Direction Implicit (ADI) method \`a la Fortunato and Townsend (2020), we can solve, within a prescribed tolerance, an $hp$-FEM discretisation of the (screened) Poisson equation on a rectangle, in parallel, with quasi-optimal complexity: $O(N^2 \log N)$ operations, where $N$ is the maximal total number of degrees of freedom in each dimension. When combined with fast Legendre transforms, we can also solve nonlinear time-evolution partial differential equations in a quasi-optimal complexity of $O(N^2 \log^2 N)$ operations, which we demonstrate on the (viscid) Burgers' equation.
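A minimal numpy sketch of the reverse Cholesky idea: eliminating from the bottom-right corner is the same as an ordinary Cholesky factorisation after reversing the row and column order, and it produces an upper-triangular factor. The sketch uses a dense random matrix and does not exploit the $hp$-FEM sparsity structure that makes the factorisation fast in the paper.

\begin{verbatim}
import numpy as np

def reverse_cholesky(A):
    """Return upper-triangular U with A = U @ U.T, eliminating from the
    bottom-right corner (equivalently: ordinary Cholesky of the
    order-reversed matrix)."""
    P = A[::-1, ::-1]              # reverse row and column order
    L = np.linalg.cholesky(P)      # P = L @ L.T with L lower triangular
    U = L[::-1, ::-1]              # undo the reversal: U is upper triangular
    return U

# Symmetric positive definite test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)
U = reverse_cholesky(A)
assert np.allclose(U, np.triu(U))  # upper triangular
assert np.allclose(U @ U.T, A)     # reconstructs A
\end{verbatim}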
We prove that the values of a generalized $\psi$-estimator (introduced by Barczy and P\'ales in 2022) on samples of arbitrary length but having only two different observations uniquely determine the values of the estimator on any sample of arbitrary length without any restriction on the number of different observations. In other words, samples of arbitrary length but having only two different observations form a determining class for generalized $\psi$-estimators. We also obtain a similar statement for the comparison of generalized $\psi$-estimators using comparative functions, and, as a corollary of this result, we derive Schweitzer's inequality (also called Kantorovich's inequality).
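For reference, Schweitzer's (Kantorovich's) inequality states that for $x_1,\dots,x_n\in[m,M]$ with $0<m\le M$, $\bigl(\tfrac1n\sum_i x_i\bigr)\bigl(\tfrac1n\sum_i 1/x_i\bigr)\le\frac{(m+M)^2}{4mM}$. The snippet below only checks this bound numerically; deriving it via generalized $\psi$-estimators is the paper's contribution and is not reproduced here.

\begin{verbatim}
import random

def schweitzer_sides(xs, m, M):
    """Left- and right-hand sides of Schweitzer's inequality for xs in [m, M]."""
    n = len(xs)
    lhs = (sum(xs) / n) * (sum(1.0 / x for x in xs) / n)
    rhs = (m + M) ** 2 / (4.0 * m * M)
    return lhs, rhs

rng = random.Random(1)
m, M = 0.5, 4.0
xs = [rng.uniform(m, M) for _ in range(1000)]
lhs, rhs = schweitzer_sides(xs, m, M)
assert lhs <= rhs
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
\end{verbatim}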
We consider composition orderings for linear functions of one variable. Given $n$ linear functions $f_1,\dots,f_n$ and a constant $c$, the objective is to find a permutation $\sigma$ that minimizes/maximizes $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$. The problem was first studied in the area of time-dependent scheduling and is known to be solvable in $O(n\log n)$ time if all the functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case by regarding linear functions as two-dimensional vectors. We also show several interesting properties of optimal composition orderings, such as the equivalence between local and global optimality. Furthermore, by using the characterization above, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem for general linear functions, with respect to the number of decreasing linear functions. We next deal with matrix multiplication orderings as a generalization of composition of linear functions. Given $n$ matrices $M_1,\dots,M_n\in\mathbb{R}^{m\times m}$ and two vectors $w,y\in\mathbb{R}^m$, where $m$ denotes a positive integer, the objective is to find a permutation $\sigma$ that minimizes/maximizes $w^\top M_{\sigma(n)}\dots M_{\sigma(1)} y$. The problem can also be viewed as a generalization of flow shop scheduling through a limiting argument. By this extension, we show that the multiplication ordering problem for $2\times 2$ matrices is solvable in $O(n\log n)$ time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and is FPT with respect to the number of matrices with negative determinants if all the matrices are simultaneously triangularizable. On the negative side, we finally prove that three natural generalizations are NP-hard: 1) when $m=2$, 2) when $m\geq 3$, and 3) the target version of the problem.
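To make the problem statement concrete, the brute-force sketch below (with ad hoc functions chosen by us, and only feasible for small $n$) searches all permutations for the best composition order; the paper's characterization yields an $O(n\log n)$ method for nondecreasing functions, which is not implemented here.

\begin{verbatim}
from itertools import permutations

def compose_value(funcs, order, c):
    """Apply f_{order[0]} first, ..., f_{order[-1]} last, starting from c.
    Each function is a pair (a, b) representing f(x) = a * x + b."""
    x = c
    for i in order:
        a, b = funcs[i]
        x = a * x + b
    return x

def best_ordering(funcs, c, maximize=True):
    """Brute force over all n! orderings; only sensible for small n."""
    best = max if maximize else min
    return best(permutations(range(len(funcs))),
                key=lambda order: compose_value(funcs, order, c))

# Example with ad hoc nondecreasing linear functions f(x) = a x + b (a >= 0).
funcs = [(1.0, 2.0), (2.0, -1.0), (0.5, 3.0)]
order = best_ordering(funcs, c=0.0, maximize=True)
print(order, compose_value(funcs, order, 0.0))
\end{verbatim}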
In the $\ell_p$-subspace sketch problem, we are given an $n\times d$ matrix $A$ with $n>d$, and asked to build a small memory data structure $Q(A,\epsilon)$ so that, for any query vector $x\in\mathbb{R}^d$, we can output a number in $(1\pm\epsilon)\|Ax\|_p^p$ given only $Q(A,\epsilon)$. This problem is known to require $\tilde{\Omega}(d\epsilon^{-2})$ bits of memory for $d=\Omega(\log(1/\epsilon))$. However, for $d=o(\log(1/\epsilon))$, no data structure lower bounds were known. We resolve the memory required to solve the $\ell_p$-subspace sketch problem for any constant $d$ and integer $p$, showing that it is $\Omega(\epsilon^{-2(d-1)/(d+2p)})$ bits and $\tilde{O} (\epsilon^{-2(d-1)/(d+2p)})$ words. This shows that, for any constant $d$, one can beat the $\Omega(\epsilon^{-2})$ lower bound that holds for $d = \Omega(\log(1/\epsilon))$. We also show how to implement the upper bound in a single-pass stream, with an additional multiplicative $\operatorname{poly}(\log \log n)$ factor and an additive $\operatorname{poly}(\log n)$ cost in the memory. Our bounds can be applied to point queries for SVMs with additive error, yielding an optimal bound of $\tilde{\Theta}(\epsilon^{-2d/(d+3)})$ for every constant $d$. This is a near-quadratic improvement over the $\Omega(\epsilon^{-(d+1)/(d+3)})$ lower bound of Andoni et al. (2020). Our techniques rely on a novel connection to low dimensional techniques from geometric functional analysis.
We propose a novel dynamical model for blood alcohol concentration that incorporates $\psi$-Caputo fractional derivatives. Using the generalized Laplace transform technique, we derive an analytic solution for both the alcohol concentration in the stomach and the alcohol concentration in the blood of an individual. These analytical formulas provide us with a straightforward numerical scheme, which demonstrates the efficacy of the $\psi$-Caputo derivative operator in achieving a better fit to real experimental data on blood alcohol levels available in the literature. Our model significantly outperforms existing classical and fractional models from the literature: by employing a simple yet non-standard kernel function $\psi(t)$, we reduce the error by more than half, an improvement of 59 percent.
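For orientation, the classical integer-order two-compartment baseline that such fractional models generalize is $S'(t)=-k_1 S$, $B'(t)=k_1 S-k_2 B$, whose blood-concentration solution is $B(t)=\frac{k_1 S_0}{k_1-k_2}\bigl(e^{-k_2 t}-e^{-k_1 t}\bigr)$. The sketch below fits this classical baseline with scipy; the measurements and initial guesses are placeholders, and the paper's $\psi$-Caputo model is not implemented here.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def blood_concentration(t, k1, k2, s0):
    """Classical two-compartment solution B(t) with B(0) = 0 and S(0) = s0."""
    return k1 * s0 / (k1 - k2) * (np.exp(-k2 * t) - np.exp(-k1 * t))

# Placeholder measurements (hours, mg/100 ml) -- not the data used in the paper.
t_data = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
b_data = np.array([30.0, 68.0, 75.0, 72.0, 59.0, 45.0, 35.0])

params, _ = curve_fit(blood_concentration, t_data, b_data, p0=(2.0, 0.3, 100.0))
k1, k2, s0 = params
print(f"fitted k1={k1:.3f}, k2={k2:.3f}, S0={s0:.1f}")
\end{verbatim}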