Given a graph $G$, the number of its vertices is denoted by $n(G)$ and the number of its edges by $m(G)$. An independent set in a graph is a set of vertices no two of which are adjacent, and the size of a maximum independent set is denoted by $\alpha(G)$. A matching is a set of edges no two of which share a vertex, and the size of a maximum matching is denoted by $\mu(G)$. If $\alpha(G) + \mu(G) = n(G)$, then $G$ is called a K\"{o}nig-Egerv\'{a}ry graph. For a graph $G$ with degree sequence $d_1 \leq d_2 \leq \cdots \leq d_n$, the annihilation number $a(G)$ is the largest integer $k$ such that the sum of the first $k$ degrees is at most $m(G)$ (Pepper, 2004). It is well known that $\alpha(G) \leq a(G)$ for every graph $G$. Our goal is to estimate the difference between these two parameters. Specifically, we prove a series of inequalities, including $a(G) - \alpha(G) \leq \frac{\mu(G) - 1}{2}$ for trees, $a(G) - \alpha(G) \leq 2 + \mu(G) - 2\sqrt{1 + \mu(G)}$ for bipartite graphs, and $a(G) - \alpha(G) \leq \mu(G) - 2$ for K\"{o}nig-Egerv\'{a}ry graphs. Furthermore, we demonstrate that these inequalities are tight upper bounds on the difference between the annihilation and independence numbers for every value of $\mu(G)$.
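To make the definition concrete, here is a minimal sketch (our illustration, not the paper's code) that computes the annihilation number directly from a degree sequence:

```python
# A minimal sketch: compute the annihilation number a(G) from a degree
# sequence, following Pepper's definition.

def annihilation_number(degrees):
    """Largest k such that the k smallest degrees sum to at most m(G)."""
    degrees = sorted(degrees)
    m = sum(degrees) // 2          # handshake lemma: degree sum equals 2m(G)
    total, k = 0, 0
    for d in degrees:
        if total + d > m:
            break
        total += d
        k += 1
    return k

# Example: the path P4 has degree sequence (1, 1, 2, 2) and m(P4) = 3,
# so a(P4) = 2 (since 1 + 1 <= 3 but 1 + 1 + 2 > 3); alpha(P4) = 2 as well.
print(annihilation_number([1, 1, 2, 2]))  # -> 2
```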
Distributed maximization of a submodular function in the MapReduce (MR) model has received much attention, culminating in two frameworks that allow a centralized algorithm to be run in the MR setting without loss of approximation, as long as the centralized algorithm satisfies a certain consistency property, which had previously been shown to hold only for the standard greedy and continuous greedy algorithms. A separate line of work has studied the parallelizability of submodular maximization in the adaptive complexity model, where each thread may access the entire ground set. For the size-constrained maximization of a monotone submodular function, we show that several sublinearly adaptive algorithms satisfy the consistency property required to work in the MR setting, which yields highly practical parallelizable and distributed algorithms. In addition, we develop the first linear-time distributed algorithm for this problem with a constant number of MR rounds. Finally, we provide a method to increase the maximum cardinality constraint for MR algorithms at the cost of additional MR rounds.
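For context, here is a minimal sketch of the standard centralized greedy algorithm for size-constrained monotone submodular maximization, the kind of algorithm to which the consistency property applies; the coverage function in the example is our illustrative placeholder:

```python
# A minimal sketch of the standard greedy algorithm for maximizing a
# monotone submodular function f under a cardinality constraint k.

def greedy(ground_set, f, k):
    """Repeatedly add the element with the largest marginal gain."""
    S = []
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S + [e]) - f(S))
        S.append(best)
    return S

# Example with a coverage function (monotone and submodular):
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a"}}
cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy(list(sets), cover, 2))  # -> [1, 3], covering all four items
```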
We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An $(s,z)$-temporal separator is a set of vertices whose removal disconnects vertex $s$ from vertex $z$ for every time step in a temporal graph. The $(s,z)$-Temporal Separator problem asks for the minimum size of an $(s,z)$-temporal separator in a given temporal graph. We introduce a generalization of this problem, the $(s,z,t)$-Temporal Separator problem, where the goal is to find a smallest subset of vertices whose removal eliminates all temporal paths from $s$ to $z$ that take fewer than $t$ time steps. Let $\tau$ denote the number of (discrete) time steps over which the temporal graph is defined. We characterize the values of the parameters $\tau$ and $t$ for which the problem is $\mathcal{NP}$-hard and those for which it is polynomial-time solvable. We then present a $\tau$-approximation algorithm for the $(s,z)$-Temporal Separator problem and convert it into a $\tau^2$-approximation algorithm for the $(s,z,t)$-Temporal Separator problem. We also present an inapproximability lower bound of $\Omega(\ln(n) + \ln(\tau))$ for the $(s,z,t)$-Temporal Separator problem, assuming that $\mathcal{NP}\not\subset\mbox{\sc Dtime}(n^{\log\log n})$. Finally, we consider three special families of graphs: (1) graphs of branchwidth at most $2$, (2) graphs $G$ such that the removal of $s$ and $z$ leaves a tree, and (3) graphs of bounded pathwidth. We present polynomial-time algorithms that find a minimum $(s,z,t)$-temporal separator for families (1) and (2). For (3), we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the $(s,z,t)$-Temporal Separator problem on temporal graphs of bounded pathwidth.
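For illustration, the sketch below checks whether a vertex set is an $(s,z,t)$-temporal separator under one common convention for temporal paths (the paper's exact model may differ in details such as how duration is counted):

```python
# A minimal sketch (one common convention, not necessarily the paper's
# exact model): a temporal path crosses at most one edge per time step,
# with strictly increasing time labels, and its duration is the number
# of time steps it spans.  The check tests whether removing X kills
# every temporal path from s to z of duration less than t.

def is_temporal_separator(edges_at, tau, s, z, t, X):
    """edges_at[i] lists the edges present at time step i (1..tau)."""
    X = set(X)
    if s in X or z in X:
        return True
    for start in range(1, tau + 1):                 # every starting time
        reached = {s}
        for i in range(start, min(tau, start + t - 2) + 1):
            frontier = set()
            for (u, v) in edges_at.get(i, ()):
                if u in X or v in X:
                    continue
                if u in reached:
                    frontier.add(v)
                if v in reached:
                    frontier.add(u)
            reached |= frontier
            if z in reached:
                return False                        # a short path survives
    return True

# (s,a) exists at time 1 and (a,z) at time 2: a path of duration 2.
E = {1: [("s", "a")], 2: [("a", "z")]}
print(is_temporal_separator(E, tau=2, s="s", z="z", t=3, X=set()))   # False
print(is_temporal_separator(E, tau=2, s="s", z="z", t=3, X={"a"}))   # True
```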
Consider a set of points sampled independently near a smooth compact submanifold of Euclidean space. We provide mathematically rigorous bounds on the number of sample points required to estimate both the dimension and the tangent spaces of that manifold with high confidence. The estimation algorithm is Local PCA, a local version of principal component analysis. Our results accommodate noisy, non-uniformly distributed data, with noise that may vary across the manifold, and allow simultaneous estimation at multiple points. Crucially, all of the constants appearing in our bound are explicitly described. The proof uses a matrix concentration inequality to estimate covariance matrices and a Wasserstein distance bound to quantify the nonlinearity of the underlying manifold and the non-uniformity of the probability measure.
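The following numpy sketch illustrates the Local PCA estimator at a single point; the neighborhood radius and eigenvalue-gap threshold are our assumptions for illustration, not the parameters analyzed in the paper:

```python
# A minimal sketch of Local PCA: estimate the dimension and tangent
# space at a point x from the spectrum of a local covariance matrix.
import numpy as np

def local_pca(points, x, r, gap=0.5):
    """Dimension and tangent basis at x from neighbors within radius r."""
    nbrs = points[np.linalg.norm(points - x, axis=1) <= r]
    cov = np.cov(nbrs.T)                       # local covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # estimated dimension: eigenvalues above a fraction of the largest
    d = int(np.sum(vals > gap * vals[0]))
    return d, vecs[:, :d]                      # tangent basis: top-d eigvecs

# Noisy samples from a circle (a 1-manifold) embedded in R^3:
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
pts = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
pts += 0.01 * rng.standard_normal(pts.shape)
d, T = local_pca(pts, pts[0], r=0.2)
print(d)  # -> 1, with T approximating the tangent line
```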
We consider the problem of identifying, from statistics, a distribution of discrete random variables $X_1,\ldots,X_n$ that is a mixture of $k$ product distributions. The best previous sample complexity for $n \in O(k)$ was $(1/\zeta)^{O(k^2 \log k)}$ (under a mild separation assumption parameterized by $\zeta$), and the best known lower bound was $\exp(\Omega(k))$. It is known that $n \geq 2k-1$ is necessary and sufficient for identification. We show, for any $n \geq 2k-1$, how to achieve sample complexity and run-time complexity $(1/\zeta)^{O(k)}$. We also extend the known lower bound of $\exp(\Omega(k))$ to match our upper bound across a broad range of $\zeta$. Our results are obtained by combining (a) a classic method for robust tensor decomposition with (b) a novel way of bounding the condition number of key matrices, called Hadamard extensions, by studying their action only on flattened rank-1 tensors.
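For concreteness, in the binary-alphabet case (our simplifying assumption for this illustration; discrete variables may in general take more values) the model being identified is
\[
\Pr[X_1 = x_1, \ldots, X_n = x_n] \;=\; \sum_{j=1}^{k} \pi_j \prod_{i=1}^{n} p_{ij}^{\,x_i} (1 - p_{ij})^{1 - x_i}, \qquad x \in \{0,1\}^n,
\]
where $\pi_1,\ldots,\pi_k$ are the mixing weights and $p_{ij}$ is the probability that $X_i = 1$ under the $j$-th product distribution.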
Join-preserving maps on the discrete time scale $\omega^+$, referred to as time warps, have been proposed as graded modalities that can be used to quantify the growth of information in the course of program execution. The set of time warps forms a simple distributive involutive residuated lattice -- called the time warp algebra -- that is equipped with residual operations relevant to potential applications. In this paper, we show that although the time warp algebra generates a variety that lacks the finite model property, it nevertheless has a decidable equational theory. We also describe an implementation of a procedure for deciding equations in this algebra, written in the OCaml programming language, that makes use of the Z3 theorem prover.
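As a small illustration (ours, not the paper's OCaml implementation): on a finite prefix of $\omega^+$, a join-preserving map must send $0$, the empty join, to $0$ and must preserve binary joins, i.e., maxima.

```python
# A minimal sketch: check the join-preservation conditions for a
# candidate time warp f on the finite prefix {0, 1, ..., N}.  (This is
# our illustration only; it ignores the top element of omega^+.)

def is_time_warp_prefix(f, N):
    if f(0) != 0:                          # the empty join must be preserved
        return False
    return all(f(max(a, b)) == max(f(a), f(b))   # binary joins preserved
               for a in range(N + 1) for b in range(N + 1))

print(is_time_warp_prefix(lambda x: 2 * x, 50))   # True
print(is_time_warp_prefix(lambda x: x + 1, 50))   # False: f(0) != 0
```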
A Stackelberg Vertex Cover game is played on an undirected graph $\mathcal{G}$ in which some of the vertices are under the control of a \emph{leader}; the remaining vertices are assigned a fixed weight. The game is played in two stages. First, the leader chooses prices for the vertices under her control. Afterward, the second player, called the \emph{follower}, selects a minimum-weight vertex cover in the resulting weighted graph. That is, the follower selects a subset of vertices $C^*$ such that every edge has at least one endpoint in $C^*$, and $C^*$ has minimum total weight with respect to the fixed weights and the prices set by the leader. Stackelberg Vertex Cover (StackVC) is the leader's optimization problem of selecting prices in the first stage of the game so as to maximize her revenue, i.e., the cumulative price of all her (priceable) vertices that are contained in the follower's solution. Previous research showed that StackVC is \textsf{NP}-hard on bipartite graphs, but solvable in polynomial time in the special case of bipartite graphs in which all priceable vertices belong to the same side of the bipartition. In this paper, we investigate StackVC on paths and present a dynamic program with linear time and space complexity.
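The follower's subproblem on a path admits a classic linear-time dynamic program, sketched below for fixed weights (this is our illustration of the building block, not the paper's pricing algorithm):

```python
# A minimal sketch: minimum-weight vertex cover on a path via a linear
# dynamic program over "take v_i" vs. "skip v_i" states.

def min_weight_vertex_cover_path(w):
    """Min-weight vertex cover of the path v_0 - v_1 - ... - v_{n-1}."""
    take, skip = w[0], 0            # best cost with/without v_0 in the cover
    for i in range(1, len(w)):
        # if v_i is skipped, v_{i-1} must be taken to cover edge (i-1, i)
        take, skip = w[i] + min(take, skip), take
    return min(take, skip)

print(min_weight_vertex_cover_path([3, 1, 4, 1, 5]))  # -> 2 (take v_1, v_3)
```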
An introductory exposition of the virtual element method (VEM) is provided. The intent is to make this method more accessible to those unfamiliar with VEM. Familiarity with the finite element method for solving 2D linear elasticity problems is assumed. Derivations relevant to a successful implementation are presented. Some theory is covered, but the focus here is on implementation and results. Examples are given that illustrate the utility of the method. Numerical results are provided to help researchers implement the method and verify their own results.
We propose an original approach to investigating the linearity of Gray codes obtained from $\mathbb{Z}_{2^L}$-additive codes by introducing two related binary codes: the associated code and the concatenated code. Once these are defined, a straightforward analysis of the Schur product between their codewords determines the linearity of the respective Gray code. This work expands on earlier contributions from the literature, where linearity was established with respect to the kernel of a code and/or operations on $\mathbb{Z}_{2^L}$. The $\mathbb{Z}_{2^L}$-additive codes to which we apply the Gray map and whose linearity we check are the well-known Hadamard, simplex, MacDonald, Kerdock, and Preparata codes. We also present a family of Reed-Muller codes that yield linear Gray codes, and we perform a computational verification of our proposed method applied to other $\mathbb{Z}_{2^L}$-additive codes.
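As a small illustration of the objects involved (our example for the case $L = 2$, i.e., $\mathbb{Z}_4$-additive codes, not the paper's verification code), the sketch below combines the classical Gray map with the Schur (componentwise) product and a brute-force linearity check:

```python
# The classical Gray map for Z_4 and a brute-force linearity check.

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray(codeword):                       # Z_4^n -> Z_2^{2n}
    return tuple(bit for q in codeword for bit in GRAY[q])

def schur(u, v):                          # componentwise (Schur) product
    return tuple(a & b for a, b in zip(u, v))

def is_linear(binary_code):
    """A binary code is linear iff it is closed under addition mod 2."""
    C = set(binary_code)
    return all(tuple((a + b) % 2 for a, b in zip(u, v)) in C
               for u in C for v in C)

print(schur((0, 1, 1, 0), (1, 1, 0, 0)))           # -> (0, 1, 0, 0)
C4 = [(0, 0), (2, 2)]                              # a Z_4-additive code
print(is_linear([gray(c) for c in C4]))            # True: {0000, 1111}
```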
We study the existence of optimal and p-optimal proof systems for classes in the Boolean hierarchy over $\mathrm{NP}$. Our main results concern $\mathrm{DP}$, the second level of this hierarchy: if all sets in $\mathrm{DP}$ have p-optimal proof systems, then all sets in $\mathrm{coDP}$ have p-optimal proof systems. The analogous implication for optimal proof systems fails relative to an oracle. As a consequence, we clarify such implications for all classes $\mathcal{C}$ and $\mathcal{D}$ in the Boolean hierarchy over $\mathrm{NP}$: either we can prove the implication or we show that it fails relative to an oracle. Furthermore, we show that the sets $\mathrm{SAT}$ and $\mathrm{TAUT}$ have p-optimal proof systems if and only if all sets in the Boolean hierarchy over $\mathrm{NP}$ have p-optimal proof systems, which yields a new characterization of a conjecture studied by Pudl\'ak.
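For reference, we recall the standard Cook-Reckhow definitions used here: a proof system for a set $L$ is a polynomial-time computable surjective function $f\colon \Sigma^* \to L$, and a string $w$ with $f(w) = x$ is an $f$-proof of $x$. A proof system $f$ p-simulates $g$ if there is a polynomial-time computable $h$ such that $f(h(w)) = g(w)$ for all $w$, and $f$ is p-optimal for $L$ if it p-simulates every proof system for $L$ (for optimality, only the proof lengths, not the translation itself, are required to be polynomially bounded).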
Measuring presence is critical to improving user involvement and performance in Mixed Reality (MR). \emph{Presence}, a crucial aspect of MR, is traditionally gauged using subjective questionnaires, which yield no time-varying responses and are susceptible to user bias. Inspired by the existing literature on the relationship between presence and human performance, the proposed methodology systematically measures a user's reaction time to a visual stimulus as they interact within a manipulated MR environment. We explore reaction time as a quantity that can be easily measured with the standard tools available in modern MR devices. We conducted an exploratory study (N=40) with two experiments designed to alter the users' sense of presence by manipulating \emph{place illusion} and \emph{plausibility illusion}. We found a significant correlation between presence scores and reaction times, with a correlation coefficient of $-0.65$, suggesting that users with a higher sense of presence responded more swiftly to stimuli. We develop a model that estimates a user's presence level from reaction-time values with an accuracy of up to 80\%. While our study suggests that reaction time can be used as a measure of presence, further investigation is needed to improve the accuracy of the model.
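The analysis pipeline can be illustrated with a short sketch (synthetic numbers, not the study's data; the median-threshold predictor is our simplification, not the paper's model):

```python
# A minimal sketch: correlate presence scores with reaction times, then
# predict high/low presence by thresholding reaction time at the median.
import numpy as np
from scipy import stats

presence = np.array([6.5, 5.8, 4.2, 3.9, 6.1, 2.8, 5.0, 3.5])   # survey scores
reaction = np.array([310, 335, 420, 455, 330, 505, 380, 470])   # milliseconds

r, p = stats.pearsonr(presence, reaction)
print(f"r = {r:.2f}, p = {p:.3f}")        # strongly negative, as in the paper

predicted_high = reaction < np.median(reaction)
actual_high = presence > np.median(presence)
print(f"accuracy = {np.mean(predicted_high == actual_high):.0%}")
```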