We consider the classical Shiryaev--Roberts martingale diffusion, $(R_t)_{t\ge0}$, restricted to the interval $[0,A]$, where $A>0$ is a preset absorbing boundary. We take yet another look at the well-known phenomenon of quasi-stationarity (time-invariant probabilistic behavior, conditional on no absorption hitherto) exhibited by the diffusion in the temporal limit, as $t\to+\infty$, for each $A>0$. We obtain new upper and lower bounds for the quasi-stationary distribution's probability density function (pdf), $q_{A}(x)$; the bounds vary in the trade-off between simplicity and tightness. The bounds directly imply the expected result that $q_{A}(x)$ converges to the pdf, $h(x)$, of the diffusion's stationary distribution, as $A\to+\infty$; the convergence is pointwise, for all $x\ge0$. The bounds also yield an explicit upper bound for the gap between $q_{A}(x)$ and $h(x)$ for fixed $x$. By integration, the bounds for the pdf $q_{A}(x)$ translate into new bounds for the corresponding cumulative distribution function (cdf), $Q_{A}(x)$. All of our results are established explicitly, using certain recently obtained monotonicity properties of the modified Bessel $K$ function involved in the exact closed-form formula for $q_{A}(x)$ derived by Polunchenko (2017). We conclude with a discussion of potential applications of our results in quickest change-point detection: our bounds allow for a very accurate performance analysis of the so-called randomized Shiryaev--Roberts--Pollak change-point detection procedure.
In this paper, we consider the limiting distribution of the maximum interpoint Euclidean distance $M_n=\max _{1 \leq i<j \leq n}\left\|\boldsymbol{X}_i-\boldsymbol{X}_j\right\|$, where $\boldsymbol{X}_1, \boldsymbol{X}_2, \ldots, \boldsymbol{X}_n$ is a random sample from a $p$-dimensional population with dependent sub-Gaussian components. When the dimension tends to infinity with the sample size, we prove that $M_n^2$, under a suitable normalization, asymptotically obeys a Gumbel-type distribution. The proofs mainly rely on the Stein--Chen Poisson approximation method and high-dimensional Gaussian approximation.
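As a purely illustrative sketch (not taken from the paper), the following Python snippet simulates $M_n$ for a Gaussian sample whose components are made dependent via an AR(1)-type covariance; the covariance choice, the sample sizes, and the omission of the paper's exact affine normalization are all assumptions for demonstration only.

```python
# Illustrative sketch only: simulate M_n for a Gaussian sample with dependent
# components.  The AR(1)-type covariance, the sizes n and p, and the absence of
# the paper's exact affine normalization are assumptions for demonstration.
import numpy as np
from scipy.spatial.distance import pdist

def max_interpoint_distance(n=200, p=500, rho=0.3, seed=None):
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # dependent components
    L = np.linalg.cholesky(Sigma)
    X = rng.standard_normal((n, p)) @ L.T                 # rows are X_1, ..., X_n
    return pdist(X).max()                                 # M_n = max_{i<j} ||X_i - X_j||

# Monte Carlo sample of M_n^2; under the paper's normalization such values
# should be approximately Gumbel distributed as n and p grow.
samples = np.array([max_interpoint_distance(seed=s) ** 2 for s in range(50)])
print(samples.mean(), samples.std())
```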
In this paper, we describe an algorithm for approximating functions of the form $f(x) = \langle \sigma(\mu), x^\mu \rangle$ over $[0,1] \subset \mathbb{R}$, where $\sigma(\mu)$ is some distribution supported on $[a,b]$, with $0 <a < b < \infty$. One example from this class of functions is $x^c (\log{x})^m=(-1)^m \langle \delta^{(m)}(\mu-c), x^\mu \rangle$, where $a\leq c \leq b$ and $m \geq 0$ is an integer. Given the desired accuracy $\epsilon$ and the values of $a$ and $b$, our method determines a priori a collection of non-integer powers $t_1$, $t_2$, $\ldots$, $t_N$, so that the functions are approximated by series of the form $f(x)\approx \sum_{j=1}^N c_j x^{t_j}$, and a set of collocation points $x_1$, $x_2$, $\ldots$, $x_N$, such that the expansion coefficients can be found by collocating the function at these points. We prove that our method has a small uniform approximation error, bounded by a small constant multiple of $\epsilon$. We demonstrate the performance of our algorithm with several numerical experiments, and show that the number of singular powers and collocation points grows as $N=O(\log{\frac{1}{\epsilon}})$.
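To make the collocation idea concrete, here is a minimal numerical sketch (not the paper's a priori construction): it approximates $x^c\log x$ by a sum of non-integer powers $x^{t_j}$, with the exponents and collocation points chosen in a simple ad hoc way and the coefficients obtained by least squares.

```python
# Minimal sketch of power-sum collocation: approximate f(x) = x^c (log x)^m on
# (0, 1] by sum_j c_j x^{t_j}.  The exponents t_j, the collocation points x_i,
# and the use of least squares are placeholder choices, not the a priori
# construction described in the paper.
import numpy as np

a, b = 0.5, 2.0
c, m = 1.0, 1                                    # target: f(x) = x^c (log x)^m, a <= c <= b
N = 20
t = np.linspace(a, b, N)                         # candidate non-integer powers in [a, b]
x = np.logspace(-6, 0, N)                        # collocation points clustered near 0

f = lambda x: x**c * np.log(x)**m
V = x[:, None] ** t[None, :]                     # collocation matrix V_{ij} = x_i^{t_j}
coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)  # expansion coefficients c_j

xx = np.logspace(-6, 0, 2000)                    # estimate the uniform error on a fine grid
err = np.max(np.abs(xx[:, None] ** t[None, :] @ coef - f(xx)))
print("max abs error:", err)
```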
Pomset logic and BV are both logics that extend multiplicative linear logic (with Mix) with a third connective that is self-dual and non-commutative. Whereas pomset logic originates from the study of coherence spaces and proof nets, BV originates from the study of series-parallel orders, cographs, and proof systems. Both logics enjoy a cut-admissibility result, but for neither logic can cut elimination be carried out in the sequent calculus. Provability in pomset logic can be checked via a proof net correctness criterion and in BV via a deep inference proof system. It has long been conjectured that these two logics are the same. In this paper we show that this conjecture is false. We also investigate the complexity of the two logics, exhibiting a huge gap between the two. Whereas provability in BV is NP-complete, provability in pomset logic is $\Sigma_2^p$-complete. We also make some observations with respect to possible sequent systems for the two logics.
We consider relational semantics (R-models) for the Lambek calculus extended with intersection and explicit constants for zero and unit. For its variant without constants and with a restriction disallowing empty antecedents, Andreka and Mikulas (1994) prove strong completeness. We show that strong completeness fails without this restriction, but, on the other hand, prove weak completeness for a non-standard interpretation of constants. For the standard interpretation, even weak completeness fails. The weak completeness result extends to an infinitary setting, for so-called iterative divisions (Kleene star under division). We also prove strong completeness results for product-free fragments.
In the Multiagent Path Finding problem (MAPF for short), we focus on efficiently finding non-colliding paths for a set of $k$ agents on a given graph $G$, where each agent seeks a path from its source vertex to a target. An important measure of the quality of the solution is the length of the proposed schedule $\ell$, that is, the length of a longest path (including the waiting time). In this work, we propose a systematic study under the parameterized complexity framework. Our hardness results align with the many heuristics used for this problem, whose running times could potentially be improved in light of our fixed-parameter tractability results. We show that MAPF is W[1]-hard with respect to $k$ (even if $k$ is combined with the maximum degree of the input graph). The problem remains NP-hard in planar graphs even if the maximum degree and the makespan $\ell$ are fixed constants. On the positive side, we show an FPT algorithm for $k+\ell$. As we delve further, the structure of~$G$ comes into play. We give an FPT algorithm for parameter $k$ plus the diameter of the graph~$G$. The MAPF problem is W[1]-hard when parameterized by the cliquewidth of $G$ plus $\ell$, while it is FPT when parameterized by the treewidth of $G$ plus $\ell$.
Given a graph $G$, an integer $k\geq 0$, and a non-negative integral function $f:V(G) \rightarrow \mathbb{N}$, the {\sc Vector Domination} problem asks whether there exists a set $S$ of at most $k$ vertices of $G$ such that every vertex $v \in V(G)-S$ has at least $f(v)$ neighbors in $S$. The problem generalizes several domination problems and it has also been shown to generalize Bounded-Degree Vertex Deletion. In this paper, the parameterized version of Vector Domination is studied when the input graph is planar. A linear problem kernel is presented.
We provide an algorithm that maintains, against an adaptive adversary, a $(1-\varepsilon)$-approximate maximum matching in an $n$-node, $m$-edge general (not necessarily bipartite) undirected graph undergoing edge deletions, with high probability, in (amortized) $O(\mathrm{poly}(\varepsilon^{-1}, \log n))$ time per update. We also obtain the same update time for maintaining a fractional approximate weighted matching (and hence an approximation to the value of the maximum weight matching) and an integral approximate weighted matching in dense graphs. Our unweighted result improves upon the prior state of the art, which includes a $\mathrm{poly}(\log{n}) \cdot 2^{O(1/\varepsilon^2)}$ update time [Assadi-Bernstein-Dudeja 2022] and an $O(\sqrt{m} \varepsilon^{-2})$ update time [Gupta-Peng 2013], and our weighted result improves upon the $O(\sqrt{m}\varepsilon^{-O(1/\varepsilon)}\log{n})$ update time due to [Gupta-Peng 2013]. To obtain our results, we generalize a recent optimization approach to dynamic algorithms from [Jambulapati-Jin-Sidford-Tian 2022]. We show that repeatedly solving entropy-regularized optimization problems yields a lazy updating scheme for fractional decremental problems with a near-optimal number of updates. To apply this framework we develop optimization methods compatible with it and new dynamic rounding algorithms for the matching polytope.
We study the seeded domino problem, the recurring domino problem and the $k$-SAT problem on finitely generated groups. These problems are generalizations of their original versions on $\mathbb{Z}^2$, which were shown to be undecidable using the domino problem. We show that the seeded and recurring domino problems on a group are invariant under changes in the generating set, are many-one reducible from the respective problems on subgroups, and are positive equivalent to the problems on finite index subgroups. This leads to showing that the recurring domino problem is decidable for free groups. In light of these invariance properties, we conjecture that the only groups in which the seeded and recurring domino problems are decidable are virtually free groups. In the case of the $k$-SAT problem, we introduce a new generalization that is compatible with decision problems on finitely generated groups. We show that the subgroup membership problem many-one reduces to the $2$-SAT problem, that in certain cases the $k$-SAT problem many-one reduces to the domino problem, and finally that the domino problem reduces to $3$-SAT for the class of scalable groups.
Digital quantum simulation has broad applications in approximating unitary evolutions of Hamiltonians. In practice, many simulation tasks for quantum systems focus on quantum states in the low-energy subspace instead of the entire Hilbert space. In this paper, we systematically investigate the complexity of digital quantum simulation based on product formulas in the low-energy subspace. We show that the simulation error depends on the effective low-energy norm of the Hamiltonian for a variety of digital quantum simulation algorithms and quantum systems, allowing improvements over the previous complexities for full unitary simulations even for imperfect state preparations. In particular, for simulating spin models in the low-energy subspace, we prove that randomized product formulas such as qDRIFT and random permutation require smaller step complexities. This improvement also persists in symmetry-protected digital quantum simulations. We prove a similar improvement in simulating the dynamics of power-law quantum interactions. We also provide a query lower bound for general digital quantum simulations in the low-energy subspace.
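For orientation, the small numpy sketch below illustrates a plain first-order Trotter (Lie) product formula for a toy two-qubit spin Hamiltonian; it is a generic illustration under assumed operators and parameters, and does not implement the low-energy-subspace analysis, qDRIFT, or random permutation studied in the paper.

```python
# Generic illustration of a first-order product formula (not the paper's
# low-energy analysis): approximate exp(-i(A+B)t) by (exp(-iAt/r) exp(-iBt/r))^r
# for a toy two-qubit spin Hamiltonian H = Z x Z + X x I + I x X.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                          # interaction term
B = np.kron(X, I2) + np.kron(I2, X)        # transverse-field term
t, r = 1.0, 50                             # total time and number of Trotter steps

exact = expm(-1j * (A + B) * t)
step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
trotter = np.linalg.matrix_power(step, r)
print("spectral-norm error:", np.linalg.norm(exact - trotter, 2))
```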
Given a set of $n$ points in the Euclidean plane, the $k$-MinSumRadius problem asks to cover this point set using $k$ disks with the objective of minimizing the sum of the radii of the disks. After a long line of research on related problems, it was finally discovered that this problem admits a polynomial time algorithm [GKKPV~'12]; however, the running time of this algorithm is $O(n^{881})$, and its relevance is thus mostly theoretical. A practically and structurally interesting special case of the $k$-MinSumRadius problem is that of small $k$. For the $2$-MinSumRadius problem, a near-quadratic time algorithm with expected running time $O(n^2 \log^2 n \log^2 \log n)$ was given over 30 years ago [Eppstein~'92]. We present the first improvement of this result, namely, a near-linear time algorithm for $2$-MinSumRadius that runs in expected $O(n \log^2 n \log^2 \log n)$ time. We generalize this result to any constant dimension $d$, for which we give an $O(n^{2-1/(\lceil d/2\rceil + 1) + \varepsilon})$ time algorithm. Additionally, we give a near-quadratic time algorithm for $3$-MinSumRadius in the plane that runs in expected $O(n^2 \log^2 n \log^2 \log n)$ time. All of these algorithms rely on insights that uncover a surprisingly simple structure of optimal solutions: one can specify a linear number of candidate lines, one of which separates one of the clusters from the remaining clusters in an optimal solution.