We study the problem of computing the Voronoi diagram of a set of $n^2$ points with $O(\log n)$-bit coordinates in the Euclidean plane in a number of rounds that is substantially sublinear in $n$, in the congested clique model with $n$ nodes. Recently, Jansson et al. have shown that if the points are distributed uniformly at random in a unit square, then their Voronoi diagram within the square can be computed in $O(1)$ rounds with high probability (w.h.p.). We show that if an input set of $n^2$ points with $O(\log n)$-bit coordinates in the unit square satisfies a very weak smoothness condition, then the Voronoi diagram of the point set within the unit square can be computed in $O(\log n)$ rounds in this model.
The Heilbronn triangle problem asks for a placement of $n$ points in a unit square that maximizes the smallest area of a triangle formed by any three of those points. In $1972$, Schmidt considered a natural generalization of this problem: he asked for a placement of $n$ points in a unit square that maximizes the smallest area of the convex hull formed by any four of those points. He showed a lower bound of $\Omega(n^{-3/2})$, which was improved to $\Omega(n^{-3/2}\log{n})$ by Lefmann. A trivial upper bound of $3/n$ is easy to obtain, and Schmidt asked whether it could be improved asymptotically. However, despite several efforts, no asymptotic improvement over the trivial upper bound has been found in the last $50$ years, and the problem has gained a reputation for being notoriously hard. Szemer{\'e}di posed the question of whether one can at least improve the constant in this trivial upper bound. In this work, we answer this question by proving an upper bound of $2/n+o(1/n)$. We also extend our results to convex hulls formed by any $k\geq 4$ points.
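To make the quantity being bounded concrete, the following sketch evaluates, for a given placement of $n$ points in the unit square, the smallest convex-hull area over all $4$-point subsets by brute force. It is an $O(n^4)$ illustration of the definition only, not the construction or proof technique of the paper; all function names are ours.
\begin{verbatim}
# Brute-force evaluation of Schmidt's quantity: the smallest convex-hull area
# over all 4-point subsets of a point set in the unit square (illustration only).
import itertools
import random

def hull_area(points):
    """Area of the convex hull of a small point set (Andrew's monotone chain + shoelace)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    # Shoelace formula on the hull vertices.
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

def min_k_hull_area(points, k=4):
    """Smallest convex-hull area over all k-point subsets."""
    return min(hull_area(c) for c in itertools.combinations(points, k))

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(12)]
    # Smallest 4-point convex-hull area for one random placement of 12 points.
    print(min_k_hull_area(pts, k=4))
\end{verbatim}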
Differentially private computation often begins with a bound on some $d$-dimensional statistic's $\ell_p$ sensitivity. For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space. Writing down a closed-form description of this optimal norm is often straightforward. However, running the $K$-norm mechanism reduces to uniformly sampling the norm's unit ball; this ball is a $d$-dimensional convex body, so general sampling algorithms can be slow. Turning to concentrated differential privacy, elliptic Gaussian noise offers similar improvement over spherical Gaussian noise. Once the shape of this ellipse is determined, sampling is easy; however, identifying the best such shape may be hard. This paper solves both problems for the simple statistics of sum, count, and vote. For each statistic, we provide a sampler for the optimal $K$-norm mechanism that runs in time $\tilde O(d^2)$ and derive a closed-form expression for the optimal shape of elliptic Gaussian noise. The resulting algorithms all yield meaningful accuracy improvements while remaining fast and simple enough to be practical. More broadly, we suggest that problem-specific sensitivity space analysis may be an overlooked tool for private additive noise.
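For readers unfamiliar with the $K$-norm mechanism, the sketch below shows the generic sampler in the style of Hardt and Talwar: noise with density proportional to $\exp(-\epsilon\|y\|_K/\Delta)$ can be drawn as $r\,z$ with $z$ uniform in the norm's unit ball and $r \sim \mathrm{Gamma}(d+1, \Delta/\epsilon)$, assuming the statistic's sensitivity is bounded by $\Delta$ in the chosen norm. The $\ell_\infty$ ball is used only because it is trivial to sample uniformly; the statistic-specific balls for sum, count, and vote derived in the paper, and the optimal elliptic Gaussian shapes, are not reproduced here. Function names are illustrative.
\begin{verbatim}
# Hedged sketch of a generic K-norm mechanism sampler (not the paper's samplers).
import numpy as np

rng = np.random.default_rng(0)

def uniform_linf_ball(d):
    """Uniform sample from the l_infinity unit ball: each coordinate uniform in [-1, 1]."""
    return rng.uniform(-1.0, 1.0, size=d)

def k_norm_mechanism(true_value, sample_unit_ball, sensitivity, eps):
    """Release true_value + r*z with z uniform in the unit K-ball and r ~ Gamma(d+1, Delta/eps)."""
    true_value = np.asarray(true_value, dtype=float)
    d = true_value.size
    z = sample_unit_ball(d)
    r = rng.gamma(shape=d + 1, scale=sensitivity / eps)
    return true_value + r * z

if __name__ == "__main__":
    exact_sum = np.array([130.0, 42.0, 7.0])
    noisy = k_norm_mechanism(exact_sum, uniform_linf_ball, sensitivity=1.0, eps=0.5)
    print(noisy)
\end{verbatim}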
In the study of Hilbert schemes, the integer partition $\lambda$ associated with a Hilbert polynomial helps researchers identify geometric and combinatorial properties of the scheme in question. To aid researchers in extracting such information from a Hilbert polynomial, we describe an efficient algorithm which identifies whether $p(x)\in\mathbb{Q}[x]$ is a Hilbert polynomial and, if so, recovers the integer partition $\lambda$ associated with it.
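A small preliminary step related to this recognition problem, shown below as a hedged sketch (it is only a necessary condition and is not the paper's full algorithm, which also recovers $\lambda$): a polynomial can be a Hilbert polynomial only if it is a numerical polynomial, i.e., integer-valued on the integers, which happens exactly when its expansion $p(x)=\sum_k c_k\binom{x}{k}$ has integer coefficients $c_k=(\Delta^k p)(0)$ computable by finite differences.
\begin{verbatim}
# Check whether p takes integer values on the integers via its binomial-basis
# expansion (necessary condition for being a Hilbert polynomial; illustration only).
from fractions import Fraction

def binomial_basis_coeffs(coeffs):
    """coeffs[i] is the coefficient of x^i of p; returns c_k = (Delta^k p)(0) for k = 0..deg."""
    deg = len(coeffs) - 1
    def p(x):
        return sum(Fraction(c) * Fraction(x) ** i for i, c in enumerate(coeffs))
    values = [p(j) for j in range(deg + 1)]            # p(0), ..., p(deg)
    cs = []
    while values:
        cs.append(values[0])                           # (Delta^k p)(0)
        values = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    return cs

def is_numerical_polynomial(coeffs):
    """True iff p takes an integer value at every integer."""
    return all(c.denominator == 1 for c in binomial_basis_coeffs(coeffs))

if __name__ == "__main__":
    # p(x) = (x^2 + x)/2 = binom(x+1, 2) is integer-valued; x^2/2 is not.
    print(is_numerical_polynomial([0, Fraction(1, 2), Fraction(1, 2)]))  # True
    print(is_numerical_polynomial([0, 0, Fraction(1, 2)]))               # False
\end{verbatim}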
M${}^{\natural}$-concave functions, a.k.a. gross substitute valuation functions, play a fundamental role in many fields, including discrete mathematics and economics. In practice, perfect knowledge of M${}^{\natural}$-concave functions is often unavailable a priori, and we can optimize them only interactively based on some feedback. Motivated by such situations, we study online M${}^{\natural}$-concave function maximization problems, which are interactive versions of the problem studied by Murota and Shioura (1999). For the stochastic bandit setting, we present $O(T^{-1/2})$-simple regret and $O(T^{2/3})$-regret algorithms given $T$ accesses to unbiased noisy value oracles of M${}^{\natural}$-concave functions. A key to proving these results is the robustness of the greedy algorithm to local errors in M${}^{\natural}$-concave function maximization, which is one of our main technical results. While we obtain these positive results for the stochastic setting, another main result of our work is an impossibility result in the adversarial setting. We prove that, even with full-information feedback, no algorithm that runs in polynomial time per round can achieve $O(T^{1-c})$ regret for any constant $c > 0$ unless $\mathsf{P} = \mathsf{NP}$. Our proof is based on a reduction from the matroid intersection problem for three matroids, which appears to be a novel idea in the context of online learning.
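As background for the greedy/local-search principle invoked above, the sketch below illustrates exact (noise-free) maximization of an M${}^{\natural}$-concave set function: a set is globally optimal if and only if no single addition, deletion, or swap improves it, so steepest ascent over these moves suffices. The bandit feedback, noise-robustness analysis, and regret machinery of the paper are not reproduced; the toy objective and all names are ours.
\begin{verbatim}
# Local search over add / drop / swap moves; for M-natural-concave set functions,
# local optimality with respect to these moves implies global optimality.
def local_search_maximize(f, n):
    """Steepest ascent starting from the empty set."""
    S = frozenset()
    improved = True
    while improved:
        improved = False
        best, best_val = S, f(S)
        candidates = [S | {i} for i in range(n) if i not in S]
        candidates += [S - {i} for i in S]
        candidates += [(S - {i}) | {j} for i in S for j in range(n) if j not in S]
        for T in candidates:
            if f(T) > best_val:
                best, best_val, improved = T, f(T), True
        S = best
    return S

if __name__ == "__main__":
    # Toy M-natural-concave objective: sum of the k largest weights in S
    # (a weighted rank function of a uniform matroid) minus a small linear penalty.
    weights = [5.0, 1.0, 3.0, 8.0, 2.0]
    k = 2
    f = lambda S: sum(sorted((weights[i] for i in S), reverse=True)[:k]) - 0.5 * len(S)
    print(sorted(local_search_maximize(f, len(weights))))  # expect [0, 3], the two largest weights
\end{verbatim}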
We present a method for computing actions of the exponential-like $\varphi$-functions for a Kronecker sum $K$ of $d$ arbitrary matrices $A_\mu$. It is based on approximating the integral representation of the $\varphi$-functions by Gaussian quadrature formulas, combined with a scaling and squaring technique. The resulting algorithm, which we call PHIKS, evaluates the required actions by means of $\mu$-mode products involving exponentials of the small-sized matrices $A_\mu$, without forming the large-sized matrix $K$ itself. PHIKS, which benefits from highly efficient level 3 BLAS operations, is designed to compute different $\varphi$-functions applied to the same vector, or a linear combination of actions of $\varphi$-functions applied to different vectors. In addition, thanks to the underlying scaling and squaring technique, the desired quantities are available simultaneously at suitable time scales. All these features allow the effective use of PHIKS in the context of exponential integration. In fact, our newly designed method has been tested on popular exponential Runge--Kutta integrators of stiff order one to four, in comparison with state-of-the-art algorithms for computing actions of $\varphi$-functions. Numerical experiments with discretized semilinear evolutionary 2D and 3D advection--diffusion--reaction, Allen--Cahn, and Brusselator equations show the superiority of the proposed $\mu$-mode approach.
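The two ingredients named above can be illustrated, for $d=2$ and $\varphi_1(z)=\int_0^1 e^{(1-s)z}\,ds$, by the hedged sketch below: the exponential of a Kronecker sum factors into the small exponentials, so each Gaussian quadrature term can be applied in $\mu$-mode (matrix-matrix) form without forming $K$. The scaling-and-squaring machinery and the simultaneous time scales of PHIKS are not reproduced here, and the function name is ours.
\begin{verbatim}
# phi_1 of a Kronecker sum applied to vec(V), via Gauss-Legendre quadrature and
# mu-mode products with the small matrix exponentials (d = 2 illustration only).
import numpy as np
from scipy.linalg import expm

def phi1_kronsum_action(A, B, V, nodes=12):
    """Approximate phi_1(A (+) B) vec(V) by quadrature, returned in matrix form."""
    x, w = np.polynomial.legendre.leggauss(nodes)      # nodes/weights on [-1, 1]
    s, w = 0.5 * (x + 1.0), 0.5 * w                    # mapped to [0, 1]
    out = np.zeros_like(V, dtype=float)
    for si, wi in zip(s, w):
        # exp((1 - si) K) vec(V) == vec( exp((1 - si) A) V exp((1 - si) B)^T )
        out += wi * expm((1.0 - si) * A) @ V @ expm((1.0 - si) * B).T
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 5, 4
    A, B = 0.3 * rng.standard_normal((m, m)), 0.3 * rng.standard_normal((n, n))
    V = rng.standard_normal((m, n))
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)  # large matrix, formed only for checking
    ref = np.linalg.solve(K, (expm(K) - np.eye(m * n)) @ V.ravel())  # phi_1(K) v = K^{-1}(e^K - I) v
    approx = phi1_kronsum_action(A, B, V).ravel()
    print(np.max(np.abs(approx - ref)))                # should be small for these modest-norm matrices
\end{verbatim}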
We show that computing the total variation distance between two product distributions is $\#\mathsf{P}$-complete. This is in stark contrast with other distance measures, such as the Kullback--Leibler, Chi-square, and Hellinger divergences, which tensorize over the marginals, leading to efficient algorithms.
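The contrast can be made concrete for products of Bernoulli distributions, as in the hedged sketch below: KL divergence tensorizes (the product's KL is the sum of the coordinate KLs), whereas the total variation distance is defined over the $2^n$-element joint support, which the brute force enumerates. This only illustrates the definitions and says nothing about the $\#\mathsf{P}$-hardness proof.
\begin{verbatim}
# KL tensorizes over the marginals; TV is computed here by enumerating 2^n outcomes.
import itertools
import math

def kl_bernoulli(p, q):
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_product(ps, qs):
    return sum(kl_bernoulli(p, q) for p, q in zip(ps, qs))   # sum of coordinate KLs

def tv_product_bruteforce(ps, qs):
    n = len(ps)
    total = 0.0
    for outcome in itertools.product((0, 1), repeat=n):      # 2^n terms
        pr_p = math.prod(p if b else 1 - p for p, b in zip(ps, outcome))
        pr_q = math.prod(q if b else 1 - q for q, b in zip(qs, outcome))
        total += abs(pr_p - pr_q)
    return 0.5 * total

if __name__ == "__main__":
    ps, qs = [0.1, 0.5, 0.7, 0.4], [0.2, 0.5, 0.6, 0.35]
    print(kl_product(ps, qs), tv_product_bruteforce(ps, qs))
\end{verbatim}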
We present fully abstract encodings of the call-by-name and call-by-value $\lambda$-calculus into HOcore, a minimal higher-order process calculus with no name restriction. We consider several equivalences on the $\lambda$-calculus side -- normal-form bisimilarity, applicative bisimilarity, and contextual equivalence -- that we internalize into abstract machines in order to prove full abstraction of the encodings. We also demonstrate that this technique scales to the $\lambda\mu$-calculus, i.e., a standard extension of the $\lambda$-calculus with control operators.
The problem of computing the $\alpha$-capacity for $\alpha>1$ is equivalent to that of computing the correct decoding exponent. Various algorithms for computing these quantities have been proposed, such as the Arimoto and Jitsumatsu--Oohama algorithms. In this study, we propose a novel alternating optimization algorithm for computing the $\alpha$-capacity for $\alpha>1$ based on a variational characterization of the Augustin--Csisz{\'a}r mutual information. A comparison of the convergence performance of these algorithms is presented through numerical examples.
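For context only, the sketch below shows the classical Blahut--Arimoto alternating optimization for ordinary (Shannon) channel capacity, i.e., the $\alpha\to 1$ case. It is not the proposed algorithm for the $\alpha$-capacity or the Augustin--Csisz{\'a}r mutual information, but it illustrates the shape of the alternating updates that such algorithms generalize; the function name is ours.
\begin{verbatim}
# Classical Blahut-Arimoto iteration for Shannon capacity (context, not the paper's method).
import numpy as np

def blahut_arimoto(W, iters=200):
    """W[x, y] = channel law P(y | x); returns (capacity estimate in nats, input distribution p)."""
    m, _ = W.shape
    p = np.full(m, 1.0 / m)
    for _ in range(iters):
        q_y = p @ W                                      # output distribution p W
        d = np.sum(W * np.log(W / q_y), axis=1)          # D( W(.|x) || pW ) for each input x
        p = p * np.exp(d)                                # alternating update of the input distribution
        p /= p.sum()
    q_y = p @ W
    return float(np.sum(p * np.sum(W * np.log(W / q_y), axis=1))), p

if __name__ == "__main__":
    delta = 0.1                                          # binary symmetric channel BSC(0.1)
    W = np.array([[1 - delta, delta], [delta, 1 - delta]])
    C, p = blahut_arimoto(W)
    print(C / np.log(2), p)                              # about 1 - h(0.1) ~ 0.531 bits, uniform p
\end{verbatim}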
For a fixed positive integer $d \geq 2$, a distance-$d$ independent set (D$d$IS) of a graph is a vertex subset in which any two members are at distance at least $d$. Imagine that a token is placed on each member of a D$d$IS. Two D$d$ISs are adjacent under Token Sliding ($\mathsf{TS}$) if one can be obtained from the other by moving a token from one vertex to one of its unoccupied adjacent vertices. Under Token Jumping ($\mathsf{TJ}$), the target vertex need not be adjacent to the original one. The Distance-$d$ Independent Set Reconfiguration (D$d$ISR) problem under $\mathsf{TS}/\mathsf{TJ}$ asks whether there is a sequence of adjacent D$d$ISs that transforms one given D$d$IS into another. The problem for $d = 2$, also known as the Independent Set Reconfiguration problem, has been well studied in the literature, and its computational complexity is known for several graph classes. In this paper, we study the computational complexity of D$d$ISR on different graphs under $\mathsf{TS}$ and $\mathsf{TJ}$ for any fixed $d \geq 3$. On chordal graphs, we show that D$d$ISR under $\mathsf{TJ}$ is in $\mathtt{P}$ when $d$ is even and $\mathtt{PSPACE}$-complete when $d$ is odd. On split graphs, there is an interesting complexity dichotomy: D$d$ISR is $\mathtt{PSPACE}$-complete for $d = 2$ but in $\mathtt{P}$ for $d=3$ under $\mathsf{TS}$, while under $\mathsf{TJ}$ it is in $\mathtt{P}$ for $d = 2$ but $\mathtt{PSPACE}$-complete for $d = 3$. Additionally, certain well-known hardness results for $d = 2$ on perfect graphs and on planar graphs of maximum degree three and bounded bandwidth can be extended to $d \geq 3$.
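To make the reconfiguration question itself concrete (not the paper's complexity proofs), the hedged brute-force sketch below enumerates all D$d$ISs of a small graph, connects two of them when they differ by a single token move ($\mathsf{TJ}$: any unoccupied vertex; $\mathsf{TS}$: an unoccupied neighbour of the moved token), and runs BFS between the two given sets. It is exponential and purely illustrative; all names are ours.
\begin{verbatim}
# Brute-force D_dIS reconfiguration check under TS or TJ on a small graph.
import itertools
from collections import deque

def apsp(adj):
    """All-pairs shortest-path distances of an unweighted graph, by BFS from every vertex."""
    dist = {}
    for s in adj:
        dist[s] = {s: 0}
        dq = deque([s])
        while dq:
            u = dq.popleft()
            for v in adj[u]:
                if v not in dist[s]:
                    dist[s][v] = dist[s][u] + 1
                    dq.append(v)
    return dist

def is_ddis(S, dist, d):
    return all(dist[u].get(v, float("inf")) >= d for u, v in itertools.combinations(S, 2))

def reconfigurable(adj, source, target, d, rule="TJ"):
    """True iff `source` can be transformed into `target` by single token moves under TJ or TS."""
    dist = apsp(adj)
    k = len(source)
    sets = [frozenset(S) for S in itertools.combinations(adj, k) if is_ddis(S, dist, d)]
    def adjacent(S, T):
        if len(S ^ T) != 2:
            return False
        u, v = tuple(S - T)[0], tuple(T - S)[0]
        return rule == "TJ" or v in adj[u]   # TS additionally requires sliding along an edge
    seen, dq = {frozenset(source)}, deque([frozenset(source)])
    while dq:
        S = dq.popleft()
        if S == frozenset(target):
            return True
        for T in sets:
            if T not in seen and adjacent(S, T):
                seen.add(T)
                dq.append(T)
    return False

if __name__ == "__main__":
    path = {i: {i - 1, i + 1} & set(range(7)) for i in range(7)}   # path 0-1-...-6
    print(reconfigurable(path, {0, 3}, {3, 6}, d=3, rule="TS"))    # True
\end{verbatim}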
This paper introduces a novel class of PICOD($t$) problems referred to as $g$-group complete-$S$ PICOD($t$) problems. It constructs a multi-stage achievability scheme that generates pliable index codes for group complete PICOD problems when $S = \{s\}$ is a singleton set. Using the maximum acyclic induced subgraph bound, lower bounds on the broadcast rate are derived for singleton $S$, establishing the optimality of the achievability scheme for a range of values of $t$ and for any $g$ and $s$. For all other values, the achievability scheme is shown to be optimal among the restricted class of broadcast codes.