We provide a simple online $\Delta(1+o(1))$-edge-coloring algorithm for bipartite graphs of maximum degree $\Delta=\omega(\log n)$ under adversarial vertex arrivals on one side of the graph. Our algorithm slightly improves upon the result of Cohen, Peng and Wajc (FOCS 2019), which was the first, and is currently the only, result to obtain an asymptotically optimal $\Delta(1+o(1))$ guarantee in an adversarial arrival model. More importantly, our algorithm provides a new, simpler approach for tackling online edge coloring.
We introduce a quantum entropy for bimodule quantum channels on finite von Neumann algebras, generalizing the remarkable Pimsner-Popa entropy. The relative entropy for Fourier multipliers of bimodule quantum channels establishes an upper bound on the quantum entropy. Additionally, we present the Araki relative entropy for bimodule quantum channels, revealing its equivalence to the relative entropy for Fourier multipliers and demonstrating its left/right monotonicities and convexity. Notably, the quantum entropy attains its maximum if there is a downward Jones basic construction. By considering the R\'{e}nyi entropy for Fourier multipliers, we find a continuous bridge between the logarithm of the Pimsner-Popa index and the Pimsner-Popa entropy. As a consequence, the R\'{e}nyi entropy at $1/2$ serves as a criterion for the existence of a downward Jones basic construction.
Duan, Wu and Zhou (FOCS 2023) recently obtained an improved upper bound on the exponent of square matrix multiplication, $\omega<2.3719$, by introducing a new approach to quantify and compensate for the ``combination loss'' in prior analyses of powers of the Coppersmith-Winograd tensor. In this paper we show how to use this new approach to improve the exponent of rectangular matrix multiplication as well. Our main technical contribution is showing how to combine this analysis of the combination loss with the analysis of the fourth power of the Coppersmith-Winograd tensor in the context of rectangular matrix multiplication developed by Le Gall and Urrutia (SODA 2018).
Join-preserving maps on the discrete time scale $\omega^+$, referred to as time warps, have been proposed as graded modalities that can be used to quantify the growth of information in the course of program execution. The set of time warps forms a simple distributive involutive residuated lattice -- called the time warp algebra -- that is equipped with residual operations relevant to potential applications. In this paper, we show that although the time warp algebra generates a variety that lacks the finite model property, it nevertheless has a decidable equational theory. We also describe an implementation of a procedure for deciding equations in this algebra, written in the OCaml programming language, that makes use of the Z3 theorem prover.
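For illustration, the following OCaml sketch (ours, and independent of the Z3-based implementation mentioned in the abstract) models a truncated copy of $\omega^+$, builds the composition and pointwise-join operations on maps, and checks on the finite prefix two necessary conditions for a candidate map to be a time warp; the names `cap`, `monotone_on_prefix`, and the example warps are illustrative only.

\begin{verbatim}
(* A toy finite model of the time scale omega^+ = {0, 1, 2, ...} with a top
   element "infinity", truncated at a small cap.  Illustrative sketch only;
   this is not the decision procedure or implementation from the paper. *)

type point = Fin of int | Inf

let cap = 32                         (* size of the finite prefix we inspect *)

let leq a b = match a, b with
  | _, Inf -> true
  | Inf, Fin _ -> false
  | Fin x, Fin y -> x <= y

let join a b = if leq a b then b else a

(* Two operations of the time warp algebra: composition and pointwise join. *)
let compose f g x = f (g x)
let pointwise_join f g x = join (f x) (g x)

(* Necessary conditions for join-preservation, checked on the finite prefix:
   monotonicity, and preservation of the bottom element 0 (the empty join). *)
let monotone_on_prefix f =
  let ok = ref (leq (f (Fin cap)) (f Inf)) in
  for x = 0 to cap - 1 do
    if not (leq (f (Fin x)) (f (Fin (x + 1)))) then ok := false
  done;
  !ok

let preserves_bottom f = f (Fin 0) = Fin 0

(* Example candidate time warps: the identity and "run twice as fast". *)
let id_warp x = x
let double = function Fin n -> Fin (2 * n) | Inf -> Inf

let () =
  let check name f =
    Printf.printf "%s: monotone=%b, preserves bottom=%b\n"
      name (monotone_on_prefix f) (preserves_bottom f)
  in
  check "id" id_warp;
  check "double" double;
  check "double o double" (compose double double);
  check "id join double" (pointwise_join id_warp double)
\end{verbatim}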
Secure multiparty computation (MPC) schemes allow two or more parties to jointly compute a function on their private input sets while revealing nothing but the output. Existing state-of-the-art number-theoretic designs face the threat of attacks through quantum algorithms. In this context, we present secure MPC protocols that can withstand quantum attacks. We first present the design and analysis of an information-theoretically secure oblivious linear evaluation (OLE) protocol in the quantum domain, named ${\sf qOLE}$, and show that ${\sf qOLE}$ is safe from external attacks. In addition, our scheme satisfies all the security requirements of a secure OLE. We further utilize ${\sf qOLE}$ as a building block to construct a quantum-safe multiparty private set intersection (MPSI) protocol.
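For context, the standard OLE functionality (stated here in generic form, not as a description of the specific ${\sf qOLE}$ construction) takes $a,b\in\mathbb{F}$ from the sender and $x\in\mathbb{F}$ from the receiver, and delivers the evaluation only to the receiver:
$$\mathcal{F}_{\mathrm{OLE}}\big((a,b),\,x\big) \;=\; \big(\perp,\; a x + b\big),$$
so the sender learns nothing about $x$ and the receiver learns nothing about $(a,b)$ beyond $a x + b$.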
Given a finite point set $P$ in ${\mathbb R}^d$ and $\epsilon>0$, we say that $N\subseteq{\mathbb R}^d$ is a weak $\epsilon$-net if it pierces every convex set $K$ with $|K\cap P|\geq \epsilon |P|$. We show that for any finite point set in dimension $d\geq 3$ and any $\epsilon>0$, one can construct a weak $\epsilon$-net whose cardinality is $\displaystyle O^*\left(\frac{1}{\epsilon^{2.558}}\right)$ in dimension $d=3$, and $\displaystyle o\left(\frac{1}{\epsilon^{d-1/2}}\right)$ in all dimensions $d\geq 4$. To be precise, our weak $\epsilon$-net has cardinality $\displaystyle O\left(\frac{1}{\epsilon^{\alpha_d+\gamma}}\right)$ for any $\gamma>0$, with $$ \alpha_d= \left\{ \begin{array}{ll} 2.558 & \text{if} \ d=3, \\ 3.48 & \text{if} \ d=4, \\ \left(d+\sqrt{d^2-2d}\right)/2 & \text{if} \ d\geq 5. \end{array}\right. $$ This is the first significant improvement of the bound of $\displaystyle \tilde{O}\left(\frac{1}{\epsilon^d}\right)$ obtained in 1993 by Chazelle, Edelsbrunner, Grigni, Guibas, Sharir, and Welzl for general point sets in dimension $d\geq 3$.
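For example, in dimension $d=5$ the exponent is $\alpha_5=\left(5+\sqrt{15}\right)/2\approx 4.44$, which is indeed below the $d-1/2=4.5$ appearing in the general $o\!\left(1/\epsilon^{d-1/2}\right)$ bound above.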
The target stationary distribution problem (TSDP) is the following: given an irreducible stochastic matrix $G$ and a target stationary distribution $\hat \mu$, construct a minimum norm perturbation, $\Delta$, such that $\hat G = G+\Delta$ is also stochastic and has the prescribed target stationary distribution, $\hat \mu$. In this paper, we revisit the TSDP under a constraint on the support of $\Delta$, that is, on the set of non-zero entries of $\Delta$. This is particularly meaningful in practice since one cannot typically modify all entries of $G$. We first show how to construct a feasible solution $\hat G$ that has essentially the same support as the matrix $G$. Then we show how to compute globally optimal and sparse solutions using the component-wise $\ell_1$ norm and linear optimization. We propose an efficient implementation that relies on a column-generation approach which allows us to solve sparse problems of size up to $10^5 \times 10^5$ in a few minutes. We illustrate the proposed algorithms with several numerical experiments.
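Concretely (a sketch of the resulting optimization model; the notation $\Omega$ for the allowed support is ours, and the irreducibility of $\hat G$ needed for uniqueness of the stationary distribution is omitted), the support-constrained TSDP with the component-wise $\ell_1$ norm reads
$$\min_{\Delta}\ \sum_{i,j}|\Delta_{ij}| \quad \text{s.t.}\quad \hat\mu^\top (G+\Delta)=\hat\mu^\top,\quad (G+\Delta)\mathbf{1}=\mathbf{1},\quad G+\Delta\ge 0,\quad \Delta_{ij}=0 \ \text{for } (i,j)\notin\Omega,$$
which becomes a linear program after splitting $\Delta$ into its positive and negative parts.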
Robins et al. (2008, 2017) applied the theory of higher order influence functions (HOIFs) to derive an estimator of the mean $\psi$ of an outcome Y in a missing data model with Y missing at random conditional on a vector X of continuous covariates; their estimator, in contrast to previous estimators, is semiparametric efficient under the minimal conditions of Robins et al. (2009b), together with an additional (non-minimal) smoothness condition on the density g of X, because the Robins et al. (2008, 2017) estimator depends on a nonparametric estimate of g. In this paper, we introduce a new HOIF estimator that has the same asymptotic properties as the original one but does not impose any smoothness requirement on g. This is important for two reasons. First, one rarely has knowledge about the smoothness properties of g. Second, even when g is smooth, if the dimension of X is even moderately large, accurate nonparametric estimation of its density is not feasible at the sample sizes often encountered in applications. In fact, to the best of our knowledge, this new HOIF estimator remains the only semiparametric efficient estimator of $\psi$ under minimal conditions, despite the rapidly growing literature on causal effect estimation. We also show that our estimator can be generalized to the entire class of functionals considered by Robins et al. (2008), which includes the average effect of a treatment on a response Y when a vector X suffices to control confounding, and the expected conditional variance of a response Y given a vector X. Simulation experiments demonstrate that our new estimator outperforms those of Robins et al. (2008, 2017) in finite samples when g is not very smooth.
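For context (our notation: $R$ denotes the missingness indicator, and positivity of $P(R=1\mid X)$ is assumed), the target functional in this missing-data model is identified under missing at random by the standard identities
$$\psi \;=\; E[Y] \;=\; E\big[\,E[Y \mid X,\, R=1]\,\big] \;=\; E\!\left[\frac{R\,Y}{P(R=1\mid X)}\right],$$
and it is the efficient estimation of this functional under minimal smoothness conditions that is at issue here.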
In the near term, quantum approximate optimization algorithms (QAOAs) hold great potential for solving combinatorial optimization problems. These are hybrid algorithms, i.e., a combination of quantum and classical algorithms. Several proof-of-concept applications of QAOAs to combinatorial problems, such as portfolio optimization, energy optimization in power systems, and job scheduling, have been demonstrated. However, whether QAOAs can efficiently solve optimization problems from classical software engineering, such as test optimization, remains unstudied. To this end, we present the first effort to formulate a software test case optimization problem as a QAOA problem and solve it on quantum computer simulators. To solve bigger test optimization problems that require many qubits, which are currently unavailable, we integrate a problem decomposition strategy with the QAOA. We performed an empirical evaluation with five test case optimization problems and four industrial datasets from ABB, Google, and Orona to compare various configurations of our approach, assess its decomposition strategy for handling large datasets, and compare its performance with classical algorithms (i.e., Genetic Algorithm (GA) and Random Search). Based on the evaluation results, we recommend the best configuration of our approach for test case optimization problems. We also demonstrate that our strategy can reach the same effectiveness as GA and outperforms GA on two of the five test case optimization problems studied.
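As an illustration of one plausible encoding (ours; the paper's exact formulation may differ), a test case selection instance with binary variables $x_i\in\{0,1\}$ indicating whether test $i$ is selected, execution costs $c_i$, and requirements $r$ each covered by a set $S_r$ of tests can be written as the QUBO
$$\min_{x\in\{0,1\}^n}\ \sum_i c_i x_i \;+\; \lambda \sum_r \Big(1-\sum_{i\in S_r} x_i\Big)^2,$$
where the penalty weight $\lambda>0$ trades execution cost against (exactly-once) coverage; QAOA then approximately minimizes the corresponding Ising Hamiltonian.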
We show an $O(n)$-time reduction from the problem of testing whether a multiset of positive integers can be partitioned into two multisets so that the sum of the integers in each multiset is equal to $n/2$ to the problem of testing whether an $n$-vertex biconnected outerplanar DAG admits an upward planar drawing. This constitutes the first barrier to the existence of efficient algorithms for testing the upward planarity of DAGs with no large triconnected minor. We also show a result in the opposite direction. Suppose that partitioning a multiset of positive integers into two multisets so that the sum of the integers in each multiset is $n/2$ can be solved in $f(n)$ time. Let $G$ be an $n$-vertex biconnected outerplanar DAG and $e$ be an edge incident to the outer face of an outerplanar drawing of $G$. Then it can be tested in $O(f(n))$ time whether $G$ admits an upward planar drawing with $e$ on the outer face.
In this article, we present a construction of a spanner on a set of $n$ points in $\mathbf{R}^d$ that we call a heavy path WSPD spanner. The construction is parameterized by a constant $s > 2$ called the separation ratio. The size of the graph is $O(s^dn)$ and the spanning ratio is at most $1 + 2/s + 2/(s - 1)$. We also show that this graph has a hop spanning ratio of at most $2\lg n + 1$. We present a memoryless local routing algorithm for heavy path WSPD spanners. The routing algorithm requires a vertex $v$ of the graph to store $O(\mathrm{deg}(v)\log n)$ bits of information, where $\mathrm{deg}(v)$ is the degree of $v$. The routing ratio is at most $1 + 4/s + 1/(s - 1)$ and at least $1 + 4/s$ in the worst case. The number of edges on the routing path is bounded by $2\lg n + 1$. We then show that the heavy path WSPD spanner can be constructed in metric spaces of bounded doubling dimension. These metric spaces have been studied in computational geometry as a generalization of Euclidean space. We show that, in a metric space with doubling dimension $\lambda$, the heavy path WSPD spanner has size $O(s^\lambda n)$ where $s$ is the separation ratio. The spanning ratio and hop spanning ratio are the same as in the Euclidean case. Finally, we show that the local routing algorithm works in the bounded doubling dimension case. The vertices require the same amount of storage, but the routing ratio becomes at most $1 + (2 + \frac{\tau}{\tau-1})/s + 1/(s - 1)$ in the worst case, where $\tau \ge 11$ is a constant related to the doubling dimension.
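For concreteness, instantiating the stated bounds at separation ratio $s=10$ gives a spanning ratio of at most $1+2/10+2/9\approx 1.42$ and a routing ratio of at most $1+4/10+1/9\approx 1.51$ in the Euclidean case.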