Multi-party random number generation is a key building block in many practical protocols. While straightforward to solve when all parties are trusted to behave correctly, the problem becomes much more difficult in the presence of faults. In this context, this paper presents RandSolomon, a protocol that allows a network of N processes to produce a common random number that is unpredictable and shared by the non-faulty processes. We provide optimal resilience for partially synchronous systems in which fewer than a third of the participants may behave arbitrarily and, contrary to many solutions, we never require faulty processes to be responsive.
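To make the resilience bound concrete, below is a minimal Python sketch (not the RandSolomon protocol itself): it computes the largest number of arbitrarily faulty processes tolerable when fewer than a third may misbehave. The quorum size N - f that a process could wait for without ever depending on faulty, possibly unresponsive, processes is an assumption based on common practice in such protocols, not something stated in this abstract.

```python
def max_byzantine_faults(n: int) -> int:
    """Largest f with f < n / 3, i.e. the optimal-resilience bound n >= 3f + 1."""
    return (n - 1) // 3

# Illustrative only: with n = 7 processes, up to f = 2 may behave arbitrarily,
# and waiting for n - f = 5 contributions never relies on faulty processes
# being responsive (the quorum size is an assumption, not taken from the abstract).
n = 7
f = max_byzantine_faults(n)
print(f, n - f)  # 2 5
```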
Consider the problem of nonparametric estimation of an unknown $\beta$-H\"older smooth density $p_{XY}$ at a given point, where $X$ and $Y$ are both $d$-dimensional. An infinite sequence of i.i.d.\ samples $(X_i,Y_i)$ is generated according to this distribution, and two terminals, Alice and Bob, observe $(X_i)$ and $(Y_i)$, respectively. They are allowed to exchange $k$ bits, either one-way or interactively, in order for Bob to estimate the unknown density. We show that the minimax mean square risk is of order $\left(\frac{k}{\log k} \right)^{-\frac{2\beta}{d+2\beta}}$ for one-way protocols and $k^{-\frac{2\beta}{d+2\beta}}$ for interactive protocols. The logarithmic improvement is absent from the parametric counterparts of the problem and can therefore be regarded as a consequence of its nonparametric nature. Moreover, a few rounds of interaction suffice to achieve the interactive minimax rate: the number of rounds can grow as slowly as the super-logarithm (i.e., inverse tetration) of $k$. The proof of the upper bound is based on a novel multi-round scheme for estimating the joint distribution of a pair of biased Bernoulli variables.
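To give a feel for how slowly the number of rounds may grow, here is a small Python sketch of the base-2 super-logarithm (inverse tetration), counted as the number of times the logarithm must be iterated before the value drops to at most 1; the base and this particular convention are illustrative choices, not taken from the paper.

```python
import math

def super_log2(k: float) -> int:
    """Base-2 super-logarithm (inverse tetration): how many times log2
    must be applied to k before the result is at most 1."""
    rounds = 0
    while k > 1:
        k = math.log2(k)
        rounds += 1
    return rounds

# Even for astronomically large communication budgets k, only a handful of rounds:
for k in [10**3, 10**9, 10**30, 10**300]:
    print(k, super_log2(k))  # 4, 5, 5, 5
```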
Listing dense subgraphs in large graphs is a key task in a variety of network analysis applications such as community detection. The clique, as the densest model, has been widely investigated. However, in practice, communities rarely form cliques, for various reasons such as data noise. Therefore, the $k$-plex -- a graph in which each vertex is adjacent to all but at most $k$ vertices -- is introduced as a relaxed version of the clique. Often, to better model cohesive communities, an emphasis is placed on connected $k$-plexes with small $k$. In this paper, we continue the line of research on listing all maximal $k$-plexes and all maximal $k$-plexes of prescribed size. Our first contribution is the algorithm \emph{ListPlex}, which lists all maximal $k$-plexes in $O^*(\gamma^D)$ time for each constant $k$, where $\gamma$ is a value related to $k$ but strictly smaller than 2, and $D$ is the degeneracy of the graph, which is far smaller than the number of vertices $n$ in real-world graphs. Compared to the trivial bound of $2^n$, the improvement is significant, and our bound is better than all previously known results. In practice, we further use several techniques to accelerate listing $k$-plexes of a given size, such as structure-based pruning rules, cache-efficient data structures, and parallel techniques. All of these together result in a very practical algorithm. Empirical results show that our approach outperforms state-of-the-art solutions by up to orders of magnitude.
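As a reference for the definition used above, here is a minimal Python sketch of the $k$-plex membership test (not the ListPlex algorithm): a vertex set S is a $k$-plex if every vertex of S has at least |S| - k neighbours inside S, so the vertex itself counts among the at most k missed vertices, which is the usual convention under which a clique is a 1-plex.

```python
def is_k_plex(adj: dict, S: set, k: int) -> bool:
    """S is a k-plex if every v in S has at least |S| - k neighbours in S."""
    return all(len(adj[v] & S) >= len(S) - k for v in S)

# Toy graph: a 4-cycle is a 2-plex but not a 1-plex (i.e., not a clique).
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
S = {0, 1, 2, 3}
print(is_k_plex(adj, S, 1), is_k_plex(adj, S, 2))  # False True
```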
A deterministic pathogen transmission model based on high-fidelity physics has been developed. The model combines computational fluid dynamics and computational crowd dynamics in order to provide accurate tracing of viral matter that is exhaled, transmitted and inhaled via aerosols. The examples shown indicate that, even with modest computing resources, the propagation and transmission of viral matter can be simulated for relatively large areas spanning thousands of square meters, with hundreds of pedestrians and several minutes of physical time. The results obtained and the insights gained from these simulations can be used to inform global pandemic propagation models, substantially increasing their accuracy.
We study the online scheduling problem in which machines need to be calibrated before processing any jobs. Calibrating a machine takes $\lambda$ time steps (the activation time), after which the machine remains in calibrated status for $T$ time steps. A job can only be processed on a machine that is in calibrated status. Given a set of jobs arriving online, each job is characterized by a release time, a processing time, and a deadline. We assume that an infinite number of machines is available. The objective is to minimize the total number of calibrations while feasibly scheduling all jobs. For the case where all jobs have unit processing times, we propose an $\mathcal{O}(\lambda)$-competitive algorithm, which is asymptotically optimal. When $\lambda=0$, the problem reduces to rent minimization, for which our algorithm achieves a competitive ratio of $3e+7 (\approx 15.16)$, improving upon the previous results for such problems.
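The Python sketch below only illustrates the scheduling model from the abstract, not the competitive algorithm: a calibration launched at time s occupies the activation slots [s, s + lambda), and the machine is then calibrated for the next T steps. The exact slot and deadline conventions (a unit job occupies one slot t with release <= t and t + 1 <= deadline) are assumptions made for illustration.

```python
from dataclasses import dataclass

LAMBDA = 2  # activation time (lambda time steps)
T = 5       # length of the calibrated window

@dataclass
class Calibration:
    start: int  # time at which the calibration is launched

    def calibrated_slots(self) -> range:
        # activation occupies [start, start + LAMBDA); the machine is then
        # in calibrated status for the next T time steps
        return range(self.start + LAMBDA, self.start + LAMBDA + T)

def fits(release: int, deadline: int, cal: Calibration) -> bool:
    """A unit job fits if some calibrated slot t satisfies
    release <= t and t + 1 <= deadline (assumed convention)."""
    return any(release <= t and t + 1 <= deadline
               for t in cal.calibrated_slots())

cal = Calibration(start=0)                 # calibrated during [2, 7)
print(fits(3, 6, cal), fits(8, 10, cal))   # True False
```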
The generalized coloring numbers of Kierstead and Yang offer an algorithmically useful characterization of graph classes with bounded expansion. In this work, we consider the hardness and approximability of these parameters. First, we complete the work of Grohe et al. by showing that computing the weak 2-coloring number is NP-hard. Our approach further establishes that determining the weak $r$-coloring number is APX-hard for all $r \geq 2$. We adapt this to the $r$-coloring number as well, proving APX-hardness for all $r \geq 2$. Our reductions also imply that for every fixed $r \geq 2$, no XP algorithm (runtime $O(n^{f(k)})$) exists for testing if either generalized coloring number is at most $k$. Finally, we give an approximation algorithm for the $r$-coloring number which improves both the runtime and approximation factor of the existing approach of Dvo\v{r}\'{a}k. Our algorithm greedily orders vertices with small enough $\ell$-reach for every $\ell \leq r$ and achieves an $O(C_{r-1} k^{r-1})$-approximation, where $C_i$ is the $i$th Catalan number.
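For readers unfamiliar with the parameter, the Python sketch below brute-forces the weak r-colouring number of a tiny graph directly from the textbook Kierstead--Yang definition (a vertex u is weakly r-reachable from v under an ordering if some path of length at most r from v to u uses only vertices ordered no earlier than u); it is exponential in the number of vertices and is unrelated to the approximation algorithm of the paper.

```python
from itertools import permutations
from collections import deque

def wcol(adj: dict, r: int) -> int:
    """Weak r-colouring number by brute force over all vertex orderings
    (feasible only for tiny graphs)."""
    best = len(adj)
    for order in permutations(adj):
        pos = {v: i for i, v in enumerate(order)}
        reach_count = {v: 0 for v in adj}
        for u in adj:
            # BFS from u restricted to vertices ordered no earlier than u:
            # every v found within distance r weakly r-reaches u.
            allowed = {w for w in adj if pos[w] >= pos[u]}
            dist = {u: 0}
            queue = deque([u])
            while queue:
                x = queue.popleft()
                if dist[x] == r:
                    continue
                for y in adj[x]:
                    if y in allowed and y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
            for v in dist:
                reach_count[v] += 1
        best = min(best, max(reach_count.values()))
    return best

# Path on four vertices: its weak 2-colouring number is 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wcol(path, 2))  # 3
```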
This paper studies the problem of recovering the hidden vertex correspondence between two edge-correlated random graphs. We focus on the Gaussian model, where the two graphs are complete graphs with correlated Gaussian weights, and the Erd\H{o}s-R\'enyi model, where the two graphs are subsampled from a common parent Erd\H{o}s-R\'enyi graph $\mathcal{G}(n,p)$. For dense graphs with $p=n^{-o(1)}$, we prove that there exists a sharp threshold, above which one can correctly match all but a vanishing fraction of vertices and below which correctly matching any positive fraction is impossible, a phenomenon known as the "all-or-nothing" phase transition. Even more strikingly, in the Gaussian setting, above the threshold all vertices can be exactly matched with high probability. In contrast, for sparse Erd\H{o}s-R\'enyi graphs with $p=n^{-\Theta(1)}$, we show that the all-or-nothing phenomenon no longer holds, and we determine the thresholds up to a constant factor. Along the way, we also derive the sharp threshold for exact recovery, sharpening existing results for Erd\H{o}s-R\'enyi graphs. The proof of the negative results builds upon a tight characterization of the mutual information based on a truncated second-moment computation and an "area theorem" that relates the mutual information to the integral of the reconstruction error. The positive results follow from a tight analysis of the maximum likelihood estimator that takes into account the cycle structure of the induced permutation on the edges.
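The sketch below only sets up the correlated Erd\H{o}s-R\'enyi model described above: two children obtained by independently subsampling the edges of a common parent $\mathcal{G}(n,p)$, with one child relabelled by a hidden permutation. The per-edge subsampling probability s is an illustrative parameter not specified in the abstract.

```python
import numpy as np

def correlated_er_pair(n: int, p: float, s: float, seed: int = 0):
    """Sample a parent G(n, p); keep each parent edge independently with
    probability s in each child; relabel the second child by a hidden
    uniform permutation pi (the correspondence to be recovered)."""
    rng = np.random.default_rng(seed)
    parent = np.triu(rng.random((n, n)) < p, k=1)
    child_a = parent & (rng.random((n, n)) < s)
    child_b = parent & (rng.random((n, n)) < s)
    pi = rng.permutation(n)
    b_relab = np.zeros_like(child_b)
    i, j = np.nonzero(child_b)
    b_relab[pi[i], pi[j]] = True          # edge (i, j) becomes (pi[i], pi[j])
    sym = lambda m: m | m.T               # symmetrise the adjacency matrices
    return sym(child_a), sym(b_relab), pi

A, B, pi = correlated_er_pair(n=200, p=0.1, s=0.8)
print(A.sum() // 2, B.sum() // 2)         # comparable edge counts
```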
Computation of confidence sets is central to data science and machine learning, serving as the workhorse of A/B testing and underpinning the operation and analysis of reinforcement learning algorithms. This paper studies the geometry of the minimum-volume confidence sets for the multinomial parameter. When these sets are used in place of more standard confidence sets and intervals based on bounds and asymptotic approximations, learning algorithms can exhibit improved sample complexity. Prior work showed that the minimum-volume confidence sets are the level sets of a discontinuous function defined by an exact p-value. While these confidence sets are optimal in that they have minimum average volume, testing the membership of even a single point in the set is challenging for problems of modest size. Since the confidence sets are level sets of discontinuous functions, little is apparent about their geometry. This paper studies the geometry of the minimum-volume confidence sets by enumerating and covering the continuous regions of the exact p-value function. This addresses a fundamental question in A/B testing: given two multinomial outcomes, how can one determine whether their corresponding minimum-volume confidence sets are disjoint? We answer this question in a restricted setting.
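The abstract does not spell out the exact p-value construction; the Python sketch below uses one standard notion (the probability, under theta, of seeing an outcome no more likely than the observed one), computed by brute-force enumeration for a tiny multinomial, only to illustrate why such a function of the observation and parameter is discontinuous and expensive to evaluate at scale. The paper's construction may differ.

```python
from itertools import product
from math import comb, prod

def multinomial_pmf(counts, theta):
    coef, remaining = 1, sum(counts)
    for c in counts:
        coef *= comb(remaining, c)
        remaining -= c
    return coef * prod(t ** c for t, c in zip(theta, counts))

def exact_p_value(x, theta):
    """P_theta( pmf(Y) <= pmf(x) ): one standard exact p-value,
    computed by enumerating all outcomes with the same total count."""
    n, k = sum(x), len(x)
    px = multinomial_pmf(x, theta)
    return sum(multinomial_pmf(y, theta)
               for y in product(range(n + 1), repeat=k)
               if sum(y) == n and multinomial_pmf(y, theta) <= px + 1e-12)

# Tiny example: 3 categories, n = 6 observations, uniform theta.
print(exact_p_value((4, 1, 1), (1/3, 1/3, 1/3)))
```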
This paper addresses the problem of viewpoint estimation of an object in a given image. It presents five key insights that should be taken into consideration when designing a CNN that solves this problem. Based on these insights, the paper proposes a network in which (i) the architecture jointly solves detection, classification, and viewpoint estimation; (ii) new types of data are added and trained on; (iii) a novel loss function, which takes into account both the geometry of the problem and the new types of data, is proposed. Our network improves the state-of-the-art results for this problem by 9.8%.
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
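As a pointer to why a CIR-based method can avoid discretisation error altogether, the Python sketch below samples the textbook CIR transition exactly: the conditional law of the process after a step of length h is a scaled noncentral chi-squared, so no Euler-type approximation is needed. This is the standard CIR process, not the paper's stochastic (stochastic-gradient) variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def cir_exact_step(x, a, b, sigma, h):
    """Exact transition of dX = a(b - X) dt + sigma * sqrt(X) dW over a step h:
    X_{t+h} | X_t = x  ~  c * noncentral_chi2(d, lam)  with
    c = sigma^2 (1 - e^{-a h}) / (4 a),  d = 4 a b / sigma^2,
    lam = x e^{-a h} / c.  No discretisation error is introduced."""
    c = sigma**2 * (1.0 - np.exp(-a * h)) / (4.0 * a)
    d = 4.0 * a * b / sigma**2
    lam = x * np.exp(-a * h) / c
    return c * rng.noncentral_chisquare(d, lam)

# The stationary mean of this CIR process is b, so a long exactly-sampled
# trajectory should average close to it.
x, traj = 1.0, []
for _ in range(20000):
    x = cir_exact_step(x, a=2.0, b=0.5, sigma=0.3, h=0.1)
    traj.append(x)
print(np.mean(traj))  # approximately 0.5
```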
We propose a new method of estimation in topic models that is not a variation of the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the topic matrix A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K), and both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, whereas the competing methods are provided with the correct value in our simulations.