
We investigate the randomized and quantum communication complexities of the well-studied Equality function with small error probability $\epsilon$, obtaining the optimal constant factors in the leading terms in a number of different models. In the randomized model, 1) we give a general technique to convert public-coin protocols to private-coin protocols by incurring a small multiplicative error, at a small additive cost. This is an improvement over Newman's theorem [Inf. Proc. Let.'91] in the dependence on the error parameter. 2) Using this, we obtain a $(\log(n/\epsilon^2)+4)$-cost private-coin communication protocol that computes the $n$-bit Equality function to error $\epsilon$. This improves upon the $\log(n/\epsilon^3)+O(1)$ upper bound implied by Newman's theorem, and matches the best known lower bound, which follows from Alon [Comb. Prob. Comput.'09], up to an additive $\log\log(1/\epsilon)+O(1)$. In the quantum model, 1) we exhibit a one-way protocol of cost $\log(n/\epsilon)+4$ that uses only pure states and computes the $n$-bit Equality function to error $\epsilon$. This bound was already shown implicitly by Nayak [PhD thesis'99]. 2) We show that any $\epsilon$-error one-way protocol for $n$-bit Equality that uses only pure states communicates at least $\log(n/\epsilon)-\log\log(1/\epsilon)-O(1)$ qubits. 3) We exhibit a one-way protocol of cost $\log(\sqrt{n}/\epsilon)+3$ that uses mixed states and computes the $n$-bit Equality function to error $\epsilon$. This is also tight up to an additive $\log\log(1/\epsilon)+O(1)$, which follows from Alon's result. Our upper bounds also yield upper bounds on the approximate rank and related measures of the Identity matrix. This also implies improved upper bounds on these measures for the distributed SINK function, which was recently used to refute the randomized and quantum versions of the log-rank conjecture.
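For context, here is a minimal Python sketch of the textbook public-coin Equality protocol, not the paper's private-coin construction: Alice and Bob use $t=\lceil\log(1/\epsilon)\rceil$ shared random strings and exchange one parity bit per string. The paper's first result is a general method for converting such public-coin protocols into private-coin ones with a better error dependence than Newman's theorem gives.

```python
import math
import random

def equality_public_coin(x: str, y: str, eps: float) -> bool:
    """Textbook public-coin protocol for n-bit Equality (illustrative).

    Alice and Bob share t = ceil(log2(1/eps)) random n-bit strings.
    Alice sends her t inner-product bits <x, r_i> mod 2; Bob accepts
    iff they all match his own. If x != y, each bit disagrees with
    probability 1/2, so the error probability is at most 2^-t <= eps.
    """
    n = len(x)
    t = math.ceil(math.log2(1 / eps))
    for _ in range(t):
        r = [random.randrange(2) for _ in range(n)]        # shared coins
        a = sum(int(x[i]) & r[i] for i in range(n)) % 2    # Alice's bit
        b = sum(int(y[i]) & r[i] for i in range(n)) % 2    # Bob's bit
        if a != b:
            return False   # certainly x != y
    return True            # wrong with probability <= eps when x != y
```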

Related Content

We consider n robots with limited visibility: each robot can observe other robots only up to a constant distance, called the viewing range. The robots operate in discrete rounds that are either fully synchronous (FSync) or semi-synchronous (SSync). Most previously studied formation problems in this setting seek to bring the robots closer together (e.g., Gathering or Chain-Formation). In this work, we introduce the Max-Line-Formation problem, which has a contrary goal: to arrange the robots on a straight line of maximal length. First, we prove that the problem is impossible to solve by robots with a constant-sized circular viewing range. The impossibility holds under comparatively strong assumptions: robots that agree on both axes of their local coordinate systems, in FSync. On the positive side, we show that the problem is solvable by robots with a constant square viewing range, i.e., the robots can observe other robots that lie within a constant-sized square centered at their position. In this case, the robots need to agree on only one axis of their local coordinate systems. We derive two algorithms: the first algorithm considers oblivious robots and converges to the optimal configuration in time $\mathcal{O}(n^2 \cdot \log (n/\varepsilon))$ under the SSync scheduler. The other algorithm makes use of locally visible lights (LUMI); it is designed for the FSync scheduler and can solve the problem exactly in optimal time $\Theta(n)$. Afterward, we show that both the algorithmic and the analysis techniques can also be applied to the Gathering and Chain-Formation problems: we introduce an algorithm with a reduced viewing range for Gathering and give new and improved runtime bounds for the Chain-Formation problem.
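To make the distinction between the two visibility models concrete, here is a hedged sketch; the coordinate representation and the range parameter r are illustrative assumptions, not the paper's formal model.

```python
def visible_circle(p, q, r):
    """Circular viewing range: q is visible from p iff their Euclidean
    distance is at most r (the model shown impossible in the paper)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r * r

def visible_square(p, q, r):
    """Square viewing range: q lies within the axis-aligned square of
    side 2r centered at p (the model in which the paper's algorithms
    operate)."""
    return abs(p[0] - q[0]) <= r and abs(p[1] - q[1]) <= r
```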

Conventional information-theoretic quantities assume access to probability distributions, which are not trivial to estimate. Here, we consider function-based formulations of cross-entropy that sidestep this a priori estimation requirement. We propose three measures of R\'enyi's $\alpha$-cross-entropies in the setting of reproducing-kernel Hilbert spaces. Each measure has its appeals. We prove that we can estimate these measures in an unbiased, non-parametric, and minimax-optimal way. We do this via sample-constructed Gram matrices. This yields matrix-based estimators of R\'enyi's $\alpha$-cross-entropies. These estimators satisfy all of the axioms that R\'enyi established for divergences. Our cross-entropies can thus be used for assessing distributional differences. They are also appropriate for handling high-dimensional distributions, since the convergence rate of our estimator is independent of the sample dimensionality. Python code implementing these measures can be found at //github.com/isledge/MBRCE
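The sketch below shows the standard matrix-based Rényi $\alpha$-entropy of a normalized Gram matrix, the building block that estimators of this type rest on; the paper's actual cross-entropy measures differ in detail, and the Gaussian kernel and bandwidth here are illustrative assumptions (see the linked repository for the authors' implementation).

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """Gaussian-kernel Gram matrix of samples X (n x d), normalized
    to unit trace so its eigenvalues form a probability vector."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    return K / np.trace(K)

def matrix_renyi_entropy(A, alpha=2.0):
    """Matrix-based Renyi alpha-entropy
    S_alpha(A) = log2(tr(A^alpha)) / (1 - alpha),
    computed from the eigenvalues of the normalized Gram matrix A."""
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]            # discard numerical zeros
    return np.log2(np.sum(lam ** alpha)) / (1 - alpha)
```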

We prove tight H\"olderian error bounds for all $p$-cones. Surprisingly, the exponents differ in several ways from those that had previously been conjectured; moreover, they illuminate $p$-cones as a curious example of a class of objects that possess properties in 3 dimensions that they do not possess in 4 or more. Using our error bounds, we analyse least squares problems with $p$-norm regularization, where our results enable us to compute the corresponding KL exponents for previously inaccessible values of $p$. Another application is a (relatively) simple proof that most $p$-cones are neither self-dual nor homogeneous. Our error bounds are obtained within the framework of facial residual functions, which we expand by establishing, for general cones, an optimality criterion under which the resulting error bound must be tight.
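For reference, the objects involved, stated schematically in LaTeX; the exponent $\gamma$ is deliberately left abstract, since the paper's point is precisely that the true exponents differ from the conjectured ones.

```latex
% The p-cone in R^{n+1}, for p >= 1:
K_p^{n+1} \;=\; \{(t, x) \in \mathbb{R} \times \mathbb{R}^{n} : t \ge \|x\|_p\}.
% A Hoelderian error bound with exponent gamma in (0,1] for the feasible
% set K_p \cap L (L an affine subspace) asserts that, on every bounded
% set B, there is a constant c > 0 such that
\mathrm{dist}(x,\, K_p \cap L) \;\le\; c\,\bigl(\mathrm{dist}(x, K_p) + \mathrm{dist}(x, L)\bigr)^{\gamma}
\quad \text{for all } x \in B.
```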

We study a randomized quadrature algorithm to approximate the integral of periodic functions defined over the high-dimensional unit cube. Recent work by Kritzer, Kuo, Nuyens and Ullrich (2019) shows that rank-1 lattice rules with a randomly chosen number of points and a good generating vector achieve almost the optimal order of the randomized error in weighted Korobov spaces, and moreover, that the error is bounded independently of the dimension if the weight parameters satisfy the summability condition $\sum_{j=1}^{\infty}\gamma_j^{1/\alpha}<\infty$. The argument is based on the existence result that at least half of the possible generating vectors yield almost the optimal order of the worst-case error in the same function spaces. In this paper we provide a component-by-component construction algorithm for such randomized rank-1 lattice rules, without any need to check whether the constructed generating vectors satisfy a desired worst-case error bound. Similarly to the above-mentioned work, we prove that our algorithm achieves almost the optimal order of the randomized error, and that the error bound is independent of the dimension if the same condition $\sum_{j=1}^{\infty}\gamma_j^{1/\alpha}<\infty$ holds. We also provide analogous results for tent-transformed lattice rules in weighted half-period cosine spaces and for polynomial lattice rules in weighted Walsh spaces.
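For orientation, a minimal sketch of evaluating a rank-1 lattice rule once a generating vector $z$ is in hand; the paper's contribution is the component-by-component construction of $z$, combined with a random choice of the number of points $N$.

```python
import numpy as np

def rank1_lattice_rule(f, z, N):
    """Rank-1 lattice rule Q(f) = (1/N) * sum_{k=0}^{N-1} f({k z / N}),
    where {.} is the componentwise fractional part; approximates the
    integral of f over the unit cube [0,1]^d."""
    z = np.asarray(z, dtype=np.int64)
    k = np.arange(N, dtype=np.int64)[:, None]
    points = (k * z[None, :] % N) / N           # N x d lattice points
    return np.mean([f(p) for p in points])
```

A generating vector produced by the paper's component-by-component algorithm would simply be passed in as `z`.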

Throughput is a main performance objective in communication networks. This paper considers a fundamental maximum-throughput routing problem -- the all-or-nothing multicommodity flow (ANF) problem -- in arbitrary directed graphs, in the practically relevant but challenging setting where demands can be (much) larger than the edge capacities. Hence, in addition to assigning requests to valid flows for each routed commodity, an admission control mechanism is required to prevent overloading the network when routing commodities. We make several contributions. On the theoretical side, we obtain substantially improved bi-criteria approximation algorithms for this NP-hard problem. We present two non-trivial linear programming relaxations and show how to convert their fractional solutions into integer solutions via randomized rounding. One is an exponential-size formulation (solvable in polynomial time using a separation oracle) that takes a "packing" view and allows a more flexible approach, while the other is a generalization of the compact LP formulation of Liu et al. (INFOCOM'19) that can easily be solved with standard LP solvers. We obtain a polynomial-time randomized algorithm that yields an arbitrarily good approximation on the weighted throughput while violating the edge capacity constraints by only a small multiplicative factor. We also describe a deterministic rounding algorithm obtained by derandomization, using the method of pessimistic estimators. We complement our theoretical results with a proof-of-concept empirical evaluation.
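A generic randomized-rounding sketch of the admission-control idea, under assumed data structures; the paper's LP relaxations, scaling factor, and pessimistic-estimator derandomization are more involved.

```python
import random

def round_anf(fractional, scale=0.5):
    """Generic randomized rounding for an all-or-nothing flow LP.

    `fractional` maps each commodity i to a list of (path, value)
    pairs whose values sum to x_i in [0, 1]. Each commodity is
    admitted independently with probability scale * x_i; an admitted
    commodity routes its entire demand on a single path sampled
    proportionally to the fractional path values. (Illustrative only.)
    """
    routing = {}
    for i, paths in fractional.items():
        x_i = sum(v for _, v in paths)
        if random.random() < scale * x_i:
            r, acc = random.random() * x_i, 0.0
            for path, v in paths:
                acc += v
                if r <= acc:
                    routing[i] = path
                    break
    return routing
```

The scaling factor `scale` plays the role of the small multiplicative slack in the edge capacity constraints.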

Quantum low-density parity-check (LDPC) codes are an important class of quantum error correcting codes. In such codes, each qubit only affects a constant number of syndrome bits, and each syndrome bit only depends on a constant number of qubits. Constructing quantum LDPC codes is challenging: it is an open problem whether there exist good quantum LDPC codes, i.e. codes with constant rate and constant relative distance. Furthermore, techniques to perform fault-tolerant gates are poorly understood. We present a unified way to address these problems. Our main results are a) a bound on the distance, b) a bound on the code dimension and c) limitations on certain fault-tolerant gates that can be applied to quantum LDPC codes. All three of these bounds are cast as a function of the graph separator of the connectivity graph representation of the quantum code. We find that unless the connectivity graph contains an expander, the code is severely limited. This implies a necessary, but not sufficient, condition for constructing good codes. This is the first bound on the limitations of quantum LDPC codes that does not rely on locality. As an application, we present novel bounds on quantum LDPC codes associated with local graphs in $D$-dimensional hyperbolic space.
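One plausible reading of the connectivity graph the abstract refers to, as a hedged sketch (the paper's precise definition may differ): vertices are qubits, with an edge whenever two qubits occur in a common check.

```python
def connectivity_graph(checks):
    """Connectivity graph of a quantum code (one plausible reading):
    vertices are qubits, and two qubits are adjacent whenever they
    participate in a common syndrome check. `checks` is a list of
    qubit-index lists, one per syndrome bit; in an LDPC code both the
    check sizes and the qubit degrees are O(1)."""
    edges = set()
    for check in checks:
        for i, a in enumerate(check):
            for b in check[i + 1:]:
                edges.add((min(a, b), max(a, b)))
    return edges
```

The abstract's bounds are phrased in terms of separators of this graph: if it splits along small separators (i.e. contains no expander), distance, dimension, and fault-tolerant gates are all severely constrained.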

We consider the power of local algorithms for approximately solving Max $k$XOR, a generalization of two constraint satisfaction problems previously studied with classical and quantum algorithms (MaxCut and Max E3LIN2). On instances with either random signs or no overlapping clauses and $D+1$ clauses per variable, we calculate the average satisfying fraction of the depth-1 QAOA and compare with a generalization of the local threshold algorithm. Notably, the quantum algorithm outperforms the threshold algorithm for $k > 4$. On the other hand, we highlight potential difficulties for the QAOA to achieve computational quantum advantage on this problem. We first compute a tight upper bound on the maximum satisfying fraction of nearly all large random regular Max $k$XOR instances by numerically calculating the ground state energy density $P(k)$ of a mean-field $k$-spin glass. The upper bound grows with $k$ much faster than the performance of both one-local algorithms. We also identify a new obstruction result for low-depth quantum circuits (including the QAOA) when $k=3$, generalizing a result of Bravyi et al. [arXiv:1910.08980] when $k=2$. We conjecture that a similar obstruction exists for all $k$.
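A small sketch of the quantity being optimized, under an assumed clause encoding: each Max $k$XOR clause asks that the product of $k$ $\pm 1$-valued variables equal a given sign, which is the XOR constraint in $\pm 1$ form.

```python
from math import prod

def satisfying_fraction(clauses, assignment):
    """Fraction of Max kXOR clauses satisfied by a +/-1 assignment.

    Each clause is a pair (variables, sign) with sign in {+1, -1};
    it is satisfied when the product of the assigned +/-1 values of
    its k variables equals the sign.
    """
    satisfied = sum(
        1 for variables, sign in clauses
        if sign == prod(assignment[v] for v in variables)
    )
    return satisfied / len(clauses)
```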

We establish error bounds for fourth-order compact finite difference (4cFD) methods applied to the Dirac equation in the massless and nonrelativistic regime, which involves a small dimensionless parameter $0 < \varepsilon \le 1$ inversely proportional to the speed of light. In this regime, the solution propagates waves with wavelength $O(\varepsilon)$ in time and $O(1)$ in space, as well as rapid outgoing waves with wave speed $O(1/\varepsilon)$. We adapt the conservative and semi-implicit 4cFD methods to discretize the Dirac equation and rigorously establish their error bounds, which depend explicitly on the mesh size $h$, the time step $\tau$ and the small parameter $\varepsilon$. Based on the error bounds, the $\varepsilon$-scalability of the 4cFD methods is $h = O(\varepsilon^{1/4})$ and $\tau = O(\varepsilon^{3/2})$, which not only improves the spatial resolution capacity but also offers better accuracy than classical second-order finite difference methods. Furthermore, the same conclusions hold for physical observables, including the total density and the current density. Numerical results are provided to validate the error bounds, and the dynamics of the Dirac equation with different potentials in 2D are presented.
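For orientation, the standard fourth-order compact approximation of the second derivative, which schemes of this type build on; this is schematic, and the paper's conservative and semi-implicit 4cFD discretizations of the Dirac equation are more involved.

```latex
% Central second difference on a uniform grid with mesh size h:
\delta_x^2 u_j \;=\; \frac{u_{j+1} - 2u_j + u_{j-1}}{h^2}.
% Standard fourth-order compact (4c) approximation of the second
% derivative, obtained by inverting a tridiagonal correction:
\partial_{xx} u(x_j) \;=\; \Bigl(1 + \tfrac{h^2}{12}\,\delta_x^2\Bigr)^{-1} \delta_x^2\, u_j \;+\; O(h^4).
```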

This paper studies distributed binary testing of statistical independence under communication (information bits) constraints. While testing independence is relevant in various applications, distributed independence testing is particularly useful for event detection in sensor networks, where data correlation often occurs among the observations of devices in the presence of a signal of interest. Focusing on the case of two devices for tractability, we begin by investigating conditions on the Type I error probability restriction under which the minimum Type II error admits an exponential behavior with the sample size. Then, we study the finite sample-size regime of this problem. We derive new upper and lower bounds on the gap between the minimum Type II error and its exponential approximation under different setups, including restrictions imposed on the vanishing Type I error probability. Our theoretical results shed light on the sample-size regimes at which approximations of the Type II error probability via error exponents become informative enough, in the sense of predicting the actual error probability well. We finally discuss an application of our results in which the gap is evaluated numerically, and we show that exponential approximations are not only tractable but also a valuable proxy for the Type II error probability in the finite-length regime.
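Schematically, and with illustrative notation, this is the testing-against-independence setup and the exponent whose finite-sample gap the paper bounds.

```latex
% Testing against independence (notation illustrative):
H_0:\ (X^n, Y^n) \sim P_{XY}^{\otimes n}
\qquad \text{vs.} \qquad
H_1:\ (X^n, Y^n) \sim (P_X \times P_Y)^{\otimes n}.
% With the Type I error constrained, the minimum Type II error
% \beta_n decays exponentially; its first-order exponent is
E \;=\; \lim_{n \to \infty} -\tfrac{1}{n} \log \beta_n,
% and the paper bounds the finite-sample gap between
% -\tfrac{1}{n}\log\beta_n and this exponential approximation.
```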

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
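A minimal sketch of the local-smoothing idea behind DRS, using the standard Gaussian-smoothing gradient estimator; the distributed protocol, step sizes, and parameter choices in the paper are more involved.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, m=16):
    """Randomized-smoothing (sub)gradient estimate for a non-smooth f.

    The smoothed objective is f_gamma(x) = E[f(x + gamma * Z)] with
    Z ~ N(0, I); a standard unbiased estimator of its gradient is
    (1/m) * sum_i (f(x + gamma * z_i) - f(x)) * z_i / gamma.
    (Illustrative of the local-smoothing idea only.)
    """
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(m):
        z = np.random.randn(d)
        g += (f(x + gamma * z) - f(x)) / gamma * z
    return g / m
```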
