
We consider the problem of untangling a given (non-planar) straight-line circular drawing $\delta_G$ of an outerplanar graph $G=(V, E)$ into a planar straight-line circular drawing by shifting a minimum number of vertices to a new position on the circle. For an outerplanar graph $G$, it is clear that such a crossing-free circular drawing always exists, and we define the circular shifting number shift$(\delta_G)$ as the minimum number of vertices that are required to be shifted in order to resolve all crossings of $\delta_G$. We show that the problem Circular Untangling, asking whether shift$(\delta_G) \le K$ for a given integer $K$, is NP-complete. For $n$-vertex outerplanar graphs, we obtain a tight upper bound of shift$(\delta_G) \le n - \lfloor\sqrt{n-2}\rfloor - 2$. Based on these results, we study Circular Untangling for almost-planar circular drawings, in which a single edge is involved in all the crossings. In this case, we provide a tight upper bound of shift$(\delta_G) \le \lfloor \frac{n}{2} \rfloor - 1$ and present a constructive polynomial-time algorithm to compute the circular shifting number of almost-planar drawings.
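
As a minimal illustration of the model (a sketch only, with hypothetical helper names), the following Python snippet places the vertices on a circle according to a given circular order and counts chord crossings via the standard interleaving test; shift$(\delta_G)$ is then the fewest vertices one must move to bring this count to zero.

```python
from itertools import combinations

def count_crossings(order, edges):
    """Count pairwise crossings of the straight-line chords obtained by placing
    the vertices on a circle in the given circular order.  Two chords (a, b)
    and (c, d) with distinct endpoints cross iff exactly one of c, d lies on
    the open arc between a and b."""
    pos = {v: i for i, v in enumerate(order)}            # position on the circle

    def on_arc(x, a, b):                                 # is x strictly between a and b (in index order)?
        a, b, x = pos[a], pos[b], pos[x]
        return (a < x < b) if a < b else (x > a or x < b)

    total = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) < 4:                        # chords sharing an endpoint never cross
            continue
        if on_arc(c, a, b) != on_arc(d, a, b):           # endpoints interleave
            total += 1
    return total

# A 4-cycle drawn in the order 0, 2, 1, 3 has one crossing; shifting a single
# vertex (e.g. moving 2 between 1 and 3) removes it, so shift(delta_G) = 1 here.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert count_crossings([0, 2, 1, 3], C4) == 1
assert count_crossings([0, 1, 2, 3], C4) == 0
```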

Related content

We prove that any compact semi-algebraic set is homeomorphic to the solution space of some art gallery problem. Previous works have established similar universality theorems, but holding only up to homotopy equivalence, rather than homeomorphism, and prior to this work, the existence of art galleries even for simple spaces such as the M\"obius strip or the three-holed torus was unknown. Our construction, consisting of a single rectangular room with convex slits cut out from its edges, relies on an elegant and versatile gadget to copy guard positions with minimal overhead and is thus simpler than previous constructions. We additionally show that both the orientable and non-orientable surfaces of genus $n$ require galleries with only $O(n)$ vertices.

We introduce the problem of determining if the mode of the output distribution of a quantum circuit (given as a black-box) is larger than a given threshold, named HighDist, and a similar problem based on the absolute values of the amplitudes, named HighAmp. We design quantum algorithms for promised versions of these problems whose space complexities are logarithmic in the size of the domain of the distribution, but whose query complexities are independent of it. Using these, we further design algorithms to estimate the largest probability and the largest amplitude in the output distribution of a quantum black-box. All of these allow us to improve the query complexity of a few recently studied problems, namely, $k$-distinctness and its gapped version, estimating the largest frequency in an array, estimating the min-entropy of a distribution, and the non-linearity of a Boolean function, in the $\tilde{O}(1)$-qubits scenario. The time complexities of almost all of our algorithms have a small overhead over their query complexities, making them efficiently implementable on currently available quantum backends.
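
As a point of reference only, the Python sketch below pins down the HighDist and HighAmp predicates by brute force from a full classical statevector; it uses memory linear in the domain size and is unrelated to the paper's query-efficient, $\tilde{O}(1)$-qubit quantum algorithms.

```python
import numpy as np

def high_dist(statevector, threshold):
    """HighDist, checked classically by brute force: is the mode (largest
    probability) of the circuit's output distribution above the threshold?
    Memory is linear in the domain size, unlike the paper's algorithms."""
    probs = np.abs(np.asarray(statevector)) ** 2
    return probs.max() > threshold

def high_amp(statevector, threshold):
    """HighAmp: the analogous check on the absolute values of the amplitudes."""
    return np.abs(np.asarray(statevector)).max() > threshold

# |psi> = (|00> + |01>)/2 + (1/sqrt(2))|10>  ->  mode probability is 1/2.
psi = np.array([0.5, 0.5, np.sqrt(2) / 2, 0.0])
assert high_dist(psi, 0.4) and not high_dist(psi, 0.6)
assert high_amp(psi, 0.7) and not high_amp(psi, 0.8)
```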

We introduce a notion of \emph{generic local algorithm} which strictly generalizes existing frameworks of local algorithms such as \emph{factors of i.i.d.} by capturing local \emph{quantum} algorithms such as the Quantum Approximate Optimization Algorithm (QAOA). Motivated by a question of Farhi et al. [arXiv:1910.08187, 2019], we then show limitations of generic local algorithms, including QAOA, on random instances of constraint satisfaction problems (CSPs). Specifically, we show that any generic local algorithm whose assignment to a vertex depends only on a local neighborhood with $o(n)$ other vertices (such as the QAOA at depth less than $\epsilon\log(n)$) cannot approximate Boolean CSPs arbitrarily well if the problem satisfies a geometric property from statistical physics called the coupled overlap-gap property (OGP) [Chen et al., Annals of Probability, 47(3), 2019]. We show that the random MAX-k-XOR problem has this property when $k\geq 4$ is even by extending the corresponding result for diluted $k$-spin glasses. Our concentration lemmas confirm a conjecture of Brandao et al. [arXiv:1812.04170, 2018] asserting that the landscape independence of QAOA extends to logarithmic depth -- in other words, for every fixed choice of QAOA angle parameters, the algorithm at logarithmic depth performs almost equally well on almost all instances. One of these concentration lemmas is a strengthening of McDiarmid's inequality, applicable when the random variables have a highly biased distribution, and may be of independent interest.

In this paper, we consider two fundamental symmetric kernels in linear algebra: the Cholesky factorization and the symmetric rank-$k$ update (SYRK), with the classical three-nested-loop algorithms for these kernels. In addition, we consider a machine model with a fast memory of size $S$ and an unbounded slow memory. In this model, all computations must be performed on operands in fast memory, and the goal is to minimize the amount of communication between slow and fast memories. As the set of computations is fixed by the choice of the algorithm, only the ordering of the computations (the schedule) directly influences the volume of communications. We prove lower bounds of $\frac{1}{3\sqrt{2}}\frac{N^3}{\sqrt{S}}$ for the communication volume of the Cholesky factorization of an $N\times N$ symmetric positive definite matrix, and of $\frac{1}{\sqrt{2}}\frac{N^2M}{\sqrt{S}}$ for the SYRK computation of $\mathbf{A}\cdot\mathbf{A}^{\mathsf{T}}$, where $\mathbf{A}$ is an $N\times M$ matrix. Both bounds improve the best known lower bounds from the literature by a factor $\sqrt{2}$. In addition, we present two out-of-core, sequential algorithms with matching communication volume: TBS for SYRK, with a volume of $\frac{1}{\sqrt{2}}\frac{N^2M}{\sqrt{S}} + O(NM\log N)$, and LBC for Cholesky, with a volume of $\frac{1}{3\sqrt{2}}\frac{N^3}{\sqrt{S}} + O(N^{5/2})$. Both algorithms improve over the best known algorithms from the literature by a factor $\sqrt{2}$, and prove that the leading terms in our lower bounds cannot be improved further. This work shows that the operational intensity of symmetric kernels like SYRK or Cholesky is intrinsically higher (by a factor $\sqrt{2}$) than that of the corresponding non-symmetric kernels (GEMM and LU factorization).
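
For concreteness, the Python sketch below spells out the classical three-nested-loop SYRK kernel referred to above (the lower triangle of $C = A A^{\mathsf{T}}$); the paper's bounds concern how the $\frac{N(N+1)}{2}M$ multiply-add updates of exactly this kernel can be scheduled through a fast memory of size $S$.

```python
import numpy as np

def syrk_lower(A):
    """Classical three-nested-loop SYRK kernel: accumulate the lower triangle
    of C = A @ A.T for an N x M matrix A.  Only the ordering (schedule) of
    these updates is free; the set of computations itself is fixed."""
    N, M = A.shape
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):          # symmetry: only C[i, j] with j <= i
            for k in range(M):
                C[i, j] += A[i, k] * A[j, k]
    return C

A = np.random.default_rng(0).standard_normal((4, 3))
assert np.allclose(syrk_lower(A), np.tril(A @ A.T))
```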

In this paper, we analyze the problem of how to adapt the concept of proportionality to situations where several perfectly divisible resources have to be allocated among a certain set of agents, each of which has exactly one claim that is used for all resources. In particular, we introduce the constrained proportional awards rule, which extends the classical proportional rule to these situations. Moreover, we provide an axiomatic characterization of this rule.
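
As context only, here is a short Python sketch of the classical proportional rule applied resource by resource, i.e. the rule the constrained proportional awards rule extends; the constrained rule itself is not reproduced here.

```python
def proportional_awards(claims, endowments):
    """Classical proportional rule, applied resource by resource: each agent
    receives a share of every resource proportional to her single claim.
    This is the baseline rule only, not the paper's constrained rule."""
    total = sum(claims)
    return [[c / total * e for e in endowments] for c in claims]

# Two agents with claims 1 and 3 split two resources of sizes 8 and 4.
awards = proportional_awards([1.0, 3.0], [8.0, 4.0])
assert awards == [[2.0, 1.0], [6.0, 3.0]]
```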

We propose a framework to study the effect of local recovery requirements of codeword symbols on the dimension of linear codes, based on a combinatorial proxy that we call \emph{visible rank}. The locality constraints of a linear code are stipulated by a matrix $H$ of $\star$'s and $0$'s (which we call a "stencil"), whose rows correspond to the local parity checks (with the $\star$'s indicating the support of the check). The visible rank of $H$ is the largest $r$ for which there is an $r \times r$ submatrix in $H$ with a unique generalized diagonal of $\star$'s. The visible rank yields a field-independent combinatorial lower bound on the rank of $H$ and thus on the co-dimension of the code. We prove a rank-nullity type theorem relating visible rank to the rank of an associated construct called \emph{symmetric spanoid}, which was introduced by Dvir, Gopi, Gu, and Wigderson~\cite{DGGW20}. Using this connection and a construction of appropriate stencils, we answer a question posed in \cite{DGGW20} and demonstrate that symmetric spanoid rank cannot improve the currently best known $\widetilde{O}(n^{(q-2)/(q-1)})$ upper bound on the dimension of $q$-query locally correctable codes (LCCs) of length $n$. We also study the $t$-Disjoint Repair Group Property ($t$-DRGP) of codes, where each codeword symbol must belong to $t$ disjoint check equations. It is known that linear $2$-DRGP codes must have co-dimension $\Omega(\sqrt{n})$. We show that there are stencils corresponding to $2$-DRGP with visible rank as small as $O(\log n)$. However, we show that the second tensor power of any $2$-DRGP stencil has visible rank $\Omega(n)$, thus recovering the $\Omega(\sqrt{n})$ lower bound for $2$-DRGP. For $q$-LCCs, however, the $k$-th tensor power for $k\le n^{o(1)}$ is unable to improve the $\widetilde{O}(n^{(q-2)/(q-1)})$ upper bound on the dimension of $q$-LCCs by a polynomial factor.
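
To make the definition concrete, the brute-force Python sketch below computes the visible rank of a tiny stencil directly from the definition (the largest $r$ such that some $r \times r$ submatrix has exactly one generalized diagonal of $\star$'s); it runs in exponential time and is intended only as an illustration.

```python
from itertools import combinations, permutations

def visible_rank(stencil):
    """Brute-force visible rank of a stencil given as a 0/1 matrix (1 marks a
    star): the largest r such that some r x r submatrix has exactly one
    permutation pi with a star in every position (i, pi(i))."""
    m, n = len(stencil), len(stencil[0])
    best = 0
    for r in range(1, min(m, n) + 1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                diagonals = sum(
                    all(stencil[i][j] for i, j in zip(rows, perm))
                    for perm in permutations(cols)
                )
                if diagonals == 1:          # unique generalized diagonal
                    best = max(best, r)
    return best

# Stars on the diagonal only: the identity is the unique generalized diagonal.
assert visible_rank([[1, 0], [0, 1]]) == 2
# An all-star 2x2 block has two generalized diagonals, so only r = 1 counts.
assert visible_rank([[1, 1], [1, 1]]) == 1
```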

The new type of ideal basis introduced herein constitutes a compromise between the Gr\"obner bases based on Buchberger's algorithm and the characteristic sets based on Wu's method. It reduces the complexity of the traditional Gr\"obner bases and substantially subdues the notorious intermediate expression swell and intermediate coefficient swell problems. The computation of an $S$-polynomial for the new bases requires at most $O(m\ln^2m\ln\ln m)$ word operations, whereas $O(m^6\ln^2m)$ word operations are requisite in Buchberger's algorithm. Here $m$ denotes an upper bound on the number of terms both in the leading coefficients and in the rest of the polynomials. The new bases are for zero-dimensional polynomial ideals and are based on univariate pseudo-divisions. However, in contrast to the pseudo-divisions in Wu's method for the characteristic sets, the new bases retain the algebraic information of the original ideal and, in particular, solve the ideal membership problem. In order to determine the authentic factors of the eliminant, we analyze the multipliers of the pseudo-divisions and develop an algorithm over principal quotient rings with zero divisors.
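
As a toy illustration of the univariate pseudo-divisions mentioned above, the Python sketch below computes a pseudo-remainder over the integers; the paper additionally tracks the multipliers and works over principal quotient rings with zero divisors, which this sketch does not attempt.

```python
def prem(f, g):
    """Pseudo-remainder of univariate integer polynomials, given as coefficient
    lists from highest to lowest degree: the r satisfying
    lc(g)^(deg f - deg g + 1) * f = q*g + r   with   deg r < deg g."""
    r, dg, lc_g = list(f), len(g) - 1, g[0]
    while len(r) > dg:
        lc_r = r[0]
        # r <- lc(g)*r - lc(r) * x^(deg r - deg g) * g ; the leading terms cancel.
        r = [lc_g * r[i] - (lc_r * g[i] if i <= dg else 0) for i in range(1, len(r))]
    return r

# 2^2 * (x^2 + 1) = (2x - 1)(2x + 1) + 5, so prem(x^2 + 1, 2x + 1) = 5.
assert prem([1, 0, 1], [2, 1]) == [5]
```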

The semi-random graph process is a single-player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$ and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. We focus on the problem of constructing a perfect matching in as few rounds as possible. In particular, we present an adaptive strategy for the player which achieves a perfect matching in $\beta n$ rounds, where the value of $\beta < 1.206$ is derived from a solution to a system of differential equations. This improves upon the previously best known upper bound of $(1+2/e+o(1)) \, n < 1.736 \, n$ rounds. We also improve the previously best lower bound of $(\ln 2 + o(1)) \, n > 0.693 \, n$ and show that the player cannot achieve the desired property in fewer than $\alpha n$ rounds, where the value of $\alpha > 0.932$ is derived from a solution to another system of differential equations. As a result, the gap between the upper and lower bounds shrinks by roughly a factor of four.
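
For intuition, the Python sketch below simulates the process with a deliberately naive greedy strategy (not the paper's): it only uses a round when the presented vertex $u$ is still unmatched, which already shows why achieving a perfect matching in a linear number of rounds requires care.

```python
import random

def rounds_to_perfect_matching(n, seed=0):
    """Simulate the semi-random graph process with a naive greedy strategy:
    if the presented vertex u is still unmatched, match it to an arbitrary
    other unmatched vertex; otherwise waste the round.  Returns the number of
    rounds until a perfect matching exists (n must be even)."""
    rng = random.Random(seed)
    unmatched, rounds = set(range(n)), 0
    while unmatched:
        rounds += 1
        u = rng.randrange(n)              # u arrives uniformly at random
        if u in unmatched:
            unmatched.remove(u)
            unmatched.pop()               # the player's adaptive choice of v
    return rounds

# This naive strategy needs on the order of n*log(n) rounds (a few thousand
# for n = 1000), far above the ~1.206 n achieved by the paper's strategy.
print(rounds_to_perfect_matching(1000))
```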

The ZX-calculus is a graphical language for reasoning about quantum computation using ZX-diagrams, a certain flexible generalisation of quantum circuits that can be used to represent linear maps from $m$ to $n$ qubits for any $m,n \geq 0$. Some applications of the ZX-calculus, such as quantum circuit optimisation and synthesis, rely on being able to efficiently translate a ZX-diagram back into a quantum circuit of comparable size. While several sufficient conditions are known for describing families of ZX-diagrams that can be efficiently transformed back into circuits, it has previously been conjectured that the general problem of circuit extraction is hard. That is, it should not be possible to efficiently convert an arbitrary ZX-diagram describing a unitary linear map into an equivalent quantum circuit. In this paper we prove this conjecture by showing that the circuit extraction problem is #P-hard, and so is itself at least as hard as strong simulation of quantum circuits. In addition to our main hardness result, which relies specifically on the circuit representation, we give a representation-agnostic hardness result. Namely, we show that any oracle that takes as input a ZX-diagram description of a unitary and produces samples of the output of the associated quantum computation enables efficient probabilistic solutions to NP-complete problems.

We survey lower-bound results in complexity theory that have been obtained via newfound interconnections between propositional proof complexity, boolean circuit complexity, and query/communication complexity. We advocate for the theory of total search problems (TFNP) as a unifying language for these connections and discuss how this perspective suggests a whole programme for further research.
