
We show that for every fixed $k\geq 3$, the problem of deciding whether the termination/counter complexity of a given demonic VASS is $\mathcal{O}(n^k)$, $\Omega(n^{k})$, or $\Theta(n^{k})$ is coNP-complete, NP-complete, and DP-complete, respectively. We also classify the complexity of these problems for $k\leq 2$. This shows that the polynomial-time algorithm designed for strongly connected demonic VASS in previous works cannot be extended to the general case. We then prove that the same problems for VASS games are PSPACE-complete, and again classify the complexity for $k\leq 2$. Interestingly, tractable subclasses of demonic VASS and VASS games are obtained by bounding certain structural parameters, which opens the way to applications in program analysis despite the lower complexity bounds presented.
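For concreteness, the termination complexity of a demonic VASS is the length of the longest run from an initial configuration of size $n$, which for tiny instances can be computed by brute force. The sketch below (our own illustration, unrelated to the decision procedures above) evaluates a standard two-phase example whose termination complexity is $\Theta(n^2)$:

```python
from functools import lru_cache

# A demonic VASS as (state, update-vector, state) rules.  The demon resolves
# all nondeterminism, so termination complexity = length of the longest run.
RULES = [
    ("p", (-1, +1,  0), "p"),  # phase 1: move a token from counter x to y
    ("p", ( 0,  0, -1), "q"),  # switch phases, paying one token from z
    ("q", (+1, -1,  0), "q"),  # phase 2: move a token from y back to x
    ("q", ( 0,  0, -1), "p"),  # switch back, again paying from z
]

@lru_cache(maxsize=None)
def longest_run(state, counters):
    best = 0
    for s, delta, t in RULES:
        if s != state:
            continue
        succ = tuple(c + d for c, d in zip(counters, delta))
        if min(succ) >= 0:  # counters must stay non-negative
            best = max(best, 1 + longest_run(t, succ))
    return best

for n in range(1, 9):
    # from (x, y, z) = (n, 0, n): exactly n*n + 2*n steps, i.e. Theta(n^2)
    print(n, longest_run("p", (n, 0, n)))
```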

Related content

We present the new version of the Loop Acceleration Tool (LoAT), a powerful tool for proving non-termination and worst-case lower bounds for programs operating on integers. It is based on a novel calculus for loop acceleration, i.e., transforming loops into non-deterministic straight-line code, and for finding non-terminating configurations. To implement it efficiently, LoAT uses a new approach based on SMT solving and unsat cores. An extensive evaluation shows that LoAT is highly competitive with other state-of-the-art tools for proving non-termination. As no other tool is able to deduce worst-case lower bounds for full integer programs, we compare LoAT with its predecessors and demonstrate that it significantly outperforms them.
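To make the acceleration idea concrete, here is a hand-rolled miniature (a sketch of the general idea, not LoAT's calculus): the loop `while x < n: x = x + 2` becomes straight-line code over a non-deterministic iteration count `k`, and an SMT solver (Z3 here, standing in for LoAT's backend) answers reachability queries about it.

```python
from z3 import Int, Solver

x0, n, k = Int("x0"), Int("n"), Int("k")

s = Solver()
# Accelerated version of "while x < n: x = x + 2": after k iterations the
# final value is x0 + 2*k, valid whenever the guard still held before the
# last iteration, i.e. x0 + 2*(k - 1) < n.
s.add(k >= 1, x0 + 2 * (k - 1) < n)
s.add(x0 == 0, n == 10, k == 5)  # query: can the loop iterate 5 times?
print(s.check())                 # sat -> yes, with final value x0 + 2*k = 10
```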

In 2017, Krenn reported that certain problems related to the perfect matchings and colourings of graphs emerge from the study of the constructability of general quantum states using modern photonic technologies. He realized that proving that the \emph{weighted matching index} of a graph, a parameter defined in terms of perfect matchings and colourings of the graph, is at most 2 could lead to exciting insights into the potential of resources of quantum interference. Motivated by this, he conjectured that the weighted matching index of any graph is at most 2. The first result on this conjecture was by Bogdanov, who proved that the \emph{(unweighted) matching index} of graphs (non-isomorphic to $K_4$) is at most 2, thus classifying graphs non-isomorphic to $K_4$ into Type 0, Type 1, and Type 2. By definition, the weighted matching index of Type 0 graphs is 0. We give a structural characterization of Type 2 graphs, using which we settle Krenn's conjecture for them. The same characterization yields a simple $O(|V||E|)$-time algorithm to find the unweighted matching index of any graph. In view of our work, Krenn's conjecture remains open only for Type 1 graphs. For such graphs, we give upper bounds on the weighted matching index in terms of connectivity parameters. Using these bounds, we settle a slightly simplified version of Krenn's conjecture for the class of graphs with vertex connectivity at most 2 and the class of graphs with maximum degree at most 4. Krenn has been publicizing his conjecture in various ways since 2017, and has even declared a reward for its resolution. We hope that this article will popularize the problem among computer scientists.
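For readers who want to experiment, the underlying objects are easy to enumerate on small graphs. The brute-force sketch below (our own illustration; it is not the $O(|V||E|)$-time algorithm above) lists the perfect matchings of $K_4$, the exceptional graph in Bogdanov's result:

```python
def perfect_matchings(vertices, edges):
    """Enumerate perfect matchings by brute force (fine for tiny graphs)."""
    if not vertices:
        yield frozenset()
        return
    v = min(vertices)                        # branch on the smallest vertex
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]  # the other endpoint of e
            rest_v = vertices - {v, u}
            rest_e = [f for f in edges if v not in f and u not in f]
            for m in perfect_matchings(rest_v, rest_e):
                yield m | {tuple(sorted(e))}

K4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
for m in perfect_matchings({0, 1, 2, 3}, K4_edges):
    print(sorted(m))   # exactly three perfect matchings
```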

Elementary function operations such as sin and exp cannot in general be computed exactly on today's digital computers and thus have to be approximated. The standard approximations in library functions typically provide only a limited set of precisions and are too inefficient for many applications. Polynomial approximations that are customized to a limited input domain and output accuracy can provide superior performance. In fact, the Remez algorithm computes the best possible approximation for a given polynomial degree, but it has so far not been formally verified. This paper presents Dandelion, an automated certificate checker for polynomial approximations of elementary functions computed with Remez-like algorithms, fully verified in the HOL4 theorem prover. Dandelion checks whether the difference between a polynomial approximation and its target elementary function remains below a given error bound for all inputs in a given domain. By extracting a verified binary with the CakeML compiler, Dandelion can validate certificates within reasonable time, fully automating approximations that were previously verified by hand.
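A certificate in this setting pairs a polynomial with a target function, an input domain, and an error bound. As a loose numerical analogue of what the checker establishes (Dandelion itself uses sound, formally verified arithmetic in HOL4 rather than sampling), one can sanity-check such a certificate as follows; the helper below is purely illustrative:

```python
import numpy as np

def check_certificate(coeffs, f, lo, hi, eps, samples=100_000):
    """Sample |p(x) - f(x)| densely on [lo, hi] (NOT a sound verification)."""
    xs = np.linspace(lo, hi, samples)
    err = np.abs(np.polyval(coeffs, xs) - f(xs))
    return err.max() <= eps

# degree-3 Taylor polynomial of sin around 0: x - x^3/6 (highest degree first)
p = [-1 / 6, 0.0, 1.0, 0.0]
print(check_certificate(p, np.sin, -0.5, 0.5, 1e-3))  # True: error ~ 2.6e-4
```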

In recent years, several advancements have been made in symbolic-numerical eigenvalue techniques for solving polynomial systems. In this article, we add to this list: we design an algorithm which solves systems with isolated solutions reliably and efficiently. In overdetermined cases, it reduces the task to an eigenvalue problem in a simpler and considerably faster way than previous methods, and it can outperform the homotopy continuation approach. We provide many examples and an implementation in the proof-of-concept Julia package EigenvalueSolver.jl.
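The univariate case already shows the flavour of such eigenvalue techniques: the roots of a polynomial $p$ are the eigenvalues of its companion matrix, i.e. of the multiplication-by-$x$ map on $\mathbb{C}[x]/(p)$. The sketch below is this classical special case (in Python for self-containedness), not the multivariate algorithm of the article:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of the monic p = x^d + c_{d-1} x^{d-1} + ... + c_0,
    given coeffs = [c_0, c_1, ..., c_{d-1}]."""
    d = len(coeffs)
    C = np.zeros((d, d))
    C[1:, :-1] = np.eye(d - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column: negated coefficients
    return np.linalg.eigvals(C)      # eigenvalues of C = roots of p

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2)
print(np.sort(roots_via_companion([2.0, -3.0])))  # approx [1. 2.]
```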

We investigate the complexity of explicit construction problems, where the goal is to produce a particular object of size $n$ possessing some pseudorandom property in time polynomial in $n$. We give overwhelming evidence that $\bf{APEPP}$, defined originally by Kleinberg et al., is the natural complexity class associated with explicit constructions of objects whose existence follows from the probabilistic method, by placing a variety of such construction problems in this class. We then demonstrate that a result of Je\v{r}\'{a}bek on provability in Bounded Arithmetic, when reinterpreted as a reduction between search problems, shows that constructing a truth table of high circuit complexity is complete for $\bf{APEPP}$ under $\bf{P}^{\bf{NP}}$ reductions. This illustrates that Shannon's classical proof of the existence of hard boolean functions is in fact a $\textit{universal}$ probabilistic existence argument: derandomizing his proof implies a generic derandomization of the probabilistic method. As a corollary, we prove that $\bf{EXP}^{\bf{NP}}$ contains a language of circuit complexity $2^{n^{\Omega(1)}}$ if and only if it contains a language of circuit complexity $\frac{2^n}{2n}$. Finally, for several of the problems shown to lie in $\bf{APEPP}$, we demonstrate direct polynomial time reductions to the explicit construction of hard truth tables.
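Shannon's counting argument can be replayed numerically: compare, in log scale, the number of Boolean functions on $n$ bits with a crude upper bound on the number of circuits of the target size $2^n/(2n)$. The constants below are loose and ours; only the direction of the comparison matters:

```python
from math import log2

# There are 2**(2**n) Boolean functions on n bits, but only about
# (c * (n + s))**(2*s) circuits with s binary gates (each gate picks one of
# a few operations and two earlier wires), so most functions are hard.
n = 20
log2_functions = 2 ** n                 # log2 of 2**(2**n)

s = 2 ** n // (2 * n)                   # the target circuit size
log2_circuits = s * (log2(3) + 2 * log2(n + s))   # crude upper bound

print(log2_circuits < log2_functions)   # True: too few circuits of size s
print(log2_circuits / log2_functions)   # roughly 0.77 of the function budget
```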

Tusn\'ady's problem asks to bound the discrepancy of points and axis-parallel boxes in $\mathbb{R}^d$. Algorithmic bounds on Tusn\'ady's problem use a canonical decomposition of Matou\v{s}ek for the system of points and axis-parallel boxes, together with other techniques like partial coloring and/or random-walk-based methods. We use the notion of \emph{shallow cell complexity} and the \emph{shallow packing lemma}, together with the chaining technique, to obtain an improved decomposition of the set system. Coupled with an algorithmic technique of Bansal and Garg for discrepancy minimization, which we also slightly extend, this yields improved algorithmic bounds on Tusn\'ady's problem. For $d\geq 5$, our bound matches the lower bound of $\Omega(\log^{d-1}n)$ given by Matou\v{s}ek, Nikolov and Talwar [IMRN, 2020], settling Tusn\'ady's problem up to constant factors. For $d=2,3,4$, we obtain improved algorithmic bounds of $O(\log^{7/4}n)$, $O(\log^{5/2}n)$ and $O(\log^{13/4}n)$ respectively, which match or improve upon the non-constructive bounds of Nikolov for $d\geq 3$. Further, we give improved bounds for the discrepancy of set systems of points and polytopes in $\mathbb{R}^d$ generated via translations of a fixed set of hyperplanes. As an application, we also obtain a bound for the geometric discrepancy of anchored boxes in $\mathbb{R}^d$ with respect to an arbitrary measure, matching the upper bound for the Lebesgue measure and improving on a result of Aistleitner, Bilyk, and Nikolov [MC and QMC methods, \emph{Springer, Proc. Math. Stat.}, 2018] for $d\geq 4$.
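For intuition, the quantity being bounded is simple to state and, for tiny point sets, can be computed exactly by brute force. The sketch below (ours; exponential in $n$ and nothing like the algorithms above) finds the minimum two-colouring discrepancy over axis-parallel boxes in the plane:

```python
import itertools
import numpy as np

def box_discrepancy(points):
    """min over +1/-1 colourings of max over boxes |sum of colours inside|."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    xs, ys = np.unique(pts[:, 0]), np.unique(pts[:, 1])
    masks = []  # canonical boxes: sides passing through point coordinates
    for x0, x1 in itertools.combinations_with_replacement(xs, 2):
        for y0, y1 in itertools.combinations_with_replacement(ys, 2):
            masks.append((pts[:, 0] >= x0) & (pts[:, 0] <= x1)
                         & (pts[:, 1] >= y0) & (pts[:, 1] <= y1))
    M = np.array(masks, dtype=int)      # one row per canonical box
    best = n
    for signs in itertools.product([-1, 1], repeat=n):
        best = min(best, np.abs(M @ np.array(signs)).max())
    return best

rng = np.random.default_rng(0)
print(box_discrepancy(rng.random((8, 2))))  # small: 1 or 2 is typical here
```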

In this work, we extend the robust version of the Sylvester-Gallai theorem, obtained by Barak, Dvir, Wigderson and Yehudayoff, and by Dvir, Saraf and Wigderson, to the case of quadratic polynomials. Specifically, we prove that if $\mathcal{Q}\subset \mathbb{C}[x_1,\ldots,x_n]$ is a finite set, $|\mathcal{Q}|=m$, of irreducible quadratic polynomials satisfying the following condition: there is $\delta>0$ such that for every $Q\in\mathcal{Q}$ there are at least $\delta m$ polynomials $P\in \mathcal{Q}$ such that whenever $Q$ and $P$ vanish, so does a third polynomial in $\mathcal{Q}\setminus\{Q,P\}$; then $\dim(\text{span}({\mathcal{Q}}))=\text{poly}(1/\delta)$. The work of Barak et al. and Dvir et al. studied the case of linear polynomials and proved an upper bound of $O(1/\delta)$ on the dimension (the first work gave an upper bound of $O(1/\delta^2)$, which was improved to $O(1/\delta)$ in the second).
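Concretely, $\dim(\text{span}(\mathcal{Q}))$ is just the rank of the coefficient matrix of $\mathcal{Q}$ over a monomial basis, as in this toy computation (our example, not taken from the paper):

```python
import numpy as np

# Quadratics in x, y over the monomial basis [1, x, y, x^2, xy, y^2]:
Q = np.array([
    [0, 0, 0, 1,  0, -1],  # x^2 - y^2
    [0, 0, 0, 1,  2,  1],  # (x + y)^2
    [0, 0, 0, 1, -2,  1],  # (x - y)^2
    [0, 0, 0, 0,  1,  0],  # xy = ((x + y)^2 - (x - y)^2) / 4, so dependent
])
print(np.linalg.matrix_rank(Q))  # 3, the dimension of span(Q)
```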

Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\, x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A,b,c$ are integer, parameterized by the number of rows of $A$ and $\|A\|_{\max}$. This class of problems is known under the name of ILP problems in the standard form, adding the word "bounded" if $x \leq u$ for some integer vector $u$. Recently, many new sparsity, proximity, and complexity results were obtained for bounded and unbounded ILP problems in the standard form. In this paper, we consider ILP problems in the canonical form $$\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^n\},$$ where $b_l$ and $b_r$ are integer vectors. We assume that the integer matrix $A$ has rank $n$, $(n + m)$ rows, and $n$ columns, and we parameterize the problem by $m$ and $\Delta(A)$, where $\Delta(A)$ is the maximum absolute value of the $n \times n$ sub-determinants of $A$. We show that any ILP problem in the standard form can be polynomially reduced to some ILP problem in the canonical form, preserving $m$ and $\Delta(A)$, but the reverse reduction is not always possible. More precisely, we define the class of generalized ILP problems in the standard form, which includes an additional group constraint, and prove its equivalence to ILP problems in the canonical form. We generalize known sparsity, proximity, and complexity bounds to ILP problems in the canonical form. In some cases we strengthen previously known results for ILP problems in the canonical form, and in others we give shorter proofs. Finally, we consider the special cases $m \in \{0,1\}$, obtaining specialized sparsity, proximity, and complexity bounds for problems on simplices, knapsack problems, and subset-sum problems.
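For readers who want to experiment with the canonical form, SciPy's `milp` accepts two-sided constraints $b_l \leq Ax \leq b_r$ directly. The toy instance below (ours) has $n = 2$, a rank-$n$ matrix $A$ with $n + m = 3$ rows, hence $m = 1$:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint

# max{c^T x : b_l <= A x <= b_r, x integer}; milp minimizes, so pass -c.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b_l = np.array([0.0, 0.0, 0.0])
b_r = np.array([4.0, 4.0, 6.0])
c = np.array([1.0, 2.0])

res = milp(c=-c,
           constraints=LinearConstraint(A, b_l, b_r),
           integrality=np.ones(2))       # 1 = integer variable
print(res.x, -res.fun)                   # x = [2, 4], optimum 10
```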

Semidefinite programming (SDP) is a powerful tool for tackling a wide range of computationally hard problems such as clustering. Despite their high accuracy, semidefinite programs are often too slow in practice, with poor scalability on large (or even moderate) datasets. In this paper, we introduce a linear-time algorithm for approximating an SDP-relaxed $K$-means clustering. The proposed sketch-and-lift (SL) approach solves an SDP on a subsampled dataset and then propagates the solution to all data points by a nearest-centroid rounding procedure. It is shown that the SL approach enjoys a similar exact recovery threshold as the $K$-means SDP on the full dataset, which is known to be information-theoretically tight under the Gaussian mixture model. The SL method can be made adaptive, with enhanced theoretical properties, when the cluster sizes are unbalanced. Our simulation experiments demonstrate that the statistical accuracy of the proposed method outperforms state-of-the-art fast clustering algorithms without sacrificing too much computational efficiency, and is comparable to that of the original $K$-means SDP with substantially reduced runtime.
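Schematically, the SL pipeline is: subsample, solve the expensive problem on the sketch, and lift by nearest-centroid rounding. In the sketch below, plain $k$-means (via scikit-learn) stands in for the $K$-means SDP on the subsample, which is where our toy version departs from the paper; the lifting step follows the description above:

```python
import numpy as np
from sklearn.cluster import KMeans

def sketch_and_lift(X, k, sketch_size, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=sketch_size, replace=False)
    # stand-in for the K-means SDP, solved only on the small sketch
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    # lift: nearest-centroid rounding for every data point
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)  # toy two-component Gaussian mixture
X = np.vstack([rng.normal(m, 0.3, size=(500, 2)) for m in ([0, 0], [3, 3])])
labels = sketch_and_lift(X, k=2, sketch_size=50)
print(np.bincount(labels))      # roughly [500 500]: clusters recovered
```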

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvements in the condition numbers.
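For reference, the centralized building block being accelerated and distributed is Nesterov's method for an $L$-smooth, $\mu$-strongly convex objective. The sketch below is the standard textbook iteration on a toy quadratic, not the distributed dual algorithm itself:

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters):
    """Accelerated gradient descent for an L-smooth, mu-strongly convex f."""
    x, y = x0.copy(), x0.copy()
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)              # momentum for the strongly convex case
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step at the extrapolated point
        y = x_next + beta * (x_next - x)  # extrapolation (momentum)
        x = x_next
    return x

Q = np.diag([1.0, 10.0, 100.0])           # condition number L/mu = 100
b = np.ones(3)
x_star = np.linalg.solve(Q, b)
x_hat = nesterov(lambda x: Q @ x - b, np.zeros(3), L=100.0, mu=1.0, iters=200)
print(np.linalg.norm(x_hat - x_star))     # tiny: linear rate ~ (1 - sqrt(mu/L))
```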
