
A polynomial Turing kernel for a parameterized problem $P$ is a polynomial-time algorithm that solves $P$ using queries to an oracle for $P$ whose sizes are upper-bounded by some polynomial in the parameter. Here the term "polynomial" refers to the bound on the query sizes, as the running time of any kernel is required to be polynomial. One of the most important open goals in parameterized complexity is to understand the applicability and limitations of polynomial Turing kernels. As any fixed-parameter tractable problem admits a Turing kernel of some size, the focus has mostly been on determining which problems admit such kernels whose query sizes can indeed be bounded by some polynomial. In this paper we take a different approach, and instead focus on the number of queries that a Turing kernel uses, assuming it is restricted to polynomial-sized queries. Our study focuses on one of the main problems studied in parameterized complexity, the Clique problem: given a graph $G$ and an integer $k$, determine whether there are $k$ pairwise adjacent vertices in $G$. We show that Clique parameterized by several structural parameters exhibits the following phenomena:

- It admits polynomial Turing kernels which use a sublinear number of queries, namely $O(n/\log^c n)$ queries, where $n$ is the total size of the graph and $c$ is any constant. This holds even for a very restrictive type of Turing kernel which we call an OR-kernel.
- It does not admit polynomial Turing kernels which use $O(n^{1-\epsilon})$ queries, unless NP $\subseteq$ coNP/poly.

For proving the second item above, we develop a new framework for bounding the number of queries needed by polynomial Turing kernels. This framework is inspired by the standard lower-bound framework for Karp kernels, and while it is quite similar, it still requires some novel ideas to extend it to the Turing setting.
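To make the query model concrete, here is a minimal Python sketch of the control flow of an OR-kernel: the kernel emits a batch of small queries and accepts iff the oracle accepts at least one of them. The query generator and the oracle are hypothetical placeholders, not the construction from the paper.

```python
# Control-flow sketch of a (hypothetical) OR-kernel: produce a batch of
# queries of size poly(k) and accept iff the oracle accepts at least one.
# `make_queries` and `oracle` are placeholders, not the paper's construction.

def or_kernel(instance, k, make_queries, oracle):
    queries = make_queries(instance, k)      # each query has size poly(k)
    return any(oracle(q) for q in queries)   # OR of the oracle's answers
```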

Related content

Classic dynamic data structure problems maintain a data structure subject to a sequence $S$ of updates, and they answer queries using the latest version of the data structure, i.e., the data structure after processing the whole sequence. To handle operations that change the sequence $S$ of updates, Demaine et al. (TALG 2007) introduced retroactive data structures. A retroactive operation modifies the update sequence $S$ at a given position $t$, called time, and either creates or cancels an update in $S$ at time $t$. A partially retroactive data structure restricts queries to be executed exclusively on the latest version of the data structure. A fully retroactive data structure supports queries at any time $t$: a query at time $t$ is answered using only the updates of $S$ up to time $t$. If the sequence $S$ consists only of insertions, the resulting data structure is an incremental retroactive data structure. While efficient retroactive data structures have been proposed for classic data structures, e.g., stacks, priority queues and binary search trees, retroactive versions of graph problems have rarely been studied. In this paper we study retroactive graph problems including connectivity, minimum spanning forest (MSF), maximum degree, etc. We provide fully retroactive data structures for maintaining the maximum degree, connectivity and MSF in $\tilde{O}(n)$ time per operation. We also give an algorithm for incremental fully retroactive connectivity with $\tilde{O}(1)$ time per operation. We complement our algorithms with almost tight hardness results. We show that, under the OMv conjecture (proposed by Henzinger et al. (STOC 2015)), there is no fully retroactive data structure maintaining connectivity or MSF, nor an incremental fully retroactive data structure maintaining the maximum degree, with $O(n^{1-\epsilon})$ time per operation, for any constant $\epsilon > 0$.
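As a point of reference, the interface above can be realized by a naive replay-based baseline, sketched below; it is hypothetical and not the paper's construction. Retroactive operations edit the timeline, and each (partially retroactive) query replays the whole update sequence.

```python
import bisect

# Naive baseline for a *partially* retroactive data structure: updates can
# be created or cancelled anywhere on the timeline, queries see only the
# latest version. Each query replays the full timeline; `apply_update`
# must be pure (return a new state rather than mutate).

class PartiallyRetroactive:
    def __init__(self, initial_state, apply_update):
        self.times = []        # sorted times that carry an update
        self.updates = {}      # time -> update
        self.init = initial_state
        self.apply = apply_update

    def insert(self, t, update):    # create an update at time t
        bisect.insort(self.times, t)
        self.updates[t] = update

    def cancel(self, t):            # cancel the (existing) update at time t
        self.times.pop(bisect.bisect_left(self.times, t))
        del self.updates[t]

    def query(self):                # latest version only
        state = self.init
        for t in self.times:
            state = self.apply(state, self.updates[t])
        return state
```

With states taken as edge sets and `apply_update` inserting an edge, `query` can report, say, the maximum degree of the latest graph; the replay costs time linear in the timeline length, which is exactly the cost the paper's $\tilde{O}(n)$- and $\tilde{O}(1)$-time structures avoid.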

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied under the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically distributed $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ sequence, and that its empirical eigenvalue distribution converges towards the Marcenko-Pastur distribution. This allows one to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of order $\frac{1}{M} + \frac{\sqrt{M}}{N} + \left(\frac{M}{N}\right)^{3}$. Numerical simulations support our results.
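The random-matrix phenomenon behind this result can be illustrated by a small simulation in a simplified, assumed setting (i.i.d. complex Gaussian samples rather than the smoothed periodogram itself): the eigenvalues of the sample covariance matrix fall inside the Marcenko-Pastur support.

```python
import numpy as np

# Illustrative simulation (assumed simplified setting, not the paper's
# estimator): the eigenvalues of the sample covariance of B i.i.d.
# N_C(0, I_M) vectors concentrate on the Marcenko-Pastur support with
# ratio c = M/B, where B plays the role of the smoothing span.
rng = np.random.default_rng(0)
M, B = 100, 1000
X = (rng.standard_normal((M, B)) + 1j * rng.standard_normal((M, B))) / np.sqrt(2)
S = X @ X.conj().T / B                                 # sample covariance
eigs = np.linalg.eigvalsh(S)
c = M / B
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2  # MP support edges
print(f"eigenvalues in [{eigs.min():.3f}, {eigs.max():.3f}]; "
      f"MP support [{lo:.3f}, {hi:.3f}]")
```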

It is well-known that an algorithm exists which approximates the NP-complete problem of Set Cover within a factor of ln(n), and it was recently proven that this approximation ratio is optimal unless P = NP. This optimality result is the product of many advances in characterizations of NP, in terms of interactive proof systems and probabilistically checkable proofs (PCP), and improvements to the analyses thereof. However, as a result, it is difficult to extract the development of Set Cover approximation bounds from the greater scope of proof system analysis. This paper attempts to present a chronological progression of results on lower-bounding the approximation ratio of Set Cover. We analyze a series of proofs of progressively better bounds and unify the results under similar terminologies and frameworks to provide an accurate comparison of proof techniques and their results. We also treat many preliminary results as black-boxes to better focus our analysis on the core reductions to Set Cover instances. The result is alternative versions of several hardness proofs, beginning with initial inapproximability results and culminating in a version of the proof that ln(n) is a tight lower bound.
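For reference, the ln(n)-approximation discussed above is attained by the textbook greedy algorithm, sketched here: repeatedly pick the set that covers the most still-uncovered elements.

```python
# Textbook greedy Set Cover, which achieves the ln(n) approximation ratio:
# repeatedly choose the set covering the most still-uncovered elements.

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("the sets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

# Example: greedy first picks {1, 2, 3}, then {4, 5}.
print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```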

Given a simple graph $G$ and an integer $k$, the goal of the $k$-Clique problem is to decide whether $G$ contains a complete subgraph of size $k$. We say an algorithm approximates $k$-Clique within a factor $g(k)$ if it can find a clique of size at least $k / g(k)$ whenever $G$ is guaranteed to have a $k$-clique. Recently, it was shown that approximating $k$-Clique within a constant factor is W[1]-hard [Lin21]. We study the approximation of $k$-Clique under the Exponential Time Hypothesis (ETH). The reduction of [Lin21] already implies an $n^{\Omega(\sqrt[6]{\log k})}$-time lower bound under ETH. We improve this lower bound to $n^{\Omega(\log k)}$. Using the gap-amplification technique via expander graphs, we also prove that there is no $k^{o(1)}$-factor FPT-approximation algorithm for $k$-Clique under ETH. We also suggest a new way to prove the Parameterized Inapproximability Hypothesis (PIH) under ETH: we show that if there is no $n^{O(\frac{k}{\log k})}$-time algorithm approximating $k$-Clique within a constant factor, then PIH is true.
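As a concrete reference point, the decision problem above can be checked by brute force in time $n^{O(k)}$; the sketch below only pins down the problem definition and does not reflect any algorithm from the paper.

```python
from itertools import combinations

# Brute-force reference check for k-Clique (n^{O(k)} time; definitional
# only, not an algorithm from the paper).

def has_k_clique(adj, k):
    """adj: dict mapping each vertex to the set of its neighbours."""
    return any(
        all(v in adj[u] for u, v in combinations(group, 2))
        for group in combinations(adj, k)
    )

# A triangle plus a pendant vertex: contains a 3-clique but no 4-clique.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert has_k_clique(adj, 3) and not has_k_clique(adj, 4)
```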

A permutation graph can be defined as an intersection graph of segments whose endpoints lie on two parallel lines $\ell_1$ and $\ell_2$, one on each. A bipartite permutation graph is a permutation graph which is bipartite. In the bipartite permutation vertex deletion problem we ask, for a given $n$-vertex graph, whether we can remove at most $k$ vertices to obtain a bipartite permutation graph. This problem is NP-complete but admits an FPT algorithm parameterized by $k$. In this paper we study the kernelization of this problem and show that it admits a polynomial kernel with $O(k^{99})$ vertices.

A tournament graph $T = (V, E)$ is an oriented complete graph, which can be used to model a round-robin tournament between $n$ players. In this paper, we address the problem of finding a champion of the tournament, also known as a Copeland winner, which is a player that wins the highest number of matches. Solving this problem has important implications for several Information Retrieval applications, including Web search, conversational IR, machine translation, question answering, recommender systems, etc. Our goal is to solve the problem while minimizing the number of times we probe the adjacency matrix, i.e., the number of matches played. We prove that any deterministic or randomized algorithm finding a champion with constant success probability requires $\Omega(\ell n)$ comparisons, where $\ell$ is the number of matches lost by the champion. We then present an optimal deterministic algorithm matching this lower bound without knowing $\ell$, and we extend our analysis to three closely related problems. Lastly, we conduct a comprehensive experimental assessment of the proposed algorithms to speed up a state-of-the-art solution for ranking on public data. Results show that our proposals speed up the retrieval of the champion by up to $13\times$ in this scenario.
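For contrast with the $O(\ell n)$ bound, here is the trivial strategy that probes every entry of the adjacency matrix, spending $\Theta(n^2)$ comparisons; `beats` is a hypothetical probe function, and the sketch is not the paper's algorithm.

```python
# Naive baseline: probe every unordered pair of the (implicit) adjacency
# matrix and return a Copeland winner. Uses Theta(n^2) probes, whereas the
# paper's algorithm needs only O(l*n). beats(i, j) is a hypothetical probe
# returning True iff player i beats player j.

def copeland_winner_naive(n, beats):
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):     # one probe per unordered pair
            if beats(i, j):
                wins[i] += 1
            else:
                wins[j] += 1
    return max(range(n), key=wins.__getitem__)
```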

It is well known [Lov\'{a}sz, 1967] that up to isomorphism a graph $G$ is determined by the homomorphism counts $\hom(F, G)$, i.e., the numbers of homomorphisms from $F$ to $G$, where $F$ ranges over all graphs. Moreover, it suffices that $F$ ranges over the graphs with at most as many vertices as $G$. Thus in principle we can answer any query concerning $G$ by accessing only the $\hom(\cdot,G)$'s instead of $G$ itself. In this paper, we zoom in on those queries that can be answered using a constant number of $\hom(\cdot,G)$'s for every graph $G$. We observe that if a query $\varphi$ is expressible as a Boolean combination of universal sentences in first-order logic, then whether a graph $G$ satisfies $\varphi$ can be determined by the vector \[\overrightarrow{\mathrm{hom}}_{F_1, \ldots, F_k}(G):= \big(\mathrm{hom}(F_1, G), \ldots, \mathrm{hom}(F_k, G)\big),\] where the graphs $F_1,\ldots,F_k$ depend only on $\varphi$. This leads to a query algorithm for $\varphi$ that is non-adaptive in the sense that the $F_i$ are independent of the input $G$. On the other hand, we prove that the existence of an isolated vertex, a property definable in first-order logic but not by such a $\varphi$, cannot be determined by any $\overrightarrow{\mathrm{hom}}_{F_1, \ldots, F_k}(\cdot)$. These results provide a clear delineation of the power of non-adaptive query algorithms with access to a constant number of $\hom(\cdot, G)$'s. For adaptive query algorithms, i.e., algorithms that may access some $\hom(F_{i+1}, G)$ with $F_{i+1}$ depending on $\hom(F_1, G), \ldots, \hom(F_i, G)$, we show that three homomorphism counts $\hom(\cdot,G)$ are both sufficient and in general necessary to determine the graph $G$. In particular, with three adaptive queries we can answer any question about $G$. Moreover, adaptively accessing two $\hom(\cdot, G)$'s is already enough to detect an isolated vertex.
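As a concrete reference, $\hom(F, G)$ can be computed by brute force over all vertex maps from $F$ to $G$, which is affordable here because the query graphs $F$ are small and constantly many; this sketch is illustrative, not part of the paper's machinery.

```python
from itertools import product

# Brute-force hom(F, G): count the maps from V(F) to V(G) that send every
# edge of F to an edge of G. Exponential in |V(F)|, fine for small fixed F.

def hom(F_vertices, F_edges, G_adj):
    count = 0
    for image in product(G_adj, repeat=len(F_vertices)):
        assign = dict(zip(F_vertices, image))
        if all(assign[v] in G_adj[assign[u]] for u, v in F_edges):
            count += 1
    return count

# hom(K_2, G) counts ordered pairs of adjacent vertices, i.e. 2*|E(G)|.
G = {0: {1}, 1: {0, 2}, 2: {1}}
assert hom([0, 1], [(0, 1)], G) == 4
```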

A family of closed simple (i.e., Jordan) curves is $m$-intersecting if any pair of its curves have at most $m$ points of common intersection. We say that a pair of such curves touch if they intersect at a single point of common tangency. In this work we show that any $m$-intersecting family of $n$ Jordan curves in general position in the plane contains $O\left(n^{2-\frac{1}{3m+15}}\right)$ touching pairs. Furthermore, we use the string separator theorem of Fox and Pach to establish the following Crossing Lemma for contact graphs of Jordan curves: if $\Gamma$ is an $m$-intersecting family of closed Jordan curves in general position in the plane with exactly $T=\Omega(n)$ touching pairs of curves, then the curves of $\Gamma$ determine $\Omega\left(T\cdot\left(\frac{T}{n}\right)^{\frac{1}{9m+45}}\right)$ intersection points. This extends similar bounds that were previously established by Salazar for the special case of pairwise intersecting (and $m$-intersecting) curves. Specializing to the case at hand, our bound substantially improves the bounds that were recently derived by Pach, Rubin and Tardos for arbitrary families of Jordan curves.

Let CMSO denote the counting monadic second-order logic of graphs. We give a constructive proof that for some computable function $f$, there is an algorithm $\mathfrak{A}$ that takes as input a CMSO sentence $\varphi$, a positive integer $t$, and a connected graph $G$ of maximum degree at most $\Delta$, and determines, in time $f(|\varphi|,t)\cdot 2^{O(\Delta \cdot t)}\cdot |G|^{O(t)}$, whether $G$ has a supergraph $G'$ of treewidth at most $t$ such that $G'\models \varphi$. The algorithmic metatheorem described above sheds new light on certain unresolved questions within the framework of graph completion algorithms. In particular, using this metatheorem, we provide an explicit algorithm that determines, in time $f(d)\cdot 2^{O(\Delta \cdot d)}\cdot |G|^{O(d)}$, whether a connected graph of maximum degree $\Delta$ has a planar supergraph of diameter at most $d$. Additionally, we show that for each fixed $k$, the problem of determining whether $G$ has a $k$-outerplanar supergraph of diameter at most $d$ is strongly uniformly fixed-parameter tractable with respect to the parameter $d$. This result can be generalized in two directions. First, the diameter parameter can be replaced by any contraction-closed, effectively CMSO-definable parameter $\mathbf{p}$; examples of such parameters are the vertex cover number, the domination number, and many other contraction-bidimensional parameters. Second, the planarity requirement can be relaxed to bounded genus and, more generally, to bounded local treewidth.

Many representative graph neural networks, e.g., GPR-GNN and ChebyNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose BernNet, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral properties by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) from observed graphs and their associated signals, and thus obtain a BernNet specialized to the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and that it achieves superior performance in real-world graph modeling tasks.
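A minimal numpy sketch of the Bernstein-basis filtering step as we read it from the abstract, assuming $L$ is the symmetric normalized Laplacian (eigenvalues in $[0,2]$) and `theta` holds the coefficients that BernNet would learn; the dense matrix powers are purely illustrative, whereas an actual implementation would use sparse recursive propagation.

```python
import numpy as np
from math import comb

# Sketch: filter a graph signal x with an order-K Bernstein polynomial of
# the normalized Laplacian L. Each basis term is C(K,k)/2^K * (2I-L)^{K-k}
# * L^k; theta are the Bernstein coefficients (learned in BernNet, given
# here). Dense matrix powers are for clarity only.

def bernstein_filter(L, x, theta):
    K = len(theta) - 1
    n = L.shape[0]
    out = np.zeros_like(x, dtype=float)
    for k in range(K + 1):
        basis = (comb(K, k) / 2 ** K) * (
            np.linalg.matrix_power(2 * np.eye(n) - L, K - k)
            @ np.linalg.matrix_power(L, k)
        )
        out += theta[k] * (basis @ x)
    return out
```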
