
We introduce a new data structure for answering connectivity queries in undirected graphs subject to batched vertex failures. Precisely, given any graph G and integer k, we can in fixed-parameter time construct a data structure that can later be used to answer queries of the form: ``are vertices s and t connected via a path that avoids vertices $u_1,..., u_k$?'' in time $2^{2^{O(k)}}$. In the terminology of the literature on data structures, this gives the first deterministic data structure for connectivity under vertex failures where, for every fixed number of failures, all operations can be performed in constant time. With the aim of understanding the power and the limitations of our new techniques, we prove an algorithmic meta-theorem for the recently introduced separator logic, which extends first-order logic with atoms for connectivity under vertex failures. We prove that the model-checking problem for separator logic is fixed-parameter tractable on every class of graphs that excludes a fixed topological minor. We also show a weak converse. This implies that, from the point of view of parameterized complexity and under standard complexity assumptions, the frontier of tractability of separator logic is almost exactly delimited by classes excluding a fixed topological minor. The backbone of our proof relies on a decomposition theorem of Cygan et al. [SICOMP '19], which provides a tree decomposition of a given graph into bags that are unbreakable. Crucially, unbreakability allows us to reduce separator logic to plain first-order logic within each bag individually. We design our model-checking algorithm using dynamic programming over the tree decomposition, where the transition at each bag amounts to running a suitable model-checking subprocedure for plain first-order logic. This approach is robust enough to also provide efficient enumeration of answers to queries expressed in separator logic.
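The query answered by the data structure has a simple executable specification. The sketch below (illustrative names; not the paper's construction) answers each query with a fresh BFS in $O(|V|+|E|)$ time, whereas the paper's point is to preprocess the graph once so that each query costs time depending only on $k$:

```python
from collections import deque

def connected_avoiding(adj, s, t, failed):
    """Reference semantics of the query: is there an s-t path whose
    vertices all avoid the failure set `failed`? Answered naively by
    BFS restricted to the surviving vertices."""
    if s in failed or t in failed:
        return False
    seen = {s}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for w in adj.get(v, ()):
            if w not in seen and w not in failed:
                seen.add(w)
                queue.append(w)
    return False

# A 4-cycle a-b-c-d: removing b still leaves a and c connected via d.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
```

For instance, `connected_avoiding(adj, "a", "c", {"b"})` holds, while failing both `b` and `d` disconnects `a` from `c`.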

Related content

In this paper we show that every graph of pathwidth less than $k$ that has a path of order $n$ also has an induced path of order at least $\frac{1}{3} n^{1/k}$. This is an exponential improvement and a generalization of the polylogarithmic bounds obtained by Esperet, Lemoine and Maffray (2016) for interval graphs of bounded clique number. We complement this result with an upper bound. This result is then used to prove the following two generalizations:
- every graph of treewidth less than $k$ that has a path of order $n$ contains an induced path of order at least $\frac{1}{4} (\log n)^{1/k}$;
- for every non-trivial graph class that is closed under topological minors there is a constant $d \in (0,1)$ such that every graph from this class that has a path of order $n$ contains an induced path of order at least $(\log n)^d$.
We also describe consequences of these results beyond graph classes that are closed under topological minors.

We investigate the computational complexity of a family of substructural logics with exchange and weakening but without contraction. With the aid of the techniques provided by Lazi\'c and Schmitz (2015), we show that the deducibility problem for full Lambek calculus with exchange and weakening ($\mathbf{FL}_{\mathbf{ew}}$) is TOWER-complete, where TOWER is one of the non-elementary complexity classes introduced by Schmitz (2016). The same complexity result holds even for deducibility in BCK-logic, i.e., the implicational fragment of $\mathbf{FL}_{\mathbf{ew}}$. We furthermore show the TOWER-completeness of the provability problem for elementary affine logic, which was proved to be decidable by Dal Lago and Martini (2004).

We study the problem of query evaluation on probabilistic graphs, namely, tuple-independent probabilistic databases over signatures of arity two. We focus on the class of queries closed under homomorphisms, or, equivalently, the infinite unions of conjunctive queries. Our main result states that the probabilistic query evaluation problem is #P-hard for all unbounded queries from this class. As bounded queries from this class are equivalent to a union of conjunctive queries, they are already classified by the dichotomy of Dalvi and Suciu (2012). Hence, our result and theirs imply a complete data complexity dichotomy, between polynomial time and #P-hardness, on evaluating homomorphism-closed queries over probabilistic graphs. This dichotomy covers in particular all fragments of infinite unions of conjunctive queries over arity-two signatures, such as negation-free (disjunctive) Datalog, regular path queries, and a large class of ontology-mediated queries. The dichotomy also applies to a restricted case of probabilistic query evaluation called generalized model counting, where fact probabilities must be 0, 0.5, or 1. We show the main result by reducing from the problem of counting the valuations of positive partitioned 2-DNF formulae, or from the source-to-target reliability problem in an undirected graph, depending on properties of minimal models for the query.
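The hardness source named in the abstract, counting valuations of positive partitioned 2-DNF formulae, also has a compact brute-force specification (illustrative names; exponential by design, which is exactly what the #P-hardness reductions exploit):

```python
from itertools import product

def count_pp2dnf(xs, ys, clauses):
    """Count satisfying valuations of a positive partitioned 2-DNF
    formula: a disjunction of conjunctions x_i AND y_j, where the x's
    and y's come from disjoint variable sets xs and ys."""
    count = 0
    for bits in product([False, True], repeat=len(xs) + len(ys)):
        val = dict(zip(list(xs) + list(ys), bits))
        if any(val[x] and val[y] for x, y in clauses):
            count += 1
    return count

# Formula (x1 AND y1) OR (x2 AND y2) over {x1, x2} and {y1, y2}:
# 16 valuations total, 9 falsify both clauses, so 7 satisfy it.
```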

The intersection graph induced by a set $\mathcal{D}$ of $n$ disks can be dense. It is thus natural to try to sparsify it while preserving connectivity. Unfortunately, sparse graphs can always be made disconnected by removing a small number of vertices. In this work, we present a sparsification algorithm that maintains connectivity between two disks in the computed graph, if the original graph remains ``well-connected'' even after removing an arbitrary ``attack'' set $\mathcal{B} \subseteq \mathcal{D}$ from both graphs. Thus, the new sparse graph has similar reliability to the original disk graph, and can withstand catastrophic failure of nodes while still providing a connectivity guarantee for the remaining graph. The new graph has near-linear complexity and can be constructed in near-linear time. The algorithm extends to any collection of shapes in the plane whose union complexity is near linear.
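For concreteness, the dense object the sparsification starts from can be built naively: two disks intersect iff the distance between their centers is at most the sum of their radii. This quadratic-time sketch (illustrative names, not the paper's algorithm) is precisely what one wants to avoid storing in full:

```python
from math import hypot

def disk_intersection_graph(disks):
    """Build the full intersection graph of disks given as (x, y, r)
    triples: vertices i < j are adjacent iff the two disks overlap.
    O(n^2) time and potentially Theta(n^2) edges."""
    edges = set()
    for i in range(len(disks)):
        x1, y1, r1 = disks[i]
        for j in range(i + 1, len(disks)):
            x2, y2, r2 = disks[j]
            if hypot(x2 - x1, y2 - y1) <= r1 + r2:
                edges.add((i, j))
    return edges
```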

Let $\kappa(s,t)$ denote the maximum number of internally disjoint $st$-paths in an undirected graph $G=(V,E)$. We consider designing a data structure that includes a list of cuts and answers the following query: given $s,t \in V$, determine whether $\kappa(s,t) \leq k$, and if so, return a pointer to an $st$-cut of size $\leq k$ (or to a minimum $st$-cut) in the list. A trivial data structure that includes a list of $n(n-1)/2$ cuts and requires $\Theta(kn^2)$ space can answer each query in $O(1)$ time. We obtain the following results. In the case when $G$ is $k$-connected, we show that $n$ cuts suffice, and that these cuts can be partitioned into $(2k+1)$ laminar families. Thus using space $O(kn)$ we can answer each min-cut query in $O(1)$ time, slightly improving and substantially simplifying a recent result of Pettie and Yin. We then extend this data structure to subset $k$-connectivity. In the general case we show that $(2k+1)n$ cuts suffice to return an $st$-cut of size $\leq k$, and that a list of size $k(k+2)n$ contains a minimum $st$-cut for every $s,t \in V$. Combining our subset $k$-connectivity data structure with the data structure of Hsu and Lu for checking $k$-connectivity, we give an $O(k^2 n)$ space data structure that returns an $st$-cut of size $\leq k$ in $O(\log k)$ time, while $O(k^3 n)$ space enables returning a minimum $st$-cut.
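The quantity being queried has a standard executable specification: by Menger's theorem, $\kappa(s,t)$ equals the max-flow in the unit-capacity split graph. The sketch below (illustrative names; plain Edmonds-Karp, not the paper's data structure) merely pins down the semantics that the precomputed cut lists answer in $O(1)$ or $O(\log k)$ time:

```python
from collections import deque

def kappa(adj, s, t):
    """Maximum number of internally vertex-disjoint s-t paths, via
    max-flow on the split graph: each vertex v becomes an arc
    v_in -> v_out of capacity 1, each undirected edge {u, v} becomes
    arcs u_out -> v_in and v_out -> u_in of capacity 1."""
    cap = {}

    def add_edge(u, v):
        cap.setdefault(u, {})[v] = 1            # forward capacity
        cap.setdefault(v, {}).setdefault(u, 0)  # residual back-edge

    for v in adj:
        add_edge((v, "in"), (v, "out"))
    for u in adj:
        for v in adj[u]:
            add_edge((u, "out"), (v, "in"))

    src, sink = (s, "out"), (t, "in")
    flow = 0
    while True:  # Edmonds-Karp: shortest augmenting paths by BFS
        parent = {src: None}
        queue = deque([src])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        v = sink
        while parent[v] is not None:  # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

On a 4-cycle, $\kappa(a,c) = 2$; on a path $a$-$b$-$c$, the single internal vertex $b$ forces $\kappa(a,c) = 1$.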

We show that the normal form of the Taylor expansion of a $\lambda$-term is isomorphic to its B\"ohm tree, improving Ehrhard and Regnier's original proof along three independent directions. First, we simplify the final step of the proof by following the left reduction strategy directly in the resource calculus, avoiding the introduction of an ad hoc abstract machine. Second, we introduce a groupoid of permutations of copies of arguments in a rigid variant of the resource calculus, and relate the coefficients of the Taylor expansion to this structure, whereas Ehrhard and Regnier worked with groups of permutations of occurrences of variables. Finally, we extend all the results to a nondeterministic setting: in contrast with previous attempts, we show that the uniformity property that was crucial in Ehrhard and Regnier's approach can be preserved in this setting.

This article fits in the area of research that investigates the application of topological duality methods to problems arising in theoretical computer science. One of the eventual goals of this approach is to derive results in computational complexity theory by studying appropriate topological objects which characterize them. The link which relates these two seemingly separate fields is logic, more precisely a subdomain of finite model theory known as logic on words. It allows for a description of complexity classes as certain families of languages, possibly non-regular, over a finite alphabet. Very little is known about the duality theory relative to fragments of first-order logic on words which lie outside the scope of regular languages. The contribution of our work is a detailed study of such a fragment. Fixing an integer $k \geq 1$, we consider the Boolean algebra $\mathcal{B}\Sigma_1[\mathcal{N}^{u}_k]$. It corresponds to the fragment of logic on words consisting of Boolean combinations of sentences defined by using a block of at most $k$ existential quantifiers, letter predicates, and uniform numerical predicates of arity $l \in \{1,...,k\}$. We give a detailed study of the dual space of this Boolean algebra, for any $k \geq 1$, and provide several characterizations of its points. In the particular case where $k=1$, we are able to construct a family of ultrafilter equations which characterize the Boolean algebra $\mathcal{B} \Sigma_1[\mathcal{N}^{u}_1]$. We use topological methods to prove that these equations are sound and complete with respect to this Boolean algebra.

We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to guarantee both optimism and convergence of the associated value iteration scheme. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$ which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first horizon-free regret bound beyond the finite-horizon MDP setting.
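The fixed point that EB-SSP's optimistic scheme targets is the standard SSP Bellman equation $V(s) = \min_a [\, c(s,a) + \sum_{s'} P(s' \mid s,a)\, V(s')\,]$ with $V(\text{goal}) = 0$. A plain, non-optimistic value iteration on a toy MDP (function and state names are ours, not the paper's) illustrates the underlying computation:

```python
def ssp_value_iteration(P, c, goal, iters=1000):
    """Plain value iteration for a stochastic shortest path MDP.
    P[s][a] maps next-states to probabilities, c[s][a] is the cost,
    and the goal state is absorbing with value 0. No exploration
    bonus or transition skewing, unlike EB-SSP itself."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        for s in P:
            if s == goal:
                continue  # V(goal) stays 0
            V[s] = min(
                c[s][a] + sum(p * V[s2] for s2, p in P[s][a].items())
                for a in P[s]
            )
    return V

# Toy MDP: "risky" costs 1 and reaches the goal with prob. 0.5,
# "safe" costs 3 and reaches it surely. Optimal value: 1 / 0.5 = 2.
P = {"s0": {"risky": {"goal": 0.5, "s0": 0.5}, "safe": {"goal": 1.0}},
     "goal": {}}
c = {"s0": {"risky": 1.0, "safe": 3.0}}
```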

In order to facilitate general users' access to knowledge graphs, an increasing effort is being devoted to constructing graph-structured queries from given natural language questions. At the core of the construction is to deduce the structure of the target query and determine the vertices/edges which constitute the query. Existing query construction methods rely on question understanding and conventional graph-based algorithms, which leads to inefficiency and degraded performance on complex natural language questions over large-scale knowledge graphs. In this paper, we focus on this problem and propose a novel framework built on recent knowledge graph embedding techniques. Our framework first encodes the underlying knowledge graph into a low-dimensional embedding space by leveraging generalized local knowledge graphs. Given a natural language question, the learned embedding representations of the knowledge graph are utilized to compute the query structure and assemble vertices/edges into the target query. Extensive experiments were conducted on the benchmark dataset, and the results demonstrate that our framework outperforms state-of-the-art baseline models in both effectiveness and efficiency.

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, a novel linear program formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show (i) that computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
