
We study a variant of Min Cost Flow in which the flow needs to be connected. Specifically, in the Connected Flow problem one is given a directed graph $G$, along with a set of demand vertices $D \subseteq V(G)$ with demands $\mathsf{dem}: D \rightarrow \mathbb{N}$, and costs and capacities for each edge. The goal is to find a minimum cost flow that satisfies the demands, respects the capacities and induces a (strongly) connected subgraph. This generalizes previously studied problems like the (Many Visits) TSP. We study the parameterized complexity of Connected Flow parameterized by $|D|$, by the treewidth $tw$, and by the vertex cover size $k$ of $G$, and provide: (i) $\mathsf{NP}$-completeness already for the case $|D|=2$ with only unit demands and capacities and no edge costs, and fixed-parameter tractability if there are no capacities, (ii) a fixed-parameter tractable $\mathcal{O}^{\star}(k^{\mathcal{O}(k)})$ time algorithm for the general case, and a kernel of size polynomial in $k$ for the special case of Many Visits TSP, (iii) a $|V(G)|^{\mathcal{O}(tw)}$ time algorithm and a matching $|V(G)|^{o(tw)}$ time lower bound conditioned on the Exponential Time Hypothesis. To achieve some of our results, we significantly extend an approach by Kowalik et al.~[ESA'20].
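
For concreteness, here is a minimal sketch of one natural way to write the problem down, assuming (in the spirit of Many Visits TSP) that a demand $\mathsf{dem}(v)$ prescribes the amount of flow passing through $v$; the paper's exact definition may differ in details:

\begin{align*}
  \min\quad & \sum_{e \in E(G)} c(e)\, f(e) \\
  \text{s.t.}\quad & \sum_{e \in \delta^-(v)} f(e) = \sum_{e \in \delta^+(v)} f(e) & \forall v \in V(G) & \quad\text{(flow conservation)}\\
  & \sum_{e \in \delta^-(v)} f(e) = \mathsf{dem}(v) & \forall v \in D & \quad\text{(demands)}\\
  & 0 \le f(e) \le \mathsf{cap}(e),\quad f(e) \in \mathbb{N} & \forall e \in E(G) & \quad\text{(capacities)}\\
  & \text{the support } \{e : f(e) > 0\} \text{ induces a strongly connected subgraph.}
\end{align*}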

Related Content

We show that the problem of determining the feasibility of quadratic systems over $\mathbb{C}$, $\mathbb{R}$, and $\mathbb{Z}$ requires exponential time. This separates P and NP over these fields/rings in the BCSS model of computation.

We lay the foundations of a new theory for algorithms and computational complexity by parameterizing the instances of a computational problem as a moduli scheme. Considering the geometry of the scheme associated to 3-SAT, we separate P and NP.

In this paper, we investigate the parameterized complexity of model checking for Dependence Logic which is a well studied logic in the area of Team Semantics. We start with a list of nine immediate parameterizations for this problem, namely: the number of disjunctions (i.e., splits)/(free) variables/universal quantifiers, formula-size, the tree-width of the Gaifman graph of the input structure, the size of the universe/team, and the arity of dependence atoms. We present a comprehensive picture of the parameterized complexity of model checking and obtain a division of the problem into tractable and various intractable degrees. Furthermore, we also consider the complexity of the most important variants (data and expression complexity) of the model checking problem by fixing parts of the input.
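
As a small, standard team-semantics illustration (general background, not an example taken from the paper) of the dependence atoms whose arity appears as a parameter above: a team is a set of assignments, and a dependence atom $=\!(x,y)$ holds in a team $X$ iff any two assignments in $X$ that agree on $x$ also agree on $y$. For instance,

$$X=\{s_1,s_2,s_3\}\quad\text{with}\quad s_1:(x,y)\mapsto(0,1),\ \ s_2:(x,y)\mapsto(0,1),\ \ s_3:(x,y)\mapsto(1,0)$$

satisfies $=\!(x,y)$, whereas replacing $s_2$ by $(x,y)\mapsto(0,2)$ falsifies it, since $s_1$ and $s_2$ would then agree on $x$ but disagree on $y$.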

In a previous paper we have presented a CEGAR approach for the verification of parameterized systems with an arbitrary number of processes organized in an array or a ring. The technique is based on the iterative computation of parameterized invariants, i.e., infinite families of invariants for the infinitely many instances of the system. Safety properties are proved by checking that every global configuration of the system satisfying all parameterized invariants also satisfies the property; we have shown that this check can be reduced to the satisfiability problem for Monadic Second Order logic on words, which is decidable. A strong limitation of the approach is that processes can only have a fixed number of variables with a fixed finite range. In particular, they cannot use variables with range [0,N-1], where N is the number of processes, which appear in many standard distributed algorithms. In this paper, we extend our technique to this case. While the check whether a safety property is inductive relative to a computed set of invariants becomes undecidable in this setting, we show how to reduce it to checking satisfiability of a first-order formula. We report on experiments showing that automatic first-order theorem provers can still perform this check for a collection of non-trivial examples. Additionally, we can give small sets of readable invariants for these checks.
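
As a purely hypothetical illustration of the kind of statement involved (not an example from the paper): in a ring of $N$ processes where each process $i$ holds an index variable $succ[i]$ with range $[0,N-1]$, one infinite family of invariants, one per instance size $N$, could be

$$\forall i \in [0, N-1]:\ succ[i] = (i + 1) \bmod N,$$

and checking that a safety property follows from such invariants then becomes a validity question (equivalently, unsatisfiability of the negation) for a first-order formula over the process indices.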

The arboricity of a graph is the minimum number of forests needed to cover all edges of the graph. In this paper, we study the arboricity from a game theoretic perspective and consider cost sharing in the minimum forest cover problem. We introduce the arboricity game as a cooperative cost game defined on a graph, where the players are edges and the cost of each coalition is the arboricity of the subgraph induced by the coalition. We study properties of the core and propose an efficient algorithm for computing the nucleolus when the core is nonempty. To compute the nucleolus, we introduce the prime partition of a graph, which decomposes the edge set into a partially ordered set defined from all minimal densest minors and their invariant precedence relation. For any core allocation of arboricity games, all edges from the same set of the prime partition share the same value. Thus the prime partition enables us to simplify the variables and constraints in linear programs of Maschler's scheme and to compute the nucleolus of arboricity games in polynomial time when the core is nonempty. Moreover, the prime partition provides a graph decomposition analogous to the celebrated core decomposition and the density-friendly decomposition, which may be of independent interest.
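
In standard cooperative-game notation, and using the classical Nash-Williams formula for arboricity (general background, not specific to this paper), the game and its core read

$$c(S) \;=\; \max_{\substack{H \subseteq G[S]\\ |V(H)| \ge 2}} \left\lceil \frac{|E(H)|}{|V(H)|-1} \right\rceil \quad \text{for a coalition } S \subseteq E(G),$$

$$\mathrm{core} \;=\; \Big\{ x \in \mathbb{R}^{E(G)} \;:\; x(E(G)) = c(E(G)) \ \text{ and }\ x(S) \le c(S) \ \text{ for all } S \subseteq E(G) \Big\},$$

where $G[S]$ is the subgraph induced by the edge set $S$ and $x(S) = \sum_{e \in S} x_e$.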

Path graphs are intersection graphs of paths in a tree. In this paper we give a "good characterization" of path graphs, namely, we prove that path graph membership is in $NP\cap CoNP$ without resorting to existing polynomial time algorithms. The characterization is given in terms of the collection of the \emph{attachedness graphs} of a graph, a novel device to deal with the connected components of a graph after the removal of clique separators. On the one hand, the characterization refines and simplifies the characterization of path graphs due to Monma and Wei [C.L. Monma and V.K. Wei, Intersection Graphs of Paths in a Tree, J. Combin. Theory Ser. B, 41:2 (1986) 141--181], which we build on, by reducing a constrained vertex coloring problem defined on the \emph{attachedness graphs} to a vertex 2-coloring problem on the same graphs. On the other hand, the characterization allows us to exhibit two exhaustive lists of obstructions to path graph membership in the form of minimal forbidden induced/partial 2-edge colored subgraphs in each of the \emph{attachedness graphs}.

Retraction note: After posting the manuscript on arXiv, we were informed by Erik Jan van Leeuwen that both results were known and they appeared in his thesis~[vL09]. A PTAS for MDS is Theorem 6.3.21 on page 79 and a PTAS for MCDS is Theorem 6.3.31 on page 82. The techniques used are very similar. He noted that the idea for dealing with the connected version using a constant number of extra layers in the shifting technique appeared not only in Zhang et al.~[ZGWD09] but also in his 2005 paper [vL05]. Finally, van Leeuwen also informed us that the open problem that we posted has been resolved by Marx~[Mar06, Mar07], who showed that an efficient PTAS for MDS does not exist [Mar06] and that, under ETH, the running time of $n^{O(1/\epsilon)}$ is best possible [Mar07]. We thank Erik Jan van Leeuwen for the information and we regret that we made this mistake. Abstract before retraction: We present two (exponentially) faster PTASs for dominating set problems in unit disk graphs. Given a geometric representation of a unit disk graph, our PTASs that find $(1+\epsilon)$-approximate solutions to the Minimum Dominating Set (MDS) and the Minimum Connected Dominating Set (MCDS) of the input graph run in time $n^{O(1/\epsilon)}$. This can be compared to the best known $n^{O(1/\epsilon \log {1/\epsilon})}$-time PTAS by Nieberg and Hurink~[WAOA'05] for MDS that only uses graph structures and an $n^{O(1/\epsilon^2)}$-time PTAS for MCDS by Zhang, Gao, Wu, and Du~[J Glob Optim'09]. Our key ingredients are improved dynamic programming algorithms that depend exponentially on more essential 1-dimensional "widths" of the problems.

Throughput is a main performance objective in communication networks. This paper considers a fundamental maximum throughput routing problem -- the all-or-nothing multicommodity flow (ANF) problem -- in arbitrary directed graphs and in the practically relevant but challenging setting where demands can be (much) larger than the edge capacities. Hence, in addition to assigning requests to valid flows for each routed commodity, an admission control mechanism is required which prevents overloading the network when routing commodities. We make several contributions. On the theoretical side we obtain substantially improved bi-criteria approximation algorithms for this NP-hard problem. We present two non-trivial linear programming relaxations and show how to convert their fractional solutions into integer solutions via randomized rounding. One is an exponential-size formulation (solvable in polynomial time using a separation oracle) that considers a ``packing'' view and allows a more flexible approach, while the other is a generalization of the compact LP formulation of Liu et al. (INFOCOM'19) that allows for easy solving via standard LP solvers. We obtain a polynomial-time randomized algorithm that yields an arbitrarily good approximation on the weighted throughput while violating the edge capacity constraints by only a small multiplicative factor. We also describe a deterministic rounding algorithm by derandomization, using the method of pessimistic estimators. We complement our theoretical results with a proof of concept empirical evaluation.
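
A minimal sketch of the randomized-rounding step described above, under simplifying assumptions: each commodity receives a single fractional acceptance value $x_i \in [0,1]$ from the LP solution and is admitted all-or-nothing with that probability (the actual algorithm additionally rounds the fractional flows and scales probabilities to bound the capacity violation):

import random

def round_admissions(fractional_values, seed=None):
    # Randomized rounding for all-or-nothing admission control:
    # admit commodity i independently with probability x_i from the LP.
    rng = random.Random(seed)
    return {i for i, x in fractional_values.items() if rng.random() < x}

# Illustrative use: the LP routes commodity "a" to extent 0.9 and "b" to extent 0.2.
admitted = round_admissions({"a": 0.9, "b": 0.2}, seed=7)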

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical thresholds for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define the value range. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art searching algorithms, feasibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
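
As a minimal, generic illustration of one of the baseline search strategies such a survey covers (plain random search; the objective and search space below are made up for the example):

import random

def random_search(objective, space, n_trials=20, seed=0):
    # Random-search HPO: sample each hyper-parameter uniformly from its range,
    # evaluate the objective, and keep the best configuration seen so far.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy stand-in for a validation loss over two hyper-parameters.
space = {"learning_rate": (1e-4, 1e-1), "dropout": (0.0, 0.5)}
best_cfg, best_loss = random_search(lambda c: (c["learning_rate"] - 0.01) ** 2 + c["dropout"], space)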

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required. Only this novel formulation enables the computation of convex combinations of valid mappings, as the formulation needs to account for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notion of extraction orders and extraction width and show that our algorithms have exponential runtime in the request graphs' maximal width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
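
A minimal sketch of the rounding idea described above, assuming the convex combination has already been computed, i.e., each request comes with a list of (valid mapping, weight) pairs whose weights sum to at most 1 (illustrative names; the actual algorithm repeats such rounding and checks profit and resource violations):

import random

def sample_mappings(decompositions, seed=None):
    # For each request, pick one valid mapping with probability equal to its
    # weight in the convex combination; with the remaining probability mass
    # the request is rejected (left out of the returned embedding).
    rng = random.Random(seed)
    embedding = {}
    for request, weighted_mappings in decompositions.items():
        r, acc = rng.random(), 0.0
        for mapping, weight in weighted_mappings:
            acc += weight
            if r < acc:
                embedding[request] = mapping
                break
    return embedding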
