
We develop a linear-time algorithm for finding the diameter of an asteroidal triple-free (AT-free) graph. Furthermore, we update the definition of polar pairs and develop new properties of polar pairs for (weak) dominating pair graphs. We prove that a simplicial vertex of a general graph can be computed in $O(n^2)$ time, based on an existing reduction to the problem of computing the diameter of an AT-free graph. In this way, we improve the best-known running times for several graph-theoretic problems.
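
As a concrete baseline (not the linear-time algorithm of this paper), the classic double-sweep heuristic illustrates why AT-free graphs are friendly to diameter computation: two successive breadth-first sweeps give a lower bound on the diameter in any graph, and LexBFS-based sweeps are known to come within 1 of the exact diameter on AT-free graphs. A minimal Python sketch, with illustrative names:

```python
from collections import deque

def bfs_far(adj, s):
    """BFS from s; return (distance dict, a vertex farthest from s)."""
    dist = {s: 0}
    far = s
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                q.append(v)
    return dist, far

def double_sweep(adj, start):
    """Two sweeps: always a lower bound on the diameter; on AT-free
    graphs, LexBFS-based sweeps are known to get within 1 of it."""
    _, a = bfs_far(adj, start)   # sweep 1: a vertex far from start
    dist, b = bfs_far(adj, a)    # sweep 2: eccentricity of a
    return dist[b]

# The path on 4 vertices as adjacency lists: diameter 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(double_sweep(adj, 1))  # -> 3
```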

Related content

Linearity and dependency analyses are key to several applications in computer science, especially in resource management and information flow control. What connects these analyses is that both need to model at least two different worlds with constrained mutual interaction. Though linearity and dependency analyses address similar problems, they are carried out using different methods. For linearity analysis, type systems employ the comonadic exponential modality from Girard's linear logic. For dependency analysis, type systems employ the monadic modality from Moggi's computational metalanguage. Owing to this methodological difference, a unification of the two analyses, though theoretically and practically desirable, is not straightforward. Fortunately, with recent advances in graded-context type systems, it has been realized that linearity and dependency analyses can be viewed through the same lens. However, existing graded-context type systems fall short of a unification of linearity and dependency analyses: though their linearity analysis is general, their dependency analysis is limited, primarily because the graded modality they employ is comonadic but not monadic. In this paper, we address this limitation by systematically extending existing graded-context type systems so that the graded modality is both comonadic and monadic. This extension enables us to unify linearity analysis with a general dependency analysis. We present a unified Linear Dependency Calculus, LDC, which analyses linearity and dependency using the same mechanism in an arbitrary Pure Type System. We show that LDC is a general calculus for both analyses by subsuming the standard calculi for the individual analyses.
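
The comonadic/monadic contrast can be made concrete with a small dynamic analogue of DCC-style dependency tracking (this is our illustration, not LDC itself; all names are ours): a `Labeled` wrapper whose monadic `bind` joins dependency levels, so a computation's label records everything it depended on, and elimination is only permitted for sufficiently privileged observers.

```python
# Dependency levels ordered LOW <= HIGH; the join is max.
LOW, HIGH = 0, 1

class Labeled:
    """Dynamic analogue of a monadic modality T_l A: a value at level l."""
    def __init__(self, level, value):
        self.level, self.value = level, value

    def bind(self, f):
        """Monadic bind: the result depends on everything this value
        depended on, so its level is joined (max) with ours."""
        out = f(self.value)
        return Labeled(max(self.level, out.level), out.value)

def observe(labeled, observer_level):
    """Eliminating the modality is only sound for observers at or above
    the value's level -- the 'protected' reading of dependency analysis."""
    if labeled.level > observer_level:
        raise PermissionError("flow from a higher dependency level")
    return labeled.value

secret = Labeled(HIGH, 42)
public = Labeled(LOW, 1)
mixed = public.bind(lambda x: secret.bind(lambda y: Labeled(LOW, x + y)))
assert mixed.level == HIGH        # the dependency on `secret` is recorded
print(observe(mixed, HIGH))       # -> 43; observe(mixed, LOW) would raise
```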

We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\epsilon}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms that list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
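
For reference, the baseline that the small-$d$ lower bound is matched against is the trivial exact algorithm evaluating all $O(n^2)$ pairwise distances. A short sketch (function names are ours; any $\ell_p$ or string metric can be plugged in):

```python
def one_center(points, dist):
    """Exact 1-center by brute force: O(n^2) distance evaluations.
    Returns the point of `points` minimizing its maximum distance
    to the others -- the quadratic baseline the HSC bound matches."""
    best, best_ecc = None, float("inf")
    for p in points:
        ecc = max((dist(p, q) for q in points if q is not p), default=0)
        if ecc < best_ecc:
            best, best_ecc = p, ecc
    return best, best_ecc

# l_1 metric on tuples; an edit-distance function plugs in the same way.
l1 = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
print(one_center([(0, 0), (3, 1), (1, 4)], l1))
```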

A recent line of work on VC set systems in minor-free (undirected) graphs, starting from Li and Parter, who constructed a new VC set system for planar graphs, has given surprising algorithmic results. In this work, we initiate a more systematic study of VC set systems for minor-free graphs and their applications in both undirected graphs and directed graphs (a.k.a. digraphs). More precisely:
- We propose a new variant of the Li-Parter set system for undirected graphs.
- We extend our set system to $K_h$-minor-free digraphs and show that its VC dimension is $O(h^2)$.
- We show that the system of directed balls in minor-free digraphs has VC dimension at most $h-1$.
- On the negative side, we show that the VC set system constructed from shortest-path trees of planar digraphs does not have bounded VC dimension.
The highlight of our work is the results for digraphs, as we are not aware of prior algorithmic work on constructing and exploiting VC set systems for digraphs.
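
To make the VC-dimension claims concrete, here is a brute-force computation of the VC dimension of a finite set system: the largest subset of the ground set shattered by the system. This is our illustrative code, exponential in the ground set and intended only for small examples, not the constructions of the paper:

```python
from itertools import combinations

def vc_dimension(ground, sets):
    """VC dimension of a finite set system: the size of a largest
    Y, a subset of `ground`, such that every subset of Y arises as
    Y & S for some S in `sets` (Y is 'shattered'). Exponential."""
    sets = [frozenset(s) for s in sets]
    dim = 0
    for k in range(1, len(ground) + 1):
        for Y in combinations(ground, k):
            Y = frozenset(Y)
            traces = {Y & S for S in sets}
            if len(traces) == 2 ** k:   # all 2^k traces occur: shattered
                dim = k
                break
        else:
            return dim                  # no set of size k is shattered
    return dim

# Unit balls in the path a-b-c: {a,b}, {a,b,c}, {b,c}. VC dimension 1.
print(vc_dimension("abc", ["ab", "abc", "bc"]))  # -> 1
```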

In the average-case $k$-SUM problem, given $r$ integers chosen uniformly at random from $\{0,\ldots,M-1\}$, the objective is to find a set of $k$ numbers that sum to $0$ modulo $M$ (this set is called a solution). In the related $k$-XOR problem, given $r$ uniformly random Boolean vectors of length $\log{M}$, the objective is to find a set of $k$ of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of $k$, $r$, and $M$. The dense regime of $M \leq r^k$, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of $M\gg r^k$, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings. We study the planted $k$-SUM and $k$-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As $M$ increases past $r^k$, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.
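
As an illustration of the planted setting, one natural way to generate an instance is to sample $r$ uniform values and then resample one coordinate of a random $k$-subset so that the subset sums to $0$ modulo $M$. A sketch (ours; the paper's precise planted distribution may differ in details):

```python
import random

def planted_ksum(r, k, M, seed=None):
    """Sample r uniform integers mod M, then plant a k-subset summing
    to 0 mod M by fixing one coordinate of a random k-subset. One
    natural planting, for illustration only."""
    rng = random.Random(seed)
    inst = [rng.randrange(M) for _ in range(r)]
    sol = rng.sample(range(r), k)
    inst[sol[-1]] = -sum(inst[i] for i in sol[:-1]) % M
    return inst, sorted(sol)

inst, sol = planted_ksum(r=16, k=3, M=10**9, seed=1)
assert sum(inst[i] for i in sol) % (10**9) == 0   # the planted solution
```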

The (Perfect) Matching Cut problem is to decide if a graph $G$ has a (perfect) matching cut, i.e., a (perfect) matching that is also an edge cut of $G$. Both Matching Cut and Perfect Matching Cut are known to be NP-complete, leading to many complexity results for both problems on special graph classes. A perfect matching cut is also a matching cut with the maximum possible number of edges. To increase our understanding of the relationship between the two problems, we introduce the Maximum Matching Cut problem, which is to determine a largest matching cut in a graph. We generalize and unify known polynomial-time algorithms for Matching Cut and Perfect Matching Cut restricted to graphs of diameter at most $2$ and to $(P_6 + sP_2)$-free graphs. We also show that the complexity of Maximum Matching Cut differs from the complexities of Matching Cut and Perfect Matching Cut by proving NP-hardness of Maximum Matching Cut for $2P_3$-free graphs of diameter 3 and radius 2 and for line graphs. In this way, we obtain full dichotomies of Maximum Matching Cut for graphs of bounded diameter, graphs of bounded radius, and $H$-free graphs.
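
The definitions can be checked directly by brute force: a bipartition $(A, V \setminus A)$ yields a matching cut exactly when the crossing edges pairwise share no endpoint. The following exponential-time sketch (ours, not one of the paper's algorithms) returns the size of a largest matching cut of a small graph:

```python
from itertools import combinations

def max_matching_cut(n, edges):
    """Brute force over vertex bipartitions: the cut E(A, V - A) is a
    matching cut iff no two crossing edges share an endpoint. Returns
    the size of a largest matching cut, or None if there is none."""
    best = None
    for r in range(1, n):
        for A in combinations(range(n), r):
            A = set(A)
            cut = [(u, v) for u, v in edges if (u in A) != (v in A)]
            ends = [w for e in cut for w in e]
            if cut and len(ends) == len(set(ends)):   # cut is a matching
                best = max(best or 0, len(cut))
    return best

# The 4-cycle: cutting opposite edges gives a perfect matching cut.
print(max_matching_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 2
```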

If $X$ is a subset of vertices of a graph $G$, then vertices $u$ and $v$ are $X$-visible if there exists a shortest $u,v$-path $P$ such that $V(P)\cap X \subseteq \{u,v\}$. If every two vertices from $X$ are $X$-visible, then $X$ is a mutual-visibility set. The mutual-visibility number of $G$ is the cardinality of a largest mutual-visibility set of $G$; it has already been investigated. In this paper, a variety of mutual-visibility problems is introduced, based on which natural pairs of vertices are required to be $X$-visible. This yields the total, the dual, and the outer mutual-visibility numbers. We first show that these graph invariants are related to each other and to the classical mutual-visibility number, and then we prove that the three newly introduced mutual-visibility problems are computationally difficult. In view of this result, we compute or bound their values for several graph classes, including grid graphs and tori. We conclude the study by comparing the values of these parameters, based on the computations we made for some specific families.
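
The definition of $X$-visibility is easy to make executable: $u$ and $v$ are $X$-visible iff deleting $X \setminus \{u,v\}$ does not increase the $u,v$-distance. A small BFS-based sketch (ours):

```python
from collections import deque

def bfs_dist(adj, s, banned=frozenset()):
    """BFS distances from s, never entering `banned` vertices."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist and v not in banned:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def x_visible(adj, X, u, v):
    """u, v are X-visible iff some shortest u,v-path avoids X internally,
    i.e. d(u,v) is unchanged after deleting X - {u, v}."""
    d = bfs_dist(adj, u).get(v)
    d_avoiding = bfs_dist(adj, u, banned=set(X) - {u, v}).get(v)
    return d is not None and d == d_avoiding

def is_mutual_visibility_set(adj, X):
    return all(x_visible(adj, X, u, v) for u in X for v in X if u != v)
```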

A temporal graph is a graph in which edges are assigned a time label. A node u of a temporal graph is connected to a node v if there exists a path from u to v with increasing edge time labels. We consider the problem of assigning time labels to the edges of a digraph in order to maximize the total reachability of the resulting temporal graph (that is, the number of ordered pairs of nodes in which the first is connected to the second). In particular, we prove that this problem is NP-hard. We then conjecture that the problem is approximable within a constant approximation ratio. This conjecture is a consequence of the following graph-theoretic conjecture: any strongly connected directed graph with $n$ nodes admits an out-arborescence and an in-arborescence that are edge-disjoint, have the same root, and each spans $\Omega(n)$ nodes.
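
Total reachability itself is straightforward to compute once the labels are fixed: for each source, a single pass over the edges in increasing label order propagates earliest-arrival labels along increasing-label paths. A sketch (ours; it reads "increasing" as strictly increasing and assumes distinct labels for simplicity):

```python
from math import inf

def temporal_reachability(n, tedges):
    """Total reachability of a temporal digraph: the number of ordered
    pairs (u, v), u != v, joined by a path with increasing time labels.
    One pass per source over edges sorted by label (labels assumed
    distinct; equal labels would need batch handling)."""
    tedges = sorted(tedges, key=lambda e: e[2])   # (u, v, label)
    total = 0
    for s in range(n):
        ea = [inf] * n    # earliest last-label of an increasing path
        ea[s] = -inf      # the empty path ends "before" every label
        for u, v, t in tedges:
            if ea[u] < t and t < ea[v]:
                ea[v] = t
        total += sum(1 for v in range(n) if v != s and ea[v] < inf)
    return total

# Directed triangle 0->1->2->0 with labels 1, 2, 3.
print(temporal_reachability(3, [(0, 1, 1), (1, 2, 2), (2, 0, 3)]))  # -> 5
```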

An oriented graph has weak diameter at most $d$ if every non-adjacent pair of vertices is connected by a directed path of length at most $d$. The function $f_d(n)$ denotes the minimum number of arcs in an oriented graph on $n$ vertices having weak diameter at most $d$. It turns out that finding the exact value of $f_d(n)$ is a challenging problem even for $d = 2$. This function was introduced by Katona and Szemer\'edi [Studia Scientiarum Mathematicarum Hungarica, 1967], and several attempts were subsequently made to find its exact value by Zn\'am [Acta Fac. Rerum Natur. Univ. Comenian. Math. Publ., 1970], Dawes and Meijer [J. Combin. Math. and Combin. Comput., 1987], F\"uredi, Horak, Pareek and Zhu [Graphs and Combinatorics, 1998], and Kostochka, Luczak, Simonyi and Sopena [Discrete Mathematics and Theoretical Computer Science, 1999], each improving its best known upper bound. In that process, they also proved that $f_2(n)$ is asymptotically equal to $n\log_2 n$ and hence is an asymptotically increasing function. In this article, we prove that $f_2(n)$ is a strictly increasing function. Furthermore, we improve the best known upper bound on $f_2(n)$ and conjecture that it is tight.
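
For very small $n$, the value of $f_2(n)$ can be verified exhaustively. The sketch below (ours) enumerates all orientations of each vertex pair (no arc, forward, or backward) and checks one natural reading of weak diameter at most 2, namely that every pair is joined by an arc or a directed 2-path in at least one direction:

```python
from itertools import combinations, product

def weak_diam_le2(n, arcs):
    """One reading of weak diameter <= 2: every pair of vertices is
    joined, in some direction, by an arc or a directed 2-path."""
    A = set(arcs)
    two = {(u, v) for (u, w) in A for (x, v) in A if w == x and u != v}
    reach = A | two
    return all((u, v) in reach or (v, u) in reach
               for u, v in combinations(range(n), 2))

def f2(n):
    """Minimum arcs over all oriented graphs on n vertices with weak
    diameter <= 2: exhaustive over 3^(n choose 2) orientations, so
    only feasible for very small n."""
    best = None
    pairs = list(combinations(range(n), 2))
    for choice in product((None, 0, 1), repeat=len(pairs)):
        arcs = {(u, v) if c == 0 else (v, u)
                for (u, v), c in zip(pairs, choice) if c is not None}
        if weak_diam_le2(n, arcs):
            best = len(arcs) if best is None else min(best, len(arcs))
    return best

print(f2(3))  # -> 2 under this reading (e.g. the directed path 0->1->2)
```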

Two genomes over the same set of gene families form a canonical pair when each of them has exactly one gene from each family. Different distances between canonical genomes can be derived from a structure called the breakpoint graph, which represents the relation between the two given genomes as a collection of cycles of even length and paths. The breakpoint distance is then equal to $n - (c_2 + p_0/2)$, where $n$ is the number of genes, $c_2$ is the number of cycles of length 2, and $p_0$ is the number of paths of length 0. Similarly, when the considered rearrangements are those modeled by the double-cut-and-join (DCJ) operation, the rearrangement distance is $n - (c + p_e/2)$, where $c$ is the total number of cycles and $p_e$ is the total number of even paths. The distance formulation is a basic building block for several other combinatorial problems related to genome evolution and ancestral reconstruction, such as the median or the double distance. Interestingly, both the median and the double distance problems can be solved in polynomial time for the breakpoint distance, while they are NP-hard for the rearrangement distance. One way of exploring the complexity space between these two extremes is to consider the $\sigma_k$ distance, defined to be $n - [c_2 + c_4 + \ldots + c_k + (p_0 + p_2 + \ldots + p_k)/2]$, and to investigate the complexities of the median and double distance for the $\sigma_4$ distance, then the $\sigma_6$ distance, and so on. While for the median much effort has been made, in our and in other research groups, without any progress even for the $\sigma_4$ distance, for the double distance under the $\sigma_4$ and $\sigma_6$ distances we could devise linear-time algorithms, which we present here.
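
The $\sigma_k$ formula is simple to evaluate once the cycle and path lengths of the breakpoint graph are known. A sketch (the input encoding as two multisets of lengths is ours, for illustration):

```python
def sigma_k(n, cycle_lengths, path_lengths, k):
    """The sigma_k distance of the abstract:
    n - [c_2 + c_4 + ... + c_k + (p_0 + p_2 + ... + p_k)/2],
    computed from the multisets of cycle lengths and path lengths of
    the breakpoint graph. Cycles in the breakpoint graph all have even
    length; as k grows, sigma_k approaches the DCJ distance
    n - (c + p_e/2)."""
    c = sum(1 for L in cycle_lengths if L <= k)
    p = sum(1 for L in path_lengths if L % 2 == 0 and L <= k)
    return n - (c + p / 2)

# 5 genes; cycles of lengths 2, 2, 4 and paths of lengths 0, 2.
print(sigma_k(5, [2, 2, 4], [0, 2], k=4))  # -> 1.0
```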

Recent years have witnessed the enormous success of low-dimensional vector space representations of knowledge graphs to predict missing facts or find erroneous ones. Currently, however, it is not yet well understood how ontological knowledge, e.g., given as a set of (existential) rules, can be embedded in a principled way. To address this shortcoming, in this paper we introduce a framework based on convex regions, which can faithfully incorporate ontological knowledge into the vector space embedding. Our technical contribution is twofold. First, we show that some of the most popular existing embedding approaches are not capable of modelling even very simple types of rules. Second, we show that our framework can exactly represent ontologies that are expressed using so-called quasi-chained existential rules, in the sense that any set of facts induced by the vector space embedding is logically consistent and deductively closed with respect to the input ontology.
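
As a toy instance of the region-based idea (ours, far simpler than the paper's framework), take axis-aligned boxes as convex regions for unary predicates: a rule $A(x) \rightarrow B(x)$ holds in the embedding whenever the region of $A$ is contained in the region of $B$, so the facts induced by the embedding are automatically closed under the rule:

```python
import numpy as np

class Box:
    """Axis-aligned box as a toy convex region for a unary predicate."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)

    def contains_point(self, x):
        return bool(np.all(self.lo <= x) and np.all(x <= self.hi))

    def contains_box(self, other):
        """Region containment: if region(A) lies inside region(B), every
        entity embedded in A is also in B, so A(x) -> B(x) holds for
        all facts the embedding induces."""
        return bool(np.all(self.lo <= other.lo) and np.all(other.hi <= self.hi))

person = Box([0, 0], [10, 10])
employee = Box([2, 2], [6, 6])         # toy regions, chosen by hand
assert person.contains_box(employee)   # rule Employee(x) -> Person(x)
alice = np.array([3.0, 4.0])           # an entity embedding
assert employee.contains_point(alice) and person.contains_point(alice)
```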
