Path graphs are intersection graphs of paths in a tree. In this paper we give a "good characterization" of path graphs, namely, we prove that path graph membership is in $NP\cap CoNP$ without resorting to existing polynomial time algorithms. The characterization is given in terms of the collection of the \emph{attachedness graphs} of a graph, a novel device to deal with the connected components of a graph after the removal of clique separators. On the one hand, the characterization refines and simplifies the characterization of path graphs due to Monma and Wei [C.L. Monma and V.K. Wei, Intersection Graphs of Paths in a Tree, J. Combin. Theory Ser. B, 41:2 (1986) 141--181], which we build on, by reducing a constrained vertex coloring problem defined on the \emph{attachedness graphs} to a vertex 2-coloring problem on the same graphs. On the other hand, the characterization allows us to exhibit two exhaustive lists of obstructions to path graph membership, in the form of minimal forbidden induced/partial 2-edge colored subgraphs, in each of the \emph{attachedness graphs}.
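
The reduction to vertex 2-coloring is the heart of the characterization. The actual instances carry 2-edge-colored constraints that the following sketch ignores, but as a minimal illustration of the target problem, assuming a plain undirected graph given as an edge list, here is the standard BFS check for whether a proper vertex 2-coloring exists:

```python
from collections import deque

def two_colorable(n, edges):
    """Return a proper 2-coloring of a graph on vertices 0..n-1
    (i.e., a bipartition), or None if no such coloring exists."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle: no proper 2-coloring
    return color
```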

Related content

We study the problem of robustly estimating the parameter $p$ of an Erd\H{o}s-R\'enyi random graph on $n$ nodes, where a $\gamma$ fraction of nodes may be adversarially corrupted. After showing the deficiencies of canonical estimators, we design a computationally-efficient spectral algorithm which estimates $p$ up to accuracy $\tilde O(\sqrt{p(1-p)}/n + \gamma\sqrt{p(1-p)} /\sqrt{n}+ \gamma/n)$ for $\gamma < 1/60$. Furthermore, we give an inefficient algorithm with similar accuracy for all $\gamma <1/2$, the information-theoretic limit. Finally, we prove a nearly-matching statistical lower bound, showing that the error of our algorithms is optimal up to logarithmic factors.
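
The abstract contrasts canonical estimators with a robust spectral one. The sketch below is not that algorithm; it is a much cruder one-sided heuristic, assuming corruptions inflate degrees and that $A$ is a symmetric 0/1 adjacency matrix with zero diagonal: drop the $\gamma n$ highest-degree nodes and average the surviving entries.

```python
import numpy as np

def robust_edge_density(A, gamma):
    """Crude robust estimate of p for an Erdos-Renyi graph whose
    adjacency matrix A may have a gamma fraction of corrupted nodes.
    NOT the paper's spectral algorithm: simply discard the gamma*n
    highest-degree nodes and average the remaining off-diagonal entries."""
    n = A.shape[0]
    k = int(np.ceil(gamma * n))
    degrees = A.sum(axis=1)
    keep = np.argsort(degrees)[: n - k]   # drop suspiciously dense nodes
    sub = A[np.ix_(keep, keep)]
    m = len(keep)
    return sub.sum() / (m * (m - 1))      # mean over ordered off-diagonal pairs
```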

We consider the problem of designing a succinct data structure for path graphs (a proper subclass of chordal graphs and a proper superclass of interval graphs) with $n$ vertices while supporting degree, adjacency, and neighborhood queries efficiently. We provide two solutions for this problem. Our first data structure is succinct and occupies $n \log n + o(n \log n)$ bits while answering adjacency queries in $O(\log n)$ time, and neighborhood and degree queries in $O(d \log^2 n)$ time, where $d$ is the degree of the queried vertex. Our second data structure answers adjacency queries faster at the expense of slightly more space. More specifically, we provide an $O(n \log^2 n)$-bit data structure that supports adjacency queries in $O(1)$ time and neighborhood queries in $O(d \log n)$ time, where $d$ is the degree of the queried vertex. Central to both data structures is the use of the classical heavy path decomposition, followed by careful bookkeeping using, among others, an orthogonal range search data structure, which may be of independent interest for designing succinct data structures for other graph classes. It is the use of the results of Acan et al. in the second data structure that permits a simple and efficient implementation at the expense of more space.
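
A minimal sketch of the classical heavy path decomposition that both data structures are built around; the succinct encodings and the orthogonal range search bookkeeping are omitted, and the `children[v]` array representation of the rooted tree is an assumption of this sketch:

```python
def heavy_path_decomposition(children, root=0):
    """Decompose a rooted tree into heavy paths.  children[v] lists the
    children of v.  Returns head[v], the topmost vertex of the heavy path
    containing v; any root-to-leaf walk meets O(log n) distinct paths."""
    n = len(children)
    size = [1] * n
    stack = [(root, False)]
    while stack:                      # iterative post-order for subtree sizes
        v, done = stack.pop()
        if done:
            for c in children[v]:
                size[v] += size[c]
        else:
            stack.append((v, True))
            for c in children[v]:
                stack.append((c, False))
    head = [root] * n
    stack = [root]
    while stack:                      # extend each path along the heavy child
        v = stack.pop()
        if children[v]:
            heavy = max(children[v], key=lambda c: size[c])
            for c in children[v]:
                head[c] = head[v] if c is heavy else c
                stack.append(c)
    return head
```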

The greedy spanner in a low-dimensional Euclidean space is a fundamental geometric construction that has been studied extensively over three decades, as it possesses the two most basic properties of a good spanner: constant maximum degree and constant lightness. Recently, Eppstein and Khodabandeh showed that the greedy spanner in $\mathbb{R}^2$ admits a sublinear separator in a strong sense: any subgraph of $k$ vertices of the greedy spanner in $\mathbb{R}^2$ has a separator of size $O(\sqrt{k})$. Their technique is inherently planar and does not extend to higher dimensions. They left showing the existence of a small separator for the greedy spanner in $\mathbb{R}^d$ for any constant $d\geq 3$ as an open problem. In this paper, we resolve the problem of Eppstein and Khodabandeh by showing that any subgraph of $k$ vertices of the greedy spanner in $\mathbb{R}^d$ has a separator of size $O(k^{1-1/d})$. We introduce a new technique that yields a simple sufficient condition, which we dub $\tau$-lankiness, for a geometric graph to have a sublinear separator: a geometric graph is $\tau$-lanky if any ball of radius $r$ cuts at most $\tau$ edges of length at least $r$ in the graph. We show that any $\tau$-lanky geometric graph of $n$ vertices in $\mathbb{R}^d$ has a separator of size $O(\tau n^{1-1/d})$. We then derive our main result by showing that the greedy spanner is $O(1)$-lanky. We in fact obtain a more general result that applies to unit ball graphs and point sets of low fractal dimension in $\mathbb{R}^d$. Our technique naturally extends to doubling metrics. We use the $\tau$-lanky characterization to show that there exists a $(1+\epsilon)$-spanner for doubling metrics of dimension $d$ with constant maximum degree and a separator of size $O(n^{1-\frac{1}{d}})$; this resolves an open problem posed by Abam and Har-Peled a decade ago.
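
For reference, the greedy spanner under discussion is the classical construction sketched below: scan point pairs by increasing distance and add an edge only when the current graph falls short of the stretch bound. This quadratic-pair sketch assumes Euclidean distances via `math.dist` and is meant to pin down the object, not to be an efficient implementation:

```python
import heapq
from itertools import combinations
from math import dist

def spanner_dist(adj, s, goal, cutoff):
    """Dijkstra from s to goal, abandoning paths longer than cutoff."""
    best = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > best.get(u, float('inf')) or d > cutoff:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd <= cutoff and nd < best.get(v, float('inf')):
                best[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

def greedy_spanner(points, t):
    """Classical greedy t-spanner: visit pairs by increasing distance and
    add an edge only if the current spanner distance exceeds t * |pq|."""
    n = len(points)
    adj = [[] for _ in range(n)]
    edges = []
    for d_pq, p, q in sorted((dist(points[p], points[q]), p, q)
                             for p, q in combinations(range(n), 2)):
        if spanner_dist(adj, p, q, t * d_pq) > t * d_pq:
            adj[p].append((q, d_pq))
            adj[q].append((p, d_pq))
            edges.append((p, q))
    return edges
```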

We study the problem of \emph{dynamic regret minimization} in $K$-armed Dueling Bandits under non-stationary or time-varying preferences. This is an online learning setup where the agent chooses a pair of items at each round and observes only a relative binary `win-loss' feedback for this pair, sampled from an underlying preference matrix at that round. We first study the problem of static-regret minimization for adversarial preference sequences and design an efficient algorithm with $O(\sqrt{KT})$ high probability regret. We next use similar algorithmic ideas to propose an efficient and provably optimal algorithm for dynamic-regret minimization under two notions of non-stationarity. In particular, we establish $\tilde O(\sqrt{SKT})$ and $\tilde O(V_T^{1/3}K^{1/3}T^{2/3})$ dynamic-regret guarantees, where $S$ is the total number of `effective switches' in the underlying preference relations and $V_T$ is a measure of `continuous-variation' non-stationarity. The complexity of these problems had not been studied prior to this work, despite the prevalence of non-stationary environments in real-world systems. We justify the optimality of our algorithms by proving matching lower bound guarantees under both of the above-mentioned notions of non-stationarity. Finally, we corroborate our results with extensive simulations and compare the efficacy of our algorithms against state-of-the-art baselines.
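
A toy rendering of the interaction protocol, assuming a fixed preference matrix and uniform pair selection; it illustrates only the `win-loss' feedback model, not the regret-optimal algorithms of the paper:

```python
import numpy as np

def duel(P, i, j, rng):
    """One round of feedback: item i beats item j with probability P[i, j]."""
    return rng.random() < P[i, j]

# Toy loop on a stationary K x K preference matrix where arm 0 is best;
# pair selection here is uniform exploration, not the paper's algorithm.
rng = np.random.default_rng(0)
K, T = 5, 2000
P = np.full((K, K), 0.5)
P[0, :], P[:, 0] = 0.7, 0.3
np.fill_diagonal(P, 0.5)
wins, plays = np.zeros((K, K)), np.zeros((K, K))
for t in range(T):
    i, j = rng.integers(K), rng.integers(K)   # choose a pair of items
    wins[i, j] += duel(P, i, j, rng)
    plays[i, j] += 1
# empirical preference of arm 0 over every arm (0.5 where unobserved)
print(np.divide(wins, plays, out=np.full_like(wins, 0.5), where=plays > 0)[0])
```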

We introduce a new data structure for answering connectivity queries in undirected graphs subject to batched vertex failures. Precisely, given any graph $G$ and integer $k$, we can in fixed-parameter time construct a data structure that can later be used to answer queries of the form: ``are vertices $s$ and $t$ connected via a path that avoids vertices $u_1, \ldots, u_k$?'' in time $2^{2^{O(k)}}$. In the terminology of the literature on data structures, this gives the first deterministic data structure for connectivity under vertex failures where, for every fixed number of failures, all operations can be performed in constant time. With the aim of understanding the power and the limitations of our new techniques, we prove an algorithmic meta-theorem for the recently introduced separator logic, which extends first-order logic with atoms for connectivity under vertex failures. We prove that the model-checking problem for separator logic is fixed-parameter tractable on every class of graphs that excludes a fixed topological minor. We also show a weak converse. This implies that, from the point of view of parameterized complexity, under standard complexity assumptions, the frontier of tractability of separator logic is almost exactly delimited by classes excluding a fixed topological minor. The backbone of our proof relies on a decomposition theorem of Cygan et al. [SICOMP '19], which provides a tree decomposition of a given graph into bags that are unbreakable. Crucially, unbreakability allows us to reduce separator logic to plain first-order logic within each bag individually. We design our model-checking algorithm using dynamic programming over the tree decomposition, where the transition at each bag amounts to running a suitable model-checking subprocedure for plain first-order logic. This approach is robust enough to also provide efficient enumeration of queries expressed in separator logic.
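
For contrast, the naive way to answer one such query is a BFS that skips the failed vertices, paying $O(m)$ per query; the point of the data structure above is to replace this with constant time per query for every fixed $k$ after fixed-parameter preprocessing. A minimal sketch, assuming the graph is given as an adjacency-list dict:

```python
from collections import deque

def connected_avoiding(adj, s, t, failed):
    """Baseline for ``are s and t connected avoiding u_1, ..., u_k?'':
    a BFS that never enters the failed vertices."""
    if s in failed or t in failed:
        return False
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen and v not in failed:
                seen.add(v)
                queue.append(v)
    return False
```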

We consider the query complexity of finding a local minimum of a function defined on a graph, where at most $k$ rounds of interaction with the oracle are allowed. Rounds model parallel settings, where each query takes resources to complete and is executed on a separate processor. Thus the query complexity in $k$ rounds informs how many processors are needed to achieve a parallel time of $k$. We focus on the $d$-dimensional grid $[n]^d$, where the dimension $d$ is a constant, and consider two regimes for the number of rounds: constant and polynomial in $n$. We give algorithms and lower bounds that characterize the trade-off between the number of rounds of adaptivity and the query complexity of local search. When the number of rounds $k$ is constant, we show that the query complexity of local search in $k$ rounds is $\Theta\bigl(n^{\frac{d^{k+1} - d^k}{d^k - 1}}\bigr)$, for both deterministic and randomized algorithms. When the number of rounds is polynomial, i.e. $k = n^{\alpha}$ for $0 < \alpha < d/2$, the randomized query complexity is $\Theta\left(n^{d-1 - \frac{d-2}{d}\alpha}\right)$ for all $d \geq 5$. For $d=3$ and $d=4$, we show that the same upper bound expression holds and give almost matching lower bounds. The local search analysis also enables us to characterize the query complexity of computing a Brouwer fixed point in rounds. Our proof technique for lower bounding the query complexity in rounds may be of independent interest as an alternative to the classical relational adversary method of Aaronson from the fully adaptive setting.
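
As a baseline for what the round-restricted algorithms improve on, here is fully sequential steepest-descent local search on $[n]^d$, where every probe is its own round; coordinates are 0-indexed tuples in this sketch:

```python
def local_search(f, n, d, start):
    """Steepest descent on the grid [n]^d: repeatedly move to the smallest
    neighbor until no neighbor is smaller.  Fully sequential, so it uses
    one round per query; the paper asks how few batched rounds suffice."""
    queries = 0
    x = start
    while True:
        fx = f(x)
        queries += 1
        best, best_val = x, fx
        for i in range(d):                      # probe the up-to-2d neighbors
            for delta in (-1, 1):
                if 0 <= x[i] + delta < n:
                    y = x[:i] + (x[i] + delta,) + x[i + 1:]
                    fy = f(y)
                    queries += 1
                    if fy < best_val:
                        best, best_val = y, fy
        if best == x:
            return x, queries                   # local minimum found
        x = best
```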

We first design an $\mathcal{O}(n^2)$ algorithm for finding a maximum induced matching (MIM) in permutation graphs given their permutation models, based on dynamic programming with the aid of the sweep line technique. With the support of the disjoint-set data structure, we improve the complexity to $\mathcal{O}(m + n)$. Consequently, we extend this result to give an $\mathcal{O}(m + n)$ algorithm for the same problem in trapezoid graphs. By combining our algorithms with the current best graph recognition algorithms, we can solve the MIM problem in permutation and trapezoid graphs in linear and $\mathcal{O}(n^2)$ time, respectively. Our results substantially improve on the best known $\mathcal{O}(mn)$ algorithm for the maximum induced matching problem in both graph classes, proposed by Habib et al.
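
The disjoint-set structure credited with the $\mathcal{O}(m + n)$ improvement is the standard union-find below, with path compression and union by size; how it is woven into the sweep line is the paper's contribution and not shown here:

```python
class DisjointSet:
    """Union-find with path compression and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:           # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                    # attach smaller tree under larger
        self.size[ra] += self.size[rb]
        return True
```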

Cyber-physical systems (CPS) data privacy protection during sharing, aggregating, and publishing is a challenging problem. Several privacy protection mechanisms have been developed in the literature to protect sensitive data from adversarial analysis and eliminate the risk of re-identifying the original properties of shared data. However, most of the existing solutions have drawbacks, such as (i) lack of a proper vulnerability characterization model to accurately identify where privacy is needed, (ii) ignoring data providers' privacy preferences, (iii) using uniform privacy protection, which may create inadequate privacy for some providers while overprotecting others, and (iv) lack of a comprehensive privacy quantification model assuring data privacy preservation. To address these issues, we propose a personalized privacy preference framework that characterizes and quantifies CPS vulnerabilities as well as ensuring privacy. First, we introduce a Standard Vulnerability Profiling Library (SVPL) by arranging the nodes of an energy CPS from most to least vulnerable based on their privacy loss. Based on this model, we present our personalized differential privacy (PDP) framework, in which Laplace noise is added according to each node's selected privacy preferences. Finally, combining these two proposed methods, we demonstrate that our privacy characterization and quantification model can attain better privacy preservation by eliminating the trade-off between privacy, utility, and the risk of losing information.
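
A generic sketch of the noise-addition step, assuming the standard Laplace mechanism with one privacy budget epsilon per node; the paper's vulnerability-driven choice of those budgets via the SVPL is not modeled here:

```python
import numpy as np

def personalized_laplace(values, epsilons, sensitivity=1.0, seed=None):
    """Laplace mechanism with a per-node privacy budget: node i's reading
    is released with noise of scale sensitivity / epsilons[i], so nodes
    demanding stronger privacy (smaller epsilon) receive more noise.
    A generic sketch, not the paper's exact scheme."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    scales = sensitivity / np.asarray(epsilons, dtype=float)
    return values + rng.laplace(loc=0.0, scale=scales, size=values.shape)

# Example: three nodes, the middle one most privacy-demanding.
print(personalized_laplace([10.0, 12.0, 9.5], [1.0, 0.1, 0.5], seed=0))
```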

Multi-hop logical reasoning is an established problem in the field of representation learning on knowledge graphs (KGs). It subsumes both one-hop link prediction and other more complex types of logical queries. Existing algorithms operate only on classical, triple-based graphs, whereas modern KGs often employ a hyper-relational modeling paradigm. In this paradigm, typed edges may have several key-value pairs known as qualifiers that provide fine-grained context for facts. In queries, this context modifies the meaning of relations and usually reduces the answer set. Hyper-relational queries are often observed in real-world KG applications, yet existing approaches for approximate query answering cannot make use of qualifier pairs. In this work, we bridge this gap and extend the multi-hop reasoning problem to hyper-relational KGs, allowing us to tackle this new type of complex query. Building upon recent advancements in Graph Neural Networks and query embedding techniques, we study how to embed and answer hyper-relational conjunctive queries, propose a method to answer such queries, and demonstrate in our experiments that qualifiers improve query answering on a diverse set of query patterns.
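
A symbolic toy example of how qualifiers shrink an answer set, with made-up entities; the paper's contribution is answering such queries approximately in embedding space rather than by the exact matching shown here:

```python
# A hyper-relational fact: a typed edge plus key-value qualifiers.
facts = [
    ("einstein", "educated_at", "eth_zurich", {"degree": "bsc"}),
    ("einstein", "educated_at", "uni_zurich", {"degree": "phd"}),
]

def answers(head, relation, qualifiers):
    """Tails t such that (head, relation, t) holds under the qualifiers."""
    return [t for h, r, t, q in facts
            if h == head and r == relation
            and all(q.get(k) == v for k, v in qualifiers.items())]

print(answers("einstein", "educated_at", {}))                  # both universities
print(answers("einstein", "educated_at", {"degree": "phd"}))   # only uni_zurich
```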

Knowledge graphs (KGs) store highly heterogeneous information about the world in the structure of a graph, and are useful for tasks such as question answering and reasoning. However, they often contain errors and are missing information. Vibrant research in KG refinement has worked to resolve these issues, tailoring techniques to either detect specific types of errors or complete a KG. In this work, we introduce a unified solution to KG characterization by formulating the problem as unsupervised KG summarization with a set of inductive, soft rules, which describe what is normal in a KG and thus can be used to identify what is abnormal, whether it be strange or missing. Unlike first-order logic rules, our rules are labeled, rooted graphs, i.e., patterns that describe the expected neighborhood around a (seen or unseen) node, based on its type and the information in the KG. Stepping away from traditional support/confidence-based rule mining techniques, we propose KGist, Knowledge Graph Inductive SummarizaTion, which learns a summary of inductive rules that best compress the KG according to the Minimum Description Length principle, a formulation that we are the first to use in the context of KG rule mining. We apply our rules to three large KGs (NELL, DBpedia, and Yago) and to tasks such as compression, various types of error detection, and identification of incomplete information. We show that KGist outperforms task-specific, supervised and unsupervised baselines in error detection and incompleteness identification (identifying the location of up to 93% of missing entities, over 10% more than baselines), while also being efficient for large knowledge graphs.
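
A toy two-part MDL cost in the spirit of this formulation, assuming a fixed per-assertion cost for the exceptions that the rules fail to explain; KGist's actual encoding of labeled rooted rules is far richer:

```python
import math

def description_length(model_bits, exceptions, total_assertions):
    """Two-part MDL cost: L(model) + L(data | model).  Toy version: the
    model costs its encoding size in bits, and every unexplained assertion
    is transmitted verbatim at a fixed per-assertion cost."""
    bits_per_exception = math.ceil(math.log2(max(total_assertions, 2)))
    return model_bits + exceptions * bits_per_exception

# A rule set is worth keeping only if it compresses the KG:
no_model   = description_length(0,     exceptions=10_000, total_assertions=10_000)
with_rules = description_length(5_000, exceptions=1_200,  total_assertions=10_000)
print(with_rules < no_model)   # True: these rules pay for themselves
```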
