
For a given graph $G$, a depth-first search (DFS) tree $T$ of $G$ is an $r$-rooted spanning tree such that every edge of $G$ is either an edge of $T$ or connects a \textit{descendant} and an \textit{ancestor} in $T$. A graph $G$ together with a DFS tree is called a \textit{lineal topology} $\mathcal{T} = (G, r, T)$. Sam et al. (2023) initiated the study of the parameterized complexity of the \textsc{Min-LLT} and \textsc{Max-LLT} problems, which ask, given a graph $G$ and an integer $k\geq 0$, whether $G$ has a DFS tree with at most $k$ and at least $k$ leaves, respectively. In particular, they showed that for the dual parameterization, where the tasks are to find DFS trees with at most $n-k$ and at least $n-k$ leaves, respectively, these problems are fixed-parameter tractable when parameterized by $k$. However, the proofs were based on Courcelle's theorem, making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with $O(k^3)$ vertices. In particular, this implies FPT algorithms running in $k^{O(k)}\cdot n^{O(1)}$ time. We achieve these results by making use of an $O(k)$-sized vertex cover structure associated with each problem. This also allows us to obtain polynomial kernels for \textsc{Min-LLT} and \textsc{Max-LLT} under the structural parameterization by the vertex cover number.
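
As a concrete illustration (not taken from the paper), the following minimal Python sketch builds one DFS tree of a graph and counts its leaves; \textsc{Min-LLT} and \textsc{Max-LLT} ask whether some DFS tree achieves at most, respectively at least, $k$ leaves, which this greedy construction makes no attempt to optimize.

```python
def dfs_tree_leaves(adj, r):
    """Build one DFS tree of `adj` rooted at r and count its leaves.

    adj maps each vertex to an iterable of neighbours.  Marking vertices
    at pop time (not push time) is what guarantees a genuine DFS tree:
    every non-tree edge then joins an ancestor/descendant pair, which is
    the defining property of a lineal topology.
    """
    parent, visited = {}, set()
    stack = [(r, None)]
    while stack:
        v, p = stack.pop()
        if v in visited:
            continue                      # reached earlier on a deeper branch
        visited.add(v)
        parent[v] = p
        for u in adj[v]:
            if u not in visited:
                stack.append((u, v))
    internal = {p for p in parent.values() if p is not None}
    num_leaves = sum(1 for v in parent if v not in internal)
    return parent, num_leaves

# Triangle: every DFS tree is a path, hence has exactly one leaf.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert dfs_tree_leaves(adj, 0)[1] == 1
```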

Related content

A $c$-labeling $\phi: V(G) \rightarrow \{1, 2, \hdots, c \}$ of a graph $G$ is distinguishing if, for every non-trivial automorphism $\pi$ of $G$, there is some vertex $v$ such that $\phi(v) \neq \phi(\pi(v))$. The distinguishing number $D(G)$ of $G$ is the smallest $c$ such that $G$ has a distinguishing $c$-labeling. We consider a compact version of Tyshkevich's graph decomposition theorem in which trivial components are maximally combined to form a complete graph or a graph of isolated vertices. Suppose the compact canonical decomposition of $G$ is $G_{k} \circ G_{k-1} \circ \cdots \circ G_1 \circ G_0$. We prove that $\phi$ is a distinguishing labeling of $G$ if and only if the restriction of $\phi$ to $V(G_i)$ is a distinguishing labeling of $G_i$ for each $i = 0, \hdots, k$. Thus, $D(G) = \max \{D(G_i) : i = 0, \hdots, k \}$. We then present an algorithm that computes the distinguishing number of a unigraph in linear time.
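
For intuition, here is a brute-force check of the distinguishing property straight from the definition (an illustration only; it is exponential in $|V|$, unlike the paper's linear-time algorithm for unigraphs).

```python
from itertools import permutations

def is_distinguishing(vertices, edges, labels):
    """Return True iff `labels` breaks every non-trivial automorphism
    of the graph (vertices, edges).  Brute force over all permutations."""
    E = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        pi = dict(zip(vertices, perm))
        if all(pi[v] == v for v in vertices):
            continue                      # skip the identity
        if any(frozenset((pi[u], pi[v])) not in E for u, v in E):
            continue                      # pi is not an automorphism
        if all(labels[v] == labels[pi[v]] for v in vertices):
            return False                  # pi survives the labeling
    return True

# Path a-b-c: swapping a and c is the only non-trivial automorphism;
# labelling a,b with 1 and c with 2 breaks it, so D(P_3) = 2.
V, E = ["a", "b", "c"], [("a", "b"), ("b", "c")]
assert is_distinguishing(V, E, {"a": 1, "b": 1, "c": 2})
assert not is_distinguishing(V, E, {"a": 1, "b": 2, "c": 1})
```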

We solve a problem of Dujmovi\'c and Wood (2007) by showing that a complete convex geometric graph on $n$ vertices cannot be decomposed into fewer than $n-1$ star-forests, each consisting of noncrossing edges. This bound is clearly tight. We also discuss similar questions for abstract graphs.
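
The upper bound that makes $n-1$ tight is the classical decomposition of the abstract complete graph $K_n$ into $n-1$ stars, sketched below for illustration (this is not taken from the paper).

```python
def star_decomposition(n):
    """Partition the edges of K_n into n-1 stars: the i-th star is
    centred at vertex i and covers the edges {i, j} for all j > i."""
    return [[(i, j) for j in range(i + 1, n)] for i in range(n - 1)]

stars = star_decomposition(5)
assert sum(len(s) for s in stars) == 5 * 4 // 2   # all 10 edges, each once
```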

Given a closed simple polygon $P$, we say two points $p,q$ see each other if the segment $pq$ is fully contained in $P$. The art gallery problem seeks a minimum-size set $G\subset P$ of guards that sees $P$ completely. The only currently correct algorithm to solve the art gallery problem exactly uses algebraic methods and is attributed to Sharir. As the art gallery problem is $\exists\mathbb{R}$-complete, it seems unlikely that algebraic methods can be avoided without additional assumptions. In this paper, we introduce the notion of vision stability. To describe vision stability, consider an enhanced guard that can see "around the corner" by an angle of $\delta$, or a diminished guard whose vision is "blocked" by an angle of $\delta$ by reflex vertices. A polygon $P$ has vision stability $\delta$ if the optimal number of enhanced guards needed to guard $P$ equals the optimal number of diminished guards. We argue that most relevant polygons are vision-stable. We describe a one-shot algorithm that computes an optimal guard set for vision-stable polygons in polynomial time, solving a single integer program; it is guaranteed to find the optimal solution for every vision-stable polygon. We also implemented an iterative algorithm and show that its practical performance is slower than, but comparable with, other state-of-the-art algorithms. Our iterative algorithm is inspired by and closely follows the one-shot algorithm, but delays several steps and computes them only when deemed necessary. Given a chord $c$ of a polygon, we denote by $n(c)$ the number of vertices visible from $c$. The chord-width of a polygon is the maximum of $n(c)$ over all possible chords $c$. The set of vision-stable polygons admits an FPT algorithm when parameterized by the chord-width. Furthermore, the one-shot algorithm runs in FPT time when parameterized by the number of reflex vertices.
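
As a small illustration of the visibility predicate underlying the art gallery problem, here is a sketch using the third-party shapely library (an assumption for this example; it is not part of the paper's implementation).

```python
from shapely.geometry import Polygon, LineString

def sees(p, q, poly: Polygon) -> bool:
    """p and q see each other iff the segment pq stays inside the closed
    polygon.  `covers` (unlike `contains`) accepts segments that run
    along the boundary."""
    return poly.covers(LineString([p, q]))

# L-shaped polygon: points deep in different arms cannot see each other.
L = Polygon([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
assert sees((0.5, 0.5), (1.9, 0.5), L)
assert not sees((1.9, 0.5), (0.5, 1.9), L)
```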

We consider the Shortest Odd Path problem, where given an undirected graph $G$, a weight function on its edges, and two vertices $s$ and $t$ in $G$, the aim is to find an $(s,t)$-path with odd length and, among all such paths, of minimum weight. For the case when the weight function is conservative, i.e., when every cycle has non-negative total weight, the complexity of the Shortest Odd Path problem had been open for 20 years, and was recently shown to be NP-hard. We give a polynomial-time algorithm for the special case when the weight function is conservative and the set $E^-$ of negative-weight edges forms a single tree. Our algorithm exploits the strong connection between Shortest Odd Path and the problem of finding two internally vertex-disjoint paths between two terminals in an undirected edge-weighted graph. It also relies on solving an intermediary problem variant called Shortest Parity-Constrained Odd Path where for certain edges we have parity constraints on their position along the path. Also, we exhibit two FPT algorithms for solving Shortest Odd Path in graphs with conservative weight functions. The first FPT algorithm is parameterized by $|E^-|$, the number of negative edges, or more generally, by the maximum size of a matching in the subgraph of $G$ spanned by $E^-$. Our second FPT algorithm is parameterized by the treewidth of $G$.
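
For contrast with the hard setting studied here, the sketch below solves a much easier relaxation: non-negative weights and walks (which may repeat vertices) instead of paths, via Dijkstra on (vertex, parity) states. It is illustrative only; it neither handles conservative negative weights nor enforces simple paths.

```python
import heapq

def shortest_odd_walk(adj, s, t):
    """Minimum weight of an odd-length (s,t)-walk, non-negative weights.

    adj: dict v -> list of (u, w) with w >= 0.  Each state (v, par)
    records the parity of the number of edges used so far.
    """
    INF = float("inf")
    dist = {(s, 0): 0}
    pq = [(0, s, 0)]
    while pq:
        d, v, par = heapq.heappop(pq)
        if d > dist.get((v, par), INF):
            continue                      # stale queue entry
        for u, w in adj[v]:
            nd, npar = d + w, par ^ 1     # each edge flips the parity
            if nd < dist.get((u, npar), INF):
                dist[(u, npar)] = nd
                heapq.heappush(pq, (nd, u, npar))
    return dist.get((t, 1), INF)          # parity 1 = odd number of edges

adj = {0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(0, 1), (1, 1)]}
print(shortest_odd_walk(adj, 0, 2))       # 1: the direct edge is odd
```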

Out-of-distribution (OOD) detection trains a model on an in-distribution (ID) dataset and asks it to decide whether an input image comes from an unknown class. Considerable effort has been invested in designing OOD detection methods based on either convolutional neural networks or transformers. However, zero-shot OOD detection methods driven by CLIP, which require only the ID class names, have received less attention. This paper presents a novel method, CLIP saying no (CLIPN), which empowers the logic of saying "no" within CLIP. Our key motivation is to equip CLIP with the capability of distinguishing OOD and ID samples using positive-semantic and negation-semantic prompts. Specifically, we design a novel learnable "no" prompt and a "no" text encoder to capture negation semantics within images. Subsequently, we introduce two loss functions, the image-text binary-opposite loss and the text semantic-opposite loss, which teach CLIPN to associate images with "no" prompts, thereby enabling it to identify unknown samples. Furthermore, we propose two threshold-free inference algorithms that perform OOD detection using the negation semantics of the "no" prompts and the "no" text encoder. Experimental results on 9 benchmark datasets (3 ID and 6 OOD) for the OOD detection task demonstrate that CLIPN, based on ViT-B-16, outperforms 7 widely used algorithms by at least 2.34% in AUROC and 11.64% in FPR95 for zero-shot OOD detection on ImageNet-1K. CLIPN can serve as a solid foundation for effectively leveraging CLIP in downstream OOD tasks. The code is available at https://github.com/xmed-lab/CLIPN.
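
To make the inference idea concrete, here is a schematic NumPy sketch of scoring an image against paired positive and "no" prompts. The combination rule below is one plausible illustration, not the paper's exact CTW/ATD algorithms, and all names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ood_score(img, yes_txt, no_txt, tau=0.01):
    """Schematic zero-shot OOD score from paired prompts.

    img:     (d,) L2-normalised image embedding.
    yes_txt: (C, d) embeddings of standard class prompts.
    no_txt:  (C, d) embeddings of negated ("no") class prompts.
    Larger score = more likely OOD.
    """
    p_cls = softmax(yes_txt @ img / tau)                 # ID class posterior
    # per-class probability that the "no" prompt beats the "yes" prompt
    p_no = softmax(np.stack([yes_txt @ img, no_txt @ img]) / tau, axis=0)[1]
    return float((p_cls * p_no).sum())                   # expected "no" mass
```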

We give an algorithm that, given an $n$-vertex graph $G$ and an integer $k$, in time $2^{O(k)} n$ either outputs a tree decomposition of $G$ of width at most $2k + 1$ or determines that the treewidth of $G$ is larger than $k$. This is the first 2-approximation algorithm for treewidth that is faster than the known exact algorithms, and in particular improves upon the previous best approximation ratio of 5 in time $2^{O(k)} n$ given by Bodlaender et al. [SIAM J. Comput., 45 (2016)]. Our algorithm works by applying incremental improvement operations to a tree decomposition, using an approach inspired by a proof of Bellenbaum and Diestel [Comb. Probab. Comput., 11 (2002)].

The $k$-dimensional Weisfeiler-Leman ($k$-WL) algorithm is a simple combinatorial algorithm that was originally designed as a graph isomorphism heuristic. It naturally finds applications in Babai's quasipolynomial-time isomorphism algorithm, practical isomorphism solvers, and algebraic graph theory. However, it also has surprising connections to other areas such as logic, proof complexity, combinatorial optimization, and machine learning. The algorithm iteratively computes a coloring of the $k$-tuples of vertices of a graph. Since F\"urer's linear lower bound [ICALP 2001], it has been an open question whether there is a super-linear lower bound on the iteration number of $k$-WL on graphs. We answer this question affirmatively, establishing an $\Omega(n^{k/2})$ lower bound for all $k$.
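
For reference, the $k=1$ case (classic colour refinement) is easy to state in code; the sketch below (an illustration, not from the paper) also reports the iteration number, the quantity the lower bound concerns. $k$-WL works the same way on $k$-tuples of vertices.

```python
def wl1(adj):
    """1-dimensional Weisfeiler-Leman (colour refinement).

    Each round recolours every vertex by its colour plus the multiset of
    its neighbours' colours; refinement stops when no new colour classes
    appear.  Returns (stable colouring, number of refining rounds).
    """
    color = {v: 0 for v in adj}
    rounds = 0
    while True:
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: ids[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(color.values())):
            return color, rounds          # fixed point reached
        color, rounds = new, rounds + 1

# A cycle is vertex-transitive, so refinement stabilises immediately.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(wl1(cycle6))                        # one colour class, 0 rounds
```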

The Ramsey number $R(s, t)$ is the minimum number of nodes $n$ such that every undirected simple graph of order $n$ contains a clique of order $s$ or an independent set of order $t$. This paper explores the application of a best-first search algorithm and reinforcement learning (RL) techniques to find counterexamples to specific Ramsey numbers. We incrementally improve over prior search methods such as random search by introducing a graph vectorization and a deep neural network (DNN)-based heuristic that gauges the likelihood of a graph being a counterexample. The paper also proposes algorithmic optimizations that confine the search to polynomial runtime. This paper does not aim to present new counterexamples, but rather introduces and evaluates a framework supporting Ramsey counterexample exploration using other heuristics. Code and methods are made available through a PyPI package and a GitHub repository.
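
Verifying a candidate counterexample is straightforward, if exponential in $s$ and $t$; a brute-force Python check is shown below for illustration (small $n$ only, unlike the learned search heuristics discussed above).

```python
from itertools import combinations

def is_counterexample(n, edges, s, t):
    """True iff the graph on n vertices witnesses R(s, t) > n, i.e. it
    contains no clique of size s and no independent set of size t."""
    E = {frozenset(e) for e in edges}
    has_clique = any(
        all(frozenset((u, v)) in E for u, v in combinations(S, 2))
        for S in combinations(range(n), s))
    has_indep = any(
        all(frozenset((u, v)) not in E for u, v in combinations(S, 2))
        for S in combinations(range(n), t))
    return not has_clique and not has_indep

# The 5-cycle shows R(3, 3) > 5: no triangle, no independent set of 3.
C5 = [(i, (i + 1) % 5) for i in range(5)]
assert is_counterexample(5, C5, 3, 3)
```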

In this letter, we propose a novel construction of type-II $Z$-complementary code sets (ZCCSs) of arbitrary sequence length, using the Kronecker product between a complete complementary code (CCC) and mutually orthogonal unimodular sequences. In this construction, Barker sequences are used to reduce the row-sequence peak-to-mean envelope power ratio (PMEPR) for some specific sequence lengths, and the column-sequence PMEPR for some specific code sizes. The column-sequence PMEPR of the proposed type-II ZCCS is upper bounded by a value smaller than $2$. The proposed construction also yields new lengths of type-II $Z$-complementary pairs (ZCPs) and type-II $Z$-complementary sets (ZCSs). Furthermore, the PMEPR of these new type-II ZCPs is lower than that of existing type-II ZCPs.
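
As a numerical illustration (not the paper's construction), the PMEPR of a sequence can be estimated by sampling the OFDM-style envelope on a fine grid via a zero-padded FFT, and np.kron gives the Kronecker product used in such constructions; the Walsh rows below are toy stand-ins for CCC rows.

```python
import numpy as np

def pmepr(seq, oversample=16):
    """Estimate the PMEPR of a (complex) sequence: the peak of the
    envelope |sum_k a_k e^{2*pi*i*k*t}|^2, sampled by a zero-padded FFT,
    divided by the mean power sum_k |a_k|^2."""
    a = np.asarray(seq, dtype=complex)
    n = len(a)
    padded = np.zeros(oversample * n, dtype=complex)
    padded[:n] = a
    envelope = np.abs(np.fft.fft(padded)) ** 2
    return envelope.max() / (n * np.mean(np.abs(a) ** 2))

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
walsh = np.array([[1, 1], [1, -1]])       # toy orthogonal rows
code = np.kron(walsh[0], barker13)        # Kronecker-product construction
print(pmepr(barker13), pmepr(code))
```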

Multiple TSP ($\mathrm{mTSP}$) is an important variant of $\mathrm{TSP}$ in which a set of $k$ salespersons together visit a set of $n$ cities. The $\mathrm{mTSP}$ problem arises in many real-life applications such as vehicle routing. Rothkopf introduced another variant of $\mathrm{TSP}$ called many-visits TSP ($\mathrm{MV\mbox{-}TSP}$), where a request $r(v)\in \mathbb{Z}_+$ is given for each city $v$ and a single salesperson needs to visit each city $v$ exactly $r(v)$ times and then return to the starting point. A combination of $\mathrm{mTSP}$ and $\mathrm{MV\mbox{-}TSP}$, called many-visits multiple TSP $(\mathrm{MV\mbox{-}mTSP})$, was studied by B\'erczi, Mnich, and Vincze, who gave approximation algorithms for various variants of $\mathrm{MV\mbox{-}mTSP}$. In this work, we show a simple linear programming (LP) based reduction that converts an LP-based algorithm for $\mathrm{mTSP}$ into an LP-based algorithm for $\mathrm{MV\mbox{-}mTSP}$ with the same approximation factor. We apply this reduction to improve or match the current best approximation factors of several variants of $\mathrm{MV\mbox{-}mTSP}$. Our reduction shows that adding visit requests $r(v)$ to $\mathrm{mTSP}$ does $\textit{not}$ make the problem harder to approximate, even when $r(v)$ is exponential in the number of vertices. To apply the reduction, we either use existing LP-based algorithms for $\mathrm{mTSP}$ variants or show that several existing combinatorial algorithms for $\mathrm{mTSP}$ variants can be interpreted as LP-based algorithms, which allows us to apply our reduction to them as well and obtain the improved guarantees.
