
Given a graph $G$ with a fixed vertex order $\prec$, one obtains a circle graph $H$ whose vertices are the edges of $G$ and where two such edges are adjacent if and only if their endpoints are pairwise distinct and alternate in $\prec$. Therefore, the problem of determining whether $G$ has a $k$-page book embedding with spine order $\prec$ is equivalent to deciding whether $H$ can be colored with $k$ colors. Deciding $k$-colorability of a circle graph is known to be NP-complete for $k \geq 4$ and trivial for $k \leq 2$. For $k = 3$, Unger (1992) claims an efficient algorithm that finds a 3-coloring in $O(n \log n)$ time, if it exists. Given a circle graph $H$, Unger's algorithm (1) constructs a 3-\textsc{Sat} formula $\Phi$ that is satisfiable if and only if $H$ admits a 3-coloring and (2) solves $\Phi$ by a backtracking strategy that relies on the structure imposed by the circle graph. However, the extended abstract omits several details, for which Unger refers to his PhD thesis (in German). In this paper we argue that Unger's algorithm for 3-coloring circle graphs is not correct and that 3-coloring circle graphs should be considered an open problem. We show that step (1) of Unger's algorithm is incorrect by exhibiting a circle graph whose formula $\Phi$ is satisfiable but that is not 3-colorable. We further show that Unger's backtracking strategy for solving $\Phi$ in step (2) may produce incorrect results, and we give empirical evidence that it exhibits a runtime behaviour that is not consistent with the claimed running time.
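The reduction in the first sentence is easy to make concrete. The sketch below (illustrative only; it assumes integer vertices whose natural order plays the role of the spine order $\prec$) builds the circle graph $H$ from the edge list of $G$; a $k$-page book embedding with this spine order then corresponds exactly to a proper $k$-coloring of $H$:

```python
from itertools import combinations

def circle_graph(edges):
    """Build the circle graph H: vertices are the edges of G, and two
    edges are adjacent iff their four endpoints are pairwise distinct
    and alternate in the spine order (here: natural order on integers)."""
    norm = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in norm}
    for e, f in combinations(norm, 2):
        a, b = e
        c, d = f
        # (a, b) and (c, d) alternate iff exactly one of c, d lies between a and b
        if len({a, b, c, d}) == 4 and (a < c < b < d or c < a < d < b):
            adj[e].add(f)
            adj[f].add(e)
    return adj
```

For the 4-cycle with both chords on spine order $0,1,2,3$, the only conflicting pair is the two chords $(0,2)$ and $(1,3)$: they cross, so they must go on different pages.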


Modern detection transformers (DETRs) use a set of object queries to predict a list of bounding boxes, sort them by their classification confidence scores, and select the top-ranked predictions as the final detection results for the given input image. A highly performant object detector requires accurate ranking for the bounding box predictions. For DETR-based detectors, the top-ranked bounding boxes suffer from less accurate localization quality due to the misalignment between classification scores and localization accuracy, thus impeding the construction of high-quality detectors. In this work, we introduce a simple and highly performant DETR-based object detector by proposing a series of rank-oriented designs, collectively called Rank-DETR. Our key contributions include: (i) a rank-oriented architecture design that can promote positive predictions and suppress the negative ones to ensure lower false positive rates, as well as (ii) a rank-oriented loss function and matching cost design that prioritizes predictions with more accurate localization during ranking to boost the AP under high IoU thresholds. We apply our method to improve the recent SOTA methods (e.g., H-DETR and DINO-DETR) and report strong COCO object detection results when using different backbones such as ResNet-$50$, Swin-T, and Swin-L, demonstrating the effectiveness of our approach. Code is available at \url{//github.com/LeapLabTHU/Rank-DETR}.
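The misalignment can be seen on a toy example (not the paper's method, purely an illustration): ranking two predictions by classification score alone puts the worse-localized box first, while a quality-aware score (here score times IoU, with the IoU computed against the ground truth only for the sake of the illustration) ranks the better-localized box first:

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Two predictions for one ground-truth box: high score with poor
# localization versus lower score with good localization.
gt = (0, 0, 10, 10)
preds = [(0.9, (0, 0, 5, 10)), (0.7, (0, 0, 10, 9))]
best_by_cls = max(preds, key=lambda p: p[0])[1]                      # poorly localized box
best_by_quality = max(preds, key=lambda p: p[0] * iou(p[1], gt))[1]  # well localized box
```

Under a high IoU threshold (say 0.75), the score-only ranking puts a false positive first, which is exactly the effect the rank-oriented designs target.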

Generalising the concept of a complete permutation polynomial over a finite field, we define completeness to level $k$ for $k\ge1$ in fields of odd characteristic. We construct two families of polynomials that satisfy the condition of high-level completeness for all finite fields, and two more families that are complete to the maximum possible level for a large collection of finite fields. Under the binary operation of composition of functions, one family of polynomials forms an abelian group isomorphic to the additive group, while the other is isomorphic to the multiplicative group.
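The precise definition of level-$k$ completeness is given in the paper; one natural reading, assumed here purely for illustration, is that $x \mapsto f(x) + jx$ is a permutation of the field for every $j = 0, 1, \ldots, k$, so that level $\ge 1$ recovers the classical complete permutation polynomial. A brute-force check over a small prime field $\mathrm{GF}(p)$:

```python
def is_permutation(f, p):
    """Does x -> f(x) permute GF(p)?"""
    return len({f(x) % p for x in range(p)}) == p

def completeness_level(f, p):
    """Largest k such that x -> f(x) + j*x permutes GF(p) for all
    j = 0, 1, ..., k (assumed reading of level-k completeness);
    returns -1 if f itself is not a permutation."""
    if not is_permutation(f, p):
        return -1
    k = 0
    while k + 1 < p and is_permutation(lambda x, j=k + 1: f(x) + j * x, p):
        k += 1
    return k
```

For example, over $\mathrm{GF}(7)$ the identity $f(x) = x$ reaches level $p - 2 = 5$ (only $j = p - 1$ fails, since then $f(x) + jx \equiv 0$), while the permutation $f(x) = x^5$ already fails at $j = 1$.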

A $k$-attractor is a combinatorial object unifying dictionary-based compression. It makes it possible to compare the repetitiveness measures of different dictionary compressors such as Lempel-Ziv 77, the Burrows-Wheeler transform, straight line programs and macro schemes. For a string $T \in \Sigma^n$, the $k$-attractor is defined as a set of positions $\Gamma \subseteq [1,n]$, such that every distinct substring of length at most $k$ is covered by at least one of the selected positions. Thus, if a substring occurs multiple times in $T$, one position suffices to cover it. A 1-attractor is easily computed in linear time, while Kempa and Prezza [STOC 2018] have shown that for $k \geq 3$, it is NP-complete to compute the smallest $k$-attractor by a reduction from $k$-set cover. The main result of this paper answers the open question for the complexity of the 2-attractor problem, showing that the problem remains NP-complete. Kempa and Prezza's proof for $k \geq 3$ also reduces the 2-attractor problem to the 2-set cover problem, which is equivalent to edge cover, but that does not fully capture the complexity of the 2-attractor problem. For this reason, we extend edge cover by a color function on the edges, yielding the colorful edge cover problem. Any edge cover must then satisfy the additional constraint that each color is represented. This extension raises the complexity such that colorful edge cover becomes NP-complete while also more precisely modeling the 2-attractor problem. We obtain a reduction showing the $k$-attractor problem to be NP-complete and APX-hard for any $k \geq 2$.
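The definition translates into a direct checker. The brute-force sketch below (0-indexed positions, illustrative only) tests whether a candidate position set is a $k$-attractor by verifying that every distinct substring of length at most $k$ has at least one occurrence containing a selected position:

```python
def is_k_attractor(T, gamma, k):
    """Check whether the 0-indexed position set `gamma` is a k-attractor
    of the string T: every distinct substring of length <= k must have
    some occurrence that contains a position of gamma."""
    n = len(T)
    gamma = set(gamma)
    for length in range(1, k + 1):
        # collect all occurrences of each distinct substring of this length
        occ = {}
        for i in range(n - length + 1):
            occ.setdefault(T[i:i + length], []).append(i)
        for starts in occ.values():
            if not any(any(i <= p < i + length for p in gamma)
                       for i in starts):
                return False
    return True
```

For $T = \texttt{abab}$, positions $\{0, 1\}$ form a 2-attractor (the occurrence of \texttt{ab} at position 0 and of \texttt{ba} at position 1 are both covered), whereas $\{1\}$ fails already for the single character \texttt{a}.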

For an undirected unweighted graph $G=(V,E)$ with $n$ vertices and $m$ edges, let $d(u,v)$ denote the distance from $u\in V$ to $v\in V$ in $G$. An $(\alpha,\beta)$-stretch approximate distance oracle (ADO) for $G$ is a data structure that given $u,v\in V$ returns in constant (or near constant) time a value $\hat d (u,v)$ such that $d(u,v) \le \hat d (u,v) \le \alpha\cdot d(u,v) + \beta$, for some reals $\alpha >1, \beta$. If $\beta = 0$, we say that the ADO has stretch $\alpha$. Thorup and Zwick~\cite{thorup2005approximate} showed that one cannot beat stretch 3 with subquadratic space (in terms of $n$) for general graphs. P\v{a}tra\c{s}cu and Roditty~\cite{patrascu2010distance} showed that one can obtain stretch 2 using $O(m^{1/3}n^{4/3})$ space, and so if $m$ is subquadratic in $n$ then the space usage is also subquadratic. Moreover, P\v{a}tra\c{s}cu and Roditty~\cite{patrascu2010distance} showed that one cannot beat stretch 2 with subquadratic space even for graphs where $m=\tilde{O}(n)$, based on the set-intersection hypothesis. In this paper we explore the conditions for which an ADO can be stored using subquadratic space while supporting a sub-2 stretch. In particular, we show that if the maximum degree in $G$ is $\Delta_G \leq O(n^{1/2-\varepsilon})$ for some $0<\varepsilon \leq 1/2$, then there exists an ADO for $G$ that uses $\tilde{O}(n^{2-\frac {2\varepsilon}{3}})$ space and has a sub-2 stretch. Moreover, we prove a conditional lower bound, based on the set intersection hypothesis, which states that for any positive integer $k \leq \log n$, obtaining a sub-$\frac{k+2}{k}$ stretch for graphs with maximum degree $\Theta(n^{1/k})$ requires quadratic space. Thus, for graphs with maximum degree $\Theta(n^{1/2})$, obtaining a sub-2 stretch requires quadratic space.
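The stretch guarantee is easy to state operationally. The sketch below (hypothetical helper names; exact distances via plain BFS) verifies that a candidate oracle satisfies $d(u,v) \le \hat d(u,v) \le \alpha \cdot d(u,v) + \beta$ on a small unweighted graph:

```python
from collections import deque

def bfs_dist(adj, s):
    """Exact single-source distances in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def check_stretch(adj, oracle, alpha, beta=0):
    """Verify d(u,v) <= oracle(u,v) <= alpha*d(u,v) + beta
    for all pairs in the same connected component."""
    for u in adj:
        dist = bfs_dist(adj, u)
        for v, d in dist.items():
            est = oracle(u, v)
            if not (d <= est <= alpha * d + beta):
                return False
    return True
```

On the path $0$-$1$-$2$, an oracle that doubles every distance passes with stretch $\alpha = 3$ but fails with $\alpha = 1.5$, since the pair at distance 1 is estimated as 2.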

It is proven that a conjecture of Tao (2010) holds true for log-concave random variables on the integers: For every $n \geq 1$, if $X_1,\ldots,X_n$ are i.i.d. integer-valued, log-concave random variables, then $$ H(X_1+\cdots+X_{n+1}) \geq H(X_1+\cdots+X_{n}) + \frac{1}{2}\log{\Bigl(\frac{n+1}{n}\Bigr)} - o(1) $$ as $H(X_1) \to \infty$, where $H$ denotes the (discrete) Shannon entropy. The problem is reduced to the continuous setting by showing that if $U_1,\ldots,U_n$ are independent continuous uniforms on $(0,1)$, then $$ h(X_1+\cdots+X_n + U_1+\cdots+U_n) = H(X_1+\cdots+X_n) + o(1) $$ as $H(X_1) \to \infty$, where $h$ stands for the differential entropy. Explicit bounds for the $o(1)$-terms are provided.
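The conjectured increment can be probed numerically. The sketch below uses a geometric distribution (log-concave on the integers) with large entropy and computes the exact entropies of partial sums by convolution; this is an empirical illustration only, not part of the paper's argument:

```python
import math

def entropy(p):
    """Discrete Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

def convolve(p, r):
    """Distribution of the sum of two independent integer-valued variables."""
    out = [0.0] * (len(p) + len(r) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(r):
            out[i + j] += a * b
    return out

# Geometric pmf (1-q) q^k: log-concave; entropy grows as q -> 1.
q, support = 0.9, 400
geom = [(1 - q) * q ** k for k in range(support)]

H, s = [entropy(geom)], geom
for _ in range(2):
    s = convolve(s, geom)
    H.append(entropy(s))

# Increments H(S_{n+1}) - H(S_n), to compare with (1/2) log((n+1)/n).
increments = [H[i + 1] - H[i] for i in range(2)]
```

Here the first increment comes out well above $\frac{1}{2}\log 2 \approx 0.347$ nats, consistent with the conjectured lower bound for this high-entropy example.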

Motivated by an application from geodesy, we introduce a novel clustering problem which is a $k$-center (or $k$-diameter) problem with a side constraint. For the side constraint, we are given an undirected connectivity graph $G$ on the input points, and a clustering is now only feasible if every cluster induces a connected subgraph in $G$. We call the resulting problems the connected $k$-center problem and the connected $k$-diameter problem. We prove several results on the complexity and approximability of these problems. Our main result is an $O(\log^2{k})$-approximation algorithm for the connected $k$-center and the connected $k$-diameter problem. For Euclidean metrics and metrics with constant doubling dimension, the approximation factor of this algorithm improves to $O(1)$. We also consider the special cases in which the connectivity graph is a line or a tree. For the line we give optimal polynomial-time algorithms, and for the case that the connectivity graph is a tree, we give either an optimal polynomial-time algorithm or a $2$-approximation algorithm for all variants of our model. We complement our upper bounds by several lower bounds.
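The side constraint is straightforward to check for a given clustering. A minimal sketch (illustrative adjacency-list representation) verifies that each cluster induces a connected subgraph of the connectivity graph $G$:

```python
def clusters_connected(connectivity_adj, clusters):
    """Feasibility check for the connectivity side constraint: every
    cluster must induce a connected subgraph of the connectivity graph."""
    for cluster in clusters:
        cluster = set(cluster)
        if not cluster:
            continue
        # DFS restricted to the cluster's vertices
        start = next(iter(cluster))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in connectivity_adj[u]:
                if v in cluster and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if seen != cluster:
            return False
    return True
```

On the path $0$-$1$-$2$-$3$, the clustering $\{0,1\},\{2,3\}$ is feasible, while $\{0,2\},\{1,3\}$ is not: neither cluster induces a connected subgraph.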

In this paper we show derivations among logarithmic-space-bounded counting classes, based on closure properties of $\#L$, which lead us to the result that $NL = PL = C_{=}L$.

A 2-packing set for an undirected graph $G=(V,E)$ is a subset $\mathcal{S} \subset V$ such that any two vertices $v_1,v_2 \in \mathcal{S}$ have no common neighbors. Finding a 2-packing set of maximum cardinality is an NP-hard problem. We develop a new approach to solve this problem on arbitrary graphs, exploiting its close relation to the independent set problem. Our algorithm red2pack uses new data reduction rules specific to the 2-packing set problem as well as a graph transformation. Our experiments show that we outperform the state-of-the-art for arbitrary graphs with respect to solution quality and also compute solutions multiple orders of magnitude faster than previously possible. For example, we are able to solve 63% of our graphs to optimality in less than a second, while the competitor for arbitrary graphs can only solve 5% of the graphs in the data set to optimality even with a 10 hour time limit. Moreover, our approach can solve a wide range of large instances that have previously been unsolved.
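The defining condition is often stated via pairwise distance greater than two; the sketch below uses the equivalent formulation through disjoint closed neighborhoods $N[v]$, and adds an exponential-time reference solver for tiny graphs (illustrative only; the paper's red2pack is a reduction-based algorithm, not this brute force):

```python
from itertools import combinations

def is_2_packing(adj, S):
    """S is a 2-packing set iff the closed neighborhoods N[v] of its
    vertices are pairwise disjoint (pairwise distance > 2)."""
    closed = {v: set(adj[v]) | {v} for v in S}
    return all(not (closed[u] & closed[v]) for u, v in combinations(S, 2))

def max_2_packing_brute_force(adj):
    """Exponential-time reference solver: try subsets by decreasing size."""
    verts = list(adj)
    for r in range(len(verts), 0, -1):
        for S in combinations(verts, r):
            if is_2_packing(adj, S):
                return set(S)
    return set()
```

On the path $0$-$1$-$2$-$3$-$4$, a maximum 2-packing set has size 2 (e.g. $\{0, 3\}$), while $\{0, 2\}$ is infeasible because both vertices neighbor vertex 1.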

Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning. However, visual models trained to detect new objects during inference have been unable to replicate this ability, and instead either perform poorly or require meta-training and/or fine-tuning on similar objects. In this work, we propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning. Our approach leverages a frozen pre-trained feature extractor, and analogous to in-context learning, recasts meta-learning as sequence modeling over datapoints with known labels and a test datapoint with an unknown label. On 8 out of 11 meta-learning benchmarks, our approach -- without meta-training or fine-tuning -- exceeds or matches the state-of-the-art algorithm, P>M>F, which is meta-trained on these benchmarks.
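The data layout of this recasting can be sketched without the paper's architecture: labeled support embeddings from a frozen feature extractor plus one unlabeled query. Below, a cosine-similarity nearest-neighbour readout stands in for the learned sequence model (a hypothetical stand-in, not the proposed method):

```python
import math

def in_context_classify(support, query):
    """Label a query embedding from labeled support embeddings without
    fine-tuning; `support` is a list of (embedding, label) pairs.
    Cosine nearest neighbour is used here in place of a sequence model."""
    def cos(u, v):
        num = sum(a * b for a, b in zip(u, v))
        den = (math.sqrt(sum(a * a for a in u))
               * math.sqrt(sum(b * b for b in v)))
        return num / den
    return max(support, key=lambda item: cos(item[0], query))[1]
```

The point of the layout is that new classes enter purely through the support set at inference time, with no gradient updates.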

We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR), whose common challenge is generalizing to the unseen-classes by only training on the seen-classes. Our idea stems from the observation that the generated samples for unseen-classes are often out of the true distribution, which causes severe recognition rate imbalance between the seen-class (high) and unseen-class (low). We show that the key reason is that the generation is not Counterfactual Faithful, and thus we propose a faithful one, whose generation is from the sample-specific counterfactual question: What would the sample look like, if we set its class attribute to a certain class, while keeping its sample attribute unchanged? Thanks to the faithfulness, we can apply the Consistency Rule to perform unseen/seen binary classification, by asking: Would its counterfactual still look like itself? If ``yes'', the sample is from a certain class, and ``no'' otherwise. Through extensive experiments on ZSL and OSR, we demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves the overall performance. Note that this framework is orthogonal to existing methods, thus, it can serve as a new baseline to evaluate how ZSL/OSR models generalize. Codes are available at //github.com/yue-zhongqi/gcm-cf.
