
According to the classical Chv\'{a}tal's Lemma from 1977, a graph of minimum degree $\delta(G)$ contains every tree on $\delta(G)+1$ vertices. Our main result is the following algorithmic "extension" of Chv\'{a}tal's Lemma: For any $n$-vertex graph $G$, integer $k$, and a tree $T$ on at most $\delta(G)+k$ vertices, deciding whether $G$ contains a subgraph isomorphic to $T$ can be done in time $f(k)\cdot n^{\mathcal{O}(1)}$ for some function $f$ of $k$ only. The proof of our main result is based on an interplay between extremal graph theory and parameterized algorithms.
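
For intuition, the following is a minimal sketch (not taken from the paper) of the greedy embedding argument behind Chv\'{a}tal's Lemma: if $T$ has at most $\delta(G)+1$ vertices, every already-embedded vertex still has an unused neighbour in $G$, so the embedding never gets stuck. The adjacency-dict representation and function names are assumptions for illustration.

```python
# Hypothetical sketch of the greedy embedding behind Chvatal's Lemma: if the tree T
# has at most delta(G)+1 vertices, every image vertex always has an unused neighbour,
# so T embeds greedily. Graph and tree formats (adjacency dicts) are assumptions.
def embed_tree(G, T, root):
    """G, T: dicts vertex -> set of neighbours; T is a tree; returns a map V(T) -> V(G)."""
    assert len(T) <= min(len(nbrs) for nbrs in G.values()) + 1, "needs |V(T)| <= delta(G)+1"
    start = next(iter(G))
    phi = {root: start}          # partial embedding V(T) -> V(G)
    used = {start}
    stack = [root]
    while stack:
        t = stack.pop()
        for child in T[t]:
            if child in phi:     # the parent of t, already embedded
                continue
            # phi[t] has at least delta(G) >= |V(T)|-1 neighbours, and fewer than
            # |V(T)|-1 of them are used, so a free neighbour always exists.
            image = next(v for v in G[phi[t]] if v not in used)
            phi[child] = image
            used.add(image)
            stack.append(child)
    return phi
```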

Related content

We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data. To do this, we define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources such as semantic segmentations and depth reconstructions without needing any meta-knowledge beyond the pixels. Once this wide variety of visual data (comprising 420 billion tokens) is represented as sequences, the model can be trained to minimize a cross-entropy loss for next token prediction. By training across various scales of model architecture and data diversity, we provide empirical evidence that our models scale effectively. Many different vision tasks can be solved by designing suitable visual prompts at test time.
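
A minimal, hedged illustration of the training objective described above: discrete visual tokens concatenated into a "visual sentence" and scored with a next-token cross-entropy loss. The random logits stand in for a model's outputs, and the vocabulary size and sequence length are made-up values; the actual LVM uses a large transformer over tokenized images and videos.

```python
import numpy as np

# Toy next-token cross-entropy over a "visual sentence" of discrete visual tokens.
# The random logits below stand in for a model's outputs; they are an illustrative assumption.
rng = np.random.default_rng(0)
vocab_size = 16
visual_sentence = rng.integers(0, vocab_size, size=32)            # stand-in for tokenized frames/annotations

logits = rng.normal(size=(len(visual_sentence) - 1, vocab_size))  # one prediction per prefix

def cross_entropy_next_token(logits, tokens):
    """Average loss for predicting tokens[1:] from the preceding positions."""
    targets = tokens[1:]
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

print(cross_entropy_next_token(logits, visual_sentence))
```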

Let $(\Omega, \mu)$, $(\Delta, \nu)$ be measure spaces and $p=1$ or $p=\infty$. Let $(\{f_\alpha\}_{\alpha\in \Omega}, \{\tau_\alpha\}_{\alpha\in \Omega})$ and $(\{g_\beta\}_{\beta\in \Delta}, \{\omega_\beta\}_{\beta\in \Delta})$ be unbounded continuous $p$-Schauder frames for a Banach space $\mathcal{X}$. Then for every $x \in ( \mathcal{D}(\theta_f) \cap\mathcal{D}(\theta_g))\setminus\{0\}$, we show that \begin{align}\label{UB} (1) \quad \quad \quad \quad \mu(\operatorname{supp}(\theta_f x))\nu(\operatorname{supp}(\theta_g x)) \geq \frac{1}{\left(\displaystyle\sup_{\alpha \in \Omega, \beta \in \Delta}|f_\alpha(\omega_\beta)|\right)\left(\displaystyle\sup_{\alpha \in \Omega , \beta \in \Delta}|g_\beta(\tau_\alpha)|\right)}, \end{align} where \begin{align*} &\theta_f:\mathcal{D}(\theta_f) \ni x \mapsto \theta_fx \in \mathcal{L}^p(\Omega, \mu); \quad \theta_fx: \Omega \ni \alpha \mapsto (\theta_fx) (\alpha):= f_\alpha (x) \in \mathbb{K},\\ &\theta_g: \mathcal{D}(\theta_g) \ni x \mapsto \theta_gx \in \mathcal{L}^p(\Delta, \nu); \quad \theta_gx: \Delta \ni \beta \mapsto (\theta_gx) (\beta):= g_\beta (x) \in \mathbb{K}. \end{align*} We call Inequality (1) the \textbf{Unbounded Donoho-Stark-Elad-Bruckstein-Ricaud-Torr\'{e}sani Uncertainty Principle}. Along with the recent \textbf{Functional Continuous Uncertainty Principle} [arXiv:2308.00312], Inequality (1) also improves the Ricaud-Torr\'{e}sani uncertainty principle [IEEE Trans. Inform. Theory, 2013]. In particular, it improves the Elad-Bruckstein uncertainty principle [IEEE Trans. Inform. Theory, 2002] and the Donoho-Stark uncertainty principle [SIAM J. Appl. Math., 1989].
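
For orientation, the following LaTeX note sketches, under an assumed finite-dimensional Hilbert-space specialization that is not part of the abstract, how an inequality of the shape of (1) reduces to the classical Elad-Bruckstein bound.

```latex
% Hedged illustration (assumed specialization, not from the abstract): take
% $\Omega=\Delta=\{1,\dots,n\}$ with counting measures, $\mathcal{X}=\mathbb{C}^n$,
% and orthonormal bases $\{\tau_\alpha\}$, $\{\omega_\beta\}$ with
% $f_\alpha=\langle\cdot,\tau_\alpha\rangle$ and $g_\beta=\langle\cdot,\omega_\beta\rangle$.
% Then both suprema in (1) equal the mutual coherence and the supports are counted by
% $\|\cdot\|_0$, so (1) takes the form
\[
  \|\theta_f x\|_0 \, \|\theta_g x\|_0 \;\geq\; \frac{1}{\mu^2},
  \qquad
  \mu := \max_{\alpha,\beta}\left|\langle \tau_\alpha, \omega_\beta\rangle\right|,
\]
% which is the Elad--Bruckstein (and hence Donoho--Stark) uncertainty principle.
```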

In the matroid partitioning problem, we are given $k$ matroids $\mathcal{M}_1 = (V, \mathcal{I}_1), \dots , \mathcal{M}_k = (V, \mathcal{I}_k)$ defined over a common ground set $V$ of $n$ elements, and we need to find a partitionable set $S \subseteq V$ of largest possible cardinality, denoted by $p$. Here, a set $S \subseteq V$ is called partitionable if there exists a partition $(S_1, \dots , S_k)$ of $S$ with $S_i \in \mathcal{I}_i$ for $i = 1, \ldots, k$. In 1986, Cunningham [SICOMP 1986] presented a matroid partition algorithm that uses $O(n p^{3/2} + k n)$ independence oracle queries, which was previously the best known algorithm. This query complexity is $O(n^{5/2})$ when $k \leq n$. Our main result is a matroid partition algorithm that uses $\tilde{O}(k'^{1/3} n p + k n)$ independence oracle queries, where $k' = \min\{k, p\}$. This query complexity is $\tilde{O}(n^{7/3})$ when $k \leq n$, improving upon that of Cunningham's algorithm. To obtain this, we present a new approach, \emph{edge recycling augmentation}, which rests on two new ideas: an efficient use of the binary search technique of Nguyen [2019] and Chakrabarty-Lee-Sidford-Singla-Wong [FOCS 2019], and a careful analysis of the independence oracle query complexity. Our analysis differs significantly from that for matroid intersection algorithms because of the parameter $k$. We also present a matroid partition algorithm that uses $\tilde{O}((n + k) \sqrt{p})$ rank oracle queries.
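
To make the notion of a partitionable set concrete, here is a hedged brute-force sketch of the definition only (exponential time, nothing like the oracle-efficient algorithms above); representing each matroid as an independence-oracle callable is an assumption of this sketch.

```python
from itertools import product

# Brute-force check of the *definition* of partitionability (illustration only).
# Each matroid is assumed to be an independence oracle: a callable taking a frozenset
# and returning True iff that set is independent.
def is_partitionable(S, oracles):
    """True iff S splits into (S_1,...,S_k) with S_i independent in the i-th matroid."""
    S = list(S)
    k = len(oracles)
    for assignment in product(range(k), repeat=len(S)):   # part index for each element
        parts = [frozenset(e for e, i in zip(S, assignment) if i == j) for j in range(k)]
        if all(oracle(part) for oracle, part in zip(oracles, parts)):
            return True
    return False

# Example: two rank-1 uniform matroids on the ground set {a, b, c}.
rank1 = lambda T: len(T) <= 1
print(is_partitionable({"a", "b"}, [rank1, rank1]))       # True
print(is_partitionable({"a", "b", "c"}, [rank1, rank1]))  # False
```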

We consider the optimization problem of cardinality constrained maximization of a monotone submodular set function $f:2^U\to\mathbb{R}_{\geq 0}$ (SM) with noisy evaluations of $f$. In particular, it is assumed that we do not have value oracle access to $f$, but instead, for any $X\subseteq U$ and $u\in U$, we can take samples from a noisy distribution with expected value $f(X\cup\{u\})-f(X)$. Our goal is to develop algorithms in this setting that take as few samples as possible and return, with high probability, a solution with an approximation guarantee relative to the optimal solution. We propose the algorithm Confident Threshold Greedy (CTG), which is based on the threshold greedy algorithm of Badanidiyuru and Vondrak [1] and samples adaptively in order to produce an approximate solution with high probability. We prove that CTG achieves an approximation ratio arbitrarily close to $1-1/e$, depending on input parameters. We provide an experimental evaluation on real instances of SM and demonstrate the sample efficiency of CTG.
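
The following is a hedged sketch of the threshold-greedy skeleton that CTG adapts to the noisy setting; the fixed sample size, the crude singleton-value estimate, and the stopping rule are illustrative assumptions, whereas CTG itself chooses sample sizes adaptively to certify each decision with high probability.

```python
# Hedged sketch of a threshold-greedy loop with noisy marginal-gain estimates.
# sample_gain(S, u) is assumed to return one noisy sample with mean f(S ∪ {u}) - f(S).
def noisy_threshold_greedy(U, sample_gain, k, eps=0.1, n_samples=50):
    def estimate(S, u):
        return sum(sample_gain(S, u) for _ in range(n_samples)) / n_samples

    S = set()
    d = max(estimate(set(), u) for u in U)      # rough upper bound on the best singleton gain
    threshold = d
    while threshold > (eps / k) * d and len(S) < k:
        for u in set(U) - S:
            if len(S) >= k:
                break
            if estimate(S, u) >= threshold:     # add u if its estimated marginal gain clears the bar
                S.add(u)
        threshold *= 1 - eps                    # geometrically decrease the threshold
    return S
```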

A graph class $\mathscr{C}$ is called monadically stable if one cannot interpret, in first-order logic, arbitrarily long linear orders in colored graphs from $\mathscr{C}$. We prove that the model checking problem for first-order logic is fixed-parameter tractable on every monadically stable graph class. This extends the results of [Grohe, Kreutzer, and Siebertz; J. ACM '17] for nowhere dense classes and of [Dreier, M\"ahlmann, and Siebertz; STOC '23] for structurally nowhere dense classes to all monadically stable classes. As a complementary hardness result, we prove that for every hereditary graph class $\mathscr{C}$ that is edge-stable (excludes some half-graph as a semi-induced subgraph) but not monadically stable, first-order model checking is $\mathrm{AW}[*]$-hard on $\mathscr{C}$, and $\mathrm{W}[1]$-hard when restricted to existential sentences. This confirms, in the special case of edge-stable classes, an ongoing conjecture that the notion of monadic NIP delimits the tractability of first-order model checking on hereditary classes of graphs. For our tractability result, we first prove that monadically stable graph classes have almost linear neighborhood complexity. Using this, we construct sparse neighborhood covers for monadically stable classes, which provides the missing ingredient for the algorithm of [Dreier, M\"ahlmann, and Siebertz; STOC '23]. The key component of this construction is the usage of orders with low crossing number [Welzl; SoCG '88], a tool from the area of range queries. For our hardness result, we prove a new characterization of monadically stable graph classes in terms of forbidden induced subgraphs. We then use this characterization to show that in hereditary classes that are edge-stable but not monadically stable, one can effectively interpret the class of all graphs using only existential formulas.
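
For readers less familiar with the terminology, the following standard definition of half-graphs (recalled here for convenience; it is not part of the abstract) spells out what edge-stability excludes.

```latex
% Standard definition, recalled for convenience: the half-graph of order $n$ is the
% bipartite graph $H_n$ on vertices $a_1,\dots,a_n,b_1,\dots,b_n$ with edge set
\[
  E(H_n) \;=\; \{\, a_i b_j \;:\; 1 \leq i \leq j \leq n \,\}.
\]
% A hereditary class is edge-stable if some $H_n$ does not occur as a semi-induced
% subgraph, i.e. the edge relation between the two sides never encodes arbitrarily
% long linear orders.
```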

Given a vector dataset $\mathcal{X}$ and a query vector $\vec{x}_q$, graph-based Approximate Nearest Neighbor Search (ANNS) aims to build a proximity graph (PG) as an index of $\mathcal{X}$ and approximately return vectors with minimum distances to $\vec{x}_q$ by searching over the PG index. This approach struggles when $\mathcal{X}$ is large-scale because a PG storing full vectors is too large to fit into memory; e.g., a billion-scale $\mathcal{X}$ in 128 dimensions would consume nearly 600 GB of memory. To address this, Product Quantization (PQ) has been integrated into graph-based ANNS to reduce memory usage, keeping smaller compact codes of quantized vectors in memory instead of the large original vectors. Existing PQ methods do not consider the important routing features of the PG, resulting in low-quality quantized vectors that affect the effectiveness of ANNS. In this paper, we present an end-to-end Routing-guided learned Product Quantization (RPQ) for graph-based ANNS. It consists of (1) a \textit{differentiable quantizer} that makes standard discrete PQ differentiable so that it is amenable to the back-propagation of end-to-end learning, (2) a \textit{sampling-based feature extractor} that extracts the neighborhood and routing features of a PG, and (3) a \textit{multi-feature joint training module} with two types of feature-aware losses that continuously optimizes the differentiable quantizer. As a result, the inherent features of a PG are embedded into the learned PQ, generating high-quality quantized vectors. Moreover, we integrate our RPQ with the state-of-the-art DiskANN and existing popular PGs to improve their performance. Comprehensive experiments on real-world large-scale datasets (from 1M to 1B) demonstrate RPQ's superiority, e.g., a 1.7$\times$-4.2$\times$ improvement in QPS at the same recall@10 of 95\%.
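
For context, here is a hedged minimal sketch of standard (non-learned) product quantization encoding; the codebook sizes are illustrative, the codebooks are random stand-ins for k-means centroids, and none of the routing-guided learning described above is included.

```python
import numpy as np

# Standard PQ encoding sketch (not the learned RPQ): split each vector into M
# subvectors, quantize each against its own codebook, keep only the code indices.
rng = np.random.default_rng(0)
d, M, ks = 128, 8, 256                        # dimension, subspaces, centroids per subspace
sub_d = d // M
codebooks = rng.normal(size=(M, ks, sub_d))   # stand-in; in practice trained by k-means

def pq_encode(x):
    """Return M uint8 codes for a d-dimensional vector x."""
    codes = []
    for m in range(M):
        sub = x[m * sub_d:(m + 1) * sub_d]
        dists = ((codebooks[m] - sub) ** 2).sum(axis=1)
        codes.append(int(dists.argmin()))
    return np.array(codes, dtype=np.uint8)    # 8 bytes per vector instead of 512 (float32)

print(pq_encode(rng.normal(size=d)))
```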

The Euclidean Steiner tree problem asks to find a min-cost metric graph that connects a given set of \emph{terminal} points $X$ in $\mathbb{R}^d$, possibly using points not in $X$, which are called Steiner points. Even though a near-linear time $(1 + \epsilon)$-approximation was obtained in the offline setting in the seminal works of Arora and Mitchell, efficient dynamic algorithms for Steiner tree remain open. We give the first algorithm that (implicitly) maintains a $(1 + \epsilon)$-approximate solution, accessed via a set of tree traversal queries, subject to point insertions and deletions, with amortized update and query time $O(\mathrm{poly}\log n)$ with high probability. Our approach is based on an Arora-style geometric dynamic programming, and our main technical contribution is to maintain the DP subproblems efficiently in the dynamic setting. We also need to augment the DP subproblems to support the tree traversal queries.
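
As a rough illustration of the geometric decomposition underlying Arora-style dynamic programming, the toy quadtree with point insertion below is a sketch under assumed simplifications (2D unit square, no random shift, no DP tables); it is not the paper's data structure.

```python
# Toy quadtree over the unit square with point insertion (illustrative assumption only;
# Arora-style DP additionally uses a randomly shifted dissection and per-cell subproblems).
class QuadTreeNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size     # lower-left corner and side length
        self.points, self.children = [], None

    def insert(self, p, capacity=1, min_size=1e-6):
        if self.children is None:
            self.points.append(p)
            if len(self.points) > capacity and self.size > min_size:
                self._split(capacity, min_size)
            return
        self._child_for(p).insert(p, capacity, min_size)

    def _split(self, capacity, min_size):
        h = self.size / 2
        self.children = [QuadTreeNode(self.x + dx * h, self.y + dy * h, h)
                         for dx in (0, 1) for dy in (0, 1)]
        pts, self.points = self.points, []
        for q in pts:
            self._child_for(q).insert(q, capacity, min_size)

    def _child_for(self, p):
        h = self.size / 2
        dx = 1 if p[0] >= self.x + h else 0
        dy = 1 if p[1] >= self.y + h else 0
        return self.children[2 * dx + dy]

root = QuadTreeNode(0.0, 0.0, 1.0)
for p in [(0.1, 0.2), (0.8, 0.9), (0.15, 0.25)]:
    root.insert(p)
```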

The Djokovi\'{c}-Winkler relation $\Theta$ is a binary relation defined on the edge set of a given graph that is based on the distances between certain vertices and plays a prominent role in graph theory. In this paper, we explore the relatively uncharted ``reflexive complement'' $\overline\Theta$ of $\Theta$, where $(e,f)\in \overline\Theta$ if and only if $e=f$ or $(e,f)\notin \Theta$ for edges $e$ and $f$. We establish the relationship between $\overline\Theta$ and the set $\Delta_{ef}$, comprising the distances between the vertices of $e$ and $f$, and shed some light on the intricacies of its transitive closure $\overline\Theta^*$. Notably, we demonstrate that $\overline\Theta^*$ exhibits multiple equivalence classes only within a restricted subclass of complete multipartite graphs. In addition, we characterize non-trivial relations $R$ that coincide with $\overline\Theta$ as those where the graph representation is disconnected, with each connected component being the (join of) Cartesian product of complete graphs. The latter results imply, somewhat surprisingly, that knowledge about the distances between vertices is not required to determine $\overline\Theta^*$. Moreover, $\overline\Theta^*$ has either exactly one or three equivalence classes.
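
As a hedged, concrete companion to these definitions, the sketch below computes $\Theta$ and its reflexive complement on a small connected graph using the standard characterization that edges $e=xy$ and $f=uv$ satisfy $(e,f)\in\Theta$ iff $d(x,u)+d(y,v)\neq d(x,v)+d(y,u)$; the adjacency-dict format and connectivity are assumptions of this sketch.

```python
from collections import deque
from itertools import combinations

# Compute Theta and its reflexive complement on a small connected graph (adjacency dicts),
# using the standard distance characterization of the Djokovic-Winkler relation.
def bfs_dist(G, s):
    dist, q = {s: 0}, deque([s])
    while q:
        v = q.popleft()
        for w in G[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def theta_and_reflexive_complement(G):
    dist = {v: bfs_dist(G, v) for v in G}
    edges = sorted({tuple(sorted((v, w))) for v in G for w in G[v]})
    theta, comp = set(), {(e, e) for e in edges}          # the complement is reflexive by definition
    for e, f in combinations(edges, 2):
        (x, y), (u, v) = e, f
        if dist[x][u] + dist[y][v] != dist[x][v] + dist[y][u]:
            theta.add((e, f))
        else:
            comp.add((e, f))
    return theta, comp

# Example: the 4-cycle a-b-c-d-a, where exactly the opposite edge pairs are Theta-related.
C4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
theta, comp = theta_and_reflexive_complement(C4)
print(theta)
```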

A fundamental functional in nonparametric statistics is the Mann-Whitney functional $\theta = P(X < Y)$, which constitutes the basis for the most popular nonparametric procedures. The functional $\theta$ measures a location or stochastic tendency effect between two distributions. A limitation of $\theta$ is its inability to capture scale differences. If differences of this nature are to be detected, specific tests for scale or omnibus tests need to be employed. However, the latter often suffer from low power, and they do not yield interpretable effect measures. In this manuscript, we extend $\theta$ by additionally incorporating the recently introduced distribution overlap index (a nonparametric dispersion measure) $I_2$ that can be expressed in terms of the quantile process. We derive the joint asymptotic distribution of the respective estimators of $\theta$ and $I_2$ and construct confidence regions. Extending the Wilcoxon-Mann-Whitney test, we introduce a new test based on the joint use of these functionals. It results in much larger consistency regions while maintaining competitive power to the rank sum test in situations where $\theta$ alone would suffice. Compared with classical omnibus tests, the simulated power is much improved. Additionally, the newly proposed inference method yields effect measures whose interpretation is surprisingly straightforward.
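
A hedged sketch of the standard plug-in estimator of $\theta$ (the proportion of pairs with $X_i < Y_j$); ties and the overlap index $I_2$ from the abstract are deliberately not handled, and the simulated data are illustrative only.

```python
import numpy as np

# Plug-in estimator of the Mann-Whitney functional theta = P(X < Y): the proportion
# of pairs (X_i, Y_j) with X_i < Y_j. Ties and the overlap index I_2 are not handled.
def mann_whitney_theta(x, y):
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return (x < y).mean()

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, size=200)
y = rng.normal(loc=0.5, size=300)
print(mann_whitney_theta(x, y))   # close to P(X < Y) = Phi(0.5 / sqrt(2)) ≈ 0.64
```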

The Constant Degree Hypothesis was introduced by Barrington et al. (1990) to study some extensions of $q$-groups by nilpotent groups and the power of these groups in a certain computational model. In its simplest formulation, it establishes exponential lower bounds for $\mathrm{AND}_d \circ \mathrm{MOD}_m \circ \mathrm{MOD}_q$ circuits computing AND of unbounded arity $n$ (for constant integers $d,m$ and a prime $q$). While it has been proved in some special cases (including $d=1$), it has remained wide open in its general form for over 30 years. In this paper we prove that the hypothesis holds when we restrict our attention to symmetric circuits with $m$ being a prime. While we build upon techniques by Grolmusz and Tardos (2000), we have to prove a new symmetric version of their Degree Decreasing Lemma and apply it in a highly non-trivial way. Moreover, to establish the result we perform a careful analysis of automorphism groups of $\mathrm{AND} \circ \mathrm{MOD}_m$ subcircuits and study the periodic behaviour of the computed functions. Finally, our methods also yield lower bounds when $d$ is treated as a function of $n$.
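
As a concrete, hedged illustration of the circuit class in question, the toy evaluator below computes a small $\mathrm{AND}_d \circ \mathrm{MOD}_m \circ \mathrm{MOD}_q$ circuit on a Boolean input; gate conventions vary in the literature, and the convention that a $\mathrm{MOD}_r$ gate outputs 1 iff the sum of its inputs is divisible by $r$ is an assumption of this sketch.

```python
# Toy evaluator for an AND_d ∘ MOD_m ∘ MOD_q circuit. The MOD-gate convention
# (output 1 iff the input sum is divisible by r) is an assumption; conventions differ.
def mod_gate(bits, r):
    return int(sum(bits) % r == 0)

def and_mod_mod(x, bottom_wires, middle_wires, q, m):
    """
    x            : tuple of input bits
    bottom_wires : per bottom MOD_q gate, the list of input indices it reads
    middle_wires : per middle MOD_m gate, the list of bottom-gate indices it reads
    The top gate is an AND over all middle-level outputs (fan-in d = len(middle_wires)).
    """
    bottom = [mod_gate([x[i] for i in wires], q) for wires in bottom_wires]
    middle = [mod_gate([bottom[i] for i in wires], m) for wires in middle_wires]
    return int(all(middle))

# Toy instance with q=3, m=2, two bottom gates and one middle gate (so d=1).
print(and_mod_mod((1, 1, 1, 0), [[0, 1, 2], [2, 3]], [[0, 1]], q=3, m=2))   # prints 0
```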
