
We study the sample complexity of obtaining an $\epsilon$-optimal policy in \emph{Robust} discounted Markov Decision Processes (RMDPs), given only access to a generative model of the nominal kernel. This problem is widely studied in the non-robust case, where it is known that any planning approach applied to an empirical MDP estimated with $\tilde{\mathcal{O}}(\frac{H^3 |S||A|}{\epsilon^2})$ samples provides an $\epsilon$-optimal policy, which is minimax optimal. Results in the robust case are much scarcer. For $sa$- (resp. $s$-)rectangular uncertainty sets, the best known sample complexity is $\tilde{\mathcal{O}}(\frac{H^4 |S|^2|A|}{\epsilon^2})$ (resp. $\tilde{\mathcal{O}}(\frac{H^4 |S|^2|A|^2}{\epsilon^2})$), for specific algorithms and when the uncertainty set is based on the total variation (TV), KL, or Chi-square divergence. In this paper, we consider uncertainty sets defined with an $L_p$-ball (recovering the TV case), and study the sample complexity of \emph{any} planning algorithm (with a high-accuracy guarantee on its solution) applied to an empirical RMDP estimated using the generative model. In the general case, we prove a sample complexity of $\tilde{\mathcal{O}}(\frac{H^4 |S||A|}{\epsilon^2})$ for both the $sa$- and $s$-rectangular cases (improvements by factors of $|S|$ and $|S||A|$, respectively). When the size of the uncertainty set is small enough, we further improve the sample complexity to $\tilde{\mathcal{O}}(\frac{H^3 |S||A|}{\epsilon^2})$, matching for the first time the lower bound of the non-robust case, as well as a robust lower bound that holds when the size of the uncertainty set is small enough.
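To make the $sa$-rectangular setting concrete, the display below sketches the robust Bellman operator for an $L_p$-constrained uncertainty set; the notation ($P^0$ for the nominal kernel, $\sigma$ for the radius, $\gamma$ for the discount factor) is introduced here only for illustration and is not taken from the paper.
\[
(\mathcal{T}_{\mathrm{rob}} V)(s,a) \;=\; r(s,a) \;+\; \gamma \min_{\substack{P(\cdot\mid s,a)\in\Delta(S)\\ \lVert P(\cdot\mid s,a)-P^0(\cdot\mid s,a)\rVert_p \le \sigma}} \;\sum_{s'} P(s'\mid s,a)\, V(s'),
\]
i.e., each state-action pair carries its own $L_p$-ball of radius $\sigma$ around the nominal transitions, and planning is performed against the worst kernel in that ball; the $s$-rectangular case instead couples the balls across actions of each state.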

Related Content

It is well known that the spectral gap of the down-up walk over an $n$-partite simplicial complex (also known as Glauber dynamics) cannot be better than $O(1/n)$ due to natural obstructions such as coboundaries. We study an alternative random walk over partite simplicial complexes known as the sequential sweep or the systematic scan Glauber dynamics: whereas the down-up walk at each step selects a random coordinate and updates it based on the remaining coordinates, the sequential sweep goes through the coordinates one by one in a deterministic order and applies the same update operation. It is thus natural to compare $n$ steps of the down-up walk with a single step of the sequential sweep. Interestingly, while the spectral gap of the $n$-th power of the down-up walk is still bounded from above by a constant, under a strong enough local spectral assumption (in the sense of Gur, Lifschitz, Liu, STOC 2022) we show that the spectral gap of this walk can be arbitrarily close to 1. We also study other isoperimetric inequalities for these walks, and show that under the assumption of local entropy contraction (related to the considerations of Gur, Lifschitz, Liu), these walks satisfy an entropy contraction inequality.
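As a toy illustration of the two update schedules (in the product setting of an Ising model on a cycle, not the simplicial-complex setting of the abstract), the sketch below implements both the random-scan (down-up) update and the systematic sweep; the graph, inverse temperature, and step counts are arbitrary choices for the example.

```python
import math
import random

def cond_prob_up(spins, nbrs, i, beta):
    """P(spin i = +1 | the other spins) for a ferromagnetic Ising model."""
    field = sum(spins[j] for j in nbrs[i])
    return 1.0 / (1.0 + math.exp(-2.0 * beta * field))

def glauber_update(spins, nbrs, i, beta, rng):
    """Resample coordinate i from its conditional distribution."""
    spins[i] = 1 if rng.random() < cond_prob_up(spins, nbrs, i, beta) else -1

def down_up_step(spins, nbrs, beta, rng):
    """One step of the down-up walk: a uniformly random coordinate is updated."""
    glauber_update(spins, nbrs, rng.randrange(len(spins)), beta, rng)

def sequential_sweep(spins, nbrs, beta, rng):
    """One step of the systematic scan: all coordinates updated in a fixed order."""
    for i in range(len(spins)):
        glauber_update(spins, nbrs, i, beta, rng)

if __name__ == "__main__":
    n, beta = 10, 0.4
    nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # cycle graph
    rng = random.Random(0)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(100):   # one sweep is compared against n down-up steps, as noted above
        sequential_sweep(spins, nbrs, beta, rng)
    print(spins)
```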

We prove that isomorphism of tournaments of twin width at most $k$ can be decided in time $k^{O(\log k)}n^{O(1)}$. This implies that the isomorphism problem for classes of tournaments of bounded or moderately growing twin width is in polynomial time. By comparison, there are classes of undirected graphs of bounded twin width that are isomorphism complete, that is, the isomorphism problem for the classes is as hard as the general graph isomorphism problem. Twin width is a graph parameter that has been introduced only recently (Bonnet et al., FOCS 2020), but has received a lot of attention in structural graph theory since then. On directed graphs, it is functionally smaller than clique width. We prove that on tournaments (but not on general directed graphs) it is also functionally smaller than directed tree width (and thus, the same also holds for cut width and directed path width). Hence, our result implies that tournament isomorphism testing is also fixed-parameter tractable when parameterized by any of these parameters. Our isomorphism algorithm heavily employs group-theoretic techniques. This seems to be necessary: as a second main result, we show that the combinatorial Weisfeiler-Leman algorithm does not decide isomorphism of tournaments of twin width at most 35 if its dimension is $o(\sqrt n)$. (Throughout this abstract, $n$ is the order of the input graphs.)
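For readers less familiar with the Weisfeiler-Leman hierarchy mentioned in the last sentence, the sketch below implements its simplest member, 1-dimensional colour refinement, adapted to directed graphs by refining on in- and out-neighbourhood colours; it only illustrates the combinatorial flavour of these algorithms and is far weaker than the $o(\sqrt n)$-dimensional variants the lower bound concerns.

```python
def color_refinement(n, arcs, max_rounds=None):
    """1-dimensional Weisfeiler-Leman (colour refinement) on a directed graph.

    n: number of vertices; arcs: iterable of (u, v) pairs with 0 <= u, v < n.
    Returns the stable colouring as a list of integer colours.
    """
    out_nbrs = [[] for _ in range(n)]
    in_nbrs = [[] for _ in range(n)]
    for u, v in arcs:
        out_nbrs[u].append(v)
        in_nbrs[v].append(u)

    colors = [0] * n
    for _ in range(max_rounds if max_rounds is not None else n):
        # New colour = old colour plus the multisets of out- and in-neighbour colours.
        sigs = [(colors[v],
                 tuple(sorted(colors[w] for w in out_nbrs[v])),
                 tuple(sorted(colors[w] for w in in_nbrs[v])))
                for v in range(n)]
        relabel = {s: c for c, s in enumerate(sorted(set(sigs)))}
        new_colors = [relabel[s] for s in sigs]
        if len(set(new_colors)) == len(set(colors)):   # partition did not refine: stable
            break
        colors = new_colors
    return colors

# Two tournaments can only be told apart by colour refinement if their sorted colour
# multisets differ -- a much weaker test than the o(sqrt(n))-dimensional algorithm that,
# by the result above, still fails on tournaments of twin width at most 35.
```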

We present STADEE, a \textbf{STA}tistics-based \textbf{DEE}p detection method to identify machine-generated text, addressing the limitations of current methods that rely heavily on fine-tuning pre-trained language models (PLMs). STADEE integrates key statistical text features with a deep classifier, focusing on aspects such as token probability and cumulative probability, which are crucial for handling nucleus sampling. Tested across diverse datasets and scenarios (in-domain, out-of-domain, and in-the-wild), STADEE demonstrates superior performance, achieving an 87.05% F1 score in-domain and outperforming both traditional statistical methods and fine-tuned PLMs, especially in the out-of-domain and in-the-wild settings. These results highlight its effectiveness and generalizability.
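As a hedged illustration of the kind of statistical features the abstract mentions (token probability and the cumulative probability relevant to nucleus sampling), the sketch below computes them from per-token probability distributions; the array shapes and the exact feature set are our assumptions, not STADEE's actual pipeline.

```python
import numpy as np

def token_stat_features(probs, token_ids):
    """Per-token statistical features from a language model's output.

    probs: array of shape (T, V) -- the model's next-token distribution at each of
    the T positions; token_ids: length-T sequence of the tokens actually observed.
    Returns an array of shape (T, 3): [probability of the observed token, its rank,
    cumulative probability of all tokens at least as likely as it] -- the last value
    is the nucleus-sampling mass needed to include the observed token.
    """
    T = len(token_ids)
    feats = np.zeros((T, 3))
    for t in range(T):
        p = probs[t]
        order = np.argsort(-p)                    # tokens sorted by decreasing probability
        rank = int(np.where(order == token_ids[t])[0][0])
        cum = float(np.cumsum(p[order])[rank])    # nucleus mass up to and including the token
        feats[t] = [p[token_ids[t]], rank, cum]
    return feats
```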

We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph $G$ and a set $\mathcal{F}$ of forbidden graphs, the goal is to compute a minimum-sized set $S$ of vertices of $G$ such that $G-S$ does not contain any graph in $\mathcal{F}$ as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that have been extensively studied on their own, particularly (but not only) from the perspectives of approximation and parameterization. In this paper, we study the approximability of the problem on a large variety of graph classes. Our first result is a linear-time $(1+\varepsilon)$-approximation reduction from (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of bounded expansion to the same problem on bounded-degree graphs within $\mathcal{G}$. This directly yields linear-size $(1+\varepsilon)$-approximation lossy kernels for the problems on any graph class of bounded expansion. Our second result is a linear-time approximation scheme for (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of polynomial expansion, based on the local-search framework of Har-Peled and Quanrud [SICOMP 2017]. This approximation scheme applies to a more general family of problems that aim to hit all subgraphs satisfying a certain property $\pi$ that is efficiently testable and has bounded diameter. Both of our results have applications to Subgraph Hitting (not induced) on wide classes of geometric intersection graphs, yielding linear-size lossy kernels and (near-)linear time approximation schemes for the problem.

In the Euclidean Bottleneck Steiner Tree problem, the input consists of a set of $n$ points in $\mathbb{R}^2$ called terminals and a parameter $k$, and the goal is to compute a Steiner tree that spans all the terminals and contains at most $k$ points of $\mathbb{R}^2$ as Steiner points such that the maximum edge-length of the Steiner tree is minimized, where the length of a tree edge is the Euclidean distance between its two endpoints. The problem is well-studied and is known to be NP-hard. In this paper, we give a $k^{O(k)} n^{O(1)}$-time algorithm for Euclidean Bottleneck Steiner Tree, which implies that the problem is fixed-parameter tractable (FPT). This settles an open question explicitly asked by Bae et al. [Algorithmica, 2011], who showed that the $\ell_1$ and $\ell_{\infty}$ variants of the problem are FPT. Our approach can be generalized to the problem with $\ell_p$ metric for any rational $1 \le p \le \infty$, or even other metrics on $\mathbb{R}^2$.

We consider the optimization problem of cardinality-constrained maximization of a monotone submodular set function $f:2^U\to\mathbb{R}_{\geq 0}$ (SM) with noisy evaluations of $f$. In particular, it is assumed that we do not have value oracle access to $f$; instead, for any $X\subseteq U$ and $u\in U$, we can take samples from a noisy distribution with expected value $f(X\cup\{u\})-f(X)$. Our goal is to develop algorithms in this setting that take as few samples as possible and return a solution with an approximation guarantee relative to the optimal solution, with high probability. We propose the algorithm Confident Threshold Greedy (CTG), which is based on the threshold greedy algorithm of Badanidiyuru and Vondrak [1] and samples adaptively in order to produce an approximate solution with high probability. We prove that CTG achieves an approximation ratio arbitrarily close to $1-1/e$, depending on input parameters. We provide an experimental evaluation on real instances of SM and demonstrate the sample efficiency of CTG.
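The sketch below conveys the flavour of such an adaptive-sampling threshold greedy: marginal gains are estimated by averaging noisy samples until a Hoeffding-style confidence radius is small relative to the current threshold. The confidence radius, stopping rule, and threshold schedule are simplified placeholders of ours and should not be read as the actual CTG algorithm or its guarantees.

```python
import math
import random

def estimate_gain(sample_gain, eps_conf, delta, max_samples=10_000):
    """Average noisy samples of a marginal gain until a Hoeffding-style confidence
    radius drops below eps_conf (assumes individual gains are bounded in [0, 1])."""
    total, m = 0.0, 0
    while m < max_samples:
        total += sample_gain()
        m += 1
        radius = math.sqrt(math.log(2.0 / delta) / (2.0 * m))
        if radius <= eps_conf:
            break
    return total / m

def noisy_threshold_greedy(universe, sample_gain, k, opt_guess, eps=0.1, delta=0.01):
    """Threshold greedy with a noisy marginal-gain oracle (simplified sketch).

    universe: set of elements; sample_gain(S, u) returns one noisy sample of
    f(S ∪ {u}) - f(S). Thresholds decrease geometrically from opt_guess / k,
    in the spirit of the threshold greedy algorithm.
    """
    S = set()
    tau = opt_guess / k
    while tau > (eps / k) * opt_guess and len(S) < k:
        for u in list(universe - S):
            gain = estimate_gain(lambda: sample_gain(S, u), eps_conf=eps * tau, delta=delta)
            if gain >= tau and len(S) < k:
                S.add(u)
        tau *= (1.0 - eps)
    return S
```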

Given a vector dataset $\mathcal{X}$ and a query vector $\vec{x}_q$, graph-based Approximate Nearest Neighbor Search (ANNS) aims to build a proximity graph (PG) as an index of $\mathcal{X}$ and to approximately return the vectors with minimum distances to $\vec{x}_q$ by searching over the PG index. This approach struggles when $\mathcal{X}$ is large, because a PG storing full vectors is too large to fit into memory; e.g., a billion-scale $\mathcal{X}$ in 128 dimensions would consume nearly 600 GB of memory. To address this, Product Quantization (PQ) has been integrated with graph-based ANNS to reduce memory usage, keeping compact codes of quantized vectors in memory instead of the large original vectors. Existing PQ methods do not consider the important routing features of the PG, resulting in low-quality quantized vectors that hurt the ANNS's effectiveness. In this paper, we present an end-to-end Routing-guided learned Product Quantization (RPQ) for graph-based ANNS. It consists of (1) a \textit{differentiable quantizer} that makes the standard discrete PQ differentiable so that it can be trained end-to-end by back-propagation, (2) a \textit{sampling-based feature extractor} that extracts neighborhood and routing features of a PG, and (3) a \textit{multi-feature joint training module} with two types of feature-aware losses that continuously optimizes the differentiable quantizer. As a result, the inherent features of a PG are embedded into the learned PQ, generating high-quality quantized vectors. Moreover, we integrate RPQ with the state-of-the-art DiskANN and existing popular PGs to improve their performance. Comprehensive experiments on real-world large-scale datasets (from 1M to 1B) demonstrate RPQ's superiority, e.g., a 1.7$\times$-4.2$\times$ improvement in QPS at the same recall@10 of 95\%.
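The sketch below illustrates the generic idea of a differentiable product quantizer via temperature-controlled soft assignment to sub-codebooks, a common relaxation of hard PQ; the codebook sizes, temperature, and soft-assignment choice are our assumptions and not necessarily how RPQ's quantizer is built.

```python
import numpy as np

def soft_product_quantize(x, codebooks, temperature=0.1):
    """Differentiable (soft) product quantization of a single vector.

    x: vector of dimension D; codebooks: array of shape (M, K, D // M), i.e. M
    sub-codebooks of K centroids each. Each sub-vector is replaced by a softmax-
    weighted combination of its sub-codebook, which is differentiable in both x
    and the codebooks (hard PQ would take the nearest centroid instead).
    """
    M, K, d_sub = codebooks.shape
    sub_vectors = x.reshape(M, d_sub)
    reconstruction = np.empty_like(sub_vectors)
    for m in range(M):
        # Squared distances to the K centroids of sub-codebook m.
        dists = np.sum((codebooks[m] - sub_vectors[m]) ** 2, axis=1)
        weights = np.exp(-(dists - dists.min()) / temperature)   # stabilized softmax
        weights /= weights.sum()
        reconstruction[m] = weights @ codebooks[m]                # convex combination
    return reconstruction.reshape(-1)

# Example: a random 128-dimensional vector, 8 sub-codebooks of 256 centroids each.
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(8, 256, 16))
x = rng.normal(size=128)
x_hat = soft_product_quantize(x, codebooks)
print(np.linalg.norm(x - x_hat))   # reconstruction error of the soft quantization
```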

The Djokovi\'{c}-Winkler relation $\Theta$ is a binary relation defined on the edge set of a given graph that is based on the distances of certain vertices and which plays a prominent role in graph theory. In this paper, we explore the relatively uncharted ``reflexive complement'' $\overline\Theta$ of $\Theta$, where $(e,f)\in \overline\Theta$ if and only if $e=f$ or $(e,f)\notin \Theta$ for edges $e$ and $f$. We establish the relationship between $\overline\Theta$ and the set $\Delta_{ef}$, comprising the distances between the vertices of $e$ and $f$, and shed some light on the intricacies of its transitive closure $\overline\Theta^*$. Notably, we demonstrate that $\overline\Theta^*$ exhibits multiple equivalence classes only within a restricted subclass of complete multipartite graphs. In addition, we characterize non-trivial relations $R$ that coincide with $\overline\Theta$ as those where the graph representation is disconnected, with each connected component being the (join of) Cartesian product of complete graphs. The latter results imply, somewhat surprisingly, that knowledge about the distances between vertices is not required to determine $\overline\Theta^*$. Moreover, $\overline\Theta^*$ has either exactly one or three equivalence classes.
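To make the definitions concrete, the sketch below computes $\Theta$ via the standard Djokovi\'{c}-Winkler criterion (edges $e=uv$ and $f=xy$ are related iff $d(u,x)+d(v,y) \neq d(u,y)+d(v,x)$) and then the reflexive complement $\overline\Theta$ exactly as defined above; the use of networkx and the brute-force pairwise loop are implementation choices for the illustration only.

```python
import networkx as nx

def theta_and_reflexive_complement(G):
    """Compute the Djokovic-Winkler relation Theta and its reflexive complement,
    both returned as sets of frozensets of edges, by brute force over edge pairs."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    edges = list(G.edges())
    theta, complement = set(), set()
    for i, (u, v) in enumerate(edges):
        for (x, y) in edges[i:]:
            related = dist[u][x] + dist[v][y] != dist[u][y] + dist[v][x]
            pair = frozenset({(u, v), (x, y)})
            if related:
                theta.add(pair)
            if (u, v) == (x, y) or not related:   # reflexive complement, as defined above
                complement.add(pair)
    return theta, complement

# Example on the 4-cycle: opposite edges are Theta-related, adjacent edges are not.
theta, comp = theta_and_reflexive_complement(nx.cycle_graph(4))
print(len(theta), len(comp))
```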

We revisit the well-known Curve Shortening Flow (CSF) for immersed curves in the $d$-dimensional Euclidean space. We exploit a fundamental structure of the problem to derive a new global construction of a solution, that is, a construction that is valid for all times and is insensitive to singularities. The construction is characterized by discretization in time, and the approximant, while still exhibiting the possible formation of finitely many singularities at a finite set of singular times, exists globally, is well behaved, and is simpler to analyze than a solution of the CSF. A solution of the latter is obtained in the limit. Estimates for a natural (geometric) norm involving length and total absolute curvature allow passage to the limit. Many classical qualitative results about the flow can be recovered by exploiting the simplicity of the approximant, and new ones can be proved. The construction also suggests a numerical procedure for the computation of the flow, which proves very effective, as demonstrated by a series of numerical experiments scattered throughout the paper.
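As a generic numerical illustration (explicitly not the paper's time-discrete construction), the sketch below performs naive explicit Euler steps of curve shortening on a closed polygon, moving each vertex along a discrete curvature vector; the vertex count, step size, and curvature discretization are arbitrary choices for the example.

```python
import numpy as np

def csf_step(points, dt):
    """One explicit Euler step of a discrete curve shortening flow.

    points: array of shape (N, d) giving a closed polygon (vertex i is joined to
    i-1 and i+1 modulo N). Each vertex moves by dt times a discrete curvature
    vector, here the difference of unit tangents scaled by the local edge lengths.
    """
    prev_pts = np.roll(points, 1, axis=0)
    next_pts = np.roll(points, -1, axis=0)
    l_prev = np.linalg.norm(points - prev_pts, axis=1, keepdims=True)
    l_next = np.linalg.norm(next_pts - points, axis=1, keepdims=True)
    # Discrete curvature vector: 2 (T_next - T_prev) / (l_prev + l_next).
    kappa = 2.0 * ((next_pts - points) / l_next - (points - prev_pts) / l_prev) / (l_prev + l_next)
    return points + dt * kappa

# A circle should shrink homothetically: run a few steps and watch the radius decrease.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(50):
    curve = csf_step(curve, dt=1e-3)
print(np.linalg.norm(curve, axis=1).mean())   # mean radius, now below 1
```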

Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. Based on this, we propose a novel aspect-specific sentiment classification framework. Experiments on three benchmark collections show that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
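For illustration only, the sketch below implements a single graph-convolution layer over an adjacency matrix built from dependency arcs, with self-loops and degree normalization; the specific normalization, input features, and toy dependency edges are our assumptions, not the exact layer used in the paper.

```python
import numpy as np

def dependency_adjacency(n_tokens, dep_edges):
    """Symmetric adjacency matrix (with self-loops) built from dependency arcs
    given as (head_index, dependent_index) pairs over n_tokens tokens."""
    A = np.eye(n_tokens)
    for head, dep in dep_edges:
        A[head, dep] = A[dep, head] = 1.0
    return A

def gcn_layer(H, A, W):
    """One GCN layer: ReLU(D^{-1} A H W), where D is the diagonal degree matrix."""
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (A @ H) / deg @ W)

# Toy example: 5 tokens with 100-dimensional inputs, dependency arcs forming a small tree.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 100))                    # token representations (e.g., from a BiLSTM)
A = dependency_adjacency(5, [(1, 0), (1, 3), (3, 2), (3, 4)])
W = rng.normal(size=(100, 64)) * 0.1
print(gcn_layer(H, A, W).shape)                  # (5, 64)
```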
