
It is proven that a conjecture of Tao (2010) holds true for log-concave random variables on the integers: For every $n \geq 1$, if $X_1,\ldots,X_n$ are i.i.d. integer-valued, log-concave random variables, then $$ H(X_1+\cdots+X_{n+1}) \geq H(X_1+\cdots+X_{n}) + \frac{1}{2}\log{\Bigl(\frac{n+1}{n}\Bigr)} - o(1) $$ as $H(X_1) \to \infty$, where $H$ denotes the (discrete) Shannon entropy. The problem is reduced to the continuous setting by showing that if $U_1,\ldots,U_n$ are independent continuous uniforms on $(0,1)$, then $$ h(X_1+\cdots+X_n + U_1+\cdots+U_n) = H(X_1+\cdots+X_n) + o(1) $$ as $H(X_1) \to \infty$, where $h$ stands for the differential entropy. Explicit bounds for the $o(1)$-terms are provided.
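A small numerical sketch (ours, not from the paper): geometric random variables are log-concave on the integers, so the entropy increment from $n=3$ to $n=4$ summands should already be close to $\frac{1}{2}\log\frac{4}{3}$. The parameter $p$ and the truncation level of the support are arbitrary illustrative choices.

```python
# Numerical check of the entropy-increment claim for i.i.d. geometric summands.
import numpy as np

def geometric_pmf(p, support):
    k = np.arange(support)
    pmf = (1 - p) ** k * p
    return pmf / pmf.sum()              # renormalise the truncated tail

def shannon_entropy(pmf):
    pmf = pmf[pmf > 0]
    return -(pmf * np.log(pmf)).sum()   # entropy in nats

p, support, n = 0.02, 4000, 3           # small p -> large H(X_1)
x = geometric_pmf(p, support)

s_n = x.copy()
for _ in range(n - 1):                  # pmf of X_1 + ... + X_n
    s_n = np.convolve(s_n, x)
s_next = np.convolve(s_n, x)            # pmf of X_1 + ... + X_{n+1}

increment = shannon_entropy(s_next) - shannon_entropy(s_n)
print(f"H(S_{n+1}) - H(S_{n}) = {increment:.4f}")
print(f"(1/2) log((n+1)/n)   = {0.5 * np.log((n + 1) / n):.4f}")
```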

Related content


We investigate a fundamental vertex-deletion problem called (Induced) Subgraph Hitting: given a graph $G$ and a set $\mathcal{F}$ of forbidden graphs, the goal is to compute a minimum-sized set $S$ of vertices of $G$ such that $G-S$ does not contain any graph in $\mathcal{F}$ as an (induced) subgraph. This is a generic problem that encompasses many well-known problems that were extensively studied on their own, particularly (but not only) from the perspectives of both approximation and parameterization. In this paper, we study the approximability of the problem on a large variety of graph classes. Our first result is a linear-time $(1+\varepsilon)$-approximation reduction from (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of bounded expansion to the same problem on bounded-degree graphs within $\mathcal{G}$. This directly yields linear-size $(1+\varepsilon)$-approximation lossy kernels for the problem on any graph class of bounded expansion. Our second result is a linear-time approximation scheme for (Induced) Subgraph Hitting on any graph class $\mathcal{G}$ of polynomial expansion, based on the local-search framework of Har-Peled and Quanrud [SICOMP 2017]. This approximation scheme can be applied to a more general family of problems that aim to hit all subgraphs satisfying a certain property $\pi$ that is efficiently testable and has bounded diameter. Both of our results have applications to Subgraph Hitting (not induced) on wide classes of geometric intersection graphs, resulting in linear-size lossy kernels and (near-)linear time approximation schemes for the problem.
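To make the problem definition concrete, here is a brute-force illustration (not the paper's algorithm) for the special case $\mathcal{F}=\{K_3\}$: find a minimum vertex set whose deletion leaves a triangle-free graph. The graph and the exhaustive search are purely illustrative.

```python
# Brute-force Subgraph Hitting for F = {K_3}: try vertex sets in order of size
# and return the first one whose removal leaves no triangle.
from itertools import combinations

def has_triangle(adj, removed):
    nodes = [v for v in adj if v not in removed]
    for u, v, w in combinations(nodes, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            return True
    return False

def min_triangle_hitting_set(adj):
    nodes = list(adj)
    for size in range(len(nodes) + 1):
        for S in combinations(nodes, size):
            if not has_triangle(adj, set(S)):
                return set(S)

# K_5 as an adjacency-set dictionary: any 3 vertices form an optimal solution.
k5 = {v: {u for u in range(5) if u != v} for v in range(5)}
print(min_triangle_hitting_set(k5))
```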

In the Euclidean Bottleneck Steiner Tree problem, the input consists of a set of $n$ points in $\mathbb{R}^2$ called terminals and a parameter $k$, and the goal is to compute a Steiner tree that spans all the terminals and contains at most $k$ points of $\mathbb{R}^2$ as Steiner points such that the maximum edge-length of the Steiner tree is minimized, where the length of a tree edge is the Euclidean distance between its two endpoints. The problem is well-studied and is known to be NP-hard. In this paper, we give a $k^{O(k)} n^{O(1)}$-time algorithm for Euclidean Bottleneck Steiner Tree, which implies that the problem is fixed-parameter tractable (FPT). This settles an open question explicitly asked by Bae et al. [Algorithmica, 2011], who showed that the $\ell_1$ and $\ell_{\infty}$ variants of the problem are FPT. Our approach can be generalized to the problem with $\ell_p$ metric for any rational $1 \le p \le \infty$, or even other metrics on $\mathbb{R}^2$.
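As a baseline sketch (an illustration, not the paper's FPT algorithm): with $k=0$ Steiner points the objective degenerates to the bottleneck spanning tree of the terminals, whose optimum equals the largest edge of any minimum spanning tree. The small point set below is an arbitrary example.

```python
# Bottleneck spanning tree of the terminals (k = 0 Steiner points) via Prim's
# algorithm: track the largest edge weight ever used to attach a vertex.
import math

def bottleneck_no_steiner(points):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    best = {v: dist(0, v) for v in range(1, n)}
    bottleneck = 0.0
    while len(in_tree) < n:
        v = min(best, key=best.get)          # cheapest vertex to attach next
        bottleneck = max(bottleneck, best.pop(v))
        in_tree.add(v)
        for u in best:
            best[u] = min(best[u], dist(v, u))
    return bottleneck

terminals = [(0, 0), (4, 0), (2, 3)]
print(bottleneck_no_steiner(terminals))      # ~3.61: the two short sides suffice
```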

Consider that there are $k\le n$ agents in a simple, connected, and undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges. The goal of the dispersion problem is to move these $k$ agents to distinct nodes. Agents can communicate only when they are at the same node, and no other means of communication, such as whiteboards, are available. We assume that the agents operate synchronously. We consider two scenarios: one in which all agents are initially located at a single arbitrary node (the rooted setting) and one in which they are initially distributed over one or more arbitrary nodes (the general setting). Kshemkalyani and Sharma presented a dispersion algorithm for the general setting, which uses $O(m_k)$ time and $O(\log(k+\delta))$ bits of memory per agent [OPODIS 2021]. Here, $m_k$ is the maximum number of edges in any induced subgraph of $G$ with $k$ nodes, and $\delta$ is the maximum degree of $G$. This algorithm is the fastest in the literature, as no algorithm with $o(m_k)$ time has been discovered even for the rooted setting. In this paper, we present faster algorithms for both the rooted and the general settings. First, we present an algorithm for the rooted setting that solves the dispersion problem in $O(k\log \min(k,\delta))=O(k\log k)$ time using $O(\log \delta)$ bits of memory per agent. Next, we propose an algorithm for the general setting that achieves dispersion in $O(k (\log k)\cdot(\log \min(k,\delta)))=O(k \log^2 k)$ time using $O(\log (k+\delta))$ bits. Finally, for the rooted setting, we give a time-optimal, i.e., $O(k)$-time, algorithm with $O(\delta)$ bits of space per agent.
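The toy simulation below illustrates the rooted setting only, in a centralised way (the classical DFS-based dispersion idea, not the paper's algorithm, and without the per-agent memory accounting): the group of unsettled agents performs a DFS from the root and leaves one agent at every newly visited node.

```python
# Centralised toy simulation of DFS-based dispersion in the rooted setting.
def disperse_rooted(adj, k, root=0):
    settled, visited = {}, set()
    agents = list(range(k))                  # agent identifiers
    stack, moves = [root], 0
    while agents and stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        settled[v] = agents.pop()            # one agent settles at v
        stack.extend(u for u in adj[v] if u not in visited)
        moves += 1                           # one group move per visited node
    return settled, moves

# A 6-node path graph with 4 agents starting at node 0.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(disperse_rooted(path, k=4))
```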

Non-autoregressive models have been widely studied in the Complete Information Scenario (CIS), in which the input contains complete information about the corresponding output. However, their exploration in the Incomplete Information Scenario (IIS) is extremely limited. Our analyses reveal that the incomplete input information of the IIS aggravates the inherent limitations of existing non-autoregressive models trained under Maximum Likelihood Estimation. In this paper, we propose an Adversarial Non-autoregressive Transformer (ANT) for the IIS, which has two features: 1) Position-Aware Self-Modulation, to provide more reasonable hidden representations, and 2) a Dependency Feed Forward Network, to strengthen its capacity for dependency modeling. We compare ANT with other mainstream models in the IIS and demonstrate that ANT achieves comparable performance with far fewer decoding iterations. Furthermore, we show its great potential in various applications such as latent interpolation and semi-supervised learning.
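A hedged PyTorch sketch of how a position-aware self-modulation layer could look under our reading of the abstract (the layer name is the paper's, but the shapes, parameterization, and the way the latent vector $z$ enters are assumptions): every position receives its own scale and shift, both predicted from $z$.

```python
# Sketch of a position-aware self-modulation layer (assumed design, not the
# paper's exact formulation): per-position scale/shift predicted from latent z.
import torch
import torch.nn as nn

class PositionAwareSelfModulation(nn.Module):
    def __init__(self, max_len, latent_dim, hidden_dim):
        super().__init__()
        self.to_scale = nn.Linear(latent_dim, max_len * hidden_dim)
        self.to_shift = nn.Linear(latent_dim, max_len * hidden_dim)
        self.max_len, self.hidden_dim = max_len, hidden_dim

    def forward(self, h, z):
        # h: (batch, seq_len, hidden_dim), z: (batch, latent_dim)
        b, seq_len, d = h.shape
        scale = self.to_scale(z).view(b, self.max_len, d)[:, :seq_len]
        shift = self.to_shift(z).view(b, self.max_len, d)[:, :seq_len]
        return (1 + scale) * h + shift

layer = PositionAwareSelfModulation(max_len=32, latent_dim=16, hidden_dim=64)
h, z = torch.randn(2, 10, 64), torch.randn(2, 16)
print(layer(h, z).shape)    # torch.Size([2, 10, 64])
```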

We consider the optimization problem of cardinality constrained maximization of a monotone submodular set function $f:2^U\to\mathbb{R}_{\geq 0}$ (SM) with noisy evaluations of $f$. In particular, it is assumed that we do not have value oracle access to $f$, but instead for any $X\subseteq U$ and $u\in U$ we can take samples from a noisy distribution with expected value $f(X\cup\{u\})-f(X)$. Our goal is to develop algorithms in this setting that take as few samples as possible, and return a solution with an approximation guarantee relative to the optimal with high probability. We propose the algorithm Confident Threshold Greedy (CTG), which is based on the threshold greedy algorithm of Badanidiyuru and Vondrak [1] and samples adaptively in order to produce an approximate solution with high probability. We prove that CTG achieves an approximation ratio arbitrarily close to $1-1/e$, depending on input parameters. We provide an experimental evaluation on real instances of SM and demonstrate the sample efficiency of CTG.
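A simplified sketch of the underlying idea, threshold greedy over noisy marginal gains with repeated sampling (the constants, sample count, and the toy coverage instance are illustrative assumptions, not the paper's CTG or its guarantees):

```python
# Threshold greedy where each marginal gain is only observable through noisy
# samples: average many samples before comparing against the current threshold.
import random

def noisy_threshold_greedy(universe, noisy_gain, k, opt_guess,
                           eps=0.2, samples=200):
    X = set()
    threshold = opt_guess / k
    while threshold > (eps * opt_guess) / (2 * k) and len(X) < k:
        for u in universe:
            if u in X or len(X) >= k:
                continue
            est = sum(noisy_gain(X, u) for _ in range(samples)) / samples
            if est >= threshold:
                X.add(u)
        threshold *= (1 - eps)
    return X

# Toy instance: coverage-style f, gains observed with additive Gaussian noise.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
def noisy_gain(X, u):
    covered = set().union(*(sets[v] for v in X)) if X else set()
    return len(sets[u] - covered) + random.gauss(0, 0.5)

print(noisy_threshold_greedy(list(sets), noisy_gain, k=2, opt_guess=5))
```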

A graph class $\mathscr{C}$ is called monadically stable if one cannot interpret, in first-order logic, arbitrarily large linear orders in colored graphs from $\mathscr{C}$. We prove that the model checking problem for first-order logic is fixed-parameter tractable on every monadically stable graph class. This extends the results of [Grohe, Kreutzer, and Siebertz; J. ACM '17] for nowhere dense classes and of [Dreier, M\"ahlmann, and Siebertz; STOC '23] for structurally nowhere dense classes to all monadically stable classes. As a complementary hardness result, we prove that for every hereditary graph class $\mathscr{C}$ that is edge-stable (excludes some half-graph as a semi-induced subgraph) but not monadically stable, first-order model checking is $\mathrm{AW}[*]$-hard on $\mathscr{C}$, and $\mathrm{W}[1]$-hard when restricted to existential sentences. This confirms, in the special case of edge-stable classes, an ongoing conjecture that the notion of monadic NIP delimits the tractability of first-order model checking on hereditary classes of graphs. For our tractability result, we first prove that monadically stable graph classes have almost linear neighborhood complexity. Using this, we construct sparse neighborhood covers for monadically stable classes, which provides the missing ingredient for the algorithm of [Dreier, M\"ahlmann, and Siebertz; STOC '23]. The key component of this construction is the usage of orders with low crossing number [Welzl; SoCG '88], a tool from the area of range queries. For our hardness result, we prove a new characterization of monadically stable graph classes in terms of forbidden induced subgraphs. We then use this characterization to show that in hereditary classes that are edge-stable but not monadically stable, one can effectively interpret the class of all graphs using only existential formulas.
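For readers unfamiliar with the quantity, here is a small illustration (ours, not from the paper) of neighborhood complexity: for a vertex subset $A$, count the distinct traces $N(v)\cap A$ over all vertices $v$. The paper shows that in monadically stable classes this count grows almost linearly in $|A|$. The example graph is an arbitrary 6-cycle.

```python
# Count distinct neighborhood traces N(v) ∩ A over all vertices v.
def neighborhood_complexity(adj, A):
    traces = {frozenset(adj[v] & A) for v in adj}
    return len(traces)

# A 6-cycle with adjacency given as sets.
n = 6
cycle = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
A = {0, 2, 4}
print(neighborhood_complexity(cycle, A))   # 4 distinct traces on A
```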

Given a vector dataset $\mathcal{X}$ and a query vector $\vec{x}_q$, graph-based Approximate Nearest Neighbor Search (ANNS) builds a proximity graph (PG) as an index of $\mathcal{X}$ and approximately returns the vectors with minimum distances to $\vec{x}_q$ by searching over the PG index. This approach struggles when $\mathcal{X}$ is large, because a PG that stores full vectors is too large to fit into memory; e.g., a billion-scale $\mathcal{X}$ in 128 dimensions would consume nearly 600 GB of memory. To address this, Product Quantization (PQ) has been integrated with graph-based ANNS to reduce memory usage, keeping smaller compact codes of quantized vectors in memory instead of the large original vectors. Existing PQ methods, however, do not consider the important routing features of the PG, resulting in low-quality quantized vectors that hurt the ANNS's effectiveness. In this paper, we present an end-to-end Routing-guided learned Product Quantization (RPQ) for graph-based ANNS. It consists of (1) a \textit{differentiable quantizer} that makes the standard discrete PQ differentiable so that it suits the back-propagation of end-to-end learning, (2) a \textit{sampling-based feature extractor} that extracts the neighborhood and routing features of a PG, and (3) a \textit{multi-feature joint training module} with two types of feature-aware losses that continuously optimizes the differentiable quantizer. As a result, the inherent features of a PG are embedded into the learned PQ, generating high-quality quantized vectors. Moreover, we integrate RPQ with the state-of-the-art DiskANN and existing popular PGs to improve their performance. Comprehensive experiments on real-world large-scale datasets (from 1M to 1B vectors) demonstrate RPQ's superiority, e.g., a 1.7$\times$-4.2$\times$ improvement in QPS at the same recall@10 of 95\%.
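The sketch below shows plain, non-learned product quantization, i.e. the baseline that RPQ improves on (the subspace count, codebook size, and data are illustrative assumptions): split each vector into M sub-vectors, run k-means in each subspace, and store one byte per subspace, which is where the memory saving over full 128-dimensional float vectors comes from.

```python
# Plain product quantization: per-subspace k-means codebooks + byte codes.
import numpy as np

def train_pq(X, M=8, K=256, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    subdim = X.shape[1] // M
    codebooks = []
    for m in range(M):
        sub = X[:, m * subdim:(m + 1) * subdim]
        centers = sub[rng.choice(len(sub), K, replace=False)]
        for _ in range(iters):                        # Lloyd's k-means
            d = ((sub[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in range(K):
                if (assign == k).any():
                    centers[k] = sub[assign == k].mean(0)
        codebooks.append(centers)
    return codebooks

def encode(X, codebooks):
    M, subdim = len(codebooks), codebooks[0].shape[1]
    codes = np.empty((len(X), M), dtype=np.uint8)
    for m, centers in enumerate(codebooks):
        sub = X[:, m * subdim:(m + 1) * subdim]
        codes[:, m] = ((sub[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    return codes                    # 128 float32 dims -> M bytes per vector

X = np.random.default_rng(1).normal(size=(2000, 128)).astype(np.float32)
codes = encode(X, train_pq(X))
print(codes.shape, codes.dtype)     # (2000, 8) uint8
```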

Propositional model counting (#SAT) can be solved efficiently when the input formula is in deterministic decomposable negation normal form (d-DNNF). Translating an arbitrary formula into a representation that allows inference tasks, such as counting, to be performed efficiently is called knowledge compilation. Top-down knowledge compilation is a state-of-the-art technique for solving #SAT problems that leverages the traces of exhaustive DPLL search to obtain d-DNNF representations. While knowledge compilation is well studied for propositional approaches, knowledge compilation for the (quantifier-free) model counting modulo theories setting (#SMT) has been studied to a much lesser degree. In this paper, we discuss compilation strategies for #SMT. We specifically advocate for a top-down compiler based on the traces of exhaustive DPLL(T) search.
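A minimal exhaustive-DPLL model counter, propositional only and with no theory reasoning, illustrates the trace idea: the recursion tree below is exactly what a top-down compiler would record as the decision nodes of a d-DNNF. This is an illustration of the background technique, not the paper's #SMT compiler.

```python
# Exhaustive DPLL #SAT: clauses are lists of signed integers (DIMACS style).
def count_models(clauses, n_vars, assignment=None):
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                          # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return 0                          # clause falsified: no models here
        simplified.append(rest)
    free = [v for v in range(1, n_vars + 1) if v not in assignment]
    if not simplified:
        return 2 ** len(free)                 # all clauses satisfied
    v = abs(simplified[0][0])                 # branch: a d-DNNF decision node
    return (count_models(simplified, n_vars, {**assignment, v: True}) +
            count_models(simplified, n_vars, {**assignment, v: False}))

# (x1 or x2) and (not x1 or x3) over 3 variables has 4 models.
print(count_models([[1, 2], [-1, 3]], n_vars=3))
```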

The Euclidean Steiner tree problem asks to find a min-cost metric graph that connects a given set of \emph{terminal} points $X$ in $\mathbb{R}^d$, possibly using points not in $X$, which are called Steiner points. Even though a near-linear time $(1 + \epsilon)$-approximation was obtained in the offline setting in the seminal works of Arora and Mitchell, an efficient dynamic algorithm for Steiner tree is still open. We give the first algorithm that (implicitly) maintains a $(1 + \epsilon)$-approximate solution, accessed via a set of tree traversal queries, subject to point insertions and deletions, with amortized update and query time $O(\mathrm{poly}\log n)$ with high probability. Our approach is based on an Arora-style geometric dynamic programming, and our main technical contribution is to maintain the DP subproblems efficiently in the dynamic setting. We also need to augment the DP subproblems to support the tree traversal queries.
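For context, here is a minimal sketch of the randomly shifted quadtree that underlies Arora-style geometric DP; it only builds the static decomposition (depth cap, point set, and shift handling are illustrative assumptions), whereas the paper's contribution is maintaining the DP over such a decomposition under insertions and deletions.

```python
# Randomly shifted quadtree over points in [0,1)^2, built inside a box of side 2.
import random

def build_quadtree(points, box, depth=0, max_depth=6):
    """Return a nested dict describing the quadtree cell containing `points`."""
    x0, y0, size = box
    if len(points) <= 1 or depth == max_depth:
        return {"box": box, "points": points}
    half = size / 2
    children = []
    for dx in (0, half):
        for dy in (0, half):
            sub_box = (x0 + dx, y0 + dy, half)
            sub = [p for p in points
                   if sub_box[0] <= p[0] < sub_box[0] + half
                   and sub_box[1] <= p[1] < sub_box[1] + half]
            children.append(build_quadtree(sub, sub_box, depth + 1, max_depth))
    return {"box": box, "children": children}

def count_leaves(node):
    if "points" in node:
        return 1
    return sum(count_leaves(c) for c in node["children"])

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]
shift = (random.random(), random.random())               # the random shift
shifted = [(x + shift[0], y + shift[1]) for x, y in pts]  # now in [0, 2)^2
root = build_quadtree(shifted, box=(0.0, 0.0, 2.0))
print(count_leaves(root))
```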

In 2016, a breakthrough result of Chechik and Wulff-Nilsen [SODA '16] established that every $n$-node graph $G$ has a $(1+\varepsilon)(2k-1)$-spanner of lightness $O_{\varepsilon}(n^{1/k})$, and recent followup work by Le and Solomon [STOC '23] generalized the proof strategy and improved the dependence on $\varepsilon$. We give a new proof of this result, with the improved $\varepsilon$-dependence. Our proof is a direct analysis of the often-studied greedy spanner, and can be viewed as an extension of the folklore Moore bounds used to analyze spanner sparsity.
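The greedy spanner analyzed here is the standard folklore construction; the sketch below shows it on a small weighted graph (the example graph and stretch value are illustrative, and the paper's contribution is the lightness analysis, not the algorithm): scan edges by increasing weight and keep an edge only if the spanner built so far does not already provide a path of length at most $t=(1+\varepsilon)(2k-1)$ times its weight.

```python
# Folklore greedy t-spanner: add an edge iff the current spanner distance
# between its endpoints exceeds t times the edge weight.
import heapq

def dijkstra(adj, s):
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, w in adj.get(v, []):
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def greedy_spanner(edges, t):
    adj, spanner = {}, []
    for w, u, v in sorted(edges):               # edges in increasing weight
        if dijkstra(adj, u).get(v, float("inf")) > t * w:
            spanner.append((u, v, w))
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    return spanner

# Complete graph on the 4 corners of a unit square (diagonals of length sqrt(2)).
edges = [(1, 0, 1), (1, 1, 2), (1, 2, 3), (1, 3, 0),
         (2**0.5, 0, 2), (2**0.5, 1, 3)]
print(greedy_spanner(edges, t=3))               # for k = 2, t = 2k - 1 = 3
```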
