
A $k$-crossing family in a point set $S$ in general position is a set of $k$ segments spanned by points of $S$ such that all $k$ segments mutually cross. In this short note we present two statements on crossing families which are based on sets of small cardinality: (1)~Any set of at least 15 points contains a crossing family of size~4. (2)~There are sets of $n$ points which do not contain a crossing family of size larger than~$8\lceil \frac{n}{41} \rceil$. Both results improve the previously best known bounds.
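For concreteness, here is a small brute-force sketch (in Python, with illustrative names not taken from the paper) of what the definition asks for: a set of segments is a $k$-crossing family exactly when every pair of its segments properly crosses.

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): >0 left turn, <0 right turn, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(s, t):
    """True iff segments s and t properly cross (a single interior intersection point)."""
    (a, b), (c, d) = s, t
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def is_crossing_family(segments):
    """A k-crossing family: all k segments mutually cross."""
    return all(segments_cross(s, t) for s, t in combinations(segments, 2))

# Example: two segments crossing at (1, 1) form a 2-crossing family.
print(is_crossing_family([((0, 0), (2, 2)), ((0, 2), (2, 0))]))  # True
```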

Related Content

The metric dimension dim(G) of a graph $G$ is the minimum cardinality of a subset $S$ of vertices of $G$ such that each vertex of $G$ is uniquely determined by its distances to $S$. It is well-known that the metric dimension of a graph can be drastically increased by the modification of a single edge. Our main result shows that the increase in metric dimension caused by edge additions can be amortized: if the graph $G$ consists of a spanning tree $T$ plus $c$ edges, then the metric dimension of $G$ is at most the metric dimension of $T$ plus $6c$. We then use this result to prove a weakening of a conjecture of Eroh et al. The zero forcing number $Z(G)$ of $G$ is the minimum cardinality of a subset $S$ of black vertices (all other vertices being colored white) of $G$ such that all vertices are turned black after applying the following rule finitely many times: a white vertex is turned black if it is the only white neighbor of a black vertex. Eroh et al. conjectured that, for any graph $G$, $dim(G)\leq Z(G) + c(G)$, where $c(G)$ is the number of edges that have to be removed from $G$ to get a forest. They proved the conjecture for trees and unicyclic graphs. We prove a weaker version of the conjecture: $dim(G)\leq Z(G)+6c(G)$ holds for any graph. We also prove that the conjecture is true for graphs with edge-disjoint cycles, broadly generalizing the unicyclic result of Eroh et al.
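As a small illustration of the forcing rule defined above (not the authors' code; the graph representation and names are illustrative), one can check whether a vertex set is a zero forcing set by iterating the rule until no further vertex can be forced:

```python
def is_zero_forcing_set(adj, black):
    """Apply the forcing rule until it stabilizes: a white vertex is turned
    black if it is the only white neighbor of some black vertex.
    `adj` maps each vertex to the set of its neighbors."""
    black = set(black)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white_nbrs = [u for u in adj[v] if u not in black]
            if len(white_nbrs) == 1:        # v forces its unique white neighbor
                black.add(white_nbrs[0])
                changed = True
    return black == set(adj)                # True iff every vertex ends up black

# Example: on the path a-b-c-d, the single endpoint {a} is a zero forcing set.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_zero_forcing_set(path, {"a"}))     # True
```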

We study the classical expander codes, introduced by Sipser and Spielman \cite{SS96}. Given any constants $0< \alpha, \varepsilon < 1/2$, and an arbitrary bipartite graph with $N$ vertices on the left, $M < N$ vertices on the right, and left degree $D$ such that any left subset $S$ of size at most $\alpha N$ has at least $(1-\varepsilon)|S|D$ neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly $\frac{\alpha N}{2 \varepsilon }$. This is strictly better than the best known previous result of $2(1-\varepsilon ) \alpha N$ \cite{Sudan2000note, Viderman13b} whenever $\varepsilon < 1/2$, and improves the previous result significantly when $\varepsilon $ is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever $\varepsilon < \frac{1}{4}$. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
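To make the construction concrete, the following toy sketch (illustrative names; the brute-force distance computation is only suitable for tiny graphs) builds the parity-check matrix of the code defined by a bipartite graph, with variable bits on the $N$ left vertices and one parity constraint per right vertex:

```python
import itertools
import numpy as np

def parity_check_matrix(left_neighbors, M):
    """Row j of H has a 1 in column i iff right vertex j is a neighbor of left vertex i.
    The expander code is the set of x in {0,1}^N with H x = 0 (mod 2)."""
    N = len(left_neighbors)
    H = np.zeros((M, N), dtype=int)
    for i, nbrs in enumerate(left_neighbors):
        for j in nbrs:
            H[j, i] = 1
    return H

def code_distance(H):
    """Brute-force minimum Hamming weight of a nonzero codeword (toy sizes only)."""
    M, N = H.shape
    best = None
    for bits in itertools.product([0, 1], repeat=N):
        x = np.array(bits)
        if x.any() and not (H @ x % 2).any():
            w = int(x.sum())
            best = w if best is None else min(best, w)
    return best   # None means only the zero codeword satisfies all parity checks
```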

Mean field control (MFC) is an effective way to mitigate the curse of dimensionality of cooperative multi-agent reinforcement learning (MARL) problems. This work considers a collection of $N_{\mathrm{pop}}$ heterogeneous agents that can be segregated into $K$ classes such that the $k$-th class contains $N_k$ homogeneous agents. We aim to prove approximation guarantees of the MARL problem for this heterogeneous system by its corresponding MFC problem. We consider three scenarios where the reward and transition dynamics of all agents are respectively taken to be functions of $(1)$ joint state and action distributions across all classes, $(2)$ individual distributions of each class, and $(3)$ marginal distributions of the entire population. We show that, in these cases, the $K$-class MARL problem can be approximated by MFC with errors given as $e_1=\mathcal{O}(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{N_{\mathrm{pop}}}\sum_{k}\sqrt{N_k})$, $e_2=\mathcal{O}(\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\sum_{k}\frac{1}{\sqrt{N_k}})$ and $e_3=\mathcal{O}\left(\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left[\frac{A}{N_{\mathrm{pop}}}\sum_{k\in[K]}\sqrt{N_k}+\frac{B}{\sqrt{N_{\mathrm{pop}}}}\right]\right)$, respectively, where $A, B$ are some constants and $|\mathcal{X}|,|\mathcal{U}|$ are the sizes of state and action spaces of each agent. Finally, we design a Natural Policy Gradient (NPG) based algorithm that, in the three cases stated above, can converge to an optimal MARL policy within $\mathcal{O}(e_j)$ error with a sample complexity of $\mathcal{O}(e_j^{-3})$, $j\in\{1,2,3\}$, respectively.

The \emph{generalized sorting problem} is a restricted version of standard comparison sorting where we wish to sort $n$ elements but only a subset of the pairs may be compared. Formally, there is some known graph $G = (V, E)$ on the $n$ elements $v_1, \dots, v_n$, and the goal is to determine the true order of the elements using as few comparisons as possible, where all comparisons $(v_i, v_j)$ must be edges in $E$. We are promised that if the true ordering is $x_1 < x_2 < \cdots < x_n$ for $\{x_i\}$ an unknown permutation of the vertices $\{v_i\}$, then $(x_i, x_{i+1}) \in E$ for all $i$: this Hamiltonian path ensures that sorting is actually possible. In this work, we improve the bounds for generalized sorting on both random graphs and worst-case graphs. For Erd\H{o}s--R\'enyi random graphs $G(n, p)$ (with the promised Hamiltonian path added to ensure sorting is possible), we provide an algorithm for generalized sorting with an expected $O(n \log (np))$ comparisons, which we prove to be optimal in terms of query complexity. This strongly improves over the best known algorithm of Huang, Kannan, and Khanna (FOCS 2011), which uses $\tilde{O}(\min(n \sqrt{np}, n/p^2))$ comparisons. For arbitrary graphs $G$ with $n$ vertices and $m$ edges (again with the promised Hamiltonian path), we provide an algorithm for generalized sorting with $\tilde{O}(\sqrt{mn})$ comparisons. This improves over the best known algorithm of Huang et al., which uses $\min(m, \tilde{O}(n^{3/2}))$ comparisons.
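As a point of reference for the query model (this is the trivial baseline implicit in the $\min(m, \cdot)$ bound above, not the algorithm of the paper), one can always spend one comparison per allowed edge and recover the order by topologically sorting the resulting orientation; the promised Hamiltonian path makes that topological order unique:

```python
from graphlib import TopologicalSorter

def generalized_sort_naive(vertices, edges, less_than):
    """Trivial baseline: query every allowed pair once (m comparisons),
    orient the edges, and topologically sort. The promised Hamiltonian path
    on consecutive elements guarantees the result is the true order."""
    ts = TopologicalSorter({v: set() for v in vertices})
    for u, v in edges:
        if less_than(u, v):
            ts.add(v, u)      # u precedes v
        else:
            ts.add(u, v)      # v precedes u
    return list(ts.static_order())

# Example with a hidden order and a comparison oracle restricted to `edges`.
hidden_rank = {"a": 0, "b": 1, "c": 2, "d": 3}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
oracle = lambda u, v: hidden_rank[u] < hidden_rank[v]
print(generalized_sort_naive("abcd", edges, oracle))   # ['a', 'b', 'c', 'd']
```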

We consider the problem of sampling from the ferromagnetic Potts and random-cluster models on a general family of random graphs via the Glauber dynamics for the random-cluster model. The random-cluster model is parametrized by an edge probability $p \in (0,1)$ and a cluster weight $q > 0$. We establish that for every $q\ge 1$, the random-cluster Glauber dynamics mixes in optimal $\Theta(n\log n)$ steps on $n$-vertex random graphs having a prescribed degree sequence with bounded average branching $\gamma$ throughout the full high-temperature uniqueness regime $p<p_u(q,\gamma)$. The family of random graph models we consider includes the Erd\H{o}s--R\'enyi random graph $G(n,\gamma/n)$, and so we provide the first polynomial-time sampling algorithm for the ferromagnetic Potts model on Erd\H{o}s--R\'enyi random graphs that works for all $q$ in the full uniqueness regime. We accompany our results with mixing time lower bounds (exponential in the maximum degree) for the Potts Glauber dynamics, in the same settings where our $\Theta(n \log n)$ bounds for the random-cluster Glauber dynamics apply. This reveals a significant computational advantage of random-cluster-based algorithms for sampling from the Potts Gibbs distribution at high temperatures in the presence of high-degree vertices.
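For readers unfamiliar with the dynamics, a single heat-bath update of the random-cluster Glauber dynamics looks roughly as follows (a minimal sketch with illustrative names; it uses the standard conditional probabilities, $p$ or $p/(p+(1-p)q)$, depending on whether the chosen edge's endpoints are otherwise connected):

```python
import random
from collections import deque

def rc_glauber_step(edges, state, p, q):
    """One heat-bath update of the random-cluster Glauber dynamics.
    `state` maps each edge (a pair of vertices) to True (open) / False (closed)."""
    e = random.choice(edges)
    u, v = e
    # Conditional probability that e is open, given the rest of the configuration:
    # p if u and v are already joined by other open edges, else p / (p + (1-p) q).
    prob_open = p if _connected(state, u, v, skip=e) else p / (p + (1 - p) * q)
    state[e] = random.random() < prob_open

def _connected(state, u, v, skip):
    """BFS over the open edges, ignoring `skip`."""
    adj = {}
    for f, is_open in state.items():
        if is_open and f != skip:
            a, b = f
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False
```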

We propose new query applications of the well-known randomized incremental construction of the Trapezoidal Search DAG (TSD) on a set of $n$ line segments in the plane, where queries are allowed to be any axis-aligned window. We show that our algorithm reports the $m$ trapezoids that are intersected by the query in $\mathcal{O}(m+\log n)$ expected time, regardless of the spatial location of the segment set and the query. In the case where the query is a \emph{vertical segment}, the query time bound reduces to $\mathcal{O}(k + \log n)$, where $k$ is the number of segments that are intersected. This improves on the query and space bounds of the well-known Segment-Tree-based approach, which is to date the theoretical bottleneck for optimal query time. In the case where the set of segments is a connected planar subdivision, this method can easily be extended to an algorithm that reports the $k$ segments that intersect an axis-aligned query window in $\mathcal{O}(k + \log n)$ expected time. Our publicly available implementation handles degeneracies exactly, including segments with overlap and multi-intersections. Experiments show that the method is practical and provides more reliable query times in comparison to R-trees and the segment-tree-based data structure on real-world and synthetic data sets.

Lattices defined as modules over algebraic rings or orders have garnered interest recently, particularly in the fields of cryptography and coding theory. Whilst there exist many attempts to generalise the conditions for LLL reduction to such lattices, there do not seem to be any attempts so far to generalise stronger notions of reduction such as Minkowski, HKZ and BKZ reduction. Moreover, most lattice reduction methods for modules over algebraic rings involve applying traditional techniques to the embedding of the module into real space, which distorts the structure of the algebra. In this paper, we generalise some classical notions of reduction theory to free modules defined over an order. Furthermore, we extend the definitions of Minkowski, HKZ and BKZ reduction to such modules and show that bases reduced in this manner have vector lengths that can be bounded above by the successive minima of the lattice multiplied by a constant that depends on the algebra and the dimension of the module. In particular, we show that HKZ-reduced bases are polynomially close to the successive minima of the lattice in terms of the module dimension. None of our definitions require the module to be embedded and thus preserve the structure of the module.

In this paper we demonstrate that if one claims to have an algorithm that solves CSAT in polynomial time on a DTM (Deterministic Turing Machine), then one must admit that there is a counterexample that invalidates the correctness of the algorithm. The reason is the following: if we suppose that the algorithm can prove that an elenkhos formula (a formula that lists the negated codes of all models) is a contradiction, and we then flip exactly one specific Boolean variable of that formula, we have shown that the algorithm will always fail on the resulting instance.

Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by the insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In the absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs), which augment sets or graphs with additional inferred edges for improved model expressivity. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
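A heavily simplified sketch of the pointing-plus-message-passing idea (illustrative names and shapes; the actual PGN architecture uses learned attention heads, masking, and the pointer supervision described above, none of which is shown here):

```python
import numpy as np

def pgn_step(node_feats, W_query, W_key, W_msg):
    """One simplified pointer-then-message-passing step: every node selects one
    other node to point to (argmax over pairwise scores), then messages are
    aggregated over the inferred, sparse pointer edges."""
    q = node_feats @ W_query              # (n, h) queries
    k = node_feats @ W_key                # (n, h) keys
    scores = q @ k.T                      # (n, n) pointer logits
    np.fill_diagonal(scores, -np.inf)     # a node does not point to itself
    pointers = scores.argmax(axis=1)      # node i points to node pointers[i]

    messages = np.zeros_like(node_feats @ W_msg)
    for i, j in enumerate(pointers):      # symmetrized message passing over pointers
        messages[i] += node_feats[j] @ W_msg
        messages[j] += node_feats[i] @ W_msg
    return np.maximum(node_feats @ W_msg + messages, 0.0)   # ReLU node update
```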

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences, and various combinatorial optimization problems, belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention-based approaches in that, instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
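The pointer mechanism itself can be sketched in a few lines (a minimal numpy illustration with hypothetical parameter names, using the additive-attention form $u_j = v^\top \tanh(W_1 e_j + W_2 d)$, whose softmax is read directly as the output distribution over input positions):

```python
import numpy as np

def pointer_attention(encoder_states, decoder_state, W1, W2, v):
    """Ptr-Net-style attention: the normalized scores over input positions are
    used directly as the output distribution, i.e. as a pointer.
    encoder_states: (n, d) hidden states e_1..e_n; decoder_state: (d,)."""
    scores = np.tanh(encoder_states @ W1.T + decoder_state @ W2.T) @ v  # (n,)
    scores -= scores.max()                         # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over input positions
    return probs

# Toy usage with random parameters: the argmax index is the "pointed-to" input token.
rng = np.random.default_rng(0)
n, d, h = 5, 8, 16
enc = rng.normal(size=(n, d))
dec = rng.normal(size=d)
W1, W2, v = rng.normal(size=(h, d)), rng.normal(size=(h, d)), rng.normal(size=h)
print(pointer_attention(enc, dec, W1, W2, v).round(3))
```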
