
Given a set P of n points in the plane, the unit-disk graph G_{r}(P) with respect to a parameter r is an undirected graph whose vertex set is P such that an edge connects two points p, q \in P if the Euclidean distance between p and q is at most r (the weight of the edge is 1 in the unweighted case and is the distance between p and q in the weighted case). Given a value \lambda>0 and two points s and t of P, we consider the following reverse shortest path problem: computing the smallest r such that the shortest path length between s and t in G_r(P) is at most \lambda. In this paper, we present two algorithms for the unweighted case, running in O(\lfloor \lambda \rfloor \cdot n \log n) time and O(n^{5/4} \log^{7/4} n) time, respectively, as well as an O(n^{5/4} \log^{5/2} n)-time algorithm for the weighted case. We also consider the L_1 version of the problem, where the distance between two points is measured by the L_1 metric; we solve it in O(n \log^3 n) time for both the unweighted and weighted cases.
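
As a point of reference, the decision version of the problem is simple: for a fixed r, a BFS in G_r(P) tests whether the s-t hop distance is at most \lambda, and since the optimal r must equal one of the O(n^2) pairwise distances, a binary search over them solves the problem. The Python sketch below (with illustrative names) implements this naive O(n^2 \log n)-time baseline, which is exactly what the paper's algorithms improve upon.

```python
import math
from collections import deque

def hop_distance(points, r, s, t):
    """BFS hop distance from s to t in the unit-disk graph G_r(P).
    points: list of (x, y); s, t: indices. Edges are built naively
    in O(n^2) time; returns math.inf if t is unreachable."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    dist = [math.inf] * n
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[t]

def reverse_shortest_path(points, s, t, lam):
    """Smallest r with s-t hop distance <= lam, by binary search over
    the pairwise distances (the optimal r is always one of them).
    Returns None if even the largest candidate radius fails."""
    n = len(points)
    candidates = sorted({math.dist(points[i], points[j])
                         for i in range(n) for j in range(i + 1, n)})
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if hop_distance(points, candidates[mid], s, t) <= lam:
            best = candidates[mid]
            hi = mid - 1
        else:
            lo = mid + 1
    return best
```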

Related Content

Center-based clustering is a pivotal primitive for unsupervised learning and data analysis. A popular variant is undoubtedly the k-means problem, which, given a set $P$ of points from a metric space and a parameter $k<|P|$, requires determining a subset $S$ of $k$ centers minimizing the sum of all squared distances of points in $P$ from their closest center. A more general formulation, known as k-means with $z$ outliers, introduced to deal with noisy datasets, features a further parameter $z$ and allows up to $z$ points of $P$ (outliers) to be disregarded when computing the aforementioned sum. We present a distributed coreset-based 3-round approximation algorithm for k-means with $z$ outliers for general metric spaces, using MapReduce as a computational model. Our distributed algorithm requires sublinear local memory per reducer, and yields a solution whose approximation ratio is an additive term $O(\gamma)$ away from the one achievable by the best known sequential (possibly bicriteria) algorithm, where $\gamma$ can be made arbitrarily small. An important feature of our algorithm is that it obliviously adapts to the intrinsic complexity of the dataset, captured by the doubling dimension $D$ of the metric space. To the best of our knowledge, no previous distributed approaches were able to attain similar quality-performance tradeoffs for general metrics.
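
For concreteness, the objective being approximated fits in a few lines. The sketch below (Python, assuming Euclidean points for simplicity, whereas the paper works in general metrics) evaluates the k-means-with-$z$-outliers cost of a candidate center set; it is not the distributed algorithm itself.

```python
import math

def kmeans_with_outliers_cost(points, centers, z):
    """Sum of squared distances from points to their closest centers,
    ignoring the z points with the largest such distances (outliers)."""
    sq = [min(math.dist(p, c) ** 2 for c in centers) for p in points]
    sq.sort()
    return sum(sq[:max(0, len(sq) - z)])
```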

We introduce the problem of finding a set $B$ of $k$ points in $[0,1]^n$ such that the expected cost of the cheapest point in $B$ that dominates a random point from $[0,1]^n$ is minimized. We study the case where the coordinates of the random points are independently distributed and the cost function is linear. This problem arises naturally in various application areas where customers' requests are satisfied based on predefined products, each corresponding to a subset of features. We show that the problem is NP-hard already for $k=2$ when each coordinate is drawn from $\{0,1\}$, and obtain an FPTAS for general fixed $k$ under mild assumptions on the distributions.
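
A minimal sketch of the objective, under stated assumptions: dominance is taken coordinatewise ($b$ dominates $x$ when $b_i \ge x_i$ for all $i$), the cost of a point is a linear function $w \cdot b$, and coordinates are sampled uniformly, which is one instance of the independent distributions the paper allows. The estimate is Monte Carlo; the function name and the treatment of undominated samples are illustrative only.

```python
import random

def expected_cheapest_dominator(B, w, samples=100_000, rng=random):
    """Monte Carlo estimate of E[min cost of b in B dominating x],
    with x uniform on [0,1]^n. Dominance is coordinatewise b >= x;
    the linear cost of b is sum(w_i * b_i). Samples with no dominator
    count as infinite cost (including the all-ones point in B avoids
    this)."""
    n = len(w)
    cost = {tuple(b): sum(wi * bi for wi, bi in zip(w, b)) for b in B}
    total = 0.0
    for _ in range(samples):
        x = [rng.random() for _ in range(n)]
        best = min((cost[tuple(b)] for b in B
                    if all(bi >= xi for bi, xi in zip(b, x))),
                   default=float("inf"))
        total += best
    return total / samples
```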

In this paper we study the problem of coloring a unit interval graph which changes dynamically. In our model the unit intervals are added or removed one at a time, and have to be colored immediately, so that no two overlapping intervals share the same color. After each update only a limited number of intervals may be recolored; the limit on the number of recolorings per update is called the recourse budget. In this paper we show that if the graph remains $k$-colorable at all times and the updates consist of insertions only, then we can achieve an amortized recourse budget of $O(k^7 \log n)$ while maintaining a proper coloring with $k$ colors. This is an exponential improvement over the result in [Bosek et al., Recoloring Interval Graphs with Limited Recourse Budget. SWAT 2020] in terms of both $k$ and $n$. We complement this result by showing a lower bound of $\Omega(n)$ on the amortized recourse budget in the fully dynamic setting. Our incremental algorithm can be implemented efficiently. As a byproduct of independent interest we include a new result on coloring proper circular arc graphs. Let $L$ be the maximum number of arcs intersecting in one point for some set of unit circular arcs $\mathcal{A}$. We show that if there is a set $\mathcal{A}'$ of non-intersecting unit arcs of size $L^2-1$ such that $\mathcal{A} \cup \mathcal{A}'$ does not contain $L+1$ arcs intersecting in one point, then it is possible to color $\mathcal{A}$ with $L$ colors. This complements the work on unit circular arc coloring, which specifies sufficient conditions for coloring $\mathcal{A}$ with $L+1$ colors or more.
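
For contrast with the bounded-recourse algorithm, here is the zero-recourse baseline: first-fit greedy coloring on insertion, which never recolors but can be forced to use more than $k$ colors even when the graph stays $k$-colorable. This is a hedged sketch of the baseline, not the paper's algorithm.

```python
def first_fit_insert(colored, new_interval):
    """Assign the smallest color unused by intervals overlapping the
    new unit interval [a, b) with b = a + 1. `colored` maps each
    interval (a, b) to its color. Zero recolorings: illustrates why
    some recourse budget is needed to stay within k colors."""
    a, b = new_interval
    used = {c for (x, y), c in colored.items() if x < b and a < y}
    color = 0
    while color in used:
        color += 1
    colored[new_interval] = color
    return color
```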

A cut sparsifier is a reweighted subgraph that maintains the weights of the cuts of the original graph up to a multiplicative factor of $(1\pm\epsilon)$. This paper considers computing cut sparsifiers of weighted graphs of size $O(n\log (n)/\epsilon^2)$. Our algorithm computes such a sparsifier in time $O(m\cdot\min(\alpha(n)\log(m/n),\log (n)))$, both for graphs with polynomially bounded and unbounded integer weights, where $\alpha(\cdot)$ is the functional inverse of Ackermann's function. This improves upon the state of the art by Bencz\'ur and Karger (SICOMP 2015), which takes $O(m\log^2 (n))$ time. For unbounded weights, this directly gives the best known result for cut sparsification. Together with preprocessing by an algorithm of Fung et al. (SICOMP 2019), this also gives the best known result for polynomially-weighted graphs. Consequently, this implies the fastest approximate min-cut algorithm, both for graphs with polynomial and unbounded weights. In particular, we show that it is possible to adapt the state-of-the-art algorithm of Fung et al. for unweighted graphs to weighted graphs, by letting the partial maximum spanning forest (MSF) packing take the place of the Nagamochi-Ibaraki (NI) forest packing. MSF packings have previously been used by Abraham et al. (FOCS 2016) in the dynamic setting, and are defined as follows: an $M$-partial MSF packing of $G$ is a set $\mathcal{F}=\{F_1, \dots, F_M\}$, where $F_i$ is a maximum spanning forest in $G\setminus \bigcup_{j=1}^{i-1}F_j$. Our method for computing (a sufficient estimation of) the MSF packing is the bottleneck in the running time of our sparsification algorithm.
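
The $M$-partial MSF packing defined above can be computed naively by running Kruskal's algorithm $M$ times, removing each forest before the next run. The sketch below (Python, union-find with path halving) does exactly that; the paper's contribution is a fast sufficient estimation of this packing, which this naive version does not attempt.

```python
class DSU:
    """Union-find with path halving, enough for Kruskal's algorithm."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def partial_msf_packing(n, edges, M):
    """M-partial MSF packing: F_i is a maximum spanning forest of the
    graph with F_1..F_{i-1} removed. `edges` is a list of (w, u, v)
    tuples. Naive O(M * m * alpha(n)) construction after sorting."""
    remaining = sorted(edges, reverse=True)   # heaviest edges first
    packing = []
    for _ in range(M):
        dsu, forest, rest = DSU(n), [], []
        for e in remaining:
            _, u, v = e
            (forest if dsu.union(u, v) else rest).append(e)
        packing.append(forest)
        remaining = rest                      # stays sorted
    return packing
```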

We consider the classical Minimum Crossing Number problem: given an $n$-vertex graph $G$, compute a drawing of $G$ in the plane, while minimizing the number of crossings between the images of its edges. This is a fundamental and extensively studied problem, whose approximability status is widely open. In all currently known approximation algorithms, the approximation factor depends polynomially on $\Delta$ -- the maximum vertex degree in $G$. The best current approximation algorithm achieves an $O(n^{1/2-\varepsilon}\cdot \text{poly}(\Delta\cdot\log n))$-approximation, for a small fixed constant $\varepsilon$, while the best negative result is APX-hardness, leaving a large gap in our understanding of this basic problem. In this paper we design a randomized $O\left(2^{O((\log n)^{7/8}\log\log n)}\cdot\text{poly}(\Delta)\right )$-approximation algorithm for Minimum Crossing Number. This is the first approximation algorithm for the problem that achieves a subpolynomial in $n$ approximation factor (albeit only in graphs whose maximum vertex degree is subpolynomial in $n$). In order to achieve this approximation factor, we design a new algorithm for a closely related problem called Crossing Number with Rotation System, in which, for every vertex $v\in V(G)$, the circular ordering in which the images of the edges incident to $v$ must enter the image of $v$ in the drawing is fixed as part of the input. Combining this result with the recent reduction of [Chuzhoy, Mahabadi, Tan '20] immediately yields the improved approximation algorithm for Minimum Crossing Number. We introduce several new technical tools that we hope will be helpful in obtaining better algorithms for the problem in the future.
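
To pin down the objective, the sketch below counts the crossings of a given straight-line drawing by brute-force segment-pair tests, counting only proper crossings between edges with no shared endpoint (a general-position assumption). It illustrates the quantity being minimized, not the approximation algorithm.

```python
def crossings(positions, edges):
    """Count pairwise crossings in a straight-line drawing.
    positions: vertex -> (x, y); edges: list of (u, v) pairs."""
    def orient(a, b, c):
        # Twice the signed area of triangle (a, b, c).
        return ((b[0] - a[0]) * (c[1] - a[1])
                - (b[1] - a[1]) * (c[0] - a[0]))
    def proper_cross(p1, p2, p3, p4):
        # Strict inequalities: segments cross in their interiors.
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        return d1 * d2 < 0 and d3 * d4 < 0
    count = 0
    for i, (u, v) in enumerate(edges):
        for (x, y) in edges[i + 1:]:
            if {u, v} & {x, y}:
                continue  # edges sharing an endpoint never cross properly
            if proper_cross(positions[u], positions[v],
                            positions[x], positions[y]):
                count += 1
    return count
```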

A triangle in a hypergraph $\mathcal{H}$ is a set of three distinct edges $e, f, g\in\mathcal{H}$ and three distinct vertices $u, v, w\in V(\mathcal{H})$ such that $\{u, v\}\subseteq e$, $\{v, w\}\subseteq f$, $\{w, u\}\subseteq g$ and $\{u, v, w\}\cap e\cap f\cap g=\emptyset$. Johansson proved in 1996 that $\chi(G)=\mathcal{O}(\Delta/\log\Delta)$ for any triangle-free graph $G$ with maximum degree $\Delta$. Cooper and Mubayi later generalized Johansson's theorem to all rank $3$ hypergraphs. In this paper we provide a common generalization of both these results to all hypergraphs, showing that if $\mathcal{H}$ is a rank-$k$, triangle-free hypergraph, then the list chromatic number \[ \chi_{\ell}(\mathcal{H})\leq \mathcal{O}\left(\max_{2\leq \ell \leq k} \left\{\left( \frac{\Delta_{\ell}}{\log \Delta_{\ell}} \right)^{\frac{1}{\ell-1}} \right\}\right), \] where $\Delta_{\ell}$ is the maximum $\ell$-degree of $\mathcal{H}$. The result is sharp apart from the constant. Moreover, our result implies, generalizes, and improves several earlier results on the chromatic number and also the independence number of hypergraphs, while its proof is based on a different approach from prior work on hypergraphs (and therefore provides alternative proofs of those results). In particular, as an application, we establish a bound on the chromatic number of sparse hypergraphs in which each vertex is contained in few triangles, and thus extend results of Alon, Krivelevich and Sudakov, and of Cooper and Mubayi, from hypergraphs of rank 2 and 3, respectively, to all hypergraphs.
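
The triangle condition above is directly checkable. The sketch below implements it verbatim, together with a brute-force (exponential-time, purely illustrative) triangle-freeness test.

```python
from itertools import permutations

def is_triangle(e, f, g, u, v, w):
    """Check the triangle condition for edges e, f, g (sets of
    vertices) and vertices u, v, w, exactly as defined above."""
    return (len({u, v, w}) == 3
            and e != f and f != g and e != g
            and {u, v} <= e and {v, w} <= f and {w, u} <= g
            and not ({u, v, w} & e & f & g))

def triangle_free(H):
    """Brute-force triangle-freeness of hypergraph H (a list of
    frozensets); far slower than necessary, for illustration only."""
    for e, f, g in permutations(H, 3):
        for u, v, w in permutations(e | f | g, 3):
            if is_triangle(e, f, g, u, v, w):
                return False
    return True
```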

Given a directed graph $G$ and integers $k$ and $l$, a D-core is the maximal subgraph $H \subseteq G$ such that every vertex of $H$ has in-degree at least $k$ and out-degree at least $l$ in $H$. For a directed graph $G$, the problem of D-core decomposition aims to compute the non-empty D-cores for all possible values of $k$ and $l$. In the literature, several \emph{peeling-based} algorithms have been proposed for D-core decomposition. However, peeling-based algorithms work sequentially and require global graph information during processing; they are designed mainly for \emph{centralized} settings and cannot handle large-scale graphs efficiently in distributed settings. Motivated by this, we study the \emph{distributed} D-core decomposition problem in this paper. We start by defining a concept called \emph{anchored coreness}, based on which we propose a new H-index-based algorithm for distributed D-core decomposition. Furthermore, we devise a novel concept, namely \emph{skyline coreness}, and show that the D-core decomposition problem is equivalent to computing the skyline corenesses of all vertices. We design an efficient D-index to compute the skyline corenesses in a distributed manner. We implement the proposed algorithms under both vertex-centric and block-centric distributed graph processing frameworks. Moreover, we theoretically analyze the computational and message complexities of the algorithms. Extensive experiments on large real-world graphs with billions of edges demonstrate the efficiency of the proposed algorithms in terms of both running time and communication overhead.
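
The sequential peeling baseline referenced above takes a few lines: repeatedly delete vertices whose in-degree drops below $k$ or whose out-degree drops below $l$. This sketch (Python, adjacency dicts) computes a single $(k, l)$-D-core; the paper's point is precisely that this global, sequential process does not distribute well.

```python
def d_core(adj_out, adj_in, k, l):
    """Peel to the (k, l)-D-core: delete vertices with in-degree < k
    or out-degree < l until none remain, and return the surviving
    vertex set (the unique maximal D-core). adj_out / adj_in map each
    vertex to its out- / in-neighbors."""
    alive = set(adj_out)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            indeg = sum(1 for u in adj_in[v] if u in alive)
            outdeg = sum(1 for u in adj_out[v] if u in alive)
            if indeg < k or outdeg < l:
                alive.discard(v)
                changed = True
    return alive
```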

We study the performance of Markov chains for the $q$-state ferromagnetic Potts model on random regular graphs. It is conjectured that their performance is dictated by metastability phenomena, i.e., the presence of "phases" (clusters) in the sample space where Markov chains with local update rules, such as the Glauber dynamics, are bound to take exponential time to escape, and therefore cause slow mixing. The phases that are believed to drive these metastability phenomena in the case of the Potts model emerge as local, rather than global, maxima of the so-called Bethe functional, and previous approaches to analysing these phases, which are based on optimisation arguments, fall short of the task. Our first contribution is to detail the emergence of the metastable phases for the $q$-state Potts model on the $d$-regular random graph for all integers $q,d\geq 3$, and establish that for an interval of temperatures, delineated by the uniqueness and the Kesten-Stigum thresholds on the $d$-regular tree, the two phases coexist. The proofs are based on a conceptual connection between spatial properties and the structure of the Potts distribution on the random regular graph, rather than complicated moment calculations. Based on this new structural understanding of the model, we obtain various algorithmic consequences. We first complement recent fast mixing results for Glauber dynamics by Blanca and Gheissari below the uniqueness threshold, showing an exponential lower bound on the mixing time above the uniqueness threshold. Then, we obtain tight results even for the non-local Swendsen-Wang chain, where we establish slow mixing/metastability for the whole interval of temperatures where the chain is conjectured to mix slowly on the random regular graph. The key is to bound the conductance of the chains using a random graph "planting" argument combined with delicate bounds on random-graph percolation.
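
For readers who want the local update rule in code: one step of Glauber dynamics for the ferromagnetic $q$-state Potts model resamples a uniformly random vertex's color with probability proportional to exp(beta * #neighbors of that color). The sketch implements only this rule; the abstract's results concern how fast (or slowly) the induced chain mixes on random regular graphs.

```python
import math
import random

def glauber_step(sigma, adj, q, beta, rng=random):
    """One Glauber update for the ferromagnetic q-state Potts model.
    sigma: list of colors in range(q), mutated in place; adj: list of
    neighbor lists; beta > 0 is the inverse temperature. The new color
    c is drawn with probability proportional to exp(beta * m_c), where
    m_c is the number of neighbors currently colored c."""
    v = rng.randrange(len(sigma))
    counts = [0] * q
    for u in adj[v]:
        counts[sigma[u]] += 1
    weights = [math.exp(beta * m) for m in counts]
    sigma[v] = rng.choices(range(q), weights=weights)[0]
```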

In this paper, we introduce the \emph{interval query problem} on cube-free median graphs. Let $G$ be a cube-free median graph and $\mathcal{S}$ be a commutative semigroup. For each vertex $v$ in $G$, we are given an element $p(v)$ in $\mathcal{S}$. For each query, we are given two vertices $u,v$ in $G$ and asked to calculate the sum of $p(z)$ over all vertices $z$ belonging to some $u$-$v$ shortest path. This is a common generalization of range query problems on trees and grids. In this paper, we provide an algorithm to answer each interval query in $O(\log^2 n)$ time. The required data structure is constructed in $O(n\log^3 n)$ time and $O(n\log^2 n)$ space. To obtain our algorithm, we introduce a new technique, named the \emph{stairs decomposition}, to decompose an interval of a cube-free median graph into simpler substructures.
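
A naive baseline clarifies the query semantics: $z$ lies in the interval $I(u,v)$ exactly when $d(u,z)+d(z,v)=d(u,v)$, so two BFS runs answer a query in $O(n+m)$ time for any commutative `combine` operation. The paper's stairs decomposition replaces this with $O(\log^2 n)$-time queries after preprocessing.

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s; adj maps each vertex to its neighbors."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def interval_query_naive(adj, p, u, v, combine):
    """Combine p(z) over I(u, v) = {z : d(u,z) + d(z,v) = d(u,v)},
    i.e., the vertices on some shortest u-v path. `combine` is the
    semigroup operation (e.g. operator.add); the graph is assumed
    connected, so the interval always contains u and v."""
    du, dv = bfs_dist(adj, u), bfs_dist(adj, v)
    values = [p[z] for z in adj if du[z] + dv[z] == du[v]]
    result = values[0]
    for val in values[1:]:
        result = combine(result, val)
    return result
```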

Consider the joint beamforming and quantization problem in the cooperative cellular network, where multiple relay-like base stations (BSs) connected to the central processor (CP) via rate-limited fronthaul links cooperatively serve the users. This problem can be formulated as the minimization of the total transmit power, subject to all users' signal-to-interference-plus-noise-ratio (SINR) constraints and all relay-like BSs' fronthaul rate constraints. In this paper, we first show that there is no duality gap between the considered problem and its Lagrangian dual by showing the tightness of the semidefinite relaxation (SDR) of the considered problem. Then we propose an efficient algorithm based on Lagrangian duality for solving the considered problem. The proposed algorithm judiciously exploits the special structure of the Karush-Kuhn-Tucker (KKT) conditions of the considered problem and finds the solution that satisfies the KKT conditions via two fixed-point iterations. The proposed algorithm is highly efficient (as evaluating the functions in both fixed-point iterations is computationally cheap) and is guaranteed to find the global solution of the problem. Simulation results show the efficiency and the correctness of the proposed algorithm.
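
As a sanity-check sketch under an assumed standard downlink model (the abstract does not spell out the signal model): SINR_k = |h_k^H w_k|^2 / (sum_{j != k} |h_k^H w_j|^2 + sigma^2), and the objective is the total transmit power sum_k ||w_k||^2. The NumPy code below merely evaluates these quantities for candidate beamformers; it is not the paper's fixed-point algorithm.

```python
import numpy as np

def sinrs_and_power(H, W, sigma2):
    """Per-user SINRs and total transmit power under a standard
    downlink beamforming model (an assumption, not taken from the
    paper). H[:, k] is user k's channel h_k (M antennas x K users);
    W[:, k] is the beamformer w_k; sigma2 is the noise power."""
    G = np.abs(H.conj().T @ W) ** 2      # G[k, j] = |h_k^H w_j|^2
    signal = np.diag(G)                  # desired-signal powers
    interference = G.sum(axis=1) - signal
    sinr = signal / (interference + sigma2)
    power = np.sum(np.abs(W) ** 2)       # sum_k ||w_k||^2
    return sinr, power
```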
