A spanner of a graph is a subgraph that preserves lengths of shortest paths up to a multiplicative distortion. For every $k$, a spanner with size $O(n^{1+1/k})$ and stretch $(2k+1)$ can be constructed by a simple centralized greedy algorithm, and this is tight assuming the Erd\H{o}s girth conjecture. In this paper we study the problem of constructing spanners in a local manner, specifically in the Local Computation Model proposed by Rubinfeld et al. (ICS 2011). We provide a randomized Local Computation Algorithm (LCA) for constructing $(2r-1)$-spanners with $\tilde{O}(n^{1+1/r})$ edges and probe complexity $\tilde{O}(n^{1-1/r})$ for $r \in \{2,3\}$, where $n$ denotes the number of vertices in the input graph. In both cases, the stretch factor is optimal (for the respective number of edges) up to polylogarithmic factors. In addition, our probe complexity for $r=2$, i.e., for constructing a $3$-spanner, is optimal up to polylogarithmic factors. Our result improves over the probe complexity of Parter et al. (ITCS 2019), which is $\tilde{O}(n^{1-1/2r})$ for $r \in \{2,3\}$. Both our LCAs and those of Parter et al. use a combination of neighbor-probes and pair-probes. For general $k\geq 1$, we provide an LCA for constructing $O(k^2)$-spanners with $\tilde{O}(n^{1+1/k})$ edges using $O(n^{2/3}\Delta^2)$ neighbor-probes, improving over the $\tilde{O}(n^{2/3}\Delta^4)$ algorithm of Parter et al. By developing a new randomized LCA for graph decomposition, we further improve the probe complexity of the latter task to $O(n^{2/3-(1.5-\alpha)/k}\Delta^2)$, for any constant $\alpha>0$. This latter LCA may be of independent interest.
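
The centralized greedy algorithm referenced above is short enough to state in full. Below is a minimal Python sketch for unweighted graphs (the function names and the adjacency-list input format are our own): an edge is kept only if its endpoints are currently at distance greater than $2k+1$ in the partial spanner, which immediately gives stretch $2k+1$; every cycle in the output then has length at least $2k+3$, and graphs of girth above $2k+1$ have $O(n^{1+1/k})$ edges.

```python
from collections import deque

def greedy_spanner(n, edges, k):
    """Greedy (2k+1)-spanner of an unweighted graph.

    Keep an edge (u, v) only if u and v are currently at distance
    greater than 2k+1 in the partial spanner H; otherwise the
    existing path already certifies stretch <= 2k+1 for that edge.
    """
    adj = [[] for _ in range(n)]   # adjacency lists of H
    spanner = []
    for u, v in edges:
        if bounded_dist(adj, u, v, 2 * k + 1) > 2 * k + 1:
            adj[u].append(v)
            adj[v].append(u)
            spanner.append((u, v))
    return spanner

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t in H, truncated at `limit`."""
    if s == t:
        return 0
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if dist[x] == limit:        # do not expand beyond the limit
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                if y == t:
                    return dist[y]
                q.append(y)
    return limit + 1                # i.e., farther than limit
```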

Related content

In the $t$-online-erasure model in property testing, an adversary is allowed to erase $t$ values of a queried function for each query the tester makes. This model was recently formulated by Kalemaj, Raskhodnikova and Varma, who showed that linearity of functions as well as quadraticity can be tested with $O_t(1)$ queries: $O(\log t)$ for linearity and $2^{2^{O(t)}}$ for quadraticity. They asked whether the more general property of low-degreeness can be tested in the online-erasure model, whether better testers exist for quadraticity, and whether similar results hold when ``erasures'' are replaced with ``corruptions''. We show that, in the $t$-online-erasure model, for a prime power $q$, given query access to a function $f: \mathbb{F}_q^n \to \mathbb{F}_q$, one can distinguish in $\mathrm{poly}(\log^{d+q}(t)/\delta)$ queries between the case that $f$ has degree at most $d$ and the case that $f$ is $\delta$-far from any degree-$d$ function (with respect to the fractional Hamming distance). This answers the aforementioned questions and brings the query complexity to nearly match that of low-degree testing in the classical property testing model. Our results are based on the observation that low-degreeness admits a large and versatile family of query-efficient testers. Our testers operate by querying a uniformly random, sufficiently large set of points in a large enough affine subspace, and finding a tester for low-degreeness that only utilizes queries from that set of points. We believe that this tester may find other applications to algorithms in the online-erasure model or other related models, and may be of independent interest.
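
For intuition on the classical baseline the query complexity is compared against, here is a sketch of the standard derivative-based low-degree test over $\mathbb{F}_2$ (the special case $q=2$; the function name and trial count are ours): $f$ has degree at most $d$ iff every $(d+1)$-st finite derivative vanishes, i.e., $\sum_{S \subseteq [d+1]} f(x + \sum_{i \in S} y_i) = 0$ for all $x, y_1, \dots, y_{d+1}$. The paper's contribution is making such tests survive online erasures, which this offline sketch does not address.

```python
import random

def low_degree_test_f2(f, n, d, trials=100):
    """Offline degree-<=d test over F_2 (one-sided error):
    accept iff no sampled (d+1)-st derivative of f is nonzero."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        ys = [[random.randint(0, 1) for _ in range(n)] for _ in range(d + 1)]
        total = 0
        for mask in range(1 << (d + 1)):        # all subsets S of [d+1]
            point = x[:]
            for i in range(d + 1):
                if mask >> i & 1:
                    point = [(a + b) % 2 for a, b in zip(point, ys[i])]
            total ^= f(tuple(point))
        if total != 0:
            return False      # witnessed a violation: degree > d
    return True               # consistent with degree <= d

# Example: f(x) = x0*x1 has degree 2, so it is accepted for d=2
# and rejected (with high probability) for d=1.
f = lambda x: (x[0] * x[1]) % 2
print(low_degree_test_f2(f, n=5, d=2), low_degree_test_f2(f, n=5, d=1))
```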

Given a straight-line drawing of a graph, a {\em segment} is a maximal set of edges that form a line segment. Given a planar graph $G$, the {\em segment number} of $G$ is the minimum number of segments that can be achieved by any planar straight-line drawing of $G$. The {\em line cover number} of $G$ is the minimum number of lines that support all the edges of a planar straight-line drawing of $G$. Computing the segment number or the line cover number of a planar graph is $\exists\mathbb{R}$-complete and, thus, NP-hard. We study the problem of computing the segment number from the perspective of parameterized complexity. We show that this problem is fixed-parameter tractable with respect to each of the following parameters: the vertex cover number, the segment number, and the line cover number. We also consider colored versions of the segment and the line cover number.
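
While minimizing the number of segments is $\exists\mathbb{R}$-complete, counting the segments of a given drawing is easy and illustrates the definition. A minimal sketch (our own; integer coordinates assumed): group edges by their supporting line, then count connected components of edges within each line, since in a planar drawing collinear edges can only touch at shared vertices.

```python
from math import gcd
from collections import defaultdict

def count_segments(pos, edges):
    """Count segments of a planar straight-line drawing.

    pos:   vertex -> (x, y), integer coordinates
    edges: iterable of vertex pairs
    A segment is a maximal set of edges forming one line segment.
    """
    by_line = defaultdict(list)
    for u, v in edges:
        (x1, y1), (x2, y2) = pos[u], pos[v]
        a, b = y2 - y1, x1 - x2              # normal (a, b) to the edge
        g = gcd(abs(a), abs(b))
        a, b = a // g, b // g
        if a < 0 or (a == 0 and b < 0):      # canonical sign
            a, b = -a, -b
        c = a * x1 + b * y1                  # line: a*x + b*y = c
        by_line[(a, b, c)].append((u, v))

    segments = 0
    for group in by_line.values():           # union-find per line
        parent = {}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        comps = 0
        for u, v in group:
            for w in (u, v):
                if w not in parent:
                    parent[w] = w
                    comps += 1
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
        segments += comps                     # one segment per component
    return segments
```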

We study local filters for the Lipschitz property of real-valued functions $f: V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G=(V,E)$. We give nearly optimal local Lipschitz filters both with respect to $\ell_1$ distance and $\ell_0$ distance. Previous work only considered unbounded-range functions over $[n]^d$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al.\ (ACM Trans. Comput. Theory) showed was necessary in this setting. By considering the natural class of functions whose range is bounded in $[0,r]$, we circumvent this lower bound and achieve running time $(d^r\log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\,\mathrm{polylog}\, n$ for the $\ell_0$-respecting filter for functions over $[n]^d$. Furthermore, we show that our algorithms are nearly optimal in terms of the dependence on $r$ for the domain $\{0,1\}^d$, an important special case of the domain $[n]^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing. Finally, we provide two applications of our local filters. First, they can be used in conjunction with the Laplace mechanism for differential privacy to provide filter mechanisms for privately releasing outputs of black box functions even in the presence of malicious clients. Second, we use them to obtain the first tolerant testers for the Lipschitz property.
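
As a sketch of the first application (with `lipschitz_filter` a hypothetical stand-in for the paper's $\ell_1$-respecting filter, and a simplified adjacency notion): since the filter's output always comes from a Lipschitz function with range $[0,r]$, the released value moves by a bounded amount between neighboring inputs even when a malicious client submits a non-Lipschitz $f$, so standard Laplace noise restores $\varepsilon$-differential privacy.

```python
import numpy as np

def private_release(f, x, r, epsilon, lipschitz_filter):
    """eps-DP release of a black-box f at input x (illustrative).

    `lipschitz_filter` is a hypothetical stand-in for a local
    Lipschitz filter: lipschitz_filter(f, x) returns g(x) for some
    Lipschitz g with range [0, r] that agrees with f whenever f is
    itself Lipschitz.  Neighboring inputs (adjacent in G) then
    change g(x) by at most 1, so Laplace noise of scale 1/epsilon
    suffices for eps-DP; a malicious client cannot break the
    guarantee by submitting a non-Lipschitz f.
    """
    g_x = lipschitz_filter(f, x)                     # value in [0, r]
    return g_x + np.random.laplace(scale=1.0 / epsilon)
```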

Generating samples for a specific label requires estimating conditional distributions. We derive a tractable upper bound on the Wasserstein distance between conditional distributions, laying the theoretical groundwork for learning conditional distributions. Based on this result, we propose a novel conditional generation algorithm in which conditional distributions are fully characterized by a metric space defined by a statistical distance. We employ optimal transport theory to propose the Wasserstein geodesic generator, a new conditional generator that learns the Wasserstein geodesic. The proposed method learns both the conditional distributions of observed domains and the optimal transport maps between them. The conditional distributions given unobserved intermediate domains lie on the Wasserstein geodesic between the conditional distributions given two observed domain labels. Experiments on face images with lighting conditions as domain labels demonstrate the efficacy of the proposed method.
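
For reference, the geodesic the generator learns is the standard displacement interpolation from optimal transport (a known fact, not specific to this paper): if $T$ is the optimal transport map from $\mu_0$ to $\mu_1$ under quadratic cost, then
$$\mu_t = \bigl((1-t)\,\mathrm{id} + t\,T\bigr)_{\#}\,\mu_0, \qquad t\in[0,1],$$
is the Wasserstein geodesic between them, so learning the endpoint conditional distributions together with $T$ determines every intermediate distribution along the path; this is how conditional distributions for unobserved intermediate domains are obtained above.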

Suppose we are given an $n$-node, $m$-edge input graph $G$, and the goal is to compute a spanning subgraph $H$ on $O(n)$ edges. This can be achieved in linear $O(m + n)$ time via breadth-first search. But can we hope for \emph{sublinear} runtime in some range of parameters? If the goal is to return $H$ as an adjacency list, there are simple lower bounds showing that $\Omega(m + n)$ runtime is necessary. If the goal is to return $H$ as an adjacency matrix, then we need $\Omega(n^2)$ time just to write down the entries of the output matrix. However, we show that neither of these lower bounds still apply if instead the goal is to return $H$ as an \emph{implicit} adjacency matrix, which we call an \emph{adjacency oracle}. An adjacency oracle is a data structure that gives a user the illusion that an adjacency matrix has been computed: it accepts edge queries $(u, v)$, and it returns in near-constant time a bit indicating whether $(u, v) \in E(H)$. Our main result is that one can construct an adjacency oracle for a spanning subgraph on at most $(1+\varepsilon)n$ edges, in $\tilde{O}(n \varepsilon^{-1})$ time, and that this construction time is near-optimal. Additional results include constructions of adjacency oracles for $k$-connectivity certificates and spanners, which are similarly sublinear on dense-enough input graphs. Our adjacency oracles are closely related to Local Computation Algorithms (LCAs) for graph sparsifiers; they can be viewed as LCAs with some computation moved to a preprocessing step, in order to speed up queries. Our oracles imply the first Local algorithm for computing sparse spanning subgraphs of general input graphs in $\tilde{O}(n)$ query time, which works by constructing our adjacency oracle, querying it once, and then throwing the rest of the oracle away. This addresses an open problem of Rubinfeld [CSR '17].
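
To make the interface concrete, here is a deliberately naive adjacency-oracle sketch (our own): preprocessing computes a BFS spanning forest in $O(m+n)$ time, and `query(u, v)` answers membership in near-constant time via hashing. The paper's point is that the same interface can be served after only $\tilde{O}(n\varepsilon^{-1})$ preprocessing, which this linear-time sketch does not attempt.

```python
from collections import deque

class AdjacencyOracle:
    """Illustrative adjacency oracle for a spanning forest H of G.

    Naive version: preprocessing runs BFS in O(m + n) time; queries
    are O(1) expected time and report membership in E(H).
    """

    def __init__(self, n, adj):
        self._edges = set()
        visited = [False] * n
        for s in range(n):                 # BFS forest over all components
            if visited[s]:
                continue
            visited[s] = True
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if not visited[v]:
                        visited[v] = True
                        self._edges.add((min(u, v), max(u, v)))
                        q.append(v)

    def query(self, u, v):
        """Return True iff (u, v) is an edge of the subgraph H."""
        return (min(u, v), max(u, v)) in self._edges
```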

The rehearsal strategy is widely used to alleviate catastrophic forgetting in class-incremental learning (CIL) by preserving a limited number of exemplars from previous tasks. With imbalanced sample numbers between old and new classes, classifier learning can be biased. Existing CIL methods exploit long-tailed (LT) recognition techniques, e.g., adjusted losses and data re-sampling, to handle the data imbalance within each incremental task. In this work, the dynamic nature of data imbalance in CIL is demonstrated, and a novel Dynamic Residual Classifier (DRC) is proposed to handle this challenging scenario. Specifically, DRC is built upon a recently proposed residual classifier, with branch-layer merging to handle the model-growing problem. Moreover, DRC is compatible with different CIL pipelines and substantially improves them. Combining DRC with the model adaptation and fusion (MAF) pipeline achieves state-of-the-art results on both conventional CIL and LT-CIL benchmarks. Extensive experiments are also conducted for a detailed analysis. The code is publicly available.
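
The rehearsal component that the method builds on is easy to illustrate. Below is a minimal class-balanced exemplar buffer (our own simplification; the proposed DRC sits on top of this kind of memory and addresses the old/new imbalance that remains at training time):

```python
import random

class ExemplarBuffer:
    """Class-balanced rehearsal memory for class-incremental learning.

    Keeps at most `capacity` exemplars in total, shrinking the
    per-class quota as new classes arrive.  Old classes stay equally
    represented in memory, but each task's training data is still
    dominated by new classes (the imbalance the paper targets).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = {}                 # class id -> list of exemplars

    def add_task(self, data_by_class):
        self.per_class.update({c: list(xs) for c, xs in data_by_class.items()})
        quota = self.capacity // max(1, len(self.per_class))
        for c, xs in self.per_class.items():
            if len(xs) > quota:             # randomly subsample to the quota
                self.per_class[c] = random.sample(xs, quota)

    def replay_set(self):
        """Labeled exemplars to mix into the next task's training data."""
        return [(x, c) for c, xs in self.per_class.items() for x in xs]
```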

For $1\le p \le \infty$, the Fr\'echet $p$-mean of a probability measure on a metric space is an important notion of central tendency that generalizes the usual notions in the real line of mean ($p=2$) and median ($p=1$). In this work we prove a collection of limit theorems for Fr\'echet means and related objects, which, in general, constitute a sequence of random closed sets. On the one hand, we show that many limit theorems (a strong law of large numbers, an ergodic theorem, and a large deviations principle) can be simply descended from analogous theorems on the space of probability measures via purely topological considerations. On the other hand, we provide the first sufficient conditions for the strong law of large numbers to hold in a $T_2$ topology (in particular, the Fell topology), and we show that this condition is necessary in some special cases. We also discuss statistical and computational implications of the results herein.
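
On a finitely supported measure the definition can be evaluated directly. A small sketch (our own) brute-forces the Fréchet $p$-mean set $\arg\min_x \sum_i w_i\, d(x, x_i)^p$ over a candidate grid; on the real line it recovers the mean for $p=2$ and the median set for $p=1$, and the $p=1$ case also shows why the minimizer is a set in general.

```python
def frechet_mean_set(candidates, support, weights, dist, p, tol=1e-9):
    """Fréchet p-mean set: minimizers over `candidates` of
    F(x) = sum_i w_i * dist(x, x_i)**p  (a set in general)."""
    F = lambda x: sum(w * dist(x, y) ** p for w, y in zip(weights, support))
    vals = [F(x) for x in candidates]
    best = min(vals)
    return [x for x, v in zip(candidates, vals) if v <= best + tol]

# Real line, uniform measure on {0, 1, 2, 3}:
grid = [i / 100 for i in range(301)]
support, w = [0.0, 1.0, 2.0, 3.0], [0.25] * 4
d = lambda x, y: abs(x - y)
print(frechet_mean_set(grid, support, w, d, p=2))   # [1.5]: the mean
med = frechet_mean_set(grid, support, w, d, p=1)
print(med[0], med[-1])                              # 1.0 2.0: the median set
```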

It is widely believed that modeling relationships between objects helps in representing and eventually describing an image. Nevertheless, there has been little evidence supporting this idea for image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of the attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory architecture (dubbed GCN-LSTM) that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the objects detected in an image based on their spatial and semantic connections. The representation of each object region is then refined by leveraging the graph structure through GCN. With the learnt region-level features, GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported in comparison to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
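
The graph-encoding step can be sketched in a few lines of PyTorch (our own simplification: the paper's encoder handles semantic and spatial relation types separately, whereas the single weight matrix below does not):

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN layer refining detected-region features:
    h'_i = ReLU(W @ mean of h_j over neighbors j of i, incl. i).
    A is a binary relation matrix over regions; self-loops keep
    each region's own feature in the update."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, A):
        # h: (num_regions, dim), A: (num_regions, num_regions)
        A_hat = A + torch.eye(A.size(0))                 # add self-loops
        A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)   # row-normalize
        return torch.relu(self.lin(A_hat @ h))

# Refined region features then feed the attention-based LSTM decoder.
h = torch.randn(36, 1024)                  # e.g., 36 detected regions
A = (torch.rand(36, 36) > 0.8).float()     # e.g., detector relation edges
refined = SimpleGCNLayer(1024)(h, A)
```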

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
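
The lexicon-matching step that builds the lattice is simple to sketch (our own minimal version): enumerate every multi-character span of the sentence that appears in the lexicon; each matched word becomes an extra lattice edge alongside the character chain, and the gated cells choose among these paths.

```python
def lattice_words(chars, lexicon, max_len=8):
    """All (start, end, word) spans of `chars` found in `lexicon`.

    Each match is a lattice edge from position `start` to `end`,
    in addition to the per-character edges of the sentence.
    """
    matches = []
    for i in range(len(chars)):
        for j in range(i + 2, min(len(chars), i + max_len) + 1):
            word = "".join(chars[i:j])
            if word in lexicon:
                matches.append((i, j, word))
    return matches

# Classic example from the Chinese-NER setting:
lex = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
print(lattice_words(list("南京市长江大桥"), lex))
```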

Traditional methods for link prediction can be categorized into three main types: graph-structure-feature-based, latent-feature-based, and explicit-feature-based. Graph structure feature methods leverage handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods rely on factorizing a network's matrix representations to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types of methods has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new framework for link prediction which combines the power of all three types into a single graph neural network (GNN). A GNN is a type of neural network that directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that these local subgraphs preserve a great deal of the high-order graph structure features related to link existence. Another key feature is that our GNN can naturally incorporate latent features and explicit features. This is achieved by concatenating node embeddings (latent features) and node attributes (explicit features) in the node information matrix for each subgraph, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.
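
The subgraph-extraction step at SEAL's input is easy to state. A minimal sketch (our own; SEAL additionally labels each node by its structural role relative to the target link before feeding the GNN):

```python
from collections import deque

def enclosing_subgraph(adj, u, v, h=1):
    """Nodes of the h-hop enclosing subgraph around target link (u, v).

    Union of the h-hop neighborhoods of both endpoints; the GNN
    predicts link existence from the subgraph induced on these
    nodes, with node embeddings/attributes appended as features.
    """
    nodes = set()
    for s in (u, v):
        dist = {s: 0}
        q = deque([s])
        while q:                        # truncated BFS from each endpoint
            x = q.popleft()
            if dist[x] == h:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        nodes |= dist.keys()
    return nodes
```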
