
Coresets for $k$-means and $k$-median problems yield a small summary of the data that preserves the clustering cost with respect to any set of $k$ centers. Recently, coresets have also been constructed for constrained $k$-means and $k$-median problems. However, the notion of coresets has two drawbacks: (i) they can only be applied in settings where the input points are allowed to have weights, and (ii) in general metric spaces, the size of the coresets can depend logarithmically on the number of points. The notion of weak coresets, which have less stringent requirements than coresets, has been studied in the context of classical $k$-means and $k$-median problems. A weak coreset is a pair $(J,S)$ of subsets of points, where $S$ acts as a summary of the point set and $J$ as a set of potential centers. This pair satisfies the properties that (i) $S$ is a good summary of the data as long as the $k$ centers are chosen from $J$ only, and (ii) there is a good choice of $k$ centers in $J$ with cost close to the optimal cost. We develop this framework, which we call universal weak coresets, for constrained clustering settings. In conjunction with recent coreset constructions for constrained settings, our designs give greater data compression, are conceptually simpler, and apply to a wide range of constrained $k$-median and $k$-means problems.
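To make the weak-coreset guarantee concrete, the following is a minimal sketch that checks property (i): for centers drawn only from a candidate set $J$, the weighted cost on the summary $S$ should track the cost on the full point set. The data, the uniform-sample choice of $(J,S)$, and the plain $k$-means cost are all illustrative placeholders, not the construction from the abstract.

```python
import numpy as np
from itertools import combinations

def kmeans_cost(points, weights, centers):
    # Weighted k-means cost: sum_i w_i * min_c ||p_i - c||^2
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return float((weights * d2.min(axis=1)).sum())

rng = np.random.default_rng(0)
P = rng.normal(size=(500, 2))          # full point set (hypothetical data)
k = 3

# Placeholder (J, S): uniform samples standing in for a real weak-coreset construction.
J = P[rng.choice(len(P), 40, replace=False)]        # candidate centers
S_idx = rng.choice(len(P), 60, replace=False)
S, w_S = P[S_idx], np.full(60, len(P) / 60.0)       # summary with uniform weights

# Property (i): for centers chosen from J, the cost on S should approximate the cost on P.
worst_gap = 0.0
for idx in list(combinations(range(len(J)), k))[:200]:   # a few candidate center sets
    C = J[list(idx)]
    ratio = kmeans_cost(S, w_S, C) / kmeans_cost(P, np.ones(len(P)), C)
    worst_gap = max(worst_gap, abs(ratio - 1.0))
print(f"max relative cost error over tested center sets from J: {worst_gap:.3f}")
```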

Related Content

Despite widespread adoption in practice, guarantees for the LASSO and Group LASSO are strikingly lacking in settings beyond statistical problems, and these algorithms are usually considered heuristics in the context of sparse convex optimization on deterministic inputs. We give the first recovery guarantees for the Group LASSO for sparse convex optimization with vector-valued features. We show that if a sufficiently large Group LASSO regularization is applied when minimizing a strictly convex function $l$, then the minimizer is a sparse vector supported on vector-valued features with the largest $\ell_2$ norm of the gradient. Thus, repeating this procedure selects the same set of features as the Orthogonal Matching Pursuit algorithm, which admits recovery guarantees for any function $l$ with restricted strong convexity and smoothness via weak submodularity arguments. This answers open questions of Tibshirani et al. and Yasuda et al. Our result is the first to theoretically explain the empirical success of the Group LASSO for convex functions under general input instances, assuming only restricted strong convexity and smoothness. Our result also generalizes the provable guarantees for the Sequential Attention algorithm of Yasuda et al., a feature selection algorithm inspired by the attention mechanism. As an application of our result, we give new results for the column subset selection problem, which is well-studied when the loss is the Frobenius norm or other entrywise matrix losses. We give the first result for general loss functions for this problem, requiring only restricted strong convexity and smoothness.
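A minimal sketch of the selection rule the abstract characterizes: under a large group-lasso penalty, the group that enters the support is the one whose block of the gradient at zero has the largest $\ell_2$ norm, which is also the greedy choice of a group Orthogonal Matching Pursuit step. The least-squares loss, group structure, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, groups = 200, [(0, 3), (3, 6), (6, 9), (9, 12)]   # hypothetical feature groups (start, end)
X = rng.normal(size=(n, 12))
y = X[:, 0:3] @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

def loss_grad(beta):
    # Gradient of an illustrative choice of l: l(beta) = 0.5 * ||y - X beta||^2 / n
    return -X.T @ (y - X @ beta) / n

# Group-OMP style selection: pick the group whose gradient block at beta = 0
# has the largest l2 norm; per the abstract's characterization, this is the
# group selected by a sufficiently large Group LASSO regularization.
g = loss_grad(np.zeros(12))
norms = [np.linalg.norm(g[s:e]) for s, e in groups]
print("gradient block norms:", np.round(norms, 3))
print("selected group:", int(np.argmax(norms)))       # expected: group 0, the true support
```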

Designing small-sized \emph{coresets}, which approximately preserve the cost of every candidate solution on a large dataset, has been an important research direction for the past decade. We consider coreset construction for a variety of general constrained clustering problems. We introduce a general class of assignment constraints, including capacity constraints on cluster centers, and assignment structure constraints for data points (modeled by a convex body $\mathcal{B}$). We give coresets for constrained clustering problems with such general assignment constraints, significantly generalizing known coreset results for constrained clustering. Notable implications of our general theorem include the first $\epsilon$-coreset for capacitated and fair $k$-Median with $m$ outliers in Euclidean spaces whose size is $\tilde{O}(m + k^2 \epsilon^{-4})$, generalizing and improving upon the prior bounds in [Braverman et al., FOCS'22; Huang et al., ICLR'23] (for capacitated $k$-Median, the coreset size bound obtained in [Braverman et al., FOCS'22] is $\tilde{O}(k^3 \epsilon^{-6})$, and for $k$-Median with $m$ outliers, the coreset size bound obtained in [Huang et al., ICLR'23] is $\tilde{O}(m + k^3 \epsilon^{-5})$), and the first $\epsilon$-coreset of size $\mathrm{poly}(k \epsilon^{-1})$ for fault-tolerant clustering for metric spaces with bounded covering exponent.

We present an efficient labeling scheme for answering connectivity queries in graphs subject to a specified number of vertex failures. Our first result is a randomized construction of a labeling function that assigns vertices $O(f^3\log^5 n)$-bit labels, such that given the labels of $F\cup \{s,t\}$ where $|F|\leq f$, we can correctly report, with probability $1-1/\mathrm{poly}(n)$, whether $s$ and $t$ are connected in $G-F$. However, it is possible that over all $n^{O(f)}$ distinct queries, some are answered incorrectly. Our second result is a deterministic labeling function that produces $O(f^7 \log^{13} n)$-bit labels such that all connectivity queries are answered correctly. Both upper bounds are polynomially off from an $\Omega(f)$-bit lower bound. Our labeling schemes are based on a new low-degree decomposition that improves on the Duan-Pettie decomposition and facilitates its distributed representation. We make heavy use of randomization to construct hitting sets, fault-tolerant graph sparsifiers, and linear sketches. Our derandomized labeling scheme combines a variety of techniques: the method of conditional expectations, hit-miss hash families, and $\epsilon$-nets for axis-aligned rectangles. The prior labeling scheme of Parter and Petruschka shows that $f=1$ and $f=2$ vertex faults can be handled with $O(\log n)$- and $O(\log^3 n)$-bit labels, respectively, and that for $f>2$ vertex faults, $\tilde{O}(n^{1-1/2^{f-2}})$-bit labels suffice.

Parity games have witnessed several new quasi-polynomial algorithms since the breakthrough result of Calude et al. (STOC 2017). The combinatorial object underlying these approaches is a universal tree, as identified by Czerwi\'nski et al. (SODA 2019). By providing a quasi-polynomial lower bound on the size of universal trees, they have highlighted a barrier that must be overcome by all existing approaches to attain polynomial running time. This is due to the existence of worst-case instances which force these algorithms to explore a large portion of the tree. As an attempt to overcome this barrier, we propose a strategy iteration framework which can be applied to any universal tree. It is at least as fast as its value iteration counterparts, while allowing one to take bigger leaps in the universal tree. Our main technical contribution is an efficient method for computing the least fixed point of 1-player games. This is achieved via a careful adaptation of shortest-path algorithms to the setting of ordered trees. By plugging in the universal tree of Jurdzi\'nski and Lazi\'c (LICS 2017), or the Strahler universal tree of Daviaud et al. (ICALP 2020), we obtain instantiations of the general framework that take time $O(mn^2\log n\log d)$ and $O(mn^2\log^3 n \log d)$ per iteration, respectively.

We study a fundamental problem in optimization under uncertainty. There are $n$ boxes; each box $i$ contains a hidden reward $x_i$. Rewards are drawn i.i.d. from an unknown distribution $\mathcal{D}$. For each box $i$, we see $y_i$, an unbiased estimate of its reward, which is drawn from a Normal distribution with known standard deviation $\sigma_i$ (and an unknown mean $x_i$). Our task is to select a single box, with the goal of maximizing our reward. This problem captures a wide range of applications, e.g. ad auctions, where the hidden reward is the click-through rate of an ad. Previous work in this model [BKMR12] proves that the naive policy, which selects the box with the largest estimate $y_i$, is suboptimal, and suggests a linear policy, which selects the box $i$ with the largest $y_i - c \cdot \sigma_i$, for some $c > 0$. However, no formal guarantees are given about the performance of either policy (e.g., whether their expected reward is within some factor of the optimal policy's reward). In this work, we prove that both the naive policy and the linear policy are arbitrarily bad compared to the optimal policy, even when $\mathcal{D}$ is well-behaved, e.g. has monotone hazard rate (MHR), and even under a "small tail" condition, which requires that not too many boxes have arbitrarily large noise. On the flip side, we propose a simple threshold policy that gives a constant approximation to the reward of a prophet (who knows the realized values $x_1, \dots, x_n$) under the same "small tail" condition. We prove that when this condition is not satisfied, even an optimal clairvoyant policy (that knows $\mathcal{D}$) cannot get a constant approximation to the prophet, even for MHR distributions, implying that our threshold policy is optimal against the prophet benchmark, up to constants.
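A minimal simulation contrasting the naive policy (pick the largest estimate $y_i$) with the linear policy (pick the largest $y_i - c\cdot\sigma_i$) against a prophet who sees the realized rewards. The reward distribution, the noise scales, and the constant $c$ are hypothetical, and the threshold policy proposed in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, c = 50, 20000, 1.0   # boxes, simulation runs, linear-policy constant (all hypothetical)

naive = linear = prophet = 0.0
for _ in range(trials):
    x = rng.exponential(scale=1.0, size=n)        # hidden rewards, an MHR distribution (illustrative D)
    sigma = rng.uniform(0.1, 5.0, size=n)         # known, heterogeneous noise levels
    y = x + sigma * rng.normal(size=n)            # unbiased noisy estimates
    naive   += x[np.argmax(y)]                    # box with the largest estimate
    linear  += x[np.argmax(y - c * sigma)]        # penalize noisy boxes, as in the linear policy
    prophet += x.max()                            # benchmark that knows the realized rewards

print(f"naive  : {naive / trials:.3f}")
print(f"linear : {linear / trials:.3f}")
print(f"prophet: {prophet / trials:.3f}")
```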

This paper studies the message complexity of authenticated Byzantine agreement (BA) in synchronous, fully-connected distributed networks under an honest majority. We focus on the so-called {\em implicit} Byzantine agreement problem, where each node starts with an input value and, at the end, a non-empty subset of the honest nodes should agree on a common input value while satisfying the BA properties (i.e., there can be undecided nodes). We show that a BA protocol with sublinear (in $n$, the number of nodes) message complexity under honest majority is possible in the standard PKI model when the nodes have access to an unbiased global coin and a hash function. In particular, we present a randomized Byzantine agreement algorithm which, with high probability, achieves implicit agreement, uses $\tilde{O}(\sqrt{n})$ messages, and runs in $\tilde{O}(1)$ rounds while tolerating $(1/2 - \epsilon)n$ Byzantine nodes for any fixed $\epsilon > 0$; the notation $\tilde{O}$ hides a $\mathrm{polylog}(n)$ factor. The algorithm requires a standard cryptographic setup (PKI and a hash function) and works against a static Byzantine adversary. The algorithm works in the CONGEST model, and each node does not need to know the identities of its neighbors, i.e., it works in the $KT_0$ model. The message complexity (and also the time complexity) of our algorithm is optimal up to a $\mathrm{polylog}(n)$ factor, as we show an $\Omega(\sqrt{n})$ lower bound on the message complexity.

We consider a natural generalization of Vertex Cover: the Subset Vertex Cover problem, which is to decide, for a graph $G=(V,E)$, a subset $T \subseteq V$ and an integer $k$, whether $V$ has a subset $S$ of size at most $k$ such that $S$ contains at least one end-vertex of every edge incident to a vertex of $T$. A graph is $H$-free if it does not contain $H$ as an induced subgraph. We solve two open problems from the literature by proving that Subset Vertex Cover is NP-complete on subcubic (claw, diamond)-free planar graphs and on $2$-unipolar graphs, a subclass of $2P_3$-free weakly chordal graphs. Our results show for the first time that Subset Vertex Cover is computationally harder than Vertex Cover (under P $\neq$ NP). We also prove new polynomial-time results. We first give a dichotomy for graphs where $G[T]$ is $H$-free: we show that Subset Vertex Cover is polynomial-time solvable on graphs $G$ for which $G[T]$ is $H$-free if $H = sP_1 + tP_2$, and NP-complete otherwise. Moreover, we prove that Subset Vertex Cover is polynomial-time solvable for $(sP_1 + P_2 + P_3)$-free graphs and graphs of bounded mim-width. By combining our new results with known results, we obtain a partial complexity classification for Subset Vertex Cover on $H$-free graphs.
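To make the problem definition concrete, here is a minimal brute-force checker and solver for Subset Vertex Cover on a tiny hypothetical instance; it only illustrates the constraint (S must hit every edge touching T), not any of the algorithms from the abstract.

```python
from itertools import combinations

def is_subset_vertex_cover(edges, T, S):
    # S must contain an endpoint of every edge incident to a vertex of T.
    return all(u in S or v in S for (u, v) in edges if u in T or v in T)

def min_subset_vertex_cover(vertices, edges, T):
    # Brute force over all subsets, smallest first (exponential; illustration only).
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_subset_vertex_cover(edges, T, set(S)):
                return set(S)

# Hypothetical example: a path 0-1-2-3-4 with T = {1, 2}.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(min_subset_vertex_cover(V, E, T={1, 2}))
# Prints a size-2 solution such as {0, 2}: vertex 0 covers (0,1); vertex 2 covers (1,2) and (2,3).
```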

We consider the classic 1-center problem: given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O}_{\varepsilon}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms that list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
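For reference, the naive baseline that the lower bounds above are measured against is the brute-force 1-center computation: compute all pairwise distances and pick the string minimizing the maximum. A minimal sketch in the edit metric, with hypothetical input strings:

```python
def edit_distance(a, b):
    # Standard O(|a| * |b|) dynamic program for Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def one_center(strings):
    # Naive 1-center: try every string as the center and keep the one that
    # minimizes the maximum edit distance to the rest (quadratic in n).
    return min(strings, key=lambda s: max(edit_distance(s, t) for t in strings))

# Hypothetical input.
P = ["kitten", "sitting", "mitten", "bitten", "fitting"]
print(one_center(P))
```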

Extreme multi-label text classification (XMC) aims to tag each input text with the most relevant labels from an extremely large label set, such as those that arise in product categorization and e-commerce recommendation. Recently, pretrained language representation models such as BERT have achieved remarkable state-of-the-art performance across a wide range of NLP tasks, including sentence classification with small label sets (typically fewer than a thousand). However, there are several challenges in applying BERT to the XMC problem. The main challenges are: (i) the difficulty of capturing dependencies and correlations among labels, whose features may come from heterogeneous sources, and (ii) the difficulty of scaling to the extreme label setting, as the model size can be very large and scales linearly with the size of the output space. To overcome these challenges, we propose X-BERT, the first feasible attempt to finetune BERT models for a scalable solution to the XMC problem. Specifically, X-BERT leverages both the label and document text to build label representations, which induce semantic label clusters that better model label dependencies. At the heart of X-BERT is finetuning BERT models to capture the contextual relations between input text and the induced label clusters. Finally, an ensemble of the different BERT models trained on heterogeneous label clusters leads to our best final model. Empirically, on a Wiki dataset with around 0.5 million labels, X-BERT achieves new state-of-the-art results: precision@1 reaches 67.80%, a substantial improvement over the 32.58% and 60.91% achieved by the deep learning baseline fastText and the competing XMC approach Parabel, respectively. This amounts to an 11.31% relative improvement over Parabel, which is significant given that the recent approach SLICE obtains only a 5.53% relative improvement.
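A minimal sketch of the first stage of the pipeline the abstract describes: embedding label text and inducing semantic label clusters. TF-IDF features and k-means are stand-ins for X-BERT's actual label representations and clustering procedure, and the label texts and cluster count are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical label texts standing in for an extreme label set.
label_texts = [
    "running shoes", "trail running sneakers", "laptop backpack",
    "gaming laptop", "wireless mouse", "yoga mat", "exercise mat",
]

# Embed label text and induce semantic label clusters (placeholders for
# X-BERT's label representations and clustering step).
Z = TfidfVectorizer().fit_transform(label_texts)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

for text, c in zip(label_texts, clusters):
    print(c, text)
# In X-BERT, a finetuned BERT matcher then maps each input document to the
# relevant induced clusters, and labels are ranked within the matched clusters.
```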

Distant supervision can effectively label data for relation extraction, but it suffers from the noisy labeling problem. Recent works mainly apply soft, bag-level noise reduction strategies to find the relatively better samples within a sentence bag, which is suboptimal compared with making a hard decision about false positive samples at the sentence level. In this paper, we introduce an adversarial learning framework, which we name DSGAN, to learn a sentence-level true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples produced by the generator as negative samples for training the discriminator. The optimal generator is obtained when the discrimination ability of the discriminator has declined the most. We then use the generator to filter the distant supervision training dataset and redistribute the false positive instances into the negative set, thereby providing a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction compared to state-of-the-art systems.
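A minimal sketch of the final filtering step the abstract describes: using the generator's scores to keep likely true positives and redistribute likely false positives into the negative set. The adversarial training of the generator is not shown; the sentences, scores, and threshold below are hypothetical.

```python
import numpy as np

def redistribute(positives, generator_scores, threshold=0.5):
    # generator_scores[i] is the generator's probability that sentence i is a
    # true positive (in DSGAN this generator is learned adversarially; here the
    # scores are simply given). Sentences below the threshold are treated as
    # false positives and moved to the negative set.
    keep = generator_scores >= threshold
    cleaned_positive = [s for s, k in zip(positives, keep) if k]
    moved_to_negative = [s for s, k in zip(positives, keep) if not k]
    return cleaned_positive, moved_to_negative

# Hypothetical distantly-labeled positive sentences and generator scores.
sentences = ["A founded B", "A visited B", "A is the CEO of B", "A met B"]
scores = np.array([0.92, 0.15, 0.88, 0.30])
pos, neg = redistribute(sentences, scores)
print("cleaned positives:", pos)
print("redistributed negatives:", neg)
```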
