
In the Set Cover problem, we are given a set system in which each set has a weight, and we want to find a collection of sets that covers the universe while having low total weight. Several approaches are known (based on greedy methods, relax-and-round, and dual-fitting) that achieve an $H_k \approx \ln k + O(1)$ approximation for this problem, where the size of each set is bounded by $k$. Moreover, obtaining a $\ln k - O(\ln \ln k)$ approximation is hard. Where does the truth lie? Can we close the gap between the upper and lower bounds? An improvement would be particularly interesting for small values of $k$, which are often used in reductions between Set Cover and other combinatorial optimization problems. We consider a non-oblivious local-search approach: to the best of our knowledge, this gives the first $H_k$-approximation for Set Cover based on local search. Our proof fits in one page and yields an integrality gap result as well. Refining our approach by considering larger moves and an optimized potential function gives an $(H_k - \Omega(\log^2 k)/k)$-approximation, improving on the previous bound of $(H_k - \Omega(1/k^8))$ (\emph{R.\ Hassin and A.\ Levin, SICOMP '05}) based on a modified greedy algorithm.
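
To make the local-search idea concrete, here is a minimal Python sketch of a non-oblivious local search for weighted Set Cover. The potential below (weight plus a redundancy penalty) and the restriction to single drops and single swaps are illustrative assumptions for exposition only; the paper's optimized potential and larger moves, which drive the $H_k$ and refined bounds, differ. The defining feature shown is that moves are accepted according to the potential rather than the true total weight.

```python
def covers(idx, sets, universe):
    """True if the chosen sets (indices in idx) cover the universe."""
    return set().union(*(sets[i] for i in idx)) >= universe if idx else not universe

def potential(idx, sets, weights):
    """Illustrative non-oblivious potential (a stand-in, not the paper's
    optimized one): each chosen set pays its weight plus a penalty for the
    fraction of its elements that other chosen sets also cover."""
    phi = 0.0
    for i in idx:
        others = set().union(*(sets[j] for j in idx if j != i))
        phi += weights[i] * (1.0 + len(sets[i] & others) / len(sets[i]))
    return phi

def local_search_set_cover(sets, weights, universe):
    """Non-oblivious local search: candidate moves (drop one set, or swap one
    set in for one out) are judged by `potential`, not by the true weight."""
    current = set(range(len(sets)))                      # trivial feasible cover
    improved = True
    while improved:
        improved = False
        drops = [current - {i} for i in current]
        swaps = [(current - {i}) | {j}
                 for i in current for j in range(len(sets)) if j not in current]
        for cand in drops + swaps:
            if covers(cand, sets, universe) and \
               potential(cand, sets, weights) < potential(current, sets, weights) - 1e-12:
                current, improved = cand, True
                break
    return current

# tiny example: the heavy "catch-all" set is dropped in favour of lighter ones
sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 2, 3, 4, 5}]
weights = [1.0, 1.0, 1.0, 2.5]
print(sorted(local_search_set_cover(sets, weights, {1, 2, 3, 4, 5})))
```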

Related content

Much of the literature on optimal design of bandit algorithms is based on minimization of expected regret. It is well known that designs that are optimal over certain exponential families can achieve expected regret that grows logarithmically in the number of arm plays, at a rate governed by the Lai-Robbins lower bound. In this paper, we show that when one uses such optimized designs, the regret distribution of the associated algorithms necessarily has a very heavy tail, specifically, that of a truncated Cauchy distribution. Furthermore, for $p>1$, the $p$-th moment of the regret distribution grows much faster than poly-logarithmically, in particular as a power of the total number of arm plays. We show that optimized UCB bandit designs are also fragile in an additional sense: when the problem is even slightly mis-specified, the regret can grow much faster than the conventional theory suggests. Our arguments are based on standard change-of-measure ideas, and indicate that the most likely way for the regret to become larger than expected is for the optimal arm to return below-average rewards in the first few arm plays, thereby causing the algorithm to believe that the arm is sub-optimal. To alleviate the fragility issues exposed, we show that UCB algorithms can be modified so as to ensure a desired degree of robustness to mis-specification. In doing so, we also provide a sharp trade-off between the amount of UCB exploration and the tail exponent of the resulting regret distribution.
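
As a concrete point of reference for the exploration/tail trade-off above, the following is a small self-contained simulation of a UCB1-style index policy on Bernoulli arms. The arm means, horizon, and exploration coefficient `c` are illustrative choices, not values from the paper; increasing `c` corresponds to the extra exploration that, per the abstract, buys a lighter regret tail at the cost of a larger expected-regret constant.

```python
import math, random

def ucb_run(means, horizon, c=2.0, rng=random):
    """One run of a UCB1-style index policy on Bernoulli arms; returns the
    realized (pseudo-)regret.  `c` scales the exploration bonus."""
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    best, regret = max(means), 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play each arm once
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(c * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret

# the regret *distribution* over repeated runs is the object of interest
random.seed(0)
regrets = sorted(ucb_run([0.5, 0.6], horizon=5000) for _ in range(200))
print("median regret:", regrets[100], " 95th percentile:", regrets[189])
```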

Computational differential privacy (CDP) is a natural relaxation of the standard notion of (statistical) differential privacy (SDP) proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally-bounded adversaries rather than computationally-unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable with the CDP notion but not the SDP notion. Even a candidate for such a task is unknown. Indeed, it is even unclear what the truth could be! In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion. More specifically, under strong but plausible cryptographic assumptions, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon = O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, including computationally unbounded ones, that achieves a constant utility must use either a super-constant $\varepsilon$ or a non-negligible $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: first we show that a mechanism is "private" against a certain class of decision tree adversaries, and then we use cryptographic constructions to "lift" this into privacy against computational adversaries. We believe this approach could be useful to devise further tasks separating CDP from SDP.
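
For readers who want the two notions being contrasted spelled out (in their textbook forms; the paper's precise variant of CDP may differ in details): $(\varepsilon, \delta)$-SDP requires, for every pair of neighboring datasets $D \sim D'$ and every set of outputs $S$,
$$\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta,$$
whereas the indistinguishability-based computational relaxation only requires the analogous inequality to hold, up to a negligible additive term in the security parameter, against probabilistic polynomial-time adversaries $A$:
$$\Pr[A(M(D)) = 1] \le e^{\varepsilon} \, \Pr[A(M(D')) = 1] + \mathrm{negl}(\kappa).$$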

Machine learning is the dominant approach to artificial intelligence, through which computers learn from data and experience. In the framework of supervised learning, for a computer to learn from data accurately and efficiently, some auxiliary information about the data distribution and target function should be provided to it through the learning model. This notion of auxiliary information relates to the concept of regularization in statistical learning theory. A common feature among real-world datasets is that data domains are multiscale and target functions are well-behaved and smooth. In this paper, we propose a learning model that exploits this multiscale data structure and discuss its statistical and computational benefits. The hierarchical learning model is inspired by the logical and progressive easy-to-hard learning mechanism of human beings and has interpretable levels. The model apportions computational resources according to the complexity of data instances and target functions. This property can have multiple benefits, including higher inference speed and computational savings in training a model for many users or when training is interrupted. We provide a statistical analysis of the learning mechanism using multiscale entropies and show that it can yield significantly stronger guarantees than uniform convergence bounds.

The paper considers the SUPPORTED model of distributed computing introduced by Schmid and Suomela [HotSDN'13], generalizing the LOCAL and CONGEST models. In this framework, multiple instances of the same problem, differing from each other by the subnetwork to which they apply, recur over time, and need to be solved efficiently online. To do that, one may rely on an initial preprocessing phase for computing some useful information. This preprocessing phase makes it possible, in some cases, to overcome locality-based time lower bounds. A first contribution of the current paper is expanding the spectrum of problem types to which the SUPPORTED model applies. In addition to subnetwork-defined recurrent problems, we also introduce recurrent problems of two additional types: (i) instances defined by partial client sets, and (ii) instances defined by partially fixed outputs. Our second contribution is illustrating the versatility of the SUPPORTED framework by examining recurrent variants of three classical graph problems. The first problem is Minimum Client Dominating Set (CDS), a recurrent version of the classical dominating set problem in which each recurrent instance requires us to dominate a partial client set. We provide a constant-time approximation scheme for CDS on trees and planar graphs. The second problem is Color Completion (CC), a recurrent version of the coloring problem in which each recurrent instance comes with a partially fixed coloring (of some of the vertices) that must be completed; see the sketch below for the problem setup. We study the minimum number of new colors and the minimum total number of colors necessary for completing this task. The third problem we study is a recurrent version of Locally Checkable Labellings (LCL) on paths of length $n$. We show that such problems have complexities that are either $\Theta(1)$ or $\Theta(n)$, extending the results of Foerster et al. [INFOCOM'19].
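
To fix ideas for the Color Completion problem, here is a tiny, purely sequential (non-distributed) Python sketch that extends a partial proper coloring greedily. It only illustrates the problem setup, not the paper's SUPPORTED-model algorithms or its bounds on the number of new colors.

```python
def complete_coloring(adj, partial):
    """Greedily extend a partial proper coloring.

    adj     : dict vertex -> iterable of neighbors (undirected graph)
    partial : dict vertex -> color for the pre-colored vertices
    Each uncolored vertex gets the smallest color absent from its neighborhood,
    so the pre-assigned colors are respected and no edge is monochromatic.
    """
    coloring = dict(partial)
    for v in adj:
        if v in coloring:
            continue
        used = {coloring[u] for u in adj[v] if u in coloring}
        c = 0
        while c in used:
            c += 1
        coloring[v] = c
    return coloring

# example: a path 0-1-2-3 with both endpoints pre-colored with color 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(complete_coloring(adj, partial={0: 0, 3: 0}))
```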

How can we better understand the broad, diverse, shifting, and invisible crowd workforce, so that we can better support it? We present findings from online observations and analysis of publicly available postings from a community forum of crowd workers. In particular, we observed recurring tensions between crowd workers and journalists regarding media depictions of crowd work. We found that crowd diversity makes any one-dimensional representation inadequate in addressing the wide-ranging experiences of crowd work. We argue that the scale, diversity, invisibility, and the crowds' resistance to publicity make a worker-centered approach to crowd work particularly challenging, necessitating a better understanding of the diversity of workers and their lived experiences.

We consider constrained sampling problems in paid research studies or clinical trials. When the number of qualified volunteers exceeds the allowed budget, we recommend a D-optimal sampling strategy based on optimal design theory and develop a constrained lift-one algorithm to find the optimal allocation. Unlike the literature, which mainly deals with linear models, our solution solves the constrained sampling problem under fairly general statistical models, including generalized linear models and multinomial logistic models, and with more general constraints. We theoretically justify the optimality of our sampling strategy and show, via simulation studies and real-world examples, its advantages over simple random sampling and proportionally stratified sampling strategies.
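
As a minimal illustration of the objective such a procedure optimizes (not the authors' constrained lift-one algorithm itself), the sketch below evaluates the D-optimality criterion, the log-determinant of the Fisher information, for a logistic regression design under a candidate sampling allocation. The design points, working coefficients, and allocations are made-up toy values.

```python
import numpy as np

def d_criterion_logistic(X, beta, w):
    """log det of the Fisher information of a logistic-regression design.

    X    : (m, p) matrix of distinct covariate settings (design points)
    beta : (p,) working coefficient vector (D-optimality for GLMs is "local",
           i.e. evaluated at a guess of beta)
    w    : (m,) allocation proportions, w >= 0, sum(w) = 1
    """
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    weights = w * p * (1.0 - p)               # GLM variance function for logistic
    info = X.T @ (X * weights[:, None])       # X^T diag(weights) X
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

# toy example: compare two feasible allocations over three design points
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])   # intercept + one covariate
beta = np.array([0.0, 1.0])
w_uniform = np.array([1 / 3, 1 / 3, 1 / 3])
w_skewed = np.array([0.45, 0.10, 0.45])
print(d_criterion_logistic(X, beta, w_uniform),
      d_criterion_logistic(X, beta, w_skewed))
```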

Consider an agent exploring an unknown graph in search of some goal state. As it walks around the graph, it learns the nodes and their neighbors. The agent only knows where the goal state is when it reaches it. How do we reach this goal while moving only a small distance? This problem seems hopeless, even on trees of bounded degree, unless we give the agent some help. This setting with "help" often arises in exploring large search spaces (e.g., huge game trees) where we assume access to some score/quality function for each node, which we use to guide us towards the goal. In our case, we assume the help comes in the form of distance predictions: each node $v$ provides a prediction $f(v)$ of its distance to the goal vertex. Naturally, if these predictions are correct, we can reach the goal along a shortest path. What if the predictions are unreliable and some of them are erroneous? Can we get an algorithm whose performance relates to the error of the predictions? In this work, we consider the problem on trees and give deterministic algorithms whose total movement cost is only $O(OPT + \Delta \cdot ERR)$, where $OPT$ is the distance from the start to the goal vertex, $\Delta$ is the maximum degree, and $ERR$ is the total number of vertices whose predictions are erroneous. We show this guarantee is optimal. We then consider a "planning" version of the problem where the graph and predictions are known at the beginning, so the agent can use this global information to devise a search strategy of low cost. For this planning version, we go beyond trees and give an algorithm that achieves good performance on (weighted) graphs with bounded doubling dimension.
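
For intuition about the model, here is a small Python sketch of a naive baseline that repeatedly walks to the frontier vertex with the smallest predicted distance, paying the walking cost inside the explored region. With correct predictions it follows a shortest path; with erroneous ones it need not match the $O(OPT + \Delta \cdot ERR)$ guarantee, which the paper's more careful algorithm is designed to achieve. The graph, predictions, and helper names below are illustrative.

```python
import heapq
from collections import deque

def walk_cost(known_adj, src, dst):
    """BFS distance inside the already-explored part of the graph
    (the agent can only retrace edges incident to visited vertices)."""
    if src == dst:
        return 0
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in known_adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == dst:
                    return dist[v]
                q.append(v)
    return float("inf")

def prediction_guided_search(adj, start, goal, f):
    """Naive baseline: always walk to the frontier vertex with the smallest
    predicted distance f(v); returns the total movement cost."""
    pos, cost = start, 0
    known = {start: list(adj[start])}
    frontier = [(f[v], v) for v in adj[start]]
    heapq.heapify(frontier)
    seen = {start} | set(adj[start])
    while pos != goal:
        _, nxt = heapq.heappop(frontier)
        cost += walk_cost(known, pos, nxt)
        pos = nxt
        known[pos] = list(adj[pos])
        for v in adj[pos]:
            if v not in seen:
                seen.add(v)
                heapq.heappush(frontier, (f[v], v))
    return cost

# tiny tree: goal is 3, and node 2 carries an erroneous prediction
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
f = {0: 2, 1: 1, 2: 0, 3: 0}
print(prediction_guided_search(adj, start=0, goal=3, f=f))   # pays 4, OPT is 2
```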

The recent thought-provoking paper by Hansen [2022, Econometrica] proved that the Gauss-Markov theorem continues to hold without the requirement that competing estimators are linear in the vector of outcomes. Despite the elegant proof, it was shown by the authors and other researchers that the main result in the earlier version of Hansen's paper does not extend the classic Gauss-Markov theorem, because no nonlinear unbiased estimator exists under his conditions. To address the issue, Hansen [2022] added statements in the latest version with new conditions under which nonlinear unbiased estimators exist. Motivated by the lively discussion, we study a fundamental problem: which estimators are unbiased for a given class of linear models? We first review a line of highly relevant work dating back to the 1960s, which, unfortunately, has not drawn enough attention. Then, we introduce notation that allows us to restate and unify results from earlier work and Hansen [2022]. The new framework also allows us to highlight differences among previous conclusions. Lastly, we establish new representation theorems for unbiased estimators under different restrictions on the linear model, allowing the coefficients and covariance matrix to take only a finite number of values, the higher moments of the estimator and the dependent variable to exist, and the error distribution to be discrete, absolutely continuous, or dominated by another probability measure. Our results substantially generalize the claims of parallel commentaries on Hansen [2022] and a remarkable result by Koopmann [1982].
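
For context, the classical statement under discussion: in the linear model $y = X\beta + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\operatorname{Var}(\varepsilon) = \sigma^2 I$, the ordinary least squares estimator
$$\hat{\beta}_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y$$
has the smallest variance among all \emph{linear} unbiased estimators of $\beta$. Hansen's result, and the discussion above, concern what happens when the linearity restriction on the competing estimators is dropped.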

Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped to reach fully supervised performance. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches, which require particular architectures or distance computations. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by compacting the free energy of target data around the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
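
To make the energy-based selection idea tangible, here is a small NumPy sketch that scores unlabeled target samples by the free energy of a classifier's logits and queries the highest-scoring ones. It is a simplified stand-in: per the abstract, EADA's actual criterion additionally folds in an instance-uncertainty term and selects in rounds, and the temperature and budget below are arbitrary illustrative values.

```python
import numpy as np

def free_energy(logits, T=1.0):
    """Free energy of classifier outputs: F(x) = -T * log sum_k exp(logit_k / T).
    Under the energy-based view, lower free energy looks more source-like,
    while higher free energy flags target-characteristic samples."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)                       # stable logsumexp
    return -T * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))

def select_queries(target_logits, budget, T=1.0):
    """Pick the `budget` unlabeled target samples with the highest free energy
    for annotation (simplified stand-in for EADA's selection rule)."""
    fe = free_energy(target_logits, T)
    return np.argsort(-fe)[:budget]

# toy usage with random "logits" standing in for a network's outputs
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
print(select_queries(logits, budget=16))
```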
