We study online matching in the Euclidean $2$-dimensional plane with the non-crossing constraint. The offline version was introduced by Atallah in 1985, and the online version was introduced and studied more recently by Bose et al. The input to the problem consists of a sequence of points, and upon arrival of a point an algorithm can match it with a previously unmatched point, provided that the line segments corresponding to the matched edges do not intersect. The decisions are irrevocable, and while an optimal offline solution always matches all the points, an online algorithm cannot match all the points in the worst case unless it is given some side information, i.e., advice. We study two versions of this problem -- monochromatic (MNM) and bichromatic (BNM). We show that the advice complexity of solving BNM optimally on a circle (or, more generally, on inputs in convex position) is tightly bounded from above and below by the logarithm of the $n^\text{th}$ Catalan number. This result corrects the previous claim of Bose et al. that the advice complexity is $\log(n!)$. At the heart of the result is a connection between the non-crossing constraint in online inputs and the $231$-avoiding property of permutations of $n$ elements. We also show a lower bound of $n/3-1$ and an upper bound of $3n$ on the advice complexity for MNM in the plane. This gives an exponential improvement over the previously best known lower bound and an improvement in the constant of the leading term in the upper bound. In addition, we establish a lower bound of $\frac{\alpha}{2}\,D\!\left(\frac{2(1-\alpha)}{\alpha}\,\middle\|\,\frac{1}{4}\right)n$ on the advice complexity for achieving competitive ratio $\alpha$ for MNM on a circle, where $D(p\|q)$ denotes the binary relative entropy. Standard tools from advice complexity, such as partition trees and reductions from the string guessing problem, do not seem to apply to MNM/BNM, so we have to design our lower bounds from first principles.
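
The Catalan connection can be checked directly for small $n$: the number of $231$-avoiding permutations of $n$ elements equals the $n^\text{th}$ Catalan number. A brute-force sketch in Python (our illustration, not part of the paper):

```python
from itertools import permutations
from math import comb

def catalan(n):
    # n-th Catalan number: C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def avoids_231(perm):
    # perm avoids pattern 231 if no indices i < j < k satisfy
    # perm[k] < perm[i] < perm[j]
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if perm[k] < perm[i] < perm[j]:
                    return False
    return True

def count_231_avoiding(n):
    return sum(avoids_231(p) for p in permutations(range(1, n + 1)))

for n in range(1, 7):
    assert count_231_avoiding(n) == catalan(n)
```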

The semi-random graph process is a single player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$, and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. We focus on the problem of constructing a perfect matching in as few rounds as possible. In particular, we present an adaptive strategy for the player which achieves a perfect matching in $\beta n$ rounds, where the value of $\beta < 1.206$ is derived from a solution to some system of differential equations. This improves upon the previously best known upper bound of $(1+2/e+o(1)) \, n < 1.736 \, n$ rounds. We also improve the previously best lower bound of $(\ln 2 + o(1)) \, n > 0.693 \, n$ and show that the player cannot achieve the desired property in less than $\alpha n$ rounds, where the value of $\alpha > 0.932$ is derived from a solution to another system of differential equations. As a result, the gap between the upper and lower bounds is decreased roughly four times.
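
A naive baseline makes the linear-round bounds concrete: the obvious greedy player (match the presented vertex $u$ to an arbitrary unmatched vertex whenever $u$ itself is unmatched, and otherwise waste the round) already reaches a perfect matching, but needs roughly $\tfrac{n}{2}\ln n$ rounds in expectation, far above the $\beta n < 1.206\,n$ of the adaptive strategy. A simulation sketch (the greedy strategy here is ours for illustration, not the one from the paper):

```python
import random

def greedy_rounds(n, rng):
    """Rounds the naive greedy player needs for a perfect matching on n vertices."""
    matched = [False] * n
    size, rounds = 0, 0
    while size < n // 2:
        rounds += 1
        u = rng.randrange(n)          # u is presented uniformly at random
        if matched[u]:
            continue                  # naive strategy wastes this round
        v = rng.choice([x for x in range(n) if not matched[x] and x != u])
        matched[u] = matched[v] = True
        size += 1
    return rounds

rng = random.Random(0)
n = 200
avg = sum(greedy_rounds(n, rng) for _ in range(10)) / 10
```

On $n = 200$ the average lands well above $n$, consistent with the $\Theta(n \log n)$ behaviour of this baseline.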

Let $X$ and $Y$ be two real-valued random variables. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots$ be independent identically distributed copies of $(X,Y)$. Suppose there are two players A and B. Player A has access to $X_{1},X_{2},\ldots$ and player B has access to $Y_{1},Y_{2},\ldots$. Without communication, what joint probability distributions can players A and B jointly simulate? That is, if $k,m$ are fixed positive integers, what probability distributions on $\{1,\ldots,m\}^{2}$ are equal to the distribution of $(f(X_{1},\ldots,X_{k}),\,g(Y_{1},\ldots,Y_{k}))$ for some $f,g\colon\mathbb{R}^{k}\to\{1,\ldots,m\}$? When $X$ and $Y$ are standard Gaussians with fixed correlation $\rho\in(-1,1)$, we show that the set of probability distributions that can be noninteractively simulated from $k$ Gaussian samples is the same for any $k\geq m^{2}$. Previously, it was not even known if this number of samples $m^{2}$ would be finite or not, except when $m\leq 2$. Consequently, a straightforward brute-force search deciding whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ correlated Gaussian samples has run time bounded by $(5/\epsilon)^{m(\log(\epsilon/2) / \log|\rho|)^{m^{2}}}$, improving a bound of Ghazi, Kamath and Raghavendra. A nonlinear central limit theorem (i.e. invariance principle) of Mossel then generalizes this result to decide whether or not a probability distribution on $\{1,\ldots,m\}^{2}$ is within distance $0<\epsilon<|\rho|$ of being noninteractively simulated from $k$ samples of a given finite discrete distribution $(X,Y)$ in run time that does not depend on $k$, with constants that again improve a bound of Ghazi, Kamath and Raghavendra.
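
For $m = 2$ the simplest non-interactive protocol thresholds a single Gaussian sample on each side, and Sheppard's formula gives the resulting joint law exactly, which makes a good sanity check for simulations. A sketch (our illustration; the $f, g$ here are just sign functions, not the paper's construction):

```python
import math
import random

def simulate_joint(rho, trials, rng):
    # f, g are sign thresholds of one Gaussian sample each (m = 2, k = 1)
    counts = {(a, b): 0 for a in (1, 2) for b in (1, 2)}
    for _ in range(trials):
        x = rng.gauss(0.0, 1.0)
        z = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho * rho) * z   # corr(X, Y) = rho
        counts[(1 if x > 0 else 2, 1 if y > 0 else 2)] += 1
    return {ab: c / trials for ab, c in counts.items()}

# Sheppard's formula: P(f = g) = 1/2 + arcsin(rho) / pi
dist = simulate_joint(0.5, 100_000, random.Random(1))
agreement = dist[(1, 1)] + dist[(2, 2)]
```

Sign functions realize only one family of achievable distributions; the theorem above says that, for general $f, g$, the achievable set stabilizes once $k \geq m^2$.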

We study the problems of adjacency sketching, small-distance sketching, and approximate distance threshold sketching for monotone classes of graphs. The problem is to obtain randomized sketches of the vertices of any graph G in the class, so that adjacency, exact distance thresholds, or approximate distance thresholds of two vertices u, v can be decided (with high probability) from the sketches of u and v, by a decoder that does not know the graph. The goal is to determine when sketches of constant size exist. We show that, for monotone classes of graphs, there is a strict hierarchy: approximate distance threshold sketches imply small-distance sketches, which imply adjacency sketches, whereas the reverse implications are each false. The existence of an adjacency sketch is equivalent to the condition of bounded arboricity, while the existence of small-distance sketches is equivalent to the condition of bounded expansion. Classes of constant expansion admit approximate distance threshold sketches, while a monotone graph class can have arbitrarily small non-constant expansion without admitting an approximate distance threshold sketch.
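
The adjacency direction can be made concrete with the classical labeling for $d$-degenerate graphs (bounded arboricity implies bounded degeneracy): orient each edge along a degeneracy ordering so that every vertex stores at most $d$ neighbour ids, and decode by checking both stored lists. This is the deterministic $O(d \log n)$-bit labeling underlying the constant-size randomized sketch, which additionally hashes the lists; a sketch of the labeling only, not the paper's construction:

```python
from collections import defaultdict

def adjacency_labels(n, edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Degeneracy ordering: repeatedly remove a minimum-degree vertex.
    remaining = set(range(n))
    order = []
    while remaining:
        v = min(sorted(remaining), key=lambda x: len(adj[x] & remaining))
        order.append(v)
        remaining.remove(v)
    pos = {v: i for i, v in enumerate(order)}
    # Each vertex stores its neighbours that come later in the ordering;
    # in a d-degenerate graph there are at most d of them.
    return {v: (v, sorted(u for u in adj[v] if pos[u] > pos[v]))
            for v in range(n)}

def adjacent(label_u, label_v):
    u, out_u = label_u
    v, out_v = label_v
    return v in out_u or u in out_v
```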

We introduce and study the weakly single-crossing domain on trees, which is a generalization of the well-studied single-crossing domain in social choice theory. We design a polynomial-time algorithm for recognizing preference profiles which belong to this domain. We then develop an efficient elicitation algorithm for this domain which works even if the preferences can be accessed only sequentially and the underlying single-crossing tree structure is not known beforehand. We prove a matching lower bound on the query complexity of our elicitation algorithm when the number of voters is large compared to the number of candidates. We also prove a lower bound of $\Omega(m^2\log n)$ on the number of queries that any algorithm needs to ask to elicit a single-crossing profile when random queries are allowed. This resolves an open question from an earlier paper and proves the optimality of its preference elicitation algorithm when random queries are allowed.
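
For the classical single-crossing domain on a line, membership is easy to test: fix the voter order and check that, for every pair of candidates, the relative preference flips at most once along that order. A minimal checker (our illustration; the paper's recognition algorithm handles the weakly single-crossing domain on trees, which is more involved):

```python
from itertools import combinations

def is_single_crossing(profile):
    # profile: list of rankings (best candidate first), voters in line order
    pos = [{c: rank.index(c) for c in rank} for rank in profile]
    for a, b in combinations(profile[0], 2):
        prefers_a = [p[a] < p[b] for p in pos]
        # the preference over {a, b} may switch at most once along the order
        switches = sum(prefers_a[i] != prefers_a[i + 1]
                       for i in range(len(prefers_a) - 1))
        if switches > 1:
            return False
    return True
```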

We propose a model for online graph problems where algorithms are given access to an oracle that predicts (e.g., based on past data) the degrees of nodes in the graph. Within this model, we study the classic problem of online bipartite matching, and a natural greedy matching algorithm called MinPredictedDegree, which uses predictions of the degrees of offline nodes. For the bipartite version of a stochastic graph model due to Chung, Lu, and Vu where the expected values of the offline degrees are known and used as predictions, we show that MinPredictedDegree stochastically dominates any other online algorithm, i.e., it is optimal for graphs drawn from this model. Since the "symmetric" version of the model, where all online nodes are identical, is a special case of the well-studied "known i.i.d. model", it follows that the competitive ratio of MinPredictedDegree on such inputs is at least 0.7299. For the special case of graphs with power law degree distributions, we show that MinPredictedDegree frequently produces matchings almost as large as the true maximum matching on such graphs. We complement these results with an extensive empirical evaluation showing that MinPredictedDegree compares favorably to state-of-the-art online algorithms for online matching.
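
MinPredictedDegree itself is a one-line greedy rule: when an online node arrives, match it to the free offline neighbour whose predicted degree is smallest. A sketch (the interface names are ours; ties are broken by node id for determinism):

```python
def min_predicted_degree(online_neighbors, predicted_degree):
    """online_neighbors: offline-neighbour lists of online nodes, in arrival order.
    predicted_degree: dict mapping offline node id -> predicted degree."""
    match = {}                      # offline id -> online arrival index
    for t, nbrs in enumerate(online_neighbors):
        free = [v for v in nbrs if v not in match]
        if free:
            match[min(free, key=lambda v: (predicted_degree[v], v))] = t
    return match
```

The intuition behind the rule is that offline nodes with low predicted degree will see few future chances to be matched, so they should be used first.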

In previous work we have proposed an efficient pattern matching algorithm based on the notion of set automaton. In this article we investigate how set automata can be exploited to implement efficient term rewriting procedures. These procedures interleave pattern matching steps and rewriting steps and thus smoothly integrate redex discovery and subterm replacement. Concretely, we propose an optimised algorithm for outermost rewriting of left-linear term rewriting systems, prove its correctness, and present the results of some implementation experiments.
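
For reference, outermost rewriting is easy to state without any automaton: try to reduce at the root, and only if no rule applies there, recurse into the arguments left to right. A naive rewriter along these lines (tuples as terms, strings as variables; it redoes pattern matching from scratch at every step, which is exactly the redundancy that set-automaton-based procedures avoid):

```python
def match(pattern, term, subst):
    # Left-linearity lets us bind each variable without a consistency check.
    if isinstance(pattern, str):                     # variable
        subst[pattern] = term
        return True
    if not isinstance(term, tuple) or pattern[0] != term[0] \
            or len(pattern) != len(term):
        return False
    return all(match(p, t, subst) for p, t in zip(pattern[1:], term[1:]))

def substitute(term, subst):
    if isinstance(term, str):
        return subst[term]
    return (term[0],) + tuple(substitute(a, subst) for a in term[1:])

def step(term, rules):
    # Leftmost-outermost: root position first, then arguments left to right.
    for lhs, rhs in rules:
        subst = {}
        if match(lhs, term, subst):
            return substitute(rhs, subst)
    for i, arg in enumerate(term[1:], start=1):
        reduced = step(arg, rules)
        if reduced is not None:
            return term[:i] + (reduced,) + term[i + 1:]
    return None

def normalize(term, rules, fuel=10_000):
    for _ in range(fuel):
        reduced = step(term, rules)
        if reduced is None:
            return term
        term = reduced
    raise RuntimeError("fuel exhausted")

# Peano addition: add(0, y) -> y ; add(s(x), y) -> s(add(x, y))
rules = [(('add', ('0',), 'y'), 'y'),
         (('add', ('s', 'x'), 'y'), ('s', ('add', 'x', 'y')))]
```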

In scheduling, we are given a set of jobs, together with a number of machines and our goal is to decide for every job, when and on which machine(s) it should be scheduled in order to minimize some objective function. Different machine models, job characteristics and objective functions result in a multitude of scheduling problems and many of them are NP-hard, even for a fixed number of identical machines. However, using pseudo-polynomial or approximation algorithms, we can still hope to solve some of these problems efficiently. In this work, we give conditional running time lower bounds for a large number of scheduling problems, indicating the optimality of some classical algorithms. In particular, we show that the dynamic programming algorithm by Lawler and Moore is probably optimal for $1||\sum w_jU_j$ and $Pm||C_{max}$. Moreover, the FPTAS by Gens and Levner for $1||\sum w_jU_j$ and the algorithm by Lee and Uzsoy for $P2||\sum w_jC_j$ are probably optimal as well. There is still small room for improvement for the $1|Rej\leq Q|\sum w_jU_j$ algorithm by Zhang et al. and the algorithm for $1||\sum T_j$ by Lawler. We also give a lower bound for $P2|any|C_{max}$ and improve the dynamic program by Du and Leung from $\mathcal{O}(nP^2)$ to $\mathcal{O}(nP)$ to match this lower bound. Here, $P$ is the sum of all processing times. The same idea also improves the algorithm for $P3|any|C_{max}$ by Du and Leung from $\mathcal{O}(nP^5)$ to $\mathcal{O}(nP^2)$. The lower bounds in this work all either rely on the (Strong) Exponential Time Hypothesis or the $(\min,+)$-conjecture. While our results suggest the optimality of some classical algorithms, they also motivate future research in cases where the best known algorithms do not quite match the lower bounds.
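
The Lawler and Moore algorithm referenced above is a textbook $\mathcal{O}(nP)$ dynamic program for $1||\sum w_jU_j$: process jobs in earliest-due-date order and track, for every total processing time $t$, the maximum weight of an on-time job set finishing by $t$. A compact sketch:

```python
def lawler_moore(jobs):
    """jobs: list of (processing time p, weight w, due date d).
    Returns the minimum total weight of late jobs for 1||sum w_j U_j."""
    jobs = sorted(jobs, key=lambda job: job[2])      # EDD order
    P = sum(p for p, _, _ in jobs)
    # dp[t] = max weight of an on-time job set with total processing time t
    dp = [0] * (P + 1)
    for p, w, d in jobs:
        for t in range(min(d, P), p - 1, -1):        # knapsack step, capped at d
            dp[t] = max(dp[t], dp[t - p] + w)
    return sum(w for _, w, _ in jobs) - max(dp)
```

The conditional lower bounds discussed in the abstract indicate that this running time is likely essentially optimal.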

Decision-makers often face the "many bandits" problem, where one must simultaneously learn across related but heterogeneous contextual bandit instances. For instance, a large retailer may wish to dynamically learn product demand across many stores to solve pricing or inventory problems, making it desirable to learn jointly for stores serving similar customers; alternatively, a hospital network may wish to dynamically learn patient risk across many providers to allocate personalized interventions, making it desirable to learn jointly for hospitals serving similar patient populations. We study the setting where the unknown parameter in each bandit instance can be decomposed into a global parameter plus a sparse instance-specific term. Then, we propose a novel two-stage estimator that exploits this structure in a sample-efficient way by using a combination of robust statistics (to learn across similar instances) and LASSO regression (to debias the results). We embed this estimator within a bandit algorithm, and prove that it improves asymptotic regret bounds in the context dimension $d$; this improvement is exponential for data-poor instances. We further demonstrate how our results depend on the underlying network structure of bandit instances. Finally, we illustrate the value of our approach on synthetic and real datasets.
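
The parameter decomposition $\theta_i = \theta_{\text{global}} + \delta_i$ with sparse $\delta_i$ suggests the shape of the estimator: pool across instances with a robust aggregate, so that a few outlier instances cannot corrupt the global estimate, then shrink each instance's deviation toward zero, LASSO-style. A schematic of that two-stage idea using a coordinate-wise median and soft-thresholding (the paper's actual estimator is more refined; this only illustrates the structure):

```python
from statistics import median

def soft_threshold(x, lam):
    # LASSO-style shrinkage of a scalar toward zero
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def two_stage(instance_estimates, lam):
    """instance_estimates: per-instance parameter estimates (lists of floats).
    Stage 1: coordinate-wise median -> robust global parameter.
    Stage 2: soft-threshold each instance's deviation from the global fit."""
    d = len(instance_estimates[0])
    theta = [median(est[j] for est in instance_estimates) for j in range(d)]
    return [[theta[j] + soft_threshold(est[j] - theta[j], lam) for j in range(d)]
            for est in instance_estimates]
```

Instances close to the global parameter are pulled exactly onto it (the sparse deviation is estimated as zero), while genuinely different instances keep their own estimates.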

Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail to match entities that have different facts in the two KGs. In this paper, we introduce the topic entity graph, a local sub-graph of an entity, to represent entities with their contextual information in the KG. From this view, the KG-alignment task can be formulated as a graph matching problem; we further propose a graph-attention-based solution, which first matches all entities in two topic entity graphs, and then jointly models the local matching information to derive a graph-level matching vector. Experiments show that our model outperforms previous state-of-the-art methods by a large margin.

In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows capturing fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior works either simply aggregate the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or use a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer the image-text similarity. Our approach achieves state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% in text retrieval from image query, and 18.2% in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% and image retrieval by 16.6% (based on Recall@1 using the 5K test set).
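
Text-to-image stacked cross attention can be sketched in a few lines: each word attends over region features via softmaxed (zero-thresholded) cosine similarities, and the image-text score averages the cosine between each word and its attended image vector. A dependency-free sketch with toy vectors (our simplification; the published model also includes the image-to-text direction and learned features):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def scan_t2i_score(regions, words, lam=9.0):
    """Stacked cross attention, text-to-image direction.
    regions, words: lists of feature vectors; lam: softmax temperature."""
    total = 0.0
    for w in words:
        sims = [max(cosine(w, r), 0.0) for r in regions]   # thresholded at 0
        weights = [math.exp(lam * s) for s in sims]
        z = sum(weights)
        attended = [sum(wt * r[k] for wt, r in zip(weights, regions)) / z
                    for k in range(len(regions[0]))]
        total += cosine(w, attended)                       # word vs attended image
    return total / len(words)
```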
