
In 1965, Erd\H{o}s asked: what is the largest size of a family of $k$-element subsets of an $n$-element set that does not contain a matching of size $s+1$? In this note, we improve upon a recent result of Frankl and resolve this problem for $s>101k^{3}$ and $(s+1)k\le n<(s+1)(k+\frac{1}{100k})$.
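For context, the two classical candidate families in this problem (the extremal constructions in the Erd\H{o}s matching conjecture) are the following; the notation below is standard and not taken from the note itself:

$$\mathcal{A}_1=\Big\{A\in\tbinom{[n]}{k}: A\cap[s]\neq\emptyset\Big\},\quad |\mathcal{A}_1|=\binom{n}{k}-\binom{n-s}{k};\qquad \mathcal{A}_2=\binom{[(s+1)k-1]}{k},\quad |\mathcal{A}_2|=\binom{(s+1)k-1}{k}.$$

Neither family contains a matching of size $s+1$: every member of $\mathcal{A}_1$ meets the fixed $s$-element set $[s]$, while any $s+1$ pairwise disjoint members of $\mathcal{A}_2$ would need $(s+1)k$ vertices and only $(s+1)k-1$ are available. For $s$ large relative to $k$ and $n$ close to $(s+1)k$, as in the regime considered here, the clique-type family $\mathcal{A}_2$ is the larger of the two.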

Related content

Recent advances have shown the success of using reinforcement learning and search to solve NP-hard graph-related tasks such as the Traveling Salesman Problem and Graph Edit Distance computation. However, it remains unclear how one can efficiently and accurately detect the occurrences of a small query graph in a large target graph, a core operation in graph database search, biomedical analysis, social group finding, etc. This task is called Subgraph Matching, which essentially performs a subgraph isomorphism check between a query graph and a large target graph. One promising approach to this classical problem is the "learning-to-search" paradigm, where a reinforcement learning (RL) agent is designed with a learned policy to guide a search algorithm to quickly find the solution without any solved instances for supervision. However, for the specific task of Subgraph Matching, though the query graph given by the user as input is usually small, the target graph is often orders of magnitude larger. This poses challenges to the neural network design and can lead to solution and reward sparsity. In this paper, we propose N-BLS with two innovations to tackle these challenges: (1) a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs at each search state; (2) a Monte Carlo Tree Search enhanced bi-level search framework for training the policy and value networks. Experiments on five large real-world target graphs show that N-BLS significantly improves subgraph matching performance.
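The paper's network and training code are not shown here; as a baseline for the kind of search tree a learned policy must prune, the following is a minimal backtracking search for subgraph isomorphism (the graph representation and the degree-ordering heuristic are our own illustrative choices, not N-BLS):

```python
def subgraph_match(query, target):
    """Naive backtracking search for a subgraph-isomorphic embedding of
    `query` into `target`; each graph is a dict: node -> set of neighbours.
    Returns one mapping query-node -> target-node, or None."""
    order = sorted(query, key=lambda u: -len(query[u]))  # high-degree first

    def extend(mapping, used):
        if len(mapping) == len(order):
            return dict(mapping)
        u = order[len(mapping)]
        for v in target:
            if v in used:
                continue
            # every already-mapped neighbour of u must map to a neighbour of v
            if all(mapping[w] in target[v] for w in query[u] if w in mapping):
                mapping[u] = v
                used.add(v)
                result = extend(mapping, used)
                if result:
                    return result
                del mapping[u]
                used.remove(v)
        return None

    return extend({}, set())
```

Because each query edge is verified when its later endpoint is assigned, any returned mapping embeds every query edge into the target; the exponential branching over candidate target nodes is exactly where a learned policy and value network can help.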

To guarantee all agents are matched, the classic Deferred Acceptance algorithm needs complete preference lists. In practice, preference lists are short, yet stable matching still works well. This raises two questions: $\bullet$ Why? $\bullet$ Which proposals should agents include in their preference lists? We study these questions in a model, introduced by Lee (2016), with preferences based on correlated cardinal utilities: these utilities are based on common public ratings of each agent together with individual private adjustments. Lee showed that for suitable utility functions, in large markets, with high probability, for most agents, all stable matchings yield similarly valued utilities. By means of a new analysis, we strengthen Lee's result, showing that in large markets, with high probability, for all but the agents with the lowest public ratings, all stable matchings yield similarly valued utilities. We can then deduce that for all but the agents with the lowest public ratings, each agent has an easily identified length $O(\log n)$ preference list that includes all of its stable matches, addressing the second question above. We note that this identification uses an initial communication phase. We extend these results to settings where the two sides have unequal numbers of agents, to many-to-one settings, e.g., employers and workers, and we also show the existence of an $\epsilon$-Bayes-Nash equilibrium in which every agent makes relatively few proposals. These results all rely on a new technique for sidestepping the conditioning between the matching events that occur over the course of a run of the Deferred Acceptance algorithm. We complement these theoretical results with an experimental study.
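For reference, here is a minimal proposer-side Deferred Acceptance implementation that accepts short (truncated) preference lists; the data layout and names are illustrative, not from the paper:

```python
def deferred_acceptance(prop_prefs, recv_prefs):
    """One-to-one Deferred Acceptance with possibly truncated lists.
    prop_prefs[p] is p's list of receivers, best first; recv_prefs[r] maps
    proposer -> rank (lower is better).  Proposers who exhaust their list
    stay unmatched -- the failure mode short lists introduce."""
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                      # receiver -> proposer
    free = list(prop_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(prop_prefs[p]):
            continue                # p's list is exhausted; p stays unmatched
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if p not in recv_prefs[r]:
            free.append(p)          # r did not list p: automatic rejection
        elif r not in match:
            match[r] = p            # r was free and accepts tentatively
        elif recv_prefs[r][p] < recv_prefs[r][match[r]]:
            free.append(match[r])   # r trades up; old partner re-enters
            match[r] = p
        else:
            free.append(p)          # r prefers its current partner
    return match
```

With complete lists this is the textbook Gale-Shapley procedure; the paper's point is that, under correlated cardinal utilities, lists of length $O(\log n)$ already suffice for almost all agents.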

Spiral Galaxies is a pencil-and-paper puzzle played on a grid of unit squares: given a set of points called centers, the goal is to partition the grid into polyominoes such that each polyomino contains exactly one center and is $180^\circ$ rotationally symmetric about its center. We show that this puzzle is NP-complete, ASP-complete, and #P-complete even if (a) all solutions to the puzzle have rectangles for polyominoes; or (b) the polyominoes are required to be rectangles and all solutions to the puzzle have just 1$\times$1, 1$\times$3, and 3$\times$1 rectangles. The proof for the latter variant also implies NP/ASP/#P-completeness of finding a noncrossing perfect matching in distance-2 grid graphs where edges connect vertices of Euclidean distance 2. Moreover, we prove NP-completeness of the design problem of minimizing the number of centers such that there exists a set of galaxies that exactly cover a given shape.
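As a small illustration of the symmetry constraint (not of the hardness proofs), here is a sketch that checks whether a candidate galaxy is $180^\circ$ rotationally symmetric about its center; the coordinate convention is an assumption of ours:

```python
def is_galaxy(cells, center):
    """Check 180-degree rotational symmetry of a set of grid cells about
    `center`.  `cells` is a set of (row, col) pairs; `center` is a point in
    (row, col) coordinates whose entries are multiples of 0.5, so it may sit
    at a cell centre, an edge midpoint, or a corner -- the three placements
    Spiral Galaxies allows.  (The puzzle also requires the cells to be
    connected; that check is omitted here.)"""
    zr, zc = center
    for r, c in cells:
        # cell (r, c) has midpoint (r+0.5, c+0.5); its image under
        # 180-degree rotation about (zr, zc) must again be a cell of the set
        image = (round(2 * zr - r - 1), round(2 * zc - c - 1))
        if image not in cells:
            return False
    return True
```

For example, `is_galaxy({(0, 0), (1, 0)}, (1.0, 0.5))` returns `True`: the vertical domino is symmetric about the midpoint of its shared edge.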

Mark-point dependence plays a critical role in research problems that can be fitted into the general framework of marked point processes. In this work, we focus on adjusting for mark-point dependence when estimating the mean and covariance functions of the mark process, given independent replicates of the marked point process. We assume that the mark process is a Gaussian process and the point process is a log-Gaussian Cox process, where the mark-point dependence is generated through the dependence between two latent Gaussian processes. Under this framework, naive local linear estimators ignoring the mark-point dependence can be severely biased. We show that this bias can be corrected using a local linear estimator of the cross-covariance function and establish uniform convergence rates of the bias-corrected estimators. Furthermore, we propose a test statistic based on local linear estimators for mark-point independence, which is shown to converge to an asymptotic normal distribution at the parametric $\sqrt{n}$ convergence rate. Model diagnostics tools are developed for key model assumptions and a robust functional permutation test is proposed for a more general class of marked point processes. The effectiveness of the proposed methods is demonstrated using extensive simulations and applications to two real data examples.
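To make the abstract concrete, the naive local linear estimator of the mark mean function $\mu(t)$ (in our notation, not necessarily the paper's) solves a kernel-weighted least-squares problem at each location $t$:

$$\hat\mu(t)=\hat\beta_0,\qquad (\hat\beta_0,\hat\beta_1)=\operatorname*{arg\,min}_{\beta_0,\beta_1}\;\sum_{i=1}^{n}\sum_{j} K_h\big(t_{ij}-t\big)\,\big(Y_i(t_{ij})-\beta_0-\beta_1\,(t_{ij}-t)\big)^2,$$

where $t_{ij}$ ranges over the points of the $i$-th replicate, $Y_i$ is its mark process, and $K_h(\cdot)=K(\cdot/h)/h$ is a kernel with bandwidth $h$. When the point process and the marks are dependent, the sampling locations themselves carry information about the marks, which is the source of the bias this paper corrects.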

Given an integer $k$ and a graph where every edge is colored either red or blue, the goal of the exact matching problem is to find a perfect matching with the property that exactly $k$ of its edges are red. Soon after Papadimitriou and Yannakakis introduced the problem in 1982, a randomized polynomial-time algorithm solving the problem was described by Mulmuley et al. Despite a lot of effort, it is still not known today whether a deterministic polynomial-time algorithm exists. This makes the exact matching problem an important candidate for testing the popular conjecture that the complexity classes P and RP are equal. In a recent article, progress was made towards this goal by showing that for (bipartite) graphs of bounded (bipartite) independence number, a polynomial-time algorithm exists. In terms of parameterized complexity, this algorithm was an XP algorithm parameterized by the independence number. In this article, we improve the techniques to obtain an FPT algorithm for bipartite graphs. If the input is a general graph, we show that one can at least compute a perfect matching $M$ which has the correct number of red edges modulo 2. This is motivated by our last result, in which we prove that an FPT algorithm for general graphs reduces to the problem of finding in polynomial time a perfect matching $M$ with the correct number of red edges modulo 2 and additionally at most $k$ red edges.
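As a correctness oracle for tiny instances (and emphatically not the randomized or FPT algorithms discussed above), here is a brute-force sketch of the problem statement; the graph encoding is our own:

```python
from itertools import combinations

def exact_matching(n, edges, k):
    """Brute-force reference for the exact matching problem.  `edges` is a
    list of (u, v, colour) triples with colour in {"red", "blue"} on
    vertices 0..n-1; returns a perfect matching with exactly k red edges,
    or None.  Exponential time: a checker for tiny instances only."""
    for subset in combinations(edges, n // 2):
        covered = set()
        for u, v, _ in subset:
            covered.update((u, v))
        # n // 2 edges covering all n vertices form a perfect matching
        if len(covered) == n and sum(c == "red" for *_, c in subset) == k:
            return subset
    return None
```

The open question is precisely whether the `None`/matching answer can be computed deterministically in polynomial time.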

3D shape matching is a long-standing problem in computer vision and computer graphics. While deep neural networks were shown to lead to state-of-the-art results in shape matching, existing learning-based approaches are limited in the context of multi-shape matching: (i) either they focus on matching pairs of shapes only and thus suffer from cycle-inconsistent multi-matchings, or (ii) they require an explicit template shape to address the matching of a collection of shapes. In this paper, we present a novel approach for deep multi-shape matching that ensures cycle-consistent multi-matchings while not depending on an explicit template shape. To this end, we utilise a shape-to-universe multi-matching representation that we combine with powerful functional map regularisation, so that our multi-shape matching neural network can be trained in a fully unsupervised manner. Since the functional map regularisation is used only at training time, no functional maps need to be computed when predicting correspondences, thereby allowing for fast inference. We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets, and, most remarkably, that our unsupervised method even outperforms recent supervised methods.
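For readers unfamiliar with functional maps, the sketch below shows the standard conversion of a point-to-point correspondence into a functional map, the object such regularisation acts on; variable names and the least-squares formulation are generic, not taken from the paper:

```python
import numpy as np

def functional_map(phi_src, phi_tgt, corr):
    """Convert a point-to-point correspondence into a functional map.
    phi_src: (n_src, k) Laplace-Beltrami eigenbasis of the source shape,
    phi_tgt: (n_tgt, k) eigenbasis of the target shape,
    corr[i] = index of the source vertex matched to target vertex i.
    Returns the k x k matrix C with phi_tgt @ C ~= Pi @ phi_src, where Pi
    is the permutation induced by `corr`."""
    pi_phi_src = phi_src[corr]      # pull the source basis through the map
    C, *_ = np.linalg.lstsq(phi_tgt, pi_phi_src, rcond=None)
    return C
```

Structural priors (e.g., near-orthogonality or commutativity with the Laplacians) can then be imposed on $C$ as training losses, which is the general idea behind functional map regularisation.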

We present a new approach for finding matchings in dense graphs by building on Szemer\'edi's celebrated Regularity Lemma. This allows us to obtain non-trivial, albeit slight, improvements over longstanding bounds for matchings in streaming and dynamic graphs. In particular, we establish the following results for $n$-vertex graphs: * A deterministic single-pass streaming algorithm that finds a $(1-o(1))$-approximate matching in $o(n^2)$ bits of space. This constitutes the first single-pass algorithm for this problem in sublinear space that improves over the $\frac{1}{2}$-approximation of the greedy algorithm. * A randomized fully dynamic algorithm that with high probability maintains a $(1-o(1))$-approximate matching in $o(n)$ worst-case update time per edge insertion or deletion. The algorithm works even against an adaptive adversary. This is the first $o(n)$ update-time dynamic algorithm with approximation guarantee arbitrarily close to one. Given the use of the Regularity Lemma, the improvement obtained by our algorithms over trivial bounds is only by some $(\log^*{n})^{\Theta(1)}$ factor. Nevertheless, in each case, they show that the ``right'' answer to the problem is not what is dictated by the previous bounds. Finally, in the streaming model, we also present a randomized $(1-o(1))$-approximation algorithm whose space can be upper bounded by the density of certain Ruzsa-Szemer\'edi (RS) graphs. While RS graphs by now have been used extensively to prove streaming lower bounds, ours is the first to use them as an upper bound tool for designing improved streaming algorithms.
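For contrast, the $\frac{1}{2}$-approximation greedy baseline mentioned above is nearly a one-liner in the streaming model; this sketch is ours:

```python
def greedy_streaming_matching(edge_stream):
    """The classical single-pass greedy baseline: keep an edge iff both
    endpoints are still unmatched.  Uses O(n) space and guarantees a
    1/2-approximate maximum matching."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

The $\frac{1}{2}$ guarantee follows because every edge of an optimal matching shares an endpoint with some chosen edge, and each chosen edge can block at most two optimal edges; beating this in a single sublinear-space pass is exactly what the first result above achieves.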

In the 1970s, Gy\H{o}ri and Lov\'{a}sz showed that for a $k$-connected $n$-vertex graph, a given set of terminal vertices $t_1, \dots, t_k$ and natural numbers $n_1, \dots, n_k$ satisfying $\sum_{i=1}^{k} n_i = n$, a connected vertex partition $S_1, \dots, S_k$ satisfying $t_i \in S_i$ and $|S_i| = n_i$ exists. However, polynomial-time algorithms that actually compute such partitions are so far known only for $k \leq 4$. This motivates us to take a new approach and constrain this problem to particular graph classes instead of restricting the values of $k$. More precisely, we consider $k$-connected chordal graphs and a broader class of graphs related to them. For the former, we give an $O(n^2)$-time algorithm that solves the problem exactly; for the latter, an $O(n^4)$-time algorithm that deviates from the required part sizes by at most one vertex.
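The partition algorithms themselves are beyond an abstract, but verifying a proposed partition is straightforward; here is a hedged sketch (the adjacency-dict graph representation is our assumption):

```python
from collections import deque

def is_valid_partition(adj, parts, terminals, sizes):
    """Verify a Gyori--Lovasz style partition: parts[i] must contain
    terminals[i], have exactly sizes[i] vertices, and induce a connected
    subgraph of the graph given by adjacency dict `adj`."""
    for part, t, size in zip(parts, terminals, sizes):
        part = set(part)
        if t not in part or len(part) != size:
            return False
        # BFS inside the part to test connectivity of the induced subgraph
        seen, queue = {t}, deque([t])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w in part and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != part:
            return False
    # the parts must together cover every vertex exactly once
    return sorted(v for p in parts for v in p) == sorted(adj)
```

The Gy\H{o}ri-Lov\'{a}sz theorem guarantees that a partition passing this check exists whenever the graph is $k$-connected; the open algorithmic question is how to find one efficiently for general $k$.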

In this paper, we consider a variant of the classical algorithmic problem of checking whether a given word $v$ is a subsequence of another word $w$. More precisely, we consider the problem of deciding, given a number $p$ (defining a range-bound) and two words $v$ and $w$, whether there exists a factor $w[i:i+p-1]$ (or, in other words, a range of length $p$) of $w$ having $v$ as subsequence (i.e., $v$ occurs as a subsequence in the bounded range $w[i:i+p-1]$). We give matching upper and lower quadratic bounds for the time complexity of this problem. Further, we consider a series of algorithmic problems in this setting, in which, for given integers $k$, $p$ and a word $w$, we analyse the set $p$-Subseq$_{k}(w)$ of all words of length $k$ which occur as subsequence of some factor of length $p$ of $w$. Among these, we consider the $k$-universality problem, the $k$-equivalence problem, as well as problems related to absent subsequences. Surprisingly, unlike the case of the classical model of subsequences in words where such problems have efficient solutions in general, we show that most of these problems become intractable in the new setting when subsequences in bounded ranges are considered. Finally, we provide an example of how some of our results can be applied to subsequence matching problems for circular words.
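The straightforward quadratic algorithm for the central decision problem looks as follows; this is a sketch of the obvious approach, not necessarily the paper's algorithm, though it matches the quadratic upper bound stated above:

```python
def occurs_in_range(v, w, p):
    """Decide whether v occurs as a subsequence of some length-p factor
    w[i : i+p] of w.  For each window start, greedily attempt the leftmost
    embedding of v inside the window; O(|w| * p) time overall."""
    for i in range(len(w) - len(v) + 1):
        window, j = w[i:i + p], 0
        for ch in window:
            if j < len(v) and ch == v[j]:
                j += 1          # greedy leftmost embedding inside the window
        if j == len(v):
            return True
    return False
```

For instance, `occurs_in_range("aba", "abcba", 4)` is `True` (witness window `"abcb"`... extended greedily fails, but window `"bcba"` starting at index 1 contains `"aba"`? it does not; the witness is in fact `p = 5`, the whole word), which illustrates how tight range-bounds can exclude embeddings that exist in the full word.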

Multi-label text classification refers to the problem of assigning each given document its most relevant labels from the label set. Commonly, the metadata of the given documents and the hierarchy of the labels are available in real-world applications. However, most existing studies focus on modeling only the text information, with a few attempts to utilize either metadata or hierarchy signals, but not both. In this paper, we bridge the gap by formalizing the problem of metadata-aware text classification in a large label hierarchy (e.g., with tens of thousands of labels). To address this problem, we present the MATCH solution -- an end-to-end framework that leverages both metadata and hierarchy information. To incorporate metadata, we pre-train the embeddings of text and metadata in the same space and also leverage fully-connected attentions to capture the interrelations between them. To leverage the label hierarchy, we propose different ways to regularize the parameters and output probability of each child label by its parents. Extensive experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH over state-of-the-art deep learning baselines.
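The abstract does not spell out the regularisation; one plausible instantiation of "regularize the output probability of each child label by its parents" is a hinge-style consistency penalty (entirely our illustration, not the paper's exact loss):

```python
import numpy as np

def hierarchy_penalty(probs, parent):
    """Penalise any label whose predicted probability exceeds its
    parent's, encouraging predictions consistent with the hierarchy.
    probs: (n_labels,) array of per-label sigmoid outputs;
    parent[c] = index of c's parent label, or -1 for root labels."""
    penalty = 0.0
    for child, par in enumerate(parent):
        if par >= 0:
            penalty += max(0.0, probs[child] - probs[par]) ** 2
    return penalty
```

Added to the classification loss with a small weight, such a term pushes the model toward predicting a child label only when it also assigns at least as much probability to the child's ancestors.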
