There is a large and important collection of Ramsey-type combinatorial problems, closely related to central problems in complexity theory, that can be formulated in terms of the asymptotic growth of the size of the maximum independent sets in powers of a fixed small (directed or undirected) hypergraph, also called the Shannon capacity. An important instance of this is the corner problem, studied in the context of multiparty communication complexity in the Number On the Forehead (NOF) model. Versions of this problem and the NOF connection have seen much interest (and progress) in recent works of Linial, Pitassi and Shraibman (ITCS 2019) and Linial and Shraibman (CCC 2021). We introduce and study a general algebraic method for lower bounding the Shannon capacity of directed hypergraphs via combinatorial degenerations, a combinatorial kind of "approximation" of subgraphs that originates from the study of matrix multiplication in algebraic complexity theory (where it plays an important role) but which we use in a novel way. Using the combinatorial degeneration method, we make progress on the corner problem by explicitly constructing a corner-free subset of $F_2^n \times F_2^n$ of size $\Omega(3.39^n/\mathrm{poly}(n))$, which improves the previous lower bound $\Omega(2.82^n)$ of Linial, Pitassi and Shraibman (ITCS 2019) and brings us closer to the best upper bound $4^{n - o(n)}$. Our new construction of corner-free sets implies an improved NOF protocol for the Eval problem. In the Eval problem over a group $G$, three players need to determine whether their inputs $x_1, x_2, x_3 \in G$ sum to zero. We find that the NOF communication complexity of the Eval problem over $F_2^n$ is at most $0.24n + O(\log n)$, which improves the previous upper bound of $0.5n + O(\log n)$.
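
For reference, the standard definitions, which the abstract does not restate: for an abelian group $G$, a corner in $G \times G$ is a triple of the form
\[
\{(x,y),\; (x+\lambda,\, y),\; (x,\, y+\lambda)\}, \qquad \lambda \in G,\ \lambda \neq 0,
\]
and a subset of $G \times G$ is corner-free if it contains no corner. The exponent arithmetic behind the stated protocol cost appears to be the following: via the standard correspondence between corner-free sets and NOF protocols for Eval, a corner-free subset of $F_2^n \times F_2^n$ of size $c^n/\mathrm{poly}(n)$ yields a protocol of cost roughly $\log_2(4^n/c^n) + O(\log n) = n \log_2(4/c) + O(\log n)$, and $\log_2(4/3.39) \approx 0.24$.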

Related content

We introduce a new periodicity detection algorithm for binary time series of event onsets, the Gaussian Mixture Periodicity Detection Algorithm (GMPDA). The algorithm casts periodicity detection as the problem of inferring the parameters of a generative model. We specify two models, the Clock model and the Random Walk model, which describe two different periodic phenomena and provide a generative framework. The algorithm achieves strong results on test cases for single and multiple periodicity detection across varying noise levels. We also evaluate the performance of GMPDA on real data consisting of recorded leg movements during sleep, where GMPDA identifies the expected periodicities despite high noise levels. The paper's key contributions are two new models for generating periodic event behavior and the GMPDA algorithm for multiple periodicity detection, which remains highly accurate under noise.
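
GMPDA itself is not reproduced here. As a minimal illustration of the underlying idea, fitting a Gaussian mixture to inter-onset intervals so that mixture means near multiples of a common value reveal candidate periods, the following sketch uses scikit-learn; the synthetic event series, the number of components, and the lag cutoff are our assumptions, not the paper's.

```python
# Minimal sketch of periodicity detection via a Gaussian mixture over
# inter-onset intervals. This illustrates the idea behind GMPDA, not the
# published algorithm; the data, n_components, and lag cutoff are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic event onsets: period ~50 with Gaussian jitter ("Clock"-like).
onsets = np.cumsum(rng.normal(loc=50.0, scale=3.0, size=200))

# Differences between all pairs of onsets carry the period and its multiples.
diffs = onsets[None, :] - onsets[:, None]
intervals = diffs[diffs > 0].reshape(-1, 1)
intervals = intervals[intervals[:, 0] < 300.0]  # keep small lags only

# Fit a mixture; component means near k * period indicate periodic structure.
gmm = GaussianMixture(n_components=5, random_state=0).fit(intervals)
for mean, var in sorted(zip(gmm.means_.ravel(), gmm.covariances_.ravel())):
    print(f"candidate lag ~ {mean:6.1f}  (std {np.sqrt(var):.1f})")
```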

Future Event Generation (FEG) aims to generate fluent and reasonable descriptions of future events given preceding events. It requires not only fluent text generation but also commonsense reasoning to maintain the coherence of the entire event story. However, existing FEG methods easily fall into repeated or generic events because they impose no logical constraints on the generation process. In this paper, we propose a novel explainable FEG framework that consists of a commonsense inference model (IM) and an event generation model (GM). The IM, which is pre-trained on the commonsense knowledge graph ATOMIC, learns to interpret the preceding events and conducts commonsense reasoning to reveal the characters' psychology, such as intent, reaction, and needs, as latent variables. The GM then takes this commonsense knowledge as prompts to guide and enforce the generation of logically coherent future events. As a unique merit, the commonsense prompts can be decoded into textual descriptions, yielding explanations for the future event. Automatic and human evaluations demonstrate that our approach generates more coherent, specific, and logical future events than strong baselines.
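
As a toy illustration of the prompt mechanism described above, where latent commonsense variables are decoded into text and used to condition the generator, consider the following sketch; all names and the stubbed models are hypothetical, since the trained IM and GM are neural models that cannot be reproduced here.

```python
# Toy illustration of commonsense-as-prompt generation (all names hypothetical;
# the real IM/GM are trained neural models, which this sketch only stubs out).
from dataclasses import dataclass

@dataclass
class CommonsensePrompt:
    intent: str    # the character's intent (ATOMIC-style xIntent)
    reaction: str  # the character's reaction (xReact)
    need: str      # the character's need (xNeed)

def inference_model(preceding_event: str) -> CommonsensePrompt:
    """Stub for the IM; in the paper this is a model pre-trained on ATOMIC."""
    return CommonsensePrompt(intent="to get help", reaction="worried",
                             need="to call a doctor")

def generation_model(preceding_event: str, prompt: CommonsensePrompt) -> str:
    """Stub for the GM: prepends the decoded commonsense prompt to condition generation."""
    prefix = f"[intent: {prompt.intent}] [reaction: {prompt.reaction}] [need: {prompt.need}]"
    return f"{prefix} <future event continuing {preceding_event!r}>"

event = "Alex's friend fainted at the party."
prompt = inference_model(event)           # latent commonsense, decodable as text
print(generation_model(event, prompt))    # the prompt doubles as an explanation
```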

A strict bramble of a graph $G$ is a collection of pairwise-intersecting connected subgraphs of $G.$ The order of a strict bramble ${\cal B}$ is the minimum size of a set of vertices intersecting all elements of ${\cal B}.$ The strict bramble number of $G,$ denoted by ${\sf sbn}(G),$ is the maximum order of a strict bramble in $G.$ The strict bramble number of $G$ can be seen as a way to extend the notion of acyclicity, starting from the fact that (non-empty) acyclic graphs are exactly the graphs where every strict bramble has order one. We initiate the study of this graph parameter by providing three alternative definitions, each revealing different structural characteristics. The first is a min-max theorem asserting that ${\sf sbn}(G)$ is equal to the minimum $k$ for which $G$ is a minor of the lexicographic product of a tree and a clique on $k$ vertices (also known as the lexicographic tree product number). The second characterization is in terms of a new variant of a tree decomposition called a lenient tree decomposition. We prove that ${\sf sbn}(G)$ is equal to the minimum $k$ for which there exists a lenient tree decomposition of $G$ of width at most $k.$ The third characterization is in terms of extremal graphs. For this, we define, for each $k,$ the concept of a $k$-domino-tree and we prove that every edge-maximal graph of strict bramble number at most $k$ is a $k$-domino-tree. We also identify three graphs that constitute the minor-obstruction set of the class of graphs with strict bramble number at most two. We complete our results by proving that, given some $G$ and $k,$ deciding whether ${\sf sbn}(G) \leq k$ is an ${\sf NP}$-complete problem.
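
For intuition, here is a small example of ours, not taken from the abstract: in a (non-empty) tree $T,$ pairwise-intersecting subtrees satisfy the Helly property and therefore share a common vertex, so every strict bramble has order one and ${\sf sbn}(T) = 1;$ in contrast, the three edges of the triangle $C_3,$ viewed as connected subgraphs, pairwise intersect but no single vertex meets all three, so ${\sf sbn}(C_3) \geq 2.$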

Link prediction is a fundamental task on graphs. Inspired by traditional path-based methods, in this paper we propose a general and flexible representation learning framework based on paths for link prediction. Specifically, we define the representation of a pair of nodes as the generalized sum of all path representations between them, with each path representation defined as the generalized product of the edge representations along the path. Motivated by the Bellman-Ford algorithm for solving the shortest path problem, we show that the proposed path formulation can be efficiently solved by a generalized Bellman-Ford algorithm. To further improve the capacity of the path formulation, we propose the Neural Bellman-Ford Network (NBFNet), a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm. NBFNet parameterizes the generalized Bellman-Ford algorithm with three neural components, namely the INDICATOR, MESSAGE, and AGGREGATE functions, which correspond to the boundary condition, multiplication operator, and summation operator, respectively. NBFNet is very general, covers many traditional path-based methods, and can be applied to both homogeneous graphs and multi-relational graphs (e.g., knowledge graphs) in both transductive and inductive settings. Experiments on both homogeneous graphs and knowledge graphs show that NBFNet outperforms existing methods by a large margin in both settings, achieving new state-of-the-art results.
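
The three components can be made concrete by instantiating the generalized Bellman-Ford algorithm on the (min, +) semiring, which recovers classical shortest paths; the following sketch is our instantiation for illustration, not NBFNet's learned operators.

```python
# Generalized Bellman-Ford, parameterized by INDICATOR / MESSAGE / AGGREGATE.
# Instantiated on the (min, +) semiring, recovering classical shortest paths;
# NBFNet replaces these three functions with learned neural operators.
import math

def generalized_bellman_ford(nodes, edges, source, indicator, message, aggregate, steps):
    # Boundary condition: representation of the pair (source, v) for every v.
    h = {v: indicator(source, v) for v in nodes}
    for _ in range(steps):
        h = {
            v: aggregate(
                [message(h[x], w) for (x, y, w) in edges if y == v],
                h[v],  # also keep the previous value (includes the boundary)
            )
            for v in nodes
        }
    return h

# (min, +) instantiation: INDICATOR = 0/inf, MESSAGE adds the edge weight,
# AGGREGATE takes the minimum over incoming messages.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0), ("c", "d", 1.0)]
dist = generalized_bellman_ford(
    nodes, edges, "a",
    indicator=lambda s, v: 0.0 if s == v else math.inf,
    message=lambda hx, w: hx + w,
    aggregate=lambda msgs, prev: min(msgs + [prev]),
    steps=len(nodes) - 1,
)
print(dist)  # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```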

Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce. In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph. Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety. We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints. Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text. Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.
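
As a minimal sketch of the core idea, attending along the graph's relational structure rather than over a linearization, consider self-attention masked by the adjacency relation; this is our simplification, not the paper's full graph transforming encoder.

```python
# Minimal sketch of self-attention restricted to a graph's edges, i.e. attending
# over relational structure instead of a linearized sequence. A simplification
# of the idea only; the projection matrices and toy graph are placeholders.
import numpy as np

def graph_attention(x, adj, wq, wk, wv):
    """x: (n, d) node states; adj: (n, n) 0/1 adjacency (1 = attention allowed)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    logits = q @ k.T / np.sqrt(k.shape[-1])
    logits = np.where(adj > 0, logits, -1e9)        # mask non-neighbors
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn = attn / attn.sum(axis=-1, keepdims=True)  # row-wise softmax
    return attn @ v                                 # (n, d) updated node states

rng = np.random.default_rng(0)
n, d = 5, 8
x = rng.normal(size=(n, d))
adj = np.eye(n) + np.diag(np.ones(n - 1), 1)        # toy chain graph + self-loops
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
print(graph_attention(x, adj, wq, wk, wv).shape)    # (5, 8)
```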

Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs.
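
A minimal NumPy sketch of the key-side relative term in the attention logits, with pairwise distances clipped to a maximum magnitude; shapes and random initialization are placeholders rather than the trained model.

```python
# Sketch of self-attention with relative position representations: each logit
# gets a learned embedding of the clipped distance (j - i) added on the key side.
import numpy as np

def relative_attention_logits(x, wq, wk, rel_k, max_dist):
    """x: (n, d); rel_k: (2*max_dist + 1, d) embeddings of clipped distances."""
    n, d = x.shape
    q, k = x @ wq, x @ wk
    # a[i, j] = embedding of clip(j - i, -max_dist, max_dist)
    idx = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None],
                  -max_dist, max_dist) + max_dist
    a = rel_k[idx]                                    # (n, n, d)
    logits = q @ k.T + np.einsum("id,ijd->ij", q, a)  # content + relative terms
    return logits / np.sqrt(d)

rng = np.random.default_rng(0)
n, d, k_max = 6, 8, 2
x = rng.normal(size=(n, d))
wq, wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
rel_k = rng.normal(size=(2 * k_max + 1, d))
print(relative_attention_logits(x, wq, wk, rel_k, k_max).shape)  # (6, 6)
```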

Driven by successes in deep learning, computer vision research has begun to move beyond object detection and image classification to more sophisticated tasks like image captioning or visual question answering. Motivating such endeavors is the desire for models to capture not only the objects present in an image, but also more fine-grained aspects of a scene, such as the relationships between objects and their attributes. Scene graphs provide a formal construct for capturing these aspects of an image. Despite this, there have been only a few recent efforts to generate scene graphs from imagery. Previous works limit themselves to settings where bounding box information is available at training time and do not attempt to generate scene graphs with attributes. In this paper we propose a method, based on recent advancements in Generative Adversarial Networks, to overcome these deficiencies. We take the approach of first generating small subgraphs, each describing a single statement about the scene, grounded in a specific region of the input image selected by an attention mechanism. By doing so, our method is able to produce portions of the scene graph with attribute information without the need for bounding box labels. The complete scene graph is then constructed from these subgraphs. We show that our model improves upon prior work in scene graph generation on established datasets and accepted metrics. Further, we demonstrate that our model can handle a larger vocabulary size than prior work has attempted.
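
As a toy illustration of the final composition step, merging region-level subgraphs, each encoding one statement with attributes, into a complete scene graph; the attention-based region selection and the GAN training are not sketched, and the example triples are ours.

```python
# Toy illustration of composing a scene graph from region-level subgraphs,
# each holding one (subject, predicate, object) statement plus attributes.
subgraphs = [
    {"triples": {("man", "rides", "horse")}, "attributes": {("horse", "brown")}},
    {"triples": {("man", "wears", "hat")},   "attributes": {("hat", "red")}},
]

# The complete scene graph is the union of the per-region subgraphs.
scene_graph = {
    "triples": set().union(*(g["triples"] for g in subgraphs)),
    "attributes": set().union(*(g["attributes"] for g in subgraphs)),
}
print(scene_graph)
```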

Computer poetry generation is our first step towards computer writing. Writing must have a theme, yet current approaches based on sequence-to-sequence models with attention often produce non-thematic poems. We present a novel conditional variational autoencoder with a hybrid decoder that adds deconvolutional neural networks to recurrent neural networks, fully learning topic information via latent variables. This approach significantly improves the relevance of the generated poems by representing each line of the poem not only in a context-sensitive manner but also in a holistic way that is closely tied to the given keyword and the learned topic. A proposed augmented word2vec model further improves rhythm and symmetry. Tests show that poems generated by our approach largely satisfy the regulated rules and maintain consistent themes, and 73.42% of them receive an Overall score of no less than 3 (the highest score is 5).
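
As a minimal sketch of the conditional-VAE sampling step at the heart of this approach, drawing a latent topic variable with the reparameterization trick before decoding; the toy linear "networks" and dimensions are placeholders for the paper's hybrid architecture.

```python
# Minimal sketch of CVAE latent sampling: a topic variable z is drawn via the
# reparameterization trick and would condition line generation. The linear
# "encoder" and all dimensions are placeholders, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
d_cond, d_z = 16, 8

def encode(condition):
    """Toy recognition/prior network: maps the condition to (mean, log-variance)."""
    w_mu = rng.normal(size=(d_cond, d_z))
    w_ls = rng.normal(size=(d_cond, d_z))
    return condition @ w_mu, condition @ w_ls

condition = rng.normal(size=(d_cond,))                   # keyword + context encoding
mu, log_var = encode(condition)
z = mu + np.exp(0.5 * log_var) * rng.normal(size=d_z)    # reparameterization trick
print(z.shape)  # (8,) -> fed to the hybrid (RNN + deconvolution) decoder
```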

Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning problems. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. It then learns how to combine these directions to obtain the principal direction for each unseen class, such that the CAPD of the test image is aligned with the semantic embedding of the true class and points away from those of the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for selecting the most useful seen classes for each unseen class, to achieve robustness in zero-shot learning. Our method can update the unseen-class CAPDs by taking advantage of a few unseen-class images, enabling a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen-unseen diversity, which significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few/one-shot learning problems.
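
As an illustrative sketch of the CAPD idea, one principal direction per seen class, combined into a direction for an unseen class; the similarity-softmax weighting below is our stand-in for the learned combination, and all matrices are random placeholders.

```python
# Sketch of the CAPD idea: one principal direction per seen class, combined
# into a direction for an unseen class. The similarity-softmax weighting is an
# illustrative assumption; the paper learns the combination.
import numpy as np

rng = np.random.default_rng(0)
n_seen, d_img, d_sem = 4, 10, 6

img_feat = rng.normal(size=(d_img,))                  # test image feature
W = rng.normal(size=(n_seen, d_sem, d_img))           # per-seen-class embeddings
sem_seen = rng.normal(size=(n_seen, d_sem))           # seen class semantics
sem_unseen = rng.normal(size=(d_sem,))                # unseen class semantics

capd_seen = np.stack([W[c] @ img_feat for c in range(n_seen)])  # (n_seen, d_sem)

# Combine seen CAPDs with weights derived from seen/unseen semantic similarity.
sims = sem_seen @ sem_unseen
weights = np.exp(sims - sims.max())
weights /= weights.sum()
capd_unseen = weights @ capd_seen                     # (d_sem,)

# Classify by alignment between the unseen CAPD and the class semantics.
score = capd_unseen @ sem_unseen / (np.linalg.norm(capd_unseen) * np.linalg.norm(sem_unseen))
print(f"alignment with unseen class: {score:+.3f}")
```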

We introduce a new neural architecture to learn the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence models and Neural Turing Machines, because the number of target classes at each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences and various combinatorial optimization problems belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention mechanisms in that, instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence models with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
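
A minimal NumPy sketch of the pointer mechanism: the attention scores over encoder states are used directly as the output distribution over input positions, rather than being used to blend encoder states into a context vector. Weights and dimensions are placeholders.

```python
# Minimal sketch of the Ptr-Net pointer mechanism: attention scores over the
# encoder states become the output distribution over input positions.
import numpy as np

def pointer_distribution(enc, dec, w1, w2, v):
    """enc: (n, d) encoder states; dec: (d,) current decoder state."""
    u = np.tanh(enc @ w1 + dec @ w2) @ v          # (n,) one score per input position
    e = np.exp(u - u.max())
    return e / e.sum()                            # softmax over input positions

rng = np.random.default_rng(0)
n, d = 7, 16
enc = rng.normal(size=(n, d))
dec = rng.normal(size=(d,))
w1, w2, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d,))
p = pointer_distribution(enc, dec, w1, w2, v)
print(p.argmax(), p.round(3))                     # index of the "pointed-to" element
```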
