
We are interested in embedding trees T with maximum degree at most four in a rectangular grid, such that the vertices of T correspond to grid points and the edges of T correspond to non-intersecting straight segments of the grid lines. Such embeddings are called straight models. While each edge is represented by a straight segment, a path of T is represented in the model by the union of the segments corresponding to its edges, which may form a path with several bends. The aim is to determine a straight model of a given tree T minimizing the maximum number of bends over all paths of T. We provide a quadratic-time algorithm for this problem. We also show how to construct straight models whose minimum number of bends is k, using the least possible number of vertices. As an application of our algorithm, we provide an upper bound on the number of bends of EPG models of graphs that are both VPT and EPT.
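
To make the objective concrete, here is a minimal sketch of how bends are counted along one path of a straight model, assuming the path is given as its consecutive grid points; this illustrates only the quantity being minimized, not the quadratic-time algorithm itself.

```python
def bends(path):
    """Count the bends of a path in a straight model.

    path: list of (x, y) grid points visited consecutively; a bend occurs
    wherever the direction of travel changes.
    """
    count = 0
    for a, b, c in zip(path, path[1:], path[2:]):
        d1 = (b[0] - a[0], b[1] - a[1])  # direction of the first step
        d2 = (c[0] - b[0], c[1] - b[1])  # direction of the second step
        if d1 != d2:                     # direction changes: one bend
            count += 1
    return count

# An L-shaped path has exactly one bend:
print(bends([(0, 0), (1, 0), (2, 0), (2, 1)]))  # 1
```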

Related content

Let a polytope $\mathcal{P}$ be defined in one of the following ways: (i) $\mathcal{P} = \{x \in \mathbb{R}^n \colon A x \leq b\}$, where $A \in \mathbb{Z}^{(n+m) \times n}$, $b \in \mathbb{Z}^{(n+m)}$, and $\mathrm{rank}(A) = n$; (ii) $\mathcal{P} = \{x \in \mathbb{R}_+^n \colon A x = b\}$, where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^{m}$, and $\mathrm{rank}(A) = m$; and let all the rank minors of $A$ be bounded by $\Delta$ in absolute value. We show that $|\mathcal{P} \cap \mathbb{Z}^n|$ can be computed by an algorithm with the arithmetic complexity bound $$ O\bigl( \nu(d,m,\Delta) \cdot d^3 \cdot \Delta^4 \cdot \log(\Delta) \bigr), $$ where $d = \dim(\mathcal{P})$ and $\nu(d,m,\Delta)$ is the maximal possible number of vertices in a $d$-dimensional polytope $\mathcal{P}$ defined by one of the systems above. Using this result, we obtain the following arithmetic complexity bounds for computing $|\mathcal{P} \cap \mathbb{Z}^n|$: 1) the bound $O\bigl((\frac{d}{m}+1)^m \cdot d^3 \cdot \Delta^4 \cdot \log(\Delta)\bigr)$, which is polynomial in $d$ and $\Delta$ for any fixed $m$; 2) the bound $O\bigl((\frac{m}{d}+1)^{\frac{d}{2}} \cdot d^4 \cdot \Delta^4 \cdot \log(\Delta)\bigr)$, which is polynomial in $m$ and $\Delta$ for any fixed $d$; 3) the bound $O(d)^{4 + \frac{d}{2}} \cdot \Delta^{4+d} \cdot \log(\Delta)$, which is polynomial in $\Delta$ for any fixed $d$. These bounds can be used to obtain faster algorithms for the ILP feasibility problem, and for counting integer points in a simplex or in an unbounded Subset-Sum polytope. Unbounded and parametric versions of the above problem are also considered.
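
As a point of reference for the quantity $|\mathcal{P} \cap \mathbb{Z}^n|$, the following brute-force sketch counts the integer points of $\{x : Ax \leq b\}$ by enumerating an integer box. It is exponential in $n$ and only meant to make the counted quantity concrete, not to reflect the algorithm whose complexity is bounded above.

```python
import itertools
import numpy as np

def count_integer_points(A, b, box):
    """Count |P ∩ Z^n| for P = {x : A x <= b} by brute force.

    box = [(lo_1, hi_1), ..., (lo_n, hi_n)] must contain P.
    """
    A, b = np.asarray(A), np.asarray(b)
    count = 0
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in box)):
        if np.all(A @ np.array(x) <= b):
            count += 1
    return count

# The triangle {x >= 0, y >= 0, x + y <= 3} contains 10 integer points.
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 3]
print(count_integer_points(A, b, [(0, 3), (0, 3)]))  # 10
```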

In this paper we study the maximum degree of interaction that may emerge in distributed systems. It is assumed that a distributed system is represented by a graph of nodes interacting over edges. Each node holds some amount of data, and the intensity of interaction over an edge is proportional to the product of the amounts of data at its two endpoints. We seek the maximum possible sum of interactions over the edges. This model can be extended to other interacting entities. For bipartite graphs and odd-length cycles we prove that the greatest degree of interaction emerges when the whole data is concentrated in an arbitrary pair of neighbors. Equal partitioning of the load is shown to be optimal for complete graphs. Finally, we show that in general graphs the interaction is maximized when the data is distributed equally among the nodes of a largest clique in the graph. We also present, in this context, a 1965 result of Motzkin and Straus on the maximal interaction objective.
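
A small numeric check of the clique statement, on a hypothetical four-node graph (a triangle with a pendant vertex): concentrating a unit of data equally on the largest clique attains the Motzkin-Straus value $(1 - 1/\omega)/2$ and beats the uniform split over all nodes.

```python
def interaction(edges, x):
    """Total interaction: sum over edges of the product of node loads."""
    return sum(x[i] * x[j] for i, j in edges)

# Triangle {0, 1, 2} plus a pendant vertex 3; clique number omega = 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

# A unit of data split equally over the largest clique...
print(interaction(edges, [1/3, 1/3, 1/3, 0.0]))  # 3 * (1/9) = 1/3

# ...matches the Motzkin-Straus bound (1 - 1/omega) / 2 with omega = 3...
print((1 - 1/3) / 2)                             # 1/3

# ...and beats the equal split over all four vertices:
print(interaction(edges, [0.25] * 4))            # 4 * (1/16) = 0.25
```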

The embedding and extraction of useful knowledge is a recent trend in machine learning applications, e.g., to supplement existing datasets that are small. Meanwhile, with the increasing use of machine learning models in security-critical applications, the embedding and extraction of malicious knowledge are equivalent to the notorious backdoor attack and its defence, respectively. This paper studies the embedding and extraction of knowledge in tree ensemble classifiers, and focuses on knowledge expressible with a generic form of Boolean formulas, e.g., robustness properties and backdoor attacks. The embedding is required to be preservative (the original performance of the classifier is preserved), verifiable (the knowledge can be attested), and stealthy (the embedding cannot be easily detected). To this end, we propose two novel and effective embedding algorithms, one for black-box settings and the other for white-box settings. The embedding can be done in PTIME. Beyond the embedding, we develop an algorithm to extract the embedded knowledge by reducing the problem to one solvable with an SMT (satisfiability modulo theories) solver. While this novel algorithm can successfully extract knowledge, the reduction leads to an NP computation. Therefore, if embedding is applied as a backdoor attack and extraction as its defence, our results suggest a complexity gap (P vs. NP) between the attack and the defence when working with tree ensemble classifiers. We apply our algorithms to a diverse set of datasets to validate our conclusion extensively.
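
For intuition only, here is the crudest possible form of knowledge embedding as a backdoor: a wrapper that overrides the original classifier whenever a Boolean trigger fires. It is preservative on clean data but obviously not stealthy; the paper's algorithms instead modify the tree ensemble itself, which this sketch does not attempt.

```python
def embed_backdoor(classify, trigger, target_label):
    """Wrap a classifier so that inputs matching a Boolean trigger are
    forced to the attacker's label; all other inputs are classified as
    before (the embedding is preservative on clean data)."""
    def backdoored(x):
        return target_label if trigger(x) else classify(x)
    return backdoored

# Hypothetical trigger: a conjunction over feature thresholds.
trigger = lambda x: x[0] > 0.9 and x[3] < 0.1
model = embed_backdoor(lambda x: "benign", trigger, "malicious")
print(model([0.95, 0.2, 0.5, 0.05]))  # "malicious" (trigger fires)
print(model([0.10, 0.2, 0.5, 0.50]))  # "benign" (clean behaviour preserved)
```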

A finite word $w$ is called \emph{rich} if it contains $\vert w\vert+1$ distinct palindromic factors including the empty word. For every finite rich word $w$ there are distinct nonempty palindromes $w_1, w_2,\dots,w_p$ such that $w=w_pw_{p-1}\cdots w_1$ and $w_i$ is the longest palindromic suffix of $w_pw_{p-1}\cdots w_i$, where $1\leq i\leq p$. This palindromic factorization is called \emph{UPS-factorization}. Let $\mathrm{luf}(w)=p$ be \emph{the length of UPS-factorization} of $w$. In 2017, it was proved that there is a constant $c$ such that if $w$ is a finite rich word and $n=\vert w\vert$ then $\mathrm{luf}(w)\leq c\frac{n}{\ln{n}}$. We improve this result as follows: There are constants $\mu, \pi$ such that if $w$ is a finite rich word and $n=\vert w\vert$ then \[\mathrm{luf}(w)\leq \mu\frac{n}{e^{\pi\sqrt{\ln{n}}}}.\] The constants $c,\mu,\pi$ depend on the size of the alphabet.
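
The UPS-factorization itself is easy to compute greedily: repeatedly strip the longest palindromic suffix of what remains. The quadratic-time sketch below follows the definition directly (it works for any word; for rich words the number of factors is $\mathrm{luf}(w)$).

```python
def ups_factorization(w):
    """Return [w_p, ..., w_1] with w = w_p ... w_1, where each factor is
    the longest palindromic suffix of the prefix it ends; the number of
    factors is luf(w)."""
    factors = []
    while w:
        for i in range(len(w)):  # smallest i gives the longest pal. suffix
            if w[i:] == w[i:][::-1]:
                factors.append(w[i:])
                w = w[:i]
                break
    return factors[::-1]

print(ups_factorization("abaab"))  # ['a', 'baab'], so luf("abaab") = 2
```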

In 1971, Tutte wrote in an article that "it is tempting to conjecture that every 3-connected bipartite cubic graph is hamiltonian". Motivated by this remark, Horton constructed a counterexample on 96 vertices. In a sequence of articles by different authors several smaller counterexamples were presented. The smallest of these graphs is a graph on 50 vertices which was discovered independently by Georges and Kelmans. In this article we show that there is no smaller counterexample. As all non-hamiltonian 3-connected bipartite cubic graphs in the literature have cyclic 4-cuts -- even if they have girth 6 -- it is natural to ask whether this is a necessary prerequisite. In this article we answer this question in the negative and give a construction of an infinite family of non-hamiltonian cyclically 5-connected bipartite cubic graphs. In 1969, Barnette gave a weaker version of the conjecture stating that 3-connected planar bipartite cubic graphs are hamiltonian. We show that Barnette's conjecture is true up to at least 90 vertices. We also report that a search of small non-hamiltonian 3-connected bipartite cubic graphs did not find any with genus less than 4.

Bipartite graph embedding has recently attracted much attention because bipartite graphs are widely used in various application domains. Most previous methods, which adopt random walk-based or reconstruction-based objectives, are typically effective at learning local graph structures. However, the global properties of the bipartite graph, including community structures of homogeneous nodes and long-range dependencies of heterogeneous nodes, are not well preserved. In this paper, we propose a bipartite graph embedding method called BiGI to capture such global properties by introducing a novel local-global infomax objective. Specifically, BiGI first generates a global representation that is composed of two prototype representations. BiGI then encodes sampled edges as local representations via the proposed subgraph-level attention mechanism. By maximizing the mutual information between local and global representations, BiGI enables nodes in the bipartite graph to be globally relevant. Our model is evaluated on various benchmark datasets for the tasks of top-K recommendation and link prediction. Extensive experiments demonstrate that BiGI achieves consistent and significant improvements over state-of-the-art baselines. Detailed analyses verify the high effectiveness of modeling the global properties of the bipartite graph.
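
The shape of a local-global infomax objective can be sketched with a DGI-style bilinear discriminator: real (local, global) pairs should score above corrupted ones, and minimizing the resulting binary cross-entropy maximizes a lower bound on their mutual information. BiGI's encoders, prototype summaries, and subgraph attention are not reproduced here; everything below is a generic illustration.

```python
import numpy as np

def discriminator(local, g, W):
    """Bilinear critic D(l, g) = sigmoid(l^T W g)."""
    return 1 / (1 + np.exp(-(local @ W @ g)))

rng = np.random.default_rng(0)
dim = 8
g = rng.normal(size=dim)              # global summary representation
W = rng.normal(size=(dim, dim))       # learnable discriminator weights
pos = g + 0.1 * rng.normal(size=dim)  # local rep. from the real graph
neg = rng.normal(size=dim)            # local rep. from a corrupted graph

# Binary cross-entropy that training would minimize over W and the encoders:
loss = -np.log(discriminator(pos, g, W)) - np.log(1 - discriminator(neg, g, W))
print(loss)
```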

Search in social networks such as Facebook poses different challenges than classical web search: besides the query text, it is important to take into account the searcher's context to provide relevant results. Their social graph is an integral part of this context and is a unique aspect of Facebook search. While embedding-based retrieval (EBR) has been applied in web search engines for years, Facebook search was still mainly based on a Boolean matching model. In this paper, we discuss the techniques for applying EBR to a Facebook Search system. We introduce the unified embedding framework developed to model semantic embeddings for personalized search, and the system to serve embedding-based retrieval in a typical search system based on an inverted index. We discuss various tricks and experiences in end-to-end optimization of the whole system, including ANN parameter tuning and full-stack optimization. Finally, we present our progress on two selected advanced modeling topics. We evaluated EBR on verticals of Facebook Search, with significant metric gains observed in online A/B experiments. We believe this paper will provide useful insights and experiences to help people develop embedding-based retrieval systems in search engines.
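
Stripped of the ANN index and the serving stack, embedding-based retrieval reduces to ranking documents by similarity to the query embedding. The brute-force sketch below shows that core step with cosine similarity; production systems replace the exact scan with an approximate nearest-neighbor index.

```python
import numpy as np

def retrieve(query_emb, doc_embs, k=3):
    """Return the indices of the top-k documents by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    D = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    return np.argsort(-(D @ q))[:k]

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 64))             # hypothetical document embeddings
query = docs[42] + 0.1 * rng.normal(size=64)  # a query close to document 42
print(retrieve(query, docs))                  # document 42 ranked first
```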

We present an analysis of embeddings extracted from different pre-trained models for content-based image retrieval. Specifically, we study embeddings from image classification and object detection models. We discover that even with additional human annotations such as bounding boxes and segmentation masks, the discriminative power of the embeddings based on modern object detection models is significantly worse than their classification counterparts for the retrieval task. At the same time, our analysis also unearths that object detection model can help retrieval task by acting as a hard attention module for extracting object embeddings that focus on salient region from the convolutional feature map. In order to efficiently extract object embeddings, we introduce a simple guided student-teacher training paradigm for learning discriminative embeddings within the object detection framework. We support our findings with strong experimental results.
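
The "hard attention" reading of a detection box can be illustrated by pooling a convolutional feature map only inside the box; the sketch below does exactly that with average pooling. The stride, shapes, and pooling choice are assumptions for illustration, and the paper's guided student-teacher training is not reproduced.

```python
import numpy as np

def object_embedding(feature_map, box, stride=16):
    """Pool a conv feature map inside a detection box.

    feature_map: (H, W, C) array; box: (x1, y1, x2, y2) in image pixels;
    stride: image pixels per feature-map cell.  Returns a (C,) embedding
    focused on the detected object.
    """
    x1, y1, x2, y2 = (int(v // stride) for v in box)
    region = feature_map[y1:y2 + 1, x1:x2 + 1, :]
    return region.mean(axis=(0, 1))

fmap = np.random.rand(14, 14, 256)  # e.g. a 224x224 image at stride 16
print(object_embedding(fmap, (32, 48, 128, 160)).shape)  # (256,)
```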

Large-scale knowledge graph embedding has attracted much attention from both academia and industry in the field of Artificial Intelligence. However, most existing methods concentrate solely on the fact triples contained in the given knowledge graph. Inspired by the fact that logic rules provide a flexible and declarative language for expressing rich background knowledge, it is natural to integrate logic rules into knowledge graph embedding in order to transfer human knowledge to entity and relation embeddings and strengthen the learning process. In this paper, we propose a novel logic rule-enhanced method that can be easily integrated with any translation-based knowledge graph embedding model, such as TransE. We first introduce a method to automatically mine logic rules and their confidences from the triples. Then, to place both triples and mined logic rules within the same semantic space, all triples in the knowledge graph are represented as first-order logic. Finally, we define several operations on the first-order logic and minimize a global loss over both the mined logic rules and the transformed first-order logic. We conduct extensive experiments on link prediction and triple classification on three datasets: WN18, FB166, and FB15K. Experiments show that the rule-enhanced method can significantly improve the performance of several baselines. The highlight of our model is that the filtered Hits@1, a pivotal metric in the knowledge inference task, improves significantly (by up to 700%).
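
The translation-based building block referred to above is easy to state: TransE scores a triple $(h, r, t)$ by $\Vert h + r - t\Vert$, with smaller values meaning more plausible triples. The sketch below shows only this base scoring function, not the rule mining or the joint loss over rules and triples.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: ||h + r - t||; smaller is more plausible."""
    return np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim = 50
h, r = rng.normal(size=dim), rng.normal(size=dim)
t_true = h + r + 0.01 * rng.normal(size=dim)  # tail close to h + r
t_rand = rng.normal(size=dim)                 # unrelated tail
print(transe_score(h, r, t_true) < transe_score(h, r, t_rand))  # True
```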

We present a new approach for learning graph embeddings that relies on structural measures of node similarity to generate training data. The model learns node embeddings that approximate a given measure, such as the shortest-path distance or any other. Evaluations of the proposed model on semantic similarity and word sense disambiguation tasks (using WordNet as the source of gold similarities) show that our method yields state-of-the-art results, and in certain cases can even outperform the input similarity measure. The model is computationally efficient, orders of magnitude faster than the direct computation of graph distances.
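
A toy version of distance-supervised embedding learning, assuming a path graph and plain gradient descent on a squared-error stress loss: node vectors are fitted so that their Euclidean distances approximate shortest-path distances. The paper's actual model and objective may differ; this only shows the supervision scheme.

```python
import itertools
import numpy as np

n, dim, lr = 6, 2, 0.05
# Shortest-path distances on the path graph 0-1-2-3-4-5: d[i][j] = |i - j|.
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)

rng = np.random.default_rng(1)
X = rng.normal(scale=0.1, size=(n, dim))  # node embeddings to be learned

for _ in range(2000):
    for i, j in itertools.combinations(range(n), 2):
        diff = X[i] - X[j]
        dist = np.linalg.norm(diff) + 1e-9
        grad = 2 * (dist - d[i, j]) * diff / dist  # d/dX_i of (dist - d_ij)^2
        X[i] -= lr * grad
        X[j] += lr * grad

print(round(float(np.linalg.norm(X[0] - X[5])), 2))  # ~5.0, matching d[0, 5]
```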
