
The orthogonality dimension of a graph over $\mathbb{R}$ is the smallest integer $d$ for which one can assign to every vertex a nonzero vector in $\mathbb{R}^d$ such that every two adjacent vertices receive orthogonal vectors. For an integer $d$, the $d$-Ortho-Dim$_\mathbb{R}$ problem asks to decide whether the orthogonality dimension of a given graph over $\mathbb{R}$ is at most $d$. We prove that for every integer $d \geq 3$, the $d$-Ortho-Dim$_\mathbb{R}$ problem parameterized by the vertex cover number $k$ admits a kernel with $O(k^{d-1})$ vertices and bit-size $O(k^{d-1} \cdot \log k)$. We complement this result by a nearly matching lower bound, showing that for any $\varepsilon > 0$, the problem admits no kernel of bit-size $O(k^{d-1-\varepsilon})$ unless $\mathsf{NP} \subseteq \mathsf{coNP/poly}$. We further study the kernelizability of orthogonality dimension problems in additional settings, including over general fields and under various structural parameterizations.
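
As a concrete illustration of the definition (a hedged sketch, not from the paper), the following Python snippet verifies that a candidate vector assignment is a valid orthogonal representation; the graph `K3` and the standard-basis assignment are hypothetical examples, witnessing that the orthogonality dimension of $K_3$ over $\mathbb{R}$ is at most $3$ (and it is exactly $3$, since $K_3$ requires three pairwise orthogonal nonzero vectors).

```python
def is_orthogonal_representation(edges, vecs, tol=1e-9):
    """Check that every vertex gets a nonzero vector and every
    pair of adjacent vertices gets orthogonal vectors."""
    if any(all(abs(x) < tol for x in v) for v in vecs.values()):
        return False  # zero vectors are not allowed
    return all(abs(sum(a * b for a, b in zip(vecs[u], vecs[v]))) < tol
               for u, v in edges)

# Hypothetical example: K_3 with the standard basis of R^3.
K3 = [(0, 1), (0, 2), (1, 2)]
basis = {0: (1.0, 0.0, 0.0), 1: (0.0, 1.0, 0.0), 2: (0.0, 0.0, 1.0)}
print(is_orthogonal_representation(K3, basis))  # True
```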

Related Content

We present the first polynomial-time algorithm to exactly compute the number of labeled chordal graphs on $n$ vertices. Our algorithm solves a more general problem: given $n$ and $\omega$ as input, it computes the number of $\omega$-colorable labeled chordal graphs on $n$ vertices, using $O(n^7)$ arithmetic operations. A standard sampling-to-counting reduction then yields a polynomial-time exact sampler that generates an $\omega$-colorable labeled chordal graph on $n$ vertices uniformly at random. Our counting algorithm improves upon the previous best result by Wormald (1985), which computes the number of labeled chordal graphs on $n$ vertices in time exponential in $n$. An implementation of the polynomial-time counting algorithm gives the number of labeled chordal graphs on up to $30$ vertices in less than three minutes on a standard desktop computer. Previously, the number of labeled chordal graphs was only known for graphs on up to $15$ vertices. In addition, we design two approximation algorithms: (1) an approximate counting algorithm that computes a $(1\pm\varepsilon)$-approximation of the number of $n$-vertex labeled chordal graphs, and (2) an approximate sampling algorithm that generates a random labeled chordal graph according to a distribution whose total variation distance from the uniform distribution is at most $\varepsilon$. The approximate counting algorithm runs in $O(n^3\log{n}\log^7(1/\varepsilon))$ time, and the approximate sampling algorithm runs in $O(n^3\log{n}\log^7(1/\varepsilon))$ expected time.
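
For intuition, and as a sanity check one could run against any counting algorithm, here is a hedged brute-force Python sketch that counts labeled chordal graphs for very small $n$ by enumerating all $2^{\binom{n}{2}}$ graphs and testing chordality via greedy simplicial-vertex elimination. It is exponential in $n$, unlike the paper's $O(n^7)$-operation algorithm, and is purely illustrative.

```python
from itertools import combinations, product

def is_chordal(n, edge_set):
    """A graph is chordal iff repeatedly deleting a simplicial vertex
    (one whose remaining neighbourhood is a clique) empties the graph."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edge_set:
        nbrs[u].add(v)
        nbrs[v].add(u)
    alive = set(range(n))
    while alive:
        for v in list(alive):
            nb = nbrs[v] & alive
            if all(b in nbrs[a] for a, b in combinations(nb, 2)):
                alive.remove(v)   # v is simplicial: eliminate it
                break
        else:
            return False          # no simplicial vertex remains
    return True

def count_labeled_chordal(n):
    pairs = list(combinations(range(n), 2))
    return sum(is_chordal(n, {p for p, keep in zip(pairs, mask) if keep})
               for mask in product([0, 1], repeat=len(pairs)))

print(count_labeled_chordal(3))  # 8: every graph on 3 vertices is chordal
```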

Given two sets $R$ and $B$ of $n$ points in the plane, we present efficient algorithms to find a two-line linear classifier that best separates the "red" points in $R$ from the "blue" points in $B$ and is robust to outliers. More precisely, we find a region $\mathcal{W}_B$ bounded by two lines, that is, either a halfplane, strip, wedge, or double wedge, containing (most of) the blue points $B$ and few red points. Our running times vary between the optimal $O(n\log n)$ and around $O(n^3)$, depending on the type of region $\mathcal{W}_B$ and on whether we wish to minimize only red outliers, only blue outliers, or both.
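
To make the setup concrete, here is a hedged, naive Python sketch for the simplest region type, a halfplane: it tries each boundary line through a pair of input points in both orientations and counts red points inside the region plus blue points outside, in roughly $O(n^3)$ time (assuming general position), far from the optimal bounds above.

```python
from itertools import combinations

def side(p, q, r):
    """Sign of the cross product: > 0 iff r lies left of the line p->q."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def best_halfplane(red, blue):
    """Minimize (# blue outside) + (# red inside) over halfplanes whose
    boundary passes through two input points. Naive O(n^3) sketch."""
    pts = red + blue
    best = len(blue)  # baseline: empty region, all blue points are outliers
    for p, q in combinations(pts, 2):
        for sgn in (1, -1):  # both orientations of the boundary line
            red_in = sum(1 for r in red if sgn * side(p, q, r) >= 0)
            blue_out = sum(1 for b in blue if sgn * side(p, q, b) < 0)
            best = min(best, red_in + blue_out)
    return best

print(best_halfplane([(0, 0), (5, 5)], [(0, 5), (1, 4)]))  # 0: separable
```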

Semiring semantics of first-order logic generalises classical Boolean semantics by permitting truth values from a commutative semiring, which can model information such as costs or access restrictions. This raises the question of to what extent classical model-theoretic properties still apply, and how this depends on the algebraic properties of the semiring. In this paper, we study this question for the classical locality theorems due to Hanf and Gaifman. We prove that Hanf's Locality Theorem generalises to all semirings with idempotent operations, but fails for many non-idempotent semirings. We then consider Gaifman normal forms and show that for formulae with free variables, Gaifman's Theorem does not generalise beyond the Boolean semiring. For sentences, too, it fails in the natural semiring and the tropical semiring. Our main result, however, is a constructive proof of the existence of Gaifman normal forms for min-max and lattice semirings. The proof implies a stronger version of Gaifman's classical theorem in Boolean semantics: every sentence has a Gaifman normal form which does not add negations.
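
As a worked illustration of semiring semantics (a generic example, not from the paper), the sketch below evaluates the sentence $\exists x\,(P(x) \wedge Q(x))$: disjunction and existential quantification are interpreted by the semiring addition, conjunction and universal quantification by its multiplication. In the tropical semiring $(\mathbb{R}_{\ge 0}\cup\{\infty\}, \min, +)$ the value is the cheapest combined cost of a witness; the domain and cost annotations are hypothetical.

```python
import math
from functools import reduce

# A commutative semiring as (add, mul, zero, one).
BOOLEAN  = (lambda a, b: a or b, lambda a, b: a and b, False, True)
TROPICAL = (min, lambda a, b: a + b, math.inf, 0.0)

def eval_exists_and(semiring, domain, P, Q):
    """Value of  exists x. (P(x) /\ Q(x)):
    semiring sum over the domain of the product P(a) * Q(a)."""
    add, mul, zero, _one = semiring
    return reduce(add, (mul(P[a], Q[a]) for a in domain), zero)

# Hypothetical cost annotations for the atoms over domain {a, b}.
domain = ["a", "b"]
P = {"a": 3.0, "b": 1.0}
Q = {"a": 2.0, "b": 7.0}
print(eval_exists_and(TROPICAL, domain, P, Q))  # min(3+2, 1+7) = 5.0
```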

A graph $G$ is a pairwise compatibility graph (PCG) if there exists an edge-weighted tree and an interval $I$, such that each leaf of the tree is a vertex of the graph, and there is an edge $\{ x, y \}$ in $G$ if and only if the weight of the path in the tree connecting $x$ and $y$ lies within the interval $I$. Originating in phylogenetics, PCGs are closely connected to important graph classes like leaf-powers and multi-threshold graphs, which are widely applied in bioinformatics, especially in understanding evolutionary processes. In this paper we introduce two natural generalizations of the PCG class, namely $k$-OR-PCG and $k$-AND-PCG, which are the classes of graphs that can be expressed as the union and intersection, respectively, of $k$ PCGs. These classes can also be described using the concepts of the covering number and the intersection dimension of a graph with respect to the PCG class. We investigate how the classes of OR-PCG and AND-PCG are related to PCGs, $k$-interval-PCGs and other graph classes known in the literature. In particular, we provide upper bounds on the minimum $k$ for which an arbitrary graph $G$ belongs to the $k$-interval-PCG, $k$-OR-PCG or $k$-AND-PCG classes. For particular graph classes we improve these general bounds. Moreover, we show that, for every integer $k$, there exists a bipartite graph that is not in the $k$-interval-PCG class, proving that there is no finite $k$ for which the $k$-interval-PCG class contains all graphs. This answers an open question of Ahmed and Rahman from 2017. Finally, using a Ramsey-theoretic argument, we show that for every $k$ there exist graphs that are not in $k$-AND-PCG, and graphs that are not in $k$-OR-PCG.
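
For concreteness, this hedged Python sketch realizes the PCG definition directly: given an edge-weighted tree and an interval, it computes leaf-to-leaf path weights and builds the resulting pairwise compatibility graph. The star example at the end is hypothetical.

```python
from collections import defaultdict

def pcg_edges(tree_edges, leaves, interval):
    """Edges of the PCG defined by a weighted tree and interval [lo, hi]:
    {x, y} is an edge iff the tree path weight between leaves x and y
    lies in [lo, hi]."""
    lo, hi = interval
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dists_from(src):
        # DFS works because path weights in a tree are unique.
        dist, stack = {src: 0.0}, [src]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + w
                    stack.append(v)
        return dist

    edges = set()
    for x in leaves:
        d = dists_from(x)
        edges |= {(x, y) for y in leaves if x < y and lo <= d[y] <= hi}
    return edges

# Star with centre 'c' and leaves 1, 2, 3 at weights 1, 2, 4:
# path weights are d(1,2)=3, d(1,3)=5, d(2,3)=6.
tree = [("c", 1, 1.0), ("c", 2, 2.0), ("c", 3, 4.0)]
print(pcg_edges(tree, [1, 2, 3], (0.0, 3.5)))  # {(1, 2)}
```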

We present the first mini-batch algorithm for maximizing a non-negative monotone decomposable submodular function, $F=\sum_{i=1}^N f^i$, under a set of constraints. We consider two sampling approaches: uniform and weighted. We first show that mini-batch with weighted sampling improves over the state-of-the-art sparsifier-based approach both in theory and in practice. Surprisingly, our experimental results show that uniform sampling is superior to weighted sampling. However, this cannot be explained by worst-case analysis. Our main contribution is using smoothed analysis to provide a theoretical foundation for our experimental results. We show that, under very mild assumptions, uniform sampling is superior for both the mini-batch and the sparsifier approaches. We empirically verify that these assumptions hold for our datasets. Uniform sampling is simple to implement and has complexity independent of $N$, making it an ideal candidate for tackling massive real-world datasets.
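
To illustrate the mini-batch idea in the simplest setting (a generic sketch, not the paper's algorithm), the code below runs greedy maximization under a cardinality constraint, scoring candidates with an unbiased estimate of $F=\sum_i f^i$ built from a uniformly sampled batch of the $f^i$, scaled by $N/b$. The coverage-style objectives are hypothetical.

```python
import random

def minibatch_greedy(fs, ground_set, k, batch_size, seed=0):
    """Greedy under |S| <= k, scoring candidates with a uniform
    mini-batch estimate of F(S) = sum_i f_i(S)."""
    rng = random.Random(seed)
    N, S = len(fs), set()
    for _ in range(k):
        batch = [fs[rng.randrange(N)] for _ in range(batch_size)]
        scale = N / batch_size

        def est(T):  # unbiased estimator of F(T)
            return scale * sum(f(T) for f in batch)

        gains = {e: est(S | {e}) - est(S) for e in ground_set - S}
        S.add(max(gains, key=gains.get))
    return S

# Hypothetical coverage objectives: f_i(S) = 1 if S hits i's target set
# (each f_i is monotone and submodular, so F is decomposable submodular).
targets = [{0, 1}, {1, 2}, {2}, {0, 3}]
fs = [lambda S, t=t: float(bool(S & t)) for t in targets]
print(minibatch_greedy(fs, {0, 1, 2, 3}, k=2, batch_size=2))
```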

Significant work has been done on computing the ``average'' optimal solution value for various $\mathsf{NP}$-complete problems using the Erd\H{o}s-R\'{e}nyi model to establish \emph{critical thresholds}. Critical thresholds define narrow bounds for the optimal solution of a problem instance such that the probability that the solution value lies outside these bounds vanishes as the instance size approaches infinity. In this paper, we extend the Erd\H{o}s-R\'{e}nyi model to general hypergraphs on $n$ vertices and $M$ hyperedges. We consider the problem of determining critical thresholds for the maximum cardinality matching, and we show that for $M=o(1.155^n)$ the size of the maximum cardinality matching is almost surely $1$. On the other hand, if $M=\Theta(2^n)$, then the size of the maximum cardinality matching is $\Omega(n^{\frac12-\gamma})$ for any $\gamma >0$. Lastly, we address the intermediate regime, where $M=\Omega(1.155^n)$ and $M=o(2^n)$, empirically through computer simulations.
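
In the spirit of the empirical part (a hedged sketch, not the paper's code, and assuming each hyperedge includes every vertex independently with probability $1/2$, which is one natural Erd\H{o}s-R\'enyi-style extension), one can sample a random hypergraph and take a greedy maximal matching as a lower bound on the maximum matching size.

```python
import random

def random_hypergraph(n, M, rng):
    """M random hyperedges; each includes every vertex independently
    with probability 1/2 (assumed model, empty edges are resampled)."""
    edges = []
    while len(edges) < M:
        e = frozenset(v for v in range(n) if rng.random() < 0.5)
        if e:
            edges.append(e)
    return edges

def greedy_matching(edges):
    """Pairwise-disjoint hyperedges chosen greedily, smallest first:
    a maximal matching, hence a lower bound on the maximum one."""
    used, matching = set(), []
    for e in sorted(edges, key=len):
        if not (e & used):
            matching.append(e)
            used |= e
    return matching

rng = random.Random(1)
sizes = [len(greedy_matching(random_hypergraph(12, 40, rng)))
         for _ in range(100)]
print(sum(sizes) / len(sizes))  # empirical mean matching size found
```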

We consider the problem of finding an informative path through a graph, given initial and terminal nodes and a maximum path length. We assume that a linear, noise-corrupted measurement of an underlying unknown vector, which we wish to estimate, is taken at each node. The informativeness is measured by the reduction in uncertainty in our estimate, evaluated using several metrics. We present a convex relaxation for this informative path planning problem, which we can readily solve to obtain a bound on the achievable performance. We develop an approximate sequential method where the path is constructed segment by segment through dynamic programming. This involves solving an orienteering problem, with the node reward acting as a surrogate for informativeness, taking the first step, and then repeating the process. The method scales to very large problem instances and achieves performance not too far from the bound produced by the convex relaxation. We also demonstrate our method's ability to handle adaptive objectives, multimodal sensing, and multi-agent variations of the informative path planning problem.
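
To make "informativeness" concrete, here is a generic sketch under standard linear-Gaussian assumptions (not the paper's implementation): if node $v$ yields the measurement $y_v = a_v^\top x + \text{noise}$, then under a unit Gaussian prior the posterior information matrix of a path $P$ is $I + \sum_{v \in P} a_v a_v^\top$, and a D-optimality-style score is its log-determinant. The sensing vectors below are hypothetical.

```python
import numpy as np

def logdet_information(path, a, prior_scale=1.0):
    """D-optimality score of a path: log det of the posterior
    information matrix  prior_scale * I + sum_{v in path} a_v a_v^T
    for linear measurements y_v = a_v^T x + Gaussian noise."""
    d = a.shape[1]
    info = prior_scale * np.eye(d)
    for v in path:
        info += np.outer(a[v], a[v])
    return np.linalg.slogdet(info)[1]  # log |det|, numerically stable

rng = np.random.default_rng(0)
a = rng.standard_normal((6, 3))       # one sensing vector per node
print(logdet_information([0, 2, 5], a))
```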

We prove that the constructive and intuitionistic variants of the modal logic $\mathsf{KB}$ coincide. This result contrasts with a recent result by Das and Marin, who showed that the constructive and intuitionistic variants of $\mathsf{K}$ do not prove the same diamond-free formulas.

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
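
The toy NumPy sketch below conveys the bottleneck idea only: each modality attends over its own tokens plus a small set of shared bottleneck tokens, so cross-modal information must pass through the bottleneck. It uses single-head attention with identity projections; the real model uses full transformer layers, and all shapes here are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys_values):
    """Plain single-head attention with identity projections."""
    d = queries.shape[-1]
    w = softmax(queries @ keys_values.T / np.sqrt(d))
    return w @ keys_values

def bottleneck_fusion_layer(x_a, x_v, z):
    """One fusion step in the spirit of fusion bottlenecks: the B shared
    bottleneck tokens z read from each modality, and each modality then
    attends only over its own tokens plus the updated bottlenecks."""
    za = attend(z, np.vstack([x_a, z]))   # bottlenecks read from audio
    zv = attend(z, np.vstack([x_v, z]))   # bottlenecks read from video
    z_new = (za + zv) / 2                 # merge the two bottleneck updates
    x_a_new = attend(x_a, np.vstack([x_a, z_new]))
    x_v_new = attend(x_v, np.vstack([x_v, z_new]))
    return x_a_new, x_v_new, z_new

rng = np.random.default_rng(0)
x_a = rng.standard_normal((10, 8))  # audio tokens
x_v = rng.standard_normal((12, 8))  # video tokens
z = rng.standard_normal((4, 8))     # B = 4 bottleneck tokens
x_a, x_v, z = bottleneck_fusion_layer(x_a, x_v, z)
print(x_a.shape, x_v.shape, z.shape)  # (10, 8) (12, 8) (4, 8)
```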

It is widely believed that modeling the relationships between objects helps in representing, and eventually describing, an image. Nevertheless, there has been little evidence supporting this idea for image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of the attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory architecture (dubbed GCN-LSTM) that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each object region proposal are then refined by leveraging the graph structure through a GCN. With the learnt region-level features, our GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported compared to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
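
For readers unfamiliar with the encoder side, the following is a minimal sketch of one standard GCN layer (Kipf-Welling-style symmetric normalization) refining region features over an object-relationship graph; the paper's relation-aware GCN is more elaborate (directed, labeled edges), and the adjacency and feature values here are hypothetical.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step refining region features:
    symmetric normalization  D^{-1/2} (A + I) D^{-1/2}  of the adjacency
    matrix, followed by a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # degrees >= 1, no div-by-0
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)  # 3 detected regions, 2 relations
feats = rng.standard_normal((3, 5))        # hypothetical region features
w = rng.standard_normal((5, 4))
print(gcn_layer(adj, feats, w).shape)      # (3, 4) refined features
```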
