
The \textsl{branchwidth} of a graph was introduced by Robertson and Seymour as a measure of the tree-decomposability of a graph, alternative to treewidth. Branchwidth is polynomially computable on planar graphs by the celebrated ``ratcatcher'' algorithm of Seymour and Thomas. We investigate an extension of this algorithm to minor-closed graph classes beyond planar graphs, as follows: Let $H_{0}$ be a graph embeddable in the projective plane and $H_{1}$ be a graph embeddable in the torus. We prove that every $\{H_{0},H_{1}\}$-minor free graph $G$ contains a subgraph $G'$ such that the difference between the branchwidth of $G$ and the branchwidth of $G'$ is bounded by a constant depending only on $H_{0}$ and $H_{1}$. Moreover, the graph $G'$ admits a tree decomposition where all torsos are planar. This decomposition can be used to derive an EPTAS for branchwidth: for $\{H_{0},H_{1}\}$-minor free graphs, there is a function $f\colon\mathbb{N}\to\mathbb{N}$ and a $(1+\epsilon)$-approximation algorithm for branchwidth, running in time $\mathcal{O}(n^3+f(\frac{1}{\epsilon})\cdot n)$, for every $\epsilon>0$.

Related content

We propose a unifying framework for smoothed analysis of combinatorial local optimization problems and show how a diverse selection of problems within the complexity class PLS can be cast within this model. This abstraction allows us to identify key structural properties, and corresponding parameters, that determine the smoothed running time of local search dynamics. We formalize this via a black-box tool that provides concrete bounds on the expected maximum number of steps needed until local search reaches an exact local optimum. This bound is particularly strong, in the sense that it holds for any starting feasible solution and any choice of pivoting rule, and does not rely on the choice of the specific noise distributions applied to the input; it is parameterized by just a global upper bound $\phi$ on the probability density. We then demonstrate the power of this tool by instantiating it for various PLS-hard problems to derive efficient smoothed running times. This not only unifies, and greatly simplifies, existing positive results, but also allows us to extend or improve them. Notable problems on which we provide such a contribution are Max-Cut, the Travelling Salesman Problem, and Network Coordination Games. Additionally, we propose novel smoothed analysis formulations, and prove polynomial smoothed running times, for important local optimization problems that have not previously been studied from this perspective. In particular, we provide an extensive study of the problem of finding pure Nash equilibria in general and in Network Congestion Games under various representation models, including explicit, step-function, and polynomial latencies. We show that all the problems we study can be solved by their standard local search algorithms in polynomial smoothed time on PLS-hard instances on which these algorithms have exponential worst-case running time.
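
As a concrete illustration of the kind of dynamics the framework covers (a minimal sketch in our own notation, not the paper's black-box tool), the following Python snippet runs the standard FLIP local search for Max-Cut on weights perturbed by Uniform$(0, 1/\phi)$ noise, whose density is bounded by $\phi$; the smoothed-analysis question is how many improving moves such a run needs before reaching an exact local optimum.

    import random

    def smoothed_flip_maxcut(base_weights, phi, seed=0):
        # base_weights: dict mapping an edge (u, v) with u < v to a base weight.
        rng = random.Random(seed)
        # Smoothing step: add noise whose density is bounded by phi.
        w = {e: b + rng.uniform(0.0, 1.0 / phi) for e, b in base_weights.items()}
        nodes = sorted({u for e in w for u in e})
        side = {u: rng.randrange(2) for u in nodes}  # arbitrary starting cut

        def gain(u):
            # Change in cut weight if node u switches sides.
            g = 0.0
            for (a, b), wt in w.items():
                if u in (a, b):
                    v = b if a == u else a
                    g += wt if side[u] == side[v] else -wt
            return g

        steps = 0
        while True:
            improving = [u for u in nodes if gain(u) > 0]
            if not improving:
                return side, steps  # exact local optimum reached
            side[rng.choice(improving)] ^= 1  # any pivoting rule is allowed
            steps += 1

For instance, smoothed_flip_maxcut({(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}, phi=10.0) terminates after a handful of flips; the framework's bounds control the expected maximum number of such steps in terms of the instance size and $\phi$.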

We introduce a termination method for the algebraic graph transformation framework PBPO+, in which we weigh objects by summing the weights of a class of morphisms targeting them. The method is well-defined in rm-adhesive quasitoposes (which include toposes and therefore many graph categories of interest) and is applicable to non-linear rules. The method also extends to other frameworks, including DPO and SqPO, because we have previously shown that they are naturally encodable into PBPO+ in the quasitopos setting.

We consider the problem of estimating the optimal transport map between two probability distributions $P$ and $Q$ in $\mathbb{R}^d$ on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb{R}^d$. We study a computationally efficient estimator initially proposed by Pooladian and Niles-Weed (2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and also provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
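
To make the estimator concrete, here is a minimal NumPy sketch in the spirit of the entropic map of Pooladian and Niles-Weed (2021) for the semi-discrete case: run log-domain Sinkhorn between the samples from $P$ and the atoms of $Q$, then map any new point $x$ to the barycentric projection of the conditional entropic plan. The regularization $\varepsilon$, iteration counts, and all names are our illustrative choices.

    import numpy as np
    from scipy.special import logsumexp

    def sinkhorn_potential(X, Y, b, eps=0.1, iters=500):
        # Log-domain Sinkhorn between the empirical measure on samples X ~ P
        # (uniform weights) and the discrete measure with atoms Y, weights b.
        # Returns the dual potential g on the atoms of Q.
        n = len(X)
        C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
        g = np.zeros(len(Y))
        loga, logb = -np.log(n) * np.ones(n), np.log(b)
        for _ in range(iters):
            f = -eps * logsumexp((g[None, :] - C) / eps + logb[None, :], axis=1)
            g = -eps * logsumexp((f[:, None] - C) / eps + loga[:, None], axis=0)
        return g

    def entropic_map(x, Y, b, g, eps=0.1):
        # Barycentric projection of the conditional entropic plan at a new point x.
        logw = (g - ((x - Y) ** 2).sum(-1)) / eps + np.log(b)
        w = np.exp(logw - logsumexp(logw))
        return w @ Y

With $Q$ supported on two points, the fitted map sends each $x$ to a convex combination of the two atoms that sharpens toward the optimal (discontinuous) assignment as $\varepsilon$ decreases.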

Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes becomes very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erd\H{o}s-R\'enyi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends either to zero or to one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs and, analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are already evident on relatively small graphs.
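
The asymptotic behavior is easy to probe empirically. The sketch below (a deliberately simplified mean-aggregation GCN with fixed random weights, not the paper's exact architecture) samples Erdős–Rényi graphs of growing size and records the fraction mapped to class 1; under a zero-one law this fraction should approach 0 or 1.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))      # fixed random layer weights
    v = rng.normal(size=8)           # fixed random readout weights

    def gcn_classify(n, p=0.1):
        A = np.triu(rng.random((n, n)) < p, 1)
        A = (A | A.T).astype(float)                 # Erdos-Renyi adjacency
        X = rng.normal(size=(n, 4))                 # i.i.d. node features
        deg = np.maximum(A.sum(1, keepdims=True), 1)
        H = np.maximum((A @ X) / deg @ W, 0)        # one mean-aggregation layer + ReLU
        return float(H.mean(0) @ v > 0)             # global mean readout, thresholded

    for n in [20, 50, 100, 200, 500, 1000]:
        print(n, np.mean([gcn_classify(n) for _ in range(50)]))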

Overparameterized neural networks (NNs) are observed to generalize well even when trained to perfectly fit noisy data. This phenomenon motivated a large body of work on "benign overfitting", where interpolating predictors achieve near-optimal performance. Recently, it was conjectured and empirically observed that the behavior of NNs is often better described as "tempered overfitting", where the performance is non-optimal yet also non-trivial, and degrades as a function of the noise level. However, a theoretical justification of this claim for non-linear NNs has been lacking so far. In this work, we provide several results that aim at bridging these complementary views. We study a simple classification setting with 2-layer ReLU NNs, and prove that under various assumptions, the type of overfitting transitions from tempered in the extreme case of one-dimensional data to benign in high dimensions. Thus, we show that the input dimension plays a crucial role in the type of overfitting in this setting, which we also validate empirically for intermediate dimensions. Overall, our results shed light on the intricate connections between the dimension, sample size, architecture, and training algorithm on the one hand, and the type of resulting overfitting on the other.
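
A minimal version of such an experiment can be run in a few lines. The sketch below (our own simplification: plain gradient descent on the logistic loss for a 2-layer ReLU network, with random label noise) lets one compare the clean test error at $d=1$ against large $d$; tempered overfitting would show test error growing with the noise level, while benign overfitting would keep it near the noiseless optimum.

    import numpy as np

    def overfit_test_error(d, n=200, noise=0.2, width=512, steps=5000, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, d))
        w_star = rng.standard_normal(d)
        y = np.sign(X @ w_star)
        y[rng.random(n) < noise] *= -1                    # flip labels with prob. noise
        W = rng.standard_normal((d, width)) / np.sqrt(d)  # hidden layer
        a = rng.choice([-1.0, 1.0], width) / np.sqrt(width)
        for _ in range(steps):                            # full-batch gradient descent
            H = np.maximum(X @ W, 0)                      # ReLU features
            g = -y / (1.0 + np.exp(y * (H @ a)))          # d(logistic loss)/d(output)
            a -= lr * (H.T @ g) / n
            W -= lr * (X.T @ (g[:, None] * (H > 0) * a)) / n
        Xt = rng.standard_normal((5000, d))               # clean test set
        pred = np.sign(np.maximum(Xt @ W, 0) @ a)
        return float((pred != np.sign(Xt @ w_star)).mean())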

We introduce monoidal width as a measure of complexity for morphisms in monoidal categories. Inspired by well-known structural width measures for graphs, like tree width and rank width, monoidal width is based on a notion of syntactic decomposition: a monoidal decomposition of a morphism is an expression specifying that morphism in the language of monoidal categories, whose operations are monoidal products and compositions. Monoidal width penalises composition along ``big'' objects, while it encourages the use of monoidal products. We show that, by choosing the right categorical algebra for decomposing graphs, we can capture tree width and rank width. For matrices, monoidal width is related to the rank. These examples suggest monoidal width as a good measure of the structural complexity of processes modelled as morphisms in monoidal categories.

Message Passing Neural Networks (MPNNs) are instances of Graph Neural Networks that leverage the graph to send messages over the edges. This inductive bias leads to a phenomenon known as over-squashing, where a node feature is insensitive to information contained at distant nodes. Despite recent methods introduced to mitigate this issue, an understanding of the causes of over-squashing and of possible solutions is lacking. In this theoretical work, we prove that: (i) neural network width can mitigate over-squashing, but at the cost of making the whole network more sensitive; (ii) conversely, depth cannot help mitigate over-squashing: increasing the number of layers leads to over-squashing being dominated by vanishing gradients; (iii) the graph topology plays the greatest role, since over-squashing occurs between nodes with high commute (access) time. Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under graph rewiring.
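
Point (iii) is easy to operationalize: commute times can be computed from the pseudoinverse of the graph Laplacian via the standard identity $C(u,v) = 2|E|\,(L^{+}_{uu} + L^{+}_{vv} - 2L^{+}_{uv})$. A small NumPy sketch (our illustration, not the paper's code):

    import numpy as np

    def commute_times(A):
        # A: adjacency matrix of a connected undirected graph.
        L = np.diag(A.sum(1)) - A                 # graph Laplacian
        Lp = np.linalg.pinv(L)                    # Moore-Penrose pseudoinverse
        d = np.diag(Lp)
        return A.sum() * (d[:, None] + d[None, :] - 2 * Lp)  # A.sum() == 2|E|

    # On a path graph the two endpoints maximize commute time, so information
    # exchanged between them is the most prone to over-squashing.
    P = np.diag(np.ones(5), 1)
    C = commute_times(P + P.T)                    # path on 6 nodes
    print(np.unravel_index(C.argmax(), C.shape))  # -> (0, 5)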

This paper deals with Niho functions, which form one of the most important classes of functions thanks to their close connections with a wide variety of objects, from mathematics, such as spreads and oval polynomials, and from applied areas, such as symmetric cryptography, coding theory, and sequences. We investigate specifically the $c$-differential uniformity of the power function $F(x)=x^{s(2^m-1)+1}$ over the finite field $\mathbb{F}_{2^n}$, where $n=2m$, $m$ is odd, and $s=(2^k+1)^{-1}$ is the multiplicative inverse of $2^k+1$ modulo $2^m+1$. We show that the $c$-differential uniformity of $F(x)$ is $2^{\gcd(k,m)}+1$ by carrying out some subtle manipulations of certain equations over $\mathbb{F}_{2^n}$. Notably, $F(x)$ has a very low $c$-differential uniformity, equal to $3$, when $k$ and $m$ are coprime.
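
The quantity in question can be checked by brute force for small parameters. The sketch below (our own script; the paper's result is proved analytically, and its exact hypotheses on $c$ should be consulted) takes $n=6$, $m=3$, $k=2$, so that $s=(2^k+1)^{-1} \bmod (2^m+1)=2$ and $F(x)=x^{15}$ over $\mathbb{F}_{2^6}$, and reports, for each $c$, the maximum number of solutions $x$ of $F(x\oplus a)\oplus cF(x)=b$.

    N, M, K = 6, 3, 2
    MOD = 0b1000011                      # x^6 + x + 1, irreducible over GF(2)

    def gf_mul(x, y):                    # multiplication in GF(2^6)
        r = 0
        while y:
            if y & 1:
                r ^= x
            y >>= 1
            x <<= 1
            if x >> N:                   # reduce modulo the field polynomial
                x ^= MOD
        return r

    def gf_pow(x, e):                    # square-and-multiply
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, x)
            x = gf_mul(x, x)
            e >>= 1
        return r

    s = pow(2**K + 1, -1, 2**M + 1)      # s = 2
    E = s * (2**M - 1) + 1               # exponent 15
    F = [gf_pow(x, E) for x in range(2**N)]

    for c in range(2, 2**N):             # c = 0, 1 are the classical cases
        best = 0
        for a in range(2**N):
            counts = {}
            for x in range(2**N):
                b = F[x ^ a] ^ gf_mul(c, F[x])
                counts[b] = counts.get(b, 0) + 1
            best = max(best, max(counts.values()))
        print(c, best)                   # compare with 2^gcd(K, M) + 1 = 3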

Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
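
To give a flavor of the geometry involved, the sketch below (a stripped-down illustration in the style of rotation-based hyperbolic models; it omits reflections, the attention mechanism, and learned curvatures) scores a triple by rotating the head embedding with block-diagonal Givens rotations, translating it by Möbius addition in the Poincaré ball, and taking the negative hyperbolic distance to the tail.

    import numpy as np

    def mobius_add(x, y, c=1.0):
        # Mobius addition in the Poincare ball of curvature -c.
        xy, x2, y2 = x @ y, x @ x, y @ y
        num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
        return num / (1 + 2 * c * xy + c**2 * x2 * y2)

    def poincare_dist(x, y, c=1.0):
        return (2 / np.sqrt(c)) * np.arctanh(
            np.sqrt(c) * np.linalg.norm(mobius_add(-x, y, c)))

    def givens_rotate(x, thetas):
        # One 2x2 rotation per coordinate pair (relation-specific parameters).
        out = x.copy()
        for i, t in enumerate(thetas):
            a, b = x[2 * i], x[2 * i + 1]
            out[2 * i] = np.cos(t) * a - np.sin(t) * b
            out[2 * i + 1] = np.sin(t) * a + np.cos(t) * b
        return out

    def score(head, rel_thetas, rel_trans, tail, c=1.0):
        # Rotate the head, translate it, and score by negative distance.
        q = mobius_add(givens_rotate(head, rel_thetas), rel_trans, c)
        return -poincare_dist(q, tail, c)

Here head, rel_trans, and tail are assumed to lie inside the unit ball; in an actual model these would all be learned parameters.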

Graph-based semi-supervised learning (SSL) is an important learning problem where the goal is to assign labels to initially unlabeled nodes in a graph. Graph Convolutional Networks (GCNs) have recently been shown to be effective for graph-based SSL problems. GCNs inherently assume the existence of pairwise relationships in graph-structured data. However, in many real-world problems, relationships go beyond pairwise connections and hence are more complex. Hypergraphs provide a natural modeling tool to capture such complex relationships. In this work, we explore the use of GCNs for hypergraph-based SSL. In particular, we propose HyperGCN, an SSL method which uses a layer-wise propagation rule for convolutional neural networks operating directly on hypergraphs. To the best of our knowledge, this is the first principled adaptation of GCNs to hypergraphs. HyperGCN is able to encode both the hypergraph structure and hypernode features in an effective manner. Through detailed experimentation, we demonstrate HyperGCN's effectiveness at hypergraph-based SSL.
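
The core trick can be sketched compactly: in each layer, every hyperedge is approximated by a single edge between its two currently most discrepant nodes, and ordinary graph convolution is run on the resulting graph. The Python sketch below is our simplification of this idea (the full HyperGCN also adds weighted "mediator" connections and uses a symmetrically normalized hypergraph Laplacian):

    import numpy as np

    def reduce_hyperedges(H, hyperedges):
        # For each hyperedge, keep one edge between the two nodes whose
        # current representations differ the most.
        edges = []
        for e in hyperedges:
            pairs = [(i, j) for i in e for j in e if i < j]
            edges.append(max(pairs, key=lambda p: np.linalg.norm(H[p[0]] - H[p[1]])))
        return edges

    def hypergcn_layer(X, hyperedges, W):
        A = np.eye(len(X))                        # self-loops
        for u, v in reduce_hyperedges(X, hyperedges):
            A[u, v] += 1.0
            A[v, u] += 1.0
        return np.maximum(A / A.sum(1, keepdims=True) @ X @ W, 0)  # mean-norm + ReLU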
