
It is known that testing isomorphism of chordal graphs is as hard as the general graph isomorphism problem. Every chordal graph can be represented as the intersection graph of some subtrees of a tree. The leafage of a chordal graph is defined to be the minimum number of leaves in the representing tree. We construct a fixed-parameter tractable algorithm testing isomorphism of chordal graphs with bounded leafage. The key point is a fixed-parameter tractable algorithm finding the automorphism group of a colored order-3 hypergraph whose vertex color classes have bounded size.

Related content

A collection of graphs is \textit{nearly disjoint} if every pair of them intersects in at most one vertex. We prove that if $G_1, \dots, G_m$ are nearly disjoint graphs of maximum degree at most $D$, then the following holds. For every fixed $C$, if each vertex $v \in \bigcup_{i=1}^m V(G_i)$ is contained in at most $C$ of the graphs $G_1, \dots, G_m$, then the (list) chromatic number of $\bigcup_{i=1}^m G_i$ is at most $D + o(D)$. This result confirms a special case of a conjecture of Vu and generalizes Kahn's bound on the list chromatic index of linear uniform hypergraphs of bounded maximum degree. In fact, this result holds for the correspondence (or DP) chromatic number and thus implies a recent result of Molloy, and we derive this result from a more general list coloring result in the setting of `color degrees' that also implies a result of Reed and Sudakov.

Coloring unit-disk graphs efficiently is an important problem in the global and distributed setting, with applications in radio channel assignment problems when the communication relies on omni-directional antennas of the same power. In this context it is important to bound not only the complexity of the coloring algorithms, but also the number of colors used. In this paper, we consider two natural distributed settings. In the location-aware setting (when nodes know their coordinates in the plane), we give a constant-time distributed algorithm coloring any unit-disk graph $G$ with at most $(3+\epsilon)\omega(G)+6$ colors, for any constant $\epsilon>0$, where $\omega(G)$ is the clique number of $G$. This improves upon a classical 3-approximation algorithm for this problem, for all unit-disk graphs whose chromatic number significantly exceeds their clique number. When nodes do not know their coordinates in the plane, we give a distributed algorithm in the LOCAL model that colors every unit-disk graph $G$ with at most $5.68\omega(G)$ colors in a number of rounds bounded as in \cref{sec:comput}. Moreover, when $\omega(G)=O(1)$, the algorithm runs in $O(\log^* n)$ rounds. This algorithm is based on a study of the local structure of unit-disk graphs, which is of independent interest. We conjecture that every unit-disk graph $G$ has average degree at most $4\omega(G)$, which would imply the existence of an $O(\log n)$-round algorithm coloring any unit-disk graph $G$ with (approximately) $4\omega(G)$ colors.
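To make the location-aware setting concrete, here is a minimal sketch of the classical grid-based idea that such algorithms build on (this is not the paper's $(3+\epsilon)\omega(G)+6$ construction): each node derives its color from its coordinates and its immediate neighborhood only, using cells of diameter 1 so that every cell induces a clique, while same-class cells are too far apart to contain adjacent nodes. The helper names and the 9-class periodic pattern are illustrative assumptions.

```python
import math

CELL = 1 / math.sqrt(2)   # a cell of this side length has diameter 1, so its nodes form a clique
K = 3                      # same-class cells are >= 2 cells apart, i.e. at distance > 1

def cell_of(x, y):
    """Index of the grid cell containing the point (x, y)."""
    return (math.floor(x / CELL), math.floor(y / CELL))

def local_color(node, coords, neighbors):
    """Color = (cell class, rank of the node inside its cell).

    coords: dict node -> (x, y); neighbors: dict node -> iterable of adjacent nodes.
    Node identifiers are assumed comparable.  Nodes of one cell form a clique, so
    every one of them sees the whole cell and computes the same sorted list, hence
    consistent and distinct ranks.  Same-class cells are more than distance 1 apart,
    so their nodes are never adjacent and may share colors.
    """
    i, j = cell_of(*coords[node])
    cell_class = (i % K) * K + (j % K)               # 9 cell classes arranged periodically
    cell_members = sorted(v for v in list(neighbors[node]) + [node]
                          if cell_of(*coords[v]) == (i, j))
    return (cell_class, cell_members.index(node))    # at most 9 * omega(G) colors overall
```

This simple scheme already yields $O(\omega(G))$ colors in constant time; getting the constant down to $3+\epsilon$ as in the paper requires a more careful argument.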

A streaming algorithm is considered to be adversarially robust if it provides correct outputs with high probability even when the stream updates are chosen by an adversary who may observe and react to the past outputs of the algorithm. We grow the burgeoning body of work on such algorithms in a new direction by studying robust algorithms for the problem of maintaining a valid vertex coloring of an $n$-vertex graph given as a stream of edges. Following standard practice, we focus on graphs with maximum degree at most $\Delta$ and aim for colorings using a small number $f(\Delta)$ of colors. A recent breakthrough (Assadi, Chen, and Khanna; SODA~2019) shows that in the standard, non-robust, streaming setting, $(\Delta+1)$-colorings can be obtained while using only $\widetilde{O}(n)$ space. Here, we prove that an adversarially robust algorithm running under a similar space bound must spend almost $\Omega(\Delta^2)$ colors and that robust $O(\Delta)$-coloring requires a linear amount of space, namely $\Omega(n\Delta)$. We in fact obtain a more general lower bound, trading off the space usage against the number of colors used. From a complexity-theoretic standpoint, these lower bounds provide (i)~the first significant separation between adversarially robust algorithms and ordinary randomized algorithms for a natural problem on insertion-only streams and (ii)~the first significant separation between randomized and deterministic coloring algorithms for graph streams, since deterministic streaming algorithms are automatically robust. We complement our lower bounds with a suite of positive results, giving adversarially robust coloring algorithms using sublinear space. In particular, we can maintain an $O(\Delta^2)$-coloring using $\widetilde{O}(n \sqrt{\Delta})$ space and an $O(\Delta^3)$-coloring using $\widetilde{O}(n)$ space.

A key challenge facing deep learning is that neural networks are often not robust to shifts in the underlying data distribution. We study this problem from the perspective of the statistical concept of parameter identification. Generalization bounds from learning theory often assume that the test distribution is close to the training distribution. In contrast, if we can identify the "true" parameters, then the model generalizes to arbitrary distribution shifts. However, neural networks are typically overparameterized, making parameter identification impossible. We show that for quadratic neural networks, we can identify the function represented by the model even though we cannot identify its parameters. Thus, we can obtain robust generalization bounds even in the overparameterized setting. We leverage this result to obtain new bounds for contextual bandits and transfer learning with quadratic neural networks. Overall, our results suggest that we can improve robustness of neural networks by designing models that can represent the true data generating process. In practice, the true data generating process is often very complex; thus, we study how our framework might connect to neural module networks, which are designed to break down complex tasks into compositions of simpler ones. We prove robust generalization bounds when individual neural modules are identifiable.
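As a concrete, hypothetical illustration of identifying the function rather than the parameters, consider a one-hidden-layer quadratic network $f(x)=\sum_i a_i (w_i^\top x)^2$ (the specific architecture, shapes, and names below are our assumptions, not taken from the paper): the parameters $(a_i, w_i)$ are not identifiable when the network is overparameterized, but $f$ is determined by the single matrix $M=\sum_i a_i w_i w_i^\top$. The sketch shows two very different parameterizations that induce the same $M$ and therefore the same function.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 6                           # input dimension, hidden units (k > d: overparameterized)

def quadratic_net(W, a, x):
    """f(x) = sum_i a_i * (w_i . x)^2 for a one-hidden-layer quadratic network."""
    return float(np.sum(a * (W @ x) ** 2))

def induced_matrix(W, a):
    """f(x) = x^T M x with M = sum_i a_i w_i w_i^T, so M determines the function."""
    return W.T @ (a[:, None] * W)

W, a = rng.normal(size=(k, d)), rng.normal(size=k)
M = induced_matrix(W, a)

# A second, smaller parameterization with the same induced matrix, obtained from the
# eigendecomposition of M: different parameters (even a different number of units),
# yet exactly the same function.
lam, V = np.linalg.eigh(M)
W2 = np.sqrt(np.abs(lam))[:, None] * V.T          # d units instead of k
a2 = np.sign(lam)

x = rng.normal(size=d)
print(quadratic_net(W, a, x), quadratic_net(W2, a2, x))   # equal up to rounding error
```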

We show that for every $r \ge 2$ there exists $\epsilon_r > 0$ such that any $r$-uniform hypergraph on $m$ edges with bounded vertex degree has a set of at most $(\frac{1}{2} - \epsilon_r)m$ edges the removal of which breaks the hypergraph into connected components with at most $m/2$ edges. We use this to give an algorithm running in time $d^{(1 - \epsilon_r)m}$ that decides satisfiability of $m$-variable $(d, k)$-CSPs in which every variable appears in at most $r$ constraints, where $\epsilon_r$ depends only on $r$ and $k\in o(\sqrt{m})$. Furthermore, our algorithm solves the corresponding #CSP-SAT and Max-CSP-SAT of these CSPs. We also show that CNF representations of unsatisfiable $(2, k)$-CSPs with variable frequency $r$ can be refuted in tree-like resolution in size $2^{(1 - \epsilon_r)m}$. Moreover, for Tseitin formulas on graphs with degree at most $k$ (which are $(2, k)$-CSPs) we give a deterministic algorithm finding such a refutation.

The Weisfeiler-Leman procedure is a widely-used technique for graph isomorphism testing that works by iteratively computing an isomorphism-invariant coloring of vertex tuples. Meanwhile, a fundamental tool in structural graph theory, which is often exploited in approaches to tackle the graph isomorphism problem, is the decomposition into 2- and 3-connected components. We prove that the 2-dimensional Weisfeiler-Leman algorithm implicitly computes the decomposition of a graph into its 3-connected components. This implies that the dimension of the algorithm needed to distinguish two given non-isomorphic graphs is at most the dimension required to distinguish non-isomorphic 3-connected components of the graphs (assuming dimension at least 2). To obtain our decomposition result, we show that, for $k \geq 2$, the $k$-dimensional algorithm distinguishes $k$-separators, i.e., $k$-tuples of vertices that separate the graph, from other vertex $k$-tuples. As a byproduct, we also obtain insights about the connectivity of constituent graphs of association schemes. In an application of the results, we show the new upper bound of $k$ on the Weisfeiler-Leman dimension of the class of graphs of treewidth at most $k$. Using a construction by Cai, F\"urer, and Immerman, we also provide a new lower bound that is asymptotically tight up to a factor of 2.
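For intuition about the iterative coloring the procedure computes, here is a sketch of the 1-dimensional variant (color refinement), which colors single vertices rather than the vertex pairs used by the 2-dimensional algorithm discussed above; the function name and data layout are illustrative assumptions.

```python
from collections import Counter

def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman (color refinement).

    adj: dict vertex -> set of neighbours.  Starting from a uniform coloring,
    each round replaces a vertex's color by (its color, multiset of its
    neighbours' colors), canonically renamed, until the number of color
    classes stops growing.
    """
    color = {v: 0 for v in adj}
    while True:
        signature = {
            v: (color[v], tuple(sorted(Counter(color[u] for u in adj[v]).items())))
            for v in adj
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_color = {v: palette[signature[v]] for v in adj}
        if len(set(new_color.values())) == len(set(color.values())):
            return new_color
        color = new_color

# On a path 0-1-2 the endpoints receive one stable color and the middle vertex another:
# color_refinement({0: {1}, 1: {0, 2}, 2: {1}}) == {0: 0, 1: 1, 2: 0}
```

Two graphs are distinguished when their multisets of stable colors differ; the $k$-dimensional algorithm refines colors of vertex $k$-tuples analogously.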

Domination-type parameters are difficult to manage in Cartesian product graphs and there is usually no general relationship between the parameter in both factors and in the product graph. This is the situation of the domination number, the Roman domination number or the $2$-domination number, among others. Contrary to what happens with the domination number and the Roman domination number, the $2$-domination number remains unknown in cylinders, that is, the Cartesian product of a cycle and a path. In this paper, we compute this parameter in cylinders with small cycles. We develop two algorithms involving the $(\min,+)$ matrix product that allow us to compute the desired values of $\gamma_2(C_n\Box P_m)$, with $3\leq n\leq 15$ and $m\geq 2$. We also pose a conjecture about the general formulae for the $2$-domination number in this graph class.
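Since the $(\min,+)$ product is the computational core here, the sketch below shows that product and its powers via repeated squaring; how the paper encodes partial $2$-dominating sets of a cylinder column into a transfer matrix is not reproduced, and the function names are ours. Roughly, if $M[i][j]$ is the minimum cost of appending a column in state $j$ after a column in state $i$, then $(\min,+)$ powers of $M$ accumulate minimum total cost over longer sequences of columns.

```python
INF = float("inf")

def min_plus(A, B):
    """(min,+) matrix product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    n, p = len(A), len(B[0])
    C = [[INF] * p for _ in range(n)]
    for i in range(n):
        for k in range(len(B)):
            a = A[i][k]
            if a == INF:
                continue
            for j in range(p):
                if a + B[k][j] < C[i][j]:
                    C[i][j] = a + B[k][j]
    return C

def min_plus_power(M, t):
    """t-th (min,+) power of the square matrix M (t >= 1), by repeated squaring."""
    result, base = None, M
    while t:
        if t & 1:
            result = base if result is None else min_plus(result, base)
        base = min_plus(base, base)
        t >>= 1
    return result
```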

Neural networks are increasingly used to estimate parameters in quantitative MRI, in particular in magnetic resonance fingerprinting. Their advantages over the gold standard non-linear least square fitting are their superior speed and their immunity to the non-convexity of many fitting problems. We find, however, that in heterogeneous parameter spaces, i.e. in spaces in which the variance of the estimated parameters varies considerably, good performance is hard to achieve and requires arduous tweaking of the loss function, hyperparameters, and the distribution of the training data in parameter space. Here, we address these issues with a theoretically well-founded loss function: the Cram\'er-Rao bound (CRB) provides a theoretical lower bound for the variance of an unbiased estimator, and we propose to normalize the squared error with the respective CRB. With this normalization, we balance the contributions of hard-to-estimate and not-so-hard-to-estimate parameters and areas in parameter space, and avoid a dominance of the former in the overall training loss. Further, the CRB-based loss function equals one for a maximally efficient unbiased estimator, which we consider the ideal estimator. Hence, the proposed CRB-based loss function provides an absolute evaluation metric. We compare a network trained with the CRB-based loss with a network trained with the commonly used mean squared error loss and demonstrate the advantages of the former in numerical, phantom, and in vivo experiments.
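A minimal sketch of the normalization described above, assuming the per-parameter CRB is available for each training sample (in MR fingerprinting it would come from the Fisher information of the signal model, which is not reproduced here); the tensor shapes and function name are assumptions.

```python
import torch

def crb_normalized_loss(theta_hat: torch.Tensor,
                        theta_true: torch.Tensor,
                        crb: torch.Tensor) -> torch.Tensor:
    """Squared error of each parameter estimate normalized by its Cramér-Rao bound.

    All tensors have shape (batch, n_params); crb holds the CRB of each parameter
    at the true parameter values.  Hard-to-estimate parameters (large CRB) are
    down-weighted, and a maximally efficient unbiased estimator has expected loss 1.
    """
    return torch.mean((theta_hat - theta_true) ** 2 / crb)
```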

Sampling methods (e.g., node-wise, layer-wise, or subgraph) have become an indispensable strategy to speed up training large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on the graph structural information and ignore the dynamicity of optimization, which leads to high variance in estimating the stochastic gradients. The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization compared to the existing methods.
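The adaptive sampling step can be illustrated with a generic importance-sampling sketch; this is not the paper's estimator (the decomposition into embedding-approximation and stochastic-gradient variance, and the explicit reduction of the former, are not reproduced), and the names below are ours. Nodes are drawn with probabilities proportional to an estimate of their gradient contribution and reweighted so the stochastic gradient stays unbiased.

```python
import numpy as np

def sample_nodes(grad_norm_est: np.ndarray, batch_size: int, rng: np.random.Generator):
    """Draw a mini-batch with probabilities proportional to estimated per-node
    gradient norms and return weights that keep the gradient estimate unbiased.

    With p_i proportional to grad_norm_est[i] and weight 1/(n * p_i), the expectation
    of weight * grad_i under the sampling distribution equals the full-batch mean
    gradient, while sampling large-gradient nodes more often lowers the variance.
    """
    p = grad_norm_est / grad_norm_est.sum()
    idx = rng.choice(len(p), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(p) * p[idx])
    return idx, weights
```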

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define the value range. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art searching algorithms, feasibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
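As a baseline for the optimization algorithms such a survey covers, here is a minimal random-search loop, one of the simplest HPO methods; the function and search-space names are illustrative and not tied to any particular toolkit.

```python
import random

def random_search(train_and_eval, space, n_trials=20, seed=0):
    """Minimal random-search HPO: sample each hyper-parameter independently from
    its range and keep the configuration with the best validation score.

    space: dict mapping a hyper-parameter name to a zero-argument sampler, e.g.
        {"lr": lambda: 10 ** random.uniform(-4, -1),
         "batch_size": lambda: random.choice([32, 64, 128])}
    train_and_eval: callable taking a config dict and returning a validation
        score (higher is better).
    """
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample() for name, sample in space.items()}
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```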
