
The notion of graph covers is a discretization of covering spaces introduced and deeply studied in topology. In discrete mathematics and theoretical computer science, they have attracted considerable attention from both the structural and complexity perspectives. Nonetheless, disconnected graphs were usually omitted from consideration, with the explanation that it suffices to understand coverings of the connected components of the target graph by components of the source graph. However, different (but equivalent) versions of the definition of covers of connected graphs generalize to non-equivalent definitions for disconnected graphs. The aim of this paper is to summarize this issue and to compare three different approaches to covers of disconnected graphs: 1) locally bijective homomorphisms, 2) globally surjective locally bijective homomorphisms (which we call \emph{surjective covers}), and 3) locally bijective homomorphisms which cover every vertex the same number of times (which we call \emph{equitable covers}). The standpoint of our comparison is the complexity of deciding whether an input graph covers a fixed target graph. We show that both surjective and equitable covers satisfy a natural and desirable property: covering a disconnected graph is polynomial-time decidable if covering each of its connected components is, and it is NP-complete if covering at least one of its components is NP-complete. We further argue that the third variant, equitable covers, is the most natural one, in particular when considering covers of colored graphs. Moreover, the complexities of surjective and equitable covers differ from the fixed-parameter complexity point of view. In line with the current trends in topological graph theory, as well as its applications in mathematical physics, we consider graphs in a very general sense[...]
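
To make the compared notions concrete, here is a minimal Python sketch (all names our own) that checks whether a vertex map is a locally bijective homomorphism, together with the equitability test that distinguishes variant 3); the classical double cover of the triangle C3 by the hexagon C6 serves as a sanity check.

```python
from collections import Counter

def is_locally_bijective(adj_G, adj_H, f):
    """Check that f is a locally bijective homomorphism from G to H:
    for every vertex u, f maps the neighborhood N_G(u) bijectively
    onto N_H(f(u)).  Adjacency is given as dicts of neighbor sets."""
    for u, nbrs in adj_G.items():
        image = [f[v] for v in nbrs]
        if len(set(image)) != len(image):
            return False                      # not injective on N_G(u)
        if set(image) != adj_H[f[u]]:
            return False                      # not onto N_H(f(u))
    return True

def is_equitable(f):
    """Variant 3): every target vertex is covered the same number of times."""
    return len(set(Counter(f.values()).values())) == 1

# Sanity check: the hexagon C6 double-covers the triangle C3 via i -> i mod 3.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
C3 = {i: {(i - 1) % 3, (i + 1) % 3} for i in range(3)}
f = {i: i % 3 for i in range(6)}
assert is_locally_bijective(C6, C3, f) and is_equitable(f)
```

With a disconnected target the notions separate: mapping a C3 identically onto one triangle and a C6 onto a second, disjoint triangle is globally surjective and locally bijective, yet it covers every vertex of the first triangle once and of the second twice, so it is not equitable.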

Related Content

Seese's conjecture for finite graphs states that monadic second-order logic (MSO) is undecidable on all graph classes of unbounded clique-width. We show that to establish this it would suffice to show that grids of unbounded size can be interpreted in two families of graph classes: minimal hereditary classes of unbounded clique-width; and antichains of unbounded clique-width under the induced subgraph relation. We explore all the currently known classes of the former category and establish that grids of unbounded size can indeed be interpreted in them.

We propose a novel sparse sliced inverse regression method based on random projections in a large-$p$, small-$n$ setting. Embedded in a generalized eigenvalue framework, the proposed approach reduces to the parallel execution of low-dimensional (generalized) eigenvalue decompositions, which facilitates high computational efficiency. Theoretically, we prove that this method achieves the minimax optimal rate of convergence under suitable assumptions. Furthermore, our algorithm involves a delicate reweighting scheme, which can significantly enhance the identifiability of the active set of covariates. Extensive numerical studies demonstrate the superiority of the proposed algorithm over competing methods.
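
As background, the following sketch implements the classical sliced inverse regression estimator (Li, 1991) that sits at the core of such methods; note that it needs $n > p$ to invert the sample covariance, which is precisely the regime the paper's random-projection decomposition circumvents. All names are our own.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Classical SIR: eigen-decompose the covariance of slice means of
    the whitened predictors.  Requires n > p so that the sample
    covariance is invertible."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    w, V = np.linalg.eigh(Sigma)
    Sigma_inv_half = V @ np.diag(w ** -0.5) @ V.T    # whitening matrix
    Z = Xc @ Sigma_inv_half
    # Slice by the order of y and average the whitened predictors per slice.
    slices = np.array_split(np.argsort(y), n_slices)
    M = sum((len(s) / n) * np.outer(Z[s].mean(0), Z[s].mean(0)) for s in slices)
    _, U = np.linalg.eigh(M)                          # eigenvalues ascending
    B = Sigma_inv_half @ U[:, -n_dirs:]               # back to original scale
    return B / np.linalg.norm(B, axis=0)

# Single-index model y = (x_1)^3 + noise: the direction e_1 is recovered.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = X[:, 0] ** 3 + 0.1 * rng.standard_normal(500)
print(np.round(np.abs(sir_directions(X, y).ravel()), 2))
```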

We present a linear-time algorithm that, given as input (i) a bipartite Pfaffian graph $G$ of minimum degree three, (ii) a Hamiltonian cycle $H$ in $G$, and (iii) an edge $e$ in $H$, outputs at least three other Hamiltonian cycles through the edge $e$ in $G$. This linear-time complexity of finding another Hamiltonian cycle given one stands in sharp contrast to the problem of deciding the existence of a Hamiltonian cycle, which is NP-complete already for cubic bipartite planar graphs; such graphs are Pfaffian. Without the minimum-degree requirement, we show that it is NP-hard to find another Hamiltonian cycle in a bipartite Pfaffian graph. We also present improved algorithms for finding optimal traveling salesperson tours and for counting Hamiltonian cycles in bipartite planar graphs, with running times not known to hold for general planar graphs. We prove our results by a new structural technique that efficiently witnesses each Hamiltonian cycle $H$ through an arbitrary fixed anchor edge $e$ in a bipartite Pfaffian graph, using a two-coloring of the vertices as advice that is unique to $H$. Previous techniques -- in particular the Cut&Count technique of Cygan et al. [FOCS'11, TALG'22] -- could reduce the Hamiltonian cycle problem only to essentially counting problems; our results show that counting can be avoided by leveraging properties of bipartite Pfaffian graphs. Our technique also has purely graph-theoretical consequences; for example, we show that every cubic bipartite Pfaffian graph has either zero or at least six distinct Hamiltonian cycles, and the latter bound is tight for the cube graph.
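
The claimed tightness for the cube graph is easy to verify by brute force; the sketch below (not the paper's linear-time algorithm) counts the Hamiltonian cycles of the 3-cube, a cubic bipartite planar, hence Pfaffian, graph.

```python
def count_hamiltonian_cycles(adj, n):
    """Brute-force count of undirected Hamiltonian cycles: enumerate
    closed walks from vertex 0 visiting every vertex once, then halve
    to identify the two orientations of each cycle."""
    count = 0
    def dfs(v, visited, depth):
        nonlocal count
        if depth == n:
            if 0 in adj[v]:                # close the cycle back to 0
                count += 1
            return
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                dfs(w, visited, depth + 1)
                visited.remove(w)
    dfs(0, {0}, 1)
    return count // 2

# The 3-cube Q3: vertices are 3-bit strings, edges flip a single bit.
Q3 = {v: [v ^ (1 << b) for b in range(3)] for v in range(8)}
print(count_hamiltonian_cycles(Q3, 8))     # 6 -- matching the tight bound
```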

A function is called quasiperiodic if its fundamental frequencies are linearly independent over the rationals. With appropriate parameters, the sliding window point clouds of such functions can be shown to be dense in tori with dimension equal to the number of independent frequencies. In this paper, we develop theoretical and computational techniques to study the persistent homology of such sets. Specifically, we provide parameter optimization schemes for sliding windows of quasiperiodic functions, and present theoretical lower bounds on their Rips persistent homology. The latter leverages a recent persistent K\"{u}nneth formula. The theory is illustrated via computational examples and an application to dissonance detection in music audio samples.
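
A minimal sketch of the sliding window construction on a quasiperiodic signal may help fix ideas; the window parameters below are illustrative, not the optimized ones produced by the paper's schemes.

```python
import numpy as np

def sliding_window(f, t, d, tau):
    """Sliding window embedding SW_{d,tau} f(t) = (f(t), ..., f(t + d*tau)),
    evaluated at every time in the array t; returns a (len(t), d+1) cloud."""
    return np.stack([f(t + k * tau) for k in range(d + 1)], axis=-1)

# Frequencies 1 and sqrt(3) are linearly independent over Q, so the signal
# is quasiperiodic and its sliding window cloud is dense in a 2-torus.
f = lambda t: np.cos(t) + np.cos(np.sqrt(3) * t)
t = np.linspace(0, 200, 4000)
X = sliding_window(f, t, d=10, tau=0.5)    # 4000 points in R^11
# Feeding X to a Rips persistence package (e.g. ripser.py) should reveal
# two independent 1-dimensional classes -- the signature of the torus.
```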

A fundamental question in computational geometry is whether, for a dynamic collection of geometric objects in Euclidean space, it is possible to maintain a maximum independent set in polylogarithmic update time. Already for a set of intervals, it is known that no dynamic algorithm can maintain an exact maximum independent set with sublinear update time. Therefore, the typical objective is to explore the trade-off between update time and solution size. Substantial efforts have been made in recent years to understand this question for various families of geometric objects, such as intervals, hypercubes, hyperrectangles, and fat objects. We present the first fully dynamic approximation algorithm for disks of arbitrary radii in the plane that maintains a constant-factor approximate maximum independent set in polylogarithmic update time. First, we show that for a fully dynamic set of $n$ unit disks in the plane, a $12$-approximate maximum independent set can be maintained with worst-case update time $O(\log^2 n)$, and optimal output-sensitive reporting. This result generalizes to fat objects of comparable sizes in any fixed dimension $d$, where the approximation ratio depends on the dimension and the fatness parameter. Our main result is that for a fully dynamic set of disks of arbitrary radii in the plane, an $O(1)$-approximate maximum independent set can be maintained in polylogarithmic expected amortized update time.
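
For intuition on why constant factors are attainable here, the following static sketch (our own, not the paper's dynamic data structure) computes a maximal independent set of unit disks, which is a constant-factor approximation by a standard packing argument: each unit disk can intersect at most five pairwise-disjoint unit disks. The paper's contribution is maintaining such a solution under insertions and deletions in polylogarithmic time, and extending to arbitrary radii.

```python
import numpy as np

def maximal_unit_disk_independent_set(centers):
    """Greedily build a maximal independent set of unit (radius-1) disks:
    keep a disk iff its center is at distance >= 2 from all kept centers.
    Maximality plus the packing argument gives a constant-factor guarantee."""
    chosen = []
    for c in centers:
        if all(np.hypot(*(c - d)) >= 2.0 for d in chosen):
            chosen.append(c)
    return chosen

rng = np.random.default_rng(1)
disks = [rng.uniform(0, 20, size=2) for _ in range(300)]
print(len(maximal_unit_disk_independent_set(disks)))
```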

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
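
The neural ODE link the paper scrutinizes rests on a specific scaling: residual increments of size $1/L$ with weights read off a smooth function of depth. A toy sketch of that regime (scalar state and a tanh residual block of our own choosing, purely to illustrate the assumed limit):

```python
import numpy as np

def resnet_output(L, x0=1.0):
    """Depth-L scalar residual network x_{k+1} = x_k + (1/L) tanh(g(k/L) x_k)
    with weights sampled from a smooth profile g.  Under this 1/L scaling
    the network is exactly an Euler scheme for x'(t) = tanh(g(t) x(t))."""
    g = lambda s: np.sin(2 * np.pi * s)    # smooth weight profile
    x = x0
    for k in range(L):
        x += (1.0 / L) * np.tanh(g(k / L) * x)
    return x

for L in (10, 100, 1000, 10000):
    print(L, resnet_output(L))             # converges as the depth grows
```

The paper's point is that weights trained by stochastic gradient descent need not follow this smooth $1/L$ regime, in which case the limiting description changes or fails altogether.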

We describe the new field of the mathematical analysis of deep learning. This field emerged around a list of research questions that were not answered within the classical framework of learning theory. These questions concern: the outstanding generalization power of overparametrized neural networks, the role of depth in deep architectures, the apparent absence of the curse of dimensionality, the surprisingly successful optimization performance despite the non-convexity of the problem, understanding which features are learned, why deep architectures perform exceptionally well in physical problems, and how particular aspects of an architecture affect the behavior of a learning task. We present an overview of modern approaches that yield partial answers to these questions. For selected approaches, we describe the main ideas in more detail.

Knowledge graph completion aims to predict missing relations between entities in a knowledge graph. While many different methods have been proposed, there is a lack of a unifying framework that would lead to state-of-the-art results. Here we develop PathCon, a knowledge graph completion method that harnesses four novel insights to outperform existing methods. PathCon predicts relations between a pair of entities by: (1) considering the Relational Context of each entity, capturing the relation types adjacent to the entity, modeled through a novel edge-based message passing scheme; (2) considering the Relational Paths capturing all paths between the two entities; and (3) adaptively integrating the Relational Context and Relational Paths through a learnable attention mechanism. Importantly, (4) in contrast to conventional node-based representations, PathCon represents context and paths using only the relation types, which makes it applicable in an inductive setting. Experimental results on knowledge graph benchmarks as well as our newly proposed dataset show that PathCon outperforms state-of-the-art knowledge graph completion methods by a large margin. Finally, PathCon is able to provide interpretable explanations by identifying relations that provide the context and paths that are important for a given predicted relation.
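
A deliberately simplified sketch of insight (1) alone, with all names our own: it collapses the edge-based message passing to a single aggregation of adjacent relation types and omits relational paths and the attention mechanism entirely.

```python
import numpy as np

def relation_context(entity, triples, n_rel):
    """Bag of relation types on edges incident to `entity` -- a one-round,
    node-free stand-in for PathCon's edge-based message passing."""
    v = np.zeros(n_rel)
    for h, r, t in triples:
        if entity in (h, t):
            v[r] += 1.0
    return v

def relation_scores(h, t, triples, n_rel, W):
    """Softmax scores over candidate relations for the pair (h, t), computed
    from the concatenated head/tail contexts; W is a learned
    (n_rel, 2 * n_rel) weight matrix."""
    x = np.concatenate([relation_context(h, triples, n_rel),
                        relation_context(t, triples, n_rel)])
    z = np.exp(W @ x - np.max(W @ x))
    return z / z.sum()
```

Because the representation mentions only relation types, never entity identities, the same scoring applies to entities unseen during training, which is the inductive property noted in point (4).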

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
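
For the classification setting, the OE objective adds to the usual cross-entropy a term that pushes the model toward the uniform distribution on the auxiliary outliers. A minimal sketch ($\lambda = 0.5$ follows the paper's classification experiments; the function and variable names are our own):

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """OE objective for a k-class classifier: cross-entropy on the
    in-distribution batch plus lam times the cross-entropy between the
    softmax on outliers and the uniform distribution (equivalently, the
    mean negative log-softmax over classes)."""
    ce_in = F.cross_entropy(logits_in, labels_in)
    ce_to_uniform = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * ce_to_uniform

# At test time, anomalies can be flagged by a low maximum softmax probability.
```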

Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, such algorithms have a collection of parameters that control the degree of over-segmentation produced. It remains a challenge to select these parameters properly for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost minimization framework that produces the final segmentation by selecting segments that: (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing that we can achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
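
A rough sketch of the hypothesis-generation and stability-scoring steps, assuming Felzenszwalb-Huttenlocher segmentation as the kernel (the framework itself is kernel-agnostic) and omitting the contour-fit term of the full cost minimization:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def hypotheses(image, scales=(50, 100, 200, 400, 800)):
    """Scan the parameter space of one segmentation kernel: one labeling
    per scale, ranging from over- to under-segmented."""
    return [felzenszwalb(image, scale=s) for s in scales]

def stability(mask, labelings):
    """Persistence of one segment (boolean mask): its best IoU against the
    segments of each other hypothesis, averaged over hypotheses."""
    scores = []
    for lab in labelings:
        best = 0.0
        for l in np.unique(lab[mask]):
            other = lab == l
            inter = np.logical_and(mask, other).sum()
            union = np.logical_or(mask, other).sum()
            best = max(best, inter / union)
        scores.append(best)
    return float(np.mean(scores))

# The full method feeds these stability scores, together with a contour-fit
# term, into a cost minimization that selects the final set of segments.
```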
