
Hamiltonian cycles in graphs were first studied in the 1850s. Since then, an impressive amount of research has been dedicated to identifying classes of graphs that admit Hamiltonian cycles, and to related questions. The corresponding decision problem, which asks whether a given graph is Hamiltonian (i.\,e.\ admits a Hamiltonian cycle), is one of Karp's famous NP-complete problems. In this paper we study graphs of bounded degree that are \emph{far} from being Hamiltonian, where a graph $G$ on $n$ vertices is \emph{far} from being Hamiltonian if one must modify a constant fraction of $n$ edges to make $G$ Hamiltonian. We give an explicit deterministic construction of a class of graphs of bounded degree that are locally Hamiltonian, but (globally) far from being Hamiltonian. Here, \emph{locally Hamiltonian} means that every subgraph induced by the neighbourhood of a small vertex set appears in some Hamiltonian graph. More precisely, we obtain graphs that differ in $\Theta(n)$ edges from any Hamiltonian graph, yet whose non-Hamiltonicity cannot be detected from the neighbourhoods of $o(n)$ vertices. Our class of graphs yields hard instances witnessing that one-sided-error property testers for Hamiltonicity require linear query complexity. It is known that any property tester (even with two-sided error) requires a linear number of queries to test Hamiltonicity (Yoshida, Ito, 2010); this is proved via a randomised construction of hard instances. In contrast, our construction is deterministic. So far, only very few deterministic constructions of hard instances for property testing are known. We believe that our construction may lead to future insights in graph theory and towards a characterisation of the properties that are testable in the bounded-degree model.
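For concreteness, the decision problem underlying these distance notions can be made explicit with a brute-force check (exponential time, tiny graphs only). The helper below is purely illustrative and is not part of the construction in the paper.

```python
from itertools import permutations

def is_hamiltonian(n, edges):
    """Return True iff the graph on vertices 0..n-1 with the given edge list
    admits a Hamiltonian cycle. Brute force: only usable for very small n."""
    if n <= 2:
        return False
    adj = {frozenset(e) for e in edges}
    # Fix vertex 0 as the starting point of the cycle to avoid redundant rotations.
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % n])) in adj for i in range(n)):
            return True
    return False

# The 4-cycle is Hamiltonian; removing one edge leaves a path, which is not.
print(is_hamiltonian(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_hamiltonian(4, [(0, 1), (1, 2), (2, 3)]))          # False
```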

Related content

We present a new generalization of the bin covering problem, which is known to be strongly NP-hard. In our generalization there is a positive constant $\Delta$, and we are given a set of items, each of which has a positive size. We would like to find a partition of the items into bins. We say that a bin is near exact covered if the total size of the items packed into it is between $1$ and $1+\Delta$. Our goal is to maximize the number of near exact covered bins. If $\Delta=0$, or if $\Delta>0$ is given as part of the input, we show that our problem admits no approximation algorithm with a bounded asymptotic approximation ratio (assuming that $P\neq NP$). However, for the case where $\Delta>0$ is regarded as a constant, we present an asymptotic fully polynomial time approximation scheme (AFPTAS), which is our main contribution.
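As a small illustration of the objective function (a sketch under the obvious reading of the definition; the partition is assumed to be given rather than computed, and the names below are ours):

```python
def near_exact_covered(bins, delta):
    """Count bins whose total item size lies in [1, 1 + delta].

    `bins` is a partition of the items, given as a list of lists of
    positive item sizes; `delta` is the constant from the problem."""
    return sum(1 for b in bins if 1.0 <= sum(b) <= 1.0 + delta)

# Example with delta = 0.1: the first two bins are near exact covered,
# the third overshoots, and the fourth is not covered at all.
partition = [[0.6, 0.45], [0.5, 0.55], [0.9, 0.4], [0.3]]
print(near_exact_covered(partition, 0.1))  # 2
```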

The study of Markov processes and broadcasting on trees has deep connections to a variety of areas including statistical physics, graphical models, phylogenetic reconstruction, MCMC algorithms, and community detection in random graphs. Notably, the celebrated Belief Propagation (BP) algorithm achieves optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves. Recently, the analysis of low-degree polynomials has emerged as a valuable tool for predicting computational-to-statistical gaps. In this work, we investigate the performance of low-degree polynomials for the reconstruction problem. Perhaps surprisingly, we show that there are simple tree models of fixed arity $d$ and growing depth $\ell$ (so $N = 2^{\ell \log_2(d)}$ leaves) where (1) nontrivial reconstruction of the root value is possible with a simple polynomial time algorithm and with robustness to noise, but not with any polynomial of degree $2^{c \ell} = N^{c/\log_2(d)}$ for $c > 0$ a constant, and (2) when the tree is unknown and given multiple samples with correlated root assignments, nontrivial reconstruction of the root value is possible with a simple, noise-robust, and computationally efficient SQ algorithm but not with any polynomial of degree $2^{c \ell}$. These results clarify limitations of low-degree polynomials vs. polynomial time algorithms for Bayesian estimation problems. They also complement recent work of Moitra, Mossel, and Sandon who studied the circuit complexity of Belief Propagation. As a consequence of our main result, we show that for some $c' > 0$, $\exp(2^{c'\ell}) = \exp(N^{c'/\log_2(d)})$ many samples are needed for RBF kernel regression to obtain nontrivial correlation with the true regression function (BP). We pose related open questions about low-degree polynomials and the Kesten-Stigum threshold.
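The broadcasting model and the Belief Propagation recursion at the root can be sketched in a few lines for the simplest instance, a binary symmetric channel on a complete $d$-ary tree. This is only the textbook recursion used to define the reconstruction problem; it does not reproduce the paper's low-degree or SQ analysis, and the parameter names are ours.

```python
import random

def broadcast(depth, d, eps, root=+1):
    """Broadcast a +1/-1 value down a complete d-ary tree: each child copies
    its parent with probability 1 - eps and flips with probability eps.
    Returns the values at the d**depth leaves, in left-to-right order."""
    level = [root]
    for _ in range(depth):
        level = [s if random.random() > eps else -s for s in level for _ in range(d)]
    return level

def bp_root_posterior(leaves, d, eps):
    """Exact posterior P(root = +1 | leaves) via the upward BP recursion
    L_v(s) = prod over children c of [(1 - eps) * L_c(s) + eps * L_c(-s)]."""
    msgs = [(1.0, 0.0) if x == +1 else (0.0, 1.0) for x in leaves]  # (L(+1), L(-1))
    while len(msgs) > 1:
        parents = []
        for i in range(0, len(msgs), d):
            lp, lm = 1.0, 1.0
            for cp, cm in msgs[i:i + d]:
                lp *= (1 - eps) * cp + eps * cm
                lm *= (1 - eps) * cm + eps * cp
            parents.append((lp, lm))
        msgs = parents
    lp, lm = msgs[0]
    return lp / (lp + lm)  # uniform prior on the root value

leaves = broadcast(depth=6, d=2, eps=0.1)
print(bp_root_posterior(leaves, d=2, eps=0.1))
```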

Bayesian learning using Gaussian processes provides a foundational framework for making decisions in a manner that balances what is known with what could be learned by gathering data. In this dissertation, we develop techniques for broadening the applicability of Gaussian processes. This is done in two ways. Firstly, we develop pathwise conditioning techniques for Gaussian processes, which allow one to express posterior random functions as prior random functions plus a dependent update term. We introduce a wide class of efficient approximations built from this viewpoint, which can be randomly sampled once in advance and evaluated at arbitrary locations without any subsequent stochasticity. This key property improves efficiency and makes it simpler to deploy Gaussian process models in decision-making settings. Secondly, we develop a collection of Gaussian process models over non-Euclidean spaces, including Riemannian manifolds and graphs. We derive fully constructive expressions for the covariance kernels of scalar-valued Gaussian processes on Riemannian manifolds and graphs. Building on these ideas, we describe a formalism for defining vector-valued Gaussian processes on Riemannian manifolds. The introduced techniques allow all of these models to be trained using standard computational methods. In total, these contributions make Gaussian processes easier to work with and allow them to be used within a wider class of domains in an effective and principled manner. This, in turn, opens up the possibility of applying Gaussian processes to novel decision-making settings.
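The core idea of pathwise conditioning is easy to sketch in NumPy for an exact finite-dimensional Gaussian prior: a posterior sample is a prior sample plus a kernel-weighted, data-dependent update (Matheron's rule). The kernel, noise level, and inputs below are illustrative assumptions, and the efficient approximations developed in the dissertation (e.g. prior function draws that can be evaluated anywhere) are not shown.

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    """Squared-exponential kernel matrix k(a, b) for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
X = np.array([-1.0, 0.0, 1.5])          # training inputs
y = np.array([0.3, -0.2, 1.0])          # noisy observations
Xs = np.linspace(-2, 2, 200)            # test inputs
noise = 0.1

# Jointly sample a prior function at the train and test inputs.
Z = np.concatenate([X, Xs])
K = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
f = rng.multivariate_normal(np.zeros(len(Z)), K)
f_train, f_test = f[:len(X)], f[len(X):]
eps = noise * rng.standard_normal(len(X))

# Matheron's rule: posterior sample = prior sample + kernel-weighted update.
Kxx = rbf(X, X) + noise**2 * np.eye(len(X))
update = rbf(Xs, X) @ np.linalg.solve(Kxx, y - (f_train + eps))
posterior_sample = f_test + update
print(posterior_sample[:5])
```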

Real-world networks evolve over time via additions or removals of nodes and edges. In current network evolution models, node degree varies or grows arbitrarily. A recently introduced degree-preserving network growth (DPG) family of models preserves node degree, resulting in structures significantly different from, and more diverse than, those of previous models (Nature Physics 2021, DOI:10.1038/s41567-021-01417-7). Here we present a rigorous mathematical theory underlying the DPG family of network growth models. We prove that the general problem of deciding whether a simple graph can be obtained via the DPG process from a small "kernel" graph (DPG feasibility) is NP-complete, in contrast with the surprising numerical observation that most real-world networks are actually easily constructible by this process -- a dichotomy that remains to be understood. We demonstrate how some well-known network models can be constructed via the DPG process, using suitable parametrization.

We study the problem of learning a hypergraph via edge detecting queries. In this problem, a learner queries subsets of vertices of a hidden hypergraph and observes whether these subsets contain an edge or not. In general, learning a hypergraph with $m$ edges of maximum size $d$ requires $\Omega((2m/d)^{d/2})$ queries. In this paper, we aim to identify families of hypergraphs that can be learned without suffering from a query complexity that grows exponentially in the size of the edges. We show that hypermatchings and low-degree near-uniform hypergraphs with $n$ vertices are learnable with poly$(n)$ queries. For learning hypermatchings (hypergraphs of maximum degree $1$), we give an $O(\log^3 n)$-round algorithm with $O(n \log^5 n)$ queries. We complement this upper bound by showing that there are no algorithms with poly$(n)$ queries that learn hypermatchings in $o(\log \log n)$ adaptive rounds. For hypergraphs with maximum degree $\Delta$ and edge size ratio $\rho$, we give a non-adaptive algorithm with $O((2n)^{\rho \Delta+1}\log^2 n)$ queries. To the best of our knowledge, these are the first algorithms with poly$(n, m)$ query complexity for learning non-trivial families of hypergraphs that have a super-constant number of edges of super-constant size.
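To fix the query model, here is a toy edge-detecting oracle together with the standard peeling argument that isolates one edge with at most $n$ further queries once a set is known to contain an edge. This is a generic warm-up under our own naming, not the round-efficient algorithms of the paper.

```python
def make_oracle(edges):
    """Edge-detecting oracle for a hidden hypergraph: given a vertex set S,
    report whether S fully contains at least one hidden edge."""
    edges = [frozenset(e) for e in edges]
    def oracle(S):
        S = set(S)
        return any(e <= S for e in edges)
    return oracle

def find_one_edge(vertices, oracle):
    """Peel vertices one by one: drop v whenever the rest still contains an
    edge. The surviving minimal set is exactly one edge of the hypergraph."""
    S = set(vertices)
    assert oracle(S), "the starting set must contain an edge"
    for v in list(S):
        if oracle(S - {v}):
            S.remove(v)
    return S

# Hidden hypermatching on 8 vertices with two disjoint edges.
oracle = make_oracle([{0, 2, 5}, {1, 6}])
print(find_one_edge(range(8), oracle))  # one of the two hidden edges
```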

Given an unknown $n \times n$ matrix $A$ having non-negative entries, the \emph{inner product} (IP) oracle takes as inputs a specified row (or column) of $A$ and a vector $v \in \mathbb{R}^{n}$, and returns their inner product. A derivative of IP is the induced degree query in an unknown graph $G=(V(G), E(G))$, which takes a vertex $u \in V(G)$ and a subset $S \subseteq V(G)$ as input and reports the number of neighbors of $u$ that are present in $S$. The goal of this paper is to understand the strength of the inner product oracle. Our results in that direction are as follows: (i) the IP oracle can solve bilinear form estimation, i.e., estimate the value of ${\bf x}^{T}A{\bf y}$ given two vectors ${\bf x},\, {\bf y} \in \mathbb{R}^{n}$ with non-negative entries, and can sample almost uniformly the entries of a matrix with non-negative entries; (ii) we tackle for the first time weighted edge estimation and weighted edge sampling, which follow as applications of the bilinear form estimation and almost uniform sampling problems, respectively; (iii) the induced degree query, a derivative of IP, can solve edge estimation and almost uniform edge sampling in induced subgraphs. To the best of our knowledge, these are the first oracle-based query complexity results for induced subgraphs. We show that IP/induced degree queries over the whole graph can simulate local queries in any induced subgraph; (iv) apart from the above, we also show that IP can solve several matrix problems, such as testing whether the matrix is diagonal, symmetric, doubly stochastic, etc.
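The reduction from an induced degree query to an IP query is straightforward to make concrete: it is the inner product of a row of the adjacency matrix with the indicator vector of $S$. A toy sketch follows (the oracles are simulated locally here, which of course defeats the purpose of the query model; it only serves to fix the definitions, and the names are ours).

```python
import numpy as np

def ip_oracle(A, i, v, row=True):
    """Inner product of row i (or column i) of the non-negative matrix A with v."""
    return A[i, :] @ v if row else A[:, i] @ v

def induced_degree(A, u, S, n):
    """Number of neighbours of u inside S, obtained via a single IP query."""
    indicator = np.zeros(n)
    indicator[list(S)] = 1.0
    return int(ip_oracle(A, u, indicator))

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(induced_degree(A, u=0, S={1, 2}, n=4))  # 1: only vertex 1 neighbours 0 within S
```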

Listing dense subgraphs in large graphs is a key task in a variety of network analysis applications, such as community detection. The clique, as the densest model, has been widely investigated. In practice, however, communities rarely form cliques, for various reasons such as data noise. Therefore, the $k$-plex -- a graph in which each vertex is adjacent to all but at most $k$ vertices -- was introduced as a relaxation of the clique. Often, to better model cohesive communities, the emphasis is placed on connected $k$-plexes with small $k$. In this paper, we continue this line of research on listing all maximal $k$-plexes and all maximal $k$-plexes of prescribed size. Our first contribution is the algorithm ListPlex, which lists all maximal $k$-plexes in $O^*(\gamma^D)$ time for each constant $k$, where $\gamma$ is a value depending on $k$ but strictly smaller than 2, and $D$ is the degeneracy of the graph, which is far smaller than the number of vertices $n$ in real-world graphs. Compared to the trivial bound of $2^n$, the improvement is significant, and our bound is better than all previously known results. In practice, we further use several techniques to accelerate listing $k$-plexes of a given size, such as structure-based pruning rules, cache-efficient data structures, and parallelization. Together, these yield a very practical algorithm. Empirical results show that our approach outperforms state-of-the-art solutions by up to several orders of magnitude.
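The defining condition of a $k$-plex is simple to state in code. A minimal verification routine (our own toy helper, not the ListPlex enumeration algorithm) could look as follows.

```python
def is_kplex(adj, S, k):
    """Check whether the vertex set S induces a k-plex in the graph given by
    the adjacency dict `adj`: every v in S has at least |S| - k neighbours in S
    (the vertex itself counts among the up-to-k non-neighbours)."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

# Toy graph: the 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_kplex(adj, {0, 1, 2, 3}, k=1))  # False: opposite vertices are non-adjacent
print(is_kplex(adj, {0, 1, 2, 3}, k=2))  # True: each vertex misses itself and one other
```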

The semi-random graph process is a single-player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$ and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. We focus on the problem of constructing a perfect matching in as few rounds as possible. In particular, we present an adaptive strategy for the player which achieves a perfect matching in $\beta n$ rounds, where the value of $\beta < 1.206$ is derived from the solution to a system of differential equations. This improves upon the previously best known upper bound of $(1+2/e+o(1)) \, n < 1.736 \, n$ rounds. We also improve the previously best lower bound of $(\ln 2 + o(1)) \, n > 0.693 \, n$ and show that the player cannot achieve the desired property in fewer than $\alpha n$ rounds, where the value of $\alpha > 0.932$ is derived from the solution to another system of differential equations. As a result, the gap between the upper and lower bounds is reduced by roughly a factor of four.
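For intuition about the process, here is a small simulation with a deliberately naive player strategy: match the presented vertex to an arbitrary unmatched partner whenever possible. This strategy wastes every round in which the presented vertex is already matched and therefore needs a superlinear (coupon-collector) number of rounds; the adaptive strategy achieving $\beta n$ rounds in the paper is more involved and is not reproduced here.

```python
import random

def rounds_to_perfect_matching(n, seed=None):
    """Simulate the semi-random process with a naive greedy player: when the
    presented vertex u is unmatched, match it to an arbitrary other unmatched
    vertex; otherwise the round is wasted (the forced edge is simply ignored).
    Returns the number of rounds until a perfect matching exists (n even)."""
    assert n % 2 == 0
    rng = random.Random(seed)
    unmatched = set(range(n))
    rounds = 0
    while unmatched:
        rounds += 1
        u = rng.randrange(n)            # vertex presented uniformly at random
        if u in unmatched:
            v = next(w for w in unmatched if w != u)
            unmatched.discard(u)
            unmatched.discard(v)
    return rounds

n = 10_000
samples = [rounds_to_perfect_matching(n, seed=s) for s in range(5)]
print([r / n for r in samples])  # rounds per vertex for the naive strategy
```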

Solving polynomial systems whose solution set is finite is usually done in two main steps: compute a Gr\"obner basis for the degree reverse lexicographic order, and perform a change of order to find the lexicographic Gr\"obner basis. The second step is generally considered to be better understood, in terms of algorithms and complexity. Yet, after two decades of progress on the first step, it turns out that the change of order now takes a large part of the solving time for many instances, including those that are generic or reached after applying a random change of variables. Like the fastest known change of order algorithms, this work focuses on the latter situation, where the ideal defined by the system satisfies structural properties. First, the ideal has a shape lexicographic Gr\"obner basis. Second, the set of leading terms with respect to the degree reverse lexicographic order has a stability property; in particular, the multiplication matrix of the smallest variable is computed for free from the input Gr\"obner basis. The current fastest algorithms rely on the sparsity of this multiplication matrix to find its minimal polynomial efficiently using Wiedemann's approach. This paper starts from the observation that this sparsity is a consequence of an algebraic structure, which can be exploited to represent the matrix concisely as a univariate polynomial matrix. We show that the Hermite normal form of that matrix yields the sought lexicographic Gr\"obner basis, under assumptions which cover the shape position case. This leads to an improved complexity bound for the second step. The practical benefit is also confirmed via implementations based on the state-of-the-art software libraries msolve and PML.
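To make the two-step pipeline concrete on a toy zero-dimensional system, one can use SymPy, assuming its `groebner` function and the `GroebnerBasis.fglm` change-of-order method are available (SymPy implements a generic FGLM-style conversion; the structured Hermite-normal-form approach of this paper is not part of SymPy). In two variables the graded lexicographic and degree reverse lexicographic orders coincide, so `grlex` is used below.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 + y**2 - 5, x*y - 2]          # a toy zero-dimensional system

# Step 1: Groebner basis for a graded order (grlex == grevlex in two variables).
G_graded = groebner(F, x, y, order='grlex')
print(G_graded)

# Step 2: change of order to the lexicographic Groebner basis via FGLM.
G_lex = G_graded.fglm('lex')
print(G_lex)
```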

The ZX-calculus is a graphical language for reasoning about quantum computation using ZX-diagrams, a certain flexible generalisation of quantum circuits that can be used to represent linear maps from $m$ to $n$ qubits for any $m,n \geq 0$. Some applications for the ZX-calculus, such as quantum circuit optimisation and synthesis, rely on being able to efficiently translate a ZX-diagram back into a quantum circuit of comparable size. While several sufficient conditions are known for describing families of ZX-diagrams that can be efficiently transformed back into circuits, it has previously been conjectured that the general problem of circuit extraction is hard. That is, it should not be possible to efficiently convert an arbitrary ZX-diagram describing a unitary linear map into an equivalent quantum circuit. In this paper we prove this conjecture by showing that the circuit extraction problem is #P-hard, and so is itself at least as hard as strong simulation of quantum circuits. In addition to our main hardness result, which relies specifically on the circuit representation, we give a representation-agnostic hardness result. Namely, we show that any oracle that takes as input a ZX-diagram description of a unitary and produces samples of the output of the associated quantum computation enables efficient probabilistic solutions to NP-complete problems.
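On the positive side, extraction from suitably structured diagrams is exactly what tools such as PyZX implement. A typical round trip, assuming PyZX's `generate.CNOT_HAD_PHASE_circuit`, `simplify.full_reduce`, and `extract_circuit` interfaces, looks roughly as follows; the hardness result above says that no such procedure can work efficiently for arbitrary unitary ZX-diagrams.

```python
import pyzx as zx

# Random circuit -> ZX-diagram -> simplify -> extract a circuit back.
circuit = zx.generate.CNOT_HAD_PHASE_circuit(qubits=4, depth=50)
g = circuit.to_graph()
zx.simplify.full_reduce(g)                     # rewrite the ZX-diagram
extracted = zx.extract_circuit(g.copy())       # turn the diagram back into a circuit
print(extracted.stats())
print(zx.compare_tensors(circuit, extracted))  # expected: True (same unitary)
```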
