
We consider the problem of deterministically enumerating all minimum $k$-cut-sets in a given hypergraph for any fixed $k$. The input here is a hypergraph $G = (V, E)$ with non-negative hyperedge costs. A subset $F$ of hyperedges is a $k$-cut-set if the number of connected components in $G - F$ is at least $k$, and it is a minimum $k$-cut-set if it has the least cost among all $k$-cut-sets. For fixed $k$, we refer to the problem of finding a minimum $k$-cut-set as Hypergraph-$k$-Cut and the problem of enumerating all minimum $k$-cut-sets as Enum-Hypergraph-$k$-Cut. The special cases of Hypergraph-$k$-Cut and Enum-Hypergraph-$k$-Cut restricted to graph inputs are well known to be solvable in (randomized as well as deterministic) polynomial time. In contrast, it is only recently that polynomial-time algorithms for Hypergraph-$k$-Cut were developed. The randomized polynomial-time algorithm for Hypergraph-$k$-Cut designed in 2018 (Chandrasekaran, Xu, and Yu, SODA 2018) showed that the number of minimum $k$-cut-sets in a hypergraph is $O(n^{2k-2})$, where $n$ is the number of vertices in the input hypergraph, and that they can all be enumerated in randomized polynomial time, thus resolving Enum-Hypergraph-$k$-Cut in randomized polynomial time. A deterministic polynomial-time algorithm for Hypergraph-$k$-Cut was subsequently designed in 2020 (Chandrasekaran and Chekuri, FOCS 2020), but it is not guaranteed to enumerate all minimum $k$-cut-sets. In this work, we give the first deterministic polynomial-time algorithm for Enum-Hypergraph-$k$-Cut (this is non-trivial even for $k = 2$). Our algorithms are based on new structural results that allow all minimum $k$-cut-sets to be recovered efficiently by solving minimum $(S,T)$-terminal cut problems. Our techniques give new structural insights even for enumerating all minimum cut-sets (i.e., minimum 2-cut-sets) in a given hypergraph.
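
To make the basic subroutine concrete, here is a minimal sketch (ours, not the authors' enumeration algorithm) of computing a single minimum $(S,T)$-terminal cut in a hypergraph by max-flow, using the standard reduction that replaces every hyperedge by a two-node gadget; the function names and the use of networkx are our assumptions.

```python
# Minimal sketch (ours): minimum (S,T)-terminal cut in a hypergraph via
# max-flow, using the standard reduction that replaces each hyperedge by a
# two-node gadget whose internal edge carries the hyperedge cost.
import networkx as nx

def hypergraph_st_cut(hyperedges, S, T):
    """hyperedges: list of (cost, set_of_vertices). Returns (cut_value,
    indices of hyperedges in the minimum (S,T)-terminal cut-set)."""
    D = nx.DiGraph()
    for i, (cost, he) in enumerate(hyperedges):
        D.add_edge(("in", i), ("out", i), capacity=cost)
        for v in he:
            D.add_edge(v, ("in", i))    # no 'capacity' attribute = infinite
            D.add_edge(("out", i), v)
    for s in S:
        D.add_edge("src", s)
    for t in T:
        D.add_edge(t, "snk")
    value, (reach, _) = nx.minimum_cut(D, "src", "snk")
    cut = {i for i in range(len(hyperedges))
           if ("in", i) in reach and ("out", i) not in reach}
    return value, cut

# toy usage: cutting the cheap hyperedge {1,2} separates vertex 1 from 4
print(hypergraph_st_cut([(1, {1, 2}), (2, {2, 3, 4}), (1, {3, 4})],
                        S={1}, T={4}))
```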

Related content

Given a graph, the shortest-path problem requires finding a sequence of edges with minimum cumulative length that connects a source vertex to a target vertex. We consider a generalization of this classical problem in which the position of each vertex in the graph is a continuous decision variable, constrained to lie in a corresponding convex set. The length of an edge is then defined as a convex function of the positions of the vertices it connects. Problems of this form arise naturally in motion planning of autonomous vehicles, robot navigation, and even optimal control of hybrid dynamical systems. The price of such wide applicability is the complexity of the problem, which is easily seen to be NP-hard. Our main contribution is a strong mixed-integer convex formulation based on perspective functions. This formulation has a very tight convex relaxation and makes it possible to efficiently find globally optimal paths in large graphs and in high-dimensional spaces.
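
To make the role of perspective functions concrete, here is a brief sketch of the underlying trick in our own notation (a simplification, not necessarily the paper's exact formulation):

```latex
% Sketch in our notation (a simplification, not the paper's exact model):
% edge e = (u, v) carries a convex length \ell_e and a binary indicator
% y_e \in \{0,1\}; with scaled copies z_u^e \approx y_e x_u we charge the
% perspective of \ell_e, which is jointly convex in (z_u^e, z_v^e, y_e):
\[
  \tilde{\ell}_e\bigl(z_u^e, z_v^e, y_e\bigr)
  = y_e \,\ell_e\!\left(\tfrac{z_u^e}{y_e}, \tfrac{z_v^e}{y_e}\right),
  \qquad \tilde{\ell}_e(0, 0, 0) = 0 .
\]
% Summing \tilde{\ell}_e over edges gives a cost that vanishes on unused
% edges, while relaxing y_e to [0, 1] yields the tight convex relaxation.
```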

We study the problem of identifying the source of a stochastic diffusion process spreading on a graph, based on the arrival times of the diffusion at a few queried nodes. In a graph $G=(V,E)$, an unknown source node $v^* \in V$ is drawn uniformly at random, and unknown edge weights $w(e)$ for $e\in E$, representing the propagation delays along the edges, are drawn independently from a Gaussian distribution of mean $1$ and variance $\sigma^2$. An algorithm then attempts to identify $v^*$ by querying nodes $q \in V$ and being told the length of the shortest path between $q$ and $v^*$ in the graph $G$ weighted by $w$. We consider two settings: non-adaptive, in which all query nodes must be decided in advance, and adaptive, in which each query can depend on the results of the previous ones. Both settings are motivated by an application of the problem to epidemic processes (where the source is called patient zero), which we discuss in detail. We characterize the query complexity when $G$ is an $n$-node path. In the non-adaptive setting, $\Theta(n\sigma^2)$ queries are needed for $\sigma^2 \leq 1$, and $\Theta(n)$ for $\sigma^2 \geq 1$. In the adaptive setting, somewhat surprisingly, only $\Theta(\log\log_{1/\sigma}n)$ queries are needed when $\sigma^2 \leq 1/2$, and $\Theta(\log \log n)+O_\sigma(1)$ when $\sigma^2 \geq 1/2$. This is the first mathematical study of source identification with time queries in a non-deterministic diffusion process.
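
For concreteness, a minimal simulation of this query model on a path (our own sketch of the setup only; the function names and parameter values are illustrative):

```python
# Minimal sketch (ours) of the query model on an n-node path, setup only:
# uniform source, i.i.d. Gaussian edge delays, and an oracle returning the
# weighted distance from the queried node to the hidden source.
import random

def make_instance(n, sigma):
    source = random.randrange(n)                          # v* uniform in {0,...,n-1}
    w = [random.gauss(1.0, sigma) for _ in range(n - 1)]  # delay of edge (i, i+1)
    def query(q):
        lo, hi = min(q, source), max(q, source)
        return sum(w[lo:hi])      # on a path the q-to-source path is unique
    return source, query

source, query = make_instance(n=1000, sigma=0.5)
print(query(0), query(999))       # arrival times observed at the two endpoints
```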

Enumerating all connected subgraphs of a given order in a graph is a computationally challenging task. In this paper, we propose two algorithms for enumerating all connected induced subgraphs of a given order in connected undirected graphs. The first algorithm is a variant of a previous well-known algorithm; it enumerates all connected induced subgraphs of order $k$ in a bottom-up manner, and we present data structures that yield unit-time element checking and linear space. In contrast to previous algorithms, which work either bottom-up or by reverse search, we then propose an algorithm that enumerates all connected induced subgraphs of order $k$ in a top-down manner by recursively deleting vertices, and we present the data structures used in its implementation. The correctness and complexity of the top-down algorithm are analysed and proven. Experimental results show that the variant bottom-up algorithm outperforms the other algorithms for enumerating connected induced subgraphs of small order, while the top-down algorithm is the fastest among state-of-the-art algorithms for enumerating connected induced subgraphs of large order.
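
For reference, a minimal bottom-up enumerator in the classic style that the first algorithm builds on (this is the textbook ESU routine, not the authors' optimized variant, and the helper names are ours):

```python
# Minimal bottom-up enumerator of connected induced subgraphs of order k in
# the classic ESU style (a textbook routine, not the paper's variants).
def connected_induced_subgraphs(adj, k):
    """adj: dict mapping each vertex to its set of neighbors. Yields every
    k-vertex set that induces a connected subgraph, each exactly once."""
    def extend(sub, ext, root):
        if len(sub) == k:
            yield frozenset(sub)
            return
        ext = set(ext)
        closed = sub | {u for s in sub for u in adj[s]}   # sub plus its neighbors
        while ext:
            w = ext.pop()
            # only exclusive neighbors of w larger than the root may extend,
            # so each subgraph is generated once, from its smallest vertex
            new_ext = ext | {u for u in adj[w] if u > root and u not in closed}
            yield from extend(sub | {w}, new_ext, root)
    for v in sorted(adj):
        yield from extend({v}, {u for u in adj[v] if u > v}, v)

# toy usage: a 4-cycle has exactly four connected induced subgraphs of order 3
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sorted(sorted(s) for s in connected_induced_subgraphs(adj, 3)))
```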

Mining maximal subgraphs with cohesive structures from a bipartite graph has been widely studied. One important cohesive structure on bipartite graphs is the k-biplex, in which each vertex on one side is disconnected from at most k vertices on the other side. In this paper, we study the maximal k-biplex enumeration problem, which enumerates all maximal k-biplexes. Existing methods suffer from efficiency and/or scalability issues, and the time they may wait between consecutive outputs is exponential in the size of the input bipartite graph (i.e., they have exponential delay). In this paper, we adopt a reverse search framework called bTraversal, which corresponds to a depth-first search (DFS) procedure on an implicit solution graph over all maximal k-biplexes. We then develop a series of techniques for improving and implementing this framework, including (1) carefully selecting an initial solution from which to start the DFS, (2) pruning the vast majority of links from the solution graph of bTraversal, and (3) implementing the abstract procedures of the framework. The resulting algorithm, called iTraversal, has an underlying solution graph that is significantly sparser than that of bTraversal (around 0.1% of its links). Moreover, iTraversal guarantees polynomial delay. Our experimental results on real and synthetic graphs, the largest of which contains more than one billion edges, show that our algorithm is up to four orders of magnitude faster than existing algorithms.
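
To pin down the definition, a small checker written directly from it (our own helper, unrelated to the paper's enumeration algorithms):

```python
# Sketch (ours): verifying the k-biplex property straight from its
# definition; L and R are the two sides, edges is a set of (u, v) pairs.
def is_k_biplex(L, R, edges, k):
    """Every vertex of L misses at most k vertices of R, and vice versa."""
    E = set(edges)
    ok_L = all(sum((u, v) not in E for v in R) <= k for u in L)
    ok_R = all(sum((u, v) not in E for u in L) <= k for v in R)
    return ok_L and ok_R

# toy usage: {a,b} x {x,y} with the single edge (b, y) missing is a 1-biplex
print(is_k_biplex({"a", "b"}, {"x", "y"},
                  {("a", "x"), ("a", "y"), ("b", "x")}, k=1))
```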

Let $Q_{n}^{r}$ be the graph with vertex set $\{-1,1\}^{n}$ in which two vertices are joined if their Hamming distance is at most $r$. The edge-isoperimetric problem for $Q_{n}^{r}$ is the following: for every $(n,r,M)$ such that $1\le r\le n$ and $1\le M\le2^{n}$, determine the minimum edge-boundary size of an $M$-element subset of the vertices of $Q_{n}^{r}$. In this paper, we apply two different approaches to prove bounds for this problem. The first approach is a linear programming approach and the second is a probabilistic approach. Our bound derived by the first approach generalizes the tight bound for $M=2^{n-1}$ derived by Kahn, Kalai, and Linial in 1989; moreover, it is also tight for $M=2^{n-2}$ and $r\le\frac{n}{2}-1$. Our bounds derived by the second approach are expressed in terms of the \emph{noise stability}; they are asymptotically tight as $n\to\infty$ when $r=2\lfloor\frac{\beta n}{2}\rfloor+1$ and $M=\lfloor\alpha2^{n}\rfloor$ for fixed $\alpha,\beta\in(0,1)$, and tight up to a factor of $2$ when $r=2\lfloor\frac{\beta n}{2}\rfloor$ and $M=\lfloor\alpha2^{n}\rfloor$. In fact, the edge-isoperimetric problem is equivalent to a ball-noise stability problem, a variant of the traditional (i.i.d.) noise stability problem. Our results can thus be interpreted as bounds for the ball-noise stability problem.
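
For tiny parameters the problem can be solved exhaustively; a brute-force sketch (ours) that directly instantiates the definitions above:

```python
# Brute-force sketch (ours) of the edge-isoperimetric problem on Q_n^r,
# usable only for tiny n: the vertex set {-1,1}^n has 2^n elements.
from itertools import combinations, product

def min_edge_boundary(n, r, M):
    V = list(product((-1, 1), repeat=n))
    def joined(u, v):
        return 1 <= sum(a != b for a, b in zip(u, v)) <= r   # Hamming distance <= r
    best = None
    for A in combinations(V, M):
        A = set(A)
        boundary = sum(joined(u, v) for u in A for v in V if v not in A)
        best = boundary if best is None else min(best, boundary)
    return best

print(min_edge_boundary(n=3, r=2, M=4))   # the M = 2^{n-1} case for n = 3, r = 2
```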

Inspired by [4], we present a new algorithm for the uniformly random generation of ordered trees in which all occurring outdegrees can be specified by a given sequence of numbers. The method can be used for the random generation of binary trees, n-ary trees, or trees with mixed arities. We show that the algorithm is correct and has $O(n)$ time complexity, where $n$ is the desired number of nodes in the resulting tree. In the discussion section we show how selected formulas can be derived using ideas developed in the correctness proof of the algorithm.
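
One standard route to such uniform generation is the Dvoretzky–Motzkin cycle lemma; the sketch below takes that route (ours, and not necessarily the paper's [4]-inspired algorithm):

```python
# Sketch (ours): uniform random ordered trees with a prescribed multiset of
# outdegrees via the Dvoretzky-Motzkin cycle lemma (not necessarily the
# paper's method). Requires sum(degrees) == len(degrees) - 1.
import random

def random_ordered_tree(degrees):
    n = len(degrees)
    assert sum(degrees) == n - 1, "outdegrees of an n-node tree sum to n - 1"
    seq = list(degrees)
    random.shuffle(seq)
    # exactly one rotation of the shuffled sequence is a valid preorder
    # outdegree sequence: rotate to just after the first minimum of the
    # running sums of (d - 1)
    start, run, low = 0, 0, 0
    for i, d in enumerate(seq):
        run += d - 1
        if run < low:
            low, start = run, i + 1
    seq = seq[start:] + seq[:start]
    def build(it):                       # rebuild the tree from preorder degrees
        return [build(it) for _ in range(next(it))]
    return build(iter(seq))              # tree as nested lists of children

print(random_ordered_tree([2, 1, 1, 0, 0]))   # a random 5-node ordered tree
```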

In this paper we prove upper and lower bounds on the minimal spherical dispersion. In particular, we show that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$ and up to logarithmic terms, linear in the dimension $d$. We also derive upper and lower bounds on the expected dispersion of points chosen independently and uniformly at random from the Euclidean unit sphere.
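
As a companion to the expected-dispersion bounds, a crude Monte Carlo sketch (ours, assuming the cap-based definition of spherical dispersion) that samples uniform points on the sphere and lower-bounds their dispersion:

```python
# Crude Monte Carlo sketch (ours), assuming the cap-based definition of
# spherical dispersion: the largest normalized measure of a spherical cap
# containing none of the points. Reports a lower bound from random caps.
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n, d):
    x = rng.standard_normal((n, d))        # normalized Gaussians are uniform
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def dispersion_lower_bound(points, centers=2000, probes=20000):
    d = points.shape[1]
    c = uniform_sphere(centers, d)          # candidate cap centers
    t = (c @ points.T).max(axis=1)          # cap {x: <x,c> > t(c)} misses all points
    z = uniform_sphere(probes, d)           # fresh probes estimate the cap measures
    return ((z @ c.T) > t).mean(axis=0).max()

P = uniform_sphere(64, 3)
print(dispersion_lower_bound(P))
```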

This work studies an experimental design problem where $x$'s are to be selected with the goal of estimating a function $m(x)$, which is observed with noise. A linear model is fitted to $m(x)$ but it is not assumed that the model is correctly specified. It follows that the quantity of interest is the best linear approximation of $m(x)$, which is denoted by $\ell(x)$. It is shown that in this framework the ordinary least squares estimator typically leads to an inconsistent estimation of $\ell(x)$, and rather weighted least squares should be considered. An asymptotic minimax criterion is formulated for this estimator, and a design that minimizes the criterion is constructed. An important feature of this problem is that the $x$'s should be random, rather than fixed. Otherwise, the minimax risk is infinite. It is shown that the optimal random minimax design is different from its deterministic counterpart, which was studied previously, and a simulation study indicates that it generally performs better when $m(x)$ is a quadratic or a cubic function. Another finding is that when the variance of the noise goes to infinity, the random and deterministic minimax designs coincide. The results are illustrated for polynomial regression models and different generalizations are presented.
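
A minimal sketch (ours) of fitting the best linear approximation by weighted least squares under a random design; the mean function, design, and weights below are illustrative assumptions, not the paper's minimax-optimal construction:

```python
# Sketch (ours): weighted least squares for the best linear approximation
# under a random design. The mean function m, the design, and the weights
# are illustrative placeholders, not the paper's minimax-optimal design.
import numpy as np

rng = np.random.default_rng(1)
m = lambda x: 1.0 + 0.5 * x + 2.0 * x**2      # misspecified: m is not linear

n = 500
x = rng.uniform(-1, 1, n)                     # random design points
y = m(x) + rng.normal(0, 0.3, n)              # noisy observations of m
X = np.column_stack([np.ones(n), x])          # linear working model

w = 1.0 / (1.0 + x**2)                        # illustrative weights only
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("WLS:", beta_wls, "OLS:", beta_ols)
```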

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
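
Schematically, that last step can be sketched as follows (our tabular sketch; the bonus, temperature, and horizon handling are simplified placeholders, not the paper's exact choices):

```python
# Schematic tabular sketch of the recipe described above (ours): add an
# optimism bonus to rewards, then solve a soft Bellman equation whose
# log-sum-exp temperature tau plays the role of the risk-seeking parameter.
import numpy as np
from scipy.special import logsumexp

def k_values(P, R, bonus, tau, L):
    """P: transition probs, shape (S, A, S); R, bonus: (S, A) arrays;
    L: horizon. Returns K-values of shape (L, S, A)."""
    S, A, _ = P.shape
    K = np.zeros((L + 1, S, A))
    for h in range(L - 1, -1, -1):
        V = tau * logsumexp(K[h + 1] / tau, axis=1)   # soft value per state
        K[h] = R + bonus + P @ V                      # bonus-augmented backup
    return K[:L]

def boltzmann_policy(K_h, tau):
    """Exploration policy: softmax of K-values at temperature tau."""
    p = np.exp((K_h - K_h.max(axis=1, keepdims=True)) / tau)
    return p / p.sum(axis=1, keepdims=True)

# toy usage: 2 states, 2 actions, horizon 5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(2), size=(2, 2))
R = rng.uniform(size=(2, 2))
K = k_values(P, R, bonus=0.1 * np.ones((2, 2)), tau=0.5, L=5)
print(boltzmann_policy(K[0], tau=0.5))
```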

We propose a new method of estimation in topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics $K$ from the observed data. We derive new finite-sample minimax lower bounds for the estimation of $A$, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents $n$, individual document length $N_i$, dictionary size $p$, and number of topics $K$; both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while providing the competing methods with its correct value in our simulations.
