
A subset $S$ of vertices in a graph $G=(V, E)$ is a Dominating Set if each vertex in $V(G)\setminus S$ is adjacent to at least one vertex in $S$. In 2013, Chellali et al. introduced the concept of a $[1,j]$-dominating set by restricting the number of neighbors in $S$ that a vertex outside $S$ may have. A set $D \subseteq V$ of a graph $G = (V, E)$ is called a $[1,j]$-Dominating Set of $G$ if every vertex not in $D$ has at least one and at most $j$ neighbors in $D$. The Minimum $[1,j]$-Domination problem asks for a $[1,j]$-dominating set $D$ of minimum cardinality. Given a positive integer $k$ and a graph $G = (V, E)$, the $[1,j]$-Domination Decision problem is to decide whether $G$ has a $[1,j]$-dominating set of cardinality at most $k$. For constant $j$, a polynomial-time algorithm is known on split graphs, in contrast to the classical Dominating Set problem, which is NP-hard on split graphs. This result motivates us to investigate the effect of the restriction $j$ on the complexity of the $[1,j]$-Domination problem on various classes of graphs. Although for $j\geq 3$ it has been proved that the minimum size of a classical dominating set equals that of a minimum $[1,j]$-dominating set in interval graphs, the complexity of finding a minimum $[1,2]$-dominating set in interval graphs has remained open. In this paper, we propose a polynomial-time algorithm, based on dynamic programming, for computing a minimum $[1,2]$-dominating set of an interval graph. On the negative side, we show that the $[1,2]$-Domination Decision problem on circle graphs is NP-complete.
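
For concreteness, the defining condition is easy to check directly from an adjacency-list representation. Below is a minimal Python sketch of such a verifier (the graph encoding and function name are illustrative assumptions; this is not the dynamic program developed in the paper):

```python
def is_1j_dominating(adj, D, j):
    """Check whether D is a [1,j]-dominating set of the graph given by
    `adj` (dict: vertex -> set of neighbors): every vertex outside D
    must have at least 1 and at most j neighbors in D."""
    D = set(D)
    return all(1 <= len(adj[v] & D) <= j for v in adj if v not in D)

# Example: on the path a-b-c-d, {b, c} is a [1,2]-dominating set.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(is_1j_dominating(adj, {'b', 'c'}, 2))  # True
```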

Related Content

We introduce $5/2$- and $7/2$-order $L^2$-accurate randomized Runge-Kutta-Nystr\"{o}m methods, tailored for approximating Hamiltonian flows within non-reversible Markov chain Monte Carlo samplers, such as unadjusted Hamiltonian Monte Carlo and unadjusted kinetic Langevin Monte Carlo. We establish quantitative $5/2$-order $L^2$-accuracy upper bounds under gradient and Hessian Lipschitz assumptions on the potential energy function. Numerical experiments demonstrate the superior efficiency of the proposed unadjusted samplers on a variety of well-behaved, high-dimensional target distributions.
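
As context for the kind of integrator these methods refine, the following Python sketch implements the classical leapfrog (velocity Verlet) approximation of the Hamiltonian flow inside an unadjusted HMC loop; this is a standard baseline, not the randomized Runge-Kutta-Nystr\"{o}m schemes proposed above, and the step size and Gaussian target are illustrative assumptions:

```python
import numpy as np

def leapfrog(grad_U, x, v, step, n_steps):
    """Leapfrog (velocity Verlet) approximation of the Hamiltonian flow
    of H(x, v) = U(x) + |v|^2 / 2, where grad_U returns the gradient of
    the potential energy U."""
    v = v - 0.5 * step * grad_U(x)       # initial half kick
    for _ in range(n_steps - 1):
        x = x + step * v                 # drift
        v = v - step * grad_U(x)         # full kick
    x = x + step * v
    v = v - 0.5 * step * grad_U(x)       # final half kick
    return x, v

# Unadjusted HMC on a standard Gaussian target: U(x) = |x|^2 / 2.
rng = np.random.default_rng(0)
x = np.zeros(5)
for _ in range(1000):
    v = rng.standard_normal(5)           # resample momentum each iteration
    x, v = leapfrog(lambda z: z, x, v, step=0.2, n_steps=10)
```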

A property $\Pi$ on a finite set $U$ is \emph{monotone} if for every $X \subseteq U$ satisfying $\Pi$, every superset $Y \subseteq U$ of $X$ also satisfies $\Pi$. Many combinatorial properties can be seen as monotone properties. The problem of finding a minimum subset of $U$ satisfying $\Pi$ is a central problem in combinatorial optimization. Although many approximate/exact algorithms have been developed to solve this kind of problem for numerous properties, a solution obtained by these algorithms is often unsuitable for real-world applications due to the difficulty of building accurate mathematical models of real-world problems. A promising approach to overcoming this difficulty is to \emph{enumerate} multiple small solutions rather than to \emph{find} a single small solution. To this end, given a weight function $w: U \to \mathbb N$ and an integer $k$, we devise algorithms that \emph{approximately} enumerate all minimal subsets of $U$ with weight at most $k$ satisfying $\Pi$ for various monotone properties $\Pi$. Here, "approximate enumeration" means that the algorithms output all minimal subsets satisfying $\Pi$ whose weight is at most $k$, and may additionally output some minimal subsets satisfying $\Pi$ whose weight exceeds $k$ but is at most $ck$ for some constant $c \ge 1$. These algorithms allow us to efficiently enumerate, with constant approximation factors, minimal vertex covers, minimal dominating sets in bounded-degree graphs, minimal feedback vertex sets, minimal hitting sets in bounded-rank hypergraphs, etc., of weight at most $k$.
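
For a monotone property, minimality has a simple local characterization: a satisfying set $X$ is minimal iff no single-element removal preserves $\Pi$. A small Python sketch of this check (the oracle interface is an illustrative assumption):

```python
def is_minimal(U_subset, satisfies):
    """For a monotone property (oracle `satisfies`), X is a minimal
    satisfying set iff X satisfies the property but no X \ {x} does."""
    X = set(U_subset)
    if not satisfies(X):
        return False
    return all(not satisfies(X - {x}) for x in X)

# Example: "is a vertex cover" is a monotone property.
edges = [(0, 1), (1, 2), (2, 3)]
is_cover = lambda S: all(u in S or v in S for u, v in edges)
print(is_minimal({1, 2}, is_cover))      # True: {1} and {2} alone fail
print(is_minimal({0, 1, 2}, is_cover))   # False: {1, 2} already covers
```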

This paper presents an optimal construction of $N$-bit-delay almost instantaneous fixed-to-variable-length (AIFV) codes, the general form of binary codes that can be constructed when a finite number of bits of decoding delay is allowed. The presented method enables us to optimize lossless codes over a broader class of codes than the conventional FV and AIFV codes. The paper first discusses the problem of code construction, which contains several essential sub-problems, and defines three classes of optimality to clarify how far these problems can be solved. The properties of the optimal codes are analyzed theoretically, yielding sufficient conditions for achieving the optimum. We then propose an algorithm for constructing $N$-bit-delay AIFV codes for given stationary memoryless sources. The optimality of the constructed codes is discussed both theoretically and empirically: they achieve shorter expected code lengths than the conventional AIFV-$m$ and extended Huffman codes when $N\ge 3$. Moreover, in simulations with random numbers, they achieved higher compression efficiency than 32-bit-precision range codes under reasonable conditions.
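
The comparison metric used above, the expected code length under a stationary memoryless source, is simply the probability-weighted sum of codeword lengths. A tiny Python illustration (the probabilities and lengths are made-up examples, not results from the paper):

```python
def expected_code_length(probs, lengths):
    """Expected code length (bits/symbol) of a code with the given
    codeword lengths, under a memoryless source with these symbol
    probabilities."""
    return sum(p * l for p, l in zip(probs, lengths))

# A Huffman code for probabilities (0.5, 0.25, 0.25) has lengths (1, 2, 2):
print(expected_code_length([0.5, 0.25, 0.25], [1, 2, 2]))  # 1.5 bits/symbol
```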

Significant work has been done on computing the ``average'' optimal solution value for various $\mathsf{NP}$-complete problems using the Erd\H{o}s-R\'{e}nyi model to establish \emph{critical thresholds}. Critical thresholds define narrow bounds on the optimal solution value of a problem instance such that the probability that the value lies outside these bounds vanishes as the instance size approaches infinity. In this paper, we extend the Erd\H{o}s-R\'{e}nyi model to general hypergraphs on $n$ vertices and $M$ hyperedges. We consider the problem of determining critical thresholds for the maximum cardinality matching, and we show that for $M=o(1.155^n)$ the size of the maximum cardinality matching is almost surely 1. On the other hand, if $M=\Theta(2^n)$, then the size of the maximum cardinality matching is $\Omega(n^{\frac12-\gamma})$ for any $\gamma >0$. Lastly, we address the intermediate regime, where $M=\Omega(1.155^n)$ and $M=o(2^n)$, empirically through computer simulations.
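
A minimal Python sketch of the kind of simulation mentioned above: sample $M$ uniformly random hyperedges on $n$ vertices and lower-bound the maximum matching greedily (the uniform-subset edge distribution and the greedy heuristic are assumptions of this sketch, not the paper's protocol):

```python
import random

def greedy_matching_size(n, M, rng=random.Random(0)):
    """Sample M uniformly random hyperedges (subsets of an n-vertex set)
    and return the size of a greedily built matching (pairwise disjoint
    nonempty hyperedges) -- a lower bound on the maximum matching."""
    used = set()
    size = 0
    for _ in range(M):
        e = {v for v in range(n) if rng.random() < 0.5}  # uniform subset
        if e and not (e & used):
            used |= e
            size += 1
    return size

print(greedy_matching_size(20, 1000))
```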

We consider minimum-time multicasting problems in directed and undirected graphs: given a root node and a subset of $t$ terminal nodes, multicasting seeks the minimum number of rounds within which all terminals can be informed of a message originating at the root. In each round, the telephone model we study allows information to move via a matching from informed nodes to uninformed nodes. Since minimum-time multicasting in digraphs is poorly understood compared to the undirected variant, we study an intermediate problem in undirected graphs that specifies a target $k < t$ and requires that only $k$ of the terminals be informed in the minimum number of rounds. For this problem, we improve upon the implications of prior results and obtain an $\tilde{O}(t^{1/3})$ multiplicative approximation. For the directed version, we obtain an {\em additive} $\tilde{O}(k^{1/2})$ approximation algorithm (with a poly-logarithmic multiplicative factor). Our algorithms are based on reductions to the related problems of finding $k$-trees of minimum poise (the sum of maximum degree and diameter), combined with greedy network-decomposition techniques and set covering under partition matroid constraints.
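
The telephone-model constraint, at most one newly informed neighbor per informed node per round, can be simulated directly. The following Python sketch uses a greedy (not optimal) matching in each round, so it only upper-bounds the minimum number of rounds; the graph representation is an illustrative assumption:

```python
def telephone_rounds(adj, root):
    """Simulate broadcast in the telephone model: in each round, a
    matching lets every informed node inform at most one uninformed
    neighbor. Greedy matching gives a valid schedule, hence an upper
    bound on the optimal number of rounds."""
    informed = {root}
    rounds = 0
    while len(informed) < len(adj):
        newly = set()
        for u in list(informed):
            for v in adj[u]:
                if v not in informed and v not in newly:
                    newly.add(v)   # u -> v is one edge of this round's matching
                    break
        if not newly:
            break                  # remaining vertices unreachable
        informed |= newly
        rounds += 1
    return rounds

# Star K_{1,3}: the center informs one leaf per round -> 3 rounds.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(telephone_rounds(adj, 0))  # 3
```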

The random walk $d$-ary cuckoo hashing algorithm was defined by Fotakis, Pagh, Sanders, and Spirakis to generalize and improve upon the standard cuckoo hashing algorithm of Pagh and Rodler. Random walk $d$-ary cuckoo hashing has low space overhead, guaranteed fast access, and insertion times that are fast in practice. In this paper, we give a theoretical insertion time bound for this algorithm. More precisely, for every $d\ge 3$ hashes, let $c_d^*$ be the sharp threshold for the load factor $c$ below which a valid assignment of $cm$ objects to a hash table of size $m$ is likely to exist. We show that for any $d\ge 4$ hashes and any load factor $c<c_d^*$, the expected random walk insertion time is $O(1)$, that is, a constant depending only on $d$ and $c$ but not on $m$.
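
A minimal Python sketch of random walk $d$-ary cuckoo insertion (the hash functions, table size, and step cap here are toy assumptions; real implementations use stronger hash families):

```python
import random

def random_walk_insert(table, hashes, x, max_steps=1000, rng=random.Random(0)):
    """Random walk d-ary cuckoo insertion: place x into one of its d
    candidate slots; if all are occupied, evict the occupant of a
    uniformly random candidate slot and continue with the evicted key."""
    for _ in range(max_steps):
        slots = [h(x) for h in hashes]
        for s in slots:
            if table[s] is None:
                table[s] = x
                return True
        s = rng.choice(slots)            # evict a random candidate's occupant
        table[s], x = x, table[s]
    return False                          # give up after max_steps

# Toy example: d = 3 hash functions over a table of size m = 11.
m = 11
hashes = [lambda x, a=a: (a * x + a * a) % m for a in (3, 5, 7)]
table = [None] * m
for key in range(8):
    random_walk_insert(table, hashes, key)
print(table)
```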

We consider the $k$-min-sum-radii ($k$-MSR) clustering problem with fairness constraints. The $k$-min-sum-radii problem is a mixture of the classical $k$-center and $k$-median problems: given a set of points $P$ in a metric space and a number $k$, we aim to partition the points into $k$ clusters, each with one designated center. The objective to minimize is the sum of the radii of the $k$ clusters (whereas $k$-center considers only the maximum radius and $k$-median considers the sum of the individual points' costs). Various notions of fair clustering have been introduced lately, and we follow the definitions due to Chierichetti, Kumar, Lattanzi and Vassilvitskii [NeurIPS 2017], which demand that cluster compositions follow the proportions of the input point set with respect to some given sensitive attribute. In the easier case where the sensitive attribute takes only two values, each equally frequent in the input, the aim is to compute a clustering in which every cluster has a 1:1 ratio with respect to this attribute; we call this the 1:1 case. There has been a surge of FPT approximation algorithms for the $k$-MSR problem lately, solving the problem both in the unconstrained case and in several constrained variants. We add to this research area by designing an FPT $(6+\epsilon)$-approximation for $k$-MSR under the general fairness notion above. For the special 1:1 case, we improve our algorithm to achieve a $(3+\epsilon)$-approximation.
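
The objective and the 1:1 fairness condition are straightforward to evaluate for a candidate clustering. A small Python sketch (the point/cluster encoding is an illustrative assumption):

```python
def msr_cost(points, clusters, centers, dist):
    """Sum of cluster radii: for each cluster, the distance from its
    designated center to the farthest point assigned to it."""
    return sum(max(dist(points[c], points[p]) for p in cl)
               for cl, c in zip(clusters, centers))

def is_balanced(clusters, color):
    """1:1 fairness: each cluster contains equally many points of each
    of the two attribute values (colors 0 and 1)."""
    return all(sum(color[p] for p in cl) * 2 == len(cl) for cl in clusters)

# Four points on a line, two colors; clusters {0,1} and {2,3}.
pts = [0.0, 1.0, 5.0, 6.0]
color = [0, 1, 0, 1]
dist = lambda a, b: abs(a - b)
print(msr_cost(pts, [[0, 1], [2, 3]], [0, 2], dist))  # 1.0 + 1.0 = 2.0
print(is_balanced([[0, 1], [2, 3]], color))           # True
```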

The {\em Spanning Tree Congestion} problem is an easy-to-state NP-hard problem: given a graph $G$, construct a spanning tree $T$ of $G$ minimizing its maximum edge congestion, where the congestion of an edge $e\in T$ is the number of edges $uv$ in $G$ such that the unique path between $u$ and $v$ in $T$ passes through $e$; the optimum value for a given graph $G$ is denoted $STC(G)$. It is known that {\em every} spanning tree is an $n/2$-approximation, and a long-standing problem is to design a better approximation algorithm. Our contribution towards this goal is an $O(\Delta\cdot\log^{3/2}n)$-approximation algorithm for the minimum congestion spanning tree problem, where $\Delta$ is the maximum degree of $G$. For graphs whose maximum degree is bounded by a polylogarithm of the number of vertices, this is an exponential improvement over the previous best approximation. For graphs with maximum degree $o(n/\log^{3/2}n)$, we get an $o(n)$-approximation; this is the largest class of graphs we know of for which a sublinear approximation is known for this problem. Our main tool is a new lower bound on the spanning tree congestion, which is of independent interest: we prove that for every graph $G$, $STC(G)\geq \Omega(hb(G)/\Delta)$, where $hb(G)$ denotes the maximum bisection width over all subgraphs of $G$.
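
The congestion of a fixed spanning tree can be computed directly from the definition: route every graph edge along its unique tree path and count the load on each tree edge. A Python sketch (quadratic-time and purely illustrative; the representations are assumptions):

```python
from collections import deque

def tree_path(tree_adj, u, v):
    """Unique u-v path in a tree, found by BFS with parent pointers."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            break
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path, x = [], v
    while x is not None:
        path.append(x)
        x = parent[x]
    return path[::-1]

def spanning_tree_congestion(graph_edges, tree_adj):
    """Congestion of tree edge e = number of graph edges uv whose unique
    tree path passes through e. Returns the maximum over tree edges."""
    load = {}
    for u, v in graph_edges:
        p = tree_path(tree_adj, u, v)
        for a, b in zip(p, p[1:]):
            e = frozenset((a, b))
            load[e] = load.get(e, 0) + 1
    return max(load.values())

# Cycle C_4 with the spanning tree (path) 0-1-2-3: congestion 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(spanning_tree_congestion(edges, tree))  # 2
```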

Hofstadter's $G$ function is defined recursively by $G(0)=0$ and $G(n)=n-G(G(n-1))$. Following Hofstadter, a family $(F_k)$ of similar functions is obtained by varying the number $k$ of nested recursive calls in this equation. We establish here that this family is ordered pointwise: for all $k$ and $n$, $F_k(n) \le F_{k+1}(n)$. To achieve this, we make a detour via infinite morphic words generalizing the Fibonacci word. Various properties of these words are proved, concerning the lengths of substituted prefixes of these words and the counts of certain letters in these prefixes. We also relate the limits of $\frac{1}{n}F_k(n)$ to the frequencies of letters in the considered words.
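
The definitions above translate directly into memoized Python; here we assume the standard convention that $F_k$ nests $k$ recursive calls, so $F_2 = G$ (the function names are illustrative):

```python
from functools import lru_cache

def hofstadter(k):
    """Return F_k with F_k(0) = 0 and F_k(n) = n - F_k^(k)(n - 1),
    i.e. k nested recursive calls; k = 2 gives Hofstadter's G."""
    @lru_cache(maxsize=None)
    def F(n):
        if n == 0:
            return 0
        x = n - 1
        for _ in range(k):        # apply F k times to n - 1
            x = F(x)
        return n - x
    return F

G = hofstadter(2)
print([G(n) for n in range(10)])               # [0, 1, 1, 2, 3, 3, 4, 4, 5, 6]
F3 = hofstadter(3)
print(all(G(n) <= F3(n) for n in range(200)))  # pointwise ordering: True
```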

Error-bounded lossy compression is a critical technique for significantly reducing scientific data volumes. Compared to CPU-based compressors, GPU-based compressors exhibit substantially higher throughput, making them a better fit for today's HPC applications. However, existing GPU-based compressors suffer from low compression ratios and poor reconstruction quality, severely restricting their applicability. To overcome these limitations, we introduce a new GPU-based error-bounded scientific lossy compressor named cuSZ-$i$, with the following contributions: (1) a novel GPU-optimized interpolation-based prediction method that significantly improves the compression ratio and the quality of the decompressed data; (2) an optimized Huffman encoding module for better efficiency; and (3) cuSZ-$i$ is the first to integrate NVIDIA Bitcomp-lossless as an additional compression-ratio-enhancing module. Evaluations show that cuSZ-$i$ significantly outperforms the latest GPU-based lossy compressors in compression ratio under the same error bound (and hence the same quality), with a 476% advantage over the second best. This translates into superior performance for cuSZ-$i$ in several real-world use cases.
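
To illustrate the flavor of interpolation-based, error-bounded prediction, here is a toy 1-D CPU sketch (not cuSZ-$i$'s actual GPU kernel; the level-by-level scheme, function name, and parameters are assumptions): anchors are stored exactly, each level predicts midpoints by linear interpolation of already-reconstructed values, and residuals are quantized so the pointwise error stays within the bound.

```python
import numpy as np

def interp_predict_quantize(data, eb):
    """Toy multi-level interpolation predictor with error-bounded
    quantization. Returns quantization codes and the reconstruction
    the decoder would produce; |rec - data| <= eb pointwise."""
    n = len(data)
    rec = np.zeros(n)
    rec[0], rec[-1] = data[0], data[-1]          # anchors stored losslessly
    done = {0, n - 1}
    quants = {}
    stride = n - 1                                # assumes n - 1 is a power of 2
    while stride > 1:
        half = stride // 2
        for left in range(0, n - 1, stride):
            mid, right = left + half, min(left + stride, n - 1)
            if mid in done:
                continue
            pred = 0.5 * (rec[left] + rec[right])    # linear interpolation
            q = int(np.round((data[mid] - pred) / (2 * eb)))
            rec[mid] = pred + 2 * eb * q             # decoder sees the same value
            quants[mid] = q
            done.add(mid)
        stride = half
    return quants, rec

data = np.sin(np.linspace(0, 3, 17))
quants, rec = interp_predict_quantize(data, eb=1e-3)
print(float(np.max(np.abs(rec - data))))         # bounded by eb = 1e-3
```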
