
A set $S\subseteq V$ of a graph $G=(V,E)$ is a dominating set if each vertex has a neighbor in $S$ or belongs to $S$. Dominating Set is the problem of deciding, given a graph $G$ and an integer $k\geq 1$, whether $G$ has a dominating set of size at most $k$. It is well known that this problem is $\mathsf{NP}$-complete even for claw-free graphs. We give a complexity dichotomy for Dominating Set for the class of claw-free graphs with diameter $d$. We show that the problem is $\mathsf{NP}$-complete for every fixed $d\ge 3$ and polynomial-time solvable for $d\le 2$. To prove the case $d=2$, we show that Minimum Maximal Matching can be solved in polynomial time for $2K_2$-free graphs.
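
As a concrete illustration of the definition (a minimal sketch of ours, not code from the paper; the graph representation and function name are hypothetical), checking whether a given set $S$ dominates a graph takes one pass over the vertices:

```python
from typing import Dict, Hashable, Set

def is_dominating_set(adj: Dict[Hashable, Set[Hashable]], S: Set[Hashable]) -> bool:
    """S is dominating iff every vertex belongs to S or has a neighbor in S.
    `adj` maps each vertex to its set of neighbors."""
    return all(v in S or adj[v] & S for v in adj)

# Example: on the path a-b-c, the middle vertex alone dominates the graph.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
assert is_dominating_set(adj, {"b"})
assert not is_dominating_set(adj, {"a"})  # c is not in {a} and has no neighbor in {a}
```

The hardness results concern finding a smallest such $S$; verifying a candidate set, as above, is easy.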

Related Content

Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem in which the problem is equipped with an underlying graph structure that provides additional information: playing an action reveals the losses of all the neighbors of that action. This problem was introduced by \citet{mannor2011} and has received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order $\sqrt{\alpha T}$, where $\alpha$ is the independence number of the graph and $T$ is the time horizon. However, this is proven only when the number of rounds $T$ is larger than $\alpha^3$, which significantly restricts the usability of this result in large graphs. In this paper, we define a new quantity $R^*$, called the \emph{problem complexity}, and prove that the minimax regret is proportional to $R^*$ for any graph and time horizon $T$. Introducing an intricate exploration strategy, we obtain an algorithm that achieves the minimax optimal regret bound and is the first provably optimal algorithm for this setting, even when $T$ is smaller than $\alpha^3$.
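
To make the feedback-graph observation model concrete (our sketch under assumed conventions; the function name and arm indexing are hypothetical, and this is not the paper's algorithm), one round of play reveals the loss of the chosen action together with the losses of its neighbors:

```python
import random
from typing import Dict, List, Set

def play_round(losses: List[float], neighbors: Dict[int, Set[int]], action: int) -> Dict[int, float]:
    """Playing `action` incurs losses[action] but reveals the losses of the
    action and of all its neighbors in the feedback graph."""
    observed = {action} | neighbors[action]
    return {arm: losses[arm] for arm in observed}

# Example: arms 0 and 1 are linked, arm 2 is isolated; playing 0 also reveals 1.
neighbors = {0: {1}, 1: {0}, 2: set()}
losses = [random.random() for _ in range(3)]
print(play_round(losses, neighbors, action=0))  # losses of arms 0 and 1
```

The denser the graph (the smaller its independence number $\alpha$), the more is observed per round, which is why $\alpha$ governs the regret rate.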

Public observation logic (POL) reasons about agent expectations and agent observations in various real-world situations. The expectations of agents take shape based on certain protocols about the world around them, and the agents discard those possible scenarios in which their expectations and observations do not match. This in turn influences the epistemic reasoning of these agents. In this work, we study the computational complexity of the satisfaction problems of various fragments of POL. In the process, we also highlight the close connection that these fragments have with the well-studied Public announcement logic.

Dahlquist, Liniger, and Nevanlinna designed a family of one-leg, two-step methods (the DLN method) that is second-order, A-stable, and G-stable for arbitrary non-uniform time steps. Recently, it was shown that the implementation of the DLN method can be simplified by a refactorization process (adding time filters to the backward Euler scheme). Owing to these fine properties, the DLN method has strong potential for the numerical simulation of time-dependent fluid models. In this report, we propose a semi-implicit DLN algorithm for the Navier-Stokes equations (avoiding a nonlinear solver at each time step) and prove unconditional long-term stability, as well as second-order convergence under a moderate time-step restriction. Moreover, adaptive DLN algorithms, driven by a required-error or numerical-dissipation criterion, are presented to balance accuracy and computational cost. Numerical tests are given to support the main conclusions.
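
As a minimal sketch of the "backward Euler plus time filter" refactorization idea on a scalar test problem (our illustration with a uniform step and the standard constant-step filter coefficient $1/3$; the DLN refactorization itself uses pre- and post-filters with step-ratio-dependent coefficients):

```python
import numpy as np

# Test problem y' = -y with y(0) = 1, exact solution exp(-t); backward Euler
# for this linear equation has the closed form y_{n+1} = y_n / (1 + dt).
def be_step(y, dt):
    return y / (1.0 + dt)

dt, T = 0.1, 1.0
ts = np.arange(0.0, T + dt / 2, dt)
y = [1.0, be_step(1.0, dt)]              # exact start plus one plain BE step
for n in range(1, len(ts) - 1):
    y_be = be_step(y[n], dt)             # backward Euler prediction
    # Post-filter: removes the leading O(dt^2) consistency error of backward
    # Euler, yielding a second-order two-step method (filtered BE).
    y.append(y_be - (1.0 / 3.0) * (y_be - 2.0 * y[n] + y[n - 1]))
print(y[-1], np.exp(-T))                 # filtered BE vs. exact solution
```

The appeal of this viewpoint is that a legacy backward Euler code can be upgraded by adding a few filter lines rather than reimplementing a two-step method.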

We consider the problem of recovering conditional independence relationships between $p$ jointly distributed Hilbertian random elements given $n$ realizations thereof. We operate in the sparse high-dimensional regime, where $n \ll p$ and no element is related to more than $d \ll p$ other elements. In this context, we propose an infinite-dimensional generalization of the graphical lasso. We prove model selection consistency under natural assumptions and extend many classical results to infinite dimensions. In particular, we do not require finite truncation or additional structural restrictions. The plug-in nature of our method makes it applicable to any observational regime, whether sparse or dense, and indifferent to serial dependence. Importantly, our method can be understood as naturally arising from a coherent maximum likelihood philosophy.
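
For orientation (our recap of the classical finite-dimensional estimator being generalized, not the paper's formulation): given the sample covariance $\hat{\Sigma}$ of $p$ scalar variables, the graphical lasso estimates a sparse precision matrix by solving \[ \hat{\Theta} \;=\; \arg\min_{\Theta \succ 0}\; \operatorname{tr}\big(\hat{\Sigma}\Theta\big) \;-\; \log\det\Theta \;+\; \lambda \sum_{j \neq k} |\Theta_{jk}|, \] and the zero pattern of $\hat{\Theta}$ encodes the estimated conditional independence graph. The present work lifts this program to Hilbertian random elements, with no finite truncation required.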

A dictionary data structure maintains a set of at most $n$ keys from the universe $[U]$ under key insertions and deletions, such that given a query $x \in [U]$, it returns whether $x$ is in the set. Some variants also store values associated with the keys, such that given a query $x$, the value associated with $x$ is returned when $x$ is in the set. This fundamental data structure problem has been studied for six decades since the introduction of hash tables in 1953. A hash table occupies $O(n\log U)$ bits of space with constant time per operation in expectation. There has been a vast literature on improving its time and space usage. The state-of-the-art dictionary by Bender, Farach-Colton, Kuszmaul, Kuszmaul and Liu [BFCK+22] has space consumption close to the information-theoretic optimum, using a total of \[ \log\binom{U}{n}+O(n\log^{(k)} n) \] bits, while supporting all operations in $O(k)$ time, for any parameter $k \leq \log^* n$. The term $O(\log^{(k)} n) = O(\underbrace{\log\cdots\log}_k n)$ is referred to as the wasted bits per key. In this paper, we prove a matching cell-probe lower bound: for $U=n^{1+\Theta(1)}$, any dictionary with $O(\log^{(k)} n)$ wasted bits per key must have expected operation time $\Omega(k)$, in the cell-probe model with word size $w=\Theta(\log U)$. Furthermore, if a dictionary stores values of $\Theta(\log U)$ bits, we show that regardless of the query time, it must have $\Omega(k)$ expected update time. It is worth noting that this is the first cell-probe lower bound on the trade-off between space and update time for general data structures.
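
To unpack the two endpoints of this trade-off (our worked instances of the stated bound, not additional results): taking $k = 2$ gives constant-time operations with \[ \log\binom{U}{n} + O(n\log\log n) \] bits, i.e., $O(\log\log n)$ wasted bits per key; taking $k = \log^* n$ drives the redundancy down to $O(n\log^{(\log^* n)} n) = O(n)$ bits, i.e., $O(1)$ wasted bits per key, at the price of $O(\log^* n)$ time per operation. The matching lower bound shows that, for $U = n^{1+\Theta(1)}$, this trade-off curve is tight at every such $k$.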

Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical problems here are to find a maxcore (subgraph of maximum min degree) and to find a densest subgraph (subgraph of maximum average degree). Both of these problems can be solved in polynomial time. Veldt, Benson, and Kleinberg [VBK21] introduced the generalized $p$-mean densest subgraph problem which captures the maxcore problem when $p=-\infty$ and the densest subgraph problem when $p=1$. They observed that the objective leads to a supermodular function when $p \ge 1$ and hence can be solved in polynomial time; for this case, they also developed a simple greedy peeling algorithm with a bounded approximation ratio. In this paper, we make several contributions. First, we prove that for any $p \in (-\frac{1}{8}, 0) \cup (0, \frac{1}{4})$ the problem is NP-hard and for any $p \in (-3,0) \cup (0,1)$ the weighted version of the problem is NP-hard, partly resolving a question left open in [VBK21]. Second, we describe two simple $1/2$-approximation algorithms for all $p < 1$, and show that our analysis of these algorithms is tight. For $p > 1$ we develop a fast near-linear time implementation of the greedy peeling algorithm from [VBK21]. This allows us to plug it into the iterative peeling algorithm that was shown to converge to an optimum solution [CQT22]. We demonstrate the efficacy of our algorithms by running extensive experiments on large graphs. Together, our results provide a comprehensive understanding of the complexity of the $p$-mean densest subgraph problem and lead to fast and provably good algorithms for the full range of $p$.
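
As a deliberately naive sketch of greedy peeling for the classical $p=1$ (average-degree) objective — our illustration, not the near-linear implementation developed in the paper:

```python
from typing import Dict, Hashable, Set

def greedy_peel(adj: Dict[Hashable, Set[Hashable]]):
    """Repeatedly delete a minimum-degree vertex; return the intermediate
    subgraph of maximum average degree (the p = 1 objective)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    best = (2 * m / len(adj), set(adj))
    while len(adj) > 1:
        v = min(adj, key=lambda u: len(adj[u]))  # naive O(n) pick; bucket queues give near-linear time
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        if 2 * m / len(adj) > best[0]:
            best = (2 * m / len(adj), set(adj))
    return best  # (average degree, vertex set)

# Example: a 4-clique with one pendant vertex; peeling recovers the clique.
adj = {1: {2, 3, 4, 5}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}, 5: {1}}
print(greedy_peel(adj))  # (3.0, {1, 2, 3, 4})
```

For $p=1$ this peeling is Charikar's classical $1/2$-approximation; the paper's contribution includes fast peeling implementations and tight approximation analyses across the full range of $p$.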

The task of the broadcast problem is, given a graph $G$ and a source vertex $s$, to compute the minimum number of rounds required to disseminate a piece of information from $s$ to all vertices in the graph. It is assumed that, in each round, an informed vertex can transmit the information to at most one of its neighbors. The broadcast problem is known to be NP-hard. We show that the problem is FPT when parameterized by the size $k$ of a feedback edge set, by the size $k$ of a vertex cover, or by $k=n-t$, where $t$ is the input deadline for the broadcast protocol to complete.
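
For intuition (a toy simulation of ours, not an algorithm from the paper): simulating one fixed protocol, in which every informed vertex forwards the information to an arbitrary uninformed neighbor each round, gives an upper bound on the broadcast time, while $\lceil \log_2 n \rceil$ is a generic lower bound because the number of informed vertices can at most double per round:

```python
import math
from typing import Dict, Hashable, Set

def greedy_broadcast_rounds(adj: Dict[Hashable, Set[Hashable]], s: Hashable) -> int:
    """Each round, every informed vertex informs one uninformed neighbor
    (if any); returns an upper bound on the optimal number of rounds."""
    informed, rounds = {s}, 0
    while len(informed) < len(adj):
        newly = set()
        for v in informed:
            for u in adj[v]:
                if u not in informed and u not in newly:
                    newly.add(u)  # v transmits to u this round
                    break
        if not newly:
            raise ValueError("graph is disconnected")
        informed |= newly
        rounds += 1
    return rounds

# Example: a star; the center can inform only one leaf per round.
adj = {"s": {"a", "b", "c"}, "a": {"s"}, "b": {"s"}, "c": {"s"}}
print(greedy_broadcast_rounds(adj, "s"))  # 3 rounds (optimal for the star)
print(math.ceil(math.log2(len(adj))))     # generic lower bound: 2
```

The hardness lies in choosing the transmission schedule optimally, not in simulating a given one.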

In this paper, we propose new techniques for solving geometric optimization problems involving interpoint distances of a point set in the plane. Given a set $P$ of $n$ points in the plane and an integer $1 \leq k \leq \binom{n}{2}$, the distance selection problem is to find the $k$-th smallest interpoint distance among all pairs of points of $P$. The previously best deterministic algorithm solves the problem in $O(n^{4/3} \log^2 n)$ time [Katz and Sharir, SIAM J. Comput. 1997 and SoCG 1993]. In this paper, we improve their algorithm to $O(n^{4/3} \log n)$ time. Using similar techniques, we also give improved algorithms on both the two-sided and the one-sided discrete Fr\'{e}chet distance with shortcuts problem for two point sets in the plane. For the two-sided problem (resp., one-sided problem), we improve the previous work [Avraham, Filtser, Kaplan, Katz, and Sharir, ACM Trans. Algorithms 2015 and SoCG 2014] by a factor of roughly $\log^2(m+n)$ (resp., $(m+n)^{\epsilon}$), where $m$ and $n$ are the sizes of the two input point sets, respectively. Other problems whose solutions can be improved by our techniques include the reverse shortest path problems for unit-disk graphs. Our techniques are quite general and we believe they will find many other applications in the future.
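
For contrast with the $O(n^{4/3}\log n)$ bound (our baseline sketch, not the paper's method), the naive approach sorts all $\binom{n}{2}$ interpoint distances in $O(n^2 \log n)$ time:

```python
import math
from itertools import combinations
from typing import List, Tuple

def kth_smallest_distance(points: List[Tuple[float, float]], k: int) -> float:
    """Naive distance selection: sort all pairwise distances, take the k-th.
    Runs in O(n^2 log n) time and O(n^2) space."""
    dists = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    return dists[k - 1]  # 1 <= k <= n(n-1)/2

points = [(0, 0), (1, 0), (0, 2), (3, 3)]
print(kth_smallest_distance(points, 1))  # smallest interpoint distance: 1.0
```

The subquadratic algorithms avoid enumerating all pairs by searching over candidate distances with batched range-searching structures.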

We initiate the study of the algorithmic problem of certifying lower bounds on the discrepancy of random matrices: given an input matrix $A \in \mathbb{R}^{m \times n}$, output a value that is a lower bound on $\mathsf{disc}(A) = \min_{x \in \{\pm 1\}^n} ||Ax||_\infty$ for every $A$, but is close to the typical value of $\mathsf{disc}(A)$ with high probability over the choice of a random $A$. This problem is important because of its connections to conjecturally hard average-case problems such as negatively-spiked PCA, the number-balancing problem, and refuting random constraint satisfaction problems. We give the first polynomial-time algorithms with non-trivial guarantees for two main settings. First, when the entries of $A$ are i.i.d. standard Gaussians, it is known that $\mathsf{disc} (A) = \Theta (\sqrt{n}2^{-n/m})$ with high probability. Our algorithm certifies that $\mathsf{disc}(A) \geq \exp(- O(n^2/m))$ with high probability. As an application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein on the computational hardness of the detection problem in the negatively-spiked Wishart model. Second, we consider the integer partitioning problem: given $n$ uniformly random $b$-bit integers $a_1, \ldots, a_n$, certify the non-existence of a perfect partition, i.e., certify that $\mathsf{disc} (A) \geq 1$ for $A = (a_1, \ldots, a_n)$. Under the scaling $b = \alpha n$, it is known that the probability of the existence of a perfect partition undergoes a phase transition from 1 to 0 at $\alpha = 1$; our algorithm certifies the non-existence of perfect partitions for some $\alpha = O(n)$. We also give efficient non-deterministic algorithms with significantly improved guarantees. Our algorithms involve a reduction to the Shortest Vector Problem.
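
To fix notation (our brute-force reference implementation; exponential in $n$ and only meant to illustrate the quantity being certified, which the paper's algorithms never compute directly):

```python
import itertools
import numpy as np

def disc(A: np.ndarray) -> float:
    """Exact discrepancy by exhaustive search over all sign vectors:
    disc(A) = min over x in {-1, +1}^n of ||Ax||_inf.
    Takes 2^n evaluations, so only usable for tiny instances."""
    _, n = A.shape
    return min(np.abs(A @ np.array(x)).max()
               for x in itertools.product((-1, 1), repeat=n))

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 6))
print(disc(A))  # typically of order sqrt(n) * 2^{-n/m} for Gaussian A
```

A certification algorithm must output a valid lower bound on this minimum for every input, while being near the typical value on random inputs.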

For a graph $G$, a subset $S \subseteq V(G)$ is called a \emph{resolving set} if for any two vertices $u,v \in V(G)$, there exists a vertex $w \in S$ such that $d(w,u) \neq d(w,v)$. The {\sc Metric Dimension} problem takes as input a graph $G$ and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. This problem was introduced in the 1970s and is known to be \NP-hard~[GT~61 in Garey and Johnson's book]. In the realm of parameterized complexity, Hartung and Nichterlein~[CCC~2013] proved that the problem is \W[2]-hard when parameterized by the natural parameter $k$. They also observed that it is \FPT\ when parameterized by the vertex cover number and asked about its complexity under \emph{smaller} parameters, in particular the feedback vertex set number. We answer this question by proving that {\sc Metric Dimension} is \W[1]-hard when parameterized by the combined parameter feedback vertex set number plus pathwidth. This also improves the result of Bonnet and Purohit~[IPEC 2019] which states that the problem is \W[1]-hard parameterized by the pathwidth. On the positive side, we show that {\sc Metric Dimension} is \FPT\ when parameterized by either the distance to cluster or the distance to co-cluster, both of which are smaller parameters than the vertex cover number.
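
To make the definition concrete (our sketch for connected graphs; names are hypothetical and this is not part of the paper), a set $S$ resolves $G$ exactly when the vectors of BFS distances to the vertices of $S$ are pairwise distinct:

```python
from collections import deque
from typing import Dict, Hashable, Set

def bfs_dist(adj: Dict[Hashable, Set[Hashable]], src: Hashable) -> Dict[Hashable, int]:
    """Single-source shortest-path distances by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def is_resolving_set(adj: Dict[Hashable, Set[Hashable]], S: Set[Hashable]) -> bool:
    """S resolves G iff no two vertices share the same distance vector to S."""
    tables = [bfs_dist(adj, w) for w in S]
    signatures = [tuple(t[v] for t in tables) for v in adj]
    return len(set(signatures)) == len(adj)

# Example: on the path a-b-c-d, the single endpoint {a} already resolves.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
assert is_resolving_set(adj, {"a"})
assert not is_resolving_set(adj, {"b"})  # a and c are both at distance 1 from b
```

As with domination, verification is easy; the parameterized hardness concerns finding a small resolving set.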
