
We provide a general framework to exclude parameterized running times of the form $O(\ell^\beta + n^\gamma)$ for problems that have polynomial running time lower bounds under hypotheses from fine-grained complexity. Our framework is based on cross-compositions from parameterized complexity. We (conditionally) exclude running times of the form $O(\ell^{\gamma/(\gamma-1) - \epsilon} + n^\gamma)$ for any $1<\gamma<2$ and $\epsilon>0$ for the following problems:

- Longest Common Subsequence: Given two length-$n$ strings and $\ell\in\mathbb{N}$, is there a common subsequence of length $\ell$?
- Discrete Fr\'echet Distance: Given two lists of $n$ points each and $k\in \mathbb{N}$, is the Fr\'echet distance of the lists at most $k$? Here $\ell$ is the maximum number of points by which one list is ahead of the other list in an optimum traversal.

Moreover, we exclude running times $O(\ell^{2\gamma/(\gamma -1)-\epsilon} + n^\gamma)$ for any $1<\gamma<3$ and $\epsilon>0$ for:

- Negative Triangle: Given an edge-weighted graph with $n$ vertices, is there a triangle whose sum of edge weights is negative? Here $\ell$ is the order of a maximum connected component.
- Triangle Collection: Given a vertex-colored graph with $n$ vertices, is there for each triple of colors a triangle whose vertices have these three colors? Here $\ell$ is the order of a maximum connected component.
- 2nd Shortest Path: Given an $n$-vertex edge-weighted directed graph, two vertices $s$ and $t$, and $k \in \mathbb{N}$, is the length of the second shortest $s$-$t$-path at most $k$? Here $\ell$ is the size of a minimum directed feedback vertex set.

Except for 2nd Shortest Path, all these running time bounds are tight; that is, algorithms with running time $O(\ell^{\gamma/(\gamma-1)} + n^\gamma)$ for any $1 < \gamma < 2$ and $O(\ell^{2\gamma/(\gamma -1)} + n^\gamma)$ for any $1 < \gamma < 3$, respectively, are known.
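
As a point of reference for these bounds, here is a minimal sketch (not from the paper) of the classic $O(n^2)$ dynamic program deciding the Longest Common Subsequence question stated above; this quadratic baseline is exactly what the conditional lower bounds are measured against. Function and variable names are illustrative.

```python
def has_common_subsequence(a: str, b: str, ell: int) -> bool:
    """Decide whether strings a and b have a common subsequence of
    length at least ell, via the classic O(n^2) DP, one row at a time."""
    n, m = len(a), len(b)
    dp = [0] * (m + 1)        # dp[j] = LCS length of a[:i] and b[:j]
    for i in range(1, n + 1):
        prev_diag = 0         # dp[i-1][j-1] from the previous row
        for j in range(1, m + 1):
            tmp = dp[j]       # dp[i-1][j], about to be overwritten
            if a[i - 1] == b[j - 1]:
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = tmp
    return dp[m] >= ell

# Example: "abcde" and "ace" share the subsequence "ace" of length 3.
assert has_common_subsequence("abcde", "ace", 3)
```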


We present a Newton-type method that converges fast from any initialization and for arbitrary convex objectives with Lipschitz Hessians. We achieve this by merging the ideas of cubic regularization with a certain adaptive Levenberg--Marquardt penalty. In particular, we show that the iterates given by $x^{k+1}=x^k - \bigl(\nabla^2 f(x^k) + \sqrt{H\|\nabla f(x^k)\|} \mathbf{I}\bigr)^{-1}\nabla f(x^k)$, where $H>0$ is a constant, converge globally with a $\mathcal{O}(\frac{1}{k^2})$ rate. Our method is the first variant of Newton's method that has both cheap iterations and provably fast global convergence. Moreover, we prove that locally our method converges superlinearly when the objective is strongly convex. To boost the method's performance, we present a line search procedure that does not need prior knowledge of $H$ and is provably efficient.
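
The update rule above is explicit enough to implement directly. Below is a minimal numpy sketch of the iteration, assuming oracle access to the gradient and Hessian; the test function and the choice $H=10$ are illustrative, not taken from the paper.

```python
import numpy as np

def reg_newton(f_grad, f_hess, x0, H, iters=50):
    """Regularized Newton iteration from the abstract:
        x_{k+1} = x_k - (hess_k + sqrt(H * ||grad_k||) I)^{-1} grad_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = f_grad(x)
        lam = np.sqrt(H * np.linalg.norm(g))
        x = x - np.linalg.solve(f_hess(x) + lam * np.eye(x.size), g)
    return x

# Illustrative convex test problem: f(x) = sum(x_i^4) / 4, minimized at 0.
grad = lambda x: x ** 3
hess = lambda x: np.diag(3.0 * x ** 2)
print(reg_newton(grad, hess, x0=[2.0, -1.5], H=10.0))  # approx. [0, 0]
```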

We study a family of generalizations of Edge Dominating Set on directed graphs called Directed $(p,q)$-Edge Dominating Set. In this problem an arc $(u,v)$ is said to dominate itself, as well as all arcs which are at distance at most $q$ from $v$, or at distance at most $p$ to $u$. First, we give significantly improved FPT algorithms for the two most important cases of the problem, $(0,1)$-dEDS and $(1,1)$-dEDS (that correspond to versions of Dominating Set on line graphs), as well as polynomial kernels. We also improve the best-known approximation for these cases from logarithmic to constant. In addition, we show that $(p,q)$-dEDS is FPT parameterized by $p+q+tw$, but W-hard parameterized by $tw$ (even if the size of the optimal solution is added as a second parameter), where $tw$ is the treewidth of the underlying graph of the input. We then go on to focus on the complexity of the problem on tournaments. Here, we provide a complete classification for every possible fixed value of $p,q$, which shows that the problem exhibits a surprising behavior, including cases which are in P; cases which are solvable in quasi-polynomial time but not in P; and a single case $(p=q=1)$ which is NP-hard (under randomized reductions) and cannot be solved in sub-exponential time, under standard assumptions.
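
To make the domination rule concrete, here is a brute-force checker under one plausible reading of the distances (an arc $(x,y)$ is at distance $\mathrm{dist}(v,x)+1$ from $v$ and at distance $\mathrm{dist}(y,u)+1$ to $u$); the exact distance convention and all names here are our assumptions, not taken from the paper.

```python
from collections import deque

def bfs_dist(adj, src):
    """Directed BFS distances from src; unreachable vertices are absent."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        x = queue.popleft()
        for y in adj.get(x, ()):
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def pq_dominates_all(adj, solution, p, q):
    """Check whether `solution` (a set of arcs) (p,q)-dominates every arc
    of the digraph `adj` ({vertex: list of out-neighbours}). Assumed rule:
    (u,v) dominates (x,y) iff (x,y) = (u,v), dist(v,x) + 1 <= q,
    or dist(y,u) + 1 <= p."""
    arcs = {(x, y) for x in adj for y in adj[x]}
    radj = {}
    for x, y in arcs:                       # reversed digraph, for "to u"
        radj.setdefault(y, []).append(x)
    covered = set()
    for (u, v) in solution:
        covered.add((u, v))
        dv, du = bfs_dist(adj, v), bfs_dist(radj, u)
        for (x, y) in arcs:
            if dv.get(x, q) + 1 <= q or du.get(y, p) + 1 <= p:
                covered.add((x, y))
    return covered == arcs

# Does the single arc (1,2) (0,1)-dominate the path 0 -> 1 -> 2 -> 3?
adj = {0: [1], 1: [2], 2: [3]}
print(pq_dominates_all(adj, {(1, 2)}, p=0, q=1))  # False: (0,1) undominated
```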

Andreae (1986) proved that the cop number of connected $H$-minor-free graphs is bounded for every graph $H$. In particular, the cop number is at most $|E(H-h)|$ for any vertex $h$ of $H$ such that $H-h$ contains no isolated vertex. The main result of this paper is an improvement on this bound, which is most significant when $H$ is small or sparse, for instance when $H-h$ can be obtained from another graph by multiple edge subdivisions. Some consequences of this result are improvements on the upper bound for the cop number of $K_{3,m}$-minor-free graphs, $K_{2,m}$-minor-free graphs and linklessly embeddable graphs.

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works on convex optimization, either the convergence is only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations. The same method applies to the non-convex case. We demonstrate an $O((1+\sigma^{2}\log(1/\delta))/T+\sigma/\sqrt{T})$ convergence rate when the number of iterations $T$ is known and an $O((1+\sigma^{2}\log(T/\delta))/\sqrt{T})$ convergence rate when $T$ is unknown for SGD, where $1-\delta$ is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain a high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded gradients assumption from previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence for AdaGrad.
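
As a concrete reference for one of the analyzed algorithms, a minimal sketch of AdaGrad-Norm (Ward et al., 2019) follows; the demo objective and noise model are illustrative, not the paper's setting.

```python
import numpy as np

def adagrad_norm(sgrad, x0, eta=1.0, b0=1e-8, iters=1000, rng=None):
    """AdaGrad-Norm: a single scalar step size, adapted to the
    accumulated squared norms of the stochastic gradients."""
    x = np.asarray(x0, dtype=float)
    b2 = b0 ** 2
    for _ in range(iters):
        g = sgrad(x, rng)
        b2 += g @ g                    # accumulate ||g_t||^2
        x = x - eta * g / np.sqrt(b2)  # step size eta / b_{t+1}
    return x

# Illustrative setting: noisy gradients of f(x) = ||x||^2 / 2.
rng = np.random.default_rng(0)
sgrad = lambda x, rng: x + 0.1 * rng.standard_normal(x.size)
print(adagrad_norm(sgrad, x0=np.ones(5), rng=rng))  # near the origin
```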

Motivated by the planarization of 2-layered straight-line drawings, we consider the problem of modifying a graph such that the resulting graph has pathwidth at most 1. The problem Pathwidth-One Vertex Explosion (POVE) asks whether such a graph can be obtained using at most $k$ vertex explosions, where a vertex explosion replaces a vertex $v$ by deg$(v)$ degree-1 vertices, each incident to exactly one edge that was originally incident to $v$. For POVE, we give an FPT algorithm with running time $O(4^k \cdot m)$ and a quadratic kernel, thereby improving over the $O(k^6)$-kernel by Ahmed et al. [GD 22] in a more general setting. Similarly, a vertex split replaces a vertex $v$ by two distinct vertices $v_1$ and $v_2$ and distributes the edges originally incident to $v$ arbitrarily to $v_1$ and $v_2$. Analogously to POVE, we define the problem variant Pathwidth-One Vertex Splitting (POVS) that uses the split operation instead of vertex explosions. Here we obtain a linear kernel and an algorithm with running time $O((6k+12)^k \cdot m)$. This answers an open question by Ahmed et al. [GD 22]. Finally, we consider the problem $\Pi$ Vertex Splitting ($\Pi$-VS), which generalizes the problem POVS and asks whether a given graph can be turned into a graph of a specific graph class $\Pi$ using at most $k$ vertex splits. For graph classes $\Pi$ that can be tested in monadic second-order graph logic (MSO$_2$), we show that the problem $\Pi$-VS can be expressed as an MSO$_2$ formula, resulting in an FPT algorithm for $\Pi$-VS parameterized by $k$ if the graphs in $\Pi$ additionally have bounded treewidth. We obtain the same result for the problem variant using vertex explosions.
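
The vertex explosion operation is easy to state in code; below is a minimal sketch for an undirected simple graph stored as an adjacency dictionary. Representing the new degree-1 vertices as pairs $(v, i)$ is an implementation choice of ours, not from the paper.

```python
def explode(adj, v):
    """Vertex explosion: replace v by deg(v) degree-1 vertices, each taking
    exactly one edge originally incident to v. The graph is an undirected
    simple graph given as {vertex: set of neighbours}."""
    new_adj = {u: set(nbrs) for u, nbrs in adj.items() if u != v}
    for i, u in enumerate(sorted(adj[v])):
        copy = (v, i)                 # one fresh degree-1 vertex per edge
        new_adj[copy] = {u}
        new_adj[u].discard(v)
        new_adj[u].add(copy)
    return new_adj

# Example: exploding vertex 2 of a triangle with a pendant vertex.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(explode(adj, 2))
# {0: {1, (2, 0)}, 1: {0, (2, 1)}, 3: {(2, 2)},
#  (2, 0): {0}, (2, 1): {1}, (2, 2): {3}}
```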

We adopt an information-theoretic framework to analyze the generalization behavior of the class of iterative, noisy learning algorithms. This class is particularly suitable for study under information-theoretic metrics as the algorithms are inherently randomized, and it includes commonly used algorithms such as Stochastic Gradient Langevin Dynamics (SGLD). Herein, we use the maximal leakage (equivalently, the Sibson mutual information of order infinity) metric, as it is simple to analyze, and it implies both bounds on the probability of having a large generalization error and on its expected value. We show that, if the update function (e.g., gradient) is bounded in $L_2$-norm, then adding isotropic Gaussian noise leads to optimal generalization bounds: indeed, the input and output of the learning algorithm in this case are asymptotically statistically independent. Furthermore, we demonstrate how the assumptions on the update function affect the optimal (in the sense of minimizing the induced maximal leakage) choice of the noise. Finally, we compute explicit tight upper bounds on the induced maximal leakage for several scenarios of interest.
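
A schematic instance of the algorithm class under study (a bounded update direction plus isotropic Gaussian noise, as in SGLD-style methods) might look as follows; the clipping constant and step size are placeholders, and nothing here is the paper's exact construction.

```python
import numpy as np

def noisy_learner(grad, x0, eta=0.1, sigma=0.01, clip=1.0, iters=500, seed=0):
    """Schematic noisy iterative algorithm: clip the update direction to
    bounded L2-norm (the boundedness assumption in the abstract), then
    add isotropic Gaussian noise before taking the step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)     # enforce ||update|| <= clip
        x = x - eta * g + sigma * rng.standard_normal(x.size)
    return x

# Quadratic demo: the iterates settle in a noise ball around the minimizer.
print(noisy_learner(lambda x: x, x0=np.full(3, 5.0)))  # hovers near 0
```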

The problems of determining the minimum-sized \emph{identifying}, \emph{locating-dominating} and \emph{open locating-dominating codes} of an input graph are special search problems that are challenging from both theoretical and computational viewpoints. In these problems, one selects a dominating set $C$ of a graph $G$ such that the vertices of a chosen subset of $V(G)$ (i.e. either $V(G)\setminus C$ or $V(G)$ itself) are uniquely determined by their neighborhoods in $C$. A typical line of attack for these problems is to determine tight bounds for the minimum codes in various graph classes. In this work, we present tight lower and upper bounds for all three types of codes for \emph{block graphs} (i.e. diamond-free chordal graphs). Our bounds are in terms of the number of maximal cliques (or \emph{blocks}) of a block graph and the order of the graph. Two of our upper bounds verify conjectures from the literature, one of which is thus proven for block graphs in this article. As for the lower bounds, we prove them to be linear in terms of both the number of blocks and the order of the block graph. We provide examples of families of block graphs whose minimum codes attain these bounds, thus showing each bound to be tight.
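
For concreteness, a brute-force check of the locating-dominating property (the variant where the vertices of $V(G)\setminus C$ must have nonempty, pairwise distinct neighborhoods within $C$) can be written as follows; the graph encoding and names are our own choices, following the standard definitions rather than anything specific to this paper.

```python
def is_locating_dominating(adj, code):
    """Brute-force check that `code` is a locating-dominating set of the
    graph `adj` ({vertex: set of neighbours}): every vertex outside the
    code has a nonempty neighbourhood inside it, and these "traces" are
    pairwise distinct."""
    code = set(code)
    seen = set()
    for v in adj:
        if v in code:
            continue
        trace = frozenset(adj[v] & code)
        if not trace or trace in seen:
            return False
        seen.add(trace)
    return True

# Example: on the path 0-1-2-3, the code {1, 2} is locating-dominating.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_locating_dominating(path, {1, 2}))  # True: traces {1} and {2}
```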

The Set Packing problem is, given a collection of sets $\mathcal{S}$ over a ground set $\mathcal{U}$, to find a maximum collection of sets that are pairwise disjoint. The problem is among the most fundamental NP-hard optimization problems that have been studied extensively in various computational regimes. The focus of this work is on parameterized complexity, Parameterized Set Packing (PSP): Given $r \in {\mathbb N}$, is there a collection $ \mathcal{S}' \subseteq \mathcal{S}: |\mathcal{S}'| = r$ such that the sets in $\mathcal{S}'$ are pairwise disjoint? Unfortunately, the problem is not fixed-parameter tractable unless $\mathsf{W[1]} = \mathsf{FPT}$, and, in fact, an "enumeration" running time of $|\mathcal{S}|^{\Omega(r)}$ is required unless the exponential time hypothesis (ETH) fails. This paper is a quest for tractable instances of Set Packing from parameterized complexity perspectives. We say that the input $(\mathcal{U},\mathcal{S})$ is "compact" if $|\mathcal{U}| = f(r)\cdot\Theta(\textsf{poly}( \log |\mathcal{S}|))$, for some $f(r) \ge r$. In the Compact Set Packing problem, we are given a compact instance of PSP. In this direction, we present a "dichotomy" result of PSP: When $|\mathcal{U}| = f(r)\cdot o(\log |\mathcal{S}|)$, PSP is in $\textsf{FPT}$, while for $|\mathcal{U}| = r\cdot\Theta(\log (|\mathcal{S}|))$, the problem is $W[1]$-hard; moreover, assuming ETH, Compact PSP does not even admit a $|\mathcal{S}|^{o(r/\log r)}$-time algorithm. Although certain results in the literature imply hardness of compact versions of related problems such as Set $r$-Covering and Exact $r$-Covering, these constructions fail to extend to Compact PSP. A novel contribution of our work is the identification and construction of a gadget, which we call Compatible Intersecting Set System pair, that is crucial in obtaining the hardness result for Compact PSP.
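
The $|\mathcal{S}|^{O(r)}$ "enumeration" algorithm mentioned above is simply exhaustive search over $r$-subsets of the collection; a minimal sketch, with illustrative names:

```python
from itertools import combinations

def parameterized_set_packing(sets, r):
    """Naive |S|^O(r) enumeration for PSP: try every r-subset of the
    collection and test pairwise disjointness."""
    for combo in combinations(sets, r):
        if sum(len(s) for s in combo) == len(set().union(*combo)):
            return combo              # sizes add up iff pairwise disjoint
    return None

# Example with a packing of r = 2 pairwise disjoint sets.
S = [{1, 2}, {2, 3}, {4, 5}]
print(parameterized_set_packing(S, 2))  # ({1, 2}, {4, 5})
```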

Nearly all simulation-based games have environment parameters that affect incentives in the interaction but are not explicitly incorporated into the game model. To understand the impact of these parameters on strategic incentives, typical game-theoretic analysis involves selecting a small set of representative values, and constructing and analyzing separate game models for each value. We introduce a novel technique to learn a single model representing a family of closely related games that differ in the number of symmetric players or other ordinal environment parameters. Prior work trains a multi-headed neural network to output mixed-strategy deviation payoffs, which can be used to compute symmetric $\varepsilon$-Nash equilibria. We extend this work by making environment parameters into input dimensions of the regressor, enabling a single model to learn patterns which generalize across the parameter space. For continuous and discrete parameters, our results show that these generalized models outperform existing approaches, achieving better accuracy with far less data. This technique makes thorough analysis of the parameter space more tractable, and promotes analyses that capture relationships between parameters and incentives.
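
The core modeling idea, making environment parameters extra input dimensions of the deviation-payoff regressor, can be illustrated by the query encoding alone; the encoding below is a hypothetical sketch of ours, not the authors' implementation.

```python
import numpy as np

def regressor_input(mixed_strategy, num_players, env_params):
    """Hypothetical query encoding for the generalized model: the mixed
    strategy is concatenated with the player count and the other ordinal
    environment parameters, so a single regressor can learn patterns that
    generalize across the whole family of games."""
    return np.concatenate([
        np.asarray(mixed_strategy, dtype=float),
        [float(num_players)],
        np.asarray(env_params, dtype=float),
    ])

# One query: a 3-action symmetric mixed strategy, 5 players, one extra knob.
x = regressor_input([0.6, 0.3, 0.1], num_players=5, env_params=[0.25])
print(x)  # [0.6  0.3  0.1  5.  0.25]
```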

This work considers the low-rank approximation of a matrix $A(t)$ depending on a parameter $t$ in a compact set $D \subset \mathbb{R}^d$. Application areas that give rise to such problems include computational statistics and dynamical systems. Randomized algorithms are an increasingly popular approach for performing low-rank approximation and they usually proceed by multiplying the matrix with random dimension reduction matrices (DRMs). Applying such algorithms directly to $A(t)$ would involve different, independent DRMs for every $t$, which is not only expensive but also leads to inherently non-smooth approximations. In this work, we propose to use constant DRMs, that is, $A(t)$ is multiplied with the same DRM for every $t$. The resulting parameter-dependent extensions of two popular randomized algorithms, the randomized singular value decomposition and the generalized Nystr\"{o}m method, are computationally attractive, especially when $A(t)$ admits an affine linear decomposition with respect to $t$. We perform a probabilistic analysis for both algorithms, deriving bounds on the expected value as well as failure probabilities for the $L^2$ approximation error when using Gaussian random DRMs. Both the theoretical results and the numerical experiments show that the use of constant DRMs does not impair their effectiveness; our methods reliably return quasi-best low-rank approximations.
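
A minimal sketch of the constant-DRM idea for the randomized SVD (in the style of Halko, Martinsson and Tropp, with a single Gaussian sketch matrix shared across all parameter values) might look as follows; the dimensions, target rank, and oversampling below are illustrative, not the paper's choices.

```python
import numpy as np

def rsvd_constant_drm(A_of_t, ts, n, rank, oversample=5, seed=0):
    """Randomized low-rank approximation of a parameter-dependent matrix
    A(t), using ONE Gaussian dimension reduction matrix Omega for all t."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, rank + oversample))   # constant DRM
    approx = {}
    for t in ts:
        A = A_of_t(t)
        Q, _ = np.linalg.qr(A @ Omega)    # orthonormal range sketch at t
        approx[t] = Q @ (Q.T @ A)         # rank-(rank+oversample) approx.
    return approx

# Affine dependence A(t) = A0 + t * A1 with rank(A(t)) <= 10 by design.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
A1 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
low_rank = rsvd_constant_drm(lambda t: A0 + t * A1,
                             ts=[0.0, 0.5, 1.0], n=40, rank=10)
print(np.linalg.norm(low_rank[0.5] - (A0 + 0.5 * A1)))  # tiny error
```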
