
Given a graph $G=(V,E)$ and an integer $k\in \mathbb{N}$, we study {\sc 2-Eigenvalue Vertex Deletion} (2-EVD), where the goal is to remove at most $k$ vertices such that the adjacency matrix of the resulting graph has at most $2$ distinct eigenvalues. It is known that the adjacency matrix of a graph has at most $2$ distinct eigenvalues if and only if the graph is a collection of equal-sized cliques. So {\sc 2-Eigenvalue Vertex Deletion} amounts to removing a set of at most $k$ vertices such that the resulting graph is a collection of equal-sized cliques. The {\sc 2-Eigenvalue Edge Editing} (2-EEE), {\sc 2-Eigenvalue Edge Deletion} (2-EED) and {\sc 2-Eigenvalue Edge Addition} (2-EEA) problems are defined analogously. We provide a kernel of size $\mathcal{O}(k^{3})$ for {\sc $2$-EVD}. For the problems {\sc $2$-EEE} and {\sc $2$-EED}, we provide kernels of size $\mathcal{O}(k^{2})$. Finally, we provide a linear kernel of size $6k$ for {\sc $2$-EEA}. We thereby resolve three open questions listed by Misra et al. (ISAAC 2023) concerning the complexity of these problems parameterized by the solution size.
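
As a quick numerical illustration of this characterization (a small sketch of ours, not part of the paper), the snippet below builds the adjacency matrix of a disjoint union of cliques and counts its distinct eigenvalues: equal-sized cliques give exactly two, unequal sizes give more.

```python
import numpy as np

def clique_union_adjacency(sizes):
    """Adjacency matrix of a disjoint union of cliques with the given sizes."""
    n = sum(sizes)
    A = np.zeros((n, n))
    start = 0
    for s in sizes:
        A[start:start + s, start:start + s] = 1.0
        start += s
    np.fill_diagonal(A, 0.0)  # no self-loops
    return A

def num_distinct_eigenvalues(A, tol=1e-8):
    """Number of distinct eigenvalues of a symmetric matrix, up to a tolerance."""
    eigs = np.sort(np.linalg.eigvalsh(A))
    return 1 + int(np.sum(np.diff(eigs) > tol))

# Three disjoint copies of K_4: the spectrum is {3, -1}, i.e. 2 distinct eigenvalues.
print(num_distinct_eigenvalues(clique_union_adjacency([4, 4, 4])))  # 2
# Cliques of unequal sizes give more than 2 distinct eigenvalues.
print(num_distinct_eigenvalues(clique_union_adjacency([4, 3])))     # 3
```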

Related Content

In this paper we study the problem of finding $(\epsilon, \phi)$-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a $\phi$-expander, while the number of inter-cluster edges is only an $\epsilon$ fraction of the total volume. It was recently shown that there exists a simple algorithm to construct a $(O(\phi \log n), \phi)$-expander decomposition of an $n$-vertex graph using $\widetilde{O}(n/\phi^2)$ bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result calls for understanding the extent to which a dependence in space on the sparsity parameter $\phi$ is inherent. We move towards answering this question on two fronts. We prove that a $(O(\phi \log n), \phi)$-expander decomposition can be found using $\widetilde{O}(n)$ bits of space, for every $\phi$. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], with size independent of $\phi$, preserves the cuts inside the clusters of a boundary-linked expander decomposition within a multiplicative error. Notable algorithmic applications use sequences of expander decompositions; in particular, one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal on Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of $(O(\phi \log n), \phi)$-expander decompositions requires ${\widetilde{\Omega}}(n/\phi)$ bits of space, even in insertion-only streams.
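
To make the two decomposition guarantees concrete, here is a small brute-force checker (ours, not from the paper) for tiny graphs. It uses one common convention, measuring cluster volumes inside the induced subgraph and the total volume as $2|E|$; the boundary-linked strengthening mentioned above imposes additional conditions that are not checked here.

```python
import itertools

def induces_phi_expander(adj, cluster, phi):
    """Brute-force check that the subgraph induced by `cluster` is a phi-expander:
    every cut (S, cluster \\ S) has at least phi * min(vol(S), vol(cluster \\ S))
    crossing edges, with volumes measured by degrees in the induced subgraph."""
    cset = set(cluster)
    deg = {u: sum(1 for v in adj[u] if v in cset) for u in cluster}
    for r in range(1, len(cluster)):
        for S in itertools.combinations(sorted(cset), r):
            S = set(S)
            T = cset - S
            cut = sum(1 for u in S for v in adj[u] if v in T)
            small_vol = min(sum(deg[u] for u in S), sum(deg[u] for u in T))
            if cut < phi * small_vol:
                return False
    return True

def is_expander_decomposition(adj, clusters, eps, phi):
    """Check an (eps, phi)-expander decomposition: every cluster induces a phi-expander
    and the inter-cluster edges are at most an eps fraction of the total volume 2|E|."""
    total_volume = sum(len(nbrs) for nbrs in adj.values())              # = 2 |E|
    label = {u: i for i, C in enumerate(clusters) for u in C}
    inter = sum(1 for u in adj for v in adj[u] if label[u] != label[v]) // 2
    return inter <= eps * total_volume and all(
        induces_phi_expander(adj, C, phi) for C in clusters)

# Two triangles joined by a single edge: cutting that edge gives a decomposition with
# 1 inter-cluster edge, total volume 14, and each K_3 is (better than) a 1/2-expander.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(is_expander_decomposition(adj, [{0, 1, 2}, {3, 4, 5}], eps=1/12, phi=1/2))  # True
```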

In this paper, we study a class of non-smooth non-convex problems in the form of $\min_{x}[\max_{y\in Y}\phi(x, y) - \max_{z\in Z}\psi(x, z)]$, where both $\Phi(x) = \max_{y\in Y}\phi(x, y)$ and $\Psi(x)=\max_{z\in Z}\psi(x, z)$ are weakly convex functions, and $\phi(x, y), \psi(x, z)$ are strongly concave functions in terms of $y$ and $z$, respectively. It covers two families of problems that have been studied but for which single-loop stochastic algorithms have been missing, namely difference-of-weakly-convex problems and weakly convex, strongly concave min-max problems. We propose a stochastic Moreau envelope approximate gradient method dubbed SMAG, the first single-loop algorithm for solving these problems, and provide a state-of-the-art non-asymptotic convergence rate. The key idea of the design is to compute an approximate gradient of the Moreau envelopes of $\Phi$ and $\Psi$ using only one step of stochastic gradient update of the primal and dual variables. Empirically, we conduct experiments on positive-unlabeled (PU) learning and partial area under ROC curve (pAUC) optimization with an adversarial fairness regularizer to validate the effectiveness of our proposed algorithms.
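
Below is a minimal, deterministic sketch in the spirit of this idea (our own construction with made-up step sizes and a toy quadratic instance; it is not the authors' SMAG algorithm and it omits stochasticity). Each iteration takes one gradient step for the dual variables, one for the proximal-point estimates of the two Moreau envelopes, and one for the primal variable using the approximate Moreau gradients $(x-u)/\lambda$ and $(x-v)/\lambda$.

```python
import numpy as np

def single_loop_step(x, u, v, y, z, grads, lam=0.5, eta=0.05):
    """One single-loop update: one gradient step each for the dual variables y, z,
    for the proximal-point estimates u, v of the Moreau envelopes of Phi and Psi,
    and for the primal variable x (which uses the approximate Moreau gradients)."""
    gphi_x, gphi_y, gpsi_x, gpsi_z = grads
    y = y + eta * gphi_y(u, y)                      # ascent step on phi(u, .)
    z = z + eta * gpsi_z(v, z)                      # ascent step on psi(v, .)
    u = u - eta * (gphi_x(u, y) + (u - x) / lam)    # descent on phi(., y) + proximal term
    v = v - eta * (gpsi_x(v, z) + (v - x) / lam)    # descent on psi(., z) + proximal term
    x = x - eta * ((x - u) / lam - (x - v) / lam)   # approx. grad of Phi_lam(x) - Psi_lam(x)
    return x, u, v, y, z

# Toy instance: phi(x, y) = <x, y> - ||y||^2 / 2 and psi(x, z) = <x, z>/2 - ||z||^2 / 2,
# so Phi(x) = ||x||^2 / 2, Psi(x) = ||x||^2 / 8, and Phi - Psi is minimized at x = 0.
grads = (lambda u, y: y,             # d phi / d x
         lambda u, y: u - y,         # d phi / d y
         lambda v, z: 0.5 * z,       # d psi / d x
         lambda v, z: 0.5 * v - z)   # d psi / d z

x, u, v = (np.array([3.0]) for _ in range(3))
y, z = np.zeros(1), np.zeros(1)
for _ in range(2000):
    x, u, v, y, z = single_loop_step(x, u, v, y, z, grads)
print(x)  # close to 0
```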

We provide the sandwiched R\'enyi divergence of order $\alpha\in(\frac{1}{2},1)$, as well as its induced quantum information quantities, with an operational interpretation in the characterization of the exact strong converse exponents of quantum tasks. Specifically, we consider (a) smoothing of the max-relative entropy, (b) quantum privacy amplification, and (c) quantum information decoupling. We solve the problem of determining the exact strong converse exponents for these three tasks, with the performance being measured by the fidelity or purified distance. The results are given in terms of the sandwiched R\'enyi divergence of order $\alpha\in(\frac{1}{2},1)$, and its induced quantum R\'enyi conditional entropy and quantum R\'enyi mutual information. This is the first time that a precise operational meaning has been found for the sandwiched R\'enyi divergence with R\'enyi parameter in the interval $\alpha\in(\frac{1}{2},1)$.
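
Concretely, the quantity in question is the sandwiched R\'enyi divergence $\tilde{D}_\alpha(\rho\|\sigma)=\frac{1}{\alpha-1}\log\operatorname{Tr}\bigl[\bigl(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\bigr)^{\alpha}\bigr]$. The following small numpy sketch (ours, for illustration only; it assumes $\sigma$ has full support) evaluates it directly from this formula.

```python
import numpy as np

def mpow(M, p):
    """Fractional power of a Hermitian PSD matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.clip(w, 0.0, None) ** p) @ U.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Renyi divergence D_alpha(rho || sigma), assuming sigma has full support."""
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    return np.log(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1.0)

# Commuting example: for diagonal states the quantity reduces to the classical
# Renyi divergence of the eigenvalue distributions.
rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(sandwiched_renyi(rho, sigma, alpha=0.75))
```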

We study Leaky ResNets, which interpolate between ResNets ($\tilde{L}=0$) and Fully-Connected nets ($\tilde{L}\to\infty$) depending on an 'effective depth' hyper-parameter $\tilde{L}$. In the infinite depth limit, we study 'representation geodesics' $A_{p}$: continuous paths in representation space (similar to NeuralODEs) from input $p=0$ to output $p=1$ that minimize the parameter norm of the network. We give a Lagrangian and Hamiltonian reformulation, which highlights the importance of two terms: a kinetic energy, which favors small layer derivatives $\partial_{p}A_{p}$, and a potential energy, which favors low-dimensional representations as measured by the 'Cost of Identity'. The balance between these two forces offers an intuitive understanding of feature learning in ResNets. We leverage this intuition to explain the emergence of a bottleneck structure, as observed in previous work: for large $\tilde{L}$ the potential energy dominates and leads to a separation of timescales, where the representation jumps rapidly from the high-dimensional inputs to a low-dimensional representation, moves slowly inside the space of low-dimensional representations, and then jumps back to the potentially high-dimensional outputs. Inspired by this phenomenon, we train with an adaptive layer step-size to adapt to the separation of timescales.

Given an edge-weighted (metric/general) complete graph with $n$ vertices, the maximum weight (metric/general) $k$-cycle/path packing problem is to find a set of $\frac{n}{k}$ vertex-disjoint $k$-cycles/paths such that the total weight is maximized. In this paper, we consider approximation algorithms. For metric $k$-cycle packing, we improve the previous approximation ratio from $3/5$ to $7/10$ for $k=5$, and from $7/8\cdot(1-1/k)^2$ for $k>5$ to $(7/8-0.125/k)(1-1/k)$ for constant odd $k>5$ and to $7/8\cdot (1-1/k+\frac{1}{k(k-1)})$ for even $k>5$. For metric $k$-path packing, we improve the approximation ratio from $7/8\cdot (1-1/k)$ to $\frac{27k^2-48k+16}{32k^2-36k-24}$ for even $k$ with $6\leq k\leq 10$. For the case of $k=4$, we improve the approximation ratio from $3/4$ to $5/6$ for metric 4-cycle packing, from $2/3$ to $3/4$ for general 4-cycle packing, and from $3/4$ to $14/17$ for metric 4-path packing.
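
To make the objective concrete, here is an illustrative brute-force solver (ours, exponential time, only for very small instances; it is unrelated to the approximation algorithms of the paper) that enumerates all partitions of the vertices into blocks of size $k$ and all cycles within each block.

```python
import itertools

def cycle_weight(w, cycle):
    """Total weight of the cycle visiting the vertices in the given cyclic order."""
    return sum(w[cycle[i]][cycle[(i + 1) % len(cycle)]] for i in range(len(cycle)))

def best_cycle(w, verts):
    """Maximum-weight cycle on `verts` (fix the first vertex, permute the rest)."""
    first, rest = verts[0], verts[1:]
    return max(cycle_weight(w, (first,) + p) for p in itertools.permutations(rest))

def max_weight_k_cycle_packing(w, k):
    """Exact maximum-weight k-cycle packing by brute force (only for very small n)."""
    n = len(w)
    assert n % k == 0

    def solve(remaining):
        if not remaining:
            return 0
        v, rest = remaining[0], remaining[1:]
        best = float("-inf")
        for group in itertools.combinations(rest, k - 1):      # the cycle containing v
            others = tuple(u for u in rest if u not in group)
            best = max(best, best_cycle(w, (v,) + group) + solve(others))
        return best

    return solve(tuple(range(n)))

# Example: n = 6, k = 3, symmetric weight matrix of a complete graph.
w = [[0, 5, 2, 1, 1, 1],
     [5, 0, 4, 1, 1, 1],
     [2, 4, 0, 1, 1, 1],
     [1, 1, 1, 0, 3, 3],
     [1, 1, 1, 3, 0, 3],
     [1, 1, 1, 3, 3, 0]]
print(max_weight_k_cycle_packing(w, 3))  # 20 (= 11 + 9)
```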

We study the following generalization of the Hamiltonian cycle problem: Given integers $a,b$ and a graph $G$, does there exist a closed walk in $G$ that visits every vertex at least $a$ times and at most $b$ times? Equivalently, does there exist a connected $[2a,2b]$-factor of $2b \cdot G$ with all degrees even? This problem is NP-hard for any constants $1 \leq a \leq b$. However, the graphs produced by known reductions have maximum degree growing linearly in $b$. The case $a = b = 1$, i.e. Hamiltonicity, remains NP-hard even in $3$-regular graphs; a natural question is whether this is true for other $a$, $b$. In this work, we study which $a, b$ permit polynomial-time algorithms and which lead to NP-hardness in graphs with constrained degrees. We give tight characterizations for regular graphs and graphs of bounded max-degree, both directed and undirected.
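
To make the walk-counting constraint concrete, the following brute-force checker (ours, exponential time, for tiny graphs only) searches over states (current vertex, visit counts). One convention detail is assumed: the start and the end of the closed walk are identified and counted as a single visit.

```python
def has_bounded_closed_walk(adj, a, b):
    """Brute force: does the graph (adjacency dict) contain a closed walk that visits
    every vertex at least a and at most b times?  The start and end of the walk are
    identified and counted once.  Only suitable for very small graphs."""
    verts = sorted(adj)
    idx = {v: i for i, v in enumerate(verts)}
    start = verts[0]      # a closed walk covering all vertices must pass through verts[0]
    seen = set()

    def dfs(u, counts):
        if (u, counts) in seen:
            return False
        seen.add((u, counts))
        # Try to close the walk: step back to `start` without recounting it.
        if start in adj[u] and all(c >= a for c in counts):
            return True
        for v in adj[u]:
            i = idx[v]
            if counts[i] < b:
                new = list(counts)
                new[i] += 1
                if dfs(v, tuple(new)):
                    return True
        return False

    return dfs(start, tuple(1 if v == start else 0 for v in verts))

# A 4-cycle is Hamiltonian (a = b = 1); the star K_{1,3} is not, but it admits a
# closed walk that visits the center 3 times (a = 1, b = 3).
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(has_bounded_closed_walk(cycle4, 1, 1))  # True
print(has_bounded_closed_walk(star, 1, 1))    # False
print(has_bounded_closed_walk(star, 1, 3))    # True
```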

We give a quantum approximation scheme (i.e., $(1 + \varepsilon)$-approximation for every $\varepsilon > 0$) for the classical $k$-means clustering problem in the QRAM model with a running time that has only polylogarithmic dependence on the number of data points. More specifically, given a dataset $V$ with $N$ points in $\mathbb{R}^d$ stored in a QRAM data structure, our quantum algorithm runs in time $\tilde{O} \left( 2^{\tilde{O}(\frac{k}{\varepsilon})} \eta^2 d\right)$ and with high probability outputs a set $C$ of $k$ centers such that $cost(V, C) \leq (1+\varepsilon) \cdot cost(V, C_{OPT})$. Here $C_{OPT}$ denotes an optimal set of $k$ centers, $cost(\cdot)$ denotes the standard $k$-means cost function (i.e., the sum of the squared distances of points to the closest center), and $\eta$ is the aspect ratio (i.e., the ratio of the maximum distance to the minimum distance). This is the first quantum algorithm with a polylogarithmic running time that gives a provable approximation guarantee of $(1+\varepsilon)$ for the $k$-means problem. Also, unlike previous works on unsupervised learning, our quantum algorithm does not require quantum linear algebra subroutines and has a running time independent of parameters (e.g., condition number) that appear in such procedures.
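
For reference, the two quantities appearing in the guarantee, the $k$-means cost and the aspect ratio $\eta$, are spelled out in the short numpy sketch below (ours, not from the paper; it assumes all points are distinct so the minimum pairwise distance is nonzero).

```python
import numpy as np

def kmeans_cost(V, C):
    """Standard k-means cost: sum over points of the squared distance to the closest center."""
    d2 = ((V[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)   # (N, k) squared distances
    return d2.min(axis=1).sum()

def aspect_ratio(V):
    """Aspect ratio eta: ratio of the maximum to the minimum pairwise distance."""
    d = np.sqrt(((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2))
    off = d[~np.eye(len(V), dtype=bool)]
    return off.max() / off.min()

rng = np.random.default_rng(0)
V = rng.normal(size=(100, 3))
C = V[rng.choice(100, size=4, replace=False)]   # 4 arbitrary candidate centers
print(kmeans_cost(V, C), aspect_ratio(V))
```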

We show that the greedy algorithm for adaptive-submodular cover has approximation ratio at least $1.3\cdot(1+\ln Q)$. Moreover, the instance demonstrating this gap has $Q=1$. This invalidates a prior result in the paper ``Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization'' by Golovin and Krause, which claimed a $(1+\ln Q)^2$ approximation ratio for the same algorithm.
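
To see why an instance with $Q=1$ already contradicts the claimed bound, evaluate both expressions at $Q=1$:
\[
1.3\cdot(1+\ln 1) \;=\; 1.3 \;>\; 1 \;=\; (1+\ln 1)^{2},
\]
so the greedy ratio on that instance exceeds the previously claimed guarantee.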

Given a source and a target probability measure supported on $\mathbb{R}^d$, the Monge problem asks to find the most efficient way to map one distribution to the other. This efficiency is quantified by defining a \textit{cost} function between source and target data. Such a cost is often set by default in the machine learning literature to the squared-Euclidean distance, $\ell^2_2(\mathbf{x},\mathbf{y})=\tfrac12\|\mathbf{x}-\mathbf{y}\|_2^2$. Recently, Cuturi et al. '23 highlighted the benefits of using elastic costs, defined through a regularizer $\tau$ as $c(\mathbf{x},\mathbf{y})=\ell^2_2(\mathbf{x},\mathbf{y})+\tau(\mathbf{x}-\mathbf{y})$. Such costs shape the \textit{displacements} of Monge maps $T$, i.e., the difference $T(\mathbf{x})-\mathbf{x}$ between the image of a source point and the point itself, by giving them a structure that matches that of the proximal operator of $\tau$. In this work, we make two important contributions to the study of elastic costs: (i) For any elastic cost, we propose a numerical method to compute Monge maps that are provably optimal. This provides a much-needed routine to create synthetic problems where the ground-truth OT map is known, by analogy to the Brenier theorem, which states that the gradient of any convex potential is always a valid Monge map for the $\ell_2^2$ cost; (ii) We propose a loss to \textit{learn} the parameter $\theta$ of a parameterized regularizer $\tau_\theta$, and apply it in the case where $\tau_{A}(\mathbf{z})=\|A^\perp \mathbf{z}\|^2_2$. This regularizer promotes displacements that lie on a low-dimensional subspace of $\mathbb{R}^d$, spanned by the $p$ rows of $A\in\mathbb{R}^{p\times d}$.
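
A small numpy sketch of such an elastic cost follows (ours, not from the paper), reading $A^\perp$ as the orthogonal projector onto the complement of the row space of $A$: displacements inside the row space of $A$ only pay the quadratic term, while displacements orthogonal to it also pay the regularizer.

```python
import numpy as np

def orth_complement_projector(A):
    """Projector onto the orthogonal complement of the row space of A (p x d, full row rank)."""
    Q, _ = np.linalg.qr(A.T)          # columns of Q span the row space of A
    return np.eye(A.shape[1]) - Q @ Q.T

def elastic_cost(x, y, A):
    """c(x, y) = 0.5 * ||x - y||^2 + ||P (x - y)||^2, with P the projector above."""
    z = x - y
    P = orth_complement_projector(A)
    return 0.5 * z @ z + (P @ z) @ (P @ z)

d, p = 5, 2
rng = np.random.default_rng(0)
A = rng.normal(size=(p, d))
x = rng.normal(size=d)
# A unit displacement inside the row space of A is cheaper than an orthogonal one.
z_in = A.T @ rng.normal(size=p); z_in /= np.linalg.norm(z_in)
z_out = orth_complement_projector(A) @ rng.normal(size=d); z_out /= np.linalg.norm(z_out)
print(elastic_cost(x, x - z_in, A), elastic_cost(x, x - z_out, A))  # ~0.5 vs ~1.5
```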

We provide a self-contained proof of a result of Dudley [Dud64], which shows that a bounded convex body in $\Re^d$ can be $\varepsilon$-approximated by the intersection of $O_d\bigl(\varepsilon^{-(d-1)/2} \bigr)$ halfspaces, where $O_d$ hides constants that depend on $d$.
