
Baker's technique is a powerful tool for designing polynomial-time approximation schemes, in particular for all optimization problems expressible in monotone first-order logic. However, it can only be used in rather restricted graph classes. We show that maximization problems expressible in monotone first-order logic admit a PTAS under the much weaker assumption of fractional treewidth-fragility, and a QPTAS on all hereditary classes with sublinear separators. Moreover, the same technique gives a constant-factor approximation for these problems in any class of graphs of bounded expansion.
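For intuition about the baseline being generalized, here is a minimal sketch of Baker's layering scheme for maximum independent set on a connected planar graph: delete every $(k+1)$-th BFS layer, solve each remaining piece exactly, and return the best of the $k+1$ shifts. The per-piece exact solver below is a brute-force placeholder standing in for bounded-treewidth dynamic programming; nothing here reflects the fractional treewidth-fragility machinery of the paper.

```python
import itertools
import networkx as nx

def baker_mis(G, k, root=None):
    """Baker-style (1 - 1/(k+1))-approximate maximum independent set.
    Assumes G is connected and planar. Deleting every (k+1)-th BFS
    layer leaves pieces of bounded treewidth (for planar G), which a
    real implementation would solve by treewidth DP."""
    root = root if root is not None else next(iter(G))
    layer = nx.single_source_shortest_path_length(G, root)  # BFS layers
    best = set()
    for shift in range(k + 1):
        kept = [v for v in G if layer[v] % (k + 1) != shift]
        sol = set()
        for comp in nx.connected_components(G.subgraph(kept)):
            sol |= brute_force_mis(G.subgraph(comp))
        best = max(best, sol, key=len)
    return best

def brute_force_mis(H):
    """Placeholder exact solver (exponential; fine only for tiny pieces)."""
    nodes = list(H)
    for r in range(len(nodes), 0, -1):
        for cand in itertools.combinations(nodes, r):
            if not any(H.has_edge(u, v)
                       for u, v in itertools.combinations(cand, 2)):
                return set(cand)
    return set()
```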

Related content

The Gomory-Hu tree or cut tree (Gomory and Hu, 1961) is a classic data structure for reporting $(s,t)$ mincuts (and by duality, the values of $(s,t)$ maxflows) for all pairs of vertices $s$ and $t$ in an undirected graph. Gomory and Hu showed that it can be computed using $n-1$ exact maxflow computations. Surprisingly, this remains the best algorithm for Gomory-Hu trees more than 50 years later, *even for approximate mincuts*. In this paper, we break this longstanding barrier and give an algorithm for computing a $(1+\epsilon)$-approximate Gomory-Hu tree using $\text{polylog}(n)$ maxflow computations. Specifically, we obtain the following runtime bounds. We obtain a randomized (Monte Carlo) algorithm for undirected, weighted graphs that runs in $\tilde O(m + n^{3/2})$ time and returns a $(1+\epsilon)$-approximate Gomory-Hu tree w.h.p. Previously, the best running time known was $\tilde O(n^{5/2})$, which is obtained by running Gomory and Hu's original algorithm on a cut sparsifier of the graph. Next, we obtain a randomized (Monte Carlo) algorithm for undirected, unweighted graphs that runs in $m^{4/3+o(1)}$ time and returns a $(1+\epsilon)$-approximate Gomory-Hu tree w.h.p. This improves on our first result for sufficiently sparse graphs, namely when $m = o(n^{9/8})$. Previously, the best running time known for unweighted graphs was $\tilde O(mn)$ for an exact Gomory-Hu tree (Bhalgat et al., STOC 2007); no better result was known if approximations are allowed. As a consequence of our Gomory-Hu tree algorithms, we also solve the $(1+\epsilon)$-approximate all pairs mincut and single source mincut problems in the same time bounds. (These problems are simpler in that the goal is to only return the $(s,t)$ mincut values, and not the mincuts.) This improves on the recent algorithm for these problems in $\tilde O(n^2)$ time due to Abboud et al. (FOCS 2020).
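For context, the classic $n-1$-maxflow construction that the paper's polylog-maxflow algorithm improves on can be sketched compactly in Gusfield's variant, which avoids Gomory and Hu's graph contractions. Strictly speaking, this simple variant returns a flow-equivalent tree: the minimum edge weight on the tree path between $s$ and $t$ equals the $(s,t)$ mincut value, which is what the all-pairs mincut application needs. The sketch assumes networkx and an edge attribute `weight`.

```python
import networkx as nx

def gomory_hu_gusfield(G, capacity="weight"):
    """Flow-equivalent tree of an undirected capacitated graph via
    n-1 exact maxflow calls (Gusfield's variant of Gomory-Hu).
    The minimum edge on the tree path between s and t equals the
    (s,t) mincut value in G."""
    nodes = list(G)
    parent = {v: nodes[0] for v in nodes}
    weight = {}
    for i, s in enumerate(nodes[1:], start=1):
        t = parent[s]
        cut_value, (S, _) = nx.minimum_cut(G, s, t, capacity=capacity)
        weight[s] = cut_value
        # Re-hang later vertices that fell on s's side of the cut.
        for v in nodes[i + 1:]:
            if v in S and parent[v] == t:
                parent[v] = s
    T = nx.Graph()
    T.add_nodes_from(nodes)
    T.add_weighted_edges_from((s, parent[s], weight[s]) for s in nodes[1:])
    return T
```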

Minimax optimization has been central in addressing various applications in machine learning, game theory, and control theory. Prior literature has thus far mainly focused on studying such problems in the continuous domain, e.g., convex-concave minimax optimization is now understood to a significant extent. Nevertheless, minimax problems extend far beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains. In this paper, we study mixed continuous-discrete minimax problems where the minimization is over a continuous variable belonging to Euclidean space and the maximization is over subsets of a given ground set. We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable. Even though such problems appear frequently in machine learning applications, little is known about how to address them from algorithmic and theoretical perspectives. For such problems, we first show that obtaining saddle points is hard up to any approximation, and thus introduce new notions of (near-) optimality. We then provide several algorithmic procedures for solving convex and monotone-submodular minimax problems and characterize their convergence rates, computational complexity, and quality of the final solution according to our notions of optimality. Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization. Finally, we provide numerical experiments to showcase the effectiveness of our proposed methods.
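To make the problem template concrete, here is one natural, entirely hypothetical alternating scheme for $\min_x \max_{|S|\le k} f(x,S)$: a greedy best response in the set variable (which is $(1-1/e)$-approximate when $f(x,\cdot)$ is monotone submodular) followed by a subgradient step in $x$. It illustrates the structure only and is not one of the paper's algorithms.

```python
import numpy as np

def greedy_max(f, ground, k):
    """Greedy (1-1/e)-approximate maximizer of a monotone submodular
    set function f over subsets of `ground` of size at most k."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground if e not in S}
        if not gains:
            break
        e, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break
        S.add(e)
    return S

def alternating_minimax(obj, grad_x, x0, ground, k, steps=100, lr=0.1):
    """Hypothetical alternating scheme for min_x max_S obj(x, S):
    greedy best response in S, then a diminishing subgradient step in x."""
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        S = greedy_max(lambda T: obj(x, T), ground, k)  # discrete best response
        x = x - lr / np.sqrt(t + 1) * grad_x(x, S)      # continuous descent step
    return x, greedy_max(lambda T: obj(x, T), ground, k)
```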

Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL). In addition to value maximization, other practical considerations commonly arise as well, including the need to encourage exploration and to ensure certain structural properties of the learned policy due to safety, resource, and operational constraints. These considerations can often be accounted for by resorting to regularized RL, which augments the target value function with a structure-promoting regularization term. Focusing on an infinite-horizon discounted Markov decision process, this paper proposes a generalized policy mirror descent (GPMD) algorithm for solving regularized RL. As a generalization of policy mirror descent (Lan, 2021), the proposed algorithm accommodates a general class of convex regularizers as well as a broad family of Bregman divergences chosen in accordance with the regularizer in use. We demonstrate that our algorithm converges linearly over an entire range of learning rates, in a dimension-free fashion, to the global solution, even when the regularizer lacks strong convexity and smoothness. In addition, this linear convergence feature is provably stable in the face of inexact policy evaluation and imperfect policy updates. Numerical experiments are provided to corroborate the applicability and appealing performance of GPMD.
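As a point of reference, the special case of negative-entropy regularization with the matching KL Bregman divergence admits a well-known closed-form tabular update; the numpy sketch below implements that standard entropy-regularized policy mirror descent step (GPMD covers far more general regularizer/divergence pairings), and is not specific to this paper.

```python
import numpy as np

def pmd_entropy_step(pi, Q, eta, tau):
    """One tabular policy mirror descent step for an entropy-regularized
    MDP. pi has shape [S, A]; Q is the regularized Q-function of pi,
    shape [S, A]; eta is the learning rate, tau the regularization weight.
    Closed form for negative entropy with KL Bregman divergence:
        pi'(a|s) ∝ exp( (eta * Q(s,a) + log pi(a|s)) / (1 + eta*tau) )"""
    logits = (eta * Q + np.log(pi)) / (1.0 + eta * tau)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)
```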

Given only positive examples and unlabeled examples (from both positive and negative classes), we might hope nevertheless to estimate an accurate positive-versus-negative classifier. Formally, this task is broken down into two subtasks: (i) Mixture Proportion Estimation (MPE) -- determining the fraction of positive examples in the unlabeled data; and (ii) PU-learning -- given such an estimate, learning the desired positive-versus-negative classifier. Unfortunately, classical methods for both problems break down in high-dimensional settings. Meanwhile, recently proposed heuristics lack theoretical coherence and depend precariously on hyperparameter tuning. In this paper, we propose two simple techniques: Best Bin Estimation (BBE) for MPE; and Conditional Value Ignoring Risk (CVIR), a simple objective for PU-learning. Both methods dominate previous approaches empirically, and for BBE, we establish formal guarantees that hold whenever we can train a model to cleanly separate out a small subset of positive examples. Our final algorithm, (TED)$^n$, alternates between the two procedures, significantly improving both our mixture proportion estimator and classifier.
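The idea behind BBE can be conveyed in a few lines: score held-out positive and unlabeled examples with any classifier, and at each candidate threshold compare the tail mass of the unlabeled scores to that of the positive scores. The sketch below is a simplification with a generic confidence penalty (`slack`), not the paper's exact bound.

```python
import numpy as np

def bbe_estimate(pos_scores, unl_scores, delta=0.1, gamma=0.5):
    """Simplified sketch of Best Bin Estimation: for each threshold z,
    the ratio of unlabeled to positive tail mass above z estimates the
    fraction of positives in the unlabeled data; pick the threshold
    minimizing a penalized ratio. The penalty here is a generic
    Hoeffding-style term, not the paper's exact expression."""
    pos = np.sort(np.asarray(pos_scores))
    unl = np.asarray(unl_scores)
    n_p, n_u = len(pos), len(unl)
    slack = (1 + gamma) * (np.sqrt(np.log(4 / delta) / (2 * n_p))
                           + np.sqrt(np.log(4 / delta) / (2 * n_u)))
    best = 1.0
    for z in pos:  # candidate thresholds at positive-score order statistics
        q_p = np.mean(pos >= z)  # positive tail mass above z
        q_u = np.mean(unl >= z)  # unlabeled tail mass above z
        best = min(best, q_u / q_p + slack / q_p)
    return min(best, 1.0)
```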

The goal of this work is to give precise bounds on the counting complexity of a family of generalized coloring problems (list homomorphisms) on bounded-treewidth graphs. Given graphs $G$, $H$, and lists $L(v)\subseteq V(H)$ for every $v\in V(G)$, a {\em list homomorphism} is a function $f:V(G)\to V(H)$ that preserves the edges (i.e., $uv\in E(G)$ implies $f(u)f(v)\in E(H)$) and respects the lists (i.e., $f(v)\in L(v)$). Standard techniques show that if $G$ is given with a tree decomposition of width $t$, then the number of list homomorphisms can be counted in time $|V(H)|^t\cdot n^{\mathcal{O}(1)}$. Our main result is determining, for every fixed graph $H$, how much the base $|V(H)|$ in the running time can be improved. For a connected graph $H$ we define $\operatorname{irr}(H)$ as follows: if $H$ has a loop or is nonbipartite, then $\operatorname{irr}(H)$ is the maximum size of a set $S\subseteq V(H)$ in which any two vertices have different neighborhoods; if $H$ is bipartite, then $\operatorname{irr}(H)$ is the maximum size of such a set that is fully in one of the bipartition classes. For disconnected $H$, we define $\operatorname{irr}(H)$ as the maximum of $\operatorname{irr}(C)$ over every connected component $C$ of $H$. We show that, for every fixed graph $H$, the number of list homomorphisms from $(G,L)$ to $H$

* can be counted in time $\operatorname{irr}(H)^t\cdot n^{\mathcal{O}(1)}$ if a tree decomposition of $G$ having width at most $t$ is given in the input, and
* cannot be counted in time $(\operatorname{irr}(H)-\epsilon)^t\cdot n^{\mathcal{O}(1)}$ for any $\epsilon>0$, even if a tree decomposition of $G$ having width at most $t$ is given in the input, unless the #SETH fails.

Thereby we give a precise and complete complexity classification featuring matching upper and lower bounds for all target graphs, with or without loops.
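To pin down the object being counted, a direct brute-force counter over all assignments takes only a few lines (exponential in $|V(G)|$; the paper's point is that dynamic programming over a tree decomposition achieves base $\operatorname{irr}(H)$ instead).

```python
from itertools import product

def count_list_homomorphisms(G_edges, G_vertices, H_adj, lists):
    """Count functions f: V(G) -> V(H) with f(v) in lists[v] for all v
    and f(u)f(v) an edge of H for every edge uv of G (H_adj is the
    symmetric adjacency map of H). Brute force over all assignments;
    the paper's algorithm instead runs in irr(H)^t * poly(n) given a
    width-t tree decomposition of G."""
    count = 0
    for choice in product(*(lists[v] for v in G_vertices)):
        f = dict(zip(G_vertices, choice))
        if all(f[v] in H_adj[f[u]] for u, v in G_edges):
            count += 1
    return count

# Tiny example: G is a single edge {a, b}, H is a loopless triangle.
H_adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
lists = {"a": [0, 1], "b": [0, 1, 2]}
print(count_list_homomorphisms([("a", "b")], ["a", "b"], H_adj, lists))  # 4
```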

We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces $B^\alpha_{q}(L^p)$ in arbitrary dimension $d$, on general domains. We show that deep rectifier networks with a fixed activation function attain optimal or near-optimal approximation rates for functions in the Besov space $B^\alpha_{\tau}(L^\tau)$ on the critical embedding line $1/\tau=\alpha/d+1/p$ for \emph{arbitrary} smoothness order $\alpha>0$. Using interpolation theory, this implies that the entire range of smoothness classes at or above the critical line is (near-)optimally approximated by deep ReLU/RePU networks.

The Restricted Additive Schwarz method with impedance transmission conditions, also known as the Optimised Restricted Additive Schwarz (ORAS) method, is a simple overlapping one-level parallel domain decomposition method, which has been successfully used as an iterative solver and as a preconditioner for discretized Helmholtz boundary-value problems. In this paper, we give, for the first time, a convergence analysis for ORAS as an iterative solver -- and also as a preconditioner -- for nodal finite element Helmholtz systems of any polynomial order. The analysis starts by showing (for general domain decompositions) that ORAS can be viewed as an unconventional finite element approximation of a classical parallel iterative Schwarz method, formulated at the PDE (non-discrete) level. This non-discrete Schwarz method was recently analysed in [Gong, Gander, Graham, Lafontaine, Spence, arXiv 2106.05218], and the present paper gives a corresponding discrete version of this analysis. In particular, for domain decompositions in strips in 2-d, we show that, when the mesh size is small enough, ORAS inherits the convergence properties of the Schwarz method, independent of polynomial order. The proof relies on characterising the ORAS iteration in terms of discrete `impedance-to-impedance maps', which we prove (via a novel weighted finite-element error analysis) converge as $h\rightarrow 0$ in the operator norm to their non-discrete counterparts.
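For orientation, one iteration of restricted additive Schwarz in algebraic form is $x_{k+1} = x_k + \sum_i R_i^T D_i A_i^{-1} R_i (b - Ax_k)$; ORAS has exactly this structure, except that the local matrices $A_i$ are assembled with impedance transmission conditions, which the sketch below simply takes as given inputs.

```python
import numpy as np

def ras_iterate(A, b, subdomains, local_mats, partition_weights,
                x0=None, iters=50):
    """Restricted additive Schwarz iteration
        x_{k+1} = x_k + sum_i R_i^T D_i A_i^{-1} R_i (b - A x_k),
    where subdomains[i] is the (overlapping) index set of subdomain i,
    partition_weights[i] is its partition-of-unity vector D_i, and
    local_mats[i] is the local matrix A_i (assembled with impedance
    conditions in the ORAS case; here assumed given)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(iters):
        r = b - A @ x
        for idx, A_i, d_i in zip(subdomains, local_mats, partition_weights):
            # All corrections use the same residual r (additive variant).
            x[idx] += d_i * np.linalg.solve(A_i, r[idx])
    return x
```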

In this paper we present results on asymptotic characteristics of multivariate function classes in the uniform norm. Our main interest is the approximation of functions with mixed smoothness parameter not larger than $1/2$. Our focus will be on the behavior of the best $m$-term trigonometric approximation as well as the decay of Kolmogorov and entropy numbers in the uniform norm. It turns out that these quantities share a few fundamental abstract properties, such as their behavior under real interpolation, so that they can be treated simultaneously. We start with proving estimates on finite rank convolution operators with range in a step hyperbolic cross. These results imply bounds for the corresponding function space embeddings via a well-known decomposition technique. The decay of the Kolmogorov numbers has direct implications for the problem of sampling recovery in $L_2$ in situations where recent results in the literature are not applicable because the corresponding approximation numbers are not square-summable.
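For readers meeting these quantities side by side, the standard definitions (for a function class $F$ in a normed space $Y$) are recalled below; indexing conventions for entropy numbers vary slightly across the literature.

$$
d_m(F,Y) = \inf_{\substack{L_m \subseteq Y \\ \dim L_m \le m}} \;\sup_{f\in F}\; \inf_{g\in L_m} \|f-g\|_Y ,
\qquad
e_k(F,Y) = \inf\Bigl\{\varepsilon>0 \,:\, F \text{ can be covered by } 2^{k-1} \text{ balls of radius } \varepsilon \text{ in } Y\Bigr\},
$$
$$
\sigma_m(f)_\infty = \inf_{\substack{\Lambda\subset\mathbb{Z}^d \\ |\Lambda|\le m}} \;\inf_{(c_k)} \Bigl\| f - \sum_{k\in\Lambda} c_k\, e^{i\langle k, x\rangle} \Bigr\|_\infty .
$$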

Many representative graph neural networks, $e.g.$, GPR-GNN and ChebyNet, approximate graph convolutions with graph spectral filters. However, existing work either applies predefined filter weights or learns them without necessary constraints, which may lead to oversimplified or ill-posed filters. To overcome these issues, we propose $\textit{BernNet}$, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters. In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein polynomial approximation and designs its spectral property by setting the coefficients of the Bernstein basis. Moreover, we can learn the coefficients (and the corresponding filter weights) based on observed graphs and their associated signals and thus achieve the BernNet specialized for the data. Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks.
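The Bernstein parameterization is easy to state concretely: for normalized-Laplacian eigenvalues $\lambda\in[0,2]$, the filter reads $h(\lambda)=\sum_{k=0}^{K}\theta_k\binom{K}{k}(1-\lambda/2)^{K-k}(\lambda/2)^k$. The numpy sketch below applies this filter to a signal via the equivalent matrix form; it follows the Bernstein-basis parameterization directly (the trainable part is the nonnegative vector $\theta$), without the efficiency tricks of an actual BernNet layer.

```python
import numpy as np
from math import comb

def bernstein_filter(L, x, theta):
    """Apply the order-K Bernstein spectral filter
        h(lambda) = sum_k theta_k * C(K,k) * (1 - lambda/2)^(K-k) * (lambda/2)^k
    to a signal x, where L is the normalized Laplacian (eigenvalues in
    [0,2]) and theta has K+1 nonnegative entries. Matrix form:
        z = sum_k theta_k / 2^K * C(K,k) * (2I - L)^(K-k) L^k x."""
    K = len(theta) - 1
    n = L.shape[0]
    two_I_minus_L = 2 * np.eye(n) - L
    # Precompute L^k x for k = 0..K.
    Lk_x = [x]
    for _ in range(K):
        Lk_x.append(L @ Lk_x[-1])
    z = np.zeros_like(x, dtype=float)
    for k in range(K + 1):
        term = Lk_x[k]
        for _ in range(K - k):
            term = two_I_minus_L @ term
        z += theta[k] * comb(K, k) / 2**K * term
    return z
```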

In this paper, from a theoretical perspective, we study how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to theoretically demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring information to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
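For a sense of what a local algorithm with a provable ratio looks like in this setting, the textbook example is the maximal-matching 2-approximation for minimum vertex cover (a classical algorithm, not one introduced in the paper), which message-passing architectures of the kind studied here could in principle emulate.

```python
import networkx as nx

def vertex_cover_2approx(G):
    """Classic 2-approximation for minimum vertex cover: take both
    endpoints of every edge in a greedily built maximal matching.
    Every edge of G touches the cover (by maximality of the matching),
    and any cover must contain at least one endpoint per matched edge,
    so |cover| <= 2 * OPT."""
    cover, matched = set(), set()
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover
```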
