
We survey the field of algorithms and complexity for graph problems parameterized above or below guaranteed values, a research area pioneered by Venkatesh Raman. These problems seek, for a given graph $G$, a solution whose value is at least $g(G)+k$ or at most $g(G)-k$, where $g(G)$ is a guarantee on the value that any solution on $G$ takes. The goal is to design algorithms that find such a solution in time whose dependence on $k$ is decoupled from the dependence on the guarantee, or to rule out the existence of such algorithms by means of intractability results. We discuss a large number of algorithms and intractability results, and complement them with several open problems.
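
For concreteness, consider MAX CUT: every graph with $m$ edges has a cut of size at least $m/2$, so $g(G)=m/2$ is a guarantee and the interesting parameter is the excess $k$ above it. The following minimal Python sketch (exponential-time brute force, for illustration only; all helper names are our own) decides whether a cut of size at least $m/2+k$ exists.

```python
# A minimal illustration of the "above guarantee" setting: every graph with
# m edges has a cut of size at least m/2, so we ask for a cut of size at
# least m/2 + k. Brute force over all vertex bipartitions; toy inputs only.
from itertools import combinations

def max_cut_above_guarantee(n, edges, k):
    """Return True if some cut of the n-vertex graph has >= m/2 + k edges."""
    m = len(edges)
    target = m / 2 + k
    for r in range(n + 1):
        for side in combinations(range(n), r):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            if cut >= target:
                return True
    return False

# A triangle has m = 3 edges; its best cut has 2 >= 3/2 + 0 edges.
print(max_cut_above_guarantee(3, [(0, 1), (1, 2), (0, 2)], 0))  # True
```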

Related content

Studying the properties of stochastic noise to optimize complex non-convex functions has been an active area of research in the field of machine learning. Prior work has shown that the noise of stochastic gradient descent improves optimization by overcoming undesirable obstacles in the landscape. Moreover, injecting artificial Gaussian noise has become a popular idea for quickly escaping saddle points. Indeed, in the absence of reliable gradient information, the noise is used to explore the landscape, but it is unclear what type of noise is optimal in terms of exploration ability. In order to narrow this gap in our knowledge, we study a general type of continuous-time non-Markovian process, based on fractional Brownian motion, that allows the increments of the process to be correlated. This generalizes processes based on Brownian motion, such as the Ornstein-Uhlenbeck process. We demonstrate how to discretize such processes, which gives rise to a new algorithm, fPGD. This method is a generalization of the known algorithms PGD and Anti-PGD. We study the properties of fPGD both theoretically and empirically, demonstrating that it possesses exploration abilities that, in some cases, are favorable over those of PGD and Anti-PGD. These results open the field to novel ways of exploiting noise for training machine learning models.
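
As a rough illustration of the idea (not the authors' fPGD), the sketch below perturbs gradient descent with fractional Gaussian noise generated via a Cholesky factorization of its covariance; the Hurst parameter $H$ controls the correlation of increments ($H=0.5$ recovers independent PGD-style noise, $H<0.5$ gives anti-correlated noise in the spirit of Anti-PGD). All names and constants are illustrative assumptions.

```python
# Hedged sketch: perturbed gradient descent with fractional-Gaussian-noise
# perturbations. Exact fBm increments are sampled by Cholesky-factoring the
# Toeplitz autocovariance of fractional Gaussian noise with Hurst index H.
import numpy as np

def fgn_increments(T, H, rng):
    """Sample T unit-step increments of fractional Brownian motion."""
    k = np.arange(T)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = np.array([[gamma[abs(i - j)] for j in range(T)] for i in range(T)])
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(T))  # jitter for stability
    return L @ rng.standard_normal(T)

def fpgd_like(grad, x0, steps=200, lr=0.05, sigma=0.1, H=0.7, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x0)
    # One independent correlated-noise stream per coordinate.
    noise = np.stack([fgn_increments(steps, H, rng) for _ in range(d)], axis=1)
    x = np.array(x0, dtype=float)
    for t in range(steps):
        x = x - lr * grad(x) + sigma * noise[t]
    return x

# Example: escape the flat point of f(x) = (x0^2 - 1)^2 + x1^2 at the origin.
grad = lambda x: np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])
print(fpgd_like(grad, [0.0, 0.5]))
```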

In PATH SET PACKING, the input is an undirected graph $G$, a collection $\cal P$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ mutually edge-disjoint paths in $\cal P$. We study the parameterized complexity of PATH SET PACKING with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number plus the maximum length of a path in $\cal P$, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we give an FPT algorithm parameterized by feedback vertex set number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\cal P$. Both positive results complement the hardness of PATH SET PACKING with respect to any subset of the parameters used in the FPT algorithms.
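
For intuition, the decision problem is easy to state programmatically; the following brute-force Python sketch (exponential in the size of $\cal P$, so for toy inputs only) checks all $k$-subsets of $\cal P$ for pairwise edge-disjointness.

```python
# Brute-force check for PATH SET PACKING, for intuition only: the problem is
# W[1]-hard in general, so no efficient algorithm is implied by this sketch.
from itertools import combinations

def edge_set(path):
    """Edges of a path given as a vertex sequence, as sorted vertex pairs."""
    return {tuple(sorted(e)) for e in zip(path, path[1:])}

def path_set_packing(paths, k):
    """Decide whether k pairwise edge-disjoint paths exist in `paths`."""
    edge_sets = [edge_set(p) for p in paths]
    for combo in combinations(edge_sets, k):
        total = sum(len(es) for es in combo)
        union = set().union(*combo)
        if len(union) == total:  # no edge shared between the chosen paths
            return True
    return False

paths = [[0, 1, 2], [2, 3], [1, 2, 3], [0, 4, 3]]
print(path_set_packing(paths, 3))  # True: [0,1,2], [2,3], [0,4,3]
```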

We study the complexity of classical constraint satisfaction problems on a 2D grid. Specifically, we consider the complexity of function versions of such problems, with the additional restriction that the constraints are translationally invariant, namely, the variables are located at the vertices of a 2D grid and the constraint between every pair of adjacent variables is the same in each dimension. The only input to the problem is thus the size of the grid. This problem is equivalent to one of the most interesting problems in classical physics, namely, computing the lowest energy of a classical system of particles on the grid. We provide a tight characterization of the complexity of this problem, and show that it is complete for the class $FP^{NEXP}$. Gottesman and Irani (FOCS 2009) also studied classical translationally-invariant constraint satisfaction problems; they showed that the problem of deciding whether the cost of the optimal solution is below a given threshold is NEXP-complete. Our result is thus a strengthening of their result from the decision version to the function version of the problem. Our result can also be viewed as a generalization, to the translationally invariant setting, of Krentel's famous result from 1988 showing that the function version of SAT is complete for the class $FP^{NP}$. An essential ingredient in the proof is a study of the complexity of a gapped variant of the problem. We show that it is NEXP-hard to approximate the cost of the optimal assignment to within an additive error of $\Omega(N^{1/4})$, for an $N \times N$ grid. To the best of our knowledge, no gapped result is known for CSPs on the grid, even in the non-translationally invariant case. As a byproduct of our results, we also show that a decision version of the optimization problem which asks whether the cost of the optimal assignment is odd or even is also complete for $P^{NEXP}$.
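
To make the object of study concrete, the following Python sketch computes the ground energy of a toy translationally invariant instance by brute force: one fixed horizontal and one fixed vertical pairwise cost function are applied across the whole grid, and only the grid size (together with the fixed constraint) varies. The helper names are ours.

```python
# Brute-force function version of a translationally invariant grid CSP:
# minimize the total cost over all assignments of {0..q-1} to an N x N grid,
# with the same pairwise cost applied to every adjacent pair per dimension.
from itertools import product

def ground_energy(N, q, h_cost, v_cost):
    """Minimum total cost over all q^(N*N) assignments (toy sizes only)."""
    best = float("inf")
    for assignment in product(range(q), repeat=N * N):
        grid = [assignment[i * N:(i + 1) * N] for i in range(N)]
        cost = 0
        for i in range(N):
            for j in range(N):
                if j + 1 < N:
                    cost += h_cost(grid[i][j], grid[i][j + 1])
                if i + 1 < N:
                    cost += v_cost(grid[i][j], grid[i + 1][j])
        best = min(best, cost)
    return best

# Antiferromagnetic-style constraint: adjacent equal values cost 1.
neq = lambda a, b: int(a == b)
print(ground_energy(3, 2, neq, neq))  # 0: a checkerboard pattern is optimal
```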

Linear programming (LP) is an extremely useful tool that has been successfully applied to solve various problems in a wide range of areas, including operations research, engineering, economics, and even more abstract mathematical areas such as combinatorics. It is also used in many machine learning applications, such as $\ell_1$-regularized SVMs, basis pursuit, nonnegative matrix factorization, etc. Interior Point Methods (IPMs) are one of the most popular methods for solving LPs both in theory and in practice. Their underlying complexity is dominated by the cost of solving a system of linear equations at each iteration. In this paper, we consider both feasible and infeasible IPMs for the special case where the number of variables is much larger than the number of constraints. Using tools from Randomized Linear Algebra, we present a preconditioning technique that, when combined with iterative solvers such as Conjugate Gradient or Chebyshev Iteration, provably guarantees that IPM algorithms (suitably modified to account for the error incurred by the approximate solver) converge to a feasible, approximately optimal solution, without increasing their iteration complexity. Our empirical evaluations verify our theoretical results on both real-world and synthetic data.
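
The following sketch illustrates the general flavor of such a preconditioner (the paper's exact construction and constants differ): sketch the wide matrix $W = AD^{1/2}$ with a Gaussian matrix, factor the sketch by QR, and feed the resulting triangular factor to preconditioned Conjugate Gradient on the normal equations $WW^{\mathsf{T}} y = r$ that arise inside an IPM when $n \gg m$. All names and the oversampling factor are illustrative assumptions.

```python
# Hedged sketch of randomized preconditioning for IPM normal equations
# (A D A^T) y = r with A of shape m x n, n >> m, and positive scaling d.
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import cg, LinearOperator

def preconditioned_normal_solve(A, d, r, oversample=4, seed=0):
    m, n = A.shape                      # n >> m
    W = A * np.sqrt(d)                  # W = A D^{1/2}, so W W^T = A D A^T
    rng = np.random.default_rng(seed)
    s = oversample * m
    S = rng.standard_normal((n, s)) / np.sqrt(s)   # Gaussian sketch
    G = W @ S                           # m x s, with G G^T ~ W W^T
    R = np.linalg.qr(G.T, mode="r")     # G^T = QR, hence G G^T = R^T R
    def apply_Minv(z):                  # (R^T R)^{-1} z via triangular solves
        return solve_triangular(R, solve_triangular(R.T, z, lower=True))
    M = LinearOperator((m, m), matvec=apply_Minv)
    y, info = cg(W @ W.T, r, M=M)       # preconditioned Conjugate Gradient
    return y, info

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 2000))
d = rng.uniform(0.1, 10.0, size=2000)   # IPM-style positive diagonal scaling
r = rng.standard_normal(20)
y, info = preconditioned_normal_solve(A, d, r)
print(info, np.linalg.norm((A * d) @ A.T @ y - r))  # small residual
```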

The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT cannot be solved in time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that fine-grained reductions implying even $\lambda^n$-hardness of these problems from SETH, for any $\lambda>1$, would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade open question). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).

We give algorithms for approximating the partition function of the ferromagnetic Potts model on $d$-regular expanding graphs. We require much weaker expansion than in previous works; for example, the expansion exhibited by the hypercube suffices. The main improvements come from a significantly sharper analysis of standard polymer models, using extremal graph theory and applications of Karger's algorithm to counting cuts, which may be of independent interest. It is #BIS-hard to approximate the partition function at low temperatures on bounded-degree graphs, so our algorithm can be seen as evidence that hard instances of #BIS are rare. We believe that these methods can shed more light on other important problems, such as sub-exponential algorithms for approximate counting problems.
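
For reference, the quantity being approximated is $Z = \sum_{\sigma} \exp\bigl(\beta \sum_{uv \in E} \mathbf{1}[\sigma_u = \sigma_v]\bigr)$ with $\beta > 0$; the brute-force Python sketch below just pins down this definition on a toy graph (the paper's algorithms avoid this exponential enumeration on expanders).

```python
# Brute-force partition function of the ferromagnetic q-state Potts model on
# a small graph: Z = sum over colourings of exp(beta * # monochromatic edges).
from itertools import product
from math import exp

def potts_partition_function(n, edges, q, beta):
    Z = 0.0
    for colouring in product(range(q), repeat=n):
        mono = sum(1 for u, v in edges if colouring[u] == colouring[v])
        Z += exp(beta * mono)
    return Z

# 4-cycle, q = 3 colours, inverse temperature beta = 1.
print(potts_partition_function(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 3, 1.0))
```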

The $h$-index is a metric used to measure the impact of a user in a publication setting, such as a member of a social network with many highly liked posts or a researcher in an academic domain with many highly cited publications. Specifically, the $h$-index of a user is the largest integer $h$ such that at least $h$ publications of the user have at least $h$ units of positive feedback. We design an algorithm that, given query access to the $n$ publications of a user and each publication's corresponding positive feedback number, outputs a $(1\pm \varepsilon)$-approximation of the $h$-index of this user with probability at least $1-\delta$ in time \[ O\!\left(\frac{n \cdot \ln{(1/\delta)}}{\varepsilon^2 \cdot h}\right), \] where $h$ is the actual $h$-index, which is unknown to the algorithm a priori. We then design a novel lower bound technique that allows us to prove that this bound is in fact asymptotically optimal for this problem in all parameters $n,h,\varepsilon,$ and $\delta$. Our work is among the first on sublinear-time algorithms to address obtaining asymptotically optimal bounds, especially in terms of the error and confidence parameters; as such, we focus on designing novel techniques for this task. In particular, our lower bound technique seems quite general -- to showcase this, we also use our approach to prove an asymptotically optimal lower bound for the problem of estimating the number of triangles in a graph in sublinear time, which is now also optimal in the error and confidence parameters. This result improves upon prior lower bounds of Eden, Levi, Ron, and Seshadhri (FOCS'15) for this problem, as well as multiple follow-ups that extended this lower bound to other subgraph counting problems.
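
A much-simplified sampling estimator conveys the flavor (it is not the paper's algorithm and does not achieve the optimal sample complexity above): for a guessed threshold $t$, estimate from uniform samples how many publications have at least $t$ units of feedback, and binary-search over $t$. All names are illustrative.

```python
# Simplified sampling-based h-index estimation, for intuition only.
import random

def estimate_count_at_least(citations, t, samples, rng):
    """Estimate |{i : citations[i] >= t}| from uniform random samples."""
    n = len(citations)
    hits = sum(1 for _ in range(samples) if citations[rng.randrange(n)] >= t)
    return hits * n / samples

def approx_h_index(citations, samples_per_guess=2000, seed=0):
    """Binary search for the largest h whose estimated count is >= h."""
    rng = random.Random(seed)
    lo, hi = 0, len(citations)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        est = estimate_count_at_least(citations, mid, samples_per_guess, rng)
        if est >= mid:
            lo = mid
        else:
            hi = mid - 1
    return lo

r = random.Random(42)
citations = [r.randrange(100) for _ in range(10000)]
ranked = sorted(citations, reverse=True)
exact = sum(1 for i, c in enumerate(ranked) if c >= i + 1)  # true h-index
print(exact, approx_h_index(citations))
```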

We have formalised Szemer\'edi's Regularity Lemma and Roth's Theorem on Arithmetic Progressions, two major results in extremal graph theory and additive combinatorics, using the proof assistant Isabelle/HOL. For the latter formalisation, we used the former to first show the Triangle Counting Lemma and the Triangle Removal Lemma: themselves important technical results. Here, in addition to showcasing the main formalised statements and definitions, we focus on sensitive points in the proofs, describing how we overcame the difficulties that we encountered.
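
For readers less familiar with the statements, the two headline results are, informally, as follows (the formalisations of course pin down every quantifier precisely):

```latex
% A pair (X, Y) of disjoint vertex sets is \epsilon-regular if all large
% sub-pairs have nearly the same edge density d(.,.):
\[
  \forall A \subseteq X,\ B \subseteq Y:\quad
  |A| \ge \epsilon |X| \ \wedge\ |B| \ge \epsilon |Y|
  \;\Longrightarrow\; \bigl| d(A,B) - d(X,Y) \bigr| \le \epsilon .
\]
% Szemer\'edi's Regularity Lemma: for every \epsilon > 0 there is an M such
% that every graph admits an equitable partition into at most M parts in
% which all but an \epsilon-fraction of the pairs are \epsilon-regular.
% Roth's Theorem: every dense enough subset of {1, ..., N} contains a
% 3-term arithmetic progression:
\[
  A \subseteq \{1,\dots,N\},\ |A| \ge \delta N,\ N \ge N_0(\delta)
  \;\Longrightarrow\; \exists\, a, d \ge 1:\ a,\ a+d,\ a+2d \in A .
\]
```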

The well-known complexity class NP contains many combinatorial problems whose optimization counterparts are important in practical settings. Those problems usually assume full knowledge of the input and optimize over this specific input. In practice, however, uncertainty in the input data is a common phenomenon, and it is normally not covered by the optimization versions of NP problems. One concept for modeling uncertainty in the input data is \textit{recoverable robustness}. In this setting, a solution for the input is computed such that a recovery to a good solution is guaranteed whenever the uncertainty manifests itself. That is, a solution $\texttt{s}_0$ for the base scenario $\textsf{S}_0$ as well as a solution \texttt{s} for every possible scenario of the scenario set \textsf{S} has to be calculated. In other words, not only is the solution $\texttt{s}_0$ for the instance $\textsf{S}_0$ calculated, but solutions \texttt{s} for all scenarios from \textsf{S} are prepared to correct possible errors introduced by the uncertainty. This paper introduces a specific class of recoverable robust problems: Hamming Distance Recoverable Robust Problems. Here, solutions $\texttt{s}_0$ and \texttt{s} have to be computed such that they differ in at most $\kappa$ elements; that is, one can recover from a harmful scenario by choosing a different solution that is not too far from the first one. The paper surveys the complexity of Hamming distance recoverable robust versions of optimization problems typically found in NP, for different types of scenario encodings. The complexity is primarily situated in the lower levels of the polynomial hierarchy. The main contribution of the paper is that recoverable robust problems with compression-encoded scenarios and $m \in \mathbb{N}$ recoveries are $\Sigma^P_{2m+1}$-complete.
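
The toy Python sketch below illustrates the definition with VERTEX COVER as the underlying NP problem and explicitly listed scenarios (the paper also treats succinct scenario encodings, which is where the higher levels of the polynomial hierarchy arise); all helper names are illustrative.

```python
# Toy brute force for Hamming distance recoverable robustness: pick a cover
# s0 for the base scenario such that every scenario graph admits a cover s
# with |s0 symmetric-difference s| <= kappa.
from itertools import chain, combinations

def covers(vertices, edges):
    """All vertex covers of the given edge set, as frozensets."""
    subsets = chain.from_iterable(
        combinations(vertices, r) for r in range(len(vertices) + 1))
    return [frozenset(s) for s in subsets
            if all(u in s or v in s for u, v in edges)]

def recoverable_robust_vc(vertices, base_edges, scenarios, kappa):
    """Return a smallest base cover that is kappa-recoverable everywhere."""
    scenario_covers = [covers(vertices, e) for e in scenarios]
    for s0 in sorted(covers(vertices, base_edges), key=len):
        if all(any(len(s0 ^ s) <= kappa for s in cs)
               for cs in scenario_covers):
            return s0
    return None

V = range(4)
base = [(0, 1), (1, 2)]
scenarios = [[(0, 1), (2, 3)], [(1, 2), (0, 3)]]
print(recoverable_robust_vc(V, base, scenarios, kappa=2))  # frozenset({1})
```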

Since deep neural networks were developed, they have made huge contributions to everyday life, and machine learning now provides advice in almost every aspect of daily life, often more systematically than humans can. However, despite this progress, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
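
As a minimal point of reference for the surveyed methods, the sketch below implements plain random search, the baseline against which Bayesian optimisation, bandit-based, and evolutionary HPO algorithms are typically compared; the search space and the validation objective are placeholder assumptions.

```python
# Minimal random-search HPO loop over a placeholder search space.
import math
import random

SEARCH_SPACE = {
    "learning_rate": lambda rng: 10 ** rng.uniform(-5, -1),  # log-uniform
    "hidden_units":  lambda rng: rng.choice([64, 128, 256, 512]),
    "dropout":       lambda rng: rng.uniform(0.0, 0.5),
}

def validation_score(config):
    """Stand-in for training a model and returning validation accuracy."""
    lr_penalty = abs(math.log10(config["learning_rate"]) + 3)  # best near 1e-3
    return 1.0 - 0.1 * lr_penalty - 0.2 * config["dropout"]

def random_search(trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: sample(rng) for name, sample in SEARCH_SPACE.items()}
        score = validation_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(random_search())
```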
