
For a graph $G$, a subset $S \subseteq V(G)$ is called a \emph{resolving set} if for any two distinct vertices $u,v \in V(G)$, there exists a vertex $w \in S$ such that $d(w,u) \neq d(w,v)$. The {\sc Metric Dimension} problem takes as input a graph $G$ and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. This problem was introduced in the 1970s and is known to be \NP-hard~[GT~61 in Garey and Johnson's book]. In the realm of parameterized complexity, Hartung and Nichterlein~[CCC~2013] proved that the problem is \W[2]-hard when parameterized by the natural parameter $k$. They also observed that it is \FPT\ when parameterized by the vertex cover number and asked about its complexity under \emph{smaller} parameters, in particular the feedback vertex set number. We answer this question by proving that {\sc Metric Dimension} is \W[1]-hard when parameterized by the combined parameter feedback vertex set number plus pathwidth. This also improves the result of Bonnet and Purohit~[IPEC 2019], which states that the problem is \W[1]-hard when parameterized by pathwidth alone. On the positive side, we show that {\sc Metric Dimension} is \FPT\ when parameterized by either the distance to cluster or the distance to co-cluster, both of which are smaller parameters than the vertex cover number.
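
As a concrete illustration of the definition (our own sketch, not part of the paper), the following snippet checks whether a set resolves a connected graph and finds a minimum resolving set by exhaustive search; the exponential search is feasible only for tiny graphs, consistent with the problem's \W[2]-hardness in the natural parameter $k$.

```python
from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    """Single-source distances in an unweighted graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving_set(adj, S):
    """S resolves G iff every vertex has a distinct vector of distances to S."""
    tables = [bfs_distances(adj, w) for w in S]
    vectors = set()
    for v in adj:
        vec = tuple(t.get(v) for t in tables)
        if vec in vectors:
            return False
        vectors.add(vec)
    return True

def metric_dimension_brute_force(adj):
    """Smallest resolving set, found by trying all subsets in order of size."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_resolving_set(adj, S):
                return set(S)

# A path on 4 vertices has metric dimension 1: either endpoint resolves it.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension_brute_force(path))  # {0}
```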

Related Content

A numerical procedure providing guaranteed two-sided bounds on the effective coefficients of elliptic partial differential operators is presented. The upper bounds are obtained in a standard manner through the variational formulation of the problem and by applying the finite element method. To obtain the lower bounds, we formulate the dual variational problem and introduce appropriate approximation spaces, again employing the finite element method. We deal with the 3D setting, which has rarely been considered in the literature so far. The theoretical justification of the procedure is presented and supported by illustrative examples.
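
For orientation, in the classical periodic homogenization setting (an assumption for this sketch; the paper's precise setting may differ), such two-sided bounds rest on a primal and a dual variational characterization of the effective matrix $A_{\mathrm{eff}}$ of the operator $-\nabla\cdot(A(y)\nabla u)$ over the periodicity cell $Y$:

```latex
% Primal form: any conforming FE subspace of H^1_#(Y) over-estimates the
% minimum, giving a guaranteed upper bound on xi^T A_eff xi.
\xi^{\top} A_{\mathrm{eff}}\, \xi
  \;=\; \min_{v \in H^1_{\#}(Y)} \int_Y (\xi + \nabla v)^{\top} A(y)\, (\xi + \nabla v)\, \mathrm{d}y .
% Dual form: minimizing the complementary energy over periodic divergence-free
% fluxes with mean xi over-estimates xi^T A_eff^{-1} xi, which translates into
% a guaranteed lower bound on the effective coefficient.
\xi^{\top} A_{\mathrm{eff}}^{-1}\, \xi
  \;=\; \min_{\substack{\sigma \in L^2_{\#}(Y;\mathbb{R}^3) \\ \nabla\cdot\sigma = 0,\ \int_Y \sigma\, \mathrm{d}y = \xi}}
        \int_Y \sigma^{\top} A(y)^{-1}\, \sigma\, \mathrm{d}y .
```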

The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a point $a$ and a polytope $K$ with $k$ vertices and unit diameter in $\mathbb{R}^d$ is at least $\delta$, where $\delta$ is a fixed constant in $(0,1)$, then a randomly chosen hyperplane separates $a$ and $K$ with probability at least $1/\mathrm{poly}(k)$ and margin at least $\Omega\left(\delta/\sqrt{d}\right)$. An immediate consequence of our result is the first near-optimal bound on the error increase in the reduction from a Separation oracle to an Optimization oracle over a polytope. RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the ``Hausdorff problem'', of learning a unit-diameter polytope $K$ within Hausdorff distance $\delta$, given an optimization oracle for $K$. Using RSH, we show that with polynomially many random queries to the optimization oracle, $K$ can be approximated within error $O(\delta)$. To our knowledge, this is the first provable algorithm for the Hausdorff problem. Building on this result, we show that if the vertices of $K$ are well-separated, then an optimization oracle can be used to generate a list of points, each within Hausdorff distance $O(\delta)$ of $K$, with the property that the list contains a point close to each vertex of $K$. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling and LDA, optimization oracles do exist, provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption.
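
The statement is easy to probe numerically. Below is a small experiment (our own sketch, with assumed ingredients: a Gaussian vertex cloud rescaled to unit diameter, a query point pushed distance $\delta$ beyond the polytope along a coordinate axis, and uniformly random unit directions; none of this is the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_separation(a, V, trials=20000):
    """Empirical probability that a uniformly random unit direction g yields a
    hyperplane separating a from conv(V), and the typical margin when it does."""
    d = a.shape[0]
    G = rng.standard_normal((trials, d))
    G /= np.linalg.norm(G, axis=1, keepdims=True)   # uniform directions on sphere
    margins = G @ a - (G @ V.T).max(axis=1)         # g.a - max_i g.v_i
    hits = margins > 0
    return hits.mean(), (np.median(margins[hits]) if hits.any() else 0.0)

# Polytope: k random vertices in R^d, rescaled to unit diameter.
d, k, delta = 25, 8, 0.2
V = rng.standard_normal((k, d))
diam = np.max(np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1))
V /= diam
# Place the query point delta beyond K along the first coordinate axis, so that
# dist(a, conv(V)) >= delta (every x in K has x_1 <= max_i v_{i,1}).
a = V[np.argmax(V[:, 0])].copy()
a[0] += delta
prob, med = estimate_separation(a, V)
print(f"P[separate] ~ {prob:.3f}, median margin ~ {med:.4f}, "
      f"delta/sqrt(d) = {delta / np.sqrt(d):.4f}")
```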

Graph neural networks (GNNs) are among the most powerful tools in deep learning. They routinely solve complex problems on unstructured networks, such as node classification, graph classification, and link prediction, with high accuracy. However, both inference and training of GNNs are complex, and they uniquely combine the features of irregular graph processing with dense and regular computations. This complexity makes it very challenging to execute GNNs efficiently on modern massively parallel architectures. To alleviate this, we first design a taxonomy of parallelism in GNNs, considering data and model parallelism, and different forms of pipelining. Then, we use this taxonomy to investigate the amount of parallelism in numerous GNN models, GNN-driven machine learning tasks, software frameworks, and hardware accelerators. We use the work-depth model, and we also assess communication volume and synchronization. We specifically focus on the sparsity/density of the associated tensors, in order to understand how to effectively apply techniques such as vectorization. We also formally analyze GNN pipelining, and we generalize the established Message-Passing class of GNN models to cover arbitrary pipeline depths, facilitating future optimizations. Finally, we investigate different forms of asynchronicity, paving the way for future asynchronous parallel GNN pipelines. The outcomes of our analysis are synthesized in a set of insights that help to maximize GNN performance, and a comprehensive list of challenges and opportunities for further research into efficient GNN computations. Our work will help to advance the design of future GNNs.
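
To make the "irregular plus regular" point concrete, here is a minimal dense message-passing layer (a sketch of the generic Message-Passing pattern, not any particular framework's API):

```python
import numpy as np

def message_passing_layer(A, X, W, activation=np.tanh):
    """One message-passing step: each vertex aggregates (here: sums) its
    neighbours' feature vectors, then applies a learned linear map.

    A : (n, n) adjacency matrix (sparse in practice; dense here for brevity)
    X : (n, f_in) node features
    W : (f_in, f_out) weight matrix
    """
    messages = A @ X                    # irregular, graph-dependent gather-reduce
    return activation(messages @ W)     # dense, regular matrix multiply

# The two multiplications illustrate the taxonomy's core tension: a sparse,
# graph-dependent reduction (A @ X) and dense tensor algebra (@ W) favour
# different parallelization and vectorization strategies.
rng = np.random.default_rng(1)
n, f_in, f_out = 6, 4, 3
A = (rng.random((n, n)) < 0.3).astype(float)
X = rng.standard_normal((n, f_in))
W = rng.standard_normal((f_in, f_out))
print(message_passing_layer(A, X, W).shape)  # (6, 3)
```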

Let ${\cal G}$ be a minor-closed graph class and let $G$ be an $n$-vertex graph. We say that $G$ is a $k$-apex of ${\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such that $G\setminus S$ belongs to ${\cal G}$. Our first result is an algorithm that decides whether $G$ is a $k$-apex of ${\cal G}$ in time $2^{{\sf poly}(k)}\cdot n^2$, where ${\sf poly}$ is a polynomial function depending on ${\cal G}$. This algorithm improves the previous one, given by Sau, Stamoulis, and Thilikos [ICALP 2020], whose running time was $2^{{\sf poly}(k)}\cdot n^3$. The elimination distance of $G$ to ${\cal G}$, denoted by ${\sf ed}_{\cal G}(G)$, is the minimum number of rounds required to reduce each connected component of $G$ to a graph in ${\cal G}$ by removing one vertex from each connected component in each round. Bulian and Dawar [Algorithmica 2017] provided an FPT-algorithm, with parameter $k$, to decide whether ${\sf ed}_{\cal G}(G)\leq k$; however, its dependence on $k$ is not explicit. We extend the techniques used in the first algorithm to decide whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{2^{2^{{\sf poly}(k)}}}\cdot n^2$. This is the first algorithm for this problem with an explicit parametric dependence on $k$. In the special case where ${\cal G}$ excludes some apex graph as a minor, we give two alternative algorithms, running in time $2^{2^{{\cal O}(k^2\log k)}}\cdot n^2$ and $2^{{\sf poly}(k)}\cdot n^3$ respectively, where the hidden constants and ${\sf poly}$ depend on ${\cal G}$. As a stepping stone for these algorithms, we provide an algorithm that decides whether ${\sf ed}_{\cal G}(G)\leq k$ in time $2^{{\cal O}({\sf tw}\cdot k+{\sf tw}\log{\sf tw})}\cdot n$, where ${\sf tw}$ is the treewidth of $G$. Finally, we provide explicit upper bounds on the size of the graphs in the minor-obstruction set of the class of graphs ${\cal E}_k({\cal G})=\{G\mid{\sf ed}_{\cal G}(G)\leq k\}$.
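
The elimination-distance definition unrolls into a simple exponential-time recursion, sketched below for intuition only; `in_class` stands for an assumed membership test for ${\cal G}$, and the paper's algorithms are of course far more sophisticated.

```python
import math
import networkx as nx

def elimination_distance(G, in_class):
    """Naive recursion straight from the definition; exponential time."""
    comps = [G.subgraph(c).copy() for c in nx.connected_components(G)]
    if all(in_class(c) for c in comps):
        return 0
    if len(comps) > 1:
        # One vertex may be removed from every component in the same round,
        # so the number of rounds is the maximum over components.
        return max(elimination_distance(c, in_class) for c in comps)
    # G is connected and outside the class: try every choice of deleted vertex.
    best = math.inf
    for v in list(G.nodes):
        H = G.copy()
        H.remove_node(v)
        best = min(best, elimination_distance(H, in_class))
    return 1 + best

# Example: the elimination distance of a triangle to the class of forests is 1.
print(elimination_distance(nx.complete_graph(3), nx.is_forest))  # 1
```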

Karger (STOC 1995) gave the first FPTAS for the network (un)reliability problem, setting in motion research over the next three decades that obtained increasingly faster running times, eventually leading to an $\tilde{O}(n^2)$-time algorithm (Karger, STOC 2020). This represented a natural culmination of this line of work, because the underlying techniques rely on enumerating (near-)minimum cuts, of which there can be $\Theta(n^2)$. In this paper, we go beyond this quadratic barrier and obtain a faster FPTAS for the network unreliability problem. Our algorithm runs in $m^{1+o(1)} + \tilde{O}(n^{1.5})$ time. Our main contribution is a new estimator for network unreliability in very reliable graphs. These graphs are usually the bottleneck for network unreliability since the disconnection event is rare and therefore hard to sample. Our estimator is obtained by defining an appropriate importance sampling subroutine on a dual spanning tree packing of the graph. To complement this estimator for very reliable graphs, we use recursive contraction for moderately reliable graphs. We show that an interleaving of sparsification and contraction can be used to obtain a better parametrization of the recursive contraction algorithm that yields a faster running time matching the one obtained for the very reliable case.
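
For context, the estimated quantity is the probability that the graph becomes disconnected when each edge fails independently with probability $p$. A naive Monte Carlo baseline (emphatically not the paper's estimator, which uses importance sampling on a dual spanning tree packing) needs on the order of $1/u$ samples when the unreliability $u$ is tiny, which is exactly why the very reliable regime is the bottleneck:

```python
import random
import networkx as nx

def naive_unreliability(G, p, samples=50000, seed=0):
    """Estimate P[G disconnected] when each edge fails independently with
    probability p, by direct simulation."""
    rng = random.Random(seed)
    edges = list(G.edges)
    disconnected = 0
    for _ in range(samples):
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        H.add_edges_from(e for e in edges if rng.random() > p)  # surviving edges
        disconnected += not nx.is_connected(H)
    return disconnected / samples

print(naive_unreliability(nx.complete_graph(5), p=0.4))
```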

We study a natural geometric variant of the classic Knapsack problem called 2D-Knapsack: we are given a set of axis-parallel rectangles and a rectangular bounding box, and the goal is to pack as many of these rectangles as possible inside the box without overlap. Naturally, this problem is NP-complete. Recently, Grandoni et al. [ESA'19] showed that it is also W[1]-hard when parameterized by the size $k$ of the sought packing, and they presented a parameterized approximation scheme (PAS) for the variant where we are allowed to rotate the rectangles by 90{\textdegree} before packing them into the box. Obtaining a PAS for the original 2D-Knapsack problem, without rotation, appears to be a challenging open question. In this work, we make progress towards this goal by showing a PAS under the following assumptions:
- both the box and all the input rectangles have integral, polynomially bounded side lengths;
- every input rectangle is wide, i.e., its width is greater than its height; and
- the aspect ratio of the box is bounded by a constant.
Our approximation scheme relies on a mix of parameterized and approximation techniques, including color coding, rounding, and searching for a structured near-optimal packing using dynamic programming.
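
As a sanity check on the problem statement: verifying a candidate packing is straightforward, and the hard part is the search. A minimal validity checker (our own illustration, not from the paper):

```python
def valid_packing(box, placed):
    """Check that axis-parallel rectangles at given positions fit inside the box
    and are pairwise non-overlapping (touching edges are allowed).

    box    : (W, H)
    placed : list of (x, y, w, h) with (x, y) the lower-left corner
    """
    W, H = box
    for x, y, w, h in placed:
        if x < 0 or y < 0 or x + w > W or y + h > H:
            return False            # sticks out of the bounding box
    for i, (x1, y1, w1, h1) in enumerate(placed):
        for x2, y2, w2, h2 in placed[i + 1:]:
            if x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1:
                return False        # open interiors intersect
    return True

print(valid_packing((10, 5), [(0, 0, 6, 3), (6, 0, 4, 5)]))  # True
print(valid_packing((10, 5), [(0, 0, 6, 3), (5, 0, 4, 5)]))  # False: overlap
```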

This paper is devoted to the statistical and numerical properties of the geometric median and its applications to the problem of robust mean estimation via the median-of-means principle. Our main theoretical results include (a) an upper bound for the distance between the mean and the median for general absolutely continuous distributions in R^d, and examples of specific classes of distributions for which these bounds do not depend on the ambient dimension d; and (b) exponential deviation inequalities for the distance between the sample and the population versions of the geometric median, which again depend only on trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median-of-means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, which is an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a "local quadratic growth" condition that allows one to translate suboptimality bounds for the objective function into the corresponding bounds for the numerical approximation to the median itself, and we propose a simple stopping rule, applicable to any optimization method, which yields explicit error guarantees. We conclude with numerical experiments, including an application to the estimation of mean log-returns for S&P 500 data.
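
The geometric median of points x_1, ..., x_n minimizes F(z) = sum_i ||z - x_i||. A standard numerical method is Weiszfeld's iteration, sketched below; the step-size-based stopping rule here is our own placeholder, whereas the paper's rule comes with explicit error guarantees via the local quadratic growth condition.

```python
import numpy as np

def geometric_median(X, tol=1e-8, max_iter=1000):
    """Weiszfeld's iteration for argmin_z sum_i ||z - x_i||_2.  X : (n, d)."""
    z = X.mean(axis=0)                  # start from the coordinate-wise mean
    for _ in range(max_iter):
        dists = np.linalg.norm(X - z, axis=1)
        if np.any(dists < 1e-12):       # iterate hit a data point, where the
            return z                    # update is undefined; stop for simplicity
        w = 1.0 / dists                 # weights ~ inverse distances
        z_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Robustness in action: one gross outlier barely moves the geometric median,
# while it drags the mean far away.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
X[0] = 1e6
print(np.linalg.norm(geometric_median(X)), np.linalg.norm(X.mean(axis=0)))
```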

Since their introduction in Abadie and Gardeazabal (2003), Synthetic Control (SC) methods have quickly become one of the leading methods for estimating causal effects in observational studies with panel data. Formal discussions often motivate SC methods by the assumption that the potential outcomes were generated by a factor model. Here we study SC methods from a design-based perspective, assuming a model for the selection of the treated unit(s) and period(s). We show that the standard SC estimator is generally biased under random assignment. We propose a Modified Unbiased Synthetic Control (MUSC) estimator that guarantees unbiasedness under random assignment and derive its exact, randomization-based, finite-sample variance. We also propose an unbiased estimator for this variance. We document in settings with real data that under random assignment, SC-type estimators can have root mean-squared errors that are substantially lower than those of other common estimators. We show that such an improvement is weakly guaranteed if the treated period is similar to the other periods, for example, if the treated period was randomly selected. While our results only directly apply in settings where treatment is assigned randomly, we believe that they can complement model-based approaches even for observational studies.
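
For readers new to SC: the baseline estimator picks nonnegative weights summing to one so that a weighted combination of control units tracks the treated unit before treatment, and that combination then serves as the counterfactual. A textbook sketch (not the authors' MUSC estimator or code):

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    """Standard synthetic-control weights: minimize the pre-treatment fit
    ||y1 - Y0 w||^2 over the simplex {w >= 0, sum w = 1}.

    Y0_pre : (T_pre, J) pre-treatment outcomes of the J control units
    y1_pre : (T_pre,)   pre-treatment outcomes of the treated unit
    """
    J = Y0_pre.shape[1]
    res = minimize(
        lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2),
        x0=np.full(J, 1.0 / J),
        bounds=[(0.0, 1.0)] * J,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x

# Synthetic example: the treated series is a noisy mixture of three controls.
rng = np.random.default_rng(0)
Y0 = rng.standard_normal((30, 5)).cumsum(axis=0)   # 5 control series
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1 = Y0 @ w_true + 0.01 * rng.standard_normal(30)
print(np.round(sc_weights(Y0, y1), 2))             # close to w_true
```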

The landscape of applications and subroutines relying on shortest path computations continues to grow steadily. This growth is driven by the undeniable success of shortest path algorithms in theory and practice. It also introduces new challenges, as the models become more complicated and assessing the optimality of paths becomes harder. Hence, multiple recent publications in the field adapt existing labeling methods in an ad-hoc fashion to their specific problem variant without considering the underlying general structure: they always deal with multi-criteria scenarios, and those criteria define different partial orders on the paths. In this paper, we introduce the partial order shortest path problem (POSP), a generalization of the multi-objective shortest path problem (MOSP) and, in turn, also of the classical shortest path problem. POSP captures the particular structure of many shortest path applications as special cases. In this generality, we study optimality conditions, or the lack thereof, depending on the objective functions' properties. Our final contribution is a comprehensive lookup table summarizing our findings and providing the reader with an easy way to choose among the most recent multicriteria shortest path algorithms depending on their problem's weight structure. Examples range from time-dependent shortest path and bottleneck path problems to the fuzzy shortest path problem and complex financial weight functions studied in the public transportation community. Our results hold for general digraphs and therefore surpass previous generalizations that were limited to acyclic graphs.
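
To ground the labeling view: in MOSP, a labeling algorithm keeps at each node only the cost vectors that are non-dominated in the componentwise partial order; POSP replaces this fixed order with an arbitrary partial order on paths. A sketch of the classical dominance filter (our illustration, not the paper's algorithm):

```python
def dominates(a, b):
    """Componentwise partial order on cost vectors: a is no worse everywhere
    and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(labels):
    """Keep only non-dominated cost vectors.  In POSP, `dominates` would be
    replaced by an arbitrary partial order on paths; duplicates are kept."""
    return [a for a in labels
            if not any(dominates(b, a) for b in labels if b != a)]

labels = [(3, 5), (4, 4), (2, 7), (4, 5)]
print(pareto_filter(labels))   # [(3, 5), (4, 4), (2, 7)] -- (4, 5) is dominated
```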

Given a graph $G = (V, E)$, a non-empty set $S \subseteq V$ is a defensive alliance if, for every vertex $v \in S$, at least half of its closed neighbourhood lies in $S$, that is, $|N_G[v] \cap S| \geq |N_G[v] \setminus S|$. The decision version of the problem is known to be NP-complete even when restricted to split and bipartite graphs. The problem is \textit{fixed-parameter tractable} for the parameters solution size, vertex cover number and neighbourhood diversity, and W[1]-hard for the parameters treewidth and feedback vertex set number.

In this paper, we study the defensive alliance problem on graphs of bounded degree. We show that the problem is \textit{polynomial-time solvable} on graphs with maximum degree at most 5 and NP-complete on graphs with maximum degree 6. This rules out fixed-parameter tractability for the parameter maximum degree of the graph. We also consider the problem from the standpoint of parameterized complexity: we provide an FPT algorithm, based on Integer Linear Programming, for the parameter distance to clique, and we answer an open question posed in \cite{AG2} by giving an FPT algorithm for the parameter twin cover.
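
The defining condition is easy to check directly; a short sketch (our own illustration, using networkx):

```python
import networkx as nx

def is_defensive_alliance(G, S):
    """S is a defensive alliance if it is non-empty and every v in S has at
    least as many closed neighbours inside S as outside:
    |N[v] & S| >= |N[v] \\ S|."""
    S = set(S)
    for v in S:
        closed = set(G[v]) | {v}        # closed neighbourhood N[v]
        if len(closed & S) < len(closed - S):
            return False
    return bool(S)

C5 = nx.cycle_graph(5)
print(is_defensive_alliance(C5, {0}))     # False: 1 inside vs 2 outside
print(is_defensive_alliance(C5, {0, 1}))  # True: each vertex has 2 inside, 1 outside
```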
