
A graph $G$ is $k$-out-connected from its node $s$ if it contains $k$ internally disjoint $sv$-paths for every node $v$; $G$ is $k$-connected if it is $k$-out-connected from every node. In connectivity augmentation problems the goal is to augment a graph $G_0=(V,E_0)$ by a minimum-cost edge set $J$ such that $G_0 \cup J$ has higher connectivity than $G_0$. In the $k$-Out-Connectivity Augmentation ($k$-OCA) problem, $G_0$ is $(k-1)$-out-connected from $s$ and $G_0 \cup J$ should be $k$-out-connected from $s$; in the $k$-Connectivity Augmentation ($k$-CA) problem, $G_0$ is $(k-1)$-connected and $G_0 \cup J$ should be $k$-connected. The parameterized complexity status of these problems was open even for $k=3$ and unit costs. We show that $k$-OCA and $3$-CA can be solved in time $9^p \cdot n^{O(1)}$, where $p$ is the size of an optimal solution. Our paper is the first to show fixed-parameter tractability of a $k$-node-connectivity augmentation problem for high values of $k$. We also consider the $(2,k)$-Connectivity Augmentation problem, where $G_0$ is $(k-1)$-edge-connected and $G_0 \cup J$ should be both $k$-edge-connected and $2$-connected. We show that this problem can also be solved in time $9^p \cdot n^{O(1)}$, and for unit costs can be approximated within a factor of $1.892$.
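
To make the definition concrete: by Menger's theorem, $G$ is $k$-out-connected from $s$ exactly when the local node connectivity $\kappa(s,v)$ is at least $k$ for every other node $v$. A minimal illustrative check using networkx (ours, not part of the paper):

```python
# Illustrative check of k-out-connectivity from s (not from the paper):
# by Menger's theorem, G is k-out-connected from s iff every other node v
# is joined to s by at least k internally disjoint paths, i.e. the local
# node connectivity kappa(s, v) is >= k for all v != s.
import networkx as nx
from networkx.algorithms.connectivity import local_node_connectivity

def is_k_out_connected(G, s, k):
    return all(local_node_connectivity(G, s, v) >= k
               for v in G.nodes if v != s)

G = nx.cycle_graph(5)               # a cycle gives 2 disjoint paths per pair
print(is_k_out_connected(G, 0, 2))  # True
print(is_k_out_connected(G, 0, 3))  # False
```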

Related content

Model Predictive Control (MPC) is a well-established approach for solving infinite horizon optimal control problems. Since optimization over an infinite time horizon is generally infeasible, MPC determines a suboptimal feedback control by repeatedly solving finite time optimal control problems. Although MPC has been successfully used in many applications, applying it to large-scale systems -- arising, e.g., through the discretization of partial differential equations -- requires the solution of high-dimensional optimal control problems and thus entails immense computational effort. We consider systems governed by parametrized parabolic partial differential equations and employ the reduced basis (RB) method as a low-dimensional surrogate model for the finite time optimal control problem. The reduced order optimal control serves as feedback control for the original large-scale system. We analyze the proposed RB-MPC approach by first developing a posteriori error bounds for the errors in the optimal control and associated cost functional. These bounds can be evaluated efficiently in an offline-online computational procedure and allow us to guarantee asymptotic stability of the closed-loop system using the RB-MPC approach in several practical scenarios. We also propose an adaptive strategy to choose the prediction horizon of the finite time optimal control problem. Numerical results are presented to illustrate the theoretical properties of our approach.
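
To illustrate the receding-horizon principle the abstract describes (repeatedly solving a finite time problem and applying only the first control input), here is a minimal linear-quadratic toy in numpy. This is a generic MPC loop under illustrative dynamics, not the RB-MPC scheme of the paper:

```python
# Minimal receding-horizon (MPC) loop for a discrete-time linear system
# x_{t+1} = A x_t + B u_t with quadratic stage costs; a hypothetical toy
# example, not the paper's RB-MPC method.
import numpy as np

def finite_horizon_gain(A, B, Q, R, N):
    """Backward Riccati recursion over horizon N; returns the time-0 gain."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

x = np.array([[1.0], [0.0]])
for t in range(50):                        # MPC loop: re-solve, apply 1st input
    K = finite_horizon_gain(A, B, Q, R, N=20)
    u = -K @ x                             # suboptimal feedback control
    x = A @ x + B @ u
print(np.linalg.norm(x))                   # state driven toward the origin
```

For time-invariant dynamics the gain is identical at every step, so re-solving is redundant here; it is kept inside the loop purely to mirror the MPC structure.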

For the well-known Survivable Network Design Problem (SNDP) we are given an undirected graph $G$ with edge costs, a set $R$ of terminal vertices, and an integer demand $d_{s,t}$ for every terminal pair $s,t\in R$. The task is to compute a subgraph $H$ of $G$ of minimum cost, such that there are at least $d_{s,t}$ disjoint paths between $s$ and $t$ in $H$. If the paths are required to be edge-disjoint we obtain the edge-connectivity variant (EC-SNDP), while internally vertex-disjoint paths result in the vertex-connectivity variant (VC-SNDP). Another important case is the element-connectivity variant (LC-SNDP), where the paths are disjoint on edges and non-terminals. In this work we shed light on the parameterized complexity of the above problems. We consider several natural parameters, which include the solution size $\ell$, the sum of demands $D$, the number of terminals $k$, and the maximum demand $d_\max$. Using simple, elegant arguments, we prove the following results.

- We give a complete picture of the parameterized tractability of the three variants w.r.t. parameter $\ell$: both EC-SNDP and LC-SNDP are FPT, while VC-SNDP is W[1]-hard.
- We identify some special cases of VC-SNDP that are FPT:
  * when $d_\max\leq 3$, for parameter $\ell$,
  * on locally bounded treewidth graphs (e.g., planar graphs), for parameter $\ell$, and
  * on graphs of treewidth $tw$, for parameter $tw+D$.
- The well-known Directed Steiner Tree (DST) problem can be seen as single-source EC-SNDP with $d_\max=1$ on directed graphs, and is FPT parameterized by $k$ [Dreyfus & Wagner 1971]. We show that, in contrast, the 2-DST problem, where $d_\max=2$, is W[1]-hard, even when parameterized by $\ell$.
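
As a concrete companion to the two main variants, the sketch below (ours, not from the paper) checks whether a candidate subgraph $H$ meets all demands, using max-flow-based connectivity routines from networkx:

```python
# Illustrative feasibility check for SNDP (not from the paper): a subgraph
# H satisfies demand d(s,t) in the edge-connectivity variant (EC-SNDP) if
# it has d(s,t) edge-disjoint s-t paths, and in the vertex-connectivity
# variant (VC-SNDP) if it has d(s,t) internally vertex-disjoint s-t paths.
import networkx as nx

def satisfies_demands(H, demands, variant="EC"):
    connectivity = (nx.edge_connectivity if variant == "EC"
                    else nx.node_connectivity)
    return all(connectivity(H, s, t) >= d for (s, t), d in demands.items())

H = nx.cycle_graph(4)                           # 2 disjoint paths per pair
print(satisfies_demands(H, {(0, 2): 2}, "EC"))  # True
print(satisfies_demands(H, {(0, 2): 3}, "VC"))  # False
```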

This paper considers a convex composite optimization problem with affine constraints, which includes problems that take the form of minimizing a smooth convex objective function over the intersection of (simple) convex sets, or regularized with multiple (simple) functions. Motivated by high-dimensional applications in which exact projection/proximal computations are not tractable, we propose a \textit{projection-free} augmented Lagrangian-based method, in which primal updates are carried out using a \textit{weak proximal oracle} (WPO). In an earlier work, the WPO was shown to be more powerful than the standard \textit{linear minimization oracle} (LMO) that underlies conditional gradient-based methods (aka Frank-Wolfe methods). Moreover, the WPO is computationally tractable for many high-dimensional problems of interest, including those motivated by the recovery of low-rank matrices and tensors, and optimization over polytopes that admit efficient LMOs. The main result of this paper shows that under a certain curvature assumption (which is weaker than strong convexity), our WPO-based algorithm achieves an ergodic convergence rate of $O(1/T)$ for both the objective residual and the feasibility gap. This result, to the best of our knowledge, improves upon the $O(1/\sqrt{T})$ rate of existing LMO-based projection-free methods for this class of problems. Empirical experiments on a low-rank and sparse covariance matrix estimation task and the Max Cut semidefinite relaxation demonstrate the superiority of our method over state-of-the-art LMO-based Lagrangian methods.
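
For readers unfamiliar with the oracle terminology: the LMO against which the WPO is compared minimizes a linear function over the feasible set, and for the nuclear-norm ball (the prototypical low-rank matrix setting) it needs only the top singular pair. A small illustrative sketch of that standard LMO, not the paper's method:

```python
# Illustration of a linear minimization oracle (LMO), the primitive behind
# conditional-gradient (Frank-Wolfe) methods, for the nuclear-norm ball:
#   argmin_{||X||_* <= r} <G, X>  =  -r * u1 v1^T  (top singular pair of G).
# Hypothetical example, not the paper's WPO.
import numpy as np

def lmo_nuclear_ball(G, radius=1.0):
    U, _, Vt = np.linalg.svd(G)
    return -radius * np.outer(U[:, 0], Vt[0, :])

G = np.random.randn(5, 4)          # stand-in for a gradient matrix
X = lmo_nuclear_ball(G)
print(np.sum(np.linalg.svd(X, compute_uv=False)))  # nuclear norm == 1.0
print(np.sum(G * X))               # linear objective value (negative)
```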

In the vertex connectivity problem, given an undirected $n$-vertex $m$-edge graph $G$, we need to compute the minimum number of vertices whose removal disconnects $G$. This problem is one of the most well-studied graph problems. Since 2019, a new line of work [Nanongkai et al.~STOC'19; SODA'20; STOC'21] has used randomized techniques to break the quadratic-time barrier and, very recently, culminated in an almost-linear time algorithm via the recently announced maxflow algorithm by Chen et al. In contrast, all known deterministic algorithms are much slower. The fastest algorithm [Gabow FOCS'00] takes $O(m(n+\min\{c^{5/2},cn^{3/4}\}))$ time, where $c$ is the vertex connectivity. It remains open whether there exists a subquadratic-time deterministic algorithm for any constant $c>3$. In this paper, we give the first deterministic almost-linear time vertex connectivity algorithm for all constants $c$. Our algorithm runs in $m^{1+o(1)}2^{O(c^{2})}$ time, which is almost-linear for all $c=o(\sqrt{\log n})$. This is the first deterministic algorithm that breaks the $O(n^{2})$-time bound on sparse graphs where $m=O(n)$, a bound that has stood for more than 50 years [Kleitman'69]. Towards our result, we give a new reduction framework to vertex expanders, which in turn exploits our new almost-linear time construction of a mimicking network for vertex connectivity. The previous construction by Kratsch and Wahlstr\"{o}m [FOCS'12] requires large polynomial time and is randomized.
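
For background, local vertex connectivity reduces to max-flow via the classic vertex-splitting construction. The deterministic algorithms discussed above are far more sophisticated, but the textbook sketch below (not the paper's algorithm) shows what quantity is being computed:

```python
# Classic vertex-splitting reduction (textbook technique, not the paper's
# algorithm): for non-adjacent s, t, kappa(s, t) equals the max s-t flow
# after splitting each vertex v into v_in -> v_out with a unit-capacity arc.
import networkx as nx

def local_vertex_connectivity(G, s, t):
    D = nx.DiGraph()
    for v in G.nodes:
        D.add_edge((v, "in"), (v, "out"),
                   capacity=1 if v not in (s, t) else float("inf"))
    for u, v in G.edges:                      # undirected edge -> two arcs
        D.add_edge((u, "out"), (v, "in"), capacity=float("inf"))
        D.add_edge((v, "out"), (u, "in"), capacity=float("inf"))
    return nx.maximum_flow_value(D, (s, "out"), (t, "in"))

G = nx.petersen_graph()                       # 3-regular, 3-connected
print(local_vertex_connectivity(G, 0, 7))     # 3
```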

The current best approximation algorithms for $k$-median rely on first obtaining a structured fractional solution known as a bi-point solution, and then rounding it to an integer solution. We improve this second step by unifying and refining previous approaches. We describe a hierarchy of increasingly complex partitioning schemes for the facilities, along with corresponding sets of algorithms and factor-revealing non-linear programs. We prove that the third layer of this hierarchy yields a $2.613$-approximation, improving upon the current best ratio of $2.675$, while no layer can be proved better than $2.588$ under the proposed analysis. On the negative side, we give a family of bi-point solutions which cannot be approximated better than the square root of the golden ratio, even if allowed to open $k+o(k)$ facilities. This gives a barrier to current approaches for obtaining an approximation better than $2 \sqrt{\phi} \approx 2.544$. Altogether we reduce the approximation gap of bi-point solutions by two thirds.
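
For context, a bi-point solution is the standard object here: a convex combination of two integral solutions whose facility counts bracket $k$ (a textbook definition, stated in our notation, not a contribution of this paper). The numerics of the stated barrier constant:

```latex
% Standard definition of a bi-point solution (background, our notation):
\[
  x \;=\; a\,F_1 + b\,F_2, \qquad a + b = 1,\ a,b \ge 0,
\]
% where F_1 and F_2 are integral solutions opening k_1 <= k <= k_2
% facilities, respectively, with a k_1 + b k_2 = k.
% The golden-ratio barrier constant evaluates to:
\[
  \phi \;=\; \tfrac{1+\sqrt{5}}{2} \approx 1.618, \qquad
  \sqrt{\phi} \approx 1.272, \qquad 2\sqrt{\phi} \approx 2.544.
\]
```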

In this paper, we derive the limit of experiments for one-parameter Ising models on dense regular graphs. In particular, we show that the limiting experiment is Gaussian in the low temperature regime, non-Gaussian in the critical regime, and an infinite collection of Gaussians in the high temperature regime. We also derive the limiting distributions of the maximum likelihood and maximum pseudo-likelihood estimators, and study limiting power for tests of hypotheses against contiguous alternatives (whose scaling changes across the regimes). To the best of our knowledge, this is the first attempt at establishing the classical limits of experiments for Ising models (and, more generally, Markov random fields).
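
For reference, a one-parameter Ising model on a graph typically takes the following standard form (our notation; the paper's exact normalization may differ):

```latex
% Standard one-parameter Ising model on a d_n-regular graph G_n with
% adjacency matrix A_n (our notation; the paper's scaling may differ):
\[
  \mathbb{P}_{\beta}(\sigma) \;=\; \frac{1}{Z_n(\beta)}\,
  \exp\!\Big(\frac{\beta}{2 d_n}\,\sigma^{\top} A_n\, \sigma\Big),
  \qquad \sigma \in \{-1,+1\}^n,
\]
% where beta is the single (inverse-temperature) parameter and Z_n(beta)
% is the normalizing constant (partition function).
```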

We study a variant of classical clustering formulations in the context of algorithmic fairness, known as diversity-aware clustering. In this variant we are given a collection of facility subsets, and a solution must contain at least a specified number of facilities from each subset while simultaneously minimizing the clustering objective ($k$-median or $k$-means). We investigate the fixed-parameter tractability of these problems and show several negative hardness and inapproximability results, even when we allow exponential running time with respect to some parameters. Motivated by these results we identify natural parameters of the problem, and present fixed-parameter approximation algorithms with approximation ratios $\big(1 + \frac{2}{e} +\epsilon \big)$ and $\big(1 + \frac{8}{e}+ \epsilon \big)$ for diversity-aware $k$-median and diversity-aware $k$-means respectively, and argue that these ratios are essentially tight assuming the Gap Exponential Time Hypothesis (Gap-ETH). We also present a simple and more practical bicriteria approximation algorithm with better running time bounds. Finally, we propose efficient and practical heuristics, and evaluate their scalability and effectiveness in a wide variety of rigorously conducted experiments, on both real and synthetic data.
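
To make the diversity constraints concrete, here is a toy feasibility check (ours, purely illustrative): a set of $k$ opened facilities must intersect each given group in at least the required number of facilities.

```python
# Toy feasibility check for diversity-aware clustering constraints
# (illustration only, not from the paper): a chosen facility set S must
# contain at least r_i facilities from each group G_i, with |S| = k.
def is_diverse(S, groups, lower_bounds, k):
    return len(S) == k and all(len(S & G) >= r
                               for G, r in zip(groups, lower_bounds))

groups = [{0, 1, 2}, {2, 3, 4}]
print(is_diverse({0, 2, 4}, groups, [1, 2], k=3))  # True
print(is_diverse({0, 1, 3}, groups, [2, 2], k=3))  # False: only 1 from G_2
```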

In this paper, we prove that it is W[2]-hard to approximate k-SetCover within any constant ratio. Our proof builds upon the recently developed threshold graph composition technique. We propose a strong notion of threshold graphs and use a new composition method to prove this result. Our technique can also be applied to rule out polynomial-time $o\left(\frac{\log n}{\log \log n}\right)$-ratio approximation algorithms for the non-parameterized k-SetCover problem with $k$ as small as $O\left(\left(\frac{\log n}{\log \log n}\right)^3\right)$, assuming W[1]$\neq$FPT. We highlight that our proof does not depend on the well-known PCP theorem, and involves only simple combinatorial objects.

We show sublinear-time algorithms for Max Cut and Max E2Lin$(q)$ on expanders in the adjacency list model that distinguish instances with optimal value more than $1-\varepsilon$ from those with optimal value less than $1-\rho$ for $\rho \gg \varepsilon$. The time complexities for Max Cut and Max E2Lin$(q)$ are $\widetilde{O}(\frac{1}{\phi^2\rho} \cdot m^{1/2+O(\varepsilon/(\phi^2\rho))})$ and $\widetilde{O}(\mathrm{poly}(\frac{q}{\phi\rho})\cdot {(mq)}^{1/2+O(q^6\varepsilon/\phi^2\rho^2)})$, respectively, where $m$ is the number of edges in the underlying graph and $\phi$ is its conductance. We then show a sublinear-time algorithm for Unique Label Cover on expanders with $\phi \gg \varepsilon$ in the bounded-degree model. The time complexity of our algorithm is $\widetilde{O}_d(2^{q^{O(1)}\cdot\phi^{1/q}\cdot \varepsilon^{-1/2}}\cdot n^{1/2+q^{O(q)}\cdot \varepsilon^{4^{1.5-q}}\cdot \phi^{-2}})$, where $n$ is the number of variables. We complement these algorithmic results by showing that testing $3$-colorability requires $\Omega(n)$ queries even on expanders.

Feature selection plays a vital role in improving classifier performance. However, current methods often fail to account for complex interactions among the selected features. To remove these hidden negative interactions, we propose a GA-like dynamic probability (GADP) method with mutual information, which has a two-layer structure. The first layer applies a mutual information criterion to obtain a primary feature subset. The second layer, the GA-like dynamic probability algorithm, mines further supportive features from these candidates. The GA-like method is a population-based algorithm, so its working mechanism is similar to that of a genetic algorithm (GA). Unlike prior work, which typically improves the GA's operators to enhance search ability and reduce convergence time, we abandon the GA's operators altogether and instead use a dynamic probability, derived from the performance of each chromosome, to determine which features are selected in the next generation. This dynamic probability mechanism significantly reduces the number of parameters relative to a GA, making the method easy to use. Since each gene's probability is independent, chromosome diversity in GADP is greater than in a traditional GA, which gives GADP a wider search space and lets it select relevant features more effectively and accurately. To verify the method's merits, we evaluate it under multiple conditions on 15 datasets. The results demonstrate that the proposed method generally achieves the best accuracy. We also compare the proposed model with popular heuristic methods such as PSO, FPA, and WOA, over which it retains an advantage.
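
A toy sketch of the core update as we read it (a hypothetical implementation; the fitness function, constants, and names are ours, not the paper's): each gene's inclusion probability for the next generation is derived from the fitness of the chromosomes that currently include it, and genes are then sampled independently:

```python
# Toy sketch of a GA-like dynamic-probability update (our reading of the
# idea; the fitness function and constants are illustrative, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
n_features, pop_size = 10, 20
pop = rng.random((pop_size, n_features)) < 0.5      # boolean chromosomes

def fitness(chrom):
    # Stand-in for classifier accuracy on the selected feature subset.
    return chrom[:5].sum() - 0.2 * chrom[5:].sum()   # favors features 0..4

for generation in range(30):
    f = np.array([fitness(c) for c in pop])
    w = (f - f.min()) / (np.ptp(f) + 1e-12)          # fitness weights in [0, 1]
    # Per-gene selection probability: mean fitness weight of the
    # chromosomes that include the gene (independent across genes).
    p = (w[:, None] * pop).sum(0) / np.maximum(pop.sum(0), 1)
    pop = rng.random((pop_size, n_features)) < p     # sample next generation

print(p.round(2))   # probabilities concentrate on the informative genes
```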
