
We describe a new algorithm for vertex cover with runtime $O^*(1.25298^k)$, where $k$ is the size of the desired solution and $O^*$ hides polynomial factors in the input size. This improves on the previous best runtime of $O^*(1.2738^k)$, due to Chen, Kanj, and Xia (2010), which had stood for more than a decade. The key to our algorithm is a potential function that simultaneously tracks $k$ as well as the optimal value $\lambda$ of the vertex cover LP relaxation. This approach also allows us to make use of prior algorithms for Maximum Independent Set in bounded-degree graphs and for Above-Guarantee Vertex Cover. The main step in the algorithm is to branch on high-degree vertices, while ensuring that both $k$ and $\mu = k - \lambda$ decrease at each step. There can be local obstructions in the graph that prevent $\mu$ from decreasing in this process; we develop a number of novel branching steps to handle these situations.
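
To make the branching skeleton concrete, here is a minimal Python sketch of classical degree-based branching on a vertex-cover budget $k$, using a symmetric dict-of-sets adjacency representation. It is only the textbook skeleton; the paper's algorithm additionally maintains the LP optimum $\lambda$ and the potential $\mu = k - \lambda$ and uses extra branching rules, which this sketch omits.

```python
def remove(adj, vs):
    """Return a copy of the graph with the vertex set vs deleted."""
    return {u: {w for w in nbrs if w not in vs}
            for u, nbrs in adj.items() if u not in vs}

def vertex_cover_branch(adj, k):
    """Decide whether the graph (dict: vertex -> set of neighbours, symmetric)
    has a vertex cover of size at most k, by branching on a maximum-degree
    vertex.  Textbook skeleton only; the paper's algorithm also tracks the LP
    optimum lambda and the potential mu = k - lambda."""
    if k < 0:
        return False
    # Isolated vertices never need to be in a cover; drop them.
    adj = {v: set(nbrs) for v, nbrs in adj.items() if nbrs}
    if not adj:
        return True                      # no edges left to cover
    if k == 0:
        return False                     # edges remain but the budget is exhausted
    v = max(adj, key=lambda u: len(adj[u]))
    if len(adj[v]) == 1:
        # All remaining degrees are 1 (a matching): cover v's edge by its
        # unique neighbour and recurse.
        (u,) = adj[v]
        return vertex_cover_branch(remove(adj, {u}), k - 1)
    # Branch 1: v is in the cover.
    if vertex_cover_branch(remove(adj, {v}), k - 1):
        return True
    # Branch 2: v is not in the cover, so all of its neighbours are.
    return vertex_cover_branch(remove(adj, set(adj[v])), k - len(adj[v]))

# Triangle {1,2,3} plus a pendant vertex 4: the optimum cover has size 2.
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(vertex_cover_branch(g, 2))   # True
print(vertex_cover_branch(g, 1))   # False
```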

Related Content

Optimal sampling-based motion planning and trajectory optimization are two competing frameworks for generating optimal motion plans. The two frameworks have complementary properties: sampling-based planners are typically slow to converge but provide optimality guarantees, whereas trajectory optimizers are typically fast to converge but do not provide global optimality guarantees in nonconvex problems, e.g., scenarios with obstacles. To achieve the best of both worlds, we introduce a new planner, BITKOMO, which integrates the asymptotically optimal Batch Informed Trees (BIT*) planner with the K-Order Markov Optimization (KOMO) trajectory optimization framework. Our planner is anytime and maintains the same asymptotic optimality guarantees provided by BIT*, while also exploiting the fast convergence of the KOMO trajectory optimizer. We experimentally evaluate our planner on manipulation scenarios that involve high-dimensional configuration spaces, with up to two 7-DoF manipulators, obstacles, and narrow passages. BITKOMO performs better than KOMO by succeeding even when KOMO fails, and it outperforms BIT* in terms of convergence to the optimal solution.
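
As a rough illustration of how a sampling-based planner and a trajectory optimizer can be combined in an anytime loop, the sketch below keeps the best path seen so far and uses every feasible sampled path to warm-start a local optimizer. The callables `sample_path`, `optimize_path`, and `path_cost` are hypothetical placeholders standing in for BIT* batches, KOMO refinement, and the cost functional; this is a schematic control loop, not the BITKOMO implementation.

```python
import math
import random
import time

def anytime_plan(sample_path, optimize_path, path_cost, time_budget=1.0):
    """Schematic anytime loop combining global sampling with local refinement.

    sample_path()        -- hypothetical: returns a feasible path or None
    optimize_path(path)  -- hypothetical: locally refines a path
    path_cost(path)      -- hypothetical: cost of a path
    Every feasible sampled path warm-starts the local optimizer, and the best
    path found so far is kept, so solution quality never regresses.
    """
    best, best_cost = None, math.inf
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        path = sample_path()               # global exploration (BIT*-like role)
        if path is None:
            continue
        refined = optimize_path(path)      # local refinement (KOMO-like role)
        for candidate in (path, refined):
            cost = path_cost(candidate)
            if cost < best_cost:           # anytime: keep the incumbent
                best, best_cost = candidate, cost
    return best, best_cost

# Toy demo: a "path" is a list of 1-D waypoints from 0 to 1.
rnd = random.Random(0)
sampler = lambda: [0.0, rnd.uniform(-1.0, 2.0), 1.0]       # possibly a detour
optimizer = lambda p: [p[0], 0.5 * (p[0] + p[-1]), p[-1]]   # straighten it
cost = lambda p: sum(abs(b - a) for a, b in zip(p, p[1:]))
print(anytime_plan(sampler, optimizer, cost, time_budget=0.05))
```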

In this comment, we present a simple alternative derivation of the IRW-FCM algorithm presented in "Iteratively Re-weighted Algorithm for Fuzzy c-Means" for the fuzzy c-means problem. We show that the iterative steps derived for the IRW-FCM algorithm are precisely the steps of the popular majorization-minimization (MM) algorithm. The derivation presented in this note is much simpler and more direct and, unlike the derivation of IRW-FCM, does not involve the introduction of any auxiliary variables. Moreover, by identifying the steps of IRW-FCM with the MM algorithm, the inner loop of the IRW-FCM algorithm can be eliminated and the algorithm can effectively be run as a "single loop" algorithm. More precisely, the new MM-based derivation shows that a single inner-loop iteration of IRW-FCM is sufficient to decrease the fuzzy c-means objective function, which speeds up the IRW-FCM algorithm.
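
For reference, the single-loop structure in question is essentially the standard alternating fuzzy c-means update: one membership update followed by one centroid update per outer iteration. The NumPy sketch below shows that generic update; it is an illustrative baseline under the usual FCM objective, not the IRW-FCM code from the commented paper.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Standard fuzzy c-means with the usual alternating updates.

    X: (n, d) data, c: number of clusters, m > 1: fuzzifier.  Each outer
    iteration performs exactly one membership update and one centroid update,
    i.e. the 'single loop' structure that the MM view justifies for IRW-FCM
    (generic baseline, not the IRW-FCM code from the paper).
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    for _ in range(n_iter):
        # Squared distances between every point and every center: shape (n, c).
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps
        # Membership update: u_ij proportional to d_ij^(-2/(m-1)), rows sum to 1.
        w = d2 ** (-1.0 / (m - 1.0))
        U = w / w.sum(axis=1, keepdims=True)
        # Centroid update: weighted mean of the data with weights u_ij^m.
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return centers, U

# Toy usage on two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(centers, 2))
```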

We provide a differentially private algorithm for producing synthetic data simultaneously useful for multiple tasks: marginal queries and multitask machine learning (ML). A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches, which require numerical features to first be converted into high-cardinality categorical features via a binning strategy. Higher binning granularity is required for better accuracy, but this negatively impacts scalability. Eliminating the need for binning allows us to produce synthetic data that preserves large numbers of statistical queries, such as marginals on numerical features and class-conditional linear threshold queries. Preserving the latter means that the fraction of points of each class label above a particular half-space is roughly the same in both the real and synthetic data. This is exactly the property needed to train a linear classifier in a multitask setting. Our algorithm also allows us to produce high-quality synthetic data for mixed marginal queries that combine both categorical and numerical features. Our method consistently runs 2-5x faster than the best comparable techniques, and provides significant accuracy improvements in both marginal queries and linear prediction tasks for mixed-type datasets.
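
As a small illustration of the statistic being preserved, the snippet below evaluates a class-conditional linear threshold query, i.e. the fraction of points of a given class lying above a half-space. The function name and toy data are my own, and the differentially private generation mechanism itself is not reproduced here.

```python
import numpy as np

def class_threshold_fraction(X, y, w, b, label):
    """Fraction of points with class `label` lying in the half-space w.x >= b.

    This is the class-conditional linear threshold statistic that the synthetic
    data is meant to preserve; matching it between real and synthetic data for
    many (w, b, label) triples is what makes the synthetic data usable for
    linear classification.  (Illustrative query evaluation only.)
    """
    mask = (y == label)
    if mask.sum() == 0:
        return 0.0
    return float((X[mask] @ w >= b).mean())

# Toy example with random data and one query.
rng = np.random.default_rng(0)
X_real = rng.normal(size=(1000, 3))
y_real = rng.integers(0, 2, size=1000)
w, b = np.array([1.0, -0.5, 0.2]), 0.1
print(class_threshold_fraction(X_real, y_real, w, b, label=1))
```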

Calibration of multi-camera systems, i.e., determining the relative poses between the cameras, is a prerequisite for many tasks in computer vision and robotics. Camera calibration is typically achieved using offline methods based on checkerboard calibration targets. These methods, however, are often cumbersome and time-consuming, especially since a new calibration is required each time any camera pose changes. In this work, we propose a novel, marker-free online method for the extrinsic calibration of multiple smart edge sensors, relying solely on 2D human keypoint detections that are computed locally on the sensor boards from RGB camera images. Our method assumes the intrinsic camera parameters to be known and requires priming with a rough initial estimate of the camera poses. The person keypoint detections from multiple views are received at a central backend, where they are synchronized, filtered, and assigned to person hypotheses. We use these person hypotheses to repeatedly solve optimization problems in the form of factor graphs. Given suitable observations of one or multiple persons traversing the scene, the estimated camera poses converge towards a coherent extrinsic calibration within a few minutes. We evaluate our approach in real-world settings and show that calibration with our method achieves lower reprojection errors than a reference calibration generated by an offline method using a traditional calibration target.
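
To give a sense of what such a factor graph optimizes, the sketch below implements a generic pinhole reprojection residual for one camera-pose/keypoint factor (axis-angle pose, known intrinsics). The function name and parameterization are illustrative assumptions, not the paper's backend.

```python
import numpy as np

def reprojection_residual(pose, K, P_world, p_detected):
    """Residual of one camera-pose / person-keypoint factor.

    pose: 6-vector (axis-angle rotation, translation) of the camera,
    K: 3x3 intrinsics (assumed known), P_world: 3-D keypoint of the person
    hypothesis, p_detected: observed 2-D keypoint in pixels.  A generic
    pinhole reprojection factor of the kind such a factor graph stacks and
    minimizes; an illustrative sketch, not the paper's backend.
    """
    rvec, t = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta                                # Rodrigues' formula
        Kx = np.array([[0, -k[2], k[1]],
                       [k[2], 0, -k[0]],
                       [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)
    P_cam = R @ P_world + t                             # world -> camera frame
    uvw = K @ P_cam                                     # pinhole projection
    return uvw[:2] / uvw[2] - p_detected                # pixel-space residual
```

Stacking many such residuals over all synchronized keypoint observations and minimizing them with a nonlinear least-squares solver refines the camera poses starting from the rough initial estimate.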

In decentralized optimization environments, each agent $i$ in a network of $n$ nodes has its own private function $f_i$, and nodes communicate with their neighbors to cooperatively minimize the aggregate objective $\sum_{i=1}^n f_i$. In this setting, synchronizing the nodes' updates incurs significant communication overhead and computational costs, so much of the recent literature has focused on the analysis and design of asynchronous optimization algorithms, where agents activate and communicate at arbitrary times without needing a global synchronization enforcer. However, most works assume that when a node activates, it selects the neighbor to contact according to a fixed probability distribution (e.g., uniformly at random), a choice that ignores the optimization landscape at the moment of activation. In this work, we instead introduce an optimization-aware selection rule that chooses the neighbor providing the highest dual cost improvement (a quantity related to a consensus-based dualization of the problem). This scheme is related to the coordinate descent (CD) method with the Gauss-Southwell (GS) rule for coordinate updates; in our setting, however, only a subset of coordinates is accessible at each iteration (because each node can communicate only with its neighbors), so the existing literature on GS methods does not apply. To overcome this difficulty, we develop a new analytical framework for smooth and strongly convex $f_i$ that covers the class of set-wise CD algorithms (a class that directly applies to decentralized scenarios but is not limited to them), and we show that the proposed set-wise GS rule achieves a speedup factor of up to the maximum degree in the network, which is on the order of $\Theta(n)$ for highly connected graphs. The speedup predicted by our analysis is validated in numerical experiments with synthetic data.
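
The selection rule can be sketched generically as coordinate descent where, at each activation, the Gauss-Southwell rule is applied only within a randomly drawn set of accessible coordinates. The snippet below shows that set-wise GS step for a smooth objective with known smoothness constant `L`; the coordinate sets, step size, and toy quadratic are illustrative assumptions, not the dual decentralized algorithm from the paper.

```python
import numpy as np

def setwise_gs_step(x, grad, coord_sets, L, rng):
    """One set-wise Gauss-Southwell coordinate-descent step.

    A random set of accessible coordinates is drawn (playing the role of the
    coordinates an activated node can touch), the coordinate with the largest
    gradient magnitude within that set is selected, and a 1/L step is taken on
    it.  Sketch of the selection rule on a generic smooth objective.
    """
    S = coord_sets[rng.integers(len(coord_sets))]   # accessible coordinates
    g = grad(x)
    j = S[np.argmax(np.abs(g[S]))]                  # Gauss-Southwell within S
    x = x.copy()
    x[j] -= g[j] / L                                # coordinate gradient step
    return x

# Toy usage: minimize 0.5 * x^T A x with three overlapping coordinate sets.
A = np.diag([1.0, 2.0, 4.0, 8.0])
grad = lambda x: A @ x
sets = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]
rng = np.random.default_rng(0)
x = np.ones(4)
for _ in range(300):
    x = setwise_gs_step(x, grad, sets, L=8.0, rng=rng)
print(np.round(x, 4))   # approaches the minimizer (the zero vector)
```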

Given a graph $G=(V,E)$, the Graph Burning problem is to find a sequence of nodes from $V$, called a burning sequence, that burns the whole graph. Burning is a discrete-step process: in each step, an unburned vertex is selected and marked as burnt, and every node burnt in earlier steps spreads the fire to its neighbors. The goal is to find a burning sequence of minimum length. The Graph Burning problem is NP-hard for general graphs, and even for binary trees. A few approximation results are known, including a $3$-approximation algorithm for general graphs and a $2$-approximation algorithm for trees. In this paper, we propose an approximation algorithm for trees that produces a burning sequence of length at most $\lfloor 1.75 b(T) \rfloor + 1$, where $b(T)$ is the length of an optimal burning sequence, also called the burning number of the tree $T$. In other words, we achieve an approximation factor of $(\lfloor 1.75 b(T) \rfloor + 1)/b(T)$.
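
For concreteness, the helper below simulates the burning process and checks whether a given sequence burns an entire graph; it is a small verification utility for the process described above, not the approximation algorithm itself.

```python
def burns_graph(adj, sequence):
    """Check whether a burning sequence burns every node of the graph.

    adj: dict node -> iterable of neighbours.  In each step, the fire first
    spreads from all previously burnt nodes to their neighbours, and then the
    next source in the sequence is ignited.
    """
    burnt = set()
    for source in sequence:
        spread = {v for u in burnt for v in adj[u]}   # fire spreads one hop
        burnt |= spread
        burnt.add(source)                             # new source is ignited
    return burnt == set(adj)

# Path on 4 nodes: the burning number is 2 (ignite node 1, then node 3).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(burns_graph(path, [1, 3]))   # True
print(burns_graph(path, [1]))      # False
```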

Dental disease is one of the most common chronic diseases despite being largely preventable. However, professional advice on optimal oral hygiene practices is often forgotten or abandoned by patients. Therefore, patients may benefit from timely and personalized encouragement to engage in oral self-care behaviors. In this paper, we develop an online reinforcement learning (RL) algorithm for optimizing the delivery of mobile-based prompts to encourage oral hygiene behaviors. One of the main challenges in developing such an algorithm is ensuring that the algorithm considers the impact of current actions on the effectiveness of future actions (i.e., delayed effects), especially when the algorithm has been kept simple so that it runs stably and autonomously in a constrained, real-world setting (i.e., with highly noisy, sparse data). We address this challenge by designing a quality reward that maximizes the desired health outcome (i.e., high-quality brushing) while minimizing user burden. We also describe a procedure for optimizing the hyperparameters of the reward by building a simulation environment test bed and evaluating candidates with it. The RL algorithm discussed in this paper will be deployed in Oralytics, an oral self-care app that provides behavioral strategies to boost patient engagement in oral hygiene practices.
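
As a toy illustration of a reward with this "health outcome minus user burden" shape, the function below rewards observed brushing quality and penalizes prompts sent when many prompts were already delivered recently. The functional form, weights, and threshold are hypothetical placeholders; the actual Oralytics reward and its hyperparameters are chosen via the simulation test bed described above.

```python
def quality_reward(brushing_quality, prompt_sent, recent_prompt_count,
                   burden_weight=0.5, burden_threshold=3):
    """Toy reward of the 'health outcome minus user burden' shape.

    brushing_quality: observed proxy for high-quality brushing,
    prompt_sent: 1 if a prompt was delivered in this decision window,
    recent_prompt_count: prompts delivered over a recent window.
    The weights, threshold, and functional form are hypothetical placeholders.
    """
    burden = burden_weight * prompt_sent * max(0, recent_prompt_count - burden_threshold)
    return brushing_quality - burden

# Same brushing outcome, but prompting after many recent prompts is penalized,
# which discourages over-prompting and accounts for delayed effects.
print(quality_reward(120, prompt_sent=1, recent_prompt_count=2))   # 120
print(quality_reward(120, prompt_sent=1, recent_prompt_count=6))   # 118.5
```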

A graph $G$ is $k$-out-connected from a node $s$ if it contains $k$ internally disjoint $sv$-paths to every node $v$; $G$ is $k$-connected if it is $k$-out-connected from every node. In connectivity augmentation problems, the goal is to augment a graph $G_0=(V,E_0)$ by a minimum-cost edge set $J$ such that $G_0 \cup J$ has higher connectivity than $G_0$. In the $k$-Out-Connectivity Augmentation ($k$-OCA) problem, $G_0$ is $(k-1)$-out-connected from $s$ and $G_0 \cup J$ should be $k$-out-connected from $s$; in the $k$-Connectivity Augmentation ($k$-CA) problem, $G_0$ is $(k-1)$-connected and $G_0 \cup J$ should be $k$-connected. The parameterized complexity status of these problems was open even for $k=3$ and unit costs. We show that $k$-OCA and $3$-CA can be solved in time $9^p \cdot n^{O(1)}$, where $p$ is the size of an optimal solution. Ours is the first paper to show fixed-parameter tractability of a $k$-node-connectivity augmentation problem for high values of $k$. We also consider the $(2,k)$-Connectivity Augmentation problem, in which $G_0$ is $(k-1)$-edge-connected and $G_0 \cup J$ should be both $k$-edge-connected and $2$-connected. We show that this problem can be solved in time $9^p \cdot n^{O(1)}$ and, for unit costs, approximated within $1.892$.

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer either from a high computational cost that grows exponentially with the number of GCN layers, or from a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this dataset, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). For training a 4-layer GCN on this dataset, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at https://github.com/google-research/google-research/tree/master/cluster_gcn.
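
The subgraph-restricted training step can be sketched in a few lines: sample one cluster, normalize its induced adjacency block, propagate features only within that block, and take an SGD step. The dense single-layer NumPy version below is a toy sketch of this idea under a squared-error loss, not the authors' implementation.

```python
import numpy as np

def cluster_gcn_epoch(A, X, Y, clusters, W, lr=0.05):
    """Illustrative Cluster-GCN style epoch with one linear GCN layer and a
    squared-error loss.

    A: dense adjacency (n, n), X: features (n, d), Y: targets (n, d_out),
    clusters: list of node-index arrays from a graph clustering algorithm,
    W: (d, d_out) weight matrix.  Every step touches only one cluster's
    induced subgraph, which is what keeps memory and computation bounded.
    """
    for c in np.random.permutation(len(clusters)):
        nodes = clusters[c]
        A_sub = A[np.ix_(nodes, nodes)] + np.eye(len(nodes))   # restrict + self-loops
        deg = A_sub.sum(axis=1)
        A_norm = A_sub / np.sqrt(np.outer(deg, deg))           # symmetric normalization
        Z = A_norm @ X[nodes]                                  # propagate within the block
        H = Z @ W                                              # one linear GCN layer
        grad = Z.T @ (H - Y[nodes]) / len(nodes)               # gradient of 0.5*MSE
        W = W - lr * grad                                      # SGD step on this cluster
    return W
```

A real implementation would use sparse operations, several nonlinear layers, and a classification loss, but restricting every update to a single cluster's subgraph is the ingredient that avoids the memory blow-up.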

We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy in scenarios where objects of varied sizes appear in high-resolution images. Detection proceeds in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at a higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrian dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
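
The control flow can be sketched as: run the detector on a down-sampled image, rank candidate regions by predicted accuracy gain, and re-run the detector on the top regions at full resolution. In the sketch below, `detect` and `gain` are hypothetical callables standing in for the detector and the R-net, and a greedy top-`budget` selection replaces the learned Q-net, so this is only an illustration of the pipeline.

```python
def downsample(img, factor=4):
    """Cheap decimation used for the coarse first pass (img is a 2-D array)."""
    return img[::factor, ::factor]

def crop(img, region):
    """region = (y0, y1, x0, x1) in full-resolution pixel coordinates."""
    y0, y1, x0, x1 = region
    return img[y0:y1, x0:x1]

def coarse_to_fine_detect(img, detect, regions, gain, budget=3):
    """Schematic coarse-to-fine detection control flow.

    detect(img)          -- hypothetical detector returning a list of detections
    gain(region, coarse) -- hypothetical stand-in for the R-net: predicted
                            accuracy gain of re-analysing `region` at full
                            resolution, given the coarse detections
    """
    coarse = detect(downsample(img))                      # cheap low-resolution pass
    ranked = sorted(regions, key=lambda r: gain(r, coarse), reverse=True)
    detections = list(coarse)
    for region in ranked[:budget]:                        # zoom into promising regions
        detections.extend(detect(crop(img, region)))
    return detections
```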
