
The All-Pairs Shortest Paths (APSP) problem is one of the fundamental problems in theoretical computer science. It asks to compute the distance matrix of a given $n$-vertex graph. We revisit the classical problem of maintaining the distance matrix in the \emph{fully dynamic} setting, where the graph undergoes vertex insertions and deletions, with a fast \emph{worst-case} update time and efficient space usage. Although an algorithm with amortized $\tilde O(n^2)$ update time has been known for nearly two decades [Demetrescu and Italiano, STOC 2003], the best known algorithm with worst-case update time and efficient space usage is due to [Gutenberg and Wulff-Nilsen, SODA 2020], which improves the space usage of the earlier algorithm of [Abraham, Chechik, and Krinninger, SODA 2017] to $\tilde O(n^2)$ but does not improve their update time of $\tilde O(n^{2 + 2/3})$. It has been conjectured that no algorithm with $O(n^{2.5 - \epsilon})$ worst-case update time exists. For graphs without negative cycles, we meet this conjectured lower bound by introducing a Monte Carlo algorithm with randomized $\tilde O(n^{2.5})$ worst-case update time while keeping the $\tilde O(n^2)$ space bound of the previous algorithm. Our improvement is made possible by a novel multi-layer approach that exploits gaps in the hop counts (the number of vertices traversed) of paths.
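
For context, the static version of the problem can be solved by the classical Floyd-Warshall algorithm; the minimal Python sketch below computes the distance matrix in $O(n^3)$ time under the assumption that the graph has no negative cycles. It only illustrates the static baseline, not the dynamic algorithm described above.

```python
import math

def floyd_warshall(n, edges):
    """Static APSP baseline: distance matrix of an n-vertex weighted digraph.

    `edges` is a list of (u, v, w) triples with 0-based vertices.
    This is the classical O(n^3) algorithm, not the dynamic algorithm
    described above; it assumes the graph has no negative cycles.
    """
    dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# toy usage: negative edge allowed, no negative cycle
print(floyd_warshall(3, [(0, 1, 2), (1, 2, -1), (0, 2, 5)]))  # dist[0][2] == 1
```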

Related content

This symposium focuses on efficient algorithms and data structures for discrete problems. In addition to the design of such methods and structures, its scope includes their use and performance analysis, as well as mathematical problems related to their development or limitations. Performance analysis may be analytical or experimental, and may address worst-case or expected-case behavior. Studies may be theoretical or based on data sets arising in practice, and may address methodological issues involved in performance analysis.
Keywords: Controller · FAST · Robot · Constrained optimization
July 27, 2023

Trajectory optimization is a powerful tool for robot motion planning and control. State-of-the-art general-purpose nonlinear programming solvers are versatile, handle constraints effectively, and provide high numerical robustness, but they are slow because they do not fully exploit the structure of the optimal control problem at hand. Existing structure-exploiting solvers are fast, but they often lack techniques to deal with nonlinearity or rely on penalty methods to enforce (equality or inequality) path constraints. This work presents Fatrop: a trajectory optimization solver that is fast and benefits from the salient features of general-purpose nonlinear optimization solvers. The speed-up is mainly achieved through the integration of a specialized linear solver, based on a Riccati recursion that is generalized to also support stagewise equality constraints. To demonstrate the algorithm's potential, it is benchmarked on a set of robot problems that are challenging from a numerical perspective, including problems with a minimum-time objective and no-collision constraints. The solver is shown to solve trajectory generation problems for a quadrotor, a robot manipulator, and a truck-trailer system in a few tens of milliseconds. The algorithm's C++ implementation accompanies this work as open source software, released under the GNU Lesser General Public License (LGPL). This software framework may encourage and enable the robotics community to use trajectory optimization in more challenging applications.
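
For readers unfamiliar with Riccati-based linear solvers, the sketch below shows the textbook backward Riccati recursion for an unconstrained discrete-time LQR problem in Python with NumPy. It only illustrates the recursion that Fatrop's linear solver generalizes; the stagewise equality-constraint handling and the actual Fatrop C++ API are not shown, and the function names here are hypothetical.

```python
import numpy as np

def lqr_backward_riccati(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for a discrete-time LQR problem.

    Dynamics x_{k+1} = A x_k + B u_k, stage cost x'Qx + u'Ru, terminal cost x'Qf x.
    Returns feedback gains K_k such that u_k = -K_k @ x_k is optimal.
    This is the textbook unconstrained recursion; Fatrop generalizes a
    Riccati-style factorization to also handle stagewise equality constraints.
    """
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered k = 0 .. N-1

# toy double-integrator example on a horizon of 50 steps
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
gains = lqr_backward_riccati(A, B, np.eye(2), 0.1 * np.eye(1), np.eye(2), N=50)
x = np.array([1.0, 0.0])
for K in gains:
    x = A @ x + B @ (-K @ x)
print(x)  # state driven toward the origin
```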

The Weighted Path Order (WPO) of Yamada is a powerful technique for proving termination. It is also supported by CeTA, a certifier for checking untrusted termination proofs. More precisely, CeTA contains a verified function that decides, for two terms, whether one of them is larger than the other with respect to a given WPO, i.e., a WPO in which all parameters have been fixed. The problem with this verified function is its worst-case exponential runtime. Therefore, in this work we develop a polynomial-time implementation of WPO based on memoization. It also improves upon an earlier verified implementation of the Recursive Path Order (RPO): the RPO implementation uses full terms as keys for the memory, a design that simplified the soundness proofs but incurs some runtime overhead. In this work, keys are just numbers, so that lookups in the memory are faster. Although trivial on paper, this change introduces some challenges for the verification task.
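
The Python sketch below illustrates the general idea of numeric memoization keys via hash-consing: every distinct term is interned to an integer id, and a memoized comparison is keyed on pairs of ids rather than on full terms. The comparison itself is a toy size-based order, not WPO, and the code is entirely unrelated to the verified Isabelle/CeTA development.

```python
from functools import lru_cache

# Hash-consing: every distinct term gets a unique integer id, so memo keys
# are pairs of ints rather than full terms.  Toy illustration only.
_term_ids = {}
_terms = []

def intern(fun, args=()):
    """Return the unique integer id of the term fun(args...)."""
    key = (fun, tuple(args))
    if key not in _term_ids:
        _term_ids[key] = len(_terms)
        _terms.append(key)
    return _term_ids[key]

@lru_cache(maxsize=None)
def size_cmp(s_id, t_id):
    """Toy stand-in for a term order: compare term sizes.

    Keys are integer ids, so memo lookups are cheap hash-table probes.
    """
    def size(i):
        fun, args = _terms[i]
        return 1 + sum(size(a) for a in args)
    return size(s_id) - size(t_id)

x = intern("x")
fx = intern("f", [x])
ffx = intern("f", [fx])
print(size_cmp(ffx, fx) > 0)  # True: f(f(x)) is larger than f(x)
```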

We consider the following problem, which we call the Shortest Two Disjoint Paths problem: given an undirected graph $G=(V,E)$ with edge weights $w:E \rightarrow \mathbb{R}$ and two terminals $s$ and $t$ in $G$, find two internally vertex-disjoint paths between $s$ and $t$ with minimum total weight. As shown recently by Schlotter and Seb\H{o} (2022), this problem becomes NP-hard if edges can have negative weights, even if the weight function is conservative, i.e., there are no cycles in $G$ with negative weight. We propose a polynomial-time algorithm that solves the Shortest Two Disjoint Paths problem for conservative weights in the case when the negative-weight edges form a single tree in $G$.

We study the reverse shortest path problem on disk graphs in the plane. In this problem we consider the proximity graph of a set of $n$ disks in the plane of arbitrary radii: in this graph, two disks are connected if the distance between them is at most some threshold parameter $r$. Intersection graphs correspond to the special case $r=0$. We give an algorithm that, given a target length $k$, computes the smallest value of $r$ for which there is a path of length at most $k$ between some given pair of disks in the proximity graph. Our algorithm runs in $O^*(n^{5/4})$ randomized expected time, which improves to $O^*(n^{6/5})$ for unit disk graphs, where all the disks have the same radius. Our technique is robust and can be applied to many variants of the problem. One significant variant is the case of weighted proximity graphs, where edges are assigned real weights equal to the distance between the disks or between their centers, and $k$ is replaced by a target weight $w$; that is, we seek a path whose length is at most $w$. In other variants, we want to optimize a parameter different from $r$, such as a scale factor of the radii of the disks. The main technique for the decision version of the problem (determining whether the graph with a given $r$ has the desired property) is based on efficient implementations of BFS (for the unweighted case) and of Dijkstra's algorithm (for the weighted case), using efficient data structures for maintaining the bichromatic closest pair for certain bicliques and several distance functions. The optimization problem is then solved by combining the resulting decision procedure with enhanced variants of the interval shrinking and bifurcation technique of [4].
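
To make the problem statement concrete, the naive Python sketch below implements the unweighted decision procedure by brute force (building the proximity graph explicitly and running BFS) and binary-searches over the candidate pairwise disk distances. It costs roughly $O(n^2)$ per probe and is only an illustration, not the $O^*(n^{5/4})$ algorithm described above.

```python
import math
from collections import deque
from itertools import combinations

def disk_gap(d1, d2):
    """Distance between two disks (x, y, radius): 0 if they intersect."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return max(0.0, math.hypot(x1 - x2, y1 - y2) - r1 - r2)

def hops(disks, r, s, t):
    """BFS hop distance from disk s to disk t in the proximity graph with threshold r."""
    n = len(disks)
    adj = [[j for j in range(n) if j != i and disk_gap(disks[i], disks[j]) <= r]
           for i in range(n)]
    dist = [None] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist[t]

def smallest_radius(disks, s, t, k):
    """Smallest threshold r admitting an s-t path of at most k hops (naive sketch)."""
    candidates = sorted({0.0} | {disk_gap(a, b) for a, b in combinations(disks, 2)})
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        d = hops(disks, candidates[mid], s, t)
        if d is not None and d <= k:
            best, hi = candidates[mid], mid - 1
        else:
            lo = mid + 1
    return best

disks = [(0, 0, 1), (3, 0, 1), (6, 0, 1), (9, 0, 1)]
print(smallest_radius(disks, 0, 3, k=3))  # 1.0: consecutive disks are 1 apart
```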

Gradient Descent (GD) is a powerful workhorse of modern machine learning thanks to its scalability and efficiency in high-dimensional spaces. Its ability to find local minimisers is only guaranteed for losses with Lipschitz gradients, where it can be seen as a `bona fide' discretisation of an underlying gradient flow. Yet, many ML setups involving overparametrised models do not fall into this problem class, which has motivated research beyond the so-called ``Edge of Stability'' (EoS), where the step size exceeds the admissibility threshold, which is inversely proportional to the Lipschitz constant above. Perhaps surprisingly, GD has been empirically observed to still converge despite local instability and oscillatory behavior. The incipient theoretical analysis of this phenomenon has mainly focused on the overparametrised regime, where the effect of choosing a large learning rate may be associated with a `Sharpness-Minimisation' implicit regularisation within the manifold of minimisers, under appropriate asymptotic limits. In contrast, in this work we directly examine the conditions for such unstable convergence, focusing on simple yet representative learning problems, via an analysis of two-step gradient updates. Specifically, we characterize a local condition involving third-order derivatives that guarantees the existence of, and convergence to, fixed points of the two-step updates, and we leverage this property in a teacher-student setting under the population loss. Finally, starting from Matrix Factorization, we provide observations of period-2 orbits of GD in high-dimensional settings with intuition about their dynamics, along with exploration of more general settings.
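
A minimal one-dimensional toy (our own illustration, not the paper's teacher-student setting) shows this unstable-yet-convergent behavior: for the loss $f(x)=\sqrt{1+x^2}$ the curvature at the minimiser is $1$, so the classical stability threshold is $\eta < 2$; yet with $\eta = 2.5$ GD does not diverge and instead settles into an attracting fixed point of the two-step map, i.e., a period-2 orbit at $x = \pm\sqrt{\eta^2/4 - 1}$.

```python
import numpy as np

# Toy beyond-Edge-of-Stability run: f(x) = sqrt(1 + x^2), f''(0) = 1,
# step size eta = 2.5 > 2 is unstable at the minimiser x = 0, but GD
# converges to a period-2 orbit at +/- sqrt(eta^2/4 - 1) = +/- 0.75.
def grad(x):
    return x / np.sqrt(1.0 + x ** 2)

eta = 2.5
x = 0.3
for _ in range(200):
    x = x - eta * grad(x)

print(x, x - eta * grad(x))          # roughly +0.75 and -0.75: the period-2 orbit
print(np.sqrt(eta ** 2 / 4 - 1.0))   # predicted orbit amplitude 0.75
```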

We study the consistent $k$-center clustering problem. In this problem, the goal is to maintain a constant factor approximate $k$-center solution during a sequence of $n$ point insertions and deletions while minimizing the recourse, i.e., the number of changes made to the set of centers after each point insertion or deletion. Previous works by Lattanzi and Vassilvitskii [ICML '12] and Fichtenberger, Lattanzi, Norouzi-Fard, and Svensson [SODA '21] showed that in the incremental setting, where deletions are not allowed, one can obtain $k \cdot \textrm{polylog}(n) / n$ amortized recourse for both $k$-center and $k$-median, and demonstrated a matching lower bound. However, no algorithm for the fully dynamic setting achieves less than the trivial $O(k)$ changes per update, which can be obtained by simply reclustering the full dataset after every update. In this work, we give the first algorithm for consistent $k$-center clustering in the fully dynamic setting, i.e., when both point insertions and deletions are allowed, that improves upon the trivial $O(k)$ recourse bound. Specifically, our algorithm maintains a constant factor approximate solution while ensuring worst-case constant recourse per update, which is optimal in the fully dynamic setting. Moreover, our algorithm is deterministic and is therefore correct even if an adaptive adversary chooses the insertions and deletions.
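
As background for what a "constant factor approximate $k$-center solution" means, the sketch below implements the classical static farthest-point (Gonzalez-style) 2-approximation in Python; the dynamic, constant-recourse algorithm described above is far more involved and is not shown here.

```python
import math

def gonzalez_k_center(points, k):
    """Classical static 2-approximation for k-center (farthest-point traversal).

    Repeatedly picks the point farthest from the current centers.
    Background illustration only; not the dynamic algorithm described above.
    """
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 12)]
print(gonzalez_k_center(pts, k=3))  # 3 centers covering all points within radius 1
```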

The Gromov--Hausdorff distance measures the difference in shape between compact metric spaces and poses a notoriously difficult problem in combinatorial optimization. We introduce its quadratic relaxation over a convex polytope whose solutions provably deliver the Gromov--Hausdorff distance. The optimality guarantee is enabled by the fact that the search space of our approach is not constrained to a generalization of bijections, unlike in other relaxations such as the Gromov--Wasserstein distance. We suggest the Frank--Wolfe algorithm with $O(n^3)$-time iterations for solving the relaxation and numerically demonstrate its performance on metric spaces of hundreds of points. In particular, we obtain a new upper bound of the Gromov--Hausdorff distance between the unit circle and the unit hemisphere equipped with Euclidean metric. Our approach is implemented as a Python package dGH.
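
The sketch below illustrates only the Frank-Wolfe template (a linear minimization oracle over the feasible polytope followed by a convex step toward its solution) on a toy quadratic over the probability simplex; the actual polytope, objective, and API of the dGH package are different and not reproduced here.

```python
import numpy as np

def frank_wolfe_simplex(Q, c, iters=200):
    """Frank-Wolfe for min_x 0.5 x'Qx + c'x over the probability simplex.

    Toy illustration of the algorithmic template: at each iteration the
    linear minimization oracle returns a vertex of the polytope (here a
    unit vector), and the iterate moves toward it with step 2/(t+2).
    """
    n = len(c)
    x = np.full(n, 1.0 / n)                 # feasible start
    for t in range(iters):
        grad = Q @ x + c
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0            # LMO over the simplex picks a vertex
        gamma = 2.0 / (t + 2.0)             # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-1.0, -1.0])
print(frank_wolfe_simplex(Q, c))            # approaches (1/3, 2/3)
```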

The selection of Gaussian kernel parameters plays an important role in applications of support vector classification (SVC). A commonly used method is k-fold cross validation with grid search (CV), which is extremely time-consuming because it needs to train a large number of SVC models. In this paper, a new approach is proposed to train SVC and optimize the selection of Gaussian kernel parameters. We first formulate the training and parameter selection of SVC as a minimax optimization problem named MaxMin-L2-SVC-NCH, in which the inner minimization finds the closest points between two normal convex hulls (L2-SVC-NCH), while the outer maximization searches for the optimal Gaussian kernel parameters. A lower time complexity can be expected for MaxMin-L2-SVC-NCH because CV is not needed. We then propose a projected gradient algorithm (PGA) for training L2-SVC-NCH. The well-known sequential minimal optimization (SMO) algorithm is a special case of the PGA, so the PGA can provide more flexibility than the SMO. Furthermore, the maximization problem is solved by a gradient ascent algorithm with a dynamic learning rate. Comparative experiments between MaxMin-L2-SVC-NCH and the previous best approaches on public datasets show that MaxMin-L2-SVC-NCH greatly reduces the number of models to be trained while maintaining competitive test accuracy. These findings indicate that MaxMin-L2-SVC-NCH is a better choice for SVC tasks.
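
As a rough illustration of the projected-gradient template only (and under the simplifying assumption that the variables live on a probability simplex, which is our stand-in and not necessarily the exact L2-SVC-NCH feasible set), the Python sketch below alternates a gradient step with a Euclidean projection onto the simplex.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def projected_gradient(grad, x0, step=0.1, iters=500):
    """Generic projected-gradient template: gradient step, then projection."""
    x = project_simplex(x0)
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# Toy objective: min 0.5 * ||A x - b||^2 with x on the simplex.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([0.2, 1.0])
grad = lambda x: A.T @ (A @ x - b)
print(projected_gradient(grad, np.array([0.5, 0.5])))  # approaches (0.44, 0.56)
```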

DAG (directed acyclic graph) tasks are widely used to model parallel real-time workloads. The real-time performance of a DAG task depends not only on its total workload, but also on its graph structure. Intuitively, with the same total workload, a DAG task with looser precedence constraints tends to have better real-time performance in terms of worst-case response time. However, this paper shows that we can actually shorten the worst-case response time of a DAG task by carefully adding new edges and constructing longer paths. We develop techniques, built on state-of-the-art DAG response time analysis, to properly add new edges so that the worst-case response time bound guaranteed by formal analysis can be significantly reduced. Experiments under different parameter settings demonstrate the effectiveness of the proposed techniques.
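
For context, a commonly used baseline in this line of work is the classical Graham-style bound $R \le \mathrm{len}(G) + (\mathrm{vol}(G) - \mathrm{len}(G))/m$ on $m$ cores, where $\mathrm{vol}(G)$ is the total workload and $\mathrm{len}(G)$ the longest (critical) path; the refined structure-aware analyses that this paper builds on tighten this bound. The toy Python sketch below computes only the classical bound.

```python
def graham_bound(wcet, edges, m):
    """Classical response-time bound for a DAG task on m cores:
    R <= len(G) + (vol(G) - len(G)) / m, where vol(G) is the total
    workload and len(G) is the longest path.  Only the classical bound,
    not the refined structure-aware analyses discussed above.
    """
    n = len(wcet)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    longest = list(wcet)
    order = [v for v in range(n) if indeg[v] == 0]
    for u in order:                       # list grows as vertices become ready
        for v in succ[u]:
            longest[v] = max(longest[v], longest[u] + wcet[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    vol, length = sum(wcet), max(longest)
    return length + (vol - length) / m

# toy DAG: 0 -> {1, 2} -> 3 on two cores; critical path 0-2-3 has length 5
print(graham_bound([1, 2, 3, 1], [(0, 1), (0, 2), (1, 3), (2, 3)], m=2))  # 6.0
```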

In this paper, the problem of maximizing the distance to a given fixed point over an intersection of balls is considered. It is known that this problem is NP-complete in the general case, since any subset-sum instance can be solved by maximizing the distance over an intersection of balls to a point inside the convex hull of the centers. The general context is the following: in [1] it is shown that there exists a polynomial-time algorithm which always solves the maximization problem when the given point is outside the convex hull of the centers of the balls. Naturally, one asks whether there is a polynomial-time algorithm that solves the problem for a point inside the convex hull. We prove a conjecture stated in the earlier paper [1], under slightly stronger conditions. The proven conjecture yields a polynomial-time algorithm for points on the facets of the convex hull, and it shows that such a point shares its maximizer with all points in a small enough ball centered at it, thus including points in the interior of the convex hull of the ball centers.
