
The $k$-Opt heuristic is a simple improvement heuristic for the Traveling Salesman Problem. It starts with an arbitrary tour and then repeatedly replaces $k$ edges of the tour by $k$ other edges, as long as this yields a shorter tour. We will prove that for 2-dimensional Euclidean Traveling Salesman Problems with $n$ cities the approximation ratio of the $k$-Opt heuristic is $\Theta(\log n / \log \log n)$. This improves the upper bound of $O(\log n)$ given by Chandra, Karloff, and Tovey in 1999 and provides for the first time a non-trivial lower bound for the case $k\ge 3$. Our results not only hold for the Euclidean norm but extend to arbitrary $p$-norms with $1 \le p < \infty$.
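
As a concrete illustration of the local-search step the abstract describes, here is a minimal sketch of the $k=2$ case (2-Opt) for points in the plane; the random point set, identity-tour seeding, and first-improvement scan are illustrative choices, not the paper's setup.

```python
import math, random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour (list of point indices)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Repeatedly replace 2 tour edges by 2 other edges while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour) - (i == 0)):
                # Reversing tour[i+1..j] swaps edges (a,b),(c,d) for (a,c),(b,d).
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if (math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                        < math.dist(points[a], points[b]) + math.dist(points[c], points[d])):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
t = two_opt(pts, list(range(len(pts))))
print(tour_length(pts, t))
```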

Related content

Trajectory planning tasks for non-holonomic or collaborative systems are naturally modeled by state spaces with non-Euclidean metrics. However, existing proofs of convergence for sampling-based motion planners only consider the setting of Euclidean state spaces. We resolve this issue by formulating a flexible framework and set of assumptions for which the widely-used PRM*, RRT, and RRT* algorithms remain asymptotically optimal in the non-Euclidean setting. The framework is compatible with collaborative trajectory planning: given a fleet of robotic systems that individually satisfy our assumptions, we show that the corresponding collaborative system again satisfies the assumptions and therefore has guaranteed convergence for the trajectory-finding methods. Our joint state space construction builds in a coupling parameter $1\leq p\leq \infty$, which interpolates between a preference for minimizing total energy at one extreme and a preference for minimizing the travel time at the opposite extreme. We illustrate our theory with trajectory planning for simple coupled systems, fleets of Reeds-Shepp vehicles, and a highly non-Euclidean fractal space.
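
To see why the result plugs into standard planners, here is a generic RRT skeleton in which the metric and steering function are swappable, together with one plausible reading of the $p$-norm coupling for a fleet. Both functions are sketches under our own assumptions, not the paper's construction.

```python
def rrt(sample, metric, steer, collision_free, start, goal_test, iters=5000):
    """Generic RRT skeleton with a pluggable metric and steering function:
    a non-Euclidean state space (e.g. Reeds-Shepp) only needs to supply its
    own `metric` and `steer`; the tree logic itself is unchanged."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        x_rand = sample()                                    # draw a random state
        i_near = min(range(len(nodes)), key=lambda i: metric(nodes[i], x_rand))
        x_new = steer(nodes[i_near], x_rand)                 # bounded step toward x_rand
        if collision_free(nodes[i_near], x_new):
            parent[len(nodes)] = i_near
            nodes.append(x_new)
            if goal_test(x_new):
                break
    return nodes, parent

def joint_metric(metrics, p):
    """Couple per-robot metrics into one fleet metric via a p-norm:
    p = 1 favors total energy/effort, p = float('inf') favors the slowest
    robot, i.e. travel time (a plausible reading of the abstract's coupling
    parameter, sketched here as an assumption)."""
    def d(xs, ys):
        vals = [m(x, y) for m, x, y in zip(metrics, xs, ys)]
        return max(vals) if p == float('inf') else sum(v ** p for v in vals) ** (1.0 / p)
    return d
```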

We consider the singularly perturbed fourth-order boundary value problem $\varepsilon ^{2}\Delta ^{2}u-\Delta u=f$ on the unit square $\Omega \subset \mathbb{R}^2$, with boundary conditions $u = \partial u / \partial n = 0$ on $\partial \Omega$, where $\varepsilon \in (0, 1)$ is a small parameter. The problem is solved numerically by means of a weak Galerkin (WG) finite element method, which is highly robust and flexible in element construction, using discontinuous piecewise polynomials on finite element partitions consisting of polygons of arbitrary shape. The resulting WG finite element formulation is symmetric, positive definite, and parameter-free. Under reasonable assumptions on the structure of the boundary layers that appear in the solution, a family of suitable Shishkin meshes with $N^2$ elements is constructed, convergence of the method is proved in a discrete $H^2$ norm for the corresponding WG finite element solutions, and numerical results are presented.
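
To make the mesh construction concrete, here is a sketch of the standard one-dimensional piecewise-uniform Shishkin points; the transition width $\tau = \min(1/4, \sigma \varepsilon \ln N)$ and the constant $\sigma$ are textbook choices and need not match the paper's exact parameters.

```python
import math

def shishkin_points(N, eps, sigma=2.0):
    """1D piecewise-uniform Shishkin points on [0, 1]: N/4 cells in each
    boundary-layer strip of width tau and N/2 cells in the interior.
    tau = min(1/4, sigma * eps * ln N) is the usual textbook transition."""
    assert N % 4 == 0
    tau = min(0.25, sigma * eps * math.log(N))
    left = [tau * i / (N // 4) for i in range(N // 4)]
    mid = [tau + (1 - 2 * tau) * i / (N // 2) for i in range(N // 2)]
    right = [1 - tau + tau * i / (N // 4) for i in range(N // 4 + 1)]
    return left + mid + right            # N + 1 points, N cells

# A tensor product of these points in x and y yields the N x N mesh
# on the unit square on which the WG method would be assembled.
pts = shishkin_points(16, 1e-3)
```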

Approximating convex bodies is a fundamental question in geometry and has a wide variety of applications. Given a convex body $K$ of diameter $\Delta$ in $\mathbb{R}^d$ for fixed $d$, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error $\varepsilon$. The best known uniform bound, due to Dudley (1974), shows that $O((\Delta/\varepsilon)^{(d-1)/2})$ facets suffice. While this bound is optimal in the case of a Euclidean ball, it is far from optimal for ``skinny'' convex bodies. A natural way to characterize a convex object's skinniness is in terms of its relationship to the Euclidean ball. Given a convex body $K$, define its surface diameter $\Delta_{d-1}$ to be the diameter of a Euclidean ball of the same surface area as $K$. It follows from generalizations of the isoperimetric inequality that $\Delta \geq \Delta_{d-1}$. We show that, under the assumption that the width of the body in any direction is at least $\varepsilon$, it is possible to approximate a convex body using $O((\Delta_{d-1}/\varepsilon)^{(d-1)/2})$ facets. This bound is never worse than the previous bound and may be significantly better for skinny bodies. The bound is tight, in the sense that for any value of $\Delta_{d-1}$, there exist convex bodies that, up to constant factors, require this many facets. The improvement arises from a novel approach to sampling points on the boundary of a convex body. We employ a classical concept from convexity, called Macbeath regions. We demonstrate that Macbeath regions in $K$ and $K$'s polar behave much like polar pairs. We then apply known results on the Mahler volume to bound their number.
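
The surface diameter admits a closed form by inverting the surface area of the Euclidean ball; a small sketch of that inversion (the test value below is illustrative):

```python
import math

def surface_diameter(area, d):
    """Diameter of the Euclidean d-ball whose surface area equals `area`:
    invert S(r) = d * V_d * r^(d-1), where V_d is the unit-ball volume."""
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    r = (area / (d * v_d)) ** (1 / (d - 1))
    return 2 * r

# Any 3D body with the surface area of the unit ball has Delta_{d-1} = 2,
# so the facet bound O((Delta_{d-1}/eps)^{(d-1)/2}) depends only on area.
print(surface_diameter(4 * math.pi, 3))   # -> 2.0
```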

We present a new approach to approximate nearest-neighbor queries in fixed dimension under a variety of non-Euclidean distances. We are given a set $S$ of $n$ points in $\mathbb{R}^d$, an approximation parameter $\varepsilon > 0$, and a distance function that satisfies certain smoothness and growth-rate assumptions. The objective is to preprocess $S$ into a data structure so that for any query point $q$ in $\mathbb{R}^d$, it is possible to efficiently report any point of $S$ whose distance from $q$ is within a factor of $1+\varepsilon$ of the actual closest point. Prior to this work, the most efficient data structures for approximate nearest-neighbor searching in spaces of constant dimensionality applied only to the Euclidean metric. This paper overcomes this limitation through a method called convexification. For admissible distance functions, the proposed data structures answer queries in logarithmic time using $O(n \log (1 / \varepsilon) / \varepsilon^{d/2})$ space, nearly matching the best known bounds for the Euclidean metric. These results apply to both convex scaling distance functions (including the Mahalanobis distance and weighted Minkowski metrics) and Bregman divergences (including the Kullback-Leibler divergence and the Itakura-Saito distance).
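
The divergences in question are easy to state in code. Below is a brute-force $(1+\varepsilon)$ reference scan against which such a data structure could be validated; it is a sketch assuming strictly positive input vectors for KL, not the paper's actual structure, which answers the same query in logarithmic time.

```python
import numpy as np

def kl_divergence(q, p):
    """Generalized Kullback-Leibler divergence D(q || p), the Bregman
    divergence of the negative-entropy generator (positive entries assumed;
    the -q + p term makes it valid for unnormalized vectors)."""
    return float(np.sum(q * np.log(q / p) - q + p))

def approx_nn_bruteforce(points, q, eps, div=kl_divergence):
    """O(n) reference scan: any reported point whose divergence from q is
    within (1 + eps) of the minimum is a valid (1+eps)-ANN answer."""
    dists = [div(q, p) for p in points]
    best = min(dists)
    return [i for i, v in enumerate(dists) if v <= (1 + eps) * best]

pts = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
print(approx_nn_bruteforce(pts, np.array([0.4, 0.6]), eps=0.1))   # -> [1]
```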

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and with suitable bounds. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example with fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$ for which the order of convergence is $H + 1/2$, hence lower than the attained order 1 in the examples above, but still higher than the order $H$ of convergence expected from previous works.
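
For orientation, here is a minimal sketch of the Euler scheme in question, driven by an Ornstein-Uhlenbeck path (one of the It\^o-process noises covered by the result); the test problem $f(t,x,y) = -x + y$ and all constants are hypothetical choices for illustration.

```python
import numpy as np

def euler_rode(f, x0, y_path, t):
    """Euler scheme for dX/dt = f(t, X, Y_t) along a fixed noise path y_path
    sampled on the grid t (both arrays of equal length)."""
    x = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + (t[k + 1] - t[k]) * f(t[k], x[k], y_path[k])
    return x

rng = np.random.default_rng(0)
n = 2 ** 12
t = np.linspace(0.0, 1.0, n + 1)
dt = t[1] - t[0]
y = np.zeros(n + 1)
for k in range(n):   # exact OU transition for dY = -Y dt + dW on the grid
    y[k + 1] = y[k] * np.exp(-dt) + np.sqrt((1 - np.exp(-2 * dt)) / 2) * rng.standard_normal()
x = euler_rode(lambda s, x, y: -x + y, 1.0, y, t)
print(x[-1])
```

Measuring the strong order empirically amounts to repeating this against a fine-grid reference solution and regressing the mean error on the step size.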

We develop two simple and efficient approximation algorithms for the continuous $k$-medians problem, where we seek the optimal location of $k$ facilities among a continuum of client points in a convex polygon $C$ with $n$ vertices, such that the total (average) Euclidean distance between clients and their nearest facility is minimized. Both algorithms run in $\mathcal{O}(n + k + k \log n)$ time. Our algorithms produce solutions within a factor of 2.002 of optimality. In addition, our simulation results applied to the convex hulls of the State of Massachusetts and the Town of Brookline, MA show that our algorithms generally perform within a range of 5\% to 22\% of optimality in practice.
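
The objective being minimized is straightforward to estimate numerically; here is a sketch with a hypothetical rejection sampler for convex polygons (counter-clockwise vertex list assumed), useful for reproducing the kind of optimality-gap measurements the abstract reports.

```python
import math, random

def average_distance(facilities, clients):
    """Continuous k-medians objective, Monte Carlo estimate: mean Euclidean
    distance from uniform client samples to their nearest facility."""
    return sum(min(math.dist(c, f) for f in facilities) for c in clients) / len(clients)

def sample_in_polygon(poly, n, seed=0):
    """Rejection-sample n uniform points in a convex polygon given as a
    counter-clockwise vertex list (a hypothetical helper for this sketch)."""
    rng = random.Random(seed)
    xs, ys = zip(*poly)
    pts = []
    while len(pts) < n:
        p = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if all((b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
               for a, b in zip(poly, poly[1:] + poly[:1])):
            pts.append(p)
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
clients = sample_in_polygon(square, 2000)
print(average_distance([(0.25, 0.5), (0.75, 0.5)], clients))
```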

Consider a random sample $(X_{1},\ldots,X_{n})$ from an unknown discrete distribution $P=\sum_{j\geq1}p_{j}\delta_{s_{j}}$ on a countable alphabet $\mathbb{S}$, and let $(Y_{n,j})_{j\geq1}$ be the empirical frequencies of the distinct symbols $s_{j}$ in the sample. We consider the problem of estimating the $r$-order missing mass, a discrete functional of $P$ defined as $$\theta_{r}(P;\mathbf{X}_{n})=\sum_{j\geq1}p^{r}_{j}I(Y_{n,j}=0).$$ This is a generalization of the missing mass, whose estimation is a classical problem in statistics and the subject of numerous studies in both theory and methods. First, we introduce a nonparametric estimator of $\theta_{r}(P;\mathbf{X}_{n})$ and a corresponding non-asymptotic confidence interval through concentration properties of $\theta_{r}(P;\mathbf{X}_{n})$. Then, we investigate minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, which is the main contribution of our work. We show that minimax estimation is not feasible over the class of all discrete distributions on $\mathbb{S}$, and not even for distributions with regularly varying tails, which only guarantee that our estimator is consistent for $\theta_{r}(P;\mathbf{X}_{n})$. This leads us to introduce the stronger assumption of second-order regular variation for the tail behaviour of $P$, which is proved to be sufficient for minimax estimation of $\theta_r(P;\mathbf{X}_{n})$, making the proposed estimator an optimal minimax estimator of $\theta_{r}(P;\mathbf{X}_{n})$. Our interest in the $r$-order missing mass arises from forensic statistics, where the estimation of the $2$-order missing mass appears in connection with the estimation of the likelihood ratio $T(P,\mathbf{X}_{n})=\theta_{1}(P;\mathbf{X}_{n})/\theta_{2}(P;\mathbf{X}_{n})$, known as the "fundamental problem of forensic mathematics". We present theoretical guarantees for the nonparametric estimation of $T(P,\mathbf{X}_{n})$.
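
A small simulation makes the functional concrete: under a known $P$, the true $\theta_r$ can be computed exactly, and for $r=1$ the classical Good-Turing estimate (number of singletons over $n$) serves as a familiar benchmark. The geometric-tail alphabet and sample size below are illustrative assumptions.

```python
from collections import Counter
import random

def r_order_missing_mass(p, sample, r):
    """True theta_r(P; X_n): sum of p_j^r over symbols unseen in the sample
    (computable here only because the simulation knows P)."""
    seen = set(sample)
    return sum(pj ** r for j, pj in enumerate(p) if j not in seen)

random.seed(0)
p = [0.5 ** (j + 1) for j in range(60)]          # geometric-tail toy alphabet
sample = random.choices(range(60), weights=p, k=500)
counts = Counter(sample)
good_turing = sum(1 for c in counts.values() if c == 1) / len(sample)
print(r_order_missing_mass(p, sample, 1), good_turing)   # truth vs. Good-Turing, r = 1
```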

Bilevel optimization has various applications such as hyper-parameter optimization and meta-learning. Designing theoretically efficient algorithms for bilevel optimization is more challenging than standard optimization because the lower-level problem defines the feasibility set implicitly via another optimization problem. One tractable case is when the lower-level problem permits strong convexity. Recent works show that second-order methods can provably converge to an $\epsilon$-first-order stationary point of the problem at a rate of $\tilde{\mathcal{O}}(\epsilon^{-2})$, yet these algorithms require a Hessian-vector product oracle. Kwon et al. (2023) resolved the problem by proposing a first-order method that can achieve the same goal at a slower rate of $\tilde{\mathcal{O}}(\epsilon^{-3})$. In this work, we provide an improved analysis demonstrating that the first-order method can also find an $\epsilon$-first-order stationary point within $\tilde {\mathcal{O}}(\epsilon^{-2})$ oracle complexity, which matches the upper bounds for second-order methods in the dependency on $\epsilon$. Our analysis further leads to simple first-order algorithms that can achieve similar near-optimal rates in finding second-order stationary points and in distributed bilevel problems.
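
For intuition, here is a heavily simplified sketch in the spirit of the penalty-based fully first-order scheme of Kwon et al. (2023): the fixed penalty, step sizes, single inner steps, and all function names are our own illustrative assumptions, not the paper's method or analysis.

```python
def first_order_bilevel(grad_f, grad_g, x, y, z, lam=10.0,
                        alpha=1e-2, beta=1e-2, iters=2000):
    """Fully first-order bilevel sketch: y tracks a minimizer of f/lam + g,
    z tracks a minimizer of g, and their difference replaces the
    Hessian-vector products that second-order methods need.
    grad_f / grad_g return (d/dx, d/dy) pairs."""
    for _ in range(iters):
        # inner first-order updates (several steps in practice; one here)
        y = y - beta * (grad_f(x, y)[1] / lam + grad_g(x, y)[1])
        z = z - beta * grad_g(x, z)[1]
        # surrogate hypergradient, built from first-order information only
        hyper = grad_f(x, y)[0] + lam * (grad_g(x, y)[0] - grad_g(x, z)[0])
        x = x - alpha * hyper
    return x, y, z

# Toy instance: f(x, y) = x^2 + (y - 1)^2, g(x, y) = (y - x)^2 / 2,
# so y*(x) = x and the true solution is x = 0.5; the sketch lands
# nearby, up to an O(1/lam) penalty bias.
gf = lambda x, y: (2 * x, 2 * (y - 1))
gg = lambda x, y: (-(y - x), (y - x))
print(first_order_bilevel(gf, gg, x=0.0, y=0.0, z=0.0))
```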

Given a graph, a $k$-plex is a vertex set in which each vertex is non-adjacent to at most $k-1$ other vertices of the set. The maximum $k$-plex problem, which asks for the largest $k$-plex in a given graph, is an important but computationally challenging problem in applications like graph search and community detection. So far, there are a number of empirical algorithms that lack sufficient theoretical explanation of their efficiency. We try to bridge this gap by defining a novel parameter of the input instance, $g_k(G)$, the gap between the degeneracy bound and the size of the maximum $k$-plex in the given graph, and presenting an exact algorithm parameterized by $g_k(G)$. In other words, we design an algorithm whose running time is polynomial in the size of the input graph and exponential in $g_k(G)$, where $k$ is a constant. Usually, $g_k(G)$ is small and bounded by $O(\log{(|V|)})$ in real-world graphs, indicating that the algorithm runs in polynomial time. We also carry out extensive experiments and show that the algorithm is competitive with state-of-the-art solvers. Additionally, for large $k$ values such as $15$ and $20$, our algorithm has superior performance over existing algorithms.
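
The defining property is a one-line check; a small sketch below (adjacency given as a dict of neighbor sets, a format chosen here for brevity) shows how relaxing a clique ($k=1$) to a $k$-plex admits vertices with a few missing edges.

```python
def is_k_plex(adj, S, k):
    """Check the defining property: every vertex of S has at least |S| - k
    neighbors inside S, i.e. at most k - 1 non-neighbors among the others
    (the vertex itself accounts for one of the k allowed non-neighbors)."""
    S = set(S)
    return all(len(S & adj[v]) >= len(S) - k for v in S)

# Triangle {0,1,2} plus a pendant vertex 3: the full set is a 3-plex
# (vertex 3 misses both 0 and 1) but not a 2-plex.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(is_k_plex(adj, {0, 1, 2, 3}, 3), is_k_plex(adj, {0, 1, 2, 3}, 2))  # True False
```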

In this work, we present a generic step-size choice for ADMM-type proximal algorithms. It admits a closed-form expression and is theoretically optimal with respect to a worst-case convergence rate bound. It is simply given by the ratio of the Euclidean norms of the dual and primal solutions, i.e., $||{\lambda}^\star|| / ||{x}^\star||$. Numerical tests show that its practical performance is near-optimal in general. The only challenge is that such a ratio is not known a priori, and we provide two strategies to address this. The derivation of our step-size choice is based on studying the fixed-point structure of ADMM via the proximal operator. However, we demonstrate that the classical definition of the proximal operator contains an input-scaling issue, which leads to a scaled step-size optimization problem that would yield a false solution. This issue is naturally avoided by our proposed new definition of the proximal operator, for which a series of properties is established.
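
As a toy illustration of the ratio-based choice, here is scaled-form ADMM for lasso with a naive "estimate the ratio from a pilot run, then restart" heuristic; the problem instance and the restart scheme are our own assumptions for demonstration, not the paper's two strategies.

```python
import numpy as np

def admm_lasso(A, b, gamma, rho, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax - b||^2 + gamma*||z||_1 s.t. x = z.
    `rho` is the step size discussed in the abstract."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - gamma / rho, 0.0)  # soft-threshold
        u = u + x - z
    lam = rho * u                          # recover the unscaled dual variable
    return x, z, lam

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
x, z, lam = admm_lasso(A, b, gamma=0.1, rho=1.0)                  # pilot run
rho_est = np.linalg.norm(lam) / max(np.linalg.norm(x), 1e-12)     # ratio from the abstract
x, z, lam = admm_lasso(A, b, gamma=0.1, rho=rho_est)              # rerun with estimated rho
```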
