
In this paper, we present improved approximation algorithms for the (unsplittable) Capacitated Vehicle Routing Problem (CVRP) in general metrics. In CVRP, introduced by Dantzig and Ramser (1959), we are given a set of points (clients) $V$ together with a depot $r$ in a metric space, with each $v\in V$ having a demand $d_v>0$, and a vehicle of bounded capacity $Q$. The goal is to find a minimum cost collection of tours for the vehicle, each starting and ending at the depot, such that each client is visited at least once and the total demand of the clients in each tour is at most $Q$. In the unsplittable variant we study, the demand of a node must be served entirely by one tour. We present two approximation algorithms for unsplittable CVRP: a combinatorial $(\alpha+1.75)$-approximation, where $\alpha$ is the approximation factor for the Traveling Salesman Problem, and an approximation algorithm based on LP rounding with approximation guarantee $\alpha+\ln(2) + \delta \approx 3.194 + \delta$ in $n^{O(1/\delta)}$ time. Both approximations can further be improved by a small amount when combined with recent work by Blauth, Traub, and Vygen (2021), who obtained an $(\alpha + 2\cdot (1 -\epsilon))$-approximation for unsplittable CVRP for some constant $\epsilon$ depending on $\alpha$ ($\epsilon > 1/3000$ for $\alpha = 1.5$).
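For context, capacity constraints in CVRP are classically handled by tour partitioning: take an approximate TSP tour and cut it into capacity-feasible segments, each closed through the depot. The sketch below shows this for the unit-demand case (the classical Haimovich and Rinnooy Kan heuristic); it is only a baseline to fix ideas, not the algorithm of this paper, and the helper names are illustrative:

```python
def tour_partition(tour, dist, depot, Q):
    """Split a TSP tour (list of unit-demand clients) into depot tours
    of at most Q clients each, and return the tours with their cost.

    `dist(u, v)` is the metric distance; `depot` is the depot point.
    """
    tours = []
    for i in range(0, len(tour), Q):
        segment = tour[i:i + Q]
        tours.append([depot] + segment + [depot])
    cost = sum(
        sum(dist(t[j], t[j + 1]) for j in range(len(t) - 1)) for t in tours
    )
    return tours, cost

# Toy example on the line metric with depot 0 and capacity 2.
dist = lambda u, v: abs(u - v)
tours, cost = tour_partition([1, 2, 3, 4], dist, 0, 2)
# tours == [[0, 1, 2, 0], [0, 3, 4, 0]], cost == 12
```

Partitioning an $\alpha$-approximate tour this way is what yields guarantees of the form $\alpha + f(Q)$; the improvements in the paper come from more careful combinatorial arguments and LP rounding.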


We study the allocative challenges that governmental and nonprofit organizations face when tasked with equitable and efficient rationing of a social good among agents whose needs (demands) realize sequentially and are possibly correlated. To better achieve their dual aims of equity and efficiency, social planners intend to maximize the minimum fill rate across agents, where each agent's fill rate is determined by a one-time allocation that must be irrevocably decided upon its arrival. For an arbitrarily correlated sequence of demands, we establish upper bounds on both the expected minimum fill rate (ex-post fairness) and the minimum expected fill rate (ex-ante fairness) achievable by any policy. Our bounds are parameterized by the number of agents and the expected demand-to-supply ratio. Further, we show that for any set of parameters, a simple adaptive policy of projected proportional allocation achieves the best possible fairness guarantee, ex post as well as ex ante. We obtain the performance guarantees of our proposed adaptive policy by inductively designing lower-bound functions on its corresponding value-to-go. Our policy is transparent and easy to implement, as it does not rely on distributional information beyond the first conditional moments. Despite our policy's simplicity, we demonstrate that it provides significant improvement over the class of non-adaptive target-fill-rate policies by characterizing the performance of the optimal such policy. We complement our theoretical developments with a numerical study motivated by the rationing of COVID-19 medical supplies based on a standard SEIR model approach that is commonly used to forecast pandemic trajectories. In such a setting, our simple adaptive policy significantly outperforms its theoretical guarantee as well as the optimal target-fill-rate policy.
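As a rough illustration of a projected proportional allocation rule, the sketch below gives each arriving agent a fill rate equal to the current supply-to-expected-remaining-demand ratio, capped at one. This is a hypothetical reading of such a policy; the function name and the exact projection step are assumptions, not the paper's precise rule:

```python
def proportional_allocation(demands, supply, expected_remaining):
    """Sequentially allocate `supply` among agents arriving with `demands`.

    `expected_remaining[t]` is the expected total demand of agents
    t, t+1, ... (first conditional moments are the only distributional
    input, as in the policy described in the abstract).  Each agent t
    irrevocably receives d_t * min(1, remaining / expected_remaining[t]),
    capped by the remaining supply.  Illustrative sketch only.
    """
    remaining = supply
    fills = []
    for d, er in zip(demands, expected_remaining):
        rate = min(1.0, remaining / er) if er > 0 else 1.0
        alloc = min(d * rate, remaining)
        remaining -= alloc
        fills.append(alloc / d if d > 0 else 1.0)
    return fills
```

For two agents with demand 2 each, supply 2, and expected remaining demands [4, 2], both agents receive a fill rate of 0.5, so the minimum fill rate equals the supply-to-demand ratio.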

For an integer $k \geq 1$ and a graph $G$, let $\mathcal{K}_k(G)$ be the graph that has vertex set all proper $k$-colorings of $G$, and an edge between two vertices $\alpha$ and~$\beta$ whenever the coloring~$\beta$ can be obtained from $\alpha$ by a single Kempe change. A theorem of Meyniel from 1978 states that $\mathcal{K}_5(G)$ is connected with diameter $O(5^{|V(G)|})$ for every planar graph $G$. We significantly strengthen this result, by showing that there is a positive constant $c$ such that $\mathcal{K}_5(G)$ has diameter $O(|V(G)|^c)$ for every planar graph $G$.
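A Kempe change swaps two colors on a maximal connected component of the subgraph induced by those two color classes. A minimal sketch of one such change (adjacency-list representation assumed; illustrative only):

```python
def kempe_change(adj, coloring, v, a, b):
    """Swap colors a and b on the (a, b)-Kempe chain containing v.

    `adj` maps each vertex to its neighbours; `coloring` maps vertices
    to colors.  Kempe changes preserve properness, so the result is
    again a proper coloring.
    """
    if coloring[v] not in (a, b):
        return dict(coloring)  # v is not on an (a, b)-chain; no change
    new = dict(coloring)
    stack, seen = [v], {v}
    while stack:  # DFS over the component of the (a, b)-subgraph
        u = stack.pop()
        new[u] = b if coloring[u] == a else a
        for w in adj[u]:
            if w not in seen and coloring[w] in (a, b):
                seen.add(w)
                stack.append(w)
    return new
```

On the path 1-2-3 colored (0, 1, 0), the (0, 1)-chain through vertex 1 is the whole path, so the change yields the coloring (1, 0, 1). The graph $\mathcal{K}_k(G)$ in the abstract has one edge per such move.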

In this paper, we prove a local limit theorem for the chi-square distribution with $r > 0$ degrees of freedom and noncentrality parameter $\lambda \geq 0$. We use it to develop refined normal approximations for the survival function. Our maximal errors go down to an order of $r^{-2}$, which is significantly smaller than the maximal error bounds of order $r^{-1/2}$ recently found by Horgan & Murphy (2013) and Seri (2015). Our results allow us to drastically reduce the number of observations required to obtain negligible errors in the energy detection problem, from $250$, as recommended in the seminal work of Urkowitz (1967), to only $8$ here with our new approximations. We also obtain an upper bound on several probability metrics between the central and noncentral chi-square distributions and the standard normal distribution, and we obtain an approximation for the median that improves the lower bound previously obtained by Robert (1990).
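For context, the classical Wilson-Hilferty cube-root transform is a well-known normal approximation to the central chi-square survival function; the refined approximations of the paper are different and sharper, but the sketch below shows the general shape of such approximations:

```python
import math

def chi2_sf_wilson_hilferty(x, r):
    """Wilson-Hilferty cube-root normal approximation to the survival
    function of a central chi-square with r degrees of freedom.
    This is the classical approximation, not the refined one of the paper.
    """
    mu = 1.0 - 2.0 / (9.0 * r)
    sigma = math.sqrt(2.0 / (9.0 * r))
    z = ((x / r) ** (1.0 / 3.0) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # standard normal tail

# For r = 2 the exact survival function is exp(-x/2), giving a sanity check.
exact = math.exp(-3.0 / 2.0)
approx = chi2_sf_wilson_hilferty(3.0, 2)
```

Even for $r = 2$ the cube-root approximation is accurate to a few parts in a thousand here; the paper's point is that its refined approximations shrink the maximal error from order $r^{-1/2}$ to order $r^{-2}$.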

We show that the solution of the Hermite-Pad\'{e} type I approximation problem leads in a natural way to a subclass of solutions of the Hirota (discrete Kadomtsev-Petviashvili) system and of its adjoint linear problem. Our result explains the appearance of various ingredients of integrable systems theory in applications to multiple orthogonal polynomials, numerical algorithms, random matrices, and other branches of mathematical physics and applied mathematics where the Hermite-Pad\'{e} approximation problem is relevant. We also present a geometric algorithm, based on the notion of Desargues maps, for constructing solutions of the problem in the projective space over the field of rational functions. As a byproduct we obtain the corresponding generalization of the Wynn recurrence. We isolate the boundary data of the Hirota system which provide solutions to the Hermite-Pad\'{e} problem, showing that the corresponding reduction lowers the dimensionality of the system. In particular, we obtain certain equations which, in addition to the known ones given by Paszkowski, can be considered as direct analogs of the Frobenius identities. We study the place of the reduced system within integrability theory, which results in a multidimensional (in the number of variables) extension of the discrete-time Toda chain equations.

Let $L_{q,\mu},\, 1\le q<\infty, \ \mu\ge0,$ denote the weighted $L_q$ space with the classical Jacobi weight $w_\mu$ on the ball $\Bbb B^d$. We consider the weighted least $\ell_q$ approximation problem for a given $L_{q,\mu}$-Marcinkiewicz-Zygmund family on $\Bbb B^d$. We obtain the weighted least $\ell_q$ approximation errors for the weighted Sobolev space $W_{q,\mu}^r$, $r>(d+2\mu)/q$, which are order optimal. We also discuss the least squares quadrature induced by an $L_{2,\mu}$-Marcinkiewicz-Zygmund family, and obtain the quadrature errors for $W_{2,\mu}^r$, $r>(d+2\mu)/2$, which are also order optimal. Meanwhile, we give the corresponding weighted least $\ell_q$ approximation theorem and least squares quadrature errors on the sphere.

Given an $n$-point metric space $(\mathcal{X},d)$ where each point belongs to one of $m=O(1)$ different categories or groups and a set of integers $k_1, \ldots, k_m$, the fair Max-Min diversification problem is to select $k_i$ points belonging to category $i\in [m]$, such that the minimum pairwise distance between selected points is maximized. The problem was introduced by Moumoulidou et al. [ICDT 2021] and is motivated by the need to down-sample large data sets in various applications so that the derived sample achieves a balance over diversity, i.e., the minimum distance between a pair of selected points, and fairness, i.e., ensuring enough points of each category are included. We prove the following results: 1. We first consider general metric spaces. We present a randomized polynomial time algorithm that returns a factor $2$-approximation to the diversity but only satisfies the fairness constraints in expectation. Building upon this result, we present a $6$-approximation that is guaranteed to satisfy the fairness constraints up to a factor $1-\epsilon$ for any constant $\epsilon$. We also present a linear time algorithm returning an $(m+1)$-approximation with exact fairness. The best previous result was a $(3m-1)$-approximation. 2. We then focus on Euclidean metrics. We first show that the problem can be solved exactly in one dimension. For constant dimensions, categories and any constant $\epsilon>0$, we present a $(1+\epsilon)$-approximation algorithm that runs in $O(nk) + 2^{O(k)}$ time where $k=k_1+\ldots+k_m$. We can improve the running time to $O(nk)+ \mathrm{poly}(k)$ at the expense of only picking $(1-\epsilon) k_i$ points from category $i\in [m]$. Finally, we present algorithms suitable for processing massive data sets, including single-pass data stream algorithms and composable coresets for distributed processing.
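For the unconstrained Max-Min diversification problem, the classical farthest-point greedy of Gonzalez gives a factor-2 approximation; the fair variants studied here require additional machinery on top of such ideas, but the greedy is a useful baseline to fix intuition:

```python
def farthest_point_greedy(points, k, dist):
    """Gonzalez-style greedy: repeatedly add the point whose distance to
    the current selection is largest.  A classical 2-approximation for
    unconstrained Max-Min diversification; shown only as a baseline --
    it ignores the per-category fairness constraints of the paper.
    """
    selected = [points[0]]
    while len(selected) < k:
        # Point maximizing its distance to the closest selected point.
        nxt = max(points, key=lambda p: min(dist(p, s) for s in selected))
        selected.append(nxt)
    return selected

# On the line {0, 1, 5, 9} with k = 3, the greedy picks 0, then 9, then 5.
```

The difficulty in the fair setting is that swapping a greedily chosen point for a same-category substitute can collapse the minimum pairwise distance, which is why the paper's algorithms need randomization or rounding.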

In this paper, we study a non-local approximation of the time-dependent (local) Eikonal equation with Dirichlet-type boundary conditions, where the kernel in the non-local problem is properly scaled. Based on the theory of viscosity solutions, we prove existence and uniqueness of the viscosity solutions of both the local and non-local problems, as well as regularity properties of these solutions in time and space. We then derive error bounds between the solution to the non-local problem and that of the local one, both in continuous time and in a backward Euler time discretization. We then turn to studying continuum limits of non-local problems defined on random weighted graphs with $n$ vertices. In particular, we establish that if the kernel scale parameter decreases at an appropriate rate as $n$ grows, then almost surely, the solution of the problem on graphs converges uniformly to the viscosity solution of the local problem as the time step vanishes and the number of vertices $n$ grows large.

In this paper, a new framework for continuous-time maximum a posteriori estimation based on Chebyshev polynomial optimization (ChevOpt) is proposed, which transforms nonlinear continuous-time state estimation into a problem of constant-parameter optimization. Specifically, the time-varying system state is represented by a Chebyshev polynomial, and the unknown Chebyshev coefficients are optimized by minimizing the weighted sum of the prior, dynamics, and measurement terms. The proposed ChevOpt is an optimal continuous-time estimator in the least-squares sense but requires batch processing; a recursive sliding-window version is proposed as well to meet the requirements of real-time applications. Compared with the well-known Gaussian filters, ChevOpt better resolves the nonlinearities in both dynamics and measurements. Numerical results on demonstrative examples show that the proposed ChevOpt achieves remarkably improved accuracy over the extended/unscented Kalman filters and the RTS smoother, close to the Cramer-Rao lower bound.
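The core parameterization can be illustrated by fitting Chebyshev coefficients to a scalar function on $[-1,1]$ by collocation at the Chebyshev nodes; ChevOpt additionally folds prior, dynamics, and measurement residuals into the objective. A minimal sketch (function names are illustrative, not the paper's API):

```python
import math

def chebyshev_fit(f, degree):
    """Compute Chebyshev coefficients of f on [-1, 1] by collocation at
    the Chebyshev nodes (discrete orthogonality).  Illustrates replacing
    a continuous-time trajectory by a finite coefficient vector, the key
    step behind ChevOpt-style estimators."""
    n = degree + 1
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    coeffs = []
    for k in range(n):
        s = sum(f(x) * math.cos(k * math.acos(x)) for x in nodes)
        coeffs.append((1.0 if k == 0 else 2.0) * s / n)
    return coeffs

def chebyshev_eval(coeffs, t):
    """Evaluate sum_k c_k T_k(t) via the three-term recurrence
    T_k = 2 t T_{k-1} - T_{k-2}."""
    total = 0.0
    t_prev, t_curr = 1.0, t  # T_0, T_1
    for k, c in enumerate(coeffs):
        if k == 0:
            total += c
        elif k == 1:
            total += c * t_curr
        else:
            t_prev, t_curr = t_curr, 2.0 * t * t_curr - t_prev
            total += c * t_curr
    return total
```

For example, fitting $f(t)=t^2$ with degree 2 recovers the exact expansion $t^2 = \tfrac12 T_0 + \tfrac12 T_2$, so the trajectory is captured by three constants; the estimator then optimizes such coefficients instead of a function.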

In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development in the field of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower dimensional manifold, so the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
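The moving least-squares idea can be illustrated in one dimension: around each evaluation point, fit a weighted local polynomial and read off its value. The toy sketch below does a weighted local linear fit for 1-D data; the paper's method generalizes this principle to projecting onto a $d$-dimensional manifold in $\mathbb{R}^n$, so treat the code as an analogy only:

```python
import math

def mls_fit(points, x0, h):
    """Moving least-squares value at x0 for 1-D data `points` = [(x, y)]:
    a Gaussian-weighted local linear fit with bandwidth h.  Toy analogue
    of the manifold MLS projection; names and weighting are illustrative.
    """
    w = [math.exp(-((x - x0) / h) ** 2) for x, _ in points]
    sw = sum(w)
    mx = sum(wi * x for wi, (x, _) in zip(w, points)) / sw
    my = sum(wi * y for wi, (_, y) in zip(w, points)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, (x, y) in zip(w, points))
    var = sum(wi * (x - mx) ** 2 for wi, (x, _) in zip(w, points))
    slope = cov / var if var > 0 else 0.0
    return my + slope * (x0 - mx)  # value of the local linear fit at x0
```

Because the fit is local and weighted, noise is averaged out while the underlying smooth structure is reproduced; for data lying exactly on a line, the fit is exact at every evaluation point.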

We propose a new method of estimation in topic models that is not a variation of the existing simplex finding algorithms and that estimates the number of topics $K$ from the observed data. We derive new finite sample minimax lower bounds for the estimation of the topic matrix $A$, as well as new upper bounds for our proposed estimator. We describe the scenarios where our estimator is minimax adaptive. Our finite sample analysis is valid for any number of documents ($n$), individual document length ($N_i$), dictionary size ($p$), and number of topics ($K$); both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while we provide the competing methods with the correct value in our simulations.
