
We consider the rooted prize-collecting walks (PCW) problem, wherein we seek a collection $C$ of rooted walks having minimum prize-collecting cost, which is the (total cost of the walks in $C$) + (total node-reward of nodes not visited by any walk in $C$). This problem arises naturally as the Lagrangian relaxation of both orienteering, where we seek a length-bounded walk of maximum reward, and the $\ell$-stroll problem, where we seek a minimum-length walk covering at least $\ell$ nodes. Our main contribution is to devise a simple, combinatorial algorithm for the PCW problem in directed graphs that returns a rooted tree whose prize-collecting cost is at most the optimum value of the prize-collecting walks problem. We utilize our algorithm to develop combinatorial approximation algorithms for two fundamental vehicle-routing problems (VRPs): (1) orienteering; and (2) the $k$-minimum-latency problem ($k$-MLP), wherein we seek to cover all nodes using $k$ paths starting at a prescribed root node, so as to minimize the sum of the node visiting times. Our combinatorial algorithm allows us to sidestep the need to solve a preflow-based LP, both in the LP-rounding algorithms of Friggstad and Swamy (2017) for orienteering, and in the state-of-the-art $7.183$-approximation algorithm for $k$-MLP of Post and Swamy (2015). Consequently, we obtain combinatorial implementations of these algorithms with substantially improved running times, while matching the current-best approximation factors. We report computational results for our resulting (combinatorial implementations of) orienteering algorithms, which show that the algorithms perform quite well in practice, both in terms of the quality of the solutions they return and the upper bound they yield on the orienteering optimum (which is obtained by leveraging the workings of our PCW algorithm).
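To make the objective concrete, here is a minimal sketch (ours, not the paper's algorithm) that evaluates the prize-collecting cost of a candidate collection of rooted walks; the edge-cost and node-reward dictionaries are illustrative placeholders.

```python
# A minimal sketch: evaluating the prize-collecting cost of a candidate
# collection of rooted walks. `cost` maps directed edges to nonnegative
# costs and `reward` maps nodes to prizes; both are illustrative names.

def pc_cost(walks, cost, reward):
    """Total walk cost plus total reward of nodes missed by every walk."""
    walk_cost = sum(cost[u, v]
                    for walk in walks
                    for u, v in zip(walk, walk[1:]))
    visited = {v for walk in walks for v in walk}
    missed_reward = sum(r for v, r in reward.items() if v not in visited)
    return walk_cost + missed_reward

# Example: root r, one walk r -> a -> b; node c is left unvisited.
cost = {('r', 'a'): 2.0, ('a', 'b'): 1.0}
reward = {'a': 3.0, 'b': 1.5, 'c': 4.0}
print(pc_cost([['r', 'a', 'b']], cost, reward))  # 2 + 1 + 4 = 7.0
```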

Related content

In numerical linear algebra, a well-established practice is to choose a norm that exploits the structure of the problem at hand in order to optimize accuracy or computational complexity. In numerical polynomial algebra, a single norm (attributed to Weyl) dominates the literature. This article initiates the use of $L_p$ norms for numerical algebraic geometry, with an emphasis on $L_{\infty}$. This classical idea yields strong improvements in the analysis of the number of steps performed by numerous iterative algorithms. In particular, we exhibit three algorithms where, despite the cost of computing the $L_{\infty}$-norm, the use of $L_p$-norms substantially reduces the overall computational complexity: a subdivision-based algorithm in real algebraic geometry for computing the homology of semialgebraic sets, a well-known meshing algorithm in computational geometry, and the computation of zeros of systems of complex quadratic polynomials (a particular case of Smale's 17th problem).
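As a toy illustration of the computational issue, the following sketch estimates the $L_{\infty}$-norm of a univariate polynomial on the complex unit circle by dense sampling; the polynomial and grid size are arbitrary choices, and this is a heuristic estimate rather than the exact sup-norm, whose computation the abstract flags as expensive.

```python
import numpy as np

# Estimate the L_infinity norm of a univariate polynomial on the unit
# circle by dense sampling. A grid estimate is cheap, whereas computing
# the exact sup-norm is hard in general.
coeffs = np.array([1.0, -2.0, 0.5])           # p(z) = 1 - 2z + 0.5 z^2
thetas = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
zs = np.exp(1j * thetas)                      # sample points on the unit circle
values = np.polyval(coeffs[::-1], zs)         # polyval wants highest degree first
print("estimated L_inf norm:", np.abs(values).max())
```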

In this paper, we study a non-local approximation of the time-dependent (local) Eikonal equation with Dirichlet-type boundary conditions, where the kernel in the non-local problem is properly scaled. Based on the theory of viscosity solutions, we prove existence and uniqueness of the viscosity solutions of both the local and non-local problems, as well as regularity properties of these solutions in time and space. We then derive error bounds between the solution of the non-local problem and that of the local one, both in continuous time and under a Backward Euler time discretization. We then turn to studying continuum limits of non-local problems defined on random weighted graphs with $n$ vertices. In particular, we establish that if the kernel scale parameter decreases at an appropriate rate as $n$ grows, then, almost surely, the solution of the problem on graphs converges uniformly to the viscosity solution of the local problem as the time step vanishes and the number of vertices $n$ grows large.
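For intuition only, here is a minimal finite-difference sketch for a model 1D time-dependent eikonal equation $u_t + |u_x| = 1$ with homogeneous Dirichlet conditions; the model equation, explicit forward-Euler stepping, and Godunov-type upwind gradient are our illustrative choices, whereas the paper analyzes a Backward Euler discretization of a non-local (kernel-based) problem.

```python
import numpy as np

# Explicit upwind sketch for the model equation u_t + |u_x| = 1 on [0, 1]
# with u = 0 at both endpoints; the solution grows toward the
# distance-to-boundary profile min(x, 1 - x).
n, T = 201, 0.5
h = 1.0 / (n - 1)
dt = 0.5 * h                     # CFL-type restriction for the explicit step
u = np.zeros(n)
for _ in range(int(T / dt)):
    dm = (u[1:-1] - u[:-2]) / h  # backward differences
    dp = (u[2:] - u[1:-1]) / h   # forward differences
    grad = np.maximum(np.maximum(dm, 0.0), np.maximum(-dp, 0.0))  # upwind |u_x|
    u[1:-1] += dt * (1.0 - grad) # u_t = 1 - |u_x|
    u[0] = u[-1] = 0.0           # Dirichlet boundary conditions
print("max of u at time T:", u.max())
```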

A Regret Minimizing Set (RMS) is a useful concept in which a smaller subset of a database is selected while mostly preserving the best scores along every possible utility function. In this paper, we study the $k$-Regret Minimizing Sets ($k$-RMS) and Average Regret Minimizing Sets (ARMS) problems. $k$-RMS selects $r$ records from a database such that the maximum regret ratio between the $k$-th best score in the database and the best score in the selected records, over all possible utility functions, is minimized. Meanwhile, ARMS minimizes the average of this ratio over a distribution of utility functions. In particular, we study approximation algorithms for $k$-RMS and ARMS from the perspective of approximating the happiness ratio, which is equivalent to one minus the regret ratio. We show that approximating the happiness of a $k$-RMS within any finite factor is NP-hard when the dimensionality of the database is unconstrained, and we extend this result to an inapproximability proof for the regret. We then provide approximation algorithms for approximating the happiness of ARMS with better approximation ratios and time complexities than known algorithms for approximating the regret. We further provide dataset reduction schemes which can be used to reduce the runtime of existing heuristic-based algorithms, as well as to derive polynomial-time approximation schemes for $k$-RMS when the dimensionality is fixed. Finally, we provide experimental validation.
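As a small illustration (variable names are ours), the following sketch computes the $k$-happiness and regret of a selected subset under a single sampled linear utility; the $k$-RMS objective then takes the worst case over all utilities.

```python
import numpy as np

# For a linear utility u, the k-happiness of a subset S of database D is
# (best score in S) / (k-th best score in D), capped at 1; regret = 1 - happiness.
def k_happiness(D, S, u, k):
    kth_best = np.sort(D @ u)[-k]        # k-th largest score in the database
    best_in_S = (D[S] @ u).max()         # best score among the selected records
    return min(best_in_S / kth_best, 1.0)

rng = np.random.default_rng(0)
D = rng.random((100, 3))                 # 100 records with 3 attributes
S = [0, 7, 42]                           # indices of the selected records
u = rng.random(3)                        # one sampled utility function
h = k_happiness(D, S, u, k=2)
print("happiness:", h, "regret:", 1 - h)
```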

This paper is focused on the optimization approach to the solution of inverse problems. We introduce a stochastic dynamical system in which the parameter-to-data map is embedded, with the goal of employing techniques from nonlinear Kalman filtering to estimate the parameter given the data. The extended Kalman filter (which we refer to as ExKI in the context of inverse problems) can be effective for some inverse problems approached this way, but is impractical when the forward map is not readily differentiable and is given as a black box, and also for high-dimensional parameter spaces because of the need to propagate large covariance matrices. Ensemble Kalman filters, for example the ensemble Kalman inversion (EKI) algorithm, have emerged as useful tools that overcome both of these issues: they are derivative-free and work with a low-rank covariance approximation formed from the ensemble. In this paper, we work with the ExKI, EKI, and a variant of EKI which we term unscented Kalman inversion (UKI). The paper contains two main contributions. First, we identify a novel stochastic dynamical system in which the parameter-to-data map is embedded. We present theory in the linear case to show exponential convergence of the mean of the filtering distribution to the solution of a regularized least-squares problem. This is in contrast to previous work in which the EKI has been employed, where the dynamical system used leads to algebraic convergence to an unregularized problem. Second, we show that applying the UKI to this novel stochastic dynamical system yields improved inversion results, in comparison with applying the EKI to the same system.
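For orientation, here is a minimal sketch of one generic EKI step for $y = \mathcal{G}(\theta) + \eta$ with a toy forward map; the forward map, data, and noise covariance are placeholders, and the paper's contribution concerns the particular stochastic dynamical system to which such updates are applied, not this basic update.

```python
import numpy as np

# One perturbed-observation EKI step: derivative-free, using only
# ensemble statistics of the forward map G.
def eki_step(thetas, G, y, Gamma, rng):
    gs = np.array([G(t) for t in thetas])             # forward map on the ensemble
    tm, gm = thetas.mean(0), gs.mean(0)
    C_tg = (thetas - tm).T @ (gs - gm) / len(thetas)  # parameter-output covariance
    C_gg = (gs - gm).T @ (gs - gm) / len(thetas)      # output covariance
    K = C_tg @ np.linalg.inv(C_gg + Gamma)            # Kalman-type gain
    ys = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, len(thetas))
    return thetas + (ys - gs) @ K.T                   # perturbed-observation update

rng = np.random.default_rng(1)
G = lambda t: np.array([t[0] + t[1], t[0] * t[1]])    # toy nonlinear forward map
y, Gamma = np.array([2.0, 1.0]), 0.01 * np.eye(2)
thetas = rng.normal(size=(50, 2))                     # ensemble of 50 particles
for _ in range(20):
    thetas = eki_step(thetas, G, y, Gamma, rng)
print("ensemble mean:", thetas.mean(0))               # near a solution of G(theta) = y
```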

The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally. If we follow a Nash equilibrium and opponents are irrational (or follow strategies from a different Nash equilibrium), then we may obtain an extremely low payoff. On the other hand, a maximin strategy assumes that all opposing agents are playing to minimize our payoff (even if it is not in their best interest), and ensures the maximal possible worst-case payoff, but results in exceedingly conservative play. We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability and behaving potentially arbitrarily with the remaining probability. We prove that a safe equilibrium exists in all strategic-form games (for all possible values of the rationality parameters), and that its computation is PPAD-hard. We present exact algorithms for computing a safe equilibrium in both 2-player and $n$-player games, as well as scalable approximation algorithms.
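As a small illustration of the opponent model (not the equilibrium computation itself, which the paper shows is PPAD-hard), the following sketch evaluates the value guaranteed to a candidate mixed strategy in a bimatrix game when the opponent is rational with probability $p$ and arbitrary otherwise; breaking best-response ties pessimistically is one plausible convention, chosen here for safety.

```python
import numpy as np

# Value of our mixed strategy x when the opponent best-responds to their
# own payoffs with probability p (ties broken against us) and may play
# anything with probability 1 - p.
def safe_value(x, A, B, p):
    ours = x @ A                              # our payoff per opponent column
    theirs = x @ B                            # opponent's payoff per column
    brs = np.isclose(theirs, theirs.max())    # opponent's best-response columns
    rational = ours[brs].min()                # pessimistic tie-breaking
    worst = ours.min()                        # fully adversarial response
    return p * rational + (1 - p) * worst

A = np.array([[3.0, 0.0], [5.0, 1.0]])        # our payoff matrix
B = np.array([[3.0, 5.0], [0.0, 1.0]])        # opponent's payoff matrix
x = np.array([0.5, 0.5])                      # a candidate mixed strategy
print(safe_value(x, A, B, p=0.9))
```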

We study the following two fixed-cardinality optimization problems (a maximization and a minimization variant). For a fixed $\alpha$ between zero and one, we are given a graph and two numbers $k \in \mathbb{N}$ and $t \in \mathbb{Q}$. The task is to find a vertex subset $S$ of exactly $k$ vertices that has value at least $t$ (resp. at most $t$ for the minimization variant). Here, the value of a vertex set $S$ is $\alpha$ times the number of edges with exactly one endpoint in $S$ plus $(1-\alpha)$ times the number of edges with both endpoints in $S$. These two problems generalize many prominent graph problems, such as Densest $k$-Subgraph, Sparsest $k$-Subgraph, Partial Vertex Cover, and Max $(k,n-k)$-Cut. In this work, we complete the picture of their parameterized complexity on several types of sparse graphs that are described by structural parameters. In particular, we provide kernelization algorithms and kernel lower bounds for these problems. A somewhat surprising consequence of our kernelizations is that Partial Vertex Cover and Max $(k,n-k)$-Cut not only behave in the same way but that the kernels for both problems can be obtained by the same algorithms.
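The value function is simple to state in code; the following sketch (with illustrative names) computes it directly from an edge list.

```python
# value(S) = alpha * (# edges with exactly one endpoint in S)
#          + (1 - alpha) * (# edges with both endpoints in S)
def value(edges, S, alpha):
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))   # one endpoint in S
    inside = sum(1 for u, v in edges if u in S and v in S)   # both endpoints in S
    return alpha * cut + (1 - alpha) * inside

edges = [(1, 2), (2, 3), (3, 4), (1, 3)]
print(value(edges, S={1, 2, 3}, alpha=0.5))   # 0.5 * 1 + 0.5 * 3 = 2.0
```

Note how the extremes recover special cases: $\alpha = 0$ counts only edges inside $S$ (the Densest/Sparsest $k$-Subgraph objective), while $\alpha = 1$ counts only cut edges (the Max $(k,n-k)$-Cut objective).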

Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists. We compare model-free, model-based, and hybrid offline RL approaches on various industrial benchmark (IB) datasets to test the algorithms in settings closer to real-world problems, including complex noise and partially observable states. We find that on the IB, hybrid approaches face severe difficulties and that simpler algorithms, such as rollout-based algorithms or model-free algorithms with simpler regularizers, perform best on the datasets.

We consider the problem of computing an $(s,d)$-hypernetwork in an acyclic F-hypergraph. This is a fundamental computational problem arising in directed hypergraphs, and is a foundational step in tackling problems of reachability and redundancy. This problem was previously explored in the context of general directed hypergraphs (containing cycles), where it is NP-hard, and acyclic B-hypergraphs, where a linear-time algorithm is known. In a surprising contrast, we find that for acyclic F-hypergraphs the problem is NP-hard, which also implies that the problem is hard in BF-hypergraphs. This is a striking complexity boundary given that F-hypergraphs and B-hypergraphs would at first seem to be symmetrical to one another. We provide the hardness proof and explain why there is a fundamental asymmetry between the two classes of directed hypergraphs.

In this paper, from a theoretical perspective, we study how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to theoretically demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation enables the computation of convex combinations of valid mappings, as it accounts for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is NP-hard in general, even when request graphs are restricted to planar graphs.
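To illustrate only the rounding step (the decomposition of the LP solution into valid mappings, which the paper's novel formulation enables, is the hard part), the following sketch samples one mapping per request from a hypothetical convex combination; the decomposition data below is made up for illustration.

```python
import random

# Randomized rounding over a convex combination of valid mappings:
# each request independently selects one mapping with probability equal
# to its convex-combination weight, and is rejected with the leftover mass.
def round_request(decomposition):
    """decomposition: list of (weight, mapping) pairs, weights summing to <= 1."""
    weights = [w for w, _ in decomposition]
    mappings = [m for _, m in decomposition]
    if random.random() > sum(weights):     # reject with the residual probability
        return None
    return random.choices(mappings, weights=weights)[0]

# Hypothetical decomposition of one request's LP solution:
decomposition = [(0.6, {"v1": "s3", "v2": "s7"}), (0.3, {"v1": "s1", "v2": "s7"})]
print(round_request(decomposition))
```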
