
We consider the minimax query complexity of online planning with a generative model in fixed-horizon Markov decision processes (MDPs) with linear function approximation. Following recent works, we consider broad classes of problems where either (i) the optimal value function $v^\star$ or (ii) the optimal action-value function $q^\star$ lie in the linear span of some features; or (iii) both $v^\star$ and $q^\star$ lie in the linear span when restricted to the states reachable from the starting state. Recently, Weisz et al. (2021b) showed that under (ii) the minimax query complexity of any planning algorithm is at least exponential in the horizon $H$ or in the feature dimension $d$ when the size $A$ of the action set can be chosen to be exponential in $\min(d,H)$. On the other hand, for the setting (i), Weisz et al. (2021a) introduced TensorPlan, a planner whose query cost is polynomial in all relevant quantities when the number of actions is fixed. Among other things, these two works left open the question of whether polynomial query complexity is possible when $A$ is subexponential in $\min(d,H)$. In this paper we answer this question in the negative: we show that an exponentially large lower bound holds when $A=\Omega(\min(d^{1/4},H^{1/2}))$, under either (i), (ii) or (iii). In particular, this implies a perhaps surprising exponential separation of query complexity compared to the work of Du et al. (2021), who prove a polynomial upper bound when (iii) holds for all states. Furthermore, we show that the upper bound of TensorPlan can be extended to hold under (iii) and, for MDPs with deterministic transitions and stochastic rewards, also under (ii).

Related content

In this paper we study the recursive sequence $x_{n+1}=\frac{x_n+f(x_n)}{2}$ for a continuous real-valued function $f$ on an interval $[a,b]$, where $x_0$ is an arbitrary point in $[a,b]$. First, we present some results for continuous real-valued functions $f$ on $[a,b]$ that have a unique fixed point $c\in (a,b)$, and show that the sequence $\{x_n\}$ converges to $c$ provided that $f$ satisfies some conditions. We then extend these results by assuming that $c$ is a root of $f$ instead of a fixed point. We define two further sequences by $x^{+}_0=x^{-}_0=x_0\in [a,b]$, $x^{+}_{n+1}=x^{+}_n+\frac{f(x^{+}_n)}{2}$, and $x^{-}_{n+1}= x^{-}_n-\frac{f(x^{-}_n)}{2}$ for each $n\ge 0$. We show that for each continuous real-valued function $f$ on $[a,b]$ with $f(a)>0>f(b)$ that has a unique root $c\in (a,b)$, the sequence $\{x^{+}_n\}$ converges to $c$ provided that $f'\ge -2$ on $(a,b)$. Analogously, we show that for each continuous real-valued function $f$ on $[a,b]$ with $f(a)<0<f(b)$ that has a unique root $c\in (a,b)$, the sequence $\{x^{-}_n\}$ converges to $c$ provided that $f'\le 2$ on $(a,b)$. We also show by example that there exists a continuous real-valued function $f:[a,b]\to [a,b]$ for which the sequence $\{x_{n}\}$ does not converge for some $x_0\in [a,b]$.
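As a quick illustration of these iterations, the sketch below runs the averaged fixed-point iteration and the $\{x^{+}_n\}$ root iteration on two toy functions; the choices of $f$, $x_0$, and the interval are our own illustrative assumptions, not examples from the paper.

```python
import math

# Illustrative sketch of the iterations from the abstract; the functions and
# starting points below are our own choices, not the paper's examples.

def mean_iterate(f, x0, n_steps=200):
    """x_{n+1} = (x_n + f(x_n)) / 2, aimed at a fixed point of f."""
    x = x0
    for _ in range(n_steps):
        x = (x + f(x)) / 2
    return x

def root_iterate_plus(f, x0, n_steps=200):
    """x+_{n+1} = x+_n + f(x+_n)/2, aimed at a root of f when
    f(a) > 0 > f(b) and f' >= -2 on (a, b)."""
    x = x0
    for _ in range(n_steps):
        x = x + f(x) / 2
    return x

# f(x) = cos(x) has a unique fixed point c ~ 0.739 in [0, 1].
print(mean_iterate(math.cos, 0.0))                 # ~0.739085
# f(x) = 0.5 - x satisfies f(0) > 0 > f(1) and f' = -1 >= -2; root c = 0.5.
print(root_iterate_plus(lambda x: 0.5 - x, 0.0))   # ~0.5
```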

We study the Maximum Independent Set (MIS) problem under the notion of stability introduced by Bilu and Linial (2010): a weighted instance of MIS is $\gamma$-stable if it has a unique optimal solution that remains the unique optimum under multiplicative perturbations of the weights by a factor of at most $\gamma\geq 1$. The goal then is to efficiently recover the unique optimal solution. In this work, we solve stable instances of MIS on several graph classes: we solve $\widetilde{O}(\Delta/\sqrt{\log \Delta})$-stable instances on graphs of maximum degree $\Delta$, $(k - 1)$-stable instances on $k$-colorable graphs, and $(1 + \varepsilon)$-stable instances on planar graphs. For general graphs, we present a strong lower bound showing that there are no efficient algorithms for $O(n^{\frac{1}{2} - \varepsilon})$-stable instances of MIS, assuming the planted clique conjecture. We also give an algorithm for $(\varepsilon n)$-stable instances. As a by-product of our techniques, we give algorithms and lower bounds for stable instances of Node Multiway Cut. Furthermore, we prove a general result showing that the integrality gap of convex relaxations of several maximization problems reduces dramatically on stable instances. Moreover, we initiate the study of certified algorithms, a notion recently introduced by Makarychev and Makarychev (2018): a class of $\gamma$-approximation algorithms satisfying one crucial property, namely that the solution returned is optimal for a perturbation of the original instance. We obtain $\Delta$-certified algorithms for MIS on graphs of maximum degree $\Delta$, and $(1+\varepsilon)$-certified algorithms on planar graphs. Finally, we analyze the algorithm of Berman and F\"urer (1994) and prove that it is a $\left(\frac{\Delta + 1}{3} + \varepsilon\right)$-certified algorithm for MIS on graphs of maximum degree $\Delta$ in which all weights are equal to 1.
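To make the stability notion concrete, the following brute-force sketch checks $\gamma$-stability of a tiny weighted MIS instance. It is purely illustrative and unrelated to the paper's algorithms; it relies on the standard observation that, for a unique optimum $S^\star$ under multiplicative perturbations, the adversary's best move against a competitor $T$ is to inflate the weights of $T \setminus S^\star$ by $\gamma$, so the instance is $\gamma$-stable iff $w(S^\star \setminus T) > \gamma \cdot w(T \setminus S^\star)$ for every independent set $T \neq S^\star$.

```python
from itertools import combinations

def independent_sets(n, edges):
    """Enumerate all independent sets of a graph on vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                yield set(subset)

def is_gamma_stable(n, edges, w, gamma):
    """Brute-force gamma-stability check (exponential time; tiny graphs only)."""
    sets = list(independent_sets(n, edges))
    best = max(sets, key=lambda s: sum(w[i] for i in s))
    for t in sets:
        if t == best:
            continue
        # Worst-case adversary multiplies the weights of t \ best by gamma.
        if sum(w[i] for i in best - t) <= gamma * sum(w[i] for i in t - best):
            return False
    return True

# Path 0-1-2 with weights (3, 2, 3): the optimum {0, 2} survives gamma = 2,
# but the adversary can make {1} tie once gamma reaches 3.
print(is_gamma_stable(3, [(0, 1), (1, 2)], [3.0, 2.0, 3.0], gamma=2.0))  # True
print(is_gamma_stable(3, [(0, 1), (1, 2)], [3.0, 2.0, 3.0], gamma=3.0))  # False
```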

We propose and analyse an augmented mixed finite element method for the Navier--Stokes equations written in terms of velocity, vorticity, and pressure, with non-constant viscosity and no-slip boundary conditions. The weak formulation includes least-squares terms arising from the constitutive equation and from the incompressibility condition, and we use a fixed-point strategy to show the existence and uniqueness of continuous and discrete solutions under the assumption of sufficiently small data. The method is constructed using any compatible finite element pair (conforming or non-conforming) for velocity and pressure, as dictated by Stokes inf-sup stability, while for vorticity any generic discrete space (of arbitrary order) can be used. We establish optimal a priori error estimates. Finally, we provide a set of numerical tests in 2D and 3D illustrating the behaviour of the scheme and verifying the theoretical convergence rates.

Multi-robot systems offer enhanced capability over their monolithic counterparts, but they come at the cost of increased complexity in coordination. To reduce complexity and make the problem tractable, multi-robot motion planning (MRMP) methods in the literature adopt decoupled approaches that sacrifice either optimality or dynamic feasibility. In this paper, we present a convexification method, namely "parabolic relaxation", to generate optimal and dynamically feasible trajectories for MRMP in the coupled joint space of all robots. We leverage the proposed relaxation to tame the problem complexity and attain computational tractability for planning over one hundred robots in extremely cluttered environments. We take a multi-stage optimization approach that consists of i) mathematically formulating MRMP as a non-convex optimization problem, ii) lifting the problem into a higher-dimensional space, iii) convexifying the problem through the proposed computationally efficient parabolic relaxation, and iv) penalizing infeasibility via iterative search so as to recover feasible and near-optimal solutions to the original problem. Our numerical experiments demonstrate that the proposed approach generates optimal and dynamically feasible trajectories for challenging motion planning problems with a higher success rate than the state of the art, while remaining computationally tractable for over one hundred robots in a highly dense environment.
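For readers unfamiliar with the term, one common form of parabolic relaxation in the QCQP literature (our paraphrase; the paper's exact formulation may differ) lifts the decision vector $x$ to a matrix variable $X \approx xx^{\top}$ and replaces the nonconvex coupling with convex parabolic constraints:

```latex
% A common form of parabolic relaxation for lifted QCQPs (an assumption drawn
% from the broader literature, not a quotation of this paper's formulation):
% for every index k and every pair (k, l), enforce
\[
X_{kk} \ge x_k^2, \qquad
X_{kk} + X_{ll} - 2X_{kl} \ge (x_k - x_l)^2, \qquad
X_{kk} + X_{ll} + 2X_{kl} \ge (x_k + x_l)^2 .
\]
% Each constraint holds with equality when X = x x^T, and each is convex in
% (x, X), so the lifted feasible set is convex; a penalty term then drives
% the iterates back toward rank-one (feasible) solutions.
```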

We present a general technique, based on parametric search with a twist, for solving a variety of optimization problems on a set of points in the plane or in higher dimensions. These problems include (i) the reverse shortest path problem in unit-disk graphs, recently studied by Wang and Zhao, (ii) the same problem for weighted unit-disk graphs, with a decision procedure recently provided by Wang and Xue, (iii) extensions of these problems to three and higher dimensions, (iv) the discrete Fr\'echet distance with one-sided shortcuts in higher dimensions, extending the study by Ben Avraham et al., and (v) the maximum-height independent towers problem, in which we want to erect vertical towers of maximum height over a 1.5-dimensional terrain so that no two tower tips are mutually visible. We obtain significantly improved solutions for problems (i) and (ii), and new efficient solutions to problems (iii), (iv), and (v), which do not appear to have been studied earlier.

Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. When it comes to the synchronous setting (where independent samples for all state-action pairs are drawn from a generative model in each iteration), substantial progress has been made towards understanding the sample efficiency of Q-learning. Consider a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-approximation of the optimal Q-function, state-of-the-art theory for Q-learning requires a sample size exceeding the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$, which fails to match existing minimax lower bounds. This gives rise to natural questions: what is the sharp sample complexity of Q-learning? Is Q-learning provably sub-optimal? This paper addresses these questions for the synchronous setting: (1) when $|\mathcal{A}|=1$ (so that Q-learning reduces to TD learning), we prove that the sample complexity of TD learning is minimax optimal and scales as $\frac{|\mathcal{S}|}{(1-\gamma)^3\varepsilon^2}$ (up to logarithmic factors); (2) when $|\mathcal{A}|\geq 2$, we settle the sample complexity of Q-learning to be on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to logarithmic factors). Our theory unveils the strict sub-optimality of Q-learning when $|\mathcal{A}|\geq 2$, and makes rigorous the negative impact of over-estimation in Q-learning. Finally, we extend our analysis to accommodate asynchronous Q-learning (i.e., the case with Markovian samples), sharpening the horizon dependency of its sample complexity to $\frac{1}{(1-\gamma)^4}$.
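As a reference point for the setting analyzed above, here is a minimal sketch of synchronous Q-learning with a generative model on a randomly generated toy MDP; the MDP, step-size schedule, and iteration count are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                # deterministic rewards in [0, 1]

Q = np.zeros((S, A))
for t in range(1, 20001):
    eta = 1.0 / (1 + (1 - gamma) * t)       # a common rescaled-linear step size
    # Synchronous setting: one fresh next-state sample for EVERY (s, a) pair.
    for s in range(S):
        for a in range(A):
            s_next = rng.choice(S, p=P[s, a])
            target = R[s, a] + gamma * Q[s_next].max()
            Q[s, a] += eta * (target - Q[s, a])

# Compare against the fixed point computed by model-based value iteration.
Q_star = np.zeros((S, A))
for _ in range(2000):
    Q_star = R + gamma * P @ Q_star.max(axis=1)
print("max entrywise error:", np.abs(Q - Q_star).max())
```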

In this work we consider the well-known Secretary Problem: $n$ elements, each with an adversarially assigned value, arrive one by one in some random order, and the goal is to choose the highest-value element. Decisions are made online and are irrevocable: once the algorithm decides, based on the previously observed values, whether or not to choose the currently seen element, it cannot change this decision later. The measure of success is the probability of selecting the highest-value element, minimized over all adversarial assignments of values. We show existential and constructive upper bounds on the success probability achievable in this problem, as a function of the entropy of the randomly chosen arrival order, down to the lowest possible entropy $O(\log\log n)$ for which the success probability can still be constant. We show that below the entropy level $\mathcal{H}<0.5\log\log n$, all algorithms succeed with probability $0$ if the random order is selected uniformly at random from some subset of permutations, while we are able to construct in polynomial time a non-uniform distribution with entropy $\mathcal{H}$ that yields success probability at least $\Omega\left(\frac{1}{(\log\log n +3\log\log\log n -\mathcal{H})^{2+\epsilon}}\right)$, for any constant $\epsilon>0$. We also prove that no algorithm using entropy $\mathcal{H}=O((\log\log n)^a)$ can improve our result by more than polynomially, for any constant $0<a<1$. For entropy $\log\log n$ and larger, our analysis precisely quantifies both the multiplicative and the additive approximation of the success probability. In particular, we improve more than doubly exponentially on the best previously known additive approximation guarantee for the secretary problem.
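For context, the classical baseline under a uniformly random arrival order is the $1/e$-threshold rule, which succeeds with probability about $1/e$; the sketch below simulates it (standard material for illustration only, not the paper's entropy-constrained constructions).

```python
import math
import random

def secretary_succeeds(order, values):
    """Observe the first ~n/e arrivals, then pick the first element beating
    all of them; return True iff the overall best element is picked."""
    n = len(order)
    cutoff = max(1, int(n / math.e))
    best_index = max(range(n), key=lambda i: values[i])
    threshold = max(values[order[i]] for i in range(cutoff))
    for i in range(cutoff, n):
        if values[order[i]] > threshold:
            return order[i] == best_index
    return False  # never picked anything

n, trials = 50, 20000
values = [random.random() for _ in range(n)]
wins = sum(secretary_succeeds(random.sample(range(n), n), values)
           for _ in range(trials))
print(wins / trials)  # ~1/e ~= 0.368 under the uniformly random order
```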

A common approach to tackling a combinatorial optimization problem is to first solve a continuous relaxation and then round the resulting fractional solution. For the latter step, the framework of contention resolution schemes (CR schemes), introduced by Chekuri, Vondr\'ak, and Zenklusen, is a general and successful tool. A CR scheme takes a fractional point $x$ in a relaxation polytope, rounds each coordinate $x_i$ independently to obtain a possibly infeasible set, and then drops some elements in order to satisfy the independence constraints. Intuitively, a CR scheme is $c$-balanced if every element $i$ is selected with probability at least $c \cdot x_i$. It is known that general matroids admit a $(1-1/e)$-balanced CR scheme, and that this is (asymptotically) optimal; this holds in particular for the special case of uniform matroids of rank one. In this work, we provide a simple and explicit monotone CR scheme for uniform matroids of rank $k$ on $n$ elements, with a balancedness of $1 - \binom{n}{k}\left(1-\frac{k}{n}\right)^{n+1-k}\left(\frac{k}{n}\right)^k$, and show that this is optimal. As $n$ grows, this expression converges from above to $1 - e^{-k}k^k/k!$. While this asymptotic bound can be obtained by combining previously known results, those require defining an exponential-sized linear program as well as using random sampling and the ellipsoid algorithm. Our procedure, in contrast, has the advantage of being simple and explicit. Moreover, the scheme generalizes to an optimal CR scheme for partition matroids.
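The closed-form balancedness is easy to evaluate numerically; the short check below (our own illustration) confirms that it decreases toward the stated limit $1 - e^{-k}k^k/k!$ as $n$ grows.

```python
from math import comb, exp, factorial

def balancedness(n, k):
    """The balancedness expression from the abstract for rank-k uniform
    matroids on n elements."""
    return 1 - comb(n, k) * (1 - k / n) ** (n + 1 - k) * (k / n) ** k

k = 3
limit = 1 - exp(-k) * k**k / factorial(k)  # ~0.776 for k = 3
for n in (10, 100, 1000, 10000):
    print(n, balancedness(n, k))           # decreases toward the limit
print("limit:", limit)
```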

Centralized Training with Decentralized Execution (CTDE) is a popular paradigm in cooperative Multi-Agent Reinforcement Learning (MARL) and is widely used in many real applications. One of the major challenges in the training process is credit assignment, which aims to deduce the contribution of each agent from the global rewards. Existing credit assignment methods focus on either decomposing the joint value function into individual value functions or measuring the impact of local observations and actions on the global value function. These approaches lack a thorough consideration of the complicated interactions among multiple agents, leading to unsuitable credit assignment and consequently mediocre MARL performance. We propose Shapley Counterfactual Credit Assignment, a novel method for explicit credit assignment that accounts for coalitions of agents. Specifically, the Shapley Value and its desired properties are leveraged in deep MARL to credit any combination of agents, which allows us to estimate an individual credit for each agent. Despite this capability, the main technical difficulty lies in the computational complexity of the Shapley Value, which grows factorially with the number of agents. We instead utilize an approximation method based on Monte Carlo sampling, which reduces the sample complexity while maintaining effectiveness. We evaluate our method on StarCraft II benchmarks across different scenarios. Our method outperforms existing cooperative MARL algorithms significantly and achieves the state of the art, with especially large margins on the most difficult tasks.
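To illustrate the approximation step, here is a minimal Monte Carlo sketch of Shapley value estimation over agents; the characteristic function `coalition_value` is a hypothetical stand-in for a learned value over agent coalitions, not the paper's model.

```python
import random

def shapley_monte_carlo(agents, coalition_value, n_samples=20000):
    """Estimate each agent's Shapley value by averaging its marginal
    contribution over uniformly random permutations of the agents."""
    estimates = {a: 0.0 for a in agents}
    for _ in range(n_samples):
        order = random.sample(agents, len(agents))
        coalition, prev = set(), coalition_value(set())
        for a in order:
            coalition.add(a)
            cur = coalition_value(coalition)
            estimates[a] += cur - prev
            prev = cur
    return {a: v / n_samples for a, v in estimates.items()}

# Hypothetical toy characteristic function: agent 0 is worth 2, agent 1 is
# worth 1, and together they earn a synergy bonus of 1; agent 2 adds nothing.
def coalition_value(c):
    return 2.0 * (0 in c) + 1.0 * (1 in c) + 1.0 * (0 in c and 1 in c)

print(shapley_monte_carlo([0, 1, 2], coalition_value))
# ~{0: 2.5, 1: 1.5, 2: 0.0}: the synergy is split equally by symmetry.
```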

Many important real-world problems have action spaces that are high-dimensional, continuous, or both, making full enumeration of all possible actions infeasible. Instead, only small subsets of actions can be sampled for the purpose of policy evaluation and improvement. In this paper, we propose a general framework to reason in a principled way about policy evaluation and improvement over such sampled action subsets. This sample-based policy iteration framework can in principle be applied to any reinforcement learning algorithm based on policy iteration. Concretely, we propose Sampled MuZero, an extension of the MuZero algorithm that is able to learn in domains with arbitrarily complex action spaces by planning over sampled actions. We demonstrate this approach on the classical board game of Go and on two continuous control benchmark domains: DeepMind Control Suite and Real-World RL Suite.
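A minimal sketch of the underlying idea, sample-based policy improvement, is below: draw a small set of actions from the current policy and act greedily among them under a value estimate. Sampled MuZero couples this with a learned model and MCTS, which the sketch deliberately omits; `q_value` and the Gaussian policy are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_value(state, action):
    # Hypothetical stand-in for a learned action-value estimate.
    return -float(np.sum((action - state) ** 2))

def sampled_improved_action(state, policy_sample, n_samples=16):
    """Draw K actions from the current policy and pick the best under Q;
    this improves on the sampling policy restricted to the sampled subset."""
    candidates = [policy_sample(state) for _ in range(n_samples)]
    return max(candidates, key=lambda a: q_value(state, a))

state = rng.normal(size=4)
policy_sample = lambda s: s + rng.normal(scale=0.5, size=4)  # Gaussian policy
print(sampled_improved_action(state, policy_sample))
```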
