
In this paper, we design algorithms to protect swarm-robotics applications against sensor denial-of-service (DoS) attacks on robots. We focus on applications requiring the robots to jointly select actions, e.g., which trajectory to follow, among a set of available ones. Such joint-selection tasks are central to large-scale robotic applications, such as multi-robot motion planning for target tracking, but the current attack-robust algorithms are centralized. In this paper, we propose a general-purpose distributed algorithm for robust optimization at scale, with local communications only. We name it Distributed Robust Maximization (DRM). DRM follows a divide-and-conquer approach that distributively partitions the problem among cliques of robots. The cliques then optimize in parallel, independently of each other. We prove DRM achieves close-to-optimal performance. We demonstrate DRM's performance in both Gazebo and MATLAB simulations, in scenarios of active target tracking with swarms of robots. In the simulations, DRM achieves computational speed-ups of 1-2 orders of magnitude over the centralized algorithms, yet it nearly matches their tracking performance. Since DRM overestimates the number of attacks in each clique, we also introduce an Improved Distributed Robust Maximization (IDRM) algorithm. IDRM infers the number of attacks in each clique less conservatively than DRM by leveraging 3-hop neighboring communications. We verify in simulations that IDRM improves on DRM's performance.
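
For concreteness, here is a minimal sketch of the divide-and-conquer idea only, not the authors' DRM implementation: the cliques are assumed given, the team objective is a toy coverage score (each candidate action covers a set of target IDs), and the two-phase "bait + greedy" rule is one simple way to hedge against a known number of per-clique sensor-DoS attacks. All robot, action, and parameter names are illustrative.

```python
# Hedged sketch of a DRM-style divide-and-conquer scheme (not the paper's code).
# Assumption: a coverage-type objective -- the team score is the number of
# distinct targets covered by the selected actions.

def robust_greedy_clique(clique, coverage, num_attacks):
    """Select one action per robot in `clique`, hedging against up to
    `num_attacks` sensor-DoS attacks with a two-phase (bait + greedy) rule."""
    standalone = {r: max(coverage[r], key=lambda a: len(coverage[r][a]))
                  for r in clique}
    # Phase 1 ("bait"): the num_attacks robots with the largest standalone
    # coverage keep their best actions and are assumed to absorb the attacks.
    order = sorted(clique, key=lambda r: len(coverage[r][standalone[r]]),
                   reverse=True)
    bait, rest = order[:num_attacks], order[num_attacks:]
    selection = {r: standalone[r] for r in bait}
    covered = set().union(*(coverage[r][selection[r]] for r in bait)) if bait else set()
    # Phase 2: the remaining robots greedily maximize marginal coverage.
    for r in rest:
        best = max(coverage[r], key=lambda a: len(coverage[r][a] - covered))
        selection[r] = best
        covered |= coverage[r][best]
    return selection

def drm_sketch(cliques, coverage, attacks_per_clique):
    """Cliques optimize independently (in parallel in the real system)."""
    selection = {}
    for clique in cliques:
        selection.update(robust_greedy_clique(clique, coverage, attacks_per_clique))
    return selection

# Toy run: 4 robots in 2 cliques, each action covers a set of target IDs.
cov = {
    "r1": {"traj_a": {1, 2, 3}, "traj_b": {3}},
    "r2": {"traj_a": {2, 3}, "traj_b": {4}},
    "r3": {"traj_a": {5, 6}, "traj_b": {6, 7}},
    "r4": {"traj_a": {5}, "traj_b": {8, 9}},
}
print(drm_sketch([["r1", "r2"], ["r3", "r4"]], cov, attacks_per_clique=1))
```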

Related Content


We propose a variant of the alternating direction method of multipliers (ADMM) to solve constrained trajectory optimization problems. Our ADMM framework breaks a joint optimization into small sub-problems, leading to a low iteration cost and decentralized parameter updates. Our method inherits the theoretical properties of the primal interior point method (P-IPM), i.e., guaranteed collision avoidance and homotopy preservation, while being orders of magnitude faster. We have analyzed the convergence and evaluated our method on time-optimal multi-UAV trajectory optimization and simultaneous goal-reaching of multiple robot arms, where we take kinematics limits, dynamics limits, and homotopy-preserving collision constraints into account. Our method achieves a 10-100x speedup while generating trajectories of quality comparable to the state-of-the-art P-IPM solver.
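
To make the splitting idea concrete, the toy consensus-ADMM loop below (not the paper's trajectory solver; the quadratic per-agent costs and the penalty value rho are assumptions) shows how ADMM decomposes a joint problem into cheap local sub-problems plus a coordination step, which is the structural property that enables decentralized parameter updates.

```python
# Toy consensus ADMM: each "agent" i holds a local cost 0.5*(x_i - a_i)^2 and
# all agents must agree on a common value z. The local x-updates have closed
# form; only (z, u) couple the agents.
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)          # local primal variables
    u = np.zeros_like(a)          # scaled dual variables
    z = 0.0                       # consensus variable
    for _ in range(iters):
        # Local sub-problems (closed form for quadratic costs).
        x = (a + rho * (z - u)) / (1.0 + rho)
        # Consensus update (average of local estimates).
        z = np.mean(x + u)
        # Dual update penalizes the disagreement x_i != z.
        u = u + x - z
    return z, x

z, x = consensus_admm([1.0, 2.0, 6.0])
print(z)   # converges to the average, 3.0
```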

Containerized services allocated in mobile edge clouds create the opportunity for large-scale, real-time applications to achieve low-latency responses. Meanwhile, live container migration has been introduced to support dynamic resource management and user mobility. However, with the expansion of network topology scale and increasing migration requests, current multiple-migration planning and scheduling algorithms designed for cloud data centers cannot cope with large-scale scenarios in edge computing. The user-mobility-induced live migrations in edge computing require near-real-time scheduling. Therefore, in this paper, through the Software-Defined Networking (SDN) controller, the resource competition among live migrations is modeled as a dynamic resource dependency graph. We propose an iterative Maximal Independent Set (MIS)-based multiple-migration planning and scheduling algorithm. Using real-world mobility traces of taxis and telecom base station coordinates, the evaluation results indicate that our solution can efficiently schedule multiple live container migrations in large-scale edge computing environments. It improves processing time 3000-fold compared with the state-of-the-art migration planning algorithm in clouds, while providing guaranteed migration performance for time-critical migrations.
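
The sketch below illustrates the iterative-MIS scheduling pattern in its simplest form (a greedy MIS over a hand-built conflict graph; it is not the paper's SDN-integrated planner and ignores bandwidth, priorities, and migration durations): migrations that compete for the same resource are connected by an edge, each round schedules a maximal independent set of non-conflicting migrations, and the process repeats on what remains.

```python
# Iterative MIS-based scheduling sketch over a resource-dependency graph.

def greedy_mis(nodes, adj):
    """Greedy maximal independent set; `adj` maps node -> set of conflicting nodes."""
    mis, blocked = [], set()
    for v in sorted(nodes):              # deterministic order; priorities could go here
        if v not in blocked:
            mis.append(v)
            blocked.add(v)
            blocked |= adj[v]
    return mis

def schedule_migrations(migrations, conflicts):
    """Return a list of rounds; each round is a set of migrations run concurrently."""
    remaining = set(migrations)
    adj = {m: set(conflicts.get(m, ())) for m in migrations}
    rounds = []
    while remaining:
        batch = greedy_mis(remaining, adj)
        rounds.append(batch)
        remaining -= set(batch)
        for m in remaining:              # drop edges to already-scheduled migrations
            adj[m] -= set(batch)
    return rounds

# Toy instance: m1/m2 share a link, m2/m3 share a host, m4 is independent.
print(schedule_migrations(
    ["m1", "m2", "m3", "m4"],
    {"m1": {"m2"}, "m2": {"m1", "m3"}, "m3": {"m2"}}))
# -> [['m1', 'm3', 'm4'], ['m2']]
```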

In this paper, we present a framework that unites obstacle avoidance and deliberate physical interaction for robotic manipulators. As humans and robots begin to coexist in work and household environments, pure collision avoidance is insufficient, as human-robot contact is inevitable and, in some situations, desired. Our work enables manipulators to anticipate, detect, and act on contact. To achieve this, we allow limited deviation from the robot's original trajectory through velocity reduction and motion restrictions. Then, if contact occurs, the robot can detect it and maneuver based on a novel dynamic contact thresholding algorithm. The core contribution of this work is dynamic contact thresholding, which allows a manipulator with onboard proximity sensors to track nearby objects and reduce contact forces in anticipation of a collision. Our framework elicits natural behavior during physical human-robot interaction. We evaluate our system on a variety of scenarios using the Franka Emika Panda robot arm; collectively, our results demonstrate that our contribution is not only able to avoid and react to contact, but also to anticipate it.
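
A hedged sketch of the underlying idea (the linear ramp, distance band, and force limits below are illustrative assumptions, not the paper's exact thresholding algorithm): the closer a proximity sensor reports an object, the lower the force needed to classify a reading as contact, and the more the commanded velocity is scaled down.

```python
# Illustrative dynamic contact thresholding: distances in meters, forces in newtons.

def dynamic_threshold(distance, d_near=0.05, d_far=0.30, f_min=5.0, f_max=25.0):
    """Map the closest measured distance to a contact-force threshold."""
    if distance <= d_near:
        return f_min
    if distance >= d_far:
        return f_max
    alpha = (distance - d_near) / (d_far - d_near)
    return f_min + alpha * (f_max - f_min)

def velocity_scale(distance, d_near=0.05, d_far=0.30):
    """Slow the arm down as an object approaches (1.0 = nominal speed)."""
    return min(1.0, max(0.1, (distance - d_near) / (d_far - d_near)))

def contact_detected(measured_force, distance):
    return measured_force > dynamic_threshold(distance)

# The same 12 N reading is ignored far away but treated as contact up close.
print(contact_detected(12.0, distance=0.25))  # False
print(contact_detected(12.0, distance=0.06))  # True
```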

Multi-criteria decision-making often requires finding a small representative subset of a database. A recently proposed approach is the regret minimization set (RMS) query. RMS returns a fixed-size subset S of dataset D that minimizes the regret ratio of S (the difference between the score of the top-1 in S and the score of the top-1 in D, over all possible utility functions). Existing work showed that the regret ratio cannot accurately quantify the regret level of a user. Further, unlike the regret ratio, the notion of rank is readily understood by users. Consequently, prior work considered the problem of finding a minimal set S with at most k rank-regret (the minimal rank of the tuples of S in the sorted list of D). Corresponding to RMS, we focus on the dual version of this problem, defined as the rank-regret minimization (RRM) problem, which seeks a fixed-size set S that minimizes the maximum rank-regret over all possible utility functions. Further, we generalize RRM and propose the restricted rank-regret minimization (RRRM) problem, which minimizes the rank-regret of S for functions in a restricted space. The solution to RRRM usually has a lower regret level and can better serve the specific preferences of some users. In 2D space, we design a dynamic programming algorithm, 2DRRM, that finds the optimal solution for RRM. In HD space, we propose an algorithm, HDRRM, for RRM that bounds the output size and provides a double approximation guarantee on the rank-regret. Both 2DRRM and HDRRM generalize to the RRRM problem. Extensive experiments on synthetic and real datasets verify the efficiency and effectiveness of our algorithms.
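
To fix intuition for the objective itself (not the 2DRRM/HDRRM algorithms), the brute-force sketch below evaluates the rank-regret of a candidate subset S over a sample of linear utility functions: for each utility, rank all tuples of D, take the best (smallest) rank attained by S, and report the worst case over the sampled utilities. Dataset, subset, and utility sample are toy assumptions.

```python
# Brute-force rank-regret evaluation for a candidate subset over sampled utilities.
import numpy as np

def rank_regret(D, S_idx, utilities):
    """D: (n, d) array of tuples; S_idx: indices of the subset S;
    utilities: (k, d) array of non-negative weight vectors."""
    worst = 0
    for u in utilities:
        scores = D @ u
        order = np.argsort(-scores)               # rank 1 = highest score
        ranks = np.empty(len(D), dtype=int)
        ranks[order] = np.arange(1, len(D) + 1)
        worst = max(worst, min(ranks[i] for i in S_idx))
    return worst

rng = np.random.default_rng(0)
D = rng.random((100, 2))                          # toy 2D dataset
utilities = rng.random((50, 2))                   # sampled utility functions
print(rank_regret(D, S_idx=[0, 1, 2], utilities=utilities))
```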

This paper presents a distributed algorithm applicable to a wide range of practical multi-robot applications. In such applications, the user-defined objectives of the mission can be cast as a general optimization problem, without explicit guidelines for the subtasks of each robot. Owing to the unknown environment, unknown robot dynamics, sensor nonlinearities, etc., the analytic form of the optimization cost function is not available a priori. Therefore, standard gradient-descent-like algorithms are not applicable to these problems. To tackle this, we introduce a new algorithm that carefully designs each robot's subcost function, the optimization of which can accomplish the overall team objective. Building on this transformation, we propose a distributed methodology based on the cognitive-based adaptive optimization (CAO) algorithm, which is able to approximate the evolution of each robot's cost function and to adequately optimize its decision variables (robot actions). The latter is achieved by learning online only the problem-specific characteristics that affect the accomplishment of the mission objectives. The overall low-complexity algorithm can straightforwardly incorporate any kind of operational constraint, is fault tolerant, and can appropriately handle time-varying cost functions. A cornerstone of this approach is that it shares the same convergence characteristics as block coordinate descent algorithms. The proposed algorithm is evaluated in three heterogeneous simulation set-ups under multiple scenarios, against both general-purpose and problem-specific algorithms. Source code is available at \url{//github.com/athakapo/A-distributed-plug-n-play-algorithm-for-multi-robot-applications}.
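
The sketch below captures the general flavor of CAO-style, measurement-driven optimization for a single robot, under stated assumptions that are not the authors' implementation: the unknown subcost is only available through measurements, a simple quadratic-feature surrogate is fit from recent (decision, cost) pairs, and a few random perturbations of the current decision are ranked on the surrogate, keeping only feasible ones. The toy cost, constraint, and step size are invented for illustration.

```python
# Hedged sketch of surrogate-based, measurement-only optimization (CAO-like idea).
import numpy as np

rng = np.random.default_rng(1)

def measure_cost(x):
    # Stand-in for the unknown, measurement-only subcost of one robot.
    return float(np.sum((x - np.array([1.0, -2.0])) ** 2))

def feasible(x):
    return np.all(np.abs(x) <= 5.0)               # example operational constraint

def cao_step(x, history, n_candidates=20, step=0.3):
    # Fit a surrogate cost ~ theta^T [1, x, x^2] from recent samples.
    X = np.array([np.concatenate(([1.0], h[0], h[0] ** 2)) for h in history])
    y = np.array([h[1] for h in history])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    surrogate = lambda z: np.concatenate(([1.0], z, z ** 2)) @ theta
    # Rank random feasible perturbations of the current decision on the surrogate.
    candidates = [x + step * rng.standard_normal(x.shape) for _ in range(n_candidates)]
    candidates = [c for c in candidates if feasible(c)] or [x]
    return min(candidates, key=surrogate)

x = np.zeros(2)
history = []
for _ in range(60):
    cost = measure_cost(x)                        # only measurements are available
    history.append((x.copy(), cost))
    x = cao_step(x, history[-20:])
print(x)                                          # ends near the minimizer [1, -2]
```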

Multi-agent formation control combined with obstacle avoidance is one of the most actively studied topics in the field of multi-agent systems. Although classic controllers such as model predictive control (MPC) and fuzzy control achieve a certain measure of success, most of them require precise global information, which is not accessible in harsh environments. On the other hand, some reinforcement learning (RL) based approaches adopt the leader-follower structure to organize the agents' behaviors, which sacrifices collaboration between agents and thus suffers from bottlenecks in maneuverability and robustness. In this paper, we propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL). Agents in our system use only local and relative information to make decisions and control themselves in a distributed fashion. The agents quickly reorganize themselves into a new topology if any of them is disconnected. Compared with baselines (both classic control methods and another RL-based method), our method achieves better formation error and formation convergence rate, with an on-par obstacle-avoidance success rate. The feasibility of our method is verified by both simulation and hardware implementation with Ackermann-steering vehicles.
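
As a small illustration of the local-only quantities such a controller can rely on (the functions and the reward shaping below are assumptions for the sketch, not the paper's reward design): a formation error computed from the relative positions of neighbors against desired offsets, and an obstacle penalty computed from relative ranges, neither of which requires global coordinates.

```python
# Local-information ingredients for distributed formation + obstacle avoidance.
import numpy as np

def formation_error(rel_neighbors, desired_offsets):
    """Mean distance between observed and desired relative neighbor positions."""
    diffs = [np.linalg.norm(np.asarray(p) - np.asarray(d))
             for p, d in zip(rel_neighbors, desired_offsets)]
    return float(np.mean(diffs))

def obstacle_penalty(rel_obstacles, safe_dist=1.0):
    """Penalty grows as any obstacle gets closer than `safe_dist`."""
    if not rel_obstacles:
        return 0.0
    dists = [np.linalg.norm(np.asarray(o)) for o in rel_obstacles]
    return float(sum(max(0.0, safe_dist - d) for d in dists))

# One agent observing two neighbors and one obstacle, all in its own frame.
err = formation_error(rel_neighbors=[(1.1, 0.0), (0.0, 0.9)],
                      desired_offsets=[(1.0, 0.0), (0.0, 1.0)])
pen = obstacle_penalty(rel_obstacles=[(0.4, 0.3)])
reward = -err - pen          # example shaping: hold formation, avoid obstacles
print(err, pen, reward)
```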

We study sparse linear regression over a network of agents, modeled as an undirected graph (with no centralized node). The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty on the consensus constraint -- the latter being instrumental to obtaining distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high-dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity value, $d$ is the ambient dimension, and $N$ is the total sample size in the network -- this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error -- the rate scales as $\mathcal{O}(d)$, revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample rate and convergence rate scalings.
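
An illustrative distributed proximal-gradient sketch for this penalized formulation (the data, graph, penalty weight, and step size are toy assumptions): each agent holds its own (A_i, b_i), takes a gradient step on its local least-squares loss plus the quadratic disagreement penalty with its neighbors, and then applies the soft-thresholding prox of the l1 term; only neighbor iterates are exchanged.

```python
# Distributed proximal gradient on: sum_i [ (1/2n_i)||A_i x_i - b_i||^2 + lam||x_i||_1 ]
#                                   + (gamma/2) * sum_{(i,j) in E} ||x_i - x_j||^2
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(A, b, neighbors, lam=0.05, gamma=1.0,
                          step=0.05, iters=600):
    m, d = len(A), A[0].shape[1]
    X = np.zeros((m, d))                          # one row per agent
    for _ in range(iters):
        X_new = np.empty_like(X)
        for i in range(m):
            grad = A[i].T @ (A[i] @ X[i] - b[i]) / len(b[i])
            grad += gamma * sum(X[i] - X[j] for j in neighbors[i])
            X_new[i] = soft_threshold(X[i] - step * grad, step * lam)
        X = X_new                                  # simultaneous (Jacobi) update
    return X

# Toy network: 3 agents on a path graph, sparse ground truth.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, 0.0, -0.5, 0.0])
A = [rng.standard_normal((20, 5)) for _ in range(3)]
b = [Ai @ x_true + 0.01 * rng.standard_normal(20) for Ai in A]
X = distributed_prox_grad(A, b, neighbors={0: [1], 1: [0, 2], 2: [1]})
print(np.round(X.mean(axis=0), 2))                # close to x_true, up to l1 shrinkage
```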

We prove several new tight distributed lower bounds for classic symmetry breaking graph problems. As a basic tool, we first provide a new insightful proof that any deterministic distributed algorithm that computes a $\Delta$-coloring on $\Delta$-regular trees requires $\Omega(\log_\Delta n)$ rounds and any randomized algorithm requires $\Omega(\log_\Delta\log n)$ rounds. We prove this result by showing that a natural relaxation of the $\Delta$-coloring problem is a fixed point in the round elimination framework. As a first application, we show that our $\Delta$-coloring lower bound proof directly extends to arbdefective colorings. We exactly characterize which variants of the arbdefective coloring problem are "easy", and which of them instead are "hard". As a second application, which we see as our main contribution, we use the structure of the fixed point as a building block to prove lower bounds as a function of $\Delta$ for a large class of distributed symmetry breaking problems. For example, we obtain a tight lower bound for the fundamental problem of computing a $(2,\beta)$-ruling set. This is an exponential improvement over the best existing lower bound for the problem, which was proven in [FOCS '20]. Our lower bound even applies to a much more general family of problems that allows for almost arbitrary combinations of natural constraints from coloring problems, orientation problems, and independent set problems, and provides a single unified proof for known and new lower bound results for these types of problems. Our lower bounds as a function of $\Delta$ also imply lower bounds as a function of $n$. We obtain, for example, that maximal independent set, on trees, requires $\Omega(\log n / \log \log n)$ rounds for deterministic algorithms, which is tight.

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose using GPU-accelerated RL simulations as an alternative to CPU-based ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups when learning various continuous-control locomotion tasks. With one GPU and one CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings to train more challenging locomotion tasks.
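
The toy snippet below (PyTorch, not NVIDIA Flex; the point-mass dynamics, reward, and environment count are made up) illustrates why GPU-resident simulation pays off: thousands of simple environments are stepped with a single batched tensor operation, so rollouts stay on the same device as the policy and per-environment overhead is amortized.

```python
# Batched "simulation" of many point-mass environments as tensor ops.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs, dt = 4096, 0.02

pos = torch.zeros(num_envs, 2, device=device)      # batched state
vel = torch.zeros(num_envs, 2, device=device)
goal = torch.ones(num_envs, 2, device=device)

def step(pos, vel, actions):
    """Advance all environments at once; `actions` has shape (num_envs, 2)."""
    vel = vel + dt * actions
    pos = pos + dt * vel
    reward = -torch.norm(pos - goal, dim=1)        # negative distance to goal
    return pos, vel, reward

# One rollout step for a random "policy"; in a Deep RL loop the policy network
# would also live on `device`, avoiding CPU<->GPU transfers.
actions = torch.randn(num_envs, 2, device=device)
pos, vel, reward = step(pos, vel, actions)
print(pos.shape, reward.mean().item())
```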

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition number.
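
A hedged numerical sketch of the dual approach described above (the quadratic local costs, the 3-node path graph, and the step size are toy choices, not the paper's general setting): the per-edge consensus constraints are dualized, the primal minimizers have closed form, and Nesterov's accelerated gradient is run on the dual; each update uses only per-node or per-edge information, and the Lipschitz constant is governed by the spectrum of the edge-incidence (graph) matrix.

```python
# Accelerated dual decomposition for consensus-constrained quadratic costs.
import numpy as np

a = np.array([1.0, 2.0, 6.0])                 # local data: f_i(x) = 0.5*(x - a_i)^2
edges = [(0, 1), (1, 2)]                      # communication graph (path)
A = np.zeros((len(edges), len(a)))            # edge-incidence matrix: (Ax)_e = x_i - x_j
for e, (i, j) in enumerate(edges):
    A[e, i], A[e, j] = 1.0, -1.0

L = np.linalg.norm(A @ A.T, 2)                # Lipschitz constant of the dual gradient
lam = np.zeros(len(edges))                    # edge multipliers
y, t = lam.copy(), 1.0

for _ in range(200):
    x = a - A.T @ y                           # closed-form primal minimizers given y
    grad = A @ x                              # dual gradient = consensus violation
    lam_next = y + grad / L                   # ascent step (dual is a maximization)
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = lam_next + (t - 1.0) / t_next * (lam_next - lam)   # Nesterov momentum
    lam, t = lam_next, t_next

print(a - A.T @ lam)                          # all entries approach the average, 3.0
```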
