
We propose finite-time measures to compute the divergence, the curl and the velocity gradient tensor of the point-particle velocity for two- and three-dimensional moving particle clouds. To this end, tessellation of the particle positions is applied to associate a volume with each particle. Considering two subsequent time instants, the dynamics of the volume can then be assessed. Determining the volume change of tessellation cells yields the divergence of the particle velocity, and the rotation of the cells yields its curl. Thus the helicity of the particle velocity can likewise be computed, and the swirling motion of particle clouds can be quantified. We propose a modified version of Voronoi tessellation which overcomes some drawbacks of the classical Voronoi tessellation. First we assess the numerical accuracy for randomly distributed particles. We find a strong Pearson correlation between the divergence computed with the modified version and the analytic value, which confirms the validity of the method. Moreover, the modified Voronoi-based method is observed to converge with first order in space and time, in two and three dimensions, for randomly distributed particles, which is not the case for the classical Voronoi tessellation. Furthermore, for advecting particles we consider random velocity fields with imposed power-law energy spectra, motivated by turbulence, and determine the number of particles necessary to guarantee a given precision. Finally, applications to fluid particles advected in three-dimensional fully developed isotropic turbulence show the utility of the approach for real-world applications, quantifying self-organization in particle clouds and their vortical or even swirling motion.
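To illustrate the core idea (the relative volume change of tessellation cells between two time instants approximates the divergence), here is a minimal two-dimensional sketch using the classical Voronoi tessellation from SciPy; the paper's modified tessellation is not reproduced. A uniform dilation field $u = cx$ with constant divergence $2c$ is assumed so the result can be checked analytically:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
pts = rng.uniform(0.2, 0.8, size=(400, 2))  # keep particles away from the domain boundary

def cell_areas(points):
    """Area of each bounded Voronoi cell; NaN for unbounded cells."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.nan)
    for i, reg in enumerate(vor.point_region):
        region = vor.regions[reg]
        if len(region) == 0 or -1 in region:
            continue  # cell extends to infinity
        areas[i] = ConvexHull(vor.vertices[region]).volume  # in 2-D, .volume is the area
    return areas

# Uniform dilation u = c*x has constant divergence 2c in two dimensions
c, dt = 0.15, 1e-3
A0 = cell_areas(pts)
A1 = cell_areas(pts * (1.0 + c * dt))  # advect one time step: x + dt*u(x)

# Finite-time divergence estimate per particle: relative volume change rate
div = (A1 - A0) / (A0 * dt)
mean_div = float(np.nanmean(div))
```

For this conformal field the Voronoi cells scale exactly, so the per-particle estimate equals $2c + c^2\,dt$; for general fields the error is first order in space and time, as studied in the paper.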


This manuscript introduces an object deformability-agnostic framework for co-carrying tasks that are shared between a person and multiple robots. Our approach allows full control of the co-carrying trajectory by the person while the load is shared with multiple robots depending on the size and weight of the object. This is achieved by merging the haptic information transferred through the object and the human motion information obtained from a motion capture system. One important advantage of the framework is that no strict internal communication is required between the robots, regardless of the object size and deformation characteristics. We validate the framework with two challenging real-world scenarios: co-transportation of a rigid wooden closet and of a bulky box on top of forklift moving straps, the latter characterizing deformable objects. In order to evaluate the generalizability of the proposed framework, a heterogeneous team of two mobile manipulators, each consisting of an omnidirectional mobile base and a collaborative robotic arm with a different number of DoFs, is chosen for the experiments. The qualitative comparison between our controller and the baseline controller (i.e., an admittance controller) during these experiments demonstrates the effectiveness of the proposed framework, especially when co-carrying deformable objects. Furthermore, we believe that the performance of our framework in the experiment with the lifting straps offers a promising solution for the co-transportation of bulky and ungraspable objects.
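For reference, the baseline admittance controller can be sketched in a few lines as a virtual mass-damper driven by the sensed interaction force. The gains, the constant force, and the forward-Euler integration below are illustrative assumptions, not the values or the implementation used in the experiments:

```python
# Virtual mass-damper admittance: M*dv/dt + D*v = f_ext, integrated with forward Euler.
M, D = 5.0, 20.0   # hypothetical virtual mass and damping gains
dt = 0.01          # control period in seconds
f_ext = 10.0       # constant interaction force sensed through the object
v = x = 0.0
for _ in range(2000):
    a = (f_ext - D * v) / M   # admittance dynamics give the commanded acceleration
    v += a * dt               # commanded velocity sent to the mobile base
    x += v * dt               # resulting displacement

# under a constant force the commanded velocity settles at f_ext / D
```

This compliant response to the human's force is what the proposed framework is compared against.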

Suppose we are given access to $n$ independent samples from a distribution $\mu$ and we wish to output one of them, with the goal of making the output distributed as close as possible to a target distribution $\nu$. In this work we show that the optimal total variation distance, as a function of $n$, is given by $\tilde\Theta(\frac{D}{f'(n)})$ over the class of all pairs $\nu,\mu$ with a bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was studied only for the case when the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (bounded $f$-divergence, as opposed to bounded Radon-Nikodym derivative). Finally, we study the efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
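One simple way to see the selection problem in action is self-normalized importance selection over the $n$ samples: output sample $x_i$ with probability proportional to the density ratio $\frac{d\nu}{d\mu}(x_i)$. The toy discrete pair $(\mu,\nu)$ below is an illustrative assumption, and this scheme is only a baseline, not the optimal algorithm analyzed in the work:

```python
import numpy as np

rng = np.random.default_rng(1)
support = np.arange(4)
mu = np.array([0.4, 0.3, 0.2, 0.1])   # sampling distribution
nu = np.array([0.1, 0.2, 0.3, 0.4])   # target distribution
w = nu / mu                           # Radon-Nikodym derivative on the support

def select_one(n):
    """Draw n i.i.d. samples from mu and output one with prob proportional to w(x_i)."""
    xs = rng.choice(support, size=n, p=mu)
    probs = w[xs] / w[xs].sum()
    return rng.choice(xs, p=probs)

trials = 20_000
out = np.array([select_one(50) for _ in range(trials)])
emp = np.bincount(out, minlength=4) / trials
tv = 0.5 * float(np.abs(emp - nu).sum())   # empirical TV distance to the target
```

The output distribution approaches $\nu$ as $n$ grows, while simply returning one raw sample would stay at TV distance $0.4$ for this pair.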

Motivated by the increasing need for fast processing of large-scale graphs, we study a number of fundamental graph problems in a message-passing model for distributed computing, called the $k$-machine model, where we have $k$ machines that jointly perform computations on $n$-node graphs. The graph is assumed to be partitioned in a balanced fashion among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point via bandwidth-constrained links, and the goal is to minimize the round complexity, i.e., the number of communication rounds required to finish a computation. We present a generic methodology that allows one to obtain efficient algorithms in the $k$-machine model using distributed algorithms for the classical CONGEST model of distributed computing. Using this methodology, we obtain algorithms for various fundamental graph problems such as connectivity, minimum spanning trees, shortest paths, maximal independent sets, and finding subgraphs, showing that many of these problems can be solved in $\tilde{O}(n/k)$ rounds; this shows that one can achieve a speedup nearly linear in $k$. To complement our upper bounds, we present lower bounds on the round complexity that quantify the fundamental limitations of solving graph problems distributively. We first show a lower bound of $\Omega(n/k)$ rounds for computing a spanning tree of the input graph. This result implies the same bound for other fundamental problems such as computing a minimum spanning tree, a breadth-first tree, or a shortest paths tree. We also show a $\tilde \Omega(n/k^2)$ lower bound for connectivity, spanning tree verification and other related problems. The latter lower bounds follow from the development and application of novel results in a random-partition variant of the classical communication complexity model.
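The flavour of the model can be sketched with connectivity: vertices are randomly assigned to $k$ machines, each machine compresses its local edges to a spanning forest, and only forest edges are communicated. The merged forests preserve connectivity because an edge discarded by a machine already has its endpoints connected within that machine's kept edges. The graph, sizes, and the edge-to-machine convention below are illustrative assumptions:

```python
import random

def spanning_forest(edges, n):
    """Kruskal-style spanning forest of an edge list via union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return forest

random.seed(0)
n, k = 200, 4
edges = [(i, (i + 1) % n) for i in range(n)]                      # a cycle keeps the graph connected
edges += [tuple(random.sample(range(n), 2)) for _ in range(300)]  # plus random chords

machine = [random.randrange(k) for _ in range(n)]   # random vertex partition, balanced in expectation
local = [[] for _ in range(k)]
for u, v in edges:
    local[machine[min(u, v)]].append((u, v))        # one convention: the smaller endpoint's machine holds the edge

merged = []
for m in range(k):
    merged += spanning_forest(local[m], n)          # each machine communicates only its local forest
final = spanning_forest(merged, n)
connected = len(final) == n - 1                     # coordinator certifies connectivity
```

This is only a sequential simulation of the communication pattern, not an implementation of the paper's conversion methodology or its round-complexity bounds.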

We study the distributed multi-user secret sharing (DMUSS) problem under the perfect privacy condition. In a DMUSS problem, multiple secret messages are deployed, and the shares are offloaded to the storage nodes. Moreover, the access structure is extremely incomplete, as the decoding collection of each secret message contains only one set, and by the perfect privacy condition this collection is also the colluding collection of all other secret messages. The secret message rate is defined as the size of the secret message normalized by the size of a share. We characterize the capacity region of the DMUSS problem for a given access structure, defined as the set of all achievable rate tuples. In the achievable scheme, we assume all shares are mutually independent and design the decoding function based on the fact that the decoding collection of each secret message has only one set. It then turns out that the perfect privacy condition is equivalent to the full-rank property of some matrices consisting of different indeterminates and zeros. Such a solution exists if the field size is larger than the number of secret messages. Finally, with a matching converse stating that the size of a secret is upper bounded by the sum of the sizes of the non-colluding shares, we characterize the capacity region of the DMUSS problem under the perfect privacy condition.
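The simplest linear scheme with this all-or-nothing structure is classic additive $k$-out-of-$k$ sharing over a prime field: the single decoding set (all shares) recovers the secret, while any proper subset is jointly uniform and learns nothing. This is only an illustration of perfect privacy via linear algebra over a sufficiently large field, not the paper's DMUSS construction:

```python
import numpy as np

p = 257  # prime field size; the paper's construction needs it larger than the number of secrets
rng = np.random.default_rng(2)

def share(secret, k):
    """Additive k-of-k sharing over GF(p): any k-1 shares are jointly uniform."""
    r = rng.integers(0, p, size=k - 1)          # k-1 uniformly random shares
    return np.append(r, (secret - r.sum()) % p)  # last share fixes the sum to the secret

def reconstruct(shares):
    """Only the full decoding set recovers the secret."""
    return int(np.sum(shares) % p)

shares = share(123, 5)
```

Privacy holds because, for any fixed secret, every assignment of $k-1$ shares is equally likely, so those shares carry no information about the secret.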

Goal-conditioned reinforcement learning (GCRL) refers to learning general-purpose skills which aim to reach diverse goals. In particular, offline GCRL only requires purely pre-collected datasets to perform training tasks without additional interactions with the environment. Although offline GCRL has become increasingly prevalent and many previous works have demonstrated its empirical success, the theoretical understanding of efficient offline GCRL algorithms is not well established, especially when the state space is huge and the offline dataset only covers the policy we aim to learn. In this paper, we propose a novel provably efficient algorithm (the sample complexity is $\tilde{O}({\rm poly}(1/\epsilon))$ where $\epsilon$ is the desired suboptimality of the learned policy) with general function approximation. Our algorithm only requires nearly minimal assumptions of the dataset (single-policy concentrability) and the function class (realizability). Moreover, our algorithm consists of two uninterleaved optimization steps, which we refer to as $V$-learning and policy learning, and is computationally stable since it does not involve minimax optimization. To the best of our knowledge, this is the first algorithm with general function approximation and single-policy concentrability that is both statistically efficient and computationally stable.
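The two uninterleaved steps (fit values from the offline dataset first, then extract a policy, with no minimax optimization) can be illustrated on a tabular toy chain. The sketch below is plain fitted value iteration over a fixed dataset followed by greedy extraction, under an assumed deterministic environment; it is not the paper's function-approximation algorithm:

```python
import numpy as np

# Deterministic 5-state chain; action 0 = left, 1 = right; the goal is state 4.
n_states, goal, gamma = 5, 4, 0.9

def step(s, a):
    return min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)

# Offline dataset collected once by a uniformly random behaviour policy
rng = np.random.default_rng(7)
dataset = []
s = 0
for _ in range(2000):
    a = int(rng.integers(2))
    s2 = step(s, a)
    dataset.append((s, a, s2))
    s = 0 if s2 == goal else s2          # episodes reset at the goal

# Step 1 (value learning): Bellman backups over transitions seen in the data only
Q = np.zeros((n_states, 2))
for _ in range(50):
    for (s, a, s2) in dataset:
        r = 1.0 if s2 == goal else 0.0   # goal-conditioned sparse reward
        Q[s, a] = r + gamma * (0.0 if s2 == goal else Q[s2].max())

# Step 2 (policy learning): greedy extraction, performed after value fitting
policy = Q.argmax(axis=1)

# roll out the learned policy from the start state
s, steps = 0, 0
while s != goal and steps < 10:
    s = step(s, int(policy[s]))
    steps += 1
```

The point of the toy is the structure: the two optimization steps never interleave, so no saddle-point problem arises.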

Purpose of review: We review recent advances in algorithmic development and validation for modeling and control of soft robots leveraging the Koopman operator theory. Recent findings: We identify the following trends in recent research efforts in this area. (1) The design of lifting functions used in the data-driven approximation of the Koopman operator is critical for soft robots. (2) Robustness considerations are emphasized, and methods have been proposed to reduce the effect of uncertainty and noise during modeling and control. (3) The Koopman operator has been embedded into different model-based control structures to drive soft robots. Summary: Because of their compliance and nonlinearities, modeling and control of soft robots face key challenges. To resolve these challenges, Koopman operator-based approaches have been proposed, in an effort to express the nonlinear system in a linear manner. The Koopman operator enables global linearization to reduce nonlinearities and/or serves as model constraints in model-based control algorithms for soft robots. Various implementations in soft robotic systems are illustrated and summarized in the review.
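A minimal sketch of the data-driven approximation (extended dynamic mode decomposition, EDMD): choose lifting functions, collect snapshot pairs, and fit the Koopman matrix by least squares. The toy dynamics and dictionary below are chosen so the lifted system is exactly linear; they stand in for a soft robot's data and are purely illustrative:

```python
import numpy as np

lam, mu_, c = 0.9, 0.5, 0.3

def step(x):
    """Nonlinear toy dynamics admitting an exact finite-dimensional Koopman embedding."""
    return np.array([lam * x[0], mu_ * x[1] + c * x[0] ** 2])

def lift(x):
    """Dictionary of observables: in these coordinates the dynamics are linear."""
    return np.array([x[0], x[1], x[0] ** 2])

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))          # snapshot states
Y = np.array([step(x) for x in X])             # one-step successors
Psi_X = np.array([lift(x) for x in X])
Psi_Y = np.array([lift(y) for y in Y])

# EDMD: least-squares fit of the Koopman matrix K such that Psi_Y ~ Psi_X @ K
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)

# Linear prediction in the lifted space versus the true nonlinear step
x0 = np.array([0.7, -0.2])
pred = lift(x0) @ K
err = float(np.linalg.norm(pred[:2] - step(x0)))
```

With a well-chosen dictionary the linear model is exact, which is what makes the operator usable inside linear model-based controllers; for real soft robots the dictionary design is the hard part, as the review emphasizes.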

In model-based reinforcement learning, the transition matrix and reward vector are often estimated from random samples subject to noise. Even if the estimated model is an unbiased estimate of the true underlying model, the value function computed from the estimated model is biased. We introduce an operator shifting method for reducing the error introduced by the estimated model. When the error is measured in the residual norm, we prove that the shifting factor is always positive and upper bounded by $1+O\left(1/n\right)$, where $n$ is the number of samples used in learning each row of the transition matrix. We also propose a practical numerical algorithm for implementing the operator shifting.
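The phenomenon motivating the method can be checked exactly on a two-state chain: although every row of the empirical transition matrix is unbiased, the value computed from it is biased, because the value is a nonlinear function of the model. The sketch below enumerates all empirical models built from $n$ samples per row (the shifting algorithm itself is not reproduced); the chain, reward, and discount are illustrative assumptions:

```python
import numpy as np
from itertools import product
from math import comb

gamma = 0.9
P = np.array([[0.7, 0.3], [0.4, 0.6]])   # true transition matrix
r = np.array([1.0, 0.0])                 # reward vector

def value(P_):
    """Model-based value function: V = (I - gamma*P)^(-1) r."""
    return np.linalg.solve(np.eye(2) - gamma * P_, r)

V_true = value(P)

# Exact expectation of V over every empirical model from n samples per row
n = 10
E_V = np.zeros(2)
for k0, k1 in product(range(n + 1), repeat=2):
    prob = (comb(n, k0) * P[0, 0] ** k0 * P[0, 1] ** (n - k0)
            * comb(n, k1) * P[1, 0] ** k1 * P[1, 1] ** (n - k1))
    P_hat = np.array([[k0 / n, 1 - k0 / n], [k1 / n, 1 - k1 / n]])
    E_V += prob * value(P_hat)

# nonzero even though E[P_hat] = P row by row
bias = float(np.linalg.norm(E_V - V_true))
```

Operator shifting multiplies the estimated operator by a factor chosen to cancel part of exactly this bias.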

We analyze to what extent end users can infer information about the level of protection of their data when the data obfuscation mechanism is a priori unknown to them (the so-called ``black-box'' scenario). In particular, we delve into the investigation of two notions of local differential privacy (LDP), namely {\epsilon}-LDP and R\'enyi LDP. On the one hand, we prove that, without any assumption on the underlying distributions, it is not possible to have an algorithm able to infer the level of data protection with provable guarantees; this result also holds for the central versions of the two notions of DP considered. On the other hand, we demonstrate that, under reasonable assumptions (namely, Lipschitzness of the involved densities on a closed interval), such guarantees exist and can be achieved by a simple histogram-based estimator. We validate our results experimentally and note that, on a particularly well-behaved distribution (namely, the Laplace noise), our method gives even better results than expected, in the sense that in practice the number of samples needed to achieve the desired confidence is smaller than the theoretical bound, and the estimation of {\epsilon} is more precise than predicted.
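A minimal sketch of a histogram-based estimator in the black-box setting: sample the mechanism on two neighbouring inputs, bin the outputs on a closed interval, and take the largest log-ratio of bin masses as the estimate of {\epsilon}. The Laplace mechanism, the interval, the bin width, and the count threshold below are illustrative choices, not the paper's estimator parameters or its confidence analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
true_eps = 1.0
N = 100_000

# Laplace mechanism on neighbouring inputs 0 and 1 (sensitivity 1, scale 1/eps)
a = rng.laplace(0.0, 1.0 / true_eps, N)
b = rng.laplace(1.0, 1.0 / true_eps, N)

edges = np.linspace(-3.0, 4.0, 15)   # closed interval split into 14 bins
p, _ = np.histogram(a, bins=edges)
q, _ = np.histogram(b, bins=edges)

mask = (p > 50) & (q > 50)           # ignore poorly populated bins
ratios = np.log(p[mask] / q[mask])
eps_hat = float(np.abs(ratios).max())  # empirical worst-case privacy loss
```

For the Laplace mechanism, every bin lying entirely outside $[0,1]$ has population log-ratio exactly equal to {\epsilon}, which is why this crude estimator already lands close to the true value.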

Each step that results in a bit of information being ``forgotten'' by a computing device has an intrinsic energy cost. Although any Turing machine can be rewritten to be thermodynamically reversible without changing the recognized language, finite automata that are restricted to scan their input once in ``real-time'' fashion can only recognize the members of a proper subset of the class of regular languages in this reversible manner. We study the energy expenditure associated with the computations of deterministic and quantum finite automata. We prove that zero-error quantum finite automata have no advantage over their classical deterministic counterparts in terms of the maximum obligatory thermodynamic cost associated with any step during the recognition of different regular languages. We also demonstrate languages for which ``error can be traded for energy'', i.e., languages whose zero-error recognition is associated with computation steps having provably larger obligatory energy cost than their bounded-error recognition by real-time finite-memory quantum devices. We show that regular languages can be classified according to the intrinsic energy requirements they impose on the recognizing automaton as a function of input length, and we prove upper and lower bounds.
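The link between irreversibility and energy can be made concrete on two-state automata: a step forgets $\log_2$ of the number of states that the transition function merges, and by Landauer's principle each forgotten bit costs at least $kT\ln 2$. In the illustrative sketch below (not the paper's formal cost measure), the parity automaton is reversible, while the ``ends with 1'' automaton erases one bit on every symbol:

```python
from collections import defaultdict
from math import log2

def bits_forgotten(states, delta, symbol):
    """Worst-case information erased in one step: log2 of the largest set of
    states that the transition function maps to the same state on this symbol."""
    preimage = defaultdict(list)
    for s in states:
        preimage[delta[(s, symbol)]].append(s)
    return max(log2(len(v)) for v in preimage.values())

states = [0, 1]
# Parity of 1s: each symbol acts as a permutation of the states, so nothing is lost
parity = {(0, '0'): 0, (1, '0'): 1, (0, '1'): 1, (1, '1'): 0}
# "Ends with 1": each symbol merges both states, forgetting one bit per step
ends1 = {(0, '0'): 0, (1, '0'): 0, (0, '1'): 1, (1, '1'): 1}

parity_cost = max(bits_forgotten(states, parity, ch) for ch in '01')
ends1_cost = max(bits_forgotten(states, ends1, ch) for ch in '01')
```

Both automata recognize regular languages, yet only the first admits an energy-free real-time run, illustrating why real-time reversible automata capture only a proper subclass.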

We consider signal source localization from range-difference measurements. First, we give some readily checked conditions on the measurement noises and sensor deployment that guarantee the asymptotic identifiability of the model, and we show the consistency and asymptotic normality of the maximum likelihood (ML) estimator. Then, we devise an estimator that has the same asymptotic properties as the ML one. Specifically, we prove that the negative log-likelihood function converges to a limit function which has a unique minimum and a positive-definite Hessian at the true source position. Hence, it is promising to execute local iterations, e.g., the Gauss-Newton (GN) algorithm, starting from a consistent estimate. The main issue involved is obtaining a preliminary consistent estimate. To this end, we construct a linear least-squares problem via algebraic operations and constraint relaxation, and obtain a closed-form solution. We then focus on deriving and eliminating the bias of the linear least-squares estimator, which yields an asymptotically unbiased (and thus consistent) estimate. Noting that the bias is a function of the noise variance, we further devise a consistent noise variance estimator, which involves the rooting of a third-order polynomial. Based on the preliminary consistent location estimate, we prove that a single GN iteration suffices to achieve the same asymptotic properties as the ML estimator. Simulation results demonstrate the superiority of our proposed algorithm in the large-sample case.
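The refinement stage can be sketched as a plain Gauss-Newton iteration on the range-difference residuals. The sensor layout, the noise-free measurements, and the hand-picked preliminary estimate below are illustrative assumptions standing in for the paper's bias-eliminated closed-form initializer:

```python
import numpy as np

sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., -5.]])
src = np.array([3.0, 4.0])   # true source position (unknown in practice)

def rd(x):
    """Range differences relative to the reference sensor 0."""
    d = np.linalg.norm(sensors - x, axis=1)
    return d[1:] - d[0]

meas = rd(src)               # noise-free measurements for illustration

x = np.array([4.5, 2.0])     # preliminary estimate (a closed-form linear LS in the paper)
for _ in range(10):
    d = np.linalg.norm(sensors - x, axis=1)
    u = (x - sensors) / d[:, None]     # unit vectors: gradients of the ranges
    J = u[1:] - u[0]                   # Jacobian of the range differences
    res = rd(x) - meas
    x = x - np.linalg.lstsq(J, res, rcond=None)[0]   # Gauss-Newton step

err = float(np.linalg.norm(x - src))
```

With a consistent initial estimate the paper shows one such step already attains the ML estimator's asymptotic efficiency; the loop here simply runs to numerical convergence on clean data.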
