
Simulation of contact and friction dynamics is an important basis for control- and learning-based algorithms. However, the numerical difficulties of contact interactions pose a challenge for robust and efficient simulators. A maximal-coordinate representation of the dynamics enables efficient solving algorithms, but current methods in maximal coordinates require constraint stabilization schemes. Therefore, we propose an interior-point algorithm for the numerically robust treatment of rigid-body dynamics with contact interactions in maximal coordinates. Additionally, we discretize the dynamics with a variational integrator to prevent constraint drift. Our algorithm achieves linear-time complexity both in the number of contact points and the number of bodies, which is shown theoretically and demonstrated with an implementation. Furthermore, we simulate two robotic systems to highlight the applicability of the proposed algorithm.
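As a sketch of the interior-point treatment the abstract refers to (our notation, not necessarily the paper's), the non-penetration and impulse conditions at each contact point form a complementarity system that is relaxed along a central path:

$$
\phi(q) \ge 0, \qquad \gamma \ge 0, \qquad \phi(q)\,\gamma = \kappa, \qquad \kappa \to 0,
$$

where $\phi(q)$ is the signed contact gap, $\gamma$ the corresponding normal impulse, and $\kappa$ the barrier parameter; solving a sequence of smoothed problems while driving $\kappa$ toward zero avoids the non-smoothness of the exact complementarity condition.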

Related content


Although cobots have high potential to bring several benefits to manufacturing and logistics processes, their rapid (re-)deployment in changing environments is still limited. To enable fast adaptation to new product demands and to boost the fitness of the human workers to the allocated tasks, we propose a novel method that optimizes assembly strategies and distributes the effort among the workers in human-robot cooperative tasks. The cooperation model exploits AND/OR Graphs, which we adapted to also solve the role allocation problem. The allocation algorithm considers quantitative measurements that are computed online to describe the human operator's ergonomic status and the task properties. We conducted preliminary experiments demonstrating that the proposed approach succeeds in controlling the task allocation process to ensure safe and ergonomic conditions for the human worker.

In this paper, we develop a framework to construct energy-preserving methods for multi-component Hamiltonian systems, combining the exponential integrator and the partitioned averaged vector field method. This leads to numerical schemes with both the long-time stability of energy-preserving methods and excellent behavior for highly oscillatory or stiff problems. Compared to the existing energy-preserving exponential integrators (EP-EI) in practical implementation, our proposed methods are much more efficient, as they can at least be computed subsystem by subsystem instead of handling a nonlinear coupled system all at once. Moreover, for most cases, such as the Klein-Gordon-Schr\"{o}dinger equations and the Klein-Gordon-Zakharov equations considered in this paper, the computational cost can be further reduced. Specifically, one part of the derived schemes is totally explicit, and the other is linearly implicit. In addition, we present a rigorous proof that the original energy of the Hamiltonian system is conserved, in which an alternative technique is utilized so that no additional assumptions are required, in contrast to the proof strategies used for the existing EP-EI. Numerical experiments are provided to demonstrate the significant advantages in accuracy, computational efficiency, and the ability to capture highly oscillatory solutions.
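For reference, the classical (unpartitioned) averaged vector field update that such schemes build on reads, for $\dot{y} = S\,\nabla H(y)$ with constant skew-symmetric $S$ (standard notation, not necessarily the paper's):

$$
y_{n+1} = y_n + h\, S \int_0^1 \nabla H\big((1-\xi)\,y_n + \xi\, y_{n+1}\big)\, \mathrm{d}\xi,
$$

which conserves $H$ exactly, since $H(y_{n+1}) - H(y_n) = \int_0^1 \nabla H\big((1-\xi)y_n + \xi y_{n+1}\big)^{\top}(y_{n+1} - y_n)\,\mathrm{d}\xi$ reduces to a quadratic form in the averaged gradient with the skew-symmetric matrix $S$, which vanishes.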

Intelligent reflecting surface (IRS) is a new and revolutionary technology capable of reconfiguring the wireless propagation environment by controlling its massive low-cost passive reflecting elements. Different from prior works that focus on optimizing IRS reflection coefficients or single-IRS placement, we aim to maximize the minimum throughput of a single-cell multiuser system aided by multiple IRSs, by joint multi-IRS placement and power control at the access point (AP), which is a mixed-integer non-convex problem whose complexity grows drastically with the number of IRSs/users. To tackle this challenge, a ring-based IRS placement scheme is proposed along with a power control policy that equalizes the users' non-outage probability. An efficient searching algorithm is further proposed to obtain a close-to-optimal solution for an arbitrary number of IRSs/rings. Numerical results validate our analysis and show that our proposed scheme significantly outperforms the benchmark schemes without IRS and/or with other power control policies. Moreover, it is shown that the IRSs are preferably deployed near the AP for coverage range extension, while with more IRSs, they tend to spread out over the cell to cover more users and get closer to the target users.
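As a purely illustrative instance of a power control rule that equalizes non-outage probabilities, assume Rayleigh fading with average effective gain $\bar g_k$ for user $k$ (this channel model is our assumption, not necessarily the paper's). The non-outage probability at target rate $r$ with power $p_k$ and noise power $\sigma^2$ is

$$
\Pr\!\left\{\log_2\!\left(1 + \frac{p_k\,\bar g_k\,|h_k|^2}{\sigma^2}\right) \ge r\right\}
= \exp\!\left(-\frac{(2^r - 1)\,\sigma^2}{p_k\,\bar g_k}\right),
$$

so the non-outage probabilities coincide across users exactly when $p_k\,\bar g_k$ is the same for all $k$, i.e., $p_k \propto 1/\bar g_k$, scaled to meet the AP power budget.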

We study periodic review stochastic inventory control in the data-driven setting where the retailer makes ordering decisions based only on historical demand observations without any knowledge of the probability distribution of the demand. Since an (s, S)-policy is optimal when the demand distribution is known, we investigate the statistical properties of the data-driven (s, S)-policy obtained by recursively computing the empirical cost-to-go functions. This policy is inherently challenging to analyze because the recursion induces propagation of the estimation error backwards in time. In this work, we establish the asymptotic properties of this data-driven policy by fully accounting for the error propagation. First, we rigorously show the consistency of the estimated parameters by filling in some gaps (due to unaccounted error propagation) in the existing studies. In this setting, empirical process theory (EPT) cannot be directly applied to show asymptotic normality. To explain, the empirical cost-to-go functions for the estimated parameters are not i.i.d. sums due to the error propagation. Our main methodological innovation comes from an asymptotic representation for multi-sample U-processes in terms of i.i.d. sums. This representation enables us to apply EPT to derive the influence functions of the estimated parameters and to establish joint asymptotic normality. Based on these results, we also propose an entirely data-driven estimator of the optimal expected cost and we derive its asymptotic distribution. We demonstrate some useful applications of our asymptotic results, including sample size determination and interval estimation. The results from our numerical simulations conform to our theoretical analysis.
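A minimal sketch of the backward recursion on empirical cost-to-go functions described above, assuming hypothetical cost parameters, a finite inventory grid, and i.i.d. historical demand samples (none of these specifics come from the paper):

```python
# Minimal sketch (not the paper's code): backward recursion on empirical cost-to-go
# functions and extraction of a data-driven (s, S) policy. Cost parameters, the grid,
# and the demand data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=200)        # historical demand observations
K, c, h, p = 50.0, 2.0, 1.0, 9.0              # fixed-order, unit, holding, backlog costs
T = 4                                         # planning horizon
grid = np.arange(-60, 121)                    # post-order inventory grid (backlog allowed)

V_next = np.zeros(grid.shape)                 # terminal cost-to-go
policy = []

for t in reversed(range(T)):
    # empirical expected stage cost plus cost-to-go for each post-order level y
    G = np.empty(grid.shape)
    for j, y in enumerate(grid):
        leftover = y - demand
        stage = h * np.maximum(leftover, 0) + p * np.maximum(-leftover, 0)
        cont = np.interp(np.clip(leftover, grid[0], grid[-1]), grid, V_next)
        G[j] = np.mean(stage + cont)

    u = c * grid + G                          # u_t(y): cost of ending the ordering stage at y
    S = grid[np.argmin(u)]
    order_cost = K + u.min()
    s = grid[np.argmax(u <= order_cost)]      # smallest level where not ordering is no worse
    policy.append((t, int(s), int(S)))

    # cost-to-go under the (s, S) decision rule (optimal when u_t is K-convex)
    V_next = np.where(grid < s, order_cost, u) - c * grid

print(policy[::-1])                           # [(0, s_0, S_0), ..., (T-1, s_{T-1}, S_{T-1})]
```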

Several non-linear functions and machine learning methods have been developed for flexible specification of the systematic utility in discrete choice models. However, they lack interpretability, do not ensure monotonicity conditions, and restrict substitution patterns. We address the first two challenges by modelling the systematic utility using the Choquet Integral (CI) function and the last one by embedding CI into the multinomial probit (MNP) choice probability kernel. We also extend the MNP-CI model to account for attribute cut-offs that enable a modeller to approximately mimic the semi-compensatory behaviour using the traditional choice experiment data. The MNP-CI model is estimated using a constrained maximum likelihood approach, and its statistical properties are validated through a comprehensive Monte Carlo study. The CI-based choice model is empirically advantageous as it captures interaction effects while maintaining monotonicity. It also provides information on the complementarity between pairs of attributes coupled with their importance ranking as a by-product of the estimation. These insights could potentially assist policymakers in designing policies to improve the preference level for an alternative. These advantages of the MNP-CI model with attribute cut-offs are illustrated in an empirical application to understand New Yorkers' preferences towards mobility-on-demand services.
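For reference, the discrete Choquet integral of an attribute vector $x \in \mathbb{R}_{\ge 0}^n$ with respect to a capacity $\mu$ (a monotone set function with $\mu(\emptyset) = 0$ and $\mu(N) = 1$) is

$$
CI_\mu(x) = \sum_{i=1}^{n} \big(x_{(i)} - x_{(i-1)}\big)\, \mu\big(A_{(i)}\big), \qquad x_{(0)} := 0,
$$

where $x_{(1)} \le \dots \le x_{(n)}$ is the attribute vector sorted in increasing order and $A_{(i)} = \{(i), \dots, (n)\}$ is the set of attributes whose values are at least $x_{(i)}$. Monotonicity of $\mu$ makes the utility non-decreasing in each attribute, while its non-additivity encodes interactions between attributes (this is the textbook form; the paper's notation may differ).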

In Hindsight Experience Replay (HER), a reinforcement learning agent is trained by treating whatever it has achieved as virtual goals. However, in previous work, the experience was replayed at random, without considering which episode might be the most valuable for learning. In this paper, we develop an energy-based framework for prioritizing hindsight experience in robotic manipulation tasks. Our approach is inspired by the work-energy principle in physics. We define a trajectory energy function as the sum of the transition energy of the target object over the trajectory. We hypothesize that replaying episodes that have high trajectory energy is more effective for reinforcement learning in robotics. To verify our hypothesis, we designed a framework for hindsight experience prioritization based on the trajectory energy of goal states. The trajectory energy function takes the potential, kinetic, and rotational energy into consideration. We evaluate our Energy-Based Prioritization (EBP) approach on four challenging robotic manipulation tasks in simulation. Our empirical results show that our proposed method surpasses state-of-the-art approaches in terms of both performance and sample-efficiency on all four tasks, without increasing computational time. A video showing experimental results is available at //youtu.be/jtsF2tTeUGQ
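A minimal sketch of the idea, assuming a hypothetical trajectory format, object mass/inertia values, and a non-negativity clipping of per-step energy increments (these specifics are our illustration, not the authors' implementation):

```python
# Illustrative sketch (not the authors' code): trajectory energy of a manipulated object
# and energy-proportional episode sampling for hindsight replay.
import numpy as np

def trajectory_energy(positions, linear_vel, angular_vel, mass=1.0, inertia=1.0, g=9.81):
    """Sum of per-step transition energies: potential + kinetic + rotational."""
    potential = mass * g * np.diff(positions[:, 2])                  # change in height per step
    kinetic   = 0.5 * mass * np.diff(np.sum(linear_vel**2, axis=1))
    rotation  = 0.5 * inertia * np.diff(np.sum(angular_vel**2, axis=1))
    per_step = np.clip(potential + kinetic + rotation, 0.0, None)    # keep non-negative increments
    return per_step.sum()

def sample_episodes(energies, batch_size, rng):
    """Replay episodes with probability proportional to their trajectory energy."""
    probs = np.asarray(energies, dtype=float) + 1e-8
    probs /= probs.sum()
    return rng.choice(len(energies), size=batch_size, p=probs)

# toy usage with random object trajectories: (positions, linear_vel, angular_vel) per episode
rng = np.random.default_rng(0)
episodes = [(rng.normal(size=(50, 3)), rng.normal(size=(50, 3)), rng.normal(size=(50, 3)))
            for _ in range(20)]
energies = [trajectory_energy(pos, vel, ang) for pos, vel, ang in episodes]
print(sample_episodes(energies, batch_size=4, rng=rng))
```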

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
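The "bonus plus Bellman equation" recipe can be sketched as follows in the tabular case; the toy MDP, the bonus values, and the discount factor below are hypothetical placeholders, and this is a simplified illustration rather than the paper's exact algorithm:

```python
# Illustrative sketch: tabular soft-Bellman solve with a per-(s,a) reward bonus and a
# Boltzmann policy whose temperature equals the risk-seeking parameter tau.
import numpy as np

def solve_k_values(P, R, bonus, tau, gamma=0.95, iters=500):
    """P: (S, A, S) transition probabilities; R and bonus: (S, A) arrays."""
    K = np.zeros(R.shape)
    for _ in range(iters):
        m = K.max(axis=1)                                            # stabilised log-sum-exp
        V = m + tau * np.log(np.exp((K - m[:, None]) / tau).sum(axis=1))
        K = R + bonus + gamma * (P @ V)                              # soft Bellman backup
    return K

def boltzmann_policy(K, tau):
    logits = K / tau
    logits -= logits.max(axis=1, keepdims=True)
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# toy 3-state, 2-action MDP
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))                           # shape (3, 2, 3)
R = rng.uniform(size=(3, 2))
K = solve_k_values(P, R, bonus=0.1 * np.ones((3, 2)), tau=0.5)
print(boltzmann_policy(K, tau=0.5))
```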

In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
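The monotonicity constraint can be enforced structurally by generating the mixing weights with state-conditioned hypernetworks and forcing them to be non-negative. The sketch below is a simplified illustration of that idea (layer sizes are hypothetical and it is not the authors' code):

```python
# Illustrative sketch: a mixing network whose weights come from hypernetworks conditioned on
# the global state and are made non-negative with abs(), so the joint value is monotonic in
# each per-agent value.
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)    # layer-1 weights
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)               # layer-1 bias
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)               # layer-2 weights
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, 1))        # state-dependent scalar bias

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        w1 = torch.abs(self.hyper_w1(state)).view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)  # (batch, 1, embed)
        w2 = torch.abs(self.hyper_w2(state)).view(-1, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2                              # (batch, 1, 1)
        return q_tot.squeeze(-1).squeeze(-1)

# toy usage: 3 agents, 10-dimensional global state, batch of 4
mixer = MonotonicMixer(n_agents=3, state_dim=10)
print(mixer(torch.randn(4, 3), torch.randn(4, 10)).shape)              # torch.Size([4])
```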

In this paper, we develop the continuous time dynamic topic model (cDTM). The cDTM is a dynamic topic model that uses Brownian motion to model the latent topics through a sequential collection of documents, where a "topic" is a pattern of word use that we expect to evolve over the course of the collection. We derive an efficient variational approximate inference algorithm that takes advantage of the sparsity of observations in text, a property that lets us easily handle many time points. In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized. Moreover, the complexity of variational inference for the dDTM grows quickly as time granularity increases, a drawback which limits fine-grained discretization. We demonstrate the cDTM on two news corpora, reporting both predictive perplexity and the novel task of time stamp prediction.
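The Brownian-motion construction can be sketched as follows (standard dynamic topic model notation; details may differ from the paper). Between observation times $s < t$, the natural parameters of topic $k$ drift as

$$
\beta_{k,t} \mid \beta_{k,s} \sim \mathcal{N}\!\big(\beta_{k,s},\; v\,(t-s)\, I\big),
\qquad
p(w \mid \beta_{k,t}) = \frac{\exp(\beta_{k,t,w})}{\sum_{w'} \exp(\beta_{k,t,w'})},
$$

so the drift variance grows linearly with the elapsed continuous time between documents, and a topic's word distribution at any time is the softmax of its current natural parameters.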
