
We address the problem of controlling Connected and Automated Vehicles (CAVs) in conflict areas of a traffic network subject to hard safety constraints. It has been shown that such problems can be solved through a combination of tractable optimal control problems and Control Barrier Functions (CBFs) that guarantee the satisfaction of all constraints. These solutions can be reduced to a sequence of Quadratic Programs (QPs) which are efficiently solved online over discrete time steps. However, guaranteeing the feasibility of the CBF-based QP method within each discretized time interval requires the careful selection of sufficiently small time steps. This imposes computational requirements and inter-agent communication rates that may hinder the controller's application to real CAVs. In this paper, we overcome this limitation by adopting an event-triggered approach for CAVs in a conflict area, such that the next QP is triggered by properly defined events with a safety guarantee. We present a laboratory-scale test bed we have developed to emulate merging roadways using mobile robots as CAVs. The test bed demonstrates that the event-triggered scheme is computationally efficient and can handle measurement uncertainties and noise, compared to time-driven control, while guaranteeing safety.
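
For intuition, the following is a minimal sketch of one CBF-QP step together with a simple event-trigger check. The single affine CBF constraint, the control bound, the `gamma` gain, and the trigger threshold are all illustrative placeholders, not the paper's controller or trigger condition.

```python
import cvxpy as cp
import numpy as np

def solve_cbf_qp(u_ref, h, Lf_h, Lg_h, gamma=1.0, u_max=3.0):
    """One CBF-QP step: stay close to the reference input u_ref while enforcing
    the (illustrative, single) CBF constraint  Lf_h + Lg_h @ u + gamma * h >= 0."""
    u = cp.Variable(len(u_ref))
    objective = cp.Minimize(cp.sum_squares(u - u_ref))
    constraints = [Lf_h + Lg_h @ u + gamma * h >= 0,   # safety (CBF) constraint
                   cp.abs(u) <= u_max]                 # actuation limits
    cp.Problem(objective, constraints).solve()
    return u.value

def event_triggered(h_now, h_at_last_solve, threshold=0.1):
    """Re-solve the QP only when the barrier value has drifted sufficiently
    since the last trigger (a simple illustrative trigger rule)."""
    return abs(h_now - h_at_last_solve) > threshold
```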

Related Content

Neuromorphic visual sensors are artificial retinas that output sequences of asynchronous events when brightness changes occur in the scene. These sensors offer many advantages, including very high temporal resolution, no motion blur, and smart data compression ideal for real-time processing. In this study, we introduce an event-based dataset of fine-grained manipulation actions and perform an experimental study on the use of transformers for action prediction with events. There is enormous interest in the fields of cognitive robotics and human-robot interaction in understanding and predicting human actions as early as possible. Early prediction allows anticipating complex stages for planning, enabling effective and real-time interaction. Our Transformer network uses events to predict manipulation actions as they occur, using online inference. The model succeeds at predicting actions early on, building up confidence over time and achieving state-of-the-art classification. Moreover, the attention-based transformer architecture allows us to study the role of the spatio-temporal patterns selected by the model. Our experiments show that the Transformer network captures action dynamic features, outperforming video-based approaches and succeeding in scenarios where the differences between actions lie in very subtle cues. Finally, we release the new event dataset, which is the first in the literature for manipulation action recognition. Code will be available at //github.com/DaniDeniz/EventVisionTransformer.
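
As a rough illustration of online action prediction from event tokens, here is a minimal causal-transformer classifier sketch. The event featurization (x, y, timestamp, polarity), dimensions, and class count are placeholder assumptions, not the released model.

```python
import torch
import torch.nn as nn

class EventActionTransformer(nn.Module):
    """Toy transformer mapping a sequence of event features to per-step action
    logits, so a prediction is available after every prefix (online inference)."""
    def __init__(self, in_dim=4, d_model=128, n_heads=4, n_layers=4, n_actions=10):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)            # (x, y, t, polarity) -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, events):                             # events: (B, T, in_dim)
        tokens = self.embed(events)
        causal = nn.Transformer.generate_square_subsequent_mask(events.size(1))
        hidden = self.encoder(tokens, mask=causal)         # causal mask: only past events
        return self.head(hidden)                           # (B, T, n_actions) logits per step
```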

We establish a connection between stochastic optimal control and generative models based on stochastic differential equations (SDEs), such as recently developed diffusion probabilistic models. In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals. This perspective allows us to transfer methods from optimal control theory to generative modeling. First, we show that the evidence lower bound is a direct consequence of the well-known verification theorem from control theory. Further, we can formulate diffusion-based generative modeling as a minimization of the Kullback-Leibler divergence between suitable measures in path space. Finally, we develop a novel diffusion-based method for sampling from unnormalized densities -- a problem frequently occurring in statistics and computational sciences. We demonstrate that our time-reversed diffusion sampler (DIS) can outperform other diffusion-based sampling approaches on multiple numerical examples.
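
As a worked illustration of how such an HJB-type equation for the log-density can arise (assuming the standard Fokker-Planck setting; the paper's exact conventions, e.g. time reversal, may differ):

```latex
% Fokker--Planck equation for the marginal density p_t of  dX_t = f(X_t,t)\,dt + \sigma(t)\,dW_t:
%   \partial_t p = -\nabla\cdot(f\,p) + \tfrac{\sigma^2}{2}\,\Delta p.
% Substituting p = e^{V} (Hopf--Cole) yields an HJB-type equation for V = \log p_t:
\partial_t V \;=\; -\,\nabla\cdot f \;-\; f\cdot\nabla V
\;+\; \frac{\sigma^2(t)}{2}\left(\Delta V + \lVert \nabla V\rVert^{2}\right).
```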

In this paper we propose a numerical method to solve a 2D advection-diffusion equation in the highly oscillatory regime. We use an efficient and robust integrator which leads to an accurate approximation of the solution without any time step-size restriction. Uniform first- and second-order numerical approximations in time are obtained with errors, and at a cost, that are independent of the oscillation frequency. This work is part of a long-term project whose final goal is the resolution of a Stokes-advection-diffusion system, in which the velocity in the advection term is the solution of the Stokes equations. This paper focuses on the time-multiscale challenge coming from the velocity, which is an $\varepsilon$-periodic function whose expression is explicitly known. We also introduce a two-scale formulation as a first step towards the numerical resolution of the complete oscillatory Stokes-advection-diffusion system, which is currently under investigation. This two-scale formulation is also useful for understanding the asymptotic behaviour of the solution.
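
To make the time-multiscale difficulty concrete, here is a naive explicit baseline (not the paper's uniformly accurate integrator) for a periodic 2D advection-diffusion problem with an $\varepsilon$-periodic velocity; the chosen velocity field is a placeholder. Resolving the oscillation forces dt on the order of eps, which is exactly the restriction the proposed method avoids.

```python
import numpy as np

def naive_step(u, dt, dx, nu, eps, t):
    """One explicit step of  u_t + a(t/eps) . grad(u) = nu * Lap(u)  on a periodic grid.
    The oscillatory velocity a(t/eps) below is illustrative only."""
    a = np.array([np.cos(2 * np.pi * t / eps), np.sin(2 * np.pi * t / eps)])
    ux = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)   # centered d/dx
    uy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)   # centered d/dy
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    return u + dt * (nu * lap - a[0] * ux - a[1] * uy)
```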

Model-based control requires an accurate model of the system dynamics for precisely and safely controlling the robot in complex and dynamic environments. Moreover, in the presence of variations in the operating conditions, the model should be continuously refined to compensate for dynamics changes. In this paper, we present a self-supervised learning approach that actively models the dynamics of nonlinear robotic systems. We combine offline learning from past experience and online learning from current robot interaction with the unknown environment. These two ingredients enable a highly sample-efficient and adaptive learning process, capable of accurately inferring model dynamics in real-time even in operating regimes that greatly differ from the training distribution. Moreover, we design an uncertainty-aware model predictive controller that is heuristically conditioned on the aleatoric (data) uncertainty of the learned dynamics. This controller actively chooses the optimal control actions that (i) optimize the control performance and (ii) improve the efficiency of online learning sample collection. We demonstrate the effectiveness of our method through a series of challenging real-world experiments using a quadrotor system. Our approach showcases high resilience and generalization capabilities by consistently adapting to unseen flight conditions, while it significantly outperforms classical and adaptive control baselines.
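
The sketch below shows one generic way to realize these ingredients: a probabilistic dynamics model trained with a Gaussian negative log-likelihood (capturing aleatoric uncertainty) and a random-shooting MPC that penalizes predicted variance. The network sizes, the quadratic state cost, and the uncertainty weight `beta` are assumptions for illustration, not the paper's architecture or controller.

```python
import torch
import torch.nn as nn

class ProbDynamics(nn.Module):
    """Small MLP predicting next-state mean and log-variance (aleatoric uncertainty)."""
    def __init__(self, state_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * state_dim))

    def forward(self, s, a):
        mu, log_var = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return mu, log_var

def nll_loss(mu, log_var, target):
    """Gaussian negative log-likelihood for self-supervised model learning."""
    return (0.5 * ((target - mu) ** 2 * torch.exp(-log_var) + log_var)).mean()

@torch.no_grad()
def shooting_mpc(model, s0, horizon=10, n_samples=256, act_dim=4, beta=1.0):
    """Random-shooting MPC penalizing predicted aleatoric uncertainty
    (placeholder quadratic state cost)."""
    actions = torch.rand(n_samples, horizon, act_dim) * 2 - 1   # candidate sequences
    cost = torch.zeros(n_samples)
    s = s0.expand(n_samples, -1)
    for t in range(horizon):
        mu, log_var = model(s, actions[:, t])
        cost += (mu ** 2).sum(-1) + beta * log_var.exp().sum(-1)
        s = mu
    return actions[cost.argmin(), 0]            # apply first action of the best sequence
```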

Control barrier functions (CBFs) have been widely applied to safety-critical robotic applications. However, the construction of control barrier functions for robotic systems remains a challenging task. Recently, collision detection using differentiable optimization has provided a way to compute the minimum uniform scaling factor that results in an intersection between two convex shapes and to also compute the Jacobian of the scaling factor. In this paper, we propose a framework that uses this scaling factor, with an offset, to systematically define a CBF for obstacle avoidance tasks. We provide a theoretical analysis that proves the continuity of the proposed CBF. Empirically, we show that the proposed CBF is continuously differentiable, and the resulting optimal control problem is computationally efficient, which makes it applicable for real-time robotic control. We validate our approach, first using a 2D mobile robot example, then on the Franka-Emika Research 3 (FR3) robot manipulator both in simulation and experiment.
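
For a concrete special case, the minimum uniform scaling factor has a closed form for two spheres, which is enough to sketch the scaling-factor-with-offset CBF; the general convex-shape case in the paper relies on differentiable optimization instead. The offset value and function names here are illustrative assumptions.

```python
import numpy as np

def sphere_scaling_factor(p_robot, r_robot, p_obs, r_obs):
    """Minimum uniform scaling alpha that brings two spheres into contact:
    alpha = ||p_robot - p_obs|| / (r_robot + r_obs); alpha > 1 means separation."""
    d = p_robot - p_obs
    dist = np.linalg.norm(d)
    alpha = dist / (r_robot + r_obs)
    grad_p = d / (dist * (r_robot + r_obs))     # d(alpha)/d(p_robot)
    return alpha, grad_p

def cbf_value(p_robot, r_robot, p_obs, r_obs, offset=0.1):
    """CBF built from the scaling factor with an offset: h > 0 away from contact."""
    alpha, grad = sphere_scaling_factor(p_robot, r_robot, p_obs, r_obs)
    return alpha - 1.0 - offset, grad
```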

Despite temperature rise being a first-order design constraint, traditional thermal estimation techniques have severe limitations in modeling critical aspects affecting the temperature in modern-day chips. Existing thermal modeling techniques often ignore the effects of parameter variation, which can lead to significant errors. Such methods also ignore the dependence of conductivity on temperature and its variation. Leakage power is also incorporated inadequately by state-of-the-art techniques. Thermal modeling is a process that has to be repeated at least thousands of times in the design cycle, and hence speed is of utmost importance. To overcome these limitations, we propose VarSim, an ultrafast thermal simulator based on Green's functions. Green's functions have been shown to be faster than the traditional finite difference and finite element-based approaches but have rarely been employed in thermal modeling. Hence we propose a new Green's function-based method to capture the effects of leakage power as well as process variation analytically. We provide a closed-form solution for the Green's function considering the effects of variation on the process, temperature, and thermal conductivity. In addition, we propose a novel way of dealing with the anisotropicity introduced by process variation by splitting the Green's functions into shift-variant and shift-invariant components. Since our solutions are analytical expressions, we were able to obtain speedups that were several orders of magnitude over and above state-of-the-art proposals with a mean absolute error limited to 4% for a wide range of test cases. Furthermore, our method accurately captures the steady-state as well as the transient variation in temperature.
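
The following sketch illustrates only the shift-invariant part of a Green's-function-based thermal evaluation: the temperature rise is obtained by convolving the power map with a Green's function kernel via the FFT. The kernel itself is a placeholder here (VarSim derives a closed-form kernel and adds a shift-variant component), and the convolution is circular, which is adequate only as an illustration.

```python
import numpy as np

def temperature_from_power(power_map, greens_kernel, t_ambient=45.0):
    """Shift-invariant Green's-function step: temperature rise = power map (*) kernel.
    Both arrays are assumed to have the same shape; the FFT implies periodic
    (circular) boundary handling, used here purely for illustration."""
    P = np.fft.rfft2(power_map)
    G = np.fft.rfft2(np.fft.ifftshift(greens_kernel), s=power_map.shape)
    delta_t = np.fft.irfft2(P * G, s=power_map.shape)
    return t_ambient + delta_t
```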

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the value of the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it converges quadratically with respect to the time step size on the local temporal interval. We have conducted several numerical experiments using the proposed algorithm for various test problems to validate its performance; the obtained numerical results are in accordance with the theoretical findings.
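
For orientation, one common symmetric three-layer discretization of the classical (constant-coefficient) Kirchhoff string equation, with the nonlinear coefficient frozen at the middle layer, looks as follows; the paper's scheme for time-varying coefficients may differ in detail. Note that each step is linear in the unknown $u^{k+1}$, so advancing in time only requires inverting a linear operator.

```latex
% Model problem:  u_{tt} - \big(a + b\int_0^L |u_x|^2\,dx\big)\,u_{xx} = f(x,t).
% Symmetric three-layer scheme, nonlinear term evaluated at the middle layer u^{k}:
\frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}}
\;-\;\Big(a + b\!\int_0^L \big|u_x^{\,k}\big|^{2}\,dx\Big)\,
\frac{u_{xx}^{\,k+1} + u_{xx}^{\,k-1}}{2} \;=\; f^{\,k}.
```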

In this paper, we present a model describing the collective motion of birds. We explore the dynamic relationship between followers and leaders, wherein a select few agents, known as leaders, can initiate spontaneous changes in direction without being influenced by external factors like predators. Starting at the microscopic level, we develop a kinetic model that characterizes the behaviour of large crowds with transient leadership. One significant challenge lies in managing topological interactions, as identifying nearest neighbours in extensive systems can be computationally expensive. To address this, we propose a novel stochastic particle method to simulate the mesoscopic dynamics and reduce the computational cost of identifying the nearest agents from quadratic to logarithmic complexity, using a $k$-nearest-neighbours search algorithm with a binary tree. Lastly, we conduct various numerical experiments for different scenarios to validate the algorithm's effectiveness and investigate collective dynamics in both two and three dimensions.
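
To illustrate the topological-interaction step, the sketch below uses a k-d tree so that all neighbour queries together cost O(N log N) rather than the O(N^2) all-pairs scan; the alignment rule itself is a generic Cucker-Smale-like placeholder, not the paper's kinetic scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_alignment_step(x, v, k=7, dt=0.05, strength=1.0):
    """One alignment update with topological (k-nearest-neighbour) interactions.
    x, v: (N, dim) arrays of positions and velocities."""
    tree = cKDTree(x)
    _, idx = tree.query(x, k=k + 1)              # k+1: each agent is its own nearest point
    neighbour_mean_v = v[idx[:, 1:]].mean(axis=1)
    v_new = v + dt * strength * (neighbour_mean_v - v)
    x_new = x + dt * v_new
    return x_new, v_new
```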

We propose a monotone discretization method for obstacle problems involving the integral fractional Laplacian with homogeneous Dirichlet boundary conditions over a bounded Lipschitz domain. Our approach is motivated by the success of the monotone discretization of the fractional Laplacian [SIAM J. Numer. Anal. 60(6), pp. 3052-3077, 2022]. By exploiting the problem's unique structure, we establish the uniform boundedness, existence, and uniqueness of the numerical solutions. Moreover, we employ the policy iteration method to efficiently solve discrete nonlinear problems and prove its convergence after a finite number of iterations. The improved policy iteration, adapted to the regularity result, exhibits superior performance by modifying the discretization in different regions. Several numerical examples are provided to illustrate the effectiveness of our method.
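
As a toy illustration of the policy (Howard) iteration for a discrete obstacle problem, the sketch below solves min(Au - f, u - psi) = 0 for a generic monotone matrix A; it stands in for, and is not, the paper's fractional-Laplacian discretization or its region-adapted variant.

```python
import numpy as np

def policy_iteration_obstacle(A, f, psi, max_iter=50):
    """Howard's algorithm for  min(A u - f, u - psi) = 0  with A a monotone matrix."""
    n = len(f)
    contact = np.zeros(n, dtype=bool)            # initial policy: obstacle inactive everywhere
    for _ in range(max_iter):
        # Policy evaluation: u = psi on the contact set, (A u)_i = f_i elsewhere.
        M, rhs = A.copy(), f.astype(float).copy()
        M[contact] = 0.0
        M[contact, contact] = 1.0                # identity rows on the contact set
        rhs[contact] = psi[contact]
        u = np.linalg.solve(M, rhs)
        # Policy improvement: pick the branch attaining the minimum at each node.
        new_contact = (A @ u - f) > (u - psi)
        if np.array_equal(new_contact, contact):
            break                                # policy fixed point reached
        contact = new_contact
    return u
```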

Pre-trained language models (PLMs) serve as backbones for various real-world systems. For high-stakes applications, it is equally essential to have reasonable confidence estimations in predictions. While the vanilla confidence scores of PLMs can already be effectively utilized, PLMs consistently become overconfident in their wrong predictions, which is not desirable in practice. Previous work shows that introducing an extra calibration task can mitigate this issue. The basic idea involves acquiring additional data to train models in predicting the confidence of their initial predictions. However, this line of work only demonstrates the feasibility of such methods, assuming that abundant extra samples are available for the introduced calibration task. In this work, we consider the practical scenario in which we need to effectively utilize training samples to make PLMs both task-solvers and self-calibrators. Three challenges arise: limited training samples, data imbalance, and distribution shifts. We first conduct pilot experiments to quantify various decisive factors in the calibration task. Based on the empirical analysis results, we propose a training algorithm, LM-TOAST, to tackle the challenges. Experimental results show that LM-TOAST can effectively utilize the training data to make PLMs have reasonable confidence estimations while maintaining the original task performance. Further, we consider three downstream applications, namely selective classification, adversarial defense, and model cascading, to show the practical usefulness of LM-TOAST. The code will be made public at \url{//github.com/Yangyi-Chen/LM-TOAST}.
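
For a sense of the joint task-solver/self-calibrator setup, here is a generic sketch (not the LM-TOAST algorithm): a confidence head is trained on the same samples to predict whether the task prediction is correct. The encoder interface, head sizes, and loss weight `lam` are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CalibratedClassifier(nn.Module):
    """PLM encoder with a task head and a confidence head that predicts
    whether the task prediction is correct (a generic self-calibration setup)."""
    def __init__(self, encoder, hidden_dim, n_classes):
        super().__init__()
        self.encoder = encoder                       # assumed to return (B, hidden_dim) features
        self.task_head = nn.Linear(hidden_dim, n_classes)
        self.conf_head = nn.Linear(hidden_dim, 1)

    def forward(self, inputs):
        h = self.encoder(inputs)
        return self.task_head(h), self.conf_head(h).squeeze(-1)

def joint_loss(logits, conf_logit, labels, lam=0.5):
    """Task cross-entropy plus a calibration term supervised by whether
    the current prediction is actually correct (self-generated targets)."""
    task = F.cross_entropy(logits, labels)
    correct = (logits.argmax(-1) == labels).float()
    calib = F.binary_cross_entropy_with_logits(conf_logit, correct)
    return task + lam * calib
```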
