
In this paper, we enable automated property verification of deliberative components in robot control architectures. We focus on formalizing the execution context of Behavior Trees (BTs) to provide a scalable, yet formally grounded, methodology for runtime verification and the prevention of unexpected robot behaviors. To this end, we consider a message-passing model that accommodates both synchronous and asynchronous composition of parallel components, in which BTs and other components execute and interact according to the communication patterns commonly adopted in robotic software architectures. We introduce a formal property specification language to encode requirements and build runtime monitors. We performed a set of experiments, both in simulation and on a real robot, demonstrating the feasibility of our approach in a realistic application and its integration into a typical robot software architecture. We also provide an OS-level virtualization environment to reproduce the experiments in the simulated scenario.
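To make the idea of a runtime monitor over BT execution concrete, the following is a minimal sketch in Python. It checks one simple safety property ("no FAILURE after a SUCCESS") over a stream of status messages; the `Status` type, the property, and the function name are hypothetical illustrations, not the specification language or monitors introduced in the paper.

```python
# Minimal sketch of a runtime monitor over a stream of Behavior Tree status
# messages. The Status enum and the checked property are illustrative
# assumptions, not the paper's formal specification language.
from enum import Enum


class Status(Enum):
    RUNNING = 0
    SUCCESS = 1
    FAILURE = 2


def monitor_no_failure_after_success(trace):
    """Return False as soon as a FAILURE follows a SUCCESS in the trace,
    mimicking an online check of a simple safety property."""
    seen_success = False
    for status in trace:
        if status is Status.SUCCESS:
            seen_success = True
        elif status is Status.FAILURE and seen_success:
            return False
    return True


# Example: the property holds for this trace of observed status messages.
print(monitor_no_failure_after_success(
    [Status.RUNNING, Status.RUNNING, Status.SUCCESS]))
```

In a real architecture the trace would be fed incrementally from the message-passing middleware rather than passed as a list, but the check itself stays the same.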

Related content

Automator is an application developed by Apple for Mac OS X. With simple point-and-click and drag-and-drop operations, a series of actions can be combined into a workflow, helping you complete complex tasks automatically and repeatably. Automator also works across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, and Pixelmator.

The use of domain-specific modeling for the development of complex (cyber-physical) systems is gaining increasing acceptance in industry. Domain-specific modeling allows complex systems and data to be abstracted for more efficient system design, development, validation, and configuration. However, no existing (meta-)modeling framework can so far be used with reasonable effort in certified software, neither for the development of systems nor for the execution of system functions. To use (development) artifacts from domain-specific modeling in safety-critical processes or systems, their correctness must be ensured either by subsequent (manual) verification or by the use of (pre-)qualified software. Existing meta-languages often contain modeling elements that are difficult or impossible to implement in a qualifiable manner, leading to a high manual, subsequent certification effort. The aim is therefore to develop a (meta-)modeling framework that can be used in certified software. This can significantly reduce the development effort for safety-critical systems and unlock the full advantages of domain-specific modeling. The framework components considered in this PhD thesis include: (1) an essential meta-language, (2) a qualifiable runtime environment, and (3) a suitable persistence layer. The essential (meta-)modeling language is mainly based on the UML standard, but is enhanced with multi-level modeling concepts such as deep instantiation. To support a possible qualification, the meta-language is implemented in the highly restrictive, but formally provable, programming language Ada SPARK.
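As a rough illustration of what deep instantiation means, the sketch below models attributes with a "potency": an attribute declared with potency 2 is carried down one instantiation level and only bound to a value at the level below. This is a Python analogue for illustration only, under assumed names (`Clabject`, `potencies`), and not the qualifiable Ada SPARK runtime environment described in the thesis.

```python
# Minimal sketch of multi-level modeling with deep instantiation ("potency").
# Illustrative Python analogue; names and semantics are simplified assumptions.
class Clabject:
    """A class/object hybrid that can live on several modeling levels."""

    def __init__(self, name, level, potencies=None, values=None):
        self.name = name
        self.level = level                      # e.g. 2 = meta-metamodel level
        self.potencies = dict(potencies or {})  # attribute -> remaining potency
        self.values = dict(values or {})        # attribute -> bound value

    def instantiate(self, name, values=None):
        child = Clabject(name, self.level - 1)
        for attr, potency in self.potencies.items():
            if potency > 1:
                child.potencies[attr] = potency - 1        # defer to a lower level
            else:
                child.values[attr] = (values or {}).get(attr)  # bind a value now
        return child


product_type = Clabject("ProductType", level=2, potencies={"serial": 2})
sensor_type = product_type.instantiate("SensorType")              # potency drops to 1
sensor_unit = sensor_type.instantiate("Unit42", {"serial": 42})   # value bound at level 0
print(sensor_unit.values)  # {'serial': 42}
```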

Electrification has turned over a new leaf in aviation by introducing new types of aerial vehicles along with new means of transportation. Addressing a plethora of use cases, drones are gaining attention and increasingly appear in the sky. Emerging flying-taxi concepts will enable passengers to be transported over several tens of kilometers. Unmanned traffic management systems are therefore under development to cope with the complexity of future airspace, resulting in unprecedented communication needs. Moreover, the increase in the number of commercial airplanes pushes the limits of voice-oriented communications, and future options such as single-pilot operations demand robust connectivity. In this survey, we provide a comprehensive review and vision for enabling the connectivity applications of aerial vehicles using current and future communication technologies. We begin by categorizing the connectivity use cases per aerial vehicle and analyzing their connectivity requirements. By reviewing more than 500 related studies, we aim for comprehensive coverage of wireless communication technologies and provide an overview of recent findings from the literature on the possibilities and challenges of employing existing wireless communication standards. After analyzing the network architectures, we list open-source testbed platforms to facilitate future investigations. This study allowed us to observe that, while numerous works have focused on cellular technologies for aerial platforms, no single wireless technology is sufficient to meet the stringent connectivity demands of aerial use cases. We identified the need for further investigation of multi-technology network architectures to enable robust connectivity in the sky. Future work should consider suitable technology combinations to develop unified aerial networks that can meet diverse quality-of-service demands.

Although cobots have high potential to bring several benefits to manufacturing and logistics processes, their rapid (re-)deployment in changing environments is still limited. To enable fast adaptation to new product demands and to boost the fitness of human workers to the allocated tasks, we propose a novel method that optimizes assembly strategies and distributes the effort among the workers in human-robot cooperative tasks. The cooperation model exploits AND/OR graphs, which we adapt to also solve the role-allocation problem. The allocation algorithm considers quantitative measurements that are computed online to describe the human operator's ergonomic status and task properties. We conducted preliminary experiments demonstrating that the proposed approach succeeds in controlling the task-allocation process to ensure safe and ergonomic conditions for the human worker.
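To give a feel for ergonomics-aware role allocation, here is a deliberately simple greedy sketch: each task goes to the agent (human or robot) with the lower combined cost, where the human's cost grows with an online ergonomic-load estimate. The cost terms, weights, and task data are hypothetical and are not the measurements or the AND/OR-graph search used in the paper.

```python
# Illustrative greedy role allocation with a hypothetical ergonomic cost term.
def allocate(tasks, human_load, robot_busy_time, weight=2.0):
    assignment = {}
    for task, (human_time, robot_time, strain) in tasks.items():
        human_cost = human_time + weight * (human_load + strain)
        robot_cost = robot_time + robot_busy_time
        if human_cost <= robot_cost:
            assignment[task] = "human"
            human_load += strain            # accumulated ergonomic load
        else:
            assignment[task] = "robot"
            robot_busy_time += robot_time   # accumulated robot workload
    return assignment


tasks = {  # task -> (human duration, robot duration, ergonomic strain)
    "pick_part": (5.0, 8.0, 0.2),
    "heavy_lift": (6.0, 7.0, 1.5),
    "fine_insert": (4.0, 12.0, 0.3),
}
print(allocate(tasks, human_load=0.0, robot_busy_time=0.0))
```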

The goal of group testing is to efficiently identify a few specific items, called positives, in a large population of items via tests. A test is an action on a subset of items that returns positive if the subset contains at least one positive and negative otherwise. In non-adaptive group testing, all tests are fixed in advance and can be performed in parallel. In this work, we consider non-adaptive group testing with consecutive positives, in which the items are linearly ordered and the positives are consecutive in that order. We present two contributions. The first is the direct use of a binary code to construct measurement matrices when the maximum number of consecutive positives is known, in contrast to the state-of-the-art work, which uses a Gray code (a rearrangement of the binary code); this leads to a reduction in decoding time in practice. The second is a set of efficient designs to identify the positives when the exact number of consecutive positives is known; to the best of our knowledge, this setting has not been investigated before. Our simulations verify the efficiency of the proposed designs. In particular, only up to $300$ tests are required to identify up to $100$ positives in a set of $2^{32} \approx 4.3\mathrm{B}$ items in less than $300$ nanoseconds. When the maximum number of consecutive positives is known, the simulations validate the superiority of our proposed design in decoding compared with the state-of-the-art work. Moreover, when the exact number of consecutive positives is known, the number of tests and the decoding time are almost halved.
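The binary-code idea is easiest to see in the degenerate case of a single positive, sketched below: test $j$ pools every item whose $j$-th index bit is 1, so the vector of test outcomes spells out the positive's index in binary. This is a simplified illustration only; the paper's constructions for runs of consecutive positives build on the same idea but are more involved.

```python
# Simplified illustration: non-adaptive group testing with a binary
# measurement matrix and a SINGLE positive among n items.
import math


def run_tests(n, positive):
    num_tests = math.ceil(math.log2(n))
    # outcome[j] = 1 iff the pool for bit j contains the positive item
    return [(positive >> j) & 1 for j in range(num_tests)]


def decode(outcomes):
    # Reassemble the index of the positive from the per-bit test outcomes.
    return sum(bit << j for j, bit in enumerate(outcomes))


n, positive = 2 ** 20, 123_456
outcomes = run_tests(n, positive)
assert decode(outcomes) == positive
print(f"identified item {decode(outcomes)} with {len(outcomes)} tests")
```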

Reinforcement learning (RL) is a promising approach, but it has seen limited success in real-world applications because ensuring safe exploration and facilitating adequate exploitation is challenging when controlling robotic systems with unknown models and measurement uncertainties. The learning problem becomes even more intractable for complex tasks over continuous spaces (state space and action space). In this paper, we propose a learning-based control framework with several components: (1) linear temporal logic (LTL) is leveraged to specify complex tasks over infinite horizons, which are translated into a novel automaton structure; (2) we propose an innovative reward scheme for the RL agent with the formal guarantee that globally optimal policies maximize the probability of satisfying the LTL specifications; (3) based on a reward-shaping technique, we develop a modular policy-gradient architecture that exploits the automaton structure to decompose the overall task and improve the performance of the learned controllers; (4) by incorporating Gaussian Processes (GPs) to estimate the uncertain dynamics, we synthesize a model-based safeguard using Exponential Control Barrier Functions (ECBFs) to handle problems with high-order relative degrees. In addition, we exploit the properties of the LTL automata and the ECBFs to construct a guiding process that further improves the efficiency of exploration. Finally, we demonstrate the effectiveness of the framework in several robotic environments and show that the ECBF-based modular deep RL algorithm achieves near-perfect success rates and guards safety with high probability during training.
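A toy sketch of automaton-guided reward shaping follows: the agent's reward at each step comes from progress in a small automaton tracking the task "eventually visit A, then eventually visit B". The DFA, labels, and reward values are illustrative assumptions only; the paper uses a more general automaton construction with formal optimality guarantees.

```python
# Schematic sketch of reward shaping driven by progress in a task automaton.
TRANSITIONS = {  # (automaton state, observed label) -> next automaton state
    (0, "A"): 1,
    (1, "B"): 2,   # state 2 is accepting
}


def shaped_reward(q, label):
    q_next = TRANSITIONS.get((q, label), q)   # stay put on irrelevant labels
    reward = 1.0 if q_next > q else 0.0       # reward automaton progress
    return q_next, reward


q, total = 0, 0.0
for label in ["none", "A", "none", "B"]:      # labels emitted by the environment
    q, r = shaped_reward(q, label)
    total += r
print(q, total)  # reaches accepting state 2 with total shaped reward 2.0
```

In the full framework this shaped reward would be combined with a policy-gradient learner and an ECBF-based safety filter on the chosen actions.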

Unmanned Aerial Vehicles (UAVs) have moved beyond a platform for hobbyists to enable environmental monitoring, journalism, filmmaking, search and rescue, package delivery, and entertainment. This paper describes 3D displays using swarms of flying light specks, FLSs. An FLS is a small (hundreds of micrometers in size) UAV with one or more light sources to generate different colors and textures with adjustable brightness. A synchronized swarm of FLSs renders an illumination in a pre-specified 3D volume, an FLS display. An FLS display provides true depth, enabling a user to perceive a scene more completely by analyzing its illumination from different angles. An FLS display may either be non-immersive or immersive. Both will support 3D acoustics. Non-immersive FLS displays may be the size of a 1980's computer monitor, enabling a surgical team to observe and control micro robots performing heart surgery inside a patient's body. Immersive FLS displays may be the size of a room, enabling users to interact with objects, e.g., a rock, a teapot. An object with behavior will be constructed using FLS-matters. FLS-matter will enable a user to touch and manipulate an object, e.g., a user may pick up a teapot or throw a rock. An immersive and interactive FLS display will approximate Star Trek's Holodeck. A successful realization of the research ideas presented in this paper will provide fundamental insights into implementing a Holodeck using swarms of FLSs. A Holodeck will transform the future of human communication and perception, and how we interact with information and data. It will revolutionize the future of how we work, learn, play and entertain, receive medical care, and socialize.

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

This paper addresses the new problem of understanding human gaze communication in social videos at both the atomic level and the event level, which is significant for studying human social interactions. To tackle this novel and challenging problem, we contribute a large-scale video dataset, VACATION, which covers diverse daily social scenes and gaze communication behaviors with complete annotations of objects and human faces, human attention, and communication structures and labels at both the atomic and event levels. Together with VACATION, we propose a spatio-temporal graph neural network to explicitly represent the diverse gaze interactions in the social scenes and to infer atomic-level gaze communication by message passing. We further propose an event network with an encoder-decoder structure to predict event-level gaze communication. Our experiments demonstrate that the proposed model significantly improves over various baselines in predicting both atomic-level and event-level gaze communication.
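To illustrate the message-passing mechanism in isolation, the numpy sketch below performs one round of neighbor aggregation and node update on a small gaze-interaction graph. The weights are random and the graph is made up; this shows the generic mechanism only, not the paper's spatio-temporal architecture.

```python
# Minimal numpy sketch of one message-passing round on a small graph whose
# nodes stand for people and attended objects. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 4, 8
features = rng.normal(size=(num_nodes, dim))
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)

# Normalize so each node averages over its neighbors.
norm_adj = adjacency / adjacency.sum(axis=1, keepdims=True)
W_msg, W_self = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))

messages = norm_adj @ features                                    # aggregate neighbors
updated = np.maximum(features @ W_self + messages @ W_msg, 0.0)   # ReLU node update
print(updated.shape)  # (4, 8): new node states after one message-passing step
```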

Movie recommendation systems provide users with ranked lists of movies based on individual preferences and constraints. Two types of models are commonly used to generate ranking results: long-term models and session-based models. While long-term models represent the interactions between users and movies that are expected to change slowly over time, session-based models encode users' short-term interests and the changing dynamics of movies' attributes. In this paper, we propose the LSIC model, which leverages Long- and Short-term Information in Content-aware movie recommendation using adversarial training. In the adversarial process, we train a generator as a reinforcement learning agent that recommends the next movie to a user sequentially. We also train a discriminator that attempts to distinguish the generated list of movies from real records. Movie poster information is integrated to further improve recommendation performance, which is especially important when few ratings are available. The experiments demonstrate that the proposed model consistently outperforms competing methods and sets a new state of the art. We will release the source code of this work after publication.
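The core adversarial idea can be sketched in a few lines: the generator (here a toy softmax policy over movies) is updated by REINFORCE, using the discriminator's score of the recommended movie as the reward. Both models below are numpy stand-ins under assumed names and values, not the paper's LSIC architecture.

```python
# Hedged toy sketch: REINFORCE generator rewarded by a frozen discriminator.
import numpy as np

rng = np.random.default_rng(0)
num_movies = 5
logits = np.zeros(num_movies)                       # generator parameters
disc_scores = np.array([0.1, 0.9, 0.2, 0.4, 0.3])   # toy "looks real" scores

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    movie = rng.choice(num_movies, p=probs)
    reward = disc_scores[movie]                     # discriminator score as reward
    grad = -probs
    grad[movie] += 1.0                              # d log pi(movie) / d logits
    logits += 0.1 * reward * grad                   # REINFORCE ascent step

print(np.argmax(logits))  # should concentrate on the highest-scored movie (index 1)
```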

Existing methods for interactive image retrieval have demonstrated the merit of integrating user feedback, improving retrieval results. However, most current systems rely on restricted forms of user feedback, such as binary relevance responses, or feedback based on a fixed set of relative attributes, which limits their impact. In this paper, we introduce a new approach to interactive image search that enables users to provide feedback via natural language, allowing for more natural and effective interaction. We formulate the task of dialog-based interactive image retrieval as a reinforcement learning problem, and reward the dialog system for improving the rank of the target image during each dialog turn. To avoid the cumbersome and costly process of collecting human-machine conversations as the dialog system learns, we train our system with a user simulator, which is itself trained to describe the differences between target and candidate images. The efficacy of our approach is demonstrated in a footwear retrieval application. Extensive experiments on both simulated and real-world data show that 1) our proposed learning framework achieves better accuracy than other supervised and reinforcement learning baselines and 2) user feedback based on natural language rather than pre-specified attributes leads to more effective retrieval results, and a more natural and expressive communication interface.
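A minimal sketch of the rank-based reward for a dialog turn follows: the system is rewarded by how much the target image's rank improves after incorporating the user's natural-language feedback. Function names, image names, and scores are illustrative assumptions, not the paper's implementation.

```python
# Simple sketch of a rank-improvement reward for one dialog turn.
def rank_of(target, scores):
    """Rank of the target image (1 = best) under the current scoring."""
    return 1 + sum(1 for image, s in scores.items()
                   if image != target and s > scores[target])


def turn_reward(target, scores_before, scores_after):
    return rank_of(target, scores_before) - rank_of(target, scores_after)


before = {"boot_a": 0.9, "boot_b": 0.7, "target_boot": 0.5}
after = {"boot_a": 0.6, "boot_b": 0.4, "target_boot": 0.8}
print(turn_reward("target_boot", before, after))  # rank improved 3 -> 1: reward 2
```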
