
As highly automated vehicles reach higher deployment rates, they encounter increasingly dangerous situations. Because a crash carries significant consequences for occupants, bystanders, and property, as well as for the viability of autonomy and adjacent businesses, we must search for more effective ways to comprehensively and reliably train autonomous vehicles to better navigate the complex scenarios with which they struggle. We therefore introduce a taxonomy of potentially adversarial elements that may contribute to poor performance or system failures, as a means of identifying and elucidating lesser-seen risks. This taxonomy may be used to characterize failures of automation, as well as to support simulation and real-world training efforts by providing a more comprehensive classification system for events resulting in disengagement, collision, or other negative consequences. The taxonomy is created from and tested against real collision events to ensure comprehensive coverage with minimal class overlap and few omissions. It is intended to be used both for the identification of harm-contributing adversarial events and for the generation thereof (to create extreme edge- and corner-case scenarios) in training procedures.
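
As a rough illustration of how such a taxonomy might be applied, the following Python sketch tags recorded disengagement or collision events with classes from a small, hypothetical slice of an adversarial-element taxonomy. The category names, event fields, and keyword matching are illustrative assumptions, not the taxonomy proposed in the abstract.

```python
# Minimal sketch: tagging driving events with classes from a small,
# hypothetical adversarial-element taxonomy. Category names are illustrative.
from dataclasses import dataclass, field

TAXONOMY = {
    "environmental": ["sun_glare", "heavy_rain", "road_debris"],
    "behavioral": ["aggressive_cut_in", "jaywalking_pedestrian"],
    "infrastructural": ["missing_lane_markings", "occluded_signage"],
}

@dataclass
class Event:
    description: str                       # free-text log entry for the event
    outcome: str                           # e.g. "disengagement" or "collision"
    labels: list = field(default_factory=list)

def label_event(event: Event) -> Event:
    """Attach every taxonomy class whose name appears in the event description."""
    for category, elements in TAXONOMY.items():
        for element in elements:
            if element.replace("_", " ") in event.description.lower():
                event.labels.append(f"{category}/{element}")
    return event

if __name__ == "__main__":
    e = label_event(Event("Sun glare and an aggressive cut-in preceded the disengagement.",
                          "disengagement"))
    print(e.labels)   # ['environmental/sun_glare'] -- keyword matching is naive on purpose
```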

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one such taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means; as of 2009 it had been shown that manually constructed taxonomies, such as those of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. More broadly, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents: "car", for example, might appear under both "vehicles" and "steel structures", although for some this simply means that "car" belongs to several different taxonomies. A taxonomy may also just organize things into groups, or be an alphabetical list, though in that case the term "vocabulary" is more appropriate. In current knowledge-management usage, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects; nodes below the root are more specific classifications that apply to subsets of the total set of classified objects, so reasoning proceeds from the general to the more specific.
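
The tree structure described above is straightforward to encode. A minimal sketch, assuming a simple child-to-parent map with placeholder category names:

```python
# Minimal sketch of a hierarchical taxonomy as a child -> parent map.
# "root" applies to all objects; each node below it is a more specific class.
PARENT = {
    "vehicles": "root",
    "steel structures": "root",
    "car": "vehicles",          # a strict tree allows one parent per node;
    "bicycle": "vehicles",      # a network-structured taxonomy would allow several
}

def lineage(node: str) -> list:
    """Return the path from the root down to the given node (general -> specific)."""
    path = [node]
    while node != "root":
        node = PARENT[node]
        path.append(node)
    return list(reversed(path))

print(lineage("car"))   # ['root', 'vehicles', 'car']
```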


Big models have achieved revolutionary breakthroughs in the field of AI, but they also raise potential concerns. To address such concerns, alignment technologies were introduced to make these models conform to human preferences and values. Despite considerable advancements in the past year, various challenges remain in establishing the optimal alignment strategy, such as data cost and scalable oversight, and how to align remains an open question. In this survey paper, we comprehensively investigate value alignment approaches. We first unpack the historical context of alignment, tracing back to the 1920s (where it comes from), then delve into the mathematical essence of alignment (what it is), shedding light on the inherent challenges. Building on this foundation, we provide a detailed examination of existing alignment methods, which fall into three categories: Reinforcement Learning, Supervised Fine-Tuning, and In-context Learning, and demonstrate their intrinsic connections, strengths, and limitations, helping readers better understand this research area. In addition, two emerging topics, personal alignment and multimodal alignment, are discussed as novel frontiers in the field. Looking forward, we discuss potential alignment paradigms and how they could handle the remaining challenges, anticipating where future alignment research will go.
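
The Reinforcement Learning category the survey describes typically centers on first learning a reward model from human preference comparisons. As a hedged illustration (not taken from the survey), the PyTorch sketch below shows the pairwise, Bradley-Terry-style loss commonly used for that step, with random tensors standing in for real model outputs.

```python
# Minimal sketch of the pairwise reward-model objective used in RLHF-style
# alignment: the reward of the human-preferred response should exceed the
# reward of the rejected one. Tensors here are random stand-ins.
import torch
import torch.nn.functional as F

batch = 8
r_chosen = torch.randn(batch)    # reward-model scores for preferred responses
r_rejected = torch.randn(batch)  # reward-model scores for rejected responses

# Bradley-Terry / logistic preference loss: -log sigmoid(r_chosen - r_rejected)
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
print(float(loss))
```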

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use high-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyses LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper also discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to regularly update this paper by incorporating new sections and featuring the latest LLM models.
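
As a reminder of one of the basic building blocks such surveys cover, here is a minimal pre-norm Transformer decoder block in PyTorch. The dimensions and the use of nn.MultiheadAttention are illustrative simplifications, not any particular model's architecture.

```python
# Minimal sketch of a pre-norm Transformer decoder block, the repeated unit
# inside most LLMs. Sizes are illustrative; real models stack many such blocks.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        # Causal mask: each position may only attend to itself and earlier positions.
        seq_len = x.size(1)
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out                    # residual connection around attention
        x = x + self.mlp(self.norm2(x))     # residual connection around the MLP
        return x

tokens = torch.randn(2, 10, 64)              # (batch, sequence, embedding)
print(DecoderBlock()(tokens).shape)          # torch.Size([2, 10, 64])
```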

As artificial intelligence (AI) models continue to scale up, they are becoming more capable and more integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI), and recommend an MLI for various types of agents to aid their safe deployment in real-world settings.

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undertaken significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI, as well as adopting a set of National AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the 'A Method for Ethical AI in Defence' (MEAID) technical report, which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI.

Human-in-the-loop learning aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and briefly classify and discuss applications in natural language processing, computer vision, and other areas. In addition, we describe open challenges and opportunities. This survey is intended to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
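
One concrete instance of category (1), routing the most informative examples to a human annotator, is uncertainty-sampling active learning. A minimal sketch follows, with held-out ground-truth labels standing in for the human; the dataset and round sizes are arbitrary choices for illustration.

```python
# Minimal sketch of a human-in-the-loop labeling loop (uncertainty sampling).
# Ground-truth labels stand in for the human annotator here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# Seed the loop with a few labeled examples from each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Ask the "human" to label the samples the model is least certain about.
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-10:]]
    labeled += query
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: {len(labeled)} labels, accuracy {model.score(X, y):.3f}")
```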

Recently, Mutual Information (MI) has attracted attention for bounding the generalization error of Deep Neural Networks (DNNs). However, it is intractable to accurately estimate the MI in DNNs, so most previous works have to relax the MI bound, which in turn weakens the information-theoretic explanation for generalization. To address this limitation, this paper introduces a probabilistic representation of DNNs for accurately estimating the MI. Leveraging the proposed MI estimator, we validate the information-theoretic explanation for generalization and derive a tighter generalization bound than the state-of-the-art relaxations.
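
For reference, the bound this line of work builds on is commonly stated in the following form (due to Xu and Raginsky); the exact bound used in the paper may differ.

```latex
% Commonly cited mutual-information generalization bound: if the loss is
% \sigma-sub-Gaussian, the expected generalization gap of an algorithm that
% maps a training sample S of n points to weights W satisfies
\[
  \bigl|\,\mathbb{E}[\mathrm{gen}(S, W)]\,\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]
% so a more accurate estimate of I(S; W) directly tightens the bound.
```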

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
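
The basic operations in category (1) are simple to illustrate. Below is a minimal numpy sketch of magnitude pruning and uniform symmetric 8-bit quantization applied to a single weight matrix; the sparsity level and bit-width are arbitrary choices for illustration, not recommendations.

```python
# Minimal sketch of two category-(1) techniques on a single weight matrix:
# magnitude pruning and uniform 8-bit quantization. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)

# Magnitude pruning: zero out the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Uniform symmetric 8-bit quantization: map floats to integers in [-127, 127].
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale       # approximate reconstruction

print("sparsity:", float((W_pruned == 0).mean()))
print("max quantization error:", float(np.abs(W_pruned - W_dequant).max()))
```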

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models pose limitations for the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
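
To make the idea of a unified configuration record concrete, here is a minimal Python sketch of one possible representation of an IoT resource and its QoS requirements. The field names and structure are illustrative assumptions, not IoT-CANE's actual schema.

```python
# Minimal sketch of a unified configuration record for an IoT resource.
# Field names are illustrative; they are not IoT-CANE's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class QoS:
    max_latency_ms: int
    min_availability: float          # e.g. 0.999 for "three nines"

@dataclass
class IoTResource:
    resource_id: str
    kind: str                        # "device", "edge", or "cloud"
    location: str
    capabilities: tuple
    qos: QoS

sensor = IoTResource(
    resource_id="cam-042",
    kind="device",
    location="warehouse-3/aisle-7",
    capabilities=("video", "motion-detect"),
    qos=QoS(max_latency_ms=200, min_availability=0.99),
)
print(asdict(sensor))                # serializable dict, easy to exchange or index
```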

Visual Question Answering (VQA) models have so far struggled with counting objects in natural images. We identify the use of soft attention in these models as a fundamental cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by 6.6%.
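
The paper's component is differentiable; as a simpler, non-differentiable intuition pump, a count can be obtained by deduplicating overlapping proposals with greedy non-maximum suppression. The sketch below is that stand-in, not the paper's method, and the boxes, scores, and threshold are made up.

```python
# Minimal sketch of counting by deduplicating object proposals with greedy
# non-maximum suppression -- a simpler, non-differentiable stand-in for the
# paper's counting component. Boxes, scores, and the threshold are made up.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def count_objects(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring proposal per object; overlapping ones count as duplicates."""
    kept = []
    for i in np.argsort(scores)[::-1]:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return len(kept)

# Three proposals on the first object, one on the second -> expected count of 2.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [0, 2, 10, 12], [30, 30, 40, 40]], float)
scores = np.array([0.9, 0.8, 0.7, 0.6])
print(count_objects(boxes, scores))   # 2
```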
