
Virtual reality (VR) games are gradually becoming more elaborate and feature-rich, but they still fall short of the complexity of traditional digital games. One common feature used to extend and organize complex gameplay is the in-game inventory, which allows players to obtain and carry new tools and items throughout their journey. However, VR imposes additional requirements and challenges that impede the implementation of this important feature and prevent games from reaching their full potential. Our current work focuses on the design space of inventories in VR games. We introduce this sparsely researched topic by constructing a first taxonomy of the underlying design considerations and building blocks. Furthermore, we present three different inventories that were designed using our taxonomy and evaluate them in an early qualitative study. The results underline the importance of our research and reveal promising insights that highlight the potential for VR games.

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate a taxonomy, and a full taxonomy of the Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicle" and "steel structures", although for some this simply means that "car" is part of several different taxonomies. A taxonomy may also just organize things into groups or be an alphabetical list; here, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus proceeds from the general to the more specific.
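To make the tree-structure view concrete, here is a minimal, illustrative Python sketch of a hierarchical taxonomy: the root node applies to all objects, and each child applies to a subset of its parent's objects. The node names and the `TaxonomyNode` class are invented for the example; a networked taxonomy with multiple parents per node would need a DAG instead of a tree.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One category in a hierarchical taxonomy (illustrative sketch).

    The root applies to every object; each child applies to a subset of its
    parent's objects, so reasoning proceeds from the general to the specific.
    """
    name: str
    children: list = field(default_factory=list)

    def add(self, child_name: str) -> "TaxonomyNode":
        node = TaxonomyNode(child_name)
        self.children.append(node)
        return node

root = TaxonomyNode("thing")     # root node: applies to all objects
vehicle = root.add("vehicle")    # more specific classification
vehicle.add("car")
vehicle.add("bicycle")
```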


Reasoning, a crucial ability for complex problem-solving, plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation. It serves as a fundamental methodology in the field of Artificial General Intelligence (AGI). With the ongoing development of foundation models, e.g., Large Language Models (LLMs), there is a growing interest in exploring their abilities in reasoning tasks. In this paper, we introduce seminal foundation models proposed or adaptable for reasoning, highlighting the latest advancements in various reasoning tasks, methods, and benchmarks. We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models. We also discuss the relevance of multimodal learning, autonomous agents, and super alignment in the context of reasoning. By discussing these future research directions, we hope to inspire researchers in their exploration of this field, stimulate further advancements in reasoning with foundation models, and contribute to the development of AGI.

Intelligent transportation systems play a crucial role in modern traffic management and optimization, greatly improving traffic efficiency and safety. With the rapid development of generative artificial intelligence (generative AI) technologies in image generation and natural language processing, generative AI has also come to play an important role in addressing key issues in intelligent transportation systems, such as data sparsity, the difficulty of observing abnormal scenarios, and modeling data uncertainty. In this review, we systematically investigate the literature on generative AI techniques for addressing key issues in different types of tasks in intelligent transportation systems. First, we introduce the principles of different generative AI techniques and their potential applications. Then, we classify tasks in intelligent transportation systems into four types: traffic perception, traffic prediction, traffic simulation, and traffic decision-making. We systematically illustrate how generative AI techniques address key issues in these four types of tasks. Finally, we summarize the challenges faced in applying generative AI to intelligent transportation systems, and discuss future research directions based on different application scenarios.

Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, it is not yet clear to what extent LLMs are capable of reasoning. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions. Our aim is to provide a detailed and up-to-date review of this topic and stimulate meaningful discussion and future work.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
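As a concrete illustration of the quantization problem described above, the following is a minimal Python sketch of asymmetric uniform quantization to a low-bit integer grid. The function names and the 4-bit setting are illustrative assumptions; the surveyed methods (per-channel scaling, non-uniform grids, quantization-aware training, and so on) are considerably more elaborate.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, num_bits: int = 4):
    """Map real values onto a fixed grid of 2**num_bits integer levels."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = max((x.max() - x.min()) / (qmax - qmin), 1e-8)   # grid spacing
    zero_point = qmin - np.round(x.min() / scale)            # aligns x.min with qmin
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Recover an approximation of the original real values."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(8).astype(np.float32)
q, s, z = uniform_quantize(weights, num_bits=4)
print(weights)
print(dequantize(q, s, z))   # close to the original, stored with 4-bit codes
```

The gap between the original weights and the dequantized values is the accuracy cost that the surveyed techniques try to minimize while keeping the bit budget fixed.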

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules including an intra-feature relation modeling module and an inter-feature relation modeling module are developed in FRN. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
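The following is a hedged PyTorch sketch of the decompose-then-reconstruct idea described in this abstract. The class name, layer sizes, and the attention-style weighting are assumptions made for illustration; they do not reproduce the paper's actual FDN/FRN modules or its intra-/inter-feature relation modeling.

```python
import torch
import torch.nn as nn

class DecomposeReconstructSketch(nn.Module):
    """Toy decompose-then-reconstruct head on top of backbone features."""

    def __init__(self, in_dim=512, num_latent=8, latent_dim=64, num_classes=7):
        super().__init__()
        # "Decomposition": project backbone features into K latent features
        # intended to capture shared, action-aware information.
        self.decompose = nn.ModuleList(
            [nn.Linear(in_dim, latent_dim) for _ in range(num_latent)])
        # "Reconstruction": re-weight and recombine the latent features into
        # a single expression feature (a stand-in for expression-specific variation).
        self.weighting = nn.Linear(latent_dim, 1)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, backbone_feat):                       # (B, in_dim)
        latents = torch.stack([f(backbone_feat) for f in self.decompose], dim=1)
        weights = torch.softmax(self.weighting(latents), dim=1)   # (B, K, 1)
        expression_feat = (weights * latents).sum(dim=1)          # reconstruct
        return self.classifier(expression_feat)

logits = DecomposeReconstructSketch()(torch.randn(4, 512))  # 4 dummy feature vectors
```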

Promoting behavioural diversity is critical for solving games with non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). Yet, there is a lack of rigorous treatment for defining diversity and constructing diversity-aware learning dynamics. In this work, we offer a geometric interpretation of behavioural diversity in games and introduce a novel diversity metric based on \emph{determinantal point processes} (DPP). By incorporating the diversity metric into best-response dynamics, we develop \emph{diverse fictitious play} and \emph{diverse policy-space response oracle} for solving normal-form games and open-ended games. We prove the uniqueness of the diverse best response and the convergence of our algorithms on two-player games. Importantly, we show that maximising the DPP-based diversity metric guarantees to enlarge the \emph{gamescape} -- convex polytopes spanned by agents' mixtures of strategies. To validate our diversity-aware solvers, we test on tens of games that show strong non-transitivity. Results suggest that our methods achieve much lower exploitability than state-of-the-art solvers by finding effective and diverse strategies.
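To illustrate the determinant-based notion of diversity, here is a small Python sketch: strategies are represented by payoff (or behaviour) vectors, and the determinant of their Gram matrix serves as a diversity score, in the spirit of a DPP. The kernel choice and the jitter term are assumptions for the example; the paper's kernel construction and its gamescape-enlargement argument are richer.

```python
import numpy as np

def dpp_diversity(payoff_vectors: np.ndarray) -> float:
    """Determinant of the Gram (kernel) matrix of a strategy population."""
    L = payoff_vectors @ payoff_vectors.T              # pairwise similarity kernel
    return float(np.linalg.det(L + 1e-6 * np.eye(len(L))))  # jitter for stability

# Near-duplicate strategies collapse the determinant; spread-out ones grow it.
similar = np.array([[1.0, 0.0], [0.99, 0.01]])
diverse = np.array([[1.0, 0.0], [0.0, 1.0]])
print(dpp_diversity(similar))   # close to 0
print(dpp_diversity(diverse))   # close to 1
```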

Multi-agent influence diagrams (MAIDs) are a popular form of graphical model that, for certain classes of games, have been shown to offer key complexity and explainability advantages over traditional extensive form game (EFG) representations. In this paper, we extend previous work on MAIDs by introducing the concept of a MAID subgame, as well as subgame perfect and trembling hand perfect equilibrium refinements. We then prove several equivalence results between MAIDs and EFGs. Finally, we describe an open source implementation for reasoning about MAIDs and computing their equilibria.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation, and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
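As a toy illustration of category (1), the sketch below applies unstructured magnitude pruning to a weight matrix: the smallest-magnitude weights are zeroed out to reduce the number of effective parameters. The function name and the sparsity setting are illustrative; practical pipelines usually prune iteratively and fine-tune to recover accuracy.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(4, 4)
pruned, mask = magnitude_prune(w, sparsity=0.75)
print(mask.mean())   # fraction of weights kept, roughly 0.25
```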

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
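For reference, this is what a "simple, shallow" metric such as PSNR looks like in code: it reduces to a pixel-wise mean squared error and ignores structure, which is part of why distances in deep feature space can align better with human judgments. The example images are synthetic placeholders.

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio: a shallow, pixel-wise similarity metric."""
    mse = np.mean((img_a - img_b) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.random.rand(64, 64, 3)
shifted = np.roll(a, 1, axis=0)                               # structurally similar
noisy = np.clip(a + 0.1 * np.random.randn(*a.shape), 0, 1)    # pixel-aligned but noisy
print(psnr(a, shifted), psnr(a, noisy))   # rankings need not match human perception
```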

Recommender systems (RS) are a hot area where artificial intelligence (AI) techniques can be effectively applied to improve performance. Since the well-known Netflix Challenge, collaborative filtering (CF) has become the most popular and effective recommendation method. Despite their success in CF, various AI techniques still have to face the data sparsity and cold start problems. Previous works tried to solve these two problems by utilizing auxiliary information, such as social connections among users and meta-data of items. However, they process different types of information separately, leading to information loss. In this work, we propose to utilize Heterogeneous Information Networks (HIN), a natural and general representation of different types of data, to enhance CF-based recommendation methods. HIN-based recommender systems face two problems: how to represent high-level semantics for recommendation and how to fuse the heterogeneous information to recommend. To address these problems, we propose applying meta-graphs to HIN-based RS and solve the information fusion problem with a "matrix factorization (MF) + factorization machine (FM)" framework. For the "MF" part, we obtain user-item similarity matrices from each meta-graph and adopt low-rank matrix approximation to get latent features for both users and items. For the "FM" part, we apply FM with group lasso (FMG) on the obtained features to simultaneously predict missing ratings and select useful meta-graphs. Experimental results on two large real-world datasets, i.e., Amazon and Yelp, show that our proposed approach outperforms the state-of-the-art FM and other HIN-based recommendation methods.
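A minimal Python sketch of the "MF" step as described: a user-item similarity matrix is factorized by truncated SVD to obtain latent user and item features. The toy matrix, the rank, and the function name are illustrative assumptions; the paper derives one similarity matrix per meta-graph and then feeds the concatenated features to an FM with group lasso, which is not shown here.

```python
import numpy as np

def low_rank_features(similarity: np.ndarray, rank: int = 2):
    """Truncated-SVD low-rank approximation yielding latent user/item features."""
    U, s, Vt = np.linalg.svd(similarity, full_matrices=False)
    user_feats = U[:, :rank] * np.sqrt(s[:rank])       # latent user features
    item_feats = Vt[:rank, :].T * np.sqrt(s[:rank])    # latent item features
    return user_feats, item_feats

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)   # toy user-item similarity matrix
u, v = low_rank_features(R, rank=2)
print(u @ v.T)   # rank-2 reconstruction approximating R
```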
