
Teleportation, a widely-used locomotion technique in Virtual Reality (VR), allows instantaneous movement within VR environments. Enhanced hand tracking in modern VR headsets has popularized hands-only teleportation methods, which eliminate the need for physical controllers. However, these techniques have not fully explored the potential of bi-manual input, where each hand plays a distinct role in teleportation: one controls the teleportation point and the other confirms selections. Additionally, the influence of users' posture, whether sitting or standing, on these techniques remains unexplored. Furthermore, previous teleportation evaluations lacked assessments based on established human motor models such as Fitts' Law. To address these gaps, we conducted a user study (N=20) to evaluate bi-manual pointing performance in VR teleportation tasks, considering both sitting and standing postures. We proposed a variation of the Fitts' Law model to accurately assess users' teleportation performance. We designed and evaluated various bi-manual teleportation techniques, comparing them to uni-manual and dwell-based techniques. Results showed that bi-manual techniques, particularly when the dominant hand is used for pointing and the non-dominant hand for selection, enable faster teleportation compared to other methods. Furthermore, bi-manual and dwell techniques proved significantly more accurate than uni-manual teleportation. Moreover, our proposed Fitts' Law variation more accurately predicted users' teleportation performance compared to existing models. Finally, we developed a set of guidelines for designers to enhance VR teleportation experiences and optimize user interactions.
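For reference, the standard Shannon formulation of Fitts' Law relates movement time to target distance and width; the abstract does not state the exact teleportation-specific variation proposed in the paper, so the following is only the conventional baseline model:

```latex
% Conventional Shannon formulation of Fitts' Law (baseline form, not the paper's variation)
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
% MT: movement (here, teleportation) time; D: distance to the target; W: target width;
% a, b: empirically fitted regression coefficients; \log_2(D/W + 1) is the index of difficulty.
```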

Related Content

The IEEE Virtual Reality conference has long been the premier international venue for presenting research results across the broad field of virtual reality (VR), and it seeks high-quality original papers spanning augmented reality (AR), mixed reality (MR), and 3D user interfaces. Each paper should be categorized as primarily covering research, applications, or systems, classified according to the following guidelines: research papers should describe results that advance software, hardware, algorithms, interaction, or human factors; application papers should explain how the authors built on existing ideas and applied them in novel ways to solve interesting problems, and each should include an evaluation of how successfully VR/AR/MR was used in the given application domain. Official website:

In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
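As a rough illustration of the setting only (not the paper's learning-rate-free algorithm, which the abstract does not specify), here is a minimal sketch of Riemannian stochastic gradient descent on the unit sphere: each step projects the Euclidean gradient onto the tangent space and retracts back to the manifold. The fixed step size `lr` below is exactly the hyperparameter the paper aims to eliminate.

```python
import numpy as np

def project_to_tangent(x, g):
    """Project a Euclidean gradient g onto the tangent space of the unit sphere at x."""
    return g - np.dot(g, x) * x

def retract(x, v):
    """Retract a tangent vector v at x back onto the unit sphere (normalization retraction)."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sgd(grad_fn, x0, lr=0.1, steps=100):
    """Plain Riemannian SGD with a hand-tuned learning rate (the baseline the paper improves on)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = grad_fn(x)                      # stochastic Euclidean gradient
        rg = project_to_tangent(x, g)       # Riemannian gradient
        x = retract(x, -lr * rg)            # step, then map back onto the manifold
    return x

# Toy usage: minimize f(x) = x^T A x on the sphere (approximates the smallest eigenvector of A).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A @ A.T
x_hat = riemannian_sgd(lambda x: 2 * A @ x + 0.01 * rng.standard_normal(5), rng.standard_normal(5))
```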

Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data by iteratively averaging local model updates with their neighbors in a network graph. The absence of direct communication between non-neighbor nodes might lead to the belief that users cannot infer precise information about the data of others. In this work, we demonstrate the opposite, by proposing the first attack against D-GD that enables a user (or set of users) to reconstruct the private data of other users outside their immediate neighborhood. Our approach is based on a reconstruction attack against the gossip averaging protocol, which we then extend to handle the additional challenges raised by D-GD. We validate the effectiveness of our attack on real graphs and datasets, showing that the number of users compromised by a single or a handful of attackers is often surprisingly large. We empirically investigate some of the factors that affect the performance of the attack, namely the graph topology, the number of attackers, and their position in the graph.
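To make the protocol under attack concrete, below is a minimal sketch of the D-GD update itself, assuming a doubly stochastic gossip matrix W over the network graph; the reconstruction attack is not reproduced here.

```python
import numpy as np

def dgd_step(X, W, grads, lr=0.1):
    """One D-GD round: each user averages neighbors' models via the gossip matrix W,
    then takes a local gradient step. X holds one model per row; W is doubly stochastic
    and respects the graph (W[i, j] > 0 only for neighbors i and j)."""
    return W @ X - lr * grads

# Toy example: 4 users on a ring graph, scalar models, zero gradients (pure gossip averaging).
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
X = np.arange(4, dtype=float).reshape(-1, 1)   # each user's private local value
grads = np.zeros_like(X)
for _ in range(50):
    X = dgd_step(X, W, grads)
# All models converge to the initial average; the sequence of values an attacker observes in
# its own neighborhood over time is the information such reconstruction attacks exploit.
print(X.ravel())
```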

Multi-Task Learning (MTL) plays a crucial role in real-world advertising applications such as recommender systems, aiming to achieve robust representations while minimizing resource consumption. MTL endeavors to simultaneously optimize multiple tasks to construct a unified model serving diverse objectives. In online advertising systems, tasks like Click-Through Rate (CTR) and Conversion Rate (CVR) prediction are often treated concurrently as an MTL problem. However, it has been overlooked that a conversion ($y_{cvr}=1$) necessitates a preceding click ($y_{ctr}=1$). In other words, while some click events are associated with corresponding conversions, others are not. Moreover, the likelihood of noise is significantly higher in click samples without conversions than in those with conversions, and existing methods cannot differentiate between these two scenarios. In this study, exposure labels corresponding to conversions are regarded as definitive indicators, and a novel task-specific \textbf{p}air\textbf{wise} \textbf{r}anking (PWiseR) loss between model predictions is introduced to encourage the model to rely more on them. To demonstrate the effect of the proposed loss function, experiments were conducted on different MTL and Single-Task Learning (STL) models using four distinct public MTL datasets, namely Alibaba FR, NL, US, and CCP, along with a proprietary industrial dataset. The results indicate that our proposed loss function outperforms the BCE loss function in most cases in terms of the AUC metric.
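For intuition, a generic pairwise ranking loss of this flavor can be sketched as below: predictions on clicks that led to a conversion are pushed above predictions on clicks without one, over all pairs in a batch. This is only a schematic stand-in, not the exact PWiseR formulation from the paper.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(pred_pos, pred_neg, margin=0.0):
    """Generic pairwise ranking loss: encourage predictions on clicks followed by a
    conversion (pred_pos) to exceed predictions on clicks without one (pred_neg),
    averaged over all (pos, neg) pairs in the batch."""
    diff = pred_pos.unsqueeze(1) - pred_neg.unsqueeze(0)   # all pairwise score differences
    return F.softplus(margin - diff).mean()                 # softplus surrogate pushing diff > margin

# Toy usage: CTR logits split by whether a conversion followed the click.
pred_pos = torch.tensor([2.1, 1.3, 0.7])   # clicks with y_cvr = 1
pred_neg = torch.tensor([1.9, 0.2])        # clicks with y_cvr = 0
loss = pairwise_ranking_loss(pred_pos, pred_neg)
```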

As with any fuzzer, directing Generator-Based Fuzzers (GBF) toward particular code targets can increase the fuzzer's effectiveness. In previous work, coverage-guided fuzzers used a mix of static analysis, taint analysis, and constraint-solving approaches to address this problem. However, none of these techniques were crafted specifically for GBF, where input generators are used to construct program inputs. Our observation is that input generators carry information about the input structure, which is naturally expressed through the type composition of the program input. In this paper, we introduce a type-based mutation heuristic, along with constant string lookup, for Java GBF. Our key intuition is that if one can identify which sub-parts (types) of the input will likely influence the branching decision, then focusing mutation on the generator choices that construct those types is likely to achieve the desired coverage. We used our technique to fuzz AWS Lambda applications. Results compared to a baseline GBF tool show an almost 20\% average improvement in application coverage, and larger improvements when third-party code is included.
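The paper targets Java GBF; purely as a schematic of the idea (and with invented names throughout), the sketch below records a generator's choice sequence tagged with the type each choice constructs and mutates preferentially the choices that build the type suspected to influence the target branch.

```python
import random

def mutate_choices(choices, target_type, flip_prob=0.5):
    """Schematic type-based mutation: 'choices' is the recorded sequence of generator
    decisions, each tagged with the type it constructs. Choices building the type
    suspected to influence the target branch are re-drawn preferentially; the rest
    are left mostly untouched."""
    mutated = []
    for typ, value in choices:
        if typ == target_type and random.random() < flip_prob:
            mutated.append((typ, random.randint(0, 255)))   # re-draw this sub-choice
        else:
            mutated.append((typ, value))
    return mutated

# Toy run: focus mutation on the String-typed sub-choices of the generated input.
trace = [("int", 7), ("String", 104), ("String", 105), ("bool", 1)]
print(mutate_choices(trace, target_type="String"))
```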

We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference time by optimizing initial noise latents. Our method can optimize through any differentiable feature-matching loss to achieve a target (stylized) output, and it leverages gradient checkpointing for memory efficiency. We demonstrate a surprisingly wide range of applications for music generation, including inpainting, outpainting, and looping, as well as intensity, melody, and musical structure control, all without ever fine-tuning the underlying model. When we compare our approach against related training-, guidance-, and optimization-based methods, we find DITTO achieves state-of-the-art performance on nearly all tasks, outperforming comparable approaches on controllability, audio quality, and computational efficiency, thus opening the door to high-quality, flexible, training-free control of diffusion models. Sound examples can be found at //DITTO-Music.github.io/web/.
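A minimal sketch of the general pattern of inference-time noise-latent optimization is shown below, with placeholder `sample_fn` and `feature_fn` standing in for the differentiable diffusion sampler and feature extractor; the actual method additionally uses gradient checkpointing through the sampler, which this sketch omits.

```python
import torch

def optimize_initial_noise(sample_fn, feature_fn, target_feats, latent_shape,
                           steps=100, lr=0.05):
    """Schematic inference-time control via initial-noise optimization: sample_fn maps an
    initial noise latent to an output differentiably, feature_fn extracts the controlled
    feature (e.g. an intensity curve), and the latent is optimized so the generated output
    matches target_feats. Both functions are placeholders, not a specific model's API."""
    z = torch.randn(latent_shape, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = sample_fn(z)                                            # differentiable "sampling"
        loss = torch.nn.functional.mse_loss(feature_fn(out), target_feats)
        loss.backward()                                               # gradients flow back to the latent
        opt.step()
    return z.detach()

# Toy stand-ins so the sketch runs: "sampling" is a fixed linear map, the "feature" is the mean.
A = torch.randn(8, 8)
z_opt = optimize_initial_noise(lambda z: A @ z, lambda x: x.mean(), torch.tensor(1.0), (8,))
```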

Artificial Intelligence (AI) approaches have been incorporated into modern learning environments and software engineering (SE) courses and curricula for several years. However, with the significant rise in popularity of large language models (LLMs) in general, and of OpenAI's LLM-powered chatbot ChatGPT in particular over the last year, educators are faced with rapidly changing classroom environments and disrupted teaching principles. Examples range from programming assignment solutions that are fully generated via ChatGPT to various forms of cheating during exams. Despite these negative aspects and emerging challenges, AI tools in general, and LLM applications in particular, can also provide significant opportunities in a wide variety of SE courses, supporting both students and educators in meaningful ways. In this early research paper, we present preliminary results of a systematic analysis of current trends in the area of AI and how they can be integrated into university-level SE curricula, guidelines, and approaches to support both instructors and learners. We collected both teaching and research papers and analyzed their potential use in SE education, using the ACM Computer Science Curriculum Guidelines CS2023. As an initial outcome, we discuss a series of opportunities for AI applications and further research areas.

We introduce the third major version of Metatheory.jl, a Julia library for general-purpose metaprogramming and symbolic computation. Metatheory.jl provides a flexible and performant implementation of e-graphs and Equality Saturation (EqSat) that addresses the two-language problem in high-level compiler optimizations, symbolics, and metaprogramming. We present results from our ongoing optimization efforts, comparing our system's performance against the state-of-the-art egg Rust library, and show that performant EqSat implementations are possible without sacrificing the comfort of a direct one-to-one integration with a dynamic, high-level, interactive host programming language.
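To convey the saturate-then-extract idea behind EqSat (only as a naive illustration: real implementations such as egg and Metatheory.jl use an e-graph to share equivalent sub-terms compactly rather than an explicit set of terms), a toy sketch with two invented rewrite rules:

```python
# Naive "equality saturation" on an explicit set of terms (illustration only).
def rewrite_once(term):
    """Yield terms reachable by one application of two toy rules:
    x * 2 -> x << 1 and x * 1 -> x. Terms are nested tuples like ('*', 'a', 2)."""
    if isinstance(term, tuple):
        op, lhs, rhs = term
        if op == "*" and rhs == 2:
            yield ("<<", lhs, 1)
        if op == "*" and rhs == 1:
            yield lhs
        for new_lhs in rewrite_once(lhs):
            yield (op, new_lhs, rhs)
        for new_rhs in rewrite_once(rhs):
            yield (op, lhs, new_rhs)

def saturate(term):
    """Apply the rules until no new equivalent terms appear; return the equivalence class."""
    seen, frontier = {term}, [term]
    while frontier:
        frontier = [t for cur in frontier for t in rewrite_once(cur) if t not in seen]
        seen.update(frontier)
    return seen

def cost(t):
    """Toy cost model for extraction: multiplications are expensive, shifts and leaves cheap."""
    if not isinstance(t, tuple):
        return 1
    op, lhs, rhs = t
    return (8 if op == "*" else 1) + cost(lhs) + cost(rhs)

# Saturate (a * 1) * 2, then extract the cheapest equivalent form.
terms = saturate(("*", ("*", "a", 1), 2))
print(min(terms, key=cost))   # ('<<', 'a', 1)
```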

We present a demonstration of Play What I Mean (PWIM): a novel, AI-supported interaction technique for interactive emergent narrative (IEN) games and play experiences. By assisting players in translating high-level gameplay intents (expressed as short, unstructured text strings) into concrete game actions, PWIM aims to support open-ended player input while mitigating the overwhelm that players sometimes feel when confronting the large action spaces that characterize IEN gameplay. In matching player intents to game actions, PWIM makes use of an off-the-shelf sentence embedding model that is lightweight enough to run locally on a player's device, and wraps this model in a simple user interface that allows the player to work around occasional classification errors.
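The core matching step can be sketched as nearest-neighbor retrieval over sentence embeddings. The snippet below uses random stand-in vectors and invented action names purely for illustration; an actual system would embed the intent string and action descriptions with a small on-device sentence-embedding model, as the abstract describes.

```python
import numpy as np

def rank_actions(intent_vec, action_vecs, action_names, top_k=3):
    """Rank concrete game actions by cosine similarity between the embedded player intent
    and each embedded action description (all embeddings assumed L2-normalized)."""
    scores = action_vecs @ intent_vec
    order = np.argsort(-scores)[:top_k]
    return [(action_names[i], float(scores[i])) for i in order]

# Stand-in embeddings (random, normalized) and hypothetical action names.
rng = np.random.default_rng(1)
names = ["talk to the innkeeper", "draw sword", "leave the tavern"]
vecs = rng.standard_normal((3, 16)); vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
intent = vecs[0] + 0.1 * rng.standard_normal(16); intent /= np.linalg.norm(intent)
print(rank_actions(intent, vecs, names))   # the player can pick among the top matches
```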

A Markov Decision Process (MDP) provides a mathematical framework for formulating the learning process of agents in reinforcement learning. MDPs are limited by the Markovian assumption that a reward depends only on the immediate state and action. However, a reward sometimes depends on the history of states and actions, which yields a decision process in a non-Markovian environment. In such environments, agents receive rewards sparsely via temporally extended behaviors, and the learned policies tend to be similar. As a result, agents with similar policies generally overfit to the given task and cannot quickly adapt to perturbations of the environment. To address this problem, this paper learns diverse policies from the history of state-action pairs in a non-Markovian environment, using a policy dispersion scheme designed to seek diverse policy representations. Specifically, we first adopt a transformer-based method to learn policy embeddings. Then, we stack the policy embeddings to construct a dispersion matrix that induces a set of diverse policies. Finally, we prove that if the dispersion matrix is positive definite, the dispersed embeddings can effectively enlarge the disagreements across policies, yielding a diverse expression of the original policy embedding distribution. Experimental results show that this dispersion scheme obtains more expressive diverse policies, which in turn deliver more robust performance than recent baselines across various learning environments.
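As a small numerical illustration (a generic Gram-matrix construction and eigenvalue check, not necessarily the paper's exact dispersion matrix): embeddings that collapse to nearly the same point give a singular, non-positive-definite matrix, while well-spread embeddings give a positive definite one.

```python
import numpy as np

def dispersion_matrix(embeddings):
    """Simple dispersion (Gram) matrix from stacked policy embeddings, one embedding per row."""
    return embeddings @ embeddings.T

def is_positive_definite(M, tol=1e-8):
    """Check positive definiteness via the smallest eigenvalue of a symmetric matrix."""
    return np.linalg.eigvalsh(M).min() > tol

rng = np.random.default_rng(0)
collapsed = np.ones((4, 8)) + 1e-6 * rng.standard_normal((4, 8))   # near-identical policies
dispersed = rng.standard_normal((4, 8))                            # well-spread policies
print(is_positive_definite(dispersion_matrix(collapsed)),   # False: embeddings (almost) coincide
      is_positive_definite(dispersion_matrix(dispersed)))   # True: embeddings disagree
```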

We propose a novel single-shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector through a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground truth, i.e., no extra annotation is required. In conjunction with it, we employ a global activation module that learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both the PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16-based DES, we achieve an mAP of 81.7 on the VOC2007 test set and an mAP of 32.8 on COCO test-dev, with an inference speed of 31.5 milliseconds per image on a Titan Xp GPU. With a lower-resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference speed of 13.0 milliseconds per image.
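One common way to realize channel-wise global activation is a squeeze-style gate: global average pooling followed by a small bottleneck MLP and a sigmoid that rescales each channel. The sketch below is that generic interpretation, not necessarily the exact module used in DES.

```python
import torch
import torch.nn as nn

class GlobalActivation(nn.Module):
    """Schematic channel-wise global activation: global average pooling, a bottleneck MLP,
    and a sigmoid gate that reweights each channel of the feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (N, C, H, W) feature map
        w = self.gate(x.mean(dim=(2, 3)))          # per-channel weights from global pooling
        return x * w[:, :, None, None]             # rescale each channel

feat = torch.randn(2, 64, 38, 38)
print(GlobalActivation(64)(feat).shape)            # torch.Size([2, 64, 38, 38])
```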
