
Tapping buttons and hyperlinks on smartphones is a fundamental operation, but users sometimes fail to tap user-interface (UI) elements. Such mistakes degrade usability, so it is important for designers to configure UI elements so that users can select them accurately. To support designers in configuring a UI element to achieve an intended tap success rate, we developed a plugin for Figma, a popular tool for designing webpages and smartphone applications, based on our previously launched web-based application, Tappy. The plugin converts the size of a UI element from pixels to millimeters and then computes tap success rates based on the Dual Gaussian Distribution Model. We have made this plugin freely available, so readers can install the Tappy plugin for Figma from its installation page (//www.figma.com/community/plugin/66437139/tappy) or from the Figma desktop application.
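To illustrate the computation described above: under the Dual Gaussian Distribution Model, tap points around a target's center follow a Gaussian whose variance grows with target size, and the success rate of a rectangular target is the product of two error-function terms. The sketch below is a minimal approximation, not the plugin's code; the constants ALPHA and SIGMA_A and the 160-PPI conversion are illustrative assumptions, not the plugin's calibrated values.

```python
import math

# Illustrative model constants; the plugin's calibrated values may differ (assumption).
ALPHA = 0.0075    # how quickly tap-point variance grows with target size
SIGMA_A = 1.68    # absolute precision of the fingertip, in mm

def px_to_mm(px: float, ppi: float = 160.0) -> float:
    """Convert a length in pixels to millimeters for a device with the given PPI."""
    return px / ppi * 25.4

def success_rate_1d(size_mm: float) -> float:
    # Dual Gaussian Distribution Model: tap coordinate ~ N(center, sigma^2),
    # with sigma^2 = ALPHA * size^2 + SIGMA_A^2.
    sigma = math.sqrt(ALPHA * size_mm ** 2 + SIGMA_A ** 2)
    # P(|X - center| < size/2) for a Gaussian random variable X.
    return math.erf(size_mm / (2.0 * math.sqrt(2.0) * sigma))

def tap_success_rate(width_px: float, height_px: float, ppi: float = 160.0) -> float:
    """Estimated probability that a tap lands inside a width x height target."""
    w_mm, h_mm = px_to_mm(width_px, ppi), px_to_mm(height_px, ppi)
    return success_rate_1d(w_mm) * success_rate_1d(h_mm)

print(f"{tap_success_rate(48, 48):.1%}")  # e.g., a 48x48 px button at 160 PPI
```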

Related Content

ACM Transactions on Applied Perception (TAP) aims to strengthen the synergy between computer science and psychology/perception by publishing top-quality papers that help unify research in these fields. The journal publishes interdisciplinary research of significant and lasting value in any topic area that spans both computer science and perceptual psychology. All papers must incorporate both a perceptual and a computer science component. Topics include, but are not limited to: visual perception (computer graphics; scientific, data, and information visualization; digital imaging; computer vision; stereo and 3D display technologies); auditory perception (auditory displays and interfaces, auditory coding, spatial sound, speech synthesis and recognition); haptics (haptic rendering, haptic input and perception); sensorimotor perception (gestural input, body movement input); and sensory perception (sensory integration, multimodal rendering and interaction). Official website:

Tool-calling has transformed Large Language Model (LLM) applications by integrating external tools, significantly enhancing their functionality across diverse tasks. However, this integration also introduces new security vulnerabilities, particularly in the tool scheduling mechanisms of LLMs, which have not been extensively studied. To fill this gap, we present ToolCommander, a novel framework designed to exploit vulnerabilities in LLM tool-calling systems through adversarial tool injection. Our framework employs a carefully designed two-stage attack strategy: it first injects malicious tools to collect user queries, then dynamically updates the injected tools based on the stolen information to strengthen subsequent attacks. These stages enable ToolCommander to execute privacy theft, launch denial-of-service attacks, and even manipulate business competition by triggering unscheduled tool-calling. Notably, the attack success rate (ASR) reaches 91.67% for privacy theft and hits 100% for denial-of-service and unscheduled tool-calling in certain cases. Our work demonstrates that these vulnerabilities can lead to severe consequences beyond simple misuse of tool-calling systems, underscoring the urgent need for robust defensive strategies to secure LLM tool-calling systems.

Quantization of Deep Neural Network (DNN) activations is a commonly used technique to reduce compute and memory demands during DNN inference, which can be particularly beneficial on resource-constrained devices. To achieve high accuracy, existing methods for quantizing activations rely on complex mathematical computations or perform extensive searches for the best hyper-parameters. However, these expensive operations are impractical on devices with limited computation capabilities, memory capacity, and energy budgets. Furthermore, many existing methods do not focus on sub-6-bit (or deep) quantization. To fill these gaps, in this paper we propose DQA (Deep Quantization of DNN Activations), a new method that focuses on sub-6-bit quantization of activations and leverages simple shift-based operations and Huffman coding to be efficient while achieving high accuracy. We evaluate DQA at 3-, 4-, and 5-bit quantization levels with three different DNN models on two different tasks, image classification and image segmentation, using two different datasets. DQA achieves significantly better accuracy (up to 29.28% higher) than direct quantization and the state-of-the-art NoisyQuant for sub-6-bit quantization.
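To make the two ingredients concrete, the sketch below shows a generic power-of-two (shift-based) uniform quantizer followed by Huffman coding of the resulting integer codes. This is a minimal sketch of the general technique, not the authors' DQA algorithm; the bit width and the assumption of non-negative (post-ReLU) activations are illustrative.

```python
import heapq
from collections import Counter

import numpy as np

def shift_quantize(x: np.ndarray, bits: int = 4):
    """Uniform quantization with a power-of-two scale, so (de)quantization is a
    bit shift in fixed point rather than a multiply (generic sketch, not DQA)."""
    qmax = (1 << bits) - 1
    # Smallest power-of-two scale covering the range (non-negative inputs assumed).
    shift = int(np.ceil(np.log2(x.max() / qmax + 1e-12)))
    scale = 2.0 ** shift
    q = np.clip(np.round(x / scale), 0, qmax).astype(np.uint8)
    return q, shift  # dequantize with q * 2**shift

def huffman_code_lengths(data):
    """Huffman code length per symbol; enough to estimate the compressed size."""
    freq = Counter(data)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # deepen both subtrees
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

acts = np.maximum(np.random.randn(1024).astype(np.float32), 0)  # fake ReLU activations
q, shift = shift_quantize(acts, bits=4)
lengths = huffman_code_lengths(q.tolist())
bits_used = sum(lengths[s] for s in q.tolist())
print(f"shift={shift}, avg bits/activation={bits_used / q.size:.2f}")
```

Because quantized activations are heavily skewed toward small values, Huffman coding typically brings the average storage well below the nominal bit width.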

Deep reinforcement learning (DRL) has revolutionised quadruped robot locomotion, but existing control frameworks struggle to generalise beyond the observational scope encountered during training, resulting in limited adaptability. In contrast, animals achieve exceptional adaptability through gait transition strategies, diverse gait utilisation, and seamless adjustment to immediate environmental demands. Inspired by these capabilities, we present a novel DRL framework that incorporates key attributes of animal locomotion: gait transition strategies, pseudo gait procedural memory, and adaptive motion adjustments. This approach enables our framework to achieve unparalleled adaptability, demonstrated through blind zero-shot deployment on complex terrains and recovery from critically unstable states. Our findings offer valuable insights into the biomechanics of animal locomotion, paving the way for robust, adaptable robotic systems.

Optimization is crucial for MEC networks to function efficiently and reliably, yet most MEC optimization problems are NP-hard and lack efficient approximation algorithms. This leads to a paucity of optimal solutions, constraining the effectiveness of conventional deep learning approaches. Most existing learning-based methods require extensive optimal data and fail to exploit the potential benefits of suboptimal data, which can be obtained far more efficiently. Taking the multi-server multi-user computation offloading (MSCO) problem, which is widely observed in systems such as Internet-of-Vehicles (IoV) and Unmanned Aerial Vehicle (UAV) networks, as a concrete scenario, we present a Graph Diffusion-based Solution Generation (GDSG) method. This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability. We cast the optimization problem as distribution learning and offer a clear explanation of learning from suboptimal training datasets. We build GDSG as a multi-task diffusion model that uses a Graph Neural Network (GNN) to learn the distribution of high-quality solutions. We use a simple and efficient heuristic to obtain a sufficient amount of training data composed entirely of suboptimal solutions. In our implementation, we enhance the backbone GNN and achieve improved generalization. GDSG also reaches nearly 100% task orthogonality, ensuring no interference between the discrete and continuous generation tasks. We further reveal that this orthogonality arises from the diffusion-related training loss, rather than from the neural network architecture itself. Experiments demonstrate that GDSG surpasses benchmark methods on both optimal and suboptimal training datasets. The MSCO datasets are open-sourced at //ieee-dataport.org/13824, and the GDSG code is available at //github.com/qiyu3816/GDSG.
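The "learning a distribution of solutions" idea can be sketched with a standard DDPM-style training loop over solution vectors. The code below is illustrative only: a small MLP stands in for the paper's GNN backbone, the relaxed one-hot "server assignment" data is a placeholder rather than the MSCO dataset, and all hyperparameters are made up.

```python
import torch
import torch.nn as nn

T = 100                                           # diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

# Small MLP denoiser standing in for the paper's GNN backbone (assumption).
denoiser = nn.Sequential(nn.Linear(17, 128), nn.ReLU(), nn.Linear(128, 16))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

# Toy "suboptimal" offloading solutions: relaxed one-hot server assignments
# as a cheap heuristic might produce them (placeholder data).
x0 = torch.softmax(torch.randn(256, 16) * 3, dim=-1)

for step in range(200):
    t = torch.randint(0, T, (x0.size(0),))
    a = alpha_bars[t].unsqueeze(-1)
    noise = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward noising
    t_feat = (t.float() / T).unsqueeze(-1)             # crude timestep embedding
    pred = denoiser(torch.cat([xt, t_feat], dim=-1))   # predict the added noise
    loss = nn.functional.mse_loss(pred, noise)         # standard DDPM loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Sampling then runs the reverse chain from pure noise, so the model can generate high-quality solutions even though every training example was merely suboptimal, as long as the suboptimal data covers the neighborhood of good solutions.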

Virtual Reality (VR) creates a highly realistic and controllable simulated environment that can manipulate users' sense of space and time. While the sensation of "losing track of time" is often associated with enjoyable experiences, the link between time perception and user experience in VR, and its underlying mechanisms, remains largely unexplored. This study investigates how different zeitgebers (light color, music tempo, and task factor) influence time perception. We introduce the Relative Subjective Time Change (RSTC) method to explore the relationship between time perception and user experience. Additionally, we apply a data-driven approach called the Time Perception Modeling Network (TPM-Net), which integrates Convolutional Neural Network (CNN) and Transformer architectures to model time perception from multimodal physiological and zeitgeber data. With 56 participants in a between-subjects experiment, our results show that task factors significantly influence time perception, with red light and slow-tempo music further contributing to time underestimation. The RSTC method reveals that underestimating time in VR is strongly associated with improved user experience, presence, and engagement. Furthermore, TPM-Net shows potential for modeling time perception in VR, enabling inference of relative changes in users' time perception and corresponding changes in user experience. This study provides insights into the relationship between time perception and user experience in VR, with applications in VR-based therapy and specialized training.
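The abstract does not spell out the RSTC formula; a natural reading, assumed in this small sketch, is the signed relative difference between a participant's subjective duration estimate and the objectively elapsed time.

```python
def rstc(subjective_s: float, objective_s: float) -> float:
    """Relative Subjective Time Change (assumed definition): negative values
    mean the user underestimated how much time passed in VR."""
    return (subjective_s - objective_s) / objective_s

# e.g., a participant who reports "about 8 minutes" after a 10-minute session
print(rstc(8 * 60, 10 * 60))  # -0.2 -> time underestimation
```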

Accurate estimation of a user's biological joint moment from wearable sensor data is vital for improving exoskeleton control during real-world locomotor tasks. However, most state-of-the-art methods rely on deep learning techniques that necessitate extensive in-lab data collection, posing challenges in acquiring sufficient data to develop robust models. To address this challenge, we introduce a locomotor task set optimization strategy designed to identify a minimal, yet representative, set of tasks that preserves model performance while significantly reducing the data collection burden. In this optimization, we performed a cluster analysis on the dimensionally reduced biomechanical features of various cyclic and non-cyclic tasks. We identified the minimal viable clusters (i.e., tasks) to train a neural network for estimating hip joint moments and evaluated its performance. Our cross-validation analysis across subjects showed that the optimized task set-based model achieved a root mean squared error of 0.30 ± 0.05 Nm/kg. This performance was significantly better than using only cyclic tasks (p < 0.05) and was comparable to using the full set of tasks. Our results demonstrate the ability to maintain model accuracy while significantly reducing the cost associated with data collection and model training. This highlights the potential for future exoskeleton designers to leverage this strategy to minimize the data requirements for deep learning-based models in wearable robot control.
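The clustering step can be sketched as follows, under assumptions: scikit-learn PCA for dimensionality reduction, KMeans for the cluster analysis, and "task closest to each centroid" as the cluster representative. The feature matrix, task names, and cluster count are placeholders, not the paper's data or settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
task_names = [f"task_{i}" for i in range(20)]          # cyclic + non-cyclic tasks (placeholder)
features = rng.normal(size=(20, 50))                   # placeholder biomechanical features

reduced = PCA(n_components=5).fit_transform(features)  # dimensionality reduction
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(reduced)

# Keep one representative task per cluster: the task closest to each centroid.
minimal_set = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(reduced[members] - km.cluster_centers_[c], axis=1)
    minimal_set.append(task_names[members[np.argmin(dists)]])
print(minimal_set)  # the minimal task set to collect and train on
```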

Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and interdependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. The integration of AI/ML to achieve such autonomic self-management of systems can occur at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing come together to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
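The closed-loop idea in miniature: the sketch below is a MAPE-style monitor-analyze-plan-execute loop (a standard autonomic-computing pattern, not taken from this article) that scales a hypothetical service toward a latency target. The telemetry function and thresholds are invented for illustration.

```python
import random

TARGET_P95_MS = 200.0   # user-specified control outcome (illustrative)
replicas = 2

def monitor() -> float:
    """Stand-in for real telemetry: latency falls as replicas are added."""
    return random.gauss(600.0 / replicas, 20.0)

for tick in range(10):
    p95 = monitor()                                    # Monitor
    overloaded = p95 > TARGET_P95_MS                   # Analyze
    if overloaded:                                     # Plan + Execute: scale out
        replicas += 1
    elif p95 < 0.5 * TARGET_P95_MS and replicas > 1:   # Plan + Execute: scale in
        replicas -= 1
    print(f"tick={tick} p95={p95:.0f}ms replicas={replicas}")
```

AI/ML enters such loops by replacing the fixed thresholds in the analyze/plan steps with learned predictors or policies, which is precisely where the scaling challenge discussed above arises.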

Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback. Despite many advances over the past three decades, learning in many domains still requires a large amount of interaction with the environment, which can be prohibitively expensive in realistic scenarios. To address this problem, transfer learning has been applied to reinforcement learning such that experience gained in one task can be leveraged when starting to learn the next, harder task. More recently, several lines of research have explored how tasks, or data samples themselves, can be sequenced into a curriculum for the purpose of learning a problem that may otherwise be too difficult to learn from scratch. In this article, we present a framework for curriculum learning (CL) in reinforcement learning, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals. Finally, we use our framework to find open problems and suggest directions for future RL curriculum learning research.
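To make the curriculum idea concrete, here is a self-contained toy (our illustration, not any surveyed method): tabular Q-learning on a 1-D chain where difficulty is the distance to the goal, and the Q-table transfers across stages so experience from easy goals bootstraps learning on harder ones.

```python
import random

def q_learning(Q, goal, episodes=300, eps=0.2, alpha=0.5, gamma=0.95):
    """Tabular Q-learning on a 1-D chain: start at 0, reach `goal` by moving +/-1."""
    for _ in range(episodes):
        s = 0
        for _ in range(4 * goal):  # episode step limit
            greedy = max((0, 1), key=lambda a: Q.get((s, a), 0.0))
            a = random.choice((0, 1)) if random.random() < eps else greedy
            s2 = max(0, s + (1 if a == 1 else -1))
            r = 1.0 if s2 == goal else -0.01
            target = r + gamma * max(Q.get((s2, b), 0.0) for b in (0, 1))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            if s2 == goal:
                break
            s = s2
    return Q

Q = {}
for goal in (2, 5, 10):   # easy -> hard: the curriculum
    q_learning(Q, goal)   # Q-values learned on easy goals transfer to harder ones
```

Learning the goal-10 task from scratch would require long, mostly unrewarded exploration; the curriculum seeds the Q-table with "move right" values from the easier stages first.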

Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some works automatically select a sticker response by matching stickers' text labels with previous utterances. However, given the sheer number of stickers, it is impractical to require text labels for all of them. Hence, in this paper, we propose to recommend an appropriate sticker to the user based on the multi-turn dialog context, without any external labels. This task presents two main challenges. One is to learn the semantic meaning of stickers without corresponding text labels. The other is to jointly model the candidate sticker with the multi-turn dialog context. To tackle these challenges, we propose a sticker response selector (SRS) model. Specifically, SRS first employs a convolution-based sticker image encoder and a self-attention-based multi-turn dialog encoder to obtain representations of stickers and utterances. Next, a deep interaction network conducts deep matching between the sticker and each utterance in the dialog history. SRS then learns the short-term and long-term dependencies among all interaction results with a fusion network to output the final matching score. To evaluate our proposed method, we collect a large-scale real-world dialog dataset with stickers from one of the most popular online chatting platforms. Extensive experiments on this dataset show that our model achieves state-of-the-art performance on all commonly used metrics. Experiments also verify the effectiveness of each component of SRS. To facilitate further research on sticker selection, we release this dataset of 340K multi-turn dialog and sticker pairs.
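The shape of this pipeline can be shown in code: a CNN encodes the candidate sticker, a self-attention encoder embeds each utterance, a bilinear layer plays the role of the per-utterance interaction, and a recurrent fusion layer aggregates the matches into a score. This is a structural sketch with made-up dimensions, not the authors' released SRS model.

```python
import torch
import torch.nn as nn

class StickerResponseScorer(nn.Module):
    """Structural sketch of an SRS-like scorer; all sizes are illustrative."""
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.sticker_enc = nn.Sequential(              # convolutional sticker encoder
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.embed = nn.Embedding(vocab, dim)
        self.utt_enc = nn.TransformerEncoder(          # self-attention dialog encoder
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.interact = nn.Bilinear(dim, dim, 1)       # sticker-utterance interaction
        self.fusion = nn.GRU(1, 16, batch_first=True)  # fuses per-utterance matches
        self.out = nn.Linear(16, 1)

    def forward(self, sticker, dialog):                # dialog: (batch, turns, tokens)
        s = self.sticker_enc(sticker)                                  # (B, dim)
        B, T, L = dialog.shape
        u = self.utt_enc(self.embed(dialog.view(B * T, L))).mean(1)    # (B*T, dim)
        u = u.view(B, T, -1)
        m = self.interact(s.unsqueeze(1).expand_as(u).contiguous(), u) # (B, T, 1)
        _, h = self.fusion(m)                                          # last hidden state
        return self.out(h[-1]).squeeze(-1)                             # matching score

scorer = StickerResponseScorer()
score = scorer(torch.randn(2, 3, 64, 64), torch.randint(0, 5000, (2, 5, 12)))
```

At inference time, the scorer is run over each candidate sticker and the highest-scoring one is recommended.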

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch can lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift at two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, at the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers at different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
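Adversarial training of a domain classifier is commonly implemented with a gradient reversal layer (GRL): the forward pass is the identity, while the backward pass negates (and scales) gradients, so the feature extractor is pushed toward domain-invariant representations while the classifier tries to tell domains apart. A minimal PyTorch version of this standard building block (illustrative, not the authors' exact code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in backward."""
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient w.r.t. lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage: features flow unchanged into the domain classifier, but its loss
# trains the upstream feature extractor to *fool* that classifier.
feats = torch.randn(8, 256, requires_grad=True)        # image- or instance-level features
domain_logits = torch.nn.Linear(256, 2)(grad_reverse(feats))
domain_loss = torch.nn.functional.cross_entropy(
    domain_logits, torch.randint(0, 2, (8,)))          # 0 = source, 1 = target
domain_loss.backward()   # feats.grad now carries the reversed gradient
```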
