
By presenting curved surfaces of various curvatures, including edges, to the fingertip, it becomes possible to reproduce haptic sensations of object shapes that flat surfaces alone cannot convey, such as spheres and rectangular objects. In this paper, we propose a method for presenting curved surfaces by controlling, with the acoustic radiation pressure of ultrasound, the inclination of a disk in contact with the finger pad. The user only needs to wear a lightweight device on the fingertip, so the tactile presentation imposes little physical burden. In the demonstration, the user can experience the sensation of stroking an edge as well as curved surfaces of different curvatures.
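As a rough illustration of the geometry involved (not taken from the paper), the sketch below computes the tilt angle a contact disk would need to match the local tangent plane of a sphere as the finger strokes away from its apex; the radii and offsets are assumed values.

```python
# Hypothetical sketch: for a sphere of radius R, take the disk tilt that mimics the
# local surface orientation at a contact point as the angle between the tangent
# plane there and the horizontal plane.
import numpy as np

def tangent_tilt_deg(x, R):
    """Tilt angle (degrees) of the tangent plane of a sphere of radius R
    at horizontal offset x from the apex (|x| < R)."""
    # Surface height z(x) = sqrt(R^2 - x^2); slope dz/dx = -x / sqrt(R^2 - x^2).
    slope = -x / np.sqrt(R**2 - x**2)
    return np.degrees(np.arctan(np.abs(slope)))

# As the finger strokes outward from the apex, the required tilt grows; a smaller
# radius (higher curvature) demands a steeper tilt at the same offset.
for R in (0.02, 0.05):  # radii in metres (assumed values)
    print([round(tangent_tilt_deg(x, R), 1) for x in (0.0, 0.005, 0.01)])
```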

Related content

Surface is a line of computers from Microsoft running the Windows 10 operating system (Windows 8.X on early models); the current lineup comprises three series: Surface, Surface Pro, and Surface Book. The first-generation Surface Pro/RT was unveiled by then Microsoft CEO Steve Ballmer at a press conference in Los Angeles on June 18, 2012, and went on sale on October 26, 2012.

The biological brain has inspired multiple advances in machine learning. However, most state-of-the-art models in computer vision do not operate like the human brain, simply because they are not capable of changing or improving their decisions/outputs based on a deeper analysis. The brain is recurrent, while these models are not. It is therefore relevant to explore the impact of adding recurrent mechanisms to existing state-of-the-art architectures and to ask whether recurrency can improve them. To this end, we build on a feed-forward segmentation model and explore multiple types of recurrency for image segmentation. We explore self-organizing, relational, and memory-retrieval types of recurrency that minimize a specific energy function. In our experiments, we tested these models on artificial and medical imaging data while analyzing the impact of high levels of noise and few-shot learning settings. Our results do not validate our initial hypothesis that recurrent models should perform better in these settings, suggesting that these recurrent architectures, by themselves, are not sufficient to surpass state-of-the-art feed-forward versions and that additional work is needed on the topic.
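For concreteness, the following is a minimal, hypothetical sketch of one energy-minimizing recurrent refinement scheme layered on top of a feed-forward model: the predicted logits are iteratively updated by gradient descent on an energy combining fidelity to the feed-forward output with spatial smoothness. It is not the paper's exact formulation; the step count, learning rate, and weighting `lam` are assumptions.

```python
# Hypothetical energy-based recurrent refinement of segmentation logits.
import torch
import torch.nn.functional as F

def refine(logits, steps=10, lr=0.1, lam=0.5):
    """logits: (B, C, H, W) output of a feed-forward segmentation model."""
    z = logits.clone().requires_grad_(True)
    for _ in range(steps):
        p = torch.softmax(z, dim=1)
        # Smoothness term: penalize differences between neighbouring pixels.
        smooth = (p[..., 1:, :] - p[..., :-1, :]).pow(2).mean() + \
                 (p[..., :, 1:] - p[..., :, :-1]).pow(2).mean()
        # Data term: stay close to the feed-forward prediction.
        data = F.mse_loss(z, logits)
        energy = data + lam * smooth
        grad, = torch.autograd.grad(energy, z)
        z = (z - lr * grad).detach().requires_grad_(True)
    return z.detach()
```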

The criticality problem in nuclear engineering asks for the principal eigenpair of a Boltzmann operator describing neutron transport in a reactor core. Reliably designing and controlling such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach first contrives iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity, exploiting only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigenpair. This is first carried out in full for a Newton scheme. Since the Newton scheme is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but unsymmetric operators, which prevent standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
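As a finite-dimensional surrogate of the power iteration discussed above (a hypothetical illustration, not the paper's function-space scheme), the sketch below iterates v ← Av/‖Av‖ without requiring the initial guess to lie near the exact eigenvector; in the paper's setting each application of the operator would itself be realized only approximately, within controlled tolerances.

```python
# Plain power iteration on a matrix surrogate A; the dominant eigenvalue magnitude
# is estimated from the norm growth of the iterates.
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10_000):
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)   # generic initial guess, no locality assumption
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v                  # in the paper, applying the operator is itself inexact
        lam_new = np.linalg.norm(w)
        v_new = w / lam_new
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v

# The convergence rate is governed by the ratio |lambda_2| / |lambda_1|, which,
# as noted above, is generally unknown a priori.
```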

Language models have gained significant interest due to their general-purpose capabilities, which appear to emerge as models are scaled to increasingly larger parameter sizes. However, these large models impose stringent requirements on computing systems, demanding significant memory and processing resources for inference. This makes inference on mobile and edge devices challenging, often requiring remotely hosted models to be invoked via network calls. Remote inference, in turn, introduces issues such as latency, unreliable network connectivity, and privacy concerns. To address these challenges, we explored the possibility of deviating from the trend of increasing model size. Instead, we hypothesize that much smaller models (~30-120M parameters) can outperform their larger counterparts on specific tasks by carefully curating the data used for pre-training and fine-tuning. We investigate this within the context of deploying edge-device models to support sensing applications. We trained several foundational models through a systematic study and found that small models can run locally on edge devices, achieving high token rates and accuracy. Based on these findings, we developed a framework that allows users to train foundational models tailored to their specific applications and deploy them at the edge.
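To illustrate how configurations in the stated ~30-120M range arise, here is a rough, hypothetical parameter-count estimate for a small decoder-only transformer; the formula ignores biases and layer norms, and the example vocabulary sizes, widths, and depths are assumptions rather than the models trained in the paper.

```python
# Approximate parameter count of a small decoder-only transformer.
def approx_params(vocab_size, d_model, n_layers, d_ff=None):
    d_ff = d_ff or 4 * d_model
    embed = vocab_size * d_model          # token embedding (often tied with the output head)
    per_layer = 4 * d_model * d_model     # Q, K, V, O attention projections
    per_layer += 2 * d_model * d_ff       # feed-forward up/down projections
    return embed + n_layers * per_layer

# Example configurations (assumed, not from the paper):
print(approx_params(vocab_size=32_000, d_model=384, n_layers=6) / 1e6)   # ~ 23M
print(approx_params(vocab_size=32_000, d_model=640, n_layers=12) / 1e6)  # ~ 80M
```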

Computational thematic analysis is rapidly emerging as a method of using large text corpora to understand the lived experience of people across the continuum of health care: patients, practitioners, and everyone in between. However, many qualitative researchers do not have the programming skills to write machine learning code on their own, yet they seek to maintain ownership, intimacy, and control over their analysis. In this work we explore the use of data visualizations to foster researcher agency and make computational thematic analysis more accessible to domain experts. We used a design science research approach to develop a data visualization (datavis) prototype over four phases: (1) problem comprehension, (2) specifying needs and requirements, (3) prototype development, and (4) feedback on the prototype. We show that qualitative researchers have a wide range of cognitive needs when conducting data analysis and place high importance on choices and freedom, wanting to feel autonomy over their own research and not be replaced or hindered by AI.

Every human with a functioning vestibular system is capable of feeling motion sickness, but some people are more vulnerable than others. According to the leading theories explaining this condition, vulnerability should be predicted by a person's years of real-life experience before using a VR device and by their years of VR experience afterwards. A questionnaire on susceptibility to motion sickness in VR was filled out by people on VR-related forums. The survey results show that the condition has a significant relationship with age or with experience outside the virtual environment.

Parallel kinematic manipulators (PKM) are characterized by closed kinematic loops, due to the parallel arrangement of limbs but also due to kinematic loops within the limbs themselves. Moreover, many PKM are built with limbs constructed by serially combining kinematic loops. Such limbs are called hybrid and form a particular class of complex limbs. Design and model-based control require accurate dynamic PKM models, ideally without model simplifications. Dynamics modeling then necessitates the kinematic relations of all members of the PKM, in contrast to standard PKM kinematics modeling, where only the forward and inverse kinematics solutions for the manipulator (relating input and output motions) are computed. This becomes more involved for PKM with hybrid limbs. In this paper a modular modeling approach is employed, where limbs are treated separately and the individual dynamic equations of motion (EOM) are subsequently assembled into the overall model. Key to the kinematic modeling is the constraint resolution for the individual loops within the limbs. This local constraint resolution is a special case of the general constraint embedding technique. The proposed method finally allows for a systematic modeling of general PKM. The method is demonstrated for the IRSBot-2, where each limb comprises two independent loops.
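One common way to assemble limb-level equations of motion into a platform-level model is to project each limb's contribution through its Jacobian, as in the hypothetical sketch below; it omits the local constraint resolution (constraint embedding) within the limb loops that the paper focuses on, and the data layout is an assumption.

```python
# Modular assembly of platform dynamics from per-limb contributions:
#   M(q) = sum_i J_i^T M_i J_i,   c(q, qdot) = sum_i J_i^T (M_i Jdot_i qdot + c_i)
import numpy as np

def assemble_platform_dynamics(limbs, qdot):
    """limbs: list of dicts with keys 'M' (limb mass matrix), 'c' (limb bias forces),
    'J' (limb Jacobian mapping platform to limb velocities), 'Jdot' (its time derivative),
    all evaluated at the current state. qdot: platform velocity vector."""
    n = limbs[0]['J'].shape[1]
    M = np.zeros((n, n))
    c = np.zeros(n)
    for limb in limbs:
        J, Jd = limb['J'], limb['Jdot']
        M += J.T @ limb['M'] @ J
        c += J.T @ (limb['M'] @ (Jd @ qdot) + limb['c'])
    return M, c
```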

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we survey XAI frameworks with a focus on their characteristics and their support for IoT. We illustrate widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices of XAI models for IoT systems in these applications with appropriate examples and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
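As a toy illustration of one widely used model-agnostic XAI technique that such frameworks build on (permutation feature importance, shown here with scikit-learn on synthetic sensor-style data), not a method proposed by the survey:

```python
# Permutation feature importance for a toy IoT-style sensor classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. temperature, vibration, current
y = X[:, 1] + 0.2 * rng.normal(size=500) > 0    # label driven mainly by "vibration"

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["temperature", "vibration", "current"], result.importances_mean):
    print(f"{name}: {imp:.3f}")                 # "vibration" should dominate
```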

Understanding causality helps to structure interventions aimed at specific goals and enables predictions under intervention. With the growing importance of learning causal relationships, causal discovery has transitioned from using traditional methods to infer potential causal structures from observational data to deep learning-based pattern recognition. The rapid accumulation of massive data has promoted the emergence of causal search methods with excellent scalability. Existing summaries of causal discovery methods mainly focus on traditional approaches based on constraints, scores, and functional causal models (FCMs); they lack a thorough organization and elaboration of deep learning-based methods and give little consideration to causal discovery from the perspective of variable paradigms. Therefore, we divide possible causal discovery tasks into three types according to the variable paradigm and give a definition of each task; we define and instantiate the relevant datasets for each task, together with the final causal model to be constructed, and then review the main existing causal discovery methods for the different tasks. Finally, we propose roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.
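As a toy example of the score-based family mentioned above (hypothetical, not drawn from the survey), the sketch below scores two candidate structures over observational data with a Gaussian BIC-style criterion and prefers the higher score:

```python
# Score candidate DAGs for linear-Gaussian data; constants common to all models are dropped.
import numpy as np

def gaussian_bic(data, parents_of):
    """BIC-style score of a linear-Gaussian DAG given as {node: [parent indices]}."""
    n, _ = data.shape
    score = 0.0
    for node, parents in parents_of.items():
        y = data[:, node]
        X = np.hstack([np.ones((n, 1)), data[:, parents]])     # intercept + parents
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = np.mean((y - X @ beta) ** 2)
        score += -0.5 * n * np.log(sigma2) - 0.5 * X.shape[1] * np.log(n)
    return score

rng = np.random.default_rng(0)
x0 = rng.normal(size=2000)
x1 = 2.0 * x0 + rng.normal(size=2000)            # ground truth: x0 -> x1
data = np.column_stack([x0, x1])
print(gaussian_bic(data, {0: [], 1: [0]}))       # with the edge x0 -> x1: higher score
print(gaussian_bic(data, {0: [], 1: []}))        # without any edge: much lower score
```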

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
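For intuition, here is a simplified, hypothetical sketch of Monte Carlo Shapley estimation for the importance of a candidate subgraph: treat the candidate and the remaining nodes (or node groups) as players, sample random coalitions, and average the candidate's marginal contribution to the model score. SubgraphX itself combines this idea with Monte Carlo tree search over subgraphs; `value_fn` here stands in for a GNN evaluated on the retained part of the graph.

```python
# Monte Carlo approximation of the Shapley value of one "player" (a candidate subgraph).
import random

def mc_shapley(players, target, value_fn, n_samples=200):
    """Average marginal contribution of `target` over random coalitions of the other players.
    value_fn(coalition: frozenset) -> model score when only that coalition is kept."""
    others = [p for p in players if p != target]
    total = 0.0
    for _ in range(n_samples):
        random.shuffle(others)
        k = random.randint(0, len(others))       # uniform coalition size, as in the
        coalition = frozenset(others[:k])        # permutation-based Shapley estimator
        total += value_fn(coalition | {target}) - value_fn(coalition)
    return total / n_samples
```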

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions that fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
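In the spirit of the deep-feature metrics described above (a minimal, hypothetical sketch rather than the exact LPIPS implementation, and assuming a recent torchvision), one can compare unit-normalized VGG feature maps of two images layer by layer:

```python
# Deep-feature perceptual distance from VGG16 activations.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layer_ids = {3, 8, 15, 22, 29}      # assumed tap points: last ReLU of each conv block

def deep_feature_distance(x, y):
    """x, y: (1, 3, H, W) images normalized with ImageNet statistics."""
    dist = 0.0
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in layer_ids:
                xn = x / (x.norm(dim=1, keepdim=True) + 1e-8)   # unit-normalize channels
                yn = y / (y.norm(dim=1, keepdim=True) + 1e-8)
                dist += (xn - yn).pow(2).mean()
    return dist
```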
