
Unmanned Aerial Vehicles (UAVs) have moved beyond a platform for hobbyists to enable environmental monitoring, journalism, filmmaking, search and rescue, package delivery, and entertainment. This paper describes 3D displays using swarms of flying light specks, FLSs. An FLS is a small (hundreds of micrometers in size) UAV with one or more light sources to generate different colors and textures with adjustable brightness. A synchronized swarm of FLSs renders an illumination in a pre-specified 3D volume, an FLS display. An FLS display provides true depth, enabling a user to perceive a scene more completely by analyzing its illumination from different angles. An FLS display may be either non-immersive or immersive. Both will support 3D acoustics. Non-immersive FLS displays may be the size of a 1980s computer monitor, enabling a surgical team to observe and control micro robots performing heart surgery inside a patient's body. Immersive FLS displays may be the size of a room, enabling users to interact with objects, e.g., a rock, a teapot. An object with behavior will be constructed using FLS-matter. FLS-matter will enable a user to touch and manipulate an object, e.g., a user may pick up a teapot or throw a rock. An immersive and interactive FLS display will approximate Star Trek's Holodeck. A successful realization of the research ideas presented in this paper will provide fundamental insights into implementing a Holodeck using swarms of FLSs. A Holodeck will transform the future of human communication and perception, and how we interact with information and data. It will revolutionize how we work, learn, play, entertain, receive medical care, and socialize.

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in human-computer interaction to present their work. Over the years, these conferences have attracted researchers from several countries and cultures.
January 9, 2022

In recent years, Virtual Reality (VR) Head-Mounted Displays (HMDs) have been used to provide an immersive, first-person view in real time for the remote control of Unmanned Ground Vehicles (UGVs). One critical issue is that it is challenging to perceive the distance of obstacles surrounding the vehicle from 2D views in the HMD, which degrades control of the UGV. Conventional distance indicators used in HMDs take up screen space, which leads to clutter on the display and can further reduce situation awareness of the physical environment. To address this issue, in this paper we propose off-screen, in-device feedback using vibro-tactile and/or light-visual cues to provide real-time distance information for the remote control of UGVs. Results from a study show significantly better performance, reduced workload, and improved usability with either feedback type in a driving task that requires continuous perception of the distance between the UGV and objects or obstacles in its environment. Our findings make a solid case for in-device vibro-tactile and/or light-visual feedback to support remote operation of UGVs that relies heavily on distance perception of objects.
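The abstract does not specify how distance is translated into cue strength. As a purely illustrative sketch, one simple scheme maps obstacle distance linearly to vibration and light intensity between an assumed near and far threshold; the thresholds and the linear ramp below are hypothetical, not the mapping used in the study.

```python
# Hypothetical sketch: map obstacle distance to in-device feedback levels.
# The linear ramp and the near/far thresholds are assumptions for illustration,
# not the scheme used in the paper.

def distance_to_feedback(distance_m, near=0.3, far=2.0):
    """Return (vibration_intensity, light_intensity) in [0, 1].

    Closer obstacles produce stronger cues; beyond `far` no cue is given.
    """
    if distance_m >= far:
        return 0.0, 0.0
    # Clamp and invert: near (or closer) -> 1.0, far -> 0.0.
    t = max(0.0, min(1.0, (far - distance_m) / (far - near)))
    return t, t

# Example: an obstacle 0.5 m away drives both cues at roughly 88% intensity.
print(distance_to_feedback(0.5))
```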

Understanding decision-making in dynamic and complex settings is challenging yet essential for preventing, mitigating, and responding to adverse events (e.g., disasters, financial crises). Simulation games have shown promise in advancing our understanding of decision-making in such settings. However, an open question remains: how do we extract useful information from these games? We contribute an approach to model human-simulation interaction by leveraging existing methods to characterize: (1) system states of dynamic simulation environments (with Principal Component Analysis), (2) behavioral responses from human interaction with the simulation (with Hidden Markov Models), and (3) behavioral responses across system states (with Sequence Analysis). We demonstrate this approach with our game simulating drug shortages in a supply chain context. Results from our experimental study with 135 participants show different player types (hoarders, reactors, followers), how behavior changes in different system states, and how sharing information impacts behavior. We discuss how our findings challenge existing literature.
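A minimal sketch of the three-step pipeline described above, assuming per-time-step simulation state vectors and per-player behavioral features; the component counts, feature choices, and libraries (scikit-learn for PCA, hmmlearn for the HMM) are illustrative assumptions rather than the authors' exact setup.

```python
# Illustrative three-step analysis: PCA on system states, an HMM on behavioral
# responses, then a simple cross-tabulation as a stand-in for sequence analysis.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
system_log = rng.normal(size=(500, 12))      # 500 time steps x 12 state variables (assumed)
player_actions = rng.normal(size=(500, 4))   # 4 behavioral features per step (assumed)

# (1) Characterize system states with PCA.
pca = PCA(n_components=3)
system_states = pca.fit_transform(system_log)

# (2) Model behavioral responses with a hidden Markov model.
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
hmm.fit(player_actions)
behavior_states = hmm.predict(player_actions)  # discrete behavioral regimes

# (3) Relate the two discrete sequences, e.g. by counting how often each
# behavioral regime occurs within each system regime.
regime_edges = np.quantile(system_states[:, 0], [0.33, 0.66])
system_regime = np.digitize(system_states[:, 0], regime_edges)
contingency = np.zeros((3, 3), dtype=int)
for s, b in zip(system_regime, behavior_states):
    contingency[s, b] += 1
print(contingency)
```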

Microscopic digital volume correlation (DVC) and finite element precoalescence strain evaluations are compared for two nodular cast iron specimens. Displacement fields from in-situ 3D synchrotron laminography images are obtained by DVC. Subsequently, the microstructure is explicitly meshed from the images, considering nodules as voids. Boundary conditions are applied from the DVC measurement. Image segmentation-related uncertainties are taken into account and observed to be negligible with respect to the differences between strain levels. Macroscopic as well as local strain levels in coalescing ligaments between voids nucleated at large graphite nodules are compared. Macroscopic strain levels are consistently predicted. A very good agreement is observed for one of the specimens, while the strain levels for the second specimen present some discrepancies. Limitations of the modeling and numerical framework are discussed in light of these differences. A discussion of the use of strain as a coalescence indicator is initiated.
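The abstract compares DVC-measured and simulated strain levels without detailing the strain computation. One common way to obtain strains from a DVC displacement field on a regular voxel grid is central finite differencing of the displacements; the sketch below illustrates that under an assumed voxel size and a synthetic displacement field, and is not the authors' pipeline.

```python
# Illustrative sketch: small-strain components from a displacement field on a
# regular voxel grid via finite differences. Voxel size and the synthetic
# displacement field are assumptions for demonstration only.
import numpy as np

voxel_size = 1.0  # micrometers per voxel (assumed)
nz, ny, nx = 32, 32, 32
z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")

# Synthetic displacement field (ux, uy, uz): uniform 1% stretch along x.
ux = 0.01 * x * voxel_size
uy = np.zeros_like(ux)
uz = np.zeros_like(ux)

# Displacement gradients via central differences; order of axes is (z, y, x).
dux = np.gradient(ux, voxel_size)
duy = np.gradient(uy, voxel_size)

# Small-strain components, e.g. eps_xx and eps_xy = 0.5 * (d ux/dy + d uy/dx).
eps_xx = dux[2]
eps_xy = 0.5 * (dux[1] + duy[2])
print(eps_xx.mean(), eps_xy.mean())  # ~0.01 and ~0.0 for this field
```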

While designing a sustainable and resilient urban built environment is increasingly promoted around the world, significant data gaps have made research on pressing sustainability issues challenging to carry out. Pavements are known to have strong economic and environmental impacts; however, most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection. Recent advancements in computer vision, together with the availability of street-level images, provide new opportunities for cities to extract large-scale built environment data with lower implementation costs and higher accuracy. In this paper, we propose CitySurfaces, an active-learning-based framework that leverages computer vision techniques for classifying sidewalk materials using widely available street-level images. We trained the framework on images from New York City and Boston, and the evaluation results show a 90.5% mIoU score. Furthermore, we evaluated the framework on images from six different cities, demonstrating that it can be applied to regions with distinct urban fabrics, even outside the domain of the training data. CitySurfaces can provide researchers and city agencies with a low-cost, accurate, and extensible method to collect sidewalk material data, which plays a critical role in addressing major sustainability issues, including climate change and surface water management.
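For reference, the mIoU score cited above is the mean intersection-over-union across classes. The sketch below shows one standard way to compute it from per-pixel labels; the class count and toy labels are placeholders.

```python
# Sketch of the mIoU metric reported above, computed from predicted and
# ground-truth per-pixel labels. The number of surface classes is a placeholder.
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union over all classes present in either labeling."""
    ious = []
    for c in range(num_classes):
        gt = (y_true == c)
        pr = (y_pred == c)
        union = np.logical_or(gt, pr).sum()
        if union == 0:
            continue  # class absent from both; skip it
        ious.append(np.logical_and(gt, pr).sum() / union)
    return float(np.mean(ious))

# Toy example with 3 sidewalk-material classes on a 4-pixel strip.
y_true = np.array([0, 0, 1, 2])
y_pred = np.array([0, 1, 1, 2])
print(mean_iou(y_true, y_pred, num_classes=3))  # (0.5 + 0.5 + 1.0) / 3 ~ 0.67
```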

During the Coronavirus Disease 2019 (COVID-19) pandemic, schools continuously strive to provide consistent education to their students. Teachers and education policymakers are seeking ways to reopen schools, as doing so is necessary for community and economic development. However, in light of the pandemic, schools require customized schedules that can address the health and safety concerns of students, considering classroom sizes, air-conditioning equipment, and classroom systems (e.g., self-contained or departmentalized). To address this issue, we developed the School-Virus-Infection-Simulator (SVIS) for teachers and education policymakers. SVIS simulates the spread of infection at a school, considering the students' lesson schedules, classroom volume, air circulation rates in classrooms, and the infectability of the students. Thus, teachers and education policymakers can simulate how their school schedules can impact current health concerns. We then demonstrate the impact of several school schedules in self-contained and departmentalized classrooms and evaluate them in terms of the maximum number of students infected simultaneously and the percentage of face-to-face lessons. The results show that increasing the classroom ventilation rate is effective; however, its impact is less stable than that of customizing school schedules. In addition, school schedules can impact the maximum number of students infected differently depending on whether classrooms are self-contained or departmentalized. We also found that one of the school schedules had a higher maximum number of students infected than schedules with a higher percentage of face-to-face lessons. SVIS and the simulation results can help teachers and education policymakers plan school schedules appropriately in order to reduce the maximum number of students infected while maintaining a certain percentage of face-to-face lessons.
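The abstract states that SVIS accounts for classroom volume and air circulation rate, but its internal infection model is not reproduced here. One standard airborne-transmission model with those inputs is the Wells-Riley equation; the sketch below uses it purely as an assumed stand-in, with illustrative parameter values.

```python
# Hedged sketch: a Wells-Riley style estimate of airborne infection risk as a
# function of room volume and air-change rate. SVIS's actual model may differ;
# all parameter values below are illustrative assumptions.
import math

def wells_riley_infection_probability(infectors, quanta_per_hour, breathing_rate_m3h,
                                       duration_h, room_volume_m3, air_changes_per_hour):
    """P(infection) = 1 - exp(-I*q*p*t / Q), where Q is the ventilation flow (m^3/h)."""
    ventilation_flow = room_volume_m3 * air_changes_per_hour
    exposure = infectors * quanta_per_hour * breathing_rate_m3h * duration_h / ventilation_flow
    return 1.0 - math.exp(-exposure)

# A 200 m^3 classroom, one infectious student, a 6-hour school day:
for ach in (2, 6):  # higher ventilation lowers the per-day risk
    p = wells_riley_infection_probability(1, 10, 0.5, 6, 200, ach)
    print(f"{ach} air changes/h -> infection probability {p:.2f}")
```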

The information provided to a person's visual system by extended reality (XR) displays is not a veridical match to the information provided by the real world. Due in part to graphical limitations in XR head-mounted displays (HMDs), which vary by device, our perception of space may be altered. However, we do not yet know which properties of virtual objects rendered by HMDs -- particularly augmented reality displays -- influence our ability to understand space. In the current research, we evaluate how immersive graphics affect spatial perception across three unique XR displays: virtual reality (VR), video see-through augmented reality (VST AR), and optical see-through augmented reality (OST AR). We manipulated the geometry of the presented objects as well as the shading techniques for objects' cast shadows. Shape and shadow were selected for evaluation as they play an important role in determining where an object is in space by providing points of contact between an object and its environment -- be it real or virtual. Our results suggest that a non-photorealistic (NPR) shading technique, in this case for cast shadows, may be used to improve depth perception by enhancing perceived surface contact in XR. Further, the benefit of NPR graphics is more pronounced in AR than in VR displays. One's perception of ground contact is influenced by an object's shape, as well. However, the relationship between shape and surface contact perception is more complicated.

In this paper, we introduce Harmonics Virtual Lights (HVL) to model indirect light sources for interactive global illumination of dynamic 3D scenes. Virtual Point Lights (VPL) are an efficient approach to defining indirect light sources and evaluating the resulting indirect lighting. Nonetheless, VPL suffer from disturbing artifacts, especially with high-frequency materials. Virtual Spherical Lights (VSL) avoid these artifacts by considering spheres instead of points, but estimate the lighting integral using Monte Carlo sampling, which results in noise in the final image. We define HVL as an extension of VSL in a Spherical Harmonics (SH) framework, yielding a closed-form evaluation of the lighting integral. We propose an efficient SH projection of the spherical lights' contribution that is faster than existing methods. Computing the outgoing luminance requires $\mathcal{O}(n)$ operations when using materials with circularly symmetric lobes, and $\mathcal{O}(n^2)$ operations in the general case, where $n$ is the number of SH bands. HVL can be used with either parametric or measured BRDFs without extra cost and offer control over rendering time and image quality by decreasing or increasing the band limit used for the SH projection. Our approach is particularly well suited to rendering medium-frequency one-bounce global illumination with arbitrary BRDFs at interactive rates.
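The complexity claim above follows from coefficient counts: an SH expansion truncated at $n$ bands stores $n^2$ coefficients, whereas a circularly symmetric (zonal) function stores only $n$, one per band, so a coefficient-wise product costs $\mathcal{O}(n^2)$ in general and $\mathcal{O}(n)$ in the symmetric case (in the frame aligned with the lobe axis). The sketch below only illustrates these counts with placeholder values; it is not the HVL projection itself.

```python
# Sketch of the coefficient counts behind the O(n) vs O(n^2) claim. Values are
# random placeholders, not an actual SH projection of lights or BRDFs.
import numpy as np

n_bands = 4
rng = np.random.default_rng(1)

# General SH signal: one coefficient per (l, m) with l < n_bands, |m| <= l  ->  n^2 of them.
general_coeffs = rng.normal(size=n_bands * n_bands)   # 16 coefficients
# Zonal (circularly symmetric) signal: one m = 0 coefficient per band  ->  n of them.
zonal_coeffs = rng.normal(size=n_bands)                # 4 coefficients

light_coeffs = rng.normal(size=n_bands * n_bands)

# General case: a coefficient-wise dot product touches all n^2 entries.
out_general = np.dot(light_coeffs, general_coeffs)

# Symmetric case: in the lobe-aligned frame only the m = 0 light coefficients
# pair with the zonal lobe, so the product touches n entries.
m0_indices = [l * (l + 1) for l in range(n_bands)]     # flat index of (l, m=0)
out_symmetric = np.dot(light_coeffs[m0_indices], zonal_coeffs)

print(out_general, out_symmetric)
```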

In robotics, data acquisition often plays a key part in unknown environment exploration. For example, storing information about the topography of the explored terrain or the natural dangers in the environment can inform the decision-making process of the robots. Therefore, it is crucial to store these data safely and to make them quickly available to the operators of the robotic system. In a decentralized system like a swarm of robots, this entails several challenges. To address them, we propose RASS, a decentralized risk-aware swarm storage and routing mechanism, which relies exclusively on local information sharing between neighbours to establish storage and routing fitness. We evaluate our system through thorough experiments in a physics-based simulator and validate its real-world applicability with physical experiments. We obtain convincing results in terms of reliability, routing speed, and swarm storage capacity.

We present a challenging and realistic novel dataset for evaluating 6-DOF object tracking algorithms. Existing datasets show serious limitations, notably unrealistic synthetic data or real data with large fiducial markers, preventing the community from obtaining an accurate picture of the state of the art. Our key contribution is a novel pipeline for acquiring accurate ground-truth poses of real objects w.r.t. a Kinect V2 sensor by using a commercial motion capture system. A total of 100 calibrated sequences of real objects are acquired in three different scenarios to evaluate the performance of trackers in terms of stability, robustness to occlusion, and accuracy during challenging interactions between a person and the object. We conduct an extensive study of a deep 6-DOF tracking architecture and determine a set of optimal parameters. We enhance the architecture and the training methodology to train a 6-DOF tracker that can robustly generalize to objects never seen during training, and demonstrate favorable performance compared to previous approaches trained specifically on the objects to track.
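The calibration details of the ground-truth pipeline are not given in the abstract. In general, recovering the object pose in the sensor frame from motion-capture measurements amounts to composing rigid transforms; the sketch below illustrates that composition with placeholder 4x4 homogeneous matrices and is not the paper's exact procedure.

```python
# Hedged sketch: T_kinect<-object = inv(T_mocap<-kinect) @ T_mocap<-object.
# The matrices below are placeholders; the paper's calibration is not reproduced.
import numpy as np

def make_transform(rotation_z_deg, translation):
    """4x4 homogeneous transform: rotation about z followed by a translation."""
    a = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation
    return T

# Motion capture measures the object in its own world frame; an extrinsic
# calibration gives the Kinect's pose in that same frame.
T_mocap_object = make_transform(30.0, [1.0, 0.2, 0.0])
T_mocap_kinect = make_transform(-90.0, [2.0, 0.0, 1.5])

# Ground-truth pose of the object with respect to the Kinect sensor.
T_kinect_object = np.linalg.inv(T_mocap_kinect) @ T_mocap_object
print(T_kinect_object.round(3))
```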

Online multi-object tracking (MOT) is extremely important for high-level spatial reasoning and path planning for autonomous and highly automated vehicles. In this paper, we present a modular framework for tracking multiple objects (vehicles), capable of accepting object proposals from different sensor modalities (vision and range) and a variable number of sensors, to produce continuous object tracks. This work is inspired by traditional tracking-by-detection approaches in computer vision, with some key differences. First, we track objects across multiple cameras and across different sensor modalities. This is done by fusing object proposals across sensors accurately and efficiently. Second, the objects of interest (targets) are tracked directly in the real world. This is a departure from traditional techniques, where objects are simply tracked in the image plane. Doing so allows the tracks to be readily used by an autonomous agent for navigation and related tasks. To verify the effectiveness of our approach, we test it on real-world highway data collected from a heavily sensorized testbed capable of capturing full-surround information. We demonstrate that our framework is well suited to track objects through entire maneuvers around the ego-vehicle, some of which take more than a few minutes to complete. We also leverage the modularity of our approach by comparing the effects of including or excluding different sensors, of changing the total number of sensors, and of the quality of object proposals on the final tracking result.
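The abstract does not detail the fusion rule. One common baseline for fusing world-frame proposals from different sensors is nearest-neighbour association with a distance gate on the ground plane; the sketch below uses that baseline purely as an assumed illustration, not the framework's actual method.

```python
# Hypothetical sketch: fuse world-frame (x, y) proposals from two sensors by
# nearest-neighbour association with a distance gate; matched pairs are averaged.
import numpy as np

def fuse_proposals(proposals_a, proposals_b, gate_m=2.0):
    """Merge two lists of (x, y) proposals; pairs closer than the gate are fused."""
    fused, used_b = [], set()
    for pa in proposals_a:
        dists = [np.hypot(pa[0] - pb[0], pa[1] - pb[1]) for pb in proposals_b]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] < gate_m and j not in used_b:
            fused.append(((pa[0] + proposals_b[j][0]) / 2, (pa[1] + proposals_b[j][1]) / 2))
            used_b.add(j)
        else:
            fused.append(tuple(pa))
    # Keep unmatched proposals from the second sensor as their own detections.
    fused += [tuple(pb) for k, pb in enumerate(proposals_b) if k not in used_b]
    return fused

camera_proposals = [(10.0, 2.1), (25.0, -3.0)]
lidar_proposals = [(10.4, 2.0), (60.0, 1.0)]
print(fuse_proposals(camera_proposals, lidar_proposals))
```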
