State-of-the-art visual localization methods mostly rely on complex procedures to match local descriptors against 3D point clouds. However, these procedures incur significant costs in inference, storage, and updates over time. In this study, we propose a direct learning-based approach that uses a simple network, named D2S, to map local descriptors to their scene coordinates. Our method is characterized by its simplicity and cost-effectiveness: it requires only a single RGB image for localization at test time and only a lightweight model to encode a complex sparse scene. D2S combines a simple loss function with graph attention to selectively focus on robust descriptors while disregarding regions such as clouds, trees, and other dynamic objects. This selective attention effectively lets D2S perform a binary semantic classification over sparse descriptors. Additionally, we introduce a new outdoor dataset to evaluate visual localization methods in terms of scene generalization and self-updating from unlabeled observations. Our approach outperforms state-of-the-art CNN-based scene coordinate regression methods in both indoor and outdoor environments. It also generalizes beyond the training data, including day-to-night transitions and domain shifts, even without labeled data sources. The source code, trained models, dataset, and demo videos are available at the following link: //thpjp.github.io/d2s
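The descriptor-to-coordinate mapping with an attention stage and a per-descriptor reliability output can be illustrated with a minimal sketch. The module below is a simplified, hypothetical stand-in for D2S: the layer sizes, the use of standard multi-head self-attention in place of the paper's graph attention, and the combined coordinate/confidence head are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DescriptorToScene(nn.Module):
    """Toy sketch: map N sparse local descriptors to 3D scene coordinates
    plus a per-descriptor robustness score (robust vs. non-robust)."""
    def __init__(self, desc_dim=256, hidden=512, heads=4):
        super().__init__()
        self.proj = nn.Linear(desc_dim, hidden)
        # Standard self-attention used here as a stand-in for graph attention.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.coord_head = nn.Linear(hidden, 3)   # scene coordinates (x, y, z)
        self.conf_head = nn.Linear(hidden, 1)    # robustness logit

    def forward(self, descriptors):              # (B, N, desc_dim)
        x = self.proj(descriptors)
        x = x + self.attn(x, x, x)[0]            # let descriptors exchange context
        return self.coord_head(x), self.conf_head(x).squeeze(-1)

model = DescriptorToScene()
coords, conf = model(torch.randn(1, 1024, 256))
print(coords.shape, conf.shape)                  # (1, 1024, 3), (1, 1024)
```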
Fog computing arises as a complement to cloud computing in which computing and storage are provided in a decentralized way rather than through the centralized approach of the cloud paradigm. In addition, blockchain provides a decentralized and immutable ledger that can support running arbitrary logic through smart contracts. Together, these properties make smart contracts on a blockchain a suitable basis for a decentralized, autonomous, and resilient orchestrator of fog resources. However, the potentially vast number of geographically distributed fog nodes may threaten the feasibility of the orchestration. Moreover, fog nodes can exhibit highly dynamic workloads, which may lead the orchestrator to redistribute services among them. Thus, network connections to those services must also be supported dynamically, independently of their location. Software Defined Networking (SDN) can be integrated within the orchestrator to carry out seamless service management. To tackle both issues, we propose the S-HIDRA architecture, which integrates SDN support within a blockchain-based orchestrator of container-based services for fog environments, in order to provide low network latency and high service availability. We also outline a domain-based architecture, as a potential scenario, to address the geographically distributed nature of fog environments. Results obtained from a proof-of-concept implementation demonstrate the functionality required by S-HIDRA.
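The interplay between orchestration decisions and dynamic service placement can be illustrated with a toy placement routine. The sketch below is purely illustrative: the node attributes, scoring rule, and function names are hypothetical and do not reflect S-HIDRA's smart-contract or SDN implementation.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    latency_ms: float   # measured latency to the service's clients
    free_cpu: float     # fraction of CPU currently available
    available: bool     # node health as seen by the orchestrator

def place_service(nodes, min_cpu=0.2):
    """Toy orchestration decision: pick the healthy node with enough free CPU
    and the lowest client latency. A real orchestrator would run this logic
    on-chain and then reprogram flows through the SDN controller."""
    candidates = [n for n in nodes if n.available and n.free_cpu >= min_cpu]
    if not candidates:
        raise RuntimeError("no feasible fog node for this service")
    return min(candidates, key=lambda n: n.latency_ms)

nodes = [FogNode("edge-a", 4.0, 0.5, True),
         FogNode("edge-b", 2.5, 0.1, True),
         FogNode("edge-c", 3.0, 0.6, True)]
print(place_service(nodes).name)   # edge-c: lowest latency among feasible nodes
```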
We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
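The sampling step can be sketched with generic annealed Langevin dynamics. The code below assumes a `score_net` (e.g., a graph neural network) that returns the estimated score at the current noise level; the schedule, step-size rule, and relaxed-adjacency representation are illustrative assumptions, not the authors' exact sampler.

```python
import torch

def annealed_langevin(score_net, x, noise_levels, steps_per_level=50, eps=1e-4):
    """Generic annealed Langevin sampler: at each noise level sigma, take
    several steps  x <- x + (alpha/2) * score(x, sigma) + sqrt(alpha) * z."""
    for sigma in noise_levels:                 # e.g. geometric schedule, large -> small
        alpha = eps * (sigma / noise_levels[-1]) ** 2
        for _ in range(steps_per_level):
            z = torch.randn_like(x)
            score = score_net(x, sigma)        # GNN estimate of grad_x log p_sigma(x)
            x = x + 0.5 * alpha * score + (alpha ** 0.5) * z
    return x

# Usage sketch: x could be a relaxed adjacency matrix that is later
# thresholded to obtain the estimated graph support.
```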
We present new Neumann-Neumann algorithms based on a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, the Lagrange multiplier approach provides a coupled forward-backward optimality system, which can be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, nine variants can be found for the Neumann-Neumann algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.
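For concreteness, the coupled forward-backward optimality system referred to above typically takes the following form after spatial semi-discretization of a tracking-type problem; the notation (state $y$, adjoint $\lambda$, regularization $\nu$, target $\hat y$) is a standard choice and not necessarily the paper's.

```latex
% Tracking-type parabolic optimal control after spatial semi-discretization:
%   min_u  (1/2) \int_0^T \|y - \hat y\|^2 \, dt  +  (\nu/2) \int_0^T \|u\|^2 \, dt
%   s.t.   \dot y = A y + u,   y(0) = y_0.
% Eliminating the control via u = -(1/\nu)\lambda yields the coupled
% forward-backward optimality system:
\dot{y}        = A y - \tfrac{1}{\nu}\,\lambda, \qquad y(0) = y_0, \\
-\dot{\lambda} = A^{\top}\lambda + (y - \hat{y}), \qquad \lambda(T) = 0.
```

The state equation runs forward in time and the adjoint equation backward, which is exactly the structure the time domain decomposition must accommodate.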
Recent advances in robot-assisted surgery have resulted in progressively more precise, efficient, and minimally invasive procedures, sparking a new era of robotic surgical intervention. This enables doctors, in collaborative interaction with robots, to perform traditional or minimally invasive surgeries with improved outcomes through smaller incisions. Recent efforts are working toward making robotic surgery more autonomous, which has the potential to reduce the variability of surgical outcomes and reduce complication rates. Deep reinforcement learning methodologies offer scalable solutions for surgical automation, but their effectiveness relies on extensive data acquisition due to the absence of prior knowledge about how to accomplish tasks successfully. Because simulated data collection is computationally intensive, previous works have focused on making existing algorithms more efficient. In this work, we focus on making the simulator more efficient, making training data much more accessible than previously possible. We introduce Surgical Gym, an open-source, high-performance platform for surgical robot learning in which both the physics simulation and the reinforcement learning occur directly on the GPU. We demonstrate between 100x and 5,000x faster training times compared with previous surgical learning platforms. The code is available at: //github.com/SamuelSchmidgall/SurgicalGym.
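The core idea of keeping both simulation and learning on the GPU can be illustrated with a minimal batched-rollout sketch. The environment interface, placeholder dynamics, and tensor shapes below are hypothetical and are not the Surgical Gym API.

```python
import torch

class ToyBatchedEnv:
    """Illustrative GPU-resident environment: thousands of independent
    instances stepped as one batched tensor operation, so rollouts never
    leave the device (the key idea behind GPU-native RL platforms)."""
    def __init__(self, num_envs=4096, obs_dim=32, act_dim=8, device="cuda"):
        self.device = device if torch.cuda.is_available() else "cpu"
        self.num_envs, self.obs_dim, self.act_dim = num_envs, obs_dim, act_dim
        self.state = torch.zeros(num_envs, obs_dim, device=self.device)

    def step(self, actions):                      # (num_envs, act_dim), already on device
        # Placeholder dynamics standing in for the physics simulation.
        self.state = 0.99 * self.state
        self.state[:, : self.act_dim] += 0.01 * actions
        reward = -(self.state ** 2).mean(dim=1)   # (num_envs,)
        return self.state, reward

env = ToyBatchedEnv()
policy = torch.nn.Linear(env.obs_dim, env.act_dim).to(env.device)
obs = env.state
with torch.no_grad():                             # rollout math stays on the GPU
    for _ in range(100):
        obs, reward = env.step(policy(obs))
```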
Head-mounted displays (HMDs) serve as indispensable devices for observing extended reality (XR) environments and virtual content. However, HMDs hinder external recording techniques because they occlude the upper face of the user. This limitation significantly affects social XR applications, specifically teleconferencing, where facial features and eye gaze information play a vital role in creating an immersive user experience. In this study, we propose a new network for expression-aware video inpainting for HMD removal (EVI-HRnet) based on generative adversarial networks (GANs). Our model effectively fills in the occluded facial regions, guided by facial landmarks and a single occlusion-free reference image of the user. The framework and its components use this reference frame to preserve the user's identity across frames. To further improve the realism of the inpainted output, we introduce a novel facial expression recognition (FER) loss function for emotion preservation. Our results demonstrate the remarkable capability of the proposed framework to remove HMDs from facial videos while maintaining the subject's facial expression and identity. Moreover, the outputs exhibit temporal consistency along the inpainted frames. This lightweight framework presents a practical approach for HMD occlusion removal, with the potential to enhance various collaborative XR applications without the need for additional hardware.
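The idea of adding an expression-preservation term on top of the usual inpainting objectives can be sketched as a weighted loss combination. The loss terms, the frozen expression classifier, and the weights below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inpainting_loss(pred_frame, gt_frame, disc_logits_fake, fer_model,
                    w_rec=1.0, w_adv=0.1, w_fer=0.5):
    """Illustrative combined objective: reconstruction + adversarial +
    an expression-preservation (FER) term that matches the predicted
    expression distribution of the inpainted frame to the ground truth."""
    rec = F.l1_loss(pred_frame, gt_frame)
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))   # generator side
    with torch.no_grad():
        target_expr = fer_model(gt_frame).softmax(dim=-1)       # frozen FER network
    fer = F.kl_div(fer_model(pred_frame).log_softmax(dim=-1), target_expr,
                   reduction="batchmean")
    return w_rec * rec + w_adv * adv + w_fer * fer
```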
We propose a neural network-based real-time volume rendering method for realistic and efficient rendering of volumetric media. Traditional volume rendering uses path tracing to solve the radiative transfer equation, which is computationally expensive and cannot achieve real-time performance. We therefore use neural networks to approximate the iterative integration involved in solving the radiative transfer equation, accelerating the volume rendering of participating media. Specifically, we first preprocess the volume medium to generate several types of sampling features, including density, transmittance, and phase features. The hierarchical transmittance fields are fed into a 3D-CNN to compute more informative transmittance features. Next, a diffuse-reflection sampling template and a highlight sampling template feed the three types of sampling features into the network in layers, allowing the network to attend to light scattering, highlights, and shadows; an attention module then selects the important channel features. Finally, a backbone neural network predicts the scattering distribution at the center points of all sampling templates. This method achieves realistic rendering of volumetric media and greatly increases rendering speed while maintaining quality, which is significant for real-time rendering applications. Experimental results indicate that our method outperforms previous methods.
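A stripped-down version of the transmittance branch, channel attention, and scattering prediction could look like the following. The layer sizes, the squeeze-and-excitation-style attention, and the output parameterization are illustrative assumptions rather than the paper's network.

```python
import torch
import torch.nn as nn

class ScatteringNet(nn.Module):
    """Toy sketch: 3D-CNN over hierarchical transmittance fields, channel
    attention over concatenated sampling features, and an MLP predicting
    the scattering distribution at a sampling-template center."""
    def __init__(self, feat_ch=64, out_dim=16):
        super().__init__()
        self.trans_cnn = nn.Sequential(                 # transmittance feature extractor
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())      # -> (B, 32)
        self.attn = nn.Sequential(                      # SE-style channel attention
            nn.Linear(feat_ch, feat_ch // 4), nn.ReLU(),
            nn.Linear(feat_ch // 4, feat_ch), nn.Sigmoid())
        self.backbone = nn.Sequential(
            nn.Linear(feat_ch, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, transmittance_field, density_feat, phase_feat):
        t = self.trans_cnn(transmittance_field)         # (B, 32)
        feats = torch.cat([t, density_feat, phase_feat], dim=-1)   # (B, feat_ch)
        feats = feats * self.attn(feats)                # re-weight important channels
        return self.backbone(feats)                     # predicted scattering distribution

net = ScatteringNet()
out = net(torch.randn(2, 1, 16, 16, 16), torch.randn(2, 16), torch.randn(2, 16))
print(out.shape)                                        # (2, 16)
```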
Obtaining high-resolution, accurate channel topography and deposit conditions is a primary challenge in the study of channelized debris flow. Widely used mapping technologies, including satellite imaging and drone photogrammetry, struggle to precisely observe the interior conditions of mountainous long-deep gullies, particularly those in the Wenchuan Earthquake region. SLAM is an emerging technology for 3D mapping; however, the extremely rugged environment of long-deep gullies poses two major challenges even for state-of-the-art SLAM: (1) atypical features; (2) violent swaying and oscillation of sensors. These issues result in large deviations and substantial noise in SLAM results. To improve SLAM mapping in such environments, we propose an advanced SLAM-based channel detection and mapping system, named AscDAMs. It features three main enhancements that post-process SLAM results: (1) a digital orthophoto map aided deviation correction algorithm that greatly reduces the systematic error; (2) a point cloud smoothing algorithm that substantially diminishes noise; (3) a cross-section extraction algorithm that enables quantitative assessment of channel deposits and their changes. Two field experiments were conducted in Chutou Gully, Wenchuan County, China, in February and November 2023, representing observations before and after the rainy season. We demonstrate the capability of AscDAMs to greatly improve SLAM results, promoting the use of SLAM for mapping this especially challenging environment. The proposed method compensates for the shortcomings of existing technologies in observing debris flow channel interiors, including detailed channel morphology, erosion patterns, deposit distinction, volume estimation, and change detection. It serves to enhance the study of full-scale debris flow mechanisms, long-term post-seismic evolution, and hazard assessment.
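As an illustration of the kind of post-processing the point cloud smoothing step performs, the sketch below removes statistical outliers from a SLAM point cloud using only NumPy. The neighborhood size and threshold are arbitrary placeholders, and this is not the AscDAMs algorithm itself.

```python
import numpy as np

def remove_statistical_outliers(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is more
    than `std_ratio` standard deviations above the cloud-wide average.
    `points` is an (N, 3) array; brute-force O(N^2) version for clarity."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                          # ignore self-distance
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)    # mean distance to k-NN
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

cloud = np.random.rand(2000, 3)
cloud = np.vstack([cloud, [[10.0, 10.0, 10.0]]])             # an obvious outlier
print(remove_statistical_outliers(cloud).shape)              # outlier removed
```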
Just-in-Time software defect prediction (JIT-SDP) prevents the introduction of defects into software by identifying them at commit check-in time. Current defect prediction approaches rely on manually crafted features such as change metrics and involve expensive-to-train machine learning or deep learning models. These models typically require extensive training that may consume significant computational resources and time, which makes it difficult to update them in real time as new examples become available and limits their suitability for fast online defect prediction. Furthermore, the reliance on a complex underlying model makes these approaches less explainable, so developers cannot understand the reasons behind a model's predictions. An approach that is not explainable might not be adopted in real-life development environments because of developers' lack of trust in its results. To address these limitations, we propose IRJIT, an approach that applies information retrieval to source code and labels new commits as buggy or clean based on their similarity to past buggy or clean commits. IRJIT is online and explainable: it can learn from new data without expensive retraining, and developers can see the documents that support a prediction, providing additional context. Evaluating on 10 open-source datasets in a within-project setting, we show that our approach is up to 23 times faster than the state-of-the-art, offers explainability at the commit and line level, and has comparable performance to the state-of-the-art.
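The retrieve-and-vote idea behind such an approach can be sketched with an off-the-shelf TF-IDF index. The tokenization, the value of k, and the use of scikit-learn below are illustrative choices, not IRJIT's actual implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history of past commits (changed-code text) and their labels.
past_commits = ["fix null check in parser", "add logging to request handler",
                "refactor cache eviction loop", "off by one in index computation"]
past_labels = np.array([1, 0, 0, 1])             # 1 = buggy, 0 = clean

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(past_commits)   # the retrieval index

def classify_commit(commit_text, k=3):
    """Label a new commit by majority vote over its k most similar past
    commits; the retrieved neighbors double as the explanation."""
    sims = cosine_similarity(vectorizer.transform([commit_text]), index)[0]
    top = sims.argsort()[::-1][:k]
    label = int(past_labels[top].mean() >= 0.5)
    return label, [(past_commits[i], float(sims[i])) for i in top]

print(classify_commit("fix index bound in parser loop"))
```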
Speech foundation models (SFMs) have been benchmarked on many speech processing tasks, often achieving state-of-the-art performance with minimal adaptation. However, the SFM paradigm has been significantly less explored for applications of interest to the speech perception community. In this paper, we present a systematic evaluation of 10 SFMs on one such application: speech intelligibility prediction. We focus on the non-intrusive setup of the Clarity Prediction Challenge 2 (CPC2), where the task is to predict the percentage of words correctly perceived by hearing-impaired listeners from speech-in-noise recordings. We propose a simple method that learns a lightweight specialized prediction head on top of frozen SFMs to approach the problem. Our results reveal statistically significant differences in performance across SFMs. Our method resulted in the winning submission in the CPC2, demonstrating its promise for speech perception applications.
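The frozen-SFM-plus-lightweight-head recipe can be illustrated as follows. The choice of wav2vec 2.0 as the SFM, the mean pooling, and the head size are assumptions made for the sketch rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class IntelligibilityHead(nn.Module):
    """Frozen speech foundation model + small trainable head that predicts
    the percentage of words a listener perceives correctly (0-100)."""
    def __init__(self, sfm_name="facebook/wav2vec2-base"):
        super().__init__()
        self.sfm = Wav2Vec2Model.from_pretrained(sfm_name)
        for p in self.sfm.parameters():              # SFM stays frozen
            p.requires_grad = False
        self.head = nn.Sequential(nn.Linear(self.sfm.config.hidden_size, 128),
                                  nn.ReLU(), nn.Linear(128, 1))

    def forward(self, waveform):                     # (B, num_samples) at 16 kHz
        with torch.no_grad():
            feats = self.sfm(waveform).last_hidden_state   # (B, T, H)
        pooled = feats.mean(dim=1)                   # average over time
        return 100.0 * torch.sigmoid(self.head(pooled)).squeeze(-1)

model = IntelligibilityHead()
print(model(torch.randn(2, 16000)).shape)            # (2,), predicted % correct
```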
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
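A single ResMLP residual block, following the description above, can be sketched as below. The per-channel Affine transform and layer sizes follow the paper's high-level description, but this simplified block (omitting details such as LayerScale-style initialization) is not the official implementation.

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    """Per-channel affine transform used in place of normalization."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))
    def forward(self, x):
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    """One residual block: (i) linear cross-patch mixing applied identically
    to every channel, then (ii) a per-patch two-layer MLP across channels."""
    def __init__(self, num_patches, dim, expansion=4):
        super().__init__()
        self.aff1, self.aff2 = Affine(dim), Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)
        self.channel_mlp = nn.Sequential(nn.Linear(dim, expansion * dim),
                                         nn.GELU(),
                                         nn.Linear(expansion * dim, dim))
    def forward(self, x):                            # (B, num_patches, dim)
        x = x + self.patch_mix(self.aff1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.aff2(x))
        return x

block = ResMLPBlock(num_patches=196, dim=384)
print(block(torch.randn(2, 196, 384)).shape)         # (2, 196, 384)
```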