
LiDARs are widely used for mapping and localization in dynamic environments. However, their high cost limits their widespread adoption. Monocular localization in LiDAR maps using inexpensive cameras is therefore a cost-effective alternative for large-scale deployment. Nevertheless, most existing approaches struggle to generalize to new sensor setups and environments and require retraining or fine-tuning. In this paper, we present CMRNext, a novel approach for camera-LiDAR matching that is independent of sensor-specific parameters, generalizable, and usable in the wild for monocular localization in LiDAR maps and for camera-LiDAR extrinsic calibration. CMRNext exploits recent advances in deep neural networks for matching cross-modal data together with standard geometric techniques for robust pose estimation. We reformulate the point-pixel matching problem as an optical flow estimation problem and solve the Perspective-n-Point (PnP) problem on the resulting correspondences to find the relative pose between the camera and the LiDAR point cloud. We extensively evaluate CMRNext on six different robotic platforms, including three publicly available datasets and three in-house robots. Our experimental evaluations demonstrate that CMRNext outperforms existing approaches on both tasks and effectively generalizes to previously unseen environments and sensor setups in a zero-shot manner. We make the code and pre-trained models publicly available at //cmrnext.cs.uni-freiburg.de.
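To make the geometric back end concrete, below is a minimal sketch of the pose-recovery step: given 2D-3D correspondences (here synthesized by projecting random points with a known pose, standing in for the network's predicted matches) and hypothetical intrinsics K, a robust PnP solver recovers the camera pose. This illustrates the general technique, not CMRNext's actual pipeline.

```python
import numpy as np
import cv2

# Hypothetical stand-ins for the network's predicted 2D-3D correspondences.
rng = np.random.default_rng(0)
points_3d = rng.uniform(-5, 5, (100, 3)) + [0, 0, 15]  # LiDAR points in front of camera
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics

# Synthesize pixel matches by projecting with a known ground-truth pose.
rvec_gt = np.array([0.05, -0.02, 0.01])
tvec_gt = np.array([0.3, -0.1, 0.5])
points_2d, _ = cv2.projectPoints(points_3d, rvec_gt, tvec_gt, K, None)

# Robust PnP recovers the camera pose from the correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, distCoeffs=None, reprojectionError=2.0)
print(ok, rvec.ravel(), tvec.ravel())  # should match the ground-truth pose
```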

Related Content

The Graph Transformer, thanks to its global attention mechanism, has emerged as a new tool for dealing with graph-structured data. It is well recognized that the global attention mechanism considers a wider receptive field in a fully connected graph, leading many to believe that useful information can be extracted from all nodes. In this paper, we challenge this belief: does the globalizing property always benefit Graph Transformers? We reveal the over-globalizing problem in Graph Transformers by presenting both empirical evidence and theoretical analysis: the current attention mechanism overly focuses on distant nodes, while nearby nodes, which actually contain most of the useful information, are relatively weakened. We then propose a novel Bi-Level Global Graph Transformer with Collaborative Training (CoBFormer), comprising inter-cluster and intra-cluster Transformers, to prevent over-globalization while retaining the ability to extract valuable information from distant nodes. Moreover, collaborative training is proposed to improve the model's generalization ability with a theoretical guarantee. Extensive experiments on various graphs validate the effectiveness of our proposed CoBFormer.
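A minimal sketch of the bi-level attention idea, with all names and shapes hypothetical: intra-cluster attention restricts full attention to nodes within each cluster, while inter-cluster attention operates on mean-pooled cluster summaries. Cluster assignments are assumed given (e.g., from a graph partitioner), and the collaborative training component is omitted.

```python
import torch
import torch.nn as nn

class BiLevelAttention(nn.Module):
    """Intra-cluster attention within each cluster plus inter-cluster
    attention over pooled cluster summaries (illustrative sketch only)."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.intra = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x, cluster_id):
        # x: (N, dim) node features; cluster_id: (N,) integer cluster labels
        out = torch.zeros_like(x)
        ids = cluster_id.unique()
        for c in ids:  # intra-cluster: full attention restricted to one cluster
            m = cluster_id == c
            h = x[m].unsqueeze(0)
            out[m] = self.intra(h, h, h)[0].squeeze(0)
        # inter-cluster: attention over one summary vector per cluster
        s = torch.stack([x[cluster_id == c].mean(0) for c in ids]).unsqueeze(0)
        mixed = self.inter(s, s, s)[0].squeeze(0)
        for i, c in enumerate(ids):  # broadcast summaries back to member nodes
            out[cluster_id == c] += mixed[i]
        return out

x, cid = torch.randn(50, 32), torch.randint(0, 5, (50,))
y = BiLevelAttention(dim=32)(x, cid)  # (50, 32)
```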

In recent years, there has been a growing trend in high-energy physics (HEP), in both its experimental and phenomenological branches, to incorporate machine learning (ML) and its specialized branch, deep learning (DL). This review provides a thorough illustration of these applications using different DL approaches. The first part of the paper reviews the fundamentals of various particle physics processes and sets up guidelines for assessing them alongside the available learning models. Next, a detailed classification is provided for representing jet images reconstructed in high-energy collisions, mainly proton-proton collisions at well-defined beam energies, covering the available datasets, preprocessing techniques, and feature extraction and selection methods. The presented techniques can be applied to future hadron-hadron colliders, such as the high-luminosity LHC (HL-LHC) and the Future Circular Collider in hadron-hadron mode (FCC-hh). We then explore a number of AI models designed specifically for image analysis in HEP and take a closer look at image classification in hadron collisions, with an emphasis on jets. Throughout, we review state-of-the-art (SOTA) techniques in ML and DL and examine their implications for the demands of HEP. More precisely, the discussion tackles various applications in extensive detail, such as jet tagging, jet tracking, and particle classification. The review concludes with an analysis of the current state of DL in HEP, covering the challenges and potential areas for future research for each application.
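As a point of reference for the jet-image discussion, here is a toy convolutional classifier for calorimeter-style images; the image size, channel count, and labels are illustrative and not tied to any dataset covered in the review.

```python
import torch
import torch.nn as nn

# Toy jet-image classifier: 1-channel calorimeter-style images, 2 classes
# (e.g., QCD vs. boosted-W jets). All shapes are hypothetical.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 2))

jets = torch.randn(64, 1, 32, 32)    # batch of 32x32 jet images
labels = torch.randint(0, 2, (64,))  # binary jet labels
loss = nn.CrossEntropyLoss()(model(jets), labels)
loss.backward()  # one illustrative training step's gradient
```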

Direct Preference Optimization (DPO) is a widely used offline preference optimization algorithm that reparameterizes reward functions in reinforcement learning from human feedback (RLHF) to enhance simplicity and training stability. In this work, we propose SimPO, a simpler yet more effective approach. The effectiveness of SimPO is attributed to a key design choice: using the average log probability of a sequence as the implicit reward. This reward formulation better aligns with model generation and eliminates the need for a reference model, making the algorithm more compute- and memory-efficient. Additionally, we introduce a target reward margin to the Bradley-Terry objective to encourage a larger margin between the winning and losing responses, further enhancing the algorithm's performance. We compare SimPO to DPO and its latest variants across various state-of-the-art training setups, including both base and instruction-tuned models such as Mistral and Llama3, and evaluate on extensive instruction-following benchmarks, including AlpacaEval 2, MT-Bench, and the recent, challenging Arena-Hard benchmark. Our results demonstrate that SimPO consistently and significantly outperforms existing approaches without substantially increasing response length. Specifically, SimPO outperforms DPO by up to 6.4 points on AlpacaEval 2 and by up to 7.5 points on Arena-Hard. Our top-performing model, built on Llama3-8B-Instruct, achieves a remarkable 44.7 length-controlled win rate on AlpacaEval 2, surpassing Claude 3 Opus on the leaderboard, and a 33.8 win rate on Arena-Hard, making it the strongest 8B open-source model.
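The key design described above can be written compactly. With preferred and dispreferred responses $y_w$ and $y_l$, a scaling hyperparameter $\beta$, and the target reward margin $\gamma$, the SimPO objective takes the form:

```latex
\mathcal{L}_{\mathrm{SimPO}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[
      \log\sigma\!\Big(
        \tfrac{\beta}{|y_w|}\log\pi_\theta(y_w\mid x)
      - \tfrac{\beta}{|y_l|}\log\pi_\theta(y_l\mid x)
      - \gamma
      \Big)\right]
```

The length-normalized log probability plays the role of the implicit reward, and no reference policy appears in the objective.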

Data augmentation (DA) is widely employed to improve the generalization performance of deep models. However, most existing DA methods apply augmentation operations with random magnitudes throughout training. While this fosters diversity, it also inevitably introduces uncontrolled variability in the augmented data, which may become misaligned with the evolving training status of the target model. Both theoretical and empirical findings suggest that this misalignment increases the risks of underfitting and overfitting. To address these limitations, we propose AdaAugment, an innovative and tuning-free Adaptive Augmentation method that utilizes reinforcement learning to dynamically adjust augmentation magnitudes for individual training samples based on real-time feedback from the target network. Specifically, AdaAugment features a dual-model architecture consisting of a policy network and a target network, which are jointly optimized to effectively adapt augmentation magnitudes: the policy network optimizes the variability within the augmented data, while the target network utilizes the adaptively augmented samples for training. Extensive experiments across benchmark datasets and deep architectures demonstrate that AdaAugment consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency.
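The dual-model loop can be illustrated with a heavily simplified toy sketch (not the authors' exact algorithm): a policy network maps per-sample loss feedback to a Beta distribution over magnitudes, additive noise stands in for a real augmentation, and the policy is updated with a REINFORCE-style objective. The architectures and reward shaping here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 2))  # Beta params
target = nn.Sequential(nn.Linear(32, 10))                              # toy classifier
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(target.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(reduction="none")

x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
feedback = ce(target(x), y).detach().unsqueeze(1)        # real-time per-sample feedback
ab = nn.functional.softplus(policy(feedback)) + 1        # Beta(a, b) parameters
dist = torch.distributions.Beta(ab[:, 0], ab[:, 1])
mag = dist.sample()                                      # per-sample magnitude in (0, 1)

x_aug = x + mag.unsqueeze(1) * torch.randn_like(x)       # noise as stand-in augmentation
loss_t = ce(target(x_aug), y)
opt_t.zero_grad(); loss_t.mean().backward(); opt_t.step()

# REINFORCE-style update: reward magnitudes that keep the augmented loss close
# to the current training status (an assumed reward, purely illustrative).
reward = -(loss_t.detach() - feedback.squeeze(1)).abs()
loss_p = -(dist.log_prob(mag.clamp(1e-4, 1 - 1e-4)) * reward).mean()
opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```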

Multiplex graphs, characterised by their layered structure, exhibit informative interdependencies within layers that are crucial for understanding complex network dynamics. Quantifying the interaction and shared information among these layers is challenging due to the non-Euclidean structure of graphs. Our paper introduces a comprehensive theory of multivariate information measures for multiplex graphs. We introduce graphon mutual information for pairs of graphs and extend it to graphon interaction information for three or more graphs, including their conditional variants. We then define graphon total correlation and graphon dual total correlation, along with their conditional forms, and introduce graphon $O$-information. We discuss and quantify the concepts of synergy and redundancy in graphs for the first time, introduce consistent nonparametric estimators for these multivariate graphon information-theoretic measures, and provide their convergence rates. We also conduct a simulation study to illustrate our theoretical findings and demonstrate the relationship between the introduced measures, multiplex graph structure, and higher-order interdependencies. Applications to real-world multiplex graphs further show the utility of our estimators in revealing shared information and dependence structures. This work not only answers fundamental questions about information sharing across multiple graphs but also sets the stage for advanced pattern analysis in complex networks.
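For orientation, the classical random-variable counterparts that these graphon measures extend can be stated compactly (the paper lifts these definitions to graphons):

```latex
\begin{align*}
\mathrm{TC}(X_1,\dots,X_n)  &= \textstyle\sum_{i=1}^{n} H(X_i) - H(X_1,\dots,X_n), \\
\mathrm{DTC}(X_1,\dots,X_n) &= H(X_1,\dots,X_n) - \textstyle\sum_{i=1}^{n} H(X_i \mid X_{-i}), \\
\Omega(X_1,\dots,X_n)       &= \mathrm{TC}(X_1,\dots,X_n) - \mathrm{DTC}(X_1,\dots,X_n).
\end{align*}
```

In the classical setting, $\Omega > 0$ indicates redundancy-dominated interdependencies and $\Omega < 0$ synergy-dominated ones.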

Researchers are exploring the integration of the IoT and the cloud continuum, together with AI, to enhance the cost-effectiveness and efficiency of critical infrastructure (CI) systems. This integration, however, increases the susceptibility of CI systems to cyberattacks, potentially leading to disruptions such as power outages, oil spills, or even a nuclear mishap. CI systems are inherently complex and generate vast amounts of heterogeneous, high-dimensional data that cross many trust boundaries on their journey across the IoT, edge, and cloud domains over the communication network interconnecting them. As a result, they face expanded attack surfaces. To secure these dataflows, researchers have used deep neural network models with encouraging results. Nevertheless, two important challenges remain: tackling the computational complexity of these models to reduce convergence times, and preserving the accuracy of detection of integrity-violating intrusions. In this paper, we propose an innovative approach that utilizes trained edge cloud models to synthesize central cloud models, effectively overcoming these challenges. We empirically validate the effectiveness of the proposed method by comparing it with traditional centralized and distributed techniques, including a contemporary collaborative technique.
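One plausible reading of "synthesizing" central models from trained edge models is parameter aggregation; the following generic weight-averaging sketch illustrates that reading only and is not necessarily the paper's exact method. The uniform weighting is a placeholder.

```python
import copy
import torch
import torch.nn as nn

def synthesize_central(edge_models, weights=None):
    """Build a central model whose parameters are a weighted average of the
    edge models' parameters (a generic sketch, not the paper's algorithm)."""
    n = len(edge_models)
    weights = weights or [1.0 / n] * n  # e.g., could reflect edge data volume
    central = copy.deepcopy(edge_models[0])
    with torch.no_grad():
        for name, param in central.named_parameters():
            param.copy_(sum(w * dict(m.named_parameters())[name]
                            for w, m in zip(weights, edge_models)))
    return central

edges = [nn.Linear(20, 2) for _ in range(3)]  # stand-ins for trained edge detectors
central = synthesize_central(edges)
```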

Communication and computation are traditionally treated as separate entities, allowing for individual optimizations. However, many applications care about a function of local information rather than the information itself. For such cases, harnessing interference for computation in a multiple access channel through digital over-the-air computation can notably increase the efficiency of the computation, as established by the ChannelComp method. However, the coding scheme originally proposed in ChannelComp may suffer from high computational complexity because it is general and not optimized for specific modulation categories. This study therefore considers a specific category of digital modulations for over-the-air computation, quadrature amplitude modulation (QAM) and pulse-amplitude modulation (PAM), for which we introduce a novel coding scheme called SumComp. Furthermore, we derive a mean squared error (MSE) analysis for SumComp coding in the computation of the arithmetic mean function and establish an upper bound on the mean absolute error (MAE) for a set of nomographic functions. Simulation results affirm the superior performance of SumComp coding compared to both traditional analog over-the-air computation and the original coding of ChannelComp in terms of MSE and MAE over a noisy multiple access channel. Specifically, SumComp coding achieves approximately $10$ dB improvement in normalized MSE when computing the arithmetic and geometric means in low-noise scenarios.
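The superposition principle that over-the-air computation builds on can be shown in a few lines: with PAM, each node's quantized value maps to an amplitude, the multiple access channel adds the transmitted signals, and the receiver decodes the noisy sum. This toy sketch illustrates the channel-as-adder idea only, not the SumComp encoder itself.

```python
import numpy as np

rng = np.random.default_rng(0)
K, levels = 8, 16                    # number of nodes and PAM order
values = rng.integers(0, levels, K)  # each node's quantized measurement

tx = values.astype(float)            # value -> PAM amplitude (identity map here)
noise = rng.normal(0.0, 0.5)         # receiver noise on the multiple access channel
rx = tx.sum() + noise                # the channel itself adds the signals

# Decode the sum, then divide by K to obtain the arithmetic mean.
mean_est = np.clip(np.rint(rx), 0, K * (levels - 1)) / K
print(values.mean(), mean_est)
```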

Graph Neural Networks (GNNs) have been studied through the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that, despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the training speed of GNNs. Our results show that the training of GNNs is implicitly accelerated by skip connections, greater depth, and/or a good label distribution. Empirical results confirm that our theoretical findings for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
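A linearized GNN in the sense studied here is a message-passing layer without nonlinearity; the sketch below shows such a layer with an optional skip connection. The normalization and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LinearGNNLayer(nn.Module):
    """Linearized GNN layer: propagation + linear transform, no activation,
    with an optional skip connection (illustrative sketch)."""
    def __init__(self, dim, skip=True):
        super().__init__()
        self.lin = nn.Linear(dim, dim, bias=False)
        self.skip = skip

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency, e.g. D^{-1/2} (A + I) D^{-1/2}
        out = adj_norm @ self.lin(x)
        return out + x if self.skip else out

n, d = 100, 16
a = (torch.rand(n, n) > 0.9).float()
a = ((a + a.T) > 0).float()
a.fill_diagonal_(1.0)  # add self-loops
deg = a.sum(1)
adj_norm = a / torch.sqrt(deg.unsqueeze(1) * deg.unsqueeze(0))
h = LinearGNNLayer(d)(torch.randn(n, d), adj_norm)  # (100, 16)
```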

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
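Two of the four technique families are directly available as standard PyTorch utilities; the following sketch applies magnitude pruning and post-training dynamic quantization to a toy model (the model and pruning ratio are placeholders).

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# (1) Pruning: zero out the 50% smallest-magnitude weights of each Linear layer.
for m in model:
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.5)
        prune.remove(m, "weight")  # bake the mask into the weight tensor

# (2) Quantization: store Linear weights as int8, compute activations in float.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)
```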

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of, and dependencies among, different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, while the prediction policies are applied at inference time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
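To see why a fixed threshold of 0.5 is suboptimal, a simple stand-in for learned prediction policies is to tune one threshold per label on validation data; the sketch below selects per-label thresholds by grid search on F1. The data here is synthetic and the grid search is not the paper's meta-learning method.

```python
import numpy as np

def per_label_thresholds(probs, y_true, grid=np.linspace(0.05, 0.95, 19)):
    """probs, y_true: (n_samples, n_labels) predicted probabilities and 0/1 targets.
    Returns one F1-maximizing threshold per label."""
    thresholds = np.empty(probs.shape[1])
    for j in range(probs.shape[1]):
        f1s = []
        for t in grid:
            pred = probs[:, j] >= t
            tp = np.sum(pred & (y_true[:, j] == 1))
            denom = pred.sum() + (y_true[:, j] == 1).sum()  # F1 = 2TP / (P + A)
            f1s.append(2 * tp / denom if denom else 0.0)
        thresholds[j] = grid[int(np.argmax(f1s))]
    return thresholds

rng = np.random.default_rng(0)
y = rng.integers(0, 2, (200, 5))
p = np.clip(y + rng.normal(0, 0.4, y.shape), 0, 1)  # noisy scores tied to labels
print(per_label_thresholds(p, y))  # generally differs per label, rarely all 0.5
```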
