
Automated damage detection is an integral component of every structural health monitoring (SHM) system. Typically, measurements from various sensors are collected and reduced to damage-sensitive features, and diagnostic values are generated by statistically evaluating the features. Since changes in the data do not result from damage alone, it is necessary to determine the confounding factors (environmental or operational variables) and to remove their effects from the measurements or features. Many existing methods for correcting confounding effects are based on different types of mean regression. This neglects potential changes in higher-order statistical moments, even though the output covariances in particular are essential for generating reliable diagnostics for damage detection. This article presents an approach to explicitly quantify the changes in the covariance, using conditional covariance matrices based on a non-parametric, kernel-based estimator. The method is applied to the Munich Test Bridge and the KW51 Railway Bridge in Leuven, covering both raw sensor measurements (acceleration, strain, inclination) and extracted damage-sensitive features (natural frequencies). The results show that covariances between different vibration or inclination sensors can change significantly with temperature, and the same is true for natural frequencies. To highlight the advantages, it is explained how conditional covariances can be combined with standard approaches for damage detection, such as the Mahalanobis distance and principal component analysis. As a result, more reliable diagnostic values can be generated with fewer false alarms.
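To make the idea concrete, here is a minimal sketch of a kernel-based conditional covariance estimator of the kind described, combined with a confounder-adjusted Mahalanobis distance. The Gaussian kernel, the bandwidth `h`, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conditional_moments(Y, t, t0, h=1.0):
    """Nadaraya-Watson-style estimates of the mean and covariance of the
    features Y (n x p) conditional on the confounder value t0 (e.g. temperature)."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
    w /= w.sum()
    mu = w @ Y                                # conditional mean, shape (p,)
    R = Y - mu
    cov = (R * w[:, None]).T @ R              # conditional covariance, (p, p)
    return mu, cov

def mahalanobis(y_new, t_new, Y_ref, t_ref, h=1.0, eps=1e-8):
    """Temperature-adjusted Mahalanobis distance of a new feature vector."""
    mu, cov = conditional_moments(Y_ref, t_ref, t_new, h)
    cov += eps * np.eye(cov.shape[0])         # regularize for invertibility
    d = y_new - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

Evaluating new features against the mean and covariance conditioned on the current temperature, rather than global statistics, is what suppresses temperature-driven false alarms.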

Related Content

A safe and efficient decision-making system is crucial for autonomous vehicles. However, the complexity of driving environments limits the effectiveness of many rule-based and machine learning approaches. Reinforcement Learning (RL), with its robust self-learning capabilities and environmental adaptability, offers a promising solution to these challenges. Nevertheless, safety and efficiency concerns during training hinder its widespread application. To address these concerns, we propose a novel RL framework, Simple to Complex Collaborative Decision (S2CD). First, we rapidly train the teacher model in a lightweight simulation environment. In the more complex and realistic environment, the teacher intervenes when the student agent exhibits suboptimal behavior, assessing the value of the student's actions to avert danger. We also introduce an RL algorithm called Adaptive Clipping Proximal Policy Optimization Plus, which combines samples from both teacher and student policies and employs dynamic clipping strategies based on sample importance. This approach improves sample efficiency while effectively alleviating data imbalance. Additionally, we employ the Kullback-Leibler divergence as a policy constraint, transforming it into an unconstrained problem with the Lagrangian method to accelerate the student's learning. Finally, a gradual weaning strategy ensures that the student learns to explore independently over time, overcoming the teacher's limitations and maximizing performance. Simulation experiments in highway lane-change scenarios show that the S2CD framework enhances learning efficiency, reduces training costs, and significantly improves safety compared to state-of-the-art algorithms. The framework also ensures effective knowledge transfer between teacher and student models: even with a suboptimal teacher, the student achieves superior performance, demonstrating the robustness and effectiveness of S2CD.
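As a rough illustration of such an update, the sketch below combines teacher and student samples in one clipped surrogate loss with a per-sample clip range and a KL penalty term from the Lagrangian relaxation. The paper's actual dynamic-clipping rule is based on sample importance; the simple teacher/student split used here is only a stand-in, and all names are hypothetical.

```python
import torch

def s2cd_style_loss(logp_new, logp_old, adv, kl, is_teacher,
                    eps_student=0.2, eps_teacher=0.3, lam=0.5):
    """Illustrative clipped surrogate loss over a mixed batch of teacher and
    student samples, with a per-sample clip range and a KL penalty term."""
    ratio = torch.exp(logp_new - logp_old)    # importance ratio per sample
    # wider clip range for teacher samples so their guidance is not cut off
    eps = torch.where(is_teacher,
                      torch.full_like(ratio, eps_teacher),
                      torch.full_like(ratio, eps_student))
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    surrogate = torch.min(unclipped, clipped) # pessimistic PPO-style bound
    return -surrogate.mean() + lam * kl.mean()
```

Here `lam` plays the role of the Lagrange multiplier on the KL constraint; in a full implementation it would typically be adapted rather than fixed.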

The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses are in accordance with human preferences. Prevalent alignment techniques, such as DPO, PPO and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as the state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally not viable. In this work, we introduce Speculative Rejection, a computationally-viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, like Best-of-N does, while being between 16 to 32 times more computationally efficient.
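The contrast can be sketched as follows, assuming hypothetical `extend` (decode more tokens for each candidate) and `score` (apply the reward model) interfaces. The real algorithm's rejection schedule is more refined; this sketch only conveys the core idea of pruning low-scoring partial generations early instead of decoding all N responses to completion.

```python
def best_of_n(prompt, n, extend, score, max_tokens):
    cands = extend([prompt] * n, k=max_tokens)      # decode all n to full length
    scores = score(cands)                           # reward model on full texts
    return cands[scores.index(max(scores))]         # keep the single best

def speculative_rejection(prompt, n, extend, score, max_tokens,
                          chunk=16, keep=0.5):
    """Start n candidates, periodically score the partial generations,
    and keep only the top fraction at each checkpoint."""
    cands = [prompt] * n
    decoded = 0
    while decoded < max_tokens and len(cands) > 1:
        cands = extend(cands, k=chunk)              # decode the next chunk
        decoded += chunk
        scores = score(cands)                       # reward model on partials
        order = sorted(range(len(cands)),
                       key=scores.__getitem__, reverse=True)
        cands = [cands[i] for i in order[:max(1, int(len(cands) * keep))]]
    return cands[0]
```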

Robotic insertion tasks remain challenging due to uncertainties in perception and the need for precise control, particularly in unstructured environments. While humans seamlessly combine vision and touch for such tasks, effectively integrating these modalities in robotic systems is still an open problem. Our work presents an extensive analysis of the interplay between visual and tactile feedback during dexterous insertion tasks, showing that tactile sensing can greatly enhance success rates on challenging insertions with tight tolerances and varied hole orientations that vision alone cannot solve. These findings provide valuable insights for designing more effective multi-modal robotic control systems and highlight the critical role of tactile feedback in contact-rich manipulation tasks.

In public health, it is critical for policymakers to assess the relationship between disease prevalence and associated risk factors or clinical characteristics, facilitating effective resource allocation. However, for diseases like female breast cancer (FBC), reliable prevalence data at specific geographical levels, such as the county level, are limited because the gold-standard data typically come from long-term cancer registries, which do not necessarily collect the needed risk factors. In addition, it remains unclear whether fitting each model separately or jointly yields better estimates. In this paper, we identify two data sources to produce reliable county-level prevalence estimates in Missouri, USA: the population-based Missouri Cancer Registry (MCR) and the survey-based Missouri County-Level Study (CLS). We propose a two-stage Bayesian model to synthesize these sources, accounting for their differences in methodological design, case definitions, and collected information. The first stage involves estimating county-level FBC prevalence using the raking method for CLS data and the counting method for MCR data, calibrating the differences in methodological design and case definition. The second stage synthesizes the two sources, with their different sets of covariates, using a Bayesian generalized linear mixed model with a Zellner-Siow prior for the coefficients. Our data analyses demonstrate that using both data sources yields better estimates than using either source alone, and that including a data-source membership indicator matters when systematic differences exist between the sources. Finally, we translate the results into policy recommendations and discuss methodological differences for synthesizing registry and survey data.
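As an illustration of the kind of calibration used in the first stage, here is a generic raking (iterative proportional fitting) sketch that adjusts survey weights so weighted totals match known population margins. It is not the paper's exact procedure, and the variable names are assumptions.

```python
import numpy as np

def rake(weights, cats, margins, n_iter=50, tol=1e-8):
    """Iterative proportional fitting: adjust survey weights so weighted
    totals match known population margins for each raking variable.
    cats[v] holds each respondent's category for variable v;
    margins[v][c] is the known population total of category c."""
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        max_change = 0.0
        for v, target in margins.items():
            for c, total in target.items():
                mask = cats[v] == c
                cur = w[mask].sum()
                if cur > 0:
                    factor = total / cur
                    w[mask] *= factor            # rescale this category
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:                     # converged on all margins
            break
    return w
```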

Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks. However, the safety and security issues of LLM systems have become the major obstacle to their widespread application. Many studies have extensively investigated risks in LLM systems and developed the corresponding mitigation strategies. Leading-edge enterprises such as OpenAI, Google, Meta, and Anthropic have also devoted substantial effort to building responsible LLMs. Therefore, there is a growing need to organize the existing studies and establish comprehensive taxonomies for the community. In this paper, we delve into four essential modules of an LLM system: an input module for receiving prompts, a language model trained on extensive corpora, a toolchain module for development and deployment, and an output module for exporting LLM-generated content. Based on this, we propose a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system and discusses the corresponding mitigation strategies. Furthermore, we review prevalent benchmarks, aiming to facilitate the risk assessment of LLM systems. We hope that this paper can help LLM participants embrace a systematic perspective to build their responsible LLM systems.

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
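One way to picture the joint estimator is a proximal-gradient step on a group ($l_1/l_2$) penalty that ties each edge's coefficients together across the $K$ tasks, so the tasks share a support. The sketch below assumes a given candidate parent set for one node and is a simplification of the paper's iterative algorithm; all names are illustrative.

```python
import numpy as np

def group_soft_threshold(B, tau):
    """Proximal operator of the l1/l2 penalty: B is (K, p), one row per
    task; each column (one edge across all K tasks) is shrunk jointly."""
    norms = np.linalg.norm(B, axis=0)                    # per-edge group norms
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return B * scale

def joint_node_regression(Xs, parents, j, lam=0.1, lr=1e-2, n_iter=500):
    """Proximal gradient for regressing node j on candidate parents in each
    of K tasks, with an l1/l2 penalty tying the supports together."""
    K, p = len(Xs), len(parents)
    B = np.zeros((K, p))
    for _ in range(n_iter):
        G = np.zeros_like(B)
        for k, X in enumerate(Xs):
            r = X[:, parents] @ B[k] - X[:, j]           # residual of task k
            G[k] = X[:, parents].T @ r / len(X)          # gradient of 0.5*MSE
        B = group_soft_threshold(B - lr * G, lr * lam)   # joint shrinkage
    return B
```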

As neural networks are deployed ever more widely, confidence in their predictions has become increasingly important. However, basic neural networks do not deliver certainty estimates, or they suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches to calibrating neural networks, and give an overview of existing baselines and implementations. Examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
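As a concrete example of one surveyed family (approximate Bayesian inference via test-time stochasticity), here is a minimal Monte-Carlo dropout sketch; it is one generic technique among those reviewed, not a method proposed by the survey itself.

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Monte-Carlo dropout: keep dropout active at test time and treat the
    spread of repeated stochastic forward passes as model (epistemic)
    uncertainty."""
    model.train()   # enables dropout layers (note: also affects batch norm)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)   # predictive mean, variance
```

A high predictive variance flags inputs where the model's prediction should not be trusted, which is the practical payoff discussed in the survey.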

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
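To make category (1) concrete, below is a minimal sketch of symmetric post-training int8 quantization, which stores 8-bit integers plus one float scale per tensor; it is a generic illustration, not a method proposed in the survey.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8."""
    scale = np.abs(w).max() / 127.0                       # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(dequantize(q, s) - w).max())
```

The 4x reduction in storage (float32 to int8) is the memory saving; the speedup comes from hardware that executes integer arithmetic faster than floating point.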

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural thought, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then cover a few very recent successful methods, such as dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics and the main datasets used for evaluating model performance, along with recent benchmarking efforts. Finally, we conclude the paper by discussing remaining challenges and possible directions for this topic.
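As a concrete instance of the low-rank factorization scheme, the sketch below replaces a dense weight matrix with two truncated-SVD factors, cutting parameters from m*n to r*(m+n); this is a generic illustration rather than any specific surveyed method.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace a dense layer W (m x n) with factors U (m x r) and V (r x n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]     # fold singular values into U
    V_r = Vt[:rank]
    return U_r, V_r

W = np.random.randn(512, 512)
U, V = low_rank_factorize(W, rank=64)
print("param ratio:", (U.size + V.size) / W.size)                    # ~0.25
print("rel. error :", np.linalg.norm(W - U @ V) / np.linalg.norm(W))
```

In practice the factorized layer is implemented as two smaller layers applied in sequence, which also reduces the multiply-accumulate count.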

Within the rapidly developing Internet of Things (IoT), the numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality-of-service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneity among configuration knowledge representation models limits the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
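A unified data model of this kind might be sketched as follows; the field names and structure are hypothetical, since the paper's actual schema is not reproduced in this abstract.

```python
from dataclasses import dataclass, field

@dataclass
class IoTResource:
    """One configuration-knowledge artifact in a unified data model
    spanning device, edge, and cloud resources (hypothetical schema)."""
    resource_id: str
    tier: str                                           # "device" | "edge" | "cloud"
    capabilities: dict = field(default_factory=dict)    # e.g. {"cpu_cores": 2}
    qos: dict = field(default_factory=dict)             # e.g. {"latency_ms": 50}
    context: dict = field(default_factory=dict)         # location, load, ...

camera = IoTResource("cam-01", "device",
                     capabilities={"fps": 30},
                     qos={"latency_ms": 100})
```

Representing all three tiers with one schema is what would let a recommender reason over device, edge, and cloud configurations uniformly.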
