
Hybrid modeling, the combination of first-principles and machine learning models, is an emerging research field that is attracting more and more attention. Even though hybrid models produce formidable results on academic examples, several technical challenges still hinder the use of hybrid modeling in real-world applications. By presenting NeuralFMUs, the fusion of an FMU, a numerical ODE solver, and an ANN, we are paving the way for the use of a variety of first-principles models from different modeling tools as parts of hybrid models. This contribution addresses the hybrid modeling of a complex, real-world example: starting with a simplified 1D fluid model of the human cardiovascular system (arterial side), the aim is to learn neglected physical effects, such as arterial elasticity, from data. We show that the hybrid modeling process is more convenient, requires less system knowledge, and is therefore less error-prone than modeling based solely on first principles. Further, the resulting hybrid model offers improved computational performance compared to a pure first-principles white-box model, while still fulfilling the accuracy requirements for the considered hemodynamic quantities. The use of the presented techniques is explained in a general manner, and the considered use case can serve as an example for other modeling and simulation applications in and beyond the medical domain.
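The core idea can be illustrated without FMI tooling: a simplified first-principles ODE right-hand side is summed with an ANN correction and handed to a numerical ODE solver. The sketch below is a minimal, self-contained stand-in; the plain-Python dynamics, layer sizes, and random weights are assumptions for illustration, not the paper's actual model or API.

```python
# Minimal hybrid-model sketch: first-principles ODE + ANN residual,
# integrated by a numerical solver (plain Python standing in for an FMU).
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def fp_rhs(t, x):
    # Toy "first-principles" right-hand side: a linear pressure decay.
    return -0.5 * x

# Tiny ANN correction term (one hidden layer; random weights here --
# in practice these would be trained against measurement data).
W1, b1 = rng.normal(size=(8, 1)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def ann_correction(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def hybrid_rhs(t, x):
    # Hybrid right-hand side: first principles plus learned residual.
    return fp_rhs(t, x) + ann_correction(x)

sol = solve_ivp(hybrid_rhs, (0.0, 10.0), np.array([1.0]), rtol=1e-6)
print(sol.y[0, -1])  # endpoint of the hybrid trajectory
```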

Related content

In contrast to conventional wired connections, industrial control over wireless transmission is widely regarded as a promising solution due to its reduced cost and enhanced long-term reliability. However, mission-critical applications impose stringent quality-of-service (QoS) requirements that entail ultra-reliable low-latency communications (URLLC). The defining feature of URLLC is that the blocklength of the channel codes is short, so the conventional Shannon capacity is not applicable. In this paper, we consider URLLC in a factory automation (FA) scenario. Due to the densely deployed equipment in FA, wireless signals are easily blocked by obstacles. To address this issue, we propose deploying an intelligent reflecting surface (IRS) to create an alternative transmission link, which can enhance transmission reliability. We focus on the performance analysis of IRS-aided URLLC-enabled communications in an FA scenario. Both the average data rate (ADR) and the average decoding error probability (ADEP) are derived under finite channel blocklength for seven cases: 1) Rayleigh fading channel; 2) with direct channel link; 3) Nakagami-m fading channel; 4) imperfect phase alignment; 5) multiple IRSs; 6) Rician fading channel; and 7) correlated channels. Extensive numerical results are provided to verify the accuracy of our derived results.
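For context, the short-blocklength regime the abstract refers to is commonly handled with the normal approximation of Polyanskiy et al.; a minimal sketch follows. It shows the standard second-order approximation for an AWGN-like channel and omits the paper's IRS-specific fading analysis; the parameter values in the example are illustrative.

```python
# Finite-blocklength normal approximation of the decoding error probability.
import numpy as np
from scipy.stats import norm

def adep_normal_approx(snr, n, rate_bits):
    """Approximate decoding error probability at blocklength n.

    snr       : received signal-to-noise ratio (linear)
    n         : channel blocklength (symbols)
    rate_bits : coding rate in bits per channel use
    """
    capacity = np.log2(1.0 + snr)                          # Shannon capacity (bits)
    dispersion = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2
    # Second-order normal approximation (O(log n / n) correction omitted);
    # norm.sf is the Gaussian Q-function.
    return norm.sf((capacity - rate_bits) * np.sqrt(n / dispersion))

# Example: 10 dB SNR, blocklength 200, rate 2 bits per channel use.
print(adep_normal_approx(10 ** (10 / 10), 200, 2.0))
```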

The critical nature of vehicular communications requires their extensive testing and evaluation. Analytical models can represent an attractive and cost-effective approach for such evaluation if they adequately model all the underlying effects that impact the performance of vehicular communications. Several analytical models have been proposed to date for vehicular communications based on the IEEE 802.11p (or DSRC) standard. However, existing models normally model the MAC (Medium Access Control) in detail and generally simplify the propagation and interference effects, which reduces their value as an alternative for evaluating the performance of vehicular communications. This paper addresses this gap and presents new analytical models that accurately capture the performance of vehicle-to-vehicle communications based on the IEEE 802.11p standard. The models jointly account for a detailed treatment of the propagation and interference effects, as well as the impact of the hidden terminal problem. They quantify the PDR (Packet Delivery Ratio) as a function of the distance between transmitter and receiver. The paper also presents new analytical models to quantify the probability of each of the four types of packet errors in IEEE 802.11p. In addition, the paper presents the first analytical model capable of accurately estimating the Channel Busy Ratio (CBR) metric even under high channel load. All the analytical models are validated by simulation for a wide range of parameters, including traffic densities, packet transmission frequencies, transmission power levels, data rates, and packet sizes. An implementation of the models is provided openly to facilitate their use by the community.
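As a rough illustration of how a PDR-versus-distance curve arises from propagation modeling, the sketch below combines a log-distance path-loss model with log-normal shadowing. All parameter values and the single-error-source simplification are assumptions; the paper's models additionally capture interference, the hidden terminal problem, and all four 802.11p error types.

```python
# Toy PDR(d): probability that received power exceeds the sensing threshold
# under log-distance path loss with Gaussian (in dB) shadowing.
import numpy as np
from scipy.stats import norm

def pdr(distance_m, tx_dbm=23.0, sens_dbm=-85.0,
        pl0_db=47.0, d0_m=1.0, exponent=2.7, shadow_std_db=3.0):
    path_loss = pl0_db + 10 * exponent * np.log10(distance_m / d0_m)
    rx_dbm = tx_dbm - path_loss
    # With log-normal shadowing, Pr[Prx > sens] is a Q-function of the margin.
    return norm.sf((sens_dbm - rx_dbm) / shadow_std_db)

for d in (100, 300, 500):
    print(d, round(pdr(d), 3))
```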

Recent years have witnessed the increasing use of coordinated accounts on social media, operated by misinformation campaigns to influence public opinion and manipulate social outcomes. Consequently, there is an urgent need for effective methods of coordinated group detection to combat misinformation on social media. However, existing works suffer from various drawbacks: either limited performance due to an extreme reliance on predefined signatures of coordination, or an inability to address the natural sparsity of account activities on social media with useful prior domain knowledge. In this paper, we therefore propose a coordination detection framework that incorporates a neural temporal point process with prior knowledge such as temporal logic or predefined filtering functions. Specifically, while modeling the observed social media data with a neural temporal point process, we jointly learn a Gibbs-like distribution over group assignments based on how consistent an assignment is with (1) the account embedding space and (2) the prior knowledge. Because this distribution is hard to compute and sample from efficiently, we design a theoretically guaranteed variational inference approach that learns a mean-field approximation to it. Experimental results on a real-world dataset show the effectiveness of our proposed method compared to the SOTA model in both unsupervised and semi-supervised settings. We further apply our model to a COVID-19 vaccine Tweets dataset. The detection results suggest the presence of suspicious coordinated efforts to spread misinformation about COVID-19 vaccines.
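To make the Gibbs-like assignment idea concrete, the sketch below scores each account's group membership by (1) distance to a group centroid in an embedding space and (2) a stub prior-knowledge filtering function, then normalizes the scores into mean-field responsibilities. The features, rule, and weights are illustrative assumptions, not the paper's model, which couples the assignment with a neural temporal point process.

```python
# Mean-field responsibilities from an energy that mixes embedding fit
# with a prior-knowledge score.
import numpy as np

def prior_score(account_feats, group):
    # Stub for predefined filtering functions / temporal-logic rules,
    # e.g., reward assigning high-burstiness accounts to group 0.
    return 1.0 if (group == 0 and account_feats["burstiness"] > 0.8) else 0.0

def mean_field_assignments(embeddings, centroids, feats, beta=1.0, lam=2.0):
    # Negative energy: embedding closeness plus weighted prior-knowledge fit.
    dists = ((embeddings[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    prior = np.array([[prior_score(f, g) for g in range(len(centroids))]
                      for f in feats])
    logits = -beta * dists + lam * prior
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the softmax
    q = np.exp(logits)
    return q / q.sum(axis=1, keepdims=True)       # mean-field q(z)

emb = np.random.default_rng(1).normal(size=(4, 2))
cents = np.array([[0.0, 0.0], [2.0, 2.0]])
feats = [{"burstiness": b} for b in (0.9, 0.1, 0.95, 0.2)]
print(mean_field_assignments(emb, cents, feats).round(2))
```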

Building Information Modeling (BIM) is increasingly used in the construction industry, but existing studies often ignore embedded rebars. Ground Penetrating Radar (GPR) offers a potential way to develop as-built BIM with both surface elements and rebars. However, automatically translating rebars from GPR into BIM is challenging since GPR cannot provide any information about the scanned element. We therefore propose an approach that links GPR data and BIM using Faster R-CNN. A label is attached to each element scanned by GPR to capture labeled images, which are used together with other images to build a 3D model. Faster R-CNN is then used to identify the labels, and the projection relationship between the images and the model is used to localize the scanned elements in the 3D model. Two concrete buildings were selected to evaluate the proposed approach, and the results show that our method can accurately translate the rebars from GPR data into the corresponding elements in BIM with correct distributions.
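The label-identification step can be sketched with torchvision's off-the-shelf Faster R-CNN; the snippet below only shows inference and score filtering. The image path is hypothetical, and the paper's fine-tuned label classes and the projection of detections into the 3D model are not reproduced.

```python
# Off-the-shelf Faster R-CNN inference as a stand-in for the label detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical photo of a labeled, GPR-scanned element.
image = to_tensor(Image.open("scanned_element.jpg").convert("RGB"))

with torch.no_grad():
    pred = model([image])[0]

# Keep confident detections; each box could then be projected into the
# photogrammetric 3D model to localize the scanned element.
keep = pred["scores"] > 0.8
print(pred["boxes"][keep], pred["labels"][keep])
```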

Affective technology offers exciting opportunities to improve road safety by catering to human emotions. Modern car interiors enable the contactless detection of user states, paving the way for the systematic promotion of safe driver behavior through emotion regulation. We review the current literature on the impact of emotions on driver behavior and analyze the state of emotion regulation approaches in the car. We summarize the challenges for affective interaction in the form of cultural aspects, technological hurdles, and methodological considerations, as well as opportunities to improve road safety by returning drivers to an emotionally balanced state. The purpose of this review is to outline the community's combined knowledge for interested researchers, to provide a focused introduction for practitioners, and to identify future directions for affective interaction in the car.

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework covering the important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these aspects. To unify the currently fragmented approaches to trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. Within this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges for the future development of trustworthy AI systems, including the need for a paradigm shift towards comprehensively trustworthy AI systems.

Due to their increasingly widespread use, confidence in neural network predictions has become more and more important. However, basic neural networks either do not deliver certainty estimates or suffer from over- or underconfidence. Many researchers have been working on understanding and quantifying uncertainty in neural network predictions. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Examples from the wide spectrum of challenges in different fields illustrate the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader use of such methods is given.
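As a concrete illustration of one of the surveyed approaches, the sketch below uses a toy deep ensemble to separate predictive uncertainty into a data (aleatoric) part and a model (epistemic) part. The "members" are random linear predictors with fixed noise estimates, an assumption standing in for independently trained networks.

```python
# Deep-ensemble sketch: predictive mean plus a decomposition of the
# variance into data (aleatoric) and model (epistemic) uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Each member predicts a mean and a (here constant) noise variance.
members = [(rng.normal(1.0, 0.1), rng.normal(0.0, 0.1), 0.05) for _ in range(5)]

def member_predict(member, x):
    w, b, noise_var = member
    return w * x + b, noise_var

def ensemble_predict(x):
    means, noise_vars = zip(*(member_predict(m, x) for m in members))
    means, noise_vars = np.array(means), np.array(noise_vars)
    mean = means.mean()
    data_unc = noise_vars.mean()   # irreducible (aleatoric) part
    model_unc = means.var()        # reducible (epistemic) part
    return mean, data_unc, model_unc

print(ensemble_predict(2.0))
```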

Influenced by the stunning success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks. In recent years, we have witnessed significant progress in developing neural recommender models, which generalize and surpass traditional recommender models owing to the strong representation power of neural networks. In this survey paper, we conduct a systematic review of neural recommender models, aiming to summarize the field to facilitate future progress. Distinct from existing surveys that categorize methods according to a taxonomy of deep learning techniques, we instead summarize the field from the perspective of recommendation modeling, which could be more instructive to researchers and practitioners working on recommender systems. Specifically, we divide the work into three types based on the data used for recommendation modeling: 1) collaborative filtering models, which leverage the key source of user-item interaction data; 2) content-enriched models, which additionally utilize the side information associated with users and items, such as user profiles and item knowledge graphs; and 3) context-enriched models, which account for the contextual information associated with an interaction, such as time, location, and past interactions. After reviewing representative works of each type, we discuss some promising directions in this field, including benchmarking recommender systems, graph-reasoning-based recommendation models, and explainable and fair recommendations for social good.
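As a minimal illustration of the first category, the sketch below scores user-item pairs with a plain matrix-factorization dot product, the collaborative-filtering core that neural models generalize. The data and dimensions are illustrative, with untrained random embeddings.

```python
# Matrix-factorization scoring: the collaborative-filtering core.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 50, 8
user_emb = rng.normal(scale=0.1, size=(n_users, dim))
item_emb = rng.normal(scale=0.1, size=(n_items, dim))

def score(u, i):
    # Neural CF variants replace this dot product with an MLP, or enrich
    # the embeddings with side information (content) and context.
    return user_emb[u] @ item_emb[i]

# Rank all items for user 7 and show the top five.
ranking = np.argsort([-score(7, i) for i in range(n_items)])
print(ranking[:5])
```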

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proven to be important, and several works consider fields in their models. In this paper, we propose a novel approach to model field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of the embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while maintaining model performance. The FmFM model can be further optimized by caching the intermediate vectors, so that a prediction takes only thousands of floating-point operations (FLOPs). Our experimental results show that it can outperform FFM, which is more complex, and that its performance is comparable to DNN models that require many more FLOPs at runtime.
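Based on the abstract's description, the FmFM interaction of two features can be read as transforming one embedding through a field-pair matrix before taking the dot product. The sketch below implements that reading with one active feature per field; the dimensions are illustrative, and the field-specific embedding sizes and caching optimizations are omitted.

```python
# FmFM-style pairwise interactions: <v_i M[f_i, f_j], v_j> summed over
# all field pairs (one active feature per field for simplicity).
import numpy as np

rng = np.random.default_rng(0)
n_fields, dim = 3, 4
emb = rng.normal(scale=0.1, size=(n_fields, dim))               # feature embeddings
M = rng.normal(scale=0.1, size=(n_fields, n_fields, dim, dim))  # field-pair matrices

def fmfm_interactions(emb, M):
    total = 0.0
    for i in range(n_fields):
        for j in range(i + 1, n_fields):
            # Transform v_i into field j's space, then dot with v_j.
            total += (emb[i] @ M[i, j]) @ emb[j]
    return total

print(fmfm_interactions(emb, M))
```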

Click-through rate (CTR) prediction is one of the fundamental tasks for e-commerce search engines. As search becomes more personalized, it is necessary to capture user interest from rich behavior data. Existing user behavior modeling algorithms develop different attention mechanisms to emphasize query-relevant behaviors and suppress irrelevant ones. Despite being extensively studied, these attention mechanisms still suffer from two limitations. First, conventional attention mostly limits the attention field to a single user's behaviors, which is not suitable in e-commerce, where users often hunt for new demands that are irrelevant to any historical behavior. Second, these attention mechanisms are usually biased towards frequent behaviors, which is unreasonable since high frequency does not necessarily indicate great importance. To tackle these two limitations, we propose a novel attention mechanism, termed Kalman Filtering Attention (KFAtt), that treats the weighted pooling in attention as a maximum a posteriori (MAP) estimation. By incorporating a prior, KFAtt resorts to global statistics when few user behaviors are relevant. Moreover, a frequency-capping mechanism is incorporated to correct the bias towards frequent behaviors. Offline experiments on both a benchmark and a 10-billion-scale real production dataset, together with an online A/B test, show that KFAtt outperforms all compared state-of-the-art methods. KFAtt has been deployed in the ranking system of a leading e-commerce website, serving the main traffic of hundreds of millions of active users every day.
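One way to read the attention-as-MAP idea is as precision-weighted pooling under Gaussian assumptions, where a global prior dominates whenever the per-behavior observation variances are large (i.e., few behaviors are relevant). The sketch below implements this hedged reading; it is not the paper's exact KFAtt derivation, and all variances are illustrative.

```python
# Precision-weighted MAP pooling: behaviors as noisy observations of the
# user interest, regularized by a global-statistics prior.
import numpy as np

def kf_attention(values, obs_vars, prior_mean, prior_var):
    """MAP estimate of the user interest under Gaussian assumptions.

    values     : (n, d) behavior representations
    obs_vars   : (n,) per-behavior observation variances (query-dependent)
    prior_mean : (d,) global-statistics prior
    prior_var  : scalar prior variance
    """
    precisions = 1.0 / np.asarray(obs_vars)
    num = (precisions[:, None] * values).sum(0) + prior_mean / prior_var
    den = precisions.sum() + 1.0 / prior_var
    return num / den  # high-variance (irrelevant) behaviors shrink to the prior

vals = np.array([[1.0, 0.0], [0.8, 0.2]])
print(kf_attention(vals, obs_vars=[0.1, 10.0],
                   prior_mean=np.zeros(2), prior_var=1.0))
```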
