
In the present era of advanced technology, the Internet of Things (IoT) plays a crucial role in enabling smart connected environments. This includes various domains such as smart homes, smart healthcare, smart cities, smart vehicles, and many others. With ubiquitous smart connected devices and systems, the large amount of data associated with them is at prime risk from malicious entities (e.g., users, devices, applications) in these systems. Innovative technologies, including cloud computing, Machine Learning (ML), and data analytics, support the development of anomaly detection models for the Vehicular Internet of Things (V-IoT), which encompasses collaborative automatic driving and enhanced transportation systems. However, traditional centralized anomaly detection models fail to provide adequate services for connected vehicles due to issues such as high latency, privacy leakage, performance overhead, and model drift. Recently, Federated Learning (FL) has gained significant recognition for its ability to address data privacy concerns in the IoT domain. A Digital Twin (DT) proves beneficial in addressing uncertain crises and data security issues by creating a virtual replica that simulates various factors, including traffic trajectories, city policies, and vehicle utilization. However, the effectiveness of a V-IoT DT system relies heavily on the collection of long-term, high-quality data to make appropriate decisions. This paper introduces a Hierarchical Federated Learning (HFL) based anomaly detection model for V-IoT, aiming to enhance the accuracy of the model. Our proposed model integrates both DT and HFL approaches to create a comprehensive system for detecting malicious activities using an anomaly detection model. Additionally, real-world V-IoT use case scenarios are presented to demonstrate the application of the proposed model.
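
To make the aggregation structure concrete, below is a minimal sketch of one hierarchical federated averaging round, assuming each local anomaly-detection model is represented as a flat NumPy parameter vector and assuming a two-level vehicle-to-edge-to-cloud hierarchy. The function names, weighting by sample counts, and toy data are illustrative assumptions, not the paper's implementation.

```python
# Minimal hierarchical federated averaging sketch (illustrative, not the paper's code).
import numpy as np

def fed_avg(models, weights):
    """Weighted average of parameter vectors (FedAvg-style aggregation)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_round(vehicle_models_per_edge, samples_per_edge):
    """One HFL round: edge-level aggregation followed by cloud-level aggregation."""
    edge_models, edge_weights = [], []
    for models, counts in zip(vehicle_models_per_edge, samples_per_edge):
        edge_models.append(fed_avg(models, counts))  # aggregate vehicles at the edge
        edge_weights.append(sum(counts))             # edge weight = total local samples
    return fed_avg(edge_models, edge_weights)        # cloud aggregates edge models

# Toy usage: two edges, each holding small 3-parameter vehicle models.
edge1 = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 1.0, 0.0])]
edge2 = [np.array([0.0, 2.0, 1.0])]
global_model = hierarchical_round([edge1, edge2], [[100, 300], [200]])
print(global_model)
```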

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition will provide the modeling community with further opportunities to advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
September 18, 2023

In 5G cellular networks, frequency range 2 (FR2) introduces higher frequencies that cause rapid signal degradation and challenge user mobility. In recent studies, a conditional handover procedure has been adopted as an enhancement to baseline handover to improve mobility robustness. In this article, the mobility performance of conditional handover is analyzed for a 5G mm-wave network in FR2 that employs beamforming. In addition, a resource-efficient random access procedure is proposed that increases the probability of contention-free random access during a handover. Moreover, a simple yet effective decision tree-based supervised learning method is proposed to minimize the handover failures that are caused by the beam preparation phase of the random access procedure. Results show that a tradeoff exists between contention-free random access and handover failures, and that the optimum operating point of random access is achievable with the proposed learning algorithm for conditional handover. A mobility performance comparison of conditional handover with baseline handover is also carried out: while baseline handover causes fewer handover failures than conditional handover, the total number of mobility failures in the latter is lower due to the decoupling of the handover preparation and execution phases.
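
As an illustration of the kind of decision tree-based supervised learning described above, the sketch below trains a scikit-learn DecisionTreeClassifier on synthetic handover records. The features (serving/target beam RSRP, UE speed) and the synthetic failure labels are assumptions for demonstration only and do not reflect the article's actual measurement setup.

```python
# Hedged sketch: decision tree predicting handover failure from simple radio features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed features: serving-beam RSRP (dBm), target-beam RSRP (dBm), UE speed (km/h).
X = np.column_stack([
    rng.uniform(-120, -70, n),
    rng.uniform(-120, -70, n),
    rng.uniform(3, 120, n),
])
# Synthetic label: failures more likely when the target beam is weak and the UE is fast.
y = ((X[:, 1] < -100) & (X[:, 2] > 60)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```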

Quantum computing (QC) introduces a novel mode of computation with the promise of greater computational power that remains to be exploited, presenting exciting opportunities for high performance computing (HPC) applications. However, recent advancements in the field have made clear that QC does not supplant conventional HPC, but can instead be incorporated into current heterogeneous HPC infrastructures as an additional accelerator, thereby enabling the optimal utilization of both paradigms. The desire for such integration significantly affects the development of software for quantum computers, which in turn influences the necessary software infrastructure. To date, previous review papers have investigated various quantum programming tools (QPTs), such as languages, libraries, and frameworks, with respect to their ability to program, compile, and execute quantum circuits. However, the effort of integrating them with classical HPC frameworks or systems has not been addressed. This study aims to characterize existing QPTs from an HPC perspective, investigating whether they can be efficiently integrated with classical computing models and determining where work is still required. This work structures a set of criteria into an analysis blueprint that enables HPC scientists to assess whether a QPT is suitable for the quantum-accelerated classical application at hand.
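
Purely as an illustration of turning such criteria into an "analysis blueprint", the sketch below encodes a hypothetical weighted checklist for scoring a QPT's suitability for HPC integration. The criterion names and weights are invented examples, not the study's actual blueprint.

```python
# Hypothetical weighted-criteria scoring of a quantum programming tool (QPT).
from dataclasses import dataclass, field

@dataclass
class QPTAssessment:
    name: str
    criteria: dict = field(default_factory=dict)  # criterion -> score in [0, 1]

    def score(self, weights):
        """Weighted suitability score over the criteria listed in `weights`."""
        total = sum(weights.values())
        return sum(weights[c] * self.criteria.get(c, 0.0) for c in weights) / total

# Illustrative criteria and weights (assumptions, not the paper's list).
weights = {"classical_interop": 3, "compilation_control": 2,
           "scheduler_support": 2, "performance_tooling": 1}
tool = QPTAssessment("ExampleQPT", {"classical_interop": 0.8, "compilation_control": 0.6,
                                    "scheduler_support": 0.3, "performance_tooling": 0.5})
print(f"suitability: {tool.score(weights):.2f}")
```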

Soft robots, with their inherent flexibility and infinite degrees of freedom (DoF), offer promising advancements in human-machine interfaces. In particular, pneumatic artificial muscles (PAMs) and pneumatic bending actuators have been fundamental in driving this evolution, owing to how closely they mimic natural muscle movements. However, the versatility of these actuators comes with the intricate challenge of hysteresis, a nonlinear phenomenon that hampers precise positioning and is especially pronounced in pneumatic actuators due to gas compressibility. In this study, we introduce a novel 2-DoF adaptive control scheme for precise bending tracking using a pneumatic continuum actuator. Notably, our control method integrates adaptability into both the feedback and the feedforward elements, enhancing trajectory tracking in the presence of strong nonlinear effects. Comparative analysis with existing approaches underscores the superior tracking accuracy of our proposed strategy. This work points toward simple yet effective control designs for soft actuators with hysteresis.
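
A minimal sketch of the 2-DoF idea, feedback plus an adaptive feedforward term, is shown below on a toy first-order actuator with a direction-dependent (hysteresis-like) offset. The plant model, regressor, gains, and normalized-gradient adaptation law are illustrative assumptions rather than the controller used in the study.

```python
# Toy 2-DoF control loop: adaptive feedforward + proportional feedback (illustrative only).
import numpy as np

dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
ref = 30.0 * np.sin(0.5 * np.pi * t)           # desired bending angle (deg)

theta = 0.0                                    # actuator bending angle (toy plant state)
w = np.zeros(2)                                # adaptive feedforward weights
kp, gamma = 2.0, 0.05                          # feedback gain, adaptation rate (assumed)

for k in range(len(t) - 1):
    direction = np.sign(ref[k + 1] - ref[k])   # crude hysteresis-direction term
    phi = np.array([ref[k], direction])        # feedforward regressor
    u_ff = w @ phi                             # adaptive feedforward action
    e = ref[k] - theta                         # tracking error
    u = u_ff + kp * e                          # 2-DoF law: feedforward + feedback
    # Toy plant: first-order response with a direction-dependent (hysteretic) offset.
    theta += dt * (-theta + 0.8 * u + 2.0 * direction)
    w += gamma * e * phi / (1.0 + phi @ phi)   # normalized gradient adaptation

print(f"final tracking error: {abs(ref[-1] - theta):.2f} deg")
```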

The application of Physics-Informed Neural Networks (PINNs) is investigated for the first time for solving the one-dimensional countercurrent spontaneous imbibition (COUCSI) problem at both early and late time (i.e., before and after the imbibition front meets the no-flow boundary). We introduce the use of change of variables as a technique for improving the performance of PINNs. We formulate the COUCSI problem in three equivalent forms by changing the independent variables. The first describes saturation as a function of normalized position X and time T; the second as a function of X and Y = T^0.5; and the third as a sole function of Z = X/T^0.5 (valid only at early time). The PINN model was generated using a feed-forward neural network and trained by minimizing a weighted loss function that includes the physics-informed loss term and terms corresponding to the initial and boundary conditions. All three formulations closely approximate the correct solutions, with water saturation mean absolute errors of around 0.019 and 0.009 for the XT and XY formulations and 0.012 for the Z formulation at early time. The Z formulation perfectly captures the self-similarity of the system at early time, which is captured less well by the XT and XY formulations. The total variation of saturation is preserved in the Z formulation, and is better preserved with the XY formulation than with the XT formulation. Redefining the problem in terms of the physics-inspired variables reduces the nonlinearity of the problem and allows higher solution accuracy, a higher degree of loss-landscape convexity, fewer required collocation points, smaller network sizes, and more computationally efficient solutions.
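
The weighted loss structure described above can be sketched as follows, using a generic nonlinear diffusion residual S_T = d/dX( D(S) dS/dX ) in the XT variables as a stand-in for the COUCSI equation. The network size, diffusion function, loss weights, sampling, and initial/boundary conditions are assumptions for illustration only, not the paper's exact setup.

```python
# Hedged PINN sketch: physics residual + initial/boundary terms in a weighted loss.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def D(S):                                       # assumed capillary-diffusion proxy
    return 0.1 + S ** 2

def pinn_loss(net, n=256, w_pde=1.0, w_ic=10.0, w_bc=10.0):
    # Collocation points in (X, T) for the PDE residual.
    xt = torch.rand(n, 2, requires_grad=True)
    S = net(xt)
    grads = torch.autograd.grad(S.sum(), xt, create_graph=True)[0]
    S_X, S_T = grads[:, 0:1], grads[:, 1:2]
    flux = D(S) * S_X
    flux_X = torch.autograd.grad(flux.sum(), xt, create_graph=True)[0][:, 0:1]
    r_pde = S_T - flux_X
    # Assumed conditions: S(X, 0) = 0 and inlet boundary S(0, T) = 1.
    x_ic = torch.cat([torch.rand(n, 1), torch.zeros(n, 1)], dim=1)
    x_bc = torch.cat([torch.zeros(n, 1), torch.rand(n, 1)], dim=1)
    return (w_pde * (r_pde ** 2).mean()
            + w_ic * (net(x_ic) ** 2).mean()
            + w_bc * ((net(x_bc) - 1.0) ** 2).mean())

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                            # short demo training loop
    opt.zero_grad()
    loss = pinn_loss(net)
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```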

More research attention has recently been given to end-to-end autonomous driving, where the entire driving pipeline is replaced with a single neural network, because of its simpler structure and faster inference time. Although this appealing approach largely reduces the number of components in the driving pipeline, its simplicity also leads to interpretability problems and safety issues (arXiv:2003.06404). The trained policy is not always compliant with traffic rules, and it is also hard to discover the reason for misbehavior because of the lack of intermediate outputs. Meanwhile, sensors are critical to the security and feasibility of autonomous driving, as they perceive the surrounding environment under complex driving scenarios. In this paper, we propose P-CSG, a novel penalty-based imitation learning approach with cross-semantics-generation sensor fusion to increase the overall performance of end-to-end autonomous driving. We assess our model's performance on the Town 05 Long benchmark, achieving a driving score improvement of over 15%. Furthermore, we conduct robustness evaluations against adversarial attacks such as FGSM and Dot attacks, revealing a substantial increase in robustness compared to baseline models. More detailed information, such as code-based resources, ablation studies, and videos, can be found at //hk-zh.github.io/p-csg-plus.
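
To illustrate the penalty-based imitation learning idea, the sketch below combines a behavior-cloning loss with a differentiable traffic-rule penalty (throttle applied while a red light is detected). The penalty form, action layout, and weighting are hypothetical stand-ins, not P-CSG's actual formulation.

```python
# Hypothetical penalty-augmented imitation loss (behavior cloning + rule penalty).
import torch

def imitation_loss(pred_action, expert_action):
    """L1 behavior-cloning term between predicted and expert actions."""
    return torch.nn.functional.l1_loss(pred_action, expert_action)

def red_light_penalty(throttle, red_light_prob):
    """Penalize positive throttle when a red light is (softly) detected."""
    return (red_light_prob * torch.relu(throttle)).mean()

def total_loss(pred_action, expert_action, red_light_prob, lam=0.5):
    throttle = pred_action[:, 0]                # assumed action layout: [throttle, steer]
    return imitation_loss(pred_action, expert_action) + lam * red_light_penalty(throttle, red_light_prob)

# Toy usage with a batch of 4 actions.
pred = torch.tensor([[0.8, 0.0], [0.1, 0.2], [0.9, -0.1], [0.0, 0.0]], requires_grad=True)
expert = torch.tensor([[0.0, 0.0], [0.1, 0.2], [0.2, -0.1], [0.0, 0.0]])
red = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(total_loss(pred, expert, red))
```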

Most existing work on one-stage referring expression comprehension (REC) focuses on multi-modal fusion and reasoning, while the influence of other factors in this task lacks in-depth exploration. To fill this gap, we conduct an empirical study in this paper. Concretely, we first build a very simple REC network called SimREC and ablate 42 candidate designs/settings, covering the entire process of one-stage REC from network design to model training. Afterwards, we conduct over 100 experimental trials on three benchmark REC datasets. The extensive experimental results not only show the key factors that affect REC performance in addition to multi-modal fusion, e.g., multi-scale features and data augmentation, but also yield some findings that run counter to conventional understanding. For example, as a vision and language (V&L) task, REC is less impacted by language priors than commonly assumed. In addition, with a proper combination of these findings, we can improve the performance of SimREC by a large margin, e.g., +27.12% on RefCOCO+, which outperforms all existing REC methods. The most encouraging finding is that, with much less training overhead and far fewer parameters, SimREC can still achieve better performance than a set of large-scale pre-trained models, e.g., UNITER and VILLA, highlighting the special role of REC in existing V&L research.

The Internet of Medical Things (IoMT) is a platform that combines Internet of Things (IoT) technology with medical applications, enabling precision medicine, intelligent healthcare, and telemedicine in the era of digitalization and intelligence. However, the IoMT faces various challenges, including sustainable power supply, the adaptability of sensors to the human body, and sensor intelligence. In this study, we designed a robust and intelligent IoMT system through the synergistic integration of flexible wearable triboelectric sensors and deep learning-assisted data analytics. We embedded four triboelectric sensors into a wristband to detect and analyze limb movements in patients suffering from Parkinson's Disease (PD). By further integrating deep learning-assisted data analytics, we realized an intelligent healthcare monitoring system for the surveillance of, and interaction with, PD patients, which includes location/trajectory tracking, heart monitoring, and identity recognition. This approach enabled us to accurately capture and scrutinize the subtle movements and fine motor function of PD patients, providing insightful feedback and a comprehensive assessment of the patients' conditions. The monitoring system is cost-effective, easily fabricated, highly sensitive, and intelligent, underscoring the immense potential of human-body sensing technology in a Health 4.0 society.
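
As a rough illustration of the deep learning-assisted analytics, the sketch below defines a small 1D CNN that classifies fixed-length windows of the four triboelectric channels into activity/identity classes. The window length, architecture, and class count are assumptions for illustration, not the study's actual network.

```python
# Hedged sketch: 1D CNN classifying windows of four-channel triboelectric signals.
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    def __init__(self, n_channels=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, channels, window_length)
        return self.head(self.features(x).squeeze(-1))

model = SensorCNN()
dummy = torch.randn(8, 4, 256)                   # a batch of 8 signal windows
print(model(dummy).shape)                        # -> torch.Size([8, 5])
```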

Software Testing is a well-established area in software engineering, encompassing various techniques and methodologies to ensure the quality and reliability of software systems. However, with the advent of generative artificial intelligence (GenAI) systems, new challenges arise in the testing domain. These systems, capable of generating novel and creative outputs, introduce unique complexities that require novel testing approaches. In this paper, I aim to explore the challenges posed by generative AI systems and discuss potential opportunities for future research in the field of testing. I will touch on the specific characteristics of GenAI systems that make traditional testing techniques inadequate or insufficient. By addressing these challenges and pursuing further research, we can enhance our understanding of how to safeguard GenAI and pave the way for improved quality assurance in this rapidly evolving domain.

The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information, and it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
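
For readers unfamiliar with the distributionally robust optimization framework referenced above, the sketch below shows a generic group-DRO reweighting step that up-weights the worst-performing groups (e.g., domains). It illustrates the general framework only and is not the thesis's specific parametric reformulation.

```python
# Generic group-DRO reweighting step (illustrative of the DRO framework).
import torch

def group_dro_loss(per_example_loss, group_ids, n_groups, q, eta=0.1):
    """Update group weights q multiplicatively and return the reweighted loss."""
    group_loss = torch.zeros(n_groups)
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():
            group_loss[g] = per_example_loss[mask].mean()
    q = q * torch.exp(eta * group_loss.detach())   # exponentiated-gradient ascent on weights
    q = q / q.sum()
    return (q * group_loss).sum(), q

# Toy usage: 6 examples from 2 domains (groups).
losses = torch.tensor([0.2, 0.3, 1.5, 1.2, 0.1, 1.4])
groups = torch.tensor([0, 0, 1, 1, 0, 1])
q = torch.ones(2) / 2
robust_loss, q = group_dro_loss(losses, groups, 2, q)
print(robust_loss, q)
```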

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
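
As a concrete example of the first category, parameter quantization, the sketch below performs symmetric post-training quantization of a weight tensor to int8 and measures the reconstruction error. This shows the basic idea only; practical flows typically add per-channel scales, calibration data, or quantization-aware training.

```python
# Minimal symmetric int8 post-training quantization of a weight tensor.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```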
