
Practical applications of artificial intelligence increasingly have to deal with the streaming nature of real data which, over time, is subject to phenomena such as periodicity and more or less chaotic degeneration, resulting directly in concept drift. Modern concept drift detectors almost always assume immediate access to labels, an assumption that is unrealistic given their cost, limited availability, and possible delay. This work proposes an unsupervised Parallel Activations Drift Detector that utilizes the outputs of an untrained neural network, presenting its key design elements, intuitions about its processing properties, and a set of computational experiments demonstrating its competitiveness with state-of-the-art methods.
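
The abstract leaves the detector's internals unspecified, but the core idea of watching the outputs of a fixed, untrained network for distributional change can be illustrated with a minimal sketch. The random single-layer extractor, the two-window comparison, and the per-unit Kolmogorov-Smirnov test with Bonferroni correction below are illustrative assumptions rather than the paper's actual design.

```python
# A minimal sketch, assuming a single random tanh layer as the "untrained
# network" and a per-unit two-sample Kolmogorov-Smirnov test between a
# reference window and the current window; not the paper's actual design.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

class RandomActivationExtractor:
    """One untrained layer with a nonlinearity; the weights are never updated."""
    def __init__(self, n_features, n_units=32):
        self.W = rng.normal(size=(n_features, n_units))
        self.b = rng.normal(size=n_units)

    def __call__(self, X):
        return np.tanh(X @ self.W + self.b)   # (n_samples, n_units) activations

def drift_detected(reference_X, current_X, extractor, alpha=0.01):
    """Flag drift if any activation unit changed distribution significantly."""
    ref_act, cur_act = extractor(reference_X), extractor(current_X)
    p_values = [ks_2samp(ref_act[:, j], cur_act[:, j]).pvalue
                for j in range(ref_act.shape[1])]
    # Bonferroni correction over the activation units (an illustrative choice).
    return min(p_values) < alpha / len(p_values)

# Toy usage: the second window is shifted, so drift should be reported.
extractor = RandomActivationExtractor(n_features=5)
reference = rng.normal(size=(500, 5))
current = rng.normal(loc=1.5, size=(500, 5))
print(drift_detected(reference, current, extractor))
```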

Related content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents experts in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, engineering and applications. Official website address:

In most applications, robots need to adapt to new environments and be multi-functional without forgetting previous information. This requirement gains further importance in real-world scenarios where robots operate alongside humans. In these complex environments, human actions inevitably lead to changes, requiring robots to adapt accordingly. To effectively address these dynamics, the concept of continual learning proves essential. It not only enables learning models to integrate new knowledge while preserving existing information but also facilitates the acquisition of insights from diverse contexts. This aspect is particularly relevant to the issue of context switching, where robots must navigate and adapt to changing situational dynamics. We introduce a novel approach to effectively tackle the problem of context drift by designing a Streaming Graph Neural Network that incorporates both regularization and rehearsal techniques. Our Continual_GTM model retains previous knowledge from different contexts and is more effective than traditional fine-tuning approaches. We evaluated the efficacy of Continual_GTM in predicting human routines within household environments, leveraging spatio-temporal object dynamics across diverse scenarios.
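
As a rough illustration of the rehearsal-plus-regularization recipe mentioned above (not of the Continual_GTM architecture itself), the following sketch mixes replayed examples from earlier contexts into each batch and penalizes parameter drift away from previously learned weights; the model, memory size, and regularization strength are assumptions made for the example.

```python
# A minimal sketch of rehearsal plus regularization, assuming a generic PyTorch
# classifier; the model, memory size, and lambda_reg are illustrative and do
# not describe the Continual_GTM architecture.
import random
import torch
import torch.nn.functional as F

def train_on_context(model, optimizer, context_loader, replay_buffer,
                     prev_params=None, lambda_reg=0.1, replay_batch=32,
                     per_context_memory=200):
    for x, y in context_loader:
        # Rehearsal: mix stored examples from earlier contexts into the batch.
        if replay_buffer:
            rx, ry = zip(*random.sample(replay_buffer,
                                        min(replay_batch, len(replay_buffer))))
            x = torch.cat([x, torch.stack(rx)])
            y = torch.cat([y, torch.stack(ry)])

        loss = F.cross_entropy(model(x), y)

        # Regularization: penalize drift away from previously learned weights.
        if prev_params is not None:
            loss = loss + lambda_reg * sum(
                (p - p_old).pow(2).sum()
                for p, p_old in zip(model.parameters(), prev_params))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Snapshot the weights and keep a bounded sample of this context.
    stored = 0
    for x, y in context_loader:
        for xi, yi in zip(x, y):
            if stored >= per_context_memory:
                break
            replay_buffer.append((xi.detach(), yi.detach()))
            stored += 1
    return [p.detach().clone() for p in model.parameters()]
```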

We present a computational formulation for the approximate version of several variational inequality problems, investigating their computational complexity and establishing PPAD-completeness. Examining applications in computational game theory, we specifically focus on two key concepts: resilient Nash equilibrium, and multi-leader-follower games -- domains traditionally known for the absence of general solutions. In the presence of standard assumptions and relaxation techniques, we formulate problem versions for such games that are expressible in terms of variational inequalities, ultimately leading to proofs of PPAD-completeness.
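
For reference, the standard (Stampacchia) variational inequality and its natural epsilon-approximate relaxation can be written as follows; the paper's precise computational formulation may differ.

```latex
% Standard (Stampacchia) variational inequality and an epsilon-approximate
% relaxation; the paper's exact computational formulation may differ.
\[
  \mathrm{VI}(F, K):\quad \text{find } x^{*} \in K \ \text{such that}\quad
  \langle F(x^{*}),\, x - x^{*} \rangle \;\ge\; 0 \quad \forall\, x \in K,
\]
\[
  \varepsilon\text{-}\mathrm{VI}(F, K):\quad \text{find } x^{*} \in K \ \text{such that}\quad
  \langle F(x^{*}),\, x - x^{*} \rangle \;\ge\; -\varepsilon \quad \forall\, x \in K,
\]
% where K is a nonempty compact convex subset of R^n and F : K -> R^n is continuous.
```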

Smart metering networks are increasingly susceptible to cyber threats, among which false data injection (FDI) stands out as a critical attack. Data-driven machine learning (ML) methods have shown immense benefits in detecting FDI attacks through their data learning and prediction abilities. Prior work has mostly focused on centralized learning, deploying FDI attack detection models at the control center, which requires data collection from local utilities like meters and transformers. However, this data sharing may raise privacy concerns due to the potential disclosure of household information such as energy usage patterns. This paper proposes a new privacy-preserving FDI attack detection scheme by developing an efficient federated learning (FL) framework in the smart meter network with edge computing. Distributed edge servers located at the network edge run an ML-based FDI attack detection model and share the trained model with the grid operator, aiming to build a strong FDI attack detection model without data sharing. Simulation results demonstrate the efficiency of our proposed FL method over the conventional method without collaboration.
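
A minimal sketch of the federated-averaging pattern described above follows: each edge server fits a local detector on its own meter data, and only the resulting model weights, never raw data, are aggregated by the grid operator. The logistic-regression local model and the function names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of one federated round, assuming a logistic-regression-style
# local detector; function names and the local model are illustrative, not the
# paper's implementation.
import numpy as np

def local_update(weights, local_X, local_y, lr=0.01, epochs=5):
    """One edge server: gradient steps on its own meter data only."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-local_X @ w))        # predicted attack probability
        w -= lr * local_X.T @ (p - local_y) / len(local_y)
    return w

def federated_round(global_w, edge_datasets):
    """Grid operator: aggregate locally trained weights (FedAvg), never raw data."""
    local_ws, sizes = [], []
    for X, y in edge_datasets:
        local_ws.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(local_ws, axis=0, weights=sizes / sizes.sum())
```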

Content moderation typically combines the efforts of human moderators and machine learning models. However, these systems often rely on data where significant disagreement occurs during moderation, reflecting the subjective nature of toxicity perception. Rather than dismissing this disagreement as noise, we interpret it as a valuable signal that highlights the inherent ambiguity of the content, an insight missed when only the majority label is considered. In this work, we introduce a novel content moderation framework that emphasizes the importance of capturing annotation disagreement. Our approach uses multitask learning, where toxicity classification serves as the primary task and annotation disagreement is addressed as an auxiliary task. Additionally, we leverage uncertainty estimation techniques, specifically Conformal Prediction, to account for both the ambiguity in comment annotations and the model's inherent uncertainty in predicting toxicity and disagreement. The framework also allows moderators to adjust thresholds for annotation disagreement, offering flexibility in determining when ambiguity should trigger a review. We demonstrate that our joint approach enhances model performance, calibration, and uncertainty estimation, while offering greater parameter efficiency and improving the review process in comparison to single-task methods.
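
As one concrete (and assumed) instantiation of the uncertainty component, the sketch below applies split conformal prediction to the toxicity classifier's probabilities; the disagreement head of the multitask model is omitted, and the score function and coverage level are illustrative choices.

```python
# A minimal sketch of split conformal prediction applied to a toxicity
# classifier's probabilities; the score function and coverage level are
# assumed, and the disagreement head of the multitask model is omitted.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate on held-out data: nonconformity = 1 - probability of true class."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, len(scores)) - 1]

def prediction_sets(test_probs, qhat):
    """All classes whose nonconformity score stays below the calibrated threshold."""
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Ambiguous comments tend to receive larger prediction sets (e.g. both "toxic"
# and "non-toxic"), which a moderator-chosen threshold could route to review.
```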

With the widespread adoption of edge computing technologies and the increasing prevalence of deep learning models in these environments, the security risks and privacy threats to models and data have grown more acute. Attackers can exploit various techniques to illegally obtain models or misuse data, leading to serious issues such as intellectual property infringement and privacy breaches. Existing model access control technologies primarily rely on traditional encryption and authentication methods; however, these approaches exhibit significant limitations in terms of flexibility and adaptability in dynamic environments. Although there have been advancements in model watermarking techniques for marking model ownership, they remain limited in their ability to proactively protect intellectual property and prevent unauthorized access. To address these challenges, we propose a novel model access control method tailored for edge computing environments. This method leverages image style as a licensing mechanism, embedding style recognition into the model's operational framework to enable intrinsic access control. Consequently, models deployed on edge platforms are designed to infer correctly only on license data with a specific style, rendering them ineffective on any other data. By restricting the input data to the edge model, this approach not only prevents attackers from gaining unauthorized access to the model but also enhances the privacy of data on terminal devices. We conducted extensive experiments on benchmark datasets, including MNIST, CIFAR-10, and FACESCRUB, and the results demonstrate that our method effectively prevents unauthorized access to the model while maintaining accuracy. Additionally, the model shows strong resistance against attacks such as forged licenses and fine-tuning. These results underscore the method's usability, security, and robustness.
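
The general idea of using a fixed, secret style as a license can be illustrated as follows; the pixel-level stylization and the uniform "reject" target for unlicensed inputs are assumptions made for this sketch and do not reflect the paper's embedding mechanism.

```python
# A minimal sketch of the "style as license" idea, assuming 28x28 grayscale
# inputs, a crude pixel-level stylization in place of neural style transfer,
# and a uniform "reject" target for unlicensed data; all of this is assumed
# for illustration and is not the paper's embedding mechanism.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
STYLE_BIAS = rng.normal(scale=0.2, size=(28, 28))   # secret "style" shared with licensees

def apply_license_style(image):
    """Smooth the image and add the secret bias (stand-in for style transfer)."""
    return gaussian_filter(image, sigma=1.0) + STYLE_BIAS

def licensed_training_batch(images, labels, num_classes):
    """Styled images keep their true labels; raw images get a uniform target so
    the trained model stays uninformative on unlicensed inputs."""
    styled = np.stack([apply_license_style(im) for im in images])
    one_hot = np.eye(num_classes)[labels]
    uniform = np.full((len(images), num_classes), 1.0 / num_classes)
    X = np.concatenate([styled, images])
    Y = np.concatenate([one_hot, uniform])
    return X, Y
```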

The high-performance computing (HPC) community has recently seen a substantial diversification of hardware platforms and their associated programming models. From traditional multicore processors to highly specialized accelerators, vendors and tool developers back up the relentless progress of those architectures. In the context of scientific programming, it is fundamental to consider performance portability frameworks, i.e., software tools that allow programmers to write code once and run it on different computer architectures without sacrificing performance. We report here on the benefits and challenges of performance portability using a field-line tracing simulation and a particle-in-cell code, two relevant applications in computational plasma physics with applications to magnetically-confined nuclear-fusion energy research. For these applications we report performance results obtained on four HPC platforms with server-class CPUs from Intel (Xeon) and AMD (EPYC), and high-end GPUs from Nvidia and AMD, including the latest Nvidia H100 GPU and the novel AMD Instinct MI300A APU. Our results show that both Kokkos and OpenMP are powerful tools to achieve performance portability and decent "out-of-the-box" performance, even for the very latest hardware platforms. For our applications, Kokkos provided performance portability to the broadest range of hardware architectures from different vendors.

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

As artificial intelligence (AI) models continue to scale up, they are becoming more capable and integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly-evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI) and recommend an MLI for various types of agents, to aid their safe deployment in real-world settings.

Recently, graph neural networks have been gaining significant attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
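
For context, the inductive biases behind Hamiltonian and Lagrangian (graph) neural networks amount to parameterizing the Hamiltonian or Lagrangian with a network and integrating the corresponding equations of motion; the notation below is generic rather than specific to any of the thirteen evaluated models.

```latex
% Generic inductive biases behind Hamiltonian and Lagrangian neural networks:
% a network parameterizes H_theta or L_theta and trajectories are obtained by
% integrating the corresponding equations of motion.
\[
  \dot{q} = \frac{\partial H_{\theta}(q, p)}{\partial p},
  \qquad
  \dot{p} = -\frac{\partial H_{\theta}(q, p)}{\partial q}
  \qquad \text{(Hamiltonian)},
\]
\[
  \frac{d}{dt}\,\frac{\partial L_{\theta}(q, \dot{q})}{\partial \dot{q}}
  \;-\; \frac{\partial L_{\theta}(q, \dot{q})}{\partial q} \;=\; 0
  \qquad \text{(Euler--Lagrange)}.
\]
```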

Existing recommender systems extract user preferences by learning correlations in data, such as behavioral correlation in collaborative filtering, and feature-feature or feature-behavior correlation in click-through rate prediction. However, the real world is driven by causality rather than correlation, and correlation does not imply causation. For example, a recommender system may recommend a battery charger to a user after they buy a phone; the phone purchase is the cause of the charger purchase, and such a causal relation cannot be reversed. Recently, to address this, researchers in recommender systems have begun to utilize causal inference to extract causality, enhancing the recommender system. In this survey, we comprehensively review the literature on causal inference-based recommendation. First, we present the fundamental concepts of both recommendation and causal inference as the basis for later content. We then raise the typical issues that non-causal recommendation faces. Afterward, we comprehensively review the existing work on causal inference-based recommendation, based on a taxonomy of what kind of problem causal inference addresses. Last, we discuss the open problems in this important research area, along with interesting future work.
