
Physics-informed neural networks (PINNs) have emerged as a versatile and widely applicable concept across various science and engineering domains over the past decade. This article offers a comprehensive overview of the fundamentals of PINNs, tracing their evolution, modifications, and various variants. It explores the impact of different parameters on PINNs and the optimization algorithms involved. The review also delves into the theoretical advancements related to the convergence, consistency, and stability of numerical solutions using PINNs, while highlighting the current state of the art. Given their ability to address equations involving complex physics, the article discusses various applications of PINNs, with a particular focus on their utility in computational fluid dynamics problems. Additionally, it identifies current gaps in the research and outlines future directions for the continued development of PINNs.
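To make the PINN idea above concrete, here is a minimal sketch of a physics-informed loss for the ODE u'(x) = -u(x) with u(0) = 1 (exact solution exp(-x)). The polynomial surrogate, the finite-difference derivative, and all names are illustrative stand-ins for the neural network and automatic differentiation a real PINN would use.

```python
# Sketch of the PINN loss: a physics residual at interior collocation
# points plus a boundary-condition penalty. The "network" here is a tiny
# polynomial surrogate; real PINNs use a neural net and autodiff instead.

def u(theta, x):
    # polynomial surrogate playing the role of the neural network
    return theta[0] + theta[1] * x + theta[2] * x ** 2

def du_dx(theta, x, h=1e-5):
    # finite-difference derivative standing in for automatic differentiation
    return (u(theta, x + h) - u(theta, x - h)) / (2 * h)

def pinn_loss(theta, collocation_points):
    # physics residual: how badly u' + u = 0 is violated at interior points
    residual = sum((du_dx(theta, x) + u(theta, x)) ** 2
                   for x in collocation_points) / len(collocation_points)
    # boundary term: u(0) should equal 1
    boundary = (u(theta, 0.0) - 1.0) ** 2
    return residual + boundary

xs = [0.1 * i for i in range(1, 10)]             # collocation points in (0, 1)
good = (1.0, -1.0, 0.5)                          # Taylor coefficients of exp(-x)
bad = (0.0, 2.0, 3.0)                            # arbitrary coefficients
print(pinn_loss(good, xs) < pinn_loss(bad, xs))  # True
```

Training a PINN amounts to minimizing such a composite loss over the surrogate's parameters; here we only evaluate it to show that physically consistent coefficients score lower.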

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematics and computational analysis, or engineering and applications. Official website:

Port-Hamiltonian neural networks (pHNNs) are emerging as a powerful modeling tool that integrates physical laws with deep learning techniques. While most research has focused on modeling the entire dynamics of interconnected systems, the potential for identifying and modeling individual subsystems while they operate as part of a larger system has been overlooked. This study addresses this gap by introducing a novel method for using pHNNs to identify such subsystems based solely on input-output measurements. By exploiting the inherent compositional property of port-Hamiltonian systems, we develop an algorithm that learns the dynamics of individual subsystems without requiring direct access to their internal states. Moreover, by choosing an output error (OE) model structure, we handle measurement noise effectively. The effectiveness of the proposed approach is demonstrated through tests on interconnected systems, including multi-physics scenarios, showing its potential for identifying subsystem dynamics and facilitating their integration into new interconnected models.
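For readers unfamiliar with the port-Hamiltonian form the abstract relies on, here is a hedged sketch of the dynamics dx/dt = (J - R) ∇H(x) + G u with output y = Gᵀ ∇H(x), for a mass-spring-damper. The paper's method learns H with a neural network from input-output data; here H and all parameter values are fixed by hand purely for illustration.

```python
# Port-Hamiltonian simulation sketch: J is skew-symmetric (lossless
# interconnection), R is positive semi-definite (dissipation), and with
# zero input the stored energy H can only decrease.

m, k, c = 1.0, 4.0, 0.3           # mass, stiffness, damping (assumed values)

def grad_H(x):
    q, p = x
    return [k * q, p / m]         # gradient of H(q, p) = p^2/(2m) + k*q^2/2

J = [[0.0, 1.0], [-1.0, 0.0]]     # skew-symmetric interconnection matrix
R = [[0.0, 0.0], [0.0, c]]        # positive semi-definite dissipation matrix
G = [0.0, 1.0]                    # input port: an external force on momentum

def step(x, u, dt=1e-3):
    # explicit Euler step of dx/dt = (J - R) grad_H(x) + G u
    g = grad_H(x)
    dx = [sum((J[i][j] - R[i][j]) * g[j] for j in range(2)) + G[i] * u
          for i in range(2)]
    return [x[i] + dt * dx[i] for i in range(2)]

def energy(x):
    q, p = x
    return p * p / (2 * m) + k * q * q / 2

x = [1.0, 0.0]                    # released from rest at q = 1
E0 = energy(x)
for _ in range(1000):
    x = step(x, 0.0)              # no input: energy can only dissipate
print(energy(x) < E0)             # True
```

The compositional property the paper exploits comes from the port structure: connecting the output of one such system to the input of another again yields a port-Hamiltonian system.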

Nowadays, neural networks are commonly used to solve various problems. Unfortunately, despite their effectiveness, they are often perceived as black boxes capable of providing answers without explaining their decisions, which raises numerous ethical and legal concerns. Fortunately, the field of explainability helps users understand these results. This aspect of machine learning allows users to grasp the decision-making process of a model and verify the relevance of its outcomes. In this article, we focus on the learning process carried out by a time-distributed convRNN, which performs anomaly detection from video data.

Large language models (LLMs) represent a groundbreaking advancement in the domain of natural language processing due to their impressive reasoning abilities. Recently, there has been considerable interest in increasing the context lengths for these models to enhance their applicability to complex tasks. However, at long context lengths and large batch sizes, the key-value (KV) cache, which stores the attention keys and values, emerges as the new bottleneck in memory usage during inference. To address this, we propose Eigen Attention, which performs the attention operation in a low-rank space, thereby reducing the KV cache memory overhead. Our proposed approach is orthogonal to existing KV cache compression techniques and can be used synergistically with them. Through extensive experiments over OPT, MPT, and Llama model families, we demonstrate that Eigen Attention results in up to 40% reduction in KV cache sizes and up to 60% reduction in attention operation latency with minimal drop in performance. Code is available at //github.com/UtkarshSaxena1/EigenAttn.
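The core trick described above can be illustrated with a toy example: store keys as r-dimensional coefficients in a basis U instead of full d-dimensional vectors, shrinking the KV cache by roughly r/d. In Eigen Attention the basis would come from an eigendecomposition of representative keys; here it is fixed by hand and the keys lie exactly in its span, so the low-rank attention scores match the full ones. All names are illustrative, not the paper's implementation.

```python
# Toy sketch of attention scores computed in a low-rank space. Because
# U is orthonormal and the keys lie in span(U), q K^T == (q U^T)(K U^T)^T,
# so the cache only needs the r-dim projections.

U = [[1.0, 0.0, 0.0, 0.0],        # orthonormal basis of the r=2 subspace
     [0.0, 1.0, 0.0, 0.0]]        # inside the d=4 key space

def project(v):
    # d-dim vector -> r-dim coefficients v U^T
    return [sum(v[j] * row[j] for j in range(4)) for row in U]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

keys = [[0.5, -1.0, 0.0, 0.0], [2.0, 0.3, 0.0, 0.0]]   # lie in span(U)
query = [1.0, 2.0, -3.0, 0.7]

full_scores = [dot(query, k) for k in keys]             # usual q K^T
cached = [project(k) for k in keys]                     # all the cache holds
low_scores = [dot(project(query), k) for k in cached]   # low-rank q K^T
print(all(abs(a - b) < 1e-12
          for a, b in zip(full_scores, low_scores)))    # True
```

In practice keys only lie approximately in a low-rank subspace, so the projection introduces a small approximation error, which is the accuracy/memory trade-off the paper quantifies.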

Human-computer interaction (HCI) has been a widely researched area for many years, with continuous advancements in technology leading to the development of new techniques that change the way we interact with computers. With the recent advent of powerful computers, systems can recognize human actions and respond accordingly, revolutionizing this interaction. The purpose of this paper is to provide a comparative analysis of various algorithms used for recognizing user faces and gestures in the context of computer vision and HCI, exploring and evaluating their performance in terms of accuracy, robustness, and efficiency. The goal of this comprehensive analysis is to improve the design and development of interactive systems that are more intuitive, efficient, and user-friendly.

Socio-technical networks represent emerging cyber-physical infrastructures that are tightly interwoven with human networks. The coupling between human and technical networks presents significant challenges in managing, controlling, and securing these complex, interdependent systems. This paper investigates game-theoretic frameworks for the design and control of socio-technical networks, with a focus on critical applications such as misinformation management, infrastructure optimization, and resilience in socio-cyber-physical systems (SCPS). Core methodologies, including Stackelberg games, mechanism design, and dynamic game theory, are examined as powerful tools for modeling interactions in hierarchical, multi-agent environments. Key challenges addressed include mitigating human-driven vulnerabilities, managing large-scale system dynamics, and countering adversarial threats. By bridging individual agent behaviors with overarching system goals, this work illustrates how the integration of game theory and control theory can lead to robust, resilient, and adaptive socio-technical networks. This paper highlights the potential of these frameworks to dynamically align decentralized agent actions with system-wide objectives of stability, security, and efficiency.

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, quantization as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.

Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information, amplifying critical details to focus more on the essential aspects of data. Deep learning has employed attention to boost performance for many applications. Interestingly, the same attention design can suit processing different data modalities and can easily be incorporated into large networks. Furthermore, multiple complementary attention mechanisms can be incorporated in one network. Hence, attention techniques have become extremely attractive. However, the literature lacks a comprehensive survey specific to attention techniques to guide researchers in employing attention in their deep models. Note that transformers, besides being demanding in terms of training data and computational resources, cover only a single self-attention category out of the many available. We fill this gap and provide an in-depth survey of 50 attention techniques, categorizing them by their most prominent features. We initiate our discussion by introducing the fundamental concepts behind the success of the attention mechanism. Next, we furnish essentials such as the strengths and limitations of each attention category, describe their fundamental building blocks and basic formulations with primary usage, and review applications specifically for computer vision. We also discuss the challenges and open questions related to the attention mechanism in general. Finally, we recommend possible future research directions for deep attention.

Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language. In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we propose a novel counterfactual inference framework, which enables us to capture the language bias as the direct causal effect of questions on answers and to reduce it by subtracting the direct language effect from the total causal effect. Experiments demonstrate that our proposed counterfactual inference framework 1) is general to various VQA backbones and fusion strategies, and 2) achieves competitive performance on the language-bias-sensitive VQA-CP dataset while performing robustly on the balanced VQA v2 dataset.
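The subtraction rule described above can be shown with a tiny numeric sketch: take answer logits from the full model as the total effect, logits from a question-only branch as the direct language effect, and debias by subtracting the latter from the former. The logit values below are made up purely for illustration; a real VQA model would produce them.

```python
# Numeric sketch of counterfactual debiasing: total effect minus the
# direct (language-only) effect. A language prior toward "yellow" (e.g.
# the stereotypical answer to "what color is the banana?") is removed.

answers = ["green", "yellow", "red"]
total_effect = [2.0, 3.5, 1.0]      # logits from the full (image, question) model
language_only = [0.1, 3.0, 0.2]     # logits from the question-only branch

debiased = [te - nde for te, nde in zip(total_effect, language_only)]

biased_pick = answers[max(range(3), key=lambda i: total_effect[i])]
debiased_pick = answers[max(range(3), key=lambda i: debiased[i])]
print(biased_pick, debiased_pick)   # yellow green
```

The biased model follows the language prior ("yellow"); after subtracting the question-only logits, the prediction flips to the answer actually supported beyond the prior ("green").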

Since real-world objects and their interactions are often multi-modal and multi-typed, heterogeneous networks have been widely used as a more powerful, realistic, and generic superclass of traditional homogeneous networks (graphs). Meanwhile, representation learning (a.k.a. embedding) has recently been intensively studied and shown effective for various network mining and analytical tasks. In this work, we aim to provide a unified framework to deeply summarize and evaluate existing research on heterogeneous network embedding (HNE), which includes but goes beyond a normal survey. Since there is already a broad body of HNE algorithms, as the first contribution of this work, we provide a generic paradigm for the systematic categorization and analysis of the merits of various existing HNE algorithms. Moreover, existing HNE algorithms, though mostly claimed to be generic, are often evaluated on different datasets. Understandable given the application-driven nature of HNE, such indirect comparisons largely hinder the proper attribution of improved task performance to effective data preprocessing versus novel technical design, especially considering the various ways possible to construct a heterogeneous network from real-world application data. Therefore, as the second contribution, we create four benchmark datasets from different sources with various properties regarding scale, structure, attribute/label availability, etc., towards handy and fair evaluations of HNE algorithms. As the third contribution, we carefully refactor and amend the implementations of 13 popular HNE algorithms, create friendly interfaces for them, and provide all-around comparisons among them over multiple tasks and experimental settings.
