
Physics-informed neural networks (PINNs) provide a framework to build surrogate models for dynamical systems governed by differential equations. During the learning process, PINNs incorporate a physics-based regularization term within the loss function to enhance generalization performance. Since simulating dynamics controlled by partial differential equations (PDEs) can be computationally expensive, PINNs have gained popularity in learning parametric surrogates for fluid flow problems governed by the Navier-Stokes equations. In this work, we introduce RANS-PINN, a modified PINN framework, to predict flow fields (i.e., velocity and pressure) in the high Reynolds number turbulent flow regime. To account for the additional complexity introduced by turbulence, RANS-PINN employs a two-equation eddy viscosity model based on a Reynolds-averaged Navier-Stokes (RANS) formulation. Furthermore, we adopt a novel training approach that ensures effective initialization and balance among the various components of the loss function. The effectiveness of the RANS-PINN framework is then demonstrated using a parametric PINN.
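To make the loss structure concrete, here is a minimal, hedged sketch of a generic PINN-style composite loss in PyTorch; it is not the RANS-PINN implementation, and the weighting scheme, `model`, and `pde_residual` callback are assumptions for illustration only.

    import torch

    def pinn_loss(model, x_data, y_data, x_colloc, pde_residual, w_data=1.0, w_pde=1.0):
        """Composite PINN loss: supervised data misfit + physics residual penalty."""
        # Data-fit term on labelled samples (e.g., measured velocity/pressure).
        data_loss = torch.mean((model(x_data) - y_data) ** 2)
        # Physics term: residual of the governing equations (e.g., RANS plus
        # eddy viscosity transport equations) evaluated at collocation points.
        phys_loss = torch.mean(pde_residual(model, x_colloc) ** 2)
        return w_data * data_loss + w_pde * phys_loss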

Related content

A loss function, also called a distance function or metric function in AI, measures an abstract notion of distance: the discrepancy between the ground-truth data and the model's predictions. The loss function estimates the degree of inconsistency between a model's prediction f(x) and the true value Y; it is a non-negative real-valued function, usually written L(Y, f(x)), and the smaller the loss, the more robust the model. The loss function is the core component of the empirical risk function and an important part of the structural risk function.
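As a minimal illustration (not tied to any particular model), the mean squared error below is one such non-negative loss L(Y, f(x)):

    import numpy as np

    def mse_loss(y_true, y_pred):
        """Mean squared error: a non-negative loss L(Y, f(x))."""
        return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

    # A smaller value indicates predictions closer to the ground truth.
    print(mse_loss([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # ~0.02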

Reinforcement learning (RL) algorithms face significant challenges when dealing with long-horizon robot manipulation tasks in real-world environments due to sample inefficiency and safety issues. To overcome these challenges, we propose a novel framework, SEED, which leverages two approaches: reinforcement learning from human feedback (RLHF) and primitive skill-based reinforcement learning. Both approaches are particularly effective in addressing sparse reward issues and the complexities involved in long-horizon tasks. By combining them, SEED reduces the human effort required in RLHF and increases safety in training robot manipulation with RL in real-world settings. Additionally, parameterized skills provide a clear view of the agent's high-level intentions, allowing humans to evaluate skill choices before they are executed. This feature makes the training process even safer and more efficient. To evaluate the performance of SEED, we conducted extensive experiments on five manipulation tasks with varying levels of complexity. Our results show that SEED significantly outperforms state-of-the-art RL algorithms in sample efficiency and safety. In addition, SEED also exhibits a substantial reduction of human effort compared to other RLHF methods. Further details and video results can be found at //seediros23.github.io/.
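The pre-execution safety check described above can be pictured with a small hedged sketch; the skill names, parameters, and approval prompt below are hypothetical and are not SEED's actual interface.

    from dataclasses import dataclass

    @dataclass
    class SkillChoice:
        """A parameterized primitive skill proposed by the policy (illustrative only)."""
        name: str     # e.g., "pick", "place", "push"
        params: dict  # e.g., {"target": "red_block", "offset_cm": 2.0}

    def execute_with_human_gate(skill, execute):
        """Expose the agent's high-level intention and run it only if a human approves."""
        print(f"Proposed skill: {skill.name} with params {skill.params}")
        if input("Approve execution? [y/N] ").strip().lower() == "y":
            return execute(skill)
        print("Skill rejected; nothing executed.")
        return None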

The Traveling Salesman Problem (TSP) is a well-known problem in combinatorial optimization with applications in various domains. However, existing TSP solvers face challenges in producing high-quality solutions with low latency. To address this issue, we propose NAR4TSP, which produces TSP solutions in a Non-Autoregressive (NAR) manner using a specially designed Graph Neural Network (GNN), achieving faster inference speed. Moreover, NAR4TSP is trained using an enhanced Reinforcement Learning (RL) strategy, eliminating the dependency on costly labels used to train conventional supervised learning-based NAR models. To the best of our knowledge, NAR4TSP is the first TSP solver that successfully combines RL and NAR decoding. The experimental results on both synthetic and real-world TSP instances demonstrate that NAR4TSP outperforms four state-of-the-art models in terms of solution quality, inference latency, and generalization ability. Lastly, we present visualizations of NAR4TSP's decoding process and its overall path planning to showcase the feasibility of implementing NAR4TSP in an end-to-end manner and its effectiveness, respectively.
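To convey the flavor of non-autoregressive decoding (this is a generic sketch, not NAR4TSP's actual decoder), suppose the GNN emits an n-by-n edge-score matrix in a single forward pass; a tour can then be read off greedily:

    import numpy as np

    def greedy_tour_from_scores(scores):
        """Decode a tour from an (n x n) edge-score matrix produced in one pass.

        Starting at node 0, repeatedly move to the highest-scoring unvisited node.
        """
        n = scores.shape[0]
        tour, visited = [0], {0}
        while len(tour) < n:
            current = tour[-1]
            _, nxt = max((scores[current, j], j) for j in range(n) if j not in visited)
            tour.append(nxt)
            visited.add(nxt)
        return tour  # returning to the start node closes the cycle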

This article aims to classify diabetic retinopathy (DR) disease into five different classes using an ensemble approach based on two popular pre-trained convolutional neural networks: VGG16 and Inception V3. The proposed model leverages the strengths of the two individual networks to enhance the classification performance for diabetic retinopathy. The ensemble model architecture involves freezing a portion of the layers in each pre-trained model to utilize their learned representations effectively. Global average pooling layers are added to transform the output feature maps into fixed-length vectors. These vectors are then concatenated to form a consolidated representation of the input image. The ensemble model is trained using a dataset of diabetic retinopathy images (APTOS), divided into training and validation sets. During the training process, the model learns to classify the retinal images into the corresponding diabetic retinopathy classes. Experimental results on the test set demonstrate the efficacy of the proposed ensemble model for DR classification, achieving an accuracy of 96.4%.
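A rough sketch of such an ensemble in Keras is shown below; the input size, the number of frozen layers (here all of them), and the classification head are assumptions for illustration rather than the paper's exact configuration.

    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import VGG16, InceptionV3

    NUM_CLASSES, INPUT_SHAPE = 5, (299, 299, 3)   # five DR severity classes
    inputs = layers.Input(shape=INPUT_SHAPE)

    # Two ImageNet-pretrained backbones with their classification heads removed.
    vgg = VGG16(include_top=False, weights="imagenet", input_tensor=inputs)
    inc = InceptionV3(include_top=False, weights="imagenet", input_tensor=inputs)

    # Freeze the pretrained layers so their learned representations are reused.
    for layer in vgg.layers + inc.layers:
        layer.trainable = False

    # Global average pooling turns each feature map into a fixed-length vector;
    # concatenation forms a consolidated representation of the input image.
    merged = layers.Concatenate()([
        layers.GlobalAveragePooling2D()(vgg.output),
        layers.GlobalAveragePooling2D()(inc.output),
    ])
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])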

Graph-based tests are a class of non-parametric two-sample tests useful for analyzing high-dimensional data. The framework offers both flexibility and power in a wide range of testing scenarios. The test statistics are constructed from similarity graphs (such as $K$-nearest neighbor graphs) and, consequently, their performance is sensitive to the structure of the graph. When the graph has problematic structures, as is common for high-dimensional data, this can result in poor or unstable performance among existing graph-based tests. We address this challenge and develop graph-based test statistics that are robust to problematic structures of the graph. The limiting null distribution of the robust test statistics is derived. We illustrate the new tests via simulation studies and a real-world application on Chicago taxi trip data.
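As a simple point of reference (this is the classical edge-count idea, not the robust statistics developed in the paper), a graph-based two-sample statistic can count the edges of a pooled K-nearest-neighbor graph that connect observations from different samples; the scikit-learn based sketch below assumes two numeric arrays X and Y.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def between_sample_edge_count(X, Y, k=5):
        """Count K-NN graph edges joining one sample to the other.

        Unusually few between-sample edges is evidence that the samples differ.
        """
        Z = np.vstack([X, Y])
        labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
        graph = kneighbors_graph(Z, n_neighbors=k, mode="connectivity")
        rows, cols = graph.nonzero()
        return int(np.sum(labels[rows] != labels[cols]))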

In real-world applications, perfect labels are rarely available, making it challenging to develop robust machine learning algorithms that can handle noisy labels. Recent methods have focused on filtering noise based on the discrepancy between model predictions and given noisy labels, assuming that samples with small classification losses are clean. This work takes a different approach by leveraging the consistency between the learned model and the entire noisy dataset using the rich representational and topological information in the data. We introduce LaplaceConfidence, a method that obtains label confidence (i.e., clean probabilities) by utilizing the Laplacian energy. Specifically, it first constructs graphs based on the feature representations of all noisy samples and minimizes the Laplacian energy to produce a low-energy graph. Clean labels should fit well into the low-energy graph while noisy ones should not, allowing our method to determine the data's clean probabilities. Furthermore, LaplaceConfidence is embedded into a holistic method for robust training, where a co-training technique generates unbiased label confidence and a label refurbishment technique better utilizes it. We also explore dimensionality reduction techniques to accommodate our method on large-scale noisy datasets. Our experiments demonstrate that LaplaceConfidence outperforms state-of-the-art methods on benchmark datasets under both synthetic and real-world noise.
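For intuition only (this is not the LaplaceConfidence algorithm itself), the Laplacian energy of a label vector y on a similarity graph can be written as y^T L y with L = D - W, which is small when strongly connected samples carry similar labels; the sketch below assumes a precomputed affinity matrix W.

    import numpy as np

    def laplacian_energy(W, y):
        """Energy y^T L y of labels `y` on a graph with affinity matrix `W`.

        L = D - W is the unnormalized graph Laplacian; labels that fit the
        graph poorly (likely noisy) contribute a high energy.
        """
        L = np.diag(W.sum(axis=1)) - W
        return float(y @ L @ y)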

Network slicing (NS) is a key technology in 5G networks that enables the customization and efficient sharing of network resources to support the diverse requirements of next-generation services. This paper proposes a resource allocation scheme for NS based on the Fisher-market model and the Trading-post mechanism. The scheme aims to achieve efficient resource utilization while ensuring multi-level fairness under dynamic load conditions and protecting the service level agreements (SLAs) of slice tenants. In the proposed scheme, each service provider (SP) is allocated a budget representing its infrastructure share or purchasing power in the market. SPs acquire different resources by spending their budgets to offer services to different classes of users, classified based on their service needs and priorities. The scheme assumes that SPs employ the $\alpha$-fairness criterion to deliver services to their subscribers. The resource allocation problem is formulated as a convex optimization problem to find a market equilibrium (ME) solution that provides both the allocation and the resource pricing. A privacy-preserving learning algorithm is developed to enable SPs to reach the ME in a decentralized manner. The performance of the proposed scheme is evaluated through theoretical analysis and extensive numerical simulations, comparing it with the Social Optimal and Static Proportional sharing schemes.
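As a hedged illustration of computing a market equilibrium centrally (the paper instead develops a decentralized, privacy-preserving learning algorithm), the Eisenberg-Gale convex program maximizes the budget-weighted sum of log utilities; the CVXPY sketch below uses made-up budgets, linear utilities, and unit resource capacities.

    import cvxpy as cp
    import numpy as np

    # Hypothetical data: 3 service providers (budgets) and 2 resources (unit capacity).
    budgets = np.array([1.0, 2.0, 1.5])
    values = np.array([[1.0, 0.5],
                       [0.3, 1.2],
                       [0.8, 0.8]])  # values[i, j]: SP i's valuation of resource j

    x = cp.Variable((3, 2), nonneg=True)               # allocation of resources to SPs
    utility = cp.sum(cp.multiply(values, x), axis=1)   # linear utilities

    # Eisenberg-Gale program: its optimum is a market-equilibrium allocation, and
    # the duals of the capacity constraints give the equilibrium resource prices.
    capacity = cp.sum(x, axis=0) <= 1
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(budgets, cp.log(utility)))),
                         [capacity])
    problem.solve()
    print("Allocation:\n", x.value)
    print("Prices:", capacity.dual_value)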

Future networks pose intense demands for intelligent and customized designs to cope with surging network scale, dynamically time-varying environments, diverse user requirements, and complicated manual configuration. However, traditional rule-based solutions rely heavily on human effort and expertise, while data-driven intelligent algorithms still lack interpretability and generalization. In this paper, we propose AIGN (AI-Generated Network), a novel intention-driven paradigm for network design, which allows operators to quickly generate a variety of customized network solutions and achieve expert-free problem optimization. Driven by a diffusion model-based learning approach, AIGN has great potential to learn reward-maximizing trajectories, automatically satisfy multiple constraints, adapt to different objectives and scenarios, and even intelligently create novel designs and mechanisms unseen in existing network environments. Finally, we conduct a use case to demonstrate that AIGN can effectively guide the design of transmit power allocation in digital twin-based access networks.

Cyber-physical systems (CPS) offer immense optimization potential for manufacturing processes through the availability of multivariate time series data from actuators and sensors. Automated analysis software makes it possible to deploy adaptive and responsive measures based on this time series data. Due to the complex and dynamic nature of modern manufacturing, analysis and modeling often cannot be entirely automated; even machine learning or deep learning approaches often depend on a priori expert knowledge and labelling. In this paper, an information-based data preprocessing approach is proposed. By applying statistical methods, including variance and correlation analysis, an approximation of the sampling rate in event-based systems, and the utilization of spectral analysis, knowledge about the underlying manufacturing processes can be gained prior to modeling. The paper presents how statistical analysis enables pruning of a dataset's least important features and how the sampling rate approximation approach sets the base for further data analysis and modeling. The data's underlying periodicity, originating from the cyclic nature of an automated manufacturing process, is detected by utilizing the fast Fourier transform. This information-based preprocessing method is then validated on process time series data from cyber-physical systems' programmable logic controllers (PLCs).
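As a hedged sketch of the periodicity-detection step only (the full pipeline also covers variance/correlation analysis and sampling rate approximation), the fast Fourier transform can expose the dominant cycle of a process signal; the function below assumes a uniformly (re)sampled signal with a known sampling rate.

    import numpy as np

    def dominant_period(signal, sampling_rate_hz):
        """Estimate the dominant cycle length (seconds) of a uniformly sampled signal.

        The magnitude spectrum is searched for its strongest non-DC peak and the
        corresponding frequency is inverted to obtain the period.
        """
        x = np.asarray(signal) - np.mean(signal)       # remove the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sampling_rate_hz)
        peak = np.argmax(spectrum[1:]) + 1             # skip the zero-frequency bin
        return 1.0 / freqs[peak]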

Recently, graph neural networks have been gaining significant attention for simulating dynamical systems due to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
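To illustrate the Hamiltonian inductive bias in generic terms (the surveyed models implement graph-structured versions of this idea), a network that outputs a scalar energy H(q, p) can be differentiated with autograd to obtain the dynamics dq/dt = ∂H/∂p and dp/dt = -∂H/∂q; the PyTorch sketch below assumes a generic `h_net`.

    import torch

    def hamiltonian_dynamics(h_net, q, p):
        """Time derivatives implied by a learned scalar Hamiltonian H(q, p).

        Hamilton's equations (dq/dt = dH/dp, dp/dt = -dH/dq) conserve energy
        along exact trajectories, which is the built-in inductive bias.
        """
        q = q.requires_grad_(True)
        p = p.requires_grad_(True)
        H = h_net(torch.cat([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq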

In this paper, we propose jointly learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on the use of either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process so that prediction errors do not propagate and thus affect performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
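The beam-search step can be sketched generically as below (the paper's exact scoring and stopping rules may differ); `step_log_probs(seq)` is a hypothetical callback standing in for the attention-LSTM decoder.

    def beam_search_labels(step_log_probs, num_labels, beam_width=3, max_steps=5):
        """Keep the `beam_width` highest-scoring label sequences at every step."""
        beams = [([], 0.0)]  # (label sequence, accumulated log-probability)
        for _ in range(max_steps):
            candidates = []
            for seq, score in beams:
                log_probs = step_log_probs(seq)  # one log-probability per next label
                for label in range(num_labels):
                    if label in seq:             # each label is predicted at most once
                        continue
                    candidates.append((seq + [label], score + log_probs[label]))
            if not candidates:
                break
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams[0][0]  # highest-scoring label sequence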
