
This article proposes a novel high-performance computing approach for the prediction of the temperature field in powder bed fusion (PBF) additive manufacturing processes. In contrast to many existing approaches to part-scale simulations, the underlying computational model consistently resolves physical scan tracks without additional heat source scaling, agglomeration strategies, or any other heuristic modeling assumptions. A growing, adaptively refined mesh accurately captures all details of the laser beam motion. Critically, the fine spatial resolution required for resolved scan tracks, in combination with the high scan velocities underlying these processes, mandates the use of comparatively small time steps to resolve the underlying physics. Explicit time integration schemes are well-suited for this setting, while unconditionally stable implicit time integration schemes are employed for the interlayer cool-down phase, which is governed by significantly larger time scales. These two schemes are combined and implemented in an efficient fast operator evaluation framework providing significant performance gains and optimization opportunities. The capabilities of the novel framework are demonstrated through realistic AM examples on the centimeter scale, including the first scan-resolved simulation of the entire NIST AM Benchmark cantilever specimen, with a computation time of less than one day. Apart from the physical insights gained through these simulation examples, numerical aspects are also thoroughly studied on the basis of weak and strong parallel scaling tests. As potential applications, the proposed thermal PBF simulation framework can serve as a basis for microstructure and thermo-mechanical predictions on the part scale, and it can also be used to assess the influence of scan pattern and part geometry on melt pool shape and temperature, which are important indicators for well-known process instabilities.
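To make the role of the two time integrators concrete, the following sketch contrasts a conditionally stable explicit Euler step (scan phase) with an unconditionally stable backward Euler step (cool-down phase) on a 1D heat equation; the grid, material constants, and heat source are illustrative placeholders rather than values from the study.

```python
# Minimal sketch: explicit vs. implicit time stepping for the 1D heat equation,
# mirroring the split between the laser-scan phase (small explicit steps) and the
# interlayer cool-down phase (large implicit steps). All values are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, alpha, dx = 200, 1e-5, 1e-4          # nodes, thermal diffusivity, grid spacing
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") / dx**2

T = np.full(n, 300.0)                    # initial temperature field [K]
T[n // 2] = 1500.0                       # crude stand-in for a laser heat spot

# Scan phase: explicit Euler, stable only if dt <= dx^2 / (2 * alpha)
dt_exp = 0.4 * dx**2 / alpha
for _ in range(100):
    T = T + dt_exp * alpha * (lap @ T)

# Cool-down phase: backward Euler, unconditionally stable, so dt can be much larger
dt_imp = 1e3 * dt_exp
A = sp.eye(n, format="csc") - dt_imp * alpha * lap
for _ in range(10):
    T = spla.spsolve(A, T)

print(f"peak temperature after cool-down: {T.max():.1f} K")
```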

Related Content

Local modifications of a computational domain are often performed in order to simplify the meshing process and to reduce computational costs and memory requirements. However, removing geometrical features of a domain often introduces a non-negligible error in the solution of a differential problem defined on it. In this work, we extend the results from [1] by studying the case of domains containing an arbitrary number of distinct Neumann features, and by performing an analysis on Poisson's, linear elasticity, and Stokes' equations. We introduce a simple, computationally cheap, reliable, and efficient a posteriori estimator of the geometrical defeaturing error. Moreover, we also introduce a geometric refinement strategy that accounts for the defeaturing error: starting from a fully defeatured geometry, the algorithm determines at each iteration step which features need to be added to the geometrical model to reduce the defeaturing error. These important features are then added to the (partially) defeatured geometrical model at the next iteration, until the solution attains a prescribed accuracy. A wide range of two- and three-dimensional numerical experiments is finally reported to illustrate this work.
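The geometric refinement strategy can be pictured as a greedy loop over the missing features; the sketch below assumes a hypothetical `estimate_feature_errors` routine standing in for the a posteriori estimator, and is not the authors' implementation.

```python
# Sketch of the greedy geometric refinement loop: starting from a fully defeatured
# model, re-insert the features whose estimated defeaturing error contribution is
# largest, until the total estimate drops below a tolerance.
# `estimate_feature_errors` and `add_feature` are hypothetical placeholders.

def refine_geometry(model, removed_features, estimate_feature_errors,
                    tol=1e-3, batch_size=1):
    active = set(removed_features)            # features still missing from the model
    while active:
        # one (squared) estimator contribution per missing Neumann feature
        errors = estimate_feature_errors(model, active)
        total = sum(errors.values()) ** 0.5
        if total <= tol:
            break
        # re-insert the most important feature(s) and iterate
        worst = sorted(active, key=lambda f: errors[f], reverse=True)[:batch_size]
        for f in worst:
            model = model.add_feature(f)
            active.remove(f)
    return model
```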

Poisson's equation plays an important role in modeling many physical systems. In electrostatic self-consistent low-temperature plasma (LTP) simulations, Poisson's equation is solved at each simulation time step, which can amount to a significant computational cost for the entire simulation. In this paper, we describe the development of a generic machine-learned Poisson solver specifically designed for the requirements of LTP simulations in complex 2D reactor geometries on structured Cartesian grids. Here, the reactor geometries can consist of inner electrodes and dielectric materials as often found in LTP simulations. The approach leverages a hybrid CNN-transformer network architecture in combination with a weighted multiterm loss function. We train the network using highly randomized synthetic data to ensure the generalizability of the learned solver to unseen reactor geometries. The results demonstrate that the learned solver is able to produce quantitatively and qualitatively accurate solutions. Furthermore, it generalizes well to new reactor geometries such as reference geometries found in the literature. To increase the numerical accuracy of the solutions required in LTP simulations, we employ a conventional iterative solver to refine the raw predictions, especially to recover the high-frequency features not resolved by the initial prediction. With this, the proposed learned Poisson solver provides the required accuracy and is potentially faster than a pure GPU-based conventional iterative solver. This opens up new possibilities for developing a generic and high-performing learned Poisson solver for LTP systems in complex geometries.
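The prediction-plus-refinement step can be illustrated with a toy 2D Poisson problem: the network output is used only as an initial guess, and a conventional smoother recovers the high-frequency residual. The `prediction` array below is a placeholder for the CNN-transformer output; the grid and right-hand side are illustrative.

```python
# Sketch: refine a learned Poisson prediction with a conventional iterative solver.
# The network prediction provides only the initial guess; damped Jacobi sweeps
# (any standard solver would do) reduce the remaining high-frequency error.
import numpy as np

def jacobi_refine(phi0, rhs, h, sweeps=200, omega=0.8):
    """Damped Jacobi for -laplace(phi) = rhs on a uniform grid with Dirichlet BCs."""
    phi = phi0.copy()
    for _ in range(sweeps):
        nb = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:])
        phi_new = 0.25 * (nb + h**2 * rhs[1:-1, 1:-1])
        phi[1:-1, 1:-1] = (1 - omega) * phi[1:-1, 1:-1] + omega * phi_new
    return phi

n, h = 64, 1.0 / 63
rhs = np.ones((n, n))                     # toy charge density
prediction = np.zeros((n, n))             # stand-in for the CNN-transformer output
solution = jacobi_refine(prediction, rhs, h)
```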

Graphene is one of the most researched two-dimensional (2D) materials due to its unique combination of mechanical, thermal, and electrical properties. The special 2D structure of graphene enables it to exhibit a wide range of peculiar material properties, such as a high Young's modulus and high specific strength, which are critical for a myriad of applications including lightweight structural materials, multi-functional coatings, and flexible electronics. Since it is quite challenging and costly to experimentally investigate graphene and graphene-based nanocomposites, computational simulations such as molecular dynamics (MD) simulations are widely adopted for understanding the microscopic origins of their unique properties. However, disparate results have been reported from computational studies, especially MD simulations using various empirical interatomic potentials. In this work, an artificial neural network based interatomic potential has been developed for graphene to represent the potential energy surface based on first-principles calculations. The developed machine learning potential (MLP) facilitates high-fidelity MD simulations that approach the accuracy of ab initio methods at a fraction of the computational cost, which allows larger simulation sizes and lengths and thereby enables accelerated discovery and design of novel graphene-based materials. The lattice parameter, coefficient of thermal expansion (CTE), Young's modulus, and yield strength are estimated using machine-learning-accelerated MD simulations (MLMD) and compared to experimental and first-principles results from the previous literature. It is demonstrated that MLMD can capture the dominant mechanisms governing the CTE of graphene, including effects from the lattice parameter and out-of-plane rippling.
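As a rough illustration of how a machine learning potential drives MD, the sketch below maps a toy per-atom descriptor to per-atom energies with a small network and obtains forces by automatic differentiation, F = -dE/dR; the descriptor and network are placeholders, not the trained graphene MLP.

```python
# Sketch: a toy neural-network potential. A crude per-atom descriptor (sum of inverse
# pairwise distances, purely illustrative) is mapped to per-atom energies, and forces
# follow from autograd as F = -dE/dR, as in MLP-driven MD.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def energy(positions):
    # positions: (N, 3) tensor of atomic coordinates
    diff = positions.unsqueeze(0) - positions.unsqueeze(1)      # (N, N, 3) pair vectors
    d2 = (diff ** 2).sum(-1)                                    # squared distances
    eye = torch.eye(positions.shape[0])
    inv = (1.0 - eye) / torch.sqrt(d2 + eye)                    # 1/r, zero on the diagonal
    desc = inv.sum(dim=1, keepdim=True)                         # per-atom descriptor
    return net(desc).sum()                                      # total potential energy

pos = torch.rand(8, 3, requires_grad=True)
E = energy(pos)
forces = -torch.autograd.grad(E, pos)[0]                        # F = -dE/dR
print(E.item(), forces.shape)
```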

Computational efficiency is a major bottleneck in using classic graph-based approaches for semi-supervised learning on datasets with a large number of unlabeled examples. Known techniques to improve efficiency typically involve an approximation of the graph regularization objective, but suffer from two major drawbacks: first, the graph is assumed to be known or constructed with heuristic hyperparameter values; second, they do not provide a principled approximation guarantee for learning over the full unlabeled dataset. Building on recent work on learning graphs for semi-supervised learning from multiple datasets for problems from the same domain, and leveraging techniques for fast approximations for solving linear systems in the graph Laplacian matrix, we propose algorithms that overcome both of the above limitations. We show a formal separation in the learning-theoretic complexity of sparse and dense graph families. We further show how to approximately learn the best graphs from the sparse families efficiently using the conjugate gradient method. Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions. Our online learning results are stated generally, and may be useful for approximate and efficient parameter tuning in other problems. We implement our approach and demonstrate significant ($\sim$10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
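The linear-algebra core of this setting can be sketched as follows: a graph-regularized labeling is obtained by solving a system in the regularized graph Laplacian approximately with the conjugate gradient method, which needs only matrix-vector products. The k-NN graph construction and constants below are illustrative, not the learned graphs studied in the paper.

```python
# Sketch: approximate graph-based semi-supervised inference by solving a regularized
# Laplacian system with conjugate gradient instead of a direct solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                       # labeled + unlabeled features
y = np.zeros(2000)
y[:20] = rng.choice([-1.0, 1.0], size=20)             # only a few labels are known

W = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)                                   # symmetrize the k-NN graph
L = laplacian(W)                                      # graph Laplacian

# (L + c*I) f = c*y : quadratic graph regularization with label fitting
c = 1.0
A = L + c * sp.eye(W.shape[0])
f, info = spla.cg(A, c * y, maxiter=200)              # CG needs only matvecs with A
print("converged" if info == 0 else f"stopped after {info} iterations")
```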

Engaging students in creating novel content, also referred to as learnersourcing, is increasingly recognised as an effective approach to promoting higher-order learning, deeply engaging students with course material and developing large repositories of content suitable for personalized learning. Despite these benefits, some common concerns and criticisms are associated with learnersourcing (e.g., the quality of resources created by students, challenges in incentivising engagement and lack of availability of reliable learnersourcing systems), which have limited its adoption. This paper presents a framework that considers the existing learnersourcing literature, the latest insights from the learning sciences and advances in AI to offer promising future directions for developing learnersourcing systems. The framework is designed around important questions and human-AI partnerships relating to four key aspects: (1) creating novel content, (2) evaluating the quality of the created content, (3) utilising learnersourced contributions of students and (4) enabling instructors to support students in the learnersourcing process. We then present two comprehensive case studies that illustrate the application of the proposed framework in relation to two existing popular learnersourcing systems.

Neural ordinary differential equations (neural ODEs) have emerged as a novel network architecture that bridges dynamical systems and deep learning. However, the gradient obtained with the continuous adjoint method in the vanilla neural ODE is not reverse-accurate. Other approaches suffer either from an excessive memory requirement due to deep computational graphs or from limited choices for the time integration scheme, hampering their application to large-scale complex dynamical systems. To achieve accurate gradients without compromising memory efficiency and flexibility, we present a new neural ODE framework, PNODE, based on high-level discrete adjoint algorithmic differentiation. By leveraging discrete adjoint time integrators and advanced checkpointing strategies tailored for these integrators, PNODE can provide a balance between memory and computational costs, while computing the gradients consistently and accurately. We provide an open-source implementation based on PyTorch and PETSc, one of the most commonly used portable, scalable scientific computing libraries. We demonstrate the performance through extensive numerical experiments on image classification and continuous normalizing flow problems. We show that PNODE achieves the highest memory efficiency when compared with other reverse-accurate methods. On the image classification problems, PNODE is up to two times faster than the vanilla neural ODE and up to 2.3 times faster than the best existing reverse-accurate method. We also show that PNODE enables the use of the implicit time integration methods that are needed for stiff dynamical systems.
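The reverse-accurate idea can be illustrated on an explicit-Euler "neural ODE": the stored forward states act as checkpoints, and the adjoint recursion differentiates exactly the time stepper rather than a continuous adjoint ODE. This toy sketch only conveys the principle; PNODE itself builds on PETSc time integrators and tailored checkpointing strategies.

```python
# Sketch: a discrete adjoint for an explicit-Euler "neural ODE". The adjoint recursion
# differentiates exactly the time stepper (reverse-accurate), reusing the forward
# states as checkpoints.
import torch

f = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
h, steps = 0.1, 20

def forward(x0):
    xs = [x0]
    with torch.no_grad():                     # no graph stored; states are the checkpoints
        for _ in range(steps):
            xs.append(xs[-1] + h * f(xs[-1])) # explicit Euler step
    return xs

def discrete_adjoint(xs, dL_dxN):
    lam = dL_dxN                              # adjoint of the final state
    grads = [torch.zeros_like(p) for p in f.parameters()]
    for k in reversed(range(steps)):
        xk = xs[k].detach().requires_grad_(True)
        fk = f(xk)
        # vector-Jacobian products of f w.r.t. state and parameters, weighted by lam
        vjps = torch.autograd.grad(fk, [xk, *f.parameters()], grad_outputs=lam)
        lam = lam + h * vjps[0]               # adjoint recursion dL/dx_k
        for g, v in zip(grads, vjps[1:]):
            g += h * v                        # accumulate parameter gradient
    return grads

x0 = torch.randn(1, 2)
xs = forward(x0)
loss_grad = 2 * (xs[-1] - 1.0)                # dL/dx_N for L = ||x_N - 1||^2
param_grads = discrete_adjoint(xs, loss_grad)
```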

This paper introduces an efficient and generic framework for finite-element simulations under an implicit time integration scheme. Being compatible with generic constitutive models, a fast matrix assembly method exploits the fact that system matrices are created in a deterministic way as long as the mesh topology remains constant. Using the sparsity pattern of the assembled system brings about significant optimizations in the assembly stage. As a result, previously developed GPU-based parallelization techniques can be directly applied to the assembled system. Moreover, an asynchronous Cholesky preconditioning scheme is used to improve the convergence of the system solver. On this basis, a GPU-based Cholesky preconditioner is developed, significantly reducing the data transfer between the CPU and GPU during the solving stage. We evaluate the performance of our method with different mesh elements and hyperelastic models and compare it with typical approaches on the CPU and the GPU.
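The fixed-topology assembly idea can be sketched as follows: the CSR sparsity pattern and a per-entry index map are built once, after which each reassembly only scatters new element matrices into the value array. The layout below is a simplified, hypothetical version of such a scheme, not the paper's GPU implementation.

```python
# Sketch: reuse a fixed sparsity pattern for fast reassembly. The CSR structure and a
# per-element map into the `data` array are built once from the (constant) mesh
# topology; every subsequent assembly only scatters new element matrices into `data`.
import numpy as np
import scipy.sparse as sp

def build_pattern(elements, n_dofs):
    """elements: (n_el, nodes_per_el) integer connectivity. Returns CSR matrix + index map."""
    npe = elements.shape[1]
    rows = np.repeat(elements, npe, axis=1).ravel()
    cols = np.tile(elements, (1, npe)).ravel()
    # build the structure once with dummy values (duplicates are summed by tocsr)
    A = sp.coo_matrix((np.zeros(rows.size), (rows, cols)), shape=(n_dofs, n_dofs)).tocsr()
    A.sort_indices()
    # for every (row, col) pair of every element entry, find its slot in A.data
    index_map = np.empty(rows.size, dtype=np.int64)
    for i, (r, c) in enumerate(zip(rows, cols)):
        start, end = A.indptr[r], A.indptr[r + 1]
        index_map[i] = start + np.searchsorted(A.indices[start:end], c)
    return A, index_map

def reassemble(A, index_map, element_matrices):
    """element_matrices: (n_el, nodes_per_el, nodes_per_el), new values each time step."""
    A.data[:] = 0.0
    np.add.at(A.data, index_map, element_matrices.ravel())   # scatter into fixed slots
    return A
```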

High-fidelity datasets play a pivotal role in imbuing simulators with realism, enabling the benchmarking of various state-of-the-art deep inference models. These models are particularly instrumental in tasks such as semantic segmentation, classification, and localization. This study showcases the efficacy of a customized manufacturing dataset comprising 60 classes in the creation of a high-fidelity digital twin of a robotic manipulation environment. By leveraging the concept of transfer learning, different 6D pose estimation models are trained within the simulated environment using domain randomization and subsequently tested on real-world objects to assess domain adaptation. To ascertain the effectiveness and realism of the created dataset, pose accuracy and mean absolute error (MAE) metrics are reported to quantify the model2real gap.
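As an illustration of how such pose metrics are typically computed, the sketch below reports the mean translation error and the mean geodesic rotation error for a batch of 6D pose estimates; the data layout is assumed and not tied to the dataset or models of this study.

```python
# Sketch: MAE-style metrics for 6D pose estimates, reporting translation error and
# geodesic rotation error separately over a batch of predictions.
import numpy as np

def pose_errors(R_pred, t_pred, R_gt, t_gt):
    """R_*: (N, 3, 3) rotation matrices, t_*: (N, 3) translations."""
    t_err = np.linalg.norm(t_pred - t_gt, axis=1)                 # per-sample distance
    R_rel = np.einsum("nij,nkj->nik", R_pred, R_gt)               # R_pred @ R_gt^T
    cos = (np.trace(R_rel, axis1=1, axis2=2) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))        # geodesic angle [deg]
    return t_err.mean(), r_err.mean()                             # MAE over the set
```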

The incredible development of federated learning (FL) has benefited various tasks in the domains of computer vision and natural language processing, and existing frameworks such as TFF and FATE have made its deployment easy in real-world applications. However, federated graph learning (FGL), even though graph data are prevalent, has not been well supported due to its unique characteristics and requirements. The lack of a dedicated FGL framework increases the effort required for reproducible research and for deployment in real-world applications. Motivated by such strong demand, in this paper, we first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G), which provides (1) a unified view for modularizing and expressing FGL algorithms; (2) comprehensive DataZoo and ModelZoo for out-of-the-box FGL capability; (3) an efficient model auto-tuning component; and (4) off-the-shelf privacy attack and defense abilities. We validate the effectiveness of FS-G by conducting extensive experiments, which also yield many valuable insights about FGL for the community. Moreover, we employ FS-G to serve FGL applications in real-world E-commerce scenarios, where the attained improvements indicate great potential business benefits. We publicly release FS-G, as submodules of FederatedScope, at //github.com/alibaba/FederatedScope to promote FGL research and enable broad applications that would otherwise be infeasible due to the lack of a dedicated package.
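For readers unfamiliar with FGL, the sketch below shows the bare skeleton of one FedAvg-style communication round over clients that each hold a private graph. It is a generic illustration in plain PyTorch and deliberately does not use the FederatedScope-GNN API; the client data layout is hypothetical.

```python
# Sketch: the core of a FedAvg-style federated graph learning round in plain PyTorch.
# Each client holds a private graph (dense normalized adjacency here for brevity);
# only model weights are exchanged with the server.
import copy
import torch

class TinyGCN(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)
    def forward(self, A_norm, X):
        return self.lin(A_norm @ X)            # one graph-convolution layer

def local_update(model, A, X, y, epochs=1, lr=0.01):
    model = copy.deepcopy(model)               # train a private copy of the global model
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(A, X), y)
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    avg = copy.deepcopy(states[0])
    for key in avg:                            # average each parameter across clients
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

def federated_round(global_model, clients):
    # clients: list of (A_norm, X, y) tuples, one private graph per client (hypothetical)
    states = [local_update(global_model, A, X, y) for (A, X, y) in clients]
    global_model.load_state_dict(fedavg(states))
    return global_model
```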

We study the problem of efficient semantic segmentation for large-scale 3D point clouds. By relying on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches can only be trained on and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably efficient in computation and memory, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Extensive experiments show that our RandLA-Net can process 1 million points in a single pass, up to 200x faster than existing approaches. Moreover, our RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks, Semantic3D and SemanticKITTI.
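The sampling trade-off at the heart of RandLA-Net can be illustrated by comparing random sampling with a naive farthest point sampling implementation; the cloud sizes below are illustrative, and in the actual network the learned local feature aggregation module compensates for features dropped by random sampling.

```python
# Sketch: random sampling is O(N) and trivially parallel, whereas farthest point
# sampling (shown naively here) is much more expensive on large clouds.
import numpy as np

def random_sample(points, k, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.choice(points.shape[0], size=k, replace=False)   # O(N)
    return points[idx]

def farthest_point_sample(points, k):                          # naive O(N * k)
    idx = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[-1]], axis=1))
    return points[idx]

cloud = np.random.rand(100_000, 3)
sub_random = random_sample(cloud, 4096)                 # cheap at every layer
sub_fps = farthest_point_sample(cloud[:10_000], 512)    # quickly becomes the bottleneck
```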
