
We present a new Integrated Finite Element Neural Network framework (I-FENN), with the objective of accelerating the numerical solution of nonlinear computational mechanics problems. We leverage the swift predictive capability of neural networks (NNs) and embed them inside the finite element stiffness function to compute element-level state variables and their derivatives within a nonlinear, iterative numerical solution. This process is conducted jointly with conventional finite element methods that involve shape functions: the NN receives input data representing the material point deformation, and its output is used to construct element-level field variables such as the element Jacobian matrix and residual vector. Here we introduce I-FENN to the continuum damage analysis of quasi-brittle materials, and we establish a new non-local gradient-based damage framework which operates at the cost of a local damage approach. First, we develop a physics-informed neural network (PINN) to emulate the non-local gradient model, and we train the network offline. The network learns to predict the non-local equivalent strain at each material point, as well as its derivative with respect to the local strain. Then, the PINN is integrated into the element stiffness definition and carries out the local-to-non-local strain transformation, while its two outputs are used to construct the element Jacobian matrix and residual vector. This process is carried out within the nonlinear solver until numerical convergence is achieved. The resulting method bears the computational cost of the conventional local damage approach, but ensures mesh-independent results and a diffused non-local strain and damage profile. As a result, the proposed method tackles the principal drawbacks of the local and the non-local gradient methods, namely mesh dependence and additional computational cost, respectively.
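
To illustrate the coupling, the following is a minimal PyTorch-style sketch (hypothetical class and variable names, not the authors' implementation) of how a surrogate trained offline could be queried at the element level during the nonlinear solve; in the paper the strain derivative is a network output, whereas here it is recovered via automatic differentiation for brevity.

import torch
import torch.nn as nn

# Hypothetical surrogate: maps a material point's coordinates and local
# equivalent strain to the non-local equivalent strain.
class NonlocalStrainPINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y, eps_local):
        return self.net(torch.stack([x, y, eps_local], dim=-1)).squeeze(-1)

model = NonlocalStrainPINN()            # assume weights were trained offline

# Query at an element's integration points inside the nonlinear solver.
x = torch.tensor([0.25, 0.75])
y = torch.tensor([0.25, 0.75])
eps_local = torch.tensor([1.2e-4, 3.4e-4], requires_grad=True)

eps_nonlocal = model(x, y, eps_local)                          # output 1: non-local strain
deps = torch.autograd.grad(eps_nonlocal.sum(), eps_local)[0]   # output 2: d(non-local)/d(local)

# eps_nonlocal drives the damage law in the element residual vector;
# deps enters the consistent element Jacobian (tangent stiffness).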

Related content


In this paper, we develop a novel class of linear energy-preserving integrating factor methods for the 2D nonlinear Schr\"odinger equation with wave operator (NLSW), combining the scalar auxiliary variable approach and the integrating factor methods. A second-order scheme is first proposed, which is rigorously proved to be energy-preserving. Using energy methods, we analyze its optimal convergence in the $H^1$ norm without any restrictions on the grid ratio, where a novel technique and an improved induction argument are proposed to overcome the difficulty posed by the unavailability of a priori $L^\infty$ estimates of numerical solutions. Based on the integrating factor Runge-Kutta methods, we extend the proposed scheme to arbitrarily high order, which is also linear and conservative. Numerical experiments are presented to confirm the theoretical analysis and demonstrate the advantages of the proposed methods.
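
For context, the two building blocks can be written generically; this is a sketch for an abstract semi-discretization $\frac{du}{dt} = Lu + N(u)$, and the concrete splitting, auxiliary variable, and modified energy for the NLSW follow the paper:
\[
v(t) := e^{-(t-t_n)L}\,u(t)
\quad\Longrightarrow\quad
\frac{dv}{dt} = e^{-(t-t_n)L}\,N\!\bigl(e^{(t-t_n)L}v\bigr),
\qquad
r(t) := \sqrt{E_1[u(t)] + C_0}.
\]
The integrating factor removes the stiff linear part so that only the transformed nonlinearity is advanced by the Runge--Kutta stages, while evolving the scalar auxiliary variable $r$ (with $E_1$ the nonlinear part of the energy and $C_0$ a constant keeping the radicand positive) alongside $v$ lets the nonlinear terms enter the scheme linearly and allows a modified energy to be conserved exactly.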

In biomechanics, geometries representing complicated organic structures are commonly segmented from sparse volumetric data or morphed from template geometries, resulting in initial overclosure between adjacent geometries. In FEA, these overclosures cause numerical instability and inaccuracy in contact analysis. Several techniques exist to fix overclosures, but most suffer from drawbacks. This work introduces a novel automated, iterative algorithm to remove overclosure and create a desired minimum gap for 2D and 3D finite element models. The RBF Network algorithm is described in terms of its four major steps for removing the initial overclosure. The algorithm was then validated against conventional nodal adjustment using two test cases. The first case compared the ability of each algorithm to remove differing levels of overclosure between two deformable muscles and the effects on mesh quality. The second case used a non-deformable femur and a deformable distal femoral cartilage geometry with initial overclosure to test both algorithms and observe the effects on the resulting contact FEA. In the first case, the RBF Network successfully removed all overclosures. In the second case, the nodal adjustment method failed to create a usable FEA model, while the RBF Network had no such issue. This work proposes an algorithm that removes initial overclosures prior to FEA with improved performance over conventional nodal adjustment, especially in complicated situations and those involving 3D elements. The algorithm can be included in existing FEA modeling workflows to improve FEA results in situations involving sparse volumetric segmentation and mesh morphing. It has been implemented in MATLAB, and the source code is publicly available at the following GitHub repository: https://github.com/thor-andreassen/femors
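
The general idea of RBF-based nodal relocation can be sketched as follows; this minimal Python example uses SciPy's RBFInterpolator as a stand-in and is not the authors' four-step algorithm or their MATLAB implementation (all function and argument names are assumptions for illustration).

import numpy as np
from scipy.interpolate import RBFInterpolator

def remove_overclosure(nodes, penetrating_ids, depths, normals, fixed_ids, min_gap=0.1):
    """Morph a mesh so that penetrating surface nodes are pushed out of the
    neighbouring geometry by (depth + min_gap) along their normals, while the
    remaining nodes follow smoothly via radial-basis-function interpolation.

    nodes:           (N, 3) nodal coordinates of the mesh to be morphed.
    penetrating_ids: indices of surface nodes found inside the other geometry.
    depths:          (m,) penetration depth of each of those nodes.
    normals:         (m, 3) outward surface normals at those nodes.
    fixed_ids:       indices of nodes that should not move (far-field anchors).
    """
    control_pts = np.vstack([nodes[penetrating_ids], nodes[fixed_ids]])
    control_disp = np.vstack([(depths + min_gap)[:, None] * normals,
                              np.zeros((len(fixed_ids), 3))])

    # Thin-plate-spline RBF network mapping position -> displacement.
    rbf = RBFInterpolator(control_pts, control_disp,
                          kernel='thin_plate_spline', smoothing=0.0)

    # Apply the smooth displacement field to every node; in a full workflow
    # this step is repeated iteratively until no overclosure remains.
    return nodes + rbf(nodes)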

We present an implementation of a fully stage-parallel preconditioner for Radau IIA type fully implicit Runge--Kutta methods, which approximates the inverse of $A_Q$ from the Butcher tableau by the lower triangular matrix resulting from an LU decomposition and diagonalizes the system into as many blocks as there are stages. For the transformed system, we employ a block preconditioner where each block is distributed and solved in parallel by a subgroup of processes. To combine partial results, we use either a communication pattern resembling Cannon's algorithm or shared memory. A performance model and a large set of performance studies (including strong scaling runs with up to 150k processes on 3k compute nodes), conducted for a time-dependent heat problem using matrix-free finite element methods, indicate that the stage-parallel implementation can reach higher throughput when the block solvers operate at lower parallel efficiencies, which occurs near the scaling limit. The achievable speedup increases linearly with the number of stages and is bounded by it. Furthermore, we show that the presented stage-parallel concepts are also applicable to the case in which $A_Q$ is diagonalized directly, which requires complex arithmetic or the solution of two-by-two blocks and sequentializes parts of the algorithm. As an alternative to distributing stages and assigning them to distinct processes, we discuss the possibility of batching operations from different stages together.
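
As a schematic of what is being preconditioned (a generic sketch; the precise transformation used in the paper may differ in detail), consider a linear, autonomous semi-discretization $M\dot{u} = -Ku$. One step of an $s$-stage fully implicit Runge--Kutta method couples all stage derivatives $k = (k_1,\dots,k_s)$ through
\[
\bigl(I_s \otimes M + \Delta t\,(A_Q \otimes K)\bigr)\,k \;=\; -\,\mathbb{1}_s \otimes (K u_n),
\]
and multiplying from the left by $A_Q^{-1} \otimes I$ yields
\[
\bigl(A_Q^{-1} \otimes M + \Delta t\,(I_s \otimes K)\bigr)\,k \;=\; -\,(A_Q^{-1}\mathbb{1}_s) \otimes (K u_n).
\]
Replacing $A_Q^{-1}$ by a lower triangular or diagonal approximation reduces the stage coupling to $s$ blocks of the form $\gamma_i M + \Delta t\,K$, which is the structure the stage-parallel preconditioner exploits by assigning each block to its own subgroup of processes.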

In this paper, federated learning (FL) over wireless networks is investigated. In each communication round, a subset of devices is selected to participate in the aggregation under limited time and energy. In order to minimize the convergence time, the global loss and the latency are jointly considered in a Stackelberg-game-based framework. Specifically, age-of-information (AoI) based device selection is considered at the leader level as a global loss minimization problem, while sub-channel assignment, computational resource allocation, and power allocation are considered at the follower level as a latency minimization problem. By dividing the follower-level problem into two sub-problems, the best response of the follower is obtained by a monotonic-optimization-based resource allocation algorithm and a matching-based sub-channel assignment algorithm. By deriving an upper bound on the convergence rate, the leader-level problem is reformulated, and a list-based device selection algorithm is then proposed to achieve the Stackelberg equilibrium. Simulation results indicate that the proposed device selection scheme outperforms other schemes in terms of the global loss, and that the developed algorithms can significantly decrease the time consumed by computation and communication.
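
As a toy illustration only of AoI-aware selection under per-round resource limits (a hypothetical greedy rule with made-up inputs, not the list-based algorithm derived from the convergence-rate bound in the paper):

import numpy as np

def select_devices(aoi, latency, energy, t_budget, e_budget, k_max):
    """Toy greedy rule: prefer devices whose updates are most outdated (high
    AoI) among those whose latency and energy fit the round deadline and the
    per-device energy cap.  Hypothetical illustration only."""
    order = np.argsort(-aoi)                  # most outdated devices first
    chosen = []
    for i in order:
        if latency[i] <= t_budget and energy[i] <= e_budget:
            chosen.append(int(i))
        if len(chosen) == k_max:
            break
    return chosen

rng = np.random.default_rng(0)
aoi = rng.integers(1, 10, size=20)            # rounds since last participation
latency = rng.uniform(0.1, 2.0, size=20)      # seconds, after resource allocation
energy = rng.uniform(0.05, 0.5, size=20)      # joules
print(select_devices(aoi, latency, energy, t_budget=1.0, e_budget=0.3, k_max=5))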

Deep convolutional neural networks for image segmentation do not learn the label structure explicitly and may produce segmentations with an incorrect structure, e.g., disconnected segments when segmenting tree-like structures such as airways or blood vessels. In this paper, we propose a novel label refinement method to correct such errors from an initial segmentation, implicitly incorporating information about label structure. This method features two novel parts: 1) a model that generates synthetic structural errors, and 2) a label appearance simulation network that produces synthetic segmentations (with errors) that are similar in appearance to the real initial segmentations. Using these synthetic segmentations and the original images, the label refinement network is trained to correct errors and improve the initial segmentations. The proposed method is validated on two segmentation tasks: airway segmentation from chest computed tomography (CT) scans and brain vessel segmentation from 3D CT angiography (CTA) images of the brain. In both applications, our method significantly outperformed a standard 3D U-Net and other previous refinement approaches. Improvements are even larger when additional unlabeled data is used for model training. In an ablation study, we demonstrate the value of the different components of the proposed method.
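
One simple way to synthesize structural (disconnection) errors from a reference segmentation is sketched below; it is a hand-crafted stand-in for the learned error generator and appearance simulation network described above, with illustrative function and parameter names.

import numpy as np
from scipy import ndimage

def make_disconnection_errors(mask, n_cuts=3, cut_radius=4, seed=0):
    """Carve small spherical gaps out of a binary tree-like segmentation so
    that branches become disconnected -- a crude, hand-crafted stand-in for a
    learned structural-error generator."""
    rng = np.random.default_rng(seed)
    corrupted = mask.copy().astype(bool)
    fg = np.argwhere(corrupted)
    zz, yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1], :mask.shape[2]]
    for _ in range(n_cuts):
        cz, cy, cx = fg[rng.integers(len(fg))]        # random foreground voxel
        ball = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= cut_radius ** 2
        corrupted[ball] = False                        # carve out a gap
    n_components = ndimage.label(corrupted)[1]         # count resulting pieces
    return corrupted, n_components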

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily which exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which operates under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism, which can automatically adapt the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end manner, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
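
A minimal dense PyTorch sketch of the core idea follows: a learnable per-edge homophily score decides whether a neighbour's message is added (homophilic) or pushed away (heterophilic). The class name, gating form, and signed-mean aggregation are illustrative assumptions; the paper additionally learns separate topology-based and attribute-based homophily degrees.

import torch
import torch.nn as nn

class HomophilyGatedConv(nn.Module):
    """Dense-adjacency sketch of propagation gated by a learned homophily
    degree per edge."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.score = nn.Sequential(nn.Linear(2 * in_dim, 1), nn.Sigmoid())

    def forward(self, x, adj):                     # x: (N, F), adj: (N, N) 0/1
        n = x.size(0)
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        h = self.score(pair).squeeze(-1) * adj     # homophily degree per edge
        signed = (2.0 * h - 1.0) * adj             # +1 homophilic, -1 heterophilic
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = signed @ x / deg                     # signed mean aggregation
        return torch.relu(self.lin(x + agg))

# Tiny usage example on a random graph with 5 nodes and 8 features.
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
out = HomophilyGatedConv(8, 16)(x, adj)            # shape (5, 16)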

Deep neural network architectures have traditionally been designed and explored with human expertise in a long-lasting trial-and-error process. This process requires a huge amount of time, expertise, and resources. To address this tedious problem, we propose a novel algorithm that automatically finds optimal hyperparameters of a deep network architecture. We specifically focus on designing neural architectures for the medical image segmentation task. Our proposed method is based on policy gradient reinforcement learning, for which the reward function is a segmentation evaluation utility (i.e., the Dice index). We show the efficacy of the proposed method and its low computational cost in comparison with state-of-the-art medical image segmentation networks. We also present a new architecture design, a densely connected encoder-decoder CNN, as a strong baseline architecture on which to apply the proposed hyperparameter search algorithm. We apply the proposed algorithm to each layer of the baseline architecture. As an application, we train the proposed system on cine cardiac MR images from the Automated Cardiac Diagnosis Challenge (ACDC) at MICCAI 2017. Starting from a baseline segmentation architecture, the resulting network architecture obtains state-of-the-art accuracy without any trial-and-error architecture design or close supervision of the hyperparameter changes.
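
A compact REINFORCE-style sketch of such a search loop is given below: a controller samples a per-layer hyperparameter (here, the number of filters), the segmentation network is trained with that choice, and the validation Dice score is used as the reward. The candidate values, the controller form, and train_and_evaluate_dice are placeholders for this illustration, not the paper's exact setup.

import torch
import torch.nn as nn

CHOICES = [16, 32, 64, 128]            # candidate filter counts per layer
N_LAYERS = 6

class Controller(nn.Module):
    """One categorical distribution per layer over the candidate filter counts."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(N_LAYERS, len(CHOICES)))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        idx = dist.sample()                           # one choice per layer
        return idx, dist.log_prob(idx).sum()

def train_and_evaluate_dice(filters):
    """Placeholder: build the encoder-decoder with these filter counts, train
    it on the segmentation data, and return the validation Dice score."""
    return float(torch.rand(()))                      # stand-in reward

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=0.05)
baseline = 0.0

for step in range(100):                               # policy-gradient search loop
    idx, log_prob = controller.sample()
    filters = [CHOICES[i] for i in idx.tolist()]
    reward = train_and_evaluate_dice(filters)         # Dice index as reward
    baseline = 0.9 * baseline + 0.1 * reward          # moving-average baseline
    loss = -(reward - baseline) * log_prob            # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()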

A variety of deep neural networks have been applied to medical image segmentation and achieve good performance. Unlike natural images, medical images of the same imaging modality are characterized by the same pattern, which means that the same normal organs or tissues are located at similar positions in the images. Thus, in this paper we try to incorporate this prior knowledge of medical images into the structure of neural networks such that it can be utilized for accurate segmentation. Based on this idea, we propose a novel deep network called the knowledge-based fully convolutional network (KFCN) for medical image segmentation. The segmentation function and the corresponding error are analyzed. We show the existence of an asymptotically stable region for KFCN which the traditional FCN does not possess. Experiments validate our assumption about the incorporation of prior knowledge into the convolution kernels of KFCN and show that KFCN can achieve reasonable segmentations with satisfactory accuracy.
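
KFCN's kernel-level mechanism is not reproduced here; as a generic illustration of the underlying assumption only (the same organ tends to appear at similar positions across scans of one modality), one simple way to make convolutions position-aware is to append normalized coordinate channels, as sketched below with illustrative names and shapes.

import torch
import torch.nn as nn

def add_coordinate_channels(images):
    """Append normalized (y, x) coordinate maps to a batch of images so that a
    fully convolutional network can exploit a positional prior."""
    b, _, h, w = images.shape
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([images, ys, xs], dim=1)

images = torch.randn(2, 1, 64, 64)                      # e.g. single-channel slices
inp = add_coordinate_channels(images)                   # (2, 3, 64, 64)
seg_head = nn.Conv2d(3, 4, kernel_size=3, padding=1)    # position-aware convolution
logits = seg_head(inp)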

Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One deep learning technique, U-Net, has become one of the most popular for these applications. In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net, as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net, named RU-Net and R2U-Net respectively. The proposed models utilize the power of U-Net, residual networks, and RCNNs. These architectures offer several advantages for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, they allow us to design a better U-Net architecture with the same number of network parameters and better performance for medical image segmentation. The proposed models are tested on three benchmark datasets: blood vessel segmentation in retina images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including U-Net and residual U-Net (ResU-Net).
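
A minimal PyTorch sketch of a recurrent residual convolutional unit of the kind such models stack inside a U-Net is shown below: the same convolution is applied recurrently over t steps with feedback, and a residual connection wraps the unit. Class names and details such as normalization and exact wiring are illustrative assumptions rather than the paper's exact block.

import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Apply one convolution recurrently: each step refines the feature map by
    re-convolving the input plus the previous step's output."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x):
        h = self.conv(x)
        for _ in range(self.t):
            h = self.conv(x + h)          # recurrent feedback
        return h

class RecurrentResidualBlock(nn.Module):
    """Two recurrent convolutions with an identity skip, replacing the plain
    double convolution of a U-Net encoder/decoder stage."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)          # match channel counts
        self.body = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)           # residual connection

block = RecurrentResidualBlock(64, 128)
y = block(torch.randn(1, 64, 32, 32))     # -> (1, 128, 32, 32)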

In this paper, we propose jointly learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it is desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by adapting the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
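
A generic beam-search routine over label sequences of the kind used at inference time is sketched below; step_fn is a placeholder for the attention-LSTM decoder conditioned on the image features, and the function and argument names are illustrative assumptions.

import math

def beam_search(step_fn, n_labels, beam_width=3, max_len=5, stop_id=None):
    """Generic beam search over label sequences.  step_fn(prefix) must return a
    list of n_labels log-probabilities for the next label given the labels
    predicted so far."""
    beams = [([], 0.0)]                              # (label sequence, log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if stop_id is not None and seq and seq[-1] == stop_id:
                candidates.append((seq, score))      # keep finished sequences
                continue
            log_probs = step_fn(seq)
            for label in range(n_labels):
                if label in seq:                     # each label predicted once
                    continue
                candidates.append((seq + [label], score + log_probs[label]))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

# Toy usage with a uniform stand-in decoder over 4 labels.
uniform = lambda seq: [math.log(1.0 / 4)] * 4
print(beam_search(uniform, n_labels=4, beam_width=2, max_len=3))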
