
We propose new strategies to handle polygonal grid refinement based on Convolutional Neural Networks (CNNs). We show that CNNs can be successfully employed to correctly identify the "shape" of a polygonal element, so as to design suitable refinement criteria that can be employed within adaptive refinement strategies. We propose two refinement strategies that exploit CNNs to classify element shapes at low computational cost. We test the proposed idea on two families of finite element methods that support arbitrarily shaped polygonal elements, namely Polygonal Discontinuous Galerkin (PolyDG) methods and Virtual Element Methods (VEMs). We demonstrate that the proposed algorithms can greatly improve the performance of the discretization schemes in terms of both accuracy and quality of the underlying grids. Moreover, since the training phase is performed off-line and is independent of the differential model, the overall computational costs are kept low.
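
A minimal sketch of the kind of classifier this pipeline needs (not the authors' architecture; the class count, input resolution and layer sizes below are assumptions): a small CNN takes a rasterized binary image of a polygonal element and predicts a reference "shape" class, which then selects a predefined refinement rule.

```python
import torch
import torch.nn as nn

class ShapeClassifier(nn.Module):
    def __init__(self, n_classes=4):  # e.g. triangle-like, quad-like, ... (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 8 * 8, n_classes)  # sized for 32x32 inputs

    def forward(self, x):            # x: (batch, 1, 32, 32) rasterized elements
        z = self.features(x).flatten(1)
        return self.head(z)          # logits over reference shape classes

# Hypothetical usage: the predicted class indexes a table of refinement
# templates, e.g. refine_rules[ShapeClassifier()(raster).argmax(1)].
```

Because inference is a single cheap forward pass per element and training happens once off-line, the classification adds little to the cost of the adaptive loop.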

Related Content

Computational Fluid Dynamics (CFD) simulation by numerical solution of the Navier-Stokes equations is an essential tool in a wide range of applications, from engineering design to climate modeling. However, the computational cost and memory demand of CFD codes may become very high for flows of practical interest, such as in aerodynamic shape optimization. This expense stems from the complexity of the governing equations of fluid flow, whose non-linear partial differential terms are difficult to solve, leading to long computational times and limiting the number of hypotheses that can be tested during iterative design. We therefore propose DeepCFD: a convolutional neural network (CNN) based model that efficiently approximates solutions to the problem of non-uniform steady laminar flow. The proposed model learns complete solutions of the Navier-Stokes equations, for both velocity and pressure fields, directly from ground-truth data generated by a state-of-the-art CFD code. Using DeepCFD, we obtain a speedup of up to three orders of magnitude over the standard CFD approach while incurring only low error rates.
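
A minimal sketch of the surrogate idea, not the published DeepCFD architecture: an encoder-decoder CNN maps a geometry representation (here a single assumed input channel, e.g. a signed distance field) to the steady-state fields (u, v, p); channel counts are illustrative.

```python
import torch.nn as nn

class SurrogateCFD(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),  # u, v, p channels
        )

    def forward(self, geometry):     # geometry: (batch, 1, H, W)
        return self.decode(self.encode(geometry))
```

Trained with a regression loss against CFD ground truth, a single forward pass replaces an iterative solve, which is where the orders-of-magnitude speedup comes from.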

We propose an efficient, accurate and robust IMEX solver for the compressible Navier-Stokes equations with a general equation of state. The method, based on an $h$-adaptive Discontinuous Galerkin spatial discretization and an Additive Runge-Kutta IMEX time discretization, is tailored for low Mach number applications: it simulates low Mach regimes at a significantly reduced computational cost, while maintaining full second-order accuracy in higher Mach number regimes as well. The method has been implemented in the framework of the deal.II numerical library, whose adaptive mesh refinement capabilities are employed to enhance efficiency. Refinement indicators appropriate for real gas phenomena have been introduced. A number of numerical experiments on classical benchmarks for compressible flows, and on their extension to real gases, demonstrate the properties of the proposed method.
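
For context, a generic two-component additive Runge-Kutta IMEX stage in its standard form (not necessarily the specific tableau used in the paper): the non-stiff terms $f_E$ (e.g. convection) use an explicit tableau $(a_{ij})$, while the stiff terms $f_I$ (e.g. the acoustic terms that dominate at low Mach number) use an implicit tableau $(\tilde a_{ij})$.

```latex
\[
  u^{(i)} = u^n
    + \Delta t \sum_{j=1}^{i-1} a_{ij}\, f_E\!\left(u^{(j)}\right)
    + \Delta t \sum_{j=1}^{i}   \tilde a_{ij}\, f_I\!\left(u^{(j)}\right),
  \qquad
  u^{n+1} = u^n
    + \Delta t \sum_{i=1}^{s} b_i\, f_E\!\left(u^{(i)}\right)
    + \Delta t \sum_{i=1}^{s} \tilde b_i\, f_I\!\left(u^{(i)}\right).
\]
```

Treating the acoustic terms implicitly removes the acoustic CFL restriction, which is what makes low Mach regimes affordable without sacrificing accuracy at higher Mach numbers.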

We introduce and analyse the first order Enlarged Enhancement Virtual Element Method (E$^2$VEM) for the Poisson problem. The method has the interesting property of allowing the definition of bilinear forms that do not require a stabilization term. We provide a proof of well-posedness and optimal order a priori error estimates. Numerical tests on convex and non-convex polygonal meshes confirm the theoretical convergence rates.
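
For context, the standard first-order VEM bilinear form (not the E$^2$VEM construction) splits on each element $E$ into a consistency part, built from a projection $\Pi^\nabla$ onto polynomials, and a stabilization $S^E$; our reading of the abstract is that E$^2$VEM enlarges the enhancement space so that the stabilization term can be dropped.

```latex
\[
  a_h^E(u_h, v_h)
  = a^E\!\left(\Pi^\nabla u_h,\; \Pi^\nabla v_h\right)
  + S^E\!\left((I - \Pi^\nabla)\, u_h,\; (I - \Pi^\nabla)\, v_h\right)
\]
```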

We describe the first gradient methods on Riemannian manifolds to achieve accelerated rates in the non-convex case. Under Lipschitz assumptions on the Riemannian gradient and Hessian of the cost function, these methods find approximate first-order critical points faster than regular gradient descent. A randomized version also finds approximate second-order critical points. Both the algorithms and their analyses build extensively on existing work in the Euclidean case. The basic operation consists of running the Euclidean accelerated gradient descent method (appropriately safeguarded against non-convexity) in the current tangent space, then moving back to the manifold and repeating. This requires lifting the cost function from the manifold to the tangent space, which can be done for example through the Riemannian exponential map. For this approach to succeed, the lifted cost function (called the pullback) must retain certain Lipschitz properties. As a contribution of independent interest, we prove precise claims to that effect, with explicit constants. Those claims are affected by the Riemannian curvature of the manifold, which in turn affects the worst-case complexity bounds for our optimization algorithms.
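
A minimal sketch of the basic loop described above, with the safeguards and stopping rules omitted: pull the cost back to the current tangent space via the exponential map, run Euclidean accelerated gradient descent (AGD) there, and move back to the manifold. The `manifold.exp` and `euclidean_agd` interfaces and the iteration budgets are assumptions.

```python
import numpy as np

def tangent_space_agd(manifold, cost, x, outer_iters=50, inner_iters=100):
    for _ in range(outer_iters):
        # Pullback: the cost composed with the exponential map at x,
        # i.e. an ordinary Euclidean function on the tangent space T_x M.
        pullback = lambda v: cost(manifold.exp(x, v))
        v0 = np.zeros(manifold.dim)
        # Run (safeguarded) Euclidean AGD on the pullback from the origin.
        v = euclidean_agd(pullback, v0, iters=inner_iters)
        # Map the resulting tangent vector back onto the manifold.
        x = manifold.exp(x, v)
    return x
```

The analysis then hinges on the pullback retaining Lipschitz-type smoothness, with constants that degrade with the manifold's curvature.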

We propose a new method for solving the Gelfand-Levitan-Marchenko equation (GLME), based on a block version of the Toeplitz Inner-Bordering (TIB) scheme that can start the calculation from an arbitrary point. This makes it possible to find solutions of the GLME at an arbitrary point with a cutoff of the matrix coefficient, which avoids the onset of numerical instability and allows calculations for soliton solutions spaced far apart in the time domain. Using an example of two solitons, we demonstrate our method and its range of applicability. An example of eight solitons shows how the method can be applied to a more complex signal configuration.
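
For context, the classical scalar form of the Marchenko integral equation (the paper solves a block/coupled version of this): the kernel $F$ is built from the scattering data, and the potential is recovered from the diagonal of the solution $K$.

```latex
\[
  K(x, y) + F(x + y) + \int_{x}^{\infty} K(x, z)\, F(z + y)\, dz = 0,
  \qquad y > x,
  \qquad
  q(x) = -2\,\frac{d}{dx} K(x, x).
\]
```

Discretizing such an equation yields Toeplitz-structured linear systems, which is what the inner-bordering scheme exploits.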

We present a family of discretizations for the Variable Eddington Factor (VEF) equations that have high-order accuracy on curved meshes and efficient preconditioned iterative solvers. The VEF discretizations are combined with a high-order Discontinuous Galerkin transport discretization to form an effective high-order, linear transport method. The VEF discretizations are derived by extending the unified analysis of Discontinuous Galerkin methods for elliptic problems to the VEF equations. This framework is used to define analogs of the interior penalty, second method of Bassi and Rebay, minimal dissipation local Discontinuous Galerkin, and continuous finite element methods. The analysis of subspace correction preconditioners, which use a continuous operator to iteratively precondition the discontinuous discretization, is extended to the case of the non-symmetric VEF system. Numerical results demonstrate that the VEF discretizations have arbitrary-order accuracy on curved meshes, preserve the thick diffusion limit, and are effective on a proxy problem from thermal radiative transfer in both outer transport iterations and inner preconditioned linear solver iterations. In addition, a parallel weak scaling study of the interior penalty VEF discretization demonstrates the scalability of the method out to 1152 processors.
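
For context, the VEF equations in their standard first-two-moments form (notation may differ from the paper): the angular moments of the transport equation are closed with the Eddington tensor $\mathbf{E}$ computed from the transport solution $\psi$, and it is $\mathbf{E}$ appearing inside the divergence that makes the system non-symmetric.

```latex
\[
  \nabla \cdot \mathbf{J} + \sigma_a\, \varphi = Q_0,
  \qquad
  \nabla \cdot \left( \mathbf{E}\, \varphi \right) + \sigma_t\, \mathbf{J} = Q_1,
  \qquad
  \mathbf{E} =
  \frac{\int_{\mathbb{S}^2} \boldsymbol{\Omega} \otimes \boldsymbol{\Omega}\; \psi \, d\Omega}
       {\int_{\mathbb{S}^2} \psi \, d\Omega}.
\]
```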

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
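
A minimal sketch of uniform affine (asymmetric) quantization, the basic scheme most of the surveyed methods build on: real values are mapped to $b$-bit integers with a scale and zero-point, then dequantized for computation or analysis. The function names here are illustrative, not from any particular library.

```python
import numpy as np

def quantize(x, bits=4):
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)     # real units per integer step
    zero_point = int(qmin - round(x.min() / scale)) # integer that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(1024).astype(np.float32)
q, s, z = quantize(x, bits=4)
x_hat = dequantize(q, s, z)
print("max abs error:", np.abs(x - x_hat).max())    # bounded by about scale / 2
```

The memory arithmetic in the paragraph above follows directly: replacing 32-bit floats with a 4-bit integer representation plus a per-tensor scale shrinks storage by roughly 8x, and 2-bit values would give the quoted 16x.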

Deep reinforcement learning (RL) has achieved many recent successes, yet experiment turn-around time remains a key bottleneck in research and in practice. We investigate how to optimize existing deep RL algorithms for modern computers, specifically for a combination of CPUs and GPUs. We confirm that both policy gradient and Q-value learning algorithms can be adapted to learn using many parallel simulator instances. We further find it possible to train using batch sizes considerably larger than are standard, without negatively affecting sample complexity or final performance. We leverage these facts to build a unified framework for parallelization that dramatically hastens experiments in both classes of algorithm. All neural network computations use GPUs, accelerating both data collection and training. Our results include using an entire DGX-1 to learn successful strategies in Atari games in mere minutes, using both synchronous and asynchronous algorithms.
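
A minimal sketch of the synchronous sampling pattern the paper builds on (not its actual framework): many simulator instances are stepped in lockstep and their observations batched, so the policy runs as a single GPU forward pass per timestep. The Gym-style `envs` and the `policy` callable are assumed; episode resets on `done` are omitted for brevity.

```python
import numpy as np

def collect_batch(envs, policy, horizon=128):
    obs = np.stack([env.reset() for env in envs])   # (n_envs, obs_dim)
    trajectory = []
    for _ in range(horizon):
        actions = policy(obs)                        # one batched GPU call
        steps = [env.step(a) for env, a in zip(envs, actions)]
        next_obs = np.stack([s[0] for s in steps])
        rewards = np.array([s[1] for s in steps])
        dones = np.array([s[2] for s in steps])
        trajectory.append((obs, actions, rewards, dones))
        obs = next_obs
    return trajectory   # a large batch for a single optimizer update
```

Larger `n_envs` yields larger, better-utilized GPU batches for both inference and training, which is the mechanism behind the reported wall-clock speedups.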

We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GANs). First, we propose a new generator objective that is better suited to tackling mode collapse, and we apply an independent Autoencoder (AE) to constrain the generator, treating its reconstructed samples as "real" samples in order to slow down the convergence of the discriminator; this reduces the gradient-vanishing problem and stabilizes the model. Second, using the mappings between latent and data spaces provided by the AE, we further regularize the AE by the relative distance between latent and data samples, explicitly preventing the generator from falling into mode collapse. This idea arose from a new way we found to visualize mode collapse on the MNIST dataset. To the best of our knowledge, our method is the first to propose and successfully apply the relative distance between latent and data samples to stabilize GAN training. Third, our proposed model, the Generative Adversarial Autoencoder Network (GAAN), is stable and suffers from neither gradient vanishing nor mode collapse, as empirically demonstrated on synthetic, MNIST, MNIST-1K, CelebA and CIFAR-10 datasets. Experimental results show that our method approximates multi-modal distributions well and achieves better results than state-of-the-art methods on these benchmark datasets. Our model implementation is published here: //github.com/tntrung/gaan
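
A minimal sketch of the relative-distance idea described above (our reading of the abstract, not the authors' exact loss): relate pairwise distances in data space to the corresponding distances in latent space, so that distinct latent codes cannot all be mapped to the same generated sample.

```python
import torch

def relative_distance_penalty(z1, z2, x1, x2, eps=1e-8):
    dz = torch.norm(z1 - z2, dim=1)                # distance between latent codes
    dx = torch.norm((x1 - x2).flatten(1), dim=1)   # distance between generated samples
    # Mode collapse drives dx -> 0 while dz stays bounded away from 0,
    # making this ratio blow up; penalizing it keeps the generator from
    # collapsing distinct codes onto one mode.
    return (dz / (dx + eps)).mean()
```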

We propose a conditional non-autoregressive neural sequence model based on iterative refinement. The proposed model is designed based on the principles of latent variable models and denoising autoencoders, and is generally applicable to any sequence generation task. We extensively evaluate the proposed model on machine translation (En-De and En-Ro) and image caption generation, and observe that it significantly speeds up decoding while maintaining generation quality comparable to the autoregressive counterpart.
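
A minimal sketch of iterative-refinement decoding in its generic form (not the paper's exact model): start from an initial draft of the target sequence and repeatedly denoise it with the conditional model, each pass updating all positions in parallel rather than one token at a time. The `model.draft` and `model.decode` interfaces are assumptions.

```python
def iterative_refine(model, source, num_iters=4):
    target = model.draft(source)           # e.g. a copy- or length-based initial draft
    for _ in range(num_iters):
        # One parallel pass over all target positions, conditioned on the
        # source and the previous draft (denoising-autoencoder style).
        target = model.decode(source, target)
    return target
```

Because each pass is parallel over positions, a handful of refinement steps replaces a token-by-token decode, which is the source of the reported speedup.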
