
This paper investigates the competitiveness of semi-implicit Runge-Kutta (RK) and spectral deferred correction (SDC) time-integration methods up to order six for incompressible Navier-Stokes problems in conjunction with a high-order discontinuous Galerkin method for space discretization. It is proposed to harness the implicit and explicit RK parts as a partitioned scheme, which provides a natural basis for the underlying projection scheme and yields a straightforward approach for accommodating nonlinear viscosity. Numerical experiments on laminar flow, variable viscosity and transition to turbulence are carried out to assess accuracy, convergence and computational efficiency. Although the methods of order 3 or higher are susceptible to order reduction due to time-dependent boundary conditions, two third-order RK methods are identified that perform well in all test cases and clearly surpass all second-order schemes, including the popular extrapolated backward difference method. The considered SDC methods are more accurate than the RK methods but become competitive only for relative errors smaller than roughly $10^{-5}$.
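As a concrete, greatly simplified illustration of the semi-implicit splitting that underlies such partitioned schemes, the sketch below advances $du/dt = f_{\mathrm{ex}}(u) + \lambda u$ by one first-order IMEX Euler step, treating the stiff linear term implicitly and the rest explicitly. The function names are illustrative assumptions; this is not the authors' higher-order RK or SDC scheme.

```python
import numpy as np

def imex_euler_step(u, dt, f_ex, lam):
    """One semi-implicit (IMEX) Euler step for du/dt = f_ex(u) + lam*u.

    The non-stiff part f_ex is evaluated explicitly at the old state,
    while the stiff linear term lam*u is treated implicitly:
        (1 - dt*lam) u_new = u + dt * f_ex(u).
    """
    rhs = u + dt * f_ex(u)
    return rhs / (1.0 - dt * lam)
```

With `f_ex = 0` and `lam = -1` this reduces to implicit Euler for pure exponential decay, which is a convenient sanity check of the splitting.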

Related content

A new and efficient neural-network and finite-difference hybrid method is developed for solving the Poisson equation in a regular domain with jump discontinuities on embedded irregular interfaces. Since the solution has low regularity across the interface, applying a finite difference discretization to this problem requires an additional treatment accounting for the jump discontinuities. Here, we aim to ease this extra implementation effort by means of machine learning methodology. The key idea is to decompose the solution into singular and regular parts. The neural network learning machinery incorporating the given jump conditions finds the singular solution, while the standard finite difference method is used to obtain the regular solution with associated boundary conditions. Regardless of the interface geometry, these two tasks require only supervised learning for function approximation and a fast direct solver for the Poisson equation, making the hybrid method easy to implement and efficient. The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives, and it is comparable with the traditional immersed interface method in the literature. As an application, we solve the Stokes equations with singular forces to demonstrate the robustness of the present method.
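The regular-part component of such a hybrid pairing, a standard second-order finite-difference Poisson solve, can be sketched in one dimension as follows. The function name is an illustrative assumption, and the dense `np.linalg.solve` call stands in for the fast direct solver the abstract refers to.

```python
import numpy as np

def poisson_fd_1d(f, n):
    """Second-order finite-difference solve of -u'' = f on (0, 1).

    Homogeneous Dirichlet conditions u(0) = u(1) = 0 are assumed;
    n interior grid points with spacing h = 1/(n+1).
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Tridiagonal stiffness matrix of the standard 3-point stencil for -u''.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))
```

For $f(x) = \pi^2 \sin(\pi x)$ the exact solution is $u(x) = \sin(\pi x)$, so the discretization error shrinks at second order as the grid is refined.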

In this paper, we propose a new approach for the time-discretization of the incompressible stochastic Stokes equations with multiplicative noise. Our new strategy is based on the classical Milstein method from stochastic differential equations. We use the energy method for its error analysis and show a strong convergence order of at most $1$ for both velocity and pressure approximations. The proof is based on a new H\"older continuity estimate of the velocity solution. While the errors of the velocity approximation are estimated in the standard $L^2$- and $H^1$-norms, the pressure errors are carefully analyzed in a special norm because of the low regularity of the pressure solution. In addition, a new interpretation of the pressure solution, which is very useful in computation, is introduced. Numerical experiments are provided to validate the error estimates and their sharpness.
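The classical Milstein update that this strategy builds on can be sketched for a scalar SDE $dX = a(X)\,dt + b(X)\,dW$ as below. This is the textbook one-dimensional scheme, not the paper's discretization of the stochastic Stokes system, and the names are illustrative.

```python
import numpy as np

def milstein_step(x, dt, dW, a, b, db):
    """One Milstein step for dX = a(X) dt + b(X) dW.

    db is the derivative b'(X); the final term is the Ito correction
    that lifts Euler-Maruyama's strong order 1/2 to strong order 1.
    """
    return (x + a(x) * dt + b(x) * dW
            + 0.5 * b(x) * db(x) * (dW**2 - dt))
```

A quick check against geometric Brownian motion, whose exact solution $X_t = X_0 \exp((\mu - \sigma^2/2)t + \sigma W_t)$ is known in closed form, shows the pathwise error staying small at moderate step sizes.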

In this work we propose a weighted hybridizable discontinuous Galerkin method (W-HDG) for drift-diffusion problems. By using specific exponential weights when computing the $L^2$ product in each cell of the discretization, we are able to replicate the behavior of the Slotboom change of variables and eliminate the drift term from the local matrix contributions. We show that the proposed numerical scheme is well-posed, and we validate numerically that it has the same properties as classical HDG methods, including optimal convergence and superconvergence of postprocessed solutions. For polynomial degree zero, dimension one, and vanishing HDG stabilization parameter, W-HDG coincides with the Scharfetter-Gummel stabilized finite volume scheme (i.e., it produces the same system matrix). The use of local exponential weights generalizes the Scharfetter-Gummel stabilization (the state of the art for finite volume discretization of transport-dominated problems) to arbitrarily high-order approximations.
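A minimal sketch of the Scharfetter-Gummel flux that W-HDG reduces to in the lowest-order case is given below, using the Bernoulli function $B(x) = x/(e^x - 1)$. Sign conventions for the potential drop vary across the literature, the drop is assumed to be in thermal-voltage units, and all names are illustrative.

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at 0 handled."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-10
    xs = np.where(small, 1.0, x)          # avoid 0/0 in the generic branch
    return np.where(small, 1.0 - x / 2.0, xs / np.expm1(xs))

def sg_flux(n_left, n_right, v_drop, h, D=1.0):
    """Scharfetter-Gummel numerical flux between two neighboring cells.

    n_left, n_right are carrier densities, v_drop the potential drop
    (in thermal-voltage units) over the cell spacing h, D the diffusivity.
    """
    return (D / h) * (bernoulli(-v_drop) * n_left
                      - bernoulli(v_drop) * n_right)
```

In the zero-drift limit $B(0) = 1$, so the flux degenerates to the plain diffusive difference $(D/h)(n_{\text{left}} - n_{\text{right}})$, which is the usual consistency check for this stabilization.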

In this work, we propose a high-order multiscale method for an elliptic model problem with rough and possibly highly oscillatory coefficients. Convergence rates of higher order are obtained using the regularity of the right-hand side only. Hence, no restrictive assumptions on the coefficient, the domain, or the exact solution are required. In the spirit of the Localized Orthogonal Decomposition, the method constructs coarse problem-adapted ansatz spaces by solving auxiliary problems on local subdomains. More precisely, our approach is based on the strategy presented by Maier [SIAM J. Numer. Anal. 59(2), 2021]. The unique selling point of the proposed method is an improved localization strategy curing the effect of deteriorating errors with respect to the mesh size when the local subdomains are not large enough. We present a rigorous a priori error analysis and demonstrate the performance of the method in a series of numerical experiments.

Deep learning technology has made great progress in multi-view 3D reconstruction tasks. At present, most mainstream solutions establish the mapping between the views and the shape of an object by assembling networks of a 2D encoder and a 3D decoder as the basic structure, while they adopt different approaches to aggregate features from several views. Among them, the methods using attention-based fusion perform better and more stably than the others; however, they still have an obvious shortcoming: each view is processed independently when predicting the merging weights, so the weights lack adaptation to the global state. In this paper, we propose a global-aware attention-based fusion approach that builds the correlation between each branch and the global state to provide a comprehensive foundation for weight inference. In order to enhance the ability of the network, we introduce a novel loss function to supervise the overall shape and propose a dynamic two-stage training strategy that can effectively adapt to all reconstructors with attention-based fusion. Experiments on ShapeNet verify that our method outperforms existing SOTA methods while using far fewer parameters than the same type of algorithm, Pix2Vox++. Furthermore, we propose a view-reduction method based on maximizing diversity and discuss the cost-performance tradeoff of our model to achieve better performance when facing a large number of input views under a limited computational budget.
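A toy NumPy caricature of the global-aware idea is shown below: each view is scored jointly with a global context vector (here simply the mean over views), so the fusion weights depend on the whole set rather than on each view in isolation. This illustrates the principle only; it is not the paper's network, and the names and the choice of mean pooling are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # stabilized softmax
    e = np.exp(z)
    return e / e.sum()

def global_aware_fusion(view_feats, w):
    """Fuse per-view features using globally conditioned attention weights.

    view_feats: (V, D) array of per-view features.
    w: (2D,) scoring vector applied to [view feature, global context].
    Returns the fused (D,) feature and the (V,) attention weights.
    """
    g = view_feats.mean(axis=0)                          # global state
    scores = np.array([w @ np.concatenate([f, g]) for f in view_feats])
    alpha = softmax(scores)                              # fusion weights
    return alpha @ view_feats, alpha
```

Because the global context `g` enters every score, changing any single view shifts all weights, which is precisely the coupling that purely per-view weight prediction lacks.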

Machine learning is the study of computer algorithms that can improve automatically based on data and experience. Machine learning algorithms build a model from sample data, called training data, to make predictions or judgments without being explicitly programmed to do so. A variety of well-known machine learning algorithms have been developed in the field of computer science to analyze data. This paper introduces a new machine learning algorithm called impact learning. Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems. It is furthermore well suited to analyzing competitive data. This algorithm is remarkable for learning from competitive situations, where the competition comes from the effects of autonomous features. It is built on the impacts of the features, derived from the intrinsic rate of natural increase (RNI). Moreover, we demonstrate the advantage of impact learning over conventional machine learning algorithms.

Gradient compression is a popular technique for improving the communication complexity of stochastic first-order methods in distributed training of machine learning models. However, existing works consider only with-replacement sampling of stochastic gradients. In contrast, it is well known in practice, and recently confirmed in theory, that stochastic methods based on without-replacement sampling, e.g., the Random Reshuffling (RR) method, perform better than those that sample the gradients with replacement. In this work, we close this gap in the literature and provide the first analysis of methods with gradient compression and without-replacement sampling. We first develop a na\"ive combination of random reshuffling with gradient compression (Q-RR). Perhaps surprisingly, the theoretical analysis of Q-RR does not show any benefit of using RR; our extensive numerical experiments confirm this phenomenon, which happens due to the additional compression variance. To reveal the true advantages of RR in distributed learning with compression, we propose a new method called DIANA-RR that reduces the compression variance and has provably better convergence rates than existing counterparts with with-replacement sampling of stochastic gradients. Next, to better fit Federated Learning applications, we incorporate local computation: we propose and analyze variants of Q-RR and DIANA-RR, namely Q-NASTYA and DIANA-NASTYA, that use local gradient steps and different local and global stepsizes. Finally, we conduct several numerical experiments to illustrate our theoretical results.
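The na\"ive Q-RR combination can be caricatured as follows: one epoch visits the data without replacement via a random permutation, and each gradient passes through an unbiased rand-$k$ compressor before the step. This is a minimal single-node sketch under illustrative names, not the full distributed method, and the quadratic test problem is an assumption.

```python
import numpy as np

def rand_k(g, k, rng):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k."""
    d = g.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(g)
    out[idx] = g[idx] * (d / k)
    return out

def qrr_epoch(x, grads, lr, k, rng):
    """One epoch of naive Q-RR: a fresh permutation (without-replacement
    sampling), then one compressed gradient step per component function."""
    for i in rng.permutation(len(grads)):
        x = x - lr * rand_k(grads[i](x), k, rng)
    return x
```

On a simple quadratic sum $\sum_i \tfrac12\|x - a_i\|^2$ the iterates contract toward the mean of the $a_i$, but the rand-$k$ rescaling injects exactly the extra compression variance the abstract identifies as the reason Q-RR shows no RR benefit.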

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
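The basic uniform quantization underlying many of the surveyed methods can be sketched as follows: a tensor of floats is mapped to low-bit signed integers using a single scale chosen from its maximum magnitude. This is a minimal symmetric per-tensor quantizer for illustration, not any particular surveyed scheme, and the function names are assumptions.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Symmetric uniform quantization of a float array to signed integers.

    The scale maps the largest magnitude in x onto the top of the signed
    integer range [-2^(b-1), 2^(b-1) - 1]; round-to-nearest introduces at
    most half a quantization step of error per element.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to (approximate) float values."""
    return q.astype(np.float32) * scale
```

At 4 bits the codes occupy $[-8, 7]$ and the round-trip error is bounded by half the scale, which makes concrete why shrinking the bit width trades accuracy for a smaller memory footprint.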

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
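As a minimal sketch of the pruning half of the first category, unstructured magnitude pruning zeroes out the smallest-magnitude fraction of a weight tensor. The helper below is illustrative only, not a specific surveyed method, and the name is an assumption.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    Returns a new array of the same shape; the surviving weights are
    unchanged, so the dense forward pass still works while most
    multiplications become multiplications by zero.
    """
    flat = w.ravel().copy()
    k = int(round(sparsity * flat.size))
    if k > 0:
        idx = np.argsort(np.abs(flat))[:k]   # indices of smallest weights
        flat[idx] = 0.0
    return flat.reshape(w.shape)
```

Realizing actual memory or latency savings from the induced zeros additionally requires sparse storage formats or hardware support, which is part of what the surveyed work addresses.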
