
Modern shock-capturing schemes often suffer from numerical shock anomalies when the flow field contains strong shocks, which may limit their application to hypersonic flow computations. In the current study, we explore the primary numerical characteristics and the underlying mechanism of shock instability for second-order finite-volume schemes. To this end, we develop, for the first time, the matrix stability analysis method for the finite-volume MUSCL approach. This linearized analysis method makes it possible to investigate the shock instability problem of finite-volume shock-capturing schemes in a quantitative and efficient manner. Results of the stability analysis demonstrate that the shock stability of second-order schemes is strongly related to the Riemann solver, Mach number, limiter function, numerical shock structure, and computational grid. Unique stability characteristics associated with these factors for second-order methods are revealed quantitatively with the established method. The matrix stability analysis also localizes the source of the instability: results show that it originates from the numerical shock structure. These conclusions pave the way to a better understanding of the shock instability problem and may shed new light on developing more reliable shock-capturing methods for compressible flows at high Mach numbers.
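The core of matrix stability analysis can be illustrated on a much simpler model than the paper's MUSCL discretization: linearize the semi-discrete scheme as du/dt = L u, assemble L, and inspect its eigenvalues. The sketch below (an assumption for illustration, not the paper's setup) does this for first-order upwinding of linear advection on a periodic grid, where all eigenvalues provably have non-positive real part.

```python
import numpy as np

# Matrix stability analysis on a toy semi-discrete scheme: first-order
# upwind for linear advection u_t + a u_x = 0 on a periodic grid.
# The scheme du/dt = L u is linearly stable iff no eigenvalue of L
# has positive real part.
def upwind_operator(n, a=1.0, h=1.0):
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -a / h           # coefficient of u_i
        L[i, (i - 1) % n] = a / h  # coefficient of u_{i-1} (periodic)
    return L

L = upwind_operator(64)
eigs = np.linalg.eigvals(L)
max_real = eigs.real.max()
print(max_real)  # at most 0 up to round-off: the scheme is linearly stable
```

For a shock-capturing scheme the operator L is instead obtained by linearizing the nonlinear residual about a steady shock solution, but the diagnostic (the sign of the largest real part) is the same.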


This survey explores modern approaches for computing low-rank approximations of high-dimensional matrices by means of the randomized SVD, randomized subspace iteration, and randomized block Krylov iteration. The paper compares the procedures via theoretical analyses and numerical studies to highlight how the best choice of algorithm depends on spectral properties of the matrix and the computational resources available. Despite superior performance for many problems, randomized block Krylov iteration has not been widely adopted in computational science. This paper strengthens the case for this method in three ways. First, it presents new pseudocode that can significantly reduce computational costs. Second, it provides a new analysis that yields simple, precise, and informative error bounds. Last, it showcases applications to challenging scientific problems, including principal component analysis for genetic data and spectral clustering for molecular dynamics data.
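A minimal sketch of the first method the survey discusses, the randomized SVD with oversampling and optional subspace (power) iterations; parameter names and defaults here are illustrative assumptions, not the survey's pseudocode.

```python
import numpy as np

def randomized_svd(A, k, p=5, q=1, rng=None):
    """Rank-k randomized SVD with p columns of oversampling and
    q power iterations (q > 0 helps when the spectrum decays slowly)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega
    for _ in range(q):                        # subspace iteration
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                    # orthonormal range basis
    B = Q.T @ A                               # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# usage: rank-5 approximation of a matrix with fast spectral decay
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100)) @ np.diag(0.5 ** np.arange(100)) \
    @ rng.standard_normal((100, 100))
U, s, Vt = randomized_svd(A, k=5, q=2, rng=0)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(rel_err)  # small, since the neglected singular values are tiny
```

Block Krylov iteration replaces the single power of (A Aᵀ) by keeping every intermediate block, which is where its accuracy advantage comes from.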

In this contribution, we provide a new mass lumping scheme for explicit dynamics in isogeometric analysis (IGA). To this end, an element formulation based on the idea of dual functionals is developed. Non-Uniform Rational B-splines (NURBS) are applied as shape functions, and their corresponding dual basis functions are applied as test functions in the variational form, where two kinds of dual basis functions are compared. The first type are approximate dual basis functions (AD) with varying degree of reproduction, resulting in banded mass matrices. The second type are dual basis functions derived from the inversion of the Gram matrix (IG), which already yield diagonal mass matrices. We show that the dual scheme can be applied as a transformation of the system of equations resulting from NURBS as shape and test functions; hence, it can easily be implemented into existing IGA routines. Treating the application of dual test functions as a preconditioner reduces the additional computational effort but cannot erase it entirely, and the stiffness matrix remains denser than in standard Bubnov-Galerkin formulations. In return, applying additional row-sum lumping to the mass matrices is either unnecessary (for IG) or causes a loss of accuracy that is reduced to a reasonable magnitude (for AD). Numerical examples show a significantly better approximation of the dynamic behavior for the dual lumping scheme than for standard NURBS approaches with row-sum lumping. Applying IG yields accurate numerical results without additional lumping, but because the IG dual basis functions have global support, fully populated stiffness matrices occur, which are entirely unsuitable for explicit dynamic simulations. Combining AD and row-sum lumping leads to a computation that is efficient in both effort and accuracy.
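The row-sum lumping referred to above can be sketched in a few lines. This example uses classical 1D linear finite elements rather than NURBS (an assumption for brevity); the operation itself, replacing the consistent mass matrix by the diagonal of its row sums, is the same.

```python
import numpy as np

# Row-sum lumping on the consistent mass matrix of 1D linear finite
# elements: M_lumped = diag(row sums of M). Lumping preserves total mass
# while diagonalizing M, so explicit time stepping needs no linear solve.
def consistent_mass_1d(n_el, h=1.0):
    n = n_el + 1
    M = np.zeros((n, n))
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # element mass matrix
    for e in range(n_el):
        M[e:e + 2, e:e + 2] += Me
    return M

M = consistent_mass_1d(8, h=0.125)         # unit interval, 8 elements
M_lumped = np.diag(M.sum(axis=1))
print(np.trace(M_lumped), M.sum())         # both equal the domain length 1.0
```

The paper's point is that for higher-order NURBS this simple trick degrades accuracy noticeably, which is what the dual test functions are designed to mitigate.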

We present the formulation and optimization of a Runge-Kutta-type time-stepping scheme for solving the shallow water equations, aimed at substantially increasing the effective allowable time-step over that of comparable methods. This scheme, called FB-RK(3,2), uses weighted forward-backward averaging of thickness data to advance the momentum equation. The weights for this averaging are chosen with an optimization process that employs a von Neumann-type analysis, ensuring that the weights maximize the admissible Courant number. Through a simplified local truncation error analysis and numerical experiments, we show that the method is at least second order in time for any choice of weights and exhibits low dispersion and dissipation errors for well-resolved waves. Further, we show that an optimized FB-RK(3,2) can take time-steps up to 2.8 times as large as a popular three-stage, third-order strong stability preserving Runge-Kutta method (SSPRK3) in a quasi-linear test case. In fully nonlinear shallow water test cases relevant to oceanic and atmospheric flows, FB-RK(3,2) outperforms SSPRK3 in admissible time-step by factors roughly between 1.6 and 2.2, making the scheme approximately twice as computationally efficient with little to no effect on solution quality.
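Forward-backward averaging in its simplest form can be sketched on the linearized 1D shallow water equations h_t = -H u_x, u_t = -g h_x: the thickness is advanced first, and an average of the old and updated thickness drives the momentum update. This single-stage sketch (my simplification; FB-RK(3,2) embeds optimized weights of this kind inside a three-stage Runge-Kutta method) already shows the mechanism.

```python
import numpy as np

# One forward-backward step for the linearized 1D shallow water equations
# on a periodic grid with centered differences: thickness first, then the
# forward-backward averaged thickness drives the momentum update.
def fb_step(h, u, dt, dx, g=9.81, H=1.0, beta=1.0):
    ddx = lambda q: (np.roll(q, -1) - np.roll(q, 1)) / (2 * dx)
    h_new = h - dt * H * ddx(u)
    h_avg = beta * h_new + (1.0 - beta) * h   # forward-backward average
    u_new = u - dt * g * ddx(h_avg)
    return h_new, u_new

# usage: a standing gravity wave stays bounded over many steps
n = 64; dx = 1.0 / n
x = np.arange(n) * dx
h = 0.1 * np.sin(2 * np.pi * x)
u = np.zeros(n)
dt = 0.2 * dx / np.sqrt(9.81)   # well inside the stability limit
for _ in range(200):
    h, u = fb_step(h, u, dt, dx)
print(np.abs(h).max())          # stays O(0.1): no blow-up
```

With beta = 1 this is the symplectic-Euler-like update whose amplification factor lies on the unit circle for sufficiently small Courant number, which is exactly the quantity the paper's weight optimization maximizes.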

A key consideration in the development of numerical schemes for time-dependent partial differential equations (PDEs) is the ability to preserve certain properties of the continuum solution, such as associated conservation laws or other geometric structures of the solution. There is a long history of the development and analysis of such structure-preserving discretisation schemes, including both proofs that standard schemes have structure-preserving properties and proposals for novel schemes that achieve both high-order accuracy and exact preservation of certain properties of the continuum differential equation. When coupled with implicit time-stepping methods, a major downside to these schemes is that their structure-preserving properties generally rely on exact solution of the (possibly nonlinear) systems of equations defining each time-step in the discrete scheme. For small systems, this is often possible (up to the accuracy of floating-point arithmetic), but it becomes impractical for the large linear systems that arise from typical discretisations of space-time PDEs. In this paper, we propose a modification to the standard flexible generalised minimum residual (FGMRES) iteration that enforces selected constraints on approximate numerical solutions. We demonstrate its application to both systems of conservation laws and dissipative systems.
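The underlying idea, making an inexact iterate satisfy selected linear constraints exactly, can be sketched outside of any Krylov method: given constraints C x = d, the minimal-norm correction is x + Cᵀ(C Cᵀ)⁻¹(d - C x). This is a simplified stand-alone analogue of the paper's constraint-enforcing FGMRES, not its actual algorithm.

```python
import numpy as np

# Minimal-norm correction enforcing the linear constraints C x = d on an
# approximate solution x (e.g. conservation of a discrete invariant).
def enforce_constraints(x, C, d):
    r = d - C @ x
    return x + C.T @ np.linalg.solve(C @ C.T, r)

# toy example: enforce the discrete conservation law sum(x) = 1 on an
# inexact iterate (hypothetical stand-in for an unconverged Krylov solve)
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)
b = rng.standard_normal(50)
x_approx = np.linalg.solve(A, b) + 1e-3 * rng.standard_normal(50)
C = np.ones((1, 50))
d = np.array([1.0])
x_fixed = enforce_constraints(x_approx, C, d)
print(x_fixed.sum())  # exactly 1 up to round-off
```

The paper's contribution is to build such enforcement into the FGMRES iteration itself rather than applying it as a post-processing step.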

We apply the collision-based hybrid method introduced in \cite{hauck} to the Boltzmann equation with the BGK operator and a hyperbolic scaling. An implicit treatment of the source term is used to handle the stiffness associated with the BGK operator. Although this keeps the numerical scheme stable with a large time step size, achieving the desired order of accuracy is still not straightforward because of the relationship between the size of the spatial cell and the mean free path. Without the asymptotic-preserving property, a very restrictive grid size is required to resolve the mean free path, which is not practical. Our approach is based on the noncollision-collision decomposition of the BGK equation. We introduce an arbitrary-order nodal discontinuous Galerkin (DG) discretization in space with a semi-implicit time-stepping method: we employ backward Euler time integration for the uncollided equation and a second-order predictor-corrector scheme for the collided equation, i.e., the source terms in both the uncollided and collided equations are treated implicitly, and only the streaming term in the collided equation is solved explicitly. This improves computational efficiency without complicating the numerical implementation. Numerical results are presented for various Knudsen numbers to demonstrate the effectiveness and accuracy of our hybrid method. We also compare the solutions of the hybrid and non-hybrid schemes.
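Why the implicit source treatment removes the stiffness restriction is easiest to see on the relaxation term in isolation: for df/dt = (M - f)/ε, backward Euler has the closed-form update below and is stable for any dt, even dt ≫ ε. This is a simplified sketch: in the actual BGK equation the Maxwellian M depends on moments of f, which is held fixed here.

```python
import numpy as np

# Backward Euler for the stiff BGK relaxation term df/dt = (M - f)/eps
# with M frozen: solving f_new = f + (dt/eps) * (M - f_new) gives the
# closed form below, unconditionally stable in dt.
def bgk_relax(f, M, dt, eps):
    return (f + (dt / eps) * M) / (1.0 + dt / eps)

f = np.array([1.0, 0.0, 2.0])
M = np.array([1.0, 1.0, 1.0])   # stand-in equilibrium (Maxwellian)
eps, dt = 1e-6, 1e-2            # time step 10^4 times the relaxation time
for _ in range(5):
    f = bgk_relax(f, M, dt, eps)
print(f)  # driven toward M with no stability restriction on dt
```

Stability with large dt is, however, not the same as accuracy in the ε → 0 limit, which is precisely the asymptotic-preserving issue the hybrid decomposition addresses.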

An intensive line of research on fixed parameter tractability of integer programming is focused on exploiting the relation between the sparsity of a constraint matrix $A$ and the norm of the elements of its Graver basis. In particular, integer programming is fixed parameter tractable when parameterized by the primal tree-depth and the entry complexity of $A$, and when parameterized by the dual tree-depth and the entry complexity of $A$; both these parameterizations imply that $A$ is sparse, in particular, the number of its non-zero entries is linear in the number of columns or rows, respectively. We study preconditioners transforming a given matrix to a row-equivalent sparse matrix when one exists, and provide structural results characterizing the existence of a sparse row-equivalent matrix in terms of the structural properties of the associated column matroid. In particular, our results imply that the $\ell_1$-norm of the Graver basis is bounded by a function of the maximum $\ell_1$-norm of a circuit of $A$. We use our results to design a parameterized algorithm that constructs a matrix row-equivalent to an input matrix $A$ that has small primal/dual tree-depth and entry complexity if such a row-equivalent matrix exists. Our results yield parameterized algorithms for integer programming when parameterized by the $\ell_1$-norm of the Graver basis of the constraint matrix, when parameterized by the $\ell_1$-norm of the circuits of the constraint matrix, when parameterized by the smallest primal tree-depth and entry complexity of a matrix row-equivalent to the constraint matrix, and when parameterized by the smallest dual tree-depth and entry complexity of a matrix row-equivalent to the constraint matrix.

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because the first approximation is computed numerically. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations subject to the error boundary conditions. We also formulate the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values derived from the approximations of the error differential equation to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
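The fourth-order compact scheme used for the correction step can be sketched on the model problem -u'' = f, u(0) = u(1) = 0 (a simpler linear instance than the paper's error BVPs): the classical compact relation (u_{i-1} - 2u_i + u_{i+1})/h² = (φ_{i-1} + 10φ_i + φ_{i+1})/12 with φ = u'' = -f yields a tridiagonal system.

```python
import numpy as np

# Fourth-order compact finite differences for -u'' = f on (0,1) with
# u(0) = u(1) = 0: standard second differences on the left, a [1,10,1]/12
# weighted average of f on the right, giving O(h^4) accuracy from a
# three-point stencil.
def compact_bvp(f, n):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    fv = f(x)
    m = n - 1                              # interior unknowns
    A = np.zeros((m, m))
    i = np.arange(m)
    A[i, i] = -2.0 / h**2
    A[i[:-1], i[:-1] + 1] = 1.0 / h**2
    A[i[1:], i[1:] - 1] = 1.0 / h**2
    rhs = -(fv[:-2] + 10 * fv[1:-1] + fv[2:]) / 12.0
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# usage: manufactured solution u = sin(pi x), f = pi^2 sin(pi x)
x, u = compact_bvp(lambda x: np.pi**2 * np.sin(np.pi * x), 32)
err = np.abs(u - np.sin(np.pi * x)).max()
print(err)  # fourth-order accurate: well below 1e-5 on 32 cells
```

For the nonlinear BVPs in the paper the same stencil is applied inside an outer linearization loop; the sketch above shows only the linear building block.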

We propose a novel collocated projection method for solving the incompressible Navier-Stokes equations with arbitrary boundaries. Our approach employs non-graded octree grids, where all variables are stored at the nodes. To discretize the viscosity and projection steps, we utilize supra-convergent finite difference approximations with sharp boundary treatments. We demonstrate the stability of our projection on uniform grids, identify a sufficient stability condition on adaptive grids, and validate these findings numerically. We further demonstrate the accuracy and capabilities of our solver with several canonical two- and three-dimensional simulations of incompressible fluid flows. Overall, our method is second-order accurate, allows for dynamic grid adaptivity with arbitrary geometries, and reduces the overhead in code development through data collocation.

Krylov subspace methods are a ubiquitous tool for computing near-optimal rank $k$ approximations of large matrices. While "large block" Krylov methods with block size at least $k$ give the best known theoretical guarantees, block size one (a single vector) or a small constant is often preferred in practice. Despite their popularity, we lack theoretical bounds on the performance of such "small block" Krylov methods for low-rank approximation. We address this gap between theory and practice by proving that small block Krylov methods essentially match all known low-rank approximation guarantees for large block methods. Via a black-box reduction we show, for example, that the standard single vector Krylov method run for $t$ iterations obtains the same spectral norm and Frobenius norm error bounds as a Krylov method with block size $\ell \geq k$ run for $O(t/\ell)$ iterations, up to a logarithmic dependence on the smallest gap between sequential singular values. That is, for a given number of matrix-vector products, single vector methods are essentially as effective as any choice of large block size. By combining our result with tail-bounds on eigenvalue gaps in random matrices, we prove that the dependence on the smallest singular value gap can be eliminated if the input matrix is perturbed by a small random matrix. Further, we show that single vector methods match the more complex algorithm of [Bakshi et al. `22], which combines the results of multiple block sizes to achieve an improved algorithm for Schatten $p$-norm low-rank approximation.
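A minimal block Krylov low-rank approximation looks as follows; with block size b = 1 it is exactly the "single vector" method discussed above. This is an illustrative sketch, not the pseudocode of any cited paper.

```python
import numpy as np

# Block Krylov iteration for rank-k approximation: accumulate the blocks
# A*Om, (A A^T) A*Om, ..., orthonormalize them jointly, and project.
# Block size b = 1 gives the single-vector method.
def block_krylov_lowrank(A, k, b, t, rng=None):
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Om = rng.standard_normal((n, b))
    blocks, Y = [], A @ Om
    for _ in range(t):
        blocks.append(Y)
        Y = A @ (A.T @ Y)                    # next Krylov block
    Q, _ = np.linalg.qr(np.hstack(blocks))   # basis of dimension b*t
    B = Q.T @ A
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U)[:, :k], s[:k], Vt[:k]

# usage: matrix with known geometric singular-value decay
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((80, 60)))
Q2, _ = np.linalg.qr(rng.standard_normal((60, 60)))
sv = 0.8 ** np.arange(60)
A = (Q1 * sv) @ Q2.T
U5, s5, V5t = block_krylov_lowrank(A, k=5, b=1, t=25, rng=1)
err = np.linalg.norm(A - (U5 * s5) @ V5t, 2)
print(err, sv[5])  # spectral error close to the optimal sigma_6
```

The result summarized above says this b = 1 variant matches the guarantees of a block size b ≥ k run for proportionally fewer iterations, up to a logarithmic dependence on singular value gaps.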

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
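The benign-landscape phenomenon described above can be seen on the simplest canonical problem, PSD matrix factorization: gradient descent on the nonconvex objective f(U) = ||U Uᵀ - M||²_F from a small random initialization (no spectral initialization step) still reaches a global minimizer. This toy instance with a fully observed M is my illustrative assumption; the overview's problems (matrix sensing, completion, etc.) observe M only indirectly.

```python
import numpy as np

# Vanilla gradient descent on f(U) = ||U U^T - M||_F^2 for a rank-r PSD
# ground truth M, illustrating the "initialization-free" regime: the
# random start escapes the saddle at U = 0 and the residual goes to zero.
rng = np.random.default_rng(0)
n, r = 30, 2
Ustar, _ = np.linalg.qr(rng.standard_normal((n, r)))
M = Ustar @ Ustar.T                      # ground-truth rank-r PSD matrix

U = 0.1 * rng.standard_normal((n, r))    # small random init, no spectral step
eta = 0.1                                # step size (spectrum of M is O(1))
for _ in range(2000):
    U -= eta * 4 * (U @ U.T - M) @ U     # gradient of f at symmetric M
err = np.linalg.norm(U @ U.T - M)
print(err)  # near zero: a global minimum despite nonconvexity
```

Note that U is only recovered up to an orthogonal transform, which is why convergence is measured through U Uᵀ rather than U itself; this invariance is one of the recurring technical points in the landscape analyses the overview surveys.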
