
Iterative refinement (IR) is a popular scheme for solving a linear system of equations based on gradually improving the accuracy of an initial approximation. Originally developed to improve upon the accuracy of Gaussian elimination, interest in IR has been revived because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement to the IR algorithm by adding a line search optimization step that guarantees the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of our proposed scheme.
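A minimal sketch of the idea: an IR loop whose correction step is scaled by an exact line search on the residual norm, so the residual can never increase. The low-precision inner solver (a float32 approximate inverse) and problem sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Iterative refinement with an exact line search on the residual norm.
# The inner solve is done in float32 to mimic low-precision hardware;
# this is an illustrative sketch, not the paper's exact algorithm.
def ir_with_line_search(A, b, inner_solve, iters=20):
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x                      # current residual
        if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
            break
        d = inner_solve(r)                 # approximate (low-precision) correction
        Ad = A @ d
        # closed-form minimizer of ||b - A(x + a*d)||_2 over the scalar a,
        # which guarantees the residual norm does not increase
        a = (r @ Ad) / (Ad @ Ad)
        x = x + a * d
    return x

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
Minv32 = np.linalg.inv(A.astype(np.float32))      # "low-precision" solver
x = ir_with_line_search(
    A, b, lambda r: (Minv32 @ r.astype(np.float32)).astype(np.float64)
)
```

With a plain IR step (`a = 1`) the iteration can diverge when the inner solve is too inaccurate; the line search removes that failure mode at the cost of one extra matrix-vector product per iteration.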

Related content

The Information Retrieval Journal (IR) provides an international forum for the publication of theory, algorithm analysis, and experiments across the broad field of information retrieval. Topics of interest include search, indexing, analysis, and evaluation for applications such as the web, social and streaming media, recommender systems, and text archives. This includes research on human factors in search, bridging artificial intelligence and information retrieval, and domain-specific search applications. Official website:

We propose a new second-order accurate lattice Boltzmann formulation for linear elastodynamics that is stable for arbitrary combinations of material parameters under a CFL-like condition. The construction of the numerical scheme uses an equivalent first-order hyperbolic system of equations as an intermediate step, for which a vectorial lattice Boltzmann formulation is introduced. The only difference to conventional lattice Boltzmann formulations is the usage of vector-valued populations, so that all computational benefits of the algorithm are preserved. Using the asymptotic expansion technique and the notion of pre-stability structures we further establish second-order consistency as well as analytical stability estimates. Lastly, we introduce a second-order consistent initialization of the populations as well as a boundary formulation for Dirichlet boundary conditions on 2D rectangular domains. All theoretical derivations are numerically verified by convergence studies using manufactured solutions and long-term stability tests.
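To illustrate the stream-and-collide structure that the vectorial formulation above reuses, here is a minimal scalar D1Q2 lattice Boltzmann scheme for 1D diffusion on a periodic grid. This is emphatically not the paper's elastodynamics scheme; the grid size, relaxation rate, and initial condition are illustrative assumptions.

```python
import numpy as np

# Minimal scalar D1Q2 lattice Boltzmann solver for 1D diffusion (periodic).
# NOT the paper's vectorial elastodynamics scheme; it only sketches the
# stream-and-collide structure that vector-valued populations would reuse.
nx, steps, omega = 200, 400, 1.2
# diffusivity implied by the BGK relaxation rate (lattice units dx = dt = 1)
D = 1.0 / omega - 0.5

x = np.arange(nx)
rho0 = np.exp(-0.5 * ((x - nx / 2) / 5.0) ** 2)   # initial Gaussian bump
f = np.stack([rho0 / 2, rho0 / 2])                # right/left-moving populations

for _ in range(steps):
    rho = f[0] + f[1]                  # conserved moment (mass density)
    feq = np.stack([rho / 2, rho / 2]) # equilibrium: zero mean flux
    f += omega * (feq - f)             # BGK collision (conserves rho exactly)
    f[0] = np.roll(f[0], 1)            # stream the right-moving population
    f[1] = np.roll(f[1], -1)           # stream the left-moving population

rho = f[0] + f[1]
```

In the vectorial scheme the scalar populations above would become small vectors (one component per conserved field of the first-order hyperbolic system), while collision and streaming keep exactly this form.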

Approximating invariant subspaces of generalized eigenvalue problems (GEPs) is a fundamental computational problem at the core of machine learning and scientific computing. It is, for example, the root of Principal Component Analysis (PCA) for dimensionality reduction, data visualization, and noise filtering, and of Density Functional Theory (DFT), arguably the most popular method to calculate the electronic structure of materials. Given Hermitian $H,S\in\mathbb{C}^{n\times n}$, where $S$ is positive-definite, let $\Pi_k$ be the true spectral projector on the invariant subspace that is associated with the $k$ smallest (or largest) eigenvalues of the GEP $HC=SC\Lambda$, for some $k\in[n]$. We show that we can compute a matrix $\widetilde\Pi_k$ such that $\lVert\Pi_k-\widetilde\Pi_k\rVert_2\leq \epsilon$, in $O\left( n^{\omega+\eta}\mathrm{polylog}(n,\epsilon^{-1},\kappa(S),\mathrm{gap}_k^{-1}) \right)$ bit operations in the floating point model, for some $\epsilon\in(0,1)$, with probability $1-1/n$. Here, $\eta>0$ is arbitrarily small, $\omega\lesssim 2.372$ is the matrix multiplication exponent, $\kappa(S)=\lVert S\rVert_2\lVert S^{-1}\rVert_2$, and $\mathrm{gap}_k$ is the gap between eigenvalues $k$ and $k+1$. To achieve such provable "forward-error" guarantees, our methods rely on a new $O(n^{\omega+\eta})$ stability analysis for the Cholesky factorization, and a smoothed analysis for computing spectral gaps, which can be of independent interest. Ultimately, we obtain new matrix multiplication-type bit complexity upper bounds for PCA problems, including classical PCA and (randomized) low-rank approximation.
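For orientation, a dense textbook construction of the spectral projector the abstract refers to (real symmetric matrices for brevity): with $S$-orthonormal eigenvectors $C$ of $HC = SC\Lambda$, the projector onto the invariant subspace of the $k$ smallest eigenvalues is $\Pi_k = C_k C_k^{\top} S$. This is a reference construction via a full eigendecomposition, not the paper's fast $O(n^{\omega+\eta})$ algorithm; sizes are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

# Dense reference construction of the spectral projector for the GEP
# H C = S C Lambda (real symmetric case). Illustrative, not the paper's
# fast matrix-multiplication-time algorithm.
rng = np.random.default_rng(1)
n, k = 8, 3
M = rng.standard_normal((n, n))
H = (M + M.T) / 2                        # symmetric
R = rng.standard_normal((n, n))
S = R @ R.T + n * np.eye(n)              # symmetric positive-definite

w, C = eigh(H, S)                        # generalized eigenproblem; C.T @ S @ C = I
Ck = C[:, :k]                            # eigenvectors of the k smallest eigenvalues
Pk = Ck @ Ck.T @ S                       # (oblique) spectral projector
```

Idempotency `Pk @ Pk == Pk` and invariance `Pk @ Ck == Ck` follow from the $S$-orthonormality of the eigenvectors, which is the normalization `scipy.linalg.eigh` returns for this problem type.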

We investigate the convergence of the primal-dual algorithm for composite optimization problems when the objective functions are weakly convex. We introduce a modified duality gap function, which is a lower bound of the standard duality gap function. Under the sharpness condition of this new function, we identify the area around the set of saddle points where we obtain the convergence of the primal-dual algorithm. We give numerical examples and applications in image denoising and deblurring to demonstrate our results.
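For context, a sketch of the classical primal-dual (Chambolle-Pock) iteration on a convex composite instance, 1D total-variation denoising $\min_x \tfrac12\|x-b\|^2 + \lambda\|Dx\|_1$. The weakly convex setting and the modified duality gap analyzed in the abstract are not reproduced here; problem data and step sizes are illustrative.

```python
import numpy as np

# Primal-dual (Chambolle-Pock) iteration for min_x 0.5||x-b||^2 + lam*||Dx||_1,
# a convex instance of the composite problems discussed above. Illustrative
# sketch only; the weakly convex analysis is not reproduced.
n, lam = 100, 2.0
rng = np.random.default_rng(2)
b = np.repeat([0.0, 5.0, 1.0, 4.0], n // 4) + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)        # forward-difference operator (the "K")
L = 2.0                               # upper bound on ||D||_2
tau = sigma = 0.99 / L                # step sizes with tau * sigma * L**2 < 1

x = b.copy()
xbar = x.copy()
y = np.zeros(n - 1)
for _ in range(500):
    # dual ascent: prox of (lam*||.||_1)* is projection onto [-lam, lam]
    y = np.clip(y + sigma * (D @ xbar), -lam, lam)
    # primal descent: prox of 0.5||. - b||^2 in closed form
    x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)
    xbar = 2 * x_new - x              # extrapolation step
    x = x_new

obj = lambda u: 0.5 * np.sum((u - b) ** 2) + lam * np.sum(np.abs(D @ u))
```

The dual variable stays in the box $[-\lambda,\lambda]^{n-1}$ by construction, and the primal objective at the iterate drops below its value at the noisy input.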

Discontinuous Galerkin (DG) methods for solving elliptic equations are gaining popularity in the computational physics community for their high-order spectral convergence and their potential for parallelization on computing clusters. However, problems in numerical relativity with extremely stretched grids, such as initial data problems for binary black holes that impose boundary conditions at large distances from the black holes, have proven challenging for DG methods. To alleviate this problem we have developed a primal DG scheme that is generically applicable to a large class of elliptic equations, including problems on curved and extremely stretched grids. The DG scheme accommodates two widely used initial data formulations in numerical relativity, namely the puncture formulation and the extended conformal thin-sandwich (XCTS) formulation. We find that our DG scheme is able to stretch the grid by a factor of $\sim 10^9$ and hence allows us to impose boundary conditions at large distances. The scheme converges exponentially with resolution both for the smooth XCTS problem and for the nonsmooth puncture problem. With this method we are able to generate high-quality initial data for binary black hole problems using a parallelizable DG scheme. The code is publicly available in the open-source SpECTRE numerical relativity code.
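The smooth-versus-nonsmooth convergence behavior reported above can be illustrated generically with polynomial spectral approximation: exponential error decay for analytic functions, only algebraic decay at a kink. This is a one-dimensional Chebyshev demonstration of the general principle, not the SpECTRE DG scheme; functions and degrees are illustrative.

```python
import numpy as np

# Generic illustration of spectral convergence: Chebyshev interpolation
# converges exponentially for smooth (analytic) functions and only
# algebraically for nonsmooth ones. Not the SpECTRE DG scheme itself.
def cheb_error(f, deg):
    # interpolate at Chebyshev nodes, measure the max error on a fine grid
    xk = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
    coeffs = np.polynomial.chebyshev.chebfit(xk, f(xk), deg)
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(f(xs) - np.polynomial.chebyshev.chebval(xs, coeffs)))

smooth = cheb_error(np.exp, 20)   # analytic: error near machine precision
kink = cheb_error(np.abs, 20)     # C^0 kink at 0: only algebraic decay
```

The gap of many orders of magnitude between the two errors at the same polynomial degree is the one-dimensional analogue of the exponential convergence DG methods achieve on smooth problems.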

This paper introduces a nonconforming virtual element method for general second-order elliptic problems with variable coefficients on domains with curved boundaries and curved internal interfaces. We prove arbitrary order optimal convergence in the energy and $L^2$ norms, confirmed by numerical experiments on a set of polygonal meshes. The accuracy of the numerical approximation provided by the method is shown to be comparable with the theoretical analysis.

Deep learning has been highly successful in some applications. Nevertheless, its use for solving partial differential equations (PDEs) has only been of recent interest with current state-of-the-art machine learning libraries, e.g., TensorFlow or PyTorch. Physics-informed neural networks (PINNs) are an attractive tool for solving partial differential equations based on sparse and noisy data. Here we extend PINNs to solve obstacle-related PDEs, which present a great computational challenge because they necessitate numerical methods that can yield an accurate approximation of the solution that lies above a given obstacle. The performance of the proposed PINNs is demonstrated in multiple scenarios for linear and nonlinear PDEs subject to regular and irregular obstacles.
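To make the "solution lies above a given obstacle" constraint concrete, here is a classical projected Gauss-Seidel baseline for the 1D discrete obstacle problem $-u'' = f$, $u \ge \psi$, $u(0)=u(1)=0$. This is a standard non-neural solver shown only to clarify the problem class the PINNs target; the load and obstacle are hypothetical.

```python
import numpy as np

# Classical projected Gauss-Seidel for the 1D discrete obstacle problem
#   -u'' = f on (0,1),  u >= psi,  u(0) = u(1) = 0.
# A standard baseline that clarifies the constraint; not the paper's PINN.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)               # interior grid points
f = -8.0 * np.ones(n)                      # downward load (hypothetical)
psi = 0.5 - 4.0 * (x - 0.5) ** 2 - 0.6     # parabolic obstacle (hypothetical)

u = np.zeros(n)
for _ in range(2000):                      # projected Gauss-Seidel sweeps
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        # unconstrained Gauss-Seidel update, then project onto u >= psi
        u[i] = max(psi[i], 0.5 * (h * h * f[i] + left + right))
```

At convergence the solution equals the obstacle on the contact region (where the unconstrained membrane would dip below it) and satisfies the PDE elsewhere; a PINN approach replaces this sweep with a network trained against the PDE residual plus an obstacle constraint.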

Principal component analysis (PCA) is one of the most popular dimension reduction techniques in statistics and is especially powerful when a multivariate distribution is concentrated near a lower-dimensional subspace. Multivariate extreme value distributions have turned out to provide challenges for the application of PCA since their constrained support impedes the detection of lower-dimensional structures and heavy tails can imply that second moments do not exist, thereby preventing the application of classical variance-based techniques for PCA. We adapt PCA to max-stable distributions using a regression setting and employ max-linear maps to project the random vector to a lower-dimensional space while preserving max-stability. We also provide a characterization of those distributions which allow for a perfect reconstruction from the lower-dimensional representation. Finally, we demonstrate how an optimal projection matrix can be consistently estimated and show viability in practice with a simulation study and application to a benchmark dataset.
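A small sketch of the max-linear maps involved: $(A \odot z)_i = \max_j A_{ij} z_j$. Composing a projection with a reconstruction is again max-linear, via the max-times matrix product, which is what allows the projected vector to remain max-stable. Matrix shapes and entries below are illustrative assumptions.

```python
import numpy as np

# Max-linear maps, (A ⊙ z)_i = max_j A_ij * z_j, as used for max-stable PCA.
# Shapes and entries are illustrative; this only demonstrates the algebra.
def maxlin(A, z):
    # max-times matrix-vector product (entries assumed nonnegative)
    return np.max(A * z, axis=1)

def maxprod(B, A):
    # max-times matrix-matrix product: (B ⊗ A)_ik = max_j B_ij * A_jk
    return np.max(B[:, :, None] * A[None, :, :], axis=1)

rng = np.random.default_rng(3)
A = rng.uniform(size=(2, 5))               # "projection" to 2 components
B = rng.uniform(size=(5, 2))               # "reconstruction" back to 5 dims
z = 1.0 / -np.log(rng.uniform(size=5))     # unit-Frechet sample (max-stable)
```

Because all entries are nonnegative, composition is associative: applying `B` after `A` equals applying the single max-linear map `maxprod(B, A)`, so projection followed by reconstruction is itself a max-linear (hence max-stability-preserving) map.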

In the modelling of stochastic phenomena, such as quasi-reaction systems, parameter estimation of kinetic rates can be challenging, particularly when the time gap between consecutive measurements is large. Local linear approximation approaches account for the stochasticity in the system but fail to capture the nonlinear nature of the underlying process. At the mean level, the dynamics of the system can be described by a system of ODEs, which have an explicit solution only for simple unitary systems. An analytical solution for generic quasi-reaction systems is proposed via a first order Taylor approximation of the hazard rate. This allows a nonlinear forward prediction of the future dynamics given the current state of the system. Predictions and corresponding observations are embedded in a nonlinear least-squares approach for parameter estimation. The performance of the algorithm is compared to existing SDE and ODE-based methods via a simulation study. Besides the increased computational efficiency of the approach, the results show an improvement in the kinetic rate estimation, particularly for data observed at large time intervals. Additionally, the availability of an explicit solution makes the method robust to stiffness, which is often present in biological systems. An illustration on Rhesus Macaque data shows the applicability of the approach to the study of cell differentiation.
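The core idea of the explicit forward prediction can be sketched as follows: linearize $\dot x = f(x)$ at the current state $x_0$, $f(x) \approx f(x_0) + J(x_0)(x-x_0)$, and use the closed form of the resulting affine ODE. The toy system below is illustrative and exact for linear dynamics; it is not the paper's hazard-rate model.

```python
import numpy as np
from scipy.linalg import expm, solve

# Forward prediction via first-order linearization of dx/dt = f(x) at x0:
#   x(t) ≈ x0 + J0^{-1} (expm(J0 t) - I) f(x0),   J0 = J(x0) invertible.
# Illustrative sketch of the idea; not the paper's hazard-rate model.
def predict(f, J, x0, t):
    J0, f0 = J(x0), f(x0)
    return x0 + solve(J0, (expm(J0 * t) - np.eye(len(x0))) @ f0)

# Toy check: for a *linear* system f(x) = A x the prediction is exact,
# since the linearization introduces no error.
A = np.array([[-1.0, 0.5], [0.2, -0.8]])
x0 = np.array([1.0, 2.0])
xt = predict(lambda x: A @ x, lambda x: A, x0, t=0.7)
exact = expm(A * 0.7) @ x0
```

For nonlinear hazard rates the same formula gives an explicit (approximate) forward map, which is what makes the nonlinear least-squares estimation step cheap compared to repeated numerical ODE integration.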

This contribution is dedicated to the exploration of exponential operator splitting methods for the time integration of evolution equations. It entails the review of previous achievements as well as the depiction of novel results. The standard class of splitting methods involving real coefficients is contrasted with an alternative approach that relies on the incorporation of complex coefficients. In view of long-term computations for linear evolution equations, it is expedient to distinguish symmetric, symmetric-conjugate, and alternating-conjugate schemes. The scope of applications comprises high-order reaction-diffusion equations and complex Ginzburg-Landau equations, which are of relevance in the theories of patterns and superconductivity. Time-dependent Gross-Pitaevskii equations and their parabolic counterparts, which model the dynamics of Bose-Einstein condensates and arise in ground state computations, are formally included as special cases. Numerical experiments confirm the validity of theoretical stability conditions and global error bounds as well as the benefits of higher-order complex splitting methods in comparison with standard schemes.
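The order distinction between basic splitting schemes can be checked numerically on a linear toy problem $u' = (A+B)u$ with non-commuting $A$, $B$: Lie splitting $e^{hB}e^{hA}$ is first order, while the symmetric Strang composition $e^{hA/2}e^{hB}e^{hA/2}$ is second order. Only real-coefficient schemes are exercised here; the complex-coefficient variants discussed above are not. The matrices are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Empirical order check for Lie vs. Strang splitting on u' = (A + B) u
# with non-commuting A, B. Real-coefficient schemes only; illustrative.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.3, -0.2]])
u0 = np.array([1.0, 0.0])

def global_error(stepper, h, T=1.0):
    S = stepper(h)                          # one-step propagation matrix
    u = u0.copy()
    for _ in range(int(round(T / h))):
        u = S @ u
    return np.linalg.norm(u - expm((A + B) * T) @ u0)

lie = lambda h: expm(h * B) @ expm(h * A)                       # order 1
strang = lambda h: expm(h * A / 2) @ expm(h * B) @ expm(h * A / 2)  # order 2

# halving h should roughly halve the Lie error and quarter the Strang error
r_lie = global_error(lie, 0.02) / global_error(lie, 0.01)
r_strang = global_error(strang, 0.02) / global_error(strang, 0.01)
```

The same experiment, repeated with higher-order compositions (including the complex-coefficient symmetric-conjugate schemes), is the pattern behind the convergence studies reported in the abstract.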

The incompressible Euler equations are an important model system in computational fluid dynamics. Fast high-order methods for the solution of this time-dependent system of partial differential equations are of particular interest: due to their exponential convergence in the polynomial degree they can make efficient use of computational resources. To address this challenge we describe a novel timestepping method which combines a hybridised Discontinuous Galerkin method for the spatial discretisation with IMEX timestepping schemes, thus achieving high-order accuracy in both space and time. The computational bottleneck is the solution of a (block-) sparse linear system to compute updates to pressure and velocity at each stage of the IMEX integrator. Following Chorin's projection approach, this update of the velocity and pressure fields is split into two stages. As a result, the hybridised equation for the implicit pressure-velocity problem is reduced to the well-known system which arises in hybridised mixed formulations of the Poisson or diffusion problem and for which efficient multigrid preconditioners have been developed. Splitting errors can be reduced systematically by embedding this update into a preconditioned Richardson iteration. The accuracy and efficiency of the new method are demonstrated numerically for two time-dependent test cases that have been previously studied in the literature.
