
Numerical shock instability is a complex phenomenon that may occur in supersonic simulations. The Riemann solver is usually the crucial factor affecting both computational accuracy and numerical shock stability. In this paper, several classical Riemann solvers are discussed, with particular attention to the intrinsic mechanism of shock instability. It is found that momentum perturbations traversing the shock wave are a major cause of instability. Furthermore, the slope limiters used to suppress oscillations across the shock wave are also a key factor in computational stability. Several slope limiters can cause significant numerical errors near shock waves and make the computation fail to converge. Extra dissipation in Riemann solvers and slope limiters can help eliminate the instability, but it reduces computational accuracy. Introducing numerical dissipation properly is therefore critical for numerical computations. Here, a pressure-based shock indicator is used to locate the shock wave and tune the numerical dissipation. Overall, the presented methods show satisfactory results in both accuracy and stability.
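The abstract does not give the exact form of the shock indicator. As a rough illustration of the idea, the sketch below uses a Jameson-type second-difference pressure sensor to blend a low-dissipation flux with a dissipative one near shocks; all names and the blending rule are assumptions, not the paper's scheme.

```python
import numpy as np

def pressure_shock_indicator(p, eps=1e-12):
    """Jameson-type pressure sensor: near 0 in smooth flow, O(1) at shocks.

    p: 1D array of cell-centered pressures. This second-difference form is
    one common choice for a pressure-based indicator.
    """
    theta = np.zeros_like(p)
    theta[1:-1] = np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2]) \
        / (p[2:] + 2.0 * p[1:-1] + p[:-2] + eps)
    return np.clip(theta, 0.0, 1.0)

def blended_flux(f_accurate, f_dissipative, theta_face):
    """Introduce dissipation only where the sensor flags a shock: keep the
    accurate flux in smooth regions and fall back to the dissipative flux
    at shocks (theta_face: sensor value mapped to the cell interface,
    e.g. the max over the two adjacent cells)."""
    return (1.0 - theta_face) * f_accurate + theta_face * f_dissipative
```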

Related Content


Machine-learned normalizing flows can be used in the context of lattice quantum field theory to generate statistically correlated ensembles of lattice gauge fields at different action parameters. This work demonstrates how these correlations can be exploited for variance reduction in the computation of observables. Three different proof-of-concept applications are demonstrated using a novel residual flow architecture: continuum limits of gauge theories, the mass dependence of QCD observables, and hadronic matrix elements based on the Feynman-Hellmann approach. In all three cases, it is shown that statistical uncertainties are significantly reduced when machine-learned flows are incorporated as compared with the same calculations performed with uncorrelated ensembles or direct reweighting.
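As a toy illustration of the variance-reduction mechanism (a one-dimensional stand-in, not lattice field theory: a simple rescaling plays the role of the learned flow, and x**2 plays the role of an observable), the sketch below compares the statistical error of an observable difference computed on correlated versus independent ensembles.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x_a = rng.normal(0.0, 1.0, n)         # ensemble at action parameter a
x_b_corr = 1.05 * x_a                 # "flowed" (correlated) ensemble at parameter b
x_b_indep = rng.normal(0.0, 1.05, n)  # independently generated ensemble at b

obs = lambda x: x ** 2                # toy observable

# Error of the difference <O>_b - <O>_a: correlations cancel most fluctuations.
d_corr = obs(x_b_corr) - obs(x_a)
print("correlated:  ", d_corr.mean(), "+/-", d_corr.std() / np.sqrt(n))
err_indep = np.sqrt(obs(x_b_indep).var() / n + obs(x_a).var() / n)
print("uncorrelated:", obs(x_b_indep).mean() - obs(x_a).mean(), "+/-", err_indep)
```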

The calculation of the acoustic field in or around objects is an important task in acoustic engineering. To solve this task numerically, the boundary element method (BEM) is a commonly used approach, especially for infinite domains. The open-source tool Mesh2HRTF and its BEM core NumCalc provide users with a collection of free software for acoustic simulations without requiring in-depth knowledge of numerical methods. However, we feel that users should have a basic understanding of the methods behind the software they are using. We are convinced that this basic understanding helps avoid common mistakes and also helps to understand the software's requirements. Providing this background is the first motivation for this paper. A second motivation is to demonstrate the accuracy of NumCalc when solving benchmark problems, so that users can get an idea of the accuracy they can expect when using NumCalc, as well as its memory and CPU requirements. A third motivation is to give users detailed information about parts of the actual implementation that are usually not mentioned in the literature, e.g., the specific version of the fast multipole method and its clustering process, or how to use frequency-dependent admittance boundary conditions.
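For orientation, the equation that a BEM code such as NumCalc discretizes is the Kirchhoff-Helmholtz boundary integral equation; a standard textbook form for the exterior Helmholtz problem reads (sign and time-harmonic conventions vary between implementations and are not fixed here):

```latex
c(\mathbf{x})\, p(\mathbf{x})
  = \int_{\Gamma} \Bigl[ G(\mathbf{x},\mathbf{y})\,
      \frac{\partial p}{\partial n}(\mathbf{y})
    - \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n_{\mathbf{y}}}\,
      p(\mathbf{y}) \Bigr] \,\mathrm{d}\Gamma(\mathbf{y}),
\qquad
G(\mathbf{x},\mathbf{y})
  = \frac{e^{\mathrm{i}k\lvert\mathbf{x}-\mathbf{y}\rvert}}
         {4\pi\lvert\mathbf{x}-\mathbf{y}\rvert},
```

where Γ is the boundary surface, c(x) = 1/2 at a smooth boundary point, and a frequency-dependent admittance boundary condition closes the system by prescribing a relation between ∂p/∂n and p on Γ.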

Optical coherence tomography (OCT) suffers from speckle noise, which degrades image quality, especially in high-resolution modalities such as visible-light OCT (vis-OCT). The potential of conventional supervised deep learning denoising methods is limited by the difficulty of obtaining clean data. Here, we propose an innovative self-supervised strategy called Sub2Full (S2F) for OCT despeckling without clean data. The approach acquires two repeated B-scans, splits the spectrum of the first repeat to form a low-resolution input, and uses the full spectrum of the second repeat as the high-resolution target. The proposed method was validated on vis-OCT retinal images visualizing sublaminar structures in the outer retina and demonstrated superior performance over conventional Noise2Noise and Noise2Void schemes. The code is available at //github.com/PittOCT/Sub2Full-OCT-Denoising.
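A minimal sketch of how such a training pair might be assembled from raw fringe data; the window position, the lack of apodization, and the array layout are assumptions here, and the authors' actual preprocessing is in the linked repository.

```python
import numpy as np

def sub2full_pair(spec1, spec2):
    """Build one (input, target) pair from two repeated B-scan spectra.

    spec1, spec2: complex arrays of shape (n_k, n_ascans), the raw fringe
    spectra of two repeated B-scans at the same location. Keeping only the
    central half of repeat 1 halves the axial resolution of the input.
    """
    n_k = spec1.shape[0]
    lo, hi = n_k // 4, 3 * n_k // 4
    sub = np.zeros_like(spec1)
    sub[lo:hi] = spec1[lo:hi]               # sub-spectrum of repeat 1
    x = np.abs(np.fft.ifft(sub, axis=0))    # low-res input, speckle realization 1
    y = np.abs(np.fft.ifft(spec2, axis=0))  # full-res target, speckle realization 2
    return x, y   # a network trained on (x, y) never sees clean data
```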

Anderson acceleration (AA) is a technique for accelerating the convergence of an underlying fixed-point iteration. AA is widely used within computational science, with applications ranging from electronic structure calculation to the training of neural networks. Despite AA's widespread use, relatively little is understood about it theoretically. An important and unanswered question in this context is: to what extent can AA actually accelerate convergence of the underlying fixed-point iteration? While simple enough to state, this question appears rather difficult to answer. For example, it is unanswered even in the simplest non-trivial case, where the underlying fixed-point iteration consists of applying a two-dimensional affine function. In this note we consider a restarted variant of AA applied to solving symmetric linear systems, with a restart window of size one. Several results are derived from the analytical solution of a nonlinear eigenvalue problem characterizing residual propagation of the AA iteration. These include a complete characterization of the method for solving $2 \times 2$ linear systems, rigorously quantifying how the asymptotic convergence factor depends on the initial iterate, and quantifying by how much AA accelerates the underlying fixed-point iteration. We also prove that even if the underlying fixed-point iteration diverges, the associated AA iteration may still converge.
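A minimal sketch of the scheme analyzed here: restarted AA with window size one applied to a linear fixed-point iteration. The mixing coefficient is the standard AA(1) least-squares solution; the example map and all names are ours.

```python
import numpy as np

def aa1_restarted(g, x0, n_iter=50):
    """Restarted Anderson acceleration with window size one.

    Each cycle takes one plain fixed-point step and then one AA step that
    mixes the two most recent residuals, after which the history is
    discarded (the 'restart'). g is the fixed-point map x -> g(x).
    """
    x = x0
    for _ in range(n_iter):
        gx = g(x)
        r0 = gx - x                    # residual at x
        x1 = gx                        # plain fixed-point step
        gx1 = g(x1)
        r1 = gx1 - x1                  # residual at x1
        dr = r1 - r0
        denom = dr @ dr
        alpha = (r1 @ dr) / denom if denom > 0 else 0.0
        x = (1 - alpha) * gx1 + alpha * gx   # AA(1) step; history then restarts
    return x

# Example: solve A x = b via the fixed-point map x -> x + omega * (b - A x).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
omega = 0.3
x = aa1_restarted(lambda x: x + omega * (b - A @ x), np.zeros(2))
print(x, np.linalg.solve(A, b))
```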

Deep neural network models for image segmentation can be a powerful tool for the automation of motor claims handling processes in the insurance industry. A crucial aspect is the reliability of the model outputs when facing adverse conditions, such as low quality photos taken by claimants to document damages. We explore the use of a meta-classification model to assess the precision of segments predicted by a model trained for the semantic segmentation of car body parts. Different sets of features correlated with the quality of a segment are compared, and an AUROC score of 0.915 is achieved for distinguishing between high- and low-quality segments. By removing low-quality segments, the average mIoU of the segmentation output is improved by 16 percentage points and the number of wrongly predicted segments is reduced by 77%.
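The abstract does not list the exact features; the sketch below illustrates the general meta-classification recipe on synthetic per-segment features (all feature definitions and the 0.5 IoU quality threshold are assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: per-segment features such as mean softmax confidence,
# confidence spread, and log segment area. Label = 1 if segment IoU > 0.5.
n = 2000
quality = rng.uniform(0, 1, n)                        # latent segment IoU
X = np.column_stack([
    quality + 0.1 * rng.normal(size=n),               # mean confidence tracks IoU
    0.3 * (1 - quality) + 0.05 * rng.normal(size=n),  # confidence spread
    rng.normal(4, 1, n),                              # log area (uninformative here)
])
y = (quality > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
meta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, meta.predict_proba(X_te)[:, 1]))
# Segments whose predicted quality is low can then be dropped before reporting.
```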

Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian associated with the tangent bundle, readily derived from common graph-based approximations of data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction achieves classification accuracy for disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
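A rough sketch of the positional-encoding step, simplified to the ordinary graph Laplacian acting on scalar signals; RVGP itself uses eigenvectors of the connection Laplacian of the tangent bundle, whose construction (local tangent frames and transport maps) this sketch omits.

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

# Point cloud sampled from a latent manifold (here: the unit sphere).
pts = np.random.default_rng(0).normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

W = kneighbors_graph(pts, n_neighbors=10, mode="connectivity")
W = 0.5 * (W + W.T)                          # symmetrize the kNN graph
L = csgraph.laplacian(W, normed=True)        # graph Laplacian (scalar stand-in)
vals, vecs = eigsh(L, k=32, which="SM")      # smoothest eigenvectors
phi = vecs                                   # (n_points, 32) positional encodings
# phi[i] encodes the position of point i, e.g. as inputs to a GP kernel.
```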

The optimal one-sided parametric polynomial approximants of a circular arc are considered. More precisely, the approximant must lie entirely inside or outside the underlying circle of the arc. The natural restriction to approximants that interpolate the arc's boundary points is assumed, and approximants that additionally interpolate the corresponding tangent directions and curvatures at the arc's boundary are also studied. Several low-degree polynomial approximants are examined in detail. When several solutions fulfilling the interpolation conditions exist, the optimal one is characterized, and a numerical algorithm for its construction is suggested. The theoretical results are demonstrated with several numerical examples, and a comparison with general (i.e., non-one-sided) approximants is provided.
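To make the one-sidedness condition concrete, assume (without loss of generality) the unit circle and the customary algebraic radial error:

```latex
\mathbf{p}(t) = \bigl( x(t),\, y(t) \bigr), \quad t \in [0,1],
\qquad
f(t) := x(t)^2 + y(t)^2 - 1 .
```

The approximant lies outside the circle precisely when f(t) ≥ 0 on [0,1] and inside when f(t) ≤ 0; interpolation of the boundary points forces f(0) = f(1) = 0, and optimality can then be measured, for instance, by minimizing max_{t∈[0,1]} |f(t)|. (The abstract does not fix the error functional; this radial error is a standard choice.)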

Barrier functions are crucial for maintaining an intersection- and inversion-free simulation trajectory, but existing methods that directly use distance can restrict implementation design and performance. We present an approach that rewrites the barrier function to arrive at an efficient and robust approximation of its Hessian. The key idea is to formulate a simplicial geometric measure of contact using mesh boundary elements, from which analytic eigensystems are derived and enhanced with filtering and stiffening terms that ensure robustness with respect to the convergence of a Projected-Newton solver. A further advantage of our rewriting of the barrier function is that it naturally handles the notorious case of nearly parallel edge-edge contacts, for which we also present a novel analytic eigensystem. Our approach is thus well suited to standard second-order unconstrained optimization strategies for resolving contacts, which minimize nonlinear, nonconvex functions whose Hessian may be indefinite. The efficiency of our eigensystems alone yields a 3x speedup over the standard IPC barrier formulation. We further apply our analytic proxy eigensystems to produce an entirely GPU-based implementation of IPC with significant further acceleration.
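For context, the sketch below shows the standard IPC distance barrier and the generic numeric PSD projection that a Projected-Newton solver needs; the paper's contribution is precisely to replace this numeric eigendecomposition with analytic eigensystems of a rewritten barrier, which the sketch does not reproduce.

```python
import numpy as np

def ipc_barrier(d, dhat):
    """Standard IPC log-barrier: active only when the distance d < dhat."""
    return -(d - dhat) ** 2 * np.log(d / dhat) if d < dhat else 0.0

def project_spd(H):
    """Generic (non-analytic) PSD projection of a local Hessian block, as
    required by Projected Newton: clamp negative eigenvalues to zero."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.maximum(vals, 0.0)) @ vecs.T

H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite local Hessian block
print(project_spd(H))                      # -> diag(2, 0)
```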

We develop a numerical method for computing with orthogonal polynomials that are orthogonal on multiple, disjoint intervals for which analytical formulae are currently unknown. Our approach exploits the Fokas--Its--Kitaev Riemann--Hilbert representation of the orthogonal polynomials to produce an $\mathrm{O}(N)$ method to compute the first $N$ recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory.
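Once the first N recurrence coefficients {a_k, b_k} are available, the polynomials themselves follow from the three-term recurrence; a sketch under one common orthonormal convention (conventions for b_0 and the normalization vary):

```python
import numpy as np

def eval_orthonormal(x, a, b, p0=1.0):
    """Evaluate orthonormal polynomials p_0..p_N at x from their recurrence
    coefficients via  b[k+1] p_{k+1} = (x - a[k]) p_k - b[k] p_{k-1}.

    a: length N+1 diagonal coefficients; b: length N+1 off-diagonal
    coefficients, with b[0] unused in the recursion.
    """
    N = len(a) - 1
    P = np.zeros((N + 1,) + np.shape(x))
    P[0] = p0
    P[1] = (x - a[0]) * P[0] / b[1]
    for k in range(1, N):
        P[k + 1] = ((x - a[k]) * P[k] - b[k] * P[k - 1]) / b[k + 1]
    return P

# Chebyshev-like example: a_k = 0, b_1 = 1/sqrt(2), b_k = 1/2 (k >= 2) gives
# polynomials orthonormal w.r.t. the (normalized) Chebyshev weight.
a = np.zeros(6)
b = np.array([0.0, 1 / np.sqrt(2), 0.5, 0.5, 0.5, 0.5])
print(eval_orthonormal(0.3, a, b))
```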

Pairwise comparison models are used for quantitatively evaluating utility and ranking in various fields. The increasing scale of modern problems underscores the need to understand statistical inference in these models when the number of subjects diverges, which is currently lacking in the literature except in a few special instances. This paper addresses this gap by establishing an asymptotic normality result for the maximum likelihood estimator in a broad class of pairwise comparison models. The key idea lies in identifying the Fisher information matrix as a weighted graph Laplacian, which can be studied via a meticulous spectral analysis. Our findings provide the first unified theory for performing statistical inference in a wide range of pairwise comparison models beyond the Bradley--Terry model, providing practitioners with a solid theoretical guarantee for their use. Simulations on synthetic data are conducted to validate the asymptotic normality result, followed by a hypothesis test using a tennis competition dataset.
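A minimal sketch of the Bradley--Terry special case: the MLE via gradient ascent and the Fisher information assembled as a weighted graph Laplacian. The step-size scaling and the zero-mean identifiability constraint are our choices.

```python
import numpy as np

def bt_mle(wins, n_iter=500, lr=0.1):
    """Maximum likelihood for the Bradley--Terry model by gradient ascent.
    wins[i, j] = number of times subject i beat subject j."""
    n = wins.shape[0]
    games = wins + wins.T
    theta = np.zeros(n)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(theta[None, :] - theta[:, None]))  # P(i beats j)
        grad = (wins - games * p).sum(axis=1)
        theta += lr * grad / games.sum(axis=1).clip(min=1)
        theta -= theta.mean()          # fix the location (identifiability)
    return theta

def fisher_info_laplacian(theta, games):
    """Fisher information of the BT model: a weighted graph Laplacian with
    edge weights w_ij = n_ij * p_ij * (1 - p_ij)."""
    p = 1.0 / (1.0 + np.exp(theta[None, :] - theta[:, None]))
    W = games * p * (1.0 - p)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

wins = np.array([[0, 8, 6], [2, 0, 5], [4, 5, 0]])
theta = bt_mle(wins)
print(theta)
print(fisher_info_laplacian(theta, wins + wins.T))
```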
