
The aim of this paper is to design explicit radial basis function (RBF) Runge-Kutta methods for initial value problems. We construct two-, three-, and four-stage RBF Runge-Kutta methods based on the Gaussian RBF Euler method with a shape parameter; the analysis of the local truncation error shows that the $s$-stage RBF Runge-Kutta method can formally achieve order $s+1$. A proof of convergence for these RBF Runge-Kutta methods follows. We then plot the stability region of each proposed RBF Runge-Kutta method and compare it with that of the corresponding standard Runge-Kutta method. Numerical experiments are provided to exhibit the improved behavior of the RBF Runge-Kutta methods over the standard ones.
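As an illustrative aside (not taken from the paper), the sketch below shows the generic stage structure of an explicit Runge-Kutta step driven by a Butcher tableau, here instantiated with the classical two-stage Heun method; the Gaussian-RBF stage construction and shape-parameter tuning developed in the paper are not reproduced.

```python
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit Runge-Kutta method given by a Butcher tableau (A, b, c)."""
    s = len(b)
    k = np.zeros((s, np.size(y)))
    for i in range(s):
        yi = y + h * (A[i, :i] @ k[:i])   # only earlier stages enter (explicit method)
        k[i] = f(t + c[i] * h, yi)
    return y + h * (b @ k)

# Classical two-stage Heun method as a concrete tableau.
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = np.array([0.0, 1.0])

# Test problem y' = -y, y(0) = 1, exact solution exp(-t).
f = lambda t, y: -y
t, y, h = 0.0, np.array([1.0]), 0.1
for _ in range(10):
    y = explicit_rk_step(f, t, y, h, A, b, c)
    t += h
print(y, np.exp(-t))
```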

Related Content

Quantized tensor trains (QTTs) have recently emerged as a framework for the numerical discretization of continuous functions, with the potential for widespread applications in numerical analysis. However, the theory of QTT approximation is not fully understood. In this work, we advance this theory from the point of view of multiscale polynomial interpolation. This perspective clarifies why QTT ranks decay with increasing depth, quantitatively controls QTT rank in terms of smoothness of the target function, and explains why certain functions with sharp features and poor quantitative smoothness can still be well approximated by QTTs. The perspective also motivates new practical and efficient algorithms for the construction of QTTs from function evaluations on multiresolution grids.
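As a hedged illustration (not one of the paper's multiscale-interpolation algorithms), the following sketch builds a quantized tensor train from function samples on a dyadic grid via the standard sequential-SVD (TT-SVD) construction; the grid size, tolerance, and example function are arbitrary choices.

```python
import numpy as np

def qtt_from_samples(values, d, tol=1e-10):
    """Compress 2**d function samples into a quantized tensor train via sequential SVDs."""
    cores, mat, r_prev = [], values.reshape(1, -1), 1
    for _ in range(d - 1):
        mat = mat.reshape(r_prev * 2, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(S > tol * S[0])))     # truncate small singular values
        cores.append(U[:, :r].reshape(r_prev, 2, r))
        mat = S[:r, None] * Vt[:r]                  # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, 2, 1))
    return cores

# Example: f(x) = exp(x) sampled on 2**10 uniform grid points in [0, 1).
d = 10
x = np.arange(2**d) / 2**d
cores = qtt_from_samples(np.exp(x), d)
print([c.shape for c in cores])   # ranks of this smooth, separable function stay small
```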

Many analyses of multivariate data focus on evaluating the dependence between two sets of variables, rather than the dependence among individual variables within each set. Canonical correlation analysis (CCA) is a classical data analysis technique that estimates parameters describing the dependence between such sets. However, inference procedures based on traditional CCA rely on the assumption that all variables are jointly normally distributed. We present a semiparametric approach to CCA in which the multivariate margins of each variable set may be arbitrary, but the dependence between variable sets is described by a parametric model that provides low-dimensional summaries of dependence. While maximum likelihood estimation in the proposed model is intractable, we propose two estimation strategies: one using a pseudolikelihood for the model and one using a Markov chain Monte Carlo (MCMC) algorithm that provides Bayesian estimates and confidence regions for the between-set dependence parameters. The MCMC algorithm is derived from a multirank likelihood function, which uses only part of the information in the observed data in exchange for being free of assumptions about the multivariate margins. We apply the proposed Bayesian inference procedure to Brazilian climate data and monthly stock returns from the materials and communications market sectors.
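For orientation, and as an assumption-laden sketch rather than the paper's semiparametric or multirank-likelihood procedure, classical CCA estimates the canonical correlations as the singular values of the whitened cross-covariance matrix; the data below are synthetic.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Classical CCA: canonical correlations between two sets of variables."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1)
    Syy = Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)
    # Whiten each block with a Cholesky factor, then take singular values.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))                    # shared latent factors
X = Z @ rng.standard_normal((2, 3)) + 0.5 * rng.standard_normal((500, 3))
Y = Z @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((500, 4))
print(canonical_correlations(X, Y))                  # leading correlations are large
```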

Motivated by the need for a rigorous analysis of the numerical stability of variational least-squares kernel-based methods for solving second-order elliptic partial differential equations, we provide previously lacking stability inequalities. This fills a significant theoretical gap in the previous work [Comput. Math. Appl. 103 (2021) 1-11], which provided error estimates based on a conjectured stability result. With the stability estimate now rigorously proven, we complete the theoretical foundations and compare the observed convergence behavior to the proven rates. Furthermore, we establish another stability inequality involving weighted discrete norms, and provide a theoretical proof demonstrating that exact quadrature weights are not necessary for the weighted least-squares kernel-based collocation method to converge. Our novel theoretical insights are validated by numerical examples, which showcase the relative efficiency and accuracy of these methods on data sets with large mesh ratios. The results confirm our theoretical predictions regarding the performance of the variational least-squares kernel-based method, the least-squares kernel-based collocation method, and our new weighted least-squares kernel-based collocation method. Most importantly, our results demonstrate that all methods converge at the same rate, consistent with our proven convergence theory for weighted least-squares methods.
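As a minimal, hedged sketch (not the paper's variational or weighted formulation), oversampled least-squares kernel collocation for a 1D Poisson problem $u''=f$ with a Gaussian kernel looks as follows; the shape parameter, point counts, and test problem are illustrative choices.

```python
import numpy as np

eps = 3.0                                    # Gaussian shape parameter (illustrative)
kern    = lambda x, z: np.exp(-eps**2 * (x - z)**2)
kern_xx = lambda x, z: (4 * eps**4 * (x - z)**2 - 2 * eps**2) * kern(x, z)

centers = np.linspace(0.0, 1.0, 25)          # trial centers z_j
xi = np.linspace(0.0, 1.0, 60)[1:-1]         # interior collocation points (oversampled)
xb = np.array([0.0, 1.0])                    # boundary collocation points

# Collocation system: PDE rows (u'' = f) plus boundary rows (u = 0), solved in least squares.
A = np.vstack([kern_xx(xi[:, None], centers[None, :]),
               kern(xb[:, None], centers[None, :])])
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * xi), np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the kernel expansion and compare with the exact solution sin(pi x).
xt = np.linspace(0.0, 1.0, 200)
uh = kern(xt[:, None], centers[None, :]) @ coef
print(np.max(np.abs(uh - np.sin(np.pi * xt))))
```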

This paper studies a quantum simulation technique for solving the Fokker-Planck equation. Traditional semi-discretization methods often fail to preserve the underlying Hamiltonian dynamics and may even modify the Hamiltonian structure, particularly when incorporating boundary conditions. We address this challenge by employing the Schrodingerization method, which converts any linear partial or ordinary differential equation with non-Hermitian dynamics into a system of Schrodinger-type equations. We explore its application to two distinct forms of the Fokker-Planck equation. For the conservation form, we show that the semi-discretization-based Schrodingerization is preferable, especially when dealing with non-periodic boundary conditions. Additionally, we analyze the Schrodingerization approach for unstable systems whose coefficient matrix or differential operator has eigenvalues with positive real part. Our analysis reveals that the direct use of Schrodingerization has the same effect as a stabilization procedure. For the heat equation form, we propose a quantum simulation procedure based on the time-splitting technique. We discuss the relationship between operator splitting in the Schrodingerization method and its application directly to the original problem, illustrating how the Schrodingerization method accurately reproduces the time-splitting solutions at each step. Furthermore, we explore finite difference discretizations of the heat equation form using shift operators. Utilizing Fourier bases, we diagonalize the shift operators, enabling efficient simulation in frequency space. Providing additional guidance on implementing the diagonal unitary operators, we conduct a comparative analysis between diagonalizations in the Bell and the Fourier bases, and show that the former is generally more efficient than the latter.
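As a classical illustration of the time-splitting idea invoked above (an assumption-laden sketch, not the Schrodingerization transformation or its quantum implementation), first-order Lie-Trotter splitting approximates the propagator $e^{(A+B)T}$ by repeated products $e^{A\Delta t}e^{B\Delta t}$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = -(A @ A.T)   # negative semi-definite generators
B = rng.standard_normal((n, n)); B = -(B @ B.T)

u0 = rng.standard_normal(n)
T, steps = 1.0, 200
dt = T / steps

# Exact propagator vs. first-order Lie-Trotter splitting exp(A dt) exp(B dt).
u_exact = expm((A + B) * T) @ u0
u_split = u0.copy()
EA, EB = expm(A * dt), expm(B * dt)
for _ in range(steps):
    u_split = EA @ (EB @ u_split)

print(np.linalg.norm(u_split - u_exact))   # O(dt) splitting error
```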

In this paper, a force-based beam finite element model based on a modified higher-order shear deformation theory is proposed for the accurate analysis of functionally graded beams. In the modified higher-order shear deformation theory, the distribution of transverse shear stress across the beam's thickness is obtained from the differential equilibrium equations for stress, and a modified shear stiffness is derived to take the effect of the transverse shear stress distribution into consideration. In the proposed beam element model, unlike traditional beam finite elements that regard generalized displacements as the unknown fields, the internal forces are taken as the unknown fields, and they are predefined using the closed-form solutions of the differential equilibrium equations of the higher-order shear beam. The generalized displacements are then expressed in terms of the internal forces through the geometric relations and constitutive equations, and the equation system of the beam element is constructed from the equilibrium conditions at the boundaries and the compatibility condition within the element. Numerical examples underscore the accuracy and efficacy of the proposed higher-order beam element model in the static analysis of functionally graded sandwich beams, particularly in terms of the true transverse shear stress distribution.

In this paper, we present an eigenvalue algorithm for block Hessenberg matrices that uses ideas from non-commutative integrable systems and matrix-valued orthogonal polynomials. We introduce adjacent families of matrix-valued $\theta$-deformed bi-orthogonal polynomials, and derive the corresponding discrete non-commutative hungry Toda lattice from discrete spectral transformations for the polynomials. It is shown that this discrete system can be used as a preprocessing algorithm for block Hessenberg matrices. In addition, convergence analysis and numerical examples for this algorithm are presented.

In this paper, we develop uniform- and variable-time-step weighted and shifted BDF2 (WSBDF2) methods for the anisotropic Cahn-Hilliard (CH) model, combining the scalar auxiliary variable (SAV) approach with two types of stabilization techniques. Using the concept of $G$-stability, the uniform-time-step WSBDF2 method is proved to be energy-stable. Because the relevant $G$-stability properties do not apply in the variable-step setting, a different technique is adopted in this work to establish the energy stability of the variable-time-step WSBDF2 method. In addition, both numerical schemes are mass-conservative. Finally, numerous numerical simulations are presented to demonstrate the stability and accuracy of these schemes.
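For reference, the uniform-time-step BDF2 backbone for an evolution equation $u_t = f(u)$ reads

$$\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\tau} = f(u^{n+1}),$$

where $\tau$ is the time step; the weighted and shifted variant (WSBDF2), its variable-step counterpart, and the SAV reformulation are as defined in the paper and are not reproduced here.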

The bootstrap is a widely used technique for estimating properties of a given estimator, such as its bias and standard error. In this paper, we evaluate and compare five bootstrap-based methods for constructing confidence intervals: two of them (Normal and Studentized) based on the bootstrap estimate of the standard error; another two (Quantile and Better) based on the estimated distribution of the parameter estimator; and finally an interval constructed from the Bayesian bootstrap, relying on the notion of a credible interval. The methods are compared through Monte Carlo simulations in different scenarios, including samples with autocorrelation induced by a copula model. The results are also compared with respect to the coverage rate, the median interval length, and a novel indicator, proposed in this paper, that combines both of them. The results show that the Studentized method has the best coverage rate, although the smallest intervals are attained by the Bayesian method. In general, all methods are appropriate and demonstrate good performance even in the scenarios violating the independence assumption.
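As a minimal sketch covering only two of the five compared constructions, under the usual i.i.d. resampling assumption, the Normal and Quantile bootstrap intervals for a sample mean can be computed as follows; the Studentized, Better, and Bayesian-bootstrap intervals and the copula-based simulation design are not reproduced.

```python
import numpy as np

def bootstrap_cis(data, stat=np.mean, B=2000, alpha=0.05, seed=0):
    """Normal and percentile (Quantile) bootstrap confidence intervals for a statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    est = stat(data)
    boots = np.array([stat(rng.choice(data, size=n, replace=True)) for _ in range(B)])
    se = boots.std(ddof=1)
    z = 1.959963984540054                           # 97.5% standard normal quantile
    normal_ci = (est - z * se, est + z * se)
    quantile_ci = tuple(np.quantile(boots, [alpha / 2, 1 - alpha / 2]))
    return normal_ci, quantile_ci

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=100)       # skewed data, true mean 2.0
print(bootstrap_cis(sample))
```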

Uniformly random unitaries, i.e. unitaries drawn from the Haar measure, have many useful properties, but cannot be implemented efficiently. This has motivated a long line of research into random unitaries that "look" sufficiently Haar random while also being efficient to implement. Two different notions of derandomisation have emerged: $t$-designs are random unitaries that information-theoretically reproduce the first $t$ moments of the Haar measure, and pseudorandom unitaries (PRUs) are random unitaries that are computationally indistinguishable from Haar random. In this work, we take a unified approach to constructing $t$-designs and PRUs. For this, we introduce and analyse the "$PFC$ ensemble", the product of a random computational basis permutation $P$, a random binary phase operator $F$, and a random Clifford unitary $C$. We show that this ensemble reproduces exponentially high moments of the Haar measure. We can then derandomise the $PFC$ ensemble to show the following: (1) Linear-depth $t$-designs. We give the first construction of a (diamond-error) approximate $t$-design with circuit depth linear in $t$. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their $2t$-wise independent counterparts. (2) Non-adaptive PRUs. We give the first construction of PRUs with non-adaptive security, i.e. we construct unitaries that are indistinguishable from Haar random to polynomial-time distinguishers that query the unitary in parallel on an arbitrary state. This follows from the $PFC$ ensemble by replacing the random phase and permutation operators with their pseudorandom counterparts. (3) Adaptive pseudorandom isometries. We show that if one considers isometries (rather than unitaries) from $n$ to $n + \omega(\log n)$ qubits, a small modification of our PRU construction achieves general adaptive security.

We introduce a framework rooted in a rate distortion problem for Markov chains, and show how a suite of commonly used Markov Chain Monte Carlo (MCMC) algorithms are specific instances within it, where the target stationary distribution is controlled by the distortion function. Our approach offers a unified variational view on the optimality of algorithms such as Metropolis-Hastings, Glauber dynamics, the swapping algorithm and Feynman-Kac path models. Along the way, we analyze factorizability and geometry of multivariate Markov chains. Specifically, we demonstrate that induced chains on factors of a product space can be regarded as information projections with respect to a particular divergence. This perspective yields Han--Shearer type inequalities for Markov chains as well as applications in the context of large deviations and mixing time comparison.
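For concreteness, the following is a minimal random-walk Metropolis-Hastings sampler, one of the algorithms the framework recovers as a special instance; the rate distortion formulation itself is not reproduced, and the target and tuning choices below are illustrative.

```python
import numpy as np

def metropolis_hastings(log_target, x0, steps=10000, prop_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings targeting the density proportional to exp(log_target)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((steps, x.size))
    for t in range(steps):
        prop = x + prop_scale * rng.standard_normal(x.size)   # symmetric Gaussian proposal
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:                # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Target: standard bivariate Gaussian (log density up to an additive constant).
log_target = lambda x: -0.5 * np.dot(x, x)
chain = metropolis_hastings(log_target, x0=[5.0, -5.0], steps=20000)
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))     # roughly (0, 0) and (1, 1)
```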
