
A backward stable numerical calculation of a function with condition number $\kappa$ will have a relative accuracy of $\kappa\epsilon_{\text{machine}}$. Standard formulations and software implementations of finite-strain elastic materials models make use of the deformation gradient $\boldsymbol F = I + \partial \boldsymbol u/\partial \boldsymbol X$ and Cauchy-Green tensors. These formulations are not numerically stable, leading to loss of several digits of accuracy when used in the small strain regime, and often precluding the use of single precision floating point arithmetic. We trace the source of this instability to specific points of numerical cancellation, interpretable as ill-conditioned steps. We show how to compute various strain measures in a stable way and how to transform common constitutive models to their stable representations, formulated in either initial or current configuration. The stable formulations all provide accuracy of order $\epsilon_{\text{machine}}$. In many cases, the stable formulations have elegant representations in terms of appropriate strain measures and offer geometric intuition that is lacking in their standard representation. We show that algorithmic differentiation can stably compute stresses so long as the strain energy is expressed stably, and give principles for stable computation that can be applied to inelastic materials.
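
As a minimal illustration of the cancellation described above (a NumPy sketch of my own using the Green-Lagrange strain; not code from the paper, and the magnitudes are only indicative), the snippet below compares forming $E = (F^T F - I)/2$ from $F = I + H$ against the algebraically equivalent $E = (H + H^T + H^T H)/2$ in single precision, for a small displacement gradient $H$:

```python
# Hedged sketch: illustrative strain measure and magnitudes, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
H64 = 1e-4 * rng.standard_normal((3, 3))            # small displacement gradient, float64 reference
E_ref = 0.5 * (H64 + H64.T + H64.T @ H64)           # stable formula evaluated in double precision

H = H64.astype(np.float32)
F = np.eye(3, dtype=np.float32) + H                 # deformation gradient F = I + H

E_unstable = 0.5 * (F.T @ F - np.eye(3, dtype=np.float32))   # cancels I against F^T F
E_stable = 0.5 * (H + H.T + H.T @ H)                          # works with H directly

rel = lambda E: np.linalg.norm(E - E_ref) / np.linalg.norm(E_ref)
print("relative error, F-based formula:", rel(E_unstable))    # roughly 1e-3: digits lost
print("relative error, H-based formula:", rel(E_stable))      # roughly 1e-7: near float32 eps
```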

Related content


Generative diffusion models apply the concept of Langevin dynamics in physics to machine learning, attracting a lot of interest from engineering, statistics and physics, but a complete picture of their inherent mechanisms is still lacking. In this paper, we provide a transparent physics analysis of diffusion models, formulating the fluctuation theorem, entropy production, equilibrium measure, and Franz-Parisi potential to understand the dynamic process and intrinsic phase transitions. Our analysis is rooted in a path integral representation of both forward and backward dynamics, and in treating the reverse diffusion generative process as a statistical inference, where the time-dependent state variables serve as quenched disorder akin to that in spin glass theory. Our study thus links stochastic thermodynamics, statistical inference and geometry-based analysis together to yield a coherent picture of how generative diffusion models work.
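
The forward/reverse structure that such an analysis builds on can be made concrete in a toy one-dimensional setting (my own illustration, not the paper's formalism): for a Gaussian data distribution and an Ornstein-Uhlenbeck forward process, the time-dependent score is known in closed form, so the reverse generative SDE can be simulated directly as a stochastic inference:

```python
# Toy 1D sketch: OU forward process, closed-form score, Euler-Maruyama reverse SDE.
import numpy as np

rng = np.random.default_rng(1)
mu0, sig0 = 2.0, 0.5              # "data" distribution N(mu0, sig0^2)
T, n_steps, n_samples = 5.0, 500, 20000
dt = T / n_steps

def score(x, t):
    # closed-form score d/dx log p_t(x) of the marginal of the forward OU process
    m = mu0 * np.exp(-t)
    v = sig0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return -(x - m) / v

# start the reverse dynamics from the (near-)stationary prior N(0, 1)
x = rng.standard_normal(n_samples)
for k in range(n_steps, 0, -1):
    t = k * dt
    drift = x + 2.0 * score(x, t)          # -f(x) + g^2 * score, with f(x) = -x, g = sqrt(2)
    x = x + drift * dt + np.sqrt(2 * dt) * rng.standard_normal(n_samples)

print("generated mean/std:", x.mean(), x.std())   # should be close to (2.0, 0.5)
```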

Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an $n\times n$ matrix $A$ and vectors $u_1,\ldots,u_m$. The space spanned by all iterates $A^k u_j$ admits a particular basis -- the \emph{maximal Krylov basis} -- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\ldots$, until reaching linear dependency, then iterating similarly the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm whose complexity bound $O(n^\omega \log(n))$ comes from repeated squarings of $A$ and logarithmically many Gaussian eliminations. Here $\omega>2$ is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in $O(n^\omega\log\log(n))$ field operations when $m \in O(n)$, and even $O(n^\omega)$ as soon as $m\in O(n/\log(n)^c)$ for some fixed real $c>0$. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in $O(n^\omega (\log\log(n))^2)$, and therefore matrix exponentiation~$A^k$ can be performed in the latter complexity if $\log(k) \in O(n^{\omega-1-\varepsilon})$ for some fixed $\varepsilon>0$. A key idea for these improvements is to rely on fast algorithms for $m\times m$ polynomial matrices of average degree $n/m$, involving high-order lifting and minimal kernel bases.
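
For reference, a naive construction of the maximal Krylov basis (a dense floating-point sketch with a numerical rank test; the paper works over an exact base field and achieves far better complexity) looks as follows:

```python
# Naive reference sketch, far from the O(n^w loglog n) algorithm of the paper.
import numpy as np

def maximal_krylov_basis(A, U, tol=1e-10):
    """A: n x n, U: columns u_1..u_m. Returns the maximal Krylov basis columns in order."""
    n = A.shape[0]
    basis = np.zeros((n, 0))
    for j in range(U.shape[1]):
        v = U[:, [j]]
        while basis.shape[1] < n:
            candidate = np.hstack([basis, v])
            if np.linalg.matrix_rank(candidate, tol) == candidate.shape[1]:
                basis = candidate          # v is independent: keep it and iterate A
                v = A @ v
            else:
                break                      # linear dependence: move on to the next u_j
    return basis

A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 2.]])
U = np.eye(3)
B = maximal_krylov_basis(A, U)
print(B)   # columns: u_1, A u_1 (= e_2), then u_3; u_2 was already dependent
```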

We show that Pfaffians or contiguity relations of hypergeometric functions of several variables give a direct sampling algorithm for toric models in statistics; the algorithm is a Markov chain on a lattice generated by a matrix $A$. A correspondence between graphical models and $A$-hypergeometric systems is discussed, and we give a sum formula for special values of $A$-hypergeometric polynomials. Some hypergeometric series that are of interest from a statistical viewpoint are also presented.
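
To make the objects concrete (a brute-force illustration by fiber enumeration, not the Pfaffian-based direct sampler of the paper), the snippet below treats the $2\times 2$ independence model as a toric model: the $A$-hypergeometric polynomial $Z_A(b;x)=\sum_{Au=b,\ u\geq 0} x^u/u!$ normalizes the conditional distribution of the table given its margins $b$, from which one can sample by enumeration:

```python
# Brute-force illustration only; the paper's point is to avoid such enumeration.
import itertools
import math
import numpy as np

A = np.array([[1, 1, 0, 0],      # row sums and column sums of the 2x2 table
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]])
b = np.array([5, 4, 6, 3])       # fixed margins: rows (5, 4), columns (6, 3)
x = np.array([1.0, 1.0, 1.0, 1.0])   # model parameters (x = 1 gives Fisher's hypergeometric)

N = int(b[:2].sum())             # total count, a safe upper bound for each cell
fiber = [u for u in itertools.product(range(N + 1), repeat=4)
         if np.array_equal(A @ np.array(u), b)]
weights = np.array([np.prod(x**np.array(u)) / np.prod([math.factorial(k) for k in u])
                    for u in fiber])
Z = weights.sum()                                # value of the A-hypergeometric polynomial
probs = weights / Z

rng = np.random.default_rng(5)
sample = fiber[rng.choice(len(fiber), p=probs)]  # one draw of a table with the given margins
print("Z_A(b; x) =", Z, " sampled table:", sample)
```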

We consider the two-pronged fork frame $F$ and the variety $\mathbf{Eq}(B_F)$ generated by its dual closure algebra $B_F$. We describe the finite projective algebras in $\mathbf{Eq}(B_F)$ and give a purely semantic proof that unification in $\mathbf{Eq}(B_F)$ is finitary and not unitary.

Consider the generalized linear least squares (GLS) problem $\min\|Lx\|_2 \ \mathrm{s.t.} \ \|M(Ax-b)\|_2=\min$. The weighted pseudoinverse $A_{ML}^{\dag}$ is the matrix that maps $b$ to the minimum 2-norm solution of this GLS problem. By introducing a linear operator induced by $\{A, M, L\}$ between two finite-dimensional Hilbert spaces, we show that the minimum 2-norm solution of the GLS problem is equivalent to the minimum norm solution of a linear least squares problem involving this linear operator, and $A_{ML}^{\dag}$ can be expressed as the composition of the Moore-Penrose pseudoinverse of this linear operator and an orthogonal projector. With this new interpretation, we establish the generalized Moore-Penrose equations that completely characterize the weighted pseudoinverse, give a closed-form expression of the weighted pseudoinverse using the generalized singular value decomposition (GSVD), and propose a generalized LSQR (gLSQR) algorithm for iteratively solving the GLS problem. We construct several numerical examples to test the proposed iterative algorithm for solving GLS problems. Our results highlight the close connections between GLS, weighted pseudoinverse, GSVD and gLSQR, providing new tools for both analysis and computations.
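
The definition of $A_{ML}^{\dag}$ can be made concrete on small dense problems (a NumPy sketch of my own via nested least-squares and null-space steps; this is not the paper's GSVD expression or its gLSQR algorithm):

```python
# Hedged sketch: dense computation of b -> min-2-norm solution of
#   min ||Lx||_2  s.t.  ||M(Ax - b)||_2 = min.
import numpy as np

def nullspace(B):
    # orthonormal basis of the null space of B via the SVD
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > 1e-12 * (s[0] if s.size else 1.0)))
    return Vt[rank:].T

def gls_min_norm(A, M, L, b):
    # Step 1: all minimizers of ||M(Ax - b)|| are x1 + N1 z
    MA, Mb = M @ A, M @ b
    x1 = np.linalg.lstsq(MA, Mb, rcond=None)[0]
    N1 = nullspace(MA)
    # Step 2: among them, all minimizers of ||Lx|| are x1 + N1 (z1 + N2 w)
    z1 = np.linalg.lstsq(L @ N1, -L @ x1, rcond=None)[0] if N1.size else np.zeros(0)
    N2 = nullspace(L @ N1) if N1.size else np.zeros((0, 0))
    # Step 3: pick the remaining free part w to minimize ||x||_2 itself
    x2 = x1 + (N1 @ z1 if N1.size else 0.0)
    w = np.linalg.lstsq(N1 @ N2, -x2, rcond=None)[0] if N2.size else np.zeros(0)
    return x2 + (N1 @ (N2 @ w) if N2.size else 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
M = rng.standard_normal((3, 6))      # M A is column-rank-deficient, so L actually matters
L = rng.standard_normal((2, 4))
b = rng.standard_normal(6)
print("A_ML^+ b =", gls_min_norm(A, M, L, b))
```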

High-dimensional sample correlation matrices are a crucial class of random matrices in multivariate statistical analysis, and the central limit theorem (CLT) for their linear spectral statistics provides a theoretical foundation for statistical inference. In this paper, assuming that the data dimension increases proportionally with the sample size, we derive the limiting spectral distribution of the matrix $\widehat{\mathbf{R}}_n\mathbf{M}$ and establish CLTs for the linear spectral statistics (LSS) of $\widehat{\mathbf{R}}_n\mathbf{M}$ under two data structures: a linear independent component structure and an elliptical structure. In contrast to the existing literature, our results do not require $\mathbf{M}$ to be the identity matrix. Moreover, we derive the joint limiting distribution of the LSS of $\widehat{\mathbf{R}}_n \mathbf{M}_1,\ldots,\widehat{\mathbf{R}}_n \mathbf{M}_K$. As an illustration, we present an application of the CLT.
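
A linear spectral statistic of $\widehat{\mathbf{R}}_n\mathbf{M}$ is easy to simulate; the Monte Carlo sketch below (illustration only, with an arbitrary diagonal $\mathbf{M}\neq\mathbf{I}$ and $f(x)=x^2$) generates the kind of fluctuations that such CLTs describe:

```python
# Monte-Carlo illustration of a linear spectral statistic of R_hat_n M.
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 300, 150, 200                     # dimension grows proportionally: p/n = 0.5
M = np.diag(1.0 + (np.arange(p) >= p // 2))    # a simple non-identity M (eigenvalues 1 and 2)

def lss(f):
    vals = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))        # i.i.d. data, linear independent-component case
        R = np.corrcoef(X, rowvar=False)       # sample correlation matrix R_hat_n
        lam = np.linalg.eigvals(R @ M).real    # spectrum of R_hat_n M (real up to round-off)
        vals.append(np.sum(f(lam)))
    return np.array(vals)

stat = lss(lambda x: x**2)                     # LSS with f(x) = x^2
print("mean and std of the LSS over repetitions:", stat.mean(), stat.std())
```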

Consider a linear operator equation $x - Kx = f$, where $f$ is given and $K$ is a Fredholm integral operator with a Green's function type kernel defined on $C[0, 1]$. For $r \geq 0$, we employ the interpolatory projection at $2r + 1$ collocation points (not necessarily Gauss points) onto a space of piecewise polynomials of degree $\leq 2r$ with respect to a uniform partition of $[0, 1]$. Previous researchers have established that, in the case of smooth kernels with piecewise polynomials of even degree, iteration in the collocation method and its variants improves the order of convergence of projection methods. In this article, we demonstrate the improvement in the order of convergence achieved by the modified collocation method when the kernel is of Green's function type.
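
The $r = 0$ case can be sketched in a few lines (my own demo with a specific Green's function kernel, a manufactured exact solution, midpoint collocation points and a simple quadrature; this is not the paper's modified collocation method): piecewise-constant collocation followed by one iteration $x_n^{it} = f + Kx_n$, whose error visibly improves on the collocation error as the partition is refined:

```python
# Hedged demo: r = 0 (piecewise constants, one midpoint collocation point per subinterval).
import numpy as np

def kern(s, t):                                   # Green's function of -u'' on [0, 1]
    return np.where(t <= s, t * (1.0 - s), s * (1.0 - t))

x_true = lambda s: np.sin(np.pi * s)              # chosen exact solution
f = lambda s: (1.0 - 1.0 / np.pi**2) * np.sin(np.pi * s)   # then f = x - Kx

def Kint(s, a, b, nq=50):
    # integral of kern(s, .) over [a, b] by composite midpoint quadrature (demo only)
    t = a + (np.arange(nq) + 0.5) * (b - a) / nq
    return np.sum(kern(s, t)) * (b - a) / nq

for N in (8, 16, 32):
    h = 1.0 / N
    mids = (np.arange(N) + 0.5) * h
    Kn = np.array([[Kint(si, j * h, (j + 1) * h) for j in range(N)] for si in mids])
    c = np.linalg.solve(np.eye(N) - Kn, f(mids))               # collocation solution x_n
    ss = np.linspace(0.005, 0.995, 199)                        # fine evaluation grid
    xn = c[np.minimum((ss / h).astype(int), N - 1)]            # piecewise-constant x_n on the grid
    Ks = np.array([[Kint(s, j * h, (j + 1) * h) for j in range(N)] for s in ss])
    xit = f(ss) + Ks @ c                                       # iterated solution f + K x_n
    print(N, np.max(np.abs(xn - x_true(ss))), np.max(np.abs(xit - x_true(ss))))
```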

We consider the following task: an algorithm is given copies of an unknown $n$-qubit quantum state $|\psi\rangle$ promised that either $(i)$ $|\psi\rangle$ is $\varepsilon_1$-close to a stabilizer state in fidelity or $(ii)$ $|\psi\rangle$ is $\varepsilon_2$-far from all stabilizer states, and must decide which is the case. We give a $\textsf{poly}(1/\varepsilon_1)$-sample and $n\cdot \textsf{poly}(1/\varepsilon_1)$-time algorithm for this task for every $\varepsilon_1>0$ and $\varepsilon_2\leq 2^{-\textsf{poly}(1/\varepsilon_1)}$. Our proof includes a new definition of the Gowers norm for quantum states, an inverse theorem for the Gowers-$3$ norm of states, and new bounds on stabilizer covering for structured subsets of Paulis using results in additive combinatorics.

We present a proof showing that the weak error of a system of $n$ interacting stochastic particles approximating the solution of the McKean-Vlasov equation is $\mathcal O(n^{-1})$. Our proof is based on the Kolmogorov backward equation for the particle system and bounds on the derivatives of its solution, which we derive more generally using the variations of the stochastic particle system. The convergence rate is verified by numerical experiments, which also indicate that the assumptions made here and in the literature can be relaxed.
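
A toy instance (my own example, not the paper's general setting) makes the $\mathcal O(n^{-1})$ weak error observable: for the McKean-Vlasov equation $dX_t = -(X_t - \mathbb E[X_t])\,dt + dW_t$ with $X_0\sim N(1,1)$, the law at time $T$ is Gaussian with known mean and variance, so the weak error of the particle approximation of $\mathbb E[X_T^2]$ can be measured directly, up to Monte Carlo and time-discretization error:

```python
# Toy weak-error experiment for an interacting particle approximation of a McKean-Vlasov SDE.
import numpy as np

rng = np.random.default_rng(4)
T, dt, runs = 1.0, 0.005, 1000
steps = int(T / dt)
exact = 1.0 + np.exp(-2 * T) + 0.5 * (1.0 - np.exp(-2 * T))   # E[X_T]^2 + Var(X_T) for this model

for n in (10, 50, 250):
    X = 1.0 + rng.standard_normal((runs, n))                   # X_0 ~ N(1, 1), all runs at once
    for _ in range(steps):
        drift = -(X - X.mean(axis=1, keepdims=True))           # interaction via the empirical mean
        X = X + drift * dt + np.sqrt(dt) * rng.standard_normal((runs, n))
    est = (X**2).mean()                                        # Monte-Carlo estimate of E[X_T^2]
    print(n, est, abs(est - exact))   # error shrinks roughly like 1/n, plus MC / time-step error
```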

Given a finite set of matrices with integer entries, the matrix mortality problem asks if there exists a product of these matrices equal to the zero matrix. We consider a special case of this problem where all entries of the matrices are nonnegative. This case is equivalent to the NFA mortality problem, which, given an NFA, asks for a word $w$ such that the image of every state under $w$ is the empty set. The size of the alphabet of the NFA is then equal to the number of matrices in the set. We study the length of shortest such words depending on the size of the alphabet. We show that for an NFA with $n$ states this length can be at least $2^n - 1$ for an alphabet of size $n$, $2^{(n - 4)/2}$ for an alphabet of size $3$ and $2^{(n - 2)/3}$ for an alphabet of size $2$. We also discuss further open problems related to mortality of NFAs and DFAs.
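
For tiny instances, a shortest mortal word can be found by breadth-first search over subsets of states (a brute-force sketch; the point of the bounds above is precisely that such words can be exponentially long, so this does not scale):

```python
# Brute-force BFS for a shortest mortal word of a small NFA.
from collections import deque

def shortest_mortal_word(n_states, transitions):
    """transitions: dict letter -> dict state -> set of successor states."""
    start = frozenset(range(n_states))
    seen = {start: ""}
    queue = deque([start])
    while queue:
        S = queue.popleft()
        if not S:
            return seen[S]                      # empty image reached: mortal word found
        for a, delta in transitions.items():
            T = frozenset(q for s in S for q in delta.get(s, set()))
            if T not in seen:
                seen[T] = seen[S] + a
                queue.append(T)
    return None                                 # the NFA is not mortal

# Two letters over states {0, 1, 2}: 'a' shifts 0 -> 1 -> 2 and kills 2, 'b' kills 0 and fixes 1, 2.
trans = {"a": {0: {1}, 1: {2}, 2: set()}, "b": {0: set(), 1: {1}, 2: {2}}}
print(shortest_mortal_word(3, trans))           # prints "aaa"
```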
