
Consider the generalized linear least squares (GLS) problem $\min\|Lx\|_2 \ \mathrm{s.t.} \ \|M(Ax-b)\|_2=\min$. The weighted pseudoinverse $A_{ML}^{\dag}$ is the matrix that maps $b$ to the minimum 2-norm solution of this GLS problem. By introducing a linear operator induced by $\{A, M, L\}$ between two finite-dimensional Hilbert spaces, we show that the minimum 2-norm solution of the GLS problem is equivalent to the minimum norm solution of a linear least squares problem involving this linear operator, and $A_{ML}^{\dag}$ can be expressed as the composition of the Moore-Penrose pseudoinverse of this linear operator and an orthogonal projector. With this new interpretation, we establish the generalized Moore-Penrose equations that completely characterize the weighted pseudoinverse, give a closed-form expression of the weighted pseudoinverse using the generalized singular value decomposition (GSVD), and propose a generalized LSQR (gLSQR) algorithm for iteratively solving the GLS problem. We construct several numerical examples to test the proposed iterative algorithm for solving GLS problems. Our results highlight the close connections between GLS, weighted pseudoinverse, GSVD and gLSQR, providing new tools for both analysis and computations.
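As a hedged illustration of the objects in this abstract, the following NumPy sketch computes the minimum 2-norm solution of the GLS problem (and hence the action $x = A_{ML}^{\dag} b$) by dense linear algebra. It is a small-scale check, not the paper's gLSQR iteration or GSVD formula, and all matrix sizes and data are illustrative assumptions.

```python
# Dense-algebra sketch (not the paper's gLSQR): compute the minimum 2-norm solution of
#   min ||L x||_2   s.t.   ||M(A x - b)||_2 = min,
# i.e. the action x = A_ML^† b of the weighted pseudoinverse, for toy sizes.
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 8, 6, 8, 4                      # illustrative dimensions
A = rng.standard_normal((m, n))
M = rng.standard_normal((p, m))
L = rng.standard_normal((q, n))
b = rng.standard_normal(m)

# Step 1: the minimizers of ||M(Ax - b)|| form the affine set x0 + range(N),
# where x0 is the minimum-norm minimizer and the columns of N span null(MA).
B, c = M @ A, M @ b
x0 = np.linalg.pinv(B) @ c
U, s, Vt = np.linalg.svd(B)
r = int((s > s.max() * max(B.shape) * np.finfo(float).eps).sum())
N = Vt[r:].T                                  # orthonormal basis of null(MA)

# Step 2: among x = x0 + N z, minimize ||L x||.  The minimum-norm z returned by the
# pseudoinverse yields the minimum 2-norm x, because x0 ⟂ range(N) and N is orthonormal.
z = -np.linalg.pinv(L @ N) @ (L @ x0)
x_gls = x0 + N @ z

print("residual optimality gap:", np.linalg.norm(B @ x_gls - c) - np.linalg.norm(B @ x0 - c))
print("||L x_gls|| =", np.linalg.norm(L @ x_gls))
```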

Related content

In this paper, a two-sided variable-coefficient space-fractional diffusion equation with fractional Neumann boundary conditions is considered. To handle the weak singularity caused by the nonlocal space-fractional differential operators, a fractional block-centered finite difference (BCFD) method on general nonuniform grids is proposed. However, this discretization still results in an unstructured dense coefficient matrix with large memory requirements and high computational complexity. To address this issue, a fast fractional BCFD algorithm employing the well-known sum-of-exponentials (SOE) approximation technique is also proposed. Based upon Krylov subspace iterative methods, fast matrix-vector multiplications of the resulting coefficient matrices with any vector are developed; these can be carried out in only $\mathcal{O}(MN_{exp})$ operations per iteration without losing any accuracy compared to direct solvers, where $N_{exp}\ll M$ is the number of exponentials in the SOE approximation. Moreover, the coefficient matrices need not be generated explicitly: they can be stored in $\mathcal{O}(MN_{exp})$ memory by storing only a few coefficient vectors. Numerical experiments are provided to demonstrate the efficiency and accuracy of the method.
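To make the complexity claim concrete, here is a generic Python sketch (not the paper's BCFD discretization) of how an SOE approximation $k(t)\approx\sum_j w_j e^{-s_j t}$ turns a discrete history sum into $\mathcal{O}(MN_{exp})$ recursive updates; the kernel, the nodes $s_j$, and the weights $w_j$ are illustrative assumptions.

```python
# Generic SOE idea: once a kernel k(t) is approximated by sum_j w_j * exp(-s_j t),
# the history sum y_i = sum_{l<i} k((i-l)*h) * x_l can be updated recursively in
# O(M*N_exp) operations instead of O(M^2).
import numpy as np

M, h = 2000, 1e-2
x = np.random.default_rng(1).standard_normal(M)

# assumed SOE approximation k(t) ≈ sum_j w_j exp(-s_j t)
s = np.array([1.0, 10.0, 100.0])
w = np.array([0.5, 0.3, 0.2])
k = lambda t: (w * np.exp(-np.outer(t, s))).sum(axis=1)

# direct O(M^2) evaluation, for reference
y_direct = np.zeros(M)
for i in range(1, M):
    lags = (i - np.arange(i)) * h
    y_direct[i] = k(lags) @ x[:i]

# fast O(M*N_exp) evaluation: one scalar history state per exponential
H = np.zeros(len(s))
y_fast = np.zeros(M)
decay = np.exp(-s * h)
for i in range(1, M):
    H = decay * (H + x[i - 1])        # absorb the newest sample, decay the rest
    y_fast[i] = w @ H

print("max abs difference:", np.max(np.abs(y_direct - y_fast)))
```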

For a hypergraph $\mathcal{H}=(X,\mathcal{E})$, a \emph{support} is a graph $G$ on $X$ such that for each $E\in\mathcal{E}$, the subgraph of $G$ induced on the elements of $E$ is connected. If $G$ is planar, we call it a planar support. A set of axis-parallel rectangles $\mathcal{R}$ forms a non-piercing family if for any $R_1, R_2 \in \mathcal{R}$, $R_1 \setminus R_2$ is connected. Given a set $P$ of $n$ points in $\mathbb{R}^2$ and a set $\mathcal{R}$ of $m$ \emph{non-piercing} axis-parallel rectangles, we give an algorithm for computing a planar support for the hypergraph $(P,\mathcal{R})$ in $O(n\log^2 n + (n+m)\log m)$ time, where each $R\in\mathcal{R}$ defines a hyperedge consisting of all points of $P$ contained in~$R$. We use this result to show that if, for a family of axis-parallel rectangles, any point in the plane is contained in at most $k$ pairwise \emph{crossing} rectangles (a pair of intersecting rectangles such that neither contains a corner of the other is called a crossing pair), then we can obtain a support as the union of $k$ planar graphs.
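As a hedged illustration of the support definition only (not of the paper's near-linear-time construction), the sketch below checks whether a candidate graph is a support for a given hypergraph by running a BFS restricted to each hyperedge; the toy points, hyperedges, and candidate graph are assumptions.

```python
# G is a support for hypergraph (X, E) iff every hyperedge induces a connected subgraph of G.
from collections import deque

def is_support(edges, hyperedges):
    """edges: iterable of vertex pairs of G; hyperedges: list of vertex sets."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for E in hyperedges:
        if len(E) <= 1:
            continue
        start = next(iter(E))
        seen, queue = {start}, deque([start])
        while queue:                          # BFS restricted to the vertices of E
            u = queue.popleft()
            for w in adj.get(u, ()):
                if w in E and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != set(E):
            return False
    return True

# toy example: points 0..3, two "rectangle" hyperedges, candidate path graph 0-1-2-3
edges = [(0, 1), (1, 2), (2, 3)]
hyperedges = [{0, 1, 2}, {2, 3}]
print(is_support(edges, hyperedges))          # True: each hyperedge is connected in the path
```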

Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm called NoisyChebtrunc, based on Chebyshev interpolation. It has the same error-reduction effect as least squares, and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte Carlo rate $O(1/\sqrt{N})$. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze NoisyChebtrunc in terms of the variance and the concentration in the infinity norm about the underlying noiseless function. These results show that, with high probability, the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$ when the noise is independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.
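The following sketch illustrates the idea described above under assumptions of our own (a test function, an assumed-known noise level $\sigma$, and a textbook form of Mallows' $C_p$): interpolate noisy samples at Chebyshev points, then truncate the Chebyshev series at the degree minimizing the $C_p$ criterion. It is not the authors' exact algorithm.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)
N, sigma = 512, 1e-2
f = lambda t: np.exp(t) * np.sin(5 * t)        # illustrative test function

x = C.chebpts2(N + 1)                          # N+1 Chebyshev points in [-1, 1]
y = f(x) + sigma * rng.standard_normal(x.shape)

c_full = C.chebfit(x, y, N)                    # degree-N interpolant coefficients

def mallows_cp(n):
    """Textbook Mallows' Cp with known sigma, for the degree-n truncation."""
    resid = y - C.chebval(x, c_full[: n + 1])
    return resid @ resid / sigma**2 - (N + 1) + 2 * (n + 1)

degrees = np.arange(1, 60)
n_best = degrees[np.argmin([mallows_cp(n) for n in degrees])]
c_trunc = c_full[: n_best + 1]

tt = np.linspace(-1, 1, 2001)
print("chosen degree:", n_best)
print("max error vs noiseless f:", np.max(np.abs(C.chebval(tt, c_trunc) - f(tt))))
```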

We explore a linear inhomogeneous elasticity equation with random Lam\'e parameters. The latter are parameterized by a countably infinite number of terms in separated expansions. The main aim of this work is to estimate expected values (considered as infinite-dimensional integrals over the parametric space corresponding to the random coefficients) of linear functionals acting on the solution of the elasticity equation. To achieve this, the expansions of the random parameters are truncated, a high-order quasi-Monte Carlo (QMC) rule is combined with a sparse grid approach to approximate the high-dimensional integral, and a Galerkin finite element method (FEM) is introduced to approximate the solution of the elasticity equation over the physical domain. The error estimates from (1) truncating the infinite expansion, (2) the Galerkin FEM, and (3) the QMC sparse grid quadrature rule are all studied. For this purpose, we show certain required regularity properties of the continuous solution with respect to both the parametric and physical variables. To obtain our theoretical regularity and convergence results, some reasonable assumptions on the expansions of the random coefficients are imposed. Finally, numerical results are presented.
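To indicate how the ingredients fit together, here is a hedged toy analogue in Python: a truncated affine random coefficient, a piecewise-linear Galerkin FEM solve, and Sobol' QMC points approximating the expected value of a linear functional, for a 1D diffusion problem rather than the elasticity system. The expansion, truncation level, mesh, and functional are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

s_trunc, K = 8, 64                       # truncated parametric dimension, mesh cells
h = 1.0 / K
mid = (np.arange(K) + 0.5) * h           # element midpoints

def solve_fem(y):
    """P1 FEM for -(a(x,y) u')' = 1 on (0,1), u(0)=u(1)=0; returns G(u) ≈ ∫ u dx."""
    j = np.arange(1, s_trunc + 1)
    a = 1.0 + np.sin(np.outer(mid, j) * np.pi) @ (y / j**2)   # affine random coefficient
    Kmat = np.zeros((K - 1, K - 1))
    for e in range(K):                   # assemble element stiffness contributions
        for p in (e - 1, e):             # interior indices of nodes e and e+1
            if 0 <= p < K - 1:
                Kmat[p, p] += a[e] / h
        if 1 <= e <= K - 2:
            Kmat[e - 1, e] -= a[e] / h
            Kmat[e, e - 1] -= a[e] / h
    u = np.linalg.solve(Kmat, np.full(K - 1, h))
    return h * u.sum()                   # quadrature of ∫ u with zero boundary values

sampler = qmc.Sobol(d=s_trunc, scramble=True, seed=3)
ys = sampler.random_base2(m=8) - 0.5     # 256 QMC points mapped to [-1/2, 1/2]^s
estimate = np.mean([solve_fem(y) for y in ys])
print("QMC estimate of E[G(u)]:", estimate)
```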

The aim of this study is to establish a general transformation matrix between B-spline surfaces and ANCF surface elements, extending previous work on the conversion between ANCF and B-spline surfaces. In this paper, a general transformation matrix between Bezier surfaces and an ANCF surface element is established; it essentially describes the linear relationship between ANCF and Bezier surfaces. Moreover, the general transformation matrix can help improve the efficiency of transferring a distorted configuration in the CAA back to the CAD, an urgent requirement in engineering practice. In addition, a special Bezier surface control polygon is given: a Bezier surface described by this control polygon can be converted to an ANCF surface element with fewer degrees of freedom, and the converted ANCF surface element with 36 d.o.f. was previously addressed by Dufva and Shabana. The special control polygon can therefore be regarded as the geometric condition for conversion to an ANCF surface element with 36 d.o.f. Based on the fact that a B-spline surface can be seen as a set of Bezier surfaces joined together, a method to establish a general transformation matrix between ANCF and lower-order B-spline surfaces is given. Notably, the general transformation is not in a recursive form but in a simplified form.
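As a hedged, lower-dimensional illustration of this kind of linear conversion (a curve rather than a surface, so not the paper's surface matrix): for a cubic Bezier curve, the Hermite/ANCF-style nodal coordinates, namely endpoint positions and endpoint tangents, are related to the Bezier control points by a constant matrix, sketched below with NumPy.

```python
# Curve-level analogue of a Bezier-to-nodal-coordinate transformation matrix:
#   e = [p(0), p'(0), p(1), p'(1)]^T = T @ [P0, P1, P2, P3]^T  for a cubic Bezier curve.
import numpy as np

T = np.array([[ 1.0, 0.0, 0.0, 0.0],
              [-3.0, 3.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0],
              [ 0.0, 0.0,-3.0, 3.0]])
T_inv = np.linalg.inv(T)                 # maps nodal coordinates back to control points

# check on random planar control points
P = np.random.default_rng(4).standard_normal((4, 2))
e = T @ P                                # rows: p(0), p'(0), p(1), p'(1)

bez  = lambda t: (1-t)**3*P[0] + 3*(1-t)**2*t*P[1] + 3*(1-t)*t**2*P[2] + t**3*P[3]
dbez = lambda t: 3*(1-t)**2*(P[1]-P[0]) + 6*(1-t)*t*(P[2]-P[1]) + 3*t**2*(P[3]-P[2])
print(np.allclose(e[0], bez(0)), np.allclose(e[1], dbez(0)),
      np.allclose(e[2], bez(1)), np.allclose(e[3], dbez(1)))
print("round trip ok:", np.allclose(T_inv @ e, P))
```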

We consider the problem $(\mathrm{P})$ of fitting $n$ standard Gaussian random vectors in $\mathbb{R}^d$ to the boundary of a centered ellipsoid, as $n, d \to \infty$. This problem is conjectured to have a sharp feasibility transition: for any $\varepsilon > 0$, if $n \leq (1 - \varepsilon) d^2 / 4$ then $(\mathrm{P})$ has a solution with high probability, while $(\mathrm{P})$ has no solutions with high probability if $n \geq (1 + \varepsilon) d^2 /4$. So far, only a trivial bound $n \geq d^2 / 2$ is known on the negative side, while the best results on the positive side assume $n \leq d^2 / \mathrm{polylog}(d)$. In this work, we improve over previous approaches using a key result of Bartl & Mendelson (2022) on the concentration of Gram matrices of random vectors under mild assumptions on their tail behavior. This allows us to give a simple proof that $(\mathrm{P})$ is feasible with high probability when $n \leq d^2 / C$, for a (possibly large) constant $C > 0$.
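For small dimensions, the feasibility question can be probed numerically. The cvxpy sketch below poses $(\mathrm{P})$ as an SDP feasibility problem (find $Q \succeq 0$ with $x_i^{\top} Q x_i = 1$ for all $i$); the sizes are toy values chosen for illustration, whereas the abstract's results concern the asymptotic regime $d \to \infty$.

```python
# Look for a centered ellipsoid {x : x^T Q x = 1}, Q ⪰ 0, through n Gaussian points in R^d.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
d, n = 10, 20                              # toy sizes; d^2/4 = 25 is the conjectured threshold
X = rng.standard_normal((n, d))

Q = cp.Variable((d, d), symmetric=True)
constraints = [Q >> 0]                     # positive semidefiniteness
constraints += [cp.trace(Q @ np.outer(x, x)) == 1 for x in X]   # x^T Q x = 1, affine in Q

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)              # 'optimal' indicates a feasible fit was found
```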

We propose a fast scheme for approximating the Mittag-Leffler function by an efficient sum-of-exponentials (SOE), and apply the scheme to a viscoelastic model of wave propagation, with mixed finite element methods for the spatial discretization and the Newmark-beta scheme for the second-order temporal derivative. Compared with the traditional L1 scheme for the fractional derivative, our fast scheme reduces the memory complexity from $\mathcal O(N_sN)$ to $\mathcal O(N_sN_{exp})$ and the computational complexity from $\mathcal O(N_sN^2)$ to $\mathcal O(N_sN_{exp}N)$, where $N$ denotes the total number of temporal grid points, $N_{exp}$ is the number of exponentials in the SOE, and $N_s$ represents the memory and computational cost associated with the spatial discretization. Numerical experiments are provided to verify the theoretical results.
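As a hedged illustration of the temporal scheme alone, the sketch below applies the Newmark-beta update to a generic second-order system $M\ddot u + Ku = f(t)$ without the fractional viscoelastic memory term (which the paper treats via the SOE of the Mittag-Leffler function); the matrices, forcing, and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
ndof, dt, nsteps = 20, 1e-3, 1000
beta, gamma = 0.25, 0.5                       # average-acceleration Newmark parameters

R = rng.standard_normal((ndof, ndof))
Mmat = np.eye(ndof)
Kmat = R @ R.T + ndof * np.eye(ndof)          # SPD toy stiffness matrix
f = lambda t: np.sin(2 * np.pi * t) * np.ones(ndof)

u = np.zeros(ndof)
v = np.zeros(ndof)
a = np.linalg.solve(Mmat, f(0.0) - Kmat @ u)  # consistent initial acceleration
S = Mmat + beta * dt**2 * Kmat                # effective matrix (factorized once in practice)

for step in range(1, nsteps + 1):
    t = step * dt
    u_pred = u + dt * v + (0.5 - beta) * dt**2 * a      # displacement predictor
    v_pred = v + (1.0 - gamma) * dt * a                 # velocity predictor
    a_new = np.linalg.solve(S, f(t) - Kmat @ u_pred)    # solve for the new acceleration
    u = u_pred + beta * dt**2 * a_new                   # correctors
    v = v_pred + gamma * dt * a_new
    a = a_new

print("||u(T)|| =", np.linalg.norm(u))
```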

We investigate a structural generalisation of treewidth that we call $\mathcal{A}$-blind-treewidth, where $\mathcal{A}$ denotes an annotated graph class. This width parameter is defined by evaluating only the size of those bags $B$ of a tree-decomposition of a graph $G$ for which ${(G,B) \notin \mathcal{A}}$. We focus on two cases: (i) $\mathcal{A}$ is the class $\mathcal{B}$ of all pairs ${(G,X)}$ such that no odd cycle in $G$ contains more than one vertex of ${X \subseteq V(G)}$, and (ii) $\mathcal{A}$ is the class $\mathcal{B}$ together with the class $\mathcal{P}$ of all pairs ${(G,X)}$ such that the "torso" of $X$ in $G$ is planar. For both classes, $\mathcal{B}$ and ${\mathcal{B} \cup \mathcal{P}}$, we obtain analogues of the Grid Theorem of Robertson and Seymour and FPT-algorithms that either compute decompositions of small width or correctly determine that the width of a given graph is large. Moreover, we present FPT-algorithms for Maximum Independent Set on graphs of bounded $\mathcal{B}$-blind-treewidth and Maximum Cut on graphs of bounded ${(\mathcal{B}\cup\mathcal{P})}$-blind-treewidth.
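The planarity condition in case (ii) refers to the "torso" of a set $X$ in $G$: the graph on $X$ obtained from $G[X]$ by adding, for every component $C$ of $G-X$, a clique on the neighbourhood of $C$ inside $X$. A small Python sketch of this standard notion (not of the paper's decomposition algorithms) follows; the toy graph is an assumption.

```python
from itertools import combinations

def torso(adj, X):
    """adj: dict vertex -> set of neighbours; X: set of vertices. Returns torso edge set."""
    X = set(X)
    edges = {frozenset((u, v)) for u in X for v in adj[u] & X}      # edges of G[X]
    outside, seen = set(adj) - X, set()
    for s in outside:                                               # components of G - X
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - X - comp)
        seen |= comp
        boundary = set().union(*(adj[u] for u in comp)) & X         # N(C) ∩ X
        edges |= {frozenset(e) for e in combinations(boundary, 2)}  # make it a clique
    return {tuple(sorted(e)) for e in edges if len(e) == 2}

# toy graph: a 4-cycle 0-1-2-3 with an outside path 1-4-5-3, and X = {0,1,2,3}
adj = {0: {1, 3}, 1: {0, 2, 4}, 2: {1, 3}, 3: {0, 2, 5}, 4: {1, 5}, 5: {4, 3}}
print(sorted(torso(adj, {0, 1, 2, 3})))   # the outside component {4,5} adds the chord (1, 3)
```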

Finding eigenvalue distributions for a number of sparse random matrix ensembles can be reduced to solving nonlinear integral equations of the Hammerstein type. While a systematic mathematical theory of such equations exists, it has not been previously applied to sparse matrix problems. We close this gap in the literature by showing how one can employ numerical solutions of Hammerstein equations to accurately recover the spectra of adjacency matrices and Laplacians of random graphs. While our treatment focuses on random graphs for concreteness, the methodology has broad applications to more general sparse random matrices.
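As a generic illustration of the numerical ingredient (not the paper's spectral-density equations), the sketch below solves a Hammerstein-type equation $u(x) = f(x) + \int K(x,y)\,g(y,u(y))\,dy$ by trapezoidal quadrature and damped fixed-point iteration; the kernel, nonlinearity, and grid are assumptions chosen so that the iteration is a contraction.

```python
import numpy as np

n = 400
x, h = np.linspace(0.0, 1.0, n, retstep=True)
w = np.full(n, h); w[0] = w[-1] = h / 2               # trapezoidal quadrature weights

K = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))    # kernel K(x, y)
f = np.sin(np.pi * x)                                 # inhomogeneity f(x)
g = lambda y, u: u / (1.0 + u**2)                     # bounded nonlinearity g(y, u)

u = f.copy()
for it in range(500):                                 # damped fixed-point iteration
    u_new = f + K @ (w * g(x, u))
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = 0.5 * u + 0.5 * u_new                         # damping for robustness
print("iterations:", it, " residual:",
      np.max(np.abs(u - (f + K @ (w * g(x, u))))))
```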

We prove that multilevel Picard approximations are capable of approximating solutions of semilinear heat equations in the $L^{p}$-sense, ${p}\in [2,\infty)$, in the case of gradient-dependent, Lipschitz-continuous nonlinearities, while the computational effort of the multilevel Picard approximations grows at most polynomially in both the dimension $d$ and the prescribed reciprocal accuracy $\epsilon$.
