
The computation of the partial generalized singular value decomposition (GSVD) of large-scale matrix pairs can be approached by means of iterative methods based on expanding subspaces, particularly Krylov subspaces. We consider the joint Lanczos bidiagonalization method, and analyze the feasibility of adapting the thick restart technique that has been used successfully in the context of other linear algebra problems. Numerical experiments illustrate the effectiveness of the proposed method. We also compare the new method with an alternative solution via equivalent eigenvalue problems, considering accuracy as well as computational performance. The analysis is done using a parallel implementation in the SLEPc library.
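
As a rough point of reference for the eigenvalue-based alternative the abstract compares against, the sketch below recovers the generalized singular values of a small dense pair through the equivalent symmetric-definite eigenproblem $A^T A x = \sigma^2 B^T B x$ (valid when $B$ has full column rank). Sizes and data are hypothetical; a large sparse pair would instead call for the Lanczos bidiagonalization approach implemented in SLEPc.

```python
# Minimal dense sketch of the eigenvalue route to generalized singular values.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
B = rng.standard_normal((150, 30))   # full column rank, so B^T B is positive definite

# Symmetric-definite generalized eigenproblem; eigh returns ascending eigenvalues.
w, X = eigh(A.T @ A, B.T @ B)
gsv = np.sqrt(np.maximum(w, 0.0))    # generalized singular values sigma_i
print("largest generalized singular values:", gsv[-5:][::-1])
```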

Related content

The problem of estimating return levels of river discharge, relevant in flood frequency analysis, is tackled by relying on extreme value theory. The Generalized Extreme Value (GEV) distribution is assumed to model annual maxima of river discharge registered at multiple gauging stations belonging to the same river basin. The specific features of the data from the Upper Danube basin drive the definition of the proposed statistical model. First, Bayesian P-splines are considered to account for the non-linear effects of station-specific covariates on the GEV parameters. Second, the problem of functional and variable selection is addressed by imposing a grouped horseshoe prior on the coefficients, encouraging the shrinkage of non-relevant components to zero. A cross-validation study is carried out to compare the proposed modeling solution to other models, showing its potential to reduce the uncertainty of ungauged predictions without affecting their calibration.
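
For orientation, a minimal frequentist sketch of the return-level computation that the Bayesian spatial model generalizes: fit a GEV to the annual maxima at a single station and read off the T-year return level as the (1 - 1/T) quantile. The synthetic data are a hypothetical stand-in for the Upper Danube records.

```python
# Single-station GEV fit and return level (frequentist baseline sketch).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Note: scipy's shape parameter c has the opposite sign of the usual GEV xi.
annual_maxima = genextreme.rvs(c=-0.1, loc=1000, scale=300, size=80, random_state=rng)

shape, loc, scale = genextreme.fit(annual_maxima)   # maximum-likelihood estimates
T = 100                                             # return period in years
return_level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
print(f"estimated {T}-year return level: {return_level:.1f}")
```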

We present a finite element scheme for fractional diffusion problems with varying diffusivity and fractional order. We consider a symmetric integral form of these nonlocal equations defined on general geometries and in arbitrary bounded domains. A number of challenges are encountered when discretizing these equations. The first comes from the heterogeneous kernel singularity in the fractional integral operator. The second comes from the dense discrete operator, whose memory footprint and arithmetic operations grow quadratically. An additional challenge comes from the need to handle volume conditions, the generalization of classical local boundary conditions to the nonlocal setting. Satisfying these conditions requires that the effect of the whole domain, including both the interior and exterior regions, be computed at every interior point of the discretization. Performed directly, this would result in quadratic complexity. To address these challenges, we propose a strategy that decomposes the stiffness matrix into three components. The first is a sparse matrix that handles the singular near field separately and is computed by adapting singular quadrature techniques available for the homogeneous case to the case of spatially variable order. The second component handles the remaining smooth part of the near field as well as the far field and is approximated by a hierarchical $\mathcal{H}^{2}$ matrix that maintains linear complexity in storage and operations. The third component handles the effect of the global mesh at every node and is written as a weighted mass matrix whose density is computed by a fast-multipole-type method. The resulting algorithm therefore has overall linear complexity in both space and time. Analysis of the consistency of the stiffness matrix is provided, and numerical experiments are conducted to illustrate the convergence and performance of the proposed algorithm.
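
A toy 1D illustration of the near-field/far-field splitting behind the first two components: the singular near field is kept as a sparse banded matrix, while the smooth remainder is compressed. A single global truncated SVD is used here as a crude stand-in for the paper's hierarchical $\mathcal{H}^{2}$ format, and zeroing the diagonal stands in for the singular quadrature; the variable-order kernel below is a symmetrized guess, not the paper's exact form.

```python
# Near/far splitting of a variable-order nonlocal kernel matrix (toy 1D version).
import numpy as np
from scipy.sparse import lil_matrix

n = 400
x = (np.arange(n) + 0.5) / n
s = 0.3 + 0.4 * x                             # spatially varying fractional order
E = 1 + s[:, None] + s[None, :]               # symmetrized kernel exponent (assumption)
D = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(D, 1.0)                      # dummy value; diagonal zeroed below
K = D ** -E
np.fill_diagonal(K, 0.0)                      # singular diagonal: handled by quadrature in practice

band = 8                                      # near-field width, in grid cells
near = lil_matrix((n, n))
for i in range(n):
    lo, hi = max(0, i - band), min(n, i + band + 1)
    near[i, lo:hi] = K[i, lo:hi]
far = K - near.toarray()

sv = np.linalg.svd(far, compute_uv=False)
r = int(np.sum(sv > 1e-8 * sv[0]))            # numerical rank of the smooth part
print(f"far field compresses to rank {r} of {n}")
```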

We show the strong convergence, in arbitrary Sobolev norms, of solutions of the discrete nonlinear Schr{\"o}dinger equation on an infinite lattice towards those of the nonlinear Schr{\"o}dinger equation on the whole space. We restrict our attention to the one- and two-dimensional cases, with a set of parameters that implies global well-posedness for the continuous equation. Our proof relies on bilinear estimates for the Shannon interpolation as well as on control of the growth of discrete Sobolev norms, both of which we prove.
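
For concreteness, the following sketch integrates the 1D discrete model whose continuum limit the paper studies, $i\,\dot u_n = -(u_{n+1} - 2u_n + u_{n-1})/h^2 + |u_n|^2 u_n$, with a basic RK4 step. A periodic lattice stands in for the infinite one, and the grid and time step are arbitrary choices for illustration.

```python
# Discrete NLS on a periodic lattice, explicit RK4 time stepping (illustration only).
import numpy as np

n, h, dt, steps = 256, 0.1, 1e-4, 2000
x = h * (np.arange(n) - n // 2)
u = np.exp(-x**2).astype(complex)              # smooth initial datum

def rhs(u):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    return -1j * (-lap + np.abs(u) ** 2 * u)   # i u_t = -lap_h u + |u|^2 u

for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# The discrete mass h * sum |u_n|^2 is approximately conserved by the flow.
print("discrete mass:", h * np.sum(np.abs(u) ** 2))
```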

Spectral clustering is one of the most popular unsupervised machine learning methods. Constructing the similarity matrix is crucial to this type of method. In most existing works, the similarity matrix is either computed once and for all or updated alternately. However, the former can hardly capture comprehensive relationships among data points, while the latter is time-consuming and even infeasible for large-scale problems. In this work, we propose a restarted clustering framework with self-guiding and block diagonal representation. An advantage of this strategy is that useful clustering information obtained in previous cycles is preserved as much as possible. To the best of our knowledge, this is the first work that applies a restarting strategy to spectral clustering. The key difference is that we reclassify the samples in each cycle of our method, whereas existing methods classify them only once. To further reduce the overhead, we introduce a block diagonal representation with Nystr\"{o}m approximation for constructing the similarity matrix. Theoretical results are established to show the rationality of inexact computations in spectral clustering. Comprehensive experiments are performed on benchmark databases and show the superiority of our proposed algorithms over many state-of-the-art algorithms for large-scale problems. In particular, our framework can potentially boost the performance of other clustering algorithms and works well even with a randomly chosen initial guess.
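
A minimal sketch of the Nystr\"{o}m-accelerated similarity construction: the Gaussian similarity matrix is approximated from $m$ landmark columns as $K \approx C W^{+} C^T$, its leading eigenvectors are recovered from the thin factor, and k-means runs on the embedding. This illustrates only the similarity step; the paper's restarting, self-guiding, and block diagonal representation are not reproduced, and the data, bandwidth, and landmark count are arbitrary.

```python
# Nystrom approximation + spectral embedding + k-means (unnormalized affinity for brevity).
import numpy as np
from scipy.linalg import svd, pinvh
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=3000, centers=4, random_state=0)
m, k, gamma = 100, 4, 0.5

rng = np.random.default_rng(0)
idx = rng.choice(len(X), m, replace=False)     # landmark sample
d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)
C = np.exp(-gamma * d2)                        # n x m similarity block
W = C[idx]                                     # m x m landmark block

# K ~ G G^T with G = C W^{-1/2}; left singular vectors of G approximate
# the leading eigenvectors of the full similarity matrix K.
ew, ev = np.linalg.eigh(pinvh(W))
G = C @ (ev * np.sqrt(np.maximum(ew, 0))) @ ev.T
U, s, _ = svd(G, full_matrices=False)
emb = U[:, :k] / np.linalg.norm(U[:, :k], axis=1, keepdims=True)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)
```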

Separating signals from an additive mixture may be an unnecessarily hard problem when one is only interested in specific properties of a given signal. In this work, we tackle simpler "statistical component separation" problems that focus on recovering a predefined set of statistical descriptors of a target signal from a noisy mixture. Assuming access to samples of the noise process, we investigate a method devised to match the statistics of the solution candidate corrupted by noise samples with those of the observed mixture. We first analyze the behavior of this method using simple examples with analytically tractable calculations. Then, we apply it in an image denoising context employing 1) wavelet-based descriptors and 2) ConvNet-based descriptors, on astrophysics and ImageNet data. In case 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations. Additionally, despite not being constructed for this purpose, it performs surprisingly well in terms of peak signal-to-noise ratio on full signal reconstruction. In comparison, representation 2) appears less suitable for image denoising. Finally, we extend this method by introducing a diffusive stepwise algorithm that gives a new perspective on the initial method and leads to promising results for image denoising under specific circumstances.
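
A loose 1D sketch of the statistic-matching idea: optimize a candidate u so that the descriptors of u plus fresh noise samples match those of the observed mixture y. The descriptor used here (band-summed power spectrum) is a hypothetical stand-in for the paper's wavelet- or ConvNet-based statistics, and the optimizer and step counts are arbitrary.

```python
# Statistical component separation sketch: match phi(u + noise) to phi(y).
import torch

torch.manual_seed(0)
N, sigma, n_noise = 512, 0.5, 32
t = torch.linspace(0, 1, N)
signal = torch.sin(2 * torch.pi * 5 * t) + 0.5 * torch.sin(2 * torch.pi * 13 * t)
y = signal + sigma * torch.randn(N)            # observed mixture

bands = torch.arange(N // 2 + 1) // 16         # coarse frequency bands

def phi(x):
    """Descriptor: band-summed power spectrum (an illustrative choice)."""
    p = torch.fft.rfft(x).abs() ** 2
    return torch.zeros(int(bands.max()) + 1).index_add(0, bands, p)

u = y.clone().requires_grad_(True)
opt = torch.optim.Adam([u], lr=0.01)
for _ in range(500):
    noise = sigma * torch.randn(n_noise, N)    # fresh samples of the noise process
    stats = torch.stack([phi(u + n) for n in noise]).mean(0)
    loss = ((stats - phi(y)) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

print("MSE to clean signal, noisy vs. optimized:",
      ((y - signal) ** 2).mean().item(), ((u.detach() - signal) ** 2).mean().item())
```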

This paper presents new methods for analyzing and evaluating generalized plans that can solve broad classes of related planning problems. Although the synthesis and learning of generalized plans have been a longstanding goal in AI, they remain challenging due to fundamental gaps in methods for analyzing the scope and utility of a given generalized plan. This paper addresses these gaps by developing a new conceptual framework along with proof techniques and algorithmic processes for assessing termination and goal-reachability properties of generalized plans. We build upon classic results from graph theory to decompose generalized plans into smaller components that are then used to derive hierarchical termination arguments. These methods can be used to determine the utility of a given generalized plan, as well as to guide the synthesis and learning processes for generalized plans. We present theoretical as well as empirical results illustrating the scope of this new approach. Our analysis shows that it significantly extends the class of generalized plans that can be assessed automatically, thereby reducing barriers to the synthesis and learning of reliable generalized plans.
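
A minimal sketch of the graph-theoretic decomposition step: view a generalized plan's transition structure as a directed graph, split it into strongly connected components, and use the acyclic condensation to order per-component termination arguments. The tiny graph below is hypothetical and stands in for a real plan's control structure.

```python
# SCC decomposition of a plan's transition graph for hierarchical termination arguments.
import networkx as nx

G = nx.DiGraph([("init", "loop_a"), ("loop_a", "loop_a"),   # self-loop component
                ("loop_a", "loop_b"), ("loop_b", "loop_c"),
                ("loop_c", "loop_b"), ("loop_c", "goal")])

C = nx.condensation(G)                         # DAG of strongly connected components
for comp_id in nx.topological_sort(C):
    members = C.nodes[comp_id]["members"]
    cyclic = len(members) > 1 or any(G.has_edge(v, v) for v in members)
    # Each cyclic component needs its own progress (termination) argument;
    # the topological order then assembles a hierarchical argument for the plan.
    print(comp_id, sorted(members),
          "needs termination argument" if cyclic else "acyclic")
```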

We present a new restricted SVD-based CUR (RSVD-CUR) factorization for matrix triplets $(A, B, G)$ that aims to extract meaningful information by providing a low-rank approximation of the three matrices using a subset of their rows and columns. The proposed method utilizes the discrete empirical interpolation method (DEIM) to select the subset of rows and columns from the orthogonal and nonsingular matrices obtained through a restricted singular value decomposition of the matrix triplet. We explore the relationships between a DEIM-type RSVD-CUR factorization, a DEIM-type CUR factorization, and a DEIM-type generalized CUR decomposition, and provide an error analysis that establishes the accuracy of the RSVD-CUR decomposition within a factor of the approximation error of the restricted singular value decomposition of the given matrices. The RSVD-CUR factorization can be used in applications that require approximating one data matrix relative to two other given matrices. We discuss two such applications, namely multi-view dimension reduction and data perturbation problems in which a correlated noise matrix is added to the input data matrix. Our numerical experiments demonstrate the advantages of the proposed method over the standard CUR approximation in these scenarios.
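
A minimal numpy sketch of the DEIM index selection that underlies such factorizations: indices are chosen greedily from the leading singular vectors, then reused to form a plain CUR approximation $A \approx C\,U_c\,R$. The restricted SVD of the full triplet $(A, B, G)$ is not reproduced here; the ordinary SVD of a single matrix stands in for it.

```python
# DEIM index selection + standard CUR approximation (single-matrix illustration).
import numpy as np

def deim(V):
    """Greedy DEIM row selection from the columns of V (n x k)."""
    n, k = V.shape
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(V[p, :j], V[p, j])
        r = V[:, j] - V[:, :j] @ c             # interpolation residual
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))  # low rank...
A += 0.01 * rng.standard_normal(A.shape)                             # ...plus noise

k = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rows, cols = deim(U[:, :k]), deim(Vt[:k].T)
C, R = A[:, cols], A[rows, :]
Uc = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)        # core factor: A ~ C Uc R
print("relative CUR error:", np.linalg.norm(A - C @ Uc @ R) / np.linalg.norm(A))
```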

The generalized Lanczos trust-region (GLTR) method is one of the most popular approaches for solving the large-scale trust-region subproblem (TRS). Recently, Jia and Wang [Z. Jia and F. Wang, \emph{SIAM J. Optim., 31 (2021), pp. 887--914}] considered the convergence of this method and established some {\it a priori} error bounds on the residual, the solution, and the Lagrange multiplier. In this paper, we revisit the convergence of the GLTR method and improve these bounds. First, we establish a sharper upper bound on the residual. Second, we give a new bound on the distance between the approximate and exact solutions, and show that the convergence of the approximation does not depend on the associated spectral separation. Third, we present some non-asymptotic bounds for the convergence of the Lagrange multiplier, and define a factor that plays an important role in this convergence. Numerical experiments demonstrate the effectiveness of our theoretical results.
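
For context, a dense sketch of the subproblem at the heart of GLTR: minimize $g^T p + \tfrac{1}{2} p^T H p$ subject to $\|p\| \le \Delta$. The version below solves it exactly via an eigendecomposition and a bisection on the Lagrange multiplier, covering the easy case only; GLTR itself works on the Lanczos tridiagonalization of a large $H$ rather than on $H$ directly.

```python
# Exact easy-case trust-region subproblem solver for a small dense H.
import numpy as np

def trs(H, g, delta):
    w, Q = np.linalg.eigh(H)
    gq = Q.T @ g
    pnorm = lambda lam: np.linalg.norm(gq / (w + lam))
    if w[0] > 0 and pnorm(0.0) <= delta:       # unconstrained minimizer is interior
        return Q @ (-gq / w), 0.0
    lo = max(0.0, -w[0]) + 1e-12               # secular equation: ||p(lam)|| = delta
    hi = lo + 1.0
    while pnorm(hi) > delta:                   # bracket the multiplier
        hi *= 2.0
    for _ in range(100):                       # bisection (pnorm is decreasing in lam)
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if pnorm(lam) > delta else (lo, lam)
    return Q @ (-gq / (w + lam)), lam

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 50)); H = 0.5 * (H + H.T)
g = rng.standard_normal(50)
p, lam = trs(H, g, 0.5)
print("||p|| =", np.linalg.norm(p), " lambda =", lam)
```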

Functional principal component analysis (FPCA) is an important technique for dimension reduction in functional data analysis (FDA). The classical FPCA method is based on the Karhunen-Lo\`{e}ve expansion, which assumes a linear structure of the observed functional data. However, this assumption may not always be satisfied, and the FPCA method can become inefficient when the data deviate from the linear assumption. In this paper, we propose a novel FPCA method suitable for data with a nonlinear structure, based on a neural network approach. We construct networks that can be applied to functional data and explore the corresponding universal approximation property. The main use of our proposed nonlinear FPCA method is curve reconstruction. We conduct a simulation study to evaluate the performance of our method, and apply it to two real-world data sets to further demonstrate its superiority.
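
A minimal sketch of the nonlinear-FPCA idea via an autoencoder: each curve, observed on a common grid, is encoded to a low-dimensional score vector and decoded back, playing the role that truncated Karhunen-Lo\`{e}ve scores play in linear FPCA. The architecture and synthetic data below are hypothetical stand-ins for the paper's networks.

```python
# Autoencoder-style nonlinear FPCA sketch: scores = enc(curve), reconstruction = dec(scores).
import torch, torch.nn as nn

torch.manual_seed(0)
n_curves, n_grid, n_comp = 500, 100, 2
t = torch.linspace(0, 1, n_grid)
scores = torch.randn(n_curves, 2)
# Curves with a nonlinear (phase + amplitude) dependence on the latent scores.
X = torch.sin(2 * torch.pi * (t + scores[:, :1])) * (1 + 0.3 * scores[:, 1:])

enc = nn.Sequential(nn.Linear(n_grid, 32), nn.Tanh(), nn.Linear(32, n_comp))
dec = nn.Sequential(nn.Linear(n_comp, 32), nn.Tanh(), nn.Linear(32, n_grid))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

for epoch in range(2000):
    recon = dec(enc(X))
    loss = ((recon - X) ** 2).mean()           # curve-reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
print("mean reconstruction error:", loss.item())
```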

We consider the problem of testing the martingale difference hypothesis for univariate strictly stationary time series by implementing a novel test for conditional mean independence based on the concept of martingale difference divergence. The martingale difference divergence function measures the degree to which a variable is conditionally mean dependent upon its past values: specifically, it computes a regularized norm of the covariance between the current value of the variable and the characteristic function of its past values. In this paper, we use this concept, along with the theoretical framework of generalized spectral density, to construct a Ljung-Box-type test for the martingale difference hypothesis. In addition to the results obtained with the implementation of the test statistic, we establish some asymptotic results for martingale difference divergence in the time series framework.
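
A minimal sketch of the sample martingale difference divergence at a given lag, using the V-statistic form $\mathrm{MDD}_n^2(Y \mid X) = -\frac{1}{n^2} \sum_{i,j} (Y_i - \bar{Y})(Y_j - \bar{Y})\,|X_i - X_j|$. Lag-wise values of this kind are the ingredients a Ljung-Box-type statistic aggregates; the paper's exact weighting and critical values are not reproduced here.

```python
# Lag-wise sample MDD^2 for two illustrative series: one martingale difference, one not.
import numpy as np

def mdd2(Y, X):
    """Sample MDD^2 of Y given X (1-d arrays of equal length)."""
    yc = Y - Y.mean()
    D = np.abs(X[:, None] - X[None, :])        # pairwise distances of conditioning values
    return -(yc[:, None] * yc[None, :] * D).mean()

rng = np.random.default_rng(0)
e = rng.standard_normal(1001)
mds = e[1:] * e[:-1]                           # a (dependent) martingale difference sequence
ar = np.empty(1000); ar[0] = e[0]
for t in range(1, 1000):                       # AR(1): conditionally mean-dependent on its past
    ar[t] = 0.5 * ar[t - 1] + e[t]

for name, y in [("martingale difference", mds), ("AR(1)", ar)]:
    vals = [mdd2(y[j:], y[:-j]) for j in (1, 2, 3)]
    print(name, "lag-wise MDD^2:", np.round(vals, 4))
```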
