
Multivariate imputation by chained equations (MICE) is one of the most popular approaches to address missing values in a data set. This approach requires specifying a univariate imputation model for every variable under imputation. The specification of which predictors should be included in these univariate imputation models can be a daunting task. Principal component analysis (PCA) can simplify this process by replacing all of the potential imputation model predictors with a few components summarizing their variance. In this article, we extend the use of PCA with MICE to include a supervised aspect whereby information from the variables under imputation is incorporated into the principal component estimation. We conducted an extensive simulation study to assess the statistical properties of MICE with different versions of supervised dimensionality reduction and we compared them with the use of classical unsupervised PCA as a simpler dimensionality reduction technique.
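As a rough numpy sketch of the core idea (not the authors' implementation): impute one incomplete variable by regressing it on the leading principal component scores of the complete predictors, iterating MICE-style. The latent-factor data, the choice of three components, and the regression-mean update (rather than a proper posterior draw) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 10
f = rng.normal(size=n)                        # shared latent factor
X = f[:, None] + 0.5 * rng.normal(size=(n, p))
y_true = X[:, 0].copy()
mask = rng.random(n) < 0.2                    # ~20% missing in column 0
X[mask, 0] = np.nan

# initialize with the column mean, then iterate: PCA-compress the
# predictors and regress the incomplete variable on the component scores
X[mask, 0] = np.nanmean(X[:, 0])
for _ in range(10):
    Z = X[:, 1:] - X[:, 1:].mean(axis=0)      # center the complete predictors
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:3].T                     # first 3 principal component scores
    A = np.column_stack([np.ones(n), scores])
    beta, *_ = np.linalg.lstsq(A[~mask], X[~mask, 0], rcond=None)
    X[mask, 0] = A[mask] @ beta               # update the imputed values

mse = np.mean((X[mask, 0] - y_true[mask]) ** 2)
```

Because the predictors load on a common factor, the first few components carry most of the predictive information, and the imputation error is well below the marginal variance of the imputed variable.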

Related Content

In statistics, principal component analysis (PCA) is a method for projecting data from a higher-dimensional space into a lower-dimensional space while maximizing the variance captured along each dimension. Given a collection of points in two, three, or higher dimensions, a "best-fit" line can be defined as the line that minimizes the average squared distance from the points to the line. The next best-fit line can be chosen similarly among the directions perpendicular to the first. Repeating this process yields an orthogonal basis in which the individual dimensions of the data are uncorrelated. These basis vectors are called the principal components.
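The construction described above can be carried out directly with a singular value decomposition of the centered data; the anisotropic toy data below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# anisotropic point cloud: most variance along the first coordinate axis
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])
Xc = X - X.mean(axis=0)                 # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                         # rows: orthonormal principal directions
explained_var = s**2 / (len(Xc) - 1)    # variance captured by each component
```

The rows of `components` form the orthogonal basis of best-fit directions, and `explained_var` is non-increasing, reflecting the greedy variance-maximizing construction.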

In this paper, a two-sided variable-coefficient space-fractional diffusion equation with a fractional Neumann boundary condition is considered. To overcome the weak singularity caused by the nonlocal space-fractional differential operators, a fractional block-centered finite difference (BCFD) method on general nonuniform grids is proposed by introducing an auxiliary fractional flux variable and using piecewise linear interpolations. However, like other numerical methods, the proposed method still produces linear algebraic systems with unstructured dense coefficient matrices on general nonuniform grids. Consequently, traditional direct solvers such as Gaussian elimination require $\mathcal{O}(M^2)$ memory and $\mathcal{O}(M^3)$ computational work per time level, where $M$ is the number of spatial unknowns in the numerical discretization. To address this issue, we combine the well-known sum-of-exponentials (SOE) approximation technique with the fractional BCFD method to propose a fast fractional BCFD algorithm. Based upon Krylov subspace iterative methods, fast matrix-vector multiplications of the resulting coefficient matrices with any vector are developed, which can be carried out in only $\mathcal{O}(MN_{exp})$ operations per iteration, where $N_{exp}\ll M$ is the number of exponentials in the SOE approximation. Moreover, the coefficient matrices need not be generated explicitly; they can be stored in $\mathcal{O}(MN_{exp})$ memory by storing only a few coefficient vectors. Numerical experiments are provided to demonstrate the efficiency and accuracy of the method.
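The speedup mechanism behind the SOE approach can be illustrated on a scalar history sum: once the kernel is (approximated as) a short sum of decaying exponentials, the discrete convolution can be updated recursively with one running term per exponential, replacing an $\mathcal{O}(N^2)$ sum with $\mathcal{O}(N N_{exp})$ work. The kernel weights below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
# a kernel that is exactly a short sum of exponentials; in the SOE technique
# a power-law fractional kernel is approximated this way to a set tolerance
w = np.array([0.7, 0.25, 0.05])      # weights (illustrative)
lam = np.array([0.5, 2.0, 10.0])     # decay rates (illustrative)
x = rng.normal(size=N)

# direct O(N^2) evaluation of the history sum y_n = sum_{j<n} k(n-j) x_j
def k(m):
    return float((w * np.exp(-lam * m)).sum())

y_direct = np.array([sum(k(n - j) * x[j] for j in range(n)) for n in range(N)])

# fast O(N * N_exp) evaluation: one running history term per exponential
H = np.zeros(3)
y_fast = np.zeros(N)
for n in range(N):
    y_fast[n] = H.sum()                  # y_n read off the compressed history
    H = np.exp(-lam) * (H + w * x[n])    # fold x_n into each history term
```

The two evaluations agree to machine precision, since the recurrence is exact whenever the kernel is itself a sum of exponentials.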

In this paper we consider the numerical solution of fractional differential equations. In particular, we study a step-by-step graded mesh procedure based on an expansion of the vector field using orthonormal Jacobi polynomials. Under mild hypotheses, the proposed procedure achieves spectral accuracy. A few numerical examples are reported to confirm the theoretical findings.

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex optimization that sequentially minimizes a majorizing surrogate of the objective function in each block coordinate while the other block coordinates are held fixed. We consider a family of BMM algorithms for minimizing smooth nonconvex objectives, where each parameter block is constrained within a subset of a Riemannian manifold. We establish that this algorithm converges asymptotically to the set of stationary points, and attains an $\epsilon$-stationary point within $\widetilde{O}(\epsilon^{-2})$ iterations. In particular, the assumptions for our complexity results are completely Euclidean when the underlying manifold is a product of Euclidean or Stiefel manifolds, although our analysis makes explicit use of the Riemannian geometry. Our general analysis applies to a wide range of algorithms with Riemannian constraints: Riemannian MM, block projected gradient descent, optimistic likelihood estimation, geodesically constrained subspace tracking, robust PCA, and Riemannian CP-dictionary-learning. We experimentally validate that our algorithm converges faster than standard Euclidean algorithms applied to the Riemannian setting.
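A minimal instance of block minimization under Riemannian constraints (a special case of the BMM framework, with the objective itself as the trivial surrogate): maximizing $x^\top A y$ over two unit spheres, whose exact alternating block updates reduce to power iteration and converge to the leading singular pair.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))

# minimize f(x, y) = -x^T A y with each block on a unit sphere (a product of
# two simple Riemannian manifolds); each block subproblem is solved exactly:
# the minimizer over x for fixed y is A y normalized, and symmetrically for y
x = np.ones(8) / np.sqrt(8.0)
y = np.ones(5) / np.sqrt(5.0)
for _ in range(1000):
    x = A @ y
    x /= np.linalg.norm(x)
    y = A.T @ x
    y /= np.linalg.norm(y)

sigma_max = np.linalg.norm(A, 2)   # global optimum of x^T A y over the spheres
```

The iterates reach a stationary point that here is also the global optimum, the largest singular value of `A`.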

In this paper, a new two-relaxation-time regularized (TRT-R) lattice Boltzmann (LB) model for convection-diffusion equation (CDE) with variable coefficients is proposed. Within this framework, we first derive a TRT-R collision operator by constructing a new regularized procedure through the high-order Hermite expansion of non-equilibrium. Then a first-order discrete-velocity form of discrete source term is introduced to improve the accuracy of the source term. Finally and most importantly, a new first-order space-derivative auxiliary term is proposed to recover the correct CDE with variable coefficients. To evaluate this model, we simulate a classic benchmark problem of the rotating Gaussian pulse. The results show that our model has better accuracy, stability and convergence than other popular LB models, especially in the case of a large time step.
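For contrast with the TRT-R model, a bare-bones single-relaxation-time (BGK) D1Q2 lattice Boltzmann scheme for pure diffusion of a Gaussian pulse shows the basic collide-and-stream structure; the lattice size, relaxation time, and pulse width are illustrative, and no variable coefficients or source terms are included.

```python
import numpy as np

nx, steps, tau = 200, 100, 0.8           # lattice size, time steps, relaxation time
x = np.arange(nx)
C0 = np.exp(-0.5 * ((x - nx / 2) / 5.0) ** 2)   # initial Gaussian pulse
f = np.stack([C0 / 2, C0 / 2])           # populations for velocities +1 and -1

for _ in range(steps):
    C = f.sum(axis=0)                    # scalar field = zeroth moment
    feq = np.stack([C / 2, C / 2])       # equilibrium (no advection)
    f += (feq - f) / tau                 # BGK collision step
    f[0] = np.roll(f[0], 1)              # stream right-movers
    f[1] = np.roll(f[1], -1)             # stream left-movers

C = f.sum(axis=0)
```

Collision conserves the zeroth moment and periodic streaming merely relocates it, so total "mass" is conserved exactly while the pulse spreads and its peak decays.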

Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two different data forms, specific methodology for the related estimation is yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and the correct implementation of multitapering for processes sampled in different ways, e.g., continuously vs. on a regular grid.
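A one-dimensional analogue conveys the multitaper structure: window the data with several orthogonal tapers, form one eigenspectrum per taper, and average to reduce variance. The sine tapers used here are a simple orthogonal family (the classical choice is Slepian tapers), and the signal is a toy example.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=n)   # tone + noise

# sine tapers: an orthogonal taper family indexed by k
K = 6
tapers = np.array([
    np.sqrt(2 / (n + 1)) * np.sin(np.pi * (k + 1) * (t + 1) / (n + 1))
    for k in range(K)
])
freqs = np.fft.rfftfreq(n)
# average the K eigenspectra -> reduced-variance multitaper estimate
S = np.mean([np.abs(np.fft.rfft(x * w)) ** 2 for w in tapers], axis=0)
peak = freqs[np.argmax(S)]
```

The averaged estimate localizes the tone at frequency 0.2 while being much less erratic than a single raw periodogram.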

We investigate the performance of two approximation algorithms for the Hafnian of a nonnegative square matrix, namely the Barvinok and Godsil-Gutman estimators. We observe that, while there are examples of matrices for which these algorithms fail to provide a good approximation, the algorithms perform surprisingly well for adjacency matrices of random graphs. In most cases, the Godsil-Gutman estimator provides a far superior accuracy. For dense graphs, however, both estimators demonstrate a slow growth of the variance. For complete graphs, we show analytically that the relative variance $\sigma / \mu$ grows as a square root of the size of the graph. Finally, we simulate a Gaussian Boson Sampling experiment using the Godsil-Gutman estimator and show that the technique used can successfully reproduce low-order correlation functions.
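The Godsil-Gutman estimator itself is short to state: assign independent random signs $\epsilon_{ij} = -\epsilon_{ji}$ to the entries, and the determinant of the resulting skew-symmetric matrix, being the square of a Pfaffian, is a nonnegative unbiased estimator of the Hafnian. Here the Hafnian of the adjacency matrix of $K_4$ counts its 3 perfect matchings; the sample count is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def godsil_gutman(A, samples, rng):
    """Average of det(B) over sign samplings, B skew-symmetric with
    B_ij = eps_ij * sqrt(A_ij); E[det(B)] = haf(A) and det(B) = Pf(B)^2 >= 0."""
    n = len(A)
    ests = np.empty(samples)
    for s in range(samples):
        eps = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
        B = (eps - eps.T) * np.sqrt(A)
        ests[s] = np.linalg.det(B)
    return ests.mean()

A = np.ones((4, 4)) - np.eye(4)      # adjacency matrix of K4: haf(A) = 3
est = godsil_gutman(A, 20000, rng)
```

For this tiny complete graph the single-sample values are 1 or 9, so the Monte Carlo mean settles quickly near the exact value 3.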

This paper develops some theory of the matrix Dyson equation (MDE) for correlated linearizations and uses it to solve a problem on asymptotic deterministic equivalent for the test error in random features regression. The theory developed for the correlated MDE includes existence-uniqueness, spectral support bounds, and stability properties of the MDE. This theory is new for constructing deterministic equivalents for pseudoresolvents of a class of correlated linear pencils. In the application, this theory is used to give a deterministic equivalent of the test error in random features ridge regression, in a proportional scaling regime, wherein we have conditioned on both training and test datasets.
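The application setting, random features ridge regression with a fixed random first layer, can be set up in a few lines; the ReLU features, target function, dimensions, and ridge parameter below are illustrative assumptions, and no deterministic equivalent is computed here.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, m, lam = 400, 10, 200, 1e-2        # samples, input dim, features, ridge
a = rng.normal(size=d)
a /= np.linalg.norm(a)

def make(n):
    X = rng.normal(size=(n, d))
    return X, np.sin(X @ a)              # smooth single-index target

Xtr, ytr = make(n)                       # training set
Xte, yte = make(n)                       # independent test set
W = rng.normal(size=(d, m)) / np.sqrt(d) # fixed random first-layer weights
phi = lambda X: np.maximum(X @ W, 0.0)   # ReLU random feature map

F = phi(Xtr)
beta = np.linalg.solve(F.T @ F + lam * np.eye(m), F.T @ ytr)  # ridge solve
test_mse = np.mean((phi(Xte) @ beta - yte) ** 2)
```

The quantity whose large-dimensional behavior the paper characterizes deterministically is precisely this random `test_mse`, conditioned on the training and test data.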

Quantum computing devices are believed to be powerful in solving the prime factorization problem, which is at the heart of widely deployed public-key cryptographic tools. However, implementing Shor's quantum factorization algorithm requires significant resources, scaling linearly with the size of the number being factored; taking into account the overhead required for quantum error correction, an estimated 20 million (noisy) physical qubits would be needed to factor a 2048-bit RSA key in 8 hours. A recent proposal by Yan et al. claims that the factorization problem can be solved with sublinear quantum resources. As we demonstrate in our work, this proposal lacks a systematic analysis of the computational complexity of the classical part of the algorithm, which exploits Schnorr's lattice-based approach. We provide several examples illustrating the need for additional resource analysis for the proposed quantum factorization algorithm.
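The role of order finding, the step Shor's algorithm accelerates, can be seen in a purely classical toy factorization, feasible only for tiny $N$ because the brute-force order computation is exponential in the bit length:

```python
import math

def order(a, N):
    """Multiplicative order of a modulo N (brute force, exponential time)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Factor N via the order of a: if r is even and a^(r/2) != -1 mod N,
    then gcd(a^(r/2) - 1, N) is a nontrivial factor."""
    r = order(a, N)
    if r % 2:
        return None                      # odd order: retry with another a
    y = pow(a, r // 2, N)
    p = math.gcd(y - 1, N)
    if 1 < p < N:
        return sorted((p, N // p))
    return None                          # trivial gcd: retry with another a

factors = shor_classical(15, 7)          # order of 7 mod 15 is 4
```

Shor's algorithm replaces the exponential-time `order` call with a polynomial-time quantum subroutine; everything else stays classical.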

This manuscript investigates the conservation laws of the incompressible Navier-Stokes equations (NSEs), written in the energy-momentum-angular-momentum conserving (EMAC) formulation, after linearization by two-level methods. With appropriate correction steps (e.g., Stokes/Newton corrections), we show that the two-level methods discretized from the EMAC NSEs preserve momentum and angular momentum, and asymptotically preserve energy. Error estimates and (asymptotic) conservation properties are analyzed and obtained, and numerical experiments are conducted to validate the theoretical results, chiefly confirming that the two-level linearized methods indeed (almost) retain the conservation laws. Moreover, experimental error estimates and optimal convergence rates for two newly defined types of pressure approximation in the EMAC NSEs are also obtained.
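A toy analogue of conservation-by-formulation: for a linear system $u' = S u$ with skew-symmetric $S$, the "energy" $\|u\|^2$ is an invariant of the flow, and the implicit midpoint rule conserves it exactly at the discrete level. This illustrates how a conserving formulation plus a compatible discretization retains invariants; it is not the EMAC scheme itself.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
M = rng.normal(size=(n, n))
S = M - M.T                      # skew-symmetric: u^T S u = 0, so ||u||^2 is invariant
u = rng.normal(size=n)
E0 = u @ u                       # conserved "energy" at t = 0

dt, I = 0.1, np.eye(n)
for _ in range(100):
    # implicit midpoint step: (I - dt/2 S) u_new = (I + dt/2 S) u_old
    u = np.linalg.solve(I - 0.5 * dt * S, (I + 0.5 * dt * S) @ u)
E1 = u @ u                       # energy after 100 steps
```

The update is a Cayley transform of a skew-symmetric matrix, hence orthogonal, so the energy is preserved to rounding error; an explicit scheme would drift.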

Rule-based reasoning is an essential part of human intelligence prominently formalized in artificial intelligence research via logic programs. Describing complex objects as the composition of elementary ones is a common strategy in computer science and science in general. The author has recently introduced the sequential composition of logic programs in the context of logic-based analogical reasoning and learning in logic programming. Motivated by these applications, in this paper we construct a qualitative and algebraic notion of syntactic logic program similarity from sequential decompositions of programs. We then show how similarity can be used to answer queries across different domains via a one-step reduction. In a broader sense, this paper is a further step towards an algebraic theory of logic programming.
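The elementary building block of logic program semantics that composition-based analyses start from, the immediate-consequence operator $T_P$, fits in a few lines for propositional programs; the example program is illustrative.

```python
def t_p(program, facts):
    """Immediate-consequence operator: add each rule head whose body holds."""
    return facts | {head for head, body in program if body <= facts}

program = [("q", {"p"}), ("r", {"q"})]   # the rules  q :- p.  and  r :- q.
facts = {"p"}
while True:                              # iterate T_P to its least fixed point
    new = t_p(program, facts)
    if new == facts:
        break
    facts = new
```

Iterating `t_p` from the given facts reaches the least model of the program, here deriving `q` and then `r` from `p`.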
