
Data visualisation helps in understanding data represented by multiple variables, also called features, stored in a large matrix in which individuals occupy the rows and variable values the columns. These data structures are frequently called multidimensional spaces. In this paper, we illustrate ways of employing the visual results of multidimensional projection algorithms to understand and fine-tune the parameters of their mathematical framework. Some of the mathematical tools common to these approaches are Laplacian matrices, Euclidean distance, cosine distance, and statistical methods such as the Kullback-Leibler divergence, employed to fit probability distributions and reduce dimensionality. Two of the relevant algorithms in the data visualisation field are t-distributed stochastic neighbour embedding (t-SNE) and Least-Square Projection (LSP). These algorithms can be used to understand a wide range of mathematical functions, including their impact on datasets. In this article, mathematical parameters of underlying techniques, such as Principal Component Analysis (PCA) behind t-SNE and mesh reconstruction methods behind LSP, are adjusted to reflect the properties afforded by the mathematical formulation. The results, supported by illustrations of the processes of LSP and t-SNE, are meant to help students understand the mathematics behind such methods, so that they can apply them in effective data analysis tasks across multiple applications.
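As a concrete illustration of the kind of parameter fine-tuning discussed above, the following sketch varies the perplexity of a scikit-learn t-SNE run with a PCA-based initialisation; the digits dataset and the parameter values are illustrative choices, not the paper's experimental setup.

```python
# Minimal sketch: varying t-SNE's perplexity with a PCA initialisation to
# study the effect on the projection. Dataset and values are illustrative.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)          # 64-dimensional feature matrix
X = X[:500]                                  # subset to keep the sketch fast

for perplexity in (5, 30, 50):               # effective neighbourhood size
    emb = TSNE(n_components=2,
               perplexity=perplexity,
               init="pca",                   # PCA initialisation, as discussed above
               random_state=0).fit_transform(X)
    print(perplexity, emb.shape)
```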

Related Content

Leave-one-out cross-validation (LOO-CV) is a popular method for estimating out-of-sample predictive accuracy. However, computing LOO-CV criteria can be computationally expensive due to the need to fit the model multiple times. In the Bayesian context, importance sampling provides a possible solution, but classical approaches can easily produce estimators with infinite variance, making them potentially unreliable. Here we propose and analyze a novel mixture estimator to compute Bayesian LOO-CV criteria. Our method retains the simplicity and computational convenience of classical approaches while guaranteeing finite variance of the resulting estimators. Both theoretical and numerical results are provided to illustrate the improved robustness and efficiency. The computational benefits are particularly significant in high-dimensional problems, making it possible to perform Bayesian LOO-CV for a broader range of models. The proposed methodology is easily implementable in standard probabilistic programming software and has a computational cost roughly equivalent to fitting the original model once.
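For context, here is a minimal sketch of the classical importance-sampling LOO estimator that the abstract contrasts against: with posterior draws as the proposal, it reduces to a harmonic mean of per-draw likelihoods and is exactly the baseline whose variance can be infinite. The array layout and the name `log_lik` are assumptions; the paper's mixture estimator is not reproduced here.

```python
# Classical importance-sampling LOO from posterior draws (the baseline with
# potentially infinite variance). log_lik[s, i] = log p(y_i | theta_s).
import numpy as np
from scipy.special import logsumexp

def is_loo_elpd(log_lik):
    """Self-normalised IS estimate of the expected log pointwise predictive density."""
    S = log_lik.shape[0]
    # p(y_i | y_{-i}) ~ harmonic mean of p(y_i | theta_s) over posterior draws
    loo_lpd = np.log(S) - logsumexp(-log_lik, axis=0)
    return loo_lpd.sum()

rng = np.random.default_rng(0)
print(is_loo_elpd(rng.normal(-1.0, 0.3, size=(4000, 50))))  # toy input
```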

This paper studies the impact of the bootstrap procedure on the eigenvalue distributions of the sample covariance matrix under a high-dimensional factor structure. We provide asymptotic distributions for the top eigenvalues of the bootstrapped sample covariance matrix under mild conditions. After bootstrapping, the spiked eigenvalues, which are driven by common factors, converge weakly to Gaussian limits after proper scaling and centralization. However, the largest non-spiked eigenvalue is mainly determined by the order statistics of the bootstrap resampling weights, and follows an extreme value distribution. Based on the disparate behavior of the spiked and non-spiked eigenvalues, we propose innovative methods to test the number of common factors. In simulations and a real data example, the proposed methods are the only ones that perform reliably and convincingly in the presence of both weak factors and cross-sectionally correlated errors. Our technical details contribute to random matrix theory on spiked covariance models with convexly decaying density and unbounded support, or with general elliptical distributions.
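A minimal numerical sketch of the object under study: resampling the rows of a factor-structured data matrix and recomputing the top eigenvalues of the sample covariance. The one-factor model, dimensions, and number of bootstrap replicates are illustrative assumptions, not the paper's settings.

```python
# Empirical look at how bootstrapping perturbs the top eigenvalues of a
# sample covariance matrix under a simple one-factor model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 100
factors = rng.normal(size=(n, 1))
loadings = rng.normal(size=(1, p))
X = factors @ loadings + rng.normal(size=(n, p))   # one spiked direction + noise

def top_eigs(data, k=3):
    cov = np.cov(data, rowvar=False)
    return np.sort(np.linalg.eigvalsh(cov))[::-1][:k]

print("original:      ", top_eigs(X))
boot = np.array([top_eigs(X[rng.integers(0, n, n)]) for _ in range(200)])
print("bootstrap mean:", boot.mean(axis=0))   # spiked eigenvalue: Gaussian-type spread
print("bootstrap std: ", boot.std(axis=0))    # non-spiked: driven by resampling weights
```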

We construct quantum algorithms to compute the solution and/or physical observables of nonlinear ordinary differential equations (ODEs) and nonlinear Hamilton-Jacobi equations (HJE) via linear representations or exact mappings between nonlinear ODEs/HJE and linear partial differential equations (the Liouville equation and the Koopman-von Neumann equation). The connection between the linear representations and the original nonlinear system is established through the Dirac delta function or the level set mechanism. We compare methods based on quantum linear systems algorithms with quantum simulation methods arising from different numerical approximations, including finite difference discretisations and Fourier spectral discretisations, for the two linear representations; the results show that the quantum simulation methods usually give the best performance in time complexity. We also propose the Schr\"odinger framework to solve the Liouville equation for the HJE, since it can be recast as the semiclassical limit of the Wigner transform of the Schr\"odinger equation. A comparison between the Schr\"odinger and the Liouville frameworks is also made.
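To make the linear-representation idea concrete, the following purely classical sketch solves the Liouville transport equation associated with a nonlinear ODE $dx/dt = f(x)$ using a first-order upwind finite-difference scheme; a quantum algorithm would discretise the same linear operator. The logistic $f$, grid, and initial density are illustrative assumptions.

```python
# The nonlinear ODE dx/dt = f(x) is lifted to the *linear* transport PDE
#   d(rho)/dt + d(f(x) rho)/dx = 0,
# solved here with a first-order upwind scheme. Everything is classical and
# illustrative; quantum methods discretise this same linear representation.
import numpy as np

f = lambda x: x * (1.0 - x)                 # logistic ODE as the nonlinear example
x = np.linspace(-0.5, 1.5, 400)
dx = x[1] - x[0]
rho = np.exp(-((x - 0.2) ** 2) / 0.002)     # initial density around x = 0.2
rho /= rho.sum() * dx

dt = 0.4 * dx / np.abs(f(x)).max()          # CFL-limited time step
for _ in range(2000):
    flux = f(x) * rho
    # upwind differencing according to the local sign of the velocity f(x)
    drho = np.where(f(x) > 0,
                    flux - np.roll(flux, 1),
                    np.roll(flux, -1) - flux) / dx
    rho = rho - dt * drho

print("mass:", rho.sum() * dx, "mean:", (x * rho).sum() * dx)  # mass ~1, mean -> 1
```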

Assessing goodness-of-fit is challenging because theoretically there is no uniformly most powerful test, whereas in practice the question `what would be a preferable default test?' is important to applied statisticians. To address this so-called omnibus testing problem, this paper considers the class of reweighted Anderson-Darling tests and makes two contributions. The first is to provide a geometric understanding of the problem by establishing an explicit one-to-one correspondence between the weights and the focal directions in which distributions under the alternative hypothesis deviate from the null. It is argued that the weights producing the test statistic with minimum variance can serve as a general-purpose test. In addition, this default, minimum-variance test is found to be practically equivalent to the Zhang test, which has been commonly perceived as powerful. The second contribution is to establish new large-sample results. It is shown that, like the Anderson-Darling statistic, the minimum-variance test statistic under the null has the same distribution as a weighted sum of an infinite number of independent squared normal random variables. These theoretical results are useful for large-sample approximations. Finally, the paper concludes with a few remarks, including how the present approach can be extended to create new multinomial goodness-of-fit tests.
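A small sketch of the reweighted Anderson-Darling family itself: the statistic compares the empirical CDF of probability-integral-transformed data with the uniform CDF under a chosen weight function $w$. The midpoint-rule integration and the two example weights (the classical Anderson-Darling and Cramér-von Mises choices) are illustrative; the paper's minimum-variance weight is not reproduced here.

```python
# Reweighted Anderson-Darling statistic:
#   A^2_w = n * integral_0^1 (F_n(t) - t)^2 w(t) dt,
# evaluated by a midpoint rule on probability-integral-transformed data.
import numpy as np

def weighted_ad(u, w, m=2000):
    n = len(u)
    t = (np.arange(m) + 0.5) / m
    Fn = np.searchsorted(np.sort(u), t, side="right") / n   # empirical CDF at t
    return n * np.mean((Fn - t) ** 2 * w(t))

rng = np.random.default_rng(0)
u = rng.uniform(size=200)                                   # PIT values under the null
print(weighted_ad(u, lambda t: 1.0 / (t * (1 - t))))        # classical A-D weight
print(weighted_ad(u, lambda t: np.ones_like(t)))            # Cramér-von Mises weight
```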

In this paper, we consider function-indexed normalized weighted integrated periodograms for equidistantly sampled multivariate continuous-time state space models, which are multivariate continuous-time ARMA processes. The sampling distance is fixed and the driving L\'evy process has at least a finite fourth moment. Under different assumptions on the function space and on the moments of the driving L\'evy process we derive a central limit theorem for the function-indexed normalized weighted integrated periodogram; depending on the setting, either the assumption on the function space or the moment assumption on the L\'evy process is the weaker one. Furthermore, we show weak convergence, both in the space of continuous functions and in the dual space, to a Gaussian process, and give an explicit representation of the covariance function. The results can be used to derive the asymptotic behavior of the Whittle estimator and to construct goodness-of-fit test statistics such as the Grenander-Rosenblatt statistic and the Cram\'er-von Mises statistic. We present the exact limit distributions of both statistics and demonstrate their performance through a simulation study.
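The following sketch computes a function-indexed integrated periodogram for a single equidistantly sampled series; an AR(1) path stands in for a sampled CARMA process, and the test function $g$ is an arbitrary illustrative choice.

```python
# Integrated periodogram: the periodogram I_n(omega) at the Fourier
# frequencies is integrated against a test function g (Riemann sum).
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = np.zeros(n)
for t in range(1, n):                        # equidistant samples of an AR(1) path
    x[t] = 0.7 * x[t - 1] + rng.normal()

omega = 2 * np.pi * np.arange(1, n // 2) / n                   # Fourier frequencies
I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)     # periodogram

g = lambda w: np.cos(w)                                        # test function index
integrated = 2 * np.pi / n * np.sum(g(omega) * I)              # Riemann-sum value
print(integrated)
```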

We introduce two new tools to assess the validity of statistical distributions. These tools are based on components derived from a new statistical quantity, the comparison curve. The first tool is a graphical representation of these components on a bar plot (B plot), which can provide a detailed appraisal of the validity of the statistical model, in particular when supplemented by acceptance regions related to the model. The knowledge gained from this representation can sometimes suggest an existing goodness-of-fit test to supplement this visual assessment with a control of the type I error. Otherwise, an adaptive test may be preferable, and the second tool is the combination of these components to produce a powerful $\chi^2$-type goodness-of-fit test. Because the number of these components can be large, we introduce a new selection rule to decide, in a data-driven fashion, how many of them to take into consideration. In a simulation study, our goodness-of-fit tests are seen to be power-wise competitive with the best solutions that have been recommended in the context of a fully specified model, as well as when some parameters must be estimated. Practical examples show how to use these tools to derive principled information about where the model departs from the data.
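A sketch of the component-combination mechanics, in the spirit of Neyman smooth tests with a Schwarz-type (BIC) selection rule; the paper's comparison-curve components and its new selection rule differ, so this only illustrates how components can be combined into a data-driven $\chi^2$-type statistic.

```python
# Orthonormal Legendre score components combined into a chi^2-type statistic,
# with a BIC-style data-driven choice of how many components to use.
import numpy as np
from numpy.polynomial.legendre import legval

def components(u, K):
    """b_k = sqrt(n) * mean phi_k(u), phi_k orthonormal on (0, 1)."""
    n = len(u)
    z = 2 * u - 1                                # map (0,1) -> (-1,1)
    b = []
    for k in range(1, K + 1):
        c = np.zeros(k + 1); c[k] = 1.0
        phi = legval(z, c) * np.sqrt(2 * k + 1)  # normalised Legendre polynomial
        b.append(phi.mean() * np.sqrt(n))
    return np.array(b)

def data_driven_stat(u, K=10):
    b = components(u, K)
    n = len(u)
    cum = np.cumsum(b ** 2)                           # chi^2-type partial sums
    penal = cum - np.arange(1, K + 1) * np.log(n)     # Schwarz-type penalty
    k_star = int(np.argmax(penal)) + 1
    return cum[k_star - 1], k_star

rng = np.random.default_rng(0)
print(data_driven_stat(rng.uniform(size=300)))        # null: small statistic
print(data_driven_stat(rng.beta(2, 2, size=300)))     # alternative: larger
```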

Interacting particle systems undergoing repeated mutation and selection steps model genetic evolution, and also describe a broad class of sequential Monte Carlo methods. The genealogical tree embedded in the system is important in both applications. Under neutrality, when the fitnesses of particles are independent of those of their parents, rescaled genealogies are known to converge to Kingman's coalescent. Recent work has established convergence under non-neutrality, but only for finite-dimensional distributions. We prove weak convergence of non-neutral genealogies on the space of c\`adl\`ag paths under standard assumptions, enabling analysis of the whole genealogical tree.
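To make the embedded genealogy concrete, here is a minimal bootstrap-particle-filter sketch that records the ancestor indices produced by each selection step and traces lineages back to time zero; the Gaussian mutation step and fitness function are illustrative assumptions.

```python
# Tracking the genealogy embedded in repeated mutation/selection steps.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 50
x = rng.normal(size=N)
ancestors = np.empty((T, N), dtype=int)

for t in range(T):
    w = np.exp(-0.5 * x ** 2)                    # selection: fitness weights
    a = rng.choice(N, size=N, p=w / w.sum())     # multinomial resampling
    ancestors[t] = a
    x = x[a] + 0.3 * rng.normal(size=N)          # mutation step

# Trace every surviving particle back to its time-0 ancestor
lineage = np.arange(N)
for t in range(T - 1, -1, -1):
    lineage = ancestors[t][lineage]
print("distinct founding ancestors:", len(np.unique(lineage)))  # coalescence
```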

Self-supervised depth learning from monocular images normally relies on the 2D pixel-wise photometric relation between temporally adjacent image frames. However, such methods neither fully exploit the 3D point-wise geometric correspondences, nor effectively tackle the ambiguities in the photometric warping caused by occlusions or illumination inconsistency. To address these problems, this work proposes the Density Volume Construction Network (DevNet), a novel self-supervised monocular depth learning framework that can consider 3D spatial information and exploit stronger geometric constraints among adjacent camera frustums. Instead of directly regressing the pixel value from a single image, our DevNet divides the camera frustum into multiple parallel planes and predicts the pointwise occlusion probability density on each plane. The final depth map is generated by integrating the density along the corresponding rays. During the training process, novel regularization strategies and loss functions are introduced to mitigate photometric ambiguities and overfitting. Without noticeably enlarging the model parameter size or running time, DevNet outperforms several representative baselines on both the KITTI-2015 outdoor dataset and the NYU-V2 indoor dataset. In particular, the root-mean-square deviation is reduced by around 4% with DevNet on both KITTI-2015 and NYU-V2 in the task of depth estimation. Code is available at https://github.com/gitkaichenzhou/DevNet.
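A sketch of the integration step described above, read as volume-rendering-style alpha compositing over the depth planes: per-plane occlusion densities are converted into stopping probabilities, and the depth map is their weighted integral along each ray. The random densities stand in for network predictions, and the exact compositing used by DevNet may differ.

```python
# Depth from per-plane occlusion densities via alpha compositing.
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 4, 4, 32                              # image size and number of planes
depths = np.linspace(0.5, 10.0, D)              # parallel planes in the frustum
sigma = rng.uniform(0, 2, size=(H, W, D))       # stand-in occlusion densities

delta = np.diff(depths, append=depths[-1] + (depths[-1] - depths[-2]))
alpha = 1.0 - np.exp(-sigma * delta)            # per-plane termination probability
trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
trans = np.concatenate([np.ones((H, W, 1)), trans[..., :-1]], axis=-1)
weights = trans * alpha                          # probability the ray stops at plane i

depth_map = (weights * depths).sum(axis=-1)      # integrate density along rays
print(depth_map.shape, depth_map.min(), depth_map.max())
```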

This study formally adapts the time-domain linear sampling method (TLSM) for ultrasonic imaging of stationary and evolving fractures in safety-critical components. The TLSM indicator is then applied to the laboratory test data of [22, 18] and the resulting reconstructions are compared to their frequency-domain counterparts. The results highlight the unique capability of the time-domain imaging functional for high-fidelity tracking of evolving damage, and its relative robustness to sparse and reduced-aperture data at moderate noise levels. A comparative analysis of the TLSM images against the multifrequency LSM maps of [22] further reveals that, thanks to full-waveform inversion in time and space, the TLSM generates images of remarkably higher quality from the same dataset.
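For readers unfamiliar with sampling-type indicators, the following sketch shows the generic machinery behind an LSM/TLSM indicator: a Tikhonov-regularised solve of an ill-posed near-field equation, where $1/\|g\|$ is large when the right-hand side is (nearly) in the range of the operator. The synthetic operator and right-hand sides are stand-ins for the measured waveform data used in the paper.

```python
# Generic sampling-method indicator: solve N g = phi_z with Tikhonov
# regularisation; 1/||g|| is large "inside" (phi_z near the range of N).
import numpy as np

rng = np.random.default_rng(0)
m = 80
U = np.linalg.qr(rng.normal(size=(m, m)))[0]
V = np.linalg.qr(rng.normal(size=(m, m)))[0]
N_op = U @ np.diag(np.exp(-0.3 * np.arange(m))) @ V.T   # compact, ill-posed operator

def indicator(phi_z, alpha=1e-8):
    """Tikhonov solve: g = (N^T N + alpha I)^{-1} N^T phi_z; return 1/||g||."""
    g = np.linalg.solve(N_op.T @ N_op + alpha * np.eye(m), N_op.T @ phi_z)
    return 1.0 / np.linalg.norm(g)

phi_in = N_op @ rng.normal(size=m)            # in the range of N ("inside")
phi_out = rng.normal(size=m)                  # far from the range ("outside")
print("inside :", indicator(phi_in))          # large indicator value
print("outside:", indicator(phi_out))         # small indicator value
```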

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
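A compact sketch of the two-stage recipe on one of the canonical problems above, matrix completion: spectral initialisation from the rescaled zero-filled matrix, followed by gradient descent on the factored least-squares objective. Sizes, sampling rate, and step size are illustrative assumptions.

```python
# Two-stage nonconvex matrix completion: spectral initialisation, then
# gradient descent on f(U, V) = ||P_Omega(U V^T - M)||_F^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
n, r, p_obs = 100, 3, 0.3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))    # rank-r ground truth
mask = rng.uniform(size=(n, n)) < p_obs                  # observed entries

# Stage 1: spectral initialisation from the rescaled zero-filled matrix
U0, s, V0t = np.linalg.svd(np.where(mask, M, 0.0) / p_obs)
U = U0[:, :r] * np.sqrt(s[:r])
V = V0t[:r].T * np.sqrt(s[:r])

# Stage 2: gradient descent on the factored objective (simultaneous update)
eta = 0.002
for _ in range(500):
    R = np.where(mask, U @ V.T - M, 0.0)                 # residual on observed entries
    U, V = U - eta * R @ V, V - eta * R.T @ U
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```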
