
This paper deals with Elliptical Wishart distributions - which generalize the Wishart distribution - in the context of signal processing and machine learning. Two algorithms to compute the maximum likelihood estimator (MLE) are proposed: a fixed-point algorithm and a Riemannian optimization method based on the derived information geometry of Elliptical Wishart distributions. The existence and uniqueness of the MLE are characterized, as well as the convergence of both estimation algorithms. Statistical properties of the MLE are also investigated, such as consistency, asymptotic normality, and an intrinsic version of Fisher efficiency. On the statistical learning side, novel classification and clustering methods are designed. For the $t$-Wishart distribution, the performance of the MLE and of the statistical learning algorithms is evaluated on both simulated and real EEG and hyperspectral data, showcasing the interest of the proposed methods.
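To illustrate the fixed-point flavor of such MLE computations, here is a minimal sketch of the classic fixed-point iteration for the scatter-matrix MLE of a centered multivariate Student-$t$ sample with known degrees of freedom. This is a well-known analogue, not the Elliptical Wishart iteration of the paper itself; the function name and all settings are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_scatter_mle(X, nu, n_iter=200, tol=1e-10):
    """Fixed-point iteration for the scatter-matrix MLE of a centered
    multivariate Student-t sample X (n x p) with known dof nu:
        Sigma <- (1/n) sum_i w_i x_i x_i^T,
        w_i = (p + nu) / (nu + x_i^T Sigma^{-1} x_i).
    Classic t-MLE fixed point, shown as a generic analogue of fixed-point
    MLE schemes; it is NOT the Elliptical Wishart iteration of the paper."""
    n, p = X.shape
    S = np.cov(X.T, bias=True)                    # initial guess
    for _ in range(n_iter):
        q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)  # Mahalanobis^2
        w = (p + nu) / (nu + q)
        S_new = (w[:, None, None] * np.einsum('ij,ik->ijk', X, X)).mean(axis=0)
        if np.linalg.norm(S_new - S) < tol:
            return S_new
        S = S_new
    return S
```

Each iteration re-weights the sample outer products by how unlikely each point is under the current scatter estimate, which is what makes the scheme robust to the heavy tails.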

Related content

Computing the crossing number of a graph is one of the most classical problems in computational geometry. Both it and numerous variants of the problem have been studied, and overcoming their frequent computational difficulty is an active area of research. Recently in particular, there has been increased effort to show and understand the parameterized tractability of various crossing number variants. While many results in this direction use a similar approach, a general framework remains elusive. We suggest such a framework that generalizes important previous results, and can even be used to show the tractability of deciding crossing number variants for which this was stated as an open problem in previous literature. Our framework targets variants that prescribe a partial predrawing and some kind of topological restrictions on crossings. Additionally, to provide evidence that previous approaches for the partially predrawn crossing number problem do not generalize to geometric restrictions, we show a new, more constrained hardness result: in particular, we show W-hardness of deciding Straight-Line Planarity Extension parameterized by the number of missing edges.

In this note, we derive closed-form formulae for the moments of Student's t-distribution, in the one-dimensional case as well as in higher dimensions, through a unified probability framework. Interestingly, the closed-form expressions for the moments of Student's t-distribution can be written in terms of the familiar Gamma function, Kummer's confluent hypergeometric function, and the hypergeometric function.
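For the one-dimensional case, the standard closed form for the raw moments in terms of the Gamma function can be sketched directly; the function name is mine, and the formula is the well-known one (odd moments vanish, even moments of order $k < \nu$ equal $\nu^{k/2}\,\Gamma\!\big(\tfrac{k+1}{2}\big)\Gamma\!\big(\tfrac{\nu-k}{2}\big)/\big(\sqrt{\pi}\,\Gamma(\nu/2)\big)$).

```python
from math import gamma, pi, sqrt

def t_raw_moment(k, nu):
    """Raw moment E[T^k] of a one-dimensional Student's t-distribution with
    nu degrees of freedom, via the closed form in terms of the Gamma function.
    The moment exists only for k < nu; odd moments vanish by symmetry."""
    if k >= nu:
        raise ValueError("moment of order k exists only for k < nu")
    if k % 2 == 1:
        return 0.0
    return (nu ** (k / 2) * gamma((k + 1) / 2) * gamma((nu - k) / 2)
            / (sqrt(pi) * gamma(nu / 2)))
```

As a sanity check, for $k=2$ this reduces to the familiar variance $\nu/(\nu-2)$, and for $k=4$ to $3\nu^2/((\nu-2)(\nu-4))$.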

We design and investigate a variety of multigrid solvers for high-order local discontinuous Galerkin methods applied to elliptic interface and multiphase Stokes problems. Using the template of a standard multigrid V-cycle, we consider a variety of element-wise block smoothers, including Jacobi, multi-coloured Gauss-Seidel, processor-block Gauss-Seidel, and, with special interest, smoothers based on sparse approximate inverse (SAI) methods. In particular, we develop SAI methods that: (i) balance the smoothing of velocity and pressure variables in Stokes problems; and (ii) robustly handle high-contrast viscosity coefficients in multiphase problems. Across a broad range of two- and three-dimensional test cases, including Poisson, elliptic interface, steady-state Stokes, and unsteady Stokes problems, we examine a multitude of multigrid smoother and solver combinations. In every case, there is at least one approach that matches the performance of classical geometric multigrid algorithms, e.g., 4 to 8 iterations to reduce the residual by 10 orders of magnitude. We also discuss their relative merits with regard to simplicity, robustness, computational cost, and parallelisation.
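The V-cycle template with a Jacobi smoother can be sketched in its simplest setting: a matrix-free geometric multigrid V-cycle for the 1D Poisson problem with weighted Jacobi smoothing. This is a textbook illustration of the cycle structure, assuming none of the paper's DG discretization, SAI smoothers, or Stokes couplings; all function names are mine.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Laplacian: (A u)_i = (2 u_i - u_{i-1} - u_{i+1}) / h^2,
    with homogeneous Dirichlet boundaries (ghost values are zero)."""
    Au = 2.0 * u.copy()
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / h**2

def jacobi(u, f, h, nu, omega=2.0 / 3.0):
    """nu sweeps of weighted Jacobi; the diagonal of A is 2 / h^2."""
    for _ in range(nu):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def vcycle(u, f, h, nu=3):
    """One multigrid V-cycle for A u = f on n = 2^k - 1 interior points."""
    n = len(u)
    if n == 1:                                   # coarsest grid: exact solve
        return np.array([f[0] * h**2 / 2.0])
    u = jacobi(u, f, h, nu)                      # pre-smoothing
    r = f - apply_A(u, h)
    nc = (n - 1) // 2
    rc = (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2]) / 4.0   # full-weighting restriction
    ec = vcycle(np.zeros(nc), rc, 2.0 * h, nu)   # recursive coarse-grid correction
    e = np.zeros(n)
    e[1::2] = ec                                 # coincident coarse points
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])         # linear interpolation
    return jacobi(u + e, f, h, nu)               # post-smoothing
```

On this model problem a handful of V-cycles reduces the residual by many orders of magnitude, which is the benchmark behavior the abstract refers to.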

This paper presents a fast and robust numerical method for reconstructing point-like sources in the time-harmonic Maxwell's equations given Cauchy data at a fixed frequency. This is an electromagnetic inverse source problem with broad applications, such as antenna synthesis and design, medical imaging, and pollution source tracing. We introduce new imaging functions and a computational algorithm to determine the number of point sources, their locations, and associated moment vectors, even when these vectors have notably different magnitudes. The number of sources and their locations are estimated using significant peaks of the imaging functions, and the moment vectors are computed via simple explicit formulas. The theoretical analysis and stability of the imaging functions are investigated, where the main challenge lies in analyzing the behavior of the dot products between the columns of the imaginary part of the Green's tensor and the unknown moment vectors. Additionally, we extend our method to reconstruct small-volume sources using an asymptotic expansion of their radiated electric field. We provide numerical examples in three dimensions to demonstrate the performance of our method.
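The peak-based imaging idea can be conveyed with a scalar Helmholtz analogue (not the paper's Maxwell tensor formulation): synthesize data from a few point sources, then correlate it with the conjugated free-space Green's function at each test point; the imaging function peaks near the source locations. All names, geometry, and parameters below are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi                         # wavenumber, so the wavelength is 1

def green(x, z):
    """Scalar free-space Helmholtz Green's function e^{ik|x-z|} / (4 pi |x-z|)."""
    r = np.linalg.norm(x - z, axis=-1)
    return np.exp(1j * k * r) / (4 * np.pi * r)

# receivers scattered over a sphere of radius 10 around the sources
m = 400
v = rng.normal(size=(m, 3))
receivers = 10.0 * v / np.linalg.norm(v, axis=1, keepdims=True)

# two point sources with strengths of notably different magnitude
sources = np.array([[1.0, 0.0, 0.0], [-1.5, 0.5, 0.0]])
strengths = np.array([1.0, 5.0])

# synthetic data: superposition of the source fields at the receivers
data = sum(q * green(receivers, s) for q, s in zip(strengths, sources))

def imaging(z):
    """Backpropagation-type imaging function: correlate the measured data with
    the conjugated Green's function; significant peaks mark source locations."""
    return np.abs(np.conj(green(receivers, np.asarray(z))) @ data) / m
```

In the scalar case the cross terms between sources decay like $\mathrm{sinc}(k d)$ with the separation $d$, which is a simplified version of the dot-product analysis mentioned in the abstract.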

Most of the scientific literature on causal modeling considers the structural framework of Pearl and the potential-outcome framework of Rubin to be formally equivalent, and therefore interchangeably uses do-interventions and the potential-outcome subscript notation to write counterfactual outcomes. In this paper, we agnostically superimpose the two causal models to specify under which mathematical conditions structural counterfactual outcomes and potential outcomes need to, do not need to, can, or cannot be equal (almost surely or in law). Our comparison serves as a reminder that a structural causal model and a Rubin causal model compatible with the same observations do not have to coincide, and highlights real-world problems where they cannot even correspond. Then, we examine common claims and practices from the causal-inference literature in the light of these results. In doing so, we aim to clarify the relationship between the two causal frameworks and the interpretation of their respective counterfactuals.

A new variant of the GMRES method is presented for solving linear systems with the same matrix and subsequently obtained multiple right-hand sides. The new method retains the following properties of the classical GMRES algorithm. Both the basis of the search space and the basis of its image are kept orthonormal, which increases the robustness of the method. Moreover, there is no need to store both bases, since they are effectively represented within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but is more numerically robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can be easily adapted for flexible GMRES or GMRES with deflated restarting.
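For reference, the classical single right-hand-side GMRES that the abstract builds on can be sketched in a few lines: an Arnoldi process maintains an orthonormal basis of the Krylov subspace, and the residual is minimized via a small Hessenberg least-squares problem. This is the textbook algorithm, not the paper's multi-RHS variant; the implementation choices are mine.

```python
import numpy as np

def gmres(A, b, m=50, tol=1e-10):
    """Minimal restart-free GMRES: build an orthonormal Arnoldi basis V of the
    Krylov subspace K_m(A, b) and minimize ||b - A x|| over it by solving the
    small Hessenberg least-squares problem at each step."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    x = np.zeros(n)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # least-squares solve of the (j+2) x (j+1) Hessenberg system
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, res, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = V[:, :j + 1] @ y
        if np.linalg.norm(b - A @ x) < tol * beta:
            return x                              # converged
        if H[j + 1, j] < 1e-14:
            return x                              # happy breakdown
        V[:, j + 1] = w / H[j + 1, j]
    return x
```

The paper's contribution concerns reusing and extending such a basis across several right-hand sides while keeping both the search space and its image orthonormal, which this sketch does not attempt.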

This study addresses the challenge of solving the reduced biquaternion equality constrained least squares (RBLSE) problem. We develop algebraic techniques to derive both complex and real solutions for the RBLSE problem by utilizing the complex and real forms of reduced biquaternion matrices. Additionally, we conduct a perturbation analysis for the RBLSE problem and establish an upper bound for the relative forward error of these solutions. Numerical examples are presented to illustrate the effectiveness of the proposed approaches and to verify the accuracy of the established upper bound for the relative forward errors.
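The complex-form idea can be sketched for the unconstrained least-squares case: a reduced biquaternion $a+bi+cj+dk$ (with $i^2=-1$, $j^2=1$, $k=ij$) decomposes over the orthogonal idempotents $e_1=(1+j)/2$, $e_2=(1-j)/2$ into two complex components on which multiplication acts independently, so an RB least-squares problem splits into two ordinary complex ones. This mirrors the spirit of the paper's complex form but does not handle the equality constraint; all function names are mine.

```python
import numpy as np

# q = a + b i + c j + d k  =  z1 e1 + z2 e2, with e1 = (1+j)/2, e2 = (1-j)/2,
# z1 = (a+c) + (b+d) i  and  z2 = (a-c) + (b-d) i.
# Multiplication acts component-wise on (z1, z2).

def to_idempotent(a, b, c, d):
    return (a + c) + 1j * (b + d), (a - c) + 1j * (b - d)

def from_idempotent(z1, z2):
    return ((z1.real + z2.real) / 2, (z1.imag + z2.imag) / 2,
            (z1.real - z2.real) / 2, (z1.imag - z2.imag) / 2)

def rb_lstsq(A, B):
    """Least-squares solution of the reduced biquaternion system A X = B,
    where A = (Aa, Ab, Ac, Ad) and B likewise are 4-tuples of real arrays.
    The problem splits into two independent complex least-squares problems."""
    A1, A2 = to_idempotent(*A)
    B1, B2 = to_idempotent(*B)
    X1, *_ = np.linalg.lstsq(A1, B1, rcond=None)
    X2, *_ = np.linalg.lstsq(A2, B2, rcond=None)
    return from_idempotent(X1, X2)
```

Since $\|q\|^2 = (|z_1|^2 + |z_2|^2)/2$, minimizing the two complex residuals separately minimizes the total RB residual.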

Sequential positivity is often a necessary assumption for drawing causal inferences, such as through marginal structural modeling. Unfortunately, verification of this assumption can be challenging because it usually relies on multiple parametric propensity score models, which are unlikely to all be correctly specified. Therefore, we propose a new algorithm, called "sequential Positivity Regression Tree" (sPoRT), to check this assumption with greater ease under either static or dynamic treatment strategies. This algorithm also identifies the subgroups found to be violating this assumption, allowing for insights about the nature of the violations and potential solutions. We first present different versions of sPoRT based on either stratifying or pooling over time. We then illustrate its use in a real-life application to HIV-positive children in Southern Africa, with and without pooling over time. An R notebook showing how to use sPoRT is available at github.com/ArthurChatton/sPoRT-notebook.
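The underlying idea of flagging subgroups with extreme treatment probabilities can be sketched in a few lines, here with a plain stratification rather than the regression-tree machinery of sPoRT; the function name, thresholds, and simulation are all mine.

```python
import numpy as np

def flag_positivity_violations(strata, treatment, eps=0.05, min_size=30):
    """Flag covariate subgroups whose empirical propensity score
    P(treated | subgroup) falls outside [eps, 1 - eps]; such subgroups are
    candidate positivity violations. Returns (stratum, size, propensity)."""
    flags = []
    for s in np.unique(strata):
        mask = strata == s
        if mask.sum() < min_size:
            continue                  # too small to judge reliably
        p = treatment[mask].mean()
        if p < eps or p > 1 - eps:
            flags.append((s, int(mask.sum()), p))
    return flags
```

A tree-based version would replace the fixed strata with data-driven splits, which is precisely what lets sPoRT characterize the nature of the violating subgroups.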

In reinsurance, Poisson and negative binomial distributions are employed for modeling claim frequency. However, the incomplete data regarding reported incurred claims above a priority level presents challenges in estimation. This paper focuses on frequency estimation using Schnieper's framework for claim numbering. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above a priority at each year of development, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to manage uncertainty in parameter estimation and a case study comparing the assumptions and evaluating the impact of the bootstrap approach.
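The generic combination of a Poisson frequency MLE with a parametric bootstrap for its uncertainty can be sketched as follows; this does not reproduce Schnieper's development-year structure, and all names and the simulated data are mine.

```python
import numpy as np

rng = np.random.default_rng(42)

# yearly numbers of reported claims above the priority (simulated here)
true_lambda = 7.0
counts = rng.poisson(true_lambda, size=15)

# the MLE of the Poisson rate is simply the sample mean
lam_hat = counts.mean()

def bootstrap_ci(lam, n_years, n_boot=5000, level=0.95):
    """Parametric bootstrap: resample yearly counts from Poisson(lam),
    re-estimate the rate on each replicate, and take percentile bounds."""
    reps = rng.poisson(lam, size=(n_boot, n_years)).mean(axis=1)
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

lo, hi = bootstrap_ci(lam_hat, len(counts))
```

A negative binomial alternative would replace the Poisson sampling in both the model and the bootstrap resampling step, leaving the overall procedure unchanged.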

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, the development of statistical inference theories for the estimation of the various low-rank structures, which customarily play the role of signals in tensor factor models, is still at an early stage. In this paper, we attempt to ``decode" the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the noisy version of the widely used Tucker decomposition, we show that estimations of the signal components are essentially mode-wise PCA techniques, and that the involvement of projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments, and illustrate how the proposed estimations can work in tensor reconstruction, and in clustering for independent video and dependent economic datasets, respectively.
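The mode-wise-PCA viewpoint can be sketched with a basic HOSVD-type estimator: matricize the noisy tensor along each mode, take the leading left singular vectors of each unfolding as estimated loadings, and project to estimate the core. This illustrates the matricization idea only, not the paper's refined projected or iterative estimators; all function names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(T, mode):
    """Mode-k matricization: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_estimate(Y, ranks):
    """Mode-wise PCA: the leading left singular vectors of each unfolding
    estimate the factor loadings; projecting Y onto them estimates the core."""
    U = [np.linalg.svd(unfold(Y, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    G = Y
    for k, Uk in enumerate(U):
        G = np.moveaxis(np.tensordot(Uk.T, np.moveaxis(G, k, 0), axes=1), 0, k)
    return G, U

def reconstruct(G, U):
    """Multiply the core back by the loadings along every mode."""
    T = G
    for k, Uk in enumerate(U):
        T = np.moveaxis(np.tensordot(Uk, np.moveaxis(T, k, 0), axes=1), 0, k)
    return T
```

Projection and iteration schemes refine exactly this pipeline: re-estimating each mode's loadings after projecting out the other modes raises the effective signal-to-noise ratio of each unfolding.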
