
In this paper, we construct the wavelet eigenvalue regression methodology in high dimensions. We assume that possibly non-Gaussian, finite-variance $p$-variate measurements are made of a low-dimensional $r$-variate ($r \ll p$) fractional stochastic process with non-canonical scaling coordinates and in the presence of additive high-dimensional noise. The measurements are correlated both time-wise and between rows. Building upon the asymptotic and large scale properties of wavelet random matrices in high dimensions, the wavelet eigenvalue regression is shown to be consistent and, under additional assumptions, asymptotically Gaussian in the estimation of the fractal structure of the system. We further construct a consistent estimator of the effective dimension $r$ of the system that significantly increases the robustness of the methodology. The estimation performance over finite samples is studied by means of simulations.
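To make the regression step concrete, here is a minimal numerical sketch, assuming a Haar filter and an ordinary least-squares slope fit; the function names and normalization are illustrative choices, not the paper's exact estimator:

```python
import numpy as np

def haar_details(y, j):
    """Haar wavelet detail coefficients of an (n, p) array at octave j (scale 2**j)."""
    m = 2 ** (j - 1)
    k = y.shape[0] // (2 * m)
    blocks = y[: 2 * m * k].reshape(k, 2, m, y.shape[1])
    # difference of adjacent half-block means, with the standard Haar normalization
    return (blocks[:, 1].mean(axis=1) - blocks[:, 0].mean(axis=1)) * np.sqrt(m / 2.0)

def wavelet_eigen_regression(y, octaves):
    """Slopes of log2 wavelet-matrix eigenvalues versus octave; the leading r slopes
    carry the scaling information of the low-dimensional fractional signal."""
    log_eigs = []
    for j in octaves:
        d = haar_details(y, j)                       # (n_j, p) coefficients at octave j
        W = d.T @ d / d.shape[0]                     # p x p wavelet random matrix
        log_eigs.append(np.log2(np.sort(np.linalg.eigvalsh(W))[::-1]))
    slopes = np.polyfit(np.asarray(octaves, float), np.array(log_eigs), 1)[0]
    return slopes                                    # one slope per eigenvalue index
```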

Related content

We consider the problem of jointly modeling and clustering populations of tensors by introducing a high-dimensional tensor mixture model with heterogeneous covariances. To effectively tackle the high dimensionality of tensor objects, we employ plausible dimension reduction assumptions that exploit the intrinsic structures of tensors such as low-rankness in the mean and separability in the covariance. In estimation, we develop an efficient high-dimensional expectation-conditional-maximization (HECM) algorithm that breaks the intractable optimization in the M-step into a sequence of much simpler conditional optimization problems, each of which is convex, admits regularization and has closed-form updating formulas. Our theoretical analysis is challenged by both the non-convexity in the EM-type estimation and having access to only the solutions of conditional maximizations in the M-step, leading to the notion of dual non-convexity. We demonstrate that the proposed HECM algorithm, with an appropriate initialization, converges geometrically to a neighborhood that is within statistical precision of the true parameter. The efficacy of our proposed method is demonstrated through comparative numerical experiments and an application to a medical study, where our proposal achieves an improved clustering accuracy over existing benchmarking methods.
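The following is a minimal sketch of the ECM principle invoked above (a single intractable M-step replaced by a cycle of simple conditional maximizations), shown on a plain Gaussian mixture with diagonal covariances; it omits the tensor structure, the low-rank/separability constraints, and the regularization of the actual HECM algorithm:

```python
import numpy as np
from scipy.stats import multivariate_normal

def ecm_gaussian_mixture(X, K, n_iter=50, seed=0):
    """ECM for a K-component Gaussian mixture: the M-step is split into
    conditional updates (means given covariances, then covariances given means)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, K, replace=False)]
    var = np.full((K, d), X.var(axis=0))
    for _ in range(n_iter):
        # E-step: responsibilities
        logp = np.stack([np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], np.diag(var[k]))
                         for k in range(K)], axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp); r /= r.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)
        pi = nk / n
        # CM-step 1: update means holding covariances fixed
        mu = (r.T @ X) / nk[:, None]
        # CM-step 2: update (diagonal) covariances holding means fixed
        for k in range(K):
            diff = X - mu[k]
            var[k] = (r[:, k][:, None] * diff ** 2).sum(axis=0) / nk[k]
    return pi, mu, var
```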

Logistic regression is one of the most fundamental methods for modeling the probability of a binary outcome based on a collection of covariates. However, the classical formulation of logistic regression relies on the independent sampling assumption, which is often violated when the outcomes interact through an underlying network structure. This necessitates the development of models that can simultaneously handle both the network peer-effect (arising from neighborhood interactions) and the effect of high-dimensional covariates. In this paper, we develop a framework for incorporating such dependencies in a high-dimensional logistic regression model by introducing a quadratic interaction term, as in the Ising model, designed to capture pairwise interactions from the underlying network. The resulting model can also be viewed as an Ising model, where the node-dependent external fields linearly encode the high-dimensional covariates. We propose a penalized maximum pseudo-likelihood method for estimating the network peer-effect and the effect of the covariates, which, in addition to handling the high-dimensionality of the parameters, conveniently avoids the computational intractability of the maximum likelihood approach. Consequently, our method is computationally efficient and, under various standard regularity conditions, our estimate attains the classical high-dimensional rate of consistency. In particular, our results imply that even under network dependence it is possible to consistently estimate the model parameters at the same rate as in classical logistic regression, when the true parameter is sparse and the underlying network is not too dense. As a consequence of the general results, we derive the rates of consistency of our estimator for various natural graph ensembles, such as bounded degree graphs, sparse Erd\H{o}s-R\'{e}nyi random graphs, and stochastic block models.
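A minimal sketch of the penalized pseudo-likelihood idea, assuming binary 0/1 outcomes, a known adjacency matrix `A`, and a generic proximal-gradient solver; this illustrates the form of the objective, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_mple(y, X, A, lam, lr=1e-2, n_iter=2000):
    """Penalized maximum pseudo-likelihood for a logistic model with network peer-effect:
    P(y_i = 1 | rest) = sigmoid(x_i' beta + theta * sum_j A_ij y_j),
    with an l1 penalty on beta, fitted by proximal gradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    theta = 0.0
    s = A @ y                                    # neighborhood sums, fixed given observed y
    for _ in range(n_iter):
        eta = X @ beta + theta * s
        resid = sigmoid(eta) - y                 # gradient of the negative log pseudo-likelihood
        g_beta = X.T @ resid / n
        g_theta = s @ resid / n
        beta = beta - lr * g_beta
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)   # soft-thresholding
        theta = theta - lr * g_theta             # peer-effect parameter left unpenalized
    return beta, theta
```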

In this article we study the asymptotic behaviour of the least squares estimator in a linear regression model based on random observation instances. We provide mild assumptions on the moments and dependence structure of the randomly spaced observations and the residuals under which the estimator is strongly consistent. In particular, we allow the observation instances to be mutually negatively superadditive dependent, while for the residuals we merely assume that they are generated by some continuous function. In addition, we prove that the rate of convergence is proportional to the sampling rate $N$, and we complement our findings with a simulation study providing insights into finite sample properties.
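A small simulation in the spirit of this setting (independent exponential spacings stand in for the dependent sampling scheme, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ls(n, a=1.0, b=2.0):
    """Least squares fit of y = a + b*t + residual observed at randomly spaced times."""
    t = np.cumsum(rng.exponential(scale=1.0, size=n))    # random observation instances
    eps = rng.standard_normal(n)                          # residuals (i.i.d. here for simplicity)
    y = a + b * t + eps
    design = np.column_stack([np.ones(n), t])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

for n in (100, 1000, 10000):
    print(n, simulate_ls(n))   # estimates approach (1.0, 2.0) as the sample size grows
```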

In the regression problem, we consider the problem of estimating the variance function by means of aggregation methods. We focus on two particular aggregation settings: Model Selection aggregation (MS) and Convex aggregation (C), where the goal is, respectively, to select the best candidate and to build the best convex combination of candidates from a given collection. In both cases, the construction of the estimator relies on a two-step procedure and requires two independent samples. The first step exploits the first sample to build candidate estimators of the variance function by the residual-based method, and the second dataset is then used to perform the aggregation step. We show the consistency of the proposed method with respect to the $L_2$ error for both MS and C aggregation. We evaluate the performance of these two methods in the heteroscedastic model and illustrate their interest in the regression problem with reject option.
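A schematic of the two-step Model Selection aggregation, where the mean fit and the candidate builders are assumed to be supplied by the user; the squared-residual criterion below is a simplified stand-in for the residual-based method described above:

```python
import numpy as np

def ms_aggregate_variance(x1, y1, x2, y2, candidate_builders, mean_fit):
    """Two-sample Model Selection aggregation of variance-function estimators.
    Step 1: build candidates on (x1, y1) from squared residuals of a fitted mean.
    Step 2: pick the candidate minimizing the empirical L2 risk on (x2, y2)."""
    m = mean_fit(x1, y1)                       # regression estimate of the mean function
    r2 = (y1 - m(x1)) ** 2                     # residual-based proxies for the variance
    candidates = [build(x1, r2) for build in candidate_builders]
    z2 = (y2 - m(x2)) ** 2                     # held-out squared residuals
    risks = [np.mean((z2 - g(x2)) ** 2) for g in candidates]
    return candidates[int(np.argmin(risks))]
```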

The efficient estimation of an approximate model order is very important for real applications with multi-dimensional data if the observed low-rank data is corrupted by additive noise. In this paper, we present a novel robust method for model order estimation of noise-corrupted multi-dimensional low-rank data based on the LineAr Regression of Global Eigenvalues (LaRGE). The LaRGE method uses the multi-linear singular values obtained from the HOSVD of the measurement tensor to construct global eigenvalues. In contrast to the Modified Exponential Fitting Test (M-EFT), which also exploits the approximate exponential profile of the noise eigenvalues, LaRGE does not require the calculation of the probability of false alarm. Moreover, LaRGE achieves significantly improved performance in comparison with popular state-of-the-art methods. It is well suited for the analysis of biomedical data. The excellent performance of the LaRGE method is illustrated via simulations and results obtained from EEG recordings.
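One plausible reading of the global-eigenvalue construction is sketched below; the product-of-mode-eigenvalues definition and the crude two-line break detection are assumptions made for illustration only and are not the exact LaRGE criterion:

```python
import numpy as np

def global_eigenvalues(T):
    """Global eigenvalues from the multi-linear (HOSVD) singular values of tensor T,
    taken here as products of the squared n-mode singular values (illustrative definition)."""
    mode_eigs = []
    for n in range(T.ndim):
        unfold = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)   # mode-n unfolding
        s = np.linalg.svd(unfold, compute_uv=False)
        mode_eigs.append(s ** 2)
    r = min(len(e) for e in mode_eigs)
    return np.prod([e[:r] for e in mode_eigs], axis=0)

def estimate_order(T, fit_len=None):
    """Crude model-order estimate: fit a line to the log of the trailing (noise)
    global eigenvalues and count leading eigenvalues lying well above that line."""
    g = np.sort(global_eigenvalues(T))[::-1]
    k = len(g)
    fit_len = fit_len or max(3, k // 2)
    idx = np.arange(k - fit_len, k)
    slope, intercept = np.polyfit(idx, np.log(g[idx]), 1)
    pred = np.exp(slope * np.arange(k) + intercept)
    return int(np.sum(g > 3.0 * pred))          # threshold factor chosen arbitrarily
```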

In this chapter, we show how to efficiently model high-dimensional extreme peaks-over-threshold events over space in complex non-stationary settings, using extended latent Gaussian Models (LGMs), and how to exploit the fitted model in practice for the computation of long-term return levels. The extended LGM framework assumes that the data follow a specific parametric distribution, whose unknown parameters are transformed using a multivariate link function and are then further modeled at the latent level in terms of fixed and random effects that have a joint Gaussian distribution. In the extremal context, we here assume that the data level distribution is described in terms of a Poisson point process likelihood, motivated by asymptotic extreme-value theory, and which conveniently exploits information from all threshold exceedances. This contrasts with the more common data-wasteful approach based on block maxima, which are typically modeled with the generalized extreme-value (GEV) distribution. When conditional independence can be assumed at the data level and latent random effects have a sparse probabilistic structure, fast approximate Bayesian inference becomes possible in very high dimensions, and we here present the recently proposed inference approach called "Max-and-Smooth", which provides exceptional speed-up compared to alternative methods. The proposed methodology is illustrated by application to satellite-derived precipitation data over Saudi Arabia, obtained from the Tropical Rainfall Measuring Mission, with 2738 grid cells and about 20 million spatio-temporal observations in total. Our fitted model captures the spatial variability of extreme precipitation satisfactorily and our results show that the most intense precipitation events are expected near the south-western part of Saudi Arabia, along the Red Sea coastline.
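For reference, the Poisson point process likelihood for exceedances of a high threshold $u$ referred to above takes the standard extreme-value-theory form (generic notation, not the chapter's):
$$
L(\mu,\sigma,\xi) = \exp\!\left\{-n_y\left[1+\xi\,\tfrac{u-\mu}{\sigma}\right]_+^{-1/\xi}\right\}\prod_{i:\,x_i>u}\frac{1}{\sigma}\left[1+\xi\,\tfrac{x_i-\mu}{\sigma}\right]_+^{-1/\xi-1},
$$
where $n_y$ is the number of blocks (e.g., years) of observation, so that $(\mu,\sigma,\xi)$ retain their interpretation as GEV parameters of block maxima while every threshold exceedance contributes to the fit.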

Simultaneous analysis of gene expression data and genetic variants is of great interest, especially when the number of gene expressions and the number of genetic variants are both greater than the sample size. The association of both causal genes and effective SNPs makes sparse modeling of such genetic data sets highly important. High-dimensional sparse instrumental variables models are one such useful class of association models, which describe the simultaneous relation of gene expressions and genetic variants with complex traits. From a Bayesian viewpoint, sparsity can be favored using sparsity-enforcing priors such as spike-and-slab priors. A two-stage modification of the expectation propagation (EP) algorithm is proposed and examined for approximate inference in high-dimensional sparse instrumental variables models with spike-and-slab priors. This method is an adaptation of the classical two-stage least squares method to the Bayesian context. A simulation study is performed to examine the performance of the methods. The proposed method is applied to the analysis of the mouse obesity data.
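For concreteness, a spike-and-slab prior of the kind mentioned above places on each regression coefficient $\beta_j$ a mixture of a point mass at zero and a diffuse slab (generic notation, not the paper's):
$$
\beta_j \mid \gamma_j \sim (1-\gamma_j)\,\delta_0 + \gamma_j\,\mathcal{N}(0,\tau^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi),
$$
so that inference on the inclusion indicators $\gamma_j$ drives the sparsity; expectation propagation then approximates the resulting intractable posterior by a tractable (e.g., Gaussian) family.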

The dynamic behaviour of periodic thermodiffusive multi-layered media excited by harmonic oscillations is studied. In the framework of linear thermodiffusive elasticity, periodic laminates, whose elementary cell is composed of an arbitrary number of layers, are considered. The generalized Floquet-Bloch conditions are imposed, and the universal dispersion relation of the composite is obtained by means of an approach based on the formal solution for a single layer together with the transfer matrix method. The eigenvalue problem associated with the dispersion equation is solved by means of an analytical procedure based on the symplecticity properties of the transfer matrix, to which corresponds a palindromic characteristic polynomial, and the frequency band structure associated with waves propagating inside the medium is finally derived. The proposed approach is tested through illustrative examples in which thermodiffusive multilayered structures of interest for the fabrication of renewable energy devices are analyzed. The effects of thermodiffusive coupling on both the propagation and attenuation of Bloch waves in these systems are investigated in detail.
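In generic transfer-matrix notation (not the chapter's), the construction above amounts to
$$
\mathbf{U}(d) = \mathbf{T}(\omega)\,\mathbf{U}(0), \qquad \mathbf{U}(d) = e^{\mathrm{i}kd}\,\mathbf{U}(0) \;\Longrightarrow\; \det\!\big[\mathbf{T}(\omega)-e^{\mathrm{i}kd}\,\mathbf{I}\big]=0,
$$
where $\mathbf{T}(\omega)$ is the transfer matrix of the elementary cell of thickness $d$ and $\mathbf{U}$ collects the state variables; the symplecticity of $\mathbf{T}$ forces its eigenvalues to occur in reciprocal pairs $(\lambda, 1/\lambda)$, which is why the characteristic polynomial is palindromic.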

We study the classification problem for high-dimensional data with $n$ observations on $p$ features, where the $p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the vector $\zeta$, given by the difference between the whitened mean vectors, is sparse with sparsity at most $s$. We propose an adaptive classifier (adaptive with respect to the sparsity $s$) that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space: the classifier whitens the data, then screens the features by keeping only those corresponding to the $s$ largest coordinates of $\zeta$, and finally applies Fisher's linear discriminant on the selected features. Leveraging recent results on entrywise matrix perturbation bounds for covariance matrices, we show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow 0$. Experimental results on real and synthetic data sets indicate that the proposed classifier is competitive with existing state-of-the-art methods while also selecting a smaller number of features.
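A bare-bones sketch of the pipeline described above (whiten, screen, then apply Fisher's rule), assuming equal class priors and using a pooled sample covariance with a small ridge term and a plug-in estimate of $\zeta$; the names and constants are illustrative, not the paper's estimators:

```python
import numpy as np

def adaptive_lda(X0, X1, s):
    """Whiten the data, keep the s largest coordinates of the whitened mean
    difference, and apply Fisher's linear discriminant on those features."""
    X = np.vstack([X0, X1])
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])   # ridge for invertibility
    evals, evecs = np.linalg.eigh(Sigma)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T                  # whitening transform
    zeta = W @ (mu1 - mu0)                                        # whitened mean difference
    keep = np.argsort(np.abs(zeta))[-s:]                          # screen: s largest coordinates
    def classify(x_new):
        z = (W @ (x_new - (mu0 + mu1) / 2.0))[keep]
        return int(z @ zeta[keep] > 0)                            # Fisher rule in whitened space
    return classify
```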

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
