
Functional data analysis is an important research field in statistics that treats data as random functions drawn from an infinite-dimensional functional space, and functional principal component analysis (FPCA) based on eigen-decomposition plays a central role in data reduction and representation. After nearly three decades of research, a key problem remains unsolved, namely, the perturbation analysis of the covariance operator for a diverging number of eigencomponents obtained from noisy and discretely observed data. This problem is fundamental to studying models and methods based on FPCA, yet there has been no substantial progress since the result of Hall, M\"uller and Wang (2006) for a fixed number of eigenfunction estimates. In this work, we aim to establish a unified theory for this problem, deriving moment bounds for eigenfunctions and asymptotic distributions for eigenvalues under a wide range of sampling schemes. Our results provide insight into the phenomenon that the $\mathcal{L}^{2}$ bound on eigenfunction estimates with diverging indices is minimax optimal, as if the curves were fully observed, and reveal the transition of convergence rates from nonparametric to parametric regimes in connection with sparse or dense sampling. We also propose a double truncation technique to derive, for the first time, the uniform convergence (in the time domain) of estimated eigenfunctions. The technical arguments in this work are useful for handling perturbation series with noisy and discretely observed data and can be applied to models involving inverse problems that use FPCA as regularization, such as functional linear regression.
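As a point of reference for the setting above, classical FPCA on densely observed curves reduces to an eigen-decomposition of the sample covariance surface. The sketch below is our own NumPy illustration of that baseline (not the paper's estimator); all names and the simulated eigenfunctions are ours:

```python
import numpy as np

def fpca(curves, n_components=2):
    """Estimate FPCA eigencomponents from densely observed curves.

    curves: (n_curves, n_grid) array of observations on a common grid.
    Returns the leading eigenvalues and eigenvectors (discretized
    eigenfunctions) of the sample covariance surface.
    """
    centered = curves - curves.mean(axis=0)
    cov = centered.T @ centered / len(curves)   # pointwise covariance surface
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalue order
    idx = np.argsort(evals)[::-1][:n_components]
    return evals[idx], evecs[:, idx]

# Toy data: curves generated from two known eigenfunctions plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)
scores = rng.normal(size=(500, 2)) * np.array([2.0, 1.0])
X = scores[:, :1] * phi1 + scores[:, 1:] * phi2 \
    + 0.1 * rng.normal(size=(500, 100))
evals, efuns = fpca(X, n_components=2)
```

With noisy and only sparsely observed curves, the paper's point is precisely that this naive eigen-decomposition must be replaced by smoothed covariance estimation, and its perturbation analysis becomes delicate.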

Related content

Study samples often differ from the target populations of inference and policy decisions in non-random ways. Researchers typically believe that such departures from random sampling -- due to changes in the population over time and space, or difficulties in sampling truly randomly -- are small, and their corresponding impact on the inference should be small as well. We might therefore be concerned if the conclusions of our studies are excessively sensitive to a very small proportion of our sample data. We propose a method to assess the sensitivity of applied econometric conclusions to the removal of a small fraction of the sample. Manually checking the influence of all possible small subsets is computationally infeasible, so we use an approximation to find the most influential subset. Our metric, the "Approximate Maximum Influence Perturbation," is based on the classical influence function, and is automatically computable for common methods including (but not limited to) OLS, IV, MLE, GMM, and variational Bayes. We provide finite-sample error bounds on approximation performance. At minimal extra cost, we provide an exact finite-sample lower bound on sensitivity. We find that sensitivity is driven by a signal-to-noise ratio in the inference problem, is not reflected in standard errors, does not disappear asymptotically, and is not due to misspecification. While some empirical applications are robust, results of several influential economics papers can be overturned by removing less than 1% of the sample.
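The influence-function computation underlying this metric can be illustrated for OLS. The sketch below is our own first-order simplification (function and variable names are ours, and we omit leverage corrections): it scores each observation's influence on a target coefficient and approximates the effect of removing the most influential small fraction of the sample.

```python
import numpy as np

def amip_ols(X, y, coef_index, frac=0.01):
    """First-order sketch of the Approximate Maximum Influence
    Perturbation for one OLS coefficient: find the fraction `frac`
    of observations whose removal most decreases the coefficient,
    and return the approximate change."""
    n = len(y)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # Empirical influence of dropping observation i on beta:
    # beta_{-i} ~ beta - (X'X)^{-1} x_i e_i  (first order).
    infl = (XtX_inv @ X.T).T * resid[:, None]   # shape (n, p)
    target_infl = infl[:, coef_index]
    k = max(1, int(frac * n))
    # Dropping the k largest positive influences lowers the coefficient most.
    drop = np.argsort(target_infl)[-k:]
    approx_change = -target_infl[drop].sum()
    return beta[coef_index], approx_change, drop
```

Scanning the sorted influence scores replaces the combinatorial search over all small subsets, which is what makes the approximation computable.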

We present a generalized version of the discretization-invariant neural operator and prove that the network is a universal approximator in the operator sense. Moreover, by incorporating additional terms in the architecture, we establish a connection between this discretization-invariant neural operator network and previously proposed operator networks. The discretization-invariance property means that different input functions can be sampled at different sensor locations within the same training and testing phases. Additionally, since the network learns a ``basis'' for the input and output function spaces, our approach enables the evaluation of input functions on different discretizations. To evaluate the performance of the proposed discretization-invariant neural operator, we focus on challenging examples from multiscale partial differential equations. Our experimental results indicate that the method achieves lower prediction errors than previous networks and benefits from its discretization-invariance property.

We develop a robust Bayesian functional principal component analysis (FPCA) by incorporating skew elliptical classes of distributions. The proposed method effectively captures the primary source of variation among curves, even when abnormal observations contaminate the data. We model the observations using skew elliptical distributions by introducing skewness with transformation and conditioning into the multivariate elliptical symmetric distribution. To recast the covariance function, we employ an approximate spectral decomposition. We discuss the selection of prior specifications and provide detailed information on posterior inference, including the forms of the full conditional distributions, choices of hyperparameters, and model selection strategies. Furthermore, we extend our model to accommodate sparse functional data with only a few observations per curve, thereby creating a more general Bayesian framework for FPCA. To assess the performance of our proposed model, we conduct simulation studies comparing it to well-known frequentist methods and conventional Bayesian methods. The results demonstrate that our method outperforms existing approaches in the presence of outliers and performs competitively on outlier-free datasets. Finally, we illustrate the effectiveness of our method by applying it to environmental and biological data to identify outlying functional data. The implementation of our proposed method and applications are available at https://github.com/SFU-Stat-ML/RBFPCA.

Low-rank tensor analysis has received widespread attention, with many practical applications. However, tensor data are often contaminated by outliers or sample-specific corruptions. How to recover tensor data corrupted by outliers and perform data clustering remains a challenging problem. This paper develops an outlier-robust tensor low-rank representation (OR-TLRR) method for simultaneous outlier detection and tensor data clustering based on the tensor singular value decomposition (t-SVD) algebraic framework. It is motivated by the recently proposed tensor-tensor product induced by invertible linear transforms that satisfy certain conditions. For tensor observations with arbitrary outlier corruptions, OR-TLRR has provable performance guarantees for exactly recovering the row space of clean data and detecting outliers under mild conditions. Moreover, an extension of OR-TLRR is proposed to handle the case when part of the data is missing. Finally, extensive experimental results on both synthetic and real data demonstrate the effectiveness of the proposed algorithms.
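The t-SVD framework rests on the tensor-tensor product. For the classical choice of the discrete Fourier transform as the invertible linear transform, it reduces to slice-wise matrix products in the Fourier domain along the third mode. A minimal sketch of that standard FFT-based t-product (the paper considers more general transforms):

```python
import numpy as np

def t_product(A, B):
    """Tensor-tensor product (t-product) of A (n1 x n2 x n3) and
    B (n2 x n4 x n3): FFT along the third mode, frontal-slice-wise
    matrix multiplication, then inverse FFT."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))
```

Because the product is an ordinary matrix product in each Fourier slice, matrix notions such as the SVD and row space carry over slice-wise, which is what the t-SVD and the row-space recovery guarantee build on.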

Current statistical methods in differential proteomics analysis generally leave aside several challenges, such as missing values, correlations between peptide intensities, and uncertainty quantification. Moreover, they provide point estimates, such as the mean intensity for a given peptide or protein in a given condition. The decision of whether an analyte should be considered differential is then based on comparing the p-value to a significance threshold, usually 5%. In the state-of-the-art limma approach, a hierarchical model is used to deduce the posterior distribution of the variance estimator for each analyte. The expectation of this distribution is then used as a moderated estimate of variance and is injected directly into the expression of the t-statistic. However, instead of merely relying on the moderated estimates, we could provide more powerful and intuitive results by leveraging a fully Bayesian approach and hence allow the quantification of uncertainty. The present work introduces this idea by taking advantage of standard results from Bayesian inference with conjugate priors in hierarchical models to derive a methodology tailored to multiple imputation contexts. Furthermore, we tackle the more general problem of multivariate differential analysis, accounting for possible inter-peptide correlations. By defining a hierarchical model with prior distributions on both mean and variance parameters, we achieve a global quantification of uncertainty for differential analysis. The inference is thus performed by computing the posterior distribution for the difference in mean peptide intensities between two experimental conditions. In contrast to more flexible models that can be achieved with hierarchical structures, our choice of conjugate priors maintains analytical expressions for direct sampling from posterior distributions without requiring expensive MCMC methods.
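The conjugacy argument can be illustrated with the standard Normal-Inverse-Gamma model: the posterior of a mean is sampled directly, without MCMC, and the posterior of the difference in mean intensities between two conditions follows by differencing the draws. This is a generic sketch of that machinery, not the paper's full hierarchical model; the hyperparameter names are ours:

```python
import numpy as np

def posterior_mean_samples(x, mu0=0.0, kappa0=0.01, alpha0=1.0,
                           beta0=1.0, n_draws=5000, rng=None):
    """Direct draws from the posterior of the mean under a
    Normal-Inverse-Gamma conjugate prior (standard closed-form update)."""
    rng = rng or np.random.default_rng(0)
    n, xbar = len(x), np.mean(x)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappa_n))
    # Inverse-Gamma draws for the variance, then Normal draws for the mean.
    sigma2 = 1.0 / rng.gamma(alpha_n, 1.0 / beta_n, size=n_draws)
    return rng.normal(mu_n, np.sqrt(sigma2 / kappa_n))

# Posterior of the difference in mean intensity between two conditions:
rng = np.random.default_rng(1)
a = rng.normal(10.0, 1.0, size=30)   # condition A intensities (simulated)
b = rng.normal(10.8, 1.0, size=30)   # condition B intensities (simulated)
diff = posterior_mean_samples(b, rng=rng) - posterior_mean_samples(a, rng=rng)
```

The full posterior `diff` supports richer summaries than a single p-value, e.g. credible intervals or the posterior probability that the difference exceeds a fold-change of interest.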

We study the validity of the Neumann or Born series approach in solving the Helmholtz equation and coefficient identification in related inverse scattering problems. Specifically, we derive a necessary and sufficient condition under which the series converges strongly. We also investigate the rate of convergence of the series. The obtained condition is optimal and can be much weaker than the traditional requirement for convergence of the series. Our approach makes use of reduction space techniques proposed by Suzuki \cite{Suzuki-1976}. Furthermore, we propose an interpolation method that allows the use of the Neumann series in all cases. Finally, we provide several numerical tests with different medium functions and frequency values to validate our theoretical results.
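For intuition, the Neumann series writes $(I - A)^{-1} b = \sum_{k \ge 0} A^{k} b$, and the classical sufficient condition for convergence is that the spectral radius of $A$ is below one; the abstract's point is that a sharper condition exists. A minimal finite-dimensional sketch of the truncated series (our own illustration, not the paper's Helmholtz solver):

```python
import numpy as np

def neumann_solve(A, b, n_terms=200):
    """Approximate (I - A)^{-1} b by the truncated Neumann series
    b + A b + A^2 b + ...  Converges when the spectral radius of A
    is below one (the classical sufficient condition)."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(n_terms):
        x += term          # accumulate A^k b
        term = A @ term    # next power applied to b
    return x
```

Each iteration costs one operator application, which is why the series is attractive for Helmholtz-type problems where applying the operator is cheap but forming and inverting it is not.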

We study the problem of estimating a large, low-rank matrix corrupted by additive noise of unknown covariance, assuming one has access to additional side information in the form of noise-only measurements. We study the Whiten-Shrink-reColor (WSC) workflow, where a "noise covariance whitening" transformation is applied to the observations, followed by appropriate singular value shrinkage and a "noise covariance re-coloring" transformation. We show that under the mean square error loss, a unique, asymptotically optimal shrinkage nonlinearity exists for the WSC denoising workflow, and calculate it in closed form. To this end, we calculate the asymptotic eigenvector rotation of the random spiked F-matrix ensemble, a result which may be of independent interest. With sufficiently many pure-noise measurements, our optimally-tuned WSC denoising workflow outperforms, in mean square error, matrix denoising algorithms based on optimal singular value shrinkage which do not make similar use of noise-only side information; numerical experiments show that our procedure's relative performance is particularly strong in challenging statistical settings with high dimensionality and large degree of heteroscedasticity.

Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data gathering processes. Despite its advantages, such a data collection mechanism often introduces complexities into the statistical inference procedure. For instance, the ordinary least squares (OLS) estimator in an adaptive linear regression model can exhibit non-normal asymptotic behavior, posing challenges for accurate inference and interpretation. In this paper, we propose a general method for constructing a debiased estimator that remedies this issue. It makes use of adaptive linear estimating equations, and we establish theoretical guarantees of asymptotic normality, supplemented by discussions on achieving near-optimal asymptotic variance. A salient feature is that, in the context of multi-armed bandits, our estimator retains the non-asymptotic performance of the least squares estimator while attaining asymptotic normality. Consequently, this work helps connect two fruitful paradigms of adaptive inference: (a) non-asymptotic inference using concentration inequalities and (b) asymptotic inference via asymptotic normality.

The edit distance is a fundamental measure of sequence similarity, defined as the minimum number of character insertions, deletions, and substitutions needed to transform one string into the other. Given two strings of length at most $n$, simple dynamic programming computes their edit distance exactly in $O(n^2)$ time, which is also the best possible (up to subpolynomial factors) assuming the Strong Exponential Time Hypothesis (SETH). The last few decades have seen tremendous progress in edit distance approximation, where the runtime has been brought down to subquadratic, near-linear, and even sublinear at the cost of approximation. In this paper, we study the dynamic edit distance problem, where the strings change dynamically as characters are substituted, inserted, or deleted over time. Each change may happen at any location of either of the two strings. The goal is to maintain the (exact or approximate) edit distance of such dynamic strings while minimizing the update time. The exact edit distance can be maintained in $\tilde{O}(n)$ time per update (Charalampopoulos, Kociumaka, Mozes; 2020), which is again tight assuming SETH. Unfortunately, even with the unprecedented progress in edit distance approximation in the static setting, strikingly little is known regarding dynamic edit distance approximation. Using off-the-shelf tools, it is possible to achieve an $O(n^{c})$-approximation with $n^{0.5-c+o(1)}$ update time for any constant $c\in [0,\frac16]$. Improving upon this trade-off remains open. The contribution of this work is a dynamic $n^{o(1)}$-approximation algorithm with amortized expected update time of $n^{o(1)}$. In other words, we bring the approximation-ratio and update-time product down to $n^{o(1)}$. Our solution utilizes an elegant framework of precision sampling tree for edit distance approximation (Andoni, Krauthgamer, Onak; 2010).
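The $O(n^2)$ dynamic program mentioned above is the textbook baseline that this line of work accelerates. For concreteness, a standard rolling-array implementation:

```python
def edit_distance(a, b):
    """Classical O(n^2)-time, O(n)-space dynamic program for the edit
    distance (unit-cost insertions, deletions, and substitutions)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))         # row for the empty prefix of a
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i      # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete a[i-1]
                        dp[j - 1] + 1,                      # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))      # substitute/match
            prev = cur
    return dp[n]
```

Recomputing this table from scratch after each character update costs $O(n^2)$ per update, which is exactly the cost that the dynamic algorithms discussed above avoid.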

Mixtures of factor analysers (MFA) models represent a popular tool for finding structure in data, particularly high-dimensional data. While in most applications the number of clusters, and especially the number of latent factors within clusters, is fixed in advance, models with automatic inference on both the number of clusters and latent factors have been introduced in the recent literature. The automatic inference is usually done by assigning a nonparametric prior and allowing the number of clusters and factors to potentially go to infinity. The MCMC estimation is performed via an adaptive algorithm, in which the parameters associated with the redundant factors are discarded as the chain moves. While this approach has clear advantages, it also bears some significant drawbacks. Running a separate factor-analytical model for each cluster involves matrices of changing dimensions, which can make the model and programming somewhat cumbersome. In addition, discarding the parameters associated with the redundant factors could lead to a bias in estimating cluster covariance matrices. Finally, identification remains problematic for infinite factor models. The current work contributes to the MFA literature by providing automatic inference on the number of clusters and the number of cluster-specific factors while keeping both cluster and factor dimensions finite. This allows us to avoid many of the aforementioned drawbacks of the infinite models. For the automatic inference on the cluster structure, we employ the dynamic mixture of finite mixtures (MFM) model. Automatic inference on cluster-specific factors is performed by assigning an exchangeable shrinkage process (ESP) prior to the columns of the factor loading matrices. The performance of the model is demonstrated on several benchmark data sets as well as real data applications.
