
More often than not, we encounter problems with varying parameters rather than static ones. In this paper, we treat the estimation of parameters that vary with space. We use the Metropolis-Hastings algorithm as a selection criterion for the maximum filter likelihood. Comparisons are made with joint estimation of both the spatially varying parameters and the state. We illustrate the procedures employed in this paper by means of two hyperbolic SPDEs: the advection equation and the wave equation. The Metropolis-Hastings procedure yields the better estimates.
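As a rough illustration of the selection step, the sketch below (not taken from the paper) runs a random-walk Metropolis-Hastings search over a flattened, spatially varying parameter field and keeps the sample with the highest filter likelihood; `filter_loglik` is a hypothetical stand-in for whatever filter (e.g. a Kalman filter applied to the discretized SPDE) evaluates the likelihood, and a flat prior with a symmetric proposal is assumed.

```python
import numpy as np

def mh_max_filter_likelihood(filter_loglik, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings over a parameter vector theta.
    `filter_loglik(theta)` is assumed to return the filter log-likelihood
    of the observations given theta; the best-scoring sample is retained."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = filter_loglik(theta)
    best_theta, best_ll = theta.copy(), ll
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = filter_loglik(prop)
        # symmetric proposal and flat prior: accept with the likelihood ratio
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
            if ll > best_ll:
                best_theta, best_ll = theta.copy(), ll
    return best_theta, best_ll
```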

Related content

We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of an unknown covariance. We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator, and for the minimax mean square estimation risk. These bounds are sharp with explicit constants when the matrix of the dynamics has no eigenvalues on the unit circle and are rate-optimal when it does. Our results extend and improve existing lower bounds to lower bounds in expectation of the mean square estimation risk and to systems with a general noise covariance. Instrumental to our derivation are new concentration results for rescaled sample covariances and deviation results for the corresponding multiplication processes of the covariates, a differential geometric construction of a prior on the unit operator ball of small Fisher information, and an extension of the Cramér-Rao and van Trees inequalities to matrix-valued estimators.
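For reference, the least squares estimator whose risk is bounded above can be computed from a single trajectory of x_{t+1} = A x_t + w_t by solving the normal equations; the following numpy sketch is an illustration under that model, not part of the paper.

```python
import numpy as np

def ls_dynamics(X):
    """Ordinary least squares estimate of A in x_{t+1} = A x_t + w_t,
    given a single trajectory X of shape (T, d)."""
    X0, X1 = X[:-1], X[1:]
    # \hat A = (sum_t x_{t+1} x_t^T)(sum_t x_t x_t^T)^{-1}, via a linear solve
    return np.linalg.solve(X0.T @ X0, X0.T @ X1).T
```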

We introduce an original method of multidimensional ridge penalization for functional local linear regression. Nonparametric regression of functional data extends its multivariate counterpart and is known to be sensitive to the choice of $J$, the dimension of the projection subspace of the data. In the multivariate setting, a roughness penalty is helpful for variance reduction. However, among the limited works covering roughness penalties in the functional setting, most use only a single scalar for tuning. Our new approach proposes a class of data-adaptive ridge penalties, meaning that the model automatically adjusts the structure of the penalty according to the data set. This structure has $J$ free parameters, enables a quadratic programming search for the tuning parameters that minimize the estimated mean squared error (MSE) of prediction, and is capable of applying a different roughness penalty level to each of the $J$ basis functions. The strength of the method in prediction accuracy and variance reduction with finite data is demonstrated through multiple simulation scenarios and two real-data examples. Its asymptotic performance is established and compared to that of unpenalized functional local linear regression.
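To make the idea of coordinate-wise penalization concrete, the sketch below shows a weighted least squares fit with a separate ridge penalty on each of the $J$ coefficients, which is the basic building block of such a penalized local linear smoother; the paper's data-adaptive choice of the penalties and the quadratic programming search are not reproduced here.

```python
import numpy as np

def ridge_wls(X, y, w, lam):
    """Weighted least squares with a separate ridge penalty lam[j] on each of
    the J columns of X: minimizes
    sum_i w[i] * (y[i] - X[i] @ beta)**2 + sum_j lam[j] * beta[j]**2."""
    XtW = X.T * w                                   # X^T W via broadcasting
    return np.linalg.solve(XtW @ X + np.diag(lam), XtW @ y)
```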

Over the past decades, linear mixed models have attracted considerable attention in various fields of applied statistics. They are popular whenever clustered, hierarchical or longitudinal data are investigated. Nonetheless, statistical tools for valid simultaneous inference on mixed parameters are rare. This is surprising because one often faces inferential problems beyond the pointwise examination of fixed or mixed parameters. For example, there is an interest in a comparative analysis of cluster-level parameters or subject-specific estimates in studies with repeated measurements. We discuss methods for simultaneous inference assuming a linear mixed model. Specifically, we develop simultaneous prediction intervals as well as multiple testing procedures for mixed parameters. They are useful for joint considerations or comparisons of cluster-level parameters. We employ a consistent bootstrap approximation of the distribution of a max-type statistic to construct our tools. The numerical performance of the developed methodology is studied in simulation experiments and illustrated in a data example on household incomes in small areas.
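A minimal sketch of how a bootstrap approximation of the max-type statistic yields simultaneous intervals, assuming bootstrap replicates of the mixed-parameter estimates and their standard errors are already available (the paper's specific bootstrap scheme is not reproduced):

```python
import numpy as np

def simultaneous_intervals(est, se, boot_est, level=0.95):
    """Simultaneous intervals based on the max-type statistic
    T_b = max_k |est*_bk - est_k| / se_k over B bootstrap replicates."""
    est, se = np.asarray(est), np.asarray(se)
    boot_est = np.asarray(boot_est)                 # shape (B, K)
    T = np.max(np.abs(boot_est - est) / se, axis=1)
    q = np.quantile(T, level)                       # bootstrap critical value
    return est - q * se, est + q * se
```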

This work contributes to the limited literature on estimating the diffusivity or drift coefficient of nonlinear SPDEs driven by additive noise. Assuming that the solution is measured locally in space and over a finite time interval, we show that the augmented maximum likelihood estimator introduced in Altmeyer and Reiss (2020) retains its asymptotic properties when used for semilinear SPDEs that satisfy some abstract, and verifiable, conditions. The proofs of the asymptotic results are based on splitting the solution into linear and nonlinear parts and on fine regularity properties in $L^p$-spaces. The obtained general results are applied to particular classes of equations, including stochastic reaction-diffusion equations. The stochastic Burgers equation, as an example with a first-order nonlinearity, is an interesting borderline case of the general results and is treated by a Wiener chaos expansion. We conclude with numerical examples that validate the theoretical results.

We consider a minimal residual discretization of a simultaneous space-time variational formulation of parabolic evolution equations. Under the usual `LBB' stability condition on pairs of trial and test spaces, we show quasi-optimality of the numerical approximations without assuming symmetry of the spatial part of the differential operator. Under a stronger LBB condition, we show error estimates in an energy norm that are independent of this spatial differential operator.

Differential Granger causality, that is, understanding how Granger-causal relations differ between two related time series, is of interest in many scientific applications. Modeling each time series by a vector autoregressive (VAR) model, we propose a new method to directly learn the difference between the corresponding transition matrices in high dimensions. Key to the new method is an estimating equation, constructed from the Yule-Walker equation, that links the difference in transition matrices to the difference in the corresponding precision matrices. In contrast to separately estimating each transition matrix and then calculating the difference, the proposed direct estimation method only requires sparsity of the difference of the two VAR models, and hence allows hub nodes in each high-dimensional time series. The direct estimator is shown to be consistent in estimation and support recovery under mild assumptions. These results also lead to novel consistency results, with potentially faster convergence rates, for estimating differences between precision matrices of i.i.d. observations under weaker assumptions than existing results. We evaluate the finite sample performance of the proposed method using simulation studies and an application to electroencephalogram (EEG) data.
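For reference, the separate-estimation baseline that the direct method is contrasted with follows from the Yule-Walker relation Gamma(1) = A Gamma(0); the sketch below estimates each VAR(1) transition matrix separately and subtracts, which is exactly the route the proposed estimator avoids by imposing sparsity on the difference itself.

```python
import numpy as np

def yule_walker_var1(X):
    """Estimate the VAR(1) transition matrix A from a series X of shape (T, p)
    via the Yule-Walker relation Gamma(1) = A Gamma(0)."""
    Xc = X - X.mean(axis=0)
    n = Xc.shape[0] - 1
    G0 = Xc[:-1].T @ Xc[:-1] / n                    # lag-0 sample autocovariance
    G1 = Xc[1:].T @ Xc[:-1] / n                     # lag-1 sample autocovariance
    return np.linalg.solve(G0, G1.T).T              # A = Gamma(1) Gamma(0)^{-1}

def transition_difference(X_a, X_b):
    """Naive difference of separately estimated transition matrices."""
    return yule_walker_var1(X_b) - yule_walker_var1(X_a)
```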

Image segmentation algorithms often depend on appearance models that characterize the distribution of pixel values in different image regions. We describe a new approach for estimating appearance models directly from an image, without explicit consideration of the pixels that make up each region. Our approach is based on novel algebraic expressions that relate local image statistics to the appearance of spatially coherent regions. We describe two algorithms that use these algebraic expressions to estimate appearance models directly from an image. The first algorithm solves a system of linear and quadratic equations using a least squares formulation. The second algorithm is a spectral method based on an eigenvector computation. We present experimental results that demonstrate that the proposed methods work well in practice and lead to effective image segmentation algorithms.

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning can provide more rational advice than humans in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for ordinary users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on the major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. It next reviews the major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
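As a concrete example of the simplest search algorithm such a review covers, a plain random search over a user-supplied configuration space might look as follows; `train_and_eval` and `space` are hypothetical placeholders for the user's training routine and hyper-parameter ranges.

```python
import random

def random_search(train_and_eval, space, n_trials=20, seed=0):
    """Random-search HPO: sample configurations from `space` (name -> list of
    categorical choices or (low, high) exponent range), score each with
    `train_and_eval(config) -> validation score`, and keep the best."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {}
        for name, spec in space.items():
            if isinstance(spec, list):
                cfg[name] = rng.choice(spec)              # categorical choice
            else:
                low, high = spec
                cfg[name] = 10 ** rng.uniform(low, high)  # log-uniform numeric
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Example usage with a dummy objective:
# best, _ = random_search(lambda cfg: -abs(cfg["lr"] - 1e-3),
#                         {"lr": (-5, -1), "optimizer": ["sgd", "adam"]})
```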

We propose a new estimation method for topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite sample minimax lower bounds for the estimation of A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K); both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.

Detecting objects and estimating their pose remains one of the major challenges of the computer vision research community. There is a compromise between localizing the objects and estimating their viewpoints: the detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize towards the category level. This work explores the use of deep learning models to solve both problems simultaneously. To do so, we propose three novel deep learning architectures that perform joint detection and pose estimation, in which we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or a regression problem, which is still an open question in the computer vision community. We present a detailed comparative analysis of all our solutions and of the methods that currently define the state of the art for this problem. We use the PASCAL3D+ and ObjectNet3D datasets for a thorough experimental evaluation and to present the main results. With the proposed models we achieve state-of-the-art performance on both datasets.
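A toy multi-head sketch, assuming PyTorch, of a shared backbone feeding separate detection and pose heads, exposing both a classification-over-bins and a regression formulation of the viewpoint; this only illustrates the joint design and is not one of the three architectures proposed in the paper.

```python
import torch
import torch.nn as nn

class JointDetectionPose(nn.Module):
    """Shared backbone with heads for object class, bounding box, and viewpoint
    (the viewpoint predicted both as discrete azimuth bins and as an angle)."""

    def __init__(self, num_classes=12, pose_bins=24):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(64, num_classes)      # object category scores
        self.box_head = nn.Linear(64, 4)                # bounding-box offsets
        self.pose_cls_head = nn.Linear(64, pose_bins)   # viewpoint as classification
        self.pose_reg_head = nn.Linear(64, 1)           # viewpoint as regression

    def forward(self, images):
        feats = self.backbone(images)
        return {
            "class_logits": self.cls_head(feats),
            "boxes": self.box_head(feats),
            "pose_logits": self.pose_cls_head(feats),
            "pose_angle": self.pose_reg_head(feats),
        }
```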
