
Estimation of the mean vector and covariance matrix is of central importance in the analysis of multivariate data. In the framework of generalized linear models, the variances are usually certain functions of the means, with the normal distribution being an exception. We study some implications of functional relationships between the covariance and the mean by focusing on the maximum likelihood and Bayesian estimation of the mean and covariance under the joint constraint $\bm{\Sigma}\bm{\mu} = \bm{\mu}$ for a multivariate normal distribution. A novel structured covariance is proposed through a reparameterization of the spectral decomposition of $\bm{\Sigma}$ involving its eigenvalues and $\bm{\mu}$. This is designed to address the challenging issue of positive-definiteness and to reduce the number of covariance parameters from a quadratic to a linear function of the dimension. We propose a fast (noniterative) method for approximating the maximum likelihood estimator by maximizing a lower bound for the profile likelihood function, which is concave. We use normal and inverse gamma priors on the mean and eigenvalues, and approximate the maximum a posteriori estimators both by Metropolis-Hastings within Gibbs sampling and by a faster iterative method. A simulation study shows good performance of our estimators.
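The constraint $\bm{\Sigma}\bm{\mu} = \bm{\mu}$ says that $\bm{\mu}$ is an eigenvector of $\bm{\Sigma}$ with eigenvalue one, which is what makes a spectral reparameterization in terms of $\bm{\mu}$ and the remaining eigenvalues natural. The NumPy sketch below shows one simple way to encode this: it fixes $\bm{\mu}/\|\bm{\mu}\|$ as an eigenvector with unit eigenvalue and completes it to an orthonormal basis. The helper `structured_cov` and the QR-based completion are illustrative assumptions, not the authors' exact parameterization.

```python
import numpy as np

def structured_cov(mu, lambdas):
    """Covariance matrix satisfying Sigma @ mu = mu.

    The unit vector mu/||mu|| is an eigenvector with eigenvalue 1; the remaining
    eigenvalues `lambdas` (length p-1, all positive) are free parameters, so the
    covariance needs p-1 parameters instead of p(p+1)/2.
    """
    p = mu.size
    u = mu / np.linalg.norm(mu)
    # Complete u to an orthonormal basis of R^p (QR keeps u, up to sign, as the first column).
    Q, _ = np.linalg.qr(np.column_stack([u, np.eye(p)]))
    eigvals = np.concatenate(([1.0], np.asarray(lambdas, dtype=float)))
    return Q @ np.diag(eigvals) @ Q.T

# The constraint Sigma @ mu = mu holds by construction.
rng = np.random.default_rng(0)
mu = rng.normal(size=5)
Sigma = structured_cov(mu, lambdas=rng.uniform(0.5, 2.0, size=4))
assert np.allclose(Sigma @ mu, mu)
```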

Related Content

We introduce a new class of estimators for the linear response of steady states of stochastic dynamics. We generalize the likelihood ratio approach and formulate the linear response as a product of two martingales, hence the name "martingale product estimators". We present a systematic derivation of the martingale product estimator and show how to construct such an estimator so that its bias is consistent with the weak order of the numerical scheme that approximates the underlying stochastic differential equation. Motivated by the estimation of transport properties in molecular systems, we present a rigorous numerical analysis of the bias and variance of these new estimators in the case of Langevin dynamics. We prove that the variance is uniformly bounded in time and derive a specific form of the estimator for second-order splitting schemes for Langevin dynamics. For comparison, we also study the bias and variance of a Green-Kubo estimator, motivated, in part, by the fact that its variance grows linearly in time. The presented analysis shows that the new martingale product estimators, having uniformly bounded variance in time, offer a competitive alternative to the traditional Green-Kubo estimator. On illustrative numerical tests, we compare the new estimators with results obtained by the Green-Kubo method.
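For context on the Green-Kubo baseline mentioned above: it estimates a transport coefficient by integrating a stationary time-autocorrelation function along a single long trajectory. The sketch below, a free-particle Langevin simulation with a BAOAB-type splitting whose integrated velocity autocorrelation recovers the diffusion coefficient $1/(\gamma\beta)$, is a minimal illustration of that baseline; it is not the martingale product estimator, and the parameters and potential are illustrative choices.

```python
import numpy as np

def langevin_baoab(n_steps, dt, gamma=1.0, beta=1.0, grad_V=lambda q: 0.0, seed=0):
    """Simulate 1D Langevin dynamics with a BAOAB-type splitting and
    return the momentum (velocity) trajectory; the default is a free particle."""
    rng = np.random.default_rng(seed)
    q, p = 0.0, 0.0
    c1 = np.exp(-gamma * dt)                  # exact Ornstein-Uhlenbeck decay over dt
    c2 = np.sqrt((1.0 - c1**2) / beta)
    traj = np.empty(n_steps)
    for k in range(n_steps):
        p -= 0.5 * dt * grad_V(q)             # B: half kick
        q += 0.5 * dt * p                     # A: half drift
        p = c1 * p + c2 * rng.normal()        # O: fluctuation-dissipation step
        q += 0.5 * dt * p                     # A: half drift
        p -= 0.5 * dt * grad_V(q)             # B: half kick
        traj[k] = p
    return traj

def green_kubo_diffusion(v, dt, max_lag):
    """Green-Kubo estimate D = int_0^infty E[v(t) v(0)] dt from a stationary trajectory."""
    acf = np.array([np.mean(v[: v.size - k] * v[k:]) for k in range(max_lag)])
    return dt * (0.5 * acf[0] + acf[1:].sum())    # trapezoidal rule

v = langevin_baoab(n_steps=200_000, dt=0.01)
# For the free particle, the estimate should be close to 1/(gamma * beta) = 1.
print("Green-Kubo diffusion estimate:", green_kubo_diffusion(v, dt=0.01, max_lag=2000))
```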

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied under the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}_n$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marchenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of order $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
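As a toy illustration of the regime described above, the NumPy sketch below forms the smoothed-periodogram coherency estimate of a Gaussian white-noise series at a single frequency and compares one LSS, the normalised log-determinant, with its Marchenko-Pastur limit for the ratio $c = M/B$ (with $B$ the smoothing span). The white-noise model, the constants and the choice of statistic are illustrative assumptions, not the paper's general framework.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, B = 50, 10_000, 500                 # dimension, sample size, smoothing span
y = rng.normal(size=(M, N))               # independent components, unit variance

Y = np.fft.fft(y, axis=1) / np.sqrt(N)    # DFT at the Fourier frequencies

# Smoothed periodogram at frequency index k0: average of B neighbouring rank-one periodograms.
k0 = N // 4
idx = k0 + np.arange(-(B // 2), B // 2)
S_hat = (Y[:, idx] @ Y[:, idx].conj().T) / B

# Spectral coherency matrix: normalise by the estimated marginal spectra.
d = np.sqrt(np.real(np.diag(S_hat)))
C_hat = S_hat / np.outer(d, d)

# LSS: (1/M) * sum of log eigenvalues, versus its Marchenko-Pastur limit (c = M/B < 1).
eig = np.linalg.eigvalsh(C_hat)
lss = np.mean(np.log(eig))
c = M / B
mp_limit = (c - 1) / c * np.log(1 - c) - 1
print("LSS:", lss, "  Marchenko-Pastur limit:", mp_limit)
```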

Defeaturing consists in simplifying geometrical models by removing the geometrical features that are considered not relevant for a given simulation. Feature removal and simplification of computer-aided design models enable faster simulations for engineering analysis problems, and simplify the meshing problem that is otherwise often infeasible. The effects of defeaturing on the analysis are then neglected and, to date, there are very few strategies to quantitatively evaluate such an impact. A good understanding of the effects of this process is an important step towards the automatic integration of design and analysis. We formalize the process of defeaturing by studying its effect on the solution of the Poisson equation defined on the geometrical model of interest containing a single feature, with Neumann boundary conditions on the feature itself. We derive an a posteriori estimator of the energy error between the solutions of the exact and the defeatured geometries in $\mathbb{R}^n$, $n\in\{2,3\}$, that is simple, reliable and efficient up to oscillations. The dependence of the estimator upon the size of the features is explicit.

We study the problem of bounding path-dependent expectations (within any finite time horizon $d$) over the class of discrete-time martingales whose marginal distributions lie within a prescribed tolerance of a given collection of benchmark marginal distributions. This problem is a relaxation of the martingale optimal transport (MOT) problem and is motivated by applications to super-hedging in financial markets. We show that the empirical version of our relaxed MOT problem can be approximated within $O\left(n^{-1/2}\right)$ error, where $n$ is the number of samples of each of the individual marginal distributions (generated independently), using a suitably constructed finite-dimensional linear programming problem.
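To fix ideas about the LP structure behind this result, the sketch below sets up the exact (un-relaxed) discrete MOT problem for a toy two-period example: a coupling with prescribed marginals and the martingale constraint, whose maximum and minimum give model-free price bounds for a path-dependent digital payoff. The marginals, the payoff and the use of `scipy.optimize.linprog` are illustrative choices; the paper's relaxed problem additionally allows a tolerance around the (empirical) marginals.

```python
import numpy as np
from scipy.optimize import linprog

# Two-period marginals in convex order (so a martingale coupling exists).
x, mu = np.array([-0.5, 0.5]), np.array([0.5, 0.5])          # law of X_1
y, nu = np.array([-1.0, 0.0, 1.0]), np.full(3, 1.0 / 3.0)    # law of X_2
n1, n2 = len(x), len(y)

payoff = (y[None, :] > x[:, None]).astype(float).ravel()     # pays 1 if the path moves up

A_eq, b_eq = [], []
for i in range(n1):                       # marginal of X_1: row sums of the coupling
    row = np.zeros((n1, n2)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(n2):                       # marginal of X_2: column sums of the coupling
    col = np.zeros((n1, n2)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(nu[j])
for i in range(n1):                       # martingale constraint: E[X_2 | X_1 = x_i] = x_i
    m = np.zeros((n1, n2)); m[i, :] = y - x[i]
    A_eq.append(m.ravel()); b_eq.append(0.0)
A_eq, b_eq = np.array(A_eq), np.array(b_eq)
bounds = [(0.0, None)] * (n1 * n2)

upper = -linprog(-payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
lower = linprog(payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs").fun
print(f"model-free price bounds for the digital payoff: [{lower:.4f}, {upper:.4f}]")
```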

We aim to make inferences about a smooth, finite-dimensional parameter by fusing data from multiple sources together. Previous works have studied the estimation of a variety of parameters in similar data fusion settings, including the average treatment effect, the optimal treatment rule, and the average reward, with the majority of them merging one historical data source containing covariates, actions, and rewards with one data source containing only the same covariates. In this work, we consider the general case where one or more data sources align with each part of the distribution of the target population, for example, the conditional distribution of the reward given actions and covariates. We describe potential gains in efficiency that can arise from fusing these data sources together in a single analysis, which we characterize by a reduction in the semiparametric efficiency bound. We also provide a general means to construct estimators that achieve these bounds. In numerical experiments, we show marked improvements in efficiency from using our proposed estimators rather than their natural alternatives. Finally, we illustrate the magnitude of efficiency gains that can be realized in vaccine immunogenicity studies by fusing data from two HIV vaccine trials.

We study the non-parametric estimation of the value $\theta(f)$ of a linear functional evaluated at an unknown density function $f$ with support on $\mathbb{R}_+$, based on an i.i.d. sample with multiplicative measurement errors. The proposed estimation procedure combines the estimation of the Mellin transform of the density $f$ with a regularisation of the inverse Mellin transform by a spectral cut-off. In order to bound the mean squared error, we distinguish several scenarios characterised by different decays of the Mellin transforms involved and by the smoothness of the linear functional. In fact, we identify scenarios where a non-trivial choice of the tuning parameter is necessary and propose a data-driven choice based on a Goldenshluger-Lepski method. Additionally, we show the minimax optimality of the estimator over Mellin-Sobolev spaces.
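The two ingredients named above can be pictured with a short self-contained sketch: under multiplicative errors the Mellin transform of the observed density factorises as $\mathcal{M}_{f_Y} = \mathcal{M}_f \mathcal{M}_g$, so an empirical Mellin transform divided by the known error transform estimates $\mathcal{M}_f$, and the linear functional is then recovered through a Mellin inversion truncated at a cut-off level $k$. The lognormal/uniform distributions, the functional $\theta(f) = P(a \le X \le b)$, the cut-off level and the quadrature below are illustrative assumptions; the data-driven Goldenshluger-Lepski choice of $k$ is not shown.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(1)
n = 100_000
X = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # unknown density f on R_+
U = rng.uniform(0.0, 1.0, size=n)                # multiplicative error with known law
Y = X * U                                        # observed sample

a, b = 0.5, 2.0                                  # target functional: theta(f) = P(a <= X <= b)

# Empirical Mellin transform of f along Re = 1:
#   M_f(t) = E[X^{it}] = E[Y^{it}] / M_g(t), with M_g(t) = E[U^{it}] = 1 / (1 + it).
k = 10.0                                         # spectral cut-off level
t = np.linspace(-k, k, 801)
logY = np.log(Y)
M_Y_hat = np.array([np.exp(1j * ti * logY).mean() for ti in t])
M_f_hat = M_Y_hat * (1.0 + 1j * t)               # deconvolution step

# Mellin transform of the indicator of [a, b] needed for the inversion:
#   int_a^b x^{-1-it} dx = (a^{-it} - b^{-it}) / (it)  ->  log(b/a) as t -> 0.
t_safe = np.where(np.abs(t) < 1e-10, 1.0, t)
M_psi = np.where(np.abs(t) < 1e-10, log(b / a),
                 (a ** (-1j * t_safe) - b ** (-1j * t_safe)) / (1j * t_safe))

# Spectral cut-off estimator: theta_hat = (1/(2*pi)) * int_{-k}^{k} M_f_hat(t) M_psi(t) dt.
dt = t[1] - t[0]
g = M_f_hat * M_psi
theta_hat = np.real(dt * (0.5 * g[0] + 0.5 * g[-1] + g[1:-1].sum())) / (2 * np.pi)

# Ground truth for the lognormal X, for comparison (the two values should roughly agree).
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
theta_true = Phi(log(b) / 0.5) - Phi(log(a) / 0.5)
print("spectral cut-off estimate:", theta_hat, "  true value:", theta_true)
```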

Proximal Policy Optimization (PPO) is a highly popular model-free reinforcement learning (RL) approach. However, with continuous state and action spaces and a Gaussian policy -- common in computer animation and robotics -- PPO is prone to getting stuck in local optima. In this paper, we observe a tendency of PPO to prematurely shrink the exploration variance, which naturally leads to slow progress. Motivated by this, we borrow ideas from CMA-ES, a black-box optimization method designed for intelligent adaptive Gaussian exploration, to derive PPO-CMA, a novel proximal policy optimization approach that can expand the exploration variance on objective function slopes and shrink the variance when close to the optimum. This is implemented by using separate neural networks for the policy mean and variance and by training the mean and variance in separate passes. Our experiments demonstrate a clear improvement over vanilla PPO in many difficult OpenAI Gym MuJoCo tasks.
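The abstract's key implementation detail is the use of separate networks for the policy mean and variance, updated in separate passes. The PyTorch sketch below only illustrates that split on a dummy batch, with a plain advantage-weighted Gaussian log-likelihood standing in for the actual PPO-CMA objectives; the network sizes, the positive-advantage filtering and the loss itself are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, out_dim))

state_dim, action_dim = 8, 2
mean_net = mlp(state_dim, action_dim)      # outputs the Gaussian policy mean
logvar_net = mlp(state_dim, action_dim)    # outputs per-dimension log-variance
opt_mean = torch.optim.Adam(mean_net.parameters(), lr=3e-4)
opt_var = torch.optim.Adam(logvar_net.parameters(), lr=3e-4)

def neg_weighted_logp(actions, weights, mean, logvar):
    # Diagonal-Gaussian log-likelihood (up to an additive constant), weighted by advantages.
    logp = -0.5 * (((actions - mean) ** 2) / logvar.exp() + logvar).sum(dim=-1)
    return -(weights * logp).mean()

# Dummy rollout batch; in the full algorithm this comes from the PPO data collection loop.
states = torch.randn(256, state_dim)
actions = torch.randn(256, action_dim)
advantages = torch.randn(256).clamp(min=0.0)   # e.g. keep only positive-advantage actions

# Pass 1: update the variance network with the mean network frozen.
loss_var = neg_weighted_logp(actions, advantages, mean_net(states).detach(), logvar_net(states))
opt_var.zero_grad(); loss_var.backward(); opt_var.step()

# Pass 2: update the mean network with the variance network frozen.
loss_mean = neg_weighted_logp(actions, advantages, mean_net(states), logvar_net(states).detach())
opt_mean.zero_grad(); loss_mean.backward(); opt_mean.step()
```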

Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but that can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
