The weighted nonlinear least-squares problem for low-rank signal estimation is considered. The problem of constructing a numerical solution that is stable and fast for long time series is addressed. A modified weighted Gauss-Newton method, which can be implemented through direct variable projection onto a space of low-rank signals, is proposed. For a weight matrix that yields the maximum likelihood estimator of the signal in the presence of autoregressive noise of order $p$, the computational cost per iteration is $O(N r^2 + N p^2 + r N \log N)$ as $N$ tends to infinity, where $N$ is the time-series length and $r$ is the rank of the approximating time series. Moreover, the proposed method can be applied to data with missing values without increasing the computational cost. The method is compared with state-of-the-art methods based on the variable projection approach in terms of floating-point numerical stability and computational cost.
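The projection step underlying this approach can be illustrated with a minimal, unweighted sketch: project a series onto (approximately) rank-$r$ signals by truncating the SVD of its trajectory (Hankel) matrix and averaging anti-diagonals. This shows only the plain projection; the paper's contribution is a fast weighted Gauss-Newton refinement around it, and the function name and window choice below are illustrative.

```python
import numpy as np

def hankel_rank_r_projection(x, L, r):
    """Project a series onto (approximately) rank-r signals: form the
    L-trajectory Hankel matrix, truncate its SVD to rank r, and average
    anti-diagonals back into a series (diagonal averaging)."""
    N = len(x)
    K = N - L + 1
    H = np.array([x[i:i + K] for i in range(L)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]               # rank-r truncation
    # diagonal averaging (Hankelization) back to a length-N series
    y = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return y / cnt
```

For a series of exact rank $r$ (e.g. a pure exponential with $r=1$), the projection reproduces the input to machine precision.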

Related content

We consider the problem of learning an optimal prescriptive tree (i.e., a personalized treatment assignment policy in the form of a binary tree) of moderate depth, from observational data. This problem arises in numerous socially important domains such as public health and personalized medicine, where interpretable and data-driven interventions are sought based on data gathered in deployment through passive collection, rather than from randomized trials. We propose a method for learning optimal prescriptive trees using mixed-integer optimization (MIO) technology. We show that under mild conditions our method is asymptotically exact in the sense that it converges to an optimal out-of-sample treatment assignment policy as the number of historical data samples tends to infinity. This sets us apart from existing literature on the topic, which either requires data to be randomized or imposes stringent assumptions on the trees. Based on extensive computational experiments on both synthetic and real data, we demonstrate that our asymptotic guarantees translate to significant out-of-sample performance improvements even in finite samples.
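The search space involved can be illustrated without an MIO solver: the toy sketch below brute-forces a depth-1 prescriptive tree on a single feature, scoring each (threshold, leaf-treatment) choice by the mean observed outcome among matching historical samples. This is only a direct-method surrogate that is sensible on randomized data; the paper's MIO formulation, covariate handling, and guarantees are not reproduced here, and all names are illustrative.

```python
import numpy as np

def learn_depth1_tree(x, a, y):
    """Brute-force a depth-1 prescriptive tree on one feature.

    x : 1-d feature, a : historical binary treatment, y : observed outcome.
    Each candidate (threshold t, leaf treatments al/ar) is scored by the
    leaf-weighted mean outcome over samples whose historical treatment
    matches the assigned one (a naive estimate that is only reasonable
    when the historical treatment was randomized)."""
    best = None
    for t in np.unique(x)[:-1]:            # exclude max so no leaf is empty
        left = x <= t
        for al in (0, 1):
            for ar in (0, 1):
                score, valid = 0.0, True
                for leaf, act in ((left, al), (~left, ar)):
                    match = leaf & (a == act)
                    if not match.any():
                        valid = False
                        break
                    score += leaf.mean() * y[match].mean()
                if valid and (best is None or score > best[0]):
                    best = (score, t, al, ar)
    return best   # (estimated value, threshold, treat-left, treat-right)
```

On data where treatment helps only for positive feature values, the search recovers the corresponding split and leaf assignments; an MIO formulation expresses the same combinatorial choice exactly for deeper trees.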

We study a linear high-dimensional regression model in a semi-supervised setting, where for many observations only the vector of covariates $X$ is given with no response $Y$. We do not make any sparsity assumptions on the vector of coefficients, and aim at estimating Var$(Y|X)$. We propose an estimator, which is unbiased, consistent, and asymptotically normal. This estimator can be improved by adding zero-estimators arising from the unlabelled data. Adding zero-estimators does not affect the bias and potentially can reduce variance. In order to achieve optimal improvement, many zero-estimators should be used, but this raises the problem of estimating many parameters. Therefore, we introduce covariate selection algorithms that identify which zero-estimators should be used in order to improve the above estimator. We further illustrate our approach for other estimators, and present an algorithm that improves estimation for any given variance estimator. Our theoretical results are demonstrated in a simulation study.
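The zero-estimator mechanism — adding a statistic with known zero expectation, which leaves the bias untouched while cancelling correlated variation — is the classical control-variate idea. The toy sketch below uses a known covariate mean as a stand-in for the information carried by unlabelled data; it is not the paper's construction.

```python
import numpy as np

def estimates(rng, n=500):
    """Return (baseline, improved) estimates of E[Y] in a toy model
    Y = X + noise with known E[X] = 2.

    Z = mean(X) - 2 has expectation zero, so subtracting it adds no bias;
    because Z is correlated with the baseline estimator, it cancels the
    variance contributed by mean(X) (the optimal coefficient is -1 here)."""
    X = rng.normal(loc=2.0, scale=1.0, size=n)
    Y = X + rng.normal(scale=0.5, size=n)
    theta = Y.mean()          # unbiased baseline: Var = (1 + 0.25) / n
    Z = X.mean() - 2.0        # zero-estimator built from the covariates
    return theta, theta - Z   # improved: Var = 0.25 / n
```

Repeating `estimates` over many draws shows the corrected estimator keeps mean 2 while its variance drops by a factor of five in this toy.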

As a classic parameter of the binomial distribution, the binomial proportion has been well studied in the literature owing to its wide range of applications. In contrast, the reciprocal of the binomial proportion, also known as the inverse proportion, is often overlooked, even though it also plays an important role in various fields including clinical studies and random sampling. The maximum likelihood estimator of the inverse proportion suffers from the zero-event problem, and alternative methods have been developed in the literature to overcome it. Nevertheless, there is little work addressing the optimality of the existing estimators or comparing their practical performance. Motivated by this, we further advance the literature by developing an optimal estimator for the inverse proportion within a family of shrinkage estimators. We also derive explicit and approximate formulas for the optimal shrinkage parameter under different settings. Simulation studies show that our new estimator performs better than, or as well as, the existing competitors in most practical settings. Finally, to illustrate the usefulness of the new method, we revisit a recent meta-analysis of COVID-19 data assessing the relative risks of coronavirus infection under physical distancing, in which six out of seven studies encounter the zero-event problem.
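The zero-event problem is easy to exhibit numerically. Below, the MLE $n/x$ of the inverse proportion blows up when $x = 0$, while a shrinkage-type estimator of the form $(n + c)/(x + c)$ stays finite; $c = 1$ is shown only as a familiar member of this family, not as the optimal choice derived in the paper.

```python
import numpy as np

def mle_inverse_proportion(x, n):
    """MLE n/x of the inverse proportion 1/p; infinite when x = 0
    (the zero-event problem)."""
    return np.where(x > 0, n / np.maximum(x, 1), np.inf)

def shrinkage_inverse_proportion(x, n, c=1.0):
    """Shrinkage-type estimator (n + c) / (x + c): finite even when x = 0.
    c = 1 is one common choice; the optimal c depends on the setting."""
    return (n + c) / (x + c)
```

With $n = 10$ and $x = 0$, the MLE is infinite while the shrinkage estimator returns $11$; for $x > 0$ the two nearly coincide when $x$ is large.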

This paper is concerned with the introduction of Tikhonov regularization into the least squares approximation scheme on $[-1,1]$ by orthonormal polynomials, in order to handle noisy data. This scheme includes interpolation and hyperinterpolation as special cases. With Gauss quadrature points employed as nodes, the coefficients of the approximation polynomial with respect to a given basis are derived in an entry-wise closed form. Under interpolatory conditions, the solution to the regularized approximation problem is rewritten in the form of two kinds of barycentric interpolation formulae, obtained by introducing only a multiplicative correction factor into both classical barycentric formulae. An $L_2$ error bound and a uniform error bound are derived, showing that Tikhonov regularization reduces both the operator norm (Lebesgue constant) and the error term related to the noise level, each by a multiplicative correction factor less than one. Numerical examples show the benefits of Tikhonov regularization when the data are noisy or the data size is relatively small.
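In the simplest setting, the entry-wise closed form reduces to damping each hyperinterpolation coefficient. The sketch below assumes the plain penalty $\lambda \sum_k c_k^2$ with orthonormal Legendre polynomials and Gauss–Legendre nodes, in which case quadrature exactness gives $c_k^{\lambda} = c_k^{0}/(1+\lambda)$; the paper treats more general settings.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def tikhonov_hyperinterpolation(f, degree, lam):
    """Degree-`degree` polynomial approximation of f on [-1,1] at
    Gauss-Legendre nodes with Tikhonov penalty lam * ||c||^2.  With
    orthonormal Legendre polynomials and exact quadrature, each
    coefficient is the hyperinterpolation coefficient damped by
    1 / (1 + lam)."""
    nodes, weights = leggauss(degree + 1)
    fx = f(nodes)
    coeffs = []
    for k in range(degree + 1):
        # orthonormal Legendre polynomial: sqrt((2k+1)/2) * P_k
        phi_k = np.sqrt((2 * k + 1) / 2) * legval(nodes, [0] * k + [1])
        coeffs.append(np.sum(weights * fx * phi_k) / (1.0 + lam))
    return np.array(coeffs), nodes

def evaluate(coeffs, x):
    """Evaluate the approximant sum_k c_k * phi_k at points x."""
    return sum(c * np.sqrt((2 * k + 1) / 2) * legval(x, [0] * k + [1])
               for k, c in enumerate(coeffs))
```

With `lam = 0` and `f` a polynomial of degree at most `degree`, the scheme reproduces `f` exactly; any `lam > 0` simply scales all coefficients by `1 / (1 + lam)`.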

Flexible estimation of multiple conditional quantiles is of interest in numerous applications, such as studying the effect of pregnancy-related factors on low and high birth weight. We propose a Bayesian non-parametric method to simultaneously estimate non-crossing, non-linear quantile curves. We expand the conditional distribution function of the response in I-spline basis functions, where the covariate-dependent coefficients are modeled using neural networks. By leveraging the approximation power of splines and neural networks, our model can approximate any continuous quantile function. Compared to existing models, our model estimates all quantiles rather than a finite subset, scales well to high dimensions, and accounts for estimation uncertainty. While the model is arbitrarily flexible, interpretable marginal quantile effects are estimated using accumulated local effects (ALE) plots and variable importance measures. A simulation study shows that our model can better recover quantiles of the response distribution when the data are sparse, and an analysis of birth weight data is presented.

A kernel method for estimating a probability density function (pdf) from an i.i.d. sample drawn from that density is presented. Our estimator is a linear combination of kernel functions, the coefficients of which are determined by a linear equation. An error analysis for the mean integrated squared error is established in a general reproducing kernel Hilbert space setting. The theory developed is then applied to estimate pdfs belonging to weighted Korobov spaces, for which a dimension-independent convergence rate is established. Under a suitable smoothness assumption, our method attains a rate arbitrarily close to the optimal rate. Numerical results support our theory.

The Richards equation is commonly used to model the flow of water and air through soil, and it serves as a gateway equation for multiphase flows through porous media. It is a nonlinear advection-reaction-diffusion equation that exhibits both parabolic-hyperbolic and parabolic-elliptic kinds of degeneracies. In this study, we provide reliable, fully computable, and locally space-time efficient a posteriori error bounds for numerical approximations of the fully degenerate Richards equation. To show global reliability, a nonlocal-in-time error estimate is derived individually for the time-integrated $H^1(H^{-1})$, $L^2(L^2)$, and $L^2(H^1)$ errors; a maximum principle and a degeneracy estimator are employed for the latter. Global and local space-time efficiency error bounds are then obtained in a standard $H^1(H^{-1})\cap L^2(H^1)$ norm. The reliability and efficiency norms employed coincide when there is no nonlinearity. Moreover, error contributors such as flux nonconformity, time discretization, quadrature, linearization, and data oscillation are identified and separated. The estimates are also valid in a setting where iterative linearization with inexact solvers is considered. Numerical tests are conducted for nondegenerate and degenerate cases with exact solutions, as well as for a realistic case. The estimators are shown to correctly identify the errors up to a factor of the order of unity.

Let $F^{*}$ be an approximation of a given $(a \times b)$ matrix $F$ derived by non-randomized methods. We prove that, for given $F$ and $F^{*}$, matrices $H$ and $T$ can be computed by a randomized algorithm such that the product $HT$ is a better approximation of $F$ than $F^{*}$.
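The abstract is terse, so the sketch below illustrates only the general flavor of such results, in the spirit of randomized range finders (Halko–Martinsson–Tropp): sketch the residual $F - F^{*}$ with a random Gaussian test matrix, and add the captured part back. The refined approximation never has larger Frobenius error than $F^{*}$; the authors' actual construction of $H$ and $T$ may differ.

```python
import numpy as np

def randomized_refinement(F, F_star, k, rng=None):
    """Refine a given approximation F_star of F: with residual
    R = F - F_star, build Q = orth(R @ G) for a Gaussian test matrix G,
    and return F_star + Q @ (Q.T @ R).  The new residual is
    (I - Q Q^T) R, whose Frobenius norm never exceeds that of R."""
    rng = np.random.default_rng(rng)
    R = F - F_star
    G = rng.standard_normal((F.shape[1], k))
    Q, _ = np.linalg.qr(R @ G)      # orthonormal basis for a sketch of range(R)
    return F_star + Q @ (Q.T @ R)   # projection removes part of the residual
```

Even a small sketch size `k` typically improves the approximation, since any component of the residual captured by `Q` is removed exactly.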

Implicit probabilistic models are defined naturally in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

Image segmentation remains an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, a bias correction embedded level set model is proposed, in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on synthetic and real images that are widely used in the literature, as well as on the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
