
In nonparametric statistics, rate-optimal estimators typically balance bias and stochastic error. The recent work on overparametrization raises the question of whether rate-optimal estimators exist that do not obey this trade-off. In this work we consider pointwise estimation in the Gaussian white noise model with regression function $f$ in a class of $\beta$-H\"older smooth functions. Let 'worst-case' refer to the supremum over all functions $f$ in the H\"older class. It is shown that any estimator with worst-case bias $\lesssim n^{-\beta/(2\beta+1)} =: \psi_n$ must necessarily also have a worst-case mean absolute deviation bounded below by a constant multiple of $\psi_n$. To derive the result, we establish abstract inequalities relating the change of expectation between two probability measures to the mean absolute deviation.
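
As a rough numerical illustration of this rate, consider the discrete regression analogue $y_i = f(i/n) + \varepsilon_i$ of the white noise model and a local-average estimator at a point $x_0$; the test function, bandwidth choice, and estimator below are illustrative assumptions, not the paper's construction. With the rate-optimal bandwidth $h = n^{-1/(2\beta+1)}$, both the worst-case bias and the mean absolute deviation scale like $\psi_n$:

```python
import numpy as np

rng = np.random.default_rng(0)

beta = 1.0                          # Hoelder smoothness (here: Lipschitz)
f = lambda x: np.abs(x - 0.5)       # beta=1 Hoelder function with kink at x0
x0 = 0.5

for n in [10**3, 10**4, 10**5]:
    h = n ** (-1 / (2 * beta + 1))          # rate-optimal bandwidth
    psi_n = n ** (-beta / (2 * beta + 1))   # minimax rate psi_n
    x = np.arange(1, n + 1) / n
    window = np.abs(x - x0) <= h
    m = window.sum()
    mean_f = f(x[window]).mean()            # deterministic part of the average
    # Local average = mean_f + average of m standard normal noise terms:
    est = mean_f + rng.standard_normal(2000) / np.sqrt(m)
    bias = est.mean() - f(x0)
    mad = np.mean(np.abs(est - f(x0)))      # mean absolute deviation
    print(f"n={n:>6}  psi_n={psi_n:.4f}  bias={bias:+.4f}  MAD={mad:.4f}")
```

Both columns shrink at the same rate: reducing the bias below $\psi_n$ (by shrinking $h$) inflates the stochastic term, which is the trade-off the paper shows to be unavoidable.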

Related content

Which dynamic queries can be maintained efficiently? For constant-size changes, it is known that constant-depth circuits or, equivalently, first-order updates suffice for maintaining many important queries, among them reachability, tree isomorphism, and the word problem for context-free languages. In other words, these queries are in the dynamic complexity class DynFO. We show that most of the existing results for constant-size changes can be recovered for batch changes of polylogarithmic size if one allows circuits of depth O(log log n) or, equivalently, first-order updates that are iterated O(log log n) times.
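
For intuition, here is a minimal sketch of the classic first-order update rule behind such DynFO results: maintaining the transitive closure of a graph under a single edge insertion. Batch changes of polylogarithmic size (and deletions) require the iterated updates developed in the paper; the NumPy encoding below is an illustrative assumption.

```python
import numpy as np

def insert_edge(T: np.ndarray, u: int, v: int) -> np.ndarray:
    """Update the transitive closure T after inserting edge (u, v):
    T'(x, y) = T(x, y) or (T(x, u) and T(v, y)),
    a single first-order (per-tuple) update."""
    return T | np.outer(T[:, u], T[v, :])

n = 5
T = np.eye(n, dtype=bool)      # empty graph: only trivial reachability
T = insert_edge(T, 0, 1)
T = insert_edge(T, 1, 2)
assert T[0, 2]                 # 0 -> 1 -> 2 is now derivable
```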

There has been a resurgence of interest in the asymptotic normality of incomplete U-statistics that sum over only roughly as many kernel evaluations as there are data samples, due to their computational efficiency and usefulness in quantifying the uncertainty of ensemble-based predictions. In this paper, we focus on the normal convergence of one such construction, the incomplete U-statistic with Bernoulli sampling, based on a raw sample of size $n$ and a computational budget $N$. Under minimalistic moment assumptions on the kernel, we offer accompanying Berry-Esseen bounds of the natural rate $1/\sqrt{\min(N, n)}$ that characterize the accuracy of the normal approximation when $n \asymp N$, i.e., when $n$ and $N$ are of the same order in the sense that $n/N$ is bounded above and below by constants. Our key techniques include Stein's method specialized for so-called Studentized nonlinear statistics, and an exponential lower tail bound for non-negative kernel U-statistics.
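
A small Monte Carlo sketch of the construction, under assumed choices of kernel and budget (here the variance kernel $h(x,y) = (x-y)^2/2$ and $n = N$, the $n \asymp N$ regime studied in the paper):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = N = 200
pairs = np.array(list(combinations(range(n), 2)))   # all C(n,2) index pairs
p = N / len(pairs)                                  # Bernoulli inclusion prob.

def incomplete_u(x):
    keep = rng.random(len(pairs)) < p               # sample pairs independently
    i, j = pairs[keep].T
    h = 0.5 * (x[i] - x[j]) ** 2                    # kernel unbiased for Var(X)
    return h.sum() / (p * len(pairs))               # inverse-probability scaling

stats = np.array([incomplete_u(rng.standard_normal(n)) for _ in range(2000)])
z = (stats - 1.0) / stats.std()                     # true Var(X) = 1
print("coverage of |z| < 1.96:", np.mean(np.abs(z) < 1.96))  # ~0.95 if normal
```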

Our main focus is on the generalization bound, which serves as an upper limit on the generalization error. We analyze regression and classification tasks separately to ensure a thorough examination. For regression tasks, we assume the target function is real-valued and Lipschitz continuous, and we use the 2-norm and a root-mean-square-error (RMSE) variant to measure the disparities between predictions and actual values. For classification tasks, we treat the target function as a one-hot classifier, i.e., a piecewise constant function, and employ the 0/1 loss for error measurement. Our analysis underscores the differing sample complexities required to achieve a concentration inequality for the generalization bounds, highlighting the variation in learning efficiency between regression and classification tasks. Furthermore, we demonstrate that the generalization bounds for regression and classification functions are inversely proportional to a polynomial in the number of network parameters, with the degree depending on the hypothesis class and the network architecture. These findings emphasize the advantages of over-parameterized networks and elucidate the conditions for benign overfitting in such systems.
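
To make the two error measures concrete, here is a minimal sketch, using an assumed ridge-regression model rather than a neural network, of the RMSE-type generalization gap for a Lipschitz regression target and the 0/1-loss gap for a piecewise constant classification target:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ridge(X, y, lam=1e-3):
    """Ordinary ridge solve; a stand-in for any trained predictor."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

d, n_train, n_test = 20, 100, 10_000
X_tr, X_te = rng.standard_normal((n_train, d)), rng.standard_normal((n_test, d))
w_true = rng.standard_normal(d)

# Regression: Lipschitz target, RMSE gap between test and train error.
y_tr = X_tr @ w_true + 0.1 * rng.standard_normal(n_train)
y_te = X_te @ w_true
w = fit_ridge(X_tr, y_tr)
rmse = lambda X, y: np.sqrt(np.mean((X @ w - y) ** 2))
print("RMSE generalization gap:", rmse(X_te, y_te) - rmse(X_tr, y_tr))

# Classification: piecewise constant (sign) target with 0/1 loss.
c_tr, c_te = np.sign(X_tr @ w_true), np.sign(X_te @ w_true)
w01 = fit_ridge(X_tr, c_tr)
err01 = lambda X, c: np.mean(np.sign(X @ w01) != c)
print("0/1 generalization gap:", err01(X_te, c_te) - err01(X_tr, c_tr))
```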

Inverse problems, such as accelerated MRI reconstruction, are ill-posed: infinitely many plausible solutions exist. This leads to uncertainty not only in the reconstructed image but also in downstream tasks such as semantic segmentation. This uncertainty, however, is mostly not analyzed in the literature, even though probabilistic reconstruction models are commonly used. These models can be prone to ignoring plausible but unlikely solutions, such as rare pathologies. Building on MRI reconstruction approaches based on diffusion models, we add guidance to the diffusion process during inference, generating two meaningfully diverse reconstructions corresponding to an upper- and a lower-bound segmentation. The reconstruction uncertainty can then be quantified by the difference between these bounds, which we coin the 'uncertainty boundary'. We analyzed the behavior of the upper- and lower-bound segmentations for a wide range of acceleration factors and found the uncertainty boundary to be both more reliable and more accurate than repeated sampling. Code is available at //github.com/NikolasMorshuis/SGR
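
A minimal sketch of the 'uncertainty boundary' idea: given the lower- and upper-bound segmentation masks produced by the two guided reconstructions, quantify uncertainty by the region where the masks disagree. The mask shapes, thresholds, and area summary below are illustrative assumptions, not the repository's implementation:

```python
import numpy as np

def uncertainty_boundary(seg_lower: np.ndarray, seg_upper: np.ndarray):
    """Both inputs are boolean masks of the same shape; returns the
    disagreement mask and its area (the 'uncertainty boundary' size)."""
    disagreement = seg_upper & ~seg_lower   # pixels in upper but not lower
    return disagreement, int(disagreement.sum())

rng = np.random.default_rng(3)
score = rng.random((128, 128))              # stand-in for a soft segmentation
seg_lower = score > 0.6                     # conservative (lower-bound) mask
seg_upper = score > 0.4                     # permissive (upper-bound) mask
boundary, area = uncertainty_boundary(seg_lower, seg_upper)
print(f"uncertain pixels: {area} ({100 * area / score.size:.1f}% of image)")
```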

Uncertainty reduction is vital for improving system reliability and reducing risk. To identify the best target for uncertainty reduction, uncertainty importance measures are commonly used to prioritize the significance of input variable uncertainties; designers then take steps to reduce the uncertainties of the variables with high importance. However, for variables with minimal uncertainty, the cost of controlling their uncertainties can be unacceptable, so uncertainty magnitude should also be considered when developing uncertainty reduction strategies. Although variance-based methods have been developed for this purpose, they depend on statistical moments and have limitations when dealing with the highly skewed distributions commonly encountered in practical applications. Motivated by this problem, we propose a new uncertainty importance measure based on cumulative residual entropy. The proposed measure is moment-independent, being based on the cumulative distribution function, and can therefore handle highly skewed distributions properly. Numerical implementations for estimating the proposed measure are devised and verified. A real-world engineering case involving highly skewed distributions illustrates the procedure of developing uncertainty reduction strategies that account for uncertainty magnitude and the corresponding cost. The results demonstrate that, owing to its moment-independence, the proposed measure can yield uncertainty reduction recommendations different from those of the variance-based approach.
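
For concreteness, here is a standard plug-in estimator of the cumulative residual entropy $\mathrm{CRE}(X) = -\int \bar{F}(x)\log\bar{F}(x)\,dx$ built from the empirical survival function $\bar{F} = 1 - F$; the paper's devised numerical implementation may differ:

```python
import numpy as np

def cre(sample: np.ndarray) -> float:
    """Plug-in estimate of cumulative residual entropy: the empirical
    survival function is piecewise constant, so the integral reduces to a
    sum over the gaps between order statistics."""
    x = np.sort(sample)
    n = len(x)
    s = (n - np.arange(1, n)) / n        # survival on each gap (x_i, x_{i+1})
    gaps = np.diff(x)
    return float(-(gaps * s * np.log(s)).sum())

rng = np.random.default_rng(4)
# A highly skewed input, the case motivating the moment-free measure:
print("CRE lognormal:", cre(rng.lognormal(0.0, 1.5, 100_000)))
print("CRE normal:   ", cre(rng.standard_normal(100_000)))
```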

When modeling biological responses using Bayesian non-parametric regression, prior information may be available on the shape of the response in the form of non-linear function spaces that define its general shape. To incorporate such information into the analysis, we develop a non-linear functional shrinkage (NLFS) approach that uniformly shrinks the non-parametric fitted function toward a non-linear function space while allowing for fits outside this space when the data suggest alternative shapes. This approach extends existing functional shrinkage approaches, which shrink toward linear subspaces, to shrinkage toward non-linear function spaces via a Taylor series expansion and a corresponding updating of the non-linear parameters. We demonstrate this general approach on the Hill model, a popular, biologically motivated model, and show that shrinkage toward combined function spaces, i.e., where two or more non-linear functions are posited a priori, is straightforward. We illustrate the approach on synthetic and real data. Computational details on the underlying MCMC sampling are provided, with data and analysis available in an online supplement.
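
A minimal sketch of the two ingredients named above, assuming a common four-parameter form of the Hill model (the exact parameterization used in the paper may differ): the model itself and its first-order Taylor expansion in the non-linear parameters, which is what makes shrinkage toward the non-linear function space tractable:

```python
import numpy as np

def hill(d, a, b, c, h):
    """Hill dose-response: a = background, b = max effect, c = EC50, h = slope."""
    return a + (b - a) * d**h / (c**h + d**h)

def hill_taylor(d, theta0, theta):
    """First-order Taylor expansion of hill() around theta0:
    f(d, theta) ~ f(d, theta0) + J(d, theta0) @ (theta - theta0)."""
    eps = 1e-6
    f0 = hill(d, *theta0)
    J = np.column_stack([
        (hill(d, *(theta0 + eps * e)) - f0) / eps   # numerical Jacobian column
        for e in np.eye(4)
    ])
    return f0 + J @ (np.asarray(theta) - np.asarray(theta0))

d = np.linspace(0.01, 10, 50)
theta0 = np.array([0.0, 1.0, 2.0, 1.5])
theta = theta0 + 0.05                     # small perturbation of parameters
print(np.max(np.abs(hill_taylor(d, theta0, theta) - hill(d, *theta))))
```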

The insight that causal parameters are particularly suitable for out-of-sample prediction has sparked a lot of development of causal-like predictors. However, the focus on strictly causal targets has limited the development of predictors with good risk-minimization properties but without a direct causal interpretation. In this manuscript we derive the optimal out-of-sample risk-minimizing predictor of a target $Y$ in a non-linear system $(X,Y)$ that has been trained in several within-sample environments. We consider data from an observational environment and several shifted environments. Each environment corresponds to a structural equation model (SEM) with random coefficients and its own shift and noise vector, both in $L^2$. Unlike previous approaches, we also allow shifts in the target value. We define a sieve of out-of-sample environments, consisting of all shifts $\tilde{A}$ that are at most $\gamma$ times as strong as any weighted average of the observed shift vectors. For each $\beta\in\mathbb{R}^p$ we show that the supremum of the risk functions $R_{\tilde{A}}(\beta)$ admits a worst-risk decomposition into a (positive) non-linear combination of risk functions, depending on $\gamma$. We then define the set $\mathcal{B}_\gamma$ of minimizers of this risk. The main result of the paper is that the minimizer is unique ($|\mathcal{B}_\gamma|=1$) and can be consistently estimated by an explicit estimator, outside a set of zero Lebesgue measure in the parameter space. A practical obstacle for this initial estimator is that it involves solving a polynomial of general degree. We therefore prove that an approximate estimator based on the bisection method is also consistent.
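
The bisection step itself is elementary; here is a minimal sketch, with an arbitrary quintic standing in for the general-degree polynomial that arises in the estimating equation:

```python
def bisect(g, lo, hi, tol=1e-12):
    """Locate a root of g in [lo, hi], assuming g(lo) and g(hi) differ in sign;
    halves the bracket until it is narrower than tol."""
    assert g(lo) * g(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Degree-5 polynomial: no closed-form root formula exists in general.
g = lambda t: t**5 - 3 * t - 1
root = bisect(g, 1.0, 2.0)
print(root, g(root))            # g(root) ~ 0
```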

The current study investigates the asymptotic spectral properties of a finite difference approximation of nonlocal Helmholtz equations with a Caputo fractional Laplacian and a variable-coefficient wave number $\mu$, as arises when considering wave propagation in complex media characterized by nonlocal interactions and spatially varying wave speeds. More specifically, using tools from Toeplitz and generalized locally Toeplitz (GLT) theory, the present research delves into the spectral analysis of non-preconditioned and preconditioned matrix-sequences. We report numerical evidence supporting the theoretical findings. Finally, open problems and potential extensions in various directions are presented and briefly discussed.
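
A GLT-flavored toy example of the kind of spectral comparison involved: build a symmetric Toeplitz matrix $T_n(f)$ from the Fourier coefficients of a symbol $f$ and compare its eigenvalues with equispaced samples of $f$. The Laplacian symbol below is an illustrative stand-in for the fractional-Laplacian symbols analyzed in the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 200
f = lambda th: 2 - 2 * np.cos(th)    # symbol of the 1D Laplacian stencil
col = np.zeros(n)
col[0], col[1] = 2.0, -1.0           # Fourier coefficients of f
T = toeplitz(col)                    # T_n(f) = tridiag(-1, 2, -1)
eigs = np.sort(np.linalg.eigvalsh(T))
samples = np.sort(f(np.pi * np.arange(1, n + 1) / (n + 1)))
# For this symbol the match is exact; for the variable-coefficient fractional
# case, GLT theory describes the limiting distribution instead.
print("max |eigenvalue - symbol sample|:", np.max(np.abs(eigs - samples)))
```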

We discuss a connection between a generative model, the diffusion model, and the nonequilibrium thermodynamics of the Fokker-Planck equation, known as stochastic thermodynamics. Using techniques from stochastic thermodynamics, we derive the speed-accuracy trade-off for diffusion models, a trade-off relationship between the speed and accuracy of data generation. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data with diffusion models. The optimal learning protocol is given by the conservative force in stochastic thermodynamics and by the geodesic of the space induced by the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for diffusion models with different noise schedules, such as the cosine schedule, the conditional optimal transport schedule, and the optimal transport schedule.
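
For reference, a minimal sketch of two noise schedules of the kind compared in the paper, under the common DDPM convention that $\bar{\alpha}(t)$ is the fraction of signal variance remaining at diffusion time $t \in [0, 1]$; this convention, and the linear stand-in, are assumptions here, not the paper's exact setup:

```python
import numpy as np

def alpha_bar_cosine(t, s=0.008):
    """Cosine schedule (Nichol & Dhariwal): signal variance decays smoothly."""
    return np.cos((t + s) / (1 + s) * np.pi / 2) ** 2

def alpha_bar_linear(t):
    """Simple linear decay of signal variance, for comparison."""
    return 1.0 - t

t = np.linspace(0, 1, 5)
for name, ab in [("cosine", alpha_bar_cosine), ("linear", alpha_bar_linear)]:
    snr = ab(t) / (1 - ab(t) + 1e-12)   # signal-to-noise ratio along the path
    print(name, np.round(snr, 3))
```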

In toxicology, the validation of the concurrent control by historical control data (HCD) has become a requirement. This validation is usually done via historical control limits (HCL), which in practice are often displayed graphically in the manner of a Shewhart control chart. In many applications, HCL are applied to dichotomous data, e.g., the number of rats with a tumor vs. the number of rats without a tumor (carcinogenicity studies), or the number of cells with a micronucleus out of a total number of cells. Dichotomous HCD may be overdispersed and can be heavily right- (or left-) skewed, which is usually not taken into account in practical applications of HCL. To overcome this problem, four different prediction intervals (two frequentist, two Bayesian) that can be applied to such data are proposed. Comprehensive Monte Carlo simulations assessing the coverage probabilities of seven different methods of HCL calculation reveal that bootstrap-calibrated frequentist prediction intervals control the type-I error best. Heuristics traditionally used in control charts (e.g., the limits of Shewhart np-charts, or the mean plus or minus 2 SD), as well as the historical range, fail to control a pre-specified coverage probability. The application of HCL is demonstrated on a real-life data set containing historical controls from long-term carcinogenicity studies run on behalf of the U.S. National Toxicology Program. The proposed frequentist prediction intervals are publicly available in the R package predint, whereas R code for the computation of the Bayesian prediction intervals is provided via GitHub.
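
As a rough illustration of the frequentist approach, here is a minimal Python sketch of an (uncalibrated) asymptotic prediction interval for a future overdispersed-binomial control count; the actual bootstrap-calibrated intervals are implemented in the R package predint, and the quasi-binomial moment estimates below are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

def pred_interval(counts, sizes, m_new, z=1.96):
    """Historical counts/sizes -> interval for a future study of size m_new."""
    pi_hat = counts.sum() / sizes.sum()
    # Method-of-moments overdispersion factor (quasi-binomial, Pearson-type):
    resid = (counts - sizes * pi_hat) ** 2 / (sizes * pi_hat * (1 - pi_hat))
    phi = max(1.0, resid.mean())
    # Variance of a future count plus the estimation error of pi_hat:
    var = phi * m_new * pi_hat * (1 - pi_hat) * (1 + m_new / sizes.sum())
    mu = m_new * pi_hat
    return mu - z * np.sqrt(var), mu + z * np.sqrt(var)

# Example HCD: tumor counts out of 50 animals in 10 historical studies.
sizes = np.full(10, 50)
counts = rng.binomial(50, 0.08, size=10)
print(pred_interval(counts, sizes, m_new=50))
```

The calibration step (tuning z so that the simulated coverage hits its nominal level) is exactly what the proposed bootstrap-calibrated intervals add, and is omitted here.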
