
Radial basis functions (RBFs) are prominent examples of reproducing kernels with associated reproducing kernel Hilbert spaces (RKHSs). The convergence theory for kernel-based interpolation in such a space is well understood, and optimal rates for the whole RKHS are often known. Schaback introduced the doubling trick, which shows that functions having double the smoothness required by the RKHS (along with complicated, but well-understood, boundary behavior) can be approximated with higher convergence rates than the optimal rates for the whole space. Other advances allowed interpolation of target functions which are less smooth, as well as error measurement in different norms. The current state of the art of error analysis for RBF interpolation treats target functions having smoothness up to twice that of the native space, but with error measured in norms weaker than that required for membership in the RKHS. Motivated by the fact that the kernels and the approximants they generate are smoother than required by the native space, this article extends the doubling trick to error measured in norms of higher smoothness. This extension holds for a family of kernels satisfying easily checked hypotheses which we describe in this article, and includes many prominent RBFs. In the course of the proof, new convergence rates are obtained for the abstract operator considered by DeVore and Ron, and new Bernstein estimates are obtained relating high-order smoothness norms to the native space norm.
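
To make the setting concrete, here is a minimal sketch of kernel-based interpolation in an RKHS, using a Matérn-type kernel as a stand-in for the kernels treated in the article; the function names and the particular kernel are illustrative choices, not the paper's construction.

```python
import numpy as np

def rbf_interpolate(centers, values, kernel, eval_pts):
    """Kernel interpolant s(x) = sum_j c_j k(x, x_j) matching values at the centers."""
    # Gram matrix K_ij = k(x_i, x_j); symmetric positive definite for an RBF kernel
    K = kernel(centers[:, None], centers[None, :])
    coeffs = np.linalg.solve(K, values)          # interpolation conditions s(x_i) = f(x_i)
    return kernel(eval_pts[:, None], centers[None, :]) @ coeffs

# Matern-type kernel of smoothness 3/2; its native space is a Sobolev-type space
matern32 = lambda x, y: (1 + np.abs(x - y)) * np.exp(-np.abs(x - y))

x = np.linspace(0, 1, 20)                        # interpolation nodes
f = lambda t: np.sin(2 * np.pi * t)              # very smooth target: "doubling" regime
xx = np.linspace(0, 1, 400)
err = np.max(np.abs(rbf_interpolate(x, f(x), matern32, xx) - f(xx)))
```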

Related content

Backward stochastic differential equations (BSDEs) have been widely employed in various areas of the social and natural sciences, such as the pricing and hedging of financial derivatives, stochastic optimal control problems, optimal stopping problems, and gene expression. Most BSDEs cannot be solved analytically, so numerical methods must be applied to approximate their solutions. A variety of numerical methods have been proposed over the past few decades, with many more currently under development. For the most part, these methods exist in a complex and scattered manner, each requiring its own assumptions and conditions. The aim of the present work is thus to systematically survey various numerical methods for BSDEs and, in particular, to compare and categorize them as a basis for further development and improvement. To achieve this goal, we focus primarily on the core features of each method, drawing on an extensive collection of 333 references: the main assumptions, the numerical algorithm itself, key convergence properties, and advantages and disadvantages. We thereby provide up-to-date coverage of numerical methods for BSDEs, with insightful summaries of each and a useful comparison and categorization.
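
As a concrete point of reference, the following is a minimal sketch of one classical scheme covered by surveys of this kind: explicit backward Euler with least-squares Monte Carlo regression for the conditional expectations. It assumes the simplest setting (the forward process is a Brownian motion, polynomial regression basis); the function name and defaults are illustrative.

```python
import numpy as np

# Explicit backward Euler with least-squares Monte Carlo regression for the
# BSDE dY = -f(t, Y, Z) dt + Z dW, Y_T = g(X_T), with forward process X = W.
def bsde_euler_lsmc(g, f, T=1.0, n_steps=50, n_paths=10_000, deg=4, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

    Y = g(X[:, -1])                              # terminal condition
    for i in reversed(range(n_steps)):
        basis = np.vander(X[:, i], deg + 1)      # polynomial regression basis in X_i
        # Z_i ~ E[Y_{i+1} dW_i | X_i] / dt, estimated by least-squares regression
        Z = basis @ np.linalg.lstsq(basis, Y * dW[:, i] / dt, rcond=None)[0]
        # explicit step: Y_i ~ E[Y_{i+1} | X_i] + f(t_i, Y_{i+1}, Z_i) dt
        Y = basis @ np.linalg.lstsq(basis, Y + f(i * dt, Y, Z) * dt, rcond=None)[0]
    return Y.mean()                              # Y_0 is deterministic at t = 0

# toy check: driver f = 0 reduces the scheme to estimating E[g(W_T)]
y0 = bsde_euler_lsmc(g=lambda x: x**2, f=lambda t, y, z: 0.0, T=1.0)   # ~ 1.0
```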

We propose algorithms for efficient time integration of large systems of oscillatory second order ordinary differential equations (ODEs) whose solution can be expressed in terms of trigonometric matrix functions. Our algorithms are based on a residual notion for second order ODEs, which allows us to extend the ``residual-time restarting'' Krylov subspace framework -- recently introduced for exponential and $\varphi$-functions occurring in the time integration of first order ODEs -- to our setting. We then show that the computational cost can be further reduced in many cases by using our restarting within the Gautschi cosine scheme. We analyze residual convergence in terms of Faber and Chebyshev series and supplement these theoretical results with numerical experiments illustrating the efficiency of the proposed methods.
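
For orientation, here is a sketch of the two-step Gautschi cosine scheme for $y'' = -Ay + g(y)$. The matrix functions $\cos(h\Omega)$ and $\mathrm{sinc}^2(h\Omega/2)$, with $\Omega = A^{1/2}$, are evaluated by eigendecomposition as a stand-in for the Krylov approximations the paper actually studies; the starting step and all names are illustrative.

```python
import numpy as np

def gautschi_cosine(A, g, y0, v0, h, n_steps):
    """Two-step Gautschi scheme for y'' = -A y + g(y), A symmetric positive semidefinite.
    Dense eigendecomposition stands in for the Krylov machinery used for large sparse A."""
    lam, Q = np.linalg.eigh(A)
    omega = np.sqrt(np.maximum(lam, 0.0))
    cos_h = Q @ np.diag(np.cos(h * omega)) @ Q.T
    # note np.sinc(x) = sin(pi x)/(pi x), so sinc(h w / 2) = np.sinc(h w / (2 pi))
    psi = Q @ np.diag(np.sinc(h * omega / (2 * np.pi))**2) @ Q.T
    # starting value from a Taylor step: y_1 ~ y_0 + h v_0 + h^2/2 (g(y_0) - A y_0)
    y_prev, y = y0, y0 + h * v0 + 0.5 * h**2 * (g(y0) - A @ y0)
    out = [y0, y]
    for _ in range(n_steps - 1):
        # y_{n+1} = 2 cos(h Omega) y_n - y_{n-1} + h^2 sinc^2(h Omega / 2) g(y_n)
        y_prev, y = y, 2 * cos_h @ y - y_prev + h**2 * (psi @ g(y))
        out.append(y)
    return np.array(out)
```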

In many data analysis tasks, we may have information only on the explanatory variables, while evaluating the response values is quite expensive. Since it is impractical or too costly to obtain the responses of all units, a natural remedy is to judiciously select a good sample of units for which the responses are to be evaluated. In this paper, we adopt the classical criteria from the design of experiments to quantify the information a given sample carries for parameter estimation. We then provide a theoretical justification for approximating the optimal-sample problem by a continuous problem, for which fast algorithms with guaranteed global convergence can be developed. Our results have the following novelties: (i) the statistical efficiency of any candidate sample can be evaluated without knowing the exact optimal sample; (ii) the approach applies to a very wide class of statistical models; (iii) it can be integrated with a broad class of information criteria; (iv) it is much faster than existing algorithms; and (v) a geometric interpretation is adopted to theoretically justify the relaxation of the original combinatorial problem to a continuous optimization problem.
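
As an illustration of the continuous-relaxation idea (not the paper's algorithm), here is the classical multiplicative algorithm for the continuous D-optimal design problem, followed by a hard thresholding of the weights to select units; all names and defaults are hypothetical.

```python
import numpy as np

def d_optimal_weights(X, n_iter=200):
    """Multiplicative algorithm for the continuous relaxation of D-optimal
    subsampling: maximize log det(sum_i w_i x_i x_i^T) over the probability simplex."""
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                            # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)  # x_i^T M(w)^{-1} x_i
        w *= d / p    # classical update; stays on the simplex since sum_i w_i d_i = p
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w = d_optimal_weights(X)
sample = np.argsort(w)[-50:]                     # select the 50 highest-weight units
```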

Quality assessment algorithms can be used to estimate the utility of a biometric sample for the purpose of biometric recognition. "Error versus Discard Characteristic" (EDC) plots, and "partial Area Under Curve" (pAUC) values of curves therein, are generally used by researchers to evaluate the predictive performance of such quality assessment algorithms. An EDC curve depends on an error type such as the "False Non Match Rate" (FNMR), a quality assessment algorithm, a biometric recognition system, a set of comparisons each corresponding to a biometric sample pair, and a comparison score threshold corresponding to a starting error. To compute an EDC curve, comparisons are progressively discarded based on the associated samples' lowest quality scores, and the error is computed for the remaining comparisons. Additionally, a discard fraction limit or range must be selected to compute pAUC values, which can then be used to quantitatively rank quality assessment algorithms. This paper discusses and analyses various details for this kind of quality assessment algorithm evaluation, including general EDC properties, interpretability improvements for pAUC values based on a hard lower error limit and a soft upper error limit, the use of relative instead of discrete rankings, stepwise vs. linear curve interpolation, and normalisation of quality scores to a [0, 100] integer range. We also analyse the stability of quantitative quality assessment algorithm rankings based on pAUC values across varying pAUC discard fraction limits and starting errors, concluding that higher pAUC discard fraction limits should be preferred. The analyses are conducted both with synthetic data and with real data for a face image quality assessment scenario, with a focus on general modality-independent conclusions for EDC evaluations.
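
Below is a minimal sketch of the EDC computation as described above, plus a pAUC value over a discard fraction limit. The exact discard and interpolation conventions in the paper may differ (it explicitly discusses stepwise vs. linear interpolation), so treat this as one plausible instantiation.

```python
import numpy as np

def edc_curve(pair_quality, scores, labels, threshold):
    """Error-versus-Discard Characteristic: discard comparisons in order of pair
    quality (e.g. the minimum of the two samples' quality scores) and recompute
    the FNMR among the remaining mated comparisons at a fixed score threshold."""
    order = np.argsort(pair_quality)             # lowest-quality pairs discarded first
    scores, labels = scores[order], labels[order]
    fractions, errors = [], []
    n = len(scores)
    for k in range(n):                           # discard the k lowest-quality pairs
        mated = labels[k:] == 1
        if mated.sum() == 0:
            break
        fractions.append(k / n)
        errors.append(np.mean(scores[k:][mated] < threshold))   # FNMR
    return np.array(fractions), np.array(errors)

def pauc(fractions, errors, limit=0.2):
    """Partial area under the EDC curve up to a discard fraction limit (trapezoid rule)."""
    m = fractions <= limit
    f, e = fractions[m], errors[m]
    return float(np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(f)))
```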

In this manuscript, we propose an efficient, practical and easy-to-implement way to approximate actions of $\varphi$-functions for matrices with $d$-dimensional Kronecker sum structure in the context of exponential integrators up to second order. The method is based on a direction splitting of the involved matrix functions, which lets us exploit the highly efficient level 3 BLAS for the actual computation of the required actions in a $\mu$-mode fashion. The approach has been successfully tested on two- and three-dimensional problems with various exponential integrators, resulting in a consistent speedup with respect to a technique designed to compute actions of $\varphi$-functions for Kronecker sums.
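
The key identity behind the $\mu$-mode approach can be checked in a few lines: for a Kronecker sum, the action of the matrix exponential factorizes exactly into small exponentials applied along each direction (for $\varphi$-functions the analogous directional splitting is an approximation, which is the paper's subject). The sizes below are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# For a 2D Kronecker sum M = A kron I + I kron B, the action exp(h*M) @ vec(V)
# factorizes into small matrix exponentials applied along each direction
# ("mu-mode" products); the large matrix M is never formed.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(20, 20)), rng.normal(size=(30, 30))
V = rng.normal(size=(20, 30))                    # 2D unknown; vec(V) has 600 entries
h = 0.01

U = expm(h * A) @ V @ expm(h * B).T              # direction-wise (mu-mode) application

# reference computation with the full 600 x 600 Kronecker sum
M = np.kron(A, np.eye(30)) + np.kron(np.eye(20), B)
ref = (expm(h * M) @ V.reshape(-1)).reshape(20, 30)
assert np.allclose(U, ref)
```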

Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Because the ODE solution is sensitive to its derivatives, these methods can be hindered by estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs and also avoids the expensive use of numerical solvers. Our methodology can handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through a simulation study in which the estimated parameters and recovered system trajectories are compared with those of other recent methods. A real data example is also provided.
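
To illustrate the integrated-form idea in a simplified, non-Bayesian form (the paper's framework is a Bayesian collocation method), the sketch below matches $x(t_i) = x(0) + \int_0^{t_i} f(x(s),\theta)\,ds$ along a spline smoother of the data instead of matching estimated derivatives. All names and the logistic-growth example are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

def fit_integrated(t, y, f, theta0):
    """Fit x' = f(x, theta) by matching the integrated form of the ODE along a
    spline smoother of the data, avoiding both derivatives and ODE solvers."""
    spline = CubicSpline(t, y)                   # smoothed trajectory
    tt = np.linspace(t[0], t[-1], 200)           # fine quadrature grid
    xx = spline(tt)

    def residuals(theta):
        rhs = f(xx, theta)
        integral = np.concatenate([[0.0], np.cumsum(
            0.5 * (rhs[1:] + rhs[:-1]) * np.diff(tt))])   # trapezoid rule
        pred = y[0] + np.interp(t, tt, integral)          # x(0) + int_0^{t_i} f ds
        return pred - y
    return least_squares(residuals, theta0).x

# toy example: logistic growth x' = theta * x * (1 - x), x(0) = 0.1, theta = 1.5
t = np.linspace(0, 5, 12)
y = 1 / (1 + 9 * np.exp(-1.5 * t)) + np.random.default_rng(1).normal(0, 0.01, t.size)
theta_hat = fit_integrated(t, y, lambda x, th: th * x * (1 - x), theta0=np.array([1.0]))
```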

Sensitivity analysis for the unconfoundedness assumption is a crucial component of observational studies. The marginal sensitivity model has become increasingly popular for this purpose due to its interpretability and mathematical properties. As the basis of $L^\infty$-sensitivity analysis, it assumes the logit difference between the observed and full-data propensity scores is uniformly bounded. In this article, we introduce a new $L^2$-sensitivity analysis framework which is flexible, sharp, and efficient. We allow the strength of unmeasured confounding to vary across units and only require it to be bounded marginally for partial identification. We derive analytical solutions to the optimization problems under our $L^2$-models, which can be used to obtain sharp bounds for the average treatment effect (ATE). We derive efficient influence functions and use them to develop efficient one-step estimators in both analyses. We show that the multiplier bootstrap can be applied to construct simultaneous confidence bands for our ATE bounds. In a real-data study, we demonstrate that $L^2$-analysis relaxes the interpretation of $L^\infty$-analysis and provides a much more reliable calibration process using observed covariates. Finally, we provide an extension of our theoretical results to the conditional average treatment effect (CATE).
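
For concreteness, writing $e(X)$ for the observed propensity score, $e(X,Y)$ for the full-data propensity score, and $\Lambda \ge 1$ for the sensitivity parameter, the $L^\infty$ constraint underlying the marginal sensitivity model is the uniform bound
\[
\left|\operatorname{logit} e(X,Y) - \operatorname{logit} e(X)\right| \le \log\Lambda \quad \text{almost surely},
\]
whereas an $L^2$-type model only bounds the confounding strength marginally, schematically
\[
\mathbb{E}\!\left[\left(\operatorname{logit} e(X,Y) - \operatorname{logit} e(X)\right)^2\right] \le \tau^2,
\]
where $\tau$ is a hypothetical bound parameter; the paper's exact formulation may differ.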

We consider the measurement model $Y = AX,$ where $X$ and, hence, $Y$ are random variables and $A$ is an a priori known tall matrix. At each time instance, a sample of one of $Y$'s coordinates is available, and the goal is to estimate $\mu := \mathbb{E}[X]$ via these samples. However, the challenge is that a small but unknown subset of $Y$'s coordinates are controlled by adversaries with infinite power: they can return any real number each time they are queried for a sample. For such an adversarial setting, we propose the first asynchronous online algorithm that converges to $\mu$ almost surely. We prove this result using a novel differential inclusion based two-timescale analysis. Two key highlights of our proof include: (a) the use of a novel Lyapunov function for showing that $\mu$ is the unique global attractor for our algorithm's limiting dynamics, and (b) the use of martingale and stopping time theory to show that our algorithm's iterates are almost surely bounded.
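
The measurement model itself is easy to simulate, and the simulation makes plain why naive averaging fails; the snippet below only illustrates the setting and the failure mode, not the paper's two-timescale algorithm. The corrupted coordinate and all sizes are arbitrary.

```python
import numpy as np

# Illustration of the measurement model only (not the paper's algorithm):
# Y = A X with a tall known A; an unknown subset of Y's coordinates is
# adversarial, so per-coordinate averaging followed by a least-squares
# inversion is ruined by even a single corrupted coordinate.
rng = np.random.default_rng(0)
m, d, n = 12, 3, 5_000
A = rng.normal(size=(m, d))                      # a priori known tall matrix
mu = np.array([1.0, -2.0, 0.5])                  # target: E[X]
X = mu + rng.normal(size=(n, d))
Y = X @ A.T                                      # honest samples of Y = A X
Y[:, 7] = 1e6                                    # adversary controls coordinate 7

y_bar = Y.mean(axis=0)                           # per-coordinate sample means
mu_naive = np.linalg.lstsq(A, y_bar, rcond=None)[0]
print(np.linalg.norm(mu_naive - mu))             # huge: naive estimate is ruined
```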

In this paper we investigate the stability properties of the so-called gBBKS and GeCo methods, which belong to the class of nonstandard schemes and preserve positivity as well as all linear invariants of the underlying system of ordinary differential equations for any step size. A stability investigation for these methods, which lie outside the class of general linear methods, is challenging since the iterates are always generated by a nonlinear map, even for linear problems. Recently, a stability theorem providing criteria for analyzing such schemes was derived. For the analysis, the schemes are applied to general linear equations and proven to be generated by $\mathcal C^1$-maps with locally Lipschitz continuous first derivatives. As a result, the above-mentioned stability theorem can be applied to investigate the Lyapunov stability of non-hyperbolic fixed points of the numerical method by analyzing the spectrum of the corresponding Jacobian of the generating map. In addition, if a fixed point is proven to be stable, the theorem guarantees the local convergence of the iterates towards it. In the case of first- and second-order gBBKS schemes, the stability domain coincides with that of the underlying Runge--Kutta method. Furthermore, while the first-order GeCo scheme converts steady states to stable fixed points for all step sizes and all linear test problems of finite size, the second-order GeCo scheme has a bounded stability region for the considered test problems. Finally, all theoretical predictions from the stability analysis are validated numerically.
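
The final step of such an analysis, checking the spectrum of the Jacobian of the generating map at a fixed point, can be mimicked numerically. The sketch below does this by finite differences for a toy positivity-preserving Patankar-type Euler scheme (not gBBKS or GeCo); note the eigenvalue 1 arising from the conserved linear invariant, which makes the fixed point non-hyperbolic, exactly the situation the cited stability theorem addresses.

```python
import numpy as np

def jacobian_fd(phi, y_star, eps=1e-7):
    """Central finite-difference Jacobian of the generating map phi at y_star."""
    n = y_star.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (phi(y_star + e) - phi(y_star - e)) / (2 * eps)
    return J

# toy Patankar-type Euler step for y1' = -a*y1, y2' = a*y1: it keeps y >= 0
# and conserves the linear invariant y1 + y2 for every step size h > 0
a, h = 2.0, 0.5
phi = lambda y: np.array([y[0] / (1 + h * a),
                          y[1] + h * a * y[0] / (1 + h * a)])
y_star = np.array([0.0, 1.0])                    # steady state of the ODE system
rho = max(abs(np.linalg.eigvals(jacobian_fd(phi, y_star))))
print(rho)   # 1.0: non-hyperbolic fixed point due to the conserved invariant
```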

We present estimators for smooth Hilbert-valued parameters, where smoothness is characterized by a pathwise differentiability condition. When the parameter space is a reproducing kernel Hilbert space, we provide a means to obtain efficient, root-n rate estimators and corresponding confidence sets. These estimators correspond to generalizations of cross-fitted one-step estimators based on Hilbert-valued efficient influence functions. We give theoretical guarantees even when arbitrary estimators of nuisance functions are used, including those based on machine learning techniques. We show that these results naturally extend to Hilbert spaces that lack a reproducing kernel, as long as the parameter has an efficient influence function. However, we also uncover the unfortunate fact that, when there is no reproducing kernel, many interesting parameters fail to have an efficient influence function, even though they are pathwise differentiable. To handle these cases, we propose a regularized one-step estimator and associated confidence sets. We also show that pathwise differentiability, which is a central requirement of our approach, holds in many cases. Specifically, we provide multiple examples of pathwise differentiable parameters and develop corresponding estimators and confidence sets. Among these examples, four are particularly relevant to ongoing research by the causal inference community: the counterfactual density function, dose-response function, conditional average treatment effect function, and counterfactual kernel mean embedding.
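
For a sense of the estimator structure, here is a skeleton of a cross-fitted one-step estimator in the scalar special case of a counterfactual mean (AIPW form); the paper's estimators replace the scalar influence function below with a Hilbert-valued one. The scikit-learn learners and the clipping constants are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import KFold

def one_step_cf(X, T, Y, n_folds=5):
    """Cross-fitted one-step estimator of E[Y(1)] (AIPW): plug-in plus an
    efficient-influence-function correction, with nuisances fit out-of-fold."""
    psi = np.empty(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        # nuisance estimators fit on the training folds only (cross-fitting)
        mu = RandomForestRegressor().fit(X[train][T[train] == 1], Y[train][T[train] == 1])
        pi = RandomForestClassifier().fit(X[train], T[train])
        m = mu.predict(X[test])                             # outcome regression
        p = np.clip(pi.predict_proba(X[test])[:, 1], 0.01, 0.99)   # propensity
        # plug-in + influence-function correction, evaluated on the held-out fold
        psi[test] = m + T[test] * (Y[test] - m) / p
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))    # estimate, std. error
```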
