
We derive an $L_1$-bound between the coefficients of the optimal causal filter applied to the data-generating process and its approximation based on finite-sample observations. Here, we assume that the data-generating process is second-order stationary with either short or long memory autocovariances. To obtain the $L_1$-bound, we first provide exact expressions for the causal filter coefficients and their approximation in terms of absolutely convergent series of the multistep ahead infinite and finite predictor coefficients, respectively. Then, we prove a so-called uniform-type Baxter's inequality to obtain a bound for the difference between the two multistep ahead predictor coefficients (under both short and long memory time series). The $L_1$-approximation error bound for the causal filter coefficients can be used to evaluate the quality of time series predictions via the mean squared error criterion.
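
As a concrete, entirely illustrative instance of the objects involved, the sketch below compares finite and infinite one-step predictor coefficients for an invertible MA(1) process (a toy model of our own choosing, not one from the paper) and reports their $L_1$ gap, which shrinks as the number of past observations grows, in the spirit of Baxter-type inequalities:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Toy model: invertible MA(1), X_t = e_t + theta * e_{t-1}.
# Infinite one-step predictor coefficients: pi_j = -(-theta)^j.
theta = 0.6

def acov(h):
    """Autocovariance of the MA(1) process at lag h."""
    h = np.abs(h)
    return np.where(h == 0, 1.0 + theta**2, np.where(h == 1, theta, 0.0))

for n in (5, 10, 20, 40):
    g = acov(np.arange(n + 1)).astype(float)
    # Finite predictor from n past values: solve the Yule-Walker system
    # Toeplitz(gamma(0..n-1)) phi = (gamma(1), ..., gamma(n)).
    phi_finite = solve_toeplitz(g[:-1], g[1:])
    phi_infinite = -(-theta) ** np.arange(1, n + 1)   # truncated to length n
    print(n, np.abs(phi_finite - phi_infinite).sum()) # L1 gap, decays in n
```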

Related Content

In this work, the z-transform is used to analyze time-discrete solutions of Volterra integrodifferential equations (VIDEs) with nonsmooth multi-term kernels in a Hilbert space; this class of continuous problems was first considered and analyzed by Hannsgen and Wheeler (SIAM J Math Anal 15 (1984) 579-594). This work discusses three cases of the kernels $\beta_q(t)$ appearing in the integrals of the multi-term VIDEs, and for each case we use a corresponding numerical technique to approximate the solution. Firstly, for the case $\beta_1(t), \beta_2(t) \in \mathrm{L}_1(\mathbb{R}_+)$, the Crank-Nicolson (CN) method and an interpolation quadrature (IQ) rule are applied to obtain time-discrete solutions of the multi-term VIDEs; secondly, for the case $\beta_1(t)\in \mathrm{L}_1(\mathbb{R}_+)$ and $\beta_2(t)\in \mathrm{L}_{1,\text{loc}}(\mathbb{R}_+)$, the second-order backward differentiation formula (BDF2) and second-order convolution quadrature (CQ) are employed to discretize the multi-term problem in time; thirdly, for the case $\beta_1(t), \beta_2(t)\in \mathrm{L}_{1,\text{loc}}(\mathbb{R}_+)$, we utilize the CN method and the trapezoidal CQ (TCQ) rule to approximate the multi-term problem in time. For the discrete solutions in all three cases, long-time global stability and convergence are proved based on the z-transform and suitable assumptions. Furthermore, the long-time estimate for the third case is confirmed by numerical tests.
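
The following sketch illustrates just one ingredient of the second case, the computation of second-order BDF2 convolution quadrature weights via the standard FFT contour recipe, applied to the locally integrable kernel $\beta(t) = t^{\alpha-1}/\Gamma(\alpha)$; all parameter choices here are our own and not taken from the paper:

```python
import numpy as np
from scipy.special import gamma as Gamma

# Kernel beta(t) = t^(alpha-1)/Gamma(alpha), Laplace transform K(s) = s^(-alpha).
# BDF2 CQ weights are the Taylor coefficients of K(delta(zeta)/tau), where
# delta(zeta) = (1 - zeta) + (1 - zeta)^2 / 2, recovered by FFT on |zeta| = rho.
alpha, tau, N = 0.5, 0.01, 200
idx = np.arange(N + 1)
rho = 1e-8 ** (1.0 / N)                        # radius balancing aliasing vs roundoff
zeta = rho * np.exp(2j * np.pi * idx / (N + 1))
delta = (1.0 - zeta) + 0.5 * (1.0 - zeta) ** 2 # BDF2 generating polynomial
vals = (delta / tau) ** (-alpha)               # K evaluated on the contour
w = (np.fft.fft(vals) / (N + 1)).real / rho ** idx  # CQ weights w_0..w_N

# Check against the exact convolution for g(t) = 1:
# int_0^t beta(t - s) ds = t^alpha / Gamma(alpha + 1).
t = N * tau
print(w.sum(), t ** alpha / Gamma(alpha + 1.0))  # the two should agree closely
```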

In this contribution we propose an optimally stable ultraweak Petrov-Galerkin variational formulation, and a subsequent discretization, for stationary reactive transport problems. The discretization is based exclusively on the choice of discrete approximate test spaces, while the trial space is a priori infinite-dimensional. The solution in the trial space, or even just functional evaluations of the solution, is obtained in a post-processing step. We detail the theoretical framework and demonstrate its usage in a numerical experiment motivated by the modeling of catalytic filters.
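
To make the idea of test-space-induced trial spaces tangible, here is a toy 1D analogue of our own construction (not the paper's formulation or code): the ultraweak form of $u' + cu = f$ on $(0,1)$ with $u(0)=0$ tests against $A^*v = -v' + cv$ with $v(1)=0$, and choosing discrete test functions $\varphi_i$ yields the trial space spanned by $A^*\varphi_i$, in which the Galerkin solution is the $L_2$-best approximation:

```python
import numpy as np

# Model problem: u'(x) + c*u(x) = f(x) on (0,1), u(0) = 0. Ultraweak form:
# find u in L2 with (u, A*v) = (f, v) for all v with v(1) = 0, A*v = -v' + c*v.
c = 3.0
u_exact = lambda x: np.sin(np.pi * x)                     # manufactured solution
f = lambda x: np.pi * np.cos(np.pi * x) + c * u_exact(x)  # matching right-hand side

xq = np.linspace(0.0, 1.0, 4001)           # fine grid for all L2 inner products
ip = lambda a, b: np.trapz(a * b, xq)      # L2(0,1) inner product

def hat_basis(n):
    """Hat functions at nodes 0..n-1 (node at x = 1 dropped so that v(1) = 0)."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    h = nodes[1] - nodes[0]
    phi = np.array([np.clip(1.0 - np.abs(xq - p) / h, 0.0, None)
                    for p in nodes[:-1]])
    dphi = np.array([np.where(np.abs(xq - p) < h, -np.sign(xq - p) / h, 0.0)
                     for p in nodes[:-1]])
    return phi, dphi

n = 32
phi, dphi = hat_basis(n)
Astar = -dphi + c * phi                    # optimal trial functions A* phi_i

G = np.array([[ip(Astar[i], Astar[j]) for j in range(n)] for i in range(n)])
b = np.array([ip(f(xq), phi[i]) for i in range(n)])
w = np.linalg.solve(G, b)                  # SPD Gram system
u_h = Astar.T @ w                          # L2-best approximation in the trial space

err = u_h - u_exact(xq)
print("L2 error:", np.sqrt(ip(err, err)))  # decreases as n grows
```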

This work introduces a method for selecting linear functional measurements of a vector-valued time series, optimized for forecasting over distant time horizons. By formulating and solving the problem of sequential linear measurement design as an infinite-horizon problem with the time-averaged trace of the Cram\'{e}r-Rao lower bound (CRLB) for forecasting as the cost, we can collect the most informative data irrespective of the eventual forecasting algorithm. By introducing theoretical results regarding measurements under additive noise from natural exponential families, we construct an equivalent problem from which a local dimensionality reduction can be derived. This alternative formulation exploits the future collapse of dimensionality inherent in the limiting behavior of many differential equations, which can be directly observed in the low-rank structure of the CRLB for forecasting. Implementations of both an approximate dynamic programming formulation and the proposed alternative are illustrated using an extended Kalman filter for state estimation, with results on simulated systems with limit cycles and chaotic behavior demonstrating a linear improvement in the CRLB as a function of the number of collapsing dimensions of the system.
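
As a simplified stand-in for the proposed design procedure (not the paper's algorithm), the sketch below exploits the fact that, for a linear-Gaussian model, the forecasting CRLB coincides with the Kalman forecast covariance; it scores candidate measurement functionals $h$ by the time-averaged trace of the $\tau$-step-ahead forecast covariance and picks the best one. The dynamics, noise levels, and candidate set are all assumptions made for illustration:

```python
import numpy as np

# Linear-Gaussian toy system: x_{k+1} = A x_k + w_k, scalar measurement
# y_k = h @ x_k + v_k. For this model the forecasting CRLB equals the Kalman
# forecast covariance, so we can score each candidate functional h directly.
A = np.array([[0.99, 0.10], [-0.10, 0.99]])   # assumed dynamics
Q = 0.01 * np.eye(2)                          # process noise covariance
r = 0.10                                      # measurement noise variance
tau = 20                                      # forecast horizon (steps ahead)

def avg_forecast_trace(h, steps=200):
    """Time-averaged trace of the tau-step-ahead forecast covariance under h."""
    P = np.eye(2)
    total = 0.0
    for _ in range(steps):
        S = h @ P @ h + r                     # innovation variance
        K = (P @ h) / S                       # Kalman gain
        P = P - np.outer(K, h @ P)            # measurement update
        F = P.copy()
        for _ in range(tau):                  # propagate tau steps with no data
            F = A @ F @ A.T + Q
        total += np.trace(F)
        P = A @ P @ A.T + Q                   # one-step time update
    return total / steps

candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
              np.array([1.0, 1.0]) / np.sqrt(2.0)]
for h in candidates:
    print(h, avg_forecast_trace(h))           # smaller is more informative
```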

The setting of a right-censored random sample subject to contamination is considered. In various fields, expert information is often available and used to overcome the contamination. This paper integrates expert knowledge into the product-limit estimator in two different ways with distinct interpretations. Strong uniform consistency is proved for both cases under certain assumptions on the kind of contamination and the quality of expert information, which sheds light on the techniques and decisions that practitioners may take. The nuances of the techniques are discussed -- also with a view towards semi-parametric estimation -- and they are illustrated using simulated and real-world insurance data.
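
For reference, a minimal sketch of the plain product-limit (Kaplan-Meier) estimator that both expert-informed variants build on is given below; the paper's two ways of integrating expert knowledge are not reproduced here:

```python
import numpy as np

def kaplan_meier(t, d):
    """Product-limit estimator from right-censored data (t_i, d_i),
    where d_i = 1 if the event was observed and 0 if censored."""
    t, d = np.asarray(t, float), np.asarray(d, int)
    s, times, surv = 1.0, [], []
    for u in np.unique(t[d == 1]):            # distinct observed event times
        at_risk = np.sum(t >= u)              # still at risk just before u
        events = np.sum((t == u) & (d == 1))
        s *= 1.0 - events / at_risk           # product-limit update
        times.append(u)
        surv.append(s)
    return np.array(times), np.array(surv)

t = [2.0, 3.0, 3.0, 5.0, 7.0, 8.0]
d = [1, 1, 0, 1, 0, 1]                        # 0 marks right-censored observations
print(kaplan_meier(t, d))
```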

We consider identification and inference for the average treatment effect and the heterogeneous treatment effect conditional on observable covariates in the presence of unmeasured confounding. Since point identification of these treatment effects is not achievable without strong assumptions, we obtain bounds on them by leveraging differential effects, a tool that allows a second treatment to be used to learn about the effect of the first. The differential effect is the effect of using one treatment in lieu of the other. We provide conditions under which differential treatment effects can be used to point identify or partially identify treatment effects. Under these conditions, we develop a flexible and easy-to-implement semi-parametric framework to estimate the bounds and establish asymptotic properties for conducting statistical inference uniformly over the covariate support. The proposed method is examined through a simulation study and two case studies that investigate the effect of smoking on blood levels of lead and cadmium using the National Health and Nutrition Examination Survey, and the effect of soft drink consumption on the occurrence of physical fights among teenagers using the Youth Risk Behavior Surveillance System.
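
The bounding logic can be illustrated with a deliberately simplistic toy computation (our own construction, not the paper's semi-parametric estimator): if the differential effect is credibly identified while each treatment effect alone is not, then an assumed range for the second treatment's effect translates directly into bounds for the first:

```python
import numpy as np

# Toy data: two treatments with true effects 1.5 and 0.5 (so the true
# differential effect is 1.0); 'u' plays the role of noise/confounding that,
# by construction here, cancels in the differential comparison.
rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)
t = rng.integers(1, 3, size=n)                     # treatment 1 or 2
y = 1.5 * (t == 1) + 0.5 * (t == 2) + u

delta = y[t == 1].mean() - y[t == 2].mean()        # differential effect, ~1.0
lo2, hi2 = 0.0, 1.0                                # assumed range for effect of treatment 2
print("bounds on effect of treatment 1:", (delta + lo2, delta + hi2))
# The true effect of treatment 1 (here 1.5) falls inside the printed bounds.
```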

We derive a formula for optimal hard thresholding of the singular value decomposition in the presence of correlated additive noise; although it nominally involves unobservables, we show how to apply it even when the noise covariance structure is not a priori known or is not independently estimable. The proposed method, which we call ScreeNOT, is a mathematically solid alternative to Cattell's ever-popular but vague Scree Plot heuristic from 1966. ScreeNOT has a surprising oracle property: in large finite samples, it typically achieves exactly the lowest possible MSE for matrix recovery on each given problem instance; i.e., the specific threshold it selects gives exactly the smallest achievable MSE loss among all possible threshold choices for that noisy dataset and that unknown underlying true low-rank model. The method is computationally efficient and robust against perturbations of the underlying covariance structure. Our results depend on the assumption that the singular values of the noise have a limiting empirical distribution of compact support; this model, which is standard in random matrix theory, is satisfied by many models exhibiting either cross-row or cross-column correlation structure, and also by many situations with inter-element correlation structure. Simulations demonstrate the effectiveness of the method even at moderate matrix sizes. The paper is supplemented by ready-to-use software packages implementing the proposed algorithm: package ScreeNOT in Python (via PyPI) and R (via CRAN).
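
As a generic illustration of the reconstruction step only, the sketch below applies hard thresholding to the SVD of a noisy low-rank matrix; the threshold used here is an arbitrary placeholder, whereas computing the optimal threshold is precisely what the ScreeNOT packages provide:

```python
import numpy as np

def svd_hard_threshold(Y, thresh):
    """Reconstruct Y keeping only singular values above 'thresh'."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s > thresh
    return (U[:, keep] * s[keep]) @ Vt[keep]

rng = np.random.default_rng(1)
X = np.outer(rng.normal(size=200), rng.normal(size=100))  # rank-1 signal
Y = X + 0.1 * rng.normal(size=X.shape)                    # additive noise
Xhat = svd_hard_threshold(Y, thresh=5.0)                  # placeholder threshold
print(np.linalg.norm(Xhat - X) / np.linalg.norm(X))       # relative recovery error
```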

We present a study of a kernel-based two-sample test statistic related to the Maximum Mean Discrepancy (MMD) in the manifold data setting, assuming that high-dimensional observations are close to a low-dimensional manifold. We characterize the test level and power in relation to the kernel bandwidth, the number of samples, and the intrinsic dimensionality of the manifold. Specifically, we show that when data densities are supported on a $d$-dimensional sub-manifold $\mathcal{M}$ embedded in an $m$-dimensional space, the kernel two-sample test for data sampled from a pair of distributions $p$ and $q$ that are H\"older with order $\beta$ (up to 2) is powerful when the number of samples $n$ is large enough that $\Delta_2 \gtrsim n^{- { 2 \beta/( d + 4 \beta ) }}$, where $\Delta_2$ is the squared $L^2$-divergence between $p$ and $q$ on the manifold. We establish a lower bound on the test power for finite $n$ that is sufficiently large, where the kernel bandwidth parameter $\gamma$ scales as $n^{-1/(d+4\beta)}$. The analysis extends to cases where the manifold has a boundary and the data samples contain high-dimensional additive noise. Our results indicate that the kernel two-sample test does not suffer from a curse of dimensionality when the data lie on or near a low-dimensional manifold. We validate our theory and the properties of the kernel test for manifold data through a series of numerical experiments.
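
A minimal sketch of the test statistic is shown below: the standard unbiased estimator of MMD$^2$ with a Gaussian kernel, using the bandwidth scaling $\gamma \sim n^{-1/(d+4\beta)}$ from the theory. The toy data here are full-dimensional Gaussians rather than manifold-supported samples, and $d$ and $\beta$ are assumed known:

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma):
    """Unbiased estimate of MMD^2 with a Gaussian kernel of bandwidth gamma."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * gamma ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(2)
n, d, beta = 500, 2, 1.0
gamma = n ** (-1.0 / (d + 4 * beta))          # bandwidth scaling from the theory
X = rng.normal(size=(n, d))                   # sample from p
Y = rng.normal(size=(n, d)) + 0.2             # sample from a mean-shifted q
print(mmd2_unbiased(X, Y, gamma))             # positive under the alternative
```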

Quadratization of polynomial and nonpolynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling, and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, and control, and it provides a convenient parametrization for learning. This paper presents novel theory, algorithms, and software capabilities for the quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases in which a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes the process of quadratization to systems of arbitrary dimension that retain their nonlinear structure as the dimension grows; for such systems, we provide dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical when the discretization size increases. As an important aspect for the practical adoption of this research, we extend the capabilities of the QBee software to both non-autonomous systems of ODEs and ODEs of arbitrary dimension. We present several examples of ODEs that were previously reported in the literature and for which our new algorithms find quadratized ODE systems of lower dimension than the previously reported lifting transformations. We further highlight an important application area for quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model that also avoids additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
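
A classic hand-worked example (not QBee output) makes the idea concrete: the cubic ODE $x' = -x^3$ becomes quadratic after introducing $y = x^2$, since then $x' = -xy$ and $y' = 2xx' = -2y^2$. The sketch integrates both systems and checks that they agree:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Original cubic system and its quadratization via y = x^2:
#   x' = -x^3      <->      x' = -x*y,  y' = -2*y^2   (valid on y = x^2).
orig = lambda t, s: [-s[0] ** 3]
quad = lambda t, s: [-s[0] * s[1], -2.0 * s[1] ** 2]

x0 = 0.8
t_eval = np.linspace(0.0, 5.0, 11)
a = solve_ivp(orig, (0.0, 5.0), [x0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
b = solve_ivp(quad, (0.0, 5.0), [x0, x0 ** 2], t_eval=t_eval,
              rtol=1e-10, atol=1e-12)
print(np.max(np.abs(a.y[0] - b.y[0])))        # agreement up to solver tolerance
```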

In this paper we study the finite-sample and asymptotic properties of various weighting estimators of the local average treatment effect (LATE), several of which are based on Abadie's (2003) kappa theorem. Our framework presumes a binary treatment and a binary instrument, which may only be valid after conditioning on additional covariates. We argue that one of the Abadie estimators, which is weight normalized, is preferable in many contexts. Several other estimators, which are unnormalized, do not generally satisfy scale invariance with respect to the natural logarithm or translation invariance, and they are therefore sensitive to the units of measurement when estimating the LATE in logs and, more generally, to the centering of the outcome variable. On the other hand, when noncompliance is one-sided, certain unnormalized estimators have the advantage of being based on a denominator that is bounded away from zero. To reconcile these findings, we demonstrate that when the instrument propensity score is estimated using an appropriate covariate balancing approach, the resulting normalized estimator also shares this advantage. We use a simulation study and three empirical applications to illustrate our findings. In two cases, the unnormalized estimates are clearly unreasonable, with "incorrect" signs, magnitudes, or both.
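
The normalization issue can be sketched with a generic Horvitz-Thompson versus Hajek weighting form of the LATE estimator (not necessarily the exact kappa-based expressions studied in the paper), using a known instrument propensity score and a deliberately large centering constant in the outcome:

```python
import numpy as np

def late(y, d, z, e, normalize):
    """Wald-type weighting estimator of the LATE with instrument propensity e."""
    w1, w0 = z / e, (1 - z) / (1 - e)
    if normalize:                              # Hajek-type: rescale weights to mean 1
        w1, w0 = w1 / w1.mean(), w0 / w0.mean()
    num = (w1 * y).mean() - (w0 * y).mean()
    den = (w1 * d).mean() - (w0 * d).mean()
    return num / den

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))                   # known instrument propensity score
z = (rng.random(n) < e).astype(int)
d = (z * (rng.random(n) < 0.8)).astype(int)    # one-sided noncompliance
y = 2.0 * d + x + rng.normal(size=n) + 100.0   # true LATE = 2, large centering
print("normalized:  ", late(y, d, z, e, True))
print("unnormalized:", late(y, d, z, e, False))
```

Both versions are consistent here, but the unnormalized one is visibly noisier because the centering constant does not cancel exactly in finite samples.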

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose another approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the performance of the classical and proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial once the causal effects are correctly accounted for.
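
Since the paper's estimators are not specified in the abstract, the toy sketch below illustrates only the underlying problem: a naive comparison of repayment across credit decisions is biased when an unadjusted-for covariate drives both the decision and the repayment, while adjusting for it recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
risk = rng.normal(size=n)                        # covariate driving both sides
approve = risk + rng.normal(size=n) > 0          # credit decision depends on risk
repay = 1.0 * approve + 2.0 * risk + rng.normal(size=n)  # true effect = 1.0

naive = repay[approve].mean() - repay[~approve].mean()   # confounded comparison
Xmat = np.column_stack([np.ones(n), approve, risk])      # adjust for risk
beta = np.linalg.lstsq(Xmat, repay, rcond=None)[0]
print("naive:", naive, "adjusted:", beta[1])     # adjusted is close to 1.0
```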
