
We discuss a pointwise numerical differentiation formula on multivariate scattered data, based on the coefficients of local polynomial interpolation at Discrete Leja Points, written in the monomial basis of Taylor's formula. Error bounds for the approximation of partial derivatives of any order compatible with the function regularity are provided, as well as sensitivity estimates with respect to functional perturbations, in terms of the inverse Vandermonde coefficients that are active in the differentiation process. Several numerical tests are presented showing the accuracy of the approximation.
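
As a rough illustration of the idea, the sketch below fits a local polynomial in the shifted Taylor monomial basis $(x-x_0)^\alpha/\alpha!$ on scattered 2D points, so that each fitted coefficient directly estimates a partial derivative $D^\alpha f(x_0)$. For simplicity it uses a least-squares fit on all local points rather than interpolation at Discrete Leja Points; the function names (`multi_indices`, `local_derivatives`) are illustrative, not from the paper.

```python
import numpy as np
from math import factorial
from itertools import product

def multi_indices(dim, deg):
    """All multi-indices alpha with |alpha| <= deg."""
    return [a for a in product(range(deg + 1), repeat=dim) if sum(a) <= deg]

def local_derivatives(points, values, x0, deg):
    """Estimate D^alpha f(x0) for all |alpha| <= deg by fitting a polynomial
    in the shifted Taylor basis (x - x0)^alpha / alpha!; each fitted
    coefficient is then directly a derivative estimate."""
    alphas = multi_indices(points.shape[1], deg)
    V = np.empty((len(points), len(alphas)))
    for col, a in enumerate(alphas):
        V[:, col] = np.prod((points - x0) ** np.array(a), axis=1)
        V[:, col] /= np.prod([factorial(ai) for ai in a])
    coef, *_ = np.linalg.lstsq(V, values, rcond=None)
    return dict(zip(alphas, coef))

# toy check on f(x, y) = sin(x) * y^2 near (0.3, 0.7)
rng = np.random.default_rng(0)
x0 = np.array([0.3, 0.7])
pts = x0 + 0.1 * rng.uniform(-1, 1, size=(60, 2))
vals = np.sin(pts[:, 0]) * pts[:, 1] ** 2
d = local_derivatives(pts, vals, x0, deg=3)
print(d[(1, 0)], "vs", np.cos(0.3) * 0.7 ** 2)   # df/dx at x0
print(d[(0, 2)], "vs", 2 * np.sin(0.3))          # d2f/dy2 at x0
```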

Related Content

We present a general approach to constructing permutation tests that are both exact for the null hypothesis of equality of distributions and asymptotically correct for testing equality of parameters of distributions while allowing the distributions themselves to differ. These robust permutation tests transform a given test statistic by a consistent estimator of its limiting distribution function before enumerating its permutation distribution. This transformation, known as prepivoting, aligns the unconditional limiting distribution for the test statistic with the probability limit of its permutation distribution. Through prepivoting, the tests permute one minus an asymptotically valid $p$-value for testing the null of equality of parameters. We describe two approaches for prepivoting within permutation tests, one directly using asymptotic normality and the other using the bootstrap. We further illustrate that permutation tests using bootstrap prepivoting can provide improvements to the order of the error in rejection probability relative to competing transformations when testing equality of parameters, while maintaining exactness under equality of distributions. Simulation studies highlight the versatility of the proposal, illustrating the restoration of asymptotic validity to a wide range of permutation tests conducted when only the parameters of distributions are equal.
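
A minimal sketch of the first (asymptotic normality) variant for the difference in means: the studentized statistic is pushed through the standard normal CDF, so the permuted quantity is one minus an asymptotically valid p-value. The one-sided setup and all helper names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def prepivoted_stat(x, y):
    """Studentized difference in means passed through the standard normal
    CDF: an asymptotically Uniform(0,1) 'one minus p-value'."""
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return norm.cdf((x.mean() - y.mean()) / se)

def robust_permutation_test(x, y, n_perm=2000, seed=0):
    """Permute the prepivoted statistic; exact under equal distributions,
    asymptotically valid under equal means with unequal variances."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = prepivoted_stat(x, y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if prepivoted_stat(perm[:len(x)], perm[len(x):]) >= t_obs:
            count += 1
    return (1 + count) / (1 + n_perm)

x = np.random.default_rng(1).normal(0.0, 1.0, 40)
y = np.random.default_rng(2).normal(0.0, 3.0, 60)   # same mean, unequal spread
print(robust_permutation_test(x, y))
```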

We study the problem of estimating non-linear functionals of discrete distributions in the context of local differential privacy. The initial data $x_1,\ldots,x_n \in [K]$ are assumed to be i.i.d. and distributed according to an unknown discrete distribution $p = (p_1,\ldots,p_K)$. Only $\alpha$-locally differentially private (LDP) samples $z_1,\ldots,z_n$ are publicly available, where the term 'local' means that each $z_i$ is produced using one individual attribute $x_i$. We exhibit privacy mechanisms (PM) that are interactive (i.e., allowed to use already published confidential data) or non-interactive. We describe the behavior of the quadratic risk for estimating the power sum functional $F_{\gamma} = \sum_{k=1}^K p_k^{\gamma}$, $\gamma > 0$, as a function of $K$, $n$ and $\alpha$. In the non-interactive case, we study two plug-in type estimators of $F_{\gamma}$, for all $\gamma > 0$, that are similar to the MLE analyzed by Jiao et al. (2017) in the multinomial model. However, due to the privacy constraint the rates we attain are slower and similar to those obtained in the Gaussian model by Collier et al. (2020). In the interactive case, we introduce for all $\gamma > 1$ a two-step procedure which attains the faster parametric rate $(n \alpha^2)^{-1/2}$ when $\gamma \geq 2$. We give lower bounds over all $\alpha$-LDP mechanisms and all estimators using the private samples.
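
The sketch below illustrates a non-interactive plug-in estimator of $F_\gamma$, using $K$-ary generalized randomized response as a stand-in $\alpha$-LDP mechanism; the paper's mechanisms and estimators differ in detail, so this only shows the privatize/debias/plug-in pipeline.

```python
import numpy as np

def ldp_power_sum(x, K, alpha, gamma, seed=0):
    """Plug-in estimate of F_gamma = sum_k p_k^gamma from alpha-LDP views
    produced by K-ary generalized randomized response (illustrative PM)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    keep_prob = np.exp(alpha) / (np.exp(alpha) + K - 1)
    flip_prob = 1.0 / (np.exp(alpha) + K - 1)   # prob. of each other symbol
    # privatize: keep x_i with prob keep_prob, else a uniform other symbol
    keep = rng.random(n) < keep_prob
    other = (x + rng.integers(1, K, size=n)) % K
    z = np.where(keep, x, other)
    # debias the empirical frequencies of the private samples, then plug in
    freq = np.bincount(z, minlength=K) / n
    p_hat = (freq - flip_prob) / (keep_prob - flip_prob)
    return np.sum(np.clip(p_hat, 0.0, 1.0) ** gamma)

rng = np.random.default_rng(42)
p = np.array([0.5, 0.2, 0.2, 0.1])
x = rng.choice(4, size=20000, p=p)
print(ldp_power_sum(x, K=4, alpha=1.0, gamma=2.0), "vs true", np.sum(p ** 2))
```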

In this paper we derive stability estimates in $L^2$- and $L^\infty$-based Sobolev spaces for the $L^2$ projection and a family of quasi-interpolants in the space of smooth, 1-periodic, polynomial splines defined on a uniform mesh in $[0,1]$. As a result of the assumed periodicity and the uniform mesh, cyclic matrix techniques and suitable decay estimates of the elements of the inverse of a Gram matrix associated with the standard basis of the space of splines are used to establish the stability results.
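
A small numerical sketch of the objects involved: the Gram matrix of periodized cubic B-splines on a uniform mesh is circulant, its inverse decays rapidly away from the diagonal, and solving with it yields the $L^2$ projection. The mesh size, target function and quadrature here are arbitrary illustrative choices.

```python
import numpy as np
from scipy.interpolate import BSpline

N = 16                                       # uniform mesh, N knots on [0, 1)
B = BSpline.basis_element(np.arange(5), extrapolate=False)  # cubic, support [0, 4]

def phi(j, x):
    """Periodized cubic B-spline number j on the uniform mesh (period 1)."""
    return np.nan_to_num(B((N * x - j) % N))

xq = np.linspace(0.0, 1.0, 4001)             # quadrature grid
G = np.array([[np.trapz(phi(i, xq) * phi(j, xq), xq) for j in range(N)]
              for i in range(N)])            # circulant Gram matrix

f = lambda x: np.exp(np.sin(2 * np.pi * x))  # smooth 1-periodic target
b = np.array([np.trapz(f(xq) * phi(j, xq), xq) for j in range(N)])
c = np.linalg.solve(G, b)                    # L2 projection coefficients

proj = sum(c[j] * phi(j, xq) for j in range(N))
print("L2 error:", np.sqrt(np.trapz((proj - f(xq)) ** 2, xq)))
print("decay along a row of G^{-1}:", np.abs(np.linalg.inv(G)[0, :8]).round(6))
```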

We introduce the multivariate decomposition finite element method (MDFEM) for solving elliptic PDEs with uniform random diffusion coefficients. We show that the MDFEM can be used to reduce the computational complexity of estimating the expected value of a linear functional of the solution of the PDE. The proposed algorithm combines the multivariate decomposition method (MDM), to compute infinite-dimensional integrals, with the finite element method (FEM), to solve different instances of the PDE. The strategy of the MDFEM is to decompose the infinite-dimensional problem into multiple finite-dimensional ones, which lends itself more readily to parallelization than solving a single high-dimensional problem. Our first result adjusts the analysis of the multivariate decomposition method to incorporate the logarithmic factor which typically appears in error bounds for multivariate quadrature, i.e., cubature, methods; we also take care of the fact that the number of points $n$ needs to come, e.g., in powers of 2 for higher-order approximations. For the further analysis we specialize the cubature methods to two types of quasi-Monte Carlo (QMC) rules, namely digitally shifted polynomial lattice rules and interlaced polynomial lattice rules. The second and main contribution then presents a bound on the error of the MDFEM and shows higher-order convergence with respect to the total computational cost in the case of interlaced polynomial lattice rules combined with a higher-order finite element method.
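
The sketch below shows only the basic building block the MDFEM combines, namely one FEM solve per cubature node, on a toy 1D problem $-(a(x,y)u')'=1$ with a truncated uniform random coefficient and a Sobol' rule standing in for the polynomial lattice rules; the multivariate decomposition into active sets is omitted. The coefficient expansion and all sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import qmc

def fem_solve(y, m=256):
    """Piecewise-linear FEM for -(a(x,y) u')' = 1 on (0,1), u(0)=u(1)=0, with
    a(x,y) = 1 + sum_j y_j sin(pi (j+1) x) / (j+1)^2, evaluated per element
    at midpoints; returns the linear functional G(u) = int_0^1 u dx."""
    h = 1.0 / m
    mid = (np.arange(m) + 0.5) * h
    j = np.arange(len(y))[:, None]
    a = 1.0 + (y[:, None] * np.sin(np.pi * (j + 1) * mid) / (j + 1) ** 2).sum(0)
    main = (a[:-1] + a[1:]) / h               # tridiagonal stiffness matrix
    off = -a[1:-1] / h
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, np.full(m - 1, h))  # load vector for f = 1
    return h * u.sum()                         # trapezoid rule, u = 0 at ends

s, n_nodes = 8, 2 ** 9                         # truncation dim.; n in powers of 2
nodes = qmc.Sobol(d=s, scramble=True, seed=0).random(n_nodes) - 0.5
print("E[G(u)] ~", np.mean([fem_solve(y) for y in nodes]))
```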

In this paper, we consider the numerical approximation of the periodic measure of time-periodic stochastic differential equations (SDEs) under a weak dissipativity condition. For this we first study the existence of the periodic measure $\rho_t$ and the large time behaviour of $\mathcal{U}(t+s,s,x) := \mathbb{E}\phi(X_{t}^{s,x})-\int\phi\, d\rho_t$, where $X_t^{s,x}$ is the solution of the SDE and $\phi$ is a smooth test function of polynomial growth at infinity. We prove that $\mathcal{U}$ and all its spatial derivatives decay to 0 at an exponential rate in time $t$, in the sense of an average over the initial time $s$. We also prove the existence and geometric ergodicity of the periodic measure of the discretized semi-flow from the Euler-Maruyama scheme, together with moment estimates of any order, when the time step is sufficiently small (uniformly over all orders). We thereafter obtain that the weak error of the numerical scheme over an infinite horizon is of order $1$ in the time step. We prove that the choice of step size can be made uniformly over all test functions $\phi$. Subsequently we are able to estimate the averaged periodic measure with ergodic numerical schemes.
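
As a toy illustration, the sketch below runs Euler-Maruyama for a scalar SDE with 1-periodic drift and estimates $\int\phi\,d\rho_0$ by averaging samples taken at a fixed phase across many periods; the drift, noise level and step size are arbitrary choices, and the exact periodic mean of this linear example is available for comparison.

```python
import numpy as np

def em_periodic_mean(phi, sigma=0.5, dt=1e-2, periods=2000, burn=200, seed=0):
    """Euler-Maruyama for dX = (-X + sin(2*pi*t)) dt + sigma dW (1-periodic
    drift). After a burn-in, samples of X at integer times approximate the
    periodic measure rho_0, so their phi-average estimates int phi d rho_0."""
    rng = np.random.default_rng(seed)
    steps = round(1.0 / dt)                  # steps per period
    x, t, samples = 0.0, 0.0, []
    for p in range(periods):
        for _ in range(steps):
            x += (-x + np.sin(2 * np.pi * t)) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if p >= burn:
            samples.append(phi(x))
    return np.mean(samples)

# For this linear example the periodic mean solves m' = -m + sin(2*pi*t),
# whose periodic solution has m(0) = -2*pi / (1 + 4*pi**2).
print(em_periodic_mean(lambda x: x), "vs exact", -2 * np.pi / (1 + 4 * np.pi ** 2))
```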

We consider linear random coefficient regression models, where the regressors are allowed to have finite support. First, we investigate identifiability, and show that the means, variances and covariances of the random coefficients are identified from the first two conditional moments of the response given the covariates if the support of the covariates, excluding the intercept, contains a Cartesian product with at least three points in each coordinate. Next we show the variable selection consistency of the adaptive LASSO for the variances and covariances of the random coefficients in finite and moderately high dimensions. This implies that the estimated covariance matrix will actually be positive semidefinite and hence a valid covariance matrix, in contrast to the estimate arising from a simple least squares fit. We illustrate the proposed method in a simulation study.
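
A minimal sketch of the two-stage idea under assumed parameter values: estimate the coefficient means by least squares, then regress squared residuals on pairwise products of the covariates (whose coefficients encode the variances and covariances) with an adaptive LASSO implemented by the usual rescaled-design trick. Covariates take three support points per coordinate, matching the identifiability condition; tuning constants are ad hoc.

```python
import numpy as np
from itertools import combinations_with_replacement
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, d = 5000, 4
X = rng.choice([0.0, 1.0, 2.0], size=(n, d))   # three support points per coordinate
beta = np.array([1.0, 2.0, 0.0, -1.0])         # means of the random coefficients
Sigma = np.diag([1.0, 0.0, 0.0, 0.5])          # their true covariance matrix
b = rng.multivariate_normal(beta, Sigma, size=n)
y = np.einsum('ij,ij->i', X, b)

# Stage 1: coefficient means from E[Y|X] = beta'X
beta_hat = LinearRegression(fit_intercept=False).fit(X, y).coef_

# Stage 2: Var(Y|X) = X'Sigma X = sum_{j<=k} c_jk X_j X_k with c_jj = Sigma_jj
# and c_jk = 2 Sigma_jk, so regress squared residuals on pairwise products.
pairs = list(combinations_with_replacement(range(d), 2))
Z = np.column_stack([X[:, j] * X[:, k] for j, k in pairs])
r2 = (y - X @ beta_hat) ** 2

w = np.abs(LinearRegression(fit_intercept=False).fit(Z, r2).coef_)  # pilot weights
theta = Lasso(alpha=0.05, fit_intercept=False).fit(Z * w, r2).coef_
c = theta * w                                  # adaptive LASSO coefficients
Sigma_hat = np.zeros((d, d))
for (j, k), cjk in zip(pairs, c):
    Sigma_hat[j, k] = Sigma_hat[k, j] = cjk if j == k else cjk / 2
print(np.round(Sigma_hat, 2))
```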

Identifying dependency in multivariate data is a common inference task that arises in numerous applications. However, existing nonparametric independence tests typically require computation that scales at least quadratically with the sample size, making it difficult to apply them to massive data. Moreover, resampling is usually necessary to evaluate the statistical significance of the resulting test statistics at finite sample sizes, further worsening the computational burden. We introduce a scalable, resampling-free approach to testing the independence between two random vectors by breaking down the task into simple univariate tests of independence on a collection of $2\times 2$ contingency tables constructed through sequential coarse-to-fine discretization of the sample space, transforming the inference task into a multiple testing problem that can be completed with almost linear complexity with respect to the sample size. To address increasing dimensionality, we introduce a coarse-to-fine sequential adaptive procedure that exploits the spatial features of dependency structures to more effectively examine the sample space. We derive a finite-sample theory that guarantees the inferential validity of our adaptive procedure at any given sample size. In particular, we show that our approach can achieve strong control of the family-wise error rate without resampling or large-sample approximation. We demonstrate the substantial computational advantage of the procedure in comparison to existing approaches as well as its decent statistical power under various dependency scenarios through an extensive simulation study, and illustrate how the divide-and-conquer nature of the procedure can be exploited to not just test independence but to learn the nature of the underlying dependency. Finally, we demonstrate the use of our method through analyzing a large data set from a flow cytometry experiment.
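
The sketch below captures the coarse-to-fine idea for two scalar samples: Fisher exact tests on $2\times 2$ tables over a dyadic partition of the rank-transformed sample space, combined by a Bonferroni correction for family-wise control. The paper's procedure is adaptive, handles random vectors, and uses a sharper multiple-testing scheme; the depth and cell-size thresholds here are illustrative.

```python
import numpy as np
from scipy.stats import fisher_exact

def coarse_to_fine_independence(x, y, max_depth=3, alpha=0.05):
    """Independence test via Fisher tests on 2x2 tables over a
    coarse-to-fine dyadic partition, Bonferroni-corrected."""
    # rank-transform to [0, 1) so the dyadic grid splits evenly
    u = np.argsort(np.argsort(x)) / len(x)
    v = np.argsort(np.argsort(y)) / len(y)
    pvals = []
    for depth in range(max_depth):
        m = 2 ** depth
        for i in range(m):
            for j in range(m):   # cell [i/m,(i+1)/m) x [j/m,(j+1)/m)
                inx = (u >= i/m) & (u < (i+1)/m) & (v >= j/m) & (v < (j+1)/m)
                if inx.sum() < 10:
                    continue     # skip cells too sparse to test
                a = u[inx] < (i + 0.5) / m
                b = v[inx] < (j + 0.5) / m
                table = [[np.sum(a & b), np.sum(a & ~b)],
                         [np.sum(~a & b), np.sum(~a & ~b)]]
                pvals.append(fisher_exact(table)[1])
    p_adj = min(1.0, min(pvals) * len(pvals))   # Bonferroni over all tables
    return p_adj, p_adj < alpha

rng = np.random.default_rng(0)
x = rng.normal(size=800)
print(coarse_to_fine_independence(x, x**2 + 0.5 * rng.normal(size=800)))
print(coarse_to_fine_independence(x, rng.normal(size=800)))
```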

We study the existence of polynomial kernels, for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. Our main result is that a polynomial kernel for $k$-Dominating Set on graphs of twin-width at most 4 would contradict a standard complexity-theoretic assumption. The reduction is quite involved, especially to get the twin-width upper bound down to 4, and can be tweaked to work for Connected $k$-Dominating Set and Total $k$-Dominating Set (albeit with a worse upper bound on the twin-width). The $k$-Independent Set problem admits the same lower bound by a much simpler argument, previously observed [ICALP '21], which extends to $k$-Independent Dominating Set, $k$-Path, $k$-Induced Path, $k$-Induced Matching, etc. On the positive side, we obtain a simple quadratic vertex kernel for Connected $k$-Vertex Cover and Capacitated $k$-Vertex Cover on graphs of bounded twin-width. Interestingly the kernel applies to graphs of Vapnik-Chervonenkis density 1, and does not require a witness sequence. We also present a more intricate $O(k^{1.5})$ vertex kernel for Connected $k$-Vertex Cover. Finally we show that deciding if a graph has twin-width at most 1 can be done in polynomial time, and observe that most optimization/decision graph problems can be solved in polynomial time on graphs of twin-width at most 1.

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
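
A forward-pass sketch of a continuous-depth block: the hidden-state derivative is a small tanh network and a black-box adaptive solver decides how much computation to spend per input. Training via the adjoint method is not shown; the weights and sizes are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
D, H = 4, 16                                       # state and hidden widths
W1, b1 = 0.5 * rng.standard_normal((H, D + 1)), np.zeros(H)   # +1: time input
W2, b2 = 0.5 * rng.standard_normal((D, H)), np.zeros(D)

def f(t, h):
    """Parameterized derivative of the hidden state: dh/dt = f(h, t; theta)."""
    return W2 @ np.tanh(W1 @ np.append(h, t) + b1) + b2

def ode_block(h0, t0=0.0, t1=1.0, rtol=1e-5):
    """Continuous-depth layer: push h0 through the ODE with a black-box
    adaptive solver, which decides how many function evaluations to spend."""
    sol = solve_ivp(f, (t0, t1), h0, rtol=rtol, atol=1e-7)
    return sol.y[:, -1], sol.nfev

h1, nfev = ode_block(rng.standard_normal(D))
print("output:", np.round(h1, 3), "| function evals (adaptive 'depth'):", nfev)
```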

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
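
A rough sketch on logistic losses: we directly minimize the expansion mean $+\ \sqrt{2\rho\,\mathrm{Var}/n}$, which is what the robust objective equals when the variance term is active; the paper works with the convex distributionally robust formulation itself, so this direct form (which need not be convex) is only illustrative, and the data and $\rho$ are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.standard_normal((n, d))
y = np.where(X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n) > 0, 1.0, -1.0)

def losses(w):
    """Per-example logistic loss, computed stably."""
    return np.logaddexp(0.0, -y * (X @ w))

def varreg_objective(w, rho=1.0):
    """Mean loss plus sqrt(2*rho*Var/n): the expansion of the robust
    objective when the variance term is active (sketch, not the convex DRO)."""
    l = losses(w)
    return l.mean() + np.sqrt(2 * rho * l.var() / n)

w_erm = minimize(lambda w: losses(w).mean(), np.zeros(d)).x
w_var = minimize(varreg_objective, np.zeros(d)).x
for name, w in [("ERM", w_erm), ("variance-regularized", w_var)]:
    print(name, "| mean loss:", losses(w).mean().round(4),
          "| loss std:", losses(w).std().round(4))
```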
