
In this paper, we construct a quadrature scheme to numerically solve the nonlocal diffusion equation $(\mathcal{A}^\alpha+b\mathcal{I})u=f$, where $\mathcal{A}^\alpha$ is the $\alpha$-th power of the regularly accretive operator $\mathcal{A}$. Rigorous error analysis is carried out and sharp error bounds (up to some negligible constants) are obtained. The error estimates cover a wide range of cases, involving the regularity index and spectral angle of $\mathcal{A}$, the smoothness of $f$, the size of $b$, and the value of $\alpha$. The quadrature scheme is exponentially convergent with respect to the step size and root-exponentially convergent with respect to the number of solves. Some numerical tests are presented in the last section to verify the sharpness of our estimates. Furthermore, both the scheme and the error bounds can be utilized directly to solve and analyze time-dependent problems.
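
The abstract does not spell out the quadrature, so the following is only a minimal sketch, under simplifying assumptions, of the standard sinc-quadrature idea for fractional powers via the Balakrishnan integral $\mathcal{A}^{-\alpha}=\frac{\sin(\pi\alpha)}{\pi}\int_0^\infty t^{-\alpha}(t\mathcal{I}+\mathcal{A})^{-1}\,dt$, restricted to $b=0$ and a symmetric positive definite matrix; the function name `fractional_apply_sinc` and the parameters `k` (step size) and `N` (which controls the number of solves) are illustrative choices, not the authors' scheme.

```python
import numpy as np

def fractional_apply_sinc(A, f, alpha, k=0.3, N=60):
    """Sinc quadrature for u = A^{-alpha} f based on the Balakrishnan integral
        A^{-alpha} = (sin(pi*alpha)/pi) * int_0^inf t^{-alpha} (t I + A)^{-1} dt,
    after the substitution t = exp(y).  Each quadrature node costs one shifted
    linear solve, so 2N+1 solves in total."""
    n = A.shape[0]
    I = np.eye(n)
    u = np.zeros_like(f, dtype=float)
    for j in range(-N, N + 1):
        y = j * k
        u += np.exp((1.0 - alpha) * y) * np.linalg.solve(np.exp(y) * I + A, f)
    return (k * np.sin(np.pi * alpha) / np.pi) * u

# Toy test: 1D finite-difference Laplacian (symmetric positive definite),
# so A^{-alpha} f can be checked against an eigendecomposition.
n, alpha = 200, 0.5
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

u = fractional_apply_sinc(A, f, alpha)
w, V = np.linalg.eigh(A)
u_ref = V @ ((V.T @ f) / w**alpha)
print("relative error:", np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref))
```

Balancing the step size `k` against the truncation parameter `N` is what yields the root-exponential decay of the error in the total number of solves.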

Related Content

The aim of this paper is to find numerical solutions of second order linear and nonlinear differential equations with Dirichlet, Neumann and Robin boundary conditions. We approximate the solutions of second order boundary value problems by linear combinations of Bernoulli polynomials. The Bernoulli polynomials over the interval [0, 1] are chosen as trial functions, with care taken to satisfy the corresponding homogeneous form of the Dirichlet boundary conditions in the Galerkin weighted residual method. In addition, the given differential equation over an arbitrary finite domain [a, b] and its boundary conditions are converted into an equivalent form over the interval [0, 1]. All the formulas are verified on numerical examples. The approximate solutions are compared with the exact solutions and with the solutions of existing methods. Reliably good accuracy is obtained in all cases.
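
As a minimal sketch of the Galerkin weighted residual idea described above, assuming sympy and only the linear Dirichlet model problem $-u''=\pi^2\sin(\pi x)$, $u(0)=u(1)=0$ (the Neumann, Robin and nonlinear cases treated in the paper are not shown): the shifted Bernoulli polynomials $B_n(x)-B_n(0)$, $n\ge 2$, vanish at both endpoints because $B_n(1)=B_n(0)$ for $n\ge 2$, so they satisfy the homogeneous Dirichlet conditions exactly.

```python
import sympy as sp

x = sp.symbols('x')
N = 8  # number of trial functions

# Trial functions theta_n(x) = B_n(x) - B_n(0), n = 2,...,N+1, vanish at
# x = 0 and x = 1, so the homogeneous Dirichlet conditions hold exactly.
theta = [sp.bernoulli(n, x) - sp.bernoulli(n, 0) for n in range(2, N + 2)]

# Model problem: -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, exact solution sin(pi x).
f = sp.pi**2 * sp.sin(sp.pi * x)

# Galerkin weighted residual system K c = F with
# K_ij = int_0^1 theta_i' theta_j' dx,  F_i = int_0^1 f theta_i dx.
K = sp.Matrix(N, N, lambda i, j:
              sp.integrate(sp.diff(theta[i], x) * sp.diff(theta[j], x), (x, 0, 1)))
F = sp.Matrix(N, 1, lambda i, _: sp.integrate(f * theta[i], (x, 0, 1)))
c = K.LUsolve(F)

u_approx = sum(c[i] * theta[i] for i in range(N))
# Compare with the exact solution at the midpoint, sin(pi/2) = 1.
print(sp.N(u_approx.subs(x, sp.Rational(1, 2)), 12))
```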

It is well known that Empirical Risk Minimization (ERM) with squared loss may attain minimax suboptimal error rates (Birgé and Massart, 1993). The key message of this paper is that, under mild assumptions, the suboptimality of ERM must be due to large bias rather than variance. More precisely, in the bias-variance decomposition of the squared error of the ERM, the variance term necessarily enjoys the minimax rate. In the case of fixed design, we provide an elementary proof of this fact using the probabilistic method. Then, we prove this result for various models in the random design setting. In addition, we provide a simple proof of Chatterjee's admissibility theorem (Chatterjee, 2014, Theorem 1.4), which states that, in the fixed design setting, ERM cannot be ruled out as an optimal method, and we extend this result to the random design setting. We also show that our estimates imply stability of ERM, complementing the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes. Finally, we show that for non-Donsker classes, there are functions close to the ERM, yet far from being almost-minimizers of the empirical loss, highlighting the somewhat irregular nature of the loss landscape.
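
As a toy illustration (not taken from the paper) of the fixed-design bias-variance decomposition discussed above, the sketch below runs ERM with squared loss over a small finite class and estimates the bias and variance terms by Monte Carlo; the class of step functions, the design and the noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R, sigma = 50, 2000, 0.5          # design size, Monte Carlo reps, noise level
x = np.linspace(0.0, 1.0, n)         # fixed design points
f_star = np.sin(2 * np.pi * x)       # true regression function

# A small, possibly misspecified class: piecewise-constant "step" functions.
thresholds = np.linspace(0.05, 0.95, 19)
F = np.array([np.where(x < t, 1.0, -1.0) for t in thresholds])   # |F| x n

erm_fits = np.empty((R, n))
for r in range(R):
    y = f_star + sigma * rng.standard_normal(n)
    losses = np.mean((F - y) ** 2, axis=1)       # empirical squared loss of each f in F
    erm_fits[r] = F[np.argmin(losses)]           # ERM over the finite class

mean_fit = erm_fits.mean(axis=0)
total = np.mean(np.mean((erm_fits - f_star) ** 2, axis=1))   # E ||f_hat - f*||_n^2
bias2 = np.mean((mean_fit - f_star) ** 2)                    # ||E f_hat - f*||_n^2
variance = np.mean(np.mean((erm_fits - mean_fit) ** 2, axis=1))
print(f"total {total:.4f} = bias^2 {bias2:.4f} + variance {variance:.4f}")
```

Because the Monte Carlo mean of the fits is used as the centering, the printed identity total = bias² + variance holds exactly up to floating point.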

In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred. The novelty is to introduce a certain dependence between the sparsity level and the factor dimensionality, which leads to adaptive posterior concentration while keeping computational tractability. We show that the posterior distribution asymptotically concentrates on the true factor dimensionality, and more importantly, this posterior consistency is adaptive to the sparsity level of the true loading matrix and the noise variance. We also prove that the proposed Bayesian model attains the optimal detection rate of the factor dimensionality in a more general situation than those found in the literature. Moreover, we obtain a near-optimal posterior concentration rate of the covariance matrix. Numerical studies are conducted and show the superiority of the proposed method over competing approaches.
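
The model in question can be written as $y_i=\Lambda\eta_i+\epsilon_i$ with a sparse $p\times k$ loading matrix $\Lambda$. The sketch below only simulates such data and inspects the leading eigenvalues of the sample covariance as a crude check on the latent dimensionality; it is not the Bayesian procedure of the paper, whose prior and posterior computations are not reproduced here, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, n, s = 200, 5, 500, 20     # dimension, true factor number, sample size, nonzeros per column

# Sparse loading matrix: each column has s nonzero entries.
Lam = np.zeros((p, k))
for j in range(k):
    rows = rng.choice(p, size=s, replace=False)
    Lam[rows, j] = rng.normal(0.0, 2.0, size=s)

eta = rng.standard_normal((n, k))          # latent factors
eps = rng.standard_normal((n, p))          # idiosyncratic noise (unit variance)
Y = eta @ Lam.T + eps                      # observed data, n x p

# Crude dimensionality check: the k leading eigenvalues of the sample
# covariance separate from the noise bulk when the signal is strong enough.
evals = np.linalg.eigvalsh(np.cov(Y, rowvar=False))[::-1]
print("top 8 eigenvalues:", np.round(evals[:8], 2))
```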

In this paper, we present a numerical approach to solve the McKean-Vlasov equations, which are distribution-dependent stochastic differential equations, under some non-globally Lipschitz conditions for both the drift and diffusion coefficients. We establish a propagation of chaos result, based on which the McKean-Vlasov equation is approximated by an interacting particle system. A truncated Euler scheme is then proposed for the interacting particle system, allowing for a Khasminskii-type condition on the coefficients. To reduce the computational cost, the random batch approximation proposed in [Jin et al., J. Comput. Phys., 400(1), 2020] is extended to the interacting particle system, where the interaction may also appear in the diffusion term. Convergence of order almost one half is proved in the $L^p$ sense. Numerical tests are performed to verify the theoretical results.
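
A minimal sketch, for a scalar toy model and not the setting analysed in the paper, of the particle discretization combining a truncation of the superlinear drift with the random batch idea (interactions computed only within randomly reshuffled batches); the cubic drift, the interaction terms in drift and diffusion, and the truncation level are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)
Np, T, dt, pbatch = 500, 1.0, 2e-3, 10        # particles, horizon, step, batch size
sigma0 = 0.3
steps = int(T / dt)
R = dt ** -0.25                               # truncation level for the superlinear drift

X = rng.standard_normal(Np)                   # initial particles
for _ in range(steps):
    Xt = np.clip(X, -R, R)                    # truncated Euler: evaluate coefficients at the clipped state
    perm = rng.permutation(Np)                # random batch: shuffle and split
    drift = np.empty(Np)
    diff = np.empty(Np)
    for b in range(0, Np, pbatch):
        idx = perm[b:b + pbatch]
        xb = Xt[idx]
        m = (xb.sum() - xb) / (len(idx) - 1)          # in-batch mean of the other particles
        v = ((xb - xb.mean()) ** 2).mean()            # in-batch spread
        drift[idx] = -xb ** 3 + (m - xb)              # interaction in the drift
        diff[idx] = sigma0 * np.sqrt(1.0 + v)         # interaction in the diffusion
    X = X + drift * dt + diff * np.sqrt(dt) * rng.standard_normal(Np)

print("empirical mean and second moment:", X.mean(), (X ** 2).mean())
```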

In this paper, we present and analyze a linear fully discrete second order scheme with variable time steps for the phase field crystal equation. More precisely, we construct a linear adaptive time stepping scheme based on the second order backward differentiation formula (BDF2) and use the Fourier spectral method for the spatial discretization. The scalar auxiliary variable approach is employed to deal with the nonlinear term, in which we only adopt a first order method to approximate the auxiliary variable. This treatment is extremely important in the derivation of the unconditional energy stability of the proposed adaptive BDF2 scheme. However, we find for the first time that this strategy will not affect the second order accuracy of the unknown phase function $\phi^{n}$, provided the positive constant $C_{0}$ is set large enough such that $C_{0}\geq 1/\Delta t$. The energy stability of the adaptive BDF2 scheme is established under a mild constraint on the adjacent time step ratio $\gamma_{n+1}:=\Delta t_{n+1}/\Delta t_{n}\leq 4.8645$. Furthermore, a rigorous error estimate of the second order accuracy of $\phi^{n}$ is derived for the proposed scheme on the nonuniform mesh by using the uniform $H^{2}$ bound of the numerical solutions. Finally, some numerical experiments are carried out to validate the theoretical results and demonstrate the efficiency of the fully discrete adaptive BDF2 scheme.
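
The full SAV-Fourier scheme is involved, so the sketch below isolates one ingredient only: the variable-step BDF2 difference operator, applied to a scalar test ODE on a random nonuniform mesh whose adjacent step ratios stay below the threshold quoted above, with a refinement test for second order accuracy; it is not the phase field crystal solver itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def bdf2_variable(tau, u0, lam=-1.0):
    """Variable-step BDF2 for u' = lam*u, u(0) = u0, on the mesh with steps tau.
    With the ratio r = tau[n]/tau[n-1], step n+1 reads
      (1+2r)/(1+r) u^{n+1} - (1+r) u^n + r^2/(1+r) u^{n-1} = tau[n]*lam*u^{n+1}."""
    t = np.concatenate(([0.0], np.cumsum(tau)))
    u = np.empty(len(t))
    u[0] = u0
    u[1] = u0 * np.exp(lam * t[1])            # start-up value taken exact for simplicity
    for n in range(1, len(tau)):
        r = tau[n] / tau[n - 1]
        assert r <= 4.8645                    # adjacent step ratio constraint
        a = (1 + 2 * r) / (1 + r)
        u[n + 1] = ((1 + r) * u[n] - r**2 / (1 + r) * u[n - 1]) / (a - tau[n] * lam)
    return t, u

# Random nonuniform mesh on [0, 1]; halving every step keeps the ratios bounded.
tau = rng.uniform(0.5, 1.5, size=200)
tau *= 1.0 / tau.sum()
errs = []
for level in range(2):
    t, u = bdf2_variable(tau, u0=1.0)
    errs.append(np.max(np.abs(u - np.exp(-t))))
    tau = np.repeat(tau / 2.0, 2)             # refine: split each step in half
print("error ratio (expect ~4 for second order):", errs[0] / errs[1])
```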

Generative data augmentation, which scales datasets by obtaining fake labeled examples from a trained conditional generative model, boosts classification performance in various learning tasks including (semi-)supervised learning, few-shot learning, and adversarially robust learning. However, little work has theoretically investigated the effect of generative data augmentation. To fill this gap, we establish a general stability bound for this non-independent and identically distributed (non-i.i.d.) setting, where the learned distribution depends on the original train set and is generally not the same as the true distribution. Our theoretical result includes the divergence between the learned distribution and the true distribution. It shows that generative data augmentation can enjoy a faster learning rate when the order of the divergence term is $o(\max(\log(m)\beta_m, 1/\sqrt{m}))$, where $m$ is the train set size and $\beta_m$ is the corresponding stability constant. We further specify the learning setup to the Gaussian mixture model and generative adversarial nets. We prove that in both cases, though generative data augmentation does not enjoy a faster learning rate, it can improve the learning guarantees at a constant level when the train set is small, which is significant when severe overfitting occurs. Simulation results on the Gaussian mixture model and empirical results on generative adversarial nets support our theoretical conclusions. Our code is available at //github.com/ML-GSAI/Understanding-GDA.
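
A toy sketch in the spirit of the Gaussian mixture analysis (not its exact setup): fit class-conditional Gaussians on a small train set, draw fake labeled examples from the fitted model, and compare the test error of a plug-in mean-difference classifier with and without augmentation; the dimensions, sample sizes and number of fake points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, m_fake, n_test = 20, 30, 300, 20000
mu = np.ones(d) / np.sqrt(d)                 # class +1 mean; class -1 mean is -mu

def sample(n, rng):
    y = rng.choice([-1.0, 1.0], size=n)
    X = y[:, None] * mu + rng.standard_normal((n, d))
    return X, y

def error(w, X, y):
    return np.mean(np.sign(X @ w) != y)

X, y = sample(m, rng)                        # small real train set
w_erm = (y[:, None] * X).mean(axis=0)        # plug-in mean-difference classifier

# Generative augmentation: fit class-conditional Gaussians N(+/- w_erm, I)
# and draw fake labeled examples from the fitted model.
y_fake = rng.choice([-1.0, 1.0], size=m_fake)
X_fake = y_fake[:, None] * w_erm + rng.standard_normal((m_fake, d))
X_aug = np.vstack([X, X_fake])
y_aug = np.concatenate([y, y_fake])
w_aug = (y_aug[:, None] * X_aug).mean(axis=0)

X_te, y_te = sample(n_test, rng)
print("test error without augmentation:", error(w_erm, X_te, y_te))
print("test error with augmentation:   ", error(w_aug, X_te, y_te))
```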

This paper provides a comprehensive error analysis of learning with vector-valued random features (RF). The theory is developed for RF ridge regression in a fully general infinite-dimensional input-output setting, but nonetheless applies to, and improves upon, existing finite-dimensional analyses. In contrast to comparable work in the literature, the approach proposed here relies on a direct analysis of the underlying risk functional and completely avoids the explicit RF ridge regression solution formula in terms of random matrices. This removes the need for concentration results in random matrix theory or their generalizations to random operators. The main results established in this paper include strong consistency of vector-valued RF estimators under model misspecification and minimax optimal convergence rates in the well-specified setting. The parameter complexity (number of random features) and sample complexity (number of labeled data) required to achieve such rates are comparable with Monte Carlo intuition and free from logarithmic factors.
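
The abstract concerns the infinite-dimensional theory, but a concrete finite-dimensional instance of the estimator may help: random Fourier features combined with a multi-output (vector-valued) ridge regression, computed here via the explicit regularized least-squares solve; the target map, kernel bandwidth, feature count and regularization strength are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, q, M, lam, bw = 500, 3, 2, 300, 1e-3, 1.0   # samples, input/output dims, features, ridge, bandwidth

# Target: a smooth vector-valued map R^3 -> R^2, observed with noise.
def target(X):
    return np.stack([np.sin(X.sum(axis=1)), np.cos(X[:, 0] - X[:, 1])], axis=1)

X = rng.uniform(-1, 1, size=(n, d))
Y = target(X) + 0.05 * rng.standard_normal((n, q))

# Random Fourier features approximating a Gaussian kernel of bandwidth bw.
W = rng.standard_normal((d, M)) / bw
b = rng.uniform(0, 2 * np.pi, size=M)
feat = lambda X: np.sqrt(2.0 / M) * np.cos(X @ W + b)

# RF ridge regression: one linear solve shared by all q output components.
Phi = feat(X)
C = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(M), Phi.T @ Y)   # M x q coefficients

X_te = rng.uniform(-1, 1, size=(2000, d))
pred = feat(X_te) @ C
err = np.linalg.norm(pred - target(X_te)) / np.linalg.norm(target(X_te))
print("relative test error:", err)
```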

In this paper, we consider point sets of finite Desarguesian planes whose multisets of intersection numbers with lines are the same for all but one exceptional parallel class of lines. We call such sets regular of affine type. When the lines of the exceptional parallel class have the same intersection numbers, we call these sets regular of pointed type. Classical examples include unitals; a detailed study and constructions of such sets with few intersection numbers are due to Hirschfeld and Szőnyi from 1991. Here we provide some general construction methods for regular sets and describe a few infinite families. The members of one of these families have the size of a unital and meet affine lines of $\mathrm{PG}(2, q^2)$ in one of $4$ possible intersection numbers, each of them congruent to $1$ modulo $\sqrt{q}$. As a byproduct, we determine the intersection sizes of the Hermitian curve defined over $\mathrm{GF}(q^2)$ with suitable rational curves of degree $\sqrt{q}$, and we obtain $\sqrt{q}$-divisible codes with $5$ non-zero weights. We also determine the weight enumerator of the codes arising from the general constructions, modulo some powers of $q$.
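
As a concrete sanity check (not one of the paper's constructions), the sketch below enumerates the classical Hermitian unital $x^{q+1}+y^{q+1}+z^{q+1}=0$ in $\mathrm{PG}(2,q^2)$ for $q=2$ and tallies how often each line-intersection size occurs; arithmetic in $\mathrm{GF}(4)$ is implemented by hand with the modulus $x^2+x+1$.

```python
from collections import Counter
from itertools import product

q = 2                      # ground field GF(q); the plane is PG(2, q^2) = PG(2, 4)

def gf4_mul(a, b):
    """Multiply two GF(4) elements written as 2-bit polynomials over GF(2),
    reducing modulo the irreducible polynomial x^2 + x + 1 (binary 0b111)."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    for i in (3, 2):
        if p & (1 << i):
            p ^= 0b111 << (i - 2)
    return p

def gf4_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf4_mul(r, a)
    return r

def proj_points():
    """Projective points of PG(2,4): nonzero triples with first nonzero entry 1."""
    pts = []
    for v in product(range(4), repeat=3):
        nz = next((c for c in v if c != 0), 0)
        if nz == 1:
            pts.append(v)
    return pts

points = proj_points()                     # 21 points; lines use the same triples by duality
unital = [p for p in points
          if gf4_pow(p[0], q + 1) ^ gf4_pow(p[1], q + 1) ^ gf4_pow(p[2], q + 1) == 0]
print("unital size:", len(unital))         # expect q^3 + 1 = 9

sizes = Counter()
for a, b, c in points:                     # each triple also encodes a line [a:b:c]
    on_line = sum(1 for x, y, z in unital
                  if gf4_mul(a, x) ^ gf4_mul(b, y) ^ gf4_mul(c, z) == 0)
    sizes[on_line] += 1
print("line intersection sizes:", dict(sizes))   # expect 9 tangent lines (size 1), 12 secants (size 3)
```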

This paper introduces a matrix quantile factor model for matrix-valued data with a low-rank structure. We estimate the row and column factor spaces by minimizing the empirical check loss function over all panels. We show the estimates converge at rate $1/\min\{\sqrt{p_1p_2}, \sqrt{p_2T}, \sqrt{p_1T}\}$ in average Frobenius norm, where $p_1$, $p_2$ and $T$ are the row dimensionality, column dimensionality and length of the matrix sequence. This rate is faster than that of the quantile estimates obtained by "flattening" the matrix model into a large vector model. Smoothed estimates are given and their central limit theorems are derived under some mild conditions. We provide three consistent criteria to determine the pair of row and column factor numbers. Extensive simulation studies and an empirical study justify our theory.
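
To make the estimating criterion concrete, the naive sketch below generates a rank-one matrix factor sequence, defines a smoothed check loss, and minimizes it directly with a general-purpose optimizer over the row loading, column loading and factor series; this brute-force minimization and the particular smoothing are only illustrations, not the estimation algorithm or the smoothing scheme of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
p1, p2, T, tau, h = 6, 5, 40, 0.5, 0.05      # row/col dims, length, quantile level, smoothing

# Rank-one matrix factor model: Y_t = r f_t c' + E_t.
r_true = rng.normal(size=p1)
c_true = rng.normal(size=p2)
f_true = rng.normal(size=T)
Y = np.einsum('i,t,j->tij', r_true, f_true, c_true) + rng.standard_normal((T, p1, p2))

def smoothed_check(u):
    # check loss rho_tau(u) = (tau - 0.5) u + |u|/2, with |u| smoothed as sqrt(u^2 + h^2)
    return (tau - 0.5) * u + 0.5 * np.sqrt(u ** 2 + h ** 2)

def objective(theta):
    r = theta[:p1]
    c = theta[p1:p1 + p2]
    f = theta[p1 + p2:]
    resid = Y - np.einsum('i,t,j->tij', r, f, c)
    return smoothed_check(resid).mean()

theta0 = rng.normal(size=p1 + p2 + T)
res = minimize(objective, theta0, method='L-BFGS-B')
r_hat = res.x[:p1]
# Compare the estimated and true row loading directions (up to sign and scale).
cos = abs(r_hat @ r_true) / (np.linalg.norm(r_hat) * np.linalg.norm(r_true))
print("cosine similarity of row loading directions:", round(cos, 3))
```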

In Bayesian inference, the approximation of integrals of the form $\psi = \mathbb{E}_{F}[l(X)] = \int_{\mathcal{X}} l(\mathbf{x}) \, dF(\mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. The existing strategies for evidence estimation are classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve a $O(n^{-4})$ rate of convergence, faster than the usual rate given by the ergodic central limit theorem. We provide a brief overview of the literature on Riemann sum estimators and on the nested sampling algorithm and its connections to vertical likelihood Monte Carlo. We provide theoretical and numerical arguments to show how merging these two ideas may result in improved and more robust estimators for evidence estimation, especially in higher dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve, which avoids the problem of intractable $\Lambda$ functions that are essential for the vertical representation and nested sampling.
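
To make the connection concrete, here is a toy nested sampling run on a problem with a tractable answer: a standard Gaussian likelihood with a uniform prior on a box, where the evidence can be checked directly; the prior-mass schedule $X_i\approx e^{-i/n}$ and the simple quadrature over the pairs $(X_i, L_i)$ follow the standard textbook presentation rather than the specific Yakowitz-type estimator analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
d, n_live, n_iter, half = 2, 100, 1000, 5.0

def loglike(x):
    # standard Gaussian likelihood in d dimensions
    return -0.5 * np.sum(x ** 2, axis=-1) - 0.5 * d * np.log(2 * np.pi)

def prior_draw(size):
    return rng.uniform(-half, half, size=(size, d))

# True evidence: prior density (2*half)^(-d) times the Gaussian integral (~1).
Z_true = (2 * half) ** (-d)

live = prior_draw(n_live)
live_ll = loglike(live)
X_prev, Z = 1.0, 0.0

for i in range(1, n_iter + 1):
    worst = np.argmin(live_ll)
    L_i, X_i = np.exp(live_ll[worst]), np.exp(-i / n_live)   # prior-mass schedule X_i ~ e^{-i/n}
    Z += L_i * (X_prev - X_i)                                # Riemann-sum contribution
    X_prev = X_i
    # Replace the worst live point by a prior draw with higher likelihood
    # (naive rejection sampling; fine for this 2-D toy problem).
    while True:
        cand = prior_draw(1000)
        ok = loglike(cand) > live_ll[worst]
        if ok.any():
            j = np.argmax(ok)
            live[worst], live_ll[worst] = cand[j], loglike(cand[j])
            break

Z += X_prev * np.mean(np.exp(live_ll))    # remaining mass carried by the live points
print("estimated evidence:", Z, " true evidence:", Z_true)
```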
