
We establish optimal error bounds for the exponential wave integrator (EWI) applied to the nonlinear Schr\"odinger equation (NLSE) with $ L^\infty $-potential and/or locally Lipschitz nonlinearity under the assumption of an $ H^2 $-solution of the NLSE. For the semi-discretization in time by the first-order Gautschi-type EWI, we prove an optimal $ L^2 $-error bound of $ O(\tau) $, with $ \tau>0 $ being the time step size, together with a uniform $ H^2 $-bound of the numerical solution. For the full discretization obtained by using the Fourier spectral method in space, we prove an optimal $ L^2 $-error bound of $ O(\tau + h^2) $ without any coupling condition between $ \tau $ and $ h $, where $ h>0 $ is the mesh size. In addition, for a $ W^{1, 4} $-potential and slightly stronger regularity of the nonlinearity, under the assumption of an $ H^3 $-solution, we obtain an optimal $ H^1 $-error bound. Furthermore, when the potential is of low regularity but the nonlinearity is sufficiently smooth, we propose an extended Fourier pseudospectral method which has the same error bound as the Fourier spectral method while its computational cost is comparable to that of the standard Fourier pseudospectral method. Our new error bounds greatly improve the existing results for the NLSE with low regularity potential and/or nonlinearity. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
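As a concrete illustration (not the authors' code), one first-order exponential-integrator step of the Gautschi type for the NLSE $ i\partial_t\psi = -\partial_{xx}\psi + V\psi + |\psi|^2\psi $ with Fourier spectral discretization can be sketched as follows; the cubic nonlinearity, constant potential, and initial data are hypothetical choices for the sketch.

```python
import numpy as np

def phi1(z):
    # phi_1(z) = (e^z - 1)/z, with the removable singularity phi_1(0) = 1
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-8
    out[nz] = (np.exp(z[nz]) - 1.0) / z[nz]
    return out

def ewi_step(psi, V, k2, tau):
    """One first-order EWI step for i psi_t = -psi_xx + V psi + |psi|^2 psi
    (periodic domain, Fourier spectral in space): freeze the nonlinear term
    at t_n and integrate Duhamel's formula exactly."""
    G = V * psi + np.abs(psi) ** 2 * psi        # potential + nonlinear term at t_n
    psi_hat, G_hat = np.fft.fft(psi), np.fft.fft(G)
    z = -1j * k2 * tau
    psi_hat_new = np.exp(z) * psi_hat - 1j * tau * phi1(z) * G_hat
    return np.fft.ifft(psi_hat_new)

# hypothetical setup: torus [0, 2*pi), smooth initial data, constant potential
N, tau = 128, 1e-3
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)                # integer wave numbers
k2 = k ** 2
psi = np.exp(np.sin(x)) * np.exp(1j * np.cos(x))
for _ in range(100):
    psi = ewi_step(psi, 1.0, k2, tau)
```

The scheme is explicit in the nonlinearity, so each step costs two FFT pairs; mass is conserved only up to the $ O(\tau) $ truncation error.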

Related content

Physics-informed neural network (PINN) based solution methods for differential equations have recently shown success in a variety of scientific computing applications. Several authors have reported difficulties, however, when using PINNs to solve equations with multiscale features. The objective of the present work is to illustrate and explain the difficulty of using standard PINNs for the particular case of divergence-form elliptic partial differential equations (PDEs) with oscillatory coefficients present in the differential operator. We show that if the coefficient in the elliptic operator $a^{\epsilon}(x)$ is of the form $a(x/\epsilon)$ for a 1-periodic coercive function $a(\cdot)$, then the Frobenius norm of the neural tangent kernel (NTK) matrix associated with the loss function grows as $1/\epsilon^2$. This implies that as the separation of scales in the problem increases, training the neural network with gradient descent based methods to achieve an accurate approximation of the solution to the PDE becomes increasingly difficult. Numerical examples illustrate the stiffness of the optimization problem.
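A minimal numeric illustration of the scale separation behind this result (a sketch, not the paper's NTK computation): the derivative of $a(x/\epsilon)$ scales like $1/\epsilon$, so squared-gradient quantities such as entries of the NTK scale like $1/\epsilon^2$. The coefficient $a(y) = 2 + \sin(2\pi y)$ below is a hypothetical 1-periodic coercive example.

```python
import numpy as np

a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)     # 1-periodic, coercive (a >= 1)

def max_slope(eps, n=200000):
    # finite-difference estimate of max |d/dx a(x/eps)| on [0, 1]
    x = np.linspace(0.0, 1.0, n)
    return np.max(np.abs(np.diff(a(x / eps)))) / (x[1] - x[0])

# each decade of scale separation multiplies the maximal slope by ~10,
# consistent with the exact value 2*pi/eps for this coefficient
for eps in (0.1, 0.01):
    print(eps, max_slope(eps))
```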

We introduce a novel algorithm that converges to level-set convex viscosity solutions of high-dimensional Hamilton-Jacobi equations. The algorithm is applicable to a broad class of curvature motion PDEs, as well as a recently developed Hamilton-Jacobi equation for the Tukey depth, which is a statistical depth measure of data points. A main contribution of our work is a new monotone scheme for approximating the direction of the gradient, which allows for monotone discretizations of pure partial derivatives in the direction of, and orthogonal to, the gradient. We provide a convergence analysis of the algorithm on both regular Cartesian grids and unstructured point clouds in any dimension and present numerical experiments that demonstrate the effectiveness of the algorithm in approximating solutions of the affine flow in two dimensions and the Tukey depth measure of high-dimensional datasets such as MNIST and FashionMNIST.
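For readers unfamiliar with the Tukey (halfspace) depth mentioned above, it can be approximated directly from its statistical definition by minimizing, over directions, the fraction of data points on one side of a hyperplane. The Monte Carlo sketch below (an upper bound on the true depth, and unrelated to the paper's PDE-based scheme) uses hypothetical Gaussian data.

```python
import numpy as np

def tukey_depth_mc(x, data, n_dirs=500, rng=None):
    """Monte Carlo upper bound for the Tukey (halfspace) depth of x:
    minimize, over random unit directions u, the fraction of data points
    with <X_i - x, u> >= 0."""
    rng = np.random.default_rng(rng)
    d = data.shape[1]
    u = rng.standard_normal((n_dirs, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (data - x) @ u.T                     # shape (n_points, n_dirs)
    frac = np.mean(proj >= 0, axis=0)
    return float(np.min(frac))

rng = np.random.default_rng(0)
data = rng.standard_normal((2000, 2))
center_depth = tukey_depth_mc(np.zeros(2), data, rng=1)       # deep point, ~0.5
edge_depth = tukey_depth_mc(np.array([3.0, 0.0]), data, rng=1)  # shallow point
```

Deep points (near the center of the data cloud) have depth close to 1/2, outlying points close to 0; this monotone structure is what the Hamilton-Jacobi formulation exploits in high dimensions.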

In multiple hypotheses testing it has become widely popular to make inference on the true discovery proportion (TDP) of a set $\mathcal{M}$ of null hypotheses. This approach is useful for several application fields, such as neuroimaging and genomics. Several procedures to compute simultaneous lower confidence bounds for the TDP have been suggested in prior literature. Simultaneity allows for post-hoc selection of $\mathcal{M}$. If sets of interest are specified a priori, it is possible to gain power by removing the simultaneity requirement. We present an approach to compute lower confidence bounds for the TDP if the set of null hypotheses is defined a priori. The proposed method determines the bounds using the exact distribution of the number of rejections based on a step-up multiple testing procedure under independence assumptions. We assess robustness properties of our procedure and apply it to real data from the field of functional magnetic resonance imaging.
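To fix ideas on the step-up ingredient (a generic sketch, not the paper's bound construction): a step-up procedure with critical values $c_1 \le \dots \le c_m$ rejects the $k$ smallest p-values, where $k$ is the largest index with $p_{(k)} \le c_k$. The Benjamini-Hochberg critical values and the p-values below are illustrative.

```python
import numpy as np

def stepup_rejections(p, crit):
    """Number of rejections of a step-up test with sorted critical values crit:
    k = max{i : p_(i) <= crit[i]}, or 0 if no such index exists."""
    p_sorted = np.sort(p)
    hits = np.nonzero(p_sorted <= crit)[0]
    return 0 if hits.size == 0 else int(hits[-1]) + 1

m, alpha = 10, 0.05
bh_crit = alpha * np.arange(1, m + 1) / m       # Benjamini-Hochberg critical values
p = np.array([0.001, 0.004, 0.012, 0.02, 0.3, 0.4, 0.5, 0.6, 0.7, 0.9])
r = stepup_rejections(p, bh_crit)               # rejects the 4 smallest p-values
```

Under independence, the exact distribution of this rejection count is computable, which is what the proposed method exploits to obtain non-simultaneous lower confidence bounds for the TDP.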

We consider multi-variate signals spanned by the integer shifts of a set of generating functions with distinct frequency profiles and the problem of reconstructing them from samples taken on a random periodic set. We show that such a sampling strategy succeeds with high probability provided that the density of the sampling pattern exceeds the number of frequency profiles by a logarithmic factor. The signal model includes bandlimited functions with multi-band spectra. While in this well-studied setting delicate constructions provide sampling strategies that meet the information theoretic benchmark of Shannon and Landau, the sampling pattern that we consider provides, at the price of a logarithmic oversampling factor, a simple alternative that is accompanied by favorable a priori stability margins (snug frames). More generally, we also treat bandlimited functions with arbitrary compact spectra, and different measures of its complexity and approximation rates by integer tiles. At the technical level, we elaborate on recent work on relevant sampling, with the key difference that the reconstruction guarantees that we provide hold uniformly for all signals, rather than for a subset of well-concentrated ones. This is achieved by methods of concentration of measure formulated on the Zak domain.

We consider wave propagation problems over 2-dimensional domains with piecewise-linear boundaries, possibly including scatterers. Under the assumption that the initial conditions and forcing terms are radially symmetric and compactly supported, we propose an approximation of the propagating wave as a sum of special space-time functions. Each term in this sum identifies a particular field component, modeling the result of a single reflection or diffraction effect. We describe an algorithm for identifying such components automatically, based on the domain geometry. To showcase our proposed method, we present several numerical examples, such as waves scattering off wedges and waves propagating through a room in the presence of obstacles.
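A standard geometric building block for single-reflection components of this kind is the image source: the field reflected off a straight wall looks like the field radiated by the source mirrored across that wall. A minimal sketch (hypothetical coordinates, not the paper's algorithm):

```python
import numpy as np

def reflect_point(p, a, b):
    """Mirror image of point p across the line through a and b
    (the image source for a single specular reflection off a straight wall)."""
    d = (b - a) / np.linalg.norm(b - a)         # unit vector along the wall
    q = a + d * np.dot(p - a, d)                # foot of the perpendicular from p
    return 2.0 * q - p

src = np.array([1.0, 2.0])
wall_a, wall_b = np.array([0.0, 0.0]), np.array([4.0, 0.0])
img = reflect_point(src, wall_a, wall_b)        # -> [1., -2.]
```

Iterating such reflections across the piecewise-linear boundary generates candidate components; diffraction terms off wedge tips require separate treatment.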

Neural operators, as novel neural architectures for fast approximation of the solution operators of partial differential equations (PDEs), have shown considerable promise for future scientific computing. However, the mainstream approach to training neural operators is still data-driven, which requires an expensive ground-truth dataset from various sources (e.g., solving PDE samples with conventional solvers, real-world experiments) in addition to the cost of the training stage. From a computational perspective, marrying operator learning with specific domain knowledge to solve PDEs is an essential step toward reducing dataset costs and label-free learning. We propose a novel paradigm that provides a unified framework for training neural operators and solving PDEs with the variational form, which we refer to as variational operator learning (VOL). Ritz and Galerkin approaches with finite element discretization are developed for VOL to achieve matrix-free approximation of the system functional and residual; direct minimization and iterative update are then proposed as two optimization strategies for VOL. Various types of experiments based on reasonable benchmarks involving variable heat sources, Darcy flow, and variable-stiffness elasticity are conducted to demonstrate the effectiveness of VOL. With a label-free training set and a 5-label-only shift set, VOL learns solution operators with test errors that decrease in a power law with respect to the amount of unlabeled data. To the best of the authors' knowledge, this is the first study that integrates the perspectives of the weak form and efficient iterative methods for solving sparse linear systems into the end-to-end operator learning task.
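The Ritz ingredient can be illustrated on a toy problem (a sketch under simplifying assumptions, with no neural network and not the authors' implementation): minimize the energy $ J(u) = \tfrac12\int (u')^2 - \int f u $ for $ -u'' = f $ on $(0,1)$ with homogeneous Dirichlet data, applying the P1 stiffness matrix as a stencil rather than assembling it, and using plain gradient descent as the "direct minimization" strategy.

```python
import numpy as np

def ritz_grad(u, f, h):
    """Matrix-free gradient of the discrete Ritz energy for P1 elements on a
    uniform grid: apply the stiffness stencil (-1, 2, -1)/h and subtract the
    (lumped) load h*f, without assembling any matrix."""
    g = np.zeros_like(u)
    g[1:-1] = (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h - h * f[1:-1]
    return g                                    # boundary nodes stay fixed

n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)              # so -u'' = f has u = sin(pi x)
u = np.zeros(n + 1)
for _ in range(20000):                          # direct minimization of J
    u -= 0.4 * h * ritz_grad(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))     # O(h^2) discretization error
```

In VOL the nodal vector `u` is replaced by a neural operator's output and the stencil application by efficient convolution-like operations, but the label-free principle is the same: the loss is the variational functional, not a data misfit.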

Neural ordinary differential equations (neural ODEs) are a popular family of continuous-depth deep learning models. In this work, we consider a large family of parameterized ODEs with continuous-in-time parameters, which include time-dependent neural ODEs. We derive a generalization bound for this class by a Lipschitz-based argument. By leveraging the analogy between neural ODEs and deep residual networks, our approach yields in particular a generalization bound for a class of deep residual networks. The bound involves the magnitude of the difference between successive weight matrices. We illustrate numerically how this quantity affects the generalization capability of neural networks.
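The key quantity in the bound, the magnitude of the difference between successive weight matrices, can be computed directly; the residual architecture and random "smooth-in-depth" weights below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 16, 32

def resnet(x, weights):
    """Toy deep residual network x_{k+1} = x_k + (1/depth) * W_k @ relu(x_k),
    the discrete analogue of a time-dependent neural ODE."""
    for W in weights:
        x = x + (1.0 / len(weights)) * W @ np.maximum(x, 0.0)
    return x

# weights that vary smoothly with depth: successive matrices differ by O(1/depth)
base = rng.standard_normal((width, width)) / np.sqrt(width)
drift = rng.standard_normal((width, width)) / np.sqrt(width)
weights = [base + (k / depth) * drift for k in range(depth)]

# total variation of the weights across layers (the quantity in the bound)
tv = sum(np.linalg.norm(weights[k + 1] - weights[k]) for k in range(depth - 1))
y = resnet(rng.standard_normal(width), weights)
```

For weights sampled from a smooth curve in parameter space, `tv` stays bounded as the depth grows, which is the regime where a Lipschitz-based generalization bound remains informative.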

The area under the receiver-operating characteristic curve (AUC) has become a popular index not only for measuring the overall prediction capacity of a marker but also for measuring the association strength between continuous and binary variables. In the current study, it has been used for comparing the association size of four different interventions involving impulsive decision making, studied through an animal model in which each animal provides several negative (pre-treatment) and positive (post-treatment) measures. The problem of fully comparing the average AUCs therefore arises in a natural way. We construct an analysis of variance (ANOVA) type test for testing the equality of the impact of these treatments, measured through the respective AUCs, while accounting for the random effect represented by the animal. The use (and development) of a post-hoc Tukey's HSD type test is also considered. We explore the finite-sample behavior of our proposal via Monte Carlo simulations and analyze the data generated from the original problem. An R package implementing the procedures is provided as supplementary material.
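The empirical AUC underlying such comparisons is the normalized Mann-Whitney statistic, the fraction of (negative, positive) pairs in which the positive measure is larger, counting ties as one half. A minimal sketch with hypothetical pre-/post-treatment measures for one animal (the paper itself works in R):

```python
import numpy as np

def auc(neg, pos):
    """Empirical AUC = P(pos > neg) over all pairs, ties counted as 1/2
    (the normalized Mann-Whitney U statistic)."""
    diff = pos[:, None] - neg[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))

pre = np.array([1.0, 2.0, 3.0, 4.0])            # negative (pre-treatment) measures
post = np.array([2.5, 3.5, 4.5, 5.0])           # positive (post-treatment) measures
print(auc(pre, post))                            # -> 0.8125
```

An AUC of 1/2 indicates no association; the ANOVA-type test compares such per-animal AUCs averaged within each treatment.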

The Reynolds equation, combined with the Elrod algorithm for including the effect of cavitation, resembles a nonlinear convection-diffusion-reaction (CDR) equation. Its solution by finite elements is prone to oscillations in convection-dominated regions, which are present whenever cavitation occurs. We propose a stabilized finite-element method that is based on the variational multiscale method and exploits the concept of orthogonal subgrid scales. We demonstrate that this approach only requires one additional term in the weak form to obtain a stable method that converges optimally when performing mesh refinement.
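The effect of stabilization on a convection-dominated problem can be seen already in a classical 1D analogue (a sketch: plain streamline-type artificial diffusion on $ -\epsilon u'' + b u' = 0 $, $ u(0)=0 $, $ u(1)=1 $, not the paper's variational multiscale / orthogonal subgrid scale formulation):

```python
import numpy as np

def solve_cdr(eps, b, n, tau=0.0):
    """P1 Galerkin for -eps u'' + b u' = 0, u(0)=0, u(1)=1, on n cells.
    A streamline stabilization parameter tau adds b^2*tau to the diffusion."""
    h = 1.0 / n
    e = eps + b * b * tau                       # stabilized diffusion coefficient
    lower = -e / h - b / 2.0                    # 3-point stencil of the weak form
    diag = 2.0 * e / h
    upper = -e / h + b / 2.0
    A = (np.diag(np.full(n - 1, diag))
         + np.diag(np.full(n - 2, lower), -1)
         + np.diag(np.full(n - 2, upper), 1))
    rhs = np.zeros(n - 1)
    rhs[-1] = -upper                            # enforce the boundary value u(1) = 1
    return np.linalg.solve(A, rhs)

eps, b, n = 1e-3, 1.0, 32
u_gal = solve_cdr(eps, b, n)                    # plain Galerkin: oscillates badly
u_stab = solve_cdr(eps, b, n, tau=(1.0 / n) / (2.0 * b))  # stabilized: monotone
```

With cell Peclet number $ bh/2\epsilon \approx 16 $, the plain Galerkin solution oscillates with O(1) amplitude, while the stabilized one stays within the physical bounds $[0,1]$; this is the pathology that the proposed method removes for the cavitated Reynolds equation.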

This manuscript studies the numerical solution of the time-fractional Burgers-Huxley equation in a reproducing kernel Hilbert space. The analytical solution of the equation is obtained in terms of a convergent series with easily computable components. It is observed that the approximate solution uniformly converges to the exact solution for the aforementioned equation. Also, the convergence of the proposed method is investigated. Numerical examples are given to demonstrate the validity and applicability of the presented method. The numerical results indicate that the proposed method is powerful and effective with a small computational overhead.
