
In this work, the z-transform is used to analyze time-discrete solutions of Volterra integrodifferential equations (VIDEs) with nonsmooth multi-term kernels in a Hilbert space; this class of continuous problems was first considered and analyzed by Hannsgen and Wheeler (SIAM J Math Anal 15 (1984) 579-594). We discuss three cases of the kernels $\beta_q(t)$ appearing in the integrals of the multi-term VIDEs and apply a corresponding numerical technique in each case. Firstly, for $\beta_1(t), \beta_2(t) \in \mathrm{L}_1(\mathbb{R}_+)$, the Crank-Nicolson (CN) method and an interpolation quadrature (IQ) rule are applied to obtain time-discrete solutions of the multi-term VIDEs; secondly, for $\beta_1(t)\in \mathrm{L}_1(\mathbb{R}_+)$ and $\beta_2(t)\in \mathrm{L}_{1,\text{loc}}(\mathbb{R}_+)$, the second-order backward differentiation formula (BDF2) and second-order convolution quadrature (CQ) are employed to discretize the multi-term problem in time; thirdly, for $\beta_1(t), \beta_2(t)\in \mathrm{L}_{1,\text{loc}}(\mathbb{R}_+)$, we utilize the CN method and the trapezoidal CQ (TCQ) rule to discretize the multi-term problem in time. For the discrete solutions in all three cases, long-time global stability and convergence are proved based on the z-transform and certain appropriate assumptions. Furthermore, the long-time estimate for the third case is confirmed by numerical tests.
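To make the convolution-quadrature ingredient of the second case concrete, the sketch below (our illustration, not the paper's scheme) computes BDF2 CQ weights for the model kernel $t^{\alpha-1}/\Gamma(\alpha)$, whose Laplace transform is $s^{-\alpha}$. The weights are the Taylor coefficients of $(\delta(z)/h)^{-\alpha}$ with BDF2 generating polynomial $\delta(z)=\tfrac{3}{2}-2z+\tfrac{1}{2}z^2$, extracted by FFT on a small circle:

```python
import numpy as np

def cq_weights_bdf2(alpha, h, n):
    """First n weights of the BDF2 convolution quadrature for the kernel
    t**(alpha-1)/Gamma(alpha) (Laplace transform s**(-alpha)).
    The weights are the Taylor coefficients of (delta(z)/h)**(-alpha),
    delta(z) = 3/2 - 2z + z**2/2, extracted via FFT on a circle of radius < 1."""
    N = 4 * n                                # oversample to control aliasing
    lam = 1e-3 ** (1.0 / N)                  # contour radius
    z = lam * np.exp(2j * np.pi * np.arange(N) / N)
    vals = ((1.5 - 2.0 * z + 0.5 * z**2) / h) ** (-alpha)
    # j-th Taylor coefficient ~ fft(vals)[j] / (N * lam**j)
    coeffs = np.fft.fft(vals) / N / lam ** np.arange(N)
    return coeffs.real[:n]
```

For $\alpha = 1$ the kernel is constant and the weights reduce to ordinary BDF2 integration weights (in particular $\omega_0 = 2h/3$), which gives a cheap sanity check of the implementation.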

Related content

It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension $d$. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. An important consequence of our analysis is that the ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is much worse than previously thought, as we prove that expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$ -- exponentially in the Hilbert space dimension. Finally, we discuss how our results improve our understanding of concentration in $t$-designs.
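As a rough classical analogue of the concentration phenomenon described above, the following sketch (our own construction, much weaker than the exponential rate the abstract proves) samples Haar-random unitaries via QR of a Ginibre matrix and shows that expectation values of a fixed traceless observable spread less and less as the Hilbert space dimension $d$ grows:

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random d x d unitary: QR of a complex Ginibre matrix,
    with the R-diagonal phases fixed so the distribution is exactly Haar."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diag(r)
    return q * (ph / np.abs(ph))

def expectation_samples(d, n_samples=200, seed=0):
    """Samples of <0| U^dag O U |0> for Haar-random U and a traceless
    'Z-like' observable O; their spread shrinks as d grows."""
    rng = np.random.default_rng(seed)
    obs = np.diag([1.0] * (d // 2) + [-1.0] * (d - d // 2))
    psi = np.zeros(d); psi[0] = 1.0
    vals = []
    for _ in range(n_samples):
        phi = haar_unitary(d, rng) @ psi
        vals.append(np.real(phi.conj() @ obs @ phi))
    return np.array(vals)
```

For a Haar-random state the variance of such an expectation value scales like $1/(d+1)$, so doubling $d$ a few times already makes the concentration visible in the sample spread.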

The Noisy-SGD algorithm is widely used for privately training machine learning models. Traditional privacy analyses of this algorithm assume that the internal state is publicly revealed, resulting in privacy loss bounds that increase indefinitely with the number of iterations. However, recent findings have shown that if the internal state remains hidden, then the privacy loss might remain bounded. Nevertheless, this remarkable result heavily relies on the assumption of (strong) convexity of the loss function. It remains an important open problem to further relax this condition while proving similar convergent upper bounds on the privacy loss. In this work, we address this problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient clipping to limit the impact of individual samples on the training process. Our findings demonstrate that the privacy loss of projected DP-SGD converges exponentially fast, without requiring convexity or smoothness assumptions on the loss function. In addition, we analyze the privacy loss of regularized (unprojected) DP-SGD. To obtain these results, we directly analyze the hockey-stick divergence between coupled stochastic processes by relying on non-linear data processing inequalities.
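The gradient-clipping step that distinguishes DP-SGD from plain Noisy-SGD is easy to state concretely. Below is a minimal sketch of one update (function and parameter names are ours; production implementations differ in how per-sample gradients are obtained and how the noise scale is calibrated):

```python
import numpy as np

def dp_sgd_step(w, per_sample_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD update: clip each per-sample gradient to norm <= clip_norm,
    average the clipped gradients, add calibrated Gaussian noise, then step."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # scale down (never up) so each sample's influence is bounded
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # noise standard deviation proportional to the per-sample sensitivity
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_sample_grads), size=w.shape)
    return w - lr * (mean_grad + noise)
```

The clipping bound is exactly what makes each sample's contribution to the update bounded, which is the sensitivity control the hidden-state privacy analysis builds on.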

Solving PDEs with machine learning techniques has become a popular alternative to conventional methods. In this context, neural networks (NNs) are among the most commonly used machine learning tools, and in those models, the choice of an appropriate loss function is critical. In general, the main goal is to guarantee that minimizing the loss during training translates to minimizing the error in the solution at the same rate. In this work, we focus on the time-harmonic Maxwell's equations, whose weak formulation takes H(curl) as the space of test functions. We propose a NN in which the loss function is a computable approximation of the dual norm of the weak-form PDE residual. To that end, we employ the Helmholtz decomposition of the space H(curl) and construct an orthonormal basis for this space in two and three spatial dimensions. We use the Discrete Sine/Cosine Transform to accurately and efficiently compute the discrete version of our proposed loss function. Moreover, in the numerical examples we show a high correlation between the proposed loss function and the H(curl)-norm of the error, even in problems with low-regularity solutions.
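In one spatial dimension, the same idea -- evaluate a dual norm of a residual spectrally via a fast sine transform -- can be sketched as follows. This $H^{-1}$ setting and all names are our simplified stand-in for the paper's H(curl) construction:

```python
import numpy as np
from scipy.fft import dst

def hminus1_norm_sq(f_vals, L=np.pi):
    """Spectral dual-norm sketch on (0, L): expand f in the orthonormal sine
    basis via a DST-I and weight each coefficient by the inverse Laplacian
    eigenvalue lambda_k = (k*pi/L)**2, approximating
    ||f||_{H^-1}^2 = sum_k b_k**2 / lambda_k."""
    n = len(f_vals)
    h = L / (n + 1)                      # interior grid x_j = j*h, j = 1..n
    # b_k ~ h * sqrt(2/L) * sum_j f(x_j) sin(k*pi*x_j/L) = h*sqrt(2/L)*(DST-I)/2
    b = 0.5 * h * np.sqrt(2.0 / L) * dst(np.asarray(f_vals, float), type=1)
    k = np.arange(1, n + 1)
    return float(np.sum(b**2 / (k * np.pi / L) ** 2))
```

For $f(x)=\sin x$ on $(0,\pi)$ the only nonzero coefficient is $b_1=\sqrt{\pi/2}$ with $\lambda_1=1$, so the norm squared is $\pi/2$, which makes a convenient exactness check thanks to discrete sine orthogonality.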

The emergence of a new, open, and free instruction set architecture, RISC-V, has heralded a new era in microprocessor architectures. Starting with low-power, low-performance prototypes, the RISC-V community has a good chance of moving towards fully functional high-end microprocessors suitable for high-performance computing. Achieving progress in this direction requires comprehensive development of the software environment, namely operating systems, compilers, mathematical libraries, and approaches to performance analysis and optimization. In this paper, we analyze the performance of two available RISC-V devices when executing three memory-bound applications: the widely used STREAM benchmark, an in-place dense matrix transposition algorithm, and a Gaussian blur algorithm. We show that, compared to x86 and ARM CPUs, RISC-V devices are still expected to be inferior in terms of computation time but are very good in resource utilization. We also demonstrate that well-developed memory optimization techniques for x86 CPUs improve the performance on RISC-V CPUs. Overall, the paper shows the potential of RISC-V as an alternative architecture for high-performance computing.
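As a reference point for what such benchmarks measure, here is a minimal NumPy rendition of the STREAM "triad" kernel (an illustration of the effective-bandwidth metric, not the paper's C benchmark):

```python
import time
import numpy as np

def stream_triad(n=5_000_000, scalar=3.0, reps=5):
    """STREAM 'triad' kernel a = b + scalar * c; returns the best observed
    effective memory bandwidth in GB/s over several repetitions."""
    b = np.full(n, 1.0)
    c = np.full(n, 2.0)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        a = b + scalar * c                 # streams three arrays through memory
        best = min(best, time.perf_counter() - t0)
    bytes_moved = 3 * n * a.itemsize       # read b, read c, write a
    return bytes_moved / best / 1e9
```

Triad is memory-bound by design: one multiply-add per 24 bytes of traffic, so the reported GB/s reflects the memory subsystem rather than the arithmetic units, which is exactly why it discriminates between architectures here.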

This article proposes and analyzes the generalized weak Galerkin ({\rm g}WG) finite element method for the second order elliptic problem. A generalized discrete weak gradient operator is introduced in the weak Galerkin framework so that the {\rm g}WG method not only allows arbitrary combinations of piecewise polynomials defined in the interior and on the boundary of each local finite element, but also works on general polytopal partitions. Error estimates are established for the corresponding numerical functions in the energy norm and the usual $L^2$ norm. A series of numerical experiments is presented to demonstrate the performance of the newly proposed {\rm g}WG method.

We present a mixed finite element method with parallelogram meshes for the Kirchhoff-Love plate bending model. A critical ingredient is the construction of appropriate basis functions that are conforming in terms of a sufficiently large tensor space and allow for any kind of physically relevant Dirichlet and Neumann boundary conditions. For Dirichlet boundary conditions, and polygonal convex or non-convex plates that can be discretized by parallelogram meshes, we prove quasi-optimal convergence of the mixed scheme. Numerical results for regular and singular examples with different boundary conditions illustrate our findings.

For certain materials science scenarios arising in rubber technology, one-dimensional moving boundary problems (MBPs) with kinetic boundary conditions are capable of unveiling the large-time behavior of the diffusant penetration front, giving a direct estimate of the service life of the material. In this paper, we propose a random walk algorithm that leads to good numerical approximations of both the concentration profile and the location of the sharp front. Essentially, the proposed scheme decouples the target evolution system into two steps: (i) the ordinary differential equation corresponding to the evaluation of the speed of the moving boundary is solved via an explicit Euler method, and (ii) the associated diffusion problem is solved by a random walk method. To verify the correctness of our random walk algorithm we compare the resulting approximations to results based on a finite element approach with a controlled convergence rate. Our numerical experiments recover well the penetration depth measurements of an experimental setup targeting dense rubbers.
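A heavily simplified version of the proposed decoupling -- step (i) an explicit Euler update for the front, step (ii) a random walk for the diffusion -- might look as follows. The kinetic law, the front-concentration estimator, and all parameter values are our illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

def mbp_random_walk(T=1.0, dt=1e-3, n_walkers=20_000, D=1.0, a0=1.0, s0=0.1, seed=0):
    """Sketch of the decoupled scheme: front ODE s'(t) = a0 * c(s(t), t) by
    explicit Euler; diffusion by an unbiased random walk reflected at x = 0
    and constrained by the moving front x = s(t)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)            # walkers injected at the left boundary
    s = s0                             # initial front position
    step = np.sqrt(2.0 * D * dt)       # walk step matching diffusivity D
    for _ in range(int(T / dt)):
        # (ii) diffusion step: reflect at x = 0, cap at the current front
        x = np.abs(x + rng.choice([-step, step], size=n_walkers))
        x = np.minimum(x, s)
        # crude estimate of the concentration at the front: walker density
        # within one step of s (our stand-in for the true boundary trace)
        c_front = np.mean(x > s - step) / step
        # (i) explicit Euler step for the kinetic boundary condition
        s = s + dt * a0 * c_front
    return s, x
```

Even this crude estimator reproduces the qualitative behavior of interest: the front only advances once diffusing mass has reached it, after which its position grows monotonically.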

Under some regularity assumptions, we report an a priori error analysis of a dG scheme for the Poisson and Stokes flow problems in their dual mixed formulation. Both formulations satisfy a Babu\v{s}ka-Brezzi type condition within the space H(div) x L2. It is well known that the lowest order Crouzeix-Raviart element paired with piecewise constants satisfies such a condition on (broken) H1 x L2 spaces. In the present article, we use this pair. The continuity of the normal component is weakly imposed by penalizing jumps of the broken H(div) component. For the resulting methods, we prove well-posedness and convergence with constants independent of data and mesh size. We report error estimates in the methods' natural norms and optimal local error estimates for the divergence error. In fact, our finite element solution shares for each triangle one DOF with the CR interpolant, and the divergence is locally the best approximation for any regularity. Numerical experiments support the findings and suggest that the other errors converge optimally even for the lowest regularity solutions and a crack problem, as long as the crack is resolved by the mesh.

Originally introduced as a neural network for ensemble learning, mixture of experts (MoE) has recently become a fundamental building block of highly successful modern deep neural networks for heterogeneous data analysis in several applications, including those in machine learning, statistics, bioinformatics, economics, and medicine. Despite its popularity in practice, our understanding of the convergence behavior of Gaussian-gated MoE parameter estimation remains far from complete. The underlying reason for this challenge is the inclusion of covariates in the Gaussian gating and expert networks, which leads to their intrinsically complex interactions via partial differential equations with respect to their parameters. We address these issues by designing novel Voronoi loss functions to accurately capture heterogeneity in the maximum likelihood estimator (MLE) for resolving parameter estimation in these models. Our results reveal distinct behaviors of the MLE under two settings: the first setting is when all the location parameters in the Gaussian gating are non-zero, while the second setting is when there exists at least one zero-valued location parameter. Notably, these behaviors can be characterized by the solvability of two different systems of polynomial equations. Finally, we conduct a simulation study to verify our theoretical results.
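To fix ideas, a Gaussian-gated MoE conditional density can be written down compactly. The following one-dimensional sketch (the scalar setting, linear experts, and all parameter names are our illustrative choices, not the paper's general model) evaluates $p(y\mid x)$ with Gaussian-kernel gates and Gaussian experts:

```python
import numpy as np

def moe_conditional_density(y, x, gate_locs, gate_scale, expert_slopes,
                            expert_biases, noise_sd=1.0):
    """Gaussian-gated MoE density p(y | x): gate k weights x through a
    Gaussian kernel centered at gate_locs[k]; expert k predicts
    y ~ N(expert_slopes[k] * x + expert_biases[k], noise_sd**2)."""
    # gating weights: softmax of Gaussian log-kernels in x (covariate-dependent)
    logits = -((x - np.asarray(gate_locs)) ** 2) / (2.0 * gate_scale**2)
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()
    # expert densities: Gaussian around a linear mean in x
    means = np.asarray(expert_slopes) * x + np.asarray(expert_biases)
    comp = np.exp(-((y - means) ** 2) / (2.0 * noise_sd**2)) \
           / np.sqrt(2.0 * np.pi * noise_sd**2)
    return float(gates @ comp)
```

The covariate appears in both the gates and the expert means, which is precisely the coupling the abstract identifies as the source of the intricate parameter interactions.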

Physical models with uncertain inputs are commonly represented as parametric partial differential equations (PDEs), that is, PDEs whose inputs are expressed as functions of parameters with an associated probability distribution. Developing efficient and accurate solution strategies that account for errors on the space, time and parameter domains simultaneously is highly challenging. Indeed, it is well known that standard polynomial-based approximations on the parameter domain can incur errors that grow in time. In this work, we focus on advection-diffusion problems with parameter-dependent wind fields. A novel adaptive solution strategy is proposed that allows users to combine stochastic collocation on the parameter domain with off-the-shelf adaptive timestepping algorithms with local error control. This is a non-intrusive strategy that builds a polynomial-based surrogate that is adapted sequentially in time. The algorithm is driven by a so-called hierarchical estimator for the parametric error and balances this against an estimate for the global timestepping error which is derived from a scaling argument.
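The non-intrusive pipeline described above -- sample the parameter domain at collocation nodes, solve each sample with an off-the-shelf adaptive timestepper, then fit a polynomial surrogate -- can be sketched on a scalar stand-in model (our simplification, not the paper's advection-diffusion problem or its adaptive error balancing):

```python
import numpy as np
from scipy.integrate import solve_ivp

def collocation_surrogate(T=1.0, n_nodes=9):
    """Non-intrusive stochastic collocation sketch: the scalar model
    y'(t) = -(1 + theta) * y, y(0) = 1, stands in for the PDE
    semidiscretization, with parameter theta on [-1, 1]."""
    # Chebyshev collocation nodes on the parameter domain
    theta = np.cos(np.pi * (2 * np.arange(n_nodes) + 1) / (2 * n_nodes))
    # each sample is solved independently by an adaptive timestepper
    vals = np.array([
        solve_ivp(lambda t, y, a=1.0 + th: -a * y, (0.0, T), [1.0],
                  rtol=1e-8, atol=1e-10).y[0, -1]
        for th in theta
    ])
    # polynomial surrogate theta -> y(T; theta) by Chebyshev interpolation
    coeffs = np.polynomial.chebyshev.chebfit(theta, vals, n_nodes - 1)
    return lambda th: np.polynomial.chebyshev.chebval(th, coeffs)
```

Because the exact solution $y(T;\theta)=e^{-(1+\theta)T}$ is smooth in $\theta$, a handful of Chebyshev nodes already gives a surrogate accurate to many digits, illustrating why collocation is attractive when each sample solve is expensive.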
