
Code verification plays an important role in establishing the credibility of computational simulations by assessing the correctness of the implementation of the underlying numerical methods. In computational electromagnetics, the numerical solution to integral equations incurs multiple interacting sources of numerical error, as well as other challenges, which render traditional code-verification approaches ineffective. In this paper, we provide approaches to separately measure the numerical errors arising from these different error sources for the method-of-moments implementation of the combined-field integral equation. We demonstrate the effectiveness of these approaches for cases with and without coding errors.
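
As a minimal illustration of the observed-order-of-accuracy testing that underlies code verification (and not the paper's method-of-moments CFIE machinery), the following Python sketch measures the convergence rate of a generic discretization against a manufactured solution; the central-difference operator and the manufactured solution are illustrative stand-ins.

import numpy as np

def observed_order(errors, hs):
    """Estimate observed orders of accuracy from errors on a mesh sequence."""
    return np.log(errors[:-1] / errors[1:]) / np.log(hs[:-1] / hs[1:])

# Manufactured solution and the "code under test": a second-order
# central-difference approximation of u'(x).
u  = lambda x: np.sin(2 * np.pi * x)              # manufactured solution
du = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)  # exact derivative

hs, errors = [], []
for n in (32, 64, 128, 256):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    approx = (u(x[2:]) - u(x[:-2])) / (2 * h)     # interior central differences
    errors.append(np.max(np.abs(approx - du(x[1:-1]))))
    hs.append(h)

print(observed_order(np.array(errors), np.array(hs)))  # should approach 2.0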

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

This study investigates a class of initial-boundary value problems pertaining to the time-fractional mixed sub-diffusion and diffusion-wave equation (SDDWE). To facilitate the development of a numerical method and its analysis, the original problem is transformed into a new integro-differential model that includes Caputo derivatives and Riemann-Liouville fractional integrals with orders belonging to (0,1). By providing an a priori estimate of the solution, we establish the existence and uniqueness of a solution to the problem. We propose a second-order method to approximate the fractional Riemann-Liouville integral and employ an L2-type formula to approximate the Caputo derivative. This yields a method with second-order temporal accuracy for the considered model. We prove the unconditional stability of the proposed difference scheme. Moreover, we demonstrate the proposed method's potential for constructing and analyzing second-order L2-type numerical schemes for a broader class of time-fractional mixed SDDWEs with multi-term time-fractional derivatives. Numerical results are presented to assess the accuracy of the method and validate the theoretical findings.
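
For concreteness, here is a sketch of one standard second-order approximation of the Riemann-Liouville integral, the product-trapezoidal rule, which integrates the weakly singular kernel exactly against a piecewise-linear interpolant of f; this is an illustrative scheme, not necessarily the specific formula used in the paper.

import numpy as np
from math import gamma

def rl_integral_trap(f_vals, h, alpha):
    """Product-trapezoidal rule for the Riemann-Liouville integral
    (I^alpha f)(t_n): on each subinterval, integrate (t_n - s)^(alpha-1)
    exactly against the piecewise-linear interpolant of f."""
    n = len(f_vals) - 1
    t_n = n * h
    total = 0.0
    for j in range(n):
        u_hi, u_lo = t_n - j * h, t_n - (j + 1) * h      # u = t_n - s
        I0 = (u_hi**alpha - u_lo**alpha) / alpha
        I1 = u_hi * I0 - (u_hi**(alpha + 1) - u_lo**(alpha + 1)) / (alpha + 1)
        slope = (f_vals[j + 1] - f_vals[j]) / h
        total += f_vals[j] * I0 + slope * I1
    return total / gamma(alpha)

alpha, T = 0.4, 1.0
exact = 2 * T**(2 + alpha) / gamma(3 + alpha)   # I^alpha of f(t) = t^2
for n in (16, 32, 64):
    t = np.linspace(0.0, T, n + 1)
    err = abs(rl_integral_trap(t**2, T / n, alpha) - exact)
    print(n, err)   # errors shrink by roughly 4x per doubling (second order)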

We study the problem of estimating the convex hull of the image $f(X)\subset\mathbb{R}^n$ of a compact set $X\subset\mathbb{R}^m$ with smooth boundary through a smooth function $f:\mathbb{R}^m\to\mathbb{R}^n$. Assuming that $f$ is a submersion, we derive a new bound on the Hausdorff distance between the convex hull of $f(X)$ and the convex hull of the images $f(x_i)$ of $M$ sampled inputs $x_i$ on the boundary of $X$. When applied to the problem of geometric inference from a random sample, our results give tighter and more general error bounds than the state of the art. We present applications to the problems of robust optimization, of reachability analysis of dynamical systems, and of robust trajectory optimization under bounded uncertainty.
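
A small numerical illustration of the estimation problem, assuming the unit disk for $X$ and an arbitrary smooth test map $f$ (neither is taken from the paper): the Hausdorff distance between the two convex hulls is computed via the support-function identity $d_H(\mathrm{conv}\,A, \mathrm{conv}\,B) = \sup_{\|d\|=1} |h_A(d) - h_B(d)|$.

import numpy as np

def f(p):  # smooth test map R^2 -> R^2 (illustrative choice, a submersion on X)
    x, y = p[..., 0], p[..., 1]
    return np.stack([x + 0.3 * y**2, y + 0.2 * x**2], axis=-1)

def circle(m):  # m points on the boundary circle of the unit disk X
    th = np.linspace(0, 2 * np.pi, m, endpoint=False)
    return np.stack([np.cos(th), np.sin(th)], axis=-1)

def support(points, dirs):
    """Support function of conv(points) evaluated at each unit direction."""
    return (dirs @ points.T).max(axis=1)

dirs = circle(2000)               # unit directions for the support-function test
ref = f(circle(20000))            # dense proxy for conv(f(X))
for M in (10, 20, 40, 80):
    approx = f(circle(M))         # hull of M sampled boundary images
    d_h = np.abs(support(ref, dirs) - support(approx, dirs)).max()
    print(M, d_h)                 # error should shrink rapidly as M grows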

In this paper, we present a coded computation (CC) scheme for distributed computation of the inference phase of machine learning (ML) tasks, specifically, the task of image classification. Building upon Agrawal et al. (2022), the proposed scheme combines the strengths of deep learning and the Lagrange interpolation technique to mitigate the effect of straggling workers, and recovers approximate results with reasonable accuracy using outputs from any $R$ out of $N$ workers, where $R\leq N$. Our proposed scheme guarantees a minimum recovery threshold $R$ for non-polynomial problems, which can be adjusted as a tunable parameter in the system. Moreover, unlike existing schemes, our scheme maintains flexibility with respect to worker availability and system design. We propose two system designs for our CC scheme that allow flexibility in distributing the computational load between the master and the workers based on the accessibility of the input data. Our experimental results demonstrate the superiority of our scheme compared to state-of-the-art CC schemes for image classification tasks, and pave the way for designing new schemes for distributed computation of general ML classification tasks.
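
Below is the exact, polynomial core of Lagrange coded computing, which schemes of this kind generalize approximately to non-polynomial ML inference; the toy function f, evaluation points, and sizes are hypothetical choices for illustration.

import numpy as np
from scipy.interpolate import lagrange

f = lambda x: x**2 + 1.0          # toy worker computation (degree-2 polynomial)
K, deg_f = 3, 2                   # number of data points, degree of f
R = (K - 1) * deg_f + 1           # recovery threshold for exact decoding
N = 8                             # total workers (tolerates N - R stragglers)

rng = np.random.default_rng(0)
data = np.array([1.0, 4.0, 2.0])           # inputs X_1..X_K
betas = np.arange(K, dtype=float)          # interpolation points for the data
alphas = np.arange(K, K + N, dtype=float)  # evaluation points for the workers

u = lagrange(betas, data)                  # data polynomial: u(beta_j) = X_j
shares = u(alphas)                         # coded inputs sent to the N workers
results = f(shares)                        # worker i returns f(u(alpha_i))

survivors = rng.choice(N, size=R, replace=False)    # any R responses suffice
g = lagrange(alphas[survivors], results[survivors]) # interpolate f(u(z))
print(g(betas), f(data))                   # decoded f(X_j) matches direct f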

Tensor contraction operations in computational chemistry consume significant fractions of computing time on large-scale computing platforms. The widespread use of tensor contractions between large multi-dimensional tensors in describing electronic structure theory has motivated the development of multiple tensor algebra frameworks targeting heterogeneous computing platforms. In this paper, we present Tensor Algebra for Many-body Methods (TAMM), a framework for productive and performance-portable development of scalable computational chemistry methods. The TAMM framework decouples the specification of the computation and the execution of these operations on available high-performance computing systems. With this design choice, the scientific application developers (domain scientists) can focus on the algorithmic requirements using the tensor algebra interface provided by TAMM whereas high-performance computing developers can focus on various optimizations on the underlying constructs such as efficient data distribution, optimized scheduling algorithms, and efficient use of intra-node resources (e.g., GPUs). The modular structure of TAMM allows it to be extended to support different hardware architectures and incorporate new algorithmic advances. We describe the TAMM framework and our approach to sustainable development of tensor contraction-based methods in computational chemistry applications. We present case studies that highlight the ease of use as well as the performance and productivity gains compared to other implementations.
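
As an illustration of the kind of operation such frameworks abstract (this is plain NumPy, not TAMM's interface), the contraction below is written once as an algebraic specification and once in the grouped-index matrix-multiply form an execution layer would dispatch to an optimized GEMM on CPU or GPU.

import numpy as np

# A CCSD-like two-body contraction:
#   r[i,j,a,b] = sum_{c,d} t[i,j,c,d] * v[a,b,c,d]
no, nv = 8, 20                       # occupied / virtual orbital counts (toy sizes)
t = np.random.rand(no, no, nv, nv)   # T2-style amplitudes
v = np.random.rand(nv, nv, nv, nv)   # two-electron integrals (virtual block)

# Algebraic specification, close to what a domain scientist writes:
r = np.einsum("ijcd,abcd->ijab", t, v)

# The same contraction as an explicit matrix multiply over grouped indices,
# the form an execution layer would hand to a tuned GEMM kernel:
r2 = (t.reshape(no * no, nv * nv)
      @ v.reshape(nv * nv, nv * nv).T).reshape(no, no, nv, nv)
print(np.allclose(r, r2))            # True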

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and suitably bounded. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example with fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$ for which the order of convergence is $H + 1/2$, hence lower than the attained order 1 in the examples above, but still higher than the order $H$ of convergence expected from previous works.
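
A quick empirical check of the order-1 claim for one of the simplest covered cases, additive Wiener noise in a linear RODE; the equation, step counts, and sample sizes below are illustrative choices, with a fine-grid Euler run standing in for the exact solution.

import numpy as np

rng = np.random.default_rng(0)
T, n_ref, n_paths = 1.0, 2**12, 100

errs = {m: 0.0 for m in (16, 32, 64, 128)}
for _ in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(T / n_ref), n_ref)
    W = np.concatenate([[0.0], np.cumsum(dW)])   # Wiener path on the fine grid

    def euler(n_steps):              # Euler for dX/dt = -X + W_t, X_0 = 1
        h, skip, x = T / n_steps, n_ref // n_steps, 1.0
        for k in range(n_steps):
            x += h * (-x + W[k * skip])
        return x

    x_ref = euler(n_ref)             # fine-grid proxy for the exact solution
    for m in errs:
        errs[m] += abs(euler(m) - x_ref) / n_paths

ms = sorted(errs)
for a, b in zip(ms, ms[1:]):         # observed strong order between levels
    print(np.log2(errs[a] / errs[b]))  # close to 1.0, not the classical 0.5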

This work focuses on the temporal average of the backward Euler--Maruyama (BEM) method, which is used to approximate the ergodic limit of stochastic ordinary differential equations with super-linearly growing drift coefficients. We give the central limit theorem (CLT) of the temporal average, which characterizes the asymptotics in distribution of the temporal average. When the deviation order is smaller than the optimal strong order, we directly derive the CLT of the temporal average from that of the original equations and the uniform strong order of the BEM method. For the case where the deviation order equals the optimal strong order, the CLT is established via the Poisson equation associated with the generator of the original equations. Numerical experiments are performed to illustrate the theoretical results.
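
A sketch of the BEM scheme itself for a typical super-linearly growing drift, $f(x) = x - x^3$ (an illustrative choice, not taken from the paper): the implicit step is solved by Newton's method, and the temporal average estimates the ergodic limit that the CLT describes; an explicit Euler--Maruyama step could blow up for such drifts.

import numpy as np

rng = np.random.default_rng(1)

def bem_step(x, h, dw, newton_iters=8):
    """One backward Euler-Maruyama step for dX = (X - X^3) dt + dW:
    solve y - h*(y - y**3) = x + dw for y by Newton's method
    (the derivative 1 - h + 3*h*y**2 is positive for h < 1)."""
    c, y = x + dw, x
    for _ in range(newton_iters):
        y -= (y - h * (y - y**3) - c) / (1.0 - h + 3.0 * h * y**2)
    return y

h, n_steps, x, sq = 0.01, 200_000, 0.0, 0.0
for n in range(n_steps):
    x = bem_step(x, h, rng.normal(0.0, np.sqrt(h)))
    sq += x**2

print(sq / n_steps)   # temporal average of X^2: estimate of the ergodic limit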

Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Through multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated with sleep and general anaesthesia are not in general characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We examine here the properties of the wavelet quantile normalization (WQN) algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artifacts, and we compare it to traditional thresholding algorithms. Furthermore, we show that the algorithm's performance is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples.
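
A simplified sketch of the quantile-remapping idea behind WQN, assuming PyWavelets and a synthetic transient artifact; the exact transport map and windowing of the published algorithm are not reproduced here, only the scale-by-scale, attenuate-toward-reference behaviour.

import numpy as np
import pywt

def wqn(artifacted, reference, wavelet="sym5", levels=5):
    """Sketch of wavelet quantile normalization: at each detail scale, remap
    the magnitudes of the artifacted coefficients onto the empirical quantiles
    of the reference coefficients, keeping signs and only attenuating."""
    coeffs_art = pywt.wavedec(artifacted, wavelet, level=levels)
    coeffs_ref = pywt.wavedec(reference, wavelet, level=levels)
    out = [coeffs_art[0]]                           # keep approximation band
    for ca, cr in zip(coeffs_art[1:], coeffs_ref[1:]):
        ranks = np.argsort(np.argsort(np.abs(ca)))  # ranks in [0, n)
        q = (ranks + 0.5) / len(ca)                 # empirical quantiles
        ref_mag = np.quantile(np.abs(cr), q)        # reference magnitudes
        out.append(np.sign(ca) * np.minimum(np.abs(ca), ref_mag))
    return pywt.waverec(out, wavelet)

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 4096)
clean = np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
artifact = np.where((t > 2) & (t < 2.5), 5.0 * (t - 2), 0.0)  # transient drift
restored = wqn(clean + artifact, clean[: t.size // 2])[: t.size]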

We consider additive Schwarz methods for boundary value problems involving the $p$-Laplacian. Although the existing theoretical estimates indicate a sublinear convergence rate for these methods, empirical evidence from numerical experiments demonstrates a linear convergence rate. In this paper, we narrow the gap between these theoretical and empirical results by presenting a novel convergence analysis. Firstly, we present an abstract convergence theory of additive Schwarz methods written in terms of a quasi-norm. This quasi-norm exhibits behavior similar to the Bregman distance of the convex energy functional associated with the problem. Secondly, we provide a quasi-norm version of the Poincar\'{e}--Friedrichs inequality, which is essential for deriving a quasi-norm stable decomposition for a two-level domain decomposition setting. By utilizing these two key elements, we establish a new bound for the linear convergence rate of the methods.
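
A compact illustration of the additive Schwarz structure on a one-dimensional $p$-Laplacian discretization (the discretization, subdomains, and damping are illustrative choices): local energy minimizations on overlapping blocks are combined with a damped additive update, and convexity of the energy guarantees monotone energy decrease with damping $1/(\text{number of subdomains})$.

import numpy as np
from scipy.optimize import minimize

p, n, h = 4.0, 64, 1.0 / 64
f = np.ones(n - 1)                       # interior load, u(0) = u(1) = 0

def energy(u_int):
    u = np.concatenate([[0.0], u_int, [0.0]])
    return np.sum(np.abs(np.diff(u))**p) / (p * h**(p - 1)) - h * f @ u_int

# Overlapping subdomains: blocks of interior indices 0..62.
blocks = [np.arange(0, 24), np.arange(16, 48), np.arange(40, 63)]

u, tau = np.zeros(n - 1), 1.0 / 3.0      # damping = 1 / (number of subdomains)
for it in range(30):
    corrections = np.zeros_like(u)
    for blk in blocks:                   # local solves (parallel in practice)
        def local(v, blk=blk):
            w = u.copy(); w[blk] += v
            return energy(w)
        corrections[blk] += minimize(local, np.zeros(len(blk)),
                                     method="L-BFGS-B").x
    u += tau * corrections               # damped additive update
    if it % 5 == 0:
        print(it, energy(u))             # energy decreases monotonically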

Federated learning methods enable model training across distributed data sources without data leaving their original locations and have gained increasing interest in various fields. However, existing approaches are limited in scope, excluding many structured probabilistic models. We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning, adapted for the federated setting. Additionally, we provide a communication-efficient variant analogous to the canonical FedAvg algorithm. We demonstrate the effectiveness of the proposed algorithms and evaluate their performance on hierarchical Bayesian neural networks and topic models.
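
A toy, conjugate instance of the federated aggregation idea, assuming a shared Gaussian mean with known noise (not the paper's algorithm): each client's contribution is additive in natural parameters, so a single server round recovers the global posterior exactly; in the non-conjugate structured models targeted here, the local sites would instead come from variational updates.

import numpy as np

rng = np.random.default_rng(2)
true_mu, noise_sd = 3.0, 1.0
clients = [rng.normal(true_mu, noise_sd, size=n) for n in (50, 200, 80)]

# Global prior on the shared mean: N(0, 10^2), stored as natural parameters
# (eta1, eta2) = (mu/var, -1/(2*var)) of a Gaussian.
prior_eta = np.array([0.0, -0.5 / 100.0])

def local_site(data, noise_var=noise_sd**2):
    """Each client's exact likelihood contribution in natural parameters;
    computed locally, so raw data never leaves the client."""
    return np.array([data.sum() / noise_var, -0.5 * len(data) / noise_var])

# Server aggregation: natural parameters are additive across clients.
eta = prior_eta + sum(local_site(d) for d in clients)
post_var = -0.5 / eta[1]
post_mu = eta[0] * post_var
print(post_mu, post_var)   # global posterior over the shared parameter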

For all the successes in verifying low-level, efficient, security-critical code, little has been said or studied about the structure, architecture and engineering of such large-scale proof developments. We present the design, implementation and evaluation of a set of language-based techniques that allow the programmer to modularly write and verify code at a high level of abstraction, while retaining control over the compilation process and producing high-quality, zero-overhead, low-level code suitable for integration into mainstream software. We implement our techniques within the F* proof assistant, and specifically its shallowly-embedded Low* toolchain that compiles to C. Through our evaluation, we establish that our techniques were critical in scaling the popular HACL* library past 100,000 lines of verified source code, and brought about significant gains in proof engineer productivity. The exposition of our methodology converges on one final, novel case study: the streaming API, a finicky API that has historically caused many bugs in high-profile software. Using our approach, we manage to capture the streaming semantics in a generic way, and apply it ``for free'' to over a dozen use-cases. Six of those have made it into the reference implementation of the Python programming language, replacing the previous CVE-ridden code.
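
The streaming pattern, sketched in Python rather than F*/Low* (the block function and all names below are invented for illustration): a generic wrapper buffers partial input and feeds the underlying block-based algorithm only whole blocks. This buffering invariant is exactly what the verified, generic streaming API proves once and reuses across instantiations, rather than testing it per use-case.

class Streaming:
    """Generic streaming wrapper: turns any block-based absorb function
    (fixed block_size, running state) into an incremental update/finish API
    by buffering partial input, independently of the underlying algorithm."""

    def __init__(self, block_size, init_state, absorb, finalize):
        self.block_size, self.buf = block_size, b""
        self.state, self.absorb, self.finalize = init_state, absorb, finalize

    def update(self, data: bytes):
        self.buf += data
        while len(self.buf) >= self.block_size:   # feed only whole blocks
            block, self.buf = (self.buf[:self.block_size],
                               self.buf[self.block_size:])
            self.state = self.absorb(self.state, block)
        return self

    def finish(self):
        return self.finalize(self.state, self.buf)  # flush the partial block

# Toy block algorithm: a rotate-and-XOR checksum over 8-byte blocks.
def absorb(state, block):
    rot = ((state << 1) | (state >> 63)) & (2**64 - 1)
    return rot ^ int.from_bytes(block, "big")

def finalize(state, tail):
    return absorb(state, tail.ljust(8, b"\x00"))  # pad final partial block

s = Streaming(8, 0, absorb, finalize)
s.update(b"hello ").update(b"streaming ").update(b"world")
print(hex(s.finish()))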
