
The Paterson--Stockmeyer method is an evaluation scheme for matrix polynomials with scalar coefficients that arise in many state-of-the-art algorithms based on polynomial or rational approximation, for example, those for computing transcendental matrix functions. We derive a mixed-precision version of the Paterson--Stockmeyer method that is particularly useful for evaluating matrix polynomials with scalar coefficients of decaying magnitude. The new method is mainly of interest in arbitrary precision arithmetic, and it is particularly attractive for high-precision computations. The key idea is to perform computations on data of small magnitude in low precision, and rounding error analysis is provided for the use of lower-than-working precisions. We focus on the evaluation of the Taylor approximants of the matrix exponential and show the applicability of our method to the existing scaling and squaring algorithms. We also demonstrate through experiments the general applicability of our method to other problems, such as computing the polynomials from the Padé approximant of the matrix exponential and the Taylor approximant of the matrix cosine. Numerical experiments show our mixed-precision Paterson--Stockmeyer algorithms can be more efficient than their fixed-precision counterparts while delivering the same level of accuracy.
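The classical (fixed-precision) Paterson--Stockmeyer scheme the abstract builds on can be sketched as follows: precompute the powers A, A^2, ..., A^s once, then run a Horner recurrence over blocks of s coefficients. This is a minimal illustrative sketch, not the mixed-precision algorithm of the paper; the mixed-precision variant would additionally evaluate the small-magnitude blocks in lower precisions.

```python
import numpy as np

def paterson_stockmeyer(A, coeffs, s=None):
    """Evaluate p(A) = sum_k coeffs[k] * A^k with roughly 2*sqrt(m) matrix products.

    Fixed-precision sketch only; the paper's method performs parts of this
    computation in lower-than-working precisions.
    """
    n = A.shape[0]
    m = len(coeffs) - 1                      # polynomial degree
    if s is None:
        s = max(1, round(np.sqrt(m + 1)))    # near-optimal block size
    # Precompute I, A, A^2, ..., A^s once -- the main saving of the scheme.
    pows = [np.eye(n)]
    for _ in range(s):
        pows.append(pows[-1] @ A)
    As = pows[s]
    # Horner recurrence over blocks of s coefficients, highest block first:
    # p(A) = B_r (A^s)^r + ... + B_1 A^s + B_0, with B_i = sum_j c_{is+j} A^j.
    r = m // s
    P = None
    for i in range(r, -1, -1):
        B = sum(coeffs[i * s + j] * pows[j]
                for j in range(min(s, m - i * s + 1)))
        P = B if P is None else P @ As + B
    return P
```

For a degree-m polynomial this uses about s - 1 + m // s matrix multiplications instead of m - 1 for naive Horner, which is why the block size is chosen near sqrt(m).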

Related content

We propose a novel Model Order Reduction framework that is able to handle solutions of hyperbolic problems characterized by multiple travelling discontinuities. By means of an optimization based approach, we introduce suitable calibration maps that allow us to transform the original solution manifold into a lower dimensional one. The novelty of the methodology lies in the fact that the optimization process does not require knowledge of the discontinuities' locations. The optimization can be carried out simply by choosing some reference control points, thus avoiding implicit shock-tracking techniques, which would translate into an increased computational effort during the offline phase. In the online phase, we rely on a non-intrusive approach, where the coefficients of the projection of the reduced order solution onto the reduced space are recovered by means of an Artificial Neural Network. To validate the methodology, we present numerical results for the 1D Sod shock tube problem, for the 2D double Mach reflection problem, also in the parametric case, and for the triple point problem.

Independent component analysis (ICA) is a fundamental statistical tool used to reveal hidden generative processes from observed data. However, traditional ICA approaches struggle with the rotational invariance inherent in Gaussian distributions, often necessitating the assumption of non-Gaussianity in the underlying sources. This may limit their applicability in broader contexts. To accommodate Gaussian sources, we develop an identifiability theory that relies on second-order statistics without imposing further preconditions on the distribution of sources, by introducing novel assumptions on the connective structure from sources to observed variables. Unlike recent work that focuses on potentially restrictive connective structures, our proposed assumption of structural variability is both considerably less restrictive and provably necessary. Furthermore, we propose two estimation methods based on second-order statistics and sparsity constraints. Experimental results are provided to validate our identifiability theory and estimation methods.
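The rotational invariance problem motivating this work is easy to demonstrate numerically: for Gaussian sources with identity covariance, two mixing matrices that differ by an orthogonal rotation produce observations with identical second-order statistics, so no covariance-based method can tell them apart without structural assumptions. A small sketch (the mixing matrix here is arbitrary, purely for illustration):

```python
import numpy as np

# For x = A s with s ~ N(0, I), Cov(x) = A A^T. If R is orthogonal, the
# mixing matrix A @ R yields exactly the same covariance, so the sources
# are only identifiable up to rotation from second-order statistics alone.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))                    # arbitrary mixing matrix
theta = 0.7
R = np.eye(3)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]  # an orthogonal rotation
cov1 = A @ A.T                                 # covariance under mixing A
cov2 = (A @ R) @ (A @ R).T                     # covariance under mixing A R
print(np.allclose(cov1, cov2))                 # True: indistinguishable
```

Breaking this tie is exactly what the paper's structural-variability assumptions on the source-to-observation connections are designed to do.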

When numerically computing high Reynolds number cavity flow, it is known that by formulating the Navier-Stokes equations using the stream function and vorticity as unknown functions, it is possible to reproduce finer flow structures. Although numerical computations applying methods such as the finite difference method are well known, to the best of our knowledge, there are no examples of applying particle-based methods like the SPH method to this problem. Therefore, we applied the SPH method to the Navier-Stokes equations, formulated with the stream function and vorticity as unknown functions, and conducted numerical computations of high Reynolds number cavity flow. The results confirmed the reproduction of small vortices and demonstrated the effectiveness of the scheme using the stream function and vorticity.

In recent years, general matrix-matrix multiplication with non-regular-shaped input matrices has been widely used in many applications like deep learning and has drawn increasing attention. However, conventional implementations are not suited for non-regular-shaped matrix-matrix multiplications, and few works focus on optimizing tall-and-skinny matrix-matrix multiplication on CPUs. This paper proposes an auto-tuning framework, AutoTSMM, to build high-performance tall-and-skinny matrix-matrix multiplication. AutoTSMM selects the optimal inner kernels in the install-time stage and generates an execution plan for the pre-pack tall-and-skinny matrix-matrix multiplication in the runtime stage. Experiments demonstrate that AutoTSMM achieves performance competitive with state-of-the-art tall-and-skinny matrix-matrix multiplication implementations, and it outperforms all conventional matrix-matrix multiplication implementations.
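AutoTSMM's tuned kernels are architecture-specific native code; the following NumPy sketch (with hypothetical helper names) only illustrates the pre-pack idea itself: the skinny operand is packed into contiguous panels once, so repeated products against tall matrices reuse the packed copies and accumulate panel by panel along the shared dimension.

```python
import numpy as np

def pack_b(B, kc=64):
    # "Pre-pack": split the skinny operand B (K x N, N small) into
    # contiguous row panels once, to be reused across many products.
    return [np.ascontiguousarray(B[k:k + kc, :]) for k in range(0, B.shape[0], kc)]

def tsmm(A, packed, kc=64):
    # Tall-and-skinny product C = A @ B, accumulating panel by panel
    # along the K dimension against the pre-packed copies of B.
    C = np.zeros((A.shape[0], packed[0].shape[1]))
    for i, Bp in enumerate(packed):
        k = i * kc
        C += A[:, k:k + Bp.shape[0]] @ Bp
    return C
```

A real implementation would also block the tall dimension of A and dispatch to the inner kernel selected at install time; those details are omitted here.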

An intrinsically causal approach to lifting factorization, called the Causal Complementation Algorithm, is developed for arbitrary two-channel perfect reconstruction FIR filter banks. This addresses an engineering shortcoming of the inherently noncausal strategy of Daubechies and Sweldens for factoring discrete wavelet transforms, which was based on the Extended Euclidean Algorithm for Laurent polynomials. The Causal Complementation Algorithm reproduces all lifting factorizations created by the causal version of the Euclidean Algorithm approach and generates additional causal factorizations, which are not obtainable via the causal Euclidean Algorithm, possessing degree-reducing properties that generalize those furnished by the Euclidean Algorithm. In lieu of the Euclidean Algorithm, the new approach employs Gaussian elimination in matrix polynomials using a slight generalization of polynomial long division. It is shown that certain polynomial degree-reducing conditions are both necessary and sufficient for a causal elementary matrix decomposition to be obtainable using the Causal Complementation Algorithm, yielding a formal definition of "lifting factorization" that was missing from the work of Daubechies and Sweldens.
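The Causal Complementation Algorithm itself is beyond the scope of a short sketch, but the elementary lifting-step structure it factors filter banks into can be illustrated with the simplest example, a Haar-type lifting scheme: split into polyphase components, apply a predict step and an update step, and invert by undoing the steps in reverse order with flipped signs, giving perfect reconstruction by construction.

```python
import numpy as np

def haar_lift_forward(x):
    # Split into even/odd polyphase components, then two elementary
    # lifting steps (each is a unit-diagonal triangular matrix operation).
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict step: detail coefficients
    s = even + d / 2        # update step: approximation coefficients
    return s, d

def haar_lift_inverse(s, d):
    # Perfect reconstruction: undo the steps in reverse order.
    even = s - d / 2
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Each lifting step corresponds to an elementary (unit-determinant triangular) factor of the polyphase matrix; the paper's contribution concerns which such causal factorizations exist for general two-channel FIR filter banks and how to compute them.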

The covariance matrix adaptation evolution strategy (CMA-ES) is a stochastic search algorithm using a multivariate normal distribution for continuous black-box optimization. In addition to strong empirical results, part of the CMA-ES can be described by a stochastic natural gradient method and can be derived from the information geometric optimization (IGO) framework. However, there are some components of the CMA-ES, such as the rank-one update, for which the theoretical understanding is limited. While the rank-one update adjusts the covariance matrix to increase the likelihood of generating a solution in the direction of the evolution path, unlike the rank-$\mu$ update, this idea has been difficult to formulate and interpret as a natural gradient method. In this work, we provide a new interpretation of the rank-one update in the CMA-ES from the perspective of the natural gradient with prior distribution. First, we propose maximum a posteriori IGO (MAP-IGO), which is the IGO framework extended to incorporate a prior distribution. Then, we derive the rank-one update from the MAP-IGO by setting the prior distribution based on the idea that the promising mean vector should exist in the direction of the evolution path. Moreover, the newly derived rank-one update is extensible, where an additional term appears in the update for the mean vector. We empirically investigate the properties of the additional term using various benchmark functions.
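For reference, the standard rank-one update being reinterpreted here has a compact form: the evolution path accumulates an exponentially weighted sum of successive normalized mean shifts, and the covariance gains a rank-one term along that path. A minimal sketch with conventional (but here arbitrarily chosen) learning rates; this is the classical update, not the paper's MAP-IGO derivation or its extended mean-vector term.

```python
import numpy as np

def rank_one_update(C, p_c, mean_shift, sigma, c_c=0.1, c_1=0.05, mu_eff=5.0):
    """One classical rank-one covariance update of CMA-ES (sketch).

    C: covariance matrix; p_c: evolution path; mean_shift: m_new - m_old;
    sigma: step size. c_c, c_1, mu_eff are illustrative parameter values.
    """
    # Accumulate the normalized mean shift into the evolution path.
    p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * mean_shift / sigma
    # Rank-one update: raise the likelihood of sampling along p_c.
    C = (1 - c_1) * C + c_1 * np.outer(p_c, p_c)
    return C, p_c
```

The outer-product term stretches the sampling distribution along the direction in which the mean has recently been moving, which is precisely the behavior the paper recovers by placing a prior on the mean in the MAP-IGO framework.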

Theorem proving is a fundamental aspect of mathematics, spanning from informal reasoning in natural language to rigorous derivations in formal systems. In recent years, the advancement of deep learning, especially the emergence of large language models, has sparked a notable surge of research exploring these techniques to enhance the process of theorem proving. This paper presents a comprehensive survey of deep learning for theorem proving by offering (i) a thorough review of existing approaches across various tasks such as autoformalization, premise selection, proofstep generation, and proof search; (ii) an extensive summary of curated datasets and strategies for synthetic data generation; (iii) a detailed analysis of evaluation metrics and the performance of state-of-the-art methods; and (iv) a critical discussion on the persistent challenges and the promising avenues for future exploration. Our survey aims to serve as a foundational reference for deep learning approaches in theorem proving, inspiring and catalyzing further research endeavors in this rapidly growing field. A curated list of papers is available at //github.com/zhaoyu-li/DL4TP.

Bayesian model updating facilitates the calibration of analytical models based on observations and the quantification of uncertainties in model parameters such as stiffness and mass. This process significantly enhances damage assessment and response predictions in existing civil structures. Predominantly, current methods employ modal properties identified from acceleration measurements to evaluate the likelihood of the model parameters. This modal analysis-based likelihood generally involves a prior assumption regarding the mass parameters. In civil structures, accurate determination of mass parameters proves challenging owing to the significant uncertainty and time-varying nature of imposed loads. The resulting inaccuracy potentially introduces biases while estimating the stiffness parameters, which affects the assessment of structural response and associated damage. Addressing this issue, the present study introduces a stress-resultant-based approach for Bayesian model updating independent of mass assumptions. This approach utilizes system identification on strain and acceleration measurements to establish the relationship between nodal displacements and elemental stress resultants. Employing static analysis to depict this relationship aids in assessing the likelihood of stiffness parameters. Integrating this static-analysis-based likelihood with a modal-analysis-based likelihood facilitates the simultaneous estimation of mass and stiffness parameters. The proposed approach was validated using numerical examples on a planar frame and experimental studies on a full-scale moment-resisting steel frame structure.
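The Bayesian updating loop described above can be illustrated on a deliberately tiny surrogate: a single spring with stiffness k, static load F, and noisy displacement measurements standing in for the static-analysis-based likelihood (a real structure would require a finite element model and the strain-derived stress resultants of the paper). All numbers below are invented for illustration.

```python
import numpy as np

# Toy stand-in for static-analysis-based Bayesian updating: infer stiffness
# k of a single spring from noisy displacement observations u = F / k + noise.
rng = np.random.default_rng(1)
k_true, F, sigma = 2.0, 10.0, 0.05
u_obs = F / k_true + rng.normal(0.0, sigma, size=20)

def log_post(k):
    # Gaussian measurement likelihood, flat prior on k > 0.
    if k <= 0:
        return -np.inf
    resid = u_obs - F / k
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampler over the stiffness parameter.
k, lp, samples = 1.0, log_post(1.0), []
for _ in range(5000):
    k_prop = k + rng.normal(0.0, 0.1)
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    samples.append(k)
posterior_mean = np.mean(samples[1000:])   # should be close to k_true
```

The paper's contribution is, in effect, a physically grounded replacement for `log_post`: a likelihood built from static analysis of identified stress resultants, which removes the mass assumptions that a purely modal likelihood would require.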

Auditing the use of data in training machine-learning (ML) models is an increasingly pressing challenge, as myriad ML practitioners routinely leverage the effort of content creators to train models without their permission. In this paper, we propose a general method to audit an ML model for the use of a data-owner's data in training, without prior knowledge of the ML task for which the data might be used. Our method leverages any existing black-box membership inference method, together with a sequential hypothesis test of our own design, to detect data use with a quantifiable, tunable false-detection rate. We show the effectiveness of our proposed framework by applying it to audit data use in two types of ML models, namely image classifiers and foundation models.
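The sequential testing ingredient can be sketched with a classical sequential probability ratio test (SPRT): each queried sample yields a binarized membership-inference prediction, and the audit stops as soon as the accumulated log-likelihood ratio crosses thresholds set by the target false-detection rate. This is a generic SPRT sketch with invented hit probabilities, not the paper's specific test design.

```python
import math

def sprt_audit(hits, p_member=0.7, p_nonmember=0.5, alpha=0.01, beta=0.05):
    """Sequential probability ratio test over membership-inference outcomes.

    hits: iterable of 0/1 predictions from any black-box MI attack.
    p_member / p_nonmember: assumed hit rates under "data was used" vs. not.
    alpha: target false-detection rate; beta: target miss rate.
    """
    upper = math.log((1 - beta) / alpha)   # cross -> conclude "used"
    lower = math.log(beta / (1 - alpha))   # cross -> conclude "not used"
    llr = 0.0
    for t, hit in enumerate(hits, 1):
        p1 = p_member if hit else 1 - p_member
        p0 = p_nonmember if hit else 1 - p_nonmember
        llr += math.log(p1 / p0)           # Wald's accumulated evidence
        if llr >= upper:
            return "used", t
        if llr <= lower:
            return "not used", t
    return "undecided", len(hits)
```

The appeal of a sequential design, as in the abstract, is that the number of queries adapts to the strength of the evidence while the false-detection rate stays quantifiable and tunable via `alpha`.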

Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
