
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems. Estimators such as principal component regression (PCR) or reduced rank regression (RRR) in kernel spaces can be shown to provably learn Koopman operators from finite empirical observations of the system's time evolution. Scaling these approaches to very long trajectories is a challenge and requires introducing suitable approximations to make computations feasible. In this paper, we boost the efficiency of different kernel-based Koopman operator estimators using random projections (sketching). We derive, implement and test the new "sketched" estimators with extensive experiments on synthetic and large-scale molecular dynamics datasets. Further, we establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency. Our empirical and theoretical analysis shows that the proposed estimators provide a sound and efficient way to learn large-scale dynamical systems. In particular, our experiments indicate that the proposed estimators retain the same accuracy as PCR and RRR while being much faster.
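For intuition, the sketch below combines Nyström-style landmark sketching with kernel principal component regression on one-step snapshot pairs, which is the general flavour of the randomized estimators described above. It is an illustrative NumPy toy (the Gaussian kernel, the landmark count, the rank and all names are arbitrary choices), not the authors' estimator or code.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sketched_koopman_pcr(X, Y, n_landmarks=50, rank=5, sigma=1.0, seed=0):
    """Nystrom-sketched kernel PCR predicting the next state from the current one.

    X, Y: (n, d) arrays of consecutive snapshots, Y[i] is the successor of X[i].
    Returns a callable mapping a batch of states to one-step predictions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    Z = X[idx]                                          # landmark (sketch) points
    Kzz = gaussian_kernel(Z, Z, sigma)
    w, V = np.linalg.eigh(Kzz + 1e-10 * np.eye(len(Z)))
    Kzz_isqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    feat = lambda A: gaussian_kernel(A, Z, sigma) @ Kzz_isqrt   # Nystrom feature map
    Phi = feat(X)                                       # (n, n_landmarks) sketched features
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    Vr = Vt[:rank].T                                    # top principal directions
    coef = np.linalg.lstsq(Phi @ Vr, Y, rcond=None)[0]  # regression in the reduced space
    return lambda A: feat(A) @ Vr @ coef

# Toy usage: a noisy linear system x_{t+1} = A x_t + noise.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = np.array([[0.9, 0.1], [-0.1, 0.8]])
    traj = [rng.normal(size=2)]
    for _ in range(500):
        traj.append(A @ traj[-1] + 0.01 * rng.normal(size=2))
    traj = np.asarray(traj)
    predict = sketched_koopman_pcr(traj[:-1], traj[1:], n_landmarks=64, rank=4)
    print(np.mean((predict(traj[:-1]) - traj[1:]) ** 2))   # small one-step error
```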

Related content

We consider a one-dimensional singularly perturbed fourth-order problem with the additional feature of a shift term. An expansion into a smooth term, boundary layers and an inner layer yields a formal solution decomposition, which, together with a stability result, provides the estimates needed for the subsequent numerical analysis. Using classical layer-adapted meshes, we present a numerical method that achieves supercloseness and optimal convergence orders in the associated energy norm. We also consider coarser meshes in view of the weak layers. Some numerical examples conclude the paper and support the theory.
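As a point of reference, a classical layer-adapted (Shishkin-type) mesh on $[0,1]$ with $N$ subintervals and boundary layers at both end points can be written as below; the transition constant $\sigma$ and the exact layer widths for the fourth-order shifted problem are not specified here and would follow from the solution decomposition above.

```latex
% Illustrative Shishkin-type mesh with boundary layers at both end points;
% sigma > 0 is a user-chosen mesh constant tied to the order of the method.
\[
  \lambda = \min\Bigl\{ \tfrac{1}{4},\; \sigma \varepsilon \ln N \Bigr\},
  \qquad
  x_i =
  \begin{cases}
    \dfrac{4\lambda i}{N}, & 0 \le i \le \tfrac{N}{4},\\[6pt]
    \lambda + \dfrac{2(1-2\lambda)\bigl(i - \tfrac{N}{4}\bigr)}{N}, & \tfrac{N}{4} < i \le \tfrac{3N}{4},\\[6pt]
    1 - \lambda + \dfrac{4\lambda \bigl(i - \tfrac{3N}{4}\bigr)}{N}, & \tfrac{3N}{4} < i \le N.
  \end{cases}
\]
```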

This work presents a systematic methodology for describing the transient dynamics of coarse-grained molecular systems inferred from all-atom simulated data. To efficiently approximate the transient coarse-grained dynamics, we suggest Langevin-type dynamics in which the coarse-grained interaction potential depends explicitly on time. We apply the path-space force matching approach in the transient dynamics regime to learn the proposed model parameters. In particular, we parameterize the coarse-grained potential with respect to both the pair distance of the CG particles and time, obtaining an evolution model that is explicitly time-dependent. Moreover, we follow a data-driven approach to estimate the friction kernel, given by appropriate correlation functions computed directly from the underlying all-atom molecular dynamics simulations. To explore and validate the proposed methodology, we study a benchmark system of a moving particle in a box. We examine the suggested model's effectiveness in terms of the system's correlation time and find that it approximates the transient time regime well, with accuracy depending on that correlation time. As a result, in the less correlated case, it can represent the dynamics over a longer time interval. We also present an extensive study of our approach applied to a realistic high-dimensional molecular water system. Starting the water system out of thermal equilibrium, we collect all-atom trajectories over the empirically estimated transient time regime. We then infer the suggested model and strengthen its validity by comparison with simplified Markovian models.
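As a toy illustration of fitting a time-dependent coarse-grained pair interaction by least squares, the snippet below expands the pair force on a tensor product of Gaussian bumps in the distance and monomials in time and matches it to reference (all-atom mapped) forces. The basis, names and sizes are hypothetical; the paper's path-space force matching formulation is more general than this plain regression.

```python
import numpy as np

def fit_time_dependent_pair_force(r, t, f_ref, n_r=8, n_t=4):
    """Least-squares (force-matching style) fit of a pair force F(r, t).

    r, t, f_ref : 1-D arrays of pair distances, times and reference
                  (all-atom mapped) pair forces, one entry per sample.
    Returns a callable F(r, t).
    """
    r_centers = np.linspace(r.min(), r.max(), n_r)
    width = max((r.max() - r.min()) / n_r, 1e-12)
    def design(rr, tt):
        radial = np.exp(-((rr[:, None] - r_centers[None, :]) / width) ** 2)   # (m, n_r)
        temporal = np.vander(tt, n_t, increasing=True)                        # (m, n_t)
        return (radial[:, :, None] * temporal[:, None, :]).reshape(len(rr), -1)
    coef, *_ = np.linalg.lstsq(design(r, t), f_ref, rcond=None)
    return lambda rr, tt: design(np.atleast_1d(rr), np.atleast_1d(tt)) @ coef

# Usage: F = fit_time_dependent_pair_force(r_samples, t_samples, f_samples); F(0.9, 2.5)
```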

We give a short survey of recent results on sparse-grid linear algorithms for the approximate recovery and integration of functions possessing unweighted or weighted Sobolev mixed smoothness, based on their sampled values at a certain finite set. Some of these results are extended to more general cases.
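For context, a standard example of such sparse-grid constructions is the Smolyak combination formula, which assembles a $d$-dimensional sampling or quadrature operator from a family of univariate operators $U^{i}$ (notation illustrative; the surveyed results cover more general weighted settings):

```latex
% Smolyak (sparse-grid) combination formula in d dimensions built from a
% family of univariate sampling/quadrature operators U^{i}, i >= 1.
\[
  A(q, d) \;=\; \sum_{q-d+1 \,\le\, |\mathbf{i}|_1 \,\le\, q}
  (-1)^{\,q - |\mathbf{i}|_1}
  \binom{d-1}{\,q - |\mathbf{i}|_1}
  \bigl( U^{i_1} \otimes \cdots \otimes U^{i_d} \bigr),
  \qquad
  |\mathbf{i}|_1 = i_1 + \cdots + i_d .
\]
```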

We propose a method to modify a polygonal mesh in order to fit the zero-isoline of a level set function by extending a standard body-fitted strategy to a tessellation with arbitrarily shaped elements. The novel level-set-fitted approach, in combination with a Discontinuous Galerkin finite element approximation, provides an ideal setting to model physical problems characterized by embedded or evolving complex geometries, since it allows one to skip any mesh post-processing related to grid quality. The proposed methodology is first assessed on the linear elasticity equation, by verifying the approximation capability of the level-set-fitted approach when dealing with configurations with heterogeneous material properties. Subsequently, we combine the level-set-fitted methodology with a minimum compliance topology optimization technique, in order to deliver optimized layouts exhibiting crisp boundaries and reliable mechanical performance. An extensive numerical test campaign confirms the effectiveness of the proposed method.
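A bare-bones illustration of the fitting idea: vertices lying close to the zero isoline are projected onto it with one Newton-like step along the level set gradient. This toy NumPy sketch ignores element shapes and the quality safeguards that the actual polygonal, Discontinuous Galerkin setting provides.

```python
import numpy as np

def snap_vertices_to_isoline(vertices, phi, grad_phi, tol=0.2):
    """Move vertices lying near the zero level set of phi onto it.

    vertices : (n, 2) array of vertex coordinates.
    phi, grad_phi : callables acting on (n, 2) arrays.
    Only vertices with |phi| < tol are moved (one projection step).
    """
    v = vertices.copy()
    val = phi(v)                               # level set values at the vertices
    g = grad_phi(v)                            # (n, 2) gradients
    gn2 = (g ** 2).sum(axis=1)
    near = np.abs(val) < tol                   # vertices close to the interface
    v[near] -= (val[near] / np.maximum(gn2[near], 1e-12))[:, None] * g[near]
    return v

# Example: fit grid vertices to the unit circle, phi(x) = |x| - 1.
if __name__ == "__main__":
    xs = np.linspace(-1.5, 1.5, 7)
    V = np.array([[x, y] for x in xs for y in xs])
    phi = lambda p: np.linalg.norm(p, axis=1) - 1.0
    grad = lambda p: p / np.maximum(np.linalg.norm(p, axis=1, keepdims=True), 1e-12)
    print(snap_vertices_to_isoline(V, phi, grad))  # nearby vertices now satisfy phi = 0
```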

Ordinary state-based peridynamic (OSB-PD) models have an unparalleled capability to simulate crack propagation phenomena in solids with arbitrary Poisson's ratio. However, their non-locality also leads to prohibitively high computational cost. In this paper, a fast solution scheme for OSB-PD models based on matrix operation is introduced, with which graphics processing units (GPUs) are used to accelerate the computation. For the purpose of comparison and verification, a commonly used solution scheme based on loop operation is also presented. In-house software is developed in MATLAB. First, the vibration of a cantilever beam is solved to validate the loop- and matrix-based schemes by comparing the numerical solutions with those produced by FEM software. Subsequently, two typical dynamic crack propagation problems are simulated to illustrate the effectiveness of the proposed schemes in solving dynamic fracture problems. Finally, the simulation of the Brokenshire torsion experiment is carried out using the matrix-based scheme, and the similarity in the shapes of the experimental and numerical broken specimens further demonstrates the ability of the proposed approach to deal with 3D non-planar fracture problems. In addition, the speed-up of the matrix-based scheme with respect to the loop-based scheme and the performance of the GPU acceleration are investigated. The results emphasize the high computational efficiency of the matrix-based implementation scheme.
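The contrast between loop- and matrix-based implementations can be illustrated with the simpler bond-based peridynamic force (not the ordinary state-based constitutive law used in the paper): the two functions below compute identical internal forces, but the second is expressed entirely in array operations that map naturally onto GPU-style batched arithmetic.

```python
import numpy as np

def bond_forces_loop(x, u, bonds, c):
    """Internal force density via an explicit loop over bonds (reference version,
    bond-based constitutive law for brevity)."""
    f = np.zeros_like(x)
    for (i, j) in bonds:
        xi = x[j] - x[i]                       # reference bond vector
        y = xi + (u[j] - u[i])                 # deformed bond vector
        s = (np.linalg.norm(y) - np.linalg.norm(xi)) / np.linalg.norm(xi)  # bond stretch
        t = c * s * y / np.linalg.norm(y)      # pairwise force density
        f[i] += t
        f[j] -= t
    return f

def bond_forces_matrix(x, u, bonds, c):
    """The same computation expressed with array (matrix) operations, which map
    directly onto GPU-friendly batched arithmetic."""
    i, j = bonds[:, 0], bonds[:, 1]
    xi = x[j] - x[i]
    y = xi + (u[j] - u[i])
    ln0 = np.linalg.norm(xi, axis=1)
    ln = np.linalg.norm(y, axis=1)
    t = (c * (ln - ln0) / ln0 / ln)[:, None] * y
    f = np.zeros_like(x)
    np.add.at(f, i, t)
    np.add.at(f, j, -t)
    return f

# The two schemes agree on a tiny three-particle example.
if __name__ == "__main__":
    x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    u = 0.01 * x
    bonds = np.array([[0, 1], [0, 2], [1, 2]])
    print(np.allclose(bond_forces_loop(x, u, bonds, 1.0),
                      bond_forces_matrix(x, u, bonds, 1.0)))   # True
```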

Parametric mathematical models such as parameterizations of partial differential equations with random coefficients have received a lot of attention within the field of uncertainty quantification. The model uncertainties are often represented via a series expansion in terms of the parametric variables. In practice, this series expansion needs to be truncated to a finite number of terms, introducing a dimension truncation error to the numerical simulation of a parametric mathematical model. There have been several studies of the dimension truncation error corresponding to different models of the input random field in recent years, but many of these analyses have been carried out within the context of numerical integration. In this paper, we study the $L^2$ dimension truncation error of the parametric model problem. Estimates of this kind arise in the assessment of the dimension truncation error for function approximation in high dimensions. In addition, we show that the dimension truncation error rate is invariant with respect to certain transformations of the parametric variables. Numerical results are presented which showcase the sharpness of the theoretical results.
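In the notation commonly used for such problems, the quantity studied is the $L^2$ error committed by truncating the parametric input after $s$ variables (symbols illustrative; $V$ is the spatial solution space, $U$ the parameter domain with measure $\mu$, and $\boldsymbol{y}_{\le s} = (y_1,\ldots,y_s,0,0,\ldots)$):

```latex
% L^2 dimension truncation error of the parametric solution u(., y) when the
% input expansion is truncated after s variables (notation illustrative).
\[
  e_s \;:=\;
  \bigl\| u(\cdot, \boldsymbol{y}) - u(\cdot, \boldsymbol{y}_{\le s}) \bigr\|_{L^2(U;\,V)}
  \;=\;
  \Bigl( \int_{U} \bigl\| u(\cdot, \boldsymbol{y}) - u(\cdot, \boldsymbol{y}_{\le s}) \bigr\|_{V}^{2}
  \, \mathrm{d}\mu(\boldsymbol{y}) \Bigr)^{1/2}.
\]
```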

Multiagent systems aim to accomplish highly complex learning tasks through decentralised consensus-seeking dynamics, and their use has garnered a great deal of attention in the signal processing and computational intelligence communities. This article examines the behaviour of multiagent networked systems with nonlinear filtering/learning dynamics. To this end, a general formulation for the actions of an agent in multiagent networked systems is presented, and conditions for achieving cohesive learning behaviour are given. Importantly, applications of the derived framework in distributed and federated learning scenarios are presented.
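As a minimal caricature of such dynamics, each round below fuses neighbours' parameters through a mixing matrix and then applies a nonlinear local learning update; the specific nonlinearity, mixing matrix and loss are placeholders, not the article's general formulation or its cohesion conditions.

```python
import numpy as np

def decentralised_learning_step(theta, W, grads, step=0.1, phi=np.tanh):
    """One 'combine then adapt' round for a network of learning agents.

    theta : (n_agents, dim) local parameters.
    W     : (n_agents, n_agents) row-stochastic mixing (consensus) matrix.
    grads : callable returning (n_agents, dim) local gradients.
    phi   : nonlinearity shaping the local update, standing in for the
            nonlinear filtering/learning dynamics of each agent.
    """
    fused = W @ theta                         # consensus / information fusion
    return fused - step * phi(grads(fused))   # nonlinear local adaptation

# Toy usage: three agents minimising ||theta - target||^2 over a small network.
if __name__ == "__main__":
    W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
    target = np.array([1.0, -2.0])
    theta = np.zeros((3, 2))
    for _ in range(300):
        theta = decentralised_learning_step(theta, W, lambda th: th - target)
    print(theta)   # every agent ends up close to the common minimiser
```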

Quantum neural networks (QNNs) and quantum kernels stand as prominent figures in the realm of quantum machine learning, poised to leverage the nascent capabilities of near-term quantum computers to surmount classical machine learning challenges. Nonetheless, training efficiency poses a limitation on both QNNs and quantum kernels, curbing their efficacy when applied to extensive datasets. To confront this concern, we present a unified approach, coreset selection, aimed at expediting the training of QNNs and quantum kernels by distilling a judicious subset from the original training dataset. Furthermore, we analyze the generalization error bounds of QNNs and quantum kernels when trained on such coresets, showing performance comparable to training on the complete original dataset. Through systematic numerical simulations, we illuminate the potential of coreset selection in expediting tasks encompassing synthetic data classification, identification of quantum correlations, and quantum compiling. Our work offers a useful way to improve diverse quantum machine learning models with a theoretical guarantee while reducing the training cost.
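One classical coreset construction that fits this setting is greedy k-center selection, sketched below; it is not claimed to be the selection criterion analyzed in the paper, it simply shows how a small representative subset can replace the full training set.

```python
import numpy as np

def kcenter_coreset(X, m, seed=0):
    """Greedy k-center coreset: select m indices so that every sample lies
    close to some selected point."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)     # distance to the current coreset
    for _ in range(m - 1):
        nxt = int(np.argmax(d))                   # farthest point joins the coreset
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(idx)

# Usage: train the QNN / quantum kernel on X[idx], y[idx] instead of the full set.
if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(500, 4))
    idx = kcenter_coreset(X, m=32)
    print(idx.shape)   # (32,)
```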

Many modern discontinuous Galerkin (DG) methods for conservation laws make use of summation-by-parts operators and flux differencing to achieve kinetic energy preservation or entropy stability. While these techniques increase the robustness of DG methods significantly, they are also computationally more demanding than standard weak-form nodal DG methods. We present several implementation techniques to improve the efficiency of flux differencing DG methods that use tensor product quadrilateral or hexahedral elements, in 2D and 3D respectively. The focus is mostly on CPUs and DG methods for the compressible Euler equations, although these techniques are generally also useful for other physical systems, including the compressible Navier-Stokes and magnetohydrodynamics equations. We present results using two open source codes, Trixi.jl written in Julia and FLUXO written in Fortran, to demonstrate that our proposed implementation techniques are applicable to different code bases and programming languages.
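The kernel being optimized is the flux-differencing volume term: in 1D on a single element it reads r_i = -2 * sum_j D[i, j] * f_S(u_i, u_j), where D is the SBP derivative matrix and f_S a symmetric two-point flux; tensor product elements apply this along each coordinate direction. A minimal NumPy version follows (illustrative only, not the Trixi.jl or FLUXO implementation).

```python
import numpy as np

def flux_differencing_volume_term(u, D, fs):
    """Volume contribution r_i = -2 * sum_j D[i, j] * f_S(u_i, u_j) on one element
    with nodal values u and SBP differentiation matrix D (1D scalar case)."""
    F = fs(u[:, None], u[None, :])        # matrix of symmetric two-point fluxes
    return -2.0 * np.sum(D * F, axis=1)

# Entropy-conservative two-point flux for Burgers' equation, f(u) = u^2 / 2.
burgers_ec = lambda ul, ur: (ul * ul + ul * ur + ur * ur) / 6.0

if __name__ == "__main__":
    D = np.array([[-0.5, 0.5], [-0.5, 0.5]])   # 2-node Lobatto SBP derivative on [-1, 1]
    u = np.array([1.0, 2.0])
    print(flux_differencing_volume_term(u, D, burgers_ec))
```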

We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method that allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
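A toy version of the underlying interval paving idea, for the unquantified constraint x1^2 + x2^2 <= 1: boxes proven to satisfy the constraint join the inner approximation, undecided boxes are bisected up to a depth budget and only enlarge the outer approximation. The actual method handles arbitrarily quantified formulas, which this sketch does not exercise.

```python
def sq_range(lo, hi):
    # Interval extension of x -> x**2 on [lo, hi].
    lo2, hi2 = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2)), max(lo2, hi2)

def pave(box, depth, inner, outer):
    """Classify sub-boxes of `box` against the constraint x1^2 + x2^2 <= 1.

    inner : boxes proven to satisfy the constraint everywhere.
    outer : inner boxes plus the undecided boundary boxes.
    """
    (x1l, x1u), (x2l, x2u) = box
    lo1, hi1 = sq_range(x1l, x1u)
    lo2, hi2 = sq_range(x2l, x2u)
    if hi1 + hi2 <= 1.0:                  # the whole box satisfies the constraint
        inner.append(box); outer.append(box); return
    if lo1 + lo2 > 1.0:                   # the whole box violates it
        return
    if depth == 0:                        # undecided: only enlarges the outer set
        outer.append(box); return
    m1, m2 = 0.5 * (x1l + x1u), 0.5 * (x2l + x2u)
    for b in (((x1l, m1), (x2l, m2)), ((x1l, m1), (m2, x2u)),
              ((m1, x1u), (x2l, m2)), ((m1, x1u), (m2, x2u))):
        pave(b, depth - 1, inner, outer)

if __name__ == "__main__":
    inner, outer = [], []
    pave(((-2.0, 2.0), (-2.0, 2.0)), 6, inner, outer)
    area = lambda boxes: sum((u - l) * (v - w) for (l, u), (w, v) in boxes)
    print(area(inner), area(outer))       # bracket the disc area pi from below and above
```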
