
This paper studies a class of convolution quadratures, well-known numerical methods for computing convolution integrals. In contrast to the existing counterparts, which use linear multistep formulas or Runge-Kutta methods, we employ the block generalized Adams method to discretize the underlying initial value problem. Like the convolution quadrature based on a linear multistep formula, the proposed method can be implemented on an equispaced grid. In addition, the proposed approach is as stable as the convolution quadrature based on the Runge-Kutta method, so it can accurately solve a wide range of problems without becoming unstable. We provide a detailed convergence analysis of the proposed convolution quadrature method and numerically illustrate our theoretical findings for convolution integrals with smooth and weakly singular kernels.
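The paper's block generalized Adams construction is not reproduced below; as a hedged illustration of the general convolution quadrature structure it builds on, here is a minimal sketch of the classical CQ with the backward Euler (BDF1) generating function $\delta(\zeta) = 1 - \zeta$, applied to the weakly singular kernel $K(s) = s^{-1/2}$ on an equispaced grid. The step counts and test function are illustrative.

```python
import numpy as np

# Minimal sketch: classical convolution quadrature with the BDF1
# generating function delta(z) = 1 - z, for the weakly singular kernel
# K(s) = s^{-1/2} (half-order fractional integral).  The paper's block
# generalized Adams variant is NOT shown; this only illustrates the
# equispaced-grid CQ structure.

def cq_weights_half_integral(n_steps, h):
    """Weights of K(delta(z)/h) = sqrt(h) * (1 - z)^{-1/2}, via the
    binomial recurrence c_0 = 1, c_n = c_{n-1} * (n - 1/2) / n."""
    w = np.empty(n_steps + 1)
    w[0] = 1.0
    for n in range(1, n_steps + 1):
        w[n] = w[n - 1] * (n - 0.5) / n
    return np.sqrt(h) * w

def cq_apply(f_vals, weights):
    """Discrete convolution sum_{j <= n} w_{n-j} f_j on the grid."""
    return np.array([np.dot(weights[: k + 1][::-1], f_vals[: k + 1])
                     for k in range(len(f_vals))])

T, N = 1.0, 200
h = T / N
t = np.linspace(0.0, T, N + 1)
w = cq_weights_half_integral(N, h)
approx = cq_apply(np.ones_like(t), w)   # f(t) = 1
exact = 2.0 * np.sqrt(t / np.pi)        # I^{1/2} 1 = 2 * sqrt(t / pi)
print("max error:", np.max(np.abs(approx - exact)))
```

With $f \equiv 1$ the exact value is the half-order fractional integral $2\sqrt{t/\pi}$, so the printed error should shrink as the step size decreases.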

Related Content

In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions ($f$ and $g$) that produces a third function expressing how the shape of one function is modified by the other. The term convolution refers both to the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reversed and shifted, $(f * g)(t) = \int f(\tau)\, g(t - \tau)\, d\tau$, and the integral is evaluated for all values of the shift, producing the convolution function.
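As a concrete illustration of this definition, the discrete analogue replaces the integral by a sum; a minimal NumPy sketch:

```python
import numpy as np

# Discrete analogue of the definition above: (f * g)[n] = sum_k f[k] g[n-k],
# i.e., g is reversed and shifted across f, and the products are summed.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])        # simple two-point averaging kernel
print(np.convolve(f, g))        # [0.5, 1.5, 2.5, 1.5]
```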

A novel regression method is introduced and studied. The procedure weights squared residuals based on their magnitude. Unlike classic least squares, which treats every squared residual as equally important, the new procedure exponentially down-weights squared residuals that lie far away from the cloud of all residuals and assigns a constant weight of one to squared residuals that lie close to the center of the squared-residual cloud. The new procedure keeps a good balance between robustness and efficiency: it attains the highest breakdown-point robustness possible for any regression-equivariant procedure, is much more robust than classic least squares, and is much more efficient than the benchmark robust method, the least trimmed squares (LTS) of Rousseeuw (1984). With a smooth weight function, the new procedure can be computed very quickly by first-order (first-derivative) and second-order (second-derivative) methods. These assertions and other theoretical findings are verified on simulated and real data examples.
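The paper's exact weight function is not given above, so the sketch below is only a plausible reading of the idea: an iteratively reweighted least-squares loop in which squared residuals near the residual cloud receive weight one and distant ones are exponentially down-weighted. The cutoff choice (the median of the squared residuals) and the weight shape are assumptions made for illustration.

```python
import numpy as np

# Hedged sketch: iteratively reweighted least squares where squared
# residuals inside a cutoff get weight one and large squared residuals
# are exponentially down-weighted.  The cutoff c and the exact weight
# shape are illustrative assumptions, not the paper's weight function.

def down_weight(sq_res, c):
    """Weight 1 inside the cutoff, exponential decay outside."""
    return np.where(sq_res <= c, 1.0, np.exp(-(sq_res - c)))

def weighted_regression(X, y, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # least-squares start
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2
        c = np.median(r2)                          # data-driven cutoff (assumption)
        w = down_weight(r2, c)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y) # weighted normal equations
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:5] += 20.0                                      # gross outliers
print(weighted_regression(X, y))                   # close to [1, 2]
```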

In this paper we explore the concept of sequential inductive prediction intervals using theory from sequential testing. We furthermore introduce a three-parameter PAC definition of prediction intervals that allows us, via simulation, to achieve nearly sharp bounds with high probability.
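As a hedged illustration of checking interval coverage by simulation (not the paper's sequential construction or its three-parameter PAC definition), the following sketch builds an inductive prediction interval from calibration data via empirical quantiles and estimates its coverage over repeated draws:

```python
import numpy as np

# Hedged sketch: a basic inductive prediction interval from calibration
# data via order statistics, plus a simulation of its coverage.  The
# paper's sequential, PAC-calibrated construction is not reproduced.

rng = np.random.default_rng(1)

def interval_from_calibration(cal, alpha=0.1):
    lo, hi = np.quantile(cal, [alpha / 2, 1 - alpha / 2])
    return lo, hi

coverages = []
for _ in range(2000):
    cal = rng.normal(size=200)          # calibration sample
    lo, hi = interval_from_calibration(cal)
    test = rng.normal(size=1000)        # fresh draws from the same law
    coverages.append(np.mean((test >= lo) & (test <= hi)))
print("mean coverage:", np.mean(coverages))   # near 0.9 by construction
```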

This paper considers the problem of robust iterative Bayesian smoothing in nonlinear state-space models with additive noise using Gaussian approximations. Iterative methods are known to improve smoothed estimates but are not guaranteed to converge, motivating the development of more robust versions of the algorithms. The aim of this article is to present Levenberg-Marquardt (LM) and line-search extensions of the classical iterated extended Kalman smoother (IEKS) as well as the iterated posterior linearisation smoother (IPLS). The IEKS has previously been shown to be equivalent to the Gauss-Newton (GN) method. We derive a similar GN interpretation for the IPLS. Furthermore, we show that an LM extension for both iterative methods can be achieved with a simple modification of the smoothing iterations, enabling algorithms with efficient implementations. Our numerical experiments show the importance of robust methods, in particular for the IEKS-based smoothers. The computationally expensive IPLS-based smoothers are naturally robust but can still benefit from further regularisation.
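The abstract notes that the LM extension amounts to a simple modification of the smoothing iterations; the toy sketch below shows the underlying idea on a generic nonlinear least-squares problem, where the Levenberg-Marquardt step merely damps the Gauss-Newton normal equations with $\lambda I$. The residual, the update schedule for $\lambda$, and all constants are illustrative, and no state-space smoother is implemented here.

```python
import numpy as np

# Hedged sketch of the Gauss-Newton vs. Levenberg-Marquardt step that
# underlies the smoother extensions: LM damps the GN normal equations
# with lambda * I.  Toy residual, not a state-space smoother.

def residual(x):
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])

def jacobian(x):
    return np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])

def lm_step(x, lam):
    J, r = jacobian(x), residual(x)
    A = J.T @ J + lam * np.eye(2)      # lam = 0 recovers Gauss-Newton
    return x - np.linalg.solve(A, J.T @ r)

x, lam = np.array([2.0, 2.0]), 1.0
for _ in range(50):
    x_new = lm_step(x, lam)
    # crude trust-region logic: shrink lam on improvement, grow otherwise
    if np.sum(residual(x_new) ** 2) < np.sum(residual(x) ** 2):
        x, lam = x_new, lam / 2
    else:
        lam *= 4
print(x, residual(x))
```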

Deep learning methods are emerging as popular computational tools for solving forward and inverse problems in traffic flow. In this paper, we study a neural operator framework for learning solutions to nonlinear hyperbolic partial differential equations with applications in macroscopic traffic flow models. In this framework, an operator is trained to map heterogeneous and sparse traffic input data to the complete macroscopic traffic state in a supervised learning setting. We choose a physics-informed Fourier neural operator ($\pi$-FNO) as the operator, where an additional physics loss based on a discrete conservation law regularizes the problem during training to improve shock predictions. We also propose using training data generated from random piecewise-constant input data to systematically capture shock and rarefaction solutions. In experiments with the LWR traffic flow model, we found superior accuracy in predicting the density dynamics of a ring-road network and an urban signalized road. We also found that the operator can be trained using simple traffic density dynamics, e.g., consisting of $2-3$ vehicle queues and $1-2$ traffic signal cycles, and can predict density dynamics for heterogeneous vehicle queue distributions and multiple traffic signal cycles $(\geq 2)$ with acceptable error. The extrapolation error grew sub-linearly with input complexity for a proper choice of model architecture and training data. Adding a physics regularizer aided in learning long-term traffic density dynamics, especially for problems with periodic boundary data.
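The paper's exact discretization of the physics loss is not given above, so the following is a hedged sketch of one plausible form: a discrete conservation-law residual for the LWR model $\rho_t + q(\rho)_x = 0$ with the Greenshields flux $q(\rho) = \rho(1-\rho)$, whose mean squared value could serve as the physics regularizer added to the supervised loss.

```python
import numpy as np

# Hedged sketch of a discrete-conservation-law physics residual for the
# LWR model rho_t + q(rho)_x = 0 with Greenshields flux q = rho(1 - rho).
# The mean squared residual is one plausible regularizer; the paper's
# exact discretization may differ.

def greenshields_flux(rho):
    return rho * (1.0 - rho)

def conservation_residual(rho, dt, dx):
    """Forward-in-time, central-in-space residual on a (time, space) grid."""
    q = greenshields_flux(rho)
    dr_dt = (rho[1:, 1:-1] - rho[:-1, 1:-1]) / dt
    dq_dx = (q[:-1, 2:] - q[:-1, :-2]) / (2.0 * dx)
    return dr_dt + dq_dx

rho = np.random.default_rng(2).uniform(0.0, 1.0, size=(20, 50))
res = conservation_residual(rho, dt=0.01, dx=0.02)
physics_loss = np.mean(res ** 2)      # added to the supervised loss
print(physics_loss)
```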

As a surrogate for computationally intensive meso-scale simulation of woven composites, this article presents Recurrent Neural Network (RNN) models. Leveraging the power of transfer learning, the RNN models address the initialization challenges and sparse-data issues inherent in cyclic shear strain loads. A mean-field model generates a comprehensive data set representing elasto-plastic behavior. In simulations, arbitrary six-dimensional strain histories are used to predict stresses, with random-walk loading as the source task and cyclic loading conditions as the target task. Incorporating sub-scale properties enhances the versatility of the RNN. To achieve accurate predictions, a grid search is used to tune the network architecture and hyper-parameter configuration. The results of this study demonstrate that transfer learning can effectively adapt the RNN to varying strain conditions, establishing its potential as a useful tool for modeling path-dependent responses in woven composites.
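A minimal PyTorch sketch of the transfer-learning setup described above: a GRU maps six-dimensional strain histories to stress histories, is pretrained on the source task, and is then fine-tuned on the target task with the recurrent core frozen. The architecture sizes, the freezing strategy, and the random placeholder data are illustrative assumptions, not the paper's tuned configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch: an RNN maps a strain history (seq_len, 6) to a stress
# history (seq_len, 6).  Pretrain on the source task (random-walk strain
# paths), then fine-tune on the target task (cyclic shear).  Sizes and
# data are placeholders.

class StressRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=6, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)

    def forward(self, strain):
        h, _ = self.rnn(strain)
        return self.head(h)

def train(model, x, y, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

model = StressRNN()
x_src = torch.randn(32, 100, 6)        # placeholder random-walk histories
y_src = torch.randn(32, 100, 6)        # placeholder mean-field stresses
train(model, x_src, y_src)             # source task

for p in model.rnn.parameters():       # freeze the recurrent core,
    p.requires_grad = False            # fine-tune only the output head
x_tgt = torch.randn(8, 100, 6)         # sparse cyclic-loading data
y_tgt = torch.randn(8, 100, 6)
train(model, x_tgt, y_tgt)             # target task
```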

We describe a novel algorithm for solving general parametric (nonlinear) eigenvalue problems. Our method has two steps: first, high-accuracy solutions of non-parametric versions of the problem are gathered at some values of the parameters; these are then combined to obtain global approximations of the parametric eigenvalues. To gather the non-parametric data, we use non-intrusive contour-integration-based methods, which, however, cannot track eigenvalues that migrate into or out of the contour as the parameter changes. Special strategies are described for performing the combination-over-parameter step despite having only partial information on such migrating eigenvalues. Moreover, we devote special attention to the approximation of eigenvalues that undergo bifurcations. Finally, we propose an adaptive strategy that allows one to effectively apply our method even without any a priori information on the behavior of the sought-after eigenvalues. Numerical tests are performed, showing that our algorithm can achieve remarkably high approximation accuracy.
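A hedged sketch of the two-step structure on a toy problem: non-parametric eigenproblems are solved at a few parameter samples, and the samples are combined into a global approximation by per-branch polynomial interpolation. The dense solver and the naive sort-based eigenvalue matching below stand in for the paper's contour-integration solves and migration-aware combination strategies.

```python
import numpy as np

# Hedged sketch of the two-step structure: (1) solve non-parametric
# eigenproblems at a few parameter samples, (2) combine the samples into
# a global approximation of each parametric eigenvalue by interpolation.
# Dense solves and sort-based matching replace the paper's machinery.

def A(p):
    """Toy parametric matrix; eigenvalues are smooth functions of p."""
    return np.array([[2.0 + p, 1.0], [1.0, -1.0 + p ** 2]])

params = np.linspace(0.0, 1.0, 5)
samples = np.array([np.sort(np.linalg.eigvalsh(A(p))) for p in params])

# Global approximation: one polynomial per (sorted) eigenvalue branch.
coeffs = [np.polyfit(params, samples[:, k], deg=3) for k in range(2)]

p_test = 0.37
approx = [np.polyval(c, p_test) for c in coeffs]
exact = np.sort(np.linalg.eigvalsh(A(p_test)))
print(approx, exact)
```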

This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms. The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in $L_{\psi_p}$ Orlicz spaces. Using the decorrelation lemma in combination with other techniques, such as symmetrization, couplings, and chaining in the space of probability measures, we obtain new upper bounds on the generalization error, both in expectation and in high probability, and recover as special cases many of the existing generalization bounds, including the ones based on mutual information, conditional mutual information, stochastic chaining, and PAC-Bayes inequalities. In addition, the Fernique-Talagrand upper bound on the expected supremum of a subgaussian process emerges as a special case.
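For concreteness, one of the special cases recovered by such frameworks is the classical mutual-information bound of Xu and Raginsky (2017), stated below for reference; the paper's decorrelation-lemma route to it is not reproduced here.

```latex
% Mutual-information generalization bound (Xu and Raginsky, 2017).
% For a loss that is sigma-subgaussian under the data distribution, the
% expected generalization error of an algorithm with output W trained on
% an n-sample S satisfies
\[
  \bigl|\mathbb{E}[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}}.
\]
```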

Computer-aided molecular design (CAMD) studies quantitative structure-property relationships and discovers desired molecules using optimization algorithms. With the emergence of machine learning models, CAMD score functions may be replaced by various surrogates that automatically learn the structure-property relationships. Owing to their outstanding performance on graph domains, graph neural networks (GNNs) have recently appeared frequently in CAMD. Using GNNs, however, introduces new optimization challenges. This paper formulates GNNs using mixed-integer programming and then integrates this GNN formulation into the optimization and machine learning toolkit OMLT. To characterize and formulate molecules, we inherit the well-established mixed-integer optimization formulation for CAMD and propose symmetry-breaking constraints to remove symmetric solutions caused by graph isomorphism. In two case studies, we investigate fragment-based odorant molecular design with additional practical requirements, testing the compatibility and performance of our approaches.
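As one concrete building block of such mixed-integer formulations, a ReLU unit $y = \max(0, w^\top x + b)$ is commonly encoded with big-M constraints and a binary activation indicator; the encoding below is the standard one (assuming the bound $|w^\top x + b| \le M$ holds) and may differ in detail from the formulation the paper implements in OMLT.

```latex
% Standard big-M encoding of y = max(0, w^T x + b) with binary z:
\[
  y \ge w^{\top} x + b, \qquad
  y \le w^{\top} x + b + M(1 - z), \qquad
  y \le M z, \qquad
  y \ge 0, \qquad
  z \in \{0, 1\}.
\]
% z = 1 forces y = w^T x + b >= 0 (active unit); z = 0 forces y = 0.
```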

This paper focuses on the randomized Milstein scheme for approximating solutions to stochastic Volterra integral equations with weakly singular kernels, where the drift coefficients are non-differentiable. An essential component of the error analysis is the use of randomized quadrature rules for stochastic integrals, which avoids Taylor expansions of the drift coefficient functions. Finally, in the numerical experiments we simulate the multiple singular stochastic integrals by applying the Riemann-Stieltjes integral.
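A hedged sketch of the Riemann-Stieltjes idea mentioned above: a weakly singular stochastic integral $\int_0^t (t-s)^{-\alpha}\, dW(s)$ approximated by a left-endpoint Riemann-Stieltjes sum against a simulated Brownian path. The grid, the value of $\alpha$, and the single-integral (rather than multiple-integral) setting are simplifying assumptions.

```python
import numpy as np

# Hedged sketch: approximate int_0^t (t - s)^{-alpha} dW(s) by a
# left-endpoint Riemann-Stieltjes sum against a simulated Brownian path.
# Left endpoints keep the weight finite at s = t; alpha and the grid are
# illustrative.

rng = np.random.default_rng(3)
alpha, t, n = 0.3, 1.0, 10_000
s = np.linspace(0.0, t, n + 1)
dW = rng.normal(scale=np.sqrt(t / n), size=n)   # Brownian increments
weights = (t - s[:-1]) ** (-alpha)              # evaluated at left endpoints
print("one sample of the integral:", np.sum(weights * dW))
```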

Networks in machine learning offer examples of complex high-dimensional dynamical systems reminiscent of biological systems. Here, we study the learning dynamics of Generalized Hopfield networks, which permit a visualization of internal memories. These networks have been shown to proceed through a 'feature-to-prototype' transition as the strength of the network nonlinearity is increased, wherein the learned, or terminal, states of internal memories transition from mixed to pure states. Focusing on the prototype learning dynamics of the internal memories, we observe a strong resemblance to the canalized, or low-dimensional, dynamics of cells as they differentiate within a Waddingtonian landscape. Dynamically, we demonstrate that learning in a Generalized Hopfield network proceeds through sequential 'splits' in memory space. Furthermore, the order of splitting is interpretable and reproducible. The dynamics between the splits are canalized in the Waddington sense -- robust to variations in detailed aspects of the system. In attempting to make the analogy a rigorous equivalence, we study smaller subsystems that exhibit similar properties to the full system. We combine analytical calculations with numerical simulations to study the dynamical emergence of the feature-to-prototype transition and the behaviour of the splits in the landscape, i.e., the saddle points visited during learning. We exhibit regimes where saddles appear and disappear through saddle-node bifurcations, qualitatively changing the distribution of learned memories as the strength of the nonlinearity is varied -- allowing us to systematically investigate the mechanisms that underlie the emergence of Waddingtonian dynamics. Memories can thus differentiate in a predictive and controlled way, revealing new bridges between experimental biology, dynamical systems theory, and machine learning.
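The paper's training procedure is not reproduced below; as a hedged toy model of the feature-to-prototype transition, the sketch updates memories toward data under a softmax assignment whose inverse temperature $\beta$ plays the role of the network nonlinearity: at small $\beta$ both memories settle near the data mean (mixed states), while at large $\beta$ they split into the two cluster prototypes.

```python
import numpy as np

# Hedged toy model of prototype formation: memories move toward data
# points under a softmax assignment with inverse temperature beta.
# Small beta yields mixed (feature-like) memories; large beta drives a
# feature-to-prototype transition.  Illustrative, not the paper's setup.

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(-2.0, 0.3, size=(50, 2)),
                  rng.normal(+2.0, 0.3, size=(50, 2))])

def learn_memories(data, n_mem=2, beta=5.0, lr=0.1, steps=500):
    mem = rng.normal(size=(n_mem, data.shape[1]))
    for _ in range(steps):
        x = data[rng.integers(len(data))]
        overlap = beta * mem @ x
        p = np.exp(overlap - overlap.max())
        p /= p.sum()                        # softmax assignment to memories
        mem += lr * p[:, None] * (x - mem)  # move assigned memories toward x
    return mem

print(learn_memories(data, beta=0.1))   # low beta: mixed memories near 0
print(learn_memories(data, beta=10.0))  # high beta: prototypes near +-2
```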
