
Compared with conventional numerical approaches to solving partial differential equations (PDEs), physics-informed neural networks (PINN) have demonstrated the ability to save development effort and computational cost, especially when reconstructing physics fields and solving inverse problems. Given the advantages of parameter sharing, spatial feature extraction, and low inference cost, convolutional neural networks (CNN) are increasingly used in PINN. However, several challenges remain. Adapting convolutional PINN to different PDEs usually requires considerable effort in tuning critical hyperparameters. Furthermore, the effects of finite difference accuracy and mesh resolution on the predictive capability of convolutional PINN remain unsettled. To fill these gaps, we propose three initiatives in this paper: (1) a Multi-Receptive-Field PINN (MRF-PINN) model is established to solve different types of PDEs on various mesh resolutions without manual tuning; (2) a dimensional balance method is used to estimate the loss weights when solving Navier-Stokes equations; (3) Taylor polynomials are used to pad the virtual nodes near the boundaries for implementing high-order finite differences. The proposed MRF-PINN is tested on three typical linear PDEs (elliptic, parabolic, hyperbolic) and a series of nonlinear PDEs (Navier-Stokes) to demonstrate its generality and superiority. This paper shows that MRF-PINN can adapt to completely different equation types and mesh resolutions without any hyperparameter tuning. The dimensional balance method saves computational time and improves convergence when solving the Navier-Stokes equations. Further, the solving error is significantly decreased under high-order finite differences, large channel numbers, and high mesh resolution, suggesting that MRF-PINN can serve as a general convolutional PINN scheme.
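
The core mechanism described above, evaluating PDE residuals with finite-difference stencils implemented as convolution kernels, can be illustrated with a short sketch. This is not the authors' MRF-PINN code; the 3x3 Laplacian stencil, the toy Poisson residual, and the names `laplacian_fd` and `poisson_residual_loss` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def laplacian_fd(u, dx):
    """Second-order central-difference Laplacian applied as a fixed 3x3 convolution.
    u: predicted field of shape (batch, 1, H, W)."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], dtype=u.dtype, device=u.device)
    kernel = kernel.view(1, 1, 3, 3) / dx ** 2
    return F.conv2d(u, kernel)                      # interior nodes only (no padding)

def poisson_residual_loss(u_pred, f, dx):
    """Physics loss ||Laplacian(u) - f||^2 evaluated on interior nodes."""
    res = laplacian_fd(u_pred, dx) - f[:, :, 1:-1, 1:-1]
    return (res ** 2).mean()

# toy usage: in a convolutional PINN, u would come from a CNN instead of a free tensor
u = torch.randn(1, 1, 64, 64, requires_grad=True)
f = torch.zeros(1, 1, 64, 64)
loss = poisson_residual_loss(u, f, dx=1.0 / 63)
loss.backward()
```

A multi-receptive-field variant would apply several such kernels of different sizes in parallel and combine their weighted residual losses.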

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the fields of expertise represented on the Neural Networks editorial board include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematical and Computational Analysis, and Engineering and Applications. Official website:

Deep convolutional neural networks (CNNs) with a large number of parameters require intensive computational resources and are thus hard to deploy on resource-constrained platforms. Decomposition-based methods have therefore been used to compress CNNs in recent years. However, since the compression factor and performance are negatively correlated, state-of-the-art works either suffer from severe performance degradation or achieve relatively low compression factors. To overcome this problem, we propose to compress CNNs and alleviate performance degradation via joint matrix decomposition, which differs from existing works that compress layers separately. The idea is inspired by the fact that CNNs contain many repeated modules. By projecting weights with the same structures into the same subspace, networks can be jointly compressed with larger ranks. In particular, three joint matrix decomposition schemes are developed, and the corresponding optimization approaches based on Singular Value Decomposition are proposed. Extensive experiments are conducted across three challenging compact CNNs on different benchmark data sets to demonstrate the superior performance of our proposed algorithms. As a result, our methods can compress ResNet-34 by 22x with less accuracy degradation than several state-of-the-art methods.
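
As a rough, hedged illustration of the SVD building block that decomposition-based compression (including the joint schemes above) relies on, the sketch below truncates a single weight matrix to rank r; the function name and matrix sizes are my own choices, not the paper's.

```python
import numpy as np

def svd_compress(W, rank):
    """Return thin factors (A, B) with W approximately equal to A @ B.
    A has shape (m, rank) and B has shape (rank, n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(512, 512)           # stand-in for a layer's weight matrix
A, B = svd_compress(W, rank=64)
print("params before:", W.size, "after:", A.size + B.size)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

Joint schemes go further by sharing one of the factors across layers with the same structure, which is what allows larger ranks at the same overall compression factor.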

We present FO-PINNs, physics-informed neural networks that are trained using the first-order formulation of the Partial Differential Equation (PDE) losses. We show that FO-PINNs offer significantly higher accuracy in solving parameterized systems compared to traditional PINNs, and reduce time-per-iteration by removing the extra backpropagations needed to compute the second or higher-order derivatives. Additionally, unlike standard PINNs, FO-PINNs can be used with exact imposition of boundary conditions using approximate distance functions, and can be trained using Automatic Mixed Precision (AMP) to further speed up the training. Through two Helmholtz and Navier-Stokes examples, we demonstrate the advantages of FO-PINNs over traditional PINNs in terms of accuracy and training speedup.
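
A minimal sketch of the first-order reformulation idea, assuming a toy 1D Poisson problem: the network outputs both the solution u and an auxiliary gradient field p, so the residual requires only first-order automatic differentiation. Network size and variable names are illustrative, not the FO-PINN implementation.

```python
import torch

# the network predicts two outputs per point: the solution u and an auxiliary field p = u_x
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))

def fo_residual_loss(x, f):
    x = x.requires_grad_(True)
    u, p = net(x).split(1, dim=1)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    compat = u_x - p                 # compatibility: p must equal u_x
    pde = p_x - f                    # first-order form of u_xx = f
    return (compat ** 2).mean() + (pde ** 2).mean()

x = torch.rand(128, 1)
loss = fo_residual_loss(x, f=torch.zeros(128, 1))
loss.backward()
```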

There has been a growing interest in the use of Deep Neural Networks (DNNs) to solve Partial Differential Equations (PDEs). Despite the promise such approaches hold, there are various aspects in which they could be improved. Two such shortcomings are (i) their computational inefficiency relative to classical numerical methods, and (ii) the non-interpretability of a trained DNN model. In this work we present ASPINN, an anisotropic extension of our earlier work called SPINN (Sparse, Physics-informed, and Interpretable Neural Networks) for solving PDEs, which addresses both these issues. ASPINNs generalize radial basis function networks. We demonstrate, using a variety of examples involving elliptic and hyperbolic PDEs, that the special architecture we propose is more efficient than generic DNNs while remaining directly interpretable. Further, they improve upon the SPINN models we proposed earlier in that fewer nodes are required to capture the solution with ASPINN than with SPINN, thanks to the anisotropy of the local zones of influence of each node. The interpretability of ASPINN translates to a ready visualization of its weights and biases, thereby yielding more insight into the nature of the trained model. This in turn provides a systematic procedure to improve the architecture based on the quality of the computed solution. ASPINNs thus serve as an effective bridge between classical numerical algorithms and modern DNN-based methods for solving PDEs. In the process, we also streamline the training of ASPINNs into a form that is closer to that of supervised learning algorithms.
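
The sketch below is my own minimal construction (not the ASPINN code) of an anisotropic radial-basis layer: each node carries a centre and a learnable linear map, so its zone of influence is elliptical rather than circular.

```python
import torch

class AnisotropicRBF(torch.nn.Module):
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.centres = torch.nn.Parameter(torch.rand(n_nodes, dim))
        # one learnable matrix per node shapes its (anisotropic) zone of influence
        self.L = torch.nn.Parameter(torch.eye(dim).repeat(n_nodes, 1, 1))
        self.weights = torch.nn.Parameter(torch.zeros(n_nodes))

    def forward(self, x):                                          # x: (batch, dim)
        diff = x[:, None, :] - self.centres[None, :, :]            # (batch, n, dim)
        z = torch.einsum('nij,bnj->bni', self.L, diff)             # L_i (x - c_i)
        phi = torch.exp(-(z ** 2).sum(-1))                         # anisotropic Gaussian bumps
        return phi @ self.weights                                  # weighted sum of nodes

model = AnisotropicRBF(n_nodes=20, dim=2)
y = model(torch.rand(8, 2))
```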

Physics-Informed Neural Networks (PINNs) have become an attractive machine learning method for obtaining solutions of partial differential equations (PDEs). Training PINNs can be seen as a semi-supervised learning task, in which exact values are available only at initial and boundary points when solving forward problems, while collocation points sampled throughout the spatio-temporal domain carry no exact labels, which makes training difficult. The selection of collocation points and the sampling method are therefore crucial in training PINNs. Existing sampling methods are either fixed or dynamic; in the more popular dynamic type, sampling is usually guided by the PDE residual loss. We point out that considering only the residual loss in adaptive sampling is not sufficient and that sampling should obey temporal causality. We therefore introduce temporal causality into adaptive sampling and propose a novel adaptive causal sampling method to improve the performance and efficiency of PINNs. Numerical experiments on several PDEs with high-order derivatives and strong nonlinearity, including the Cahn-Hilliard and KdV equations, show that the proposed sampling method improves the performance of PINNs with few collocation points. We demonstrate that with this relatively simple sampling method, prediction performance can be improved by up to two orders of magnitude compared with state-of-the-art results at almost no extra computational cost, especially when collocation points are limited.
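
A hedged sketch of how residual-based adaptive sampling can be combined with a temporal-causality weight: candidate points are drawn with probability proportional to their PDE residual, discounted by the accumulated residual at earlier times so that later times are sampled only once earlier times are resolved. The exponential weight and the parameter `eps` are illustrative choices, not necessarily the paper's exact scheme.

```python
import numpy as np

def causal_adaptive_sample(points, residuals, n_select, eps=1.0):
    """points: (N, 2) array of (x, t) candidates; residuals: (N,) PDE residual magnitudes."""
    t = points[:, 1]
    order = np.argsort(t)
    # cumulative residual of all strictly earlier-time points defines the causal discount
    cum = np.zeros_like(residuals)
    cum[order] = np.cumsum(residuals[order]) - residuals[order]
    causal_weight = np.exp(-eps * cum)
    prob = residuals * causal_weight
    prob = prob / prob.sum()
    idx = np.random.choice(len(points), size=n_select, replace=False, p=prob)
    return points[idx]

pts = np.random.rand(1000, 2)               # (x, t) candidate collocation points
res = np.random.rand(1000)                  # stand-in for |PDE residual| at each candidate
chosen = causal_adaptive_sample(pts, res, n_select=100)
```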

This study presents a novel unsupervised convolutional Neural Network (NN) architecture with nonlocal interactions for solving Partial Differential Equations (PDEs). The nonlocal Peridynamic Differential Operator (PDDO) is employed as a convolutional filter for evaluating derivatives of the field variable. The NN captures the time dynamics in a smaller latent space through encoder-decoder layers with a Convolutional Long Short-Term Memory (ConvLSTM) layer between them. The ConvLSTM architecture is modified by employing a novel activation function to improve the predictive capability of the learning architecture for physics with periodic behavior. The physics is invoked in the form of governing equations at the output of the NN and in the latent (reduced) space. Considering a few benchmark PDEs, we demonstrate the training performance and extrapolation capability of this novel NN architecture by comparing it against Physics-Informed Neural Network (PINN) type solvers. It is more capable of extrapolating the solution to future timesteps than other existing architectures.
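
For readers unfamiliar with the recurrent component, the sketch below shows a generic ConvLSTM cell of the kind placed between the encoder and decoder; it is a standard formulation with my own naming, not the modified cell or activation function proposed in the paper.

```python
import torch

class ConvLSTMCell(torch.nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # all four gates computed by one convolution so hidden state keeps its spatial layout
        self.conv = torch.nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)           # cell state update
        h = o * torch.tanh(c)                   # spatially structured hidden state
        return h, c

cell = ConvLSTMCell(in_ch=1, hid_ch=8)
x = torch.randn(2, 1, 32, 32)
h = torch.zeros(2, 8, 32, 32)
c = torch.zeros(2, 8, 32, 32)
h, c = cell(x, (h, c))
```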

Time-dependent Partial Differential Equations with given initial conditions are considered in this paper. New techniques for differentiating the unknown solution with respect to the time variable are proposed. It is shown that the proposed techniques can generate accurate higher-order derivatives simultaneously for a set of spatial points. The calculated derivatives can then be used for data-driven solution in different ways. An application to Physics-Informed Neural Networks with the well-known DeepXDE software package in Python, using a TensorFlow backend, is presented for three real-life PDEs: the Burgers', Allen-Cahn and Schrödinger equations.
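
A library-agnostic sketch (not the paper's DeepXDE implementation) of generating higher-order time derivatives of a network u(x, t) by repeated automatic differentiation at a fixed set of spatial points; the network and the helper function below are assumptions for illustration.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def time_derivatives(x, t, order):
    """Return [u, u_t, u_tt, ...] up to the requested order at the points (x, t)."""
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    outs = [u]
    for _ in range(order):
        u = torch.autograd.grad(u.sum(), t, create_graph=True)[0]   # next time derivative
        outs.append(u)
    return outs

x = torch.linspace(0, 1, 50).view(-1, 1)      # a set of spatial points
t = torch.full_like(x, 0.1)                    # evaluated at a single time level
u, u_t, u_tt = time_derivatives(x, t, order=2)
```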

Physics-informed neural networks (PINNs) have been widely applied in different fields due to their effectiveness in solving partial differential equations (PDEs). However, the accuracy and efficiency of PINNs need to be improved considerably for scientific and commercial use. To address this issue, we systematically propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of the PINN. In the DaPINN model, we introduce inductive bias in the neural network to enhance network generalizability by adding a special regularization term to the loss function. Furthermore, we manipulate the network input dimension by inserting additional sample features and incorporating the expanded dimensionality in the loss function. Moreover, we verify the effectiveness of power series augmentation, Fourier series augmentation and replica augmentation in both forward and backward problems. In most experiments, the error of DaPINN is 1 to 2 orders of magnitude lower than that of PINN. The results show that DaPINN outperforms the original PINN in terms of both accuracy and efficiency, with a reduced dependence on the number of sample points. We also discuss the complexity of DaPINN and its compatibility with other methods.
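
A hedged sketch of the input-dimension augmentation idea in its power-series variant: the network receives [x, x^2, x^3, t] instead of [x, t]. The feature choice, degree, and layer sizes are illustrative, not the exact DaPINN configuration.

```python
import torch

def augment_power(xt, degree=3):
    """xt: (N, 2) with columns (x, t) -> (N, degree + 1) augmented input features."""
    x, t = xt[:, :1], xt[:, 1:]
    feats = [x ** k for k in range(1, degree + 1)] + [t]
    return torch.cat(feats, dim=1)

# input layer widened to accept the augmented features
net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))

xt = torch.rand(256, 2)
u = net(augment_power(xt))
```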

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
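
A small sketch of the basic neural-ODE construction the thesis surveys: the vector field of an ODE is parameterised by a neural network and integrated with a differentiable solver. The example assumes the `torchdiffeq` package is installed; the class and variable names are mine.

```python
import torch
from torchdiffeq import odeint

class VectorField(torch.nn.Module):
    """dy/dt = f_theta(t, y), with f_theta a small neural network."""
    def __init__(self, dim=2, width=32):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(dim, width), torch.nn.Tanh(),
                                       torch.nn.Linear(width, dim))

    def forward(self, t, y):
        return self.net(y)

func = VectorField()
y0 = torch.tensor([[1.0, 0.0]])
t = torch.linspace(0.0, 1.0, 20)
trajectory = odeint(func, y0, t)    # shape (len(t), 1, 2), differentiable w.r.t. theta
```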

This paper addresses the difficulty of forecasting multiple financial time series (TS) conjointly using deep neural networks (DNN). We investigate whether DNN-based models can forecast these TS more efficiently by learning their representation directly. To this end, we make use of the dynamic factor graph (DFG), which we enhance by proposing a novel variable-length attention-based mechanism that renders it memory-augmented. Using this mechanism, we propose an unsupervised DNN architecture for multivariate TS forecasting that allows the model to learn and take advantage of the relationships between these TS. We test our model on two datasets covering 19 years of investment fund activities. Our experimental results show that our proposed approach significantly outperforms typical DNN-based and statistical models at forecasting the 21-day price trajectory.
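
As an illustrative simplification (my own, not the paper's mechanism), the sketch below shows attention over a variable-length memory of past latent states, the kind of memory augmentation the abstract describes.

```python
import torch

def attend(query, memory):
    """query: (B, d) current latent state; memory: (B, T, d) stored past states."""
    scores = torch.einsum('bd,btd->bt', query, memory) / memory.size(-1) ** 0.5
    weights = torch.softmax(scores, dim=-1)          # attention over the T past states
    return torch.einsum('bt,btd->bd', weights, memory)   # context vector

q = torch.randn(4, 16)
mem = torch.randn(4, 10, 16)
context = attend(q, mem)
```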

Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and consequently classifies the recent innovations in CNN architectures into seven different categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
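
To make the "block as a structural unit" idea concrete, here is a standard residual block sketch; it is generic and not tied to any particular architecture covered by the survey.

```python
import torch

class ResidualBlock(torch.nn.Module):
    """Two convolutions grouped into one reusable structural unit with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        y = torch.relu(self.conv1(x))
        y = self.conv2(y)
        return torch.relu(x + y)        # the skip connection defines the block

block = ResidualBlock(16)
out = block(torch.randn(1, 16, 32, 32))
```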
