
We propose a general framework for solving forward and inverse problems constrained by partial differential equations, in which we interpolate neural networks onto finite element spaces to represent the (partial) unknowns. The framework overcomes the challenges related to the imposition of boundary conditions, the choice of collocation points in physics-informed neural networks, and the evaluation of integrals in variational physics-informed neural networks. A set of numerical experiments confirms the framework's capability of handling various forward and inverse problems. In particular, the trained neural network generalises well for smooth problems, outperforming finite element solutions by several orders of magnitude. Finally, we propose an effective one-loop solver with an initial data-fitting step (to obtain a cheap initialisation) for solving inverse problems.
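
As a rough illustration of the interpolation step described above (not the paper's actual architecture or discretisation), the sketch below evaluates a small fixed network at the nodes of a one-dimensional mesh, so that the nodal values become the coefficients of a piecewise-linear finite element function on which boundary conditions can be imposed exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed MLP u_theta: R -> R standing in for the trained network.
W1, b1 = rng.standard_normal((16, 1)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def u_theta(x):
    """Evaluate the network at points x (shape (n,))."""
    h = np.tanh(np.outer(x, W1[:, 0]) + b1)
    return h @ W2[0] + b2[0]

# Interpolate the network onto a P1 finite element space on [0, 1]:
# the FE coefficients are simply the nodal values of the network.
nodes = np.linspace(0.0, 1.0, 33)
coeffs = u_theta(nodes)

# Dirichlet data can be imposed exactly by overwriting boundary coefficients;
# standard FE machinery (quadrature, assembly) then acts on `coeffs`.
coeffs[0], coeffs[-1] = 0.0, 0.0

def interpolant(x):
    """Piecewise-linear interpolant I_h u_theta evaluated at x."""
    return np.interp(x, nodes, coeffs)

print(interpolant(np.array([0.1, 0.5, 0.9])))
```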

Related content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioural and brain modelling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

In this study, we consider a class of non-autonomous time-fractional partial advection-diffusion-reaction (TF-ADR) equations with a Caputo-type fractional derivative. To obtain a numerical solution of the model problem, we apply the non-symmetric interior penalty Galerkin (NIPG) method in space on a uniform mesh and the L1 scheme in time on a graded mesh. It is demonstrated that the computed solution is discretely stable. Superconvergent error estimates for the proposed method are obtained in the discrete energy norm. We also apply the proposed method to semilinear problems after linearizing them via Newton's linearization process. The theoretical results are verified through numerical experiments.
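
A minimal sketch of the L1 scheme on a graded mesh, applied for simplicity to the scalar fractional relaxation equation $D_t^\alpha u = -\lambda u$ rather than the full TF-ADR problem; the grading exponent and parameters below are illustrative choices, not the paper's:

```python
import numpy as np
from math import gamma

def l1_caputo_ode(alpha=0.5, lam=1.0, T=1.0, M=200, r=3.0):
    """L1 scheme on a graded mesh for D_t^alpha u = -lam*u, u(0) = 1.

    The graded mesh t_j = T*(j/M)**r compensates for the weak initial-layer
    singularity typical of Caputo-type problems; r = (2 - alpha)/alpha
    (here 3 for alpha = 0.5) is a commonly used grading.
    """
    t = T * (np.arange(M + 1) / M) ** r
    u = np.zeros(M + 1)
    u[0] = 1.0
    for n in range(1, M + 1):
        # L1 weights d_{n,k}, k = 0..n-1, for the Caputo derivative at t_n
        tk, tk1 = t[:n], t[1:n + 1]
        d = ((t[n] - tk) ** (1 - alpha) - (t[n] - tk1) ** (1 - alpha)) \
            / (gamma(2 - alpha) * (tk1 - tk))
        # history part: sum_{k < n-1} d_{n,k} (u_{k+1} - u_k)
        hist = np.dot(d[:-1], u[1:n] - u[:n - 1])
        # implicit update: d_{n,n-1}(u_n - u_{n-1}) + hist = -lam * u_n
        u[n] = (d[-1] * u[n - 1] - hist) / (d[-1] + lam)
    return t, u

t, u = l1_caputo_ode()
print(u[-1])   # approximates the Mittag-Leffler decay E_alpha(-t^alpha) at t = T
```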

Stochastic sequential quadratic optimization (SQP) methods for solving continuous optimization problems with nonlinear equality constraints have attracted attention recently, such as for solving large-scale data-fitting problems subject to nonconvex constraints. However, for a recently proposed subclass of such methods that is built on the popular stochastic-gradient methodology from the unconstrained setting, convergence guarantees have been limited to the asymptotic convergence of the expected value of a stationarity measure to zero. This is in contrast to the unconstrained setting in which almost-sure convergence guarantees (of the gradient of the objective to zero) can be proved for stochastic-gradient-based methods. In this paper, new almost-sure convergence guarantees for the primal iterates, Lagrange multipliers, and stationarity measures generated by a stochastic SQP algorithm in this subclass of methods are proved. It is shown that the error in the Lagrange multipliers can be bounded by the distance of the primal iterate to a primal stationary point plus the error in the latest stochastic gradient estimate. It is further shown that, subject to certain assumptions, this latter error can be made to vanish by employing a running average of the Lagrange multipliers that are computed during the run of the algorithm. The results of numerical experiments are provided to demonstrate the proved theoretical guarantees.
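
The following toy sketch illustrates the multiplier-averaging idea on a two-variable quadratic with one linear equality constraint. It is not the paper's algorithm (which includes, for example, merit-parameter and step-size rules); it is just a stochastic SQP step with an identity Hessian plus a running average of the computed multipliers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: min 0.5*||x||^2  s.t.  x_0 + x_1 = 1
# solution x* = (0.5, 0.5), multiplier y* = -0.5 (with Lagrangian f + y*c)
J = np.array([[1.0, 1.0]])            # constraint Jacobian (constant here)
x = np.array([2.0, -1.0])
y_avg, n_avg = 0.0, 0
alpha, sigma = 0.1, 0.5               # step size, gradient noise level

for k in range(2000):
    g = x + sigma * rng.standard_normal(2)      # stochastic objective gradient
    c = np.array([x[0] + x[1] - 1.0])
    # SQP subproblem with identity Hessian: KKT system for step d and multiplier y
    K = np.block([[np.eye(2), J.T], [J, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    d, y = sol[:2], sol[2]
    x = x + alpha * d
    # running average of the multipliers, whose error the paper shows can vanish
    y_avg = (n_avg * y_avg + y) / (n_avg + 1)
    n_avg += 1

print(x, y_avg)   # x near (0.5, 0.5); averaged multiplier near -0.5
```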

Blow-up solutions to a heat equation with spatial periodicity and a quadratic nonlinearity are studied through asymptotic analyses and a variety of numerical methods. The focus is on the dynamics of the singularities in the complexified space domain. Blow up in finite time is caused by these singularities eventually reaching the real axis. The analysis provides a distinction between small and large nonlinear effects, as well as insight into the various time scales on which blow up is approached. It is shown that an ordinary differential equation with quadratic nonlinearity plays a central role in the asymptotic analysis. This equation is studied in detail, including its numerical computation on multiple Riemann sheets, and the far-field solutions are shown to be given at leading order by a Weierstrass elliptic function.
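
As an illustration of the quadratic-nonlinearity mechanism (not necessarily the exact reduced equation analysed in the paper), the scalar model already shows how a singularity off the real axis controls blow-up:

$$
u'(t) = u(t)^2, \quad u(0) = u_0 \;\Longrightarrow\; u(t) = \frac{u_0}{1 - u_0 t},
$$

so the solution has a pole at $t = 1/u_0$. For real $u_0 > 0$ this is finite-time blow-up, while for complex $u_0$ the pole lies off the real axis and blow-up occurs only if the singularity reaches it, mirroring the role played by complex-space singularities in the spatially periodic problem.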

The solutions of scalar ordinary differential equations become more complex as their coefficients increase in magnitude. As a consequence, when a standard solver is applied to such an equation, its running time grows with the magnitudes of the equation's coefficients. It is well known, however, that scalar ordinary differential equations with slowly-varying coefficients admit slowly-varying phase functions whose cost to represent via standard techniques is largely independent of the magnitude of the equation's coefficients. This observation is the basis of most methods for the asymptotic approximation of the solutions of ordinary differential equations, including the WKB method. Here, we introduce two numerical algorithms for constructing phase functions for scalar ordinary differential equations inspired by the classical Levin method for the calculation of oscillatory integrals. In the case of a large class of scalar ordinary differential equations with slowly-varying coefficients, their running times are independent of the magnitude of the equation's coefficients. The results of extensive numerical experiments demonstrating the properties of our algorithms are presented.
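
For reference, one standard form of the phase-function representation that such methods compute (sign conventions vary between references) is

$$
u''(t) + q(t)\,u(t) = 0, \qquad
u_1(t) = \frac{\cos\alpha(t)}{\sqrt{\alpha'(t)}}, \quad
u_2(t) = \frac{\sin\alpha(t)}{\sqrt{\alpha'(t)}},
$$

where the phase function $\alpha$ satisfies Kummer's equation

$$
q(t) \;=\; \bigl(\alpha'(t)\bigr)^2 \;+\; \frac{1}{2}\,\frac{\alpha'''(t)}{\alpha'(t)} \;-\; \frac{3}{4}\left(\frac{\alpha''(t)}{\alpha'(t)}\right)^{2}.
$$

When $q$ is slowly varying, $\alpha' \approx \sqrt{q}$ is also slowly varying even if $q$ is very large, which is why the phase function is cheap to represent regardless of the magnitude of the coefficients.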

In this work, we solve inverse problems for nonlinear Schr\"{o}dinger equations that can be formulated as the learning process of a special convolutional neural network. Instead of attempting to approximate functions in the inverse problems, we embed a library as a low-dimensional manifold in the network, so that the unknowns reduce to a few scalars. The nonlinear Schr\"{o}dinger equation (NLSE) is $i\frac{\partial \psi}{\partial t}-\beta\frac{\partial^2 \psi}{\partial x^2}+\gamma|\psi|^2\psi+V(x)\psi=0,$ where the wave function $\psi(x,t)$ is the solution of the forward problem and the potential $V(x)$ is the quantity of interest in the inverse problem. The main contributions of this work are twofold. First, we construct a special neural network directly from the Schr\"{o}dinger equation, motivated by a splitting method. The physics behind the construction enhances the explainability of the neural network. Second, we use a library-search algorithm to project the solution space of the inverse problem onto a lower-dimensional space. Seeking the solution in a reduced approximation space can be traced back to compressed sensing theory. The motivation for this part is to alleviate the training burden of estimating functions: with a well-chosen library, the training process is greatly simplified. A brief analysis is given, focusing on the well-posedness of some of the inverse problems considered and the convergence of the neural network approximation. To show the effectiveness of the proposed method, we apply it to several representative problems, including simple equations and a coupled system. The results corroborate the theoretical analysis. In future work, we plan to explore manifold learning to enhance the approximation power of the library-search algorithm.
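
Since the network construction is motivated by a splitting method, a minimal sketch of the classical split-step Fourier scheme for the NLSE as written above may help fix ideas. The harmonic potential and all parameters below are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

# Strang split-step integrator for the NLSE in the form used above:
#   i psi_t - beta psi_xx + gamma |psi|^2 psi + V(x) psi = 0
beta, gam = -0.5, 1.0
L, N, dt, steps = 20.0, 256, 1e-3, 2000

x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x ** 2                          # hypothetical potential (the inverse-problem unknown)
psi = np.exp(-x ** 2).astype(complex)     # initial condition

for _ in range(steps):
    # half step of the potential/nonlinear flow: psi_t = i (gam |psi|^2 + V) psi
    psi *= np.exp(0.5j * dt * (gam * np.abs(psi) ** 2 + V))
    # full step of the linear flow psi_t = -i beta psi_xx, exact in Fourier space
    psi = np.fft.ifft(np.exp(1j * beta * k ** 2 * dt) * np.fft.fft(psi))
    # second half step of the potential/nonlinear flow
    psi *= np.exp(0.5j * dt * (gam * np.abs(psi) ** 2 + V))

print(np.sum(np.abs(psi) ** 2) * dx)      # L2 mass should be (nearly) conserved
```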

Forward and inverse models are used throughout different engineering fields to predict and understand the behaviour of systems and to find parameters from a set of observations. These models rely on root-finding and minimisation techniques, respectively, to achieve their goals. This paper introduces enhancements to these mathematical methods that improve the convergence behaviour of the overarching models when they are applied to highly non-linear systems. The performance of the new techniques is examined in detail and compared to that of the standard methods. The improved techniques are also tested with FEM models to demonstrate their practical application. Depending on the specific configuration of the problem, the improved models yielded larger convergence basins and/or took fewer steps to converge.
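
A small sketch of the forward/inverse pairing described above, assuming a toy two-equation forward model and a brute-force parameter sweep for the inverse step; the damped Newton iteration with backtracking is only a stand-in for the improved techniques the paper develops:

```python
import numpy as np

def forward(x, p):
    """Toy nonlinear forward model: residual of the governing equations."""
    return np.array([x[0] ** 3 + p * x[1] - 1.0,
                     x[1] ** 3 + p * x[0] - 2.0])

def solve_forward(p, tol=1e-10, max_iter=50):
    """Damped Newton root-finding for the forward problem F(x; p) = 0."""
    x = np.zeros(2)
    for _ in range(max_iter):
        F = forward(x, p)
        if np.linalg.norm(F) < tol:
            break
        # finite-difference Jacobian (a real code would use an analytic one)
        J, eps = np.empty((2, 2)), 1e-7
        for j in range(2):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (forward(xp, p) - F) / eps
        d = np.linalg.solve(J, -F)
        # simple backtracking line search to enlarge the convergence basin
        step = 1.0
        while np.linalg.norm(forward(x + step * d, p)) > np.linalg.norm(F) and step > 1e-4:
            step *= 0.5
        x = x + step * d
    return x

# inverse problem: recover p from a perturbed observation of the forward solution
p_true = 0.7
obs = solve_forward(p_true) + 1e-4
candidates = np.linspace(0.1, 1.5, 141)
misfit = [np.linalg.norm(solve_forward(p) - obs) for p in candidates]
print(candidates[int(np.argmin(misfit))])   # close to p_true
```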

The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. Clifford algebras provide natural generalizations of complex numbers, dual numbers, and quaternions into non-commutative multivectors. This paper demonstrates an algorithm for computing the inverses of such numbers in a non-degenerate Clifford algebra of arbitrary dimension. The algorithm is a variation of the Faddeev-LeVerrier-Souriau algorithm and is implemented in the open-source computer algebra system Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
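
For orientation, here is the classical matrix form of the Faddeev-LeVerrier recursion that the multivector variant builds on; this sketch shows only the matrix version, not the Clifford-algebra adaptation implemented in Maxima:

```python
import numpy as np

def flv_inverse(A: np.ndarray) -> np.ndarray:
    """Matrix inverse via the classical Faddeev-LeVerrier recursion.

    Computes the characteristic-polynomial coefficients c_{n-1}, ..., c_0 and
    the matrices M_k; Cayley-Hamilton then gives A^{-1} = -M_n / c_0.
    """
    n = A.shape[0]
    M = np.eye(n)                      # M_1 = I
    c = -np.trace(A)                   # c_{n-1}
    for k in range(2, n + 1):
        M = A @ M + c * np.eye(n)      # M_k = A M_{k-1} + c_{n-k+1} I
        c = -np.trace(A @ M) / k       # c_{n-k}
    if c == 0:
        raise ValueError("matrix is singular")
    return -M / c                      # A^{-1} = -M_n / c_0

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(flv_inverse(A) @ A)              # approximately the identity
```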

An established normative approach for understanding the algorithmic basis of neural computation is to derive online algorithms from principled computational objectives and evaluate their compatibility with anatomical and physiological observations. Similarity matching objectives have served as successful starting points for deriving online algorithms that map onto neural networks (NNs) with point neurons and Hebbian/anti-Hebbian plasticity. These NN models account for many anatomical and physiological observations; however, the objectives have limited computational power and the derived NNs do not explain multi-compartmental neuronal structures and non-Hebbian forms of plasticity that are prevalent throughout the brain. In this article, we unify and generalize recent extensions of the similarity matching approach to address more complex objectives, including a large class of unsupervised and self-supervised learning tasks that can be formulated as symmetric generalized eigenvalue problems or nonnegative matrix factorization problems. Interestingly, the online algorithms derived from these objectives naturally map onto NNs with multi-compartmental neurons and local, non-Hebbian learning rules. Therefore, this unified extension of the similarity matching approach provides a normative framework that facilitates understanding multi-compartmental neuronal structures and non-Hebbian plasticity found throughout the brain.
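
As a schematic of the point-neuron baseline that this article generalizes (a similarity-matching network with Hebbian/anti-Hebbian plasticity), the following untuned sketch uses fixed-point lateral dynamics and simple decay-regularised local updates; the details differ from the algorithms derived in the article:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, eta = 10, 3, 0.01                  # input dim, output dim, learning rate
W = 0.1 * rng.standard_normal((k, d))    # feedforward (Hebbian) weights
M = np.zeros((k, k))                     # lateral (anti-Hebbian) weights

# synthetic data concentrated near a 3-dimensional subspace
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]
for _ in range(20000):
    x = basis @ rng.standard_normal(k) + 0.05 * rng.standard_normal(d)
    # neural dynamics: fixed point of y = W x - M y
    y = np.linalg.solve(np.eye(k) + M, W @ x)
    # local learning rules associated with the similarity matching objective
    W += eta * (np.outer(y, x) - W)
    M += eta * (np.outer(y, y) - M)
    np.fill_diagonal(M, 0.0)             # keep M strictly lateral

# rows of W should lie largely in the data subspace after learning
proj = basis.T @ W.T
print(np.linalg.svd(proj, compute_uv=False))
```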

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$ or if $n\geq 3$, the domain is the unit ball, and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values for $S_h^n$ obtained computationally.
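
A minimal one-dimensional sketch of such a computation, assuming the classical weight $x^{-2}$ on $(0,1)$ with homogeneous Dirichlet conditions (for which the optimal constant is $1/4$); the paper's precise setting may differ. The approximate constant is the smallest eigenvalue of a generalized eigenvalue problem pairing the stiffness matrix with a weighted mass matrix:

```python
import numpy as np
from scipy.linalg import eigh

def hardy_constant_fem(n_elems=64):
    """Approximate Hardy constant on (0, 1) with P1 elements and u(0) = u(1) = 0."""
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    n_dof = n_elems - 1                          # interior nodes only
    K = np.zeros((n_dof, n_dof))                 # stiffness matrix
    B = np.zeros((n_dof, n_dof))                 # mass matrix weighted by 1/x^2
    gx, gw = np.polynomial.legendre.leggauss(6)  # Gauss rule on [-1, 1]

    for e in range(n_elems):
        xl, xr = nodes[e], nodes[e + 1]
        h = xr - xl
        glob = [e - 1, e]                        # dofs of the two local hat functions
        k_loc = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        xq = 0.5 * (xr + xl) + 0.5 * h * gx
        wq = 0.5 * h * gw
        phi = np.vstack([(xr - xq) / h, (xq - xl) / h])
        b_loc = (phi * wq / xq ** 2) @ phi.T
        for a in range(2):
            if not 0 <= glob[a] < n_dof:
                continue
            for b in range(2):
                if not 0 <= glob[b] < n_dof:
                    continue
                K[glob[a], glob[b]] += k_loc[a, b]
                B[glob[a], glob[b]] += b_loc[a, b]

    # smallest generalized eigenvalue of K v = S_h B v approximates the constant
    return eigh(K, B, eigvals_only=True)[0]

print(hardy_constant_fem(64))   # decreases slowly (logarithmically) toward 0.25
```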

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another modality. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
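
A tiny sketch of the conditional utilization rate as defined above; the accuracy numbers and modality names are hypothetical, not results from the paper:

```python
def conditional_utilization_rate(acc_both: float, acc_single: float) -> float:
    """u(m1 | m2): accuracy gain from adding modality m1 on top of m2 alone."""
    return acc_both - acc_single

# hypothetical numbers for a two-modality model (say, RGB + depth)
u_rgb_given_depth = conditional_utilization_rate(acc_both=0.91, acc_single=0.74)
u_depth_given_rgb = conditional_utilization_rate(acc_both=0.91, acc_single=0.89)

# a large gap between the two rates signals greedy reliance on one modality
print(u_rgb_given_depth, u_depth_given_rgb)
```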
