
Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations. Despite recent progress on expanding the class of problems that can be tackled by PINNs, most existing use cases involve simple geometric domains. To date, there is no clear way to inform PINNs about the topology of the domain where the problem is being solved. In this work, we propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator. This technique allows us to create an input space for the neural network that represents the geometry of a given object. We approximate the eigenfunctions, as well as the operators involved in the partial differential equations, with finite elements. We extensively test and compare the proposed methodology against traditional PINNs on complex shapes, such as a coil, a heat sink and a bunny, with different physics, such as the Eikonal equation and heat transfer. We also study the sensitivity of our method to the number of eigenfunctions used, as well as to the discretization used for the eigenfunctions and the underlying operators. Our results show excellent agreement with the ground truth data in cases where traditional PINNs fail to produce a meaningful solution. We envision that this new technique will extend the effectiveness of PINNs to more realistic applications.
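As a rough illustration of the idea, rather than the paper's actual pipeline (which discretizes the Laplace-Beltrami operator with finite elements), the sketch below computes the smallest eigenvectors of a plain graph Laplacian built from a surface mesh's edges and feeds their values at each point into an otherwise ordinary PINN-style MLP. The names `edges`, `n_nodes`, and the network sizes are illustrative placeholders.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
import torch
import torch.nn as nn

def eigen_encoding(edges, n_nodes, k=16):
    """Smallest-k eigenvectors of a graph Laplacian built from mesh edges.

    The paper discretizes the Laplace-Beltrami operator with finite elements;
    a plain normalized graph Laplacian is used here only as a stand-in.
    """
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_nodes, n_nodes)).tocsr()
    L = laplacian(A, normed=True)
    _, vecs = eigsh(L, k=k, which='SM')            # low-frequency "geometry" modes
    return torch.tensor(vecs, dtype=torch.float32)  # (n_nodes, k)

class EigenPINN(nn.Module):
    """MLP whose input is the eigen-encoding of a point, not its coordinates."""
    def __init__(self, k=16, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(k, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, 1))
    def forward(self, phi):                         # phi: (batch, k)
        return self.net(phi)
```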

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematics and Computational Analysis, Engineering and Applications. Official website:

Cellular sheaves equip graphs with a "geometrical" structure by assigning vector spaces and linear maps to nodes and edges. Graph Neural Networks (GNNs) implicitly assume a graph with a trivial underlying sheaf. This choice is reflected in the structure of the graph Laplacian operator, the properties of the associated diffusion equation, and the characteristics of the convolutional models that discretise this equation. In this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in heterophilic settings and their oversmoothing behaviour. By considering a hierarchy of increasingly general sheaves, we study how the ability of the sheaf diffusion process to achieve linear separation of the classes in the infinite time limit expands. At the same time, we prove that when the sheaf is non-trivial, discretised parametric diffusion processes have greater control than GNNs over their asymptotic behaviour. On the practical side, we study how sheaves can be learned from data. The resulting sheaf diffusion models have many desirable properties that address the limitations of classical graph diffusion equations (and corresponding GNN models) and obtain competitive results in heterophilic settings. Overall, our work provides new connections between GNNs and algebraic topology and would be of interest to both fields.
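To make the sheaf Laplacian concrete, here is a small illustrative sketch (not the paper's learned construction): it assembles the dense sheaf Laplacian of a graph whose nodes carry d-dimensional stalks and whose edges carry a pair of restriction maps, plus one explicit-Euler sheaf diffusion step. When every restriction map is the identity, it reduces to the ordinary graph Laplacian tensored with the identity.

```python
import numpy as np

def sheaf_laplacian(n_nodes, d, restrictions):
    """Dense sheaf Laplacian for a graph with d-dimensional vertex stalks.

    restrictions[(u, v)] = (F_u, F_v): the d x d restriction maps of nodes
    u and v onto the edge (u, v).
    """
    L = np.zeros((n_nodes * d, n_nodes * d))
    for (u, v), (F_u, F_v) in restrictions.items():
        bu, bv = slice(u * d, (u + 1) * d), slice(v * d, (v + 1) * d)
        L[bu, bu] += F_u.T @ F_u          # diagonal blocks
        L[bv, bv] += F_v.T @ F_v
        L[bu, bv] -= F_u.T @ F_v          # off-diagonal blocks
        L[bv, bu] -= F_v.T @ F_u
    return L

def diffusion_step(L, x, dt=0.1):
    """One discretized sheaf diffusion update x <- x - dt * L x."""
    return x - dt * (L @ x)
```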

Neural ordinary differential equations (Neural ODEs) model continuous-time dynamics as differential equations parametrized with neural networks. Thanks to their modeling flexibility, they have been adopted for multiple tasks where the continuous-time nature of the process is especially relevant, as in system identification and time series analysis. When applied in a control setting, it is possible to adapt their use to approximate optimal nonlinear feedback policies. This formulation follows the same approach as policy gradients in reinforcement learning, covering the case where the environment consists of known deterministic dynamics given by a system of differential equations. The white-box nature of the model specification allows the direct calculation of policy gradients through sensitivity analysis, avoiding the inexact and inefficient gradient estimation through sampling. In this work we propose the use of a neural control policy posed as a Neural ODE to solve general nonlinear optimal control problems while satisfying both state and control constraints, which are crucial for real-world scenarios. Since the state feedback policy partially modifies the model dynamics, the whole phase space of the system is reshaped upon optimization. This approach is a sensible approximation to the historically intractable closed-loop solution of nonlinear control problems that efficiently exploits the availability of a dynamical-system model.
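The toy sketch below illustrates the general pattern under stated assumptions, not the paper's method: a neural state-feedback policy (squashed by tanh to respect a control constraint) is plugged into known double-integrator dynamics, the controlled ODE is integrated with explicit Euler, and the running cost is differentiated straight through the solver. The dynamics, cost weights, and optimizer settings are all illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical setup: drive a double integrator to the origin with a
# neural state-feedback policy; tanh keeps the control inside [-1, 1].
policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Tanh())

def dynamics(x, u):
    # x = (position, velocity); simple known deterministic model.
    return torch.stack([x[..., 1], u[..., 0]], dim=-1)

def rollout(x0, steps=100, dt=0.05):
    x, cost = x0, 0.0
    for _ in range(steps):
        u = policy(x)                     # closed-loop feedback control
        x = x + dt * dynamics(x, u)       # explicit Euler integration
        cost = cost + dt * (x.pow(2).sum() + 0.1 * u.pow(2).sum())
    return cost

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = rollout(torch.tensor([[1.0, 0.0]]))
    loss.backward()                       # gradients flow through the integrator
    opt.step()
```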

The rapid development of graph learning in recent years has found innumerable applications in several diversified fields. Among the main associated challenges are the volume and complexity of graph data, and graph learning models often cannot learn graph information efficiently. To remedy this inefficiency, physics-informed graph learning (PIGL) is emerging. PIGL incorporates physics rules while performing graph learning, which has enormous benefits. This paper presents a systematic review of PIGL methods. We begin by introducing a unified framework of graph learning models, then examine existing PIGL methods in relation to this unified framework. We also discuss several future challenges for PIGL. This survey paper is expected to stimulate innovative research and development activities pertaining to PIGL.

The goal of cryptocurrencies is decentralization. In principle, all currencies have equal status. Unlike traditional stock markets, there is no default currency of denomination (fiat), so trading pairs can be set freely. However, it is impractical to set up a trading market between every two currencies. In order to control management costs and ensure sufficient liquidity, we must give priority to covering the large-volume trading pairs while ensuring that all coins remain reachable. We note that this is an optimization problem. Its particularity lies in: 1) the trading volume between most (>99.5%) possible trading pairs cannot be directly observed; 2) it must satisfy a connectivity constraint, that is, all currencies are guaranteed to be tradable. To solve this problem, we use a two-stage process: 1) fill in missing values based on a regularized, truncated eigenvalue decomposition, where the regularization term controls to what extent missing values should be pushed toward zero; 2) search for the optimal trading pairs with a branch-and-bound process, using heuristic search and pruning strategies. The experimental results show that: 1) if the number of denominated coins is not limited, we obtain a more decentralized trading-pair setting, which advocates establishing trading pairs directly between large currencies; 2) there is room for optimization in all exchanges, and inappropriate trading pairs mainly arise from subjectively choosing small coins as quote currencies or failing to track emerging big coins in time; 3) too few trading pairs lead to low coverage, while too many trading pairs need to be adjusted frequently as markets change. Exchanges should consider striking an appropriate balance between the two.
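As a hedged illustration of the first stage only, the sketch below fills the unobserved entries of the pairwise volume matrix by alternating a truncated low-rank reconstruction with restoration of the observed entries; the truncated SVD and the shrinkage weight `lam` stand in for the paper's regularized, truncated eigenvalue decomposition, whose exact form is not given in the abstract.

```python
import numpy as np

def impute_volumes(V, observed, rank=5, lam=0.5, n_iter=50):
    """Fill unobserved pairwise trading volumes by iterative low-rank imputation.

    V        : (n, n) volume matrix with zeros at unobserved entries
    observed : boolean mask of observed entries
    lam      : shrinks unobserved estimates toward zero (illustrative stand-in
               for the regularization term described in the abstract)
    """
    X = V.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # truncated reconstruction
        X = np.where(observed, V, (1.0 - lam) * low_rank)   # keep data, shrink missing
    return X
```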

We derive normal approximation results for a class of stabilizing functionals of binomial or Poisson point processes that are not necessarily expressible as sums of certain score functions. Our approach is based on a flexible notion of the add-one cost operator, which helps one to deal with the second-order cost operator via suitable first-order operators. We combine this flexible notion with the theory of strong stabilization to establish our results. We illustrate the applicability of our results by establishing normal approximation results for certain geometric and topological statistics arising frequently in practice. Several existing results also emerge as special cases of our approach.
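For orientation, the standard first- and second-order add-one cost (difference) operators of a functional F of a point configuration are recalled below; the paper's flexible notion generalizes the first-order operator, so this is background rather than the paper's own definition.

```latex
D_x F(\mathcal{P}) = F(\mathcal{P} \cup \{x\}) - F(\mathcal{P}),
\qquad
D^2_{x,y} F(\mathcal{P}) = D_y\bigl(D_x F\bigr)(\mathcal{P})
= F(\mathcal{P} \cup \{x,y\}) - F(\mathcal{P} \cup \{x\})
  - F(\mathcal{P} \cup \{y\}) + F(\mathcal{P}).
```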

Physics-informed neural networks (PINNs) have been proposed to solve two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. These tasks become prohibitive when the available data are highly corrupted, for example because of sensor failures. We propose the Least Absolute Deviation based PINN (LAD-PINN) to reconstruct the solution and recover unknown parameters in PDEs even when spurious data or outliers corrupt a large percentage of the observations. To further improve the accuracy of the recovered hidden physics, a two-stage Median Absolute Deviation based PINN (MAD-PINN) is proposed, in which LAD-PINN is employed as an outlier detector and MAD screening then discards the highly corrupted data. The vanilla PINN or its variants can subsequently be applied to exploit the remaining normal data. Through several examples, including Poisson's equation, the wave equation, and steady or unsteady Navier-Stokes equations, we illustrate the generalizability, accuracy and efficiency of the proposed algorithms for recovering governing equations from noisy and highly corrupted measurement data.
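A minimal sketch of the core idea, assuming a generic PINN setup with precomputed data predictions and PDE residuals (the weights and composite form are illustrative, not the paper's exact loss): the data misfit is measured with an L1 least-absolute-deviation norm so that a large fraction of corrupted observations cannot dominate the fit.

```python
import torch

def lad_pinn_loss(u_pred_data, u_obs, pde_residual, w_data=1.0, w_pde=1.0):
    """Composite PINN loss with a least-absolute-deviation data term.

    Replacing the usual mean-squared data misfit with an L1 norm makes the
    fit robust to heavily corrupted observations; weights are illustrative.
    """
    data_term = torch.mean(torch.abs(u_pred_data - u_obs))   # L1 instead of L2
    pde_term = torch.mean(pde_residual ** 2)                  # physics residual
    return w_data * data_term + w_pde * pde_term
```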

Interferometry can measure the shape or the material density of a system that could not be measured otherwise, by recording the difference between the phase change of a signal and a reference phase. This difference is always between $-\pi$ and $\pi$, while it is the absolute phase that is required to get a true measurement. There is a long history of methods designed to recover this phase accurately from the phase "wrapped" inside $]-\pi,\pi]$. However, noise and under-sampling limit the effectiveness of most techniques and require highly sophisticated algorithms that can process imperfect measurements. Ultimately, successfully analysing an interferogram amounts to pattern recognition, a task at which radial basis function neural networks truly excel. The proposed neural network is designed to unwrap the phase from two-dimensional interferograms where aliasing, stemming from under-resolved regions, and noise levels are significant. The neural network can be trained in parallel and in three stages, using gradient-based supervised learning. Parallelism makes it possible to handle relatively large data sets, but requires a supplementary step to synchronize the fully unwrapped phase across the different networks.
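A one-dimensional toy example of the wrapping relation may help; the paper deals with noisy, aliased two-dimensional interferograms via an RBF network, so the classical np.unwrap call below is only the well-sampled baseline that such a learned method is meant to go beyond.

```python
import numpy as np

# The measurement only provides the phase wrapped into (-pi, pi]; recovering
# the absolute phase means finding the missing integer multiples of 2*pi.
true_phase = np.linspace(0.0, 20.0, 200)                 # monotone absolute phase
wrapped = np.mod(true_phase + np.pi, 2 * np.pi) - np.pi  # what the sensor records
recovered = np.unwrap(wrapped)                           # classical Itoh-style unwrapping
print(np.allclose(recovered, true_phase))                # True when well sampled
```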

Deep learning with deep neural networks (DNNs) has recently attracted tremendous attention from various fields of science and technology. Activation functions for a DNN define the output of a neuron given an input or set of inputs. They are essential and inevitable in learning non-linear transformations and performing diverse computations among successive neuron layers. Thus, the design of activation functions is still an important topic in deep learning research. Meanwhile, theoretical studies on the approximation ability of DNNs with various activation functions have been carried out within the last few years. In this paper, we propose a new activation function, named "DLU", and investigate its approximation ability for functions with various smoothness and structures. Our theoretical results show that DLU networks can achieve approximation performance competitive with rational and ReLU networks, and have some advantages. Numerical experiments comparing DLU with the existing activations ReLU, Leaky ReLU, and ELU illustrate the good practical performance of DLU.
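For reference, the baseline activations used in the comparison are recalled below; DLU itself is not defined in this abstract, so its formula is not reproduced here.

```latex
\mathrm{ReLU}(x) = \max(0, x), \qquad
\mathrm{LeakyReLU}_\alpha(x) = \max(\alpha x,\, x), \quad 0 < \alpha < 1, \qquad
\mathrm{ELU}_\alpha(x) =
\begin{cases} x, & x \ge 0,\\ \alpha\,(e^{x} - 1), & x < 0. \end{cases}
```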

The adaptive processing of structured data is a long-standing research topic in machine learning that investigates how to automatically learn a mapping from a structured input to outputs of various nature. Recently, there has been an increasing interest in the adaptive processing of graphs, which led to the development of different neural network-based methodologies. In this thesis, we take a different route and develop a Bayesian Deep Learning framework for graph learning. The dissertation begins with a review of the principles over which most of the methods in the field are built, followed by a study on graph classification reproducibility issues. We then proceed to bridge the basic ideas of deep learning for graphs with the Bayesian world, by building our deep architectures in an incremental fashion. This framework allows us to consider graphs with discrete and continuous edge features, producing unsupervised embeddings rich enough to reach the state of the art on several classification tasks. Our approach is also amenable to a Bayesian nonparametric extension that automates the choice of almost all of the model's hyper-parameters. Two real-world applications demonstrate the efficacy of deep learning for graphs. The first concerns the prediction of information-theoretic quantities for molecular simulations with supervised neural models. After that, we exploit our Bayesian models to solve a malware-classification task while being robust to intra-procedural code obfuscation techniques. We conclude the dissertation with an attempt to blend the best of the neural and Bayesian worlds together. The resulting hybrid model is able to predict multimodal distributions conditioned on input graphs, with the consequent ability to model stochasticity and uncertainty better than most works. Overall, we aim to provide a Bayesian perspective on the articulated research field of deep learning for graphs.

Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this fast-growing field.
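As a concrete anchor for the convolutional family this survey focuses on, here is a minimal single graph convolutional layer in the Kipf and Welling style; the dense adjacency matrix and ReLU nonlinearity are simplifications chosen for brevity, not a prescription from the survey.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
    def forward(self, H, A):
        A_hat = A + torch.eye(A.size(0), device=A.device)   # add self-loops
        deg = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(deg.pow(-0.5))               # symmetric normalization
        return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(H))
```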
