
Physics Informed Neural Networks (PINNs) are shown to be a promising method for the approximation of Partial Differential Equations (PDEs). PINNs approximate the PDE solution by minimizing physics-based loss functions over a given domain. Despite substantial progress in the application of PINNs to a range of problem classes, investigation of error estimation and convergence properties of PINNs, which is important for establishing the rationale behind their good empirical performance, has been lacking. This paper presents a convergence analysis and error estimates of PINNs for a multi-physics problem of thermally coupled incompressible Navier-Stokes equations. Through a model problem of Beltrami flow, it is shown that a small training error implies a small generalization error. A posteriori convergence rates of the total error with respect to the training residual and the number of collocation points are presented. This is of practical significance for determining the appropriate number of training parameters and the training residual thresholds needed to obtain good PINN predictions of thermally coupled steady-state laminar flows. These convergence rates are then generalized to different spatial geometries as well as to different flow parameters that lie in the laminar regime. A pressure stabilization term in the form of a pressure Poisson equation is added to the PDE residuals for PINNs. This physics-informed augmentation is shown to improve the accuracy of the pressure field by an order of magnitude compared to the case without augmentation. Results from PINNs are compared to those obtained from a stabilized finite element method, and the favorable properties of PINNs are highlighted.
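As a rough illustration of the idea (not code from the paper), the sketch below assembles physics-based residuals for the 2D steady incompressible Navier-Stokes equations in PyTorch and adds a pressure Poisson residual as the stabilization term; the thermal coupling is omitted for brevity, and all names (net, pinn_residuals, nu) are illustrative assumptions.

```python
# Minimal sketch (PyTorch): physics-based residuals for 2D steady incompressible
# Navier-Stokes with an added pressure Poisson stabilization term.
# The network outputs (u, v, p); nu is the kinematic viscosity; density is 1.
import torch

def grad(f, x):
    """First derivatives of the field f with respect to the coordinate tensor x."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f), create_graph=True)[0]

def pinn_residuals(net, xy, nu=0.01):
    xy = xy.requires_grad_(True)
    out = net(xy)
    u, v, p = out[:, 0:1], out[:, 1:2], out[:, 2:3]

    du, dv, dp = grad(u, xy), grad(v, xy), grad(p, xy)
    u_x, u_y = du[:, 0:1], du[:, 1:2]
    v_x, v_y = dv[:, 0:1], dv[:, 1:2]
    p_x, p_y = dp[:, 0:1], dp[:, 1:2]

    u_xx, u_yy = grad(u_x, xy)[:, 0:1], grad(u_y, xy)[:, 1:2]
    v_xx, v_yy = grad(v_x, xy)[:, 0:1], grad(v_y, xy)[:, 1:2]
    p_xx, p_yy = grad(p_x, xy)[:, 0:1], grad(p_y, xy)[:, 1:2]

    # Momentum and continuity residuals.
    r_u = u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)
    r_v = u * v_x + v * v_y + p_y - nu * (v_xx + v_yy)
    r_div = u_x + v_y

    # Pressure Poisson residual: laplacian(p) + div(u . grad u) = 0 for steady flow.
    r_pp = p_xx + p_yy + (u_x**2 + 2.0 * u_y * v_x + v_y**2)

    return (r_u**2 + r_v**2 + r_div**2 + r_pp**2).mean()
```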

Related Content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
October 21, 2022

Although unconditional feature inversion is the foundation of many image synthesis applications, training an inverter demands a high computational budget, a large decoding capacity, and the imposition of conditions such as autoregressive priors. To address these limitations, we propose the use of adversarially robust representations as a perceptual primitive for feature inversion. We train an adversarially robust encoder to extract disentangled and perceptually-aligned image representations, making them easily invertible. By training a simple generator with the mirror architecture of the encoder, we achieve superior reconstruction quality and generalization over standard models. Based on this, we propose an adversarially robust autoencoder and demonstrate its improved performance on style transfer, image denoising and anomaly detection tasks. Compared to recent ImageNet feature inversion methods, our model attains improved performance with significantly less complexity.
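A minimal sketch of the mirrored encoder/generator idea, assuming a toy convolutional encoder in place of the adversarially robust one described above; the layer sizes and one-step training loop are purely illustrative.

```python
# Illustrative only (PyTorch): a generator that mirrors a small convolutional encoder,
# trained to invert frozen features back to pixels with a reconstruction loss.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in for a frozen, robust encoder
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(                      # mirror architecture of the encoder
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

for p in encoder.parameters():                # the representation is fixed; only invert it
    p.requires_grad_(False)

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
x = torch.rand(8, 3, 64, 64)                  # dummy batch in place of real images
recon = decoder(encoder(x))
loss = nn.functional.mse_loss(recon, x)       # pixel-space reconstruction loss
loss.backward()
opt.step()
```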

We study a discrete-time model where each packet has a cost of not being sent -- this cost might depend on the packet content. We study the tradeoff between the age and the cost where the sender is confined to packet-based strategies. The optimal tradeoff is found by an appropriate formulation of the problem as a Markov Decision Process (MDP). We show that the optimal tradeoff can be attained with finite-memory policies and we devise an efficient policy iteration algorithm to find these optimal policies. We further study a related problem where the transmitted packets are subject to erasures. We show that the optimal policies for our problem are also optimal for this new setup. Allowing coding across packets significantly extends the packet-based strategies. We show that when the packet payloads are small, the performance can be improved by coding.
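As a hedged illustration of the MDP machinery (not the paper's exact model), the sketch below runs plain policy iteration on a toy age-of-information chain in which the state is the current age and the action is whether to send at a fixed transmission cost; all constants are assumptions.

```python
# Toy policy iteration: minimize discounted (age + transmission cost) over a finite age chain.
import numpy as np

A_MAX, SEND_COST, GAMMA = 20, 5.0, 0.95
states = np.arange(1, A_MAX + 1)

def step_cost(age, send):
    return age + (SEND_COST if send else 0.0)

def next_age(age, send):
    return 1 if send else min(age + 1, A_MAX)   # sending resets the age

policy = np.zeros(A_MAX, dtype=int)             # 0 = wait, 1 = send
for _ in range(100):
    # Policy evaluation: iterate V = c_pi + gamma * V(next state) (deterministic dynamics).
    V = np.zeros(A_MAX)
    for _ in range(500):
        V = np.array([step_cost(a, policy[a - 1]) + GAMMA * V[next_age(a, policy[a - 1]) - 1]
                      for a in states])
    # Policy improvement: greedy one-step lookahead (choose the cheaper action).
    new_policy = np.array([int(step_cost(a, 1) + GAMMA * V[next_age(a, 1) - 1] <
                               step_cost(a, 0) + GAMMA * V[next_age(a, 0) - 1])
                           for a in states])
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("send when age >=", states[policy == 1].min() if policy.any() else "never")
```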

Time-dependent Partial Differential Equations with given initial conditions are considered in this paper. New techniques for differentiating the unknown solution with respect to the time variable are proposed. It is shown that the proposed techniques allow one to generate accurate higher-order derivatives simultaneously for a set of spatial points. The calculated derivatives can then be used for the data-driven solution of PDEs in different ways. An application to Physics Informed Neural Networks using the well-known DeepXDE software package in Python with a TensorFlow backend is presented for three real-life PDEs: the Burgers', Allen-Cahn and Schrödinger equations.
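The paper's differentiation technique is not reproduced here; as a generic point of reference, the sketch below uses nested automatic differentiation in PyTorch to obtain first- and second-order time derivatives of a network surrogate u(x, t) simultaneously at a batch of spatial points.

```python
# Generic sketch (not the paper's specific technique): nested autodiff for u_t and u_tt
# of a neural surrogate u(x, t), evaluated at many spatial points at a common time.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

x = torch.linspace(-1.0, 1.0, 64).reshape(-1, 1)      # fixed spatial collocation points
t = torch.full_like(x, 0.5).requires_grad_(True)      # common time, tracked for autodiff

u = net(torch.cat([x, t], dim=1))
u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
u_tt = torch.autograd.grad(u_t, t, torch.ones_like(u_t), create_graph=True)[0]
print(u_t.shape, u_tt.shape)                           # (64, 1) each
```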

Deep neural networks are powerful tools for modeling observations over time with non-linear patterns. Despite the widespread use of neural networks in such settings, most theoretical developments for deep neural networks assume independent observations, and theoretical results for temporally dependent observations are scarce. To bridge this gap, we study theoretical properties of deep neural networks for modeling non-linear time series data. Specifically, non-asymptotic bounds for the prediction error of (sparse) feed-forward neural networks with ReLU activation functions are established under mixing-type assumptions. These assumptions are mild enough to include a wide range of time series models, including auto-regressive models. Compared to independent observations, the established convergence rates have additional logarithmic factors to compensate for the additional complexity due to dependence among data points. The theoretical results are supported via various numerical simulation settings as well as an application to a macroeconomic data set.
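A small, purely illustrative simulation in the spirit of the numerical settings mentioned above: a feed-forward ReLU network fit to lagged windows of a simulated nonlinear autoregressive series. The generating model, lag length and network size are assumptions, not taken from the paper.

```python
# Fit a ReLU feed-forward network to dependent (autoregressive) observations.
import numpy as np
import torch

rng = np.random.default_rng(0)
n, lag = 2000, 5
y = np.zeros(n)
for t in range(1, n):                                   # nonlinear AR(1) with noise
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.1 * rng.standard_normal()

X = np.stack([y[i:i + lag] for i in range(n - lag)])    # lagged design matrix
Y = y[lag:]
X = torch.tensor(X, dtype=torch.float32)
Y = torch.tensor(Y, dtype=torch.float32).unsqueeze(1)

net = torch.nn.Sequential(torch.nn.Linear(lag, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):                                    # full-batch training on the first 1500 windows
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(X[:1500]), Y[:1500])
    loss.backward()
    opt.step()
print("out-of-sample MSE:", torch.nn.functional.mse_loss(net(X[1500:]), Y[1500:]).item())
```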

The goal of cryptocurrencies is decentralization. In principle, all currencies have equal status. Unlike traditional stock markets, there is no default currency of denomination (fiat), so trading pairs can be set freely. However, it is impractical to set up a trading market between every two currencies. In order to control management costs and ensure sufficient liquidity, we must give priority to covering the large-volume trading pairs while ensuring that all coins remain reachable. We note that this is an optimization problem. Its particularity lies in two aspects: 1) the trading volume between most (>99.5%) possible trading pairs cannot be directly observed; 2) it is subject to a connectivity constraint, that is, all currencies must be guaranteed to be tradable. To solve this problem, we use a two-stage process: 1) fill in missing values based on a regularized, truncated eigenvalue decomposition, where the regularization term controls the extent to which missing values are pulled toward zero; 2) search for the optimal trading pairs using a branch-and-bound process with heuristic search and pruning strategies. The experimental results show that: 1) if the number of denominated coins is not limited, we obtain a more decentralized trading-pair configuration, which favors establishing trading pairs directly between large currencies; 2) there is room for optimization in all exchanges, with inappropriate trading pairs mainly caused by subjectively choosing small coins as quote currencies or by failing to track emerging large coins in time; 3) too few trading pairs lead to low coverage, while too many require frequent adjustment as markets change. Exchanges should consider striking an appropriate balance between the two.
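A hedged sketch of the two-stage pipeline: stage 1 fills unobserved pairwise volumes with an iterative truncated-SVD (low-rank) imputation whose shrinkage pulls fill-ins toward zero, and stage 2 is replaced here by a simple greedy stand-in (a spanning step for reachability plus the largest remaining pairs) rather than the branch-and-bound search of the paper; all function names and parameters are illustrative.

```python
import numpy as np

def lowrank_impute(V, observed, rank=3, shrink=0.1, iters=50):
    """V: symmetric volume matrix with zeros at unobserved entries; observed: boolean mask."""
    X = V.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)                 # shrinkage pulls fill-ins toward zero
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # truncated reconstruction
        X = np.where(observed, V, low)                  # keep observed entries fixed
    return X

def select_pairs(volume, budget):
    """Greedy stand-in for the search stage: spanning step for reachability, then top volumes."""
    n = volume.shape[0]
    chosen, reached = [], {0}
    while len(reached) < n:                             # keep every coin reachable
        i, j = max(((i, j) for i in reached for j in range(n) if j not in reached),
                   key=lambda e: volume[e])
        chosen.append(tuple(sorted((i, j)))); reached.add(j)
    rest = sorted(((i, j) for i in range(n) for j in range(i + 1, n) if (i, j) not in chosen),
                  key=lambda e: -volume[e])
    return chosen + rest[:max(0, budget - len(chosen))]
```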

In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much can hinder ground truth performance, in accordance with Goodhart's law. This effect has been frequently observed, but not carefully measured due to the expense of collecting human preference data. In this work, we use a synthetic setup in which a fixed "gold-standard" reward model plays the role of humans, providing labels used to train a proxy reward model. We study how the gold reward model score changes as we optimize against the proxy reward model using either reinforcement learning or best-of-$n$ sampling. We find that this relationship follows a different functional form depending on the method of optimization, and that in both cases its coefficients scale smoothly with the number of reward model parameters. We also study the effect on this relationship of the size of the reward model dataset, the number of reward model and policy parameters, and the coefficient of the KL penalty added to the reward in the reinforcement learning setup. We explore the implications of these empirical results for theoretical considerations in AI alignment.
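As a toy illustration of best-of-n overoptimization (not the setup above), the snippet below scores candidates with a noisy proxy reward, selects the proxy-argmax, and evaluates it under a separate "gold" reward as n grows; the reward functions and sampling distribution are arbitrary assumptions.

```python
# Best-of-n against a proxy reward, evaluated under a gold reward (toy example).
import numpy as np

rng = np.random.default_rng(0)

def gold_reward(x):
    return -np.abs(x)                                           # toy ground-truth preference

def proxy_reward(x):
    return -np.abs(x) + 0.5 * rng.standard_normal(np.shape(x))  # imperfect, noisy proxy

for n in [1, 4, 16, 64, 256]:
    gold_scores = []
    for _ in range(2000):
        candidates = rng.standard_normal(n)                     # n sampled "responses"
        best = candidates[np.argmax(proxy_reward(candidates))]  # optimize the proxy
        gold_scores.append(gold_reward(best))                   # measure the gold reward
    print(f"n={n:4d}  mean gold reward = {np.mean(gold_scores):+.3f}")
```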

We derive normal approximation results for a class of stabilizing functionals of binomial or Poisson point processes that are not necessarily expressible as sums of certain score functions. Our approach is based on a flexible notion of the add-one cost operator, which helps one deal with the second-order cost operator via suitably chosen first-order operators. We combine this flexible notion with the theory of strong stabilization to establish our results. We illustrate the applicability of our results by establishing normal approximation results for certain geometric and topological statistics arising frequently in practice. Several existing results also emerge as special cases of our approach.

Physics-informed neural networks (PINNs) have been proposed to solve two main classes of problems: data-driven solutions and data-driven discovery of partial differential equations. These tasks become prohibitive when the observational data are highly corrupted, for example due to sensor failures. We propose the Least Absolute Deviation based PINN (LAD-PINN) to reconstruct the solution and recover unknown parameters in PDEs - even if spurious data or outliers corrupt a large percentage of the observations. To further improve the accuracy of recovering hidden physics, a two-stage Median Absolute Deviation based PINN (MAD-PINN) is proposed, in which LAD-PINN is employed as an outlier detector and MAD screening then removes the highly corrupted data. The vanilla PINN or its variants can subsequently be applied to exploit the remaining normal data. Through several examples, including Poisson's equation, the wave equation, and steady or unsteady Navier-Stokes equations, we illustrate the generalizability, accuracy and efficiency of the proposed algorithms for recovering governing equations from noisy and highly corrupted measurement data.
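A minimal sketch of the two robust ingredients, assuming nothing about the authors' implementation: an L1 (least absolute deviation) data loss, and a median-absolute-deviation screen that masks out observations with unusually large residuals.

```python
import torch

def lad_data_loss(pred, obs):
    """L1 data-fitting loss; less sensitive to spurious observations than mean squared error."""
    return (pred - obs).abs().mean()

def mad_screen(pred, obs, k=3.0):
    """Boolean mask of 'normal' observations based on a median-absolute-deviation cutoff."""
    r = (pred - obs).detach().flatten()
    med = torch.median(r)
    mad = torch.median((r - med).abs())                 # median absolute deviation of residuals
    return (r - med).abs() <= k * 1.4826 * mad          # 1.4826 ~ Gaussian consistency factor
```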

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
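As a brief illustration of the ResNet/ODE correspondence mentioned above (a standard observation, not specific to this thesis), an explicit Euler discretisation of dh/dt = f_theta(h) is exactly a stack of residual blocks h <- h + dt * f_theta(h):

```python
# Euler-discretised neural ODE; each integration step acts as one residual layer.
import torch

f = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Tanh(), torch.nn.Linear(16, 16))

def neural_ode_euler(h0, n_steps=10, dt=0.1):
    h = h0
    for _ in range(n_steps):           # residual update: h <- h + dt * f(h)
        h = h + dt * f(h)
    return h

h_T = neural_ode_euler(torch.randn(4, 16))
```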

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
