
Time-dependent partial differential equations with given initial conditions are considered in this paper. New techniques for differentiating the unknown solution with respect to the time variable are proposed. It is shown that the proposed techniques can generate accurate higher-order derivatives simultaneously for a set of spatial points. The calculated derivatives can then be used for data-driven solution of the PDE in different ways. An application to Physics-Informed Neural Networks, implemented with the well-known DeepXDE library in Python on a TensorFlow backend, is presented for three real-life PDEs: the Burgers', Allen-Cahn and Schrödinger equations.
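
Since the abstract names DeepXDE explicitly, a minimal sketch of the standard DeepXDE workflow for the first of these benchmarks, the 1D Burgers' equation, is shown below. It follows DeepXDE's published Burgers demo; the hyperparameters are illustrative, and the API names match recent DeepXDE releases (older ones use `dde.maps.FNN` and `epochs=` instead of `dde.nn.FNN` and `iterations=`).

```python
import numpy as np
import deepxde as dde

def pde(x, u):
    # x[:, 0:1] is the spatial coordinate, x[:, 1:2] is time.
    u_x = dde.grad.jacobian(u, x, i=0, j=0)
    u_t = dde.grad.jacobian(u, x, i=0, j=1)
    u_xx = dde.grad.hessian(u, x, i=0, j=0)
    return u_t + u * u_x - 0.01 / np.pi * u_xx   # Burgers' residual

geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 0.99)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: -np.sin(np.pi * x[:, 0:1]),
                 lambda _, on_initial: on_initial)

data = dde.data.TimePDE(geomtime, pde, [bc, ic],
                        num_domain=2540, num_boundary=80, num_initial=160)
net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=15000)
```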

Related content


We consider an optimal control problem constrained by a parabolic partial differential equation (PDE) with Robin boundary conditions. We use a well-posed space-time variational formulation in Lebesgue--Bochner spaces with minimal regularity. The abstract formulation of the optimal control problem yields the Lagrange function and Karush--Kuhn--Tucker (KKT) conditions in a natural manner. This results in space-time variational formulations of the adjoint and gradient equations in Lebesgue--Bochner spaces with minimal regularity. Necessary and sufficient optimality conditions are formulated, and the optimality system is shown to be well-posed. Next, we introduce a conforming, uniformly stable, simultaneous space-time (tensor-product) discretization of the optimality system in these Lebesgue--Bochner spaces. Using finite elements of appropriate orders in space and time for trial and test spaces, this setting is known to be equivalent to a Crank--Nicolson time-stepping scheme for parabolic problems. Differences from existing methods are detailed. We show numerical comparisons with time-stepping methods. The space-time method shows good stability properties and requires fewer degrees of freedom in time to reach the same accuracy.
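
For readers unfamiliar with the time-stepping scheme the space-time discretization is compared against, here is a minimal numpy sketch of Crank--Nicolson for the 1D heat equation with homogeneous Dirichlet boundary conditions. It illustrates the classical scheme only, not the paper's space-time solver.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on (0, 1), u(0,t) = u(1,t) = 0.
nx, nt = 50, 100
dx, dt = 1.0 / nx, 1.0 / nt
x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)                 # initial condition

r = dt / (2.0 * dx**2)
n = nx - 1                            # number of interior points
A = (np.diag(np.full(n, 1.0 + 2.0 * r))       # implicit (t_{k+1}) side
     + np.diag(np.full(n - 1, -r), 1)
     + np.diag(np.full(n - 1, -r), -1))
B = (np.diag(np.full(n, 1.0 - 2.0 * r))       # explicit (t_k) side
     + np.diag(np.full(n - 1, r), 1)
     + np.diag(np.full(n - 1, r), -1))

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

print(u.max())   # exact solution decays like exp(-pi**2 * t)
```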

In recent years, methods based on deep neural networks, and especially Neural Improvement (NI) models, have led to a revolution in the field of combinatorial optimization. Given an instance of a graph-based problem and a candidate solution, they are able to propose a modification rule that improves its quality. However, existing NI approaches only consider node features and node-wise positional encodings to extract the instance and solution information, respectively. Thus, they are not suitable for problems where the essential information is encoded in the edges. In this paper, we present an NI model to solve graph-based problems where the information is stored either in the nodes, in the edges, or in both. We incorporate the NI model as a building block of hill-climbing-based algorithms to efficiently guide the selection of neighborhood operations, taking into account the solution at the current iteration. Conducted experiments show that the model is able to recommend neighborhood operations that are in the $99^{th}$ percentile for the Preference Ranking Problem. Moreover, when incorporated into hill-climbing algorithms, such as Iterated or Multi-start Local Search, the NI model systematically outperforms the conventional versions. Finally, we demonstrate the flexibility of the model by extending its application to two well-known problems: the Traveling Salesman Problem and the Graph Partitioning Problem.
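
As a rough illustration of how an NI model can steer local search, the sketch below ranks candidate neighborhood moves with a learned scorer before applying the first improving one. `score_moves`, `neighbors`, and `objective` are hypothetical placeholders, not the paper's actual components.

```python
def hill_climb(solution, neighbors, score_moves, objective, max_iters=1000):
    """Hill climbing where a learned model ranks candidate moves.

    score_moves is a hypothetical stand-in for the NI model: it scores
    each move given the current solution, so the most promising moves
    are tried first instead of scanning the neighborhood blindly.
    """
    best = objective(solution)
    for _ in range(max_iters):
        moves = sorted(neighbors(solution),
                       key=lambda m: score_moves(solution, m), reverse=True)
        improved = False
        for move in moves:
            candidate = move(solution)       # apply the neighborhood operation
            value = objective(candidate)
            if value > best:
                solution, best, improved = candidate, value, True
                break                        # first-improvement strategy
        if not improved:                     # no improving move: local optimum
            break
    return solution, best
```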

In recent years, topic modeling has emerged as a powerful technique for organizing and summarizing large collections of documents, and for searching for particular patterns in them. However, privacy concerns arise when cross-analyzing data from different sources is required. Federated topic modeling solves this issue by allowing multiple parties to jointly train a topic model without sharing their data. While several federated approximations of classical topic models do exist, no research has been carried out on their application to neural topic models. To fill this gap, we propose and analyze a federated implementation based on state-of-the-art neural topic modeling implementations, showing its benefits when there is a diversity of topics across the nodes' documents and a need to build a joint model. Our approach is, by construction, equivalent to a centralized approach both theoretically and in practice, yet it preserves the privacy of the nodes.
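
One common pattern for federated training of this kind is parameter averaging across nodes, sketched below for a generic neural topic model in PyTorch. All names (`node.loader`, `elbo_loss`, `num_docs`) are hypothetical, and simple averaging does not by itself guarantee the exact equivalence to centralized training that the paper's construction provides.

```python
import copy
import torch

def federated_round(global_model, nodes, local_steps=1, lr=1e-3):
    """One parameter-averaging round: each node trains a copy of the
    topic model on its own documents; only parameters are shared."""
    local_states, weights = [], []
    for node in nodes:                              # hypothetical node object
        local = copy.deepcopy(global_model)
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        for _ in range(local_steps):
            for batch in node.loader:               # bag-of-words minibatches
                loss = local.elbo_loss(batch)       # hypothetical NTM loss
                opt.zero_grad()
                loss.backward()
                opt.step()
        local_states.append(local.state_dict())
        weights.append(node.num_docs)
    total = sum(weights)
    averaged = {k: sum(w * s[k] for w, s in zip(weights, local_states)) / total
                for k in local_states[0]}           # assumes float parameters
    global_model.load_state_dict(averaged)
    return global_model
```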

In deep learning, neural networks serve as noisy channels between input data and its representation. This perspective naturally relates deep learning to the pursuit of constructing channels with optimal performance in information transmission and representation. While considerable efforts are concentrated on realizing optimal channel properties during network optimization, we study a frequently overlooked possibility: that neural networks can be initialized toward optimal channels. Our theory, consistent with experimental validation, identifies the primary mechanisms underlying this possibility and suggests intrinsic connections between statistical physics and deep learning. Unlike conventional theories that characterize neural networks using the classic mean-field approximation, we offer analytic proof that this extensively applied simplification scheme is not valid for studying neural networks as information channels. To fill this gap, we develop a corrected mean-field framework applicable to characterizing the limiting behaviors of information propagation in neural networks without strong assumptions on the inputs. Based on it, we propose an analytic theory proving that mutual information maximization is realized between inputs and propagated signals when neural networks are initialized at dynamic isometry, a case where information transmits via norm-preserving mappings. These theoretical predictions are validated by experiments on real neural networks, suggesting the robustness of our theory against finite-size effects. Finally, we analyze our findings with information bottleneck theory to confirm the precise relations among dynamic isometry, mutual information maximization, and optimal channel properties in deep learning.
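
Dynamic isometry is typically approached in practice through orthogonal weight initialization, which makes each linear map norm-preserving. A minimal numpy sketch of that generic construction (not the paper's experimental setup):

```python
import numpy as np

def orthogonal_init(n, gain=1.0, seed=0):
    """Sample an n x n orthogonal weight matrix via QR decomposition.
    Orthogonal weights give a norm-preserving linear map, the standard
    practical route to initializing a deep network at dynamic isometry."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))        # sign fix -> uniformly (Haar) distributed Q
    return gain * q

W = orthogonal_init(256)
x = np.random.default_rng(1).standard_normal(256)
print(np.linalg.norm(W @ x) / np.linalg.norm(x))    # ~1.0: norm preserved
```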

This work formulates a new approach to reduced modeling of parameterized, time-dependent partial differential equations (PDEs). The method employs Operator Inference, a scientific machine learning framework combining data-driven learning and physics-based modeling. The parametric structure of the governing equations is embedded directly into the reduced-order model, and parameterized reduced-order operators are learned via a data-driven linear regression problem. The result is a reduced-order model that can be solved rapidly to map parameter values to approximate PDE solutions. Such parameterized reduced-order models may be used as physics-based surrogates for uncertainty quantification and inverse problems that require many forward solves of parametric PDEs. Numerical issues such as well-posedness and the need for appropriate regularization in the learning problem are considered, and an algorithm for hyperparameter selection is presented. The method is illustrated for a parametric heat equation and demonstrated for the FitzHugh-Nagumo neuron model.
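
To make the regression step concrete, below is a minimal numpy sketch of non-parametric Operator Inference for a linear reduced model $\dot{\hat{x}} \approx \hat{A}\hat{x}$: project snapshots onto a POD basis and fit the reduced operator by regularized least squares. The paper's parameterized, regularization-tuned version goes well beyond this skeleton.

```python
import numpy as np

def operator_inference(X, Xdot, r, reg=1e-6):
    """Fit a reduced linear model xhat' = A xhat from snapshot data.

    X    : (n, k) state snapshots        Xdot : (n, k) time derivatives
    r    : reduced dimension             reg  : Tikhonov regularization
    """
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Vr = U[:, :r]                            # POD basis
    Xh, Xdoth = Vr.T @ X, Vr.T @ Xdot        # project data to r dimensions
    # Regularized least squares: min_A ||A Xh - Xdoth||_F^2 + reg ||A||_F^2
    G = Xh @ Xh.T + reg * np.eye(r)
    A = np.linalg.solve(G, Xh @ Xdoth.T).T
    return Vr, A
```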

Multivariate Hawkes processes are temporal point processes extensively applied to model event data with dependence on past occurrences and interaction phenomena. In the generalised nonlinear model, positive and negative interactions between the components of the process are allowed, thereby accounting for so-called excitation and inhibition effects. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is often a computationally expensive task, all the more so with Bayesian estimation methods. In general, the posterior distribution in the nonlinear Hawkes model is non-conjugate and doubly intractable. Moreover, existing Markov chain Monte Carlo methods are often slow and not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we unify existing variational Bayes inference approaches under a general framework that we theoretically analyse under easily verifiable conditions on the prior, the variational class, and the model. We notably apply our theory to a novel spike-and-slab variational class that can induce sparsity through the connectivity graph parameter of the multivariate Hawkes model. Then, in the context of the popular sigmoid Hawkes model, we leverage an existing data augmentation technique and design adaptive and sparsity-inducing mean-field variational methods. In particular, we propose a two-step algorithm based on a thresholding heuristic to select the graph parameter. Through an extensive set of numerical simulations, we demonstrate that our approach enjoys several benefits: it is computationally efficient, can reduce the dimensionality of the problem by selecting the graph parameter, and is able to adapt to the smoothness of the underlying parameter.
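
For concreteness, the conditional intensity of a nonlinear multivariate Hawkes process with exponential kernels can be sketched as below. Negative weights encode inhibition, and a link function keeps the intensity nonnegative; the paper's sigmoid model uses a sigmoid link, whereas a ReLU is used here purely for illustration.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta,
                     link=lambda z: np.maximum(z, 0.0)):
    """Conditional intensity of a d-dimensional nonlinear Hawkes process
    with exponential kernels:
        lambda_i(t) = link( mu_i + sum_j sum_{s in events_j, s < t}
                            alpha[i, j] * exp(-beta * (t - s)) )
    Negative alpha[i, j] encodes inhibition of component i by component j.
    """
    lam = np.array(mu, dtype=float)
    for j, times in enumerate(events):           # events: list of d arrays
        past = np.asarray(times)
        past = past[past < t]
        if past.size:
            lam += alpha[:, j] * np.exp(-beta * (t - past)).sum()
    return link(lam)
```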

Variational Bayes methods are a scalable estimation approach for many complex state space models. However, existing methods exhibit a trade-off between accurate estimation and computational efficiency. This paper proposes a variational approximation that mitigates this trade-off. This approximation is based on importance densities that have been proposed in the context of efficient importance sampling. By directly conditioning on the observed data, the proposed method produces an accurate approximation to the exact posterior distribution. Because the steps required for its calibration are computationally efficient, the approach is faster than existing variational Bayes methods. The proposed method can be applied to any state space model that has a closed-form measurement density function and a state transition distribution that belongs to the exponential family of distributions. We illustrate the method in numerical experiments with stochastic volatility models and a macroeconomic empirical application using a high-dimensional state space model.
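
A stochastic volatility model of the kind used in the experiments fits the stated requirements: a closed-form measurement density and a Gaussian AR(1) state transition, which belongs to the exponential family. A minimal simulation sketch, with illustrative parameter values:

```python
import numpy as np

def simulate_sv(T=1000, mu=-1.0, phi=0.97, sigma_eta=0.2, seed=0):
    """Simulate a basic stochastic volatility model:
        h_t = mu + phi * (h_{t-1} - mu) + eta_t,   eta_t ~ N(0, sigma_eta^2)
        y_t = exp(h_t / 2) * eps_t,                eps_t ~ N(0, 1)
    Gaussian AR(1) state transition; closed-form measurement density.
    """
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1.0 - phi**2) * rng.standard_normal()
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2.0) * rng.standard_normal(T)
    return y, h
```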

Two combined numerical methods for solving time-varying semilinear differential-algebraic equations (DAEs) are obtained. These equations are also called degenerate DEs, descriptor systems, operator-differential equations and DEs on manifolds. The convergence and correctness of the methods are proved. When constructing the methods we use, in particular, time-varying spectral projectors which can be found numerically. This makes it possible to numerically solve and analyze the considered DAE in its original form, without additional analytical transformations. To improve the accuracy of the second method, a recalculation (a "predictor-corrector" scheme) is used. Note that the developed methods are applicable to DAEs whose continuous nonlinear part may not be continuously differentiable in $t$, and that restrictions of global Lipschitz type, including the global contractivity condition, are not used in the theorems on the global solvability of the DAEs and on the convergence of the numerical methods. This makes it possible to apply the developed methods to the numerical solution of more general classes of mathematical models. For example, the functions of currents and voltages in electric circuits may not be differentiable, or may be approximated by nondifferentiable functions. The presented conditions for the global solvability of the DAEs ensure the existence of a unique exact global solution of the corresponding initial value problem, which makes it possible to compute approximate solutions on any given time interval (provided that the conditions of the theorems or remarks on the convergence of the methods are fulfilled). In the paper, a numerical analysis of the mathematical model of a certain electrical circuit, which demonstrates the application of the presented theorems and numerical methods, is carried out.
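
The recalculation idea in the second method is the generic predictor-corrector pattern. As a minimal illustration only (for an explicit ODE, not a DAE with spectral projectors), one Heun-type step looks like this:

```python
def heun_step(f, t, y, h):
    """One predictor-corrector step for y' = f(t, y):
    predict with an explicit Euler step, then correct with the
    trapezoidal rule using the predicted value."""
    y_pred = y + h * f(t, y)                           # predictor (Euler)
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector
```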

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or of sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
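
A minimal neural ODE, the first of the listed topics, can be sketched with the widely used torchdiffeq package (one of several NDE libraries; an assumption of tooling, not the thesis's own code):

```python
import torch
from torchdiffeq import odeint   # pip install torchdiffeq

class ODEFunc(torch.nn.Module):
    """Learned vector field f_theta; the neural ODE solves dy/dt = f_theta(t, y)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim),
        )

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(16, 2)                  # batch of initial states
t = torch.linspace(0.0, 1.0, 10)
y = odeint(func, y0, t)                  # shape (10, 16, 2)
loss = y[-1].pow(2).mean()
loss.backward()                          # gradients flow through the solver
```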

Deep Learning has revolutionized the fields of computer vision, natural language understanding, speech recognition, information retrieval and more. However, with the progressive improvements in deep learning models, their number of parameters, latency, and resources required to train, among other factors, have all increased significantly. Consequently, it has become important to pay attention to these footprint metrics of a model as well, not just its quality. We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency (spanning modeling techniques, infrastructure, and hardware) and the seminal work in each. We also present an experiment-based guide, along with code, for practitioners to optimize their model training and deployment. We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support. Our hope is that this survey will provide the reader with the mental model and the necessary understanding of the field to apply generic efficiency techniques to immediately obtain significant improvements, and also equip them with ideas for further research and experimentation to achieve additional gains.
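
As one concrete example of the modeling-technique family of efficiency methods, dynamic post-training quantization in PyTorch stores weights in int8 with no retraining. This is a generic illustration, not necessarily the survey's own code:

```python
import torch

# Dynamic post-training quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time, no retraining.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10)
)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)   # Linear layers replaced by their dynamic-quantized versions
```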
