
Nonlinear time-fractional partial differential equations are widely used in modeling and simulation. In many applications, the media properties exhibit high-contrast variations. To solve these problems, one often uses a coarse grid for spatial resolution. For temporal discretization, implicit methods are common: although they permit relatively large time steps, the resulting equations are expensive to solve because of the nonlinearity and the large scale of the systems involved. Explicit methods, on the other hand, yield discrete systems that are easier to compute but require small time steps. In this work, we propose a partially explicit scheme, following our earlier works on partially explicit methods for nonlinear diffusion equations. In this scheme, the diffusion term is treated partially explicitly and the reaction term is treated fully explicitly. With an appropriate construction of spaces and a stability analysis, we show that the required time step in our proposed scheme scales as the coarse mesh size, which yields substantial computational savings. The main novelty of this work is the extension of our earlier works from diffusion equations to time-fractional diffusion equations. For fractional diffusion equations, the constraints on the time step are more severe, and the proposed method alleviates them since the time step in the partially explicit method scales as the coarse mesh size. We present stability results, and numerical results in which we compare our proposed partially explicit methods with a fully implicit approach. We show that our proposed approach provides similar results while treating many degrees of freedom in the nonlinear terms explicitly.
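To make the time-stepping issue concrete, the following is a minimal sketch of a standard baseline for this problem class: the L1 discretization of the Caputo derivative for a 1D linear fractional heat equation, with a fully implicit diffusion solve. It is not the authors' multiscale partially explicit scheme; the equation, grid, and parameters are illustrative assumptions. The memory sum over all previous time levels is what makes fractional problems expensive and time-step constraints delicate.

```python
import numpy as np
from math import gamma

def l1_fractional_heat(alpha=0.5, N=32, M=50, T=1.0):
    """Solve D_t^alpha u = u_xx on (0,1) with u(0)=u(1)=0 via the L1 scheme
    for the Caputo derivative, implicit in the diffusion term."""
    h, tau = 1.0 / N, T / M
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)                      # illustrative initial condition
    n = N - 1                                  # number of interior nodes
    # standard second-order finite-difference Laplacian on interior nodes
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    c = 1.0 / (gamma(2 - alpha) * tau**alpha)  # L1 scaling factor
    # L1 weights b_j = (j+1)^{1-alpha} - j^{1-alpha}
    b = np.arange(1, M + 1) ** (1 - alpha) - np.arange(M) ** (1 - alpha)
    hist = [u[1:-1].copy()]                    # all past levels (the "memory")
    for nstep in range(1, M + 1):
        rhs = c * hist[-1].copy()
        for j in range(1, nstep):              # history part of the quadrature
            rhs -= c * b[j] * (hist[nstep - j] - hist[nstep - j - 1])
        hist.append(np.linalg.solve(c * np.eye(n) - A, rhs))
    u[1:-1] = hist[-1]
    return u
```

Note how every step touches the full history `hist`; schemes that shift work to explicit updates, as in the abstract above, reduce the per-step solve cost but must control stability, which is where the coarse-mesh time-step scaling enters.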


We develop a probabilistic characterisation of trajectorial expansion rates in non-autonomous stochastic dynamical systems that can be defined over a finite time interval and used for subsequent uncertainty quantification in Lagrangian (trajectory-based) predictions. These expansion rates are quantified via certain divergences (pre-metrics) between probability measures induced by the laws of the stochastic flow associated with the underlying dynamics. We construct scalar fields of finite-time divergence/expansion rates and show their existence and space-time continuity for general stochastic flows. Combining these divergence-rate fields with the 'information inequalities' derived in our earlier work allows for quantification and mitigation of the uncertainty in path-based observables estimated from simplified models in a way that is amenable to algorithmic implementation, and it can be utilised in information-geometric analysis of statistical estimation and inference, as well as in data-driven machine/deep learning of coarse-grained models. We also derive a link between the divergence rates and finite-time Lyapunov exponents for probability measures and for path-based observables.
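For orientation, the classical deterministic object linked to in the last sentence can be sketched in a few lines: the finite-time Lyapunov exponent (FTLE) at a point, estimated from the flow-map Jacobian by finite differences. This is a textbook construction, not the paper's measure-theoretic divergence rates; the saddle system used below is an illustrative assumption.

```python
import numpy as np

def rk4_flow(f, x, T, steps=200):
    """Flow map of dx/dt = f(x) over time T via classical RK4."""
    h = T / steps
    x = np.array(x, dtype=float)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2)
        k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def ftle(flow, x0, T, delta=1e-6):
    """FTLE at x0: (1/T) * log of the largest singular value of the
    flow-map Jacobian, with the Jacobian estimated by central differences."""
    d = len(x0)
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = delta
        J[:, i] = (flow(x0 + e, T) - flow(x0 - e, T)) / (2 * delta)
    return np.log(np.linalg.svd(J, compute_uv=False)[0]) / T
```

For the linear saddle f(x) = (x1, -x2), the flow map stretches by e^T in the unstable direction, so the FTLE is exactly 1 for any T, which gives a quick sanity check of the implementation.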

This paper studies fully discrete finite element approximations to the Navier-Stokes equations using inf-sup stable elements and grad-div stabilization. For the time integration, two implicit-explicit second-order backward differentiation formula (BDF2) schemes are applied. In both schemes the Laplacian is treated implicitly, while the nonlinear term is explicit in the first and semi-implicit in the second. The grad-div stabilization allows us to prove error bounds in which the constants are independent of inverse powers of the viscosity. Error bounds of order $r$ in space are obtained for the $L^2$ error of the velocity using piecewise polynomials of degree $r$ to approximate the velocity, together with second-order bounds in time, both for fixed-time-step methods and for methods with variable time steps. A CFL-type condition relating the time step and the spatial mesh size is needed for the method in which the nonlinear term is explicit.
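The IMEX splitting described above can be illustrated on a toy scalar ODE (an illustrative stand-in, not the Navier-Stokes discretization): treat the stiff linear part implicitly and the nonlinear part by second-order explicit extrapolation, exactly the BDF2-IMEX pattern of the first scheme.

```python
import numpy as np

def bdf2_imex(lam, f, u0, tau, T):
    """BDF2-IMEX for u' = -lam*u + f(u): the linear term -lam*u is implicit,
    the nonlinear term is extrapolated explicitly as 2 f(u^n) - f(u^{n-1})."""
    M = int(round(T / tau))
    u_prev = u0
    # bootstrap the two-step method with one first-order IMEX Euler step
    u_curr = (u_prev + tau * f(u_prev)) / (1 + tau * lam)
    for _ in range(1, M):
        fstar = 2 * f(u_curr) - f(u_prev)   # explicit 2nd-order extrapolation
        # (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 tau) = -lam u^{n+1} + fstar
        u_next = (4 * u_curr - u_prev + 2 * tau * fstar) / (3 + 2 * tau * lam)
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

A convenient test problem is $u' = -u + u^2$, $u(0) = 1/2$, whose exact solution is $u(t) = 1/(1 + e^t)$; the scheme reproduces it to second-order accuracy in the time step. The role of the CFL-type condition in the abstract is analogous: the explicit treatment of the nonlinearity is only stable for time steps small enough relative to the spatial mesh.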

In this paper, we apply the discontinuous Galerkin finite element method to the time-dependent 2D incompressible Navier-Stokes model. We derive optimal error estimates in the $L^\infty(\textbf{L}^2)$-norm for the velocity and in the $L^\infty(L^2)$-norm for the pressure with initial data $\textbf{u}_0\in \textbf{H}_0^1\cap \textbf{H}^2$ and source function $\textbf{f}$ in the $L^\infty(\textbf{L}^2)$ space. These estimates are established with the help of a new $L^2$-projection and a modified Stokes operator on an appropriate broken Sobolev space, together with standard parabolic and elliptic duality arguments. The estimates are shown to be uniform under a smallness assumption on the data. Then, a fully discrete scheme based on the backward Euler method is analyzed, and fully discrete error estimates are derived. We would like to highlight that the established semi-discrete error estimates for the $L^\infty(\textbf{L}^2)$-norm of the velocity and the $L^\infty(L^2)$-norm of the pressure are optimal and sharper than those derived in earlier articles. Finally, numerical examples validate our theoretical findings.

A framework is presented for fitting inverse problem models via variational Bayes approximations. This methodology guarantees flexibility of statistical model specification for a broad range of applications, good accuracy, and reduced model fitting times when compared with standard Markov chain Monte Carlo methods. The message passing and factor graph fragment approach to variational Bayes that we describe facilitates streamlined implementation of approximate inference algorithms and forms the basis for software development. This approach allows for flexible inclusion of numerous response distributions and penalizations into the inverse problem model. Even though our work is circumscribed to one- and two-dimensional response variables, we lay down an infrastructure where efficient algorithm updates based on nullifying weak interactions between variables can also be derived for inverse problems in higher dimensions. Image processing applications motivated by biomedical and archaeological problems are included as illustrations.
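As a minimal, self-contained illustration of the kind of iterative update scheme variational Bayes relies on, here is textbook coordinate-ascent VI (CAVI) for a conjugate Normal-Gamma model. This is not the paper's message-passing/factor-graph-fragment machinery; the model and priors are illustrative assumptions.

```python
import numpy as np

def cavi_normal_gamma(y, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """CAVI for y_i ~ N(mu, 1/tau), mu | tau ~ N(mu0, 1/(lam0*tau)),
    tau ~ Gamma(a0, b0).  Mean-field q(mu) = N(m, s2), q(tau) = Gamma(a, b)."""
    N, ybar, ysq = len(y), y.mean(), np.sum(y ** 2)
    a = a0 + (N + 1) / 2.0                    # shape update is fixed
    m = (lam0 * mu0 + N * ybar) / (lam0 + N)  # posterior mean is fixed
    b = b0
    for _ in range(iters):
        s2 = 1.0 / ((lam0 + N) * a / b)       # uses current E[tau] = a/b
        # E_q[ sum (y_i - mu)^2 + lam0 (mu - mu0)^2 ] expanded in m and s2
        b = b0 + 0.5 * (ysq - 2 * m * N * ybar + N * (m ** 2 + s2)
                        + lam0 * ((m - mu0) ** 2 + s2))
    return m, s2, a, b
```

The two coupled updates (for `s2` and `b`) iterate to a fixed point, which is the same pattern — local updates exchanged between factors — that message-passing formulations of variational Bayes organize and modularize.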

Multivariate Analysis (MVA) comprises a family of well-known methods for feature extraction which exploit correlations among the input variables representing the data. One important property enjoyed by most such methods is the uncorrelation of the extracted features. Recently, regularized versions of MVA methods have appeared in the literature, mainly with the goal of gaining interpretability of the solution. In these cases, the solutions can no longer be obtained in closed form, and more complex optimization methods that rely on the iteration of two steps are frequently used. This paper adopts an alternative approach to solve this iterative problem efficiently. The main novelty of this approach lies in preserving several properties of the original methods, most notably the uncorrelation of the extracted features. Under this framework, we propose a novel method that takes advantage of the $\ell_{2,1}$ norm to perform variable selection during the feature extraction process. Experimental results over different problems corroborate the advantages of the proposed formulation in comparison to state-of-the-art formulations.
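To see why the $\ell_{2,1}$ norm performs variable selection, a small sketch helps: its proximal operator shrinks whole rows of a weight matrix and zeroes out rows with small norm, so each zeroed row corresponds to a dropped input variable. This is the standard proximal operator, shown here in isolation rather than inside the paper's MVA framework.

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1}: sum of the Euclidean norms of the rows of W."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, t):
    """Proximal operator of t * ||.||_{2,1}: row-wise soft thresholding.
    Rows whose norm is below t are set to zero -> variable selection."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W
```

For example, with rows [3, 4] (norm 5) and [0.1, 0] (norm 0.1) and threshold t = 1, the first row is shrunk to [2.4, 3.2] while the second is eliminated entirely, which is exactly the group-sparsity effect exploited during feature extraction.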

For the first time, a nonlinear interface problem on an unbounded domain with nonmonotone set-valued transmission conditions is analyzed. The investigated problem involves a nonlinear monotone partial differential equation in the interior domain and the Laplacian in the exterior domain. Such a scalar interface problem models nonmonotone frictional contact of elastic infinite media. The variational formulation of the interface problem leads to a hemivariational inequality, which lives on the unbounded domain and so cannot be treated numerically in a direct way. By boundary integral methods the problem is transformed and a novel hemivariational inequality (HVI) is obtained that lives only on the interior domain and the coupling boundary. Thus for discretization the coupling of finite elements and boundary elements is the method of choice. In addition, smoothing techniques of nondifferentiable optimization are adapted and the nonsmooth part of the HVI is regularized. Thus we reduce the original variational problem to a finite-dimensional problem that can be solved by standard optimization tools. We establish not only convergence results for the total approximation procedure, but also an asymptotic error estimate for the regularized HVI.
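The smoothing step mentioned above can be illustrated by the most common such regularization: replacing a nonsmooth term like $|t|$ by the $C^1$ function $\sqrt{t^2+\varepsilon^2}$, which differs from $|t|$ by at most $\varepsilon$ and has a well-defined gradient everywhere. This is a generic textbook smoothing, shown as a stand-in for the (unspecified) nonsmooth part of the paper's HVI.

```python
import numpy as np

def smooth_abs(t, eps):
    """C^1 regularisation of |t|: sqrt(t^2 + eps^2).
    The approximation error is between 0 and eps for every t."""
    return np.sqrt(t * t + eps * eps)

def smooth_abs_grad(t, eps):
    """Everywhere-defined derivative of the regularised term, so standard
    smooth optimization tools (gradient methods, Newton) become applicable."""
    return t / np.sqrt(t * t + eps * eps)
```

Driving eps to zero along the optimization recovers the original nonsmooth problem in the limit, which is the mechanism behind the asymptotic error estimate for the regularized HVI.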

The difficulty in specifying rewards for many real-world problems has led to an increased focus on learning rewards from human feedback, such as demonstrations. However, there are often many different reward functions that explain the human feedback, leaving agents with uncertainty over what the true reward function is. While most policy optimization approaches handle this uncertainty by optimizing for expected performance, many applications demand risk-averse behavior. We derive a novel policy gradient-style robust optimization approach, PG-BROIL, that optimizes a soft-robust objective that balances expected performance and risk. To the best of our knowledge, PG-BROIL is the first policy optimization algorithm robust to a distribution of reward hypotheses which can scale to continuous MDPs. Results suggest that PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse and outperforms state-of-the-art imitation learning algorithms when learning from ambiguous demonstrations by hedging against uncertainty, rather than seeking to uniquely identify the demonstrator's reward function.
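The soft-robust objective described above blends expected performance with a risk measure over the posterior of reward hypotheses. A minimal numerical sketch of one common instantiation of such an objective, a convex combination of the expectation and the conditional value at risk (CVaR) over a finite set of hypotheses, looks as follows; the function name and the exact blend are illustrative assumptions, not the PG-BROIL gradient estimator itself.

```python
import numpy as np

def soft_robust_objective(returns, probs, lam=0.5, alpha=0.9):
    """lam * E[return] + (1 - lam) * CVaR_alpha over a finite posterior.
    returns[i] is the policy's expected return under reward hypothesis i,
    probs[i] is that hypothesis's posterior probability."""
    order = np.argsort(returns)                 # worst hypotheses first
    r, p = np.asarray(returns, float)[order], np.asarray(probs, float)[order]
    tail = 1.0 - alpha                          # probability mass in the tail
    cum, cvar = 0.0, 0.0
    for ri, pi in zip(r, p):                    # average over the worst tail
        w = min(pi, tail - cum)
        if w <= 0:
            break
        cvar += w * ri
        cum += w
    cvar /= tail
    exp = float(np.dot(returns, probs))
    return lam * exp + (1.0 - lam) * cvar
```

Setting lam = 1 recovers risk-neutral expected performance, while lam = 0 yields pure tail risk-aversion; sweeping lam traces out the family of behaviors from risk-neutral to risk-averse mentioned in the abstract.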

Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
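The personalized-PageRank instance of generalized graph diffusion has a closed form that is easy to sketch: $S = \alpha (I - (1-\alpha) T)^{-1}$ for a transition matrix $T$, followed by sparsification of small entries as in GDC. The dense-inverse computation below is only practical for small graphs and is meant as a conceptual sketch, not the paper's scalable implementation.

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """Personalized-PageRank diffusion matrix S = alpha (I - (1-alpha) T)^{-1}
    with T = A D^{-1} column-stochastic, then thresholded to a sparse matrix
    (the sparsification step used by GDC)."""
    d = A.sum(axis=0)                  # node degrees (column sums)
    T = A / d                          # column-stochastic transition matrix
    n = A.shape[0]
    S = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T)
    S[S < eps] = 0.0                   # drop weak diffused edges
    return S
```

Before thresholding, each column of S is a probability distribution (it sums to 1), concentrated around the seed node: on a triangle graph, the diagonal entry exceeds the off-diagonal ones, reflecting the locality of the diffusion.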

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
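The key fact exploited here is that the (non-stochastic) CIR process has a known transition law, so it can be simulated exactly with no discretisation error: the transition is a scaled noncentral chi-squared distribution. The sketch below shows this classical exact step (parameter values are illustrative), not the paper's stochastic CIR variant with subsampled gradients.

```python
import numpy as np
from scipy.stats import ncx2

def cir_exact_step(x, a, b, sigma, dt, rng):
    """One exact transition of the CIR process dx = a(b - x) dt + sigma sqrt(x) dW,
    sampled from its noncentral chi-squared transition distribution.
    `x` may be a scalar or an array of current states."""
    c = 2.0 * a / (sigma ** 2 * (1.0 - np.exp(-a * dt)))
    df = 4.0 * a * b / sigma ** 2              # degrees of freedom
    nc = 2.0 * c * x * np.exp(-a * dt)         # noncentrality parameter
    return ncx2.rvs(df, nc, random_state=rng) / (2.0 * c)
```

Because the step is exact, the sample mean after one transition matches the analytic conditional mean $b(1-e^{-a\Delta t}) + x_0 e^{-a\Delta t}$, and samples stay strictly positive, which is exactly the boundary behavior that discretised Langevin schemes get wrong near sparse simplex components.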

In this paper, we develop the continuous time dynamic topic model (cDTM). The cDTM is a dynamic topic model that uses Brownian motion to model the latent topics through a sequential collection of documents, where a "topic" is a pattern of word use that we expect to evolve over the course of the collection. We derive an efficient variational approximate inference algorithm that takes advantage of the sparsity of observations in text, a property that lets us easily handle many time points. In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized. Moreover, the complexity of variational inference for the dDTM grows quickly as time granularity increases, a drawback which limits fine-grained discretization. We demonstrate the cDTM on two news corpora, reporting both predictive perplexity and the novel task of time stamp prediction.
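The generative core described above, topic natural parameters evolving by Brownian motion over arbitrary (possibly irregular) time stamps, can be sketched directly; the function name and parameterisation below are illustrative, and the softmax mapping from natural parameters to word distributions follows the standard logistic-normal construction used by dynamic topic models.

```python
import numpy as np

def sample_topic_trajectory(times, vocab_size, variance, rng):
    """Sample a cDTM-style topic trajectory: the natural parameters beta follow
    Brownian motion observed at the given (irregular) time stamps, and the
    topic's word distribution at each time is softmax(beta)."""
    beta = rng.normal(0.0, 1.0, vocab_size)    # beta at the first time stamp
    dists = []
    prev_t = times[0]
    for t in times:
        # Brownian increment with variance proportional to the elapsed time;
        # irregular gaps need no discretization, unlike the dDTM
        beta = beta + rng.normal(0.0, np.sqrt(variance * (t - prev_t)),
                                 vocab_size)
        prev_t = t
        e = np.exp(beta - beta.max())          # numerically stable softmax
        dists.append(e / e.sum())
    return np.array(dists)
```

Note that the increment variance depends only on the gap between consecutive time stamps, which is why the continuous-time model handles many or unevenly spaced time points without the granularity blow-up the abstract attributes to the dDTM.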
