
In this paper, we formulate and study substructuring-type algorithms for the Cahn-Hilliard (CH) equation, which was originally proposed to describe phase separation in a binary molten alloy below the critical temperature and has since appeared in many fields, ranging from tumour growth simulation and image processing to thin liquid films and population dynamics. Since the CH equation is nonlinear, it is important to develop robust numerical techniques for its solution. Here we present the formulation of the Dirichlet-Neumann (DN) and Neumann-Neumann (NN) methods applied to the CH equation and study their convergence behaviour. We consider the domain-decomposition-based DN and NN methods in one and two space dimensions for two subdomains, and we extend the study to the multi-subdomain setting for the NN method. We verify our findings with numerical results.
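
The CH equation itself requires a nonlinear solve on each subdomain, so as a hedged, minimal sketch of the substructuring idea only, the following applies a Dirichlet-Neumann iteration to the 1D Poisson model problem -u'' = f on (0,1) with two subdomains. The interface location, relaxation parameter and right-hand side are illustrative choices, not values from the paper.

```python
# Minimal sketch (not the paper's CH solver): Dirichlet-Neumann substructuring
# for -u'' = f on (0,1), u(0) = u(1) = 0, split at an interface point x = a.
import numpy as np

f = lambda x: np.ones_like(x)          # -u'' = 1  =>  exact u(x) = x(1-x)/2
n, a = 50, 0.5                         # interior points per subdomain, interface
h1, h2 = a / (n + 1), (1.0 - a) / (n + 1)

def neg_lap(m, h):
    """1D finite-difference discretisation of -d^2/dx^2 on m interior points."""
    return (np.diag(2.0 * np.ones(m))
            - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h**2

theta, lam = 0.5, 0.0                  # relaxation parameter, interface trace
for it in range(30):
    # Dirichlet solve on (0, a):  u1(0) = 0,  u1(a) = lam
    A1 = neg_lap(n, h1)
    b1 = f(np.linspace(h1, a - h1, n))
    b1[-1] += lam / h1**2
    u1 = np.linalg.solve(A1, b1)
    # second-order one-sided approximation of the interface flux u1'(a)
    flux = (lam - u1[-1]) / h1 - 0.5 * h1 * f(np.array([a]))[0]

    # Neumann solve on (a, 1):  u2'(a) = flux,  u2(1) = 0
    # Unknowns: the interface value u2(a) plus the n interior values.
    A2 = np.zeros((n + 1, n + 1))
    b2 = np.zeros(n + 1)
    A2[0, 0], A2[0, 1] = -1.0 / h2, 1.0 / h2         # (u2(a+h2) - u2(a)) / h2
    b2[0] = flux - 0.5 * h2 * f(np.array([a]))[0]    # ghost-point closure of -u'' = f
    A2[1:, 1:] = neg_lap(n, h2)
    A2[1, 0] = -1.0 / h2**2                          # coupling to the interface value
    b2[1:] = f(np.linspace(a + h2, 1.0 - h2, n))
    u2 = np.linalg.solve(A2, b2)

    lam_new = theta * u2[0] + (1.0 - theta) * lam    # relaxed interface update
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new

print(f"converged in {it} iterations, u(0.5) ~ {lam:.6f} (exact 0.125)")
```

For this symmetric two-subdomain split the relaxation parameter theta = 0.5 makes the interface map essentially a direct solve, so the iteration converges in very few steps; for unequal subdomains the optimal theta changes, which is the kind of behaviour the convergence analysis in the paper quantifies.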

Related content

An evolving surface finite element discretisation is analysed for the evolution of a closed two-dimensional surface governed by a system coupling a generalised forced mean curvature flow and a reaction-diffusion process on the surface, inspired by a gradient flow of a coupled energy. Two algorithms are proposed, both based on a system coupling the diffusion equation to evolution equations for geometric quantities in the velocity law for the surface. One of the numerical methods is proved to converge in the $H^1$ norm with optimal order for finite elements of degree at least two. We present numerical experiments illustrating the convergence behaviour and demonstrating the qualitative properties of the flow: preservation of mean convexity, loss of convexity, weak maximum principles, and the occurrence of self-intersections.

In this paper we propose a deep-learning-based numerical scheme for strongly coupled FBSDEs stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value of the backward equation is not a free parameter, and with a new loss function given by the weighted sum of the cost of the control problem and a variance term which coincides with the mean square error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs fails for a simple linear-quadratic control problem, and we explain why the new method works. Under regularity and boundedness assumptions on the exact controls of the time-continuous and time-discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one of them being the problem on which the direct extension of the deep BSDE method fails.
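
As a hedged numpy sketch of the loss structure described above, the following evaluates the weighted sum of the control cost and the mean-square terminal mismatch for a scalar linear-quadratic problem at its known exact solution, rather than training neural networks. The dynamics dX = u dt + dW, the cost E[int_0^T u^2 dt + X_T^2], and the weight are illustrative choices, not the paper's example; the value function is V(t,x) = p(t) x^2 + ln(1+T-t) with p(t) = 1/(1+T-t), the optimal control is u = -p(t) X, and the associated BSDE driver is u^2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M, x0, weight = 1.0, 100, 100_000, 1.0, 1.0
dt = T / N
p = lambda t: 1.0 / (1.0 + T - t)

X = np.full(M, x0)
Y = np.full(M, p(0.0) * x0**2 + np.log(1.0 + T))    # Y_0 = V(0, x0)
cost = np.zeros(M)
for n in range(N):
    t = n * dt
    u = -p(t) * X                                   # optimal feedback control
    Z = 2.0 * p(t) * X                              # Z = sigma * V_x
    dW = rng.standard_normal(M) * np.sqrt(dt)
    cost += u**2 * dt                               # accumulate the running cost
    Y = Y - u**2 * dt + Z * dW                      # forward Euler for the BSDE
    X = X + u * dt + dW                             # forward Euler for the SDE
cost += X**2                                        # terminal cost g(X_T) = X_T^2

terminal_mse = np.mean((Y - X**2) ** 2)             # mismatch in Y_T = g(X_T)
loss = np.mean(cost) + weight * terminal_mse
print(f"MC cost      : {np.mean(cost):.4f}  (exact V(0,x0) = "
      f"{x0**2 / (1 + T) + np.log(1 + T):.4f})")
print(f"terminal MSE : {terminal_mse:.2e}")
print(f"loss         : {loss:.4f}")
```

In the actual method both the control and the Y-dynamics would be parametrised by networks and the loss above would be minimised over their weights; the sketch only shows that the loss is small, up to time-discretisation error, at the exact solution.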

Computations of incompressible flows with velocity boundary conditions require solution of a Poisson equation for pressure with all Neumann boundary conditions. Discretization of such a Poisson equation results in a rank-deficient matrix of coefficients. When a non-conservative discretization method such as finite difference, finite element, or spectral scheme is used, such a matrix also generates an inconsistency which causes the residuals of the iterative solution to saturate at a threshold level that depends on the spatial resolution and the order of the discretization scheme. In this paper, we examine this inconsistency for a high-order meshless discretization scheme suitable for solving the equations on a complex domain. The high-order meshless method uses polyharmonic spline radial basis functions (PHS-RBF) with appended polynomials to interpolate scattered data and constructs the discrete equations by collocation. The PHS-RBF provides the flexibility to vary the order of discretization by increasing the degree of the appended polynomial. In this study, we examine the convergence of the inconsistency for different spatial resolutions and for different degrees of the appended polynomials by solving the Poisson equation for a manufactured solution as well as the Navier-Stokes equations for several fluid flows. We observe that the inconsistency decreases faster than the error in the final solution and eventually becomes vanishingly small at sufficient spatial resolution. The rate of convergence of the inconsistency is observed to be similar to or better than the rate of convergence of the discretization errors. This beneficial observation makes it unnecessary to regularize the Poisson equation by fixing either the mean pressure or the pressure at an arbitrary point. A simple point solver such as SOR is seen to converge well, although it can be further accelerated using multilevel methods.
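
A hedged finite-difference analogue of the issue described above (the paper itself uses a PHS-RBF meshless discretization): for -u'' = f on (0,1) with u'(0) = u'(1) = 0 the discrete operator is rank-deficient with the constant vector in its null space, sampling f from a manufactured solution leaves a small inconsistency in the right-hand side, and a simple Gauss-Seidel solver then stalls near that level unless the inconsistency is projected out.

```python
import numpy as np

n = 40                                        # number of grid intervals
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = -2.0 + 12.0 * x - 12.0 * x**2             # from u(x) = x^2 (1-x)^2, int f = 0

# Symmetric second-order discretization of -d^2/dx^2 with Neumann conditions.
A = np.zeros((n + 1, n + 1))
A[0, 0], A[0, 1] = 1.0, -1.0
A[n, n], A[n, n - 1] = 1.0, -1.0
for i in range(1, n):
    A[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]
A /= h**2
b = f.copy()
b[0] *= 0.5                                   # half weights on the boundary rows
b[n] *= 0.5

ones = np.ones(n + 1)
print("null-space check ||A @ 1||:", np.linalg.norm(A @ ones))
print("inconsistency 1^T b      :", ones @ b)      # zero for a consistent system

def gauss_seidel_residual(A, b, sweeps):
    u = np.zeros_like(b)
    for _ in range(sweeps):
        for i in range(len(b)):
            u[i] += (b[i] - A[i] @ u) / A[i, i]
    return np.linalg.norm(b - A @ u)

print("residual, raw b          :", gauss_seidel_residual(A, b, 2000))
b_proj = b - (ones @ b / (ones @ ones)) * ones     # remove the null-space part
print("residual, projected b    :", gauss_seidel_residual(A, b_proj, 2000))
```

The residual for the raw right-hand side saturates at roughly the size of the inconsistency, while the projected system keeps converging; the paper's observation is that for the high-order meshless scheme this inconsistency shrinks with refinement fast enough that the projection step is unnecessary in practice.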

In this paper we are concerned with Trefftz discretizations of the time-dependent linear wave equation in anisotropic media in domains $\Omega \subset \mathbb{R}^d$ of arbitrary space dimension $d \in \mathbb{N}$. We propose two variants of the Trefftz DG method, define novel plane wave basis functions based on rigorous choices of scaling transformations and coordinate transformations, and prove that the corresponding approximate solutions possess optimal-order error estimates with respect to the meshwidth $h$ and the condition number of the coefficient matrices, respectively. In addition, we propose a global Trefftz DG method combined with local DG methods to solve the time-dependent linear nonhomogeneous wave equation in anisotropic media. In particular, the error analysis holds for the (nonhomogeneous) Dirichlet, Neumann, and mixed boundary conditions of the original PDEs. Furthermore, a strategy to discretize the model in heterogeneous media is proposed. The numerical results verify the theoretical results and show that the resulting approximate solutions possess high accuracy.
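
A small symbolic check (a sketch, not the paper's construction) of the basic fact behind plane-wave Trefftz bases for anisotropic media: for a constant symmetric positive definite matrix A and a unit direction d, the function u(x,t) = cos(d.x - w t) with w = sqrt(d^T A d) solves u_tt = div(A grad u) exactly, so it can serve as a local basis function in homogeneous media. The matrix A and direction d below are arbitrary illustrative choices.

```python
import sympy as sp

x, y, t = sp.symbols("x y t", real=True)
A = sp.Matrix([[2, sp.Rational(1, 2)], [sp.Rational(1, 2), 1]])   # example SPD matrix
d = sp.Matrix([sp.Rational(3, 5), sp.Rational(4, 5)])             # unit direction
w = sp.sqrt((d.T * A * d)[0, 0])                                  # matching frequency

u = sp.cos(d[0] * x + d[1] * y - w * t)
grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
div_A_grad_u = sp.diff((A * grad_u)[0], x) + sp.diff((A * grad_u)[1], y)
residual = sp.simplify(sp.diff(u, t, 2) - div_A_grad_u)
print("PDE residual:", residual)        # prints 0
```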

Pre-integration is an extension of conditional Monte Carlo to quasi-Monte Carlo and randomized quasi-Monte Carlo. It can reduce but not increase the variance in Monte Carlo. For quasi-Monte Carlo it can bring about improved regularity of the integrand with potentially greatly improved accuracy. Pre-integration is ordinarily done by integrating out one of the $d$ input variables to a function. In the common case of a Gaussian integral one can also pre-integrate over any linear combination of variables. We propose to do that, choosing the first eigenvector in an active subspace decomposition as the pre-integrated linear combination. We find in numerical examples that this active subspace pre-integration strategy is competitive with pre-integrating the first variable in the principal components construction on the Asian option, where principal components are known to be very effective. It outperforms other pre-integration methods on some basket options where there is no well-established default. We show theoretically that, just as in Monte Carlo, pre-integration can reduce but not increase the variance when one uses scrambled net integration. We show that the lead eigenvector in an active subspace decomposition is closely related to the vector that maximizes a less computationally tractable criterion using a Sobol' index to find the most important linear combination of Gaussian variables. They optimize similar expectations involving the gradient. We show that the Sobol' index criterion for the leading eigenvector is invariant to the way that one chooses the remaining $d-1$ eigenvectors with which to sample the Gaussian vector.
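
The following sketch (with an illustrative smooth test function, not one of the paper's option examples) shows how the pre-integration direction could be chosen via an active subspace: estimate C = E[grad f(Z) grad f(Z)^T] for Z ~ N(0, I_d) by Monte Carlo, take its leading eigenvector, and rotate coordinates so that the first new variable points along that direction; pre-integration would then be applied to that single variable.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
w = np.linspace(1.0, 0.1, d)                  # weights of the toy integrand

def grad_f(z):                                # gradient of f(z) = exp(w @ z)
    return np.exp(w @ z) * w

# Monte Carlo estimate of the active-subspace matrix C.
m = 20_000
C = np.zeros((d, d))
for _ in range(m):
    g = grad_f(rng.standard_normal(d))
    C += np.outer(g, g)
C /= m

eigvals, eigvecs = np.linalg.eigh(C)
theta = eigvecs[:, -1]                        # leading eigenvector of C
print("leading direction:", np.round(theta * np.sign(theta[0]), 3))
print("w / ||w||        :", np.round(w / np.linalg.norm(w), 3))

# Orthogonal matrix with first column theta: z = Q y makes y_1 the
# pre-integration variable.
Q, _ = np.linalg.qr(np.column_stack([theta, np.eye(d)[:, 1:]]))
Q[:, 0] *= np.sign(Q[:, 0] @ theta)           # fix the sign of the first column
print("Q^T Q == I:", np.allclose(Q.T @ Q, np.eye(d)))
print("first column aligned with theta:", np.allclose(Q[:, 0], theta))
```

For this separable test function every sampled gradient is proportional to w, so the leading eigenvector recovers w/||w|| exactly; for option payoffs the direction must be estimated from (possibly smoothed) pathwise gradients.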

In this article, we have considered a nonlinear nonlocal time dependent fourth order equation demonstrating the deformation of a thin and narrow rectangular plate. We propose $C^1$ conforming virtual element method (VEM) of arbitrary order, $k\ge2$, to approximate the model problem numerically. We employ VEM to discretize the space variable and fully implicit scheme for temporal variable. Well-posedness of the fully discrete scheme is proved under certain conditions on the physical parameters, and we derive optimal order of convergence in both space and time variable. Finally, numerical experiments are presented to illustrate the behaviour of the proposed numerical scheme.

This paper studies the caching system of multiple cache-enabled users with random demands. Under nonuniform file popularity, we thoroughly characterize the optimal uncoded cache placement structure for the coded caching scheme (CCS). Formulating the cache placement as an optimization problem to minimize the average delivery rate, we identify the file group structure in the optimal solution. We show that, regardless of the file popularity distribution, there are \emph{at most three file groups} in the optimal cache placement, where files within a group have the same cache placement. We further characterize the complete structure of the optimal cache placement and obtain the closed-form solution in each of the three file group structures. A simple algorithm is developed to obtain the final optimal cache placement by comparing a set of candidate closed-form solutions computed in parallel. We provide insight into the file groups formed by the optimal cache placement. The optimal placement solution also indicates that coding between file groups may be explored during delivery, in contrast to the existing suboptimal file grouping schemes. Using the file group structure in the optimal cache placement for the CCS, we propose a new information-theoretic converse bound for coded caching that is tighter than the existing best one. Moreover, we characterize the file subpacketization in the CCS with the optimal cache placement solution and show that the maximum subpacketization level in the worst case scales as $\mathcal{O}(2^K/\sqrt{K})$ for $K$ users.
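
A short numerical check of the subpacketization scaling mentioned above: in the coded caching scheme each file is split into C(K,t) subfiles for placement parameter t, so the worst case over t is the central binomial coefficient C(K, floor(K/2)), which grows like 2^K/sqrt(K) (more precisely ~ 2^K * sqrt(2/(pi K)) by Stirling's formula).

```python
from math import comb, sqrt, pi

for K in (8, 16, 32, 64):
    worst = max(comb(K, t) for t in range(K + 1))       # worst-case subpacketization
    stirling = 2**K * sqrt(2.0 / (pi * K))              # Stirling approximation
    print(f"K = {K:3d}: max_t C(K,t) = {worst:>20d}, "
          f"2^K sqrt(2/(pi K)) = {stirling:>22.1f}, ratio = {worst / stirling:.3f}")
```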

In this article, we deal with the efficient computation of the Wright function in the cases of interest for the expression of solutions of some fractional differential equations. The proposed algorithm is based on the inversion of the Laplace transform of a particular expression of the Wright function for which we discuss in detail the error analysis. We also present a code package that implements the algorithm proposed here in different programming languages. The analysis and implementation are accompanied by an extensive set of numerical experiments that validate both the theoretical estimates of the error and the applicability of the proposed method for representing the solutions of fractional differential equations.
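
The paper's algorithm is based on inverting a Laplace transform; as a simple point of reference, and only for small to moderate arguments, the Wright function can also be evaluated directly from its defining series W_{lam,mu}(z) = sum_{k>=0} z^k / (k! Gamma(lam k + mu)). The sketch below truncates that series adaptively and checks two known special cases; it is not the paper's method.

```python
import math

def wright_series(z, lam, mu, tol=1e-15, kmax=150):
    """Truncated defining series for W_{lam,mu}(z); no handling of Gamma poles."""
    total, k = 0.0, 0
    while k < kmax:
        term = z**k / (math.factorial(k) * math.gamma(lam * k + mu))
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
        k += 1
    return total

# Sanity checks against known special cases:
#   W_{0,1}(z) = e^z,   W_{1,1}(z) = I_0(2 sqrt(z)) for z >= 0.
z = 0.7
print(wright_series(z, 0.0, 1.0), math.exp(z))
try:
    from scipy.special import iv
    print(wright_series(z, 1.0, 1.0), iv(0, 2.0 * math.sqrt(z)))
except ImportError:
    pass
```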

Optimal-order uniform-in-time $H^1$-norm error estimates are given for semi- and full discretizations of mean curvature flow of surfaces in arbitrarily high codimension. The proposed and studied numerical method is based on a parabolic system coupling the surface flow to evolution equations for the mean curvature vector and for the orthogonal projection onto the tangent space. The algorithm uses evolving surface finite elements and linearly implicit backward difference formulae. This numerical method admits a convergence analysis in the case of finite elements of polynomial degree at least two and backward difference formulae of orders two to five. Numerical experiments in codimension 2 illustrate and complement our theoretical results.
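
The surface finite element method itself is beyond a short example, but the time discretisation named above, a linearly implicit backward difference formula, is easy to illustrate on a toy semilinear system u' = A u + g(u): the stiff linear part is treated implicitly while the nonlinearity is extrapolated from previous steps, so only one linear solve is needed per time step. This is a BDF2 sketch; the matrix A and nonlinearity g are illustrative, not from the paper.

```python
import numpy as np

m = 50
h = 1.0 / (m + 1)
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2          # stiff linear part
g = lambda u: u - u**3                               # smooth nonlinearity
x = np.linspace(h, 1.0 - h, m)
u0 = np.sin(np.pi * x)

def li_bdf2(u0, T, n_steps):
    """Linearly implicit BDF2: implicit in A, extrapolated nonlinearity."""
    tau = T / n_steps
    I = np.eye(len(u0))
    # one starting step of linearly implicit backward Euler
    u_old, u = u0, np.linalg.solve(I - tau * A, u0 + tau * g(u0))
    M = 1.5 * I - tau * A                            # constant BDF2 matrix
    for _ in range(n_steps - 1):
        rhs = 2.0 * u - 0.5 * u_old + tau * g(2.0 * u - u_old)
        u_old, u = u, np.linalg.solve(M, rhs)
    return u

T = 0.1
ref = li_bdf2(u0, T, 4096)                           # fine-step reference
errs = [np.linalg.norm(li_bdf2(u0, T, n) - ref) * np.sqrt(h) for n in (32, 64, 128)]
for n, e0, e1 in zip((64, 128), errs[:-1], errs[1:]):
    print(f"n = {n:4d}: error = {e1:.3e}, observed order = {np.log2(e0 / e1):.2f}")
```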

In recent years, active learning has evolved into a popular paradigm for utilizing user feedback to improve the accuracy of learning algorithms. Active learning works by selecting the most informative sample among the unlabeled data and querying the label of that point from the user. Many different methods, such as uncertainty sampling and minimum risk sampling, have been used to select the most informative sample in active learning. Although many active learning algorithms have been proposed so far, most of them work with binary or multi-class classification problems and therefore cannot be applied to problems in which only samples from one class, together with a set of unlabeled data, are available. Such problems arise in many real-world situations and are known as the problem of learning from positive and unlabeled data. In this paper we propose an active learning algorithm that can work when only samples of one class as well as a set of unlabeled data are available. Our method works by separately estimating the probability density of the positive and unlabeled points and then computing the expected value of informativeness, which removes a hyper-parameter and gives a better measure of informativeness. Experiments and empirical analysis show promising results compared to other similar methods.
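
A toy sketch of the general recipe described above (not the paper's exact informativeness measure): estimate the densities of the positive and unlabeled samples separately with kernel density estimators, convert them into a posterior-like score for each unlabeled point, and query the point whose predicted label is most uncertain. The class prior `prior`, the KDE bandwidths, and the entropy-based score are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Synthetic 2D data: positives around (0, 0), negatives around (3, 3).
pos = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
neg = rng.normal(loc=3.0, scale=1.0, size=(300, 2))
unlabeled = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)), neg])

kde_pos = KernelDensity(bandwidth=0.5).fit(pos)        # density of positives
kde_unl = KernelDensity(bandwidth=0.5).fit(unlabeled)  # density of unlabeled pool

prior = 0.5                                   # assumed fraction of positives
p_pos = np.exp(kde_pos.score_samples(unlabeled))
p_unl = np.exp(kde_unl.score_samples(unlabeled))
posterior = np.clip(prior * p_pos / np.maximum(p_unl, 1e-12), 1e-6, 1 - 1e-6)

# Uncertainty-style informativeness: entropy of the estimated posterior.
entropy = -(posterior * np.log(posterior) + (1 - posterior) * np.log(1 - posterior))
query_idx = int(np.argmax(entropy))
print("query point:", unlabeled[query_idx], " posterior ~", round(posterior[query_idx], 3))
```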
