The mixed form of the Cahn-Hilliard equations is discretized by the hybridizable discontinuous Galerkin method. For any chemical energy density, existence and uniqueness of the numerical solution are obtained. The scheme is proved to be unconditionally stable. Convergence of the method is obtained by deriving a priori error estimates that are valid for the Ginzburg-Landau chemical energy density and for convex domains. The paper also contains discrete functional tools, namely discrete Agmon and Gagliardo-Nirenberg inequalities, which are proved to be valid in the hybridizable discontinuous Galerkin spaces.
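
For concreteness, the mixed formulation splits the fourth-order Cahn-Hilliard equation into two second-order equations by introducing the chemical potential $w$; with the Ginzburg-Landau density as a representative choice (a sketch of the standard setting, which may differ from the paper's exact scaling and boundary conditions), it reads
\[
\partial_t u - \Delta w = 0, \qquad w = -\varepsilon^2 \Delta u + \Phi'(u), \qquad \Phi(u) = \tfrac{1}{4}\bigl(u^2 - 1\bigr)^2,
\]
so that $\Phi'(u) = u^3 - u$ is the only nonlinearity and each equation involves only a second-order operator, which is the form the HDG discretization acts on.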

Related content

This work deals with the numerical solution of systems of oscillatory second-order differential equations which often arise from the semi-discretization in space of partial differential equations. Since these differential equations exhibit pronounced or highly oscillatory behavior, standard numerical methods are known to perform poorly. Our approach consists in directly discretizing the problem by means of Gautschi-type integrators based on $\operatorname{sinc}$ matrix functions. The novelty here is the use of a suitable rational approximation formula for the $\operatorname{sinc}$ matrix function, which makes it possible to apply a rational Krylov-like approximation method with suitable choices of poles. In particular, we discuss the application of the whole strategy to a finite element discretization of the wave equation.
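
For reference, the classical Gautschi-type two-step scheme for $q'' = -\Omega^2 q + g(q)$ with step size $h$ (written here in its standard form; the paper's variant targets the matrix functions appearing below via rational Krylov approximation) is
\[
q_{n+1} - 2\cos(h\Omega)\,q_n + q_{n-1} = h^2 \operatorname{sinc}^2\!\bigl(\tfrac{h\Omega}{2}\bigr)\, g(q_n), \qquad \operatorname{sinc}(\xi) = \frac{\sin \xi}{\xi}.
\]
Evaluating the action of these $\operatorname{sinc}$-type matrix functions on vectors is the dominant cost per step, which is precisely what a rational approximation with suitable pole choices is designed to accelerate.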

We introduce an $hp$-version discontinuous Galerkin finite element method (DGFEM) for the linear Boltzmann transport problem. A key feature of this new method is that, while offering arbitrary order convergence rates, it may be implemented in an almost identical form to standard multigroup discrete ordinates methods, meaning that solutions can be computed efficiently with high accuracy and in parallel within existing software. This method provides a unified discretisation of the space, angle, and energy domains of the underlying integro-differential equation and naturally incorporates both local mesh and local polynomial degree variation within each of these computational domains. Moreover, general polytopic elements can be handled by the method, enabling efficient discretisations of problems posed on complicated spatial geometries. We establish the stability of the proposed method and carry out an $hp$-version a priori error analysis, deriving suitable $hp$-approximation estimates together with a novel inf-sup bound. Numerical experiments highlighting the performance of the method for both polyenergetic and monoenergetic problems are presented.
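
For orientation, a mono-energetic form of the underlying linear Boltzmann transport equation reads (the paper treats the full space-angle-energy setting)
\[
\boldsymbol{\Omega}\cdot\nabla_{x}\psi(x,\boldsymbol{\Omega}) + \sigma_t(x)\,\psi(x,\boldsymbol{\Omega}) = \int_{\mathbb{S}^2}\sigma_s(x,\boldsymbol{\Omega}\cdot\boldsymbol{\Omega}')\,\psi(x,\boldsymbol{\Omega}')\,\mathrm{d}\boldsymbol{\Omega}' + f(x,\boldsymbol{\Omega}),
\]
where $\psi$ is the angular flux and $\sigma_t$, $\sigma_s$ are the total and scattering cross-sections. A discrete ordinates method couples a finite set of angular directions through the scattering integral, and it is this structure that the proposed DGFEM mirrors in its implementation.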

This paper is devoted to the study of Bingham flow with variable density. We propose a local bi-viscosity regularization of the stress tensor based on a Huber smoothing step. Our computational approach is then based on a second-order, divergence-conforming discretization of the Huber-regularized Bingham constitutive equations, coupled with a discontinuous Galerkin scheme for the mass density. We take advantage of the properties of the divergence-conforming and discontinuous Galerkin formulations to incorporate upwind discretizations that stabilize the formulation. The stability of the continuous problem and of the fully discrete scheme is analyzed. Further, a semismooth Newton method is proposed for solving the fully discretized system of equations at each time step. Finally, several numerical examples that illustrate the main features of the problem and the properties of the numerical scheme are presented.
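
To fix ideas, a common Huber-type (bi-viscosity) regularization of the Bingham stress, with plastic viscosity $\mu$, yield stress $\tau_y$, symmetrized velocity gradient $Du$, and smoothing parameter $\sigma > 0$, reads (a generic sketch; the paper's local variant may differ in detail)
\[
\boldsymbol{\tau}(Du) =
\begin{cases}
2\mu\,Du + \tau_y \dfrac{Du}{|Du|}, & |Du| \ge \sigma,\\[1ex]
2\mu\,Du + \dfrac{\tau_y}{\sigma}\,Du, & |Du| < \sigma,
\end{cases}
\]
which replaces the non-differentiable plug region near $Du = 0$ by a highly viscous linear branch; this piecewise-smooth structure is what makes a semismooth Newton method a natural solver.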

We consider additive Schwarz methods for boundary value problems involving the $p$-Laplacian. While the existing theoretical estimates for the convergence rate of additive Schwarz methods for the $p$-Laplacian are sublinear, the actual convergence rate observed in numerical experiments is linear. In this paper, we bridge the gap between these theoretical and numerical results by analyzing the linear convergence rate of additive Schwarz methods for the $p$-Laplacian. In order to estimate the linear convergence rate of the methods, we present two essential components. Firstly, we present a new abstract convergence theory for additive Schwarz methods written in terms of a quasi-norm. This quasi-norm exhibits behavior similar to that of the Bregman distance of the convex energy functional associated with the problem. Secondly, we provide a quasi-norm version of the Poincaré–Friedrichs inequality, which is essential for deriving a quasi-norm stable decomposition for a two-level domain decomposition setting.
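
For context, the energy functional and a quasi-norm of the kind referred to above can be written as (our notation, following the Barrett-Liu quasi-norm; the paper's definitions may differ in normalization)
\[
F(v) = \frac{1}{p}\int_\Omega |\nabla v|^p \,\mathrm{d}x - \int_\Omega f v \,\mathrm{d}x,
\qquad
|v|_{(p,u)}^2 = \int_\Omega \bigl(|\nabla u| + |\nabla v|\bigr)^{p-2}\, |\nabla v|^2 \,\mathrm{d}x,
\]
the point being that the Bregman distance $F(u+v) - F(u) - \langle F'(u), v\rangle$ behaves like $|v|_{(p,u)}^2$, which is what allows linear convergence bounds to be expressed in this quasi-norm.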

The Multiscale Hierarchical Decomposition Method (MHDM) was introduced as an iterative method for total variation regularization, with the aim of recovering details at various scales from images corrupted by additive or multiplicative noise. Given its success beyond image restoration, we extend the MHDM iterates in order to solve larger classes of linear ill-posed problems in Banach spaces. Thus, we define the MHDM for more general convex or even non-convex penalties, and provide convergence results for the data fidelity term. We also propose a flexible version of the method using adaptive convex functionals for regularization, and show an interesting multiscale decomposition of the data. This decomposition result is highlighted for the Bregman iteration, which can be expressed as an adaptive MHDM. Furthermore, we state necessary and sufficient conditions under which the MHDM iteration agrees with variational Tikhonov regularization, as is the case, for instance, for one-dimensional total variation denoising. Finally, we investigate several particular instances and perform numerical experiments that point out the robust behavior of the MHDM.
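
Schematically, for a linear ill-posed problem $Tu = f$ with penalty $J$, the MHDM can be written as follows (a minimal sketch with the classical geometric parameter schedule; the paper generalizes the penalty and allows it to vary adaptively): set $v_0 = f$ and, for $k = 0, 1, 2, \dots$,
\[
u_k \in \operatorname*{arg\,min}_{u} \; \lambda_k \, \lVert T u - v_k \rVert^2 + J(u),
\qquad
v_{k+1} = v_k - T u_k,
\qquad
\lambda_{k+1} = 2\lambda_k,
\]
so that after $n$ steps the data decompose as $f = T(u_0 + \dots + u_n) + v_{n+1}$, with the $u_k$ capturing successively finer scales.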

In this work, we propose a high-order multiscale method for an elliptic model problem with rough and possibly highly oscillatory coefficients. Convergence rates of higher order are obtained using the regularity of the right-hand side only. Hence, no restrictive assumptions on the coefficient, the domain, or the exact solution are required. In the spirit of the Localized Orthogonal Decomposition, the method constructs coarse problem-adapted ansatz spaces by solving auxiliary problems on local subdomains. More precisely, our approach is based on the strategy presented by Maier [SIAM J. Numer. Anal. 59(2), 2021]. The unique selling point of the proposed method is an improved localization strategy that cures the deterioration of the error with respect to the mesh size observed when the local subdomains are not large enough. We present a rigorous a priori error analysis and demonstrate the performance of the method in a series of numerical experiments.
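
As a model setting, the method targets (in our reading of the standard framework)
\[
-\nabla\cdot(A\nabla u) = f \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
\]
with $A$ uniformly elliptic but merely bounded and possibly highly oscillatory. LOD-type methods correct a coarse finite element space by solving local patch problems; the localization question is how fast the influence of a corrector decays away from its patch, since this decay governs the error incurred by truncating the corrector problems to subdomains of a few coarse layers.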

This paper is devoted to the analysis of a numerical scheme based on the Finite Element Method for approximating the solution of Koiter's model for a linearly elastic elliptic membrane shell subject to remaining confined in a prescribed half-space. First, we show that the solution of the obstacle problem under consideration is uniquely determined and satisfies a set of variational inequalities which are governed by a fourth-order elliptic operator and posed over a non-empty, closed, and convex subset of a suitable space. Second, we show that the solution of the obstacle problem under consideration can be approximated by means of the penalty method. Third, we show that the solution of the corresponding penalised problem is more regular up to the boundary. Fourth, we write down the mixed variational formulation corresponding to the penalised problem under consideration, and we show that the solution of the mixed variational formulation is also more regular up to the boundary. In view of this augmented regularity of the solution of the mixed penalised problem, we are able to approximate it by means of a Finite Element scheme. Finally, we present numerical experiments corroborating the validity of the mathematical results we obtained.
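
To illustrate the penalty step in abstract form: if the obstacle problem is to find $u \in K$ with $a(u, v - u) \ge \ell(v - u)$ for all $v \in K$, where $K$ is the non-empty, closed, and convex set encoding the confinement condition, a standard penalization seeks $u_\varepsilon$ in the whole space $V$ such that (a generic sketch; the paper's penalty term is tailored to the half-space constraint)
\[
a(u_\varepsilon, v) + \frac{1}{\varepsilon}\,\bigl(\beta(u_\varepsilon), v\bigr) = \ell(v) \quad \text{for all } v \in V,
\]
where the monotone penalty operator $\beta$ vanishes exactly on $K$ (for a confinement condition of the form $u_3 \ge 0$, a typical choice penalizes the negative part $(u_3)^-$), and $u_\varepsilon \to u$ as $\varepsilon \to 0$.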

Understanding causality helps to structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has moved beyond traditional methods that infer potential causal structures from observational data and now draws on pattern recognition and deep learning. The rapid accumulation of massive data has promoted the emergence of causal discovery methods with excellent scalability. Existing surveys of causal discovery methods focus mainly on traditional approaches based on constraints, scores, and functional causal models (FCMs); they lack a systematic treatment of deep learning-based methods and give little consideration to causal discovery from the perspective of variable paradigms. Therefore, we divide possible causal discovery tasks into three types according to the variable paradigm and define each of the three tasks; we define and instantiate the relevant datasets for each task, along with the final causal model to be constructed, and then review the main existing causal discovery methods for each task. Finally, we propose roadmaps, from several perspectives, for the current research gaps in the field of causal discovery and point out future research directions.

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string $\hat{x}$, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website (//pretrain.nlpedia.ai/) including a constantly updated survey and paperlist.
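
As a minimal illustration of the pipeline described above (template, slot filling, answer-to-label mapping), the following sketch performs zero-shot sentiment classification with a masked language model via the HuggingFace transformers library; the template, verbalizer, and model choice are our own illustrative assumptions, not prescriptions from the survey.

# Prompt-based zero-shot sentiment sketch. The template and verbalizer below
# are illustrative assumptions; the survey covers many such design choices.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def prompt_predict(x: str) -> str:
    # Template: wrap the input x into a prompt x' with one unfilled slot.
    prompt = f"{x} Overall, it was a [MASK] movie."
    # The LM probabilistically fills the slot, yielding candidate answers.
    candidates = fill_mask(prompt)  # sorted by score, highest first
    # Verbalizer: map filled answer tokens to task labels y.
    verbalizer = {"great": "positive", "good": "positive",
                  "terrible": "negative", "bad": "negative"}
    for c in candidates:
        token = c["token_str"].strip()
        if token in verbalizer:
            return verbalizer[token]
    return "unknown"  # no candidate matched the verbalizer

print(prompt_predict("I watched it twice and loved every minute."))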

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage the local geometry and update iteratively. Even though solving non-convex functions is NP-hard in the worst case, the optimization quality in practice is often not an issue -- optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of the practically-used objectives are approximately global minima. We rigorously formalize it for concrete instances of machine learning problems.
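
One setting where this hypothesis is provably true is low-rank matrix factorization, where (under suitable conditions) the factored objective has no spurious local minima, so gradient descent from a random start reliably reaches an approximate global minimum. The following self-contained sketch is an illustrative experiment of our own, not an example taken from the text.

import numpy as np

# Non-convex objective: f(U, V) = ||U V^T - M||_F^2 with a rank-r target M.
# Local minima of this factored problem are (approximately) global, so plain
# gradient descent from a small random start drives the loss toward zero.
rng = np.random.default_rng(0)
n, r = 10, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
M /= np.linalg.norm(M, ord=2)  # normalize spectral norm for a safe step size

U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((n, r))
lr = 0.05
for _ in range(5000):
    R = U @ V.T - M  # residual
    # Simultaneous gradient step: grad_U = 2 R V, grad_V = 2 R^T U.
    U, V = U - 2 * lr * (R @ V), V - 2 * lr * (R.T @ U)

print(f"final loss: {np.linalg.norm(U @ V.T - M)**2:.2e}")  # approximately zero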
