
This paper studies adaptive least-squares finite element methods for convection-dominated diffusion-reaction problems. The least-squares methods are based on the first-order system of the primal and dual variables, with various ways of imposing outflow boundary conditions. The coercivity of the homogeneous least-squares functionals is established, and a priori error estimates of the least-squares methods are obtained in a norm that incorporates the streamline derivative. All methods have the same convergence rate provided that the meshes in the layer regions are fine enough. To increase computational accuracy and reduce computational cost, adaptive least-squares methods are implemented and numerical results are presented for some test problems.
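
For orientation, a generic form of the first-order reformulation and its least-squares functional is sketched below; the precise weighting of the two residual terms and the way the outflow boundary condition enters the functional are exactly what varies among the methods studied, so this is only a schematic template with $\boldsymbol{\sigma}$ denoting the diffusive flux.

$$
-\varepsilon\Delta u + \boldsymbol{b}\cdot\nabla u + c\,u = f
\quad\Longrightarrow\quad
\begin{cases}
\boldsymbol{\sigma} + \varepsilon\nabla u = 0,\\
\nabla\cdot\boldsymbol{\sigma} + \boldsymbol{b}\cdot\nabla u + c\,u = f,
\end{cases}
\qquad
J(u,\boldsymbol{\sigma}) = \|\boldsymbol{\sigma} + \varepsilon\nabla u\|_0^2
+ \|\nabla\cdot\boldsymbol{\sigma} + \boldsymbol{b}\cdot\nabla u + c\,u - f\|_0^2.
$$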

Related content

Extensive-form games have been studied considerably in recent years. They can represent games with multiple decision points and incomplete information, and hence are helpful in formulating games with uncertain inputs, such as poker. We consider a two-player zero-sum extensive-form game, i.e., one in which the sum of the players' payoffs is always zero. In such games, the problem of finding the optimal strategy can be formulated as a bilinear saddle-point problem. This formulation becomes huge as the game grows, since it has variables representing the strategies at all decision points of each player. To solve such large-scale bilinear saddle-point problems, the excessive gap technique (EGT), a smoothing method, has been studied. This method generates a sequence of approximate solutions whose error is guaranteed to converge at a rate of $\mathcal{O}(1/k)$, where $k$ is the number of iterations. However, it has the disadvantage that its theoretical error bounds depend poorly on the game size, which makes it inapplicable to large games. Our goal is to improve the smoothing method for solving extensive-form games so that it can be applied to large-scale games. To this end, we make two contributions in this work. First, we slightly modify the strongly convex function used in the smoothing method in order to improve the theoretical bounds related to the game size. Second, we propose a heuristic called the centering trick, which allows the smoothing method to be combined with other methods and consequently accelerates convergence in practice. As a result, we combine EGT with CFR+, a state-of-the-art method for extensive-form games, to achieve good performance in games where conventional smoothing methods do not perform well. The proposed smoothing method is shown to have the potential to solve large games in practice.
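
Schematically, writing $\mathcal{X}$ and $\mathcal{Y}$ for the sequence-form strategy spaces of the two players and $A$ for the payoff matrix (notation assumed here for illustration), the problem above is

$$
\min_{x\in\mathcal{X}}\ \max_{y\in\mathcal{Y}}\ x^{\top} A\, y,
$$

which EGT solves by smoothing each side with a strongly convex prox function; the game-size-dependent constants in the $\mathcal{O}(1/k)$ bound come from these prox functions, which is why the first contribution modifies them.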

In this paper, we analyze a pressure-robust method based on divergence-free mixed finite element methods with continuous interior penalty stabilization. The main goal is to prove an $O(h^{k+1/2})$ error estimate for the $L^2$ norm of the velocity in the convection-dominated regime. This bound is pressure-robust (the error bound for the velocity does not depend on the pressure) and also convection-robust (the constants in the error bounds are independent of the Reynolds number).
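
In schematic form, the estimate proved is

$$
\|u - u_h\|_{L^2(\Omega)} \le C(u)\, h^{k+1/2},
$$

where the constant $C(u)$ may depend on norms of the exact velocity and the data but not on the pressure $p$ or on the Reynolds number (this is a generic restatement of the claim above, not the paper's precise theorem).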

A finite element discretization is developed for the Cai-Hu model, which describes the formation of biological networks. The model consists of a nonlinear elliptic equation for the pressure $p$ and a nonlinear reaction-diffusion equation for the conductivity tensor $\mathbb{C}$. The problem requires high resolution due to the presence of multiple scales, the stiffness of all its components, and the nonlinearities. We propose a low-order finite element discretization in space coupled with a semi-implicit time-advancing scheme. The code is validated with several numerical tests performed with various choices of the parameters involved in the system. In the absence of an exact solution, we apply the Richardson extrapolation technique to estimate the order of the method.
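
As a small illustration of the last step, the observed order can be estimated from three nested refinements in the usual Richardson fashion; the sketch below assumes the three discrete solutions are sampled at a common set of points and uses the Euclidean norm, both choices made here for illustration rather than details taken from the paper.

```python
import numpy as np

def observed_order(u_h, u_h2, u_h4):
    """Richardson-type estimate of the convergence order from solutions on
    meshes of size h, h/2 and h/4 (assumed evaluated at common points)."""
    e_coarse = np.linalg.norm(u_h2 - u_h)   # proxy for the error of the coarser pair
    e_fine = np.linalg.norm(u_h4 - u_h2)    # proxy for the error of the finer pair
    return np.log2(e_coarse / e_fine)       # if e ~ C h^p, this ratio recovers p
```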

Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP. However, common practice fine-tunes all of the parameters in a pre-trained model, which becomes prohibitive when a large number of downstream tasks are present. Therefore, many fine-tuning methods have been proposed to learn incremental updates of pre-trained weights in a parameter-efficient way, e.g., low-rank increments. These methods often evenly distribute the budget of incremental updates across all pre-trained weight matrices and overlook the varying importance of different weight parameters. As a consequence, the fine-tuning performance is suboptimal. To bridge this gap, we propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance scores. In particular, AdaLoRA parameterizes the incremental updates in the form of a singular value decomposition. This novel approach allows us to effectively prune the singular values of unimportant updates, which essentially reduces their parameter budget while circumventing intensive exact SVD computations. We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA. Results demonstrate that AdaLoRA yields notable improvements over baselines, especially in low-budget settings. Our code is publicly available at //github.com/QingruZhang/AdaLoRA.
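
A minimal sketch of the SVD-shaped parameterization is given below, assuming PyTorch; the class, its attributes, and the pruning mask are illustrative stand-ins, not the released AdaLoRA API, and the importance scoring that decides which triplets to keep is omitted.

```python
import torch
import torch.nn as nn

class SVDAdapter(nn.Module):
    """Incremental update W0 + P diag(lam) Q in the spirit of AdaLoRA (sketch only)."""
    def __init__(self, d_out, d_in, r):
        super().__init__()
        self.P = nn.Parameter(torch.zeros(d_out, r))        # left singular directions
        self.lam = nn.Parameter(torch.zeros(r))              # trainable singular values
        self.Q = nn.Parameter(torch.randn(r, d_in) * 0.01)   # right singular directions

    def delta_w(self, keep_mask=None):
        lam = self.lam if keep_mask is None else self.lam * keep_mask
        return self.P @ torch.diag(lam) @ self.Q              # pruning = zeroing triplets

    def forward(self, x, w0):
        return x @ (w0 + self.delta_w()).T                    # frozen w0 plus adapter
```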

We present and analyze a high-order discontinuous Galerkin method for the space discretization of the wave propagation model in thermo-poroelastic media. The proposed scheme supports general polytopal grids. Stability analysis and $hp$-version error estimates in suitable energy norms are derived for the semi-discrete problem. The fully discrete scheme is then obtained by employing an implicit Newmark-$\beta$ time integration scheme. A wide set of numerical simulations is reported, both to verify the theoretical estimates and to illustrate examples of physical interest. A comparison with the results of the poroelastic model is also provided, highlighting the differences between the predictive capabilities of the two models.
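
For reference, the standard Newmark-$\beta$ updates for a second-order-in-time system read as follows; the paper applies an implicit member of this family to the semi-discrete thermo-poroelastic system, and its specific parameter choices are not reproduced here.

$$
\mathbf{u}_{n+1} = \mathbf{u}_n + \Delta t\,\dot{\mathbf{u}}_n
+ \Delta t^{2}\Big[\big(\tfrac{1}{2}-\beta\big)\ddot{\mathbf{u}}_n + \beta\,\ddot{\mathbf{u}}_{n+1}\Big],
\qquad
\dot{\mathbf{u}}_{n+1} = \dot{\mathbf{u}}_n
+ \Delta t\Big[(1-\gamma)\,\ddot{\mathbf{u}}_n + \gamma\,\ddot{\mathbf{u}}_{n+1}\Big].
$$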

Using typical solution strategies to compute the solution curve of challenging problems often leads to the breakdown of the algorithm. To improve the solution process, numerical continuation methods have proved to be a very efficient tool. However, these methods can still lead to undesired results. In particular, near severe limit points and cusps, the solution process frequently encounters one of the following situations: divergence of the algorithm; a change in direction that makes the algorithm backtrack over a part of the solution curve that has already been obtained; or the omission of important regions of the solution curve by converging to a point that is much farther along than the one anticipated. Detecting these situations is not an easy task when solving practical problems, since the shape of the solution curve is not known in advance. This paper therefore presents a modified Moore-Penrose continuation method that includes two key aspects for solving challenging problems: detection of problematic regions during the solution process, and additional steps to deal with them. The proposed approach can either be used as a basic continuation method or be activated only when difficulties occur. Numerical examples are presented to show the efficiency of the new approach.
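
For context, the corrector of a classical Moore-Penrose continuation applies Newton-type updates through the pseudoinverse of the Jacobian of the underdetermined system $F(x,\lambda)=0$; the sketch below shows only that baseline corrector, not the detection and recovery mechanisms proposed in the paper, and the function names are illustrative.

```python
import numpy as np

def moore_penrose_corrector(F, J, y0, tol=1e-10, max_iter=20):
    """Classical Moore-Penrose corrector for F(y) = 0 with y = (x, lam) in R^{n+1}
    and F mapping R^{n+1} -> R^n; J(y) returns the n x (n+1) Jacobian."""
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(y)
        if np.linalg.norm(r) < tol:
            break
        y -= np.linalg.pinv(J(y)) @ r   # minimum-norm Newton correction
    return y
```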

The present study is an extension of the work done in "Parareal convergence for oscillatory PDEs with finite time-scale separation" (2019) by A. G. Peddle, T. Haut, and B. Wingate [16], and "An asymptotic parallel-in-time method for highly oscillatory PDEs" (2014) by T. Haut and B. Wingate [10], where a two-level Parareal method with averaging is examined. The method proposed in this paper is a multi-level Parareal method with arbitrarily many levels, which is not restricted to the two-level case. We give an asymptotic error estimate that reduces to the two-level estimate when only two levels are considered. Introducing more than two levels has important consequences for the averaging procedure, as we choose separate averaging windows for each of the levels, which is an additional new feature of the present study. The different averaging windows make the proposed method especially appropriate for multi-scale problems, because we can introduce a level for each intrinsic scale of the problem and adapt the averaging procedure so that we reproduce the behavior of the model on the particular scale resolved by that level. The computational complexity of the new method is investigated and its efficiency is studied on several examples.
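
As background, the classical two-level Parareal update that the multi-level averaged method generalizes is sketched below; `coarse` and `fine` are user-supplied propagators over a subinterval (assumptions for this sketch), and the averaging machinery of the paper is deliberately left out.

```python
import numpy as np

def parareal(u0, T, N, coarse, fine, K):
    """Classical two-level Parareal: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)."""
    t = np.linspace(0.0, T, N + 1)
    U = [u0]
    for n in range(N):                                       # serial coarse sweep (initial guess)
        U.append(coarse(U[n], t[n], t[n + 1]))
    for _ in range(K):
        F = [fine(U[n], t[n], t[n + 1]) for n in range(N)]   # embarrassingly parallel stage
        U_new = [u0]
        for n in range(N):
            correction = F[n] - coarse(U[n], t[n], t[n + 1])
            U_new.append(coarse(U_new[n], t[n], t[n + 1]) + correction)
        U = U_new
    return U
```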

We investigate the randomized Kaczmarz method that adaptively updates the stepsize using readily available information for solving inconsistent linear systems. A novel geometric interpretation is provided which shows that the proposed method can be viewed as an orthogonal projection method in some sense. We prove that this method converges linearly in expectation to the unique minimum Euclidean norm least-squares solution of the linear system, and provide a tight upper bound for the convergence of the proposed method. Numerical experiments are also given to illustrate the theoretical results.
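
A minimal sketch of the iteration is given below with a pluggable stepsize rule: with `step` returning 1 it reduces to the classical randomized Kaczmarz method, while the adaptive rule of the paper (whose exact formula is not reproduced here) would be substituted for it.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, iters=10000, step=lambda k, r, a: 1.0, seed=0):
    """Randomized Kaczmarz with rows sampled proportionally to their squared norms."""
    rng = np.random.default_rng(seed)
    probs = np.sum(A**2, axis=1) / np.sum(A**2)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        a_i = A[i]
        r_i = b[i] - a_i @ x                                # residual of the selected equation
        x += step(k, r_i, a_i) * (r_i / (a_i @ a_i)) * a_i  # projection toward the i-th hyperplane
    return x
```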

Gaussian graphical models typically assume a homogeneous structure across all subjects, which is often restrictive in applications. In this article, we propose a weighted pseudo-likelihood approach for graphical modeling which allows different subjects to have different graphical structures depending on extraneous covariates. The pseudo-likelihood approach replaces the joint distribution by a product of the conditional distributions of each variable. We cast each conditional distribution as a heteroscedastic regression problem, with covariate-dependent variance terms, to enable information borrowing directly from the data instead of through a hierarchical framework. This allows independent graphical modeling for each subject, while retaining the benefits of a hierarchical Bayes model and remaining computationally tractable. An efficient, embarrassingly parallel variational algorithm is developed to approximate the posterior and obtain estimates of the graphs. Using a fractional variational framework, we derive asymptotic risk bounds for the estimate in terms of a novel variant of the $\alpha$-R\'{e}nyi divergence. We theoretically demonstrate the advantages of information borrowing across covariates over independent modeling. We show the practical advantages of the approach through simulation studies and illustrate the dependence structure in protein expression levels of breast cancer patients using CNV information as covariates.
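
Schematically, with $q$ variables $y_1,\dots,y_q$ and covariates $x$, the pseudo-likelihood replaces the joint Gaussian by a product of conditionals, each cast as a heteroscedastic regression (the notation here is a generic illustration, not the paper's exact model):

$$
p(y_1,\dots,y_q \mid x)\ \approx\ \prod_{j=1}^{q} p\big(y_j \mid y_{-j}, x\big),
\qquad
y_j \mid y_{-j}, x\ \sim\ \mathcal{N}\Big(\textstyle\sum_{k\neq j}\beta_{jk}(x)\,y_k,\ \sigma_j^2(x)\Big),
$$

so that the covariate-dependent coefficients $\beta_{jk}(x)$ encode the subject-specific graph (an edge between $j$ and $k$ when $\beta_{jk}(x)\neq 0$) and $\sigma_j^2(x)$ carries the covariate-dependent variance.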

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer, since they do not make any adaptation to the downstream tasks. To solve this problem, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we train the weighting model through meta-learning. Our method can be applied to various transfer learning approaches; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
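
The sketch below illustrates one simple way to realize consistency-based auxiliary weighting, using the cosine similarity between auxiliary and target gradients as a stand-in for the learned, meta-trained weighting model described above; the function and its arguments are hypothetical.

```python
import torch
import torch.nn.functional as F

def combined_loss(target_loss, aux_losses, shared_params):
    """Weight each auxiliary loss by its gradient consistency with the target task."""
    g_t = torch.cat([g.flatten() for g in
                     torch.autograd.grad(target_loss, shared_params, retain_graph=True)])
    total = target_loss
    for aux_loss in aux_losses:
        g_a = torch.cat([g.flatten() for g in
                         torch.autograd.grad(aux_loss, shared_params, retain_graph=True)])
        w = F.relu(F.cosine_similarity(g_t, g_a, dim=0))   # keep only consistent tasks
        total = total + w.detach() * aux_loss
    return total
```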
