
The property that the velocity $\boldsymbol{u}$ belongs to $L^\infty(0,T;L^2(\Omega)^d)$ is an essential requirement in the definition of energy solutions of models for incompressible fluids; it is, therefore, highly desirable that the solutions produced by discretisation methods are uniformly stable in the $L^\infty(0,T;L^2(\Omega)^d)$-norm. In this work, we establish that this is indeed the case for Discontinuous Galerkin (DG) discretisations (in time and space) of non-Newtonian implicitly constituted models with $p$-structure in general, assuming that $p\geq \frac{3d+2}{d+2}$; the time discretisation is equivalent to a Radau IIA implicit Runge-Kutta method. To aid in the proof, we derive Gagliardo-Nirenberg-type inequalities on DG spaces, which may be of independent interest.
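As a point of reference for the time discretisation (a standard fact, not specific to this paper): a dG-in-time method of polynomial degree $q$ is equivalent to the $(q+1)$-stage Radau IIA method. For degree one, shown here purely for illustration, the Butcher tableau of the two-stage, third-order Radau IIA method reads
$$\begin{array}{c|cc} \tfrac{1}{3} & \tfrac{5}{12} & -\tfrac{1}{12} \\ 1 & \tfrac{3}{4} & \tfrac{1}{4} \\ \hline & \tfrac{3}{4} & \tfrac{1}{4} \end{array}$$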

Related content

In the semi-supervised setting, where labeled data are largely limited, it remains a major challenge for message-passing-based graph neural networks (GNNs) to learn feature representations for nodes that share a class label but are distributed discontinuously over the graph. To resolve this discontinuous information transmission problem, we propose a control principle that supervises representation learning by leveraging the prototypes (i.e., class centers) of labeled data. Treating graph learning as a discrete dynamic process and the prototypes of labeled data as "desired" class representations, we borrow the pinning control idea from automatic control theory to design learning feedback controllers for the feature learning process, attempting to minimize the differences between message-passing-derived features and the class prototypes in every round so as to generate class-relevant features. Specifically, we equip every node with an optimal controller in each round by learning the matching relationships between nodes and class prototypes, enabling nodes to rectify the information aggregated from incompatible neighbors in a graph with strong heterophily. Our experiments demonstrate that the proposed PCGCN model achieves better performance than deep GNNs and other competitive heterophily-oriented methods, especially when the graph has very few labels and strong heterophily.
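A minimal sketch of the control idea, assuming a nearest-prototype matching step (PCGCN learns the node-prototype matching instead; `A_hat`, `alpha`, and the `tanh` nonlinearity are illustrative choices, not taken from the paper):

```python
import numpy as np

def pinning_control_round(A_hat, H, prototypes, W, alpha=0.5):
    """One message-passing round with a pinning-control correction.

    A_hat:      (n, n) normalized adjacency (message-passing operator)
    H:          (n, d) current node features
    prototypes: (c, d) class prototypes (class centers of labeled data)
    W:          (d, d) weight matrix
    alpha:      feedback gain of the controller
    """
    # Standard message passing: aggregate transformed neighbor features.
    M = A_hat @ H @ W
    # Hypothetical matching: assign each node its nearest prototype.
    dists = ((M[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    targets = prototypes[dists.argmin(axis=1)]
    # Feedback controller: pull aggregated features toward the matched
    # prototypes, rectifying information from incompatible neighbors.
    return np.tanh(M + alpha * (targets - M))
```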

We study the generalization properties of unregularized gradient methods applied to separable linear classification -- a setting that has received considerable attention since the pioneering work of Soudry et al. (2018). We establish tight upper and lower (population) risk bounds for gradient descent in this setting, for any smooth loss function, expressed in terms of its tail decay rate. Our bounds take the form $\Theta(r_{\ell,T}^2 / \gamma^2 T + r_{\ell,T}^2 / \gamma^2 n)$, where $T$ is the number of gradient steps, $n$ is the size of the training set, $\gamma$ is the data margin, and $r_{\ell,T}$ is a complexity term that depends on the tail decay rate of the loss function (and on $T$). Our upper bound matches the best known upper bounds due to Shamir (2021); Schliserman and Koren (2022), while extending their applicability to virtually any smooth loss function and relaxing technical assumptions they impose. Our risk lower bounds are the first in this context and establish the tightness of our upper bounds for any given tail decay rate and in all parameter regimes. The proof technique used to show these results is also markedly simpler than in previous work, and is straightforward to extend to other gradient methods; we illustrate this by providing analogous results for Stochastic Gradient Descent.
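For concreteness, a minimal sketch of the setting (data model, step size, and iteration count are illustrative assumptions): full-batch gradient descent with the smooth logistic loss on separable data, whose iterates grow in norm while their direction approaches the max-margin separator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable data: labels given by a ground-truth direction.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

def grad(w):
    # Gradient of the mean logistic loss log(1 + exp(-y <w, x>)).
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))  # sigmoid(-y <w, x>)
    return -(X * (s * y)[:, None]).mean(axis=0)

w = np.zeros(d)
for t in range(5000):
    w -= 0.1 * grad(w)

# ||w|| diverges, but the normalized margin converges to the max margin
# (Soudry et al., 2018); risk bounds of the above form quantify the rates.
print("normalized margin:", (y * (X @ w)).min() / np.linalg.norm(w))
```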

Finding the optimal size of deep learning models is a timely problem of broad impact, especially for energy-saving training schemes. Recently, an unexpected phenomenon, the ``double descent'', has caught the attention of the deep learning community: as the model's size grows, the performance first gets worse and then goes back to improving. This raises serious questions about the optimal model size for maintaining high generalization: the model needs to be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, but a final answer is yet to be found. We empirically observe that there is hope of dodging double descent in complex scenarios with proper regularization, as a simple $\ell_2$ regularization already contributes positively to this end.
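A toy sketch of the kind of experiment involved, assuming a random-ReLU-feature regression model (all sizes, the noise level, and the penalty strength are illustrative): with $\lambda = 0$ the minimum-norm interpolator typically shows a test-error peak near the interpolation threshold (width $\approx n$), which a small $\ell_2$ penalty damps.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 20, 2000

# Noisy linear teacher, observed through random ReLU features.
X, Xt = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
w = rng.normal(size=d)
y = X @ w + 0.5 * rng.normal(size=n)
yt = Xt @ w

def test_error(width, lam):
    V = rng.normal(size=(d, width)) / np.sqrt(d)     # random feature map
    F, Ft = np.maximum(X @ V, 0.0), np.maximum(Xt @ V, 0.0)
    # Ridge solution; lam = 0 yields the minimum-norm interpolator.
    theta = np.linalg.lstsq(F.T @ F + lam * np.eye(width),
                            F.T @ y, rcond=None)[0]
    return np.mean((Ft @ theta - yt) ** 2)

for width in (10, 50, 100, 200, 1000):  # peak expected near width == n
    print(width, test_error(width, 0.0), test_error(width, 1e-1))
```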

We propose a novel framework for the regularised inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without the differentiation of activation functions. The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables, and penalises these variables with tailored Bregman distances. We propose a family of variational regularisations based on these Bregman distances, present theoretical results and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularised inversion of a single-layer perceptron; the result assumes only that the solution of the inverse problem is in the range of the regularisation operator, and shows that the regularised inverse provably converges to the true inverse as measurement errors converge to zero.
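For reference, the Bregman distance generated by a differentiable convex function $\psi$ is
$$D_\psi(u, v) = \psi(u) - \psi(v) - \langle \nabla \psi(v),\, u - v \rangle \geq 0;$$
the paper's tailored penalties are instances of this object (with subgradients in place of $\nabla\psi$ when $\psi$ is nonsmooth). Schematically, and not as the paper's exact formulation, a lifted inversion objective with auxiliary variables $z_\ell$ then takes the form $\min_{x,\, z}\ \tfrac{1}{2}\|z_L - y\|^2 + \sum_{\ell} \lambda_\ell\, D_{\psi_\ell}(z_\ell,\, W_\ell z_{\ell-1})$ with $z_0 = x$.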

We present a cut finite element method for the heat equation on two overlapping meshes: a stationary background mesh and an overlapping mesh that moves around inside/"on top" of it. Here the overlapping mesh is prescribed a simple continuous motion, meaning that its location as a function of time is continuous and piecewise linear. For the discrete function space, we use continuous Galerkin in space and discontinuous Galerkin in time, with the addition of a discontinuity on the boundary between the two meshes. The finite element formulation is based on Nitsche's method and also includes an integral term over the space-time boundary between the two meshes that mimics the standard discontinuous Galerkin time-jump term. The simple continuous mesh motion results in a space-time discretization for which standard analysis methodologies either fail or are unsuitable. We therefore employ what seems to be a relatively new energy analysis framework that is general and robust enough to be applicable to the current setting. The energy analysis consists of a stability estimate that is slightly stronger than the standard basic one and an a priori error estimate that is of optimal order with respect to both time step and mesh size. We also present numerical results for a problem in one spatial dimension that verify the analytic error convergence orders.
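For orientation, the time-jump term being mimicked is the one from standard dG-in-time methods: on a single mesh, the slabwise formulation for the heat equation reads (schematically, with bilinear form $a(\cdot,\cdot)$ and time slab $I_n = (t_{n-1}, t_n]$)
$$\int_{I_n} \big( (\dot U, v) + a(U, v) \big)\,\mathrm{d}t + \big( [U]_{n-1},\, v_{n-1}^{+} \big) = \int_{I_n} (f, v)\,\mathrm{d}t, \qquad [U]_{n-1} = U_{n-1}^{+} - U_{n-1}^{-},$$
where the jump term weakly enforces continuity between slabs; the cut method adds, on top of this, the Nitsche terms and the integral over the space-time boundary between the two meshes.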

We present a cut finite element method for the heat equation on two overlapping meshes: a stationary background mesh and an overlapping mesh that evolves inside/"on top" of it. Here the overlapping mesh is prescribed a simple discontinuous evolution, meaning that its location, size, and shape as functions of time are discontinuous and piecewise constant. For the discrete function space, we use continuous Galerkin in space and discontinuous Galerkin in time, with the addition of a discontinuity on the boundary between the two meshes. The finite element formulation is based on Nitsche's method. The simple discontinuous mesh evolution results in a space-time discretization with a slabwise product structure between space and time which allows for existing analysis methodologies to be applied with only minor modifications. We follow the analysis methodology presented by Eriksson and Johnson in [1, 2]. The greatest modification is the introduction of a Ritz-like "shift operator" that is used to obtain the discrete strong stability needed for the error analysis. The shift operator generalizes the original analysis to some methods for which the discrete subspace at one time does not lie in the space of the stiffness form at the subsequent time. The error analysis consists of an a priori error estimate that is of optimal order with respect to both time step and mesh size. We also present numerical results for a problem in one spatial dimension that verify the analytic error convergence orders.
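For orientation (textbook definition, not the paper's operator): a classical Ritz projection $R_h$ onto a discrete space $V_h$ with respect to the stiffness form $a(\cdot,\cdot)$ is defined by
$$a(R_h u, v) = a(u, v) \qquad \text{for all } v \in V_h;$$
the shift operator plays an analogous role, mapping the discrete solution at the end of one slab into the (generally different) discrete space used by the stiffness form on the next slab.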

The Cauchy problem for second-order evolution equations is most commonly solved approximately using three-level time approximations. Such approximations are easily constructed and relatively uncomplicated to investigate on uniform time grids. When solving applied problems numerically, however, we should focus on approximations with variable time steps. When using multilevel schemes on non-uniform grids, we must maintain accuracy by choosing appropriate approximations and ensuring the stability of the approximate solution. In this paper, we construct unconditionally stable schemes of first- and second-order accuracy on a non-uniform time grid for the approximate solution of the Cauchy problem for a second-order evolutionary equation. We use a special transformation of the original second-order differential-operator equation to a system of first-order equations. For the system of first-order equations, we apply standard two-level time approximations. We obtain stability estimates for the initial data and the right-hand side in a finite-dimensional Hilbert space. Eliminating the auxiliary variables leads to three-level schemes for the original second-order evolution equation. Numerical experiments are performed for a test problem for a bi-parabolic equation that is one-dimensional in space. The accuracy and stability properties of the constructed schemes are demonstrated on non-uniform grids with randomly varying grid steps.
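To illustrate the reduction step (the paper employs a special transformation; the generic velocity substitution shown here conveys only the structure): with $v = \mathrm{d}u/\mathrm{d}t$, the equation $u'' + \mathcal{A} u = f$ becomes the first-order system $u' = v$, $v' + \mathcal{A} u = f$, to which a two-level weighted scheme on the non-uniform grid $t_{n+1} = t_n + \tau_{n+1}$ can be applied:
$$\frac{u_{n+1} - u_n}{\tau_{n+1}} = \sigma v_{n+1} + (1 - \sigma)\, v_n, \qquad \frac{v_{n+1} - v_n}{\tau_{n+1}} + \mathcal{A}\big(\sigma u_{n+1} + (1 - \sigma)\, u_n\big) = f_{n+\sigma},$$
with $\sigma = 1$ giving first-order and $\sigma = 1/2$ second-order accuracy; eliminating $v$ leaves a three-level scheme in $u$ alone.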

In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. Standard schemes such as splitting/exponential Euler, implicit Euler, and Crank-Nicolson are included as special cases. Under conditions on the nonlinearity and the noise we show:

- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).

The logarithmic factor can be removed if the splitting scheme is used with a (quasi-)contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schrödinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method.
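For instance, one common form of the splitting/exponential Euler scheme covered by such frameworks, for $\mathrm{d}U = (AU + F(U))\,\mathrm{d}t + G(U)\,\mathrm{d}W$, is (schematic; the paper treats a whole family of schemes)
$$U^{j+1} = S(k)\Big( U^{j} + k\,F(U^{j}) + G(U^{j})\,\Delta W_j \Big), \qquad \Delta W_j = W(t_{j+1}) - W(t_j).$$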

The ground states of Bose-Einstein condensates in a rotating frame can be described as constrained minimizers of the Gross-Pitaevskii energy functional with an angular momentum term. In this paper we consider the corresponding discrete minimization problem in Lagrange finite element spaces of arbitrary polynomial order and we investigate the approximation properties of discrete ground states. In particular, we prove a priori error estimates of optimal order in the $L^2$- and $H^1$-norm, as well as for the ground state energy and the corresponding chemical potential. A central issue in the analysis of the problem is the non-uniqueness of ground states, which is mainly caused by the invariance of the energy functional under complex phase shifts. Our error analysis is therefore based on an Euler-Lagrange functional that we restrict to certain tangent spaces in which we have local uniqueness of ground states. This gives rise to error identities that are ultimately used to derive the desired a priori error estimates. We also present numerical experiments to illustrate various aspects of the problem structure.
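In a common form (the notation here is illustrative; details such as the trapping potential $V$ and the interaction parameter $\beta$ vary between settings), the constrained minimisation problem reads
$$\min_{u \in H^1,\ \|u\|_{L^2} = 1} E(u), \qquad E(u) = \int_\Omega \tfrac{1}{2}|\nabla u|^2 + V |u|^2 + \tfrac{\beta}{2}|u|^4 - \Omega_{\mathrm{rot}}\, \mathrm{Re}\big( \bar{u}\, \mathcal{L}_z u \big) \,\mathrm{d}x,$$
with angular momentum operator $\mathcal{L}_z = -\mathrm{i}\,(x\,\partial_y - y\,\partial_x)$; the identity $E(\mathrm{e}^{\mathrm{i}\theta} u) = E(u)$ is exactly the phase-shift invariance responsible for the non-uniqueness of ground states.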

We extend the monolithic convex limiting (MCL) methodology to nodal discontinuous Galerkin spectral element methods (DGSEM). The use of Legendre-Gauss-Lobatto (LGL) quadrature endows collocated DGSEM space discretizations of nonlinear hyperbolic problems with properties that greatly simplify the design of invariant domain preserving high-resolution schemes. Compared to many other continuous and discontinuous Galerkin method variants, a particular advantage of the LGL spectral operator is the availability of a natural decomposition into a compatible subcell flux discretization. Representing a high-order spatial semi-discretization in terms of intermediate states, we perform flux limiting in a manner that keeps these states and the results of Runge-Kutta stages in convex invariant domains. Additionally, local bounds may be imposed on scalar quantities of interest. In contrast to limiting approaches based on predictor-corrector algorithms, our MCL procedure for LGL-DGSEM yields nonlinear flux approximations that are independent of the time-step size and can be further modified to enforce entropy stability. To demonstrate the robustness of MCL/DGSEM schemes for the compressible Euler equations, we run simulations for challenging setups featuring strong shocks, steep density gradients, and vortex-dominated flows.
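Schematically (notation assumed for illustration, not taken from the paper), flux limiting blends a low-order flux $f_{ij}^{\mathrm{L}}$ with a high-order target $f_{ij}^{\mathrm{H}}$,
$$f_{ij}^{*} = f_{ij}^{\mathrm{L}} + \alpha_{ij}\,\big( f_{ij}^{\mathrm{H}} - f_{ij}^{\mathrm{L}} \big), \qquad \alpha_{ij} \in [0, 1],$$
where each correction factor $\alpha_{ij}$ is chosen as large as possible subject to the constraint that the resulting intermediate (bar) states remain in the convex invariant domain and within the imposed local bounds.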
