
In this paper, a generalized finite element method (GFEM) with optimal local approximation spaces for solving high-frequency heterogeneous Helmholtz problems is systematically studied. The local spaces are built from selected eigenvectors of carefully designed local eigenvalue problems defined on generalized harmonic spaces. At both the continuous and discrete levels, $(i)$ wavenumber-explicit and nearly exponential decay rates for the local and global approximation errors are obtained without any assumption on the size of subdomains, and $(ii)$ quasi-optimal convergence of the method is established under the assumption that the size of subdomains is $O(1/k)$ ($k$ being the wavenumber). The analysis reveals a novel resonance effect between the wavenumber and the dimension of the local spaces in the decay of the error with respect to the oversampling size. Furthermore, for fixed dimensions of the local spaces, the discrete local errors are proved to converge, as $h\rightarrow 0$ ($h$ denoting the mesh size), towards the continuous local errors. At the continuous level, the method extends the plane wave partition of unity method [I. Babuska and J. M. Melenk, Int.\;J.\;Numer.\;Methods Eng., 40 (1997), pp.~727--758] to the case of heterogeneous coefficients, and at the discrete level, it delivers an efficient non-iterative domain decomposition method for solving discrete Helmholtz problems resulting from standard FE discretizations. Numerical results are provided to confirm the theoretical analysis and to validate the proposed method.
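For orientation, a heterogeneous Helmholtz model problem of the kind targeted here can be written as follows; the specific coefficients $A$ and $V$, the impedance parameter $\beta$, and the boundary condition are illustrative assumptions rather than the exact setting of the paper:

$$
-\nabla\cdot\bigl(A(x)\nabla u\bigr) - k^{2}\,V(x)\,u = f \ \ \text{in } \Omega,
\qquad
A(x)\nabla u\cdot n - \mathrm{i}\,k\,\beta\,u = g \ \ \text{on } \partial\Omega .
$$

The GFEM then seeks $u \approx \sum_{i} \chi_i u_i$, where $\{\chi_i\}$ is a partition of unity subordinate to an overlapping cover of $\Omega$ by subdomains, and each local component $u_i$ lies in a low-dimensional local space spanned by the selected eigenvectors mentioned above.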

Related content

We propose a verified computation method for eigenvalues in a region and the corresponding eigenvectors of generalized Hermitian eigenvalue problems. The proposed method uses complex moments to extract the eigencomponents of interest from a random matrix and uses the Rayleigh$\unicode{x2013}$Ritz procedure to project the given eigenvalue problem onto a reduced eigenvalue problem. The complex moments are given by contour integrals and approximated by numerical quadrature. We split the error in each complex moment into the truncation error of the quadrature and rounding errors, and evaluate each part. This strategy for error evaluation is inherited from our previous Hankel-matrix approach, whereas the proposed method additionally enables verification of eigenvectors and requires only half as many quadrature points as the previous approach to reduce the truncation error to the same order. Moreover, the Rayleigh$\unicode{x2013}$Ritz procedure forms a transformation matrix that enables the verification of the eigenvectors. Numerical experiments show that the proposed method is faster than previous methods while maintaining verification performance, and that it works even for nearly singular matrix pencils and in the presence of multiple and nearly multiple eigenvalues.
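The overall pipeline (contour-integral complex moments followed by a Rayleigh$\unicode{x2013}$Ritz projection) can be sketched in plain floating-point arithmetic as below; the verified/interval part of the method, which is its actual contribution, is omitted, and all names, the circular contour, and the parameter choices are illustrative assumptions.

```python
import numpy as np

def moment_rayleigh_ritz(A, B, center, radius, n_quad=32, n_probe=8, n_moment=4):
    """Approximate eigenpairs of A x = lam B x with eigenvalues inside a circle,
    via quadrature-approximated complex moments of the resolvent applied to a
    random probing matrix, followed by a Rayleigh-Ritz projection.
    Floating-point sketch only; no verified error bounds are computed here."""
    n = A.shape[0]
    V = np.random.default_rng(0).standard_normal((n, n_probe))
    S = np.zeros((n, n_probe * n_moment), dtype=complex)
    theta = 2 * np.pi * (np.arange(n_quad) + 0.5) / n_quad    # quadrature angles
    for t in theta:
        z = center + radius * np.exp(1j * t)
        w = radius * np.exp(1j * t) / n_quad                  # weight incl. dz/(2*pi*i)
        Y = np.linalg.solve(z * B - A, B @ V)                 # resolvent solve
        for p in range(n_moment):
            S[:, p * n_probe:(p + 1) * n_probe] += w * (z - center) ** p * Y
    # Rayleigh-Ritz on the moment subspace (with numerical rank truncation)
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    Q = U[:, s > 1e-12 * s[0]]
    Ar, Br = Q.conj().T @ A @ Q, Q.conj().T @ B @ Q
    lam, X = np.linalg.eig(np.linalg.solve(Br, Ar))
    inside = np.abs(lam - center) < radius
    return lam[inside].real, Q @ X[:, inside]

# usage: diagonal pencil with known eigenvalues; find those inside |z - 0.5| < 0.25
A = np.diag(np.linspace(0.0, 1.0, 11))      # eigenvalues 0.0, 0.1, ..., 1.0
B = np.eye(11)
lam, _ = moment_rayleigh_ritz(A, B, center=0.5, radius=0.25)
print(np.sort(lam))                          # approx. 0.3, 0.4, 0.5, 0.6, 0.7
```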

We develop a hybrid spatial discretization for the wave equation in second-order form, based on high-order accurate finite difference methods and discontinuous Galerkin methods. The hybridization combines the computational efficiency of finite difference methods on Cartesian grids with the geometric flexibility of discontinuous Galerkin methods on unstructured meshes. The two spatial discretizations are coupled by a penalty technique at the interface such that the overall semidiscretization satisfies a discrete energy estimate, which ensures stability. In addition, optimal convergence is obtained in the sense that when a fourth-order finite difference method is combined with a discontinuous Galerkin method using third-order local polynomials, the overall convergence rate is fourth order. Furthermore, we use a novel approach to derive an error estimate for the semidiscretization by combining the energy method with normal mode analysis for a corresponding one-dimensional model problem. The stability and accuracy analyses are verified in numerical experiments.

Global spectral methods offer the potential to compute solutions of partial differential equations numerically to very high accuracy. In this work, we develop a novel global spectral method for linear partial differential equations on cubes by extending the ideas of Chebop2 [Townsend and Olver, J. Comput. Phys., 299 (2015)] to the three-dimensional setting, utilizing expansions in tensorized polynomial bases. Solving the discretized PDE involves a linear system that can be recast as a linear tensor equation. Under suitable additional assumptions, the structure of these equations admits an efficient solution via the blocked recursive solver of [Chen and Kressner, Numer. Algorithms, 84 (2020)]. In the general case, when these assumptions are not satisfied, this solver is used as a preconditioner to speed up computations.
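To illustrate what "recast as a linear tensor equation" means in the simplest separable case, the sketch below solves the Sylvester-like tensor equation arising from a 3D Poisson problem by diagonalizing the one-dimensional operators. Second-order finite differences stand in for the tensorized spectral discretization, and all names and parameters are illustrative assumptions; the paper's blocked recursive solver handles more general structure than this diagonalizable special case.

```python
import numpy as np

def solve_tensor_laplace(f, n):
    """Solve the tensor equation  X x_1 A + X x_2 A + X x_3 A = F  (mode products),
    arising here from -Laplace(u) = f on the unit cube with zero Dirichlet BCs,
    by diagonalizing the 1D operator A = Q diag(lam) Q^T."""
    h = 1.0 / (n + 1)
    # 1D discrete -d^2/dx^2 on interior points (stand-in for spectral differentiation)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    lam, Q = np.linalg.eigh(A)
    # transform F into the eigenbasis along every mode
    G = np.einsum('ia,jb,kc,abc->ijk', Q.T, Q.T, Q.T, f)
    # in the eigenbasis the tensor equation is diagonal
    denom = lam[:, None, None] + lam[None, :, None] + lam[None, None, :]
    X = G / denom
    # transform back to the nodal basis
    return np.einsum('ia,jb,kc,abc->ijk', Q, Q, Q, X)

# usage: manufactured solution u = sin(pi x) sin(pi y) sin(pi z)
n = 24
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y) * np.sin(np.pi * Z)
f = 3 * np.pi**2 * u_exact
print(np.max(np.abs(solve_tensor_laplace(f, n) - u_exact)))   # ~1e-3 (O(h^2) for FD)
```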

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of a \textit{convex-concave} unconstrained min-max optimization problem. Compared to their first-order counterparts, investigations of second-order methods for min-max optimization are relatively limited, as obtaining global rates of convergence with second-order information is much more involved. In this paper, we highlight how second-order information can be used to speed up the dynamics of dual extrapolation methods despite inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and that the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a gap function. Our algorithms match the theoretically established lower bound in this context, and our analysis provides a simple and intuitive proof of convergence for second-order methods without requiring any compactness assumptions. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
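For reference, the standard notions involved (not specific to this paper): a saddle point $(x^{*},y^{*})$ of a convex-concave $f$ and the gap function used to measure the quality of a candidate point $(\bar{x},\bar{y})$ are

$$
f(x^{*},y)\;\le\;f(x^{*},y^{*})\;\le\;f(x,y^{*})\quad\forall x,\,y,
\qquad
\mathrm{Gap}(\bar{x},\bar{y})\;=\;\sup_{y} f(\bar{x},y)\;-\;\inf_{x} f(x,\bar{y}),
$$

so that an $\epsilon$-saddle point is a point with $\mathrm{Gap}\le\epsilon$. In the unconstrained setting the sup and inf are often restricted to a bounded set containing the iterates, which is one reason the boundedness of the iterates matters in the result above.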

We propose estimators based on kernel ridge regression for nonparametric causal functions such as dose, heterogeneous, and incremental response curves. Treatments and covariates may be discrete or continuous, taking values in general spaces. Thanks to a decomposition property specific to the reproducing kernel Hilbert space (RKHS), our estimators have simple closed-form solutions. We prove uniform consistency with finite-sample rates via an original analysis of generalized kernel ridge regression. We extend our main results to counterfactual distributions and to causal functions identified by front-door and back-door criteria. We achieve state-of-the-art performance in nonlinear simulations with many covariates, and conduct a policy evaluation of the US Job Corps training program for disadvantaged youths.
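A minimal numpy sketch of the closed-form flavour of such estimators, shown for a dose-response curve via plain kernel ridge regression followed by averaging over the empirical covariate distribution. The function names, the RBF kernel, and the regularization choice are illustrative assumptions, and the sketch omits the product-RKHS decomposition and the finite-sample guarantees that the paper actually provides.

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length_scale ** 2)

def dose_response_curve(D, X, Y, d_grid, lam=1e-3):
    """theta(d) = mean_i mu_hat(d, X_i), with mu_hat the kernel ridge regression
    of Y on (D, X); the closed form is mu_hat(w) = k(w, .) (K + n*lam*I)^{-1} Y."""
    n = len(Y)
    W = np.column_stack([D, X])                               # (treatment, covariates)
    alpha = np.linalg.solve(rbf(W, W) + n * lam * np.eye(n), Y)
    theta = []
    for d in d_grid:
        W_d = np.column_stack([np.full(n, d), X])             # set treatment to d
        theta.append(np.mean(rbf(W_d, W) @ alpha))            # average over covariates
    return np.array(theta)

# usage on synthetic data where the true dose response is d**2
rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))
D = 0.5 * X[:, 0] + rng.normal(size=n)                        # treatment depends on X
Y = D ** 2 + X[:, 1] + 0.1 * rng.normal(size=n)
print(dose_response_curve(D, X, Y, d_grid=[-1.0, 0.0, 1.0]))  # roughly [1, 0, 1]
```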

In this paper, we present algorithms and implementations for the end-to-end GPU acceleration of matrix-free, low-order-refined preconditioning of high-order finite element problems. The methods described here allow for the construction of effective preconditioners for high-order problems with optimal memory usage and computational complexity. The preconditioners are based on the construction of a spectrally equivalent low-order discretization on a refined mesh, which is then amenable to, for example, algebraic multigrid preconditioning. The constants of equivalence are independent of the mesh size and polynomial degree. For vector finite element problems in $H({\rm curl})$ and $H({\rm div})$ (e.g., for electromagnetic or radiation diffusion problems), a specially constructed interpolation-histopolation basis is used to ensure fast convergence. Detailed performance studies are carried out to analyze the efficiency of the GPU algorithms. The kernel throughput of each of the main algorithmic components is measured, and the strong and weak parallel scalability of the methods is demonstrated. The different relative weighting and significance of the algorithmic components on GPUs and CPUs are discussed. Results on problems involving adaptively refined nonconforming meshes are shown, and the use of the preconditioners on a large-scale magnetic diffusion problem using all spaces of the finite element de Rham complex is illustrated.

We give a systematic and self-contained account of the construction of geometrically decomposed bases and degrees of freedom in finite element exterior calculus. In particular, we elaborate upon a previously overlooked basis for one of the families of finite element spaces, which is of interest for implementations. Moreover, we give details for the construction of isomorphisms and duality pairings between finite element spaces. These structural results show, for example, how to transfer linear dependencies between canonical spanning sets, or give a new derivation of the degrees of freedom.

In this paper, we propose new geometrically unfitted space-time finite element methods of higher-order accuracy in space and time for partial differential equations posed on moving domains. As a model problem, a convection-diffusion problem on a moving domain is studied. For higher-order geometric accuracy, we apply a parametric mapping to a background space-time tensor-product mesh. For the discretisation in time, we consider discontinuous Galerkin methods as well as related continuous (Petrov-)Galerkin and Galerkin collocation methods. For stabilisation with respect to bad cut configurations, and as the extension mechanism required by the latter two schemes, a ghost-penalty stabilisation is employed. The article puts an emphasis on the techniques that make a robust yet higher-order geometry handling possible for smooth domains. We investigate the computational properties of the respective methods in a series of numerical experiments. These include studies in different dimensions and for different polynomial degrees in space and time, validating the higher-order accuracy in both variables.
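One classical realisation of the ghost-penalty stabilisation is the normal-derivative-jump penalty of Burman; the concrete form below (with polynomial degree $k$, facet set $\mathcal{F}_{\mathrm{gp}}$ in the band of cut elements, mesh size $h$, and parameters $\gamma_j>0$) is given for orientation only and is not necessarily the exact variant used in the paper:

$$
s_h(u,v)\;=\;\sum_{F\in\mathcal{F}_{\mathrm{gp}}}\;\sum_{j=1}^{k}\gamma_j\,h^{2j-1}\int_{F}\bigl[\!\bigl[\partial_n^{\,j}u\bigr]\!\bigr]\,\bigl[\!\bigl[\partial_n^{\,j}v\bigr]\!\bigr]\,\mathrm{d}s .
$$

Adding $s_h$ to the bilinear form controls the discrete solution on elements with arbitrarily small intersections with the physical domain, and it simultaneously provides the implicit extension to the whole active mesh that the continuous (Petrov-)Galerkin and collocation variants require.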

Cellular sheaves equip graphs with a "geometrical" structure by assigning vector spaces and linear maps to nodes and edges. Graph Neural Networks (GNNs) implicitly assume a graph with a trivial underlying sheaf. This choice is reflected in the structure of the graph Laplacian operator, the properties of the associated diffusion equation, and the characteristics of the convolutional models that discretise this equation. In this paper, we use cellular sheaf theory to show that the underlying geometry of the graph is deeply linked with the performance of GNNs in heterophilic settings and with their oversmoothing behaviour. By considering a hierarchy of increasingly general sheaves, we study how the class of problems for which the sheaf diffusion process achieves linear separation of the classes in the infinite-time limit expands. At the same time, we prove that when the sheaf is non-trivial, discretised parametric diffusion processes have greater control than GNNs over their asymptotic behaviour. On the practical side, we study how sheaves can be learned from data. The resulting sheaf diffusion models have many desirable properties that address the limitations of classical graph diffusion equations (and the corresponding GNN models) and obtain competitive results in heterophilic settings. Overall, our work provides new connections between GNNs and algebraic topology and should be of interest to both fields.
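To make the objects concrete, here is a small numpy sketch of a cellular sheaf on a graph, its sheaf Laplacian $L_{\mathcal F}=\delta^{\top}\delta$, and plain sheaf diffusion $\dot x = -L_{\mathcal F}\,x$. The toy graph, the random orthogonal restriction maps, and all names are illustrative assumptions, and nothing here is learned from data as in the paper.

```python
import numpy as np

def sheaf_laplacian(edges, restriction, n_nodes, d):
    """Sheaf Laplacian L = delta^T delta of a cellular sheaf on a graph.
    Every node and edge carries a d-dimensional stalk; restriction[(e, v)]
    is the d x d map from the stalk of node v into the stalk of edge e."""
    delta = np.zeros((len(edges) * d, n_nodes * d))    # coboundary operator
    for e, (u, v) in enumerate(edges):
        # oriented edge e = (u, v): (delta x)|_e = F_{u->e} x_u - F_{v->e} x_v
        delta[e*d:(e+1)*d, u*d:(u+1)*d] = restriction[(e, u)]
        delta[e*d:(e+1)*d, v*d:(v+1)*d] = -restriction[(e, v)]
    return delta.T @ delta

# toy sheaf: triangle graph, 2-dimensional stalks, random orthogonal restriction maps
rng = np.random.default_rng(0)
edges, n_nodes, d = [(0, 1), (1, 2), (0, 2)], 3, 2
restriction = {(e, w): np.linalg.qr(rng.standard_normal((d, d)))[0]
               for e, (u, v) in enumerate(edges) for w in (u, v)}
L = sheaf_laplacian(edges, restriction, n_nodes, d)

# sheaf diffusion dx/dt = -L x, discretised by explicit Euler steps
x = rng.standard_normal(n_nodes * d)
for _ in range(500):
    x -= 0.1 * L @ x
print(np.round(x, 4))   # decays toward ker(L), the space of global sections
```

With identity restriction maps this reduces to the usual graph Laplacian and diffusion oversmooths toward constant node states; with more general sheaves the kernel of $L_{\mathcal F}$ changes, which is the mechanism behind the separation and asymptotic-control results described above.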

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. Moreover, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To learn the propagation process adaptively, we introduce two measures of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that the new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
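One plausible way to realise such a pair-wise switch, shown as a forward pass only: neighbours with a high homophily score contribute positively (smoothing), while low-homophily neighbours contribute with the opposite sign (sharpening). The aggregation rule, the oracle homophily scores, and all names below are illustrative assumptions of this sketch, not the paper's exact learned mechanism, in which the scores are trained end-to-end from topology and attributes.

```python
import numpy as np

def homophily_aware_propagation(A, H, hom, w_self=1.0):
    """One propagation step that interpolates, per node pair, between homophilic
    aggregation (add neighbour features) and heterophilic aggregation (subtract
    dissimilar neighbour features), weighted by a homophily score hom[i, j] in [0, 1]."""
    out = w_self * H.copy()
    deg = A.sum(1, keepdims=True) + 1e-9
    out += ((A * hom) @ H) / deg          # homophilic part, scaled by hom
    out -= ((A * (1.0 - hom)) @ H) / deg  # heterophilic part, scaled by (1 - hom)
    return out

# toy usage: 4 nodes, two classes, one cross-class edge (0-3)
A = np.array([[0, 1, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 1, 0]], float)
H = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])   # one-hot node features
hom = (H @ H.T > 0).astype(float)   # oracle stand-in for the learned homophily degree
print(homophily_aware_propagation(A, H, hom))
```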
