
In this work we design a discrete de Rham complex on manifolds. This complex, written in the framework of exterior calculus, is applicable to meshes of the manifold with generic elements, and has the same cohomology as the continuous de Rham complex. Notions of local (full and trimmed) polynomial spaces are developed, with compatibility requirements between polynomials on mesh entities of various dimensions. Explicit examples of polynomial spaces are presented. The discrete de Rham complex is then used to set up a scheme for the Maxwell equations on a 2D manifold without boundary, and we show that a natural discrete version of the constraint linking the electric field and the electric charge density is satisfied. Numerical examples are provided on the sphere and the torus, based on a bespoke analytical solution and mesh design on each manifold.
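For context, the continuous complex being mimicked is, in the exterior-calculus notation the abstract alludes to (symbols are ours, not taken from the paper):
\[
0 \longrightarrow H\Lambda^0(M) \xrightarrow{\ d\ } H\Lambda^1(M) \xrightarrow{\ d\ } \cdots \xrightarrow{\ d\ } H\Lambda^n(M) \longrightarrow 0, \qquad d \circ d = 0,
\]
with cohomology spaces $H^k(M) = \ker(d|_{\Lambda^k}) / \operatorname{im}(d|_{\Lambda^{k-1}})$. On a 2D manifold without boundary, one common vector-proxy form reads $0 \to H^1(M) \xrightarrow{\operatorname{grad}} H(\operatorname{rot};M) \xrightarrow{\operatorname{rot}} L^2(M) \to 0$, where the first space is the Sobolev space. "Same cohomology as the continuous complex" means the discrete spaces reproduce these quotient spaces exactly, which is what guarantees discrete analogues of constraints such as the one linking the electric field and the charge density.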

Related content

Various methods in statistical learning build on kernels considered in reproducing kernel Hilbert spaces. In applications, the kernel is often selected based on characteristics of the problem and the data. This kernel is then employed to infer response variables at points where no explanatory data were observed. The data considered here are located in compact sets in higher dimensions, and the paper addresses approximations of the kernel itself. The new approach considers Taylor series approximations of radial kernel functions. For the Gauss kernel on the unit cube, the paper establishes an upper bound on the associated eigenfunctions, which grows only polynomially with respect to the index. The novel approach justifies smaller regularization parameters than those considered in the literature, overall leading to better approximations. This improvement supports low-rank approximation methods such as the Nyström method.
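As a concrete illustration of the low-rank methods the abstract mentions, here is a minimal Nyström sketch for the Gauss kernel on the unit cube. All names and parameters are ours; this is not the paper's construction:

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Radial Gauss kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystroem(X, m, sigma=1.0, reg=1e-10):
    # Low-rank factor L with K ~ L @ L.T, built from m landmark points.
    idx = np.random.default_rng(0).choice(len(X), m, replace=False)
    Xm = X[idx]
    Knm = gauss_kernel(X, Xm, sigma)           # n x m cross-kernel block
    Kmm = gauss_kernel(Xm, Xm, sigma)          # m x m landmark block
    w, U = np.linalg.eigh(Kmm + reg * np.eye(m))
    return (Knm @ U) / np.sqrt(w)              # so L @ L.T = Knm Kmm^{-1} Knm^T

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))                 # points in the unit cube
K = gauss_kernel(X, X)
L = nystroem(X, 50)
err = np.linalg.norm(K - L @ L.T) / np.linalg.norm(K)
print(err)                                     # small: the kernel is nearly low-rank
```

The fast (polynomial) growth bound on the eigenfunctions is precisely what controls how quickly such factorizations converge as the number of landmarks grows.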

First-order methods are often analyzed via their continuous-time models, where worst-case convergence properties are typically established via Lyapunov functions. In this work, we provide a systematic and principled approach to finding and verifying Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We retrieve convergence results comparable to those of discrete methods using fewer assumptions and convexity inequalities, and provide new results for stochastic accelerated gradient flows.
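A textbook instance of the continuous-time Lyapunov analysis described above (not a result specific to this paper): for the gradient flow $\dot{x}(t) = -\nabla f(x(t))$ with $f$ convex and minimizer $x^\star$, the function
\[
\mathcal{E}(t) = t\bigl(f(x(t)) - f(x^\star)\bigr) + \tfrac{1}{2}\lVert x(t) - x^\star \rVert^2
\]
satisfies
\[
\dot{\mathcal{E}}(t) = f(x) - f(x^\star) - t\,\lVert \nabla f(x) \rVert^2 - \langle \nabla f(x),\, x - x^\star \rangle \;\le\; -t\,\lVert \nabla f(x) \rVert^2 \;\le\; 0,
\]
using the convexity inequality $\langle \nabla f(x), x - x^\star\rangle \ge f(x) - f(x^\star)$, which yields $f(x(t)) - f(x^\star) \le \lVert x(0) - x^\star \rVert^2 / (2t)$. The performance-estimation approach automates the search for such functions $\mathcal{E}$ and the certification of inequalities like the one above.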

Finding sufficient conditions for the unisolvence of Kansa's unsymmetric collocation method for PDEs is still an open problem. In this paper we take a first step in this direction, proving that unsymmetric collocation matrices with thin-plate splines for the 2D Poisson equation are almost surely nonsingular when the discretization points are chosen randomly on domains with analytic boundary.
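To make the objects concrete, the following sketch assembles an unsymmetric (Kansa) collocation matrix for the 2D Poisson equation on a disc with random interior points: PDE rows apply the Laplacian of the radial basis, boundary rows apply the basis itself. For simplicity it uses the higher-order thin-plate spline $\varphi(r) = r^4 \log r$, whose Laplacian stays finite at the centers; the paper's analysis concerns $\varphi(r) = r^2 \log r$, and all names here are illustrative:

```python
import numpy as np

def tps(r):
    # Higher-order thin-plate spline phi(r) = r^4 log r, with phi(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**4 * np.log(r), 0.0)

def lap_tps(r):
    # 2D Laplacian of phi: 16 r^2 log r + 8 r^2 (finite limit 0 at r = 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, 16 * r**2 * np.log(r) + 8 * r**2, 0.0)

rng = np.random.default_rng(0)
n_int, n_bnd = 40, 20
rad = 0.45 * np.sqrt(rng.uniform(size=n_int))          # random points inside the disc
ang = rng.uniform(0, 2 * np.pi, size=n_int)
Xi = 0.5 + np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
t = np.linspace(0, 2 * np.pi, n_bnd, endpoint=False)
Xb = 0.5 + 0.5 * np.column_stack([np.cos(t), np.sin(t)])  # circle: analytic boundary
X = np.vstack([Xi, Xb])                                 # centers = collocation points

R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = np.vstack([lap_tps(R[:n_int]), tps(R[n_int:])])     # PDE rows, then boundary rows
print(np.linalg.matrix_rank(A), A.shape)                # full rank for this random draw
```

The paper's question is whether such matrices are nonsingular; here the random draw gives a numerically full-rank matrix, consistent with the "almost surely nonsingular" statement.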

Generative models for multimodal data permit the identification of latent factors that may be associated with important determinants of observed data heterogeneity. Common or shared factors could be important for explaining variation across modalities, whereas other factors may be private and important only for the explanation of a single modality. Multimodal Variational Autoencoders, such as MVAE and MMVAE, are a natural choice for inferring those underlying latent factors and separating shared from private variation. In this work, we investigate their capability to reliably perform this disentanglement. In particular, we highlight a challenging problem setting where modality-specific variation dominates the shared signal. Taking a cross-modal prediction perspective, we demonstrate limitations of existing models and propose a modification that makes them more robust to modality-specific variation. Our findings are supported by experiments on synthetic as well as various real-world multi-omics data sets.

In this letter, we give a characterization for a generic construction of bent functions. This characterization enables us to obtain another efficient construction of bent functions and to give a positive answer to a problem on bent functions.
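As background, a Boolean function $f$ on $\mathbb{F}_2^n$ ($n$ even) is bent exactly when all of its Walsh coefficients have absolute value $2^{n/2}$. A brute-force check of this definition (ours, not the letter's construction):

```python
from itertools import product

def walsh(f, n):
    # Walsh-Hadamard spectrum: W_f(a) = sum over x of (-1)^(f(x) XOR a.x).
    pts = list(product([0, 1], repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(ai * xi for ai, xi in zip(a, x)) % 2))
                for x in pts)
            for a in pts]

def is_bent(f, n):
    # Bent iff every Walsh coefficient has absolute value 2^(n/2).
    return all(abs(w) == 2 ** (n // 2) for w in walsh(f, n))

# The quadratic form x1 x2 + x3 x4 is the classic bent function on F_2^4.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(is_bent(f, 4))        # True

g = lambda x: x[0] ^ x[1]   # linear, hence maximally non-bent
print(is_bent(g, 4))        # False
```

Bent functions are exactly the Boolean functions at maximal Hamming distance from all affine functions, which is why constructions of them are of sustained interest.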

This article is concerned with the multilevel Monte Carlo (MLMC) methods for approximating expectations of some functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be explicitly solved and is positivity-preserving unconditionally, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is not trivial, as the diffusion coefficient grows super-linearly. The obtained order-one convergence in turn promises the desired relevant variance of the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ for the MLMC approach, where $\epsilon > 0$ is the required target accuracy. Numerical experiments are finally reported to confirm the theoretical findings.
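The MLMC mechanics referred to above can be sketched generically. The code below couples fine and coarse Euler-Maruyama paths of geometric Brownian motion through shared Brownian increments and sums the telescoping level corrections; it deliberately uses a textbook SDE and scheme, not the paper's positivity-preserving Milstein scheme for the 3/2-model, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0   # GBM: dS = mu S dt + sig S dW

def coupled_level(l, n_paths, M=2):
    # One MLMC level: a fine path with M^l steps and a coarse path with
    # M^(l-1) steps, driven by the SAME Brownian increments (Euler-Maruyama).
    nf = M**l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
    Sf = np.full(n_paths, S0)
    for k in range(nf):
        Sf = Sf + mu * Sf * hf + sig * Sf * dW[:, k]
    if l == 0:
        return Sf                       # level 0: the payoff itself
    nc = nf // M
    hc = T / nc
    Sc = np.full(n_paths, S0)
    for k in range(nc):
        dWc = dW[:, M * k : M * (k + 1)].sum(axis=1)  # aggregated increments
        Sc = Sc + mu * Sc * hc + sig * Sc * dWc
    return Sf - Sc                      # level correction P_l - P_{l-1}

# Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
est = sum(coupled_level(l, 20000).mean() for l in range(5))
print(est, S0 * np.exp(mu * T))         # estimate vs exact E[S_T]
```

The mean-square order-one coupling rate proved in the paper is what makes the variance of the corrections decay fast enough to reach the optimal $\mathcal{O}(\epsilon^{-2})$ complexity; the positivity-preserving scheme is needed because the 3/2-model, unlike GBM above, lives on $(0,\infty)$ with superlinear coefficients.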

Functional Differential Equations (FDEs) play a fundamental role in many areas of mathematical physics, including fluid dynamics (Hopf characteristic functional equation), quantum field theory (Schwinger-Dyson equation), and statistical physics. Despite their significance, computing solutions to FDEs remains a longstanding challenge in mathematical physics. In this paper we address this challenge by introducing new approximation theory and high-performance computational algorithms designed for solving FDEs on tensor manifolds. Our approach involves approximating FDEs using high-dimensional partial differential equations (PDEs), and then solving such high-dimensional PDEs on a low-rank tensor manifold leveraging high-performance parallel tensor algorithms. The effectiveness of the proposed approach is demonstrated through its application to the Burgers-Hopf FDE, which governs the characteristic functional of the stochastic solution to the Burgers equation evolving from a random initial state.
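For context, the characteristic functional central to the Burgers-Hopf setting is (notation ours, not taken from the paper):
\[
\Phi([\theta], t) = \mathbb{E}\!\left[\exp\!\left(i \int_\Omega \theta(x)\, u(x,t)\, dx\right)\right],
\]
a functional of the test function $\theta$. Statistical moments of the random solution are recovered through functional derivatives, e.g. $\mathbb{E}[u(x,t)] = -i\, \frac{\delta \Phi}{\delta \theta(x)}\big|_{\theta=0}$, with higher-order moments from higher functional derivatives. The FDE evolves $\Phi$ directly; discretizing $\theta$ turns it into a high-dimensional PDE, which is what motivates the low-rank tensor-manifold solvers developed in the paper.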

In this article, we introduce a notion of depth functions for data types that are not given in statistical standard data formats. Data depth functions have been intensively studied for normed vector spaces. However, a discussion of depth functions on data where one specific data structure cannot be presupposed is lacking. We call such data non-standard data. To define depth functions for non-standard data, we represent the data via formal concept analysis, which leads to a unified data representation. Besides introducing these depth functions, we give a systematic basis for depth functions on non-standard data using formal concept analysis by introducing structural properties. Furthermore, we embed the generalised Tukey depth into our concept of data depth and analyse it using the introduced structural properties. Thus, this article provides the mathematical formalisation of centrality and outlyingness for non-standard data. Thereby, we increase the number of spaces in which centrality can be discussed. In particular, it gives a basis to define further depth functions and statistical inference methods for non-standard data.
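For orientation, the classical halfspace (Tukey) depth that the article generalises reads, in its simplest one-dimensional form (sketch ours): the depth of a point is the smallest fraction of sample mass in any closed halfline containing it.

```python
def tukey_depth_1d(x, sample):
    # Halfspace (Tukey) depth on the real line: the smaller of the
    # fractions of sample points lying on either side of x (inclusive).
    n = len(sample)
    left = sum(1 for s in sample if s <= x)
    right = sum(1 for s in sample if s >= x)
    return min(left, right) / n

data = [1, 2, 3, 4, 5]
print(tukey_depth_1d(3, data))   # median is deepest: 3/5
print(tukey_depth_1d(1, data))   # extreme point: 1/5
print(tukey_depth_1d(0, data))   # outside the sample: 0
```

Depth is maximal at central points and decays towards zero for outlying ones; the article's contribution is to make this centrality/outlyingness semantics available when the underlying space is a formal context rather than a normed vector space.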

Charts, figures, and text derived from data play an important role in decision making, from data-driven policy development to day-to-day choices informed by online articles. Making sense of, or fact-checking, outputs means understanding how they relate to the underlying data. Even for domain experts with access to the source code and data sets, this poses a significant challenge. In this paper we introduce a new program analysis framework which supports interactive exploration of fine-grained I/O relationships directly through computed outputs, making use of dynamic dependence graphs. Our main contribution is a novel notion in data provenance which we call related inputs, a relation of mutual relevance or "cognacy" which arises between inputs when they contribute to common features of the output. Queries of this form allow readers to ask questions like "What outputs use this data element, and what other data elements are used along with it?". We show how Jonsson and Tarski's concept of conjugate operators on Boolean algebras appropriately characterises the notion of cognacy in a dependence graph, and give a procedure for computing related inputs over such a graph.
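The flavour of a related-inputs query can be sketched with a toy dependence structure (all names hypothetical; the paper works over full dynamic dependence graphs, not this flat map): two inputs are cognate when they contribute to a common output feature.

```python
# Hypothetical dependence map: each output feature -> the set of inputs
# it depends on. In the paper this comes from a dynamic dependence graph.
deps = {
    "bar_height_2019": {"gdp_2019", "population_2019"},
    "bar_height_2020": {"gdp_2020", "population_2020"},
    "caption_total":   {"gdp_2019", "gdp_2020"},
}

def related_inputs(x, deps):
    # Inputs cognate with x: those contributing to some output that x
    # also contributes to (mutual relevance via a shared output feature).
    rel = set()
    for used in deps.values():
        if x in used:
            rel |= used
    rel.discard(x)
    return rel

print(sorted(related_inputs("gdp_2019", deps)))
# gdp_2019 shares outputs with population_2019 (the 2019 bar)
# and with gdp_2020 (the caption total).
```

This is the reader-facing question from the abstract ("what outputs use this data element, and what other data elements are used along with it?"); the conjugate-operator formulation characterises exactly this relation on the dependence graph.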

In this work we introduce a memory-efficient method for computing the action of a Hermitian matrix function on a vector. Our method consists of a rational Lanczos algorithm combined with a basis compression procedure based on rational Krylov subspaces that only involve small matrices. The cost of the compression procedure is negligible with respect to the cost of the Lanczos algorithm. This enables us to avoid storing the whole Krylov basis, leading to substantial reductions in memory requirements. This method is particularly effective when the rational Lanczos algorithm needs a significant number of iterations to converge and each iteration involves a low computational effort, a scenario that often occurs with polynomial Lanczos, as well as with extended and shift-and-invert Lanczos. Theoretical results prove that, for a wide variety of functions, the proposed algorithm differs from rational Lanczos by an error term that is usually negligible. The algorithm is compared with other low-memory Krylov methods from the literature on a variety of test problems, showing competitive performance.
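For reference, the underlying building block is the Lanczos approximation $f(A)b \approx \lVert b \rVert\, V_m f(T_m) e_1$, whose memory cost is dominated by storing the basis $V_m$. The sketch below implements the plain polynomial variant, without the paper's rational iteration or basis compression (names ours):

```python
import numpy as np

def lanczos_fAb(A, b, m, f):
    # Polynomial Lanczos: f(A) b ~ ||b|| V_m f(T_m) e_1 for Hermitian A.
    # Note the full n x m basis V is stored -- the memory bottleneck that
    # the compression procedure in the paper is designed to remove.
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, U = np.linalg.eigh(T)              # f(T_m) via spectral decomposition
    fT_e1 = U @ (f(lam) * U[0, :])          # f(T_m) e_1
    return np.linalg.norm(b) * (V @ fT_e1)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))
evals = np.linspace(0.1, 10, 100)
A = Q @ np.diag(evals) @ Q.T                # SPD test matrix
b = rng.normal(size=100)
approx = lanczos_fAb(A, b, 30, np.sqrt)
exact = Q @ (np.sqrt(evals) * (Q.T @ b))    # A^{1/2} b via eigendecomposition
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(rel_err)
```

Rational, extended, and shift-and-invert variants replace the matrix-vector product `A @ v` with shifted linear solves, trading cheaper iterations (or fewer of them) against solve cost; when many cheap iterations are needed, the stored basis grows, which is the regime the paper targets.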
