
In 2006, Arnold, Falk, and Winther developed finite element exterior calculus, using the language of differential forms to generalize the Lagrange, Raviart--Thomas, Brezzi--Douglas--Marini, and N\'ed\'elec finite element spaces for simplicial triangulations. In a recent paper, Licht asks whether, on a single simplex, one can construct bases for these spaces that are invariant with respect to permuting the vertices of the simplex. For scalar fields, standard bases all have this symmetry property, but for vector fields, this question is more complicated: such invariant bases may or may not exist, depending on the polynomial degree of the element. In dimensions two and three, Licht constructs such invariant bases for certain values of the polynomial degree $r$, and he conjectures that his list is complete, that is, that no such basis exists for other values of $r$. In this paper, we show that Licht's conjecture is true in dimension two. However, in dimension three, we show that Licht's ideas can be extended to give invariant bases for many more values of $r$; we then show that this new larger list is complete. Along the way, we develop a more general framework for the geometric decomposition ideas of Arnold, Falk, and Winther.

Related content

For Hamiltonian systems with non-canonical structure matrices, a new family of fourth-order energy-preserving integrators is presented. The integrators take the form of a combination of Runge--Kutta methods and continuous-stage Runge--Kutta methods and feature a set of free parameters that offer greater flexibility and efficiency. Specifically, we demonstrate that, by carefully choosing these free parameters, a simplified Newton iteration applied to the integrators of order four can be parallelized. This results in faster and more efficient integrators compared with existing fourth-order energy-preserving integrators.
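
As background for the energy-preservation mechanism, here is a minimal NumPy sketch of the classical second-order average vector field (AVF) method for $\dot z = S\nabla H(z)$ with a constant skew-symmetric structure matrix $S$; the paper's fourth-order schemes combining Runge--Kutta and continuous-stage Runge--Kutta stages are more elaborate, and the quadrature and iteration parameters below are illustrative choices, not the authors' implementation.

```python
import numpy as np

def avf_step(z, h, S, grad_H, n_quad=5, max_iter=100, tol=1e-12):
    """One step of the average vector field (AVF) method for dz/dt = S grad_H(z).

    For constant skew-symmetric S this step preserves H exactly, since
    H(z1) - H(z0) = h * g.T @ S @ g = 0 with
    g = int_0^1 grad_H((1-t) z0 + t z1) dt.
    """
    x, w = np.polynomial.legendre.leggauss(n_quad)   # nodes on [-1, 1]
    tau, wts = 0.5 * (x + 1.0), 0.5 * w              # mapped to [0, 1]
    z_new = z.copy()
    for _ in range(max_iter):                        # fixed-point iteration
        g = sum(wi * grad_H((1.0 - ti) * z + ti * z_new)
                for ti, wi in zip(tau, wts))
        z_next = z + h * (S @ g)
        if np.linalg.norm(z_next - z_new) < tol:
            return z_next
        z_new = z_next
    return z_new

# Harmonic oscillator H(q, p) = (q^2 + p^2)/2 with canonical structure matrix.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
z = np.array([1.0, 0.0])
for _ in range(1000):
    z = avf_step(z, 0.1, S, lambda v: v)
print("energy drift:", 0.5 * z @ z - 0.5)   # ~ 0, up to iteration tolerance
```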

Regent is an implicitly parallel programming language that allows the development of a single codebase for heterogeneous platforms targeting CPUs and GPUs. This paper presents the development of a parallel meshfree solver in Regent for two-dimensional inviscid compressible flows. The meshfree solver is based on the least squares kinetic upwind method. Example codes are presented to show the difference between the Regent and CUDA-C implementations of the meshfree solver on a GPU node. For CPU parallel computations, details are presented on how data communication and synchronisation are handled by the Regent and Fortran+MPI codes. The Regent solver is verified by applying it to standard test cases for inviscid flows. Benchmark simulations are performed on coarse to very fine point distributions to assess the solver's performance. The computational efficiency of the Regent solver on an A100 GPU is compared with that of an equivalent meshfree solver written in CUDA-C. The codes are then profiled to investigate the differences in their performance. The performance of the Regent solver on CPU cores is compared with that of an equivalent explicitly parallel Fortran meshfree solver based on MPI. Scalability results are presented to offer insights into the parallel performance of the solver.
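
The numerical kernel of LSKUM-type meshfree solvers is a least-squares estimate of spatial derivatives from a point's connectivity set. The sketch below shows this building block in plain NumPy; the solver itself is written in Regent, CUDA-C, and Fortran, so this is only a hedged illustration of the idea on a hypothetical point cloud, not the authors' code.

```python
import numpy as np

def ls_derivatives(xy, u, i, nbrs):
    """Least-squares estimate of (du/dx, du/dy) at point i from its
    connectivity set: the core kernel of LSKUM-type meshfree solvers.

    Minimizes sum_k (du_k - ux*dx_k - uy*dy_k)^2 via the 2x2 normal
    equations of a first-order Taylor fit.
    """
    dxy = xy[nbrs] - xy[i]              # offsets to neighbours, shape (k, 2)
    du = u[nbrs] - u[i]                 # function increments, shape (k,)
    return np.linalg.solve(dxy.T @ dxy, dxy.T @ du)

# Illustrative point cloud; u(x, y) = 3x + 2y has exact gradient (3, 2).
rng = np.random.default_rng(0)
xy = rng.uniform(size=(20, 2))
u = 3.0 * xy[:, 0] + 2.0 * xy[:, 1]
print(ls_derivatives(xy, u, 0, np.arange(1, 20)))   # -> [3. 2.]
```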

The first-principles-based effective Hamiltonian is widely used to predict and simulate the properties of ferroelectrics and relaxor ferroelectrics. However, parametrizing the effective Hamiltonian is complicated, and existing methods can hardly resolve systems with complex interactions and/or complex components. Here, we develop an on-the-fly active machine learning approach to parametrize the effective Hamiltonian based on Bayesian linear regression. The parametrization is completed in molecular dynamics simulations, with the energy, forces, and stress predicted at each step along with their uncertainties. First-principles calculations are executed when the uncertainties are large, and the results are used to retrain the parameters. This approach provides a universal and automatic way to compute the effective Hamiltonian parameters for any considered system, including complex systems that previous methods cannot handle. The form of the effective Hamiltonian is also revised to include additional complex terms. BaTiO3, CsPbI3, and the SrTiO3/PbTiO3 surface are taken as examples to show the accuracy of this approach compared with the conventional first-principles parametrization method.
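
A minimal sketch of the on-the-fly loop, assuming a conjugate Bayesian linear regression surrogate with known noise precision: the model reports a predictive variance at each step, and a first-principles call is triggered to retrain whenever that variance exceeds a threshold. The feature map, the threshold, and the `expensive_first_principles` stand-in are all hypothetical, not the paper's actual effective-Hamiltonian terms.

```python
import numpy as np

class BayesianLinearModel:
    """Conjugate Bayesian linear regression y = w . phi(x) + Gaussian noise."""

    def __init__(self, dim, alpha=1.0, beta=1e4):
        self.beta = beta                      # known noise precision
        self.P = alpha * np.eye(dim)          # posterior precision matrix
        self.b = np.zeros(dim)

    def update(self, phi, y):
        self.P += self.beta * np.outer(phi, phi)
        self.b += self.beta * y * phi

    def predict(self, phi):
        cov = np.linalg.inv(self.P)
        mean = cov @ self.b
        return phi @ mean, phi @ cov @ phi + 1.0 / self.beta

def expensive_first_principles(x):            # hypothetical DFT stand-in
    return np.sin(3.0 * x) + 0.5 * x

features = lambda x: np.array([1.0, x, x * x, np.sin(3.0 * x)])

model, n_calls = BayesianLinearModel(dim=4), 0
for x in np.linspace(-2.0, 2.0, 200):         # mock MD trajectory
    _, var = model.predict(features(x))
    if var > 1e-3:                            # too uncertain: retrain on the fly
        model.update(features(x), expensive_first_principles(x))
        n_calls += 1
print("first-principles calls:", n_calls, "out of 200 steps")
```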

The Fisher-Rao distance between two probability distributions of a statistical model is defined as the Riemannian geodesic distance induced by the Fisher information metric. In order to calculate the Fisher-Rao distance in closed form, we need (1) to elicit a formula for the Fisher-Rao geodesics, and (2) to integrate the Fisher length element along those geodesics. We consider several numerically robust approximation and bounding techniques for the Fisher-Rao distances: First, we report generic upper bounds on Fisher-Rao distances based on closed-form 1D Fisher-Rao distances of submodels. Second, we describe several generic approximation schemes depending on whether the Fisher-Rao geodesics or pregeodesics are available in closed form or not. In particular, we obtain a generic method to guarantee an arbitrarily small additive error on the approximation provided that Fisher-Rao pregeodesics and tight lower and upper bounds are available. Third, we consider the case of Fisher metrics being Hessian metrics and report generic tight upper bounds on the Fisher-Rao distances using techniques of information geometry. Uniparametric and biparametric statistical models always have Fisher Hessian metrics, and in general a simple test allows one to check whether the Fisher information matrix yields a Hessian metric or not. Fourth, we consider elliptical distribution families and show how to apply the above techniques to these models. We also propose two new distances based either on the Fisher-Rao lengths of curves serving as proxies of Fisher-Rao geodesics, or on the Birkhoff/Hilbert projective cone distance. Last, we consider an alternative group-theoretic approach for statistical transformation models based on the notion of maximal invariant, which yields insights into the structure of the Fisher-Rao distance formula that may be used fruitfully in applications.
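
As a concrete instance where steps (1) and (2) are available in closed form, the univariate Gaussian family has Fisher metric $ds^2 = (d\mu^2 + 2\,d\sigma^2)/\sigma^2$, which after rescaling $\mu \mapsto \mu/\sqrt{2}$ is twice the Poincaré half-plane metric, so the hyperbolic distance formula gives the Fisher-Rao distance directly. A short sketch (this classical example is background, not one of the paper's new bounds):

```python
import numpy as np

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    The Gaussian Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 becomes,
    after rescaling mu -> mu / sqrt(2), twice the Poincare half-plane metric,
    so the hyperbolic distance formula gives the geodesic distance.
    """
    num = 0.5 * (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * sigma1 * sigma2))

print(fisher_rao_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0: identical models
print(fisher_rao_gaussian(0.0, 1.0, 1.0, 2.0))   # ~ 1.19
```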

Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when the instrument strength is weak. To address this issue, the debiased IVW (dIVW) estimator, which is shown to be robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has smaller variance than the dIVW estimator. We further extend the proposed method to account for the presence of instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
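
For orientation, here is a sketch of the IVW and dIVW point estimates from summary statistics $(\hat\beta_{Xj}, \hat\beta_{Yj})$ with standard errors $(\sigma_{Xj}, \sigma_{Yj})$, following the standard dIVW formulation from the literature: dIVW subtracts $\sigma_{Xj}^2$ in the denominator to remove the weak-instrument attenuation. The abstract does not specify the mdIVW modification factor, so the sketch stops at dIVW; the toy summary statistics are hypothetical.

```python
import numpy as np

def ivw(bx, by, sy):
    """Classical IVW estimate; biased toward zero under many weak instruments."""
    w = 1.0 / sy**2
    return np.sum(w * bx * by) / np.sum(w * bx**2)

def divw(bx, by, sx, sy):
    """Debiased IVW: subtracting sx^2 corrects E[bx^2] = beta_x^2 + sx^2,
    the inflation of the denominator that attenuates the IVW estimate."""
    w = 1.0 / sy**2
    return np.sum(w * bx * by) / np.sum(w * (bx**2 - sx**2))

# Hypothetical summary statistics: many weak instruments, true effect 0.3.
rng = np.random.default_rng(1)
p, beta = 500, 0.3
gamma = rng.normal(0.0, 0.02, p)          # weak SNP-exposure effects
sx = np.full(p, 0.02); sy = np.full(p, 0.02)
bx = gamma + rng.normal(0.0, sx)
by = beta * gamma + rng.normal(0.0, sy)
print("IVW: ", ivw(bx, by, sy))           # attenuated toward zero
print("dIVW:", divw(bx, by, sx, sy))      # much closer to 0.3
```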

We propose a new Riemannian gradient descent method for computing spherical area-preserving mappings of topological spheres using a Riemannian retraction-based framework with theoretically guaranteed convergence. The objective function is based on the stretch energy functional, and the minimization is constrained on a power manifold of unit spheres embedded in 3-dimensional Euclidean space. Numerical experiments on several mesh models demonstrate the accuracy and stability of the proposed framework. Comparisons with two existing state-of-the-art methods for computing area-preserving mappings demonstrate that our algorithm is both competitive and more efficient. Finally, we present a concrete application to the problem of landmark-aligned surface registration of two brain models.
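
A minimal sketch of one step of the retraction-based iteration on the power manifold $(S^2)^n$: project the Euclidean gradient onto the tangent spaces, take a descent step, and retract by row-wise normalization. The toy objective below is a stand-in; the stretch energy functional itself is not reproduced here.

```python
import numpy as np

def riemannian_gd_step(V, egrad, step):
    """One Riemannian gradient descent step on the power manifold (S^2)^n.

    V:     (n, 3) array of unit vectors (mesh vertices on the sphere).
    egrad: (n, 3) Euclidean gradient of the objective at V.
    The tangent projection removes the radial component of the gradient;
    the retraction is row-wise normalization back to the unit sphere.
    """
    rgrad = egrad - np.sum(egrad * V, axis=1, keepdims=True) * V
    W = V - step * rgrad
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# Toy objective (a stand-in for the stretch energy): pull each vertex toward
# a target direction, f(V) = 0.5 * sum_i ||V_i - T_i||^2, so egrad = V - T.
rng = np.random.default_rng(0)
T = rng.normal(size=(100, 3)); T /= np.linalg.norm(T, axis=1, keepdims=True)
V = rng.normal(size=(100, 3)); V /= np.linalg.norm(V, axis=1, keepdims=True)
for _ in range(500):
    V = riemannian_gd_step(V, V - T, step=0.1)
print(np.max(np.linalg.norm(V - T, axis=1)))   # -> ~ 0: converged
```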

For statistical inference on an infinite-dimensional Hilbert space $\H$ with no moment conditions, we introduce a new class of energy distances on the space of probability measures on $\H$. The proposed distances consist of the integrated squared modulus of the corresponding difference of the characteristic functionals with respect to a reference probability measure on the Hilbert space. Necessary and sufficient conditions are established for the reference probability measure to be {\em characteristic}, the property that guarantees that the distance defines a metric on the space of probability measures on $\H$. We also use these results to define new distance covariances, which can be used to measure the dependence between the marginals of a two-dimensional distribution on $\H^2$ without assuming the existence of moments. On the basis of the new distances, we develop statistical inference for Hilbert space valued data that does not require any moment assumptions. As a consequence, our methods are robust with respect to heavy tails in finite-dimensional data. In particular, we consider the problem of comparing the distributions of two samples and the problem of testing for independence, and we construct new minimax optimal tests for the corresponding hypotheses. We also develop aggregated (with respect to the reference measure) procedures for power enhancement and investigate the finite-sample properties of the tests by means of a simulation study.
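
A finite-dimensional stand-in illustrating the distance: with a Gaussian reference measure $\mu$, the quantity $D^2(P,Q) = \int |\varphi_P(t) - \varphi_Q(t)|^2\, d\mu(t)$ can be estimated by Monte Carlo over directions $t \sim \mu$ with empirical characteristic functions, requiring no moments of $P$ or $Q$. A hedged NumPy sketch (the reference scale, sample sizes, and Cauchy example are illustrative choices):

```python
import numpy as np

def cf_energy_distance_sq(X, Y, n_dirs=2000, scale=0.2, seed=0):
    """Monte Carlo estimate of D^2(P, Q) = int |phi_P(t) - phi_Q(t)|^2 dmu(t)
    from samples X, Y in R^d (a finite-dimensional stand-in for the Hilbert
    space setting), with mu a centred Gaussian reference measure.

    Only empirical characteristic functions are used, so no moments of P or Q
    are required. The reference scale is a tuning choice.
    """
    rng = np.random.default_rng(seed)
    T = rng.normal(0.0, scale, size=(n_dirs, X.shape[1]))   # t ~ mu
    phi_X = np.exp(1j * X @ T.T).mean(axis=0)               # empirical CFs
    phi_Y = np.exp(1j * Y @ T.T).mean(axis=0)
    return float(np.mean(np.abs(phi_X - phi_Y) ** 2))

rng = np.random.default_rng(1)
X = rng.standard_cauchy(size=(1000, 5))          # heavy tails: no moments exist
X2 = rng.standard_cauchy(size=(1000, 5))         # second sample, same law
Y = rng.standard_cauchy(size=(1000, 5)) + 2.0    # shifted alternative
print(cf_energy_distance_sq(X, X2))              # ~ 0 (sampling noise only)
print(cf_energy_distance_sq(X, Y))               # clearly positive
```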

Importance Sampling (IS), an effective variance reduction strategy in Monte Carlo (MC) simulation, is frequently utilized for Bayesian inference and other statistical challenges. Quasi-Monte Carlo (QMC) replaces the random samples in MC with low discrepancy points and has the potential to substantially enhance error rates. In this paper, we integrate IS with a randomly shifted rank-1 lattice rule, a widely used QMC method, to approximate posterior expectations arising from Bayesian Inverse Problems (BIPs), where the posterior density tends to concentrate as the intensity of the noise diminishes. Within the framework of weighted Hilbert spaces, we first establish the convergence rate of the lattice rule for a large class of unbounded integrands. This method extends to the analysis of QMC combined with IS in BIPs. Furthermore, we explore the robustness of the IS-based randomly shifted rank-1 lattice rule by determining the quadrature error rate with respect to the noise level. The effects of using Gaussian distributions and $t$-distributions as the proposal distributions on the error rate of QMC are comprehensively investigated. We find that the error rate may deteriorate at low noise intensity when using poorly chosen proposals, such as the prior distribution. To reclaim the effectiveness of QMC, we propose a new IS method such that the lattice rule with $N$ quadrature points achieves an optimal error rate close to $O(N^{-1})$, which is insensitive to the noise level. Numerical experiments are conducted to support the theoretical results.
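
For reference, a minimal sketch of a randomly shifted rank-1 lattice rule: the deterministic points $x_i = \{i z / N\}$ are shifted modulo one by independent uniform shifts, giving an unbiased estimator and a practical error estimate from the shift-wise spread. The generating vector below is taken from published component-by-component tables (assumed adequate for this toy integrand; in practice $z$ is tailored to the weighted-space setting).

```python
import numpy as np

def shifted_lattice_rule(f, z, N, n_shifts=8, seed=0):
    """Randomly shifted rank-1 lattice rule on [0,1)^d.

    Deterministic points x_i = frac(i * z / N) are shifted modulo one by
    independent uniform shifts; averaging over shifts gives an unbiased
    estimator, and the shift-wise spread yields a practical error estimate.
    """
    rng = np.random.default_rng(seed)
    base = (np.arange(N)[:, None] * z[None, :] % N) / N
    means = np.array([np.mean(f((base + rng.uniform(size=z.size)) % 1.0))
                      for _ in range(n_shifts)])
    return means.mean(), means.std(ddof=1) / np.sqrt(n_shifts)

# Generating vector from published CBC tables (an assumption for this toy run).
z = np.array([1, 182667, 469891, 498753, 110745, 446247, 250185, 118627])
# Test integrand over [0,1]^8 with known integral equal to 1.
f = lambda x: np.prod(1.0 + (x - 0.5) / np.arange(1.0, 9.0), axis=1)
est, err = shifted_lattice_rule(f, z, N=2**13)
print(est, "+/-", err)   # close to 1 with a small estimated error
```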

Graph-based approximation methods are of growing interest in many areas, including transportation, biological and chemical networks, financial models, image processing, network flows, and more. In these applications, often a basis for the approximation space is not available analytically and must be computed. We propose perturbations of Lagrange bases on graphs, where the Lagrange functions come from a class of functions analogous to classical splines. The basis functions we consider have local support, with each basis function obtained by solving a small energy minimization problem related to a differential operator on the graph. We present $\ell_\infty$ error estimates between the local basis and the corresponding interpolatory Lagrange basis functions in cases where the underlying graph satisfies a mild assumption on the connections of vertices where the function is not known, and the theoretical bounds are examined further in numerical experiments. Included in our analysis is a mixed-norm inequality for positive definite matrices that is tighter than the general estimate $\|A\|_{\infty} \leq \sqrt{n} \|A\|_{2}$.
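
The general estimate quoted at the end can be checked numerically in a few lines; the sketch below verifies $\|A\|_{\infty} \leq \sqrt{n} \|A\|_{2}$ for a random symmetric positive definite matrix. The paper's sharper mixed-norm inequality itself is not reproduced here.

```python
import numpy as np

# Numerical check of the general bound ||A||_inf <= sqrt(n) * ||A||_2, where
# ||.||_inf is the induced max-row-sum norm and ||.||_2 the spectral norm.
rng = np.random.default_rng(0)
n = 50
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)               # symmetric positive definite
print(np.linalg.norm(A, np.inf), "<=", np.sqrt(n) * np.linalg.norm(A, 2))
```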

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods; several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an exploratory attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close together. To this end, I propose the concept of the shadow of the CNN output: during the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
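
To make the training signal concrete, here is a generic pairwise similarity-preserving hashing loss with a tanh relaxation and a quantization penalty, written in PyTorch. This is a standard contrastive formulation commonly used in deep supervised hashing, not the SRH "shadow" objective, whose exact form the abstract does not give.

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes, labels, margin=2.0, lam=0.1):
    """Generic pairwise loss for deep supervised hashing (tanh relaxation):
    similar pairs are pulled together in the relaxed code space, dissimilar
    pairs are pushed at least `margin` apart, and a quantization penalty
    drives the codes toward {-1, +1}. Not the paper's exact SRH objective.
    """
    h = torch.tanh(codes)                        # relaxed binary codes
    d = torch.cdist(h, h, p=2).pow(2) / 2.0      # pairwise squared distances
    sim = (labels[:, None] == labels[None, :]).float()
    loss = (sim * d + (1.0 - sim) * F.relu(margin - d)).mean()
    quant = (h.abs() - 1.0).pow(2).mean()        # quantization penalty
    return loss + lam * quant

codes = torch.randn(8, 16, requires_grad=True)   # mock CNN outputs
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
pairwise_hashing_loss(codes, labels).backward()  # gradients flow to the codes
```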
