
The most fundamental model of a molecule is a cloud of unordered atoms, even without chemical bonds, which can depend on thresholds for distances and angles. The strongest equivalence between clouds of atoms is rigid motion, which is a composition of translations and rotations. The existing datasets of experimental and simulated molecules require a continuous quantification of similarity in terms of a distance metric. While clouds of m ordered points were continuously classified by Lagrange's quadratic forms (distance matrices or Gram matrices), extending these invariants to m unordered points is impractical because of the exponential number of m! permutations. We propose new metrics that are continuous in general position and computable in time polynomial in the number m of unordered points in any Euclidean space of a fixed dimension n.
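For intuition, here is a small illustrative sketch (not the metrics proposed in the paper): it contrasts the brute-force comparison over all m! relabellings with a classical polynomial-time, permutation-invariant descriptor, the sorted vector of pairwise distances, which is fast but known to be incomplete.

```python
# Illustrative sketch only -- not the metrics proposed in the paper.
# It contrasts the brute-force comparison over all m! relabellings with a
# classical polynomial-time, permutation-invariant descriptor (the sorted
# vector of pairwise distances), which is fast but known to be incomplete.
from itertools import permutations
import numpy as np

def min_rmsd_over_relabellings(A, B):
    """Exact but exponential: minimise RMSD over all m! relabellings of B."""
    m = len(A)
    best = np.inf
    for p in permutations(range(m)):
        best = min(best, np.sqrt(np.mean(np.sum((A - B[list(p)]) ** 2, axis=1))))
    return best

def sorted_distance_descriptor(A):
    """Polynomial time, invariant under relabelling and rigid motion (incomplete)."""
    D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    return np.sort(D[np.triu_indices(len(A), k=1)])

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))              # a small cloud of m = 6 points in R^3
B = A[rng.permutation(6)]                # the same cloud with atoms relabelled
print(min_rmsd_over_relabellings(A, B))  # ~0, found only after trying 6! orders
print(np.linalg.norm(sorted_distance_descriptor(A) - sorted_distance_descriptor(B)))  # 0
```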

This paper develops a notion of geometric quantiles on Hadamard spaces, also known as global non-positive curvature spaces. After providing some definitions and basic properties, including scaled isometry equivariance and a necessary condition on the gradient of the quantile loss function at quantiles on Hadamard manifolds, we investigate asymptotic properties of sample quantiles on Hadamard manifolds, such as strong consistency and joint asymptotic normality. We provide a detailed description of how to compute quantiles using a gradient descent algorithm in hyperbolic space and, in particular, an explicit formula for the gradient of the quantile loss function, along with experiments using simulated and real single-cell RNA sequencing data.
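As a point of reference only, the sketch below runs gradient descent on a geometric quantile loss of the form E[||X − q|| + ⟨β, X − q⟩] in Euclidean space; on a Hadamard manifold such as hyperbolic space the straight-line update would be replaced by Riemannian exponential/logarithm maps and the gradient by the formula derived in the paper. The loss form, step size, and iteration count are illustrative assumptions.

```python
# A point of reference only: gradient descent on the geometric quantile loss
# E[ ||X - q|| + <beta, X - q> ] in Euclidean space.  On a Hadamard manifold
# (e.g. hyperbolic space) the straight-line update would be replaced by
# Riemannian exp/log maps and the gradient by the formula derived in the paper.
# The loss form, step size and iteration count are illustrative assumptions.
import numpy as np

def geometric_quantile(X, beta, lr=0.1, iters=500):
    q = X.mean(axis=0)                                  # start from the sample mean
    for _ in range(iters):
        diff = X - q
        norms = np.maximum(np.linalg.norm(diff, axis=1, keepdims=True), 1e-12)
        grad = -(diff / norms).mean(axis=0) - beta      # gradient of the quantile loss
        q = q - lr * grad
    return q

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
print(geometric_quantile(X, beta=np.zeros(2)))           # geometric median, near the origin
print(geometric_quantile(X, beta=np.array([0.5, 0.0])))  # pulled towards the +x direction
```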

In this paper, a two-sided variable-coefficient space-fractional diffusion equation with a fractional Neumann boundary condition is considered. To overcome the weak singularity caused by the nonlocal space-fractional differential operators, a fractional block-centered finite difference (BCFD) method on general nonuniform grids is proposed by introducing an auxiliary fractional flux variable and using piecewise linear interpolations. However, like other numerical methods, the proposed method still produces linear algebraic systems with unstructured dense coefficient matrices on general nonuniform grids. Consequently, traditional direct solvers such as Gaussian elimination require $\mathcal{O}(M^2)$ memory and $\mathcal{O}(M^3)$ computational work per time level, where $M$ is the number of spatial unknowns in the numerical discretization. To address this issue, we combine the well-known sum-of-exponentials (SOE) approximation technique with the fractional BCFD method to obtain a fast fractional BCFD algorithm. Based on Krylov subspace iterative methods, fast matrix-vector multiplications of the resulting coefficient matrices with any vector are developed, which can be carried out in only $\mathcal{O}(MN_{exp})$ operations per iteration, where $N_{exp}\ll M$ is the number of exponentials in the SOE approximation. Moreover, the coefficient matrices need not be generated explicitly; they can be stored in $\mathcal{O}(MN_{exp})$ memory by storing only a few coefficient vectors. Numerical experiments are provided to demonstrate the efficiency and accuracy of the method.
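To convey the idea behind the $\mathcal{O}(MN_{exp})$ matrix-vector products, here is a simplified sketch (not the paper's fractional BCFD matrices): once a history kernel is approximated by a sum of exponentials, lower-triangular history sums on a nonuniform grid can be updated by a short recurrence instead of an $\mathcal{O}(M^2)$ double loop. The kernel, weights, and grid below are illustrative.

```python
# Simplified sketch of the idea behind the O(M N_exp) matrix-vector products,
# not the paper's fractional BCFD matrices.  If a history kernel satisfies
# K(s) ~ sum_l w_l * exp(-lam_l * s), then the lower-triangular sums
# y_i = sum_{j<i} K(x_i - x_j) v_j on a nonuniform grid follow from a short
# recurrence instead of an O(M^2) double loop.  Kernel and grid are illustrative.
import numpy as np

def history_sum_direct(x, v, K):
    """O(M^2) reference implementation."""
    M, y = len(x), np.zeros(len(x))
    for i in range(M):
        for j in range(i):
            y[i] += K(x[i] - x[j]) * v[j]
    return y

def history_sum_soe(x, v, w, lam):
    """O(M * N_exp): update one exponential moment per SOE term by recurrence."""
    M, y = len(x), np.zeros(len(x))
    s = np.zeros_like(w)
    for i in range(1, M):
        s = np.exp(-lam * (x[i] - x[i - 1])) * (s + v[i - 1])
        y[i] = np.dot(w, s)
    return y

w, lam = np.array([0.7, 0.3]), np.array([1.0, 10.0])     # an exact 2-term SOE kernel
K = lambda s: np.sum(w * np.exp(-lam * s))
rng = np.random.default_rng(2)
x, v = np.sort(rng.uniform(0.0, 1.0, 200)), rng.normal(size=200)
print(np.max(np.abs(history_sum_direct(x, v, K) - history_sum_soe(x, v, w, lam))))  # ~1e-15
```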

Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two different data forms, specific methodology for the related estimation is yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where processes can be any combination of either point processes or random fields. We discuss finite sample and asymptotic theory for the proposed estimators, as well as specific details on the implementation, including how to perform estimation on non-rectangular domains and the correct implementation of multitapering for processes sampled in different ways, e.g., continuously vs. on a regular grid.
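For readers unfamiliar with multitapering, the following is a minimal one-dimensional sketch of the taper-and-average idea using Slepian (DPSS) tapers from SciPy; the paper's contribution is the multivariate spatial version for arbitrary combinations of point processes and random fields, which is not reproduced here.

```python
# Minimal 1-D multitaper sketch: taper the series with K Slepian (DPSS) windows,
# average the resulting periodograms.  The paper's multivariate spatial version
# for mixtures of point processes and random fields is not reproduced here;
# NW, K and the test signal are illustrative choices.
import numpy as np
from scipy.signal import windows

def multitaper_psd(x, dt=1.0, NW=4, K=7):
    M = len(x)
    tapers = windows.dpss(M, NW, Kmax=K)          # shape (K, M), unit-energy tapers
    spectra = np.abs(np.fft.rfft(tapers * (x - x.mean()), axis=1)) ** 2
    return np.fft.rfftfreq(M, d=dt), dt * spectra.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(size=t.size)
freqs, S = multitaper_psd(x)
print(freqs[np.argmax(S)])                        # peak near 0.1, the true frequency
```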

Programs with a continuous state space or that interact with physical processes often require notions of equivalence going beyond the standard binary setting in which equivalence either holds or does not hold. In this paper we explore the idea of equivalence taking values in a quantale V, which covers the cases of (in)equations and (ultra)metric equations among others. Our main result is the introduction of a V-equational deductive system for linear $\lambda$-calculus together with a proof that it is sound and complete. In fact, we go further than this by showing that linear $\lambda$-theories based on this V-equational system form a category that is equivalent to a category of autonomous categories enriched over 'generalised metric spaces'. If we instantiate this result to inequations, we get an equivalence with autonomous categories enriched over partial orders. In the case of (ultra)metric equations, we get an equivalence with autonomous categories enriched over (ultra)metric spaces. We additionally show that this syntax-semantics correspondence extends to the affine setting. We use our results to develop examples of inequational and metric equational systems for higher-order programming in the setting of real-time, probabilistic, and quantum computing.
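As a loose illustration of what "equivalence valued in a quantale V" buys (the encoding below is my own simplification, not the paper's categorical machinery): choosing the Boolean quantale recovers ordinary yes/no equations, while the Lawvere quantale [0, ∞] with addition (or with maximum) turns judgements into (ultra)metric equations whose composition obeys the triangle inequality.

```python
# My own loose simplification, not the paper's categorical machinery: a quantale V
# supplies the "values" of equations.  The Boolean quantale recovers yes/no
# equations; the Lawvere quantale [0, inf] with + turns judgements into metric
# equations (transitivity becomes the triangle inequality); replacing + by max
# gives ultrametric equations.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Quantale:
    unit: float                                 # value of a "perfect" equation
    tensor: Callable[[float, float], float]     # how two equations compose

boolean = Quantale(unit=1.0, tensor=min)                  # {0, 1} with conjunction
lawvere = Quantale(unit=0.0, tensor=lambda a, b: a + b)   # [0, inf] with addition
ultra   = Quantale(unit=0.0, tensor=max)                  # [0, inf] with maximum

def chain(q: Quantale, *judgements: float) -> float:
    """Value of a chained equation t0 = t1 = ... = tn from its individual steps."""
    out = q.unit
    for e in judgements:
        out = q.tensor(out, e)
    return out

print(chain(boolean, 1.0, 1.0, 0.0))   # 0.0: one failed step falsifies the whole chain
print(chain(lawvere, 0.1, 0.2))        # 0.3: distances add along the chain
print(chain(ultra, 0.1, 0.2))          # 0.2: ultrametric keeps the maximum
```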

In this paper, we consider variational autoencoders (VAE) for general state space models. We consider a backward factorization of the variational distributions to analyze the excess risk associated with VAE. Such backward factorizations were recently proposed to perform online variational learning and to obtain upper bounds on the variational estimation error. When independent trajectories of sequences are observed and under strong mixing assumptions on the state space model and on the variational distribution, we provide an oracle inequality explicit in the number of samples and in the length of the observation sequences. We then derive consequences of this theoretical result. In particular, when the data distribution is given by a state space model, we provide an upper bound for the Kullback-Leibler divergence between the data distribution and its estimator and between the variational posterior and the estimated state space posterior distributions. Under classical assumptions, we prove that our results can be applied to Gaussian backward kernels built with dense and recurrent neural networks.
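For concreteness, such a backward factorization is typically written in the following generic form, which mirrors the exact backward decomposition of the smoothing distribution (the notation is mine, not necessarily the paper's):

\[
q_{\phi}(x_{1:T}\mid y_{1:T}) \;=\; q_{\phi}(x_T\mid y_{1:T})\,\prod_{t=1}^{T-1} q_{\phi}\big(x_t\mid x_{t+1},\, y_{1:t}\big),
\]

so that the variational joint is assembled from a marginal at the final time and backward Markov kernels, which is what makes Gaussian backward kernels built with dense and recurrent neural networks a natural parameterization.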

This paper develops some theory of the matrix Dyson equation (MDE) for correlated linearizations and uses it to solve a problem on asymptotic deterministic equivalents for the test error in random features regression. The theory developed for the correlated MDE includes existence-uniqueness, spectral support bounds, and stability properties of the MDE. This theory is new for constructing deterministic equivalents for pseudoresolvents of a class of correlated linear pencils. In the application, this theory is used to give a deterministic equivalent of the test error in random features ridge regression, in a proportional scaling regime, conditioned on both the training and test datasets.
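For context only, the sketch below sets up the random features ridge regression whose test error is the object being characterized; the deterministic equivalent itself requires solving the matrix Dyson equation and is not reproduced here. Dimensions, the ReLU feature map, and the noise level are illustrative choices, not the paper's.

```python
# Context-only sketch of the random features ridge regression setup whose test
# error the paper characterises; the deterministic equivalent itself requires
# solving the matrix Dyson equation and is not reproduced here.  Dimensions,
# the ReLU feature map and the noise level are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n, n_test, d, N, lam = 400, 200, 100, 300, 1e-2   # samples, test points, dim, features, ridge

W = rng.normal(size=(N, d)) / np.sqrt(d)          # fixed random first-layer weights
phi = lambda X: np.maximum(X @ W.T, 0.0)          # random ReLU features

beta_star = rng.normal(size=d) / np.sqrt(d)       # ground-truth linear signal
X, X_test = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
y = X @ beta_star + 0.1 * rng.normal(size=n)
y_test = X_test @ beta_star + 0.1 * rng.normal(size=n_test)

Z = phi(X)
a_hat = np.linalg.solve(Z.T @ Z + lam * n * np.eye(N), Z.T @ y)   # ridge estimator
print(np.mean((phi(X_test) @ a_hat - y_test) ** 2))               # empirical test error
```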

We give a structure-preserving spatio-temporal discretization for incompressible magnetohydrodynamics (MHD) on the sphere. Discretization in space is based on the theory of geometric quantization, which yields a spatially discretized analogue of the MHD equations as a finite-dimensional Lie--Poisson system on the dual of the magnetic extension Lie algebra $\mathfrak{f}=\mathfrak{su}(N)\ltimes\mathfrak{su}(N)^{*}$. We also give accompanying structure-preserving time discretizations for Lie--Poisson systems on the dual of semidirect product Lie algebras of the form $\mathfrak{f}=\mathfrak{g}\ltimes\mathfrak{g}^{*}$, where $\mathfrak{g}$ is a $J$-quadratic Lie algebra. Critically, the time integration method is free of computationally costly matrix exponentials. We prove that the full method preserves the underlying geometry, namely the Lie--Poisson structure and all the Casimirs. To showcase the method, we apply it to two models for magnetic fluids: incompressible magnetohydrodynamics and Hazeltine's model.
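As standard background (this is the generic form, not the specific magnetic-extension system), Lie--Poisson systems of this type evolve, up to a sign convention fixed by left or right reduction, according to

\[
\dot{\mu} \;=\; \operatorname{ad}^{*}_{\delta H/\delta\mu}\,\mu, \qquad \mu\in\mathfrak{f}^{*},
\]

and the Casimirs are exactly the functions $C$ with $\operatorname{ad}^{*}_{\delta C/\delta\mu}\,\mu = 0$ for all $\mu$; it is this bracket and these invariants that the structure-preserving integrators are designed to conserve.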

Quantum computing devices are believed to be powerful in solving the prime factorization problem, which is at the heart of widely deployed public-key cryptographic tools. However, implementing Shor's quantum factorization algorithm requires significant resources that scale linearly with the size of the number to be factored; taking into account the overhead required for quantum error correction, the estimate is that 20 million (noisy) physical qubits would be required to factor a 2048-bit RSA key in 8 hours. A recent proposal by Yan et al. claims the possibility of solving the factorization problem with sublinear quantum resources. As we demonstrate in our work, this proposal lacks a systematic analysis of the computational complexity of the classical part of the algorithm, which exploits Schnorr's lattice-based approach. We provide several examples illustrating the need for additional resource analysis for the proposed quantum factorization algorithm.

This manuscript is devoted to investigating the conservation laws of the incompressible Navier-Stokes equations (NSEs), written in the energy-momentum-angular momentum conserving (EMAC) formulation, after being linearized by two-level methods. With appropriate correction steps (e.g., Stokes/Newton corrections), we show that the two-level methods, discretized from the EMAC NSEs, preserve momentum and angular momentum and asymptotically preserve energy. Error estimates and (asymptotic) conservation properties are analyzed and obtained, and numerical experiments are conducted to validate the theoretical results, mainly confirming that the two-level linearized methods indeed (almost) retain the conservation laws. Moreover, experimental error estimates and optimal convergence rates for two newly defined types of pressure approximation in the EMAC NSEs are also obtained.
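As background (following the EMAC formulation commonly used in the literature, not a result of this manuscript), the EMAC form replaces the standard convective term by the trilinear form

\[
c(u,v,w) \;=\; 2\,\big(D(u)\,v,\,w\big) + \big((\nabla\!\cdot u)\,v,\,w\big), \qquad D(u)=\tfrac{1}{2}\big(\nabla u+(\nabla u)^{T}\big),
\]

used together with a modified pressure $P = p - \tfrac{1}{2}|u|^{2}$, which is why pressure approximation in the EMAC setting calls for separate treatment.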

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods; several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
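To make the general recipe concrete, here is a generic pairwise deep supervised hashing sketch in PyTorch; it is illustrative only, using the common formulation with a quantization penalty rather than the proposed SRH method or its shadow mechanism, and the architecture and hyperparameters are placeholders.

```python
# Generic pairwise deep supervised hashing sketch in PyTorch -- illustrative only.
# This is the common formulation (similar pairs pulled together, dissimilar pairs
# pushed apart, plus a quantization penalty), not the proposed SRH method or its
# shadow mechanism; the architecture and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    def __init__(self, n_bits=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, n_bits)

    def forward(self, x):
        return torch.tanh(self.head(self.features(x).flatten(1)))  # codes in (-1, 1)

def pairwise_hash_loss(codes, labels, margin=2.0, quant_weight=0.1):
    d = torch.cdist(codes, codes)                            # pairwise code distances
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * d.pow(2)                                   # similar pairs: pull together
    push = (1 - same) * F.relu(margin - d).pow(2)            # dissimilar pairs: push apart
    quant = (codes.abs() - 1).pow(2).mean()                  # encourage +/-1 binary codes
    return (pull + push).mean() + quant_weight * quant

model = HashNet(n_bits=32)
x = torch.randn(16, 3, 32, 32)                               # a CIFAR-10-sized batch
y = torch.randint(0, 10, (16,))
loss = pairwise_hash_loss(model(x), y)
loss.backward()
print(loss.item())
```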
