
This document describes an algorithm to scale a complex vector by the reciprocal of a complex value. The algorithm computes the reciprocal of the complex value and then scales the vector by that reciprocal. Some intermediate scaling may be necessary with this two-step strategy, and the proposed algorithm takes that scaling into account. The algorithm is designed to be faster than the naive approach of dividing each entry of the vector by the complex value, without losing much accuracy. It also provides a single, uniform strategy for scaling vectors by the reciprocal of a complex value, which improves software maintainability.
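As a minimal sketch of the two-step strategy, one might compute a robust complex reciprocal once and then apply a single scaling pass over the vector. The function name is hypothetical and the reciprocal here uses Smith's formula; the paper's actual algorithm adds further safeguards.

```python
import numpy as np

def scale_by_reciprocal(x, z):
    """Scale vector x by 1/z using one reciprocal and one vector scaling.

    Illustrative sketch only: the name is hypothetical and the reciprocal
    uses Smith's formula; the algorithm described above includes
    additional scaling safeguards not reproduced here.
    """
    a, b = z.real, z.imag
    if abs(a) >= abs(b):
        r = b / a                      # |r| <= 1, avoids forming a*a + b*b
        d = a + b * r                  # d = (a^2 + b^2) / a
        recip = complex(1.0 / d, -r / d)
    else:
        r = a / b
        d = a * r + b                  # d = (a^2 + b^2) / b
        recip = complex(r / d, -1.0 / d)
    return x * recip                   # one scaling pass over the vector
```

The point of the two-step form is that the n complex divisions of the naive approach are replaced by one reciprocal and n complex multiplications.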

Related content

Group equivariant non-expansive operators have been recently proposed as basic components in topological data analysis and deep learning. In this paper we study some geometric properties of the spaces of group equivariant operators and show how a space $\mathcal{F}$ of group equivariant non-expansive operators can be endowed with the structure of a Riemannian manifold, thereby making available the use of gradient descent methods for the minimization of cost functions on $\mathcal{F}$. As an application of this approach, we also describe a procedure to select a finite set of representative group equivariant non-expansive operators in the considered manifold.

We define positive and strictly positive definite functions on a domain and study these functions on a list of regular domains. The list includes the unit ball, conic surface, hyperbolic surface, solid hyperboloid, and simplex. Each of these domains is embedded in a quadrant or a union of quadrants of the unit sphere by a distance preserving map, from which characterizations of positive definite and strictly positive definite functions are derived for these regular domains.

Laguerre spectral approximations play an important role in the development of efficient algorithms for problems in unbounded domains. In this paper, we present a comprehensive convergence rate analysis of Laguerre spectral approximations for analytic functions. By exploiting contour integral techniques from complex analysis, we prove that Laguerre projection and interpolation methods of degree $n$ converge at the root-exponential rate $O(\exp(-2\rho\sqrt{n}))$ with $\rho>0$ when the underlying function is analytic inside and on a parabola with focus at the origin and vertex at $z=-\rho^2$. As far as we know, this is the first rigorous proof of root-exponential convergence of Laguerre approximations for analytic functions. Several important applications of our analysis are also discussed, including Laguerre spectral differentiations, Gauss-Laguerre quadrature rules, and the scaling factor and the Weeks method for the inversion of the Laplace transform, and some sharp convergence rate estimates are derived. Numerical experiments are presented to verify the theoretical results.
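A hypothetical numerical check of this behavior (the test integrand and reference value are my choices, not from the paper): apply $n$-point Gauss-Laguerre quadrature to $f(x)=1/(1+x)$, which is analytic in the relevant parabolic region, so root-exponential decay of the error is expected.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Reference value: \int_0^\infty e^{-x}/(1+x) dx = e * E_1(1).
exact = 0.5963473623231940
errs = []
for n in (5, 10, 20, 40):
    x, w = laggauss(n)                         # Gauss-Laguerre nodes/weights
    errs.append(abs(w @ (1.0 / (1.0 + x)) - exact))
# errs should decay roughly like exp(-c*sqrt(n)) for some c > 0
```

Doubling $n$ therefore gains only a factor of about $\exp(c(\sqrt{2}-1)\sqrt{n})$ in accuracy, much slower than the geometric convergence familiar from Chebyshev approximation on bounded intervals.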

We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in the reformulation of the delay equation as an abstract differential equation, the reduction of the latter to a system of ordinary differential equations via pseudospectral collocation, and the application of the standard discrete QR method. The effectiveness of the method is shown experimentally and a MATLAB implementation is provided.
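The last step, the standard discrete QR method, can be sketched for a generic linear system $Y'(t) = A(t)Y(t)$ as follows (a hypothetical helper, not the paper's MATLAB code): propagate an orthonormal frame one RK4 step at a time, restore orthonormality by a QR factorization, and average the logs of the diagonal of $R$.

```python
import numpy as np

def lyapunov_qr(A_of_t, dim, t_end, h):
    """Discrete QR sketch: approximate Lyapunov exponents of Y' = A(t) Y."""
    def f(t, Y):
        return A_of_t(t) @ Y
    Q = np.eye(dim)
    logs = np.zeros(dim)
    n_steps = round(t_end / h)
    for k in range(n_steps):
        t = k * h
        k1 = f(t, Q)                       # classical RK4 stages
        k2 = f(t + h / 2, Q + h / 2 * k1)
        k3 = f(t + h / 2, Q + h / 2 * k2)
        k4 = f(t + h, Q + h * k3)
        Q, R = np.linalg.qr(Q + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
        s = np.sign(np.diag(R))            # enforce positive diagonal of R
        Q, R = Q * s, R * s[:, None]
        logs += np.log(np.diag(R))
    return logs / (n_steps * h)            # time-averaged growth rates
```

For a constant-coefficient system the exponents are the real parts of the eigenvalues of $A$, which gives an easy sanity check.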

We construct the first rigorously justified probabilistic algorithm for recovering the solution operator of a hyperbolic partial differential equation (PDE) in two variables from input-output training pairs. The primary challenge of recovering the solution operator of hyperbolic PDEs is the presence of characteristics, along which the associated Green's function is discontinuous. Therefore, a central component of our algorithm is a rank detection scheme that identifies the approximate location of the characteristics. By combining the randomized singular value decomposition with an adaptive hierarchical partition of the domain, we construct an approximant to the solution operator using $O(\Psi_\epsilon^{-1}\epsilon^{-7}\log(\Xi_\epsilon^{-1}\epsilon^{-1}))$ input-output pairs with relative error $O(\Xi_\epsilon^{-1}\epsilon)$ in the operator norm as $\epsilon\to0$, with high probability. Here, $\Psi_\epsilon$ represents the existence of degenerate singular values of the solution operator, and $\Xi_\epsilon$ measures the quality of the training data. Our assumptions on the regularity of the coefficients of the hyperbolic PDE are relatively weak given that hyperbolic PDEs do not have the ``instantaneous smoothing effect'' of elliptic and parabolic PDEs, and our recovery rate improves as the regularity of the coefficients increases.
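The randomized singular value decomposition used as a building block can be sketched as follows (a basic Halko-Martinsson-Tropp style routine; the rank detection scheme and the adaptive hierarchical partition from the paper are not reproduced here, and the function name is illustrative):

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    """Rank-k truncated SVD of A via a Gaussian range sketch.

    k: target rank; p: small oversampling parameter.
    """
    rng = np.random.default_rng(rng)
    Y = A @ rng.standard_normal((A.shape[1], k + p))  # sketch of the range
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for it
    B = Q.T @ A                                       # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

When $A$ is numerically low-rank, as the solution operator is away from the characteristics, the sketch captures its range with high probability using only $k+p$ matrix-vector products, which is what makes recovery from few input-output pairs possible.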

We present and analyze an algorithm designed for addressing vector-valued regression problems involving possibly infinite-dimensional input and output spaces. The algorithm is a randomized adaptation of reduced rank regression, a technique to optimally learn a low-rank vector-valued function (i.e. an operator) between sampled data via regularized empirical risk minimization with rank constraints. We propose Gaussian sketching techniques both for the primal and dual optimization objectives, yielding Randomized Reduced Rank Regression (R4) estimators that are efficient and accurate. For each of our R4 algorithms we prove that the resulting regularized empirical risk is, in expectation with respect to the randomness of the sketch, arbitrarily close to the optimal value when the hyper-parameters are properly tuned. Numerical experiments illustrate the tightness of our bounds and show advantages in two distinct scenarios: (i) solving a vector-valued regression problem using synthetic and large-scale neuroscience datasets, and (ii) regressing the Koopman operator of a nonlinear stochastic dynamical system.
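For reference, the non-randomized baseline being accelerated can be sketched in finite dimensions as follows (a hypothetical helper following the classical reduced-rank-ridge recipe: solve the ridge problem, then truncate the fitted values' SVD to the target rank; the R4 estimators replace the exact factorizations with Gaussian sketches, which is not reproduced here):

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, lam):
    """Rank-constrained ridge regression: min ||Y - X W||^2 + lam ||W||^2,
    with rank(W) <= rank, via ridge-then-project."""
    n, d = X.shape
    W_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    F = X @ W_ridge                       # ridge fitted values
    _, _, Vt = np.linalg.svd(F, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]           # projector onto top-rank output directions
    return W_ridge @ P                    # rank-constrained coefficient matrix
```

The cost is dominated by the $d \times d$ solve and the SVD of $F$; the Gaussian sketching of the primal and dual objectives targets exactly these factorizations.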

We prove the convergence of a meshfree method for solving the elliptic Monge-Ampère equation with Dirichlet boundary conditions on a bounded domain. An $L^2$ error bound is obtained for kernel-based trial spaces generated by compactly supported radial basis functions. We obtain the convergence result when the testing discretization is finer than the trial discretization. The convergence rate depends on the regularity of the solution, the smoothness of the computational domain, and the approximation properties of the scaled kernel-based spaces. The presented convergence theory covers a wide range of kernel-based trial spaces, including stationary and non-stationary approximation. An extension to non-Dirichlet boundary conditions is in a forthcoming paper.
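To make the ingredients concrete, here is a small illustration of a scaled, compactly supported kernel of the kind generating such trial spaces (the centers and scale are made up; Wendland's $C^2$ function is one standard choice, not necessarily the paper's):

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 RBF: (1 - r)_+^4 (4 r + 1)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# A scaled kernel-based trial space is spanned by phi(||x - x_j|| / delta)
# over trial centers x_j; shrinking delta sparsifies the kernel matrix.
rng = np.random.default_rng(1)
centers = rng.random((50, 2))                 # trial centers in [0, 1]^2
delta = 0.3                                   # scaling parameter
D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
K = wendland_c2(D / delta)                    # symmetric kernel matrix
```

Keeping $\delta$ fixed as centers are refined gives a stationary approximation, while coupling $\delta$ to the fill distance gives the non-stationary setting mentioned above.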

We define a model of predicate logic in which every term and predicate, open or closed, has an absolute denotation independently of a valuation of the variables. For each variable a, the domain of the model contains an element [[a]] which is the denotation of the term a (which is also a variable symbol). Similarly, the algebra interpreting predicates in the model directly interprets open predicates. Because of this, models must also incorporate notions of substitution and quantification. These notions are axiomatic and need not be applied only to sets of syntax. We prove soundness and show how every 'ordinary' model (i.e. a model based on sets and valuations) can be translated into one of our nominal models, thereby also proving completeness.

In recent literature, for modeling reasons, fractional differential problems have been considered equipped with anti-symmetric boundary conditions. Twenty years ago, anti-reflective boundary conditions were introduced in the context of signal processing and imaging, both to increase the quality of the reconstruction of a blurred signal/image contaminated by noise and to reduce the overall complexity to that of a few fast sine transforms, i.e. to $O(N\log N)$ real arithmetic operations, where $N$ is the number of pixels. Here we consider anti-symmetric boundary conditions and introduce anti-reflective boundary conditions in the context of nonlocal problems of fractional differential type. In the latter context, we study both types of boundary conditions, which are in essence similar, from the perspective of computational efficiency, considering both nontruncated and truncated versions. Several numerical tests, tables, and visualizations are provided and critically discussed.
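The $O(N\log N)$ complexity comes from working with matrices diagonalized by the discrete sine transform. As an illustrative sketch (an assumed toy setup, not the paper's code), the 1D discrete Laplacian tridiag$(-1,2,-1)$ admits a fast matrix-vector product via two DST-I calls:

```python
import numpy as np
from scipy.fft import dst

N = 64
# Eigenvalues of tridiag(-1, 2, -1) under the sine basis:
lam = 2 - 2 * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))
x = np.random.default_rng(0).standard_normal(N)
# T x = S diag(lam) S x / normalization, each S applied by a DST-I:
y_fast = dst(lam * dst(x, type=1), type=1) / (2 * (N + 1))
# Dense reference for comparison:
T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
```

Each DST costs $O(N\log N)$, versus $O(N^2)$ for the dense product; the fractional-differential matrices with these boundary conditions are exploited through transforms of the same family.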

There has recently been considerable interest in the analysis of the Stein gradient descent method, a deterministic sampling algorithm. It is based on a particle system moving along the gradient flow of the Kullback-Leibler divergence towards the asymptotic state corresponding to the desired distribution. Mathematically, the method can be formulated as a joint limit of time $t$ and number of particles $N$ going to infinity. We first observe that the recent work of Lu, Lu and Nolen (2019) implies that if $t \approx \log \log N$, then the joint limit can be rigorously justified in the Wasserstein distance. Not satisfied with this time scale, we explore what happens for larger times by investigating the stability of the method: if the particles are initially close to the asymptotic state (with distance $\approx 1/N$), how long will they remain close? We prove that this happens on algebraic time scales $t \approx \sqrt{N}$, which is significantly better. The method we exploit, developed by Caglioti and Rousset for the Vlasov equation, is based on finding a functional invariant for the linearized equation. This allows us to eliminate linear terms and arrive at an improved Gronwall-type estimate.
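The underlying particle system can be sketched by one update of Stein variational gradient descent with an RBF kernel (function names, bandwidth, and step size are illustrative choices, not taken from the paper):

```python
import numpy as np

def svgd_step(particles, grad_logp, h=0.5, eps=0.1):
    """One SVGD update: particles is (N, d), grad_logp maps (N, d) -> (N, d).

    The update follows the empirical approximation of the KL gradient flow:
    a kernel-weighted drift toward high density plus a repulsion term.
    """
    diff = particles[:, None, :] - particles[None, :, :]   # (N, N, d)
    sq = np.sum(diff ** 2, axis=-1)                        # squared distances
    K = np.exp(-sq / (2 * h))                              # RBF kernel matrix
    gK = -diff / h * K[:, :, None]                         # grad of kernel in x_j
    N = particles.shape[0]
    phi = (K @ grad_logp(particles) + gK.sum(axis=0)) / N  # empirical KL descent
    return particles + eps * phi
```

Iterating this map for a standard normal target ($\nabla \log p(x) = -x$) drives the particle cloud toward the asymptotic state whose stability the analysis above quantifies.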
