
We present a fast and approximate multifrontal solver for large-scale sparse linear systems arising from finite-difference, finite-volume or finite-element discretization of high-frequency wave equations. The proposed solver leverages the butterfly algorithm and its hierarchical matrix extension for compressing and factorizing large frontal matrices via graph-distance guided entry evaluation or randomized matrix-vector multiplication-based schemes. Complexity analysis and numerical experiments demonstrate $\mathcal{O}(N\log^2 N)$ computation and $\mathcal{O}(N)$ memory complexity when applied to an $N\times N$ sparse system arising from 3D high-frequency Helmholtz and Maxwell problems.

Related Content

In this paper we present an adaptive deep density approximation strategy based on KRnet (ADDA-KR) for solving steady-state Fokker-Planck (F-P) equations. F-P equations are usually high-dimensional and defined on an unbounded domain, which limits the application of traditional grid-based numerical methods. Built on the Knothe-Rosenblatt rearrangement, our newly proposed flow-based generative model, called KRnet, provides a family of probability density functions that serve as effective solution candidates for the F-P equations; it depends more weakly on dimensionality than traditional computational approaches and can efficiently estimate general high-dimensional density functions. To obtain effective stochastic collocation points for the approximation of the F-P equation, we develop an adaptive sampling procedure in which samples are generated iteratively from the approximate density function available at each iteration. We present a general framework for ADDA-KR, validate its accuracy, and demonstrate its efficiency with numerical experiments.
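
To make the adaptive sampling idea concrete, the sketch below treats a 1-D toy steady-state F-P equation with drift $\mu(x)=-x$ and constant diffusion $D$, and uses a single Gaussian density (trainable mean and log standard deviation) as a stand-in for KRnet; collocation points are re-drawn from the current model density at every iteration. All of these choices are simplifying assumptions made purely for illustration.

```python
# Minimal sketch of adaptive collocation for a 1-D steady-state Fokker-Planck
# equation: d/dx( x * p(x) ) + D * p''(x) = 0, whose solution is N(0, D).
# A single Gaussian density stands in for KRnet (a simplifying assumption).
import math
import torch

torch.manual_seed(0)
D = 0.5                                             # diffusion coefficient
loc = torch.zeros(1, requires_grad=True)            # model mean
log_scale = torch.zeros(1, requires_grad=True)      # model log standard deviation
opt = torch.optim.Adam([loc, log_scale], lr=5e-2)

def log_density(x):
    """Log of the normalized Gaussian model density."""
    scale = log_scale.exp()
    return -0.5 * ((x - loc) / scale) ** 2 - log_scale - 0.5 * math.log(2 * math.pi)

def fp_residual(x):
    """Residual of the steady-state F-P operator d/dx(x p) + D p'' at points x."""
    x = x.clone().requires_grad_(True)
    p = log_density(x).exp()
    px, = torch.autograd.grad(p.sum(), x, create_graph=True)
    pxx, = torch.autograd.grad(px.sum(), x, create_graph=True)
    return (p + x * px) + D * pxx                   # d/dx(x p) = p + x p'

for it in range(300):
    # Adaptive sampling: draw collocation points from the current approximate density.
    with torch.no_grad():
        x = loc + log_scale.exp() * torch.randn(256)
    loss = fp_residual(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The exact stationary density for this drift/diffusion pair is N(0, D).
print("learned std:", log_scale.exp().item(), "  exact:", math.sqrt(D))
```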

We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations: one based on random Fourier features and one based on truncating the kernel's Mercer expansion. In particular, we bound the Kullback-Leibler divergence between an exact GP and one resulting from either of these low-rank approximations to its kernel, as well as between their corresponding predictive densities, and we also bound the error between the predictive mean vectors and between the predictive covariance matrices computed using the exact versus the approximate GP. We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds.
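
As a concrete illustration of the first of these approximations, the following sketch builds a random-Fourier-feature approximation to an RBF kernel and computes the corresponding approximate GP predictive mean and covariance in the weight-space view. The data, hyperparameters, and feature count are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: GP regression with an RBF kernel approximated by random
# Fourier features. All hyperparameter values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
ell, sig_f, sig_n = 0.5, 1.0, 0.1          # lengthscale, signal std, noise std
n, n_test, D = 200, 50, 100                # training size, test size, number of features

X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + sig_n * rng.standard_normal(n)
Xs = np.linspace(-3, 3, n_test)[:, None]

# Random Fourier features for k(x,x') = sig_f^2 exp(-|x-x'|^2 / (2 ell^2)):
# frequencies drawn from the kernel's spectral density (Gaussian with std 1/ell).
W = rng.standard_normal((1, D)) / ell
b = rng.uniform(0, 2 * np.pi, D)
phi = lambda Z: sig_f * np.sqrt(2.0 / D) * np.cos(Z @ W + b)   # feature map

Phi, Phis = phi(X), phi(Xs)
# Weight-space view: Bayesian linear regression on the random features.
A = Phi.T @ Phi + sig_n**2 * np.eye(D)
mean = Phis @ np.linalg.solve(A, Phi.T @ y)                    # approx. predictive mean
cov = sig_n**2 * Phis @ np.linalg.solve(A, Phis.T)             # approx. predictive covariance

print("average approximate predictive std:", np.mean(np.sqrt(np.diag(cov))))
print("max |approx. kernel - exact kernel| on test points:",
      np.abs(Phis @ Phis.T - sig_f**2 *
             np.exp(-0.5 * ((Xs - Xs.T) / ell) ** 2)).max())
```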

This work provides a theoretical framework for the pose estimation problem using total least squares for vector observations from landmark features. First, the optimization framework is formulated with observation vectors extracted from point-cloud features. Then, error-covariance expressions are derived. The attitude and position solutions obtained via the derived optimization framework are proven to reach the bounds defined by the Cram\'er-Rao lower bound under the small-angle approximation of attitude errors. The measurement data for the simulation of this problem are provided through a series of vector observation scans, and a fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of sensor uncertainty. Previous derivations are thereby extended to the pose estimation problem to include more general error correlations than earlier treatments based on an isotropic noise assumption. The proposed solution is simulated in a Monte Carlo framework to validate the error-covariance analysis.
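
For context, the following sketch solves the simpler special case that the abstract generalizes: least-squares estimation of a rotation and translation from matched point observations with isotropic noise, via an SVD-based (Kabsch/Umeyama-style) alignment. The paper's total-least-squares formulation with a fully populated noise covariance and Cram\'er-Rao analysis is considerably richer; this is only a baseline.

```python
# Minimal baseline sketch: pose (rotation + translation) from matched landmarks
# with isotropic noise, solved in closed form via SVD. Not the paper's method.
import numpy as np

rng = np.random.default_rng(1)

def estimate_pose(P, Q):
    """Rotation R and translation t minimizing sum_i ||R @ P_i + t - Q_i||^2."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # enforce det(R) = +1
    R = Vt.T @ S @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Synthetic landmarks, a ground-truth pose, and isotropic measurement noise.
P = rng.standard_normal((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true + 0.01 * rng.standard_normal(P.shape)

R_hat, t_hat = estimate_pose(P, Q)
print("rotation error:   ", np.linalg.norm(R_hat - R_true))
print("translation error:", np.linalg.norm(t_hat - t_true))
```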

Group synchronization refers to estimating a collection of group elements from noisy pairwise measurements. Such nonconvex problems have received much attention from numerous scientific fields including computer vision, robotics, and cryo-electron microscopy. In this paper, we focus on the orthogonal group synchronization problem with general additive noise models under incomplete measurements, which is much more general than the commonly considered setting of complete measurements. Characterizations of the orthogonal group synchronization problem are given from the perspectives of optimality conditions and of fixed points of the projected gradient ascent method, also known as the generalized power method (GPM). Notably, these results still hold even without generative models. In addition, we derive a local error bound property for the orthogonal group synchronization problem, which is useful for the convergence rate analysis of different algorithms and may be of independent interest. Finally, based on the established local error bound property, we prove linear convergence of the GPM to a global maximizer under a general additive noise model. Our theoretical convergence result holds under several deterministic conditions that can cover certain cases with adversarial noise, and as an example we specialize it to the setting of an Erd\H{o}s-R\'enyi measurement graph and Gaussian noise.
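
The following is a minimal sketch of the generalized power method on a toy instance with an Erd\H{o}s-R\'enyi measurement graph and Gaussian noise, the example setting mentioned at the end of the abstract. Problem sizes, the noise level, and the spectral initialization are illustrative choices rather than the paper's prescriptions.

```python
# Minimal sketch of the generalized power method (projected gradient ascent)
# for orthogonal group synchronization with incomplete, noisy measurements.
import numpy as np

rng = np.random.default_rng(2)
n, d, p, sigma = 40, 3, 0.5, 0.1            # nodes, group dimension O(d), edge prob., noise

def proj_orthogonal(M):
    """Projection onto O(d): the orthogonal (polar) factor of M, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

# Ground truth O_1, ..., O_n in O(d) and incomplete pairwise measurements
# C_ij = O_i O_j^T + noise on the edges of an Erdos-Renyi graph.
O_true = np.stack([proj_orthogonal(rng.standard_normal((d, d))) for _ in range(n)])
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T
C = np.zeros((n, n, d, d))
for i in range(n):
    for j in range(n):
        if A[i, j]:
            C[i, j] = O_true[i] @ O_true[j].T + sigma * rng.standard_normal((d, d))

# Spectral initialization: top-d eigenvectors of the block measurement matrix.
M = np.zeros((n * d, n * d))
for i in range(n):
    for j in range(n):
        M[i * d:(i + 1) * d, j * d:(j + 1) * d] = A[i, j] * C[i, j]
_, V = np.linalg.eigh((M + M.T) / 2)
X = np.stack([proj_orthogonal(V[i * d:(i + 1) * d, -d:]) for i in range(n)])

# GPM iterations: aggregate neighbors through the measurements, then re-project onto O(d).
for _ in range(50):
    X = np.stack([proj_orthogonal(sum(A[i, j] * C[i, j] @ X[j] for j in range(n)))
                  for i in range(n)])

# Estimates are recovered up to a single global orthogonal transform G.
G = proj_orthogonal(sum(O_true[i].T @ X[i] for i in range(n)))
print("max per-node error:", max(np.linalg.norm(X[i] - O_true[i] @ G) for i in range(n)))
```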

Let $P$ be a bounded polyhedron defined as the intersection of the non-negative orthant ${\Bbb R}^n_+$ and an affine subspace of codimension $m$ in ${\Bbb R}^n$. We show that a simple and computationally efficient formula approximates the volume of $P$ within a factor of $\gamma^m$, where $\gamma >0$ is an absolute constant. The formula provides the best known estimate for the volume of transportation polytopes from a wide family.

We consider the distributional connection between the lossy compressed representation of a high-dimensional signal $X$ using a random spherical code and the observation of $X$ under additive white Gaussian noise (AWGN). We show that the Wasserstein distance between a bitrate-$R$ compressed version of $X$ and its observation through an AWGN channel with signal-to-noise ratio $2^{2R}-1$ is sub-linear in the problem dimension. We use this fact to connect the risk of an estimator based on an AWGN-corrupted version of $X$ to the risk attained by the same estimator when fed with its bitrate-$R$ quantized version. We demonstrate the usefulness of this connection by deriving various novel results for inference problems under compression constraints, including minimax estimation, sparse regression, compressed sensing, and the universality of linear estimation in remote source coding.
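
The only ingredient taken from the abstract in the sketch below is the correspondence between bitrate $R$ and the surrogate AWGN channel at signal-to-noise ratio $2^{2R}-1$; the sparse signal model and the soft-thresholding estimator are illustrative stand-ins for "an estimator based on an AWGN-corrupted version of $X$".

```python
# Minimal numeric sketch: the AWGN observation used as a surrogate for
# bitrate-R spherical-code compression, at SNR = 2^(2R) - 1.
import numpy as np

rng = np.random.default_rng(3)
n, k, R = 10_000, 200, 1.0                  # dimension, sparsity, bits per coordinate

# Sparse signal X with k nonzero coordinates (an illustrative signal model).
X = np.zeros(n)
X[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)

# AWGN surrogate: Y = X + Z with SNR = (signal power) / (noise power) = 2^(2R) - 1.
snr = 2 ** (2 * R) - 1
noise_var = np.mean(X ** 2) / snr
Y = X + np.sqrt(noise_var) * rng.standard_normal(n)

# Soft thresholding as an example denoiser for the sparse signal (illustrative).
tau = np.sqrt(2 * noise_var * np.log(n))    # universal threshold
X_hat = np.sign(Y) * np.maximum(np.abs(Y) - tau, 0.0)

print(f"surrogate SNR for R={R} bit/coordinate: {snr:.2f}")
print("relative denoising error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```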

Real-world data often comes in compressed form. Analyzing compressed data directly (without decompressing it) can save space and time by orders of magnitude. In this work, we focus on fundamental sequence comparison problems and try to quantify the gain in time complexity when the underlying data is highly compressible. We consider grammar compression, which unifies many practically relevant compression schemes. For two strings of total length $N$ and total compressed size $n$, it is known that the edit distance and a longest common subsequence (LCS) can be computed exactly in time $\tilde{O}(nN)$, as opposed to $O(N^2)$ for the uncompressed setting. Many applications need to align multiple sequences simultaneously, and the fastest known exact algorithms for median edit distance and LCS of $k$ strings run in $O(N^k)$ time. This naturally raises the question of whether compression can help to reduce the running time significantly for $k \geq 3$, perhaps to $O(N^{k/2}n^{k/2})$ or $O(Nn^{k-1})$. Unfortunately, we show lower bounds that rule out any improvement beyond $\Omega(N^{k-1}n)$ time for any of these problems assuming the Strong Exponential Time Hypothesis. At the same time, we show that approximation and compression together can be surprisingly effective. We develop an $\tilde{O}(N^{k/2}n^{k/2})$-time FPTAS for the median edit distance of $k$ sequences. In comparison, no $O(N^{k-\Omega(1)})$-time PTAS is known for the median edit distance problem in the uncompressed setting. For two strings, we get an $\tilde{O}(N^{2/3}n^{4/3})$-time FPTAS for both edit distance and LCS. In contrast, for uncompressed strings, there is not even a subquadratic algorithm for LCS that has less than a polynomial gap in the approximation factor. Building on the insight from our approximation algorithms, we also obtain results for many distance measures including the edit, Hamming, and shift distances.
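
For reference, the sketch below is the textbook dynamic program behind the uncompressed $O(N^2)$ edit-distance baseline mentioned above; the paper's algorithms, which operate directly on grammar-compressed inputs, are far more involved.

```python
# Minimal sketch: classic O(N^2)-time, O(N)-space edit distance of two strings.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,               # deletion
                         cur[j - 1] + 1,            # insertion
                         prev[j - 1] + (ca != cb))  # substitution / match
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```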

In this paper, we develop a Monte Carlo algorithm named Frozen Gaussian Sampling (FGS) for solving the semiclassical Schr\"odinger equation based on the frozen Gaussian approximation. Due to the highly oscillatory structure of the wave function, traditional mesh-based algorithms suffer from "the curse of dimensionality", and their computational burden becomes more severe as the semiclassical parameter $\varepsilon$ decreases. Frozen Gaussian Sampling outperforms existing algorithms in that it is mesh-free in computing the physical observables and is suitable for high-dimensional problems. In this work, we provide detailed procedures to implement the FGS for both Gaussian and WKB initial data, where the sampling strategies on phase space balance variance reduction against sampling convenience. Moreover, we rigorously prove that, to reach a given accuracy, the number of samples needed for the FGS is independent of the scaling parameter $\varepsilon$. Furthermore, the complexity of the FGS algorithm scales sublinearly with the microscopic degrees of freedom and, in particular, is insensitive to the dimension. The performance of the FGS is validated through several typical numerical experiments, including simulating scattering by a barrier potential, the formation of caustics, and the computation of high-dimensional physical observables without a mesh.

Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropy and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-rank constraints on the feasible set of couplings considered in OT problems, with no approximation of the cost or kernel matrices. This route was first explored by Forrow et al., 2018, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.
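
The sketch below illustrates one natural reading of the factorization described above: a coupling written as $P = Q\,\mathrm{diag}(1/g)\,R^\top$, where the sub-coupling factors $Q$ and $R$ share the common inner marginal $g$. The hand-built factors only demonstrate how marginals and the transport cost can be handled in factored form; the paper's alternating updates of the factors are not reproduced here.

```python
# Minimal sketch of a low-rank coupling built from two sub-coupling factors
# linked by a common inner marginal g, and of cost evaluation in factored form.
import numpy as np

rng = np.random.default_rng(4)
n, m, r = 400, 300, 5                      # support sizes and coupling rank

a = np.full(n, 1.0 / n)                    # source marginal
b = np.full(m, 1.0 / m)                    # target marginal
g = np.full(r, 1.0 / r)                    # common inner marginal

# Illustrative sub-couplings: Q has marginals (a, g), R has marginals (b, g).
# Here the mass of each marginal is simply split evenly across the r columns.
Q = np.outer(a, g)
R = np.outer(b, g)

C = rng.random((n, m))                     # an arbitrary cost matrix

# Transport cost <C, P> evaluated without materializing the n x m coupling:
# <C, Q diag(1/g) R^T> = sum_k (1/g_k) * Q[:, k]^T C R[:, k].
cost_factored = sum((Q[:, k] @ C @ R[:, k]) / g[k] for k in range(r))

P = Q @ np.diag(1.0 / g) @ R.T             # dense coupling, for checking only
print("marginals ok:", np.allclose(P.sum(1), a), np.allclose(P.sum(0), b))
print("cost match:  ", np.isclose(cost_factored, np.sum(C * P)))
```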

To avoid the curse of dimensionality frequently encountered in Big Data analysis, the field of linear and nonlinear dimension reduction techniques has developed rapidly in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which introduces distortions into the data, can be avoided altogether.
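
The sketch below conveys the flavor of such a projection in a heavily simplified form: for a query point near a noisy 1-dimensional curve in $\mathbb{R}^{10}$, it estimates a local coordinate system by weighted PCA and then performs a weighted (moving) least-squares polynomial fit of the normal components. The paper's construction and its smoothness and approximation-order guarantees go well beyond this illustration.

```python
# Minimal sketch: project a noisy point onto a local moving-least-squares
# approximation of a low-dimensional manifold (here a curve in R^10).
import numpy as np

rng = np.random.default_rng(5)
n_amb, d, N, h = 10, 1, 2000, 0.3           # ambient dim, manifold dim, #points, window

# Noisy samples of a curve (a 1-D manifold) embedded in R^10.
t = rng.uniform(-1, 1, N)
clean = np.zeros((N, n_amb))
clean[:, 0] = t
clean[:, 1] = np.sin(2 * t)
data = clean + 0.02 * rng.standard_normal((N, n_amb))

def project(q, data, d=1, h=0.3, degree=2):
    """Project query point q onto a local MLS approximation of the manifold."""
    w = np.exp(-np.sum((data - q) ** 2, axis=1) / h ** 2)      # locality weights
    mu = (w[:, None] * data).sum(0) / w.sum()                  # weighted local center
    Xc = data - mu
    # Weighted local PCA: first d right singular vectors span the approximate tangent.
    _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * Xc, full_matrices=False)
    T, Nrm = Vt[:d], Vt[d:]                                    # tangent / normal bases
    u = Xc @ T.T                                               # local tangent coordinates
    v = Xc @ Nrm.T                                             # normal components
    # Weighted polynomial fit of the normal components (d = 1, so a Vandermonde basis).
    B = np.vander(u[:, 0], degree + 1)
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * B, sw * v, rcond=None)
    uq = (q - mu) @ T.T                                        # query in tangent coordinates
    vq = np.vander(uq, degree + 1) @ coef                      # fitted normal components
    return mu + uq @ T + (vq @ Nrm).ravel()

q = data[0] + 0.05 * rng.standard_normal(n_amb)                # a noisy query point
proj_q = project(q, data, d=d, h=h)

# Compare against the underlying (noise-free) curve on a fine parameter grid.
ts = np.linspace(-1, 1, 4001)
curve = np.zeros((ts.size, n_amb))
curve[:, 0] = ts
curve[:, 1] = np.sin(2 * ts)
print("distance of query to curve:     ", np.linalg.norm(curve - q, axis=1).min())
print("distance of projection to curve:", np.linalg.norm(curve - proj_q, axis=1).min())
```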
