
Low-rank approximation of images via singular value decomposition (SVD) is well received in the era of big data. However, the SVD applies only to order-two data, i.e., matrices. To handle higher order data such as multispectral images and videos with the SVD, one must flatten the input into a matrix or break it into a series of order-two slices. Higher order singular value decomposition (HOSVD) extends the SVD and can approximate higher order data by sums of a few rank-one components. We consider the problem of generalizing HOSVD over a finite dimensional commutative algebra. This algebra, referred to as a t-algebra, generalizes the field of complex numbers. The elements of the algebra, called t-scalars, are fixed-size arrays of complex numbers. One can generalize matrices and tensors over t-scalars and then extend many canonical matrix and tensor algorithms, including HOSVD, to obtain higher-performance versions. The generalization of HOSVD is called THOSVD. Its performance in approximating multi-way data can be further improved by an alternating algorithm. THOSVD also unifies a wide range of principal component analysis algorithms. To exploit the potential of t-scalar-based generalized algorithms for approximating images, we use a pixel neighborhood strategy to convert each pixel to a "deeper-order" t-scalar. Experiments on publicly available images show that the generalized algorithm over t-scalars, namely THOSVD, compares favorably with its canonical counterparts.
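For reference, here is a minimal NumPy sketch of the canonical truncated HOSVD for an order-three tensor, the baseline that THOSVD generalizes; the function names are illustrative, and the t-scalar machinery of the paper is not reproduced.

```python
# Minimal sketch of canonical truncated HOSVD (not the t-scalar THOSVD).
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def hosvd_truncate(X, ranks):
    """Rank-(r1, ..., rN) HOSVD approximation of an order-N tensor X."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])              # leading left singular vectors
    core = X
    for mode, U in enumerate(factors):        # core = X x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    approx = core
    for mode, U in enumerate(factors):        # reassemble the approximation
        approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
    return approx

X = np.random.rand(20, 20, 3)                 # e.g. a small RGB image stack
X2 = hosvd_truncate(X, (5, 5, 2))
print(np.linalg.norm(X - X2) / np.linalg.norm(X))   # relative error
```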

Related content

Singular value decomposition (SVD) is an important matrix factorization in linear algebra; it generalizes the eigendecomposition to arbitrary matrices. It has important applications in signal processing, statistics, and other fields.
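A minimal illustration of the Eckart--Young property behind low-rank image approximation: truncating the SVD to the $k$ largest singular triplets gives the best rank-$k$ approximation, with spectral-norm error equal to the $(k+1)$-th singular value.

```python
# Best rank-k approximation of a matrix via truncated SVD (Eckart--Young).
import numpy as np

def svd_rank_k(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.rand(64, 48)        # stand-in for a grayscale image
A5 = svd_rank_k(A, 5)
print(np.linalg.norm(A - A5, 2))  # equals the 6th singular value of A
```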

We consider a singularly perturbed convection-diffusion problem that additionally contains a shift term. We establish a solution decomposition using asymptotic expansions, together with a stability result. Based upon these, we provide a numerical analysis of a high-order finite element method on layer-adapted meshes. We also apply a new idea of using a coarser mesh in places where weak layers appear. Numerical experiments confirm our theoretical results.
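For concreteness, the sketch below builds a piecewise-uniform Shishkin mesh, one standard family of layer-adapted meshes; the transition-point formula and the parameter `sigma` are common textbook choices and not necessarily those used in the paper.

```python
# Piecewise-uniform Shishkin mesh on [0, 1] for a problem with an
# exponential boundary layer at x = 1 (a standard construction; the
# paper's exact mesh may differ).
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """N intervals (N even); sigma typically matches the method's order."""
    tau = min(0.5, sigma * eps * np.log(N))        # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

print(shishkin_mesh(8, 1e-3))   # half the intervals crowd into the layer
```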

This paper proposes two convergent adaptive mesh-refining algorithms for the hybrid high-order method in convex minimization problems with two-sided p-growth. Examples include the p-Laplacian, an optimal design problem in topology optimization, and the convexified double-well problem. The hybrid high-order method utilizes a gradient reconstruction in the space of piecewise Raviart-Thomas finite element functions without stabilization on triangulations into simplices, or in the space of piecewise polynomials with stabilization on polytopal meshes. The main results imply the convergence of the energy and, under further convexity properties, of the approximations of the primal and dual variables, respectively. Numerical experiments illustrate an efficient approximation of singular minimizers and improved convergence rates for higher polynomial degrees. Computer simulations provide striking numerical evidence that the adaptive HHO algorithm can overcome the Lavrentiev gap phenomenon, even with empirically higher convergence rates.

In recent years, change point detection for high dimensional data has become increasingly important in many scientific fields. Most of the literature develops separate methods designed for specific models (e.g., the mean shift model, the vector auto-regressive model, the graphical model). In this paper, we provide a unified framework for structural break detection that is suitable for a large class of models. Moreover, the proposed algorithm automatically achieves consistent parameter estimates during the change point detection process, without the need for refitting the model. Specifically, we introduce a three-step procedure. The first step uses a block segmentation strategy combined with a fused-lasso-based estimation criterion, which leads to significant computational gains without compromising the statistical accuracy in identifying the number and locations of the structural breaks. This procedure is further coupled with hard-thresholding and exhaustive-search steps to consistently estimate the number and locations of the break points. Strong guarantees are proved on both the number of estimated change points and the rates of convergence of their locations. Consistent estimates of the model parameters are also provided. Numerical studies support the theory and validate the method's competitive performance for a wide range of models. The developed algorithm is implemented in the R package LinearDetect.
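As a toy illustration of the exhaustive-search refinement step, the sketch below locates a single break in the simplest univariate mean-shift model by minimizing the two-segment squared-error cost; the paper's actual pipeline (block segmentation, fused lasso, and hard thresholding, as implemented in LinearDetect) handles far more general models.

```python
# Exhaustive search for one change point in a univariate mean-shift model
# (a hedged, simplified stand-in for one step of the paper's procedure).
import numpy as np

def best_split(y):
    """Return the split minimizing the two-segment squared-error cost."""
    n = len(y)
    costs = []
    for t in range(1, n):
        left, right = y[:t], y[t:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        costs.append(cost)
    return 1 + int(np.argmin(costs))

y = np.concatenate([np.random.randn(100), 3 + np.random.randn(100)])
print(best_split(y))   # close to the true break at t = 100
```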

In this paper, a higher-order time-discretization scheme is proposed whose iterates approximate the solution of the stochastic semilinear wave equation driven by multiplicative noise with general drift and diffusion. We employ a variational method for its error analysis and prove an improved convergence order of 3/2 for the approximations of the solution. The core of the analysis consists of Hölder continuity in time and moment bounds for the solutions of the continuous and discrete problems. Computational experiments are also presented.

Since proposed in [X. Zhang and C.-W. Shu, J. Comput. Phys., 229: 3091--3120, 2010], the Zhang--Shu framework has attracted extensive attention and motivated many bound-preserving (BP) high-order discontinuous Galerkin and finite volume schemes for various hyperbolic equations. A key ingredient in the framework is the decomposition of the cell averages of the numerical solution into a convex combination of the solution values at certain quadrature points, which helps to rewrite high-order schemes as convex combinations of formally first-order schemes. The classic convex decomposition originally proposed by Zhang and Shu has been widely used over the past decade. It was verified, only for the 1D quadratic and cubic polynomial spaces, that the classic decomposition is optimal in the sense of achieving the mildest BP CFL condition. Yet it remained unclear whether the classic decomposition is optimal in multiple dimensions. In this paper, we find that the classic multidimensional decomposition based on the tensor product of Gauss--Lobatto and Gauss quadratures is generally not optimal, and we discover a novel alternative decomposition for the 2D and 3D polynomial spaces of total degree up to 2 and 3, respectively, on Cartesian meshes. Our new decomposition allows a larger BP time step size than the classic one; moreover, it is rigorously proved to attain the mildest BP CFL condition, yet requires far fewer nodes. The discovery of such an optimal convex decomposition is highly nontrivial yet meaningful, as it may improve high-order BP schemes for a large class of hyperbolic or convection-dominated equations at the cost of only a slight, local modification to the implementation code. Several numerical examples further validate the efficiency advantages of our optimal decomposition over the classic one.
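The 1D identity at the heart of the framework can be checked numerically: for a cubic polynomial, the cell average is a convex combination of the point values at the 3-point Gauss--Lobatto nodes (positive weights summing to one), so bounds enforced at those nodes transfer to the cell average. This sketch illustrates the classic decomposition, not the paper's new multidimensional one.

```python
# Check: the average of a cubic over [-1, 1] is a convex combination of
# its values at the 3-point Gauss--Lobatto nodes (exact up to degree 3).
import numpy as np

nodes = np.array([-1.0, 0.0, 1.0])      # Gauss--Lobatto points on [-1, 1]
weights = np.array([1/6, 2/3, 1/6])     # normalized: positive, sum to 1

p = np.polynomial.Polynomial(np.random.rand(4))        # random cubic
cell_avg = (p.integ()(1.0) - p.integ()(-1.0)) / 2.0    # average over [-1, 1]
print(np.isclose(cell_avg, weights @ p(nodes)))        # True: exact for cubics
```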

In this work a general approach to compute a compressed representation of the exponential $\exp(h)$ of a high-dimensional function $h$ is presented. Such exponential functions play an important role in several problems in Uncertainty Quantification, e.g., the approximation of log-normal random fields or the evaluation of Bayesian posterior measures. Usually, these high-dimensional objects are numerically intractable and can only be accessed pointwise in sampling methods. In contrast, the proposed method constructs a functional representation of the exponential by exploiting its nature as a solution of an ordinary differential equation. The application of a Petrov--Galerkin scheme to this equation provides a tensor train representation of the solution, for which we derive an efficient and reliable a posteriori error estimator. Numerical experiments with a log-normal random field and a Bayesian likelihood illustrate the performance of the approach in comparison to other recent low-rank representations for the respective applications. Although the present work considers only a specific differential equation, the presented method can be applied in a more general setting. We show that the composition of a generic holonomic function and a high-dimensional function corresponds to a differential equation that can be used in our method. Moreover, the differential equation can be modified to adapt the norm in the a posteriori error estimates to the problem at hand.
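The ODE characterization that the method exploits can be illustrated pointwise: $u(t) = \exp(t\,h)$ solves $u' = h\,u$ with $u(0) = 1$, so integrating to $t = 1$ recovers $\exp(h)$. The sketch below does this with a plain RK4 step on a vector of samples; the paper instead solves the equation in a compressed tensor-train format via a Petrov--Galerkin scheme with an a posteriori error estimator.

```python
# Pointwise illustration of computing exp(h) as the t = 1 solution of the
# ODE u' = h * u, u(0) = 1, using classical RK4 (not the paper's
# tensor-train Petrov--Galerkin solver).
import numpy as np

h = np.random.randn(1000)            # samples of a "high-dimensional" h
u = np.ones_like(h)
n_steps = 100
dt = 1.0 / n_steps
for _ in range(n_steps):             # RK4 on u' = h * u
    k1 = h * u
    k2 = h * (u + 0.5 * dt * k1)
    k3 = h * (u + 0.5 * dt * k2)
    k4 = h * (u + dt * k3)
    u += dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
print(np.max(np.abs(u - np.exp(h))))   # small RK4 error
```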

Existing randomized algorithms need an initial estimate of the tubal rank to compute a tensor singular value decomposition. This paper proposes a new randomized fixed-precision algorithm that, for a given third-order tensor and a prescribed approximation error bound, automatically finds the tubal rank and a corresponding low-tubal-rank approximation. The algorithm is based on the random projection technique and is equipped with the power iteration method for better accuracy. We conduct simulations on synthetic and real-world datasets to show the efficiency and performance of the proposed algorithm.
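A hedged matrix analogue conveys the fixed-precision idea: grow a random sketch blockwise, with power iteration, until the approximation error drops below the prescribed tolerance, thereby discovering the rank. The paper applies this kind of procedure to third-order tensors under the t-product to find the tubal rank; the block size and the (exact, for clarity) error test below are illustrative choices.

```python
# Matrix analogue of a randomized fixed-precision low-rank approximation:
# append random sketch columns until the error meets the tolerance.
import numpy as np

def fixed_precision_range(A, tol, block=5, power=1):
    m, n = A.shape
    Q = np.zeros((m, 0))
    while True:
        Y = A @ np.random.randn(n, block)        # random projection
        for _ in range(power):                   # power iteration for accuracy
            Y = A @ (A.T @ Y)
        Y -= Q @ (Q.T @ Y)                       # orthogonalize against basis
        Qnew, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qnew])
        err = np.linalg.norm(A - Q @ (Q.T @ A))  # exact error, for clarity
        if err <= tol or Q.shape[1] >= min(m, n):
            return Q, err

A = np.random.randn(100, 15) @ np.random.randn(15, 60)   # rank 15
Q, err = fixed_precision_range(A, tol=1e-8)
print(Q.shape[1], err)    # discovered rank (a multiple of `block`) and error
```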

In this article, we study approximation properties of the variation spaces corresponding to shallow neural networks with a variety of activation functions. We introduce two main tools for estimating the metric entropy, approximation rates, and $n$-widths of these spaces. First, we introduce the notion of a smoothly parameterized dictionary and give upper bounds on the non-linear approximation rates, metric entropy and $n$-widths of their absolute convex hull. The upper bounds depend upon the order of smoothness of the parameterization. This result is applied to dictionaries of ridge functions corresponding to shallow neural networks, and they improve upon existing results in many cases. Next, we provide a method for lower bounding the metric entropy and $n$-widths of variation spaces which contain certain classes of ridge functions. This result gives sharp lower bounds on the $L^2$-approximation rates, metric entropy, and $n$-widths for variation spaces corresponding to neural networks with a range of important activation functions, including ReLU$^k$ activation functions and sigmoidal activation functions with bounded variation.

We propose iterative projection methods for solving square or rectangular consistent linear systems $Ax = b$. Projection methods use sketching matrices (possibly randomized) to generate a sequence of small projected subproblems, but even the smaller systems can be costly. We develop a process that appends one column each iteration to the sketching matrix and that converges in a finite number of iterations independent of whether the sketch is random or deterministic. In general, our process generates orthogonal updates to the approximate solution $x_k$. By choosing the sketch to be the set of all previous residuals, we obtain a simple recursive update and convergence in at most rank($A$) iterations (in exact arithmetic). By choosing a sequence of identity columns for the sketch, we develop a generalization of the Kaczmarz method. In experiments on large sparse systems, our method (PLSS) with residual sketches is competitive with LSQR, and our method with residual and identity sketches compares favorably to state-of-the-art randomized methods.
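For context, the sketch below is the classical randomized Kaczmarz method, the special case that the identity-column sketches in the paper generalize: each iteration projects the iterate onto the hyperplane defined by one row of $Ax = b$. The squared-row-norm sampling rule shown is the standard randomized variant, not necessarily the paper's deterministic column sequence.

```python
# Classical randomized Kaczmarz for a consistent system A x = b.
import numpy as np

def kaczmarz(A, b, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    probs = (A**2).sum(axis=1) / (A**2).sum()   # squared-row-norm sampling
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a       # project onto row hyperplane
    return x

A = np.random.randn(200, 50)
x_true = np.random.randn(50)
b = A @ x_true                                   # consistent system
print(np.linalg.norm(kaczmarz(A, b) - x_true))   # small residual error
```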

We initiate a study of the streaming complexity of constraint satisfaction problems (CSPs) when the constraints arrive in a random order. We show that there exists a CSP, namely $\textsf{Max-DICUT}$, for which random ordering makes a provable difference. Whereas a $4/9 \approx 0.445$ approximation of $\textsf{DICUT}$ requires $\Omega(\sqrt{n})$ space under adversarial ordering, we show that with randomly ordered constraints there exists a $0.48$-approximation algorithm that needs only $O(\log n)$ space. We also give new algorithms for $\textsf{Max-DICUT}$ in variants of the adversarial ordering setting. Specifically, we give a two-pass $O(\log n)$ space $0.48$-approximation algorithm for general graphs and a single-pass $\tilde{O}(\sqrt{n})$ space $0.48$-approximation algorithm for bounded-degree graphs. On the negative side, we prove that CSPs whose constraints' satisfying assignments support a one-wise independent distribution require $\Omega(\sqrt{n})$ space for any non-trivial approximation, even when the constraints are randomly ordered. This was previously known only for adversarially ordered constraints. Extending the results to randomly ordered constraints requires switching the hard instances from a union of random matchings to simple Erd\H{o}s--R\'enyi random (hyper)graphs and extending tools that can perform Fourier analysis on such instances. The only CSP previously considered with random ordering is $\textsf{Max-CUT}$, where the ordering is not known to change the approximability. Specifically, it is known to be as hard to approximate with random ordering as with adversarial ordering for $o(\sqrt{n})$-space algorithms. Our results show a richer variety of possibilities and motivate further study of CSPs with randomly ordered constraints.
