
We relate the condition numbers of computing three decompositions of symmetric tensors: the canonical polyadic decomposition, the Waring decomposition, and a Tucker-compressed Waring decomposition. Based on this relation, we can speed up the computation of these condition numbers by orders of magnitude.
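As a concrete reference point, the following is a minimal NumPy sketch of the direct (slow) computation that such a relation is meant to accelerate: it evaluates the condition number of a third-order canonical polyadic decomposition as the inverse least singular value of Terracini's matrix, assuming the Breiding-Vannieuwenhoven characterization; the factor data are hypothetical.

import numpy as np

def cpd_condition_number(factors, tol=1e-10):
    """Condition number of a third-order CPD at the decomposition given by
    factors = [(a_1, b_1, c_1), ..., (a_r, b_r, c_r)]."""
    blocks = []
    for a, b, c in factors:
        n1, n2, n3 = len(a), len(b), len(c)
        bc = np.kron(b, c).reshape(-1, 1)
        # Jacobian of the rank-1 map (a, b, c) |-> vec(a (x) b (x) c)
        J = np.hstack([
            np.kron(np.eye(n1), bc),
            np.kron(a.reshape(-1, 1), np.kron(np.eye(n2), c.reshape(-1, 1))),
            np.kron(a.reshape(-1, 1), np.kron(b.reshape(-1, 1), np.eye(n3))),
        ])
        # Orthonormal basis of the tangent space of the Segre variety
        U, s, _ = np.linalg.svd(J, full_matrices=False)
        blocks.append(U[:, s > tol * s[0]])
    T = np.hstack(blocks)                    # Terracini's matrix
    sigma = np.linalg.svd(T, compute_uv=False)
    return 1.0 / sigma[-1]                   # inverse least singular value

rng = np.random.default_rng(0)
factors = [(rng.standard_normal(3), rng.standard_normal(3),
            rng.standard_normal(3)) for _ in range(2)]
print(cpd_condition_number(factors))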

Related content

One approach to make progress on the symbolic determinant identity testing (SDIT) problem is to study the structure of singular matrix spaces. After the resolution of the non-commutative rank problem (Garg-Gurvits-Oliveira-Wigderson, Found. Comput. Math. 2020; Ivanyos-Qiao-Subrahmanyam, Comput. Complex. 2018), a natural next step is to understand singular matrix spaces whose non-commutative rank is full. At present, examples of such matrix spaces are mostly sporadic, so it is desirable to discover them in a more systematic way. In this paper, we take a step in this direction by studying the family of matrix spaces that are closed under the commutator operation, that is, matrix Lie algebras. On the one hand, we demonstrate that matrix Lie algebras over the complex number field give rise to singular matrix spaces with full non-commutative rank. On the other hand, we show that SDIT for such spaces can be decided in deterministic polynomial time. Moreover, we characterize the matrix Lie algebras that yield matrix spaces possessing the singularity certificates studied by Lovász (B. Braz. Math. Soc., 1989) and by Raz and Wigderson (Building Bridges II, 2019).
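The defining closure condition is easy to test numerically. Below is a small NumPy sanity check, not the paper's SDIT algorithm, that verifies whether a given basis spans a matrix Lie algebra, i.e., is closed under the commutator; sl(2) serves as a stock example.

import numpy as np

def is_matrix_lie_algebra(basis, tol=1e-9):
    """Check that span{B_1, ..., B_m} is closed under [X, Y] = XY - YX."""
    S = np.stack([B.ravel() for B in basis], axis=1)   # n^2 x m
    for i, Bi in enumerate(basis):
        for Bj in basis[i + 1:]:
            comm = (Bi @ Bj - Bj @ Bi).ravel()
            coeffs = np.linalg.lstsq(S, comm, rcond=None)[0]
            if np.linalg.norm(S @ coeffs - comm) > tol:
                return False
    return True

# sl(2): traceless 2x2 matrices, a classical matrix Lie algebra
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
print(is_matrix_lie_algebra([e, f, h]))   # True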

The Steiner $k$-eccentricity of a vertex $v$ of a graph $G$ is the maximum Steiner distance over all $k$-subsets of $V(G)$ that contain $v$. A linear-time algorithm for calculating the Steiner $k$-eccentricity of a vertex of a block graph is presented. For general graphs, an $O(n(G)^{\nu(G)+1}(n(G) + m(G) + k))$ algorithm is designed, where $\nu(G)$ is the cyclomatic number, $n(G)$ the order, and $m(G)$ the size of $G$. A linear-time algorithm for computing the Steiner $3$-eccentricities of all vertices of a tree is also presented, which improves on the quadratic algorithm from [Discrete Appl.\ Math.\ 304 (2021) 181--195].
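For intuition, here is a brute-force sketch (exponential in $k$, unlike the linear-time algorithms above) computing the Steiner $k$-eccentricity of a vertex of a tree. It uses the fact that an edge belongs to the minimal subtree spanning a set $S$ exactly when both components of its removal contain a vertex of $S$; the graph and parameters are illustrative.

import itertools
import networkx as nx

def steiner_distance_in_tree(T, S):
    """Number of edges of the minimal subtree of tree T spanning S."""
    S = set(S)
    count = 0
    for u, v in list(T.edges()):
        T.remove_edge(u, v)
        side = nx.node_connected_component(T, u)
        T.add_edge(u, v)
        if (S & side) and (S - side):   # both sides of the cut meet S
            count += 1
    return count

def steiner_k_eccentricity(T, v, k):
    others = [w for w in T.nodes() if w != v]
    return max(steiner_distance_in_tree(T, (v,) + combo)
               for combo in itertools.combinations(others, k - 1))

T = nx.path_graph(6)                    # a path is a tree
print(steiner_k_eccentricity(T, 2, 3))  # {2, 0, 5} spans 5 edges -> 5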

Three algorithms are proposed to evaluate volume potentials that arise in boundary element methods for elliptic PDEs. The approach is to apply a modified fast multipole method to a boundary-concentrated volume mesh. If $h$ is the mesh width of the boundary, then the volume is discretized using nearly $O(h^{-2})$ degrees of freedom, and the algorithm computes potentials in nearly $O(h^{-2})$ complexity, where ``nearly'' means up to logarithmic factors in $h$. Thus the complexity of volume potential calculations is of the same asymptotic order as that of boundary potentials. For sources and potentials with sufficient regularity, the parameters of the algorithm can be chosen such that the error of the approximated potential converges at any specified rate $O(h^p)$. The accuracy and effectiveness of the proposed algorithms are demonstrated for potentials of the Poisson equation in three dimensions.
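As a baseline for what such algorithms accelerate, the sketch below evaluates the 3-D Newton (Poisson) volume potential by direct summation in $O(NM)$ work; the points and densities are synthetic.

import numpy as np

def newton_potential(targets, sources, charges):
    """Direct evaluation of u(x) = sum_j q_j / (4*pi*|x - y_j|).
    FMM-type algorithms reduce this quadratic cost to near-linear work."""
    diff = targets[:, None, :] - sources[None, :, :]   # (N, M, 3)
    r = np.linalg.norm(diff, axis=-1)
    return (charges / (4.0 * np.pi * r)).sum(axis=1)

rng = np.random.default_rng(1)
y = rng.random((500, 3))            # volume source points
q = rng.standard_normal(500)        # densities (quadrature weights folded in)
x = rng.random((100, 3)) + 2.0      # well-separated evaluation points
print(newton_potential(x, y, q)[:3])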

One of the most pressing problems in modern analysis is the study of the growth rate of the norms of all possible matrix products $\|A_{i_{n}}\cdots A_{i_{0}}\|$ with factors from a set of matrices $\mathscr{A}$. So far, only for a relatively small number of classes of matrices $\mathscr{A}$ has it been possible to rigorously describe the sequences of matrices $\{A_{i_{n}}\}$ that guarantee the maximal growth rate of the corresponding norms. Moreover, in almost all theoretically studied cases, the index sequences $\{i_{n}\}$ of matrices maximizing the norms of the corresponding matrix products turned out to be periodic or so-called Sturmian sequences, which entails a whole set of ``good'' properties of the sequences $\{A_{i_{n}}\}$, in particular the existence of a limiting frequency of occurrence of each matrix factor $A_{i}\in\mathscr{A}$ in them. The paper exhibits a class of $2\times 2$ matrices, consisting of two matrices similar to rotations of the plane, for which the sequence $\{A_{i_{n}}\}$ maximizing the growth rate of the norms $\|A_{i_{n}}\cdots A_{i_{0}}\|$ is not Sturmian. All considerations are based on numerical modeling and cannot be regarded as mathematically rigorous; rather, they should be read as a set of questions for further comprehensive theoretical analysis.
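The kind of numerical experiment described here can be reproduced in miniature: enumerate all index sequences of a given length for two matrices similar to plane rotations and record the norm-maximizing sequence. The matrices and angles below are hypothetical stand-ins, not the class identified in the paper.

import itertools
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

S = np.array([[1.0, 0.8], [0.0, 1.0]])            # hypothetical change of basis
A = [rot(0.7), S @ rot(2.1) @ np.linalg.inv(S)]   # two matrices similar to rotations

for n in range(1, 13):
    best, best_seq = -np.inf, None
    for seq in itertools.product((0, 1), repeat=n):
        P = np.eye(2)
        for i in seq:
            P = A[i] @ P                          # builds A_{i_n} ... A_{i_0}
        nrm = np.linalg.norm(P, 2)                # spectral norm
        if nrm > best:
            best, best_seq = nrm, seq
    print(n, best_seq, round(best, 4))            # inspect argmax for Sturmian structure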

The Sinc-Nystr\"{o}m method is a high-order numerical method based on Sinc basis functions for discretizing evolutionary differential equations in time. However, the method requires solving all time steps in one shot (i.e., all-at-once), which results in a large-scale nonsymmetric dense system that is expensive to handle. In this paper, we propose and analyze a preconditioner for the dense systems arising from both parabolic and hyperbolic PDEs. The proposed preconditioner is a low-rank perturbation of the original matrix and has two advantages. First, we show that the eigenvalues of the preconditioned system are highly clustered, with uniform bounds that are independent of the mesh parameters. Second, the preconditioner can be applied in parallel across all the Sinc time points via a block diagonalization procedure. This parallel potential rests on the fact that the eigenvector matrix of the diagonalization is well conditioned. In particular, we show that the condition number of the eigenvector matrix grows only mildly as the number of Sinc time points increases, so the roundoff error arising from the diagonalization procedure is controllable. The effectiveness of the proposed parallel-in-time (PinT) preconditioner is verified by the mesh-independent convergence rates of preconditioned GMRES observed in the reported numerical examples.
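The diagonalization idea can be illustrated on a generic all-at-once system; the sketch below is a toy setup, not the paper's Sinc discretization. The time matrix is factored as K = V diag(d) V^{-1}, so applying the preconditioner reduces to independent spatial solves, one per time point (the PinT step), and the accuracy of that step is governed by cond(V). For simplicity the preconditioner here equals the system matrix, so GMRES converges immediately; the paper instead uses a low-rank perturbation whose eigenvector matrix is provably well conditioned.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

nt, nx = 8, 32
rng = np.random.default_rng(2)
K = np.tril(rng.standard_normal((nt, nt))) + 2.0 * np.eye(nt)   # toy time matrix
L = ((np.diag(2.0 * np.ones(nx)) - np.diag(np.ones(nx - 1), 1)
      - np.diag(np.ones(nx - 1), -1)) * (nx + 1) ** 2)           # 1-D Laplacian
A_all = np.kron(K, np.eye(nx)) + np.kron(np.eye(nt), L)          # all-at-once matrix

d, V = np.linalg.eig(K)          # K = V diag(d) V^{-1}
Vinv = np.linalg.inv(V)
print("cond(V) =", np.linalg.cond(V))

def apply_prec(b):
    B = Vinv @ b.reshape(nt, nx)                       # (V^{-1} (x) I) b
    X = np.stack([np.linalg.solve(di * np.eye(nx) + L, Bi)
                  for di, Bi in zip(d, B)])            # parallelizable over time points
    return (V @ X).ravel().real                        # (V (x) I) back-transform

P = LinearOperator(A_all.shape, matvec=apply_prec)
f = rng.standard_normal(nt * nx)
u, info = gmres(A_all, f, M=P)
print("GMRES converged:", info == 0)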

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied in the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marchenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of the order of $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
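The Marchenko-Pastur approximation is easy to see by simulation. The sketch below (hypothetical parameters) forms the smoothed periodogram estimate of the spectral coherency matrix of an i.i.d. complex Gaussian series at one frequency and compares its eigenvalue range with the Marchenko-Pastur support for the ratio of dimension to smoothing span.

import numpy as np

rng = np.random.default_rng(3)
M_dim, N, B = 100, 20000, 1000     # dimension, samples, smoothing span

# Complex Gaussian series with independent (here white) components
y = (rng.standard_normal((M_dim, N))
     + 1j * rng.standard_normal((M_dim, N))) / np.sqrt(2)

Y = np.fft.fft(y, axis=1) / np.sqrt(N)          # normalized DFT
k0 = N // 4                                     # frequency under study
idx = k0 + np.arange(-(B // 2), B // 2 + 1)     # smoothing window
S_hat = Y[:, idx] @ Y[:, idx].conj().T / len(idx)   # smoothed periodogram

# Coherency: normalize by the estimated individual spectral densities
D = np.sqrt(np.real(np.diag(S_hat)))
C_hat = S_hat / np.outer(D, D)

eig = np.linalg.eigvalsh(C_hat)
c = M_dim / len(idx)
print("eigenvalue range:", eig.min(), eig.max())
print("Marchenko-Pastur support:", (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2)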

This paper studies the classical problem of finding all $k$ nearest neighbors to points of a query set $Q$ in another reference set $R$ within any metric space. The well-known work by Beygelzimer, Kakade, and Langford in 2006 introduced cover trees and claimed to guarantee near-linear time complexity in the size $|R|$ of the reference set for $k=1$. Our previous work defined compressed cover trees and corrected the key arguments for $k\geq 1$ and for previously unknown challenging data cases. In 2009, Ram, Lee, March, and Gray attempted to improve the time complexity by using pairs of cover trees on the query and reference sets. In 2015, Curtin and the above co-authors used extra parameters to finally prove a similar complexity for $k = 1$. Our work fills all previous gaps and substantially improves the neighbor search based on pairs of new compressed cover trees. The novel imbalance parameter of paired trees allows us to prove a better time complexity for any number of neighbors $k\geq 1$.
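For contrast with the tree-based approach, here is the exact linear-scan baseline: it works in any metric space but costs $O(|Q||R|)$ distance evaluations, which is precisely what cover-tree methods aim to beat. The metric and point sets are illustrative.

import heapq

def knn_bruteforce(query, reference, k, dist):
    """Exact k nearest neighbors of each query point by linear scan."""
    out = []
    for q in query:
        out.append(heapq.nsmallest(k, reference, key=lambda r: dist(q, r)))
    return out

# Any metric works; here the l1 distance on tuples (a hypothetical example)
dist = lambda p, q: sum(abs(a - b) for a, b in zip(p, q))
R = [(0, 0), (1, 0), (0, 2), (3, 3), (5, 1)]
Q = [(0, 1), (4, 2)]
print(knn_bruteforce(Q, R, k=2, dist=dist))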

We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks. With this toolset, we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initializations can be matched with a surprisingly high degree of accuracy, even with a single affine stitching layer. We choose the stitching layer from several possible classes of linear transformations and investigate their performance and properties. The task of matching representations is closely related to notions of similarity. Using this toolset, we also provide a novel viewpoint on the current line of research regarding similarity indices of neural network representations: the perspective of performance on a task.
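A minimal PyTorch sketch of an affine stitching layer, with hypothetical channel sizes: a 1x1 convolution implements a per-location linear map plus bias between the activation spaces of two frozen networks, and only this layer would be trained on the task loss.

import torch
import torch.nn as nn

class StitchingLayer(nn.Module):
    """Affine map between the activation spaces of two frozen networks."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.map = nn.Conv2d(c_in, c_out, kernel_size=1, bias=True)

    def forward(self, x):
        return self.map(x)

# Hypothetical usage: a frozen prefix of net A feeds the stitching layer,
# whose output is consumed by a frozen suffix of net B.
stitch = StitchingLayer(c_in=64, c_out=64)
x = torch.randn(8, 64, 16, 16)    # activations from net A's chosen layer
y = stitch(x)                     # matched into net B's activation space
print(y.shape)                    # torch.Size([8, 64, 16, 16])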

The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computational power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning a centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
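The core data-parallel pattern surveyed in such articles can be stated in a few lines. The sketch below simulates synchronous data-parallel SGD serially: each "worker" computes a gradient on its shard and the results are averaged, standing in for an all-reduce or parameter-server step; the least-squares task is a toy example.

import numpy as np

def data_parallel_sgd_step(w, shards, grad_fn, lr=0.1):
    """One synchronous step: per-shard gradients, then an averaging reduction."""
    grads = [grad_fn(w, shard) for shard in shards]   # would run on separate workers
    return w - lr * np.mean(grads, axis=0)            # all-reduce + update

rng = np.random.default_rng(4)
X, y = rng.standard_normal((400, 5)), rng.standard_normal(400)
shards = [(X[i::4], y[i::4]) for i in range(4)]       # 4 simulated workers
grad_fn = lambda w, s: 2 * s[0].T @ (s[0] @ w - s[1]) / len(s[1])
w = np.zeros(5)
for _ in range(100):
    w = data_parallel_sgd_step(w, shards, grad_fn)
print(w)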

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query-complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, for which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded-entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
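For orientation, here is the simpler submatrix-reading strategy that the non-adaptive algorithm above improves on: read a random $q \times q$ submatrix with $q$ on the order of $d/\epsilon$ and reject if its rank exceeds $d$. This is a real-valued, numerical-rank simplification for illustration, not the paper's algorithm over general fields.

import numpy as np

def rank_test_submatrix(A, d, eps, rng, trials=3):
    """Baseline tester: sample a random q x q submatrix and reject iff its
    rank exceeds d (roughly the O(d^2/eps^2)-query strategy)."""
    n = A.shape[0]
    q = min(n, int(np.ceil(2 * d / eps)))
    for _ in range(trials):
        rows = rng.choice(n, size=q, replace=False)
        cols = rng.choice(n, size=q, replace=False)
        if np.linalg.matrix_rank(A[np.ix_(rows, cols)]) > d:
            return False          # certificate: rank(A) > d
    return True                   # consistent with rank <= d (may err)

rng = np.random.default_rng(5)
B = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 200))  # rank 3
print(rank_test_submatrix(B, d=3, eps=0.1, rng=rng))   # True
print(rank_test_submatrix(B, d=2, eps=0.1, rng=rng))   # likely False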
