
We revisit the notion of root polynomials, thoroughly studied in [F. Dopico and V. Noferini, Root polynomials and their role in the theory of matrix polynomials, Linear Algebra Appl. 584:37--78, 2020] for general polynomial matrices, and show how they can be computed efficiently in the case of matrix pencils. The staircase algorithm implicitly computes so-called zero directions, as defined in [P. Van Dooren, Computation of zero directions of transfer functions, Proceedings IEEE 32nd CDC, 3132--3137, 1993]. However, zero directions generally do not provide the correct information on partial multiplicities and minimal indices. These indices are instead provided by two special cases of zero directions, namely, root polynomials and the vectors of a minimal basis of the pencil. We show how to efficiently extract both a minimal basis and a maximal set of root polynomials from the block-triangular pencil that the staircase algorithm computes. Moreover, we argue that the accuracy of the computed root polynomials can be improved by means of iterative refinement.
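
The refinement step mentioned at the end is the classical mixed-precision scheme for linear systems. As a generic illustration of the mechanism (a minimal sketch, not the paper's pencil-specific procedure; all names here are illustrative), one can solve in low precision and correct with residuals computed in higher precision:

```python
import numpy as np

def iterative_refinement(A, b, iters=3):
    """Classic iterative refinement for A x = b: solve in float32,
    then repeatedly correct with residuals computed in float64."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision correction
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
x = iterative_refinement(A, A @ x_true)
print(np.linalg.norm(x - x_true))   # error shrinks toward double-precision level
```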

Related content

Graph sampling theory extends traditional sampling theory to graphs with topological structure. As a key part of graph sampling theory, subset selection chooses nodes on a graph as samples from which to reconstruct the original signal. However, due to the eigendecomposition of the graph Laplacian matrix, existing subset selection methods usually require high-complexity calculations. In this paper, with the aim of enhancing the computational efficiency of subset selection on graphs, we propose a novel objective function based on optimal experimental design. Theoretical analysis shows that this function enjoys an $\alpha$-supermodular property with a provable lower bound on $\alpha$. The objective function, together with an approximation of the low-pass filter on graphs, yields a fast subset selection method that requires no eigendecomposition. Experimental results show that the proposed method exhibits high computational efficiency while remaining competitive with state-of-the-art methods, especially when the sampling rate is low.
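
The paper's specific objective and filter approximation are its own; purely to make the eigendecomposition-free pipeline tangible, here is a hedged sketch with illustrative stand-ins: a polynomial low-pass filter built from powers of a smoothing operator, followed by greedy node selection under a D-optimal experimental-design score with rank-one updates:

```python
import numpy as np

def lowpass_poly(L, order=8):
    """Polynomial low-pass filter: powers of I - L/lmax, where lmax is the
    cheap Gershgorin bound 2*max_degree, so no eigendecomposition is needed."""
    n = L.shape[0]
    lmax = 2.0 * L.diagonal().max()
    return np.linalg.matrix_power(np.eye(n) - L / lmax, order)

def greedy_doptimal(B, k):
    """Greedily pick k rows of B maximizing log det(I + B_S^T B_S),
    via the matrix determinant lemma and Sherman-Morrison updates."""
    chosen, Minv = [], np.eye(B.shape[1])
    for _ in range(k):
        gains = np.einsum('ij,jk,ik->i', B, Minv, B)  # b_i^T Minv b_i for all i
        gains[chosen] = -np.inf                       # never re-pick a node
        i = int(np.argmax(gains))
        chosen.append(i)
        Mb = Minv @ B[i]
        Minv -= np.outer(Mb, Mb) / (1.0 + B[i] @ Mb)
    return chosen

# Path graph on 30 nodes: Laplacian, filtered operator, pick 5 sample nodes.
n = 30
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(1)) - A
print(greedy_doptimal(lowpass_poly(L), k=5))
```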

Let $X$ be a random variable with unknown mean and finite variance. We present a new estimator of the mean of $X$ that is robust to the possible presence of outliers in the sample, provides tight sub-Gaussian deviation guarantees without any additional assumptions on the shape or tails of the distribution, and moreover is asymptotically efficient. This is the first estimator that provably combines all these qualities in one package. Our construction is inspired by the robustness properties of self-normalized sums. The theoretical findings are supplemented by numerical simulations highlighting the strong performance of the proposed estimator in comparison with previously known techniques.
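
For orientation, the classical robust baseline in this literature is median-of-means, which already delivers sub-Gaussian-type deviations (this is a standard reference point, not the estimator proposed in the paper):

```python
import numpy as np

def median_of_means(x, k=10):
    """Split the sample into k blocks, average each block,
    and return the median of the block means."""
    x = np.random.default_rng(0).permutation(x)
    return float(np.median([b.mean() for b in np.array_split(x, k)]))

rng = np.random.default_rng(1)
x = np.concatenate([rng.standard_normal(1000), [50.0] * 5])  # 5 gross outliers
print(x.mean(), median_of_means(x))  # empirical mean is dragged; MoM stays near 0
```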

We study the phase synchronization problem with noisy measurements $Y=z^*z^{*H}+\sigma W\in\mathbb{C}^{n\times n}$, where $z^*$ is an $n$-dimensional complex unit-modulus vector and $W$ is a complex-valued Gaussian random matrix. It is assumed that each entry $Y_{jk}$ is observed with probability $p$. We prove that an SDP relaxation of the MLE achieves the error bound $(1+o(1))\frac{\sigma^2}{2np}$ under a normalized squared $\ell_2$ loss. This result matches the minimax lower bound of the problem, and even the leading constant is sharp. The analysis of the SDP is based on an equivalent non-convex program whose solution can be characterized as a fixed point of the generalized power iteration lifted to a higher-dimensional space. This viewpoint unifies the proofs of the statistical optimality of three different methods: MLE, SDP, and generalized power method. The technique is also applied to the analysis of the SDP for $\mathbb{Z}_2$ synchronization, and we achieve the minimax optimal error $\exp\left(-(1-o(1))\frac{np}{2\sigma^2}\right)$ with a sharp constant in the exponent.
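
The generalized power iteration referenced in the analysis is easy to state: alternate a matrix-vector product with entrywise projection onto the unit circle. A minimal sketch for the fully observed case $p=1$ (the Bernoulli masking of entries is omitted for brevity, and the noise is Hermitized as a stand-in for the model's $W$):

```python
import numpy as np

def generalized_power_method(Y, iters=50):
    """Generalized power method: matrix-vector product followed by
    entrywise projection onto the complex unit circle."""
    rng = np.random.default_rng(0)
    z = np.exp(1j * rng.uniform(0, 2 * np.pi, Y.shape[0]))
    for _ in range(iters):
        w = Y @ z
        z = w / np.maximum(np.abs(w), 1e-12)   # project onto unit modulus
    return z

n, sigma = 300, 1.0
rng = np.random.default_rng(1)
z_star = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
W = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
W = (W + W.conj().T) / np.sqrt(2)              # Hermitian noise (modeling choice)
Y = np.outer(z_star, z_star.conj()) + sigma * W
z = generalized_power_method(Y)
print(np.abs(np.vdot(z, z_star)) / n)          # 1.0 would be exact recovery
```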

Trimming consists of cutting away parts of a geometric domain without reconstructing a global parametrization (meshing). It is a widely used operation in computer-aided design, and it generates meshes that are not fitted to the described physical object. This paper develops an adaptive mesh refinement strategy on trimmed geometries in the context of hierarchical B-spline based isogeometric analysis. A residual a posteriori estimator of the energy norm of the numerical approximation error is derived for the Poisson equation. The reliability of the estimator is proven, and the effectivity index is shown to be independent of the number of hierarchical levels and of how the trimmed boundaries cut the underlying mesh. In particular, it is thus independent of the size of the active part of the trimmed mesh elements. Numerical experiments are performed to validate the presented theory.
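
The estimator's mechanics are easiest to see in a drastically simplified setting. The sketch below is a textbook 1D residual indicator for $-u''=f$ with P1 elements (no B-splines, hierarchy, or trimming; it only illustrates the bulk-residual-plus-flux-jump structure that such estimators share):

```python
import numpy as np

def solve_p1_poisson(nodes, f):
    """P1 FEM for -u'' = f on (0,1) with u(0) = u(1) = 0 (midpoint-rule load)."""
    n, h = len(nodes), np.diff(nodes)
    K, F = np.zeros((n, n)), np.zeros(n)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    for k in range(n - 1):
        K[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[k]
        F[k:k+2] += 0.5 * h[k] * f(mid[k])
    K[0, :], K[-1, :] = 0.0, 0.0          # enforce homogeneous Dirichlet BCs
    K[0, 0] = K[-1, -1] = 1.0
    F[0] = F[-1] = 0.0
    return np.linalg.solve(K, F)

def residual_estimator(nodes, u, f):
    """Elementwise indicator: eta_K^2 = h_K^2 ||f||_K^2 plus half of the
    squared flux jumps of u_h' at the element's interior endpoints."""
    h = np.diff(nodes)
    grad = np.diff(u) / h                  # piecewise-constant u_h'
    jumps = np.abs(np.diff(grad))          # |[u_h']| at interior nodes
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    eta2 = h**2 * f(mid)**2 * h            # midpoint quadrature of h^2 ||f||_K^2
    eta2[:-1] += 0.5 * jumps**2
    eta2[1:] += 0.5 * jumps**2
    return np.sqrt(eta2)

nodes = np.linspace(0.0, 1.0, 21)
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution sin(pi x)
u = solve_p1_poisson(nodes, f)
print(residual_estimator(nodes, u, f))
```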

The analysis and use of stochastic models represented by discrete-time Markov chains require evaluating performance measures and characterizing the stationary distribution. Analytical solutions are often unavailable when the system states are continuous or mixed. This paper presents a new method for computing the stationary distribution and performance measures of stochastic systems represented by continuous- or mixed-state Markov chains. We show asymptotic convergence and provide deterministic, non-asymptotic error bounds for our method under the supremum norm. Our finite approximation method is near-optimal among all discrete approximate distributions, including empirical distributions obtained from Markov chain Monte Carlo (MCMC). Numerical experiments validate the accuracy and efficiency of our method and show that it significantly outperforms MCMC-based approaches.
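
A naive version of the idea, with none of the paper's near-optimality guarantees, is to discretize the transition kernel on a grid and iterate the resulting stochastic matrix. The following sketch does this for a scalar AR(1) chain whose stationary law is known in closed form:

```python
import numpy as np

# Chain X_{t+1} = a X_t + eps, eps ~ N(0, s^2); stationary law N(0, s^2/(1-a^2)).
a, s = 0.5, 1.0
grid = np.linspace(-5, 5, 401)
w = grid[1] - grid[0]

def kernel(y, x):                      # transition density p(y | x)
    return np.exp(-(y - a * x) ** 2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

P = kernel(grid[None, :], grid[:, None]) * w   # row i: from grid[i] to grid[j]
P /= P.sum(axis=1, keepdims=True)              # absorb truncation error

pi = np.full(len(grid), 1.0 / len(grid))
for _ in range(1000):                          # power iteration on the grid chain
    pi = pi @ P
print(pi @ grid, pi @ grid**2)                 # approx. 0 and s^2/(1-a^2) = 4/3
```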

Sparse binary matrices are of great interest in the field of compressed sensing. This class of matrices makes it possible to perform signal recovery with lower storage costs and faster decoding algorithms. In particular, random matrices formed by i.i.d. Bernoulli($p$) random variables are of practical relevance in the context of nonnegative sparse recovery. In this work, we investigate the robust nullspace property of sparse Bernoulli($p$) matrices. Previous results in the literature establish that such matrices can accurately recover $n$-dimensional $s$-sparse vectors with $m=O\left(\frac{s}{c(p)}\log\frac{en}{s}\right)$ measurements, where $c(p) \le p$ is a constant depending only on the parameter $p$. These results suggest that, when $p$ vanishes, the sparse Bernoulli matrix requires considerably more measurements than the minimum achieved by standard isotropic subgaussian designs. We show that this is not true. Our main result characterizes, for a wide range of sparsity levels $s$, the smallest $p$ for which sparse recovery is possible with the minimal number of measurements. We also provide matching lower bounds establishing the optimality of our results.
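
To see the objects in play, the following sketch draws a sparse Bernoulli($p$) design and runs nonnegative least squares as the recovery routine (a standard decoder for nonnegative sparse recovery; whether exact recovery occurs at these particular sizes is not guaranteed by the snippet itself):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, m, s, p = 200, 80, 5, 0.1
A = (rng.random((m, n)) < p).astype(float)               # sparse Bernoulli(p) design
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.random(s) + 0.5  # nonnegative s-sparse signal
y = A @ x
x_hat, _ = nnls(A, y)                                     # nonnegative least squares
print(np.linalg.norm(x_hat - x))
```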

Kernel methods are used frequently in various applications of machine learning. For large-scale, high-dimensional applications, the success of kernel methods hinges on the ability to operate with a large, dense kernel matrix $K$. An enormous amount of literature has been devoted to the study of symmetric positive semi-definite (SPSD) kernels, for which Nystrom methods compute a low-rank approximation of the kernel matrix by choosing landmark points. In this paper, we study the Nystrom method for approximating both symmetric indefinite kernel matrices and SPSD ones. We first develop a theoretical framework for general symmetric kernel matrices, which provides theoretical guidance for the selection of landmark points. We then leverage discrepancy theory to propose the anchor net method for computing accurate Nystrom approximations with optimal complexity. The anchor net method operates entirely on the dataset, without requiring access to $K$ or its matrix-vector products. Results on various types of kernels (both indefinite and SPSD) and machine learning datasets demonstrate that the new method achieves better accuracy and stability at lower computational cost than state-of-the-art Nystrom methods.
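
The Nystrom construction itself is compact: with landmark set $Z$, approximate $K \approx C W^{+} C^{T}$ from the cross-block $C$ and landmark block $W$. The sketch below uses uniformly random landmarks for simplicity, whereas the paper's anchor net method chooses them via discrepancy theory:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, kernel=rbf):
    """Nystrom approximation K ~ C W^+ C^T built only from kernel
    evaluations against the landmark points."""
    C = kernel(X, landmarks)              # n x k cross block
    W = kernel(landmarks, landmarks)      # k x k landmark block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
Z = X[rng.choice(500, 20, replace=False)]   # uniform landmarks (simplification)
K, K_ny = rbf(X, X), nystrom(X, Z)
print(np.linalg.norm(K - K_ny) / np.linalg.norm(K))
```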

We present a randomized $O(m \log^2 n)$ work, $O(\text{polylog } n)$ depth parallel algorithm for minimum cut. This algorithm matches the work bounds of a recent sequential algorithm by Gawrychowski, Mozes, and Weimann [ICALP'20], and improves on the previously best parallel algorithm by Geissmann and Gianinazzi [SPAA'18], which performs $O(m \log^4 n)$ work in $O(\text{polylog } n)$ depth. Our algorithm makes use of three components that might be of independent interest. First, we design a parallel data structure that efficiently supports batched mixed queries and updates on trees. It generalizes and improves the work bounds of a previous data structure of Geissmann and Gianinazzi and is work-efficient with respect to the best sequential algorithm. Second, we design a parallel algorithm for approximate minimum cut that improves on previous results by Karger and Motwani. We use this algorithm to give a work-efficient procedure for producing a tree packing, as in Karger's sequential algorithm for minimum cuts. Finally, we design an efficient parallel algorithm for solving the minimum $2$-respecting cut problem.
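
For readers new to the area, "Karger" above refers to a line of randomized minimum-cut algorithms; the simplest member is the sequential randomized-contraction algorithm sketched below. It is background only: the paper's parallel algorithm instead builds on Karger's later tree-packing / $2$-respecting-cut framework.

```python
import random

def karger_min_cut(edges, n, trials=200):
    """Randomized contraction: repeatedly merge endpoints of random edges
    until two super-nodes remain; keep the best cut over many trials."""
    best = float('inf')
    for _ in range(trials):
        parent = list(range(n))
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]   # path halving
                u = parent[u]
            return u
        comps = n
        for (u, v) in random.sample(edges, len(edges)):
            if comps == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                comps -= 1
        best = min(best, sum(1 for (u, v) in edges if find(u) != find(v)))
    return best

# Two K4 cliques joined by one bridge edge: the minimum cut is 1.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
edges = k4 + [(i + 4, j + 4) for (i, j) in k4] + [(0, 4)]
print(karger_min_cut(edges, n=8))
```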

Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs $100\times$ faster than exact matrix products and $10\times$ faster than current approximate methods. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling (the core operations of our method) could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar-quantized matrix products that have recently been the focus of substantial research and hardware investment.
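
The flavor of such methods can be conveyed with a generic product-quantization sketch: cluster subvectors of $A$'s rows offline, precompute each prototype's products with $B$, and replace the inner loop of the multiply with table lookups and sums. This is not the paper's learned hash function, just the shared lookup-table idea:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def pq_amm(A, B, n_subspaces=4, n_codes=16):
    """Product-quantization approximate matrix multiply: per subspace,
    encode A's rows by their nearest prototype, precompute prototype @ B,
    then gather rows from the table instead of multiplying."""
    out = np.zeros((A.shape[0], B.shape[1]))
    for idx in np.array_split(np.arange(A.shape[1]), n_subspaces):
        cents, codes = kmeans2(A[:, idx], n_codes, minit='++', seed=0)
        table = cents @ B[idx, :]     # n_codes x B.cols lookup table
        out += table[codes]           # lookups and adds, no multiplies
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 64))
B = rng.standard_normal((64, 32))
err = np.linalg.norm(pq_amm(A, B) - A @ B) / np.linalg.norm(A @ B)
print(err)   # relative error of the table-lookup approximation
```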

In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely, the function $F(x) \triangleq \sum_{i=1}^{m}f_i(x)$ being (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved dependence on the condition numbers.
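
A toy instance of the dual approach (a minimal sketch, not the paper's general setup): quadratic local objectives on a 4-node ring, the consensus constraint written through the edge-incidence matrix, and Nesterov momentum applied to the dual variables. Each primal update uses only information on a node's incident edges, which is what makes the scheme distributed:

```python
import numpy as np

# Consensus least squares: min sum_i 0.5 (x_i - a_i)^2  s.t.  x_i = x_j on edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-node ring (illustrative)
n = 4
E = np.zeros((len(edges), n))                 # edge-incidence matrix
for k, (i, j) in enumerate(edges):
    E[k, i], E[k, j] = 1.0, -1.0

rng = np.random.default_rng(0)
a = rng.standard_normal(n)
lam = np.zeros(len(edges))
lam_prev = lam.copy()
Lsmooth = np.linalg.norm(E, 2) ** 2           # Lipschitz constant of dual gradient
for t in range(1, 200):
    mu = lam + (t - 1) / (t + 2) * (lam - lam_prev)    # Nesterov momentum
    x = a - E.T @ mu                                   # local primal solves
    lam_prev, lam = lam, mu + (E @ x) / Lsmooth        # gradient ascent on the dual
print(x, a.mean())   # all x_i converge to the average of the a_i
```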
