
The computation of off-diagonal blocks of matrix functions $f(T)$, where $T$ is block triangular, poses a challenging problem in scientific computing. We present a novel algorithm that exploits the structure of block triangular matrices, generalizing the algorithm of Al-Mohy and Higham for computing the Fr\'echet derivative of the matrix exponential. This work has significant applications in fields such as exponential integrators for solving systems of first-order differential equations, Hamiltonian linear systems in control theory, and option pricing in finance. Our approach introduces a linear operator that maps off-diagonal blocks of $T$ into their counterparts in $f(T)$. By studying the algebraic properties of this operator, we establish a comprehensive computational framework, paving the way for extending existing Fr\'echet derivative algorithms of matrix functions to more general settings. For the matrix exponential in particular, the algorithm employs the scaling and squaring method with diagonal Pad\'e approximants to $\exp(x)$, with parameters chosen based on a rigorous backward error analysis that, notably, does not depend on the norm of the off-diagonal blocks. Numerical experiments demonstrate that our algorithm surpasses existing algorithms in terms of accuracy and efficiency, making it highly valuable for a wide range of applications.
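
As a point of reference for the quantity being computed (a naive reference computation, not the structured algorithm described above), the off-diagonal block of $\exp(T)$ for a $2 \times 2$ block triangular $T$ can be read off from the exponential of the full matrix, and it agrees with the integral representation $X = \int_0^1 e^{(1-s)A}\,C\,e^{sB}\,ds$. A minimal Python/SciPy check, with matrix sizes and quadrature order chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n1, n2 = 4, 3
A = rng.standard_normal((n1, n1))
B = rng.standard_normal((n2, n2))
C = rng.standard_normal((n1, n2))

# Naive approach: form the full block triangular matrix and exponentiate it.
T = np.block([[A, C], [np.zeros((n2, n1)), B]])
X_naive = expm(T)[:n1, n1:]

# Reference value via Gauss-Legendre quadrature of the integral representation
# X = \int_0^1 exp((1-s) A) C exp(s B) ds.
nodes, weights = np.polynomial.legendre.leggauss(20)
s = 0.5 * (nodes + 1.0)                     # map nodes from [-1, 1] to [0, 1]
X_quad = 0.5 * sum(w * expm((1 - si) * A) @ C @ expm(si * B)
                   for w, si in zip(weights, s))

print(np.linalg.norm(X_naive - X_quad) / np.linalg.norm(X_naive))  # ~1e-15
```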

Related content

In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^{n}$ to any Abelian group $G$ is locally correctable with $\widetilde{O}_{d}((\log n)^{d})$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of the authors [ABPSS24] (STOC 2024), who considered the case of linear polynomials and gave analogous results. Low-degree polynomials over the Boolean cube $\{0,1\}^{n}$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear to higher-degree polynomials involves several new challenges, and handling them gives us further insights into properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is, moreover, an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher-degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices.
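
As a small, purely illustrative aside (not part of the paper's techniques), the "minimum distance of the underlying code" can be seen concretely in the special case $G = \mathbb{Z}_2$, where the degree-$d$ multilinear polynomials on $\{0,1\}^n$ form the Reed-Muller code $\mathrm{RM}(d,n)$ with minimum distance $2^{n-d}$. A brute-force check for $n=4$, $d=2$ (parameters chosen only to keep the enumeration tiny):

```python
from itertools import combinations, product
import numpy as np

n, d = 4, 2
points = np.array(list(product([0, 1], repeat=n)))          # all 2^n points of the cube

# Evaluation vectors of every monomial of degree <= d.
monomials = [()] + [c for k in range(1, d + 1) for c in combinations(range(n), k)]
evals = np.array([points[:, list(c)].prod(axis=1) if c else np.ones(len(points), int)
                  for c in monomials])

# Minimum Hamming weight over all nonzero GF(2) combinations of the monomials.
min_weight = min(int(((np.array(coeffs) @ evals) % 2).sum())
                 for coeffs in product([0, 1], repeat=len(monomials))
                 if any(coeffs))
print(min_weight, 2 ** (n - d))                              # both are 4
```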

The relaxed optimal $k$-thresholding pursuit (ROTP) is a recent algorithm for linear inverse problems. This algorithm is based on the optimal $k$-thresholding technique, which performs vector thresholding and error metric reduction simultaneously. Although ROTP can be used to solve small to medium-sized linear inverse problems, its computational cost is high when solving large-scale problems. By merging the optimal $k$-thresholding technique, an iterative method with memory, and optimization over sparse search directions, we propose the so-called dynamic thresholding algorithm with memory (DTAM), which iteratively and dynamically selects vector bases to construct the problem solution. At every iteration, the algorithm uses several (possibly all) of the iterates generated so far to construct a new search direction, and it solves only a small-sized quadratic subproblem. Thus the computational complexity of DTAM is remarkably lower than that of ROTP-type methods. It turns out that DTAM can locate the solution of linear inverse problems if the matrix involved satisfies the restricted isometry property. Experiments on synthetic data, audio signal reconstruction, and image denoising demonstrate that the proposed algorithm performs comparably to several mainstream thresholding and greedy algorithms, and it works much faster than ROTP-type algorithms, especially when the sparsity level of the signal is relatively low.
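
The following toy sketch is not DTAM itself, but it illustrates the general pattern the abstract describes: fold previous iterates into a search direction, hard-threshold to a support of size $k$, and solve a small least-squares subproblem on that support. The step size, memory length, and test problem are arbitrary choices for illustration:

```python
import numpy as np

def thresholding_with_memory(A, y, k, iters=50, memory=3, step=1.0):
    n = A.shape[1]
    x = np.zeros(n)
    history = []                                    # a few previous iterates ("memory")
    for _ in range(iters):
        direction = x + step * A.T @ (y - A @ x)    # gradient step on 0.5*||y - Ax||^2
        for x_old in history[-memory:]:             # fold earlier iterates into the direction
            direction += 0.5 * (x - x_old) / memory
        support = np.argsort(np.abs(direction))[-k:]   # hard thresholding: keep k largest entries
        x_new = np.zeros(n)
        x_new[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]  # small subproblem
        history.append(x.copy())
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n, k = 100, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = thresholding_with_memory(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))               # small if the support was recovered
```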

It is well known that the minimum $\ell_2$-norm solution of the convex LASSO model, say $\mathbf{x}_{\star}$, is a continuous piecewise linear function of the regularization parameter $\lambda$, and its signed sparsity pattern is constant within each linear piece. The current study is an extension of this classic result, proving that the aforementioned properties extend to the min-norm solution map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, where $\mathbf{y}$ is the observed signal, for a generalization of LASSO termed the scaled generalized minimax concave (sGMC) model. The sGMC model adopts a nonconvex debiased variant of the $\ell_1$-norm as sparse regularizer, but its objective function is overall-convex. Based on the geometric properties of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we propose an extension of the least angle regression (LARS) algorithm, which iteratively computes the closed-form expression of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ in each linear zone. Under suitable conditions, the proposed algorithm provably obtains the whole solution map $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ within finite iterations. Notably, our proof techniques for establishing continuity and piecewise linearity of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$ are novel, and they lead to two side contributions: (a) our proofs establish continuity of the sGMC solution set as a set-valued mapping of $(\mathbf{y},\lambda)$; (b) to prove piecewise linearity and piecewise constant sparsity pattern of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we do not require any assumption that previous work relies on (whereas to prove some additional properties of $\mathbf{x}_{\star}(\mathbf{y},\lambda)$, we use a different set of assumptions from previous work).
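
For orientation, the classical LASSO solution path that sGMC-LARS generalizes can be computed with scikit-learn's `lars_path`, which returns the breakpoints in $\lambda$ and the piecewise-linear coefficients between them (this is the standard LARS, not the proposed extension; the synthetic data below are arbitrary):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(50)

alphas, active, coefs = lars_path(X, y, method="lasso")
# coefs[:, j] is the solution at regularization level alphas[j]; between consecutive
# alphas the path is linear in lambda and its signed support stays constant.
print(alphas[:5])
print(np.nonzero(coefs[:, 3])[0])    # active set inside the third linear piece
```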

The Softmax attention mechanism in Transformer models is notoriously computationally expensive, particularly due to its quadratic complexity, posing significant challenges in vision applications. In contrast, linear attention provides a far more efficient solution by reducing the complexity to linear levels. However, compared to Softmax attention, linear attention often experiences significant performance degradation. Our experiments indicate that this performance drop is due to the low-rank nature of linear attention's feature map, which hinders its ability to adequately model complex spatial information. In this paper, to break the low-rank dilemma of linear attention, we conduct rank analysis from two perspectives: the KV buffer and the output features. Consequently, we introduce Rank-Augmented Linear Attention (RALA), which rivals the performance of Softmax attention while maintaining linear complexity and high efficiency. Based on RALA, we construct the Rank-Augmented Vision Linear Transformer (RAVLT). Extensive experiments demonstrate that RAVLT achieves excellent performance across various vision tasks. Specifically, without using any additional labels, data, or supervision during training, RAVLT achieves an 84.4% Top-1 accuracy on ImageNet-1k with only 26M parameters and 4.6G FLOPs. This result significantly surpasses previous linear attention mechanisms, fully illustrating the potential of RALA. Code will be available at //github.com/qhfan/RALA.
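
A minimal NumPy sketch of the complexity gap mentioned above, using a generic linear attention (not the proposed RALA): Softmax attention materializes an $N \times N$ matrix, whereas linear attention reorders the product to maintain a $d \times d$ KV buffer, reducing the cost from $O(N^2 d)$ to $O(N d^2)$. The feature map $\phi$ below is an arbitrary positive map chosen for illustration:

```python
import numpy as np

def softmax_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])                  # N x N attention matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                                            # d x d "KV buffer"
    normalizer = Qp @ Kp.sum(axis=0)                         # length-N normalizer
    return (Qp @ kv) / normalizer[:, None]

N, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```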

We show that the CNF satisfiability problem (SAT) can be solved in time $O^*(1.1199^{(d-2)n})$, where $d$ is either the maximum number of occurrences of any variable or the average number of occurrences of all variables if no variable occurs only once. This improves upon the known upper bounds of $O^*(1.1279^{(d-2)n})$ by Wahlstr\"om (SAT 2005) and $O^*(1.1238^{(d-2)n})$ by Peng and Xiao (IJCAI 2023). For $d\leq 4$, our algorithm improves on previous results. Our main technical result is an algorithm that runs in $O^*(1.1199^n)$ for 3-occur-SAT, a restricted variant of SAT in which every variable has at most 3 occurrences. Through deeper case analysis and a reduction rule that allows us to resolve many variables under relatively broad criteria, we are able to circumvent the bottlenecks in previous algorithms.
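
To make the notion of "resolving" a variable concrete, here is the classic Davis-Putnam variable-elimination rule (the paper's actual reduction rule is more refined); clauses are represented as frozensets of signed integers, with $-v$ denoting the negation of variable $v$:

```python
def eliminate(clauses, v):
    """Davis-Putnam elimination of variable v; the result is equisatisfiable and v-free."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for cp in pos:
        for cn in neg:
            r = (cp - {v}) | (cn - {-v})
            if not any(-lit in r for lit in r):        # skip tautological resolvents
                resolvents.append(frozenset(r))
    return rest + resolvents

cnf = [frozenset(c) for c in [{1, 2}, {-1, 3}, {-2, -3}, {1, -3}]]
print(eliminate(cnf, 1))       # CNF over the remaining variables, equisatisfiable with cnf
```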

We propose a two-stage memory retrieval dynamics for modern Hopfield models, termed $\mathtt{U\text{-}Hop}$, with enhanced memory capacity. Our key contribution is a learnable feature map $\Phi$ which transforms the Hopfield energy function into kernel space. This transformation ensures convergence between the local minima of the energy and the fixed points of the retrieval dynamics within the kernel space. Consequently, the kernel norm induced by $\Phi$ serves as a novel similarity measure; it utilizes the stored memory patterns as learning data to enhance memory capacity across all modern Hopfield models. Specifically, we accomplish this by constructing a separation loss $\mathcal{L}_\Phi$ that separates the local minima of the kernelized energy by separating the stored memory patterns in kernel space. Methodologically, the $\mathtt{U\text{-}Hop}$ memory retrieval process consists of: (Stage I) minimizing the separation loss for a more uniform memory (local minimum) distribution, followed by (Stage II) standard Hopfield energy minimization for memory retrieval. This results in a significant reduction of possible metastable states in the Hopfield energy function, thus enhancing memory capacity by preventing memory confusion. Empirically, on real-world datasets, we demonstrate that $\mathtt{U\text{-}Hop}$ outperforms all existing modern Hopfield models and state-of-the-art similarity measures, achieving substantial improvements in both associative memory retrieval and deep learning tasks. Code is available at //github.com/MAGICS-LAB/UHop; future updates are on arXiv:2404.03827.
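
A minimal NumPy sketch of standard modern Hopfield retrieval dynamics, i.e., roughly Stage II above; the learnable feature map $\Phi$ and the separation loss $\mathcal{L}_\Phi$ of Stage I are omitted, and the inverse temperature and pattern dimensions are arbitrary illustrative choices:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hopfield_retrieve(M, query, beta=4.0, steps=5):
    """M is d x K; each column is a stored memory pattern."""
    z = query.copy()
    for _ in range(steps):
        z = M @ softmax(beta * (M.T @ z))   # standard modern Hopfield update
    return z

rng = np.random.default_rng(0)
d, K = 32, 10
M = rng.standard_normal((d, K))
noisy = M[:, 3] + 0.3 * rng.standard_normal(d)   # corrupted copy of pattern 3
retrieved = hopfield_retrieve(M, noisy)
print(np.argmax(M.T @ retrieved))                # expected to recover index 3
```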

Oja's algorithm for Streaming Principal Component Analysis (PCA) for $n$ data points in a $d$-dimensional space achieves the same $\sin^2$ error $O(r_{\mathsf{eff}}/n)$ as the offline algorithm in $O(d)$ space and $O(nd)$ time, using a single pass through the data points. Here $r_{\mathsf{eff}}$ is the effective rank (the ratio of the trace and the principal eigenvalue of the population covariance matrix $\Sigma$). Under this computational budget, we consider the problem of sparse PCA, where the principal eigenvector of $\Sigma$ is $s$-sparse and $r_{\mathsf{eff}}$ can be large. In this setting, to our knowledge, \textit{there are no known single-pass algorithms} that achieve the minimax error bound in $O(d)$ space and $O(nd)$ time without either requiring strong initialization conditions or assuming further structure (e.g., spiked) of the covariance matrix. We show that a simple single-pass procedure that thresholds the output of Oja's algorithm (the Oja vector) can achieve the minimax error bound under some regularity conditions in $O(d)$ space and $O(nd)$ time. We present a nontrivial and novel analysis of the entries of the unnormalized Oja vector, which involves the projection of a product of independent random matrices onto a random initial vector. This is completely different from previous analyses of Oja's algorithm and matrix products, which have been done when $r_{\mathsf{eff}}$ is bounded.
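
A schematic single-pass version of the procedure described above; the step-size schedule, threshold rule, and spiked test distribution below are illustrative choices, not the paper's prescriptions or assumptions:

```python
import numpy as np

def oja_then_threshold(stream, d, s, lr=0.5):
    w = np.random.default_rng(0).standard_normal(d)
    w /= np.linalg.norm(w)
    for t, x in enumerate(stream, start=1):       # single pass, O(d) memory
        w += (lr / t) * x * (x @ w)               # Oja's update
        w /= np.linalg.norm(w)
    keep = np.argsort(np.abs(w))[-s:]             # threshold the Oja vector
    w_sparse = np.zeros(d)
    w_sparse[keep] = w[keep]
    return w_sparse / np.linalg.norm(w_sparse)

d, s, n = 500, 10, 20000
v = np.zeros(d)
v[:s] = 1.0 / np.sqrt(s)                          # s-sparse principal eigenvector
rng = np.random.default_rng(1)
stream = (rng.standard_normal(d) + 3.0 * rng.standard_normal() * v for _ in range(n))
w_hat = oja_then_threshold(stream, d, s)
print(1.0 - (w_hat @ v) ** 2)                     # sin^2 error of the estimate
```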

Optimization over the set of matrices $X$ that satisfy $X^\top B X = I_p$, referred to as the generalized Stiefel manifold, appears in many applications involving sampled covariance matrices, such as canonical correlation analysis (CCA), independent component analysis (ICA), and the generalized eigenvalue problem (GEVP). Solving these problems is typically done by iterative methods that require a fully formed $B$. We propose a cheap stochastic iterative method that solves the optimization problem while having access only to random estimates of $B$. Our method does not enforce the constraint in every iteration; instead, it produces iterates that converge to critical points on the generalized Stiefel manifold defined in expectation. The method has lower per-iteration cost, requires only matrix multiplications, and has the same convergence rates as its Riemannian optimization counterparts that require the full matrix $B$. Experiments demonstrate its effectiveness in various machine learning applications involving generalized orthogonality constraints, including CCA, ICA, and the GEVP.
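
For context, the constraint $X^\top B X = I_p$ is exactly the normalization satisfied by generalized eigenvectors; the snippet below solves a small GEVP deterministically with SciPy just to illustrate the manifold (the paper's point, solving such problems from random estimates of $B$ without ever forming it, is not shown here):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d, p = 10, 3
A = rng.standard_normal((d, d)); A = (A + A.T) / 2            # symmetric objective matrix
L = rng.standard_normal((d, d)); B = L @ L.T + d * np.eye(d)  # symmetric positive definite B

vals, vecs = eigh(A, B)                # generalized eigenvalue problem A x = lambda B x
X = vecs[:, -p:]                       # top-p generalized eigenvectors
print(np.linalg.norm(X.T @ B @ X - np.eye(p)))   # ~0: X satisfies X^T B X = I_p
```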

We study the query complexity of the metric Steiner Tree problem, where we are given an $n \times n$ metric on a set $V$ of vertices along with a set $T \subseteq V$ of $k$ terminals, and the goal is to find a tree of minimum cost that contains all terminals in $T$. The query complexity for the related minimum spanning tree (MST) problem is well-understood: for any fixed $\varepsilon > 0$, one can estimate the MST cost to within a $(1+\varepsilon)$-factor using only $\tilde{O}(n)$ queries, and this is known to be tight. This implies that a $(2 + \varepsilon)$-approximate estimate of Steiner Tree cost can be obtained with $\tilde{O}(k)$ queries by simply applying the MST cost estimation algorithm on the metric induced by the terminals. Our first result shows that any (randomized) algorithm that estimates the Steiner Tree cost to within a $(5/3 - \varepsilon)$-factor requires $\Omega(n^2)$ queries, even if $k$ is a constant. This lower bound is in sharp contrast to an upper bound of $O(nk)$ queries for computing a $(5/3)$-approximate Steiner Tree, which follows from previous work by Du and Zelikovsky. Our second main result, and the main technical contribution of this work, is a sublinear query algorithm for estimating the Steiner Tree cost to within a strictly better-than-$2$ factor, with query complexity $\tilde{O}(n^{12/7} + n^{6/7}\cdot k)=\tilde{O}(n^{13/7})=o(n^2)$. We complement this result by showing an $\tilde{\Omega}(n + k^{6/5})$ query lower bound for any algorithm that estimates Steiner Tree cost to a strictly better than $2$ factor. Thus $\tilde{\Omega}(n^{6/5})$ queries are needed to just beat $2$-approximation when $k = \Omega(n)$; a sharp contrast to MST cost estimation where a $(1+o(1))$-approximate estimate of cost is achievable with only $\tilde{O}(n)$ queries.
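
The $2$-approximate baseline mentioned above can be made concrete: restrict the metric to the $k$ terminals and compute an MST of the induced metric, whose cost is at most twice the optimal Steiner tree cost. The snippet below does this exactly with $O(k^2)$ distance queries on a toy Euclidean metric (the sublinear-query estimation algorithms discussed in the abstract are not shown):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n, k = 100, 10
points = rng.random((n, 2))
metric = cdist(points, points)                    # an n x n metric on the vertex set V
terminals = rng.choice(n, size=k, replace=False)  # the terminal set T

induced = metric[np.ix_(terminals, terminals)]    # metric induced on the terminals
mst_cost = minimum_spanning_tree(induced).sum()   # exact MST: O(k^2) distance queries
print(mst_cost)                                   # at most twice the optimal Steiner tree cost
```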

We study computational problems related to the Schr\"odinger operator $H = -\Delta + V$ in real space, under the conditions that (i) the potential function $V$ is smooth and has its value and derivative bounded by some polynomial in $n$, and (ii) $V$ consists only of $O(1)$-body interactions. We prove that (i) simulating the dynamics generated by the Schr\"odinger operator implements universal quantum computation, i.e., it is BQP-hard, and (ii) estimating the ground energy of the Schr\"odinger operator is as hard as estimating that of local Hamiltonians with no sign problem (a.k.a. stoquastic Hamiltonians), i.e., it is StoqMA-complete. This result is particularly intriguing because the ground energy problem for general bosonic Hamiltonians is known to be QMA-hard, and it is widely believed that $\texttt{StoqMA}\varsubsetneq \texttt{QMA}$.
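
As a small numerical illustration of the ground-energy problem (a single particle in one dimension, which is classically easy; the hardness results above concern the general many-body setting), one can discretize $H = -\Delta + V$ on a grid and take the smallest eigenvalue. The potential and grid below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
V = x ** 2                                              # smooth, bounded potential on the box
laplacian = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h ** 2
H = -laplacian + diags(V)                               # discretized H = -Delta + V
ground_energy = eigsh(H, k=1, which="SA")[0][0]
print(ground_energy)                                    # ~1.0 for this harmonic potential
```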
