
The randomized sparse Kaczmarz method, designed to seek sparse solutions of linear systems $Ax=b$, selects the $i$-th projection hyperplane with probability proportional to $\|a_{i}\|_2^2$, where $a_{i}^T$ is the $i$-th row of $A$. In this work, we propose a weighted randomized sparse Kaczmarz method, which selects the $i$-th projection hyperplane with probability proportional to $\lvert\langle a_{i},x_{k}\rangle-b_{i}\rvert^p$, where $0<p<\infty$, for possible acceleration. The parameter $p$ bridges the randomized Kaczmarz method and the greedy Kaczmarz method. Theoretically, we show a linear convergence rate in expectation with respect to the Bregman distance in both the noiseless and noisy cases, which is at least as fast as that of the randomized sparse Kaczmarz method. The superiority of the proposed method is demonstrated via a group of numerical experiments.
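To make the sampling rule concrete, here is a minimal Python sketch of the weighted iteration, assuming the standard soft-thresholding (Bregman-projection) form of the sparse Kaczmarz step; the parameter names `p`, `lam`, and `iters` are illustrative and not fixed by the abstract.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-shrinkage operator: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def weighted_sparse_kaczmarz(A, b, p=2.0, lam=1.0, iters=5000, seed=0):
    """Sketch: sample row i with probability ~ |<a_i, x_k> - b_i|**p,
    then apply the sparse Kaczmarz (dual step + shrinkage) update."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    z = np.zeros(n)   # dual iterate
    x = np.zeros(n)   # primal (sparse) iterate
    for _ in range(iters):
        r = A @ x - b                     # residuals at x_k
        w = np.abs(r) ** p
        if w.sum() == 0.0:                # consistent system solved exactly
            break
        i = rng.choice(m, p=w / w.sum())  # residual-weighted row selection
        z -= (r[i] / np.dot(A[i], A[i])) * A[i]
        x = soft_threshold(z, lam)        # sparsifying step
    return x
```

Replacing the weights with $\|a_i\|_2^2$ recovers the plain randomized sparse Kaczmarz sampling, while letting $p\to\infty$ concentrates the probability mass on the maximum residual, i.e. the greedy rule.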

Related Content

A dictionary data structure maintains a set of at most $n$ keys from the universe $[U]$ under key insertions and deletions, such that given a query $x \in [U]$, it returns whether $x$ is in the set. Some variants also store values associated with the keys, such that given a query $x$, the value associated with $x$ is returned when $x$ is in the set. This fundamental data structure problem has been studied for six decades since the introduction of hash tables in 1953. A hash table occupies $O(n\log U)$ bits of space with constant time per operation in expectation. There has been a vast literature on improving its time and space usage. The state-of-the-art dictionary by Bender, Farach-Colton, Kuszmaul, Kuszmaul and Liu [BFCK+22] has space consumption close to the information-theoretic optimum, using a total of \[ \log\binom{U}{n}+O(n\log^{(k)} n) \] bits, while supporting all operations in $O(k)$ time, for any parameter $k \leq \log^* n$. The term $O(\log^{(k)} n) = O(\underbrace{\log\cdots\log}_k n)$ is referred to as the wasted bits per key. In this paper, we prove a matching cell-probe lower bound: for $U=n^{1+\Theta(1)}$, any dictionary with $O(\log^{(k)} n)$ wasted bits per key must have expected operation time $\Omega(k)$, in the cell-probe model with word size $w=\Theta(\log U)$. Furthermore, if a dictionary stores values of $\Theta(\log U)$ bits, we show that, regardless of the query time, it must have $\Omega(k)$ expected update time. It is worth noting that this is the first cell-probe lower bound on the trade-off between space and update time for general data structures.
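For a feel of the numbers, the short Python snippet below evaluates the iterated logarithm $\log^{(k)} n$, i.e. the wasted bits per key, for an illustrative $n$; the concrete values are mine, not the paper's.

```python
import math

def iterated_log2(x, k):
    """log^{(k)} x: apply log2 k times."""
    for _ in range(k):
        x = math.log2(x)
    return x

n = 2**20  # illustrative number of keys
for k in range(1, 5):
    print(f"k = {k}: ~{iterated_log2(n, k):.2f} wasted bits per key")
# k = 1: ~20.00, k = 2: ~4.32, k = 3: ~2.11, k = 4: ~1.08
```

The lower bound says this trade-off is essentially tight: shaving the waste to $O(\log^{(k)} n)$ bits per key forces $\Omega(k)$ expected time.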

A Krylov subspace recycling method for the efficient evaluation of a sequence of matrix functions acting on a set of vectors is developed. The method improves over the recycling methods presented in [Burke et al., arXiv:2209.14163, 2022] in that it uses a closed-form expression for the augmented FOM approximants and hence circumvents the use of numerical quadrature. We further extend our method to use randomized sketching in order to avoid the arithmetic cost of orthogonalizing a full Krylov basis, offering an attractive solution to the fact that recycling algorithms built from shifted augmented FOM cannot easily be restarted. The efficacy of the proposed algorithms is demonstrated with numerical experiments.
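As a baseline for the closed-form approximants, the Python sketch below implements plain (non-recycled, non-sketched) Arnoldi/FOM for $f(A)b \approx \|b\|\, V_m f(H_m) e_1$; the paper's augmented and sketched variants build on this formula. The choice $f = \exp$ is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def fom_matfunc(A, b, m, f=expm):
    """Approximate f(A) b by ||b|| * V_m f(H_m) e_1 after m Arnoldi steps."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):                    # Arnoldi, modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # lucky breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (f(Hm) @ e1)
```

The attraction of this form is that $f$ is only ever applied to the small $m \times m$ matrix $H_m$, which is what a closed-form augmented variant can preserve while avoiding quadrature.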

In the past decade, balanced datasets have been used to advance algorithms for classification, object detection, semantic segmentation, and anomaly detection in industrial applications. Specifically, for condition-based maintenance, automating visual inspection is crucial to ensure high quality. Deterioration prognostics attempts to optimize the fine-grained decision process for predictive maintenance and proactive repair. In civil infrastructure and the living environment, damage data mining cannot avoid the imbalanced-data issue, because of rare unseen events and the high-quality status achieved by improved operations. For visual inspection, the deteriorated classes acquired from the surfaces of concrete and steel components are often imbalanced. From numerous related surveys, we summarize that imbalanced-data problems can be categorized into four types: 1) missing ranges of target and label variables, 2) majority-minority class imbalance, 3) foreground-background spatial imbalance, and 4) long-tailed pixel-wise class imbalance. Since 2015, there have been many studies of imbalanced data using deep learning approaches, including regression, image classification, object detection, and semantic segmentation. However, anomaly detection for imbalanced data is not yet well understood. In this study, we highlight a one-class anomaly detection application, which decides whether an input belongs to the anomalous class or not, and demonstrate clear examples on imbalanced vision datasets: blood smears, lung infections, wood, concrete deterioration, and disaster damage. We provide key results on the advantage of damage vision mining, hypothesizing that the more effective the range of the positive ratio, the higher the accuracy gain of the anomaly detection application. Finally, we discuss the applicability of the damage-learning methods, their limitations, and future work.
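As a minimal illustration of the one-class setting, the Python sketch below fits scikit-learn's `OneClassSVM` on normal samples only and flags deviations; the synthetic features stand in for image features from a damage-vision backbone and are not from the paper's datasets.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 16))    # abundant normal class
anomalous = rng.normal(4.0, 1.0, size=(25, 16))  # rare damaged class

# Train on the normal class only; the class imbalance never enters the fit.
clf = OneClassSVM(nu=0.05, kernel="rbf").fit(normal)
pred = clf.predict(np.vstack([normal[:10], anomalous[:10]]))
print(pred)  # +1 = predicted normal, -1 = predicted anomalous
```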

Previous versions of sparse principal component analysis (PCA) have presumed that the eigen-basis (a $p \times k$ matrix) is approximately sparse. We propose a method that presumes the $p \times k$ matrix becomes approximately sparse after a $k \times k$ rotation. The simplest version of the algorithm initializes with the leading $k$ principal components. Then, the principal components are rotated with a $k \times k$ orthogonal rotation to make them approximately sparse. Finally, soft-thresholding is applied to the rotated principal components. This approach differs from prior approaches because it uses an orthogonal rotation to approximate a sparse basis. One consequence is that a sparse component need not be a leading eigenvector, but rather a mixture of them. In this way, we propose a new (rotated) basis for sparse PCA. In addition, our approach avoids "deflation" and the multiple tuning parameters it requires. Our sparse PCA framework is versatile; for example, it extends naturally to a two-way analysis of a data matrix for simultaneous dimensionality reduction of rows and columns. We provide evidence showing that for the same level of sparsity, the proposed sparse PCA method is more stable and can explain more variance compared to alternative methods. Through three applications -- sparse coding of images, analysis of transcriptome sequencing data, and large-scale clustering of social networks -- we demonstrate the modern usefulness of sparse PCA in exploring multivariate data.
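A minimal Python sketch of the three steps follows: take the leading $k$ principal components, rotate them with a $k \times k$ orthogonal rotation toward sparsity, then soft-threshold. Varimax is used here as one concrete sparsity-seeking rotation and `lam` is an illustrative threshold; the paper's exact rotation criterion and tuning may differ.

```python
import numpy as np

def varimax(Phi, gamma=1.0, iters=100, tol=1e-8):
    """Return a k x k orthogonal R making Phi @ R approximately sparse."""
    p, k = Phi.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(iters):
        L = Phi @ R
        G = Phi.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt
        d = s.sum()
        if d_old != 0.0 and d / d_old < 1.0 + tol:
            break
        d_old = d
    return R

def rotated_sparse_pca(X, k, lam):
    """Leading k PCs -> orthogonal rotation -> soft-thresholding."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                    # p x k leading eigen-basis
    B = V @ varimax(V)              # rotated components
    return np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)
```

Because the rotation is orthogonal, the column span (and hence the variance explained before thresholding) is unchanged; only the basis within that span is chosen to be sparse.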

We develop a numerical method for the computation of a minimal convex and compact set, $\mathcal{B}\subset\mathbb{R}^N$, in the sense of mean width. This minimisation is constrained by the requirement that $\max_{b\in\mathcal{B}}\langle b , u\rangle\geq C(u)$ for all unit vectors $u\in S^{N-1}$ given some Lipschitz function $C$. This problem arises in the construction of environmental contours under the assumption of convex failure sets. Environmental contours offer descriptions of extreme environmental conditions commonly applied for reliability analysis in the early design phase of marine structures. Usually, they are applied in order to reduce the number of computationally expensive response analyses needed for reliability estimation. We solve this problem by reformulating it as a linear programming problem. Rigorous convergence analysis is performed, both in terms of convergence of mean widths and in the sense of the Hausdorff metric. Additionally, numerical examples are provided to illustrate the presented methods.
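To illustrate the LP reformulation, the sketch below treats the planar case $N=2$: the unknown set is encoded by its support function values $h_i = \max_{b\in\mathcal{B}}\langle b, u_i\rangle$ on a discretized circle, the requirement becomes $h_i \geq C(u_i)$, convexity becomes a linear consistency condition between neighboring values, and the mean width is proportional to the average of $h$. The function $C$ below is a placeholder, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

M = 360                                   # discretization of S^1
theta = 2 * np.pi * np.arange(M) / M
C = 1.0 + 0.3 * np.cos(2 * theta)         # illustrative Lipschitz target C(u)

# h is a support function iff h[i-1] + h[i+1] >= 2*cos(delta)*h[i].
delta = 2 * np.pi / M
A_ub = np.zeros((M, M))
for i in range(M):
    A_ub[i, (i - 1) % M] = -1.0
    A_ub[i, (i + 1) % M] = -1.0
    A_ub[i, i] = 2.0 * np.cos(delta)

# In the plane the mean width is proportional to the mean of h over S^1,
# so minimizing sum(h) minimizes the mean width; bounds enforce h >= C.
res = linprog(c=np.ones(M), A_ub=A_ub, b_ub=np.zeros(M),
              bounds=[(ci, None) for ci in C])
h = res.x                                 # support function of the minimizer
```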

We propose a novel sparse sliced inverse regression method based on random projections in a large-$p$, small-$n$ setting. Embedded in a generalized eigenvalue framework, the proposed approach ultimately reduces to the parallel execution of low-dimensional (generalized) eigenvalue decompositions, which facilitates high computational efficiency. Theoretically, we prove that this method achieves the minimax-optimal rate of convergence under suitable assumptions. Furthermore, our algorithm involves a delicate reweighting scheme, which can significantly enhance the identifiability of the active set of covariates. Extensive numerical studies demonstrate the superiority of the proposed algorithm in comparison to competing methods.
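The sketch below conveys the idea in Python: classical SIR matrices are formed on randomly chosen low-dimensional coordinate subsets, small generalized eigenproblems are solved independently (hence parallelizably), and per-coordinate importances are aggregated. Both the use of coordinate subsampling as the projection and the aggregation rule are illustrative simplifications, not the paper's exact scheme.

```python
import numpy as np
from scipy.linalg import eigh

def sir_matrices(X, y, n_slices=10):
    """Classical SIR: between-slice matrix M and sample covariance S."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    M = np.zeros((p, p))
    for sl in np.array_split(np.argsort(y), n_slices):
        mu = Xc[sl].mean(axis=0)
        M += (len(sl) / n) * np.outer(mu, mu)
    return M, S

def rp_sir_scores(X, y, d=10, n_proj=200, seed=0):
    """Aggregate top generalized-eigenpair weights over random subsets."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    scores = np.zeros(p)
    for _ in range(n_proj):
        idx = rng.choice(p, size=d, replace=False)  # random low-dim projection
        M, S = sir_matrices(X[:, idx], y)
        w, V = eigh(M, S + 1e-8 * np.eye(d))        # small generalized EVD
        scores[idx] += w[-1] * np.abs(V[:, -1])     # top eigenpair contribution
    return scores / n_proj   # larger score suggests an active covariate
```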

We present a linear-time algorithm that, given as input (i) a bipartite Pfaffian graph $G$ of minimum degree three, (ii) a Hamiltonian cycle $H$ in $G$, and (iii) an edge $e$ in $H$, outputs at least three other Hamiltonian cycles through the edge $e$ in $G$. This linear-time complexity of finding another Hamiltonian cycle given one is in sharp contrast to the problem of deciding the existence of a Hamiltonian cycle, which is NP-complete already for cubic bipartite planar graphs; such graphs are Pfaffian. Also, without the degree requirement, we show that it is NP-hard to find another Hamiltonian cycle in a bipartite Pfaffian graph. We present further improved algorithms for finding optimal traveling salesperson tours and counting Hamiltonian cycles in bipartite planar graphs with running times that are not known to hold in general planar graphs. We prove our results by a new structural technique that efficiently witnesses each Hamiltonian cycle $H$ through an arbitrary fixed anchor edge $e$ in a bipartite Pfaffian graph using a two-coloring of the vertices as advice that is unique to $H$. Previous techniques -- the Cut&Count technique of Cygan et al. [FOCS'11, TALG'22] in particular -- were able to reduce the Hamiltonian cycle problem only to essentially counting problems; our results show that counting can be avoided by leveraging properties of bipartite Pfaffian graphs. Our technique also has purely graph-theoretical consequences; for example, we show that every cubic bipartite Pfaffian graph has either zero or at least six distinct Hamiltonian cycles; the latter case is tight for the cube graph.
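The last claim is easy to sanity-check by brute force on the cube graph $Q_3$, which is cubic, bipartite, and planar (hence Pfaffian); the short Python script below enumerates its Hamiltonian cycles and prints 6.

```python
from itertools import permutations

# Vertices of Q3 are 0..7; two vertices are adjacent iff their binary
# labels differ in exactly one bit.
adjacent = lambda u, v: bin(u ^ v).count("1") == 1

count = 0
for perm in permutations(range(1, 8)):   # fix vertex 0 to quotient out rotations
    cycle = (0,) + perm
    if all(adjacent(cycle[i], cycle[(i + 1) % 8]) for i in range(8)):
        count += 1
print(count // 2)  # each cycle is still counted in both directions -> 6
```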

We study quantum speedups in quantum machine learning (QML) by analyzing the quantum singular value transformation (QSVT) framework. QSVT, introduced by [GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup; in particular, a wide variety of QML proposals are applications of QSVT on low-rank classical data. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime up to a small polynomial overhead. We show that, given a matrix $A \in \mathbb{C}^{m\times n}$, a vector $b \in \mathbb{C}^{n}$, a bounded degree-$d$ polynomial $p$, and linear-time pre-processing, we can output a description of a vector $v$ such that $\|v - p(A) b\| \leq \varepsilon\|b\|$ in $\widetilde{\mathcal{O}}(d^{11} \|A\|_{\mathrm{F}}^4 / (\varepsilon^2 \|A\|^4 ))$ time. This improves upon the best known classical algorithm [CGLLTW, STOC'20, arXiv:1910.06151], which requires $\widetilde{\mathcal{O}}(d^{22} \|A\|_{\mathrm{F}}^6 /(\varepsilon^6 \|A\|^6 ) )$ time, and narrows the gap with QSVT, which, after linear-time pre-processing to load the input into a quantum-accessible memory, can estimate the magnitude of an entry of $p(A)b$ to $\varepsilon\|b\|$ error in $\widetilde{\mathcal{O}}(d\|A\|_{\mathrm{F}}/(\varepsilon \|A\|))$ time. Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically. We introduce several new classical techniques in this work, including (a) a non-oblivious matrix sketch for approximately preserving bilinear forms, (b) a new stability analysis for the Clenshaw recurrence, and (c) a new technique to bound arithmetic progressions of the coefficients appearing in the Chebyshev series expansion of bounded functions, each of which may be of independent interest.
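The Clenshaw recurrence at the core of the algorithm is short to state: given the Chebyshev coefficients $c_0,\dots,c_d$ of $p$, it evaluates $p(A)b$ using only matrix-vector products. The Python sketch below is the exact (unsketched) recurrence, assuming $\|A\| \leq 1$ so that the Chebyshev basis is the natural one; the paper's algorithm replaces the matvecs with sketched ones and supplies the stability analysis.

```python
import numpy as np

def clenshaw_matvec(c, A, b):
    """Evaluate p(A) b for p(x) = sum_j c[j] * T_j(x) (Chebyshev basis)."""
    u = np.zeros_like(b, dtype=float)       # plays the role of b_{k+1}
    u_next = np.zeros_like(b, dtype=float)  # plays the role of b_{k+2}
    for cj in reversed(c[1:]):              # k = d, d-1, ..., 1
        u, u_next = cj * b + 2.0 * (A @ u) - u_next, u
    return c[0] * b + A @ u - u_next
```

For instance, `clenshaw_matvec([0.0, 0.0, 1.0], A, b)` returns $T_2(A)b = 2A^2b - b$ without ever forming $A^2$.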

We provide an $O(\log^6 \log n)$-round randomized algorithm for distance-2 coloring in CONGEST with $\Delta^2+1$ colors. For $\Delta\gg\operatorname{poly}\log n$, this improves exponentially on the $O(\log\Delta+\operatorname{poly}\log\log n)$-round algorithm of [Halld\'orsson, Kuhn, Maus, Nolin, DISC'20]. Our study is motivated by the ubiquity and hardness of local reductions in CONGEST. For instance, algorithms for the Local Lov\'asz Lemma [Moser, Tardos, JACM'10; Fischer, Ghaffari, DISC'17; Davies, SODA'23] usually assume communication on the conflict graph, which can be simulated in LOCAL with only constant overhead but may be prohibitively expensive in CONGEST. We hope our techniques will help tackle other coloring problems defined by local relations in CONGEST.
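For context on the problem itself (not on the distributed algorithm), the sequential greedy sketch below produces a distance-2 coloring with $\Delta^2+1$ colors: every vertex has at most $\Delta^2$ other vertices within two hops, so a free color always exists. The CONGEST round complexity, which is the paper's actual subject, is not modeled here.

```python
def distance2_greedy(adj):
    """Greedy distance-2 coloring; adj maps each vertex to a set of neighbors."""
    delta = max(len(nbrs) for nbrs in adj.values())
    palette = delta * delta + 1
    color = {}
    for v in adj:
        # Colors already used within distance 2 of v.
        seen = {color[u] for u in adj[v] if u in color}
        seen |= {color[w] for u in adj[v] for w in adj[u]
                 if w != v and w in color}
        color[v] = next(c for c in range(palette) if c not in seen)
    return color

# Tiny example: on the path 0 - 1 - 2, all three vertices must get
# distinct colors, since each pair is within distance 2.
print(distance2_greedy({0: {1}, 1: {0, 2}, 2: {1}}))
```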

Given a heterogeneous Gaussian sequence model with unknown mean $\theta \in \mathbb R^d$ and known covariance matrix $\Sigma = \operatorname{diag}(\sigma_1^2,\dots, \sigma_d^2)$, we study the signal detection problem against sparse alternatives for known sparsity $s$. Namely, we characterize how large $\epsilon^*>0$ should be in order to distinguish, with high probability, the null hypothesis $\theta=0$ from the alternative composed of $s$-sparse vectors in $\mathbb R^d$ separated from $0$ in $L^t$ norm ($t \in [1,\infty]$) by at least $\epsilon^*$. We derive upper and lower bounds on the minimax separation radius $\epsilon^*$ and prove that they always match. We also derive the corresponding minimax tests achieving these bounds. Our results reveal new phase transitions regarding the behavior of $\epsilon^*$ with respect to the level of sparsity, to the $L^t$ metric, and to the heteroscedasticity profile of $\Sigma$. In the case of Euclidean (i.e., $L^2$) separation, we bridge the remaining gaps in the literature.
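As a simple baseline (not one of the paper's minimax-optimal tests), the Python sketch below standardizes each coordinate and runs a max-type test against a Monte Carlo null quantile; all parameter names are illustrative.

```python
import numpy as np

def sparse_max_test(x, sigma, alpha=0.05, n_mc=10_000, seed=0):
    """Reject H0: theta = 0 when max_i |x_i| / sigma_i exceeds the
    (1 - alpha) null quantile, estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    stat = np.max(np.abs(x) / sigma)
    null = np.max(np.abs(rng.standard_normal((n_mc, len(x)))), axis=1)
    return stat > np.quantile(null, 1.0 - alpha)  # True = reject H0
```

Max-type statistics of this kind are natural for very sparse alternatives; the phase transitions in the paper describe when and how the right test must change with $s$, the $L^t$ metric, and the profile of $\Sigma$.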
