
We give a new deterministic construction of integer sensing matrices that can be used for the recovery of integer-valued signals in compressed sensing. This is a family of $n \times d$ integer matrices, $d \geq n$, with bounded sup-norm and the property that no $\ell$ column vectors are linearly dependent, $\ell \leq n$. Further, if $\ell = o(\log n)$ then $d/n \to \infty$ as $n \to \infty$. Our construction comes from particular sets of difference vectors of point-sets in $\mathbb R^n$ that cannot be covered by few parallel hyperplanes. We construct examples of such sets on the $\{0, \pm 1\}$ grid and use them for the matrix construction. We also show a connection between our constructions and a simple version of the Tarski plank problem.
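
The key property (no $\ell$ columns linearly dependent) can be checked by brute force on small instances. The sketch below uses a hypothetical toy matrix on the $\{0, \pm 1\}$ grid and is only an illustration of the property, not the paper's construction.

```python
import itertools
import numpy as np

def no_l_columns_dependent(A: np.ndarray, l: int) -> bool:
    """Return True if every choice of l columns of A is linearly independent.

    Brute-force check over all column subsets; only practical for tiny matrices.
    """
    n, d = A.shape
    assert l <= n, "any l > n columns are always linearly dependent"
    for cols in itertools.combinations(range(d), l):
        if np.linalg.matrix_rank(A[:, cols]) < l:
            return False
    return True

# Toy 3x4 matrix on the {0, +1, -1} grid (illustrative only).
A = np.array([[ 1,  0,  1,  1],
              [ 0,  1,  1, -1],
              [ 1,  1,  0,  1]])
print(no_l_columns_dependent(A, 3))
```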

Related Content

Minimum cut/maximum flow (min-cut/max-flow) algorithms solve a variety of problems in computer vision, and significant effort has therefore been put into developing fast min-cut/max-flow algorithms. As a result, it is difficult to choose an ideal algorithm for a given problem. Furthermore, parallel algorithms have not been thoroughly compared. In this paper, we evaluate the state-of-the-art serial and parallel min-cut/max-flow algorithms on the largest set of computer vision problems yet. We focus on generic algorithms, i.e., algorithms for unstructured graphs, but also compare with the specialized GridCut implementation. When applicable, GridCut performs best. Otherwise, the two pseudoflow algorithms, Hochbaum pseudoflow and excesses incremental breadth-first search, achieve the overall best performance. The most memory-efficient implementation tested is the Boykov-Kolmogorov algorithm. Amongst generic parallel algorithms, we find the bottom-up merging approach by Liu and Sun to be best, but no method is dominant. Of the generic parallel methods, only the parallel preflow push-relabel algorithm scales efficiently with many processors across problem sizes, and no generic parallel method consistently outperforms the serial algorithms. Finally, we provide and evaluate strategies for algorithm selection to obtain good expected performance. We make our dataset and implementations publicly available for further research.
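
As a point of reference for the algorithms being compared, the sketch below runs the Boykov-Kolmogorov algorithm on a toy graph using NetworkX. It only illustrates the min-cut/max-flow interface on a tiny hypothetical instance, not the benchmark setup from the paper.

```python
import networkx as nx
from networkx.algorithms.flow import boykov_kolmogorov

# Tiny directed graph with edge capacities (toy example).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

# Max-flow value and a corresponding minimum s-t cut, both via Boykov-Kolmogorov.
flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=boykov_kolmogorov)
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t", flow_func=boykov_kolmogorov)
print(flow_value, cut_value)  # both equal 5.0 for this toy graph
```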

The hard thresholding technique plays a vital role in the development of algorithms for sparse signal recovery. By merging this technique with the heavy-ball acceleration method, a multi-step extension of the traditional gradient descent method, we propose the heavy-ball-based hard thresholding (HBHT) and heavy-ball-based hard thresholding pursuit (HBHTP) algorithms for signal recovery. It turns out that HBHT and HBHTP can successfully recover a $k$-sparse signal if the restricted isometry constant of the measurement matrix satisfies $\delta_{3k}<0.618$ and $\delta_{3k}<0.577,$ respectively. The guaranteed success of HBHT and HBHTP is also shown under the conditions $\delta_{2k}<0.356$ and $\delta_{2k}<0.377,$ respectively. Moreover, the finite convergence and stability of the two algorithms are established in this paper. Simulations on random problem instances are performed to compare the proposed algorithms with several existing ones. Empirical results indicate that HBHTP performs comparably to a few existing algorithms while taking less average time to achieve signal recovery.
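
The abstract does not spell out the update rule, so the following is one plausible form of a heavy-ball accelerated hard-thresholding iteration: a gradient step on the least-squares objective, a momentum term, and a projection onto the $k$ largest-magnitude entries. The step size, momentum weight, and iteration count are illustrative assumptions, not the exact HBHT/HBHTP of the paper.

```python
import numpy as np

def hard_threshold(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def heavy_ball_ht(A, y, k, alpha=1.0, beta=0.5, iters=200):
    """Illustrative heavy-ball hard-thresholding iteration (sketch, not the paper's exact method)."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad_step = x + alpha * A.T @ (y - A @ x)   # gradient step on 0.5 * ||y - Ax||^2
        momentum = beta * (x - x_prev)              # heavy-ball momentum term
        x_prev, x = x, hard_threshold(grad_step + momentum, k)
    return x

# Toy recovery of a 3-sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.5, -2.0, 1.0]
x_hat = heavy_ball_ht(A, A @ x_true, k=3)
print(np.linalg.norm(x_hat - x_true))
```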

Learning Markov decision processes (MDPs) in the presence of an adversary is a challenging problem in reinforcement learning (RL). In this paper, we study RL in episodic MDPs with adversarial rewards and full-information feedback, where the unknown transition probability function is a linear function of a given feature mapping, and the reward function can change arbitrarily from episode to episode. We propose an optimistic policy optimization algorithm, POWERS, and show that it achieves $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of an episode, $T$ is the number of interactions with the MDP, and $d$ is the dimension of the feature mapping. Furthermore, we prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$, so the upper bound is tight up to logarithmic factors. Our key technical contributions are two-fold: (1) a new value function estimator based on importance weighting; and (2) a tighter confidence set for the transition kernel. Together they lead to the nearly minimax optimal regret.
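
For readers unfamiliar with the adversarial setting, the block below states one standard formalization of the regret being bounded. This is an assumed convention, given only for illustration; the paper's precise definition may differ in details such as whether $T$ counts episodes or steps.

```latex
% Regret over K episodes (T = KH interactions), measured against the best fixed
% policy in hindsight; V^{\pi}_{k,1} is the value of policy \pi at the initial
% state s^k_1 under the episode-k reward.  Assumed standard convention.
\[
  \operatorname{Regret}(K)
  \;=\;
  \max_{\pi}\;\sum_{k=1}^{K}\Bigl( V^{\pi}_{k,1}\bigl(s^{k}_{1}\bigr) - V^{\pi_k}_{k,1}\bigl(s^{k}_{1}\bigr)\Bigr).
\]
```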

We introduce a new distortion measure for point processes called functional-covering distortion. It is inspired by intensity theory and is related both to the covering of point processes and to logarithmic-loss distortion. We obtain the distortion-rate function with feedforward under this distortion measure for a large class of point processes. For Poisson processes, the rate-distortion function is obtained under a more general distortion measure, constrained functional-covering distortion, of which both covering and functional-covering are special cases. Also for Poisson processes, we characterize the rate-distortion region for a two-encoder CEO problem and show that feedforward does not enlarge this region.

We consider the problem of enumerating optimal solutions for two hypergraph $k$-partitioning problems -- namely, Hypergraph-$k$-Cut and Minmax-Hypergraph-$k$-Partition. The input in hypergraph $k$-partitioning problems is a hypergraph $G=(V, E)$ with positive hyperedge costs along with a fixed positive integer $k$. The goal is to find a partition of $V$ into $k$ non-empty parts $(V_1, V_2, \ldots, V_k)$ -- known as a $k$-partition -- so as to minimize an objective of interest.

1. If the objective of interest is the maximum cut value of the parts, then the problem is known as Minmax-Hypergraph-$k$-Partition. A subset of hyperedges is a minmax-$k$-cut-set if it is the subset of hyperedges crossing an optimum $k$-partition for Minmax-Hypergraph-$k$-Partition.

2. If the objective of interest is the total cost of hyperedges crossing the $k$-partition, then the problem is known as Hypergraph-$k$-Cut. A subset of hyperedges is a min-$k$-cut-set if it is the subset of hyperedges crossing an optimum $k$-partition for Hypergraph-$k$-Cut.

We give the first polynomial bound on the number of minmax-$k$-cut-sets and a polynomial-time algorithm to enumerate all of them in hypergraphs for every fixed $k$. Our technique is strong enough to also enable an $n^{O(k)}p$-time deterministic algorithm to enumerate all min-$k$-cut-sets in hypergraphs, thus improving on the previously known $n^{O(k^2)}p$-time deterministic algorithm, where $n$ is the number of vertices and $p$ is the size of the hypergraph. The correctness analysis of our enumeration approach relies on a structural result that is a strong and unifying generalization of known structural results for Hypergraph-$k$-Cut and Minmax-Hypergraph-$k$-Partition. We believe that our structural result is likely to be of independent interest in the theory of hypergraphs (and graphs).
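
The definitions above can be made concrete with a brute-force reference on a tiny hypergraph: enumerate all $k$-partitions, compute the crossing hyperedges of each, and collect the cut-sets attaining the optimum. The sketch below does this for the sum objective (Hypergraph-$k$-Cut); the minmax variant would only change the objective to the maximum per-part cut cost. This is exponential-time illustration code, not the paper's enumeration algorithm.

```python
from itertools import product

def k_partitions(vertices, k):
    """Yield every unordered partition of `vertices` into k non-empty parts (brute force)."""
    vertices = list(vertices)
    for labels in product(range(k), repeat=len(vertices)):
        # Keep only canonical labelings (label i first appears before label i+1),
        # so each unordered partition is generated exactly once.
        first = {}
        for lab in labels:
            first.setdefault(lab, len(first))
        if list(first) != list(range(k)):
            continue
        yield [frozenset(v for v, lab in zip(vertices, labels) if lab == i) for i in range(k)]

def crossing(hyperedge, parts):
    """A hyperedge crosses a partition if it intersects at least two parts."""
    return sum(1 for p in parts if hyperedge & p) >= 2

def min_k_cut_sets(V, E, costs, k):
    """All min-k-cut-sets (as sets of hyperedge indices) and the optimum cut value."""
    best, answers = float("inf"), set()
    for parts in k_partitions(V, k):
        cut = frozenset(i for i, e in enumerate(E) if crossing(e, parts))
        value = sum(costs[i] for i in cut)
        if value < best:
            best, answers = value, {cut}
        elif value == best:
            answers.add(cut)
    return best, answers

# Toy hypergraph: 4 vertices, 3 unit-cost hyperedges, k = 2.
V = {1, 2, 3, 4}
E = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({3, 4})]
print(min_k_cut_sets(V, E, costs=[1, 1, 1], k=2))  # note: there can be several min-k-cut-sets
```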

We consider statistical models arising from the common set of solutions to a sparse polynomial system with general coefficients. The maximum likelihood degree counts the number of critical points of the likelihood function restricted to the model. We prove the maximum likelihood degree of a sparse polynomial system is determined by its Newton polytopes and equals the mixed volume of a related Lagrange system of equations.
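
As context, a common way to set up the critical equations behind the maximum likelihood degree is via Lagrange multipliers. The block below states this standard convention with generic notation for illustration only; it is not necessarily the exact formulation used in the paper.

```latex
% Generic data u = (u_1,\dots,u_m); model cut out by polynomials f_1 = \dots = f_r = 0
% in coordinates p = (p_1,\dots,p_m).  (Assumed standard setup, shown for illustration.)
\[
  \ell_u(p) \;=\; \sum_{i=1}^{m} u_i \log p_i ,
  \qquad
  \nabla \ell_u(p) \;=\; \sum_{j=1}^{r} \lambda_j \,\nabla f_j(p),
  \qquad
  f_1(p) = \dots = f_r(p) = 0 .
\]
% For generic u, the number of complex solutions (p, \lambda) with all p_i \neq 0
% is the maximum likelihood degree of the model.
```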

We study the problem of testing whether a function $f: \mathbb{R}^n \to \mathbb{R}$ is a polynomial of degree at most $d$ in the \emph{distribution-free} testing model. Here, the distance between functions is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathbb{R}^n$ from which we can draw samples. In contrast to previous work, we do not assume that $\mathcal{D}$ has finite support. We design a tester that given query access to $f$, and sample access to $\mathcal{D}$, makes $(d/\varepsilon)^{O(1)}$ many queries to $f$, accepts with probability $1$ if $f$ is a polynomial of degree at most $d$, and rejects with probability at least $2/3$ if every degree-$d$ polynomial $P$ disagrees with $f$ on a set of mass at least $\varepsilon$ with respect to $\mathcal{D}$. Our result also holds under mild assumptions when we receive only a polynomial number of bits of precision for each query to $f$, or when $f$ can only be queried on rational points representable using a logarithmic number of bits. Along the way, we prove a new stability theorem for multivariate polynomials that may be of independent interest.
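
The testing model (query access to $f$ plus sample access to the unknown $\mathcal{D}$) can be illustrated with a naive one-dimensional heuristic: interpolate a degree-$d$ polynomial through $d+1$ sampled points, then estimate the disagreement mass on fresh samples. This sketch has none of the paper's guarantees or robustness to limited precision; it only makes the two oracles and the accept/reject behavior concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_degree_test(f, sample_D, d, eps):
    """Naive 1-D illustration of distribution-free degree testing (not the paper's tester)."""
    xs = np.array([sample_D() for _ in range(d + 1)])          # sample access to D
    ys = [f(x) for x in xs]                                     # query access to f
    P = np.polynomial.polynomial.Polynomial.fit(xs, ys, deg=d)  # interpolating polynomial
    fresh = np.array([sample_D() for _ in range(int(np.ceil(4 / eps)))])
    disagreements = np.sum(~np.isclose(P(fresh), [f(x) for x in fresh]))
    return "accept" if disagreements == 0 else "reject"

sample_D = lambda: rng.standard_normal()  # the unknown distribution D (here: a Gaussian)
print(naive_degree_test(lambda x: 2 * x**3 - x + 1, sample_D, d=3, eps=0.1))  # should accept
print(naive_degree_test(lambda x: np.sign(x), sample_D, d=3, eps=0.1))        # should reject w.h.p.
```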

Given two strings $T$ and $S$ and a set of strings $P$, for each string $p \in P$, consider the unique substrings of $T$ that have $p$ as their prefix and $S$ as their suffix. Two problems then arise: the first is to count such substrings, and the second is to list all of them. In this paper, we describe linear-time, linear-space suffix-tree-based algorithms for both problems. More specifically, we describe an $O(|T| + |P|)$-time algorithm for the counting problem and an $O(|T| + |P| + \#(ans))$-time algorithm for the listing problem, where $\#(ans)$ refers to the total number of strings being listed and $|P|$ refers to the total length of the strings in $P$. We also consider the reversed version of the problems, where one prefix condition string and multiple suffix condition strings are given instead, and similarly describe linear-time, linear-space algorithms to solve them.
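
A brute-force reference makes the two problems concrete: enumerate every distinct substring of $T$, keep those that start with $p$ and end with $S$, and report either the count or the list. This quadratic-time sketch is only for illustration; it is not the linear-time suffix-tree algorithm described above.

```python
def substrings_with_prefix_and_suffix(T: str, p: str, S: str) -> set[str]:
    """All distinct substrings of T that have p as prefix and S as suffix (brute force)."""
    found = set()
    for i in range(len(T)):
        for j in range(i + 1, len(T) + 1):
            sub = T[i:j]
            if sub.startswith(p) and sub.endswith(S):
                found.add(sub)
    return found

T, S, P = "abababa", "ba", ["a", "ab"]
for p in P:
    matches = substrings_with_prefix_and_suffix(T, p, S)
    print(p, len(matches), sorted(matches))  # the count (problem 1) and the listing (problem 2)
```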

This paper presents new deterministic and distributed low-diameter decomposition algorithms for weighted graphs. In particular, we show that if one can efficiently compute approximate distances in a parallel or a distributed setting, one can also efficiently compute low-diameter decompositions. This consequently implies solutions to many fundamental distance-based problems using a polylogarithmic number of approximate distance computations. Our low-diameter decomposition generalizes and extends the line of work starting from [Rozho\v{n}, Ghaffari STOC 2020] to weighted graphs in a very model-independent manner. Moreover, our clustering results have additional useful properties, including strong-diameter guarantees, separation properties, restricting cluster centers to specified terminals, and more. Applications include:

-- The first near-linear work and polylogarithmic depth randomized and deterministic parallel algorithm for low-stretch spanning trees (LSST) with polylogarithmic stretch. Previously, the best parallel LSST algorithm required $m \cdot n^{o(1)}$ work and $n^{o(1)}$ depth and was inherently randomized. No deterministic LSST algorithm with truly sub-quadratic work and sub-linear depth was known.

-- The first near-linear work and polylogarithmic depth deterministic algorithm for computing an $\ell_1$-embedding into polylogarithmic-dimensional space with polylogarithmic distortion. The best prior deterministic algorithms for $\ell_1$-embeddings either require large polynomial work or are inherently sequential.

Even when we apply our techniques to the classical problem of computing a ball-carving with strong-diameter $O(\log^2 n)$ in an unweighted graph, our new clustering algorithm still leads to an improvement in round complexity from $O(\log^{10} n)$ rounds [Chang, Ghaffari PODC 21] to $O(\log^{4} n)$.
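
For intuition about what a low-diameter decomposition produces, the sketch below implements the classical randomized exponential-shift clustering in the style of Miller, Peng, and Xu: each vertex draws an exponential shift and joins the cluster of the center minimizing shifted distance. This is a sequential, randomized relative of the decompositions discussed above, not the deterministic parallel algorithm of the paper.

```python
import random
import networkx as nx

def exponential_shift_clustering(G, beta=0.5, seed=0):
    """Randomized low-diameter clustering sketch: vertex v draws delta_v ~ Exp(beta)
    and is assigned to the center u minimizing dist(u, v) - delta_u.
    Larger beta tends to give smaller clusters but cuts more edges."""
    rng = random.Random(seed)
    delta = {v: rng.expovariate(beta) for v in G}
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    return {v: min(G, key=lambda u: dist[u][v] - delta[u]) for v in G}

# Toy weighted graph: a path with unit edge weights.
G = nx.path_graph(8)
nx.set_edge_attributes(G, 1.0, "weight")
print(exponential_shift_clustering(G))  # vertex -> cluster center
```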

SVD (singular value decomposition) is one of the basic tools of machine learning, allowing one to optimize a basis for a given matrix. However, sometimes we have a set of matrices $\{A_k\}_k$ instead and would like to optimize a single common basis for them: find orthogonal matrices $U$, $V$ such that the set of matrices $\{U^T A_k V\}$ is somehow simpler. For example, DCT-II is an orthonormal basis of functions commonly used in image/video compression; as discussed here, this kind of basis can be quickly and automatically optimized for a given dataset. While gradient descent optimization, also discussed here, can be computationally costly, we propose CSVD (common SVD): a fast general approach based on SVD. Specifically, we choose $U$ as built of eigenvectors of $\sum_k (w_k)^q (A_k A_k^T)^p$ and $V$ of eigenvectors of $\sum_k (w_k)^q (A_k^T A_k)^p$, where $w_k$ are their weights and $p,q>0$ are some chosen powers, e.g. $1/2$, optionally with normalization, e.g. $A \to A - rc^T$ where $r_i=\sum_j A_{ij}$, $c_j =\sum_i A_{ij}$.
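
The recipe above translates fairly directly into NumPy. The sketch below is an illustrative reading of it; the ordering of eigenvectors, the default weights, and the omission of the optional normalization are assumptions on my part rather than conventions fixed by the text.

```python
import numpy as np

def csvd_bases(As, weights=None, p=1.0, q=1.0):
    """Common-SVD sketch: one orthogonal pair (U, V) for a set of matrices.

    U = eigenvectors of  sum_k w_k^q (A_k A_k^T)^p
    V = eigenvectors of  sum_k w_k^q (A_k^T A_k)^p
    Matrix powers are taken via eigendecomposition of the symmetric PSD factors.
    """
    weights = np.ones(len(As)) if weights is None else np.asarray(weights, float)

    def sym_power(M, power):
        vals, vecs = np.linalg.eigh(M)  # M is symmetric positive semidefinite
        return vecs @ np.diag(np.clip(vals, 0, None) ** power) @ vecs.T

    SU = sum(w**q * sym_power(A @ A.T, p) for w, A in zip(weights, As))
    SV = sum(w**q * sym_power(A.T @ A, p) for w, A in zip(weights, As))
    U = np.linalg.eigh(SU)[1][:, ::-1]  # columns ordered by decreasing eigenvalue
    V = np.linalg.eigh(SV)[1][:, ::-1]
    return U, V

# Toy usage: a common basis pair for three random 8x6 matrices.
rng = np.random.default_rng(0)
As = [rng.standard_normal((8, 6)) for _ in range(3)]
U, V = csvd_bases(As, p=0.5, q=1.0)
print([(U.T @ A @ V).shape for A in As])  # each transformed matrix stays 8x6
```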
