Given two matroids $\mathcal{M}_1 = (V, \mathcal{I}_1)$ and $\mathcal{M}_2 = (V, \mathcal{I}_2)$ over an $n$-element integer-weighted ground set $V$, the weighted matroid intersection problem asks for a common independent set $S^{*} \in \mathcal{I}_1 \cap \mathcal{I}_2$ of maximum weight. In this paper, we present a simple deterministic algorithm for weighted matroid intersection using $\tilde{O}(nr^{3/4}\log{W})$ rank queries, where $r$ is the size of the largest intersection of $\mathcal{M}_1$ and $\mathcal{M}_2$ and $W$ is the maximum weight. This improves upon the best previously known $\tilde{O}(nr\log{W})$ algorithm, given by Lee, Sidford, and Wong [FOCS'15], and is the first subquadratic algorithm for polynomially bounded weights under the standard independence or rank oracle models. The main contribution of this paper is an efficient algorithm for computing shortest-path trees in weighted exchange graphs.
The Matroid Secretary Conjecture is a notorious open problem in online optimization. It claims the existence of an $O(1)$-competitive algorithm for the Matroid Secretary Problem (MSP). Here, the elements of a weighted matroid appear one by one, revealing their weight upon appearance, and the task is to select elements online so as to obtain an independent set of maximum possible weight. $O(1)$-competitive MSP algorithms have so far only been obtained for restricted matroid classes and for MSP variations, including Random-Assignment MSP (RA-MSP), where an adversary fixes a number of weights equal to the ground set size of the matroid, which are then assigned randomly to the elements of the ground set. Unfortunately, these approaches rely heavily on knowing the full matroid upfront. This is an arguably undesirable requirement, and there are good reasons to believe that an approach towards resolving the MSP Conjecture should not rely on it. Thus, both Soto [SIAM Journal on Computing 2013] and Oveis Gharan & Vondrak [Algorithmica 2013] raised as an open question whether RA-MSP admits an $O(1)$-competitive algorithm even without knowing the matroid upfront. In this work, we answer this question affirmatively. Our result makes RA-MSP the first well-known MSP variant with an $O(1)$-competitive algorithm that neither needs to know the underlying matroid upfront nor places any restriction on it. Our approach is based on first approximately learning the rank-density curve of the matroid, which we then exploit algorithmically.
Motivated by applications to noncoherent network coding, we study subspace codes defined by sets of linear cellular automata (CA). As a first remark, we show that a family of linear CA whose local rules have the same diameter -- and whose associated polynomials thus have the same degree -- induces a Grassmannian code. Then, we prove that the minimum distance of such a code is determined by the maximum degree occurring among the pairwise greatest common divisors (GCD) of the polynomials in the family. Finally, we consider the setting where all such polynomials have the same GCD, and determine the cardinality of the corresponding Grassmannian code. As a particular case, we show that if all polynomials in the family are pairwise coprime, the resulting Grassmannian code achieves the highest possible minimum distance.
In this paper we present new arithmetical and algebraic results following the work of Babindamana et al. on hyperbolas, and use them to describe an approach to attacking an RSA-type modulus based on continued fractions that, unlike Wiener's attack, is independent of and not bounded by the size of the private key $d$ or the public exponent $e$. When successful, this attack runs in $\displaystyle\mathcal{O}\left( b\log{\alpha_{j4}}\log{(\alpha_{i3}+\alpha_{j3})}\right)$ with $b=10^{y}$, $\alpha_{i3}+\alpha_{j3}$ a nontrivial factor of $n$, and $\alpha_{j4}$ such that $(n+1)/(n-1)=\alpha_{i4}/\alpha_{j4}$. The primary goal of this attack is to find a point $\displaystyle X_{\alpha_{3}}=\left(-\alpha_{3}, \ \alpha_{3}+1 \right) \in \mathbb{Z}^{2}_{\star}$ that satisfies $\displaystyle\left\langle X_{\alpha_{3}}, \ P_{3} \right\rangle =0$ from a convergent of $\displaystyle\frac{\alpha_{i4}}{\alpha_{j4}}+\delta$, with $P_{3}\in \mathcal{B}_{n}(x, y)_{\mid_{x\geq 4n}}$. We finally present some experimental examples. We believe these results constitute a new direction in RSA cryptanalysis using continued fractions, independently of the parameters $e$ and $d$.
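The attack extracts candidate points from convergents of $\alpha_{i4}/\alpha_{j4}+\delta$. As a generic illustration (a sketch of the standard continued-fraction recurrence, not the authors' implementation), the convergents of a rational number can be enumerated as follows:

```python
from fractions import Fraction

def convergents(num, den):
    """Yield the continued-fraction convergents p/q of num/den."""
    # Standard recurrence: p_k = a_k p_{k-1} + p_{k-2}, same for q_k,
    # where the partial quotients a_k come from the Euclidean algorithm.
    p0, p1 = 1, 0
    q0, q1 = 0, 1
    while den:
        a, rem = divmod(num, den)
        p0, p1 = a * p0 + p1, p0
        q0, q1 = a * q0 + q1, q0
        num, den = den, rem
        yield Fraction(p0, q0)

# Example: 649/200 = [3; 4, 12, 4]
print(list(convergents(649, 200)))  # -> [3, 13/4, 159/49, 649/200]
```

In a Wiener-style setting one would test each convergent as a candidate for the sought relation; here each convergent of $\alpha_{i4}/\alpha_{j4}+\delta$ would be checked against the orthogonality condition above.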
The trade algorithm, which includes the curveball and fastball implementations, is the state of the art for uniformly sampling $r \times c$ binary matrices with fixed row and column sums. The mixing time of the trade algorithm is unknown, although $5r$ is commonly used as a heuristic. We propose a distribution-based approach to estimating the mixing time that can also return a sample of matrices that are nearly guaranteed to be uniformly sampled. In numerical experiments on matrices that vary by size, fill, and row and column sum distributions, we find that the upper bound on the mixing time is at least $10r$, and that it increases as a function of both $c$ and the fraction of cells containing a 1.
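A minimal sketch of a single curveball-style trade, assuming the matrix is stored as one set of column indices per row; this illustrates the trade move that the chain iterates (it is not the fastball implementation):

```python
import random

def trade(rows, rng=random):
    """One curveball 'trade': redistribute ones between two random rows
    while preserving every row sum and column sum of the binary matrix."""
    i, j = rng.sample(range(len(rows)), 2)
    shared = rows[i] & rows[j]
    # Pool the columns held by exactly one of the two rows, then redeal
    # them at random; each row keeps its original number of ones.
    pool = list((rows[i] | rows[j]) - shared)
    rng.shuffle(pool)
    cut = len(rows[i]) - len(shared)
    rows[i] = shared | set(pool[:cut])
    rows[j] = shared | set(pool[cut:])

# Start from any matrix with the desired margins and apply repeated trades
# (e.g., the 5r heuristic, or the larger bound suggested above).
rows = [{0, 1}, {1, 2}, {0, 2}]  # 3x3 matrix, all row and column sums = 2
for _ in range(15):
    trade(rows)
print([len(r) for r in rows])    # row sums preserved: [2, 2, 2]
```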
According to Aistleitner and Weimar, there exist two-dimensional (double) infinite matrices whose star-discrepancy $D_N^{*s}$ of the first $N$ rows and $s$ columns, interpreted as $N$ points in $[0,1]^s$, satisfies an inequality of the form $$D_N^{*s} \leq \sqrt{\alpha} \sqrt{A+B\frac{\ln(\log_2(N))}{s}}\sqrt{\frac{s}{N}}$$ with $\alpha = \zeta^{-1}(2) \approx 1.73$, $A=1165$ and $B=178$. These matrices are obtained by using i.i.d. sequences, and the parameters $s$ and $N$ refer to the dimension and the sample size respectively. In this paper, we improve their result in two directions: First, we sharpen the inequality so that the constant $A$ is replaced by a value $A_s$ depending on the dimension $s$ and satisfying $A_s<A$ for $s>1$. Second, we generalize the result to the case of the (extreme) discrepancy. The paper is complemented by a section where we show numerical results for the dependence of the parameter $A_s$ on $s$.
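For intuition about the magnitude of these constants, the right-hand side of the Aistleitner-Weimar bound can be evaluated directly; the following is a plain numerical illustration using their reported values $A=1165$, $B=178$, $\alpha\approx 1.73$:

```python
import math

def aw_bound(N, s, A=1165, B=178, alpha=1.73):
    """Right-hand side of the Aistleitner-Weimar star-discrepancy bound
    (alpha ~ zeta^{-1}(2); A and B as reported in their paper)."""
    return (math.sqrt(alpha)
            * math.sqrt(A + B * math.log(math.log2(N)) / s)
            * math.sqrt(s / N))

# With A = 1165, the bound only drops below the trivial value 1 once N
# is large relative to s:
for N in (10**4, 10**7, 10**10):
    print(f"N = {N:>11}, s = 10: bound = {aw_bound(N, 10):.4f}")
```

This makes the motivation for a smaller, dimension-dependent $A_s$ concrete: the constant term dominates the bound for moderate $N$.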
Let a polytope $P$ be defined by a system $A x \leq b$. We consider the problem of counting the number of integer points inside $P$, assuming that $P$ is $\Delta$-modular, where the polytope $P$ is called $\Delta$-modular if all rank-order sub-determinants of $A$ are bounded by $\Delta$ in absolute value. We present a new FPT-algorithm, parameterized by $\Delta$ and by the maximal number of vertices of $P$, where the maximum is taken over all r.h.s. vectors $b$. We show that our algorithm is more efficient for $\Delta$-modular problems than the approach of A. Barvinok et al. To this end, we do not directly compute the short rational generating function for $P \cap \mathbb{Z}^n$, which is commonly used for the considered problem. Instead, we use the dynamic programming principle to compute a particular representation of it in the form of an exponential series that depends on a single variable. We do not rely on Barvinok's unimodular sign decomposition technique at all. Using our new complexity bound, we consider different special cases that may be of independent interest. For example, we give FPT-algorithms for counting integer points in $\Delta$-modular simplices and similar polytopes that have $n + O(1)$ facets. As a special case, for any fixed $m$, we give an FPT-algorithm to count solutions of the unbounded $m$-dimensional $\Delta$-modular subset-sum problem.
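As background for the last claim, the classic one-dimensional dynamic program for counting unbounded subset-sum solutions is sketched below. The paper's FPT-algorithm generalizes this kind of counting to the $m$-dimensional $\Delta$-modular setting via exponential series, so this is context only, not the authors' method:

```python
def count_unbounded_subset_sum(weights, target):
    """Count multisets of items from `weights` (each usable unboundedly
    often) whose values sum to `target` -- the textbook 1-D counting DP."""
    ways = [0] * (target + 1)
    ways[0] = 1  # the empty multiset sums to 0
    # Processing one weight at a time counts combinations, not orderings.
    for w in weights:
        for t in range(w, target + 1):
            ways[t] += ways[t - w]
    return ways[target]

# Multisets of {2, 3, 5} summing to 10:
# {2,2,2,2,2}, {2,2,3,3}, {2,3,5}, {5,5}
print(count_unbounded_subset_sum([2, 3, 5], 10))  # -> 4
```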
We show that the sparsified block elimination algorithm for solving undirected Laplacian linear systems from [Kyng-Lee-Peng-Sachdeva-Spielman STOC'16] directly works for directed Laplacians. Given access to a sparsification algorithm that, on graphs with $n$ vertices and $m$ edges, takes time $\mathcal{T}_{\rm S}(m)$ to output a sparsifier with $\mathcal{N}_{\rm S}(n)$ edges, our algorithm solves a directed Eulerian system on $n$ vertices and $m$ edges to $\epsilon$ relative accuracy in time $$ O(\mathcal{T}_{\rm S}(m) + {\mathcal{N}_{\rm S}(n)\log {n}\log(n/\epsilon)}) + \tilde{O}(\mathcal{T}_{\rm S}(\mathcal{N}_{\rm S}(n)) \log n), $$ where the $\tilde{O}(\cdot)$ notation hides $\log\log(n)$ factors. By previous results, this implies improved runtimes for linear systems in strongly connected directed graphs, PageRank matrices, and asymmetric M-matrices. When combined with slower constructions of smaller Eulerian sparsifiers based on short cycle decompositions, it also gives a solver that runs in $O(n \log^{5}n \log(n / \epsilon))$ time after $O(n^2 \log^{O(1)} n)$ pre-processing. At the core of our analyses are constructions of augmented matrices whose Schur complements encode error matrices.
We consider the problem of using location queries to monitor the congestion potential among a collection of entities moving, with bounded speed but otherwise unpredictably, in $d$-dimensional Euclidean space. Uncertainty in entity locations due to potential motion between queries gives rise to a space of possible entity configurations at each moment in time, with possibly very different congestion properties. We define different measures of what we call the congestion potential of such spaces, in terms of the (dynamic) intersection graph of the uncertainty regions associated with entities, to describe the congestion that might actually occur. Previous work [SoCG'13, EuroCG'14, SICOMP'16, SODA'19], in the same uncertainty model, addressed the problem of minimizing congestion potential using location queries of some bounded frequency. It was shown that it is possible to design a query scheme that is $O(1)$-competitive, in terms of worst-case congestion potential, with other, even clairvoyant query schemes (that know the trajectories of all entities), subject to the same bound on query frequency. In this paper we address the dual problem: how to guarantee a fixed bound on congestion potential while minimizing the query frequency, measured in terms of total number of queries or the minimum spacing between queries (granularity), over any fixed time interval. This complementary objective necessitates quite different algorithms and analyses. Nevertheless, our results parallel those of the earlier papers, specifically tight competitive bounds on required query frequency, with a few surprising differences.
We investigate random matrices whose entries are obtained by applying a nonlinear kernel function to pairwise inner products between $n$ independent data vectors, drawn uniformly from the unit sphere in $\mathbb{R}^d$. This study is motivated by applications in machine learning and statistics, where these kernel random matrices and their spectral properties play significant roles. We establish the weak limit of the empirical spectral distribution of these matrices in a polynomial scaling regime, where $d, n \to \infty$ such that $n / d^\ell \to \kappa$, for some fixed $\ell \in \mathbb{N}$ and $\kappa \in (0, \infty)$. Our findings generalize an earlier result by Cheng and Singer, who examined the same model in the linear scaling regime (with $\ell = 1$). Our work reveals an equivalence principle: the spectrum of the random kernel matrix is asymptotically equivalent to that of a simpler matrix model, constructed as a linear combination of a (shifted) Wishart matrix and an independent matrix sampled from the Gaussian orthogonal ensemble. The aspect ratio of the Wishart matrix and the coefficients of the linear combination are determined by $\ell$ and the expansion of the kernel function in the orthogonal Hermite polynomial basis. Consequently, the limiting spectrum of the random kernel matrix can be characterized as the free additive convolution between a Marchenko-Pastur law and a semicircle law. We also extend our results to cases with data vectors sampled from isotropic Gaussian distributions instead of spherical distributions.
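A minimal numerical sketch of the surrogate model: eigenvalues of an independent sum of a Wishart matrix and a GOE matrix realize the free additive convolution of a Marchenko-Pastur law and a semicircle law, since the two matrices are asymptotically free. The coefficients $0.5$ below are illustrative placeholders, not the Hermite-derived coefficients of any specific kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 2000

# Wishart part: X X^T / p with iid Gaussian entries; its spectrum tends
# to a Marchenko-Pastur law with aspect ratio n/p.
X = rng.standard_normal((n, p))
W = X @ X.T / p

# GOE part: symmetric Gaussian matrix scaled so its spectrum tends to a
# semicircle law on [-2, 2].
A = rng.standard_normal((n, n))
G = (A + A.T) / np.sqrt(2 * n)

# Spectrum of the independent sum approximates the free additive
# convolution of the two limiting laws (illustrative 0.5/0.5 weights).
eigs = np.linalg.eigvalsh(0.5 * W + 0.5 * G)
print(f"spectrum in [{eigs.min():.3f}, {eigs.max():.3f}], "
      f"mean {eigs.mean():.3f}")
```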
In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the back-propagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. We cannot, however, cover all the useful and interesting results concerning matrix decomposition; given the limited scope of this discussion, we omit, for example, separate analyses of Euclidean space, Hermitian space, Hilbert space, and the complex domain. We refer the reader to the literature on linear algebra for a more detailed introduction to these related fields.