In this article, we study testing for independence of two random elements $X$ and $Y$ taking values in an infinite-dimensional space ${\cal{H}}$ (specifically, a real separable Hilbert space equipped with the inner product $\langle ., .\rangle_{\cal{H}}$). In the course of this study, a measure of association is proposed based on the sup-norm difference between the joint probability density function of the bivariate random vector $(\langle l_{1}, X \rangle_{\cal{H}}, \langle l_{2}, Y \rangle_{\cal{H}})$ and the product of the marginal probability density functions of the random variables $\langle l_{1}, X \rangle_{\cal{H}}$ and $\langle l_{2}, Y \rangle_{\cal{H}}$, where $l_{1}\in{\cal{H}}$ and $l_{2}\in{\cal{H}}$ are two arbitrary elements. It is established that the proposed measure of association equals zero if and only if the random elements are independent. To test whether $X$ and $Y$ are independent, the appropriately normalized sample version of the proposed measure of association is used as the test statistic, and the asymptotic distributions of the test statistic under the null hypothesis and under local alternatives are derived. The performance of the new test is investigated on simulated data sets, and its practicability is demonstrated on three real data sets from climatology, biology and chemistry.
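For intuition, the following is a minimal numerical sketch (not the authors' exact statistic or normalization) of the population quantity behind the test: for a single fixed pair of directions $l_1, l_2$, it estimates the sup-norm gap between the joint density of the projections and the product of their marginals using Gaussian kernel density estimates. The discretized curves standing in for elements of ${\cal{H}}$ and the particular choice of directions are illustrative assumptions.

```python
# A minimal numerical sketch (not the authors' exact statistic): for fixed
# directions l1, l2, estimate the sup-norm gap between the joint density of
# the projections and the product of their marginals using Gaussian KDEs.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n, d = 200, 50                      # n samples of discretized curves in R^d (stand-in for H)
X = rng.standard_normal((n, d))
Y = 0.5 * X + rng.standard_normal((n, d))   # dependent toy data

l1 = rng.standard_normal(d); l1 /= np.linalg.norm(l1)   # arbitrary directions in H
l2 = rng.standard_normal(d); l2 /= np.linalg.norm(l2)

u = X @ l1                          # <l1, X>_H for each sample
v = Y @ l2                          # <l2, Y>_H for each sample

joint = gaussian_kde(np.vstack([u, v]))
f_u, f_v = gaussian_kde(u), gaussian_kde(v)

# Approximate the sup-norm over a grid of evaluation points.
gu = np.linspace(u.min(), u.max(), 100)
gv = np.linspace(v.min(), v.max(), 100)
GU, GV = np.meshgrid(gu, gv)
gap = np.abs(joint(np.vstack([GU.ravel(), GV.ravel()]))
             - f_u(GU.ravel()) * f_v(GV.ravel()))
print("estimated sup-norm gap for (l1, l2):", gap.max())
```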
We consider a finite-dimensional vector space $W\subset K^E$ over an arbitrary field $K$ and an arbitrary set $E$. We show that the set $C(W)\subset 2^E$ of minimal supports of $W$ forms the set of circuits of a matroid on $E$. In particular, we show that this matroid is cofinitary (hence tame). When the cardinality of $K$ is large enough (with respect to the cardinality of $E$), the set $trop(W)\subset 2^E$ of all supports of $W$ is itself a matroid. We then apply these results to tropical differential algebraic geometry and study the set of supports $trop(Sol(\Sigma))\subset (2^{\mathbb{N}^{m}})^n$ of the spaces of formal power series solutions $Sol(\Sigma)$ of systems $\Sigma$ of linear differential equations in differential variables $x_1,\ldots,x_n$ with coefficients in the ring ${K}[\![t_1,\ldots,t_m]\!]$. If $\Sigma$ is of differential type zero, then the set $C(Sol(\Sigma))\subset (2^{\mathbb{N}^{m}})^n$ of minimal supports defines a matroid on $E=\mathbb{N}^{mn}$, and if the cardinality of $K$ is large enough, then the set of supports $trop(Sol(\Sigma))$ is itself a matroid on $E$ as well. By applying the fundamental theorem of tropical differential algebraic geometry (fttdag), we give a necessary condition for the set of solutions $Sol(U)$ of a system $U$ of tropical linear differential equations to be a matroid. We also give a counterexample to the fttdag for systems $\Sigma$ of linear differential equations over countable fields; in this case, the set $trop(Sol(\Sigma))$ may fail to be a matroid.
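As a toy illustration (not taken from the paper), the following sketch enumerates a small subspace $W \subset K^E$ with $K = \mathrm{GF}(2)$ and a finite $E$, lists the supports of its nonzero vectors (the analogue of $trop(W)$), and extracts the inclusion-minimal ones (the analogue of $C(W)$); the basis below is an arbitrary choice.

```python
# A small illustrative computation (GF(2), finite E): enumerate a subspace W
# of K^E, list the supports of its nonzero vectors (trop(W)) and extract the
# inclusion-minimal ones (C(W)), i.e. the circuits of the associated matroid.
from itertools import product

E = 5
basis = [(1, 1, 0, 0, 0),
         (0, 1, 1, 0, 0),
         (0, 0, 0, 1, 1)]          # spans W inside GF(2)^E

def span_gf2(basis):
    vecs = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % 2 for i in range(E))
        vecs.add(v)
    return vecs

supports = {frozenset(i for i, x in enumerate(v) if x) for v in span_gf2(basis) if any(v)}
minimal = {S for S in supports if not any(T < S for T in supports)}

print("trop(W):", sorted(map(sorted, supports)))
print("C(W)  :", sorted(map(sorted, minimal)))   # circuits of the matroid on E
```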
According to Aistleitner and Weimar, there exist two-dimensional (double) infinite matrices whose star-discrepancy $D_N^{*s}$ of the first $N$ rows and $s$ columns, interpreted as $N$ points in $[0,1]^s$, satisfies an inequality of the form $$D_N^{*s} \leq \sqrt{\alpha} \sqrt{A+B\frac{\ln(\log_2(N))}{s}}\sqrt{\frac{s}{N}}$$ with $\alpha = \zeta^{-1}(2) \approx 1.73$, $A=1165$ and $B=178$. These matrices are obtained from i.i.d. sequences, and the parameters $s$ and $N$ refer to the dimension and the sample size, respectively. In this paper, we improve their result in two directions. First, we modify the inequality so that the constant $A$ is replaced by a value $A_s$ that depends on the dimension $s$ and satisfies $A_s<A$ for $s>1$. Second, we generalize the result to the (extreme) discrepancy. The paper is complemented by a section with numerical results on the dependence of the parameter $A_s$ on $s$.
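The stated bound is easy to evaluate directly; the following snippet plugs the quoted constants into the inequality for a few illustrative choices of $s$ and $N$ (the chosen values are arbitrary).

```python
# Direct evaluation of the stated bound with the constants quoted from
# Aistleitner and Weimar, for a few sample sizes and dimensions.
import math

ALPHA = 1.73          # approx. zeta^{-1}(2)
A, B = 1165, 178

def star_discrepancy_bound(N, s):
    return math.sqrt(ALPHA) * math.sqrt(A + B * math.log(math.log2(N)) / s) * math.sqrt(s / N)

for s in (5, 20, 100):
    for N in (10**4, 10**6, 10**8):
        print(f"s={s:4d}  N={N:>9d}  bound={star_discrepancy_bound(N, s):.4f}")
```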
Partitioning a set of elements into subsets of a priori unknown sizes is essential in many applications. These subset sizes are rarely learned explicitly, whether they are the cluster sizes in clustering applications or the numbers of shared versus independent generative latent factors in weakly-supervised learning. Probability distributions over valid combinations of subset sizes are non-differentiable due to hard constraints, which prohibits gradient-based optimization. In this work, we propose the differentiable hypergeometric distribution. The hypergeometric distribution models the probability of different group sizes based on their relative importance. We introduce reparameterizable gradients to learn the relative importance of groups and highlight the advantage of explicitly learning subset sizes in two typical applications: weakly-supervised learning and clustering. In both applications, we outperform previous approaches that rely on suboptimal heuristics to model the unknown sizes of groups.
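For background (this is not the paper's differentiable construction), the probability of group sizes under a relative-importance weight can be written as Fisher's noncentral hypergeometric distribution; the sketch below computes its pmf for two groups, with the group sizes and weight chosen arbitrarily.

```python
# Illustrative only: Fisher's noncentral hypergeometric pmf for two groups,
# where the weight w encodes the relative importance of group 1 over group 2.
import numpy as np
from scipy.special import comb

def noncentral_hypergeom_pmf(m1, m2, n, w):
    """P(K = k): draw n items from m1 + m2, group-1 items weighted by w."""
    k = np.arange(max(0, n - m2), min(n, m1) + 1)
    unnorm = comb(m1, k) * comb(m2, n - k) * w ** k
    return k, unnorm / unnorm.sum()

k, p = noncentral_hypergeom_pmf(m1=10, m2=10, n=8, w=3.0)
print(dict(zip(k.tolist(), np.round(p, 4))))   # mass shifts toward larger k as w grows
```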
We introduce and study pawn games, a class of two-player zero-sum turn-based graph games. A turn-based graph game proceeds by placing a token on an initial vertex; the player who controls the vertex on which the token is located chooses its next location. This yields a path in the graph, which determines the winner. Traditionally, the control of vertices is predetermined and fixed. The novelty of pawn games is that control of vertices changes dynamically throughout the game, as follows. Each vertex of a pawn game is owned by a pawn. In each turn, the pawns are partitioned between the two players, and the player who controls the pawn that owns the vertex on which the token is located chooses the next location of the token. Control of pawns changes dynamically throughout the game according to a fixed mechanism. Specifically, we define several grabbing-based mechanisms in which control of at most one pawn transfers at the end of each turn. We study the complexity of solving pawn games, focusing on reachability objectives and parameterizing the problem by the mechanism that is used and by restrictions on pawn ownership of vertices. On the positive side, even though pawn games are exponentially succinct turn-based games, we identify several natural classes that can be solved in PTIME. On the negative side, we identify several EXPTIME-complete classes, where our hardness proofs are based on a new class of games called Lock & Key games, which may be of independent interest.
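As background for the reachability objectives considered here, the following sketch solves an ordinary turn-based reachability game with fixed vertex ownership via the classical attractor computation; pawn games generalize this setting by letting control change dynamically, and the small graph below is an arbitrary example.

```python
# Background sketch: solving an ordinary turn-based reachability game with a
# fixed partition of vertices (the classical attractor computation).  Pawn
# games generalize this setting by letting control change dynamically.
def attractor(vertices, edges, owner, target):
    """Return the set of vertices from which Player 1 can force reaching `target`.
    owner[v] is 1 or 2; edges maps each vertex to its successors."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in win:
                continue
            succ = edges[v]
            if (owner[v] == 1 and any(u in win for u in succ)) or \
               (owner[v] == 2 and succ and all(u in win for u in succ)):
                win.add(v)
                changed = True
    return win

V = {"a", "b", "c", "t"}
E = {"a": ["b", "c"], "b": ["t"], "c": ["c"], "t": ["t"]}
own = {"a": 1, "b": 2, "c": 2, "t": 1}
print(attractor(V, E, own, {"t"}))   # {'a', 'b', 't'}
```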
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, typically caused by the large number of stochastic basis functions required. It therefore becomes crucial to choose effective basis functions so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC) and explore the gPC expansion through the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method.
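To make the ANOVA idea concrete (this is a toy model response, not the paper's Galerkin solver), the following sketch computes the mean and the two first-order ANOVA component functions of a simple function of two independent uniform inputs by Gauss-Legendre quadrature; the variance carried by each component is the kind of information an adaptive ANOVA procedure uses to decide which gPC terms to keep.

```python
# A minimal first-order ANOVA decomposition of a toy response f(y1, y2) with
# i.i.d. uniform inputs on [-1, 1]: mean plus two first-order components.
import numpy as np
from numpy.polynomial.legendre import leggauss

nodes, weights = leggauss(20)          # Gauss-Legendre rule on [-1, 1]
weights = weights / 2.0                # weights of the uniform density on [-1, 1]

def f(y1, y2):
    return np.exp(0.3 * y1) + np.sin(y2) + 0.1 * y1 * y2

# f0 = E[f];  f1(y1) = E[f | y1] - f0;  f2(y2) = E[f | y2] - f0
f0 = sum(wi * wj * f(yi, yj) for yi, wi in zip(nodes, weights)
                              for yj, wj in zip(nodes, weights))
f1 = lambda y1: sum(wj * f(y1, yj) for yj, wj in zip(nodes, weights)) - f0
f2 = lambda y2: sum(wi * f(yi, y2) for yi, wi in zip(nodes, weights)) - f0

# Variance contributions indicate which component functions (and hence which
# gPC basis functions) matter; an adaptive ANOVA procedure keeps only those.
var1 = sum(wi * f1(yi) ** 2 for yi, wi in zip(nodes, weights))
var2 = sum(wj * f2(yj) ** 2 for yj, wj in zip(nodes, weights))
print(f"mean={f0:.4f}  Var from y1={var1:.4f}  Var from y2={var2:.4f}")
```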
Most existing studies on linear bandits focus on a one-dimensional characterization of the overall system. While representative, this formulation may fail to model applications with high-dimensional but favorable structures, such as the low-rank tensor representations used in recommender systems. To address this limitation, this work studies a general tensor bandits model in which actions and system parameters are represented by tensors rather than vectors, and we focus in particular on the case where the unknown system tensor is low-rank. A novel bandit algorithm, coined TOFU (Tensor Optimism in the Face of Uncertainty), is developed. TOFU first leverages flexible tensor regression techniques to estimate low-dimensional subspaces associated with the system tensor. These estimates are then utilized to convert the original problem into a new one with norm constraints on its system parameters. Lastly, a norm-constrained bandit subroutine is adopted by TOFU, which utilizes these constraints to avoid exploring the entire high-dimensional parameter space. Theoretical analyses show that TOFU improves the best-known regret upper bound by a multiplicative factor that grows exponentially in the system order. A novel performance lower bound is also established, which further corroborates the efficiency of TOFU.
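To illustrate the subspace idea only (TOFU's actual estimates come from tensor regression on bandit observations, not from an SVD of a noisy tensor), the sketch below forms a low-rank third-order tensor, perturbs it, and recovers the dominant mode-1 subspace from the unfolding; all dimensions and noise levels are arbitrary.

```python
# Illustration of the subspace idea (not TOFU's regression step): given a
# noisy estimate of a low-rank system tensor, the dominant singular subspace
# of each mode unfolding captures the low-dimensional structure that the
# bandit algorithm subsequently restricts its exploration to.
import numpy as np

rng = np.random.default_rng(1)
d, r = 8, 2
U1, U2, U3 = (np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(3))
core = rng.standard_normal((r, r, r))
theta = np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)     # rank-(2,2,2) tensor
theta_hat = theta + 0.05 * rng.standard_normal(theta.shape)  # noisy estimate

def mode_subspace(T, mode, rank):
    unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfold, full_matrices=False)
    return U[:, :rank]

U1_hat = mode_subspace(theta_hat, 0, r)
# Principal-angle check: the estimated subspace is close to the true one.
print(np.linalg.svd(U1_hat.T @ U1, compute_uv=False))   # singular values near 1
```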
We investigate random matrices whose entries are obtained by applying a nonlinear kernel function to pairwise inner products between $n$ independent data vectors, drawn uniformly from the unit sphere in $\mathbb{R}^d$. This study is motivated by applications in machine learning and statistics, where these kernel random matrices and their spectral properties play significant roles. We establish the weak limit of the empirical spectral distribution of these matrices in a polynomial scaling regime, where $d, n \to \infty$ such that $n / d^\ell \to \kappa$, for some fixed $\ell \in \mathbb{N}$ and $\kappa \in (0, \infty)$. Our findings generalize an earlier result by Cheng and Singer, who examined the same model in the linear scaling regime (with $\ell = 1$). Our work reveals an equivalence principle: the spectrum of the random kernel matrix is asymptotically equivalent to that of a simpler matrix model, constructed as a linear combination of a (shifted) Wishart matrix and an independent matrix sampled from the Gaussian orthogonal ensemble. The aspect ratio of the Wishart matrix and the coefficients of the linear combination are determined by $\ell$ and the expansion of the kernel function in the orthogonal Hermite polynomial basis. Consequently, the limiting spectrum of the random kernel matrix can be characterized as the free additive convolution between a Marchenko-Pastur law and a semicircle law. We also extend our results to cases with data vectors sampled from isotropic Gaussian distributions instead of spherical distributions.
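A quick way to explore this setup empirically (with an arbitrary kernel and a normalization that may differ in detail from the paper's) is to sample points uniformly on the sphere, apply a nonlinearity entrywise to the rescaled Gram matrix, and inspect the eigenvalues in the polynomial regime $n \approx \kappa d^{\ell}$.

```python
# Empirical check of the setup: data uniform on the sphere in R^d, a kernel
# applied entrywise to the (rescaled) Gram matrix, eigenvalues in the
# polynomial regime n ~ kappa * d^ell (here ell = 2).  The kernel and the
# normalization below are illustrative choices, not the paper's exact ones.
import numpy as np

rng = np.random.default_rng(0)
d, ell, kappa = 30, 2, 1.0
n = int(kappa * d**ell)

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # uniform on the unit sphere
G = X @ X.T                                        # pairwise inner products

f = np.tanh                                        # example nonlinear kernel
K = f(np.sqrt(d) * G)                              # off-diagonal entries are O(1)
np.fill_diagonal(K, 0.0)                           # drop the diagonal contribution
eigs = np.linalg.eigvalsh(K / np.sqrt(n))
print("support of the empirical spectrum:", eigs.min(), eigs.max())
```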
Arguably, the largest class of stochastic processes generated by means of a finite memory consists of those that are sequences of observations produced by sequential measurements in a suitable generalized probabilistic theory (GPT). These are constructed from a finite-dimensional memory evolving under a set of possible linear maps, with the probabilities of outcomes determined by linear functions of the memory state. Examples of such models are classical hidden Markov processes, where the memory state is a probability distribution that evolves at each step according to a non-negative matrix, and hidden quantum Markov processes, where the memory state is a finite-dimensional quantum state that evolves at each step according to a completely positive map. Here we show that processes admitting a finite-dimensional explanation need not be explainable in terms of either classical probability or quantum mechanics. To wit, we exhibit families of processes that have a finite-dimensional explanation, defined manifestly by the dynamics of an explicitly given GPT, but that do not admit a quantum, and therefore not even a classical, explanation in finite dimension. Furthermore, we present a family of quantum processes on qubits and qutrits that do not admit a classical finite-dimensional realization, which includes examples introduced earlier by Fox, Rubin, Dharmadikari and Nadkarni as functions of infinite-dimensional Markov chains, and we lower bound the size of the memory of a classical model realizing a noisy version of the qubit processes.
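The classical case mentioned above is easy to make concrete: in a hidden Markov process the memory state is a probability vector, each output symbol $x$ updates it through a non-negative matrix $A_x$, and word probabilities are linear functions of the memory state. The small two-state, two-symbol example below is an arbitrary instance.

```python
# A concrete instance of the classical case: a hidden Markov process, where
# the memory state is a probability vector and each observed symbol x updates
# it through a non-negative matrix A_x (the A_x sum to a stochastic matrix).
import numpy as np

# Two symbols; A[x][j, i] = P(next memory state j, emit x | memory state i).
A = {0: np.array([[0.5, 0.1],
                  [0.2, 0.3]]),
     1: np.array([[0.1, 0.4],
                  [0.2, 0.2]])}
pi = np.array([0.6, 0.4])                     # initial memory state

def word_probability(word):
    """P(x_1 ... x_k) = 1^T A_{x_k} ... A_{x_1} pi."""
    state = pi
    for x in word:
        state = A[x] @ state
    return state.sum()

print(word_probability([0, 1, 1]))            # probability of observing 0, 1, 1
# Consistency check: probabilities of all length-2 words sum to 1.
print(sum(word_probability([x, y]) for x in (0, 1) for y in (0, 1)))
```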
We investigate expansions of Presburger arithmetic, i.e., the theory of the integers with addition and order, with additional structure related to exponentiation: either the base-$2$ exponential function $x \mapsto 2^x$, or a predicate for the powers of $2$. The latter theory, denoted $\mathrm{PresPower}$, was introduced by B\"uchi as a first attempt at characterising the sets of tuples of numbers that can be expressed using finite automata; B\"uchi's method does not give an elementary upper bound, and the complexity of this theory has been open. The former theory, denoted $\mathrm{PresExp}$, was shown decidable by Semenov; while the decision procedure for this theory differs radically from the automata-based method proposed by B\"uchi, it is also non-elementary. In fact, the theory with the power function has a non-elementary lower bound. In this paper, we show that while Semenov's and B\"uchi's approaches yield non-elementary blow-ups for $\mathrm{PresPower}$, the theory is in fact decidable in triply exponential time, similar to the best known quantifier-elimination algorithm for Presburger arithmetic. We also provide an $\mathrm{NExpTime}$ upper bound for the existential fragment of $\mathrm{PresExp}$, a step towards a finer-grained analysis of its complexity. Both results are established by analysing a single parameterized satisfiability algorithm for $\mathrm{PresExp}$, which can be specialized either to the setting of $\mathrm{PresPower}$ or to the existential theory of $\mathrm{PresExp}$. Besides the new upper bounds for the existential theory of $\mathrm{PresExp}$ and for $\mathrm{PresPower}$, we believe our algorithm provides new intuition for the decidability of these theories and for the features that lead to non-elementary blow-ups.
Layer Normalization (LayerNorm) is an inherent component of all Transformer-based models. In this paper, we show that LayerNorm is crucial to the expressivity of the multi-head attention layer that follows it. This is in contrast to the common belief that LayerNorm's only role is to normalize the activations during the forward pass, and their gradients during the backward pass. We consider a geometric interpretation of LayerNorm and show that it consists of two components: (a) projection of the input vectors onto the $(d-1)$-dimensional space orthogonal to the $\left[1,1,...,1\right]$ vector, and (b) scaling of all vectors to the same norm $\sqrt{d}$. We show that each of these components is important for the attention layer that follows it in Transformers: (a) projection allows the attention mechanism to create an attention query that attends to all keys equally, relieving the attention of the need to learn this operation; and (b) scaling allows each key to potentially receive the highest attention, and prevents keys from being "un-select-able". We show empirically that Transformers do indeed benefit from these properties of LayerNorm in general language modeling and even in computing simple functions such as "majority". Our code is available at //github.com/tech-srl/layer_norm_expressivity_role .
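The two components are straightforward to verify numerically; the following sketch (independent of the released code) checks that LayerNorm without learned affine parameters coincides with projecting out the all-ones direction and rescaling to norm $\sqrt{d}$.

```python
# A minimal check of the geometric decomposition described above: LayerNorm
# (without learned affine parameters) equals projection onto the hyperplane
# orthogonal to the all-ones vector, followed by scaling to norm sqrt(d).
import torch

d = 8
x = torch.randn(3, d)

# Component (a): project out the [1, 1, ..., 1] direction.
proj = x - x.mean(dim=-1, keepdim=True)
# Component (b): rescale every vector to norm sqrt(d).
scaled = proj * (d ** 0.5) / proj.norm(dim=-1, keepdim=True)

layernorm = torch.nn.LayerNorm(d, elementwise_affine=False, eps=0.0)
print(torch.allclose(layernorm(x), scaled, atol=1e-5))   # True
```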