We treat three cubic recurrences, two of which generalize the famous iterated map $x \mapsto x (1-x)$ from discrete chaos theory. A feature of each asymptotic series developed here is a constant, dependent on the initial condition but otherwise intrinsic to the function at hand.
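As a quick illustration (not taken from the paper), the sketch below iterates the map $x \mapsto x(1-x)$ and checks the well-known leading-order decay $x_k \approx 1/k$; the initial-condition-dependent constant mentioned in the abstract enters only at the next order of the asymptotic series.

```python
# Illustrative sketch (not from the paper): iterate x_{k+1} = x_k (1 - x_k)
# for x_0 in (0, 1) and observe the leading-order decay x_k ~ 1/k.

def iterate(x0, steps):
    x = x0
    for _ in range(steps):
        x = x * (1.0 - x)
    return x

for x0 in (0.25, 0.5, 0.75):            # different initial conditions
    n = 100_000
    xn = iterate(x0, n)
    # n * x_n tends to 1; the next-order correction carries a constant
    # that depends on x0, in the spirit of the abstract.
    print(f"x0={x0}: n * x_n = {n * xn:.6f}")
```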
Recently, Miller and Wu introduced the positive $\lambda$-calculus, a call-by-value $\lambda$-calculus with sharing obtained by assigning proof terms to the positively polarized focused proofs for minimal intuitionistic logic. The positive $\lambda$-calculus stands out among $\lambda$-calculi with sharing for a compactness property related to the sharing of variables. We show that -- thanks to compactness -- the positive calculus neatly captures the core of useful sharing, a technique for the study of reasonable time cost models.
This work addresses the fundamental linear inverse problem in compressive sensing (CS) by introducing a new type of regularizing generative prior. Our proposed method utilizes ideas from classical dictionary-based CS and, in particular, sparse Bayesian learning (SBL), to integrate a strong regularization towards sparse solutions. At the same time, by leveraging the notion of conditional Gaussianity, it also incorporates the adaptability of generative models to training data. However, unlike most state-of-the-art generative models, it is able to learn from a few compressed and noisy data samples and requires no optimization algorithm for solving the inverse problem. Additionally, similar to Dirichlet prior networks, our model parameterizes a conjugate prior, enabling its application to uncertainty quantification. We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals.
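For context, here is a minimal sketch of the classical sparse Bayesian learning baseline that the abstract builds on: the standard EM-style evidence-maximization updates exploiting conditional Gaussianity. This is not the proposed generative model; all names and problem sizes below are illustrative.

```python
# Classical SBL sketch (illustrative baseline, not the proposed model):
# recover a sparse x from y = A x + noise by learning per-coefficient
# prior variances gamma_i via EM-style evidence maximization.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                      # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
sigma2 = 1e-3
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

gamma = np.ones(n)                        # prior variances: x_i ~ N(0, gamma_i)
for _ in range(50):
    # Conditional Gaussianity: the posterior of x given gamma is Gaussian.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    gamma = mu**2 + np.diag(Sigma)        # EM update of the prior variances

print("true support     :", np.sort(np.flatnonzero(x_true)))
print("estimated support:", np.sort(np.argsort(-np.abs(mu))[:k]))
```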
Existing algorithms for explaining the output of image classifiers use different definitions of explanations and a variety of techniques to extract them. However, none of the existing tools use a principled approach based on formal definitions of causes and explanations for extracting explanations. In this paper we present a novel black-box approach to computing explanations grounded in the theory of actual causality. We prove relevant theoretical results and present an algorithm for computing approximate explanations based on these definitions. We prove termination of our algorithm and discuss its complexity and the amount of approximation compared to the precise definition. We implemented the framework in a tool called rex, and we present experimental results and a comparison with state-of-the-art tools. We demonstrate that rex is the most efficient tool and produces the smallest explanations, in addition to outperforming other black-box tools on standard quality measures.
We present two new explicit constructions of Cayley high dimensional expanders (HDXs) over the abelian group $\mathbb{F}_2^n$. Our expansion proofs use only linear algebra and combinatorial arguments. The first construction gives local spectral HDXs of any constant dimension and subpolynomial degree $\exp(n^\epsilon)$ for every $\epsilon > 0$, improving on a construction by Golowich [Gol23] which achieves $\epsilon = 1/2$. [Gol23] derives these HDXs by sparsifying the complete Grassmann poset of subspaces. The novelty in our construction is the ability to sparsify any expanding Grassmannian poset, leading to iterated sparsification and much smaller degrees. The sparse Grassmannian (which is of independent interest in the theory of HDXs) serves as the generating set of the Cayley graph. Our second construction gives a 2-dimensional HDX of polynomial degree $\exp(\epsilon n)$ for any constant $\epsilon > 0$, which is simultaneously a spectral expander and a coboundary expander. To the best of our knowledge, this is the first such non-trivial construction. We name it the Johnson complex, as it is derived from the classical Johnson scheme, whose vertices serve as the generating set of this Cayley graph. This construction may be viewed as a derandomization of the recent random geometric complexes of [LMSY23]. Establishing coboundary expansion through Gromov's "cone method" and the associated isoperimetric inequalities is the most intricate aspect of this construction. While these two constructions are quite different, we show that they both share a common structure, resembling the intersection patterns of vectors in the Hadamard code. We propose a general framework of such "Hadamard-like" constructions in the hope that it will yield new HDXs.
We characterize the power of constant-depth Boolean circuits in generating uniform symmetric distributions. Let $f\colon\{0,1\}^m\to\{0,1\}^n$ be a Boolean function where each output bit of $f$ depends only on $O(1)$ input bits. Assume the output distribution of $f$ on uniform input bits is close to a uniform distribution $D$ with a symmetric support. We show that $D$ is essentially one of the following six possibilities: (1) point distribution on $0^n$, (2) point distribution on $1^n$, (3) uniform over $\{0^n,1^n\}$, (4) uniform over strings with even Hamming weights, (5) uniform over strings with odd Hamming weights, and (6) uniform over all strings. This confirms a conjecture of Filmus, Leigh, Riazanov, and Sokolov (RANDOM 2023).
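As an illustrative example (not from the paper), possibility (4) is already achievable with 2-local output bits: if $x$ is uniform and $y_i = x_i \oplus x_{(i+1) \bmod n}$, then the map is 2-to-1 and every output has even Hamming weight, so $y$ is uniform over the even-weight strings. The sketch below checks this exhaustively for small $n$.

```python
# Illustrative check (not from the paper): each output bit depends on only
# 2 input bits, yet the output distribution is uniform over even-Hamming-
# weight strings, matching possibility (4) in the abstract.
from collections import Counter
from itertools import product

n = 4
counts = Counter()
for x in product((0, 1), repeat=n):
    y = tuple(x[i] ^ x[(i + 1) % n] for i in range(n))   # 2-local output bits
    counts[y] += 1

even = {y for y in product((0, 1), repeat=n) if sum(y) % 2 == 0}
assert set(counts) == even                    # support = even-weight strings
assert len(set(counts.values())) == 1         # uniform over that support
print("uniform over", len(counts), "even-weight strings")
```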
The singular value decomposition (SVD) factors a matrix as a product of three matrices: a matrix of left singular vectors, a diagonal matrix containing the singular values, and a matrix of right singular vectors. There are two main approaches to computing the SVD: the classical method and the randomized method. The classical approach yields accurate singular values. The randomized approach is suited mainly to high-dimensional matrices and targets a prescribed approximation accuracy without necessarily computing all singular values. In this paper, the SVD computation is formalized as an optimization problem solved with a gradient search algorithm. This yields a power method that recovers either all singular values or only the largest ones, together with their associated right singular vectors. In this iterative search, the accuracy of the singular values and of the associated vector matrix depends on user-defined settings. Two applications of the SVD are principal component analysis and the autoencoder used in neural network models.
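For orientation, a minimal generic power-iteration sketch for the leading singular pair is given below; the paper's gradient-search formulation is related in spirit, but the exact algorithm, stopping rule, and deflation strategy here are assumptions made purely for illustration.

```python
# Generic power iteration for the leading singular value and right singular
# vector (illustrative; not necessarily the paper's exact algorithm).
import numpy as np

def top_singular_pair(A, iters=500, tol=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    sigma_old = 0.0
    for _ in range(iters):
        w = A.T @ (A @ v)                 # one power step on A^T A
        v = w / np.linalg.norm(w)
        sigma = np.linalg.norm(A @ v)     # current singular value estimate
        if abs(sigma - sigma_old) < tol:
            break
        sigma_old = sigma
    return sigma, v

A = np.random.default_rng(1).standard_normal((50, 20))
sigma, v = top_singular_pair(A)
print(sigma, np.linalg.svd(A, compute_uv=False)[0])   # should agree closely
```

Further singular pairs can then be obtained by deflation, e.g. replacing $A$ with $A - \sigma u v^\top$ where $u = Av/\sigma$.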
We consider a class of structured fractional minimization problems, where the numerator includes a differentiable function, a simple nonconvex nonsmooth function, a concave nonsmooth function, and a convex nonsmooth function composed with a linear operator, while the denominator is a continuous function that is either weakly convex or has a weakly convex square root. These problems are widespread and span numerous essential applications in machine learning and data science. Existing methods are mainly based on subgradient methods and smoothing proximal gradient methods, which may suffer from slow convergence and numerical stability issues. In this paper, we introduce {\sf FADMM}, the first Alternating Direction Method of Multipliers tailored for this class of problems. {\sf FADMM} decouples the original problem into linearized proximal subproblems, featuring two variants: one using Dinkelbach's parametric method ({\sf FADMM-D}) and the other using the quadratic transform method ({\sf FADMM-Q}). By introducing a novel Lyapunov function, we establish that {\sf FADMM} converges to $\epsilon$-approximate critical points of the problem within an oracle complexity of $\mathcal{O}(1/\epsilon^{3})$. Our experiments on synthetic and real-world data for sparse Fisher discriminant analysis, robust Sharpe ratio minimization, and robust sparse recovery demonstrate the effectiveness of our approach.
Keywords: Fractional Minimization, Nonconvex Optimization, Proximal Linearized ADMM, Nonsmooth Optimization, Convergence Analysis
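For intuition, the toy sketch below shows Dinkelbach's parametric method, the scheme underlying {\sf FADMM-D}: minimizing $f(x)/g(x)$ with $g > 0$ by repeatedly minimizing $f(x) - \lambda g(x)$ and updating $\lambda$. The inner subproblem is solved by naive grid search purely for illustration; in {\sf FADMM} the subproblems are linearized proximal steps applied to the structured composite objective described above.

```python
# Toy Dinkelbach iteration for min f(x)/g(x) with g > 0 (illustration only;
# the inner argmin is a grid search, not FADMM's linearized proximal step).
import numpy as np

f = lambda x: (x - 3.0) ** 2 + 1.0
g = lambda x: x ** 2 + 1.0
grid = np.linspace(-10.0, 10.0, 20001)

lam = 0.0
for _ in range(30):
    x = grid[np.argmin(f(grid) - lam * g(grid))]   # parametric subproblem
    new_lam = f(x) / g(x)                          # update the ratio estimate
    if abs(new_lam - lam) < 1e-12:
        break
    lam = new_lam

print("approximate minimizer:", x, " minimal ratio:", lam)
```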
We prove an $\Omega(n^{1-1/k} \log k / 2^k)$ lower bound on the $k$-party number-in-hand communication complexity of collision-finding. This implies a $2^{n^{1-o(1)}}$ lower bound on the size of tree-like cutting-planes proofs of the bit pigeonhole principle, a compact and natural propositional encoding of the pigeonhole principle, improving on the best previous lower bound of $2^{\Omega(\sqrt{n})}$.
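For reference, one standard formulation of the bit pigeonhole principle (a sketch, not quoted from the paper): each pigeon $i \in [m]$ with $m > 2^n$ is assigned an $n$-bit hole $x_{i,1},\dots,x_{i,n}$, and the CNF refuted by such proofs asserts that all pigeons occupy pairwise distinct holes, i.e. the clausal expansion of $\bigwedge_{1 \le i < j \le m} \bigvee_{k=1}^{n} (x_{i,k} \oplus x_{j,k})$, which is unsatisfiable because only $2^n$ distinct hole names exist.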
In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^{n}$ to any Abelian group $G$ is locally correctable with $\widetilde{O}_{d}((\log n)^{d})$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of the authors [ABPSS24] (STOC 2024), who considered the case of linear polynomials and gave analogous results. Low-degree polynomials over the Boolean cube $\{0,1\}^{n}$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear to higher-degree polynomials involves several new challenges, and handling them gives us further insights into properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is moreover an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices.
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks, and hence late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while also reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
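To make the bottleneck idea concrete, here is a minimal structural sketch (not the authors' implementation; the layer structure, token counts, and absence of learned projections are simplifications): audio and video tokens never attend to each other directly; each modality attends only to its own tokens plus a few shared bottleneck tokens, and the bottleneck tokens in turn collate information from both modalities.

```python
# Structural sketch of bottleneck fusion (illustrative only): cross-modal
# information flows solely through a few shared bottleneck tokens instead
# of full pairwise cross-attention.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d, n_audio, n_video, n_bneck = 16, 10, 12, 4
audio = rng.standard_normal((n_audio, d))
video = rng.standard_normal((n_video, d))
bneck = rng.standard_normal((n_bneck, d))        # shared bottleneck latents

for _ in range(2):                               # two "fusion layers"
    # Each modality sees only itself plus the shared bottleneck tokens.
    audio = attend(audio, np.vstack([audio, bneck]), np.vstack([audio, bneck]))
    video = attend(video, np.vstack([video, bneck]), np.vstack([video, bneck]))
    # Bottleneck tokens condense information from both modalities.
    ctx = np.vstack([audio, video, bneck])
    bneck = attend(bneck, ctx, ctx)

print(audio.shape, video.shape, bneck.shape)
```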