This paper presents a theoretical analysis of the convergence rate of the Sinkhorn algorithm when the cost matrix is sparse. We derive bounds on the convergence rate that depend on the sparsity pattern and the degree of sparsity of the cost matrix. We also explore whether existing convergence results for dense cost matrices can be adapted or improved for the sparse case. Our analysis provides new insights into the behavior of the Sinkhorn algorithm in the presence of sparsity and highlights potential avenues for algorithmic improvements.
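To ground the object of study, here is a minimal sketch of Sinkhorn iterations with a sparse cost matrix, using SciPy sparse kernels. The regularization strength, the marginals, and the convention that missing cost entries are treated as forbidden (kernel value zero) are illustrative assumptions, not results from the paper.

```python
import numpy as np
from scipy import sparse

def sinkhorn_sparse(C, r, c, reg=0.1, n_iter=500, tol=1e-9):
    """Entropic-regularization Sinkhorn with a sparse cost matrix C.

    Entries absent from C are treated as forbidden (infinite cost):
    the Gibbs kernel K = exp(-C/reg) is formed only on the stored
    entries, so K inherits the sparsity pattern of C.
    """
    K = sparse.csr_matrix(C, dtype=float)
    K.data = np.exp(-K.data / reg)          # Gibbs kernel on the pattern of C
    u = np.ones(len(r))
    v = np.ones(len(c))
    for _ in range(n_iter):
        v = c / (K.T @ u)                   # column scaling update
        u_new = r / (K @ v)                 # row scaling update
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    # Transport plan P = diag(u) K diag(v); still sparse.
    return sparse.diags(u) @ K @ sparse.diags(v)

# Tiny example: a banded (sparse) cost between two uniform marginals.
n = 6
C = sparse.diags([np.full(n, 1.0), np.full(n - 1, 2.0), np.full(n - 1, 2.0)],
                 [0, 1, -1], format="csr")
P = sinkhorn_sparse(C, np.full(n, 1.0 / n), np.full(n, 1.0 / n))
print(np.abs(P.sum(axis=1) - 1.0 / n).max())   # row marginals match
```

When each row of the cost matrix has $O(1)$ nonzeros, each iteration costs $O(\mathrm{nnz}(C))$ rather than $O(n^2)$, which is the regime where sparsity-dependent convergence bounds matter.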
In the realm of self-supervised learning (SSL), masked image modeling (MIM) has gained popularity alongside contrastive learning methods. MIM involves reconstructing masked regions of input images using their unmasked portions. A notable subset of MIM methodologies employs discrete tokens as the reconstruction target, but the theoretical underpinnings of this choice remain underexplored. In this paper, we explore the role of these discrete tokens, aiming to unravel their benefits and limitations. Building upon the connection between MIM and contrastive learning, we provide a comprehensive theoretical understanding of how discrete tokenization affects the model's generalization capabilities. Furthermore, we propose a novel metric named TCAS, specifically designed to assess the effectiveness of discrete tokens within the MIM framework. Inspired by this metric, we contribute an innovative tokenizer design and propose a corresponding MIM method named ClusterMIM, which demonstrates superior performance on a variety of benchmark datasets and ViT backbones. Code is available at //github.com/PKU-ML/ClusterMIM.
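The paper's ClusterMIM tokenizer is not reproduced here; the following is only a generic sketch of the MIM-with-discrete-tokens setup it studies, with a k-means codebook standing in as a hypothetical discrete tokenizer and random vectors in place of real patch features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Generic MIM-with-discrete-tokens pipeline (hypothetical, not the paper's
# ClusterMIM recipe): a k-means codebook over patch features serves as the
# discrete tokenizer, and tokens of masked patches are prediction targets.
patches = rng.normal(size=(1000, 64))      # stand-ins for flattened patches
tokenizer = KMeans(n_clusters=16, n_init=10, random_state=0).fit(patches)
tokens = tokenizer.predict(patches)        # discrete reconstruction targets

mask = rng.random(1000) < 0.6              # mask 60% of the patches
visible, targets = patches[~mask], tokens[mask]
print(visible.shape, targets.shape)        # encoder input vs. masked targets
```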
Probabilistic proofs of the Johnson-Lindenstrauss lemma imply that random projection can reduce the dimension of a data set and approximately preserve pairwise distances. If a distance being approximately preserved is called a success, and the complement of this event is called a failure, then such a random projection likely results in no failures. Assuming a Gaussian random projection, the lemma is proved by showing that the no-failure probability is positive using a combination of Bonferroni's inequality and Markov's inequality. This paper modifies this proof in two ways to obtain a greater lower bound on the no-failure probability. First, Bonferroni's inequality is applied to pairs of failures instead of individual failures. Second, since a pair of projection errors has a bivariate gamma distribution, the probability of a pair of successes is bounded using an inequality from Jensen (1969). If $n$ is the number of points to be embedded and $\mu$ is the probability of a success, then this leads to an increase in the lower bound on the no-failure probability of $\frac{1}{2}\binom{n}{2}(1-\mu)^2$ if $\binom{n}{2}$ is even and $\frac{1}{2}\left(\binom{n}{2}-1\right)(1-\mu)^2$ if $\binom{n}{2}$ is odd. For example, if $n=10^5$ points are to be embedded in $k=10^4$ dimensions with a tolerance of $\epsilon=0.1$, then the improvement in the lower bound is on the order of $10^{-14}$. We also show that further improvement is possible if the inequality in Jensen (1969) extends to three successes, though we do not have a proof of this result.
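As a concrete illustration of the success/failure events, here is a small numerical sketch of a Gaussian random projection and the pairwise distortions it induces. The dimensions are far smaller than the abstract's example ($n=10^5$, $k=10^4$), purely for runtime, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, eps = 200, 1000, 400, 0.1   # far smaller than the abstract's example

X = rng.normal(size=(n, d))                 # points to embed
R = rng.normal(size=(d, k)) / np.sqrt(k)    # Gaussian random projection
Y = X @ R

# A pair (i, j) is a "success" if its squared distance is preserved
# within a factor of 1 +/- eps; a "failure" is the complementary event.
successes, pairs = 0, 0
for i in range(n):
    for j in range(i + 1, n):
        ratio = np.sum((Y[i] - Y[j]) ** 2) / np.sum((X[i] - X[j]) ** 2)
        successes += (1 - eps) <= ratio <= (1 + eps)
        pairs += 1

print(f"{successes}/{pairs} pairs preserved; no-failure means all {pairs}")
```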
Stochastic approximation is a class of algorithms that update a vector iteratively, incrementally, and stochastically, including, e.g., stochastic gradient descent and temporal difference learning. One fundamental challenge in analyzing a stochastic approximation algorithm is to establish its stability, i.e., to show that the stochastic vector iterates are bounded almost surely. In this paper, we extend the celebrated Borkar-Meyn theorem for stability from the martingale difference noise setting to the Markovian noise setting, which greatly improves its applicability in reinforcement learning, especially in off-policy reinforcement learning algorithms with linear function approximation and eligibility traces. Central to our analysis is the diminishing asymptotic rate of change of a few functions, which is implied by both a form of the strong law of large numbers and a commonly used V4 Lyapunov drift condition, and which holds trivially if the Markov chain is finite and irreducible.
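For intuition about the setting, here is a minimal sketch of a stochastic approximation iterate driven by Markovian rather than martingale-difference noise. The drift, step sizes, and the two-state noise chain are illustrative choices, picked so the chain is finite and irreducible with zero stationary-mean bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stochastic approximation x_{t+1} = x_t + a_t * H(x_t, Y_t), where the
# noise process {Y_t} is a finite, irreducible two-state Markov chain
# (Markovian noise) instead of a martingale-difference sequence.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # noise-chain transitions; stationary dist (2/3, 1/3)
b = np.array([1.0, -2.0])         # state-dependent bias, zero mean under stationarity
y = 0
x = rng.normal(size=3)
for t in range(1, 50_000):
    a_t = 1.0 / t                 # Robbins-Monro step sizes
    x = x + a_t * (-x + b[y])     # drift h(x) = -x plus Markov-modulated noise
    y = rng.choice(2, p=P[y])     # advance the noise chain

print(np.linalg.norm(x))          # iterates stay bounded and settle near 0
```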
This paper introduces the notion of referring forms as a new metric for analyzing sequential circuits from a functional perspective. Sequential circuits are modeled as causal stream functions, the outputs of which depend solely on past and current inputs. Referring forms are defined based on the type expressions of functions and represent how a circuit refers to past inputs. The key contribution of this study is identifying a universal property of multiple-clock-domain circuits using referring forms. This theoretical framework is expected to enhance the comprehension and analysis of sequential circuits.
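To illustrate the modeling view (in Python rather than a hardware description language), here is a causal stream function whose output at each step refers only to past inputs; the unit-delay register is a hypothetical minimal example, not a construction from the paper.

```python
# A sequential circuit modeled as a causal stream function: the output at
# step t may depend only on inputs at steps <= t. This unit-delay register
# "refers" exactly one step into the past -- the simplest referring form.
def delay(stream, init=0):
    prev = init
    for x in stream:
        yield prev                # output refers to the previous input
        prev = x

print(list(delay([1, 0, 1, 1])))  # -> [0, 1, 0, 1]
```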
We provide a geometric approach to the lasso. We study the tangency of the level sets of the least squares objective function with the polyhedral boundary sets $B(t)$ of the parameters in $\mathbb R^p$ with the $\ell_1$ norm equal to $t$. Here $t$ decreases from the value $\hat t$, which corresponds to the actual, unconstrained minimizer of the least squares objective function, denoted by $\hat\beta$. We derive exact closed-form formulae for the solution of the lasso under the full rank assumption. Our method does not rely on iterative numerical procedures and is thus computationally more efficient than existing algorithms for solving the lasso. We also establish several important general properties of the solutions of the lasso. We prove that each lasso solution forms a simple polygonal chain in $\mathbb{R}^p$ with $\hat\beta$ and the origin as the endpoints, and that no two segments of the polygonal chain are parallel. We prove that such a polygonal chain can intersect the interiors of more than one orthant in $\mathbb{R}^p$, but it cannot intersect the interiors of more than $p$ orthants, which is, in general, the best possible estimate for non-normalized data. We prove that if a polygonal chain passes from the interior of one orthant to the interior of another, then it never returns to the interior of the former. The intersection of a chain with the interior of an orthant coincides with a segment minus its endpoints, which belongs to a ray having $\hat\beta$ as its initial point. We illustrate the results using real data examples as well as specially crafted examples with hypothetical data. Already in the $p=2$ case we show a striking difference in the maximal number of quadrants a polygonal chain of a lasso solution can intersect: $1$ for normalized data vs. $2$ for non-normalized data.
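The closed-form solution is derived in the paper itself; as an independent numerical illustration of the polygonal-chain geometry, one can trace the lasso path with scikit-learn's coordinate-descent solver on a synthetic full-rank design (all data below are made up).

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(2)

# Full-rank design with p = 2, so the solution path lives in the plane.
X = rng.normal(size=(50, 2))
y = X @ np.array([3.0, -1.5]) + 0.1 * rng.normal(size=50)

# lasso_path returns coefficients on a decreasing grid of penalties; in
# coefficient space these points trace a polygonal chain from the origin
# (heavy penalty) toward the unconstrained least squares fit (no penalty).
alphas, coefs, _ = lasso_path(X, y, n_alphas=100)
print(coefs[:, 0])    # heavily penalized end: near the origin
print(coefs[:, -1])   # lightly penalized end: near the OLS solution
```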
We provide an algorithm for the simultaneous system identification and model predictive control of nonlinear systems. The algorithm has finite-time near-optimality guarantees and asymptotically converges to the optimal (non-causal) controller. In particular, the algorithm enjoys sublinear dynamic regret, defined herein as the suboptimality against an optimal clairvoyant controller that knows how the unknown disturbances and system dynamics will adapt to its actions. The algorithm is self-supervised and applies to control-affine systems with unknown dynamics and disturbances that can be expressed in reproducing kernel Hilbert spaces. Such spaces can model external disturbances and modeling errors that can even be adaptive to the system's state and control input. For example, they can model wind and wave disturbances to aerial and marine vehicles, or inaccurate model parameters such as the inertia of mechanical systems. The algorithm first generates random Fourier features that are used to approximate the unknown dynamics or disturbances. Then, it employs model predictive control based on the current learned model of the unknown dynamics (or disturbances). The model of the unknown dynamics is updated online using least squares based on the data collected while controlling the system. We validate our algorithm in both hardware experiments and physics-based simulations. The simulations include (i) a cart-pole aiming to maintain the pole upright despite inaccurate model parameters, and (ii) a quadrotor aiming to track reference trajectories despite unmodeled aerodynamic drag effects. The hardware experiments include a quadrotor aiming to track a circular trajectory despite unmodeled aerodynamic drag effects, ground effects, and wind disturbances.
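A minimal sketch of the two learning ingredients named above: random Fourier features for an RBF kernel, and regularized least squares on data gathered online. The feature dimension, bandwidth, and the scalar stand-in residual are illustrative assumptions, and the MPC loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random Fourier features approximating an RBF kernel with bandwidth sigma:
# z(x) = sqrt(2/D) * cos(W x + b), W ~ N(0, sigma^{-2} I), b ~ U[0, 2*pi).
D, d, sigma = 200, 4, 1.0
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Regularized least squares for an unknown scalar residual g(x) ~ theta . z(x),
# accumulated from data gathered while the system is being controlled.
A = np.eye(D)                      # regularized Gram accumulator
c = np.zeros(D)
for _ in range(500):               # simulated data-collection steps
    x = rng.normal(size=d)
    g = np.sin(x[0])               # stand-in for the measured model error
    z = features(x)
    A += np.outer(z, z)
    c += g * z

theta = np.linalg.solve(A, c)      # current coefficients of the learned model
print(features(np.array([0.5, 0, 0, 0])) @ theta, np.sin(0.5))  # model vs. truth
```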
The zero-shot effectiveness of neural retrieval models is often evaluated on the BEIR benchmark -- a combination of different IR evaluation datasets. Interestingly, previous studies found that particularly on the BEIR subset Touch\'e 2020, an argument retrieval task, neural retrieval models are considerably less effective than BM25. Still, so far, no further investigation has been conducted on what makes argument retrieval so "special". To more deeply analyze the respective potential limits of neural retrieval models, we run a reproducibility study on the Touch\'e 2020 data. In our study, we focus on two experiments: (i) a black-box evaluation (i.e., no model retraining), incorporating a theoretical exploration using retrieval axioms, and (ii) a data denoising evaluation involving post-hoc relevance judgments. Our black-box evaluation reveals an inherent bias of neural models towards retrieving short passages from the Touch\'e 2020 data, and we also find that quite a few of the neural models' results are unjudged in the Touch\'e 2020 data. As many of the short Touch\'e passages are not argumentative and thus non-relevant per se, and as the missing judgments complicate fair comparison, we denoise the Touch\'e 2020 data by excluding very short passages (less than 20 words) and by augmenting the unjudged data with post-hoc judgments following the Touch\'e guidelines. On the denoised data, the effectiveness of the neural models improves by up to 0.52 in nDCG@10, but BM25 is still more effective. Our code and the augmented Touch\'e 2020 dataset are available at \url{//github.com/castorini/touche-error-analysis}.
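The length-based denoising step described above amounts to a simple filter; here is a sketch, with a hypothetical corpus mapping passage ids to text.

```python
# `corpus` is a hypothetical mapping from passage ids to passage text;
# the rule mirrors the described filter: drop passages under 20 words.
def denoise(corpus, min_words=20):
    return {pid: text for pid, text in corpus.items()
            if len(text.split()) >= min_words}

corpus = {"p1": "a short non-argumentative snippet",
          "p2": " ".join(["word"] * 25)}
print(sorted(denoise(corpus)))    # only "p2" survives the length filter
```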
We consider the problem of sampling from the Ising model when the underlying interaction matrix has eigenvalues lying within an interval of length $\gamma$. Recent work in this setting has shown various algorithmic results that apply roughly when $\gamma < 1$, notably with nearly-linear running times based on the classical Glauber dynamics. However, the optimality of the range of $\gamma$ was not clear since previous inapproximability results developed for the antiferromagnetic case (where the matrix has entries $\leq 0$) apply only for $\gamma>2$. To this end, Kunisky (SODA'24) recently provided evidence that the problem becomes hard already when $\gamma>1$ based on the low-degree hardness for an inference problem on random matrices. Based on this, he conjectured that sampling from the Ising model in the same range of $\gamma$ is NP-hard. Here we confirm this conjecture, complementing in particular the known algorithmic results by showing NP-hardness of approximate counting and sampling when $\gamma>1$, with strong inapproximability guarantees; we also obtain a more refined hardness result for matrices where only a constant number of entries per row are allowed to be non-zero. The main observation in our reductions is that, for $\gamma>1$, Glauber dynamics mixes slowly on the complete and random regular graphs when the interactions are all positive (ferromagnetic), due to a bimodality in the underlying distribution. While ferromagnetic interactions typically preclude NP-hardness results, here we work around this by introducing mild antiferromagnetism in an appropriate way, keeping the spectrum roughly within the same range. This allows us to exploit the bimodality of the aforementioned graphs and show the target NP-hardness by suitably adapting previous inapproximability techniques developed for antiferromagnetic systems.
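A minimal sketch of the Glauber dynamics referenced above, on the ferromagnetic complete graph, with the coupling scaled so the eigenvalue spread of the interaction matrix is about $1.2 > 1$; the parameters and chain length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Glauber dynamics for a ferromagnetic Ising model on the complete graph:
# J = (beta/n) * (all-ones - I), whose eigenvalues span an interval of
# length ~ beta; beta = 1.2 puts us in the conjectured hard range gamma > 1.
n, beta = 50, 1.2
J = (beta / n) * (np.ones((n, n)) - np.eye(n))
sigma = rng.choice([-1, 1], size=n)

for _ in range(20_000):
    i = rng.integers(n)
    field = J[i] @ sigma                         # local field at site i
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # conditional law of sigma_i
    sigma[i] = 1 if rng.random() < p_plus else -1

print(sigma.mean())   # magnetization sticks near one of two modes (+m or -m)
```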
We consider the challenging problem of estimating causal effects from purely observational data in bi-directional Mendelian randomization (MR), where invalid instruments and unmeasured confounding are usually present. To address this problem, most existing methods attempt to find valid instrumental variables (IVs) for the target causal effect by relying on expert knowledge or by assuming that the causal model is a one-directional MR model. Motivated by this, in this paper we first theoretically investigate the identification of the bi-directional MR model from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and the outcome). Moreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest. We theoretically demonstrate the correctness of the proposed algorithm. Experimental results show the effectiveness of our method for estimating causal effects in bi-directional MR.
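The paper's cluster fusion-like procedure is not reproduced here; for readers unfamiliar with MR, the following sketches only the standard one-directional IV primitive it builds on, the Wald ratio with a single valid instrument (all data simulated).

```python
import numpy as np

rng = np.random.default_rng(6)

# One-directional MR with a single valid instrument g (a genetic variant):
# the causal effect of exposure x on outcome y is the Wald ratio
# Cov(g, y) / Cov(g, x), which is consistent despite the confounder u.
n = 100_000
g = rng.binomial(2, 0.3, size=n).astype(float)  # instrument
u = rng.normal(size=n)                          # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)            # exposure
y = 0.8 * x + u + rng.normal(size=n)            # outcome; true effect 0.8

beta_hat = np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]
print(beta_hat)   # close to 0.8, while naive regression of y on x is biased
```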
We explore the use of deep learning to localise galactic structures in low surface brightness (LSB) images. LSB imaging reveals many interesting structures, though these are frequently confused with galactic dust contamination due to a strong local visual similarity. We propose a novel unified approach to multi-class segmentation of galactic structures and of extended amorphous image contaminants. Our panoptic segmentation model combines Mask R-CNN with a contaminant-specialised network and utilises an adaptive preprocessing layer to better capture the subtle features of LSB images. Further, a human-in-the-loop training scheme is employed to augment ground-truth labels. These different approaches are evaluated in turn, and together they greatly improve the detection of both galactic structures and contaminants in LSB images.
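The paper's adaptive preprocessing layer and contaminant-specialised branch are not reproduced here; the sketch below only instantiates the Mask R-CNN baseline via torchvision, with a guessed class list, to show the component the model builds on.

```python
import torch
import torchvision

# Baseline only: a torchvision Mask R-CNN with a guessed class list; the
# paper's adaptive preprocessing layer, contaminant-specialised network,
# and human-in-the-loop labels are not reproduced here.
num_classes = 1 + 3   # background + hypothetical {structure, tidal feature, dust}
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=num_classes)

model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 256, 256)])   # one random 256x256 test image
print(sorted(preds[0].keys()))                 # boxes, labels, masks, scores
```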