Data pruning algorithms are commonly used to reduce the memory and computational cost of the optimization process. Recent empirical results reveal that random data pruning remains a strong baseline and outperforms most existing data pruning methods in the high compression regime, i.e., where a fraction of $30\%$ or less of the data is kept. This regime has recently attracted considerable interest as a result of the role of data pruning in improving the so-called neural scaling laws; in [Sorscher et al.], the authors showed the need for high-quality data pruning algorithms in order to beat the sample power law. In this work, we focus on score-based data pruning algorithms and show theoretically and empirically why such algorithms fail in the high compression regime. We demonstrate ``No Free Lunch'' theorems for data pruning and present calibration protocols that use randomization to enhance the performance of existing pruning algorithms in this high compression regime.
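The abstract does not spell out the calibration protocols; as a minimal sketch of how randomization can be blended into score-based pruning, the snippet below fills part of the kept budget by score and the rest uniformly at random. The `random_fraction` knob is a hypothetical parameterization for illustration, not the paper's protocol.

```python
import numpy as np

def calibrated_prune(scores, keep_fraction, random_fraction, rng=None):
    """Keep `keep_fraction` of the data: part by score, part at random.

    `random_fraction` (hypothetical knob) controls how much of the kept
    budget is filled by uniform random sampling instead of top scores.
    """
    rng = np.random.default_rng(rng)
    n = len(scores)
    n_keep = int(keep_fraction * n)
    n_random = int(random_fraction * n_keep)
    n_scored = n_keep - n_random

    # Take the highest-scoring examples first ...
    order = np.argsort(scores)[::-1]
    scored_idx = order[:n_scored]

    # ... then fill the rest of the budget uniformly from the remainder.
    remainder = order[n_scored:]
    random_idx = rng.choice(remainder, size=n_random, replace=False)
    return np.concatenate([scored_idx, random_idx])
```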
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex optimization that sequentially minimizes a majorizing surrogate of the objective function in each block coordinate while the other block coordinates are held fixed. We consider a family of BMM algorithms for minimizing smooth nonconvex objectives, where each parameter block is constrained within a subset of a Riemannian manifold. We establish that these algorithms converge asymptotically to the set of stationary points and attain an $\epsilon$-stationary point within $\widetilde{O}(\epsilon^{-2})$ iterations. In particular, the assumptions for our complexity results are completely Euclidean when the underlying manifold is a product of Euclidean or Stiefel manifolds, although our analysis makes explicit use of the Riemannian geometry. Our general analysis applies to a wide range of algorithms with Riemannian constraints: Riemannian MM, block projected gradient descent, optimistic likelihood estimation, geodesically constrained subspace tracking, robust PCA, and Riemannian CP-dictionary-learning. We experimentally validate that our algorithm converges faster than standard Euclidean algorithms applied to the Riemannian setting.
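As a loose illustration of the BMM template (not the paper's algorithm or its Riemannian machinery), the sketch below instantiates each block surrogate as a prox-linear majorizer, whose minimizer over the block's constraint set is exactly a projected gradient step; on a Stiefel block the projection could be replaced by a polar-decomposition-based metric projection.

```python
import numpy as np

def block_mm(partial_grads, projections, x_blocks, step_sizes, n_iters=100):
    """Block MM with prox-linear surrogates.

    Minimizing the quadratic majorizer
        f(x^k) + <g_i, x_i - x_i^k> + (1 / (2 * eta_i)) * ||x_i - x_i^k||^2
    over the i-th constraint set is exactly a projected gradient step,
    so each block update is projections[i](x_i - eta_i * g_i).
    """
    x = [b.copy() for b in x_blocks]
    for _ in range(n_iters):
        for i, (grad_i, proj_i, eta_i) in enumerate(
                zip(partial_grads, projections, step_sizes)):
            g = grad_i(x)                    # partial gradient w.r.t. block i
            x[i] = proj_i(x[i] - eta_i * g)  # minimize the surrogate over block i
    return x
```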
Due to their cost, experiments for inertial confinement fusion (ICF) heavily rely on numerical simulations to guide design. As simulation technology progresses, so too can the fidelity of models used to plan for new experiments. However, these high-fidelity models are by themselves insufficient for optimal experimental design, because their computational cost remains too high to efficiently and effectively explore the numerous parameters required to describe a typical experiment. Traditionally, ICF design has relied on low-fidelity modeling to initially identify potentially interesting design regions, which are then subsequently explored via selected high-fidelity modeling. In this paper, we demonstrate that this two-step approach can be insufficient: even for simple design problems, a two-step optimization strategy can steer the high-fidelity search toward incorrect regions, wasting computational resources on parameter regimes far from the true optimal solution. We reveal that a primary cause of this behavior in ICF design problems is the presence of low-fidelity optima in regions of the parameter space distinct from the high-fidelity optima. To address this issue, we propose an iterative multifidelity Bayesian optimization method based on Gaussian Process Regression that leverages both low- and high-fidelity models. We demonstrate, using both two- and eight-dimensional ICF test problems, that our algorithm can effectively utilize low-fidelity modeling for exploration, while automatically refining promising designs with high-fidelity models. This approach proves to be more efficient than relying solely on high-fidelity modeling for optimization.
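The abstract specifies only that the method is an iterative multifidelity Bayesian optimization scheme built on Gaussian Process Regression. The following sketch shows one standard way to fuse two fidelities, a Kennedy-O'Hagan-style co-kriging surrogate with a crudely fitted scale factor `rho`; it should not be read as the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_two_fidelity(X_lo, y_lo, X_hi, y_hi):
    """Fit GP to low-fidelity data, then a GP to the scaled residual."""
    gp_lo = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp_lo.fit(X_lo, y_lo)

    # Crude least-squares fit of the fidelity scale factor rho.
    mu_lo_at_hi = gp_lo.predict(X_hi)
    rho = np.dot(mu_lo_at_hi, y_hi) / np.dot(mu_lo_at_hi, mu_lo_at_hi)

    gp_res = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp_res.fit(X_hi, y_hi - rho * mu_lo_at_hi)
    return gp_lo, gp_res, rho

def predict_high(gp_lo, gp_res, rho, X):
    # High-fidelity prediction = rho * low-fidelity trend + learned residual.
    return rho * gp_lo.predict(X) + gp_res.predict(X)
```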
A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
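The paper's setting is metrical task systems, which the snippet below does not reproduce. Purely as an illustration of competing against a benchmark with a bounded number of switches between predictors, here is the classic Fixed-Share update from the simpler prediction-with-expert-advice setting.

```python
import numpy as np

def fixed_share(losses, alpha, eta):
    """Fixed-Share weights over ell predictors (Herbster & Warmuth).

    `losses[t, i]` is predictor i's loss at time t; alpha controls how
    much weight is redistributed each round, matching a benchmark that
    switches predictors a bounded number of times. This is a classic
    expert-advice sketch, not the paper's metrical-task-system algorithm.
    """
    T, ell = losses.shape
    w = np.full(ell, 1.0 / ell)
    picks = []
    for t in range(T):
        picks.append(int(np.argmax(w)))        # follow the heaviest predictor
        w = w * np.exp(-eta * losses[t])       # multiplicative-weights update
        w = w / w.sum()
        w = (1 - alpha) * w + alpha / ell      # share step enables switching
    return picks
```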
Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two different data forms, specific methodology for the related estimation is yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite-sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and the correct implementation of multitapering for processes sampled in different ways, e.g., continuously vs. on a regular grid.
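For the gridded random-field case, a minimal multitaper sketch looks as follows: 2-D Slepian tapers are formed as outer products of 1-D DPSS windows and the tapered periodograms are averaged. Point processes, as the abstract notes, require a different implementation (tapered sums over the observed points rather than an FFT), which this sketch omits.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum_2d(field, NW=3, K=5):
    """Multitaper spectral density estimate for a random field on a grid.

    Averages the K*K tapered periodograms obtained from outer products
    of 1-D DPSS tapers. Gridded random fields only.
    """
    n1, n2 = field.shape
    tapers1 = dpss(n1, NW, K)   # shape (K, n1), unit-energy tapers
    tapers2 = dpss(n2, NW, K)   # shape (K, n2)
    est = np.zeros((n1, n2))
    for h1 in tapers1:
        for h2 in tapers2:
            taper = np.outer(h1, h2)
            J = np.fft.fft2(taper * field)   # tapered Fourier transform
            est += np.abs(J) ** 2
    return est / (K * K)
```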
We study the Weyl formula for the asymptotic number of eigenvalues of the Laplace-Beltrami operator with Dirichlet boundary condition on a Riemannian manifold in the context of geometric flows. Taking the eigenvalues to be the energies of an associated statistical system, we show that geometric flows are directly related to the chosen direction of increasing entropy. For a closed Riemannian manifold, we show that a volume-preserving flow of the geometry is equivalent to an increase of the Gibbs entropy function derived from the spectrum of the Laplace-Beltrami operator. A resemblance to the Arnowitt-Deser-Misner (ADM) formalism of gravity is also noted for open Riemannian manifolds, where the geometric flow parameter and the direction of increasing entropy are both identified with the time direction.
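For reference, the Weyl formula in question is the classical leading-order asymptotic for the Dirichlet eigenvalue counting function (a standard statement, not specific to this paper):

```latex
% Weyl's law for the Dirichlet Laplace-Beltrami operator on an
% n-dimensional Riemannian manifold (M, g):
N(\lambda) \;=\; \#\{\, j : \lambda_j \le \lambda \,\}
  \;\sim\; \frac{\omega_n \,\mathrm{Vol}(M)}{(2\pi)^n}\,\lambda^{n/2},
  \qquad \lambda \to \infty,
% where \omega_n is the volume of the unit ball in R^n. Reading the
% eigenvalues \lambda_j as energies of a statistical system makes
% Vol(M), and hence the flow of the metric, a thermodynamic quantity.
```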
A class of (block) rational Krylov-subspace-based projection methods for solving the large-scale continuous-time algebraic Riccati equation (CARE) $0 = \mathcal{R}(X) := A^HX + XA + C^HC - XBB^HX$ with a large, sparse $A$, and $B$ and $C$ of full low rank is proposed. The CARE is projected onto a block rational Krylov subspace $\mathcal{K}_j$ spanned by blocks of the form $(A^H + s_kI)^{-1}C^H$ for some shifts $s_k, k = 1, \ldots, j.$ The considered projections do not need to be orthogonal and are built from the matrices appearing in the block rational Arnoldi decomposition associated to $\mathcal{K}_j.$ The resulting projected Riccati equation is solved for the small square Hermitian $Y_j.$ Then the Hermitian low-rank approximation $X_j = Z_jY_jZ_j^H$ to $X$ is set up, where the columns of $Z_j$ span $\mathcal{K}_j.$ The residual norm $\|\mathcal{R}(X_j)\|_F$ can be computed efficiently via the norm of a readily available $2p \times 2p$ matrix. We suggest reducing the rank of the approximate solution $X_j$ even further by truncating small eigenvalues from $X_j.$ This truncated approximate solution can be interpreted as the solution of the Riccati residual projected onto a subspace of $\mathcal{K}_j,$ which gives us a way to efficiently evaluate the norm of the resulting residual. Numerical examples are presented.
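A minimal orthogonal-projection sketch of the construction follows; note that the paper's projections need not be orthogonal and are built from the block rational Arnoldi decomposition, which this sketch does not reproduce.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def galerkin_care(A, B, C, Z):
    """Projection sketch for 0 = A^H X + X A + C^H C - X B B^H X.

    Z is assumed to have orthonormal columns spanning the (rational)
    Krylov subspace. The small projected CARE is solved densely and the
    solution is lifted back as the low-rank approximation X_j = Z Y Z^H.
    """
    Aj = Z.conj().T @ A @ Z          # projected A
    Bj = Z.conj().T @ B              # projected B
    Cj = C @ Z                       # projected C
    # Small dense CARE: Aj^H Y + Y Aj - Y Bj Bj^H Y + Cj^H Cj = 0.
    Y = solve_continuous_are(Aj, Bj, Cj.conj().T @ Cj, np.eye(B.shape[1]))
    return Z @ Y @ Z.conj().T        # low-rank approximation X_j
```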
Quantum computing devices are believed to be powerful in solving the prime factorization problem, which is at the heart of widely deployed public-key cryptographic tools. However, the implementation of Shor's quantum factorization algorithm requires significant resources, scaling linearly with the size of the number being factored; taking into account the overhead required for quantum error correction, the estimate is that 20 million (noisy) physical qubits are needed to factor a 2048-bit RSA key in 8 hours. A recent proposal by Yan et al. claims the possibility of solving the factorization problem with sublinear quantum resources. As we demonstrate in our work, this proposal lacks a systematic analysis of the computational complexity of the classical part of the algorithm, which exploits Schnorr's lattice-based approach. We provide several examples illustrating the need for additional resource analysis for the proposed quantum factorization algorithm.
This manuscript is devoted to investigating the conservation laws of the incompressible Navier-Stokes equations (NSEs), written in the energy-momentum-angular-momentum conserving (EMAC) formulation, after being linearized by two-level methods. With appropriate correction steps (e.g., Stokes/Newton corrections), we show that the two-level methods discretized from the EMAC NSEs preserve momentum and angular momentum, and asymptotically preserve energy. Error estimates and (asymptotic) conservation properties are analyzed and obtained, and numerical experiments are conducted to validate the theoretical results, mainly confirming that the two-level linearized methods indeed (almost) retain the conservation laws. Moreover, experimental error estimates and optimal convergence rates for two newly defined types of pressure approximation in the EMAC NSEs are also obtained.
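For reference, the EMAC formulation referred to here rewrites the NSE nonlinearity so that standard discretizations conserve energy, momentum, and angular momentum even when the discrete velocity is only weakly divergence-free (as introduced by Charnyi, Heister, Olshanskii, and Rebholz; quoted from the literature, not from this paper):

```latex
% EMAC form of the incompressible NSEs: u is velocity, D(u) the
% symmetric gradient (grad u + grad u^T)/2, nu the viscosity, and
% P = p - |u|^2 / 2 a modified pressure:
u_t + 2\,D(u)\,u + (\nabla \cdot u)\,u + \nabla P - \nu \Delta u = f,
\qquad \nabla \cdot u = 0.
```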
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
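A toy reading of the proposed comparison mechanism (all names and the embedding of saliency maps as vectors are illustrative assumptions, not the study's construction): similarity follows Shepard's exponential law, and the predicted explainee response is proportional to similarity.

```python
import numpy as np

def shepard_similarity(x, y, sensitivity=1.0):
    # Shepard's universal law: generalization decays exponentially
    # with distance in the psychological similarity space.
    return np.exp(-sensitivity * np.linalg.norm(x - y, ord=1))

def predicted_agreement(ai_explanation, own_explanations, labels):
    """Compare the AI's saliency map (embedded as a vector) against the
    explanations the explainee would give for each candidate label, and
    generalize in proportion to similarity. Illustrative only.
    """
    sims = np.array([shepard_similarity(ai_explanation, e)
                     for e in own_explanations])
    probs = sims / sims.sum()        # normalize into a choice distribution
    return dict(zip(labels, probs))
```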
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and overcome two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
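As a rough illustration of why prediction-based bounds are easier to estimate, here is a generic plug-in mutual-information estimator over discrete predictions; it only loosely mimics the conditional-information quantities the paper actually bounds.

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats from paired discrete samples.

    In the prediction-based picture, xs could be discretized predictions
    and ys a binary train/test selection mask: generalization is good
    when the predictions carry little information about the mask.
    """
    n = len(xs)
    p_xy = Counter(zip(xs, ys))
    p_x, p_y = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in p_xy.items():
        pxy = c / n
        # pxy * log(pxy / (px * py)) with counts substituted in.
        mi += pxy * np.log(pxy * n * n / (p_x[x] * p_y[y]))
    return mi
```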