The Gaussian process is an indispensable tool for clustering functional data, owing to its flexibility and inherent uncertainty quantification. However, when the functional data are observed over a large grid (say, of length $p$), Gaussian process clustering quickly becomes infeasible, incurring $O(p^2)$ space complexity and $O(p^3)$ time complexity per iteration, thus prohibiting its natural adaptation to large environmental applications. To ensure the scalability of Gaussian process clustering in such applications, we propose to embed the popular Vecchia approximation for Gaussian processes at the heart of the clustering task, provide crucial theoretical insights towards algorithmic design, and finally develop a computationally efficient expectation-maximization (EM) algorithm. Empirical evidence of the utility of our proposal is provided via simulations and analysis of polar temperature anomaly (\href{//www.ncei.noaa.gov/access/monitoring/climate-at-a-glance/global/time-series}{noaa.gov}) datasets.
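As a rough illustration of the underlying computational trick (not the paper's EM algorithm itself), the sketch below approximates a GP log-likelihood in the Vecchia style, conditioning each observation only on the previous $m$ points in a fixed ordering; the squared-exponential kernel and the ordering-based neighbour choice are illustrative assumptions.

\begin{verbatim}
import numpy as np

def sq_exp_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def vecchia_loglik(y, x, kernel, m=10, noise=1e-6):
    # log p(y) ~= sum_i log p(y_i | y of the previous m points in the ordering)
    n = len(y)
    ll = 0.0
    for i in range(n):
        nbrs = np.arange(max(0, i - m), i)
        k_ii = kernel(x[i:i + 1], x[i:i + 1])[0, 0] + noise
        if len(nbrs) == 0:
            mean, var = 0.0, k_ii
        else:
            K_nn = kernel(x[nbrs], x[nbrs]) + noise * np.eye(len(nbrs))
            k_in = kernel(x[i:i + 1], x[nbrs])[0]
            w = np.linalg.solve(K_nn, k_in)
            mean = w @ y[nbrs]
            var = k_ii - k_in @ w
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll
\end{verbatim}

Inside an EM loop, per-cluster Vecchia log-likelihoods of this kind would stand in for the exact $O(p^3)$ Gaussian likelihoods when computing cluster responsibilities.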
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models with a downstream application, and thus error quantification plays a key role. However, by ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting the solution of linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of the Gaussian process inference theorem to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models by blurring the boundaries between numerical analysis and Bayesian inference.
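As a toy illustration of the operator-observation viewpoint, the following sketch conditions a GP prior on observations made through a discretized linear operator, here a finite-difference Laplacian standing in for the general bounded linear operators treated in the paper; the kernel, grid, and right-hand side are illustrative assumptions.

\begin{verbatim}
import numpy as np

def se_kernel(x1, x2, ell=0.2, var=1.0):
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(0.0, 1.0, 100)          # solution grid
K = se_kernel(x, x)                      # GP prior covariance of the unknown u

# Observation operator: finite-difference Laplacian rows plus Dirichlet rows,
# so the "data" are A u = f for the 1D Poisson problem u'' = f, u(0) = u(1) = 0.
h = x[1] - x[0]
L = np.zeros((len(x) - 2, len(x)))
for i in range(len(x) - 2):
    L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0
L /= h ** 2
B = np.zeros((2, len(x))); B[0, 0] = 1.0; B[1, -1] = 1.0
A = np.vstack([L, B])
f = np.concatenate([-np.pi ** 2 * np.sin(np.pi * x[1:-1]), [0.0, 0.0]])

# Standard Gaussian conditioning on observations made through the operator A
G = A @ K @ A.T + 1e-8 * np.eye(A.shape[0])
mean = K @ A.T @ np.linalg.solve(G, f)             # approximate solution
cov = K - K @ A.T @ np.linalg.solve(G, A @ K)      # discretization-error estimate
\end{verbatim}

The posterior covariance returned here plays the role of the structured error estimate described in the abstract: it expresses how strongly the discretized operator observations constrain the solution.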
Multi-objective optimization is a class of decision-making problems in which multiple conflicting objectives are optimized. We study offline optimization of multi-objective policies from data collected by an existing policy. We propose a pessimistic estimator for the multi-objective policy values that can be easily plugged into existing formulas for hypervolume computation and optimized. The estimator is based on inverse propensity scores (IPS) and improves upon a naive IPS estimator in both theory and experiments. Our analysis is general and applies beyond our IPS estimators and the methods for optimizing them. The pessimistic estimator can be optimized by policy gradients and performs well in all of our experiments.
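A minimal sketch of a pessimistic (lower-confidence-bound) IPS estimate for vector-valued rewards is given below; the clipping threshold and the Hoeffding-style width are illustrative choices rather than the paper's bound, and rewards are assumed to lie in $[0,1]$.

\begin{verbatim}
import numpy as np

def pessimistic_ips(rewards, pi_new, pi_log, delta=0.05, clip=100.0):
    # rewards: (n, k) reward vectors of the logged actions, assumed in [0, 1]
    # pi_new, pi_log: (n,) probabilities of the logged action under the
    # evaluated policy and the logging policy
    w = np.minimum(pi_new / pi_log, clip)         # clipped importance weights
    est = (w[:, None] * rewards).mean(axis=0)     # per-objective IPS estimate
    width = clip * np.sqrt(np.log(2.0 / delta) / (2 * len(w)))
    return est - width                            # pessimistic (LCB) values
\end{verbatim}

The resulting per-objective vector can then be substituted for the naive IPS values in a standard hypervolume computation.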
We consider the massively parallel computation (MPC) model, which is a theoretical abstraction of large-scale parallel processing models such as MapReduce. In this model, assuming the widely believed 1-vs-2-cycles conjecture, solving many basic graph problems in $O(1)$ rounds with a strongly sublinear memory size per machine is impossible. We improve on the recent work of Holm and T\v{e}tek [SODA 2023], which bypasses this barrier for several problems when a planar embedding of the graph is given. In that work, on graphs of size $n$ with $O(n/\mathcal{S})$ machines, the memory size per machine needs to be at least $\mathcal{S} = n^{2/3+\Omega(1)}$, whereas we extend their work to the fully scalable regime, where the memory size per machine can be $\mathcal{S} = n^{\delta}$ for any constant $0< \delta < 1$. We give the first constant-round fully scalable algorithms for embedded planar graphs for the problems of (i) connectivity and (ii) minimum spanning tree (MST). Moreover, we show that the $\varepsilon$-emulator of Chang, Krauthgamer, and Tan [STOC 2022] can be incorporated into our recursive framework to obtain constant-round $(1+\varepsilon)$-approximation algorithms for the problems of computing (iii) single source shortest path (SSSP), (iv) global min-cut, and (v) $st$-max flow. All previous results on cuts and flows required linear memory in the MPC model. Furthermore, our results give new algorithms for problems that implicitly involve embedded planar graphs. We give as corollaries constant-round fully scalable algorithms for (vi) 2D Euclidean MST using $O(n)$ total memory and (vii) $(1+\varepsilon)$-approximate weighted edit distance using $\widetilde{O}(n^{2-\delta})$ memory. Our main technique is a recursive framework combined with novel graph drawing algorithms to compute smaller embedded planar graphs in constant rounds in the fully scalable setting.
Quantum circuit mapping is a crucial process in the quantum circuit compilation pipeline, facilitating the transformation of a logical quantum circuit into a list of instructions directly executable on a target quantum system. Recent research has introduced a post-compilation step known as remapping, which seeks to reconfigure the initial circuit mapping to mitigate quantum circuit errors arising from system variability. As quantum processors continue to scale in size, the efficiency of quantum circuit mapping and of the overall compilation process has become of paramount importance. In this work, we introduce a quantum circuit remapping algorithm that leverages the intrinsic symmetries in quantum processors, making it well-suited for large-scale quantum systems. This algorithm identifies all topologically equivalent circuit mappings by constraining the search space using symmetries and accelerates the scoring of each mapping using vector computation. Notably, this symmetry-based circuit remapping algorithm exhibits linear scaling with the number of qubits in the target quantum hardware and is proven to be optimal in terms of its time complexity. Moreover, we conduct a comparative analysis against existing methods in the literature, demonstrating the superior performance of our symmetry-based method on state-of-the-art quantum hardware architectures and highlighting the practical utility of our algorithm, particularly for quantum processors with millions of qubits.
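The vectorised scoring step can be illustrated as follows; the symmetry-based enumeration of topologically equivalent mappings is the paper's contribution and is not reproduced here, so \texttt{mappings} is simply assumed to be the candidate list, and the gate-count and error-rate inputs are illustrative.

\begin{verbatim}
import numpy as np

def score_mappings(mappings, counts_1q, counts_2q, qubit_err, edge_err):
    # mappings:  (M, L) physical index of each logical qubit, one row per candidate
    # counts_1q: (L,)   single-qubit gate counts per logical qubit
    # counts_2q: (L, L) two-qubit gate counts per logical qubit pair
    # qubit_err: (P,)   per-qubit error rates; edge_err: (P, P) per-coupler error rates
    one_q = (qubit_err[mappings] * counts_1q).sum(axis=1)
    pair_err = edge_err[mappings[:, :, None], mappings[:, None, :]]   # (M, L, L)
    two_q = (pair_err * counts_2q).sum(axis=(1, 2))
    return one_q + two_q                                              # (M,) scores

# best = mappings[np.argmin(score_mappings(mappings, c1, c2, qerr, eerr))]
\end{verbatim}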
Computational simulation is increasingly relied upon for high-consequence engineering decisions, and a foundational element of solid mechanics simulations, such as finite element analysis (FEA), is a credible constitutive or material model. Calibration of these complex models is an essential step; however, the selection, calibration, and validation of material models is often a discrete, multi-stage process that is decoupled from material characterization activities, which means the data collected does not always align with the data that is needed. To address this issue, an integrated workflow for delivering an enhanced characterization and calibration procedure, termed Interlaced Characterization and Calibration (ICC), is introduced. This framework leverages Bayesian optimal experimental design (BOED) to select the optimal load path for a cruciform specimen in order to collect the most informative data for model calibration. The critical first piece of algorithm development is to demonstrate the active experimental design for a fast model with simulated data. For this demonstration, a material point simulator that models a plane stress elastoplastic material subject to bi-axial loading was chosen. The ICC framework is demonstrated on two exemplar problems in which BOED is used to determine which load step to take, e.g., in which direction to increment the strain, at each iteration of the characterization and calibration cycle. Calibration results from data obtained by adaptively selecting the load path within the ICC algorithm are compared to results from data generated under two naive static load paths that were chosen a priori based on human intuition. In these exemplar problems, data generated in an adaptive setting resulted in calibrated model parameters with reduced measures of uncertainty compared to the static settings.
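A schematic of the adaptive selection loop is sketched below; \texttt{simulate} stands in for the material point model, and the variance-based expected-information-gain surrogate is an illustrative placeholder for the BOED criterion used within the ICC framework.

\begin{verbatim}
import numpy as np

def expected_information_gain(candidate, theta_samples, simulate, noise_sd):
    # Monte-Carlo surrogate: how much the predicted response varies across the
    # current parameter posterior, relative to the measurement noise
    preds = np.array([simulate(candidate, theta) for theta in theta_samples])
    return np.log(1.0 + preds.var(axis=0).sum() / noise_sd ** 2)

def choose_next_load_step(candidates, theta_samples, simulate, noise_sd=1e-3):
    # pick the strain increment expected to be most informative for calibration
    scores = [expected_information_gain(c, theta_samples, simulate, noise_sd)
              for c in candidates]
    return candidates[int(np.argmax(scores))]
\end{verbatim}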
Label embedding is a framework for multiclass classification problems where each label is represented by a distinct vector of some fixed dimension, and training involves matching model output to the vector representing the correct label. While label embedding has been successfully applied in extreme classification and zero-shot learning, and offers both computational and statistical advantages, its theoretical foundations remain poorly understood. This work presents an analysis of label embedding in the context of extreme multiclass classification, where the number of classes $C$ is very large. We present an excess risk bound that reveals a trade-off between computational and statistical efficiency, quantified via the coherence of the embedding matrix. We further show that under the Massart noise condition, the statistical penalty for label embedding vanishes with sufficiently low coherence. Our analysis supports an algorithm that is simple, scalable, and easily parallelizable, and experimental results demonstrate its effectiveness in large-scale applications.
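A minimal sketch of the label-embedding recipe analysed here: draw a random embedding matrix (which has low coherence with high probability), regress model outputs onto the label vectors, and decode by the nearest label vector. The dimensions below are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
C, d = 10_000, 128                               # many classes, short label codes
E = rng.standard_normal((C, d)) / np.sqrt(d)     # random matrix: low coherence w.h.p.

def encode(y):
    return E[y]                                  # regression targets during training

def decode(outputs):
    return (outputs @ E.T).argmax(axis=1)        # nearest label vector at prediction

# training fits any model g with g(x) ~ encode(y); prediction is decode(g(x))
\end{verbatim}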
Gaussian process (GP) hyperparameter optimization requires repeatedly solving linear systems with $n \times n$ kernel matrices. To address the prohibitive $\mathcal{O}(n^3)$ time complexity, recent work has employed fast iterative numerical methods, like conjugate gradients (CG). However, as datasets grow in size, the corresponding kernel matrices become increasingly ill-conditioned and still require $\mathcal{O}(n^2)$ space without partitioning. Thus, while CG increases the size of the datasets on which GPs can be trained, modern datasets reach scales beyond its applicability. In this work, we propose an iterative method which only accesses subblocks of the kernel matrix, effectively enabling \emph{mini-batching}. Our algorithm, based on alternating projection, has $\mathcal{O}(n)$ per-iteration time and space complexity, solving many of the practical challenges of scaling GPs to very large datasets. Theoretically, we prove our method enjoys linear convergence, and empirically we demonstrate its robustness to ill-conditioning. On large-scale benchmark datasets with up to four million datapoints our approach accelerates training by a factor of 2$\times$ to 27$\times$ compared to CG.
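The flavour of a sub-block iteration can be conveyed with a block Gauss-Seidel style sweep that only ever materialises row-blocks of the kernel matrix; this is a simplified sketch, not the paper's exact alternating-projection update, and the kernel interface is an assumption.

\begin{verbatim}
import numpy as np

def block_solve(x, y, kernel, noise, block_size=512, epochs=10):
    # Solve (K + noise*I) v = y while only materialising row-blocks of K.
    n = len(y)
    v = np.zeros(n)
    blocks = [np.arange(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    for _ in range(epochs):
        for B in blocks:
            K_Bn = kernel(x[B], x)                   # (b, n) rows of the kernel
            K_Bn[np.arange(len(B)), B] += noise      # add the noise diagonal
            r = y[B] - K_Bn @ v                      # residual of this block
            v[B] += np.linalg.solve(K_Bn[:, B], r)   # satisfy the block equations
    return v
\end{verbatim}

Each inner step touches only a block of rows, which is what keeps the per-iteration time and space linear in $n$ when the block size is fixed.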
Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies powered by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
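A rough sketch of a server-side, data-free generator step in this spirit is given below; the generator architecture, the \texttt{user\_heads} interface (each user's classifier head acting on shared feature vectors), and the plain cross-entropy ensemble loss are illustrative placeholders rather than the paper's exact formulation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    # conditional generator producing feature-level samples, no real data needed
    def __init__(self, z_dim=32, n_classes=10, feat_dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1))

def train_generator(gen, user_heads, n_classes, steps=200, batch=64, lr=1e-3):
    # server-side step: generated features should be classified as their label
    # by the ensemble of user classifier heads
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):
        y = torch.randint(0, n_classes, (batch,))
        z = torch.randn(batch, gen.emb.embedding_dim)
        logits = torch.stack([h(gen(z, y)) for h in user_heads]).mean(dim=0)
        loss = F.cross_entropy(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
    return gen
\end{verbatim}

The trained generator would then be broadcast to users so that its samples regularize local training.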
Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data. A key challenge in federated learning is to handle the heterogeneity of local data distributions across parties. Although many approaches have been proposed to address this challenge, we find that they fail to achieve high performance on image datasets with deep learning models. In this paper, we propose MOON: model-contrastive federated learning. MOON is a simple and effective federated learning framework. The key idea of MOON is to utilize the similarity between model representations to correct the local training of individual parties, i.e., conducting contrastive learning at the model level. Our extensive experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
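The model-contrastive idea can be written down compactly: pull the local model's representation toward the global model's representation and push it away from the previous round's local representation. The sketch below is a minimal rendering of that term; the temperature and its weight against the supervised loss are illustrative.

\begin{verbatim}
import torch
import torch.nn.functional as F

def moon_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    # z_*: (batch, dim) representations from the current local model, the global
    # model, and the previous round's local model on the same inputs
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    return -torch.log(torch.exp(pos) / (torch.exp(pos) + torch.exp(neg))).mean()

# local objective (sketch): F.cross_entropy(logits, y) + mu * moon_contrastive_loss(...)
\end{verbatim}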
Multi-relation Question Answering is a challenging task, due to the requirement of elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called Interpretable Reasoning Network that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual manipulation in predicting the final answer.
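A schematic of one reasoning hop in this style is sketched below; the attention, relation-prediction, and update rules are illustrative stand-ins for the model's actual components.

\begin{verbatim}
import torch
import torch.nn as nn

class HopCell(nn.Module):
    # one reasoning hop: attend over question tokens, predict a relation,
    # then update the question representation and the reasoning state
    def __init__(self, d, n_relations):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, d)
        self.attn = nn.Linear(2 * d, 1)
        self.rel_score = nn.Linear(d, n_relations)
        self.state_update = nn.GRUCell(d, d)

    def forward(self, q_tokens, q_vec, state):
        # decide which part of the question to analyse at this hop
        ctx = state.unsqueeze(1).expand_as(q_tokens)
        scores = self.attn(torch.cat([q_tokens, ctx], dim=-1)).squeeze(-1)
        a = torch.softmax(scores, dim=-1)
        focus = (a.unsqueeze(-1) * q_tokens).sum(dim=1)
        rel_logits = self.rel_score(focus + state)              # current relation
        rel = torch.softmax(rel_logits, dim=-1) @ self.rel_emb.weight
        q_vec = q_vec - rel                                     # "consume" the analysed part
        state = self.state_update(rel, state)                   # drive the next hop
        return q_vec, state, rel_logits
\end{verbatim}

The per-hop relation logits are what make the intermediate predictions traceable and amenable to manual inspection or correction.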