The computing resource needs of the LHC experiments are expected to continue growing significantly during Run 3 and into the HL-LHC era. The landscape of available resources will also evolve, as High Performance Computing (HPC) and Cloud resources will provide a comparable, or even dominant, fraction of the total compute capacity. The coming years therefore present a challenge for the experiments' resource provisioning models, both in terms of scalability and of increasing complexity. The CMS Submission Infrastructure (SI) provisions computing resources for CMS workflows. This infrastructure is built on a set of federated HTCondor pools, currently aggregating 400k CPU cores distributed worldwide and supporting the simultaneous execution of over 200k computing tasks. Incorporating HPC resources into CMS computing represents, first of all, an integration challenge, as HPC centers are much more diverse than Grid sites. Secondly, evolving the present SI, dimensioned to harness the current CMS computing capacity, to the resource scales required for the HL-LHC phase, while maintaining global flexibility and efficiency, will represent an additional challenge for the SI. To preemptively address potential future scalability limits, the SI team regularly runs tests to explore the maximum reach of our infrastructure. In this note, the integration of HPC resources into CMS offline computing is summarized, the potential concerns for the SI derived from the increased scale of operations are described, and the most recent results of scalability tests on the CMS SI are reported.
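As a side illustration of how the scale of such a federated pool can be inspected (not part of the note itself), the HTCondor Python bindings allow querying a pool's collector for its aggregate CPU capacity; the collector hostname below is a placeholder.

```python
# Hedged sketch: tally the CPU capacity of an HTCondor pool via the Python bindings.
# "collector.example.org" is a placeholder for a real pool collector address.
import htcondor

coll = htcondor.Collector("collector.example.org")
slots = coll.query(htcondor.AdTypes.Startd, projection=["Name", "Cpus", "State"])

total_cpus = sum(ad.get("Cpus", 0) for ad in slots)
busy_cpus = sum(ad.get("Cpus", 0) for ad in slots if ad.get("State") == "Claimed")
print(f"slots={len(slots)} total_cpus={total_cpus} busy_cpus={busy_cpus}")
```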
We analyze the anti-symmetric properties of a spectral discretization for the one-dimensional Vlasov-Poisson equations. The discretization is based on a spectral expansion in velocity with the symmetrically weighted Hermite basis functions, central finite differencing in space, and an implicit Runge-Kutta integrator in time. The proposed discretization preserves the anti-symmetric structure of the advection operator in the Vlasov equation, resulting in a stable numerical method. We apply this discretization to two formulations: the canonical Vlasov-Poisson equations and their continuously transformed square-root representation. The latter preserves the positivity of the particle distribution function. We derive analytically the conservation properties of both formulations, including particle number, momentum, and energy, which are verified numerically on the following benchmark problems: manufactured solution, linear and nonlinear Landau damping, two-stream instability, bump-on-tail instability, and ion-acoustic wave.
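For reference, a minimal statement of the underlying 1D-1V system and of the symmetrically weighted Hermite expansion in velocity (normalization and sign conventions are assumed here and may differ from those used in the paper):

```latex
% 1D-1V Vlasov--Poisson system (electron distribution f, electric field E);
% sign and normalization conventions assumed for illustration.
\begin{align}
  \partial_t f + v\,\partial_x f - E\,\partial_v f &= 0,\\
  \partial_x E &= 1 - \int_{\mathbb{R}} f(x,v,t)\,\mathrm{d}v.
\end{align}
% Symmetrically weighted Hermite expansion of the velocity dependence:
\begin{equation}
  f(x,v,t) \approx \sum_{n=0}^{N-1} C_n(x,t)\,\psi_n(v),
  \qquad
  \psi_n(v) = \frac{H_n(v)\,e^{-v^2/2}}{\sqrt{2^n\, n!\,\sqrt{\pi}}}.
\end{equation}
```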
Energy measurement of computing devices, which are widely used in the Internet of Things (IoT), is an important yet challenging task. Most of these IoT devices lack ready-to-use hardware or software for power measurement. A cost-effective solution is to use low-end consumer-grade power meters; however, such meters cannot provide accurate instantaneous power measurements. In this paper, we propose an easy-to-use approach to derive an instantaneous software-based energy estimation model using only low-end power meters, based on data-driven analysis through machine learning. Our solution is demonstrated with a Jetson Nano board and a Ruideng UM25C USB power meter. Various machine learning methods, combined with our smart data collection method and physical measurements, are explored. Benchmarks are used to evaluate the derived software power model for the Jetson Nano board and the Raspberry Pi. The results show that 92% accuracy can be achieved compared to long-duration measurements. A kernel module that collects the required runtime traces of utilization and frequencies is developed and, together with the derived power model, enables power prediction for programs running in real environments.
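As a rough sketch of the data-driven step (the feature set, the synthetic stand-in data, and the linear model form below are illustrative assumptions, not the paper's exact setup), one could regress the meter's windowed average power on utilization and frequency traces and then reuse the fitted model for instantaneous prediction:

```python
# Hedged sketch: fit a software power model from low-rate power-meter readings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# In practice X and y come from collected traces and meter logs; synthetic stand-ins here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4)) * [100, 100, 1.5, 0.9]   # CPU %, GPU %, CPU GHz, GPU GHz
y = 1.5 + 0.02 * X[:, 0] * X[:, 2] + 0.05 * X[:, 1] * X[:, 3] + rng.normal(0, 0.1, 200)

model = Ridge(alpha=1.0).fit(X, y)                       # windowed-average power model (W)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())

def predict_power(cpu_util, gpu_util, cpu_ghz, gpu_ghz):
    """Instantaneous power estimate (W) from runtime counters."""
    return model.predict([[cpu_util, gpu_util, cpu_ghz, gpu_ghz]])[0]

print(predict_power(50, 20, 1.2, 0.8))
```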
The computational challenges of Large Language Model (LLM) inference remain a significant barrier to their widespread deployment, especially as prompt lengths continue to increase. Due to the quadratic complexity of the attention computation, it takes 30 minutes for an 8B LLM to process a prompt of 1M tokens (i.e., the pre-filling stage) on a single A100 GPU. Existing methods for speeding up pre-filling often fail to maintain acceptable accuracy or efficiency when applied to long-context LLMs. To address this gap, we introduce MInference (Million-tokens Inference), a sparse calculation method designed to accelerate the pre-filling of long sequences. Specifically, we identify three unique patterns in long-context attention matrices (the A-shape, Vertical-Slash, and Block-Sparse patterns) that can be leveraged for efficient sparse computation on GPUs. We determine the optimal pattern for each attention head offline and dynamically build sparse indices based on the assigned pattern during inference. With the pattern and sparse indices, we perform efficient sparse attention calculations via our optimized GPU kernels to significantly reduce the latency in the pre-filling stage of long-context LLMs. Our proposed technique can be directly applied to existing LLMs without any modifications to the pre-training setup or additional fine-tuning. By evaluating on a wide range of downstream tasks, including InfiniteBench, RULER, PG-19, and Needle In A Haystack, and on models including LLaMA-3-1M, GLM4-1M, Yi-200K, Phi-3-128K, and Qwen2-128K, we demonstrate that MInference effectively reduces inference latency by up to 10x for pre-filling on an A100 while maintaining accuracy. Our code is available at //aka.ms/MInference.
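To make the pattern idea concrete, below is an illustrative construction of an A-shape mask (a block of initial "sink" tokens plus a local causal window), applied here with plain dense masked attention for clarity; this is not MInference's optimized kernel, and the sink and window sizes are arbitrary assumptions.

```python
# Toy A-shape sparse attention mask: global "sink" columns plus a local causal band.
# Dense masked attention is used for readability; real speedups require sparse kernels.
import torch

def a_shape_mask(seq_len, n_sink=64, window=512, device="cpu"):
    i = torch.arange(seq_len, device=device).unsqueeze(1)   # query index
    j = torch.arange(seq_len, device=device).unsqueeze(0)   # key index
    causal = j <= i
    keep = (j < n_sink) | (i - j < window)                  # A-shape: sinks + local band
    return causal & keep                                     # True = attend

def masked_attention(q, k, v, mask):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

L, d = 1024, 64
q = k = v = torch.randn(L, d)
out = masked_attention(q, k, v, a_shape_mask(L))
print(out.shape)  # torch.Size([1024, 64])
```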
One of the primary areas of interest in High Performance Computing is the improvement of the performance of parallel workloads. Nowadays, optimization tasks on compilable source code that employ deep learning often exploit LLVM Intermediate Representations (IRs) for extracting features from source code. Most such works target specific tasks or are designed around a pre-defined set of heuristics. So far, pre-trained models are rare in this domain, but the possibilities have been widely discussed. In particular, approaches mimicking large language models (LLMs) have been proposed, but these have prohibitively large training costs. In this paper, we propose MIREncoder, a Multi-modal IR-based Auto-Encoder that can be pre-trained to generate a learned embedding space to be used for downstream tasks by machine-learning-based approaches. A multi-modal approach enables us to better extract features from compilable programs and to better model code syntax, semantics, and structure. For code-based performance optimizations, these features are very important when making optimization decisions. A pre-trained model/embedding implicitly enables the use of transfer learning and helps move away from task-specific trained models. Additionally, a pre-trained model used for downstream performance optimization should itself have reduced overhead and be easy to use. These considerations have led us to propose a modeling approach that i) understands code semantics and structure, ii) enables the use of transfer learning, and iii) is small and simple enough to be easily re-purposed or reused even with low resource availability. Our evaluations show that our proposed approach can outperform the state of the art while reducing overhead.
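As a toy sketch of what a multi-modal auto-encoder over IR-derived inputs could look like (the two modalities, layer sizes, and fusion scheme below are assumptions for illustration, not MIREncoder's actual architecture):

```python
# Toy multi-modal auto-encoder over two IR-derived views (token IDs and a numeric
# structure/graph feature vector). Sizes and design are illustrative assumptions.
import torch
import torch.nn as nn

class TinyMultiModalAE(nn.Module):
    def __init__(self, vocab=8192, d=256, struct_dim=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)
        self.tok_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2)
        self.struct_enc = nn.Sequential(nn.Linear(struct_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.fuse = nn.Linear(2 * d, d)              # shared embedding space
        self.tok_dec = nn.Linear(d, vocab)           # reconstruct token distribution
        self.struct_dec = nn.Linear(d, struct_dim)   # reconstruct structure features

    def forward(self, tok_ids, struct_feats):
        t = self.tok_enc(self.tok_emb(tok_ids)).mean(dim=1)   # (B, d) pooled token view
        s = self.struct_enc(struct_feats)                      # (B, d) structure view
        z = self.fuse(torch.cat([t, s], dim=-1))               # fused embedding
        return z, self.tok_dec(z), self.struct_dec(z)

z, tok_logits, struct_rec = TinyMultiModalAE()(torch.randint(0, 8192, (2, 128)),
                                               torch.randn(2, 64))
print(z.shape)  # torch.Size([2, 256])
```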
We propose a novel framework of generalised Petrov-Galerkin Dynamical Low Rank Approximations (DLR) in the context of random PDEs. It builds on the standard Dynamical Low Rank Approximations in their Dynamically Orthogonal formulation and allows many standard, well-studied stabilisation techniques, which can be framed as either generalised Galerkin or Petrov-Galerkin methods, to be built in seamlessly. The framework is subsequently applied to the case of Streamline Upwind/Petrov-Galerkin (SUPG) stabilisation of advection-dominated problems with small stochastic perturbations of the transport field. The norm-stability properties of two time discretisations are analysed. Numerical experiments confirm that the stabilising properties of the SUPG method naturally carry over to the DLR framework.
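For context, the deterministic SUPG-stabilised weak form of a linear advection problem, which the framework is meant to carry over to the DLR setting, reads as follows (a textbook statement; the conventions and the scaling of the stabilisation parameter are assumed here):

```latex
% SUPG-stabilised semi-discrete weak form for \partial_t u + a\cdot\nabla u = f
% (textbook statement; conventions and the choice of \tau are assumed).
\begin{equation}
  \bigl( \partial_t u_h + a\cdot\nabla u_h - f,\; v_h + \tau\, a\cdot\nabla v_h \bigr) = 0
  \qquad \forall\, v_h \in V_h,
  \qquad \tau \sim \frac{h}{2\,\lVert a \rVert}.
\end{equation}
```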
Despite being trained on massive and diverse datasets, speech self-supervised encoders are generally used for downstream purposes as mere frozen feature extractors or as model initializers before fine-tuning. The former severely limits the exploitation of large encoders, while the latter hurts the robustness acquired during pretraining, especially in low-resource scenarios. This work explores middle-ground solutions, conjecturing that reducing the forgetting of the self-supervised task during downstream fine-tuning leads to better generalization. To test this, focusing on speech recognition, we benchmark different continual-learning approaches during fine-tuning and show that they improve both in-domain and out-of-domain generalization abilities. Relative performance gains reach 15.7% and 22.5% with XLSR used as the encoder on two speech recognition tasks, in English and Danish. Further probing experiments show that these gains are indeed linked to less forgetting.
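As one concrete example of a continual-learning ingredient in this setting (an L2 penalty toward the pretrained encoder weights, in the spirit of L2-SP; not necessarily one of the exact methods benchmarked here), the fine-tuning loss could be augmented as follows:

```python
# Sketch of an L2-SP-style regulariser: penalise drift from the pretrained encoder
# weights during ASR fine-tuning to limit forgetting. Illustrative only.
import torch

def make_drift_penalty(encoder, weight=1e-3):
    anchor = {n: p.detach().clone() for n, p in encoder.named_parameters()}
    def penalty():
        drift = sum(((p - anchor[n]) ** 2).sum() for n, p in encoder.named_parameters())
        return weight * drift
    return penalty

enc = torch.nn.Linear(4, 4)            # stand-in for a pretrained speech encoder
drift_penalty = make_drift_penalty(enc)
print(drift_penalty())                 # zero before any fine-tuning update

# Inside a training step (loss_asr computed by the downstream head):
#   loss = loss_asr + drift_penalty()
#   loss.backward(); optimizer.step()
```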
We propose ParaPIF, a parareal-based time parallelization scheme for the particle-in-Fourier (PIF) discretization of the Vlasov-Poisson system used in kinetic plasma simulations. Our coarse propagators are based on coarsening of the numerical discretization scheme, combined where possible with temporal coarsening, rather than coarsening of particles and/or Fourier modes, which is neither possible nor effective for PIF schemes. Specifically, we use PIF with a coarse tolerance for the nonuniform FFTs, or even standard particle-in-cell schemes, as coarse propagators for the ParaPIF algorithm. We state and prove the convergence of the algorithm and verify the results numerically with Landau damping, two-stream instability, and Penning trap test cases in 3D-3V. We also implement the space-time parallelization of the PIF schemes in the open-source, performance-portable library IPPL and conduct scaling studies up to 1536 A100 GPUs on the JUWELS Booster supercomputer. The space-time parallelization utilizing the ParaPIF algorithm for the time parallelization provides a $4$-$6\times$ speedup compared to spatial parallelization alone and achieves a push rate of around 1 billion particles per second for the benchmark plasma mini-apps considered.
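For reference, ParaPIF builds on the standard parareal correction iteration, sketched below with assumed notation, where $\mathcal{F}$ denotes the fine propagator (accurate PIF) and $\mathcal{G}$ the coarse one (coarse-tolerance PIF or PIC):

```latex
% Standard parareal iteration over time slices [t_n, t_{n+1}] (notation assumed):
\begin{equation}
  U_{n+1}^{k+1}
  = \mathcal{G}\!\left(t_{n+1}, t_n, U_n^{k+1}\right)
  + \mathcal{F}\!\left(t_{n+1}, t_n, U_n^{k}\right)
  - \mathcal{G}\!\left(t_{n+1}, t_n, U_n^{k}\right),
  \qquad U_0^{k} = U(t_0).
\end{equation}
```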
Neural networks that operate over more advanced algebras than the real numbers perform better in some applications. However, there is no general framework for constructing hypercomplex neural networks. We propose a library, integrated with Keras, that can perform its computations within TensorFlow and PyTorch. It provides Dense layers as well as 1D, 2D, and 3D Convolutional layer architectures.
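To illustrate the kind of arithmetic such layers implement (a from-scratch toy based on the quaternion Hamilton product, not the proposed library's own API), a quaternion-valued dense layer can be written as an ordinary Keras layer:

```python
# Toy quaternion Dense layer in Keras, illustrating hypercomplex (Hamilton product)
# arithmetic. This is not the API of the proposed library.
import tensorflow as tf

class QuaternionDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units  # output units per quaternion component

    def build(self, input_shape):
        in_q = input_shape[-1] // 4  # input is split into 4 quaternion components
        init = tf.keras.initializers.GlorotUniform()
        self.wr, self.wi, self.wj, self.wk = (
            self.add_weight(shape=(in_q, self.units), initializer=init) for _ in range(4))

    def call(self, x):
        r, i, j, k = tf.split(x, 4, axis=-1)
        out_r = r @ self.wr - i @ self.wi - j @ self.wj - k @ self.wk
        out_i = r @ self.wi + i @ self.wr + j @ self.wk - k @ self.wj
        out_j = r @ self.wj - i @ self.wk + j @ self.wr + k @ self.wi
        out_k = r @ self.wk + i @ self.wj - j @ self.wi + k @ self.wr
        return tf.concat([out_r, out_i, out_j, out_k], axis=-1)

print(QuaternionDense(8)(tf.random.normal((2, 16))).shape)  # (2, 32)
```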
Recently, pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks, including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with the higher likelihood is deemed more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP achieves strong performance under both zero- and low-resource IR settings. The code and pre-trained models are available at //github.com/Albert-Ma/PROP.
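A minimal sketch of the pairwise sampling idea, simplified to a unigram document language model (the paper's actual sampling and scoring procedure may differ):

```python
# Toy representative-words pair construction: sample two word sets from a document's
# unigram language model and rank them by likelihood. Simplified for illustration.
import numpy as np
from collections import Counter

def rop_pair(doc_tokens, set_size=5, rng=np.random.default_rng(0)):
    counts = Counter(doc_tokens)
    vocab = list(counts)
    probs = np.array([counts[w] for w in vocab], dtype=float)
    probs /= probs.sum()
    sets = [rng.choice(vocab, size=set_size, replace=True, p=probs) for _ in range(2)]
    loglik = [np.sum(np.log([probs[vocab.index(w)] for w in s])) for s in sets]
    pos, neg = (sets[0], sets[1]) if loglik[0] >= loglik[1] else (sets[1], sets[0])
    return list(pos), list(neg)   # (more representative, less representative)

pos, neg = rop_pair("the cat sat on the mat the cat".split())
print(pos, neg)
```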
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
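For instance, the "one additional output layer" recipe corresponds to the sequence-classification heads that today's toolkits expose; a minimal sketch using the Hugging Face transformers API (not the original BERT codebase, and the toy inputs and labels are placeholders):

```python
# Minimal sketch of BERT fine-tuning with a single added classification head,
# via the Hugging Face transformers API rather than the original release.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tok(["a question", "its candidate answer"], padding=True, return_tensors="pt")
out = model(**batch, labels=torch.tensor([1, 0]))
out.loss.backward()        # one step of the fine-tuning loop; optimizer step omitted
print(out.logits.shape)    # torch.Size([2, 2])
```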