The Gaia Astrometric Verification Unit-Global Sphere Reconstruction (AVU-GSR) Parallel Solver aims to find the astrometric parameters for $\sim$10$^8$ stars in the Milky Way, the attitude and instrumental specifications of the Gaia satellite, and the global parameter $\gamma$ of the post-Newtonian formalism. The code iteratively solves a system of linear equations, $\mathbf{A} \times \vec{x} = \vec{b}$, where the coefficient matrix $\mathbf{A}$ is large ($\sim$$10^{11} \times 10^8$ elements) and sparse. To solve this system, the code exploits a hybrid implementation of the iterative PC-LSQR algorithm, where the computation related to different horizontal portions of the coefficient matrix is assigned to separate MPI processes. In the original code, each matrix portion is further parallelized over OpenMP threads. To further improve performance, we ported the application to the GPU, replacing the OpenMP parallelization with OpenACC. In this port, $\sim$95% of the data is copied from the host to the device at the beginning of the entire cycle of iterations, making the code \textit{compute bound} rather than \textit{data-transfer bound}. The OpenACC code presents a speedup of $\sim$1.5 over the OpenMP version, and further optimizations are in progress to obtain higher gains. The code runs on multiple GPUs and was tested on the CINECA supercomputer Marconi100, in anticipation of a port to the pre-exascale system Leonardo, which will be installed at CINECA in 2022.
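As a rough illustration of this parallel scheme, each LSQR iteration requires two products with $\mathbf{A}$, and both decompose over horizontal blocks of the matrix. The following serial numpy sketch (an illustration with toy dimensions, not the AVU-GSR code) mimics the per-process work:

```python
# Serial sketch of the row-block decomposition used by one LSQR iteration.
# Each "process" owns a horizontal portion of A (toy sizes, dense here).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 5))          # full system matrix (toy scale)
blocks = np.array_split(A, 3, axis=0)     # one horizontal portion per process
x = rng.standard_normal(5)
u = rng.standard_normal(12)
u_blocks = np.array_split(u, 3)

# Forward product A @ x: each process computes its own rows independently.
u_local = [Ap @ x for Ap in blocks]
assert np.allclose(np.concatenate(u_local), A @ x)

# Transpose product A.T @ u: each process forms a partial sum over its rows;
# an MPI reduction (e.g., Allreduce) would combine the partials.
v_partial = [Ap.T @ up for Ap, up in zip(blocks, u_blocks)]
assert np.allclose(sum(v_partial), A.T @ u)
```

Keeping each block resident on its GPU across the whole iteration cycle, as described above, is what shifts the bottleneck from data transfer to computation.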
Three-player Number On the Forehead communication may be thought of as a three-player Number In the Hand promise model, in which each player is given the inputs that are supposedly on the other two players' heads, and promised that they are consistent with the inputs of the other players. The set of all allowed inputs under this promise may be thought of as an order-3 tensor. Surprisingly, we observe that this tensor is exactly the matrix multiplication tensor, which is widely studied in the design of fast matrix multiplication algorithms. Using this connection, we prove a number of results about both Number On the Forehead communication and matrix multiplication, each by using known results or techniques about the other. For example, we show how the Laser method, a key technique used to design the best matrix multiplication algorithms, can also be used to design communication protocols for a variety of problems. We also show how known lower bounds for Number On the Forehead communication can be used to bound properties of the matrix multiplication tensor, such as its zeroing-out subrank. Finally, we substantially generalize known methods based on slice-rank for studying communication, and show how they directly relate to the matrix multiplication exponent $\omega$.
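For reference, the matrix multiplication tensor mentioned here has the standard form
\[
\langle a, b, c \rangle \;=\; \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{c} x_{ij} \otimes y_{jk} \otimes z_{ki},
\]
and the exponent $\omega$ referenced at the end is defined from its rank: $\omega = \inf\{\tau : R(\langle n, n, n \rangle) = O(n^{\tau})\}$.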
Data-to-text (D2T) and text-to-data (T2D) are dual tasks that convert structured data, such as graphs or tables, into fluent text, and vice versa. These tasks are usually handled separately and use corpora extracted from a single source. Current systems leverage pre-trained language models fine-tuned on D2T or T2D tasks. This approach has two main limitations: first, a separate system has to be tuned for each task and each source; second, learning is limited by the scarcity of available corpora. This paper considers a more general scenario where data are available from multiple heterogeneous sources. Each source, with its specific data format and semantic domain, provides a non-parallel corpus of text and structured data. We introduce a variational auto-encoder model with disentangled style and content variables that allows us to represent the diversity that stems from multiple sources of text and data. Our model is designed to handle the tasks of D2T and T2D jointly. We evaluate our model on several datasets, and show that by learning from multiple sources, our model closes the performance gap with its supervised single-source counterpart and outperforms it in some cases.
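A minimal sketch of the disentangling idea (assuming PyTorch; the architecture, dimensions, and names here are illustrative, not the paper's model): the encoder emits two independent Gaussian latents, one for source-specific style and one for content.

```python
# Toy VAE encoder with separate "style" and "content" latent variables.
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, vocab=1000, emb=64, style_dim=8, content_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, 128, batch_first=True)
        # Separate heads parameterize the two independent Gaussians.
        self.style_mu = nn.Linear(128, style_dim)
        self.style_logvar = nn.Linear(128, style_dim)
        self.content_mu = nn.Linear(128, content_dim)
        self.content_logvar = nn.Linear(128, content_dim)

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, tokens):
        _, h = self.encoder(self.embed(tokens))   # h: (1, B, 128)
        h = h.squeeze(0)
        s = self.reparameterize(self.style_mu(h), self.style_logvar(h))
        c = self.reparameterize(self.content_mu(h), self.content_logvar(h))
        return s, c  # a decoder would condition on (s, c)

x = torch.randint(0, 1000, (4, 16))  # batch of 4 token sequences
style, content = DisentangledVAE()(x)
```

A decoder conditioned on $(s, c)$ can then swap the style variable to target a different source's format while preserving the content.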
In recent years, large-scale models have demonstrated state-of-the-art performance across various domains. However, training such models requires various techniques to address the problem of limited computing power and memory on devices such as GPUs. Commonly used techniques include pipeline parallelism, tensor parallelism, and activation checkpointing. While existing works have focused on finding efficient distributed execution plans (Zheng et al. 2022) and activation checkpoint scheduling (Herrmann et al. 2019; Beaumont et al. 2021), no method has been proposed to optimize these two plans jointly. Moreover, ahead-of-time compilation relies heavily on accurate memory and computing overhead estimation, which is often time-consuming and misleading. Existing training systems and machine learning pipelines either physically execute each operator or estimate memory usage with a scaled input tensor. To address these challenges, we introduce a system that can jointly optimize distributed execution and gradient checkpointing plans. Additionally, we provide an easy-to-use symbolic profiler that generates memory and computing statistics for any PyTorch model at minimal time cost. Our approach allows users to parallelize their model training on the given hardware with minimal code changes. The source code is publicly available at https://github.com/hpcaitech/ColossalAI
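To illustrate what symbolic profiling buys (a sketch using PyTorch's public meta device as a stand-in, not Colossal-AI's profiler): shapes and dtypes propagate without allocating storage or launching kernels, so memory statistics come at negligible cost.

```python
# Estimate parameter memory without touching GPU or RAM: tensors created on
# the "meta" device carry only shape/dtype metadata (requires PyTorch >= 2.0).
import torch
import torch.nn as nn

with torch.device("meta"):
    model = nn.Sequential(nn.Linear(4096, 4096), nn.GELU(), nn.Linear(4096, 4096))

param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"parameters: {param_bytes / 2**20:.1f} MiB")  # computed symbolically
```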
The current certification process for aerospace software is not adapted to "AI-based" algorithms such as deep neural networks. Unlike traditional aerospace software, the precise parameters optimized during neural network training are as important as (or more important than) the code processing the network, and they are not directly mathematically understandable. Despite their lack of explainability, such algorithms are appealing because, for some applications, they can exhibit high performance unattainable with any traditional, explicit line-by-line software method. This paper proposes a framework and principles that could be used to establish certification methods for neural network models for which current certification processes such as DO-178 cannot be applied. While it is not a magic recipe, it is a set of common-sense steps that will allow the applicant and the regulator to increase their confidence in the developed software, by demonstrating the capability to bring together, trace, and track the requirements, data, software, training process, and test results.
The increase in the number of counterfeit and recycled microelectronic chips in recent years has created significant security and safety concerns in various applications. Hence, detecting such counterfeit chips in electronic systems is critical before deployment in the field. Unfortunately, conventional verification tools using physical inspection and side-channel methods are costly, unscalable, error-prone, and often incompatible with legacy systems. This paper introduces a generic, non-invasive, and low-cost counterfeit chip detection method based on characterizing the impedance of the system's power delivery network (PDN). Our method relies on the fact that the impedance of counterfeit and recycled chips differs from that of genuine ones. To sense such impedance variations confidently, we use scattering parameters, which are frequently employed for impedance characterization of RF/microwave circuits. Our approach can be applied directly to chips soldered on the system's PCB and does not require any modification of legacy systems. To validate our claims, we perform extensive measurements on genuine and aged samples from two families of STMicroelectronics chips to assess the effectiveness of the proposed approach.
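The underlying conversion is standard: a reflection coefficient $S_{11}$ measured at a PDN port maps to an input impedance through $Z = Z_0 (1 + S_{11}) / (1 - S_{11})$. A small sketch with hypothetical values (not the paper's measurements):

```python
# Convert measured reflection coefficients S11 into input impedances.
import numpy as np

Z0 = 50.0                                      # reference impedance in ohms
s11 = np.array([0.10 + 0.05j, -0.20 + 0.10j])  # hypothetical measurements
Z_in = Z0 * (1 + s11) / (1 - s11)
print(Z_in)  # deviations from a golden sample's impedance profile flag suspects
```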
Cross-sectional strategies are a classical and popular trading style, with recent high-performing variants incorporating sophisticated neural architectures. While these strategies have been applied successfully to data-rich settings involving mature assets with long histories, deploying them on instruments with limited samples generally produces over-fitted models with degraded performance. In this paper, we introduce Fused Encoder Networks -- a novel, hybrid, parameter-sharing transfer ranking model. The model fuses information extracted by an encoder-attention module operating on a source dataset with that from a similar but separate module focused on a smaller target dataset of interest. This mitigates the poor generalisability that comes from training on scarce target data. Additionally, the self-attention mechanism enables interactions among instruments to be accounted for, not just at the loss level during model training, but also at inference time. Focusing on momentum applied to the top ten cryptocurrencies by market capitalisation as a demonstrative use-case, Fused Encoder Networks outperform the reference benchmarks on most performance measures, delivering a three-fold boost in the Sharpe ratio over classical momentum, as well as an improvement of approximately 50% against the best benchmark model before transaction costs. The model continues to outperform the baselines even after accounting for the high transaction costs associated with trading cryptocurrencies.
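For reference, the headline metric above is the annualized Sharpe ratio; a minimal sketch with a synthetic daily return series (cryptocurrencies trade every calendar day, hence the factor of 365):

```python
# Annualized Sharpe ratio of a (synthetic) daily strategy return series.
import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.01, 365)  # hypothetical daily PnL
sharpe = np.sqrt(365) * daily_returns.mean() / daily_returns.std()
print(f"annualized Sharpe: {sharpe:.2f}")
```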
There has been an on-going philosophical debate on whether artificial life models, also known as digital organisms, are truly alive. The main difficulty appears to be finding an encompassing and definite definition of life. By examining similarities and differences in recent definitions of life, we define life as "any system with a boundary to confine the system within a definite volume and protect it from external effects, consisting of a program that is capable of improvisation, able to react and adapt to the environment, able to regenerate parts of itself or its entirety, and with an energy system comprising non-interfering sets of secluded reactions for self-sustenance. Any incomplete system containing a program that can be re-assembled into a living system, thereby converting the re-assembled system to the purpose of the incomplete system, is also considered alive." Using this definition, we argue that digital organisms may not be the boundary case of life, even though some digital organisms are not considered alive; we thereby take the view that some forms of digital organisms can be considered alive. In addition, we present an experimental framework, based on continuity of the overall system and potential discontinuity of elements within the system, for testing future definitions of life.
The paper presents a strategy to construct an incremental Singular Value Decomposition (SVD) for time-evolving, spatially 3D discrete data sets. A low-memory-access procedure for reducing and deploying the snapshot data is presented. The considered examples refer to Computational Fluid Dynamics (CFD) results extracted from unsteady flow simulations, which are computed in parallel using domain decomposition strategies. The framework addresses state-of-the-art PDE solvers dedicated to practical applications. Although the approach is applied to technical flows, it is applicable in similar applications under the umbrella of Computational Science and Engineering (CSE). To this end, we introduce a bunch matrix that allows the aggregation of multiple time steps and SVD updates, and significantly increases the computational efficiency. The incremental SVD strategy is initially verified and validated by simulating the 2D laminar single-phase flow around a circular cylinder. Subsequent studies analyze the proposed strategy for a 2D submerged hydrofoil located in turbulent two-phase flow. Attention is directed to the accuracy of the SVD-based reconstruction in terms of local and global flow quantities, their physical realizability, the independence of the domain partitioning, and related implementation aspects. Moreover, the influence of lower and (adaptive) upper construction rank thresholds on both the effort and the accuracy is assessed. The incremental SVD process is applied to analyze and compress the predicted flow field around a Kriso container ship in harmonic head waves at $Fn = 0.26$ and $Re_L = 1.4 \times 10^7$. With a numerical overhead of $O(10\%)$, the snapshot matrix of size $O(10^8 \times 10^4)$, computed on approximately 3000 processors, can be incrementally compressed by $O(95\%)$. The storage reduction is accompanied by errors in integral force and local wave elevation quantities of $O(10^{-2}\%)$.
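The flavor of the update can be sketched in a few lines of numpy: appending a bunch $B$ of snapshot columns reduces to one small dense SVD of size (current rank + bunch width). This is a Brand-style rank-truncated append for illustration, not the paper's parallel implementation:

```python
# Rank-truncated incremental SVD: append a bunch of columns B at once.
import numpy as np

def svd_append(U, S, Vt, B, rank):
    """Update the SVD of X ~ U @ diag(S) @ Vt after appending columns B."""
    proj = U.T @ B                        # part of B inside span(U)
    Q, R = np.linalg.qr(B - U @ proj)     # new directions orthogonal to U
    k = len(S)
    K = np.block([[np.diag(S), proj],
                  [np.zeros((R.shape[0], k)), R]])
    Uk, Sk, Vtk = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    V_old = np.block([[Vt.T, np.zeros((Vt.shape[1], B.shape[1]))],
                      [np.zeros((B.shape[1], k)), np.eye(B.shape[1])]])
    Vt_new = (V_old @ Vtk.T).T
    return U_new[:, :rank], Sk[:rank], Vt_new[:rank]

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 40))  # rank-12 snapshots
U, S, Vt = np.linalg.svd(X[:, :10], full_matrices=False)
for j in range(10, 40, 6):                # process bunches of 6 snapshots
    U, S, Vt = svd_append(U, S, Vt, X[:, j:j+6], rank=15)
err = np.linalg.norm(U @ np.diag(S) @ Vt - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

Aggregating multiple time steps per update is exactly what the bunch matrix enables, amortizing the cost of the small dense SVD over many snapshots.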
It often goes unnoticed that the predominant way to use collocation methods is fundamentally flawed when applied to optimal control in robotics. Such methods assume that the system dynamics are given by a first-order ODE, whereas robots are often governed by second- or higher-order ODEs involving configuration variables and their time derivatives. To apply a collocation method, therefore, the usual practice is to resort to the well-known procedure of casting an $M$th-order ODE into $M$ first-order ones. This manipulation, which is perfectly valid in the continuous domain, leads to inconsistencies when the problem is discretized. Since the configuration variables and their time derivatives are approximated by polynomials of the same degree, their differential dependencies cannot be fulfilled, and the actual dynamics is not satisfied, not even at the collocation points. This paper draws attention to this problem, and develops improved versions of the trapezoidal and Hermite-Simpson collocation methods that do not present these inconsistencies. In many cases, the new methods reduce the dynamic transcription error by one order of magnitude, or even more, without noticeably increasing the cost of computing the solutions.
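Concretely, the reduction in question is
\[
\dot q = v, \qquad \dot v = f(q, v, u) \quad \Longrightarrow \quad x = (q, v), \quad \dot x = F(x, u),
\]
and the flaw appears upon discretization: if $q$ and $v$ are approximated by polynomials of the same degree $d$ on each segment, then $\dot q$ has degree $d-1$, so the coupling constraint $v = \dot q$ can hold only at isolated points, and the underlying second-order dynamics $\ddot q = f(q, \dot q, u)$ is not enforced even at the collocation points.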
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning can provide more rational advice than humans are capable of in many aspects of daily life. However, despite this achievement, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for non-expert users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
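To ground the discussion, a hyper-parameter search with one of the surveyed toolkits typically reduces to a few lines. A minimal Optuna sketch with a stand-in objective (illustrative, not taken from the paper):

```python
# Hyper-parameter search with Optuna (TPE sampler by default).
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)  # log-uniform range
    layers = trial.suggest_int("layers", 1, 8)
    # Stand-in for a real train/validate cycle returning a validation loss.
    return (lr - 1e-3) ** 2 + 0.01 * layers

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```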