The purpose of this work is to present an effective tool for computing different QR-decompositions of a complex nonsingular square matrix. The concept of the discrete signal-induced heap transform (DsiHT, Grigoryan 2006) is used. This transform is fast, is described by a single algorithm for an input vector/signal of any length, and can be used with different complex basic 2x2 transforms. The DsiHT zeroes all components of the input signal while moving, or heaping, the energy of the signal into one component, such as the first. We describe three different types of QR-decomposition that use basic transforms built from the T-, G-, and M-type complex matrices we introduce, as well as a matrix-free variant based on analytical formulas. We also present the mixed QR-decomposition, in which DsiHTs of different types are used at different stages of the algorithm. The number of such decompositions is greater than 3^(N-1) for an NxN complex matrix. Examples of the QR-decomposition are described in detail for 4x4 and 6x6 complex matrices and compared with the known method of Householder transforms. The precision of the QR-decompositions of NxN matrices is also compared for N = 6, 13, 17, 19, 21, 40, 64, 100, 128, 201, 256, and 400. MATLAB-based scripts implementing the described QR-decompositions are provided.
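The energy-heaping step at the core of the DsiHT can be illustrated by a cascade of unitary 2x2 rotations. The following Python sketch is our own illustration, not the authors' MATLAB code; the T-, G-, and M-type variants differ in how the basic 2x2 matrix is chosen, while the sketch uses a generic complex Givens-type rotation.

```python
import numpy as np

def heap_transform(x):
    """Heap the energy of a complex vector x into its first component
    by a cascade of unitary 2x2 rotations (illustrative sketch only)."""
    y = np.asarray(x, dtype=complex).copy()
    rotations = []
    for k in range(1, len(y)):
        a, b = y[0], y[k]
        r = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
        if r == 0.0:
            continue  # nothing to zero in this step
        # unitary 2x2 matrix sending (a, b) -> (r, 0)
        G = np.array([[np.conj(a), np.conj(b)],
                      [-b, a]]) / r
        y[0], y[k] = G @ np.array([a, b])
        rotations.append((k, G))
    return y, rotations

x = np.array([3 - 1j, 1 + 2j, 2j, 4.0])
y, _ = heap_transform(x)
print(np.round(y, 10))                            # [||x||, 0, 0, 0]
print(np.isclose(abs(y[0]), np.linalg.norm(x)))   # energy preserved: True
```

Applying such a cascade to the subdiagonal part of each successive column of an NxN matrix A accumulates a unitary Q with Q^H A = R upper triangular, which is the QR-decomposition pattern the described DsiHT variants realize.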
Feature shaping refers to a family of methods that exhibit state-of-the-art performance for out-of-distribution (OOD) detection. These approaches manipulate the feature representation, typically from the penultimate layer of a pre-trained deep learning model, so as to better differentiate between in-distribution (ID) and OOD samples. However, existing feature-shaping methods usually employ rules manually designed for specific model architectures and OOD datasets, which consequently limit their generalization ability. To address this gap, we first formulate an abstract optimization framework for studying feature-shaping methods. We then propose a concrete reduction of the framework with a simple piecewise constant shaping function and show that existing feature-shaping methods approximate the optimal solution to the concrete optimization problem. Further, assuming that OOD data is inaccessible, we propose a formulation that yields a closed-form solution for the piecewise constant shaping function, utilizing solely the ID data. Through extensive experiments, we show that the feature-shaping function optimized by our method improves the generalization ability of OOD detection across a large variety of datasets and model architectures.
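To make the notion of a piecewise constant shaping function concrete, the sketch below scales each activation by a constant that depends on which interval the activation falls into. The bin edges and per-bin constants theta here are placeholders; the paper derives the optimal theta in closed form from ID data alone.

```python
import numpy as np

def piecewise_constant_shaping(z, edges, theta):
    """Multiply each activation in z by theta[i], where i is the index
    of the interval [edges[i], edges[i+1]) containing the activation.
    Sketch only: edges and theta are hypothetical inputs."""
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1,
                  0, len(theta) - 1)
    return theta[idx] * z

# toy usage: three intervals, emulating a clipping-style rule
z = np.array([-0.5, 0.2, 1.3, 4.0])
edges = np.array([-np.inf, 0.0, 1.0, np.inf])  # hypothetical bin edges
theta = np.array([0.0, 1.0, 0.5])              # hypothetical per-bin weights
print(piecewise_constant_shaping(z, edges, theta))  # approx. [0, 0.2, 0.65, 2]
```

An OOD score (e.g., energy or maximum logit) is then computed from the shaped features; particular choices of edges and theta recover clipping-style rules, consistent with the observation that existing methods approximate the optimal solution.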
Agent-based models (ABMs) have shown promise for modelling various real-world phenomena incompatible with traditional equilibrium analysis. However, a critical concern is the manual definition of behavioural rules in ABMs. Recent developments in multi-agent reinforcement learning (MARL) offer a way to address this issue from an optimisation perspective, where agents strive to maximise their utility, eliminating the need for manual rule specification. This learning-focused approach aligns with established economic and financial models through the use of rational utility-maximising agents. However, this representation departs from the fundamental motivation for ABMs: that realistic dynamics emerging from bounded rationality and agent heterogeneity can be modelled. To resolve this apparent disparity between the two approaches, we propose a novel technique for representing heterogeneous processing-constrained agents within a MARL framework. The proposed approach treats agents as constrained optimisers with varying degrees of strategic skill, permitting departure from strict utility maximisation. Behaviour is learnt through repeated simulations with policy gradients to adjust action likelihoods. To allow efficient computation, we use parameterised shared policy learning with distributions of agent skill levels. Shared policy learning avoids the need for agents to learn individual policies yet still enables a spectrum of bounded-rational behaviours. We validate our model's effectiveness using real-world data on a range of canonical $n$-agent settings, demonstrating significantly improved predictive capability.
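A minimal sketch of the shared, skill-conditioned policy idea follows; the architecture and the use of a scalar skill input are our illustration of "parameterised shared policy learning with distributions of agent skill levels", not the paper's exact model.

```python
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    """One policy shared by all agents; an agent's skill level enters as
    an extra input, so a single network covers a spectrum of
    bounded-rational behaviours (illustrative sketch)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, skill):
        # skill in [0, 1]: higher ~ closer to strict utility maximisation
        return torch.distributions.Categorical(
            logits=self.net(torch.cat([obs, skill], dim=-1)))

# usage: sample heterogeneous agents from a skill distribution and update
# the shared parameters with a REINFORCE-style policy gradient
policy = SkillConditionedPolicy(obs_dim=4, n_actions=3)
obs = torch.randn(8, 4)      # observations for 8 agents
skill = torch.rand(8, 1)     # hypothetical skill distribution
dist = policy(obs, skill)
actions = dist.sample()
utility = torch.randn(8)     # placeholder simulated utilities
loss = -(dist.log_prob(actions) * utility).mean()
loss.backward()
```

Because the skill level is an input rather than a separate set of weights, adding agents or changing the skill distribution requires no additional policies to be trained.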
We conduct a systematic study of the approximation properties of the Transformer for sequence modeling with long, sparse, and complicated memory. We investigate the mechanisms through which different components of the Transformer, such as the dot-product self-attention, positional encoding, and feed-forward layer, affect its expressive power, and we study their combined effects by establishing explicit approximation rates. Our study reveals the roles of critical parameters in the Transformer, such as the number of layers and the number of attention heads, and these insights also provide natural suggestions for alternative architectures.
Quantum computing holds immense potential for solving classically intractable problems by leveraging the unique properties of quantum mechanics. However, the scalability of quantum architectures remains a significant challenge. Multi-core quantum architectures have been proposed to address this scalability problem, giving rise to a new set of challenges in hardware, communications, and compilation, among others. One of these challenges is adapting a quantum algorithm so that it fits within the different cores of the quantum computer. This paper presents a novel approach to circuit partitioning using Deep Reinforcement Learning, contributing to the advancement of both quantum computing and graph partitioning. This work is a first step towards integrating Deep Reinforcement Learning techniques into Quantum Circuit Mapping, opening the door to a new paradigm of solutions to such problems.
Bayesian model updating facilitates the calibration of analytical models based on observations and the quantification of uncertainties in model parameters such as stiffness and mass. This process significantly enhances damage assessment and response predictions in existing civil structures. Predominantly, current methods employ modal properties identified from acceleration measurements to evaluate the likelihood of the model parameters. This modal analysis-based likelihood generally involves a prior assumption regarding the mass parameters. In civil structures, accurately determining mass parameters proves challenging owing to the time-varying nature of imposed loads. The resulting inaccuracy potentially introduces biases while estimating the stiffness parameters, which affects the assessment of structural response and associated damage. Addressing this issue, the present study introduces a stress-resultant-based approach for Bayesian model updating independent of mass assumptions. This approach utilizes system identification on strain and acceleration measurements to establish the relationship between nodal displacements and elemental stress resultants. Employing static analysis to depict this relationship aids in assessing the likelihood of stiffness parameters. Integrating this static-analysis-based likelihood with a modal-analysis-based likelihood facilitates the simultaneous estimation of mass and stiffness parameters. The proposed approach was validated using numerical examples on a planar frame and experimental studies on a full-scale moment-resisting steel frame structure.
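In Bayes'-rule form, the combination described above can be sketched as follows (notation ours: $\theta_k$ for stiffness parameters, $\theta_m$ for mass parameters, $\mathcal{D}$ for the measurement data):

```latex
p(\theta_k, \theta_m \mid \mathcal{D}) \;\propto\;
\underbrace{p(\mathcal{D}_{\mathrm{static}} \mid \theta_k)}_{\text{stress-resultant likelihood}}\;
\underbrace{p(\mathcal{D}_{\mathrm{modal}} \mid \theta_k, \theta_m)}_{\text{modal likelihood}}\;
p(\theta_k, \theta_m)
```

Since the static-analysis-based likelihood depends only on the stiffness parameters, it anchors $\theta_k$ without any mass assumption; the modal-analysis-based likelihood then resolves $\theta_m$ jointly.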
We present a theoretical analysis of the performance of transformers with softmax attention in in-context learning with linear regression tasks. While the existing literature predominantly focuses on the convergence of transformers with single-/multi-head attention, our research centers on comparing their performance. We conduct an exact theoretical analysis to demonstrate that multi-head attention with a substantial embedding dimension performs better than single-head attention. As the number of in-context examples D increases, the prediction loss scales as O(1/D) for both single- and multi-head attention, with multi-head attention attaining a smaller multiplicative constant. Beyond the simplest data distribution setting, we consider further scenarios, e.g., noisy labels, local examples, correlated features, and prior knowledge. We observe that, in general, multi-head attention is preferred over single-head attention. Our results verify the effectiveness of the design of multi-head attention in the transformer architecture.
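The comparison can be summarized as follows (notation ours; $C_s$ and $C_m$ are task-dependent constants):

```latex
\mathcal{L}_{\mathrm{single}}(D) = \frac{C_s}{D} + o\!\left(\tfrac{1}{D}\right),
\qquad
\mathcal{L}_{\mathrm{multi}}(D) = \frac{C_m}{D} + o\!\left(\tfrac{1}{D}\right),
\qquad
C_m < C_s
```

so both losses vanish at the same 1/D rate, and the advantage of multi-head attention shows up in the leading constant rather than in the rate itself.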
This paper belongs to a body of work at the intersection of symbolic computation and group analysis aiming at the symbolic analysis of differential equations. The goal is to extract important properties without finding the explicit general solution. In this contribution, we introduce the algorithmic verification of nonlinear superposition properties and its implementation. More precisely, for a system of first-order nonlinear ordinary differential equations with polynomial right-hand sides, we check whether the system admits a general solution expressed by means of a superposition rule and a certain number of particular solutions. The verification is based on the theory of Newton polytopes and associated symbolic computation. The developed method provides the basis for identifying nonlinear superpositions within a given system and for constructing numerical methods that preserve important algebraic properties at the numerical level.
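To make the role of Newton polytopes concrete: the support of a polynomial right-hand side is the set of exponent vectors of its monomials, and the Newton polytope is their convex hull. A minimal SymPy sketch (function name and interface ours):

```python
import sympy as sp

def newton_support(poly, variables):
    """Exponent vectors of the monomials of poly, i.e., the support
    whose convex hull is the Newton polytope (illustrative helper)."""
    return sorted(sp.Poly(sp.expand(poly), *variables).monoms())

x, y = sp.symbols("x y")
# right-hand side of the Riccati equation x' = 1 + x^2, a classic
# example admitting a nonlinear superposition rule (any three
# particular solutions determine the general one via the cross-ratio)
print(newton_support(1 + x**2, [x, y]))   # [(0, 0), (2, 0)]
```

Reasoning over such supports instead of explicit solutions is in line with the stated goal of extracting properties without finding the general solution.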
The combination of multiple-input multiple-output (MIMO) systems and intelligent reflecting surfaces (IRSs) is foreseen as a critical enabler of beyond-5G (B5G) and 6G networks. In this work, two different approaches are considered for the joint optimization of the IRS phase-shift matrix and the MIMO precoders of an IRS-assisted multi-stream (MS) multi-user MIMO (MU-MIMO) system. Both approaches aim to maximize the system sum-rate for every channel realization. The first proposed solution is a novel contextual bandit (CB) framework with continuous state and action spaces called deep contextual bandit-oriented deep deterministic policy gradient (DCB-DDPG). The second is an innovative deep reinforcement learning (DRL) formulation in which the states, actions, and rewards are selected such that the Markov decision process (MDP) property of reinforcement learning (RL) is appropriately met. Both proposals perform markedly better than state-of-the-art heuristic methods in scenarios with high multi-user interference.
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
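As a rough illustration of a shared span representation, the sketch below concatenates the contextual states of a span's endpoints; the actual SciIE representation may include further features (e.g., attention over the span's tokens), and all names here are ours.

```python
import torch

def span_rep(hidden, start, end):
    """Endpoint-concatenation span representation: one vector per span,
    shared across the entity, relation, and coreference tasks
    (illustrative sketch only)."""
    return torch.cat([hidden[start], hidden[end]], dim=-1)

hidden = torch.randn(12, 256)   # 12 tokens, hypothetical encoder states
rep = span_rep(hidden, 3, 5)    # one representation for the span 3..5
print(rep.shape)                # torch.Size([512])
```

Sharing one span encoder across the three tasks lets each task's gradients shape a common representation, which is what enables the multi-task setup to reduce cascading errors.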