We consider a hardware-impaired multi-cell Rician-faded massive multiple-input multiple-output (mMIMO) system with two-layer pilot decontamination precoding, also known as large-scale fading precoding (LSFP). Each BS is equipped with a flexible dynamic analog-to-digital converter (ADC)/digital-to-analog converter (DAC) architecture, and the user equipments (UEs) have low-resolution ADCs. Further, both the BSs and the UEs have hardware-impaired radio frequency chains. The dynamic ADC/DAC architecture allows us to vary the resolution of the ADC/DAC connected to each BS antenna and to choose these resolutions so as to maximize the spectral efficiency (SE). We propose a distortion-aware minimum mean squared error (DA-MMSE) precoder and investigate its use with two-layer LSFP and with conventional single-layer precoding (SLP) for hardware-impaired mMIMO systems. We discuss the use cases of LSFP and SLP with DA-MMSE and distortion-unaware MMSE (DU-MMSE) precoders, which provides critical insights to the system designer regarding their usage in practical systems.
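For orientation, a distortion-aware MMSE design typically treats the aggregate hardware distortion as additional colored noise inside the MMSE objective; a schematic form (our notation and a structural assumption for illustration, not the paper's exact precoder) is
\[
\mathbf{W}_{\mathrm{DA}} \propto \Bigl(\hat{\mathbf{H}}^{\mathsf H}\hat{\mathbf{H}} + \tfrac{1}{\rho}\,\mathbf{C}_{\mathrm{dist}} + \tfrac{\sigma^2}{\rho}\,\mathbf{I}\Bigr)^{-1}\hat{\mathbf{H}}^{\mathsf H},
\qquad
\mathbf{W}_{\mathrm{DU}} \propto \Bigl(\hat{\mathbf{H}}^{\mathsf H}\hat{\mathbf{H}} + \tfrac{\sigma^2}{\rho}\,\mathbf{I}\Bigr)^{-1}\hat{\mathbf{H}}^{\mathsf H},
\]
where $\hat{\mathbf{H}}$ is the estimated channel, $\rho$ the transmit power, $\sigma^2$ the noise power, and $\mathbf{C}_{\mathrm{dist}}$ the covariance of the aggregate ADC/DAC and RF-chain distortion. The two designs differ only in the distortion covariance term, which is the effect the DA-MMSE versus DU-MMSE comparison isolates.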
We propose a matrix-free solver for the numerical solution of the cardiac electrophysiology model consisting of the monodomain nonlinear reaction-diffusion equation coupled with a system of ordinary differential equations for the ionic species. Our numerical approximation is based on the high-order Spectral Element Method (SEM) to achieve accurate numerical discretization while employing a much smaller number of Degrees of Freedom than first-order Finite Elements. We combine vectorization with sum-factorization, thus allowing for a very efficient use of high-order polynomials in a high-performance computing framework. We validate the effectiveness of our matrix-free solver in a variety of applications and perform different electrophysiological simulations ranging from a simple slab of cardiac tissue to a realistic four-chamber heart geometry. We compare SEM to SEM with Numerical Integration (SEM-NI), showing that they provide comparable results in terms of accuracy and efficiency. In both cases, increasing the local polynomial degree $p$ leads to better numerical results and smaller computational times than reducing the mesh size $h$. We also implement a matrix-free Geometric Multigrid preconditioner that results in a comparable number of linear solver iterations with respect to a state-of-the-art matrix-based Algebraic Multigrid preconditioner. Overall, the matrix-free solver proposed here yields up to a 45$\times$ speed-up with respect to a conventional matrix-based solver.
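The core of the matrix-free/sum-factorization combination can be illustrated with a short sketch (a simplified stand-alone example in NumPy, not the solver's actual implementation): the element mass operator is applied as a sequence of small 1D tensor contractions instead of a dense element matrix-vector product.

```python
# Minimal sketch of a sum-factorized, matrix-free element-local operator apply
# (illustrative only: the array layout and reference-element setup are assumptions,
# not the SEM solver described above).
import numpy as np

def apply_mass_matrix_free(u, B, w, detJ):
    """Apply the element mass matrix to u on one hexahedral element of degree p
    without ever assembling the (p+1)^3 x (p+1)^3 matrix.

    u    : (p+1, p+1, p+1) nodal coefficients on the element
    B    : (q, p+1) 1D basis values at the q quadrature points
    w    : (q,) 1D quadrature weights
    detJ : (q, q, q) Jacobian determinant at the quadrature points
    """
    # Interpolate to quadrature points one direction at a time (sum factorization):
    # O(p^4) work instead of the O(p^6) of a dense element matrix-vector product.
    t = np.einsum('ai,ijk->ajk', B, u)
    t = np.einsum('bj,ajk->abk', B, t)
    t = np.einsum('ck,abk->abc', B, t)
    # Pointwise scaling by quadrature weights and geometry factors.
    t = t * detJ * w[:, None, None] * w[None, :, None] * w[None, None, :]
    # Apply the transposed interpolation (test functions), again factorized.
    t = np.einsum('ai,abc->ibc', B, t)
    t = np.einsum('bj,ibc->ijc', B, t)
    return np.einsum('ck,ijc->ijk', B, t)
```

Vectorization then amounts to performing these contractions for batches of elements at once, which is what makes high polynomial degrees cheap in practice.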
In this paper, we consider a semiconducting device with an active zone made of a single-layer material. The associated Poisson equation for the electrostatic potential (to be solved in order to perform self-consistent computations) is characterized by a surface particle density and an out-of-plane dielectric permittivity in the region surrounding the single layer. To avoid mesh refinements in this region, we propose an interface problem based on the natural domain decomposition suggested by the physical device. Two different interface continuity conditions are discussed. Then, we write the corresponding variational formulations by adapting the so-called three-fields formulation for domain decomposition, and we approximate them using a suitable finite element method. Finally, numerical experiments are performed to illustrate some specific features of this interface approach.
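For orientation, a schematic version of such an interface problem (our notation and sign convention; the two continuity conditions discussed in the paper may differ in detail) couples the bulk Poisson problems across the plane $\Gamma$ of the single-layer material through the jump of the normal dielectric displacement, driven by the surface particle density $n_s$:
\[
-\nabla\cdot\bigl(\varepsilon_i \nabla V_i\bigr) = \rho_i \ \ \text{in } \Omega_i,\ i=1,2,
\qquad
V_1 = V_2 \ \ \text{on } \Gamma,
\qquad
\varepsilon_1\,\partial_n V_1 - \varepsilon_2\,\partial_n V_2 = -q\, n_s \ \ \text{on } \Gamma,
\]
where $q$ is the elementary charge. Treating $\Gamma$ as an interface with a jump condition avoids resolving a thin dielectric region around the layer with a refined mesh.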
Numerical approximations of partial differential equations (PDEs) are routinely employed to approximate the solution of physics, engineering and mathematical problems involving functions of several variables, such as the propagation of heat or sound, fluid flow, elasticity, electrostatics, electrodynamics, and more. While this has made it possible to simulate many complex phenomena, significant limitations remain. Conventional approaches such as Finite Element Methods (FEMs) and Finite Difference Methods (FDMs) require considerable time and are computationally expensive. In contrast, machine learning-based methods such as neural networks are faster once trained, but tend to be restricted to a specific discretization. This article aims to provide a comprehensive summary of conventional methods and recent machine learning-based methods to approximate PDEs numerically. Furthermore, we highlight several key architectures centered around the neural operator, a novel approach to learning the solution operator of a PDE that can be orders of magnitude (up to 1000x) faster. We will note how these new computational approaches can bring immense advantages in tackling many problems in fundamental and applied physics.
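As a concrete illustration of the neural-operator idea, the sketch below (a simplified PyTorch example with assumed hyperparameters, not any specific paper's implementation) shows a Fourier-type layer that parameterizes a kernel integral operator in Fourier space, so the learned map does not depend on the input discretization:

```python
# Sketch of a 1D Fourier neural-operator layer (illustrative; layer sizes,
# number of retained modes, and the block composition are assumptions).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Kernel integral operator parameterized in Fourier space: keep the lowest
    `modes` frequencies and multiply them by learned complex weights."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes  # assumed <= n_points // 2 + 1
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                       # x: (batch, in_ch, n_points)
        x_ft = torch.fft.rfft(x)                # (batch, in_ch, n_points//2 + 1)
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :self.modes] = torch.einsum(
            'bim,iom->bom', x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

class FNOBlock(nn.Module):
    """One operator block: spectral convolution + pointwise linear + nonlinearity."""
    def __init__(self, ch, modes):
        super().__init__()
        self.spectral = SpectralConv1d(ch, ch, modes)
        self.pointwise = nn.Conv1d(ch, ch, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.pointwise(x))
```

Because the learned weights act on Fourier modes rather than grid points, the same trained block can be evaluated on inputs sampled at different resolutions.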
The basic idea of lifelike computing systems is to transfer concepts from living systems to technical use, going beyond existing concepts of self-adaptation and self-organisation (SASO). As a result, these systems become even more autonomous and changeable, up to a runtime transfer of the actual target function. Maintaining controllability requires a complete and dynamic (self-)quantification of the system behaviour with regard to SASO aspects and, in particular, lifelike properties. In this article, we discuss possible approaches for such metrics and establish a first metric for transferability. We analyse the behaviour of this metric in example applications and show that it is suitable for describing the system's behaviour at runtime.
Navigating the diverse solution spaces of non-trivial software engineering tasks requires a combination of technical knowledge, problem-solving skills, and creativity. With multiple possible solutions available, each with its own set of trade-offs, it is essential for programmers to evaluate the various options and select the one that best suits the specific requirements and constraints of a project. Whether it is choosing from a range of libraries, weighing the pros and cons of different architecture and design solutions, or finding unique ways to fulfill user requirements, the ability to think creatively is crucial for making informed decisions that will result in efficient and effective software. However, the interfaces of current chatbot tools for programmers, such as OpenAI's ChatGPT or GitHub Copilot, are optimized for presenting a single solution, even for complex queries. While other solutions can be requested, they are not displayed by default and are not intuitive to access. In this paper, we present our work-in-progress prototype "GPTCompare", which allows programmers to visually compare multiple source code solutions generated by GPT-n models for the same programming-related query by highlighting their similarities and differences.
In this paper, we investigate the uplink performance of cell-free (CF) extremely large-scale multiple-input multiple-output (XL-MIMO) systems, a promising technique for future wireless communications. More specifically, we consider the practical scenario with multiple base stations (BSs) and multiple user equipments (UEs). To this end, we derive exact achievable spectral efficiency (SE) expressions for any combining scheme. Notably, we derive closed-form SE expressions for CF XL-MIMO with maximum ratio (MR) combining. Numerical results show that the SE of CF XL-MIMO can be significantly improved compared with that of small-cell XL-MIMO. Interestingly, a smaller antenna spacing leads to a higher correlation level among patch antennas. Finally, we prove that increasing the number of UE antennas may decrease the SE with MR combining.
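For reference, with an arbitrary combining vector $\mathbf{v}_k$ for UE $k$, the achievable uplink SE takes the familiar form (written here for a single-stream UE for readability; the exact and closed-form multi-antenna-UE expressions in the paper generalize this with log-det terms):
\[
\mathrm{SE}_k \;=\; \mathbb{E}\!\left[\log_2\!\left(1+\frac{\bigl|\mathbf{v}_k^{\mathsf H}\mathbf{h}_k\bigr|^2}{\sum_{l\neq k}\bigl|\mathbf{v}_k^{\mathsf H}\mathbf{h}_l\bigr|^2+\sigma^2\,\lVert\mathbf{v}_k\rVert^2}\right)\right],
\]
where $\mathbf{h}_l$ denotes the channel of UE $l$ and $\sigma^2$ the noise power; MR combining corresponds to choosing $\mathbf{v}_k$ as the channel estimate of UE $k$, which is what makes closed-form averaging tractable.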
As an effective way to enhance the physical layer security (PLS) of the broadcast channel (BC), regularized zero-forcing (RZF) precoding has attracted much attention. However, the reliability performance, i.e., the secrecy outage probability (SOP), of RZF is not well investigated in the literature. In this paper, we characterize the secrecy performance of RZF precoding in the large multiple-input single-output (MISO) broadcast system. For this purpose, we first establish a central limit theorem (CLT) for the joint distribution of the users' signal-to-interference-plus-noise ratio (SINR) and the eavesdropper's (Eve's) signal-to-noise ratio (ESNR) by leveraging random matrix theory (RMT). The result is then utilized to obtain closed-form approximations for the ergodic secrecy rate (ESR) and SOP in three typical scenarios: the case with only external Eves, the case with only internal Eves, and that with both. The derived results are then used to evaluate the percentage of users in secrecy outage and the required number of transmit antennas to achieve a positive secrecy rate. It is shown that, with equally capable Eves, the secrecy loss caused by external Eves is higher than that caused by internal Eves. Numerical simulations validate the accuracy of the theoretical results.
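For context, the RZF precoder and the secrecy rate it is evaluated against take the standard forms (schematic notation; the normalization and scenario definitions in the paper may differ in detail):
\[
\mathbf{W}_{\mathrm{RZF}} \;=\; \xi\,\bigl(\mathbf{H}^{\mathsf H}\mathbf{H} + N\alpha\,\mathbf{I}_N\bigr)^{-1}\mathbf{H}^{\mathsf H},
\qquad
R_k^{\mathrm{sec}} \;=\; \bigl[\log_2(1+\mathrm{SINR}_k) - \log_2(1+\mathrm{ESNR})\bigr]^{+},
\]
where $\mathbf{H}\in\mathbb{C}^{K\times N}$ stacks the $K$ users' channels, $\alpha>0$ is the regularization parameter, $\xi$ normalizes the transmit power, and $[x]^{+}=\max(x,0)$. The SOP is then the probability that $R_k^{\mathrm{sec}}$ falls below a target secrecy rate, which is why the joint distribution of $\mathrm{SINR}_k$ and $\mathrm{ESNR}$ is needed.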
Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from the fact that convolutional features are scale-sensitive in object detection, whereas traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of this scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects, and 2) the large intra-class distance caused by a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for the fast detection of vehicles with a large variance of scales. First, we present a context-aware RoI pooling scheme that maintains the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques add no extra time complexity yet bring a prominent improvement in detection accuracy. The proposed techniques can be incorporated into any deep network architecture and trained end-to-end with it. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and on a new highway dataset, which contains a large variance of scales and extremely small objects.
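To make the "RoI pooling destroys small objects" issue concrete, the sketch below illustrates the general idea only (it is not SINet's context-aware RoI pooling, whose exact operator may differ): proposals smaller than the target grid are upsampled rather than max-pooled down, so their spatial structure is preserved.

```python
# Illustrative sketch: pool an RoI to a fixed size, but upsample (rather than
# subsample) proposals smaller than the target grid so their structure survives.
# Hypothetical helper, not the paper's operator.
import torch
import torch.nn.functional as F

def pool_roi_preserving_small(feature_map, roi, out_size=7):
    """feature_map: (C, H, W) conv features; roi: (x1, y1, x2, y2) in feature coords."""
    x1, y1, x2, y2 = [int(round(v)) for v in roi]
    patch = feature_map[:, y1:y2 + 1, x1:x2 + 1].unsqueeze(0)  # (1, C, h, w)
    h, w = patch.shape[-2:]
    if h < out_size or w < out_size:
        # Small proposal: bilinear upsampling keeps its coarse spatial structure
        # instead of collapsing the few available cells.
        patch = F.interpolate(patch, size=(out_size, out_size),
                              mode='bilinear', align_corners=False)
        return patch.squeeze(0)
    # Large proposal: ordinary adaptive max pooling.
    return F.adaptive_max_pool2d(patch, out_size).squeeze(0)
```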
In this paper, we propose a novel multi-task learning architecture, which incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow for learning of task-specific features from the global pool, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the CityScapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multi-task loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show how our method scales well with task complexity compared to baselines.
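To illustrate the mechanism, here is a minimal PyTorch sketch of a task-specific soft-attention module reading from a shared feature map (layer sizes and the exact composition are assumptions, not the MTAN architecture verbatim):

```python
# Minimal sketch of a task-specific soft-attention module over shared features.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Learns a per-task soft mask in [0, 1] and applies it element-wise to the
    shared feature map, so each task selects its own subset of shared features."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),          # soft mask in [0, 1]
        )

    def forward(self, shared_features):
        return self.mask(shared_features) * shared_features  # element-wise gating

# One module per task, all reading the same shared backbone features:
# attn_modules = nn.ModuleList(TaskAttention(256) for _ in range(num_tasks))
# task_feats = [m(shared_features) for m in attn_modules]
```

Because the masks are soft rather than hard selections, gradients from every task still flow into the shared pool, which is what allows feature sharing and task-specific specialization at the same time.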