The parametrization of wireless channels by so-called "beyond-diagonal reconfigurable intelligent surfaces" (BD-RIS) is mathematically characterized by a matrix whose off-diagonal entries are partially or fully populated. Physically, this corresponds to tunable coupling mechanisms between the RIS elements that originate from the RIS control circuit. Here, we derive a physics-compliant diagonal representation of BD-RIS-parametrized channels. Recognizing that the RIS control circuit, irrespective of its detailed architecture, can always be represented as a multi-port network with auxiliary ports terminated by individually tunable loads, we express the BD-RIS-parametrized channel physics-compliantly as a multi-port chain cascade of i) the radio environment, ii) the static parts of the control circuit, and iii) the individually tunable loads. Thus, the cascade of the former two systems is terminated by a system that is mathematically always characterized by a diagonal matrix. This physics-compliant diagonal representation implies that existing algorithms for channel estimation and optimization for conventional ("diagonal") RIS can be readily applied to BD-RIS scenarios. We demonstrate this in an experimentally grounded case study. Importantly, we highlight that, operationally, only an ambiguous characterization of the cascade of the radio environment and the static parts of the control circuit is required; neither the breakdown into the characteristics of its two constituent systems nor the lifting of the ambiguities is needed. Nonetheless, we demonstrate, for pedagogical purposes, how to derive or estimate the characteristics of the static parts of the control circuit. The diagonal representation of BD-RIS-parametrized channels also enables their treatment with coupled-dipole-based models. We furthermore derive the assumptions under which the physics-compliant BD-RIS model simplifies to the widespread linear cascaded model.
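To make the diagonal structure concrete, here is a schematic version of the load-terminated multi-port picture (with hypothetical block notation; the paper's symbols may differ). If $\mathbf{S}$ denotes the scattering matrix of the cascade of radio environment and static control circuit, partitioned between the radio ports (a) and the auxiliary load ports (b), then terminating the auxiliary ports with individual loads of reflection coefficients $\gamma_1,\dots,\gamma_N$ yields the end-to-end channel
\[
\mathbf{H}(\boldsymbol{\Gamma}) = \mathbf{S}_{\mathrm{aa}} + \mathbf{S}_{\mathrm{ab}}\left(\boldsymbol{\Gamma}^{-1} - \mathbf{S}_{\mathrm{bb}}\right)^{-1}\mathbf{S}_{\mathrm{ba}},
\qquad
\boldsymbol{\Gamma} = \operatorname{diag}(\gamma_1,\dots,\gamma_N),
\]
so that all tunability enters through the diagonal matrix $\boldsymbol{\Gamma}$, exactly as for a conventional RIS.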
Sequential neural posterior estimation (SNPE) techniques have recently been proposed for dealing with simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequentially simulated data using neural-network-based conditional density estimators trained by minimizing a specific loss function. The SNPE method proposed by Lueckmann et al. (2017) used a calibration kernel to boost the sample weights around the observed data, resulting in a concentrated loss function. However, the use of calibration kernels may increase the variances of both the empirical loss and its gradient, making the training inefficient. To improve the stability of SNPE, this paper proposes to use an adaptive calibration kernel together with several variance-reduction techniques. The proposed method greatly speeds up training and provides a better approximation of the posterior than the original SNPE method and several existing competitors, as confirmed by numerical experiments. We also demonstrate the superiority of the proposed method on a high-dimensional model with a real-world dataset.
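As a rough illustration of the calibration-kernel idea, here is a minimal numpy sketch with hypothetical names (the paper's adaptive bandwidth rule and its specific variance-reduction techniques are not reproduced):
\begin{verbatim}
import numpy as np

def calibration_kernel(x_sim, x_obs, bandwidth):
    # Gaussian kernel that up-weights simulations whose outputs
    # fall near the observed data x_obs.
    d2 = np.sum((x_sim - x_obs) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / bandwidth ** 2)

def calibrated_loss(log_q, x_sim, x_obs, bandwidth):
    # Kernel-weighted negative log-likelihood of the conditional
    # density estimator, with log_q[i] = log q_phi(theta_i | x_i).
    w = calibration_kernel(x_sim, x_obs, bandwidth)
    w = w / w.sum()  # self-normalization, one simple variance-reduction step
    return -np.sum(w * log_q)
\end{verbatim}
An adaptive scheme would shrink the bandwidth across rounds as simulations concentrate around the observed data.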
The Johnson--Lindenstrauss (JL) lemma is a powerful tool for dimensionality reduction in modern algorithm design. The lemma states that any set of high-dimensional points in a Euclidean space can be flattened to lower dimensions while approximately preserving pairwise Euclidean distances. Random matrices satisfying this lemma are called JL transforms (JLTs). Inspired by existing $s$-hashing JLTs with exactly $s$ nonzero elements in each column, the present work introduces an ensemble of sparse matrices encompassing so-called $s$-hashing-like matrices whose expected number of nonzero elements in each column is~$s$. The independence of the sub-Gaussian entries of these matrices and the knowledge of their exact distribution play an important role in their analysis. Using properties of independent sub-Gaussian random variables, these matrices are demonstrated to be JLTs, and their smallest and largest singular values are estimated non-asymptotically using a technique from geometric functional analysis. As the dimensions of the matrix grow to infinity, these singular values are proved to converge almost surely to fixed quantities (using the universal Bai--Yin law) and, after proper rescaling, in distribution to the Gaussian orthogonal ensemble (GOE) Tracy--Widom law. Understanding the behavior of extreme singular values is important in general because they are often used to define a measure of stability of matrix algorithms. For example, JLTs were recently used in derivative-free optimization algorithmic frameworks to select random subspaces in which random models or poll directions are constructed to achieve scalability; hence, estimating their smallest singular value in particular helps determine the dimension of these subspaces.
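A minimal numpy sketch of an $s$-hashing-like matrix, inferred from the abstract's description (the paper's exact scaling may differ): each entry is independently nonzero with probability $s/m$ and takes the values $\pm 1/\sqrt{s}$ with equal probability, so each column has $s$ nonzero entries in expectation.
\begin{verbatim}
import numpy as np

def s_hashing_like(m, n, s, seed=None):
    # Independent entries: nonzero w.p. s/m, value +/- 1/sqrt(s),
    # giving E[entry^2] = 1/m so that E||Sx||^2 = ||x||^2.
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < s / m
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return mask * signs / np.sqrt(s)

# JL-style sanity check: the embedding roughly preserves a distance.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(2000), rng.standard_normal(2000)
S = s_hashing_like(400, 2000, s=3, seed=0)
print(np.linalg.norm(x - y), np.linalg.norm(S @ (x - y)))
\end{verbatim}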
To address the parameter sensitivity and limited precision of physics-informed extreme learning machines (PIELM) with common activation functions, such as sigmoid, hyperbolic tangent, and Gaussian, in solving high-order partial differential equations (PDEs) relevant to scientific computing and engineering applications, this work develops a Fourier-induced PIELM (FPIELM) method. This approach aims to approximate solutions for a class of fourth-order biharmonic equations with two boundary conditions on both unitized and non-unitized domains. By carefully calculating the differential and boundary operators of the biharmonic equation at discretized collocation points, the solution of this high-order equation is reformulated as a linear least-squares minimization problem. We further evaluate FPIELM with varying numbers of hidden nodes and scaling factors for its uniform-distribution initialization, and then determine the optimal range for these two hyperparameters. Numerical experiments and comparative analyses demonstrate that the proposed FPIELM method is more stable, robust, precise, and efficient than other PIELM approaches in solving biharmonic equations across both regular and irregular domains.
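To illustrate the pipeline of random Fourier features plus a linear least-squares solve, here is a toy one-dimensional sketch (the boundary conditions, scaling factor, and problem setup are illustrative choices, not the paper's configuration):
\begin{verbatim}
import numpy as np

# Toy FPIELM-style solve of u''''(x) = f(x) on (0,1) with
# u = u'' = 0 at both ends; exact solution u(x) = sin(pi x).
rng = np.random.default_rng(0)
N, sigma = 200, 10.0                          # hidden nodes, scaling factor
W = sigma * rng.uniform(-1, 1, N)             # random frequencies (kept fixed)
b = rng.uniform(-np.pi, np.pi, N)

phi  = lambda x: np.sin(np.outer(x, W) + b)              # features
phi2 = lambda x: -(W ** 2) * np.sin(np.outer(x, W) + b)  # second derivative
phi4 = lambda x:  (W ** 4) * np.sin(np.outer(x, W) + b)  # fourth derivative

xs = np.linspace(0, 1, 100)                   # collocation points
f = lambda x: np.pi ** 4 * np.sin(np.pi * x)
ends = np.array([0.0, 1.0])

A = np.vstack([phi4(xs), phi(ends), phi2(ends)])   # PDE rows + boundary rows
rhs = np.concatenate([f(xs), np.zeros(4)])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)        # linear least squares
print(np.max(np.abs(phi(xs) @ c - np.sin(np.pi * xs))))  # max error
\end{verbatim}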
We consider random matrix ensembles on the set of Hermitian matrices that are heavy tailed (in particular, not all moments exist) and invariant under the conjugate action of the unitary group. The latter property entails that the eigenvectors are Haar distributed and, therefore, factorise from the eigenvalue statistics. With the help of the classification of multivariate stable distributions and harmonic analysis on symmetric matrix spaces, we prove a classification of stable ensembles of such matrices, represented in terms of the matrices themselves, their eigenvalues, and their diagonal entries. Moreover, we identify necessary and sufficient conditions for their domains of attraction. To illustrate our findings, we discuss, for instance, elliptical invariant random matrix ensembles and P\'olya ensembles, the latter playing a particular role in matrix convolutions. As a byproduct, we generalise the derivative principle on the Hermitian matrices to general tempered distributions. This principle relates the joint probability density of the eigenvalues to that of the diagonal entries of the random matrix.
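Schematically, and as we read the standard form of the derivative principle (stated up to normalization for densities; this is not the paper's precise statement for tempered distributions), the relation is
\[
p_{\mathrm{eig}}(\lambda_1,\dots,\lambda_N) \propto \Delta(\lambda)\,\Delta(\partial_\lambda)\, p_{\mathrm{diag}}(\lambda_1,\dots,\lambda_N),
\qquad
\Delta(\lambda) = \prod_{1\le j<k\le N} (\lambda_k - \lambda_j),
\]
where $\Delta(\partial_\lambda)$ denotes the Vandermonde determinant evaluated in the partial derivatives.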
The Gromov-Wasserstein (GW) distances define a family of metrics, based on ideas from optimal transport, which enable comparisons between probability measures defined on distinct metric spaces. They are particularly useful in areas such as network analysis and geometry processing, as computation of a GW distance involves solving for a registration between the objects that minimizes geometric distortion. Although GW distances have proven useful for various applications in the recent machine learning literature, it has been observed that they are inherently sensitive to outlier noise and cannot accommodate partial matching. This has been addressed by various constructions building on the GW framework; in this article, we focus specifically on a natural relaxation of the GW optimization problem, introduced by Chapel et al., which is aimed at addressing exactly these shortcomings. Our goal is to understand the theoretical properties of this relaxed optimization problem, from the viewpoint of metric geometry. While the relaxed problem fails to induce a metric, we derive precise characterizations of how it fails the axioms of non-degeneracy and the triangle inequality. These observations lead us to define a novel family of distances, whose construction is inspired by the Prokhorov and Ky Fan distances, as well as by the recent work of Raghvendra et al.\ on robust versions of the classical Wasserstein distance. We show that our new distances define true metrics, that they induce the same topology as the GW distances, and that they enjoy additional robustness to perturbations. These results provide a mathematically rigorous basis for using our robust partial GW distances in applications where outliers and partial matching are concerns.
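For context, a standard way to write the (squared-loss) GW objective between metric measure spaces $(X,d_X,\mu_X)$ and $(Y,d_Y,\mu_Y)$ (the textbook formulation, not necessarily the exact variant relaxed by Chapel et al.) is
\[
\mathrm{GW}(X,Y) = \inf_{\pi \in \Pi(\mu_X,\mu_Y)} \iint \big| d_X(x,x') - d_Y(y,y') \big|^2 \, \mathrm{d}\pi(x,y)\, \mathrm{d}\pi(x',y'),
\]
where $\Pi(\mu_X,\mu_Y)$ is the set of couplings of the two measures; loosening the marginal constraints on $\pi$ is what admits partial matching, at the cost of the metric axioms studied here.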
While overparameterization is known to benefit generalization, its impact on Out-Of-Distribution (OOD) detection is less understood. This paper investigates the influence of model complexity on OOD detection. We propose an expected OOD risk metric to evaluate classifiers' confidence on both training and OOD samples. Leveraging Random Matrix Theory, we derive bounds on the expected OOD risk of binary least-squares classifiers applied to Gaussian data. We show that the OOD risk exhibits an infinite peak when the number of parameters equals the number of samples, which we associate with the double descent phenomenon. Our experimental study of different OOD detection methods across multiple neural architectures extends our theoretical insights and highlights a double descent curve. Our observations suggest that overparameterization does not necessarily lead to better OOD detection. Using the Neural Collapse framework, we provide insights to better understand this behavior. To facilitate reproducibility, our code will be made publicly available upon publication.
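A minimal numerical illustration of the peak at the interpolation threshold (this tracks the norm of the minimum-norm least-squares solution, not the paper's exact expected OOD risk):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of training samples
for p in [25, 50, 90, 100, 110, 200, 400]:  # number of parameters
    X = rng.standard_normal((n, p))                      # Gaussian data
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(n))  # noisy labels
    w = np.linalg.pinv(X) @ y                            # min-norm fit
    print(p, np.linalg.norm(w))  # the norm blows up near p == n
\end{verbatim}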
We employ physics-informed neural networks (PINNs) to solve fundamental Dyson-Schwinger integral equations of quantum electrodynamics (QED) in Euclidean space. Our approach uses neural networks to approximate the fermion wave-function renormalization, the dynamical mass function, and the photon propagator. By integrating the Dyson-Schwinger equations into the loss function, the networks learn and predict solutions over a range of momenta and ultraviolet cutoff values. This method can be extended to other quantum field theories (QFTs), potentially paving the way for machine-learning applications at the forefront of theoretical physics.
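A toy sketch of the PINN construction for a generic integral equation (the kernel, inhomogeneity, and single-unknown setup are illustrative assumptions; the actual Dyson-Schwinger system in QED couples several unknown functions):
\begin{verbatim}
import torch

# Solve u(p) = g(p) + \int_0^1 K(p, q) u(q) dq with a PINN-style loss:
# the network is trained so that the quadrature-discretized equation
# residual vanishes at the collocation momenta.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

p = torch.linspace(0.01, 1.0, 64).unsqueeze(1)     # collocation momenta
q, wq = p, torch.full((64, 1), 1.0 / 64)           # quadrature nodes, weights
g = lambda x: 1.0 / (1.0 + x ** 2)                 # toy inhomogeneity
K = lambda x, y: 0.1 * torch.exp(-(x - y.T) ** 2)  # toy kernel

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    integral = K(p, q) @ (wq * net(q))             # quadrature approximation
    loss = torch.mean((net(p) - g(p) - integral) ** 2)  # equation residual
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}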
The rapid advancement of quantum technologies calls for the design and deployment of quantum-safe cryptographic protocols and communication networks. There are two primary approaches to achieving quantum-resistant security: quantum key distribution (QKD) and post-quantum cryptography (PQC). While each offers unique advantages, both have drawbacks in practical implementation. In this work, we outline the pros and cons of these protocols and explore how they can be combined to achieve a higher level of security and/or improved performance in key distribution. We hope our discussion inspires further research into the design of hybrid cryptographic protocols for quantum-classical communication networks.
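One generic way to combine the two approaches at the key level (a standard combiner pattern, included for illustration; not necessarily the specific construction discussed in this work) is to feed both secrets through a key-derivation function, so the output stays secret as long as at least one input does:
\begin{verbatim}
import hashlib

def hybrid_key(qkd_key: bytes, pqc_key: bytes) -> bytes:
    # Secure if either the QKD link or the PQC KEM remains unbroken.
    return hashlib.sha3_256(qkd_key + pqc_key).digest()
\end{verbatim}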
Neural-network-driven intelligent data planes (NN-driven IDP) are an emerging topic, owing to their excellent accuracy and high performance. We argue that an NN-driven IDP should satisfy three design goals: the flexibility to support various NN models, low-latency and high-throughput inference, and data-plane unawareness, i.e., harming neither the data plane's performance nor its functionality. Unfortunately, existing work either over-modifies NNs for the IDP or inserts inline pipelined accelerators into the data plane, failing to meet the flexibility and unawareness goals. In this paper, we propose Kaleidoscope, a flexible and high-performance co-processor located at the bypass of the data plane. To meet the three design goals, three key techniques are presented. Programmable run-to-completion accelerators are developed for flexible inference. To further improve performance, we design a scalable inference engine that completes low-latency, low-cost inference for mouse flows and performs complex, high-accuracy NNs for elephant flows (see the sketch below). Finally, raw-bytes-based NNs are introduced, which help achieve unawareness. We prototype Kaleidoscope on both an FPGA and an ASIC library. In an evaluation on six NN models, Kaleidoscope reaches 256-352 ns inference latency and 100 Gbps throughput with negligible influence on the data plane. The NNs tested on-board achieve state-of-the-art accuracy among NN-driven IDPs, exhibiting the significant impact of flexibility on enhancing traffic-analysis accuracy.
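A hypothetical Python sketch of the mouse/elephant dispatch idea (names, threshold, and models are illustrative; the actual engine runs on the FPGA/ASIC co-processor):
\begin{verbatim}
ELEPHANT_THRESHOLD_BYTES = 10_000  # illustrative cutoff

def infer(flow_bytes: bytes, light_model, heavy_model):
    # Mouse flows take the cheap low-latency path; elephant flows are
    # escalated to a larger, higher-accuracy NN. Both models consume
    # raw bytes, which keeps the data plane unaware of feature logic.
    model = (heavy_model if len(flow_bytes) > ELEPHANT_THRESHOLD_BYTES
             else light_model)
    return model(flow_bytes)
\end{verbatim}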
Lagrangian Neural Networks (LNNs) are a powerful tool for modeling physical systems, particularly those governed by conservation laws. LNNs parametrize the Lagrangian of a system to predict trajectories with nearly conserved energy. These techniques have proven effective for unconstrained systems as well as for systems with holonomic constraints. In this work, we adapt LNN techniques to mechanical systems with nonholonomic constraints. We test our approach on several well-known examples with nonholonomic constraints, showing that incorporating these restrictions into the neural network's learning not only improves trajectory estimation accuracy but also ensures adherence to the constraints and yields better energy behavior than the unconstrained counterpart.
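A minimal sketch of what such a training objective could look like (the penalty weight, shapes, and constraint function are illustrative assumptions, not the paper's exact formulation):
\begin{verbatim}
import torch

def training_loss(pred_qdd, true_qdd, q, q_dot, A, lam=1.0):
    # Standard LNN regression term on accelerations, plus a penalty
    # on the nonholonomic residual A(q) q_dot, which should vanish
    # along admissible motions.
    dyn = torch.mean((pred_qdd - true_qdd) ** 2)
    residual = torch.einsum('bij,bj->bi', A(q), q_dot)  # batched A(q) @ q_dot
    return dyn + lam * torch.mean(residual ** 2)
\end{verbatim}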