While interference in the time domain (caused by path differences) is mitigated by OFDM modulation, interference in the frequency domain (due to velocity differences) can be mitigated by OTFS modulation. However, in non-stationary channels, relative differences in acceleration cause Inter-Doppler Interference (IDI), and no modulation method for mitigating IDI exists in the literature. Both existing methods use carriers in a specific domain that achieve orthogonality in the target domain to mitigate interference. Moreover, neither modulation can directly incorporate the space domain, so additional precoding techniques are required to mitigate inter-user interference (IUI) in MU-MIMO channels. This work presents a generalized modulation for arbitrary multidimensional channels. Recently, the Higher Order Generalized Mercer's Theorem (HOGMT) [1] has been proposed to decompose multi-user non-stationary channels into independent fading subchannels (eigenwaves). Based on the HOGMT decomposition, we develop Multidimensional Eigenwaves Multiplexing (MEM) modulation, which uses the jointly orthogonal eigenwaves decomposed from the multidimensional channel as subcarriers. Data symbols modulated by these eigenwaves achieve orthogonality across each degree of freedom (\eg space (users/antennas), time-frequency, and delay-Doppler). Consequently, the transmitted symbols remain independent over the high-dimensional channel, thereby avoiding interference from other symbols.
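As a point of reference for the eigenwave idea, the sketch below shows the classical two-dimensional analogue: SVD-based eigen-multiplexing over a static MIMO channel (all names and sizes are illustrative; HOGMT generalizes this decomposition to higher-dimensional, non-stationary channels, and this is not the MEM algorithm itself).

```python
# Minimal sketch (not the paper's HOGMT algorithm): for a static MIMO
# channel, SVD plays the role that HOGMT plays for higher-dimensional,
# non-stationary channels -- it yields orthogonal subchannels that can
# carry symbols without mutual interference.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # channel matrix
U, S, Vh = np.linalg.svd(H)                                  # H = U @ diag(S) @ Vh

s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=4)  # QPSK symbols
x = Vh.conj().T @ s                # modulate symbols onto right singular vectors
y = H @ x                          # propagate through the channel (noiseless)
s_hat = U.conj().T @ y             # project onto left singular vectors

# Each symbol rides its own eigen-subchannel, scaled by a singular value:
assert np.allclose(s_hat, S * s)
```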
Topological Data Analysis is a growing area of data science which aims at computing and characterizing the geometry and topology of data sets, in order to produce useful descriptors for subsequent statistical and machine learning tasks. Its main computational tool is persistent homology, which amounts to tracking the topological changes in growing families of subsets of the data set itself, called filtrations, and encoding them in an algebraic object, called a persistence module. Even though algorithms and theoretical properties of modules are now well understood in the single-parameter case, that is, when there is only one filtration to study, much less is known in the multi-parameter case, where several filtrations are given at once. Though more complicated, the resulting persistence modules are usually richer and encode more information, making them better descriptors for data science. In this article, we present the first approximation scheme for computing and decomposing general multi-parameter persistence modules; it is based on fibered barcodes and exact matchings, two constructions that stem from the theory of single-parameter persistence. Our algorithm has controlled complexity and running time, and works in arbitrary dimension, i.e., with an arbitrary number of filtrations. Moreover, when restricting to specific classes of multi-parameter persistence modules, namely those that can be decomposed into intervals, we establish theoretical results about the approximation error between our estimate and the true module in terms of the interleaving distance. Finally, we present empirical evidence validating output quality and speed-up on several data sets.
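As a toy illustration of the fibered-barcode construction used by the scheme (a sketch assuming the gudhi library, not the paper's algorithm): restricting a 2-parameter filtration to a line reduces it to a single-parameter filtration whose ordinary barcode can be computed directly.

```python
# Minimal sketch: each simplex carries a bifiltration value (f1, f2);
# along the diagonal line t -> (t, t), its entry time is max(f1, f2),
# giving one single-parameter slice of the fibered barcode.
import gudhi

bifiltration = {
    (0,): (0.0, 0.1), (1,): (0.2, 0.0), (2,): (0.1, 0.3),
    (0, 1): (0.5, 0.4), (1, 2): (0.6, 0.2), (0, 2): (0.4, 0.7),
    (0, 1, 2): (0.9, 0.8),
}

st = gudhi.SimplexTree()
for simplex, (f1, f2) in bifiltration.items():
    st.insert(list(simplex), filtration=max(f1, f2))  # slice along the diagonal

print(st.persistence())  # barcode of this one slice of the fibered barcode
```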
We present an extension of the summation-by-parts (SBP) framework to tensor-product spectral-element operators in collapsed coordinates. The proposed approach enables the construction of provably stable discretizations of arbitrary order which combine the geometric flexibility of unstructured triangular and tetrahedral meshes with the efficiency of sum-factorization algorithms. Specifically, a methodology is developed for constructing triangular and tetrahedral spectral-element operators of any order which possess the SBP property (i.e., satisfying a discrete analogue of integration by parts) as well as a tensor-product decomposition. Such operators are then employed within the context of discontinuous spectral-element methods based on nodal expansions collocated at the tensor-product quadrature nodes as well as modal expansions employing Proriol-Koornwinder-Dubiner polynomials, with the latter approach resolving the time-step limitation associated with the singularity of the collapsed coordinate transformation. Energy-stable formulations for curvilinear meshes are obtained using a skew-symmetric splitting of the metric terms, and a weight-adjusted approximation is used to efficiently invert the curvilinear modal mass matrix. The proposed schemes are compared to those using non-tensorial multidimensional SBP operators and are found to offer comparable accuracy in the context of smooth linear advection problems on curved meshes, but at a reduced computational cost for higher polynomial degrees.
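The efficiency claim rests on sum factorization, which the following sketch illustrates on a generic 2-D tensor-product operator (the operators and sizes are illustrative placeholders, not the SBP operators constructed in the paper).

```python
# Minimal sketch of the sum-factorization idea: applying a 2-D
# tensor-product operator A = A1 (x) A2 to nodal values costs O(p^3)
# via factored contractions instead of O(p^4) via the full Kronecker matrix.
import numpy as np

p = 8                                   # nodes per direction
rng = np.random.default_rng(1)
A1 = rng.normal(size=(p, p))            # 1-D operator in direction 1
A2 = rng.normal(size=(p, p))            # 1-D operator in direction 2
u = rng.normal(size=(p, p))             # nodal values on the tensor grid

# Naive: build the p^2 x p^2 Kronecker product and multiply.
v_full = (np.kron(A1, A2) @ u.reshape(-1)).reshape(p, p)

# Sum factorization: two small matrix products, one per direction.
v_fact = A1 @ u @ A2.T

assert np.allclose(v_full, v_fact)
```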
Finite element methods and kinematically coupled schemes that decouple the fluid velocity and the structure displacement have been extensively studied for incompressible fluid-structure interaction (FSI) over the past decade. While these methods are known to be stable and easy to implement, optimal error analysis has remained challenging. Previous work has primarily relied on the classical elliptic projection technique, which is only suitable for parabolic problems and does not lead to optimal convergence of numerical solutions to FSI problems in the standard $L^2$ norm. In this article, we propose a new kinematically coupled scheme for an incompressible FSI thin-structure model and establish a new framework for the numerical analysis of FSI problems in terms of a newly introduced coupled non-stationary Ritz projection, which allows us to prove the optimal-order convergence of the proposed method in the $L^2$ norm. The methodology presented in this article is also applicable to numerous other FSI models and serves as a fundamental tool for advancing research in this field.
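For context, the classical elliptic (Ritz) projection referenced above is the map $R_h$ onto the finite element space $V_h$ defined by \[ \big(\nabla (u - R_h u), \nabla v_h\big) = 0 \quad \text{for all } v_h \in V_h, \] which yields optimal $L^2$ estimates for parabolic problems but, as noted, does not account for the fluid-structure coupling; the coupled non-stationary Ritz projection is designed to fill exactly this gap (the display above is the standard definition, not the paper's new projection).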
Non-Orthogonal Multiple Access (NOMA) is at the heart of a paradigm shift towards non-orthogonal communication due to its potential to scale well in massive deployments. Nevertheless, the overhead of channel estimation remains a key challenge in such scenarios. This paper introduces a data-driven, meta-learning-aided NOMA uplink model that minimizes the channel estimation overhead and does not require perfect channel knowledge. Unlike conventional deep-learning-based successive interference cancellation (SICNet), meta-learning-aided SIC (meta-SICNet) is able to share experience across different devices, facilitating learning for new incoming devices while reducing training overhead. Our results confirm that meta-SICNet outperforms classical SIC and conventional SICNet, as it achieves a lower symbol error rate with fewer pilots.
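For readers unfamiliar with the classical baseline, the sketch below shows hand-crafted power-domain SIC for a two-user uplink (illustrative parameters; meta-SICNet replaces this fixed detector with a learned, meta-trained one).

```python
# Minimal sketch of classical power-domain NOMA SIC for two users.
import numpy as np

rng = np.random.default_rng(2)
p1, p2 = 0.8, 0.2                 # power allocation: user 1 is the strong one
x1 = rng.choice([-1.0, 1.0])      # BPSK symbol, user 1
x2 = rng.choice([-1.0, 1.0])      # BPSK symbol, user 2
noise = 0.05 * rng.normal()
y = np.sqrt(p1) * x1 + np.sqrt(p2) * x2 + noise   # superimposed uplink signal

# Step 1: decode the strong user, treating the weak user as noise.
x1_hat = np.sign(y)
# Step 2: subtract the decoded contribution, then decode the weak user.
y_residual = y - np.sqrt(p1) * x1_hat
x2_hat = np.sign(y_residual)

print(x1 == x1_hat, x2 == x2_hat)
```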
We present a space--time ultra-weak discontinuous Galerkin discretization of the linear Schr\"odinger equation with variable potential. The proposed method is well-posed and quasi-optimal in mesh-dependent norms for very general discrete spaces. Optimal~$h$-convergence error estimates are derived for the method when test and trial spaces are chosen either as piecewise polynomials or as a novel quasi-Trefftz polynomial space. The latter allows for a substantial reduction of the number of degrees of freedom and admits piecewise-smooth potentials. Several numerical experiments validate the accuracy and advantages of the proposed method.
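For reference, a common normalization of the model problem (conventions for signs and constants vary; this is one standard form, not necessarily the paper's exact one) is \[ i\,\partial_t u(\mathbf{x},t) + \Delta u(\mathbf{x},t) - V(\mathbf{x})\,u(\mathbf{x},t) = 0 \quad \text{in } \Omega \times (0,T), \] with variable potential $V$; a quasi-Trefftz space then consists of piecewise polynomials that satisfy this equation approximately (to high order) on each element, which is what permits fewer degrees of freedom.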
Demand for reliable statistics at a local area (small area) level has greatly increased in recent years. Traditional area-specific estimators based on probability samples are not adequate because of small, or even zero, sample sizes in a local area. As a result, methods based on models linking the areas are widely used. The World Bank has focused on estimating poverty measures, in particular the poverty incidence and the poverty gap, known as FGT measures, using a simulated census method called ELL, based on a one-fold nested error model for a suitable transformation of the welfare variable. Modified ELL methods leading to significant gains in efficiency over ELL have also been proposed under the one-fold model. An advantage of ELL and modified ELL methods is that distributional assumptions on the random effects in the model are not needed. In this paper, we extend ELL and modified ELL to two-fold nested error models to estimate poverty indicators for areas (say, states) and subareas (say, counties within a state). Our simulation results indicate that the modified ELL estimators lead to large efficiency gains over ELL at both the area and subarea levels. Further, the modified ELL method retaining both estimated area and subarea effects in the model (called MELL2) performs significantly better in terms of mean squared error (MSE) for sampled subareas than the modified ELL method retaining only the estimated area effect (called MELL1).
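The FGT measures mentioned above form the standard Foster-Greer-Thorbecke family: for a poverty line $z$ and welfare values $y_1,\dots,y_N$, \[ F_\alpha = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{z-y_i}{z}\right)^{\alpha}\mathbf{1}\{y_i<z\}, \] where $\alpha=0$ gives the poverty incidence (head-count ratio) and $\alpha=1$ the poverty gap.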
We propose a new supervised dimensionality reduction technique called Supervised Linear Centroid-Encoder (SLCE), a linear counterpart of the nonlinear Centroid-Encoder (CE) \citep{ghosh2022supervised}. SLCE works by mapping the samples of a class to its class centroid using a linear transformation. The transformation is a projection that reconstructs a point such that its distance from the corresponding class centroid, i.e., the centroid-reconstruction loss, is minimized in the ambient space. We derive a closed-form solution using an eigendecomposition of a symmetric matrix. We provide a detailed analysis and present key mathematical properties of the proposed approach. We establish a connection between the eigenvalues and the centroid-reconstruction loss. In contrast to Principal Component Analysis (PCA), which reconstructs a sample in the ambient space, the transformation of SLCE uses the instances of a class to rebuild the corresponding class centroid. Therefore, the proposed method can be considered a form of supervised PCA. Experimental results show the performance advantage of SLCE over other supervised methods.
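The following sketch shows one plausible instantiation of the centroid-reconstruction idea (our own illustrative formulation, not necessarily the paper's exact objective or solver): an orthonormal projection minimizing the summed sample-to-centroid reconstruction loss, obtained from the eigendecomposition of a symmetric matrix.

```python
# Minimal sketch: find an orthonormal basis U so that U U^T maps each
# sample close to its class centroid.  Expanding sum_i ||c_i - U U^T x_i||^2
# reduces the problem to minimizing tr(U^T A U) for a symmetric matrix A,
# solved by the eigenvectors of A with the smallest eigenvalues.
import numpy as np

def slce_like(X, y, r):
    """X: (n, d) samples, y: (n,) labels, r: reduced dimension."""
    # Target matrix: each row is the centroid of that sample's class.
    C = np.vstack([X[y == y_i].mean(axis=0) for y_i in y])
    S_xx = X.T @ X                       # sum of x x^T
    S_xc = X.T @ C                       # sum of x c^T
    A = S_xx - (S_xc + S_xc.T)           # symmetric matrix from the loss
    eigvals, eigvecs = np.linalg.eigh(A) # ascending eigenvalues
    return eigvecs[:, :r]                # directions minimizing the loss

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
U = slce_like(X, y, r=2)
Z = X @ U                                # supervised 2-D embedding
```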
Spectral clustering is a well-known technique which identifies $k$ clusters in an undirected graph with weight matrix $W\in\mathbb{R}^{n\times n}$ by exploiting its graph Laplacian $L(W)$, whose eigenvalues $0=\lambda_1\leq \lambda_2 \leq \dots \leq \lambda_n$ and eigenvectors are related to the $k$ clusters. Since the computation of $\lambda_{k+1}$ and $\lambda_k$ affects the reliability of this method, the $k$-th spectral gap $\lambda_{k+1}-\lambda_k$ is often considered a stability indicator. This difference can be seen as an unstructured distance between $L(W)$ and an arbitrary symmetric matrix $L_\star$ with vanishing $k$-th spectral gap. A more appropriate structured distance to ambiguity, such that $L_\star$ represents the Laplacian of a graph, has been proposed by Andreotti et al. (2021). Slightly differently, we consider the objective functional $ F(\Delta)=\lambda_{k+1}\left(L(W+\Delta)\right)-\lambda_k\left(L(W+\Delta)\right)$, where $\Delta$ is a perturbation such that $W+\Delta$ has non-negative entries and the same sparsity pattern as $W$. We look for an admissible perturbation $\Delta_\star$ of smallest Frobenius norm such that $F(\Delta_\star)=0$. In order to solve this optimization problem, we exploit its underlying low-rank structure. We formulate a rank-4 symmetric matrix ODE whose stationary points are the sought optimizers. The integration of this equation benefits from the low-rank structure, requiring only moderate computational effort and memory, as shown in several illustrative numerical examples.
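For concreteness, the sketch below computes the $k$-th spectral gap that serves as the stability indicator (this is only the diagnostic quantity; the paper's contribution, the rank-4 matrix ODE for the structured perturbation $\Delta_\star$, is not reproduced here).

```python
# Minimal sketch: build the unnormalized graph Laplacian L(W) = D - W
# and inspect the k-th spectral gap lambda_{k+1} - lambda_k.
import numpy as np

def spectral_gap(W, k):
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian L(W)
    eigvals = np.linalg.eigvalsh(L)         # ascending: 0 = l_1 <= l_2 <= ...
    return eigvals[k] - eigvals[k - 1]      # l_{k+1} - l_k (0-based indexing)

# Two dense blocks joined by one weak edge: expect a large 2nd spectral gap.
W = np.array([[0,    1, 1, 0.01, 0],
              [1,    0, 1, 0,    0],
              [1,    1, 0, 0,    0],
              [0.01, 0, 0, 0,    1],
              [0,    0, 0, 1,    0]], dtype=float)
print(spectral_gap(W, k=2))                 # stability of a 2-clustering
```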
Distributional robustness is a promising framework for training deep learning models that are less vulnerable to adversarial examples and data distribution shifts. Previous works have mainly focused on exploiting distributional robustness in data space. In this work, we explore an optimal-transport-based distributional robustness framework on model spaces. Specifically, we consider the worst-case (loss-maximizing) model distribution within a Wasserstein ball around a given center model distribution. We develop a theory that allows us to learn the optimal robust center model distribution. Through this theory, we can flexibly incorporate the concept of sharpness awareness into training a single model, ensemble models, and Bayesian Neural Networks by considering specific forms of the center model distribution, such as a Dirac delta distribution over a single model, a uniform distribution over several models, and a general Bayesian Neural Network. Furthermore, we demonstrate that sharpness-aware minimization (SAM) is the specific case of our framework that uses a Dirac delta distribution over a single model, so our framework can be viewed as a probabilistic extension of SAM. We conduct extensive experiments to demonstrate the usefulness of our framework in the aforementioned settings, and the results show remarkable improvements of our approaches over the baselines.
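Since SAM arises as the Dirac-delta special case, a single SAM step is a useful reference point; the sketch below applies it to a toy objective (the loss, $\rho$, and learning rate are illustrative choices, not the paper's setup).

```python
# Minimal sketch of one sharpness-aware minimization (SAM) step:
# ascend to the worst-case point within a rho-ball, then descend
# using the gradient evaluated at that perturbed point.
import numpy as np

def loss(w):
    return 0.5 * w @ w + np.sin(3 * w).sum()

def grad(w):
    return w + 3 * np.cos(3 * w)

w = np.array([1.0, -2.0])
rho, lr = 0.05, 0.1

g = grad(w)
eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent to the sharpest nearby point
w = w - lr * grad(w + eps)                    # descend with the perturbed gradient

print(loss(w))
```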
Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features, which occur regularly in real-world input domains and within the hidden layers of GNNs, and we demonstrate the need for multiple aggregation functions in this context. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model. With this work, we hope to steer some of the GNN research towards new aggregation methods which we believe are essential in the search for powerful and robust models.
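The sketch below illustrates PNA-style aggregation for a single node, combining four aggregators with three degree-based scalers (a simplified rendering; the full PNA layer composes these messages with learned transformations, and avg_log_degree stands in for the training-set normalizer $\delta$).

```python
# Minimal sketch of PNA-style neighbourhood aggregation for one node.
import numpy as np

def pna_aggregate(neighbour_feats, degree, avg_log_degree=1.0):
    """neighbour_feats: (deg, d) messages from neighbours."""
    aggs = np.concatenate([
        neighbour_feats.mean(axis=0),
        neighbour_feats.max(axis=0),
        neighbour_feats.min(axis=0),
        neighbour_feats.std(axis=0),
    ])
    s = np.log(degree + 1) / avg_log_degree      # degree-based scaler
    # Scalers: identity, amplification (s), attenuation (1/s).
    return np.concatenate([aggs, s * aggs, aggs / s])

msgs = np.random.default_rng(4).normal(size=(3, 8))   # 3 neighbours, 8 features
out = pna_aggregate(msgs, degree=3)                   # 4 aggregators x 3 scalers x 8
print(out.shape)                                      # (96,)
```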