We study the complexity of isomorphism problems for d-way arrays, or tensors, under natural actions by classical groups such as orthogonal, unitary, and symplectic groups. Such problems arise naturally in statistical data analysis and quantum information. We study two types of complexity-theoretic questions. First, for a fixed action type (isomorphism, conjugacy, etc.), we relate the complexity of the isomorphism problem over a classical group to that over the general linear group. Second, for a fixed group type (orthogonal, unitary, or symplectic), we compare the complexity of the decision problems for different actions. Our main results are as follows. First, for orthogonal and symplectic groups acting on 3-way arrays, the isomorphism problems reduce to the corresponding problem over the general linear group. Second, for orthogonal and unitary groups, the isomorphism problems of five natural actions on 3-way arrays are polynomial-time equivalent, and the d-tensor isomorphism problem reduces to the 3-tensor isomorphism problem for any fixed d>3. For unitary groups, the preceding result implies that LOCC classification of tripartite quantum states is at least as difficult as LOCC classification of d-partite quantum states for any d. Lastly, we also show that the graph isomorphism problem reduces to the tensor isomorphism problem over orthogonal and unitary groups.
Wireless communication is enabling billions of people to connect to each other and the internet, transforming every sector of the economy, and building the foundations for powerful new technologies that hold great promise to improve lives at an unprecedented rate and scale. The rapid increase in the number of devices and the associated demands for higher data rates and broader network coverage fuel the need for more robust wireless technologies. The key technology identified to address this problem is referred to as Cell-Free Massive MIMO (CF-mMIMO). CF-mMIMO is accompanied by many challenges, one of which is efficiently allocating limited resources. In this paper, we focus on a major resource allocation problem in wireless networks, namely the Pilot Assignment problem (PA). We show that PA is strongly NP-hard and that it does not admit a polynomial-time constant-factor approximation algorithm. Further, we show that PA cannot be approximated in polynomial time within a factor of $\mathcal{O}(K^2)$ (where $K$ is the number of users) when the system consists of at least three pilots. Finally, we present an approximation lower bound of $1.058$ (resp. $\epsilon K^2$, for $\epsilon >0$) in special cases where the system consists of exactly two (resp. three) pilots.
Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to deal with paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at: //github.com/savranciati/pdglasso.
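For orientation, the following is a minimal sketch of the generic two-class fused graphical lasso objective; the coloured, paired-data penalty analysed in the paper (where the two groups are dependent) differs in its details:
\[
\max_{\Theta^{(1)},\,\Theta^{(2)}\succ 0}\;\sum_{k=1}^{2}\Big(\log\det\Theta^{(k)}-\operatorname{tr}\big(S^{(k)}\Theta^{(k)}\big)\Big)\;-\;\lambda_1\sum_{k=1}^{2}\sum_{i\neq j}\big|\theta^{(k)}_{ij}\big|\;-\;\lambda_2\sum_{i,j}\big|\theta^{(1)}_{ij}-\theta^{(2)}_{ij}\big|,
\]
where $S^{(k)}$ denotes the sample covariance matrix of group $k$, $\lambda_1$ controls within-graph sparsity, and $\lambda_2$ encourages the two precision matrices to share entries.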
An assumption that has often been used by researchers to model the interference in a wireless network is the unit disk graph model. While many theoretical results and performance guarantees have been obtained under this model, an open research direction is to extend these results to hypergraph interference models. Motivated by recent results showing that the worst-case performance of the distributed maximal scheduling algorithm is characterized by the interference degree of the hypergraph, in the present work we investigate properties of the interference degree of the hypergraph and the structure of hypergraphs arising from physical constraints. We show that the problem of computing the interference degree of a hypergraph is NP-hard, and we prove some properties and results concerning this hypergraph invariant. We investigate which hypergraphs are realizable, i.e., which hypergraphs arise in practice, based on physical constraints, as the interference model of a wireless network. In particular, a question that arises naturally is: what is the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable? We determine this quantity for various values of the path loss exponent of signal propagation. We also investigate hypergraphs generated by line networks.
Perturbation analysis has emerged as a significant concern across multiple disciplines, with notable advancements being achieved, particularly in the realm of matrices. This study centers on specific aspects pertaining to tensor T-eigenvalues within the context of the tensor-tensor multiplication. Initially, an analytical perturbation analysis is introduced to explore the sensitivity of T-eigenvalues. In the case of third-order tensors featuring square frontal slices, we extend the classical Gershgorin disc theorem and show that all T-eigenvalues are located inside a union of Gershgorin discs. Additionally, we extend the Bauer-Fike theorem to encompass F-diagonalizable tensors and present two modified versions applicable to more general scenarios. The tensor case of the Kahan theorem, which accounts for general perturbations on Hermitian tensors, is also investigated. Furthermore, we propose the concept of pseudospectra for third-order tensors based on tensor-tensor multiplication. We develop four definitions that are equivalent under the spectral norm to characterize tensor $\varepsilon$-pseudospectra. Additionally, we present several pseudospectral properties. Finally, several numerical examples are provided to visualize the $\varepsilon$-pseudospectra of specific tensors at different levels.
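For reference, the classical matrix statement being extended is the Gershgorin disc theorem: every eigenvalue of $A=(a_{ij})\in\mathbb{C}^{n\times n}$ lies in
\[
\bigcup_{i=1}^{n}\Big\{z\in\mathbb{C}\;:\;|z-a_{ii}|\le \sum_{j\neq i}|a_{ij}|\Big\},
\]
and the paper establishes an analogous localization of the T-eigenvalues of a third-order tensor under the tensor-tensor product.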
The level-$k$ $\ell_1$-Fourier weight of a Boolean function refers to the sum of absolute values of its level-$k$ Fourier coefficients. Fourier growth refers to the growth of these weights as $k$ grows. It has been extensively studied for various computational models, and bounds on the Fourier growth, even for the first few levels, have proven useful in learning theory, circuit lower bounds, pseudorandomness, and quantum-classical separations. We investigate the Fourier growth of certain functions that naturally arise from communication protocols for XOR functions (partial functions evaluated on the bitwise XOR of the inputs to Alice and Bob). If a protocol $\mathcal C$ computes an XOR function, then $\mathcal C(x,y)$ is a function of the parity $x\oplus y$. This motivates us to analyze the XOR-fiber of $\mathcal C$, defined as $h(z):=\mathbb E_{x,y}[\mathcal C(x,y)|x\oplus y=z]$. We present improved Fourier growth bounds for the XOR-fibers of protocols that communicate $d$ bits. For the first level, we show a tight $O(\sqrt d)$ bound and obtain a new coin theorem, as well as an alternative proof for the tight randomized communication lower bound for Gap-Hamming. For the second level, we show a $d^{3/2}\cdot\mathrm{polylog}(n)$ bound, which improves the previous $O(d^2)$ bound by Girish, Raz, and Tal (ITCS 2021) and implies a polynomial improvement on the randomized communication lower bound for the XOR-lift of Forrelation, extending its quantum-classical gap. Our analysis is based on a new way of adaptively partitioning a relatively large set in Gaussian space to control its moments in all directions. We achieve this via martingale arguments and allowing protocols to transmit real values. We also show a connection between Fourier growth and lifting theorems with constant-sized gadgets as a potential approach to prove optimal bounds for the second level and beyond.
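In standard notation, for $f:\{-1,1\}^n\to\mathbb{R}$ with Fourier expansion $f(z)=\sum_{S\subseteq[n]}\hat f(S)\prod_{i\in S}z_i$, the quantity in question is the level-$k$ $\ell_1$-Fourier weight
\[
L_{1,k}(f)\;=\;\sum_{S\subseteq[n],\,|S|=k}\big|\hat f(S)\big|,
\]
evaluated here at the XOR-fiber $h$ of the protocol: the stated results bound $L_{1,1}(h)$ by $O(\sqrt d)$ and $L_{1,2}(h)$ by $d^{3/2}\cdot\mathrm{polylog}(n)$.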
Identifying the closest fog node is crucial for mobile clients to benefit from fog computing. Relying on geographical location alone is insufficient for this, as it ignores the actually observed client access latency. In this paper, we analyze the performance of the Meridian and Vivaldi network coordinate systems in identifying the nearest fog nodes. To that end, we simulate a dense fog environment with mobile clients. We find that while network coordinate systems do identify fog nodes in close network proximity, a purely latency-oriented identification approach ignores the larger problem of balancing load across fog nodes.
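As an illustration only, and assuming the standard Vivaldi formulation with height vectors: each node $i$ maintains a Euclidean coordinate $x_i$ and a height $h_i$, and the latency between nodes $i$ and $j$ is predicted as
\[
\widehat{\mathrm{RTT}}(i,j)\;=\;\|x_i-x_j\|_2+h_i+h_j ,
\]
so a coordinate-based strategy assigns a client $c$ to the fog node $f^{\ast}=\arg\min_{f}\widehat{\mathrm{RTT}}(c,f)$. It is precisely this purely latency-driven choice that leaves load balancing across fog nodes unaddressed.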
The main objective of the present paper is to construct a new class of space-time discretizations for the stochastic $p$-Stokes system and analyze its stability and convergence properties. We derive regularity results for the approximation that are similar to the natural regularity of solutions. One of the key arguments relies on a discrete extrapolation argument that allows us to relate lower moments of discrete maximal processes. We show that, if the generic spatial discretization is constraint-conforming, then the velocity approximation satisfies a best-approximation property in the natural distance. Moreover, we present an example such that the resulting velocity approximation converges with rate $1/2$ in time and $1$ in space towards the (unknown) target velocity with respect to the natural distance.
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
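As a hedged sketch (the exact likelihood, constraints, and scaling used in the paper may differ), a joint $l_1/l_2$-regularized estimator for $K$ linear structural equation models $X^{(k)}=X^{(k)}B^{(k)}+\varepsilon^{(k)}$ can be written as
\[
\min_{\{B^{(k)}\ \text{acyclic}\}}\;\sum_{k=1}^{K}\ell\big(B^{(k)};X^{(k)}\big)\;+\;\lambda\sum_{i\neq j}\Big\|\big(B^{(1)}_{ij},\dots,B^{(K)}_{ij}\big)\Big\|_2 ,
\]
where $\ell$ is the negative Gaussian log-likelihood of task $k$ and the group ($l_1$ of $l_2$) penalty couples the same edge across tasks, which is what promotes a shared sparse union of supports.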
Co-evolving time series appear in a multitude of applications, such as environmental monitoring, financial analysis, and smart transportation. This paper aims to address the following two challenges: (C1) how to incorporate the explicit relationship networks of the time series; and (C2) how to model the implicit relationships of the temporal dynamics. We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN). TGCN tackles the first challenge by generalizing Graph Convolutional Networks (GCNs) from flat graphs to tensor graphs, which captures the synergy between the multiple graphs associated with the tensors. TRNN leverages tensor decomposition to model the implicit relationships among co-evolving time series. The experimental results on five real-world datasets demonstrate the efficacy of the proposed method.
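For reference, the flat-graph GCN layer that TGCN generalizes is commonly written (in the formulation of Kipf and Welling) as
\[
H^{(l+1)}=\sigma\big(\tilde D^{-1/2}\tilde A\,\tilde D^{-1/2}H^{(l)}W^{(l)}\big),\qquad \tilde A=A+I,\quad \tilde D_{ii}=\textstyle\sum_j\tilde A_{ij},
\]
where $A$ is the graph adjacency matrix, $H^{(l)}$ the node features at layer $l$, and $W^{(l)}$ a learnable weight matrix; TGCN lifts this propagation rule from flat graphs to tensor graphs in order to capture the synergy between the multiple graphs associated with the tensor.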
Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis based on artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.