A near-field secure transmission framework is proposed. Employing a hybrid beamforming architecture, a base station (BS) transmits confidential information to a legitimate user (U) in the presence of an eavesdropper (E) in the near field. A two-stage algorithm is proposed to maximize the near-field secrecy capacity. Based on the fully-digital beamformers obtained in the first stage, the optimal analog beamformers and baseband digital beamformers are derived alternately in closed form in the second stage. Numerical results demonstrate that, in contrast to far-field secure communication, which relies on the angular disparity between U and E, near-field secure communication mainly relies on their distance disparity.
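As a minimal numeric sketch of this distance-disparity effect, the snippet below builds exact spherical-wave near-field steering vectors for a uniform linear array and evaluates the secrecy rate of simple maximum-ratio beam focusing when U and E share the same angle but sit at different distances. All geometry, carrier frequency, array size, and SNR values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Near-field (spherical-wave) steering and secrecy rate: illustrative only.
c, f = 3e8, 28e9
lam = c / f
N = 256                                  # ULA elements (assumed)
d = lam / 2
y = (np.arange(N) - (N - 1) / 2) * d     # element positions on the array axis

def steer(r, theta):
    # exact per-element distance to the point at range r, angle theta
    dist = np.sqrt(r**2 + y**2 - 2 * r * y * np.sin(theta))
    return np.exp(-1j * 2 * np.pi * dist / lam) / np.sqrt(N)

aU = steer(5.0, 0.0)                     # legitimate user at 5 m
aE = steer(20.0, 0.0)                    # eavesdropper at 20 m, same angle
w = aU                                   # MRT beam focusing on U
snr = 1e4                                # transmit SNR (assumed)

def rate(a):
    return np.log2(1 + snr * np.abs(np.conj(a) @ w) ** 2)

print("secrecy rate [bps/Hz]:", max(rate(aU) - rate(aE), 0.0))
```

Because the spherical-wave phase depends on distance, the beam focused on U at 5 m is poorly matched to E at 20 m even at the identical angle, leaving a positive secrecy rate.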
We address the problem of synthesizing distorting mechanisms that maximize infinite-horizon privacy for Networked Control Systems (NCSs). We consider stochastic LTI systems where information about the system state is obtained through noisy sensor measurements and transmitted to a (possibly adversarial) remote station via unsecured/public communication networks to compute control actions (a remote LQR controller). Because the network/station is untrustworthy, adversaries might access sensor and control data and estimate the system state. To mitigate this risk, we pass sensor and control data through distorting (privacy-preserving) mechanisms before transmission and send the distorted data through the communication network. These mechanisms consist of a linear coordinate transformation and additive dependent Gaussian vectors. We formulate the synthesis of the distorting mechanisms as a convex program. In this convex program, we minimize the infinite-horizon mutual information (our privacy metric) between the system state and its optimal estimate at the remote station, subject to a desired upper bound on the control performance degradation (LQR cost) induced by the distorting mechanisms.
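For jointly Gaussian quantities, the mutual-information privacy metric has a closed log-determinant form, which the short sketch below illustrates for a one-shot additive model y = x + v with independent Gaussian distortion v. The covariances and the simple additive model are illustrative assumptions; the paper's actual program is an infinite-horizon version coupled with the LQR cost constraint.

```python
import numpy as np

# Gaussian MI identity: I(x; x+v) = 0.5*logdet(Sigma_x + Sigma_v)
#                                  - 0.5*logdet(Sigma_v)   (in nats)
def gaussian_mi(Sigma_x, Sigma_v):
    _, ld_xv = np.linalg.slogdet(Sigma_x + Sigma_v)
    _, ld_v = np.linalg.slogdet(Sigma_v)
    return 0.5 * (ld_xv - ld_v)

Sigma_x = np.array([[2.0, 0.3], [0.3, 1.0]])   # hypothetical state covariance
for scale in (0.1, 1.0, 10.0):                 # larger distortion -> less leakage
    print(scale, gaussian_mi(Sigma_x, scale * np.eye(2)))
```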
This paper investigates the usage of hybrid automatic repeat request (HARQ) protocols for power-efficient and reliable communications over free-space optical (FSO) links. By exploiting the large coherence time of the FSO channel, the proposed transmission schemes combat turbulence-induced fading by retransmitting the failed packets within the same coherence interval. To assess the performance of the presented HARQ technique, we develop a theoretical framework for the outage performance. In more detail, we report a closed-form expression for the outage probability (OP) and derive an approximation for the high signal-to-noise ratio (SNR) regime. Building upon the theoretical framework, we formulate a transmission power allocation problem across the retransmission rounds. This optimization problem is solved numerically through the use of an iterative algorithm. In addition, the average throughput of the HARQ schemes under consideration is examined. Simulation results validate the theoretical analysis under different turbulence conditions and demonstrate the performance improvement, in terms of both OP and throughput, of the proposed HARQ schemes compared to fixed-transmit-power HARQ benchmarks.
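As a hedged numerical companion to such an outage analysis, the sketch below Monte Carlo-estimates the OP of an HARQ scheme with per-round transmit powers over a slow FSO link, assuming log-normal turbulence, Chase combining, and a channel that stays constant across all retransmissions within one coherence interval. All modeling choices and parameter values are illustrative assumptions, not the paper's closed-form analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(powers, rate, sigma_x=0.3, n_trials=200_000):
    """Estimate P(outage) after up to len(powers) HARQ rounds."""
    # Log-normal irradiance normalized so that E[I] = 1 (assumed model).
    I = rng.lognormal(mean=-2 * sigma_x**2, sigma=2 * sigma_x, size=n_trials)
    snr_acc = np.zeros(n_trials)
    in_outage = np.ones(n_trials, dtype=bool)
    for P in powers:                 # one entry per (re)transmission round
        snr_acc += P * I**2          # Chase combining; SNR ~ P*I^2 (assumed)
        in_outage &= np.log2(1 + snr_acc) < rate
    return in_outage.mean()

# Illustrative decreasing power profile across three rounds:
print(outage_prob(powers=[1.0, 0.8, 0.6], rate=2.0))
```

A power-allocation routine would then search over the `powers` vector to trade total consumed power against the estimated OP.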
Current methods of graph signal processing (GSP) rely heavily on the specific structure of the underlying network: the shift operator and the graph Fourier transform are both derived directly from a specific graph. In many cases, the network is subject to error or natural changes over time. This motivates a new perspective on GSP, where the signal processing framework is developed for an entire class of graphs with similar structures. This approach can be formalized via the theory of graph limits, where graphs are considered as random samples from a distribution represented by a graphon. When the network under consideration has underlying symmetries, they may be modeled as samples from Cayley graphons. In Cayley graphons, vertices are sampled from a group, and the link probability between two vertices is determined by a function of the two corresponding group elements. Infinite groups such as the 1-dimensional torus can be used to model networks with an underlying spatial reality. Cayley graphons on finite groups give rise to a stochastic block model, where the link probabilities between blocks form an (edge-weighted) Cayley graph. This manuscript summarizes some work on graph signal processing on large networks, in particular samples of Cayley graphons.
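As a small illustration of the Cayley-graphon sampling model, the snippet below draws a graph from a hypothetical Cayley graphon on the 1-dimensional torus: vertex labels are sampled uniformly from [0, 1), and the link probability depends only on the circular difference of the two labels. The kernel g is an arbitrary illustrative choice, not tied to any network in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(t):                                  # kernel on the torus (assumed)
    t = np.minimum(t % 1.0, (-t) % 1.0)    # circular distance in [0, 1/2]
    return np.exp(-10 * t)                 # nearby group elements link more

n = 200
x = rng.random(n)                          # vertex labels: group elements
P = g(x[:, None] - x[None, :])             # link probabilities W(x_i, x_j)
A = (rng.random((n, n)) < P).astype(int)   # Bernoulli edge sampling
A = np.triu(A, 1)                          # drop self-loops and lower half
A = A + A.T                                # symmetrize
print(A.sum() // 2, "edges")
```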
This paper explores the application of machine learning to enhance our understanding of water accessibility issues in underserved communities called Colonias, located along the northern side of the United States-Mexico border. We analyzed more than 2,000 such communities using data from the Rural Community Assistance Partnership (RCAP) and applied hierarchical clustering and the adaptive affinity propagation algorithm to automatically group the Colonias into clusters with different water access conditions. We adopted the Gower distance so that the algorithms can process complex datasets containing both categorical and numerical attributes. To better understand and explain the clustering results derived from the machine learning process, we further applied a decision tree analysis that associates the input data with the derived clusters, identifying and ranking the importance of the factors that characterize the water access conditions in each cluster. Our results complement experts' priority rankings of water infrastructure needs, providing a more in-depth view of the water insecurity challenges faced by the Colonias. As an automated and reproducible workflow combining a series of tools, the proposed machine learning pipeline represents an operationalized solution for conducting data-driven analysis to understand water access inequality. This pipeline can be adapted to analyze different datasets and decision scenarios.
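As a minimal sketch of the mixed-data clustering step (hierarchical clustering only; the adaptive affinity propagation stage is omitted), the snippet below computes a Gower distance matrix over a tiny invented table with both numerical and categorical columns. All column names and values are hypothetical, not RCAP data.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical mixed-type records standing in for Colonia attributes.
df = pd.DataFrame({
    "pct_no_piped_water": [0.10, 0.55, 0.48, 0.02],                 # numerical
    "median_income":      [31000, 18000, 20500, 42000],             # numerical
    "water_source":       ["utility", "hauled", "well", "utility"], # categorical
})

def gower_matrix(df):
    """Average per-attribute dissimilarity: range-scaled for numbers,
    0/1 mismatch for categories."""
    n = len(df)
    D = np.zeros((n, n))
    for col in df.columns:
        x = df[col].to_numpy()
        if np.issubdtype(x.dtype, np.number):
            span = x.max() - x.min()
            D += np.abs(x[:, None] - x[None, :]) / (span if span else 1.0)
        else:
            D += (x[:, None] != x[None, :]).astype(float)
    return D / df.shape[1]

D = gower_matrix(df)
Z = linkage(squareform(D, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # cluster labels
```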
A crucial step towards the 6th generation (6G) of networks would be a shift in communication paradigm beyond the limits of Shannon's theory. In both classical and quantum Shannon theory, communication channels are generally assumed to combine through classical trajectories, so that the associated network path traversed by the information carrier is well defined. Counter-intuitively, quantum mechanics enables a quantum information carrier to propagate through a quantum path, i.e., through a path such that the causal order of the constituent communication channels becomes indefinite. Quantum paths exhibit astonishing features, such as providing nonzero capacity even when no information can be sent through any classical path. In this paper, we study the quantum capacity achievable via a quantum path and establish upper and lower bounds on it. Our findings reveal the substantial advantage achievable with a quantum path over any classical placement of communication channels in terms of ultimately achievable communication rates. Furthermore, we identify the region where a quantum path incontrovertibly outperforms the amount of transmissible information beyond the limits of conventional quantum Shannon theory, and we quantify this advantage over classical paths through a conservative estimate.
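To make the "nonzero capacity from zero-capacity components" feature concrete, the self-contained sketch below numerically builds the quantum switch of two fully depolarizing qubit channels (Kraus operators sigma_i / 2) with the control qubit in |+><+|. Each fixed order outputs the maximally mixed state regardless of the input, yet the switch output still depends on the input state. The construction uses the standard switch Kraus operators; it is a generic textbook example, not the paper's bounding analysis.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
K = [P / 2 for P in (I2, X, Y, Z)]       # Kraus ops of a depolarizing channel

plus = np.array([[0.5, 0.5], [0.5, 0.5]], complex)   # control |+><+|
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

def switch_output(rho):
    """Quantum switch: order K_i∘K_j on control |0>, K_j∘K_i on control |1>."""
    out = np.zeros((4, 4), complex)
    for Ki in K:
        for Kj in K:
            S = np.kron(Ki @ Kj, P0) + np.kron(Kj @ Ki, P1)
            out += S @ np.kron(rho, plus) @ S.conj().T
    return out

r0 = switch_output(np.diag([1.0, 0.0]).astype(complex))  # input |0><0|
r1 = switch_output(np.diag([0.0, 1.0]).astype(complex))  # input |1><1|
print(np.round(r0 - r1, 3))   # nonzero: the output remembers the input
```

The printed difference is nonzero (entries of magnitude 0.125), whereas any classical concatenation of the two channels would output I/2 for every input, making the difference identically zero.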
Given the large size and complexity of most biochemical regulation and signaling networks, there is a non-trivial relationship between the micro-level logic of component interactions and the observed macro-dynamics. Here we address this issue by formalizing the existing concept of pathway modules, which are sequences of state updates that are guaranteed to occur (barring outside interference) in the dynamics of automata networks after the perturbation of a subset of driver nodes. We present a novel algorithm to automatically extract pathway modules from networks, and we characterize the interactions that may take place between modules. This methodology uses only the causal logic of individual node variables (micro-dynamics) without the need to compute the dynamical landscape of the networks (macro-dynamics). Specifically, we identify complex modules, which maximize pathway length and require synergy between their components. This allows us to propose a new take on dynamical modularity that partitions complex networks into causal pathways of variables that are guaranteed to transition to specific states given a perturbation to a set of driver nodes. Thus, the same node variable can take part in distinct modules depending on the state it takes. Our measure of the dynamical modularity of a network is then inversely proportional to the overlap among complex modules and maximal when complex modules are completely decouplable from one another in the network dynamics. We estimate dynamical modularity for several genetic regulatory networks, including the Drosophila melanogaster segment-polarity network. We discuss how complex modules and the dynamical modularity portrait of a network help explain its macro-dynamics, for instance by uncovering the (more or less) decouplable building blocks of emergent computation (or collective behavior) in biochemical regulation and signaling.
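As a toy illustration of guaranteed state updates (the core of a pathway module), the sketch below propagates pinned driver states through a small hypothetical Boolean network: a node's update is recorded only once its outcome is determined by the already-guaranteed states. This is an illustrative simplification, not the paper's module-extraction algorithm.

```python
# Toy Boolean network: each rule maps known states -> the node's next value.
rules = {
    "B": lambda s: s["A"],                # B copies A
    "C": lambda s: s["A"] and s["B"],     # C = A AND B
    "D": lambda s: s["C"] or s["E"],      # D = C OR E
}

def propagate(pinned):
    """Return the ordered sequence of updates guaranteed by the pinned drivers."""
    state = dict(pinned)                  # values known so far
    module = list(pinned.items())         # ordered guaranteed updates
    changed = True
    while changed:
        changed = False
        for node, f in rules.items():
            if node in state:
                continue
            try:
                val = f(state)            # KeyError => an input that still
            except KeyError:              # matters is undetermined; skip
                continue
            state[node] = val
            module.append((node, val))
            changed = True
    return module

print(propagate({"A": 1}))   # A=1 guarantees B=1, then C=1, then D=1
```

Note that D becomes determined even though E is unknown, because C = 1 alone fixes the OR; this is the sense in which a pathway of updates is guaranteed barring outside interference.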
Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction with information in a digitally augmented physical world. For interacting with such devices, three main types of input have emerged so far, besides rather unintuitive finger gestures: 1) touch input on the frame of the device, 2) touch input on accessories (controllers), and 3) voice input. While these techniques have both advantages and disadvantages depending on the current situation of the user, they largely ignore the skills and dexterity that we exhibit when interacting with the real world: throughout our lives, we have trained extensively to use our limbs to interact with and manipulate the physical world around us. This thesis explores how the skills and dexterity of our upper and lower limbs, acquired and trained in interacting with the real world, can be transferred to the interaction with HMDs. Thus, this thesis develops the vision of around-body interaction, in which we use the space around our body, defined by the reach of our limbs, for fast, accurate, and enjoyable interaction with such devices. This work contributes four interaction techniques, two for the upper limbs and two for the lower limbs: the first contribution shows how the proximity between our head and hand can be used to interact with HMDs. The second contribution extends the interaction with the upper limbs to multiple users and illustrates how the registration of augmented information in the real world can support cooperative use cases. The third contribution shifts the focus to the lower limbs and discusses how foot taps can be leveraged as an input modality for HMDs. The fourth contribution presents how lateral shifts of the walking path can be exploited for mobile and hands-free interaction with HMDs while walking.
Wireless fingerprinting refers to a device identification method that leverages hardware imperfections and wireless channel variations as signatures. Beyond physical-layer characteristics, recent studies have demonstrated that user behaviours can be identified through network traffic, e.g., packet length, without decryption of the payload. Inspired by these results, we propose a multi-layer fingerprinting framework that jointly considers multi-layer signatures for improved identification performance. In contrast to previous works, by leveraging the recent multi-view machine learning paradigm, i.e., data with multiple forms, our method can cluster the device information shared among the multi-layer features without supervision. Our information-theoretic approach can be extended to supervised and semi-supervised settings with straightforward derivations. In solving the formulated problem, we obtain a tight surrogate bound using variational inference for efficient optimization. In extracting the shared device information, we develop an algorithm based on the Wyner common information method, with reduced computational complexity compared to existing approaches. The algorithm can be applied to data distributions belonging to the exponential family. Empirically, we evaluate the algorithm on a synthetic dataset with real-world video traffic and simulated physical-layer characteristics. Our empirical results show that the proposed method outperforms state-of-the-art baselines in both supervised and unsupervised settings.
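The Wyner common-information algorithm itself is beyond an abstract-level sketch, but the overall pipeline (extract the representation shared across layers, then cluster devices on it) can be illustrated with a generic stand-in. The sketch below uses CCA between a synthetic traffic-feature view and a synthetic physical-layer view; all data, dimensions, and the choice of CCA are invented for illustration and are not the paper's method.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300
device = rng.integers(0, 3, n)                 # 3 latent devices
shared = np.eye(3)[device]                     # device identity drives both views
view_traffic = shared @ rng.normal(size=(3, 8)) + 0.3 * rng.normal(size=(n, 8))
view_phy     = shared @ rng.normal(size=(3, 5)) + 0.3 * rng.normal(size=(n, 5))

# Extract the representation shared across the two views, then cluster it.
cca = CCA(n_components=2).fit(view_traffic, view_phy)
z, _ = cca.transform(view_traffic, view_phy)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(labels[:20])   # cluster ids (up to permutation of the true devices)
print(device[:20])
```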
In the era of the Internet of Things (IoT), blockchain is a promising technology for improving the efficiency of healthcare systems, as it enables secure storage, management, and sharing of real-time health data collected by IoT devices. As implementations of blockchain-based healthcare systems usually involve multiple conflicting metrics, it is essential to balance them according to the requirements of specific scenarios. In this paper, we formulate a joint optimization model with three metrics, namely latency, security, and computational cost, that are particularly important for IoT-enabled healthcare. However, it is computationally intractable to identify the exact optimal solution of this problem for practically sized systems. Thus, we propose an algorithm called the Adaptive Discrete Particle Swarm Algorithm (ADPSA) to obtain near-optimal solutions in a low-complexity manner. With its roots in the classical Particle Swarm Optimization (PSO) algorithm, our proposed ADPSA can effectively manage the numerous binary and integer variables in the formulation. We demonstrate by extensive numerical experiments that the ADPSA consistently outperforms existing benchmark approaches, including the original PSO, exhaustive search, and simulated annealing, in a wide range of scenarios.
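ADPSA's exact update rules are not spelled out here, so the sketch below falls back on the classic binary PSO of Kennedy and Eberhart as a hedged stand-in for how a swarm searches over binary decision variables; the cost function is a toy scalarization standing in for the latency/security/computational-cost objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    # Toy objective over a binary vector (placeholder for the joint metric).
    w = np.arange(1, len(x) + 1)
    return np.abs(w @ x - 0.4 * w.sum())

def binary_pso(dim=12, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = rng.integers(0, 2, (n_particles, dim)).astype(float)
    V = rng.normal(0, 1, (n_particles, dim))
    pbest, pbest_cost = X.copy(), np.apply_along_axis(cost, 1, X)
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        V = np.clip(V, -6, 6)                          # velocity clamp
        X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(float)
        c = np.apply_along_axis(cost, 1, X)
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = X[better], c[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

print(binary_pso())   # best binary configuration found and its cost
```

The sigmoid of the velocity is interpreted as the probability that a bit is set, which is the standard trick for moving PSO from continuous to binary search spaces.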
Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices, a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.