Joint communication and sensing is expected to be one of the features introduced by sixth-generation (6G) wireless systems. This will enable a wide variety of new applications; hence, it is important to find suitable approaches to secure the exchanged information. Conventional security mechanisms may not be able to meet the stringent delay, power, and complexity requirements, which opens the challenge of finding new lightweight security solutions. A promising approach from the physical layer is secret key generation (SKG) from channel fading. While SKG has been investigated for several decades, practical implementations of its full protocol are still scarce. The aim of this chapter is to evaluate the SKG rates in real-life setups under a set of different scenarios. We consider a typical radar waveform and present a full implementation of the SKG protocol. Each step is evaluated to demonstrate that generating keys from the physical layer can be a viable solution for future networks. However, we show that there is no single solution that generalizes to all cases; instead, parameters should be chosen according to the context.
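To make the channel-based key extraction concrete, the following minimal sketch simulates only the quantization step of an SKG protocol: two legitimate parties observe correlated (reciprocal) fading magnitudes and each maps them to bits by thresholding against its own median. The reciprocity coefficient, probe count, and single-bit quantizer are illustrative assumptions, not the chapter's implementation; the residual key disagreement rate is what information reconciliation would subsequently have to remove.

```python
# Minimal sketch of the quantization step of a secret key generation (SKG)
# protocol from reciprocal channel fading. All parameters are illustrative
# assumptions, not the chapter's implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 1024                      # number of channel probes
rho = 0.95                    # assumed reciprocity (correlation) between the two ends

# Correlated Rayleigh-fading magnitude measurements at Alice and Bob.
h = rng.normal(size=n) + 1j * rng.normal(size=n)
noise_a = rng.normal(size=n) + 1j * rng.normal(size=n)
noise_b = rng.normal(size=n) + 1j * rng.normal(size=n)
g_a = np.abs(rho * h + np.sqrt(1 - rho**2) * noise_a)
g_b = np.abs(rho * h + np.sqrt(1 - rho**2) * noise_b)

# Single-bit quantization against each party's own median (no exchange needed).
key_a = (g_a > np.median(g_a)).astype(int)
key_b = (g_b > np.median(g_b)).astype(int)

# Key disagreement rate left for information reconciliation to correct.
print(f"key disagreement rate: {np.mean(key_a != key_b):.3f}")
```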
Many data symmetries can be described in terms of group equivariance, and the most common way of encoding group equivariance in neural networks is by building linear layers that are group equivariant. In this work we investigate whether equivariance of a network implies that all layers are equivariant. On the theoretical side we find cases where equivariance does imply layerwise equivariance, but also demonstrate that this is not the case in general. Nevertheless, we conjecture that CNNs that are trained to be equivariant will exhibit layerwise equivariance and explain how this conjecture is a weaker version of the recent permutation conjecture by Entezari et al. [2022]. We perform quantitative experiments with VGG-nets on CIFAR10 and qualitative experiments with ResNets on ImageNet to illustrate and support our theoretical findings. These experiments are not only of interest for understanding how group equivariance is encoded in ReLU-networks; they also give a new perspective on Entezari et al.'s permutation conjecture, as we find that it is typically easier to merge a network with a group-transformed version of itself than to merge two different networks.
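As an illustration of what a layerwise-equivariance test looks like, the sketch below checks whether a single convolutional layer commutes with a group action (here, horizontal flips). The random, untrained layer is an assumption for illustration only; the experiments described above instead inspect trained VGG/ResNet layers and allow the equivariance to hold up to a permutation of output channels.

```python
# Minimal sketch of a layerwise-equivariance check: does one layer commute
# with a group action (horizontal flip)? Illustrative assumption: a random
# Conv2d layer; a trained, equivariant network would be used in practice.
import torch

layer = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
x = torch.randn(1, 3, 32, 32)

def flip(t):
    # Group action: horizontal flip of the last (width) dimension.
    return torch.flip(t, dims=[-1])

out_then_flip = flip(layer(x))
flip_then_out = layer(flip(x))

# A layerwise flip-equivariant layer would make this difference (near) zero,
# possibly after permuting its output channels.
print((out_then_flip - flip_then_out).abs().max())
```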
The integration of data and knowledge from several sources is known as data fusion. When data are only available in a distributed fashion, or when different sensors are used to infer a quantity of interest, data fusion becomes essential. In Bayesian settings, a priori information on the unknown quantities is available and possibly shared among the distributed estimators. When the local estimates are fused, the prior knowledge used to construct the individual local posteriors may be overused unless the fusion node accounts for this and corrects for it. In this paper, we analyze the effects of shared priors in Bayesian data fusion contexts. For several common fusion rules, our analysis helps to understand the performance behavior as a function of the number of collaborating agents and of the type of prior. The analysis is carried out using two divergences that are common in Bayesian inference, and the generality of the results allows very generic distributions to be analyzed. These theoretical results are corroborated through experiments on a variety of estimation and classification problems, including linear and nonlinear models and federated learning schemes.
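A minimal sketch of the shared-prior effect, under the illustrative assumption of scalar Gaussian likelihoods with known noise variance and a common Gaussian prior: naive product-of-posteriors fusion counts the shared prior once per agent, while a corrected rule divides out the K-1 redundant copies. All names and numbers below are illustrative, not the fusion rules analyzed in the paper.

```python
# Minimal sketch: K agents estimate a scalar mean with Gaussian likelihoods
# and the *same* Gaussian prior. Naive fusion (product of posteriors) counts
# the shared prior K times; the corrected rule removes the K-1 extra copies.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, K, n = 2.0, 1.0, 5, 20           # true mean, noise std, agents, samples/agent
prior_mean, prior_prec = 0.0, 1.0              # shared N(0, 1) prior

# Local posteriors in natural parameters (precision, precision * mean).
precs, pm = [], []
for _ in range(K):
    x = rng.normal(theta, sigma, size=n)
    precs.append(prior_prec + n / sigma**2)
    pm.append(prior_prec * prior_mean + x.sum() / sigma**2)

naive_prec = sum(precs)                        # prior implicitly used K times
naive_mean = sum(pm) / naive_prec

corr_prec = sum(precs) - (K - 1) * prior_prec  # divide out K-1 prior copies
corr_mean = (sum(pm) - (K - 1) * prior_prec * prior_mean) / corr_prec

print(f"naive fused mean {naive_mean:.3f}, corrected {corr_mean:.3f}, truth {theta}")
```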
We present an approach for analyzing message passing graph neural networks (MPNNs) based on an extension of graphon analysis to a so-called graphon-signal analysis. An MPNN is a function that takes a graph together with a signal on the graph (a graph-signal) and returns some value. Since the input space of MPNNs is non-Euclidean, i.e., graphs can be of any size and topology, properties such as generalization are less well understood for MPNNs than for Euclidean neural networks. We claim that one important missing ingredient in past work is a meaningful notion of graph-signal similarity that endows the space of inputs to MPNNs with a regular structure. We present such a similarity measure, called the graphon-signal cut distance, which makes the space of all graph-signals a dense subset of a compact metric space -- the graphon-signal space. Informally, two deterministic graph-signals are close in cut distance if they ``look like'' they were sampled from the same random graph-signal model. Hence, our cut distance is a natural notion of graph-signal similarity that allows comparing any pair of graph-signals of any size and topology. We prove that MPNNs are Lipschitz continuous functions over the graphon-signal metric space. We then give two applications of this result: 1) a generalization bound for MPNNs, and 2) the stability of MPNNs to subsampling of graph-signals. Our results apply to any sufficiently regular MPNN on any distribution of graph-signals, making the analysis rather universal.
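For readers unfamiliar with the object being analyzed, the sketch below shows a tiny message passing layer that accepts a graph of any size together with a node signal, illustrating that a single MPNN is one function defined on all graph-signals. The normalized mean aggregation, weight shapes, and readout are assumptions for illustration, not the specific MPNN class of the paper.

```python
# Minimal sketch of a message passing layer on a graph-signal of any size.
# Normalized mean aggregation and the readout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
W_msg = rng.normal(size=(3, 3))
W_upd = rng.normal(size=(3, 3))

def mpnn_layer(adj, signal):
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    messages = (adj @ (signal @ W_msg.T)) / deg      # normalized aggregation over neighbours
    return np.tanh(signal @ W_upd.T + messages)

def mpnn_readout(adj, signal):
    return mpnn_layer(adj, signal).mean(axis=0)      # one value per graph-signal

# The same function applies to graphs of different size and topology.
for n in (5, 50):
    adj = rng.binomial(1, 0.3, size=(n, n))
    adj = np.triu(adj, 1); adj = adj + adj.T
    sig = rng.normal(size=(n, 3))
    print(n, mpnn_readout(adj, sig).round(3))
```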
Spiking neural networks are a type of artificial neural network in which communication between neurons consists solely of events, also called spikes. This property allows such networks to perform asynchronous and sparse computations and therefore to drastically decrease energy consumption when run on specialized hardware. However, training such networks is known to be difficult, mainly due to the non-differentiability of the spike activation, which prevents the use of classical backpropagation. This is because state-of-the-art spiking neural networks are usually derived from biologically inspired neuron models, to which machine learning training methods are then applied. Current research on spiking neural networks therefore focuses on designing training algorithms whose goal is to obtain networks that compete with their non-spiking counterparts on specific tasks. In this paper, we take the converse approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based. This new RNN cell, called the Spiking Recurrent Cell, therefore communicates using events, i.e., spikes, while remaining completely differentiable. Vanilla backpropagation can thus be used to train any network made of such cells. We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark and its variants, Fashion-MNIST and Neuromorphic-MNIST. Moreover, we show that this new cell makes training deep spiking networks achievable.
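The sketch below illustrates only the general idea of an event-like yet fully differentiable recurrent cell: a standard recurrent update whose output is squashed by a sharp sigmoid so that activity is nearly binary while vanilla backpropagation still applies end to end. The cell name, sharpness parameter, and update equations are assumptions for illustration and do not reproduce the paper's Spiking Recurrent Cell.

```python
# Minimal sketch (not the paper's exact cell): a recurrent update whose
# output is pushed towards 0/1 ("spikes") by a sharp sigmoid, so the whole
# network stays differentiable and trainable with vanilla backpropagation.
import torch

class NearBinaryRecurrentCell(torch.nn.Module):   # hypothetical name
    def __init__(self, input_size, hidden_size, sharpness=10.0):
        super().__init__()
        self.wx = torch.nn.Linear(input_size, hidden_size)
        self.wh = torch.nn.Linear(hidden_size, hidden_size, bias=False)
        self.sharpness = sharpness

    def forward(self, x, h):
        pre = self.wx(x) + self.wh(h)
        # Sharp sigmoid: output is close to binary but fully differentiable.
        return torch.sigmoid(self.sharpness * pre)

cell = NearBinaryRecurrentCell(4, 8)
h = torch.zeros(1, 8)
for t in range(5):                                 # unroll over a short sequence
    h = cell(torch.randn(1, 4), h)
loss = h.sum()
loss.backward()                                    # gradients flow through all steps
```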
We address the joint problem of learning and scheduling in multi-hop wireless networks without prior knowledge of link rates. Previous scheduling algorithms require link rate information, while learning algorithms often require a centralized entity and have polynomial complexity. These requirements are a major obstacle to developing an efficient learning-based distributed scheme for resource allocation in large-scale multi-hop networks. In this work, by incorporating a learning algorithm, we develop a provably efficient scheduling scheme that operates under packet arrival dynamics without a priori link rate information. We extend the results to a distributed implementation and evaluate its performance through simulations.
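As a rough illustration of how learning and scheduling can be coupled, the sketch below combines UCB estimates of unknown link rates with a max-weight rule over a small set of conflict-free schedules, weighting links by their queue lengths. The topology, arrival process, and the specific UCB/max-weight combination are assumptions for illustration, not the scheme developed in this work.

```python
# Minimal sketch: schedule conflict-free link sets by max-weight over
# queue-length-weighted UCB estimates of unknown link service rates.
import numpy as np

rng = np.random.default_rng(2)
true_rates = np.array([0.9, 0.6, 0.8, 0.3])     # unknown to the scheduler
schedules = [[0, 2], [1, 3], [0, 3]]            # assumed conflict-free link sets
queues = np.zeros(4)
counts = np.ones(4)
rate_est = np.full(4, 0.5)                      # running rate estimates

for t in range(1, 2001):
    queues += rng.binomial(1, 0.2, size=4)      # packet arrivals on each link
    ucb = rate_est + np.sqrt(2 * np.log(t) / counts)
    weights = [np.sum(queues[s] * ucb[s]) for s in schedules]
    chosen = schedules[int(np.argmax(weights))]  # max-weight over optimistic rates
    for link in chosen:
        served = rng.binomial(1, true_rates[link])
        queues[link] = max(queues[link] - served, 0)
        counts[link] += 1
        rate_est[link] += (served - rate_est[link]) / counts[link]

print("average queue backlog:", queues.mean())
```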
Email is used every day for communication, and many countries and organisations rely on it for official correspondence. It is highly valued and trusted for confidential conversations and transactions in day-to-day business. The frequent use of this channel and the quality of the information it carries have attracted cyber attackers to it. Many techniques exist to mitigate attacks on email; however, these systems focus mostly on email content and behaviour rather than on securing access to mailboxes, composition, and settings. This work aims to protect users' email composition and settings, preventing attackers from using an account once it has been hacked or hijacked and stopping them from setting up forwarding from the victim's account to another account, which would otherwise cut the user off from receiving emails. A secure code is applied to the compose/send button to curtail insider impersonation attacks and to secure applications left open on public and private devices.
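A hypothetical sketch of the send-button gate described above: the send action is only authorized when a short secret code known to the account owner is re-entered, so a hijacked session that is already open cannot be used to send mail. The hashing and comparison choices are illustrative assumptions, not the system's actual design.

```python
# Hypothetical sketch: gate the send action behind a short owner-chosen code
# so an open but hijacked session cannot send on the owner's behalf.
import hashlib
import hmac

stored_hash = hashlib.sha256(b"owner-chosen-code").hexdigest()   # set at enrolment

def authorize_send(entered_code: str) -> bool:
    entered_hash = hashlib.sha256(entered_code.encode()).hexdigest()
    return hmac.compare_digest(entered_hash, stored_hash)        # constant-time compare

print(authorize_send("owner-chosen-code"))   # True: send allowed
print(authorize_send("guess"))               # False: send blocked
```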
We discuss probabilistic neural networks with a fixed internal representation as models for machine understanding. Here, understanding is intended as mapping data to an already existing representation that encodes an {\em a priori} organisation of the feature space. We derive the internal representation by requiring that it satisfy the principles of maximal relevance and of maximal ignorance about how different features are combined. We show that, when the hidden units are binary variables, these two principles identify a unique model -- the Hierarchical Feature Model (HFM) -- which is fully solvable and admits a natural interpretation in terms of features. We argue that learning machines with this architecture enjoy a number of interesting properties, such as the continuity of the representation with respect to changes in parameters and data, the possibility of controlling the level of compression, and the ability to support functions that go beyond generalisation. We explore the behaviour of the model with extensive numerical experiments and argue that models in which the internal representation is fixed reproduce a learning modality that is qualitatively different from that of traditional models such as Restricted Boltzmann Machines.
A semi-implicit in time, entropy stable finite volume scheme for the compressible barotropic Euler system is designed and analyzed, and its weak convergence to a dissipative measure-valued (DMV) solution [E. Feireisl et al., Dissipative measure-valued solutions to the compressible Navier-Stokes system, Calc. Var. Partial Differential Equations, 2016] of the Euler system is shown. Entropy stability is achieved by introducing a shifted velocity in the convective fluxes of the mass and momentum balances, provided a CFL-like condition is satisfied. A consistency analysis is performed in the spirit of Lax's equivalence theorem under some physically reasonable boundedness assumptions. The concept of K-convergence [E. Feireisl et al., K-convergence as a new tool in numerical analysis, IMA J. Numer. Anal., 2020] is used to obtain some strong convergence results, which are then illustrated via rigorous numerical case studies. The convergence of the scheme to a DMV solution, a weak solution, and a strong solution of the Euler system, using the weak-strong uniqueness principle and relative entropy, is also presented.
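For orientation only, the heavily simplified sketch below shows the structure of a finite volume update for the 1D barotropic Euler system, using an explicit step with Rusanov (local Lax-Friedrichs) fluxes and pressure p(rho) = rho^gamma. It is not the semi-implicit, entropy stable scheme with shifted velocity analyzed here; it merely illustrates what a conservative flux-difference update looks like.

```python
# Much-simplified sketch: explicit first-order finite volume step for the
# 1D barotropic Euler equations with Rusanov fluxes and periodic boundaries.
import numpy as np

gamma, N, cfl = 1.4, 200, 0.4
x = np.linspace(0.0, 1.0, N, endpoint=False) + 0.5 / N
dx = 1.0 / N
rho = np.where(x < 0.5, 1.0, 0.125)              # Sod-like density jump
m = np.zeros(N)                                  # momentum rho * u

def flux(rho, m):
    u = m / rho
    p = rho**gamma                               # barotropic pressure law
    return np.array([m, m * u + p]), np.abs(u) + np.sqrt(gamma * rho**(gamma - 1))

t, T = 0.0, 0.1
while t < T:
    F, c = flux(rho, m)
    dt = cfl * dx / c.max()                      # CFL-type time step restriction
    U = np.array([rho, m])
    Ur = np.roll(U, -1, axis=1)                  # right neighbour (periodic)
    Fr, cr = flux(Ur[0], Ur[1])
    a = np.maximum(c, cr)                        # local wave speed at i+1/2
    Fhat = 0.5 * (F + Fr) - 0.5 * a * (Ur - U)   # Rusanov numerical flux
    U = U - dt / dx * (Fhat - np.roll(Fhat, 1, axis=1))
    rho, m, t = U[0], U[1], t + dt

print("density range:", rho.min(), rho.max())
```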
A growing number of scholars and data scientists are conducting randomized experiments to analyze causal relationships in network settings where units influence one another. A dominant methodology for analyzing such network experiments has been design-based, leveraging the randomization of treatment assignment as the basis for inference. In this paper, we generalize this design-based approach so that it can be applied to more complex experiments with a variety of causal estimands and different target populations. An important special case of such generalized network experiments is the bipartite network experiment, in which treatment assignment is randomized among one set of units and the outcome is measured for a separate set of units. We propose a broad class of causal estimands based on stochastic interventions for generalized network experiments. Using a design-based approach, we show how to estimate the proposed causal quantities without bias and develop conservative variance estimators. We apply our methodology to a randomized experiment in education where a group of selected middle school students is eligible for an anti-conflict promotion program and program participation is randomized within this group. In particular, our analysis estimates the causal effects of treating each student or his/her close friends, for different target populations in the network. We find that while the treatment raises overall anti-conflict awareness among students, it does not significantly reduce the total number of conflicts.
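The sketch below gives a minimal example of design-based estimation with a network-defined exposure: under Bernoulli(p) assignment, the probability that a unit itself or at least one of its friends is treated is known exactly, so a Horvitz-Thompson estimator can reweight the realized exposures. The exposure mapping, graph model, and outcomes are synthetic and illustrative, not the estimands or estimators proposed in the paper.

```python
# Minimal sketch: Horvitz-Thompson estimate of the mean outcome under the
# exposure "unit or at least one friend is treated", Bernoulli(p) design.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 0.5
adj = rng.binomial(1, 0.03, size=(n, n))
adj = np.triu(adj, 1); adj = adj + adj.T                    # undirected friendship graph
z = rng.binomial(1, p, size=n)                              # randomized treatment

exposed = (z == 1) | (adj @ z > 0)                          # realized exposure
deg = adj.sum(axis=1)
prob_exposed = 1 - (1 - p) ** (1 + deg)                     # exact under the Bernoulli design

y = 1.0 + 0.5 * exposed + rng.normal(0, 1, size=n)          # synthetic outcomes

# Horvitz-Thompson estimator over the whole target population.
ht = np.sum(y * exposed / prob_exposed) / n
print(f"HT estimate: {ht:.3f}")
```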
Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We also compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
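A minimal sketch of the coarse-to-fine cascade at inference time, assuming placeholder functions for the two trained FCNs: the first stage produces a coarse candidate mask, a bounding box with a safety margin is cropped from the volume, and the second stage segments only that crop before the result is pasted back. The function names and the margin are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of two-stage, coarse-to-fine inference. The two model
# functions are placeholders standing in for trained 3D FCNs.
import numpy as np

def coarse_fcn(volume):        # placeholder: would return a coarse candidate mask
    return volume > volume.mean()

def fine_fcn(cropped_volume):  # placeholder: would return a detailed segmentation
    return cropped_volume > np.percentile(cropped_volume, 90)

def cascade_predict(volume, margin=8):
    candidate = coarse_fcn(volume)
    idx = np.argwhere(candidate)
    lo = np.maximum(idx.min(axis=0) - margin, 0)              # bounding box + safety margin
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = fine_fcn(crop)                                     # second stage sees only the crop
    full = np.zeros(volume.shape, dtype=bool)
    full[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return full

ct = np.random.rand(64, 64, 64)
mask = cascade_predict(ct)
print("predicted foreground voxels:", mask.sum())
```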