In the classic wiretap model, Alice wishes to communicate reliably with Bob without being overheard by Eve, who eavesdrops over a degraded channel. Systems that achieve such physical-layer security often rely on an error correction code whose rate is below the Shannon capacity of Alice and Bob's channel, so that Bob can reliably decode, but above that of Alice and Eve's channel, so that Eve cannot. For the finite blocklength regime, several metrics have been proposed to characterise information leakage. Here we assess a new metric, the success exponent, and demonstrate that it can be operationalised through Guessing Random Additive Noise Decoding (GRAND) to compromise the physical-layer security of any moderate-length code. Success exponents are the natural beyond-capacity analogue of error exponents: they characterise the probability that a maximum-likelihood decoding is correct when the code rate is above Shannon capacity, a probability that decays exponentially in the code length. Success exponents can be used to approximately evaluate the frequency with which Eve's decoding is correct in beyond-capacity channel conditions. Moreover, through GRAND, we demonstrate that Eve can constrain her decoding procedure so that, when she does identify a decoding, it is correct with high probability, significantly compromising Alice and Bob's communication by truthfully revealing a proportion of it. We provide general mathematical expressions for the determination of success exponents as well as for the evaluation of Eve's query-number threshold, using the binary symmetric channel as a worked example. As GRAND algorithms are codebook-agnostic and can decode any code structure, we provide empirical results for Random Linear Codes as exemplars, since they achieve secrecy capacity. Simulation results demonstrate the practical possibility of compromising physical-layer security.
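To make the attack concrete, here is a minimal sketch of a GRAND-style decoder with an abandonment rule, assuming a binary linear code given by a parity-check matrix H used over a BSC; the function name grand_decode, the query budget max_queries, and the plain weight-ordered guessing are illustrative choices, not the paper's exact procedure:

```python
import itertools
import numpy as np

def grand_decode(y, H, max_queries):
    """Guess error patterns in order of increasing Hamming weight (the
    maximum-likelihood order for a BSC with crossover probability < 1/2).
    Return the first codeword found, or None if the query budget is
    exhausted (the eavesdropper's abandonment rule)."""
    n = H.shape[1]
    queries = 0
    for w in range(n + 1):                      # noise weight 0, 1, 2, ...
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            queries += 1
            if queries > max_queries:
                return None, queries            # abandon: no confident decoding
            c = (y ^ e)                         # candidate codeword
            if not (H @ c % 2).any():           # all parity checks satisfied?
                return c, queries
    return None, queries
```

Because noise sequences are queried in decreasing order of likelihood, capping the number of queries means Eve only returns a decoding when the putative noise is itself likely; this is the mechanism that lets her trade the fraction of intercepted messages she decodes for confidence in the ones she does recover.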
Efficient structural reanalysis for high-rank modification plays an important role in engineering computations that require repeated evaluations of structural responses, such as structural optimization and probabilistic analysis. To improve the efficiency of such computations, a novel approximate static reanalysis method based on system reduction and iterative solution is proposed for statically indeterminate structures with high-rank modification. In this approach, a statically indeterminate structure is divided into a basis system and additional components. Subsequently, the structural equilibrium equations are rewritten as an equation system involving the stiffness matrix of the basis system and pseudo forces derived from the additional elements. With the introduction of spectral decomposition, a reduced equation system with the element forces of the additional elements as the unknowns is established. The approximate solutions of the modified structure can then be obtained by solving the reduced equation system with a preconditioned iterative solution algorithm. The computational costs of the proposed method and two other reanalysis methods are compared, and numerical examples including static reanalysis and static nonlinear analysis are presented. The results demonstrate that the proposed method has excellent computational performance both for structures with homogeneous material and for structures composed of functionally graded beams. Moreover, the superiority of the proposed method indicates that the combination of system reduction and preconditioned iterative solution techniques is an effective way to develop high-performance reanalysis methods.
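As a rough illustration of this pipeline (not the paper's exact formulation, which builds the reduced system and its preconditioner from a spectral decomposition), the sketch below assumes the modification can be written as K = K0 + A S A^T, with K0 the sparse stiffness matrix of the basis system, A the connectivity/influence matrix of the additional elements, and S their element stiffness; plain conjugate gradients stand in for the preconditioned iteration, and the reduced unknowns q are the additional element forces:

```python
import numpy as np
from scipy.sparse.linalg import splu, cg, LinearOperator

def reanalyze(K0, A, S, f):
    """Approximate the response of a modified structure K0 + A @ S @ A.T
    (basis system K0 plus additional elements with stiffness S and
    connectivity A) by solving a reduced system whose unknowns are the
    additional element forces q, then recovering u = K0^{-1} (f - A q)."""
    lu = splu(K0.tocsc())                     # factorize the basis system once
    Sinv = np.linalg.inv(S)
    m = A.shape[1]

    def reduced_matvec(q):                    # (S^-1 + A^T K0^-1 A) q
        return Sinv @ q + A.T @ lu.solve(A @ q)

    M = LinearOperator((m, m), matvec=reduced_matvec)
    rhs = A.T @ lu.solve(f)
    q, info = cg(M, rhs)                      # forces in the additional elements
    u = lu.solve(f - A @ q)                   # displacements of the modified structure
    return u, q
```

The basis system is factorized once, each Krylov iteration only requires back-substitutions with that factor, and the reduced system has only as many unknowns as there are additional element forces, which is what makes this style of reanalysis attractive for repeated evaluations.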
Prior work has successfully incorporated optimization layers as the last layer in neural networks for various problems, thereby allowing joint learning and planning in one neural network forward pass. In this work, we identify a weakness in such a setup: certain inputs to the optimization layer lead to undefined output of the neural network. Such undefined decision outputs can lead to possibly catastrophic outcomes in critical real-time applications. We show that an adversary can cause such failures by forcing rank deficiency on the matrix fed to the optimization layer, which causes the optimization to fail to produce a solution. We provide a defense for these failure cases by controlling the condition number of the input matrix. We study the problem in the settings of synthetic data, Jigsaw Sudoku, and speed planning for autonomous driving, building on top of prior frameworks in end-to-end learning and optimization. We show that our proposed defense effectively prevents the framework from failing with undefined output. Finally, we surface a number of edge cases that lead to serious bugs in popular equation and optimization solvers and that can be abused as well.
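One simple way to realize a condition-number defense (a hedged sketch, not necessarily the paper's exact mechanism) is to measure the eigenvalue spread of the symmetric matrix feeding the optimization layer and add the smallest ridge term that keeps its condition number below a chosen bound, so a rank-deficient input can no longer make the downstream solver fail:

```python
import torch

def condition_guard(Q, max_cond=1e6):
    """Regularize a (possibly rank-deficient) symmetric matrix before it is
    passed to an optimization layer, so the downstream solver always sees a
    well-conditioned problem.  Adds the smallest ridge lambda*I that brings
    the condition number at or below `max_cond` (illustrative defense)."""
    Q = 0.5 * (Q + Q.transpose(-1, -2))            # symmetrize
    eigvals = torch.linalg.eigvalsh(Q)             # ascending eigenvalues
    lam_min, lam_max = eigvals[..., 0], eigvals[..., -1]
    # ridge r such that (lam_max + r) / (lam_min + r) <= max_cond
    ridge = torch.clamp((lam_max - max_cond * lam_min) / (max_cond - 1), min=0.0)
    eye = torch.eye(Q.shape[-1], dtype=Q.dtype, device=Q.device)
    return Q + ridge[..., None, None] * eye
```

Because the guard is built from standard differentiable operations, it can sit in the forward pass directly in front of a differentiable optimization layer without blocking gradient flow.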
Low-capacity scenarios have become increasingly important in Internet of Things (IoT) technology and next-generation wireless networks. Such scenarios require efficient and reliable transmission over channels with an extremely small capacity. Within these constraints, state-of-the-art coding techniques may not be directly applicable. Moreover, prior work on the finite-length analysis of optimal channel coding provides inaccurate predictions of the limits in the low-capacity regime. In this paper, we study channel coding at low capacity from two perspectives: fundamental limits at finite length and code constructions. We first specify what a low-capacity regime means. We then characterize the finite-length fundamental limits of channel coding in the low-capacity regime for various types of channels, including binary erasure channels (BECs), binary symmetric channels (BSCs), and additive white Gaussian noise (AWGN) channels. From the code construction perspective, we characterize the optimal number of repetitions for transmission over binary memoryless symmetric (BMS) channels, in terms of the code blocklength and the underlying channel capacity, such that the capacity loss due to repetition is negligible. Furthermore, it is shown that capacity-achieving polar codes naturally incorporate the aforementioned optimal number of repetitions.
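For context, the standard finite-length benchmark that becomes unreliable at low capacity is the normal approximation of Polyanskiy, Poor, and Verdú; a minimal sketch for a BSC(p) is below (the function name and the inclusion of the 0.5*log2(n) correction term are our own choices):

```python
import numpy as np
from scipy.stats import norm

def bsc_normal_approx(n, p, eps):
    """Normal-approximation estimate of the maximal number of information
    bits log2 M*(n, eps) for a BSC(p) at blocklength n and block error
    probability eps:  n*C - sqrt(n*V) * Qinv(eps) + 0.5*log2(n)."""
    C = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)      # channel capacity
    V = p * (1 - p) * (np.log2((1 - p) / p)) ** 2          # channel dispersion
    return n * C - np.sqrt(n * V) * norm.isf(eps) + 0.5 * np.log2(n)
```

For example, bsc_normal_approx(n=1000, p=0.4, eps=1e-3) estimates how many information bits are supportable at that blocklength for a channel whose capacity 1 - h(0.4) is only about 0.03 bits; the paper's argument is that such predictions lose accuracy precisely in this regime.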
In backscatter communication (BC), a passive tag transmits information simply by influencing an external electromagnetic field through load modulation: the feed current of the excited tag antenna is modulated by adapting the passive termination load. This paper studies the achievable information rates with a freely adaptable passive load. As a prerequisite, we unify monostatic, bistatic, and ambient BC with circuit-based system modeling. We present the crucial insight that the channel capacity is described by existing results on peak-power-limited quadrature Gaussian channels, because the steady-state tag current phasor lies on a disk. Consequently, we derive the channel capacity for the case of an unmodulated external field, for general passive, purely reactive, or purely resistive tag loads. We find that modulating both resistance and reactance is important for achieving very high rates. We discuss the capacity-achieving load statistics, the rate asymptotics, and the capacity of ambient BC in important special cases. We then propose a capacity-approaching finite constellation design: a tailored amplitude-and-phase-shift keying on the reflection coefficient. Furthermore, we demonstrate high rates for simple loads consisting of just a few switched resistors and capacitors. Finally, we investigate the rate loss from a value-range-constrained load, which is found to be small for moderate constraints.
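To show what amplitude-and-phase-shift keying on the reflection coefficient can look like in practice, here is a hedged sketch; the ring layout, the antenna impedance Za, and the power-wave convention Gamma = (Z_L - conj(Za)) / (Z_L + Za) are illustrative assumptions, not the paper's specific design:

```python
import numpy as np

def apsk_reflection_constellation(points_per_ring, Za=50 + 10j):
    """Amplitude-and-phase-shift keying on the reflection coefficient:
    place points on concentric rings inside the unit disk |Gamma| <= 1 and
    map each one to the passive termination load Z_L that realizes it, using
    the power-wave convention Gamma = (Z_L - conj(Za)) / (Z_L + Za)."""
    rings = len(points_per_ring)
    gammas = []
    for k, m in enumerate(points_per_ring, start=1):
        radius = k / rings                               # inner to outer ring
        # rotate by half a phase step so no point lands exactly on Gamma = 1
        phases = 2 * np.pi * (np.arange(m) + 0.5) / m
        gammas.extend(radius * np.exp(1j * phases))
    gammas = np.array(gammas)
    loads = (np.conj(Za) + gammas * Za) / (1 - gammas)   # invert the Gamma map
    return gammas, loads
```

For instance, apsk_reflection_constellation([4, 8]) yields a 12-point constellation inside the unit disk together with the passive load impedances that realize it; any point with |Gamma| < 1 corresponds to a lossy load that absorbs part of the incident energy.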
Procedural content generation (PCG) is a growing field, with numerous applications in the video game industry and great potential to help create better games at a fraction of the cost of manual creation. However, much of the work in PCG is focused on generating relatively straightforward levels in simple games, as it is challenging to design an optimisable objective function for complex settings. This limits the applicability of PCG to more complex and modern titles, hindering its adoption in industry. Our work aims to address this limitation by introducing a compositional level generation method that recursively composes simple, low-level generators to construct large and complex creations. This approach allows for easily optimisable objectives and the ability to design a complex structure in an interpretable way by referencing lower-level components. We empirically demonstrate that our method outperforms a non-compositional baseline by more accurately satisfying a designer's functional requirements in several tasks. Finally, we provide a qualitative showcase (in Minecraft) illustrating the large and complex, but still coherent, structures generated using simple base generators.
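As a toy illustration of recursive composition (our own sketch, not the paper's implementation, whose generators target Minecraft structures and are optimized against functional objectives), a composite generator simply delegates sub-regions of its area to child generators, which may themselves be composites:

```python
import numpy as np

class FloorGenerator:
    """Low-level generator: fill a region with a floor material."""
    def __init__(self, material=1):
        self.material = material
    def generate(self, region):
        region[:] = self.material

class WallBorderGenerator:
    """Low-level generator: draw walls around the border of a region."""
    def __init__(self, material=2):
        self.material = material
    def generate(self, region):
        region[0, :] = region[-1, :] = self.material
        region[:, 0] = region[:, -1] = self.material

class CompositeGenerator:
    """Recursively compose sub-generators: each child is applied to a
    sub-rectangle of the parent's region, so complex structures are built
    from simple, independently optimizable parts."""
    def __init__(self, children):
        self.children = children      # list of (row_slice, col_slice, generator)
    def generate(self, region):
        for rows, cols, child in self.children:
            child.generate(region[rows, cols])

# Example: a "house" composed of a floor, an outer wall, and an inner room.
level = np.zeros((16, 16), dtype=int)
house = CompositeGenerator([
    (slice(None), slice(None), FloorGenerator()),
    (slice(None), slice(None), WallBorderGenerator()),
    (slice(2, 8), slice(2, 8), WallBorderGenerator(material=3)),
])
house.generate(level)
```

Because each child only ever sees its own sub-region, the low-level generators stay simple and independently testable, while the composition tree carries the interpretable, designer-facing structure.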
This paper explores the use of reconfigurable intelligent surfaces (RIS) to mitigate cross-system interference in spectrum sharing and secure wireless applications. Unlike a conventional RIS, which can only adjust the phase of the incoming signal and essentially reflects all impinging energy, or an active RIS, which also amplifies the reflected signal at the cost of significantly higher complexity, noise, and power consumption, an absorptive RIS (ARIS) is considered. An ARIS can in principle modify both the phase and the modulus of the impinging signal by absorbing a portion of the signal energy, providing a compromise between its conventional and active counterparts in terms of complexity, power consumption, and degrees of freedom (DoFs). We first use a toy example to illustrate the benefit of an ARIS, and then consider three applications: (1) spectral coexistence of radar and communication systems, where a convex optimization problem is formulated to minimize the Frobenius norm of the channel matrix from the communication base station to the radar receiver; (2) spectrum sharing in device-to-device (D2D) communications, where a max-min scheme that maximizes the worst-case signal-to-interference-plus-noise ratio (SINR) among the D2D links is developed and solved via fractional programming; (3) the physical-layer security of a downlink communication system, where the secrecy rate is maximized and the resulting nonconvex problem is solved by a fractional programming algorithm together with a sequential convex relaxation procedure. Numerical results are then presented to show the significant benefit of the ARIS in these applications.
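A hedged sketch of application (1) is below, using cvxpy; the matrix names (H_direct for the direct base-station-to-radar channel, F and G for the links to and from the surface) and the single bound |phi_i| <= rho_max on the absorptive elements are our own modeling assumptions:

```python
import numpy as np
import cvxpy as cp

def aris_interference_min(H_direct, G, F, rho_max=1.0):
    """Choose the ARIS reflection coefficients phi (|phi_i| <= rho_max, so
    the surface may absorb part of the impinging energy) to minimize the
    Frobenius norm of the effective channel from the communication base
    station to the radar receiver:  H_eff = H_direct + G @ diag(phi) @ F.
    The problem is convex in phi and solved directly by cvxpy."""
    n = G.shape[1]
    phi = cp.Variable(n, complex=True)
    H_eff = H_direct + G @ cp.diag(phi) @ F
    prob = cp.Problem(cp.Minimize(cp.norm(H_eff, "fro")),
                      [cp.abs(phi) <= rho_max])
    prob.solve()
    return phi.value, prob.value
```

With rho_max = 1 the feasible set is the convex hull of the phase-only responses of a conventional RIS, so the achieved interference level can only be as low or lower; the absorption simply opens up the interior of the disk as extra degrees of freedom.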
Autonomous vehicles are expected to operate safely in real-life road conditions in the coming years. Nevertheless, unanticipated events, such as the presence of unexpected objects on the road, can put safety at risk. Advances in sensing and communication technologies and the Internet of Things may facilitate the recognition of hazardous situations and information exchange in a cooperative driving scheme, providing new opportunities to increase collaborative situational awareness. Safe and unobtrusive visualization of the obtained information may nowadays be enabled through the adoption of novel Augmented Reality (AR) interfaces in the form of windshield displays. Motivated by these technological opportunities, we propose in this work a saliency-based, distributed, cooperative obstacle detection and rendering scheme for increasing the driver's situational awareness through (i) automated obstacle detection, (ii) AR visualization, and (iii) sharing of information about upcoming potential dangers with other connected vehicles or road infrastructure. An extensive evaluation study using a variety of real datasets for pothole detection showed that the proposed method provides favorable results and features compared to other recent and relevant approaches.
It is common in modern prediction problems for many predictor variables to be counts of rarely occurring events. This leads to design matrices in which many columns are highly sparse. The challenge posed by such "rare features" has received little attention despite its prevalence in diverse areas, ranging from natural language processing (e.g., rare words) to biology (e.g., rare species). We show, both theoretically and empirically, that not explicitly accounting for the rareness of features can greatly reduce the effectiveness of an analysis. We next propose a framework for aggregating rare features into denser features in a flexible manner that creates better predictors of the response. Our strategy leverages side information in the form of a tree that encodes feature similarity. We apply our method to data from TripAdvisor, in which we predict the numerical rating of a hotel based on the text of the associated review. Our method achieves high accuracy by making effective use of rare words; by contrast, the lasso is unable to identify highly predictive words if they are too rare. A companion R package, called rare, implements our new estimator, using the alternating direction method of multipliers.
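The core data operation this framework relies on can be pictured with a small sketch (the paper learns which tree nodes to merge via a tree-based penalty fit with ADMM; here the chosen nodes are simply given, and the names aggregate_features and tree are ours):

```python
import numpy as np

def aggregate_features(X, tree, groups):
    """Aggregate rare count features guided by a feature-similarity tree.
    `tree` maps an internal node to the list of leaf column indices below it;
    `groups` is the set of nodes chosen to act as merged features (in the
    paper this choice is learned; here it is supplied).  Returns a denser
    design matrix whose columns are the per-group sums."""
    cols = [X[:, tree[g]].sum(axis=1) for g in groups]
    return np.column_stack(cols)

# Illustrative use: merge three rare word columns that share an ancestor.
X = np.array([[0, 1, 0, 2],
              [0, 0, 1, 0],
              [1, 0, 0, 3]])
tree = {"synonym_group": [0, 1, 2], "common_word": [3]}
X_agg = aggregate_features(X, tree, ["synonym_group", "common_word"])
```

Summing the columns of leaves that share an ancestor pools the signal of rare but related words into a denser feature, which is what lets the downstream linear model exploit words the lasso would otherwise discard.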
In this paper, wireless hierarchical federated learning (HFL) is revisited by considering physical layer security (PLS). First, we establish a framework for this new problem. Then, we propose a practical finite blocklength (FBL) coding scheme for wireless HFL in the presence of PLS, which is self-secure when the coding blocklength is larger than a certain threshold. Finally, the results of this paper are further illustrated via numerical examples and simulations.
Classical results establish that ensembles of small models benefit when predictive diversity is encouraged, through techniques such as bagging and boosting. Here we demonstrate that this intuition does not carry over to ensembles of deep neural networks used for classification, and that in fact the opposite can be true. Unlike regression models or small (unconfident) classifiers, predictions from large (confident) neural networks concentrate near the vertices of the probability simplex. Thus, decorrelating these predictions necessarily moves the ensemble prediction away from the vertices, harming confidence and moving points across decision boundaries. Through large-scale experiments, we demonstrate that diversity-encouraging regularizers hurt the performance of high-capacity deep ensembles used for classification. Even more surprisingly, discouraging predictive diversity can be beneficial. Together, these results strongly suggest that the best strategy for deep ensembles is to use more accurate, but likely less diverse, component models.
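A toy numerical illustration of the geometric argument (our own example, with made-up probability vectors):

```python
import numpy as np

# Two confident members that agree: the average stays near a simplex vertex.
agree = np.array([[0.98, 0.01, 0.01],
                  [0.95, 0.03, 0.02]])
print(agree.mean(axis=0))       # [0.965, 0.02, 0.015]: still confident, same argmax

# Equally confident but "diverse" members that disagree: the average is pulled
# toward the middle of the simplex, losing confidence, and a third member can
# now flip the ensemble decision across the boundary.
diverse = np.array([[0.98, 0.01, 0.01],
                    [0.01, 0.98, 0.01],
                    [0.10, 0.60, 0.30]])
print(diverse.mean(axis=0))     # ~[0.363, 0.530, 0.107]: argmax moved to class 1
```

Averaging confident members that agree keeps the ensemble at the vertex, while averaging equally confident but diverse members drags it toward the centre of the simplex, where a single additional member can change the predicted class.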