With the recent developments in engineering quantum systems, the realization of scalable local-area quantum networks has become viable. However, the design and implementation of a quantum network is a holistic task that extends well beyond an abstract design problem. As such, a testbed on which multiple disciplines can verify designs and implementations across a full networking stack has become necessary infrastructure for the future development of quantum networks. In this work, we demonstrate the concept of a quantum internet service provider (QISP), in analogy to the conventional ISP that enables the sharing of classical information between network nodes. The QISP is significant for next-generation quantum networks as it coordinates the production, management, control, and sharing of quantum information across the end users of a quantum network. We construct a reconfigurable QISP comprising both quantum hardware and classical control software. Building on the fiber-based quantum-network testbed of the Center for Quantum Networks (CQN) at the University of Arizona (UA), we develop an integrated QISP prototype based on a Platform-as-a-Service (PaaS) architecture, whose classical control software is abstracted and modularized as an open-source QISP framework. To verify and characterize the QISP's performance, we demonstrate multi-channel entanglement distribution and routing among multiple quantum-network nodes with a time-energy entangled-photon source. We further perform field tests of concurrent services for multiple users across the quantum-network testbed. Our experiments demonstrate the robust capabilities of the QISP, laying the foundation for the design and verification of architectures and protocols for future quantum networks.
Origin-destination (OD) flow modeling is an extensively researched subject across multiple disciplines, such as the investigation of travel demand in transportation and spatial interaction modeling in geography. However, researchers from different fields tend to employ their own research paradigms and lack interdisciplinary communication, preventing the cross-fertilization of knowledge and the development of novel solutions to shared challenges. This article presents a systematic interdisciplinary survey that comprehensively and holistically scrutinizes OD flows, from applying fundamental theory to study the mechanisms of population mobility to solving practical problems with engineering techniques such as computational models. Specifically, regional economics, urban geography, and sociophysics are adept at employing theoretical research methods to explore the underlying mechanisms of OD flows. They have developed three influential theoretical models: the gravity model, the intervening opportunities model, and the radiation model, which examine the fundamental influences of distance, opportunities, and population on OD flows, respectively. Meanwhile, fields such as transportation, urban planning, and computer science primarily focus on addressing four practical problems: OD prediction, OD construction, OD estimation, and OD forecasting. Advanced computational models, such as deep learning models, have gradually been introduced to address these problems more effectively. Finally, based on the existing research, this survey summarizes current challenges and outlines future directions for this topic. Through this survey, we aim to break down the barriers between disciplines in OD flow-related research, fostering interdisciplinary perspectives and modes of thinking.
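As a concrete illustration of the first of these theoretical models, the classical unconstrained gravity model estimates the flow between zones i and j as proportional to the product of their populations and inversely proportional to a power of their distance. The sketch below is a minimal toy, not taken from any surveyed system; the zone populations, distances, and the parameters `k` and `beta` are arbitrary assumptions for illustration.

```python
import numpy as np

def gravity_flows(pop, dist, k=1.0, beta=2.0):
    """Unconstrained gravity model: T[i, j] = k * P_i * P_j / d_ij**beta.

    The diagonal (intra-zonal flow) is set to zero.
    """
    P = np.asarray(pop, dtype=float)
    D = np.asarray(dist, dtype=float)
    with np.errstate(divide="ignore"):       # zero distance on the diagonal
        T = k * np.outer(P, P) / D**beta
    np.fill_diagonal(T, 0.0)
    return T

pop = [1000, 4000, 2000]                     # toy zone populations
dist = np.array([[0, 10, 20],
                 [10, 0, 5],
                 [20, 5, 0]])                # toy inter-zone distances
T = gravity_flows(pop, dist)
```

With `beta = 2`, doubling the distance between two zones quarters the predicted flow, which is the distance-decay effect the model is built around; constrained variants additionally force row or column sums to match observed trip totals.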
This work studies the net sum-rate performance of a distributed reconfigurable intelligent surfaces (RISs)-assisted multi-user multiple-input single-output (MISO) downlink communication system, under imperfect instantaneous channel state information (I-CSI) used to implement precoding at the base station (BS) and statistical CSI (S-CSI) used to design the phase shifts of the RISs. Two channel estimation (CE) protocols are considered for I-CSI acquisition: (i) a full CE protocol that estimates all direct and RISs-assisted channels over multiple training sub-phases, and (ii) a low-overhead direct estimation (DE) protocol that estimates the end-to-end channel in a single sub-phase. We derive the deterministic equivalents of the signal-to-interference-plus-noise ratio (SINR) and the ergodic net sum-rate under Rayleigh and Rician fading for both CE protocols, for given RISs phase shifts, which are then optimized based on S-CSI. Simulation results reveal that the low-complexity DE protocol yields a better net sum-rate than the full CE protocol when used to obtain CSI for precoding. A benchmark full-I-CSI-based RISs design is also outlined and shown to yield higher SINR but lower net sum-rate than the S-CSI-based RISs design, due to the large overhead associated with full I-CSI acquisition. Therefore, the proposed DE-S-CSI-based design for precoding and reflect beamforming achieves a high net sum-rate with low complexity, overhead, and power consumption.
Recent research has proposed approaches that modify speech to defend against gender inference attacks. The goal of these protection algorithms is to control the availability of information about a speaker's gender, a privacy-sensitive attribute. Currently, the common practice for developing and testing gender protection algorithms is "neural-on-neural", i.e., perturbations are generated and tested with a neural network. In this paper, we propose to go beyond this practice to strengthen the study of gender protection. First, we demonstrate the importance of testing gender inference attacks that are based on speech features historically developed by speech scientists, alongside the conventionally used neural classifiers. Next, we argue that researchers should use speech features to gain insight into how protective modifications change the speech signal. Finally, we point out that gender-protection algorithms should be compared with novel "vocal adversaries", human-executed voice adaptations, in order to improve interpretability and enable before-the-mic protection.
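A fundamental-frequency (F0) estimate is one such classical speech feature: average F0 differs markedly between typical adult male and female voices, so even a crude autocorrelation-based estimator carries gender information that a protection algorithm must account for. The sketch below is a hypothetical toy on synthetic tones, not an attack from the paper; the sampling rate, search band, and the 120 Hz / 210 Hz test frequencies are illustrative assumptions.

```python
import numpy as np

def estimate_f0(signal, sr, f_min=60.0, f_max=400.0):
    """Crude autocorrelation-based F0 estimate within [f_min, f_max] Hz."""
    sig = signal - signal.mean()
    # One-sided autocorrelation (lag 0 .. N-1)
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / f_max), int(sr / f_min)    # lag range for the F0 band
    lag = lo + np.argmax(ac[lo:hi])              # lag of the strongest period
    return sr / lag

sr = 16000
t = np.arange(sr // 2) / sr                      # half a second of "speech"
low_voice = np.sin(2 * np.pi * 120 * t)          # 120 Hz: typical male F0 range
high_voice = np.sin(2 * np.pi * 210 * t)         # 210 Hz: typical female F0 range
```

A feature-based attacker could threshold such an estimate directly; real systems would use richer features (formants, MFCCs) and voiced-frame selection, but the point stands that interpretable features make it visible *what* a protective modification changed.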
Quantum computing has emerged as a promising field with the potential to revolutionize various domains by harnessing the principles of quantum mechanics. As quantum hardware and algorithms continue to advance, the development of high-quality quantum software has become crucial. However, testing quantum programs poses unique challenges due to the distinctive characteristics of quantum systems and the complexity of multi-subroutine programs. In this paper, we address the specific testing requirements of multi-subroutine quantum programs. We begin by investigating critical properties through a survey of existing quantum libraries, providing insights into the challenges associated with testing these programs. Building upon this understanding, we present a systematic testing process tailored to the intricacies of quantum programming. The process covers unit testing and integration testing, with a focus on aspects such as IO analysis, quantum relation checking, structural testing, behavior testing, and test case generation. We also introduce novel testing principles and criteria to guide the testing process. To evaluate our proposed approach, we conduct comprehensive testing on typical quantum subroutines, including diverse mutations and randomized inputs. The analysis of failures provides valuable insights into the effectiveness of our testing methodology. Additionally, we present case studies on representative multi-subroutine quantum programs, demonstrating the practical application and effectiveness of our proposed testing processes, principles, and criteria.
This paper explores the use of reconfigurable intelligent surfaces (RIS) in mitigating cross-system interference in spectrum sharing and secure wireless applications. Unlike conventional RIS that can only adjust the phase of the incoming signal and essentially reflect all impinging energy, or active RIS, which also amplify the reflected signal at the cost of significantly higher complexity, noise, and power consumption, an absorptive RIS (ARIS) is considered. An ARIS can in principle modify both the phase and modulus of the impinging signal by absorbing a portion of the signal energy, providing a compromise between its conventional and active counterparts in terms of complexity, power consumption, and degrees of freedom (DoFs). We first use a toy example to illustrate the benefit of ARIS, and then we consider three applications: (1) Spectral coexistence of radar and communication systems, where a convex optimization problem is formulated to minimize the Frobenius norm of the channel matrix from the communication base station to the radar receiver; (2) Spectrum sharing in device-to-device (D2D) communications, where a max-min scheme that maximizes the worst-case signal-to-interference-plus-noise ratio (SINR) among the D2D links is developed and then solved via fractional programming; (3) The physical layer security of a downlink communication system, where the secrecy rate is maximized and the resulting nonconvex problem is solved by a fractional programming algorithm together with a sequential convex relaxation procedure. Numerical results are then presented to show the significant benefit of ARIS in these applications.
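The intuition behind the toy example can be reproduced with a single reflecting element: a conventional RIS can only rotate the phase of its reflection coefficient, while an ARIS can also shrink its modulus, so it can cancel a direct interference path exactly whenever the required coefficient has modulus at most one. The channel values `h_d`, `g`, `r` below are arbitrary assumptions for illustration, not values from the paper.

```python
import numpy as np

# Residual interference seen by the victim receiver: h_d + g * phi * r,
# where h_d is the direct path and g, r the BS->RIS and RIS->receiver channels.
h_d = 0.5 + 0.0j
g, r = 1.0 + 0.0j, 1.0 + 0.0j

target = -h_d / (g * r)            # coefficient that cancels the direct path exactly

# Conventional RIS: phase-only reflection, |phi| = 1
phi_conv = np.exp(1j * np.angle(target))
res_conv = abs(h_d + g * phi_conv * r)

# Absorptive RIS: phase AND modulus control, |phi| <= 1 (energy may be absorbed)
phi_aris = target if abs(target) <= 1 else np.exp(1j * np.angle(target))
res_aris = abs(h_d + g * phi_aris * r)
```

Here the phase-only element over-corrects (its reflection is stronger than the direct path) and leaves residual interference, while the absorptive element absorbs half the energy and cancels it completely; this is the extra degree of freedom the three optimization problems in the paper exploit across many elements.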
Semantic communication, which focuses on conveying the meaning of information rather than exact bit reconstruction, has gained considerable attention in recent years. Meanwhile, reconfigurable intelligent surface (RIS) is a promising technology that can achieve high spectral and energy efficiency by dynamically reflecting incident signals through programmable passive components. In this paper, we put forth a semantic communication scheme aided by RIS. Using text transmission as an example, experimental results demonstrate that the RIS-assisted semantic communication system outperforms the point-to-point semantic communication system in terms of bilingual evaluation understudy (BLEU) scores in Rayleigh fading channels, especially at low signal-to-noise ratio (SNR) regimes. In addition, the RIS-assisted semantic communication system exhibits superior robustness against channel estimation errors compared to its point-to-point counterpart. RIS can improve performance as it provides extra line-of-sight (LoS) paths and enhances signal propagation conditions compared to point-to-point systems.
Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources within the geographic proximity of mobile and Internet-of-Things (IoT) devices. Recent advancements in Unmanned Aerial Vehicle (UAV) technologies have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight connectivity, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey of existing research in UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss some open research challenges that can stimulate future research directions for resource management in UEC.
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
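The claim that residual networks are discretisations of differential equations can be seen directly: a residual step x ← x + dt · f(x) is one explicit Euler step of the ODE dx/dt = f(x). The sketch below is a minimal illustration with a trivial "network" f(x) = -x, chosen because the exact solution e^{-t} is known in closed form; the depth and step size are illustrative assumptions.

```python
import numpy as np

def f(x, theta=-1.0):
    # The "network": here just the vector field of dx/dt = theta * x
    return theta * x

def residual_net(x, depth, dt):
    # Each residual block computes x <- x + dt * f(x), i.e. one explicit
    # Euler step; stacking `depth` blocks integrates the ODE to time depth*dt.
    for _ in range(depth):
        x = x + dt * f(x)
    return x

x0, T, depth = 1.0, 1.0, 1000
x_T = residual_net(x0, depth, T / depth)     # deep residual network output
exact = x0 * np.exp(-T)                      # exact ODE solution at time T
```

As the depth grows (and dt shrinks), the network output converges to the ODE solution; a neural ODE takes this limit as the model itself, which is what enables adaptive solvers and memory-efficient adjoint backpropagation.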
Deep Learning algorithms have achieved state-of-the-art performance in Image Classification and have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms in order to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide all readerships with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from a defender's perspective. Novel taxonomies for categorizing adversarial attacks and defenses are introduced, and discussions about the existence of adversarial examples are provided. Further, in contrast to existing surveys, relevant guidance is given that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, some promising paths for future research are discussed.
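To make the notion of a subtle, optimization-generated perturbation concrete, the sketch below applies the fast gradient sign method (FGSM), one of the simplest adversarial attacks, to a toy logistic-regression classifier. The weights, input, and perturbation budget `eps` are arbitrary assumptions; real attacks target deep networks, but the mechanism (move each input dimension by eps in the loss-increasing direction) is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier p(y=1 | x) = sigmoid(w . x + b) with fixed toy weights
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.3, 0.1])       # clean input, correctly classified as class 1
y = 1.0                        # true label

# Gradient of the cross-entropy loss w.r.t. the INPUT (not the weights):
# d/dx [-log sigmoid(w.x + b)] = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: a single signed step of size eps per dimension
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

clean_pred = sigmoid(w @ x + b) > 0.5      # prediction on the clean input
adv_pred = sigmoid(w @ x_adv + b) > 0.5    # prediction on the perturbed input
```

The perturbation is bounded in the infinity norm by `eps`, which for images translates to a small, often imperceptible per-pixel change, yet it is enough to flip the classifier's decision.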
In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results instead of original data to contribute to the optimization of the global model. This architecture can not only relieve the computation and storage burden on servers, but also protect users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods. We also present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.