Reconfigurable Intelligent Surfaces (RISs) are envisioned to play a key role in future wireless communications, enabling programmable radio propagation environments. They are typically considered nearly passive planar structures that operate as adjustable reflectors, which gives rise to a multitude of implementation challenges, including the inherent difficulty of estimating the underlying wireless channels. In this paper, we focus on the recently conceived concept of Hybrid Reconfigurable Intelligent Surfaces (HRISs), which not only reflect the impinging waveform in a controllable fashion, but are also capable of sensing and processing an adjustable portion of it. We first present implementation details for this metasurface architecture and propose a convenient mathematical model for characterizing its dual operation. As an indicative application of HRISs in wireless communications, we formulate the individual channel estimation problem for the uplink of a multi-user HRIS-empowered communication system. Considering first a noise-free setting, we theoretically quantify the advantage of HRISs in notably reducing the number of pilots needed for channel estimation, as compared to the case of purely reflective RISs. We then present closed-form expressions for the Mean Squared Error (MSE) performance in estimating the individual channels at the HRISs and the base station for the noisy model. Based on these derivations, we propose an automatic differentiation-based first-order optimization approach to efficiently determine the HRIS phase and power-splitting configurations that minimize the weighted sum-MSE performance. Our numerical evaluations demonstrate that HRISs not only enable the estimation of the individual channels in HRIS-empowered communication systems, but also improve the ability to recover the cascaded channel, as compared to existing methods using passive and reflective RISs.
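To fix ideas, here is a minimal sketch of the automatic-differentiation-based first-order optimization idea: the HRIS phase shifts and the power-splitting ratio are treated as trainable parameters and updated by gradient descent on a differentiable objective. The objective below is a toy surrogate (sensed-signal power), not the paper's closed-form weighted sum-MSE, and all names are illustrative.

```python
# Toy AD-based optimization of HRIS phases and power split (PyTorch).
import torch

N = 16                                           # number of HRIS elements
theta = torch.zeros(N, requires_grad=True)       # phase shifts (radians)
rho_logit = torch.zeros(1, requires_grad=True)   # power-splitting logit

h = torch.randn(N, dtype=torch.cfloat)           # toy channel realization
opt = torch.optim.Adam([theta, rho_logit], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    rho = torch.sigmoid(rho_logit)               # keep split in (0, 1)
    phase = torch.exp(1j * theta)                # unit-modulus phase profile
    # surrogate objective: maximize the power sensed at the HRIS
    sensed = torch.sqrt(rho) * (phase * h).sum()
    loss = -(sensed.abs() ** 2).sum()
    loss.backward()                              # gradients via autodiff
    opt.step()

print(float(torch.sigmoid(rho_logit)), float(-loss))
```

In the paper's setting, the surrogate loss would be replaced by the derived weighted sum-MSE expression; the same gradient machinery applies unchanged.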
Are intelligent machines really intelligent? Is the underlying philosophical concept of intelligence satisfactory for describing how the present systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This paper addresses the problem of deciding whether the so-called "intelligent machines" are capable of understanding, instead of merely processing signs. It deals with the relationship between syntax and semantics. The main thesis concerns the inevitability of semantics for any discussion about the possibility of building conscious machines, condensed into the following two tenets: "If a machine is capable of understanding (in the strong sense), then it must be capable of combining rules and intuitions"; "If semantics cannot be reduced to syntax, then a machine cannot understand." Our conclusion states that it is not necessary to attribute understanding to a machine in order to explain its exhibited "intelligent" behavior; a merely syntactic and mechanistic approach to intelligence as a task-solving tool suffices to justify the range of operations that it can display in the current state of technological development.
Prototype Generation (PG) methods are typically considered for improving the efficiency of the $k$-Nearest Neighbour ($k$NN) classifier when tackling large-scale corpora. Such approaches aim at generating a reduced version of the corpus without decreasing the classification performance achieved with the initial set. Despite their wide application in multiclass scenarios, very few works have proposed PG methods for the multilabel space. In this regard, this work presents novel adaptations of four multiclass PG strategies to the multilabel case. These proposals are evaluated with three multilabel $k$NN-based classifiers, 12 corpora comprising a varied range of domains and corpus sizes, and different noise scenarios artificially induced in the data. The results show that the proposed adaptations are capable of significantly improving -- both in terms of efficiency and classification performance -- on the only reference multilabel PG work in the literature, as well as on the case in which no PG method is applied, while also presenting statistically superior robustness in noisy scenarios. Moreover, these novel PG strategies allow prioritising either the efficiency or the efficacy criterion through their configuration, depending on the target scenario, hence covering a wide area of the solution space not previously filled by other works.
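To illustrate what PG means in a multilabel $k$NN pipeline, here is a minimal sketch of one simple reduction scheme: samples sharing an identical label set are merged into a single centroid prototype. This is an illustrative baseline, not one of the paper's four adaptations; the function name is our own.

```python
# Centroid-based prototype generation for a multilabel k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def centroid_prototypes(X, Y):
    """Merge all samples sharing an identical label set into one centroid."""
    keys = [tuple(row) for row in Y]
    protos, labels = [], []
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        protos.append(X[mask].mean(axis=0))
        labels.append(key)
    return np.vstack(protos), np.array(labels)

# toy data: 6 samples, 2 features, 3 binary labels
X = np.array([[0., 0.], [0.1, 0.], [1., 1.], [1.1, 0.9], [0., 1.], [2., 2.]])
Y = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 1, 1]])

Xp, Yp = centroid_prototypes(X, Y)           # reduced set: 4 prototypes
clf = KNeighborsClassifier(n_neighbors=1).fit(Xp, Yp)
print(clf.predict([[0.05, 0.0]]))            # -> [[1 0 0]]
```

The efficiency/efficacy trade-off mentioned in the abstract corresponds, in schemes like this, to how aggressively samples are merged into prototypes.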
Flow-driven spectral chaos (FSC) is a recently developed method for tracking and quantifying uncertainties in the long-time response of stochastic dynamical systems using the spectral approach. The method uses a novel concept called 'enriched stochastic flow maps' as a means to construct an evolving finite-dimensional random function space that is both accurate and computationally efficient in time. In this paper, we present a multi-element version of the FSC method (the ME-FSC method for short) to tackle, in particular, dynamical systems that are inherently discontinuous over the probability space. In ME-FSC, the random domain is partitioned into several elements, and the problem is solved separately on each random element using the FSC method. Subsequently, the results are aggregated to compute the probability moments of interest using the law of total probability. To demonstrate the effectiveness of the ME-FSC method in dealing with discontinuities and long-time integration of stochastic dynamical systems, four representative numerical examples are presented in this paper, including the Van der Pol oscillator problem and the Kraichnan-Orszag three-mode problem. Results show that the ME-FSC method is capable of solving problems that have strong nonlinear dependencies over the probability space, both reliably and at low computational cost.
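The aggregation step admits a compact illustration: if element $i$ carries probability mass $w_i$ and the element-level solves yield conditional moments, the law of total probability combines them into global moments. The sketch below mocks the element-level FSC solves and shows only the combination step.

```python
# Multi-element moment aggregation via the law of total probability.
import numpy as np

def aggregate_moments(weights, means, second_moments):
    """weights[i] = P(element i); returns the global mean and variance."""
    w = np.asarray(weights)
    m1 = np.sum(w * np.asarray(means))           # E[u] = sum_i w_i E[u|i]
    m2 = np.sum(w * np.asarray(second_moments))  # E[u^2] = sum_i w_i E[u^2|i]
    return m1, m2 - m1**2

# two random elements with equal probability mass (values are mock solves)
mean, var = aggregate_moments([0.5, 0.5], means=[1.0, 3.0],
                              second_moments=[1.5, 9.5])
print(mean, var)   # 2.0, 1.5
```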
It was observed in \citet{gupta2009differentially} that the Set Cover problem has strong impossibility results under differential privacy. In our work, we observe that these hardness results dissolve when we turn to the Partial Set Cover problem, where we only need to cover a $\rho$-fraction of the elements in the universe, for some $\rho\in(0,1)$. We show that this relaxation enables us to avoid the impossibility results: under mild conditions on the input set system, we give differentially private algorithms which output an explicit set cover with non-trivial approximation guarantees. In particular, this is the first differentially private algorithm which outputs an explicit set cover. Using our algorithm for Partial Set Cover as a subroutine, we give a differentially private (bicriteria) approximation algorithm for a facility location problem which generalizes $k$-center/$k$-supplier with outliers. As with the Set Cover problem, no previous algorithm has been able to give non-trivial guarantees for $k$-center/$k$-supplier-type facility location problems, due to their high sensitivity and the accompanying impossibility results. Our algorithm shows that relaxing the covering requirement to serving only a $\rho$-fraction of the population, for $\rho\in(0,1)$, enables us to circumvent the inherent hardness. Overall, our work is an important step in tackling and understanding impossibility results in private combinatorial optimization.
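For readers unfamiliar with Partial Set Cover, the following sketch shows the classical non-private greedy baseline: repeatedly pick the set covering the most uncovered elements until a $\rho$-fraction is covered. A private variant would replace the deterministic argmax with a noisy selection step (e.g., the exponential mechanism); the code below is only the baseline, not the paper's algorithm.

```python
# Greedy Partial Set Cover: cover at least a rho-fraction of the universe.
import math

def greedy_partial_cover(universe, sets, rho):
    target = math.ceil(rho * len(universe))
    covered, chosen = set(), []
    while len(covered) < target:
        # pick the set with the largest marginal coverage
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen

U = set(range(10))
S = [set(range(0, 6)), set(range(5, 9)), {9}]
print(greedy_partial_cover(U, S, rho=0.8))   # covers 9 of 10 -> [0, 1]
```

Note how the outlier element {9} never forces a third pick: this is exactly the slack that lets Partial Set Cover escape the sensitivity issues of full Set Cover.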
Teleoperation is a widely adopted strategy for controlling robotic manipulators that execute complex tasks requiring highly dexterous movements and critical high-level intelligence. Classical teleoperation schemes are based either on joystick control or on more intuitive interfaces that directly map the user's arm motions onto a single robot arm's motions. These approaches have limits when the execution of a given task requires a reconfigurable multi-arm system. Indeed, the simultaneous teleoperation of two or more robot arms could extend the workspace of the manipulation cell, increase its total payload, or afford other advantages. In different phases of a task, each robot of a reconfigurable multi-arm system could act as an independent arm, as one of a pair of cooperating arms, or as one of the fingers of a virtual, large robot hand. This manuscript proposes a novel telemanipulation framework that enables both the individual and combined control of any number of robotic arms. Thanks to the designed control architecture, the human operator can intuitively choose, through the user interface, the control modalities and the manipulators that make the task convenient to execute. Moreover, through the tele-impedance paradigm, the system can address complex tasks that require physical interaction by letting the robots mimic the arm impedance and position references of the human operator. The proposed framework is validated with 8 subjects controlling four 7-DoF Franka Emika Panda robots to execute a telemanipulation task. Qualitative results of the experiments demonstrate the promising applicability of our framework.
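The tele-impedance paradigm can be summarized with a standard Cartesian impedance law, where the stiffness and damping gains are mapped from the operator's estimated arm impedance. The sketch below uses illustrative gains and scaling; it is a generic impedance law, not the paper's exact controller.

```python
# Cartesian impedance law driven by estimated human arm impedance.
import numpy as np

def impedance_wrench(x, xd, v, vd, K_h, D_h, scale=1.0):
    """F = K (x_d - x) + D (v_d - v), with K, D scaled from human estimates."""
    K = scale * K_h          # stiffness mapped from the human arm estimate
    D = scale * D_h          # damping mapped likewise
    return K @ (xd - x) + D @ (vd - v)

K_h = np.diag([400., 400., 600.])   # N/m, illustrative arm stiffness
D_h = np.diag([40., 40., 60.])      # Ns/m, illustrative arm damping
F = impedance_wrench(x=np.zeros(3), xd=np.array([0.05, 0., 0.]),
                     v=np.zeros(3), vd=np.zeros(3), K_h=K_h, D_h=D_h)
print(F)   # [20.  0.  0.] N: restoring force toward the operator's reference
```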
This paper presents recent progress on integrating speech separation and enhancement (SSE) into the ESPnet toolkit. Compared with the previous ESPnet-SE work, numerous features have been added, including recent state-of-the-art speech enhancement models with their respective training and evaluation recipes. Importantly, a new interface has been designed to flexibly combine speech enhancement front-ends with other tasks, including automatic speech recognition (ASR), speech translation (ST), and spoken language understanding (SLU). To showcase such integration, we performed experiments on carefully designed synthetic datasets for noisy-reverberant multi-channel ST and SLU tasks, which can serve as benchmark corpora for future research. In addition to these new tasks, we also use CHiME-4 and WSJ0-2mix to benchmark multi- and single-channel SE approaches. Results show that integrating SE front-ends with back-end tasks is a promising research direction even for tasks beyond ASR, especially in the multi-channel scenario. The code is available online at //github.com/ESPnet/ESPnet. The multi-channel ST and SLU datasets, another contribution of this work, are released on HuggingFace.
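Conceptually, the new interface composes an enhancement front-end with a task back-end. The sketch below conveys only this composition pattern; the class and method names are invented for illustration and are NOT ESPnet's actual API, which should be consulted in the linked repository.

```python
# Hypothetical front-end/back-end composition (names are NOT ESPnet's API).
import numpy as np

class EnhancementFrontend:
    def __call__(self, mixture: np.ndarray) -> np.ndarray:
        # placeholder: a real front-end would run a trained SE model
        return mixture.mean(axis=0, keepdims=True)  # e.g. channel average

class SpeechBackend:
    def __call__(self, speech: np.ndarray) -> str:
        return "<recognized text>"                  # placeholder decoder

def pipeline(mixture, frontend, backend):
    return backend(frontend(mixture))               # SE output feeds the task

mixture = np.random.randn(4, 16000)   # 4-channel mixture, 1 s at 16 kHz
print(pipeline(mixture, EnhancementFrontend(), SpeechBackend()))
```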
When users exchange data with Unmanned Aerial Vehicles (UAVs) over air-to-ground (A2G) wireless communication networks, they expose the link to attacks that could increase packet loss and might disrupt connectivity. For example, in emergency deliveries, losing control information (i.e., data related to the UAV control communication) might result in accidents that cause UAV destruction and damage to buildings or other elements of a city. These issues must therefore be addressed in 5G and 6G scenarios. This research offers a deep learning (DL) approach for detecting attacks on UAVs equipped with orthogonal frequency division multiplexing (OFDM) receivers on Clustered Delay Line (CDL) channels, in highly complex scenarios involving authenticated terrestrial users as well as attackers at unknown locations. We use the two observable parameters available in 5G UAV connections: the Received Signal Strength Indicator (RSSI) and the Signal-to-Interference-plus-Noise Ratio (SINR). The proposed algorithm generalizes to attacks that do not occur during training, and it can identify all the attackers in an environment with 20 terrestrial users. A deeper investigation into the timing requirements for recognizing attacks shows that, after training, the minimum time necessary from the moment the attack begins is 100 ms, and the minimum attack power is 2 dBm, the same power used by the authenticated UAV. Our algorithm also detects moving attackers from a distance of 500 m.
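The core detection idea can be illustrated with the two observables alone: classify attack versus normal links from (RSSI, SINR) pairs. The sketch below uses synthetic Gaussian data and a small MLP as stand-ins for the paper's DL model and the 5G CDL-channel measurements.

```python
# Toy attack detector on (RSSI, SINR) features with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
# normal links: higher SINR; attacked links: degraded SINR, shifted RSSI
normal = np.column_stack([rng.normal(-70, 5, n), rng.normal(20, 3, n)])
attack = np.column_stack([rng.normal(-65, 5, n), rng.normal(8, 3, n)])
X = np.vstack([normal, attack])                   # [RSSI dBm, SINR dB]
y = np.array([0] * n + [1] * n)                   # 0 = normal, 1 = attack

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict([[-66.0, 9.0]]))                # -> [1] (flagged as attack)
```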
Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where the data is captured, based on artificial intelligence. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state-of-the-art, and discuss important open issues and possible theoretical and technical solutions.
Deep learning applies multiple processing layers to learn representations of data with multiple levels of feature extraction. This emerging technique has reshaped the research landscape of face recognition since 2014, sparked by the breakthroughs of the DeepFace and DeepID methods. Since then, deep face recognition (FR) techniques, which leverage hierarchical architectures to learn discriminative face representations, have dramatically improved state-of-the-art performance and fostered numerous successful real-world applications. In this paper, we provide a comprehensive survey of recent developments in deep FR, covering the broad topics of algorithms, data, and scenes. First, we summarize the different network architectures and loss functions proposed during the rapid evolution of deep FR methods. Second, we categorize the related face processing methods into two classes: `one-to-many augmentation' and `many-to-one normalization'. Third, we summarize and compare the commonly used databases for both model training and evaluation. Fourth, we review miscellaneous scenes in deep FR, such as cross-factor, heterogeneous, multiple-media, and industry scenes. Finally, we highlight potential deficiencies of the current methods and several future directions.
Multivariate time series forecasting has been studied extensively over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns or dependencies found in real-world data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN)-based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time series forecasting. Furthermore, a lack of explainability remains a serious drawback of deep neural network models. Inspired by the Memory Network proposed for the question-answering task, we propose a deep-learning-based model named Memory Time-series Network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component, all trained jointly. Additionally, the designed attention mechanism enables MTNet to be highly interpretable: we can easily tell which part of the historical data is referenced the most.
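The interpretability claim rests on attention over memory: a query encoding of the recent window attends over encoded long-term memory blocks, and the attention weights expose which historical segment was consulted. The sketch below uses toy random encodings in place of MTNet's learned encoders.

```python
# Dot-product attention over memory blocks (toy stand-in for MTNet's memory).
import numpy as np

def attend(query, memory):
    """query: (d,), memory: (n_blocks, d) -> attention weights, weighted read."""
    scores = memory @ query                  # dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over memory blocks
    return weights, weights @ memory

rng = np.random.default_rng(0)
memory = rng.normal(size=(6, 8))             # 6 historical blocks, 8-dim codes
query = memory[2] + 0.1 * rng.normal(size=8) # a query resembling block 2
w, read = attend(query, memory)
print(w.argmax())   # typically 2: the matching block gets the highest weight
```

Inspecting `w` after a forecast is exactly the kind of diagnostic that makes the model's use of historical data legible.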