Receivers performing joint channel estimation and signal detection with superimposed pilots (SP) can achieve high transmission efficiency in orthogonal time frequency space (OTFS) systems. However, existing receivers have high computational complexity, which hinders their practical application. In this work, with SP in the delay-Doppler (DD) domain and generalized complex exponential (GCE) basis expansion modeling (BEM) for the channels, a message passing-based SP-DD iterative receiver is proposed that drastically reduces the computational complexity with only marginal performance loss compared to existing receivers. To facilitate channel estimation (CE) in the proposed receiver, we design a pilot signal that concentrates the pilot power in the frequency domain, leading to an SP-DD-D receiver that effectively reduces the power of the pilot signal with almost no loss of CE accuracy. Extensive simulation results are provided to demonstrate the superiority of the proposed SP-DD-D receiver.
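To make the BEM idea above concrete, here is a minimal numpy sketch of fitting a generalized complex exponential basis to one time-varying channel tap. The block length, number of basis functions, oversampling factor, and toy Doppler paths are illustrative assumptions, not the paper's parameters, and the sketch does not include the message passing receiver itself.

```python
import numpy as np

# Minimal sketch: approximate one time-varying channel tap h[n] over a block
# of N samples with a generalized complex exponential (GCE) BEM. Parameters
# below (N, Q, K, Doppler values) are illustrative assumptions.
N = 512          # block length (samples)
Q = 5            # number of basis functions (should cover the Doppler spread)
K = 2            # oversampling factor of the generalized CE basis

n = np.arange(N)
# GCE basis: complex exponentials on a frequency grid of period K*N (i.e.,
# oversampled by K relative to a plain CE-BEM), centered around DC.
B = np.exp(1j * 2 * np.pi * np.outer(n, np.arange(Q) - Q // 2) / (K * N))  # N x Q

# Toy "true" channel tap: a sum of a few Doppler-shifted paths.
rng = np.random.default_rng(0)
doppler = rng.uniform(-2.0, 2.0, size=3) / N          # normalized Doppler shifts
gains = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(6)
h = (gains[None, :] * np.exp(1j * 2 * np.pi * n[:, None] * doppler[None, :])).sum(axis=1)

# Least-squares fit of the BEM coefficients and the resulting modeling error.
c, *_ = np.linalg.lstsq(B, h, rcond=None)
nmse = np.linalg.norm(h - B @ c) ** 2 / np.linalg.norm(h) ** 2
print(f"BEM modeling NMSE: {nmse:.2e}")
```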
Nonlinear distortion stemming from low-cost power amplifiers may severely affect wireless communication performance through out-of-band (OOB) radiation and in-band distortion. The distortion is correlated between different transmit antennas in an antenna array, which results in a beamforming gain at the receiver side that grows with the number of antennas. In this paper, we investigate how the strength of the distortion is affected by the frequency selectivity of the channel. A closed-form expression for the received distortion power is derived as a function of the number of multipath components (MPCs) and the delay spread, highlighting their impact. The analysis, which is verified via numerical simulations, reveals that as the number of MPCs increases, the distortion exhibits distinct characteristics at in-band and OOB frequencies. It is shown that the received in-band and OOB distortion power is inversely proportional to the number of MPCs, and that as the delay spread narrows, the in-band distortion power is beamformed toward the intended user, yielding higher received in-band distortion than OOB distortion.
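As a rough illustration of the setup above, the following Monte Carlo sketch passes a third-order polynomial PA distortion term through an L-tap i.i.d. channel with time-reversal maximum-ratio precoding, so the received distortion power can be inspected as the number of MPCs grows. The array size, signal model, and PA coefficient are assumptions chosen for illustration; this is not the paper's closed-form analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
M, Nsym, a3 = 32, 2048, 0.05   # antennas, samples, 3rd-order PA coefficient (assumed)

def rx_distortion_power(L, trials=10):
    """Received power of the 3rd-order PA distortion for an L-tap i.i.d.
    channel with time-reversal maximum-ratio precoding (illustrative model)."""
    p = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2 * L)
        s = (rng.standard_normal(Nsym) + 1j * rng.standard_normal(Nsym)) / np.sqrt(2)
        w = np.conj(h[:, ::-1])                 # time-reversal MRT filter per antenna
        w /= np.linalg.norm(w)                  # fixed total transmit power
        x = np.array([np.convolve(s, w[m]) for m in range(M)])   # precoded signals
        d = a3 * x * np.abs(x) ** 2                              # distortion term only
        r = sum(np.convolve(d[m], h[m]) for m in range(M))       # distortion at receiver
        p += np.mean(np.abs(r) ** 2)
    return p / trials

for L in (1, 2, 4, 8, 16):                      # number of multipath components
    print(L, rx_distortion_power(L))
```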
This paper presents a distributed rule-based Lloyd algorithm (RBL) for multi-robot motion planning and control. The main limitations of the basic Lloyd-based algorithm (LB) concern deadlock issues and the failure to address dynamic constraints effectively. Our contribution is twofold. First, we show how RBL provides safety and convergence to the goal region without relying on communication between robots, on neighbors' control inputs, or on synchronization between the robots. We consider both holonomic and non-holonomic robots with control input saturation. Second, we show that the Lloyd-based algorithm (without rules) can be successfully used as a safety layer for learning-based approaches, leading to non-negligible benefits. We further prove the soundness, reliability, and scalability of RBL through extensive simulations, an updated comparison with the state of the art, and experimental validation on small-scale car-like robots.
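For orientation, a minimal sketch of the underlying Lloyd-based step (without the paper's rules) follows: each robot moves toward a goal-weighted centroid of its sampled Voronoi cell under input saturation, using only the positions it can observe. The arena bounds, gains, and Gaussian goal weighting are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a Lloyd-based motion step: each robot moves toward the
# centroid of its Voronoi cell, estimated by sampling the workspace and
# weighted toward the robot's goal. All parameters below are illustrative.
rng = np.random.default_rng(2)
robots = rng.uniform(0, 10, size=(6, 2))     # robot positions in a 10x10 arena
goals = rng.uniform(0, 10, size=(6, 2))      # per-robot goal regions (centers)

def lloyd_step(p, goals, k=0.5, u_max=0.3, n_samples=4000):
    samples = rng.uniform(0, 10, size=(n_samples, 2))
    # assign each sample to the nearest robot (its Voronoi cell)
    owner = np.argmin(np.linalg.norm(samples[:, None] - p[None], axis=2), axis=1)
    p_next = p.copy()
    for i in range(len(p)):
        cell = samples[owner == i]
        if len(cell) == 0:
            continue
        # weight samples toward robot i's goal so the centroid pulls it there
        w = np.exp(-0.5 * np.linalg.norm(cell - goals[i], axis=1) ** 2)
        centroid = (w[:, None] * cell).sum(0) / w.sum()
        u = k * (centroid - p[i])
        u *= min(1.0, u_max / (np.linalg.norm(u) + 1e-9))  # input saturation
        p_next[i] = p[i] + u
    return p_next

for _ in range(100):
    robots = lloyd_step(robots, goals)
```

Because the step only needs sensed neighbor positions, no inter-robot communication or synchronization is required, which is the property the paper's rules build on.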
The openness and influence of video-sharing platforms (VSPs) such as YouTube and TikTok have attracted creators to share videos on various social issues. Although social issue videos (SIVs) affect public opinion and can breed misinformation, how VSP users obtain information from and interact with SIVs is under-explored. This work surveyed 659 YouTube and 127 TikTok users to understand their motives for consuming SIVs on VSPs. We found that VSP users are primarily motivated to use the platforms by information and entertainment gratifications. They turn to SIVs for information-seeking purposes and find YouTube and TikTok convenient for interacting with SIVs. They watch SIVs for entertainment only moderately and engage little in social interactions. SIV consumption is associated with the information and socialization gratifications of the platform. VSP users appreciate the diversity of information and opinions but also do their own research and are concerned about misinformation and echo chamber problems.
In the face of challenges encountered during femur fracture surgery, such as high rates of malalignment and X-ray exposure of operating personnel, robot-assisted surgery has emerged as an alternative to conventional state-of-the-art surgical methods. This paper introduces the development of Robossis, a haptic system for robot-assisted femur fracture surgery. Robossis comprises a 7-DOF haptic controller and a 6-DOF surgical robot. A unilateral control architecture is developed to address the kinematic mismatch and the motion transfer between the haptic controller and the Robossis surgical robot. A real-time motion control pipeline is designed to handle the motion transfer and is evaluated through experimental testing. The analysis shows that the Robossis surgical robot adheres to the desired trajectory from the haptic controller with an average translational error of 0.32 mm and a rotational error of 0.07 deg. Additionally, a haptic rendering pipeline is developed to resolve the kinematic mismatch by constraining the haptic controller (user hand) movement within the permissible joint limits of the Robossis surgical robot. Lastly, in a cadaveric lab test, the Robossis system assisted surgeons during a mock femur fracture surgery. The results show that Robossis provides an intuitive solution for surgeons performing femur fracture surgery.
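As a rough sketch of the joint-limit constraint idea described above (not Robossis's actual kinematics or haptic rendering), the following scales a commanded Cartesian increment so that the mapped joints stay within their limits. The identity inverse-kinematics map and the symmetric limits are placeholder assumptions.

```python
import numpy as np

# Minimal sketch: clip a commanded controller increment so the mapped
# surgical-robot joints stay within limits. Limits and the kinematic map
# are illustrative placeholders, not the Robossis model.
q = np.zeros(6)                              # current joint vector
q_min, q_max = -np.pi / 3 * np.ones(6), np.pi / 3 * np.ones(6)
J_inv = np.eye(6)                            # placeholder inverse kinematic map

def clamp_motion(dx):
    """Scale a Cartesian increment dx so the resulting joints stay in limits."""
    dq = J_inv @ dx
    q_next = q + dq
    alpha = 1.0                              # largest feasible fraction of the step
    for i in range(6):
        if q_next[i] > q_max[i]:
            alpha = min(alpha, (q_max[i] - q[i]) / dq[i])
        elif q_next[i] < q_min[i]:
            alpha = min(alpha, (q_min[i] - q[i]) / dq[i])
    return q + max(alpha, 0.0) * dq

# Example: the second component would violate the limit, so the step is scaled.
q = clamp_motion(np.array([0.2, -1.5, 0.1, 0.0, 0.0, 0.0]))
```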
This study addresses the application of deep learning techniques to joint sound signal classification and localization networks. Current state-of-the-art sound source localization (SSL) deep learning networks lack feature aggregation within their architecture. Feature aggregation enhances model performance by consolidating information from different feature scales, thereby improving feature robustness and invariance. This is particularly important in SSL networks, which must differentiate direct and indirect acoustic signals. To address this gap, we adapt feature aggregation techniques from computer vision neural networks to signal detection neural networks. Additionally, we propose the Scale Encoding Network (SEN) for feature aggregation, which encodes features from various scales while compressing the network for more computationally efficient aggregation. To evaluate the efficacy of feature aggregation in SSL networks, we integrated the following computer vision feature aggregation sub-architectures into an SSL control architecture: the Path Aggregation Network (PANet), the Weighted Bi-directional Feature Pyramid Network (BiFPN), and SEN. These sub-architectures were evaluated using two metrics for signal classification and two metrics for direction-of-arrival regression. PANet and BiFPN are established aggregators in computer vision models, while the proposed SEN is a more compact aggregator. The results suggest that models incorporating feature aggregation outperformed the control model, the Sound Event Localization and Detection network (SELDnet), in both sound signal classification and localization. Feature aggregation thus enhances the performance of sound detection neural networks, particularly in direction-of-arrival regression.
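For context, a minimal sketch of the kind of weighted multi-scale fusion that BiFPN popularized (fast normalized fusion) is shown below; the shapes and weight values are illustrative, and this is not the SEN architecture itself.

```python
import numpy as np

# Minimal sketch of BiFPN-style fast normalized weighted fusion: features
# from different scales, already resampled to a common size, are combined
# with learnable non-negative weights. Shapes below are illustrative.
def weighted_fusion(features, weights, eps=1e-4):
    """features: list of arrays with the same shape; weights: learnable scalars."""
    w = np.maximum(weights, 0.0)             # ReLU keeps weights non-negative
    w = w / (w.sum() + eps)                  # fast normalization (no softmax)
    return sum(wi * fi for wi, fi in zip(w, features))

f_deep = np.random.rand(32, 32, 64)          # e.g., an upsampled deep feature map
f_shallow = np.random.rand(32, 32, 64)       # a shallow map at the same resolution
fused = weighted_fusion([f_deep, f_shallow], np.array([1.0, 0.8]))
```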
We analyze low-power short-range wireless communications through a low-rank fading channel, a bona fide use case in many communication scenarios that require simple wireless connectivity with much-relaxed constraints on throughput and data latency. This is certainly true, for instance, of low-complexity wireless channels in low-rate wireless personal area networks (LR-WPANs). Low-rate communication on control channels in wireless networks is another relevant example. Specifically, we characterize the capacity of a low-rank wireless channel with varying fading severity at low signal-to-noise ratios (SNRs). The rank deficiency is incorporated by introducing a pinhole condition in the channel. The degradation of channel capacity with fading severity at high SNRs is well known: the probability of deep fades increases significantly with higher fading severity, resulting in poor performance. Our analysis of the double-fading pinhole channel at low SNR yields a counter-intuitive result: \emph{higher fading severity enables higher capacity at sufficiently low SNR}. The underlying reason is that at low SNRs, the ergodic capacity depends crucially on the probability distribution of channel peaks (that is, the tail distribution); for the pinhole channel, the tail distribution improves with increased fading severity. This allows a transmitter operating at low SNR to exploit channel peaks `more efficiently', resulting in a net improvement in achievable spectral efficiency. We derive a new key result quantifying this dependence for the double-Nakagami-$m$ fading pinhole channel: the ergodic capacity ${C} \propto (m_T m_R)^{-1}$ at low SNR, where $m_T m_R$ is the product of the fading (severity) parameters of the two independent Nakagami-$m$ fadings involved.
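The following Monte Carlo sketch probes this regime in one standard setting: the ergodic capacity of the double-Nakagami-$m$ pinhole channel with transmit CSI and waterfilling power allocation, where the end-to-end gain is the product of two independent unit-mean Nakagami-$m$ powers. The CSI/waterfilling assumption and the parameter values are illustrative and may differ from the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(3)

def capacity_waterfilling(m_t, m_r, snr_db, n=500_000):
    """Ergodic capacity with transmit CSI: waterfill over the pinhole gain
    g = g_T * g_R, each factor a unit-mean Nakagami-m power, Gamma(m, 1/m)."""
    P = 10 ** (snr_db / 10)                     # average power budget
    g = rng.gamma(m_t, 1 / m_t, n) * rng.gamma(m_r, 1 / m_r, n)
    g = np.maximum(g, 1e-300)                   # numerical floor
    lo, hi = 1e-9, 1e9                          # bracket the waterfilling cutoff
    for _ in range(80):                         # geometric bisection on the cutoff
        lam = np.sqrt(lo * hi)
        if np.mean(np.maximum(1 / lam - 1 / g, 0.0)) > P:
            lo = lam                            # using too much power: raise cutoff
        else:
            hi = lam
    return np.mean(np.maximum(np.log2(g / lam), 0.0))

for m in (0.5, 1.0, 2.0, 4.0):                  # smaller m = more severe fading
    print(f"m_T = m_R = {m}: C ~ {capacity_waterfilling(m, m, -20.0):.4e} bit/s/Hz")
```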
A significant challenge in control theory and technology is to devise agile and less resource-intensive experiments for evaluating the performance and feasibility of control algorithms for the collective coordination of large-scale complex systems. Many new methodologies are based on macroscopic representations of the emergent system behavior and, because of the inherent hurdle of developing full-scale experimental platforms, can be validated easily only through numerical simulations. In this paper, we introduce a novel hybrid mixed-reality set-up for testing swarm robotics techniques, focusing on the collective motion of robotic swarms. This hybrid apparatus combines real differential-drive robots and virtual agents to create a heterogeneous swarm of tunable size. We validate the methodology by extending continuification-based control methods for swarms to higher dimensions and investigating them experimentally. Our study demonstrates the versatility and effectiveness of the platform for conducting large-scale swarm robotics experiments, and it contributes new theoretical insights into control algorithms exploiting continuification approaches.
In modern warehouses, mobile robots transport packages and drop them into collection bins/chutes based on shipping destinations grouped by, e.g., ZIP code. System throughput, measured as the number of packages sorted per unit of time, determines the efficiency of the warehouse. This research develops a scalable, high-throughput multi-robot parcel sorting solution, decomposing the task into two related processes, bin assignment and offline/online multi-robot path planning, and optimizing both. Bin assignment matches collection bins with package types to minimize traveling costs; robots are then assigned to pick up packages and drop them into the assigned bins. Multiple highly effective bin assignment algorithms are proposed that can work with an arbitrary planning algorithm. For multi-robot path planning, we propose a decentralized routine that uses only local information to route the robots over a carefully constructed directed road network. Our decentralized planner, provably probabilistically deadlock-free, consistently delivers near-optimal results on par with some top-performing centralized planners while reducing computation times by orders of magnitude. Extensive simulations show that our overall framework delivers promising performance.
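As a concrete illustration of the bin-assignment subproblem, the sketch below matches package types to bins with the Hungarian method (scipy's linear_sum_assignment). The cost model, station layout, and package frequencies are invented for illustration and are not the paper's proposed algorithms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal sketch of bin assignment: match package types to collection bins
# so the total robot traveling cost is minimized. The cost model (round-trip
# station-to-bin distance weighted by package frequency) is an assumption.
rng = np.random.default_rng(4)
n_types = 8                                   # package types (e.g., ZIP groups)
bins = rng.uniform(0, 50, size=(n_types, 2))  # candidate bin locations
station = np.array([0.0, 25.0])               # pickup station location
freq = rng.integers(1, 10, size=n_types)      # expected packages per type

# cost[t, b] = traffic of type t  x  round-trip distance to bin b
dist = np.linalg.norm(bins - station, axis=1)
cost = np.outer(freq, 2 * dist)

rows, cols = linear_sum_assignment(cost)      # optimal one-to-one matching
print("type -> bin:", dict(zip(rows.tolist(), cols.tolist())))
print("total travel cost:", cost[rows, cols].sum())
```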
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. Moreover, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that automatically adjusts the propagation and aggregation process according to the homophily or heterophily between node pairs. To learn the propagation process adaptively, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained end-to-end, enabling it to go beyond the homophily assumption. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
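To fix ideas, here is a minimal numpy sketch of a propagation step gated by a per-edge homophily score: positive scores aggregate a neighbor as in a homophilous GCN, while negative scores push representations apart. Cosine similarity of raw attributes stands in for the paper's learned topology- and attribute-based homophily measures, so this is an illustrative stand-in rather than the proposed model.

```python
import numpy as np

# Minimal sketch: propagation gated by a signed per-edge homophily score
# s_ij in [-1, 1]. Here the score is the cosine similarity of node features;
# the paper instead learns it from topology and attributes end-to-end.
def propagate(X, A):
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)
    S = (Xn @ Xn.T) * A                     # signed scores on existing edges only
    deg = np.abs(S).sum(1, keepdims=True) + 1.0
    return (X + S @ X) / deg                # self-loop plus signed aggregation

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))             # node features
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric adjacency, no self-loops
H = propagate(X, A)
```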
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small amount of training data.