This paper explores the potential of 5G New Radio (NR) Time-of-Arrival (TOA) data for indoor drone localization under different scenarios and conditions when fused with inertial measurement unit (IMU) data. Our approach performs graph-based optimization to estimate the drone's position and orientation from the multiple sensor measurements. Due to the lack of real-world data, we use the MATLAB 5G Toolbox and the QuaDRiGa (quasi-deterministic radio channel generator) channel simulator to generate TOA measurements for the EuRoC MAV indoor dataset, which provides IMU readings and ground-truth 6DoF poses of a flying drone. We create twelve sequences by combining three predefined indoor scenario setups of QuaDRiGa with 2 to 5 base station antennas. Experimental results demonstrate that, for a sufficient number of base stations and a high-bandwidth 5G configuration, the pose graph optimization approach achieves accurate drone localization, with an average error of less than 15 cm over the whole trajectory. Furthermore, the adopted graph-based optimization algorithm is fast and can easily be implemented for onboard real-time pose tracking on a micro aerial vehicle (MAV).
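
As an illustration of the pose-graph idea, the following is a minimal 2D sketch that fuses simulated anchor ranges (standing in for 5G TOA measurements) with noisy relative-motion constraints (standing in for integrated IMU odometry). The anchor layout, noise levels, and the use of `scipy.optimize.least_squares` are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2D setup: three base-station anchors and a short true path.
ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
TRUE_PATH = np.array([[1.0, 1.0], [2.0, 1.5], [3.0, 2.5], [4.0, 4.0]])

rng = np.random.default_rng(0)
# Simulated TOA ranges (5 cm noise) and IMU-style odometry (2 cm noise).
toa = np.linalg.norm(TRUE_PATH[:, None, :] - ANCHORS[None, :, :], axis=2)
toa += rng.normal(0.0, 0.05, toa.shape)
odom = np.diff(TRUE_PATH, axis=0) + rng.normal(0.0, 0.02, (len(TRUE_PATH) - 1, 2))

def residuals(flat):
    poses = flat.reshape(-1, 2)
    # Range factors: predicted anchor distances minus TOA measurements.
    r_toa = (np.linalg.norm(poses[:, None, :] - ANCHORS[None, :, :], axis=2)
             - toa).ravel() / 0.05
    # Odometry factors: consecutive pose deltas minus integrated IMU motion.
    r_odo = (np.diff(poses, axis=0) - odom).ravel() / 0.02
    return np.concatenate([r_toa, r_odo])

x0 = np.tile(ANCHORS.mean(axis=0), len(TRUE_PATH))  # crude initial guess
sol = least_squares(residuals, x0)
est = sol.x.reshape(-1, 2)
print(np.abs(est - TRUE_PATH).max())  # typically a few centimeters
```

In a full implementation each pose would also carry an orientation, and a sparse factor-graph solver (e.g. g2o or GTSAM style) would replace the dense least-squares call.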

Future wireless networks and sensing systems will benefit from access to large chunks of spectrum above 100 GHz, to achieve terabit-per-second data rates in 6th Generation (6G) cellular systems and to improve the accuracy and reach of Earth exploration, sensing, and radio astronomy applications. These applications are extremely sensitive to interference from artificial signals; thus the spectrum above 100 GHz features several bands that are protected from active transmissions under current spectrum regulations. To provide more agile access to the spectrum for both kinds of services, active and passive users will have to coexist without harming passive sensing operations. In this paper, we provide the first fundamental analysis of the Radio Frequency Interference (RFI) that large-scale terrestrial deployments introduce in different satellite sensing systems now orbiting the Earth. We develop a geometry-based analysis and extend it into a data-driven model which accounts for realistic propagation, building obstruction, and ground reflection, for network topologies with up to $10^5$ nodes in more than $85$ km$^2$. We show that the presence of harmful RFI depends on several factors, including network load, density and topology, satellite orientation, and building density. The results and methodology provide the foundation for the development of coexistence solutions and spectrum policy towards 6G.

Transport engineers employ various interventions to enhance traffic-network performance. Quantifying the impact of Cycle Superhighways is complicated by the non-random assignment of such an intervention over the transport network. Treatment effects on asymmetric and heavy-tailed distributions are better reflected at the extreme tails than at the median. We propose a novel method to estimate the treatment effect at the extreme tails that incorporates the heavy-tailed features of the outcome distribution. The analysis of London transport data using the proposed method indicates that extreme traffic flow increased substantially after the Cycle Superhighways came into operation.
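
The intuition that effects on heavy-tailed outcomes show up at high quantiles rather than at the median can be sketched on synthetic data; the Pareto tail index, the scale factors, and the plain empirical-quantile estimator below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic heavy-tailed traffic flows; the "treatment" scales the tail by 30%.
control = rng.pareto(3.0, 5000) * 100
treated = rng.pareto(3.0, 5000) * 130

def quantile_effect(treated, control, q):
    """Difference in the q-th empirical quantile of treated vs control."""
    return np.quantile(treated, q) - np.quantile(control, q)

for q in (0.5, 0.9, 0.99):
    print(f"q={q}: effect = {quantile_effect(treated, control, q):.1f}")
```

On this toy data the estimated effect grows sharply toward the tail, which is exactly why a median-based comparison would understate the intervention's impact.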

The combination of Visual Guidance and Extended Reality (XR) technology holds the potential to greatly improve the performance of human workforces in numerous areas, particularly industrial environments. Focusing on virtual assembly tasks and making use of different forms of supportive visualisations, this study investigates the potential of XR Visual Guidance. Set in a web-based immersive environment, our results draw from a heterogeneous pool of 199 participants. This research is designed to significantly differ from previous exploratory studies, which yielded conflicting results on user performance and associated human factors. Our results clearly show the advantages of XR Visual Guidance based on an over 50\% reduction in task completion times and mistakes made; this may further be enhanced and refined using specific frameworks and other forms of visualisations/Visual Guidance. Discussing the role of other factors, such as cognitive load, motivation, and usability, this paper also seeks to provide concrete avenues for future research and practical takeaways for practitioners.

With the surge of theoretical work investigating Reconfigurable Intelligent Surfaces (RISs) for wireless communication and sensing, there is an urgent need for hardware solutions to evaluate these theoretical results and further advance the field. The most common solutions proposed in the literature are based on varactors, Positive-Intrinsic-Negative (PIN) diodes, and Micro-Electro-Mechanical Systems (MEMS). This paper presents the use of Liquid Crystal (LC) technology for the realization of continuously tunable, extremely large millimeter-wave RISs. We review the basic physical principles of LC theory, introduce two different realizations of LC-RISs, namely reflect-arrays and phased-arrays, and highlight their key properties that have an impact on the system design and RIS reconfiguration strategy. Moreover, LC technology is compared with the competing technologies in terms of feasibility, cost, power consumption, reconfiguration speed, and bandwidth. Furthermore, several important open problems for both theoretical and experimental research on LC-RISs are presented.

This paper explores the Achievable Information Rate (AIR) of a diffusive Molecular Communication (MC) channel featuring a fully absorbing receiver that counts the absorbed particles during symbol time intervals (STIs) and resets the counter at the start of each interval. The MC channel, influenced by the memory effect, experiences inter-symbol interference (ISI) arising from the molecules' delayed arrival. The channel's memory is quantified as an integer multiple of the STI, and a single-sample memoryless detector is employed to reduce the complexity of computing the mutual information (MI). To maximize the MI, the detector threshold is optimized under a Gaussian approximation of its input. The channel's MI is calculated, considering the influence of ISI, in the context of binary concentration shift keying modulation. Two distinct scenarios are considered: independent and correlated source-generated symbols, the latter modeled as a first-order Markov process. For each communication scenario, two degrees of knowledge are considered: ISI-aware and ISI-unaware. Remarkably, it is demonstrated that employing a correlated source enables the attainment of a higher capacity. The results indicate that the capacity-achieving input distribution is not necessarily uniform. Notably, when the STI is small, corresponding to the case of strong ISI, the maximum AIR is not achieved through equiprobable symbol transmission.
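
The threshold optimization under the Gaussian approximation can be sketched as follows; the count means and standard deviations are made-up numbers, and a simple grid search stands in for whatever optimizer the paper actually uses:

```python
import numpy as np
from scipy.stats import norm

# Assumed Gaussian statistics of the particle count at the receiver
# (illustrative values, not taken from the paper).
mu0, s0 = 20.0, 6.0    # symbol 0 (residual ISI only)
mu1, s1 = 60.0, 10.0   # symbol 1
p1 = 0.5               # prior probability of sending '1'

def mutual_information(tau):
    """MI of the binary channel induced by thresholding the count at tau."""
    e0 = norm.sf(tau, mu0, s0)   # P(decide 1 | sent 0)
    e1 = norm.cdf(tau, mu1, s1)  # P(decide 0 | sent 1)
    py1 = (1 - p1) * e0 + p1 * (1 - e1)
    h = lambda p: 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    # I(X;Y) = H(Y) - H(Y|X) for the induced binary asymmetric channel.
    return h(py1) - ((1 - p1) * h(e0) + p1 * h(e1))

taus = np.linspace(mu0, mu1, 400)
best = taus[np.argmax([mutual_information(t) for t in taus])]
print(f"MI-optimal threshold = {best:.1f}")
```

Sweeping the prior `p1` in the same way is one route to seeing why the capacity-achieving input distribution need not be uniform under strong ISI.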

The increasing demand for web-based digital assistants has led to a rapid rise in the interest of the Information Retrieval (IR) community in the field of conversational question answering (ConvQA). However, one of the critical aspects of ConvQA is the effective selection of conversational history turns to answer the question at hand. The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area. Selected relevant context can better guide the system as to where exactly in the passage to look for an answer. Irrelevant context, on the other hand, brings noise to the system, resulting in a decline in the model's performance. In this paper, we propose a framework, DHS-ConvQA (Dynamic History Selection in Conversational Question Answering), that first generates the context and question entities for all the history turns, which are then pruned on the basis of the similarity they share with the question at hand. We also propose an attention-based mechanism to re-rank the pruned terms based on calculated weights of how useful they are in answering the question. Finally, we further aid the model by highlighting the terms in the re-ranked conversational history using a binary classification task, keeping the useful terms (predicted as 1) and ignoring the irrelevant ones (predicted as 0). We demonstrate the efficacy of our proposed framework with extensive experimental results on CANARD and QuAC -- the two datasets popularly used in ConvQA. We show that selecting relevant turns works better than rewriting the original question. We also investigate how adding irrelevant history turns negatively impacts the model's performance, and we discuss research challenges that demand more attention from the IR community.
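
The entity-based pruning step can be sketched as follows, with Jaccard overlap between entity sets standing in for the (unspecified) similarity measure; entity extraction is assumed to happen upstream, and all names and thresholds here are illustrative:

```python
def prune_history(question_entities, history, threshold=0.2):
    """Keep history turns whose entity set overlaps enough with the question.

    history: list of (turn_text, entity_set) pairs; entity extraction itself
    (e.g. via an off-the-shelf NER tagger) is assumed to have run upstream.
    """
    q = set(question_entities)
    kept = []
    for text, entities in history:
        e = set(entities)
        jaccard = len(q & e) / len(q | e) if q | e else 0.0
        if jaccard >= threshold:
            kept.append((text, jaccard))
    # Order the surviving turns by overlap score, highest first.
    return sorted(kept, key=lambda t: t[1], reverse=True)

turns = [
    ("Who founded the company?", {"company", "founder"}),
    ("What is the weather today?", {"weather"}),
    ("When was the company founded?", {"company", "date"}),
]
print(prune_history({"company", "founder", "date"}, turns))
```

In the full framework, the attention-based re-ranker and the binary term-highlighting classifier would replace the raw Jaccard score used for ordering here.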

Augmenting automated vehicles to wirelessly detect and respond to external events before they are detectable by onboard sensors is crucial for developing context-aware driving strategies. To this end, we present an automated vehicle platform, designed with connectivity, ease of use and modularity in mind, both in hardware and software. It is based on the Kia Soul EV with a modified version of the Open-Source Car Control (OSCC) drive-by-wire module, uses the open-source Robot Operating System (ROS and ROS 2) in its software architecture, and provides a straightforward solution for transitioning from simulations to real-world tests. We demonstrate the effectiveness of the platform through a synchronised driving test, where sensor data is exchanged wirelessly, and a model-predictive controller is used to actuate the automated vehicle.

This is part II of a two-part paper. Part I presented a universal Birkhoff theory for fast and accurate trajectory optimization. The theory rested on two main hypotheses. In this paper, it is shown that if the computational grid is selected from any one of the Legendre and Chebyshev family of node points, be it Lobatto, Radau or Gauss, then, the resulting collection of trajectory optimization methods satisfy the hypotheses required for the universal Birkhoff theory to hold. All of these grid points can be generated at an $\mathcal{O}(1)$ computational speed. Furthermore, all Birkhoff-generated solutions can be tested for optimality by a joint application of Pontryagin's- and Covector-Mapping Principles, where the latter was developed in Part~I. More importantly, the optimality checks can be performed without resorting to an indirect method or even explicitly producing the full differential-algebraic boundary value problem that results from an application of Pontryagin's Principle. Numerical problems are solved to illustrate all these ideas. The examples are chosen to particularly highlight three practically useful features of Birkhoff methods: (1) bang-bang optimal controls can be produced without suffering any Gibbs phenomenon, (2) discontinuous and even Dirac delta covector trajectories can be well approximated, and (3) extremal solutions over dense grids can be computed in a stable and efficient manner.
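
As a concrete illustration of the $\mathcal{O}(1)$ per-node generation cost, the Chebyshev-Gauss-Lobatto grid has the closed form $\cos(k\pi/N)$, so every node is computed directly rather than via an eigenvalue or root-finding problem (the Legendre families require slightly more work, but the same constant-time property is what the text refers to):

```python
import numpy as np

def cgl_nodes(N):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1], ascending.

    Each node is cos(k*pi/N), a closed form, so the full grid of N+1
    points is produced with one vectorized evaluation.
    """
    return np.cos(np.pi * np.arange(N, -1, -1) / N)

x = cgl_nodes(4)
print(x)  # includes the endpoints -1 and 1
```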

We consider downlink multicast and unicast superposition transmission in multi-layer Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiple Access (OFDMA) systems when only statistical channel state information is available at the transmitter (CSIT). Multiple users can be scheduled by using the time/frequency resources in OFDMA, while for each scheduled user MIMO spatial multiplexing is used to transmit multiple information layers, i.e., single-user (SU)-MIMO. The users only need to feed back to the base station the rank indicator and the long-term average channel signal-to-noise ratio, to indicate a suitable number of transmission layers and a suitable modulation and coding scheme, and to allow the base station to perform user scheduling. This approach is especially relevant for the delivery of common (e.g., popular live event) and independent (e.g., user-personalized) content to a large number of users in deployments in the lower frequency bands operating in Frequency-Division-Duplex (FDD) mode, e.g., sub-1 GHz. We show that the optimal resource allocation that maximizes the ergodic sum-rate involves greedy user selection per OFDM subchannel, superposition transmission of one multicast signal across all subchannels, and a single unicast signal per subchannel. Degrees-of-freedom (DoF) analysis shows that while the lack of instantaneous CSI limits the DoF of unicast messages to the minimum of the numbers of transmit and receive antennas, the multicast message obtains full DoF that increases linearly with the number of users. We present resource allocation algorithms consisting of user selection and power allocation between multicast and unicast signals in each OFDM subchannel. System-level simulations in 5G rural macro-cell scenarios show overall network throughput gains in realistic network environments from the superposition transmission of multicast and unicast signals.
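
The greedy per-subchannel selection can be sketched as follows; the fixed multicast power fraction and the simple decode-multicast-first interference model are illustrative assumptions rather than the paper's optimized power allocation:

```python
import numpy as np

def allocate(snr_db, multicast_frac=0.3):
    """Greedy per-subchannel scheduling sketch.

    snr_db: (n_users, n_subchannels) long-term average SNRs fed back by users.
    For each subchannel the best user gets the unicast layer; the multicast
    rate is set by the worst user so that every receiver can decode it.
    `multicast_frac` is a fixed illustrative power split, not optimized.
    """
    snr = 10 ** (snr_db / 10)
    best = snr.argmax(axis=0)            # greedy unicast user per subchannel
    # Multicast is decoded first (unicast acts as interference), then the
    # scheduled user cancels it and decodes unicast with the remaining power.
    worst = snr.min(axis=0)
    mc_sinr = worst * multicast_frac / (worst * (1 - multicast_frac) + 1)
    uni_snr = snr.max(axis=0) * (1 - multicast_frac)
    return best, np.log2(1 + uni_snr).sum(), np.log2(1 + mc_sinr).sum()

snr_db = np.array([[10.0, 3.0], [6.0, 12.0]])   # 2 users x 2 subchannels
best, uni_rate, mc_rate = allocate(snr_db)
print(best, round(uni_rate, 2), round(mc_rate, 2))
```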

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
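
The homophily-gated aggregation can be sketched as follows; here the pairwise homophily scores are supplied directly rather than learned, and the signed gating rule is an illustrative stand-in for the paper's learned mechanism:

```python
import numpy as np

def propagate(H, A, homophily):
    """One propagation step with pairwise homophily gating (illustrative).

    H: (n, d) node features; A: (n, n) 0/1 adjacency;
    homophily: (n, n) scores in [0, 1] -- learned from topology and
    attributes in the paper, but given directly here.
    A homophilous edge (score near 1) pulls representations together;
    a heterophilous edge (score near 0) pushes them apart.
    """
    sign = 2 * homophily - 1                      # map [0, 1] -> [-1, 1]
    W = A * sign
    deg = np.abs(W).sum(axis=1, keepdims=True).clip(min=1)
    return H + (W @ H) / deg                      # self term + signed neighbor average

H = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
hom = np.array([[0.0, 0.9, 0.1], [0.9, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(propagate(H, A, hom))
```

After one step, the two same-class nodes (0 and 1) end up closer to each other than node 0 is to the different-class node 2, which is the behavior the learned homophily degree is meant to induce.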
