
Unmanned aerial vehicles (UAVs) can be utilized as aerial base stations (ABSs) to provide wireless connectivity for ground users (GUs) in various emergency scenarios. However, maximizing the coverage rate of $M$ GUs by jointly placing $N$ ABSs with limited coverage range is an NP-hard problem whose complexity grows exponentially with $M$ and $N$. The problem is further complicated when the coverage range becomes irregular due to site-specific blockages (e.g., buildings) on the air-ground channel, and/or when the GUs are moving. To address these challenges, we study a multi-ABS movement optimization problem to maximize the average coverage rate of mobile GUs in a site-specific environment. The Spatial Deep Learning with Multi-dimensional Archive of Phenotypic Elites (SDL-ME) algorithm is proposed to tackle this challenging problem by 1) partitioning the complicated ABS movement problem into ABS placement sub-problems, each spanning a finite time horizon; 2) using an encoder-decoder deep neural network (DNN) as an emulator to capture the spatial correlation of ABSs/GUs and thereby reduce the cost of interaction with the actual environment; 3) employing the emulator to speed up a quality-diversity search for the optimal placement solution; and 4) proposing a planning-exploration-serving scheme for multi-ABS movement coordination. Numerical results demonstrate that the proposed approach significantly outperforms the benchmark deep reinforcement learning (DRL)-based method and two other baselines in terms of average coverage rate, training time, and/or sample efficiency. Moreover, with one-time training, our proposed method can be applied in scenarios where the number of ABSs/GUs dynamically changes on site and/or where GU speeds differ or vary, making it more robust and flexible than conventional DRL-based methods.
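
The abstract does not spell out the SDL-ME internals, but the quality-diversity search it builds on (MAP-Elites) can be sketched generically: candidate ABS placements are scored by an emulator instead of the real environment, and the best placement per behavior-descriptor cell is kept in an archive. Below is a minimal Python sketch under simplified assumptions: a toy distance-based coverage emulator stands in for the encoder-decoder DNN, and all constants are illustrative.

    import numpy as np

    # Minimal MAP-Elites-style quality-diversity sketch for ABS placement.
    # "emulator_coverage" is a hypothetical toy stand-in for the paper's
    # learned emulator; it simply counts GUs within a fixed coverage range.

    N_ABS, AREA, COVER_R = 4, 100.0, 20.0
    rng = np.random.default_rng(0)
    gu = rng.uniform(0, AREA, size=(50, 2))          # ground-user positions

    def emulator_coverage(placement):
        """Toy emulator: fraction of GUs within range of any ABS."""
        d = np.linalg.norm(gu[:, None, :] - placement[None, :, :], axis=-1)
        return (d.min(axis=1) <= COVER_R).mean()

    def descriptor(placement):
        """Behavior descriptor: mean ABS position, binned into a 5x5 grid."""
        c = placement.mean(axis=0) / AREA
        return tuple(np.clip((c * 5).astype(int), 0, 4))

    archive = {}                                      # descriptor -> (fitness, elite)
    for it in range(2000):
        if archive and rng.random() < 0.8:            # mutate a random elite
            _, parent = archive[list(archive)[rng.integers(len(archive))]]
            cand = np.clip(parent + rng.normal(0, 5, parent.shape), 0, AREA)
        else:                                         # otherwise sample at random
            cand = rng.uniform(0, AREA, size=(N_ABS, 2))
        fit, key = emulator_coverage(cand), descriptor(cand)
        if key not in archive or fit > archive[key][0]:
            archive[key] = (fit, cand)                # keep the per-cell elite

    best = max(archive.values(), key=lambda e: e[0])
    print(f"best predicted coverage rate: {best[0]:.2f}")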

Related Content

Sixth-generation (6G) mobile communication networks are expected to have dense infrastructure, large antenna arrays, wide bandwidth, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On the one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, 6G will contain numerous data sources with high-quality location-tagged channel data, e.g., the estimated channels or beams between base station (BS) and user equipment (UE), making it possible to better learn the local wireless environment. By exploiting this new opportunity to tackle the CSI acquisition challenge, there is a promising paradigm shift from conventional environment-unaware communications to new environment-aware communications based on the novel approach of the channel knowledge map (CKM). This article aims to provide a comprehensive overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, followed by a comparison of CKM with various existing channel inference techniques. Next, the main techniques for CKM construction are discussed, including both environment model-free and environment model-assisted approaches. Furthermore, a general framework is presented for the utilization of CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted and potential solutions are discussed to inspire future work.
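
As a concrete illustration of the CKM idea, a minimal sketch is given below: location-tagged channel measurements are aggregated into a grid-indexed map so that channel knowledge (here, a single gain value) can be looked up by UE location rather than estimated from real-time pilots. All names, grid sizes, and values are illustrative assumptions, not the article's construction methods.

    import numpy as np

    # Minimal channel gain map (one instance of a CKM): measurements are
    # averaged per grid cell and queried by location.

    GRID, CELL = 50, 10.0                     # 50x50 cells of 10 m each
    gain_sum = np.zeros((GRID, GRID))
    count = np.zeros((GRID, GRID))

    def record(ue_xy, gain_db):
        """Store a location-tagged gain measurement in the map."""
        i, j = (np.array(ue_xy) // CELL).astype(int)
        gain_sum[i, j] += gain_db
        count[i, j] += 1

    def query(ue_xy, default_db=-120.0):
        """Look up the average learned gain at a UE location."""
        i, j = (np.array(ue_xy) // CELL).astype(int)
        return gain_sum[i, j] / count[i, j] if count[i, j] else default_db

    record((123.0, 40.0), -92.5)              # e.g., a reported measurement
    print(query((125.0, 44.0)))               # environment-aware lookup: -92.5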

Multi-antenna relays and intelligent reflecting surfaces (IRSs) have been utilized to construct favorable channels to improve the performance of wireless systems. A common feature between relay systems and IRS-aided systems is the two-hop multiple-input multiple-output (MIMO) channel. As a result, the mutual information (MI) of two-hop MIMO channels has been widely investigated, with very engaging results. However, a rigorous investigation of the fundamental limits of two-hop MIMO channels, i.e., a first- and second-order analysis, is not yet available in the literature, due to the difficulties caused by the two-hop (product) channel and the noise introduced by the relay (active IRS). In this paper, we employ large-scale random matrix theory (RMT), specifically Gaussian tools, to derive closed-form deterministic approximations for the mean and variance of the MI. Additionally, we determine the convergence rates for the mean, variance, and characteristic function of the MI, and prove its asymptotic Gaussianity. Furthermore, we investigate the analytical properties of the fundamental equations that describe the closed-form approximation and prove the existence and uniqueness of the solution. An iterative algorithm is then proposed to solve the fundamental equations. Numerical results validate the accuracy of the theoretical analysis.
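
The closed-form RMT approximation itself cannot be reproduced from the abstract, but the quantity it approximates can be estimated by Monte Carlo, which is how such deterministic approximations are typically validated. The sketch below assumes an illustrative i.i.d. Rayleigh two-hop model in which the relay's noise is forwarded through the second hop; all dimensions and noise levels are assumptions.

    import numpy as np

    # Monte Carlo estimate of the mean and variance of the MI of a two-hop
    # (relay / active-IRS style) MIMO channel with relay noise.

    rng = np.random.default_rng(1)
    n_t, n_r, n_relay, snr, sig_relay2 = 4, 4, 8, 10.0, 0.5

    def sample_mi():
        H1 = (rng.normal(size=(n_relay, n_t))
              + 1j * rng.normal(size=(n_relay, n_t))) / np.sqrt(2 * n_t)
        H2 = (rng.normal(size=(n_r, n_relay))
              + 1j * rng.normal(size=(n_r, n_relay))) / np.sqrt(2 * n_relay)
        H = H2 @ H1                                   # end-to-end product channel
        # Effective noise covariance: relay noise through H2 plus RX noise.
        Rn = sig_relay2 * (H2 @ H2.conj().T) + np.eye(n_r)
        K = np.eye(n_r) + snr * H @ H.conj().T @ np.linalg.inv(Rn)
        return np.linalg.slogdet(K)[1] / np.log(2)    # MI in bits

    samples = np.array([sample_mi() for _ in range(2000)])
    print(f"mean MI ~ {samples.mean():.2f} bits, variance ~ {samples.var():.3f}")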

The large number of sensors and actuators that make up the Internet of Things (IoT) obliges these systems to use diverse technologies and protocols, making IoT networks more heterogeneous than traditional networks. This gives rise to new cybersecurity challenges in protecting these systems and devices, which are continuously connected to the Internet. Intrusion detection systems (IDSs) are used to protect IoT systems from the various anomalies and attacks at the network level, and they can be improved through machine learning techniques. Our work focuses on creating classification models that can feed an IDS, using a dataset containing frames from an IoT system under attack that uses the MQTT protocol. We address two types of methods for classifying the attacks, ensemble methods and deep learning models (more specifically, recurrent networks), with very satisfactory results.
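
As an illustration of the ensemble branch, the sketch below trains a random forest to separate normal frames from attack frames. Since the MQTT dataset is not included here, synthetic placeholder features stand in for the real extracted frame features; only the workflow is meant to carry over.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Placeholder "frame features": two synthetic clusters standing in for
    # features extracted from normal and attacked MQTT traffic.
    rng = np.random.default_rng(2)
    X_normal = rng.normal(0.0, 1.0, size=(500, 10))
    X_attack = rng.normal(1.5, 1.2, size=(500, 10))
    X = np.vstack([X_normal, X_attack])
    y = np.array([0] * 500 + [1] * 500)               # 0 = normal, 1 = attack

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te),
                                target_names=["normal", "attack"]))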

Indoor visible light communication (VLC) is considered secure against attackers outside the confined area where the light propagates, but it is still susceptible to interception from inside the coverage area. A new technology, intelligent reflecting surfaces (IRSs), has recently been introduced, offering a way to enhance physical layer security (PLS). Most research on IRS-assisted VLC assumes the same time of arrival from all reflecting elements and overlooks the effect of time delay and the associated intersymbol interference. This paper tackles, for the first time, the effect of time delay on the secrecy rate in VLC systems. Our results show that, at a fixed light-emitting diode (LED) power of 3 W, the secrecy rate can be enhanced by up to 253% at random positions for the legitimate user when the eavesdropper is located within a 1-meter radius of the LED. Our results also show that careful allocation of the IRS elements can lead to enhanced PLS even when the eavesdropper has a more favourable position and, thus, a better channel gain than the legitimate user.
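
For reference, the underlying secrecy-rate metric can be sketched as the legitimate user's achievable rate minus the eavesdropper's, floored at zero. The channel gains and noise level below are illustrative placeholders; the paper additionally models per-element IRS time delays and the resulting intersymbol interference, which this sketch omits.

    import numpy as np

    def rate(snr):
        """Achievable rate in bits/s/Hz for a given SNR."""
        return np.log2(1.0 + snr)

    P_led = 3.0                       # LED power in watts (as in the abstract)
    # Illustrative channel gains and noise; not taken from the paper.
    h_bob, h_eve, noise = 2.0e-4, 3.0e-4, 1.0e-9

    snr_bob = (P_led * h_bob) ** 2 / noise
    snr_eve = (P_led * h_eve) ** 2 / noise
    secrecy = max(0.0, rate(snr_bob) - rate(snr_eve))
    # With a stronger eavesdropper channel the secrecy rate collapses to
    # zero, which is exactly what careful IRS element allocation targets.
    print(f"secrecy rate ~ {secrecy:.2f} bits/s/Hz")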

The performance of prediction-based assistance for robot teleoperation degrades in unseen or goal-rich environments due to incorrect or quickly changing intent inferences. Poor predictions can confuse operators or cause them to change their control input to implicitly signal their goal, resulting in unnatural movement. We present a new assistance algorithm and interface for robotic manipulation in which an operator can explicitly communicate a manipulation goal by pointing the end-effector. Rapid optimization and parallel collision checking in a local region around the pointing target enable direct, interactive control over grasp and place pose candidates. We compare the explicit pointing interface to an implicit inference-based assistance scheme in a within-subjects user study (N=20) in which participants teleoperate a simulated robot to complete a multi-step singulation and stacking task in cluttered environments. We find that operators prefer the explicit interface, which improved completion time, pick and place success rates, and NASA TLX scores. Our code is available at https://github.com/NVlabs/fast-explicit-teleop.
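
This is not the authors' implementation, but the local candidate-generation step can be sketched as follows: sample grasp positions in a region around the pointed-at target, discard samples that collide with obstacles (here, illustrative spheres), and rank the rest by distance to the target.

    import numpy as np

    # Sketch of local grasp-candidate generation around a pointing target.
    # Sphere obstacles and the distance-based ranking are stand-ins for
    # the paper's parallel collision checking and optimization.

    rng = np.random.default_rng(3)
    target = np.array([0.5, 0.2, 0.1])                 # pointed-at location (m)
    obstacles = np.array([[0.45, 0.25, 0.1],           # illustrative sphere centers
                          [0.60, 0.10, 0.1]])
    OBS_R, LOCAL_R = 0.04, 0.08

    cands = target + rng.uniform(-LOCAL_R, LOCAL_R, size=(256, 3))
    d_obs = np.linalg.norm(cands[:, None, :] - obstacles[None, :, :], axis=-1)
    free = cands[(d_obs > OBS_R).all(axis=1)]          # collision-free candidates
    best = free[np.argmin(np.linalg.norm(free - target, axis=1))]
    print("best grasp candidate:", np.round(best, 3))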

With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies allowing one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to consider each point as a source of desired signal or a source of interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
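
A Monte Carlo sketch of the core model may help: satellites are dropped as a Poisson number of uniform points on a sphere around the Earth, and the typical user is covered if the SIR from the strongest visible satellite exceeds a threshold. The density, path-loss exponent, and threshold below are illustrative; the paper's model additionally includes random heights, terrestrial base stations, biasing, and fading.

    import numpy as np

    # Coverage probability under a Poisson point process on a sphere.
    rng = np.random.default_rng(4)
    R_E, H, ALPHA = 6371e3, 600e3, 3.0     # Earth radius, sat altitude (m), PL exp.
    LAM, THETA = 1e-12, 1.0                # density (pts/m^2), SIR threshold

    def drop_satellites():
        r = R_E + H
        n = rng.poisson(LAM * 4 * np.pi * r**2)        # Poisson number of points
        v = rng.normal(size=(n, 3))                    # uniform directions
        return r * v / np.linalg.norm(v, axis=1, keepdims=True)

    user = np.array([0.0, 0.0, R_E])                   # typical user at north pole
    cov, TRIALS = 0, 2000
    for _ in range(TRIALS):
        sats = drop_satellites()
        vis = sats[sats[:, 2] > R_E]                   # above the user's horizon
        if len(vis) == 0:
            continue
        p = np.linalg.norm(vis - user, axis=1) ** (-ALPHA)   # received powers
        s = p.max()                                    # strongest (nearest) sat
        cov += (s / max(p.sum() - s, 1e-30)) > THETA   # serving vs. interference
    print(f"coverage probability ~ {cov / TRIALS:.2f}")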

Extremely large aperture arrays can enable unprecedented spatial multiplexing in beyond 5G systems due to their extremely narrow beamfocusing capabilities. However, acquiring the spatial correlation matrix to enable efficient channel estimation is a complex task due to the vast number of antenna dimensions. Recently, a new estimation method called the "reduced-subspace least squares (RS-LS) estimator" has been proposed for densely packed arrays. This method relies solely on the geometry of the array to limit the estimation resources. In this paper, we address a gap in the existing literature by deriving the average spectral efficiency for a certain distribution of user equipments (UEs) and a lower bound on it when using the RS-LS estimator. This bound is determined by the channel gain and the statistics of the normalized spatial correlation matrices of potential UEs but, importantly, does not require knowledge of a specific UE's spatial correlation matrix. We establish that there exists a pilot length that maximizes this expression. Additionally, we derive an approximate expression for the optimal pilot length under low signal-to-noise ratio (SNR) conditions. Simulation results validate the tightness of the derived lower bound and the effectiveness of using the optimized pilot length.
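
The core RS-LS idea can be sketched in a few lines: the plain least-squares estimate is projected onto the low-dimensional subspace in which the array geometry allows channels to lie, which suppresses the noise outside that subspace. The isotropic-scattering correlation model and all constants below are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    # Reduced-subspace LS sketch on a uniform linear array (ULA).
    rng = np.random.default_rng(5)
    M, d = 64, 0.25                                    # antennas, spacing (wavelengths)
    idx = np.arange(M)
    # Isotropic-scattering correlation on a ULA: R[m, n] = sinc(2 d (m - n)).
    R = np.sinc(2 * d * (idx[:, None] - idx[None, :]))
    eigval, U = np.linalg.eigh(R)
    Us = U[:, eigval > 1e-6]                           # span of feasible channels

    # True channel lies in the feasible subspace; the raw observation plays
    # the role of the plain LS estimate.
    h = Us @ (rng.normal(size=(Us.shape[1],)) / np.sqrt(Us.shape[1]))
    y = h + 0.3 * rng.normal(size=M)
    h_rsls = Us @ (Us.conj().T @ y)                    # project onto the subspace
    print(f"LS error {np.linalg.norm(y - h):.3f} "
          f"vs RS-LS error {np.linalg.norm(h_rsls - h):.3f}")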

Large language models (LLMs) have significantly advanced the field of natural language processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of applications. The great promise of LLMs as general task solvers has motivated people to extend their functionality far beyond that of a ``chatbot'', and to use them as assistants or even replacements for domain experts and tools in specific domains such as healthcare, finance, and education. However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles, caused by the heterogeneity of domain data, the sophistication of domain knowledge, the uniqueness of domain objectives, and the diversity of constraints (e.g., various social norms, cultural conformity, religious beliefs, and ethical standards in the domain applications). To fill this gap, research and practice on the domain specialization of LLMs has grown explosively in recent years, which calls for a comprehensive and systematic review to better summarize and guide this promising area. In this survey paper, we first propose a systematic taxonomy that categorizes LLM domain-specialization techniques based on the level of access to the LLM, and summarize the framework for all the subcategories as well as their relations and differences. We also present a comprehensive taxonomy of critical application domains that can benefit from specialized LLMs, discussing their practical significance and open challenges. Furthermore, we offer insights into the current research status and future trends in this area.

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it employs. First, distributed GNN training is classified into several categories according to workflow, and the computational patterns, communication patterns, and optimization techniques proposed by recent work are introduced. Second, the software frameworks and hardware platforms for distributed GNN training are introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing its uniqueness. Finally, interesting issues and opportunities in this field are discussed.
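
To make the partition-based workflow concrete, the toy sketch below mimics the communication pattern of partition-parallel training: each worker owns part of the graph and must fetch boundary ("halo") node features from remote partitions before aggregating locally. The graph, features, and in-process "workers" are illustrative stand-ins, not any surveyed framework's API.

    import numpy as np

    # 4-node graph: nodes 0-1 owned by worker 0, nodes 2-3 by worker 1.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    X = np.arange(8, dtype=float).reshape(4, 2)       # node features
    parts = [np.array([0, 1]), np.array([2, 3])]

    def local_aggregate(worker):
        own = parts[worker]
        # Communication step: neighbors of owned nodes living elsewhere.
        halo = np.setdiff1d(np.nonzero(A[own].sum(axis=0))[0], own)
        needed = np.concatenate([own, halo])
        # Mean-aggregate neighbor features (GraphSAGE-style) for owned nodes.
        deg = A[own][:, needed].sum(axis=1, keepdims=True)
        return A[own][:, needed] @ X[needed] / np.maximum(deg, 1)

    H = np.vstack([local_aggregate(0), local_aggregate(1)])
    print(H)   # matches single-machine mean aggregation over the full graph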

Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. Then the human-like regions in the segmented area are identified by matching a set of features extracted from the superpixels against learned features in a codebook. A carried-object probability map is generated using the complement of the superpixels' matching probabilities to human-like regions, together with background information. A group of superpixels with high carried-object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and the results show that it is competitive with or better than the state of the art.
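
The probability-map step can be sketched as follows: each superpixel feature is softly matched against a codebook of human-appearance features, and the carried-object probability is taken as the complement of that match. The features, codebook, and merge threshold below are illustrative placeholders rather than the paper's learned quantities.

    import numpy as np

    # Carried-object probability from codebook matching (toy sketch).
    rng = np.random.default_rng(6)
    codebook = rng.normal(0.0, 1.0, size=(20, 8))     # learned human-region features
    superpix = rng.normal(0.5, 1.0, size=(30, 8))     # foreground superpixel features

    def match_prob(f):
        """Soft match to the nearest codeword, mapped into (0, 1]."""
        d = np.linalg.norm(codebook - f, axis=1).min()
        return np.exp(-d)

    p_human = np.array([match_prob(f) for f in superpix])
    p_carried = 1.0 - p_human                         # complement, as in the abstract
    carried = np.nonzero(p_carried > 0.9)[0]          # candidates to merge into a shape
    print(f"{len(carried)} of {len(superpix)} superpixels flagged as carried-object")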
