
6G networks are envisioned to deliver a large diversity of applications and meet stringent quality of service (QoS) requirements. Hence, integrated terrestrial and non-terrestrial networks (TN-NTNs) are anticipated to be key enabling technologies. However, TN-NTN integration faces a number of challenges that could be addressed through network virtualization technologies such as Software-Defined Networking (SDN), Network Function Virtualization (NFV), and network slicing. In this survey, we provide a comprehensive review of the adaptation of these networking paradigms in 6G networks. We begin with a brief overview of NTNs and virtualization techniques. Then, we highlight the integral role of Artificial Intelligence (AI) in improving network virtualization by summarizing the major research areas where AI models are applied. Building on this foundation, the survey identifies the main issues arising from the adaptation of SDN, NFV, and network slicing in integrated TN-NTNs, and proposes a taxonomy of integrated TN-NTN virtualization, offering a thorough review of relevant contributions. The taxonomy is built on a four-level classification indicating, for each study, the level of TN-NTN integration, the virtualization technology used, the problem addressed, the type of study, and the proposed solution, which can be based on conventional or AI-enabled methods. Moreover, we present a summary of the simulation tools commonly used in the testing and validation of such networks. Finally, we discuss open issues and give insights into future research directions for the advancement of integrated TN-NTN virtualization in the 6G era.

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

Simulating user interactions enables a more user-oriented evaluation of information retrieval (IR) systems. While user simulations are cost-efficient and reproducible, many approaches lack fidelity with respect to real user behavior. Most notably, current user models neglect the user's context, which is the primary driver of perceived relevance and of the interactions with the search results. To address this, this work introduces the simulation of context-driven query reformulations. The proposed query generation methods build upon recent Large Language Model (LLM) approaches and consider the user's context throughout the simulation of a search session. Compared to simple context-free query generation approaches, these methods show better effectiveness and allow the simulation of more efficient IR sessions. Similarly, our evaluations consider more interaction context than current session-based measures and reveal interesting complementary insights in addition to the established evaluation protocols. We conclude with directions for future work and provide an entirely open experimental setup.
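As a rough illustration of how such a simulation loop might be wired up, the sketch below generates context-conditioned query reformulations during a simulated session; the `call_llm` helper, the prompt wording, and the session logic are hypothetical placeholders rather than the paper's actual setup.

```python
# Minimal sketch of context-driven query reformulation in a simulated search
# session. The LLM call is abstracted behind a hypothetical `call_llm` function;
# names and prompt wording are illustrative, not taken from the paper.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call (local model or API)."""
    raise NotImplementedError("plug in your LLM backend here")

def reformulate_query(topic: str, previous_queries: list[str],
                      seen_snippets: list[str]) -> str:
    """Generate the next query conditioned on the simulated user's context."""
    prompt = (
        f"Search topic: {topic}\n"
        f"Queries issued so far: {previous_queries}\n"
        f"Snippets already examined: {seen_snippets}\n"
        "Propose the next query this user would issue, avoiding repetition:"
    )
    return call_llm(prompt).strip()

def simulate_session(topic: str, search_engine, max_turns: int = 5):
    """Run a simple simulated session: query, inspect results, reformulate."""
    queries, seen = [], []
    query = topic
    for _ in range(max_turns):
        results = search_engine(query)      # returns a ranked list of snippets
        seen.extend(results[:3])            # the simulated user reads the top 3
        queries.append(query)
        query = reformulate_query(topic, queries, seen)
    return queries
```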

Researchers commonly use difference-in-differences (DiD) designs to evaluate public policy interventions. While established methodologies exist for estimating effects in the context of binary interventions, policies often result in varied exposures across regions implementing the policy. Yet, existing approaches for incorporating continuous exposures face substantial limitations in addressing confounding variables associated with intervention status, exposure levels, and outcome trends. These limitations significantly constrain policymakers' ability to fully comprehend policy impacts and design future interventions. In this study, we propose innovative estimators for causal effect curves within the DiD framework, accounting for multiple sources of confounding. Our approach accommodates misspecification of a subset of treatment, exposure, and outcome models while avoiding any parametric assumptions on the effect curve. We present the statistical properties of the proposed methods and illustrate their application through simulations and a study investigating the diverse effects of a nutritional excise tax.
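For intuition only, the toy simulation below computes a naive DiD effect curve over a continuous exposure by differencing out the control-group trend and smoothing the remaining change across exposure bins; it omits the confounding adjustments and robustness properties that are the paper's actual contribution, and all names and the data-generating process are illustrative assumptions.

```python
# Illustrative sketch only: a naive difference-in-differences "effect curve"
# under a continuous exposure, obtained by binning pre/post outcome changes
# against exposure. The paper's estimators additionally adjust for confounding
# and tolerate partial model misspecification; none of that is shown here.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, n)                  # region adopts the policy or not
exposure = np.where(treated == 1, rng.uniform(0, 1, n), 0.0)
y_pre = rng.normal(0, 1, n)
true_effect = 2.0 * exposure                     # effect grows with exposure
y_post = y_pre + 0.5 + true_effect + rng.normal(0, 1, n)  # 0.5 = common trend

change = y_post - y_pre
trend = change[treated == 0].mean()              # counterfactual trend from controls

# Effect curve: binned mean of (change - trend) over the exposure level.
bins = np.linspace(0, 1, 11)
centers = 0.5 * (bins[:-1] + bins[1:])
curve = [
    (change[(treated == 1) & (exposure >= lo) & (exposure < hi)] - trend).mean()
    for lo, hi in zip(bins[:-1], bins[1:])
]
for c, e in zip(centers, curve):
    print(f"exposure ~ {c:.2f}: estimated effect {e:.2f} (truth {2.0 * c:.2f})")
```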

As a critical technology for next-generation communication networks, integrated sensing and communication (ISAC) aims to achieve the harmonious coexistence of communication and sensing. The degrees of freedom (DoF) of ISAC are limited because multiple performance metrics are used for communication and sensing. Reconfigurable Intelligent Surfaces (RIS) composed of metamaterials can enhance the DoF in the spatial domain of ISAC systems. However, the availability of perfect Channel State Information (CSI) is a prerequisite for the gain brought by the RIS, which is not realistic in practical environments. Therefore, under the imperfect CSI condition, we propose a decomposition-based large deviation inequality approach to eliminate the impact of CSI error on the communication rate and the sensing Cramér-Rao bound (CRB). Then, an alternating optimization (AO) algorithm based on semi-definite relaxation (SDR) and gradient extrapolated majorization-maximization (GEMM) is proposed to solve the transmit beamforming and discrete RIS beamforming problems. We also analyze the complexity and convergence of the proposed algorithm. Simulation results show that the proposed algorithms effectively eliminate the influence of CSI error and exhibit good convergence behavior. Notably, when CSI error exists, the gain brought by the RIS decreases as the number of RIS elements increases. Finally, we summarize and outline future research directions.
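As a loose structural analogue, the sketch below runs a generic alternating-optimization loop for a single-user RIS-aided link, alternating a matched-filter transmit beamformer with an element-wise discrete RIS phase search. It maximizes effective channel gain rather than the paper's rate/CRB objective, ignores CSI error, and uses none of the SDR/GEMM machinery, so it should be read only as a skeleton of the AO idea.

```python
# Generic alternating-optimization (AO) skeleton for a single-user RIS link:
# alternate a matched-filter transmit beamformer with a per-element discrete
# RIS phase search that maximizes the effective channel gain.
import numpy as np

rng = np.random.default_rng(1)
Nt, M, bits = 8, 32, 2                           # tx antennas, RIS elements, phase bits
phases = np.exp(1j * 2 * np.pi * np.arange(2 ** bits) / 2 ** bits)

h_d = (rng.normal(size=Nt) + 1j * rng.normal(size=Nt)) / np.sqrt(2)          # direct link
G = (rng.normal(size=(M, Nt)) + 1j * rng.normal(size=(M, Nt))) / np.sqrt(2)  # BS -> RIS
h_r = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)            # RIS -> user

def gain(th, w):
    """Objective: magnitude of the effective channel projected on the beamformer."""
    return abs((h_d + G.conj().T @ (th * h_r)).conj() @ w)

theta = np.ones(M, dtype=complex)                # RIS reflection coefficients
for it in range(10):
    w = h_d + G.conj().T @ (theta * h_r)
    w = w / np.linalg.norm(w)                    # matched-filter transmit beamformer
    for m in range(M):                           # element-wise discrete phase search
        trials = [np.where(np.arange(M) == m, p, theta) for p in phases]
        theta = max(trials, key=lambda th: gain(th, w))
    print(f"iter {it}: effective channel gain = {gain(theta, w):.3f}")
```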

The ever-increasing demand for data services and the proliferation of user equipment (UE) have resulted in a significant rise in the volume of mobile traffic. Moreover, in multi-band networks, non-uniform traffic distribution among the different operational bands can lead to congestion, which can adversely impact the user's quality of experience. Load balancing is a critical aspect of network optimization: it ensures that traffic is evenly distributed among the bands, avoiding congestion and improving the user experience. Traditional load balancing approaches rely only on the band's channel quality as a load indicator for moving UEs between bands, disregarding the UEs' demands and the bands' resources, which leads to suboptimal balancing and utilization of resources. To address this challenge, we propose an event-based algorithm in which we model the load balancing problem as a multi-objective stochastic optimization and assign UEs to bands in a probabilistic manner. The goal is to evenly distribute traffic across the available bands according to their resources, while maintaining a minimal number of inter-frequency handovers to avoid signaling overhead and interruption time. Simulation results show that the proposed algorithm enhances the network's performance and outperforms traditional load balancing approaches in terms of throughput and interruption time.
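A minimal sketch of the probabilistic assignment idea is given below: arriving UEs are drawn into bands with probabilities that favor bands with more remaining headroom. The headroom rule, the variable names, and all parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: assign UEs to bands probabilistically in proportion to each band's
# available resources, rather than by channel quality alone.
import numpy as np

rng = np.random.default_rng(42)
band_bandwidth_mhz = np.array([20.0, 40.0, 100.0])   # per-band resources
band_load_mbps = np.array([50.0, 60.0, 80.0])        # current carried traffic

def assignment_probabilities(bandwidth, load):
    """Favor bands with more headroom (resource share minus load share)."""
    headroom = np.maximum(bandwidth / bandwidth.sum() - load / load.sum(), 1e-6)
    return headroom / headroom.sum()

def assign_ue(ue_demand_mbps, bandwidth, load):
    """Draw a band for an arriving UE and update the per-band load."""
    p = assignment_probabilities(bandwidth, load)
    band = rng.choice(len(bandwidth), p=p)
    load[band] += ue_demand_mbps
    return band

for demand in rng.uniform(1.0, 10.0, size=200):       # 200 arriving UEs
    assign_ue(demand, band_bandwidth_mhz, band_load_mbps)

print("final load per band (Mbps):", np.round(band_load_mbps, 1))
print("load per MHz:", np.round(band_load_mbps / band_bandwidth_mhz, 2))
```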

Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment. To address this challenge, we introduce the outlier-aware weight quantization (OWQ) method, which aims to minimize the LLM's footprint through low-precision representation. OWQ prioritizes a small subset of structured weights that are sensitive to quantization, storing them in high precision, while applying highly tuned quantization to the remaining dense weights. This sensitivity-aware mixed-precision scheme reduces the quantization error notably, and extensive experiments demonstrate that 3.1-bit models using OWQ perform comparably to 4-bit models optimized by OPTQ. Furthermore, OWQ incorporates a parameter-efficient fine-tuning method for task-specific adaptation, called weak column tuning (WCT), enabling accurate task-specific LLM adaptation with minimal memory overhead in the optimized format. OWQ represents a notable advancement in the flexibility, efficiency, and practicality of the LLM optimization literature. The source code is available at //github.com/xvyaward/owq.
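The mixed-precision principle can be illustrated with a short numpy sketch: keep the few most sensitive weight columns in full precision and quantize the rest to a low bit-width. The sensitivity proxy and the quantizer below are deliberately simplified stand-ins for OWQ's Hessian-based column selection and OPTQ-style quantization.

```python
# Rough sketch of sensitivity-aware mixed precision: a few "weak" columns stay
# in full precision, the dense remainder is quantized to 3 bits.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)
W[:, 7] *= 50.0                                   # plant an "outlier" column

def quantize_columnwise(w, bits=3):
    """Symmetric per-column uniform quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max(axis=0, keepdims=True) / qmax
    return np.round(w / scale) * scale

sensitivity = np.abs(W).max(axis=0)               # crude proxy for column sensitivity
keep_fp = np.argsort(sensitivity)[-4:]            # retain the 4 most sensitive columns

W_q = quantize_columnwise(W, bits=3)
W_q[:, keep_fp] = W[:, keep_fp]                   # restore high-precision columns

dense_err = np.abs(quantize_columnwise(W, bits=3) - W).mean()
mixed_err = np.abs(W_q - W).mean()
print(f"mean abs error, fully 3-bit: {dense_err:.5f}; mixed precision: {mixed_err:.5f}")
```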

Accurate localization of mobile terminals is crucial for integrated sensing and communication systems. Existing fingerprint localization methods, which deduce coordinates from channel information in pre-defined rectangular areas, struggle with the heterogeneous fingerprint distribution inherent in non-line-of-sight (NLOS) scenarios. To address this problem, we introduce a novel multi-source information fusion learning framework referred to as the Autosync Multi-Domain NLOS Localization (AMDNLoc). Specifically, AMDNLoc employs a two-stage matched filter fused with a target tracking algorithm and iterative centroid-based clustering to automatically and irregularly segment NLOS regions, ensuring a uniform fingerprint distribution within the channel state information across the frequency, power, and time-delay domains. Additionally, the framework utilizes a segment-specific linear classifier array, coupled with deep residual network-based feature extraction and fusion, to establish the correlation function between fingerprint features and coordinates within these regions. Simulation results demonstrate that AMDNLoc significantly enhances localization accuracy, by over 55% compared with a traditional convolutional neural network, on the wireless artificial intelligence research dataset.
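A much simplified sketch of the segment-then-fit idea follows: cluster fingerprints into regions, then learn a separate mapping from fingerprint features to coordinates per region. The matched filtering, tracking, and deep residual fusion of AMDNLoc are not reproduced, and the data and model choices are placeholders.

```python
# Sketch: segment fingerprints into regions by clustering, then fit a separate
# linear mapping from fingerprint features to coordinates per region.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n, d = 3000, 16
features = rng.normal(size=(n, d))                       # stand-in CSI fingerprints
coords = features[:, :2] @ rng.normal(size=(2, 2)) + rng.normal(0, 0.1, size=(n, 2))

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
models = {}
for r in range(4):
    mask = regions.labels_ == r
    models[r] = LinearRegression().fit(features[mask], coords[mask])

def localize(x):
    r = regions.predict(x.reshape(1, -1))[0]             # pick the region first
    return models[r].predict(x.reshape(1, -1))[0]        # then apply its regional model

print("example estimate:", localize(features[0]), "truth:", coords[0])
```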

6G networks are expected to provide more diverse capabilities than their predecessors and are likely to support applications beyond current mobile use cases, such as virtual and augmented reality (VR/AR), AI, and the Internet of Things (IoT). In contrast to typical multiple-input multiple-output (MIMO) systems, THz MIMO precoding cannot be conducted entirely at baseband using digital precoders, because the number of signal mixers and analog-to-digital converters that can be supported is restricted by their cost and power consumption. In this thesis, we analyzed the performance of multiuser massive MIMO-OFDM THz wireless systems with hybrid beamforming. Carrier frequency offset (CFO) is one of the best-known impairments in OFDM; for practicality, we accounted for the CFO, which results in intercarrier interference (ICI). Incorporating the combined impact of molecular absorption, high sparsity, and multi-path fading, we analyzed a three-dimensional wideband THz channel together with the carrier frequency offset in multi-carrier systems. With this model, we first presented a two-stage wideband hybrid beamforming technique comprising Riemannian manifold optimization for the analog beamforming followed by a zero-forcing (ZF) approach for the digital beamforming. We adjusted the objective function to reduce complexity: instead of maximizing the bit rate, we determined the parameters by minimizing interference. Numerical results demonstrate the significance of considering ICI for a practical THz system implementation. We demonstrated how our change in problem formulation minimizes latency without compromising results. We also evaluated the spectral efficiency by varying the number of RF chains and antennas. The spectral efficiency grows as the number of RF chains and antennas increases, but the spectral efficiency per antenna declines as the number of users increases.
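The two-stage structure can be sketched compactly: a phase-only analog beamformer followed by a zero-forcing digital precoder on the effective channel. The analog stage below simply takes the phases of a channel-based guess instead of running the Riemannian manifold optimization used in the thesis, and CFO/ICI are not modeled, so this is only a structural sketch under those assumptions.

```python
# Sketch of two-stage hybrid beamforming: unit-modulus (phase-only) analog
# precoder, then a zero-forcing (ZF) digital precoder on the effective channel.
import numpy as np

rng = np.random.default_rng(7)
Nt, Nrf, K = 64, 4, 4                              # antennas, RF chains, users
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)

# Stage 1: analog beamformer with unit-modulus entries (phase-only).
F_rf = np.exp(1j * np.angle(H[:Nrf, :].conj().T))           # Nt x Nrf

# Stage 2: ZF digital precoder on the effective channel H_eff = H F_rf.
H_eff = H @ F_rf                                             # K x Nrf
F_bb = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)  # Nrf x K
F = F_rf @ F_bb
F /= np.linalg.norm(F)                                       # total power constraint

# With ZF, each user's effective channel is (nearly) interference-free.
print("H @ F (should be close to a scaled identity):")
print(np.round(np.abs(H @ F), 3))
```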

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer since they do not make any adaptation for downstream tasks. To solve these problems, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our methods adaptively select and combine different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of the auxiliary tasks by quantifying the consistency between the auxiliary tasks and the target task. In addition, we learn the weighting model through meta-learning. Our methods can be applied to various transfer learning approaches; they perform well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed methods effectively combine auxiliary tasks with the target task and significantly improve performance compared to state-of-the-art methods.
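One common way to quantify consistency between an auxiliary task and the target task is the cosine similarity of their gradients on the shared encoder; the toy sketch below reweights an auxiliary loss this way at every step. The paper instead learns the weights with a meta-learned weighting model, so this is only a rough illustration of the weighting idea, with placeholder models and data.

```python
# Toy sketch: weight an auxiliary loss by the cosine similarity between its
# gradient and the target task's gradient on the shared encoder parameters.
import torch

encoder = torch.nn.Linear(16, 8)                      # shared encoder stand-in
target_head = torch.nn.Linear(8, 2)
aux_head = torch.nn.Linear(8, 16)
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(target_head.parameters()) +
                       list(aux_head.parameters()), lr=1e-3)

x = torch.randn(32, 16)
y = torch.randint(0, 2, (32,))

for step in range(100):
    h = encoder(x)
    loss_t = torch.nn.functional.cross_entropy(target_head(h), y)
    loss_a = torch.nn.functional.mse_loss(aux_head(h), x)   # reconstruction auxiliary

    g_t = torch.autograd.grad(loss_t, encoder.weight, retain_graph=True)[0]
    g_a = torch.autograd.grad(loss_a, encoder.weight, retain_graph=True)[0]
    cos = torch.nn.functional.cosine_similarity(g_t.flatten(), g_a.flatten(), dim=0)
    w_aux = torch.clamp(cos, min=0.0).detach()               # only help, never hurt

    opt.zero_grad()
    (loss_t + w_aux * loss_a).backward()
    opt.step()

print("final target loss:", float(loss_t))
```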

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, which must often stay local because of edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling the temporal dynamics modeling on the devices and the spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on a traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on the edge devices, while incurring a modest communication cost.
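The split of computation can be sketched structurally: each node encodes its own series locally, the server performs one round of message passing over the node embeddings, and each node decodes its forecast from its local state plus the received message. The sketch below is a forward pass only, with toy shapes, and does not reproduce CNFGNN's alternating-optimization training or its exact architecture.

```python
# Structural sketch of cross-node split computation: temporal encoding stays on
# each device (raw series never leave), the server exchanges only embeddings.
import torch

num_nodes, seq_len, hidden = 5, 12, 16
adj = torch.eye(num_nodes) + torch.rand(num_nodes, num_nodes).round()  # toy graph
adj = adj / adj.sum(dim=1, keepdim=True)                               # row-normalize

node_encoders = [torch.nn.GRU(1, hidden, batch_first=True) for _ in range(num_nodes)]
node_decoders = [torch.nn.Linear(2 * hidden, 1) for _ in range(num_nodes)]
server_gnn = torch.nn.Linear(hidden, hidden)           # one-layer message passing

# 1) On-device: encode each node's local series into an embedding.
series = [torch.randn(1, seq_len, 1) for _ in range(num_nodes)]
embeddings = torch.cat([enc(s)[1].squeeze(0) for enc, s in zip(node_encoders, series)])

# 2) On the server: propagate embeddings over the graph (no raw data exchanged).
messages = torch.relu(server_gnn(adj @ embeddings))

# 3) Back on each device: forecast from the local embedding + received message.
forecasts = [dec(torch.cat([embeddings[i], messages[i]]))
             for i, dec in enumerate(node_decoders)]
print([float(f) for f in forecasts])
```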

Current models for event causality identification (ECI) mainly adopt a supervised framework, which heavily relies on labeled data for training. Unfortunately, the scale of currently annotated datasets is relatively limited and cannot provide sufficient support for models to capture useful indicators from causal statements, especially when handling new, unseen cases. To alleviate this problem, we propose a novel approach, named CauSeRL for short, which leverages external causal statements for event causality identification. First, we design a self-supervised framework to learn context-specific causal patterns from external causal statements. Then, we adopt a contrastive transfer strategy to incorporate the learned context-specific causal patterns into the target ECI model. Experimental results show that our method significantly outperforms previous methods on EventStoryLine and Causal-TimeBank (improvements of 2.0 and 3.4 points in F1, respectively).
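The contrastive transfer idea can be illustrated with an InfoNCE-style loss that pulls an event-pair representation toward encodings of external causal statements and pushes it away from non-causal ones. The encoders and data below are random placeholders, and the actual CauSeRL objectives differ in detail.

```python
# Toy InfoNCE-style contrastive loss: causal statements act as positives for a
# target event-pair representation, non-causal statements as negatives.
import torch
import torch.nn.functional as F

dim = 64
event_pair = F.normalize(torch.randn(1, dim), dim=-1)        # target ECI representation
causal_stmts = F.normalize(torch.randn(8, dim), dim=-1)      # positives
noncausal_stmts = F.normalize(torch.randn(32, dim), dim=-1)  # negatives
temperature = 0.1

pos = (event_pair @ causal_stmts.T) / temperature             # 1 x 8
neg = (event_pair @ noncausal_stmts.T) / temperature          # 1 x 32
logits = torch.cat([pos, neg], dim=1)

# Average the loss of each positive against the full candidate set.
log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
contrastive_loss = -log_prob[:, :pos.shape[1]].mean()
print("contrastive transfer loss:", float(contrastive_loss))
```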
