
In recent years, non-terrestrial networks (NTNs) have emerged as a viable solution for providing ubiquitous connectivity for future wireless networks due to their ability to reach large geographical areas. However, the efficient integration and operation of an NTN with a classic terrestrial network (TN) is challenging due to the large number of parameters to tune. In this paper, we consider the downlink scenario of an integrated TN-NTN transmitting over the S band, composed of low-earth orbit (LEO) satellites overlapping a large-scale ground cellular network. We propose a new resource management framework to optimize user equipment (UE) performance by properly controlling the spectrum allocation, the UE association, and the transmit power of ground base stations (BSs) and satellites. Our study reveals that, in rural scenarios, NTNs combined with the proposed radio resource management framework reduce the number of UEs that are out of coverage, highlighting the important role of NTNs in providing ubiquitous connectivity, and greatly improve the overall capacity of the network. Specifically, our solution leads to more than 200% gain in mean data rate with respect to both a network without satellites and a standard integrated TN-NTN whose resource allocation follows the 3GPP recommendations.
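
As a rough illustration of the kind of joint association and spectrum split such a framework controls, the toy sketch below associates UEs with either a terrestrial BS or a LEO satellite using a biased received-power rule and splits the band orthogonally between the two layers. All numbers (transmit powers, path-loss draws, association bias, bandwidth split) are assumptions for illustration, not the paper's actual parameters or algorithm.

```python
# Toy sketch of biased UE association between terrestrial BSs and LEO satellites with an
# orthogonal spectrum split between the two layers. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_UE, N_BS, N_SAT = 200, 10, 2
P_BS_DBM, P_SAT_DBM = 46.0, 60.0          # assumed transmit powers (dBm)
NOISE_DBM = -100.0
SAT_BIAS_DB = 5.0                         # association bias toward the NTN layer

# Random large-scale gains (dB) standing in for S-band path loss and shadowing.
gain_bs = -rng.uniform(90, 130, size=(N_UE, N_BS))
gain_sat = -rng.uniform(110, 125, size=(N_UE, N_SAT))

rsrp_bs = (P_BS_DBM + gain_bs).max(axis=1)        # strongest terrestrial cell per UE
rsrp_sat = (P_SAT_DBM + gain_sat).max(axis=1)     # strongest satellite per UE
use_sat = rsrp_sat + SAT_BIAS_DB > rsrp_bs        # biased max-RSRP association

# Orthogonal split of a 30 MHz carrier between the TN and NTN layers (assumed 80/20),
# shared equally among the UEs associated with each layer.
BW_TN, BW_NTN = 24e6, 6e6
n_tn, n_ntn = int((~use_sat).sum()), int(use_sat.sum())
snr_db = np.where(use_sat, rsrp_sat, rsrp_bs) - NOISE_DBM
bw_per_ue = np.where(use_sat, BW_NTN / max(n_ntn, 1), BW_TN / max(n_tn, 1))
rate = bw_per_ue * np.log2(1 + 10 ** (snr_db / 10))

print(f"UEs on NTN: {n_ntn}/{N_UE}, mean rate: {rate.mean() / 1e6:.2f} Mbit/s")
```

Sweeping the association bias and the bandwidth split in a toy model like this already shows how a static 3GPP-style allocation can leave capacity unused, which is the gap the proposed framework targets.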

Related content


The unprecedented development of non-terrestrial networks (NTNs) opens the low-altitude airspace to commercial and social flying activities. The integration of NTNs and terrestrial networks leads to the emergence of the low-altitude economy (LAE). A series of LAE application scenarios are enabled by the sensing, communication, and transportation functionalities of aircraft. The prerequisite technologies supporting the LAE are introduced in this paper, including network coverage and aircraft detection. The LAE functionalities assisted by aircraft with respect to sensing and communication are then summarized, including terrestrial and non-terrestrial target sensing, ubiquitous coverage, relaying, and traffic offloading. Finally, several future directions are identified, including aircraft collaboration, energy efficiency, and artificial-intelligence-enabled LAE.

In sixth-generation (6G) networks, massive low-power devices are expected to sense the environment and deliver tremendous amounts of data. To enhance radio resource efficiency, the integrated sensing and communication (ISAC) technique exploits both the sensing and communication functionalities of signals, while the simultaneous wireless information and power transfer (SWIPT) technique uses the same signals as carriers for both information and power delivery. The further combination of ISAC and SWIPT leads to an advanced technology, namely integrated sensing, communication, and power transfer (ISCPT). In this paper, a multi-user multiple-input multiple-output (MIMO) ISCPT system is considered, where a base station equipped with multiple antennas transmits messages to multiple information receivers (IRs), transfers power to multiple energy receivers (ERs), and senses a target simultaneously. The sensing target can be regarded as a point or an extended surface. When the locations of the IRs and ERs are separated, the MIMO beamforming designs are optimized to improve the sensing performance while meeting the communication and power transfer requirements. The resultant non-convex optimization problems are solved using a series of techniques, including the Schur complement transformation and rank reduction. Moreover, when the IRs and ERs are co-located, the power splitting factors are jointly optimized with the beamformers to balance the performance of communication and power transfer. To better understand the performance of ISCPT, the target positioning problem is further investigated. Simulations are conducted to verify the effectiveness of our proposed designs, which also reveal a performance tradeoff among sensing, communication, and power transfer.
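
To make the beamforming formulation concrete, the sketch below solves a stripped-down version of the co-existence problem with semidefinite relaxation in CVXPY: minimize the transmit covariance power subject to one information receiver's SNR target and one energy receiver's harvested-power target. It is a generic SDR stand-in under assumed channels and thresholds, not the Schur-complement/rank-reduction procedure or the sensing constraints of the paper.

```python
# Minimal SDR sketch: minimize BS transmit power subject to an information-receiver SNR
# target and an energy-receiver harvested-power target. Channels and thresholds are made up.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N = 4                                     # BS antennas
h_ir = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_er = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

SNR_MIN = 10.0        # linear SNR target at the IR (assumed)
E_MIN = 0.1           # harvested-power target at the ER (assumed, linear units)
NOISE = 1e-2

W = cp.Variable((N, N), hermitian=True)   # transmit covariance, W = w w^H after relaxation
constraints = [
    W >> 0,
    cp.real(h_ir.conj() @ W @ h_ir) >= SNR_MIN * NOISE,   # IR SNR constraint
    cp.real(h_er.conj() @ W @ h_er) >= E_MIN,             # ER harvested-power constraint
]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W))), constraints)
prob.solve()

# Recover a rank-one beamformer from the dominant eigenvector of W.
eigval, eigvec = np.linalg.eigh(W.value)
w = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
print(f"Transmit power: {np.linalg.norm(w) ** 2:.3f}")
```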

As the third generation of neural networks, the spiking neural network (SNN) has the advantages of low power consumption and high energy efficiency, making it suitable for implementation on edge devices. More recently, the most advanced SNN, Spikformer, combines the self-attention module from the Transformer with an SNN to achieve remarkable performance. However, it adopts larger channel dimensions in its MLP layers, leading to an increased number of redundant model parameters. To effectively decrease the computational complexity and weight parameters of the model, we explore the Lottery Ticket Hypothesis (LTH) and discover a very sparse (≥90%) subnetwork that achieves performance comparable to the original network. Furthermore, we design a lightweight token selector module, which removes unimportant background information from images based on the average spike firing rate of neurons, selecting only essential foreground image tokens to participate in the attention calculation. Based on that, we present SparseSpikformer, a co-design framework aimed at achieving sparsity in Spikformer through token and weight pruning techniques. Experimental results demonstrate that our framework can reduce the model parameters by 90% and cut Giga Floating-Point Operations (GFLOPs) by 20% while maintaining the accuracy of the original model.
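
A minimal sketch of the token-selection idea is given below: rank tokens by their average spike firing rate over time steps and channels and keep only the most active ones before self-attention. The tensor layout, keep ratio, and toy input are assumptions for illustration and do not reproduce the exact SparseSpikformer module.

```python
# Firing-rate-based token selector sketch: drop low-activity (background) tokens before
# attention. Shapes and keep ratio are illustrative assumptions.
import torch

def select_tokens(spikes: torch.Tensor, keep_ratio: float = 0.5):
    """spikes: binary tensor of shape (T, B, N, D) = (time, batch, tokens, channels)."""
    rate = spikes.float().mean(dim=(0, 3))          # (B, N) average firing rate per token
    k = max(1, int(keep_ratio * spikes.shape[2]))
    idx = rate.topk(k, dim=1).indices               # indices of the most active tokens
    idx_exp = idx[None, :, :, None].expand(spikes.shape[0], -1, -1, spikes.shape[3])
    return torch.gather(spikes, dim=2, index=idx_exp), idx

# Example: 4 time steps, batch 2, 64 tokens, 96 channels, ~10% firing probability.
x = (torch.rand(4, 2, 64, 96) < 0.1).float()
kept, kept_idx = select_tokens(x, keep_ratio=0.5)
print(kept.shape)   # torch.Size([4, 2, 32, 96])
```

The firing-rate statistic is cheap to obtain in an SNN, since it only requires averaging the binary spike tensor.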

Recent years have seen the emergence of object-centric process mining techniques. Born as a response to the limitations of traditional process mining in analyzing event data from prevalent information systems such as CRM and ERP, these techniques aim to tackle the deficiency, convergence, and divergence issues seen in traditional event logs. Despite this promise, adoption in real-world process mining analyses remains limited. This paper embarks on a comprehensive literature review of object-centric process mining, providing insights into the current status of the discipline and its historical trajectory.

The development of 6G/B5G wireless networks, which have requirements that go beyond current 5G networks, is gaining interest from academia and industry. However, conventional cellular networks that rely on terrestrial base stations are constrained geographically and economically in improving 6G/B5G network quality. Meanwhile, NOMA allows multiple users to share the same resources, which improves the spectral efficiency of the system and has the advantage of supporting a larger number of users. Additionally, by intelligently manipulating the phase and amplitude of both the reflected and transmitted signals, STAR-RISs can achieve improved coverage, increased spectral efficiency, and enhanced communication reliability. However, STAR-RISs must simultaneously optimize the amplitudes and phase shifts corresponding to reflection and transmission, which makes existing terrestrial networks more complicated and is considered a major challenge. Motivated by the above, we study joint user pairing for NOMA and beamforming design of multi-STAR-RISs in an indoor environment. We then formulate an optimization problem with the objective of maximizing the total throughput of MUs by jointly optimizing the decoding order, user pairing, active beamforming, and passive beamforming. However, the formulated problem is a mixed-integer nonlinear program (MINLP). To tackle this challenge, we first introduce the decoding order for NOMA networks. Next, we decompose the original problem into two subproblems, namely 1) MU pairing and 2) beamforming optimization under the optimal decoding order. For the first subproblem, we employ correlation-based K-means clustering to solve the user pairing problem. Then, to jointly handle the beamforming vector optimizations, we propose a multi-agent proximal policy optimization (MAPPO) approach, which can make quick decisions in the given environment owing to its low complexity.
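
As a simplified view of the first subproblem, the sketch below clusters users whose channel directions are strongly correlated using K-means on the stacked real and imaginary parts of the normalized channels, then pairs a strong and a weak user inside each cluster. The channel model, cluster count, and pairing rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Toy correlation-based user pairing for NOMA: cluster correlated channel directions,
# then pair strong with weak users inside each cluster. All parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
N_USERS, N_ANT, N_CLUSTERS = 8, 16, 4

H = rng.standard_normal((N_USERS, N_ANT)) + 1j * rng.standard_normal((N_USERS, N_ANT))
directions = H / np.linalg.norm(H, axis=1, keepdims=True)      # unit-norm channel directions

# K-means on stacked real/imaginary parts groups users with correlated channels.
features = np.hstack([directions.real, directions.imag])
labels = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit_predict(features)

pairs = []
for c in range(N_CLUSTERS):
    members = np.where(labels == c)[0]
    # Sort by channel gain; pair strongest with weakest (classic NOMA pairing heuristic).
    members = members[np.argsort(np.linalg.norm(H[members], axis=1))]
    while len(members) >= 2:
        pairs.append((int(members[-1]), int(members[0])))       # (strong, weak)
        members = members[1:-1]

print("NOMA pairs (strong, weak):", pairs)
```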

The advent of Web3 has ushered in a new era of decentralized digital economy, promising a shift from centralized authority to distributed, peer-to-peer interactions. However, the underlying infrastructure of this decentralized ecosystem often relies on centralized cloud providers, creating a paradoxical concentration of value and power. This paper investigates the mechanics of value accrual and extraction within the Web3 ecosystem, focusing on the roles and revenues of centralized clouds. Through an analysis of publicly available material, we elucidate the financial implications of cloud services in purportedly decentralized contexts. We further explore the individual's perspective of value creation and accumulation, examining the interplay between user participation and centralized monetization strategies. Key findings indicate that while blockchain technology has the potential to significantly reduce infrastructure costs for financial services, the current Web3 landscape is marked by a substantial reliance on cloud providers for hosting, scalability, and performance.

Sixth-generation (6G) networks pose substantial security risks because confidential information is transmitted over wireless channels with a broadcast nature, and various attack vectors emerge. Physical layer security (PLS) exploits the dynamic characteristics of wireless environments to provide secure communications, while reconfigurable intelligent surfaces (RISs) can facilitate PLS by controlling wireless transmissions. With RIS-aided PLS, a lightweight security solution can be designed for low-end Internet of Things (IoT) devices, depending on the design scenario and communication objective. This article discusses RIS-aided PLS designs for 6G-IoT networks against eavesdropping and jamming attacks. The theoretical background and literature review of RIS-aided PLS are discussed, and design solutions related to resource allocation, beamforming, artificial noise, and cooperative communication are presented. We provide simulation results to show the effectiveness of RIS in terms of PLS. In addition, we examine the research issues and possible solutions for RIS modeling, channel modeling and estimation, optimization, and machine learning. Finally, we discuss recent advances, including STAR-RIS and malicious RIS.
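
As a small numerical illustration of why a RIS helps PLS, the sketch below compares the secrecy rate obtained when the RIS phase shifts are co-phased with the legitimate user's cascaded channel against random phases, under an assumed Rayleigh channel model and made-up powers; it is not a design from the article.

```python
# RIS-aided secrecy-rate illustration: aligned RIS phases vs. random phases.
# Channel model, element count, and powers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
M = 64                                   # RIS elements
P, NOISE = 1.0, 1e-3

h_br = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # BS -> RIS
h_ru = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # RIS -> legitimate user
h_re = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # RIS -> eavesdropper

def secrecy_rate(theta):
    g_u = np.abs(np.sum(h_br * np.exp(1j * theta) * h_ru)) ** 2
    g_e = np.abs(np.sum(h_br * np.exp(1j * theta) * h_re)) ** 2
    return max(np.log2(1 + P * g_u / NOISE) - np.log2(1 + P * g_e / NOISE), 0.0)

theta_opt = -np.angle(h_br * h_ru)        # co-phase the cascaded legitimate channel
theta_rand = rng.uniform(0, 2 * np.pi, M)
print(f"secrecy rate, aligned RIS : {secrecy_rate(theta_opt):.2f} bit/s/Hz")
print(f"secrecy rate, random RIS  : {secrecy_rate(theta_rand):.2f} bit/s/Hz")
```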

A near-field integrated sensing, positioning, and communication (ISPAC) framework is proposed, where a base station (BS) simultaneously serves multiple communication users and carries out target sensing and positioning. A novel double-array structure is proposed to enable near-field ISPAC at the BS. Specifically, a small-scale assisting transceiver (AT) is attached to the large-scale main transceiver (MT) to empower the communication system with the ability of sensing and positioning. Based on the proposed framework, the joint angle and distance Cramér-Rao bound (CRB) is first derived. Then, the CRB is minimized subject to a minimum communication rate requirement in both downlink and uplink ISPAC scenarios: 1) For downlink ISPAC, a downlink target positioning algorithm is proposed and a penalty dual decomposition (PDD)-based double-loop algorithm is developed to tackle the non-convex optimization problem. 2) For uplink ISPAC, an uplink target positioning algorithm is proposed and an efficient alternating optimization algorithm is conceived to solve the non-convex CRB minimization problem with coupled user communication and target probing design. Both proposed optimization algorithms converge to a stationary point of the CRB minimization problem. Numerical results show that: 1) the proposed ISPAC system can locate the target in both the angle and distance domains relying merely on a single BS and limited bandwidth; and 2) the positioning performance achieved by hybrid-analog-and-digital ISPAC approaches that achieved by fully digital ISPAC when the communication rate requirement is not stringent.
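
To ground the CRB discussion, the snippet below computes a textbook single-snapshot joint angle-distance CRB for a near-field (spherical-wave) point target observed by one uniform linear array, via numerical derivatives of the steering vector. Array size, carrier frequency, SNR, and the single-array geometry are assumptions; the paper's double-array (AT/MT) derivation is more involved.

```python
# Simplified joint angle-distance CRB for a near-field point target (one ULA, one snapshot,
# known unit-power waveform). Generic textbook form, not the paper's derivation.
import numpy as np

C, FC = 3e8, 28e9
LAM = C / FC
N = 128                                   # array elements (assumed)
D = LAM / 2                               # element spacing
SNR = 10 ** (10 / 10)                     # 10 dB (assumed)

pos = (np.arange(N) - (N - 1) / 2) * D    # element coordinates on the array axis

def steer(theta, r):
    """Near-field (spherical-wave) steering vector for angle theta and range r."""
    dist = np.sqrt(r ** 2 + pos ** 2 - 2 * r * pos * np.sin(theta))
    return np.exp(-1j * 2 * np.pi / LAM * (dist - r))

def crb(theta, r, eps=1e-6):
    d_theta = (steer(theta + eps, r) - steer(theta - eps, r)) / (2 * eps)
    d_r = (steer(theta, r + eps) - steer(theta, r - eps)) / (2 * eps)
    J = np.stack([d_theta, d_r], axis=1)
    fim = 2 * SNR * np.real(J.conj().T @ J)          # Fisher information matrix
    return np.linalg.inv(fim)

cov = crb(np.deg2rad(20), r=10.0)          # target at 20 deg, 10 m (near field for this array)
print(f"angle RMSE bound: {np.degrees(np.sqrt(cov[0, 0])):.4f} deg, "
      f"range RMSE bound: {np.sqrt(cov[1, 1]):.4f} m")
```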

Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here. These are: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews for GANs have been presented to date, none have considered the status of this field based on their progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. As we do this, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success along with some suggestions for future research directions. Code related to GAN-variants studied in this work is summarized at //github.com/sheqi/GAN_Review.
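
For readers unfamiliar with the baseline these variants depart from, the snippet below trains a vanilla GAN with the non-saturating generator loss on a toy 1-D Gaussian; architectures, data, and hyper-parameters are illustrative and do not correspond to any specific surveyed GAN.

```python
# Vanilla GAN training step on toy 1-D data (non-saturating generator loss).
# Everything here is illustrative; no surveyed GAN variant is reproduced.
import torch
import torch.nn as nn

LATENT, BATCH = 8, 64
G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(BATCH, 1) * 0.5 + 2.0           # toy "real" distribution N(2, 0.5)
    fake = G(torch.randn(BATCH, LATENT))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step (non-saturating): push D(fake) toward 1.
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"generated mean: {G(torch.randn(1000, LATENT)).mean().item():.2f} (target 2.0)")
```

Architecture-variant and loss-variant GANs modify exactly the two pieces visible here: the generator/discriminator networks and the two loss terms.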

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes, while all the existing GCN training algorithms fail to train due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
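
The core Cluster-GCN idea can be sketched in a few lines: partition the nodes, and in each SGD step run the GCN forward/backward pass only on the within-cluster subgraph, so memory scales with the cluster size rather than the full graph. The sketch below uses a random partition and a single GCN layer purely for illustration; the actual algorithm uses METIS partitions, multi-cluster batches, and deeper models.

```python
# Toy Cluster-GCN-style training loop: each step touches only one cluster's subgraph.
# Graph, features, labels, and the partition are synthetic stand-ins.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(4)
N, F_IN, F_OUT, N_CLUST = 1000, 16, 4, 10

# Synthetic sparse symmetric graph, node features, and labels.
A = (rng.random((N, N)) < 0.01).astype(np.float32); A = np.maximum(A, A.T)
X = rng.standard_normal((N, F_IN)).astype(np.float32)
y = rng.integers(0, F_OUT, N)

clusters = rng.integers(0, N_CLUST, N)            # stand-in for a METIS partition
W = nn.Linear(F_IN, F_OUT)
opt = torch.optim.Adam(W.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for c in range(N_CLUST):
        idx = np.where(clusters == c)[0]
        A_sub = A[np.ix_(idx, idx)] + np.eye(len(idx), dtype=np.float32)   # add self-loops
        A_hat = torch.from_numpy(A_sub / A_sub.sum(1, keepdims=True))      # row-normalize
        h = A_hat @ W(torch.from_numpy(X[idx]))                            # one GCN layer
        loss = loss_fn(h, torch.from_numpy(y[idx]))
        opt.zero_grad(); loss.backward(); opt.step()

print(f"final mini-cluster loss: {loss.item():.3f}")
```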
