
The wireless blockchain network (WBN) concept, born from deploying blockchain in wireless networks, is attractive for many network scenarios. Blockchain consensus mechanisms (CMs) are key to enabling nodes in a wireless network to reach consistency without any trusted entity. However, consensus reliability is seriously affected by the instability of communication links in wireless networks. Meanwhile, nodes in wireless scenarios often cannot obtain a timely energy supply, and energy-intensive blockchain functions can quickly drain their power, degrading consensus performance. Fortunately, a symbiotic radio (SR) system enabled by cognitive backscatter communications can address both problems: the secondary transmitter (STx) transmits messages over the radio-frequency (RF) signal emitted by a primary transmitter (PTx) with extremely low energy consumption, and in return the STx provides multipath gain to the PTx. This approach benefits almost all vote-based CMs, such as Practical Byzantine Fault-tolerant (PBFT)-like and RAFT-like CMs. This paper proposes symbiotic blockchain consensus (SBC), transforming 6 PBFT-like and 4 RAFT-like state-of-the-art (SOTA) CMs to demonstrate universality. The transformed CMs benefit from the mutualistic transmission relationship in SR, making full use of the limited spectrum resources in WBN. Simulation results show that SBC increases the consensus success rate of PBFT-like and RAFT-like CMs by 54.1% and 5.8%, respectively, and reduces their energy consumption by 9.2% and 23.7%, respectively.
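
As a rough, back-of-the-envelope illustration of why link reliability dominates vote-based consensus, the sketch below simulates a single PBFT-style voting phase over independently failing wireless links and compares a baseline per-link delivery probability against a higher one standing in for the multipath gain SR provides. All numbers (31 validators, link reliabilities of 0.80 and 0.92) are assumptions for illustration, not the paper's system model.

```python
import numpy as np

# Toy simulation of a PBFT-style voting phase over lossy wireless links.
# The "symbiotic" variant simply assumes a higher per-link delivery
# probability (a stand-in for SR multipath gain); all values are illustrative.
rng = np.random.default_rng(0)
n = 31                            # validators; PBFT tolerates f = (n - 1) // 3 faults
quorum = 2 * ((n - 1) // 3) + 1   # votes needed for the phase to complete

def round_success(p_link, trials=100_000):
    """Probability that at least `quorum` votes arrive, given per-link delivery p_link."""
    votes = rng.binomial(n - 1, p_link, size=trials) + 1  # a node's own vote always counts
    return (votes >= quorum).mean()

print("plain wireless links :", round_success(p_link=0.80))
print("with symbiotic gain  :", round_success(p_link=0.92))
```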

We consider unsourced random access (uRA) in a cell-free (CF) user-centric wireless network, where a large number of potential users compete for a random access slot, while only a finite subset is active. The random access users transmit codewords of length $L$ symbols from a shared codebook, which are received by $B$ geographically distributed radio units (RUs) equipped with $M$ antennas each. Our goal is to devise and analyze a \emph{centralized} decoder to detect the transmitted messages (without prior knowledge of the active users) and estimate the corresponding channel state information. A specific challenge lies in the fact that, due to the geographically distributed nature of the CF network, there is no fixed correspondence between codewords and large-scale fading coefficients (LSFCs). To overcome this problem, we propose a scheme where the access codebook is partitioned into "location-based" subcodes, such that users in a particular location make use of the corresponding subcode. Joint message detection and channel estimation is obtained via a novel {\em Approximated Message Passing} (AMP) algorithm that estimates the linear superposition of matrix-valued "sources" corrupted by Gaussian noise. The matrices to be estimated exhibit zero rows for inactive messages and Gaussian-distributed rows for active messages. The asymmetry in the LSFCs and message activity probabilities leads to \emph{different statistics} for the matrix sources, which distinguishes this AMP formulation from previous cases. In the regime where the codebook size scales linearly with $L$, while $B$ and $M$ are fixed, we present a rigorous high-dimensional analysis of the proposed AMP algorithm. Then, exploiting the fundamental decoupling principle of AMP, we provide a comprehensive analysis of Neyman-Pearson message detection, along with the subsequent channel estimation.
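
To make the AMP formulation concrete, here is a minimal sketch of an AMP iteration for a row-sparse matrix source observed through an i.i.d. Gaussian codebook. A group soft-thresholding denoiser and a simplified scalar Onsager correction stand in for the paper's Bayesian row-wise denoiser and its matrix-valued correction; the sizes, the threshold factor, and the iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, M = 256, 1024, 4        # codeword length, codebook size, total receive antennas
eps, sigma = 0.05, 0.1        # message activity probability, noise std (toy values)
A = rng.normal(0.0, 1.0 / np.sqrt(L), size=(L, K))           # i.i.d. access codebook
active = rng.random(K) < eps
X = np.where(active[:, None], rng.normal(size=(K, M)), 0.0)  # row-sparse channel matrix
Y = A @ X + sigma * rng.normal(size=(L, M))

def group_soft(R, thr):
    """Row-wise soft-thresholding denoiser and its average per-entry derivative."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12))
    trace = np.where(norms[:, 0] > thr, M - (M - 1) * thr / norms[:, 0], 0.0)
    return scale * R, trace.mean() / M

Xhat, Z = np.zeros((K, M)), Y.copy()
for _ in range(25):
    tau = np.sqrt((Z ** 2).sum() / (L * M))     # effective noise level per entry
    R = Xhat + A.T @ Z                          # decoupled pseudo-observation of each row
    Xhat, avg_deriv = group_soft(R, 2.0 * tau)  # 2.0 is an assumed threshold factor
    Z = Y - A @ Xhat + (K / L) * avg_deriv * Z  # residual with scalar Onsager correction

detected = np.linalg.norm(Xhat, axis=1) > 0     # surviving rows = detected messages
```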

With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies that allow one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to treat each point as a source of desired signal or a source of interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
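
A toy Monte Carlo version of the spherical model illustrates how such coverage probabilities are estimated: a Poisson number of satellites is dropped uniformly on a shell, the strongest (nearest) visible one serves the typical user, and the remaining visible ones interfere. The density, altitude, and path-loss-only link model below are assumptions, and fading, the terrestrial tier, and biasing are omitted for brevity.

```python
import numpy as np

# Toy Monte Carlo estimate of satellite downlink coverage for a user at the
# "north pole" of the Earth sphere. Assumed parameters throughout.
rng = np.random.default_rng(1)
R_E, h = 6371e3, 600e3                    # Earth radius and satellite altitude [m]
shell = R_E + h
lam = 2e-12                               # satellite density on the shell [points/m^2]
alpha, theta_dB, trials = 3.0, 0.0, 2000  # path-loss exponent, SIR threshold [dB]
n_mean = lam * 4 * np.pi * shell**2       # mean number of satellites (Poisson)
user = np.array([0.0, 0.0, R_E])

cov = 0
for _ in range(trials):
    n = rng.poisson(n_mean)
    v = rng.normal(size=(n, 3))
    v *= shell / np.linalg.norm(v, axis=1, keepdims=True)  # uniform points on the shell
    vis = v[:, 2] > R_E                   # above the user's local horizon plane
    if not vis.any():
        continue
    g = np.linalg.norm(v[vis] - user, axis=1) ** (-alpha)  # path-loss gains
    sir = g.max() / max(g.sum() - g.max(), 1e-30)          # nearest serves, rest interfere
    cov += sir > 10 ** (theta_dB / 10)

print("estimated coverage probability:", cov / trials)
```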

Federated recommendations (FRs), which facilitate multiple local clients in collectively learning a global model without disclosing users' private data, have emerged as a prevalent architecture for privacy-preserving recommendation. In conventional FRs, the dominant paradigm is to represent users/clients and items by discrete identities, which are subsequently mapped to domain-specific embeddings that participate in model training. Despite their considerable performance, such models have three inherent limitations that cannot be ignored in federated settings: non-transferability across domains, unavailability in cold-start settings, and potential privacy violations during federated training. To this end, we propose TransFR, a transferable federated recommendation model with universal textual representations, which delicately combines the general capabilities empowered by pre-trained language models with the personalized abilities obtained by fine-tuning on local private data. Specifically, it first learns domain-agnostic representations of items by exploiting pre-trained models with public textual corpora. To tailor the model to federated recommendation, we further introduce an efficient federated fine-tuning and local training mechanism, which yields a personalized local head for each client trained on its private behavior data. By incorporating pre-training and fine-tuning within FRs, TransFR greatly improves adaptation efficiency when transferring to a new domain and the generalization capacity needed to address cold-start issues. Through extensive experiments on several datasets, we demonstrate that TransFR surpasses several state-of-the-art FRs in terms of accuracy, transferability, and privacy.
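
The split described above can be sketched as follows: a frozen, pre-trained text encoder supplies domain-agnostic item vectors (stubbed here as random tensors), while each client fits a small private scoring head on its own feedback. The head architecture, dimensions, and training loop are hypothetical simplifications, and federated aggregation of any shared layers is omitted.

```python
import torch
import torch.nn as nn

class PrivateHead(nn.Module):
    """A client's personalized head on top of frozen textual item vectors."""
    def __init__(self, dim=384):
        super().__init__()
        self.user = nn.Parameter(torch.zeros(dim))   # local user representation
        self.proj = nn.Linear(dim, dim)              # local adaptation layer

    def forward(self, item_vecs):                    # item_vecs: [n, dim], frozen
        return self.proj(item_vecs) @ self.user      # preference logits, shape [n]

def local_finetune(head, item_vecs, clicks, steps=50, lr=1e-2):
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        loss = nn.functional.binary_cross_entropy_with_logits(head(item_vecs), clicks)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head                                      # the head never leaves the client

item_vecs = torch.randn(100, 384)                    # stand-in for encoder(item text)
clicks = (torch.rand(100) < 0.1).float()             # private local feedback
head = local_finetune(PrivateHead(), item_vecs, clicks)
```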

Existing learned video compression models employ flow networks or deformable convolutional networks (DCNs) to estimate motion information. However, the limited receptive fields of flow networks and DCNs inherently restrict their attention to local contexts. Global contexts, such as large-scale motions and global correlations among frames, are ignored, presenting a significant bottleneck for capturing accurate motions. To address this issue, we propose a joint local and global motion compensation module (LGMC) for learned video coding. More specifically, we adopt a flow network for local motion compensation. To capture global context, we employ cross attention in the feature domain for motion compensation. In addition, to avoid the quadratic complexity of vanilla cross attention, we divide the softmax operation in attention into two independent softmax operations, leading to linear complexity. To validate the effectiveness of our proposed LGMC, we integrate it with DCVC-TCM and obtain learned video compression with joint local and global motion compensation (LVC-LGMC). Extensive experiments demonstrate that our LVC-LGMC achieves significant rate-distortion performance improvements over the baseline DCVC-TCM.
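
The linear-complexity trick of splitting the softmax can be shown directly: normalize queries over the feature dimension and keys over the token dimension, then multiply in the order that never forms the n-by-n score matrix. This is a generic sketch of that factorization under assumed shapes, not the paper's exact module.

```python
import torch

def linear_cross_attention(q, k, v):
    """Cross attention with the softmax split into two independent softmaxes,
    so the n x n score matrix is never materialized."""
    q = q.softmax(dim=-1)                # normalize each query over the feature dim
    k = k.softmax(dim=-2)                # normalize keys over the token dim
    context = k.transpose(-2, -1) @ v    # [B, d, d_v] global context summary
    return q @ context                   # [B, n, d_v]

q = torch.randn(1, 4096, 64)             # current-frame features as queries
k = torch.randn(1, 4096, 64)             # reference-frame features as keys
v = torch.randn(1, 4096, 64)
out = linear_cross_attention(q, k, v)    # cost O(n*d^2) instead of O(n^2*d)
```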

While physics-informed neural networks (PINNs) have become a popular deep learning framework for tackling forward and inverse problems governed by partial differential equations (PDEs), their performance is known to degrade when larger and deeper neural network architectures are employed. Our study identifies that the root of this counter-intuitive behavior lies in the use of multi-layer perceptron (MLP) architectures with unsuitable initialization schemes, which result in poor trainability of the network derivatives and ultimately lead to an unstable minimization of the PDE residual loss. To address this, we introduce Physics-informed Residual Adaptive Networks (PirateNets), a novel architecture that is designed to facilitate stable and efficient training of deep PINN models. PirateNets leverage a novel adaptive residual connection, which allows the networks to be initialized as shallow networks that progressively deepen during training. We also show that the proposed initialization scheme allows us to encode appropriate inductive biases corresponding to a given PDE system into the network architecture. We provide comprehensive empirical evidence showing that PirateNets are easier to optimize and can gain accuracy from considerably increased depth, ultimately achieving state-of-the-art results across various benchmarks. All code and data accompanying this manuscript will be made publicly available at \url{https://github.com/PredictiveIntelligenceLab/jaxpi}.
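
The adaptive residual idea admits a compact sketch: each block gates its transformation with a learnable coefficient initialized at zero, so a deep stack is exactly the identity at initialization and deepens as training opens the gates. The block body below is a placeholder MLP under assumed widths, not the full PirateNet block.

```python
import torch
import torch.nn as nn

class AdaptiveResidualBlock(nn.Module):
    """Residual block gated by a learnable alpha initialized at zero, so the
    block is exactly the identity map at initialization."""
    def __init__(self, width=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                  nn.Linear(width, width), nn.Tanh())
        self.alpha = nn.Parameter(torch.zeros(1))   # gate starts closed

    def forward(self, x):
        return self.alpha * self.body(x) + (1 - self.alpha) * x

# A deep stack behaves as a shallow (identity) network before training.
net = nn.Sequential(*[AdaptiveResidualBlock() for _ in range(16)])
x = torch.randn(8, 128)
assert torch.allclose(net(x), x)   # identity map at initialization
```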

Edge Intelligence (EI) allows Artificial Intelligence (AI) applications to run at the edge, where data analysis and decision-making can be performed in real-time and close to data sources. To protect data privacy and unify data silos among end devices in EI, Federated Learning (FL) is proposed for collaborative training of shared AI models across devices without compromising data privacy. However, the prevailing FL approaches cannot guarantee model generalization and adaptation on heterogeneous clients. Recently, Personalized Federated Learning (PFL) has drawn growing attention in EI, as it enables a productive balance between the local-specific training requirements inherent in devices and the global-generalized optimization objectives needed for satisfactory performance. However, most existing PFL methods are based on the Parameters Interaction-based Architecture (PIA) represented by FedAvg, which incurs unaffordable communication burdens due to large-scale parameter transmission between devices and the edge server. In contrast, the Logits Interaction-based Architecture (LIA) updates model parameters via logits transfer, offering lightweight communication and support for heterogeneous on-device models compared to PIA. Nevertheless, previous LIA methods attempt to achieve satisfactory performance either by relying on unrealistic public datasets or by increasing communication overhead to transmit additional information beyond logits. To tackle this dilemma, we propose a knowledge cache-driven PFL architecture, named FedCache, which maintains a knowledge cache on the server for fetching personalized knowledge from samples whose hashes are similar to that of each given on-device sample. During the training phase, ensemble distillation is applied to on-device models for constructive optimization with personalized knowledge transferred from the server-side knowledge cache.
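
A toy version of the knowledge cache makes the mechanism concrete: clients key their samples with a locality-sensitive hash, upload logits under that key, and later fetch the ensemble of cached logits from similarly hashed samples as distillation targets. The sign-LSH, feature dimension, and cache policy here are illustrative assumptions, not FedCache's exact design.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
planes = rng.normal(size=(16, 512))                # random hyperplanes for a sign-LSH

def lsh(feature):
    """16-bit locality-sensitive hash: nearby features tend to share a key."""
    bits = (planes @ feature > 0).astype(np.int64)
    return int(bits @ (1 << np.arange(16)))

cache = defaultdict(list)                          # server side: hash -> cached logits

def upload(feature, logits):
    cache[lsh(feature)].append(logits)             # client shares only a hash and logits

def fetch(feature):
    hits = cache.get(lsh(feature))
    return np.mean(hits, axis=0) if hits else None  # ensemble target for distillation
```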

Despite the basic premise that next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native, to date, most existing efforts remain either qualitative or incremental extensions to existing "AI for wireless" paradigms. Indeed, creating AI-native wireless networks faces significant technical challenges due to the limitations of data-driven, training-intensive AI. These limitations include the black-box nature of AI models, their curve-fitting nature, which can limit their ability to reason and adapt, their reliance on large amounts of training data, and the energy inefficiency of large neural networks. In response to these limitations, this article presents a comprehensive, forward-looking vision that addresses these shortcomings by introducing a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning. Causal reasoning, founded on causal discovery, causal representation learning, and causal inference, can help build explainable, reasoning-aware, and sustainable wireless networks. Towards fulfilling this vision, we first highlight several wireless networking challenges that can be addressed by causal discovery and representation, including ultra-reliable beamforming for terahertz (THz) systems, near-accurate physical twin modeling for digital twins, training data augmentation, and semantic communication. We showcase how incorporating causal discovery can assist in achieving dynamic adaptability, resilience, and cognition in addressing these challenges. Furthermore, we outline potential frameworks that leverage causal inference to achieve the overarching objectives of future-generation networks, including intent management, dynamic adaptability, human-level cognition, reasoning, and the critical element of time sensitivity.

Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources within the geographic proximity of mobile and Internet-of-Things (IoT) devices. The recent advancement in Unmanned Aerial Vehicles (UAVs) technologies has opened new opportunities for edge computing in military operations, disaster response, or remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey on the existing research in UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss some open research challenges that can stimulate future research directions for resource management in UEC.

A large number of real-world graphs or networks are inherently heterogeneous, involving a diversity of node types and relation types. Heterogeneous graph embedding aims to embed the rich structural and semantic information of a heterogeneous graph into low-dimensional node representations. Existing models usually define multiple metapaths in a heterogeneous graph to capture the composite relations and guide neighbor selection. However, these models either omit node content features, discard intermediate nodes along the metapath, or only consider one metapath. To address these three limitations, we propose a new model named Metapath Aggregated Graph Neural Network (MAGNN) to boost the final performance. Specifically, MAGNN employs three major components, i.e., the node content transformation to encapsulate input node attributes, the intra-metapath aggregation to incorporate intermediate semantic nodes, and the inter-metapath aggregation to combine messages from multiple metapaths. Extensive experiments on three real-world heterogeneous graph datasets for node classification, node clustering, and link prediction show that MAGNN achieves more accurate prediction results than state-of-the-art baselines.
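
The three components can be sketched schematically on toy tensors: a per-node content projection, a mean encoder over each metapath instance (so intermediate nodes contribute), and attention across metapaths. The dimensions and the mean-pooling instance encoder below are simplifications of MAGNN's actual operators.

```python
import torch
import torch.nn as nn

class ToyMAGNNLayer(nn.Module):
    def __init__(self, raw_dim=32, dim=64):
        super().__init__()
        self.content = nn.Linear(raw_dim, dim)   # (1) node content transformation
        self.path_attn = nn.Linear(dim, 1)       # (3) inter-metapath attention scorer

    def forward(self, metapath_instances):
        # metapath_instances: one tensor per metapath, [n_instances, path_len, raw_dim];
        # (2) intra-metapath: averaging over whole instances keeps intermediate nodes.
        msgs = torch.stack([self.content(inst).mean(dim=(0, 1))
                            for inst in metapath_instances])      # [n_metapaths, dim]
        w = torch.softmax(self.path_attn(msgs), dim=0)            # weight per metapath
        return (w * msgs).sum(dim=0)                              # fused node embedding

layer = ToyMAGNNLayer()
inst_a = torch.randn(12, 3, 32)    # e.g. 12 author-paper-author instances at a node
inst_b = torch.randn(5, 5, 32)     # a longer metapath with intermediate nodes
h = layer([inst_a, inst_b])        # final representation of that node, shape [64]
```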

Most existing knowledge graphs suffer from incompleteness, which can be alleviated by inferring missing links based on known facts. One popular way to accomplish this is to generate low-dimensional embeddings of entities and relations, and use these to make inferences. ConvE, a recently proposed approach, applies convolutional filters on 2D reshapings of entity and relation embeddings in order to capture rich interactions between their components. However, the number of interactions that ConvE can capture is limited. In this paper, we analyze how increasing the number of these interactions affects link prediction performance, and utilize our observations to propose InteractE. InteractE is based on three key ideas: feature permutation, a novel feature reshaping, and circular convolution. Through extensive experiments, we find that InteractE outperforms state-of-the-art convolutional link prediction baselines on FB15k-237. Further, InteractE achieves an MRR score that is 9%, 7.5%, and 23% better than ConvE on the FB15k-237, WN18RR, and YAGO3-10 datasets, respectively. The results validate our central hypothesis that increasing feature interaction is beneficial to link prediction performance. We make the source code of InteractE available to encourage reproducible research.
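
Circular convolution, the third ingredient, is easy to show in isolation: wrap the reshaped embedding "image" around both axes before an ordinary convolution, so filters also capture interactions across the matrix boundary. The shapes below are illustrative, and the feature permutation and checkered reshaping steps are omitted.

```python
import torch
import torch.nn.functional as F

def circular_conv2d(x, weight):
    """2D convolution with circular (wrap-around) padding on both axes."""
    p = weight.shape[-1] // 2
    x = F.pad(x, (p, p, p, p), mode="circular")   # wrap columns and rows
    return F.conv2d(x, weight)

emb = torch.randn(1, 1, 10, 20)       # a reshaped entity+relation embedding "image"
filters = torch.randn(32, 1, 3, 3)    # 32 convolutional filters
feat = circular_conv2d(emb, filters)  # [1, 32, 10, 20]: boundary-crossing interactions
```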
