
In 6G, mobile networks are poised to transition from monolithic structures owned and operated by single mobile network operators into multi-stakeholder networks where various parties contribute infrastructure, resources, and services. This shift brings forth a critical challenge: ensuring secure and trustful cross-domain access control. This paper introduces a novel technical concept and a prototype, outlining and implementing a 5G Service-based Architecture that utilizes Decentralized Identifiers and Verifiable Credentials to authenticate and authorize network functions among each other, rather than relying on traditional X.509 certificates or OAuth2.0 access tokens. This decentralized approach to identity and permission management for network functions in 6G reduces the risk of a single point of failure associated with centralized public key infrastructures, unifies access control mechanisms, and paves the way for less complex and more trustful cross-domain key management for the highly collaborative network functions of a future Service-based Architecture in 6G.
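
As a rough illustration of the credential-based flow described above, the following self-contained Python sketch replaces an OAuth2.0 token check with verification of a signed credential resolved through a toy in-memory DID registry. The function names, claim format, and use of the `cryptography` package's Ed25519 primitives are illustrative assumptions, not the paper's implementation or any DID/VC standard.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# Toy "decentralized identifier" registry: DID -> verification key (illustrative).
issuer_key = ed25519.Ed25519PrivateKey.generate()
did_registry = {"did:example:operator-a": issuer_key.public_key()}

def issue_credential(issuer_did, subject_nf, allowed_service):
    """The issuer in the consumer's domain signs the network function's access claims."""
    claims = {"iss": issuer_did, "sub": subject_nf, "service": allowed_service}
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "proof": issuer_key.sign(payload).hex()}

def verify_credential(credential, required_service):
    """The producer NF resolves the issuer DID and checks the proof and the claims."""
    claims = credential["claims"]
    payload = json.dumps(claims, sort_keys=True).encode()
    public_key = did_registry[claims["iss"]]                         # DID resolution (toy)
    public_key.verify(bytes.fromhex(credential["proof"]), payload)   # raises if forged
    return claims["service"] == required_service

vc = issue_credential("did:example:operator-a", "nf:amf-17", "nudm-sdm")
print(verify_credential(vc, "nudm-sdm"))  # True
```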

Related content

Networking: IFIP International Conferences on Networking. Explanation: international networking conference. Publisher: IFIP. SIT:

Spiking neural networks (SNNs) are recurrent models that can leverage sparsity in input time series to efficiently carry out tasks such as classification. Additional efficiency gains can be obtained if decisions are taken as early as possible, as a function of the complexity of the input time series. The choice of when to stop inference and produce a decision must rely on an estimate of the current accuracy of that decision. Prior work demonstrated the use of conformal prediction (CP) as a principled way to quantify uncertainty and support adaptive-latency decisions in SNNs. In this paper, we propose to enhance the uncertainty quantification capabilities of SNNs by implementing ensemble models for the purpose of improving the reliability of stopping decisions. Intuitively, an ensemble of multiple models can decide when to stop more reliably by selecting times at which most models agree that the current accuracy level is sufficient. The proposed method relies on different forms of information pooling from ensemble models and offers theoretical reliability guarantees. We specifically show that variational inference-based ensembles with p-variable pooling significantly reduce the average latency of state-of-the-art methods while maintaining reliability guarantees.
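
To make the stopping logic concrete, here is a minimal Python sketch of adaptive-latency stopping with p-variable pooling, assuming each ensemble member emits a p-variable per time step; the "twice the average" rule used below is a classical valid merging function for arbitrarily dependent p-values. This is an illustration of the general idea, not the paper's exact procedure.

```python
import numpy as np

def pooled_p(p_values):
    # Twice the average is a valid merging rule for arbitrarily dependent p-variables.
    return min(1.0, 2.0 * float(np.mean(p_values)))

def adaptive_stop(p_per_step_per_model, alpha=0.1):
    """p_per_step_per_model: (T, K) array of per-step, per-model p-variables for the
    hypothesis that the current decision is unreliable. Returns the first step at
    which the pooled evidence certifies the decision."""
    for t, p_t in enumerate(p_per_step_per_model):
        if pooled_p(p_t) <= alpha:
            return t                      # stop early: the ensemble agrees
    return len(p_per_step_per_model) - 1  # otherwise run to the final step

rng = np.random.default_rng(0)
# Synthetic p-variables that shrink over time as more of the input is observed.
ps = rng.uniform(size=(20, 5)) * np.linspace(1.0, 0.05, 20)[:, None]
print("stop at step", adaptive_stop(ps))
```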

Explaining predictions of black-box neural networks is crucial when they are applied to decision-critical tasks. Thus, attribution maps are commonly used to identify important image regions, despite prior work showing that humans prefer explanations based on similar examples. To this end, ProtoPNet learns a set of class-representative feature vectors (prototypes) for case-based reasoning. During inference, similarities of latent features to prototypes are linearly classified to form predictions, and attribution maps are provided to explain the similarity. In this work, we evaluate whether architectures for case-based reasoning fulfill established axioms required for faithful explanations, using ProtoPNet as an example. We show that such architectures allow the extraction of faithful explanations. However, we prove that the attribution maps used to explain the similarities violate the axioms. We propose a new procedure to extract explanations for trained ProtoPNets, named ProtoPFaith. Conceptually, these explanations are Shapley values, calculated on the similarity scores of each prototype. They make it possible to faithfully answer which prototypes are present in an unseen image and to quantify each pixel's contribution to that presence, thereby complying with all axioms. The theoretical violations of ProtoPNet manifest in our experiments on three datasets (CUB-200-2011, Stanford Dogs, RSNA) and five architectures (ConvNet, ResNet, ResNet50, WideResNet50, ResNeXt50). Our experiments show a qualitative difference between the explanations given by ProtoPNet and ProtoPFaith. Additionally, we quantify the explanations with the Area Over the Perturbation Curve, on which ProtoPFaith outperforms ProtoPNet in all experiments by a factor $>10^3$.
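
The following numpy sketch illustrates the ProtoPNet-style inference path referenced above: patch-to-prototype distances are mapped to similarity scores (the log-ratio transform follows the original ProtoPNet formulation), max-pooled over spatial locations, and linearly combined into class logits. Shapes and values are synthetic, and the snippet is a simplified illustration rather than the released ProtoPNet or ProtoPFaith code.

```python
import numpy as np

def protopnet_logits(feature_map, prototypes, class_weights, eps=1e-4):
    """feature_map: (H, W, D) latent patches; prototypes: (P, D); class_weights: (C, P)."""
    H, W, D = feature_map.shape
    patches = feature_map.reshape(-1, D)                                # (H*W, D)
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # squared distances
    sim = np.log((d2 + 1.0) / (d2 + eps))                               # similarity scores
    sim_max = sim.max(axis=0)                                           # prototype "presence"
    return class_weights @ sim_max                                      # (C,) class logits

rng = np.random.default_rng(0)
logits = protopnet_logits(rng.normal(size=(7, 7, 128)),   # latent feature map
                          rng.normal(size=(10, 128)),     # 10 prototypes
                          rng.normal(size=(5, 10)))       # 5 classes
print(logits.shape)  # (5,)
```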

Compressing a predefined deep neural network (DNN) into a compact sub-network with competitive performance is crucial in the efficient machine learning realm. This topic spans various techniques, from structured pruning to neural architecture search, encompassing the perspectives of both pruning and erasing operators. Despite advancements, existing methods suffer from complex, multi-stage processes that demand substantial engineering and domain knowledge, limiting their broader applications. We introduce the third-generation Only-Train-Once (OTOv3), which first automatically trains and compresses a general DNN through pruning and erasing operations, creating a compact and competitive sub-network without the need for fine-tuning. OTOv3 simplifies and automates the training and compression process and minimizes the engineering effort required from users. It offers key technological advancements: (i) automatic search space construction for general DNNs based on dependency graph analysis; (ii) Dual Half-Space Projected Gradient (DHSPG) and its enhanced version with hierarchical search (H2SPG) to reliably solve (hierarchical) structured sparsity problems and ensure sub-network validity; and (iii) automated sub-network construction using solutions from DHSPG/H2SPG and dependency graphs. Our empirical results demonstrate the efficacy of OTOv3 across various benchmarks in structured pruning and neural architecture search. OTOv3 produces sub-networks that match or exceed the state of the art. The source code will be available at //github.com/tianyic/only_train_once.
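
As a generic illustration of the kind of structured pruning OTOv3 automates (not DHSPG/H2SPG themselves), the sketch below scores channel groups that a dependency graph marks as coupled and removes them jointly, so that the remaining sub-network stays structurally valid. All names, shapes, and the keep ratio are hypothetical.

```python
import numpy as np

def prune_coupled_groups(weights, coupled_groups, keep_ratio=0.5):
    """weights: dict layer -> (out_channels, in_features) matrix.
    coupled_groups: list of lists of (layer, channel) pairs that must share a decision."""
    scores = [sum(np.linalg.norm(weights[l][c]) for l, c in g) for g in coupled_groups]
    keep = int(len(coupled_groups) * keep_ratio)
    kept = set(np.argsort(scores)[-keep:])             # keep the highest-scoring groups
    for gi, group in enumerate(coupled_groups):
        if gi not in kept:
            for layer, channel in group:
                weights[layer][channel] = 0.0           # zero the whole coupled group
    return weights

w = {"conv1": np.random.randn(8, 3), "conv2": np.random.randn(8, 3)}
groups = [[("conv1", c), ("conv2", c)] for c in range(8)]  # e.g. channels tied by a residual add
prune_coupled_groups(w, groups)
```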

New applications are being supported by current and future networks. In particular, Metaverse applications are expected to be deployed in the near future, as 5G and 6G networks provide sufficient bandwidth and sufficiently low latency for a satisfying end-user experience. However, networks still need to evolve to better support this type of application. We present here a basic taxonomy of the metaverse, which allows us to identify some of the networking requirements for such an application; we also provide an overview of the current state of the standardization efforts in different standardization organizations, including ITU-T, 3GPP, IETF, and MPAI.

Existing studies analyzing electromagnetic field (EMF) exposure in wireless networks have primarily considered downlink (DL) communications. In the uplink (UL), the EMF exposure caused by the user's smartphone is usually the only considered source of radiation, thereby ignoring contributions caused by other active neighboring devices. In addition, the network coverage and EMF exposure are typically analyzed independently for both the UL and DL, while a joint analysis would be necessary to fully understand the network performance. This paper aims at bridging the resulting gaps by presenting a comprehensive stochastic geometry framework including the above aspects. The proposed topology features base stations (BS) modeled via a homogeneous Poisson point process as well as a user process of type II (with users uniformly distributed in the Voronoi cell of each BS). In addition to the UL to DL exposure ratio, we derive joint probability metrics considering the UL and DL coverage and EMF exposure. These metrics are evaluated in two scenarios considering BS and/or user densifications. Our numerical results highlight the existence of optimal node densities maximizing these joint probabilities.
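
A toy Monte Carlo version of the modeled topology can help make the setup concrete; the paper treats these quantities analytically with stochastic geometry, so the Python sketch below is only an illustration with placeholder constants: base stations follow a homogeneous PPP, one user is dropped uniformly in each BS's Voronoi cell (approximated by nearest-BS assignment of candidate points), and the exposure at a typical user from all BSs (DL) is compared with that from the other active users (the UL contributions usually ignored).

```python
import numpy as np

rng = np.random.default_rng(1)
LAM_BS, ALPHA, P_BS, P_UE = 10.0, 3.5, 10.0, 0.2   # BS per km^2, pathloss exponent, Tx powers (W)

def ul_dl_exposure_ratio(side_km=2.0):
    n_bs = max(rng.poisson(LAM_BS * side_km ** 2), 1)
    bs = rng.uniform(-side_km / 2, side_km / 2, size=(n_bs, 2))
    cand = rng.uniform(-side_km / 2, side_km / 2, size=(50 * n_bs, 2))
    nearest = np.argmin(np.linalg.norm(cand[:, None] - bs[None], axis=2), axis=1)
    users = np.array([cand[nearest == b][0] for b in range(n_bs) if np.any(nearest == b)])
    typical = users[0]                                        # evaluate exposure at this user
    d_bs = np.maximum(np.linalg.norm(bs - typical, axis=1), 1e-2)
    d_ue = np.maximum(np.linalg.norm(users[1:] - typical, axis=1), 1e-2)
    dl = np.sum(P_BS * d_bs ** -ALPHA)                        # incident power from all BSs
    ul = np.sum(P_UE * d_ue ** -ALPHA)                        # from neighboring active users
    return ul / dl

print("mean UL/DL exposure ratio:", np.mean([ul_dl_exposure_ratio() for _ in range(200)]))
```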

The wireless blockchain network (WBN) concept, born from blockchains deployed in wireless networks, has attracted interest in many network scenarios. Blockchain consensus mechanisms (CMs) are key to enabling nodes in a wireless network to achieve consistency without any trusted entity. However, consensus reliability will be seriously affected by the instability of communication links in wireless networks. Meanwhile, it is difficult for nodes in wireless scenarios to obtain a timely energy supply. Energy-intensive blockchain functions can quickly drain the power of nodes, thus degrading consensus performance. Fortunately, a symbiotic radio (SR) system enabled by cognitive backscatter communications can solve the above problems. In SR, the secondary transmitter (STx) transmits messages over the radio frequency (RF) signal emitted from a primary transmitter (PTx) with extremely low energy consumption, and the STx can provide multipath gain to the PTx in return. Such an approach is useful for almost all vote-based CMs, such as Practical Byzantine Fault-tolerant (PBFT)-like and RAFT-like CMs. This paper proposes symbiotic blockchain consensus (SBC) by transforming 6 PBFT-like and 4 RAFT-like state-of-the-art (SOTA) CMs to demonstrate universality. These new CMs will benefit from the mutualistic transmission relationships in SR, making full use of the limited spectrum resources in WBN. Simulation results show that SBC can increase the consensus success rate of PBFT-like and RAFT-like CMs by 54.1% and 5.8%, respectively, and reduce energy consumption by 9.2% and 23.7%, respectively.
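
The dependence of vote-based CMs on link reliability can be illustrated with a small Monte Carlo sketch: a consensus round succeeds only if a quorum of votes survives the unreliable wireless links, so any gain in per-link delivery probability (such as the multipath gain SR provides) translates directly into a higher consensus success rate. The numbers and quorum rule below are illustrative and are not the paper's SBC protocols.

```python
import numpy as np

def consensus_success_rate(n_nodes, p_link, quorum_frac=2 / 3, trials=20000, seed=0):
    """Fraction of rounds in which at least a quorum of votes is delivered."""
    rng = np.random.default_rng(seed)
    quorum = int(np.ceil(quorum_frac * n_nodes))
    delivered = rng.random((trials, n_nodes)) < p_link    # which votes arrive in each round
    return float(np.mean(delivered.sum(axis=1) >= quorum))

for p in (0.80, 0.90, 0.97):
    print(f"link reliability {p:.2f} -> consensus success rate {consensus_success_rate(31, p):.3f}")
```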

The resilience of internet service is crucial for ensuring consistent communication and facilitating emergency response in a digitally dependent society. Due to empirical data constraints, there has been limited research on internet service disruptions during extreme weather events. To bridge this gap, this study utilizes observational datasets on internet performance to quantitatively assess the extent of internet disruption during two recent extreme weather events. Taking Harris County in the United States as the study region, we jointly analyzed the hazard severity and the associated internet disruptions in these two events. The results show that hazard events significantly impacted regional internet connectivity. There exists a pronounced temporal synchronicity between the magnitude of disruption and hazard severity: as the severity of hazards intensifies, internet disruptions correspondingly escalate and eventually return to baseline levels post-event. Spatial analyses show that internet service disruptions can happen even in areas not directly impacted by hazards, demonstrating that the repercussions of hazards extend beyond the immediate area of impact. This interplay of temporal synchronization and spatial variance underscores the complex relationship between hazard severity and internet disruption. Socio-demographic analysis suggests that vulnerable communities, already grappling with myriad challenges, face exacerbated service disruptions during hazard events, emphasizing the need for prioritized disaster mitigation strategies to improve the resilience of internet services. To the best of our knowledge, this research is among the first studies to examine internet disruptions during hazardous events using a quantitative observational dataset. The insights obtained hold significant implications for city administrators, guiding them towards more resilient and equitable infrastructure planning.

Recently, graph neural networks have been gaining a lot of attention for simulating dynamical systems, owing to their inductive nature, which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulating large-scale realistic systems.
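
As a minimal illustration of the Hamiltonian inductive bias mentioned above, the Python sketch below derives dynamics from the gradients of a scalar energy function and integrates them with a symplectic update; in the surveyed models the energy would be parameterized by a graph neural network, whereas here an analytic spring Hamiltonian stands in for it.

```python
import numpy as np

def hamiltonian(q, p, k=1.0, m=1.0):
    return p ** 2 / (2 * m) + 0.5 * k * q ** 2   # spring energy (stand-in for a learned GNN)

def dH(q, p, k=1.0, m=1.0):
    return k * q, p / m                           # dH/dq, dH/dp

def rollout(q, p, dt=0.05, steps=200, k=1.0, m=1.0):
    energies = []
    for _ in range(steps):
        dHdq, _ = dH(q, p, k, m)
        p = p - dt * dHdq                         # symplectic Euler: update momentum first
        _, dHdp = dH(q, p, k, m)
        q = q + dt * dHdp                         # then position with the updated momentum
        energies.append(hamiltonian(q, p, k, m))
    return np.array(energies)

e = rollout(q=1.0, p=0.0)
print("energy drift:", float(e.max() - e.min()))  # stays small under the symplectic update
```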

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explainable methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.

Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some works are dedicated to automatically selecting sticker responses by matching the text labels of stickers with previous utterances. However, due to their large quantities, it is impractical to require text labels for all stickers. Hence, in this paper, we propose to recommend an appropriate sticker to the user based on the multi-turn dialog context history without any external labels. Two main challenges are confronted in this task. One is to learn the semantic meaning of stickers without corresponding text labels. The other is to jointly model the candidate sticker with the multi-turn dialog context. To tackle these challenges, we propose a sticker response selector (SRS) model. Specifically, SRS first employs a convolution-based sticker image encoder and a self-attention-based multi-turn dialog encoder to obtain the representations of stickers and utterances. Next, a deep interaction network is proposed to conduct deep matching between the sticker and each utterance in the dialog history. SRS then learns the short-term and long-term dependencies between all interaction results through a fusion network to output the final matching score. To evaluate our proposed method, we collect a large-scale real-world dialog dataset with stickers from one of the most popular online chatting platforms. Extensive experiments conducted on this dataset show that our model achieves state-of-the-art performance on all commonly used metrics. Experiments also verify the effectiveness of each component of SRS. To facilitate further research in the sticker selection field, we release this dataset of 340K multi-turn dialog and sticker pairs.
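
A schematic PyTorch sketch of an SRS-like scorer is given below: a small CNN encodes the sticker, token-level self-attention encodes each utterance, per-utterance interaction scores are computed against the sticker representation, and a recurrent fusion layer turns them into a single matching score. Module choices and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class StickerScorer(nn.Module):
    def __init__(self, d=64, vocab=5000):
        super().__init__()
        self.img_enc = nn.Sequential(                        # convolutional sticker encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, d, 3, stride=2, padding=1), nn.AdaptiveAvgPool2d(1))
        self.embed = nn.Embedding(vocab, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.fuse = nn.GRU(1, 16, batch_first=True)          # fuses per-utterance matches over turns
        self.out = nn.Linear(16, 1)

    def forward(self, sticker, dialog):                      # sticker: (B,3,H,W); dialog: (B,T,L)
        s = self.img_enc(sticker).flatten(1)                 # (B, d) sticker representation
        B, T, L = dialog.shape
        u = self.embed(dialog.reshape(B * T, L))             # (B*T, L, d) token embeddings
        u, _ = self.attn(u, u, u)                            # self-attention over tokens
        u = u.mean(dim=1).reshape(B, T, -1)                  # (B, T, d) utterance vectors
        match = (u * s.unsqueeze(1)).sum(-1, keepdim=True)   # (B, T, 1) interaction scores
        _, h = self.fuse(match)                              # short/long-term fusion over turns
        return self.out(h[-1]).squeeze(-1)                   # (B,) final matching score

scores = StickerScorer()(torch.randn(2, 3, 64, 64), torch.randint(0, 5000, (2, 5, 12)))
print(scores.shape)  # torch.Size([2])
```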
