
The evolution of cellular networks has played a pivotal role in shaping the modern telecommunications landscape. This paper explores the journey of cellular network generations, beginning with the introduction of Japan's first commercial 1G network by Nippon Telegraph and Telephone (NTT) Corporation in 1979. This analog wireless network expanded to become the country's first nationwide 1G network within a remarkably short period. The transition from analog to digital networks marked a significant turning point in the wireless industry, enabled by advances in MOSFET (metal-oxide-semiconductor field-effect transistor) technology. The MOSFET, originally developed at Bell Labs in 1959, was adapted for cellular networks in the early 1990s, facilitating the shift to digital wireless mobile networks. The 2G generation brought the first commercial digital cellular network in 1991, prompting manufacturers and mobile network operators to recognize the importance of robust networks and efficient architecture. As the wireless industry continued to experience exponential growth, the significance of effective network infrastructure became increasingly evident. In this research, our aim is to provide a comprehensive overview of the entire spectrum of cellular network generations, from 1G to the potential future of 7G. By tracing the evolution of these networks, we shed light on the transformative developments that have shaped the telecommunications landscape and explore the possibilities that lie ahead in the realm of cellular technology.

Related Content

Networking: IFIP International Conferences on Networking. Explanation: International Conference on Networking. Publisher: IFIP. SIT:

Explaining predictions of black-box neural networks is crucial when they are applied to decision-critical tasks. Thus, attribution maps are commonly used to identify important image regions, despite prior work showing that humans prefer explanations based on similar examples. To this end, ProtoPNet learns a set of class-representative feature vectors (prototypes) for case-based reasoning. During inference, similarities of latent features to prototypes are linearly classified to form predictions, and attribution maps are provided to explain the similarity. In this work, we evaluate whether architectures for case-based reasoning fulfill established axioms required for faithful explanations, using ProtoPNet as an example. We show that such architectures allow the extraction of faithful explanations. However, we prove that the attribution maps used to explain the similarities violate the axioms. We propose a new procedure to extract explanations for trained ProtoPNets, named ProtoPFaith. Conceptually, these explanations are Shapley values, calculated on the similarity scores of each prototype. They make it possible to faithfully answer which prototypes are present in an unseen image and to quantify each pixel's contribution to that presence, thereby complying with all axioms. The theoretical violations of ProtoPNet manifest in our experiments on three datasets (CUB-200-2011, Stanford Dogs, RSNA) and five architectures (ConvNet, ResNet, ResNet50, WideResNet50, ResNeXt50). Our experiments show a qualitative difference between the explanations given by ProtoPNet and ProtoPFaith. Additionally, we quantify the explanations with the Area Over the Perturbation Curve, on which ProtoPFaith outperforms ProtoPNet in all experiments by a factor $>10^3$.
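As a rough illustration of the idea behind ProtoPFaith, the sketch below estimates Shapley values of image patches with respect to a single prototype's similarity score via Monte Carlo permutation sampling. The patch partition, the `sim_fn` callable, and the baseline image are illustrative assumptions; the paper's exact estimator is not reproduced here.

```python
import numpy as np

def shapley_patch_attribution(sim_fn, image, baseline, n_patches, n_samples=200, seed=None):
    """Monte Carlo estimate of Shapley values for one prototype's similarity score.

    sim_fn(img) -> float similarity of `img` to a chosen prototype (assumed callable).
    `image` and `baseline` are arrays of identical shape; the image is split into
    `n_patches` equal horizontal strips purely for illustration.
    """
    rng = np.random.default_rng(seed)
    h = image.shape[0]
    bounds = np.linspace(0, h, n_patches + 1, dtype=int)
    phi = np.zeros(n_patches)
    for _ in range(n_samples):
        order = rng.permutation(n_patches)
        current = baseline.copy()
        prev_score = sim_fn(current)
        for p in order:
            lo, hi = bounds[p], bounds[p + 1]
            current[lo:hi] = image[lo:hi]        # add patch p to the coalition
            score = sim_fn(current)
            phi[p] += score - prev_score         # marginal contribution of patch p
            prev_score = score
    return phi / n_samples
```

Averaging marginal contributions over random coalition orders is what makes the attribution satisfy the Shapley axioms in expectation, in contrast to the similarity-based attribution maps the paper criticizes.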

Existing studies analyzing electromagnetic field (EMF) exposure in wireless networks have primarily considered downlink (DL) communications. In the uplink (UL), the EMF exposure caused by the user's smartphone is usually the only considered source of radiation, thereby ignoring contributions caused by other active neighboring devices. In addition, the network coverage and EMF exposure are typically analyzed independently for both the UL and DL, while a joint analysis would be necessary to fully understand the network performance. This paper aims at bridging the resulting gaps by presenting a comprehensive stochastic geometry framework including the above aspects. The proposed topology features base stations (BS) modeled via a homogeneous Poisson point process as well as a user process of type II (with users uniformly distributed in the Voronoi cell of each BS). In addition to the UL to DL exposure ratio, we derive joint probability metrics considering the UL and DL coverage and EMF exposure. These metrics are evaluated in two scenarios considering BS and/or user densifications. Our numerical results highlight the existence of optimal node densities maximizing these joint probabilities.
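A minimal Monte Carlo sketch of the kind of quantity studied here, assuming a homogeneous Poisson point process of base stations, one active uplink user per BS, simple power-law path loss, and illustrative transmit powers. The densities, powers, and the crude user placement are assumptions for illustration, not the paper's model or values.

```python
import numpy as np

def ul_dl_exposure_ratio(lam_bs=1e-5, side=2000.0, p_bs=10.0, p_ue=0.2,
                         alpha=3.5, n_trials=500, seed=0):
    """Monte Carlo sketch of the UL-to-DL exposure ratio at a probe location.

    Base stations form a homogeneous Poisson point process of density `lam_bs`
    (per m^2) in a `side` x `side` window; one uplink user is dropped uniformly
    near each BS as a crude stand-in for the type-II user process.
    """
    rng = np.random.default_rng(seed)
    probe = np.array([side / 2, side / 2])
    ratios = []
    for _ in range(n_trials):
        n_bs = rng.poisson(lam_bs * side * side)
        if n_bs == 0:
            continue
        bs = rng.uniform(0, side, size=(n_bs, 2))
        ue = bs + rng.uniform(-100, 100, size=(n_bs, 2))  # one active UL user per BS
        d_bs = np.maximum(np.linalg.norm(bs - probe, axis=1), 1.0)
        d_ue = np.maximum(np.linalg.norm(ue - probe, axis=1), 1.0)
        dl = np.sum(p_bs * d_bs ** -alpha)  # downlink exposure from all BSs
        ul = np.sum(p_ue * d_ue ** -alpha)  # uplink exposure from neighboring users
        ratios.append(ul / dl)
    return float(np.mean(ratios))
```

Sweeping `lam_bs` (and the number of users per cell) in such a simulation is the empirical analogue of the densification scenarios the paper analyzes analytically.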

This paper presents a novel solution to address the challenges in achieving energy efficiency and cooperation for collision avoidance in UAV swarms. The proposed method combines Artificial Potential Field (APF) and Particle Swarm Optimization (PSO) techniques. APF provides environmental awareness and implicit coordination to UAVs, while PSO searches for collision-free and energy-efficient trajectories for each UAV in a decentralized manner under this implicit coordination. This decentralized approach is achieved by minimizing a novel cost function that leverages the advantages of the active contour model from image processing. Additionally, future trajectories are predicted by approximating the minima of the novel cost function using the calculus of variations, which enables proactive actions and defines the initial conditions for PSO. We propose a two-branch trajectory planning framework that ensures UAVs only change altitudes when necessary for energy considerations. Extensive experiments are conducted to evaluate the effectiveness and efficiency of our method in various situations.
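To make the APF half of the method concrete, here is a minimal sketch of a classic artificial-potential-field force combining goal attraction with repulsion from nearby UAVs. The gains and safety radius are illustrative assumptions; the paper's PSO search and active-contour cost are not reproduced.

```python
import numpy as np

def apf_force(pos, goal, neighbors, k_att=1.0, k_rep=50.0, d_safe=10.0):
    """Artificial potential field force at position `pos`: attraction toward
    `goal` plus repulsion from neighboring UAVs closer than `d_safe`."""
    force = k_att * (goal - pos)                 # attractive term toward the goal
    for q in neighbors:
        diff = pos - q
        d = np.linalg.norm(diff)
        if 0 < d < d_safe:
            # repulsion grows sharply as a neighbor enters the safety radius
            force += k_rep * (1.0 / d - 1.0 / d_safe) * diff / d**3
    return force
```

In the paper's framework, PSO would then search for collision-free, energy-efficient waypoints under the implicit coordination that such a field provides.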

Over the last decade, applications of neural networks have spread to cover all aspects of life. A large number of companies base their businesses on building products that use neural networks for tasks such as face recognition, machine translation, and autonomous cars. They are being used in safety- and security-critical applications like high-definition maps and medical wristbands, or in globally used products like Google Translate and ChatGPT. Much of the intellectual property underpinning these products is encoded in the exact configuration of the neural networks. Consequently, protecting these is of utmost priority to businesses. At the same time, many of these products need to operate under a strong threat model, in which the adversary has unfettered physical control of the product. Past work has demonstrated that with physical access, attackers can reverse engineer neural networks that run on scalar microcontrollers, like ARM Cortex M3. However, for performance reasons, neural networks are often implemented on highly parallel general-purpose graphics processing units (GPGPUs), and so far, attacks on these have only recovered coarse-grained information on the structure of the neural network, but failed to retrieve the weights and biases. In this work, we present BarraCUDA, a novel attack on GPGPUs that can completely extract the parameters of neural networks. BarraCUDA uses correlation electromagnetic analysis to recover the weights and biases in the convolutional layers of neural networks. We use BarraCUDA to attack the popular NVIDIA Jetson Nano device, demonstrating successful parameter extraction of neural networks in a highly parallel and noisy environment.
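For intuition, the sketch below shows the generic correlation-analysis recipe that attacks of this kind build on: correlate a hypothetical leakage model against measured traces for every candidate secret value. The function signature and leakage model are assumptions for illustration; BarraCUDA's GPU-specific leakage model and trace processing are not reproduced here.

```python
import numpy as np

def cema_recover_value(traces, inputs, candidates, leak_model):
    """Generic correlation (electro)magnetic analysis sketch.

    traces:     (n_traces, n_samples) EM measurements.
    inputs:     (n_traces,) known inputs fed to the device.
    candidates: iterable of hypothetical secret values (e.g. quantized weights).
    leak_model: leak_model(input, candidate) -> predicted leakage (float),
                e.g. the Hamming weight of input * candidate.
    Returns the candidate whose predicted leakage correlates best with the traces.
    """
    centered = traces - traces.mean(axis=0)
    best, best_corr = None, -1.0
    for cand in candidates:
        hyp = np.array([leak_model(x, cand) for x in inputs], dtype=float)
        hyp -= hyp.mean()
        # Pearson correlation between the hypothesis and every sample point
        num = hyp @ centered
        den = np.sqrt((hyp @ hyp) * (centered * centered).sum(axis=0)) + 1e-12
        corr = np.abs(num / den).max()
        if corr > best_corr:
            best, best_corr = cand, corr
    return best, best_corr
```

The correct candidate produces leakage predictions that line up with the measured emanations at the clock cycles where the secret value is processed, which is why its correlation peak stands out above the wrong guesses.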

This paper presents a large-scale analysis of the cryptocurrency community on Reddit, shedding light on the intricate relationship between the evolution of their activity, emotional dynamics, and price movements. We analyze over 130M posts on 122 cryptocurrency-related subreddits using temporal analysis, statistical modeling, and emotion detection. While /r/CryptoCurrency and /r/dogecoin are the most active subreddits, we find an overall surge in cryptocurrency-related activity in 2021, followed by a sharp decline. We also uncover a strong relationship in terms of cross-correlation between online activity and the price of various coins, with the changes in the number of posts mostly leading the price changes. Backtesting analysis shows that a straightforward strategy based on the cross-correlation, where one buys/sells a coin if the daily number of posts about it is greater/less than on the previous day, would have led to a 3x return on investment. Finally, we shed light on the emotional dynamics of the cryptocurrency communities, finding that joy becomes a prominent indicator during upward market performance, while a market decline manifests in an increase in anger.
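A toy backtest of the strategy described above (hold a coin on days after its post count rose, stay in cash otherwise) might look like the following; the input series and the one-day execution lag are illustrative assumptions.

```python
import numpy as np

def backtest_posts_strategy(posts, prices):
    """Hold the coin on the day after its daily post count exceeded the previous
    day's count, stay in cash otherwise. `posts` and `prices` are equal-length
    daily series; returns the cumulative return multiple (1.0 = break even)."""
    posts = np.asarray(posts, dtype=float)
    prices = np.asarray(prices, dtype=float)
    daily_ret = prices[1:] / prices[:-1]              # day-over-day price multiples
    signal = posts[1:] > posts[:-1]                    # "buy" when activity rises
    held = np.where(signal[:-1], daily_ret[1:], 1.0)   # act on the signal the next day
    return float(np.prod(held))
```

A cumulative multiple around 3.0 would correspond to the 3x return on investment reported in the abstract, although real results depend on transaction costs and the exact execution rules, which this toy omits.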

The advent of large language models marks a revolutionary breakthrough in artificial intelligence. With the unprecedented scale of training data and model parameters, the capability of large language models has been dramatically improved, leading to human-like performance in understanding, language synthesis, and common-sense reasoning. Such a major leap forward in general AI capacity will change how personalization is conducted. For one thing, it will reform the way humans interact with personalization systems. Instead of being a passive medium of information filtering, large language models provide a foundation for active user engagement. On top of such a new foundation, user requests can be proactively explored, and the information users need can be delivered in a natural and explainable way. For another, it will considerably expand the scope of personalization, growing it from the sole function of collecting personalized information to the compound function of providing personalized services. By leveraging large language models as a general-purpose interface, personalization systems may compile user requests into plans, call the functions of external tools to execute those plans, and integrate the tools' outputs to complete end-to-end personalization tasks. Today, large language models are still being developed, and their application to personalization remains largely unexplored. Therefore, we consider it the right time to review the challenges in personalization and the opportunities to address them with LLMs. In particular, we dedicate this perspective paper to the discussion of the following aspects: the development of and challenges for existing personalization systems, the newly emerged capabilities of large language models, and the potential ways of making use of large language models for personalization.
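A hypothetical sketch of the "compile, execute, integrate" loop described above, assuming an `llm` callable and a registry of tool functions; none of these names correspond to a real API, and the plan format is invented for illustration.

```python
from typing import Callable, Dict, List

def personalize(request: str, llm: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]]) -> str:
    """Compile a user request into a plan of tool calls, execute the tools,
    and integrate their outputs into one personalized response."""
    plan = llm(
        "Break this request into one tool call per line as 'tool_name: argument'. "
        f"Available tools: {list(tools)}.\nRequest: {request}"
    )
    observations: List[str] = []
    for line in plan.splitlines():
        if ":" not in line:
            continue
        name, arg = (part.strip() for part in line.split(":", 1))
        if name in tools:
            observations.append(f"{name} -> {tools[name](arg)}")  # execute the plan step
    return llm(
        f"Request: {request}\nTool results:\n" + "\n".join(observations)
        + "\nCompose the final personalized response."
    )
```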

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explanation methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.

With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next-generation beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys on B5G security focus on the performance of AI/ML models and their accuracy, while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of system security transparent and comprehensible to stakeholders, thereby making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, together with the use cases that general users would ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions, building on currently ongoing projects involving XAI.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
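As a concrete taste of category (1), the sketch below applies global magnitude pruning followed by uniform affine quantization to a weight tensor; the sparsity level and bit width are illustrative defaults, not recommendations from the survey.

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.5, n_bits=8):
    """Global magnitude pruning followed by uniform affine quantization.
    Returns the pruned weights and their dequantized (lossy) reconstruction."""
    w = np.asarray(weights, dtype=float)
    # 1) magnitude pruning: zero out the smallest |w| until `sparsity` is reached
    threshold = np.quantile(np.abs(w), sparsity)
    pruned = np.where(np.abs(w) >= threshold, w, 0.0)
    # 2) uniform quantization of the surviving weights to n_bits integer levels
    lo, hi = pruned.min(), pruned.max()
    scale = (hi - lo) / (2 ** n_bits - 1) if hi > lo else 1.0
    q = np.round((pruned - lo) / scale).astype(np.int32)
    dequantized = q * scale + lo
    return pruned, dequantized
```

Pruning removes redundant parameters while quantization shrinks the storage and arithmetic cost of the ones that remain, which is why the two are often combined in low-power deployments.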

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural thought, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey the recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent, successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude this paper and discuss remaining challenges and possible directions on this topic.
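To illustrate the low-rank factorization scheme, the sketch below approximates a dense layer's weight matrix with a truncated SVD; the rank is an illustrative parameter, not a value from the survey.

```python
import numpy as np

def low_rank_factorize(weight, rank):
    """Approximate a dense layer's weight matrix W (out x in) by two thinner
    factors A (out x rank) and B (rank x in), so that W ~= A @ B. This cuts the
    parameter count whenever rank << min(out, in)."""
    u, s, vt = np.linalg.svd(np.asarray(weight, dtype=float), full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (out, rank), singular values folded into the left factor
    b = vt[:rank, :]             # (rank, in)
    return a, b                  # usable as two smaller consecutive layers
```

For example, factorizing a 1024x1024 layer at rank 64 replaces about 1.05M parameters with roughly 131k (2 x 1024 x 64), at the cost of an approximation error governed by the discarded singular values.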
