The intelligent reflecting surface (IRS) alters the behavior of wireless media and, consequently, has the potential to improve the performance and reliability of wireless systems such as communications and radar remote sensing. Recently, integrated sensing and communications (ISAC) has been widely studied as a means to efficiently utilize spectrum and thereby save cost and power. This article investigates the role of the IRS in future ISAC paradigms. While there is a rich heritage of recent research on IRS-assisted communications, IRS-assisted radar and ISAC remain relatively unexamined. We discuss the putative advantages of IRS deployment, such as coverage extension, interference suppression, and enhanced parameter estimation, for both communications and radar. We introduce possible IRS-assisted ISAC scenarios with common and dedicated surfaces. The article provides an overview of related signal processing techniques and design challenges, such as wireless channel acquisition, waveform design, and security.

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

Despite the recent success of machine learning algorithms, most of these models still face several drawbacks on more complex tasks that require interaction between different sources, such as multimodal input data and logical time sequences. The biological brain, on the other hand, is highly adept in this sense, empowered to automatically manage and integrate such streams of information through millions of years of evolution. In this context, this paper draws inspiration from recent discoveries on cortical circuits in the brain to propose a more biologically plausible self-supervised machine learning approach that combines multimodal information using intra-layer modulations together with canonical correlation analysis (CCA), as well as a memory mechanism to keep track of temporal data: the so-called Canonical Cortical Graph Neural Networks. The approach outperformed recent state-of-the-art results in terms of both cleaner audio reconstruction and energy efficiency, the latter reflected in a reduced and smoother neuron firing-rate distribution, suggesting the model as a suitable approach for speech enhancement in future audio-visual hearing-aid devices.
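The multimodal fusion in this abstract rests on canonical correlation analysis. As a hedged illustration only (classical linear CCA on synthetic two-view data, not the paper's cortical graph model), a minimal NumPy sketch:

```python
import numpy as np

def cca(X, Y, k=1):
    """Classical CCA: projections of two centered data matrices
    X (n x p) and Y (n x q) that maximize correlation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance blocks for numerical stability
    Cxx = X.T @ X / n + 1e-6 * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + 1e-6 * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view, then SVD the cross-covariance
    Mx = np.linalg.inv(np.linalg.cholesky(Cxx))
    My = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Mx @ Cxy @ My.T)
    Wx = Mx.T @ U[:, :k]       # projection for modality X
    Wy = My.T @ Vt[:k].T       # projection for modality Y
    return Wx, Wy, s[:k]       # s holds the canonical correlations

# Toy example: two noisy views of the same latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
Y = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
Wx, Wy, corr = cca(X, Y)
print(round(float(corr[0]), 2))  # high (near 1) when the views share structure
```

The whitening step ensures the projected views have identity covariance, so the singular values are exactly the canonical correlations.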

The reconfigurable intelligent surface (RIS) is a promising technology that is anticipated to enable high spectrum and energy efficiencies in future wireless communication networks. This paper investigates optimum location-based RIS selection policies in RIS-aided wireless networks to maximize the end-to-end signal-to-noise ratio for product-scaling and sum-scaling path-loss models where the received power scales with the product and sum of the transmitter-to-RIS and RIS-to-receiver distances, respectively. These scaling laws cover the important cases of end-to-end path-loss models in RIS-aided wireless systems. The random locations of all available RISs are modeled as a Poisson point process. To quantify the network performance, the outage probabilities and average rates attained by the proposed RIS selection policies are evaluated by deriving the distance distribution of the chosen RIS node as per the selection policies for both product-scaling and sum-scaling path-loss models. We also propose a limited-feedback RIS selection framework to achieve distributed network operation. The outage probabilities and average rates obtained by the limited-feedback RIS selection policies are derived for both path-loss models as well. The numerical results show notable performance gains obtained by the proposed RIS selection policies.
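The selection policies above can be visualized with a small Monte Carlo sketch: RIS locations drawn from a Poisson point process in a disk, with a minimum-distance-product rule under the product-scaling path-loss model. All parameter values (density, radius, path-loss exponent) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(trials=2000, density=1e-3, radius=100.0):
    """Monte Carlo sketch: RIS locations follow a Poisson point process
    in a disk; compare min-distance-product selection against picking a
    RIS at random, under the product-scaling path-loss model."""
    tx, rx = np.array([-50.0, 0.0]), np.array([50.0, 0.0])
    alpha = 2.0                                    # path-loss exponent (assumption)
    best_gain, rand_gain = [], []
    for _ in range(trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        if n == 0:
            continue                               # no RIS available this trial
        r = radius * np.sqrt(rng.uniform(size=n))  # uniform positions in the disk
        th = rng.uniform(0.0, 2.0 * np.pi, size=n)
        pts = np.c_[r * np.cos(th), r * np.sin(th)]
        prod = np.linalg.norm(pts - tx, axis=1) * np.linalg.norm(pts - rx, axis=1)
        best_gain.append(prod.min() ** -alpha)     # policy: minimize d1 * d2
        rand_gain.append(prod[rng.integers(n)] ** -alpha)
    return np.mean(best_gain), np.mean(rand_gain)

best, random_pick = simulate()
print(best > random_pick)  # True: the selection policy improves mean gain
```

The sum-scaling variant would replace `prod` with the sum of the two distances; the rest of the sketch is unchanged.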

In this paper, we investigate secure transmissions in integrated satellite-terrestrial communications and propose a green-interference-based symbiotic security scheme. In particular, the co-channel interference induced by spectrum sharing between satellite and terrestrial networks and the inter-beam interference due to frequency reuse among satellite beams serve as green interference to assist the symbiotic secure transmission, so that secure transmission of both the satellite and terrestrial links is guaranteed simultaneously. Specifically, to realize symbiotic security, we formulate a problem that maximizes the sum secrecy rate of satellite users via cooperative beamforming optimization while guaranteeing a secrecy-rate constraint for each terrestrial user. Since the formulated problem is non-convex and intractable, Taylor expansion and semi-definite relaxation (SDR) are adopted to reformulate it, and a successive convex approximation (SCA) algorithm is designed to solve it. Finally, the tightness of the relaxation is proved. In addition, numerical results verify the efficiency of our proposed approach.
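The core SCA idea, linearizing the concave term being subtracted so that each subproblem becomes easy to maximize, can be shown on a toy one-dimensional secrecy-rate-like objective. This scalar example is our own illustration, not the paper's beamforming problem:

```python
import numpy as np

def sca_power(a=4.0, b=1.0, P=10.0, iters=20):
    """SCA sketch for f(p) = log(1 + a*p) - log(1 + b*p) over [0, P]:
    at each iterate the subtracted concave term log(1 + b*p) is replaced
    by its first-order Taylor expansion, yielding a concave surrogate
    that is maximized (here by a fine grid search)."""
    p = 0.0
    grid = np.linspace(0.0, P, 2001)
    for _ in range(iters):
        # Surrogate: log(1+a*q) - [log(1+b*p) + b*(q - p)/(1 + b*p)]
        surrogate = np.log1p(a * grid) - (np.log1p(b * p) + b * (grid - p) / (1 + b * p))
        p = grid[np.argmax(surrogate)]    # maximize the concave surrogate
    return p, np.log1p(a * p) - np.log1p(b * p)

p_star, rate = sca_power()
# With a > b the objective is strictly increasing, so SCA should reach p = P
print(abs(p_star - 10.0) < 1e-9)
```

Each surrogate lower-bounds the true objective and touches it at the current iterate, so the sequence of objective values is non-decreasing — the standard SCA convergence argument.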

Federated learning (FL) over resource-constrained wireless networks has recently attracted much attention. However, most existing studies consider one FL task in single-cell wireless networks and ignore the impact of downlink/uplink inter-cell interference on the learning performance. In this paper, we investigate FL over a multi-cell wireless network, where each cell performs a different FL task and over-the-air computation (AirComp) is adopted to enable fast uplink gradient aggregation. We conduct convergence analysis of AirComp-assisted FL systems, taking into account the inter-cell interference in both the downlink and uplink model/gradient transmissions, which reveals that the distorted model/gradient exchanges induce a gap to hinder the convergence of FL. We characterize the Pareto boundary of the error-induced gap region to quantify the learning performance trade-off among different FL tasks, based on which we formulate an optimization problem to minimize the sum of error-induced gaps in all cells. To tackle the coupling between the downlink and uplink transmissions as well as the coupling among multiple cells, we propose a cooperative multi-cell FL optimization framework to achieve efficient interference management for downlink and uplink transmission design. Results demonstrate that our proposed algorithm achieves much better average learning performance over multiple cells than non-cooperative baseline schemes.
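The AirComp uplink step mentioned above can be sketched as follows: each device pre-scales its gradient by the inverse of its channel so that the superimposed analog signals add up to the desired sum at the server. This is a minimal real-valued sketch under illustrative assumptions (known scalar channels, a single cell, no inter-cell interference); the paper's multi-cell, interference-aware design is far richer:

```python
import numpy as np

rng = np.random.default_rng(2)

def aircomp_aggregate(grads, h, p_max=1.0, noise_std=0.05):
    """Over-the-air aggregation sketch: device k pre-scales its gradient
    by eta / h_k so the superimposed signals sum coherently; the server
    receives sum_k h_k * (eta / h_k) * g_k + noise and rescales."""
    # Choose eta so the weakest channel still meets the power budget
    eta = np.sqrt(p_max) * np.abs(h).min()
    tx = [(eta / h_k) * g for h_k, g in zip(h, grads)]   # per-device pre-scaling
    rx = sum(h_k * x for h_k, x in zip(h, tx))           # superposition over the air
    rx = rx + noise_std * rng.normal(size=rx.shape)      # additive receiver noise
    return rx / (eta * len(grads))                       # recover the average

grads = [rng.normal(size=8) for _ in range(4)]
h = rng.uniform(0.5, 1.5, size=4)        # real scalar channel gains (assumption)
est = aircomp_aggregate(grads, h)
true_avg = np.mean(grads, axis=0)
print(np.allclose(est, true_avg, atol=0.2))  # True: close to the exact average
```

Because every device transmits simultaneously, aggregation cost does not grow with the number of devices — the motivation for AirComp in the abstract.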

The reconfigurable intelligent surface (RIS) can effectively control the wavefront of impinging signals and has emerged as a cost-effective, promising solution to improve the spectrum and energy efficiency of wireless systems. Most existing research on RIS assumes that the hardware operations are perfect. However, both the physical transceiver and the RIS suffer from inevitable hardware impairments in practice, which can lead to severe system performance degradation and increase the complexity of beamforming optimization. Consequently, existing results on RIS, including channel estimation, beamforming optimization, and spectrum and energy efficiency analysis, cannot be directly applied under hardware impairments. In this paper, taking hardware impairments into consideration, we conduct joint transmit and reflect beamforming optimization and reevaluate the system performance. First, we characterize closed-form estimators of the direct and cascaded channels in both single-user and multi-user cases, and analyze the impact of hardware impairments on channel estimation accuracy. Then, the optimal transmit beamforming solution is derived, and a gradient-descent-based algorithm is proposed to optimize the reflect beamforming. Moreover, we analyze three types of asymptotic channel capacity, with respect to the transmit power, the number of antennas, and the number of reflecting elements. Finally, in terms of system energy consumption, we analyze the power scaling law and the energy efficiency. Our experimental results also reveal an encouraging phenomenon: an RIS-assisted wireless system with massive reflecting elements can achieve both high spectrum and energy efficiency without the need for massive antennas and without allocating excessive resources to reflect beamforming optimization.
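A gradient-based reflect-beamforming loop of the kind mentioned above can be sketched for a single-antenna link: ascend on the RIS phase shifts to maximize the effective channel gain. The setup (single user, perfect CSI, no hardware impairments) is a deliberate simplification for illustration, not the paper's impaired-hardware algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def optimize_phases(h_d, h_r, g, iters=1000, lr=0.01):
    """Gradient ascent on RIS phase shifts theta to maximize
    |h_d + sum_n conj(g_n) e^{j theta_n} h_r_n|^2, keeping every
    reflecting element at unit modulus by construction."""
    a = np.conj(g) * h_r                       # per-element cascaded gains
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(a))
    for _ in range(iters):
        v = np.exp(1j * theta)
        eff = h_d + np.sum(a * v)              # effective scalar channel
        # d|eff|^2 / d theta_n = 2 Re{ conj(eff) * j * a_n * e^{j theta_n} }
        theta += lr * 2.0 * np.real(np.conj(eff) * 1j * a * v)
    eff = h_d + np.sum(a * np.exp(1j * theta))
    return theta, np.abs(eff) ** 2

N = 16
cn = lambda *s: (rng.normal(size=s) + 1j * rng.normal(size=s)) / np.sqrt(2)
h_d, h_r, g = cn(), cn(N), cn(N)               # Rayleigh channels (assumption)
theta, gain = optimize_phases(h_d, h_r, g)
upper = (np.abs(h_d) + np.sum(np.abs(np.conj(g) * h_r))) ** 2  # phase-aligned bound
print(round(gain / upper, 3))  # approaches 1 as the phases align
```

Parameterizing by the phase angles keeps the unit-modulus constraint satisfied at every step, which is why simple unconstrained gradient ascent suffices here.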

Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with richer virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation in mind, but most remain incoherent rather than integrated into one platform. In this context, the metaverse, a term formed by combining "meta" and "universe", has been introduced as a shared virtual world fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among these technologies, AI has shown great importance in processing big data to enhance immersive experience and enable human-like intelligence of virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first provide preliminaries of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then present a comprehensive investigation of AI-based methods across six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are examined for deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open future research directions in AI for the metaverse.

Autonomous driving has achieved a significant milestone in research and development over the last decade. There is increasing interest in the field as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and such deficiency hinders this technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we show the taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for an architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.

Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting development of algorithms against complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.

The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models, as shown on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
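The thesis's parametric DRO reformulations are not reproduced here; as a hedged illustration of the underlying idea, the following sketch runs a group-DRO-style loop (up-weight the worst-performing group, then descend on the re-weighted loss) on a synthetic two-group logistic regression task. The group construction and all hyperparameters are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic task: two groups share the labeling rule (x1 - x2 > 0),
# but the minority group's inputs are shifted (subpopulation shift).
def make_group(n, shift):
    X = rng.normal(size=(n, 2)) + shift
    y = (X @ np.array([1.0, -1.0]) > 0).astype(float)
    return X, y

groups = [make_group(200, 0.0), make_group(40, 2.0)]

def logistic_loss(w, X, y):
    s = 2 * y - 1                        # labels in {-1, +1}
    return np.mean(np.log1p(np.exp(-s * (X @ w))))

def logistic_grad(w, X, y):
    s = 2 * y - 1
    return -(X * (s / (1 + np.exp(s * (X @ w))))[:, None]).mean(axis=0)

w = np.zeros(2)
q = np.ones(len(groups)) / len(groups)   # adversarial group weights
for _ in range(300):
    losses = np.array([logistic_loss(w, X, y) for X, y in groups])
    q *= np.exp(0.5 * losses)            # exponentiated ascent: favor worst group
    q /= q.sum()
    g = sum(qk * logistic_grad(w, X, y) for qk, (X, y) in zip(q, groups))
    w -= 0.5 * g                         # descend on the re-weighted loss

worst = max(logistic_loss(w, X, y) for X, y in groups)
print(worst < np.log(2))  # True: improved over the uninformative w = 0
```

Plain empirical risk minimization would let the 40-sample minority group's loss lag behind; the adversarial weights `q` force the optimizer to attend to whichever group is currently worst.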

Current deep learning research is dominated by benchmark evaluation. A method is regarded as favorable if it empirically performs well on the dedicated test set. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving sets of benchmark data are investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten due to the iterative parameter updates. However, comparison of individual methods is nevertheless treated in isolation from real-world application and typically judged by monitoring accumulated test-set performance. The closed-world assumption remains predominant: it is assumed that during deployment a model is guaranteed to encounter data that stems from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown instances and to break down in the face of corrupted data. In this work we argue that notable lessons from open set recognition (the identification of statistically deviating data outside of the observed dataset) and the adjacent field of active learning (where data is incrementally queried such that the expected performance gain is maximized) are frequently overlooked in the deep learning era. Based on these forgotten lessons, we propose a consolidated view to bridge continual learning, active learning, and open set recognition in deep neural networks. Our results show that this not only benefits each individual paradigm, but also highlights their natural synergies in a common framework. We empirically demonstrate improvements in alleviating catastrophic forgetting, querying data in active learning, and selecting task orders, while exhibiting robust open-world application where previously proposed methods fail.
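One of the "forgotten lessons" referenced above, rejecting statistically deviating inputs, has a classic baseline: thresholding the maximum softmax probability. The sketch below uses synthetic logits (it is not the paper's method) to show why known-class inputs tend to score higher than unknown ones:

```python
import numpy as np

rng = np.random.default_rng(5)

def max_softmax_score(logits):
    """Maximum softmax probability: a standard open-set baseline score.
    Inputs that deviate from the training classes tend to produce flat
    logits and therefore low scores."""
    z = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

# Simulate peaked logits for known-class inputs vs. flat logits for unknowns
known = rng.normal(size=(500, 10))
known[np.arange(500), rng.integers(10, size=500)] += 5.0
unknown = rng.normal(size=(500, 10))

s_known = max_softmax_score(known)
s_unknown = max_softmax_score(unknown)
threshold = 0.5
print((s_known > threshold).mean() > (s_unknown > threshold).mean())  # True
```

Overconfidence on corrupted or out-of-distribution data, as noted in the abstract, is precisely the failure mode of this baseline that motivates the consolidated open-world view proposed by the paper.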
