
Physical-layer key generation (PKG) exploits the reciprocity and randomness of wireless channels to generate a symmetric key between two legitimate communicating parties. However, in multi-cell systems, PKG suffers from severe pilot contamination due to the reuse of pilots across cells. In this paper, we invoke multiple reconfigurable intelligent surfaces (RISs) to adaptively shape the environment and enhance the PKG performance. To this end, we formulate an optimization problem that maximizes the weighted sum key rate (WSKR) by jointly optimizing the precoding matrices at the base stations (BSs) and the phase shifts at the RISs. To address the non-convexity of the problem, we derive an upper bound on the WSKR and prove its tightness. To tackle the upper-bound maximization problem, we apply an alternating optimization (AO)-based algorithm that divides the joint optimization into two sub-problems. We apply a Lagrangian dual approach based on the Karush-Kuhn-Tucker (KKT) conditions to the sub-problem of precoding matrices and adopt a projected gradient ascent (PGA) algorithm for the sub-problem of phase shifts. Simulation results confirm the near-optimal performance of the proposed algorithm and the effectiveness of RISs in improving the WSKR by mitigating pilot contamination.
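As a hedged illustration of the projected gradient ascent step for the phase shifts, the sketch below maximizes a simple single-link SNR surrogate rather than the paper's WSKR objective; the channel vectors, element count, and noise power are toy assumptions, and the unit-modulus constraint is handled by optimizing the phase angles directly (so the "projection" reduces to wrapping the phases).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                   # number of RIS elements (toy assumption)
sigma2 = 1e-2            # receiver noise power (toy assumption)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # BS -> RIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS -> user
a = np.conj(g) * h       # per-element cascaded channel coefficients

def rate(theta):
    s = np.sum(a * np.exp(1j * theta))
    return np.log2(1.0 + np.abs(s) ** 2 / sigma2)

theta = rng.uniform(0, 2 * np.pi, N)
lr = 0.1
for _ in range(200):
    s = np.sum(a * np.exp(1j * theta))
    # d|s|^2/dtheta_k = -2 Im(conj(s) a_k e^{j theta_k}), chained through log2(1 + .)
    grad = -2.0 * np.imag(np.conj(s) * a * np.exp(1j * theta)) \
           / ((sigma2 + np.abs(s) ** 2) * np.log(2))
    theta = np.mod(theta + lr * grad, 2 * np.pi)   # "projection": wrap phases

print(f"optimized surrogate rate: {rate(theta):.2f} bits")
```

The ascent should approach the known closed-form optimum for this surrogate, where each phase cancels the angle of its cascaded coefficient.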

相關內容

Artificial intelligence (AI) is envisioned to play a key role in future wireless technologies, with deep neural networks (DNNs) enabling digital receivers to learn to operate in challenging communication scenarios. However, wireless receiver design poses unique challenges that fundamentally differ from those encountered in traditional deep learning domains. The main challenges arise from the limited power and computational resources of wireless devices, as well as from the dynamic nature of wireless communications, which causes continual changes to the data distribution. These challenges impair conventional AI based on highly parameterized DNNs, motivating the development of adaptive, flexible, and lightweight AI for wireless communications, which is the focus of this article. Here, we propose that AI-based design of wireless receivers requires rethinking the three main pillars of AI: architecture, data, and training algorithms. In terms of architecture, we review how to design compact DNNs via model-based deep learning. Then, we discuss how to acquire training data for deep receivers without compromising spectral efficiency. Finally, we review efficient, reliable, and robust training algorithms via meta-learning and generalized Bayesian learning. Numerical results are presented to demonstrate the complementary effectiveness of each of the surveyed methods. We conclude by presenting opportunities for future research on the development of practical deep receivers.
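To make the model-based deep learning idea concrete, here is a minimal sketch of a deep-unfolded detector: a classical projected gradient iteration for y = Hx + n whose only learned parameters are the per-iteration step sizes. This is not an architecture from the article, only an illustration of how embedding domain structure keeps the parameter count tiny.

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """Deep-unfolded projected gradient detector for y = Hx + n, x in {-1,+1}^K.
    Only num_iters scalar step sizes are trainable, so the model is compact
    and inherits its structure from the underlying optimization problem."""
    def __init__(self, num_iters: int = 10):
        super().__init__()
        self.step = nn.Parameter(0.1 * torch.ones(num_iters))

    def forward(self, H: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(H.shape[-1], dtype=y.dtype, device=y.device)
        for delta in self.step:
            grad = H.T @ (H @ x - y)           # gradient of 0.5 * ||y - Hx||^2
            x = torch.tanh(x - delta * grad)   # soft projection onto [-1, 1]
        return x
```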

We consider an atomic congestion game in which each player $i$ either participates in the game with an exogenous and known probability $p_{i}\in(0,1]$, independently of everybody else, or stays out and incurs no cost. We compute the parameterized price of anarchy to characterize the impact of demand uncertainty on the efficiency of selfish behavior, considering two different notions of a social planner. A prophet planner knows the realization of the random participation in the game; the ordinary planner does not. As a consequence, a prophet planner can compute an adaptive social optimum that selects different solutions depending on the players that turn out to be active, whereas an ordinary planner faces the same uncertainty as the players and can only compute social optima with respect to the player participation distribution. For both planners, we derive the precise price of anarchy, which arises from an optimization problem parameterized by the maximum participation probability $q=\max_{i} p_{i}$. For the case of affine costs, we provide an analytic expression for the ordinary and prophet price of anarchy, parameterized as a function of $q$.
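The prophet/ordinary distinction can be made concrete on a toy instance. The sketch below enumerates a two-player game with one affine resource (per-player cost equal to its load) and one constant-cost resource; the participation probabilities are arbitrary assumptions. The prophet planner re-optimizes per realization of the active set, while the ordinary planner commits to one fixed assignment in expectation, and the prophet optimum comes out strictly cheaper.

```python
from itertools import product

p = [0.6, 0.6]   # exogenous participation probabilities (assumed values)

def total_cost(assign, active):
    """Resource 0 is affine (per-player cost = its load); resource 1 costs 2."""
    load0 = sum(1 for i, r in enumerate(assign) if active[i] and r == 0)
    return sum((load0 if r == 0 else 2.0)
               for i, r in enumerate(assign) if active[i])

def realization_prob(active):
    prob = 1.0
    for pi, a in zip(p, active):
        prob *= pi if a else (1 - pi)
    return prob

# Ordinary planner: one fixed assignment, evaluated in expectation.
ordinary = min(
    sum(realization_prob(act) * total_cost(assign, act)
        for act in product([0, 1], repeat=2))
    for assign in product([0, 1], repeat=2))

# Prophet planner: re-optimizes after observing who is active.
prophet = sum(
    realization_prob(act) * min(total_cost(assign, act)
                                for assign in product([0, 1], repeat=2))
    for act in product([0, 1], repeat=2))

print(f"ordinary optimum: {ordinary:.3f}, prophet optimum: {prophet:.3f}")
```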

Real-time perception and motion planning are two crucial tasks for autonomous driving. While many research works focus on improving perception and motion planning individually, it is still unclear how a perception error may adversely impact motion planning results. In this work, we propose a joint simulation framework with LiDAR-based perception and motion planning for real-time automated driving. Taking the sensor input from the CARLA simulator with additive noise, a LiDAR perception system is designed to detect and track all surrounding vehicles and to provide precise orientation and velocity information. Next, we introduce a new collision-bound representation that reduces the communication cost between the perception module and the motion planner. A novel collision checking algorithm is implemented using line intersection checking, which is more efficient over long distance ranges than the traditional occupancy-grid method. We evaluate the joint simulation framework in CARLA for urban driving scenarios. Experiments show that our proposed automated driving system can execute at 25 Hz, which meets the real-time requirement. The LiDAR perception system has high accuracy within 20 meters when evaluated against the ground truth. The motion planner consistently maintains a safe distance when tested in CARLA urban driving scenarios.
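The line-intersection test at the core of such a collision checker can be written with the standard orientation predicate. The sketch below is an illustration rather than the paper's exact implementation: it detects proper crossings only, and omits the collinear/touching cases and the situation where one boundary fully contains the other.

```python
def _orient(a, b, c):
    """Twice the signed area of triangle abc:
    > 0 counter-clockwise, < 0 clockwise, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1 = _orient(q1, q2, p1)
    d2 = _orient(q1, q2, p2)
    d3 = _orient(p1, p2, q1)
    d4 = _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def bounds_collide(ego_edges, obstacle_edges):
    """Collision if any ego boundary edge crosses any obstacle boundary edge.
    Each edge is a pair of (x, y) points."""
    return any(segments_intersect(a, b, c, d)
               for a, b in ego_edges for c, d in obstacle_edges)
```

Unlike an occupancy grid, the cost here scales with the number of boundary edges rather than with the checked distance, which is where the long-range efficiency comes from.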

In order to satisfy their ever-increasing capacity and compute requirements, many machine learning models are distributed across multiple nodes using space-efficient parallelism strategies. As a result, collective communications are often on the critical path, and hiding their latency by overlapping kernel-granular communication and computation is difficult due to the absence of independent computation. In this work, we propose fusing computation with communication using GPU-initiated networking, and leverage GPUs' massive parallelism to enable fine-grained overlap of the fused operations. We have developed a single, self-contained GPU kernel where workgroups (WGs) immediately communicate their results to remote GPUs when they complete their computation. Meanwhile, other WGs within the same kernel perform overlapping computation, maintaining high ALU utilization. Furthermore, we propose zero-copy optimizations for peer-to-peer GPU communication where the data computed by one GPU is directly written to the destination buffers within the peer GPUs, eliminating intermediate stores and extra buffering. Our approach leverages the emerging multi-node GPU system trend where GPUs are physically close to the network with direct GPU-NIC interconnects. We demonstrate our approach by creating an embedding + All-to-All fused kernel which overlaps embedding operations and the dependent all-to-all collective in DLRM models. We evaluate our approach both using simulation and real hardware. Our evaluations show that our approach can effectively overlap All-to-All communication with embedding computations, subsequently reducing their combined execution time by 31% on average (up to 58%) for inter-node and by 25% (up to 35%) for intra-node configurations. Scale-out simulations indicate that our approach reduces DLRM execution time by ~10% for a 128-node system.
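A fully GPU-initiated fused kernel cannot be reproduced in a few lines, but the overlap it exploits can be approximated from the host. The hedged PyTorch sketch below chunks the embedding lookups and issues an asynchronous all-to-all per chunk, so the exchange of one chunk is in flight while the next chunk's lookup executes; `tables` and `indices_chunks` are hypothetical inputs, and the distributed process group is assumed to be initialized.

```python
import torch
import torch.distributed as dist

def overlapped_embedding_alltoall(tables, indices_chunks):
    """Host-initiated approximation of compute/communication fusion:
    each chunk's all-to-all overlaps with the following chunk's lookup.
    Assumes dist.init_process_group() was called and that each chunk's
    rows divide evenly across ranks (an all_to_all_single requirement)."""
    outputs, handles = [], []
    for table, idx in zip(tables, indices_chunks):
        chunk = table[idx]                                    # lookup (compute)
        out = torch.empty_like(chunk)
        handles.append(dist.all_to_all_single(out, chunk, async_op=True))
        outputs.append(out)
    for h in handles:                                         # drain exchanges
        h.wait()
    return outputs
```

The paper's approach goes further by initiating the network transfers from within the kernel itself, removing the host round-trips this sketch still pays.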

Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses censoring in patients who do not experience the event of interest. Despite competitive performance on this problem, machine learning methods often ignore competing risks that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations, leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice.
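A minimal sketch of the monotonicity construction, under assumptions the abstract does not spell out: if every weight on the path from the time input to the output is constrained positive (here via softplus) and the activations are monotone, the output is non-decreasing in t, so it can represent a cumulative incidence function whose density follows exactly by automatic differentiation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneCIF(nn.Module):
    """Toy monotone-in-time network for one competing risk.
    Covariate weights are unconstrained; time weights are softplus-positive,
    and tanh/sigmoid are monotone, so dF/dt >= 0 by construction."""
    def __init__(self, x_dim: int, hidden: int = 32):
        super().__init__()
        self.wx = nn.Linear(x_dim, hidden)           # unconstrained covariate path
        self.wt = nn.Parameter(torch.randn(hidden))  # raw (pre-softplus) time weights
        self.out = nn.Parameter(torch.randn(hidden))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.wx(x) + F.softplus(self.wt) * t)  # t: shape (B, 1)
        return torch.sigmoid(h @ F.softplus(self.out))

# Exact density for the likelihood via autograd:
#   t = t.clone().requires_grad_(True)
#   cif = model(x, t)
#   density = torch.autograd.grad(cif.sum(), t)[0]   # dF/dt
```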

Enabled by the emerging industrial agent (IA) technology, swarm intelligence (SI) is envisaged to play an important role in the future industrial Internet of Things (IIoT) shaped by Sixth Generation (6G) mobile communications and digital twins (DTs). However, its fragility against data injection attacks may hinder its practical deployment. In this paper, we propose an efficient trust approach to address this security concern for SI.

Intelligent reflecting surfaces (IRSs) have emerged as a promising technology to improve the efficiency of wireless communication systems. However, passive IRSs suffer from the "multiplicative fading" effect, because the transmit signal goes through two fading hops. With the ability to amplify and reflect signals, active IRSs offer a potential way to tackle this issue, since the amplified signal experiences only the second fading hop. However, the fundamental limit and system design for active IRSs have not been fully understood, especially for multiple-input multiple-output (MIMO) systems. In this work, we consider the analysis and design of a large-scale active IRS-aided MIMO system assuming only statistical channel state information (CSI) at the transmitter and the IRS. The evaluation of the fundamental limit, i.e., the ergodic rate, turns out to be a very difficult problem. To this end, we leverage random matrix theory (RMT) to derive a deterministic approximation (DA) for the ergodic rate, and then design an algorithm to jointly optimize the transmit covariance matrix at the transmitter and the reflection matrix at the active IRS. Numerical results demonstrate the accuracy of the derived DA and the effectiveness of the proposed optimization algorithm. The results in this work reveal interesting physical insights with respect to the advantage of active IRSs over their passive counterparts.
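As a sanity-check companion to such a deterministic approximation, the ergodic rate of a toy active IRS-aided MIMO link can be estimated by Monte Carlo. The sketch below assumes i.i.d. Rayleigh hops, equal-power transmission, a fixed random reflection pattern, and accounts for the thermal noise the active IRS amplifies through the second hop; all dimensions and noise powers are toy assumptions, and the direct link is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L = 4, 4, 64                       # Tx antennas, Rx antennas, IRS elements
amp, sigma2, sigma_I2 = 4.0, 1.0, 0.1    # IRS gain, Rx noise, IRS noise (assumed)
phi = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, L))  # fixed active reflection

def ergodic_rate(trials: int = 500) -> float:
    rates = []
    for _ in range(trials):
        H1 = (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2)
        H2 = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
        Heff = H2 @ np.diag(phi) @ H1
        # Noise covariance includes the IRS noise amplified over the second hop:
        Rn = sigma2 * np.eye(N) + sigma_I2 * (H2 * np.abs(phi) ** 2) @ H2.conj().T
        rates.append(np.log2(np.linalg.det(
            np.eye(N) + np.linalg.solve(Rn, Heff @ Heff.conj().T / M))).real)
    return float(np.mean(rates))

print(f"Monte Carlo ergodic rate: {ergodic_rate():.2f} bit/s/Hz")
```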

In this paper, we propose and evaluate a method of generating low-cost device signatures for distributed wireless brain implants, using a Pseudo-Random Binary Sequence (PRBS) generator that utilizes a modified Ring-Oscillator-based Physical Unclonable Function (RO-PUF). The modified RO-PUF's output is used as a seed for the PRBS generator, which creates a multi-bit output that can be mapped to a time slot when the implant is allowed to communicate with the external world using duty-cycled time-division multiplexing. A 9-bit PRBS generator is demonstrated in hardware with a TSMC 65 nm test chip, showing < 100 nW power consumption in measurement (72% lower power and 78% lower area than a traditional 9-bit RO-PUF implementation) and supporting 26 implants with a time-slot collision probability < 50%. This potentially creates a pathway for low-cost device signature generation for highly resource-constrained scenarios such as wireless, distributed neural implants.
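A software sketch of the PRBS stage, assuming the standard PRBS9 polynomial x^9 + x^5 + 1 (the abstract does not specify the taps) and a hypothetical 9-bit PUF response as seed. A maximal-length 9-bit LFSR cycles through all 511 nonzero states, so the 9-bit output word indexes one of 511 slots; by the birthday bound, 26 implants drawing from 511 slots collide with probability about 1 - exp(-26*25 / (2*511)) ≈ 0.47, consistent with the abstract's < 50% claim.

```python
def prbs9(seed: int, taps=(9, 5)):
    """9-bit Fibonacci LFSR for PRBS9 (x^9 + x^5 + 1).
    Yields one bit per clock; a nonzero seed never reaches the all-zero state."""
    state = seed & 0x1FF
    while True:
        bit = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        state = ((state << 1) | bit) & 0x1FF
        yield bit

puf_seed = 0b101101011          # hypothetical RO-PUF response standing in as seed
gen = prbs9(puf_seed)
word = 0
for _ in range(9):              # collect 9 output bits into a slot index (1..511)
    word = (word << 1) | next(gen)
print(f"assigned TDMA time slot: {word}")
```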

Learning with Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Exchange Mechanism (KEM) standardized by NIST is based on module LWE, and current publicly available PQ Homomorphic Encryption (HE) libraries are based on ring LWE. The security of LWE-based PQ cryptosystems is critical, but certain implementation choices could weaken them. One such choice is sparse binary secrets, desirable for PQ HE schemes for efficiency reasons. Prior work, SALSA, demonstrated a machine learning-based attack on LWE with sparse binary secrets in small dimensions ($n \le 128$) and low Hamming weights ($h \le 4$). However, this attack assumes access to millions of eavesdropped LWE samples and fails at higher Hamming weights or dimensions. We present PICANTE, an enhanced machine learning attack on LWE with sparse binary secrets, which recovers secrets in much larger dimensions (up to $n=350$) and with larger Hamming weights (roughly $n/10$, and up to $h=60$ for $n=350$). We achieve this dramatic improvement via a novel preprocessing step, which allows us to generate training data from a linear number of eavesdropped LWE samples ($4n$) and changes the distribution of the data to improve transformer training. We also improve the secret recovery methods of SALSA and introduce a novel cross-attention recovery mechanism allowing us to read off the secret directly from the trained models. While PICANTE does not threaten NIST's proposed LWE standards, it demonstrates significant improvement over SALSA and could scale further, highlighting the need for future investigation into machine learning attacks on LWE with sparse binary secrets.
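For concreteness, generating the linear budget of eavesdropped LWE samples with a sparse binary secret looks as follows. The abstract fixes only $n$, $h$, and the $4n$ sample count; the modulus (borrowed from Kyber) and the toy error distribution are assumptions, and the transformer-based attack itself is out of scope here.

```python
import numpy as np

rng = np.random.default_rng(42)
n, h = 350, 60          # dimension and Hamming weight from the abstract
q = 3329                # modulus assumed for illustration (Kyber's q)
m = 4 * n               # PICANTE's linear sample budget

# Sparse binary secret with exactly h ones.
s = np.zeros(n, dtype=np.int64)
s[rng.choice(n, size=h, replace=False)] = 1

A = rng.integers(0, q, size=(m, n))       # uniform public matrix
e = rng.binomial(20, 0.5, size=m) - 10    # small centered error (toy stand-in
                                          # for a discrete Gaussian)
b = (A @ s + e) % q                       # LWE samples: (A, b = A.s + e mod q)
print(f"collected {m} samples of dimension {n}, secret weight {int(s.sum())}")
```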

There is a resurgent interest in developing intelligent open-domain dialog systems due to the availability of large amounts of conversational data and the recent progress on neural approaches to conversational AI. Unlike traditional task-oriented bots, an open-domain dialog system aims to establish long-term connections with users by satisfying the human need for communication, affection, and social belonging. This paper reviews recent work on neural approaches devoted to addressing three challenges in developing such systems: semantics, consistency, and interactiveness. Semantics requires a dialog system to not only understand the content of the dialog but also identify the user's social needs during the conversation. Consistency requires the system to demonstrate a consistent personality to win users' trust and gain their long-term confidence. Interactiveness refers to the system's ability to generate interpersonal responses to achieve particular social goals such as entertainment, conforming, and task completion. The works we selected to present here are based on our own views and are by no means complete. Nevertheless, we hope that the discussion will inspire new research in developing more intelligent dialog systems.
