
Millimeter wave (mmWave) and terahertz MIMO systems rely on pre-defined beamforming codebooks for both initial access and data transmission. However, most existing codebooks consist of beams designed mainly to maximize the gain toward their target users, without taking interference into account, which can incur critical performance degradation in dense networks. To address this problem, in this paper, we propose a sample-efficient digital twin-assisted beam pattern design framework that learns how to shape the beam pattern to reject signals from interfering directions. The proposed approach requires neither explicit channel knowledge nor any coordination with the interferers. The digital twin improves sample efficiency by better leveraging the underlying signal relationship and by incorporating a demand-based data acquisition strategy. Simulation results show that the developed signal-model-based learning framework significantly reduces the actual interaction with the radio environment (i.e., the number of measurements) compared to a model-unaware design, leading to a more practical and efficient interference-aware beam design approach.
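
To make the underlying signal model concrete, the sketch below steers a uniform linear array toward a target user while placing a null at an interfering direction using classic null-steering. It illustrates the kind of interference-rejecting beam pattern the framework learns, not the digital-twin learning procedure itself; the array size and angles are illustrative assumptions.

```python
# Null-steering on a half-wavelength-spaced uniform linear array (ULA):
# maximize gain toward the target while zeroing the interfering direction.
import numpy as np

def ula_response(n_ant: int, theta_deg: float) -> np.ndarray:
    """Array response of a half-wavelength-spaced ULA at angle theta."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

def null_steering_beam(n_ant: int, target_deg: float, interf_deg: float) -> np.ndarray:
    """Project the target steering vector onto the interferer's null space."""
    a_t = ula_response(n_ant, target_deg)
    a_i = ula_response(n_ant, interf_deg)
    # Orthogonal projector onto the complement of span{a_i}.
    p = np.eye(n_ant) - np.outer(a_i, a_i.conj()) / np.vdot(a_i, a_i).real
    w = p @ a_t
    return w / np.linalg.norm(w)

w = null_steering_beam(n_ant=32, target_deg=10.0, interf_deg=-25.0)
gain_t = abs(np.vdot(w, ula_response(32, 10.0))) ** 2   # target gain
gain_i = abs(np.vdot(w, ula_response(32, -25.0))) ** 2  # ~0: rejected
print(f"target gain {gain_t:.2f}, interferer gain {gain_i:.2e}")
```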

Related content

Cooperative coordination at unsignalized road intersections, which aims to improve driving safety and traffic throughput for connected and automated vehicles, has attracted increasing interest in recent years. However, most existing investigations either suffer from high computational complexity or fail to harness the full potential of the road infrastructure. To this end, we first present a dedicated intersection coordination framework in which the involved vehicles hand over their control authority and follow instructions from a centralized coordinator. A unified cooperative trajectory optimization problem is then formulated to maximize traffic throughput while ensuring driving safety and the long-term stability of the coordination system. To address the key computational challenges of real-world deployment, we reformulate this non-convex sequential decision problem as a model-free Markov Decision Process (MDP) and tackle it with a Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning (DRL) framework. Simulation and practical experiments show that the proposed strategy achieves near-optimal performance in sub-static coordination scenarios and significantly improves traffic throughput under realistic continuous traffic flow. Most remarkably, our strategy reduces computation time to milliseconds and scales well as the number of road lanes increases.
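
As a concrete illustration of the TD3 machinery behind such a strategy, the sketch below computes the TD3 Bellman target with its two signature tricks: clipped double-Q (minimum of twin target critics) and target policy smoothing. The stand-in networks and the intersection state encoding are assumptions, not the paper's design.

```python
# TD3 Bellman target for a batch of transitions (PyTorch).
import torch

def td3_target(actor_t, critic1_t, critic2_t, next_state, reward, not_done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Compute r + gamma * min(Q1', Q2')(s', smoothed a') for TD3 critics."""
    with torch.no_grad():
        a_next = actor_t(next_state)
        # Target policy smoothing: clipped Gaussian noise on the target action.
        noise = (noise_std * torch.randn_like(a_next)).clamp(-noise_clip, noise_clip)
        a_next = (a_next + noise).clamp(-max_action, max_action)
        # Clipped double-Q: take the minimum of the twin target critics.
        q1 = critic1_t(next_state, a_next)
        q2 = critic2_t(next_state, a_next)
        return reward + gamma * not_done * torch.min(q1, q2)

# Toy usage with stand-in networks (real actor/critics would be MLPs over the
# intersection state, e.g. per-lane vehicle positions and speeds).
s = torch.randn(4, 8)                        # batch of 4 states, dim 8
actor = lambda x: torch.tanh(x[:, :2])       # 2-d action stand-in
critic = lambda x, a: x.sum(1, keepdim=True) + a.sum(1, keepdim=True)
y = td3_target(actor, critic, critic, s, torch.ones(4, 1), torch.ones(4, 1))
```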

As the demand for wireless connectivity continues to soar, fifth-generation and beyond wireless networks are exploring new ways to efficiently utilize the wireless spectrum and reduce hardware costs. One such approach is the integration of sensing and communications (ISAC) paradigms to jointly access the spectrum. Recent ISAC studies have focused on upper millimeter-wave and low terahertz bands to exploit ultrawide bandwidths. At these frequencies, hybrid beamformers with fewer radio-frequency chains are employed to offset expensive hardware, but at the cost of lower multiplexing gains. Wideband hybrid beamforming also suffers from the beam-split effect arising from subcarrier-independent (SI) analog beamformers. To overcome these limitations, this paper introduces a spatial path index modulation (SPIM) ISAC architecture, which transmits additional information bits by modulating the spatial paths between the base station and communications users. We design the SPIM-ISAC beamformers by first estimating both radar and communications parameters with beam-split-aware algorithms. We then employ a family of hybrid beamforming techniques, including hybrid, SI and subcarrier-dependent analog-only, and beam-split-aware beamformers. Numerical experiments demonstrate that the proposed SPIM-ISAC approach achieves significantly higher spectral efficiency in the presence of beam-split than even fully digital non-SPIM beamformers.
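
The index-modulation idea can be illustrated with a toy encoder: extra bits select which resolvable spatial path is excited, on top of a conventionally modulated symbol. The path count and QPSK mapping below are illustrative assumptions, not the paper's exact SPIM design.

```python
# Toy SPIM encoder: log2(L) index bits pick the active spatial path,
# and two further bits pick the QPSK symbol carried on that path.
import numpy as np

L = 4                                     # resolvable spatial paths (assumed)
qpsk = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)

def spim_encode(bits):
    """Map 2 index bits -> active path, 2 symbol bits -> QPSK symbol."""
    path = bits[0] * 2 + bits[1]          # log2(L) = 2 index bits
    sym = qpsk[bits[2] * 2 + bits[3]]     # 2 conventional bits
    x = np.zeros(L, dtype=complex)        # one entry per spatial path
    x[path] = sym                         # only the selected path is excited
    return x

print(spim_encode([1, 0, 0, 1]))          # path 2 carries (-1+1j)/sqrt(2)
```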

The study of market equilibria is central to economic theory, particularly in efficiently allocating scarce resources. However, the computation of equilibrium prices at which the supply of goods matches their demand typically relies on access to complete information on private attributes of agents, e.g., suppliers' cost functions, which are often unavailable in practice. Motivated by this practical consideration, we consider the problem of setting equilibrium prices in the incomplete information setting, wherein a market operator seeks to satisfy the customer demand for a commodity by purchasing the required amount from competing suppliers whose cost functions are privately known and unavailable to the market operator. In this incomplete information setting, we study the online problem of learning equilibrium prices over time while jointly optimizing three performance metrics -- unmet demand, cost regret, and payment regret -- pertinent in the context of equilibrium pricing over a horizon of $T$ periods. We first consider the setting when suppliers' cost functions are fixed and develop algorithms that achieve a regret of $O(\log \log T)$ when the customer demand is constant over time, or $O(\sqrt{T} \log \log T)$ when the demand is variable over time. Next, we turn to the setting when the suppliers' cost functions can vary over time and show that no online algorithm can achieve sublinear regret on all three metrics when the market operator has no information about how the cost functions change over time. We therefore study an augmented setting wherein the operator has access to hints/contexts that, without revealing the complete specification of the cost functions, reflect their variation over time, and we propose an algorithm with sublinear regret in this augmented setting.
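
To illustrate the feedback loop in this setting, the sketch below runs a tatonnement-style price update that raises the posted price when supply falls short of demand and lowers it otherwise. It is not the paper's $O(\log \log T)$ algorithm; the quadratic supplier model and step sizes are assumptions.

```python
# Tatonnement-style learning of a market-clearing price from repeated rounds.
import numpy as np

def supply(price):
    """Aggregate supply of competing suppliers with private quadratic costs
    c_i(q) = a_i * q**2 / 2, so each supplies q_i = price / a_i (assumed)."""
    a = np.array([1.0, 2.0, 4.0])
    return np.sum(price / a)

demand, price = 10.0, 1.0
for t in range(1, 200):
    unmet = demand - supply(price)        # observed imbalance this round
    price += unmet / (t + 1)              # diminishing-step price correction
print(f"learned price {price:.3f}, clearing price {10.0 / 1.75:.3f}")
```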

We investigate the age of information (AoI) of a relay-assisted cooperative communication system, where a source node sends status update packets to the destination node as timely as possible with the aid of a relay node. For time-slotted systems without relaying, prior works have shown that the source should generate and send a new packet to the destination in every time slot to minimize the average AoI, regardless of whether the destination successfully decoded the packet in the previous slot. However, when a dedicated relay is involved, whether the relay can improve the AoI performance requires in-depth study. In particular, the packet generation and transmission strategy of the source must be carefully designed to cooperate with the relay. Depending on whether the source and the relay are allowed to transmit simultaneously, two relay-assisted schemes are investigated: a time division multiple access (TDMA) scheme and a non-orthogonal multiple access (NOMA) scheme. A key challenge in deriving their theoretical average AoI is that the destination has different probabilities of successfully receiving an update packet in different time slots. We model each scheme using a Markov chain to derive the corresponding closed-form average AoI. Interestingly, our theoretical analysis indicates that the relay-assisted schemes outperform the non-relay scheme in average AoI only when the signal-to-noise ratio of the source-destination link is below -2 dB. Furthermore, comparing the two relay-assisted schemes, simulation results show that the TDMA scheme has lower energy consumption, while the NOMA counterpart typically achieves a lower average AoI.
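
For intuition, the sketch below Monte Carlo simulates the average AoI of the non-relay baseline, where a fresh update is sent every slot and decoded with probability p (a function of the S-D link SNR). It recovers the known 1/p average; the success probabilities are illustrative, and the relay schemes' Markov-chain analysis is not reproduced here.

```python
# Monte Carlo average AoI for the "fresh packet every slot" baseline.
import random

def avg_aoi_no_relay(p_sd, n_slots=200_000, seed=0):
    rng = random.Random(seed)
    age, total = 1, 0
    for _ in range(n_slots):
        # Age resets to 1 on a successful decode, otherwise grows by 1.
        age = 1 if rng.random() < p_sd else age + 1
        total += age
    return total / n_slots

for p in (0.2, 0.5, 0.8):
    print(f"p_sd={p}: simulated AoI {avg_aoi_no_relay(p):.2f}, theory {1/p:.2f}")
```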

Verified compositional compilation (VCC) is a notion of modular verification of compilers that supports compilation of heterogeneous programs. The key to achieving VCC is to design a semantic interface that enables composition of correctness theorems for compiling individual modules. Most existing techniques for VCC fix a semantic interface from the very beginning and force it on every single compiler pass. This requires significant changes to the existing framework and makes it difficult to understand the relationship between conditions enforced by the semantic interface and the actual requirements of compiler passes. A different approach is to design appropriate semantic interfaces for individual compiler passes and combine them into a unified interface that faithfully reflects the requirements of the underlying passes. However, this requires vertically composable simulation relations, which were traditionally considered very difficult to construct even with extensive changes to compiler verification frameworks. We propose a bottom-up approach to constructing unified semantic interfaces for VCC. Our starting point is CompCertO, an extension of CompCert -- the state-of-the-art verified compiler -- that supports VCC but lacks a unified interface. We discover that a CompCert Kripke Logical Relation (CKLR) in CompCertO provides a uniform notion of memory protection for evolving memory states across modules and is transitively composable. Based on this uniform and composable CKLR, we then merge the simulation relations for all the compiler passes in CompCertO (except for three value analysis passes) into a unified interface. We demonstrate the conciseness and effectiveness of this unified interface by applying it to verify the compositional compilation of a non-trivial heterogeneous program with mutual recursion.
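
The vertical composability at the heart of this approach can be stated abstractly: composing a source-to-intermediate simulation with an intermediate-to-target one must yield a source-to-target simulation. The Lean sketch below shows the shape of that composition with a toy stand-in for simulations; it does not reproduce CompCertO's actual CKLR or simulation convention definitions.

```lean
-- Toy stand-in for a simulation between two languages' states.
structure Sim (S T : Type) where
  rel : S → T → Prop

/-- Vertical composition through an intermediate language's states;
    this is the step that must preserve the semantic interface. -/
def Sim.vcomp {S M T : Type} (hi : Sim S M) (lo : Sim M T) : Sim S T :=
  ⟨fun s t => ∃ m, hi.rel s m ∧ lo.rel m t⟩
```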

Safety is critical in robotic tasks, and energy-function-based methods have been introduced to address this problem. To ensure safety in the presence of control limits, we need to design an energy function that yields persistently feasible safe control at all system states. However, designing such an energy function for high-dimensional nonlinear systems remains challenging. Observing that high-dimensional systems contain dynamics that are redundant with respect to the safety specifications, this paper proposes a novel approach called abstract safe control. We propose a system abstraction method that enables the design of energy functions on a low-dimensional model, and we synthesize the energy function with respect to that low-dimensional model to ensure persistent feasibility. The resulting safe controller transfers directly to other systems with the same abstraction, e.g., when a robot arm holds different tools. The proposed approach is demonstrated on a 7-DoF robot arm (14 states) both in simulation and in the real world. Our method always finds feasible control and achieves zero safety violations in 500 trials on 5 different systems.
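
A minimal sketch of the energy-function mechanism, assuming control-affine dynamics and a single safety constraint: project a reference command onto the half-space of controls that force the energy function to decay. The toy 2-D system below stands in for the paper's low-dimensional abstraction of a 7-DoF arm, which is not reproduced here.

```python
# Min-norm safe control under one energy-function constraint (numpy).
import numpy as np

def safe_control(x, u_ref, grad_phi, f, g, eta=1.0):
    """Project u_ref so that dphi/dt <= -eta along x_dot = f(x) + g(x) @ u.

    phi is the energy function (phi <= 0 is the safe set); the single
    constraint grad_phi . (f + g u) <= -eta is a half-space in u-space.
    """
    a = g(x).T @ grad_phi(x)                  # constraint normal in u-space
    b = -eta - grad_phi(x) @ f(x)             # constraint offset
    viol = a @ u_ref - b
    if viol <= 0:                             # reference already safe
        return u_ref
    return u_ref - viol * a / (a @ a)         # closest safe control

# Toy 2-D single integrator keeping ||x|| <= 1, i.e. phi(x) = ||x||^2 - 1.
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)
grad = lambda x: 2 * x
x = np.array([0.95, 0.0])
print(safe_control(x, u_ref=np.array([1.0, 0.0]), grad_phi=grad, f=f, g=g))
```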

Having reliable specifications is an unavoidable challenge in achieving verifiable correctness, robustness, and interpretability of AI systems. Existing specifications for neural networks follow the paradigm of data as specification: the local neighborhood centered around a reference input is considered correct (or robust). While such specifications contribute to verifying adversarial robustness, a significant problem in many research domains, our empirical study shows that the verified regions are somewhat tight and thus fail to allow verification of test set inputs, making them impractical for some real-world applications. To this end, we propose a new family of specifications called neural representation as specification, which uses intrinsic information of neural networks, namely neural activation patterns (NAPs), rather than input data to specify the correctness and/or robustness of neural network predictions. We present a simple statistical approach to mining neural activation patterns. To show the effectiveness of discovered NAPs, we formally verify several important properties, such as that various types of misclassification will never happen for a given NAP and that there is no ambiguity between different NAPs. We show that by using NAPs we can verify a significant region of the input space while still recalling 84% of the data on MNIST. Moreover, we can enlarge the verifiable region by a factor of 10 on the CIFAR10 benchmark. Thus, we argue that NAPs can potentially serve as a more reliable and extensible specification for neural network verification.
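
A minimal sketch of what statistical NAP mining could look like, assuming post-ReLU activations and a frequency threshold delta (both illustrative): keep the hidden units that are active on at least a delta fraction of a class's correctly classified examples.

```python
# Mining a neural activation pattern (NAP) from one class's activations.
import numpy as np

def mine_nap(hidden_acts: np.ndarray, delta: float = 0.95):
    """hidden_acts: (n_examples, n_units) post-ReLU activations for one class.

    Returns indices of units active in at least a delta fraction of examples.
    """
    freq = (hidden_acts > 0).mean(axis=0)     # per-unit activation frequency
    return np.flatnonzero(freq >= delta)      # the mined activation pattern

# Fake layer: 32 reliably active units and 32 rarely active ones.
rng = np.random.default_rng(0)
strong = np.maximum(rng.normal(1.5, 1.0, size=(1000, 32)), 0.0)
weak = np.maximum(rng.normal(-0.5, 1.0, size=(1000, 32)), 0.0)
nap = mine_nap(np.hstack([strong, weak]), delta=0.9)
print(f"{nap.size} of 64 units form the NAP for this class")
```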

The flock-guidance problem has a challenging structure in which multiple optimization objectives must be solved simultaneously. This usually necessitates different control approaches for the various objectives, such as guidance, collision avoidance, and cohesion. Guidance schemes, in particular, have long suffered from complex tracking-error dynamics. Furthermore, techniques based on linear feedback strategies obtained at equilibrium conditions may not hold, or may degrade, when applied to uncertain dynamic environments, and pre-tuned fuzzy inference architectures lack robustness under such unmodeled conditions. This work introduces an adaptive distributed technique for the autonomous control of flock systems. Its relatively flexible structure is based on online fuzzy reinforcement learning schemes that simultaneously target several objectives: following a leader, avoiding collisions, and reaching a flock velocity consensus. In addition to its resilience to dynamic disturbances, the algorithm requires no feedback signal beyond the agent positions. The effectiveness of the proposed method is validated in two simulation scenarios and benchmarked against a similar technique from the literature.
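
For concreteness, the sketch below combines the three objectives as weighted control terms in the style of classical flocking rules. It is a hand-tuned stand-in for intuition, not the paper's online fuzzy reinforcement learning controller; all gains and distances are assumptions.

```python
# Per-agent flocking command: leader following + separation + alignment.
import numpy as np

def flock_command(p_i, v_i, p_leader, neighbors, k_lead=1.0, k_sep=2.0,
                  k_align=0.5, safe_dist=1.0):
    """neighbors: list of (position, velocity) pairs of nearby agents."""
    u = k_lead * (p_leader - p_i)                    # follow the leader
    for p_j, v_j in neighbors:
        d = p_i - p_j
        dist = np.linalg.norm(d)
        if dist < safe_dist:                         # repel when too close
            u += k_sep * d / (dist ** 2 + 1e-6)
        u += k_align * (v_j - v_i) / len(neighbors)  # velocity consensus
    return u

u = flock_command(np.zeros(2), np.zeros(2), np.array([5.0, 0.0]),
                  [(np.array([0.4, 0.0]), np.array([1.0, 0.0]))])
print(u)
```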

Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions, and we design a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes, representing pins and boards, and 18 billion edges. According to offline metrics, user studies, and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date, and it paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
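
The random-walk neighborhood construction can be sketched compactly: simulate short random walks from a node and keep the most-visited nodes as its convolution neighborhood, with normalized visit counts as importance weights. The graph format, walk length, and counts below are illustrative assumptions.

```python
# Random-walk-based neighborhood selection for graph convolutions.
import random
from collections import Counter

def rw_neighborhood(graph, node, n_walks=200, walk_len=3, top_t=5, seed=0):
    """graph: dict mapping node -> list of adjacent nodes."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        cur = node
        for _ in range(walk_len):
            cur = rng.choice(graph[cur])
            visits[cur] += 1
    visits.pop(node, None)                    # exclude the node itself
    total = sum(visits.values())
    # Top-T visited nodes, normalized visit counts as importance weights.
    return [(n, c / total) for n, c in visits.most_common(top_t)]

g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
print(rw_neighborhood(g, 0))
```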

To address the sparsity and cold-start problems of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance. This paper considers the knowledge graph as the source of side information. To address the limitations of existing embedding-based and path-based methods for knowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end framework that naturally incorporates the knowledge graph into recommender systems. Like actual ripples propagating on the surface of water, Ripple Network stimulates the propagation of user preferences over the set of knowledge entities by automatically and iteratively extending a user's potential interests along links in the knowledge graph. The multiple "ripples" activated by a user's historically clicked items are thus superposed to form the user's preference distribution with respect to a candidate item, which can then be used to predict the final click probability. Through extensive experiments on real-world datasets, we demonstrate that Ripple Network achieves substantial gains over several state-of-the-art baselines in a variety of scenarios, including movie, book, and news recommendation.
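
A toy sketch of the hop-wise propagation, with one simplification hedged up front: Ripple Network scores each triple with a relation matrix, whereas this sketch uses an elementwise head-relation product. The embedding sizes and the single-hop ripple set are illustrative assumptions.

```python
# Attention over a user's ripple set of KG triples, scored against a candidate.
import numpy as np

def ripple_score(cand, triples, emb_e, emb_r):
    """triples: list of (head, relation, tail) ids in the user's ripple set."""
    v = emb_e[cand]
    heads = np.array([emb_e[h] for h, _, _ in triples])
    rels = np.array([emb_r[r] for _, r, _ in triples])
    tails = np.array([emb_e[t] for _, _, t in triples])
    # Relevance of each triple to the candidate (elementwise simplification
    # of the paper's relation-matrix scoring).
    logits = (heads * rels) @ v
    p = np.exp(logits - logits.max()); p /= p.sum()   # softmax attention
    user = p @ tails                                  # weighted tail sum
    return 1 / (1 + np.exp(-user @ v))                # click probability

rng = np.random.default_rng(0)
emb_e, emb_r = rng.normal(size=(10, 8)), rng.normal(size=(4, 8))
print(ripple_score(cand=7, triples=[(0, 1, 3), (2, 0, 5)], emb_e=emb_e, emb_r=emb_r))
```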
