
Next-generation wireless networks strive for higher communication rates, ultra-low latency, seamless connectivity, and high-resolution sensing. To meet these demands, terahertz (THz)-band signal processing is envisioned as a key technology, offering wide bandwidth and sub-millimeter wavelengths. Furthermore, the THz integrated sensing and communications (ISAC) paradigm has emerged to jointly access the spectrum and reduce hardware costs through a unified platform. To address the challenges of THz propagation, THz-ISAC systems employ extremely large antenna arrays to improve beamforming gain, enabling communications with high data rates and sensing with high resolution. However, the cost and power consumption of fully digital beamformers are prohibitive. While hybrid analog/digital beamforming is a potential solution, the use of a subcarrier-independent analog beamformer leads to the beam-squint phenomenon, in which different subcarriers observe distinct directions because the same analog beamformer is applied across the entire band. In this paper, we develop a sparse array architecture for THz-ISAC with hybrid beamforming to provide a cost-effective solution. We analyze the antenna selection problem under the influence of beam squint and introduce a manifold optimization approach for the hybrid beamforming design. To reduce computational and memory costs, we propose novel algorithms leveraging grouped subarrays, quantized performance metrics, and sequential optimization. These approaches significantly reduce the number of possible subarray configurations, which enables us to devise a classification-based neural network that accurately performs antenna selection.
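
To make the beam-squint effect concrete, here is a minimal numpy sketch (all parameters are illustrative, not taken from the paper): a phase-shifter beamformer designed at the carrier frequency is applied to every subcarrier of a wideband ULA, and the beam's peak direction drifts across the band.

```python
import numpy as np

# Sketch: beam squint in a wideband ULA with a subcarrier-independent
# analog beamformer. Parameters are illustrative, not from the paper.
c = 3e8                      # speed of light (m/s)
fc = 300e9                   # carrier frequency: 300 GHz
B = 30e9                     # bandwidth: 30 GHz
N = 128                      # number of antennas
K = 5                        # subcarriers sampled across the band
d = c / (2 * fc)             # half-wavelength spacing at the carrier
theta0 = np.deg2rad(45)      # intended beam direction

n = np.arange(N)
# Phase-shifter beamformer matched to theta0 at the carrier frequency fc only.
f_analog = np.exp(1j * 2 * np.pi * fc * d * n * np.sin(theta0) / c) / np.sqrt(N)

angles = np.deg2rad(np.linspace(30, 60, 601))
for fk in np.linspace(fc - B / 2, fc + B / 2, K):
    # Array response at subcarrier fk over a sweep of directions.
    A = np.exp(1j * 2 * np.pi * fk * d * n[:, None] * np.sin(angles)[None, :] / c)
    gain = np.abs(f_analog.conj() @ A) ** 2
    peak = np.rad2deg(angles[np.argmax(gain)])
    print(f"subcarrier {fk / 1e9:7.1f} GHz -> beam peaks at {peak:5.2f} deg")
# The peak drifts away from 45 deg as fk moves off fc: the beam-squint effect.
```

Under this model the peak at subcarrier f_k satisfies sin(theta_k) = (f_c / f_k) sin(theta_0), so edge subcarriers can miss the intended user or sensing target by several degrees, which is exactly what makes beam-squint-aware antenna selection and beamforming necessary.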

Related Content

This work develops a novel approach to performance guarantees for all links in arbitrarily large wireless networks. It introduces a spatial network calculus, consisting of spatial regulation properties for stationary point processes and the first steps of a calculus for this regulation, which can be seen as an extension of the classical network calculus to space. Specifically, two classes of regulation are defined: one includes ball regulation and shot-noise regulation, which are shown to be equivalent and to upper-constrain interference; the other includes void regulation, which lower-constrains the signal power. These regulations are defined in both the strong and the weak sense: the former requires the regulation to hold everywhere in space, whereas the latter only requires it to hold as observed by a jointly stationary point process. Using this approach, we derive performance guarantees in device-to-device, ad hoc, and cellular networks under suitable regulations. We give universal bounds on the SINR of all links, which yield link service guarantees based on information-theoretic achievability. These are combined with classical network calculus to provide end-to-end latency guarantees for all packets in wireless queuing networks. Such guarantees do not exist in networks that are not spatially regulated, e.g., Poisson networks.
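
As a rough illustration of how such regulations interlock (the notation below is ours; the paper's exact definitions may differ), ball/shot-noise regulation caps the interference every receiver can see, while void regulation guarantees a minimum signal, and together they pin the SINR of every link:

```latex
% Illustrative formalization; notation ours, not necessarily the paper's.
% Strong ball regulation: every ball contains at most an affine number of points,
\Phi\big(B(x,r)\big) \;\le\; \sigma + \rho\,\pi r^{2},
  \qquad \forall\, x \in \mathbb{R}^{2},\; r \ge 0,
% which upper-constrains the interference (a shot noise of the path loss \ell):
I(x) \;=\; \sum_{y \in \Phi} P\,\ell\big(\lVert y - x \rVert\big)
  \;\le\; \bar{I}(\sigma, \rho, \ell).
% Void regulation keeps points out of a guard ball around each receiver, which
% lower-constrains the received signal and yields a universal per-link bound:
\mathrm{SINR} \;\ge\; \frac{P\,\ell(d)}{N_{0} + \bar{I}(\sigma, \rho, \ell)}.
```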

Accurate and real-time traffic state prediction is of great practical importance for urban traffic control and web mapping services. With the support of massive data, deep learning methods have shown powerful capability in capturing the complex spatial-temporal patterns of traffic networks. However, existing approaches use pre-defined graphs and a simple set of spatial-temporal components, making it difficult to model multi-scale spatial-temporal dependencies. In this paper, we propose a novel dynamic graph convolution network with attention fusion to close this gap. The method first enhances the interaction among temporal feature dimensions and then combines a dynamic graph learner with a GRU to jointly model synchronous spatial-temporal correlations. We also incorporate spatial-temporal attention modules to effectively capture long-range, multifaceted domain spatial-temporal patterns. Extensive experiments on four real-world traffic datasets demonstrate that our method surpasses the state of the art, outperforming 18 baseline methods.
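
A minimal sketch of the core recurrence, assuming a Graph-WaveNet-style adaptive adjacency and a DCRNN-style graph-convolutional GRU as stand-ins for the paper's dynamic graph learner and recurrent component (all sizes and data are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
N, F, H = 20, 2, 16            # nodes, input features, hidden size

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Dynamic graph learner: adjacency inferred from learnable node embeddings
# (Graph-WaveNet-style adaptive adjacency, a stand-in for the paper's learner).
E1, E2 = rng.normal(size=(N, 8)), rng.normal(size=(N, 8))
A = softmax(np.maximum(E1 @ E2.T, 0.0))            # row-normalized adjacency

# Graph-convolutional GRU cell: the products X @ W become A @ X @ W, so each
# gate mixes information from graph neighbors (DCRNN-style GCGRU).
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(F + H, H)) for _ in range(3))

def gcgru_step(x, h):
    xh = A @ np.concatenate([x, h], axis=1)        # graph-mixed [input, state]
    z = 1 / (1 + np.exp(-(xh @ Wz)))               # update gate
    r = 1 / (1 + np.exp(-(xh @ Wr)))               # reset gate
    xh_r = A @ np.concatenate([x, r * h], axis=1)
    h_cand = np.tanh(xh_r @ Wh)                    # candidate state
    return (1 - z) * h + z * h_cand

h = np.zeros((N, H))
for t in range(12):                                # 12 historical time steps
    x_t = rng.normal(size=(N, F))                  # stand-in traffic readings
    h = gcgru_step(x_t, h)
print(h.shape)                                     # (20, 16): per-node state
```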

Intelligent metasurfaces have recently emerged as a promising technology that enables the customization of wireless environments by harnessing large numbers of inexpensive configurable scattering elements. However, prior studies have predominantly focused on single-layer metasurfaces, which are limited in the number of beam patterns they can steer accurately due to practical hardware restrictions. In contrast, this paper introduces a novel stacked intelligent metasurface (SIM) design. Specifically, we investigate the integration of a SIM into the downlink of a multiuser multiple-input single-output (MISO) communication system, where the SIM, consisting of a multilayer metasurface structure, is deployed at the base station (BS) to perform transmit beamforming in the electromagnetic wave domain. This eliminates the need for conventional digital beamforming and high-resolution digital-to-analog converters at the BS. To this end, we formulate an optimization problem that aims to maximize the sum rate of all user equipments by jointly optimizing the transmit power allocation at the BS and the wave-based beamforming at the SIM, subject to both the transmit power budget and discrete phase shift constraints. Furthermore, we propose a computationally efficient algorithm for solving this joint optimization problem and elaborate on the potential benefits of employing SIMs in wireless networks. Finally, numerical results corroborate the effectiveness of the proposed SIM-enabled wave-based beamforming design and quantify the performance improvement achieved by the proposed algorithm over various benchmark schemes. It is demonstrated that, with the same number of transmit antennas, the proposed SIM-based system achieves about a 200% improvement in sum rate compared to conventional MISO systems.
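
The wave-domain beamforming chain can be sketched as a product of fixed propagation matrices and tunable diagonal phase-shift matrices. The following numpy toy model (random stand-ins for the geometry-determined propagation coefficients and channels; this is not the paper's optimization algorithm) shows how one SIM configuration maps to a sum rate:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, M, L, K = 4, 32, 3, 4       # tx antennas, atoms per layer, layers, users

# Inter-layer propagation matrices (fixed by geometry; random stand-ins here).
W = [rng.normal(size=(M, Nt)) + 1j * rng.normal(size=(M, Nt))]      # feed -> layer 1
W += [rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M)) for _ in range(L - 1)]
W = [w / np.sqrt(w.shape[1]) for w in W]

# Discrete phase shifts per meta-atom (2-bit quantization constraint).
levels = np.exp(1j * 2 * np.pi * np.arange(4) / 4)
phi = [np.diag(rng.choice(levels, size=M)) for _ in range(L)]

# End-to-end wave-domain beamformer: G = Phi_L W_L ... Phi_1 W_1  (M x Nt).
G = np.eye(Nt)
for l in range(L):
    G = phi[l] @ W[l] @ G

# Channels from the SIM's output layer to K single-antenna users.
h = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))
p = np.full(K, 1.0 / K)          # uniform power split (unit budget)
sigma2 = 1e-2

# Sum rate when transmit antenna k carries user k's stream through the SIM.
rates = []
for k in range(K):
    g = h[k] @ G                 # effective 1 x Nt channel through the SIM
    sig = p[k] * np.abs(g[k]) ** 2
    intf = sum(p[j] * np.abs(g[j]) ** 2 for j in range(K) if j != k)
    rates.append(np.log2(1 + sig / (intf + sigma2)))
print(f"sum rate: {sum(rates):.2f} bit/s/Hz")
```

The joint design problem in the paper amounts to searching over the discrete phase matrices `phi` and the power vector `p` to maximize this sum rate; with random phases, as here, most of the multilayer beamforming gain is left on the table.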

Entanglement between quantum network nodes is often produced using intermediary devices - such as heralding stations - as a resource. When scaling quantum networks to many nodes, requiring a dedicated intermediary device for every pair of nodes introduces high costs. Here, we propose a cost-effective architecture to connect many quantum network nodes via a central quantum network hub called an Entanglement Generation Switch (EGS). The EGS allows multiple quantum nodes to be connected at a fixed resource cost by sharing the resources needed to make entanglement. We propose an algorithm called the Rate Control Protocol (RCP), which moderates the competition among sets of users for access to the hub's resources, and we prove a convergence theorem for the rates yielded by the algorithm. To derive the algorithm we work in the framework of Network Utility Maximization (NUM) and make use of the theory of Lagrange multipliers and Lagrangian duality. Our EGS architecture lays the groundwork for developing control architectures compatible with other types of quantum network hubs as well as system models of greater complexity.
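
The NUM machinery behind such a protocol can be illustrated with a one-constraint toy model (utilities, capacity, and step size are our stand-ins, not the RCP's actual parameters): users best-respond to a hub price, and dual ascent on the price drives total demand to the hub's capacity.

```python
import numpy as np

# Sketch of NUM-style rate control at an entanglement hub: S user pairs share
# a fixed pool of C entanglement attempts per second.
S, C = 5, 100.0
alpha = 0.01                     # dual step size
lam = 1.0                        # Lagrange multiplier (price of hub capacity)

# Proportional fairness: U_s(r) = log r, so the rate maximizing
# U_s(r) - lam * r is r_s = 1 / lam (each pair's demand at price lam).
for it in range(2000):
    r = np.full(S, 1.0 / lam)                      # best-response rates
    lam = max(lam + alpha * (r.sum() - C), 1e-6)   # dual (price) ascent
print(r, r.sum())  # converges toward the fair split C / S = 20 per pair
```

The multiplier acts as a congestion price for the hub's shared resources; at the fixed point, total demand equals capacity and the allocation is proportionally fair, which is the flavor of equilibrium the paper's convergence theorem establishes for its richer model.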

Deep neural networks have achieved remarkable progress in enhancing low-light images by improving their brightness and eliminating noise. However, most existing methods construct end-to-end mapping networks heuristically, neglecting the intrinsic priors of the image enhancement task and lacking transparency and interpretability. Although some unfolding solutions have been proposed to relieve these issues, they rely on proximal operator networks that deliver ambiguous and implicit priors. In this work, we propose a paradigm for low-light image enhancement that explores the potential of customized learnable priors to improve the transparency of the deep unfolding paradigm. Motivated by the powerful feature representation capability of the Masked Autoencoder (MAE), we customize MAE-based illumination and noise priors and redevelop them from two perspectives: 1) structure flow: we train the MAE from a normal-light image to its illumination properties and then embed it into the proximal operator design of the unfolding architecture; and 2) optimization flow: we train the MAE from a normal-light image to its gradient representation and then employ it as a regularization term to constrain noise in the model output. These designs improve the interpretability and representation capability of the model. Extensive experiments on multiple low-light image enhancement datasets demonstrate the superiority of our proposed paradigm over state-of-the-art methods. Code is available at https://github.com/zheng980629/CUE.
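
To show where a learned prior slots into an unfolded iteration, here is a heavily simplified numpy sketch: a Retinex-style data term is minimized by gradient steps, and a hypothetical `mae_prior` module (a plain box blur here, standing in for the customized MAE illumination prior) plays the proximal operator.

```python
import numpy as np

def mae_prior(L):
    # Stand-in for the learned proximal operator: a 3x3 box blur that
    # enforces smooth illumination, where the paper plugs in its MAE.
    pad = np.pad(L, 1, mode="edge")
    return sum(pad[i:i + L.shape[0], j:j + L.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def unfolding_step(I, L, R, eta=0.5):
    # Data term: I ~ L * R (Retinex split into illumination L and
    # reflectance R). Gradient step on L, then the learned prior as prox.
    grad_L = (L * R - I) * R
    L = mae_prior(L - eta * grad_L)
    R = np.clip(I / np.maximum(L, 1e-3), 0, 1)   # closed-form R update
    return L, R

rng = np.random.default_rng(2)
I = np.clip(rng.random((64, 64)) * 0.2, 0, 1)    # dim stand-in image
L, R = np.full_like(I, 0.5), I.copy()
for _ in range(8):                               # 8 unfolded stages
    L, R = unfolding_step(I, L, R)
enhanced = R                                     # brightened output
```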

In traditional blockchain networks, transaction fees are allocated only to full nodes (i.e., miners), regardless of the forwarding contributions of light nodes. However, the lack of a forwarding incentive reduces the willingness of light nodes to relay transactions, especially in the energy-constrained Mobile Ad Hoc Network (MANET). This paper proposes a novel dual auction mechanism to allocate transaction fees for forwarding and validation behaviors in the wireless blockchain network. The dual auction mechanism consists of two auction models: the forwarding auction and the validation auction. In the forwarding auction, forwarding nodes use the Generalized First Price (GFP) auction to choose transactions to forward and adjust their forwarding probability through a no-regret algorithm to improve efficiency. In the validation auction, full nodes select transactions using the Vickrey-Clarke-Groves (VCG) mechanism to construct the block. We prove that the designed dual auction mechanism satisfies Incentive Compatibility (IC), Individual Rationality (IR), and Computational Efficiency (CE). In particular, we derive an upper bound on the social welfare gap between the socially optimal auction and our proposed one. Extensive simulation results demonstrate that the proposed dual auction mechanism decreases energy and spectrum resource consumption and effectively improves social welfare without sacrificing the throughput or the security of the wireless blockchain network.
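
The two pricing rules can be contrasted in a few lines of Python (toy bids and block size; the paper's models additionally involve forwarding probabilities and energy costs on top of this skeleton):

```python
# Sketch contrasting the two pricing rules used in the dual auction.
def gfp_winners(bids, k):
    """Generalized First Price: top-k bidders win and pay their own bid."""
    order = sorted(bids, key=bids.get, reverse=True)[:k]
    return {tx: bids[tx] for tx in order}

def vcg_winners(bids, k):
    """VCG with k identical block slots and unit demand: top-k bidders win
    and each pays the externality it imposes, i.e., the (k+1)-th highest bid."""
    order = sorted(bids, key=bids.get, reverse=True)
    price = bids[order[k]] if len(order) > k else 0.0
    return {tx: price for tx in order[:k]}

fees = {"tx1": 9.0, "tx2": 7.0, "tx3": 4.0, "tx4": 2.0}
print(gfp_winners(fees, 2))   # {'tx1': 9.0, 'tx2': 7.0}  pay-your-bid
print(vcg_winners(fees, 2))   # {'tx1': 4.0, 'tx2': 4.0}  truthful pricing
```

Pay-your-bid GFP keeps forwarding decisions lightweight, while the (k+1)-th-price VCG payment is what makes truthful bidding a dominant strategy in the validation auction.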

Matrix/array analysis of networks can provide significant insight into their behavior and aid in their operation and protection. Prior work has demonstrated the analytic, performance, and compression capabilities of GraphBLAS (graphblas.org) hypersparse matrices and D4M (d4m.mit.edu) associative arrays (a mathematical superset of matrices). Obtaining the benefits of these capabilities requires integrating them into operational systems, which comes with its own unique challenges. This paper describes two examples of real-time operational implementations. The first is an operational GraphBLAS implementation that constructs anonymized hypersparse matrices on a high-bandwidth network tap. The second is an operational D4M implementation that analyzes daily cloud gateway logs. The architectures of these implementations are presented. Detailed measurements of the resources and the performance are collected and analyzed. The implementations are capable of meeting their operational requirements using modest computational resources (a couple of processing cores). GraphBLAS is well-suited for low-level analysis of high-bandwidth connections with relatively structured network data, whereas D4M is well-suited for higher-level analysis of more unstructured data. This work demonstrates that these technologies can be implemented in operational settings.
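
The hypersparse traffic-matrix construction at the heart of such a GraphBLAS deployment can be sketched with scipy.sparse as a stand-in (the anonymization here is a simple hash, and the index space and data are illustrative, not the deployment's actual parameters):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Sketch of the hypersparse traffic-matrix idea: sources index rows,
# destinations index columns, and duplicate (src, dst) pairs sum into
# packet counts. Addresses are hashed into a fixed 2^17 x 2^17 space.
SPACE = 1 << 17

def traffic_matrix(packets):
    src = np.array([hash(s) % SPACE for s, _ in packets])
    dst = np.array([hash(d) % SPACE for _, d in packets])
    vals = np.ones(len(packets))
    # coo_matrix sums duplicate coordinates on conversion, mirroring the
    # GraphBLAS "build with plus-accumulate" pattern.
    return coo_matrix((vals, (src, dst)), shape=(SPACE, SPACE)).tocsr()

window = [("10.0.0.1", "10.0.0.9")] * 3 + [("10.0.0.2", "10.0.0.9")]
A = traffic_matrix(window)
print(A.nnz, A.sum())        # 2 distinct links, 4 packets
# Row and column sums give per-source fan-out and per-destination fan-in.
```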

Under mild assumptions, we investigate the structure of the loss landscape of two-layer neural networks near global minima, determine the set of parameters that yield perfect generalization, and fully characterize the gradient flows around it. With novel techniques, our work uncovers some simple aspects of the complicated loss landscape and reveals how the model, target function, samples, and initialization affect the training dynamics differently. Based on these results, we also explain why (overparametrized) neural networks could generalize well.
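
For concreteness, the objects under study can be written out as follows (our notation; a standard two-layer setup is assumed, which may differ in detail from the paper's):

```latex
% A width-m two-layer network and its squared training loss on samples (x_i, y_i):
f_\theta(x) = \sum_{j=1}^{m} a_j\, \sigma\big(w_j^{\top} x\big),
\qquad
L(\theta) = \frac{1}{2n} \sum_{i=1}^{n} \big(f_\theta(x_i) - y_i\big)^{2}.
% The global minima form the set \{\theta : L(\theta) = 0\}, and the training
% dynamics analyzed near that set are the gradient flow
\dot{\theta}(t) = -\nabla L\big(\theta(t)\big).
```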

A large number of real-world graphs or networks are inherently heterogeneous, involving a diversity of node types and relation types. Heterogeneous graph embedding aims to embed the rich structural and semantic information of a heterogeneous graph into low-dimensional node representations. Existing models usually define multiple metapaths in a heterogeneous graph to capture the composite relations and guide neighbor selection. However, these models either omit node content features, discard intermediate nodes along the metapath, or consider only one metapath. To address these three limitations, we propose a new model named Metapath Aggregated Graph Neural Network (MAGNN) to boost the final performance. Specifically, MAGNN employs three major components, i.e., node content transformation to encapsulate input node attributes, intra-metapath aggregation to incorporate intermediate semantic nodes, and inter-metapath aggregation to combine messages from multiple metapaths. Extensive experiments on three real-world heterogeneous graph datasets for node classification, node clustering, and link prediction show that MAGNN achieves more accurate prediction results than state-of-the-art baselines.
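
A toy numpy sketch of the two aggregation stages for a single target node (mean pooling stands in for MAGNN's metapath instance encoder, and the attention parametrization is simplified; data and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8                                  # hidden dimension

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def intra_metapath(instances, att):
    # Each instance is the feature matrix of all nodes on one metapath
    # instance, intermediate nodes included. Mean pooling stands in for
    # MAGNN's relational rotation encoder.
    enc = np.stack([np.mean(inst, axis=0) for inst in instances])
    alpha = softmax(enc @ att)          # attention over instances
    return alpha @ enc

def inter_metapath(summaries, att):
    beta = softmax(np.stack(summaries) @ att)   # attention over metapaths
    return beta @ np.stack(summaries)

# Two metapaths (e.g., author-paper-author and author-paper-venue-paper-author),
# each with a few sampled instances of content-transformed node features.
metapaths = [
    [rng.normal(size=(3, D)) for _ in range(4)],   # 4 instances, 3 nodes each
    [rng.normal(size=(5, D)) for _ in range(2)],   # 2 instances, 5 nodes each
]
att = rng.normal(size=D)
summaries = [intra_metapath(insts, att) for insts in metapaths]
h = inter_metapath(summaries, att)     # final embedding of the target node
print(h.shape)                         # (8,)
```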

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify land cover categories of hyperspectral cubes, thereby taking advantage of the large amount of unlabeled data. Moreover, we adopt a conditional random field to refine the preliminary classification results produced by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small amount of training data.
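
The semi-supervised part of such a framework can be sketched in PyTorch: a discriminator over spectral-spatial cubes outputs K land-cover classes plus one "fake" class, so unlabeled cubes still contribute through the real-vs-fake decision (the architecture and sizes below are illustrative assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

K = 9                                   # land-cover classes (toy value)

class SpectralSpatialD(nn.Module):
    def __init__(self, bands=103, patch=9, k=K):
        super().__init__()
        self.net = nn.Sequential(
            # 3-D convolutions see the spectral and spatial axes jointly,
            # capturing the spectral-spatial character of HSI cubes.
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), stride=(2, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=(7, 3, 3), stride=(2, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, k + 1),       # K real classes + 1 "fake" class
        )

    def forward(self, cube):            # cube: (B, 1, bands, patch, patch)
        return self.net(cube)

d = SpectralSpatialD()
logits = d(torch.randn(4, 1, 103, 9, 9))
print(logits.shape)                     # torch.Size([4, 10])
# Unlabeled cubes train the real-vs-fake margin; labeled cubes train the
# K-way head; a CRF can then spatially smooth the per-pixel predictions.
```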
