
Extremely large-scale reconfigurable intelligent surface (XL-RIS) has recently been proposed and is recognized as a promising technology that can further enhance the capacity of communication systems and compensate for severe path loss. However, the pilot overhead of beam training in XL-RIS-assisted wireless communication systems is enormous, because the near-field channel model must be taken into account and the number of candidate codewords in the codebook increases dramatically as a result. To tackle this problem, we propose two deep learning-based near-field beam training schemes for XL-RIS-assisted communication systems, in which deep residual networks are employed to determine the optimal near-field RIS codeword. Specifically, we first propose a far-field beam-based beam training (FBT) scheme, in which the received signals of all far-field RIS codewords are fed into the neural network to estimate the optimal near-field RIS codeword. To further reduce the pilot overhead, a partial near-field beam-based beam training (PNBT) scheme is proposed, where only the received signals corresponding to a subset of the near-field XL-RIS codewords serve as input to the neural network. Moreover, we propose an improved PNBT scheme that enhances beam training performance by fully exploiting the neural network's output. Finally, simulation results show that the proposed schemes outperform existing beam training schemes and can reduce the beam sweeping overhead by approximately 95%.
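As a rough illustration of the FBT idea, the following is a minimal sketch of a residual network that maps received pilot signals (one complex sample per far-field codeword, split into real and imaginary parts) to scores over near-field codewords. The codebook sizes, layer widths, and depth are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the FBT idea: a residual MLP that maps received pilot
# signals (one per far-field codeword) to scores over near-field codewords.
# All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

N_FAR, N_NEAR, HIDDEN = 64, 2048, 512  # hypothetical codebook sizes

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.net(x))  # skip connection

class BeamTrainingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # input: real and imaginary parts of the N_FAR received signals
        self.inp = nn.Linear(2 * N_FAR, HIDDEN)
        self.blocks = nn.Sequential(*[ResidualBlock(HIDDEN) for _ in range(4)])
        self.out = nn.Linear(HIDDEN, N_NEAR)  # logits over near-field codewords

    def forward(self, y):
        return self.out(self.blocks(self.inp(y)))

model = BeamTrainingNet()
y = torch.randn(8, 2 * N_FAR)    # batch of received pilot measurements
best = model(y).argmax(dim=-1)   # predicted optimal codeword indices
```

The improved PNBT variant that "fully exploits the neural network's output" could, for instance, test the top-k scoring codewords over the air rather than committing to the argmax alone, though the exact mechanism is the paper's.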

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international conference on networking. Publisher: IFIP. SIT:

Producing thousands of simulations of the dark matter distribution in the Universe with increasing precision is a challenging but critical task for exploiting current and forthcoming cosmological surveys. Many inexpensive substitutes for full $N$-body simulations have been proposed, even though they often fail to reproduce the statistics of the smaller, non-linear scales. Among these alternatives, a common approximation is the lognormal distribution, which comes with its own limitations but is extremely fast to compute even for high-resolution density fields. In this work, we train a generative deep learning model, composed mainly of convolutional layers, to transform projected lognormal dark matter density fields into more realistic dark matter maps, as obtained from full $N$-body simulations. We detail the procedure we follow to generate highly correlated pairs of lognormal and simulated maps, which we use as training data, exploiting the information in the Fourier phases. We demonstrate the performance of our model by comparing various statistical tests at different field resolutions, redshifts, and cosmological parameters, proving its robustness and explaining its current limitations. When evaluated on 100 test maps, the augmented lognormal random fields reproduce the power spectrum up to wavenumbers of $1 \ h \ \rm{Mpc}^{-1}$ and the bispectrum to within 10%, always within the error bars of the fiducial target simulations. Finally, we describe how we plan to integrate the proposed model with existing tools to yield more accurate spherical random fields for weak lensing analyses.
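To make the mapping concrete, here is a hedged sketch of a fully convolutional network that takes a projected lognormal map and outputs a corrected, more realistic map; the residual formulation and layer sizes are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative sketch: a small fully convolutional network that maps a
# projected lognormal density field to a corrected map. Architecture
# details are assumptions, not the authors' exact model.
import torch
import torch.nn as nn

class LognormalToNbody(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        # residual correction leaves the large-scale field largely intact
        # while adjusting the small, non-linear scales
        return x + self.net(x)

model = LognormalToNbody()
maps = torch.randn(4, 1, 128, 128)  # batch of projected lognormal fields
out = model(maps)                    # same shape, "augmented" fields
```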

Building segmentation is a fundamental task in earth observation and aerial imagery analysis. Most existing deep learning-based methods in the literature apply to imagery of a fixed or narrow range of spatial resolutions. In practical scenarios, however, users deal with a broad spectrum of image resolutions, so a given aerial image often needs to be re-sampled to match the spatial resolution of the dataset used to train the deep learning model, which degrades segmentation performance. To overcome this challenge, we propose in this manuscript the Scale-invariant Neural Network (Sci-Net) architecture, which segments buildings from aerial images across a wide range of spatial resolutions. Specifically, our approach leverages the UNet hierarchical representation and Dense Atrous Spatial Pyramid Pooling to extract fine-grained multi-scale representations. Sci-Net significantly outperforms state-of-the-art models on the Open Cities AI and the Multi-Scale Building datasets, with a steady improvement margin across different spatial resolutions.
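For readers unfamiliar with the atrous pyramid component, below is a minimal Atrous Spatial Pyramid Pooling block in the spirit of the Dense ASPP module mentioned above; the dilation rates and channel counts are illustrative, not Sci-Net's exact configuration.

```python
# Minimal ASPP block: parallel dilated convolutions at several rates,
# concatenated and projected. Rates and channels are illustrative.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # each branch sees a different receptive field, which is what makes
        # the extracted features less sensitive to the input resolution
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(feats)

x = torch.randn(2, 256, 64, 64)
y = ASPP(256, 128)(x)   # -> (2, 128, 64, 64), spatial size preserved
```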

Link streams offer a good model for representing interactions over time. They consist of links $(b,e,u,v)$, where $u$ and $v$ are vertices interacting during the whole time interval $[b,e]$. In this paper, we deal with the problem of enumerating maximal cliques in link streams. A clique is a pair $(C,[t_0,t_1])$, where $C$ is a set of vertices that all interact pairwise during the full interval $[t_0,t_1]$. It is maximal when neither its set of vertices nor its time interval can be increased. Some of the main works solving this problem are based on the famous Bron-Kerbosch algorithm for enumerating maximal cliques in graphs. We take this idea as a starting point and propose a new algorithm that matches the cliques of the instantaneous graphs formed by the links existing at a given time $t$ to the maximal cliques of the link stream. We prove its validity and compute its complexity, which is better than the state-of-the-art ones in many cases of interest. We also study the output-sensitive complexity, which is close to the output size, thereby showing that our algorithm is efficient. To confirm this, we perform experiments on link streams used in the state of the art, and on massive link streams with up to 100 million links. In all cases our algorithm is faster, mostly by a factor of at least 10 and up to a factor of $10^4$. Moreover, it scales to massive link streams for which the existing algorithms are unable to provide a solution.
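For reference, the classic Bron-Kerbosch recursion for static graphs, which the paper takes as its starting point, can be written in a few lines; the link-stream algorithm extends this idea with time intervals and is not reproduced here.

```python
# Classic Bron-Kerbosch recursion for static graphs (the paper's starting
# point). `graph` maps each vertex to its set of neighbours; R is the
# current clique, P the candidate vertices, X the excluded vertices.
def bron_kerbosch(R, P, X, graph, out):
    if not P and not X:
        out.append(set(R))          # R is a maximal clique
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & graph[v], X & graph[v], graph, out)
        P.remove(v)
        X.add(v)

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
bron_kerbosch(set(), set(graph), set(), graph, cliques)
print(cliques)   # [{1, 2, 3}, {3, 4}]
```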

Latent variable discovery is a central problem in data analysis, with a broad range of applications in applied science. In this work, we consider data given as an invertible mixture of two statistically independent components, and assume that one of the components is observed while the other is hidden. Our goal is to recover the hidden component. For this purpose, we propose an autoencoder equipped with a discriminator. Unlike the standard nonlinear ICA problem, which has been shown to be non-identifiable, in the special case of ICA we consider here, we show that our approach can recover the component of interest up to an entropy-preserving transformation. We demonstrate the performance of the proposed approach on several datasets, including image synthesis, voice cloning, and fetal ECG extraction.
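The following is a schematic sketch, under assumed dimensions, of this kind of setup: an autoencoder that extracts a hidden code from the mixture, with a discriminator contrasting joint pairs (observed, hidden) against shuffled pairs to push the two components toward independence. The losses and layer sizes are illustrative assumptions, not the paper's exact objective.

```python
# Schematic autoencoder + discriminator for recovering a hidden component.
# Dimensions, architectures, and losses are illustrative assumptions.
import torch
import torch.nn as nn

D_OBS, D_HID, D_IN = 8, 8, 16   # hypothetical dimensions

encoder = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_HID))
decoder = nn.Sequential(nn.Linear(D_OBS + D_HID, 64), nn.ReLU(),
                        nn.Linear(64, D_IN))
# the discriminator tries to tell joint samples (obs, hid) from shuffled
# pairs; fooling it encourages the hidden code to be independent of obs
disc = nn.Sequential(nn.Linear(D_OBS + D_HID, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(32, D_IN)       # mixture samples
obs = torch.randn(32, D_OBS)    # the observed component (given)
hid = encoder(x)
recon = decoder(torch.cat([obs, hid], dim=1))
recon_loss = nn.functional.mse_loss(recon, x)
joint = disc(torch.cat([obs, hid], dim=1))
shuffled = disc(torch.cat([obs, hid[torch.randperm(32)]], dim=1))
adv_loss = -(joint.mean() - shuffled.mean())  # encoder side of the game
loss = recon_loss + adv_loss                  # one possible combined objective
```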

With more and more computation being shifted to the edge of the network, monitoring critical infrastructures, such as intermediate processing nodes in autonomous driving, is further complicated by the typically resource-constrained environments. To reduce the resource overhead that monitoring imposes on the network link, various methods have been discussed that either follow a filtering approach on data-emitting devices or conduct dynamic sampling based on employed prediction models. Still, existing methods mainly require adaptive monitoring on edge devices, which demands device reconfiguration, consumes additional resources, and limits the sophistication of the employed models. In this paper, we propose a sampling-based, cloud-located approach that internally utilizes probabilistic forecasts and hence provides a means of quantifying model uncertainties, which can be used to contextually adapt sampling frequencies and consequently relieve constrained network resources. We evaluate our prototype implementation of the monitoring pipeline on a publicly available streaming dataset and demonstrate its positive impact on resource efficiency in a method comparison.
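A hedged sketch of the core adaptation logic follows: the cloud-side monitor widens the sampling interval when the probabilistic forecast is confident and tightens it when uncertainty grows. The thresholds and the mapping are placeholder assumptions, not the paper's calibrated policy.

```python
# Placeholder policy: map forecast uncertainty (standard deviation of a
# probabilistic forecast) to the next sampling interval. Thresholds and
# scaling factors are illustrative assumptions.
def next_sampling_interval(pred_std, base_interval=10.0, low=0.1, high=0.5):
    """Return the next sampling interval in seconds."""
    if pred_std < low:        # model is confident: sample rarely
        return 4 * base_interval
    if pred_std > high:       # model is unsure: fall back to dense sampling
        return base_interval / 2
    return base_interval

for std in (0.05, 0.3, 0.9):
    print(f"forecast std {std} -> sample every {next_sampling_interval(std)} s")
```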

In recent decades, legal case search has received increasing attention. Legal practitioners rely on similar-case search to carry out their work and improve their efficiency. During a search, they often need results under several different causes of action as references. However, existing work tends to focus on the relevance of the judgments themselves, without considering the connections between causes of action. Several well-established diversity search techniques exist in open-domain search. However, these techniques do not account for the specifics of legal search scenarios; for example, subtopics may not be independent of one another but connected in some way. We therefore construct a diversity-aware legal retrieval model that accounts for both diversity and relevance and is well adapted to this scenario. Given the lack of datasets with diversity labels, we also construct a diversity legal retrieval dataset and obtain labels through manual annotation. Experiments confirm that our model is effective.
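As background for the relevance/diversity trade-off the model targets, the sketch below shows Maximal Marginal Relevance (MMR), one of the well-established open-domain diversity techniques mentioned above; it is shown only as a baseline illustration, not as the proposed legal retrieval model.

```python
# Maximal Marginal Relevance (MMR): greedily pick documents that are
# relevant to the query but dissimilar to those already selected.
import numpy as np

def mmr(query_scores, doc_sims, k, lam=0.7):
    """query_scores: per-doc relevance; doc_sims: doc-doc similarity matrix."""
    selected, candidates = [], list(range(len(query_scores)))
    while candidates and len(selected) < k:
        def score(d):
            redundancy = max((doc_sims[d][s] for s in selected), default=0.0)
            return lam * query_scores[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rel = np.array([0.9, 0.85, 0.4, 0.8])
sims = np.array([[1.0, 0.9, 0.1, 0.2],
                 [0.9, 1.0, 0.1, 0.3],
                 [0.1, 0.1, 1.0, 0.2],
                 [0.2, 0.3, 0.2, 1.0]])
print(mmr(rel, sims, k=3))  # [0, 3, 1]: relevant but mutually dissimilar
```

Note that plain MMR treats subtopics as independent, which is exactly the assumption the abstract argues fails in legal search, where causes of action can be connected.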

Extremely large-scale array (XL-array) is envisioned to achieve super-high spectral efficiency in future wireless networks. Different from existing works, which mostly focus on near-field communications, we consider in this paper a new and practical scenario, called mixed near- and far-field communications, in which both near- and far-field users coexist in the network. For this scenario, we first obtain a closed-form expression for the inter-user interference at the near-field user caused by the far-field beam by using Fresnel functions, based on which we analyze the effects of the number of BS antennas, the far-field user (FU) angle, and the near-field user (NU) angle and distance. We show that strong interference arises when the number of BS antennas and the NU distance are relatively small, and/or the NU-FU angle difference is small. We then derive the achievable rate of the NU as well as its rate loss caused by FU interference. Finally, numerical results are provided to corroborate our analytical results.
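The Fresnel functions underlying the closed-form interference expression are the standard integrals $C(x)=\int_0^x \cos(\pi t^2/2)\,dt$ and $S(x)=\int_0^x \sin(\pi t^2/2)\,dt$; the short sketch below evaluates them with SciPy. How they combine with the system geometry follows the paper's derivation and is not reproduced here.

```python
# Evaluate the standard Fresnel integrals C(x) and S(x) with SciPy.
import numpy as np
from scipy.special import fresnel

x = np.linspace(0.0, 5.0, 6)
S, C = fresnel(x)   # scipy returns (S(x), C(x)) in that order
for xi, si, ci in zip(x, S, C):
    print(f"x={xi:.1f}  S={si:+.4f}  C={ci:+.4f}")
```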

Precise load forecasting in buildings could increase bill savings and facilitate optimized strategies for power generation planning. With the rapid evolution of computer science, data-driven techniques, in particular deep learning models, have become a promising solution to the load forecasting problem. These models have shown accurate forecasting results; however, they need abundant historical data to maintain their performance. For new buildings and buildings with low-resolution metering equipment, it is difficult to obtain enough historical data, leading to poor forecasting performance. To adapt deep learning models to buildings with limited and scarce data, this paper proposes a Building-to-Building Transfer Learning framework to overcome this problem and enhance the performance of deep learning models. The transfer learning approach is applied to the Transformer model because of its efficacy in capturing data trends. The performance of the algorithm is tested on a large commercial building with limited data. The results show that the proposed approach improves forecasting accuracy by 56.8% compared with conventional deep learning, where the model is trained from scratch. The paper also compares the proposed Transformer model with other sequential deep learning models such as Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN). The Transformer model outperforms the other models, reducing the root mean square error to 0.009, compared with 0.011 for LSTM and 0.051 for RNN.
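A minimal sketch of the building-to-building transfer recipe, assuming a generic Transformer-based forecaster as a stand-in for the paper's model: pretrain on a data-rich source building, then freeze the shared encoder and fine-tune only the input projection and forecasting head on the target building's scarce data.

```python
# Hedged sketch of building-to-building transfer with a generic
# Transformer forecaster (a stand-in, not the paper's exact model).
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, d_model=32):
        super().__init__()
        self.proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)   # one-step-ahead load forecast

    def forward(self, x):                   # x: (batch, seq_len, 1)
        h = self.encoder(self.proj(x))
        return self.head(h[:, -1])

model = Forecaster()
# ... pretrain on the source building's long load history ...
for p in model.encoder.parameters():        # freeze the shared encoder
    p.requires_grad = False
optim = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
# ... fine-tune `proj` and `head` on the target building's limited data ...
```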

XL-MIMO promises ultrahigh data rates in the Terahertz (THz) spectrum. However, the spherical-wavefront propagation caused by the large aperture array poses significant challenges for channel state information (CSI) acquisition. Two independent parameters (physical angle and transmission distance) must be considered simultaneously in XL-MIMO beamforming, which incurs severe training overhead and beamforming degradation. To address this problem, we exploit the near-field channel characteristics and propose a low-overhead hierarchical beam training scheme for near-field XL-MIMO systems. First, we project the near-field channel into the spatial-angular domain and the slope-intercept domain to capture detailed representations. Second, we propose a novel spatial-chirp beam-aided codebook and a corresponding hierarchical update policy. Theoretical analyses and numerical simulations verify the superior beamforming performance and reduced training overhead.
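The following is a generic hierarchical beam-training loop of the kind such schemes build on: measure a few wide beams, keep the best, and refine only its children at the next level. The toy one-dimensional codebook is a placeholder; the paper's spatial-chirp codewords additionally encode the distance (intercept) dimension.

```python
# Generic hierarchical beam training: sweep only the children of the best
# beam at each level instead of the whole bottom-level codebook.
def hierarchical_search(measure, levels):
    """levels[l] maps a parent index (None at the top) to a list of
    (index, codeword) children; measure(codeword) -> received power."""
    parent, best = None, None
    for level in levels:
        candidates = level[parent]
        best = max(candidates, key=lambda ic: measure(ic[1]))
        parent = best[0]
    return best   # (index, codeword) of the selected narrow beam

# Toy 1-D codebook: codewords are beam centre angles in [0, 1)
true_angle = 0.37
measure = lambda ang: -abs(ang - true_angle)          # mock received power
level0 = {None: [(i, 0.125 + 0.25 * i) for i in range(4)]}       # wide beams
level1 = {i: [(4 * i + j, 0.25 * i + 0.0625 + 0.125 * j) for j in range(2)]
          for i in range(4)}                                     # refinements
print(hierarchical_search(measure, [level0, level1]))  # (4, 0.3125)
```

Here only 4 + 2 = 6 beams are measured instead of the 8 bottom-level beams; at realistic codebook sizes the saving is far larger.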

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is, counterintuitively, to train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
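As a concrete illustration of the compression step, the sketch below applies two stock PyTorch utilities, magnitude pruning followed by dynamic int8 quantization, to a stand-in model; the paper's experiments concern large Transformers, and the compression amounts here are arbitrary.

```python
# "Train large, then compress" compression step with stock PyTorch tools:
# magnitude pruning, then dynamic int8 quantization. The model is a
# stand-in; amounts and sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# 1) prune 60% of the smallest-magnitude weights in every linear layer
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")   # make the sparsity permanent

# 2) quantize the remaining weights to int8 for inference
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)   # the compressed model still runs end-to-end
```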
