
The majority of stochastic channel models rely on the electromagnetic far-field assumption. This assumption breaks down in future applications that push towards the electromagnetic near-field region, such as those envisioning the use of very large antenna arrays. Motivated by this consideration, we show how physical principles can be used to derive a channel model that is also valid in the electromagnetic near-field. We show that wave propagation through a three-dimensional scattered medium can generally be modeled as a linear and space-variant system. We first review the physical principles that lead to a closed-form deterministic angular representation of the channel response. This serves as a basis for deriving a stochastic representation of the channel in terms of statistically independent Gaussian random coefficients for spatially-stationary random propagation environments. The very desirable property of spatial stationarity can always be retained by excluding reactive propagation mechanisms confined to the extreme near-field region. Remarkably, the provided stochastic representation is directly connected to the Fourier spectral representation of a general spatially-stationary random field.
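To make the last point concrete, the sketch below synthesizes a one-dimensional spatially-stationary Gaussian random field from independent Gaussian Fourier coefficients, mirroring the spectral representation described above. The power spectral density `psd` and all parameter values are illustrative choices, not the paper's model.

```python
import numpy as np

# Minimal sketch, all parameters illustrative: synthesize a 1-D
# spatially-stationary Gaussian random field by assigning independent
# complex Gaussian coefficients to Fourier modes, weighted by a power
# spectral density, and inverse-transforming.
rng = np.random.default_rng(0)

n, dx = 512, 0.1                      # samples and spacing (wavelengths)
k = np.fft.fftfreq(n, d=dx)           # spatial frequencies

psd = np.exp(-(k / 0.8) ** 2)         # example power spectral density
coeff = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Independent Gaussian Fourier coefficients shaped by sqrt(PSD); the
# inverse FFT yields one realization of a stationary random field.
field = np.sqrt(n) * np.fft.ifft(np.sqrt(psd) * coeff)
print(field.shape, field.dtype)       # (512,) complex128
```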

Related content


We study downlink channel estimation in a multi-cell Massive multiple-input multiple-output (MIMO) system operating in time-division duplex. The users must know their effective channel gains to decode their received downlink data. Previous works have used the mean value as the estimate, motivated by channel hardening. However, this is associated with a performance loss in non-isotropic scattering environments. We propose two novel estimation methods that can be applied without downlink pilots. The first method is model-based and utilizes asymptotic arguments to identify a connection between the effective channel gain and the average received power during a coherence block. The second method is data-driven and trains a neural network to identify a mapping between the available information and the effective channel gain. Both methods can be utilized for any channel distribution and precoding. For the model-based method, we derive closed-form expressions when using maximum ratio or zero-forcing precoding. We compare the proposed methods with the state-of-the-art using the normalized mean-squared error and spectral efficiency (SE). The results suggest that the two proposed methods provide better SE than the state-of-the-art when there is a low level of channel hardening, while the performance difference is relatively small under the uncorrelated channel model.
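As a rough illustration of the data-driven method, the following sketch trains a small neural network to map features plausibly available at the user (here, hypothetically, the average received power over a coherence block and the channel-gain mean) to the effective channel gain. The feature set, architecture, and placeholder training data are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the data-driven estimator: a small MLP maps
# per-coherence-block features to the effective channel gain. The
# feature choice and architecture are illustrative assumptions.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# x: [avg_received_power, mean_effective_gain]; y: true effective gain.
# In practice both would be produced offline by simulating the chosen
# channel model and precoder; here they are placeholder data.
x = torch.rand(1024, 2)
y = x.sum(dim=1, keepdim=True) + 0.01 * torch.randn(1024, 1)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```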

In this paper, the role of a finite-rate secret key in enhancing the secrecy performance of a system whose users operate in interference-limited scenarios is studied. To address this problem, a two-user Gaussian Z-interference channel (Z-IC) with a secrecy constraint at the receiver is considered. One of the fundamental problems here is how to use the secret key as part of the encoding process. The paper proposes novel achievable schemes that differ from each other in how the key is used in the encoding process. The first achievable scheme uses one part of the key for a one-time pad and the remaining part for wiretap coding. The encoding is performed such that the receiver experiencing interference can decode part of the interference without violating the secrecy constraint. As special cases of the derived result, one can obtain the secrecy rate region when the key is used entirely for the one-time pad or entirely for wiretap coding. The second scheme uses the shared key to encrypt the message with a one-time pad and, in contrast to the previous case, no interference is decoded at the receiver. The paper also derives an outer bound on the sum rate and on the secrecy rate of the transmitter that causes interference. The main novelty in deriving the outer bound lies in the selection of the side information provided to the receiver and in the use of the secrecy constraint at the receiver. The derived outer bounds are found to be tight depending on the channel conditions and the rate of the key. The scaling behaviour of the key rate is also explored for the different schemes using the notion of secure generalized degrees of freedom (GDOF). The optimality of the different schemes is characterized for some specific cases. The developed results show the importance of key-rate splitting in enhancing the secrecy performance of the system when users operate in interference-limited environments.
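The key-splitting idea behind the first scheme can be illustrated with a minimal sketch: the first portion of the shared key is consumed as a one-time pad, and the remainder is set aside for the wiretap-coding stage, which is abstracted away here. The function and parameter names are hypothetical; this is not the paper's encoder.

```python
import secrets

# Illustrative key-rate splitting (not the paper's encoder): the first
# n_otp key bytes are consumed as a one-time pad; the remaining bytes
# are reserved for the wiretap-coding stage, abstracted away here.
def split_key_encrypt(message: bytes, key: bytes, n_otp: int):
    otp_key, wiretap_key = key[:n_otp], key[n_otp:]
    assert len(otp_key) >= len(message), "one-time pad must cover the message"
    cipher = bytes(m ^ k for m, k in zip(message, otp_key))
    return cipher, wiretap_key  # wiretap_key would feed the wiretap encoder

msg = b"hello"
key = secrets.token_bytes(16)
cipher, wk = split_key_encrypt(msg, key, n_otp=8)
plain = bytes(c ^ k for c, k in zip(cipher, key[:8]))
assert plain == msg  # XOR with the same pad bytes recovers the message
```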

True random number generators (TRNG) sample random physical processes to create large amounts of random numbers for various use cases, including security-critical cryptographic primitives, scientific simulations, machine learning applications, and even recreational entertainment. Unfortunately, not every computing system is equipped with dedicated TRNG hardware, limiting the application space and security guarantees for such systems. To open the application space and enable security guarantees for the overwhelming majority of computing systems that do not necessarily have dedicated TRNG hardware, we develop QUAC-TRNG. QUAC-TRNG exploits the new observation that a carefully-engineered sequence of DRAM commands activates four consecutive DRAM rows in rapid succession. This QUadruple ACtivation (QUAC) causes the bitline sense amplifiers to non-deterministically converge to random values when we activate four rows that store conflicting data because the net deviation in bitline voltage fails to meet reliable sensing margins. We experimentally demonstrate that QUAC reliably generates random values across 136 commodity DDR4 DRAM chips from one major DRAM manufacturer. We describe how to develop an effective TRNG (QUAC-TRNG) based on QUAC. We evaluate the quality of our TRNG using NIST STS and find that QUAC-TRNG successfully passes each test. Our experimental evaluations show that QUAC-TRNG generates true random numbers with a throughput of 3.44 Gb/s (per DRAM channel), outperforming the state-of-the-art DRAM-based TRNG by 15.08x and 1.41x for basic and throughput-optimized versions, respectively. We show that QUAC-TRNG utilizes DRAM bandwidth better than the state-of-the-art, achieving up to 2.03x the throughput of a throughput-optimized baseline when scaling bus frequencies to 12 GT/s.
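As a flavor of how TRNG output quality is assessed, the sketch below implements the NIST STS frequency (monobit) test, which checks that the fraction of ones in a bitstream is consistent with a fair coin. The full suite applies many more tests; this single test is only illustrative, and the bits here come from a software PRNG rather than DRAM.

```python
import math
import random

# NIST STS frequency (monobit) test: map bits to +/-1, sum, normalize,
# and compute a p-value via the complementary error function. A healthy
# generator should yield p-values above the usual 0.01 threshold.
def monobit_p_value(bits):
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

bits = [random.getrandbits(1) for _ in range(10**5)]
print(monobit_p_value(bits) > 0.01)  # True for a well-behaved stream
```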

Dark patterns have emerged as a set of methods that exploit cognitive biases to trick users into making decisions that are more aligned with a third party's interests than with their own. These patterns can have consequences ranging from inconvenience to global disasters. We present a case of a drug company and an electronic medical record vendor who colluded to modify the medical record's interface to induce clinicians to increase prescriptions of extended-release opioids, a class of drugs with a high potential for addiction that has caused almost half a million additional deaths in the past two decades. Through this case, we present the use and effects of dark patterns in healthcare, discuss the current challenges, and offer some recommendations on how to address this pressing issue.

A proper orthogonal decomposition-based B-splines Bézier elements method (POD-BSBEM) is proposed as a non-intrusive reduced-order model for uncertainty propagation analysis in stochastic time-dependent problems. The method uses a two-step proper orthogonal decomposition (POD) technique to extract the reduced basis from a collection of high-fidelity solutions called snapshots. A third POD level is then applied to the projection coefficients associated with the reduced basis to separate the time-dependent modes from the stochastically parametrized coefficients. These are approximated in the stochastic parameter space using B-spline basis functions defined on the corresponding Bézier elements. The accuracy and efficiency of the proposed method are assessed on benchmark steady-state and time-dependent problems and compared to the reduced-order model based on an artificial neural network (POD-ANN) and to the full-order model based on polynomial chaos expansion (Full-PCE). The POD-BSBEM is then applied to analyze uncertainty propagation through a flood-wave flow stemming from a hypothetical dam break in a river with complex bathymetry. The results confirm the ability of the POD-BSBEM to accurately predict the statistical moments of the output quantities of interest, with a substantial speed-up in both the offline and online stages compared to the other techniques.
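The basis-extraction step of POD is commonly implemented with a singular value decomposition of the snapshot matrix. The minimal sketch below shows this step with an illustrative energy-truncation criterion; the B-spline/Bézier regression of the projection coefficients is not reproduced here.

```python
import numpy as np

# Sketch of the POD step: extract a reduced basis from a snapshot
# matrix via SVD and truncate by an energy criterion (value below is
# illustrative, not the paper's setting).
def pod_basis(snapshots: np.ndarray, energy: float = 0.9999):
    # snapshots: (n_dof, n_snapshots), columns are high-fidelity solutions
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r]                      # reduced basis of rank r

snaps = np.random.rand(1000, 50)         # placeholder snapshot matrix
basis = pod_basis(snaps)
coeffs = basis.T @ snaps                 # projection coefficients
print(basis.shape, coeffs.shape)
```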

Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated, and user relations can be high-order. Hypergraphs provide a natural way to model complex high-order relations, yet their potential for improving social recommendation is under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user-relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for the aggregation loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical mutual information maximization. Experimental results on multiple real-world datasets show that the proposed model outperforms state-of-the-art (SOTA) methods, and an ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via //github.com/Coder-Yu/RecQ.
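For readers unfamiliar with the operator, the sketch below implements one channel of a generic hypergraph convolution in the commonly used normalized form X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Θ, where H is the node-hyperedge incidence matrix. This illustrates the basic operation only; the paper's multi-channel design and self-supervised objective are not reproduced here.

```python
import numpy as np

# Generic hypergraph convolution (one channel) in the normalized form
# X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta, with unit hyperedge
# weights. Illustrative only; not the paper's full multi-channel model.
def hypergraph_conv(H, X, Theta):
    # H: (n_nodes, n_edges) incidence matrix; X: (n_nodes, d) features
    w = np.ones(H.shape[1])                      # unit hyperedge weights
    dv = (H * w).sum(axis=1)                     # node degrees
    de = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta

H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)  # 3 nodes, 2 hyperedges
X = np.random.rand(3, 4)
Theta = np.random.rand(4, 8)
print(hypergraph_conv(H, X, Theta).shape)  # (3, 8)
```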

In this work, we propose a generally applicable transformation unit for visual recognition with deep convolutional neural networks. The transformation explicitly models channel relationships with explainable control variables. These variables determine the competitive or cooperative behaviors of neurons, and they are jointly optimized with the convolutional weights toward more accurate recognition. In Squeeze-and-Excitation (SE) Networks, channel relationships are implicitly learned by fully connected layers, and the SE block is integrated at the block level. We instead introduce a channel normalization layer to reduce the number of parameters and the computational complexity. This lightweight layer incorporates a simple l2 normalization, making our transformation unit applicable at the operator level without adding many parameters. Extensive experiments demonstrate the effectiveness of our unit, with clear margins on many vision tasks, including image classification on ImageNet, object detection and instance segmentation on COCO, and video classification on Kinetics.
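One plausible reading of such a layer is sketched below: each spatial position is l2-normalized across channels and rescaled by learnable per-channel gains. This is an assumption-laden illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Sketch of a lightweight channel-normalization unit: l2-normalize each
# spatial position across the channel dimension, then rescale with
# per-channel learnable gains. A plausible reading of the description
# above, not a confirmed reproduction of the paper's layer.
class ChannelNorm(nn.Module):
    def __init__(self, channels: int, eps: float = 1e-5):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.eps = eps

    def forward(self, x):  # x: (N, C, H, W)
        norm = x.norm(p=2, dim=1, keepdim=True).clamp_min(self.eps)
        return self.gain * x / norm

x = torch.randn(2, 64, 8, 8)
print(ChannelNorm(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```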

This paper revisits graph convolutional neural networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate the equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The resulting general framework enables a spectral analysis of the most popular ConvGNNs, explaining their performance and exposing their limits. Moreover, the proposed framework is used to design new convolutions with a custom frequency profile in the spectral domain while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework to graph convolutional networks, which decreases the total number of trainable parameters while preserving the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide some of the first experimental evidence of the transferability of spectral filter coefficients from one graph to another. Our source code is publicly available at: //github.com/balcilar/Spectral-Designed-Graph-Convolutions
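The spectral/spatial bridge can be made concrete with a polynomial filter: g(L) = c0 I + c1 L + c2 L^2 has a custom frequency profile g(λ) in the spectral domain, yet it is applied purely spatially via matrix products. The coefficients in the sketch below are illustrative.

```python
import numpy as np

# A polynomial of the normalized Laplacian defines a frequency profile
# g(lambda) in the spectral domain but is evaluated entirely in the
# spatial domain, one matrix product per polynomial order.
def poly_filter(L, X, coeffs):
    out = np.zeros_like(X)
    P = np.eye(L.shape[0])          # running power of L, starts at I
    for c in coeffs:
        out += c * (P @ X)
        P = L @ P
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
d = A.sum(axis=1)
L = np.eye(3) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)  # normalized Laplacian
X = np.random.rand(3, 4)
print(poly_filter(L, X, coeffs=[0.5, -0.3, 0.1]).shape)  # (3, 4)
```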

The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relations between two standard objectives in dimension reduction: maximizing variance and preserving pairwise relative distances. The derivation of their asymptotic correlation, together with numerical experiments, indicates that a projection usually cannot satisfy both objectives. In a standard classification problem, we determine projections on the input data that balance the two objectives and compare the subsequent results. Next, we extend the application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable the integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, applying the proposed loss functions increases accuracy.
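A minimal sketch of the two objectives, assuming a random orthogonal projection built via QR decomposition: it measures the retained-variance fraction and the average pairwise-distance distortion on synthetic data. PCA would maximize the former among linear projections, while a random orthogonal projection tends to better preserve the latter.

```python
import numpy as np

# Sketch: project data onto a random k-dimensional orthonormal basis
# and measure the two objectives discussed above: the fraction of total
# variance retained and the distortion of pairwise distances.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 100))          # 200 samples, 100 dimensions
k = 10

Q, _ = np.linalg.qr(rng.standard_normal((100, k)))  # orthonormal columns
Z = X @ Q                                           # projected data

var_retained = Z.var(axis=0).sum() / X.var(axis=0).sum()
i, j = rng.integers(0, 100, 50), rng.integers(100, 200, 50)  # disjoint pairs
dist_ratio = np.linalg.norm(Z[i] - Z[j], axis=1) / np.linalg.norm(X[i] - X[j], axis=1)
print(var_retained, dist_ratio.mean())
```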

We consider the task of learning the parameters of a single component of a mixture model when we are given side information about that component; we call this the "search problem" in mixture models. We would like to solve this with lower computational and sample complexity than solving the overall problem, where one learns the parameters of all components. Our main contributions are a simple but general model for the notion of side information and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these, we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real datasets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
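As a toy illustration of the search problem (explicitly not the paper's matrix-based algorithm), suppose the side information is a noisy per-sample indicator of membership in the target component of a two-component Gaussian mixture; a score-weighted mean then estimates that component's mean without fitting the full mixture.

```python
import numpy as np

# Toy illustration of the "search problem" (not the paper's algorithm):
# side information gives a noisy per-sample membership score for the
# target component; a score-weighted mean recovers that component's
# mean far better than the global mean, without fitting both components.
rng = np.random.default_rng(2)
n, d = 5000, 10
mu_target, mu_other = np.ones(d), -np.ones(d)

labels = rng.random(n) < 0.5                       # true memberships
X = np.where(labels[:, None], mu_target, mu_other) + rng.standard_normal((n, d))

# Side info: the true indicator, flipped with probability 0.2.
side = np.where(rng.random(n) < 0.8, labels, ~labels).astype(float)

mu_hat = (side[:, None] * X).sum(axis=0) / side.sum()
print(np.linalg.norm(mu_hat - mu_target))  # much closer than the global mean
```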
