
Wireless devices need spectrum to communicate. With the increase in the number of devices competing for the same spectrum, it has become nearly impossible to support the throughput requirements of all devices through current spectrum sharing methods. In this work, we revisit the problem of spectrum resource contention from first principles, taking inspiration from the principles of globalization. We develop a distributed algorithm whereby wireless nodes democratically share spectrum resources and improve their spectral efficiency and throughput without additional power or spectrum. We validate the performance of the proposed democratic spectrum sharing (DSS) algorithm on real-world Wi-Fi networks and on synthetically generated networks with varying design parameters. Compared to the greedy approach, DSS achieves significant gains in throughput (~60%), area spectral efficiency (~50%), and fairness in data-rate distribution (~20%). Due to its distributed nature, the algorithm can be applied to wireless networks of any size and density.


Topic: wireless networks. Publisher: Springer.

Non-Orthogonal Multiple Access (NOMA) is a promising technology for future Wi-Fi. In uplink NOMA, stations with different channel conditions transmit simultaneously at the same frequency by splitting the signal by power level. Since Wi-Fi uses random access, implementing uplink NOMA in Wi-Fi faces many challenges. The paper presents a data transmission mechanism in Wi-Fi networks that enables synchronous uplink NOMA, where multiple stations start data transmission to the access point simultaneously. The developed mechanism can work with the legacy Enhanced Distributed Channel Access (EDCA) mechanism in Wi-Fi. Simulation shows that the developed mechanism can double the total throughput and geometric mean throughput compared with legacy EDCA.
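The throughput gain of power-domain uplink NOMA can be illustrated with Shannon rates. In the sketch below (all powers and noise values are illustrative, not from the paper), the access point decodes the near user first, cancels it via successive interference cancellation (SIC), and then decodes the far user; the baseline gives each user half the channel time.

```python
import math

def noma_uplink_rates(p_near, p_far, noise):
    """Achievable rates (bit/s/Hz) for two-user uplink power-domain NOMA.

    The AP decodes the stronger (near) user first, treating the far user
    as interference, then subtracts it (SIC) and decodes the far user
    interference-free.  Inputs are received powers.
    """
    r_near = math.log2(1 + p_near / (p_far + noise))
    r_far = math.log2(1 + p_far / noise)
    return r_near, r_far

def oma_rates(p_near, p_far, noise):
    """Baseline: each user gets half the channel time (orthogonal access)."""
    r_near = 0.5 * math.log2(1 + p_near / noise)
    r_far = 0.5 * math.log2(1 + p_far / noise)
    return r_near, r_far

p_near, p_far, noise = 10.0, 1.0, 0.1
print(sum(noma_uplink_rates(p_near, p_far, noise)))  # NOMA sum rate
print(sum(oma_rates(p_near, p_far, noise)))          # OMA sum rate
```

With SIC, the NOMA sum rate equals the multiple-access sum capacity log2(1 + (p_near + p_far)/noise), which exceeds the time-shared baseline.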

Multi-access edge computing (MEC) is viewed as an integral part of future wireless networks to support new applications with stringent service reliability and latency requirements. However, guaranteeing ultra-reliable and low-latency MEC (URLL MEC) is very challenging due to uncertainties of wireless links, limited communications and computing resources, as well as dynamic network traffic. Enabling URLL MEC mandates taking into account the statistics of the end-to-end (E2E) latency and reliability across the wireless and edge computing systems. In this paper, a novel framework is proposed to optimize the reliability of MEC networks by considering the distribution of E2E service delay, encompassing over-the-air transmission and edge computing latency. The proposed framework builds on correlated variational autoencoders (VAEs) to estimate the full distribution of the E2E service delay. Using this result, a new optimization problem based on risk theory is formulated to maximize the network reliability by minimizing the Conditional Value at Risk (CVaR) as a risk measure of the E2E service delay. To solve this problem, a new algorithm is developed to efficiently allocate users' processing tasks to edge computing servers across the MEC network, while considering the statistics of the E2E service delay learned by VAEs. The simulation results show that the proposed scheme outperforms several baselines that do not account for the risk analyses or statistics of the E2E service delay.
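The risk measure at the heart of the formulation above, Conditional Value at Risk, has a simple empirical form: the mean of the worst (1 - alpha) fraction of outcomes. A minimal sketch (using synthetic lognormal delay samples rather than a VAE-learned distribution):

```python
import numpy as np

def cvar(samples, alpha=0.95):
    """Empirical Conditional Value at Risk: mean of the worst (1 - alpha) tail.

    For E2E delay samples, cvar(delays, 0.95) is the average delay over the
    worst 5% of realizations -- the quantity a risk-based objective would
    minimize.  Here it is estimated from samples, not from a learned model.
    """
    x = np.sort(np.asarray(samples))
    tail = x[int(np.ceil(alpha * len(x))):]
    return tail.mean()

rng = np.random.default_rng(0)
delays = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # synthetic E2E delays
print(cvar(delays, 0.95))  # tail mean; always exceeds the plain mean
```

Minimizing CVaR rather than the mean penalizes rare but severe delay spikes, which is exactly what ultra-reliability targets require.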

Cell-free (CF) massive multiple-input multiple-output (MIMO) systems are expected to implement advanced cooperative communication techniques to let geographically distributed access points jointly serve user equipments. Building on \emph{team theory}, we design uplink team minimum mean-squared error (TMMSE) combining under limited data and flexible channel state information (CSI) sharing. Taking into account the effect of both channel estimation errors and pilot contamination, a minimum MSE problem is formulated to derive unidirectional TMMSE, centralized TMMSE, and statistical TMMSE combining functions, for CF massive MIMO systems operating under unidirectional, centralized, and statistical CSI sharing schemes, respectively. We then derive the uplink spectral efficiency (SE) of the considered system. The results show that, compared to centralized TMMSE, unidirectional TMMSE incurs only about half the CSI-sharing cost with negligible SE performance loss. Moreover, the performance gap between unidirectional and centralized TMMSE combining can be effectively reduced by increasing the number of APs and antennas per AP.
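As a point of reference for the combiners above, here is a minimal sketch of centralized MMSE combining with perfect CSI (a simplified stand-in for the paper's TMMSE variants, which additionally handle estimation errors and limited CSI sharing). All dimensions and the noise level are illustrative.

```python
import numpy as np

def sinr(v, H, k, noise):
    """Uplink SINR for user k given combining vector v and channel matrix H
    (columns of H are the users' channel vectors)."""
    sig = abs(v.conj() @ H[:, k]) ** 2
    interf = sum(abs(v.conj() @ H[:, i]) ** 2
                 for i in range(H.shape[1]) if i != k)
    return sig / (interf + noise * np.linalg.norm(v) ** 2)

rng = np.random.default_rng(1)
M, K, noise = 8, 4, 0.1  # antennas, users, noise power (illustrative)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

k = 0
v_mr = H[:, k]                             # maximum-ratio combining baseline
cov = H @ H.conj().T + noise * np.eye(M)   # signal + interference + noise covariance
v_mmse = np.linalg.solve(cov, H[:, k])     # centralized MMSE combining

print(np.log2(1 + sinr(v_mmse, H, k, noise)))  # SE with MMSE combining
print(np.log2(1 + sinr(v_mr, H, k, noise)))    # SE with MR combining
```

MMSE combining maximizes the per-user SINR, so its spectral efficiency is never below the maximum-ratio baseline; the TMMSE functions in the paper recover this behavior under their respective CSI-sharing constraints.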

The performance of spectral clustering heavily relies on the quality of the affinity matrix. A variety of affinity-matrix-construction (AMC) methods have been proposed, but they have hyperparameters to determine beforehand, which requires strong experience and leads to difficulty in real applications, especially when the inter-cluster similarity is high and/or the dataset is large. In addition, we often need to choose different AMC methods for different datasets, which again depends on experience. To solve these two challenging problems, in this paper, we present a simple yet effective method for automated spectral clustering. The main idea is to find the most reliable affinity matrix among a set of candidates given by different AMC methods with different hyperparameters, where the reliability is quantified by the \textit{relative-eigen-gap} of the graph Laplacian introduced in this paper. We also implement the method using Bayesian optimization. We extend the method to large-scale datasets such as MNIST, on which the time cost is less than 90s and the clustering accuracy is state-of-the-art. Extensive experiments on natural image clustering show that our method is more versatile, accurate, and efficient than baseline methods.
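The intuition behind an eigen-gap reliability score can be sketched directly. The paper's exact definition of the relative-eigen-gap is not given in the abstract; the version below, (lambda_{k+1} - lambda_k) / lambda_{k+1} on the normalized Laplacian, is one plausible form used purely for illustration, with synthetic block affinity matrices.

```python
import numpy as np

def relative_eigen_gap(A, k):
    """One plausible form of a relative eigen-gap of the normalized Laplacian:
    (lambda_{k+1} - lambda_k) / lambda_{k+1}, with eigenvalues in ascending
    order.  A large value suggests affinity matrix A cleanly supports k
    clusters.  (The paper's exact definition may differ -- this is a sketch.)
    """
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    lam = np.sort(np.linalg.eigvalsh(L))
    return (lam[k] - lam[k - 1]) / lam[k]

# Affinity with two clear clusters vs. one with high inter-cluster similarity.
blocks = np.kron(np.eye(2), np.ones((5, 5)))
A_good = 0.95 * blocks + 0.05 * np.ones((10, 10))
A_poor = 0.10 * blocks + 0.90 * np.ones((10, 10))
print(relative_eigen_gap(A_good, 2), relative_eigen_gap(A_poor, 2))
```

The well-separated affinity scores a much larger gap at k = 2, so a search over candidate AMC outputs can simply keep the candidate maximizing this score.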

To improve the diagnostic accuracy of breast cancer detection, several researchers have used wavelet-based tools, which provide additional insight and information for aiding diagnostic decisions. The accuracy of such diagnoses, however, can be improved. This paper introduces a wavelet-based technique, non-decimated wavelet transform (NDWT)-based scaling estimation, which improves scaling parameter estimation over traditional methods. One distinctive feature of NDWT is that it does not decimate wavelet coefficients at multiscale levels, resulting in redundant outputs that are used to lower the variance of scaling estimators. Another interesting feature of the proposed methodology is freedom from the dyadic constraints on inputs typical of standard wavelet-based approaches. To compare the estimation performance of the NDWT method to a conventional orthogonal wavelet transform-based method, we use simulation to estimate the Hurst exponent in two-dimensional fractional Brownian fields. The results of the simulation show that the proposed method improves on the conventional estimators of scaling and yields estimators with smaller mean-squared errors. We apply the NDWT method to classification of mammograms as cancer or control and, for publicly available mammogram images from the database at the University of South Florida, find the diagnostic accuracy to be in excess of 80%.
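The scaling-estimation idea can be shown in one dimension with a much simpler estimator. The sketch below is not the paper's NDWT method; it uses Haar-like increments, exploiting the scaling law Var[x(t+s) - x(t)] ~ s^{2H}. Like NDWT, it uses all shifts (no decimation) and imposes no dyadic length constraint.

```python
import numpy as np

def hurst_from_increments(x, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent H of a self-similar signal from the
    scaling of increment variances: Var[x(t+s) - x(t)] ~ s^{2H}.
    Regressing log2(variance) on log2(scale) gives slope 2H.
    A simplified, Haar-like stand-in for the NDWT scaling estimator."""
    log_s, log_v = [], []
    for s in scales:
        inc = x[s:] - x[:-s]          # increments at lag s (all shifts kept)
        log_s.append(np.log2(s))
        log_v.append(np.log2(inc.var()))
    slope, _ = np.polyfit(log_s, log_v, 1)
    return slope / 2.0

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(1 << 16))  # Brownian motion: true H = 0.5
print(hurst_from_increments(bm))              # should be close to 0.5
```

For mammogram classification, H (or a related scaling descriptor) estimated from the image acts as a texture feature separating cancer from control cases.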

Computing in-memory (CiM) has emerged as an attractive technique to mitigate the von Neumann bottleneck. Current digital CiM approaches for in-memory operands are based on multi-wordline assertion for computing bit-wise Boolean functions and arithmetic functions such as addition. However, most of these techniques, due to the many-to-one mapping of input vectors to bitline voltages, are limited to CiM of commutative functions, leaving out an important class of computations such as subtraction. In this paper, we propose a CiM approach that solves the mapping problem through an asymmetric wordline biasing scheme, enabling (a) simultaneous single-cycle memory read and CiM of primitive Boolean functions, (b) computation of any Boolean function, and (c) CiM of non-commutative functions such as subtraction and comparison. While the proposed technique is technology-agnostic, we show its utility for ferroelectric transistor (FeFET)-based non-volatile memory. Compared to standard near-memory methods (which require two full memory accesses per operation), we show that our method can achieve full-scale two-operand digital CiM using just one memory access, leading to a 23.2%-72.6% decrease in energy-delay product (EDP).

Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, \emph{Fourier stabilization}, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.

Neural architecture search has attracted wide attention in both academia and industry. To accelerate it, researchers proposed weight-sharing methods, which first train a super-network to reuse computation among different operators; exponentially many sub-networks can then be sampled from it and efficiently evaluated. These methods enjoy great advantages in terms of computational cost, but the sampled sub-networks are not guaranteed to be estimated precisely unless an individual training process is performed. This paper attributes such inaccuracy to the inevitable mismatch between assembled network layers, so that a random error term is added to each estimation. We alleviate this issue by training a graph convolutional network to fit the performance of sampled sub-networks so that the impact of random errors becomes minimal. With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates, which consequently leads to better performance of the final architecture. In addition, our approach also enjoys the flexibility of being used under different hardware constraints, since the graph convolutional network provides an efficient lookup table of the performance of architectures in the entire search space.
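The rank correlation coefficient used to judge such performance predictors is worth making concrete. A minimal sketch of Kendall's tau, the metric commonly used to measure how faithfully estimated accuracies rank sub-networks (the accuracy values below are illustrative, not from the paper):

```python
import numpy as np

def kendall_tau(pred, true):
    """Kendall rank correlation between predicted and true scores:
    (concordant pairs - discordant pairs) / total pairs.  A value of 1
    means the predictor ranks every pair of architectures correctly."""
    n = len(pred)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(pred[i] - pred[j]) * np.sign(true[i] - true[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

true_acc = [0.70, 0.72, 0.74, 0.76, 0.80]        # stand-alone accuracies
noisy = [0.71, 0.70, 0.75, 0.74, 0.81]           # weight-sharing estimates
print(kendall_tau(noisy, true_acc))              # → 0.6
```

Two swapped adjacent pairs out of ten comparisons yield tau = 0.6; the goal of fitting a predictor on top of the super-network estimates is precisely to push this value toward 1.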

Spectral clustering (SC) is a popular clustering technique to find strongly connected communities on a graph. SC can be used in Graph Neural Networks (GNNs) to implement pooling operations that aggregate nodes belonging to the same cluster. However, the eigendecomposition of the Laplacian is expensive and, since clustering results are graph-specific, pooling methods based on SC must perform a new optimization for each new sample. In this paper, we propose a graph clustering approach that addresses these limitations of SC. We formulate a continuous relaxation of the normalized minCUT problem and train a GNN to compute cluster assignments that minimize this objective. Our GNN-based implementation is differentiable, does not require computing the spectral decomposition, and learns a clustering function that can be quickly evaluated on out-of-sample graphs. From the proposed clustering method, we design a graph pooling operator that overcomes some important limitations of state-of-the-art graph pooling techniques and achieves the best performance in several supervised and unsupervised tasks.
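The relaxed objective can be written down compactly. The sketch below evaluates a cut term, -Tr(S^T A S) / Tr(S^T D S), plus an orthogonality term that discourages degenerate assignments; this mirrors the usual relaxed normalized minCUT loss, evaluated here with NumPy on a toy graph rather than trained through a GNN.

```python
import numpy as np

def mincut_losses(A, S):
    """Losses of the relaxed normalized minCUT objective:
    cut   = -Tr(S^T A S) / Tr(S^T D S)   (low when clusters are well connected)
    ortho = || S^T S / ||S^T S||_F - I_K / sqrt(K) ||_F
            (penalizes degenerate, e.g. uniform, assignments)
    S is a soft cluster-assignment matrix with rows summing to 1."""
    D = np.diag(A.sum(axis=1))
    K = S.shape[1]
    cut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    StS = S.T @ S
    ortho = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(K) / np.sqrt(K))
    return cut, ortho

# Toy graph: two 4-node cliques joined by a single edge.
A = np.kron(np.eye(2), np.ones((4, 4)) - np.eye(4))
A[3, 4] = A[4, 3] = 1

S_good = np.kron(np.eye(2), np.ones((4, 1)))  # true hard assignment
S_bad = np.full((8, 2), 0.5)                  # degenerate uniform assignment
print(sum(mincut_losses(A, S_good)), sum(mincut_losses(A, S_bad)))
```

Note that the uniform assignment also drives the cut term to its minimum; only the orthogonality term rules it out, which is why both terms are needed when training a GNN to emit S.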

The area of Data Analytics on graphs promises a paradigm shift as we approach information processing of classes of data, which are typically acquired on irregular but structured domains (social networks, various ad-hoc sensor networks). Yet, despite its long history, current approaches mostly focus on the optimization of graphs themselves, rather than on directly inferring learning strategies, such as detection, estimation, statistical and probabilistic inference, clustering and separation from signals and data acquired on graphs. To fill this void, we first revisit graph topologies from a Data Analytics point of view, and establish a taxonomy of graph networks through a linear algebraic formalism of graph topology (vertices, connections, directivity). This serves as a basis for spectral analysis of graphs, whereby the eigenvalues and eigenvectors of graph Laplacian and adjacency matrices are shown to convey physical meaning related to both graph topology and higher-order graph properties, such as cuts, walks, paths, and neighborhoods. Next, to illustrate estimation strategies performed on graph signals, spectral analysis of graphs is introduced through eigenanalysis of mathematical descriptors of graphs and in a generic way. Finally, a framework for vertex clustering and graph segmentation is established based on graph spectral representation (eigenanalysis) which illustrates the power of graphs in various data association tasks. The supporting examples demonstrate the promise of Graph Data Analytics in modeling structural and functional/semantic inferences. At the same time, Part I serves as a basis for Part II and Part III which deal with theory, methods and applications of processing Data on Graphs and Graph Topology Learning from data.
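The vertex-clustering idea described above reduces, in its simplest form, to spectral bisection: the sign pattern of the Laplacian eigenvector with the second-smallest eigenvalue (the Fiedler vector) splits the graph along its weakest cut. A minimal sketch on an illustrative two-community graph:

```python
import numpy as np

# Two communities of 4 nodes each, joined by a single bridge edge.
A = np.zeros((8, 8))
for i in range(4):
    for j in range(i + 1, 4):
        A[i, j] = A[j, i] = 1                    # clique on nodes 0-3
        A[i + 4, j + 4] = A[j + 4, i + 4] = 1    # clique on nodes 4-7
A[3, 4] = A[4, 3] = 1                            # bridge between communities

L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)             # ascending eigenvalues
fiedler = eigvecs[:, 1]                          # 2nd-smallest eigenvector
labels = (fiedler > 0).astype(int)               # sign pattern = cluster labels
print(labels)
```

The two cliques receive opposite signs, recovering the communities without any labeled data; this is the simplest instance of the graph-spectral clustering framework developed in the text.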
