
Integrated access and backhaul (IAB) facilitates cost-effective deployment of millimeter-wave (mmWave) cellular networks through multi-hop self-backhauling. Full-duplex (FD) technology, particularly for mmWave systems, is a potential means to overcome the latency and throughput challenges faced by IAB networks. We derive practical and tractable throughput and latency constraints using queueing theory and formulate a network utility maximization problem to evaluate both FD-IAB and half-duplex (HD) IAB networks. We use this to characterize the network-level improvements seen when upgrading from conventional HD IAB nodes to FD ones by deriving closed-form expressions for (i) the latency gain of FD-IAB over HD-IAB and (ii) the maximum number of hops that an HD- and FD-IAB network can support while satisfying latency and throughput targets. Extensive simulations illustrate that FD-IAB can facilitate reduced latency, higher throughput, deeper networks, and fairer service. Compared to HD-IAB, FD-IAB can improve throughput by 8x and reduce latency by 4x for a fourth-hop user. In fact, upgrading IAB nodes with FD capability can allow the network to support latency and throughput targets that its HD counterpart fundamentally cannot meet. The gains are more profound for users further from the donor and can be achieved even when residual self-interference is significantly above the noise floor.
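To make the queueing intuition concrete, here is a minimal sketch (not the paper's model) that treats each hop as an independent M/M/1 queue, halves the effective service rate at half-duplex nodes, and applies an assumed residual self-interference penalty at full-duplex nodes; all rates are illustrative.

```python
import numpy as np

def end_to_end_latency(arrival_rate, service_rate, n_hops, duplex="FD",
                       si_penalty=0.9):
    """Toy multi-hop latency model: each hop is an independent M/M/1 queue.
    HD nodes split airtime between receiving and transmitting (rate halved);
    FD nodes keep the full rate scaled by an assumed residual
    self-interference penalty. All parameters are illustrative."""
    mu = service_rate / 2.0 if duplex == "HD" else service_rate * si_penalty
    if arrival_rate >= mu:
        return np.inf                               # queue is unstable
    per_hop_delay = 1.0 / (mu - arrival_rate)       # M/M/1 mean sojourn time
    return n_hops * per_hop_delay

lam, mu = 30.0, 100.0                               # packets/s offered and served
for hops in range(1, 5):
    hd = end_to_end_latency(lam, mu, hops, "HD")
    fd = end_to_end_latency(lam, mu, hops, "FD")
    print(f"{hops} hops: HD {hd*1e3:.1f} ms, FD {fd*1e3:.1f} ms, gain {hd/fd:.2f}x")
```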

Related content

Reconfigurable intelligent surfaces (RISs) have emerged as a prospective technology for next-generation wireless networks due to their potential for coverage and capacity enhancement. The analysis and optimization of ergodic capacity for RIS-assisted communication systems have been investigated extensively. However, the Rayleigh or Rician channel model is usually utilized in the existing work, which is not suitable for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. Thus, we fill the gap and consider the ergodic capacity of RIS-assisted mmWave MIMO communication systems under the Saleh-Valenzuela channel model. Firstly, we derive tight approximations of the ergodic capacity and a tight upper bound in the high signal-to-noise-ratio regime. Then, we aim to maximize the ergodic capacity by jointly designing the transmit covariance matrix at the base station and the reflection coefficients at the RIS. Specifically, the transmit covariance matrix is optimized by the water-filling algorithm and the reflection coefficients are optimized using the Riemannian conjugate gradient algorithm. Simulation results validate the tightness of the derived ergodic capacity approximations and the effectiveness of the proposed algorithms.
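The transmit-covariance step relies on standard water-filling over the eigen-channels of the effective channel. Below is a generic water-filling sketch; the eigenvalues and power budget are hypothetical and not tied to the paper's setup.

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling over parallel eigen-channels with power gains
    `gains` (channel eigenvalues over noise). Returns per-channel powers."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]
    g = gains[order]                                  # sorted, strongest first
    powers = np.zeros_like(g)
    for k in range(len(g), 0, -1):
        level = (total_power + np.sum(1.0 / g[:k])) / k   # candidate water level
        p = level - 1.0 / g[:k]
        if p[-1] >= 0:                                # all k channels stay active
            powers[:k] = p
            break
    out = np.zeros_like(powers)
    out[order] = powers                               # undo the sorting
    return out

# toy use: eigenvalues of an assumed effective RIS-assisted mmWave MIMO channel
eig = np.array([4.0, 1.5, 0.3, 0.05])
p = water_filling(eig, total_power=2.0)
capacity = np.sum(np.log2(1.0 + eig * p))
print("powers:", p, "capacity (bits/s/Hz):", capacity)
```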

Physics-informed neural networks (PINNs) have been proposed to learn the solutions of partial differential equations (PDEs). In PINNs, the residual form of the PDE of interest and its boundary conditions are lumped into a composite objective function as soft penalties. Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach when applied to different kinds of PDEs. To address these limitations, we propose a versatile framework based on a constrained optimization problem formulation, where we use the augmented Lagrangian method (ALM) to constrain the solution of a PDE with its boundary conditions and any high-fidelity data that may be available. Our approach is adept at forward and inverse problems with multi-fidelity data fusion. We demonstrate the efficacy and versatility of our physics- and equality-constrained deep-learning framework by applying it to several forward and inverse problems involving multi-dimensional PDEs. Our framework achieves orders-of-magnitude improvements in accuracy compared with state-of-the-art physics-informed neural networks.
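A minimal sketch of the idea on a toy 1-D Poisson problem, with the boundary conditions enforced as ALM equality constraints rather than fixed soft penalties; the network size, penalty schedule, and collocation counts are illustrative assumptions, not the paper's.

```python
import math
import torch

# Toy 1-D problem: u''(x) = -pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0
# (exact solution u = sin(pi x)).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

x_r = torch.rand(128, 1, requires_grad=True)   # interior collocation points
x_b = torch.tensor([[0.0], [1.0]])             # boundary points
lam = torch.zeros(2)                           # Lagrange multipliers for the BCs
mu = 1.0                                       # quadratic penalty weight
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for outer in range(5):                         # ALM outer loop
    for _ in range(500):                       # inner (unconstrained) solve
        u = net(x_r)
        du = torch.autograd.grad(u.sum(), x_r, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x_r, create_graph=True)[0]
        residual = d2u + math.pi**2 * torch.sin(math.pi * x_r)
        c = net(x_b).squeeze(1)                # equality constraints u(0), u(1)
        loss = residual.pow(2).mean() + (lam * c).sum() + 0.5 * mu * c.pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        c = net(x_b).squeeze(1)
        lam = lam + mu * c                     # multiplier (dual) update
        mu *= 2.0                              # optionally grow the penalty

print("boundary violation:", c.abs().max().item())
```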

This paper investigates the throughput performance of relay-assisted mmWave backhaul networks. The maximum traffic demand of the small-cell base stations (BSs) and the maximum throughput at the macro-cell BS are found for a tree-style backhaul network through linear programming under different network settings, which concern both the number of radio chains available at the BSs and the interference relationships between logical links in the backhaul network. A novel interference model for relay-assisted mmWave backhaul networks in dense urban environments is proposed, which demonstrates the limited interference footprint of mmWave directional communications. Moreover, a scheduling algorithm is developed to find the optimal schedule for tree-style mmWave backhaul networks. Extensive numerical analysis and simulations are conducted to evaluate the network throughput performance and validate the scheduling algorithm.
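As a toy illustration of a linear-programming formulation of this kind, the sketch below maximizes the total delivered rate in a three-small-cell tree under illustrative link capacities and airtime-sharing (radio-chain) constraints; the topology and numbers are assumptions for demonstration only, not the paper's instances.

```python
from scipy.optimize import linprog

# Toy tree-style backhaul: donor D -> {A, B}, relay A -> C. Decision variables
# are the rates [f_A, f_B, f_C] (Mb/s) delivered to small cells A, B, C.
C_DA, C_DB, C_AC = 400.0, 300.0, 200.0            # illustrative link capacities

A_ub = [
    # donor's single radio chain time-shares links D->A (carrying f_A + f_C) and D->B
    [1 / C_DA, 1 / C_DB, 1 / C_DA],
    # half-duplex relay A must receive (f_A + f_C) and then forward f_C to C
    [1 / C_DA, 0.0, 1 / C_DA + 1 / C_AC],
]
b_ub = [1.0, 1.0]                                  # airtime fractions sum to <= 1

res = linprog(c=[-1, -1, -1], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("max total throughput:", -res.fun, "Mb/s, rates:", res.x)
```

Note that a plain max-sum objective like this tends to starve the deepest small cell, which is why fairness-aware objectives are often considered alongside it.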

Deep learning techniques have recently shown promise in the field of anomaly detection, providing a flexible and effective method of modelling systems in comparison to traditional statistical modelling and signal-processing-based methods. However, there are a few well-publicised issues that neural networks (NNs) face, such as limited generalisation ability, the need for large volumes of labelled data to train effectively, and difficulty understanding spatial context in data. This paper introduces a novel NN architecture which hybridises long short-term memory (LSTM) and capsule networks into a single network with a branched-input autoencoder architecture for use on multivariate time-series data. The proposed method uses an unsupervised learning technique to overcome the need for large volumes of labelled training data. Experimental results show that, without hyperparameter optimisation, using capsules significantly reduces overfitting and improves training efficiency. Additionally, the results show that the branched-input models learn multivariate data more consistently than the non-branched-input models, with or without capsules. The proposed model architecture was also tested on an open-source benchmark, where it achieved state-of-the-art performance in outlier detection and overall performs best across the metrics tested in comparison to current state-of-the-art methods.
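A minimal branched-input LSTM autoencoder sketch in PyTorch conveys the branched-encoder idea; the capsule branch of the paper is omitted here for brevity, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class BranchedLSTMAutoencoder(nn.Module):
    """Toy branched-input LSTM autoencoder for multivariate time series.
    Each variable is encoded by its own LSTM branch; the capsule branch is
    deliberately omitted in this simplified sketch."""
    def __init__(self, n_vars, hidden=32, latent=16):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
             for _ in range(n_vars)])
        self.to_latent = nn.Linear(n_vars * hidden, latent)
        self.decoder = nn.LSTM(input_size=latent, hidden_size=hidden,
                               batch_first=True)
        self.out = nn.Linear(hidden, n_vars)

    def forward(self, x):                       # x: (batch, time, n_vars)
        T = x.size(1)
        codes = []
        for i, branch in enumerate(self.branches):
            _, (h, _) = branch(x[:, :, i:i + 1])  # per-variable branch
            codes.append(h[-1])                   # (batch, hidden)
        z = self.to_latent(torch.cat(codes, dim=-1))   # joint latent code
        dec, _ = self.decoder(z.unsqueeze(1).repeat(1, T, 1))
        return self.out(dec)                      # reconstruction

model = BranchedLSTMAutoencoder(n_vars=4)
x = torch.randn(8, 50, 4)
recon = model(x)
score = ((recon - x) ** 2).mean(dim=(1, 2))       # higher score = more anomalous
print(score)
```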

In this paper, we present an end-to-end attention-based convolutional recurrent autoencoder network (AB-CRAN) for data-driven modeling of wave propagation phenomena. To construct the low-dimensional learning model, we employ a denoising-based convolutional autoencoder trained on full-order snapshots of wave propagation generated by solving hyperbolic partial differential equations. The proposed deep neural network architecture relies on an attention-based recurrent neural network (RNN) with long short-term memory (LSTM) cells for constructing the trajectory in the latent space. We assess the proposed AB-CRAN framework against the standard RNN-LSTM for low-dimensional learning of wave propagation. To demonstrate the effectiveness of the AB-CRAN model, we consider three benchmark problems, namely one-dimensional linear convection, the nonlinear viscous Burgers equation, and the two-dimensional Saint-Venant shallow water system. Using time-series datasets from the benchmark problems, our AB-CRAN architecture accurately captures the wave amplitude and preserves the wave characteristics of the solution over long time horizons. The attention-based sequence-to-sequence network increases the prediction time horizon by a factor of five compared to the standard RNN-LSTM. The denoising autoencoder further reduces the mean squared error of prediction and improves the generalization capability in the parameter space.
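A compact sketch of the two-stage idea, pairing a 1-D convolutional autoencoder with an LSTM that propagates the latent trajectory; the attention-based sequence-to-sequence decoder and the denoising training are omitted, and all dimensions are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """1-D convolutional autoencoder compressing a wave snapshot of length 128."""
    def __init__(self, latent=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 32, latent))
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 32), nn.Unflatten(1, (16, 32)),
            nn.ConvTranspose1d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1))

    def forward(self, u):                    # u: (batch, 1, 128)
        z = self.enc(u)
        return self.dec(z), z

class LatentLSTM(nn.Module):
    """Propagates the latent trajectory; stands in for the attention-based
    sequence-to-sequence decoder of the paper."""
    def __init__(self, latent=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent)

    def forward(self, z_seq):                # (batch, time, latent) -> next latents
        h, _ = self.lstm(z_seq)
        return self.head(h)

ae, rnn = ConvAE(), LatentLSTM()
snapshots = torch.randn(4, 10, 1, 128)       # (batch, time, channel, grid)
recon, z = ae(snapshots.flatten(0, 1))
z_seq = z.view(4, 10, -1)
z_next = rnn(z_seq[:, :-1])                  # predict latents at t+1
loss = nn.functional.mse_loss(z_next, z_seq[:, 1:]) + \
       nn.functional.mse_loss(recon, snapshots.flatten(0, 1))
print(loss)
```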

The system-level performance of multi-gateway downlink long-range (LoRa) networks is investigated in this paper. Specifically, we first compute the active probability of a channel and the selection probability of an active end-device (ED) in closed form. We then derive the coverage probability (Pcov) and the area spectral efficiency (ASE) under the impact of capture effects and different spreading-factor (SF) allocation schemes. Our findings show that both the Pcov and the ASE of the considered networks can be enhanced significantly by increasing both the duty cycle and the transmit power. Finally, Monte Carlo simulations are provided to verify the accuracy of the proposed mathematical frameworks.
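A toy Monte Carlo estimate of a per-SF coverage probability under Rayleigh fading with a simple capture condition; the path-loss exponent, SNR/SIR thresholds, distances, and duty cycle are all illustrative assumptions rather than the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A device is "covered" if its SNR exceeds the SF-dependent threshold AND, when
# another gateway transmits on the same channel (prob. = duty cycle), the SIR
# exceeds the capture threshold.
N = 200_000
d, d_i = 2_000.0, 3_000.0                 # m, serving / interfering gateway distance
eta = 3.0                                 # path-loss exponent
p_tx_dbm, noise_dbm = 14.0, -110.0
snr_th_db = {7: -7.5, 9: -12.5, 12: -20.0}   # assumed per-SF SNR thresholds
sir_th_db = 6.0                           # capture threshold
duty_cycle = 0.01

h_s = rng.exponential(1.0, N)             # Rayleigh fading power, serving link
h_i = rng.exponential(1.0, N)             # Rayleigh fading power, interfering link
p_rx = 10 ** (p_tx_dbm / 10) * h_s * d ** (-eta)
p_int = 10 ** (p_tx_dbm / 10) * h_i * d_i ** (-eta)
noise = 10 ** (noise_dbm / 10)
active = rng.random(N) < duty_cycle       # interferer on this channel?

for sf, th in snr_th_db.items():
    snr_ok = p_rx / noise > 10 ** (th / 10)
    sir_ok = ~active | (p_rx / p_int > 10 ** (sir_th_db / 10))
    print(f"SF{sf}: Pcov ~ {np.mean(snr_ok & sir_ok):.3f}")
```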

Radar-based materials detection has received significant attention in recent years for its potential inclusion in consumer and industrial applications such as object recognition for grasping and manufacturing quality assurance and control. Several radar-based solutions have been developed for material classification under controlled settings with specific material properties and shapes. Recent literature has challenged the earlier findings on radar-based material classification, claiming that earlier solutions are not easily scaled to industrial applications due to a variety of real-world issues. Published experiments on the impact of these factors on the robustness of extracted traditional radar-based features have already demonstrated that deep neural networks can mitigate, to some extent, the impact of these factors and produce a viable solution. However, previous studies lacked an investigation of the usefulness of lower-frequency radar units, specifically below 10 GHz, against higher-frequency units around and above 60 GHz. This research considers two radar units with different frequency ranges: the Walabot-3D (6.3-8 GHz) cm-wave and IMAGEVK-74 (62-69 GHz) mm-wave imaging units by Vayyar Imaging. A comparison is presented on the applicability of each unit for material classification. This work extends upon previous efforts by applying the deep wavelet scattering transform for the identification of different materials based on the reflected signals. In the wavelet scattering feature extractor, data is propagated through a series of wavelet transforms, nonlinearities, and averaging to produce low-variance representations of the reflected radar signals. This work is unique in its comparison of the radar units and algorithms for material classification, and includes real-time demonstrations that show strong performance by both units, with increased robustness offered by the cm-wave radar unit.
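The scattering pipeline of wavelet filtering, a modulus nonlinearity, and local averaging can be sketched in a few lines. The single-order, hand-rolled version below, with an assumed Morlet filter bank and a synthetic chirp standing in for a radar return, is only a simplified stand-in for the deep wavelet scattering transform used in the paper.

```python
import numpy as np

def morlet(n, scale, w=6.0):
    """Complex Morlet wavelet sampled on n points at the given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w * t) * np.exp(-0.5 * t ** 2) / np.sqrt(scale)

def scattering_features(sig, scales=2.0 ** np.arange(1, 7), pool=64):
    """First-order wavelet-scattering sketch: complex wavelet filtering,
    modulus nonlinearity, then local time averaging."""
    n = len(sig)
    feats = []
    for s in scales:
        coeff = np.convolve(sig, morlet(n, s), mode="same")   # wavelet filtering
        modulus = np.abs(coeff)                               # nonlinearity
        m = (n // pool) * pool
        feats.append(modulus[:m].reshape(-1, pool).mean(axis=1))  # averaging
    return np.concatenate(feats)                              # low-variance features

# toy "reflected radar signal": a chirp plus noise
t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * (50 + 200 * t) * t) + 0.1 * np.random.randn(t.size)
print(scattering_features(sig).shape)      # 6 scales x 16 pooled bins = (96,)
```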

It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing. Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem. However, there is little explanation of why they work empirically from the viewpoint of learning theory. In this study, we derive optimization and generalization guarantees for transductive learning algorithms that include multi-scale GNNs. Using boosting theory, we prove the convergence of the training error under weak-learning-type conditions. By combining this with generalization gap bounds in terms of transductive Rademacher complexity, we show that a specific type of multi-scale GNN admits a test error bound that decreases with depth under these conditions. Our results offer theoretical explanations for the effectiveness of the multi-scale structure against the over-smoothing problem. We apply boosting algorithms to the training of multi-scale GNNs for real-world node prediction tasks. We confirm that their performance is comparable to existing GNNs, and that the practical behaviors are consistent with the theoretical observations. Code is available at //github.com/delta2323/GB-GNN
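A hand-rolled sketch of the multi-scale-GNN-as-boosting viewpoint on a synthetic transductive node-classification task: each boosting round fits a ridge weak learner on one propagation scale A_hat^k X to the current residual. The graph, labels, and hyperparameters are all illustrative; this is not the repository's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K, rounds, lr = 200, 16, 4, 20, 0.3

A = (rng.random((n, n)) < 0.05).astype(float)       # random undirected graph
A = np.maximum(A, A.T); np.fill_diagonal(A, 1.0)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt                 # normalized adjacency
X = rng.standard_normal((n, d))
y = (A_hat @ X)[:, 0] > 0                           # synthetic binary labels
Y = np.stack([~y, y], axis=1).astype(float)         # one-hot targets

scales = [X]
for _ in range(K):
    scales.append(A_hat @ scales[-1])               # multi-scale features A_hat^k X

train = rng.random(n) < 0.3                         # transductive split
F = np.zeros_like(Y)                                # ensemble prediction
for _ in range(rounds):
    R = Y - F                                       # residual to fit this round
    best = None
    for Z in scales:                                # pick the best scale
        Zt = Z[train]
        W = np.linalg.solve(Zt.T @ Zt + 1e-2 * np.eye(d), Zt.T @ R[train])
        err = np.mean((Zt @ W - R[train]) ** 2)
        if best is None or err < best[0]:
            best = (err, Z @ W)
    F += lr * best[1]                               # boosting update

print("test accuracy:", np.mean((F.argmax(1) == Y.argmax(1))[~train]))
```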

In this study, we investigate the limits of the current state-of-the-art AI system for detecting buffer overflows and compare it with current static analysis tools. To do so, we developed a code generator, s-bAbI, capable of producing an arbitrarily large number of code samples of controlled complexity. We found that the static analysis engines we examined have good precision but poor recall on this dataset, except for a sound static analyzer that has both good precision and good recall. We found that the state-of-the-art AI system, a memory network modeled after Choi et al. [1], can achieve performance similar to the static analysis engines, but requires an exhaustive amount of training data to do so. Our work points towards future approaches that may solve these problems, namely, using representations of code that can capture appropriate scope information and using deep learning methods that are able to perform arithmetic operations.

We introduce a new neural architecture to learn the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence models and Neural Turing Machines, because the number of target classes at each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences and various combinatorial optimization problems belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention attempts in that, instead of using attention to blend the hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show that Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
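The core mechanism is a single attention step whose normalized scores are themselves the output distribution over input positions; a minimal PyTorch sketch of one such decoding step (with hypothetical dimensions) is shown below.

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """One decoding step of pointer-style attention: instead of blending
    encoder states into a context vector, the attention weights themselves
    form the output distribution over input positions."""
    def __init__(self, dim):
        super().__init__()
        self.W_enc = nn.Linear(dim, dim, bias=False)
        self.W_dec = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, enc, dec):              # enc: (B, L, dim), dec: (B, dim)
        scores = self.v(torch.tanh(self.W_enc(enc) + self.W_dec(dec).unsqueeze(1)))
        return scores.squeeze(-1).softmax(dim=-1)   # (B, L): distribution over inputs

enc = torch.randn(2, 7, 32)                   # encoder states for a 7-element input
dec = torch.randn(2, 32)                      # current decoder state
probs = PointerAttention(32)(enc, dec)
print(probs.argmax(-1))                       # index of the input element "pointed to"
```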
