
The Sixth Generation (6G) of mobile networks is expected to use carrier frequencies in the spectrum above 100 GHz to satisfy the demands of future digital applications for higher data rates and bandwidth. The development of networking solutions at such high frequencies is challenged by the harsh propagation environment and by the need for directional communications and signal processing at high data rates. Accurate performance evaluation is a fundamental step in defining and developing wireless networks above 100 GHz; in simulation, it depends strongly on accurate modeling of the channel and of its interaction with the higher layers of the stack. This paper introduces the implementation of two recently proposed channel models for the 140 GHz band (one based on ray tracing, the other fully stochastic) in the ns-3 TeraSim module, which enables the simulation of macro wireless networks in the sub-terahertz and terahertz spectrum. We also compare the two channel models through full-stack simulations in an indoor scenario, highlighting differences and similarities in how they interact with the protocol stack and antenna model of TeraSim.
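Neither model's internals are reproduced here, but a minimal sketch may help fix ideas: in free space, both a ray-traced and a fully stochastic model should reduce to the Friis path loss at the 140 GHz carrier. The log-normal shadowing term below is an illustrative stand-in for the stochastic component; the 4 dB sigma is our assumption, not a parameter of the paper or of TeraSim.

```python
import numpy as np

C = 3e8     # speed of light [m/s]
FC = 140e9  # carrier frequency [Hz], the band considered in the paper

def fspl_db(d_m: float) -> float:
    """Free-space (Friis) path loss in dB at distance d_m meters."""
    return 20 * np.log10(4 * np.pi * d_m * FC / C)

def stochastic_pathloss_db(d_m, shadowing_sigma_db=4.0, rng=None):
    """Toy stochastic channel: FSPL plus log-normal shadowing.
    The sigma value is an illustrative assumption, not taken from the paper."""
    rng = rng or np.random.default_rng()
    return fspl_db(d_m) + rng.normal(0.0, shadowing_sigma_db)

if __name__ == "__main__":
    for d in (1, 10, 100):
        print(f"d = {d:>3} m  FSPL = {fspl_db(d):6.1f} dB  "
              f"one shadowed draw = {stochastic_pathloss_db(d):6.1f} dB")
```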

Related content


Recent research investigates decode-and-forward (DF) relaying for mixed radio frequency (RF) and terahertz (THz) wireless links with zero-boresight pointing errors. In this letter, we analyze the performance of fixed-gain amplify-and-forward (AF) relaying for the RF-THz link, which interfaces an RF access network with wireless THz transmissions. We derive the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end SNR of the relay-assisted system in terms of the bivariate Fox's H function, considering $\alpha$-$\mu$ fading with non-zero boresight pointing errors for the THz link and the $\alpha$-$\kappa$-$\mu$ shadowed ($\alpha$-KMS) fading model for the RF link. Using the derived PDF and CDF, we present exact analytical expressions for the outage probability, average bit-error rate (BER), and ergodic capacity of the considered system. We also analyze the outage probability and average BER asymptotically for better insight into the system behavior at high SNR. Using simulations, we compare the performance of AF relaying with a semi-blind gain factor against the recently proposed DF relaying for THz-RF transmissions.
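As a sanity-check companion to such an analysis, the end-to-end SNR of a fixed-gain AF relay has the standard form $\gamma_{e2e} = \gamma_1\gamma_2/(\gamma_2 + C)$, with $C$ set by the semi-blind gain. The Monte Carlo sketch below estimates the outage probability from this expression; Rayleigh fading on both hops is our simplification of the $\alpha$-KMS and $\alpha$-$\mu$ models of the letter, and $C$ and the thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_fixed_gain_af(snr1_db, snr2_db, C=1.5, gamma_th_db=5.0, n=1_000_000):
    """Monte Carlo outage probability of fixed-gain AF relaying.

    Standard end-to-end SNR of a fixed-gain AF relay:
        gamma_e2e = gamma1 * gamma2 / (gamma2 + C),
    with C a constant fixed by the semi-blind gain. Rayleigh fading is
    assumed on both hops for simplicity (the letter uses alpha-KMS fading
    on the RF hop and alpha-mu fading with pointing errors on the THz hop).
    """
    g1 = rng.exponential(10 ** (snr1_db / 10), n)  # hop-1 instantaneous SNR
    g2 = rng.exponential(10 ** (snr2_db / 10), n)  # hop-2 instantaneous SNR
    g_e2e = g1 * g2 / (g2 + C)
    return np.mean(g_e2e < 10 ** (gamma_th_db / 10))

for snr in (10, 15, 20):
    print(f"mean SNR {snr} dB per hop -> outage ≈ {outage_fixed_gain_af(snr, snr):.4f}")
```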

Vehicle-to-Everything (V2X) networks are actively developing. Most of the V2X networks deployed so far are based on the IEEE 802.11p standard; however, these networks can provide only basic V2X applications and are unlikely to fulfill the stringent requirements of modern V2X applications. Thus, the IEEE has launched the new IEEE 802.11bd standard. A significant novelty of this standard is channel bonding: IEEE 802.11bd describes two channel bonding techniques that differ from the legacy one used in modern Wi-Fi networks. Our study performs a comparative simulation analysis of the various channel bonding techniques and the single-channel access method of IEEE 802.11p. We compare them under different contention window sizes and demonstrate that the legacy technique provides the best quality of service in terms of frame transmission delay and packet loss ratio. Moreover, we find a quasi-optimal contention window size for the legacy technique.
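The study's simulator is not reproduced here; the toy model below only illustrates the trade-off behind the contention-window sweep: larger windows reduce the collision probability at the price of longer backoff delays. It is a one-shot slotted abstraction of channel access, far simpler than EDCA in IEEE 802.11p/bd.

```python
import numpy as np

rng = np.random.default_rng(1)

def collision_rate(n_stations, cw, trials=100_000):
    """Toy one-shot backoff model: each station draws a slot uniformly in
    [0, cw-1]; the round is a collision if the smallest slot is shared.
    A drastic simplification of DCF/EDCA, meant only to show how the
    contention window size trades delay against collisions."""
    slots = rng.integers(0, cw, size=(trials, n_stations))
    smallest = slots.min(axis=1)
    collided = (slots == smallest[:, None]).sum(axis=1) > 1
    return collided.mean(), smallest.mean()

for cw in (16, 32, 64, 128):
    p, d = collision_rate(n_stations=20, cw=cw)
    print(f"CW={cw:>3}: collision prob ≈ {p:.3f}, mean winning slot (delay) ≈ {d:.1f}")
```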

Repeatedly solving the parameterized optimal mass transport (pOMT) problem is a frequent task in applications such as image registration and adaptive grid generation. It is thus critical to develop a highly efficient reduced solver that is as accurate as the full-order model. In this paper, we propose such a machine-learning-like method for pOMT by adapting a new reduced basis (RB) technique specifically designed for nonlinear equations, the reduced residual reduced over-collocation (R2-ROC) approach, to the parameterized Monge-Ampère equation. It builds on top of a narrow-stencil finite difference method (FDM), a so-called truth solver, which we propose in this paper for the Monge-Ampère equation with the transport boundary condition. Together with the R2-ROC approach, it allows us to handle the strong and unique nonlinearity of the Monge-Ampère equation, achieving online efficiency without resorting to any direct approximation of the nonlinearity. Several challenging numerical tests demonstrate the accuracy and high efficiency of our method for solving the Monge-Ampère equation with various parametric boundary conditions.
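For reference, a standard way to write the OMT problem in Monge-Ampère form (our notation, not necessarily the paper's): transporting a density $f$ on $X$ to a density $g$ on $Y$ through the gradient of a convex potential $u$ gives

```latex
\det\!\left(D^2 u(x)\right) = \frac{f(x)}{g\!\left(\nabla u(x)\right)}, \quad x \in X,
\qquad \nabla u(X) = Y \quad \text{(transport boundary condition)},
```

with the optimal map recovered as $T = \nabla u$; the parameters of the pOMT problem enter through $f$, $g$, or the domains.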

Nonlinear model order reduction has opened the door to parameter optimization and uncertainty quantification in complex physics problems governed by nonlinear equations. In particular, the computational cost of solving these equations can be reduced by means of local reduced-order bases. This article examines the benefits of a physics-informed cluster analysis for the construction of cluster-specific reduced-order bases. We illustrate that the choice of the dissimilarity measure for clustering is fundamental and strongly affects the performance of the local reduced-order bases. It is shown that clustering with an angle-based dissimilarity on simulation data efficiently decreases the intra-cluster Kolmogorov $N$-width. Additionally, an a priori efficiency criterion is introduced to assess the relevance of a ROM-net, a methodology for the reduction of nonlinear physics problems introduced in our previous work [T. Daniel, F. Casenave, N. Akkari, D. Ryckelynck, Model order reduction assisted by deep neural networks (ROM-net), Advanced Modeling and Simulation in Engineering Sciences 7 (16), 2020]. This criterion also provides engineers with a practical method for calibrating a ROM-net's hyperparameters under a constrained computational budget for the training phase. On five different physics problems, our physics-informed clustering strategy significantly outperforms classic strategies for the construction of local reduced-order bases in terms of projection errors.
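To illustrate what an angle-based dissimilarity looks like in practice, the sketch below clusters simulation snapshots with a plain k-medoids on $1 - \cos$ dissimilarities. Both the clustering algorithm and the random data are stand-ins of ours, not the paper's pipeline.

```python
import numpy as np

def angle_dissimilarity(X):
    """Pairwise angle-based dissimilarity 1 - cos(x_i, x_j) between
    simulation snapshots (rows of X)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def k_medoids(D, k, n_iter=50, seed=0):
    """Plain k-medoids on a precomputed dissimilarity matrix: a minimal
    stand-in for the paper's clustering step."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:  # medoid = member minimizing within-cluster cost
                within = D[np.ix_(members, members)].sum(axis=0)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1), medoids

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 1000))  # 200 snapshots of a 1000-dof field
labels, medoids = k_medoids(angle_dissimilarity(snapshots), k=4)
print(np.bincount(labels))
```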

In multiuser communication systems, user scheduling and beamforming design are two fundamental problems that are usually investigated separately in the existing literature. In this work, we focus on the joint optimization of user scheduling and beamforming design with the goal of maximizing the cardinality of the set of scheduled users. This problem is computationally challenging due to the non-convex objective function and the constraints coupling continuous and binary variables. To tackle these difficulties, we first propose an iterative optimization algorithm (IOA) relying on successive convex approximation and uplink-downlink duality theory. Then, motivated by IOA and graph neural networks, a joint user scheduling and power allocation network (JEEPON) is developed to address the investigated problem in an unsupervised manner. The effectiveness of IOA and JEEPON is verified by various numerical results; the latter achieves close performance at lower complexity compared with IOA and a greedy-based algorithm. Remarkably, the proposed JEEPON is also competitive in terms of generalization ability in dynamic wireless network scenarios.
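IOA and JEEPON themselves are not sketched here; the snippet below only illustrates the greedy-baseline idea such methods are compared against, using a zero-forcing power check as the feasibility test. The SNR target, power budget, and Rayleigh channels are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def zf_power(H, gamma=10 ** 0.5, noise=1.0):
    """Total transmit power for every scheduled user (rows of H) to reach
    SNR `gamma` under zero-forcing beamforming: since H @ pinv(H) = I,
    user k needs power gamma * noise * ||w_k||^2."""
    W = np.linalg.pinv(H)  # columns are the ZF beamformers
    return gamma * noise * np.sum(np.abs(W) ** 2)

def greedy_schedule(H_all, p_max):
    """Greedy cardinality maximization: visit users in order of channel
    strength and keep each one whose addition stays power-feasible."""
    n_ant = H_all.shape[1]
    chosen = []
    for u in np.argsort(-np.linalg.norm(H_all, axis=1)):
        trial = chosen + [u]
        if len(trial) <= n_ant and zf_power(H_all[trial]) <= p_max:
            chosen = trial
    return chosen

# 12 single-antenna users, 8 base-station antennas, Rayleigh channels
H = (rng.normal(size=(12, 8)) + 1j * rng.normal(size=(12, 8))) / np.sqrt(2)
print("scheduled users:", greedy_schedule(H, p_max=20.0))
```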

Due to the redundant nature of DNA synthesis and sequencing technologies, a basic model for a DNA storage system is a multi-draw "shuffling-sampling" channel. In this model, a random number of noisy copies of each sequence is observed at the channel output. Recent works have characterized the capacity of such a DNA storage channel under different noise and sequencing models, relying on sophisticated typicality-based approaches for the achievability. Here, we consider a multi-draw DNA storage channel in which each copy is corrupted by a binary erasure channel. We show that, in this setting, the capacity is achieved by linear coding schemes. This leads to a considerably simpler derivation of the capacity expression of a multi-draw DNA storage channel than existing results in the literature.
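The symbol-level intuition is simple to check numerically: a symbol is recoverable if any of its $N$ draws leaves it unerased, so the multi-draw BEC collapses to a single BEC with erasure probability $\mathbb{E}[p^N]$ (for $N \sim \mathrm{Poisson}(\lambda)$ this equals $e^{-\lambda(1-p)}$), whose capacity is achieved by linear codes. The sketch below verifies the reduction; the Poisson draw distribution is our assumption, and the shuffling/indexing side of the full DNA channel is deliberately ignored.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_erasure(p, n=1_000_000, mean_draws=3.0):
    """Each symbol is observed in N ~ Poisson(mean_draws) copies, each
    passed through a BEC(p); it stays erased only if all N copies erase
    it, which happens with probability p**N (N = 0 means never observed).
    Symbol-level view only: shuffling/indexing is ignored."""
    N = rng.poisson(mean_draws, n)
    erased = rng.random(n) < p ** N  # Bernoulli(p^N) per symbol
    return erased.mean()

p = 0.3
p_eff = effective_erasure(p)
print(f"empirical E[p^N] = {p_eff:.4f}  (analytic: {np.exp(-3.0 * (1 - p)):.4f})")
print(f"per-symbol capacity of the collapsed BEC: 1 - E[p^N] = {1 - p_eff:.4f}")
```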

Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with $\ell_\infty$ perturbations limited to $\epsilon = 8/255$. With the best-performing models currently scoring around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that generalizes to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with $\ell_\infty$, $\epsilon = 8/255$ is unrealistically strong, resulting in close-to-perfect detection rates of adversarial samples even by simple detection algorithms and human observers; we also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution data sets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
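To make argument I concrete, here is the kind of trivial detector the claim refers to: a single high-frequency statistic that separates smooth images from images carrying a dense $\pm 8/255$ perturbation. The data below are synthetic stand-ins (blocky noise for clean images, i.i.d. bounded noise for the attack), not CIFAR10 and not real AutoAttack outputs.

```python
import numpy as np

rng = np.random.default_rng(4)

def high_freq_energy(imgs):
    """Mean absolute first differences per image: a crude high-frequency
    statistic that even trivial detectors can exploit when an l-inf
    perturbation as large as 8/255 is spread over every pixel."""
    dx = np.abs(np.diff(imgs, axis=-1)).mean(axis=(-2, -1))
    dy = np.abs(np.diff(imgs, axis=-2)).mean(axis=(-2, -1))
    return dx + dy

# "Clean" images: smooth 8x8 noise upsampled to 32x32.
# "Adversarial" images: clean plus i.i.d. noise bounded by eps = 8/255.
clean = np.repeat(np.repeat(rng.random((512, 8, 8)), 4, axis=1), 4, axis=2)
adv = np.clip(clean + rng.uniform(-8 / 255, 8 / 255, clean.shape), 0, 1)

s_clean, s_adv = high_freq_energy(clean), high_freq_energy(adv)
thresh = (s_clean.mean() + s_adv.mean()) / 2  # single-threshold detector
acc = 0.5 * ((s_clean < thresh).mean() + (s_adv >= thresh).mean())
print(f"one-feature detector accuracy on synthetic data: {acc:.3f}")
```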

Unmanned Aerial Vehicles (UAVs) are increasingly being used in surveillance and traffic monitoring thanks to their high mobility and ability to cover areas at different altitudes and locations. One of the major challenges is to use aerial images to accurately detect and count cars in real time for traffic-monitoring purposes. Several deep learning techniques based on convolutional neural networks (CNNs) have recently been proposed for real-time classification and recognition in computer vision, but their performance depends on the scenarios in which they are used. In this paper, we investigate the performance of two state-of-the-art CNN algorithms, namely Faster R-CNN and YOLOv3, in the context of car detection from aerial images. We trained and tested these two models on a large dataset of car images taken from UAVs. We demonstrate that YOLOv3 outperforms Faster R-CNN in sensitivity and processing time, although the two are comparable in precision.
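For reference, the sensitivity (recall) and precision reported in such studies come from matching predicted boxes to ground-truth cars at an IoU threshold. The sketch below shows the standard greedy matching; the boxes and the 0.5 threshold are illustrative, not taken from the paper.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of detections (assumed sorted by
    confidence) to ground-truth cars; recall is the paper's 'sensitivity'."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 1.0
    recall = tp / len(gts) if gts else 1.0
    return precision, recall

gts = [(10, 10, 50, 50), (60, 60, 100, 100)]
preds = [(12, 8, 52, 48), (200, 200, 240, 240)]
print(precision_recall(preds, gts))  # -> (0.5, 0.5)
```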

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and of specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of this problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and still yields a valid bound on the maximum violation. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
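The anytime property follows from weak duality. In our notation (not necessarily the paper's exact formulation), with the violation $v(x)$ maximized subject to input and network constraints $g(x) \le 0$:

```latex
p^{*} = \max_{x \,:\, g(x) \le 0} v(x)
\;\le\;
d(\lambda) = \max_{x} \left[ v(x) - \lambda^{\top} g(x) \right]
\qquad \text{for every } \lambda \ge 0,
```

so every dual iterate $\lambda$ already yields a valid upper bound $d(\lambda)$ on the worst-case violation, and any $\lambda$ with $d(\lambda) \le 0$ certifies the specification.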

Using the 6,638 case descriptions of societal impact submitted for evaluation in the Research Excellence Framework (REF 2014), we replicate the topic model (Latent Dirichlet Allocation, LDA) built in this context and compare the results with factor-analytic results using a traditional word-document matrix (Principal Component Analysis, PCA). Removing a small fraction of documents from the sample, for example, has on average a much larger impact on LDA than on PCA-based models, to the extent that the largest distortion in the case of PCA has less effect than the smallest distortion of LDA-based models. In terms of semantic coherence, however, LDA models outperform PCA-based models. The topic models inform us about the statistical properties of the document sets under study, but the results are statistical and should not be given a semantic interpretation (for example, in grant selection, micro-decision making, or scholarly work) without follow-up using domain-specific semantic maps.
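The removal experiment is easy to replicate in outline: fit each model on the full word-document matrix and on a subsample, then compare the recovered components. The sketch below uses synthetic counts with planted topics (not the REF 2014 corpus) and scikit-learn's TruncatedSVD as an uncentered-PCA stand-in; the stability score is a crude best-match cosine, not the paper's distortion measure.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD

rng = np.random.default_rng(5)

# Synthetic stand-in for the word-document count matrix: 1,000 documents,
# 500 terms, generated from 5 planted topics (illustration only).
k, n_docs, n_terms = 5, 1000, 500
topics = rng.dirichlet(np.full(n_terms, 0.1), size=k)
mix = rng.dirichlet(np.full(k, 0.5), size=n_docs)
X = np.array([rng.multinomial(120, m @ topics) for m in mix])

def components(X, model):
    return model.fit(X).components_

def stability(A, B):
    """Best-match cosine similarity between two component sets: a crude
    proxy for the document-removal robustness studied in the paper."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return np.abs(An @ Bn.T).max(axis=1).mean()

keep = rng.random(n_docs) > 0.05  # drop roughly 5% of the documents
for name, make in [("LDA", lambda: LatentDirichletAllocation(n_components=k, random_state=0)),
                   ("PCA/SVD", lambda: TruncatedSVD(n_components=k, random_state=0))]:
    s = stability(components(X, make()), components(X[keep], make()))
    print(f"{name}: component stability after 5% document removal = {s:.3f}")
```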
