
Queue length violation probability, i.e., the tail distribution of the queue length, is a widely used statistical quality-of-service (QoS) metric in wireless communications. Characterizing and optimizing this probability is of great significance for time-sensitive networking (TSN) and ultra-reliable and low-latency communications (URLLC), yet it remains an open problem. In this paper, we analyze the tail distribution of the queue length from the perspective of cross-layer design for wireless link transmission. We find that, under a finite average power consumption constraint, the queue length violation probability can be driven to zero when diversity gain is available, whereas, with limited receiver sensitivity, it exhibits a linear-decay-rate exponent consistent with large deviation theory (LDT). We further find that, in the Rayleigh fading channel, a queue length tail distribution with an arbitrary decay rate exists under finite average power consumption. We then establish sufficient conditions under which a communication system falls into each of these three scenarios. Moreover, we apply these results to analyze wireless link transmission over the Nakagami-m fading channel. Numerical results, together with approximations, validate our analysis.
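
For intuition, the linear-decay-rate regime admits a standard LDT statement; the stationary queue length $Q$ and decay exponent $\theta$ below are generic symbols, not notation taken from the paper:

```latex
% Schematic LDT statement of the linear-decay-rate regime: the tail of
% the stationary queue length Q decays exponentially with exponent \theta.
\lim_{q \to \infty} -\frac{1}{q} \log \Pr\{Q > q\} = \theta, \qquad \theta > 0.
```

Loosely speaking, the other two regimes sit at the extremes of this picture: a violation probability that is exactly zero, and an exponent that can be made arbitrarily large.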

Related Content

This paper addresses the problem of determining all optimal integer solutions of a linear integer network flow problem, which we call the all optimal integer flow (AOF) problem. We derive an O(F(m + n) + mn + M) time algorithm to determine all F optimal integer flows in a directed network with n nodes and m arcs, where M is the best time needed to find one minimum cost flow. We remark that stopping Hamacher's well-known method for the determination of the K best integer flows at the first sub-optimal flow yields an algorithm with a running time of O(Fm(n log n + m) + M) for the AOF problem. Our improvement is essentially made possible by replacing the shortest path sub-problem with a more efficient way to determine a so-called proper zero-cost cycle, using a modified depth-first search technique. As a byproduct, our analysis yields an enhanced algorithm to determine the K best integer flows that runs in O(Kn^3 + M). In addition, we give lower and upper bounds for the number of all optimal integer and feasible integer solutions. Our bounds are based on the fact that any optimal solution can be obtained from an initial optimal tree solution plus a conical combination of incidence vectors of all induced cycles with bounded coefficients.
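
The core subroutine can be pictured as cycle detection restricted to zero-reduced-cost residual arcs. The sketch below is illustrative only (the paper's notion of a proper zero-cost cycle and its modified DFS are more refined); `zero_arcs` is a hypothetical adjacency structure over such arcs:

```python
# Illustrative sketch: find a directed cycle in the subgraph of
# zero-reduced-cost residual arcs with an iterative DFS. Not the
# paper's exact "proper zero cost cycle" routine.
def find_zero_cost_cycle(nodes, zero_arcs):
    """Return a list of vertices forming a directed cycle, or None.

    zero_arcs: dict u -> list of v, one entry per zero-reduced-cost
    residual arc (u, v).
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in nodes}
    parent = {}
    for start in nodes:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(zero_arcs.get(start, ())))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if color[v] == WHITE:          # tree arc: descend
                    color[v] = GRAY
                    parent[v] = u
                    stack.append((v, iter(zero_arcs.get(v, ()))))
                    break
                if color[v] == GRAY:           # back arc closes a cycle
                    cycle, w = [u], u
                    while w != v:
                        w = parent[w]
                        cycle.append(w)
                    cycle.reverse()            # order: v -> ... -> u (-> v)
                    return cycle
            else:                              # u exhausted: retreat
                color[u] = BLACK
                stack.pop()
    return None
```

Pushing flow around any such cycle in the residual network transforms one optimal flow into another, which is what makes enumerating all F optimal flows possible.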

This study introduces a novel computational framework for Robust Topology Optimization (RTO) considering imprecise random field parameters. Unlike the worst-case approach, the present method provides upper and lower bounds for the mean and standard deviation of compliance, as well as the optimized topological layouts of a structure, for various scenarios. In the proposed approach, the imprecise random field variables are determined using parameterized p-boxes with different confidence intervals. The Karhunen-Loève (K-L) expansion is extended to provide a spectral description of the imprecise random field. The linear superposition method, in conjunction with a linear combination of orthogonal functions, is employed to obtain explicit mathematical expressions for the first- and second-order statistical moments of the structural compliance. Then, an interval sensitivity analysis is carried out, applying the Orthogonal Similarity Transformation (OST) method, with the boundaries of each intermediate variable searched efficiently at every iteration using a Combinatorial Approach (CA). Finally, the validity, accuracy, and applicability of the work are rigorously checked by comparing the outputs of the proposed approach with those obtained using the particle swarm optimization (PSO) and Quasi-Monte-Carlo Simulation (QMCS) methods. Three numerical examples with imprecise random field loads are presented to show the effectiveness and feasibility of the study.
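
For reference, the truncated K-L expansion underlying such spectral descriptions has the standard form below; $\mu$, $\lambda_i$, $\phi_i$, and $\xi_i$ are generic symbols rather than the paper's notation (in the imprecise setting, quantities such as the mean become interval-valued via the p-boxes):

```latex
% Truncated Karhunen-Loeve expansion of a random field H(x, \theta):
% \mu(x) is the mean, (\lambda_i, \phi_i(x)) are eigenpairs of the
% covariance kernel, and \xi_i(\theta) are uncorrelated standard
% random variables.
H(x, \theta) \approx \mu(x) + \sum_{i=1}^{N} \sqrt{\lambda_i}\,\xi_i(\theta)\,\phi_i(x).
```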

Downlink precoding is considered for multi-path multi-user multi-input single-output (MU-MISO) channels where the base station uses orthogonal frequency-division multiplexing (OFDM) and low-resolution signaling. A quantized coordinate minimization (QCM) algorithm is proposed, and its performance is compared to other precoding algorithms, including squared infinity-norm relaxation (SQUID), multi-antenna greedy iterative quantization (MAGIQ), and maximum safety margin precoding. MAGIQ and QCM achieve the highest information rates, and QCM has the lowest complexity measured in the number of multiplications. The information rates are computed for pilot-aided channel estimation and for a blind detector that performs joint data and channel estimation. Bit error rates for a 5G low-density parity-check code confirm the information-theoretic calculations. Simulations with imperfect channel knowledge at the transmitter show that the performance of QCM and SQUID degrades in a fashion similar to that of zero-forcing precoding with high-resolution quantizers.
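
A minimal sketch of the coordinate-minimization idea, assuming a narrowband model with channel `H`, target symbols `s`, and a finite per-antenna alphabet; the paper's QCM (which operates with OFDM, per subcarrier) may differ in ordering, scaling, and stopping rule:

```python
import numpy as np

# Hypothetical quantized coordinate minimization (QCM) sketch:
# cyclically re-optimize one antenna's quantized output at a time
# to reduce ||s - H x||^2 over a finite alphabet.
def qcm_precode(H, s, alphabet, n_sweeps=10):
    """H: (K, N) channel, s: (K,) desired receive symbols,
    alphabet: 1-D complex array of allowed per-antenna outputs."""
    K, N = H.shape
    x = np.full(N, alphabet[0], dtype=complex)
    r = s - H @ x                          # current residual
    for _ in range(n_sweeps):
        for n in range(N):
            h = H[:, n]
            r_n = r + h * x[n]             # residual with antenna n removed
            # pick the alphabet point minimizing ||r_n - h * a||^2
            costs = [np.linalg.norm(r_n - h * a) ** 2 for a in alphabet]
            x[n] = alphabet[int(np.argmin(costs))]
            r = r_n - h * x[n]
    return x
```

Each coordinate update only touches a rank-one contribution to the residual, which is consistent with QCM's low multiplication count.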

Given a graph $G = (V,E)$, a threshold function $t : V \rightarrow \mathbb{N}$ and an integer $k$, we study the Harmless Set problem, where the goal is to find a subset of vertices $S \subseteq V$ of size at least $k$ such that every vertex $v \in V$ has fewer than $t(v)$ neighbors in $S$. We enhance our understanding of the problem from the viewpoint of parameterized complexity. Our focus lies on parameters that measure the structural properties of the input instance. We show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters, such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. On dense graphs, we show that the problem is W[1]-hard parameterized by the cluster vertex deletion number. We also show that the Harmless Set problem with majority thresholds is W[1]-hard when parameterized by the treewidth of the input graph. We prove that the Harmless Set problem can be solved in polynomial time on graphs of bounded cliquewidth. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the neighbourhood diversity, twin cover, and vertex integrity of the input graph. We show that the problem parameterized by the solution size is fixed-parameter tractable on planar graphs. We thereby resolve two open questions stated in C. Bazgan and M. Chopin (2014) concerning the complexity of Harmless Set parameterized by the treewidth of the input graph and on planar graphs with respect to the solution size.
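
The defining condition is easy to state in code; a minimal verifier with illustrative names `adj`, `t`, and `S`:

```python
# Minimal sketch: check the Harmless Set condition, i.e. every vertex v
# has strictly fewer than t(v) neighbours inside S.
def is_harmless(adj, t, S):
    """adj: dict v -> set of neighbours; t: dict v -> threshold; S: set of vertices."""
    return all(len(adj[v] & S) < t[v] for v in adj)

# Example: a star with centre 1; S = {2} is harmless since vertex 1
# has 1 < t(1) = 2 neighbours in S and the leaves have none.
adj = {1: {2, 3}, 2: {1}, 3: {1}}
t = {1: 2, 2: 1, 3: 1}
assert is_harmless(adj, t, {2})
```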

We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions. In certain cases, the optimal transport plan takes the form of a one-to-one mapping from the source support to the target support, but learning or even approximating such a map is computationally challenging for large and high-dimensional datasets due to the high cost of linear programming routines and an intrinsic curse of dimensionality. We study instead the Sinkhorn problem, a regularized form of optimal transport whose solutions are couplings between the source and target distributions. We introduce a novel framework for learning the Sinkhorn coupling between two distributions in the form of a score-based generative model. Conditioned on source data, our procedure iterates Langevin Dynamics to sample target data according to the regularized optimal coupling. Key to this approach is a neural network parametrization of the Sinkhorn problem, and we prove convergence of gradient descent with respect to network parameters in this formulation. We demonstrate its empirical success on a variety of large-scale optimal transport tasks.
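
In the discrete case, the Sinkhorn coupling can be computed with the classical matrix-scaling iterations below; the paper's score-based generative formulation extends this idea to continuous, high-dimensional distributions, so this sketch is only the baseline it builds on:

```python
import numpy as np

# Classical Sinkhorn iterations for entropic optimal transport between
# discrete distributions a and b (probability vectors) with cost matrix
# C and regularization strength eps.
def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # scale columns to match b
        u = a / (K @ v)                   # scale rows to match a
    return u[:, None] * K * v[None, :]    # regularized optimal coupling
```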

We study the problem of deep joint source-channel coding (D-JSCC) for correlated image sources, where each source is transmitted through an independent noisy channel to a common receiver. In particular, we consider a pair of images captured by two cameras with possibly overlapping fields of view, transmitted over wireless channels and reconstructed at the center node. The challenge lies in designing a practical code that exploits both source and channel correlations to improve transmission efficiency without additional transmission overhead. To this end, we must account for the common information across the two stereo images as well as the differences between the two transmission channels. We propose a deep neural network solution consisting of lightweight edge encoders and a powerful center decoder. In the decoder, we propose a novel channel-state-information-aware cross-attention module to highlight the overlapping fields of view and leverage the relevance between the two noisy feature maps. Our results show a marked improvement in reconstruction quality on both links obtained by exploiting the noisy representations of the other link. Moreover, the proposed scheme shows competitive results compared to separate schemes with capacity-achieving channel codes.
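
A hypothetical sketch of such a channel-state-aware cross-attention block, assuming flattened feature maps and per-link SNRs as conditioning; the class name, shapes, and layer sizes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical channel-state-aware cross-attention: features of one
# link attend to the noisy features of the other link, with both links'
# SNRs folded into the queries as conditioning.
class CSICrossAttention(nn.Module):
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.cond = nn.Linear(dim + 2, dim)    # fold in (snr_own, snr_other)

    def forward(self, f_own, f_other, snr_own, snr_other):
        # f_own, f_other: (B, L, dim) flattened feature maps; snrs: (B,)
        B, L, _ = f_own.shape
        snr = torch.stack([snr_own, snr_other], dim=-1)     # (B, 2)
        snr = snr[:, None, :].expand(B, L, 2)               # (B, L, 2)
        q = self.cond(torch.cat([f_own, snr], dim=-1))      # SNR-aware queries
        out, _ = self.attn(q, f_other, f_other)             # cross-attend
        return f_own + out                                  # residual merge
```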

Deterministic IP (DIP) networking is a promising technique that can provide delay-bounded transmission in large-scale networks. Nevertheless, DIP faces several challenges in mixed traffic scenarios, including (i) the capability for ultra-low-latency communications, (ii) the simultaneous satisfaction of diverse QoS requirements, and (iii) network efficiency. These problems are more formidable in dynamic environments without prior knowledge of traffic demands. To address these issues, this paper designs a flexible DIP (FDIP) network. In the proposed network, we classify the queues at each output port into multiple groups, each operating with a different cycle length. FDIP can assign time-sensitive flows to different groups, thereby satisfying diverse QoS requirements simultaneously. Ultra-low-latency communication is achieved by the groups with short cycle lengths. Moreover, flexible scheduling with diverse cycle lengths improves resource utilization, hence increasing the throughput (i.e., the number of admitted time-sensitive flows). We formulate a throughput maximization problem that jointly considers admission control, transmission path selection, and cycle length assignment, and develop a branch-and-bound (BnB)-based heuristic to solve it. Simulation results show that the proposed FDIP significantly outperforms standard DIP in terms of both throughput and latency guarantees.
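
To make the group-assignment idea concrete, here is a deliberately simple greedy sketch, not the paper's BnB heuristic; the per-hop bound of two cycle lengths is an assumption borrowed from cyclic-forwarding analyses, and all names are illustrative:

```python
# Illustrative greedy admission: give each flow the longest cycle length
# whose (assumed) worst-case per-hop delay still meets its bound, since
# longer cycles use schedule capacity more efficiently.
def admit_flows(flows, groups):
    """flows: list of (flow_id, delay_bound, demand);
    groups: list of dicts {"cycle": T, "capacity": c}."""
    groups = sorted(groups, key=lambda g: g["cycle"], reverse=True)
    admitted = {}
    for fid, bound, demand in sorted(flows, key=lambda f: f[1]):
        for g in groups:
            # assumed per-hop delay bound of two cycle lengths; the
            # paper's delay model may differ
            if 2 * g["cycle"] <= bound and g["capacity"] >= demand:
                g["capacity"] -= demand
                admitted[fid] = g["cycle"]
                break
    return admitted
```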

This paper derives the asymptotic distribution of the normalized $k$-th maximum order statistic of a sequence of non-central chi-square random variables with non-identical non-centrality parameters. We demonstrate the utility of these results in characterizing the signal-to-noise ratio in three different applications in wireless communication systems where the statistics of the $k$-th maximum channel power over Rician fading links are of interest. Furthermore, we derive simple expressions for the asymptotic outage probability, average throughput, achievable throughput, and average bit error probability. The proposed results are validated via extensive Monte Carlo simulations.
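
A minimal Monte Carlo sketch of the quantity being characterized, the $k$-th largest of independent non-central chi-square draws with per-link non-centrality parameters `lam` (names are illustrative):

```python
import numpy as np

# Sample the k-th maximum of independent non-central chi-square
# variables with df degrees of freedom and per-link non-centrality
# parameters lam (length-n array); k = 1 gives the maximum.
def kth_max_samples(k, df, lam, n_trials=100_000, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.noncentral_chisquare(df, lam, size=(n_trials, len(lam)))
    return np.sort(x, axis=1)[:, -k]
```

An empirical histogram of these samples is the natural check against asymptotic results of this kind.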

Research on machine learning for channel estimation, especially neural network solutions for wireless communications, is attracting significant interest, because conventional methods cannot meet the demands of high-speed communication. In this paper, we deploy a general residual convolutional neural network to perform channel estimation for orthogonal frequency-division multiplexing (OFDM) signals in a downlink scenario. Our method also uses a simple interpolation layer in place of the transposed convolutional layer used in other networks, reducing the computation cost. The proposed method adapts more easily to different pilot patterns and packet sizes. Compared with other deep learning methods for channel estimation, our results on 3GPP channel models suggest improved mean squared error performance for our approach.
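
A hypothetical sketch of the upsampling idea: replacing a learned transposed convolution with parameter-free bilinear interpolation of the pilot-position estimates; the shapes are placeholders, not the paper's architecture:

```python
import torch.nn as nn
import torch.nn.functional as F

# Parameter-free upsampling of pilot-position channel estimates to the
# full OFDM resource grid, in place of a transposed convolution;
# residual conv blocks would then refine the interpolated estimate.
class InterpUpsample(nn.Module):
    def __init__(self, grid_size):
        super().__init__()
        self.grid_size = grid_size   # (n_subcarriers, n_symbols)

    def forward(self, x_pilot):
        # x_pilot: (B, 2, Hp, Wp) real/imag estimates at pilot positions
        return F.interpolate(x_pilot, size=self.grid_size,
                             mode="bilinear", align_corners=False)
```

Because the layer has no weights, it also transfers across pilot patterns and packet sizes without retraining, consistent with the adaptability claim above.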

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the ability of the classical and proposed estimators to estimate the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
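
The abstract does not specify the proposed estimators; for intuition only, a standard confounding-adjusted baseline is inverse propensity weighting (IPW), sketched below with illustrative names (`y`, `d`, `e_hat`):

```python
import numpy as np

# Standard inverse-propensity-weighted (IPW) estimate of the average
# effect E[Y(1)] - E[Y(0)] of a binary credit decision on repayment;
# shown for intuition, not as the paper's estimator.
def ipw_effect(y, d, e_hat):
    """y: repayment outcomes, d: binary credit decision (0/1),
    e_hat: estimated propensity P(d = 1 | confounders)."""
    y = np.asarray(y, float)
    d = np.asarray(d, float)
    e_hat = np.clip(np.asarray(e_hat, float), 1e-3, 1 - 1e-3)  # avoid extreme weights
    return np.mean(d * y / e_hat - (1 - d) * y / (1 - e_hat))
```

Reweighting by the propensity removes the bias that a naive comparison of approved and rejected borrowers inherits from confounded credit decisions.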
