
The design of message passing (MP) algorithms on factor graphs is an effective way to implement channel estimation (CE) in wireless communication systems, and their performance can be further improved by exploiting prior probability models that accurately match the channel characteristics. In this work, we study the CE problem in a downlink massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. As the prior probability model, we propose the Markov-chain two-state Gaussian mixture with large variance differences (TSGM-LVD) model to exploit the structured sparsity of the channel in the angle-frequency domain. Existing single and combined MP rules cannot handle the message computations arising from the proposed probability model. To overcome this issue, we present a general method to derive the hybrid message passing (HMP) rule, which allows the calculation of messages described by mixed linear and non-linear functions. Accordingly, we design the HMP-TSGM-LVD algorithm under the structured turbo framework (STF). Simulation results demonstrate that the proposed algorithm converges faster and achieves better, more stable performance than its counterparts. In particular, the gain of the proposed approach reaches its maximum (3 dB) in the high signal-to-noise-ratio regime, where benchmark approaches exhibit oscillating behavior due to improper prior-model characterization.
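To make the prior concrete, here is a minimal sketch of how a tap sequence might be drawn from a Markov-chain two-state Gaussian mixture with large variance differences; the transition probability and the two variances are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def sample_tsgm_lvd(n, p_stay=0.9, var_small=1e-4, var_large=1.0, seed=0):
    """Draw n channel taps from a toy Markov-chain two-state Gaussian
    mixture with large variance differences. State 0 produces near-zero
    taps (small variance), state 1 produces active taps (large variance);
    the Markov chain makes active taps appear in clusters, which is the
    structured sparsity the prior is meant to capture."""
    rng = np.random.default_rng(seed)
    states = np.empty(n, dtype=int)
    states[0] = rng.integers(2)
    for t in range(1, n):
        # stay in the current state with probability p_stay
        states[t] = states[t - 1] if rng.random() < p_stay else 1 - states[t - 1]
    std = np.where(states == 1, np.sqrt(var_large), np.sqrt(var_small))
    return states, rng.normal(0.0, std)

states, taps = sample_tsgm_lvd(256)
```

The clustering of active taps (rather than independent activity per tap) is exactly what a plain i.i.d. Gaussian-mixture prior would miss.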

Related content

Optical wireless communication (OWC) networks have emerged as a promising technology that enables high-speed and reliable communication for a variety of applications. In this work, we investigate applying Random Linear Network Coding (RLNC) to NOMA-based OWC networks to improve the performance of a proposed high-density indoor optical wireless network in which users are divided into multicast groups, each containing users whose channel gains differ only slightly. Moreover, a fixed power allocation strategy is adopted to manage interference among these groups and to avoid complexity. The performance of the proposed RLNC-NOMA scheme is evaluated in terms of average bit error rate and ergodic sum rate versus the power allocation ratio factor. The results show that the proposed scheme is better suited to the considered network than the traditional NOMA and orthogonal transmission schemes.
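As a rough illustration of the network-coding ingredient, the sketch below implements RLNC over GF(2): coded packets are random XOR-combinations of the source packets, and a receiver recovers the sources by Gaussian elimination once it holds enough linearly independent combinations. This is the generic RLNC mechanism, not the paper's NOMA power-allocation scheme.

```python
import numpy as np

def rlnc_encode(packets, n_coded, seed=0):
    """Produce n_coded random XOR-combinations (GF(2)) of the source packets."""
    rng = np.random.default_rng(seed)
    packets = np.asarray(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, len(packets)))
    return coeffs, (coeffs @ packets) % 2

def rlnc_decode(coeffs, coded, k):
    """Recover the k source packets by Gaussian elimination over GF(2);
    returns None if the received combinations do not have full rank."""
    A = np.concatenate([coeffs, coded], axis=1) % 2
    r = 0
    for c in range(k):
        piv = next((i for i in range(r, len(A)) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]          # bring pivot row up
        for i in range(len(A)):
            if i != r and A[i, c]:
                A[i] ^= A[r]               # XOR row elimination
        r += 1
    return A[:k, k:] if r == k else None
```

Any k independent combinations suffice, which is what makes RLNC attractive for multicast groups with slightly different channel gains: no receiver needs a specific packet, only enough of them.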

Temporal modeling is crucial for multi-frame human pose estimation. Most existing methods directly employ optical flow or deformable convolution to predict full-spectrum motion fields, which may pick up numerous irrelevant cues, such as a nearby person or the background. Without further effort to excavate meaningful motion priors, their results are suboptimal, especially in complicated spatiotemporal interactions. On the other hand, the temporal difference can encode representative motion information that is potentially valuable for pose estimation but has not been fully exploited. In this paper, we present a novel multi-frame human pose estimation framework that employs temporal differences across frames to model dynamic contexts and introduces a mutual-information objective to facilitate the disentanglement of useful motion information. Specifically, we design a multi-stage Temporal Difference Encoder that performs incremental cascaded learning conditioned on multi-stage feature-difference sequences to derive an informative motion representation. We further propose a Representation Disentanglement module from the mutual-information perspective, which grasps discriminative, task-relevant motion signals by explicitly defining useful and noisy constituents of the raw motion features and minimizing their mutual information. This approach ranked first in the Crowd Pose Estimation in Complex Events Challenge on the benchmark dataset HiEve, and achieves state-of-the-art performance on three benchmarks: PoseTrack2017, PoseTrack2018, and PoseTrack21.
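The first ingredient, differences across frames, can be illustrated with plain arrays: a cascade of temporal differences of increasing order, a crude stand-in for the multi-stage feature-difference sequences the encoder consumes.

```python
import numpy as np

def multistage_differences(frames, stages=3):
    """Cascaded temporal differences along the time axis: stage s holds
    the s-th order difference of the per-frame features. This mimics only
    the input structure of the Temporal Difference Encoder, not its
    learned incremental cascades."""
    diffs, d = [], frames
    for _ in range(stages):
        d = d[1:] - d[:-1]   # consecutive-frame difference
        diffs.append(d)
    return diffs

# per-frame scalar 'features' following t^2 (constant acceleration)
diffs = multistage_differences(np.array([0.0, 1.0, 4.0, 9.0, 16.0]))
```

Static content (background, a motionless bystander) cancels in the first difference, which is why differences are a cheap motion cue compared with full-spectrum flow fields.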

Model independent techniques for constructing background data templates using generative models have shown great promise for use in searches for new physics processes at the LHC. We introduce a major improvement to the CURTAINs method by training the conditional normalizing flow between two side-band regions using maximum likelihood estimation instead of an optimal transport loss. The new training objective improves the robustness and fidelity of the transformed data and is much faster and easier to train. We compare the performance against the previous approach and the current state of the art using the LHC Olympics anomaly detection dataset, where we see a significant improvement in sensitivity over the original CURTAINs method. Furthermore, CURTAINsF4F requires substantially less computational resources to cover a large number of signal regions than other fully data driven approaches. When using an efficient configuration, an order of magnitude more models can be trained in the same time required for ten signal regions, without a significant drop in performance.
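The training-objective change can be illustrated in miniature. For the simplest possible normalizing flow, a 1-D affine map with a standard-normal base distribution, maximum-likelihood training has a closed form (the sample mean and standard deviation); this toy only shows what "training a flow by MLE" computes, not the conditional flows of CURTAINs.

```python
import numpy as np

def fit_affine_flow_mle(x):
    """Maximum-likelihood fit of a 1-D affine flow z = (x - mu) / sigma
    with a standard-normal base density. The flow log-likelihood is
    log N(z; 0, 1) - log sigma, and its maximizer is the sample mean and
    standard deviation, so 'MLE training' is exact in this toy case."""
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    log_lik = np.mean(-0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(sigma))
    return mu, sigma, log_lik
```

MLE directly scores how well the transformed data matches the target density, whereas an optimal-transport loss only compares point clouds, which is one intuition for the improved fidelity reported above.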

The ParaOpt algorithm was recently introduced as a time-parallel solver for optimal-control problems with a terminal-cost objective, and convergence results have been presented for the linear diffusive case with implicit-Euler time integrators. We reformulate ParaOpt for tracking problems and provide generalized convergence analyses for both objectives. We focus on linear diffusive equations and prove convergence bounds that are generic in the time integrators used. For large problem dimensions, ParaOpt's performance depends crucially on having a good preconditioner to solve the arising linear systems. For the case where ParaOpt's cheap, coarse-grained propagator is linear, we introduce diagonalization-based preconditioners inspired by recent advances in the ParaDiag family of methods. These preconditioners not only lead to a weakly-scalable ParaOpt version, but are themselves invertible in parallel, making maximal use of available concurrency. They have proven convergence properties in the linear diffusive case that are generic in the time discretization used, similarly to our ParaOpt results. Numerical results confirm that the iteration count of the iterative solvers used for ParaOpt's linear systems becomes constant in the limit of an increasing processor count. The paper is accompanied by a sequential MATLAB implementation.
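The diagonalization idea behind ParaDiag-style preconditioners can be shown on a scalar circulant system: the DFT diagonalizes every circulant matrix, so a solve costs two FFTs and a pointwise division, and the division parallelizes across transform coefficients. The sketch below is the generic trick, not the paper's block preconditioner.

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b for a circulant C with first column c, using
    C = F^{-1} diag(fft(c)) F: forward FFT, pointwise divide by the
    eigenvalues fft(c), inverse FFT."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c))

# dense sanity check: a circulant matrix has entries C[i, j] = c[(i - j) % n]
c = np.array([4.0, 1.0, 0.0, 1.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
b = np.arange(4.0)
x = solve_circulant(c, b)
```

In the time-parallel setting, the circulant structure approximates the all-at-once time-stepping operator, so each FFT coefficient corresponds to an independent spatial solve, which is the source of the parallel invertibility claimed above.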

Allocation and planning with a collection of tasks and a group of agents is an important problem in multiagent systems. A commonly faced bottleneck is scalability, as the multiagent model generally grows exponentially in size with the number of agents. We consider the combination of random task assignment and multiagent planning under multiple-objective constraints, and show that this problem can be decentralised into individual agent-task models. We present a point-oriented Pareto computation algorithm that checks whether a point corresponding to given cost and probability thresholds is feasible for our formal problem. If the given point is infeasible, our algorithm finds the Pareto-optimal point closest to it. We provide the first multi-objective model checking framework that uses GPU and multi-core acceleration simultaneously. Our framework manages CPU and GPU devices as a load-balancing problem for parallel computation. Our experiments demonstrate that parallelisation achieves significant run-time speed-up over sequential computation.
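The feasibility query can be mimicked on a finite set of achievable points: a target (cost, probability-threshold) point is feasible if some Pareto point dominates it, and otherwise the closest Pareto point is returned. Minimization in every objective is an assumption of this sketch; the paper's model-checking machinery computes the achievable points themselves.

```python
import numpy as np

def closest_feasible(pareto_points, target):
    """Return (True, target) if some achievable Pareto point dominates the
    target componentwise (minimization assumed in every objective);
    otherwise return (False, p) with p the Pareto point closest to the
    target in Euclidean distance."""
    P = np.asarray(pareto_points, dtype=float)
    t = np.asarray(target, dtype=float)
    if np.any(np.all(P <= t, axis=1)):
        return True, t
    return False, P[np.argmin(np.linalg.norm(P - t, axis=1))]
```

Returning the nearest achievable point instead of a bare "infeasible" verdict tells the user how far their thresholds are from being satisfiable.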

Recently, a versatile limited feedback scheme based on a Gaussian mixture model (GMM) was proposed for frequency division duplex (FDD) systems. This scheme provides high flexibility regarding various system parameters and is applicable to both point-to-point multiple-input multiple-output (MIMO) and multi-user MIMO (MU-MIMO) communications. The GMM is learned to cover the operation of all mobile terminals (MTs) located inside the base station (BS) cell, and each MT only needs to evaluate its strongest mixture component as feedback, eliminating the need for channel estimation at the MT. In this work, we extend the GMM-based feedback scheme to variable feedback lengths by leveraging a single learned GMM through merging or pruning of dispensable mixture components. Additionally, the GMM covariances are restricted to Toeplitz or circulant structure through model-based insights. These extensions significantly reduce the offloading amount and enhance the clustering ability of the GMM which, in turn, leads to an improved system performance. Simulation results for both point-to-point and multi-user systems demonstrate the effectiveness of the proposed extensions.
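The feedback step has a simple form: the terminal evaluates the responsibility of every mixture component for its observed channel and reports only the index of the strongest one. The sketch below shows that computation for a generic Gaussian mixture; the component parameters here are illustrative, while the scheme above would use learned, Toeplitz- or circulant-structured covariances.

```python
import numpy as np

def strongest_component(h, means, covs, weights):
    """Return the index of the mixture component with the highest
    responsibility for the observed channel h: argmax over components of
    log w_i + log N(h; mean_i, cov_i). Only this index is fed back, so no
    channel estimation is needed at the terminal."""
    d = h.shape[0]
    log_resp = np.empty(len(weights))
    for i, (m, C, w) in enumerate(zip(means, covs, weights)):
        diff = h - m
        _, logdet = np.linalg.slogdet(C)
        quad = diff @ np.linalg.solve(C, diff)
        log_resp[i] = np.log(w) - 0.5 * (d * np.log(2 * np.pi) + logdet + quad)
    return int(np.argmax(log_resp))
```

The feedback payload is then just ceil(log2 K) bits for K components, which is why pruning or merging dispensable components directly trades feedback length against resolution.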

Training a neural network (NN) typically relies on some type of curve-following method, such as gradient descent (GD) (and stochastic gradient descent (SGD)), ADADELTA, ADAM, or limited-memory algorithms. Convergence for these algorithms usually requires access to a large quantity of observations to achieve high accuracy and, for certain classes of functions, they can take multiple epochs over the data to converge. Herein, a different technique with the potential for dramatically faster convergence, especially for shallow networks, is explored: it does not follow curves, but instead relies on 'decoupling' hidden layers and on updating their weighted connections through bootstrapping, resampling, and linear regression. By utilizing resampled observations, the convergence of this process is empirically shown to be remarkably fast and to require fewer data points: in particular, our experiments show that a fraction of the observations required by traditional neural network training methods suffices to approximate various classes of functions.
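A minimal sketch of the decoupling idea, under the assumption that it resembles fitting a shallow network's output layer in closed form over frozen random hidden features; the bootstrapping/resampling step is omitted, and the layer size, weight scale and ridge term are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def _features(X, W):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    return np.tanh(Xb @ W)

def fit_output_layer(X, y, hidden_dim=50, lam=1e-4, seed=0):
    """Freeze random hidden-layer weights W, then solve for the output
    weights beta by ridge regression instead of gradient descent:
    beta = (H^T H + lam I)^{-1} H^T y with H the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=2.0, size=(X.shape[1] + 1, hidden_dim))
    H = _features(X, W)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden_dim), H.T @ y)
    return W, beta

def predict(X, W, beta):
    return _features(X, W) @ beta
```

The output weights are obtained in one linear solve rather than many gradient steps, which gives a flavour of why such schemes can converge quickly on shallow architectures.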

In this work we propose a low-rank approximation of high-fidelity finite element simulations by utilizing weights corresponding to areas of high stress levels in an abdominal aortic aneurysm, i.e. a deformed blood vessel. We focus on the von Mises stress, which corresponds to the rupture risk of the aorta. The stress field is modeled as a Gaussian Markov random field, and we define our approximation as a basis of vectors that solve a series of optimization problems. Each of these problems describes the minimization of an expected weighted quadratic loss. The weights, which encapsulate the importance of each grid point of the finite elements, can be chosen freely, either data-driven or by incorporating domain knowledge. Along with a more general discussion of mathematical properties, we provide an effective numerical heuristic to compute the basis under general conditions. We explicitly explore two such bases on the surface of a high-fidelity finite element grid and show their efficiency for compression. We further utilize the approach to predict the von Mises stress in areas of interest using low- and high-fidelity simulations. Due to the high dimension of the data, extra care is needed to keep the problem numerically feasible; this is also a major concern of this work.
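The optimization the basis vectors solve can be sketched for a generic weighted quadratic loss: rescaling each coordinate by the square root of its weight reduces the problem to an ordinary eigendecomposition, after which the basis is mapped back. Weights and data here are synthetic; the paper's numerical heuristic is what makes this tractable on a high-dimensional FE grid.

```python
import numpy as np

def weighted_basis(samples, weights, k):
    """Rank-k basis minimizing the expected weighted quadratic loss
    E[(x - x_hat)^T diag(w) (x - x_hat)]: substituting y = sqrt(w) * x
    turns this into plain PCA on y, so we take the top-k eigenvectors of
    y's second-moment matrix and undo the rescaling."""
    Wh = np.sqrt(np.asarray(weights, dtype=float))
    Y = samples * Wh                          # reweighted samples
    S = Y.T @ Y / len(samples)
    _, vecs = np.linalg.eigh(S)               # ascending eigenvalues
    U = vecs[:, ::-1][:, :k]                  # top-k eigenvectors
    return U / Wh[:, None]                    # basis in original coordinates

def reconstruct(x, weights, basis):
    """Project x onto the weighted basis and map back."""
    Wh = np.sqrt(np.asarray(weights, dtype=float))
    U = basis * Wh[:, None]                   # orthonormal in weighted space
    return (U @ (U.T @ (x * Wh))) / Wh
```

Putting large weights on high-stress grid points makes the basis spend its rank budget where rupture risk is assessed, at the price of coarser reconstruction elsewhere.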

Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
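The saturation problem that motivates the decomposition is visible in the standard InfoNCE bound, which can never exceed log K for K candidates. Below is a minimal numpy version of the bound, where the score matrix stands for a learned critic f(x_i, y_j); the chain-rule decomposition itself is not reproduced here.

```python
import numpy as np

def infonce_bound(scores):
    """InfoNCE contrastive lower bound on mutual information from a score
    matrix scores[i, j] = f(x_i, y_j), where (x_i, y_i) are the matched
    pairs: log K minus the mean cross-entropy of identifying the match."""
    K = scores.shape[0]
    logits = scores - scores.max(axis=1, keepdims=True)          # stabilize
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return np.log(K) + np.diag(log_sm).mean()
```

Because each term in DEMI's sum of unconditional and conditional MI terms only has to measure a modest chunk of the total, each individual contrastive bound stays well below its own log K cap, sidestepping this underestimation bias.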

Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated, and user relations can be high-order. Hypergraphs provide a natural way to model complex high-order relations, but their potential for improving social recommendation is under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel of the network encodes a hypergraph that depicts a common high-order user-relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of the different types of high-order connectivity information. To compensate for this aggregation loss, we integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information through hierarchical mutual-information maximization. Experimental results on multiple real-world datasets show that the proposed model outperforms state-of-the-art methods, and an ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via //github.com/Coder-Yu/RecQ.
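A single hypergraph-convolution step in one channel can be written with the usual incidence-matrix normalization; this is the generic operator, not the paper's exact multi-channel design.

```python
import numpy as np

def hypergraph_conv(H, X):
    """One hypergraph convolution: X' = D_v^{-1} H D_e^{-1} H^T X, with H
    the |V| x |E| incidence matrix, D_v the vertex degrees and D_e the
    hyperedge degrees. Features are first averaged within each hyperedge,
    then each vertex averages over its incident hyperedges."""
    Dv = H.sum(axis=1)                     # vertex degrees
    De = H.sum(axis=0)                     # hyperedge degrees
    return (H / Dv[:, None]) @ ((H.T / De[:, None]) @ X)

# toy incidence: 4 users, 2 high-order relations (hyperedges)
H = np.array([[1., 0.],
              [1., 1.],
              [0., 1.],
              [1., 0.]])
X = np.ones((4, 2))                        # constant user features
out = hypergraph_conv(H, X)
```

Because a hyperedge connects any number of vertices, one such step already mixes information across a whole user group, which a pairwise graph convolution would need several hops to achieve.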
