
This paper proposes an adaptive transmission scheme to improve the performance of coordinated direct and relay transmission (CDRT) systems based on non-orthogonal multiple access (NOMA). By applying maximum ratio transmission, the scheme satisfies the requirements of CDRT while exploiting dynamic power allocation and directional antennas to improve operational efficiency. We derive closed-form expressions for the exact effective sum throughput. Simulation results validate the theoretical analysis and demonstrate the effectiveness of the proposed scheme.
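
The abstract does not spell out the scheme, but the following Python sketch illustrates the ingredients it names: maximum ratio transmission (MRT) toward the relay, NOMA power allocation between a near (direct) user and a far (relayed) user, and the resulting sum throughput from Shannon rates. The channel model, the two-phase half-duplex structure, and every parameter value are assumptions made purely for illustration, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative link parameters (assumed, not taken from the paper)
M = 4                       # transmit antennas at the base station
P = 1.0                     # total transmit power
sigma2 = 0.01               # noise power
a_near, a_far = 0.2, 0.8    # NOMA power-allocation coefficients (sum to 1)

# Rayleigh-fading channels: BS -> near user, BS -> relay, relay -> far user
h_near = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_relay = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g_far = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Maximum ratio transmission: steer the beam toward the relay link
w = h_relay / np.linalg.norm(h_relay)
gain_relay = np.abs(np.vdot(w, h_relay)) ** 2
gain_near = np.abs(np.vdot(w, h_near)) ** 2

# Phase 1: superimposed NOMA signal; the relay decodes the far user's symbol,
# the near user cancels it via SIC and decodes its own symbol
sinr_relay = a_far * P * gain_relay / (a_near * P * gain_relay + sigma2)
snr_near = a_near * P * gain_near / sigma2

# Phase 2: the relay forwards the far user's symbol (half-duplex factor 1/2)
snr_far = P * np.abs(g_far) ** 2 / sigma2

rate_near = np.log2(1 + snr_near)
rate_far = 0.5 * np.log2(1 + min(sinr_relay, snr_far))
print(f"sum throughput ~ {rate_near + rate_far:.2f} bit/s/Hz")
```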

Related content

This paper introduces a theory for assessing and optimizing the multiple-input multiple-output (MIMO) performance of multi-port cluster antennas in terms of efficiency, channel correlation, and power distribution. A method based on convex optimization of feeding coefficients is extended with additional constraints that allow the user to control the ratio of power radiated by the clusters. The formulation makes it possible to simultaneously optimize total efficiency and channel correlation under a fixed ratio of power radiated by the clusters, thus examining the trade-off between these parameters. It is shown that channel correlation, total efficiency, and the allocation of radiated power are mutually conflicting parameters. The trade-offs are shown and discussed. The theory is demonstrated on a four-element antenna array and on a mobile terminal antenna.
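
The paper formulates a convex optimization over complex feeding coefficients with a constraint on the ratio of power radiated by the clusters. As a rough, simplified illustration of that constraint structure only, the sketch below solves a real-valued toy version with a general-purpose solver; the cluster radiation operators R1, R2 and the loss operator L are random placeholders, not antenna data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 4  # number of antenna ports

def random_psd(n):
    """Random positive semi-definite matrix used as a placeholder operator."""
    A = rng.standard_normal((n, n))
    return A @ A.T / n

R1, R2 = random_psd(n), random_psd(n)   # power radiated by cluster 1 / cluster 2
L = 0.1 * random_psd(n)                 # dissipative losses
target_ratio = 2.0                      # desired P_rad,1 / P_rad,2

def neg_total_efficiency(x):
    p_rad = x @ (R1 + R2) @ x
    p_loss = x @ L @ x
    return -p_rad / (p_rad + p_loss)

constraints = [
    # normalize the total radiated power
    {"type": "eq", "fun": lambda x: x @ (R1 + R2) @ x - 1.0},
    # fix the ratio of power radiated by the two clusters
    {"type": "eq", "fun": lambda x: x @ R1 @ x - target_ratio * (x @ R2 @ x)},
]

x0 = np.ones(n) / np.sqrt(n)
res = minimize(neg_total_efficiency, x0, constraints=constraints)
print("feeding coefficients:", np.round(res.x, 3))
print("total efficiency    :", round(-res.fun, 3))
```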

We explore computational aspects of maximum likelihood estimation of the mixture proportions of a nonparametric finite mixture model -- a convex optimization problem with old roots in statistics and a key member of the modern data analysis toolkit. Motivated by problems in shape constrained inference, we consider structured variants of this problem with additional convex polyhedral constraints. We propose a new cubic regularized Newton method for this problem and present novel worst-case and local computational guarantees for our algorithm. We extend earlier work by Nesterov and Polyak to the case of a self-concordant objective with polyhedral constraints, such as the ones considered herein. We propose a Frank-Wolfe method to solve the cubic regularized Newton subproblem and derive efficient solutions for the linear optimization oracles that may be of independent interest. In the particular case of Gaussian mixtures without shape constraints, we derive bounds on how well the finite mixture problem approximates the infinite-dimensional Kiefer-Wolfowitz maximum likelihood estimator. Experiments on synthetic and real datasets suggest that our proposed algorithms exhibit improved runtimes and scalability features over existing benchmarks.
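
The paper's algorithm is a cubic regularized Newton method whose subproblem is solved by Frank-Wolfe. As a much simpler stand-in that still exposes the problem structure (a log-likelihood maximized over the probability simplex, with a vertex-valued linear oracle), the sketch below runs plain Frank-Wolfe directly on a toy Gaussian-location mixture; the grid, data, and step-size rule are illustrative assumptions.

```python
import numpy as np

def npmle_weights_frank_wolfe(F, n_iter=500):
    """Maximize (1/n) * sum_i log((F @ pi)_i) over the probability simplex."""
    n, m = F.shape
    pi = np.full(m, 1.0 / m)
    for t in range(n_iter):
        mix = F @ pi                       # mixture likelihood of each sample
        grad = F.T @ (1.0 / mix) / n       # gradient of the average log-likelihood
        j = int(np.argmax(grad))           # linear oracle over the simplex: a vertex e_j
        direction = -pi
        direction[j] += 1.0                # e_j - pi
        pi = pi + (2.0 / (t + 2)) * direction   # standard diminishing step size
    return pi

# Toy Gaussian-location mixture: samples scored against a grid of candidate means
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
grid = np.linspace(-4, 4, 41)
F = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)

pi_hat = npmle_weights_frank_wolfe(F)
print("candidate means with the largest weights:", grid[np.argsort(pi_hat)[-4:]])
```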

This paper studies the problem of encoding messages into sequences which can be uniquely recovered from some noisy observations about their substrings. The observed reads comprise consecutive substrings with some given minimum overlap. This coded reconstruction problem has applications to DNA storage. We consider both single-strand reconstruction codes and multi-strand reconstruction codes, where the message is encoded into a single strand or a set of multiple strands, respectively. Various parameter regimes are studied. New codes are constructed, some of whose rates asymptotically attain the upper bounds.
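
The sketch below only illustrates the read model described in the abstract (consecutive substrings with a given minimum overlap) together with a naive greedy reassembly; it is not one of the paper's code constructions and ignores noise.

```python
def reads_with_overlap(strand: str, L: int, t: int):
    """All length-L substrings taken so that consecutive reads overlap in at
    least t symbols (step <= L - t), mirroring the observation model above."""
    step = L - t
    starts = list(range(0, len(strand) - L + 1, step))
    if starts[-1] != len(strand) - L:           # make sure the suffix is covered
        starts.append(len(strand) - L)
    return [strand[s:s + L] for s in starts]

def greedy_reassemble(reads, t: int):
    """Stitch consecutive reads by their largest suffix/prefix overlap (>= t);
    a toy decoder, not one of the paper's constructions."""
    strand = reads[0]
    for r in reads[1:]:
        for k in range(len(r), t - 1, -1):
            if strand.endswith(r[:k]):
                strand += r[k:]
                break
    return strand

msg = "ACGTTGACCATGACGTACGT"
reads = reads_with_overlap(msg, L=8, t=4)
print(reads)
assert greedy_reassemble(reads, t=4) == msg
```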

We consider a distributed coding for computing problem with constant decoding locality, i.e., with vanishing error probability, any single sample of the function can be approximately recovered by probing only a constant number of compressed bits. We establish an achievable rate region by designing an efficient coding scheme. The scheme reduces the required rate by introducing auxiliary random variables and supports local decoding at the same time. We then show that the rate region is optimal under mild regularity conditions on the source distributions. A coding for computing problem with side information is studied analogously. These results indicate that a higher rate must be paid to achieve lower coding complexity in distributed computing settings. Moreover, useful graph characterizations are developed to simplify the computation of the achievable rate region.
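
As a toy illustration of the kind of graph characterization mentioned at the end of the abstract, the sketch below builds a confusability graph for computing a function with decoder side information: two source symbols need distinct codewords only if some jointly possible side-information value separates their function values, so a proper coloring yields a valid (here, 1-bit) encoding. The distribution and function are made up, and the example does not reproduce the paper's rate region.

```python
from itertools import combinations

X, Y = range(4), range(2)                       # source and side-information alphabets
p = {(x, y): 1 / 8 for x in X for y in Y}       # assumed uniform joint distribution

def f(x, y):                                    # function the decoder wants to compute
    return (x + y) % 2

# Confusability graph on X: two symbols must receive distinct codewords iff some
# jointly possible y gives them different function values.
edges = {
    (a, b)
    for a, b in combinations(X, 2)
    if any(p[a, y] > 0 and p[b, y] > 0 and f(a, y) != f(b, y) for y in Y)
}

# Greedy proper coloring: symbols sharing a color can share a codeword.
color = {}
for x in X:
    neighbors = {b if a == x else a for (a, b) in edges if x in (a, b)}
    used = {color[n] for n in neighbors if n in color}
    color[x] = min(c for c in range(len(X)) if c not in used)

print("confusability edges:", sorted(edges))
print("codeword (color) per source symbol:", color)   # 2 colors -> 1 bit instead of 2
```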

This paper presents the reproduction of two studies on the perception of micro and macro expressions of Virtual Humans (VHs) generated by Computer Graphics (CG), first described in 2014 and replicated in 2021. The 2014 study used a realistic VH, whereas the 2021 study used a cartoon VH. In our work, we replicate the study using a realistic CG character. Our main goals are to compare the perception of micro and macro expressions between levels of realism (2021 cartoon versus 2023 realistic) and between realistic characters in different periods (2014 versus 2023). Among our results, participants recognized micro expressions more easily in realistic VHs than in a cartoon VH, and participants' perception of both micro and macro expressions was similar in 2014 and 2023.

Users experience individually different sickness symptoms during or after navigating through an immersive virtual environment, a phenomenon generally known as cybersickness. Previous studies have predicted the severity of cybersickness based on physiological and/or kinematic data. However, compared with kinematic data, physiological data rely heavily on biosensors during collection, which is inconvenient and limited to a few affordable VR devices. In this work, we propose a deep neural network that predicts cybersickness from kinematic data. We introduce an encoded physiological representation to characterize individual susceptibility, so the predictor can estimate cybersickness from a user's kinematic data alone, without relying on biosensors. Fifty-three participants were recruited for a user study to collect multimodal data, including kinematic data (navigation speed, head tracking), physiological signals (e.g., electrodermal activity, heart rate), and Simulator Sickness Questionnaire (SSQ) responses. The predictor achieved an accuracy of 97.8% for cybersickness prediction by incorporating the pre-computed physiological representation to characterize individual differences, which makes cybersickness measurement considerably more convenient.
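
A minimal sketch of the kind of predictor described above: a recurrent encoder over kinematic sequences whose output is concatenated with a pre-computed physiological embedding that encodes individual susceptibility. Layer sizes, the GRU choice, and the four-level severity output are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CybersicknessPredictor(nn.Module):
    """Predict cybersickness severity from kinematic sequences, conditioned on a
    pre-computed physiological embedding of individual susceptibility."""
    def __init__(self, kin_dim=7, phys_dim=16, hidden=64, n_classes=4):
        super().__init__()
        self.kin_encoder = nn.GRU(kin_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + phys_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, kinematics, phys_embedding):
        # kinematics: (batch, time, kin_dim), e.g. navigation speed + head pose
        # phys_embedding: (batch, phys_dim), fixed per user at prediction time
        _, h = self.kin_encoder(kinematics)
        features = torch.cat([h[-1], phys_embedding], dim=-1)
        return self.head(features)               # severity logits

model = CybersicknessPredictor()
logits = model(torch.randn(8, 120, 7), torch.randn(8, 16))
print(logits.shape)                              # torch.Size([8, 4])
```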

This paper presents a novel deep learning model based on the transformer architecture to predict the load-deformation behavior of large bored piles in Bangkok subsoil. The model encodes the soil profile and pile features as tokenized input and generates the load-deformation curve as output. It also incorporates the previous sequential data of the load-deformation curve into the decoder to improve prediction accuracy. The model shows satisfactory accuracy and generalization ability for load-deformation curve prediction, with a mean absolute error of 5.72% on the test data. It can also be used for parametric analysis and design optimization of piles under different soil conditions, pile cross sections, pile lengths, and pile types.
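
A minimal sketch of such an encoder-decoder transformer: soil-profile and pile-feature tokens feed the encoder, while previously observed load-deformation points feed the decoder, which predicts the next point of the curve. All dimensions and the tokenization are assumed for illustration and do not reproduce the authors' model.

```python
import torch
import torch.nn as nn

class PileLoadTransformer(nn.Module):
    """Encoder-decoder transformer: soil/pile feature tokens in, next
    load-deformation point out (dimensions are assumed)."""
    def __init__(self, token_dim=8, curve_dim=2, d_model=64):
        super().__init__()
        self.embed_tokens = nn.Linear(token_dim, d_model)   # soil-layer and pile-feature tokens
        self.embed_curve = nn.Linear(curve_dim, d_model)    # previous (load, settlement) points
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2, batch_first=True)
        self.out = nn.Linear(d_model, curve_dim)

    def forward(self, tokens, prev_curve):
        src = self.embed_tokens(tokens)
        tgt = self.embed_curve(prev_curve)
        # causal mask so each curve point only attends to earlier points
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.transformer(src, tgt, tgt_mask=mask))

model = PileLoadTransformer()
tokens = torch.randn(4, 10, 8)          # 10 soil/pile feature tokens per pile
prev_curve = torch.randn(4, 15, 2)      # previously observed load-deformation points
print(model(tokens, prev_curve).shape)  # torch.Size([4, 15, 2])
```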

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). In particular, FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships for latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules including an intra-feature relation modeling module and an inter-feature relation modeling module are developed in FRN. Experimental results on both the in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and the in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
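
A minimal sketch of the decomposition-reconstruction idea, assuming concrete module choices that the abstract does not specify: a bank of linear projections stands in for the Feature Decomposition Network, a per-latent gating plays the role of intra-feature relation modeling, and multi-head attention across the latent set plays the role of inter-feature relation modeling before reconstruction.

```python
import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    """Decompose a backbone feature into latent features (FDN stand-in), model
    intra- and inter-feature relations, and reconstruct an expression feature
    (FRN stand-in) before classification."""
    def __init__(self, feat_dim=512, n_latent=9, latent_dim=64, n_classes=7):
        super().__init__()
        # decomposition: one projection per facial-action-aware latent feature
        self.decompose = nn.ModuleList(
            nn.Linear(feat_dim, latent_dim) for _ in range(n_latent))
        # intra-feature relation: an importance weight per latent feature
        self.intra = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())
        # inter-feature relation: attention across the latent set
        self.inter = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
        self.classify = nn.Linear(latent_dim, n_classes)

    def forward(self, x):                                                   # x: (batch, feat_dim)
        latents = torch.stack([proj(x) for proj in self.decompose], dim=1)  # (B, K, D)
        weights = self.intra(latents)                                       # (B, K, 1)
        related, _ = self.inter(latents, latents, latents)                  # (B, K, D)
        expression_feature = (weights * related).sum(dim=1)                 # reconstruction
        return self.classify(expression_feature)

logits = FDRLSketch()(torch.randn(16, 512))
print(logits.shape)                                          # torch.Size([16, 7])
```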

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited resource for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land-cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small amount of training data.
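
A minimal sketch of a spectral-spatial discriminator in the semi-supervised GAN style the abstract suggests, classifying hyperspectral cubes into land-cover classes plus a "fake" class; the 3-D convolutional layout is assumed, and the generator and conditional-random-field refinement are omitted.

```python
import torch
import torch.nn as nn

class SpectralSpatialDiscriminator(nn.Module):
    """3-D convolutional discriminator over hyperspectral cubes, with one extra
    output class for generated (fake) samples."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            # joint spectral (depth) and spatial filtering
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=(7, 3, 3), stride=(2, 1, 1), padding=(3, 1, 1)),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classify = nn.Linear(32, n_classes + 1)   # land-cover classes + "fake"

    def forward(self, cube):                  # cube: (batch, 1, bands, height, width)
        return self.classify(self.features(cube).flatten(1))

logits = SpectralSpatialDiscriminator()(torch.randn(4, 1, 103, 9, 9))
print(logits.shape)                           # torch.Size([4, 10])
```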

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
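
The core operation the Transformer is built from is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; a minimal single-head, unmasked NumPy sketch:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single head, without masking."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))    # 5 query positions, d_k = 8
K = rng.standard_normal((7, 8))    # 7 key positions
V = rng.standard_normal((7, 16))   # values with d_v = 16
print(scaled_dot_product_attention(Q, K, V).shape)       # (5, 16)
```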
