
The Elo rating system is a simple and widely used method for estimating players' skills from paired-comparison data. Many have extended it in various ways, yet the question of updating players' variances remains underexplored. In this paper, we address variance updating by combining a Laplace approximation of the posterior distribution with a random walk model for the dynamics of players' strengths and a lower bound on players' variances. The random walk model is motivated by the Glicko system, but here we assume non-identically distributed increments to account for player heterogeneity. Experiments on men's professional matches show that prediction accuracy improves slightly when the variance update is performed, and that new players' strengths may be better captured with the variance update.
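As a rough illustration of the kind of update such a system performs, the sketch below applies a Glicko-style random-walk variance inflation followed by a Laplace-style mean/variance step with a variance floor. The function names, constants, and the exact form of the update are illustrative assumptions, not the paper's algorithm.

```python
import math

def win_prob(mu_a, mu_b, scale=400.0):
    """Logistic (Elo-style) win probability of player A over player B."""
    return 1.0 / (1.0 + 10.0 ** ((mu_b - mu_a) / scale))

def update_player_a(mu_a, var_a, mu_b, outcome,
                    drift_a=10.0, var_floor=30.0):
    """One match update for player A (player B's update is analogous).

    outcome:   1.0 if A wins, 0.0 if A loses.
    drift_a:   per-period random-walk variance added to A (player-specific).
    var_floor: lower bound kept on the posterior variance.
    """
    # Random-walk prior: inflate the variance before seeing the match.
    var_a += drift_a
    p = win_prob(mu_a, mu_b)
    # Gradient and curvature of the log-likelihood w.r.t. mu_a (Laplace step).
    c = math.log(10.0) / 400.0
    grad = c * (outcome - p)
    hess = c * c * p * (1.0 - p)
    new_var_a = max(1.0 / (1.0 / var_a + hess), var_floor)
    new_mu_a = mu_a + new_var_a * grad
    return new_mu_a, new_var_a
```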

Related Content

We propose a theoretically justified and practically applicable slice-sampling-based Markov chain Monte Carlo (MCMC) method for approximate sampling from probability measures on Riemannian manifolds. The latter naturally arise as posterior distributions in Bayesian inference of matrix-valued parameters, for example belonging to either the Stiefel or the Grassmann manifold. Our method, called geodesic slice sampling, is reversible with respect to the distribution of interest and generalizes Hit-and-run slice sampling on $\mathbb{R}^{d}$ to Riemannian manifolds by using geodesics instead of straight lines. Through extensive numerical experiments on both synthetic and real data, we demonstrate that our sampler is robust compared to other MCMC methods for manifold-valued distributions. In particular, we illustrate its remarkable ability to cope with anisotropic target densities without using gradient information or preconditioning.
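A minimal sketch of the geodesic idea on the unit sphere, where geodesics are great circles and the angular parameter can be shrunk as in standard shrinkage slice sampling. The shrinkage schedule and the fallback are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def geodesic_slice_step_sphere(log_density, x, rng, max_shrink=100):
    """One geodesic slice-sampling step on the unit sphere S^{d-1}.

    Picks a random great circle through x, draws a slice level, and shrinks
    the angular interval until a point above the level is found.
    """
    d = x.shape[0]
    # Slice level under the (unnormalized) log density.
    log_level = log_density(x) + np.log(rng.uniform())
    # Random unit tangent direction at x defines the great circle.
    v = rng.normal(size=d)
    v -= v.dot(x) * x
    v /= np.linalg.norm(v)
    # Great circles have period 2*pi; shrink the interval toward angle 0,
    # which corresponds to the current point x and is always acceptable.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    for _ in range(max_shrink):
        y = np.cos(theta) * x + np.sin(theta) * v
        if log_density(y) > log_level:
            return y
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
    return x  # fall back to the current point if shrinkage fails numerically
```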

In real-world tasks, there is usually a large amount of unlabeled data alongside labeled data; the task of learning from both is known as semi-supervised learning. Experts can use logical rules to label unlabeled data, but this operation is costly. Combining perception and reasoning works well for such semi-supervised tasks with domain knowledge. However, acquiring domain knowledge and the correction, reduction, and generation of rules remain complex open problems. Rough set theory is an important method for knowledge processing in information systems. In this paper, we propose rule-general abductive learning by rough set (RS-ABL). By transforming the target concept and the sub-concepts of rules into information tables, rough set theory is used to acquire domain knowledge and to correct, reduce, and generate rules at a lower cost. The framework can also generate more extensive negative rules to enhance the breadth of the knowledge base. Compared with traditional semi-supervised learning methods, RS-ABL achieves higher accuracy on semi-supervised tasks.
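RS-ABL itself is not reproduced here; the sketch below only illustrates the rough-set primitives (lower and upper approximations of a concept under an indiscernibility relation) on which rule correction, reduction, and generation can be built. The data layout is an illustrative assumption.

```python
from collections import defaultdict

def approximations(universe, attributes, target):
    """Rough-set lower and upper approximations of a target concept.

    universe:   iterable of objects (hashable ids).
    attributes: dict mapping object -> tuple of attribute values; objects with
                identical tuples are indiscernible (one equivalence class).
    target:     set of objects representing the concept to approximate.
    """
    # Partition the universe into equivalence classes of indiscernibility.
    classes = defaultdict(set)
    for obj in universe:
        classes[attributes[obj]].add(obj)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= target:      # class lies entirely inside the concept
            lower |= block
        if block & target:       # class overlaps the concept
            upper |= block
    return lower, upper          # boundary region = upper - lower
```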

We present ParrotTTS, a modularized text-to-speech synthesis model leveraging disentangled self-supervised speech representations. It can train a multi-speaker variant effectively using transcripts from a single speaker. ParrotTTS adapts to a new language in a low-resource setup and generalizes to languages not seen while training the self-supervised backbone. Moreover, without training on bilingual or parallel examples, ParrotTTS can transfer voices across languages while preserving speaker-specific characteristics, e.g., synthesizing fluent Hindi speech using a French speaker's voice and accent. We present extensive results in monolingual and multilingual scenarios. ParrotTTS outperforms state-of-the-art multilingual TTS models while using only a fraction of the paired data required by the latter.

Model sparsification in deep learning promotes simpler, more interpretable models with fewer parameters. This not only reduces the model's memory footprint and computational needs but also shortens inference time. This work focuses on creating sparse models, optimized for multiple tasks, that use fewer parameters; such parsimonious models can also match or outperform dense models. We introduce channel-wise $\ell_1/\ell_2$ group sparsity in the shared convolutional layer parameters (weights) of a multi-task learning model. This approach facilitates the removal of extraneous groups, i.e., channels (due to the $\ell_1$ term), and also penalizes the remaining weights, further improving learning across all tasks (due to the $\ell_2$ term). We analyze group sparsity in both single-task and multi-task settings on two widely used Multi-Task Learning (MTL) datasets: NYU-v2 and CelebAMask-HQ. On both datasets, each comprising three different computer vision tasks, multi-task models with approximately 70% sparsity outperform their dense equivalents. We also investigate how the degree of sparsification influences the model's performance, the overall sparsity percentage, the patterns of sparsity, and the inference time.
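A minimal sketch of such a channel-wise group penalty in PyTorch: each output channel of a shared convolution is one group, the sum of group norms encourages whole channels to vanish, and a plain squared penalty shrinks the surviving weights. The hyperparameter names and values are illustrative assumptions, and the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

def channel_group_sparsity(conv: nn.Conv2d, lam_l1: float, lam_l2: float) -> torch.Tensor:
    """Channel-wise l1/l2 (group-lasso style) penalty on a shared conv layer."""
    w = conv.weight                                # shape: (out_ch, in_ch, kH, kW)
    group_norms = w.flatten(1).norm(p=2, dim=1)    # one l2 norm per output channel
    l1_groups = group_norms.sum()                  # sparsity across channels
    l2_all = w.pow(2).sum()                        # plain weight decay on all weights
    return lam_l1 * l1_groups + lam_l2 * l2_all

# Usage sketch: add the penalty of every shared conv layer to the multi-task loss.
# loss = sum(task_losses) + sum(channel_group_sparsity(m, 1e-4, 1e-4)
#                               for m in shared_backbone.modules()
#                               if isinstance(m, nn.Conv2d))
```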

In tensor optimization, low-rank tensor decomposition, and in particular Tucker decomposition, is a pivotal technique for reducing the number of parameters and saving storage. We explore Tucker tensor varieties -- the set of tensors with bounded Tucker rank -- whose geometry is notably more intricate than the well-explored geometry of matrix varieties. We give an explicit parametrization of the tangent cone of Tucker tensor varieties and leverage its geometry to develop provable gradient-related line-search methods for optimization on Tucker tensor varieties. The search directions are computed from approximate projections of the antigradient onto the tangent cone, which circumvents the calculation of intractable metric projections. To the best of our knowledge, this is the first work on the geometry of, and optimization on, Tucker tensor varieties. In practice, low-rank tensor optimization suffers from the difficulty of choosing a reliable rank parameter. To this end, we build on the established geometry and propose a Tucker rank-adaptive method that identifies an appropriate rank during the iterations while still guaranteeing convergence. Numerical experiments on tensor completion with synthetic and real-world datasets show that the proposed methods achieve better recovery performance than other state-of-the-art methods. Moreover, the rank-adaptive method performs best across various rank-parameter selections and is indeed able to find an appropriate rank.
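The paper's approximate tangent-cone projections are not reproduced here. As a point of reference, the sketch below computes a truncated higher-order SVD, a standard quasi-optimal surrogate for projecting onto the set of tensors with bounded Tucker rank; variable names are illustrative.

```python
import numpy as np

def unfold(t, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def truncated_hosvd(t, ranks):
    """Truncated HOSVD: approximate a tensor by one of Tucker rank <= ranks."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])               # leading r left singular vectors
    core = t
    for mode, u in enumerate(factors):
        # Contract mode `mode` of the core with u^T and restore the axis order.
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    return core, factors                        # t is approximated by core x_1 U1 x_2 U2 ...
```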

Recently, Eldan, Koehler, and Zeitouni (2020) showed that Glauber dynamics mixes rapidly for general Ising models so long as the difference between the largest and smallest eigenvalues of the coupling matrix is at most $1 - \epsilon$ for any fixed $\epsilon > 0$. We give evidence that Glauber dynamics is in fact optimal for this "general-purpose sampling" task. Namely, we give an average-case reduction from hypothesis testing in a Wishart negatively-spiked matrix model to approximately sampling from the Gibbs measure of a general Ising model for which the difference between the largest and smallest eigenvalues of the coupling matrix is at most $1 + \epsilon$ for any fixed $\epsilon > 0$. Combined with results of Bandeira, Kunisky, and Wein (2019) that analyze low-degree polynomial algorithms to give evidence for the hardness of the former spiked matrix problem, our results in turn give evidence for the hardness of general-purpose sampling improving on Glauber dynamics. We also give a similar reduction to approximating the free energy of general Ising models, and again infer evidence that simulated annealing algorithms based on Glauber dynamics are optimal in the general-purpose setting.
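For context, here is a minimal sketch of the Glauber dynamics sampler in question: each step resamples a single uniformly chosen spin from its conditional distribution under the Ising Gibbs measure. The sign and scaling conventions for the coupling matrix are assumptions made for the sketch.

```python
import numpy as np

def glauber_dynamics(J, h, n_steps, rng):
    """Glauber dynamics for an Ising model with coupling matrix J and field h.

    Gibbs measure: p(x) proportional to exp(x^T J x / 2 + h^T x), x in {-1,+1}^n,
    with J symmetric and (assumed) zero diagonal.
    """
    n = J.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(n_steps):
        i = rng.integers(n)
        # Local field acting on spin i (excluding any self-coupling).
        field = J[i] @ x - J[i, i] * x[i] + h[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.uniform() < p_plus else -1.0
    return x
```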

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decomposition or factorization, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage. In this paper, starting from tensor matricization, we aim to "decode" the estimation of a higher-order tensor factor model by recasting it into mode-wise traditional high-dimensional vector/fiber factor models, so as to deploy conventional principal component analysis (PCA). Using the Tucker tensor factor model (TuTFaM), induced from the popular Tucker decomposition, we show that estimation of the signal components is essentially a mode-wise PCA technique, and that incorporating projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments under TuTFaM, and illustrate how the proposed estimators work in tensor reconstruction and in clustering, for video and economic datasets respectively.
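The following sketch shows only the vanilla mode-wise PCA step on which such estimators build: each mode's loading space is estimated from the top eigenvectors of the mode-k sample second-moment matrix. The projected and iterative refinements mentioned above are not reproduced, and the plain second-moment estimator and variable names are illustrative assumptions.

```python
import numpy as np

def modewise_pca_loadings(samples, ranks):
    """Mode-wise PCA estimates of Tucker-type loading matrices.

    samples: array of shape (T, n1, ..., nd) holding T tensor observations.
    ranks:   number of factors to keep per mode.
    Returns one loading-matrix estimate per mode.
    """
    T = samples.shape[0]
    d = samples.ndim - 1
    loadings = []
    for mode in range(d):
        nk = samples.shape[mode + 1]
        m = np.zeros((nk, nk))
        for t in range(T):
            unf = np.moveaxis(samples[t], mode, 0).reshape(nk, -1)  # mode-k unfolding
            m += unf @ unf.T / T                                    # sample second moment
        eigvals, eigvecs = np.linalg.eigh(m)                        # ascending order
        loadings.append(eigvecs[:, ::-1][:, :ranks[mode]])          # top-r eigenvectors
    return loadings
```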

Unmanned Surface Vehicles (USVs) play a pivotal role in various applications, including surface rescue, commercial transactions, scientific exploration, water rescue, and military operations. Effective control of high-speed unmanned surface boats is a critical aspect of the overall USV system, particularly in challenging environments marked by complex surface obstacles and dynamic conditions such as time-varying surges, non-directional forces, and unpredictable winds. In this paper, we propose a data-driven control method based on Koopman theory: a high-dimensional linear model is constructed by mapping the low-dimensional nonlinear model to a higher-dimensional linear space through data identification. The observable USV dynamical system is reconstructed online using error learning. To enhance tracking-control accuracy, we use a Control Lyapunov Function (CLF)-Control Barrier Function (CBF)-Quadratic Programming (QP) approach to regulate the identified high-dimensional linear dynamical system. This approach facilitates error compensation, thereby achieving more precise tracking control.
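To make the data-driven lifting concrete, here is a minimal extended-DMD-style sketch: snapshots are lifted with a dictionary of observables and a linear Koopman matrix is fitted by least squares. The dictionary choice, array shapes, and the omission of the online error-learning and CLF-CBF-QP layers are simplifying assumptions, not the paper's pipeline.

```python
import numpy as np

def edmd_fit(X, X_next, lift):
    """Extended DMD: fit a linear operator K on lifted states, z' ≈ z K.

    X, X_next: arrays of shape (N, n) with state snapshot pairs (rows).
    lift:      callable mapping an (N, n) state array to (N, m) observables.
    """
    Z = lift(X)                                    # lifted snapshots, shape (N, m)
    Z_next = lift(X_next)
    # Least-squares Koopman matrix: K = argmin ||Z_next - Z K||_F
    K, *_ = np.linalg.lstsq(Z, Z_next, rcond=None)
    return K

def poly_lift(X):
    """A simple polynomial dictionary [1, x, x_i * x_j] (illustrative choice)."""
    N, n = X.shape
    quad = np.einsum('ni,nj->nij', X, X).reshape(N, n * n)
    return np.hstack([np.ones((N, 1)), X, quad])
```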

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) for this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have revolutionized the field of computer vision, especially with the advent of novel, deeper architectures such as residual and convolutional neural networks. Beyond images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance in document classification and speech recognition. In this article, we study the current state of the art in deep learning for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
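As a concrete point of reference for the kind of architecture benchmarked in such studies, below is a minimal fully convolutional 1D network for univariate TSC in PyTorch. Layer widths and kernel sizes are illustrative, not the exact models from the review.

```python
import torch
import torch.nn as nn

class SimpleFCN(nn.Module):
    """A minimal fully convolutional network for univariate time series
    classification: three conv blocks followed by global average pooling."""

    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=8, padding='same'), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding='same'), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding='same'), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                  # x: (batch, 1, series_length)
        z = self.features(x).mean(dim=-1)  # global average pooling over time
        return self.head(z)
```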

Training a deep architecture using a ranking loss has become standard for the person re-identification task. Increasingly, these deep architectures include additional components that leverage part detections, attribute predictions, pose estimators and other auxiliary information, in order to more effectively localize and align discriminative image regions. In this paper we adopt a different approach and carefully design each component of a simple deep architecture and, critically, the strategy for training it effectively for person re-identification. We extensively evaluate each design choice, leading to a list of good practices for person re-identification. By following these practices, our approach outperforms the state of the art, including more complex methods with auxiliary components, by large margins on four benchmark datasets. We also provide a qualitative analysis of our trained representation which indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.
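The ranking loss mentioned above is commonly instantiated as a batch-hard triplet loss; the sketch below shows one such formulation in PyTorch. The margin value and the hard-mining strategy are illustrative assumptions rather than the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet (ranking) loss, a common choice for re-identification.

    embeddings: (N, d) feature vectors for one mini-batch.
    labels:     (N,) identity labels.
    """
    dist = torch.cdist(embeddings, embeddings)             # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    # Hardest positive: farthest sample with the same identity (excluding self).
    pos_dist = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    neg_dist = dist.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(pos_dist - neg_dist + margin).mean()
```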
