
A simple way of obtaining robust estimates of the "center" (or "location") and of the "spread" of a dataset is to use the maximum likelihood estimate with a class of heavy-tailed distributions, regardless of the "true" distribution generating the data. We observe that the maximum likelihood problem for the Cauchy distributions, which have particularly heavy tails, is geodesically convex and therefore efficiently solvable (Cauchy distributions are parametrized by the upper half-plane, i.e. by the hyperbolic plane). Moreover, it has an appealing geometric meaning: the datapoints, living on the boundary of the hyperbolic plane, attract the parameter with unit forces, and we seek the point where these forces are in equilibrium. This picture generalizes to several classes of multivariate distributions with heavy tails, including, in particular, the multivariate Cauchy distributions. The hyperbolic plane is replaced by symmetric spaces of noncompact type. Geodesic convexity gives us an efficient numerical solution of the maximum likelihood problem for these distribution classes, which can then be used for robust estimates of location and spread, thanks to the heavy tails of these distributions.
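
As a minimal illustration of the idea, the following sketch fits a Cauchy location and scale by maximum likelihood with a generic optimizer; the parametrization by (mu, sigma) with sigma > 0 corresponds to a point in the upper half-plane. The synthetic data and starting point are invented for the example.

```python
import numpy as np
from scipy import optimize

def cauchy_negloglik(theta, x):
    # negative log-likelihood of a Cauchy(mu, sigma) sample,
    # dropping the additive constant log(pi)
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # keeps sigma > 0 (upper half-plane)
    return np.sum(np.log(sigma) + np.log1p(((x - mu) / sigma) ** 2))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), [50.0, -80.0]])  # gross outliers

res = optimize.minimize(cauchy_negloglik, x0=[np.median(x), 0.0], args=(x,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # location estimate is barely moved by the outliers
```

Geodesic convexity is what makes this reliable: the negative log-likelihood has no spurious local minima, so a local optimizer returns the global maximum likelihood estimate.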

Related content

In modern high-density data storage, the outputs of the channels consist of overlapping pairs of symbols; to correct the pair errors generated during transmission, a new coding scheme named the symbol-pair code has been proposed. The error-correcting capability of a symbol-pair code is determined by its minimum symbol-pair distance: the larger the minimum symbol-pair distance, the better. It is a challenging task to construct symbol-pair codes with optimal parameters, especially maximum-distance-separable (MDS) symbol-pair codes. In this paper, permutation-equivalent codes of matrix-product codes with underlying matrices of orders 3 and 4 are used to extend the minimum symbol-pair distance, and six new classes of MDS symbol-pair codes are derived.
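
For concreteness, here is a brute-force sketch of the minimum symbol-pair distance, using the standard cyclic symbol-pair read; the toy codewords at the end are invented for illustration.

```python
from itertools import combinations

def pair_vector(x):
    # cyclic symbol-pair read: ((x0,x1), (x1,x2), ..., (x_{n-1},x0))
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def pair_distance(x, y):
    # Hamming distance between the two symbol-pair reads
    return sum(a != b for a, b in zip(pair_vector(x), pair_vector(y)))

def min_pair_distance(code):
    return min(pair_distance(u, v) for u, v in combinations(code, 2))

# toy check: these words are at Hamming distance 2 but pair distance 3,
# reflecting the general bound d_H < d_p <= 2 * d_H for 0 < d_H < n
print(pair_distance((0, 0, 0, 0), (0, 1, 1, 0)))                # 3
print(min_pair_distance([(0, 0, 0, 0), (0, 1, 1, 0), (1, 1, 1, 1)]))
```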

The increasing availability of temporal data poses a challenge to the time-series and signal-processing domains due to its high numerosity and complexity. Symbolic representation outperforms raw data in a variety of engineering applications due to its storage efficiency, reduced numerosity, and noise reduction. The most recent symbolic aggregate approximation technique, ABBA, demonstrates outstanding performance in preserving the essential shape information of time series and enhancing downstream applications. However, ABBA cannot handle multiple time series with consistent symbols, i.e., identical symbols from distinct time series do not represent the same shape. Moreover, choosing an appropriate ABBA digitization involves the tedious task of tuning hyperparameters such as the number of symbols or the tolerance. We therefore present a joint symbolic aggregate approximation that guarantees symbolic consistency, and show how the digitization hyperparameter can itself be optimized alongside the compression tolerance ahead of time. In addition, we propose a novel computing paradigm that enables parallel computation of the symbolic approximation. Extensive experiments demonstrate its superb performance and outstanding speed in symbolic approximation and reconstruction.
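
A simplified sketch of the joint-digitization idea: compress each series into (length, increment) pieces and fit a single clustering over the pooled pieces, so a given symbol means the same shape in every series. The fixed segment width below stands in for ABBA's adaptive, tolerance-driven compression, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def to_pieces(ts, width=10):
    # simplified compression: fixed-width segments summarized by
    # (length, increment); real ABBA adapts segment lengths to a tolerance
    n = (len(ts) // width) * width
    segs = ts[:n].reshape(-1, width)
    return np.column_stack([np.full(len(segs), float(width)),
                            segs[:, -1] - segs[:, 0]])

rng = np.random.default_rng(0)
series = [np.cumsum(rng.standard_normal(200)) for _ in range(3)]

# joint digitization: one clustering over the pooled pieces of all series
pieces = [to_pieces(ts) for ts in series]
km = KMeans(n_clusters=5, n_init=10).fit(np.vstack(pieces))
symbols = [km.predict(p) for p in pieces]   # consistent symbols across series
print(symbols[0][:10])
```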

Pseudo-Hamiltonian neural networks (PHNN) were recently introduced for learning dynamical systems that can be modelled by ordinary differential equations. In this paper, we extend the method to partial differential equations. The resulting model comprises up to three neural networks, modelling terms that represent conservation, dissipation and external forces, together with discrete convolution operators that can either be learned or given as input. We demonstrate numerically the superior performance of PHNN compared to a baseline model that captures the full dynamics with a single neural network. Moreover, since the PHNN model consists of three parts with different physical interpretations, these can be studied separately to gain insight into the system, and the learned model remains applicable even if the external forces are removed or changed.
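
The three-part structure lends itself to a compact sketch. The PyTorch module below is a hypothetical rendering of the decomposition described above, with pointwise networks for the conservation, dissipation and forcing terms and learned convolution stencils; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PseudoHamiltonianPDE(nn.Module):
    # hypothetical sketch: du/dt = S(dH/du) - R(dV/du) + f(u), where S and R
    # are discrete convolution operators and H, V, f are small pointwise nets
    def __init__(self, width=32):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                 nn.Linear(width, 1))
        self.H, self.V, self.f = mlp(), mlp(), mlp()
        self.S = nn.Conv1d(1, 1, 3, padding=1, bias=False)  # learned stencil
        self.R = nn.Conv1d(1, 1, 3, padding=1, bias=False)

    def forward(self, u):                     # u: (batch, n_gridpoints)
        u = u.detach().clone().requires_grad_(True)
        v = u.unsqueeze(-1)                   # (batch, n, 1), in the graph
        gH = torch.autograd.grad(self.H(v).sum(), v, create_graph=True)[0]
        gV = torch.autograd.grad(self.V(v).sum(), v, create_graph=True)[0]
        conv = lambda C, g: C(g.transpose(1, 2)).transpose(1, 2)
        du = conv(self.S, gH) - conv(self.R, gV) + self.f(v)
        return du.squeeze(-1)                 # predicted du/dt on the grid

model = PseudoHamiltonianPDE()
print(model(torch.randn(4, 64)).shape)        # torch.Size([4, 64])
```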

In this paper, we present a constraining principle governing the spectral properties of the sample covariance matrix. This principle exhibits harmonious behavior across diverse limiting frameworks, eliminating the need for constraints on the rates of the dimension $p$ and the sample size $n$, as long as both tend to infinity. We accomplish this by applying a suitable normalization to the original sample covariance matrix. We then establish a harmonic central limit theorem for linear spectral statistics within this expansive framework. This result removes the need for a bounded spectral norm on the population covariance matrix and relaxes the constraints on the rates of $p$ and $n$, thereby significantly broadening the applicability of these results in high-dimensional statistics. We illustrate the power of the established results with a test for covariance structure under high dimensionality, with both $p$ and $n$ unconstrained.
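
The following sketch computes a linear spectral statistic of a normalized sample covariance matrix with $p > n$; the trace normalization and the test function are stand-ins chosen for the example, not the paper's exact normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 500, 300                       # dimension exceeds sample size
X = rng.standard_normal((p, n))       # population covariance = identity

S = X @ X.T / n                       # sample covariance matrix
S_norm = S * p / np.trace(S)          # illustrative normalization (assumed)
evals = np.linalg.eigvalsh(S_norm)    # p - n of these are exactly zero

# linear spectral statistic sum_i f(lambda_i) for the test function f = log1p
lss = np.sum(np.log1p(evals))
print(lss)
```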

It is well known that mood and pain interact with each other; however, individual-level variability in this relationship has been less well quantified than the overall association between low mood and pain. Here, we leverage the possibilities presented by mobile health data, in particular the "Cloudy with a Chance of Pain" study, which collected longitudinal data from UK residents with chronic pain conditions. Participants used an app to record self-reported measures of factors including mood, pain and sleep quality. The richness of these data allows us to perform model-based clustering of the data as a mixture of Markov processes. Through this analysis we discover four endotypes with distinct patterns of co-evolution of mood and pain over time. The differences between endotypes are sufficiently large to play a role in clinical hypothesis generation for personalised treatments of comorbid pain and low mood.
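
A generic sketch of the clustering step: EM for a mixture of K Markov chains over S discrete states. The state coding, hyperparameters and synthetic sequences are invented, and the study's actual model of coupled mood and pain is richer than this.

```python
import numpy as np

def em_markov_mixture(seqs, K, S, iters=50, seed=0):
    # EM for a mixture of K Markov chains over S states -- a generic sketch
    # of model-based clustering, not the study's exact specification
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(S), size=(K, S))      # per-cluster transitions
    w = np.full(K, 1.0 / K)                         # cluster weights
    for _ in range(iters):
        # E-step: responsibility of each cluster for each sequence
        ll = np.array([[np.log(w[k]) + np.log(P[k, s[:-1], s[1:]]).sum()
                        for k in range(K)] for s in seqs])
        r = np.exp(ll - ll.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: reweight transition counts by responsibilities
        w = r.mean(0)
        for k in range(K):
            C = np.full((S, S), 1e-6)               # smoothing avoids log(0)
            for i, s in enumerate(seqs):
                np.add.at(C, (s[:-1], s[1:]), r[i, k])
            P[k] = C / C.sum(1, keepdims=True)
    return w, P, r                                   # r: soft cluster labels

rng = np.random.default_rng(1)
seqs = [rng.integers(0, 3, size=40) for _ in range(20)]
w, P, resp = em_markov_mixture(seqs, K=4, S=3)
print(resp.argmax(1))   # hard endotype assignment per participant
```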

The effect that different police protest-management methods have on protesters' physical and mental trauma is still not well understood and is a matter of debate. In this paper, we take a two-pronged approach to gain insight into this issue. First, we perform statistical analysis on time-series data of protests provided by ACLED, spanning the period from January 1, 2020 to March 13, 2021. We observe that the use of kinetic impact projectiles is associated with more protests in subsequent days and is also a better predictor of the number of deaths in subsequent days than the number of protests, leading us to conclude that the use of non-lethal weapons appears to have an inflammatory rather than a suppressive effect on protests. Next, we provide a mathematical framework to model modern but well-established psychological and sociological research on compliance theory and crowd dynamics. Our results show that understanding the heterogeneity of the crowd is key to protests that lead to a reduction of social tension and minimize physical and mental trauma in protesters.
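
A hedged sketch of the kind of lagged comparison described above: a Poisson regression of deaths on the previous day's protest count and an indicator of kinetic-impact-projectile use. All column names and the synthetic data are invented for illustration; the ACLED variables differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"protests": rng.poisson(5, 365),
                   "kip_used": rng.integers(0, 2, 365)})
# synthetic deaths, made to depend on prior-day projectile use
df["deaths"] = rng.poisson(0.2 + 0.5 * df["kip_used"].shift(1).fillna(0))

lagged = sm.add_constant(df[["protests", "kip_used"]].shift(1)).dropna()
fit = sm.GLM(df["deaths"].iloc[1:], lagged,
             family=sm.families.Poisson()).fit()
print(fit.params)   # compare the coefficients of the two lagged predictors
```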

In fitting continuous bounded data, the generalized beta (and several variants of this distribution) and the two-parameter Kumaraswamy (KW) distributions are the two most prominent univariate continuous distributions that come to mind. These two rival probability models share several common features, and selecting between them in a practical situation is of great interest. Consequently, in this paper, we discuss various methods of selection between the generalized beta proposed by Libby and Novick (1982) (LNGB) and the KW distribution, such as criteria based on the probability of correct selection, which improve on the likelihood-ratio-statistic approach, as well as criteria based on pseudo-distance measures. We obtain an approximation for the probability of correct selection under the hypotheses $H_{\mathrm{LNGB}}$ and $H_{\mathrm{KW}}$, and select the model that maximizes it. Our proposal is all the more appealing in that the LNGB distribution subsumes both the classical beta and exponentiated generators (see, for details, Cordeiro et al. 2014; Libby and Novick 1982), making it a natural competitor of the two-parameter KW distribution in an appropriate scenario.
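
As a simple baseline for the likelihood-ratio approach mentioned above, the sketch below fits a two-parameter Kumaraswamy model and a classical beta model (a two-parameter special case of the LNGB family) by maximum likelihood and picks the larger maximized log-likelihood; the data are synthetic.

```python
import numpy as np
from scipy import optimize, stats

def kw_negloglik(theta, x):
    # Kumaraswamy density: f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1) on (0, 1)
    a, b = np.exp(theta)                       # enforce positivity
    return -np.sum(np.log(a) + np.log(b) + (a - 1) * np.log(x)
                   + (b - 1) * np.log1p(-x ** a))

x = stats.beta(2.0, 5.0).rvs(500, random_state=0)   # synthetic bounded data

kw = optimize.minimize(kw_negloglik, x0=[0.0, 0.0], args=(x,))
a_hat, b_hat, loc, scale = stats.beta.fit(x, floc=0, fscale=1)
ll_kw = -kw.fun
ll_beta = np.sum(stats.beta(a_hat, b_hat).logpdf(x))

# select the model with the larger maximized log-likelihood
print("beta" if ll_beta > ll_kw else "KW", ll_beta, ll_kw)
```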

Multiscale stochastic dynamical systems have been widely adopted in a variety of scientific and engineering problems due to their capability of depicting complex phenomena in many real-world applications. This work is devoted to investigating the effective dynamics of slow-fast stochastic dynamical systems. Given short-term observation data generated by some unknown slow-fast stochastic system, we propose a novel algorithm, featuring a neural network called Auto-SDE, to learn an invariant slow manifold. Our approach captures the evolutionary nature of a series of time-dependent autoencoder neural networks, with a loss constructed from a discretized stochastic differential equation. Numerical experiments under various evaluation metrics validate that our algorithm is accurate, stable and effective.
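
A hypothetical sketch of the loss structure suggested by the description above: an autoencoder extracts slow variables, and the latent path is penalized for violating an Euler-Maruyama discretization with a learned drift. The dimensions are invented and the omission of a learned diffusion term is a simplification.

```python
import torch
import torch.nn as nn

# encoder/decoder map between observed states (dim 10) and slow variables
# (dim 2); a drift net defines the latent SDE (illustrative sizes)
enc = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 2))
dec = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 10))
drift = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

def auto_sde_loss(x_t, x_next, dt=0.01):
    z_t, z_next = enc(x_t), enc(x_next)
    recon = ((dec(z_t) - x_t) ** 2).mean()                 # autoencoding term
    em_residual = ((z_next - z_t - drift(z_t) * dt) ** 2).mean()
    return recon + em_residual        # penalize Euler-Maruyama violation

x = torch.randn(64, 10)
x_next = x + 0.01 * torch.randn(64, 10)   # stand-in for observed increments
loss = auto_sde_loss(x, x_next)
loss.backward()
```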

Bayesian model-averaged hypothesis testing is an important technique in regression because it addresses the problem that the evidence that one variable directly affects an outcome often depends on which other variables are included in the model. This problem is caused by confounding and mediation, and is pervasive in big-data settings with thousands of variables. However, model averaging is under-utilized in fields, like epidemiology, where classical statistical approaches dominate. Here we show that simultaneous Bayesian and frequentist model-averaged hypothesis testing is possible in large samples, for a family of priors. We show that Bayesian model-averaged regression is a closed testing procedure, and use the theory of regular variation to derive interchangeable posterior odds and $p$-values that jointly control the Bayesian false discovery rate (FDR), the frequentist type I error rate, and the frequentist familywise error rate (FWER). These results arise from an asymptotic chi-squared distribution for the model-averaged deviance under the null hypothesis. We call the approach 'Doublethink'. In a related manuscript (Arning, Fryer and Wilson, 2024), we apply it to discovering direct risk factors for COVID-19 hospitalization in UK Biobank, and we discuss its broader implications for bridging the differences between Bayesian and frequentist hypothesis testing.
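
An illustrative use of the asymptotic result: under the null hypothesis the model-averaged deviance is asymptotically chi-squared, so a p-value follows from the survival function. The deviance value and degrees of freedom below are placeholders, not quantities from the paper.

```python
from scipy.stats import chi2

deviance = 12.3   # placeholder model-averaged deviance for one variable
dof = 1           # assumed degrees of freedom for this sketch
p_value = chi2.sf(deviance, dof)
print(p_value)    # compare against a multiplicity-adjusted threshold
```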

We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
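
The conditional utilization rate reduces to a difference of accuracies, as in this small sketch; the modality names and numbers are invented to show the kind of imbalance reported above.

```python
def conditional_utilization_rate(acc_both: float, acc_single: float) -> float:
    # u(m2 | m1): accuracy gain from adding modality m2 on top of m1 alone
    return acc_both - acc_single

# invented numbers for a model that has learned greedily from video
u_audio_given_video = conditional_utilization_rate(acc_both=0.91, acc_single=0.90)
u_video_given_audio = conditional_utilization_rate(acc_both=0.91, acc_single=0.55)
print(u_audio_given_video, u_video_given_audio)  # large gap signals greedy learning
```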
