
Many experimental paradigms in neuroscience involve driving the nervous system with periodic sensory stimuli. Neural signals recorded using a variety of techniques will then include phase-locked oscillations at the stimulation frequency. The analysis of such data often involves standard univariate statistics such as T-tests, conducted on the Fourier amplitude components (ignoring phase), either to test for the presence of a signal, or to compare signals across different conditions. However, the assumptions of these tests will sometimes be violated because amplitudes are not normally distributed, and furthermore weak signals might be missed if the phase information is discarded. An alternative approach is to conduct multivariate statistical tests using the real and imaginary Fourier components. Here the performance of two multivariate extensions of the T-test is compared: Hotelling's $T^2$ and a variant called $T^2_{circ}$. A novel test of the assumptions of $T^2_{circ}$ is developed, based on the condition index of the data (the square root of the ratio of eigenvalues of a bounding ellipse), and a heuristic for excluding outliers using the Mahalanobis distance is proposed. The $T^2_{circ}$ statistic is then extended to multi-level designs, resulting in a new statistical test termed $ANOVA^2_{circ}$. This has identical assumptions to $T^2_{circ}$, and is shown to be more sensitive than MANOVA when these assumptions are met. The use of these tests is demonstrated for two publicly available empirical data sets, and practical guidance is suggested for choosing which test to run. Implementations of these novel tools are provided as an R package and a Matlab toolbox, in the hope that their wider adoption will improve the sensitivity of statistical inferences involving periodic data.
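The core computations described above can be illustrated with a short sketch. The following is a minimal example, not the authors' R package or Matlab toolbox: it runs one-sample Hotelling's $T^2$ on the real and imaginary Fourier components, computes a condition index from the sample covariance ellipse, and applies a simple Mahalanobis-distance outlier rule with a chi-squared cutoff; all function names and the cutoff choice are ours, and the exact $T^2_{circ}$ and $ANOVA^2_{circ}$ formulas are left to the paper.

```python
# Minimal sketch (illustrative, not the paper's toolbox): Hotelling's T^2 on
# the real/imaginary Fourier components, a covariance-based condition index,
# and a chi-squared Mahalanobis outlier heuristic. Cutoffs are assumptions.
import numpy as np
from scipy import stats

def hotelling_t2(z):
    """z: complex array of per-trial (or per-subject) Fourier coefficients."""
    x = np.column_stack([z.real, z.imag])            # n x 2 matrix
    n, p = x.shape
    xbar = x.mean(axis=0)
    S = np.cov(x, rowvar=False)                      # 2 x 2 sample covariance
    t2 = n * xbar @ np.linalg.solve(S, xbar)         # test against zero mean
    f = (n - p) / (p * (n - 1)) * t2                 # ~ F(p, n-p) under H0
    return t2, stats.f.sf(f, p, n - p)

def condition_index(z):
    """Square root of the ratio of eigenvalues of the sample covariance ellipse."""
    S = np.cov(np.column_stack([z.real, z.imag]), rowvar=False)
    eig = np.linalg.eigvalsh(S)
    return np.sqrt(eig.max() / eig.min())

def mahalanobis_outliers(z, alpha=0.01):
    """Flag points whose squared Mahalanobis distance exceeds a chi^2(2) cutoff."""
    x = np.column_stack([z.real, z.imag])
    S_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.array([(xi - x.mean(0)) @ S_inv @ (xi - x.mean(0)) for xi in x])
    return d2 > stats.chi2.ppf(1 - alpha, df=2)
```

A condition index close to 1 suggests the roughly circular scatter that $T^2_{circ}$ assumes, whereas a large index favors the unrestricted Hotelling's $T^2$.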

Related Content

Partially recorded data are frequently encountered in many applications and are usually clustered by first removing incomplete cases or features with missing values, or by imputing missing values, and then applying a clustering algorithm to the resulting altered dataset. Here, we develop clustering methodology through a model-based approach using the marginal density for the observed values, assuming a finite mixture model of multivariate $t$ distributions. We compare our approximate algorithm to the corresponding full expectation-maximization (EM) approach that considers the missing values in the incomplete data set and makes a missing at random (MAR) assumption, as well as to case deletion and imputation methods. Since only the observed values are utilized, our approach is computationally more efficient than imputation or full EM. Simulation studies demonstrate that our approach has favorable recovery of the true cluster partition compared to case deletion and imputation under various missingness mechanisms, and is at least competitive with the full EM approach, even when MAR assumptions are violated. Our methodology is demonstrated on a problem of clustering gamma-ray bursts and is implemented at //github.com/emilygoren/MixtClust.
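To make the "marginal density of the observed values" idea concrete, the following sketch, which is not the MixtClust code, evaluates cluster responsibilities for one partially observed row using the fact that the marginal of a multivariate $t$ over a subset of coordinates is again multivariate $t$ with the same degrees of freedom; all names and the SciPy dependency are our choices.

```python
# Illustrative sketch (not MixtClust): E-step responsibilities using only the
# observed coordinates of a row, via the marginal multivariate-t density.
import numpy as np
from scipy.stats import multivariate_t

def responsibilities(x, pis, mus, Sigmas, dfs):
    """x: one observation with np.nan marking missing entries (at least one observed)."""
    obs = ~np.isnan(x)
    w = np.empty(len(pis))
    for k, (pi_k, mu, Sigma, df) in enumerate(zip(pis, mus, Sigmas, dfs)):
        marg = multivariate_t(loc=mu[obs], shape=Sigma[np.ix_(obs, obs)], df=df)
        w[k] = pi_k * marg.pdf(x[obs])               # marginal density, observed part
    return w / w.sum()
```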

The essential interactive capacity of a discrete memoryless channel is defined in this paper as the maximal rate at which the transcript of any interactive protocol can be reliably simulated over the channel, using a deterministic coding scheme. In contrast to other interactive capacity definitions in the literature, this definition makes no assumptions on the order of speakers (which can be adaptive) and does not allow any use of private/public randomness; hence, the essential interactive capacity is a function of the channel model only. It is shown that the essential interactive capacity of any binary memoryless symmetric (BMS) channel is at least $0.0302$ times its Shannon capacity. To that end, we present a simple coding scheme, based on extended-Hamming codes combined with error detection, that achieves the lower bound in the special case of the binary symmetric channel (BSC). We then adapt the scheme to the entire family of BMS channels, and show that it achieves the same lower bound using extremes of the Bhattacharyya parameter.
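The extended-Hamming ingredient of the scheme can be sketched in isolation; the interactive simulation protocol itself is not reproduced here. The following assumes a standard [8,4] extended Hamming code used in single-error-correction / double-error-detection (SECDED) mode over a BSC; the bit layout and function names are ours.

```python
# Sketch of the extended-Hamming building block (SECDED), not the paper's
# interactive coding scheme: correct single errors, detect double errors.
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i holds the binary
# expansion of position i+1, so a nonzero syndrome names the error position.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

def encode(data4):
    """Four information bits -> eight coded bits (with an overall parity bit)."""
    cw = np.zeros(7, dtype=int)
    cw[[2, 4, 5, 6]] = data4                    # data at non-power-of-two positions
    for b in range(3):                          # parity bits at positions 1, 2, 4
        cw[2**b - 1] = (H[b] @ cw) % 2
    return np.append(cw, cw.sum() % 2)          # extend with overall parity

def decode(r):
    """Eight received bits -> (four decoded bits, 'ok' / 'corrected' / 'detected')."""
    syndrome = int(((H @ r[:7]) % 2) @ np.array([1, 2, 4]))
    parity_ok = (r.sum() % 2 == 0)
    status = 'ok'
    if syndrome and not parity_ok:              # single error among the first 7 bits
        r = r.copy(); r[syndrome - 1] ^= 1; status = 'corrected'
    elif syndrome and parity_ok:                # two errors: detect only
        status = 'detected'
    elif not syndrome and not parity_ok:        # error in the overall parity bit only
        status = 'corrected'
    return r[[2, 4, 5, 6]], status
```

The detection outcome is the hook for the "combined with error detection" part of the scheme; how it is exploited interactively is described in the paper.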

Accounting for phase variability is a critical challenge in functional data analysis. To separate it from amplitude variation, functional data are registered, i.e., their observed domains are deformed elastically so that the resulting functions are aligned with template functions. At present, most available registration approaches are limited to datasets of complete and densely measured curves with Gaussian noise. However, many real-world functional data sets are not Gaussian and contain incomplete curves, in which the underlying process is not recorded over its entire domain. In this work, we extend and refine a framework for joint likelihood-based registration and latent Gaussian process-based generalized functional principal component analysis that is able to handle incomplete curves. Our approach is accompanied by sophisticated open-source software, allowing for its application in diverse non-Gaussian data settings, and by a public code repository to reproduce all results. We register data from a seismological application comprising spatially indexed, incomplete ground velocity time series with a highly volatile Gamma structure. We describe, implement and evaluate the approach for such incomplete non-Gaussian functional data and compare it to existing routines.

Inference of population structure from genetic data plays an important role in population and medical genetics studies. The traditional EIGENSTRAT method has been widely used for computing and selecting top principal components that capture population structure information (Price et al., 2006). With the advancement and decreasing cost of sequencing technology, whole-genome sequencing data provide much richer information about the underlying population structures. However, the EIGENSTRAT method was originally developed for analyzing array-based genotype data and thus may not perform well on sequencing data, for two reasons. First, the number of genetic variants $p$ is much larger than the sample size $n$ in sequencing data, so that the sample-to-marker ratio $n/p$ is nearly zero, violating the assumption of the Tracy-Widom test used in the EIGENSTRAT method. Second, the EIGENSTRAT method might not be able to handle the linkage disequilibrium (LD) well in sequencing data. To resolve these two critical issues, we propose a new statistical method called ERStruct to estimate the number of latent sub-populations based on sequencing data. We propose to use the ratio of successive eigenvalues as a more robust test statistic, and we approximate the null distribution of the proposed test statistic using modern random matrix theory. Simulation studies show that the proposed ERStruct method outperforms the traditional Tracy-Widom test on sequencing data. We further use two public data sets from the HapMap 3 and the 1000 Genomes Projects to demonstrate the performance of our ERStruct method. We also implement ERStruct in a MATLAB toolbox, which is publicly available on GitHub at //github.com/bglvly/ERStruct.
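The eigenvalue-ratio statistic itself is simple to compute; the paper's contribution lies in calibrating it with random matrix theory. The sketch below, which is not the ERStruct toolbox, only computes the observed ratios of successive eigenvalues from a standardized genotype matrix; names and standardization details are our choices.

```python
# Illustrative sketch (not the ERStruct toolbox): successive-eigenvalue ratios
# of a standardized genotype matrix. The null distribution used to threshold
# these ratios comes from random matrix theory in the paper and is omitted.
import numpy as np

def eigenvalue_ratios(G):
    """G: n x p genotype matrix (rows = individuals, columns = variants)."""
    X = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)   # standardize each variant
    s = np.linalg.svd(X, compute_uv=False)               # avoids forming a p x p matrix
    lam = s**2 / X.shape[1]                              # sample eigenvalues
    return lam[1:] / lam[:-1]                            # lambda_{k+1} / lambda_k
```

A pronounced drop in the ratio after the $K$-th eigenvalue is the informal signal of $K$ latent sub-populations; the formal test in the paper calibrates the ratios against their random-matrix-theory null.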

This paper proposes a novel modelling approach for a heavy-duty manipulator with parallel-serial structures connected in series. Each considered parallel-serial structure contains a revolute segment with rigid links connected by a passive revolute joint and actuated by a linear hydraulic actuator, thus forming a closed kinematic loop. In addition, prismatic segments, consisting of prismatic joints driven by hydraulic linear actuators, are also considered. Expressions for the actuator forces are derived using the Newton-Euler (N-E) dynamics formulation. The derivation process does not assume massless actuators decoupled from the manipulator links, an assumption that is common in the Lagrange dynamics formulation. Actuator pressure dynamics are included in the analysis, leading in total to a third-order system of ordinary differential equations (ODEs). The proposed model in the N-E framework, with fewer parameters than its predecessors, motivates a revision of the systematic virtual decomposition control (VDC) process to formulate a control law based on the new model. The virtual stability of each generic manipulator revolute and prismatic segment is obtained, leading to the Lyapunov stability of the entire robot.
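To see informally why including the pressure dynamics yields a third-order system, recall the standard textbook form of hydraulic chamber pressure dynamics (generic notation, not the paper's, and up to sign conventions):

$$\dot{p}_a=\frac{\beta}{V_a(x)}\bigl(Q_a-A_a\dot{x}\bigr),\qquad \dot{p}_b=\frac{\beta}{V_b(x)}\bigl(A_b\dot{x}-Q_b\bigr),\qquad F_{\mathrm{act}}=A_a p_a-A_b p_b,$$

where $\beta$ is the effective bulk modulus, $V_a(x),V_b(x)$ the chamber volumes, $Q_a,Q_b$ the valve flows, and $A_a,A_b$ the piston areas. The actuator force driving the second-order rigid-body dynamics is thus itself governed by first-order differential equations in the pressures, raising the overall order to three.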

Distributionally robust optimization (DRO) is a worst-case framework for stochastic optimization under uncertainty that has drawn rapidly growing attention in recent years. When the underlying probability distribution is unknown and only observed through data, DRO suggests computing the worst-case distribution within a so-called uncertainty set that captures the involved statistical uncertainty. In particular, DRO with an uncertainty set constructed as a statistical-divergence neighborhood ball has been shown to provide a tool for constructing valid confidence intervals for nonparametric functionals, and bears a duality with the empirical likelihood (EL). In this paper, we show how adjusting the ball size of this type of DRO can reduce higher-order coverage errors, in a manner similar to the Bartlett correction. Our correction, which applies to general von Mises differentiable functionals, is more general than the existing EL literature, which focuses only on smooth function models or $M$-estimation. Moreover, we demonstrate a higher-order "self-normalizing" property of DRO regardless of the choice of divergence. Our approach builds on the development of a higher-order expansion of DRO, which is obtained through an asymptotic analysis of a fixed-point equation arising from the Karush-Kuhn-Tucker conditions.
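As a point of reference, the divergence-ball construction the abstract refers to can be written, for the simplest case of a mean functional, as the confidence interval

$$\Bigl[\;\inf_{Q:\,D_\phi(Q\,\|\,\hat P_n)\le \rho/n}\mathbb{E}_Q[Z],\;\;\sup_{Q:\,D_\phi(Q\,\|\,\hat P_n)\le \rho/n}\mathbb{E}_Q[Z]\;\Bigr],$$

where $\hat P_n$ is the empirical distribution and $Q$ ranges over distributions supported on the sample. Taking $\rho$ as the $(1-\alpha)$ quantile of $\chi^2_1$ gives first-order valid coverage via the duality with empirical likelihood; the paper's contribution is the higher-order adjustment of $\rho$, analogous to the Bartlett correction, and its extension to general von Mises differentiable functionals. (The display above is a generic illustration, not the paper's exact formulation.)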

Principal component analysis (PCA) is one of the most frequently used statistical tools in almost all branches of data science. However, like many other statistical tools, it carries a risk of misuse or even abuse. In this paper, we highlight possible pitfalls in using theoretical results of PCA that are based on the assumption of independent data when the data are time series. For the latter, we state, with proof, a central limit theorem for the eigenvalues and eigenvectors (loadings), provide direct and bootstrap estimators of their asymptotic covariances, and assess their efficacy via simulation. Specifically, we pay attention to the proportion of variation, which determines the number of principal components (PCs), and to the loadings, which help interpret the meaning of the PCs. Our findings are that while the proportion of variation is quite robust to different dependence assumptions, the inference of PC loadings requires careful attention. We initiate and conclude our investigation with an empirical example on portfolio management, in which the PC loadings play a prominent role; it serves as a paradigm of correct usage of PCA for time series data.
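Any bootstrap estimate of the loading covariances must respect the serial dependence of the data; one natural device for that is a moving-block bootstrap of the loadings, sketched below. This is an illustration rather than the paper's exact estimator, and the block length, function names, and sign-fixing rule are our choices.

```python
# Illustrative moving-block bootstrap for the sampling variability of PCA
# loadings under serial dependence (not necessarily the paper's estimator).
import numpy as np

def block_bootstrap_loadings(X, block_len=20, n_boot=500, seed=None):
    """X: T x d multivariate time series. Returns an (n_boot, d, d) array of loadings."""
    rng = np.random.default_rng(seed)
    T, d = X.shape
    n_blocks = int(np.ceil(T / block_len))
    out = []
    for _ in range(n_boot):
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        Xb = np.concatenate([X[s:s + block_len] for s in starts])[:T]
        _, _, Vt = np.linalg.svd(Xb - Xb.mean(axis=0), full_matrices=False)
        V = Vt.T                                             # columns are PC loadings
        V *= np.sign(V.sum(axis=0, keepdims=True) + 1e-12)   # fix sign indeterminacy
        out.append(V)
    return np.stack(out)
```

Element-wise standard deviations across the stacked resamples then serve as bootstrap standard errors for the individual loadings.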

In statistical applications, it is common to encounter parameters supported on a varying or unknown dimensional space. Examples include the fused lasso regression and matrix recovery under an unknown low rank. Although a point estimate is easy to obtain via optimization, quantifying its uncertainty is much more challenging. In the Bayesian framework, a major difficulty is that a prior associated with a $p$-dimensional measure puts zero posterior probability on any lower-dimensional subset with dimension $d<p$; to avoid this caveat, one needs to choose a separate dimension-selection prior on $d$, which often involves a highly combinatorial problem. To significantly reduce the modeling burden, we propose a new generative process for the prior: starting from a continuous random variable such as a multivariate Gaussian, we transform it into a varying-dimensional space using the proximal mapping. This leads to a large class of new Bayesian models that can directly exploit popular frequentist regularizations and their algorithms, such as the nuclear norm penalty and the alternating direction method of multipliers, while providing principled and probabilistic uncertainty estimation. We show that this framework is well justified by geometric measure theory, and enjoys convenient posterior computation via standard Hamiltonian Monte Carlo. We demonstrate its use in the analysis of dynamic flow network data.
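The generative mechanism is easiest to see with the simplest proximal map: soft-thresholding, the proximal operator of $\lambda\|\cdot\|_1$. Applied to a Gaussian draw, it produces a parameter with some coordinates exactly zero, so the support dimension is itself random. The sketch below illustrates only this idea, not the paper's full model class; names and the value of $\lambda$ are ours.

```python
# Minimal sketch of the prior's generative step: a Gaussian draw pushed through
# the soft-thresholding proximal map (prox of lambda * ||.||_1) yields a
# parameter with a random number of exact zeros, i.e., a varying dimension.
import numpy as np

def sample_prox_l1_prior(p=10, lam=1.0, seed=None):
    rng = np.random.default_rng(seed)
    beta = rng.normal(size=p)                                    # continuous draw
    theta = np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)  # prox_{lam||.||_1}(beta)
    return beta, theta

beta, theta = sample_prox_l1_prior(seed=0)
print((theta != 0).sum(), "nonzero coordinates out of", theta.size)
```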

Graph signals are signals with an irregular structure that can be described by a graph. Graph neural networks (GNNs) are information processing architectures tailored to these graph signals and made of stacked layers that compose graph convolutional filters with nonlinear activation functions. Graph convolutions endow GNNs with invariance to permutations of the graph nodes' labels. In this paper, we consider the design of trainable nonlinear activation functions that take into consideration the structure of the graph. This is accomplished by using graph median filters and graph max filters, which mimic linear graph convolutions and are shown to retain the permutation invariance of GNNs. We also discuss modifications to the backpropagation algorithm necessary to train local activation functions. The advantages of localized activation function architectures are demonstrated in four numerical experiments: source localization on synthetic graphs, authorship attribution of 19th century novels, movie recommender systems and scientific article classification. In all cases, localized activation functions are shown to improve model capacity.
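A minimal version of a graph median filter used as an activation can be written directly from the description: each node outputs the median of the signal over its one-hop neighborhood. The sketch below is illustrative only; it includes the node itself in its neighborhood and omits the trainable parameters of the authors' architectures.

```python
# Minimal graph median "activation": each node's output is the median of the
# signal over its one-hop neighborhood (node included). Trainable weights and
# multi-feature handling from the paper's architectures are omitted.
import numpy as np

def graph_median_activation(A, x):
    """A: n x n (0/1) adjacency matrix, x: length-n graph signal."""
    out = np.empty(len(x))
    for i in range(len(x)):
        neighborhood = np.append(x[np.flatnonzero(A[i])], x[i])
        out[i] = np.median(neighborhood)
    return out
```

Because the output at a node depends only on the set of values in its neighborhood, relabeling the nodes permutes the output in the same way, which is the permutation property the abstract emphasizes.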

Neural waveform models such as WaveNet are used in many recent text-to-speech systems, but the original WaveNet is quite slow in waveform generation because of its autoregressive (AR) structure. Although faster non-AR models have recently been reported, they may be prohibitively complicated because they rely on a distillation-based training method and a blend of disparate training criteria. This study proposes a non-AR neural source-filter waveform model that can be directly trained using spectrum-based training criteria and the stochastic gradient descent method. Given the input acoustic features, the proposed model first uses a source module to generate a sine-based excitation signal and then uses a filter module to transform the excitation signal into the output speech waveform. Our experiments demonstrated that the proposed model generates waveforms at least 100 times faster than the AR WaveNet, with synthetic speech quality close to that of speech generated by the AR WaveNet. Ablation tests showed that both the sine-wave excitation signal and the spectrum-based training criteria are essential to the performance of the proposed model.
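The source module's central idea can be sketched compactly: accumulate instantaneous phase from the F0 contour and emit a sine in voiced regions and noise in unvoiced ones. The code below is a simplification for illustration; the paper's source module also includes harmonics and feeds a trainable filter module, and the amplitude and noise constants here are placeholders.

```python
# Simplified sketch of a sine-based excitation signal driven by an F0 contour
# (illustrative only; harmonics and the trainable filter module are omitted).
import numpy as np

def sine_excitation(f0, fs=16000, amp=0.1, noise_std=0.003, seed=None):
    """f0: per-sample fundamental frequency in Hz (0 for unvoiced samples)."""
    rng = np.random.default_rng(seed)
    phase = 2 * np.pi * np.cumsum(f0) / fs                     # accumulated phase
    voiced = amp * np.sin(phase) + noise_std * rng.normal(size=f0.size)
    unvoiced = (amp / 3.0) * rng.normal(size=f0.size)          # noise-only excitation
    return np.where(f0 > 0, voiced, unvoiced)
```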
