
We propose an extension of the N-mixture model which allows for the simultaneous estimation of the abundances of multiple species and their inter-species correlations. We also propose further extensions to this multi-species N-mixture model: one permits us to examine data with an excess of zero counts, and another relaxes the closure assumption inherent in N-mixture models by incorporating an autoregressive (AR) term in the abundance. Placing a multivariate normal prior on the random effect in the abundance facilitates the estimation of a matrix of inter-species correlations. Each model is also fitted to avian point data collected as part of the North American Breeding Bird Survey (NABBS) 2010-2019. Results of simulation studies reveal that these models produce accurate estimates of abundance, inter-species correlations and detection probabilities at both small and large sample sizes, in scenarios with small, large and no zero inflation. Results of model fitting to the North American Breeding Bird Survey data reveal an increase in Bald Eagle population size in southeastern Alaska over the decade examined. Our novel multi-species N-mixture model accounts for full communities, allowing us to examine the abundances of every species present in a study area and, as these species do not exist in a vacuum, to estimate correlations between species' abundances. While previous multi-species abundance models have allowed for the estimation of abundance and detection probability, ours is the first to address the estimation of both positive and negative inter-species correlations, which allows us to begin to make inferences about the effect that these species' abundances have on one another. Our modelling approach provides a method of quantifying the strength of association between species' population sizes, and is of practical use to population and conservation ecologists.
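
The data-generating structure behind this kind of model can be sketched with a small simulation: site-level log-abundance random effects are drawn from a multivariate normal whose off-diagonal terms encode inter-species correlation, latent abundances are Poisson, and repeated counts are Binomial thinnings of the latent abundances. This is only a minimal two-species illustration with made-up parameter values, not the authors' full model (no zero inflation or AR term).

```python
# Minimal simulation sketch of a two-species N-mixture data set with
# correlated abundances; all names and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_visits = 100, 4
mu = np.array([2.0, 1.5])                      # log-scale mean abundances
Sigma = np.array([[0.30, 0.18],                # covariance of site-level
                  [0.18, 0.25]])               # random effects (positive corr.)
p_detect = np.array([0.6, 0.4])                # per-species detection probability

eta = rng.multivariate_normal(mu, Sigma, size=n_sites)   # site random effects
N = rng.poisson(np.exp(eta))                              # latent abundances
# replicated counts: Binomial(N, p) for each visit and species
counts = rng.binomial(N[:, None, :], p_detect, size=(n_sites, n_visits, 2))

# implied inter-species correlation of the log-abundance random effects
corr = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
print(counts.shape, round(corr, 2))
```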

Related content

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied in the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$ distributed sequence, and that its empirical eigenvalue distribution converges towards the Marcenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of the order of $\frac{1}{M} + \frac{\sqrt{M}}{N}+ (\frac{M}{N})^{3}$. Numerical simulations support our results.
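
A quick numerical illustration of the underlying objects: for an i.i.d. complex Gaussian panel, form the smoothed periodogram at one frequency, normalize it to a coherency matrix, and compare its eigenvalues to the Marcenko-Pastur support with aspect ratio $M$ over the smoothing span. The sizes below and the choice of a rectangular smoothing window are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: smoothed-periodogram coherency of an i.i.d. complex Gaussian panel,
# with its eigenvalues compared informally to the Marcenko-Pastur support.
import numpy as np

rng = np.random.default_rng(0)
M, N, B = 50, 4000, 300                       # dimension, samples, half smoothing span
y = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

Y = np.fft.fft(y, axis=1) / np.sqrt(N)        # per-component DFT
k = N // 4                                    # one Fourier frequency
idx = (k + np.arange(-B, B + 1)) % N
S = Y[:, idx] @ Y[:, idx].conj().T / (2 * B + 1)   # smoothed spectral estimate
d = np.sqrt(np.real(np.diag(S)))
C = S / np.outer(d, d)                        # estimated spectral coherency matrix

eig = np.linalg.eigvalsh(C)
c = M / (2 * B + 1)                           # aspect ratio of the estimate
mp_edges = ((1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2)
print(eig.min(), eig.max(), mp_edges)
```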

Large observational data are increasingly available in disciplines such as health, economics, and the social sciences, where researchers are interested in causal questions rather than prediction. In this paper, we examine the problem of estimating heterogeneous treatment effects using non-parametric regression-based methods, starting from an empirical study aimed at investigating the effect of participation in school meal programs on health indicators. Firstly, we introduce the setup and the issues related to conducting causal inference with observational or non-fully randomized data, and how these issues can be tackled with the help of statistical learning tools. Then, we review and develop a unifying taxonomy of the existing state-of-the-art frameworks that allow for individual treatment effect estimation via non-parametric regression models. After presenting a brief overview of the problem of model selection, we illustrate the performance of some of the methods in three different simulation studies. We conclude by demonstrating the use of some of the methods in an empirical analysis of the school meal program data.
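
One of the standard regression-based frameworks typically covered in such a taxonomy is the T-learner, which fits separate outcome regressions for treated and control units and takes their difference as the estimated individual effect. The sketch below uses simulated (here fully randomized) data and a random forest as the base learner; it is a generic illustration of the approach, not the paper's specific pipeline or data.

```python
# Minimal T-learner sketch for individual treatment effect estimation on
# simulated data; covariates and the effect function are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
tau = 1.0 + X[:, 0]                       # true heterogeneous effect
T = rng.binomial(1, 0.5, size=n)          # (fully randomized) treatment indicator
y = X[:, 1] + tau * T + rng.normal(scale=0.5, size=n)

m1 = RandomForestRegressor(n_estimators=200).fit(X[T == 1], y[T == 1])
m0 = RandomForestRegressor(n_estimators=200).fit(X[T == 0], y[T == 0])
cate_hat = m1.predict(X) - m0.predict(X)  # estimated individual effects
print(np.corrcoef(cate_hat, tau)[0, 1])   # agreement with the true effects
```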

We recently introduced a formalism for the modeling of temporal networks, which we call stream graphs. It emphasizes the streaming nature of data and allows rigorous definitions of many important concepts generalizing classical graphs. This includes in particular size, density, clique, neighborhood, degree, clustering coefficient, and transitivity. In this contribution, we show that, like graphs, stream graphs may be extended to cope with bipartite structures, with node and link weights, or with link directions. We review the main bipartite, weighted or directed graph concepts proposed in the literature, we generalize them to the cases of bipartite, weighted, or directed stream graphs, and we show that the obtained concepts are consistent with their graph and stream graph counterparts. This provides a formal ground for accurate modeling of the many temporal networks that have one or several of these features.
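
To fix ideas, a directed, weighted stream graph can be stored as a set of temporal links (u, v, start, end, weight) over an observation window, and degree-like quantities become duration-weighted sums. The toy sketch below is only in the spirit of the stream-graph definitions (it assumes a single interval per link and a simple duration-weighted out-degree), not the formal definitions of the paper.

```python
# Toy sketch of a directed, weighted stream graph stored as temporal links
# (u, v, t_start, t_end, w), with a duration-weighted out-degree.

T = (0.0, 10.0)                                   # observation window
links = [("a", "b", 0.0, 4.0, 1.0),               # a -> b active on [0, 4], weight 1
         ("a", "c", 2.0, 6.0, 2.0),
         ("b", "c", 5.0, 10.0, 0.5)]

def out_degree(node, links, window):
    """Weighted out-degree: sum over links leaving `node` of weight times the
    fraction of the window during which the link is present."""
    total = window[1] - window[0]
    deg = 0.0
    for u, v, t0, t1, w in links:
        if u == node:
            deg += w * (min(t1, window[1]) - max(t0, window[0])) / total
    return deg

print(out_degree("a", links, T))   # 1*0.4 + 2*0.4 = 1.2
```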

We consider functional data which are measured on a discrete set of observation points. Often such data are measured with additional noise. In this paper, we explore the factor structure underlying this type of data. We show that the latent signal can be attributed to the common components of a corresponding factor model and can be estimated accordingly, by borrowing methods from the factor model literature. We also show that principal components, which play a key role in functional data analysis, can be accurately estimated after adopting such a multivariate, rather than a `functional', perspective. In addition to the estimation problem, we also address testing the null hypothesis of iid noise. While this assumption is prevalent in the literature, we believe that it is often unrealistic and not supported by residual analysis.
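
The multivariate reading of the problem can be illustrated by simulating noisy, discretely observed curves driven by a small number of smooth components and estimating principal components directly from the sample covariance of the observation vectors. The sketch below is a generic factor-model-style illustration with invented dimensions, not the estimators or tests developed in the paper.

```python
# Sketch: principal components of noisy, discretely observed functional data,
# treated as a high-dimensional factor model; dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 100                                  # curves, observation points
t = np.linspace(0, 1, p)
f1, f2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)   # smooth "loadings"
scores = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
X = scores @ np.vstack([f1, f2]) + 0.3 * rng.normal(size=(n, p))  # signal + noise

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n
vals, vecs = np.linalg.eigh(cov)
pcs = vecs[:, ::-1][:, :2]                       # leading eigenvectors = estimated PCs
# alignment (up to sign) of the first estimated PC with the true smooth component
print(abs(pcs[:, 0] @ f1) / (np.linalg.norm(f1) * np.linalg.norm(pcs[:, 0])))
```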

This paper investigates a channel estimator based on Gaussian mixture models (GMMs). We fit a GMM to given channel samples to obtain an analytic probability density function (PDF) which approximates the true channel PDF. Then, a conditional mean channel estimator corresponding to this approximating PDF is computed in closed form and used as an approximation of the optimal conditional mean estimator based on the true channel PDF. This optimal estimator cannot be calculated analytically because the true channel PDF is generally not available. To motivate the GMM-based estimator, we show that it converges to the optimal conditional mean estimator as the number of GMM components is increased. In numerical experiments, a reasonable number of GMM components already shows promising estimation results.
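
For a linear observation y = h + n with Gaussian noise, the conditional mean under a GMM channel prior has the stated closed form: a posterior-weighted sum of per-component LMMSE estimates, with the posterior weights computed from the mixture whose component covariances are inflated by the noise covariance. The real-valued sketch below assumes this additive-noise model and illustrative sizes; it is not tied to the paper's specific channel data.

```python
# Hedged sketch of a GMM-based conditional mean estimator for y = h + n
# with Gaussian noise (real-valued toy version, illustrative sizes).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
dim, sigma2 = 8, 0.1
H_train = rng.normal(size=(5000, dim)) * rng.choice([0.3, 1.5], size=(5000, 1))

gmm = GaussianMixture(n_components=4, covariance_type="full").fit(H_train)

def estimate(y):
    """Conditional mean of h given y under the fitted GMM prior on h."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    post = np.array([w[k] * multivariate_normal.pdf(y, means[k],
                                                    covs[k] + sigma2 * np.eye(dim))
                     for k in range(len(w))])
    post /= post.sum()                     # posterior component probabilities
    est = np.zeros(dim)
    for k in range(len(w)):
        gain = covs[k] @ np.linalg.inv(covs[k] + sigma2 * np.eye(dim))
        est += post[k] * (means[k] + gain @ (y - means[k]))
    return est

h = H_train[0]
y = h + rng.normal(scale=np.sqrt(sigma2), size=dim)
print(np.linalg.norm(estimate(y) - h), np.linalg.norm(y - h))
```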

Motivated by applications in combinatorial geometry, we consider the following question: Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_m)$ be an $m$-partition of a positive integer $n$, let $S_i \subseteq \mathbb{C}^{\lambda_i}$ be finite sets, and let $S:=S_1 \times S_2 \times \ldots \times S_m \subset \mathbb{C}^n$ be the multi-grid defined by the $S_i$. Suppose $p$ is an $n$-variate polynomial of degree $d$. How many zeros does $p$ have on $S$? We first develop a multivariate generalization of the Combinatorial Nullstellensatz that certifies the existence of a point $t \in S$ such that $p(t) \neq 0$. Then we show that a natural multivariate generalization of the DeMillo-Lipton-Schwartz-Zippel lemma holds, except for a special family of polynomials that we call $\lambda$-reducible. This yields a simultaneous generalization of the Szemer\'edi-Trotter theorem and the Schwartz-Zippel lemma to higher dimensions, and has applications in incidence geometry. Finally, we develop a symbolic algorithm that identifies certain $\lambda$-reducible polynomials. More precisely, our symbolic algorithm detects polynomials that include a Cartesian product of hypersurfaces in their zero set. It is likely that, using Chow forms, the algorithm can be generalized to handle arbitrary $\lambda$-reducible polynomials, which we leave as an open problem.
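
For reference, the classical lemma being generalized (stated here for a scalar grid $S_0 \subseteq \mathbb{C}$, i.e. the case where every $\lambda_i = 1$ and all $S_i = S_0$) reads as follows; the abstract's multi-grid statement extends this to blocks $S_i \subseteq \mathbb{C}^{\lambda_i}$, up to the $\lambda$-reducible exceptions.

$$\bigl|\{\, t \in S_0^{\,n} : p(t) = 0 \,\}\bigr| \;\le\; d\,|S_0|^{\,n-1}, \qquad\text{equivalently}\qquad \Pr_{t \sim S_0^{\,n}}\bigl[p(t)=0\bigr] \;\le\; \frac{d}{|S_0|},$$

for any nonzero $n$-variate polynomial $p$ of total degree $d$.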

Reliable probability estimation is of crucial importance in many real-world applications where there is inherent uncertainty, such as weather forecasting, medical prognosis, or collision avoidance in autonomous vehicles. Probability-estimation models are trained on observed outcomes (e.g. whether it has rained or not, or whether a patient has died or not), because the ground-truth probabilities of the events of interest are typically unknown. The problem is therefore analogous to binary classification, with the important difference that the objective is to estimate probabilities rather than to predict the specific outcome. The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks. There exist several methods to improve the probabilities generated by these models, but they mostly focus on classification problems where the probabilities are related to model uncertainty. In the case of problems with inherent uncertainty, it is challenging to evaluate performance without access to ground-truth probabilities. To address this, we build a synthetic dataset to study and compare different computable metrics. We evaluate existing methods on the synthetic data as well as on three real-world probability estimation tasks, all of which involve inherent uncertainty: precipitation forecasting from radar images, predicting cancer patient survival from histopathology images, and predicting car crashes from dashcam videos. Finally, we also propose a new method for probability estimation using neural networks, which modifies the training process to promote output probabilities that are consistent with empirical probabilities computed from the data. The method outperforms existing approaches on most metrics on the simulated as well as real-world data.
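
The role of a synthetic dataset here is that the ground-truth probabilities are available, so metrics computable from labels alone (such as the Brier score) can be checked against the error with respect to the true probabilities. The sketch below is a generic illustration of that idea with invented distributions and two artificial estimators; it is not the paper's dataset, metrics, or proposed training method.

```python
# Sketch: synthetic probability-estimation setup with known ground-truth
# probabilities, comparing a label-only metric with a probability-level one.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
p_true = rng.beta(2, 2, size=n)              # ground-truth event probabilities
y = rng.binomial(1, p_true)                  # observed binary outcomes

p_hat_good = np.clip(p_true + rng.normal(scale=0.05, size=n), 0, 1)
p_hat_biased = np.clip(1.3 * p_true, 0, 1)   # systematically overconfident

for name, p_hat in [("good", p_hat_good), ("biased", p_hat_biased)]:
    brier = np.mean((p_hat - y) ** 2)        # computable from labels alone
    mse_p = np.mean((p_hat - p_true) ** 2)   # needs ground-truth probabilities
    print(name, round(brier, 4), round(mse_p, 4))
```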

Determinantal point processes (a.k.a. DPPs) have recently become popular tools for modeling the phenomenon of negative dependence, or repulsion, in data. However, our understanding of an analogue of a classical parametric statistical theory is rather limited for this class of models. In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parametric modulation on the observed points. We show that parameter modulation impacts the observed points by introducing directionality in their repulsion structure, and the principal directions correspond to the directions of maximal (i.e., the most long-ranged) dependency. This model readily yields a novel and viable alternative to Principal Component Analysis (PCA) as a dimension reduction tool that favors directions along which the data is most spread out. This methodological contribution is complemented by a statistical analysis of a spiked model similar to that employed for covariance matrices as a framework to study PCA. These theoretical investigations unveil intriguing questions for further examination in random matrix theory, stochastic geometry and related topics.
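
As a purely illustrative reading of "directional repulsion": in an anisotropic Gaussian kernel, the shape matrix controls how slowly the kernel decays in each direction, and its leading eigenvector marks the most long-ranged direction of dependence. The parameterization below (kernel form, amplitude, shape matrix) is assumed for illustration and is not claimed to be the paper's parametric family.

```python
# Sketch: anisotropic Gaussian kernel k(x, y) = a * exp(-(x-y)^T S^{-1} (x-y));
# the eigenvectors of the (assumed) shape matrix S give the principal
# directions along which correlation decays most slowly.
import numpy as np

S = np.array([[4.0, 1.2],
              [1.2, 1.0]])                    # illustrative repulsion shape matrix

def gauss_kernel(x, y, amplitude=0.8, shape=S):
    d = x - y
    return amplitude * np.exp(-d @ np.linalg.inv(shape) @ d)

vals, vecs = np.linalg.eigh(S)
print("principal (most long-ranged) direction:", vecs[:, np.argmax(vals)])
print("k(0, e1) vs k(0, e2):",
      gauss_kernel(np.zeros(2), np.array([1.0, 0.0])),
      gauss_kernel(np.zeros(2), np.array([0.0, 1.0])))
```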

We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, in which one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
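
To make the setting concrete: one simple form of side information is a handful of samples known to come from the target component, which can be used to localize that component and estimate only its parameters. The sketch below is a crude generic illustration of this idea and is explicitly not the matrix-based algorithm described in the abstract.

```python
# Illustration (not the paper's algorithm) of side information in the search
# problem: a few labelled samples localize the target Gaussian component,
# whose parameters are then re-estimated from nearby points only.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
X = np.vstack([rng.normal(m, 1.0, size=(500, 2)) for m in means])
side = rng.normal(means[1], 1.0, size=(10, 2))     # hints for the target component

anchor = side.mean(axis=0)                         # crude localization from side info
dist = np.linalg.norm(X - anchor, axis=1)
member = X[dist < 3.0]                             # points plausibly from that component
mu_hat = member.mean(axis=0)
cov_hat = np.cov(member.T)
print(np.round(mu_hat, 2), np.round(cov_hat, 2))
```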

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
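
The Kronecker structure itself is easy to make concrete: the full covariance is the Kronecker product of a trial, a time, and a spatial factor, so a vectorized recording can be simulated directly from it. The sketch below only builds the model and draws one sample with illustrative dimensions; it does not reproduce the paper's iterative approximate maximum likelihood estimator.

```python
# Sketch of a Kronecker-structured noise covariance (trials ⊗ time ⊗ space)
# with illustrative sizes; simulation only, no estimation.
import numpy as np

rng = np.random.default_rng(0)

def random_spd(k):
    """A random symmetric positive definite factor matrix."""
    A = rng.normal(size=(k, k))
    return A @ A.T / k + np.eye(k)

n_space, n_time, n_trials = 5, 6, 3
C_space, C_time, C_trials = random_spd(n_space), random_spd(n_time), random_spd(n_trials)
Sigma = np.kron(C_trials, np.kron(C_time, C_space))   # full (90 x 90) covariance

# draw one vectorized noise realization and reshape to trials x time x space
x = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma)
X = x.reshape(n_trials, n_time, n_space)
print(Sigma.shape, X.shape)
```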
