
This article examines whether a multivariate distribution differs from a specified distribution, and it also tests the equality of two multivariate distributions. In the course of this study, a graphical tool-kit based on well-known half-space depth information criteria is proposed; it is a two-dimensional plot regardless of the dimension of the data, which makes it useful even for comparing high-dimensional distributions. The simple interpretability of the proposed graphical tool-kit motivates us to formulate test statistics for the corresponding hypothesis testing problems. It is established that the proposed tests based on the same information criteria are consistent, and the asymptotic distributions of the test statistics under contiguous/local alternatives are derived, which enables us to compute the asymptotic power of these tests. Furthermore, the computations associated with the proposed tests are inexpensive. These tests also perform better than many other tests available in the literature when data are generated from various distributions, such as heavy-tailed distributions, which indicates that the proposed methodology is robust as well. Finally, the usefulness of the proposed graphical tool-kit and tests is demonstrated on two benchmark real data sets.
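
As a loose illustration of the ingredients behind such a tool-kit (not the paper's construction, which relies on specific information criteria), the sketch below draws a classical depth-versus-depth (DD) plot using Tukey half-space depth approximated over random projections; the samples `X` and `Y` are synthetic.

```python
# A minimal sketch of a DD-plot with approximate Tukey half-space depth.
import numpy as np
import matplotlib.pyplot as plt

def halfspace_depth(points, sample, n_dirs=500, seed=0):
    """Approximate Tukey half-space depth of each row of `points` w.r.t. `sample`
    by minimising, over random unit directions, the fraction of `sample` lying
    in the closed half-space below the point."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, sample.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_s = sample @ dirs.T                      # (n, n_dirs)
    proj_p = points @ dirs.T                      # (m, n_dirs)
    return (proj_s[None, :, :] <= proj_p[:, None, :]).mean(axis=1).min(axis=1)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                 # sample from F
Y = 1.5 * rng.standard_normal((200, 5))           # sample from G, heavier spread
Z = np.vstack([X, Y])
plt.scatter(halfspace_depth(Z, X), halfspace_depth(Z, Y), s=8)
plt.plot([0, 0.5], [0, 0.5], "k--")               # equal-depth reference line
plt.xlabel("depth w.r.t. F"); plt.ylabel("depth w.r.t. G")
plt.show()
```

When F = G the points concentrate around the dashed 45-degree line regardless of the data dimension, which is what makes a two-dimensional plot of this kind usable for high-dimensional comparisons.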

Related Content

The journal 《計算機信息》 (Computers & Information) publishes high-quality papers that expand the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers, and papers describing new and useful software tools.
September 1, 2023

Deep neural networks have shown remarkable performance when trained on independent and identically distributed data from a fixed set of classes. However, in real-world scenarios, it can be desirable to train models on a continuous stream of data where multiple classification tasks are presented sequentially. This scenario, known as Continual Learning (CL), poses challenges to standard learning algorithms, which struggle to maintain knowledge of old tasks while learning new ones. This stability-plasticity dilemma remains central to CL, and multiple metrics have been proposed to measure stability and plasticity separately. However, none considers the increasing difficulty of the classification task, which inherently results in performance loss for any model. In that light, we analyze some limitations of current metrics and identify the presence of setup-induced forgetting. We therefore propose new metrics that account for the task's increasing difficulty. Through experiments on benchmark datasets, we demonstrate that our proposed metrics can provide new insights into the stability-plasticity trade-off achieved by models in the continual learning environment.
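
For concreteness, here is a hedged sketch of how stability and plasticity are typically read off a task-accuracy matrix, plus a hypothetical difficulty-adjusted variant; the adjusted metric and the `baseline` matrix are illustrative assumptions, not the paper's definitions.

```python
# A minimal sketch of stability/plasticity metrics from a task-accuracy matrix.
# acc[i, j] = accuracy on task j after training on tasks 0..i (lower triangle).
import numpy as np

def average_forgetting(acc):
    """Classical forgetting: best past accuracy on a task minus its final accuracy."""
    T = acc.shape[0]
    return float(np.mean([acc[j:T-1, j].max() - acc[T-1, j] for j in range(T - 1)]))

def average_plasticity(acc):
    """Average accuracy on each task right after it is learned."""
    return float(np.mean(np.diag(acc)))

def difficulty_adjusted_forgetting(acc, baseline):
    """Hypothetical difficulty-adjusted variant (an assumption, not the paper's
    definition): measure drops relative to a jointly trained baseline, so
    performance lost to the harder setup is not counted as forgetting."""
    T = acc.shape[0]
    rel = acc - baseline
    return float(np.mean([rel[j:T-1, j].max() - rel[T-1, j] for j in range(T - 1)]))

acc = np.array([[0.95, 0.0, 0.0],
                [0.90, 0.92, 0.0],
                [0.85, 0.88, 0.91]])
baseline = np.array([[0.97, 0.0, 0.0],
                     [0.94, 0.95, 0.0],
                     [0.90, 0.92, 0.93]])   # hypothetical joint-training reference
print(average_forgetting(acc), average_plasticity(acc),
      difficulty_adjusted_forgetting(acc, baseline))
```

In this toy example the classical metric attributes the whole accuracy drop to forgetting, while the adjusted variant discounts the part that even a jointly trained model would lose as more classes accumulate.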

Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, especially with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Second, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to some types of model mis-specification.
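
As background for the model class, the following sketch evaluates the conditional intensity of a sigmoid Hawkes process with exponential interaction kernels; the parameter names (`nu`, `alpha`, `beta`, `lam_max`) and values are illustrative assumptions, not the paper's notation.

```python
# A minimal sketch of the conditional intensity of a sigmoid (nonlinear)
# Hawkes process with exponential interaction kernels.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def intensity(t, events, nu, alpha, beta, lam_max):
    """lambda_k(t) = lam_max * sigmoid(nu_k
                     + sum_l sum_{t_i in events[l], t_i < t}
                           alpha[k, l] * exp(-beta * (t - t_i)))."""
    K = len(nu)
    lam = np.zeros(K)
    for k in range(K):
        drive = nu[k]
        for l in range(K):
            past = events[l][events[l] < t]
            drive += alpha[k, l] * np.exp(-beta * (t - past)).sum()
        lam[k] = lam_max * sigmoid(drive)
    return lam

events = [np.array([0.5, 1.2, 3.0]), np.array([0.8, 2.5])]   # two dimensions
nu = np.array([-1.0, -0.5])
alpha = np.array([[0.8, 0.0],     # zero entries encode the sparsity structure
                  [0.3, 0.6]])
print(intensity(4.0, events, nu, alpha, beta=2.0, lam_max=5.0))
```

The sigmoid link keeps each intensity bounded and well-defined even with inhibitory (negative) interaction weights, which is what makes this nonlinear variant popular.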

This study focuses on the presence of (multi)fractal structures in confined hadronic matter through the momentum distributions of mesons produced in proton-proton collisions at energies between 23 GeV and 63 GeV. The analysis demonstrates that the $q$-exponential behaviour of the particle momentum distributions is consistent with fractal characteristics, exhibiting fractal structures in confined hadronic matter with features similar to those observed in the deconfined quark-gluon plasma (QGP) regime. Furthermore, the systematic analysis of meson production in hadronic collisions at energies below 1 TeV suggests that specific fractal parameters are universal, independent of confinement or deconfinement, while others may be influenced by the quark content of the produced meson. These results pave the way for further research exploring the implications of fractal structures for various physical distributions and offer insights into the nature of the phase transition between confined and deconfined regimes.
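
To make the functional form concrete, here is a small sketch fitting the standard $q$-exponential shape to a synthetic spectrum; the data and starting values are made up, and only the formula itself is taken from the literature.

```python
# A minimal sketch of fitting a q-exponential (Tsallis-like) form to a
# momentum spectrum; the data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def q_exp(p, A, q, T):
    """A * [1 + (q - 1) p / T]^(-1/(q - 1)); tends to A * exp(-p / T) as q -> 1."""
    return A * (1.0 + (q - 1.0) * p / T) ** (-1.0 / (q - 1.0))

p = np.linspace(0.1, 5.0, 50)
y = q_exp(p, A=100.0, q=1.1, T=0.15)
y_noisy = y * np.random.default_rng(0).normal(1.0, 0.02, size=p.size)
popt, _ = curve_fit(q_exp, p, y_noisy, p0=(80.0, 1.05, 0.2),
                    bounds=([0.0, 1.0001, 0.01], [np.inf, 2.0, 5.0]))
print(popt)   # recovered (A, q, T), to be compared with (100, 1.1, 0.15)
```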

We present a framework for approximate Bayesian inference when only a limited number of noisy log-likelihood evaluations can be obtained due to computational constraints, which is becoming increasingly common for applications of complex models. We model the log-likelihood function using a Gaussian process (GP), and the main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take if it were applicable. Informative log-likelihood evaluation locations are selected using a sequential experimental design strategy until the MH accept/reject decision can be made accurately enough according to the GP model. The resulting approximate sampler is conceptually simple and sample-efficient. It is also more robust to violations of GP modelling assumptions compared with earlier, related "Bayesian optimisation-like" methods tailored for Bayesian inference. We discuss some theoretical aspects and various interpretations of the resulting approximate MH sampler, and demonstrate its benefits in the context of Bayesian and generalised Bayesian likelihood-free inference for simulator-based statistical models.
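
The following is a heavily simplified sketch, under illustrative assumptions, of the core idea: a GP surrogate of a noisy log-likelihood drives the MH accept/reject decision, and new evaluations are acquired only when the surrogate is too uncertain. The toy target, kernel choice, and uncertainty threshold are all assumptions, not the paper's design.

```python
# A minimal sketch of a GP-emulated Metropolis-Hastings step.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def noisy_loglik(theta, rng):
    """Toy stand-in for an expensive, noisy log-likelihood (target: N(0, 1))."""
    return -0.5 * theta**2 + rng.normal(0.0, 0.3)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))
y = np.array([noisy_loglik(t, rng) for t in X[:, 0]])
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.09), optimizer=None).fit(X, y)

theta, chain = 0.0, []
for _ in range(200):
    prop = theta + rng.normal(0.0, 1.0)
    pts = np.array([[theta], [prop]])
    for _ in range(20):                            # acquisition budget per step
        mu, sd = gp.predict(pts, return_std=True)
        if np.hypot(sd[0], sd[1]) < 0.7:           # decision certain enough
            break
        i = int(np.argmax(sd))                     # evaluate where GP is least sure
        X = np.vstack([X, pts[i]])
        y = np.append(y, noisy_loglik(pts[i, 0], rng))
        gp.fit(X, y)
    if np.log(rng.uniform()) < mu[1] - mu[0]:      # flat prior, symmetric proposal
        theta = prop
    chain.append(theta)
print(np.mean(chain), np.std(chain))               # should be near 0 and 1
```

The appeal is that most MH steps reuse the surrogate for free, and the expensive simulator is queried only when the accept/reject decision is genuinely in doubt.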

We define some Schnyder-type combinatorial structures on a class of planar triangulations of the pentagon which are closely related to 5-connected triangulations. The combinatorial structures have three incarnations defined in terms of orientations, corner-labelings, and woods respectively. The wood incarnation consists of 5 spanning trees crossing each other in an orderly fashion. As with Schnyder woods on triangulations, it induces, for each vertex, a partition of the inner triangles into face-connected regions (5 regions here). We show that the induced barycentric vertex placement, where each vertex is at the barycenter of the 5 outer vertices with weights given by the number of faces in each region, yields a planar straight-line drawing.
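
As a small illustration of the final step, the sketch below places an inner vertex at the barycenter of the 5 outer pentagon vertices, weighted by region face-counts as the abstract describes; the face-counts used here are hypothetical.

```python
# A minimal sketch of the barycentric vertex placement for one inner vertex.
import numpy as np

# Outer pentagon vertices on the unit circle.
outer = np.array([[np.cos(a), np.sin(a)]
                  for a in 2 * np.pi * np.arange(5) / 5 + np.pi / 2])

def place(face_counts):
    """Weighted barycenter of the 5 outer vertices, with weights given by the
    number of inner faces in each of the vertex's 5 regions."""
    w = np.asarray(face_counts, dtype=float)
    return (w[:, None] * outer).sum(axis=0) / w.sum()

print(place([4, 2, 1, 1, 2]))   # hypothetical region sizes for one inner vertex
```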

This paper examines the distribution of order statistics taken from simple random sampling without replacement (SRSWOR) from a finite population with values 1,...,N. This distribution is a shifted version of the beta-binomial distribution, parameterised in a particular way. We derive the distribution and show how it relates to the distribution of order statistics under IID sampling from a uniform distribution over the unit interval. We examine properties of the distribution, including moments and asymptotic results. We also generalise the distribution to sampling without replacement of order statistics from an arbitrary finite population. We examine the properties of the order statistics for inference about an unknown population size (the German tank problem) and we derive relevant estimation results based on observation of an arbitrary set of order statistics. We also introduce an algorithm that simulates sampling without replacement of order statistics from an arbitrary finite population without having to generate the entire sample.
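
The distribution in question is the negative hypergeometric (equivalently, a shifted beta-binomial); a minimal sketch of its pmf and the classical mean formula used in the German tank problem follows.

```python
# A minimal sketch of the pmf of the k-th order statistic from an SRSWOR
# sample of size n drawn from {1, ..., N}: choose k-1 values below x and
# n-k values above x, out of all C(N, n) equally likely samples.
from math import comb
from fractions import Fraction

def order_stat_pmf(x, k, n, N):
    """P(X_(k) = x) = C(x-1, k-1) * C(N-x, n-k) / C(N, n)."""
    if not (k <= x <= N - n + k):
        return Fraction(0)
    return Fraction(comb(x - 1, k - 1) * comb(N - x, n - k), comb(N, n))

N, n, k = 20, 5, 3
pmf = {x: order_stat_pmf(x, k, n, N) for x in range(k, N - n + k + 1)}
assert sum(pmf.values()) == 1                      # exact arithmetic check
mean = sum(x * p for x, p in pmf.items())
print(float(mean), "expected:", k * (N + 1) / (n + 1))   # E[X_(k)] = k(N+1)/(n+1)
```

The mean formula is what underlies the classical German tank estimator: observing the maximum (k = n) suggests estimating N by scaling it up by (n+1)/n and subtracting 1.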

Recently, several algorithms have been proposed for decomposing reactive synthesis specifications into independent and simpler sub-specifications. Inspired by the approach of Antonio Iannopollo (2018), who designed the so-called DC algorithm, we present a solution that takes his ideas further and provides a mathematical formalisation of the strategy behind DC. We rigorously define the main notions involved in the algorithm, explain the technique, and demonstrate its application on examples. The core technique of DC is based on the detection of independent variables in linear temporal logic formulae by exploiting the power and efficiency of a model checker. Although the DC algorithm is sound, it is not complete, as its author already pointed out. In this paper, we provide a counterexample demonstrating this fact and propose relevant changes that adapt the original DC strategy to ensure its correctness. The modification of DC and the detailed proof of its soundness and completeness are the main contributions of this work.
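
The DC algorithm detects independent variables semantically with a model checker; the toy below is only a purely syntactic stand-in for the decomposition idea, grouping top-level conjuncts that share atomic propositions. It would miss semantic independence that DC can detect.

```python
# A toy, purely syntactic stand-in for specification decomposition: group the
# top-level conjuncts of a specification by shared variables via union-find,
# so conjuncts over disjoint variables land in separate sub-specifications.
import re
from collections import defaultdict

def decompose(conjuncts):
    """Partition conjuncts into groups that share no atomic propositions.
    LTL operators (G, F, X, U) are uppercase here, so the lowercase-identifier
    regex below never mistakes them for variables."""
    parent = {}
    def find(v):
        while parent.setdefault(v, v) != v:
            v = parent[v]
        return v
    vars_of = [re.findall(r"\b[a-z]\w*\b", c) for c in conjuncts]
    for vs in vars_of:
        for a, b in zip(vs, vs[1:]):
            parent[find(a)] = find(b)
    groups = defaultdict(list)
    for c, vs in zip(conjuncts, vars_of):
        groups[find(vs[0])].append(c)
    return list(groups.values())

spec = ["G(req1 -> F grant1)", "G(req2 -> F grant2)", "G !(grant1 & busy)"]
print(decompose(spec))   # two independent sub-specifications
```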

In genetic studies, haplotype data provide more refined information than data about separate genetic markers. However, large-scale studies that genotype hundreds to thousands of individuals may only provide results of pooled data, where only the total allele counts of each marker in each pool are reported. Methods for inferring haplotype frequencies from pooled genetic data that scale well with pool size rely on a normal approximation, which we observe to produce unreliable inference when applied to real data. We illustrate cases where the approximation breaks down, due to the normal covariance matrix being near-singular. As an alternative to approximate methods, in this paper we propose exact methods to infer haplotype frequencies from pooled genetic data based on a latent multinomial model, where the observed allele counts are considered integer combinations of latent, unobserved haplotype counts. One of our methods, latent count sampling via Markov bases, achieves approximately linear runtime with respect to pool size. Our exact methods produce more accurate inference than existing approximate methods for synthetic data and for data based on haplotype information from the 1000 Genomes Project. We also demonstrate how our methods can be applied to time series of pooled genetic data, as a proof of concept that our methods are relevant to more complex hierarchical settings, such as spatiotemporal models.
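
For intuition about the latent multinomial view, the sketch below enumerates, for a single pool with two biallelic markers, all latent haplotype-count vectors consistent with the observed pooled allele counts; the pool size and counts are illustrative. The exact likelihood then sums a multinomial over this set, which, per the abstract, the Markov-basis sampler explores rather than enumerating.

```python
# A minimal sketch of the latent-count view for one pool with two biallelic
# markers: haplotypes 00, 01, 10, 11 have unobserved counts summing to the
# pool size, constrained so marker-wise allele totals match the observations.
def consistent_latent_counts(n, y1, y2):
    """All (n00, n01, n10, n11) with n10 + n11 == y1 (allele-1 total at
    marker 1), n01 + n11 == y2 (allele-1 total at marker 2), summing to n."""
    sols = []
    for n11 in range(min(y1, y2) + 1):
        n10, n01 = y1 - n11, y2 - n11
        n00 = n - n11 - n10 - n01
        if n00 >= 0:
            sols.append((n00, n01, n10, n11))
    return sols

print(consistent_latent_counts(n=10, y1=4, y2=3))
```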

Permutation tests are widely used for statistical hypothesis testing when the sampling distribution of the test statistic under the null hypothesis is analytically intractable or unreliable due to finite sample sizes. One critical challenge in the application of permutation tests in genomic studies is that an enormous number of permutations are often needed to obtain reliable estimates of very small $p$-values, leading to intensive computational effort. To address this issue, we develop algorithms for the accurate and efficient estimation of small $p$-values in permutation tests for paired and independent two-group genomic data. Our approaches leverage a novel framework that parameterizes the permutation sample spaces of these two types of data using the Bernoulli and conditional Bernoulli distributions, respectively, combined with the cross-entropy method. The performance of our proposed algorithms is demonstrated on two simulated datasets and two real-world gene expression datasets generated by microarray and RNA-Seq technologies, with comparisons to existing methods such as crude permutations and SAMC; the results show that our approaches achieve orders-of-magnitude gains in computational efficiency when estimating small $p$-values. Our approaches offer promising solutions for improving the computational efficiency of existing permutation test procedures and for developing new permutation-based testing methods in genomic data analysis.
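
Here is an illustrative, heavily simplified sketch of the idea for paired data: sign flips of paired differences are parameterised as independent Bernoulli draws, a crude cross-entropy step tilts them toward the rare tail event, and the small $p$-value is recovered by importance weighting. The update rule, sample sizes, and smoothing are assumptions, not the paper's algorithm.

```python
# A minimal sketch of cross-entropy-tilted importance sampling for a small
# p-value in a paired permutation (sign-flip) test.
import numpy as np

rng = np.random.default_rng(1)
d = rng.normal(0.8, 1.0, size=30)          # paired differences; H0: mean zero
t_obs = d.mean()

# Cross-entropy step: move flip probabilities toward the rare tail event,
# starting from the uniform permutation distribution p = 0.5.
p = np.full(d.size, 0.5)
for _ in range(10):
    signs = np.where(rng.uniform(size=(2000, d.size)) < p, 1.0, -1.0)
    stats = (signs * d).mean(axis=1)
    elite = signs[stats >= np.quantile(stats, 0.9)]     # top 10% of samples
    p = 0.5 * p + 0.5 * (elite > 0).mean(axis=0)        # smoothed update

# Importance sampling under the tilted distribution, with exact weights
# back to the uniform sign-flip distribution.
signs = np.where(rng.uniform(size=(200_000, d.size)) < p, 1.0, -1.0)
stats = (signs * d).mean(axis=1)
log_w = np.where(signs > 0, np.log(0.5 / p), np.log(0.5 / (1 - p))).sum(axis=1)
p_value = float(np.mean((stats >= t_obs) * np.exp(log_w)))
print(p_value)
```

Because the tilted distribution makes the tail event common, far fewer samples are needed than with crude permutations, while the importance weights keep the estimate unbiased for the uniform-permutation $p$-value.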

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
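
Since the abstract does not define the shadow precisely, the PyTorch sketch below adopts one possible reading: the shadow is an exponential moving average of past CNN outputs that regularises training alongside a standard pairwise similarity loss. The architecture, loss, and hyperparameters are all illustrative, not SRH as published.

```python
# A minimal sketch of a pairwise deep-hashing loss with an EMA "shadow"
# of the network outputs (one possible reading of the shadow concept).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    def __init__(self, n_bits=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.fc = nn.Linear(32 * 16, n_bits)

    def forward(self, x):
        return torch.tanh(self.fc(self.conv(x).flatten(1)))  # relaxed codes in (-1, 1)

def pairwise_hash_loss(codes, labels):
    """Pull codes of same-label pairs together, push different-label pairs apart."""
    sim = (labels[:, None] == labels[None, :]).float()
    ip = codes @ codes.T / codes.shape[1]          # normalised inner products
    return F.mse_loss(ip, 2 * sim - 1)             # target +1 same, -1 different

net = HashNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(8, 3, 32, 32)                      # stand-in for a CIFAR-10 batch
y = torch.randint(0, 10, (8,))
shadow = net(x).detach()                           # shadow tracks past outputs
for _ in range(5):
    codes = net(x)
    loss = pairwise_hash_loss(codes, y) + 0.1 * F.mse_loss(codes, shadow)
    opt.zero_grad(); loss.backward(); opt.step()
    shadow = 0.9 * shadow + 0.1 * codes.detach()   # EMA update of the shadow
print(loss.item())
```

At retrieval time the relaxed codes would be binarised with `torch.sign` and compared by Hamming distance, which is where the storage and lookup efficiency comes from.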
