
The persistent Betti numbers are used in topological data analysis to infer the scales at which topological features appear and disappear in the filtration of a topological space, most commonly by means of the corresponding barcode or persistence diagram. While this approach to data science has been very successful, it is sensitive to outliers and does not allow for additional filtration parameters. Such parameters arise naturally when a cloud of data points comes together with additional measurements taken at the locations of the data. For these reasons, multiparameter persistent homology has recently received significant attention. In particular, the multicover and \v{C}ech bifiltrations have been introduced to overcome the aforementioned shortcomings. In this work, we establish the strong consistency and asymptotic normality of the multiparameter persistent Betti numbers in growing domains. Our asymptotic results are established in a general framework encompassing both the marked \v{C}ech bifiltration and the multicover bifiltration constructed on the null model of an independently marked Poisson point process. In a simulation study, we explain how the asymptotic normality can be used to derive goodness-of-fit tests. The statistical power of such tests is illustrated through different alternatives exhibiting more clustering or more repulsion than the null model.
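As a rough illustration of the two filtration parameters involved, the following Python sketch simulates an independently marked Poisson point process and scans the zeroth Betti number over a grid of spatial radii and mark thresholds. The window size, intensity, and parameter grid are illustrative assumptions, and this is not the estimator studied in the paper.

```python
# A minimal sketch (not the paper's estimator): zeroth Betti numbers of a
# marked Cech-style bifiltration on an independently marked Poisson sample.
# All names and parameter choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Independently marked Poisson point process on a window [0, L]^2.
L, intensity = 10.0, 2.0
n = rng.poisson(intensity * L**2)
points = rng.uniform(0.0, L, size=(n, 2))
marks = rng.uniform(0.0, 1.0, size=n)          # i.i.d. marks, independent of locations


def betti0(pts, radius):
    """Number of connected components of the union of balls of the given radius
    (for beta_0 this agrees with the Cech complex at that radius)."""
    parent = list(range(len(pts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    for i, j in zip(*np.where(np.triu(d <= 2 * radius, k=1))):
        parent[find(i)] = find(j)
    return len({find(i) for i in range(len(pts))})


# Scan the two filtration parameters: spatial scale r and mark threshold m.
radii = np.linspace(0.1, 1.0, 5)
mark_levels = np.linspace(0.2, 1.0, 5)
beta0_grid = np.array([[betti0(points[marks <= m], r) for r in radii]
                       for m in mark_levels])
print(beta0_grid)
```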

Related Content

Support Vector Machine (SVM) is one of the most popular classification methods and a de facto reference for many machine learning approaches. Its performance is determined by parameter selection, which is usually achieved by a time-consuming grid search cross-validation procedure. There exist, however, several unsupervised heuristics that exploit the characteristics of the dataset to select parameters instead of using class label information. Unsupervised heuristics, while an order of magnitude faster, are scarcely used under the assumption that their results are significantly worse than those of grid search. To challenge that assumption, we conducted a wide study of various heuristics for SVM parameter selection on over thirty datasets, in both supervised and semi-supervised scenarios. In most cases, the cross-validation grid search did not achieve a significant advantage over the heuristics. In particular, heuristic parameter selection may be preferable for high-dimensional and unbalanced datasets, or when only a small number of examples is available. Our results also show that using a heuristic to determine the starting point of further cross-validation does not yield significantly better results than the default start.
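For concreteness, here is a minimal scikit-learn sketch of such a comparison, using the median pairwise distance heuristic for the RBF width as a representative unsupervised heuristic; the dataset, grid, and cross-validation setup are illustrative, not the paper's experimental protocol.

```python
# Hedged illustration: unsupervised "median distance" heuristic for the RBF
# width versus cross-validated grid search. Choices here are assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Unsupervised heuristic: set the RBF scale from the median pairwise distance,
# using no label information at all.
sigma = np.median(pdist(X))
gamma_heur = 1.0 / (2.0 * sigma**2)
heur_score = cross_val_score(SVC(C=1.0, gamma=gamma_heur), X, y, cv=5).mean()

# Supervised baseline: exhaustive grid search over (C, gamma) with 5-fold CV.
grid = GridSearchCV(
    SVC(),
    {"C": [0.1, 1, 10, 100], "gamma": np.logspace(-4, 1, 6)},
    cv=5,
)
grid.fit(X, y)

print(f"median-heuristic accuracy: {heur_score:.3f}")
print(f"grid-search accuracy:      {grid.best_score_:.3f}")
```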

The monitoring of infrastructure assets using sensor networks is becoming increasingly prevalent. A digital twin in the form of a finite element model, as used in design and construction, can help make sense of the copious amount of collected sensor data. This paper demonstrates the application of the statistical finite element method (statFEM), which provides a consistent and principled means for synthesising data and physics-based models, in developing a digital twin of a self-sensing structure. As a case study, an instrumented steel railway bridge of 27.34 m length located along the West Coast Mainline near Staffordshire in the UK is considered. Using strain data captured from fibre Bragg grating (FBG) sensors at 108 locations along the bridge superstructure, statFEM can predict the 'true' system response while taking into account the uncertainties in sensor readings, applied loading and finite element model misspecification errors. Longitudinal strain distributions along the two main I-beams are both measured and modelled during the passage of a passenger train. The digital twin, because of its physics-based component, is able to generate reasonable strain distribution predictions at locations where no measurement data is available, including at several points along the main I-beams and on structural elements on which sensors are not even installed. The implications for long-term structural health monitoring and assessment include optimisation of sensor placement, and performing more reliable what-if analyses at locations and under loading scenarios for which no measurement data is available.
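As a schematic illustration of the data-physics synthesis idea (not the statFEM formulation itself, which places Gaussian-process priors on the model discrepancy and works with the FE discretisation), the following linear-Gaussian sketch fuses a finite element prediction with noisy sensor readings; all matrices and noise levels are assumptions.

```python
# Schematic linear-Gaussian fusion of a physics-based prediction with sensor
# data, in the spirit of statFEM; the real statFEM model is richer than this.
import numpy as np

rng = np.random.default_rng(1)

n_dof, n_sensors = 50, 10
u_fem = np.sin(np.linspace(0, np.pi, n_dof))          # FE prediction (prior mean)
P_prior = 0.05**2 * np.eye(n_dof)                     # prior covariance on the response
H = np.zeros((n_sensors, n_dof))                      # observation operator: sensed dofs
H[np.arange(n_sensors), np.linspace(0, n_dof - 1, n_sensors, dtype=int)] = 1.0
R = 0.01**2 * np.eye(n_sensors)                       # sensor-noise covariance

y = H @ u_fem + rng.normal(0, 0.01, n_sensors)        # synthetic strain readings

# Conjugate Gaussian update: posterior over the "true" response at every dof,
# including locations where no sensor is installed.
S = H @ P_prior @ H.T + R
K = P_prior @ H.T @ np.linalg.inv(S)
u_post = u_fem + K @ (y - H @ u_fem)
P_post = P_prior - K @ H @ P_prior

print("posterior std at the first few dofs:", np.sqrt(np.diag(P_post))[:5])
```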

We introduce a flexible and scalable class of Bayesian geostatistical models for discrete data, based on the class of nearest neighbor mixture transition distribution processes (NNMP), referred to as discrete NNMP. The proposed class characterizes spatial variability through a weighted combination of first-order conditional probability mass functions (pmfs), one for each of a given number of neighbors. The approach supports flexible modeling of multivariate dependence through specification of general bivariate discrete distributions that define the conditional pmfs. Moreover, the discrete NNMP allows for construction of models given a pre-specified family of marginal distributions that can vary in space, facilitating covariate inclusion. In particular, we develop a modeling and inferential framework for copula-based NNMPs that can attain flexible dependence structures, motivating the use of bivariate copula families for spatial processes. Compared to the traditional class of spatial generalized linear mixed models, where spatial dependence is introduced through a transformation of response means, our process-based modeling approach provides both computational and inferential advantages. We illustrate the benefits with synthetic data examples and an analysis of North American Breeding Bird Survey data.
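A hedged sketch of one building block suggested by the abstract: a bivariate discrete distribution defined through a Gaussian copula with Poisson marginals, and the first-order conditional pmf it induces, which a discrete NNMP would then mix over several neighbors with spatial weights. The marginals, copula family, and parameters are illustrative assumptions, not the paper's specification.

```python
# Copula-based bivariate discrete pmf and the conditional pmf it induces.
import numpy as np
from scipy.stats import multivariate_normal, norm, poisson

rho, lam = 0.6, 3.0                                   # copula correlation, Poisson mean
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])


def copula_cdf(u, v):
    """Gaussian copula C(u, v); inputs clipped away from 0 and 1 for stability."""
    u = float(np.clip(u, 1e-12, 1 - 1e-12))
    v = float(np.clip(v, 1e-12, 1 - 1e-12))
    return mvn.cdf([norm.ppf(u), norm.ppf(v)])


def joint_pmf(x, y):
    """P(X = x, Y = y) by finite differencing the copula over the marginal cdfs."""
    Fx, Fx1 = poisson.cdf(x, lam), poisson.cdf(x - 1, lam)
    Gy, Gy1 = poisson.cdf(y, lam), poisson.cdf(y - 1, lam)
    return (copula_cdf(Fx, Gy) - copula_cdf(Fx1, Gy)
            - copula_cdf(Fx, Gy1) + copula_cdf(Fx1, Gy1))


def conditional_pmf(x, y_neighbor, support=range(15)):
    """P(X = x | neighbor value), normalised over a truncated support; this is
    the first-order ingredient an NNMP mixes across neighbors."""
    return joint_pmf(x, y_neighbor) / sum(joint_pmf(k, y_neighbor) for k in support)


print([round(conditional_pmf(x, y_neighbor=5), 3) for x in range(8)])
```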

Modern technological advances have expanded the scope of applications requiring analysis of large-scale datastreams that comprise multiple indefinitely long time series. There is an acute need for statistical methodologies that perform online inference and continuously revise the model to reflect the current status of the underlying process. In this manuscript, we propose a dynamic statistical inference framework--named dynamic tracking and screening (DTS)--that is not only able to provide accurate estimates of the underlying parameters in a dynamic statistical model, but also capable of rapidly identifying irregular individual streams whose behavioral patterns deviate from the majority. Concretely, by fully exploiting the sequential nature of datastreams, we develop a robust estimation approach under a varying coefficient model framework. The procedure naturally accommodates unequally spaced design points and updates the coefficient estimates as new data arrive, without the need to store historical data. A data-driven choice of an optimal smoothing parameter is proposed accordingly. Furthermore, we suggest a new multiple testing procedure tailored to the streaming environment. The resulting DTS scheme is able to adapt to time-varying structures appropriately, track changes in the underlying models, and hence maintain high accuracy in detecting time periods during which individual streams exhibit irregular behavior. Moreover, we derive rigorous statistical guarantees for the procedure and investigate its finite-sample performance through simulation studies. We demonstrate the proposed methods through a mobile health example, estimating the times at which subjects' sleep and physical activity have an unusual influence on their mood.
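To convey the online flavour of the estimation step, here is a hedged sketch based on exponentially weighted recursive least squares, which revises coefficient estimates as each observation arrives without storing historical data; the paper's varying-coefficient procedure and its data-driven smoothing-parameter choice are more elaborate than this.

```python
# Simplified online tracker for drifting regression coefficients
# (recursive least squares with a forgetting factor). Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)


class OnlineVaryingCoef:
    def __init__(self, dim, forget=0.95):
        self.beta = np.zeros(dim)            # current coefficient estimate
        self.P = 1e3 * np.eye(dim)           # (scaled) inverse information matrix
        self.forget = forget                 # discounts old data, enabling tracking

    def update(self, x, y):
        Px = self.P @ x
        gain = Px / (self.forget + x @ Px)
        self.beta = self.beta + gain * (y - x @ self.beta)
        self.P = (self.P - np.outer(gain, Px)) / self.forget
        return self.beta


# Stream with a slowly drifting coefficient; the tracker follows it online.
model = OnlineVaryingCoef(dim=2)
for t in range(1, 2001):
    beta_true = np.array([1.0, np.sin(t / 300.0)])
    x = rng.normal(size=2)
    y = x @ beta_true + rng.normal(scale=0.1)
    est = model.update(x, y)
print("final estimate:", est, "true:", beta_true)
```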

We derive the asymptotic properties of a class of binary network models with a general link function under differential privacy with sub-Gamma noise. In this paper, we release the degree sequences of binary networks under a general noisy mechanism, with the discrete Laplace mechanism as a special case. We establish asymptotic results, including both consistency and asymptotic normality of the parameter estimator, as the number of parameters goes to infinity in this class of network models. Simulations and a real data example are provided to illustrate the asymptotic results.
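A minimal sketch of the special case named above: releasing the degree sequence of a binary network under the discrete Laplace (two-sided geometric) mechanism. The graph, the privacy budget, and the sensitivity scaling are illustrative assumptions.

```python
# Noisy release of a degree sequence via discrete Laplace noise.
import numpy as np

rng = np.random.default_rng(3)

# A random binary (undirected, simple) network and its degree sequence.
n, p = 200, 0.05
A = np.triu(rng.random((n, n)) < p, k=1).astype(int)
A = A + A.T
degrees = A.sum(axis=1)

# Discrete Laplace noise: difference of two geometric variables with
# parameter q = exp(-epsilon / sensitivity). The sensitivity of 2 used here
# is an assumption; the right scaling depends on the neighbouring relation.
epsilon = 1.0
q = np.exp(-epsilon / 2.0)
noise = rng.geometric(1 - q, size=n) - rng.geometric(1 - q, size=n)
noisy_degrees = degrees + noise

print("true degrees:    ", degrees[:10])
print("released degrees:", noisy_degrees[:10])
```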

We consider the problem of estimating a $d$-dimensional discrete distribution from its samples observed under a $b$-bit communication constraint. In contrast to most previous results that largely focus on the global minimax error, we study the local behavior of the estimation error and provide \emph{pointwise} bounds that depend on the target distribution $p$. In particular, we show that the $\ell_2$ error decays as $O\left(\frac{\lVert p\rVert_{1/2}}{n2^b}\vee \frac{1}{n}\right)$ (in this paper, we use $a\vee b$ and $a \wedge b$ to denote $\max(a, b)$ and $\min(a,b)$, respectively) when $n$ is sufficiently large, so the error is governed by the \emph{half-norm} of $p$ instead of the ambient dimension $d$. For the achievability result, we propose a two-round sequentially interactive estimation scheme that achieves this error rate uniformly over all $p$. Our scheme is based on a novel local refinement idea: we first use a standard global minimax scheme to localize $p$ and then use the remaining samples to locally refine our estimate. We also develop a new local minimax lower bound with (almost) matching $\ell_2$ error, showing that any interactive scheme must admit an $\Omega\left( \frac{\lVert p \rVert_{{(1+\delta)}/{2}}}{n2^b}\right)$ $\ell_2$ error for any $\delta > 0$. The lower bound is derived by first finding the best parametric sub-model containing $p$, and then upper bounding the quantized Fisher information under this model. Our upper and lower bounds together indicate that $\mathcal{H}_{1/2}(p) = \log(\lVert p \rVert_{1/2})$ bits of communication are both sufficient and necessary to achieve the optimal (centralized) performance, where $\mathcal{H}_{1/2}(p)$ is the R\'enyi entropy of order $1/2$. Therefore, under the $\ell_2$ loss, the correct measure of the local communication complexity at $p$ is its R\'enyi entropy.
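For reference, the identity connecting the half-norm to the Rényi entropy of order $1/2$ can be written out as follows.

```latex
% Why the log of the half-norm is a Renyi entropy of order 1/2:
\begin{align*}
  \lVert p \rVert_{1/2} &= \Big(\sum_{i=1}^{d} p_i^{1/2}\Big)^{2}, \\
  \mathcal{H}_{1/2}(p)  &= \frac{1}{1-\tfrac12}\,\log \sum_{i=1}^{d} p_i^{1/2}
                         = \log \Big(\sum_{i=1}^{d} \sqrt{p_i}\Big)^{2}
                         = \log \lVert p \rVert_{1/2}.
\end{align*}
```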

We consider the Bayesian multiple hypothesis testing problem with independent and identically distributed observations. The classical analysis of the error probability, based on Sanov's theorem, allows one to characterize the best achievable error exponent. However, this analysis does not generalize to the case where the true distributions of the hypotheses are not known exactly, but only partially through some nominal distributions. This problem has practical significance, because the nominal distributions may be quantized versions of the true distributions in a hardware implementation, or they may be estimates of the true distributions obtained from labeled training sequences, as in statistical classification. In this paper, we develop a type-based analysis of the Bayesian multiple hypothesis testing problem. Our analysis allows one to explicitly calculate the error exponent of a given type and extends the classical analysis. As a generalization of the proposed method, we derive a robust test and obtain its error exponent for the case where the hypothesis distributions are not known but nominal distributions exist that are close to the true distributions in variational distance.
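For orientation, the classical benchmark with exactly known distributions, which the type-based analysis extends, can be stated in terms of the Chernoff information; the compact form below assumes finite alphabets and fixed positive priors.

```latex
% Classical benchmark with known hypothesis distributions P_1, ..., P_M:
% the optimal Bayesian error probability satisfies
\[
  \lim_{n \to \infty} -\frac{1}{n} \log P_e^{(n)}
  \;=\; \min_{i \neq j}\, C(P_i, P_j),
  \qquad
  C(P, Q) \;=\; -\min_{0 \le s \le 1} \log \sum_{x} P(x)^{s} Q(x)^{1-s},
\]
% i.e. the exponent is set by the closest pair of hypotheses through the
% Chernoff information.
```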

This paper is devoted to condition numbers of the total least squares problem with a linear equality constraint (TLSE). Using novel limit techniques, we derive closed formulae for the normwise, mixed, and componentwise condition numbers of the TLSE problem. Computable expressions and upper bounds for these condition numbers are also given to avoid costly Kronecker product-based operations. The results unify those for the TLS problem. For TLSE problems with equilibratory input data, numerical experiments illustrate that the estimate based on the normwise condition number is sharp for evaluating the forward error of the solution, while for sparse and badly scaled matrices, estimates based on the mixed and componentwise condition numbers are much tighter.

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
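A short sketch of the resulting re-weighting rule: per-class weights proportional to the inverse effective number of samples, normalised so that they sum to the number of classes. The class counts below are illustrative.

```python
# Class-balanced weights from the effective number of samples (1 - beta^n) / (1 - beta).
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - beta**n) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()   # normalise to sum to the number of classes

counts = [5000, 1000, 100, 10]                # long-tailed class frequencies (example)
print(class_balanced_weights(counts, beta=0.999))
# Multiply these per-class weights into the usual loss (e.g. cross-entropy)
# to obtain a class-balanced loss.
```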

Recently, caption generation with an encoder-decoder framework has been extensively studied and applied in different domains, such as image captioning, code captioning, and so on. In this paper, we propose a novel architecture, namely the Auto-Reconstructor Network (ARNet), which, coupled with the conventional encoder-decoder framework, works in an end-to-end fashion to generate captions. ARNet reconstructs the previous hidden state from the current one, in addition to acting as an input-dependent transition operator. Therefore, ARNet encourages the current hidden state to embed more information from the previous one, which helps regularize the transition dynamics of recurrent neural networks (RNNs). Extensive experimental results show that the proposed ARNet boosts the performance of existing encoder-decoder models on both image captioning and source code captioning tasks. Additionally, ARNet remarkably reduces the discrepancy between the training and inference processes for caption generation. Furthermore, the performance on permuted sequential MNIST demonstrates that ARNet can effectively regularize RNNs, especially for modeling long-term dependencies. Our code is available at: //github.com/chenxinpeng/ARNet
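A hedged PyTorch sketch of the core idea, assuming a single-layer LSTM decoder and an LSTM-cell reconstructor (the paper's exact architecture and loss weighting may differ): the reconstructor maps the current hidden state back toward the previous one, and the reconstruction error is added to the captioning loss as a regularizer.

```python
# Toy ARNet-style reconstruction regulariser on an LSTM decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, embed, hidden = 1000, 128, 256
embedder = nn.Embedding(vocab, embed)
decoder = nn.LSTMCell(embed, hidden)
classifier = nn.Linear(hidden, vocab)
reconstructor = nn.LSTMCell(hidden, hidden)   # maps h_t back toward h_{t-1}

tokens = torch.randint(0, vocab, (4, 12))     # (batch, time) toy caption batch
h = c = torch.zeros(4, hidden)
rh = rc = torch.zeros(4, hidden)
caption_loss, recon_loss, prev_h = 0.0, 0.0, None

for t in range(tokens.size(1) - 1):
    h, c = decoder(embedder(tokens[:, t]), (h, c))
    caption_loss = caption_loss + F.cross_entropy(classifier(h), tokens[:, t + 1])
    if prev_h is not None:
        rh, rc = reconstructor(h, (rh, rc))   # reconstruct the previous hidden state
        recon_loss = recon_loss + F.mse_loss(rh, prev_h.detach())
    prev_h = h

loss = caption_loss + 0.01 * recon_loss       # weighted reconstruction regulariser
loss.backward()
```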
