
In this article, we propose a one-sample test of whether the support of the unknown distribution generating the data is homologically equivalent to the support of a specified distribution; the corresponding two-sample test checks whether the supports of two unknown distributions are homologically equivalent. In the course of this study, test statistics based on the Betti numbers are formulated, and the consistency of the tests is established under the critical and the supercritical regimes. Moreover, simulation studies are conducted and the results are compared with existing methodologies such as Robinson's permutation test and the test based on mean persistent landscape functions. Furthermore, the practicability of the tests is demonstrated on two well-known real data sets.
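As a rough illustration of the Betti-number idea, the zeroth Betti number of a point cloud at a fixed scale is simply the number of connected components of its neighborhood graph. The sketch below computes it with a union-find; the radius and the two-cluster cloud are hypothetical choices, and the paper's test statistics and critical/supercritical regimes are far more involved.

```python
import numpy as np

def betti0(points, r):
    """Betti-0 (number of connected components) of the r-neighborhood
    graph on a point cloud, via a simple union-find."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= r:
                pi, pj = find(i), find(j)
                if pi != pj:
                    parent[pi] = pj
    return len({find(i) for i in range(n)})

rng = np.random.default_rng(0)
# two tight clusters -> Betti-0 is 2 at a moderate radius
cloud = np.vstack([rng.normal(0, 0.1, (20, 2)),
                   rng.normal(5, 0.1, (20, 2))])
print(betti0(cloud, 1.0))  # 2
```

At a radius large enough to bridge the clusters, the count drops to 1, which is the qualitative behavior a Betti-number-based statistic tracks across scales.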


A patch framework consists of a bipartite graph between $n$ points and $m$ local views (patches) and the $d$-dimensional local coordinates of the points provided by the views containing them. Given a patch framework, we consider the problem of finding a rigid alignment of the views, identified with an element of the product of $m$ orthogonal groups, $\mathbb{O}(d)^m$, that minimizes the alignment error. In the case when the views are noiseless, a perfect alignment exists, resulting in a realization of the points that respects the geometry of the views. The affine rigidity of such realizations, its connection with the overlapping structure of the views, and its consequences for spectral and semidefinite algorithms have been studied in related work [Zha and Zhang; Chaudhary et al.]. In this work, we characterize the non-degeneracy of a rigid alignment, consequently obtaining a characterization of the local rigidity of a realization, and convergence guarantees on Riemannian gradient descent for aligning the views. Precisely, we characterize the non-degeneracy of an alignment of (possibly noisy) local views based on the kernel and positivity of a certain matrix. Thereafter, we work in the noiseless setting. Under a mild condition on the local views, we show that the non-degeneracy and uniqueness of a perfect alignment, up to the action of $\mathbb{O}(d)$, are equivalent to the local and global rigidity of the resulting realization, respectively. This also yields a characterization of the local rigidity of a realization. We also provide necessary and sufficient conditions on the overlapping structure of the noiseless local views for their realizations to be locally/globally rigid. Finally, we focus on the Riemannian gradient descent for aligning the local views and obtain a sufficient condition on an alignment for the algorithm to converge (locally) linearly to it.
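When two views share points, the optimal orthogonal alignment of one onto the other is the classical Procrustes problem, solved by an SVD. The numpy sketch below aligns a single noiseless rotated view back to its reference coordinates; it is a one-view caricature, not the paper's joint Riemannian gradient descent over $\mathbb{O}(d)^m$.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Q minimizing ||X Q - Y||_F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(1)
pts = rng.normal(size=(10, 2))                 # reference coordinates
Q_true, _ = np.linalg.qr(rng.normal(size=(2, 2)))
view = pts @ Q_true                            # noiseless rotated local view
Q_hat = procrustes_align(view, pts)
print(np.allclose(view @ Q_hat, pts))          # True: perfect alignment
```

In the noiseless case the recovered alignment is exact, matching the abstract's statement that a perfect alignment exists and realizes the geometry of the views.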

Based on a novel dynamic Whittle likelihood approximation for locally stationary processes, a Bayesian nonparametric approach to estimating the time-varying spectral density is proposed. This dynamic frequency-domain likelihood approximation is able to depict the time-frequency evolution of the process by utilizing the moving periodogram previously introduced in the bootstrap literature. The posterior distribution is obtained by updating a bivariate extension of the Bernstein-Dirichlet process prior with the dynamic Whittle likelihood. Asymptotic properties such as sup-norm posterior consistency and L2-norm posterior contraction rates are presented. Additionally, this methodology enables model selection between stationarity and non-stationarity based on the Bayes factor. The finite-sample performance of the method is investigated in simulation studies, and applications to real-life datasets are presented.
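The time-frequency intuition can be previewed with an ordinary periodogram computed over sliding windows, a cruder cousin of the moving periodogram mentioned above; the window length, step, and drifting-frequency test signal below are arbitrary illustrative choices.

```python
import numpy as np

def moving_periodogram(x, win, step):
    """Periodograms over sliding windows: a crude time-frequency sketch,
    not the paper's exact moving-periodogram construction."""
    segs = [x[s:s + win] for s in range(0, len(x) - win + 1, step)]
    return np.array([np.abs(np.fft.rfft(seg))**2 / win for seg in segs])

rng = np.random.default_rng(2)
t = np.arange(2000)
# locally stationary toy signal: instantaneous frequency drifts over time
x = np.sin(2 * np.pi * (0.05 + 0.10 * t / len(t)) * t) + 0.1 * rng.normal(size=t.size)
P = moving_periodogram(x, win=256, step=128)
print(P.shape)  # (14, 129): windows x frequency bins
```

Tracking the location of the spectral peak across the 14 window rows exposes the frequency drift that a stationary (single-periodogram) analysis would smear out.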

Negative control is a common technique in scientific investigations and broadly refers to the situation where a null effect (a ''negative result'') is expected. Motivated by a real proteomic dataset, we present three promising and closely connected methods of using negative controls to assist simultaneous hypothesis testing. The first method uses negative controls to construct a permutation p-value for every hypothesis under investigation, and we give several sufficient conditions for such p-values to be valid and positively regression dependent on the set (PRDS) of true nulls. The second method uses negative controls to construct an estimate of the false discovery rate (FDR), and we give a sufficient condition under which the step-up procedure based on this estimate controls the FDR. The third method, derived from an existing ad hoc algorithm for proteomic analysis, uses negative controls to construct a nonparametric estimator of the local false discovery rate. We conclude with some practical suggestions and connections to some closely related methods that have been proposed recently.
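A minimal sketch of the first method's flavor: rank each observed statistic against the negative-control statistics and report a +1-corrected empirical tail proportion as its p-value. Both the controls and the "signal" statistics below are simulated placeholders, and none of the paper's validity or PRDS conditions are checked here.

```python
import numpy as np

def control_based_pvalues(stats, null_stats):
    """Empirical p-value of each statistic against negative-control
    statistics (larger = more significant); the +1 keeps p strictly > 0."""
    null_stats = np.sort(null_stats)
    m = len(null_stats)
    ge = m - np.searchsorted(null_stats, stats, side='left')  # #nulls >= stat
    return (ge + 1) / (m + 1)

rng = np.random.default_rng(3)
controls = rng.normal(size=1000)          # negative-control statistics
signals = np.array([0.0, 2.0, 4.0])       # observed statistics (placeholders)
pvals = control_based_pvalues(signals, controls)
print(pvals)
```

Larger statistics receive smaller p-values, and the p-value floor of 1/(m+1) reflects that resolution is limited by the number of negative controls available.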

In this article, we consider the problem of testing whether two latent position random graphs are correlated. We propose a test statistic based on the kernel method and introduce an estimation procedure based on the spectral decomposition of adjacency matrices. Even if no kernel function is specified, the sample graph covariance based on our proposed estimation method will converge to the population version. The asymptotic distribution of the sample covariance can also be obtained. We design a permutation procedure for testing independence and demonstrate that our proposed test statistic is consistent and valid. Our estimation method can be extended to the spectral decomposition of normalized Laplacian matrices and to inhomogeneous random graphs. Our method achieves promising results on both simulated and real data.
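The estimation step can be sketched as an adjacency spectral embedding: scale the top eigenvectors of the adjacency matrix by the square roots of the corresponding eigenvalue magnitudes, and compute kernel statistics on the embedded points. The Erdős–Rényi graph and the embedding dimension below are placeholder inputs, not the paper's setup.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Rank-d adjacency spectral embedding: top-|eigenvalue| eigenvectors
    scaled by sqrt of the eigenvalue magnitudes."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(4)
n, p = 60, 0.3
upper = rng.random((n, n)) < p
A = np.triu(upper, 1)
A = (A + A.T).astype(float)            # symmetric Erdos-Renyi adjacency
X = adjacency_spectral_embedding(A, 2)
print(X.shape)  # (60, 2)
```

The same routine applies verbatim to a normalized Laplacian in place of `A`, which is the extension the abstract mentions.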

Switch-like responses arising from bistability have been linked to cell signaling processes and memory. Revealing the shape and properties of the set of parameters that lead to bistability is necessary to understand the underlying biological mechanisms, but it is a complex mathematical problem. We present an efficient approach to determine a basic topological property of the parameter region of multistationarity, namely whether it is connected or not. The connectivity of this region can be interpreted in terms of the biological mechanisms underlying bistability and the switch-like patterns that the system can create. We provide an algorithm to assert that the parameter region of multistationarity is connected, targeting reaction networks with mass-action kinetics. We show that this is the case for numerous relevant cell signaling motifs previously described to exhibit bistability. However, we show that for a motif displaying a phosphorylation cycle with allosteric enzyme regulation, the region of multistationarity has two distinct connected components, corresponding to two different, but symmetric, biological mechanisms. The method relies on linear programming and bypasses the expensive computational cost of direct and generic approaches to study parametric polynomial systems. This characteristic makes it suitable for mass-screening of reaction networks.
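The kind of question being asked can be seen on a toy one-parameter system $\dot x = r + x - x^3$, whose multistationarity region (values of $r$ with three steady states) is a single interval, hence connected. The brute-force sampling sketch below is only illustrative; the paper's linear-programming certificate avoids such sampling entirely.

```python
import numpy as np

def num_steady_states(r, xs):
    """Count sign changes of f(x) = r + x - x**3 on a grid; each sign
    change brackets one steady state of the toy ODE x' = f(x)."""
    f = r + xs - xs**3
    return int(np.sum(f[:-1] * f[1:] < 0))

xs = np.linspace(-3.0, 3.0, 20000)
rs = np.linspace(-1.0, 1.0, 401)
multi = np.array([num_steady_states(r, xs) >= 3 for r in rs])
# connected iff the multistationary r-values form one contiguous run
runs = int(np.sum(np.diff(multi.astype(int)) == 1))
print(runs)  # 1 -> one connected component
```

For a genuine mass-action network the parameter space is high-dimensional and such grid screening is hopeless, which is exactly the cost the linear-programming approach is designed to bypass.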

We consider the goodness-of-fit testing problem for H\"older smooth densities over $\mathbb{R}^d$: given $n$ iid observations with unknown density $p$ and given a known density $p_0$, we investigate how large $\rho$ should be to distinguish, with high probability, the case $p=p_0$ from the composite alternative of all H\"older-smooth densities $p$ such that $\|p-p_0\|_t \geq \rho$ where $t \in [1,2]$. The densities are assumed to be defined over $\mathbb{R}^d$ and to have H\"older smoothness parameter $\alpha>0$. In the present work, we solve the case $\alpha \leq 1$ and handle the case $\alpha>1$ using an additional technical restriction on the densities. We identify matching upper and lower bounds on the local minimax rates of testing, given explicitly in terms of $p_0$. We propose novel test statistics which we believe could be of independent interest. We also give the first definition of an explicit cutoff $u_B$ allowing us to split $\mathbb{R}^d$ into a bulk part (defined as the subset of $\mathbb{R}^d$ where $p_0$ takes only values greater than or equal to $u_B$) and a tail part (defined as the complement of the bulk), each part involving fundamentally different contributions to the local minimax rates of testing.
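The bulk/tail decomposition can be sketched directly: given $p_0$ and a cutoff $u_B$, a sample point belongs to the bulk when $p_0$ at that point is at least $u_B$, and to the tail otherwise. The Gaussian $p_0$ and the numeric cutoff below are illustrative stand-ins, not the paper's explicit $u_B$.

```python
import numpy as np

def bulk_tail_split(x, p0, u_B):
    """Split sample points into a bulk part (p0(x) >= u_B) and a tail
    part (p0(x) < u_B); the cutoff u_B here is a hand-picked stand-in."""
    dens = p0(x)
    return x[dens >= u_B], x[dens < u_B]

p0 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal
rng = np.random.default_rng(5)
sample = rng.normal(size=1000)
bulk, tail = bulk_tail_split(sample, p0, u_B=0.05)
print(len(bulk) + len(tail))  # 1000: every point lands in exactly one part
```

For this Gaussian $p_0$ the bulk is simply an interval around the origin, while for multimodal $p_0$ it can be a union of regions; in either case the two parts drive different contributions to the minimax rate.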

Temporal data, representing chronological observations of complex systems, is a common data structure generated in many domains, such as industry, medicine, and finance. Analyzing this type of data is extremely valuable for various applications. Thus, different temporal data analysis tasks, e.g., classification, clustering, and prediction, have been proposed in the past decades. Among them, causal discovery, learning the causal relations from temporal data, is considered an interesting yet critical task and has attracted much research attention. Existing causal discovery works can be divided into two highly correlated categories according to whether the temporal data is calibrated, i.e., multivariate time series causal discovery and event sequence causal discovery. However, most previous surveys focus only on time series causal discovery and ignore the second category. In this paper, we specify the correlation between the two categories and provide a systematic overview of existing solutions. Furthermore, we provide public datasets, evaluation metrics, and new perspectives for temporal data causal discovery.

This paper studies model checking for general parametric regression models with no dimension-reduction structure on the high-dimensional vector of predictors. Using an existing test as an initial test, this paper combines the sample-splitting technique and the conditional studentization approach to construct a COnditionally Studentized Test (COST). Unlike existing tests, whether the initial test is global or local smoothing-based, and whether the dimension of the predictor vector and the number of parameters are fixed or diverge at a certain rate as the sample size goes to infinity, the proposed test always has a normal weak limit under the null hypothesis. Further, the test can detect local alternatives distinct from the null hypothesis at the fastest possible rate of convergence in hypothesis testing. We also discuss the optimal sample splitting for power performance. The numerical studies offer information on its merits and limitations in finite-sample cases. As a generic methodology, it could be applied to other testing problems.
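The split-and-studentize mechanism can be caricatured in a few lines: fit the parametric model on one half of the sample, then form a studentized average of residuals on the other half, which in this toy linear-model setup is approximately standard normal when the model is correct. This is only a schematic of sample splitting plus studentization, not the actual COST construction.

```python
import numpy as np

def split_studentized_stat(X, Y, fit_fraction=0.5):
    """Fit a linear model on the first half of the sample, then return the
    studentized mean of the residuals on the second half (a schematic of
    the split-and-studentize idea, not the exact COST statistic)."""
    n = len(Y)
    m = int(n * fit_fraction)
    beta, *_ = np.linalg.lstsq(X[:m], Y[:m], rcond=None)
    resid = Y[m:] - X[m:] @ beta
    return np.sqrt(len(resid)) * resid.mean() / resid.std(ddof=1)

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)   # null: model is correct
T = split_studentized_stat(X, Y)
print(T)
```

Because the fit and the evaluation use disjoint halves, the statistic's null behavior does not depend on the estimation error in a first-order way, which is the intuition behind the normal weak limit in the abstract.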

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes with the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes with different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism, which can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of homophily degree between node pairs, which are learned from topological and attribute information, respectively. Then we incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms the state-of-the-art methods under heterophily or low homophily, and gains competitive performance under homophily.
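As a fixed-weight caricature of the adaptive mechanism, one propagation step can reweight each edge by an attribute homophily score, here plain cosine similarity rather than the learned homophily degree of the paper.

```python
import numpy as np

def homophily_weighted_propagation(A, H):
    """One propagation step where each edge is reweighted by an attribute
    homophily score (cosine similarity); a fixed-weight sketch, not the
    paper's learned, end-to-end-trained homophily degree."""
    norms = np.linalg.norm(H, axis=1, keepdims=True) + 1e-12
    Hn = H / norms
    S = Hn @ Hn.T                        # pairwise cosine similarity
    W = A * np.clip(S, 0.0, None)        # downweight heterophilous edges
    deg = W.sum(axis=1, keepdims=True) + 1e-12
    return (W / deg) @ H                 # row-normalized aggregation

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)      # each node has one same-class
H = np.array([[1, 0], [1, 0],            # and one different-class neighbor
              [0, 1], [0, 1]], float)
out = homophily_weighted_propagation(A, H)
print(out.shape)  # (4, 2)
```

In this toy graph every node aggregates only from its same-feature neighbor (the heterophilous edge gets zero weight), so class-distinct representations are preserved instead of being averaged away as plain mean aggregation would do.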

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly.
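The confounding problem can be illustrated with one standard adjustment, inverse-propensity weighting (shown only as an illustration of why naive estimators mislead; the paper's estimators are not necessarily IPW): simulate a confounder that drives both the credit decision and repayment, then compare the naive difference in means with an IPW estimate of the true effect of 1.0.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
conf = rng.normal(size=n)                    # confounder (e.g., a risk score)
p = 1 / (1 + np.exp(-conf))                  # credit decision depends on it
treat = rng.random(n) < p
y = 1.0 * treat + 2.0 * conf + rng.normal(size=n)   # true causal effect = 1.0

naive = y[treat].mean() - y[~treat].mean()   # confounded comparison
# inverse-propensity weighting with the (here known) propensity p
ipw = np.mean(treat * y / p) - np.mean((~treat) * y / (1 - p))
print(naive, ipw)
```

The naive estimate is badly inflated because approved borrowers have systematically higher risk scores, while the weighted estimate recovers the true effect; in practice the propensity itself must be estimated, which is where model choice (linear, tree-based, neural) enters.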
