
The classical persistence algorithm computes the unique decomposition of a persistence module implicitly given by an input simplicial filtration. Based on matrix reduction, this algorithm is a cornerstone of the emergent area of topological data analysis. Its input is a simplicial filtration defined over the integers $\mathbb{Z}$, giving rise to a $1$-parameter persistence module. It has been recognized that the multiparameter version of persistence modules, given by simplicial filtrations over $d$-dimensional integer grids $\mathbb{Z}^d$, is equally or perhaps more important in data science applications. However, in the multiparameter setting, one of the main challenges is that topological summaries based on algebraic structures such as decompositions and bottleneck distances cannot be computed as efficiently as in the $1$-parameter case, because there is no known extension of the persistence algorithm to multiparameter persistence modules. We present an efficient algorithm to compute the unique decomposition of a finitely presented persistence module $M$ defined over the multiparameter grid $\mathbb{Z}^d$. The algorithm first assumes that the module is presented with a set of $N$ generators and relations that are \emph{distinctly graded}. Based on a generalized matrix reduction technique, it runs in $O(N^{2\omega+1})$ time, where $\omega<2.373$ is the exponent for matrix multiplication. This is much better than the well-known Meataxe algorithm, which runs in $\tilde{O}(N^{6(d+1)})$ time on such an input. In practice, persistence modules are usually induced by simplicial filtrations. For such an input consisting of $n$ simplices, our algorithm runs in $O(n^{(d-1)(2\omega + 1)})$ time for $d\geq 2$. For the special case of zero-dimensional homology, it runs in $O(n^{2\omega +1})$ time.
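For contrast with the multiparameter algorithm above, here is a minimal sketch of the classical $1$-parameter matrix reduction the abstract refers to, assuming a 0/1 boundary matrix over $\mathbb{Z}_2$ with rows and columns in filtration order (the function and variable names are ours):

```python
import numpy as np

def reduce_boundary_matrix(D):
    """Classical 1-parameter persistence reduction over Z/2.

    D: (n x n) 0/1 boundary matrix of a simplicial filtration, with
    rows/columns ordered by filtration. Repeatedly adds earlier columns
    to clear shared pivots, then reads off birth-death index pairs.
    """
    D = D.copy() % 2
    low = {}     # pivot row -> index of the column owning that pivot
    pairs = []   # (birth index, death index)
    for j in range(D.shape[1]):
        while True:
            rows = np.flatnonzero(D[:, j])
            if rows.size == 0:          # column j is a creator (birth)
                break
            piv = rows[-1]              # lowest nonzero entry of column j
            if piv not in low:
                low[piv] = j
                pairs.append((piv, j))  # j kills the feature born at piv
                break
            D[:, j] = (D[:, j] + D[:, low[piv]]) % 2
    return D, pairs
```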

Related Content

In the pursuit of unused spectrum at higher frequencies, millimeter wave (mmWave) bands play a pivotal role. However, the high path loss and poor scattering associated with mmWave communications highlight the necessity of employing effective beamforming techniques. Efficiently searching for the beam to serve a user, and jointly serving multiple users, often requires a composite beam consisting of multiple disjoint lobes. A composite beam covers multiple desired angular coverage intervals (ACIs) and ideally has maximum and uniform gain (smoothness) within each desired ACI, negligible gain (leakage) outside the desired ACIs, and sharp edges. We propose an algorithm for designing such an ideal composite codebook by providing an analytical closed-form solution with low computational complexity. There is a fundamental trade-off between the gain, leakage, and smoothness of the beams; our design achieves different points on this trade-off by changing the design parameters. We highlight the shortcomings of uniform linear arrays (ULAs) in building arbitrary composite beams. Consequently, we use a recently introduced twin-ULA (TULA) antenna structure to effectively resolve these inefficiencies. Numerical results are used to validate the theoretical findings.
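As a baseline for the same design objective (not the paper's closed-form TULA solution), a composite beam for a half-wavelength ULA can be fitted by least squares to an ideal 0/1 gain mask over the desired ACIs; this sketch, with names of our own choosing, also makes the gain/leakage trade-off easy to visualize:

```python
import numpy as np

def ls_composite_beam(n_ant, acis, n_grid=720):
    """Least-squares composite beam for a half-wavelength ULA.

    acis: list of (lo, hi) angular coverage intervals in radians on
    [0, pi]. Fits the beamforming weights w to an ideal 0/1 gain mask;
    a generic baseline, not the paper's closed-form TULA design.
    """
    theta = np.linspace(0.0, np.pi, n_grid)
    # Steering matrix: A[i, k] = exp(j * pi * k * cos(theta_i))
    A = np.exp(1j * np.pi * np.outer(np.cos(theta), np.arange(n_ant)))
    mask = np.zeros(n_grid)
    for lo, hi in acis:
        mask[(theta >= lo) & (theta <= hi)] = 1.0
    w, *_ = np.linalg.lstsq(A, mask, rcond=None)
    return w, theta, np.abs(A @ w)

# Example: one composite beam with two disjoint lobes
w, theta, pattern = ls_composite_beam(32, [(0.6, 0.9), (1.8, 2.1)])
```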

In the problem of classical group testing one aims to identify a small subset (of size $d$) of diseased individuals/defective items in a large population (of size $n$). This process is based on a minimal number of suitably designed group tests on subsets of items, where the test outcome is positive iff the given test contains at least one defective item. Motivated by physical considerations, we consider a generalized setting that subsumes as special cases a variety of noiseless and noisy group-testing models in the literature: the test outcome is positive with probability $f(x)$, where $x$ is the number of defectives tested in a pool, and $f(\cdot)$ is an arbitrary monotonically increasing (stochastic) test function. Our main contributions are as follows. 1. We present a non-adaptive scheme that with probability $1-\varepsilon$ identifies all defective items. Our scheme requires at most ${\cal O}( H(f) d\log\left(\frac{n}{\varepsilon}\right))$ tests, where $H(f)$ is a suitably defined "sensitivity parameter" of $f(\cdot)$; it is never larger than ${\cal O}\left(d^{1+o(1)}\right)$, but may be substantially smaller for many $f(\cdot)$. 2. We argue that any non-adaptive group testing scheme needs at least $\Omega \left((1-\varepsilon)h(f) d\log\left(\frac n d\right)\right)$ tests to ensure reliable recovery. Here $h(f) \geq 1$ is a suitably defined "concentration parameter" of $f(\cdot)$. 3. We prove that $\frac{H(f)}{h(f)}\in\Theta(1)$ for a variety of sparse-recovery group-testing models in the literature, and $\frac {H(f)} {h(f)} \in {\cal O}\left(d^{1+o(1)}\right)$ for any other test function.
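A small simulation sketch of the generalized model (names and the Bernoulli test design are ours; the COMP decoder shown is exact only in the classical noiseless case $f(x)=\mathbb{1}\{x\geq 1\}$, not for general stochastic $f$):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_tests(A, defective, f):
    """Generalized model: test t is positive with probability f(x_t),
    where x_t is the number of defectives in pool t."""
    x = A[:, defective].sum(axis=1)
    return rng.random(A.shape[0]) < f(x)

def comp_decode(A, outcomes):
    """COMP: declare an item defective unless it appears in some
    negative test (exact only in the classical noiseless case)."""
    in_negative = A[~outcomes].any(axis=0)
    return np.flatnonzero(~in_negative)

n, d, T = 1000, 5, 300
A = rng.random((T, n)) < 1.0 / d                # Bernoulli test design
defective = rng.choice(n, size=d, replace=False)
classical = lambda x: (x >= 1).astype(float)    # noiseless f as a special case
estimate = comp_decode(A, run_tests(A, defective, classical))
```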

In this paper, preliminaries on signal flow graphs, linear time-invariant systems over F(z), and computational complexity are first introduced in detail. In order to synthesize a necessary and sufficient condition on F(z) for the general 2-path problem, a sufficient condition on F(z) (or R) and necessary conditions on F(z) are then analyzed. Moreover, an equivalent necessary and sufficient condition on R for the existence of a general 2-path is deduced in detail. Finally, the computational complexity of the algorithm that checks this equivalent condition is analyzed, which shows that the general 2-path problem is in P.
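As a purely combinatorial illustration (the paper's algebraic conditions over F(z) are not modeled here, and "general 2-path" is read, as an assumption, as a pair of vertex-disjoint directed paths), a brute-force existence check might look like the following sketch:

```python
def two_disjoint_paths(adj, s1, t1, s2, t2):
    """Brute-force existence check for vertex-disjoint directed paths
    s1 -> t1 and s2 -> t2. adj: dict node -> list of successors.
    Exponential time; illustrative only, unlike the paper's
    polynomial-time condition check."""
    def paths(s, t, banned, path):
        if s == t:
            yield path
            return
        for v in adj.get(s, []):
            if v not in banned and v not in path:
                yield from paths(v, t, banned, path + (v,))
    for p1 in paths(s1, t1, frozenset(), (s1,)):
        used = set(p1)
        if s2 in used or t2 in used:
            continue
        if next(paths(s2, t2, used, (s2,)), None) is not None:
            return True
    return False

adj = {0: [2], 1: [3], 2: [4], 3: [5]}
assert two_disjoint_paths(adj, 0, 4, 1, 5)
```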

Computations of incompressible flows with velocity boundary conditions require the solution of a Poisson equation for pressure with all-Neumann boundary conditions. Discretization of such a Poisson equation results in a rank-deficient matrix of coefficients. When a non-conservative discretization method such as a finite difference, finite element, or spectral scheme is used, such a matrix also generates an inconsistency which makes the residuals in the iterative solution saturate at a threshold level that depends on the spatial resolution and the order of the discretization scheme. In this paper, we examine this inconsistency for a high-order meshless discretization scheme suitable for solving the equations on complex domains. The high-order meshless method uses polyharmonic spline radial basis functions (PHS-RBF) with appended polynomials to interpolate scattered data, and constructs the discrete equations by collocation. The PHS-RBF provides the flexibility to vary the order of discretization by increasing the degree of the appended polynomial. In this study, we examine the convergence of the inconsistency for different spatial resolutions and for different degrees of the appended polynomials by solving the Poisson equation for a manufactured solution as well as the Navier-Stokes equations for several fluid flows. We observe that the inconsistency decreases faster than the error in the final solution, eventually becoming vanishingly small at sufficient spatial resolution. The rate of convergence of the inconsistency is observed to be similar to or better than the rate of convergence of the discretization errors. This beneficial observation makes it unnecessary to regularize the Poisson equation by fixing either the mean pressure or the pressure at an arbitrary point. A simple point solver such as SOR is seen to converge well, although it can be further accelerated using multilevel methods.
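To make the rank deficiency and the compatibility condition concrete, here is a finite-difference sketch (not the paper's PHS-RBF collocation scheme) of a 1D all-Neumann Poisson problem; the coefficient matrix has the constants in its null space, and a minimum-norm least-squares solve stands in for the iterative solver:

```python
import numpy as np

def neumann_poisson_1d(f, n):
    """Finite-difference sketch of u'' = f on [0,1] with u'(0)=u'(1)=0.

    The all-Neumann Laplacian below is rank-deficient (constants span
    its null space), and the data must satisfy a compatibility
    condition; the solution is fixed only up to an additive constant.
    """
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A[0, 0], A[0, 1] = -2.0, 2.0        # ghost-point Neumann rows
    A[-1, -1], A[-1, -2] = -2.0, 2.0
    A /= h * h
    b = f(np.linspace(0.0, 1.0, n))
    b -= b.mean()                        # enforce compatibility
    u, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution
    return u - u.mean()                  # fix the additive constant

u = neumann_poisson_1d(lambda x: np.cos(np.pi * x), 200)
```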

The purpose of this paper is to discuss a generalization of the bubble transform to differential forms. The bubble transform was discussed in a previous paper by the authors for scalar-valued functions, or zero-forms, and represents a new tool for understanding finite element spaces of arbitrary polynomial degree. The present paper contains a similar study for differential forms. From a simplicial mesh of the domain, we build a map which decomposes piecewise smooth $k$-forms into a sum of local bubbles supported on appropriate macroelements. The key properties of the decomposition are that it commutes with the exterior derivative and preserves the piecewise polynomial structure of the standard finite element spaces of $k$-forms. Furthermore, the transform is bounded in $L^2$ and also on the appropriate subspace consisting of $k$-forms with exterior derivatives in $L^2$.
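Schematically, and with notation introduced here purely for illustration ($B_f$ for the local bubble attached to a macroelement $\Omega_f$), the decomposition and its two key properties read:

```latex
u \;=\; \sum_{f} B_f u, \qquad
\operatorname{supp}(B_f u) \subseteq \Omega_f, \qquad
d\,(B_f u) \;=\; B_f\,(d u),
```

so the transform can be applied before or after exterior differentiation with the same result, while each term remains in the standard finite element space of the original polynomial degree.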

In this paper, I consider a fine-grained dichotomy for the Boolean counting constraint satisfaction problem (#CSP) under the counting version of the exponential time hypothesis (#ETH). Suppose $\mathscr{F}$ is a finite set of algebraic complex-valued functions defined on the Boolean domain. When $\mathscr{F}$ is a subset of either of two special function sets, I prove that #CSP($\mathscr{F}$) is polynomial-time solvable; otherwise it cannot be computed in sub-exponential time unless #ETH fails. I also strengthen the result by proving that the same dichotomy holds for #CSP with bounded degree (every variable appears in at most a constant number of constraints), even for #R$_3$-CSP. An important preparation before proving the result is to argue that pinning (using the two special unary functions $[1,0]$ and $[0,1]$ to reduce arity) preserves the sub-exponential lower bound of a Boolean #CSP problem. I discuss this issue by utilizing some common methods for proving #P-hardness of counting problems; the proof illustrates the internal correlation among these commonly used methods.
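For concreteness, #CSP($\mathscr{F}$) asks for the partition function $Z = \sum_{x \in \{0,1\}^n} \prod_j f_j(x|_{S_j})$; the brute-force sketch below (names are ours; exponential time, for illustration only) evaluates it directly, with the pinning functions $[1,0]$ and $[0,1]$ appearing as ordinary unary constraints:

```python
import itertools

def csp_partition_function(n_vars, constraints):
    """Brute-force #CSP: Z = sum over x in {0,1}^n of the product of
    constraint-function values. constraints: list of (f, scope), where
    f maps Boolean tuples to complex weights."""
    total = 0j
    for x in itertools.product((0, 1), repeat=n_vars):
        w = 1 + 0j
        for f, scope in constraints:
            w *= f[tuple(x[v] for v in scope)]
        total += w
    return total

# Pinning [1,0] as a unary constraint forcing variable 0 to value 0
pin0 = {(0,): 1, (1,): 0}
eq = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}  # binary equality
Z = csp_partition_function(2, [(eq, (0, 1)), (pin0, (0,))])  # Z == 1
```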

This paper addresses two questions: (a) can we identify a sensible class of 2-parameter persistence modules on which the rank invariant is complete? (b) can we determine efficiently whether a given 2-parameter persistence module belongs to this class? We provide positive answers to both questions, and our class of interest is that of rectangle-decomposable modules. Our contributions include: on the one hand, a proof that the rank invariant is complete on rectangle-decomposable modules, together with an inclusion-exclusion formula for counting the multiplicities of the summands; on the other hand, algorithms to check whether a module induced in homology by a bifiltration is rectangle-decomposable, and to decompose it in the affirmative, with a better complexity than state-of-the-art decomposition methods for general 2-parameter persistence modules. Our algorithms are backed up by a new structure theorem, whereby a 2-parameter persistence module is rectangle-decomposable if, and only if, its restrictions to squares are. This local characterization is key to the efficiency of our algorithms, and it generalizes previous conditions derived for the smaller class of block-decomposable modules. It also admits an algebraic formulation that turns out to be a weaker version of the one for block-decomposability. By contrast, we show that general interval-decomposability does not admit such a local characterization, even when locality is understood in a broad sense. Our analysis focuses on the case of modules indexed over finite grids; the more general cases are left as future work.
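As a toy illustration of why the rank invariant can be complete on this class (the encoding and names are ours, not the paper's), the rank of an internal map $M_s \to M_t$ of a rectangle-decomposable module is just the number of rectangle summands containing both indices:

```python
def rank_invariant(rectangles, s, t):
    """Rank of the internal map M_s -> M_t of a rectangle-decomposable
    module: the number of rectangle summands containing both indices.
    Rectangles are ((x0, y0), (x1, y1)) with inclusive corners, listed
    with multiplicity; s <= t componentwise is assumed."""
    def contains(r, p):
        (x0, y0), (x1, y1) = r
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    return sum(contains(r, s) and contains(r, t) for r in rectangles)

M = [((0, 0), (3, 2)), ((1, 1), (2, 4)), ((0, 0), (3, 2))]
assert rank_invariant(M, (1, 1), (2, 2)) == 3
```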

We introduce a methodology for robust Bayesian estimation with robust divergences (e.g., the density power divergence or $\gamma$-divergence), indexed by a single tuning parameter. It is well known that the posterior density induced by a robust divergence gives highly robust estimators against outliers if the tuning parameter is appropriately and carefully chosen. In a Bayesian framework, one way to find the optimal tuning parameter would be to use the evidence (marginal likelihood). However, we numerically illustrate that the evidence induced by the density power divergence does not work for selecting the optimal tuning parameter, since a robust divergence is not regarded as a statistical model. To overcome this problem, we treat the exponential of the robust divergence as an unnormalized statistical model, and we estimate the tuning parameter by minimizing the Hyvärinen score. We also provide adaptive computational methods based on sequential Monte Carlo (SMC) samplers, which enable us to obtain the optimal tuning parameter and samples from the posterior distributions simultaneously. The empirical performance of the proposed method is demonstrated through simulations and an application to real data.
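As a rough illustration of the kind of object being tuned (our own toy setup, not the paper's SMC-based procedure), the sketch below evaluates an unnormalized robust posterior for a Gaussian location model under the density power divergence loss, with the tuning parameter $\beta$ fixed by hand rather than selected via the Hyvärinen score:

```python
import numpy as np
from scipy.stats import norm

def dpd_log_posterior(theta_grid, x, beta, sigma=1.0):
    """Unnormalized robust log-posterior (flat prior) for a Gaussian
    location model under the density power divergence loss
    f^beta / beta - (1/(1+beta)) * integral of f^{1+beta}."""
    # Closed form of int N(.; theta, sigma^2)^{1+beta} for a Gaussian
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    lp = np.empty_like(theta_grid)
    for i, th in enumerate(theta_grid):
        dens = norm.pdf(x, loc=th, scale=sigma)
        lp[i] = (dens**beta).sum() / beta - len(x) * integral / (1 + beta)
    return lp

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers
lp = dpd_log_posterior(np.linspace(-2, 2, 400), x, beta=0.3)
```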

PAC-Bayes is an analysis framework in which the training error is expressed as the weighted average of the hypotheses in the posterior distribution while incorporating prior knowledge. Beyond being a pure generalization-bound analysis tool, a PAC-Bayesian bound can also be incorporated into an objective function to train a probabilistic neural network, making it a powerful and relevant framework that can numerically provide a tight generalization bound for supervised learning. For simplicity, we refer to training a probabilistic neural network with objectives derived from PAC-Bayesian bounds as {\it PAC-Bayesian learning}. Despite its empirical success, the theoretical analysis of PAC-Bayesian learning for neural networks has rarely been explored. This paper proposes a new class of convergence and generalization analyses for PAC-Bayesian learning when it is used to train over-parameterized neural networks by gradient descent. For a wide probabilistic neural network, we show that when PAC-Bayesian learning is applied, the convergence result corresponds to solving a kernel ridge regression with the probabilistic neural tangent kernel (PNTK) as its kernel. Based on this finding, we further characterize a uniform PAC-Bayesian generalization bound that improves over the Rademacher-complexity-based bound for non-probabilistic neural networks. Finally, drawing on insights from our theoretical results, we propose a proxy measure for efficient hyperparameter selection, which is proven to be time-saving.
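The correspondence with kernel ridge regression is easy to state concretely. The sketch below (an illustrative stand-in: a generic RBF kernel replaces the PNTK, which would be derived from the network architecture) shows the predictor that, per the result above, wide-network PAC-Bayesian learning converges to:

```python
import numpy as np

def kernel_ridge_predict(K_train, K_test, y, lam):
    """Kernel ridge regression: f(X*) = K(X*, X) (K + lam I)^{-1} y."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_test @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
Xs = np.linspace(-3, 3, 200)[:, None]
# RBF kernel as an illustrative stand-in for the PNTK
k = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
pred = kernel_ridge_predict(k(X, X), k(Xs, X), y, lam=1e-2)
```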

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail to train due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, whereas the previous best result was 98.71 by [16].
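A minimal NumPy sketch of the core idea (not the authors' implementation; the random node partition below stands in for the METIS-style graph clustering): sample one cluster per SGD step, keep only within-cluster edges, and run the forward and backward passes entirely inside that subgraph:

```python
import numpy as np

def cluster_gcn_step(A, X, labels, clusters, params, lr=0.1):
    """One SGD step of a toy 2-layer GCN on a single sampled cluster:
    only within-cluster edges are kept, so feature propagation and the
    gradients never leave the subgraph."""
    W1, W2 = params
    idx = clusters[np.random.randint(len(clusters))]
    A_sub = A[np.ix_(idx, idx)] + np.eye(len(idx))          # self-loops
    d = A_sub.sum(1)
    A_hat = A_sub / np.sqrt(np.outer(d, d))                 # sym. normalization
    H = np.maximum(A_hat @ X[idx] @ W1, 0.0)                # ReLU layer
    logits = A_hat @ H @ W2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    g = (p - np.eye(W2.shape[1])[labels[idx]]) / len(idx)   # softmax CE grad
    grad_W2 = H.T @ (A_hat.T @ g)
    gH = (A_hat.T @ g) @ W2.T * (H > 0)                     # backprop thru ReLU
    grad_W1 = (A_hat @ X[idx]).T @ gH
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

rng = np.random.default_rng(0)
n, f, h, c = 100, 8, 16, 3
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)                              # undirected toy graph
X, labels = rng.normal(size=(n, f)), rng.integers(0, c, n)
clusters = np.array_split(rng.permutation(n), 4)    # stand-in for METIS
params = [0.1 * rng.normal(size=(f, h)), 0.1 * rng.normal(size=(h, c))]
cluster_gcn_step(A, X, labels, clusters, params)
```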
