We provide numerical procedures for evaluating the sum of a positive series as accurately as possible. Our procedures are based on the application of a generalized version of Kummer's test.
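
As an illustration of the kind of quantity such a test works with, the sketch below (an assumption on our part, not code from the paper) computes the classical Kummer sequence $K_n = p_n a_n / a_{n+1} - p_{n+1}$ for a sample series; a positive limit inferior certifies convergence of $\sum a_n$.

```python
import numpy as np

def kummer_sequence(a, p):
    """Compute the Kummer sequence K_n = p_n * a_n / a_{n+1} - p_{n+1}.

    For a positive series sum(a_n) and a positive auxiliary sequence p_n,
    liminf K_n > 0 certifies convergence (classical Kummer test).
    """
    a, p = np.asarray(a, dtype=float), np.asarray(p, dtype=float)
    return p[:-1] * a[:-1] / a[1:] - p[1:]

# Example: a_n = 1/n^2 with the auxiliary choice p_n = n (a Raabe-like test).
n = np.arange(1, 50, dtype=float)
K = kummer_sequence(1.0 / n**2, n)
print(K[-5:])   # values approach 1 > 0, consistent with convergence
```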

This paper studies the $\tau$-coherence of an $(n \times p)$ observation matrix in a Gaussian framework. The $\tau$-coherence is defined as the largest magnitude, outside a diagonal band of width $\tau$, of the empirical correlation coefficients associated with the observations. Using the Chen-Stein method, we derive the limiting law of the normalized coherence and show convergence towards a Gumbel distribution. We generalize the results of Cai and Jiang [CJ11a], assuming that the covariance matrix of the model is banded. Moreover, we provide numerical considerations highlighting issues arising from the high-dimensional setting. We numerically illustrate the asymptotic behaviour of the coherence with Monte Carlo experiments, using an HPC splitting strategy for high-dimensional correlation matrices.
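
For concreteness, here is a minimal sketch of how the $\tau$-coherence of an observation matrix can be computed empirically; the function name and the toy experiment under the null are illustrative, not taken from the paper.

```python
import numpy as np

def tau_coherence(X, tau):
    """Largest absolute empirical correlation between columns i, j with |i - j| > tau.

    X is an (n x p) observation matrix; correlations are computed between columns.
    """
    R = np.corrcoef(X, rowvar=False)        # (p x p) empirical correlation matrix
    p = R.shape[0]
    i, j = np.indices((p, p))
    off_band = np.abs(i - j) > tau          # entries outside the diagonal band
    return np.max(np.abs(R[off_band]))

# Toy illustration under the null (independent Gaussian entries).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))
print(tau_coherence(X, tau=3))
```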

Quantification is the research field that studies methods for counting the number of data points that belong to each class in an unlabeled sample. Traditionally, researchers in this field assume the availability of labeled observations for all classes when inducing a quantification model. However, we often face situations where the number of classes is large or even unknown, or where reliable data are available for only a single class. When inducing a multi-class quantifier is infeasible, we are often concerned with estimates for a specific class of interest. In this context, we have proposed a novel setting known as One-class Quantification (OCQ). In parallel, Positive and Unlabeled Learning (PUL), another branch of machine learning, has offered solutions that can be applied to OCQ, even though quantification is not the focal point of PUL. This article closes the gap between PUL and OCQ and brings both areas together under a unified view. We compare our method, Passive Aggressive Threshold (PAT), against PUL methods and show that PAT is generally the fastest and most accurate algorithm. PAT induces quantification models that can be reused to quantify different samples of data. We additionally introduce Exhaustive TIcE (ExTIcE), an improved version of the PUL algorithm Tree Induction for c Estimation (TIcE). We show that ExTIcE quantifies more accurately than PAT and the other assessed algorithms in scenarios where several negative observations are identical to the positive ones.
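
As a rough illustration of the one-class quantification setting (not the PAT algorithm itself), the sketch below fits a one-class scorer on positive data only and estimates the positive proportion in an unlabeled sample via a recall-corrected threshold count; the function names and the choice of IsolationForest are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def one_class_quantify(pos_train, pos_holdout, unlabeled, threshold_quantile=0.1):
    """Estimate the proportion of positives in `unlabeled` using only positive data.

    Generic threshold-and-count scheme: fit a one-class scorer on positives,
    pick a score threshold from held-out positives, count unlabeled points
    above it, and correct by the recall of the threshold on positives.
    """
    scorer = IsolationForest(random_state=0).fit(pos_train)
    holdout_scores = scorer.score_samples(pos_holdout)
    thr = np.quantile(holdout_scores, threshold_quantile)
    recall = np.mean(holdout_scores >= thr)                  # ~ 1 - threshold_quantile
    raw_count = np.mean(scorer.score_samples(unlabeled) >= thr)
    return min(1.0, raw_count / recall)                      # recall-corrected estimate
```

Note that this simple scheme ignores false positives among the negatives, which is precisely the regime where more refined estimators are needed.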

This article derives closed-form parametric formulas for the Minkowski sums of convex bodies in d-dimensional Euclidean space with boundaries that are smooth and have all positive sectional curvatures at every point. Under these conditions, there is a unique relationship between the position of each boundary point and the surface normal. The main results are presented as two theorems. The first theorem directly parameterizes the Minkowski sums using the unit normal vector at each surface point. Although simple to express mathematically, such a parameterization is not always practical to obtain computationally. Therefore, the second theorem derives a more useful parametric closed-form expression using the gradient that is not normalized. In the special case of two ellipsoids, the proposed expressions are identical to those derived previously using geometric interpretations. In order to examine the results, numerical validations and comparisons of the Minkowski sums between two superquadric bodies are conducted. Applications to generate configuration space obstacles in motion planning problems and to improve optimization-based collision detection algorithms are introduced and demonstrated.
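
A minimal sketch of the idea for two ellipsoids, using the standard support-point formula $x_i(n) = A_i^{-1} n / \sqrt{n^\top A_i^{-1} n}$ for the boundary point of $\{x : x^\top A_i x = 1\}$ with outward normal $n$; the example matrices are illustrative.

```python
import numpy as np

def ellipsoid_support_point(A, n):
    """Boundary point of {x : x^T A x = 1} whose outward normal is parallel to n."""
    v = np.linalg.solve(A, n)                   # A^{-1} n
    return v / np.sqrt(n @ v)

def minkowski_sum_point(A1, A2, n):
    """Boundary point of the Minkowski sum of two ellipsoids in direction n.

    The support point of a sum of convex bodies is the sum of their
    support points at the same outward normal.
    """
    return ellipsoid_support_point(A1, n) + ellipsoid_support_point(A2, n)

# Sweep unit normals to trace the boundary of the Minkowski sum of two 3D ellipsoids.
A1 = np.diag([1.0, 4.0, 9.0])                   # semi-axes 1, 1/2, 1/3
A2 = np.diag([2.0, 1.0, 0.5])
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(minkowski_sum_point(A1, A2, n))
```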

We present a new approach to detecting projective equivalences and symmetries of rational parametric 3D curves. To detect projective equivalences, we first derive two projective differential invariants that are also invariant under the changes of parameter known as M\"obius transformations. Given two rational curves, we form a system consisting of two homogeneous polynomials in four variables using the projective differential invariants. The solutions of the system yield the M\"obius transformations, each of which corresponds to a projective equivalence. If the two input curves are the same, our method detects the projective symmetries of the input curve. Our method is substantially faster than methods addressing a similar problem and provides solutions even for curves of degree up to 24 with coefficients of up to 78 digits.
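
To make the role of the reparameterizations concrete, the following sketch (illustrative only, not the paper's detection algorithm) applies a M\"obius change of parameter $t \mapsto (at+b)/(ct+d)$ to a rational space curve symbolically.

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')

# A rational space curve r(t) = (x(t), y(t), z(t)); example: a twisted cubic.
r = sp.Matrix([t, t**2, t**3])

# A Moebius change of parameter t -> (a t + b) / (c t + d), with a d - b c != 0.
phi = (a*t + b) / (c*t + d)
r_reparam = r.subs(t, phi).applyfunc(sp.simplify)

print(r_reparam)   # the same curve traced with the reparameterized argument
```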

Phase-type (PH) distributions are a popular tool for the analysis of univariate risks in numerous actuarial applications. Their multivariate counterparts (MPH$^\ast$), however, have not seen such a proliferation, due to a lack of explicit formulas and complicated estimation procedures. A simple construction of multivariate phase-type distributions -- mPH -- is proposed for the parametric description of multivariate risks, leading to models of considerable probabilistic flexibility and statistical tractability. The main idea is to start different Markov processes at the same state and allow them to evolve independently thereafter, leading to dependent absorption times. By dimension-augmentation arguments, this construction can be cast under the umbrella of the MPH$^\ast$ class, but it enjoys explicit formulas which the general specification lacks, including common measures of dependence. Moreover, it is shown that the class is still rich enough to be dense in the set of multivariate risks supported on the positive orthant, and it is the smallest known sub-class with this property. In particular, the latter result provides a new, short proof of the denseness of the MPH$^\ast$ class. In practice this means that the mPH class allows for the modeling of bivariate risks with any given correlation or copula. We derive an EM algorithm for its statistical estimation and illustrate it on bivariate insurance data. Extensions to more general settings are outlined.
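
Below is a minimal simulation sketch of the construction, assuming standard phase-type conventions (sub-intensity matrices with strictly negative diagonals); the two chains share a common random initial state and then evolve independently, which is what induces dependence between their absorption times. The matrices and initial distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_absorption_time(T, state):
    """Absorption time of a CTMC with sub-intensity matrix T started in `state`."""
    exit_rates = -np.diag(T)
    time = 0.0
    while state is not None:
        time += rng.exponential(1.0 / exit_rates[state])
        rates = T[state].copy()
        rates[state] = 0.0                                   # rates to other transient states
        rates = np.append(rates, exit_rates[state] - rates.sum())  # absorption rate
        nxt = rng.choice(len(rates), p=rates / rates.sum())
        state = nxt if nxt < len(exit_rates) else None       # last index = absorption
    return time

def sample_mph(pi, T1, T2):
    """One draw of a bivariate mPH vector: common start state, independent evolution."""
    start = rng.choice(len(pi), p=pi)
    return sample_absorption_time(T1, start), sample_absorption_time(T2, start)

pi = np.array([0.5, 0.5])
T1 = np.array([[-3.0, 1.0], [0.5, -2.0]])
T2 = np.array([[-1.0, 0.2], [0.1, -4.0]])
draws = np.array([sample_mph(pi, T1, T2) for _ in range(5000)])
print(np.corrcoef(draws.T)[0, 1])   # dependence induced by the shared start state
```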

Graph generative models are a highly active branch of machine learning. Given the steady development of new models of ever-increasing complexity, it is necessary to provide a principled way to evaluate and compare them. In this paper, we enumerate the desirable criteria for such a comparison metric and provide an overview of the status quo of graph generative model comparison in use today, which predominantly relies on maximum mean discrepancy (MMD). We perform a systematic evaluation of MMD in the context of graph generative model comparison, highlighting some of the challenges and pitfalls researchers may inadvertently encounter. After conducting a thorough analysis of the behaviour of MMD on synthetically generated perturbed graphs as well as on recently proposed graph generative models, we are able to provide a suitable procedure to mitigate these challenges and pitfalls. We aggregate our findings into a list of practical recommendations for researchers to use when evaluating graph generative models.
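
For reference, here is a minimal sketch of the kind of MMD computation typically used to compare graph generative models: graphs are mapped to descriptors (here, fixed-length degree histograms) and a kernel MMD is computed between the reference and generated sets. The descriptor, kernel, and bandwidth choices are precisely the degrees of freedom whose pitfalls the paper examines; everything in the snippet is illustrative.

```python
import numpy as np
import networkx as nx

def degree_histogram(G, max_degree=20):
    """Fixed-length, normalized degree histogram used as a graph descriptor."""
    h = np.zeros(max_degree + 1)
    for _, deg in G.degree():
        h[min(deg, max_degree)] += 1
    return h / h.sum()

def gaussian_mmd2(X, Y, sigma=1.0):
    """Biased squared MMD between descriptor sets X and Y with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Compare two sets of random graphs through their degree-histogram descriptors.
ref = np.stack([degree_histogram(nx.erdos_renyi_graph(100, 0.05, seed=s)) for s in range(30)])
gen = np.stack([degree_histogram(nx.erdos_renyi_graph(100, 0.08, seed=s)) for s in range(30)])
print(gaussian_mmd2(ref, gen, sigma=0.1))
```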

The planted densest subgraph detection problem refers to the task of testing whether a given (random) graph contains a subgraph that is unusually dense. Specifically, we observe an undirected and unweighted graph on $n$ nodes. Under the null hypothesis, the graph is a realization of an Erd\H{o}s-R\'{e}nyi graph with edge probability (or, density) $q$. Under the alternative, there is a subgraph on $k$ vertices with edge probability $p>q$. The statistical as well as the computational barriers of this problem are well understood for a wide range of the edge parameters $p$ and $q$. In this paper, we consider a natural variant of the above problem, where one can only observe a small part of the graph using adaptive edge queries. For this model, we determine the number of queries necessary and sufficient for detecting the presence of the planted subgraph. Specifically, we show that any (possibly randomized) algorithm must make $\mathsf{Q} = \Omega(\frac{n^2}{k^2\chi^4(p||q)}\log^2n)$ adaptive queries (in expectation) to the adjacency matrix of the graph in order to detect the planted subgraph with probability more than $1/2$, where $\chi^2(p||q)$ is the chi-square distance. On the other hand, we devise a quasi-polynomial-time algorithm that detects the planted subgraph with high probability by making $\mathsf{Q} = O(\frac{n^2}{k^2\chi^4(p||q)}\log^2n)$ non-adaptive queries. We then propose a polynomial-time algorithm which is able to detect the planted subgraph using $\mathsf{Q} = O(\frac{n^3}{k^3\chi^2(p||q)}\log^3 n)$ queries. We conjecture that in the leftover regime, where $\frac{n^2}{k^2}\ll\mathsf{Q}\ll \frac{n^3}{k^3}$, no polynomial-time algorithms exist. Our results resolve two questions posed in \cite{racz2020finding}, where the special case of adaptive detection and recovery of a planted clique was considered.
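
As a rough illustration of the query model (and emphatically not the algorithms analyzed in the paper), the sketch below samples the planted model and runs a naive test that queries uniformly random pairs and compares the edge count to its null expectation; the chi-square distance between Bernoulli densities, $\chi^2(p||q) = (p-q)^2/(q(1-q))$, is included for reference. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi_square_bernoulli(p, q):
    """Chi-square distance between Bernoulli(p) and Bernoulli(q)."""
    return (p - q) ** 2 / (q * (1 - q))

def sample_graph(n, k, p, q, planted):
    """Adjacency matrix: Erdos-Renyi(q), optionally with a planted k-subgraph of density p."""
    A = (rng.random((n, n)) < q).astype(int)
    if planted:
        S = rng.choice(n, size=k, replace=False)
        A[np.ix_(S, S)] = (rng.random((k, k)) < p).astype(int)
    A = np.triu(A, 1)
    return A + A.T

def naive_query_test(A, Q, q):
    """Query Q uniformly random pairs and flag a planted subgraph if the
    observed edge count exceeds the null mean by a few standard deviations."""
    n = A.shape[0]
    i = rng.integers(0, n, size=Q)
    j = rng.integers(0, n, size=Q)
    count = A[i, j].sum()
    return count > Q * q + 3 * np.sqrt(Q * q * (1 - q))

A_null = sample_graph(1000, 100, 0.5, 0.1, planted=False)
A_alt = sample_graph(1000, 100, 0.5, 0.1, planted=True)
print(naive_query_test(A_null, 100_000, 0.1), naive_query_test(A_alt, 100_000, 0.1))
```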

We propose BERTScore, an automatic evaluation metric for text generation. Analogous to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference. However, instead of looking for exact matches, we compute similarity using contextualized BERT embeddings. We evaluate on several machine translation and image captioning benchmarks, and show that BERTScore correlates better with human judgments than existing metrics, often significantly outperforming even task-specific supervised metrics.
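
Here is a simplified sketch of the greedy-matching computation behind BERTScore, using Hugging Face `transformers` embeddings from the last BERT layer; the official implementation additionally uses a specific intermediate layer, optional idf weighting, and baseline rescaling, none of which are shown here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence):
    """Contextual token embeddings from the last BERT layer (special tokens dropped)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0, 1:-1]   # drop [CLS], [SEP]
    return torch.nn.functional.normalize(hidden, dim=-1)

def bertscore_f1(candidate, reference):
    """Greedy-matching F1 over pairwise cosine similarities of token embeddings."""
    C, R = embed(candidate), embed(reference)
    sim = C @ R.T                                 # cosine similarities (unit-normalized)
    precision = sim.max(dim=1).values.mean()      # each candidate token -> best reference match
    recall = sim.max(dim=0).values.mean()         # each reference token -> best candidate match
    return (2 * precision * recall / (precision + recall)).item()

print(bertscore_f1("the cat sat on the mat", "a cat was sitting on the mat"))
```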

We show that for the problem of testing if a matrix $A \in F^{n \times n}$ has rank at most $d$, or requires changing an $\epsilon$-fraction of entries to have rank at most $d$, there is a non-adaptive query algorithm making $\widetilde{O}(d^2/\epsilon)$ queries. Our algorithm works for any field $F$. This improves upon the previous $O(d^2/\epsilon^2)$ bound (SODA'03), and bypasses an $\Omega(d^2/\epsilon^2)$ lower bound of (KDD'14) which holds if the algorithm is required to read a submatrix. Our algorithm is the first such algorithm which does not read a submatrix, and instead reads a carefully selected non-adaptive pattern of entries in rows and columns of $A$. We complement our algorithm with a matching query complexity lower bound for non-adaptive testers over any field. We also give tight bounds of $\widetilde{\Theta}(d^2)$ queries in the sensing model, in which query access comes in the form of $\langle X_i, A\rangle := \mathrm{tr}(X_i^\top A)$; perhaps surprisingly, these bounds do not depend on $\epsilon$. We next develop a novel property testing framework for testing numerical properties of a real-valued matrix $A$ more generally, which includes the stable rank, Schatten-$p$ norms, and SVD entropy. Specifically, we propose a bounded entry model, where $A$ is required to have entries bounded by $1$ in absolute value. We give upper and lower bounds for a wide range of problems in this model, and discuss connections to the sensing model above.
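
For intuition, the sketch below implements the simple baseline tester that the non-adaptive bound improves upon: sample a random $O(d/\epsilon) \times O(d/\epsilon)$ submatrix and check whether its rank already exceeds $d$; a matrix that is $\epsilon$-far from rank $d$ is likely to contain such a witness. The constant and the experiment are illustrative, and this is not the paper's row-and-column sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def submatrix_rank_test(A, d, eps, c=4):
    """Accept (True) iff a random ~(c*d/eps) x (c*d/eps) submatrix has rank <= d."""
    n = A.shape[0]
    m = min(n, int(np.ceil(c * d / eps)))
    rows = rng.choice(n, size=m, replace=False)
    cols = rng.choice(n, size=m, replace=False)
    return np.linalg.matrix_rank(A[np.ix_(rows, cols)]) <= d

# Accepts a genuinely rank-3 matrix; rejects a random full-rank one with high probability.
B = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 200))
print(submatrix_rank_test(B, d=3, eps=0.1))                                  # True
print(submatrix_rank_test(rng.standard_normal((200, 200)), d=3, eps=0.1))    # False
```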

We present a challenging and realistic novel dataset for evaluating 6-DOF object tracking algorithms. Existing datasets show serious limitations---notably, unrealistic synthetic data, or real data with large fiducial markers---preventing the community from obtaining an accurate picture of the state of the art. Our key contribution is a novel pipeline for acquiring accurate ground-truth poses of real objects with respect to a Kinect V2 sensor by using a commercial motion capture system. A total of 100 calibrated sequences of real objects are acquired in three different scenarios to evaluate the performance of trackers: stability, robustness to occlusion, and accuracy during challenging interactions between a person and the object. We conduct an extensive study of a deep 6-DOF tracking architecture and determine a set of optimal parameters. We enhance the architecture and the training methodology to train a 6-DOF tracker that robustly generalizes to objects never seen during training, and we demonstrate favorable performance compared to previous approaches trained specifically on the objects to track.
