
We introduce a new real-valued invariant called the natural slope of a hyperbolic knot in the 3-sphere, which is defined in terms of its cusp geometry. We show that twice the knot signature and the natural slope differ by at most a constant times the hyperbolic volume divided by the cube of the injectivity radius. This inequality was discovered using machine learning to detect relationships between various knot invariants. It has applications to Dehn surgery and to 4-ball genus. We also show a refined version of the inequality where the upper bound is a linear function of the volume, and the slope is corrected by terms corresponding to short geodesics that link the knot an odd number of times.
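
To make the stated bound concrete, the main inequality reads as follows in symbols (a schematic rendering; the notation $\sigma(K)$, $\operatorname{slope}(K)$, $\operatorname{vol}(K)$, $\operatorname{inj}(K)$ and the constant $c$ are illustrative, not necessarily the paper's):

```latex
% Twice the signature and the natural slope differ by at most a constant
% multiple of the volume over the cube of the injectivity radius.
\[
  \bigl|\, 2\,\sigma(K) - \operatorname{slope}(K) \,\bigr|
  \;\le\;
  c \, \frac{\operatorname{vol}(K)}{\operatorname{inj}(K)^{3}} .
\]
```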

Related Content

Machine Learning is an international forum for research on computational learning methods. The journal publishes articles reporting substantive results on a wide range of learning methods applied to a variety of learning problems. It features papers that describe research on problems and methods, applications research, and issues of research methodology. Papers on learning problems or methods offer solid support through empirical studies, theoretical analysis, or comparison with psychological phenomena. Application papers show how learning methods can be applied to solve important application problems. Research-methodology papers improve how machine learning research is conducted. All papers describe their supporting evidence in ways that other researchers can verify or replicate. Papers also detail the components of learning and discuss assumptions about knowledge representation and the performance task. Official website:

The need for efficiently comparing and representing datasets with unknown alignment spans various fields, from model analysis and comparison in machine learning to trend discovery in collections of medical datasets. We use manifold learning to compare the intrinsic geometric structures of different datasets by comparing their diffusion operators, symmetric positive-definite (SPD) matrices that relate to approximations of the continuous Laplace-Beltrami operator from discrete samples. Existing methods typically compare such operators in a pointwise manner or assume known data alignment. Instead, we exploit the Riemannian geometry of SPD matrices to compare these operators and define a new theoretically-motivated distance based on a lower bound of the log-Euclidean metric. Our framework facilitates comparison of data manifolds expressed in datasets with different sizes, numbers of features, and measurement modalities. Our log-Euclidean signature (LES) distance recovers meaningful structural differences, outperforming competing methods in various application domains.
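
As a rough illustration of the idea (a minimal sketch, not the authors' implementation: the function names, the truncation to the k largest eigenvalues, and the eps regularization are our assumptions), the log-Euclidean metric and a spectral lower bound of it can be computed as follows; the lower bound depends only on each operator's spectrum and therefore remains defined when the two operators have different sizes:

```python
import numpy as np

def log_euclidean(A, B):
    """Log-Euclidean metric between equal-sized SPD matrices: the
    Frobenius norm of the difference of their matrix logarithms."""
    def logm_spd(M):
        w, V = np.linalg.eigh(M)       # eigendecomposition of an SPD matrix
        return (V * np.log(w)) @ V.T   # V diag(log w) V^T
    return np.linalg.norm(logm_spd(A) - logm_spd(B), "fro")

def les_lower_bound(A, B, k=20, eps=1e-12):
    """Spectral lower bound on the log-Euclidean metric (by a Lidskii-type
    eigenvalue inequality): compare the k largest log-eigenvalues of each
    operator. Since only spectra are used, A and B may have different
    sizes (k should not exceed either operator's size)."""
    a = np.sort(np.linalg.eigvalsh(A))[::-1][:k]
    b = np.sort(np.linalg.eigvalsh(B))[::-1][:k]
    return np.linalg.norm(np.log(a + eps) - np.log(b + eps))
```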

In every finite mixture of distinct normal distributions, exactly one of the component distributions not only is over-represented in the right tail of the mixture but completely overwhelms all other subpopulations in the rightmost tail. This property, although not unique to normal distributions, is not shared by other common continuous, centrally symmetric, unimodal distributions such as the Laplace distribution, nor even by other bell-shaped distributions such as the Cauchy (Lorentz) distribution.
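
A quick numerical illustration of the claim (our own toy example, with arbitrary weights and variances): give the wider normal component only 1% of the mass, and it still takes over the right tail.

```python
import numpy as np
from scipy.stats import norm

# Mixture 0.99*N(0,1) + 0.01*N(0,2). The density ratio of the wide to the
# narrow component grows like exp(3x^2/8), so the wide component eventually
# dominates despite its tiny weight.
w1, w2 = 0.99, 0.01
for x in [2, 4, 6, 8, 10]:
    p1 = w1 * norm.pdf(x, loc=0, scale=1)
    p2 = w2 * norm.pdf(x, loc=0, scale=2)
    print(f"x = {x:2d}   share of wide component = {p2 / (p1 + p2):.6f}")
```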

Clustering mixed-type datasets can be a particularly challenging task, as it requires taking into account the associations between variables with different levels of measurement, i.e., nominal, ordinal and/or interval. In some cases, hierarchical clustering is considered a suitable approach, as it makes few assumptions about the data and its solution can easily be visualized. Since most hierarchical clustering approaches assume the variables are measured on the same scale, a simple strategy for clustering mixed-type data is to homogenize the variables before clustering, i.e., to recode the continuous variables as categorical ones or vice versa. However, the typical discretization of continuous variables entails a loss of information. In this work, an agglomerative hierarchical clustering approach for mixed-type data is proposed that relies on a barycentric coding of the continuous variables. The proposed approach minimizes information loss and is compatible with the framework of correspondence analysis. The utility of the method is demonstrated on real and simulated data.
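
The following sketch shows one plausible form of barycentric coding for a continuous variable (a hypothetical variant; the anchor choice and the function itself are our assumptions, not the paper's method): each value becomes nonnegative weights summing to one over a small set of anchor points, giving it the same row-profile format as indicator-coded categorical variables, as correspondence analysis requires.

```python
import numpy as np

def barycentric_code(x, anchors):
    """Barycentric (fuzzy) coding of a continuous variable: each value is
    expressed as barycentric weights over its two nearest anchor points,
    so each row is nonnegative and sums to 1."""
    anchors = np.asarray(anchors, dtype=float)
    x = np.clip(np.asarray(x, dtype=float), anchors[0], anchors[-1])
    codes = np.zeros((len(x), len(anchors)))
    idx = np.searchsorted(anchors, x, side="right") - 1   # left anchor index
    idx = np.clip(idx, 0, len(anchors) - 2)
    t = (x - anchors[idx]) / (anchors[idx + 1] - anchors[idx])
    codes[np.arange(len(x)), idx] = 1.0 - t
    codes[np.arange(len(x)), idx + 1] = t
    return codes

# Example: code a continuous column over three anchors (e.g. min/median/max).
print(barycentric_code([0.1, 0.5, 0.9], anchors=[0.0, 0.5, 1.0]))
```

The coded columns can then be stacked with indicator-coded categorical columns and passed to any agglomerative clustering routine.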

Matrix sparsification is a well-known approach in the design of efficient algorithms, where one approximates a matrix $A$ with a sparse matrix $A'$. Achlioptas and McSherry [2007] initiated a long line of work on spectral-norm sparsification, which aims to guarantee that $\|A'-A\|\leq \epsilon \|A\|$ for error parameter $\epsilon>0$. Various forms of matrix approximation motivate considering this problem with a guarantee according to the Schatten $p$-norm for general $p$, which includes the spectral norm as the special case $p=\infty$. We investigate the relation between fixed but different $p\neq q$, that is, whether sparsification in Schatten $p$-norm implies (existentially and/or algorithmically) sparsification in Schatten $q$-norm with similar sparsity. An affirmative answer could be tremendously useful, as it will identify which value of $p$ to focus on. Our main finding is a surprising contrast between this question and the analogous case of $\ell_p$-norm sparsification for vectors: For vectors, the answer is affirmative for $p<q$ and negative for $p>q$, but for matrices we answer negatively for almost all $p\neq q$.
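
For reference, the Schatten $p$-norm underlying the guarantee above is the $\ell_p$-norm of the singular values (standard definition; $s_1 \ge \dots \ge s_r$ denote the singular values of $A$):

```latex
\[
  \|A\|_{p} \;=\; \Bigl( \sum_{i=1}^{r} s_i^{\,p} \Bigr)^{1/p},
  \qquad
  \|A\|_{\infty} \;=\; s_1 \ \ \text{(the spectral norm)},
\]
% and Schatten-p sparsification asks for a sparse A' with
\[
  \|A' - A\|_{p} \;\le\; \epsilon \, \|A\|_{p}.
\]
```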

With the recent advances in deep learning, neural networks have been used extensively for the task of molecular generation. Many deep generators extract atomic relations from molecular graphs but ignore hierarchical information at both the atom and molecule levels. To extract such hierarchical information, we propose a novel hyperbolic generative model. Our model has three parts: first, a fully hyperbolic junction-tree encoder-decoder that embeds the hierarchical information of the molecules in a latent hyperbolic space; second, a latent generative adversarial network for generating the latent embeddings; third, a molecular generator that inherits the decoder from the first part and the latent generator from the second part. We evaluate our model on the ZINC dataset using the MOSES benchmarking platform and achieve competitive results, especially on metrics of structural similarity.
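
The paper's architecture is not reproduced here, but the basic primitive any fully hyperbolic encoder-decoder rests on is the geodesic distance in a hyperbolic model; below is a minimal sketch in the hyperboloid (Lorentz) model, with function names of our choosing:

```python
import numpy as np

def lorentz_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u0*v0 + sum_i ui*vi."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def hyperboloid_dist(u, v, c=1.0):
    """Geodesic distance on the hyperboloid model of curvature -c:
    d(u, v) = arccosh(-c * <u, v>_L) / sqrt(c)."""
    # Clip for numerical safety: the argument of arccosh must be >= 1.
    return np.arccosh(np.maximum(-c * lorentz_inner(u, v), 1.0)) / np.sqrt(c)

# Points on the unit hyperboloid satisfy x0 = sqrt(1 + ||x_spatial||^2).
vs = np.array([0.3, -0.4])
v = np.concatenate([[np.sqrt(1 + vs @ vs)], vs])
o = np.array([1.0, 0.0, 0.0])          # the hyperboloid "origin"
print(hyperboloid_dist(o, v))
```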

Comparing probability distributions is an indispensable and ubiquitous task in machine learning and statistics. The most common way to compare a pair of Borel probability measures is to compute a metric between them, and by far the most widely used notions of metric are the Wasserstein metric and the total variation metric. The next most common way is to compute a divergence between them, and in this case almost every known divergence, including those of Kullback--Leibler, Jensen--Shannon, R\'enyi, and many more, is a special case of the $f$-divergence. Nevertheless, these metrics and divergences can only be computed, and in fact are only defined, when the pair of probability measures live on spaces of the same dimension. How would one quantify, say, a KL-divergence between the uniform distribution on the interval $[-1,1]$ and a Gaussian distribution on $\mathbb{R}^3$? We show that these common notions of metrics and divergences give rise to natural distances between Borel probability measures defined on spaces of different dimensions, e.g., one on $\mathbb{R}^m$ and another on $\mathbb{R}^n$ with $m \neq n$, so as to give a meaningful answer to the previous question.
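
For reference, the $f$-divergence family invoked above (a standard definition, not the paper's cross-dimensional construction): for a convex function $f$ with $f(1) = 0$ and $P$ absolutely continuous with respect to $Q$,

```latex
\[
  D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ ,
\]
% f(t) = t log t recovers the Kullback--Leibler divergence, and
% f(t) = |t - 1| / 2 recovers the total variation metric.
```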

Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
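
One common building block for modelling such relational patterns is a block-diagonal Givens transformation, which acts on an embedding as per-coordinate-pair rotations (useful for anti-symmetric relations) or reflections (useful for symmetric ones). A minimal sketch of this primitive (our own illustrative code, not the paper's implementation):

```python
import numpy as np

def givens_transform(x, theta, reflect=False):
    """Apply block-diagonal 2x2 Givens rotations (or reflections) to an
    even-dimensional embedding x, one angle per coordinate pair."""
    x = x.reshape(-1, 2)                     # split into coordinate pairs
    c, s = np.cos(theta), np.sin(theta)      # theta has shape (d/2,)
    if reflect:                              # [[c, s], [s, -c]], det = -1
        out = np.stack([c * x[:, 0] + s * x[:, 1],
                        s * x[:, 0] - c * x[:, 1]], axis=1)
    else:                                    # [[c, -s], [s, c]], det = +1
        out = np.stack([c * x[:, 0] - s * x[:, 1],
                        s * x[:, 0] + c * x[:, 1]], axis=1)
    return out.reshape(-1)

# Example: transform an 8-dimensional embedding with 4 per-pair angles.
x, theta = np.random.randn(8), np.random.randn(4)
y = givens_transform(x, theta)               # rotation
z = givens_transform(x, theta, reflect=True) # reflection
```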

The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to pointsets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to, or significantly improved over, existing approaches, and our algorithms are especially efficient when the length of the curves is bounded.
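
For concreteness, here is the textbook dynamic program for the discrete Fr\'echet distance, one of the two curve distances named above (a standard reference implementation, not the paper's data structure); DTW follows by replacing the max with a sum:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P (m x d) and
    Q (n x d), computed by dynamic programming over vertex couplings."""
    m, n = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise
    F = np.full((m, n), np.inf)
    F[0, 0] = D[0, 0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            prev = min(F[i - 1, j] if i else np.inf,
                       F[i, j - 1] if j else np.inf,
                       F[i - 1, j - 1] if i and j else np.inf)
            F[i, j] = max(prev, D[i, j])   # bottleneck along the coupling
    return F[-1, -1]
```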

We introduce hyperbolic attention networks to endow neural networks with enough capacity to match the complexity of data with hierarchical and power-law structure. A few recent approaches have successfully demonstrated the benefits of imposing hyperbolic geometry on the parameters of shallow networks. We extend this line of work by imposing hyperbolic geometry on the activations of neural networks. This allows us to exploit hyperbolic geometry to reason about embeddings produced by deep networks. We achieve this by re-expressing the ubiquitous mechanism of soft attention in terms of operations defined for hyperboloid and Klein models. Our method shows improvements in terms of generalization on neural machine translation, learning on graphs and visual question answering tasks while keeping the neural representations compact.
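
One concrete way to aggregate attention weights over points in the Klein model is the weighted Einstein midpoint, a natural hyperbolic analogue of a convex combination; a minimal sketch (our illustrative code, assuming Klein-ball coordinates with norm strictly less than one):

```python
import numpy as np

def einstein_midpoint(xs, weights):
    """Weighted Einstein midpoint of points in the Klein ball:
    m = sum_i w_i * gamma_i * x_i / sum_i w_i * gamma_i,
    with Lorentz factors gamma_i = 1 / sqrt(1 - ||x_i||^2)."""
    xs = np.asarray(xs, dtype=float)       # (n, d) Klein coordinates
    w = np.asarray(weights, dtype=float)   # (n,) attention weights
    gamma = 1.0 / np.sqrt(1.0 - np.sum(xs**2, axis=1))
    coeff = w * gamma
    return (coeff[:, None] * xs).sum(axis=0) / coeff.sum()
```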

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is: both strongly convex and smooth, strongly convex, smooth, or merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improved condition-number dependence.
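
For reference, a minimal sketch of the centralized primitive the result builds on, Nesterov's accelerated gradient descent for a smooth, strongly convex objective (a textbook variant with constant momentum; the distributed dual execution is not reproduced here):

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=500):
    """Nesterov's accelerated gradient descent for an L-smooth,
    mu-strongly convex objective, with the constant momentum
    beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1), kappa = L / mu."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_new = y - grad(y) / L          # gradient step from the lookahead
        y = x_new + beta * (x_new - x)   # momentum extrapolation
        x = x_new
    return x
```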
