
In the Continuous Steiner Tree problem (CST), we are given as input a set of points (called terminals) in a metric space, and the goal is to find the minimum-cost tree connecting them. Additional points (called Steiner points) from the metric space can be introduced as nodes in the solution. In the Discrete Steiner Tree problem (DST), we are given, in addition to the terminals, a set of facilities, and any solution tree connecting the terminals may only contain Steiner points from this set of facilities. Trevisan [SICOMP'00] showed that CST and DST are APX-hard when the input lies in the $\ell_1$-metric (and Hamming metric). Chleb\'ik and Chleb\'ikov\'a [TCS'08] showed that DST is NP-hard to approximate to a factor of $96/95\approx 1.01$ in the graph metric (and consequently the $\ell_\infty$-metric). Prior to this work, it was unclear if CST and DST are APX-hard in essentially every other popular metric! In this work, we prove that DST is APX-hard in every $\ell_p$-metric. We also prove that CST is APX-hard in the $\ell_{\infty}$-metric. Finally, we relate CST and DST, showing a general reduction from CST to DST in $\ell_p$-metrics. As an immediate consequence, this yields a polynomial-time $1.39$-approximation algorithm for CST in $\ell_p$-metrics.
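As context for the approximation guarantees above: the classical baseline for Steiner tree in any metric is the minimum spanning tree over the terminals alone (using no Steiner points), whose cost is at most twice the optimum. A minimal sketch using Prim's algorithm under an $\ell_p$-metric; the point set and the choice of $p$ are illustrative placeholders:

```python
# A minimal sketch (not from the paper): the MST over the terminals alone is a
# 2-approximation to the optimal Steiner tree in any metric. Prim's algorithm
# under an l_p metric; the point set and p are illustrative placeholders.
import numpy as np

def lp_dist(a, b, p):
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def mst_cost(terminals, p=2):
    n = len(terminals)
    in_tree = [0]                      # start Prim's from terminal 0
    best = [lp_dist(terminals[0], t, p) for t in terminals]
    cost = 0.0
    for _ in range(n - 1):
        j = min((i for i in range(n) if i not in in_tree), key=lambda i: best[i])
        cost += best[j]
        in_tree.append(j)
        for i in range(n):
            if i not in in_tree:
                best[i] = min(best[i], lp_dist(terminals[j], terminals[i], p))
    return cost

terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
print(mst_cost(terminals, p=2))  # within a factor 2 of the optimal Steiner tree cost
```

This factor-2 baseline is what the $1.39$-approximation obtained via the reduction to DST improves upon.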

Related content

DST (Digital Sky Technologies) is a Russian technology and investment company founded by Yuri Milner. In 2010, DST split its mail services and investment functions into two companies, Mail.ru Group and DST Global. DST has invested in well-known technology and internet companies including Facebook, Twitter, Groupon, Airbnb, Spotify, Zynga, Flipkart, Alibaba, and JD.com.

Inspired by the traditional partial differential equation (PDE) approach for image denoising, we propose a novel neural network architecture, referred to as NODE-ImgNet, that combines neural ordinary differential equations (NODEs) with convolutional neural network (CNN) blocks. NODE-ImgNet is intrinsically a PDE model, where the dynamic system is learned implicitly without the explicit specification of the PDE. This naturally circumvents the typical issues associated with introducing artifacts during the learning process. By invoking such a NODE structure, which can also be viewed as a continuous variant of a residual network (ResNet) and inherits its advantages in image denoising, our model achieves enhanced accuracy and parameter efficiency. In particular, our model exhibits consistent effectiveness in different scenarios, including denoising gray and color images perturbed by Gaussian noise, as well as real-noisy images, and demonstrates superiority in learning from small image datasets.
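As a rough illustration of the NODE-as-continuous-ResNet idea (a sketch only, not the paper's actual architecture; the block sizes, integrator, and step count are placeholder choices), one can integrate a small convolutional vector field with a fixed-step RK4 scheme:

```python
# Illustrative sketch (not the paper's architecture): a convolutional ODE block
# integrated with fixed-step RK4. A ResNet block computes x + f(x); the NODE
# view instead integrates dx/dt = f(t, x) continuously.
import torch
import torch.nn as nn

class ConvODEFunc(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, t, x):           # vector field f(t, x); t unused here
        return self.net(x)

def rk4_integrate(f, x, t0=0.0, t1=1.0, steps=4):
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

func = ConvODEFunc(channels=3)
noisy = torch.randn(1, 3, 32, 32)      # toy noisy image
denoised = rk4_integrate(func, noisy)  # the integrated state is the prediction
```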

Brunerie's 2016 PhD thesis contains the first synthetic proof in Homotopy Type Theory (HoTT) of the classical result that the fourth homotopy group of the 3-sphere is $\mathbb{Z}/2\mathbb{Z}$. The proof is one of the most impressive pieces of synthetic homotopy theory to date and uses a lot of advanced classical algebraic topology rephrased synthetically. Furthermore, Brunerie's proof is fully constructive and the main result can be reduced to the question of whether a particular ``Brunerie'' number $\beta$ can be normalized to $\pm 2$. The question of whether Brunerie's proof could be formalized in a proof assistant, either by computing this number or by formalizing the pen-and-paper proof, has since remained open. In this paper, we present a complete formalization in the Cubical Agda system, following Brunerie's pen-and-paper proof. We do this by modifying Brunerie's proof so that a key technical result, whose proof Brunerie only sketched in his thesis, can be avoided. We also present a formalization of a new and much simpler proof that $\beta$ is $\pm 2$. This formalization provides us with a sequence of simpler Brunerie numbers, one of which normalizes very quickly to $-2$ in Cubical Agda, resulting in a fully formalized computer-assisted proof that $\pi_4(\mathbb{S}^3) \cong \mathbb{Z}/2\mathbb{Z}$.

This paper presents a randomized algorithm for the problem of single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with $n$ vertices and $m$ edges, the algorithm completes in $\tilde{O}(mn^{8/9})$ time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic $O(mn)$-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore.
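For contrast, the classic $O(mn)$ baseline mentioned above is Bellman-Ford relaxation; a compact version with negative-cycle detection:

```python
# The classic O(mn) baseline that the paper improves upon: Bellman-Ford.
# Edges are (u, v, w) triples with real (possibly negative) weights.
def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):             # n-1 relaxation rounds suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:              # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4.0), (0, 2, 5.0), (1, 2, -3.0)]
print(bellman_ford(3, edges, source=0))  # [0.0, 4.0, 1.0]
```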

Maximizing the log-likelihood is a crucial aspect of learning latent variable models, and variational inference (VI) stands as the commonly adopted method. However, VI can encounter challenges in achieving a high log-likelihood when dealing with complicated posterior distributions. In response to this limitation, we introduce a novel variational importance sampling (VIS) approach that directly estimates and maximizes the log-likelihood. VIS leverages the optimal proposal distribution, achieved by minimizing the forward $\chi^2$ divergence, to enhance log-likelihood estimation. We apply VIS to various popular latent variable models, including mixture models, variational auto-encoders, and partially observable generalized linear models. Results demonstrate that our approach consistently outperforms state-of-the-art baselines, both in terms of log-likelihood and model parameter estimation.
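The quantity VIS targets is the importance-sampled log-likelihood $\log p(x) = \log \mathbb{E}_{q}\left[p(x,z)/q(z)\right]$. A minimal sketch on a toy Gaussian latent model (the proposal here is hand-picked; VIS instead optimizes it by minimizing the forward $\chi^2$ divergence, which is not reproduced here):

```python
# Minimal sketch of importance-sampled log-likelihood estimation, the core
# quantity VIS maximizes. Toy model: z ~ N(0,1), x | z ~ N(z,1); the proposal q
# is a hand-picked Gaussian (VIS would tune it via forward chi-squared
# divergence minimization, not shown).
import numpy as np
from scipy.stats import norm

def log_likelihood_is(x, q_mean, q_std, num_samples=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    z = rng.normal(q_mean, q_std, size=num_samples)          # z_k ~ q
    log_w = (norm.logpdf(z, 0.0, 1.0)                        # log p(z)
             + norm.logpdf(x, z, 1.0)                        # log p(x | z)
             - norm.logpdf(z, q_mean, q_std))                # - log q(z)
    m = log_w.max()                                          # stable log-mean-exp
    return m + np.log(np.exp(log_w - m).mean())

x = 1.5
print(log_likelihood_is(x, q_mean=x / 2, q_std=0.9))
print(norm.logpdf(x, 0.0, np.sqrt(2.0)))  # exact: marginally x ~ N(0, 2)
```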

We develop a flexible online version of the permutation test. This allows us to test exchangeability as the data arrives, where we can choose to stop or continue without invalidating the size of the test. Our methods generalize beyond exchangeability to other forms of invariance under a compact group. Our approach relies on constructing an $e$-process that is the running product of multiple conditional $e$-values. To construct $e$-values, we first develop an essentially complete class of admissible $e$-values in which one can flexibly `plug in' almost any desired test statistic. To make the $e$-values conditional, we explore the intersection between the concepts of conditional invariance and sequential invariance, and find that the appropriate conditional distribution can be captured by a compact subgroup. To find powerful $e$-values for given alternatives, we develop the theory of likelihood ratios for testing group invariance, yielding new optimality results for group invariance tests. These statistics turn out to exist in three different flavors, depending on the space on which we specify our alternative. We apply these statistics to test against a Gaussian location shift, which yields connections to the $t$-test when testing sphericity, connections to the softmax function and its temperature when testing exchangeability, and yields an improved version of a known $e$-value for testing sign-symmetry. Moreover, we introduce an impatience parameter that allows users to obtain more power now in exchange for less power in the long run.
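To make the construction concrete, here is a minimal sketch (not the paper's exact construction) in the special case of sign-symmetry. For a finite group $G$ under which the data are invariant under $H_0$, the statistic $f(X)\big/\frac{1}{|G|}\sum_{g\in G} f(gX)$ has expectation $1$, and is hence an $e$-value, for essentially any nonnegative $f$. With $G=\{\pm 1\}$ and $f(x)=e^{\lambda x}$ this becomes a sigmoid, echoing the softmax connection mentioned above; multiplying $e$-values from independent observations gives an $e$-process that may be stopped whenever the running product exceeds $1/\alpha$:

```python
# Minimal sketch (not the paper's exact construction). For G = {+1, -1} and
# f(x) = exp(lam * x), the group-averaged e-value
#   e(x) = 2 exp(lam x) / (exp(lam x) + exp(-lam x)) = 2 / (1 + exp(-2 lam x))
# has expectation 1 under sign-symmetry. Assuming independent observations, the
# running product is an e-process; reject when it exceeds 1/alpha (Ville).
import numpy as np

def sign_symmetry_e_value(x, lam=0.5):
    return 2.0 / (1.0 + np.exp(-2.0 * lam * x))

def sequential_test(stream, alpha=0.05, lam=0.5):
    e_process = 1.0
    for t, x in enumerate(stream, start=1):
        e_process *= sign_symmetry_e_value(x, lam)
        if e_process >= 1.0 / alpha:
            return t                    # stop and reject H0 (sign-symmetry)
    return None                         # never rejected

rng = np.random.default_rng(1)
print(sequential_test(rng.normal(0.8, 1.0, size=500)))   # shifted: rejects early
print(sequential_test(rng.normal(0.0, 1.0, size=500)))   # symmetric: usually None
```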

Interior point methods (IPMs) that handle nonconvex constraints such as IPOPT, KNITRO and LOQO have had enormous practical success. We consider IPMs in the setting where the objective and constraints are thrice differentiable, and have Lipschitz first and second derivatives on the feasible region. We provide an IPM that, starting from a strictly feasible point, finds a $\mu$-approximate Fritz John point by solving $\mathcal{O}( \mu^{-7/4})$ trust-region subproblems. For IPMs that handle nonlinear constraints, this result represents the first iteration bound with a polynomial dependence on $1/\mu$. We also show how to use our method to find scaled-KKT points starting from an infeasible solution and improve on existing complexity bounds.
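For reference, the exact Fritz John conditions for $\min f(x)$ subject to $g_i(x)\le 0$ are below; a $\mu$-approximate Fritz John point relaxes these equalities to tolerance $\mu$ (the paper's precise scaling may differ):

```latex
% Exact Fritz John conditions for: minimize f(x) subject to g_i(x) <= 0.
% A mu-approximate point satisfies these up to tolerance mu.
\exists\, \lambda_0 \ge 0,\ \lambda \in \mathbb{R}^m_{\ge 0},\ (\lambda_0, \lambda) \neq 0:
\qquad
\lambda_0 \nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x) = 0,
\qquad
\lambda_i\, g_i(x) = 0 \quad (i = 1, \dots, m).
```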

While current NL2SQL tasks constructed using Foundation Models have achieved commendable results, their direct application to Natural Language to Graph Query Language (NL2GQL) tasks poses challenges due to the significant differences between GQL and SQL expressions, as well as the numerous types of GQL. Our extensive experiments reveal that in NL2GQL tasks, larger Foundation Models demonstrate superior cross-schema generalization abilities, while smaller Foundation Models struggle to improve their GQL generation capabilities through fine-tuning. However, after fine-tuning, smaller models exhibit better intent comprehension and higher grammatical accuracy. Diverging from rule-based and slot-filling techniques, we introduce R3-NL2GQL, which employs both smaller and larger Foundation Models as rerankers, rewriters and refiners. The approach harnesses the comprehension ability of smaller models for information reranking and rewriting, and the exceptional generalization and generation capabilities of larger models to transform input natural language queries and code structure schemas into any form of GQL. Recognizing the lack of established datasets in this nascent domain, we have created a bilingual dataset derived from graph database documentation and some open-source Knowledge Graphs (KGs). We tested our approach on this dataset, and the experimental results show that it delivers promising performance and robustness. Our code and dataset are available at //github.com/zhiqix/NL2GQL
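A hypothetical sketch of the reranker/rewriter/refiner flow (every class and method name below is an illustrative placeholder, not the paper's API):

```python
# Hypothetical sketch of the R3 flow described above; all helpers are
# illustrative placeholders, not the paper's actual interfaces.
def nl2gql(question, schemas, small_model, large_model):
    schema = max(schemas, key=lambda s: small_model.score(question, s))  # rerank
    rewritten = small_model.rewrite(question, schema)                    # rewrite
    draft = large_model.generate(rewritten, schema)                      # generate
    feedback = large_model.check(draft, schema)                          # e.g. grammar errors
    return large_model.refine(draft, feedback) if feedback else draft   # refine

# Duck-typed stubs so the sketch runs; a real system would back these with a
# fine-tuned small LLM and a prompted large LLM.
class Stub:
    def score(self, q, s):    return len(set(q.split()) & set(s.split()))
    def rewrite(self, q, s):  return q
    def generate(self, q, s): return "MATCH (n:Person) RETURN n.name"
    def check(self, d, s):    return None
    def refine(self, d, fb):  return d

print(nl2gql("list person names", ["Person name age", "Movie title"], Stub(), Stub()))
```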

The hierarchical matrix ($\mathcal{H}^{2}$-matrix) formalism provides a way to reinterpret the Fast Multipole Method and related fast summation schemes in linear algebraic terms. The idea is to tessellate a matrix into blocks in such a way that each block is either small or of numerically low rank; this enables the storage of the matrix and its application to a vector in linear or close to linear complexity. A key motivation for the reformulation is to extend the range of dense matrices that can be represented. Additionally, $\mathcal{H}^{2}$-matrices in principle also extend the range of operations that can be executed to include matrix inversion and factorization. While such algorithms can be highly efficient for certain specialized formats (such as HBS/HSS matrices based on ``weak admissibility''), inversion algorithms for general $\mathcal{H}^{2}$-matrices tend to be based on nested recursions and recompressions, making them challenging to implement efficiently. An exception is the \textit{strong recursive skeletonization (SRS)} algorithm by Minden, Ho, Damle, and Ying, which involves a simpler algorithmic flow. However, SRS greatly increases the number of blocks of the matrix that need to be stored explicitly, leading to high memory requirements. This manuscript presents the \textit{randomized strong recursive skeletonization (RSRS)} algorithm, which is a reformulation of SRS that incorporates the randomized SVD (RSVD) to simultaneously compress and factorize an $\mathcal{H}^{2}$-matrix. RSRS is a ``black box'' algorithm that interacts with the matrix to be compressed only via its action on vectors; this extends the range of the SRS algorithm (which relied on the ``proxy source'' compression technique) to include dense matrices that arise in sparse direct solvers.
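The randomized compression machinery that RSRS builds on is the standard randomized SVD; note that the matrix is touched only through its action on blocks of vectors ($A\Omega$ and $Q^{T}A$ below), which is exactly the ``black box'' access model described above. A minimal sketch with illustrative rank and oversampling parameters:

```python
# The standard randomized SVD (Halko-Martinsson-Tropp style) that RSRS builds
# on. The matrix A is accessed only through the products A @ Omega and Q.T @ A,
# i.e. through its action on vectors -- the "black box" model in the abstract.
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                        # orthonormal range basis
    B = Q.T @ A                                           # small (k+p) x n matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 200))  # rank-15
U, s, Vt = randomized_svd(A, rank=15)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))   # ~ 1e-15
```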

Suppose we want to construct some structure on a bounded-degree graph, e.g., an almost maximum matching, and we want to decide, for each edge, whether it belongs to the structure based only on the edge's constant-radius neighborhood. We examine and compare the strengths of different extensions of these local algorithms. A common extension is to use preprocessing, which means that we can make some calculation about the whole graph, and each local decision can also depend on this calculation. In this paper, we show that preprocessing is unnecessary: if a nearly optimal local algorithm uses preprocessing, then the same can be achieved by a local algorithm without preprocessing, but with a global randomization.
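A toy illustration of the local model itself (not this paper's construction): every edge draws a random priority, and an edge joins the matching iff its priority beats those of all adjacent edges. Each decision reads only a radius-1 neighborhood of the edge, while the shared priorities play the role of the global randomization:

```python
# Toy illustration of the local model (not the paper's construction): an edge
# joins the matching iff its random priority is a strict local minimum among
# adjacent edges. No two adjacent edges can both win, so this is a matching;
# each decision depends only on the edge's radius-1 neighborhood.
import random

def local_matching(edges, seed=0):
    rng = random.Random(seed)
    priority = {e: rng.random() for e in edges}
    def adjacent(e, f):
        return e != f and bool(set(e) & set(f))
    return [e for e in edges
            if all(priority[e] < priority[f] for f in edges if adjacent(e, f))]

# Path on 4 vertices; the output is always a valid (though not maximal) matching.
print(local_matching([(0, 1), (1, 2), (2, 3)]))
```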

The emergence of foundation models in Computer Vision and Natural Language Processing has resulted in immense progress on downstream tasks. This progress was enabled by datasets with billions of training examples. Similar benefits are yet to be unlocked for quantum chemistry, where the potential of deep learning is constrained by comparatively small datasets with 100k to 20M training examples. These datasets are limited in size because the labels are computed using the accurate (but computationally demanding) predictions of Density Functional Theory (DFT). Notably, prior DFT datasets were created using CPU supercomputers without leveraging hardware acceleration. In this paper, we take a first step towards utilising hardware accelerators by introducing the data generator PySCF$_{\text{IPU}}$ using Intelligence Processing Units (IPUs). This allowed us to create the dataset QM1B with one billion training examples containing 9-11 heavy atoms. We demonstrate that a simple baseline neural network (SchNet 9M) improves its performance by simply increasing the amount of training data without additional inductive biases. To encourage future researchers to use QM1B responsibly, we highlight several limitations of QM1B and emphasise the low resolution of our DFT options, which also serves as motivation for even larger, more accurate datasets. Code and dataset are available on Github: //github.com/graphcore-research/pyscf-ipu
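To make the label-generation step concrete: producing one training example in this style amounts to a DFT single-point calculation per molecule. A minimal sketch with vanilla CPU PySCF (not the paper's PySCF$_{\text{IPU}}$; the molecule, basis, and functional are illustrative placeholders, not QM1B's settings):

```python
# Minimal sketch of generating one DFT label with vanilla CPU PySCF. The
# molecule, basis, and functional here are illustrative placeholders and do
# not reflect QM1B's actual settings.
from pyscf import gto, dft

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="sto-3g")
mf = dft.RKS(mol)          # restricted Kohn-Sham DFT
mf.xc = "b3lyp"            # exchange-correlation functional
energy = mf.kernel()       # the scalar label: total energy in Hartree
print(energy)
```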
