
In this paper we introduce a new algorithm for the \emph{$k$-Shortest Simple Paths} (\kspp{k}) problem with an asymptotic running time matching the state of the art from the literature. It is based on a black-box algorithm due to \citet{Roditty12} that solves at most $2k$ instances of the \emph{Second Shortest Simple Path} (\kspp{2}) problem without specifying how this is done. We fill this gap using a novel approach: we turn the scalar \kspp{2} into instances of the Biobjective Shortest Path problem. Our experiments on grid graphs and on road networks show that the new algorithm is very efficient in practice.
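
For intuition on the biobjective subproblem that the reduction targets, below is a minimal, illustrative label-setting search that enumerates the Pareto front of $(c_1, c_2)$ path costs between two nodes. It is not the paper's reduction of \kspp{2}; the graph, costs, and function names are hypothetical.

```python
import heapq

def biobjective_shortest_paths(adj, source, target):
    """Minimal label-setting search returning the Pareto front of
    (cost1, cost2) values over source-target paths.  `adj` maps a node
    to a list of (neighbor, cost1, cost2) triples.  Illustrative only."""
    settled = {v: [] for v in adj}   # Pareto-optimal labels settled per node
    heap = [(0, 0, source)]          # lexicographic order on (cost1, cost2)
    front = []
    while heap:
        c1, c2, v = heapq.heappop(heap)
        # Discard labels dominated by (or equal to) an already-settled label at v.
        if any(a <= c1 and b <= c2 for a, b in settled[v]):
            continue
        settled[v].append((c1, c2))
        if v == target:
            front.append((c1, c2))
            continue
        for u, w1, w2 in adj[v]:
            heapq.heappush(heap, (c1 + w1, c2 + w2, u))
    return front

# Tiny example with two non-dominated ways to reach 't'.
adj = {
    's': [('a', 1, 4), ('b', 3, 1)],
    'a': [('t', 1, 4)],
    'b': [('t', 3, 1)],
    't': [],
}
print(biobjective_shortest_paths(adj, 's', 't'))  # [(2, 8), (6, 2)]
```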

Related Content

This paper investigates the possibility of approximating multiple mathematical operations in latent space for expression derivation. To this end, we introduce different multi-operational representation paradigms, modelling mathematical operations as explicit geometric transformations. By leveraging a symbolic engine, we construct a large-scale dataset comprising 1.7M derivation steps stemming from 61K premises and 6 operators, analysing the properties of each paradigm when instantiated with state-of-the-art neural encoders. Specifically, we investigate how different encoding mechanisms can approximate equational reasoning in latent space, exploring the trade-off between learning different operators and specialising within single operations, as well as the ability to support multi-step derivations and out-of-distribution generalisation. Our empirical analysis reveals that the multi-operational paradigm is crucial for disentangling different operators, while discriminating the conclusions for a single operation is achievable in the original expression encoder. Moreover, we show that architectural choices can heavily affect the training dynamics, structural organisation, and generalisation of the latent space, resulting in significant variations across paradigms and classes of encoders.
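
As a toy illustration of treating an operation as an explicit geometric transformation of the latent space, the sketch below fits one linear map per operator on premise/conclusion embedding pairs by least squares and reuses it for a two-step derivation. The encoder output, dimensions, and data are synthetic stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical premise/conclusion embeddings (e.g. from a frozen encoder),
# and one linear map per operator acting on the latent space.
d, n = 64, 2000
premises = rng.normal(size=(n, d))
true_op = rng.normal(size=(d, d)) / np.sqrt(d)          # ground-truth transformation
conclusions = premises @ true_op.T + 0.01 * rng.normal(size=(n, d))

# Fit W so that W @ premise ~= conclusion (one such matrix per operator).
W, *_ = np.linalg.lstsq(premises, conclusions, rcond=None)
W = W.T

# Multi-step derivation: apply the learned operator twice and compare.
two_step_pred = premises @ W.T @ W.T
two_step_true = premises @ true_op.T @ true_op.T
print("one-step error:", np.linalg.norm(premises @ W.T - conclusions) / np.linalg.norm(conclusions))
print("two-step error:", np.linalg.norm(two_step_pred - two_step_true) / np.linalg.norm(two_step_true))
```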

Distributional shift is a central challenge in the deployment of machine learning models, as they can be ill-equipped for real-world data. This is particularly evident in text-to-audio generation, where the encoded representations are easily undermined by unseen prompts, which leads to the degradation of generated audio -- the limited set of text-audio pairs remains inadequate for conditional audio generation in the wild because user prompts are under-specified. In particular, we observe a consistent degradation in the quality of audio generated from user prompts, as opposed to training set prompts. To this end, we present a retrieval-based in-context prompt editing framework that leverages the training captions as demonstrative exemplars to revise the user prompts. We show that the framework enhances audio quality across the set of collected user prompts when they are edited with reference to the training captions as exemplars.
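
A minimal sketch of the retrieval step such a framework relies on, assuming a simple TF-IDF retriever and naive concatenation in place of the paper's prompt editor; the captions, user prompt, and edit template are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training captions and an under-specified user prompt.
training_captions = [
    "A dog barking in the distance with birds chirping",
    "Heavy rain falling on a tin roof with occasional thunder",
    "A crowd applauding in a large indoor hall",
]
user_prompt = "doggo barking"

vectorizer = TfidfVectorizer().fit(training_captions + [user_prompt])
caption_vecs = vectorizer.transform(training_captions)
prompt_vec = vectorizer.transform([user_prompt])

# Retrieve the top-k most similar training captions as in-context exemplars.
scores = cosine_similarity(prompt_vec, caption_vecs)[0]
top_k = scores.argsort()[::-1][:2]
exemplars = [training_captions[i] for i in top_k]

# Assemble an edited prompt; in a real pipeline this step would be done by
# an instruction-following language model rather than simple concatenation.
edited_prompt = "; ".join(exemplars) + "; " + user_prompt
print(edited_prompt)
```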

This paper introduces the XOR-OR-AND normal form (XNF) for logical formulas. It is a generalization of the well-known Conjunctive Normal Form (CNF) in which literals are replaced by XORs of literals. As a first theoretical result, we show that every formula is equisatisfiable to a formula in 2-XNF, i.e., a formula in XNF where each disjunction involves at most two XORs of literals. Subsequently, we present an algorithm which efficiently converts Boolean polynomials from their Algebraic Normal Form (ANF) to formulas in 2-XNF. Experiments with the cipher ASCON-128 show that cryptographic problems, which by design rely heavily on XOR operations, can be represented using far fewer variables and clauses in 2-XNF than in CNF. In order to take advantage of this compact representation, new SAT solvers based on input formulas in 2-XNF need to be designed. Taking inspiration from graph-based 2-CNF SAT solving, we devise a new DPLL-based SAT solver for formulas in 2-XNF. Among other contributions, we present advanced pre- and in-processing techniques. Finally, we give timings for random 2-XNF instances and for instances related to key recovery attacks on round-reduced ASCON-128, where our solver outperforms state-of-the-art alternative solving approaches.
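
To make the representation concrete, here is a small, illustrative encoding of 2-XNF clauses as disjunctions of at most two XORs of literals, together with a brute-force satisfiability check; it is unrelated to the DPLL-based solver described above, and the encoding conventions are our own.

```python
from itertools import product

# A literal is an integer: +i means variable i, -i its negation.  A "lin"
# (XOR of literals) is a tuple of literals, and a 2-XNF clause is a list of
# at most two lins.  Representation and brute-force check are illustrative.
def eval_lin(lin, assignment):
    """XOR of literals; `assignment` maps variable index -> bool."""
    value = False
    for lit in lin:
        bit = assignment[abs(lit)]
        value ^= (bit if lit > 0 else not bit)
    return value

def satisfiable_2xnf(clauses, num_vars):
    """Brute-force satisfiability of a 2-XNF formula over variables 1..num_vars."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        if all(any(eval_lin(lin, assignment) for lin in clause) for clause in clauses):
            return True
    return False

# (x1 XOR x2) OR (x3)   AND   (NOT x1 XOR x3)
clauses = [[(1, 2), (3,)], [(-1, 3)]]
print(satisfiable_2xnf(clauses, 3))  # True, e.g. x1=False, x2=True, x3=False
```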

Given an input graph $G = (V, E)$, an additive emulator $H = (V, E', w)$ is a sparse weighted graph that preserves all distances in $G$ with small additive error. A recent line of inquiry has sought to determine the best additive error achievable in the sparsest setting, when $H$ has a linear number of edges. In particular, the work of [Kogan and Parter, ICALP 2023], following [Pettie, ICALP 2007], constructed linear size emulators with $+O(n^{0.222})$ additive error. It is known that the worst-case additive error must be at least $+\Omega(n^{2/29})$ due to [Lu, Vassilevska Williams, Wein, and Xu, SODA 2022]. We present a simple linear-size emulator construction that achieves additive error $+O(n^{0.191})$. Our approach extends the path-buying framework developed by [Baswana, Kavitha, Mehlhorn, and Pettie, SODA 2005] and [Vassilevska Williams and Bodwin, SODA 2016] to the setting of sparse additive emulators.
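
For concreteness, the sketch below only verifies the additive error of a given weighted emulator $H$ against an unweighted graph $G$; it is a sanity check on assumed toy inputs, not the path-buying construction.

```python
import networkx as nx

def max_additive_error(G, H):
    """Largest additive overestimate of a weighted emulator H of an
    unweighted graph G on the same vertex set.  Assumes H never
    underestimates a distance (true for unit-weight subgraphs)."""
    worst = 0
    for v in G.nodes:
        dist_G = nx.single_source_shortest_path_length(G, v)
        dist_H = nx.single_source_dijkstra_path_length(H, v, weight="weight")
        for u, d in dist_G.items():
            worst = max(worst, dist_H.get(u, float("inf")) - d)
    return worst

# Toy example: a 4-cycle "emulated" by the path obtained by dropping one edge.
G = nx.cycle_graph(4)                    # edges 0-1, 1-2, 2-3, 3-0
H = nx.Graph()
H.add_nodes_from(G.nodes)
H.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (2, 3, 1)])
print(max_additive_error(G, H))          # 2: dist_H(0, 3) = 3 vs dist_G(0, 3) = 1
```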

We bound the smoothed running time of the FLIP algorithm for local Max-Cut as a function of $\alpha$, the arboricity of the input graph. We show that, with high probability, the following holds (where $n$ is the number of nodes and $\phi$ is the smoothing parameter): 1) When $\alpha = O(\sqrt{\log n})$, FLIP terminates in $\phi \cdot \mathrm{poly}(n)$ iterations. Prior to our results, the only graph families for which FLIP was known to achieve a smoothed polynomial running time were complete graphs and graphs with logarithmic maximum degree. 2) For arbitrary values of $\alpha$ we obtain a running time of $\phi n^{O(\frac{\alpha}{\log n} + \log \alpha)}$. This improves over the best known running time for general graphs, $\phi n^{O(\sqrt{\log n})}$, for $\alpha = o(\log^{1.5} n)$. Specifically, when $\alpha = O(\log n)$ we get a significantly faster running time of $\phi n^{O(\log \log n)}$.
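
For reference, a minimal implementation of the FLIP local search being analyzed: repeatedly move a single node to the other side while doing so strictly increases the cut weight. The weighted toy instance is arbitrary.

```python
import random

def flip_local_max_cut(adj, seed=0):
    """FLIP local search for Max-Cut.  `adj[v]` is a dict neighbor -> weight."""
    rng = random.Random(seed)
    side = {v: rng.choice([0, 1]) for v in adj}
    improved = True
    while improved:
        improved = False
        for v in adj:
            # Gain of flipping v: edges to same-side neighbors become cut,
            # edges to opposite-side neighbors stop being cut.
            gain = sum(w if side[u] == side[v] else -w for u, w in adj[v].items())
            if gain > 0:
                side[v] = 1 - side[v]
                improved = True
    cut = sum(w for v in adj for u, w in adj[v].items() if side[u] != side[v]) / 2
    return side, cut

# Toy instance: a weighted triangle plus a pendant edge.
adj = {
    'a': {'b': 2.0, 'c': 1.0},
    'b': {'a': 2.0, 'c': 1.5},
    'c': {'a': 1.0, 'b': 1.5, 'd': 0.5},
    'd': {'c': 0.5},
}
print(flip_local_max_cut(adj))
```

Each accepted flip strictly increases the cut weight, so the loop terminates at a local optimum; the smoothed analysis above bounds how many such flips can occur when the edge weights are randomly perturbed.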

Feature bagging is a well-established ensembling method which aims to reduce prediction variance by combining predictions of many estimators trained on subsets or projections of features. Here, we develop a theory of feature-bagging in noisy least-squares ridge ensembles and simplify the resulting learning curves in the special case of equicorrelated data. Using analytical learning curves, we demonstrate that subsampling shifts the double-descent peak of a linear predictor. This leads us to introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, as a computationally efficient method to mitigate double-descent. Then, we compare the performance of a feature-subsampling ensemble to a single linear predictor, describing a trade-off between noise amplification due to subsampling and noise reduction due to ensembling. Our qualitative insights carry over to linear classifiers applied to image classification tasks with realistic datasets constructed using a state-of-the-art deep learning feature map.
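
A small numerical sketch of feature bagging in a noisy ridge setting, assuming synthetic Gaussian data and arbitrary subset sizes; it illustrates the ensembling construction, not the paper's analytical learning curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic noisy linear data; all sizes and names are illustrative.
n_train, n_test, d = 200, 1000, 100
w_true = rng.normal(size=d) / np.sqrt(d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_true

def ridge_fit(X, y, lam=1e-2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def feature_bagged_predict(X_train, y_train, X_test, n_members=20, subset_size=40):
    """Average the predictions of ridge estimators, each fit on a random
    subset of feature dimensions."""
    preds = np.zeros(X_test.shape[0])
    for _ in range(n_members):
        idx = rng.choice(X_train.shape[1], size=subset_size, replace=False)
        w = ridge_fit(X_train[:, idx], y_train)
        preds += X_test[:, idx] @ w
    return preds / n_members

single = X_test @ ridge_fit(X_train, y_train)
ensemble = feature_bagged_predict(X_train, y_train, X_test)
print("single predictor MSE:", np.mean((single - y_test) ** 2))
print("feature-bagged MSE:  ", np.mean((ensemble - y_test) ** 2))
```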

The convex hull of a data set $P$ is the smallest convex set that contains $P$. In this work, we present a new data structure for convex hulls that allows for efficient dynamic updates. In a dynamic convex hull implementation, the following traits are desirable: (1) algorithms for efficiently answering queries as to whether a specified point is inside or outside the hull, (2) geometric robustness, and (3) algorithmic simplicity. Furthermore, a specific but well-motivated type of two-dimensional data is rank-based data. Here, the input is a set of real-valued numbers $Y$ where, for any number $y \in Y$, its rank is its index in $Y$'s sorted order. Each value in $Y$ can be mapped to a point $(\textit{rank}, \textit{value})$ to obtain a two-dimensional point set. In this work, we give an efficient, geometrically robust, dynamic convex hull algorithm that facilitates queries as to whether a point is internal. Furthermore, our construction can be used to efficiently update the convex hull of rank-ordered data when the real-valued point set is subject to insertions and deletions. Our improved solution is based on an algorithmic simplification of the classical convex hull data structure by Overmars and van Leeuwen~[STOC'80], combined with new algorithmic insights. Our theoretical guarantees on the update time match those of Overmars and van Leeuwen, namely $O(\log^2 |P|)$, while we allow a wider range of functionalities (including rank-based data). Our algorithmic simplification includes reducing an 11-case check to a 3-case check that can be written in 20 lines of easily readable C code. We extend our solution to provide a trade-off between theoretical guarantees and the practical performance of our algorithm. We test and compare our solutions extensively on inputs that were generated randomly or adversarially, including benchmarking datasets from the literature.
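
To make the rank-based mapping and the membership query concrete, the sketch below builds a static hull over (rank, value) points with Andrew's monotone chain and answers an inside/outside query; it is a simplified static stand-in for the dynamic Overmars-van Leeuwen-style structure described above.

```python
def cross(o, a, b):
    """Signed area of the triangle (o, a, b); > 0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, q):
    """q is inside (or on the boundary) iff it lies left of, or on, every CCW hull edge."""
    return all(cross(hull[i], hull[(i + 1) % len(hull)], q) >= 0
               for i in range(len(hull)))

# Rank-based data: map each sorted value to the point (rank, value).
values = sorted([3.1, -0.5, 2.7, 8.2, 4.4])
points = [(rank, value) for rank, value in enumerate(values)]
hull = convex_hull(points)
print(hull)
print(inside_hull(hull, (2, 3.0)))   # query: is (rank=2, value=3.0) inside?
```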

In this paper, we consider algorithms for edge-coloring multigraphs $G$ of bounded maximum degree, i.e., $\Delta(G) = O(1)$. Shannon's theorem states that any multigraph of maximum degree $\Delta$ can be properly edge-colored with $\lfloor3\Delta/2\rfloor$ colors. Our main results include algorithms for computing such colorings. We design deterministic and randomized sequential algorithms with running time $O(n\log n)$ and $O(n)$, respectively. This is the first improvement since the $O(n^2)$ algorithm in Shannon's original paper, and our randomized algorithm is optimal up to constant factors. We also develop distributed algorithms in the $\mathsf{LOCAL}$ model of computation. Namely, we design deterministic and randomized $\mathsf{LOCAL}$ algorithms with running time $\tilde O(\log^5 n)$ and $O(\log^2n)$, respectively. The deterministic sequential algorithm is a simplified extension of earlier work of Gabow et al. in edge-coloring simple graphs. The other algorithms apply the entropy compression method in a similar way to recent work by the author and Bernshteyn, where the authors design algorithms for Vizing's theorem for simple graphs. We also extend those results to Vizing's theorem for multigraphs.
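
As a point of comparison only, here is the naive greedy edge coloring that uses at most $2\Delta - 1$ colors on multigraphs; it is not the $\lfloor 3\Delta/2 \rfloor$-color algorithm of the paper, and the toy multigraph is arbitrary.

```python
from collections import defaultdict

def greedy_edge_coloring(edges):
    """Greedy proper edge coloring of a multigraph: give each edge the
    smallest color unused at either endpoint.  Each edge meets at most
    2*(Delta - 1) others, so at most 2*Delta - 1 colors are used."""
    used = defaultdict(set)          # vertex -> colors already incident to it
    coloring = {}
    for idx, (u, v) in enumerate(edges):
        color = 0
        while color in used[u] or color in used[v]:
            color += 1
        coloring[idx] = color
        used[u].add(color)
        used[v].add(color)
    return coloring

# Multigraph with a doubled edge (a, b); indices distinguish parallel edges.
edges = [('a', 'b'), ('a', 'b'), ('b', 'c'), ('a', 'c')]
print(greedy_edge_coloring(edges))   # {0: 0, 1: 1, 2: 2, 3: 3}
```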

We introduce \emph{ReMatching}, a new approach to the functional map framework that, by exploiting a novel and appropriate remeshing paradigm (\emph{Re}), enables us to target shape-matching tasks (\emph{Matching}) even on high-resolution meshes for which the original functional map framework does not apply or requires a massive computational cost. Concretely, we propose a time-efficient and metric-preserving remeshing algorithm that builds a low-resolution geometry while acting conservatively on the lower frequencies of the shape and its Laplacian spectrum. Thanks to this last property, we can translate the functional map optimization problem onto this sparse representation and thus efficiently compute correspondences with functional map approaches. Finally, we design a robust technique for extending the estimated correspondence to extremely dense meshes. Through quantitative and qualitative evaluation against existing alternatives, we show that our method is more efficient and effective, outperforming state-of-the-art pipelines in terms of quality and computational cost.
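
As background, the least-squares core of functional map estimation that the remeshing is meant to keep tractable can be sketched as follows: given descriptor functions expressed in the Laplacian eigenbases of a source and a target shape (columns of $A$ and $B$), estimate the map $C = \arg\min_C \|CA - B\|_F$. Synthetic coefficients stand in for real spectral data.

```python
import numpy as np

rng = np.random.default_rng(0)
k_src, k_tgt, n_descriptors = 30, 30, 50

A = rng.normal(size=(k_src, n_descriptors))          # source descriptor coefficients
C_true = rng.normal(size=(k_tgt, k_src)) / np.sqrt(k_src)
B = C_true @ A + 0.01 * rng.normal(size=(k_tgt, n_descriptors))

# Solve the transposed system A^T C^T ~= B^T by least squares.
C, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
C = C.T
print("relative error of recovered map:", np.linalg.norm(C - C_true) / np.linalg.norm(C_true))
```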

We consider the performance of a least-squares regression model, as judged by out-of-sample $R^2$. Shapley values give a fair attribution of the performance of a model to its input features, taking into account interdependencies between features. Evaluating the Shapley values exactly requires solving a number of regression problems that is exponential in the number of features, so a Monte Carlo-type approximation is typically used. We focus on the special case of least-squares regression models, where several tricks can be used to compute and evaluate regression models efficiently. These tricks give a substantial speed-up, allowing many more Monte Carlo samples to be evaluated and thereby achieving better accuracy. We refer to our method as least-squares Shapley performance attribution (LS-SPA), and describe our open-source implementation.
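
For orientation, the plain Monte Carlo permutation estimator that such a method accelerates can be sketched as follows; the synthetic data, the choice of zero baseline for the empty feature set, and the sample count are illustrative, and none of the least-squares-specific speedups are included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem; all sizes and names are illustrative.
n_train, n_test, p = 500, 500, 5
X_train = rng.normal(size=(n_train, p))
X_test = rng.normal(size=(n_test, p))
w = np.array([1.0, 0.5, 0.0, 2.0, 0.0])
y_train = X_train @ w + rng.normal(size=n_train)
y_test = X_test @ w + rng.normal(size=n_test)

def out_of_sample_r2(features):
    """Out-of-sample R^2 of a least-squares fit on the given feature subset."""
    idx = list(features)
    if not idx:
        return 0.0
    beta, *_ = np.linalg.lstsq(X_train[:, idx], y_train, rcond=None)
    resid = y_test - X_test[:, idx] @ beta
    return 1.0 - resid @ resid / np.sum((y_test - y_test.mean()) ** 2)

# Plain Monte Carlo permutation estimate of the Shapley attribution of R^2.
n_samples = 200
shapley = np.zeros(p)
for _ in range(n_samples):
    perm = rng.permutation(p)
    prev, chosen = 0.0, []
    for j in perm:
        chosen.append(j)
        curr = out_of_sample_r2(chosen)
        shapley[j] += curr - prev
        prev = curr
shapley /= n_samples
print("estimated Shapley attributions:", np.round(shapley, 3))
print("sum vs full-model R^2:", shapley.sum(), out_of_sample_r2(range(p)))
```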
