
We bound the smoothed running time of the FLIP algorithm for local Max-Cut as a function of $\alpha$, the arboricity of the input graph. We show that, with high probability and in expectation, the following holds (where $n$ is the number of nodes and $\phi$ is the smoothing parameter): 1) When $\alpha = O(\log^{1-\delta} n)$, FLIP terminates in $\phi \cdot \mathrm{poly}(n)$ iterations, where $\delta \in (0,1]$ is an arbitrarily small constant. Prior to our results, the only graph families for which FLIP was known to achieve a smoothed polynomial running time were complete graphs and graphs with logarithmic maximum degree. 2) For arbitrary values of $\alpha$ we get a running time of $\phi n^{O(\frac{\alpha}{\log n} + \log \alpha)}$. This improves over the best known running time for general graphs, $\phi n^{O(\sqrt{\log n})}$, whenever $\alpha = o(\log^{1.5} n)$. In particular, when $\alpha = O(\log n)$ we get a significantly faster running time of $\phi n^{O(\log \log n)}$.
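
For intuition, here is a minimal sketch of the FLIP dynamics being analyzed: starting from an arbitrary cut, repeatedly move a single node across the cut whenever doing so increases the cut weight, and stop at a local optimum. In the smoothed model the edge weights would be independently perturbed (with density bounded by $\phi$); this sketch simply takes the weights as given.

```python
import random

def flip_local_max_cut(adj, weights, seed=0):
    """FLIP local search for Max-Cut.

    adj: dict node -> set of neighbours
    weights: dict frozenset({u, v}) -> edge weight
    Returns the final cut (side assignment) and the number of flips.
    """
    rng = random.Random(seed)
    side = {v: rng.randint(0, 1) for v in adj}  # arbitrary initial cut

    def gain(v):
        # Net change in cut weight if v switches sides: internal edges
        # at v become cut edges (+w), cut edges become internal (-w).
        return sum(weights[frozenset((u, v))] * (1 if side[u] == side[v] else -1)
                   for u in adj[v])

    flips, improved = 0, True
    while improved:
        improved = False
        for v in adj:
            if gain(v) > 0:           # one improving move = one iteration
                side[v] = 1 - side[v]
                flips += 1
                improved = True
    return side, flips
```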

Related Content

This paper presents an $O^{*}(1.42^{n})$ time algorithm for the Maximum Cut problem on split graphs, along with a subexponential time algorithm for its decision variant.

In Path Set Packing, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of Path Set Packing with respect to both natural and structural parameters. We show that the problem is $W[1]$-hard with respect to vertex cover number, and $W[1]$-hard with respect to pathwidth plus maximum degree plus solution size. These results answer an open question raised at COCOON 2018. On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of Path Set Packing with respect to any subset of the parameters used in the FPT algorithms. We also give a $4$-approximation algorithm for the maximum Path Set Packing problem which runs in FPT time when parameterized by feedback edge number.
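
To pin down the decision problem, here is a brute-force checker, exponential in $|\mathcal{P}|$ and meant for illustration only; the paper's algorithms are parameterized, not brute force.

```python
from itertools import combinations

def path_edges(path):
    """Edge set of a simple path given as a vertex sequence."""
    return {frozenset(e) for e in zip(path, path[1:])}

def has_k_edge_disjoint(paths, k):
    """Decide Path Set Packing by brute force: is there a set of k
    pairwise edge-disjoint paths among `paths`?"""
    edge_sets = [path_edges(p) for p in paths]
    for combo in combinations(edge_sets, k):
        used, ok = set(), True
        for es in combo:
            if used & es:        # shares an edge with a chosen path
                ok = False
                break
            used |= es
        if ok:
            return True
    return False

# Example: [1,2,3] and [3,4] are edge-disjoint, so the answer is True.
print(has_k_edge_disjoint([[1, 2, 3], [3, 4], [2, 3, 4]], 2))
```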

Oja's algorithm for streaming Principal Component Analysis (PCA) on $n$ datapoints in a $d$-dimensional space achieves the same $\sin^2$ error $O(r_\mathsf{eff}/n)$ as the offline algorithm, using $O(d)$ space, $O(nd)$ time, and a single pass through the datapoints. Here $r_\mathsf{eff}$ is the effective rank (the ratio of the trace to the principal eigenvalue of the population covariance matrix $\Sigma$). Under this computational budget, we consider the problem of sparse PCA, where the principal eigenvector of $\Sigma$ is $s$-sparse and $r_\mathsf{eff}$ can be large. In this setting, to our knowledge, \textit{there are no known single-pass algorithms} that achieve the minimax error bound in $O(d)$ space and $O(nd)$ time without either requiring strong initialization conditions or assuming further structure (e.g., spiked) of the covariance matrix. We show that a simple single-pass procedure that thresholds the output of Oja's algorithm (the Oja vector) can achieve the minimax error bound under some regularity conditions in $O(d)$ space and $O(nd)$ time, as long as $r_\mathsf{eff}=O(n/\log n)$. We present a nontrivial and novel analysis of the entries of the unnormalized Oja vector, which involves the projection of a product of independent random matrices onto a random initial vector. This is completely different from previous analyses of Oja's algorithm and of matrix products, which have been carried out in the setting where $r_\mathsf{eff}$ is bounded.
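
A minimal sketch of the pipeline studied here, with illustrative (not the paper's) learning-rate and threshold choices: one pass of Oja's update, followed by zeroing out the small entries of the Oja vector.

```python
import numpy as np

def thresholded_oja(X, eta=None, threshold=None, seed=0):
    """Single-pass Oja iteration followed by entrywise thresholding.

    X: (n, d) array of datapoints streamed row by row.
    eta and threshold defaults are illustrative; the paper's choices
    depend on quantities (e.g. r_eff) not computed here.
    """
    n, d = X.shape
    eta = eta if eta is not None else 1.0 / np.sqrt(n)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for x in X:                       # one pass, O(d) memory, O(nd) time
        w += eta * x * (x @ w)        # Oja update: w <- w + eta * x x^T w
        w /= np.linalg.norm(w)
    if threshold is None:
        threshold = np.sqrt(np.log(d) / n)   # illustrative choice
    w[np.abs(w) < threshold] = 0.0    # keep only large entries (sparsity)
    nrm = np.linalg.norm(w)
    return w / nrm if nrm > 0 else w
```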

We present two randomised approximate counting algorithms with $\widetilde{O}(n^{2-c}/\varepsilon^2)$ running time for some constant $c>0$ and accuracy $\varepsilon$: (1) for the hard-core model with fugacity $\lambda$ on graphs with maximum degree $\Delta$ when $\lambda=O(\Delta^{-1.5-c_1})$ where $c_1=c/(2-2c)$; (2) for spin systems with strong spatial mixing (SSM) on planar graphs with quadratic growth, such as $\mathbb{Z}^2$. For the hard-core model, Weitz's algorithm (STOC, 2006) achieves sub-quadratic running time when correlation decays faster than the neighbourhood growth, namely when $\lambda = o(\Delta^{-2})$. Our first algorithm does not require this property and extends the range where sub-quadratic algorithms exist. Our second algorithm appears to be the first to achieve sub-quadratic running time up to the SSM threshold, albeit on a restricted family of graphs. It also extends to (not necessarily planar) graphs with polynomial growth, such as $\mathbb{Z}^d$, but with a running time of the form $\widetilde{O}\left(n^2\varepsilon^{-2}/2^{c(\log n)^{1/d}}\right)$ where $d$ is the exponent of the polynomial growth and $c>0$ is some constant.
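
For context, the distribution both results concern is the hard-core (independent-set) model. Below is a minimal Glauber-dynamics sampler for it, the single-site chain that underlies much of this literature; it is shown for intuition and is not the sub-quadratic algorithm of the paper.

```python
import random

def hard_core_glauber(adj, lam, steps, seed=0):
    """Glauber dynamics for the hard-core model with fugacity lam.

    adj: dict vertex -> set of neighbours.
    Returns an independent set, approximately distributed according to
    the hard-core measure when `steps` is large enough (i.e., past the
    chain's mixing time).
    """
    rng = random.Random(seed)
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        # Heat-bath update: v may be occupied only if no neighbour is,
        # and then with probability lam / (1 + lam).
        if not (adj[v] & occupied) and rng.random() < lam / (1 + lam):
            occupied.add(v)
    return occupied
```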

We show that it is decidable, given an automatic sequence $\bf s$ and a constant $c$, whether all prefixes of $\bf s$ have a string attractor of size $\leq c$. Using a decision procedure based on this result, we show that all prefixes of the period-doubling sequence of length $\geq 2$ have a string attractor of size $2$. We also prove analogous results for other sequences, including the Thue-Morse sequence and the Tribonacci sequence. We also provide general upper and lower bounds on string attractor size for different kinds of sequences. For example, if $\bf s$ has a finite appearance constant, then there is a string attractor for ${\bf s}[0..n-1]$ of size $O(\log n)$. If further $\bf s$ is linearly recurrent, then there is a string attractor for ${\bf s}[0..n-1]$ of size $O(1)$. For automatic sequences, the size of the smallest string attractor for ${\bf s}[0..n-1]$ is either $\Theta(1)$ or $\Theta(\log n)$, and it is decidable which case occurs. Finally, we close with some remarks about greedy string attractors.
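
A string attractor of $s$ is a set $\Gamma$ of positions such that every distinct substring of $s$ has at least one occurrence covering some position of $\Gamma$. The following brute-force checker and exhaustive search illustrate the definition on a prefix of the period-doubling sequence; the paper's results come from a decision procedure for automatic sequences, not from search.

```python
from itertools import combinations

def is_string_attractor(s, gamma):
    """Check that every distinct substring of s has an occurrence that
    covers some position in gamma. Brute force, small inputs only."""
    substrings = {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    for t in substrings:
        start, covered = s.find(t), False
        while start != -1:
            if any(start <= p < start + len(t) for p in gamma):
                covered = True
                break
            start = s.find(t, start + 1)   # next (possibly overlapping) occurrence
        if not covered:
            return False
    return True

def find_attractor(s, k):
    """Exhaustively search for a string attractor of size k."""
    for gamma in combinations(range(len(s)), k):
        if is_string_attractor(s, set(gamma)):
            return gamma
    return None

# Period-doubling sequence: fixed point of the morphism 0 -> 01, 1 -> 00.
pd = "0"
for _ in range(4):
    pd = "".join("01" if c == "0" else "00" for c in pd)
print(pd[:8], find_attractor(pd[:8], 2))  # a size-2 attractor exists
```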

Minimum-entropy coupling (MEC) -- the process of finding a joint distribution with minimum entropy for given marginals -- has applications in areas such as causality and steganography. However, existing algorithms are either computationally intractable for large-support distributions or limited to specific distribution types and sensitive to hyperparameter choices. This work addresses these limitations by unifying a prior family of iterative MEC (IMEC) approaches into a generalized partition-based formalism. From this framework, we derive a novel IMEC algorithm called ARIMEC, capable of handling arbitrary discrete distributions, and introduce a method to make IMEC robust to suboptimal hyperparameter settings. These innovations facilitate the application of IMEC to high-throughput steganography with language models, among other settings. Our codebase is available at https://github.com/ssokota/mec.
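
For concreteness, here is a sketch of the classic greedy coupling heuristic for two marginals, a standard baseline in the MEC literature (not the ARIMEC algorithm itself): repeatedly pair the largest remaining masses of the two marginals.

```python
import heapq
from math import log2

def greedy_coupling(p, q):
    """Greedy minimum-entropy coupling heuristic for marginals p, q.

    Repeatedly match the largest remaining mass of p with the largest
    remaining mass of q, transferring as much as possible. Returns a
    sparse joint distribution as a dict (i, j) -> mass.
    """
    hp = [(-x, i) for i, x in enumerate(p) if x > 0]
    hq = [(-x, j) for j, x in enumerate(q) if x > 0]
    heapq.heapify(hp)
    heapq.heapify(hq)
    joint = {}
    while hp and hq:
        pi, i = heapq.heappop(hp)      # largest remaining p-mass
        qj, j = heapq.heappop(hq)      # largest remaining q-mass
        m = min(-pi, -qj)
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        if -pi - m > 1e-12:            # push back any leftover mass
            heapq.heappush(hp, (pi + m, i))
        if -qj - m > 1e-12:
            heapq.heappush(hq, (qj + m, j))
    return joint

def entropy(joint):
    return -sum(m * log2(m) for m in joint.values() if m > 0)

print(greedy_coupling([0.5, 0.5], [0.75, 0.25]))
```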

We develop realizability models of intensional type theory, based on groupoids, wherein realizers themselves carry non-trivial (non-discrete) homotopical structure. In the spirit of realizability, this is intended to formalize a homotopical BHK interpretation, whereby evidence for an identification is a path. Specifically, we study partitioned groupoidal assemblies. Categories of such assemblies are parameterised by "realizer categories" (instead of the usual partial combinatory algebras) that come equipped with an interval qua internal cogroupoid. The interval furnishes a notion of homotopy as well as a fundamental groupoid construction. Objects in a base groupoid are realized by points in the fundamental groupoid of some object from the realizer category; isomorphisms in the base groupoid are realized by paths in said fundamental groupoid. The main result is that, under mild conditions on the realizer category, the ensuing category of partitioned groupoidal assemblies models intensional (1-truncated) type theory without function extensionality. Moreover, when the underlying realizer category is "untyped", there exists an impredicative universe of 1-types (the modest fibrations). This is a groupoidal analogue of the traditional situation.

We give a randomness-efficient homomorphism test in the low soundness regime for functions, $f: G\to \mathbb{U}_t$, from an arbitrary finite group $G$ to $t\times t$ unitary matrices. We show that if such a function passes a derandomized Blum--Luby--Rubinfeld (BLR) test (using small-bias sets), then (i) it correlates with a function arising from a genuine homomorphism, and (ii) it has a non-trivial Fourier mass on a low-dimensional irreducible representation. In the full randomness regime, such a test for matrix-valued functions on finite groups implicitly appears in the works of Gowers and Hatami [Sbornik: Mathematics '17], and Moore and Russell [SIAM Journal on Discrete Mathematics '15]. Thus, our work can be seen as a near-optimal derandomization of their results. Our key technical contribution is a "degree-2 expander mixing lemma" that shows that Gowers' $\mathrm{U}^2$ norm can be efficiently estimated by restricting it to a small-bias subset. Another corollary is a "derandomized" version of a useful lemma due to Babai, Nikolov, and Pyber [SODA'08].
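
A sketch of the fully-random matrix-valued BLR test: sample pairs $(x, y)$ and check $f(x)f(y) = f(xy)$. The derandomized test of the paper instead draws the pairs from a small-bias set, which this sketch does not implement.

```python
import numpy as np
import random

def blr_homomorphism_test(f, sample_pair, mul, trials=1000, tol=1e-6):
    """Fully-random BLR test for f: G -> U_t.

    f: group element -> t x t unitary (np.ndarray)
    sample_pair: () -> (x, y) drawn uniformly from G x G
    mul: the group operation
    Accepts iff f(x) f(y) is (numerically) equal to f(x * y) on all trials.
    """
    for _ in range(trials):
        x, y = sample_pair()
        if np.linalg.norm(f(x) @ f(y) - f(mul(x, y))) > tol:
            return False
    return True

# Example: G = Z_12 and f(x) = e^{2 pi i x / 12} as a 1x1 unitary,
# which is a genuine homomorphism, so the test accepts.
n = 12
f = lambda x: np.array([[np.exp(2j * np.pi * x / n)]])
pair = lambda: (random.randrange(n), random.randrange(n))
print(blr_homomorphism_test(f, pair, lambda a, b: (a + b) % n))  # True
```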

The ability of Large Language Models (LLMs) to process and generate coherent text is markedly weakened when the number of input tokens exceeds their pretraining length. Given the expensive overhead of finetuning large-scale models with longer sequences, we propose Dual Chunk Attention (DCA), which enables Llama2 70B to support context windows of more than 100k tokens without continual training. By decomposing the attention computation for long sequences into chunk-based modules, DCA manages to effectively capture the relative positional information of tokens within the same chunk (Intra-Chunk) and across distinct chunks (Inter-Chunk), as well as integrates seamlessly with Flash Attention. In addition to its impressive extrapolation capability, DCA achieves performance on practical long-context tasks that is comparable to or even better than that of finetuned models. When compared with proprietary models, our training-free 70B model attains 94% of the performance of gpt-3.5-16k, indicating it is a viable open-source alternative. All code and data used in this work are released at \url{https://github.com/HKUNLP/ChunkLlama}.
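
A simplified, illustrative sketch of the chunk-based remapping idea (the exact indexing in the released ChunkLlama code differs): relative positions within a chunk are kept exact, while positions across chunks are capped so that every relative index stays within the pretrained window.

```python
import numpy as np

def chunked_relative_positions(seq_len, chunk_size, max_offset):
    """Toy version of DCA-style position remapping for causal attention.

    Standard relative position i - j grows past the pretraining window on
    long sequences. Here, pairs inside the same chunk keep their true
    offset (intra-chunk); pairs in adjacent chunks keep locality up to
    max_offset; pairs in distant chunks get a capped offset (inter-chunk).
    Values are illustrative, not ChunkLlama's exact indices.
    """
    rel = np.zeros((seq_len, seq_len), dtype=int)
    for i in range(seq_len):
        for j in range(i + 1):                    # causal: keys j <= i
            ci, cj = i // chunk_size, j // chunk_size
            if ci == cj:
                rel[i, j] = i - j                 # intra-chunk: true offset
            elif ci == cj + 1:
                rel[i, j] = min(i - j, max_offset)  # successive chunks
            else:
                rel[i, j] = max_offset            # distant chunks: capped
    return rel

print(chunked_relative_positions(seq_len=8, chunk_size=4, max_offset=5))
```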

Cold-start problems are long-standing challenges for practical recommendation. Most existing recommendation algorithms rely on extensive observed data and are brittle in scenarios with few interactions. This paper addresses such problems using few-shot learning and meta-learning. Our approach is based on the insight that generalizing well from a few examples requires both a generic model initialization and an effective strategy for adapting this model to newly arising tasks. To accomplish this, we combine scenario-specific learning with model-agnostic sequential meta-learning and unify them into an integrated end-to-end framework, namely the Scenario-specific Sequential Meta learner (or s^2 meta). By doing so, our meta-learner produces a generic initial model by aggregating contextual information from a variety of prediction tasks, while effectively adapting to specific tasks by leveraging learning-to-learn knowledge. Extensive experiments on various real-world datasets demonstrate that our proposed model achieves significant gains over state-of-the-art methods for cold-start problems in online recommendation. The model is deployed in the Guess You Like section on the front page of Mobile Taobao.
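
The "generic initialization plus fast per-task adaptation" recipe is the meta-learning pattern popularized by MAML. A first-order toy sketch on linear-regression tasks (not the s^2 meta architecture) is:

```python
import numpy as np

def maml_sketch(tasks, inner_lr=0.01, outer_lr=0.001, epochs=100, dim=8):
    """Minimal first-order MAML-style loop: learn an initialization that
    adapts well to each task from a few support examples.

    tasks: list of (X_support, y_support, X_query, y_query) tuples,
    each X of shape (m, dim). Toy illustration only.
    """
    theta = np.zeros(dim)                         # shared initialization
    for _ in range(epochs):
        grads = np.zeros(dim)
        for Xs, ys, Xq, yq in tasks:
            # Inner step: adapt to the task on its few support examples.
            adapted = theta - inner_lr * Xs.T @ (Xs @ theta - ys) / len(ys)
            # Outer signal: query-set gradient at the adapted parameters.
            grads += Xq.T @ (Xq @ adapted - yq) / len(yq)
        theta -= outer_lr * grads / len(tasks)    # update the initialization
    return theta
```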
