
Given an input graph $G = (V, E)$, an additive emulator $H = (V, E', w)$ is a sparse weighted graph that preserves all distances in $G$ up to a small additive error. A recent line of inquiry has sought to determine the best additive error achievable in the sparsest setting, when $H$ has a linear number of edges. In particular, the work of [Kogan and Parter, ICALP 2023], following [Pettie, ICALP 2007], constructed linear-size emulators with $+O(n^{0.222})$ additive error. The worst-case additive error must be at least $+\Omega(n^{2/29})$, due to [Lu, Vassilevska Williams, Wein, and Xu, SODA 2022]. We present a simple linear-size emulator construction that achieves additive error $+O(n^{0.191})$. Our approach extends the path-buying framework developed by [Baswana, Kavitha, Mehlhorn, and Pettie, SODA 2005] and [Vassilevska Williams and Bodwin, SODA 2016] to the setting of sparse additive emulators.
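To make the path-buying paradigm concrete, here is a minimal toy sketch in Python: a candidate shortcut is "bought" only when the current emulator still misses a pair's distance by more than the allowed additive slack. The rule shown (buying single weighted shortcut edges, pair by pair) and all names are illustrative simplifications; the actual constructions buy shortest paths against a value/cost threshold over a clustered base graph.

```python
import heapq
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Unweighted single-source distances in G via BFS."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dijkstra(wadj, src):
    """Weighted distances in the partial emulator H."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in wadj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def toy_path_buying_emulator(adj, beta):
    """Hypothetical toy rule: for each pair whose emulator distance
    exceeds dist_G + beta, buy one weighted shortcut edge."""
    nodes = list(adj)
    H = {u: [] for u in nodes}
    dG = {u: bfs_dist(adj, u) for u in nodes}
    for u, v in combinations(nodes, 2):
        if v not in dG[u]:
            continue  # different connected components
        if dijkstra(H, u).get(v, float("inf")) > dG[u][v] + beta:
            H[u].append((v, dG[u][v]))
            H[v].append((u, dG[u][v]))
    return H
```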

Related Content

This symposium focuses on efficient algorithms and data structures for discrete problems. Beyond the design of such methods and structures, its scope includes their use, their performance analysis, and the mathematical problems related to their development or limitations. Performance analysis may be analytical or experimental, and may address worst-case or expected-case behaviour. Studies may be theoretical or based on data sets arising in practice, and may address methodological questions involved in performance analysis.
February 21, 2024

In an attempt to show that the acceptance probability of a quantum query algorithm making $q$ queries can be well-approximated almost everywhere by a classical decision tree of depth $\leq \text{poly}(q)$, Aaronson and Ambainis proposed the following conjecture: let $f: \{ \pm 1\}^n \rightarrow [0,1]$ be a degree-$d$ polynomial with variance $\geq \epsilon$. Then, there exists a coordinate of $f$ with influence $\geq \text{poly} (\epsilon, 1/d)$. We show that for any polynomial $f: \{ \pm 1\}^n \rightarrow [0,1]$ of degree $d$ $(d \geq 2)$ and variance $\text{Var}[f] \geq 1/d$, if $\rho$ denotes a random restriction with survival probability $\dfrac{\log(d)}{C_1 d}$, $$ \text{Pr} \left[f_{\rho} \text{ has a coordinate with influence} \geq \dfrac{\text{Var}[f]^2 }{d^{C_2}} \right] \geq \dfrac{\text{Var}[f] \log(d)}{50C_1 d}$$ where $C_1, C_2>0$ are universal constants. Thus, the Aaronson-Ambainis conjecture is true for a non-negligible fraction of random restrictions of the given polynomial, assuming its variance is not too low.
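The objects in this statement (influences, variance, and random restrictions of a polynomial given by its Fourier expansion) are straightforward to compute directly. The sketch below is a self-contained illustration of those definitions, not the paper's proof machinery; the sampling experiment at the end and the thresholds in it are arbitrary choices for demonstration.

```python
import random

# f is given by its Fourier expansion: a dict frozenset(S) -> hat_f(S).
def variance(fourier):
    return sum(c * c for S, c in fourier.items() if S)

def influence(fourier, i):
    # Inf_i[f] = sum over sets S containing i of hat_f(S)^2.
    return sum(c * c for S, c in fourier.items() if i in S)

def random_restriction(fourier, n, p, rng=random):
    """Keep each coordinate alive with probability p; fix the rest to
    uniform +/-1. Returns the Fourier expansion of f_rho together with
    the set of surviving coordinates."""
    alive = {i for i in range(n) if rng.random() < p}
    fixed = {i: rng.choice((-1, 1)) for i in range(n) if i not in alive}
    restricted = {}
    for S, c in fourier.items():
        sign = 1
        for i in S:
            if i in fixed:
                sign *= fixed[i]
        key = frozenset(i for i in S if i in alive)
        restricted[key] = restricted.get(key, 0.0) + sign * c
    return restricted, alive

# Toy experiment: f(x) = x0 * x1 (degree 2); estimate how often a random
# restriction leaves a coordinate of non-trivial influence.
f = {frozenset({0, 1}): 1.0}
hits = sum(
    any(influence(fr, i) >= 0.1 for i in alive)
    for fr, alive in (random_restriction(f, 2, 0.5) for _ in range(1000))
)
print(hits / 1000)  # roughly Pr[at least one variable survives] = 0.75
```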

Federated Learning (FL) is a distributed machine learning paradigm that allows clients to train models on their data while preserving their privacy. FL algorithms, such as Federated Averaging (FedAvg) and its variants, have been shown to converge well in many scenarios. However, these methods require clients to upload their local updates to the server in a synchronous manner, which can be slow and unreliable in realistic FL settings. To address this issue, researchers have developed asynchronous FL methods that allow clients to continue training on their local data using a stale global model. However, most of these methods simply aggregate all of the received updates without considering their relative contributions, which can slow down convergence. In this paper, we propose a contribution-aware asynchronous FL method that takes into account the staleness and statistical heterogeneity of the received updates. Our method dynamically adjusts the contribution of each update based on these factors, which can speed up convergence compared to existing methods.
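The abstract does not spell out the weighting rule, so the sketch below shows one plausible form of contribution-aware asynchronous aggregation: each incoming update is discounted both by its staleness and by a heterogeneity proxy (its agreement with the server's momentum direction). The discount functions, constants, and names are all assumptions, not the paper's method.

```python
import numpy as np

def staleness_weight(tau, a=0.5):
    """Polynomial staleness discount (FedAsync-style assumption)."""
    return (1.0 + tau) ** (-a)

def heterogeneity_weight(update, global_dir, eps=1e-8):
    """Down-weight updates pointing away from the server's momentum,
    a hypothetical proxy for statistical heterogeneity."""
    cos = np.dot(update, global_dir) / (
        np.linalg.norm(update) * np.linalg.norm(global_dir) + eps)
    return 0.5 * (1.0 + cos)  # map [-1, 1] into [0, 1]

def server_step(w_global, momentum, update, tau, lr=1.0):
    """One asynchronous, contribution-aware aggregation step:
    tau is how many global versions old the client's base model is."""
    alpha = staleness_weight(tau) * heterogeneity_weight(update, momentum)
    w_new = w_global + lr * alpha * update
    momentum = 0.9 * momentum + 0.1 * update
    return w_new, momentum
```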

Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative such that out-of-distribution (OOD) actions will not be severely overestimated. However, existing approaches, penalizing the unseen actions or regularizing with the behavior policy, are too pessimistic, which suppresses the generalization of the value function and hinders the performance improvement. This paper explores mild but sufficient conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them proper pseudo Q values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and that no erroneous overestimation will occur for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization ability when transferring from offline to online, and significantly outperforms baselines. Our code is publicly available at //github.com/dmksjfl/MCQ.
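A minimal sketch of the pseudo-value idea, under assumed network interfaces: OOD actions are trained toward the best target value among actions sampled from a learned behavior model, so their Q-values stay anchored to in-support values instead of being pushed arbitrarily low. The shapes, the uniform OOD sampler, and all names are assumptions for illustration.

```python
import torch

def mcq_aux_loss(q_net, target_q_net, behavior_model, states, num_support=10):
    """Auxiliary loss sketch: assign OOD actions a pseudo target equal to
    the best target-Q over sampled in-support actions. `behavior_model`,
    `q_net`, and `target_q_net` are hypothetical modules."""
    with torch.no_grad():
        # Plausible in-support actions from a learned behavior model: (B, N, A)
        support = behavior_model.sample(states, num_support)
        s_rep = states.unsqueeze(1).expand(-1, num_support, -1)   # (B, N, S)
        q_support = target_q_net(s_rep, support)                  # (B, N, 1)
        pseudo_target = q_support.max(dim=1).values               # (B, 1)
    # OOD actions: uniform noise in [-1, 1]^A (an assumption, for brevity).
    a_ood = 2 * torch.rand(states.shape[0], support.shape[-1]) - 1
    q_ood = q_net(states, a_ood)
    return ((q_ood - pseudo_target) ** 2).mean()
```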

Given a dynamic graph $G$ with $n$ vertices and $m$ edges subject to insertions and deletions of edges, we show how to maintain a $(1+\varepsilon)\Delta$-edge-colouring of $G$ without the use of randomisation. More specifically, we show a deterministic dynamic algorithm with an amortised update time of $2^{\tilde{O}_{\log \varepsilon^{-1}}(\sqrt{\log n})}$ using $(1+\varepsilon)\Delta$ colours. If $\varepsilon^{-1} \in 2^{O(\log^{0.49} n)}$, then our update time is sub-polynomial in $n$. While there exist randomised algorithms maintaining colourings with the same number of colours [Christiansen STOC'23; Duan, He, Zhang SODA'19; Bhattacharya, Costa, Panski, Solomon SODA'24] in polylogarithmic and even constant update time, this is the first deterministic algorithm to go below the greedy threshold of $2\Delta-1$ colours for all input graphs. On the way to our main result, we show how to dynamically maintain a shallow hierarchy of degree-splitters with both recourse and update time in $n^{o(1)}$. We believe that this algorithm might be of independent interest.
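For contrast with the greedy threshold mentioned above, the following is the folklore dynamic baseline: on insertion, an edge greedily takes the smallest colour unused at both endpoints, and such a colour always exists among $2\Delta-1$ of them since the endpoints together touch at most $2\Delta-2$ other edges. This is only the baseline the paper goes below, not its algorithm; names are illustrative.

```python
class GreedyEdgeColouring:
    """Folklore baseline: a proper edge colouring with 2*Delta - 1
    colours under insertions and deletions (simple graphs)."""

    def __init__(self):
        self.colour = {}   # edge (u, v), u < v  ->  colour index
        self.used = {}     # vertex -> set of colours on incident edges

    def insert(self, u, v):
        e = (min(u, v), max(u, v))
        busy = self.used.setdefault(u, set()) | self.used.setdefault(v, set())
        # |busy| <= 2*Delta - 2, so some colour in {0, ..., 2*Delta - 2}
        # is free at both endpoints; take the smallest.
        c = 0
        while c in busy:
            c += 1
        self.colour[e] = c
        self.used[u].add(c)
        self.used[v].add(c)

    def delete(self, u, v):
        e = (min(u, v), max(u, v))
        c = self.colour.pop(e)
        self.used[u].discard(c)
        self.used[v].discard(c)
```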

Let $n$ be the size of a parameterized problem and $k$ the parameter. We present kernels for Feedback Vertex Set, Path Contraction and Cluster Editing/Deletion whose sizes are all polynomial in $k$ and that are computable in polynomial time with $O(\mathrm{poly}(k) \log n)$ bits of working memory. By using kernel cascades, we obtain the best known kernels in polynomial time with $O(\mathrm{poly}(k) \log n)$ bits.
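For a flavour of what such kernels compute, here is a sketch of the classic Feedback Vertex Set reduction rules (degree-1 deletion, degree-2 bypass, self-loop forcing) that underlie most FVS kernels. This is the standard in-memory version, not the paper's $O(\mathrm{poly}(k)\log n)$-bit implementation; the multigraph encoding is an assumption.

```python
def fvs_degree_rules(adj, k):
    """Exhaustively apply the classic FVS rules:
    (1) delete vertices of degree <= 1;
    (2) bypass degree-2 vertices, joining their two neighbours;
    (3) a self-loop forces its vertex into the solution.
    `adj` maps vertex -> list of neighbours (multigraph; a self-loop
    at v lists v twice). Returns (reduced adj, forced vertices, budget);
    a negative budget signals a no-instance."""
    forced, changed = set(), True
    while changed and k >= 0:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if adj[v].count(v) >= 1:                 # rule 3: self-loop
                forced.add(v); k -= 1
                _delete_vertex(adj, v); changed = True
            elif len(adj[v]) <= 1:                   # rule 1: low degree
                _delete_vertex(adj, v); changed = True
            elif len(adj[v]) == 2:                   # rule 2: bypass
                a, b = adj[v]
                _delete_vertex(adj, v)
                adj[a].append(b); adj[b].append(a)   # may create a self-loop
                changed = True
    return adj, forced, k

def _delete_vertex(adj, v):
    """Remove v and all edges incident to it from the multigraph."""
    for u in adj.pop(v):
        if u != v and u in adj:
            adj[u].remove(v)
```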

Given a graph $G$ with $n$ nodes and two nodes $u,v\in G$, the {\em CoSimRank} value $s(u,v)$ quantifies the similarity between $u$ and $v$ based on graph topology. Compared to SimRank, CoSimRank is shown to be more accurate and effective in many real-world applications, including synonym expansion, lexicon extraction, and entity relatedness in knowledge graphs. The computation of all pairwise CoSimRanks in $G$ is highly expensive and challenging. Existing solutions all focus on devising approximate algorithms for the computation of all pairwise CoSimRanks. To attain a desired absolute accuracy guarantee $\epsilon$, the state-of-the-art approximate algorithm for computing all pairwise CoSimRanks requires $O(n^3\log_2(\ln(\frac{1}{\epsilon})))$ time, which is prohibitively expensive even when $\epsilon$ is large. In this paper, we propose RSim, a fast randomized algorithm for computing all pairwise CoSimRank values. The basic idea of RSim is to approximate the $n\times n$ matrix multiplications in CoSimRank computation via random projection. Theoretically, RSim runs in $O(\frac{n^2\ln(n)}{\epsilon^2}\ln(\frac{1}{\epsilon}))$ time and ensures an absolute error of at most $\epsilon$ in each CoSimRank value in $G$ with high probability. Extensive experiments using six real graphs demonstrate that RSim is orders of magnitude faster than the state of the art. In particular, on a million-edge Twitter graph, RSim answers the $\epsilon$-approximate ($\epsilon=0.1$) all pairwise CoSimRank query within 4 hours, using a single commodity server, while existing solutions fail to terminate within a day.
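The core idea, replacing each exact Gram matrix $P^k (P^k)^T$ in the CoSimRank series with a sketched product via a Johnson-Lindenstrauss projection, fits in a few lines of NumPy. The truncation length, the $(1-c)$ scaling convention, and the function name below are assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def cosimrank_rp(A, c=0.8, d=200, iters=20, seed=0):
    """Approximate all-pairs CoSimRank
        S = (1 - c) * sum_k c^k P^k (P^k)^T,  P = row-normalised adjacency,
    by replacing each Gram matrix with (P^k R)(P^k R)^T for a random
    Gaussian sketch R in R^{n x d} (Johnson-Lindenstrauss)."""
    n = A.shape[0]
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d)) / np.sqrt(d)   # X_0 = R, E[R R^T] = I
    S = np.zeros((n, n))
    for k in range(iters):
        S += (1 - c) * c**k * (X @ X.T)            # sketched P^k (P^k)^T
        X = P @ X                                  # advance: X_{k+1} = P X_k
    return S
```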

Recent work has shown it is possible to construct adversarial examples that cause an aligned language model to emit harmful strings or perform harmful behavior. Existing attacks work either in the white-box setting (with full access to the model weights), or through transferability: the phenomenon that adversarial examples crafted on one model often remain effective on other models. We improve on prior work with a query-based attack that leverages API access to a remote language model to construct adversarial examples that cause the model to emit harmful strings with (much) higher probability than with transfer-only attacks. We validate our attack on GPT-3.5 and OpenAI's safety classifier; we can cause GPT-3.5 to emit harmful strings on which current transfer attacks fail, and we can evade the safety classifier with nearly 100% probability.

Fault-tolerant connectivity labelings are schemes that, given an $n$-vertex graph $G=(V,E)$ and $f\geq 1$, produce succinct yet informative labels for the elements of the graph. Given only the labels of two vertices $u,v$ and of the elements in a faulty-set $F$ with $|F|\leq f$, one can determine if $u,v$ are connected in $G-F$, the surviving graph after removing $F$. For the edge or vertex faults models, i.e., $F\subseteq E$ or $F\subseteq V$, a sequence of recent works established schemes with $\mathrm{poly}(f,\log n)$-bit labels. This paper considers the color faults model, recently introduced in the context of spanners [Petruschka, Sapir and Tzalik, ITCS'24], which accounts for known correlations between failures. Here, the edges (or vertices) of the input $G$ are arbitrarily colored, and the faulty elements in $F$ are colors; a failing color causes all edges (vertices) of that color to crash. Our main contribution is settling the label length complexity for connectivity under one color fault ($f=1$). The existing implicit solution, by applying the state-of-the-art scheme for edge faults of [Dory and Parter, PODC'21], might yield labels of $\Omega(n)$ bits. We provide a deterministic scheme with labels of $\tilde{O}(\sqrt{n})$ bits in the worst case, and a matching lower bound. Moreover, our scheme is universally optimal: even schemes tailored to handle only colorings of one specific graph topology cannot produce asymptotically smaller labels. We extend our labeling approach to yield a routing scheme avoiding a single forbidden color. We also consider the centralized setting, and show an $\tilde{O}(n)$-space oracle, answering connectivity queries under one color fault in $\tilde{O}(1)$ time. Turning to $f\geq 2$ color faults, we give a randomized labeling scheme with $\tilde{O}(n^{1-1/2^f})$-bit labels, along with a lower bound of $\Omega(n^{1-1/(f+1)})$ bits.
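For perspective on the oracle result, the naive centralized baseline is simple: answer each query by rebuilding a union-find structure over all edges whose colour differs from the queried faulty colour, costing roughly $O(m\,\alpha(n))$ time per query with no preprocessing, versus the paper's $\tilde{O}(n)$-space, $\tilde{O}(1)$-time oracle. A sketch with hypothetical names:

```python
class DSU:
    """Union-find with path halving (no ranks, for brevity)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def connected_avoiding_color(n, colored_edges, u, v, faulty_color):
    """Naive per-query baseline: union all edges not of the faulty
    colour, then test connectivity. colored_edges: list of (x, y, color)."""
    dsu = DSU(n)
    for x, y, color in colored_edges:
        if color != faulty_color:
            dsu.union(x, y)
    return dsu.find(u) == dsu.find(v)
```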

Recent work has shown that it is possible to train an $\textit{unsupervised}$ automatic speech recognition (ASR) system using only unpaired audio and text. Existing unsupervised ASR methods assume that no labeled data can be used for training. We argue that even if one does not have any labeled audio for a given language, there is $\textit{always}$ labeled data available for other languages. We show that it is possible to use character-level acoustic models (AMs) from other languages to bootstrap an $\textit{unsupervised}$ AM in a new language. Here, "unsupervised" means no labeled audio is available for the $\textit{target}$ language. Our approach is based on two key ingredients: (i) generating pseudo-labels (PLs) of the $\textit{target}$ language using some $\textit{other}$ language AM and (ii) constraining these PLs with a $\textit{target language model}$. Our approach is effective on Common Voice: e.g. transfer of English AM to Swahili achieves 18% WER. It also outperforms character-based wav2vec-U 2.0 by 15% absolute WER on LJSpeech with 800h of labeled German data instead of 60k hours of unlabeled English data.
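Ingredient (ii), constraining pseudo-labels with a target language model, amounts to rescoring the foreign AM's hypotheses. A minimal sketch, where the interpolation weight, the length normalisation, and the toy unigram "LM" are all assumptions:

```python
def select_pseudo_label(candidates, lm_logprob, alpha=0.6):
    """Pick the pseudo-label that balances the foreign AM's acoustic
    score against the target language model's score.
    candidates: list of (transcript, am_logprob) from beam search.
    lm_logprob: callable transcript -> log p_LM(transcript)."""
    def score(item):
        text, am_lp = item
        # Length-normalise so the LM does not simply prefer short strings.
        combined = (1 - alpha) * am_lp + alpha * lm_logprob(text)
        return combined / max(len(text.split()), 1)
    return max(candidates, key=score)[0]

# Toy usage with a hypothetical unigram "language model" for the target
# language: the LM pulls the pseudo-label toward in-vocabulary words.
unigram = {"habari": -1.0, "hello": -6.0, "dunia": -1.2}
lm = lambda t: sum(unigram.get(w, -10.0) for w in t.split())
print(select_pseudo_label([("hello dunia", -3.0), ("habari dunia", -4.5)], lm))
# -> "habari dunia"
```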

Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use conditioned masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal setting for the combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
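Conditioned masking is easy to state in code: each training sample masks tokens in exactly one modality, so the other modality is fully observed, whereas joint random masking would draw both masks independently. A sketch with assumed shapes and masking rate:

```python
import torch

def conditioned_masks(text_len, num_regions, batch, p=0.15):
    """Per sample, mask positions in exactly one modality (text or image
    regions) so that masked language/region modeling always conditions on
    a fully observed other modality. Returns boolean masks of shape
    (batch, text_len) and (batch, num_regions)."""
    text_mask = torch.rand(batch, text_len) < p
    region_mask = torch.rand(batch, num_regions) < p
    mask_text = torch.rand(batch) < 0.5           # chosen modality per sample
    text_mask &= mask_text.unsqueeze(1)           # keep text mask only if chosen
    region_mask &= (~mask_text).unsqueeze(1)      # otherwise mask regions
    return text_mask, region_mask
```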
