
Given a binary word relation $\tau$ on $A^*$ and a finite language $X\subseteq A^*$, a $\tau$-Gray cycle over $X$ consists of a permutation $\left(w_{[i]}\right)_{0\le i\le |X|-1}$ of $X$ such that each word $w_{[i]}$ is an image under $\tau$ of the previous word $w_{[i-1]}$. We define the complexity measure $\lambda_{A,\tau}(n)$ as the largest cardinality of a language $X$ whose words have length at most $n$ and over which some $\tau$-Gray cycle exists. The present paper is concerned with $\tau=\sigma_k$, the so-called $k$-character substitution, for which $(u,v)\in\sigma_k$ holds if, and only if, the Hamming distance between $u$ and $v$ is $k$. We present loopless (resp., constant amortized time) algorithms for computing specific maximum-length $\sigma_k$-Gray cycles.
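To make the definitions concrete, here is a minimal Python sketch (not the paper's loopless/CAT algorithms) of the $\sigma_k$ relation and a checker for $\sigma_k$-Gray cycles; it assumes, per the term "cycle", that the wrap-around pair $(w_{[|X|-1]}, w_{[0]})$ must also be related under $\sigma_k$:

```python
def hamming(u, v):
    """Hamming distance; sigma_k only relates words of equal length."""
    if len(u) != len(v):
        return None
    return sum(a != b for a, b in zip(u, v))

def is_sigma_k_gray_cycle(words, k):
    """True iff consecutive words (cyclically) differ in exactly k positions."""
    return len(words) > 0 and all(
        hamming(words[i - 1], words[i]) == k for i in range(len(words))
    )

# The classic reflected Gray code is a sigma_1-Gray cycle over all binary
# words of length 2: each step substitutes exactly one character.
assert is_sigma_k_gray_cycle(["00", "01", "11", "10"], k=1)
```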

Related content

For a fixed set ${\cal H}$ of graphs, a graph $G$ is ${\cal H}$-subgraph-free if $G$ does not contain any $H \in {\cal H}$ as a (not necessarily induced) subgraph. A recently proposed framework gives a complete classification on ${\cal H}$-subgraph-free graphs (for finite sets ${\cal H}$) for problems that are solvable in polynomial time on graph classes of bounded treewidth, NP-complete on subcubic graphs, and whose NP-hardness is preserved under edge subdivision. While many problems satisfy these conditions, there are also many problems that do not satisfy all three, and for which the complexity on ${\cal H}$-subgraph-free graphs is unknown. In this paper, we study problems for which only the first two conditions of the framework hold (they are solvable in polynomial time on classes of bounded treewidth and NP-complete on subcubic graphs, but NP-hardness is not preserved under edge subdivision). In particular, we make inroads into the classification of the complexity of four such problems: $k$-Induced Disjoint Paths, $C_5$-Colouring, Hamilton Cycle and Star $3$-Colouring. Although we do not complete the classifications, we show that the boundary between polynomial time and NP-complete differs among our problems and differs from that of problems which do satisfy all three conditions of the framework. Hence, we exhibit a rich complexity landscape among problems for ${\cal H}$-subgraph-free graph classes.
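For reference, the ${\cal H}$-subgraph-free condition itself can be tested directly (a brute-force sketch using networkx; `subgraph_is_monomorphic` checks for a not-necessarily-induced subgraph, and the check is exponential in general):

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_H_subgraph_free(G, H_family):
    """True iff G contains no member of H_family as a
    (not necessarily induced) subgraph."""
    return not any(
        GraphMatcher(G, H).subgraph_is_monomorphic() for H in H_family
    )

# The 4-cycle contains no triangle, but does contain a 3-vertex path.
G = nx.cycle_graph(4)
print(is_H_subgraph_free(G, [nx.complete_graph(3)]))  # True
print(is_H_subgraph_free(G, [nx.path_graph(3)]))      # False
```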

For an undirected unweighted graph $G=(V,E)$ with $n$ vertices and $m$ edges, let $d(u,v)$ denote the distance from $u\in V$ to $v\in V$ in $G$. An $(\alpha,\beta)$-stretch approximate distance oracle (ADO) for $G$ is a data structure that, given $u,v\in V$, returns in constant (or near constant) time a value $\hat d (u,v)$ such that $d(u,v) \le \hat d (u,v) \le \alpha\cdot d(u,v) + \beta$, for some reals $\alpha >1$ and $\beta$. If $\beta = 0$, we say that the ADO has stretch $\alpha$. Thorup and Zwick~\cite{thorup2005approximate} showed that one cannot beat stretch 3 with subquadratic space (in terms of $n$) for general graphs. P\v{a}tra\c{s}cu and Roditty~\cite{patrascu2010distance} showed that one can obtain stretch 2 using $O(m^{1/3}n^{4/3})$ space, so if $m$ is subquadratic in $n$ then the space usage is also subquadratic. Moreover, P\v{a}tra\c{s}cu and Roditty~\cite{patrascu2010distance} showed that one cannot beat stretch 2 with subquadratic space even for graphs where $m=\tilde{O}(n)$, based on the set-intersection hypothesis. In this paper we explore the conditions under which an ADO can be stored using subquadratic space while supporting a sub-2 stretch. In particular, we show that if the maximum degree in $G$ is $\Delta_G \leq O(n^{1/2-\varepsilon})$ for some $0<\varepsilon \leq 1/2$, then there exists an ADO for $G$ that uses $\tilde{O}(n^{2-\frac {2\varepsilon}{3}})$ space and has a sub-2 stretch. Moreover, we prove a conditional lower bound, based on the set-intersection hypothesis, which states that for any positive integer $k \leq \log n$, obtaining a sub-$\frac{k+2}{k}$ stretch for graphs with maximum degree $\Theta(n^{1/k})$ requires quadratic space. Thus, for graphs with maximum degree $\Theta(n^{1/2})$, obtaining a sub-2 stretch requires quadratic space.
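To pin down the stretch definition, a brute-force verification sketch (networkx; quadratic work over all pairs, so purely illustrative and nothing like the compact data structures discussed above):

```python
import networkx as nx

def satisfies_stretch(G, oracle, alpha, beta=0.0):
    """Verify d(u,v) <= oracle(u,v) <= alpha*d(u,v) + beta
    over all connected pairs of distinct vertices."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return all(
        dist[u][v] <= oracle(u, v) <= alpha * dist[u][v] + beta
        for u in G for v in dist[u] if u != v
    )

# A trivial "oracle" answering exact distances satisfies any alpha > 1.
G = nx.petersen_graph()
d = dict(nx.all_pairs_shortest_path_length(G))
assert satisfies_stretch(G, lambda u, v: d[u][v], alpha=1.5)
```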

Large language models (LLMs), such as ChatGPT, have simplified text generation tasks, yet their inherent privacy risks are increasingly garnering attention. While differential privacy techniques have been successfully applied to text classification tasks, the resultant semantic bias makes them unsuitable for text generation. Homomorphic encryption inference methods have also been introduced; however, their significant computational and communication costs limit their viability. Furthermore, closed-source, black-box models such as GPT-4 withhold their architecture, thwarting certain privacy-enhancing strategies such as splitting inference into local and remote parts and adding noise during communication. To overcome these challenges, we introduce PrivInfer, the first privacy-preserving inference framework for black-box LLMs in text generation. Inspired by human writing, PrivInfer employs differential privacy methods to generate perturbed prompts for remote LLM inference and extracts the meaningful response from the remote perturbed results. We also introduce RANTEXT, a differential privacy scheme specifically for LLMs that leverages random adjacency in text perturbations. Experimental results indicate that PrivInfer is comparable to GPT-4 in terms of text generation quality while protecting privacy, and RANTEXT provides enhanced privacy protection against three types of differential privacy attacks, including our newly introduced GPT inference attack, compared to baseline methods.
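As a hedged illustration of the general idea (not PrivInfer or RANTEXT itself), word-level prompt perturbation can be done with the exponential mechanism over a per-word candidate ("adjacency") list; here `neighbors` and the similarity `score` are hypothetical stand-ins:

```python
import math
import random

def perturb_word(word, neighbors, epsilon, score):
    """Exponential mechanism: sample a substitute for `word` from its
    candidate list with probability proportional to exp(eps*score/2);
    larger epsilon means less perturbation."""
    cands = neighbors.get(word, [word])
    weights = [math.exp(epsilon * score(word, c) / 2) for c in cands]
    return random.choices(cands, weights=weights, k=1)[0]

def perturb_prompt(prompt, neighbors, epsilon, score):
    """Perturb a prompt word by word before sending it to a remote LLM."""
    return " ".join(
        perturb_word(w, neighbors, epsilon, score) for w in prompt.split()
    )

# Hypothetical adjacency list and similarity score, for illustration only.
neighbors = {"diabetes": ["diabetes", "hypertension", "asthma"]}
sim = lambda a, b: 1.0 if a == b else 0.3
print(perturb_prompt("my diabetes history", neighbors, epsilon=1.0, score=sim))
```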

In this paper we show derivations among logarithmic-space-bounded counting classes, based on closure properties of $\#L$, which lead us to the result that $NL=PL=C_=L$.

A 2-packing set for an undirected graph $G=(V,E)$ is a subset $\mathcal{S} \subset V$ such that any two vertices $v_1,v_2 \in \mathcal{S}$ have no common neighbors. Finding a 2-packing set of maximum cardinality is an NP-hard problem. We develop a new approach to solving this problem on arbitrary graphs, using its close relation to the independent set problem. Our algorithm, red2pack, uses new data reduction rules specific to the 2-packing set problem as well as a graph transformation. Our experiments show that we outperform the state-of-the-art for arbitrary graphs with respect to solution quality and also compute solutions multiple orders of magnitude faster than previously possible. For example, we solve 63% of our graphs to optimality in less than a second, while the competitor for arbitrary graphs can only solve 5% of the graphs in the data set to optimality even with a 10-hour time limit. Moreover, our approach can solve a wide range of large instances that had previously been unsolved.
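A sketch of the two objects involved (not red2pack's reduction rules): a verifier for the definition quoted above, and the square-graph transformation that links 2-packing sets to independent sets; under the common distance-based reading (pairwise distance greater than 2), 2-packing sets of $G$ are exactly the independent sets of $G^2$:

```python
from itertools import combinations
import networkx as nx

def is_2_packing(G, S):
    """Per the definition above: no two vertices of S share a neighbor."""
    return all(not (set(G[u]) & set(G[v])) for u, v in combinations(S, 2))

def square_graph(G):
    """G^2: joins every pair of vertices at distance <= 2 in G."""
    G2 = nx.Graph(G.edges())
    G2.add_nodes_from(G)
    for w in G:
        # neighbors of w are pairwise at distance <= 2 via w
        G2.add_edges_from(combinations(G[w], 2))
    return G2

P5 = nx.path_graph(5)              # vertices 0-1-2-3-4
print(is_2_packing(P5, {0, 3}))    # True: N(0)={1}, N(3)={2,4} are disjoint
```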

We consider kernels of the form $(x,x') \mapsto \phi(\|x-x'\|^2_\Sigma)$ parametrized by $\Sigma$. For such kernels, we study a variant of the kernel ridge regression problem which simultaneously optimizes the prediction function and the parameter $\Sigma$ of the reproducing kernel Hilbert space. The eigenspace of the $\Sigma$ learned from this kernel ridge regression problem can inform us which directions in covariate space are important for prediction. Assuming that the covariates have nonzero explanatory power for the response only through a low-dimensional subspace (the central mean subspace), we find that the global minimizer of the finite-sample kernel learning objective is also low rank with high probability. More precisely, the rank of the minimizing $\Sigma$ is, with high probability, bounded by the dimension of the central mean subspace. This phenomenon is interesting because the low-rankness is achieved without any explicit regularization of $\Sigma$, e.g., nuclear norm penalization. Our theory establishes a correspondence between the observed phenomenon and the notion of low-rank set identifiability from the optimization literature. The low-rankness of the finite-sample solutions arises because the population kernel learning objective grows "sharply" when moving away from its minimizers in any direction perpendicular to the central mean subspace.
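A minimal numpy sketch of the finite-sample objective, taking $\phi(t)=e^{-t}$ for concreteness (the paper allows other $\phi$) and profiling out the prediction function via the representer theorem:

```python
import numpy as np

def kernel(X, Sigma):
    """K_ij = phi(||x_i - x_j||^2_Sigma) with phi(t) = exp(-t)."""
    D = X[:, None, :] - X[None, :, :]              # pairwise differences
    return np.exp(-np.einsum('ijd,de,ije->ij', D, Sigma, D))

def profiled_objective(Sigma, X, y, lam):
    """Kernel ridge regression with f profiled out: for fixed Sigma the
    optimal coefficients are alpha = (K + lam*I)^{-1} y, reducing the
    joint problem to a function of Sigma alone.  Note there is no
    explicit penalty (e.g. nuclear norm) on Sigma anywhere."""
    n = len(y)
    K = kernel(X, Sigma)
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    resid = y - K @ alpha
    return resid @ resid + lam * alpha @ (K @ alpha)
```

The paper's claim, restated in these terms, is that the $\Sigma$ minimizing this objective over positive semidefinite matrices has, with high probability, rank at most the dimension of the central mean subspace.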

We propose a novel unsupervised learning approach for non-rigid 3D shape matching. Our approach improves upon recent state-of-the-art deep functional map methods and can be applied to a broad range of challenging scenarios. Previous deep functional map methods mainly focus on feature extraction and aim exclusively at obtaining more expressive features for functional map computation. However, the importance of the functional map computation itself is often neglected, and the relationship between the functional map and the point-wise map is underexplored. In this paper, we systematically investigate the coupling relationship between the functional map from the functional map solver and the point-wise map based on feature similarity. To this end, we propose a self-adaptive functional map solver to adjust the functional map regularisation for different shape matching scenarios, together with a vertex-wise contrastive loss to obtain more discriminative features. Using different challenging datasets (including non-isometry, topological noise and partiality), we demonstrate that our method substantially outperforms previous state-of-the-art methods.
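For orientation, the two standard building blocks whose coupling the paper studies can be sketched as follows (a hedged sketch; the self-adaptive solver and the contrastive loss themselves are not reproduced): the functional map is fit by least squares in the spectral bases, while the point-wise map comes from nearest-neighbour feature similarity.

```python
import numpy as np

def functional_map(Phi_x, Phi_y, F_x, F_y, lam=1e-3):
    """C = argmin ||C A - B||_F^2 + lam ||C||_F^2, where A = pinv(Phi_x) @ F_x
    and B = pinv(Phi_y) @ F_y are features in the truncated spectral bases."""
    A = np.linalg.pinv(Phi_x) @ F_x
    B = np.linalg.pinv(Phi_y) @ F_y
    k = A.shape[0]
    return B @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(k))

def pointwise_map(F_x, F_y):
    """Point-wise map from feature similarity: each vertex of shape Y is
    matched to its nearest vertex of shape X in feature space."""
    sim = F_y @ F_x.T                # (n_y, n_x) similarity matrix
    return sim.argmax(axis=1)
```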

We study the following characterization problem. Given a set $T$ of terminals and a $(2^{|T|}-2)$-dimensional vector $\pi$ whose coordinates are indexed by proper subsets of $T$, is there a graph $G$ that contains $T$, such that for all subsets $\emptyset\subsetneq S\subsetneq T$, $\pi_S$ equals the value of the min-cut in $G$ separating $S$ from $T\setminus S$? The only known necessary conditions are submodularity and a special class of linear inequalities given by Chaudhuri, Subrahmanyam, Wagner and Zaroliagis. Our main result is a new class of linear inequalities concerning laminar families that generalizes all previously known ones. Using our new class of inequalities, we can generalize Karger's approximate min-cut counting result to graphs with terminals.
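To make the object $\pi$ concrete, a brute-force sketch that computes it from a given graph (networkx, unit edge capacities, exponential in $|T|$; it assumes the vertex names 's' and 't' are free to use as the contracted source and sink):

```python
from itertools import combinations
import networkx as nx

def cut_vector(G, T):
    """pi_S = value of the minimum cut separating S from T \\ S,
    for every proper nonempty subset S of the terminal set T."""
    T = list(T)
    pi = {}
    for r in range(1, len(T)):
        for S in combinations(T, r):
            side = set(S)
            H = nx.Graph()
            H.add_nodes_from(['s', 't'])   # contracted super-nodes
            for u, v in G.edges():
                a = 's' if u in side else ('t' if u in T else u)
                b = 's' if v in side else ('t' if v in T else v)
                if a == b:
                    continue               # edge inside one side: contracted away
                cap = H[a][b]['capacity'] + 1 if H.has_edge(a, b) else 1
                H.add_edge(a, b, capacity=cap)
            pi[frozenset(S)] = nx.minimum_cut_value(H, 's', 't')
    return pi

G = nx.cycle_graph(4)                      # 0-1-2-3-0; take 0,1,2 as terminals
print(cut_vector(G, [0, 1, 2]))
```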

End-to-end spoken language understanding (SLU) remains elusive even with current large pretrained language models on text and speech, especially in multilingual cases. Machine translation has been established as a powerful pretraining objective on text, as it enables the model to capture high-level semantics of the input utterance and associations between different languages, which is desired for speech models that work on lower-level acoustic frames. Motivated particularly by the task of cross-lingual SLU, we demonstrate that the task of speech translation (ST) is a good means of pretraining speech models for end-to-end SLU in both intra- and cross-lingual scenarios. By introducing ST, our models achieve higher performance than baselines on monolingual and multilingual intent classification as well as spoken question answering, using the SLURP, MINDS-14, and NMSQA benchmarks. To verify the effectiveness of our methods, we also create new benchmark datasets from both synthetic and real sources, for speech summarization and low-resource/zero-shot transfer from English to French or Spanish. We further show the value of preserving knowledge from the ST pretraining task for better downstream performance, possibly using Bayesian transfer regularizers.

A sememe is defined as the minimum semantic unit of human languages. Sememe knowledge bases (KBs), which contain words annotated with sememes, have been successfully applied to many NLP tasks. However, existing sememe KBs are built for only a few languages, which hinders their widespread utilization. To address this issue, we propose to build a unified sememe KB for multiple languages based on BabelNet, a multilingual encyclopedic dictionary. We first build a dataset serving as the seed of the multilingual sememe KB, in which we manually annotate sememes for over $15$ thousand synsets (the entries of BabelNet). Then, we present a novel task of automatic sememe prediction for synsets, aiming to expand the seed dataset into a usable KB. We also propose two simple and effective models, which exploit different kinds of synset information. Finally, we conduct quantitative and qualitative analyses to explore important factors and difficulties in the task. All the source code and data of this work can be obtained at //github.com/thunlp/BabelNet-Sememe-Prediction.
