
We show that the problem of whether a query is equivalent to a query of tree-width $k$ is decidable for the class of Unions of Conjunctive Regular Path Queries with two-way navigation (UC2RPQs). A previous result by Barcel\'o, Romero, and Vardi [SIAM Journal on Computing, 2016] established decidability for the case $k=1$; here we extend this result, showing that decidability in fact holds for every $k\geq 1$. The algorithm runs in 2ExpSpace, but for the restricted yet practically relevant case where all regular expressions of the query are of the form $a^*$ or $(a_1 + \dotsb + a_n)$, we show that the complexity of the problem drops to $\Pi^P_2$. We also investigate the related problem of approximating a UC2RPQ by queries of small tree-width. We exhibit an algorithm which, for any fixed number $k$, builds the maximal under-approximation of tree-width $k$ of a UC2RPQ. The maximal under-approximation of tree-width $k$ of a query $q$ is a query $q'$ of tree-width $k$ which is contained in $q$ in a maximal and unique way; that is, for every query $q''$ of tree-width $k$, if $q''$ is contained in $q$ then $q''$ is also contained in $q'$. Our approach is robust: it also allows testing equivalence with queries of a given path-width, it covers the previously known result for $k=1$, and it extends to testing whether a (one-way) UCRPQ is equivalent to a UCRPQ of a given tree-width (or path-width).
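As a concrete illustration (our example, not one from the paper): the C2RPQ $q(x,z) \leftarrow (x, a^*, y),\ (y, b^*, z),\ (z, c^*, x)$ has a triangle as its underlying graph and hence tree-width $2$; the algorithm above decides whether $q$ is equivalent to some query of tree-width $1$, and the under-approximation construction yields the maximal tree-width-$1$ query contained in $q$.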

Related Content

We introduce a call-by-name lambda-calculus $\lambda Jn$ with generalized applications which is equipped with distant reduction. This makes it possible to unblock $\beta$-redexes without resorting to the standard permutative conversions of generalized applications used in the original $\Lambda J$-calculus with generalized applications of Joachimski and Matthes. We show strong normalization of simply-typed terms, and we then fully characterize strong normalization by means of a quantitative (i.e. non-idempotent intersection) typing system. This characterization uses a non-trivial inductive definition of strong normalization (related to others in the literature), which is based on a weak-head normalizing strategy. We also show that our calculus $\lambda Jn$ relates to explicit substitution calculi by means of a faithful translation, in the sense that the translation preserves strong normalization. Moreover, our calculus $\lambda Jn$ and the original $\Lambda J$-calculus determine equivalent notions of strong normalization. As a consequence, $\Lambda J$ inherits a faithful translation into explicit substitutions, and its strong normalization can also be characterized by the quantitative typing system designed for $\lambda Jn$, despite the fact that quantitative subject reduction fails for permutative conversions.
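For readers unfamiliar with generalized applications, the following is a schematic rendering of the core syntax and $\beta$-rule (our notation, not verbatim from the paper): terms are given by $t, u, v ::= x \mid \lambda x.t \mid t(u, y.v)$, where $t(u, y.v)$ applies $t$ to $u$ and binds the result to $y$ in the continuation $v$, and the $\beta$-rule reads $(\lambda x.t)(u, y.v) \mapsto v\{y := t\{x := u\}\}$. Distant reduction additionally fires this rule when the $\lambda$-abstraction is separated from its argument by intervening generalized applications, which is the situation that the permutative conversions would otherwise have to resolve.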

A Particle Swarm Optimizer for the search of balanced Boolean functions with good cryptographic properties is proposed in this paper. The algorithm is a modified version of the permutation PSO by Hu, Eberhart and Shi which preserves the Hamming weight of the particles' positions, coupled with the Hill Climbing method devised by Millan, Clark and Dawson to improve the nonlinearity and deviation from correlation immunity of Boolean functions. The parameters of the PSO velocity equation are tuned by means of two meta-optimization techniques, namely Local Unimodal Sampling (LUS) and Continuous Genetic Algorithms (CGA), and we find that CGA produces better results. Using the CGA-evolved parameters, the PSO algorithm is then run on the spaces of Boolean functions from $n=7$ to $n=12$ variables. The results of the experiments show that this new PSO algorithm generates Boolean functions featuring similar or better combinations of nonlinearity, correlation immunity and propagation criterion than those obtained by other optimization methods.
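As a rough illustration of the objects being optimized (a minimal sketch assuming a truth-table encoding, a single weight-preserving swap move, and plain hill climbing, rather than the authors' full PSO with tuned velocity parameters):

```python
# Minimal sketch (assumed details, not the authors' exact implementation):
# balanced Boolean functions as 2^n-bit truth tables, nonlinearity via the
# Walsh-Hadamard transform, and a weight-preserving swap as the local move.
import numpy as np

def walsh_hadamard(signs):
    """Iterative fast Walsh-Hadamard transform of a +/-1 sequence."""
    w = signs.astype(np.int64).copy()
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a, b = w[i:i+h].copy(), w[i+h:i+2*h].copy()
            w[i:i+h], w[i+h:i+2*h] = a + b, a - b
        h *= 2
    return w

def nonlinearity(tt):
    """Nonlinearity of a Boolean function given as a 0/1 truth table."""
    signs = 1 - 2 * tt                      # 0/1 -> +1/-1
    return (len(tt) - np.abs(walsh_hadamard(signs)).max()) // 2

def swap_move(tt, rng):
    """Hamming-weight-preserving move: swap one 0-position with one 1-position."""
    zeros, ones = np.flatnonzero(tt == 0), np.flatnonzero(tt == 1)
    new = tt.copy()
    new[rng.choice(zeros)], new[rng.choice(ones)] = 1, 0
    return new

rng = np.random.default_rng(0)
n = 7
tt = np.zeros(2 ** n, dtype=np.int64)
tt[rng.choice(2 ** n, 2 ** (n - 1), replace=False)] = 1   # balanced start
best = tt
for _ in range(1000):                       # hill-climbing loop
    cand = swap_move(best, rng)
    if nonlinearity(cand) >= nonlinearity(best):
        best = cand
print(n, nonlinearity(best))
```

The swap move is what keeps every candidate function balanced, mirroring the Hamming-weight-preserving property of the permutation PSO described above.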

Let $X$ be a finite set. A family $P$ of subsets of $X$ is called a convex geometry with ground set $X$ if (1) $\emptyset, X\in P$; (2) $A\cap B\in P$ whenever $A,B\in P$; and (3) if $A\in P$ and $A\neq X$, there is an element $\alpha\in X-A$ such that $A\cup\{\alpha\}\in P$. As a non-empty family of sets, a convex geometry has a well-defined VC-dimension. In the literature, a second parameter, called convex dimension, has been defined expressly for these structures. Partially ordered by inclusion, a convex geometry is also a poset, and four additional dimension parameters have been defined for this larger class, called Dushnik-Miller dimension, Boolean dimension, local dimension, and fractional dimension, respectively. For each pair of these six dimension parameters, we investigate whether there is an infinite class of convex geometries on which one parameter is bounded and the other is not.
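A minimal example, for illustration: with $X=\{1,2\}$, the family $P=\{\emptyset,\{1\},\{1,2\}\}$ is a convex geometry, since $\emptyset, X\in P$, the family is closed under intersection, and each proper member extends by a single element ($\emptyset\cup\{1\}=\{1\}\in P$ and $\{1\}\cup\{2\}=X\in P$).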

We propose a new approach for approximating functions in $C([0,1]^d)$ via the Kolmogorov superposition theorem (KST), based on the linear spline approximation of the K-outer function in the Kolmogorov superposition representation. We improve the results in \cite{LaiShenKST21} by showing that the optimal approximation rate based on our proposed approach is $\mathcal{O}(\frac{1}{n^2})$, with $n$ being the number of knots over $[0,1]$, and that the approximation constant increases linearly in $d$. We show that there is a dense subclass in $C([0,1]^d)$ whose approximation can achieve such an optimal rate, and that the number of parameters needed in such an approximation is at most $\mathcal{O}(nd)$. Moreover, for $d\geq 4$, we apply the tensor product spline denoising technique to smooth the KB-splines and obtain the corresponding LKB-splines. We use those LKB-splines as the basis to approximate functions for the cases $d=4$ and $d=6$, which extends the results in \cite{LaiShenKST21} for $d=2$ and $d=3$. Based on the idea of pivotal data locations introduced in \cite{LaiShenKST21}, we validate via numerical experiments that fewer than $\mathcal{O}(nd)$ function values suffice to achieve approximation rates such as $\mathcal{O}(\frac{1}{n})$ or $\mathcal{O}(\frac{1}{n^2})$, depending on the smoothness of the K-outer function. Finally, we demonstrate that our approach can be applied to numerically solving partial differential equations such as the Poisson equation with accurate approximation results.
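For orientation, one standard form of KST (due to Lorentz) states that every $f\in C([0,1]^d)$ can be written as $$f(x_1,\dots,x_d) = \sum_{q=0}^{2d} g\Big(\sum_{p=1}^{d} \lambda_p\, \phi_q(x_p)\Big),$$ where the inner functions $\phi_q$ and constants $\lambda_p$ are universal (independent of $f$) and $g$ is the $f$-dependent K-outer function; it is this $g$ that the approach above approximates by linear splines.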

The Learning With Errors ($\mathsf{LWE}$) problem asks to find $\mathbf{s}$ from an input of the form $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}) \in (\mathbb{Z}/q\mathbb{Z})^{m \times n} \times (\mathbb{Z}/q\mathbb{Z})^{m}$, for a vector $\mathbf{e}$ that has small-magnitude entries. In this work, we do not focus on solving $\mathsf{LWE}$ but on the task of sampling instances. As these are extremely sparse in their range, it may seem plausible that the only way to proceed is to first create $\mathbf{s}$ and $\mathbf{e}$ and then set $\mathbf{b} = \mathbf{A}\mathbf{s}+\mathbf{e}$. In particular, such an instance sampler knows the solution. This raises the question whether it is possible to obliviously sample $(\mathbf{A}, \mathbf{A}\mathbf{s}+\mathbf{e})$, namely, without knowing the underlying $\mathbf{s}$. A variant of the assumption that oblivious $\mathsf{LWE}$ sampling is hard has been used in a series of works constructing Succinct Non-interactive Arguments of Knowledge (SNARKs) in the standard model. As the assumption is related to $\mathsf{LWE}$, these SNARKs have been conjectured to be secure in the presence of quantum adversaries. Our main result is a quantum polynomial-time algorithm that samples well-distributed $\mathsf{LWE}$ instances while provably not knowing the solution, under the assumption that $\mathsf{LWE}$ is hard. Moreover, the approach works for a vast range of $\mathsf{LWE}$ parametrizations, including those used in the above-mentioned SNARKs.
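To make the distinction concrete, here is a minimal sketch of the standard, non-oblivious sampler (illustrative parameters and error shape, not tied to any particular scheme):

```python
# A minimal sketch of the standard (non-oblivious) LWE instance sampler the
# abstract alludes to: it must create s and e first, and therefore knows the
# solution. Parameters (n, m, q) and the error distribution are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 32, 3329
A = rng.integers(0, q, size=(m, n))     # uniform public matrix
s = rng.integers(0, q, size=n)          # secret -- known to this sampler
e = rng.binomial(8, 0.5, size=m) - 4    # small centered error (assumed shape)
b = (A @ s + e) % q
# (A, b) is the LWE instance; the oblivious-sampling question is whether a
# sampler can output such pairs with the right distribution without ever
# holding s.
print(A.shape, b.shape)
```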

In an instance of the weighted Nash Social Welfare problem, we are given a set of $m$ indivisible items, $\mathscr{G}$, and $n$ agents, $\mathscr{A}$, where each agent $i \in \mathscr{A}$ has a valuation $v_{ij}\geq 0$ for each item $j\in \mathscr{G}$. In addition, every agent $i$ has a non-negative weight $w_i$ such that the weights collectively sum up to $1$. The goal is to find an assignment $\sigma:\mathscr{G}\rightarrow \mathscr{A}$ that maximizes $\prod_{i\in \mathscr{A}} \left(\sum_{j\in \sigma^{-1}(i)} v_{ij}\right)^{w_i}$, the product of the weighted valuations of the agents. When all the weights equal $\frac1n$, the problem reduces to the classical Nash Social Welfare problem, which has recently received much attention. In this work, we present a $5\cdot\exp\left(2\cdot D_{\text{KL}}(\mathbf{w}\, ||\, \frac{\vec{\mathbf{1}}}{n})\right) = 5\cdot\exp\left(2\log{n} + 2\sum_{i=1}^n w_i \log{w_i}\right)$-approximation algorithm for the weighted Nash Social Welfare problem, where $D_{\text{KL}}(\mathbf{w}\, ||\, \frac{\vec{\mathbf{1}}}{n})$ denotes the KL-divergence between the distribution induced by $\mathbf{w}$ and the uniform distribution on $[n]$. We show a novel connection between the convex programming relaxations for the unweighted variant of Nash Social Welfare presented in \cite{cole2017convex, anari2017nash}, and generalize the programs to two different mathematical programs for the weighted case. The first program is convex and is necessary for computational efficiency, while the second program is a non-convex relaxation that can be rounded efficiently. The approximation factor derives from the difference in the objective values of the convex and non-convex relaxations.
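To make the guarantee concrete, here is a small computation of the approximation factor as a function of $\mathbf{w}$ (illustrative, directly evaluating the formula above; note that for uniform weights the KL term vanishes and the guarantee specializes to a constant factor of $5$):

```python
# Evaluate the approximation factor 5 * exp(2 * KL(w || uniform)) from above.
import numpy as np

def approx_factor(w):
    w = np.asarray(w, dtype=float)
    n = len(w)
    kl = np.sum(w * np.log(w * n))   # D_KL(w || 1/n), assuming all w_i > 0
    return 5 * np.exp(2 * kl)

print(approx_factor([0.25, 0.25, 0.25, 0.25]))  # uniform weights -> exactly 5
print(approx_factor([0.7, 0.1, 0.1, 0.1]))      # skewed weights -> larger factor
```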

We extend the free cornering of a symmetric monoidal category, a double categorical model of concurrent interaction, to support branching communication protocols and iterated communication protocols. We validate our constructions by showing that they inherit significant categorical structure from the free cornering, including that they form monoidal double categories. We also establish some elementary properties of the novel structure they contain. Further, we give a model of the free cornering in terms of strong functors and strong natural transformations, inspired by the literature on computational effects.

Foundation models (e.g., ChatGPT, DALL-E, PengCheng Mind, PanGu-$\Sigma$) have demonstrated extraordinary performance in key technological areas, such as natural language processing and visual recognition, and have become the mainstream trend of artificial general intelligence. This has led more and more major technology companies to dedicate significant human and financial resources to actively developing their own foundation model systems, driving continuous growth in these models' parameter counts. As a result, the training and serving of these models pose significant challenges, including substantial demands on computing power, memory, and bandwidth. Employing efficient training and serving strategies therefore becomes particularly crucial. Many researchers have actively explored and proposed effective methods, so a comprehensive survey of these methods is essential for system developers and researchers. This paper extensively explores the methods employed in training and serving foundation models from various perspectives. It provides a detailed categorization of these state-of-the-art methods, covering finer aspects such as network, computing, and storage. Additionally, the paper summarizes the challenges and presents a perspective on the future development of foundation model systems. Through comprehensive discussion and analysis, we hope to provide a solid theoretical basis and practical guidance for future research and applications, promoting continuous innovation and development in foundation model systems.

Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, though PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations, such as poor reasoning ability, due to the lack of external knowledge. Research has been dedicated to incorporating knowledge into PLMs to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG), respectively, to highlight these two main tasks of NLP. For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This makes it possible to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
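As a minimal sketch of the key idea (using the negentropy regularizer, under which the smoothed max becomes log-sum-exp; the notation and the DTW recursion details here are ours, not taken verbatim from the paper):

```python
# Smoothed max via log-sum-exp (its gradient is softmax, so it is everywhere
# differentiable), plugged into a DTW-style DP recursion as a soft-min.
import numpy as np

def smoothed_max(x, gamma=1.0):
    """gamma * logsumexp(x / gamma), a strongly-convex smoothing of max."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + gamma * np.log(np.exp((x - m) / gamma).sum())

def soft_dtw(D, gamma=1.0):
    """Differentiable DTW value for a pairwise cost matrix D
    (soft-min obtained as -smoothed_max(-x))."""
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = np.array([R[i-1, j-1], R[i-1, j], R[i, j-1]])
            R[i, j] = D[i-1, j-1] - smoothed_max(-prev, gamma)
    return R[n, m]

D = np.abs(np.subtract.outer([0., 1., 2.], [0., 2.]))
print(soft_dtw(D, gamma=0.1))   # approaches the hard DTW cost as gamma -> 0
```

Because the smoothed max is differentiable, the whole recursion can be backpropagated through, which is what allows such DP layers to sit inside a neural network.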
