
After substantial progress over the last 15 years, the "algebraic CSP-dichotomy conjecture" reduces to the following: every local constraint satisfaction problem (CSP) associated with a finite idempotent algebra is tractable if and only if the algebra has a Taylor term operation. Despite the tremendous achievements in this area (including recently announced proofs of the general conjecture), there remain examples of small algebras with just a single binary operation whose CSP resists direct classification as either tractable or NP-complete using known methods. In this paper we present some new methods for approaching such problems, with particular focus on techniques that help us attack the class of finite algebras known as "commutative idempotent binars" (CIBs). We demonstrate the utility of these methods by using them to prove that every CIB of cardinality at most 4 yields a tractable CSP.
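
For concreteness, a CIB is an algebra with a single binary operation satisfying the identities $x \cdot x \approx x$ (idempotence) and $x \cdot y \approx y \cdot x$ (commutativity). The following minimal Python sketch, entirely ours and purely illustrative, checks these two identities on a finite operation table; the example works because every meet-semilattice is a CIB.

```python
from itertools import product

def is_cib(table):
    """Check whether a finite operation table defines a commutative
    idempotent binar (CIB): x*x == x and x*y == y*x for all x, y."""
    n = len(table)
    if any(table[x][x] != x for x in range(n)):      # idempotence
        return False
    return all(table[x][y] == table[y][x]            # commutativity
               for x, y in product(range(n), repeat=2))

# The meet operation of the chain 0 < 1 < 2 < 3: a 4-element CIB.
meet = [[min(x, y) for y in range(4)] for x in range(4)]
print(is_cib(meet))  # True
```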

Related content

Orthology and paralogy relations are often inferred by methods based on gene similarity, which usually yield a graph depicting the relationships between gene pairs. Such relation graphs are known to frequently contain errors, as they cannot be explained via a gene tree that both contains the depicted orthologs/paralogs and is consistent with a species tree $S$. This idea of detecting errors through inconsistency with a species tree has mostly been studied in the presence of speciation and duplication events only. In this work, we ask: could the given set of relations be consistent if we allow lateral gene transfers in the evolutionary model? We formalize this question and provide a variety of algorithmic results regarding the underlying problems. Namely, we show that deciding if a relation graph $R$ is consistent with a given species network $N$ is NP-hard, and that it is W[1]-hard under the parameter "minimum number of transfers". However, we present an FPT algorithm based on the degree of the $DS$-tree associated with $R$. We also study analogous problems in the case where the transfer highways on a species tree are unknown.

The width of a well partial ordering (wpo) is the ordinal rank of the set of its antichains ordered by inclusion. We compute the width of wpos obtained as Cartesian products of finitely many well-orderings.
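
To fix notation (ours; conventions in the literature differ, for instance on whether the empty antichain is counted), the definition can be rendered as

$$\operatorname{width}(P)\;=\;\operatorname{rank}\bigl(\operatorname{Ant}(P),\ \subsetneq\bigr),\qquad \operatorname{Ant}(P)=\{A\subseteq P:\ A\ \text{a nonempty antichain}\}.$$

Since every antichain of a wpo is finite, $(\operatorname{Ant}(P),\subsetneq)$ is well-founded and the ordinal rank exists; under this rendering, for example, every nonempty well-ordering has width $1$, as its only nonempty antichains are singletons.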

We show that the vast majority of extensions of the description logic $\mathcal{EL}$ enjoy neither the Craig interpolation property nor the projective Beth definability property. This is the case, for example, for $\mathcal{EL}$ with nominals, $\mathcal{EL}$ with the universal role, $\mathcal{EL}$ with a role inclusion of the form $r\circ s\sqsubseteq s$, and for $\mathcal{ELI}$. It follows in particular that the existence of an explicit definition of a concept or individual name cannot be reduced to subsumption checking via implicit definability. We show that, nevertheless, the existence of interpolants and explicit definitions can be decided in polynomial time for standard tractable extensions of $\mathcal{EL}$ (such as $\mathcal{EL}^{++}$) and in ExpTime for $\mathcal{ELI}$ and various extensions. It follows that these existence problems are no harder than subsumption, in sharp contrast to the situation for expressive DLs. We also obtain tight bounds on the size of interpolants and explicit definitions and on the complexity of computing them: single exponential for tractable standard extensions of $\mathcal{EL}$ and double exponential for $\mathcal{ELI}$ and extensions. We close with a discussion of Horn-DLs such as Horn-$\mathcal{ALCI}$.
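
As a toy illustration of explicit definability (our example, not taken from the paper): under the TBox $\mathcal{T}=\{A\sqsubseteq \exists r.B,\ \exists r.B\sqsubseteq A\}$ we have

$$\mathcal{T}\models A\equiv \exists r.B,$$

so the concept name $A$ has the explicit definition $\exists r.B$ over the signature $\Sigma=\{r,B\}$. The existence problems studied above ask whether such a $\Sigma$-definition exists at all; the failure of projective Beth definability means that implicit definability over $\Sigma$ no longer guarantees it.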

We show that some natural problems that are XNLP-hard (which implies W[t]-hardness for all t) when parameterized by pathwidth or treewidth become FPT when parameterized by stable gonality, a novel graph parameter based on optimal maps from graphs to trees. The problems we consider are classical flow and orientation problems, such as Undirected Flow with Lower Bounds (which is strongly NP-complete, as shown by Itai), Minimum Maximum Outdegree (for which W[1]-hardness for treewidth was proven by Szeider), and capacitated optimization problems such as Capacitated (Red-Blue) Dominating Set (for which W[1]-hardness was proven by Dom, Lokshtanov, Saurabh and Villanger). Our hardness proofs, which strengthen existing results, use reductions from a recent XNLP-complete problem (Accepting Non-deterministic Checking Counter Machine). The new parameterized algorithms use a novel notion of weighted tree partition with an associated parameter that we call treebreadth, inspired by Seese's notion of tree-partite graphs, as well as techniques from dynamic programming and integer linear programming.
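
To make one of these problems concrete: in Minimum Maximum Outdegree, every edge must be oriented so that the largest outdegree is as small as possible. Below is a brute-force Python sketch of the unweighted variant (ours, purely illustrative; the paper's contribution is an FPT algorithm parameterized by stable gonality, not exhaustive search).

```python
from itertools import product

def min_max_outdegree(edges, n):
    """Try all 2^|E| orientations and return the smallest achievable
    maximum outdegree (unweighted, exponential-time illustration)."""
    best = n  # any orientation has maximum outdegree at most n - 1
    for orientation in product((0, 1), repeat=len(edges)):
        outdeg = [0] * n
        for (u, v), flip in zip(edges, orientation):
            outdeg[v if flip else u] += 1
        best = min(best, max(outdeg))
    return best

# A 4-cycle admits a cyclic orientation, giving maximum outdegree 1.
print(min_max_outdegree([(0, 1), (1, 2), (2, 3), (3, 0)], 4))  # 1
```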

Ultrafinitism postulates that we can only compute on relatively short objects, and that numbers beyond a certain value are not available. This approach would also forbid many forms of infinitary reasoning and would allow one to remove certain paradoxes stemming from enumeration theorems. However, philosophers still disagree about whether such a finitist logic would be consistent. We present preliminary work on a proof system based on the Curry-Howard isomorphism. We also present some well-known theorems that cease to hold in such systems, while their opposite statements become provable. This approach casts certain impossibility results as logical paradoxes stemming from a profligate use of transfinite reasoning.
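
As a toy illustration of the ultrafinitist restriction (ours, in Python rather than the paper's Curry-Howard-style proof system, with an arbitrary horizon): arithmetic becomes partial, and values beyond the horizon simply do not exist.

```python
MAX = 2**16  # an arbitrary ultrafinitist horizon (illustrative)

def succ(n):
    """Successor that is partial: numbers beyond MAX are 'not available'."""
    return n + 1 if n < MAX else None

def add(a, b):
    """Addition by iterated successor; fails once the horizon is hit."""
    for _ in range(b):
        if a is None:
            return None
        a = succ(a)
    return a

print(add(3, 4))        # 7
print(add(MAX - 1, 2))  # None: this sum does not exist in the system
```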

We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded-degree graphs under a vertex-percolation subcriticality condition. We show that this subcriticality condition is optimal in the sense that the problem of (approximately) sampling weighted rooted graphlets becomes impossible for infinite graphs and intractable for finite graphs if the condition does not hold. We apply our rooted graphlet sampling algorithm as a subroutine to obtain fast perfect sampling algorithms for two widely studied yet very different problems: polymer models and weighted non-rooted graphlets in finite graphs. We apply the polymer model algorithm to give improved sampling algorithms for spin systems at low temperatures on expander graphs and other structured families of graphs: under the least restrictive conditions known, we give near-linear-time algorithms, while previous algorithms in these regimes required large polynomial running times.
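
To illustrate the percolation connection (our sketch; the actual algorithm handles general weights and establishes optimality of the subcriticality threshold): keeping each vertex independently with probability $p$ and reading off the root's component yields a random rooted graphlet $S$ with probability $p^{|S|}(1-p)^{|\partial S|}$, where $\partial S$ is the outer vertex boundary of $S$.

```python
import random

def percolation_graphlet(adj, root, p, rng=random.Random(0)):
    """Vertex percolation rooted at `root`: keep each vertex with
    probability p and return the root's connected component among
    kept vertices (empty if the root itself is discarded)."""
    if rng.random() >= p:
        return set()
    cluster, frontier, decided = {root}, [root], {root: True}
    while frontier:
        v = frontier.pop()
        for u in adj[v]:
            if u not in decided:
                decided[u] = rng.random() < p
                if decided[u]:
                    cluster.add(u)
                    frontier.append(u)
    return cluster

# Path graph 0 - 1 - 2 rooted at 0.
print(percolation_graphlet({0: [1], 1: [0, 2], 2: [1]}, 0, 0.5))
```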

Multi-material problems often exhibit complex geometries along with physical responses with large spatial gradients or discontinuities. In these cases, providing high-quality body-fitted finite element analysis meshes and obtaining accurate solutions remain challenging. Immersed boundary techniques provide elegant solutions for such problems. Enrichment methods alleviate the need for generating conforming analysis grids by capturing discontinuities within mesh elements. Additionally, increased accuracy of physical responses and geometry description can be achieved with higher-order approximation bases. In particular, using B-splines has become popular with the development of IsoGeometric Analysis. In this work, an eXtended IsoGeometric Analysis (XIGA) approach is proposed for multi-material problems. The computational domain geometry is described implicitly by level set functions. A novel generalized Heaviside enrichment strategy is employed to accommodate an arbitrary number of materials without artificially stiffening the physical response. Higher-order B-spline functions are used for both geometry representation and analysis. Boundary and interface conditions are enforced weakly via Nitsche's method, and a new face-oriented ghost stabilization methodology is used to mitigate numerical instabilities arising from small material integration subdomains. Two- and three-dimensional heat transfer and elasticity problems are solved to validate the approach. Numerical studies provide insight into the ability to handle multiple materials with sharp-edged and curved interfaces, as well as the impact of higher-order bases and stabilization on the solution accuracy and conditioning.
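
A minimal sketch (ours; the paper couples this with B-spline bases, enrichment, and Nitsche-type weak boundary conditions, none of which are reproduced here) of how level set functions implicitly partition a domain into material subdomains: with $n$ level sets, the sign pattern at a point selects one of up to $2^n$ materials.

```python
import numpy as np

def material_index(phis, points):
    """Map each point to a material index via the signs of the level
    set functions: sign pattern (s_0, ..., s_{n-1}) -> sum s_i * 2^i.
    A common multi-material level set convention; illustrative only."""
    signs = np.stack([phi(points) > 0 for phi in phis], axis=-1)
    return signs.dot(1 << np.arange(len(phis)))

# Two level sets on the unit square: a circle and a vertical plane.
circle = lambda p: 0.3 - np.linalg.norm(p - 0.5, axis=-1)  # > 0 inside
plane = lambda p: p[..., 0] - 0.5                          # > 0 right half
pts = np.random.default_rng(0).random((5, 2))
print(material_index([circle, plane], pts))  # indices in {0, 1, 2, 3}
```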

We study semantic security for classical-quantum channels. Our security functions are functional forms of mosaics of combinatorial designs. We extend methods for classical channels to classical-quantum channels to demonstrate that mosaics of designs ensure semantic security for classical-quantum channels and also yield capacity-achieving coding schemes. The legitimate channel users share an additional public resource, more precisely, a seed chosen uniformly at random. An advantage of these modular wiretap codes is that we provide explicit code constructions that can be implemented in practice for every channel, given an arbitrary public code.

We investigate the complexity of explicit construction problems, where the goal is to produce a particular object of size $n$ possessing some pseudorandom property in time polynomial in $n$. We give overwhelming evidence that $\bf{APEPP}$, defined originally by Kleinberg et al., is the natural complexity class associated with explicit constructions of objects whose existence follows from the probabilistic method, by placing a variety of such construction problems in this class. We then demonstrate that a result of Je\v{r}\'{a}bek on provability in Bounded Arithmetic, when reinterpreted as a reduction between search problems, shows that constructing a truth table of high circuit complexity is complete for $\bf{APEPP}$ under $\bf{P}^{\bf{NP}}$ reductions. This illustrates that Shannon's classical proof of the existence of hard boolean functions is in fact a $\textit{universal}$ probabilistic existence argument: derandomizing his proof implies a generic derandomization of the probabilistic method. As a corollary, we prove that $\bf{EXP}^{\bf{NP}}$ contains a language of circuit complexity $2^{n^{\Omega(1)}}$ if and only if it contains a language of circuit complexity $\frac{2^n}{2n}$. Finally, for several of the problems shown to lie in $\bf{APEPP}$, we demonstrate direct polynomial time reductions to the explicit construction of hard truth tables.
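
For intuition, Shannon's counting argument runs as follows (our rendering, up to lower-order terms and for sufficiently large $n$):

$$\#\bigl\{\text{circuits with } s \text{ gates on } n \text{ inputs}\bigr\}\;\le\; 2^{(s+n)(\log(s+n)+O(1))}\;<\;2^{2^n}\;=\;\#\bigl\{f:\{0,1\}^n\to\{0,1\}\bigr\}\quad\text{for } s=\tfrac{2^n}{2n},$$

since the exponent on the left is then roughly $2^n/2 \ll 2^n$; hence some function on $n$ bits has no circuit of size $\frac{2^n}{2n}$. The completeness result above says that derandomizing exactly this argument is as hard as derandomizing the probabilistic method in general.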

We introduce Monte-Carlo Attention (MCA), a randomized approximation method for reducing the computational cost of self-attention mechanisms in Transformer architectures. MCA exploits the fact that the importance of each token in an input sequence varies with its attention score; thus, some degree of error can be tolerated when encoding tokens with low attention. Using approximate matrix multiplication, MCA applies different error bounds when encoding input tokens, so that those with low attention scores are computed with relaxed precision, whereas errors in salient elements are minimized. MCA can operate in parallel with other attention optimization schemes and does not require model modification. We study the theoretical error bounds and demonstrate that MCA reduces attention complexity (in FLOPS) for various Transformer models by up to 11$\times$ on GLUE benchmarks without compromising model accuracy.
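
One way to realize this idea (our sketch, not the authors' implementation; the sampling scheme is the classical Monte Carlo approximate matrix multiplication of Drineas, Kannan and Mahoney, and the saliency rule, sample sizes, and all names here are illustrative assumptions):

```python
import numpy as np

def amm(A, B, k, rng):
    """Monte Carlo approximate matrix product A @ B: sample k
    column/row pairs with probability proportional to their norm
    products and rescale to keep the estimate unbiased."""
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    return (A[:, idx] / (k * p[idx])) @ B[idx, :]

def mca_attention(Q, K, V, k_low=8, k_high=64, rng=np.random.default_rng(0)):
    """Toy Monte-Carlo Attention: tokens whose attention rows are
    low-weight get a coarser (smaller-sample) approximate product
    with V; salient tokens get a finer one. Illustrative only."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    salient = attn.max(axis=1) > np.median(attn.max(axis=1))
    out = np.empty((Q.shape[0], V.shape[1]))
    for i in range(Q.shape[0]):
        k = k_high if salient[i] else k_low
        out[i] = amm(attn[i:i+1], V, k, rng)[0]
    return out

Q = K = V = np.random.default_rng(1).normal(size=(16, 32))
print(mca_attention(Q, K, V).shape)  # (16, 32)
```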
