
At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently at IWOCA 2019, Conte, Kanté, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay for a weaker notion of degeneracy, or even parameterized by the maximum degree $\Delta$, i.e., whether it can be turned into an algorithm with delay $f(\Delta)\cdot n^{O(1)}$ for some computable function $f$. As a first step toward answering that question, they noted that they could not achieve these time bounds even for the particular case of enumerating minimal dominating sets. In this paper, using ordered generation, we show that an FPT-delay algorithm can be devised for minimal transversals enumeration parameterized by the degeneracy and dimension, giving a positive and more general answer to the latter question.
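To make the enumerated objects concrete, here is a hedged brute-force Python sketch (exponential time, and in no way the ordered-generation algorithm above): it lists the minimal transversals of a hypergraph, i.e., the inclusion-minimal vertex sets that hit every hyperedge.

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Naively enumerate all minimal transversals (minimal hitting sets)
    of a hypergraph; for illustration only."""
    def hits_all(s):
        return all(s & e for e in edges)
    found = []
    for k in range(len(vertices) + 1):
        for cand in combinations(sorted(vertices), k):
            s = set(cand)
            # minimal iff s hits every edge but no proper subset does,
            # i.e., every vertex of s is critical for some edge
            if hits_all(s) and all(not hits_all(s - {v}) for v in s):
                found.append(s)
    return found

edges = [{1, 2}, {2, 3}, {3, 4}]
print(minimal_transversals({1, 2, 3, 4}, edges))
# -> [{1, 3}, {2, 3}, {2, 4}]
```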

Related content

Typical but not exclusive topics of STOC papers include foundational areas such as algorithms and data structures, computational complexity, parallel and distributed algorithms, quantum computing, continuous and discrete optimization, randomness in computing, approximation algorithms, combinatorics and algorithmic graph theory, cryptography, computational geometry, algebraic computation, computational applications of logic, and algorithmic coding theory. Typical topics also include computational and foundational aspects of areas such as machine learning, economics, fairness, privacy, networks, data management, and biology. STOC encourages papers that broaden the scope of theory-of-computing research or raise important problems that can benefit from theoretical investigation and analysis.
June 2, 2024

In this article, we propose a new classification of $\Sigma^0_2$ formulas under the realizability interpretation of many-one reducibility (i.e., Levin reducibility). For example, ${\sf Fin}$, the decision of being eventually zero for sequences, is many-one/Levin complete among $\Sigma^0_2$ formulas of the form $\exists n\forall m\geq n.\varphi(m,x)$, where $\varphi$ is decidable. The decisions of boundedness for sequences, ${\sf BddSeq}$, and of boundedness of width for posets, ${\sf FinWidth}$, are many-one/Levin complete among $\Sigma^0_2$ formulas of the form $\exists n\forall m\geq n\forall k.\varphi(m,k,x)$, where $\varphi$ is decidable. However, unlike the case of classical many-one reducibility, none of the above is $\Sigma^0_2$-complete. The decision of non-density of a linear order, ${\sf NonDense}$, is truly $\Sigma^0_2$-complete.
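For quick reference, the three syntactic levels can be set side by side; this display is our summary of the abstract (the general $\Sigma^0_2$ normal form in the last line is standard, added for contrast):

```latex
% \varphi is decidable in each line.
\begin{align*}
  {\sf Fin}\text{-level:}                    &\quad \exists n\,\forall m\geq n.\ \varphi(m,x)\\
  {\sf BddSeq},{\sf FinWidth}\text{-level:}  &\quad \exists n\,\forall m\geq n\,\forall k.\ \varphi(m,k,x)\\
  \text{full }\Sigma^0_2\ ({\sf NonDense}):  &\quad \exists n\,\forall m.\ \varphi(n,m,x)
\end{align*}
```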

We study an interacting particle method (IPM) for computing the large deviation rate function of entropy production for diffusion processes, with emphasis on the vanishing-noise limit and high dimensions. The crucial ingredient to obtain the rate function is the computation of the principal eigenvalue $\lambda$ of elliptic, non-self-adjoint operators. We show that this principal eigenvalue can be approximated in terms of the spectral radius of a discretized evolution operator obtained from an operator splitting scheme and an Euler--Maruyama scheme with a small time step size, and we show that this spectral radius can be accessed through a large number of iterations of this discretized semigroup, suitable for the IPM. The IPM applies naturally to problems in unbounded domains, scales easily to high dimensions, and adapts to singular behaviors in the vanishing-noise limit. We show numerical examples in dimensions up to 16. The numerical results show that our numerical approximation of $\lambda$ converges to the analytical vanishing-noise limit within visual tolerance with a fixed number of particles and a fixed time step size. Our paper appears to be the first one to obtain numerical results of principal eigenvalue problems for non-self-adjoint operators in such high dimensions.
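The following one-dimensional Python sketch is our illustration of the generic particle estimator of a principal eigenvalue, not the paper's scheme; the drift $b$, potential $V$, and all parameters are made up. Particles take Euler--Maruyama steps, are reweighted by Feynman--Kac factors $e^{V\,dt}$ (an operator splitting of "generator plus multiplication by $V$"), are resampled, and the eigenvalue is read off as the long-time growth rate of the mean weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def ipm_principal_eigenvalue(b, V, x0=0.0, n_particles=2000,
                             dt=1e-2, n_steps=2000, sigma=1.0):
    x = np.full(n_particles, x0, dtype=float)
    log_growth = 0.0
    for _ in range(n_steps):
        # Euler-Maruyama step for dX = b(X) dt + sigma dW
        x = x + b(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        # Feynman-Kac weights from the potential (splitting step)
        w = np.exp(V(x) * dt)
        log_growth += np.log(w.mean())
        # multinomial resampling proportional to the weights
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]
    return log_growth / (n_steps * dt)

# toy run: OU drift, quadratic potential (all choices are illustrative)
print(ipm_principal_eigenvalue(b=lambda x: -x, V=lambda x: -0.5 * x**2))
```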

Optimization constrained by high-fidelity computational models has potential for transformative impact. However, such optimization is frequently unattainable in practice due to the complexity and computational intensity of the model. An alternative is to optimize a low-fidelity model and use limited evaluations of the high-fidelity model to assess the quality of the solution. This article develops a framework that uses limited high-fidelity simulations to update the optimization solution computed with the low-fidelity model. Building on a previous article [22], which introduced hyper-differential sensitivity analysis with respect to model discrepancy, this article provides novel extensions of the algorithm to enable uncertainty quantification of the optimal solution update via a Bayesian framework. Specifically, we formulate a Bayesian inverse problem to estimate the model discrepancy and propagate the posterior model discrepancy distribution through the post-optimality sensitivity operator for the low-fidelity optimization problem. We provide a rigorous treatment of the Bayesian formulation, a computationally efficient algorithm to compute posterior samples, a guide to specifying and interpreting the algorithm hyper-parameters, and a demonstration of the approach on three examples that highlight various types of discrepancy between low- and high-fidelity models.
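A hedged Python sketch of the propagation step described above (our illustration; the function names, the Gaussian posterior, and the explicit matrix $D$ are all made up for the example): samples of the model discrepancy drawn from its posterior are pushed through a linear post-optimality sensitivity operator to yield samples of the updated optimal solution.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate_discrepancy_posterior(z_lo, D, mu_post, Sigma_post, n_samples=1000):
    """Draw discrepancy samples delta ~ N(mu_post, Sigma_post) and map
    each to an updated solution z = z_lo + D @ delta."""
    L = np.linalg.cholesky(Sigma_post)
    deltas = mu_post + rng.standard_normal((n_samples, len(mu_post))) @ L.T
    return z_lo + deltas @ D.T

# toy dimensions: 3 optimization variables, 2 discrepancy parameters
z_lo = np.array([1.0, 0.0, -1.0])                      # low-fidelity optimum
D = np.array([[0.5, 0.0], [0.1, 0.2], [0.0, -0.3]])    # sensitivity operator
samples = propagate_discrepancy_posterior(z_lo, D, np.zeros(2), 0.01 * np.eye(2))
print(samples.mean(axis=0), samples.std(axis=0))       # posterior of the update
```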

We present algorithms and a C code to reveal quantum contextuality and evaluate the contextuality degree (a way to quantify contextuality) for a variety of point-line geometries located in binary symplectic polar spaces of small rank. With this code we were not only able to recover, in a more efficient way, all the results of a recent paper by de Boutray et al. [(2022). Journal of Physics A: Mathematical and Theoretical 55 475301], but also arrived at a number of noteworthy new results. The paper first describes the algorithms and the C code. It then illustrates their power on a number of subspaces of symplectic polar spaces whose rank ranges from 2 to 7. The most interesting new results include: (i) non-contextuality of configurations whose contexts are subspaces of dimension 2 and higher, (ii) non-existence of negative subspaces of dimension 3 and higher, (iii) considerably improved bounds for the contextuality degree of both elliptic and hyperbolic quadrics for rank 4, as well as for a particular subgeometry of the three-qubit space whose contexts are the lines of this space, (iv) a proof of the non-contextuality of perpsets and, last but not least, (v) the contextual nature of a distinguished subgeometry of a multi-qubit doily, called a two-spread, and the computation of its contextuality degree. Finally, in the three-qubit polar space we correct and improve the contextuality degree of the full configuration and also describe finite geometric configurations formed by unsatisfiable/invalid constraints for both types of quadrics as well as for the geometry whose contexts are all 315 lines of the space.
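The authors' code is in C; the quantity itself can be restated in a few lines of Python (our hedged toy, exhaustive rather than efficient): each context carries a sign, a classical $\pm 1$ valuation of the points satisfies a context if the product over its points matches the sign, and the contextuality degree is the minimum number of violated contexts over all valuations (0 means non-contextual).

```python
from itertools import product

def context_product(vals, context):
    prod = 1
    for p in context:
        prod *= vals[p]
    return prod

def contextuality_degree(contexts, signs):
    """Brute force over all +/-1 valuations of the points."""
    n = 1 + max(p for c in contexts for p in c)
    best = len(contexts)
    for vals in product((1, -1), repeat=n):
        unsat = sum(1 for c, s in zip(contexts, signs)
                    if context_product(vals, c) != s)
        best = min(best, unsat)
    return best

# Mermin-Peres magic square: 9 points, 6 contexts, one negative column
contexts = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8)]
signs    = [1,          1,         1,         1,         1,        -1]
print(contextuality_degree(contexts, signs))  # -> 1 (contextual)
```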

We incorporate strong negation in the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we get strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
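To convey the flavor of such a definition by recursion on formulas, here is a hedged propositional Python sketch using the classic Nelson-style clauses; this is our illustration only, not the TCF definition, which additionally covers predicates and (co)inductive definitions.

```python
# Formulas as nested tuples: ("atom", p), ("natom", p), ("and", A, B),
# ("or", A, B), ("imp", A, B).
def strong_neg(f):
    tag = f[0]
    if tag == "atom":
        return ("natom", f[1])                  # strongly negated atom
    if tag == "natom":
        return ("atom", f[1])                   # double strong negation on atoms
    if tag == "and":
        return ("or", strong_neg(f[1]), strong_neg(f[2]))
    if tag == "or":
        return ("and", strong_neg(f[1]), strong_neg(f[2]))
    if tag == "imp":
        return ("and", f[1], strong_neg(f[2]))  # (A -> B)^N = A and B^N
    raise ValueError(f"unknown connective: {tag}")

print(strong_neg(("imp", ("atom", "A"), ("or", ("atom", "B"), ("atom", "C")))))
# -> ('and', ('atom', 'A'), ('and', ('natom', 'B'), ('natom', 'C')))
```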

This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm. For smooth initial values, two error estimates in general spatial $L^q$-norms are established.
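For intuition about the objects in play, here is a hedged one-dimensional Python sketch (our illustration; the paper treats the three-dimensional equation and proves estimates rather than running schemes): a finite-difference semidiscretization of an Allen-Cahn equation with multiplicative noise, advanced by a semi-implicit Euler-Maruyama step, with the discrete spatial $L^2$-norm printed at the final time.

```python
import numpy as np

rng = np.random.default_rng(2)

# du = (u_xx + u - u^3) dt + u dW on (0,1), homogeneous Dirichlet boundary
N, dt, T = 64, 1e-3, 0.5
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
u = np.sin(np.pi * x)                                   # smooth initial value

A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2              # Dirichlet Laplacian
lhs = np.eye(N) - dt * A                                # implicit in the stiff part

for _ in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(N)           # simplified noise increments
    u = np.linalg.solve(lhs, u + dt * (u - u**3) + u * dW)  # multiplicative term u dW

print(np.sqrt(h) * np.linalg.norm(u))                   # discrete spatial L^2 norm
```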

Improving a 2003 result of Bohman and Holzman, we show that for $n \geq 1$, the Shannon capacity of the complement of the $(2n+1)$-cycle is at least $(2^{r_n} + 1)^{1/r_n} = 2 + \Omega(2^{-r_n}/r_n)$, where $r_n = \exp(O((\log n)^2))$ is the number of partitions of $2(n-1)$ into powers of $2$. We also discuss a connection between this result and work by Day and Johnson in the context of graph Ramsey numbers.
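Since $r_n$ is purely combinatorial, the bound is easy to tabulate; here is a small Python sketch (our addition) computing the number of binary partitions and the resulting capacity lower bound. For $n = 2$ it reproduces the familiar value $\sqrt{5}$, the Shannon capacity of the (self-complementary) $5$-cycle.

```python
def binary_partitions(m):
    """Number of partitions of m into powers of 2 (OEIS A018819),
    via the standard coin-change recurrence."""
    ways = [1] + [0] * m
    p = 1
    while p <= m:
        for v in range(p, m + 1):
            ways[v] += ways[v - p]
        p *= 2
    return ways[m]

for n in range(1, 9):
    r = binary_partitions(2 * (n - 1))       # r_n from the abstract
    print(n, r, (2**r + 1) ** (1 / r))       # lower bound (2^r + 1)^(1/r)
```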

We study the problems of optimal recovery of univariate functions and their derivatives. To solve these problems, two variants of the truncation method are constructed that are order-optimal both in the sense of accuracy and in terms of the amount of Galerkin information involved. For numerical summation, we establish how the parameters characterizing the problem being solved affect the stability of the method.
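A hedged Python sketch of the idea behind a truncation method (our illustration, with made-up coefficients and noise level, not the paper's construction): recovering $f'$ from noisy Fourier coefficients by summing only the first $N$ modes, so that the truncation level balances the propagated data error against the truncation error.

```python
import numpy as np

rng = np.random.default_rng(3)

def truncated_derivative(c_noisy, N, x):
    """Differentiate only the first N modes of f(x) = sum_k c_k sin(kx);
    keeping all noisy modes is unstable, since differentiation
    multiplies the k-th coefficient by k."""
    k = np.arange(1, N + 1)
    return (k * c_noisy[:N]) @ np.cos(np.outer(k, x))

K = 100
c = 1.0 / np.arange(1, K + 1) ** 3                  # coefficients of a smooth f
delta = 1e-3                                        # noise level in the data
c_noisy = c + delta * rng.standard_normal(K)

x = np.linspace(0.0, 2.0 * np.pi, 200)
exact = sum((1.0 / k**2) * np.cos(k * x) for k in range(1, K + 1))
for N in (5, 10, 20, 80):                           # error dips, then noise wins
    err = np.max(np.abs(truncated_derivative(c_noisy, N, x) - exact))
    print(N, err)
```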

Given a finite set of matrices with integer entries, the matrix mortality problem asks whether there exists a product of these matrices equal to the zero matrix. We consider the special case of this problem in which all entries of the matrices are nonnegative. This case is equivalent to the NFA mortality problem, which, given an NFA, asks for a word $w$ such that the image of every state under $w$ is the empty set. The size of the alphabet of the NFA is then equal to the number of matrices in the set. We study the length of the shortest such words as a function of the alphabet size. We show that this length for an NFA with $n$ states can be at least $2^n - 1$, $2^{(n - 4)/2}$ and $2^{(n - 2)/3}$ if the size of the alphabet is, respectively, equal to $n$, three, and two.
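The NFA formulation makes the problem easy to state algorithmically; a hedged Python sketch (our illustration, exponential in $n$) finds a shortest mortal word by breadth-first search over images of the full state set. The BFS explores at most $2^n$ subsets, which is consistent with the exponential lower bounds above.

```python
from collections import deque

def shortest_mortal_word(n_states, transitions):
    """transitions maps each letter to a dict state -> set of successors.
    Returns a shortest word whose image of EVERY state is empty, or None."""
    start = frozenset(range(n_states))
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        s, w = queue.popleft()
        if not s:
            return w
        for a, delta in transitions.items():
            t = frozenset(q for p in s for q in delta.get(p, ()))
            if t not in seen:
                seen.add(t)
                queue.append((t, w + a))
    return None

# 3-state example over a binary alphabet
transitions = {
    "a": {0: {1}, 1: {2}},       # letter a sends state 2 to the empty set
    "b": {0: {0}, 2: {0, 1}},    # letter b sends state 1 to the empty set
}
print(shortest_mortal_word(3, transitions))  # -> "aaa"
```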

Both Morse theory and Lusternik-Schnirelmann theory link algebra, topology, and analysis in a geometric setting. The two theories can be formulated in finite geometries such as graph theory or within finite abstract simplicial complexes. We work here mostly in graph theory and review the Morse inequalities $b(k)-b(k-1)+\cdots+b(0) \leq c(k)-c(k-1)+\cdots+c(0)$ for the Betti numbers $b(k)$ and the minimal number $c(k)$ of Morse critical points of index $k$, and the Lusternik-Schnirelmann inequalities $\mathrm{cup}+1 \leq \mathrm{cat} \leq \mathrm{cri}$ between the algebraic cup length $\mathrm{cup}$, the topological category $\mathrm{cat}$, and the analytic quantity $\mathrm{cri}$ counting the minimal number of critical points of a function.
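A hedged Python sketch of the topological side of these inequalities, restricted to finite simple graphs viewed as one-dimensional complexes (our illustration): the Betti numbers $b(0)$ (connected components) and $b(1)$ (independent cycles, $|E|-|V|+b(0)$) that appear on the left-hand side of the Morse inequalities.

```python
def graph_betti(vertices, edges):
    """b0 = number of connected components (via union-find),
    b1 = |E| - |V| + b0 for a graph as a 1-dimensional complex."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    b0 = len({find(v) for v in vertices})
    b1 = len(edges) - len(vertices) + b0
    return b0, b1

# cycle C4: one component, one independent cycle, so b(0) = b(1) = 1,
# forcing at least one critical point of index 0 and one of index 1
print(graph_betti([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (1, 1)
```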
