
The $d$-independence number of a graph $G$ is the largest possible size of an independent set $I$ in $G$ where each vertex of $I$ has degree at least $d$ in $G$. Upper bounds for the $d$-independence number in planar graphs are well-known for $d=3,4,5$, and can in fact be matched with constructions that actually have minimum degree $d$. In this paper, we explore the same questions for 1-planar graphs, i.e., graphs that can be drawn in the plane with at most one crossing per edge. We give upper bounds for the $d$-independence number for all $d$. Then we give constructions that match the upper bound, and (for small $d$) also have minimum degree $d$.
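As a concrete illustration of the definition, the $d$-independence number can be computed by brute force (practical only for very small graphs); the following sketch first filters the vertices of degree at least $d$ and then searches for the largest independent subset among them:

```python
from itertools import combinations

def d_independence_number(n, edges, d):
    """Brute-force the d-independence number of a graph on vertices 0..n-1:
    the largest independent set whose members all have degree >= d in G."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # only vertices of degree >= d may enter the set
    eligible = [v for v in range(n) if len(adj[v]) >= d]
    for k in range(len(eligible), 0, -1):
        for subset in combinations(eligible, k):
            s = set(subset)
            if all(adj[u].isdisjoint(s) for u in subset):
                return k
    return 0

# 4-cycle: every vertex has degree 2, and {0, 2} is independent.
assert d_independence_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2) == 2
```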

Related Content

Computing the crossing number of a graph is one of the most classical problems in computational geometry. Both it and numerous variations of the problem have been studied, and overcoming their frequent computational difficulty is an active area of research. Particularly recently, there has been increased effort to show and understand the parameterized tractability of various crossing number variants. While many results in this direction use a similar approach, a general framework remains elusive. We suggest such a framework that generalizes important previous results, and can even be used to show the tractability of deciding crossing number variants for which this was stated as an open problem in previous literature. Our framework targets variants that prescribe a partial predrawing and some kind of topological restrictions on crossings. Additionally, to provide evidence that previous approaches for the partially predrawn crossing number problem cannot be generalized to allow for geometric restrictions, we show a new, more constrained hardness result for the partially predrawn rectilinear crossing number. In particular, we show W-hardness of deciding Straight-Line Planarity Extension parameterized by the number of missing edges.

We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem, known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem), is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial-time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm for the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs has bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently, that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial-time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature, since almost all of them concern classes of hypergraphs of bounded VC-dimension. As a consequence, the transversal hypergraph problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time on general hypergraphs, and in polynomial time if the conformality of the hypergraph is bounded, which gives one of the few known polynomial cases where the VC-dimension is unbounded.
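The objects being enumerated can be made concrete with a naive exponential-time enumerator; this is nothing like the incremental polynomial algorithm above, merely a direct transcription of what a minimal transversal is:

```python
from itertools import combinations

def minimal_transversals(vertices, hyperedges):
    """Enumerate all minimal transversals (minimal hitting sets) of a
    hypergraph by brute force -- exponential time, for illustration only.
    A set T is a transversal if it meets every hyperedge; it is minimal
    if removing any vertex of T breaks that property."""
    def hits_all(t):
        return all(t & e for e in hyperedges)
    result = []
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            t = set(cand)
            if hits_all(t) and all(not hits_all(t - {v}) for v in t):
                result.append(frozenset(t))
    return result

# The minimal transversals of a triangle's edge set are its 2-element
# vertex subsets.
H = [{1, 2}, {2, 3}, {1, 3}]
print(sorted(sorted(t) for t in minimal_transversals([1, 2, 3], H)))
# → [[1, 2], [1, 3], [2, 3]]
```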

A numerical algorithm (implemented in Matlab) for computing the zeros of the parabolic cylinder function $U(a,z)$ in domains of the complex plane is presented. The algorithm combines accurate asymptotic approximations to the first zero with a highly efficient fourth-order fixed-point method, in which the parabolic cylinder functions are computed by Taylor series with carefully selected steps, to compute the rest of the zeros. For $|a|$ small, the asymptotic approximations are complemented with a few fixed-point iterations requiring the evaluation of $U(a,z)$ and $U'(a,z)$ in the region where the complex zeros are located. Liouville-Green expansions are derived to enhance the performance of the computational scheme that evaluates $U(a,z)$ and $U'(a,z)$ in that region. Several tests demonstrate the accuracy and efficiency of the numerical algorithm.
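The flavour of such a scheme (Taylor-series evaluation of the Weber equation $U'' = (z^2/4 + a)U$ combined with iterative root refinement) can be sketched in a few lines. The sketch below uses plain Newton iteration rather than the fourth-order method described above, and is tested on $a=-5/2$, where $U(a,z)=e^{-z^2/4}(z^2-1)$ reduces to a Hermite-type closed form with a known zero at $z=1$:

```python
def taylor_step(a, z0, u, up, h, nterms=30):
    """Advance (U, U') for the Weber equation U'' = (z^2/4 + a) U from z0
    to z0 + h via a Taylor series; the coefficients obey the three-term
    recurrence (k+2)(k+1) u_{k+2} = c0 u_k + c1 u_{k-1} + c2 u_{k-2}
    coming from z^2/4 + a = c0 + c1 (z - z0) + c2 (z - z0)^2."""
    c0, c1, c2 = z0 * z0 / 4 + a, z0 / 2, 0.25
    coef = [u, up]
    for k in range(nterms):
        s = c0 * coef[k]
        if k >= 1: s += c1 * coef[k - 1]
        if k >= 2: s += c2 * coef[k - 2]
        coef.append(s / ((k + 2) * (k + 1)))
    val = sum(c * h**j for j, c in enumerate(coef))
    der = sum(j * c * h**(j - 1) for j, c in enumerate(coef) if j)
    return val, der

def newton_zero(a, z_init, u_init, up_init, z_guess, steps=40):
    """Newton iteration z <- z - U/U', with U and U' obtained by
    Taylor-stepping from a point where initial data are known."""
    z, u, up = z_init, u_init, up_init
    u, up = taylor_step(a, z, u, up, z_guess - z)   # march to the guess
    z = z_guess
    for _ in range(steps):
        dz = -u / up
        u, up = taylor_step(a, z, u, up, dz)
        z += dz
        if abs(dz) < 1e-14:
            break
    return z

# a = -5/2: U(a, z) = exp(-z^2/4) (z^2 - 1), so U(0) = -1, U'(0) = 0,
# and there is a zero at z = 1.
root = newton_zero(-2.5, 0.0, -1.0, 0.0, 0.5)
print(root)  # ≈ 1.0
```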

A convergent numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation is derived. The method is based on applying a tailor-made projection operator to the initial data, and then solving exactly using the generalized method of characteristics. The projection step is the only step that introduces any approximation error. It is therefore crucial that its design ensures not only a good approximation of the initial data, but also that errors due to the energy dissipation at later times remain small. Furthermore, it is shown that the main quantity of interest, the wave profile, converges in $L^{\infty}$ for all $t \geq 0$, while a subsequence of the energy density converges weakly for almost every time.

We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, the algorithm returns an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1 / \epsilon$. Our algorithms are based on the algorithms of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (\UGC), it is \NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is \NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the \UGC).
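The leading $1 - 1/\ell$ term in these ratios is already achievable by elementary means: a greedy colouring that gives each vertex the colour least used among its already-coloured neighbours properly colours at least a $(1-1/\ell)$-fraction of the edges, since each edge is at risk only when its second endpoint is coloured, and the least-used colour conflicts with at most a $1/\ell$-fraction of the coloured neighbours. The following sketch (not the SDP algorithm, just this baseline) illustrates the guarantee:

```python
def greedy_fractional_coloring(n, edges, ell):
    """Greedy ell-colouring of a graph on vertices 0..n-1 that properly
    colours at least a (1 - 1/ell)-fraction of the edges: each vertex
    takes the colour least used among its already-coloured neighbours."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = [None] * n
    for v in range(n):
        counts = [0] * ell
        for w in adj[v]:
            if colour[w] is not None:
                counts[colour[w]] += 1
        colour[v] = counts.index(min(counts))
    proper = sum(1 for u, v in edges if colour[u] != colour[v])
    return colour, proper / len(edges)

# Triangle with ell = 2: at most one of the three edges stays monochromatic,
# so the properly coloured fraction is 2/3 >= 1 - 1/2.
_, frac = greedy_fractional_coloring(3, [(0, 1), (1, 2), (0, 2)], 2)
print(frac)
```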

We propose and justify a matrix reduction method for calculating the optimal approximation of an observed matrix $A \in {\mathbb C}^{m \times n}$ by a sum $\sum_{i=1}^p \sum_{j=1}^q B_iX_{ij}C_j$ of matrix products where each $B_i \in {\mathbb C}^{m \times g_i}$ and $C_j \in {\mathbb C}^{h_j \times n}$ is known and where the unknown matrix kernels $X_{ij}$ are determined by minimizing the Frobenius norm of the error. The sum can be represented as a bounded linear mapping $BXC$ with unknown kernel $X$ from a prescribed subspace ${\mathcal T} \subseteq {\mathbb C}^n$ onto a prescribed subspace ${\mathcal S} \subseteq {\mathbb C}^m$ defined respectively by the collective domains and ranges of the given matrices $C_1,\ldots,C_q$ and $B_1,\ldots,B_p$. We show that the optimal kernel is $X = B^{\dag}AC^{\dag}$ and that the optimal approximation $BB^{\dag}AC^{\dag}C$ is the projection of the observed mapping $A$ onto a mapping from ${\mathcal T}$ to ${\mathcal S}$. If $A$ is large, then $B$ and $C$ may also be large, and direct calculation of $B^{\dag}$ and $C^{\dag}$ becomes unwieldy and inefficient. The proposed method avoids this difficulty by reducing the solution process to finding the pseudo-inverses of a collection of much smaller matrices. This significantly reduces the computational burden.
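For the single-block case $p = q = 1$, the optimal kernel $X = B^{\dag}AC^{\dag}$ can be checked numerically with NumPy; this is illustrative only, since the point of the reduction method is precisely to avoid forming $B^{\dag}$ and $C^{\dag}$ directly for large matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-block case (p = q = 1): minimize ||A - B X C||_F over X.
m, n, g, h = 8, 6, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, g))
C = rng.standard_normal((h, n))

X_opt = np.linalg.pinv(B) @ A @ np.linalg.pinv(C)   # X = B^+ A C^+
best = np.linalg.norm(A - B @ X_opt @ C)

# Optimality check: no perturbed kernel gives a smaller residual,
# consistent with B B^+ A C^+ C being a projection of A.
for _ in range(200):
    X = X_opt + 0.1 * rng.standard_normal((g, h))
    assert np.linalg.norm(A - B @ X @ C) >= best - 1e-10
print(round(best, 6))
```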

We study the finite element approximation of problems involving the weighted $p$-Laplacian for $p \in (1,\infty)$ and weights belonging to the Muckenhoupt class $A_1$. In particular, we consider an equation and an obstacle problem for the weighted $p$-Laplacian and derive error estimates in both cases. The analysis is based on the language of weighted Orlicz and Orlicz--Sobolev spaces.

This work introduces a novel approach to constructing DNA codes from linear codes over a non-chain extension of $\mathbb{Z}_4$. We study $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes over the ring $\mathfrak{R}=\mathbb{Z}_4+\omega\mathbb{Z}_4, \omega^2=\omega,$ with an $\mathfrak{R}$-automorphism $\text{\textbaro}$ and a $\text{\textbaro}$-derivation $\mathfrak{d}$ over $\mathfrak{R}.$ Further, we determine the generators of the $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes over the ring $\mathfrak{R}$ of any arbitrary length and establish the reverse constraint for these codes. Besides the necessary and sufficient criterion to derive reverse-complement codes, we present a construction to obtain DNA codes from these reversible codes. Moreover, we use another construction on the $(\text{\textbaro},\mathfrak{d},\gamma)$-constacyclic codes to generate additional optimal and new classical codes. Finally, we provide several examples of $(\text{\textbaro},\mathfrak{d}, \gamma)$-constacyclic codes and construct DNA codes from established results. Several of the resulting linear codes over $\mathbb{Z}_4$ have parameters that improve upon, or are optimal with respect to, the codes available at \cite{z4codes}.
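The reverse-complement constraint on the DNA side can be illustrated directly on strings over $\{A,C,G,T\}$ with the Watson-Crick pairing $A \leftrightarrow T$, $C \leftrightarrow G$; the particular identification of codewords over $\mathbb{Z}_4+\omega\mathbb{Z}_4$ with DNA strings used in the construction is not reproduced here:

```python
# Watson-Crick complementation on the DNA alphabet.
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(word):
    return "".join(COMP[b] for b in reversed(word))

def is_reverse_complement_closed(code):
    """The reverse-complement constraint: the reverse complement of
    every codeword must again be a codeword."""
    words = set(code)
    return all(reverse_complement(w) in words for w in words)

def min_hamming_distance(code):
    words = sorted(set(code))
    return min(sum(a != b for a, b in zip(u, v))
               for i, u in enumerate(words) for v in words[i + 1:])

# Both words below are their own reverse complements, so the code is
# reverse-complement closed.
code = ["ACGT", "AATT"]
print(is_reverse_complement_closed(code), min_hamming_distance(code))
```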

We consider two-dimensional $(\lambda_1, \lambda_2)$-constacyclic codes over $\mathbb{F}_{q}$ of area $M N$, where $q$ is a power of a prime $p$ with $\gcd(M,p)=1$ and $\gcd(N,p)=1$. With the help of the common zero (CZ) set, we characterize 2-D constacyclic codes. Further, we provide an algorithm to construct an ideal basis for these codes using their essential common zero (ECZ) sets. We describe the duals of 2-D constacyclic codes. Finally, we provide an encoding scheme for generating 2-D constacyclic codes. We present an example to illustrate that 2-D constacyclic codes can have better minimum distance compared to their cyclic counterparts with the same code size and code rate.
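To fix ideas, a 2-D $(\lambda_1,\lambda_2)$-constacyclic code is a linear code of $M \times N$ arrays closed under a $\lambda_1$-constacyclic shift of the rows and a $\lambda_2$-constacyclic shift within each row; applying either shift for a full period multiplies the array by the corresponding constant. A minimal sketch of these shift operators over a prime field, with hand-picked illustrative parameters:

```python
def row_constashift(word, lam, q):
    """lambda_1-constacyclic shift along the row index: the last row
    wraps to the front multiplied by lam (arithmetic modulo a prime q)."""
    wrapped = [(lam * x) % q for x in word[-1]]
    return [wrapped] + [row[:] for row in word[:-1]]

def col_constashift(word, lam, q):
    """lambda_2-constacyclic shift along the column index, rowwise."""
    return [[(lam * row[-1]) % q] + row[:-1] for row in word]

w = [[1, 2, 3], [4, 0, 1]]   # an M x N = 2 x 3 array over F_5
q, lam1, lam2 = 5, 3, 2

# M row shifts multiply the array by lambda_1 ...
s = w
for _ in range(len(w)):
    s = row_constashift(s, lam1, q)
assert s == [[(lam1 * x) % q for x in row] for row in w]

# ... and N column shifts multiply it by lambda_2.
t = w
for _ in range(len(w[0])):
    t = col_constashift(t, lam2, q)
assert t == [[(lam2 * x) % q for x in row] for row in w]
```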

We propose a $C^0$ interior penalty method for the fourth-order stream function formulation of the surface Stokes problem. The scheme utilizes continuous, piecewise polynomial spaces defined on an approximate surface. We show that the resulting discretization is positive definite and derive error estimates in various norms in terms of the polynomial degree of the finite element space as well as the polynomial degree used to define the geometry approximation. A notable feature of the scheme is that it does not explicitly depend on the Gauss curvature of the surface. This is achieved via a novel integration-by-parts formula for the surface biharmonic operator.
