
An edge $e$ of a graph $G$ is called deletable for some orientation $o$ if the restriction of $o$ to $G-e$ is a strong orientation. In 2021, H\"orsch and Szigeti proposed a new parameter for $3$-edge-connected graphs, called the Frank number, which refines $k$-edge-connectivity. The Frank number is defined as the minimum number of orientations of $G$ for which every edge of $G$ is deletable in at least one of them. They showed that every $3$-edge-connected graph has Frank number at most $7$ and that, if these graphs are additionally $3$-edge-colourable, the parameter is at most $3$. Here we strengthen both results by showing that every $3$-edge-connected graph has Frank number at most $4$ and that every $3$-edge-connected, $3$-edge-colourable graph has Frank number $2$. The latter also confirms a conjecture of Bar\'at and Bl\'azsik. Furthermore, we prove two sufficient conditions for cubic graphs to have Frank number $2$ and use them in an algorithm to show computationally that the Petersen graph is the only cyclically $4$-edge-connected cubic graph on up to $36$ vertices with Frank number greater than $2$.
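To make the definitions concrete, here is a small brute-force check (our own toy example, not from the paper): two hand-picked orientations of $K_4$ in which every edge is deletable in at least one, certifying Frank number at most $2$ for $K_4$, consistent with the result above since $K_4$ is $3$-edge-connected and $3$-edge-colourable.

```python
from collections import defaultdict

def is_strong(n, arcs):
    """Strong connectivity of a digraph on vertices 0..n-1:
    vertex 0 must reach everything and be reached by everything."""
    def reach(adj):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    fwd, bwd = defaultdict(list), defaultdict(list)
    for u, v in arcs:
        fwd[u].append(v)
        bwd[v].append(u)
    return len(reach(fwd)) == n and len(reach(bwd)) == n

def deletable(n, arcs):
    """Underlying edges whose arc can be removed leaving a strong orientation."""
    return {frozenset(a) for a in arcs
            if is_strong(n, [b for b in arcs if b != a])}

# Two orientations of K4: a directed Hamiltonian cycle plus chords, and a
# second orientation chosen so its deletable edges cover the rest.
o1 = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
o2 = [(1, 0), (3, 0), (3, 2), (1, 3), (2, 1), (0, 2)]
edges = {frozenset(e) for e in o1}
covered = deletable(4, o1) | deletable(4, o2)
```

Here `covered` equals the full edge set of $K_4$: the first orientation handles $\{0,2\}$, $\{1,3\}$, $\{1,2\}$ and the second handles $\{0,1\}$, $\{0,3\}$, $\{2,3\}$.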

Related content

Let $G$ be a multigraph and $L\,:\,E(G) \to 2^\mathbb{N}$ be a list assignment on the edges of $G$. Suppose additionally that, for every vertex $x$, the edges incident to $x$ have at least $f(x)$ colors in common. We consider a variant of local edge-colorings wherein the color received by an edge $e$ must be contained in $L(e)$. The locality appears in the function $f$, i.e., $f(x)$ is some function of the local structure of $x$ in $G$. Such a notion is a natural generalization of traditional local edge-coloring. Our main results include sufficient conditions on the function $f$ to construct such colorings. As corollaries, we obtain local analogs of Vizing's and Shannon's theorems, recovering a recent result of Conley, Greb\'ik and Pikhurko.
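For contrast with such Vizing-type bounds, a toy illustration (ours, not the paper's construction): when every list has size at least $2\Delta - 1$, a naive greedy pass always succeeds, since an edge shares an endpoint with at most $2\Delta - 2$ other edges. The results above are far stronger, working with lists of roughly local-degree size.

```python
def greedy_list_edge_coloring(edges, lists):
    """Color edges one by one from their own lists, avoiding colors already
    used on edges sharing an endpoint. Succeeds whenever each list is larger
    than the number of edges adjacent to that edge."""
    color = {}
    for e in edges:
        used = {color[f] for f in color if set(e) & set(f)}
        free = [c for c in lists[e] if c not in used]
        if not free:
            return None
        color[e] = free[0]
    return color

# K4 has maximum degree 3, so lists of size 2*3 - 1 = 5 always suffice greedily.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
coloring = greedy_list_edge_coloring(k4_edges, {e: range(5) for e in k4_edges})
```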

We study the problem of maximizing a non-negative monotone $k$-submodular function $f$ under a knapsack constraint, where a $k$-submodular function is a natural generalization of a submodular function to $k$ dimensions. We present a deterministic $(\frac12-\frac{1}{2e})\approx 0.316$-approximation algorithm that evaluates $f$ $O(n^4k^3)$ times, based on the result of Sviridenko (2004) on submodular knapsack maximization.

This paper concerns an expansion of first-order Belnap-Dunn logic which is called $\mathrm{BD}^{\supset,\mathsf{F}}$. Its connectives and quantifiers are all familiar from classical logic and its logical consequence relation is very closely connected to that of classical logic. Results that convey this close connection are established. Fifteen classical laws of logical equivalence are used to distinguish $\mathrm{BD}^{\supset,\mathsf{F}}$ from all other four-valued logics with the same connectives and quantifiers whose logical consequence relation is as closely connected to that of classical logic. It is shown that several interesting non-classical connectives added to Belnap-Dunn logic in its expansions that have been studied earlier are definable in $\mathrm{BD}^{\supset,\mathsf{F}}$. It is also established that $\mathrm{BD}^{\supset,\mathsf{F}}$ is both paraconsistent and paracomplete. Moreover, a sequent calculus proof system that is sound and complete with respect to the logical consequence relation of $\mathrm{BD}^{\supset,\mathsf{F}}$ is presented.
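The four truth values of Belnap-Dunn logic can be represented as subsets of $\{\mathrm{t},\mathrm{f}\}$. The sketch below implements the standard BD connectives this way; the implication $\supset$ shown is the commonly used "classical-like" one (the consequent's value when the antecedent is designated, else true), which is our assumption, since the abstract does not spell out the truth tables of $\mathrm{BD}^{\supset,\mathsf{F}}$.

```python
# Belnap-Dunn values as sets of classical values: T={t}, F={f}, B={t,f}, N={}.
T, F = frozenset({'t'}), frozenset({'f'})
B, N = frozenset({'t', 'f'}), frozenset()

def neg(a):
    """Negation swaps truth and falsity; B and N are fixed points."""
    return frozenset({'t' if x == 'f' else 'f' for x in a})

def conj(a, b):
    """True iff both are true; false iff either is false."""
    out = set()
    if 't' in a and 't' in b:
        out.add('t')
    if 'f' in a or 'f' in b:
        out.add('f')
    return frozenset(out)

def disj(a, b):
    """Dual of conjunction."""
    out = set()
    if 't' in a or 't' in b:
        out.add('t')
    if 'f' in a and 'f' in b:
        out.add('f')
    return frozenset(out)

def imp(a, b):  # ASSUMED classical-like implication, not taken from the paper
    return b if 't' in a else T

def designated(a):
    """A formula holds when its value contains t."""
    return 't' in a
```

Paraconsistency in miniature: when $v(p) = \mathsf{B}$, both $p$ and $\neg p$ are designated, yet an unrelated formula with value $\mathsf{F}$ is not, so a contradiction does not entail everything.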

Binary codes of length $n$ may be viewed as subsets of vertices of the Boolean hypercube $\{0,1\}^n$. The ability of a linear error-correcting code to recover erasures is connected to influences of particular monotone Boolean functions. These functions provide insight into the role that particular coordinates play in a code's erasure repair capability. In this paper, we consider directly the influences of coordinates of a code. We describe a family of codes, called codes with minimum disjoint support, for which all influences may be determined. As a consequence, we find influences of repetition codes and certain distinct weight codes. Computing influences is typically circumvented by appealing to the transitivity of the automorphism group of the code. Some of the codes considered here fail to meet the transitivity conditions required for these standard approaches, yet we can compute their influences directly.
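To make the notion concrete, here is a toy computation under one common convention (which may differ in details from the paper's setup): treat "the erasure pattern is correctable" as a Boolean function of the erased set, and define the influence of coordinate $i$ as the fraction of patterns on the remaining coordinates for which additionally erasing $i$ flips correctability. For the length-$3$ repetition code every coordinate then has influence $1/4$: erasing $i$ matters exactly when both other coordinates are already erased.

```python
from itertools import combinations

def correctable(codewords, erased):
    """An erasure pattern is correctable iff no two distinct codewords
    agree on every surviving coordinate."""
    n = len(codewords[0])
    keep = [i for i in range(n) if i not in erased]
    seen = set()
    for c in codewords:
        proj = tuple(c[i] for i in keep)
        if proj in seen:
            return False
        seen.add(proj)
    return True

def influence(codewords, i):
    """Fraction of erasure sets S avoiding i for which erasing i flips
    correctability (one common convention for influence)."""
    n = len(codewords[0])
    others = [j for j in range(n) if j != i]
    flips = total = 0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            total += 1
            if correctable(codewords, set(S)) != correctable(codewords, set(S) | {i}):
                flips += 1
    return flips / total

rep3 = ["000", "111"]  # length-3 repetition code
```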

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$ or if $n\geq 3$, the domain is the unit ball, and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values for $S_h^n$ obtained computationally.

In this paper, we derive a variant of Taylor's theorem that yields a new, minimized remainder. For a given function $f$ defined on the interval $[a,b]$, this formula is derived by introducing a linear combination of $f'$ computed at $n+1$ equally spaced points in $[a,b]$, together with $f''(a)$ and $f''(b)$. We then consider two classical applications of this Taylor-like expansion: the interpolation error and the numerical quadrature formula. We show that this approach improves both the Lagrange $P_2$-interpolation error estimate and the error bound of Simpson's rule in numerical integration.
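For orientation, a numeric check of the classical bound that this work refines (our own sanity check, not the paper's new remainder): the non-composite Simpson rule on $[a,b]$ has error at most $\frac{(b-a)^5}{2880}\max_{[a,b]}|f^{(4)}|$.

```python
import math

def simpson(f, a, b):
    """Classical (non-composite) Simpson rule on [a, b]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# f = exp on [0, 1]: exact integral is e - 1, and max |f''''| = e.
err = abs(simpson(math.exp, 0.0, 1.0) - (math.e - 1))
bound = 1.0 ** 5 / 2880 * math.e
```

On this example the actual error (about $5.8\times 10^{-4}$) sits comfortably under the classical bound (about $9.4\times 10^{-4}$); the paper's expansion tightens bounds of this kind.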

We show that spectral data of the Koopman operator arising from an analytic expanding circle map $\tau$ can be effectively calculated using an EDMD-type algorithm combining a collocation method of order $m$ with a Galerkin method of order $n$. The main result is that if $m \geq \delta n$, where $\delta$ is an explicitly given positive number quantifying by how much $\tau$ expands concentric annuli containing the unit circle, then the method converges and approximates the spectrum of the Koopman operator, taken to be acting on a space of analytic hyperfunctions, exponentially fast in $n$. Additionally, these results extend to more general expansive maps on suitable annuli containing the unit circle.

The signed double Roman domination problem is a combinatorial optimization problem on a graph asking to assign a label from $\{-1,1,2,3\}$ to each vertex feasibly, such that the total sum of assigned labels is minimized. Here feasibility is given whenever (i) vertices labeled $-1$ or $1$ have at least one neighbor with label in $\{2,3\}$; (ii) each vertex labeled $-1$ has one $3$-labeled neighbor or at least two $2$-labeled neighbors; and (iii) the sum of labels over the closed neighborhood of any vertex is positive. The cumulative weight of an optimal labeling is called the signed double Roman domination number (SDRDN). In this work, we first consider the problem on general cubic graphs of order $n$, for which we present a sharp $n/2+\Theta(1)$ lower bound on the SDRDN by means of the discharging method. Moreover, we derive a new best upper bound. Observing that the SDRDN is often minimized over the class of cubic graphs of a fixed order by generalized Petersen graphs, we then study these graphs, of independent interest, for which we propose a constraint programming guided proof. We then use these insights to determine the SDRDNs of subcubic $2\times m$ grid graphs, among other results.
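Conditions (i)-(iii) translate directly into a feasibility check, and for tiny graphs the SDRDN can be brute-forced (an illustrative sketch, not the paper's method). On $K_4$, for instance, the optimum is $2$, achieved by the labeling $(-1,-1,3,1)$.

```python
from itertools import product

def is_sdrdf(adj, lab):
    """Check conditions (i)-(iii) for a labeling lab: vertex -> {-1,1,2,3}."""
    for v, nbrs in adj.items():
        if lab[v] in (-1, 1) and not any(lab[u] in (2, 3) for u in nbrs):
            return False  # (i)
        if lab[v] == -1 and not (any(lab[u] == 3 for u in nbrs)
                                 or sum(lab[u] == 2 for u in nbrs) >= 2):
            return False  # (ii)
        if lab[v] + sum(lab[u] for u in nbrs) <= 0:
            return False  # (iii): closed-neighborhood sum must be positive
    return True

def sdrdn(adj):
    """Exhaustive minimum over all labelings (exponential; tiny graphs only)."""
    vs = sorted(adj)
    return min(sum(lab) for lab in product((-1, 1, 2, 3), repeat=len(vs))
               if is_sdrdf(adj, dict(zip(vs, lab))))

k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
```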

Multivariate histograms are difficult to construct due to the curse of dimensionality. Motivated by $k$-d trees in computer science, we show how to construct an efficient data-adaptive partition of Euclidean space that possesses the following two properties: With high confidence the distribution from which the data are generated is close to uniform on each rectangle of the partition; and despite the data-dependent construction we can give guaranteed finite sample simultaneous confidence intervals for the probabilities (and hence for the average densities) of each rectangle in the partition. This partition will automatically adapt to the sizes of the regions where the distribution is close to uniform. The methodology produces confidence intervals whose widths depend only on the probability content of the rectangles and not on the dimensionality of the space, thus avoiding the curse of dimensionality. Moreover, the widths essentially match the optimal widths in the univariate setting. The simultaneous validity of the confidence intervals allows one to use this construction, which we call {\sl Beta-trees}, for various data-analytic purposes. We illustrate this by using Beta-trees for visualizing data and for multivariate mode-hunting.
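The space-partitioning step can be sketched with a plain $k$-d-tree median split (a simplified stand-in of ours; the actual Beta-tree construction and its confidence statements involve more care):

```python
def kd_partition(points, depth=0, min_size=4):
    """Recursively split the point set at the coordinate-wise median,
    cycling through the axes, until each cell holds at most min_size
    points. Returns the list of cells (each a list of points)."""
    n = len(points)
    if n <= min_size:
        return [points]
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = n // 2
    return (kd_partition(pts[:mid], depth + 1, min_size)
            + kd_partition(pts[mid:], depth + 1, min_size))

# Deterministic 2-D toy data.
data = [((3 * i) % 17, (5 * i) % 13) for i in range(32)]
cells = kd_partition(data)
```

Each cell's point count (rather than its geometric size) is what drives the interval widths in the construction above, which is how the dimension drops out.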

Let $\Sigma$ be an alphabet. For two strings $X$ and $Y$ and a constrained string $P$ over the alphabet $\Sigma$, the constrained longest common subsequence and substring problem is to find a longest string $Z$ which is a subsequence of $X$, a substring of $Y$, and has $P$ as a subsequence. In this paper, we propose an algorithm for this problem.
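Since any solution $Z$ must be a substring of $Y$, a brute-force baseline (a naive reference implementation, not the proposed algorithm) simply enumerates the $O(|Y|^2)$ substrings of $Y$ and keeps the longest qualifying one:

```python
def is_subseq(s, t):
    """Is s a subsequence of t? (consuming-iterator idiom)"""
    it = iter(t)
    return all(c in it for c in s)

def clcs_substring(X, Y, P):
    """Longest Z that is a subsequence of X, a substring of Y,
    and has P as a subsequence. Brute force over substrings of Y."""
    best = ""
    for i in range(len(Y)):
        for j in range(i + 1, len(Y) + 1):
            Z = Y[i:j]
            if len(Z) > len(best) and is_subseq(Z, X) and is_subseq(P, Z):
                best = Z
    return best
```

For example, with $X=\texttt{abcab}$, $Y=\texttt{cabd}$ and $P=\texttt{ab}$, the answer is $\texttt{cab}$: it is contiguous in $Y$, a subsequence of $X$, and contains $\texttt{ab}$ as a subsequence.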
