
A walk $u_0u_1 \ldots u_{k-1}u_k$ is a \textit{weakly toll walk} if $u_0u_i \in E(G)$ implies $u_i = u_1$ and $u_ju_k\in E(G)$ implies $u_j=u_{k-1}$. A set $S$ of vertices of $G$ is {\it weakly toll convex} if, for any two non-adjacent vertices $x,y \in S$, any vertex in a weakly toll walk between $x$ and $y$ is also in $S$. The {\em weakly toll convexity} is the graph convexity space defined over the weakly toll convex sets. Many studies are devoted to determining whether a graph equipped with a convexity space is a {\em convex geometry}. An \emph{extreme vertex} is an element $x$ of a convex set $S$ such that the set $S\setminus\{x\}$ is also convex. A graph convexity space is said to be a convex geometry if it satisfies the Minkowski-Krein-Milman property, which states that every convex set is the convex hull of its extreme vertices. It is known that chordal, Ptolemaic, weakly polarizable, and interval graphs can be characterized as convex geometries with respect to the monophonic, geodesic, $m^3$, and toll convexities, respectively. Other important classes of graphs can also be characterized in this way. In this paper, we prove that a graph is a convex geometry with respect to the weakly toll convexity if and only if it is a proper interval graph. Furthermore, some well-known graph invariants are studied with respect to the weakly toll convexity.
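
To make the first definition concrete, the following is a minimal sketch in Python (the adjacency-dictionary encoding and the example walk are hypothetical illustrations, not taken from the paper) that checks the weakly toll walk condition stated above.

\begin{verbatim}
# Minimal sketch: test the weakly toll walk condition from the definition above.
# The graph encoding (adjacency dict) and the example walk are hypothetical.

def is_weakly_toll_walk(adj, walk):
    """walk = [u_0, u_1, ..., u_k]; the only walk vertex adjacent to u_0 must
    be u_1, and the only walk vertex adjacent to u_k must be u_{k-1}."""
    if len(walk) < 2:
        return False
    u0, uk = walk[0], walk[-1]
    for i, ui in enumerate(walk):
        if i > 0 and walk[i - 1] not in adj[ui]:
            return False            # consecutive vertices must be adjacent
        if ui in adj[u0] and ui != walk[1]:
            return False            # u_0 u_i in E(G) forces u_i = u_1
        if ui in adj[uk] and ui != walk[-2]:
            return False            # u_j u_k in E(G) forces u_j = u_{k-1}
    return True

# Example: on the path a-b-c-d, the walk a, b, c, d is a weakly toll walk.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(is_weakly_toll_walk(adj, ["a", "b", "c", "d"]))   # True
\end{verbatim}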

Related content

We introduce $\varepsilon$-approximate versions of the notion of Euclidean vector bundle for $\varepsilon \geq 0$, which recover the classical notion of Euclidean vector bundle when $\varepsilon = 0$. In particular, we study \v{C}ech cochains with coefficients in the orthogonal group that satisfy an approximate cocycle condition. We show that $\varepsilon$-approximate vector bundles can be used to represent classical vector bundles when $\varepsilon > 0$ is sufficiently small. We also introduce distances between approximate vector bundles and use them to prove that sufficiently similar approximate vector bundles represent the same classical vector bundle. This gives a way of specifying vector bundles over finite simplicial complexes using a finite amount of data, and also allows for some tolerance to noise when working with vector bundles in an applied setting. As an example, we prove a reconstruction theorem for vector bundles from finite samples. We give algorithms for the effective computation of low-dimensional characteristic classes of vector bundles directly from discrete and approximate representations and illustrate the usage of these algorithms with computational examples.
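
As a rough illustration of the central definition (stated here only as a sketch; the precise norm and indexing conventions are the paper's choice, not fixed by the abstract), a \v{C}ech $1$-cochain assigning an orthogonal matrix $\Omega_{ij}\in O(d)$ to every ordered pair of overlapping cover elements $U_i,U_j$ may be required to satisfy
\[
  \bigl\|\Omega_{ij}\,\Omega_{jk}-\Omega_{ik}\bigr\| \;\le\; \varepsilon
  \qquad\text{whenever } U_i\cap U_j\cap U_k\neq\emptyset,
\]
so that $\varepsilon=0$ recovers the classical cocycle condition $\Omega_{ij}\Omega_{jk}=\Omega_{ik}$ of an ordinary Euclidean vector bundle.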

A proper $k$-coloring of a graph $G$ is a \emph{neighbor-locating $k$-coloring} if for each pair of vertices in the same color class, the two sets of colors found in their respective neighborhoods are different. The \textit{neighbor-locating chromatic number} $\chi_{NL}(G)$ is the minimum $k$ for which $G$ admits a neighbor-locating $k$-coloring. A proper $k$-vertex-coloring of a graph $G$ is a \emph{locating $k$-coloring} if for each pair of vertices $x$ and $y$ in the same color class, there exists a color class $S_i$ such that $d(x,S_i)\neq d(y,S_i)$. The locating chromatic number $\chi_{L}(G)$ is the minimum $k$ for which $G$ admits a locating $k$-coloring. Our main results concern the largest possible order of a sparse graph of given neighbor-locating chromatic number. More precisely, we prove that if $G$ has order $n$, neighbor-locating chromatic number $k$ and average degree at most $2a$, where $2a\le k-1$ is a positive integer, then $n$ is upper-bounded by $\mathcal{O}(a^2k^{2a+1})$. We also design a family of graphs of bounded maximum degree whose order comes close to this upper bound. Our upper bound generalizes two previous bounds from the literature, which were obtained for graphs of bounded maximum degree and graphs of bounded cycle rank, respectively. We also prove that deciding whether $\chi_L(G)\le k$ and whether $\chi_{NL}(G)\le k$ are NP-complete problems for sparse graphs: more precisely, for $4$-partite graphs with average degree at most 7 and maximum average degree at most 20. Finally, we study the possible relations between the ordinary chromatic number, the locating chromatic number and the neighbor-locating chromatic number of a graph.
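
As a concrete companion to the two definitions above, here is a minimal Python sketch (graph and coloring are hypothetical examples) that verifies whether a given proper coloring is neighbor-locating.

\begin{verbatim}
# Minimal sketch: check the neighbor-locating condition of a proper coloring.
from itertools import combinations

def is_neighbor_locating(adj, color):
    # properness: adjacent vertices receive distinct colors
    for u in adj:
        if any(color[v] == color[u] for v in adj[u]):
            return False
    # neighbor-locating condition: vertices in the same color class must
    # see different sets of colors in their neighborhoods
    for u, v in combinations(adj, 2):
        if color[u] == color[v]:
            if {color[w] for w in adj[u]} == {color[w] for w in adj[v]}:
                return False
    return True

# Example: the 4-cycle a-b-c-d-a colored 1,2,1,3; the vertices a and c share
# color 1 and both see the color set {2,3}, so this coloring is not
# neighbor-locating.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
color = {"a": 1, "b": 2, "c": 1, "d": 3}
print(is_neighbor_locating(adj, color))   # False
\end{verbatim}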

Input-output conformance simulation (iocos) has been proposed by Gregorio-Rodr\'iguez, Llana and Mart\'inez-Torres as a simulation-based behavioural preorder underlying model-based testing. This relation is inspired by Tretmans' classic ioco relation, but has better worst-case complexity than ioco and supports stepwise refinement. The goal of this paper is to develop the theory of iocos by studying logical characterisations of this relation, rule formats for it and its compositionality. More specifically, this article presents characterisations of iocos in terms of modal logics and compares them with an existing logical characterisation for ioco proposed by Beohar and Mousavi. It also offers a characteristic-formula construction for iocos over finite processes in an extension of the proposed modal logics with greatest fixed points. A precongruence rule format for iocos and a rule format ensuring that operations take quiescence properly into account are also given. Both rule formats are based on the GSOS format by Bloom, Istrail and Meyer. The general modal decomposition methodology of Fokkink and van Glabbeek is used to show how to check the satisfaction of properties expressed in the logic for iocos in a compositional way for operations specified by rules in the precongruence rule format for iocos.

A frame $(x_j)_{j\in J}$ for a Hilbert space $H$ allows for a linear and stable reconstruction of any vector $x\in H$ from the linear measurements $(\langle x,x_j\rangle)_{j\in J}$. However, there are many situations where some information in the frame coefficients is lost. In applications where one is using sensors with a fixed dynamic range, any measurement above that range is registered as the maximum, and any measurement below that range is registered as the minimum. Depending on the context, recovering a vector from such measurements is called either declipping or saturation recovery. We initiate a frame-theoretic approach to saturation recovery in a similar way to what [BCE06] did for phase retrieval. We characterize when saturation recovery is possible, show that optimal frames for use with saturation recovery correspond to minimal multi-fold packings in projective space, and prove that the classical frame algorithm may be adapted to this non-linear problem to provide a reconstruction algorithm.
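
The measurement model described above is easy to state in code; the following Python sketch (frame, vector and dynamic range are hypothetical choices, and no claim is made about the paper's reconstruction method) produces saturated frame coefficients by clipping at a sensor range $[-\lambda,\lambda]$.

\begin{verbatim}
# Minimal sketch of saturated (clipped) frame measurements.
import numpy as np

def saturated_measurements(x, frame, lam):
    """Clip the frame coefficients <x, x_j> to the dynamic range [-lam, lam];
    rows of `frame` are the frame vectors x_j."""
    coeffs = frame @ x                 # linear measurements <x, x_j>
    return np.clip(coeffs, -lam, lam)  # values outside the range saturate

rng = np.random.default_rng(0)
frame = rng.standard_normal((6, 3))    # 6 random frame vectors in R^3
x = rng.standard_normal(3)
print(saturated_measurements(x, frame, lam=0.5))
\end{verbatim}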

A roadmap for an algebraic set $V$ defined by polynomials with coefficients in some real field, say $\mathbb{R}$, is an algebraic curve contained in $V$ whose intersection with all connected components of $V\cap\mathbb{R}^{n}$ is connected. These objects, introduced by Canny, can be used to answer connectivity queries over $V\cap \mathbb{R}^{n}$ provided that they are required to contain the finite set of query points $\mathcal{P}\subset V$; in this case, we say that the roadmap is associated to $(V, \mathcal{P})$. In this paper, we make effective a connectivity result we previously proved, to design a Monte Carlo algorithm which, on input (i) a finite sequence of polynomials defining $V$ (and satisfying some regularity assumptions) and (ii) an algebraic representation of finitely many query points $\mathcal{P}$ in $V$, computes a roadmap for $(V, \mathcal{P})$. This algorithm generalizes the nearly optimal one introduced by the last two authors by dropping a boundedness assumption on the real trace of $V$. The output size and running times of our algorithm are both polynomial in $(nD)^{n\log d}$, where $D$ is the maximal degree of the input equations and $d$ is the dimension of $V$. As far as we know, the best previously known algorithm dealing with such sets has an output size and running time polynomial in $(nD)^{n\log^2 n}$.

Given a graph $G$ and two independent sets of $G$, the independent set reconfiguration problem asks whether one independent set can be transformed into the other by moving a single vertex at a time, such that at each intermediate step we have an independent set of $G$. We study the complexity of this problem for $H$-free graphs under the token sliding and token jumping rules. Our contribution is twofold. First, we prove a reconfiguration analogue of Alekseev's theorem, showing that the problem is PSPACE-complete unless $H$ is a path or a subdivision of the claw. Second, we show that under the token sliding rule, the problem admits a polynomial-time algorithm if the input graph is fork-free.
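
For small instances, the reconfiguration question under the token jumping rule can be decided by brute force; the following Python sketch (the path graph and the two independent sets are hypothetical examples) performs a breadth-first search over independent sets, where one move removes a vertex and inserts any other vertex while keeping independence.

\begin{verbatim}
# Minimal sketch: token jumping reconfiguration by BFS (tiny graphs only).
from collections import deque

def is_independent(adj, s):
    return all(v not in adj[u] for u in s for v in s)

def tj_reachable(adj, start, target):
    start, target = frozenset(start), frozenset(target)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return True
        for u in cur:              # token to move
            for v in adj:          # candidate new position
                if v in cur:
                    continue
                nxt = (cur - {u}) | {v}
                if is_independent(adj, nxt) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Example: on the path a-b-c-d, {a, c} can be transformed into {b, d}.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(tj_reachable(adj, {"a", "c"}, {"b", "d"}))   # True
\end{verbatim}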

We study the problem of identifying a small set of $k\sim n^\theta$, $0<\theta<1$, infected individuals within a large population of size $n$ by testing groups of individuals simultaneously. All tests are conducted concurrently. The goal is to minimise the total number of tests required. In this paper we make the (realistic) assumption that tests are noisy, i.e.\ a group that contains an infected individual may return a negative test result, and a group that contains no infected individual may return a positive test result, each with a certain probability. The noise need not be symmetric. We develop an algorithm called SPARC that correctly identifies the set of infected individuals up to $o(k)$ errors with high probability, using the asymptotically minimum number of tests. Additionally, we develop an algorithm called SPEX that exactly identifies the set of infected individuals w.h.p. with a number of tests that matches the information-theoretic lower bound for the constant column design, a powerful and well-studied test design.
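
To illustrate the setup (not the SPARC or SPEX algorithms themselves), the following Python sketch simulates noisy non-adaptive group testing with a random constant-column design, i.e.\ every individual is placed in the same number of tests; all parameters and noise levels below are hypothetical.

\begin{verbatim}
# Minimal sketch: simulate noisy, non-adaptive group tests.
import numpy as np

rng = np.random.default_rng(1)
n, k, m, delta = 1000, 30, 200, 10    # population, infected, tests, tests per individual
p_fn, p_fp = 0.05, 0.05               # false-negative / false-positive probabilities

infected = rng.choice(n, size=k, replace=False)
design = np.zeros((m, n), dtype=bool)  # design[t, i]: individual i is in test t
for i in range(n):
    design[rng.choice(m, size=delta, replace=False), i] = True

truth = design[:, infected].any(axis=1)                        # noiseless outcomes
flip = np.where(truth, rng.random(m) < p_fn, rng.random(m) < p_fp)
results = truth ^ flip                                         # noisy outcomes

print(results.sum(), "of", m, "tests returned a (noisy) positive result")
\end{verbatim}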

We give an approximate Menger-type theorem for when a graph $G$ contains two $X-Y$ paths $P_1$ and $P_2$ such that $P_1 \cup P_2$ is an induced subgraph of $G$. More generally, we prove that there exists a function $f(d) \in O(d)$, such that for every graph $G$ and $X,Y \subseteq V(G)$, either there exist two $X-Y$ paths $P_1$ and $P_2$ such that the distance between $P_1$ and $P_2$ is at least $d$, or there exists $v \in V(G)$ such that the ball of radius $f(d)$ centered at $v$ intersects every $X-Y$ path.

The joint bidiagonalization (JBD) process iteratively reduces a matrix pair $\{A,L\}$ to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of $\{A,L\}$. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of the JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process that keeps the orthogonality of part of the Lanczos vectors. An error analysis of the rJBD is carried out to build up connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of the rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of $(A^T,L^T)^T$, while the convergence rate is not affected very much. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments are made to confirm our theoretical results.
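
For orientation, one standard formulation of the GSVD of the pair $\{A,L\}$ (a sketch under the simplifying assumptions $A\in\mathbb{R}^{m\times n}$, $L\in\mathbb{R}^{p\times n}$ with $m,p\ge n$ and $(A^T,L^T)^T$ of full column rank; the paper's own conventions may differ) is
\[
  U^{T}AX=C=\operatorname{diag}(c_1,\dots,c_n),\qquad
  V^{T}LX=S=\operatorname{diag}(s_1,\dots,s_n),\qquad
  C^{T}C+S^{T}S=I_n,
\]
with $U,V$ orthogonal and $X$ nonsingular; the generalized singular values of $\{A,L\}$ are the ratios $c_i/s_i$.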

This paper proposes a novel technique for the approximation of strong solutions $u \in C(\overline{\Omega}) \cap W^{2,n}_\mathrm{loc}(\Omega)$ to uniformly elliptic linear PDEs of second order in nondivergence form with continuous leading coefficient in nonsmooth domains by finite element methods. These solutions satisfy the Alexandrov-Bakelman-Pucci (ABP) maximum principle, which provides a residual-based a~posteriori error control for $C^1$ conforming approximations. By minimizing this residual, we obtain an approximation to the solution $u$ in the $L^\infty$ norm. Although discontinuous functions do not satisfy the ABP maximum principle, this approach extends to nonconforming FEM as well thanks to well-established enrichment operators. Convergence of the proposed FEM is established for uniform mesh refinements. The built-in a~posteriori error control (even for inexact solve) can be utilized in adaptive computations for the approximation of singular solutions; in the numerical benchmarks, the adaptive algorithm performs superiorly to the uniform mesh-refining algorithm.
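
To indicate the type of a~posteriori control meant above (a sketch only; the precise constants and regularity assumptions are those of the paper), if $\mathcal{L}u=f$ and $v$ is a $C^1$ conforming finite element function, then the ABP maximum principle applied to $u-v$ yields an estimate of the form
\[
  \|u-v\|_{L^\infty(\Omega)}
  \;\le\;
  \|u-v\|_{L^\infty(\partial\Omega)}
  \;+\; C\,\|f-\mathcal{L}v\|_{L^{n}(\Omega)},
\]
so the computable residual $f-\mathcal{L}v$ controls the $L^\infty$ error and can be minimized over the discrete space.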
