A paired dominating set $P$ is a dominating set with the additional property that the subgraph induced by $P$ has a perfect matching. While the maximum cardinality of a minimal dominating set in a graph $G$ is called the upper domination number of $G$, denoted by $\Gamma(G)$, the maximum cardinality of a minimal paired dominating set in $G$ is called the upper paired domination number of $G$, denoted by $\Gamma_{pr}(G)$. By Henning and Pradhan (2019), we know that $\Gamma_{pr}(G)\leq 2\Gamma(G)$ for any graph $G$ without isolated vertices. We focus on the graphs satisfying the equality $\Gamma_{pr}(G)= 2\Gamma(G)$. We give characterizations for two special graph classes, bipartite graphs and unicyclic graphs with $\Gamma_{pr}(G)= 2\Gamma(G)$, by using the results of Ulatowski (2015). Besides, we study the graphs with $\Gamma_{pr}(G)= 2\Gamma(G)$ and restricted girth. In this context, we provide two characterizations: one for graphs with $\Gamma_{pr}(G)= 2\Gamma(G)$ and girth at least 6, and the other for $C_3$-free cactus graphs with $\Gamma_{pr}(G)= 2\Gamma(G)$. We also pose the characterization of the general case of $C_3$-free graphs with $\Gamma_{pr}(G)= 2\Gamma(G)$ as an open question.
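
As a hedged illustration (the brute-force approach and helper names below are mine, not the paper's), the two invariants can be computed exhaustively on small graphs; for the triangle $C_3$ one gets $\Gamma(C_3)=1$ and $\Gamma_{pr}(C_3)=2$, so the equality $\Gamma_{pr}=2\Gamma$ is attained:

```python
# Brute-force sketch: compute the upper domination number Gamma(G) and the
# upper paired domination number Gamma_pr(G) of a small graph G.
from itertools import combinations
import networkx as nx

def dominates(G, S):
    S = set(S)
    return all(v in S or S & set(G[v]) for v in G)

def is_paired_dominating(G, S):
    # Dominating, of even size, and the induced subgraph has a perfect matching.
    if len(S) % 2 or not dominates(G, S):
        return False
    return 2 * len(nx.max_weight_matching(G.subgraph(S))) == len(S)

def is_minimal(G, S, prop):
    # S has the property and no proper subset of S does.
    return prop(G, S) and not any(prop(G, T)
                                  for r in range(len(S))
                                  for T in combinations(S, r))

def upper_number(G, prop):
    return max(len(S) for r in range(1, G.number_of_nodes() + 1)
               for S in combinations(G.nodes, r) if is_minimal(G, S, prop))

G = nx.cycle_graph(3)                            # the triangle C_3
print(upper_number(G, dominates))                # Gamma(C_3)    = 1
print(upper_number(G, is_paired_dominating))     # Gamma_pr(C_3) = 2
```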


In this paper we prove upper and lower bounds on the minimal spherical dispersion. In particular, we see that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$, linear in the dimension $d$ up to logarithmic terms. We also derive upper and lower bounds on the expected dispersion for points chosen independently and uniformly at random from the Euclidean unit sphere.

The study of statistical estimation without distributional assumptions on data values, but with knowledge of data collection methods was recently introduced by Chen, Valiant and Valiant (NeurIPS 2020). In this framework, the goal is to design estimators that minimize the worst-case expected error. Here the expectation is over a known, randomized data collection process from some population, and the data values corresponding to each element of the population are assumed to be worst-case. Chen, Valiant and Valiant show that, when data values are $\ell_{\infty}$-normalized, there is a polynomial time algorithm to compute an estimator for the mean with worst-case expected error that is within a factor $\frac{\pi}{2}$ of the optimum within the natural class of semilinear estimators. However, their algorithm is based on optimizing a somewhat complex concave objective function over a constrained set of positive semidefinite matrices, and thus does not come with explicit runtime guarantees beyond being polynomial time in the input. In this paper we design provably efficient algorithms for approximating the optimal semilinear estimator based on online convex optimization. In the setting where data values are $\ell_{\infty}$-normalized, our algorithm achieves a $\frac{\pi}{2}$-approximation by iteratively solving a sequence of standard SDPs. When data values are $\ell_2$-normalized, our algorithm iteratively computes the top eigenvector of a sequence of matrices, and does not lose any multiplicative approximation factor. We complement these positive results by stating a simple combinatorial condition which, if satisfied by a data collection process, implies that any (not necessarily semilinear) estimator for the mean has constant worst-case expected error.
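
As a hedged aside (my own sketch of a standard primitive, not the paper's algorithm): the $\ell_2$-normalized case repeatedly extracts the top eigenvector of a matrix, which can be done, for instance, with plain power iteration; the sketch below shows that primitive for a single symmetric matrix.

```python
# Power-iteration sketch for the top-eigenvector primitive.
import numpy as np

def top_eigenvector(A, iters=200, seed=0):
    """Approximate the leading eigenvector of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

A = np.array([[3.0, 1.0], [1.0, 2.0]])
v = top_eigenvector(A)
print(v, float(v @ A @ v))   # Rayleigh quotient ~ largest eigenvalue of A
```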

In randomized trials, once the total effect of the intervention has been estimated, it is often of interest to explore mechanistic effects through mediators along the causal pathway between the randomized treatment and the outcome. In the setting with two sequential mediators, there are a variety of decompositions of the total risk difference into mediation effects. We derive sharp and valid bounds for a number of mediation effects in the setting of two sequential mediators, both of which are subject to unmeasured confounding with the outcome. We provide five such bounds in the main text, corresponding to two different decompositions of the total effect as well as the controlled direct effect, with an additional thirty novel bounds provided in the supplementary materials corresponding to the terms of twenty-four four-way decompositions. We also show that, although it may seem that one can produce sharp bounds by adding or subtracting the limits of the sharp bounds for terms in a decomposition, this almost always produces bounds that are valid but not sharp, and that can even be completely noninformative. We investigate the properties of the bounds by simulating random probability distributions under our causal model and illustrate how they are interpreted in a real data example.
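
As a hedged toy illustration of the last point (my own example, not from the paper): suppose two partially identified quantities satisfy
\[
A \in [-1,1], \qquad B \in [-1,1], \qquad B = -A,
\]
where each marginal interval is sharp for the corresponding quantity. Then $A+B=0$ exactly, yet termwise addition of the sharp bounds only yields $A+B \in [-2,2]$, a bound that is valid but far from sharp.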

Let $f$ be a nonnegative integer-valued function on the vertex set of a graph. A graph is \textbf{strictly $f$-degenerate} if each nonempty subgraph $\Gamma$ has a vertex $v$ such that $\mathrm{deg}_{\Gamma}(v) < f(v)$. In this paper, we define a new concept, the strictly $f$-degenerate transversal, which generalizes list coloring, signed coloring, DP-coloring, $L$-forested-coloring, and $(f_{1}, f_{2}, \dots, f_{s})$-partition. A \textbf{cover} of a graph $G$ is a graph $H$ with vertex set $V(H) = \bigcup_{v \in V(G)} X_{v}$, where $X_{v} = \{(v, 1), (v, 2), \dots, (v, s)\}$, and edge set $\mathscr{M} = \bigcup_{uv \in E(G)}\mathscr{M}_{uv}$, where $\mathscr{M}_{uv}$ is a matching between $X_{u}$ and $X_{v}$. A vertex set $R \subseteq V(H)$ is a \textbf{transversal} of $H$ if $|R \cap X_{v}| = 1$ for each $v \in V(G)$. A transversal $R$ is a \textbf{strictly $f$-degenerate transversal} if $H[R]$ is strictly $f$-degenerate. The main result of this paper is a degree-type result, which generalizes Brooks' theorem, Gallai's theorem, the degree-choosable result, the signed degree-colorable result, and the DP-degree-colorable result. We also give some structural results on critical graphs with respect to strictly $f$-degenerate transversals. Using these results, we can uniformly prove many new and known results. In the final section, we pose some open problems.
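
A small sketch of the first definition above (helper names are mine): a graph is strictly $f$-degenerate exactly when greedy deletion succeeds, i.e. repeatedly removing some vertex $v$ with current degree less than $f(v)$ can empty the graph; if the process gets stuck, the remaining subgraph witnesses the failure.

```python
# Greedy check for strict f-degeneracy.
def strictly_f_degenerate(adj, f):
    """adj: dict vertex -> set of neighbours; f: dict vertex -> nonneg int."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while adj and changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < f[v]:        # vertex of degree < f(v): remove it
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return not adj                        # emptied <=> strictly f-degenerate

# K_3 with f == 2 everywhere is not strictly 2-degenerate (all degrees equal 2),
# but it is strictly 3-degenerate.
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(strictly_f_degenerate(K3, {v: 2 for v in K3}))  # False
print(strictly_f_degenerate(K3, {v: 3 for v in K3}))  # True
```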

The modeling of dependence between maxima is an important subject in several applications in risk analysis. To this aim, the extreme value copula function, characterised via the madogram, can be used as a margin-free description of the dependence structure. From a practical point of view, the family of extreme value distributions is very rich and arises naturally as the limiting distribution of properly normalised component-wise maxima. In this paper, we investigate the nonparametric estimation of the madogram when data are missing completely at random. We establish a functional central limit theorem for the suitably normalised multivariate madogram, with a tight Gaussian process limit whose covariance function depends on the probabilities of missingness. An explicit formula for the asymptotic variance is also given. Our results are illustrated in a finite sample setting with a simulation study.
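
A minimal complete-data sketch (my own, using the common bivariate $F$-madogram $\nu = \tfrac12\,\mathbb{E}|F(X)-G(Y)|$ with empirical margins estimated by ranks; the paper's estimator additionally corrects for missingness, which this sketch does not attempt):

```python
# Rank-based empirical bivariate madogram on complete data.
import numpy as np

def empirical_madogram(x, y):
    """Estimate nu = 0.5 * E|F(X) - G(Y)| using rank-based margins."""
    n = len(x)
    fx = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    fy = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return 0.5 * np.mean(np.abs(fx - fy))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
print(empirical_madogram(x, x))                          # ~0 for comonotone data
print(empirical_madogram(x, rng.standard_normal(1000)))  # ~1/6 under independence
```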

In communication complexity the Arthur-Merlin (AM) model is the most natural one that allows both randomness and non-determinism. Presently we do not have any super-logarithmic lower bound for the AM-complexity of an explicit function. Obtaining such a bound is a fundamental challenge to our understanding of communication phenomena. In this article we explore the gap between the known techniques and the complexity class AM. In the first part we define a new natural class, Small-advantage Layered Arthur-Merlin (SLAM), that has the following properties:
- SLAM is (strictly) included in AM and includes all previously known subclasses of AM with non-trivial lower bounds.
- SLAM is qualitatively stronger than the union of those classes.
- SLAM is subject to the discrepancy bound: in particular, the inner product function does not have an efficient SLAM-protocol.
Structurally this can be summarised as SBP $\cup$ UAM $\subset$ SLAM $\subseteq$ AM $\cap$ PP. In the second part we ask why proving a lower bound of $\omega(\sqrt n)$ on the MA-complexity of an explicit function seems to be difficult. Both of these results are related to the notion of layer complexity, which is, informally, the number of "layers of non-determinism" used by a protocol.

The combinatorial diameter $\operatorname{diam}(P)$ of a polytope $P$ is the maximum shortest path distance between any pair of vertices. In this paper, we provide upper and lower bounds on the combinatorial diameter of a random "spherical" polytope, which are tight to within one factor of the dimension when the number of inequalities is large compared to the dimension. More precisely, for an $n$-dimensional polytope $P$ defined by the intersection of $m$ i.i.d.\ half-spaces whose normals are chosen uniformly from the sphere, we show that $\operatorname{diam}(P)$ is $\Omega(n m^{\frac{1}{n-1}})$ and $O(n^2 m^{\frac{1}{n-1}} + n^5 4^n)$ with high probability when $m \geq 2^{\Omega(n)}$. For the upper bound, we first prove that the number of vertices in any fixed two-dimensional projection sharply concentrates around its expectation when $m$ is large, where we rely on the $\Theta(n^2 m^{\frac{1}{n-1}})$ bound on the expectation due to Borgwardt [Math. Oper. Res., 1999]. To obtain the diameter upper bound, we stitch these ``shadow paths'' together over a suitable net, using worst-case diameter bounds to connect vertices to the nearest shadow. For the lower bound, we first reduce to lower bounding the diameter of the dual polytope $P^\circ$, corresponding to a random convex hull, by showing the relation $\operatorname{diam}(P) \geq (n-1)(\operatorname{diam}(P^\circ)-2)$. We then prove that the shortest path between any ``nearly'' antipodal pair of vertices of $P^\circ$ has length $\Omega(m^{\frac{1}{n-1}})$.
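
A quick experiment sketch (not from the paper): estimate the combinatorial diameter of the vertex-edge graph of the convex hull of $m$ i.i.d. uniform points on the unit sphere in $\mathbb{R}^3$ (a random convex hull of the kind the dual polytope $P^\circ$ corresponds to), and compare with the $m^{\frac{1}{n-1}} = \sqrt{m}$ scaling suggested by the bounds above.

```python
# Estimate the graph diameter of a random spherical convex hull in R^3.
from collections import defaultdict, deque
import numpy as np
from scipy.spatial import ConvexHull

def random_sphere_points(m, n=3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((m, n))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def hull_graph_diameter(points):
    hull = ConvexHull(points)
    adj = defaultdict(set)
    for a, b, c in hull.simplices:              # triangular facets in 3D
        for u, v in ((a, b), (b, c), (a, c)):
            adj[u].add(v)
            adj[v].add(u)
    def eccentricity(src):                      # BFS in the vertex-edge graph
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in list(adj))

for m in (100, 400, 1600):
    d = hull_graph_diameter(random_sphere_points(m))
    print(m, d, round(d / np.sqrt(m), 2))
```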

In this article, we address a class of nonconvex, integer, nonlinear mathematical programs using dynamic programming. The mathematical program considered, whose properties are studied in this article, may be used to model the optimal liquidation problem of a single-asset portfolio, held in a very large quantity, in a low-volatility and perfect-memory market with few market participants. In this context, the Portfolio Manager's selling actions convey information to market participants, who in turn lower bid prices and further penalize the liquidation proceeds we attempt to maximize. We show the problem can be solved exactly using Dynamic Programming (DP) in polynomial time. However, exact resolution is only efficient for small instances. For medium-size and large instances, we introduce dedicated heuristics which provide good admissible solutions, and hence tight lower bounds for the initial problem. We also benchmark them against the commercial solver LocalSolver [7]. We are also interested in the continuously relaxed problem, which is nonconvex. Firstly, we use continuous solutions, obtained with the free solver NLopt [26], and transform them into good admissible solutions of the discrete problem. Secondly, we provide, under some convexity assumptions, an upper bound for the continuous relaxation, and hence for the initial (integer) problem. Numerical experiments confirm the quality of the proposed heuristics (lower bounds), which often reach the optimum or prove very tight for small and medium-size instances, with very fast CPU time. Our upper bound, however, is not tight.
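
A toy sketch of the kind of exact DP alluded to above (the price-impact model here is invented for illustration and is not the paper's model): the state is (period, remaining inventory), decisions are integer quantities sold per period, selling permanently lowers subsequent bids, and selling fast within a period incurs an extra cost.

```python
# Exact DP for a toy integer liquidation problem.
from functools import lru_cache

def optimal_liquidation(Q=10, T=4, p0=100.0, alpha=2.0, beta=1.0):
    """Maximize proceeds of selling Q integer units over T periods."""
    @lru_cache(maxsize=None)
    def best(t, remaining):
        if t == T:
            return 0.0, ()                       # leftover inventory is worthless
        sold = Q - remaining
        best_val, best_plan = float("-inf"), ()
        for q in range(remaining + 1):           # integer decision this period
            price = max(p0 - alpha * sold, 0.0)  # permanent information impact
            proceeds = q * price - beta * q * q  # temporary impact of fast selling
            val, plan = best(t + 1, remaining - q)
            if proceeds + val > best_val:
                best_val, best_plan = proceeds + val, (q,) + plan
        return best_val, best_plan

    return best(0, Q)

print(optimal_liquidation())    # optimal value and per-period selling plan
```

The DP has $O(TQ)$ states and $O(TQ^2)$ total work, which is polynomial in the instance size when $Q$ is given in unary.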

We consider the interaction between a free flowing fluid and a porous medium flow, where the free flowing fluid is described using the time dependent Stokes equations, and the porous medium flow is described using Darcy's law in the primal formulation. To solve this problem numerically, we use the diffuse interface approach, where the weak form of the coupled problem is written on an extended domain which contains both Stokes and Darcy regions. This is achieved using a phase-field function which equals one in the Stokes region and zero in the Darcy region, and smoothly transitions between these two values on a diffuse region of width $\epsilon$ around the Stokes-Darcy interface. We prove the convergence of the diffuse interface formulation to the standard, sharp interface formulation, and derive the rates of convergence. This is performed by analyzing the modeling error of the diffuse interface approach at the continuous level, and by deriving the a priori error estimates for the diffuse interface method at the discrete level. The convergence rates are also derived computationally in a numerical example.
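
One standard way to realize such a phase-field function (a hedged example; the paper may use a different regularization) is to mollify the indicator of the Stokes region through the signed distance $d(x)$ to the interface, taken negative inside the Stokes region:
\[
\phi_\epsilon(x) \;=\; \frac{1}{2}\Big(1 - \tanh\frac{d(x)}{\epsilon}\Big),
\]
so that $\phi_\epsilon \approx 1$ in the Stokes region, $\phi_\epsilon \approx 0$ in the Darcy region, and the transition occurs in a layer of width $O(\epsilon)$ around the interface.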

Feature attribution is often loosely presented as the process of selecting a subset of relevant features as a rationale for a prediction. This lack of clarity stems from the fact that we usually do not have access to any notion of ground-truth attribution and from a more general debate on what good interpretations are. In this paper we propose to formalise feature selection/attribution based on the concept of relaxed functional dependence. In particular, we extend our notions to the instance-wise setting and derive necessary properties for candidate selection solutions, while leaving room for task-dependence. By computing ground-truth attributions on synthetic datasets, we evaluate many state-of-the-art attribution methods and show that, even when optimised, some fail to satisfy the proposed properties and provide wrong solutions.
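
A toy sketch of the underlying idea (assumptions mine, not the paper's formalism): on a synthetic dataset, check which feature subsets functionally determine the label, i.e. no two rows agree on the subset yet disagree on $y$. The paper works with a relaxed notion of functional dependence; this sketch handles only the exact case.

```python
# Find minimal feature subsets that exactly determine the label.
from itertools import combinations, product

# Synthetic data: y = x0 XOR x1, feature x2 is irrelevant.
rows = [((x0, x1, x2), x0 ^ x1) for x0, x1, x2 in product((0, 1), repeat=3)]

def determines(features, data):
    seen = {}
    for x, y in data:
        key = tuple(x[i] for i in features)
        if seen.setdefault(key, y) != y:
            return False
    return True

minimal = [S for r in range(4) for S in combinations(range(3), r)
           if determines(S, rows)
           and not any(determines(T, rows)
                       for rr in range(len(S)) for T in combinations(S, rr))]
print(minimal)   # [(0, 1)] -- the ground-truth relevant subset
```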
