
A walk $u_0u_1 \ldots u_{k-1}u_k$ is a \emph{weakly toll walk} if $u_0u_i \in E(G)$ implies $u_i = u_1$ and $u_ju_k\in E(G)$ implies $u_j=u_{k-1}$; that is, $u_1$ is the only vertex of the walk adjacent to $u_0$, and $u_{k-1}$ is the only vertex of the walk adjacent to $u_k$. A set $S$ of vertices of $G$ is \emph{weakly toll convex} if, for any two non-adjacent vertices $x,y \in S$, every vertex on a weakly toll walk between $x$ and $y$ is also in $S$. The \emph{weakly toll convexity} is the graph convexity space defined over the weakly toll convex sets. Many studies are devoted to determining whether a graph equipped with a convexity space is a \emph{convex geometry}. An \emph{extreme vertex} of a convex set $S$ is an element $x$ such that $S\setminus\{x\}$ is also convex. A graph convexity space is a convex geometry if it satisfies the Minkowski-Krein-Milman property: every convex set is the convex hull of its extreme vertices. It is known that chordal, Ptolemaic, weakly polarizable, and interval graphs can be characterized as convex geometries with respect to the monophonic, geodesic, $m^3$, and toll convexities, respectively. Other important classes of graphs can also be characterized in this way. In this paper, we prove that a graph is a convex geometry with respect to the weakly toll convexity if and only if it is a proper interval graph. Furthermore, some well-known graph invariants are studied with respect to the weakly toll convexity.
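As a quick illustration of the definition, the following Python sketch (using networkx; the graph and walks are toy examples of mine, not from the paper) checks whether a given walk in a graph is weakly toll:

```python
import networkx as nx

def is_weakly_toll_walk(G, walk):
    # A walk u_0 ... u_k is weakly toll when consecutive vertices are
    # adjacent, the only walk vertex adjacent to u_0 is u_1, and the
    # only walk vertex adjacent to u_k is u_{k-1}.
    if any(not G.has_edge(a, b) for a, b in zip(walk, walk[1:])):
        return False
    if any(G.has_edge(walk[0], u) and u != walk[1] for u in walk[1:]):
        return False
    if any(G.has_edge(u, walk[-1]) and u != walk[-2] for u in walk[:-1]):
        return False
    return True

G = nx.cycle_graph(5)                           # C_5 on vertices 0..4
print(is_weakly_toll_walk(G, [0, 1, 2, 3]))     # True
print(is_weakly_toll_walk(G, [1, 0, 4, 3, 2]))  # False: 1 and 2 are adjacent
```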

Related content

We study $L_2$-approximation problems $\text{APP}_d$ in the worst case setting in the weighted Korobov spaces $H_{d,\boldsymbol{\alpha},\boldsymbol{\gamma}}$ with parameter sequences $\boldsymbol{\gamma}=\{\gamma_j\}$ and $\boldsymbol{\alpha}=\{\alpha_j\}$ of positive real numbers satisfying $1\ge \gamma_1\ge \gamma_2\ge \cdots\ge 0$ and $\frac{1}{2}<\alpha_1\le \alpha_2\le \cdots$. We consider the minimal worst case error $e(n,\text{APP}_d)$ of algorithms that use $n$ arbitrary continuous linear functionals in $d$ variables. We study polynomial convergence of the minimal worst case error, which means that $e(n,\text{APP}_d)$ converges to zero polynomially fast with increasing $n$. We recall the notions of polynomial, strongly polynomial, weak, and $(t_1,t_2)$-weak tractability. In particular, polynomial tractability means that the number of arbitrary continuous linear functionals needed is polynomial in $d$ and $\varepsilon^{-1}$, where $\varepsilon$ is the accuracy of the approximation. We obtain that the matching necessary and sufficient condition on the sequences $\boldsymbol{\gamma}$ and $\boldsymbol{\alpha}$ for strongly polynomial or polynomial tractability is $$\delta:=\liminf_{j\to\infty}\frac{\ln \gamma_j^{-1}}{\ln j}>0,$$ and the exponent of strongly polynomial tractability is $$p^{\text{str}}=2\max\Big\{\frac{1}{\delta}, \frac{1}{2\alpha_1}\Big\}.$$
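The condition and the exponent are easy to evaluate numerically for concrete weight sequences. The sketch below (the sequences $\gamma_j=j^{-2}$ and $\alpha_j=2$ are my own illustrative choices, not from the paper) estimates $\delta$ from a long tail of the sequence and then evaluates $p^{\text{str}}$:

```python
import numpy as np

j = np.arange(2, 10**6)
gamma = j ** -2.0                      # illustrative weights gamma_j = j^{-2}
alpha1 = 2.0                           # illustrative smoothness alpha_1 = 2

# Tail infimum as a stand-in for liminf_j ln(gamma_j^{-1}) / ln(j).
tail = slice(-10**5, None)
delta = np.min(np.log(1.0 / gamma[tail]) / np.log(j[tail]))

p_str = 2 * max(1 / delta, 1 / (2 * alpha1))
print(delta, p_str)                    # delta = 2, p_str = 2*max(0.5, 0.25) = 1
```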

This paper deals with the problem of numerically computing the roots of polynomials $p_k(x)$, $k=1,2,\ldots$, of degree $n=2^k-1$, recursively defined by $p_1(x)=x+1$, $p_k(x)=xp_{k-1}(x)^2+1$. An algorithm based on the Ehrlich-Aberth simultaneous iterations, complemented by the Fast Multipole Method (FMM) and a fast search of near neighbors of a set of complex numbers, is provided. The algorithm, which relies on a specific strategy for selecting initial approximations, costs $O(n\log n)$ arithmetic operations per step. A Fortran 95 implementation is given and numerical experiments are carried out. Experimentally, it turns out that the number of iterations needed to arrive at numerical convergence is $O(\log n)$. This allows us to compute the roots of $p_k(x)$ up to degree $n=2^{24}-1$ in about 16 minutes on a laptop with 16 GB RAM, and up to degree $n=2^{28}-1$ in about 69 minutes on a machine with 256 GB RAM. The case of degree $n=2^{30}-1$ would require more memory and higher precision to separate the roots. By suitably adapting the FMM to the limit of 256 GB RAM and performing the computation in extended precision (i.e., with a 10-byte floating-point representation), we were able to compute all the roots for $n=2^{30}-1$ in about two weeks of CPU time. From the experimental analysis, explicit asymptotic expressions for the real roots of $p_k(x)$ and an explicit expression for $\min_{i\ne j}|\xi_i^{(k)}-\xi_j^{(k)}|$, where $\xi_i^{(k)}$ are the roots of $p_k(x)$, are deduced. The approach applies effectively to general classes of polynomials defined by a doubling recurrence.
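For small degrees, the basic Ehrlich-Aberth iteration is easy to sketch in Python; the version below omits the FMM acceleration and the paper's tailored initial approximations (both essential at large $n$) and simply starts from points on a circle:

```python
import numpy as np

def coeffs(k):
    # Coefficients (descending powers) of p_k, where
    # p_1(x) = x + 1 and p_k(x) = x * p_{k-1}(x)^2 + 1.
    p = np.array([1.0, 1.0])
    for _ in range(2, k + 1):
        p = np.append(np.convolve(p, p), 0.0)   # x * p^2
        p[-1] += 1.0                            # ... + 1
    return p

def ehrlich_aberth(c, tol=1e-12, maxit=100):
    n = len(c) - 1
    dc = np.polyder(c)
    z = 1.5 * np.exp(2j * np.pi * (np.arange(n) + 0.5) / n)  # circle start
    for _ in range(maxit):
        N = np.polyval(c, z) / np.polyval(dc, z)             # Newton corrections
        S = np.array([np.sum(1.0 / (z[i] - np.delete(z, i))) for i in range(n)])
        dz = N / (1.0 - N * S)                               # Aberth corrections
        z -= dz
        if np.max(np.abs(dz)) < tol:
            break
    return z

c = coeffs(4)                                   # degree n = 2^4 - 1 = 15
roots = ehrlich_aberth(c)
print(np.max(np.abs(np.polyval(c, roots))))     # residual check
```

Each sweep here costs $O(n^2)$; the paper's FMM-based evaluation of the Aberth correction sums is what brings the cost per step down to $O(n\log n)$.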

This paper gives a self-contained introduction to the Hilbert projective metric $\mathcal{H}$ and its fundamental properties, with a particular focus on the space of probability measures. We start by defining the Hilbert pseudo-metric on convex cones, focusing mainly on dual formulations of $\mathcal{H}$. We show that linear operators on convex cones contract in the distance given by the hyperbolic tangent of $\mathcal{H}$, which in particular implies Birkhoff's classical contraction result for $\mathcal{H}$. Turning to spaces of probability measures, where $\mathcal{H}$ is a metric, we analyse the dual formulation of $\mathcal{H}$ in the general setting, and explore the geometry of the probability simplex under $\mathcal{H}$ in the special case of discrete probability measures. Throughout, we compare $\mathcal{H}$ with other distances between probability measures. In particular, we show how convergence in $\mathcal{H}$ implies convergence in total variation, $p$-Wasserstein distance, and any $f$-divergence. Furthermore, we derive a novel sharp bound for the total variation between two probability measures in terms of their Hilbert distance.
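On the interior of the probability simplex, $\mathcal{H}$ has the simple closed form $\mathcal{H}(p,q)=\ln\max_i(p_i/q_i)-\ln\min_i(p_i/q_i)$. A small numpy sketch (the vectors and the positive matrix are toy choices of mine) compares it with total variation and illustrates the Birkhoff-type contraction under a strictly positive linear map:

```python
import numpy as np

def hilbert(p, q):
    # Hilbert metric between strictly positive vectors (projective, so
    # normalization does not matter): ln max_i(p_i/q_i) - ln min_i(p_i/q_i).
    r = p / q
    return float(np.log(r.max() / r.min()))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(hilbert(p, q), 0.5 * np.abs(p - q).sum())  # vs. total variation

# A strictly positive linear map contracts the Hilbert distance (Birkhoff).
A = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 1.0], [1.0, 1.0, 2.0]])
print(hilbert(A @ p, A @ q) <= hilbert(p, q))    # True
```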

Challenges with data in the big-data era include (i) the dimension $p$ is often larger than the sample size $n$, and (ii) outliers or contaminated points are frequently hidden and difficult to detect. Challenge (i) renders most conventional methods inapplicable and has thus attracted tremendous attention from the statistics, computer science, and biomedical communities; numerous penalized regression methods have been introduced as modern tools for analyzing high-dimensional data. Challenge (ii), however, has received disproportionately little attention. Penalized regression methods handle challenge (i) very well and are often expected to handle challenge (ii) simultaneously; as revealed in this article, however, most of them can break down in the presence of a single outlier (a single adversarially contaminated point). This article systematically examines leading penalized regression methods in the literature in terms of their robustness, provides a quantitative assessment, and reveals that most of them can break down with a single outlier. Consequently, a novel robust penalized regression method, based on the least sum of squares of depth-trimmed residuals, is proposed and studied carefully. Experiments with simulated and real data reveal that the newly proposed method can outperform some leading competitors in estimation and prediction accuracy in the cases considered.
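The breakdown phenomenon is easy to reproduce. The sketch below is my own toy setup, with sklearn's Lasso standing in for a generic penalized method (the paper's depth-trimmed estimator is not implemented here): corrupting a single response value blows up the estimation error.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 100                        # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                        # sparse true coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

fit = lambda yy: Lasso(alpha=0.1, max_iter=50_000).fit(X, yy).coef_
y_bad = y.copy()
y_bad[0] += 1e6                       # one adversarial response value

print(np.linalg.norm(fit(y) - beta))      # small error on clean data
print(np.linalg.norm(fit(y_bad) - beta))  # error explodes with one outlier
```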

A {\em hole} is an induced cycle of length at least four, and an {\em odd hole} is a hole of odd length. A {\em fork} is a graph obtained from $K_{1,3}$ by subdividing an edge once. An {\em odd balloon} is a graph obtained from an odd hole by identifying two consecutive vertices of the hole with two leaves of $K_{1,3}$, respectively. A {\em gem} is a graph that consists of a $P_4$ plus a vertex adjacent to all vertices of the $P_4$. A {\em butterfly} is a graph obtained from two triangles by sharing exactly one vertex. A graph $G$ is {\em perfectly divisible} if for each induced subgraph $H$ of $G$, $V(H)$ can be partitioned into $A$ and $B$ such that $H[A]$ is perfect and $\omega(H[B])<\omega(H)$. In this paper, we show that (odd balloon, fork)-free graphs are perfectly divisible (this generalizes some results of Karthick {\em et al.}). As an application, we show that $\chi(G)\le\binom{\omega(G)+1}{2}$ if $G$ is (fork, gem)-free or (fork, butterfly)-free.
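The small named graphs are easy to write down explicitly. A networkx sketch (vertex labels are my own) constructs them and tests for an induced fork via node-induced subgraph isomorphism:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

# Fork: K_{1,3} with one edge subdivided once.
fork = nx.Graph([(0, 1), (0, 2), (0, 3), (3, 4)])
# Gem: P_4 (0-1-2-3) plus vertex 4 adjacent to all of it.
gem = nx.Graph([(0, 1), (1, 2), (2, 3)] + [(4, v) for v in range(4)])
# Butterfly: two triangles sharing exactly the vertex 0.
butterfly = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])

def has_induced_fork(G):
    # GraphMatcher's subgraph test is node-induced, as required here.
    return GraphMatcher(G, fork).subgraph_is_isomorphic()

print(has_induced_fork(fork))               # True
print(has_induced_fork(nx.cycle_graph(7)))  # False: C_7 has max degree 2
```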

Given a matrix-valued function $\mathcal{F}(\lambda)=\sum_{i=1}^d f_i(\lambda) A_i$, with complex matrices $A_i$ and analytic functions $f_i(\lambda)$ for $i=1,\ldots,d$, we discuss a method for the numerical approximation of the distance to singularity for $\mathcal{F}(\lambda)$. The closest singular matrix-valued function $\widetilde {\mathcal{F}}(\lambda)$ with respect to the Frobenius norm is approximated using an iterative method. The singularity condition on the matrix-valued function is translated into a numerical constraint for a suitable minimization problem. Unlike the case of matrix polynomials, in the general setting of matrix-valued functions the main issue is that the function $\det ( \widetilde{\mathcal{F}}(\lambda) )$ may have an infinite number of roots. A main feature of the numerical method is that it can be extended to different structures, such as sparsity patterns induced by the matrix coefficients.
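To make the constrained-minimization viewpoint concrete, here is a deliberately crude sketch, entirely my own penalty formulation on a linear pencil rather than the paper's iterative method: the singularity condition $\det\widetilde{\mathcal{F}}(\lambda)\equiv 0$ is imposed at finitely many sample points (enough to force a degree-$3$ determinant to vanish identically), while the Frobenius norm of the perturbation is minimized.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 3
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
lams = np.linspace(-1.0, 1.0, 7)   # 7 > deg(det), so zero residual => singular pencil

def objective(v, mu=1e3):
    # Perturbation norm plus penalized singularity constraint for
    # F~(lam) = (A1 + D1) + lam * (A2 + D2).
    D1, D2 = v[:n * n].reshape(n, n), v[n * n:].reshape(n, n)
    pert = np.sum(D1**2) + np.sum(D2**2)
    res = sum(np.linalg.det(A1 + D1 + l * (A2 + D2))**2 for l in lams)
    return pert + mu * res

sol = minimize(objective, 0.1 * rng.standard_normal(2 * n * n), method="BFGS")
print(np.sqrt(np.sum(sol.x**2)))   # rough upper bound on the distance to singularity
```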

We consider the minimal thermodynamic cost of an individual computation, where a single input $x$ is mapped to a single output $y$. In prior work, Zurek proposed that this cost was given by $K(x\vert y)$, the conditional Kolmogorov complexity of $x$ given $y$ (up to an additive constant which does not depend on $x$ or $y$). However, this result was derived from an informal argument, applied only to deterministic computations, and had an arbitrary dependence on the choice of protocol (via the additive constant). Here we use stochastic thermodynamics to derive a generalized version of Zurek's bound from a rigorous Hamiltonian formulation. Our bound applies to all quantum and classical processes, whether noisy or deterministic, and it explicitly captures the dependence on the protocol. We show that $K(x\vert y)$ is a minimal cost of mapping $x$ to $y$ that must be paid using some combination of heat, noise, and protocol complexity, implying a tradeoff between these three resources. Our result is a kind of "algorithmic fluctuation theorem" with implications for the relationship between the Second Law and the Physical Church-Turing thesis.
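Although $K(x\vert y)$ is uncomputable, compression gives a practical upper-bound proxy, which is enough to get a feel for the quantity appearing in the bound. The sketch below is a standard zlib heuristic of my own choosing, not something from the paper: it estimates $K(x\vert y)$ as the extra compressed length of $x$ once $y$ is known.

```python
import zlib

def K(s: bytes) -> int:
    # Crude stand-in for Kolmogorov complexity: compressed length.
    return len(zlib.compress(s, 9))

def K_cond(x: bytes, y: bytes) -> int:
    # Proxy for K(x | y): extra bits needed for x given y.
    return max(K(y + x) - K(y), 0)

x = b"0123456789" * 100
y = b"0123456789" * 99              # y already contains almost all of x
print(K_cond(x, y))                 # small: little left to pay for
print(K_cond(x, b""))               # larger: x must be described from scratch
```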

We consider problems of minimizing functionals $\mathcal{F}$ of probability measures on the Euclidean space. To propose an accelerated gradient descent algorithm for such problems, we consider gradient flows of transport maps that give push-forward measures of an initial measure. We then propose a deterministic accelerated algorithm by extending Nesterov's acceleration technique with momentum. This algorithm is not based on the Wasserstein geometry. Furthermore, to estimate the convergence rate of the accelerated algorithm, we introduce new notions of convexity and smoothness for $\mathcal{F}$ based on transport maps. As a result, we show that the accelerated algorithm converges faster than a standard gradient descent algorithm. Numerical experiments support this theoretical result.
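As a toy version of the idea, one can represent the push-forward measure by particles and move them with plain gradient descent versus a Nesterov-style momentum update; the functional, step size, and momentum schedule below are my own illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal((1000, 2))            # particles of the initial measure
grad = lambda x: x                             # gradient of f(x) = ||x||^2 / 2
F = lambda x: 0.5 * (x**2).sum(axis=1).mean()  # F(mu) = E_mu[f]

def gd(x, lr=0.01, steps=50):
    x = x.copy()
    for _ in range(steps):
        x -= lr * grad(x)                      # transport map x -> x - lr*grad f(x)
    return x

def nesterov(x, lr=0.01, steps=50):
    x, v = x.copy(), np.zeros_like(x)
    for t in range(steps):
        y = x + (t / (t + 3)) * v              # Nesterov momentum extrapolation
        x_new = y - lr * grad(y)
        v, x = x_new - x, x_new
    return x

print(F(gd(x0)), F(nesterov(x0)))              # accelerated run is typically smaller
```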

We derive an intuitionistic version of G\"odel-L\"ob modal logic ($\sf{GL}$) in the style of Simpson, via proof-theoretic techniques. We recover a labelled system, $\sf{\ell IGL}$, by restricting a non-wellfounded labelled system for $\sf{GL}$ to have only one formula on the right. The latter is obtained using techniques from cyclic proof theory, sidestepping the barrier that $\sf{GL}$'s usual frame condition (converse well-foundedness) is not first-order definable. While existing intuitionistic versions of $\sf{GL}$ are typically defined over only the box (and not the diamond), our presentation includes both modalities. Our main result is that $\sf{\ell IGL}$ coincides with a corresponding semantic condition in birelational semantics: the composition of the modal relation and the intuitionistic relation is conversely well-founded. We call the resulting logic $\sf{IGL}$. While the soundness direction is proved using standard ideas, the completeness direction is more complex and necessitates a detour through several intermediate characterisations of $\sf{IGL}$.

Given a hypergraph $\mathcal{H}$, the dual hypergraph of $\mathcal{H}$ is the hypergraph of all minimal transversals of $\mathcal{H}$. The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs is that of conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, and database theory. In this paper we study conformality of dual hypergraphs. While we do not settle the computational complexity status of recognizing this property, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension $3$, we reduce the problem to $2$-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most $k$, for any fixed $k$.
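The dual hypergraph of a small example can be computed by brute force, which also makes the Sperner property visible; this exponential-time enumeration is for illustration only, not the paper's algorithm.

```python
from itertools import combinations

def minimal_transversals(H, V):
    # All inclusion-minimal vertex sets meeting every hyperedge of H.
    ts = [frozenset(S) for r in range(len(V) + 1)
          for S in combinations(V, r)
          if all(set(S) & e for e in H)]
    return [t for t in ts if not any(s < t for s in ts)]

H = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]  # triangle edges
dual = minimal_transversals(H, [1, 2, 3])
print(dual)   # the three pairs {1,2}, {1,3}, {2,3}; none contains another
```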
