
Given a digraph $G$, a set $X\subseteq V(G)$ is said to be an absorbing set (resp. a dominating set) if every vertex in the graph is either in $X$ or is an in-neighbour (resp. out-neighbour) of a vertex in $X$. A set $S\subseteq V(G)$ is said to be an independent set if no two vertices in $S$ are adjacent in $G$. A kernel (resp. solution) of $G$ is an independent and absorbing (resp. dominating) set in $G$. We explore the algorithmic complexity of these problems in the well-known class of interval digraphs. A digraph $G$ is an interval digraph if a pair of intervals $(S_u,T_u)$ can be assigned to each vertex $u$ of $G$ such that $(u,v)\in E(G)$ if and only if $S_u\cap T_v\neq\emptyset$. Many different subclasses of interval digraphs have been defined and studied in the literature by restricting the kinds of pairs of intervals that can be assigned to the vertices. We observe that several of these classes, like interval catch digraphs, interval nest digraphs, adjusted interval digraphs and chronological interval digraphs, are subclasses of the more general class of reflexive interval digraphs, which arise when we require that the two intervals assigned to a vertex intersect. We show that all the problems mentioned above are efficiently solvable, in most cases even linear-time solvable, in the class of reflexive interval digraphs, but are APX-hard even on the very restricted class of interval digraphs called point-point digraphs, where the two intervals assigned to each vertex are required to be degenerate, i.e., they consist of a single point each. The results we obtain improve and generalize several existing algorithms and structural results for subclasses of reflexive interval digraphs.
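
To make the interval model concrete, the following minimal Python sketch (function names are our own illustration, not from the paper) builds a digraph from assigned interval pairs and tests a candidate vertex set for being a kernel by brute force; the paper's algorithms for reflexive interval digraphs are, of course, far more efficient.

```python
# Illustrative sketch only: interval digraphs and a brute-force kernel check.

def intersects(a, b):
    """Closed intervals a = (lo, hi), b = (lo, hi) intersect iff neither lies past the other."""
    return a[0] <= b[1] and b[0] <= a[1]

def build_interval_digraph(pairs):
    """pairs[u] = (S_u, T_u); arc (u, v) present iff S_u and T_v intersect.
    If S_u and T_u intersect for every u, the digraph is reflexive (every vertex has a loop)."""
    n = len(pairs)
    return {(u, v) for u in range(n) for v in range(n)
            if intersects(pairs[u][0], pairs[v][1])}

def is_kernel(X, n, arcs):
    """X is a kernel iff it is independent (no arc between distinct vertices of X)
    and absorbing (every vertex outside X has an out-neighbour in X)."""
    independent = all((u, v) not in arcs and (v, u) not in arcs
                      for u in X for v in X if u < v)
    absorbing = all(any((u, x) in arcs for x in X)
                    for u in range(n) if u not in X)
    return independent and absorbing
```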

Related content

In this work, we introduce a vertex separator for trees, called a sweep-cover, defined by an ancestor-descendant relationship with all nodes in the tree. We prove a recurrence relation for the number $P_{\Delta, \gamma}(n)$ of sweep-covers with $n$ subcovers on a class of infinite $\Delta$-ary trees with constant path length $\gamma$ between the $\Delta$-star internal nodes. We then provide recurrence relations for Raney numbers over integer compositions and show that they provide a lower bound for the number of sweep-covers, namely $P_{\Delta, \gamma}(n) = \Omega\left( \frac{\sqrt{2 \pi} n^{\Delta n + \Delta + \frac{3}{2}}}{e^n ((\Delta-1)n+\Delta+1)!(n+1)!} \gamma \right)$.
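
For context, the Raney numbers (two-parameter Fuss-Catalan numbers) that underlie the lower bound admit the closed form $A_p(k, r) = \frac{r}{pk + r}\binom{pk + r}{k}$. The sketch below (our own code, using the common indexing convention, which may differ from the paper's) computes them and sanity-checks against the Catalan numbers.

```python
# Hedged sketch: Raney (two-parameter Fuss-Catalan) numbers.
# For p = 2, r = 1 these reduce to the Catalan numbers.
from math import comb
from fractions import Fraction

def raney(p: int, k: int, r: int) -> int:
    """Raney number A_p(k, r) = r/(p*k + r) * binom(p*k + r, k)."""
    value = Fraction(r, p * k + r) * comb(p * k + r, k)
    assert value.denominator == 1  # the formula always yields an integer
    return int(value)

# Sanity check: p = 2, r = 1 recovers the Catalan numbers 1, 1, 2, 5, 14.
assert [raney(2, k, 1) for k in range(5)] == [1, 1, 2, 5, 14]
```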

Let $\Pi$ be a hereditary graph class. The problem of deletion to $\Pi$ takes as input a graph $G$ and asks for a minimum number (or a fixed integer $k$) of vertices to be deleted from $G$ so that the resulting graph belongs to $\Pi$. This is a well-studied problem in paradigms including approximation and parameterized complexity. Recently, the study of a natural extension of the problem was initiated where we are given a finite set of hereditary graph classes, and the goal is to determine whether $k$ vertices can be deleted from a given graph so that the connected components of the resulting graph belong to one of the given hereditary graph classes. The problem is shown to be FPT as long as the deletion problem to each of the given hereditary graph classes is fixed-parameter tractable, and the property of being in any of the graph classes is expressible in counting monadic second-order (CMSO) logic. While this was shown using some black-box theorems, faster algorithms were shown when each of the hereditary graph classes has a finite forbidden set. In this paper, we do a deep dive into specific pairs of graph classes ($\Pi_1, \Pi_2$) to which we would like the connected components of the resulting graph to belong, and design simpler and more efficient FPT algorithms. We design a general FPT algorithm and approximation algorithm for pairs of graph classes (possibly having infinite forbidden sets) satisfying certain conditions. These algorithms cover several pairs of popular graph classes. Our algorithm makes non-trivial use of the branching technique and uses, as a black box, FPT algorithms for deletion to individual graph classes.
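
The black-box branching idea mentioned above is standard for classes with a finite forbidden set; here is a hedged Python sketch of that generic technique (not the paper's more involved algorithm), with `find_forbidden` a hypothetical oracle.

```python
# Sketch of the standard bounded-search-tree (branching) technique for
# Pi-deletion when Pi has a finite forbidden set F: while some forbidden
# induced subgraph remains, one of its vertices must be deleted, giving a
# branching factor of at most max |H| over H in F.
# `find_forbidden(G)` is a hypothetical oracle returning the vertex set of
# some forbidden induced subgraph of G, or None if G already lies in Pi.

def delete_to_pi(G, k, find_forbidden):
    """Return a deletion set of size <= k putting G into Pi, or None."""
    bad = find_forbidden(G)
    if bad is None:
        return set()          # G is already in Pi
    if k == 0:
        return None           # budget exhausted but G is not in Pi
    for v in bad:             # some vertex of `bad` must be deleted
        H = G.copy()
        H.remove_node(v)      # networkx-style interface, for concreteness
        sol = delete_to_pi(H, k - 1, find_forbidden)
        if sol is not None:
            return sol | {v}
    return None
```

With $d$ the maximum size of a forbidden subgraph, the search tree has at most $d^k$ leaves, giving the usual $O(d^k \cdot \mathrm{poly}(n))$ running time.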

We consider Broyden's method and some accelerated schemes for nonlinear equations having a strongly regular singularity of first order with a one-dimensional nullspace. Our two main results are as follows. First, we show that the use of a preceding Newton-like step ensures convergence for starting points in a starlike domain with density 1. This extends the domain of convergence of these methods significantly. Second, we establish that the matrix updates of Broyden's method converge q-linearly with the same asymptotic factor as the iterates. This contributes to the long-standing question of whether the Broyden matrices converge by showing that this is indeed the case for the setting at hand. Furthermore, we prove that the Broyden directions violate uniform linear independence, which implies that existing results for convergence of the Broyden matrices cannot be applied. High-precision numerical experiments confirm the enlarged domain of convergence, the q-linear convergence of the matrix updates, and the lack of uniform linear independence. In addition, they suggest that these results can be extended to singularities of higher order and that Broyden's method can converge r-linearly without converging q-linearly. The underlying code is freely available.
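
For readers unfamiliar with the method, here is a minimal sketch of Broyden's "good" update preceded by one Newton-like step, in the spirit of the first result above; tolerances, iteration limits and interfaces are our own choices.

```python
# Minimal sketch of Broyden's ("good") method with a preceding Newton step.
import numpy as np

def broyden(F, x0, B0, tol=1e-12, max_iter=100):
    """Solve F(x) = 0 from x0 with initial Jacobian approximation B0."""
    x, B = x0.astype(float), B0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)                    # quasi-Newton step
        x = x + s
        y = F(x) - Fx
        # good Broyden update: rank-one correction enforcing B_new s = y
        B = B + np.outer(y - B @ s, s) / (s @ s)
    return x

def newton_then_broyden(F, J, x0):
    """One exact Newton step, then switch to Broyden updates."""
    x1 = x0 - np.linalg.solve(J(x0), F(x0))
    return broyden(F, x1, J(x1))
```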

We first consider the problem of approximating a few eigenvalues of a rational matrix-valued function closest to a prescribed target. It is assumed that the proper rational part of the function is expressed in the transfer-function form $H(s) = C (sI - A)^{-1} B$, where the middle factor is large, whereas the number of rows of $C$ and the number of columns of $B$ are equal and small. We propose a subspace framework that performs two-sided or one-sided projections on the state-space representation of $H(\cdot)$, commonly employed in model reduction, giving rise to a reduced transfer function. At every iteration, the projection subspaces are expanded to attain Hermite interpolation conditions at the eigenvalues of the reduced transfer function closest to the target, which in turn leads to a new reduced transfer function. We prove that, when a sequence of eigenvalues of the reduced transfer functions converges to an eigenvalue of the full problem, it converges at least at a quadratic rate. In the second part, we extend the proposed framework to locate the eigenvalues of a general square large-scale nonlinear meromorphic matrix-valued function $T(\cdot)$, where we exploit a representation $\mathcal{R}(s) = C(s) A(s)^{-1} B(s) - D(s)$ defined in terms of the block components of $T(\cdot)$. Numerical experiments illustrate that the proposed framework is reliable in locating a few eigenvalues closest to the target point and that, with respect to runtime, it is competitive with established methods for nonlinear eigenvalue problems.
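
The following hedged sketch illustrates the one-sided projection step under our own simplifying choices: it reduces the realization $(A, B, C)$ with an orthonormal basis $V$ and computes the eigenvalues of the reduced transfer function as the finite generalized eigenvalues of the associated Rosenbrock pencil (a standard fact for square transfer functions); the subspace expansion enforcing Hermite interpolation is not reproduced here.

```python
# Hedged sketch: one-sided projection of H(s) = C (sI - A)^{-1} B and the
# eigenvalues of the reduced transfer function, i.e. points where
# H_r(s) u = 0 for some u != 0, obtained from the Rosenbrock pencil
# [[A_r, B_r], [C_r, 0]] - s [[I, 0], [0, 0]].
import numpy as np
from scipy.linalg import eig

def reduced_eigenvalues(A, B, C, V, target):
    """Eigenvalues of the reduced transfer function, sorted by distance to target."""
    V, _ = np.linalg.qr(V)                        # orthonormalize the basis
    Ar, Br, Cr = V.conj().T @ A @ V, V.conj().T @ B, C @ V
    r, m = Ar.shape[0], Br.shape[1]
    M = np.block([[Ar, Br],
                  [Cr, np.zeros((m, m))]])
    N = np.block([[np.eye(r), np.zeros((r, m))],
                  [np.zeros((m, r)), np.zeros((m, m))]])
    lam = eig(M, N, right=False)
    lam = lam[np.isfinite(lam)]                   # discard infinite eigenvalues
    return lam[np.argsort(np.abs(lam - target))]
```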

Consider a random graph process with $n$ vertices corresponding to points $v_{i} \sim \mathrm{Unif}[0,1]$ embedded randomly in the interval, and where edges are inserted between $v_{i}, v_{j}$ independently with probability given by the graphon $w(v_{i},v_{j}) \in [0,1]$. Following Chuangpishit et al. (2015), we call a graphon $w$ diagonally increasing if, for each $x$, $w(x,y)$ decreases as $y$ moves away from $x$. We call a permutation $\sigma \in S_{n}$ an ordering of these vertices if $v_{\sigma(i)} < v_{\sigma(j)}$ for all $i < j$, and ask: how can we accurately estimate $\sigma$ from an observed graph? We present a randomized algorithm with output $\hat{\sigma}$ that, for a large class of graphons, achieves error $\max_{1 \leq i \leq n} | \sigma(i) - \hat{\sigma}(i)| = O^{*}(\sqrt{n})$ with high probability; we also show that this is the best possible convergence rate for a large class of algorithms and proof strategies. Under an additional assumption satisfied by some popular graphon models, we break this "barrier" at $\sqrt{n}$ and obtain the vastly better rate $O^{*}(n^{\epsilon})$ for any $\epsilon > 0$. These improved seriation bounds can be combined with previous work to give more efficient and accurate algorithms for related tasks, including estimating diagonally increasing graphons and testing whether a graphon is diagonally increasing.
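
As a point of reference, a standard baseline for this kind of seriation task is spectral ordering by the Fiedler vector; the sketch below shows that baseline (explicitly not the paper's randomized algorithm).

```python
# Illustrative baseline: spectral seriation. For a graph sampled from a
# diagonally increasing graphon, sorting vertices by the eigenvector of the
# second-smallest Laplacian eigenvalue recovers an approximate ordering.
import numpy as np

def spectral_ordering(A):
    """A: symmetric 0/1 adjacency matrix. Returns an estimated vertex ordering."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                  # combinatorial Laplacian
    w, U = np.linalg.eigh(L)            # eigenvalues in ascending order
    fiedler = U[:, 1]                   # Fiedler vector
    return np.argsort(fiedler)          # order vertices along the line
```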

A graph is called a sum graph if its vertices can be labelled by distinct positive integers such that there is an edge between two vertices if and only if the sum of their labels is the label of another vertex of the graph. Most papers on sum graphs consider combinatorial questions, like the minimum number of isolated vertices that need to be added to a given graph to make it a sum graph. In this paper, we initiate the study of sum graphs from the viewpoint of computational complexity. Notice that every $n$-vertex sum graph can be represented by a sorted list of $n$ positive integers on which edge queries can be answered in $O(\log n)$ time. Therefore, limiting the size of the vertex labels also upper-bounds the space complexity of storing the graph. We show that every $n$-vertex, $m$-edge, $d$-degenerate graph can be made a sum graph by adding at most $m$ isolated vertices to it, such that the size of each vertex label is at most $O(n^2d)$. This enables us to store the graph using $O(m\log n)$ bits of memory. For sparse graphs (graphs with $O(n)$ edges), this matches the trivial lower bound of $\Omega(n\log n)$. Since planar graphs and forests have constant degeneracy, our result implies an upper bound of $O(n^2)$ on their label size. The previously best known upper bound on the label size of general graphs with the minimum number of isolated vertices was $O(4^n)$, due to Kratochv\'il, Miller & Nguyen. Furthermore, their proof was existential, whereas our labelling can be constructed in polynomial time.
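
The sorted-list representation mentioned above is easy to make concrete; here is a minimal sketch with a binary-search edge query (the example labels are our own).

```python
# Minimal sketch: a sum graph stored as its sorted label list, with edge
# queries answered by binary search in O(log n) time.
from bisect import bisect_left

class SumGraph:
    def __init__(self, labels):
        self.labels = sorted(labels)        # distinct positive integers

    def has_edge(self, a, b):
        """Edge between labels a and b iff a + b is also a label."""
        s = a + b
        i = bisect_left(self.labels, s)
        return i < len(self.labels) and self.labels[i] == s

# Example: labels {1, 2, 3} give the single edge 1-2 (since 1 + 2 = 3 is a
# label), with vertex 3 isolated.
g = SumGraph([1, 2, 3])
assert g.has_edge(1, 2) and not g.has_edge(2, 3)
```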

This paper presents local minimax regret lower bounds for adaptively controlling linear-quadratic-Gaussian (LQG) systems. We consider smoothly parametrized instances and provide an understanding of when logarithmic regret is impossible that is both instance-specific and flexible enough to take problem structure into account. This understanding relies on two key notions: local uninformativeness, which occurs when the optimal policy does not provide sufficient excitation for identification of the optimal policy and thus yields a degenerate Fisher information matrix; and information-regret-boundedness, which holds when the small eigenvalues of a policy-dependent information matrix can be bounded in terms of the regret of that policy. Combined with a reduction to Bayesian estimation and an application of Van Trees' inequality, these two conditions are sufficient for proving regret lower bounds of order $\sqrt{T}$ in the time horizon $T$. This method yields lower bounds that exhibit tight dimensional dependencies and scale naturally with control-theoretic problem constants. For instance, we are able to prove that systems operating near marginal stability are fundamentally hard to learn to control. We further show that large classes of systems satisfy these conditions, among them any state-feedback system with both $A$- and $B$-matrices unknown. Most importantly, we also establish that a nontrivial class of partially observable systems, essentially those that are over-actuated, satisfies these conditions, thus providing a $\sqrt{T}$ lower bound that is also valid for partially observable systems. Finally, we turn to two simple examples which demonstrate that our lower bound captures classical control-theoretic intuition: our lower bounds diverge for systems operating near marginal stability or with large filter gain, as these can be arbitrarily hard to (learn to) control.
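
For reference, the scalar form of Van Trees' inequality invoked after the reduction to Bayesian estimation reads as follows (this is the standard statement; the paper applies a suitable multivariate version).

```latex
% Van Trees' inequality, scalar form: for a smooth prior \lambda on \theta
% and any estimator \hat\theta(X),
\mathbb{E}\left[(\hat\theta(X)-\theta)^2\right]
  \;\geq\;
  \frac{1}{\mathbb{E}_{\theta\sim\lambda}\left[\mathcal{I}(\theta)\right]
            + \mathcal{I}(\lambda)},
\qquad
\mathcal{I}(\lambda) \;=\; \int \frac{\lambda'(\theta)^2}{\lambda(\theta)}\,d\theta,
% where \mathcal{I}(\theta) is the Fisher information of the model and
% \mathcal{I}(\lambda) is the information of the prior.
```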

When assessing the performance of wireless communication systems operating over fading channels, one often encounters the problem of computing expectations of some functional of sums of independent random variables (RVs). The outage probability (OP) at the output of Equal Gain Combining (EGC) and Maximum Ratio Combining (MRC) receivers is among the most important performance metrics that fall within this framework. In general, closed-form expressions of expectations of functionals applied to sums of RVs are out of reach. A naive Monte Carlo (MC) simulation is of course an alternative approach. However, this method requires a large number of samples for rare-event problems (small OP values, for instance). Therefore, it is of paramount importance to use variance reduction techniques to develop fast and efficient estimation methods. In this work, we use importance sampling (IS), known for achieving a given accuracy requirement with fewer computations. Along these lines, we propose a state-dependent IS scheme based on a stochastic optimal control (SOC) formulation to calculate rare-event quantities that can be written as an expectation of some functional of sums of independent RVs. Our proposed algorithm is generic and applicable without any restriction on the univariate distributions of the different fading envelopes/gains or on the functional that is applied to the sum. We apply our approach to the Log-Normal distribution to compute the OP at the output of diversity receivers with and without co-channel interference. For each case, we show numerically that the proposed state-dependent IS algorithm compares favorably to most of the well-known estimators dealing with similar problems.
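
To illustrate the variance-reduction idea in the simplest setting, the sketch below estimates a small OP for a sum of i.i.d. Log-Normal gains using a fixed mean-shift IS proposal; this is our own toy example, not the state-dependent SOC-based scheme proposed in the paper.

```python
# Hedged sketch: estimate OP = P(sum_i exp(Z_i) <= gamma) with
# Z_i ~ N(mu, sigma^2) i.i.d. (Log-Normal fading), via importance sampling
# with a fixed downward mean shift of the Gaussian proposal.
import numpy as np
from scipy.stats import norm

def op_is(gamma, n, mu, sigma, shift, n_samples=100_000, rng=None):
    rng = np.random.default_rng(rng)
    z = rng.normal(mu - shift, sigma, size=(n_samples, n))  # shifted proposal
    # likelihood ratio (nominal density / proposal density), per sample
    w = np.prod(norm.pdf(z, mu, sigma) / norm.pdf(z, mu - shift, sigma), axis=1)
    outage = np.exp(z).sum(axis=1) <= gamma                 # rare outage event
    return np.mean(w * outage)                              # unbiased estimate

# Naive MC for comparison is the same estimator with shift = 0.
```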

Cellular networks are expected to be the main communication infrastructure supporting the expanding applications of Unmanned Aerial Vehicles (UAVs). As these networks are deployed to serve ground User Equipment (UE), several issues need to be addressed to enhance cellular UAV services. In this paper, we propose a realistic communication model for the downlink, and we show that the Quality of Service (QoS) for the users is affected by the number of interfering base stations (BSs) and the impact they cause. We therefore address the joint problem of sub-carrier and power allocation. Since this problem is known to be NP-hard, we introduce a solution based on game theory. First, we argue that separating UAVs and UEs in terms of the assigned sub-carriers reduces the interference impact on the users. This separation is materialized through a matching game. Moreover, in order to improve the resulting partition, we propose a coalitional game that takes the outcome of the first game and enables users to change coalitions and enhance their QoS. Furthermore, a power optimization solution is introduced and used within both games. Performance evaluations are conducted, and the obtained results demonstrate the effectiveness of the proposed solutions.
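
Purely as an illustration of the separation argument (and not of the matching or coalitional games themselves), a sketch of keeping UAVs and UEs on disjoint sub-carrier pools might look as follows, assuming there are enough sub-carriers for both pools.

```python
# Illustrative sketch only: assign UAVs and UEs sub-carriers from disjoint
# pools, mimicking the separation the matching game enforces. The paper's
# actual proposal couples a matching game, a coalitional game and power
# optimization, none of which is reproduced here.
def assign_subcarriers(uavs, ues, subcarriers):
    """Split sub-carriers into two disjoint pools; assign round-robin."""
    half = len(subcarriers) // 2          # assumes len(subcarriers) >= 2
    uav_pool, ue_pool = subcarriers[:half], subcarriers[half:]
    assignment = {u: uav_pool[i % len(uav_pool)] for i, u in enumerate(uavs)}
    assignment.update({u: ue_pool[i % len(ue_pool)] for i, u in enumerate(ues)})
    return assignment
```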

In this work, we compare three different modeling approaches for the scores of soccer matches with regard to their predictive performance, based on all matches from the four previous FIFA World Cups 2002-2014: Poisson regression models, random forests and ranking methods. While the former two are based on the teams' covariate information, the latter method estimates adequate ability parameters that best reflect the current strength of the teams. Within this comparison, the best-performing prediction methods on the training data turn out to be the ranking methods and the random forests. However, we show that by combining the random forest with the team ability parameters from the ranking methods as an additional covariate, we can improve the predictive power substantially. Finally, this combination of methods is chosen as the final model, and based on its estimates, the FIFA World Cup 2018 is simulated repeatedly and winning probabilities are obtained for all teams. The model slightly favors Spain over the defending champion Germany. Additionally, we provide survival probabilities for all teams at all tournament stages, as well as the most probable tournament outcome.
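
A hedged sketch of the hybrid model's core idea, with illustrative column names of our own, might look as follows in Python.

```python
# Sketch of the hybrid approach described above: a random forest whose
# features include ranking-based team ability parameters as an additional
# covariate. Column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def fit_hybrid(matches: pd.DataFrame) -> RandomForestRegressor:
    """matches: one row per team and match, with covariates plus an
    'ability' column estimated beforehand by a ranking method."""
    features = ["fifa_rank", "gdp", "population", "ability"]  # illustrative
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(matches[features], matches["goals"])  # predict goals scored
    return model
```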
