
Petri nets, equivalently presentable as vector addition systems with states (VASS), are an established model of concurrency with widespread applications. The reachability problem, where we ask whether from a given initial configuration there exists a sequence of valid execution steps reaching a given final configuration, is the central algorithmic problem for this model. The complexity of the problem has remained, until recently, one of the hardest open questions in the verification of concurrent systems. A first upper bound was provided only in 2015 by Leroux and Schmitz, then refined by the same authors to a non-primitive recursive Ackermannian upper bound in 2019. The exponential space lower bound, shown by Lipton already in 1976, remained the best known for over 40 years, until a breakthrough non-elementary lower bound by Czerwiński, Lasota, Lazic, Leroux and Mazowiecki in 2019. Finally, a matching Ackermannian lower bound, announced this year by Czerwiński and Orlikowski, and independently by Leroux, established the complexity of the problem. Our contribution is an improvement of the former construction, making it conceptually simpler and more direct. On the way we improve the lower bound for vector addition systems with states in fixed dimension (or, equivalently, Petri nets with a fixed number of places): while Czerwiński and Orlikowski prove $F_k$-hardness (hardness for the $k$-th level of the Grzegorczyk hierarchy) in dimension $6k$, and Leroux in dimension $4k+5$, our simplified construction yields $F_k$-hardness already in dimension $3k+2$.
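For readers unfamiliar with the model, the following Python sketch (ours, not part of the paper) illustrates the step semantics of a VASS: a configuration is a control state paired with a vector of non-negative counters, and a transition adds an integer vector to the counters provided none goes negative. The cap `max_coord` is an artificial assumption that keeps this brute-force search finite; removing it yields exactly the unrestricted reachability problem whose complexity is Ackermannian.

```python
from collections import deque

def vass_reach_bounded(transitions, init, target, max_coord):
    """BFS over a VASS's configuration space, truncated at max_coord.

    transitions: list of (state, delta, state') with delta a tuple of ints.
    init, target: configurations (state, counters), counters as tuples of
    non-negative ints. Only decides the bounded approximation of reachability.
    """
    seen = {init}
    queue = deque([init])
    while queue:
        state, vec = queue.popleft()
        if (state, vec) == target:
            return True
        for p, delta, q in transitions:
            if p != state:
                continue
            nxt = tuple(x + d for x, d in zip(vec, delta))
            # a step is valid only if every counter stays non-negative
            if all(0 <= x <= max_coord for x in nxt) and (q, nxt) not in seen:
                seen.add((q, nxt))
                queue.append((q, nxt))
    return False

# Example: a 2-counter VASS that transfers counter 1 into counter 2.
ts = [("a", (-1, 1), "a")]
print(vass_reach_bounded(ts, ("a", (3, 0)), ("a", (0, 3)), max_coord=10))  # True
```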

Related Content

With the rapid popularization of big data, the dichotomy between tractable and intractable problems in big data computing has shifted. Sublinear time, rather than polynomial time, has recently been regarded as the new standard of tractability in big data computing. This change creates a demand for new methodologies in computational complexity theory in the context of big data. Building on prior work on sublinear-time complexity classes [DBLP:journals/tcs/GaoLML20], this paper focuses on sublinear-time reductions specialized for problems in big data computing. First, the pseudo-sublinear-time reduction is proposed, and the complexity classes P and PsT are proved to be closed under it. To establish PsT-intractability for certain problems in P, we find the first problem in $\mathrm{P} \setminus \mathrm{PsT}$. Using the pseudo-sublinear-time reduction, we prove that the nearest edge query is in PsT but the algebraic equation root problem is not. Then, the pseudo-polylog-time reduction is introduced, and the complexity class PsPL is proved to be closed under it. PsT-completeness under this reduction is regarded as evidence that some problems cannot be solved in polylogarithmic time after a polynomial-time preprocessing, unless PsT = PsPL. We prove that all PsT-complete problems are also P-complete, which gives a further direction for identifying PsT-complete problems.
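To make the phrase "solved in polylogarithmic time after a polynomial-time preprocessing" concrete, one plausible formalization (our paraphrase, not necessarily the paper's exact definition) is: a problem $L$ belongs to such a two-phase class if there exist a polynomial-time preprocessing function $P$ and a query algorithm $A$ with

\[
A\big(P(x), x\big) = L(x) \ \text{for all inputs } x,
\]

where $A$ runs in sublinear time $o(|x|)$ for PsT-style classes, or in polylogarithmic time $O(\log^{c}|x|)$ for PsPL-style classes, given the preprocessed structure $P(x)$ and random access to $x$. PsT-completeness under pseudo-polylog-time reductions then witnesses that the query phase cannot be sped up to polylogarithmic time unless PsT = PsPL.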

We present a numerical stability analysis of the immersed boundary (IB) method for a special case which is constructed so that Fourier analysis is applicable. We examine the stability of the IB method with discrete Fourier transforms defined differently on the fluid grid and on the boundary grid. This approach gives accurate theoretical results about the stability boundary, since it takes into account the effect of the IB method's spreading kernel on numerical stability. In this paper, the spreading kernel is the standard 4-point IB delta function. A three-dimensional incompressible viscous flow and a no-slip planar boundary are considered. The case of a planar elastic membrane is also analyzed using the same framework, and it serves as an example of the many possible generalizations of our theory. We present numerical results and show that the observed stability behaviors are consistent with the predictions of our theory.
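The standard 4-point IB delta function mentioned above is Peskin's widely used piecewise kernel; the following NumPy sketch (ours, for illustration) reproduces it and checks its partition-of-unity property on unit-spaced samples:

```python
import numpy as np

def ib_delta_4pt(r):
    """Peskin's standard 4-point immersed-boundary kernel phi(r).

    Supported on |r| < 2 (two grid cells on each side); used to spread
    Lagrangian boundary forces to the Eulerian fluid grid and to
    interpolate grid velocities back to the boundary.
    """
    r = np.abs(np.asarray(r, dtype=float))
    phi = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    phi[inner] = (3 - 2 * r[inner] + np.sqrt(1 + 4 * r[inner] - 4 * r[inner] ** 2)) / 8
    phi[outer] = (5 - 2 * r[outer] - np.sqrt(-7 + 12 * r[outer] - 4 * r[outer] ** 2)) / 8
    return phi

# The translates of the kernel sum to 1 for any offset of the sample grid:
x = np.arange(-2, 3) + 0.3   # unit-spaced points with an arbitrary shift
print(ib_delta_4pt(x).sum())  # ~1.0
```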

We study the Hamilton cycle problem with input a random graph G = G(n,p) in two settings. In the first, G is given to us in the form of randomly ordered adjacency lists, while in the second we are given the adjacency matrix of G. In each setting we give a deterministic algorithm that w.h.p. either finds a Hamilton cycle or returns a certificate that no such cycle exists, for p > 0. The running times of our algorithms are w.h.p. O(n) and O(n/p) respectively, each being best possible in its own setting.

We study the Maximum Independent Set (MIS) problem under the notion of stability introduced by Bilu and Linial (2010): a weighted instance of MIS is $\gamma$-stable if it has a unique optimal solution that remains the unique optimum under multiplicative perturbations of the weights by a factor of at most $\gamma\geq 1$. The goal then is to efficiently recover the unique optimal solution. In this work, we solve stable instances of MIS on several graph classes: we solve $\widetilde{O}(\Delta/\sqrt{\log \Delta})$-stable instances on graphs of maximum degree $\Delta$, $(k - 1)$-stable instances on $k$-colorable graphs, and $(1 + \varepsilon)$-stable instances on planar graphs. For general graphs, we present a strong lower bound showing that there are no efficient algorithms for $O(n^{\frac{1}{2} - \varepsilon})$-stable instances of MIS, assuming the planted clique conjecture. We also give an algorithm for $(\varepsilon n)$-stable instances. As a by-product of our techniques, we give algorithms and lower bounds for stable instances of Node Multiway Cut. Furthermore, we prove a general result showing that the integrality gap of convex relaxations of several maximization problems reduces dramatically on stable instances. Moreover, we initiate the study of certified algorithms, a notion recently introduced by Makarychev and Makarychev (2018), which is a class of $\gamma$-approximation algorithms that satisfy one crucial property: the solution returned is optimal for a perturbation of the original instance. We obtain $\Delta$-certified algorithms for MIS on graphs of maximum degree $\Delta$, and $(1+\varepsilon)$-certified algorithms on planar graphs. Finally, we analyze the algorithm of Berman and Furer (1994) and prove that it is a $\left(\frac{\Delta + 1}{3} + \varepsilon\right)$-certified algorithm for MIS on graphs of maximum degree $\Delta$ where all weights are equal to 1.
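For concreteness, the stability notion used above can be written out; the display below is our restatement of the Bilu–Linial definition for weighted MIS:

\[
(G, w)\ \text{is } \gamma\text{-stable} \iff \text{for every } w' \text{ with } w(v) \le w'(v) \le \gamma\, w(v) \text{ for all } v \in V(G),\ (G, w') \text{ has the same unique optimal independent set as } (G, w).
\]

In words, an adversary may scale each vertex weight independently by any factor up to $\gamma$, and the optimal solution must not move; larger $\gamma$ therefore means a stronger structural assumption and an easier recovery problem.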

In budget-constrained settings aimed at mitigating unfairness, like law enforcement, it is essential to prioritize the sources of unfairness before taking measures to mitigate them in the real world. Unlike previous works, which only caution against possible discrimination and de-bias data after it has been generated, this work provides a toolkit to mitigate unfairness during data generation, given by the Unfair Edge Prioritization algorithm, in addition to de-biasing data after generation, given by the Discrimination Removal algorithm. We assume that a non-parametric Markovian causal model representative of the data generation procedure is given. The edges emanating from the sensitive nodes in the causal graph, such as race, are assumed to be the sources of unfairness. We first quantify Edge Flow in any edge X -> Y, which is the belief of observing a specific value of Y due to the influence of a specific value of X along X -> Y. We then quantify Edge Unfairness by formulating a non-parametric model in terms of edge flows. We then prove that cumulative unfairness towards sensitive groups in a decision, like race in a bail decision, is non-existent when edge unfairness is absent. We prove this result for the non-trivial non-parametric model setting, in which cumulative unfairness cannot be expressed directly in terms of edge unfairness. We then measure the potential to mitigate cumulative unfairness when edge unfairness is decreased. Based on these measurements, we propose the Unfair Edge Prioritization algorithm, which can then be used by policymakers. We also propose the Discrimination Removal Procedure, which de-biases a data distribution by eliminating optimization constraints that grow exponentially in the number of sensitive attributes and the values taken by them. Extensive experiments validate the theorem and the specifications used for quantifying the above measures.

In this paper, a class of optimization problems with uncertain linear constraints is discussed. It is assumed that the constraint coefficients are random vectors whose probability distributions are only partially known. Possibility theory is used to model the imprecise probabilities. In one interpretation, a possibility distribution (a membership function of a fuzzy set) on the set of coefficient realizations induces a necessity measure, which in turn defines a family of probability distributions on this set. The distributionally robust approach is then used to transform the imprecise constraints into deterministic counterparts. Namely, the uncertain left-hand side of each constraint is replaced with its expected value with respect to the worst probability distribution that can occur. It is shown how to represent the resulting problem using linear or second-order cone constraints. This leads to problems which are computationally tractable for a wide class of optimization models, in particular for linear programming.
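The reformulation step described above admits a one-line summary; the display below is our illustration of the generic distributionally robust scheme, where $\tilde{a}$ is the random coefficient vector of one constraint and $\mathcal{P}$ denotes the family of probability distributions induced by the necessity measure:

\[
\tilde{a}^{T} x \le b
\quad\longrightarrow\quad
\sup_{\mathbb{P} \in \mathcal{P}} \operatorname{E}_{\mathbb{P}}\!\left[\tilde{a}\right]^{T} x \le b.
\]

Once the supremum over $\mathcal{P}$ is evaluated, the constraint is deterministic in $x$; the paper's contribution is showing that, for the necessity-induced families $\mathcal{P}$, this worst-case expectation can be represented with linear or second-order cone constraints.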

In blind compression of quantum states, a sender Alice is given a specimen of a quantum state $\rho$ drawn from a known ensemble (but without knowing what $\rho$ is), and she transmits sufficient quantum data to a receiver Bob so that he can decode a near-perfect specimen of $\rho$. For many such states drawn i.i.d. from the ensemble, the asymptotically achievable rate is the number of qubits that must be transmitted per state. The Holevo information is a lower bound on the achievable rate, and is attained for pure state ensembles, or in the related scenario of entanglement-assisted visible compression of mixed states, wherein Alice knows which state is drawn. In this paper, we prove a general and robust lower bound on the achievable rate for ensembles of classical states, which holds even in the least demanding setting, when Alice and Bob share free entanglement and a constant per-copy error is allowed. We apply the bound to a specific ensemble of only two states and prove a near-maximal separation (saturating the dimension bound in leading order) between the best achievable rate and the Holevo information for constant error. This also implies that the ensemble is incompressible: compression does not reduce the communication cost by much. Since the states are classical, the observed incompressibility is not fundamentally quantum mechanical. We lower bound the difference between the achievable rate and the Holevo information in terms of quantitative limitations on cloning the specimen or distinguishing the two classical states.
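For reference, the Holevo information of an ensemble $\{p_x, \rho_x\}$ discussed above is the standard quantity

\[
\chi\big(\{p_x, \rho_x\}\big) \;=\; S\Big(\sum_x p_x \rho_x\Big) \;-\; \sum_x p_x\, S(\rho_x),
\qquad
S(\rho) = -\operatorname{Tr}(\rho \log \rho),
\]

where $S$ is the von Neumann entropy; the near-maximal separation result says that for the constructed two-state classical ensemble, the best achievable blind rate exceeds $\chi$ by almost the largest amount the dimension allows.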

Let $G$ be a strongly connected directed graph and $u,v,w\in V(G)$ be three vertices. Then $w$ strongly resolves $u$ to $v$ if there is a shortest $u$-$w$-path containing $v$ or a shortest $w$-$v$-path containing $u$. A set $R\subseteq V(G)$ of vertices is a strong resolving set for a directed graph $G$ if for every pair of vertices $u,v\in V(G)$ there is at least one vertex in $R$ that strongly resolves $u$ to $v$ and at least one vertex in $R$ that strongly resolves $v$ to $u$. The distances of the vertices of $G$ to and from the vertices of a strong resolving set $R$ uniquely define the connectivity structure of the graph. The strong metric dimension of a directed graph $G$ is the size of a smallest strong resolving set for $G$. The decision problem Strong Metric Dimension asks, for a given directed graph $G$ and a given number $r$, whether $G$ has a strong resolving set of size at most $r$. In this paper we study undirected and directed co-graphs and introduce linear-time algorithms for Strong Metric Dimension. These algorithms can also compute strong resolving sets for co-graphs in linear time.
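The definitions above translate directly into a brute-force procedure; the Python sketch below (ours, exponential-time, in contrast to the paper's linear-time algorithms for co-graphs) uses the standard observation that $v$ lies on a shortest $u$-$w$-path iff $d(u,v) + d(v,w) = d(u,w)$:

```python
from itertools import combinations, permutations

def distances(n, arcs):
    """All-pairs shortest path lengths via Floyd-Warshall; arcs is a set of (i, j)."""
    INF = float("inf")
    d = [[0 if i == j else (1 if (i, j) in arcs else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def strongly_resolves(d, w, u, v):
    # w strongly resolves u to v: v lies on a shortest u-w-path,
    # or u lies on a shortest w-v-path.
    return d[u][v] + d[v][w] == d[u][w] or d[w][u] + d[u][v] == d[w][v]

def strong_metric_dimension(n, arcs):
    """Smallest strong resolving set of a strongly connected digraph on {0..n-1}."""
    d = distances(n, arcs)
    for size in range(1, n + 1):
        for R in combinations(range(n), size):
            # every ordered pair (u, v) must be strongly resolved by some w in R
            if all(any(strongly_resolves(d, w, u, v) for w in R)
                   for u, v in permutations(range(n), 2)):
                return set(R)

# Example: the directed 3-cycle 0 -> 1 -> 2 -> 0 has strong metric dimension 2.
print(strong_metric_dimension(3, {(0, 1), (1, 2), (2, 0)}))  # e.g. {0, 1}
```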

Let CMSO denote the counting monadic second-order logic of graphs. We give a constructive proof that for some computable function $f$, there is an algorithm $\mathfrak{A}$ that takes as input a CMSO sentence $\varphi$, a positive integer $t$, and a connected graph $G$ of maximum degree at most $\Delta$, and determines, in time $f(|\varphi|,t)\cdot 2^{O(\Delta \cdot t)}\cdot |G|^{O(t)}$, whether $G$ has a supergraph $G'$ of treewidth at most $t$ such that $G'\models \varphi$. The algorithmic metatheorem described above sheds new light on certain unresolved questions within the framework of graph completion algorithms. In particular, using this metatheorem, we provide an explicit algorithm that determines, in time $f(d)\cdot 2^{O(\Delta \cdot d)}\cdot |G|^{O(d)}$, whether a connected graph of maximum degree $\Delta$ has a planar supergraph of diameter at most $d$. Additionally, we show that for each fixed $k$, the problem of determining whether $G$ has a $k$-outerplanar supergraph of diameter at most $d$ is strongly uniformly fixed-parameter tractable with respect to the parameter $d$. This result can be generalized in two directions. First, the diameter parameter can be replaced by any contraction-closed effectively CMSO-definable parameter $\mathbf{p}$. Examples of such parameters are the vertex cover number, the domination number, and many other contraction-bidimensional parameters. In the second direction, the planarity requirement can be relaxed to bounded genus and, more generally, to bounded local treewidth.

We study timed systems in which some timing features are unknown parameters. Parametric timed automata (PTAs) are a classical formalism for such systems, but most interesting problems are undecidable for them. Notably, the parametric reachability emptiness problem, i.e., whether the set of parameter valuations for which some given discrete state is reachable is empty, is undecidable. Lower-bound/upper-bound parametric timed automata (L/U-PTAs) achieve decidability for reachability properties by enforcing a separation between parameters used as upper bounds in the automaton constraints and those used as lower bounds. In this paper, we first study reachability. We exhibit a subclass of PTAs (namely integer-points PTAs) with bounded rational-valued parameters for which the parametric reachability emptiness problem is decidable. Using this class, we present further results improving the boundary between decidability and undecidability for PTAs and their subclasses such as L/U-PTAs. We then study liveness. We prove that: (1) the existence of at least one parameter valuation for which there exists an infinite run in an L/U-PTA is PSPACE-complete; (2) the existence of a parameter valuation such that the system has a deadlock is however undecidable; (3) the problem of the existence of a valuation for which a run remains in a given set of locations exhibits a very thin border between decidability and undecidability.
