
The support of a flow $x$ in a network is the subdigraph induced by the arcs $ij$ for which $x_{ij}>0$. We discuss a number of results on flows in networks where we put certain restrictions on the structure of the support of the flow. Many of these problems are NP-hard because they generalize linkage problems for digraphs. For example, deciding whether a network ${\cal N}=(D,s,t,c)$ has a maximum flow $x$ such that the maximum out-degree of the support $D_x$ of $x$ is at most 2 is NP-complete, as it contains the 2-linkage problem as a very special case. Another problem which is NP-complete for the same reason is that of determining the maximum flow we can send from $s$ to $t$ along 2 paths (called a maximum 2-path-flow) in ${\cal N}$. Baier et al. (2005) gave a polynomial algorithm which finds a 2-path-flow $x$ whose value is at least $\frac{2}{3}$ of the value of an optimum 2-path-flow. This is best possible unless P=NP. They also obtained a $\frac{2}{p}$-approximation for the maximum value of a $p$-path-flow for every $p\geq 2$. In this paper we give an algorithm which gets within a factor $\frac{1}{H(p)}$ of the optimum solution, where $H(p)$ is the $p$-th harmonic number ($H(p) \sim \ln(p)$). This improves the approximation bound due to Baier et al. when $p\geq 5$. We show that in the case where the network is acyclic, we can find a maximum $p$-path-flow in polynomial time for every $p$. We determine the complexity of a number of related problems concerning the structure of flows. For the special case of acyclic digraphs, some of the results we obtain are in some sense best possible.
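
To see where the harmonic bound overtakes $\frac{2}{p}$: $\frac{1}{H(p)} > \frac{2}{p}$ exactly when $H(p) < p/2$, which first happens at $p=5$. A minimal computation confirming the crossover (illustrative only):

```python
# Compare the 1/H(p) approximation ratio with the 2/p ratio of Baier et al.
from fractions import Fraction

def harmonic(p):
    """p-th harmonic number H(p) = 1 + 1/2 + ... + 1/p."""
    return sum(Fraction(1, k) for k in range(1, p + 1))

for p in range(2, 9):
    ratio = 1 / harmonic(p)  # exact rational arithmetic
    print(f"p={p}: 1/H(p)={float(ratio):.3f}  2/p={2/p:.3f}  "
          f"harmonic bound better: {ratio > Fraction(2, p)}")
```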

Related content


A random $m\times n$ matrix $S$ is an oblivious subspace embedding (OSE) with parameters $\epsilon>0$, $\delta\in(0,1/3)$ and $d\leq m\leq n$, if for any $d$-dimensional subspace $W\subseteq R^n$, $P\big(\,\forall_{x\in W}\ (1+\epsilon)^{-1}\|x\|\leq\|Sx\|\leq (1+\epsilon)\|x\|\,\big)\geq 1-\delta.$ It is known that the embedding dimension of an OSE must satisfy $m\geq d$, and for any $\theta > 0$, a Gaussian embedding matrix with $m\geq (1+\theta) d$ is an OSE with $\epsilon = O_\theta(1)$. However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having $s\ll m$ non-zeros per column, with applications to problems such as least squares regression and low-rank approximation. We show that, given any $\theta > 0$, an $m\times n$ random matrix $S$ with $m\geq (1+\theta)d$ consisting of randomly sparsified $\pm 1/\sqrt{s}$ entries and having $s= O(\log^4(d))$ non-zeros per column, is an oblivious subspace embedding with $\epsilon = O_{\theta}(1)$. Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve $m=O(d)$ embedding dimension, and it improves on $m=O(d\log(d))$ shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to construct even sparser non-oblivious embeddings, leading to the first subspace embedding with low distortion $\epsilon=o(1)$ and optimal embedding dimension $m=O(d/\epsilon^2)$ that can be applied in current matrix multiplication time.
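
As a concrete picture of the object in question, here is a minimal numerical sketch of a sparse sign embedding with $s$ nonzeros per column; all parameter values are illustrative, not the paper's construction:

```python
# Sparse sign embedding sketch: each column of S gets s nonzero entries
# equal to +-1/sqrt(s), placed in uniformly random rows. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, s = 10_000, 50, 200, 8   # ambient dim, subspace dim, embedding dim, nonzeros/col

S = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)
    S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

# Draw a vector from a random d-dimensional subspace W = range(U) and check
# how well its norm is preserved; a good OSE keeps this ratio near 1.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
x = U @ rng.standard_normal(d)
print(np.linalg.norm(S @ x) / np.linalg.norm(x))
```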

Most modern computing tasks are constrained to having digital electronic input and output data. Due to these constraints imposed by the user, any analog computing accelerator must perform an analog-to-digital conversion on its input data and a subsequent digital-to-analog conversion on its output data. This places performance limits on analog computing accelerator hardware. To avoid this, analog hardware would have to replace the full functionality of traditional digital electronic computer hardware, which is not currently possible for optical computing accelerators due to limitations in gain, input-output isolation, and information storage in current optical hardware. In our case study, we profiled 27 benchmarks on an analog optical Fourier transform and convolution accelerator. We estimate that an ideal optical accelerator that accelerates Fourier transforms and convolutions can produce an average speedup of 9.4 times and a median speedup of 1.9 times across the benchmark set. The case study shows that the optical Fourier transform and convolution accelerator produces significant speedup only for applications consisting exclusively of Fourier transforms (45.3 times) and convolutions (159.4 times).
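
The gap between the benchmark-wide averages and the Fourier/convolution-only numbers is the familiar Amdahl's-law effect; a toy calculation (Amdahl's law is our framing here, not necessarily the paper's exact cost model):

```python
# Amdahl's law: overall speedup when a fraction f of runtime is sped up by factor s.
def amdahl(f, s):
    return 1.0 / ((1.0 - f) + f / s)

# Illustrative fractions only: even a very fast FFT/convolution unit helps
# little unless nearly all of the runtime is FFTs or convolutions.
for f in (0.10, 0.50, 0.90, 0.99, 1.00):
    print(f"accelerable fraction {f:.2f}: speedup {amdahl(f, 159.4):.1f}x")
```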

We derive novel and sharp high-dimensional Berry--Esseen bounds for the sum of $m$-dependent random vectors over the class of hyper-rectangles exhibiting only a poly-logarithmic dependence on the dimension. Our results hold under minimal assumptions, such as non-degenerate covariances and finite third moments, and yield a sample complexity of order $\sqrt{m/n}$, aside from logarithmic terms, matching the optimal rates established in the univariate case. When specialized to the sums of independent non-degenerate random vectors, we obtain sharp rates under the weakest possible conditions. On the technical side, we develop an inductive relationship between anti-concentration inequalities and Berry--Esseen bounds, inspired by the classical Lindeberg swapping method and the concentration inequality approach for dependent data, that may be of independent interest.
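
For reference, the standard notions involved (textbook definitions; the display below is schematic and not the paper's exact bound):

```latex
% m-dependence (textbook): blocks separated by more than m indices are
% independent, i.e. (X_1,\dots,X_i) is independent of (X_j,\dots,X_n)
% whenever j - i > m. The quantity bounded, uniformly over hyper-rectangles:
\[
\sup_{R \in \mathcal{R}}
  \Bigl|\, P\Bigl( n^{-1/2}\sum_{i=1}^{n} X_i \in R \Bigr) - P( Z \in R ) \,\Bigr|,
\qquad
\mathcal{R} = \Bigl\{ \prod_{j=1}^{d} [a_j, b_j] : -\infty \le a_j \le b_j \le \infty \Bigr\},
\]
% where Z is a centered Gaussian vector matching the covariance of the
% normalized sum.
```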

We present algorithms that compute the terminal configurations for sandpile instances in $O(n \log n)$ time on trees and $O(n)$ time on paths, where $n$ is the number of vertices. The Abelian Sandpile model is a well-known model used in exploring self-organized criticality. Despite a large amount of work on other aspects of sandpiles, there have been limited results on efficiently computing the terminal state, known as the sandpile prediction problem. Our algorithms improve the previous best runtimes of $O(n \log^5 n)$ on trees [Ramachandran-Schild SODA '17] and $O(n \log n)$ on paths [Moore-Nilsson '99]. To do so, we move beyond the simulation of individual events by directly computing the number of firings for each vertex, accelerating the computation using splittable binary search trees. In addition, we give algorithms in $O(n)$ time on cliques and $O(n \log^2 n)$ time on pseudotrees. Towards solving the problem on general graphs, we provide a reduction that transforms the prediction problem on an arbitrary graph into problems on its subgraphs separated by any vertex set $P$. The reduction gives a time complexity of $O(\log^{|P|} n \cdot T)$, where $T$ denotes the total time for solving on each subgraph. We also give algorithms that work well with this reduction scheme.
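
For orientation, here is the toppling rule the prediction problem is about, in the path setting under one common convention (absorbing endpoints: chips falling off the ends vanish). This is exactly the event-by-event simulation the algorithms above avoid, and it can take time polynomial in the chip counts rather than in $n$:

```python
# Naive Abelian sandpile stabilization on a path of n vertices. A vertex
# holding >= 2 chips fires, sending one chip left and one right; chips
# pushed past either end are lost. Illustrative reference implementation.
def stabilize_path(chips):
    chips = list(chips)
    n = len(chips)
    unstable = [v for v in range(n) if chips[v] >= 2]
    while unstable:
        v = unstable.pop()
        if chips[v] < 2:
            continue
        fires = chips[v] // 2          # fire v as many times as possible at once
        chips[v] -= 2 * fires
        for u in (v - 1, v + 1):
            if 0 <= u < n:
                chips[u] += fires
                if chips[u] >= 2:
                    unstable.append(u)
    return chips

print(stabilize_path([0, 5, 0, 0]))    # terminal configuration: [1, 1, 1, 1]
```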

The \emph{$f$-fault-tolerant connectivity labeling} ($f$-FTC labeling) is a scheme of assigning each vertex and edge a small-size label such that one can determine the connectivity of two vertices $s$ and $t$ under the presence of at most $f$ faulty edges, only from the labels of $s$, $t$, and the faulty edges. This paper presents a new deterministic $f$-FTC labeling scheme attaining $O(f^2 \mathrm{polylog}(n))$-bit label size and a polynomial construction time, which settles the open problem left by Dory and Parter [PODC'21]. The key ingredient of our construction is a deterministic counterpart of the graph sketch technique by Ahn, Guha, and McGregor [SODA'12], obtained via a natural connection with the theory of error-correcting codes. This technique removes one major obstacle in derandomizing the Dory-Parter scheme. The whole scheme is obtained by combining this technique with a new deterministic graph sparsification algorithm derived from the seminal $\epsilon$-net theory, which is also of independent interest. As byproducts, our result yields the first deterministic fault-tolerant approximate distance labeling scheme with a non-trivial performance guarantee and an improved deterministic fault-tolerant compact routing scheme. We believe that our new technique is potentially useful in the future exploration of more efficient FTC labeling schemes and other related applications based on graph sketches.
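
The cancellation property that makes graph sketches work can be seen in a few lines. This is a simplified illustration of the Ahn-Guha-McGregor idea (real sketches use $\ell_0$-samplers to handle many crossing edges; the paper's contribution is a deterministic counterpart of this machinery):

```python
# Per-vertex sketch: the XOR of the IDs of its incident edges. XORing the
# sketches of a vertex set S cancels every edge with both endpoints in S,
# leaving the XOR of the edges crossing the cut (S, V - S). If exactly one
# edge crosses, its ID pops out exactly. Edge IDs here are arbitrary.
edges = {("a", "b"): 0b0110, ("b", "c"): 0b1010, ("c", "d"): 0b0011}

sketch = {}
for (u, v), eid in edges.items():
    sketch[u] = sketch.get(u, 0) ^ eid
    sketch[v] = sketch.get(v, 0) ^ eid

S = {"a", "b"}                          # one edge ("b","c") crosses the cut
cut_xor = 0
for v in S:
    cut_xor ^= sketch[v]
print(cut_xor == edges[("b", "c")])     # True: the crossing edge is recovered
```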

Subgraph and homomorphism counting are fundamental algorithmic problems. Given a constant-sized pattern graph $H$ and a large input graph $G$, we wish to count the number of $H$-homomorphisms/subgraphs in $G$. Given the massive sizes of real-world graphs and the practical importance of counting problems, we focus on when (near) linear time algorithms are possible. The seminal work of Chiba-Nishizeki (SICOMP 1985) shows that for bounded degeneracy graphs $G$, clique and $4$-cycle counting can be done in linear time. Recent works (Bera et al., SODA 2021, JACM 2022) show a dichotomy theorem characterizing the patterns $H$ for which $H$-homomorphism counting is possible in linear time, for bounded degeneracy inputs $G$. At the other end, Ne\v{s}et\v{r}il and Ossona de Mendez used their deep theory of "sparsity" to define bounded expansion graphs. They prove that, for all $H$, $H$-homomorphism counting can be done in linear time for bounded expansion inputs. What lies between? For a specific $H$, can we characterize input classes where $H$-homomorphism counting is possible in linear time? We discover a hierarchy of dichotomy theorems that precisely answer the above questions. We show the existence of an infinite sequence of graph classes $\mathcal{G}_0$ $\supseteq$ $\mathcal{G}_1$ $\supseteq$ ... $\supseteq$ $\mathcal{G}_\infty$ where $\mathcal{G}_0$ is the class of bounded degeneracy graphs, and $\mathcal{G}_\infty$ is the class of bounded expansion graphs. Fix any constant-sized pattern graph $H$. Let $LICL(H)$ denote the length of the longest induced cycle in $H$. We prove the following. If $LICL(H) < 3(r+2)$, then $H$-homomorphisms can be counted in linear time for inputs in $\mathcal{G}_r$. If $LICL(H) \geq 3(r+2)$, then $H$-homomorphism counting on inputs from $\mathcal{G}_r$ takes $\Omega(m^{1+\gamma})$ time for some constant $\gamma > 0$. We prove similar dichotomy theorems for subgraph counting.
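
To give the flavor of the Chiba-Nishizeki technique referenced above: order vertices by repeatedly deleting a minimum-degree vertex (a degeneracy order), orient each edge toward the later vertex, and count within out-neighborhoods. A compact sketch for triangle counting (a standard simplification, not the paper's algorithm):

```python
# Triangle counting via a degeneracy order: every vertex has few out-neighbors
# after orienting edges forward, giving O(m * degeneracy) total work.
import heapq

def triangles(adj):
    deg = {v: len(nb) for v, nb in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    order, seen = {}, set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in seen or d > deg[v]:        # skip stale heap entries
            continue
        seen.add(v)
        order[v] = len(order)
        for u in adj[v]:
            if u not in seen:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    out = {v: {u for u in adj[v] if order[u] > order[v]} for v in adj}
    # Each triangle is counted exactly once, at its earliest-ordered edge.
    return sum(len(out[v] & out[u]) for v in adj for u in out[v])

K4 = {v: {u for u in "abcd" if u != v} for v in "abcd"}
print(triangles(K4))                        # 4 triangles in K4
```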

We show that the problem of whether a query is equivalent to a query of tree-width $k$ is decidable, for the class of Unions of Conjunctive Regular Path Queries with two-way navigation (UC2RPQs). A previous result by Barcel\'o, Romero, and Vardi [SIAM Journal on Computing, 2016] has shown decidability for the case $k=1$; here we extend this result, showing that decidability in fact holds for any arbitrary $k\geq 1$. The algorithm is in 2ExpSpace, but for the restricted yet practically relevant case where all regular expressions of the query are of the form $a^*$ or $(a_1 + \dotsb + a_n)$, we show that the complexity of the problem drops to $\Pi^P_2$. We also investigate the related problem of approximating a UC2RPQ by queries of small tree-width. We exhibit an algorithm which, for any fixed number $k$, builds the maximal under-approximation of tree-width $k$ of a UC2RPQ. The maximal under-approximation of tree-width $k$ of a query $q$ is a query $q'$ of tree-width $k$ which is contained in $q$ in a maximal and unique way, that is, such that for every query $q''$ of tree-width $k$, if $q''$ is contained in $q$ then $q''$ is also contained in $q'$. Our approach is shown to be robust, in the sense that it also allows testing equivalence with queries of a given path-width, it covers the previously known result for $k=1$, and it extends to testing whether a (one-way) UCRPQ is equivalent to a UCRPQ of a given tree-width (or path-width).
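
A small, hypothetical example of the restricted fragment, for illustration only:

```latex
% A C2RPQ whose atoms all come from the restricted fragment above: each
% regular expression is a^* or a union of single letters (a_1 + ... + a_n).
\[
q(x, y) \;\leftarrow\;
  x \xrightarrow{\,a^*\,} z \;\wedge\;
  z \xrightarrow{\,(b + c)\,} y \;\wedge\;
  y \xrightarrow{\,d^*\,} x
\]
% The underlying graph on the variables {x, y, z} is a triangle, hence of
% tree-width 2; the equivalence problem asks, e.g., whether q defines the
% same answers as some UC2RPQ of tree-width 1 (an acyclic query).
```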

We study the emptiness and $\lambda$-reachability problems for unary and binary Probabilistic Finite Automata (PFA) and characterise the complexity of these problems in terms of the degree of ambiguity of the automaton and the size of its alphabet. Our main result is that emptiness and $\lambda$-reachability are solvable in EXPTIME for polynomially ambiguous unary PFA and, if in addition the transition matrix is binary, that they are in NP. In contrast to the Skolem-hardness of the $\lambda$-reachability and emptiness problems for exponentially ambiguous unary PFA, we show that these problems are NP-hard even for finitely ambiguous unary PFA. For binary polynomially ambiguous PFA with fixed and commuting transition matrices, we prove NP-hardness of the $\lambda$-reachability (dimension 9), nonstrict emptiness (dimension 37) and strict emptiness (dimension 40) problems.
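
For concreteness, a unary PFA is given by an initial distribution $\pi$, a single stochastic matrix $M$, and a vector $\eta$ marking accepting states, with acceptance probability $f(a^n) = \pi M^n \eta$; roughly, $\lambda$-reachability asks for a word hitting the value $\lambda$ exactly, while (strict) emptiness asks for one exceeding it. A toy check of the latter, with illustrative matrices:

```python
# Acceptance probability of the unary word a^n for a PFA (pi, M, eta).
# Strict emptiness: is some f(a^n) > lambda? Matrices are illustrative.
import numpy as np

pi = np.array([1.0, 0.0, 0.0])                 # start in state 0
M = np.array([[0.5, 0.5, 0.0],                 # row-stochastic transitions
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
eta = np.array([0.0, 0.0, 1.0])                # state 2 is accepting

lam = 0.9
for n in range(20):
    p = pi @ np.linalg.matrix_power(M, n) @ eta
    if p > lam:
        print(f"f(a^{n}) = {p:.4f} > {lam}: the cutpoint language is non-empty")
        break
```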

A property of prefix codes called strong monotonicity is introduced. It is then proven that, for a prefix code $C$ for a given probability distribution, the following are equivalent: (i) $C$ is expected-length minimal; (ii) $C$ is length equivalent to a Huffman code; and (iii) $C$ is complete and strongly monotone. Three relations are also introduced between prefix code trees, called same-parent, same-row, and same-probability swap equivalence, and it is shown that for a given source, all Huffman codes are same-parent, same-probability swap equivalent, and all expected-length minimal prefix codes are same-row, same-probability swap equivalent.
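
To make (i)-(iii) concrete, one can build a Huffman code and check completeness via the Kraft sum; a small sketch with exact arithmetic (distribution illustrative):

```python
# Build Huffman codeword lengths and check completeness (Kraft sum = 1) and
# the expected length of the resulting code.
import heapq
from fractions import Fraction

def huffman_lengths(probs):
    """Codeword lengths of a Huffman code for the given distribution."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1            # each merge deepens the merged leaves
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [Fraction(2, 5), Fraction(1, 5), Fraction(1, 5), Fraction(1, 5)]
L = huffman_lengths(probs)
print("lengths:", L)
print("Kraft sum:", sum(Fraction(1, 2**l) for l in L))     # 1 => complete
print("expected length:", sum(p * l for p, l in zip(probs, L)))
```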

Many current applications use recommendations to modify natural user behavior, for instance to increase the number of sales or the time spent on a website. This creates a gap between the final recommendation objective and the classical setup, where recommendation candidates are evaluated by their coherence with past user behavior, by predicting either the missing entries in the user-item matrix or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome relative to organic user behavior. We show that this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes of a biased recommendation policy and predicts recommendation outcomes under random exposure. We compare our method against state-of-the-art factorization methods as well as new causal recommendation approaches, and show significant improvements.
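
One standard way to "predict recommendation outcomes under a fully random policy" from biased logs is inverse propensity scoring; the sketch below illustrates that correction on synthetic data (IPS is a common baseline technique, not necessarily the paper's algorithm):

```python
# Inverse propensity scoring (IPS): reweight logged outcomes so that data
# collected under a biased recommendation policy estimates what would happen
# under uniformly random exposure. Synthetic, illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_logs = 5, 200_000
p_click = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # P(click | item shown)
logging = np.array([0.40, 0.30, 0.15, 0.10, 0.05])   # biased logging policy

shown = rng.choice(n_items, size=n_logs, p=logging)
clicked = (rng.random(n_logs) < p_click[shown]).astype(float)

true_uniform = p_click.mean()                # value of random exposure: 0.31
naive = clicked.mean()                       # value of the *logging* policy
ips = np.mean(clicked * (1.0 / n_items) / logging[shown])

print(f"true value under random exposure: {true_uniform:.3f}")
print(f"naive average of logged clicks:   {naive:.3f}")  # reflects logging bias
print(f"IPS estimate from biased logs:    {ips:.3f}")    # close to true value
```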
