
Synthesis consists in deciding whether a given labeled transition system (TS) $A$ can be implemented by a net $N$ of type $\tau$. In case of a negative decision, it may be possible to convert $A$ into an implementable TS $B$ by applying modification techniques such as relabeling edges that previously shared a label, or suppressing edges, states, or events. It may, however, be useful to limit the number of such modifications, in order to stay close to the original problem, or to optimize the technique. In this paper, we show that most of the corresponding problems are NP-complete if $\tau$ corresponds to the type of flip-flop nets or certain flip-flop net derivatives.

Related content

We revisit the problem of estimating the profile (also known as the rarity) in the data stream model. Given a sequence of $m$ elements from a universe of size $n$, its profile is a vector $\phi$ whose $i$-th entry $\phi_i$ represents the number of distinct elements that appear in the stream exactly $i$ times. A classic paper by Datar and Muthukrishnan from 2002 gave an algorithm which estimates any entry $\phi_i$ up to an additive error of $\pm \epsilon D$ using $O((1/\epsilon^2)(\log n + \log m))$ bits of space, where $D$ is the number of distinct elements in the stream. In this paper, we considerably improve on this result by designing an algorithm which simultaneously estimates many coordinates of the profile vector $\phi$ up to small overall error. We give an algorithm which, with constant probability, produces an estimated profile $\hat\phi$ with the following guarantees in terms of space and estimation error:
- For any constant $\tau$, with $O(1/\epsilon^2 + \log n)$ bits of space, $\sum_{i=1}^\tau |\phi_i - \hat\phi_i| \leq \epsilon D$.
- With $O((1/\epsilon^2)\log(1/\epsilon) + \log n + \log \log m)$ bits of space, $\sum_{i=1}^m |\phi_i - \hat\phi_i| \leq \epsilon m$.
In addition to bounding the error across multiple coordinates, our space bounds separate the terms that depend on $1/\epsilon$ from those that depend on $n$ and $m$. We prove matching lower bounds on space in both regimes. Applying our profile estimation algorithm gives estimates within error $\pm \epsilon D$ of several symmetric functions of frequencies in $O(1/\epsilon^2 + \log n)$ bits. This generalizes space-optimal algorithms for the distinct elements problem to other problems, including estimating the Huber and Tukey losses as well as frequency cap statistics.
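
For concreteness, the following minimal reference computation materializes the profile vector exactly from its definition. This is a plain, non-streaming sketch; the paper's point is to approximate $\phi$ in small space, which this snippet does not attempt.

```python
from collections import Counter

def exact_profile(stream):
    """Exact reference for the profile: phi[i] is the number of
    distinct elements that appear in the stream exactly i times."""
    freq = Counter(stream)                # element -> frequency
    return dict(Counter(freq.values()))   # frequency -> # of distinct elements

# 'a' appears 3 times, 'b' twice, 'c' once, so phi_1 = phi_2 = phi_3 = 1.
print(exact_profile(["a", "b", "a", "c", "b", "a"]))  # {3: 1, 2: 1, 1: 1}
```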

We introduce innovative algorithms for computing exact or approximate (minimum-norm) solutions to $Ax=b$ or the {\it normal equation} $A^TAx=A^Tb$, where $A$ is an $m \times n$ real matrix of arbitrary rank. We present more efficient algorithms when $A$ is symmetric PSD. First, we introduce the {\it Triangle Algorithm} (TA), a {\it convex-hull membership} algorithm which, given $b_k=Ax_k$ in the ellipsoid $E_{A,\rho}=\{Ax: \Vert x \Vert \leq \rho\}$, either computes an improved approximation $b_{k+1}=Ax_{k+1}$ or proves $b \not \in E_{A,\rho}$. We then give a dynamic variant of TA, the {\it Centering Triangle Algorithm} (CTA), which generates residuals $r_k=b -Ax_k$ via the iteration $F_1(r)=r-(r^THr/r^TH^2r)Hr$, where $H=AA^T$. If $A$ is symmetric PSD, $H$ can be taken as $A$. Next, for each $t=1, \dots, m$, we derive $F_t(r)=r- \sum_{i=1}^t \alpha_{t,i}(r) H^i r$, whose iterations correspond to a Krylov subspace method with restart. If $\kappa^+(H)$ is the ratio of the largest to smallest positive eigenvalues of $H$ and $Ax=b$ is consistent, then in $k=O({\kappa^+(H)}{t^{-1}} \ln \varepsilon^{-1})$ iterations of $F_t$ we have $\Vert r_k \Vert \leq \varepsilon$. Each iteration takes $O(tN+t^3)$ operations, where $N$ is the number of nonzero entries in $A$. By applying $F_t$ directly to the normal equation, we get $\Vert A^TAx_k - A^Tb \Vert \leq \varepsilon$ in $O({\kappa^+(AA^T)}{t}^{-1} \ln \varepsilon^{-1})$ iterations. On the other hand, given any residual $r$, we can compute $s$, the degree of its minimal polynomial with respect to $H$, in $O(sN+s^3)$ operations; then $F_s(r)$ gives the minimum-norm solution of $Ax=b$ or an exact solution of $A^TAx=A^Tb$. The proposed algorithms are simple to implement and theoretically robust. We present sample computational results comparing the performance of CTA with the CG and GMRES methods. The results support CTA as a highly competitive option.
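
Since the abstract spells out the $t=1$ iteration, here is a minimal NumPy sketch of it, with our own naming and stopping rule rather than the authors' implementation. It maintains $x_k$ alongside the residual so that $r_k = b - Ax_k$ throughout; the general $F_t$, restarts, and the minimal-polynomial step are not reproduced.

```python
import numpy as np

def cta_f1(A, b, tol=1e-8, max_iter=10_000):
    """Sketch of the t = 1 Centering Triangle Algorithm iteration:
    r <- r - (r^T H r / r^T H^2 r) H r with H = A A^T (never formed)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Hr = A @ (A.T @ r)               # H r without forming H explicitly
        alpha = (r @ Hr) / (Hr @ Hr)     # r^T H r / r^T H^2 r (H symmetric)
        x += alpha * (A.T @ r)           # keeps r = b - A x consistent
        r -= alpha * Hr
    return x, r

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ rng.standard_normal(20)          # consistent right-hand side
x, r = cta_f1(A, b)
print(np.linalg.norm(r))                 # residual driven to ~1e-8
```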

The $\textit{planar slope number}$ $psn(G)$ of a planar graph $G$ is the minimum number of edge slopes in a planar straight-line drawing of $G$. It is known that $psn(G) \in O(c^\Delta)$ for every planar graph $G$ of maximum degree $\Delta$. This upper bound has been improved to $O(\Delta^5)$ if $G$ has treewidth three, and to $O(\Delta)$ if $G$ has treewidth two. In this paper we prove $psn(G) \leq \max\{4,\Delta\}$ when $G$ is a Halin graph, and thus has treewidth three. Furthermore, we present the first polynomial upper bound on the planar slope number for a family of graphs having treewidth four: namely, we show that $O(\Delta^2)$ slopes suffice for nested pseudotrees.

Incomplete LU (ILU) smoothers are effective in the algebraic multigrid (AMG) $V$-cycle for reducing high-frequency components of the error. However, the requisite direct triangular solves are comparatively slow on GPUs. Previous work has demonstrated the advantages of Jacobi iteration as an alternative to direct solution of these systems. Depending on the threshold and fill-level parameters chosen, the factors can be highly non-normal and Jacobi is unlikely to converge in a low number of iterations. We demonstrate that row scaling can reduce the departure from normality, allowing us to replace the inherently sequential solve with a rapidly converging Richardson iteration. There are several advantages beyond the lower compute time. Scaling is performed locally for a diagonal block of the global matrix because it is applied directly to the factor. Further, an ILUT Schur complement smoother maintains a constant GMRES iteration count as the number of MPI ranks increases, and thus parallel strong-scaling is improved. Our algorithms have been incorporated into hypre, and we demonstrate improved time to solution for linear systems arising in the Nalu-Wind and PeleLM pressure solvers. For large problem sizes, GMRES$+$AMG executes at least five times faster when using iterative triangular solves compared with direct solves on massively-parallel GPUs.
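
As a rough illustration of the idea, the sketch below replaces a sequential forward substitution with a few Richardson sweeps on a row-scaled lower-triangular factor. It is dense and CPU-bound, unlike the hypre GPU kernels, and the unit-diagonal row scaling and sweep count are our illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def richardson_tri_solve(L, b, sweeps=5):
    """Approximately solve L x = b (L lower triangular) without a
    sequential solve: scale rows to a unit diagonal, then iterate
    x <- x + (b' - L' x), i.e., Richardson with omega = 1."""
    d = np.diag(L)
    Ls, bs = L / d[:, None], b / d       # row scaling: unit diagonal
    x = bs.copy()                        # Jacobi-style initial guess D^{-1} b
    for _ in range(sweeps):
        x = x + (bs - Ls @ x)            # each sweep is fully parallel
    return x

rng = np.random.default_rng(1)
L = np.tril(0.01 * rng.standard_normal((100, 100))) + np.eye(100)
b = rng.standard_normal(100)
# Compare a few parallel sweeps against exact forward substitution.
print(np.linalg.norm(richardson_tri_solve(L, b) - np.linalg.solve(L, b)))
```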

Top-$k$ frequent items detection is a fundamental task in data stream mining. Many promising solutions have been proposed to improve memory efficiency while still maintaining high accuracy for detecting the Top-$k$ items. Beyond the memory-efficiency concern, users could suffer privacy loss when participating in the task without proper protection, since their contributed local data streams may continually leak sensitive individual information. However, most existing works focus on addressing either the memory-efficiency problem or the privacy concerns, but seldom both jointly, and thus cannot achieve a satisfactory tradeoff between memory efficiency, privacy protection, and detection accuracy. In this paper, we present a novel framework, HG-LDP, that achieves accurate Top-$k$ item detection at bounded memory expense while providing rigorous local differential privacy (LDP) protection. Specifically, we identify two key challenges naturally arising in the task, which reveal that directly applying existing LDP techniques leads to an inferior ``accuracy-privacy-memory efficiency'' tradeoff. We therefore instantiate three advanced schemes under the framework by designing novel LDP randomization methods that address the hurdles caused by the large size of the item domain and by the limited space of the memory. We conduct comprehensive experiments on both synthetic and real-world datasets, showing that the proposed schemes achieve a superior ``accuracy-privacy-memory efficiency'' tradeoff, saving $2300\times$ memory over baseline methods when the item domain size is $41,270$. Our code is open-sourced via the link.
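
For background, the sketch below implements a standard baseline LDP frequency oracle, $k$-ary (generalized) randomized response. It is emphatically not the paper's HG-LDP schemes; it is included to show why naive mechanisms struggle: both the report and the estimator range over the full item domain of size $d$, and estimation variance grows with $d$.

```python
import math, random
from collections import Counter

def krr_report(item, domain, eps):
    """k-ary randomized response: report the true item with probability
    e^eps / (e^eps + d - 1), otherwise a uniformly random other item."""
    d = len(domain)
    if random.random() < math.exp(eps) / (math.exp(eps) + d - 1):
        return item
    other = random.choice(domain)
    while other == item:
        other = random.choice(domain)
    return other

def krr_estimate(reports, domain, eps):
    """Unbiased frequency estimates recovered from the perturbed reports."""
    d, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + d - 1)
    q = 1.0 / (math.exp(eps) + d - 1)
    counts = Counter(reports)
    return {v: (counts[v] - n * q) / (p - q) for v in domain}
```

Note that the estimator keeps a counter for every item in the domain, so on a domain of size $41,270$ its memory grows with $d$; avoiding exactly this blow-up is what the paper's bounded-memory schemes target.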

Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$ for which the corresponding Sobolev or Besov space compactly embeds into $L_p$. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.

The Euclidean algorithm is one of the oldest algorithms known to mankind. Given two integers $a_1$ and $a_2$, it computes their greatest common divisor (gcd) in a very elegant way. From a lattice perspective, it computes a basis of the sum of the two one-dimensional lattices $a_1 \mathbb{Z}$ and $a_2 \mathbb{Z}$, as $\gcd(a_1,a_2) \mathbb{Z} = a_1 \mathbb{Z} + a_2 \mathbb{Z}$. In this paper, we show that the classical Euclidean algorithm can be adapted in a very natural way to compute a basis of a general lattice $L(a_1, \ldots , a_m)$ given vectors $a_1, \ldots , a_m \in \mathbb{Z}^n$ with $m> \mathrm{rank}(a_1, \ldots ,a_m)$. Like the Euclidean algorithm, our algorithm is very easy to describe and implement and can be written within 12 lines of pseudocode. While the Euclidean algorithm halves the largest number in every iteration, our generalized algorithm halves the determinant of a full-rank subsystem, leading to at most $\log (\det B)$ iterations for some initial subsystem $B$. Therefore, we can compute a basis of the lattice using at most $\tilde{O}((m-n)n\log(\det B) + mn^{\omega-1}\log(||A||_\infty))$ arithmetic operations, where $\omega$ is the matrix multiplication exponent and $A = (a_1, \ldots, a_m)$. Even using the worst-case Hadamard bound for the determinant, our algorithm improves upon existing algorithms. Another major advantage of our algorithm is that we can bound the entries of the resulting lattice basis by $\tilde{O}(n^2\cdot ||A||_{\infty})$ using a simple pivoting rule. This is in contrast to the typical approach of computing a lattice basis via the Hermite normal form (HNF), in which entries can be as large as the determinant and hence can only be bounded by an exponential term.
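
For reference, the classical algorithm that the paper generalizes fits in a few lines; in the lattice reading, the returned value $g$ satisfies $g\mathbb{Z} = a_1\mathbb{Z} + a_2\mathbb{Z}$.

```python
def gcd(a, b):
    """Classical Euclidean algorithm. Lattice view: returns g such that
    g*Z = a*Z + b*Z, a basis of the sum of two one-dimensional lattices."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(252, 198))  # 18, i.e. 252Z + 198Z = 18Z
```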

A novel linear integration rule called $\textit{control neighbors}$ is proposed in which nearest neighbor estimates act as control variates to speed up the convergence rate of the Monte Carlo procedure. The main result is the $\mathcal{O}(n^{-1/2} n^{-1/d})$ convergence rate -- where $n$ stands for the number of evaluations of the integrand and $d$ for the dimension of the domain -- of this estimate for Lipschitz functions, a rate which, in some sense, is optimal. Several numerical experiments validate the complexity bound and highlight the good performance of the proposed estimator.
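
The sketch below is one plausible instantiation on $[0,1]^d$, with our own choices for the leave-one-out control and for the secondary sample used to integrate it; the paper's exact construction may differ. The 1-NN interpolant of the $n$ evaluations is cheap to evaluate, so its integral can be estimated accurately, and it then serves as the control variate.

```python
import numpy as np
from scipy.spatial import cKDTree

def control_neighbors_mc(f, d, n, n_grid=200_000, seed=0):
    """Monte Carlo with a 1-nearest-neighbor control variate (sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d))                  # evaluation points in [0,1]^d
    fX = f(X)                               # the n integrand evaluations
    tree = cKDTree(X)
    # Leave-one-out control: f at each point's nearest *other* point.
    _, nn = tree.query(X, k=2)
    f_loo = fX[nn[:, 1]]
    # Integral of the 1-NN interpolant via a large, cheap secondary sample.
    _, idx = tree.query(rng.random((n_grid, d)))
    mu_hat = fX[idx].mean()
    # Control-variate estimate: mean(f - fhat_loo) + integral(fhat).
    return (fX - f_loo).mean() + mu_hat

# Integral of sin(x1) + sin(x2) over [0,1]^2 is 2(1 - cos 1) ~ 0.9194.
print(control_neighbors_mc(lambda x: np.sin(x).sum(axis=1), d=2, n=2000))
```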

We consider geometric problems on planar $n^2$-point sets in the congested clique model. Initially, each node in the $n$-clique network holds a batch of $n$ distinct points in the Euclidean plane given by $O(\log n)$-bit coordinates. In each round, each node can send a distinct $O(\log n)$-bit message to each other node in the clique and perform unlimited local computations. We show that the convex hull of the input $n^2$-point set can be constructed in $O(\min\{ h,\log n\})$ rounds, where $h$ is the size of the hull, on the congested clique. We also show that a triangulation of the input $n^2$-point set can be constructed in $O(\log^2n)$ rounds on the congested clique. Finally, we demonstrate that the Voronoi diagram of $n^2$ points with $O(\log n)$-bit coordinates drawn uniformly at random from a unit square can be computed within the square with high probability in $O(1)$ rounds on the congested clique.

The \emph{local edge-length ratio} of a planar straight-line drawing $\Gamma$ is the largest ratio between the lengths of any pair of edges of $\Gamma$ that share a common vertex. The \emph{global edge-length ratio} of $\Gamma$ is the largest ratio between the lengths of any pair of edges of $\Gamma$. The local (global) edge-length ratio of a planar graph is the infimum over all local (global) edge-length ratios of its planar straight-line drawings. We show that there exist planar graphs with $n$ vertices whose local edge-length ratio is $\Omega(\sqrt{n})$. We then show a technique to establish upper bounds on the global (and hence local) edge-length ratio of planar graphs and apply it to Halin graphs and to other families of graphs having outerplanarity two.
