
For $k, n \geq 0$, and $c \in Z^n$, we consider ILP problems \begin{gather*} \max\bigl\{ c^\top x \colon A x = b,\, x \in Z^n_{\geq 0} \bigr\}\text{ with $A \in Z^{k \times n}$, $rank(A) = k$, $b \in Z^{k}$, and} \\ \max\bigl\{ c^\top x \colon A x \leq b,\, x \in Z^n \bigr\} \text{ with $A \in Z^{(n+k) \times n}$, $rank(A) = n$, $b \in Z^{n+k}$.} \end{gather*} The first problem is called an \emph{ILP problem in the standard form of codimension $k$}, and the second problem is called an \emph{ILP problem in the canonical form with $n+k$ constraints.} We show that, for any sufficiently large $\Delta$, both problems can be solved with $$ 2^{O(k)} \cdot (f_{k,d} \cdot \Delta)^2 / 2^{\Omega\bigl(\sqrt{\log(f_{k,d} \cdot \Delta)}\bigr)} $$ operations, where $ f_{k,d} = \min \Bigl\{ k^{k/2}, \bigl(\log k \cdot \log (d + k)\bigr)^{k/2} \Bigr\} $, $d$ is the dimension of the corresponding polyhedron and $\Delta$ is the maximum absolute value of $rank(A) \times rank(A)$ sub-determinants of $A$. As our second main result, we show that the feasibility variants of both problems can be solved with $$ 2^{O(k)} \cdot f_{k,d} \cdot \Delta \cdot \log^3(f_{k,d} \cdot \Delta) $$ operations. The quantity $f_{k,d}$ can be replaced by the quantity $g_{k,\Delta} = \bigl(\log k \cdot \log (k \Delta)\bigr)^{k/2}$, which depends only on $k$ and $\Delta$. Additionally, we consider the special cases $k=0$ and $k=1$, which have interesting applications. As a result of independent interest, we propose an $n^2/2^{\Omega\bigl(\sqrt{\log n}\bigr)}$-time algorithm for the tropical convolution problem on sequences indexed by the elements of a finite Abelian group of order $n$. This result is obtained by reducing the above problem to the matrix multiplication problem over a tropical semiring and applying the seminal algorithm of R. Williams.
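
As a concrete illustration of the last object mentioned above, the following sketch computes a tropical (here max-plus) convolution of two sequences indexed by the cyclic group $Z_n$, the simplest finite Abelian group. It is a naive $O(n^2)$ baseline, not the sub-quadratic algorithm of the abstract; the max-plus choice of semiring, the function name, and the example data are assumptions made for illustration.

```python
# A minimal sketch (not the paper's algorithm): naive O(n^2) tropical (max-plus)
# convolution of two sequences indexed by the cyclic group Z_n.

def tropical_convolution_cyclic(a, b):
    """(a * b)[g] = max over h in Z_n of a[h] + b[(g - h) mod n]."""
    n = len(a)
    assert len(b) == n
    return [max(a[h] + b[(g - h) % n] for h in range(n)) for g in range(n)]

if __name__ == "__main__":
    a = [0, 3, 1, 2]
    b = [1, 0, 4, 2]
    print(tropical_convolution_cyclic(a, b))  # [5, 6, 4, 7]
```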

Related content

Inductive Logic Programming (ILP) is a branch of machine learning that relies on logic programs as a uniform representation language for examples, background knowledge, and hypotheses. Being based on first-order logic, ILP offers a powerful representation formalism and provides an excellent approach to multi-relational learning and data mining. The International Conference on Inductive Logic Programming series, which began in 1991, is the premier international forum on learning from structured or semi-structured relational data. Originally focused on the induction of logic programs, over the years it has greatly broadened its scope and welcomes contributions on all aspects of learning in logic, multi-relational data mining, statistical relational learning, graph and tree mining, learning with other (non-propositional) logic-based knowledge representation frameworks, and explorations of the intersections with statistical learning and other probabilistic approaches.
July 5, 2024

For a skew polynomial ring $R=A[X;\theta,\delta]$ where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$ and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$ where $g$ is a left and right divisor of $f$ in $R$. In this paper, we derive a parity check matrix when $A$ is a finite commutative Frobenius ring using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ which are free $B$-modules where the restriction of $\delta$ and $\theta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test if the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with non-trivial endomorphism and the Galois ring of characteristic $4$.
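
The following sketch only illustrates the commutation rule $Xa = \theta(a)X + \delta(a)$ that governs multiplication in $R=A[X;\theta,\delta]$; it is not the parity-check or Gröbner-basis machinery of the paper. As a toy coefficient ring it uses the complex numbers with $\theta$ given by complex conjugation and $\delta = 0$, which is an assumption made for illustration rather than the finite Frobenius-ring setting considered above.

```python
# Illustrative sketch only: left multiplication in a skew polynomial ring
# A[X; theta, delta], using the rule X * a = theta(a) * X + delta(a).
# Polynomials are dicts {degree: coefficient}.

def shift_by_X(p, theta, delta):
    """Return X * p, where p = sum_d p[d] * X^d."""
    out = {}
    for d, c in p.items():
        out[d + 1] = out.get(d + 1, 0) + theta(c)   # theta(c) * X^{d+1}
        out[d] = out.get(d, 0) + delta(c)           # delta(c) * X^d
    return {d: c for d, c in out.items() if c != 0}

def skew_mul(f, g, theta, delta):
    """Product f * g in A[X; theta, delta]."""
    result = {}
    for i, a in f.items():
        term = dict(g)
        for _ in range(i):                 # push X^i past the coefficients of g
            term = shift_by_X(term, theta, delta)
        for d, c in term.items():          # multiply on the left by the coefficient a
            result[d] = result.get(d, 0) + a * c
    return {d: c for d, c in result.items() if c != 0}

if __name__ == "__main__":
    theta = lambda z: z.conjugate()        # a ring endomorphism of C (toy choice)
    delta = lambda z: 0                    # the zero theta-derivation
    f = {1: 1 + 0j}                        # f = X
    g = {0: 2 + 1j}                        # g = 2 + i
    print(skew_mul(f, g, theta, delta))    # X*(2+i) = (2-i)X  ->  {1: (2-1j)}
```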

We study convergence rates of the Trotter-Kato splitting $e^{A+L} = \lim_{n \to \infty} (e^{L/n} e^{A/n})^n$ in the strong operator topology. In the first part, we use complex interpolation theory to treat generators $L$ and $A$ of contraction semigroups on Banach spaces, with $L$ relatively $A$-bounded. In the second part, we study unitary dynamics on Hilbert spaces and develop a new technique based on the concept of energy constraints. Our results provide a complete picture of the convergence rates for the Trotter splitting for all common types of Schr\"odinger and Dirac operators, including singular, confining and magnetic vector potentials, as well as molecular many-body Hamiltonians in dimension $d=3$. Using the Brezis-Mironescu inequality, we derive convergence rates for the Schr\"odinger operator with $V(x)=\pm |x|^{-a}$ potential. In each case, our conditions are fully explicit.
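
A small numerical toy (not taken from the paper) makes the convergence statement tangible: for two small non-commuting matrices, the strong error of the Trotter product applied to a fixed vector decays roughly like $1/n$. The matrices and vector below are assumptions chosen for illustration; $L$ is dissipative so that it generates a contraction semigroup.

```python
# Numerical toy: convergence of (e^{L/n} e^{A/n})^n -> e^{A+L} on a fixed vector.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew-symmetric: generates a unitary group
L = np.array([[-1.0, 0.0], [0.5, -2.0]])    # dissipative: a contraction-semigroup generator
v = np.array([1.0, 0.0])

exact = expm(A + L) @ v
for n in [1, 10, 100, 1000]:
    step = expm(L / n) @ expm(A / n)
    approx = np.linalg.matrix_power(step, n) @ v
    print(n, np.linalg.norm(approx - exact))   # error decays roughly like 1/n
```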

A $(1 \pm \epsilon)$-sparsifier of a hypergraph $G(V,E)$ is a (weighted) subgraph that preserves the value of every cut to within a $(1 \pm \epsilon)$-factor. It is known that every hypergraph with $n$ vertices admits a $(1 \pm \epsilon)$-sparsifier with $\tilde{O}(n/\epsilon^2)$ hyperedges. In this work, we explore the task of building such a sparsifier by using only linear measurements (a \emph{linear sketch}) over the hyperedges of $G$, and provide nearly-matching upper and lower bounds for this task. Specifically, we show that there is a randomized linear sketch of size $\widetilde{O}(n r \log(m) / \epsilon^2)$ bits which with high probability contains sufficient information to recover a $(1 \pm \epsilon)$ cut-sparsifier with $\tilde{O}(n/\epsilon^2)$ hyperedges for any hypergraph with at most $m$ edges each of which has arity bounded by $r$. This immediately gives a dynamic streaming algorithm for hypergraph cut sparsification with an identical space complexity, improving on the previous best known bound of $\widetilde{O}(n r^2 \log^4(m) / \epsilon^2)$ bits of space (Guha, McGregor, and Tench, PODS 2015). We complement our algorithmic result above with a nearly-matching lower bound. We show that for every $\epsilon \in (0,1)$, one needs $\Omega(nr \log(m/n) / \log(n))$ bits to construct a $(1 \pm \epsilon)$-sparsifier via linear sketching, thus showing that our linear sketch achieves an optimal dependence on both $r$ and $\log(m)$.
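
For reference, the quantity that a cut-sparsifier must preserve is the weighted cut value over every vertex set $S$. The sketch below computes it naively for a toy hypergraph; it is not the linear-sketching algorithm of the paper, and the data are illustrative.

```python
# Reference sketch: the cut value of a hypergraph for a vertex set S is the total
# weight of hyperedges with at least one endpoint on each side of (S, V \ S).

def hypergraph_cut_value(hyperedges, weights, S):
    """hyperedges: list of vertex sets; weights: parallel list; S: set of vertices."""
    total = 0.0
    for e, w in zip(hyperedges, weights):
        inside = len(e & S)
        if 0 < inside < len(e):     # e is cut: it touches both S and its complement
            total += w
    return total

if __name__ == "__main__":
    E = [{0, 1, 2}, {2, 3}, {0, 3, 4}]
    w = [1.0, 2.0, 1.0]
    print(hypergraph_cut_value(E, w, {0, 1}))   # edges {0,1,2} and {0,3,4} are cut -> 2.0
```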

Sparse suffix sorting is the problem of sorting $b=o(n)$ suffixes of a string of length $n$. Efficient sparse suffix sorting algorithms have existed for more than a decade. Despite the multitude of works and their justified claims for applications in text indexing, the existing algorithms have not been employed by practitioners. Arguably this is because there are no simple, direct, and efficient algorithms for sparse suffix array construction. We provide two new algorithms for constructing the sparse suffix and LCP arrays that are simultaneously simple, direct, small, and fast. In particular, our algorithms are: simple in the sense that they can be implemented using only basic data structures; direct in the sense that the output arrays are not a byproduct of constructing the sparse suffix tree or an LCE data structure; fast in the sense that they run in $\mathcal{O}(n\log b)$ time, in the worst case, or in $\mathcal{O}(n)$ time, when the total number of suffixes with an LCP value greater than $2^{\lfloor \log \frac{n}{b} \rfloor + 1}-1$ is in $\mathcal{O}(b/\log b)$, matching the time of the optimal yet much more complicated algorithms [Gawrychowski and Kociumaka, SODA 2017; Birenzwige et al., SODA 2020]; and small in the sense that they can be implemented using only $8b+o(b)$ machine words. Our algorithms are non-trivial space-efficient adaptations of the Monte Carlo algorithm by I et al. for constructing the sparse suffix tree in $\mathcal{O}(n\log b)$ time [STACS 2014]. We provide extensive experiments to justify our claims on simplicity and on efficiency.
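
As a baseline that shows exactly what the two output arrays contain, the sketch below sorts the $b$ chosen suffixes directly and computes the sparse LCP array by character comparison. This costs up to $O(bn)$ work and is not the $\mathcal{O}(n\log b)$ algorithm of the paper; the example string and positions are illustrative.

```python
# Naive baseline for the sparse suffix array (SSA) and sparse LCP array.

def sparse_suffix_and_lcp(text, positions):
    ssa = sorted(positions, key=lambda i: text[i:])      # sparse suffix array
    lcp = [0] * len(ssa)
    for k in range(1, len(ssa)):
        i, j = ssa[k - 1], ssa[k]
        l = 0
        while i + l < len(text) and j + l < len(text) and text[i + l] == text[j + l]:
            l += 1
        lcp[k] = l                                       # LCP of consecutive sorted suffixes
    return ssa, lcp

if __name__ == "__main__":
    print(sparse_suffix_and_lcp("banana", [0, 2, 4]))    # ([0, 4, 2], [0, 0, 2])
```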

Given a graph $G=(V,E)$, a function $f:V\to \{0,1,2\}$ is said to be a \emph{Roman Dominating function} if for every $v\in V$ with $f(v)=0$, there exists a vertex $u\in N(v)$ such that $f(u)=2$. A Roman Dominating function $f$ is said to be an \emph{Independent Roman Dominating function} (or IRDF), if $V_1\cup V_2$ forms an independent set, where $V_i=\{v\in V~\vert~f(v)=i\}$, for $i\in \{0,1,2\}$. The total weight of $f$ is equal to $\sum_{v\in V} f(v)$, and is denoted as $w(f)$. The \emph{Independent Roman Domination Number} of $G$, denoted by $i_R(G)$, is defined as $\min\{w(f)~\vert~f$ is an IRDF of $G\}$. For a given graph $G$, the problem of computing $i_R(G)$ is defined as the \emph{Minimum Independent Roman Domination problem}. The problem is already known to be NP-hard for bipartite graphs. In this paper, we further study the algorithmic complexity of the problem and propose polynomial-time algorithms to solve the Minimum Independent Roman Domination problem for distance-hereditary graphs, split graphs, and $P_4$-sparse graphs.
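
Since the definitions are fully explicit, a brute-force computation of $i_R(G)$ for tiny graphs is a useful sanity check. The sketch below enumerates all $3^{|V|}$ functions and is exponential by design; it is not one of the polynomial-time algorithms claimed above, and the example graph is illustrative.

```python
# Brute-force i_R(G): enumerate all f: V -> {0,1,2} and keep the cheapest IRDF.

from itertools import product

def independent_roman_domination_number(vertices, adj):
    best = None
    for values in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, values))
        # Roman domination: every 0-vertex needs a neighbour with value 2.
        if any(f[v] == 0 and all(f[u] != 2 for u in adj[v]) for v in vertices):
            continue
        # Independence: V_1 union V_2 must be an independent set.
        positive = [v for v in vertices if f[v] > 0]
        if any(u in adj[v] for v in positive for u in positive):
            continue
        w = sum(f.values())
        best = w if best is None else min(best, w)
    return best

if __name__ == "__main__":
    V = [0, 1, 2, 3]
    adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}             # the star K_{1,3}
    print(independent_roman_domination_number(V, adj))       # f(center)=2 -> i_R = 2
```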

Consider a graph $G = (V, E)$ and a function $f: V \rightarrow \{0, 1, 2\}$. A vertex $u$ with $f(u)=0$ is said to be \emph{undefended} by $f$ if it has no neighbour with a positive $f$-value. The function $f$ is said to be a \emph{Weak Roman Dominating function} (WRD function) if, for every vertex $u$ with $f(u) = 0$, there exists a neighbour $v$ of $u$ with $f(v) > 0$ such that the function $f': V \rightarrow \{0, 1, 2\}$ defined by $f'(u) = 1$, $f'(v) = f(v) - 1$, and $f'(w) = f(w)$ for all vertices $w$ in $V\setminus\{u,v\}$, leaves no vertex undefended. The total weight of $f$ is equal to $\sum_{v\in V} f(v)$, and is denoted as $w(f)$. The \emph{Weak Roman Domination Number}, denoted by $\gamma_r(G)$, is defined as $\min\{w(f)~\vert~f$ is a WRD function of $G\}$. For a given graph $G$, the problem of finding a WRD function of weight $\gamma_r(G)$ is defined as the \emph{Minimum Weak Roman Domination problem}. The problem is already known to be NP-hard for bipartite and chordal graphs. In this paper, we further study the algorithmic complexity of the problem. We prove the NP-hardness of the problem for star convex bipartite graphs and comb convex bipartite graphs, which are subclasses of bipartite graphs. In addition, we show that for bounded-degree star convex bipartite graphs, the problem is efficiently solvable. We also prove the NP-hardness of the problem for split graphs, a subclass of chordal graphs. On the positive side, we give polynomial-time algorithms to solve the problem for $P_4$-sparse graphs. Further, we present some approximation results.
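
The WRD condition can likewise be checked directly from the definition for a single function $f$ on a small graph. The sketch below verifies, for every $0$-vertex, that some legal move leaves no vertex undefended; the exhaustive search over all functions $f$ is omitted, and the example graph and function are illustrative.

```python
# Check the Weak Roman Domination condition for one function f on a small graph.

def is_undefended(f, adj, u):
    return f[u] == 0 and all(f[x] == 0 for x in adj[u])

def is_weak_roman_dominating(f, adj):
    for u in f:
        if f[u] != 0:
            continue
        ok = False
        for v in adj[u]:
            if f[v] == 0:
                continue
            g = dict(f)
            g[u], g[v] = 1, f[v] - 1            # the move f' described in the abstract
            if not any(is_undefended(g, adj, w) for w in g):
                ok = True
                break
        if not ok:
            return False
    return True

if __name__ == "__main__":
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}               # the path P_4
    print(is_weak_roman_dominating({0: 1, 1: 0, 2: 0, 3: 1}, adj))   # True, weight 2
```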

The {\em discrepancy} of a matrix $M \in \mathbb{R}^{d \times n}$ is given by $\mathrm{DISC}(M) := \min_{\boldsymbol{x} \in \{-1,1\}^n} \|M\boldsymbol{x}\|_\infty$. An outstanding conjecture, attributed to Koml\'os, stipulates that $\mathrm{DISC}(M) = O(1)$, whenever $M$ is a Koml\'os matrix, that is, whenever every column of $M$ lies within the unit sphere. Our main result asserts that $\mathrm{DISC}(M + R/\sqrt{d}) = O(d^{-1/2})$ holds asymptotically almost surely, whenever $M \in \mathbb{R}^{d \times n}$ is Koml\'os, $R \in \mathbb{R}^{d \times n}$ is a Rademacher random matrix, $d = \omega(1)$, and $n = \omega(d \log d)$. The factor $d^{-1/2}$ normalising $R$ is essentially best possible and the dependency between $n$ and $d$ is asymptotically best possible. Our main source of inspiration is a result by Bansal, Jiang, Meka, Singla, and Sinha (ICALP 2022). They obtained an assertion similar to the one above in the case that the smoothing matrix is Gaussian. They asked whether their result can be attained with the optimal dependency $n = \omega(d \log d)$ in the case of Bernoulli random noise or any other types of discretely distributed noise; the latter types being more conducive for Smoothed Analysis in other discrepancy settings such as the Beck-Fiala problem. For Bernoulli noise, their method works if $n = \omega(d^2)$. In the case of Rademacher noise, we answer the question posed by Bansal, Jiang, Meka, Singla, and Sinha. Our proof builds upon their approach in a strong way and provides a discrete version of the latter. Breaking the $n = \omega(d^2)$ barrier and reaching the optimal dependency $n = \omega(d \log d)$ for Rademacher noise requires additional ideas expressed through a rather meticulous counting argument, incurred by the need to maintain a high level of precision all throughout the discretisation process.
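
The definition of $\mathrm{DISC}$ lends itself to a brute-force illustration on tiny instances: minimise $\|M\boldsymbol{x}\|_\infty$ over all $2^n$ sign vectors, once for a Komlós-type matrix and once for its Rademacher-smoothed version $M + R/\sqrt{d}$. The sizes and random seed below are assumptions for illustration only.

```python
# Brute-force discrepancy (exponential in n): DISC(M) = min_{x in {-1,1}^n} ||Mx||_inf.

from itertools import product
import numpy as np

def discrepancy(M):
    d, n = M.shape
    return min(np.max(np.abs(M @ np.array(x))) for x in product((-1, 1), repeat=n))

rng = np.random.default_rng(0)
d, n = 3, 8
M = rng.normal(size=(d, n))
M /= np.linalg.norm(M, axis=0)                  # columns in the unit sphere (Komlos-type)
R = rng.choice([-1.0, 1.0], size=(d, n))        # Rademacher noise
print(discrepancy(M), discrepancy(M + R / np.sqrt(d)))
```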

Brown and Walker (1997) showed that GMRES determines a least squares solution of $ A x = b $ where $ A \in {\bf R}^{n \times n} $ without breakdown for arbitrary $ b, x_0 \in {\bf R}^n $ if and only if $A$ is range-symmetric, i.e. $ {\cal R} (A^{\rm T}) = {\cal R} (A) $, where $ A $ may be singular and $ b $ may not be in the range space ${\cal R}(A)$ of $A$. In this paper, we propose applying GMRES to $ A C A^{\rm T} z = b $, where $ C \in {\bf R}^{n \times n} $ is symmetric positive definite. This determines a least squares solution $ x = CA^{\rm T} z $ of $ A x = b $ without breakdown for arbitrary (singular) matrix $A \in {\bf R}^{n \times n}$ and $ b \in {\bf R}^n $. To make the method numerically stable, we propose using the pseudoinverse with an appropriate threshold parameter to suppress the influence of tiny singular values when solving the severely ill-conditioned Hessenberg systems which arise in the Arnoldi process of GMRES when solving inconsistent range-symmetric systems. Numerical experiments show that the method, with $C$ taken to be either the identity matrix or the inverse of the diagonal matrix whose diagonal elements are those of $A A^{\rm T}$, gives a least squares solution even when $A$ is not range-symmetric, including the case ${\rm index}(A) > 1$.
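
The reformulation can be tried numerically on a toy system. In the sketch below, $A$ is singular and not range-symmetric, $C$ is the identity, and the right-hand side is chosen to be consistent so that the example stays well-behaved; the pseudoinverse thresholding that the paper introduces for inconsistent systems is not reproduced here.

```python
# Toy check of GMRES applied to A C A^T z = b with C = I, recovering x = C A^T z.

import numpy as np
from scipy.sparse.linalg import gmres

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],              # second row = 2 * first row, so A is singular
              [0.0, 1.0, 1.0]])
b = A @ np.array([1.0, 1.0, 1.0])           # consistent right-hand side (toy choice)

C = np.eye(3)                               # symmetric positive definite
z, info = gmres(A @ C @ A.T, b)
x = C @ A.T @ z
print(info, np.linalg.norm(A @ x - b))      # expect info == 0 and a tiny residual
```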

This paper addresses the problem of finding a minimum-cost $m$-state Markov chain $(S_0,\ldots,S_{m-1})$ in a large set of chains. The chains studied have a reward associated with each state. The cost of a chain is its "gain", i.e., its average reward under its stationary distribution. Specifically, for each $k=0,\ldots,m-1$ there is a known set ${\mathbb S}_k$ of type-$k$ states. A permissible Markov chain contains exactly one state of each type; the problem is to find a minimum-cost permissible chain. The original motivation was to find a cheapest binary AIFV-$m$ lossless code on a source alphabet of size $n$. Such a code is an $m$-tuple of trees, in which each tree can be viewed as a Markov Chain state. This formulation was then used to address other problems in lossless compression. The known solution techniques for finding minimum-cost Markov chains were iterative and ran in exponential time. This paper shows how to map every possible type-$k$ state into a type-$k$ hyperplane and then define a "Markov Chain Polytope" as the lower envelope of all such hyperplanes. Finding a minimum-cost Markov chain can then be shown to be equivalent to finding a "highest" point on this polytope. The local optimization procedures used in the previous iterative algorithms are shown to be separation oracles for this polytope. Since these were often polynomial time, an application of the Ellipsoid method immediately leads to polynomial time algorithms for these problems.
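
For concreteness, the "gain" referred to above is the expected reward under the stationary distribution of the chain. The sketch below computes it for one small two-state chain; the polytope and separation-oracle machinery for optimising over chains is not reproduced, and the transition matrix and rewards are illustrative.

```python
# Illustration: the gain of a Markov chain is the stationary-average reward.

import numpy as np

def gain(P, rewards):
    """Stationary-average reward of an irreducible chain with transition matrix P."""
    n = P.shape[0]
    # Solve pi P = pi together with sum(pi) = 1.
    M = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return float(pi @ rewards)

if __name__ == "__main__":
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(gain(P, np.array([1.0, 3.0])))    # stationary pi = (0.8, 0.2) -> gain 1.4
```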

We consider the problem of enumerating all minimal transversals (also called minimal hitting sets) of a hypergraph $\mathcal{H}$. An equivalent formulation of this problem known as the \emph{transversal hypergraph} problem (or \emph{hypergraph dualization} problem) is to decide, given two hypergraphs, whether one corresponds to the set of minimal transversals of the other. The existence of a polynomial time algorithm to solve this problem is a long-standing open question. In \cite{fredman_complexity_1996}, the authors present the first sub-exponential algorithm to solve the transversal hypergraph problem, which runs in quasi-polynomial time, making it unlikely that the problem is (co)NP-complete. In this paper, we show that when one of the two hypergraphs is of bounded VC-dimension, the transversal hypergraph problem can be solved in polynomial time, or equivalently that if $\mathcal{H}$ is a hypergraph of bounded VC-dimension, then there exists an incremental polynomial time algorithm to enumerate its minimal transversals. This result generalizes most of the previously known polynomial cases in the literature since they almost all consider classes of hypergraphs of bounded VC-dimension. As a consequence, the hypergraph transversal problem is solvable in polynomial time for any class of hypergraphs closed under partial subhypergraphs. We also show that the proposed algorithm runs in quasi-polynomial time in general hypergraphs and runs in polynomial time if the conformality of the hypergraph is bounded, which is one of the few known polynomial cases where the VC-dimension is unbounded.
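
To fix the terminology, the sketch below enumerates all minimal transversals of a tiny hypergraph by exhaustive search over vertex subsets. It is exponential in the number of vertices and bears no relation to the output-sensitive algorithms discussed above; the hypergraph is illustrative.

```python
# Brute-force enumeration of minimal transversals (minimal hitting sets).

from itertools import chain, combinations

def minimal_transversals(vertices, hyperedges):
    def hits_all(T):
        return all(T & e for e in hyperedges)
    subsets = chain.from_iterable(combinations(vertices, r) for r in range(len(vertices) + 1))
    candidates = [frozenset(s) for s in subsets if hits_all(frozenset(s))]
    # Keep only the transversals with no proper subset that is also a transversal.
    return [T for T in candidates if not any(S < T for S in candidates)]

if __name__ == "__main__":
    V = [1, 2, 3, 4]
    H = [{1, 2}, {2, 3}, {3, 4}]
    print(minimal_transversals(V, H))   # {1, 3}, {2, 3}, {2, 4} as frozensets
```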
