The hitting set problem asks, given a collection of sets over a universe $U$, for a minimum-cardinality subset of $U$ that intersects each of the given sets. It is NP-hard and equivalent to the set cover problem. We give a branch-and-bound algorithm for hitting set. Although it requires exponential time in the worst case, it solves many practical instances from different domains in reasonable time. Our algorithm outperforms a modern ILP solver, the state of the art for hitting set, by at least an order of magnitude on most instances.
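To make the branch-and-bound idea concrete, here is a minimal exact solver sketch. It is not the paper's algorithm, just the textbook scheme it refines: branch on the elements of one not-yet-hit set, and prune any partial solution that is already at least as large as the best solution found so far. The function name `min_hitting_set` is my own.

```python
from itertools import chain

def min_hitting_set(sets):
    """Exact minimum hitting set by simple branch-and-bound (sketch)."""
    sets = [frozenset(s) for s in sets]
    best = [set(chain.from_iterable(sets))]  # trivial solution: whole universe

    def branch(chosen, remaining):
        if len(chosen) >= len(best[0]):      # bound: cannot improve on best
            return
        if not remaining:                    # every set is hit
            best[0] = set(chosen)
            return
        for e in remaining[0]:               # branch on one unhit set's elements
            branch(chosen | {e}, [t for t in remaining if e not in t])

    branch(set(), sets)
    return best[0]
```

For example, `min_hitting_set([{1, 2}, {2, 3}, {3, 4}])` returns a two-element hitting set such as `{2, 3}`; no single element lies in all three sets.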

In this paper, we propose a randomized $\tilde{O}(\mu(G))$-round algorithm for the maximum cardinality matching problem in the CONGEST model, where $\mu(G)$ denotes the maximum size of a matching of the input graph $G$. The proposed algorithm substantially improves the current best worst-case running time. The key technical ingredient is a new randomized algorithm that finds an augmenting path of length $\ell$ with high probability within $\tilde{O}(\ell)$ rounds, which positively settles an open problem left in the prior work by Ahmadi and Kuhn [DISC'20]. Our augmenting path algorithm builds on a recent result by Kitamura and Izumi [IEICE Trans.'22], which efficiently identifies a sparse substructure of the input graph containing an augmenting path, using a new concept called \emph{alternating base trees}. Their algorithm, however, resorts to a centralized approach that collects the entire information of the substructure at a single vertex in order to construct an augmenting path. The technical highlight of this paper is a fully decentralized counterpart of this centralized method. To develop the algorithm, we prove several new structural properties of alternating base trees, which are of independent interest.
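For contrast with the distributed setting, the classical centralized augmenting-path step is easy to state. The sketch below is Kuhn's algorithm for bipartite graphs (the paper treats general graphs in CONGEST, which is far harder); it repeatedly searches for an augmenting path from each free left vertex.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's algorithm: grow a matching via augmenting-path searches.

    adj maps each left vertex to a list of right neighbours; match[r]
    is the left vertex currently matched to right vertex r, or None.
    """
    match = [None] * n_right

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere:
            if match[v] is None or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in adj)
```

Each successful call to `augment` flips one augmenting path, increasing the matching size by one; the distributed difficulty lies in finding such paths without collecting the graph at one vertex.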

We settle a long-standing conjecture about the optimal codebook structure of codes in $n$-dimensional Euclidean space that consist of $n+1$ codewords subject to a codeword energy constraint, in terms of minimizing the average decoding error probability. The conjecture states that optimal codebooks are formed by the $n+1$ vertices of a regular simplex (the $n$-dimensional generalization of a regular tetrahedron) inscribed in the unit sphere. We provide a self-contained proof that hinges on symmetry arguments and leverages a relaxation approach consisting of jointly optimizing the codebook and the decision regions, rather than the codeword locations alone.
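The conjectured-optimal codebook can be written down explicitly via a standard construction (my choice of coordinates, not taken from the paper): centre the $n+1$ standard basis vectors of $\mathbb{R}^{n+1}$ at their mean and normalise. The resulting unit vectors span an $n$-dimensional subspace and have pairwise inner products $-1/n$, the defining property of a regular simplex on the unit sphere.

```python
def simplex_codebook(n):
    """n+1 unit-norm codewords in R^{n+1} forming a regular simplex:
    subtract the centroid from each standard basis vector, then normalise."""
    m = n + 1
    c = 1.0 / m                  # centroid coordinate
    norm = (n / m) ** 0.5        # length of e_i minus the centroid
    return [[((1.0 if i == j else 0.0) - c) / norm for j in range(m)]
            for i in range(m)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))
```

For $n=3$ this yields four unit vectors (a regular tetrahedron) with every pairwise inner product equal to $-1/3$.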

In this paper, we consider the following two problems: (i) Deletion Blocker($\alpha$), where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that $\alpha(G-S) \leq \alpha(G)-d$, that is, the independence number of $G$ decreases by at least $d$ after the vertices of $S$ have been removed; (ii) Transversal($\alpha$), where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that every maximum independent set $I$ satisfies $|I\cap S| \geq d$. We show that both problems are polynomial-time solvable in the class of co-comparability graphs by reducing them to the well-known Vertex Cut problem. Our results generalize a result of [Chang et al., Maximum clique transversals, Lecture Notes in Computer Science 2204, pp. 32-43, WG 2001] and a recent result of [Hoang et al., Assistance and interdiction problems on interval graphs, Discrete Applied Mathematics 340, pp. 153-170, 2023].
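A tiny brute-force illustration of Deletion Blocker($\alpha$) (exponential time, suitable only for toy graphs; the point of the paper is polynomial algorithms on co-comparability graphs). Both helper names are mine.

```python
from itertools import combinations

def alpha(vertices, edges):
    """Independence number by brute force (tiny graphs only)."""
    vs = list(vertices)
    for size in range(len(vs), -1, -1):
        for cand in combinations(vs, size):
            s = set(cand)
            if not any(u in s and v in s for u, v in edges):
                return size
    return 0

def min_blocker(vertices, edges, d):
    """Smallest S with alpha(G - S) <= alpha(G) - d, by brute force."""
    base = alpha(vertices, edges)
    vs = list(vertices)
    for k in range(len(vs) + 1):
        for cand in combinations(vs, k):
            s = set(cand)
            rest = [v for v in vs if v not in s]
            sub = [(u, v) for u, v in edges if u not in s and v not in s]
            if alpha(rest, sub) <= base - d:
                return s
    return None
```

On the path $P_4$ (edges 0-1, 1-2, 2-3) we have $\alpha = 2$; deleting any single vertex leaves $\alpha = 2$, but a two-vertex set such as $\{0, 1\}$ lowers it to $1$.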

For any finite discrete source, the competitive advantage of prefix code $C_1$ over prefix code $C_2$ is the probability $C_1$ produces a shorter codeword than $C_2$, minus the probability $C_2$ produces a shorter codeword than $C_1$. For any source, a prefix code is competitively optimal if it has a nonnegative competitive advantage over all other prefix codes. In 1991, Cover proved that Huffman codes are competitively optimal for all dyadic sources. We prove the following asymptotic converse: As the source size grows, the probability a Huffman code for a randomly chosen non-dyadic source is competitively optimal converges to zero. We also prove: (i) For any source, competitively optimal codes cannot exist unless a Huffman code is competitively optimal; (ii) For any non-dyadic source, a Huffman code has a positive competitive advantage over a Shannon-Fano code; (iii) For any source, the competitive advantage of any prefix code over a Huffman code is strictly less than $\frac{1}{3}$; (iv) For each integer $n>3$, there exists a source of size $n$ and some prefix code whose competitive advantage over a Huffman code is arbitrarily close to $\frac{1}{3}$; and (v) For each positive integer $n$, there exists a source of size $n$ and some prefix code whose competitive advantage over a Shannon-Fano code becomes arbitrarily close to $1$ as $n\longrightarrow\infty$.
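Claim (ii) can be checked numerically on a small non-dyadic source. In the sketch below I take "Shannon-Fano" to mean the Shannon code with lengths $\lceil -\log_2 p_i \rceil$, an assumption since the abstract does not fix the variant; the helper names are mine.

```python
import heapq
import math

def huffman_lengths(p):
    """Codeword lengths of a Huffman code for distribution p."""
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    depth = [0] * len(p)
    tie = len(p)                       # tie-breaker so tuples never compare lists
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:              # every merged symbol moves one level down
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
        tie += 1
    return depth

def competitive_advantage(p, len_a, len_b):
    """P(code a strictly shorter) minus P(code b strictly shorter)."""
    return (sum(pi for pi, a, b in zip(p, len_a, len_b) if a < b)
            - sum(pi for pi, a, b in zip(p, len_a, len_b) if b < a))

p = [0.4, 0.3, 0.2, 0.1]                       # a non-dyadic source
lh = huffman_lengths(p)                        # [1, 2, 3, 3]
ls = [math.ceil(-math.log2(pi)) for pi in p]   # [2, 2, 3, 4]
```

Here the Huffman code's competitive advantage over the Shannon code is $0.4 + 0.1 = 0.5 > 0$, consistent with claim (ii).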

A property of prefix codes called strong monotonicity is introduced. Then it is proven that for a prefix code $C$ for a given probability distribution, the following are equivalent: (i) $C$ is expected length minimal; (ii) $C$ is length equivalent to a Huffman code; and (iii) $C$ is complete and strongly monotone. Also, three relations are introduced between prefix code trees called same-parent, same-row, and same-probability swap equivalence, and it is shown that for a given source, all Huffman codes are same-parent, same-probability swap equivalent, and all expected length minimal prefix codes are same-row, same-probability swap equivalent.

In this article, we study testing for independence of two random elements $X$ and $Y$ lying in an infinite dimensional space ${\cal{H}}$ (specifically, a real separable Hilbert space equipped with the inner product $\langle ., .\rangle_{\cal{H}}$). In the course of this study, we propose a measure of association based on the sup-norm difference between the joint probability density function of the bivariate random vector $(\langle l_{1}, X \rangle_{\cal{H}}, \langle l_{2}, Y \rangle_{\cal{H}})$ and the product of the marginal probability density functions of the random variables $\langle l_{1}, X \rangle_{\cal{H}}$ and $\langle l_{2}, Y \rangle_{\cal{H}}$, where $l_{1}\in{\cal{H}}$ and $l_{2}\in{\cal{H}}$ are two arbitrary elements. We establish that the proposed measure of association equals zero if and only if the random elements are independent. To test whether $X$ and $Y$ are independent, we take the sample version of the proposed measure of association, after appropriate normalization, as the test statistic, and we derive its asymptotic distributions under the null hypothesis and under local alternatives. The performance of the new test is investigated on simulated data sets, and its practical applicability is demonstrated on three real data sets from climatology, biology, and chemistry.

We investigate the problem of approximating an incomplete preference relation $\succsim$ on a finite set by a complete preference relation. We aim to obtain this approximation in such a way that the choices on the basis of two preferences, one incomplete, the other complete, have the smallest possible discrepancy in the aggregate. To this end, we use the top-difference metric on preferences, and define a best complete approximation of $\succsim$ as a complete preference relation nearest to $\succsim$ relative to this metric. We prove that such an approximation must be a maximal completion of $\succsim$, and that it is, in fact, any one completion of $\succsim$ with the largest index. Finally, we use these results to provide a sufficient condition for the best complete approximation of a preference to be its canonical completion. This leads to closed-form solutions to the best approximation problem in the case of several incomplete preference relations of interest.

The matrix semigroup membership problem asks, given square matrices $M,M_1,\ldots,M_k$ of the same dimension, whether $M$ lies in the semigroup generated by $M_1,\ldots,M_k$. It is classical that this problem is undecidable in general but decidable when $M_1,\ldots,M_k$ commute. In this paper we consider the problem of whether, given $M_1,\ldots,M_k$, the semigroup generated by $M_1,\ldots,M_k$ contains a non-negative matrix. We show that when $M_1,\ldots,M_k$ commute, this problem is decidable subject to Schanuel's Conjecture. We also show that the problem is undecidable if the commutativity assumption is dropped. A key lemma in our decidability result is a procedure to determine, given a matrix $M$, whether the sequence of matrices $(M^n)_{n\geq 0}$ is ultimately nonnegative. This answers a problem posed by S. Akshay (arXiv:2205.09190). The latter result is in stark contrast to the notorious fact that it is not known how to determine effectively whether, for a specific matrix index $(i,j)$, the sequence $((M^n)_{i,j})_{n\geq 0}$ is ultimately nonnegative (a formulation of the Ultimate Positivity Problem for linear recurrence sequences).
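The notion of ultimate nonnegativity can be illustrated numerically on a hand-picked $2\times 2$ example of mine: the matrix below has a negative entry, yet $M^2$ and $M^3$ are entrywise positive, so $M^n$ is positive for every $n \geq 2$ (each such $n$ is a sum of 2s and 3s, and a product of entrywise-positive matrices is entrywise positive). Checking finitely many powers is of course not a decision procedure, which is exactly why an effective criterion is nontrivial.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_nonnegative(A):
    return all(x >= 0 for row in A for x in row)

# M has a negative entry, but M^2 = [[5, 1.9], [1.9, 1.01]] and M^3 are
# entrywise positive, so (M^n) is ultimately nonnegative (from n = 2 on).
M = [[2.0, 1.0], [1.0, -0.1]]
```

Intuitively, the positive Perron eigenvalue of the dominant part swamps the small negative perturbation after one multiplication.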

In this paper, we develop sixth-order hybrid finite difference methods (FDMs) for the elliptic interface problem $-\nabla \cdot( a\nabla u)=f$ in $\Omega\backslash \Gamma$, where $\Gamma$ is a smooth interface inside $\Omega$. The variable scalar coefficient $a>0$ and source $f$ are possibly discontinuous across $\Gamma$. The hybrid FDMs utilize a $9$-point compact stencil at all interior regular points of the grid and a $13$-point stencil at irregular points near $\Gamma$. For interior regular points away from $\Gamma$, we obtain a sixth-order $9$-point compact FDM satisfying the sign and sum conditions for ensuring the M-matrix property. We also derive sixth-order compact ($4$-point for corners and $6$-point for edges) FDMs satisfying the sign and sum conditions for the M-matrix property at any boundary point subject to (mixed) Dirichlet/Neumann/Robin boundary conditions. Thus, for the elliptic problem without interface (i.e., $\Gamma$ is empty), our compact FDM has the M-matrix property for any mesh size $h>0$ and consequently satisfies the discrete maximum principle, which guarantees the theoretical sixth-order convergence. For irregular points near $\Gamma$, we propose fifth-order $13$-point FDMs, whose stencil coefficients can be effectively calculated by recursively solving several small linear systems. Theoretically, the proposed high-order FDMs use high-order (partial) derivatives of the coefficient $a$, the source term $f$, the interface curve $\Gamma$, the two jump functions along $\Gamma$, and the functions on $\partial \Omega$. Numerically, we always use function values to approximate all required high-order (partial) derivatives in our hybrid FDMs without losing accuracy. Our numerical experiments confirm the sixth-order convergence in the $l_{\infty}$ norm of the proposed hybrid FDMs for the elliptic interface problem.
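The sign and sum conditions for the M-matrix property are mechanical to check on any given stencil. As an illustration (using the classical fourth-order 9-point compact "Mehrstellen" stencil for the Laplacian, not the paper's sixth-order variable-coefficient stencil): the centre coefficient must be positive, all off-centre coefficients nonpositive, and each row sum nonnegative.

```python
# Classical 9-point compact stencil for -Laplace(u), scaled by 6*h^2:
# positive centre, nonpositive off-centre entries, zero row sum.
stencil = {
    (0, 0): 20.0,
    (1, 0): -4.0, (-1, 0): -4.0, (0, 1): -4.0, (0, -1): -4.0,
    (1, 1): -1.0, (1, -1): -1.0, (-1, 1): -1.0, (-1, -1): -1.0,
}

def satisfies_m_matrix_conditions(st):
    """Sign and sum conditions for the M-matrix property at one grid point."""
    centre_positive = st[(0, 0)] > 0
    off_centre_nonpos = all(v <= 0 for k, v in st.items() if k != (0, 0))
    row_sum_nonneg = sum(st.values()) >= 0
    return centre_positive and off_centre_nonpos and row_sum_nonneg
```

Here the row sum is exactly zero ($20 - 4\cdot 4 - 4\cdot 1 = 0$), the boundary case of the sum condition; the paper's contribution is constructing sixth-order stencils that still satisfy these constraints.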

We study the problem of progressive ensemble distillation: given a large, pretrained teacher model $g$, we seek to decompose the model into smaller, low-inference-cost student models $f_i$, such that progressively evaluating additional models in this ensemble leads to improved predictions. The resulting ensemble allows for flexibly tuning accuracy vs. inference cost at runtime, which is useful for a number of applications in on-device inference. The method we propose, B-DISTIL, relies on an algorithmic procedure that uses function composition over intermediate activations to construct expressive ensembles with similar performance as $g$, but with smaller student models. We demonstrate the effectiveness of B-DISTIL by decomposing pretrained models across standard image, speech, and sensor datasets. We also provide theoretical guarantees in terms of convergence and generalization.
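The runtime accuracy-vs-cost trade-off follows a generic early-exit pattern, sketched below (this is the general evaluation scheme, not B-DISTIL's construction; the function name and confidence threshold are my choices): accumulate student logits and stop as soon as the running softmax is confident enough.

```python
import math

def progressive_predict(students, x, threshold=0.7):
    """Evaluate student models in order, accumulating their logits, and
    stop early once the softmax of the running sum is confident enough.
    Returns (predicted class, number of students evaluated)."""
    total = None
    used = 0
    for f in students:
        used += 1
        logits = f(x)
        total = logits if total is None else [a + b for a, b in zip(total, logits)]
        m = max(total)
        exps = [math.exp(v - m) for v in total]
        if max(exps) / sum(exps) >= threshold:   # top softmax probability
            break
    return max(range(len(total)), key=total.__getitem__), used
```

Easy inputs exit after one cheap student; harder inputs pay for more of the ensemble, which is the flexibility the abstract describes for on-device inference.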
