
We study the problem of cutting a length-$n$ string of positive real numbers into $k$ pieces so that every piece has sum at least $b$. The problem can also be phrased as transforming such a string into a new one by merging adjacent numbers. We discuss connections with other problems and present several algorithms: an $O(n)$-time greedy algorithm, an $O(kn\log n)$-time dynamic programming algorithm, and an $O(n + k\log n + 2^k k)$-time FPT algorithm for the case of $n-k$ pieces.
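
The greedy idea can be sketched as follows: close a piece as soon as its running sum reaches $b$, fold any leftover tail into the last piece, and merge surplus pieces down to exactly $k$ (merging never decreases a piece's sum). This is a minimal illustrative sketch of an $O(n)$ greedy strategy, not necessarily the paper's exact algorithm.

```python
def cut_greedy(xs, k, b):
    """Cut xs into exactly k pieces, each with sum >= b, or return None.

    Hedged sketch: close a piece as soon as its running sum reaches b
    (this maximizes the number of valid pieces), then merge down to k.
    """
    pieces, cur, s = [], [], 0.0
    for x in xs:
        cur.append(x)
        s += x
        if s >= b:                 # current piece is heavy enough: close it
            pieces.append(cur)
            cur, s = [], 0.0
    if cur:                        # leftover tail with sum < b
        if not pieces:
            return None
        pieces[-1].extend(cur)     # fold the tail into the last valid piece
    if len(pieces) < k:
        return None                # even the maximal cutting has too few pieces
    while len(pieces) > k:         # merging adjacent pieces keeps sums >= b
        tail = pieces.pop()
        pieces[-1].extend(tail)
    return pieces
```

For example, `cut_greedy([1, 2, 3, 1, 1, 4], 2, 3)` yields two pieces whose sums are both at least 3.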


In this paper, we propose a randomized $\tilde{O}(\mu(G))$-round algorithm for the maximum cardinality matching problem in the CONGEST model, where $\mu(G)$ denotes the maximum size of a matching of the input graph $G$. The proposed algorithm substantially improves the current best worst-case running time. The key technical ingredient is a new randomized algorithm for finding an augmenting path of length $\ell$ with high probability within $\tilde{O}(\ell)$ rounds, which positively settles an open problem left in the prior work by Ahmadi and Kuhn [DISC'20]. Our augmenting path algorithm builds on a recent result by Kitamura and Izumi [IEICE Trans.'22], which efficiently identifies a sparse substructure of the input graph containing an augmenting path, based on a new concept called \emph{alternating base trees}. Their algorithm, however, resorts to a centralized approach: the entire information of the substructure is collected at a single vertex, which then constructs an augmenting path. The technical highlight of this paper is a fully decentralized counterpart of that centralized method. To develop the algorithm, we prove several new structural properties of alternating base trees, which are of independent interest.

A property of prefix codes called strong monotonicity is introduced. It is then proven that, for a prefix code $C$ for a given probability distribution, the following are equivalent: (i) $C$ is expected-length minimal; (ii) $C$ is length equivalent to a Huffman code; and (iii) $C$ is complete and strongly monotone. In addition, three relations between prefix code trees, called same-parent, same-row, and same-probability swap equivalence, are introduced, and it is shown that for a given source, all Huffman codes are same-parent, same-probability swap equivalent, and all expected-length minimal prefix codes are same-row, same-probability swap equivalent.
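
For concreteness, the standard Huffman construction produces codeword lengths against which "length equivalence" can be checked; completeness corresponds to the Kraft sum $\sum_i 2^{-\ell_i}$ equaling $1$, and monotonicity to more probable symbols never receiving longer codewords. The sketch below is the textbook binary Huffman procedure, used only to illustrate these notions, not the paper's characterization proof.

```python
import heapq

def huffman_lengths(probs):
    """Binary Huffman codeword lengths for a probability list.

    Standard textbook construction (an illustration, not the paper's
    method): repeatedly merge the two lightest subtrees; every symbol
    inside a merged subtree gains one bit of depth.
    """
    if len(probs) == 1:
        return [0]
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, ids1 = heapq.heappop(heap)
        p2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:      # all symbols in both subtrees get deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, ids1 + ids2))
    return lengths
```

For the dyadic source $(1/2, 1/4, 1/8, 1/8)$ this yields lengths $(1, 2, 3, 3)$, whose Kraft sum is exactly $1$ (completeness) and which are nondecreasing as probabilities decrease (monotonicity).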

We provide a simple and natural solution to the problem of generating all $2^n \cdot n!$ signed permutations of $[n] = \{1,2,\ldots,n\}$. Our solution provides a pleasing generalization of the most famous ordering of permutations: plain changes (Steinhaus-Johnson-Trotter algorithm). In plain changes, the $n!$ permutations of $[n]$ are ordered so that successive permutations differ by swapping a pair of adjacent symbols, and the order is often visualized as a weaving pattern involving $n$ ropes. Here we model a signed permutation using $n$ ribbons with two distinct sides, and each successive configuration is created by twisting (i.e., swapping and turning over) two neighboring ribbons or a single ribbon. By greedily prioritizing $2$-twists of the largest symbol before $1$-twists of the largest symbol, we create a signed version of plain changes' memorable zig-zag pattern. We provide a loopless algorithm (i.e., worst-case $\mathcal{O}(1)$-time per object) by extending the well-known mixed-radix Gray code algorithm.
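
The objects being generated can be enumerated directly: a signed permutation is a permutation of $[n]$ together with a sign for each symbol, giving $2^n \cdot n!$ in total. The sketch below is plain enumeration only, not the paper's twist-based Gray-code ordering or its loopless algorithm.

```python
from itertools import permutations, product

def signed_permutations(n):
    """Enumerate all signed permutations of [n] = {1, ..., n}.

    Plain enumeration (an illustration of the object, not the paper's
    ordering): every permutation combined with every per-symbol sign.
    """
    for perm in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            yield tuple(s * p for s, p in zip(signs, perm))
```

For $n = 3$ this produces $2^3 \cdot 3! = 48$ distinct tuples such as $(1, 2, 3)$ and $(-3, 1, -2)$.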

We investigate the problem of approximating an incomplete preference relation $\succsim$ on a finite set by a complete preference relation. We aim to obtain this approximation in such a way that the choices on the basis of two preferences, one incomplete, the other complete, have the smallest possible discrepancy in the aggregate. To this end, we use the top-difference metric on preferences, and define a best complete approximation of $\succsim$ as a complete preference relation nearest to $\succsim$ relative to this metric. We prove that such an approximation must be a maximal completion of $\succsim$, and that it is, in fact, any one completion of $\succsim$ with the largest index. Finally, we use these results to provide a sufficient condition for the best complete approximation of a preference to be its canonical completion. This leads to closed-form solutions to the best approximation problem in the case of several incomplete preference relations of interest.

We prove an NP upper bound on a theory of integer-indexed integer-valued arrays that extends combinatory array logic with an ordering relation on the index set and the ability to express sums of elements. We compare our fragment with seven other fragments in the literature in terms of their expressiveness and computational complexity.

The matrix semigroup membership problem asks, given square matrices $M,M_1,\ldots,M_k$ of the same dimension, whether $M$ lies in the semigroup generated by $M_1,\ldots,M_k$. It is classical that this problem is undecidable in general but decidable in case $M_1,\ldots,M_k$ commute. In this paper we consider the problem of whether, given $M_1,\ldots,M_k$, the semigroup generated by $M_1,\ldots,M_k$ contains a nonnegative matrix. We show that in case $M_1,\ldots,M_k$ commute, this problem is decidable subject to Schanuel's Conjecture. We also show that the problem is undecidable if the commutativity assumption is dropped. A key lemma in our decidability result is a procedure to determine, given a matrix $M$, whether the sequence of matrices $(M^n)_{n\geq 0}$ is ultimately nonnegative. This answers a problem posed by S. Akshay (arXiv:2205.09190). The latter result is in stark contrast to the notorious fact that it is not known how to determine effectively whether for any specific matrix index $(i,j)$ the sequence $(M^n)_{i,j}$ is ultimately nonnegative (which is a formulation of the Ultimate Positivity Problem for linear recurrence sequences).
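
To make the ultimate-nonnegativity question concrete, one can probe the power sequence $(M^n)_{n\geq 1}$ numerically up to a finite horizon. This naive probe can only suggest, never prove, ultimate nonnegativity; the paper gives an actual decision procedure, and the sketch below is not that procedure.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nonneg_from(M, horizon=50):
    """First exponent n such that M^n, ..., M^horizon are all entrywise
    nonnegative, or None if the last checked power has a negative entry.

    A finite numerical probe only (illustrative assumption): unlike the
    paper's procedure, it cannot certify behavior beyond the horizon.
    """
    first, P = None, [row[:] for row in M]
    for n in range(1, horizon + 1):
        if all(x >= 0 for row in P for x in row):
            if first is None:
                first = n          # candidate start of the nonnegative tail
        else:
            first = None           # a negative entry resets the candidate
        P = mat_mul(P, M)
    return first
```

For instance $M = \begin{psmallmatrix}2 & 1\\ 3 & -1\end{psmallmatrix}$ has negative entries in $M$ and $M^3$ but, within the horizon, is entrywise nonnegative from $M^4$ on, while $M = \begin{psmallmatrix}2 & -1\\ 0 & 1\end{psmallmatrix}$ keeps a negative entry in every power.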

We address the problem of keypoint selection, and find that the performance of 6DoF pose estimation methods can be improved when pre-defined keypoint locations are learned, rather than heuristically selected, as has been the standard approach. Specifically, accuracy and efficiency can be improved by training a graph network to select a set of dispersed keypoints with similarly distributed votes. These votes, learned by a regression network to accumulate evidence for the keypoint locations, can be regressed more accurately than with previous heuristic keypoint algorithms. The proposed KeyGNet, supervised by a combined loss measuring both Wasserstein distance and dispersion, learns the color and geometry features of the target objects to estimate optimal keypoint locations. Experiments demonstrate that the keypoints selected by KeyGNet improve accuracy on all evaluation metrics of all seven datasets tested, for three keypoint voting methods. On the challenging Occlusion LINEMOD dataset, ADD(S) notably improved by +16.4% with PVN3D, and all core BOP datasets showed an AR improvement of between +1% and +21.5% for all objects. There was also a notable increase in performance when transitioning from single-object to multiple-object training using KeyGNet keypoints, essentially eliminating the SISO-MIMO gap for Occlusion LINEMOD.

We propose a general method to break down a complex main task into a set of easier intermediary sub-tasks, which are formulated in natural language as binary questions related to the final target task. Our method represents each example by a vector consisting of the answers to these questions. We call this representation Natural Language Learned Features (NLLF). NLLF is generated by a small transformer language model (e.g., BERT) that has been trained in a Natural Language Inference (NLI) fashion, using weak labels automatically obtained from a Large Language Model (LLM). We show that the LLM normally struggles with the main task using in-context learning, but can handle these easier sub-tasks and produce useful weak labels to train a BERT. The NLI-like training of the BERT allows for tackling zero-shot inference with any binary question, not only the ones seen during training. We show that this NLLF vector not only helps to reach better performance by enhancing any classifier, but can also be used as the input to an easy-to-interpret machine learning model like a decision tree. This decision tree is interpretable yet reaches high performance, in some cases surpassing that of a pre-trained transformer. We have successfully applied this method to two completely different tasks: detecting incoherence in students' answers to open-ended mathematics exam questions, and screening abstracts for a systematic literature review of scientific papers on climate change and agroecology.
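
The core data flow is simple: each example becomes a vector of binary answers, one per sub-question, which any downstream classifier can consume. In the sketch below, the sub-questions and the keyword-matching stand-in for the NLI-trained answer model are purely illustrative assumptions; in the paper the answers come from a BERT trained on LLM-produced weak labels.

```python
def nllf_vector(example, questions, answer):
    """Build an NLLF-style feature vector: one binary answer per
    sub-question, produced by the `answer` callable.

    In the paper `answer` is a small NLI-trained model; any callable
    mapping (question, example) -> truthy/falsy works here.
    """
    return [int(bool(answer(q, example))) for q in questions]

# Hypothetical sub-questions and a crude keyword-matching mock of the
# answer model -- illustrative assumptions, not the paper's prompts.
questions = ["mentions units?", "contains an equation?"]

def mock_answer(question, example):
    # Stand-in for the NLI model: check the question's key noun appears.
    return question.split()[-1].rstrip("?") in example
```

For example, `nllf_vector("the speed is 5 units per second", questions, mock_answer)` gives `[1, 0]`, a vector that could then feed a decision tree or any other classifier.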

We study the problem of progressive ensemble distillation: Given a large, pretrained teacher model $g$, we seek to decompose the model into smaller, low-inference-cost student models $f_i$, such that progressively evaluating additional models in this ensemble leads to improved predictions. The resulting ensemble allows for flexibly tuning accuracy vs. inference cost at runtime, which is useful for a number of applications in on-device inference. The method we propose, B-DISTIL, relies on an algorithmic procedure that uses function composition over intermediate activations to construct expressive ensembles with similar performance as $g$, but with smaller student models. We demonstrate the effectiveness of B-DISTIL by decomposing pretrained models across standard image, speech, and sensor datasets. We also provide theoretical guarantees in terms of convergence and generalization.
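
The runtime accuracy-vs-cost trade-off can be sketched as evaluating students one by one and stopping once the running prediction is confident enough. The averaging rule and confidence test below are our assumptions for illustration, not B-DISTIL's actual aggregation procedure.

```python
def progressive_predict(students, x, threshold=0.9):
    """Evaluate student models in order, keeping a running averaged
    score, and stop early once it is confident enough.

    Hedged sketch of progressive ensemble evaluation: each student maps
    an input to a score in [0, 1]; the early-exit rule is an assumption.
    Returns (prediction, number of students evaluated).
    """
    total = 0.0
    for i, f in enumerate(students, start=1):
        total += f(x)
        avg = total / i
        if max(avg, 1 - avg) >= threshold:  # confident: skip the rest
            return avg, i
    return total / len(students), len(students)
```

When the first student is already confident, later (more expensive) students are never evaluated; ambiguous inputs fall through to the full ensemble.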

In the Wishart model for sparse PCA we are given $n$ samples $Y_1,\ldots, Y_n$ drawn independently from a $d$-dimensional Gaussian distribution $N(0, \mathrm{Id} + \beta vv^\top)$, where $\beta > 0$ and $v\in \mathbb{R}^d$ is a $k$-sparse unit vector, and we wish to recover $v$ (up to sign). We show that if $n \ge \Omega(d)$, then for every $t \ll k$ there exists an algorithm running in time $n\cdot d^{O(t)}$ that solves this problem as long as \[ \beta \gtrsim \frac{k}{\sqrt{nt}}\sqrt{\ln({2 + td/k^2})}\,. \] Prior to this work, the best polynomial time algorithm in the regime $k\approx \sqrt{d}$, called \emph{Covariance Thresholding} (proposed in [KNV15a] and analyzed in [DM14]), required $\beta \gtrsim \frac{k}{\sqrt{n}}\sqrt{\ln({2 + d/k^2})}$. For large enough constant $t$ our algorithm runs in polynomial time and has better guarantees than Covariance Thresholding. Previously known algorithms with such guarantees required quasi-polynomial time $d^{O(\log d)}$. In addition, we show that our techniques work with sparse PCA with adversarial perturbations studied in [dKNS20]. This model generalizes not only sparse PCA, but also other problems studied in prior works, including the sparse planted vector problem. As a consequence, we provide polynomial time algorithms for the sparse planted vector problem that have better guarantees than the state of the art in some regimes. Our approach also works with the Wigner model for sparse PCA. Moreover, we show that it is possible to combine our techniques with recent results on sparse PCA with symmetric heavy-tailed noise [dNNS22]. In particular, in the regime $k \approx \sqrt{d}$ we get the first polynomial time algorithm that works with symmetric heavy-tailed noise, while the algorithm from [dNNS22] requires quasi-polynomial time in these settings.
