
In this paper, we introduce a variation of the group testing problem where each test is specified by an ordered subset of items and returns the first defective item in the specified order. We refer to this as \textit{cascaded group testing}; the goal is to identify a small set of $K$ defective items amongst a collection of size $N$ using as few tests as possible. For the adaptive testing regime, we show that a simple scheme finds all defective items in at most $K$ tests, which is optimal. For the non-adaptive setting, we first derive a necessary and sufficient condition for any collection of tests to be feasible for recovering all the defectives. Using this, we show that any feasible non-adaptive strategy requires at least $\Omega(K^2)$ tests. In terms of achievability, it is easy to show that a collection of $O(K^2 \log (N/K))$ randomly constructed tests is feasible. We show via carefully constructed explicit designs that one can do significantly better. We provide two simple schemes for $K = 1, 2$ which require only one and two tests respectively, irrespective of the number of items $N$. Note that this is in contrast to standard binary group testing, where at least $\Omega(\log N)$ tests are required. The case of $K \ge 3$ is more challenging, and here we devise an iterative design which requires only $\text{poly}(\log \log N)$ tests.
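
As a concrete illustration of the adaptive scheme, the following minimal Python sketch simulates the cascaded-test oracle and recovers all defectives in at most $K$ tests. The oracle and helper names are ours, not the paper's.

    def cascaded_test(ordered_items, defective_set):
        # Oracle: return the first defective item in the given order, or None.
        for item in ordered_items:
            if item in defective_set:
                return item
        return None

    def adaptive_recovery(items, defective_set, K):
        # Each test queries only the items after the last defective found,
        # so every test reveals a new defective: at most K tests in total.
        found = []
        start = 0
        for _ in range(K):
            d = cascaded_test(items[start:], defective_set)
            if d is None:          # fewer than K defectives remain
                break
            found.append(d)
            start = items.index(d) + 1
        return found

    # Example: N = 20 items, K = 3 defectives -> exactly 3 tests.
    print(adaptive_recovery(list(range(20)), {4, 11, 17}, K=3))  # [4, 11, 17]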

Related Content


In the noisy intermediate-scale quantum era, scientists are trying to improve the entanglement swapping success rate by researching anti-noise technology at the physical level, thereby obtaining a higher generation rate of long-distance entanglement. However, the generation rate may also be improved from another perspective: studying an efficient entanglement swapping strategy. This paper analyzes the challenges faced by existing entanglement swapping strategies, including the node allocation principle, time synchronization, and the processing of entanglement swapping failure. We present Parallel Segment Entanglement Swapping (PSES) to solve these problems. The core idea of PSES is to segment the path and perform parallel entanglement swapping between segments to improve the generation rate of long-distance entanglement. We construct a tree-like model as the carrier of PSES and propose heuristic algorithms called Layer Greedy and Segment Greedy to transform the path into a tree-like model. Moreover, we realize time synchronization and design an on-demand retransmission mechanism to handle entanglement swapping failure. Experiments show that PSES outperforms other entanglement swapping strategies, and that the on-demand retransmission mechanism can reduce the average entanglement swapping time by 80% and the average entanglement consumption by 80%.
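
To see why parallel segment swapping can raise the generation rate, the toy round count below compares one-at-a-time chain swapping with a balanced, tree-like schedule in which disjoint segments swap simultaneously. This simplification (ignoring failures and heralding delays) is ours and does not reproduce the Layer Greedy or Segment Greedy algorithms.

    import math

    def rounds_sequential(n_links):
        # Swap one intermediate node per round: n - 1 rounds for n links.
        return max(n_links - 1, 0)

    def rounds_parallel(n_links):
        # Tree-like schedule: disjoint adjacent segments swap in the same
        # round, halving the number of segments each time.
        return math.ceil(math.log2(n_links)) if n_links > 1 else 0

    for n in (4, 8, 64):
        print(n, rounds_sequential(n), rounds_parallel(n))  # 64 -> 63 vs 6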

Conditional computing processes an input using only part of the neural network's computational units. Learning to execute parts of a deep convolutional network by routing individual samples has several advantages: it reduces the computational burden, and if similar classes are routed to the same path, that part of the network learns to discriminate between finer differences, so better classification accuracies can be attained with fewer parameters. Recently, several papers have exploited this idea to take a particular child of a node in a tree-shaped network or to skip parts of a network. In this work, we follow a Trellis-based approach for generating specific execution paths in a deep convolutional neural network. We design routing mechanisms that use differentiable, information gain-based cost functions to determine which subset of features in a convolutional layer will be executed. We call our method Conditional Information Gain Trellis (CIGT). We show that our conditional execution mechanism achieves comparable or better model performance than unconditional baselines, using only a fraction of the computational resources.
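
A minimal NumPy sketch of the underlying idea of conditional execution, with hard argmax routing at inference time. CIGT's differentiable information-gain objective for training the router is not shown, and all layer names and sizes here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    W_trunk = rng.normal(size=(8, 16))       # shared trunk
    W_router = rng.normal(size=(16, 2))      # scores for two routes
    W_expert = rng.normal(size=(2, 16, 4))   # one branch per route

    def forward(x):
        # Each sample executes exactly one expert branch, so only a
        # fraction of the network's units run per input.
        h = relu(x @ W_trunk)
        route = int(np.argmax(h @ W_router))
        return relu(h @ W_expert[route]), route

    y, route = forward(rng.normal(size=(8,)))
    print("route", route, "output shape", y.shape)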

The potential move from search to question answering (QA) ignited the question of what the move from sponsored search to sponsored QA should look like. We present the first formal analysis of a sponsored QA platform. The platform fuses an organic answer to a question with an ad to produce a so-called {\em sponsored answer}. Advertisers then bid on their sponsored answers. Inspired by Generalized Second Price (GSP) auctions, the QA platform selects the winning advertiser, sets the payment she pays, and shows the user the sponsored answer. We prove an array of results. For example, advertisers are incentivized to be truthful in their bids, i.e., to set them to their true value of the sponsored answer. The resulting setting is stable, with properties of VCG auctions.
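
For intuition, here is a single-slot second-price selection rule of the kind the abstract describes. With one slot, GSP coincides with a Vickrey auction, where truthful bidding is a dominant strategy, consistent with the truthfulness claim above. The paper's exact ranking and payment rules may differ (e.g., quality weighting), so treat this as an assumption-laden sketch.

    def sponsored_qa_auction(bids):
        # Highest bidder wins the sponsored-answer slot and pays the
        # second-highest bid (0 if there is no competing bid).
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        payment = ranked[1][1] if len(ranked) > 1 else 0.0
        return winner, payment

    print(sponsored_qa_auction({"a": 3.0, "b": 5.0, "c": 2.5}))  # ('b', 3.0)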

Sometimes only some digits of a numerical product or some terms of a polynomial or series product are required. Frequently these constitute the most significant or least significant part of the value, for example when computing initial values or refinement steps in iterative approximation schemes. Other situations require the middle portion. In this paper we provide algorithms for the general problem of computing a given span of coefficients within a product, that is, the terms within a range of degrees for univariate polynomials or a range of digits of an integer. This generalizes the "middle product" concept of Hanrot, Quercia and Zimmermann. We are primarily interested in problems of modest size where constant speed-up factors can improve overall system performance, and therefore focus the discussion on classical and Karatsuba multiplication and how the methods may be combined.
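
The classical-multiplication case is easy to sketch: to obtain coefficients lo..hi of a product, one accumulates only the cross terms whose degrees land in that window. The Python below is our illustration of the span computation; the paper's Karatsuba-based variants and their combinations are not shown.

    def product_span(a, b, lo, hi):
        # Coefficients lo..hi (inclusive) of the polynomial product a * b,
        # where a and b are coefficient lists indexed by degree.
        out = [0] * (hi - lo + 1)
        for i, ai in enumerate(a):
            if ai == 0:
                continue
            # Only j with lo <= i + j <= hi contributes to the span.
            for j in range(max(lo - i, 0), min(hi - i, len(b) - 1) + 1):
                out[i + j - lo] += ai * b[j]
        return out

    a = [1, 2, 3, 4]   # 1 + 2x + 3x^2 + 4x^3
    b = [5, 6, 7]      # 5 + 6x + 7x^2
    print(product_span(a, b, 2, 4))  # [34, 52, 45], the middle of a * b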

This book is the result of a seminar in which we reviewed multimodal approaches and attempted to create a solid overview of the field, starting with the current state-of-the-art approaches in each of the two subfields of Deep Learning individually. Further, we discuss modeling frameworks in which one modality is transformed into the other, as well as models in which one modality is utilized to enhance representation learning for the other. To conclude the second part, architectures focused on handling both modalities simultaneously are introduced. Finally, we also cover other modalities as well as general-purpose multi-modal models, which are able to handle different tasks on different modalities within one unified architecture. One interesting application (Generative Art) caps off this booklet.

Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in observable data in representation form. Separating the underlying factors of variation into variables with semantic meaning benefits the learning of explainable representations of data, imitating the meaningful understanding process of humans when observing an object or relation. As a general learning strategy, DRL has demonstrated its power in improving model explainability, controllability, robustness, and generalization capacity in a wide range of scenarios such as computer vision, natural language processing, and data mining. In this article, we comprehensively review DRL from various aspects, including motivations, definitions, methodologies, evaluations, applications, and model designs. We discuss works on DRL based on two well-recognized definitions, i.e., the Intuitive Definition and the Group Theory Definition. We further categorize the methodologies for DRL into five groups, i.e., Traditional Statistical Approaches, Variational Auto-encoder Based Approaches, Generative Adversarial Networks Based Approaches, Hierarchical Approaches, and Other Approaches. We also analyze principles for designing different DRL models that may benefit different tasks in practical applications. Finally, we point out challenges in DRL as well as potential research directions deserving future investigation. We believe this work may provide insights for promoting DRL research in the community.

An adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of \textbf{32 base attackers}. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (\textbf{6$\times$ faster than AutoAttack}), and achieves a new state-of-the-art on $l_{\infty}$, $l_{2}$ and unrestricted adversarial attacks.
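
The sequencing idea is easy to state in code: each attacker in a policy starts from the previous attacker's output rather than from the clean input. The toy attackers below stand in for the 32 base attackers, the NSGA-II search over policies is omitted, and the names and epsilon-ball projection are our assumptions.

    import numpy as np

    def attack_pos(x, x_init, eps):
        # Toy attacker: step in the +1 direction, projected to the eps ball.
        return np.clip(x_init + eps / 2, x - eps, x + eps)

    def attack_rand(x, x_init, eps, seed=0):
        # Toy attacker: random step from the current starting point.
        step = np.random.default_rng(seed).uniform(-eps, eps, size=x.shape)
        return np.clip(x_init + step, x - eps, x + eps)

    def composite_attack(x, policy, eps=0.1):
        # Chain the attackers: the previous output initializes the successor.
        x_adv = x.copy()
        for attack in policy:
            x_adv = attack(x, x_adv, eps)
        return x_adv

    x = np.zeros(4)
    print(composite_attack(x, [attack_pos, attack_rand]))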

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterize the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both a word-based baseline language model and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It encourages the model to predict a more acceptable answer, addressing the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
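
One plausible reading of the reattention idea, sketched in NumPy: attentions computed in earlier alignment rounds are memorized and folded into the current round's alignment scores before normalization. The paper's exact refinement formula differs in detail; gamma and all shapes here are illustrative.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def reattention(scores, past_attention, gamma=0.5):
        # Fold attention memorized from a previous alignment round into
        # the current raw scores, then renormalize.
        return softmax(scores + gamma * past_attention)

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(3, 5))          # query x context scores
    past = softmax(rng.normal(size=(3, 5)))   # previous-round attention
    print(reattention(scores, past).round(2))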

This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify the knowledge representations in the high conv-layers of a CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e., which patterns the CNN bases its decisions on. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
