
Crowdsourcing is a popular method used to estimate ground-truth labels by collecting noisy labels from workers. In this work, we are motivated by crowdsourcing applications where each worker can exhibit two levels of accuracy depending on a task's type. Applying algorithms designed for the traditional Dawid-Skene model to such a scenario results in performance which is limited by the hard tasks. Therefore, we first extend the model to allow worker accuracy to vary depending on a task's unknown type. Then we propose a spectral method to partition tasks by type. After separating tasks by type, any Dawid-Skene algorithm (i.e., any algorithm designed for the Dawid-Skene model) can be applied independently to each type to infer the truth values. We theoretically prove that when crowdsourced data contain tasks with varying levels of difficulty, our algorithm infers the true labels with higher accuracy than any Dawid-Skene algorithm. Experiments show that our method is effective in practical applications.
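
The paper's partitioning step is spectral. As a rough illustration only, the following Python sketch clusters tasks into two types from the singular vectors of the (centered) task-by-worker answer matrix and then hands each group to a separate Dawid-Skene solver; the variable names and the use of 2-means are assumptions, not the authors' exact construction.

    import numpy as np
    from sklearn.cluster import KMeans

    # answers: (num_tasks x num_workers) matrix with entries in {-1, +1}; hypothetical layout.
    rng = np.random.default_rng(0)
    answers = rng.choice([-1, 1], size=(200, 30))

    # Top-2 left singular vectors of the centered matrix give a low-dimensional task embedding.
    centered = answers - answers.mean(axis=0, keepdims=True)
    U, S, _ = np.linalg.svd(centered, full_matrices=False)
    embedding = U[:, :2] * S[:2]

    # Split tasks into two groups (the two task types) and run any Dawid-Skene
    # algorithm separately on answers[types == 0] and answers[types == 1].
    types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)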

Related content


In deep learning, mini-batch training is commonly used to optimize network parameters. However, the traditional mini-batch method may not learn the under-represented samples and complex patterns in the data, leading to a longer time for generalization. To address this problem, we propose a variant of the traditional algorithm that focuses training on mini-batches with high loss. We evaluate the effectiveness of the proposed training scheme using various deep neural networks trained on three benchmark datasets (CIFAR-10, CIFAR-100, and STL-10). The deep neural networks used in the study are ResNet-18, ResNet-50, EfficientNet-B4, EfficientNetV2-S, and MobileNetV3-S. The experimental results show that the proposed method can significantly improve test accuracy and speed up convergence compared to the traditional mini-batch training method. Furthermore, we introduce a hyper-parameter $\delta$ that decides how many mini-batches are considered for training. Experiments with various values of $\delta$ show that smaller values generally yield similar test accuracy and faster generalization. We show that the proposed method generalizes in 26.47% fewer epochs than the traditional mini-batch method for EfficientNet-B4 on STL-10. The proposed method also improves the test top-1 accuracy by 7.26% for ResNet-18 on CIFAR-100.
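
As a hedged illustration of training on high-loss mini-batches, the PyTorch sketch below scores every mini-batch once without gradients, keeps the top-$\delta$ fraction by loss, and updates only on those; the two-pass structure and the function name train_epoch_high_loss are assumptions made for clarity rather than the authors' exact procedure, and keeping all batches in memory is for brevity only.

    import torch
    import torch.nn as nn

    def train_epoch_high_loss(model, loader, optimizer, delta=0.25, device="cpu"):
        criterion = nn.CrossEntropyLoss()
        # Pass 1: score each mini-batch by its current loss (no gradients).
        model.eval()
        scored = []
        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                scored.append((criterion(model(x), y).item(), x, y))
        scored.sort(key=lambda t: t[0], reverse=True)
        keep = scored[: max(1, int(delta * len(scored)))]
        # Pass 2: update only on the selected high-loss mini-batches.
        model.train()
        for _, x, y in keep:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()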

We study online weighted bipartite matching of reusable resources, where an adversarial sequence of requests for resources arrives over time. A resource that is matched is 'used' for a random duration, drawn independently from a resource-dependent distribution, after which it returns and can be matched again. We study the performance of the greedy policy, which matches each request to the available resource that yields the highest reward. Previously, it was known that the greedy policy is 1/2-competitive against a clairvoyant benchmark that knows the request sequence in advance. In this work, we improve this result by introducing a parameter that quantifies the degree of reusability of the resources. Specifically, if $p$ is the smallest probability over the usage distributions that a matched resource returns in one time step, the greedy policy achieves a competitive ratio of $1/(2-p)$. Furthermore, when the usage distributions are geometric, we establish a stronger competitive ratio of $(1+p)/2$, which we show to be tight. Both results align with the known results in the two extreme scenarios: $p = 0$ corresponds to non-reusable resources, where 1/2 is known to be tight, while $p = 1$ corresponds to every resource returning immediately, where greedy is the optimal policy and hence the competitive ratio is 1. Finally, we show that both results are robust to approximations of the greedy policy. Our work demonstrates that the reusability of resources can enhance performance compared to the non-reusable setting, and that a simple greedy policy suffices when the degree of reusability is high. Our insights contribute to the understanding of how resource reusability can influence the performance of online algorithms, and highlight the potential for improved performance as the degree of reusability increases.
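
To make the greedy policy concrete, here is a small simulation sketch (assumed data layout, not from the paper): each busy resource returns independently in each step, and an arriving request is matched to the available resource with the highest reward. With geometric usage durations, the minimum of return_prob over resources plays the role of the parameter $p$.

    import random

    def greedy_match(requests, rewards, return_prob, seed=0):
        """requests: sequence of request types; rewards[j][i]: reward of serving
        request type j with resource i; return_prob[i]: per-step return probability."""
        rng = random.Random(seed)
        n = len(return_prob)
        busy = [False] * n
        total = 0.0
        for j in requests:
            # Busy resources may return at the start of the step (geometric usage).
            for i in range(n):
                if busy[i] and rng.random() < return_prob[i]:
                    busy[i] = False
            # Greedy: match to the available resource with the highest reward.
            avail = [i for i in range(n) if not busy[i]]
            if avail:
                best = max(avail, key=lambda i: rewards[j][i])
                total += rewards[j][best]
                busy[best] = True
        return total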

In many data analysis tasks, we may only have information on the explanatory variables, while evaluating the response values is quite expensive. Since it is impractical or too costly to obtain the responses of all units, a natural remedy is to judiciously select a good sample of units for which the responses are to be evaluated. In this paper, we adopt classical criteria from the design of experiments to quantify the information a given sample carries about parameter estimation. We then provide a theoretical justification for approximating the optimal sample problem by a continuous problem, for which fast algorithms with guaranteed global convergence can be developed. Our results have the following novelties: (i) the statistical efficiency of any candidate sample can be evaluated without knowing the exact optimal sample; (ii) the approach applies to a very wide class of statistical models; (iii) it can be integrated with a broad class of information criteria; (iv) it is much faster than existing algorithms; and (v) a geometric interpretation is adopted to theoretically justify relaxing the original combinatorial problem to a continuous optimization problem.
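
As one concrete instance of such an information criterion, the sketch below evaluates the classical D-optimality (log-determinant of the information matrix) of a candidate subsample under a linear model; this is only an illustrative special case of the broad class of models and criteria the paper covers.

    import numpy as np

    def d_criterion(X, idx):
        """log det of X_S^T X_S for the candidate sample S given by row indices idx."""
        Xs = X[idx]
        sign, logdet = np.linalg.slogdet(Xs.T @ Xs)
        return logdet if sign > 0 else -np.inf

    # Score a random subsample of 50 out of 1000 units with 5 covariates.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 5))
    print(d_criterion(X, rng.choice(1000, size=50, replace=False)))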

Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by means of assigning confidence scores. This confidence may be obtained by either quantifying the model's predictive uncertainty, learning explicit scoring functions, or assessing whether the input is in line with the training distribution. Curiously, while these approaches all claim to address the same eventual goal of detecting failures of a classifier upon real-life application, they currently constitute largely separate research fields with individual evaluation protocols, which either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal current pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study that, for the first time, enables benchmarking of confidence scoring functions with respect to all relevant methods and failure sources. The finding that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation in the abundance of published research on confidence scoring. Code and trained models are available at https://github.com/IML-DKFZ/fd-shifts.
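
For reference, the softmax response baseline mentioned above is simply the maximum softmax probability used as a confidence score; a minimal NumPy version is sketched below (thresholding it to flag failures is one common use, while the evaluation protocol itself is the subject of the paper).

    import numpy as np

    def softmax_response(logits):
        """Maximum softmax probability per prediction; higher means more confident."""
        z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)

    logits = np.array([[2.0, 0.1, -1.0], [0.3, 0.2, 0.25]])
    print(softmax_response(logits))   # the second, low-confidence prediction could be flagged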

Despite their prevalence in eHealth applications for behavior change, persuasive messages tend to have small effects on behavior. Conditions or states (e.g., confidence, knowledge, motivation) and characteristics (e.g., gender, age, personality) of persuadees are two promising components for more effective algorithms for choosing persuasive messages. However, it is not yet sufficiently clear how well considering these components allows one to predict behavior after persuasive attempts, especially in the long run. Since collecting data for many algorithm components is costly and places a burden on users, a better understanding of the impact of individual components in practice is welcome. This can help to make an informed decision on which components to use. We thus conducted a longitudinal study in which a virtual coach persuaded 671 daily smokers to do preparatory activities for quitting smoking and becoming more physically active, such as envisioning one's desired future self. Based on the collected data, we designed a reinforcement learning (RL) approach that considers current and future states to maximize the effort people spend on their activities. Using this RL approach, we found, based on leave-one-out cross-validation, that considering states helps to predict both behavior and future states. User characteristics, and especially involvement in the activities, on the other hand, only help to predict behavior when used in combination with states rather than alone. We see these results as supporting the use of states and involvement in persuasion algorithms. Our dataset is available online.
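
The abstract does not give algorithmic details, but as a loose illustration of state-based persuasion policies, the sketch below runs tabular Q-learning over hypothetical user states and activity types against a user-supplied simulator; the state space, rewards, and toy_env are invented for illustration and do not reflect the study's actual formulation.

    import random
    from collections import defaultdict

    def q_learning(env_step, states, actions, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
        """Learn which activity (action) to propose in each user state to maximize effort."""
        Q = defaultdict(float)
        for _ in range(episodes):
            s = random.choice(states)
            for _ in range(10):                      # short horizon per simulated user
                a = (random.choice(actions) if random.random() < eps
                     else max(actions, key=lambda a_: Q[(s, a_)]))
                s_next, r = env_step(s, a)           # simulator returns next state and effort
                best_next = max(Q[(s_next, a_)] for a_ in actions)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s_next
        return Q

    # Toy simulator with 3 states and 2 activity types (purely illustrative).
    def toy_env(s, a):
        return random.randint(0, 2), random.random() + (0.5 if a == s % 2 else 0.0)

    Q = q_learning(toy_env, states=[0, 1, 2], actions=[0, 1])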

Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase on a set of base classes. Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach for FSC. Fine-tuning ViTs, however, is expensive in time, compute, and storage. This has motivated the design of parameter-efficient fine-tuning (PEFT) methods, which fine-tune only a fraction of the Transformer's parameters. While these methods have shown promise, inconsistencies in experimental conditions make it difficult to disentangle their advantage from other experimental factors, including the feature extractor architecture, pre-trained initialization, and fine-tuning algorithm, amongst others. In our paper, we conduct a large-scale, experimentally consistent empirical analysis to study PEFT methods for few-shot image classification. Through a battery of over 1.8k controlled experiments on large-scale few-shot benchmarks including Meta-Dataset (MD) and ORBIT, we uncover novel insights on PEFT methods that shed light on their efficacy in fine-tuning ViTs for few-shot classification. Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call AttnScale), along with a domain-residual adapter (DRA) module, leads to state-of-the-art performance (while being $\sim 9\times$ more parameter-efficient) on MD. Our extensive empirical findings set strong baselines and call for rethinking the current design of PEFT methods for FSC.
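
LN-Tune as described amounts to freezing all ViT parameters except those of the LayerNorm modules. The sketch below does this for any PyTorch nn.Module; the helper name and the commented torchvision usage are assumptions, not code from the paper.

    import torch
    import torch.nn as nn

    def ln_tune_parameters(vit: nn.Module):
        """Freeze everything except LayerNorm weights/biases; return the tunable params."""
        tunable = []
        for module in vit.modules():
            if isinstance(module, nn.LayerNorm):
                tunable.extend(module.parameters())
        tunable_ids = {id(p) for p in tunable}
        for p in vit.parameters():
            p.requires_grad = id(p) in tunable_ids
        return tunable

    # Usage with any ViT built on nn.LayerNorm, e.g.:
    # vit = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
    # optimizer = torch.optim.AdamW(ln_tune_parameters(vit), lr=1e-3)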

In the field of sampling algorithms, MCMC (Markov Chain Monte Carlo) methods are widely used when direct sampling is not possible. However, multimodality of target distributions often leads to slow convergence and mixing. One common solution is parallel tempering. Though it is highly effective in practice, theoretical guarantees on its performance are limited. In this paper, we present a new lower bound on the spectral gap of parallel tempering that has a polynomial dependence on all parameters except $\log L$, where $(L + 1)$ is the number of levels. This improves on the best existing bound, which depends exponentially on the number of modes. Moreover, we complement our result with a hypothetical upper bound on the spectral gap that has an exponential dependence on $\log L$, which shows that, in some sense, our bound is tight.
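
For readers unfamiliar with the algorithm being analyzed, here is a minimal parallel tempering (replica exchange) sketch on a 1-D bimodal target with $(L + 1)$ temperature levels; the step size, swap schedule, and example target are arbitrary choices for illustration, not taken from the paper.

    import numpy as np

    def parallel_tempering(log_target, betas, n_iters=5000, step=0.5, seed=0):
        """betas: inverse temperatures with betas[0] = 1 (the level of interest)."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=len(betas))              # one chain per level
        samples = []
        for _ in range(n_iters):
            # Within-level random-walk Metropolis updates targeting pi(x)^beta.
            for k, beta in enumerate(betas):
                prop = x[k] + step * rng.normal()
                if np.log(rng.random()) < beta * (log_target(prop) - log_target(x[k])):
                    x[k] = prop
            # Swap move between a random pair of adjacent levels.
            k = rng.integers(len(betas) - 1)
            log_acc = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
            if np.log(rng.random()) < log_acc:
                x[k], x[k + 1] = x[k + 1], x[k]
            samples.append(x[0])
        return np.array(samples)

    # Example: bimodal target proportional to exp(-(x^2 - 4)^2 / 2).
    trace = parallel_tempering(lambda x: -((x**2 - 4) ** 2) / 2, betas=[1.0, 0.5, 0.25, 0.1])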

Studies on benign overfitting provide insights into the success of overparameterized deep learning models. In this work, we examine whether overfitting is truly benign in real-world classification tasks. We start with the observation that a ResNet model overfits benignly on CIFAR-10 but not benignly on ImageNet. To understand why benign overfitting fails in the ImageNet experiment, we theoretically analyze benign overfitting under a more restrictive setup where the number of parameters is not significantly larger than the number of data points. Under this mild overparameterization setup, our analysis identifies a phase change: unlike in the previous heavy overparameterization settings, benign overfitting can now fail in the presence of label noise. Our analysis explains our empirical observations and is validated by a set of control experiments with ResNets. Our work highlights the importance of understanding implicit bias in underfitting regimes as a future direction.
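
The label-noise control experiments referred to above can be emulated with a simple symmetric-noise injection like the sketch below (illustrative only; the paper's exact noise model and training protocol may differ): train to near-perfect accuracy on the noisy labels and compare clean test accuracy with and without noise.

    import numpy as np

    def inject_label_noise(labels, noise_rate, num_classes, seed=0):
        """Replace a noise_rate fraction of integer labels with uniformly random classes."""
        rng = np.random.default_rng(seed)
        labels = labels.copy()
        flip = rng.random(len(labels)) < noise_rate
        labels[flip] = rng.integers(num_classes, size=int(flip.sum()))
        return labels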

Higher-order finite difference Weighted Essentially Non-Oscillatory (WENO) schemes have been constructed for conservation laws. For multidimensional problems, they offer high-order accuracy at a fraction of the cost of a finite volume WENO or DG scheme of comparable accuracy. This makes them quite attractive for several science and engineering applications. But, to the best of our knowledge, such schemes have not been extended to non-linear hyperbolic systems with non-conservative products. In this paper, we perform such an extension, which broadens the domain of applicability of these schemes. The extension is carried out by writing the scheme in fluctuation form. We use the HLLI Riemann solver of Dumbser and Balsara (2016) as a building block for carrying out this extension. Because of the use of an HLL building block, the resulting scheme has a proper supersonic limit. The use of anti-diffusive fluxes ensures that stationary discontinuities can be preserved by the scheme, further expanding its domain of applicability. Our new finite difference WENO formulation uses the same WENO reconstruction that was used in classical versions, making it very easy for users to transition over to the present formulation. For conservation laws, the new finite difference WENO is shown to perform as well as the classical version of finite difference WENO, with two major advantages: (1) it can capture jumps in stationary, linearly degenerate wave families exactly, and (2) it requires the reconstruction to be applied only once. Several examples from hyperbolic PDE systems with non-conservative products are shown, which indicate that the scheme works and achieves its design order of accuracy for smooth multidimensional flows. Stringent Riemann ... *Abstract truncated, see PDF*
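
Since the new scheme reuses the classical WENO reconstruction unchanged, the standard fifth-order WENO-JS interface reconstruction from five cell values is sketched below; this is textbook material included for orientation, not the paper's fluctuation-form update itself.

    import numpy as np

    def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
        """Left-biased WENO-JS value at the interface x_{i+1/2} from cells i-2..i+2."""
        # Smoothness indicators of the three candidate stencils.
        b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
        b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
        b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
        # Third-order candidate reconstructions and ideal linear weights (1/10, 6/10, 3/10).
        p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
        p1 = (-vm1 + 5*v0 + 2*vp1) / 6
        p2 = (2*v0 + 5*vp1 - vp2) / 6
        a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
        w = a / a.sum()
        return w[0]*p0 + w[1]*p1 + w[2]*p2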

Inverse problems are mathematically ill-posed. Thus, given some (noisy) data, there is more than one solution that fits the data. In recent years, deep neural techniques have been developed that find the most appropriate solution, in the sense that it incorporates a priori information. However, they suffer from several shortcomings. First, most techniques cannot guarantee that the solution fits the data at inference. Second, while the derivation of these techniques is inspired by the existence of a valid scalar regularization function, in practice they do not rely on such a function, and therefore veer away from classical variational techniques. In this work, we introduce a new family of neural regularizers for the solution of inverse problems. These regularizers are based on a variational formulation and are guaranteed to fit the data. We demonstrate their use on a number of highly ill-posed problems, from image deblurring to limited-angle tomography.
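
To illustrate what a variational formulation looks like in code, the sketch below minimizes 0.5*||Ax - y||^2 + lam*reg(x) by gradient descent with a stand-in smoothness regularizer; it shows the formulation only and makes no attempt to reproduce the paper's data-fit guarantee or learned regularizers.

    import torch

    def variational_solve(A, y, reg, lam=0.1, n_iters=500, lr=1e-2):
        """Minimize the variational objective 0.5*||Ax - y||^2 + lam*reg(x)."""
        x = torch.zeros(A.shape[1], requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(n_iters):
            opt.zero_grad()
            loss = 0.5 * torch.sum((A @ x - y) ** 2) + lam * reg(x)
            loss.backward()
            opt.step()
        return x.detach()

    # A finite-difference smoothness penalty stands in for a learned neural regularizer.
    A, y = torch.randn(20, 50), torch.randn(20)
    x_hat = variational_solve(A, y, reg=lambda x: torch.sum((x[1:] - x[:-1]) ** 2))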
