Recommendation systems rely on user-provided data to learn about item quality and provide personalized recommendations. An implicit assumption made when aggregating ratings into item quality is that ratings are strong indicators of item quality. In this work, we test this assumption using data collected from a music discovery application. Our study focuses on two factors that cause rating inflation: heterogeneous user rating behavior and the dynamics of personalized recommendations. We show that rating behavior varies substantially across users, leading to item quality estimates that reflect the users who rated an item more than the quality of the item itself. Additionally, items that are more likely to be shown via personalized recommendations can experience a substantial increase in exposure, biasing aggregate ratings toward them. To mitigate these effects, we analyze the results of a randomized controlled trial in which the rating interface was modified. The trial resulted in a substantial improvement in user rating behavior and a reduction in item quality inflation. These findings highlight the importance of carefully considering the assumptions underlying recommendation systems and of designing interfaces that encourage accurate rating behavior.
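The heterogeneity problem described above can be made concrete with a toy example (the numbers are invented, and per-user z-scoring is shown only for contrast — it is not the paper's intervention, which was an interface change):

```python
import statistics

# Toy illustration: two users with different rating scales rate a shared
# item "C" plus one item each. Naive per-item means conflate user
# generosity with item quality; per-user z-scores do not.
ratings = {
    "generous_user": {"A": 5.0, "C": 4.0},
    "harsh_user":    {"B": 3.0, "C": 2.0},
}

def naive_item_means(ratings):
    """Average raw ratings per item, ignoring who gave them."""
    items = {}
    for user_ratings in ratings.values():
        for item, r in user_ratings.items():
            items.setdefault(item, []).append(r)
    return {item: statistics.mean(rs) for item, rs in items.items()}

def zscored_item_means(ratings):
    """Standardize each user's ratings before averaging per item."""
    items = {}
    for user_ratings in ratings.values():
        mu = statistics.mean(user_ratings.values())
        sd = statistics.pstdev(user_ratings.values()) or 1.0
        for item, r in user_ratings.items():
            items.setdefault(item, []).append((r - mu) / sd)
    return {item: statistics.mean(zs) for item, zs in items.items()}
```

Here items A and B occupy the same relative position in their rater's scale, yet the naive means rank A well above B; the standardized means treat them identically.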
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating descriptions that are inconsistent with the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset consists of 120k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at two semantic levels: (i) Nonexistent Element Manipulation and (ii) Existent Element Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a novel approach that evaluates visual instruction tuning without requiring human-annotated ground-truth answers and adapts to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate that existing LMMs exhibit significant hallucination when presented with our negative instructions, particularly with Existent Element Manipulation instructions. Moreover, by finetuning MiniGPT4 on LRV-Instruction, we successfully mitigate hallucination while improving performance on public datasets using less training data than state-of-the-art methods. Additionally, we observe that a balanced ratio of positive and negative instances in the training data leads to a more robust model. Updates of our project are available at //fuxiaoliu.github.io/LRV/.
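The two negative-instruction levels can be illustrated with toy samples (the field names, image name, and wording below are invented for illustration and are not the actual LRV-Instruction schema):

```python
# Hypothetical samples illustrating positive vs. the two negative levels.
samples = [
    {   # positive instruction: grounded in the image content
        "image": "kitchen.jpg",
        "instruction": "What is on the counter?",
        "answer": "A bowl of fruit.",
        "type": "positive",
    },
    {   # Nonexistent Element Manipulation: asks about an absent object
        "image": "kitchen.jpg",
        "instruction": "What color is the dog sitting on the counter?",
        "answer": "There is no dog in the image.",
        "type": "negative/nonexistent",
    },
    {   # Existent Element Manipulation: misstates a real object's attribute
        "image": "kitchen.jpg",
        "instruction": "Why is the red bowl empty?",
        "answer": "The bowl is not empty; it contains fruit.",
        "type": "negative/existent",
    },
]
```

A robustly tuned model should follow the positive instruction while pushing back on both kinds of negative premise rather than hallucinating a confirming answer.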
In spite of the excellent strides made by end-to-end (E2E) models in speech recognition in recent years, named entity recognition is still challenging but critical for semantic understanding. In order to enhance the ability to recognize named entities in E2E models, previous studies mainly focus on various rule-based or attention-based contextual biasing algorithms. However, their performance might be sensitive to the biasing weight or degraded by excessive attention to the named entity list, along with a risk of false triggering. Inspired by the success of the class-based language model (LM) in named entity recognition in conventional hybrid systems and the effective decoupling of acoustic and linguistic information in the factorized neural Transducer (FNT), we propose a novel E2E model that incorporates class-based LMs into FNT, referred to as C-FNT. In C-FNT, the language model score of a named entity can be associated with its name class instead of its surface form. The experimental results show that our proposed C-FNT presents significant error reduction in named entities without hurting performance in general word recognition.
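The core idea of scoring a named entity by its class rather than its surface form can be sketched as follows (the lexicon, class tokens, and bigram probabilities are invented for illustration and do not reflect the paper's C-FNT implementation):

```python
# Toy class-based LM: named-entity surface forms map to class tokens,
# so rare names inherit the (well-estimated) probability of their class.
LEXICON = {"alice": "<NAME>", "bob": "<NAME>", "paris": "<CITY>"}

# Bigram log-probabilities over class tokens; unseen bigrams back off
# to a flat out-of-vocabulary penalty.
BIGRAM_LOGP = {("call", "<NAME>"): -0.5, ("visit", "<CITY>"): -0.7}
OOV_LOGP = -10.0

def to_classes(tokens):
    """Replace named-entity surface forms with their class tokens."""
    return [LEXICON.get(t, t) for t in tokens]

def lm_score(tokens):
    """Sum bigram log-probabilities over the token sequence."""
    return sum(BIGRAM_LOGP.get(bg, OOV_LOGP) for bg in zip(tokens, tokens[1:]))
```

Scoring `["call", "alice"]` directly hits the OOV penalty, while scoring its class-mapped form `["call", "<NAME>"]` uses the reliable class bigram — which is why rare entities benefit from class-based scoring.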
While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to 'replicate' training data raises privacy concerns. Although recent research suggests that this replication may stem from the insufficient generalization of training data captions and duplication of training images, effective mitigation strategies remain elusive. To address this gap, our paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. Subsequently, we leverage the generalized captions and propose a novel dual fusion enhancement approach to mitigate the replication of diffusion models. Our empirical results demonstrate that our proposed methods can significantly reduce replication by 43.5% compared to the original diffusion model while maintaining the diversity and quality of generations.
Manufacturing industries require efficient and voluminous production of high-quality finished goods. In the context of Industry 4.0, visual anomaly detection poses an optimistic solution for automatically controlling product quality with high precision, and automation based on computer vision is a promising way to prevent bottlenecks at the product quality checkpoint. Recent advances in machine learning have improved visual defect localization, but challenges persist in obtaining a balanced feature set and a database covering the wide variety of defects occurring in the production line. This paper proposes a defect localizing autoencoder with unsupervised class selection, obtained by applying k-means clustering to features extracted from a pre-trained VGG-16 network. The selected classes of defects are augmented with natural wild textures to simulate artificial defects. The study demonstrates the effectiveness of the defect localizing autoencoder with unsupervised class selection for improving defect detection in manufacturing industries. The proposed methodology shows promising results with precise and accurate localization of quality defects on melamine-faced boards for the furniture industry. Incorporating artificial defects into the training data shows significant potential for practical implementation in real-world quality control scenarios.
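The unsupervised class-selection step can be sketched with plain k-means (random Gaussian blobs stand in for real VGG-16 activations; the feature dimension, cluster count, and farthest-point initialization are simplifications chosen for this sketch):

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Cluster feature vectors into k groups; stands in for grouping
    VGG-16 patch features into candidate defect classes."""
    # farthest-point initialization keeps the sketch deterministic
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[int(np.argmax(d))])
    centers = np.stack(centers)
    for _ in range(iters):
        # assign each feature vector to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centers, keeping the old one if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# stand-in for pooled CNN features of two visually distinct patch types
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(5, 1, (50, 64))])
labels = kmeans(feats, k=2)
```

In the actual pipeline, each resulting cluster would correspond to a candidate class of surface appearance, which can then be augmented with wild textures to simulate defects.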
Functional encryption is a powerful paradigm for public-key encryption which allows for controlled access to encrypted data. This primitive is generally impossible in the standard setting so we investigate possibilities in the bounded quantum storage model (BQSM) and the bounded classical storage model (BCSM). In these models, ciphertexts potentially disappear which nullifies impossibility results and allows us to obtain positive outcomes. Firstly, in the BQSM, we construct information-theoretically secure functional encryption with $\texttt{q}=O(\sqrt{\texttt{s}/\texttt{r}})$ where $\texttt{r}$ can be set to any value less than $\texttt{s}$. Here $\texttt{r}$ denotes the number of times that an adversary is restricted to $\texttt{s}$ qubits of quantum memory in the protocol and $\texttt{q}$ denotes the required quantum memory to run the protocol honestly. We then show that our scheme is optimal by proving that it is impossible to attain information-theoretically secure functional encryption with $\texttt{q} < \sqrt{\texttt{s}/\texttt{r}}$. However, by assuming the existence of post-quantum one-way functions, we can do far better and achieve functional encryption with classical keys and with $\texttt{q}=0$ and $\texttt{r}=1$. Secondly, in the BCSM, we construct $(O(\texttt{n}),\texttt{n}^2)$ functional encryption assuming the existence of $(\texttt{n},\texttt{n}^2)$ virtual weak grey-box obfuscation. Here, the pair $(\texttt{n},\texttt{n}^2)$ indicates the required memory to run honestly and the needed memory to break security, respectively. This memory gap is optimal and the assumption is minimal. In particular, we also construct $(O(\texttt{n}),\texttt{n}^2)$ virtual weak grey-box obfuscation assuming $(\texttt{n},\texttt{n}^2)$ functional encryption.
The Harrisonburg Department of Public Transportation (HDPT) aims to leverage their data to improve the efficiency and effectiveness of their operations. We construct two supply and demand models that help the department identify gaps in their service. The models take many variables into account, including the way that the HDPT reports to the federal government and the areas with the most vulnerable populations in Harrisonburg City. We employ data analysis and machine learning techniques to make our predictions.
The development of quantum computers has been advancing rapidly in recent years. As quantum computers become more widely accessible, potentially malicious users could try to execute their code on the machines to leak information from other users, to interfere with or manipulate the results of other users, or to reverse engineer the underlying quantum computer architecture and its intellectual property, for example. Among different security threats, previous work has demonstrated information leakage across reset operations and proposed that a secure reset operation could be an enabling technology allowing a quantum computer to be shared among different users, or among different quantum programs of the same user. This work first shows a set of new, extended reset operation attacks that can be more stealthy by hiding the intention of the attacker's circuit. It shows various masking circuits and how attackers can retrieve information from the execution of a previous shot of a circuit, even if a masking circuit is inserted between the reset operation (of the victim, after the shot of the circuit is executed) and the measurement (of the attacker). Based on the uncovered attacks, this work proposes a set of heuristic checks that can be applied at transpile time to detect malicious circuits that try to steal information via the attack on the reset operation. Unlike run-time protection or added secure reset gates, this work proposes a complementary, compile-time security solution to the attacks on the reset operation.
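As a toy illustration of what such a transpile-time heuristic might look like (the flat gate representation and the masking-gate set below are simplifications invented here, not the paper's actual checks), one can scan a circuit's operation list and flag any qubit that is measured while its last non-masking operation was a reset:

```python
# Assumed set of single-qubit "masking" gates an attacker might insert
# to disguise a probe of the post-reset state (illustrative only).
MASKING_GATES = {"x", "h", "rz", "barrier"}

def flag_reset_probe(ops):
    """Flag qubits measured after a reset with only masking gates between.

    `ops` is a simplified list of (gate_name, qubit) pairs in program order.
    """
    last = {}        # qubit -> "reset" or "other" (last non-masking op)
    flagged = set()
    for name, qubit in ops:
        if name == "reset":
            last[qubit] = "reset"
        elif name == "measure":
            # measuring a freshly reset qubit through masking gates only
            # matches the attack pattern, so flag it
            if last.get(qubit) == "reset":
                flagged.add(qubit)
        elif name not in MASKING_GATES:
            last[qubit] = "other"
    return flagged
```

A circuit that resets, applies an X mask, and measures the same qubit is flagged, while one that performs genuine entangling work between reset and measurement is not; a real check would of course operate on the transpiler's internal circuit representation.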
With the rapid development of distributed energy resources, an increasing number of residential and commercial users have switched from being pure electricity consumers to prosumers that can both consume and produce energy. To properly manage these emerging prosumers, peer-to-peer electricity markets have been explored and extensively studied. In such a market, each prosumer trades energy directly with other prosumers, posing a serious challenge to the scalability of the market. Therefore, this paper proposes a bilateral energy trading mechanism with good scalability for electricity markets with numerous prosumers. First, the multi-bilateral economic dispatch problem that maximizes social welfare is formulated, taking into account product differentiation and network constraints. Then, an energy trading mechanism is devised to improve scalability from two aspects: (i) an accelerated distributed clearing algorithm with less exchanged information and a faster convergence rate, and (ii) a novel selection strategy that reduces the amount of computation and communication per prosumer. Finally, the convergence proof of the proposed accelerated algorithm is given, and the proposed selection strategy is illustrated through a Monte Carlo simulation experiment.
Visual Question Answering (VQA) models have so far struggled with counting objects in natural images. We identify the use of soft attention in these models as a fundamental cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by 6.6%.
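The intuition behind counting from object proposals can be sketched with a hard, non-differentiable toy version (the paper's component performs this deduplication softly and differentiably; the thresholds and greedy suppression below are invented for this sketch): overlapping proposals for the same object are suppressed before counting the survivors.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_objects(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Keep high-scoring proposals, greedily suppress overlaps, count survivors."""
    keep = []
    for i in sorted(range(len(boxes)), key=lambda i: -scores[i]):
        if scores[i] < score_thr:
            continue
        # a proposal counts only if it does not duplicate a kept one
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return len(keep)

# two near-duplicate proposals for one object, one for another
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
```

Soft attention, by contrast, averages proposal features weighted by relevance, which discards the multiplicity information this hard count preserves; the paper's contribution is recovering that count in a differentiable way.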
Detecting carried objects is one of the requirements for developing systems to reason about activities involving people and objects. We present an approach to detect carried objects from a single video frame with a novel method that incorporates features from multiple scales. Initially, a foreground mask in a video frame is segmented into multi-scale superpixels. Then the human-like regions in the segmented area are identified by matching a set of extracted features from superpixels against learned features in a codebook. A carried object probability map is generated using the complement of the matching probabilities of superpixels to human-like regions and background information. A group of superpixels with high carried object probability and strong edge support is then merged to obtain the shape of the carried object. We applied our method to two challenging datasets, and results show that our method is competitive with or better than the state-of-the-art.