Recent advancements in Artificial Intelligence (AI) and machine learning have demonstrated transformative capabilities across diverse domains. This progress extends to patent analysis and innovation, where AI-based tools offer opportunities to streamline and enhance key tasks in the patent cycle, such as classification, retrieval, and valuation prediction. This not only improves the efficiency of patent researchers and applicants but also opens new avenues for technological innovation and discovery. Our survey provides a comprehensive summary of recent AI tools in patent analysis, covering more than 40 papers from 26 venues published between 2017 and 2023. Unlike existing surveys, we include methods that work on both patent image and text data. Furthermore, we introduce a novel taxonomy that categorizes these methods by the tasks they address in the patent life cycle as well as by the specifics of the AI methods. This interdisciplinary survey aims to serve as a resource for researchers and practitioners working at the intersection of AI and patent analysis, as well as for patent offices aiming to build efficient patent systems.
Vision-based industrial inspection (VII) aims to locate defects quickly and accurately. Supervised learning under a closed-set setting and industrial anomaly detection, the two common paradigms in VII, face different problems in practical applications: the former requires a sufficient variety of defect samples, which are difficult to obtain, while the latter cannot localize specific defects. To solve these problems, in this paper we focus on the few-shot semantic segmentation (FSS) method, which can locate unseen defects conditioned on a few annotations without retraining. Compared to common objects in natural images, the defects in VII are small. This brings two problems to current FSS methods: (1) distortion of target semantics and (2) many false positives on the background. To alleviate these problems, we propose a small object few-shot segmentation (SOFS) model. The key idea for alleviating (1) is to avoid resizing the original image and to correctly indicate the intensity of the target semantics; SOFS achieves this via a non-resizing procedure and prototype intensity downsampling of the support annotations. To alleviate (2), we design an abnormal prior map in SOFS that guides the model to reduce false positives, and propose a mixed normal Dice loss that preferentially prevents the model from predicting false positives. SOFS can perform both FSS and few-shot anomaly detection, as determined by the support masks. Diverse experiments substantiate the superior performance of SOFS. Code is available at https://github.com/zhangzilongc/SOFS.
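To illustrate how a Dice-style loss can be tuned to preferentially suppress false positives on normal (background) pixels, here is a minimal sketch. The weighting below resembles a Tversky-style asymmetry; the function name, the `fp_weight` parameter, and the exact formulation are assumptions for illustration and may differ from the loss used in the SOFS paper.

```python
import torch

def mixed_normal_dice_loss(pred, target, fp_weight=2.0, eps=1e-6):
    """Hypothetical Dice-style loss that up-weights false positives on
    normal (background) pixels; the paper's exact formulation may differ.

    pred:   (B, H, W) predicted foreground probabilities in [0, 1]
    target: (B, H, W) binary ground-truth masks
    """
    pred, target = pred.flatten(1), target.flatten(1)
    tp = (pred * target).sum(dim=1)         # soft true positives
    fp = (pred * (1 - target)).sum(dim=1)   # soft false positives (penalized harder)
    fn = ((1 - pred) * target).sum(dim=1)   # soft false negatives
    dice = (2 * tp + eps) / (2 * tp + fp_weight * fp + fn + eps)
    return (1 - dice).mean()
```

Setting `fp_weight > 1` makes predictions on normal regions costlier than missed target pixels, which matches the stated goal of preferentially preventing false positives.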
Recently, deep learning-based single-frame infrared small target (SIRST) detection has made significant progress. However, existing infrared small target detection methods are often optimized for a fixed image resolution, a single wavelength, or a specific imaging system, limiting their breadth and flexibility in practical applications. Therefore, we propose a refined infrared small target detection scheme based on an adjustable sensitivity (AS) strategy and multi-scale fusion. Specifically, we construct a multi-scale model fusion framework based on the multi-scale direction-aware network (MSDA-Net), which trains multiple models on input images of multiple scales and fuses their predictions. Multi-scale fusion helps characterize the shape, edge, and texture features of the target at different scales, making the model more accurate and reliable in locating the target. At the same time, we take the characteristics of the infrared small target detection task into account and construct an edge enhancement difficulty mining (EEDM) loss. The EEDM loss helps alleviate class imbalance and guides the network to pay more attention to difficult target areas and edge features during training. In addition, we propose an adjustable sensitivity strategy for post-processing, which significantly improves the detection rate of infrared small targets while preserving segmentation accuracy. Extensive experimental results show that the proposed scheme achieves the best performance. Notably, this scheme won first prize in the PRCV 2024 wide-area infrared small target detection competition.
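One plausible way to realize an adjustable-sensitivity post-processing step is a two-threshold rule: keep confident pixels for segmentation accuracy, then recover dimmer connected components whose peak confidence clears a tunable sensitivity level. The sketch below is a hypothetical reading of this idea; the threshold names and the exact recovery rule are assumptions, not the competition scheme's specification.

```python
import numpy as np
from scipy import ndimage

def adjustable_sensitivity_postprocess(prob_map, seg_thresh=0.5,
                                       detect_thresh=0.2, sensitivity=0.35):
    """Hypothetical two-threshold post-processing in the spirit of an
    adjustable-sensitivity strategy (the paper's exact rule is not
    specified in the abstract)."""
    strong = prob_map >= seg_thresh       # pixels kept for segmentation accuracy
    weak = prob_map >= detect_thresh      # candidate regions for dim targets
    labels, n = ndimage.label(weak)
    out = strong.copy()
    for i in range(1, n + 1):
        comp = labels == i
        # Recover a dim component if its peak confidence clears the
        # adjustable sensitivity and it was missed by the strong threshold.
        if prob_map[comp].max() >= sensitivity and not (comp & strong).any():
            out |= comp
    return out
```

Lowering `sensitivity` raises the detection rate for dim targets, while the strong threshold keeps the segmentation masks of confident targets unchanged.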
Background: The reproducibility of machine-learning models for prostate cancer detection across different MRI vendors remains a significant challenge. Methods: This study investigates Support Vector Machine (SVM) and Random Forest (RF) models trained on radiomic features extracted from T2-weighted MRI images using the Pyradiomics and MRCradiomics libraries. Feature selection was performed using the maximum relevance minimum redundancy (MRMR) technique. We aimed to enhance clinical decision support through multimodal learning and feature fusion. Results: Our SVM model, utilizing combined features from Pyradiomics and MRCradiomics, achieved an AUC of 0.74 on the Multi-Improd dataset (Siemens scanner), but this dropped to 0.60 on the Philips test set. The RF model showed similar trends, with notable robustness for models using Pyradiomics features alone (AUC of 0.78 on Philips). Conclusions: These findings demonstrate the potential of multimodal feature integration to improve the robustness and generalizability of machine-learning models for clinical decision support in prostate cancer detection. This study marks a significant step towards developing reliable AI-driven diagnostic tools that maintain efficacy across imaging platforms.
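For readers unfamiliar with MRMR, the following is a minimal greedy sketch of the selection principle (maximize relevance to the label, penalize redundancy with already-selected features), followed by an SVM pipeline. This simple mutual-information/correlation variant and the hypothetical usage lines are illustrative; the study may rely on a dedicated MRMR implementation with a different scoring rule.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mrmr_select(X, y, k=10):
    """Greedy maximum-relevance minimum-redundancy selection (a simple
    MI/correlation variant; the study may use a different MRMR library)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))      # feature-feature redundancy
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        redundancy = corr[:, selected].mean(axis=1)
        score = relevance - redundancy               # relevance minus redundancy
        score[selected] = -np.inf                    # never re-pick a feature
        selected.append(int(np.argmax(score)))
    return selected

# Hypothetical usage: X_train fuses Pyradiomics and MRCradiomics columns.
# idx = mrmr_select(X_train, y_train, k=20)
# clf = make_pipeline(StandardScaler(), SVC(probability=True))
# clf.fit(X_train[:, idx], y_train)
```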
In reinforcement learning (RL), exploiting environmental symmetries can significantly enhance efficiency, robustness, and performance. However, ensuring that the deep RL policy and value networks are respectively equivariant and invariant to these symmetries is a substantial challenge. Related works try to design networks that are equivariant and invariant by construction, which limits them to a very restricted library of components and in turn hampers the networks' expressiveness. This paper proposes a method to construct equivariant policies and invariant value functions without specialized neural network components, which we term equivariant ensembles. We further add a regularization term that injects this inductive bias during training. In a map-based path planning case study, we show how equivariant ensembles and regularization benefit sample efficiency and performance.
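The core symmetrization trick can be sketched with an ordinary network and the C4 rotation group: average the inverse-transformed outputs over group-transformed inputs to get an equivariant policy, and plainly average outputs for an invariant value. The choice of group, the (B, C, H, W) map observation, and the spatial action logits below are illustrative assumptions; the paper's group and action transform may differ.

```python
import torch

def equivariant_policy(net, obs):
    """Sketch of an equivariant ensemble over the C4 rotation group for
    map-like observations obs of shape (B, C, H, W), assuming `net`
    outputs spatial action logits of shape (B, A, H, W):
        pi(obs) = mean_g  g^{-1} . net(g . obs)
    """
    outs = []
    for k in range(4):                              # rotations by k * 90 degrees
        rotated = torch.rot90(obs, k, dims=(2, 3))
        out = net(rotated)
        outs.append(torch.rot90(out, -k, dims=(2, 3)))  # undo the rotation
    return torch.stack(outs).mean(0)                # equivariant by averaging

def invariant_value(value_net, obs):
    """Invariant value function: average the value over transformed inputs."""
    return torch.stack([value_net(torch.rot90(obs, k, dims=(2, 3)))
                        for k in range(4)]).mean(0)
```

Because each group element appears exactly once in the average, transforming `obs` by any rotation only permutes the ensemble members, which is what makes the averaged policy equivariant and the averaged value invariant.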
The fusion of the Internet of Things (IoT) with Sixth-Generation (6G) technology has significant potential to revolutionize the IoT landscape. With the ultra-reliable and low-latency communication capabilities of 6G, 6G-IoT networks can transmit high-quality and diverse data to enhance edge learning. Artificial Intelligence-Generated Content (AIGC) harnesses advanced AI algorithms to automatically generate various types of content. Edge AIGC integrates AIGC with edge networks, facilitating the real-time provision of customized AIGC services by deploying AIGC models on edge devices. However, the current practice of edge devices acting as AIGC Service Providers (ASPs) lacks incentives, hindering the sustainable provision of high-quality edge AIGC services amidst information asymmetry. In this paper, we develop a user-centric incentive mechanism framework for edge AIGC services in 6G-IoT networks. Specifically, we first propose a contract theory model for incentivizing ASPs to provide AIGC services to clients. Recognizing the irrationality of clients towards personalized AIGC services, we utilize Prospect Theory (PT) to better capture their subjective utility. Furthermore, we adopt a diffusion-based soft actor-critic algorithm to generate the optimal contract design under PT, outperforming traditional deep reinforcement learning algorithms. Our numerical results demonstrate the effectiveness of the proposed scheme.
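The subjective utility that Prospect Theory assigns to an outcome is usually modeled with the Kahneman-Tversky value function: concave for gains, convex and loss-averse for losses. The sketch below uses the standard 1992 parameter estimates; the paper's calibration of these parameters may differ.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky prospect theory value function with the classic
    parameter estimates (alpha, beta ~ 0.88, loss aversion lam ~ 2.25).

    x is the outcome relative to a reference point of 0: gains are
    evaluated concavely, losses convexly and scaled by lam.
    """
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# A loss hurts more than an equal gain pleases:
# prospect_value(10) ~ 7.6, prospect_value(-10) ~ -17.1
```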
The rise of social media and the exponential growth of multimodal communication necessitate advanced techniques for Multimodal Information Extraction (MIE). However, existing methodologies primarily rely on direct Image-Text interactions, a paradigm that often faces significant challenges due to the semantic and modality gaps between images and text. In this paper, we introduce a new paradigm of Image-Context-Text interaction, in which large multimodal models (LMMs) are utilized to generate descriptive textual context that bridges these gaps. In line with this paradigm, we propose a novel Shapley Value-based Contrastive Alignment (Shap-CA) method, which aligns both context-text and context-image pairs. Shap-CA first applies the Shapley value concept from cooperative game theory to assess the individual contribution of each element in the set of contexts, texts, and images towards the total semantic and modality overlap. Following this quantitative evaluation, a contrastive learning strategy is employed to enhance the interactive contribution within context-text/image pairs while minimizing the influence across these pairs. Furthermore, we design an adaptive fusion module for selective cross-modal fusion. Extensive experiments across four MIE datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.
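Because the "players" here form a small set (context, text, image), Shapley values can be computed exactly by enumerating orderings. The sketch below shows the standard computation; the overlap-score payoff function is a hypothetical stand-in for whatever coalition value Shap-CA actually uses.

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings; feasible for small player sets.

    coalition_value: maps a frozenset of players to a scalar payoff,
    e.g. a semantic/modality-overlap score (hypothetical here).
    """
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            marginal = (coalition_value(coalition | {p})
                        - coalition_value(coalition))
            values[p] += marginal / len(orders)
            coalition = coalition | {p}
    return values

# Toy usage with an illustrative superadditive payoff:
# v = lambda s: len(s) ** 2 / 9
# print(shapley_values(("context", "text", "image"), v))
```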
Recent advancements in large language models have relied heavily on large reward models for fine-tuning via reinforcement learning from human feedback. However, using a single reward model across diverse domains is not always optimal, often requiring retraining from scratch when new domain data is introduced. To address these challenges, we explore the use of small language models operating in a domain-specific manner based on router mechanisms. We propose three approaches: 1) using a mixture of experts to form a single reward model by modularizing an internal router and experts, 2) employing an external router to select the appropriate reward model from multiple domain-specific models, and 3) loading the reward models and router adapters onto a single small language model via adapters to reduce the total parameter size. Experimental validation underscores the effectiveness of our approach, demonstrating performance comparable to baseline methods while also reducing the total parameter size.
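As a concrete illustration of the second approach, an external router can be a small classifier that predicts the input's domain and dispatches to the matching reward model. The class below is a minimal sketch with illustrative names and shapes, not the paper's implementation.

```python
import torch

class ExternalRouter(torch.nn.Module):
    """Sketch of an external router that picks one domain-specific reward
    model per input (approach 2); names and shapes are illustrative."""

    def __init__(self, encoder, reward_models):
        super().__init__()
        self.encoder = encoder                          # features -> (B, D)
        self.reward_models = torch.nn.ModuleList(reward_models)
        self.classifier = torch.nn.LazyLinear(len(reward_models))

    def forward(self, x):
        h = self.encoder(x)
        domain = self.classifier(h).argmax(dim=-1)       # hard routing decision
        # For clarity all reward models run here; in practice one would
        # route first and invoke only the selected domain model.
        scores = torch.stack([m(x) for m in self.reward_models], dim=1)
        return scores.gather(1, domain.unsqueeze(1)).squeeze(1)  # (B,) rewards
```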
Recent studies seek to provide Graph Neural Network (GNN) interpretability via multiple unsupervised learning models. Due to the scarcity of datasets, current methods easily suffer from learning bias. To solve this problem, we embed a Large Language Model (LLM) as a knowledge source into the GNN explanation network, injecting it as a Bayesian Inference (BI) module that mitigates learning bias. The efficacy of the BI module has been demonstrated both theoretically and experimentally. We conduct experiments on both synthetic and real-world datasets. The innovation of our work lies in two parts: 1) we provide a novel view of the possibility of an LLM functioning as a Bayesian inference module to improve the performance of existing algorithms; 2) we are the first to discuss the learning bias issue in the GNN explanation problem.
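The Bayesian framing can be pictured with a toy posterior update: treat the LLM's knowledge as a prior over candidate explanation edges and the unsupervised explainer's scores as a likelihood. This is only a schematic reading of the idea; the paper's BI module is more involved than this normalization.

```python
import numpy as np

def bayesian_explanation_scores(explainer_scores, llm_prior):
    """Toy sketch: posterior over candidate explanation edges is
    proportional to likelihood (explainer scores) times prior (LLM
    knowledge). Both inputs are per-edge probabilities."""
    posterior = explainer_scores * llm_prior
    return posterior / posterior.sum()

# Hypothetical usage: three candidate edges in an explanation subgraph.
# The LLM prior down-weights the edge the data-scarce explainer overrates.
# print(bayesian_explanation_scores(np.array([0.5, 0.3, 0.2]),
#                                   np.array([0.2, 0.6, 0.2])))
```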
The advancement of machine learning (ML) models has led to ML-based approaches for improving numerous software engineering tasks in software maintenance and evolution. Nevertheless, research indicates that, despite their potential success, ML models may not be employed in real-world scenarios because they often remain a black box to practitioners, lacking explainability in their reasoning. Recently, various rule-based model-agnostic Explainable AI (XAI) techniques, such as PyExplainer and LIME, have been employed to explain the predictions of ML models in software analytics tasks. This paper assesses the ability of these techniques to generate reliable and consistent explanations for ML models across various software analytics tasks, including Just-in-Time (JIT) defect prediction, clone detection, and the classification of useful code review comments. Our manual investigation finds inconsistencies and anomalies in the explanations generated by these techniques. We therefore design a novel framework, Evaluation of Explainable AI (EvaluateXAI), along with granular-level evaluation metrics, to automatically assess the effectiveness of rule-based XAI techniques in generating reliable and consistent explanations for ML models in software analytics tasks. After conducting in-depth experiments involving seven state-of-the-art ML models trained on five datasets and six evaluation metrics, we find that none of the evaluation metrics reached 100%, indicating the unreliability of the explanations generated by the XAI techniques. Additionally, PyExplainer and LIME failed to provide consistent explanations for 86.11% and 77.78% of the experimental combinations, respectively. These findings emphasize the need for further research in XAI to produce reliable and consistent explanations for ML models in software analytics tasks.
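One simple consistency check in the spirit of EvaluateXAI is to explain the same instance with LIME under different random seeds and compare the top-ranked features. The sketch below uses LIME's real tabular API on synthetic data; the specific check and thresholds are illustrative, not the framework's actual metrics.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a stand-in model on synthetic data (a real study would use, e.g.,
# a JIT defect prediction model and dataset).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def top_features(seed, instance=X[0], k=3):
    """Top-k features named by LIME for one instance under a given seed."""
    explainer = LimeTabularExplainer(X, random_state=seed)
    exp = explainer.explain_instance(instance, model.predict_proba,
                                     num_features=k)
    return {name for name, _ in exp.as_list()}

# If LIME's local explanations were reliable, the top features should not
# change with the perturbation seed.
print("Consistent across seeds:", top_features(1) == top_features(2))
```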
Recently, ensembles have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose distances satisfy a given constraint. In deep metric learning, an ensemble takes the average of the distances learned by multiple learners. One important aspect of an ensemble is that the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss that encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.
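A divergence loss of this kind can be sketched as a hinge that pushes apart the embeddings that different learners produce for the same image. The margin formulation below is one plausible instantiation; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def divergence_loss(embeddings, margin=0.5):
    """Sketch of a divergence loss that encourages diversity among
    learners by penalizing pairs of learners whose embeddings of the
    same image are closer than a margin (hypothetical formulation).

    embeddings: (M, B, D) for M learners, batch size B, embedding dim D.
    """
    emb = F.normalize(embeddings, dim=-1)        # unit-norm embeddings
    m = emb.shape[0]
    loss, pairs = 0.0, 0
    for i in range(m):
        for j in range(i + 1, m):
            dist = (emb[i] - emb[j]).pow(2).sum(dim=-1)   # squared L2 per image
            loss = loss + F.relu(margin - dist).mean()    # hinge: push apart
            pairs += 1
    return loss / pairs
```

Minimizing this term alongside the metric-learning objective discourages the attention masks from collapsing onto the same object parts.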