
Fine-tuning approaches in NLP often focus on exploitation rather than exploration, which may lead to suboptimal models. Given the vast search space of natural language, this limited exploration can restrict their performance in complex, high-stakes domains, where accurate negation understanding and logical reasoning abilities are crucial. To address this issue, we leverage Reinforcement Learning from Logical Feedback (RLLF) to create an effective balance between exploration and exploitation in LLMs. Our approach employs an appropriate benchmark dataset for training and evaluation, highlighting the importance of exploration in enhancing negation understanding capabilities. We compare the performance of our RLLF-enhanced LLMs with baseline models trained without RLLF, demonstrating the value of this balanced approach. Furthermore, we showcase the potential of our method in legal AI applications by employing transfer learning and evaluating its impact on negation understanding. Our experimental results demonstrate the effectiveness of balancing exploration and exploitation with RLLF in improving LLMs' negation capabilities. This has implications for the development of more accurate, reliable, and logically consistent language models in high-stakes domains.
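
As a purely hypothetical illustration of a logical-feedback reward signal (the rule-based negation check and label set below are our assumptions, not the components used in the paper), consider:

```python
# Hypothetical sketch of a logical-feedback reward for RL fine-tuning.
# The negation rule and the "contradiction" label are illustrative assumptions.

def logical_feedback_reward(premise: str, hypothesis: str, model_answer: str) -> float:
    """Reward +1 if the model labels a premise/negated-premise pair as a
    contradiction, -1 if it does not, and 0 when the toy rule does not apply."""
    negated = premise.strip().lower().replace(" is ", " is not ", 1)
    if hypothesis.strip().lower() != negated:
        return 0.0  # the simple negation rule did not fire; neutral feedback
    return 1.0 if model_answer.strip().lower() == "contradiction" else -1.0


if __name__ == "__main__":
    r = logical_feedback_reward("The contract is valid.",
                                "The contract is not valid.",
                                "contradiction")
    print(r)  # 1.0
```

In an RLLF-style loop, such a reward would be one term of the optimization signal, combined with whatever preference or task rewards the training recipe already uses.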

Related Content

The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.

3D Gaussian Splatting (GS) has achieved considerable improvement over Neural Radiance Fields in terms of 3D fitting fidelity and rendering speed. However, this unstructured representation with scattered Gaussians poses a significant challenge for generative modeling. To address this problem, we introduce GaussianCube, a structured GS representation that is both powerful and efficient for generative modeling. We achieve this by first proposing a modified densification-constrained GS fitting algorithm that yields high-quality fitting results using a fixed number of free Gaussians, and then re-arranging the Gaussians into a predefined voxel grid via Optimal Transport. The structured grid representation allows us to use a standard 3D U-Net as our backbone in diffusion-based generative modeling without elaborate designs. Extensive experiments conducted on ShapeNet and OmniObject3D show that our model achieves state-of-the-art generation results both qualitatively and quantitatively, underscoring the potential of GaussianCube as a powerful and versatile 3D representation.
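
As a rough illustration of the voxelization step, the sketch below (a simplified stand-in for the paper's procedure; the grid resolution and coordinate range are assumptions) treats the Gaussian-to-voxel arrangement as an optimal-transport assignment problem solved with the Hungarian algorithm:

```python
# Minimal sketch: assign a fixed number of fitted Gaussians to the centers of
# a predefined voxel grid by minimizing total squared transport cost.
import numpy as np
from scipy.optimize import linear_sum_assignment


def assign_gaussians_to_grid(centers: np.ndarray, grid_res: int = 4) -> np.ndarray:
    """Return, for each Gaussian center, the index of its assigned voxel.

    `centers` has shape (N, 3) with N == grid_res ** 3; coordinates are
    assumed to lie in [-1, 1]^3.
    """
    lin = (np.arange(grid_res) + 0.5) / grid_res * 2.0 - 1.0          # voxel centers per axis
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), -1).reshape(-1, 3)
    cost = ((centers[:, None, :] - grid[None, :, :]) ** 2).sum(-1)     # (N, N) squared distances
    rows, cols = linear_sum_assignment(cost)                           # one Gaussian per voxel
    voxel_of_gaussian = np.empty(len(centers), dtype=int)
    voxel_of_gaussian[rows] = cols
    return voxel_of_gaussian


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(4 ** 3, 3))
    print(assign_gaussians_to_grid(pts)[:8])
```

Once every Gaussian occupies exactly one cell, the resulting tensor has a fixed spatial layout, which is what makes a standard 3D U-Net applicable as the diffusion backbone.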

In molecular communication (MC), molecules are released from the transmitter to convey information. This paper considers a realistic molecule shift keying (MoSK) scenario with two species of molecules stored in two reservoirs: the molecules are harvested from the environment, placed into separate reservoirs, and the reservoirs are purified by exchanging molecules between them. This purification consumes energy, and at a reasonable energy cost the reservoirs cannot be made pure; thus, our MoSK transmitter is imperfect, releasing a mixture of both molecule types for every symbol and causing inter-symbol interference (ISI). To mitigate ISI, we analyze the properties of the receiver and propose a detection method based on the ratio of the different molecule types. Theoretical and simulation results show that performance improves as the energy expenditure increases, and they also demonstrate the good performance of the proposed detection scheme.
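
A toy simulation of the ratio-based detection idea is sketched below (the reservoir purity, molecule counts, and Poisson reception model are illustrative assumptions, not the paper's channel model):

```python
# Toy sketch: the receiver counts both molecule species and decides the symbol
# from their ratio rather than from absolute counts.
import numpy as np


def detect_symbols(count_a: np.ndarray, count_b: np.ndarray) -> np.ndarray:
    """Decide bit 1 when species A dominates the received mixture, else 0."""
    ratio = count_a / np.maximum(count_a + count_b, 1)  # fraction of species A
    return (ratio > 0.5).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bits = rng.integers(0, 2, size=10)
    purity = 0.8                                   # imperfect reservoirs: 80/20 mixture per symbol
    mean_a = np.where(bits == 1, purity, 1 - purity) * 100
    count_a = rng.poisson(mean_a)                  # received molecules of species A
    count_b = rng.poisson(100 - mean_a)            # received molecules of species B
    print("sent:    ", bits)
    print("detected:", detect_symbols(count_a, count_b))
```

Because the decision depends on the relative, not absolute, counts, it is less sensitive to the leftover molecules from previous symbols that cause ISI.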

OLTP has stringent performance requirements defined by Service Level Agreements, and transaction response time is used to determine the maximum throughput in benchmarks. Capacity planning tools for OLTP performance are based on queueing network models for hardware resources, with database lock contention treated as a secondary effect on performance. Given ever-increasing levels of e-commerce and surges in OLTP traffic, we discuss the need for studies of database workloads to develop more realistic lock/latch contention models. We present predictive formulas that model how increased load leads to thrashing for transactions with identical and nonidentical steps, and we review concurrency control methods that reduce the level of lock/data conflicts in high-contention environments.
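
The classic back-of-the-envelope contention model below (a textbook approximation, not the paper's predictive formulas) illustrates how conflict probabilities grow with the multiprogramming level M, the number of locks k held per transaction, and the number of lockable items D:

```python
# Textbook rule of thumb (illustrative assumption, not this paper's formulas):
# with M concurrent transactions each holding k of D lockable items,
#   P(conflict per lock request)   ~ (M - 1) * k / D
#   P(transaction hits a conflict) ~ 1 - (1 - (M - 1) * k / D) ** k
def conflict_probabilities(m: int, k: int, d: int) -> tuple[float, float]:
    per_request = (m - 1) * k / d
    per_txn = 1.0 - (1.0 - per_request) ** k
    return per_request, per_txn


if __name__ == "__main__":
    for m in (10, 50, 100, 200):
        pr, pt = conflict_probabilities(m, k=20, d=10_000)
        print(f"M={m:4d}  per-request={pr:.3f}  per-transaction={pt:.3f}")
```

Even this crude model shows the qualitative behavior of interest: conflict probabilities rise roughly quadratically with load, so pushing the multiprogramming level past a certain point drives the system toward the thrashing regime rather than higher throughput.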

The effectiveness of Voting Advice Applications (VAAs) is often compromised by the length of their questionnaires. To address user fatigue and incomplete responses, some applications (such as the Swiss Smartvote) offer a condensed version of their questionnaire. However, these condensed versions cannot ensure the accuracy of the recommended parties or candidates, which we show remains below 40%. To tackle these limitations, this work introduces an adaptive questionnaire approach that selects subsequent questions based on users' previous answers, aiming to enhance recommendation accuracy while reducing the number of questions posed to voters. Our method uses an encoder and a decoder module to predict missing values at any completion stage, leveraging a two-dimensional latent space reflective of political science's traditional methods for visualizing political orientations. Additionally, a selector module determines the most informative subsequent question based on the voter's current position in the latent space and the remaining unanswered questions. We validated our approach using the Smartvote dataset from the 2019 Swiss Federal elections, testing various spatial models and selection methods to optimize the system's predictive accuracy. Our findings indicate that employing the IDEAL model as both encoder and decoder, combined with the PosteriorRMSE method for question selection, significantly improves recommendation accuracy, achieving 74% accuracy after asking the same number of questions as the condensed version.
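
A toy sketch of the adaptive loop is given below; the linear two-dimensional latent model stands in for IDEAL, the disagreement-based selector stands in for PosteriorRMSE, and all data are synthetic:

```python
# Toy sketch of an adaptive questionnaire: encode partial answers into a 2-D
# latent position, decode predictions for unanswered questions, and ask next
# the question the model is least certain about.
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_questions, dim = 500, 30, 2

# Synthetic training data: continuous agreement scores from a 2-D latent space.
positions = rng.normal(size=(n_voters, dim))
loadings = rng.normal(size=(dim, n_questions))
answers = positions @ loadings + 0.3 * rng.normal(size=(n_voters, n_questions))


def encode(partial: np.ndarray, asked: list[int]) -> np.ndarray:
    """Least-squares latent position from the questions answered so far."""
    z, *_ = np.linalg.lstsq(loadings[:, asked].T, partial[asked], rcond=None)
    return z


def decode(z: np.ndarray) -> np.ndarray:
    """Predicted answers to every question from a latent position."""
    return z @ loadings


def next_question(partial: np.ndarray, asked: list[int]) -> int:
    """Uncertainty proxy: among training voters whose answers are closest to
    the current partial profile, ask the question they disagree on the most."""
    dists = np.linalg.norm(answers[:, asked] - partial[asked], axis=1)
    neighbors = answers[np.argsort(dists)[:25]]
    unasked = [q for q in range(n_questions) if q not in asked]
    return max(unasked, key=lambda q: float(neighbors[:, q].std()))


user, asked = answers[0], [0, 1, 2]
print("predicted answers (first 5):", np.round(decode(encode(user, asked))[:5], 2))
print("next question to ask:", next_question(user, asked))
```

The real system replaces each piece with a statistically grounded counterpart (IDEAL for encoding and decoding, PosteriorRMSE for selection), but the control flow of answer, re-encode, and select is the same.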

Given the limitations of using the Cox hazard ratio to summarize the magnitude of the treatment effect, alternative measures that do not have these limitations are gaining attention. One of the recently proposed alternative measures is the average hazard with survival weight (AH). This population quantity can be interpreted as the average intensity of event occurrence in a given time window and does not involve study-specific censoring. Inference procedures for the ratio of AHs and the difference in AHs have already been proposed for simple randomized controlled trial settings comparing two groups. However, methods with stratification factors have not been well discussed, although stratified analysis is often used in practice to adjust for confounding factors and increase the power to detect a between-group difference. The conventional stratified analysis or meta-analysis approach, which integrates stratum-specific treatment effects using an optimal weight, directly applies to the ratio of AHs and the difference in AHs. However, this conventional approach has significant limitations similar to those of the Cochran-Mantel-Haenszel method for a binary outcome and the stratified Cox procedure for a time-to-event outcome. To address this, we propose a new stratified analysis method for AH using standardization. With the proposed method, one can summarize the between-group treatment effect in both absolute difference and relative terms, adjusting for stratification factors. This can be a valuable alternative to the traditional stratified Cox procedure for estimating and reporting the magnitude of the treatment effect on time-to-event outcomes on the hazard scale.
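
For illustration, the short sketch below (with made-up numbers; it is not the proposed estimator, which works from the survival data directly) shows the standardization step: the same stratum weights are applied to both arms before forming the difference and the ratio of standardized AHs:

```python
# Schematic sketch of standardization over strata: combine stratum-specific
# average-hazard (AH) estimates with common weights (the overall stratum
# proportions), then report both absolute and relative contrasts.
import numpy as np

# Hypothetical stratum-specific AH estimates (events per unit person-time) and sizes.
ah_treatment = np.array([0.030, 0.050, 0.080])
ah_control = np.array([0.045, 0.070, 0.110])
stratum_sizes = np.array([120, 200, 80])          # pooled over both arms

weights = stratum_sizes / stratum_sizes.sum()      # common standardization weights
ah_t = float(weights @ ah_treatment)               # standardized AH, treatment arm
ah_c = float(weights @ ah_control)                 # standardized AH, control arm

print(f"standardized AH difference: {ah_t - ah_c:+.4f}")
print(f"standardized AH ratio:      {ah_t / ah_c:.3f}")
```

Because both arms are weighted to the same reference population, the resulting difference and ratio are directly interpretable as population-level contrasts, unlike weights chosen separately to optimize precision within each stratum.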

Diabetic Retinopathy (DR) stands as the leading cause of blindness globally, particularly affecting individuals between the ages of 20 and 70. This paper presents a Computer-Aided Diagnosis (CAD) system designed for the automatic classification of retinal images into five distinct classes: Normal, Mild, Moderate, Severe, and Proliferative Diabetic Retinopathy (PDR). The proposed system leverages Convolutional Neural Networks (CNNs) employing pre-trained deep learning models. Through the application of fine-tuning techniques, our model is trained on fundus images of diabetic retinopathy with resolutions of 350x350x3 and 224x224x3. Experimental results obtained on the Kaggle platform, utilizing resources comprising 4 CPUs, 17 GB RAM, and 1 GB Disk, demonstrate the efficacy of our approach. The achieved Area Under the Curve (AUC) values for CNN, MobileNet, VGG-16, InceptionV3, and InceptionResNetV2 models are 0.50, 0.70, 0.53, 0.63, and 0.69, respectively.
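
A minimal sketch of the kind of fine-tuning pipeline described above, assuming a Keras/TensorFlow setup with an ImageNet-pretrained MobileNet and 224x224x3 inputs (the dropout rate, learning rates, and staging are illustrative assumptions, not the authors' exact configuration):

```python
# Minimal sketch: five-class DR grading head on a pre-trained MobileNet backbone.
import tensorflow as tf

NUM_CLASSES = 5  # Normal, Mild, Moderate, Severe, PDR

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # first stage: train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="categorical_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"],
)

# Later fine-tuning stage: unfreeze the backbone and recompile with a
# smaller learning rate, e.g.
#   base.trainable = True
#   model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
#                 loss="categorical_crossentropy", metrics=["accuracy"])

model.summary()
```

The same two-stage recipe applies to the other backbones compared in the paper (VGG-16, InceptionV3, InceptionResNetV2) by swapping the pretrained base model.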

Graph Neural Networks (GNNs) are state-of-the-art models for performing prediction tasks on graphs. While existing GNNs have shown great performance on various graph-related tasks, little attention has been paid to the scenario where out-of-distribution (OOD) nodes exist in the graph during training and inference. Borrowing the concept from CV and NLP, we define OOD nodes as nodes with labels unseen in the training set. Since many networks are constructed automatically by programs, real-world graphs are often noisy and may contain nodes from unknown distributions. In this work, we define the problem of graph learning with out-of-distribution nodes. Specifically, we aim to accomplish two tasks: 1) detect nodes that do not belong to the known distribution and 2) classify the remaining nodes into one of the known classes. We demonstrate that the connection patterns in graphs are informative for outlier detection, and propose the Out-of-Distribution Graph Attention Network (OODGAT), a novel GNN model that explicitly models the interaction between different kinds of nodes and separates inliers from outliers during feature propagation. Extensive experiments show that OODGAT outperforms existing outlier detection methods by a large margin, while being better than or comparable to them in terms of in-distribution classification.
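
The numpy sketch below is a simplified stand-in (not the OODGAT architecture): it scores how "in-distribution" each node looks from its predictive entropy and shrinks the weight of edges touching suspected outliers during one propagation step, which is the intuition behind separating inliers from outliers during message passing:

```python
# Simplified sketch of OOD-aware feature propagation on a small graph.
import numpy as np


def entropy(p: np.ndarray) -> np.ndarray:
    """Predictive entropy per node, used here as an OOD proxy."""
    return -(p * np.log(p + 1e-12)).sum(-1)


def ood_aware_propagation(adj: np.ndarray, feats: np.ndarray, probs: np.ndarray) -> np.ndarray:
    """One propagation step where an edge contributes less if either endpoint
    looks out-of-distribution (high predictive entropy)."""
    score = 1.0 / (1.0 + entropy(probs))                # high = likely inlier
    gate = np.minimum(score[:, None], score[None, :])   # weak link if either end is OOD-ish
    weights = adj * gate
    weights /= weights.sum(1, keepdims=True) + 1e-12    # row-normalize
    return weights @ feats


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((6, 6)) < 0.4).astype(float)
    np.fill_diagonal(adj, 1.0)                           # keep self-loops
    feats = rng.normal(size=(6, 4))
    probs = rng.dirichlet(np.ones(3), size=6)            # per-node class probabilities
    print(ood_aware_propagation(adj, feats, probs).shape)
```

In OODGAT the gating is learned jointly with classification rather than derived from a fixed entropy heuristic, but the effect is the same: outlier nodes are prevented from contaminating the representations of in-distribution nodes.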

Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes than conventional KBs (18x more nodes in ATOMIC than in Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures - a major challenge for existing KB completion methods, which assume densely connected graphs over a relatively small set of nodes. In this paper, we present novel KB completion models that address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method for incorporating information from both of these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.

Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify the use of soft attention in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by a substantial 6.6%.
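
As a rough illustration of counting from object proposals (a hard NMS-style stand-in, not the paper's differentiable component), the sketch below keeps attention-relevant proposals and merges near-duplicates by IoU so each object contributes once to the count:

```python
# Toy sketch: count objects by deduplicating overlapping, attention-relevant proposals.
import numpy as np


def iou(a, b) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)


def count_objects(boxes, attention, att_thresh=0.5, iou_thresh=0.5) -> int:
    """Keep relevant proposals, suppress near-duplicates, return the count."""
    keep = []
    for i in np.argsort(-np.asarray(attention)):          # highest attention first
        if attention[i] < att_thresh:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return len(keep)


boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (30, 30, 40, 40)]
attention = [0.9, 0.8, 0.7]
print(count_objects(boxes, attention))  # 2 distinct objects
```

The actual component performs this deduplication in a soft, differentiable way so it can be trained end-to-end inside the VQA model, whereas the hard thresholds above are only for intuition.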
