
Reinforcement learning with human feedback (RLHF) has become the dominant method for aligning large models with user preferences. Unlike fine-tuning, for which there are many studies of training data memorization, it is not clear how memorization is affected by or introduced in the RLHF alignment process. Understanding this relationship is important because real user data may be collected and used to align large models; if user data is memorized during RLHF and later regurgitated, this could raise privacy concerns. In this work, we analyze how training data memorization can surface and propagate through each phase of RLHF. We focus our study on code completion models, as code completion is one of the most popular use cases for large language models. We find that RLHF significantly decreases the chance that data used for reward modeling and reinforcement learning is memorized, compared to aligning via direct fine-tuning on this data, but that examples already memorized during the fine-tuning stage of RLHF will, in the majority of cases, remain memorized after RLHF.
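
A minimal sketch of one way to probe for the kind of memorization discussed above (the study's exact protocol and matching criterion are not specified here): prompt the model with the prefix of a training example and measure how closely its greedy continuation reproduces the held-out suffix. The `complete` callable and the similarity threshold are illustrative assumptions.

```python
# Hypothetical memorization probe for a code-completion model, not the paper's
# exact protocol: an example counts as "memorized" if the model's continuation
# of its prefix nearly reproduces the true suffix.
from difflib import SequenceMatcher
from typing import Callable, Iterable, Tuple

def memorization_rate(
    complete: Callable[[str], str],          # prefix -> model continuation (assumed API)
    examples: Iterable[Tuple[str, str]],     # (prefix, true_suffix) pairs from training data
    threshold: float = 0.9,                  # similarity above which we call it memorized
) -> float:
    """Fraction of training examples whose suffix the model (near-)reproduces."""
    hits, total = 0, 0
    for prefix, suffix in examples:
        generated = complete(prefix)[: len(suffix)]
        similarity = SequenceMatcher(None, generated, suffix).ratio()
        hits += similarity >= threshold
        total += 1
    return hits / max(total, 1)
```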

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
July 31, 2024

Deep reinforcement learning (DRL) has found application in numerous use cases pertaining to flow control. Multi-agent RL (MARL), a variant of DRL, has been shown to be more effective than single-agent RL in controlling flows exhibiting locality and translational invariance. We present, for the first time, an implementation of MARL-based control of three-dimensional Rayleigh-B\'enard convection (RBC). Control is executed by modifying the temperature distribution along the bottom wall, which is divided into multiple control segments, each acting as an independent agent. Two regimes of RBC are considered at Rayleigh numbers $\mathrm{Ra}=500$ and $750$. Evaluation of the learned control policy reveals a reduction in convection intensity by $23.5\%$ and $8.7\%$ at $\mathrm{Ra}=500$ and $750$, respectively. The MARL controller converts irregularly shaped convective patterns into regular straight rolls with lower convection that resemble flow in a relatively more stable regime. We draw comparisons with proportional control at both $\mathrm{Ra}$ and show that MARL is able to outperform the proportional controller. The learned control strategy is complex, featuring different non-linear segment-wise actuator delays and actuation magnitudes. We also perform successful evaluations on a larger domain than the one used for training, demonstrating that the invariance property of MARL allows direct transfer of the learned policy.
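
A minimal sketch of the actuation pattern this describes (names, sizes, and the mean-temperature constraint are illustrative assumptions, not the paper's code): one policy shared by all bottom-wall segments, each acting on its own local observation of the flow.

```python
# Hypothetical per-segment actuation with a shared policy, exploiting the
# translational invariance mentioned above.
import numpy as np

def act_all_segments(policy, local_obs, max_amplitude=0.75):
    """Apply one shared policy independently to each bottom-wall control segment.

    policy    : callable mapping a local observation -> scalar temperature action
    local_obs : array of shape (n_segments, obs_dim), one row per segment
    returns   : temperature perturbations for every bottom-wall segment
    """
    actions = np.array([policy(obs) for obs in local_obs])
    actions = np.clip(actions, -max_amplitude, max_amplitude)
    # Assumed constraint: keep the mean bottom-wall temperature fixed so the
    # agents redistribute heat rather than change the overall forcing.
    return actions - actions.mean()
```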

As large language models (LLMs) grow in parameter count and gain capabilities such as interaction through prompting, they open up new ways of interfacing with automatic speech recognition (ASR) systems beyond rescoring n-best lists. This work investigates post-hoc correction of ASR transcripts with LLMs. To avoid introducing errors into likely accurate transcripts, we propose a range of confidence-based filtering methods. Our results indicate that this can improve the performance of less competitive ASR systems.
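
A minimal sketch of the confidence-based filtering idea (illustrative only; `llm_correct` and the threshold are assumptions): only transcripts whose ASR confidence falls below a threshold are sent to the LLM for post-hoc correction, so confident transcripts are not put at risk of new errors.

```python
# Hypothetical confidence-gated LLM correction of ASR output.
def correct_transcripts(transcripts, confidences, llm_correct, threshold=0.85):
    corrected = []
    for text, conf in zip(transcripts, confidences):
        if conf < threshold:
            corrected.append(llm_correct(text))   # low confidence: ask the LLM to fix it
        else:
            corrected.append(text)                # high confidence: keep the ASR output
    return corrected
```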

We study practical approximations to Kolmogorov prefix complexity (K) using IMP2, a high-level programming language. Our focus is on investigating the optimality of the interpreter for this language as the reference machine for the Coding Theorem Method (CTM), a method advanced to deal with applications of algorithmic complexity that differ from the popular, traditional lossless-compression approach and that is based on the principles of algorithmic probability. The chosen model of computation is proven to be suitable for this task, and a comparison to other models and methods is performed. Our findings show that CTM approximations using our model do not always correlate with results from lower-level models of computation. This suggests that some models may require a larger program space to converge to Levin's universal distribution. Furthermore, we compare CTM with an upper bound on Kolmogorov complexity and find a strong correlation, supporting CTM's validity as an approximation method with a finer-grained resolution of K.
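
A toy sketch of the CTM idea referred to above (not the paper's setup): enumerate short programs for a reference machine, tally the outputs they produce, and estimate K(s) from the empirical output frequency. `enumerate_programs` and `run_imp2` (an IMP2 interpreter with a step budget) are assumed stand-ins.

```python
# Hypothetical Coding Theorem Method estimate: K(s) ~ -log2 D(s), where D(s) is
# the fraction of halting programs (up to a size bound) that output s.
import math
from collections import Counter

def ctm_estimates(enumerate_programs, run_imp2, max_size, step_budget=1000):
    counts, halted = Counter(), 0
    for program in enumerate_programs(max_size):            # all programs up to max_size
        output = run_imp2(program, max_steps=step_budget)   # None if no halt within budget
        if output is not None:
            counts[output] += 1
            halted += 1
    # Approximate prefix complexity from the empirical output distribution.
    return {s: -math.log2(n / halted) for s, n in counts.items()}
```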

Most current audio-visual emotion recognition models lack the flexibility needed for deployment in practical applications. We envision a multimodal system that works even when only one modality is available and can be used interchangeably for either predicting emotional attributes or recognizing categorical emotions. Achieving such flexibility in a multimodal emotion recognition system is difficult due to the inherent challenges in accurately interpreting and integrating varied data sources. It is also a challenge to robustly handle missing or partial information while allowing a direct switch between regression and classification tasks. This study proposes a versatile audio-visual learning (VAVL) framework for handling unimodal and multimodal systems for emotion regression or emotion classification tasks. We implement an audio-visual framework that can be trained even when paired audio-visual data is not available for part of the training set (i.e., only audio or only video is present). We achieve this effective representation learning with audio-visual shared layers, residual connections over shared layers, and a unimodal reconstruction task. Our experimental results reveal that our architecture significantly outperforms strong baselines on the CREMA-D, MSP-IMPROV, and CMU-MOSEI corpora. Notably, VAVL attains a new state-of-the-art performance in the emotional attribute prediction task on the MSP-IMPROV corpus.
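
A minimal PyTorch sketch of the ingredients named above (shared audio-visual layers, a residual connection over them, and tolerance of a missing modality); the layer sizes, fusion rule, and head are illustrative assumptions, not the paper's architecture, and the unimodal reconstruction branch is omitted.

```python
# Hypothetical shared audio-visual block that still runs when one modality is absent.
import torch
import torch.nn as nn

class SharedAVBlock(nn.Module):
    def __init__(self, dim=256, n_outputs=4):
        super().__init__()
        self.audio_enc = nn.Linear(128, dim)     # unimodal encoders (toy feature sizes)
        self.video_enc = nn.Linear(512, dim)
        self.shared = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.head = nn.Linear(dim, n_outputs)    # attributes (regression) or emotion classes

    def forward(self, audio=None, video=None):
        feats = []
        if audio is not None:
            feats.append(self.audio_enc(audio))
        if video is not None:
            feats.append(self.video_enc(video))
        assert feats, "at least one modality must be provided"
        x = torch.stack(feats).mean(dim=0)       # average whichever modalities are present
        return self.head(x + self.shared(x))     # residual connection over the shared layers
```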

We introduce eRHL, a program logic for reasoning about relational expectation properties of pairs of probabilistic programs. eRHL is quantitative, i.e., its pre- and post-conditions take values in the extended non-negative reals. Thanks to its quantitative assertions, eRHL overcomes randomness alignment restrictions from prior logics, including PRHL, a popular relational program logic used to reason about security of cryptographic constructions, and apRHL, a variant of PRHL for differential privacy. As a result, eRHL is the first relational probabilistic program logic to be supported by non-trivial soundness and completeness results for all almost surely terminating programs. We show that eRHL is sound and complete with respect to program equivalence, statistical distance, and differential privacy. We also show that every PRHL judgment is valid iff it is provable in eRHL. We showcase the practical benefits of eRHL with examples that are beyond the reach of PRHL and apRHL.
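
As a rough illustration of the general shape of such quantitative relational judgments (a sketch of the usual expectation-coupling reading of logics in this family, not the paper's exact definition), a judgment with pre-expectation $\mathcal{E}$ and post-expectation $\mathcal{E}'$ can be read as follows:

```latex
% Sketch of an expectation-coupling reading (illustrative assumption, not eRHL's
% official semantics). E and E' map pairs of memories to extended non-negative
% reals; Gamma(mu_1, mu_2) is the set of couplings of the output distributions.
\models c_1 \sim c_2 : \mathcal{E} \Rightarrow \mathcal{E}'
\quad\text{iff}\quad
\forall m_1, m_2.\;
\exists \mu \in \Gamma\big(\llbracket c_1 \rrbracket m_1,\ \llbracket c_2 \rrbracket m_2\big).\;
\mathbb{E}_{(m_1',m_2') \sim \mu}\big[\mathcal{E}'(m_1',m_2')\big] \le \mathcal{E}(m_1, m_2)
```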

Designing mobile software that aligns with cultural contexts is crucial for optimizing human-computer interaction. Considering cultural influences is essential not only for the actual set of functional/non-functional requirements, but also for the whole Requirements Engineering (RE) process. Without a clear understanding of cultural influences on RE activities, it is hardly possible to develop a correct and complete set of requirements. This research explores the impact of national culture on RE-related activities based on recent studies. We conducted a Systematic Literature Review (SLR) of studies published in 2019-2023 and compared them to an older SLR covering 2000-2018. We identified 17 relevant studies, extracted 33 cultural influences impacting RE activities, and mapped them to the Hofstede model, which is widely used for cultural analysis in software development research. Our work highlights the critical role of national culture in RE activities, summarizes current research trends, and helps practitioners consider cultural influences for mobile app/software development.

Due to its empirical success in few-shot classification and reinforcement learning, meta-learning has recently received significant interest. Meta-learning methods leverage data from previous tasks to learn a new task in a sample-efficient manner. In particular, model-agnostic methods look for initialization points from which gradient descent quickly adapts to any new task. Although it has been empirically suggested that such methods perform well by learning shared representations during pretraining, there is limited theoretical evidence of such behavior. More importantly, it has not been shown that these methods still learn a shared structure, despite architectural misspecifications. In this direction, this work shows, in the limit of an infinite number of tasks, that first-order ANIL with a linear two-layer network architecture successfully learns linear shared representations. This result even holds with overparametrization; having a width larger than the dimension of the shared representations results in an asymptotically low-rank solution. The learned solution then yields a good adaptation performance on any new task after a single gradient step. Overall, this illustrates how well model-agnostic methods such as first-order ANIL can learn shared representations.
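
A minimal numpy sketch of first-order ANIL on a linear two-layer model of the kind discussed above, y ≈ wᵀ(Bx): the representation B is shared across tasks, only the head w is adapted in the inner loop, and outer gradients are taken at the adapted head while ignoring how it depends on B (the "first-order" approximation). The task sampler, step sizes, and single inner step are illustrative assumptions.

```python
# Hypothetical first-order ANIL training loop on a linear two-layer network.
import numpy as np

def fo_anil(sample_task, d, k, inner_lr=0.1, outer_lr=0.01, steps=1000):
    B = np.random.randn(k, d) / np.sqrt(d)    # shared representation, width k
    w0 = np.zeros(k)                          # meta-initialization of the task head
    for _ in range(steps):
        X, y = sample_task()                  # one task: X of shape (n, d), y of shape (n,)
        Z = X @ B.T                           # features under the current representation
        # Inner loop: one gradient step on the head only (ANIL).
        w = w0 - inner_lr * Z.T @ (Z @ w0 - y) / len(y)
        # Outer loop: first-order gradients of the post-adaptation loss w.r.t. B and w0,
        # evaluated at the adapted head w.
        resid = Z @ w - y
        B  -= outer_lr * np.outer(w, resid @ X) / len(y)
        w0 -= outer_lr * Z.T @ resid / len(y)
    return B, w0
```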

We propose and study a one-dimensional model which consists of two cross-diffusion systems coupled via a moving interface. The motivation stems from the modelling of complex diffusion processes in the context of the vapor deposition of thin films. In our model, cross-diffusion of the various chemical species is modelled by a size-exclusion system for the solid phase and by the Stefan-Maxwell system for the gaseous phase. The coupling between the two phases is modelled by linear phase transition laws of Butler-Volmer type, resulting in an interface evolution. The properties of the continuous model are investigated, in particular its entropy variational structure and stationary states. We introduce a two-point flux approximation finite volume scheme. The moving interface is addressed with a moving-mesh approach, where the mesh is locally deformed around the interface. The resulting discrete nonlinear system is shown to admit a solution that preserves the main properties of the continuous system, namely: mass conservation, nonnegativity, volume-filling constraints, decay of the free energy and asymptotics. In particular, the moving-mesh approach is compatible with the entropy structure of the continuous model. Numerical results illustrate these properties and the dynamics of the model.
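
For readers unfamiliar with the discretization named above, here is a minimal sketch of a two-point flux approximation (TPFA) finite-volume step for a single 1D diffusion equation on a nonuniform mesh; the coupled cross-diffusion systems, phase-transition laws, and locally deformed moving mesh of the paper are far richer, and this only illustrates the discrete flux construction.

```python
# Hypothetical explicit TPFA update for du/dt = d/dx (D du/dx) with no-flux boundaries.
import numpy as np

def tpfa_step(u, centers, widths, D, dt):
    """One conservative finite-volume step on a (possibly nonuniform) 1D mesh.

    u       : cell averages, shape (n,)
    centers : cell-center positions, shape (n,)
    widths  : cell widths, shape (n,)
    """
    # Flux across each interior face: -D * (u_right - u_left) / (distance between centers)
    flux = -D * np.diff(u) / np.diff(centers)
    flux = np.concatenate(([0.0], flux, [0.0]))    # zero-flux boundary conditions
    return u - dt * np.diff(flux) / widths         # conservative update of each cell
```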

Reinforcement learning-based large language models, such as ChatGPT, are believed to have potential to aid human experts in many domains, including healthcare. There is, however, little work on ChatGPT's ability to perform a key task in healthcare: formal, probabilistic medical diagnostic reasoning. This type of reasoning is used, for example, to update a pre-test probability to a post-test probability. In this work, we probe ChatGPT's ability to perform this task. In particular, we ask ChatGPT to give examples of how to use Bayes' rule for medical diagnosis. Our prompts range from queries that use terminology from pure probability (e.g., requests for a posterior of A given B and C) to queries that use terminology from medical diagnosis (e.g., requests for a posterior probability of Covid given a test result and cough). We show how the introduction of medical variable names leads to an increase in the number of errors that ChatGPT makes. Given our results, we also show how prompt engineering can help ChatGPT partially avoid these errors. We discuss our results in light of recent commentaries on sensitivity and specificity. We also discuss how our results might inform new research directions for large language models.
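
For concreteness, here is the standard pre-test to post-test update referred to above, as a worked example (the specific numbers are illustrative, not from the paper):

```python
# Bayes' rule for diagnostic testing: posterior P(disease | test result)
# from prevalence (pre-test probability), sensitivity, and specificity.
def post_test_probability(pre_test, sensitivity, specificity, positive_result=True):
    if positive_result:
        p_result_given_disease = sensitivity
        p_result_given_healthy = 1.0 - specificity      # false-positive rate
    else:
        p_result_given_disease = 1.0 - sensitivity      # false-negative rate
        p_result_given_healthy = specificity
    numerator = p_result_given_disease * pre_test
    denominator = numerator + p_result_given_healthy * (1.0 - pre_test)
    return numerator / denominator

# e.g. pre-test probability 10%, sensitivity 80%, specificity 95%, positive test:
# post_test_probability(0.10, 0.80, 0.95) ≈ 0.64
```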

The paper presents a framework for online learning of the Koopman operator using streaming data. Many complex systems for which data-driven modeling and control are sought provide streaming sensor data, the abundance of which can present computational challenges but cannot be ignored. Streaming data can intermittently sample dynamically different regimes or rare events which could be critical to model and control. Using ideas from subspace identification, we present a method in which the Grassmannian distance between the subspace of an extended observability matrix and the streaming segment of data is used to assess the `novelty' of the data. If this distance is above a threshold, the segment is added to an archive and the Koopman operator is updated; if not, it is discarded. Therefore, our method identifies data from segments of trajectories of a dynamical system that come from different dynamical regimes, prioritizes minimizing the amount of data needed to update the Koopman model, and furthermore reduces the number of basis functions by learning them adaptively. By dynamically adjusting the amount of data used and learning basis functions, our method optimizes the model's accuracy and the system order.
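
A minimal sketch of the novelty test described above (illustrative, not the paper's implementation): compare the column space of the current extended observability matrix with the subspace spanned by a new streaming segment via the Grassmannian distance (root-sum-square of principal angles), and archive the segment only if that distance exceeds a threshold.

```python
# Hypothetical Grassmannian-distance novelty check for streaming Koopman updates.
import numpy as np

def grassmann_distance(A, B):
    """Distance between the column spaces of A and B via principal angles."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(cosines))

def maybe_archive(archive, obs_matrix, segment, threshold=0.3):
    """Archive a streaming segment (and trigger a Koopman update) only if it is novel."""
    if grassmann_distance(obs_matrix, segment) > threshold:
        archive.append(segment)
        return True     # caller should update the Koopman operator
    return False        # segment discarded as redundant
```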
