Formats: parquet
Languages: English
Size: 10K - 100K
Tags: reinforcement-learning-from-human-feedback, reinforcement-learning, dialogue, conversational-ai, preference-alignment
License: apache-2.0
Update README.md
README.md CHANGED
@@ -3,10 +3,11 @@ language:
 - en
 license: apache-2.0
 task_categories:
--
+- question-answering
 - text-generation
 tags:
 - reinforcement-learning-from-human-feedback
+- reinforcement-learning
 - dialogue
 - conversational-ai
 - preference-alignment
@@ -182,13 +183,7 @@ configs:
 
 # Retrospective Learning from Interactions (Respect) Dataset
 
-This
-
-The paper introduces Reinforcement Learning from Human Interaction (RLHI), a novel approach that learns directly from in-the-wild user conversations. This enables continual model improvement and multifaceted alignment of conversational models, moving beyond traditional pre-annotated, expert-generated human feedback. The dataset facilitates two complementary methods: RLHI with User-Guided Rewrites and RLHI with User-Based Rewards, linking long-term user personas to turn-level preferences.
-
-* **Paper**: [The Era of Real-World Human Interaction: RL from User Conversations](https://huggingface.co/papers/2509.25137)
-* **Project Page**: [https://lil-lab.github.io/respect](https://lil-lab.github.io/respect)
-* **GitHub Repository**: [https://github.com/lil-lab/respect](https://github.com/lil-lab/respect)
+This repository contains the `lil-lab/respect` data, based on the ACL paper [Retrospective Learning from Interactions](https://huggingface.co/papers/2410.13852). For more resources, please see <https://lil-lab.github.io/respect> and <https://github.com/lil-lab/respect>.
 
 ## Sample Usage
 
@@ -211,4 +206,4 @@ model = Idefics2ForConditionalGeneration.from_pretrained(
     checkpoint, torch_dtype=torch.bfloat16)
 peft_model = PeftModel.from_pretrained(
     model, model_id, adapter_name="r6_bp", revision="r6_bp")
-```
+```
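The last hunk only shows the tail of the README's Sample Usage snippet. As a rough sketch of how those pieces fit together, assuming `HuggingFaceM4/idefics2-8b` as the base checkpoint and `lil-lab/respect` as both the dataset and adapter repository (neither value is confirmed by the hunks above), the full pattern looks roughly like:

```python
# Minimal sketch assembled around the fragment visible in the diff.
# `checkpoint`, `model_id`, and the dataset config/split names are assumptions,
# not values confirmed by the README hunks shown above.
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import Idefics2ForConditionalGeneration

# The card lists the data as parquet, so it can be loaded directly.
dataset = load_dataset("lil-lab/respect")  # default config/split assumed

checkpoint = "HuggingFaceM4/idefics2-8b"  # assumed base model
model_id = "lil-lab/respect"              # assumed adapter repository

# These lines mirror the context lines in the last hunk: load the base
# Idefics2 model in bfloat16, then attach the "r6_bp" LoRA adapter revision.
model = Idefics2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16)
peft_model = PeftModel.from_pretrained(
    model, model_id, adapter_name="r6_bp", revision="r6_bp")
```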