Update README.md
README.md (changed)
````diff
@@ -6,7 +6,15 @@ tags:
 - generated_from_trainer
 - trl
 - dpo
+- ipo
 licence: license
+license: apache-2.0
+datasets:
+- jpwahle/etpc
+- worta/apty
+language:
+- en
+pipeline_tag: text-generation
 ---
 
 # Model Card for Llama-3.1-8B-paraphrase-type-generation-apty-ipo
@@ -27,7 +35,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-
+This model was previously finetuned on the ETPC dataset: cluebbers/Llama-3.1-8B-paraphrase-type-generation-etpc-apty-reward
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
@@ -65,4 +73,44 @@ Cite TRL as:
     publisher = {GitHub},
     howpublished = {\url{https://github.com/huggingface/trl}}
 }
+```
+
+Cite ETPC as:
+
+```bibtex
+@inproceedings{kovatchev-etal-2018-etpc,
+    title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
+    author = "Kovatchev, Venelin and
+      Mart{\'\i}, M. Ant{\`o}nia and
+      Salam{\'o}, Maria",
+    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
+    month = may,
+    year = "2018",
+    address = "Miyazaki, Japan",
+    publisher = "European Language Resources Association (ELRA)",
+    url = "https://aclanthology.org/L18-1221",
+}
+```
+
+Cite SFT/ETPC model as:
+
+```bibtex
+@inproceedings{wahle-etal-2023-paraphrase,
+    title = "Paraphrase Types for Generation and Detection",
+    author = "Wahle, Jan Philip and
+      Gipp, Bela and
+      Ruas, Terry",
+    editor = "Bouamor, Houda and
+      Pino, Juan and
+      Bali, Kalika",
+    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
+    month = dec,
+    year = "2023",
+    address = "Singapore",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2023.emnlp-main.746",
+    doi = "10.18653/v1/2023.emnlp-main.746",
+    pages = "12148--12164",
+    abstract = "Current approaches in paraphrase generation and detection heavily rely on a single general similarity score, ignoring the intricate linguistic properties of language. This paper introduces two new tasks to address this shortcoming by considering paraphrase types - specific linguistic perturbations at particular text positions. We name these tasks Paraphrase Type Generation and Paraphrase Type Detection. Our results suggest that while current techniques perform well in a binary classification scenario, i.e., paraphrased or not, the inclusion of fine-grained paraphrase types poses a significant challenge. While most approaches are good at generating and detecting general semantic similar content, they fail to understand the intrinsic linguistic variables they manipulate. Models trained in generating and identifying paraphrase types also show improvements in tasks without them. In addition, scaling these models further improves their ability to understand paraphrase types. We believe paraphrase types can unlock a new paradigm for developing paraphrase models and solving tasks in the future.",
+}
 ```
````
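The card tags this run with both `dpo` and `ipo`. As a quick reference for the two preference objectives, here is a minimal plain-Python sketch of the per-pair losses as published (Rafailov et al. for DPO, Azar et al. for IPO) — this is an illustration, not TRL's actual implementation, and the `beta`/`tau` defaults below are arbitrary example values:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def preference_margin(logp_chosen: float, logp_rejected: float,
                      ref_chosen: float, ref_rejected: float) -> float:
    # Difference of the policy-vs-reference log-ratios for the
    # preferred (chosen) and dispreferred (rejected) completions.
    return (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)


def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_chosen: float, ref_rejected: float, beta: float = 0.1) -> float:
    # DPO: -log sigmoid(beta * margin); pushing the margin ever higher
    # keeps lowering the loss.
    m = preference_margin(logp_chosen, logp_rejected, ref_chosen, ref_rejected)
    return -math.log(sigmoid(beta * m))


def ipo_loss(logp_chosen: float, logp_rejected: float,
             ref_chosen: float, ref_rejected: float, tau: float = 0.1) -> float:
    # IPO: squared deviation of the margin from the target 1/(2*tau),
    # so the optimum is a fixed finite margin rather than an
    # unboundedly large one.
    m = preference_margin(logp_chosen, logp_rejected, ref_chosen, ref_rejected)
    return (m - 1.0 / (2.0 * tau)) ** 2
```

At zero margin the DPO loss equals log 2 ≈ 0.693, while the IPO loss reaches its minimum of exactly 0 when the margin equals 1/(2·tau) — the regularization difference that motivates tagging this model `ipo` rather than plain `dpo`.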