---
language: code
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---

# CodeTrans model for program synthesis

## Table of Contents

- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)

## Model Details

- **Model Description:** This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain and was then fine-tuned on the program synthesis task for a Lisp-inspired DSL.
- **Developed by:** [Ahmed Elnaggar](https://www.linkedin.com/in/prof-ahmed-elnaggar/), [Wei Ding](https://www.linkedin.com/in/wei-ding-92561270/)
- **Model Type:** Summarization
- **Language(s):** English
- **License:** Unknown
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/pdf/2104.02443.pdf)
  - [GitHub Repo](https://github.com/agemagician/CodeTrans)

## How to Get Started With the Model

Here is how to use this model to generate Lisp-inspired DSL code using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```

Run this example in the [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/small_model.ipynb).
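
Note that `AutoModelWithLMHead` is deprecated in recent releases of `transformers`. A minimal sketch of the same call with the current seq2seq classes (an adaptation, not the snippet from the original card; CPU inference is assumed via `device=-1`):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_name = "SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"

# Load the fine-tuned checkpoint and its SentencePiece tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# "summarization" is just the task wrapper the card uses; the model maps an
# English description to Lisp-inspired DSL code.
generator = pipeline("summarization", model=model, tokenizer=tokenizer, device=-1)

description = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
print(generator([description]))
```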

## Uses

#### Direct Use

The model can be used to generate Lisp-inspired DSL code from a natural-language description of the task.
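
For example, a hedged sketch of batching several descriptions at once; the second prompt and the generation settings are illustrative and not taken from the paper or its datasets:

```python
from transformers import pipeline

generator = pipeline(
    "summarization",
    model="SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune",
    device=-1,  # CPU; set to 0 for the first GPU
)

descriptions = [
    "you are given an array of numbers a and a number b , compute the difference of elements in a and b",
    "given an array of numbers a , return the largest element of a",  # hypothetical prompt
]

# max_length and num_beams are illustrative generation knobs, not values from the paper.
for prompt, output in zip(descriptions, generator(descriptions, max_length=64, num_beams=4)):
    print(prompt, "->", output["summary_text"])
```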

## Risks, Limitations and Biases

As detailed in this model’s [publication](https://arxiv.org/pdf/2104.02443.pdf), this model makes use of the dataset [One Billion Word Language Model Benchmark corpus](https://www.researchgate.net/publication/259239818_One_Billion_Word_Benchmark_for_Measuring_Progress_in_Statistical_Language_Modeling) in order to gather the self-supervised English data samples.

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

It should also be noted that language models pretrained on text corpora such as the One Billion Word Language Model Benchmark corpus have been examined specifically: for example, [Ngo, Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark) report that models trained on this corpus

> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”

The same publication further warns that the One Billion Word Language Model Benchmark corpus

> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [the One Billion Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.

## Training

#### Training Data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

The authors additionally provide notes about the vocabulary used in the [associated paper](https://arxiv.org/pdf/2104.02443.pdf):

> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.
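
The checkpoint on the Hub ships this SentencePiece vocabulary with its tokenizer, so encoding and decoding round-trip through it directly. A minimal sketch (the example string is the widget prompt above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"
)

text = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"

# Encode with the model's own SentencePiece vocabulary, then decode back.
ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(ids)[:10])  # first few SentencePiece pieces
print(tokenizer.decode(ids, skip_special_tokens=True))
```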

## Training procedure

#### Preprocessing

##### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for a total of 500,000 steps, using a sequence length of 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
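
For reference, a hedged sketch of an equivalent optimizer setup with the `Adafactor` implementation in `transformers`: with `relative_step=True` the internal learning rate decays roughly as the inverse square root of the step count. The `t5-small` checkpoint below stands in for the CodeTrans setup and is an assumption, not the authors' training script:

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # placeholder for the architecture described above

# relative_step=True lets Adafactor use its internal step-dependent learning rate,
# which decays approximately as 1/sqrt(step); lr must then be None.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxy schedule exposing the internal rate to trainers/loggers
```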

###### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for a total of 5,000 steps, using a sequence length of 512 (batch size 256) and only the dataset containing Lisp-inspired DSL data.

## Evaluation

#### Results

For the program synthesis task, the different models achieve the following results (BLEU score):

Test results:

| Language / Model      |      LISP      |
| --------------------- | :------------: |
| CodeTrans-ST-Small    |     89.43      |
| CodeTrans-ST-Base     |     89.65      |
| CodeTrans-TF-Small    |     90.30      |
| CodeTrans-TF-Base     |     90.24      |
| CodeTrans-TF-Large    |     90.21      |
| CodeTrans-MT-Small    |     82.88      |
| CodeTrans-MT-Base     |     86.99      |
| CodeTrans-MT-Large    |     90.27      |
| CodeTrans-MT-TF-Small |   **90.31**    |
| CodeTrans-MT-TF-Base  |     90.30      |
| CodeTrans-MT-TF-Large |     90.17      |
| State of the art      |     85.80      |
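
For completeness, a hedged sketch of scoring generated programs against references with corpus BLEU via the `evaluate` library; the strings are placeholders, and the paper's exact BLEU configuration may differ:

```python
import evaluate

bleu = evaluate.load("bleu")

# Placeholder strings; in practice these would be model outputs and the
# reference DSL programs from the test split.
predictions = ["( reduce a ( - b ) )"]
references = [["( reduce a ( - b ) )"]]

print(bleu.compute(predictions=predictions, references=references))
```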

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).

- **Hardware Type:** Nvidia RTX 8000 GPUs
- **Hours used:** Unknown
- **Cloud Provider:** GCP (TPU v2-8 and v3-8)
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Citation Information

```bibtex
@misc{elnaggar2021codetrans,
    title={CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing},
    author={Ahmed Elnaggar and Wei Ding and Llion Jones and Tom Gibbs and Tamas Feher and Christoph Angerer and Silvia Severini and Florian Matthes and Burkhard Rost},
    year={2021},
    eprint={2104.02443},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}
```