---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: input
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 5235566
    num_examples: 40639
  - name: test
    num_bytes: 57761
    num_examples: 300
  download_size: 2474695
  dataset_size: 5293327
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: mit
tags:
- nl2bash
- shell
- instruction
- code
pretty_name: NL2SH-alpaca-format
size_categories:
- 10K<n<100K
---

# NL2SH-ALPACA

This dataset is an Alpaca-format conversion of [westenfelder/NL2SH-ALFA](https://huggingface.co/datasets/westenfelder/NL2SH-ALFA). Each example follows the standard Alpaca layout:

```json
{
  "instruction": "...",
  "input": "",
  "output": "..."
}
```

Additionally, for the **test split**, the original `bash2` (alternative command) and `difficulty` fields have been **included in the output**. This dataset can be used to train instruction-following models to translate **natural language instructions into shell commands**.

---

### Data Fields

The data fields are as follows:

* `instruction`: natural language description of the shell task the model should perform. Each instruction is unique.
* `input`: optional context or input for the task. In this dataset, this field is empty for all examples.
* `output`: the shell command corresponding to the instruction. For the **test split**, the alternative command (`bash2`) and the difficulty are appended to the output.
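The mapping described above (Alpaca fields, with `bash2` and `difficulty` appended to the test-split output) can be sketched as follows. This is a minimal illustration, not the actual conversion script: the source field names `nl` and `bash`, and the exact formatting used when appending the test-only fields, are assumptions.

```python
def to_alpaca(row, is_test=False):
    """Map one NL2SH-ALFA-style row to the instruction/input/output layout.

    Assumes the source row stores the task description under "nl" and the
    command under "bash" (assumption); "bash2" and "difficulty" are the
    test-only fields named in this card.
    """
    output = row["bash"]
    if is_test:
        # Append the alternative command and difficulty for test rows.
        # The exact separator/formatting here is an assumption.
        output += f"\nbash2: {row.get('bash2', '')}\ndifficulty: {row.get('difficulty', '')}"
    return {
        "instruction": row["nl"],
        "input": "",  # always empty in this dataset
        "output": output,
    }


# Example usage with a hypothetical source row:
example = {"nl": "List files in the current directory", "bash": "ls"}
print(to_alpaca(example))
```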
---

### Data Instances

An example of "train" looks as follows:

```json
{
  "instruction": "Compile C code and cache compiled output (to use `ccache` on all `gcc` invocations, see the note above)",
  "input": "",
  "output": "ccache gcc path/to/file.c"
}
```

---

### Data Splits

|              |  train | test |
| ------------ | -----: | ---: |
| NL2SH-ALPACA | 40,639 |  300 |

---

### How to load

```python
from datasets import load_dataset

ds = load_dataset("abandonedmonk/NL2SH-ALPACA")
print(ds["train"][0])
```

---

### Citation

If you use this dataset, please cite the original work:

```
@misc{westenfelder2025nl2sh,
  title        = {NL2SH-ALFA},
  author       = {westenfelder},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/westenfelder/NL2SH-ALFA}}
}
```

This version of the dataset was **converted to Alpaca-style format and maintained by Anshuman Jena**.