# 🥭 MANGO-T5+ (770M) - Spanish Sysadmin Assistant
Mango-T5+ is a fine-tuned CodeT5+ model specialized in translating natural language instructions into complex terminal commands (Bash, Docker, Git, Systemd, etc.).
🚀 Special Feature: While based on a multilingual model, MANGO has been specifically optimized to understand Spanish instructions, including technical jargon and common sysadmin slang.
## 💻 Usage
### Installation

```bash
pip install transformers torch sentencepiece safetensors
```
### Inference Code

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jrodriiguezg/mango-t5-770m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Translate to Bash: bloquea la ip 192.168.1.50 en el firewall"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
command = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(command)
# Output: sudo iptables -A INPUT -s 192.168.1.50 -j DROP
```
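The model expects instructions in the prefix style shown above. As a convenience, that format can be wrapped in a small helper; a minimal sketch, where the `build_prompt` name and the optional `target` parameter are assumptions, while the `Translate to Bash:` prefix itself comes from the usage example:

```python
def build_prompt(instruction: str, target: str = "Bash") -> str:
    """Build the prefix-style prompt the model expects.

    The 'Translate to Bash: <instruction>' format follows the usage
    example above; other targets are a hypothetical extension.
    """
    return f"Translate to {target}: {instruction}"

print(build_prompt("bloquea la ip 192.168.1.50 en el firewall"))
# Translate to Bash: bloquea la ip 192.168.1.50 en el firewall
```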
## Capabilities
Unlike generic LLMs, Mango-T5+ is focused on execution logic rather than chat. It excels at:
- Complex Piping: Chaining commands logically (e.g., find | xargs).
- Docker Operations: Advanced filtering and bulk actions.
- Git Management: Distinguishing between soft/hard resets and branching.
- System Administration: systemd, journalctl, chmod, chown, user management.
### Example Outputs
| Input (Spanish) | Generated Command |
|---|---|
| "Reinicia los contenedores que sean de ubuntu" | `docker ps -q --filter ancestor=ubuntu \| xargs docker restart` |
| "Busca archivos modificados hoy" | `find . -type f -mtime 0` |
| "Deshaz el último commit manteniendo cambios" | `git reset --soft HEAD~1` |
## Training Details
### Dataset Composition
The model was trained on a mixed dataset of ~35,000 examples, combining:
- NL2Bash: The standard benchmark for bash translation.
- Docker-NL: Specific dataset for container orchestration.
- TLDR Pages: Inverse training using CLI tool descriptions.
- Mango-DataSheet (Custom): A manually curated dataset in Spanish designed to fix common hallucinations, logic errors, and security pitfalls.
### Hyperparameters
- Base Model: Salesforce/codet5p-770m
- Hardware: NVIDIA L4 (24GB VRAM)
- Precision: BF16 (Brain Float 16)
- Optimizer: Adafactor
- Batch Size: 8 (Gradient Accumulation: 2)
- Epochs: 3
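The hyperparameters above map roughly onto a `Seq2SeqTrainingArguments` configuration. This is a sketch of the reported setup, not the actual training script; `output_dir` is a placeholder, and the learning rate and other unlisted settings are left at their defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported configuration; unlisted values are defaults.
args = Seq2SeqTrainingArguments(
    output_dir="mango-t5-770m",     # placeholder
    per_device_train_batch_size=8,  # Batch Size: 8
    gradient_accumulation_steps=2,  # effective batch size of 16
    num_train_epochs=3,
    bf16=True,                      # Brain Float 16 (run on an NVIDIA L4)
    optim="adafactor",              # Adafactor optimizer
)
```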
## Limitations & Safety
- Context: The model assumes a standard Linux environment (Fedora/Debian/Ubuntu).
- Verification: Always verify commands before execution, especially those involving file deletion (`rm`), disk formatting (`mkfs`, `dd`), or permission changes (`chmod`, `chown`). The model generates the command; the user is responsible for running it.
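The verification advice above can be automated as a last-line check before anything is executed. A minimal sketch: the `needs_review` helper and its pattern list are illustrative (covering only the commands named above, not an exhaustive set) and are not part of the model:

```python
import re

# Commands the card flags as high-risk; illustrative, not exhaustive.
DANGEROUS = re.compile(r"\b(rm|mkfs\.?\w*|dd|chmod|chown)\b")

def needs_review(command: str) -> bool:
    """Return True if a generated command should be manually reviewed."""
    return bool(DANGEROUS.search(command))

print(needs_review("sudo iptables -A INPUT -s 192.168.1.50 -j DROP"))  # False
print(needs_review("rm -rf /tmp/cache"))                               # True
```

A denylist like this only catches known-risky tool names; it is a prompt for human review, not a substitute for reading the command.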
## License
This model is fine-tuned from Salesforce/codet5p-770m and is distributed under the Apache 2.0 license.