# Gilbert-Qwen-Multitask-LoRA
A LoRA fine-tune of Qwen1.5-1.8B-Chat for multiple text generation tasks, including email drafting, story continuation, technical Q&A, news summarization, and chat responses.
## Model Description
This model is a LoRA (Low-Rank Adaptation) adapter fine-tuned on Qwen1.5-1.8B-Chat for multitask text generation across 5 domains:
- ✉️ Email Drafting: Generate professional email replies
- 📖 Story Continuation: Continue fictional narratives
- 💻 Technical Q&A: Answer programming and technical questions
- 📰 News Summarization: Create concise summaries of articles
- 💬 Chat Responses: Generate conversational replies
## Training Details
- Base Model: Qwen/Qwen1.5-1.8B-Chat
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Quantization: 4-bit (QLoRA)
- Training Tasks: Multi-task learning across the five domains listed above
- Training Steps: 15,000
- Learning Rate: 3e-5
- Context Length: 1024 tokens
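
For reference, the sketch below shows how a QLoRA setup with these hyperparameters could be reproduced with `peft` and `transformers`. The LoRA rank, alpha, dropout, target modules, and batch settings are assumptions and are not stated in this card; only the base model, 4-bit quantization, step count, learning rate, and context length come from the details above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 precision for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-1.8B-Chat",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter; rank, alpha, dropout, and target modules are assumed values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters stated in this card: 15,000 steps at lr 3e-5;
# training examples would be truncated/packed to the 1,024-token context
training_args = TrainingArguments(
    output_dir="gilbert-qwen-multitask-lora",
    max_steps=15_000,
    learning_rate=3e-5,
    per_device_train_batch_size=4,   # assumed
    gradient_accumulation_steps=4,   # assumed
    logging_steps=100,
)
```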
## Usage
### Installation
```bash
pip install transformers peft torch accelerate bitsandbytes
```
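
### Inference

A minimal inference sketch, assuming the adapter is loaded on top of the 4-bit base model via `peft`; the example prompt and generation settings are illustrative, not prescribed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/Qwen1.5-1.8B-Chat"
adapter_id = "GilbertAkham/gilbert-qwen-multitask-lora"

# Load the base model in 4-bit, matching the QLoRA training setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Apply the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter_id)

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Example: email drafting, one of the five supported tasks
messages = [
    {"role": "user", "content": "Draft a short, professional reply accepting a meeting invitation for Tuesday."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```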