Gilbert-Qwen-Multitask-LoRA

LoRA fine-tuned Qwen1.5-1.8B-Chat for multiple text generation tasks including email drafting, story continuation, technical Q&A, news summarization, and chat responses.

Model Description

This model is a LoRA (Low-Rank Adaptation) adapter fine-tuned on Qwen1.5-1.8B-Chat for multitask text generation across five domains:

  • βœ‰οΈ Email Drafting: Generate professional email replies
  • πŸ“– Story Continuation: Continue fictional narratives
  • πŸ’» Technical Q&A: Answer programming and technical questions
  • πŸ“° News Summarization: Create concise summaries of articles
  • πŸ’¬ Chat Responses: Generate conversational replies
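Qwen1.5-Chat models consume ChatML-formatted prompts, so each task reduces to a different user instruction. The per-task instruction templates below are illustrative assumptions; the exact prompts used during training are not documented on this card.

```python
# Illustrative sketch: wrapping each of the five tasks in a ChatML prompt.
# The instruction wording per task is an assumption, not from the training data.
TASK_INSTRUCTIONS = {
    "email": "Draft a professional email reply to the message below.",
    "story": "Continue the following story.",
    "tech_qa": "Answer the following technical question.",
    "summary": "Summarize the following news article concisely.",
    "chat": "Reply conversationally to the message below.",
}

def build_prompt(task: str, text: str) -> str:
    """Build a ChatML prompt as expected by Qwen1.5-Chat models."""
    instruction = TASK_INSTRUCTIONS[task]
    return (
        "<|im_start|>user\n"
        f"{instruction}\n\n{text}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("summary", "Markets rose today after ..."))
```

In practice `tokenizer.apply_chat_template` produces the same framing automatically; the helper above just makes the format explicit.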

Training Details

  • Base Model: Qwen/Qwen1.5-1.8B-Chat
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Quantization: 4-bit (QLoRA)
  • Training Objective: Multi-task learning across the five domains listed above
  • Training Steps: 15,000
  • Learning Rate: 3e-5
  • Context Length: 1024 tokens

Usage

Installation

pip install transformers peft torch accelerate bitsandbytes