# Fine-Tuned LoRA Adapters for Mixtral-8x7B CEFR Model
This repository contains LoRA adapters for a version of unsloth/mistral-7b-bnb-4bit fine-tuned for CEFR-level sentence generation. The base model is available at unsloth/noSynthetic-mixtral_3epoch_02dropout_base.
- Base Model: unsloth/noSynthetic-mixtral_3epoch_02dropout_base
- Fine-Tuning: LoRA with SMOTE-balanced dataset
- Training Details:
  - Dataset: CEFR-level sentences with SMOTE and undersampling for balance
  - LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.2
  - Training Args: learning_rate=1e-5, batch_size=8, epochs=3, cosine scheduler
  - Optimizer: adamw_8bit
  - Early Stopping: patience=2, threshold=0.01
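The training script itself is not included in this repository; the sketch below shows one way the hyperparameters above could be wired into peft and transformers. The base checkpoint, target_modules, and output_dir are assumptions (the card does not specify them), and the Trainer/dataset setup is omitted.

```python
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    EarlyStoppingCallback,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed training starting point: the 4-bit base checkpoint named in this card.
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-bnb-4bit",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA parameters as listed above; target_modules is an assumption,
# the card does not say which projections were adapted.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.2,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(base_model, lora_config)

# Training arguments as listed above (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="cefr-lora",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    eval_strategy="epoch",        # `evaluation_strategy` in older transformers versions
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# Early stopping with patience=2 and threshold=0.01; pass this callback, together
# with the SMOTE-balanced dataset, to a Trainer/SFTTrainer (not shown here).
early_stopping = EarlyStoppingCallback(
    early_stopping_patience=2,
    early_stopping_threshold=0.01,
)
```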
- Evaluation Metrics (Exact Matches):
  - CEFR Classifier Accuracy: 0.167
  - Precision (Macro): 0.042
  - Recall (Macro): 0.167
  - F1-Score (Macro): 0.067
- Evaluation Metrics (Within ±1 Level):
  - CEFR Classifier Accuracy: 0.500
  - Precision (Macro): 0.306
  - Recall (Macro): 0.500
  - F1-Score (Macro): 0.361
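For reference, below is a minimal sketch of how the exact-match and within-±1 macro scores can be computed with scikit-learn. The label lists are placeholders, and remapping near-miss predictions to the gold label is an assumption about how the relaxed scores above were produced.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
to_idx = {lvl: i for i, lvl in enumerate(LEVELS)}

# Placeholder labels; the real evaluation uses the CEFR classifier's predictions.
gold = ["A1", "A2", "B1", "B2", "C1", "C2"]
pred = ["A2", "A2", "B1", "B1", "B2", "C1"]

# Exact-match scores (macro-averaged over the six levels).
exact_acc = accuracy_score(gold, pred)
exact_p, exact_r, exact_f1, _ = precision_recall_fscore_support(
    gold, pred, labels=LEVELS, average="macro", zero_division=0
)

# Within ±1 level: a prediction one level away from the gold label counts as correct,
# implemented here by remapping such predictions to the gold label before scoring.
relaxed = [g if abs(to_idx[p] - to_idx[g]) <= 1 else p for g, p in zip(gold, pred)]
relaxed_acc = accuracy_score(gold, relaxed)
relaxed_p, relaxed_r, relaxed_f1, _ = precision_recall_fscore_support(
    gold, relaxed, labels=LEVELS, average="macro", zero_division=0
)
```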
- Other Metrics:
  - Perplexity: 9.119
  - Diversity (Unique Sentences): 0.033
  - Inference Time (ms): 1043.625
  - Model Size (GB): 28.0 (base model + LoRA adapters)
  - Robustness (F1): 0.063
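Perplexity is read here as the exponential of the average token-level cross-entropy over the evaluation sentences, and diversity as the fraction of unique generated sentences; both readings are assumptions about the card's metric definitions. A minimal sketch, assuming model and tokenizer are loaded as in the Usage section below:

```python
import math
import torch

def perplexity(model, tokenizer, sentences):
    """exp of the token-weighted average negative log-likelihood."""
    total_nll, total_tokens = 0.0, 0
    model.eval()
    with torch.no_grad():
        for text in sentences:
            enc = tokenizer(text, return_tensors="pt").to(model.device)
            # Passing labels=input_ids makes the model return the mean cross-entropy loss.
            out = model(**enc, labels=enc["input_ids"])
            n_tokens = enc["input_ids"].size(1)
            total_nll += out.loss.item() * n_tokens
            total_tokens += n_tokens
    return math.exp(total_nll / total_tokens)

def diversity(sentences):
    """Fraction of generated sentences that are unique."""
    return len(set(sentences)) / len(sentences)
```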
- Confusion Matrix (Exact Matches):
  - CSV: confusion_matrix_exact.csv
  - Image: confusion_matrix_exact.png
- Confusion Matrix (Within ±1 Level):
- Per-Class Confusion Metrics (Exact Matches):
  - A1: TP=0, FP=0, FN=10, TN=50
  - A2: TP=10, FP=30, FN=0, TN=20
  - B1: TP=0, FP=20, FN=10, TN=30
  - B2: TP=0, FP=0, FN=10, TN=50
  - C1: TP=0, FP=0, FN=10, TN=50
  - C2: TP=0, FP=0, FN=10, TN=50
- Per-Class Confusion Metrics (Within ±1 Level):
  - A1: TP=10, FP=0, FN=0, TN=50
  - A2: TP=10, FP=10, FN=0, TN=40
  - B1: TP=10, FP=20, FN=0, TN=30
  - B2: TP=0, FP=0, FN=10, TN=50
  - C1: TP=0, FP=0, FN=10, TN=50
  - C2: TP=0, FP=0, FN=10, TN=50
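These per-class counts follow the standard one-vs-rest reading of a confusion matrix: TP on the diagonal, FP in the rest of the column, FN in the rest of the row, TN everywhere else. A minimal sketch, with gold/pred as placeholder label lists like those in the metrics sketch above:

```python
from sklearn.metrics import confusion_matrix

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def per_class_counts(gold, pred):
    cm = confusion_matrix(gold, pred, labels=LEVELS)
    total = cm.sum()
    counts = {}
    for i, lvl in enumerate(LEVELS):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp  # predicted as lvl, actually another level
        fn = cm[i, :].sum() - tp  # actually lvl, predicted as another level
        tn = total - tp - fp - fn
        counts[lvl] = {"TP": int(tp), "FP": int(fp), "FN": int(fn), "TN": int(tn)}
    return counts
```

For the within-±1 table, the same function can be applied to the relaxed predictions from the earlier sketch.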
- Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Load the 4-bit quantized base model
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/noSynthetic-mixtral_3epoch_02dropout_base",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/noSynthetic-mixtral_3epoch_02dropout_base")

# Load the LoRA adapters from this repository
model = PeftModel.from_pretrained(base_model, "Mr-FineTuner/noSynthetic-mixtral_3epoch_02dropout_lora")

# Example inference
prompt = "<s>[INST] Generate a CEFR B1 level sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The adapters were uploaded with huggingface_hub and are saved in the safetensors format for efficiency.