# Fine-Tuned Gemma-7B CEFR Model
This is a fine-tuned version of unsloth/gemma-7b-bnb-4bit for CEFR-level sentence generation, evaluated with a fine-tuned classifier from Mr-FineTuner/Skripsi_validator_best_model.
- Base Model: unsloth/gemma-7b-bnb-4bit
- Fine-Tuning: LoRA on a SMOTE-balanced dataset
- Training Details:
- Dataset: CEFR-level sentences with SMOTE and undersampling for balance
- LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
- Training Args: learning_rate=2e-5, batch_size=8, epochs=1, cosine scheduler
- Optimizer: adamw_8bit
- Early Stopping: Patience=3, threshold=0.01
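
For reference, the hyperparameters listed above can be assembled into a PEFT/Transformers configuration. This is a minimal sketch, not the exact training script: `target_modules` and the evaluation cadence are assumptions, since the card lists only the values shown.

```python
from peft import LoraConfig
from transformers import TrainingArguments, EarlyStoppingCallback

# LoRA setup from the card; target_modules is an assumption,
# since the card does not say which projections were adapted.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.5,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)

# Training arguments matching the values listed above.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",          # as listed in the card
    eval_strategy="steps",       # assumed; needed for early stopping
    save_strategy="steps",
    load_best_model_at_end=True,
)

# Early stopping with the card's patience and threshold.
early_stop = EarlyStoppingCallback(
    early_stopping_patience=3,
    early_stopping_threshold=0.01,
)
```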
- Evaluation Metrics (Exact Matches):
- CEFR Classifier Accuracy: 0.000
- Precision (Macro): 0.000
- Recall (Macro): 0.000
- F1-Score (Macro): 0.000
- Evaluation Metrics (Within ±1 Level):
- CEFR Classifier Accuracy: 0.833
- Precision (Macro): 0.750
- Recall (Macro): 0.833
- F1-Score (Macro): 0.778
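
The two scoring schemes above can be reproduced roughly as follows. This sketch assumes "within ±1 level" counts a prediction as correct when the predicted CEFR level is at most one step from the gold level; the card does not spell out the exact formula.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
TO_IDX = {lvl: i for i, lvl in enumerate(LEVELS)}

def evaluate(gold, pred):
    g = [TO_IDX[x] for x in gold]
    p = [TO_IDX[x] for x in pred]

    # Exact-match accuracy.
    exact_acc = accuracy_score(g, p)

    # Relaxed labels: snap a prediction to the gold label when it
    # lands within one CEFR level (assumed interpretation).
    relaxed = [gi if abs(gi - pi) <= 1 else pi for gi, pi in zip(g, p)]
    within1_acc = accuracy_score(g, relaxed)
    prec, rec, f1, _ = precision_recall_fscore_support(
        g, relaxed, average="macro", zero_division=0
    )
    return exact_acc, within1_acc, prec, rec, f1
```

Under this reading, an A2 prediction for a B1 gold sentence counts as correct, which is why the relaxed scores are far higher than the exact-match scores of 0.000.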
- Other Metrics:
- Perplexity: 3.187
- Diversity (Unique Sentences): 0.010
- Inference Time (ms): 6193.813
- Model Size (GB): 4.2
- Robustness (F1): 0.000
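
A minimal sketch of how a perplexity figure like the one above might be computed; the card does not describe the exact protocol, so this assumes mean token-level negative log-likelihood over a held-out set of texts.

```python
import math
import torch

def perplexity(model, tokenizer, texts):
    total_nll, total_tokens = 0.0, 0
    model.eval()
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt").to(model.device)
            out = model(**enc, labels=enc["input_ids"])
            n = enc["input_ids"].numel()  # token count as a simple weight
            total_nll += out.loss.item() * n
            total_tokens += n
    return math.exp(total_nll / total_tokens)
```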
- Confusion Matrix (Exact Matches):
- CSV: confusion_matrix_exact.csv
- Image: confusion_matrix_exact.png
- Confusion Matrix (Within ±1 Level):
- Per-Class Confusion Metrics (Exact Matches):
- A1: TP=0, FP=100, FN=100, TN=400
- A2: TP=0, FP=300, FN=100, TN=200
- B1: TP=0, FP=100, FN=100, TN=400
- B2: TP=0, FP=100, FN=100, TN=400
- C1: TP=0, FP=0, FN=100, TN=500
- C2: TP=0, FP=0, FN=100, TN=500
- Per-Class Confusion Metrics (Within ±1 Level):
- A1: TP=100, FP=0, FN=0, TN=500
- A2: TP=100, FP=100, FN=0, TN=400
- B1: TP=100, FP=0, FN=0, TN=500
- B2: TP=100, FP=0, FN=0, TN=500
- C1: TP=100, FP=0, FN=0, TN=500
- C2: TP=0, FP=0, FN=100, TN=500
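
The per-class counts above follow standard one-vs-rest bookkeeping over a 6×6 confusion matrix. A small sketch using scikit-learn, with `gold` and `pred` as hypothetical label lists:

```python
from sklearn.metrics import confusion_matrix

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def per_class_counts(gold, pred):
    # Rows are true labels, columns are predicted labels.
    cm = confusion_matrix(gold, pred, labels=LEVELS)
    total = cm.sum()
    for i, lvl in enumerate(LEVELS):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp  # predicted lvl, true something else
        fn = cm[i, :].sum() - tp  # true lvl, predicted something else
        tn = total - tp - fp - fn
        print(f"{lvl}: TP={tp}, FP={fp}, FN={fn}, TN={tn}")
```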
- Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test_03_gemma_trainPercen_myValidator_1epoch")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test_03_gemma_trainPercen_myValidator_1epoch")

# Example inference
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
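
Note that the base checkpoint is a bitsandbytes 4-bit quantization, so loading will typically require a CUDA GPU with `bitsandbytes` installed; passing `device_map="auto"` to `from_pretrained` is a common way to place the weights (not verified against this specific upload).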
Uploaded using huggingface_hub.