yoruba-diacritics-quantized

This model is a fine-tuned version of Davlan/mT5_base_yoruba_adr on a version of the Niger-Volta-LTI Yoruba data, provided by bumie-e on Hugging Face.

Model description

The fine-tuning was performed with PEFT (LoRA), with the aim of improving the model's performance on diacritic restoration and on generating correctly diacritized Yoruba text; a minimal sketch of such a setup is shown after the feature list below.

Key Features:

  • Base model: Davlan/mT5_base_yoruba_adr, pre-trained on Yoruba text
  • Fine-tuning dataset: Yoruba diacritics dataset from bumie-e/Yoruba-diacritics-vs-non-diacritics
  • Fine-tuning technique: PEFT (LoRA)
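
The exact LoRA hyperparameters used for this adapter are not published in this card; the snippet below is only a minimal sketch of how such a fine-tuning setup is typically assembled with peft, and the LoraConfig values shown (rank, alpha, dropout, target modules) are illustrative assumptions, not the trained adapter's actual settings.

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "Davlan/mT5_base_yoruba_adr"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Illustrative LoRA settings; the actual rank/alpha/dropout used for this
# adapter are not documented here.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # mT5/T5 attention projection layers
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable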

Potential Applications:

  • Diacritic restoration in Yoruba text
  • Generation of Yoruba text with correct diacritics
  • Natural language processing tasks for Yoruba language

Code for Testing:

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the LoRA adapter configuration, the base model, and its tokenizer,
# then attach the adapter to the base model.
config = PeftConfig.from_pretrained("Professor/yoruba-diacritics-quantized")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
model = PeftModel.from_pretrained(model, "Professor/yoruba-diacritics-quantized")
tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")

# Tokenize an undiacritized Yoruba sentence.
inputs = tokenizer(
    "Mo ti so fun bobo yen sha, aaro la wa bayi",
    return_tensors="pt",
)

device = "cpu"  # set to "cuda" if a GPU is available

model.to(device)
model.eval()

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=100,
    )
    print(tokenizer.batch_decode(outputs.cpu(), skip_special_tokens=True))
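
The adapter name suggests the base model was quantized during fine-tuning. To keep memory usage low at inference time, one option (an assumption on my part, not a documented requirement of this adapter) is to load the base model in 8-bit with bitsandbytes before attaching the adapter:

from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 8-bit (requires the bitsandbytes package and a CUDA GPU),
# then attach the LoRA adapter on top of the quantized weights.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
base_model = AutoModelForSeq2SeqLM.from_pretrained(
    "Davlan/mT5_base_yoruba_adr",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Professor/yoruba-diacritics-quantized")
tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")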

Intended uses & limitations

More information coming soon.

Training and evaluation data

More information coming soon.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to Seq2SeqTrainingArguments follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
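
For reference, here is a hedged sketch of how the values above map onto Transformers Seq2SeqTrainingArguments. The exact Trainer setup used for this run is not published; the output path below is illustrative only.

from transformers import Seq2SeqTrainingArguments

# Mirror the hyperparameters listed above; per_device_train_batch_size (16)
# combined with gradient_accumulation_steps (2) gives the total train batch
# size of 32.
training_args = Seq2SeqTrainingArguments(
    output_dir="yoruba-diacritics-lora",  # illustrative path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10_000,
    fp16=True,  # Native AMP mixed precision
)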

Training results

Coming soon.

Framework versions

  • PEFT 0.7.2.dev0
  • Transformers 4.38.0.dev0
  • Pytorch 2.0.0
  • Datasets 2.16.1
  • Tokenizers 0.15.0