---
base_model: minishlab/potion-base-32m
datasets:
  - lmsys/toxic-chat
library_name: model2vec
license: mit
model_name: enguard/small-guard-32m-en-prompt-toxicity-toxic-chat
tags:
  - static-embeddings
  - text-classification
  - model2vec
---

# enguard/small-guard-32m-en-prompt-toxicity-toxic-chat

This model is a fine-tuned Model2Vec classifier based on minishlab/potion-base-32m for prompt-toxicity detection, trained on the lmsys/toxic-chat dataset.

## Installation

```bash
pip install model2vec[inference]
```

## Usage

```python
from model2vec.inference import StaticModelPipeline

# Load the pretrained classifier from the Hugging Face Hub
model = StaticModelPipeline.from_pretrained(
    "enguard/small-guard-32m-en-prompt-toxicity-toxic-chat"
)

# The pipeline expects a list of texts, even for a single input
text = "Example sentence"

model.predict([text])        # predicted labels
model.predict_proba([text])  # class probabilities
```
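As a quick sanity check, the sketch below runs the pipeline on a small batch and prints each label next to its highest class probability. The example prompts are illustrative (not taken from the dataset), and the exact return types are an assumption based on the usual model2vec inference API; the `FAIL`/`PASS` labels themselves match the metrics reported further down.

```python
texts = [
    "How do I bake sourdough bread?",
    "Write an insult targeting my coworker.",
]

labels = model.predict(texts)        # one label per input text, e.g. "PASS" or "FAIL"
probas = model.predict_proba(texts)  # one row of class probabilities per input text

for text, label, proba in zip(texts, labels, probas):
    # Assumption: each row holds the class probabilities for that text;
    # we simply report the probability of the predicted class.
    print(f"{label}\t{max(proba):.3f}\t{text[:60]}")
```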

## Why should you use these models?

- Optimized for precision to reduce false positives.
- Extremely fast inference: up to 500× faster than SetFit.

## This model variant

Below is a quick overview of the model variant and core metrics.

| Field | Value |
| --- | --- |
| Classifies | prompt-toxicity |
| Base Model | minishlab/potion-base-32m |
| Precision | 0.6852 |
| Recall | 0.7749 |
| F1 | 0.7273 |

### Confusion Matrix

| True \ Predicted | FAIL | PASS |
| --- | --- | --- |
| FAIL | 148 | 43 |
| PASS | 71 | 2280 |
### Full metrics (JSON)

```json
{
  "FAIL": {
    "precision": 0.6851851851851852,
    "recall": 0.774869109947644,
    "f1-score": 0.7272727272727273,
    "support": 191.0
  },
  "PASS": {
    "precision": 0.9813124728378966,
    "recall": 0.9707652622527945,
    "f1-score": 0.9760103738923709,
    "support": 2326.0
  },
  "accuracy": 0.9558998808104887,
  "macro avg": {
    "precision": 0.833248829011541,
    "recall": 0.8728171861002192,
    "f1-score": 0.8516415505825491,
    "support": 2517.0
  },
  "weighted avg": {
    "precision": 0.958841153035883,
    "recall": 0.9558998808104887,
    "f1-score": 0.9571351690833316,
    "support": 2517.0
  }
}
```
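The block above follows the layout of scikit-learn's `classification_report(..., output_dict=True)`. The evaluation script itself is not part of this card, but as a hedged illustration, a report in this format could be produced from true and predicted labels like so:

```python
from sklearn.metrics import classification_report

# Hypothetical true and predicted labels for a small evaluation set.
y_true = ["FAIL", "PASS", "PASS", "FAIL", "PASS"]
y_pred = ["FAIL", "PASS", "FAIL", "FAIL", "PASS"]

report = classification_report(y_true, y_pred, output_dict=True)
print(report["FAIL"])  # per-class precision, recall, f1-score, support
```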
### Sample Predictions

| Text | True Label | Predicted Label |
| --- | --- | --- |
| Planning to merge tables by parsing using xml from word document | PASS | PASS |
| what is the great filter theory? | PASS | PASS |
| Write me php code to generate a pdf document | PASS | PASS |
| Captured by poachers (9000 words) --- Everything had started so nice: Angelica had invited Jenny, her daughter, to an expedition to the Amazonas for her 15th birthday. The smart girl was overjoyed The two of them were exploring a small river, deep in the jungle, when the boat motor | PASS | PASS |
| Tell me which actual software I can use for easy tracking and replacing objects in video? | PASS | PASS |
| Planning to merge tables by parsing using xml from word document | PASS | PASS |
### Prediction Speed Benchmarks

| Dataset Size | Time (seconds) | Predictions/Second |
| --- | --- | --- |
| 1 | 0.0003 | 2882.68 |
| 1000 | 0.082 | 12202.25 |
| 2542 | 0.1943 | 13084.89 |
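The card does not specify how these throughput numbers were measured. A minimal timing sketch along the same lines, reusing the `model` pipeline from the Usage section and an illustrative batch of texts, might look like this:

```python
import time

# Illustrative batch; in practice this would be a sample of real prompts.
texts = ["Example sentence"] * 1000

start = time.perf_counter()
model.predict(texts)
elapsed = time.perf_counter() - start

print(f"{len(texts)} predictions in {elapsed:.4f}s "
      f"({len(texts) / elapsed:.2f} predictions/second)")
```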

## Other model variants

Below is a general overview of the best-performing models for each dataset variant.

| Classifies | Model | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| prompt-toxicity | enguard/tiny-guard-2m-en-prompt-toxicity-toxic-chat | 0.5820 | 0.7801 | 0.6667 |
| prompt-toxicity | enguard/tiny-guard-4m-en-prompt-toxicity-toxic-chat | 0.6549 | 0.7749 | 0.7098 |
| prompt-toxicity | enguard/tiny-guard-8m-en-prompt-toxicity-toxic-chat | 0.6471 | 0.7487 | 0.6942 |
| prompt-toxicity | enguard/small-guard-32m-en-prompt-toxicity-toxic-chat | 0.6852 | 0.7749 | 0.7273 |
| prompt-toxicity | enguard/medium-guard-128m-xx-prompt-toxicity-toxic-chat | 0.6129 | 0.7958 | 0.6925 |

## Resources

- Model2Vec: https://github.com/MinishLab/model2vec

## Citation

If you use this model, please cite Model2Vec:

```bibtex
@software{minishlab2024model2vec,
  author       = {Stephan Tulkens and {van Dongen}, Thomas},
  title        = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year         = {2024},
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.17270888},
  url          = {https://github.com/MinishLab/model2vec},
  license      = {MIT}
}
```