---
base_model: BLIP
library_name: peft
---

# Model Card for BLIP: Bootstrapping Language-Image Pre-training

<!-- Provide a quick summary of what the model is/does. -->

BLIP is a unified vision-language model designed for image captioning, visual question answering, and related tasks. This checkpoint starts from a BLIP model pretrained for image captioning and is fine-tuned on a food-specific dataset.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

BLIP (Bootstrapping Language-Image Pre-training) uses a vision transformer (ViT) to extract image features and couples it with text transformers for unified vision-language understanding and generation. This particular checkpoint is fine-tuned to generate captions for food-related images.

- **Developed by:** Salesforce AI Research
- **Funded by:** Salesforce
- **Shared by:** Official BLIP repository
- **Model type:** Vision-language model
- **Language(s):** English
- **Finetuned from model:** BLIP base pretrained on the COCO dataset
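
Since this repository ships a PEFT adapter (see `library_name: peft` above), one way to run it is to load the base BLIP captioning model with `transformers` and attach the adapter with `peft`. The snippet below is a minimal sketch under assumptions: the base checkpoint ID, adapter path, and image file are placeholders, not values confirmed by this card.

```python
from PIL import Image
import torch
from transformers import BlipProcessor, BlipForConditionalGeneration
from peft import PeftModel

# Assumed base checkpoint; swap in the exact base model this adapter was trained from.
BASE_ID = "Salesforce/blip-image-captioning-base"

processor = BlipProcessor.from_pretrained(BASE_ID)
model = BlipForConditionalGeneration.from_pretrained(BASE_ID)

# Hypothetical adapter location; point this at the PEFT adapter weights in this repository.
model = PeftModel.from_pretrained(model, "path/to/food-captioning-adapter")
model.eval()

# Placeholder image path; any RGB food photo works here.
image = Image.open("food.jpg").convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30)

# Decode the generated token IDs into the predicted caption.
print(processor.decode(output_ids[0], skip_special_tokens=True))
```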

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [BLIP Official GitHub](https://github.com/salesforce/BLIP)
- **Paper:** [BLIP: Bootstrapping Language-Image Pre-training](https://arxiv.org/abs/2201.12086)
- **Dataset:** [Food Ingredients and Recipe Dataset with Images](https://www.kaggle.com/datasets/pes12017000148/food-ingredients-and-recipe-dataset-with-images)