---
tags:
- image-classification
- timm
- transformers
- animetimm
- dghs-imgutils
library_name: timm
license: gpl-3.0
datasets:
- animetimm/danbooru-wdtagger-v4-w640-ws-full
base_model:
- timm/mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k
---

# Anime Tagger mobilenetv4_conv_aa_large.dbv4-full

## Model Details

- **Model Type:** Multilabel Image classification / feature backbone
- **Model Stats:**
  - Params: 47.3M
  - FLOPs / MACs: 19.2G / 9.6G
  - Image size: train = 448 x 448, test = 448 x 448
- **Dataset:** [animetimm/danbooru-wdtagger-v4-w640-ws-full](https://huggingface.co/datasets/animetimm/danbooru-wdtagger-v4-w640-ws-full)
  - Tags Count: 12476
    - General (#0) Tags Count: 9225
    - Character (#4) Tags Count: 3247
    - Rating (#9) Tags Count: 4

## Results

| # | Macro@0.40 (F1/MCC/P/R) | Micro@0.40 (F1/MCC/P/R) | Macro@Best (F1/P/R) |
|:----------:|:-----------------------------:|:-----------------------------:|:---------------------:|
| Validation | 0.457 / 0.473 / 0.586 / 0.407 | 0.641 / 0.642 / 0.693 / 0.596 | --- |
| Test | 0.458 / 0.473 / 0.586 / 0.408 | 0.641 / 0.642 / 0.694 / 0.596 | 0.511 / 0.537 / 0.513 |

* `Macro/Micro@0.40` means the metrics at the threshold 0.40.
* `Macro@Best` means the mean metrics at the tag-level thresholds, chosen per tag to maximize that tag's F1 score.

## Thresholds

| Category | Name | Alpha | Threshold | Micro@Thr (F1/P/R) | Macro@0.40 (F1/P/R) | Macro@Best (F1/P/R) |
|:----------:|:---------:|:-------:|:-----------:|:---------------------:|:---------------------:|:---------------------:|
| 0 | general | 1 | 0.33 | 0.631 / 0.639 / 0.624 | 0.323 / 0.475 / 0.272 | 0.388 / 0.404 / 0.405 |
| 4 | character | 1 | 0.42 | 0.873 / 0.921 / 0.829 | 0.840 / 0.901 / 0.793 | 0.860 / 0.913 / 0.819 |
| 9 | rating | 1 | 0.38 | 0.807 / 0.759 / 0.862 | 0.813 / 0.788 / 0.841 | 0.814 / 0.783 / 0.851 |

* `Micro@Thr` means the metrics at the category-level suggested thresholds, which are listed in the table above.
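To make the difference between the `@0.40` and `@Best` metrics concrete, here is a minimal sketch using synthetic scores and labels (illustration only, not this model's real outputs): micro F1 at one global 0.40 threshold versus macro F1 with a hand-picked threshold per tag.

```python
import numpy as np

# Synthetic sigmoid scores for 4 images x 3 tags (illustration only,
# NOT real model outputs).
probs = np.array([
    [0.90, 0.30, 0.60],
    [0.10, 0.70, 0.35],
    [0.80, 0.45, 0.30],
    [0.55, 0.60, 0.50],
])
labels = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [1, 1, 1],
])

def f1(pred, true):
    # F1 = 2*TP / (2*TP + FP + FN)
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, true == 0).sum()
    fn = np.logical_and(pred == 0, true).sum()
    return 2 * tp / (2 * tp + fp + fn)

# Micro@0.40-style: pool every (image, tag) decision, one global threshold.
print(f1(probs >= 0.40, labels))  # 0.875

# Macro@Best-style: a separate threshold per tag (hand-picked here),
# then average the per-tag F1 scores.
per_tag_thr = np.array([0.50, 0.55, 0.35])
pred = probs >= per_tag_thr
f1s = [f1(pred[:, j], labels[:, j]) for j in range(labels.shape[1])]
print(np.mean(f1s))  # 1.0
```

Per-tag thresholds can only help macro F1, since each tag's threshold is tuned independently; this is why `Macro@Best` exceeds `Macro@0.40` in the tables above.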
* `Macro@0.40` means the metrics at the threshold 0.40.
* `Macro@Best` means the metrics at the tag-level thresholds, chosen per tag to maximize that tag's F1 score.

The tag-level thresholds can be found in [selected_tags.csv](https://huggingface.co/animetimm/mobilenetv4_conv_aa_large.dbv4-full/resolve/main/selected_tags.csv).

## How to Use

We provide a sample image for the code samples below; you can find it [here](https://huggingface.co/animetimm/mobilenetv4_conv_aa_large.dbv4-full/blob/main/sample.webp).

### Use TIMM And Torch

Install [dghs-imgutils](https://github.com/deepghs/imgutils), [timm](https://github.com/huggingface/pytorch-image-models) and the other necessary requirements with the following command

```shell
pip install 'dghs-imgutils>=0.17.0' torch huggingface_hub timm pillow pandas
```

After that you can load this model with the timm library and use it for training, validation and testing with the following code

```python
import json

import pandas as pd
import torch
from huggingface_hub import hf_hub_download
from imgutils.data import load_image
from imgutils.preprocess import create_torchvision_transforms
from timm import create_model

repo_id = 'animetimm/mobilenetv4_conv_aa_large.dbv4-full'
model = create_model(f'hf-hub:{repo_id}', pretrained=True)
model.eval()

with open(hf_hub_download(repo_id=repo_id, repo_type='model', filename='preprocess.json'), 'r') as f:
    preprocessor = create_torchvision_transforms(json.load(f)['test'])
# Compose(
#     PadToSize(size=(512, 512), interpolation=bilinear, background_color=white)
#     Resize(size=448, interpolation=bicubic, max_size=None, antialias=True)
#     CenterCrop(size=[448, 448])
#     MaybeToTensor()
#     Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
# )

image = load_image('https://huggingface.co/animetimm/mobilenetv4_conv_aa_large.dbv4-full/resolve/main/sample.webp')
input_ = preprocessor(image).unsqueeze(0)
# input_, shape: torch.Size([1, 3, 448, 448]), dtype: torch.float32

with torch.no_grad():
    output = model(input_)
    prediction = torch.sigmoid(output)[0]
# output, shape: torch.Size([1, 12476]), dtype: torch.float32
# prediction, shape: torch.Size([12476]), dtype: torch.float32

df_tags = pd.read_csv(
    hf_hub_download(repo_id=repo_id, repo_type='model', filename='selected_tags.csv'),
    keep_default_na=False
)
tags = df_tags['name']
mask = prediction.numpy() >= df_tags['best_threshold']
print(dict(zip(tags[mask].tolist(), prediction[mask].tolist())))
# {'sensitive': 0.6181432604789734,
#  '1girl': 0.9969968795776367,
#  'solo': 0.9696205258369446,
#  'looking_at_viewer': 0.8432332873344421,
#  'blush': 0.7917149662971497,
#  'smile': 0.9405843615531921,
#  'short_hair': 0.6273495554924011,
#  'shirt': 0.5353975892066956,
#  'long_sleeves': 0.7138653993606567,
#  'brown_hair': 0.8164870738983154,
#  'holding': 0.6878705024719238,
#  'dress': 0.6111152172088623,
#  'closed_mouth': 0.5007601976394653,
#  'white_shirt': 0.34434816241264343,
#  'purple_eyes': 0.7064062356948853,
#  'flower': 0.9301103949546814,
#  'braid': 0.869755208492279,
#  'sidelocks': 0.23272593319416046,
#  'outdoors': 0.4784768223762512,
#  'hand_up': 0.17624469101428986,
#  'blunt_bangs': 0.3426509499549866,
#  'head_tilt': 0.11214721202850342,
#  'sunlight': 0.15041665732860565,
#  'plant': 0.22287964820861816,
#  'light_smile': 0.08559338748455048,
#  'blue_flower': 0.8238141536712646,
#  'backlighting': 0.17485418915748596,
#  'crown_braid': 0.6755908131599426}
```

### Use ONNX Model For Inference

Install [dghs-imgutils](https://github.com/deepghs/imgutils) with the following command

```shell
pip install 'dghs-imgutils>=0.17.0'
```

Use the `multilabel_timm_predict` function with the following code

```python
from imgutils.generic import multilabel_timm_predict

general, character, rating = multilabel_timm_predict(
    'https://huggingface.co/animetimm/mobilenetv4_conv_aa_large.dbv4-full/resolve/main/sample.webp',
    repo_id='animetimm/mobilenetv4_conv_aa_large.dbv4-full',
    fmt=('general', 'character', 'rating'),
)
print(general)
# {'1girl': 0.9969969987869263,
#  'solo': 0.969620406627655,
#  'smile': 0.940584659576416,
#  'flower': 0.9301101565361023,
#  'braid': 0.8697538375854492,
#  'looking_at_viewer': 0.8432332277297974,
#  'blue_flower': 0.8238140344619751,
#  'brown_hair': 0.816490650177002,
#  'blush': 0.7917153835296631,
#  'long_sleeves': 0.7138651609420776,
#  'purple_eyes': 0.7064056396484375,
#  'holding': 0.6878722906112671,
#  'crown_braid': 0.6755940318107605,
#  'short_hair': 0.6273516416549683,
#  'dress': 0.6111209392547607,
#  'shirt': 0.5354008078575134,
#  'closed_mouth': 0.5007631182670593,
#  'outdoors': 0.47849011421203613,
#  'white_shirt': 0.34435153007507324,
#  'blunt_bangs': 0.34265023469924927,
#  'sidelocks': 0.2327292263507843,
#  'plant': 0.22287851572036743,
#  'hand_up': 0.17624491453170776,
#  'backlighting': 0.17485374212265015,
#  'sunlight': 0.15041813254356384,
#  'head_tilt': 0.11214578151702881,
#  'light_smile': 0.08559340238571167}
print(character)
# {}
print(rating)
# {'sensitive': 0.6181411743164062}
```

For further information, see the [documentation of function multilabel_timm_predict](https://dghs-imgutils.deepghs.org/main/api_doc/generic/multilabel_timm.html#multilabel-timm-predict).
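The `(general, character, rating)` split that `multilabel_timm_predict` returns can also be reproduced by hand from raw per-tag scores plus the tag metadata. The sketch below is a hedged emulation using a tiny synthetic DataFrame shaped like `selected_tags.csv` (the column names `name`, `category`, and `best_threshold` are assumed to match the real file) and made-up scores, not actual model outputs.

```python
import pandas as pd

# Tiny synthetic stand-in for selected_tags.csv (columns assumed:
# name, category, best_threshold). Categories: 0=general, 4=character, 9=rating.
df = pd.DataFrame({
    'name': ['1girl', 'smile', 'hatsune_miku', 'sensitive'],
    'category': [0, 0, 4, 9],
    'best_threshold': [0.40, 0.38, 0.45, 0.35],
})

# Made-up per-tag sigmoid scores for one image.
scores = {'1girl': 0.99, 'smile': 0.94, 'hatsune_miku': 0.10, 'sensitive': 0.62}

def split_by_category(df, scores):
    """Keep tags whose score clears their per-tag threshold, grouped by category."""
    out = {}
    for cat, sub in df.groupby('category'):
        out[cat] = {
            row['name']: scores[row['name']]
            for _, row in sub.iterrows()
            if scores[row['name']] >= row['best_threshold']
        }
    return out

result = split_by_category(df, scores)
print(result[0])  # {'1girl': 0.99, 'smile': 0.94}
print(result[4])  # {} -- 0.10 is below the 0.45 character threshold
print(result[9])  # {'sensitive': 0.62}
```

This mirrors the behavior seen in the sample output above, where no character tag clears its threshold and the rating category reduces to a single `sensitive` entry.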