---
library_name: transformers
license: apache-2.0
base_model: PekingU/rtdetr_v2_r50vd
tags:
- generated_from_trainer
model-index:
- name: rtdetr-v2-r50-cppe5-finetune-2
  results: []
---

# rtdetr-v2-r50-cppe5-finetune-2

This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://huggingface.co/PekingU/rtdetr_v2_r50vd) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 6.5817
- eval_map: 0.6377
- eval_map_50: 0.9261
- eval_map_75: 0.7461
- eval_map_small: 0.5601
- eval_map_medium: 0.781
- eval_map_large: 0.7309
- eval_mar_1: 0.235
- eval_mar_10: 0.5687
- eval_mar_100: 0.7408
- eval_mar_small: 0.6721
- eval_mar_medium: 0.855
- eval_mar_large: 0.8479
- eval_map_checked-unchecked: -1.0
- eval_mar_100_checked-unchecked: -1.0
- eval_map_checked: 0.6635
- eval_mar_100_checked: 0.7876
- eval_map_unchecked: 0.6118
- eval_mar_100_unchecked: 0.694
- eval_runtime: 7.4046
- eval_samples_per_second: 17.151
- eval_steps_per_second: 2.161
- epoch: 11.0
- step: 990

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 40

### Framework versions

- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
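
The hyperparameters above roughly correspond to a `TrainingArguments` configuration along the following lines. This is a sketch only, not the published training script: dataset loading, the image processor, and the `Trainer` wiring are omitted, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters listed above.
# output_dir is a placeholder; the actual training script is not published.
training_args = TrainingArguments(
    output_dir="rtdetr-v2-r50-cppe5-finetune-2",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=40,
)
```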
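
For reference, here is a minimal inference sketch using the generic `transformers` object-detection API. The checkpoint id and image path are placeholders, and the 0.5 score threshold is an arbitrary starting point, not a tuned value.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Placeholder repo id: substitute the actual location of this checkpoint.
checkpoint = "rtdetr-v2-r50-cppe5-finetune-2"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw model outputs into (score, label, box) detections above a threshold.
results = processor.post_process_object_detection(
    outputs,
    target_sizes=torch.tensor([image.size[::-1]]),  # (height, width)
    threshold=0.5,  # arbitrary starting point; tune per use case
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(
        model.config.id2label[label.item()],
        round(score.item(), 3),
        [round(v, 1) for v in box.tolist()],
    )
```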