Albertmade committed
Commit c7e09a3 · verified · 1 Parent(s): 9e90800

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +7 -57
README.md CHANGED
@@ -1,62 +1,12 @@
  ---
- library_name: transformers
- license: mit
  base_model: intfloat/multilingual-e5-base
- tags:
- - generated_from_trainer
- metrics:
- - accuracy
- - f1
- model-index:
- - name: ai-champ
-   results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
 
- # ai-champ
-
- This model is a fine-tuned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5180
- - Accuracy: 0.7567
- - F1: 0.7843
- - Roc Auc: 0.8159
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 3e-06
- - train_batch_size: 64
- - eval_batch_size: 128
- - seed: 42
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 3
- - mixed_precision_training: Native AMP
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.56.1
- - Pytorch 2.2.1
- - Datasets 4.0.0
- - Tokenizers 0.22.0
 
  ---
+ pipeline_tag: text-classification
+ tags: [encoder, binary-classification, routing, ai-champs]
  base_model: intfloat/multilingual-e5-base
  ---

+ # ai-champ: Question A/B Routing Classifier
 
+ - Input: `question` + `[MODEL] {model_name}` (see the inference sketch below)
+ - Label: score ≥ 4 → **B(1)**, else **A(0)**
+ - Training data: a 10% sample of each of the train/test splits of `HAERAE-HUB/ai-champs-train` (see the sampling sketch below)
+ - Test metrics (epoch 3.0): loss 0.5180 · accuracy 0.7567 · F1 0.7843 · ROC AUC 0.8159 · eval runtime 19.16 s (955.6 samples/s, 7.5 steps/s)
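
A minimal inference sketch following the input format and label rule above. The repo id, the label order (index 1 = B), and the example model name are assumptions, not taken from the commit:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- substitute the actual id this model card is published under.
repo_id = "Albertmade/ai-champ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Build the input exactly as the card describes: question + "[MODEL] {model_name}".
question = "Explain the difference between TCP and UDP."
model_name = "gpt-4o-mini"  # hypothetical candidate model being routed to
text = f"{question} [MODEL] {model_name}"

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed label order: index 1 -> B (score >= 4), index 0 -> A.
pred = logits.argmax(dim=-1).item()
print("route B" if pred == 1 else "route A")
```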
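And a sketch of the 10% subsampling and labeling described in the card. The shuffle seed, the sampling method, and the `score` column name are assumptions about the dataset, not facts from the commit:

```python
from datasets import load_dataset

ds = load_dataset("HAERAE-HUB/ai-champs-train")

def to_label(example):
    # Card's rule: score >= 4 -> B (1), otherwise A (0).
    # The "score" column name is an assumption about the dataset schema.
    example["label"] = int(example["score"] >= 4)
    return example

# 10% sample of each split; shuffling and the seed are assumptions.
train_small = ds["train"].shuffle(seed=42).select(range(len(ds["train"]) // 10)).map(to_label)
test_small = ds["test"].shuffle(seed=42).select(range(len(ds["test"]) // 10)).map(to_label)
```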