GraiLLM · #34 opened 4 days ago by Caesaropapism
How to fine-tune this and make it work using MediaPipe on Kotlin with GPU delegate support, like litert-community/Gemma3-1B-IT? · #32 opened 12 days ago by andromedazt
Request: DOI · 1 · #31 opened 23 days ago by Najin06
Add chat_template.json · #30 opened about 1 month ago by raphaelmerx
Is it possible to select languages for text generation after training? · 1 · #29 opened about 1 month ago by DuongLeVan
Get AutoProcessor working · #28 opened about 2 months ago by mamousavi
[Gemma-3-1B] Gibberish outputs after instruction fine-tuning · 1 · #27 opened 2 months ago by razumelo
Honest Review · 🔥 1 · 2 · #26 opened 2 months ago by kalashshah19
Update README.md · #25 opened 3 months ago by krooner
Request: DOI · 1 · #24 opened 3 months ago by frdfd
TRAINING DATA · 4 · #23 opened 4 months ago by amanpreet7
OSError: Can't load tokenizer for 'google/gemma-3-1b-it'. · 1 · #20 opened 6 months ago by JeffMII
RuntimeError: value cannot be converted to type uint8_t without overflow · 1 · #19 opened 6 months ago by TyJaJa
Request: DOI · 1 · #18 opened 7 months ago by S22-22
Why are vocab_size and the tokenizer different lengths? · 👀 3 · 4 · #17 opened 7 months ago by choco9966
Python GUI · 1 · #14 opened 7 months ago by sportcmaneiro
Serving on vLLM creates nonsense responses · 3 · #12 opened 8 months ago by cahmetcan
Remove development branch of transformers · 3 · #11 opened 8 months ago by farzadab
Use the model as a sequence classifier · 👍 🤝 5 · 4 · #10 opened 8 months ago by A123hmed
Transformers Pipeline Error: AttributeError: 'NoneType' object has no attribute 'apply_chat_template' · 🔥 1 · 16 · #9 opened 8 months ago by steve122192
AttributeError: 'HybridCache' object has no attribute 'float' · 3 · #8 opened 8 months ago by naruto-soop
Remove processor class from tokenizer_config.json · 1 · #7 opened 8 months ago by Xenova
What transformers version can this be deployed with? · 4 · #6 opened 8 months ago by Khalizo