danielhanchen committed on
Commit 3ae733c · verified · 1 Parent(s): df00f1b

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +45 −0

README.md ADDED
@@ -0,0 +1,45 @@
---
base_model:
- gg-hf-gm/gemma-3-270m-it
license: gemma
tags:
- gemma3
- unsloth
- gemma
- google
pipeline_tag: text-generation
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
  <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
  <a href="https://github.com/unslothai/unsloth/">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
  </a>
  <a href="https://discord.gg/unsloth">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
  </a>
  <a href="https://docs.unsloth.ai/">
    <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
  </a>
</div>
</div>

# Gemma 3 model card

> [!NOTE]
> This repository corresponds to the 270m **instruction-tuned** version of the Gemma 3 model using Quantization Aware Training (QAT).
>
> **The checkpoint in this repository is unquantized; please make sure to quantize it to Q4_0 with your favorite tool.**
>
> Thanks to QAT, the model preserves quality similar to `bfloat16` while significantly reducing the memory required
> to load the model.
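
Since the note leaves the choice of quantization tool open, here is a minimal sketch of one common route, driving llama.cpp's converter and quantizer from Python. The llama.cpp paths and the `./gemma-3-270m-it-qat` download directory are illustrative assumptions, not part of this card.

```python
# A minimal sketch, assuming a local llama.cpp checkout (which ships
# convert_hf_to_gguf.py and the llama-quantize binary) and that this
# repository has been downloaded to ./gemma-3-270m-it-qat.
# All paths here are assumptions for illustration.
import subprocess

# Step 1: convert the unquantized Hugging Face checkpoint to GGUF.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "./gemma-3-270m-it-qat",
        "--outfile", "gemma-3-270m-it-qat-F16.gguf",
    ],
    check=True,
)

# Step 2: quantize the GGUF file to Q4_0, the format the QAT run targets.
subprocess.run(
    [
        "llama.cpp/build/bin/llama-quantize",
        "gemma-3-270m-it-qat-F16.gguf",
        "gemma-3-270m-it-qat-Q4_0.gguf",
        "Q4_0",
    ],
    check=True,
)
```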
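
For a quick test before quantizing, the metadata above (`pipeline_tag: text-generation`, `library_name: transformers`) suggests the standard Transformers pipeline. The sketch below loads the full unquantized checkpoint; the repository id is an assumption for illustration.

```python
# A minimal sketch: runs the unquantized checkpoint through the
# Transformers text-generation pipeline. The repo id is an assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="unsloth/gemma-3-270m-it-qat",  # assumed repo id for illustration
)
result = generator("Why is the sky blue?", max_new_tokens=64)
print(result[0]["generated_text"])
```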