---
{}
---

# CT-LLM-Base

[**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
CT-LLM-Base is the first Chinese-centric large language model, pretrained and fine-tuned primarily on Chinese corpora. It offers significant insights into potential biases, Chinese language ability, and multilingual adaptability.
## Uses

The example below loads CT-LLM-Base with 🤗 Transformers and generates a completion for a Chinese prompt:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-model-path>'

# Load the tokenizer and model; device_map="auto" places the weights on available devices.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# CT-LLM-Base is a base (non-chat) model, so prompt it with plain text to complete.
input_text = "很久很久以前,"  # "A long, long time ago,"

inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
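
The snippet above uses greedy decoding and generates only 20 new tokens. For longer or more varied continuations, the standard 🤗 Transformers sampling arguments can be passed to `generate`; the values below are illustrative and not tuned for CT-LLM-Base:

```python
# Sampling-based generation (illustrative settings, not official recommendations)
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```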