patrickvonplaten pandora-s committed
Commit 9925900 · verified · 1 Parent(s): 1296dc8

[AUTO] CVST Tokenizer Badger (#140)


- [AUTO] CVST Tokenizer Badger (a114bdb8b30f55eea8072737b264969f5bd576dc)


Co-authored-by: pandora <[email protected]>

Files changed (1): README.md +56 -0
README.md CHANGED
@@ -12,6 +12,62 @@ widget:
 
 # Model Card for Mistral-7B-Instruct-v0.2
 
+ ###
+
+ > [!CAUTION]
+ > ⚠️
+ > The `transformers` tokenizer might give incorrect results as it has not been tested by the Mistral team. To make sure that your encoding and decoding is correct, please use `mistral_common` as shown below:
+
+ ## Encode and Decode with `mistral_common`
+
+ ```py
+ from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
+ from mistral_common.protocol.instruct.messages import UserMessage
+ from mistral_common.protocol.instruct.request import ChatCompletionRequest
+
+ mistral_models_path = "MISTRAL_MODELS_PATH"
+
+ tokenizer = MistralTokenizer.v1()
+
+ completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
+
+ tokens = tokenizer.encode_chat_completion(completion_request).tokens
+ ```
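(Editor's note, not part of the commit diff above.) At the string level, the v1 instruct format that `mistral_common` encodes wraps each user turn in `[INST] ... [/INST]` markers after the `<s>` begin-of-sequence marker. The sketch below is a stdlib-only approximation for intuition: `build_instruct_prompt` is a hypothetical helper covering only consecutive user turns, and the real token-level output additionally depends on SentencePiece tokenization, so always use `mistral_common` itself for actual encoding.

```python
# Hypothetical illustration of the string-level v1 instruct template.
# Real encoding must go through mistral_common; this only shows the markers.
def build_instruct_prompt(user_messages):
    """Wrap each user turn in [INST] ... [/INST] after the <s> marker."""
    prompt = "<s>"
    for msg in user_messages:
        prompt += f"[INST] {msg} [/INST]"
    return prompt

print(build_instruct_prompt(["Explain Machine Learning to me in a nutshell."]))
```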
+
+ ## Inference with `mistral_inference`
+
+ ```py
+ from mistral_inference.model import Transformer
+ from mistral_inference.generate import generate
+
+ model = Transformer.from_folder(mistral_models_path)
+ out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
+
+ result = tokenizer.decode(out_tokens[0])
+
+ print(result)
+ ```
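(Editor's note, not part of the commit diff above.) `temperature=0.0` in the call above requests deterministic, greedy decoding: at every step the single highest-scoring token is chosen rather than sampled. A minimal stdlib illustration of that per-step choice over a toy logit list, with `greedy_pick` being a hypothetical name for this sketch:

```python
# Hypothetical sketch of the per-step choice that temperature=0.0 implies:
# pick the index of the largest logit instead of sampling from a distribution.
def greedy_pick(logits):
    """Return the index of the highest logit (greedy, temperature-0 choice)."""
    best_index = 0
    for i, value in enumerate(logits):
        if value > logits[best_index]:
            best_index = i
    return best_index

print(greedy_pick([0.1, 2.5, 0.3]))  # index of the largest logit
```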
+
+ ## Inference with Hugging Face `transformers`
+
+ ```py
+ import torch
+
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+ model.to("cuda")
+
+ # `tokens` is a plain Python list from mistral_common; `generate` expects a
+ # batched tensor on the model's device.
+ input_ids = torch.tensor([tokens]).to(model.device)
+ generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)
+
+ # decode with mistral tokenizer
+ result = tokenizer.decode(generated_ids[0].tolist())
+ print(result)
+ ```
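(Editor's note, not part of the commit diff above.) The caution and tip in this commit are about the two tokenizers possibly disagreeing. When checking a candidate fix, a simple way to localize a mismatch is to compare the two token lists and report the first index where they diverge; `first_divergence` below is a hypothetical stdlib-only helper for that, not part of either library:

```python
# Hypothetical helper for comparing two tokenizations of the same prompt,
# e.g. mistral_common output vs. the transformers tokenizer output.
def first_divergence(tokens_a, tokens_b):
    """Return the first index where the token lists differ, or None if equal."""
    for i, (a, b) in enumerate(zip(tokens_a, tokens_b)):
        if a != b:
            return i
    if len(tokens_a) != len(tokens_b):
        # One list is a strict prefix of the other.
        return min(len(tokens_a), len(tokens_b))
    return None

print(first_divergence([1, 733, 16289], [1, 733, 28705]))  # first mismatch index
```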
+
+ > [!TIP]
+ > PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome!
+
+ ---
+
 The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
 
 Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1