atasoglu committed on
Commit 625b6cf · verified · 1 Parent(s): 155b7e6

Update README.md

Files changed (1):
  1. README.md +142 -0
README.md CHANGED
license: apache-2.0
language:
- en
- tr
datasets:
- atasoglu/turkish-function-calling-20k
pipeline_tag: text-generation
---

# Uploaded model

**This model was adapted from [ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1](https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1) and fine-tuned on the [atasoglu/turkish-function-calling-20k](https://huggingface.co/datasets/atasoglu/turkish-function-calling-20k) dataset to perform function calling tasks in Turkish.**

- **Developed by:** atasoglu
- **License:** apache-2.0
- **Finetuned from model:** ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Usage

First, load the model:

```python
import json
from unsloth import FastLanguageModel

# load the model and tokenizer in 4-bit precision
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="atasoglu/Turkish-Llama-3-8B-function-calling",
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```
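
If Unsloth is unavailable in your environment, the checkpoint can likely also be loaded with plain `transformers`. This is an editor's sketch, assuming the repository hosts standard Hugging Face weights; it is not verified against this repo:

```python
# alternative load path without Unsloth -- assumes standard HF weights
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atasoglu/Turkish-Llama-3-8B-function-calling"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```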

Set up the tools and messages:

```python
# define the prompt templates
# (the Turkish system prompt instructs the assistant: the available
# functions are given as a JSON schema; if the question can be answered
# with at least one of them, emit a function call inside a JSON snippet;
# never invent parameter values; if no function applies, reply only with
# "Verilen fonksiyonlarla cevaplanamaz" / "Cannot be answered with the
# given functions")
system_prompt = """Sen yardımsever, akıllı ve fonksiyon çağrısı yapabilen bir asistansın.
Aşağıda JSON parçası içinde verilen fonksiyonları kullanarak kullanıcının sorusunu uygun şekilde cevaplamanı istiyorum.

Fonksiyon çağrısı yaparken uyman gereken talimatlar:

* Fonksiyonlar, JSON şeması olarak ifade edilmiştir.
* Eğer kullanıcının sorusu, bu fonksiyonlardan en az biri kullanılarak cevaplanabiliyorsa; uygun bir fonksiyon çağrısını JSON parçası içinde oluştur.
* Fonksiyonların parametreleri için asla uydurmalar yapma ve sadece kullanıcının verdiği bilgileri kullan.
* Eğer kullanıcının sorusu herhangi bir fonksiyon ile cevaplanamıyorsa, sadece "Verilen fonksiyonlarla cevaplanamaz" metnini döndür ve başka bir açıklama yapma.

Bu talimatlara uyarak soruları cevaplandır."""

# user prompt sections: "### Fonksiyonlar" = Functions, "### Soru" = Question
user_prompt = """### Fonksiyonlar

'''json
{tools}
'''

### Soru

{query}"""

# define the tools and messages
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current temperature for a given location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City and country e.g. Bogotá, Colombia",
                    }
                },
                "required": ["location"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    }
]
query = "Paris'te hava şu anda nasıl?"  # "What's the weather like in Paris right now?"
messages = [
    {
        "role": "system",
        "content": system_prompt,
    },
    {
        "role": "user",
        "content": user_prompt.format(
            tools=json.dumps(tools, ensure_ascii=False),
            query=query,
        ),
    },
]
```

**NOTE:** Before running, replace the *single-quote* fence (`'''`) around `{tools}` in `user_prompt` with a *backtick* fence so the functions are presented as a proper fenced JSON snippet; single quotes are used above only to keep the example from breaking this README's own code fence.
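
One way to do that swap programmatically (an editor's suggestion, not part of the original card; `chr(96)` is the backtick character, spelled out so the line survives inside this code fence):

```python
# replace the placeholder ''' fence with a real backtick fence
user_prompt = user_prompt.replace("'''", chr(96) * 3)
```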

Then, generate and evaluate the output:

```python
import re


# parse the model output: if it contains a fenced ```json ... ``` block,
# return (True, parsed_function_calls); otherwise return (False, raw_text)
def eval_function_calling(text):
    match_ = re.search(r"```json(.*)```", text, re.DOTALL)
    if match_ is None:
        return False, text
    return True, json.loads(match_.group(1).strip())


# tokenize the inputs
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to("cuda")

# define generation arguments
generation_kwargs = dict(
    do_sample=True,
    use_cache=True,
    max_new_tokens=500,
    temperature=0.3,
    top_p=0.9,
    top_k=40,
)

# finally, generate the output and decode only the newly generated tokens
outputs = model.generate(**inputs, **generation_kwargs)
output_ids = outputs[:, inputs["input_ids"].shape[1]:]
generated_texts = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
has_function_calling, results = eval_function_calling(generated_texts[0])

# print the model response
if has_function_calling:
    for result in results:
        fn = result["function"]
        name, args = fn["name"], fn["arguments"]
        print(f"Calling {name!r} function with these arguments: {args}")
else:
    print(f"No function call: {results!r}")
```

Output:

```console
Calling 'get_weather' function with these arguments: {"location":"Paris, France"}
```
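
To actually execute the parsed call, you can dispatch it to a local implementation. A minimal sketch follows; the `get_weather` body is a hypothetical stand-in, and the handling of `arguments` assumes it may arrive either as a dict or as a JSON-encoded string:

```python
# hypothetical local implementation -- swap in a real weather API call
def get_weather(location):
    return f"22°C in {location}"


available_functions = {"get_weather": get_weather}

if has_function_calling:
    for result in results:
        fn = result["function"]
        impl = available_functions[fn["name"]]
        args = fn["arguments"]
        if isinstance(args, str):  # decode JSON-encoded arguments
            args = json.loads(args)
        print(impl(**args))  # e.g. "22°C in Paris, France"
```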