LLM-FineTuning-Large-Language-Models

{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "🦙 Chat Templates in HuggingFace are just great. 🔥🚀\n",
    "\n",
    "📌 Why templates? ❓\n",
    "\n",
    "An increasingly common use case for LLMs is chat. In a chat context, rather than continuing a single string of text (as is the case with a standard language model), the model instead continues a conversation that consists of one or more messages, each of which includes a role as well as message text.\n",
    "\n",
    "Chat models have been trained with very different formats for converting conversations into a single tokenizable string. Using a format different from the format a model was trained with will usually cause severe, silent performance degradation, so matching the format used during training is extremely important! \n",
    "\n",
    "So, tokenizers now carry a `chat_template` attribute that saves the chat format the model was trained with, along with an `apply_chat_template()` method that applies it. The attribute contains a Jinja template that converts conversation histories into a correctly formatted string. \n",
    "\n",
    "--------\n",
    "\n",
    "All language models, including models fine-tuned for chat, operate on linear sequences of tokens and do not intrinsically have special handling for roles. This means that role information is usually injected by adding control tokens between messages, to indicate both the message boundary and the relevant roles.\n",
    "\n",
    "Unfortunately, there isn’t (yet!) a standard for which tokens to use, and so different models have been trained with wildly different formatting and control tokens for chat. This can be a real problem for users - if you use the wrong format, then the model will be confused by your input, and your performance will be a lot worse than it should be. \n",
    "\n",
    "Whether you're fine-tuning a model or using it directly for inference, it's always a good idea to minimize these distribution shifts and keep the input you give it as similar as possible to the input it was trained on. With regular language models, it's relatively easy to do that - simply load your tokenizer and model from the same checkpoint, and you're good to go.\n",
    "\n",
    "With chat models, however, it's a bit different. This is because \"chat\" is not just a single string of text that can be straightforwardly tokenized - it's a sequence of messages, each of which contains a role as well as content, which is the actual text of the message. Most commonly, the roles are \"user\" for messages sent by the user, \"assistant\" for responses written by the model, and optionally \"system\" for high-level directives given at the start of the conversation.\n",
    "\n",
    "This sequence of messages needs to be converted into a text string before it can be tokenized and used as input to a model. \n",
    "\n",
    "This is the problem that chat templates aim to solve. Chat templates are Jinja template strings that are saved and loaded with your tokenizer, and that contain all the information needed to turn a list of chat messages into a correctly formatted input for your model. The next cell shows an example: the built-in LLaMA-2 chat template applied via the tokenizer, followed by the equivalent Jinja template written by hand.\n",
    "\n",
    "------------------"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chat Templates in HuggingFace are just great 🚀\n",
    "\n",
    "from transformers import AutoTokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-chat-hf\")\n",
    "\n",
    "chat = [\n",
    "  {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n",
    "  {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n",
    "  {\"role\": \"user\", \"content\": \"I'd like to show off how chat templating works!\"},\n",
    "]\n",
    "\n",
    "tokenizer.use_default_system_prompt = False\n",
    "tokenizer.apply_chat_template(chat, tokenize=False)\n",
    "\n",
    "# Expected output:\n",
    "# \"<s>[INST] Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]\"\n",
    "\n",
    "# Note that the tokenizer has added the control tokens [INST] and [/INST]\n",
    "# to indicate the start and end of user messages\n",
    "\n",
    "\n",
    "# The same formatting written by hand as a Jinja template. It could be assigned\n",
    "# to the tokenizer with `tokenizer.chat_template = llama_template`.\n",
    "# (Whitespace between the tags is kept here for readability; a production\n",
    "# template would strip it so it does not leak into the prompt.)\n",
    "\n",
    "llama_template = \"\"\"\n",
    "{% for message in messages %}\n",
    "    {% if message['role'] == 'user' %}\n",
    "        {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}\n",
    "    {% elif message['role'] == 'system' %}\n",
    "        {{ '<<SYS>>\\\\n' + message['content'].strip() + '\\\\n<</SYS>>\\\\n\\\\n' }}\n",
    "    {% elif message['role'] == 'assistant' %}\n",
    "        {{ '[ASST] '  + message['content'] + ' [/ASST]' + eos_token }}\n",
    "    {% endif %}\n",
    "{% endfor %}\n",
    "\"\"\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "📌 How do I get started with templates?\n",
    "\n",
    "Easy! If a tokenizer has the `chat_template` attribute set, it's ready to go. You can use that model and tokenizer in `ConversationalPipeline`, or you can call `tokenizer.apply_chat_template()` to format chats for inference or training. \n",
    "\n",
    "------------------\n",
    "\n",
    "📌  How do I create a new chat template?\n",
    "\n",
    "You can add a `chat_template` even for checkpoints that you're not the owner of, by opening a pull request. The only change you need to make is to set the `tokenizer.chat_template` attribute to a Jinja template string. Once that's done, push your changes and you're ready to go! (A sketch of this workflow is in the cells just after this one.)\n",
    "\n",
    "--------------\n",
    "\n",
    "📌 How do chat templates work?\n",
    "\n",
    "The chat template for a model is stored on the `tokenizer.chat_template` attribute. If no chat template is set, the default template for that model class is used instead.\n",
    "\n",
    "Jinja templates give you a lot of flexibility - let's see a Jinja template that can format inputs similarly to the way LLaMA formats them:\n",
    "\n",
    "```\n",
    "{% for message in messages %}\n",
    "    {% if message['role'] == 'user' %}\n",
    "        {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}\n",
    "    {% elif message['role'] == 'system' %}\n",
    "        {{ '<<SYS>>\\\\n' + message['content'].strip() + '\\\\n<</SYS>>\\\\n\\\\n' }}\n",
    "    {% elif message['role'] == 'assistant' %}\n",
    "        {{ '[ASST] '  + message['content'] + ' [/ASST]' + eos_token }}\n",
    "    {% endif %}\n",
    "{% endfor %}\n",
    "\n",
    "```"
   ]
  },
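  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a minimal sketch of that template-creation workflow: it writes a simple ChatML-style template, assigns it to `tokenizer.chat_template`, checks the output, and saves the tokenizer. The checkpoint name, save directory, and repo id are placeholders - substitute your own."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch: define a custom chat template and attach it to a tokenizer.\n",
    "# The checkpoint below and the save/push targets are placeholders.\n",
    "\n",
    "from transformers import AutoTokenizer\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"facebook/blenderbot-400M-distill\")\n",
    "\n",
    "# A simple ChatML-style template; `add_generation_prompt` appends the header\n",
    "# that starts the assistant's next turn when formatting a prompt for generation.\n",
    "chatml_template = (\n",
    "    \"{% for message in messages %}\"\n",
    "    \"{{ '<|im_start|>' + message['role'] + '\\\\n' + message['content'] + '<|im_end|>\\\\n' }}\"\n",
    "    \"{% endfor %}\"\n",
    "    \"{% if add_generation_prompt %}{{ '<|im_start|>assistant\\\\n' }}{% endif %}\"\n",
    ")\n",
    "\n",
    "tokenizer.chat_template = chatml_template\n",
    "\n",
    "# Sanity-check the formatting before sharing it\n",
    "print(tokenizer.apply_chat_template(\n",
    "    [{\"role\": \"user\", \"content\": \"Hi there!\"}],\n",
    "    tokenize=False, add_generation_prompt=True))\n",
    "\n",
    "# The template is saved into tokenizer_config.json next to the tokenizer files\n",
    "tokenizer.save_pretrained(\"model-with-chatml-template\")\n",
    "# tokenizer.push_to_hub(\"my-username/model-with-chatml-template\")  # or open a PR on the original repo"
   ]
  },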
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## From HF Doc\n",
    "\n",
    "```py\n",
    "from transformers import AutoTokenizer\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\n",
    "\n",
    "chat = [\n",
    "  {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n",
    "  {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n",
    "  {\"role\": \"user\", \"content\": \"I'd like to show off how chat templating works!\"},\n",
    "]\n",
    "\n",
    "tokenizer.apply_chat_template(chat, tokenize=False)\n",
    "\n",
    "# \"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]\"\n",
    "\n",
    "```\n",
    "\n",
    "\n",
    "Notice how the entire chat is condensed into a single string. If we use `tokenize=True`, which is the default setting, that string will also be tokenized for us. \n",
    "\n",
    "Note that for Mistral-7B-Instruct, the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of user messages (but not assistant messages!). Mistral-Instruct was trained with these tokens; other chat models, such as BlenderBot, were not and expect a different format."
   ]
  },
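  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of the tokenized path: `tokenize=True` - the default - returns token ids instead of a string, and `add_generation_prompt=True` asks the template to append the tokens that begin a new assistant turn. For templates that don't define a generation prompt (Mistral's, for example), the flag has no effect."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: same chat as above, but returning token ids / tensors instead of a string.\n",
    "\n",
    "from transformers import AutoTokenizer\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\n",
    "\n",
    "chat = [\n",
    "    {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n",
    "    {\"role\": \"user\", \"content\": \"I'd like to show off how chat templating works!\"},\n",
    "]\n",
    "\n",
    "# tokenize=True (the default) returns a list of token ids instead of a string\n",
    "token_ids = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True)\n",
    "print(token_ids[:10])\n",
    "\n",
    "# return_tensors=\"pt\" gives a tensor ready to pass to model.generate()\n",
    "input_tensor = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors=\"pt\")\n",
    "print(input_tensor.shape)"
   ]
  },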
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## More on why we need `apply_chat_template`\n",
    "\n",
    "Basically, it auto-applies the right formatting around the messages for models.\n",
    "\n",
    "Using this could greatly improve the user experience of open-source LLMs, since users would no longer have to manually add the correct tokens and formatting to prompts sent to the model (for example, through Haystack's PromptNode).\n",
    "\n",
    "Here is a full example of how to use this new function with the Mistral Instruct model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "device = \"cuda\"  # the device to load the model onto\n",
    "\n",
    "model = AutoModelForCausalLM.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.1\")\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": \"What is your favourite condiment?\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!\"},\n",
    "    {\"role\": \"user\", \"content\": \"Do you have mayonnaise recipes?\"}\n",
    "]\n",
    "\n",
    "# Format the conversation with the model's chat template and tokenize it in one step\n",
    "encodeds = tokenizer.apply_chat_template(messages, return_tensors=\"pt\")\n",
    "\n",
    "model_inputs = encodeds.to(device)\n",
    "model.to(device)\n",
    "\n",
    "# Generate the assistant's next turn and decode it back to text\n",
    "generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)\n",
    "decoded = tokenizer.batch_decode(generated_ids)\n",
    "print(decoded[0])"
   ]
  },
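  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small follow-up sketch: `generated_ids` returned by `model.generate` still contains the prompt tokens, so slicing off the first `model_inputs.shape[1]` tokens leaves only the newly generated reply."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Decode only the newly generated tokens, dropping the prompt that\n",
    "# model.generate() echoes back at the start of generated_ids.\n",
    "prompt_length = model_inputs.shape[1]\n",
    "response_ids = generated_ids[:, prompt_length:]\n",
    "print(tokenizer.batch_decode(response_ids, skip_special_tokens=True)[0])"
   ]
  },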
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "------------------\n",
    "\n",
    "## Below is a common and useful util method for formatting the dialogue history into a prompt for a chat LLM. 🐍\n",
    "\n",
    "This helps the model understand the context of the conversation and generate appropriate responses.\n",
    "\n",
    "--------\n",
    "\n",
    "The function takes a history of dialogues as input, which is a list of lists where each sublist represents a pair of user and assistant messages, like below:\n",
    "\n",
    "```\n",
    "[\n",
    "    [\"User's first instruction\", \"Assistant's first response\"],\n",
    "    [\"User's second instruction\", \"Assistant's second response\"],\n",
    "    [\"User's third instruction\", None]\n",
    "]\n",
    "\n",
    "```\n",
    "\n",
    "It converts that history into a messages list of the following format before applying the chat template:\n",
    "\n",
    "```\n",
    "messages = [\n",
    "    {\"role\": \"user\", \"content\": \"User's first message\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"Assistant's first response\"},\n",
    "    {\"role\": \"user\", \"content\": \"User's second message\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"Assistant's second response\"},\n",
    "    {\"role\": \"user\", \"content\": \"User's third message\"}\n",
    "]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def format_chat_history(history) -> str:\n",
    "    # `history` is a list of [user_message, assistant_message] pairs; the last\n",
    "    # assistant slot may be None while a response is still pending.\n",
    "    messages = [{\"role\": (\"user\" if j == 0 else \"assistant\"), \"content\": dialog[j]}\n",
    "                for dialog in history for j in (0, 1) if dialog[j]]\n",
    "    # The conditional `if dialog[j]` ensures that messages\n",
    "    # that are None (like the latest assistant response in an ongoing\n",
    "    # conversation) are not included.\n",
    "    # `pipeline` is a transformers pipeline defined elsewhere; its tokenizer\n",
    "    # carries the chat template for the model being prompted.\n",
    "    return pipeline.tokenizer.apply_chat_template(\n",
    "        messages, tokenize=False,\n",
    "        add_generation_prompt=True)"
   ]
  },
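  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A usage sketch for the helper above: the model id is a placeholder (any chat model whose tokenizer carries a chat template works), and a global `pipeline` object is created first because the helper reads the tokenizer from it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Usage sketch for format_chat_history. The model id is a placeholder.\n",
    "import transformers\n",
    "\n",
    "pipeline = transformers.pipeline(\"text-generation\", model=\"mistralai/Mistral-7B-Instruct-v0.1\")\n",
    "\n",
    "history = [\n",
    "    [\"What is your favourite condiment?\", \"I'm quite partial to fresh lemon juice.\"],\n",
    "    [\"Do you have mayonnaise recipes?\", None],  # assistant reply still pending\n",
    "]\n",
    "\n",
    "prompt = format_chat_history(history)\n",
    "print(prompt)\n",
    "\n",
    "# The formatted prompt can then be passed to the pipeline for generation, e.g.:\n",
    "# outputs = pipeline(prompt, max_new_tokens=256, do_sample=True)"
   ]
  }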
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
