{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Supervised Fine-Tuning for Instruction Following"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Gemma](https://ai.google.dev/gemma/docs/model_card) is a groundbreaking new open model in the Gemini family of models from Google. Gemma is just as powerful as previous models but compact enough to run locally on NVIDIA RTX GPUs. Gemma is available in 2 sizes: 2B and 7B parameters. With NVIDIA NeMo, you can customize Gemma to fit your use case and deploy an optimized model on your NVIDIA GPU.\n",
    "\n",
    "In this tutorial, we'll go over a specific kind of customization -- full-parameter supervised fine-tuning for instruction following (also known as SFT). To learn how to perform Low-Rank Adaptation (LoRA) tuning to follow a specific output format, see the [companion notebook](./lora.ipynb). For SFT, we'll show how you can kick off a multi-GPU training job with an example script so that you can train on 8 GPUs. The exact number of GPUs needed will depend on which model you use and what kind of GPUs you use, but we recommend using 8 A100-80GB GPUs.\n",
    "\n",
    "We'll also learn how to export your custom model to TensorRT-LLM, an open-source library that accelerates and optimizes inference performance of the latest LLMs on the NVIDIA AI platform."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Supervised Fine-Tuning (SFT) is the process of fine-tuning all of a model’s parameters on supervised data of inputs and outputs. It teaches the model how to follow user-specified instructions and is typically done after model pre-training. This notebook describes the steps involved in fine-tuning Gemma for instruction following. Gemma was released with a checkpoint already fine-tuned for instruction following, but here we'll learn how we can tune our own model starting with the pre-trained checkpoint to achieve a similar outcome."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download the base model\n",
    "\n",
    "For all of our customization and deployment processes, we'll need to start off with a pre-trained version of Gemma in the `.nemo` format. You can download the base model in `.nemo` format from the NVIDIA GPU Cloud, or convert checkpoints from another framework into a `.nemo` file. You can choose to use the 2B parameter or 7B parameter Gemma models for this notebook -- the 2B model will be faster to customize, but the 7B model will be more capable.\n",
    "\n",
    "You can download either model from the NVIDIA NGC Catalog, using the NGC CLI. The instructions to install and configure the NGC CLI can be found [here](https://ngc.nvidia.com/setup/installers/cli).\n",
    "\n",
    "To download the model, execute one of the following commands, based on which model you want to use:\n",
    "\n",
    "```bash\n",
    "ngc registry model download-version \"nvidia/nemo/gemma_2b_base:1.1\"\n",
    "```\n",
    "\n",
    "or\n",
    "\n",
    "```bash\n",
    "ngc registry model download-version \"nvidia/nemo/gemma_7b_base:1.1\"\n",
    "```"
   ]
  },
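  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the NGC CLI is already installed and configured (via `ngc config set`) in the environment where this notebook runs, you can optionally kick off the download directly from a cell. The sketch below assumes the 7B base model; swap in the 2B command from above if you prefer the smaller model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: download the Gemma 7B base checkpoint from NGC.\n",
    "# Assumes the NGC CLI is installed and configured in this environment.\n",
    "!ngc registry model download-version \"nvidia/nemo/gemma_7b_base:1.1\""
   ]
  },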
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Getting NeMo Framework\n",
    "\n",
    "NVIDIA NeMo Framework is a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MM), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by being able to leverage existing code and pretrained models.\n",
    "\n",
    "If you haven't already, you can pull a container that includes the version of NeMo Framework and all dependencies needed for this notebook with the following:\n",
    "\n",
    "```bash\n",
    "docker pull nvcr.io/nvidia/nemo:24.01.gemma\n",
    "```\n",
    "\n",
    "The best way to run this notebook is from within the container. You can do that by launching the container with the following command:\n",
    "\n",
    "```bash\n",
    "docker run -it --rm --gpus all --ipc=host --network host -v $(pwd):/workspace nvcr.io/nvidia/nemo:24.01.gemma\n",
    "```\n",
    "\n",
    "Then, from within the container, start the Jupyter server with:\n",
    "\n",
    "```bash\n",
    "jupyter lab --no-browser --port=8080 --allow-root --ip 0.0.0.0\n",
    "```"
   ]
  },
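  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, you can run a quick, optional sanity check to confirm that the GPUs are visible from inside the container (this only assumes the standard `nvidia-smi` utility is available):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: verify that all expected GPUs are visible inside the container.\n",
    "!nvidia-smi"
   ]
  },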
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## SFT Data Formatting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To begin, we'll need to prepare a dataset to tune our model on.\n",
    "\n",
    "This notebook uses the [Dolly dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) as an example to demonstrate how to format your SFT data. This dataset consists of 15,000 instruction-context-response triples.\n",
    "\n",
    "First, download the data by running the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget https://huggingface.co/datasets/databricks/databricks-dolly-15k/resolve/main/databricks-dolly-15k.jsonl"
   ]
  },
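  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To confirm the download worked and to peek at the schema, you can print the fields of the first record (a minimal, optional check):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Read just the first line of the JSONL file and show its fields.\n",
    "with open(\"databricks-dolly-15k.jsonl\") as f:\n",
    "    first_record = json.loads(f.readline())\n",
    "print(list(first_record.keys()))"
   ]
  },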
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "The downloaded data, stored at `databricks-dolly-15k.jsonl`, is a `JSONL` file with each line formatted like this:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "{\n",
    "    \"instruction\": \"When did Virgin Australia start operating?\",\n",
    "    \"context\": \"Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It is the largest airline by fleet size to use the Virgin brand. It commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.[3] It suddenly found itself as a major airline in Australia's domestic market after the collapse of Ansett Australia in September 2001. The airline has since grown to directly serve 32 cities in Australia, from hubs in Brisbane, Melbourne and Sydney.[4]\",\n",
    "    \"response\": \"Virgin Australia commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.\",\n",
    "    \"category\": \"closed_qa\"\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As this example shows, there are no clear “input” and “output” fields, which are required for SFT with NeMo. To remedy this, we can do some data pre-processing. The following cell converts the `instruction`, `context`, and `response` fields into `input` and `output`. It concatenates the `instruction` and `context` fields with a `\\n\\n` separator, randomizing the order in which they appear in the input, and writes the result to a new `JSONL` file called `databricks-dolly-15k-output.jsonl`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import numpy as np\n",
    "\n",
    "path_to_data = \"databricks-dolly-15k.jsonl\"\n",
    "output_path = f\"{path_to_data.split('.')[0]}-output.jsonl\"\n",
    "with open(path_to_data, \"r\") as f, open(output_path, \"w\") as g:\n",
    "    for line in f:\n",
    "\n",
    "        # Read JSONL line in original format\n",
    "        line = json.loads(line)\n",
    "        context = line[\"context\"].strip()\n",
    "\n",
    "        # Randomize context and instruction order.\n",
    "        if context != \"\":\n",
    "            context_first = np.random.randint(0, 2) == 0\n",
    "            if context_first:\n",
    "                instruction = line[\"instruction\"].strip()\n",
    "                assert instruction != \"\"\n",
    "                input = f\"{context}\\n\\n{instruction}\"\n",
    "                output = line[\"response\"]\n",
    "            else:\n",
    "                instruction = line[\"instruction\"].strip()\n",
    "                assert instruction != \"\"\n",
    "                input = f\"{instruction}\\n\\n{context}\"\n",
    "                output = line[\"response\"]\n",
    "        else:\n",
    "            input = line[\"instruction\"]\n",
    "            output = line[\"response\"]\n",
    "\n",
    "        # Write JSONL line in new format\n",
    "        g.write(\n",
    "            json.dumps(\n",
    "                {\"input\": input, \"output\": output, \"category\": line[\"category\"]}\n",
    "            )\n",
    "            + \"\\n\"\n",
    "        )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, the dataset is a `JSONL` file with each line formatted like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "plaintext"
    }
   },
   "outputs": [],
   "source": [
    "{\n",
    "  \"input\": \"Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It is the largest airline by fleet size to use the Virgin brand. It commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route. It suddenly found itself as a major airline in Australia's domestic market after the collapse of Ansett Australia in September 2001. The airline has since grown to directly serve 32 cities in Australia, from hubs in Brisbane, Melbourne and Sydney.\\n\\nWhen did Virgin Australia start operating?\",\n",
    "  \"output\": \"Virgin Australia commenced services on 31 August 2000 as Virgin Blue, with two aircraft on a single route.\",\n",
    "  \"category\": \"closed_qa\"\n",
    "}"
   ]
  },
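  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, you can confirm that the converted file contains exactly one record per original record (a minimal sketch; the filenames match those used above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: the converted file should have the same number of records as the original.\n",
    "with open(\"databricks-dolly-15k.jsonl\") as f:\n",
    "    num_original = sum(1 for _ in f)\n",
    "with open(\"databricks-dolly-15k-output.jsonl\") as f:\n",
    "    num_converted = sum(1 for _ in f)\n",
    "print(num_original, num_converted)\n",
    "assert num_original == num_converted"
   ]
  },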
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## SFT Training\n",
    "\n",
    "To perform the SFT training, we'll use NVIDIA NeMo-Aligner. NeMo-Aligner is a scalable toolkit for efficient model alignment, built on the [NeMo Toolkit](https://github.com/NVIDIA/NeMo), which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism for all components of alignment. Users can do end-to-end model alignment on a wide range of model sizes and take advantage of all the parallelism techniques to ensure their model alignment is done in a performant and resource-efficient manner.\n",
    "\n",
    "To install NeMo-Aligner, we can clone the repository and install it using `pip`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "git clone https://github.com/NVIDIA/NeMo-Aligner.git -b dev\n",
    "cd NeMo-Aligner\n",
    "pip install -e ."
   ]
  },
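  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can optionally confirm that the installation is importable from Python. This assumes the package is exposed under the name `nemo_aligner`; if the import fails, re-check the `pip install` output above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: verify the NeMo-Aligner installation (assumes the package name is nemo_aligner).\n",
    "import nemo_aligner\n",
    "print(nemo_aligner.__file__)"
   ]
  },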
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to track and visualize your SFT training experiments, you can log in to Weights & Biases (wandb). If you don't want to use wandb, make sure to set the argument `exp_manager.create_wandb_logger=False` when launching your job."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import wandb\n",
    "wandb.login()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To run SFT locally on a single node, you can use the following command. Note the `trainer.num_nodes` and `trainer.devices` arguments, which define how many nodes and how many GPUs per node you want to use for training. Make sure the source model, output model, and dataset paths all match your local setup.\n",
    "\n",
    "If you'd like to perform multi-node fine-tuning -- for example, on a Slurm cluster -- you can find more information in the [NeMo-Aligner user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/rlhf.html#instruction-following-taught-by-supervised-fine-tuning-sft)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "cd NeMo-Aligner\n",
    "\n",
    "python examples/nlp/gpt/train_gpt_sft.py \\\n",
    "   name=gemma_dolly_finetuned \\\n",
    "   trainer.precision=bf16 \\\n",
    "   trainer.num_nodes=1 \\\n",
    "   trainer.devices=8 \\\n",
    "   trainer.sft.max_steps=-1 \\\n",
    "   trainer.sft.limit_val_batches=40 \\\n",
    "   trainer.sft.val_check_interval=1000 \\\n",
    "   model.tensor_model_parallel_size=4 \\\n",
    "   model.pipeline_model_parallel_size=1 \\\n",
    "   model.megatron_amp_O2=True \\\n",
    "   model.restore_from_path=../gemma_7b_pt.nemo \\\n",
    "   model.optim.lr=5e-6 \\\n",
    "   model.answer_only_loss=True \\\n",
    "   ++model.bias_activation_fusion=true \\\n",
    "   model.data.num_workers=0 \\\n",
    "   model.data.train_ds.micro_batch_size=1 \\\n",
    "   model.data.train_ds.global_batch_size=128 \\\n",
    "   model.data.train_ds.file_path=../databricks-dolly-15k-output.jsonl \\\n",
    "   model.data.train_ds.add_bos=True \\\n",
    "   model.data.validation_ds.micro_batch_size=1 \\\n",
    "   model.data.validation_ds.global_batch_size=128 \\\n",
    "   model.data.validation_ds.drop_last=True \\\n",
    "   model.data.validation_ds.file_path=../databricks-dolly-15k-output.jsonl \\\n",
    "   exp_manager.create_wandb_logger=True \\\n",
    "   exp_manager.explicit_log_dir=../results \\\n",
    "   exp_manager.wandb_logger_kwargs.project=sft_run \\\n",
    "   exp_manager.wandb_logger_kwargs.name=dolly_sft_run \\\n",
    "   exp_manager.checkpoint_callback_params.save_nemo_on_train_end=True \\\n",
    "   exp_manager.resume_if_exists=True \\\n",
    "   exp_manager.resume_ignore_no_checkpoint=True \\\n",
    "   exp_manager.create_checkpoint_callback=True \\\n",
    "   exp_manager.checkpoint_callback_params.monitor=validation_loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When training is finished, you should see a file called `results/checkpoints/gemma_dolly_finetuned.nemo` that contains the weights of your new, instruction-tuned model."
   ]
  },
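  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can optionally confirm that the checkpoint was written before moving on (the path corresponds to the `exp_manager.explicit_log_dir` setting above, relative to this notebook's working directory):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: list the checkpoints saved by the SFT run.\n",
    "!ls -lh results/checkpoints/"
   ]
  },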
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exporting to TensorRT-LLM"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorRT-LLM is an open-source library for optimizing inference performance to achieve state-of-the-art speed on NVIDIA GPUs. The NeMo framework offers an easy way to compile `.nemo` models into optimized TensorRT-LLM engines, which you can run locally embedded in another application, or serve to other applications using a server like Triton Inference Server."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To start, let's create a folder where our exported model will land:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir gemma_dolly_finetuned_trt_llm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To export the model, we just need to create an instance of the `TensorRTLLM` class and call the `TensorRTLLM.export()` function -- pointing the `nemo_checkpoint_path` argument to the newly fine-tuned model we trained above.\n",
    "\n",
    "This creates a few files in the folder we created -- an `engine` file that holds the weights and the compiled execution graph of the model, a `tokenizer.model` file which holds the tokenizer information, and `config.json` which holds some metadata about the model (along with `model.cache`, which caches some operations and makes it faster to re-compile the model in the future)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nemo.export import TensorRTLLM\n",
    "trt_llm_exporter = TensorRTLLM(model_dir=\"gemma_dolly_finetuned_trt_llm\")\n",
    "trt_llm_exporter.export(nemo_checkpoint_path=\"results/checkpoints/gemma_dolly_finetuned.nemo\", model_type=\"gemma\", n_gpus=1)"
   ]
  },
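  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can list the export directory to see the engine, tokenizer, and config files described above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: inspect the artifacts produced by the export step.\n",
    "!ls -lh gemma_dolly_finetuned_trt_llm"
   ]
  },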
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the model exported to TensorRT-LLM, we can perform fast inference:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "trt_llm_exporter.forward([\"NVIDIA and Google are\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There's also a convenient function to deploy the model as a service, backed by Triton Inference Server:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nemo.deploy import DeployPyTriton\n",
    "\n",
    "nm = DeployPyTriton(model=trt_llm_exporter, triton_model_name=\"gemma\")\n",
    "nm.deploy()\n",
    "nm.serve()"
   ]
  },
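  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `nm.serve()` blocks while the server is running. From a separate process or notebook, you can then send requests to the deployed model. The sketch below uses the `NemoQuery` client from `nemo.deploy`; the exact argument names (for example, `max_output_token`) are assumptions that may differ between NeMo Framework versions, so check the version in your container."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run this from a separate process or notebook while the Triton server above is serving.\n",
    "# Assumed client API -- verify argument names against your NeMo Framework version.\n",
    "from nemo.deploy import NemoQuery\n",
    "\n",
    "nq = NemoQuery(url=\"localhost:8000\", model_name=\"gemma\")\n",
    "output = nq.query_llm(prompts=[\"NVIDIA and Google are\"], max_output_token=100)\n",
    "print(output)"
   ]
  }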
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "nemo_lora",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}