LLM-FineTuning-Large-Language-Models

Mistral_7B_Instruct_GPTQ_finetune.ipynb 
899 lines · 32.3 KB
1
{
2
 "cells": [
3
  {
4
   "cell_type": "markdown",
5
   "id": "ba89be6e",
6
   "metadata": {},
7
   "source": [
8
    "## Mistral-7B-Instruct_GPTQ - Finetune on finance-alpaca dataset\n",
9
    "\n",
10
    "### Checkout my [Twitter(@rohanpaul_ai)](https://twitter.com/rohanpaul_ai) for daily LLM bits"
11
   ]
12
  },
13
  {
14
   "cell_type": "markdown",
15
   "id": "60c514b4",
16
   "metadata": {},
17
   "source": [
18
    "<a href=\"https://colab.research.google.com/github/rohan-paul/LLM-FineTuning-Large-Language-Models/blob/main/Mistral_7B_Instruct_GPTQ_finetune.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
19
   ]
20
  },
21
  {
22
   "cell_type": "code",
23
   "execution_count": 1,
24
   "id": "43537678",
25
   "metadata": {},
26
   "outputs": [],
27
   "source": [
28
    "# !pip install --upgrade trl peft accelerate bitsandbytes datasets auto-gptq optimum -q"
29
   ]
30
  },
31
  {
32
   "cell_type": "code",
33
   "execution_count": null,
34
   "id": "7dd0ba1d",
35
   "metadata": {},
36
   "outputs": [],
37
   "source": [
38
    "from accelerate import FullyShardedDataParallelPlugin, Accelerator\n",
39
    "from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig\n",
40
    "import torch\n",
41
    "from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, BitsAndBytesConfig\n",
42
    "from datasets import load_dataset\n",
43
    "import pandas as pd\n",
44
    "import logging\n",
45
    "import os\n",
46
    "from pathlib import Path\n",
47
    "from typing import Optional, Tuple\n",
48
    "from peft import LoraConfig, PeftConfig, PeftModel\n",
49
    "from transformers import GPTQConfig\n",
50
    "from peft import prepare_model_for_kbit_training, LoraConfig, get_peft_model"
51
   ]
52
  },
53
  {
54
   "cell_type": "code",
55
   "execution_count": 3,
56
   "id": "0fc11aa4-42bf-4082-ab1c-067c400e5ca0",
57
   "metadata": {
58
    "id": "TEzYBadkyRgd"
59
   },
60
   "outputs": [],
61
   "source": [
62
    "fsdp_plugin = FullyShardedDataParallelPlugin(\n",
63
    "    state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=False),\n",
64
    "    optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=False),\n",
65
    ")\n",
66
    "\n",
67
    "accelerator = Accelerator(fsdp_plugin=fsdp_plugin)\n",
68
    "\n",
69
    "dataset = load_dataset('gbharti/finance-alpaca')\n",
70
    "# Split the dataset into train and test sets\n",
71
    "train_test_split = dataset['train'].train_test_split(test_size=0.1)\n",
72
    "train_dataset = train_test_split['train']\n",
73
    "test_dataset = train_test_split['test']\n",
74
    "\n",
75
    "# Further split the train dataset into train and validation sets\n",
76
    "train_val_split = train_dataset.train_test_split(test_size=0.1)\n",
77
    "train_dataset = train_val_split['train']\n",
78
    "eval_dataset = train_val_split['test']\n",
79
    "\n",
80
    "\n",
81
    "\n",
82
    "##############\n",
83
    "\n",
84
    "pretrained_model_name_or_path = \"TheBloke/Mistral-7B-Instruct-v0.1-GPTQ\""
85
   ]
86
  },
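  {
   "cell_type": "markdown",
   "id": "f1a2b3c0",
   "metadata": {},
   "source": [
    "A quick, optional sanity check on the splits (a sketch; it assumes the finance-alpaca records expose the `instruction`, `input` and `output` fields used by the prompt-building code below):\n",
    "\n",
    "```python\n",
    "# Print split sizes and peek at one record (field names assumed from the prompt-building code below)\n",
    "print(len(train_dataset), len(eval_dataset), len(test_dataset))\n",
    "sample = train_dataset[0]\n",
    "print(sample[\"instruction\"][:100])\n",
    "print(sample[\"output\"][:100])\n",
    "```"
   ]
  },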
87
  {
88
   "cell_type": "markdown",
89
   "id": "75b9604f",
90
   "metadata": {},
91
   "source": [
92
    "![](assets/2023-12-30-23-50-29.png)"
93
   ]
94
  },
95
  {
96
   "cell_type": "code",
97
   "execution_count": 4,
98
   "id": "64e259ae",
99
   "metadata": {},
100
   "outputs": [],
101
   "source": [
102
    "def tokenize(prompt):\n",
103
    "    result = tokenizer(\n",
104
    "        prompt,\n",
105
    "        truncation=True,\n",
106
    "        max_length=512,\n",
107
    "        padding=\"max_length\",\n",
108
    "    )\n",
109
    "    result[\"labels\"] = result[\"input_ids\"].copy()\n",
110
    "    return result\n",
111
    "\n",
112
    "def format_input_data_to_build_model_prompt(data_point):\n",
113
    "        instruction = str(data_point['instruction'])\n",
114
    "        input_query = str(data_point['input'])\n",
115
    "        response = str(data_point['output'])\n",
116
    "\n",
117
    "        if len(input_query.strip()) == 0:\n",
118
    "            full_prompt_for_model = f\"\"\"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\\n\\n### Instruction:\\n{instruction} \\n\\n### Input:\\n{input_query}\\n\\n### Response:\\n{response}\"\"\"\n",
119
    "\n",
120
    "        else:\n",
121
    "            full_prompt_for_model = f\"\"\"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n\\n### Instruction:\\n{instruction}\\n\\n### Response:\\n{response}\"\"\"\n",
122
    "        return tokenize(full_prompt_for_model)"
123
   ]
124
  },
125
  {
126
   "cell_type": "markdown",
127
   "id": "34fd2826",
128
   "metadata": {},
129
   "source": [
130
    "## Need for input data formatting i.e. `format_input_data_to_build_model_prompt` method\n",
131
    "\n",
132
    "📌 The `format_input_data_to_build_model_prompt` method processes the input DataFrame, which contains columns like 'instruction', 'input', and 'output', representing different components of a training sample. The method consolidates these components into a single 'text' column, formatted in a structured way that aligns with the training requirements of LLMs.\n",
133
    "\n",
134
    "📌 Specifically, the method constructs each entry in the 'text' column as a concatenation of the instruction, the context (if provided), and the expected response. This formatting is key for fine-tuning models like LLMs that are based on transformer architectures. It ensures the correct associations between the prompts (instructions and input queries) and the expected responses.\n",
135
    "\n",
136
    "==============\n",
137
    "\n",
138
    "##  Prompt format for mistralai/Mixtral-8x7B-v0.1 🔥\n",
139
    "\n",
140
    "https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/discussions/22\n",
141
    "\n",
142
    "\n",
143
    "\"Mixtral-8x7B-v0.1\" is a base model, therefore it doesn't need to be prompted in a specific way in order to get started with the model. If you want to use the instruct version of the model, you need to follow the template that is on the model card: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format\n",
144
    "\n",
145
    "The template used to build a prompt for the Instruct model is defined as follows:\n",
146
    "\n",
147
    "```\n",
148
    "<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]\n",
149
    "```"
150
   ]
151
  },
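  {
   "cell_type": "markdown",
   "id": "f1a2b3c1",
   "metadata": {},
   "source": [
    "For illustration, a hypothetical finance-alpaca record with an empty `input` field would be rendered by `format_input_data_to_build_model_prompt` (before tokenization) roughly as:\n",
    "\n",
    "```\n",
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n",
    "\n",
    "### Instruction:\n",
    "What is compound interest?\n",
    "\n",
    "### Response:\n",
    "Compound interest is interest earned on both the initial principal and the interest accumulated in earlier periods.\n",
    "```"
   ]
  },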
152
  {
153
   "cell_type": "code",
154
   "execution_count": 5,
155
   "id": "de76e992",
156
   "metadata": {},
157
   "outputs": [],
158
   "source": [
159
    "def build_qlora_model(\n",
160
    "    pretrained_model_name_or_path: str = \"TheBloke/Mistral-7B-Instruct-v0.1-GPTQ\",\n",
161
    "    gradient_checkpointing: bool = True,\n",
162
    "    cache_dir: Optional[Path] = None,\n",
163
    ") -> Tuple[AutoModelForCausalLM, AutoTokenizer, PeftConfig]:\n",
164
    "    \"\"\"\n",
165
    "    Args:\n",
166
    "        pretrained_model_name_or_path (str): The name or path of the pretrained model to use.\n",
167
    "        gradient_checkpointing (bool): Whether to use gradient checkpointing or not.\n",
168
    "        cache_dir (Optional[Path]): The directory to cache the model in.\n",
169
    "\n",
170
    "    Returns:\n",
171
    "        Tuple[AutoModelForCausalLM, AutoTokenizer]: A tuple containing the built model and tokenizer.\n",
172
    "    \"\"\"\n",
173
    "\n",
174
    "    # If I am using any GPTQ model, then need to comment-out bnb_config\n",
175
    "    # as I can not quantize an already quantized model\n",
176
    "\n",
177
    "    # bnb_config = BitsAndBytesConfig(\n",
178
    "    #     load_in_4bit=True,\n",
179
    "    #     bnb_4bit_use_double_quant=True,\n",
180
    "    #     bnb_4bit_compute_dtype=torch.bfloat16\n",
181
    "    # )\n",
182
    "\n",
183
    "    # In below as well, when using any GPTQ model\n",
184
    "    # comment-out the quantization_config param\n",
185
    "\n",
186
    "    tokenizer = AutoTokenizer.from_pretrained(\n",
187
    "        pretrained_model_name_or_path,\n",
188
    "        padding_side=\"left\",\n",
189
    "        add_eos_token=True,\n",
190
    "        add_bos_token=True,\n",
191
    "    )\n",
192
    "    tokenizer.pad_token = tokenizer.eos_token\n",
193
    "\n",
194
    "    quantization_config_loading = GPTQConfig(bits=4, use_exllama=False, tokenizer=tokenizer)\n",
195
    "\n",
196
    "    model = AutoModelForCausalLM.from_pretrained(\n",
197
    "        pretrained_model_name_or_path,\n",
198
    "        # quantization_config=bnb_config,\n",
199
    "        quantization_config=quantization_config_loading,\n",
200
    "        device_map=\"auto\",\n",
201
    "        cache_dir=str(cache_dir) if cache_dir else None,\n",
202
    "    )\n",
203
    "\n",
204
    "    #disable tensor parallelism\n",
205
    "    model.config.pretraining_tp = 1\n",
206
    "\n",
207
    "    if gradient_checkpointing:\n",
208
    "        model.gradient_checkpointing_enable()\n",
209
    "        model.config.use_cache = (\n",
210
    "            False  # Gradient checkpointing is not compatible with caching.\n",
211
    "        )\n",
212
    "    else:\n",
213
    "        model.gradient_checkpointing_disable()\n",
214
    "        model.config.use_cache = True  # It is good practice to enable caching when using the model for inference.\n",
215
    "\n",
216
    "    return model, tokenizer"
217
   ]
218
  },
219
  {
220
   "cell_type": "code",
221
   "execution_count": 6,
222
   "id": "18d573c7",
223
   "metadata": {},
224
   "outputs": [
225
    {
226
     "name": "stderr",
227
     "output_type": "stream",
228
     "text": [
229
      "You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. ['use_cuda_fp16', 'use_exllama', 'max_input_length', 'exllama_config', 'disable_exllama']) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored.\n"
230
     ]
231
    }
232
   ],
233
   "source": [
234
    "model, tokenizer = build_qlora_model(pretrained_model_name_or_path)"
235
   ]
236
  },
237
  {
238
   "cell_type": "code",
239
   "execution_count": 7,
240
   "id": "f4f95dc0",
241
   "metadata": {},
242
   "outputs": [],
243
   "source": [
244
    "\n",
245
    "model = prepare_model_for_kbit_training(model)"
246
   ]
247
  },
248
  {
249
   "cell_type": "code",
250
   "execution_count": 8,
251
   "id": "9c769d0b",
252
   "metadata": {},
253
   "outputs": [],
254
   "source": [
255
    "from peft import LoraConfig, get_peft_model\n",
256
    "\n",
257
    "config = LoraConfig(\n",
258
    "    r=8,\n",
259
    "    lora_alpha=16,\n",
260
    "    target_modules=[\n",
261
    "        \"q_proj\",\n",
262
    "        \"k_proj\",\n",
263
    "        \"v_proj\",\n",
264
    "        \"o_proj\"\n",
265
    "    ],\n",
266
    "    bias=\"none\",\n",
267
    "    lora_dropout=0.05,  # Conventional\n",
268
    "    task_type=\"CAUSAL_LM\",\n",
269
    ")\n",
270
    "\n",
271
    "model = get_peft_model(model, config)\n"
272
   ]
273
  },
274
  {
275
   "cell_type": "code",
276
   "execution_count": 10,
277
   "id": "22d5b36e",
278
   "metadata": {},
279
   "outputs": [
280
    {
281
     "name": "stdout",
282
     "output_type": "stream",
283
     "text": [
284
      "trainable params: 6815744 || all params: 269225984 || trainable%: 2.5316070532033046\n"
285
     ]
286
    }
287
   ],
288
   "source": [
289
    "def print_trainable_parameters(model):\n",
290
    "    \"\"\"\n",
291
    "    Prints the number of trainable parameters in the model.\n",
292
    "    \"\"\"\n",
293
    "    trainable_params = 0\n",
294
    "    all_param = 0\n",
295
    "    for _, param in model.named_parameters():\n",
296
    "        all_param += param.numel()\n",
297
    "        if param.requires_grad:\n",
298
    "            trainable_params += param.numel()\n",
299
    "    print(\n",
300
    "        f\"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}\"\n",
301
    "    )\n",
302
    "\n",
303
    "print_trainable_parameters(model)\n",
304
    "\n",
305
    "# trainable params: 6815744 || all params: 269225984 || trainable%: 2.5316070532033046\n",
306
    "\n",
307
    "# Apply the accelerator. You can comment this out to remove the accelerator.\n",
308
    "model = accelerator.prepare_model(model)"
309
   ]
310
  },
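  {
   "cell_type": "markdown",
   "id": "f1a2b3c2",
   "metadata": {},
   "source": [
    "The trainable-parameter count above can be reproduced by hand. With `r=8` and LoRA on the four attention projections of each of Mistral-7B's 32 decoder layers (`q_proj`/`o_proj`: 4096→4096, `k_proj`/`v_proj`: 4096→1024 because of grouped-query attention):\n",
    "\n",
    "- `q_proj`, `o_proj`: 4096·8 + 8·4096 = 65,536 parameters each\n",
    "- `k_proj`, `v_proj`: 4096·8 + 8·1024 = 40,960 parameters each\n",
    "- per layer: 2·65,536 + 2·40,960 = 212,992\n",
    "- 32 layers: 32 · 212,992 = 6,815,744 trainable parameters\n",
    "\n",
    "The `all params` figure looks small for a 7B model because the packed 4-bit GPTQ weights are stored as integer buffers rather than regular parameters, so `named_parameters()` here effectively counts only the embeddings, `lm_head`, the norms and the LoRA matrices."
   ]
  },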
311
  {
312
   "cell_type": "markdown",
313
   "id": "86da2d8f",
314
   "metadata": {},
315
   "source": [
316
    "###########################3"
317
   ]
318
  },
319
  {
320
   "cell_type": "code",
321
   "execution_count": null,
322
   "id": "2667459f-fca0-480c-9934-f3b9a7b6361b",
323
   "metadata": {},
324
   "outputs": [],
325
   "source": [
326
    "tokenized_train_dataset = train_dataset.map(format_input_data_to_build_model_prompt)\n",
327
    "tokenized_val_dataset = eval_dataset.map(format_input_data_to_build_model_prompt)"
328
   ]
329
  },
330
  {
331
   "cell_type": "markdown",
332
   "id": "ffe65c1a-62c2-4a36-99e3-3e972478261a",
333
   "metadata": {
334
    "id": "Vxbl4ACsyRgi"
335
   },
336
   "source": [
337
    "### Let's grab a single data point from our testset (both instruction and output) to see how the base model does on it."
338
   ]
339
  },
340
  {
341
   "cell_type": "code",
342
   "execution_count": 12,
343
   "id": "24f21bb8-e2df-4d76-bec4-bcbe966310c8",
344
   "metadata": {
345
    "id": "k_VRZDh9yRgi",
346
    "scrolled": true
347
   },
348
   "outputs": [
349
    {
350
     "name": "stdout",
351
     "output_type": "stream",
352
     "text": [
353
      "Instruction Sentence: Describe how a person's life might be different if he/she won the lottery.\n",
354
      "Output: The person could easily afford their desired lifestyle, from buying luxury cars and homes to traveling the world and not having to worry about financial concerns. They could pursue their dream career or start a business or charity of their own, leaving them with a much more fulfilling life. They could give back to their communities and make a positive difference. They can use their wealth to make a lasting impact in the lives of family and friends. All in all, winning the lottery can drastically change a person's life for the better.\n",
355
      "\n"
356
     ]
357
    }
358
   ],
359
   "source": [
360
    "print(\"Instruction Sentence: \" + test_dataset[1]['instruction'])\n",
361
    "print(\"Output: \" + test_dataset[1]['output'] + \"\\n\")"
362
   ]
363
  },
364
  {
365
   "cell_type": "code",
366
   "execution_count": 13,
367
   "id": "f5020387-45b8-4a77-a63e-15cf6b1d8d5a",
368
   "metadata": {
369
    "id": "gOxnx-cAyRgi"
370
   },
371
   "outputs": [],
372
   "source": [
373
    "eval_prompt = \"\"\"Given an instruction sentence construct the output.\n",
374
    "\n",
375
    "### Instruction sentence:\n",
376
    "Generate a sentence that describes the main idea behind a stock market crash.\n",
377
    "\n",
378
    "\n",
379
    "### Output\n",
380
    "\n",
381
    "\n",
382
    "\"\"\""
383
   ]
384
  },
385
  {
386
   "cell_type": "markdown",
387
   "id": "889f57b0",
388
   "metadata": {},
389
   "source": [
390
    "Now, to start our fine-tuning, we have to apply some preprocessing to the model to prepare it for training. For that use the `prepare_model_for_kbit_training` method from PEFT."
391
   ]
392
  },
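  {
   "cell_type": "markdown",
   "id": "f1a2b3c3",
   "metadata": {},
   "source": [
    "A minimal sketch of that step (it was already executed on `model` a few cells above; `use_gradient_checkpointing` is an optional keyword of `prepare_model_for_kbit_training` in recent PEFT versions):\n",
    "\n",
    "```python\n",
    "from peft import prepare_model_for_kbit_training\n",
    "\n",
    "# Upcasts the norm / output layers and enables input gradients, which stabilizes\n",
    "# training LoRA adapters on top of a k-bit (here GPTQ) quantized base model.\n",
    "model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)\n",
    "```"
   ]
  },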
393
  {
394
   "cell_type": "code",
395
   "execution_count": 14,
396
   "id": "d9866d9f-0578-4a61-8b13-f100a1a344ab",
397
   "metadata": {},
398
   "outputs": [],
399
   "source": [
400
    "# Apply the accelerator. You can comment this out to remove the accelerator.\n",
401
    "# prepare_model - Prepares a PyTorch model for training in any distributed setup.\n",
402
    "model = accelerator.prepare_model(model)"
403
   ]
404
  },
405
  {
406
   "cell_type": "code",
407
   "execution_count": 15,
408
   "id": "b54d3b8e-88a6-4fbd-9375-509ea9a296af",
409
   "metadata": {
410
    "id": "NidIuFXMyRgi"
411
   },
412
   "outputs": [],
413
   "source": [
414
    "# Re-init the tokenizer so it doesn't add padding or eos token\n",
415
    "eval_tokenizer = AutoTokenizer.from_pretrained(\n",
416
    "    pretrained_model_name_or_path,\n",
417
    "    add_bos_token=True,\n",
418
    ")"
419
   ]
420
  },
421
  {
422
   "cell_type": "code",
423
   "execution_count": 16,
424
   "id": "93a253a4-a3a8-43b3-abb2-d602d8fa2ab0",
425
   "metadata": {},
426
   "outputs": [],
427
   "source": [
428
    "device = \"cuda\"\n",
429
    "model_input = eval_tokenizer(eval_prompt, return_tensors=\"pt\").to(device)"
430
   ]
431
  },
432
  {
433
   "cell_type": "code",
434
   "execution_count": null,
435
   "id": "fb6f9452-0016-48f7-b355-588907eaff14",
436
   "metadata": {},
437
   "outputs": [],
438
   "source": [
439
    "model.eval()\n",
440
    "with torch.no_grad():\n",
441
    "    print(eval_tokenizer.decode(model.generate(**model_input, max_new_tokens=128)[0], skip_special_tokens=True))"
442
   ]
443
  },
444
  {
445
   "cell_type": "markdown",
446
   "id": "f6bc6c58-8338-4d5d-a8c0-05dfd7162423",
447
   "metadata": {
448
    "id": "dCAWeCzZyRgi"
449
   },
450
   "source": [
451
    "It actually did not do very well out of the box."
452
   ]
453
  },
454
  {
455
   "cell_type": "markdown",
456
   "id": "4f088a21-62b6-46e6-9323-2aa583754f4b",
457
   "metadata": {
458
    "id": "cUYEpEK-yRgj"
459
   },
460
   "source": [
461
    "Let's print the model to examine its layers, as we will apply QLoRA to all the linear layers of the model. Those layers are `q_proj`, `k_proj`, `v_proj`, `o_proj`."
462
   ]
463
  },
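  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "One way to discover those module names without reading the whole printout is to scan the modules of a freshly loaded base model (a sketch; `base_model` stands for the un-wrapped model, i.e. before `get_peft_model`, and `QuantLinear` is the auto-gptq layer class visible in the printout below):\n",
    "\n",
    "```python\n",
    "# Collect the distinct leaf-module names that are (quantized) linear layers,\n",
    "# i.e. the candidates for LoRA's target_modules.\n",
    "target_candidates = set()\n",
    "for name, module in base_model.named_modules():\n",
    "    if module.__class__.__name__ in (\"Linear\", \"QuantLinear\"):\n",
    "        target_candidates.add(name.split(\".\")[-1])\n",
    "print(sorted(target_candidates))\n",
    "```"
   ]
  },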
464
  {
465
   "cell_type": "code",
466
   "execution_count": 19,
467
   "id": "5e477004-dbdb-4feb-82a0-681289522fdf",
468
   "metadata": {
469
    "id": "XshGNsbxyRgj",
470
    "scrolled": true
471
   },
472
   "outputs": [
473
    {
474
     "name": "stdout",
475
     "output_type": "stream",
476
     "text": [
477
      "PeftModelForCausalLM(\n",
478
      "  (base_model): LoraModel(\n",
479
      "    (model): MistralForCausalLM(\n",
480
      "      (model): MistralModel(\n",
481
      "        (embed_tokens): Embedding(32000, 4096, padding_idx=0)\n",
482
      "        (layers): ModuleList(\n",
483
      "          (0-31): 32 x MistralDecoderLayer(\n",
484
      "            (self_attn): MistralAttention(\n",
485
      "              (rotary_emb): MistralRotaryEmbedding()\n",
486
      "              (k_proj): QuantLinear(\n",
487
      "                (base_layer): QuantLinear()\n",
488
      "                (lora_dropout): ModuleDict(\n",
489
      "                  (default): Dropout(p=0.05, inplace=False)\n",
490
      "                )\n",
491
      "                (lora_A): ModuleDict(\n",
492
      "                  (default): Linear(in_features=4096, out_features=8, bias=False)\n",
493
      "                )\n",
494
      "                (lora_B): ModuleDict(\n",
495
      "                  (default): Linear(in_features=8, out_features=1024, bias=False)\n",
496
      "                )\n",
497
      "                (lora_embedding_A): ParameterDict()\n",
498
      "                (lora_embedding_B): ParameterDict()\n",
499
      "                (quant_linear_module): QuantLinear()\n",
500
      "              )\n",
501
      "              (o_proj): QuantLinear(\n",
502
      "                (base_layer): QuantLinear()\n",
503
      "                (lora_dropout): ModuleDict(\n",
504
      "                  (default): Dropout(p=0.05, inplace=False)\n",
505
      "                )\n",
506
      "                (lora_A): ModuleDict(\n",
507
      "                  (default): Linear(in_features=4096, out_features=8, bias=False)\n",
508
      "                )\n",
509
      "                (lora_B): ModuleDict(\n",
510
      "                  (default): Linear(in_features=8, out_features=4096, bias=False)\n",
511
      "                )\n",
512
      "                (lora_embedding_A): ParameterDict()\n",
513
      "                (lora_embedding_B): ParameterDict()\n",
514
      "                (quant_linear_module): QuantLinear()\n",
515
      "              )\n",
516
      "              (q_proj): QuantLinear(\n",
517
      "                (base_layer): QuantLinear()\n",
518
      "                (lora_dropout): ModuleDict(\n",
519
      "                  (default): Dropout(p=0.05, inplace=False)\n",
520
      "                )\n",
521
      "                (lora_A): ModuleDict(\n",
522
      "                  (default): Linear(in_features=4096, out_features=8, bias=False)\n",
523
      "                )\n",
524
      "                (lora_B): ModuleDict(\n",
525
      "                  (default): Linear(in_features=8, out_features=4096, bias=False)\n",
526
      "                )\n",
527
      "                (lora_embedding_A): ParameterDict()\n",
528
      "                (lora_embedding_B): ParameterDict()\n",
529
      "                (quant_linear_module): QuantLinear()\n",
530
      "              )\n",
531
      "              (v_proj): QuantLinear(\n",
532
      "                (base_layer): QuantLinear()\n",
533
      "                (lora_dropout): ModuleDict(\n",
534
      "                  (default): Dropout(p=0.05, inplace=False)\n",
535
      "                )\n",
536
      "                (lora_A): ModuleDict(\n",
537
      "                  (default): Linear(in_features=4096, out_features=8, bias=False)\n",
538
      "                )\n",
539
      "                (lora_B): ModuleDict(\n",
540
      "                  (default): Linear(in_features=8, out_features=1024, bias=False)\n",
541
      "                )\n",
542
      "                (lora_embedding_A): ParameterDict()\n",
543
      "                (lora_embedding_B): ParameterDict()\n",
544
      "                (quant_linear_module): QuantLinear()\n",
545
      "              )\n",
546
      "            )\n",
547
      "            (mlp): MistralMLP(\n",
548
      "              (act_fn): SiLU()\n",
549
      "              (down_proj): QuantLinear()\n",
550
      "              (gate_proj): QuantLinear()\n",
551
      "              (up_proj): QuantLinear()\n",
552
      "            )\n",
553
      "            (input_layernorm): MistralRMSNorm()\n",
554
      "            (post_attention_layernorm): MistralRMSNorm()\n",
555
      "          )\n",
556
      "        )\n",
557
      "        (norm): MistralRMSNorm()\n",
558
      "      )\n",
559
      "      (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\n",
560
      "    )\n",
561
      "  )\n",
562
      ")\n"
563
     ]
564
    }
565
   ],
566
   "source": [
567
    "print(model)"
568
   ]
569
  },
570
  {
571
   "cell_type": "markdown",
572
   "id": "630857be-801a-4586-a562-88ffc7772058",
573
   "metadata": {
574
    "id": "I6mTLuQJyRgj"
575
   },
576
   "source": [
577
    "\n",
578
    "📌 `LoraConfig` allows you to control how LoRA is applied to the base model:\n",
579
    "\n",
580
    "📌 Rank of Decomposition r\n",
581
    "\n",
582
    "r represents the rank of the low rank matrices learned during the finetuning process. As this value is increased, the number of parameters needed to be updated during the low-rank adaptation increases. Intuitively, a lower r may lead to a quicker, less computationally intensive training process, but may affect the quality of the model thus produced. However, increasing r beyond a certain value may not yield any discernible increase in quality of model output.\n",
583
    "\n",
584
    "---------------\n",
585
    "\n",
586
    "`target_modules` are the names of modules LoRA is applied to. Here it is set to query, key and value which are the names of inner layers of self attention layer from Transformer Architecture.\n",
587
    "\n",
588
    "---------------\n",
589
    "\n",
590
    "📌 Alpha Parameter for LoRA Scaling `lora_alpha`\n",
591
    "\n",
592
    "According to the LoRA article Hu et. al., ∆W is scaled by α / r where α is a constant. When optimizing with Adam, tuning α is roughly the same as tuning the learning rate if the initialization was scaled appropriately. The reason is that the number of parameters increases linearly with r. As you increases r, the values of the entries in ∆W also scale linearly with r. We want ∆W to scale consistently with the pretrained weights no matter what r is used. That’s why the authors set α to the first r and do not tune it. The default of α is 8.\n",
593
    "\n",
594
    "---------\n",
595
    "\n",
596
    "📌 `Dropout Rate (lora_dropout)`: This is the probability that each neuron’s output is set to zero during training, used to prevent overfitting.\n",
597
    "\n",
598
    "So Dropout is a general technique in Deep Learning, to reduce overfitting by randomly selecting neurons to ignore with a dropout probability during training. The contribution of those selected neurons to the activation of downstream neurons is temporally removed on the forward pass, and any weight updates are not applied to the neuron on the backward pass. The default of lora_dropout is 0."
599
   ]
600
  },
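  {
   "cell_type": "markdown",
   "id": "f1a2b3c5",
   "metadata": {},
   "source": [
    "To make the scaling concrete for the config used above: with `r=8` and `lora_alpha=16`, the adapter update added to each frozen weight is `ΔW = (lora_alpha / r) · B @ A = 2 · B @ A`, where `A` has shape `r × in_features` (8 × 4096) and `B` has shape `out_features × r` (4096 × 8 for `q_proj`/`o_proj`, 1024 × 8 for `k_proj`/`v_proj`)."
   ]
  },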
601
  {
602
   "cell_type": "markdown",
603
   "id": "9eb4c7fe-c05a-479a-9208-84d86d22c0bf",
604
   "metadata": {
605
    "id": "_0MOtwf3zdZp"
606
   },
607
   "source": [
608
    "### Training!"
609
   ]
610
  },
611
  {
612
   "cell_type": "code",
613
   "execution_count": 22,
614
   "id": "a9d0aae4-60e2-4853-a949-1d2026c66e98",
615
   "metadata": {
616
    "id": "c_L1131GyRgo"
617
   },
618
   "outputs": [],
619
   "source": [
620
    "if torch.cuda.device_count() > 1: # If more than 1 GPU\n",
621
    "    model.is_parallelizable = True\n",
622
    "    model.model_parallel = True"
623
   ]
624
  },
625
  {
626
   "cell_type": "code",
627
   "execution_count": 23,
628
   "id": "edbb95a0-9f0f-465a-9657-506596615afb",
629
   "metadata": {},
630
   "outputs": [
631
    {
632
     "data": {
633
      "text/plain": [
634
       "1"
635
      ]
636
     },
637
     "execution_count": 23,
638
     "metadata": {},
639
     "output_type": "execute_result"
640
    }
641
   ],
642
   "source": [
643
    "torch.cuda.device_count()"
644
   ]
645
  },
646
  {
647
   "cell_type": "code",
648
   "execution_count": null,
649
   "id": "832143f1-35a3-454c-82f9-42f195a03c8f",
650
   "metadata": {
651
    "id": "jq0nX33BmfaC",
652
    "scrolled": true
653
   },
654
   "outputs": [],
655
   "source": [
656
    "import transformers\n",
657
    "from datetime import datetime\n",
658
    "\n",
659
    "project = \"Mixtral-alpaca-finance-finetune\"\n",
660
    "base_model_name = \"mixtral\"\n",
661
    "run_name = base_model_name + \"-\" + project\n",
662
    "output_dir = \"./\" + run_name\n",
663
    "\n",
664
    "tokenizer.pad_token = tokenizer.eos_token\n",
665
    "\n",
666
    "trainer = transformers.Trainer(\n",
667
    "    model=model,\n",
668
    "    train_dataset=tokenized_train_dataset,\n",
669
    "    eval_dataset=tokenized_val_dataset,\n",
670
    "    args=transformers.TrainingArguments(\n",
671
    "        output_dir=output_dir,\n",
672
    "        warmup_steps=5,\n",
673
    "        per_device_train_batch_size=1,\n",
674
    "        gradient_checkpointing=True,\n",
675
    "        gradient_accumulation_steps=4,\n",
676
    "        max_steps=1000,\n",
677
    "        learning_rate=2.5e-5,\n",
678
    "        logging_steps=25,\n",
679
    "        fp16=True,\n",
680
    "        optim=\"paged_adamw_8bit\",\n",
681
    "        logging_dir=\"./logs\",        # Directory for storing logs\n",
682
    "        save_strategy=\"steps\",       # Save the model checkpoint every logging step\n",
683
    "        save_steps=50,                # Save checkpoints every 50 steps\n",
684
    "        evaluation_strategy=\"steps\", # Evaluate the model every logging step\n",
685
    "        eval_steps=50,               # Evaluate and save checkpoints every 50 steps\n",
686
    "        do_eval=True,                # Perform evaluation at the end of training\n",
687
    "        # report_to=\"wandb\",           # Comment this out if you don't want to use weights & baises\n",
688
    "        run_name=f\"{run_name}-{datetime.now().strftime('%Y-%m-%d-%H-%M')}\"          # Name of the W&B run (optional)\n",
689
    "    ),\n",
690
    "    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\n",
691
    ")\n",
692
    "\n",
693
    "model.config.use_cache = False  # silence the warnings. Re-enable for inference!\n",
694
    "trainer.train()"
695
   ]
696
  },
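  {
   "cell_type": "markdown",
   "id": "f1a2b3c6",
   "metadata": {},
   "source": [
    "A quick note on the training budget these arguments imply: with `per_device_train_batch_size=1`, `gradient_accumulation_steps=4` and a single GPU, each optimizer step sees 1 × 4 = 4 examples, so `max_steps=1000` covers roughly 4,000 training examples. With `save_steps=50`, the run also writes about 1000 / 50 = 20 checkpoints into `output_dir`, and evaluation runs at the same 50-step cadence."
   ]
  },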
697
  {
698
   "cell_type": "markdown",
699
   "id": "24beb8e2-1ea7-4c30-a8cc-37ff6b6e62b0",
700
   "metadata": {
701
    "id": "0D57XqcsyRgo"
702
   },
703
   "source": [
704
    "### Evaluate the Trained Model!\n",
705
    "\n",
706
    "However, before going to the evaluation code, it's a good idea to kill the current process so that to avoid possible out of memory loading the base model again on top of the model we just trained. \n",
707
    "\n",
708
    "Hence, to kill the current process => Go to `Kernel > Restart Kernel` or kill the process via the Terminal (`nvidia smi` > `kill [PID]`). \n",
709
    "\n",
710
    "### By default, the PEFT library will only save the QLoRA adapters, so we need to first load the base Mixtral model from the Huggingface Hub:\n"
711
   ]
712
  },
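  {
   "cell_type": "markdown",
   "id": "f1a2b3c7",
   "metadata": {},
   "source": [
    "If you prefer to stay in the same kernel, a rough alternative to restarting is to drop the training objects and release the cached CUDA memory before loading the base model again (a sketch; memory fragmentation can still force a restart):\n",
    "\n",
    "```python\n",
    "import gc\n",
    "import torch\n",
    "\n",
    "# Drop references to the trained model and trainer, then free cached GPU memory.\n",
    "del model, trainer\n",
    "gc.collect()\n",
    "torch.cuda.empty_cache()\n",
    "```"
   ]
  },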
713
  {
714
   "cell_type": "code",
715
   "execution_count": null,
716
   "id": "538a7d9f-71f1-4b1e-bf96-e232ad302180",
717
   "metadata": {
718
    "id": "SKSnF016yRgp"
719
   },
720
   "outputs": [],
721
   "source": [
722
    "import torch\n",
723
    "from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\n",
724
    "\n",
725
    "pretrained_model_name_or_path = \"TheBloke/Mistral-7B-Instruct-v0.1-GPTQ\"\n",
726
    "\n",
727
    "# bnb_config = BitsAndBytesConfig(\n",
728
    "#     load_in_4bit=True,\n",
729
    "#     bnb_4bit_use_double_quant=True,\n",
730
    "#     bnb_4bit_compute_dtype=torch.bfloat16\n",
731
    "# )\n",
732
    "\n",
733
    "base_model = AutoModelForCausalLM.from_pretrained(\n",
734
    "    pretrained_model_name_or_path,  # Mixtral, same as before\n",
735
    "    # quantization_config=bnb_config,  # Same quantization config as before, but commented out as its a GPTQ model (which is already quantized )\n",
736
    "    quantization_config=quantization_config_loading,\n",
737
    "    device_map=\"auto\",\n",
738
    "    trust_remote_code=True,\n",
739
    ")\n",
740
    "\n",
741
    "eval_tokenizer = AutoTokenizer.from_pretrained(\n",
742
    "    pretrained_model_name_or_path,\n",
743
    "    add_bos_token=True,\n",
744
    "    trust_remote_code=True,\n",
745
    ")"
746
   ]
747
  },
748
  {
749
   "cell_type": "markdown",
750
   "id": "f4f4580e-1d9d-4c8d-b8d1-05cc7dee3bcf",
751
   "metadata": {
752
    "id": "_BxOhAiqyRgp"
753
   },
754
   "source": [
755
    "### Noting again, by default, the PEFT library will only save the QLoRA adapters, so we need to first load the base Mixtral model from the Huggingface Hub:\n",
756
    "\n",
757
    "Now load the QLoRA adapter from the appropriate checkpoint directory, i.e. the best performing model checkpoint:"
758
   ]
759
  },
760
  {
761
   "cell_type": "code",
762
   "execution_count": null,
763
   "id": "d6bdefc4-8b5b-4c16-82ff-ae54b70a50b4",
764
   "metadata": {
765
    "id": "GwsiqhWuyRgp"
766
   },
767
   "outputs": [],
768
   "source": [
769
    "from peft import PeftModel\n",
770
    "\n",
771
    "ft_model = PeftModel.from_pretrained(base_model, \"mistral-finetune-alpaca-GPTQ/checkpoint-500\")\n",
772
    "\n",
773
    "# Here, \"mistral-finetune-alpaca-GPTQ/checkpoint-500\" is the adapter name"
774
   ]
775
  },
776
  {
777
   "cell_type": "markdown",
778
   "id": "332c2771-3e84-405a-a780-1392bc6b737f",
779
   "metadata": {
780
    "id": "lX39ibolyRgp"
781
   },
782
   "source": [
783
    "and run your inference!"
784
   ]
785
  },
786
  {
787
   "cell_type": "markdown",
788
   "id": "3f99ff63-728e-4cd7-a5a2-4a3580b00f84",
789
   "metadata": {
790
    "id": "UUehsaVNyRgp"
791
   },
792
   "source": [
793
    "Let's try the same `eval_prompt` and thus `model_input` as above, and see if the new finetuned model performs better."
794
   ]
795
  },
796
  {
797
   "cell_type": "code",
798
   "execution_count": null,
799
   "id": "240eaf08-96f9-434c-8d3a-a77939eaeab8",
800
   "metadata": {
801
    "id": "lMkVNEUvyRgp"
802
   },
803
   "outputs": [],
804
   "source": [
805
    "eval_prompt = \"\"\"\"Given an instruction sentence construct the output.\n",
806
    "\n",
807
    "### Instruction sentence:\n",
808
    "Generate a sentence that describes the main idea behind a stock market crash.\n",
809
    "\n",
810
    "\n",
811
    "### Output\n",
812
    "\n",
813
    "\n",
814
    "\"\"\"\n",
815
    "\n",
816
    "model_input = eval_tokenizer(eval_prompt, return_tensors=\"pt\").to(\"cuda\")\n",
817
    "\n",
818
    "ft_model.eval()\n",
819
    "with torch.no_grad():\n",
820
    "    print(eval_tokenizer.decode(ft_model.generate(**model_input, max_new_tokens=50)[0], skip_special_tokens=True))"
821
   ]
822
  },
823
  {
824
   "cell_type": "markdown",
825
   "id": "1184ef2f",
826
   "metadata": {},
827
   "source": [
828
    "## `PeftModel.from_pretrained` - Explanations\n",
829
    "\n",
830
    "https://huggingface.co/docs/peft/package_reference/peft_model#peft.PeftModel.from_pretrained\n",
831
    "\n",
832
    "\n",
833
    "When I do the below line\n",
834
    "\n",
835
    "`ft_model = PeftModel.from_pretrained(base_model, model_id)`\n",
836
    "\n",
837
    "\n",
838
    "--------------------\n",
839
    "\n",
840
    "### Source Code\n",
841
    "\n",
842
    "https://github.com/huggingface/peft/blob/v0.7.1/src/peft/peft_model.py#L282\n",
843
    "\n",
844
    "```\n",
845
    "def from_pretrained(\n",
846
    "        cls,\n",
847
    "        model: torch.nn.Module,\n",
848
    "        model_id: Union[str, os.PathLike],\n",
849
    "        adapter_name: str = \"default\",\n",
850
    "        is_trainable: bool = False,\n",
851
    "        config: Optional[PeftConfig] = None,\n",
852
    "        **kwargs: Any,\n",
853
    "    ) -> \"PeftModel\":\n",
854
    "        r\"\"\"\n",
855
    "        Instantiate a PEFT model from a pretrained model and loaded PEFT weights.\n",
856
    "\n",
857
    "        Note that the passed `model` may be modified inplace.\n",
858
    "\n",
859
    "        Args:\n",
860
    "            model ([`torch.nn.Module`]):\n",
861
    "                The model to be adapted. For 🤗 Transformers models, the model should be initialized with the\n",
862
    "                [`~transformers.PreTrainedModel.from_pretrained`].\n",
863
    "            model_id (`str` or `os.PathLike`):\n",
864
    "                The name of the PEFT configuration to use. Can be either:\n",
865
    "                    - A string, the `model id` of a PEFT configuration hosted inside a model repo on the Hugging Face\n",
866
    "                      Hub.\n",
867
    "                    - A path to a directory containing a PEFT configuration file saved using the `save_pretrained`\n",
868
    "                      method (`./my_peft_config_directory/`).\n",
869
    "            adapter_name (`str`, *optional*, defaults to `\"default\"`):\n",
870
    "                The name of the adapter to be loaded. This is useful for loading multiple adapters.\n",
871
    "\n",
872
    "\n",
873
    "```\n",
874
    "\n"
875
   ]
876
  }
877
 ],
878
 "metadata": {
879
  "kernelspec": {
880
   "display_name": "Python 3 (ipykernel)",
881
   "language": "python",
882
   "name": "python3"
883
  },
884
  "language_info": {
885
   "codemirror_mode": {
886
    "name": "ipython",
887
    "version": 3
888
   },
889
   "file_extension": ".py",
890
   "mimetype": "text/x-python",
891
   "name": "python",
892
   "nbconvert_exporter": "python",
893
   "pygments_lexer": "ipython3",
894
   "version": "3.10.13"
895
  }
896
 },
897
 "nbformat": 4,
898
 "nbformat_minor": 5
899
}
900
