LLM-FineTuning-Large-Language-Models

4-bit_LLM_Quantization_with_GPTQ.ipynb 
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# auto-gptq provides the GPTQ kernels; transformers and datasets are used in the cells below\n",
    "!BUILD_CUDA_EXT=0 pip install -q auto-gptq transformers datasets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Code for 4-bit Quantization with GPTQ\n",
    "import random\n",
    "import torch\n",
    "from typing import List, Dict\n",
    "from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig\n",
    "from datasets import load_dataset\n",
    "from transformers import AutoTokenizer\n",
    "\n",
    "def prepare_data(model_id: str, n_samples: int) -> List[Dict[str, torch.Tensor]]:\n",
    "    tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
    "    # Load n_samples*5 C4 documents and tokenize them as one long stream of text\n",
    "    data = load_dataset(\"allenai/c4\",\n",
    "                        data_files=\"en/c4-train.00001-of-01024.json.gz\",\n",
    "                        split=f\"train[:{n_samples*5}]\")\n",
    "    tokenized_data = tokenizer(\"\\n\\n\".join(data['text']), return_tensors='pt')\n",
    "\n",
    "    # Pick n_samples random starting points so that every window of\n",
    "    # model_max_length tokens stays within the tokenized stream.\n",
    "    # NOTE: this assumes tokenizer.model_max_length is a real context length;\n",
    "    # if it is the VERY_LARGE_INTEGER default, use an explicit window size instead.\n",
    "    start_indices = []\n",
    "    for _ in range(n_samples):\n",
    "        max_index = tokenized_data.input_ids.shape[1] - tokenizer.model_max_length - 1\n",
    "        random_index = random.randint(0, max_index)\n",
    "        start_indices.append(random_index)\n",
    "\n",
    "    # Slice out the calibration windows and build matching attention masks\n",
    "    examples_ids = []\n",
    "    for i in start_indices:\n",
    "        j = i + tokenizer.model_max_length\n",
    "        input_ids = tokenized_data.input_ids[:, i:j]\n",
    "        attention_mask = (input_ids != tokenizer.pad_token_id).type(input_ids.dtype)\n",
    "        examples_ids.append({'input_ids': input_ids, 'attention_mask': attention_mask})\n",
    "\n",
    "    return examples_ids\n",
    "\n",
    "def quantize_llm_with_gptq(model_id: str,\n",
    "                           out_dir: str,\n",
    "                           n_samples: int = 512,\n",
    "                           bits: int = 4,\n",
    "                           group_size: int = 128,\n",
    "                           damp_percent: float = 0.01,\n",
    "                           desc_act: bool = False) -> str:\n",
    "    quantize_config = BaseQuantizeConfig(bits=bits,\n",
    "                                         group_size=group_size,\n",
    "                                         damp_percent=damp_percent,\n",
    "                                         desc_act=desc_act)\n",
    "    # Load the full-precision model with the GPTQ settings attached\n",
    "    model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)\n",
    "\n",
    "    # Run GPTQ calibration on the sampled examples, then save weights and tokenizer\n",
    "    examples_ids = prepare_data(model_id, n_samples)\n",
    "    model.quantize(examples_ids, batch_size=1, use_triton=True)\n",
    "    model.save_quantized(out_dir, use_safetensors=True)\n",
    "    AutoTokenizer.from_pretrained(model_id).save_pretrained(out_dir)\n",
    "\n",
    "    return out_dir\n",
    "\n",
    "# Example usage\n",
    "model_id = \"HuggingFaceH4/zephyr-7b-beta\"\n",
    "out_dir_quantized = model_id + \"-GPTQ\"\n",
    "quantized_dir = quantize_llm_with_gptq(model_id, out_dir_quantized)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#######################################################################\n",
    "# Now, load the quantized model\n",
    "from transformers import pipeline\n",
    "\n",
    "device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n",
    "\n",
    "# Reload model and tokenizer\n",
    "model = AutoGPTQForCausalLM.from_quantized(\n",
    "    out_dir_quantized,\n",
    "    device=device,\n",
    "    use_triton=True,\n",
    "    use_safetensors=True,\n",
    ")\n",
    "tokenizer = AutoTokenizer.from_pretrained(out_dir_quantized)\n",
    "\n",
    "generator = pipeline('text-generation', model=model, tokenizer=tokenizer)\n",
    "\n",
    "result = generator(\"My favourite destination\", do_sample=True, max_length=50)[0]['generated_text']\n",
    "\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "📌 GPTQ is a post-training quantization (PTQ) method for 4-bit quantization that focuses primarily on GPU inference and performance.\n",
    "\n",
    "The idea behind the method is to compress all weights to 4-bit precision while minimizing the mean squared error introduced for each weight. During inference, the model dynamically dequantizes its weights to float16 for improved performance while keeping memory usage low.\n",
    "\n",
    "However, for a CPU-friendly approach, GGML is currently your best option.\n",
    "\n",
    "The code above performs 4-bit quantization with GPTQ.\n",
    "\n",
    "------------------"
   ]
  },
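  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for what compressing weights to 4 bits and dequantizing back to float16 means in practice, here is a toy round-to-nearest sketch of per-group 4-bit affine quantization. It only illustrates the storage/precision trade-off; GPTQ itself chooses the rounding using second-order (Hessian-based) information, which this sketch does not implement.\n",
    "\n",
    "```py\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "w = torch.randn(256)                     # a slice of one weight-matrix row\n",
    "group_size, bits = 128, 4\n",
    "qmax = 2**bits - 1                       # 15 integer levels in 4 bits\n",
    "\n",
    "dequantized = []\n",
    "for g in w.split(group_size):\n",
    "    scale = (g.max() - g.min()) / qmax   # one scale per group\n",
    "    zero = (-g.min() / scale).round()    # one zero-point per group\n",
    "    q = (g / scale + zero).round().clamp(0, qmax)    # 4-bit integer codes\n",
    "    dequantized.append(((q - zero) * scale).half())  # back to float16 at inference\n",
    "\n",
    "w_hat = torch.cat(dequantized)\n",
    "print(\"mean squared error:\", torch.mean((w - w_hat.float()) ** 2).item())\n",
    "```"
   ]
  },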
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Explanation of the following block from the code above\n",
    "\n",
    "```py\n",
    "start_indices = []\n",
    "for _ in range(n_samples):\n",
    "    max_index = tokenized_data.input_ids.shape[1] - tokenizer.model_max_length - 1\n",
    "    random_index = random.randint(0, max_index)\n",
    "    start_indices.append(random_index)\n",
    "```\n",
    "\n",
    "📌 This block generates a list of `start_indices` that are used to select sequences from the tokenized data for model quantization. Each index in `start_indices` serves as the starting point of a sequence of tokens.\n",
    "\n",
    "📌 `max_index = tokenized_data.input_ids.shape[1] - tokenizer.model_max_length - 1`\n",
    "\n",
    "Here `tokenized_data.input_ids.shape[1]` gives the length of the tokenized sequence.\n",
    "\n",
    "`tokenizer.model_max_length` is the maximum sequence length the model can handle.\n",
    "\n",
    "From the Hugging Face docs: `model_max_length (int, optional)`: the maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with `from_pretrained()`, this will be set to the value stored for the associated model in `max_model_input_sizes`. If no value is provided, it will default to VERY_LARGE_INTEGER (`int(1e30)`).\n",
    "\n",
    "Subtracting these values and an additional `1` ensures that any sequence starting at `random_index` and extending for `tokenizer.model_max_length` tokens will not exceed the length of `tokenized_data.input_ids`. This subtraction prevents index out-of-bounds errors when later slicing the `input_ids`.\n",
    "\n",
    "`random_index = random.randint(0, max_index)`: generates a random integer between `0` and `max_index`. This index is used as the start point for slicing a sequence from the tokenized data.\n",
    "\n",
    "`start_indices.append(random_index)`: appends the generated `random_index` to the `start_indices` list.\n",
    "\n",
    "📌 After the loop, `start_indices` contains `n_samples` starting points, each chosen randomly within the valid range, ensuring the extracted subsequences stay within the model's maximum sequence length and within the bounds of `tokenized_data.input_ids`.\n",
    "\n",
    "----------------------"
   ]
  },
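  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small numeric illustration of the valid-range logic above, using made-up lengths (10,000 tokens of calibration text and a hypothetical 2,048-token model limit) rather than the real C4 data:\n",
    "\n",
    "```py\n",
    "import random\n",
    "import torch\n",
    "\n",
    "seq_len, model_max_length = 10_000, 2_048\n",
    "input_ids = torch.arange(seq_len).unsqueeze(0)   # shape [1, seq_len]\n",
    "\n",
    "max_index = seq_len - model_max_length - 1       # last safe starting point\n",
    "i = random.randint(0, max_index)\n",
    "j = i + model_max_length\n",
    "window = input_ids[:, i:j]\n",
    "print(window.shape)   # torch.Size([1, 2048]) -- the window always fits\n",
    "```"
   ]
  },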
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### `input_ids = tokenized_data.input_ids[:, i:j]` - In this line, why are we slicing column-wise (indicated by `:`)?\n",
    "\n",
    "- In NLP models, especially those based on transformers, input data is often represented as a 2D tensor. The first dimension typically represents the different examples or sequences (the batch dimension), and the second dimension represents the tokens within each sequence.\n",
    "\n",
    "- The `input_ids` tensor from tokenized data usually has a shape of `[batch_size, seq_length]`, where `batch_size` is the number of sequences and `seq_length` is the length of each sequence. If you're processing a single sequence, `batch_size` is 1. If you have 10 sequences of text data that you want to process at the same time, your batch size is 10.\n",
    "\n",
    "- `seq_length`, the second dimension, is the number of tokens in each sequence, i.e. the length of each input text sequence after tokenization.\n",
    "\n",
    "- In the given code, the slicing operation `[:, i:j]` selects a specific range of tokens from each sequence in the batch. The `:` indicates that we are selecting all sequences (across the batch dimension), and `i:j` specifies the range of tokens to select from each sequence. This way, you get a contiguous slice of tokens from every sequence in the batch, which is vital for creating uniform-length sequences for model input.\n",
    "\n",
    "- **Why not row-wise**: row-wise slicing would select specific sequences (entire sequences) rather than specific tokens within each sequence. That is not the desired operation here, as the goal is to select a particular section of tokens (from `i` to `j`) across all sequences for the model to process.\n",
    "\n",
    "---------"
   ]
  },
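  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick demo of the two slicing directions on a tiny `[batch_size, seq_length]` tensor (made-up values, just to show the shapes):\n",
    "\n",
    "```py\n",
    "import torch\n",
    "\n",
    "input_ids = torch.arange(12).reshape(2, 6)   # 2 sequences, 6 tokens each\n",
    "\n",
    "print(input_ids[:, 1:4])   # column-wise: tokens 1..3 from *every* sequence\n",
    "# tensor([[1, 2, 3],\n",
    "#         [7, 8, 9]])\n",
    "\n",
    "print(input_ids[0:1, :])   # row-wise: the whole first sequence only\n",
    "# tensor([[0, 1, 2, 3, 4, 5]])\n",
    "```"
   ]
  },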
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### `attention_mask = (input_ids != tokenizer.pad_token_id).type(input_ids.dtype)`\n",
    "\n",
    "📌 This is done to accurately represent the valid (non-padding) tokens in the `attention_mask`.\n",
    "\n",
    "📌 In a typical NLP model setup, especially with large language models, input sequences may be padded to a fixed length for batching purposes. Padding tokens are not actual content and should not be attended to by the model.\n",
    "\n",
    "📌 The expression `(input_ids != tokenizer.pad_token_id)` creates a boolean mask in which each position is `True` if the corresponding token is not a padding token and `False` otherwise. This effectively identifies the non-padding tokens.\n",
    "\n",
    "📌 The method `.type(input_ids.dtype)` then converts this boolean mask to the same data type as `input_ids`. This is required because attention masks need to be of the same type as the model's inputs (usually a tensor of integers or floats).\n",
    "\n",
    "📌 As a result, the attention mask accurately reflects which tokens should be considered during the attention calculations in the model, leading to more effective and accurate model processing."
   ]
  },
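  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the mask construction, using a hypothetical `pad_token_id` of 0 and made-up token IDs:\n",
    "\n",
    "```py\n",
    "import torch\n",
    "\n",
    "pad_token_id = 0                                  # hypothetical padding id\n",
    "input_ids = torch.tensor([[101, 2023, 2003, 0, 0]])\n",
    "attention_mask = (input_ids != pad_token_id).type(input_ids.dtype)\n",
    "print(attention_mask)                             # tensor([[1, 1, 1, 0, 0]])\n",
    "```"
   ]
  },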
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The C4 (Colossal Clean Crawled Corpus) dataset is used to generate our calibration samples. The quantization process relies heavily on these samples to evaluate and enhance the quality of the quantization: they provide a means of comparing the outputs produced by the original model and the newly quantized one. The larger the number of samples provided, the greater the potential for accurate and effective comparisons, leading to improved quantization quality.\n",
    "\n",
    "In the example above we load 512 samples from the C4 dataset, tokenize them, and format them.\n",
    "\n",
    "------\n",
    "\n",
    "The `desc_act` parameter (also called act order) lets you process rows based on decreasing activation, meaning the most important or impactful rows (determined from sampled inputs and outputs) are processed first. This aims to place most of the quantization error (inevitably introduced during quantization) on less significant weights, improving the overall accuracy of the quantization by ensuring the most significant weights are processed with greater precision. However, when used alongside group size, `desc_act` can lead to performance slowdowns due to the need to frequently reload quantization parameters. For this reason, we don't use it here (this will probably be fixed in the future, however).\n",
    "\n",
    "https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.PreTrainedTokenizer.model_max_length"
   ]
  },
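  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, enabling act order is just a matter of flipping the flag in the same config object used above; whether the slowdown mentioned above matters depends on your auto-gptq version and kernel, so treat this as a sketch rather than a recommendation:\n",
    "\n",
    "```py\n",
    "from auto_gptq import BaseQuantizeConfig\n",
    "\n",
    "config_act_order = BaseQuantizeConfig(\n",
    "    bits=4,\n",
    "    group_size=128,\n",
    "    damp_percent=0.01,\n",
    "    desc_act=True,   # process rows in order of decreasing activation importance\n",
    ")\n",
    "```"
   ]
  }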
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}