LLM-FineTuning-Large-Language-Models

Quantizing_Transformers_with_GPTQ.ipynb 
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quantizing 🤗 Transformers models with the GPTQ method can be done in only a few lines.\n",
    "\n",
    "📌 Note that for a large model (e.g. 175B parameters), at least 4 GPU-hours are needed if you use a large calibration dataset such as `\"c4\"`.\n",
    "\n",
    "📌 Of course, many GPTQ models are already available on the Hugging Face Hub, which bypasses the need to quantize a model yourself in most use cases. Nevertheless, you can also quantize a model using your own dataset, appropriate for the particular domain you are working on."
   ]
  },
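  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "📌 A minimal sketch of the \"already quantized\" path, assuming a GPTQ checkpoint such as `TheBloke/Llama-2-7B-GPTQ` on the Hub (any GPTQ repo id works) and that `optimum` and `auto-gptq` are installed. `from_pretrained` picks up the quantization config stored with the checkpoint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: loading a pre-quantized GPTQ model from the Hub (the repo id is an example)\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
    "\n",
    "quantized_id = \"TheBloke/Llama-2-7B-GPTQ\"\n",
    "tokenizer = AutoTokenizer.from_pretrained(quantized_id)\n",
    "model = AutoModelForCausalLM.from_pretrained(quantized_id, device_map=\"auto\")\n",
    "\n",
    "# Quick smoke test\n",
    "inputs = tokenizer(\"GPTQ quantization lets you\", return_tensors=\"pt\").to(model.device)\n",
    "print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))"
   ]
  },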
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quantizing 🤗 Transformers models with the GPTQ method\n",
    "\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\n",
    "\n",
    "model_id = \"meta-llama/Llama-2-7b-hf\"\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
    "\n",
    "# 4-bit GPTQ calibration config; the tokenizer preprocesses the c4 calibration samples\n",
    "quantization_config = GPTQConfig(bits=4, dataset=\"c4\", tokenizer=tokenizer)\n",
    "\n",
    "# Quantization runs during from_pretrained once a GPTQConfig is passed\n",
    "model = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"auto\",\n",
    "                                    quantization_config=quantization_config)\n",
    "\n",
    "# Push the quantized weights and tokenizer to the Hub (replace \"username\" with yours)\n",
    "model.push_to_hub(\"username/Llama-2-7b-gptq-4bit\")\n",
    "tokenizer.push_to_hub(\"username/Llama-2-7b-gptq-4bit\")"
   ]
  },
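  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "📌 Two optional follow-ups to the cell above, as a sketch: checking how much memory the quantized model actually takes, and keeping a local copy instead of (or in addition to) pushing to the Hub. The output directory name is just an example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# get_memory_footprint() returns the size of the model's parameters in bytes\n",
    "print(f\"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB\")\n",
    "\n",
    "# Save the quantized model and tokenizer locally (example directory name)\n",
    "model.save_pretrained(\"Llama-2-7b-gptq-4bit\")\n",
    "tokenizer.save_pretrained(\"Llama-2-7b-gptq-4bit\")"
   ]
  },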
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- `bits`: The number of bits to quantize to; supported values are 2, 3, 4 and 8.\n",
    "\n",
    "- `dataset`: The dataset used for calibration. Leaving it as `\"c4\"` seems to yield reasonable results; other datasets are supported according to the documentation.\n",
    "\n",
    "- `tokenizer`: The tokenizer (here, Llama 2 7B's) that will be applied to the calibration dataset.\n",
    "\n",
    "- `desc_act`: Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference, but the perplexity may become slightly worse. If inference speed is not your concern, set `desc_act` to True.\n",
    "\n",
    "- `use_exllama` (bool, optional): Whether to use the exllama backend. Defaults to True if unset, and only works with `bits=4`. For a 4-bit model, the exllama kernels give faster inference.\n",
    "\n",
    "The entire model must fit on GPUs if you want to use the exllama kernels. So if you plan to run the model on a configuration with little VRAM, where `device_map` splits the model across multiple devices, set `use_exllama` to False, as shown in the sketch below."
   ]
  },
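  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "📌 A minimal sketch of how the parameters above map to code. Both snippets reuse the imports and the `username/Llama-2-7b-gptq-4bit` placeholder from the earlier cells; adjust them to your setup."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quantization-time config: desc_act=False trades a little perplexity for faster inference\n",
    "quantization_config = GPTQConfig(bits=4, dataset=\"c4\", tokenizer=tokenizer, desc_act=False)\n",
    "\n",
    "# Load-time config for a low-VRAM setup: with the model split across devices by device_map,\n",
    "# the exllama kernels cannot be used, so disable them explicitly\n",
    "gptq_config = GPTQConfig(bits=4, use_exllama=False)\n",
    "model = AutoModelForCausalLM.from_pretrained(\n",
    "    \"username/Llama-2-7b-gptq-4bit\",\n",
    "    device_map=\"auto\",\n",
    "    quantization_config=gptq_config,\n",
    ")"
   ]
  }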
53
 ],
54
 "metadata": {
55
  "kernelspec": {
56
   "display_name": "py10env",
57
   "language": "python",
58
   "name": "python3"
59
  },
60
  "language_info": {
61
   "name": "python",
62
   "version": "3.10.13"
63
  }
64
 },
65
 "nbformat": 4,
66
 "nbformat_minor": 2
67
}
68
