LLM-FineTuning-Large-Language-Models

{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from transformers import AutoModelForCausalLM, AutoTokenizer, GPT2Tokenizer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_model(model_name):\n",
    "    model = AutoModelForCausalLM.from_pretrained(\n",
    "        model_name,\n",
    "        torch_dtype=torch.float16,\n",
    "        device_map=\"auto\",\n",
    "        max_memory={\n",
    "            0: \"28GiB\",\n",
    "            \"cpu\": \"110GiB\",\n",
    "        },\n",
    "        low_cpu_mem_usage=True,\n",
    "    ).eval()\n",
    "\n",
    "    tokenizer = GPT2Tokenizer.from_pretrained(model_name)\n",
    "\n",
    "    return model, tokenizer"
   ]
  },
36
  {
37
   "cell_type": "code",
38
   "execution_count": null,
39
   "metadata": {},
40
   "outputs": [],
41
   "source": [
42
    "model_name = \"cerebras/Cerebras-GPT-13B\"\n",
43
    "model, tokenizer = load_model(model_name)"
44
   ]
45
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "What is capital of USA?\n",
      "The capital of the United States of America (USA) is Washington, D.C."
     ]
    }
   ],
   "source": [
    "inputs = tokenizer(\n",
    "    \"\"\"What is capital of USA?\n",
    "    \"\"\",\n",
    "    truncation=True,\n",
    "    return_tensors=\"pt\",\n",
    ").to(\"cuda\")\n",
    "\n",
    "with torch.inference_mode():\n",
    "    completion = model.generate(\n",
    "        **inputs,\n",
    "        use_cache=True,\n",
    "        do_sample=True,\n",
    "        temperature=0.5,\n",
    "        no_repeat_ngram_size=1,\n",
    "        repetition_penalty=1.0,\n",
    "        top_p=0.92,\n",
    "        top_k=0,\n",
    "        max_new_tokens=128,\n",
    "    )\n",
    "\n",
    "output = tokenizer.decode(completion.squeeze())\n",
    "print(output)"
   ]
  },
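  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to see the effect of the `temperature` parameter (explained below) is to re-run the same prompt at a few settings. The cell below is a minimal sketch: it reuses the `model` and `tokenizer` loaded above, and the specific temperature values are illustrative choices, not part of the original run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: sample the same prompt at a few temperatures.\n",
    "prompt = \"What is capital of USA?\"\n",
    "sweep_inputs = tokenizer(prompt, return_tensors=\"pt\").to(\"cuda\")\n",
    "\n",
    "for t in (0.2, 0.7, 1.2):  # arbitrary example values\n",
    "    with torch.inference_mode():\n",
    "        out = model.generate(\n",
    "            **sweep_inputs,\n",
    "            do_sample=True,\n",
    "            temperature=t,\n",
    "            top_p=0.92,\n",
    "            max_new_tokens=64,\n",
    "        )\n",
    "    print(f\"--- temperature={t} ---\")\n",
    "    print(tokenizer.decode(out.squeeze()))"
   ]
  },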
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**The `temperature` parameter in the `generate` method of Hugging Face's transformers library** controls the degree of randomness and creativity in the generated text. Before sampling, the logits are divided by the temperature, which reshapes the probability distribution from which the next token is drawn.\n",
    "\n",
    "A lower temperature sharpens the distribution, so the model generates more conservative and predictable text, while a higher temperature flattens it, leading to more creative and diverse output but with a higher likelihood of nonsensical or irrelevant text.\n",
    "\n",
    "The temperature is usually set to a value between 0 and 1, with 1.0 being the default. A temperature approaching 0 makes generation effectively deterministic (the most likely token is always chosen), while a very high temperature (e.g., 10) results in extremely diverse and unpredictable output.\n",
    "\n",
    "---------------\n",
    "\n",
    "The **`use_cache` parameter in the `generate` method of Hugging Face's transformers library** controls whether the model reuses its internal key/value cache during text generation.\n",
    "\n",
    "When `use_cache` is set to True, the attention keys and values computed for earlier tokens in the sequence are cached, so each new token only needs its own attention computation instead of re-running the entire prefix. This is particularly useful when generating long sequences, as it significantly reduces the time required for each subsequent token.\n",
    "\n",
    "When `use_cache` is set to False, the model recomputes attention over the full prefix at every step. The generated text is unchanged; the trade-off is slower generation in exchange for lower memory use, which can matter for very long sequences or memory-constrained setups.\n",
    "\n",
    "The default value of `use_cache` is True, meaning the cache is used by default during text generation. Depending on your use case, you may want to experiment with setting `use_cache` to False to see how it affects the speed and memory footprint of generation."
   ]
  },
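  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough check on the speed effect described above, the sketch below times the same generation with and without the cache. It reuses `model`, `tokenizer`, and `inputs` from the cells above; greedy decoding (`do_sample=False`) is used only so that both runs do comparable work, and the wall-clock numbers will vary with hardware."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "def timed_generate(use_cache):\n",
    "    # Greedy decoding so the two runs are comparable; timing is indicative only.\n",
    "    torch.cuda.synchronize()\n",
    "    start = time.perf_counter()\n",
    "    with torch.inference_mode():\n",
    "        model.generate(\n",
    "            **inputs,\n",
    "            do_sample=False,\n",
    "            use_cache=use_cache,\n",
    "            max_new_tokens=128,\n",
    "        )\n",
    "    torch.cuda.synchronize()\n",
    "    return time.perf_counter() - start\n",
    "\n",
    "print(f\"use_cache=True : {timed_generate(True):.2f}s\")\n",
    "print(f\"use_cache=False: {timed_generate(False):.2f}s\")"
   ]
  }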
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}