{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/xx-langchain-chunking.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/xx-langchain-chunking.ipynb)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### [LangChain Handbook](https://pinecone.io/learn/langchain)\n",
    "\n",
    "# Preparing Text Data for use with Retrieval-Augmented LLMs\n",
    "\n",
    "In this walkthrough we'll work through an example of preparing text data for retrieval-augmented question-answering using **L**arge **L**anguage **M**odels (LLMs), and cover some of the considerations involved."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Required Libraries"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are a few Python libraries we must `pip install` for this notebook to run:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -qU langchain tiktoken matplotlib seaborn tqdm"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preparing Data"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example, we will download the LangChain docs from [langchain.readthedocs.io](https://langchain.readthedocs.io/en/latest/). We can get all `.html` files located on the site like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This downloads all HTML into the `rtdocs` directory. Now we can use LangChain itself to process these docs. We do this using the `ReadTheDocsLoader` like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders import ReadTheDocsLoader\n",
    "\n",
    "loader = ReadTheDocsLoader('rtdocs')\n",
    "docs = loader.load()\n",
    "len(docs)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This leaves us with `389` processed doc pages. Let's take a look at the format each one contains:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "docs[0]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We access the plaintext page content like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(docs[0].page_content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(docs[5].page_content)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also find the source of each document:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "docs[5].metadata['source'].replace('rtdocs/', 'https://')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looks good. We also need to consider the length of each page with respect to the number of tokens that will reasonably fit within the context window of the latest LLMs. We will use `gpt-3.5-turbo` as an example.\n",
    "\n",
    "To count the number of tokens that `gpt-3.5-turbo` will use for some text we need to initialize the `tiktoken` tokenizer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tiktoken\n",
    "\n",
    "tokenizer = tiktoken.get_encoding('cl100k_base')\n",
    "\n",
    "# create the length function\n",
    "def tiktoken_len(text):\n",
    "    tokens = tokenizer.encode(\n",
    "        text,\n",
    "        disallowed_special=()\n",
    "    )\n",
    "    return len(tokens)"
   ]
  },
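  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check we can run `tiktoken_len` on a short example string (the text below is arbitrary and not taken from the docs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# quick demo of the length function on an arbitrary example string\n",
    "example_text = (\n",
    "    \"hello I am a chunk of text and using the tiktoken_len function \"\n",
    "    \"we can find the length of this chunk in tokens\"\n",
    ")\n",
    "tiktoken_len(example_text)"
   ]
  },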
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that for the tokenizer we defined the encoder as `\"cl100k_base\"`. This is a specific tiktoken encoder used by `gpt-3.5-turbo`, as well as `gpt-4` and `text-embedding-ada-002`, all of which are supported by OpenAI at the time of writing. Other encoders are available, but they are used with models that OpenAI has since deprecated.\n",
    "\n",
    "You can find more details in the [Tiktoken `model.py` script](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py), or by using `tiktoken.encoding_for_model`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tiktoken.encoding_for_model('gpt-3.5-turbo')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using the `tiktoken_len` function, let's count and visualize the number of tokens across our webpages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "token_counts = [tiktoken_len(doc.page_content) for doc in docs]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see `min`, average, and `max` values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"\"\"Min: {min(token_counts)}\n",
    "Avg: {int(sum(token_counts) / len(token_counts))}\n",
    "Max: {max(token_counts)}\"\"\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now visualize:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "\n",
    "# set style and color palette for the plot\n",
    "sns.set_style(\"whitegrid\")\n",
    "sns.set_palette(\"muted\")\n",
    "\n",
    "# create histogram\n",
    "plt.figure(figsize=(12, 6))\n",
    "sns.histplot(token_counts, kde=False, bins=50)\n",
    "\n",
    "# customize the plot info\n",
    "plt.title(\"Token Counts Histogram\")\n",
    "plt.xlabel(\"Token Count\")\n",
    "plt.ylabel(\"Frequency\")\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The vast majority of pages contain relatively few tokens. However, the limit we will place on the number of tokens in each chunk is smaller than even some of the shorter pages, so most pages will need to be split. But how do we decide what this limit should be?"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Chunking the Text\n",
    "\n",
    "At the time of writing, `gpt-3.5-turbo` supports a context window of 4096 tokens. That means the input tokens plus the generated (completion) output tokens cannot total more than 4096 without hitting an error.\n",
    "\n",
    "So we absolutely need to stay below this limit. Let's assume a very safe margin of ~2000 tokens for the input prompt into `gpt-3.5-turbo`, leaving ~2000 tokens for conversation history and the completion.\n",
    "\n",
    "With this ~2000 token limit we may want to include *five* snippets of relevant information, meaning each snippet can be no more than **400** tokens long.\n",
    "\n",
    "To create these snippets we use the `RecursiveCharacterTextSplitter` from LangChain. To measure the length of snippets we also need a *length function*; that is, a function that consumes text, counts the number of tokens within that text (after tokenization using the `gpt-3.5-turbo` tokenizer), and returns the count. We already defined exactly this above as `tiktoken_len`."
   ]
  },
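  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sanity check (the 4096-token window, ~2000-token prompt budget, and *five* snippets are working assumptions described above, not hard requirements), we can confirm the arithmetic behind the **400** token chunk size and see how many pages would already fit within it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rough sanity check of the token budget described above (assumed numbers)\n",
    "context_window = 4096  # gpt-3.5-turbo context window at the time of writing\n",
    "prompt_budget = 2000   # tokens reserved for the input prompt\n",
    "num_snippets = 5       # retrieved snippets we want to fit in the prompt\n",
    "\n",
    "# leave ~2000 tokens of the window for conversation history and the completion\n",
    "assert context_window - prompt_budget >= 2000\n",
    "\n",
    "chunk_size = prompt_budget // num_snippets\n",
    "print(f\"max tokens per chunk: {chunk_size}\")\n",
    "\n",
    "# how many of our pages would already fit inside a single chunk?\n",
    "within_limit = sum(count <= chunk_size for count in token_counts)\n",
    "print(f\"{within_limit}/{len(token_counts)} pages fit in one chunk; the rest must be split\")"
   ]
  },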
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the length function defined we can initialize our `RecursiveCharacterTextSplitter` object like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=400,\n",
    "    chunk_overlap=20,  # number of tokens overlap between chunks\n",
    "    length_function=tiktoken_len,\n",
    "    separators=['\\n\\n', '\\n', ' ', '']\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we split the text for a document like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "chunks = text_splitter.split_text(docs[5].page_content)\n",
    "len(chunks)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tiktoken_len(chunks[0]), tiktoken_len(chunks[1])"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For `docs[5]` we created `2` chunks of token length `346` and `247`.\n",
    "\n",
    "This is for a single document; we need to do the same over all of our documents. While we iterate through the docs to create these chunks we will reformat them into a format that looks like:\n",
    "\n",
    "```json\n",
    "[\n",
    "    {\n",
    "        \"id\": \"abc-0\",\n",
    "        \"text\": \"some important document text\",\n",
    "        \"source\": \"https://langchain.readthedocs.io/en/latest/glossary.html\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"abc-1\",\n",
    "        \"text\": \"the next chunk of important document text\",\n",
    "        \"source\": \"https://langchain.readthedocs.io/en/latest/glossary.html\"\n",
    "    }\n",
    "    ...\n",
    "]\n",
    "```"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `\"id\"` will be created based on the URL of the text plus its chunk number."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import hashlib\n",
    "m = hashlib.md5()  # we'll use MD5 to hash each URL into a short unique ID\n",
    "\n",
    "url = docs[5].metadata['source'].replace('rtdocs/', 'https://')\n",
    "print(url)\n",
    "\n",
    "# convert URL to unique ID\n",
    "m.update(url.encode('utf-8'))\n",
    "uid = m.hexdigest()[:12]\n",
    "print(uid)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then we use the `uid` alongside the chunk number and the actual `url` to create the format we need:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = [\n",
    "    {\n",
    "        'id': f'{uid}-{i}',\n",
    "        'text': chunk,\n",
    "        'source': url\n",
    "    } for i, chunk in enumerate(chunks)\n",
    "]\n",
    "data"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we repeat the same logic across our full dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from tqdm.auto import tqdm\n",
    "\n",
    "documents = []\n",
    "\n",
    "for doc in tqdm(docs):\n",
    "    url = doc.metadata['source'].replace('rtdocs/', 'https://')\n",
    "    m = hashlib.md5()  # fresh hash object so each ID depends only on this URL\n",
    "    m.update(url.encode('utf-8'))\n",
    "    uid = m.hexdigest()[:12]\n",
    "    chunks = text_splitter.split_text(doc.page_content)\n",
    "    for i, chunk in enumerate(chunks):\n",
    "        documents.append({\n",
    "            'id': f'{uid}-{i}',\n",
    "            'text': chunk,\n",
    "            'source': url\n",
    "        })\n",
    "\n",
    "len(documents)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We're now left with `2201` documents. We can save them to a JSON lines (`.jsonl`) file like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "with open('train.jsonl', 'w') as f:\n",
    "    for doc in documents:\n",
    "        f.write(json.dumps(doc) + '\\n')"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To load the data from file we'd write:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "documents = []\n",
    "\n",
    "with open('train.jsonl', 'r') as f:\n",
    "    for line in f:\n",
    "        documents.append(json.loads(line))\n",
    "\n",
    "len(documents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "documents[0]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### (Optional) Sharing the Dataset"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We've now created our dataset, and you can go ahead and use it however you like. However, if you'd like to share the dataset, or store it somewhere that gives you easy access later, we can use the [Hugging Face Datasets Hub](https://huggingface.co/datasets).\n",
    "\n",
    "To begin, we first need to create an account by clicking the **Sign Up** button at [huggingface.co](https://huggingface.co/). Once done, we click our profile button in the same location, then click **New Dataset**, give it a name like *\"langchain-docs\"*, set the dataset to **Public** or **Private**, and click **Create dataset**."
   ]
  },
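  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to upload `train.jsonl` to the new dataset repo is with the `datasets` library. The cell below is a minimal sketch: it assumes `datasets` and `huggingface_hub` are installed, and the repo ID is a placeholder you should replace with your own username and dataset name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# minimal upload sketch; assumes `datasets` and `huggingface_hub` are installed,\n",
    "# e.g. !pip install -qU datasets huggingface_hub\n",
    "from huggingface_hub import notebook_login\n",
    "from datasets import load_dataset\n",
    "\n",
    "notebook_login()  # paste a Hugging Face access token with write permission\n",
    "\n",
    "# load the JSONL file we created earlier and push it to the dataset repo\n",
    "data = load_dataset('json', data_files='train.jsonl', split='train')\n",
    "data.push_to_hub('<your-username>/langchain-docs')  # placeholder repo ID"
   ]
  }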
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ml",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}