gpt-4-langchain-docs.ipynb
1
{
2
  "cells": [
3
    {
4
      "cell_type": "markdown",
5
      "metadata": {
6
        "id": "GFLLl1Agum8O"
7
      },
8
      "source": [
9
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/openai/gpt-4-langchain-docs.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/openai/gpt-4-langchain-docs.ipynb)\n",
10
        "\n",
11
        "# GPT4 with Retrieval Augmentation over LangChain Docs\n",
12
        "\n",
13
        "[![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/fast-link.svg)](https://github.com/pinecone-io/examples/blob/master/docs/gpt-4-langchain-docs.ipynb)\n",
14
        "\n",
15
        "In this notebook we'll work through an example of using GPT-4 with retrieval augmentation to answer questions about the LangChain Python library."
16
      ]
17
    },
18
    {
19
      "cell_type": "code",
20
      "execution_count": 1,
21
      "metadata": {
22
        "id": "_HDKlQO5svqI"
23
      },
24
      "outputs": [],
25
      "source": [
26
        "!pip install -qU \\\n",
27
        "  tiktoken==0.4.0 \\\n",
28
        "  openai==0.27.7 \\\n",
29
        "  langchain==0.0.179 \\\n",
30
        "  pinecone-client==2.2.1 \\\n",
31
        "  datasets==2.13.1"
32
      ]
33
    },
34
    {
35
      "cell_type": "markdown",
36
      "metadata": {
37
        "id": "7c1EpQ-jq7SU"
38
      },
39
      "source": [
40
        "---\n",
41
        "\n",
42
        "🚨 _Note: the above `pip install` is formatted for Jupyter notebooks. If running elsewhere you may need to drop the `!`._\n",
43
        "\n",
44
        "---"
45
      ]
46
    },
47
    {
48
      "cell_type": "markdown",
49
      "metadata": {
50
        "id": "NgUEJ6vDum8q"
51
      },
52
      "source": [
53
        "In this example, we will download the LangChain docs, we can find a static version of the docs on Hugging Face datasets in `jamescalam/langchain-docs-23-06-27`. To download them we do:"
54
      ]
55
    },
56
    {
57
      "cell_type": "code",
58
      "execution_count": 2,
59
      "metadata": {
60
        "colab": {
61
          "base_uri": "https://localhost:8080/",
62
          "height": 237,
63
          "referenced_widgets": [
64
            "63de2154fea24b49a87bf4b8428fa630",
65
            "4b4cfb1a834342198c75a02d28448b57",
66
            "a9d471008dc34f67a5307bbb26d6123c",
67
            "580e5dd4c9d9497caa40802d5918e75c",
68
            "bd09981e486d461eaa2cf166b32921e1",
69
            "bed2dd81769b4910831cb34a7b475c72",
70
            "ccad7c2aec604ee29b41497ec0f37fa7",
71
            "390f06d63dd547d395dcf18f1ebe265d",
72
            "6545006e51824be9b6cb5cdb2cb2ba5a",
73
            "241b0de59e53465f8acad4ac74b17b57",
74
            "05199362d95449699254c45c1d5cee94",
75
            "6881722e02fe4395a5fcaf668cb7ebcb",
76
            "2b960a7f46444ad3bd3392517b415f2d",
77
            "a3e8499ed740449586ca31500038c7a8",
78
            "08c52a0369b74e7da99574ec29612189",
79
            "ffb822b2f739434dbe99e8a992716c30",
80
            "7e2b88be1cae49da824e6c6c0782cb50",
81
            "9f4e9da63bb64d279ded5ee1730b5cba",
82
            "3b319c7a4f6f41ea9ea6e6268cd29343",
83
            "908935a03fea42efbded99cd81de54c5",
84
            "dd3ece4c242d4eae946f8bc4f95d1dbf",
85
            "ae71cc7e26ee4b51b7eb67520f66c9bd",
86
            "d83b0b3089c34bb58ddb1272a240c2f9",
87
            "34d21f61f6dc499a9d1504634e470bdd",
88
            "64aae9675d394df48d233b31e5f0eb3c",
89
            "d1d3dde6ec3b483f8b14139a7d6a9ae0",
90
            "690ca50e9785402bb17fa266f8e40ea9",
91
            "482f891d61ab4c2080d95a9b84ea5c6d",
92
            "622987b045e74a13b79553d3d062e72a",
93
            "6c7236b0655e4397b3a9d5f4d83c03fe",
94
            "6f7e876e10fd4c58aa2d1f1ed4ff2762",
95
            "9a8b01998f8a4c6bb0bfe71e02b3352c",
96
            "ec224feb9828415eb018831e985d22c0",
97
            "a532b2307c734cf188092d40299c40ad",
98
            "fab781bfae4647968aa69f19ae6a5754",
99
            "5961b9e44ce14a2a8eb65a9e5b6be90d",
100
            "5f15e4b12305489180e54c61769dcebe",
101
            "324465ed674740c2a18a88a2633f2093",
102
            "f82b21e87eba4e06a0531c791dc09b3f",
103
            "5c0bb7407c844ae19479416752f66190",
104
            "5ef6d125261b49679dcb4d886b3e382c",
105
            "294d5fc4fa1e40429e08137934481ba2",
106
            "f5d992e8c1224879be5e5464a424a3a4",
107
            "7e828bf7b91e4029bc2093876128a78b"
108
          ]
109
        },
110
        "id": "xo9gYhGPr_DQ",
111
        "outputId": "016b896d-87a6-4d17-bad1-027475510a8b"
112
      },
113
      "outputs": [
114
        {
115
          "name": "stdout",
116
          "output_type": "stream",
117
          "text": [
118
            "Downloading and preparing dataset json/jamescalam--langchain-docs-23-06-27 to /root/.cache/huggingface/datasets/jamescalam___json/jamescalam--langchain-docs-23-06-27-4631410d07444b03/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...\n"
119
          ]
120
        },
121
        {
122
          "data": {
123
            "application/vnd.jupyter.widget-view+json": {
124
              "model_id": "63de2154fea24b49a87bf4b8428fa630",
125
              "version_major": 2,
126
              "version_minor": 0
127
            },
128
            "text/plain": [
129
              "Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]"
130
            ]
131
          },
132
          "metadata": {},
133
          "output_type": "display_data"
134
        },
135
        {
136
          "data": {
137
            "application/vnd.jupyter.widget-view+json": {
138
              "model_id": "6881722e02fe4395a5fcaf668cb7ebcb",
139
              "version_major": 2,
140
              "version_minor": 0
141
            },
142
            "text/plain": [
143
              "Downloading data:   0%|          | 0.00/4.68M [00:00<?, ?B/s]"
144
            ]
145
          },
146
          "metadata": {},
147
          "output_type": "display_data"
148
        },
149
        {
150
          "data": {
151
            "application/vnd.jupyter.widget-view+json": {
152
              "model_id": "d83b0b3089c34bb58ddb1272a240c2f9",
153
              "version_major": 2,
154
              "version_minor": 0
155
            },
156
            "text/plain": [
157
              "Extracting data files:   0%|          | 0/1 [00:00<?, ?it/s]"
158
            ]
159
          },
160
          "metadata": {},
161
          "output_type": "display_data"
162
        },
163
        {
164
          "data": {
165
            "application/vnd.jupyter.widget-view+json": {
166
              "model_id": "a532b2307c734cf188092d40299c40ad",
167
              "version_major": 2,
168
              "version_minor": 0
169
            },
170
            "text/plain": [
171
              "Generating train split: 0 examples [00:00, ? examples/s]"
172
            ]
173
          },
174
          "metadata": {},
175
          "output_type": "display_data"
176
        },
177
        {
178
          "name": "stdout",
179
          "output_type": "stream",
180
          "text": [
181
            "Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/jamescalam___json/jamescalam--langchain-docs-23-06-27-4631410d07444b03/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96. Subsequent calls will reuse this data.\n"
182
          ]
183
        },
184
        {
185
          "data": {
186
            "text/plain": [
187
              "Dataset({\n",
188
              "    features: ['id', 'text', 'url'],\n",
189
              "    num_rows: 505\n",
190
              "})"
191
            ]
192
          },
193
          "execution_count": 2,
194
          "metadata": {},
195
          "output_type": "execute_result"
196
        }
197
      ],
198
      "source": [
199
        "from datasets import load_dataset\n",
200
        "\n",
201
        "docs = load_dataset('jamescalam/langchain-docs-23-06-27', split='train')\n",
202
        "docs"
203
      ]
204
    },
205
    {
206
      "cell_type": "markdown",
207
      "metadata": {
208
        "id": "ahFEI4U3vdxV"
209
      },
210
      "source": [
211
        "This leaves us with `505` doc pages. Let's take a look at the format each one contains:"
212
      ]
213
    },
214
    {
215
      "cell_type": "code",
216
      "execution_count": 3,
217
      "metadata": {
218
        "colab": {
219
          "base_uri": "https://localhost:8080/",
220
          "height": 52
221
        },
222
        "id": "BJuef8z1vfz4",
223
        "outputId": "6c62ecbe-cf82-475d-b39f-d97ff02422fe"
224
      },
225
      "outputs": [
226
        {
227
          "data": {
228
            "application/vnd.google.colaboratory.intrinsic+json": {
229
              "type": "string"
230
            },
231
            "text/plain": [
232
              "'Example Selector\\uf0c1\\nLogic for selecting examples to include in prompts.\\nclass langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_le'"
233
            ]
234
          },
235
          "execution_count": 3,
236
          "metadata": {},
237
          "output_type": "execute_result"
238
        }
239
      ],
240
      "source": [
241
        "docs[20]['text'][:200]"
242
      ]
243
    },
244
    {
245
      "cell_type": "markdown",
246
      "metadata": {
247
        "id": "jNfppr8fvhOX"
248
      },
249
      "source": [
250
        "We access the plaintext page content like so:"
251
      ]
252
    },
253
    {
254
      "cell_type": "code",
255
      "execution_count": 4,
256
      "metadata": {
257
        "colab": {
258
          "base_uri": "https://localhost:8080/"
259
        },
260
        "id": "vfdQLriyvjDk",
261
        "outputId": "b7644566-ac2d-4dcf-dab3-1736191d357a"
262
      },
263
      "outputs": [
264
        {
265
          "name": "stdout",
266
          "output_type": "stream",
267
          "text": [
268
            "Example Selector\n",
269
            "Logic for selecting examples to include in prompts.\n",
270
            "class langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_le\n"
271
          ]
272
        }
273
      ],
274
      "source": [
275
        "print(docs[20]['text'][:200])"
276
      ]
277
    },
278
    {
279
      "cell_type": "markdown",
280
      "metadata": {
281
        "id": "r-mxgm-6vo9s"
282
      },
283
      "source": [
284
        "We can also find the source of each document:"
285
      ]
286
    },
287
    {
288
      "cell_type": "code",
289
      "execution_count": 5,
290
      "metadata": {
291
        "colab": {
292
          "base_uri": "https://localhost:8080/",
293
          "height": 35
294
        },
295
        "id": "NGUGao9_uNH3",
296
        "outputId": "f3efbe72-d10c-4223-dd21-f38d02ad96d5"
297
      },
298
      "outputs": [
299
        {
300
          "data": {
301
            "application/vnd.google.colaboratory.intrinsic+json": {
302
              "type": "string"
303
            },
304
            "text/plain": [
305
              "'https://api.python.langchain.com/en/latest/modules/example_selector.html'"
306
            ]
307
          },
308
          "execution_count": 5,
309
          "metadata": {},
310
          "output_type": "execute_result"
311
        }
312
      ],
313
      "source": [
314
        "docs[20]['url']"
315
      ]
316
    },
317
    {
318
      "cell_type": "markdown",
319
      "metadata": {
320
        "id": "ouY4rcx7z2oa"
321
      },
322
      "source": [
323
        "Now let's see how we can process all of these. We will chunk everything into ~500 token chunks, we can do this easily with `langchain` and `tiktoken`:"
324
      ]
325
    },
326
    {
327
      "cell_type": "code",
328
      "execution_count": 6,
329
      "metadata": {
330
        "colab": {
331
          "base_uri": "https://localhost:8080/",
332
          "height": 35
333
        },
334
        "id": "Rb7KxUqYzsuV",
335
        "outputId": "af54a189-b2e5-4b6d-9752-e74edd01b5e4"
336
      },
337
      "outputs": [
338
        {
339
          "data": {
340
            "application/vnd.google.colaboratory.intrinsic+json": {
341
              "type": "string"
342
            },
343
            "text/plain": [
344
              "'cl100k_base'"
345
            ]
346
          },
347
          "execution_count": 6,
348
          "metadata": {},
349
          "output_type": "execute_result"
350
        }
351
      ],
352
      "source": [
353
        "import tiktoken\n",
354
        "\n",
355
        "tokenizer_name = tiktoken.encoding_for_model('gpt-4')\n",
356
        "tokenizer_name.name"
357
      ]
358
    },
359
    {
360
      "cell_type": "code",
361
      "execution_count": 7,
362
      "metadata": {
363
        "id": "N635Sgsbx_ME"
364
      },
365
      "outputs": [],
366
      "source": [
367
        "tokenizer = tiktoken.get_encoding(tokenizer_name.name)\n",
368
        "\n",
369
        "# create the length function\n",
370
        "def tiktoken_len(text):\n",
371
        "    tokens = tokenizer.encode(\n",
372
        "        text,\n",
373
        "        disallowed_special=()\n",
374
        "    )\n",
375
        "    return len(tokens)"
376
      ]
377
    },
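    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (an illustrative aside, not part of the original pipeline), we can try `tiktoken_len` on a short string to confirm it returns a token count:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# example only: count the tokens in a sample sentence with the length function defined above\n",
        "tiktoken_len(\"hello I am a chunk of text and using the tiktoken_len function we can find the length of this chunk of text in tokens\")"
      ]
    },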
378
    {
379
      "cell_type": "code",
380
      "execution_count": 8,
381
      "metadata": {
382
        "id": "OKO8e3Dp0dQS"
383
      },
384
      "outputs": [],
385
      "source": [
386
        "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
387
        "\n",
388
        "text_splitter = RecursiveCharacterTextSplitter(\n",
389
        "    chunk_size=500,\n",
390
        "    chunk_overlap=20,\n",
391
        "    length_function=tiktoken_len,\n",
392
        "    separators=[\"\\n\\n\", \"\\n\", \" \", \"\"]\n",
393
        ")"
394
      ]
395
    },
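    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before processing everything, here is a minimal sketch (added as an illustration) of what the splitter does to a single page: it returns a list of text chunks, each at most ~500 tokens long."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# example only: split one doc page and check how many chunks it produces and their token lengths\n",
        "example_chunks = text_splitter.split_text(docs[20]['text'])\n",
        "len(example_chunks), [tiktoken_len(chunk) for chunk in example_chunks]"
      ]
    },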
396
    {
397
      "cell_type": "markdown",
398
      "metadata": {
399
        "id": "bLdvW8eq06Zd"
400
      },
401
      "source": [
402
        "Process the `docs` into more chunks using this approach."
403
      ]
404
    },
405
    {
406
      "cell_type": "code",
407
      "execution_count": 10,
408
      "metadata": {
409
        "colab": {
410
          "base_uri": "https://localhost:8080/",
411
          "height": 66,
412
          "referenced_widgets": [
413
            "01296cac12234000a13bdca80b31ba8b",
414
            "930601ee00454f71b1114c4aaff0175b",
415
            "e976d05935374e47b86773ca852cfa9e",
416
            "bf9b29814dd04a22a7ff4ca1c6160c21",
417
            "6d110cd070fe4776b9449de74759dff3",
418
            "d670714b504847e3b72cd84510219ec7",
419
            "037869180d9d4b1eb1bdbed67337e349",
420
            "894a9b32ecc3404eb1213a8fa9ea38e2",
421
            "5b14b2d018c74766954d580853eae7fc",
422
            "41920d8d2aa44511814576dab37d96e7",
423
            "d4c5704e6136468b910684e418074271"
424
          ]
425
        },
426
        "id": "uOdPyiAQ0uWs",
427
        "outputId": "a36d52b2-810d-4422-cc82-105af8d1c83b"
428
      },
429
      "outputs": [
430
        {
431
          "data": {
432
            "application/vnd.jupyter.widget-view+json": {
433
              "model_id": "01296cac12234000a13bdca80b31ba8b",
434
              "version_major": 2,
435
              "version_minor": 0
436
            },
437
            "text/plain": [
438
              "  0%|          | 0/505 [00:00<?, ?it/s]"
439
            ]
440
          },
441
          "metadata": {},
442
          "output_type": "display_data"
443
        },
444
        {
445
          "data": {
446
            "text/plain": [
447
              "2482"
448
            ]
449
          },
450
          "execution_count": 10,
451
          "metadata": {},
452
          "output_type": "execute_result"
453
        }
454
      ],
455
      "source": [
456
        "from typing_extensions import Concatenate\n",
457
        "from uuid import uuid4\n",
458
        "from tqdm.auto import tqdm\n",
459
        "\n",
460
        "chunks = []\n",
461
        "\n",
462
        "for page in tqdm(docs):\n",
463
        "    if len(page['text']) < 200:\n",
464
        "        # if page content is short we can skip\n",
465
        "        continue\n",
466
        "    texts = text_splitter.split_text(page['text'])\n",
467
        "    chunks.extend([{\n",
468
        "        'id': page['id'] + f'-{i}',\n",
469
        "        'text': texts[i],\n",
470
        "        'url': page['url'],\n",
471
        "        'chunk': i\n",
472
        "    } for i in range(len(texts))])\n",
473
        "len(chunks)"
474
      ]
475
    },
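    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check (an added aside), we can look at the first chunk to confirm it has the `id`, `text`, `url`, and `chunk` fields we just built:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# example only: inspect one of the records we will embed and index\n",
        "chunks[0]"
      ]
    },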
476
    {
477
      "cell_type": "markdown",
478
      "metadata": {
479
        "id": "JegURaAg2PuN"
480
      },
481
      "source": [
482
        "Our chunks are ready so now we move onto embedding and indexing everything."
483
      ]
484
    },
485
    {
486
      "cell_type": "markdown",
487
      "metadata": {
488
        "id": "zGIZbQqJ2WBh"
489
      },
490
      "source": [
491
        "## Initialize Embedding Model\n",
492
        "\n",
493
        "We use `text-embedding-ada-002` as the embedding model. We can embed text like so:"
494
      ]
495
    },
496
    {
497
      "cell_type": "code",
498
      "execution_count": 11,
499
      "metadata": {
500
        "colab": {
501
          "base_uri": "https://localhost:8080/"
502
        },
503
        "id": "p0U9_7Fium8u",
504
        "outputId": "e2b285d4-dbde-4b2b-8624-bb515f6206cf"
505
      },
506
      "outputs": [
507
        {
508
          "data": {
509
            "text/plain": [
510
              "<OpenAIObject list at 0x7fc82e74dee0> JSON: {\n",
511
              "  \"data\": [\n",
512
              "    {\n",
513
              "      \"created\": null,\n",
514
              "      \"id\": \"whisper-1\",\n",
515
              "      \"object\": \"engine\",\n",
516
              "      \"owner\": \"openai-internal\",\n",
517
              "      \"permissions\": null,\n",
518
              "      \"ready\": true\n",
519
              "    },\n",
520
              "    {\n",
521
              "      \"created\": null,\n",
522
              "      \"id\": \"babbage\",\n",
523
              "      \"object\": \"engine\",\n",
524
              "      \"owner\": \"openai\",\n",
525
              "      \"permissions\": null,\n",
526
              "      \"ready\": true\n",
527
              "    },\n",
528
              "    {\n",
529
              "      \"created\": null,\n",
530
              "      \"id\": \"davinci\",\n",
531
              "      \"object\": \"engine\",\n",
532
              "      \"owner\": \"openai\",\n",
533
              "      \"permissions\": null,\n",
534
              "      \"ready\": true\n",
535
              "    },\n",
536
              "    {\n",
537
              "      \"created\": null,\n",
538
              "      \"id\": \"text-davinci-edit-001\",\n",
539
              "      \"object\": \"engine\",\n",
540
              "      \"owner\": \"openai\",\n",
541
              "      \"permissions\": null,\n",
542
              "      \"ready\": true\n",
543
              "    },\n",
544
              "    {\n",
545
              "      \"created\": null,\n",
546
              "      \"id\": \"babbage-code-search-code\",\n",
547
              "      \"object\": \"engine\",\n",
548
              "      \"owner\": \"openai-dev\",\n",
549
              "      \"permissions\": null,\n",
550
              "      \"ready\": true\n",
551
              "    },\n",
552
              "    {\n",
553
              "      \"created\": null,\n",
554
              "      \"id\": \"text-similarity-babbage-001\",\n",
555
              "      \"object\": \"engine\",\n",
556
              "      \"owner\": \"openai-dev\",\n",
557
              "      \"permissions\": null,\n",
558
              "      \"ready\": true\n",
559
              "    },\n",
560
              "    {\n",
561
              "      \"created\": null,\n",
562
              "      \"id\": \"text-embedding-ada-002\",\n",
563
              "      \"object\": \"engine\",\n",
564
              "      \"owner\": \"openai-internal\",\n",
565
              "      \"permissions\": null,\n",
566
              "      \"ready\": true\n",
567
              "    },\n",
568
              "    {\n",
569
              "      \"created\": null,\n",
570
              "      \"id\": \"code-davinci-edit-001\",\n",
571
              "      \"object\": \"engine\",\n",
572
              "      \"owner\": \"openai\",\n",
573
              "      \"permissions\": null,\n",
574
              "      \"ready\": true\n",
575
              "    },\n",
576
              "    {\n",
577
              "      \"created\": null,\n",
578
              "      \"id\": \"text-davinci-001\",\n",
579
              "      \"object\": \"engine\",\n",
580
              "      \"owner\": \"openai\",\n",
581
              "      \"permissions\": null,\n",
582
              "      \"ready\": true\n",
583
              "    },\n",
584
              "    {\n",
585
              "      \"created\": null,\n",
586
              "      \"id\": \"ada\",\n",
587
              "      \"object\": \"engine\",\n",
588
              "      \"owner\": \"openai\",\n",
589
              "      \"permissions\": null,\n",
590
              "      \"ready\": true\n",
591
              "    },\n",
592
              "    {\n",
593
              "      \"created\": null,\n",
594
              "      \"id\": \"babbage-code-search-text\",\n",
595
              "      \"object\": \"engine\",\n",
596
              "      \"owner\": \"openai-dev\",\n",
597
              "      \"permissions\": null,\n",
598
              "      \"ready\": true\n",
599
              "    },\n",
600
              "    {\n",
601
              "      \"created\": null,\n",
602
              "      \"id\": \"babbage-similarity\",\n",
603
              "      \"object\": \"engine\",\n",
604
              "      \"owner\": \"openai-dev\",\n",
605
              "      \"permissions\": null,\n",
606
              "      \"ready\": true\n",
607
              "    },\n",
608
              "    {\n",
609
              "      \"created\": null,\n",
610
              "      \"id\": \"code-search-babbage-text-001\",\n",
611
              "      \"object\": \"engine\",\n",
612
              "      \"owner\": \"openai-dev\",\n",
613
              "      \"permissions\": null,\n",
614
              "      \"ready\": true\n",
615
              "    },\n",
616
              "    {\n",
617
              "      \"created\": null,\n",
618
              "      \"id\": \"text-curie-001\",\n",
619
              "      \"object\": \"engine\",\n",
620
              "      \"owner\": \"openai\",\n",
621
              "      \"permissions\": null,\n",
622
              "      \"ready\": true\n",
623
              "    },\n",
624
              "    {\n",
625
              "      \"created\": null,\n",
626
              "      \"id\": \"gpt-4-0314\",\n",
627
              "      \"object\": \"engine\",\n",
628
              "      \"owner\": \"openai\",\n",
629
              "      \"permissions\": null,\n",
630
              "      \"ready\": true\n",
631
              "    },\n",
632
              "    {\n",
633
              "      \"created\": null,\n",
634
              "      \"id\": \"gpt-4-0613\",\n",
635
              "      \"object\": \"engine\",\n",
636
              "      \"owner\": \"openai\",\n",
637
              "      \"permissions\": null,\n",
638
              "      \"ready\": true\n",
639
              "    },\n",
640
              "    {\n",
641
              "      \"created\": null,\n",
642
              "      \"id\": \"code-search-babbage-code-001\",\n",
643
              "      \"object\": \"engine\",\n",
644
              "      \"owner\": \"openai-dev\",\n",
645
              "      \"permissions\": null,\n",
646
              "      \"ready\": true\n",
647
              "    },\n",
648
              "    {\n",
649
              "      \"created\": null,\n",
650
              "      \"id\": \"text-ada-001\",\n",
651
              "      \"object\": \"engine\",\n",
652
              "      \"owner\": \"openai\",\n",
653
              "      \"permissions\": null,\n",
654
              "      \"ready\": true\n",
655
              "    },\n",
656
              "    {\n",
657
              "      \"created\": null,\n",
658
              "      \"id\": \"text-similarity-ada-001\",\n",
659
              "      \"object\": \"engine\",\n",
660
              "      \"owner\": \"openai-dev\",\n",
661
              "      \"permissions\": null,\n",
662
              "      \"ready\": true\n",
663
              "    },\n",
664
              "    {\n",
665
              "      \"created\": null,\n",
666
              "      \"id\": \"curie-instruct-beta\",\n",
667
              "      \"object\": \"engine\",\n",
668
              "      \"owner\": \"openai\",\n",
669
              "      \"permissions\": null,\n",
670
              "      \"ready\": true\n",
671
              "    },\n",
672
              "    {\n",
673
              "      \"created\": null,\n",
674
              "      \"id\": \"gpt-4\",\n",
675
              "      \"object\": \"engine\",\n",
676
              "      \"owner\": \"openai\",\n",
677
              "      \"permissions\": null,\n",
678
              "      \"ready\": true\n",
679
              "    },\n",
680
              "    {\n",
681
              "      \"created\": null,\n",
682
              "      \"id\": \"ada-code-search-code\",\n",
683
              "      \"object\": \"engine\",\n",
684
              "      \"owner\": \"openai-dev\",\n",
685
              "      \"permissions\": null,\n",
686
              "      \"ready\": true\n",
687
              "    },\n",
688
              "    {\n",
689
              "      \"created\": null,\n",
690
              "      \"id\": \"ada-similarity\",\n",
691
              "      \"object\": \"engine\",\n",
692
              "      \"owner\": \"openai-dev\",\n",
693
              "      \"permissions\": null,\n",
694
              "      \"ready\": true\n",
695
              "    },\n",
696
              "    {\n",
697
              "      \"created\": null,\n",
698
              "      \"id\": \"code-search-ada-text-001\",\n",
699
              "      \"object\": \"engine\",\n",
700
              "      \"owner\": \"openai-dev\",\n",
701
              "      \"permissions\": null,\n",
702
              "      \"ready\": true\n",
703
              "    },\n",
704
              "    {\n",
705
              "      \"created\": null,\n",
706
              "      \"id\": \"text-search-ada-query-001\",\n",
707
              "      \"object\": \"engine\",\n",
708
              "      \"owner\": \"openai-dev\",\n",
709
              "      \"permissions\": null,\n",
710
              "      \"ready\": true\n",
711
              "    },\n",
712
              "    {\n",
713
              "      \"created\": null,\n",
714
              "      \"id\": \"davinci-search-document\",\n",
715
              "      \"object\": \"engine\",\n",
716
              "      \"owner\": \"openai-dev\",\n",
717
              "      \"permissions\": null,\n",
718
              "      \"ready\": true\n",
719
              "    },\n",
720
              "    {\n",
721
              "      \"created\": null,\n",
722
              "      \"id\": \"ada-code-search-text\",\n",
723
              "      \"object\": \"engine\",\n",
724
              "      \"owner\": \"openai-dev\",\n",
725
              "      \"permissions\": null,\n",
726
              "      \"ready\": true\n",
727
              "    },\n",
728
              "    {\n",
729
              "      \"created\": null,\n",
730
              "      \"id\": \"text-search-ada-doc-001\",\n",
731
              "      \"object\": \"engine\",\n",
732
              "      \"owner\": \"openai-dev\",\n",
733
              "      \"permissions\": null,\n",
734
              "      \"ready\": true\n",
735
              "    },\n",
736
              "    {\n",
737
              "      \"created\": null,\n",
738
              "      \"id\": \"davinci-instruct-beta\",\n",
739
              "      \"object\": \"engine\",\n",
740
              "      \"owner\": \"openai\",\n",
741
              "      \"permissions\": null,\n",
742
              "      \"ready\": true\n",
743
              "    },\n",
744
              "    {\n",
745
              "      \"created\": null,\n",
746
              "      \"id\": \"text-similarity-curie-001\",\n",
747
              "      \"object\": \"engine\",\n",
748
              "      \"owner\": \"openai-dev\",\n",
749
              "      \"permissions\": null,\n",
750
              "      \"ready\": true\n",
751
              "    },\n",
752
              "    {\n",
753
              "      \"created\": null,\n",
754
              "      \"id\": \"code-search-ada-code-001\",\n",
755
              "      \"object\": \"engine\",\n",
756
              "      \"owner\": \"openai-dev\",\n",
757
              "      \"permissions\": null,\n",
758
              "      \"ready\": true\n",
759
              "    },\n",
760
              "    {\n",
761
              "      \"created\": null,\n",
762
              "      \"id\": \"ada-search-query\",\n",
763
              "      \"object\": \"engine\",\n",
764
              "      \"owner\": \"openai-dev\",\n",
765
              "      \"permissions\": null,\n",
766
              "      \"ready\": true\n",
767
              "    },\n",
768
              "    {\n",
769
              "      \"created\": null,\n",
770
              "      \"id\": \"text-search-davinci-query-001\",\n",
771
              "      \"object\": \"engine\",\n",
772
              "      \"owner\": \"openai-dev\",\n",
773
              "      \"permissions\": null,\n",
774
              "      \"ready\": true\n",
775
              "    },\n",
776
              "    {\n",
777
              "      \"created\": null,\n",
778
              "      \"id\": \"curie-search-query\",\n",
779
              "      \"object\": \"engine\",\n",
780
              "      \"owner\": \"openai-dev\",\n",
781
              "      \"permissions\": null,\n",
782
              "      \"ready\": true\n",
783
              "    },\n",
784
              "    {\n",
785
              "      \"created\": null,\n",
786
              "      \"id\": \"davinci-search-query\",\n",
787
              "      \"object\": \"engine\",\n",
788
              "      \"owner\": \"openai-dev\",\n",
789
              "      \"permissions\": null,\n",
790
              "      \"ready\": true\n",
791
              "    },\n",
792
              "    {\n",
793
              "      \"created\": null,\n",
794
              "      \"id\": \"babbage-search-document\",\n",
795
              "      \"object\": \"engine\",\n",
796
              "      \"owner\": \"openai-dev\",\n",
797
              "      \"permissions\": null,\n",
798
              "      \"ready\": true\n",
799
              "    },\n",
800
              "    {\n",
801
              "      \"created\": null,\n",
802
              "      \"id\": \"ada-search-document\",\n",
803
              "      \"object\": \"engine\",\n",
804
              "      \"owner\": \"openai-dev\",\n",
805
              "      \"permissions\": null,\n",
806
              "      \"ready\": true\n",
807
              "    },\n",
808
              "    {\n",
809
              "      \"created\": null,\n",
810
              "      \"id\": \"text-search-curie-query-001\",\n",
811
              "      \"object\": \"engine\",\n",
812
              "      \"owner\": \"openai-dev\",\n",
813
              "      \"permissions\": null,\n",
814
              "      \"ready\": true\n",
815
              "    },\n",
816
              "    {\n",
817
              "      \"created\": null,\n",
818
              "      \"id\": \"text-search-babbage-doc-001\",\n",
819
              "      \"object\": \"engine\",\n",
820
              "      \"owner\": \"openai-dev\",\n",
821
              "      \"permissions\": null,\n",
822
              "      \"ready\": true\n",
823
              "    },\n",
824
              "    {\n",
825
              "      \"created\": null,\n",
826
              "      \"id\": \"curie-search-document\",\n",
827
              "      \"object\": \"engine\",\n",
828
              "      \"owner\": \"openai-dev\",\n",
829
              "      \"permissions\": null,\n",
830
              "      \"ready\": true\n",
831
              "    },\n",
832
              "    {\n",
833
              "      \"created\": null,\n",
834
              "      \"id\": \"text-search-curie-doc-001\",\n",
835
              "      \"object\": \"engine\",\n",
836
              "      \"owner\": \"openai-dev\",\n",
837
              "      \"permissions\": null,\n",
838
              "      \"ready\": true\n",
839
              "    },\n",
840
              "    {\n",
841
              "      \"created\": null,\n",
842
              "      \"id\": \"babbage-search-query\",\n",
843
              "      \"object\": \"engine\",\n",
844
              "      \"owner\": \"openai-dev\",\n",
845
              "      \"permissions\": null,\n",
846
              "      \"ready\": true\n",
847
              "    },\n",
848
              "    {\n",
849
              "      \"created\": null,\n",
850
              "      \"id\": \"text-babbage-001\",\n",
851
              "      \"object\": \"engine\",\n",
852
              "      \"owner\": \"openai\",\n",
853
              "      \"permissions\": null,\n",
854
              "      \"ready\": true\n",
855
              "    },\n",
856
              "    {\n",
857
              "      \"created\": null,\n",
858
              "      \"id\": \"text-search-davinci-doc-001\",\n",
859
              "      \"object\": \"engine\",\n",
860
              "      \"owner\": \"openai-dev\",\n",
861
              "      \"permissions\": null,\n",
862
              "      \"ready\": true\n",
863
              "    },\n",
864
              "    {\n",
865
              "      \"created\": null,\n",
866
              "      \"id\": \"text-search-babbage-query-001\",\n",
867
              "      \"object\": \"engine\",\n",
868
              "      \"owner\": \"openai-dev\",\n",
869
              "      \"permissions\": null,\n",
870
              "      \"ready\": true\n",
871
              "    },\n",
872
              "    {\n",
873
              "      \"created\": null,\n",
874
              "      \"id\": \"curie-similarity\",\n",
875
              "      \"object\": \"engine\",\n",
876
              "      \"owner\": \"openai-dev\",\n",
877
              "      \"permissions\": null,\n",
878
              "      \"ready\": true\n",
879
              "    },\n",
880
              "    {\n",
881
              "      \"created\": null,\n",
882
              "      \"id\": \"gpt-3.5-turbo-0613\",\n",
883
              "      \"object\": \"engine\",\n",
884
              "      \"owner\": \"openai\",\n",
885
              "      \"permissions\": null,\n",
886
              "      \"ready\": true\n",
887
              "    },\n",
888
              "    {\n",
889
              "      \"created\": null,\n",
890
              "      \"id\": \"curie\",\n",
891
              "      \"object\": \"engine\",\n",
892
              "      \"owner\": \"openai\",\n",
893
              "      \"permissions\": null,\n",
894
              "      \"ready\": true\n",
895
              "    },\n",
896
              "    {\n",
897
              "      \"created\": null,\n",
898
              "      \"id\": \"gpt-3.5-turbo-16k-0613\",\n",
899
              "      \"object\": \"engine\",\n",
900
              "      \"owner\": \"openai\",\n",
901
              "      \"permissions\": null,\n",
902
              "      \"ready\": true\n",
903
              "    },\n",
904
              "    {\n",
905
              "      \"created\": null,\n",
906
              "      \"id\": \"text-similarity-davinci-001\",\n",
907
              "      \"object\": \"engine\",\n",
908
              "      \"owner\": \"openai-dev\",\n",
909
              "      \"permissions\": null,\n",
910
              "      \"ready\": true\n",
911
              "    },\n",
912
              "    {\n",
913
              "      \"created\": null,\n",
914
              "      \"id\": \"text-davinci-002\",\n",
915
              "      \"object\": \"engine\",\n",
916
              "      \"owner\": \"openai\",\n",
917
              "      \"permissions\": null,\n",
918
              "      \"ready\": true\n",
919
              "    },\n",
920
              "    {\n",
921
              "      \"created\": null,\n",
922
              "      \"id\": \"gpt-3.5-turbo-0301\",\n",
923
              "      \"object\": \"engine\",\n",
924
              "      \"owner\": \"openai\",\n",
925
              "      \"permissions\": null,\n",
926
              "      \"ready\": true\n",
927
              "    },\n",
928
              "    {\n",
929
              "      \"created\": null,\n",
930
              "      \"id\": \"text-davinci-003\",\n",
931
              "      \"object\": \"engine\",\n",
932
              "      \"owner\": \"openai-internal\",\n",
933
              "      \"permissions\": null,\n",
934
              "      \"ready\": true\n",
935
              "    },\n",
936
              "    {\n",
937
              "      \"created\": null,\n",
938
              "      \"id\": \"davinci-similarity\",\n",
939
              "      \"object\": \"engine\",\n",
940
              "      \"owner\": \"openai-dev\",\n",
941
              "      \"permissions\": null,\n",
942
              "      \"ready\": true\n",
943
              "    },\n",
944
              "    {\n",
945
              "      \"created\": null,\n",
946
              "      \"id\": \"gpt-3.5-turbo\",\n",
947
              "      \"object\": \"engine\",\n",
948
              "      \"owner\": \"openai\",\n",
949
              "      \"permissions\": null,\n",
950
              "      \"ready\": true\n",
951
              "    },\n",
952
              "    {\n",
953
              "      \"created\": null,\n",
954
              "      \"id\": \"gpt-3.5-turbo-16k\",\n",
955
              "      \"object\": \"engine\",\n",
956
              "      \"owner\": \"openai-internal\",\n",
957
              "      \"permissions\": null,\n",
958
              "      \"ready\": true\n",
959
              "    }\n",
960
              "  ],\n",
961
              "  \"object\": \"list\"\n",
962
              "}"
963
            ]
964
          },
965
          "execution_count": 11,
966
          "metadata": {},
967
          "output_type": "execute_result"
968
        }
969
      ],
970
      "source": [
971
        "import os\n",
972
        "import openai\n",
973
        "\n",
974
        "# get API key from top-right dropdown on OpenAI website\n",
975
        "openai.api_key = os.getenv(\"OPENAI_API_KEY\") or \"OPENAI_API_KEY\"\n",
976
        "\n",
977
        "openai.Engine.list()  # check we have authenticated"
978
      ]
979
    },
980
    {
981
      "cell_type": "code",
982
      "execution_count": 12,
983
      "metadata": {
984
        "id": "kteZ69Z5M55S"
985
      },
986
      "outputs": [],
987
      "source": [
988
        "embed_model = \"text-embedding-ada-002\"\n",
989
        "\n",
990
        "res = openai.Embedding.create(\n",
991
        "    input=[\n",
992
        "        \"Sample document text goes here\",\n",
993
        "        \"there will be several phrases in each batch\"\n",
994
        "    ], engine=embed_model\n",
995
        ")"
996
      ]
997
    },
998
    {
999
      "cell_type": "markdown",
1000
      "metadata": {
1001
        "id": "aNZ7IWekNLbu"
1002
      },
1003
      "source": [
1004
        "In the response `res` we will find a JSON-like object containing our new embeddings within the `'data'` field."
1005
      ]
1006
    },
1007
    {
1008
      "cell_type": "code",
1009
      "execution_count": 13,
1010
      "metadata": {
1011
        "colab": {
1012
          "base_uri": "https://localhost:8080/"
1013
        },
1014
        "id": "esagZj6iNLPZ",
1015
        "outputId": "8e26f18a-4890-43ca-95e7-9e256e29e3be"
1016
      },
1017
      "outputs": [
1018
        {
1019
          "data": {
1020
            "text/plain": [
1021
              "dict_keys(['object', 'data', 'model', 'usage'])"
1022
            ]
1023
          },
1024
          "execution_count": 13,
1025
          "metadata": {},
1026
          "output_type": "execute_result"
1027
        }
1028
      ],
1029
      "source": [
1030
        "res.keys()"
1031
      ]
1032
    },
1033
    {
1034
      "cell_type": "markdown",
1035
      "metadata": {
1036
        "id": "zStnHFpkNVIU"
1037
      },
1038
      "source": [
1039
        "Inside `'data'` we will find two records, one for each of the two sentences we just embedded. Each vector embedding contains `1536` dimensions (the output dimensionality of the `text-embedding-ada-002` model."
1040
      ]
1041
    },
1042
    {
1043
      "cell_type": "code",
1044
      "execution_count": 14,
1045
      "metadata": {
1046
        "colab": {
1047
          "base_uri": "https://localhost:8080/"
1048
        },
1049
        "id": "uVoP9VcINWAC",
1050
        "outputId": "d9f797af-0df8-4ee9-f779-8d8a62589134"
1051
      },
1052
      "outputs": [
1053
        {
1054
          "data": {
1055
            "text/plain": [
1056
              "2"
1057
            ]
1058
          },
1059
          "execution_count": 14,
1060
          "metadata": {},
1061
          "output_type": "execute_result"
1062
        }
1063
      ],
1064
      "source": [
1065
        "len(res['data'])"
1066
      ]
1067
    },
1068
    {
1069
      "cell_type": "code",
1070
      "execution_count": 15,
1071
      "metadata": {
1072
        "colab": {
1073
          "base_uri": "https://localhost:8080/"
1074
        },
1075
        "id": "s-zraDCjNeC6",
1076
        "outputId": "5f09e471-28de-4c39-d040-a80def97708e"
1077
      },
1078
      "outputs": [
1079
        {
1080
          "data": {
1081
            "text/plain": [
1082
              "(1536, 1536)"
1083
            ]
1084
          },
1085
          "execution_count": 15,
1086
          "metadata": {},
1087
          "output_type": "execute_result"
1088
        }
1089
      ],
1090
      "source": [
1091
        "len(res['data'][0]['embedding']), len(res['data'][1]['embedding'])"
1092
      ]
1093
    },
1094
    {
1095
      "cell_type": "markdown",
1096
      "metadata": {
1097
        "id": "XPd41MjANhmp"
1098
      },
1099
      "source": [
1100
        "We will apply this same embedding logic to the langchain docs dataset we've just scraped. But before doing so we must create a place to store the embeddings."
1101
      ]
1102
    },
1103
    {
1104
      "cell_type": "markdown",
1105
      "metadata": {
1106
        "id": "WPi4MZvMNvUH"
1107
      },
1108
      "source": [
1109
        "## Initializing the Index"
1110
      ]
1111
    },
1112
    {
1113
      "cell_type": "markdown",
1114
      "metadata": {
1115
        "id": "H5RRQArrN2lN"
1116
      },
1117
      "source": [
1118
        "Now we need a place to store these embeddings and enable a efficient vector search through them all. To do that we use Pinecone, we can get a [free API key](https://app.pinecone.io/) and enter it below where we will initialize our connection to Pinecone and create a new index."
1119
      ]
1120
    },
1121
    {
1122
      "cell_type": "code",
1123
      "execution_count": 16,
1124
      "metadata": {
1125
        "colab": {
1126
          "base_uri": "https://localhost:8080/"
1127
        },
1128
        "id": "ZO5_EdUAum8v",
1129
        "outputId": "c8e1358d-085c-40a6-de48-e70e4ccf8165"
1130
      },
1131
      "outputs": [
1132
        {
1133
          "data": {
1134
            "text/plain": [
1135
              "WhoAmIResponse(username='c78f2bd', user_label='default', projectname='3947fb1')"
1136
            ]
1137
          },
1138
          "execution_count": 16,
1139
          "metadata": {},
1140
          "output_type": "execute_result"
1141
        }
1142
      ],
1143
      "source": [
1144
        "from pinecone import Pinecone\n",
1145
        "\n",
1146
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
1147
        "api_key = os.getenv(\"PINECONE_API_KEY\") or \"PINECONE_API_KEY\"\n",
1148
        "# find your environment next to the api key in pinecone console\n",
1149
        "env = os.getenv(\"PINECONE_ENVIRONMENT\") or \"PINECONE_ENVIRONMENT\"\n",
1150
        "\n",
1151
        "pc = Pinecone(api_key=api_key)\n",
1152
        "pinecone.whoami()"
1153
      ]
1154
    },
1155
    {
1156
      "cell_type": "code",
1157
      "execution_count": 17,
1158
      "metadata": {
1159
        "id": "2GQAnohhum8v",
1160
        "tags": [
1161
          "parameters"
1162
        ]
1163
      },
1164
      "outputs": [],
1165
      "source": [
1166
        "index_name = 'gpt-4-langchain-docs'"
1167
      ]
1168
    },
1169
    {
1170
      "cell_type": "code",
1171
      "execution_count": 18,
1172
      "metadata": {
1173
        "colab": {
1174
          "base_uri": "https://localhost:8080/"
1175
        },
1176
        "id": "EO8sbJFZNyIZ",
1177
        "outputId": "7a6a6434-79c1-49ef-b8f1-7d91f491a196"
1178
      },
1179
      "outputs": [
1180
        {
1181
          "data": {
1182
            "text/plain": [
1183
              "{'dimension': 1536,\n",
1184
              " 'index_fullness': 0.0,\n",
1185
              " 'namespaces': {},\n",
1186
              " 'total_vector_count': 0}"
1187
            ]
1188
          },
1189
          "execution_count": 18,
1190
          "metadata": {},
1191
          "output_type": "execute_result"
1192
        }
1193
      ],
1194
      "source": [
1195
        "import time\n",
1196
        "\n",
1197
        "# check if index already exists (it shouldn't if this is first time)\n",
1198
        "if index_name not in pinecone.list_indexes().names():\n",
1199
        "    # if does not exist, create index\n",
1200
        "    pinecone.create_index(\n",
1201
        "        index_name,\n",
1202
        "        dimension=len(res['data'][0]['embedding']),\n",
1203
        "        metric='cosine'\n",
1204
        "    )\n",
1205
        "    # wait for index to be initialized\n",
1206
        "    while not pinecone.describe_index(index_name).status['ready']:\n",
1207
        "        time.sleep(1)\n",
1208
        "\n",
1209
        "# connect to index\n",
1210
        "index = pinecone.Index(index_name)\n",
1211
        "# view index stats\n",
1212
        "index.describe_index_stats()"
1213
      ]
1214
    },
1215
    {
1216
      "cell_type": "markdown",
1217
      "metadata": {
1218
        "id": "ezSTzN2rPa2o"
1219
      },
1220
      "source": [
1221
        "We can see the index is currently empty with a `total_vector_count` of `0`. We can begin populating it with OpenAI `text-embedding-ada-002` built embeddings like so:"
1222
      ]
1223
    },
1224
    {
1225
      "cell_type": "code",
1226
      "execution_count": 19,
1227
      "metadata": {
1228
        "colab": {
1229
          "base_uri": "https://localhost:8080/",
1230
          "height": 49,
1231
          "referenced_widgets": [
1232
            "760c608de89946298cb6845d5ff1b020",
1233
            "f6f7d673d7a145bda593848f7e87ca2c",
1234
            "effb0c1b07574547aca5956963b371c8",
1235
            "e6e0b0054fb5449c84ad745308510ddb",
1236
            "b1e6d4d46b334bcf96efcab6f57c7536",
1237
            "e5a120d5b9494d14a142fbf519bcbbdf",
1238
            "78fe5eb48ae748bda91ddc70f422212c",
1239
            "34e43d6a7a92453490c45e39498afd64",
1240
            "45c7fb32593141abb8168b8077e31f59",
1241
            "0ed96243151440a18994669e2f85e819",
1242
            "05a0a1ebc92f463d9f3e953e51742a85"
1243
          ]
1244
        },
1245
        "id": "iZbFbulAPeop",
1246
        "outputId": "97cbb020-f6f9-4914-ff14-dd472354f64a"
1247
      },
1248
      "outputs": [
1249
        {
1250
          "data": {
1251
            "application/vnd.jupyter.widget-view+json": {
1252
              "model_id": "760c608de89946298cb6845d5ff1b020",
1253
              "version_major": 2,
1254
              "version_minor": 0
1255
            },
1256
            "text/plain": [
1257
              "  0%|          | 0/25 [00:00<?, ?it/s]"
1258
            ]
1259
          },
1260
          "metadata": {},
1261
          "output_type": "display_data"
1262
        }
1263
      ],
1264
      "source": [
1265
        "from tqdm.auto import tqdm\n",
1266
        "\n",
1267
        "batch_size = 100  # how many embeddings we create and insert at once\n",
1268
        "\n",
1269
        "for i in tqdm(range(0, len(chunks), batch_size)):\n",
1270
        "    # find end of batch\n",
1271
        "    i_end = min(len(chunks), i+batch_size)\n",
1272
        "    meta_batch = chunks[i:i_end]\n",
1273
        "    # get ids\n",
1274
        "    ids_batch = [x['id'] for x in meta_batch]\n",
1275
        "    # get texts to encode\n",
1276
        "    texts = [x['text'] for x in meta_batch]\n",
1277
        "    # create embeddings (try-except added to avoid RateLimitError)\n",
1278
        "    try:\n",
1279
        "        res = openai.Embedding.create(input=texts, engine=embed_model)\n",
1280
        "    except:\n",
1281
        "        done = False\n",
1282
        "        while not done:\n",
1283
        "            time.sleep(5)\n",
1284
        "            try:\n",
1285
        "                res = openai.Embedding.create(input=texts, engine=embed_model)\n",
1286
        "                done = True\n",
1287
        "            except:\n",
1288
        "                pass\n",
1289
        "    embeds = [record['embedding'] for record in res['data']]\n",
1290
        "    # cleanup metadata\n",
1291
        "    meta_batch = [{\n",
1292
        "        'text': x['text'],\n",
1293
        "        'chunk': x['chunk'],\n",
1294
        "        'url': x['url']\n",
1295
        "    } for x in meta_batch]\n",
1296
        "    to_upsert = list(zip(ids_batch, embeds, meta_batch))\n",
1297
        "    # upsert to Pinecone\n",
1298
        "    index.upsert(vectors=to_upsert)"
1299
      ]
1300
    },
1301
    {
1302
      "cell_type": "markdown",
1303
      "metadata": {
1304
        "id": "YttJOrEtQIF9"
1305
      },
1306
      "source": [
1307
        "Now we've added all of our langchain docs to the index. With that we can move on to retrieval and then answer generation using GPT-4."
1308
      ]
1309
    },
1310
    {
1311
      "cell_type": "markdown",
1312
      "metadata": {
1313
        "id": "FumVmMRlQQ7w"
1314
      },
1315
      "source": [
1316
        "## Retrieval"
1317
      ]
1318
    },
1319
    {
1320
      "cell_type": "markdown",
1321
      "metadata": {
1322
        "id": "nLRODeL-QTJ9"
1323
      },
1324
      "source": [
1325
        "To search through our documents we first need to create a query vector `xq`. Using `xq` we will retrieve the most relevant chunks from the LangChain docs, like so:"
1326
      ]
1327
    },
1328
    {
1329
      "cell_type": "code",
1330
      "execution_count": 20,
1331
      "metadata": {
1332
        "id": "FMUPdX9cQQYC"
1333
      },
1334
      "outputs": [],
1335
      "source": [
1336
        "query = \"how do I use the LLMChain in LangChain?\"\n",
1337
        "\n",
1338
        "res = openai.Embedding.create(\n",
1339
        "    input=[query],\n",
1340
        "    engine=embed_model\n",
1341
        ")\n",
1342
        "\n",
1343
        "# retrieve from Pinecone\n",
1344
        "xq = res['data'][0]['embedding']\n",
1345
        "\n",
1346
        "# get relevant contexts (including the questions)\n",
1347
        "res = index.query(vector=xq, top_k=5, include_metadata=True)"
1348
      ]
1349
    },
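    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Each match returned by Pinecone carries its chunk text in the `metadata` field. As a rough sketch of how the retrieved chunks could be collected into contexts for GPT-4 (the exact prompt construction may differ later in the notebook), we can do:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# example only: gather the retrieved chunk texts to use as context for the model\n",
        "contexts = [match['metadata']['text'] for match in res['matches']]\n",
        "len(contexts)"
      ]
    },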
1350
    {
1351
      "cell_type": "code",
1352
      "execution_count": 21,
1353
      "metadata": {
1354
        "colab": {
1355
          "base_uri": "https://localhost:8080/"
1356
        },
1357
        "id": "zl9SrFPkQjg-",
1358
        "outputId": "f7e6e60b-ad81-4aae-89a9-3e2f04a65ccd"
1359
      },
1360
      "outputs": [
1361
        {
1362
          "data": {
1363
            "text/plain": [
1364
              "{'matches': [{'id': '35afffd0-a42a-42ee-ac6f-92b5491183fb-0',\n",
1365
              "              'metadata': {'chunk': 0.0,\n",
1366
              "                           'text': 'Source code for langchain.chains.llm\\n'\n",
1367
              "                                   '\"\"\"Chain that just formats a prompt and '\n",
1368
              "                                   'calls an LLM.\"\"\"\\n'\n",
1369
              "                                   'from __future__ import annotations\\n'\n",
1370
              "                                   'import warnings\\n'\n",
1371
              "                                   'from typing import Any, Dict, List, '\n",
1372
              "                                   'Optional, Sequence, Tuple, Union\\n'\n",
1373
              "                                   'from pydantic import Extra, Field\\n'\n",
1374
              "                                   'from langchain.base_language import '\n",
1375
              "                                   'BaseLanguageModel\\n'\n",
1376
              "                                   'from langchain.callbacks.manager import (\\n'\n",
1377
              "                                   '    AsyncCallbackManager,\\n'\n",
1378
              "                                   '    AsyncCallbackManagerForChainRun,\\n'\n",
1379
              "                                   '    CallbackManager,\\n'\n",
1380
              "                                   '    CallbackManagerForChainRun,\\n'\n",
1381
              "                                   '    Callbacks,\\n'\n",
1382
              "                                   ')\\n'\n",
1383
              "                                   'from langchain.chains.base import Chain\\n'\n",
1384
              "                                   'from langchain.input import '\n",
1385
              "                                   'get_colored_text\\n'\n",
1386
              "                                   'from langchain.load.dump import dumpd\\n'\n",
1387
              "                                   'from langchain.prompts.base import '\n",
1388
              "                                   'BasePromptTemplate\\n'\n",
1389
              "                                   'from langchain.prompts.prompt import '\n",
1390
              "                                   'PromptTemplate\\n'\n",
1391
              "                                   'from langchain.schema import (\\n'\n",
1392
              "                                   '    BaseLLMOutputParser,\\n'\n",
1393
              "                                   '    LLMResult,\\n'\n",
1394
              "                                   '    NoOpOutputParser,\\n'\n",
1395
              "                                   '    PromptValue,\\n'\n",
1396
              "                                   ')\\n'\n",
1397
              "                                   '[docs]class LLMChain(Chain):\\n'\n",
1398
              "                                   '    \"\"\"Chain to run queries against LLMs.\\n'\n",
1399
              "                                   '    Example:\\n'\n",
1400
              "                                   '        .. code-block:: python\\n'\n",
1401
              "                                   '            from langchain import '\n",
1402
              "                                   'LLMChain, OpenAI, PromptTemplate\\n'\n",
1403
              "                                   '            prompt_template = \"Tell me a '\n",
1404
              "                                   '{adjective} joke\"\\n'\n",
1405
              "                                   '            prompt = PromptTemplate(\\n'\n",
1406
              "                                   '                '\n",
1407
              "                                   'input_variables=[\"adjective\"], '\n",
1408
              "                                   'template=prompt_template\\n'\n",
1409
              "                                   '            )\\n'\n",
1410
              "                                   '            llm = LLMChain(llm=OpenAI(), '\n",
1411
              "                                   'prompt=prompt)\\n'\n",
1412
              "                                   '    \"\"\"\\n'\n",
1413
              "                                   '    @property\\n'\n",
1414
              "                                   '    def lc_serializable(self) -> bool:\\n'\n",
1415
              "                                   '        return True\\n'\n",
1416
              "                                   '    prompt: BasePromptTemplate\\n'\n",
1417
              "                                   '    \"\"\"Prompt object to use.\"\"\"\\n'\n",
1418
              "                                   '    llm: BaseLanguageModel\\n'\n",
1419
              "                                   '    \"\"\"Language model to call.\"\"\"\\n'\n",
1420
              "                                   '    output_key: str = \"text\"  #: :meta '\n",
1421
              "                                   'private:\\n'\n",
1422
              "                                   '    output_parser: BaseLLMOutputParser = '\n",
1423
              "                                   'Field(default_factory=NoOpOutputParser)\\n'\n",
1424
              "                                   '    \"\"\"Output parser to use.\\n'\n",
1425
              "                                   '    Defaults to one that takes the most '\n",
1426
              "                                   'likely string but does not change it \\n'\n",
1427
              "                                   '    otherwise.\"\"\"\\n'\n",
1428
              "                                   '    return_final_only: bool = True\\n'\n",
1429
              "                                   '    \"\"\"Whether to return only the final '\n",
1430
              "                                   'parsed result. Defaults to True.\\n'\n",
1431
              "                                   '    If false, will return a bunch of extra '\n",
1432
              "                                   'information about the generation.\"\"\"\\n'\n",
1433
              "                                   '    llm_kwargs: dict = '\n",
1434
              "                                   'Field(default_factory=dict)\\n'\n",
1435
              "                                   '    class Config:\\n'\n",
1436
              "                                   '        \"\"\"Configuration for this pydantic '\n",
1437
              "                                   'object.\"\"\"\\n'\n",
1438
              "                                   '        extra = Extra.forbid\\n'\n",
1439
              "                                   '        arbitrary_types_allowed = True',\n",
1440
              "                           'url': 'https://api.python.langchain.com/en/latest/_modules/langchain/chains/llm.html'},\n",
1441
              "              'score': 0.800940871,\n",
1442
              "              'values': []},\n",
1443
              "             {'id': '35cde68a-b909-43b6-b918-81c4eb2db5cd-82',\n",
1444
              "              'metadata': {'chunk': 82.0,\n",
1445
              "                           'text': 'Bases: langchain.chains.base.Chain\\n'\n",
1446
              "                                   'Chain for question-answering with '\n",
1447
              "                                   'self-verification.\\n'\n",
1448
              "                                   'Example\\n'\n",
1449
              "                                   'from langchain import OpenAI, '\n",
1450
              "                                   'LLMSummarizationCheckerChain\\n'\n",
1451
              "                                   'llm = OpenAI(temperature=0.0)\\n'\n",
1452
              "                                   'checker_chain = '\n",
1453
              "                                   'LLMSummarizationCheckerChain.from_llm(llm)\\n'\n",
1454
              "                                   'Parameters\\n'\n",
1455
              "                                   'memory '\n",
1456
              "                                   '(Optional[langchain.schema.BaseMemory]) '\n",
1457
              "                                   '– \\n'\n",
1458
              "                                   'callbacks '\n",
1459
              "                                   '(Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], '\n",
1460
              "                                   'langchain.callbacks.base.BaseCallbackManager]]) '\n",
1461
              "                                   '– \\n'\n",
1462
              "                                   'callback_manager '\n",
1463
              "                                   '(Optional[langchain.callbacks.base.BaseCallbackManager]) '\n",
1464
              "                                   '– \\n'\n",
1465
              "                                   'verbose (bool) – \\n'\n",
1466
              "                                   'tags (Optional[List[str]]) – \\n'\n",
1467
              "                                   'sequential_chain '\n",
1468
              "                                   '(langchain.chains.sequential.SequentialChain) '\n",
1469
              "                                   '– \\n'\n",
1470
              "                                   'llm '\n",
1471
              "                                   '(Optional[langchain.base_language.BaseLanguageModel]) '\n",
1472
              "                                   '– \\n'\n",
1473
              "                                   'create_assertions_prompt '\n",
1474
              "                                   '(langchain.prompts.prompt.PromptTemplate) '\n",
1475
              "                                   '– \\n'\n",
1476
              "                                   'check_assertions_prompt '\n",
1477
              "                                   '(langchain.prompts.prompt.PromptTemplate) '\n",
1478
              "                                   '– \\n'\n",
1479
              "                                   'revised_summary_prompt '\n",
1480
              "                                   '(langchain.prompts.prompt.PromptTemplate) '\n",
1481
              "                                   '– \\n'\n",
1482
              "                                   'are_all_true_prompt '\n",
1483
              "                                   '(langchain.prompts.prompt.PromptTemplate) '\n",
1484
              "                                   '– \\n'\n",
1485
              "                                   'input_key (str) – \\n'\n",
1486
              "                                   'output_key (str) – \\n'\n",
1487
              "                                   'max_checks (int) – \\n'\n",
1488
              "                                   'Return type\\n'\n",
1489
              "                                   'None',\n",
1490
              "                           'url': 'https://api.python.langchain.com/en/latest/modules/chains.html'},\n",
1491
              "              'score': 0.79580605,\n",
1492
              "              'values': []},\n",
1493
              "             {'id': '993db45b-4e3b-431d-a2a6-48ed5944912a-1',\n",
1494
              "              'metadata': {'chunk': 1.0,\n",
1495
              "                           'text': '[docs]    @classmethod\\n'\n",
1496
              "                                   '    def from_llm(\\n'\n",
1497
              "                                   '        cls,\\n'\n",
1498
              "                                   '        llm: BaseLanguageModel,\\n'\n",
1499
              "                                   '        chain: LLMChain,\\n'\n",
1500
              "                                   '        critique_prompt: '\n",
1501
              "                                   'BasePromptTemplate = CRITIQUE_PROMPT,\\n'\n",
1502
              "                                   '        revision_prompt: '\n",
1503
              "                                   'BasePromptTemplate = REVISION_PROMPT,\\n'\n",
1504
              "                                   '        **kwargs: Any,\\n'\n",
1505
              "                                   '    ) -> \"ConstitutionalChain\":\\n'\n",
1506
              "                                   '        \"\"\"Create a chain from an LLM.\"\"\"\\n'\n",
1507
              "                                   '        critique_chain = LLMChain(llm=llm, '\n",
1508
              "                                   'prompt=critique_prompt)\\n'\n",
1509
              "                                   '        revision_chain = LLMChain(llm=llm, '\n",
1510
              "                                   'prompt=revision_prompt)\\n'\n",
1511
              "                                   '        return cls(\\n'\n",
1512
              "                                   '            chain=chain,\\n'\n",
1513
              "                                   '            '\n",
1514
              "                                   'critique_chain=critique_chain,\\n'\n",
1515
              "                                   '            '\n",
1516
              "                                   'revision_chain=revision_chain,\\n'\n",
1517
              "                                   '            **kwargs,\\n'\n",
1518
              "                                   '        )\\n'\n",
1519
              "                                   '    @property\\n'\n",
1520
              "                                   '    def input_keys(self) -> List[str]:\\n'\n",
1521
              "                                   '        \"\"\"Defines the input keys.\"\"\"\\n'\n",
1522
              "                                   '        return self.chain.input_keys\\n'\n",
1523
              "                                   '    @property\\n'\n",
1524
              "                                   '    def output_keys(self) -> List[str]:\\n'\n",
1525
              "                                   '        \"\"\"Defines the output keys.\"\"\"\\n'\n",
1526
              "                                   '        if '\n",
1527
              "                                   'self.return_intermediate_steps:\\n'\n",
1528
              "                                   '            return [\"output\", '\n",
1529
              "                                   '\"critiques_and_revisions\", '\n",
1530
              "                                   '\"initial_output\"]\\n'\n",
1531
              "                                   '        return [\"output\"]\\n'\n",
1532
              "                                   '    def _call(\\n'\n",
1533
              "                                   '        self,\\n'\n",
1534
              "                                   '        inputs: Dict[str, Any],\\n'\n",
1535
              "                                   '        run_manager: '\n",
1536
              "                                   'Optional[CallbackManagerForChainRun] = '\n",
1537
              "                                   'None,\\n'\n",
1538
              "                                   '    ) -> Dict[str, Any]:\\n'\n",
1539
              "                                   '        _run_manager = run_manager or '\n",
1540
              "                                   'CallbackManagerForChainRun.get_noop_manager()\\n'\n",
1541
              "                                   '        response = self.chain.run(\\n'\n",
1542
              "                                   '            **inputs,\\n'\n",
1543
              "                                   '            '\n",
1544
              "                                   'callbacks=_run_manager.get_child(\"original\"),\\n'\n",
1545
              "                                   '        )\\n'\n",
1546
              "                                   '        initial_response = response\\n'\n",
1547
              "                                   '        input_prompt = '\n",
1548
              "                                   'self.chain.prompt.format(**inputs)\\n'\n",
1549
              "                                   '        _run_manager.on_text(\\n'\n",
1550
              "                                   '            text=\"Initial response: \" + '\n",
1551
              "                                   'response + \"\\\\n\\\\n\",\\n'\n",
1552
              "                                   '            verbose=self.verbose,\\n'\n",
1553
              "                                   '            color=\"yellow\",\\n'\n",
1554
              "                                   '        )\\n'\n",
1555
              "                                   '        critiques_and_revisions = []\\n'\n",
1556
              "                                   '        for constitutional_principle in '\n",
1557
              "                                   'self.constitutional_principles:\\n'\n",
1558
              "                                   '            # Do critique\\n'\n",
1559
              "                                   '            raw_critique = '\n",
1560
              "                                   'self.critique_chain.run(\\n'\n",
1561
              "                                   '                '\n",
1562
              "                                   'input_prompt=input_prompt,\\n'\n",
1563
              "                                   '                '\n",
1564
              "                                   'output_from_model=response,\\n'\n",
1565
              "                                   '                '\n",
1566
              "                                   'critique_request=constitutional_principle.critique_request,\\n'\n",
1567
              "                                   '                '\n",
1568
              "                                   'callbacks=_run_manager.get_child(\"critique\"),\\n'\n",
1569
              "                                   '            )\\n'\n",
1570
              "                                   '            critique = '\n",
1571
              "                                   'self._parse_critique(\\n'\n",
1572
              "                                   '                '\n",
1573
              "                                   'output_string=raw_critique,',\n",
1574
              "                           'url': 'https://api.python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html'},\n",
1575
              "              'score': 0.79369247,\n",
1576
              "              'values': []},\n",
1577
              "             {'id': 'adea5d40-2691-4bc9-9403-3360345bc25e-0',\n",
1578
              "              'metadata': {'chunk': 0.0,\n",
1579
              "                           'text': 'Source code for '\n",
1580
              "                                   'langchain.chains.conversation.base\\n'\n",
1581
              "                                   '\"\"\"Chain that carries on a conversation '\n",
1582
              "                                   'and calls an LLM.\"\"\"\\n'\n",
1583
              "                                   'from typing import Dict, List\\n'\n",
1584
              "                                   'from pydantic import Extra, Field, '\n",
1585
              "                                   'root_validator\\n'\n",
1586
              "                                   'from langchain.chains.conversation.prompt '\n",
1587
              "                                   'import PROMPT\\n'\n",
1588
              "                                   'from langchain.chains.llm import LLMChain\\n'\n",
1589
              "                                   'from langchain.memory.buffer import '\n",
1590
              "                                   'ConversationBufferMemory\\n'\n",
1591
              "                                   'from langchain.prompts.base import '\n",
1592
              "                                   'BasePromptTemplate\\n'\n",
1593
              "                                   'from langchain.schema import BaseMemory\\n'\n",
1594
              "                                   '[docs]class ConversationChain(LLMChain):\\n'\n",
1595
              "                                   '    \"\"\"Chain to have a conversation and '\n",
1596
              "                                   'load context from memory.\\n'\n",
1597
              "                                   '    Example:\\n'\n",
1598
              "                                   '        .. code-block:: python\\n'\n",
1599
              "                                   '            from langchain import '\n",
1600
              "                                   'ConversationChain, OpenAI\\n'\n",
1601
              "                                   '            conversation = '\n",
1602
              "                                   'ConversationChain(llm=OpenAI())\\n'\n",
1603
              "                                   '    \"\"\"\\n'\n",
1604
              "                                   '    memory: BaseMemory = '\n",
1605
              "                                   'Field(default_factory=ConversationBufferMemory)\\n'\n",
1606
              "                                   '    \"\"\"Default memory store.\"\"\"\\n'\n",
1607
              "                                   '    prompt: BasePromptTemplate = PROMPT\\n'\n",
1608
              "                                   '    \"\"\"Default conversation prompt to '\n",
1609
              "                                   'use.\"\"\"\\n'\n",
1610
              "                                   '    input_key: str = \"input\"  #: :meta '\n",
1611
              "                                   'private:\\n'\n",
1612
              "                                   '    output_key: str = \"response\"  #: :meta '\n",
1613
              "                                   'private:\\n'\n",
1614
              "                                   '    class Config:\\n'\n",
1615
              "                                   '        \"\"\"Configuration for this pydantic '\n",
1616
              "                                   'object.\"\"\"\\n'\n",
1617
              "                                   '        extra = Extra.forbid\\n'\n",
1618
              "                                   '        arbitrary_types_allowed = True\\n'\n",
1619
              "                                   '    @property\\n'\n",
1620
              "                                   '    def input_keys(self) -> List[str]:\\n'\n",
1621
              "                                   '        \"\"\"Use this since so some prompt '\n",
1622
              "                                   'vars come from history.\"\"\"\\n'\n",
1623
              "                                   '        return [self.input_key]\\n'\n",
1624
              "                                   '    @root_validator()\\n'\n",
1625
              "                                   '    def '\n",
1626
              "                                   'validate_prompt_input_variables(cls, '\n",
1627
              "                                   'values: Dict) -> Dict:\\n'\n",
1628
              "                                   '        \"\"\"Validate that prompt input '\n",
1629
              "                                   'variables are consistent.\"\"\"\\n'\n",
1630
              "                                   '        memory_keys = '\n",
1631
              "                                   'values[\"memory\"].memory_variables\\n'\n",
1632
              "                                   '        input_key = values[\"input_key\"]\\n'\n",
1633
              "                                   '        if input_key in memory_keys:\\n'\n",
1634
              "                                   '            raise ValueError(\\n'\n",
1635
              "                                   '                f\"The input key '\n",
1636
              "                                   '{input_key} was also found in the memory '\n",
1637
              "                                   'keys \"\\n'\n",
1638
              "                                   '                f\"({memory_keys}) - please '\n",
1639
              "                                   'provide keys that don\\'t overlap.\"\\n'\n",
1640
              "                                   '            )\\n'\n",
1641
              "                                   '        prompt_variables = '\n",
1642
              "                                   'values[\"prompt\"].input_variables\\n'\n",
1643
              "                                   '        expected_keys = memory_keys + '\n",
1644
              "                                   '[input_key]\\n'\n",
1645
              "                                   '        if set(expected_keys) != '\n",
1646
              "                                   'set(prompt_variables):\\n'\n",
1647
              "                                   '            raise ValueError(\\n'\n",
1648
              "                                   '                \"Got unexpected prompt '\n",
1649
              "                                   'input variables. The prompt expects \"\\n'\n",
1650
              "                                   '                f\"{prompt_variables}, but '\n",
1651
              "                                   'got {memory_keys} as inputs from \"\\n'\n",
1652
              "                                   '                f\"memory, and {input_key} '\n",
1653
              "                                   'as the normal input key.\"\\n'\n",
1654
              "                                   '            )\\n'\n",
1655
              "                                   '        return values',\n",
1656
              "                           'url': 'https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversation/base.html'},\n",
1657
              "              'score': 0.792259932,\n",
1658
              "              'values': []},\n",
1659
              "             {'id': '3b6f9660-0346-4992-a6f5-b9cc2977f446-5',\n",
1660
              "              'metadata': {'chunk': 5.0,\n",
1661
              "                           'text': 'callbacks: Callbacks = None,\\n'\n",
1662
              "                                   '        **kwargs: Any,\\n'\n",
1663
              "                                   '    ) -> '\n",
1664
              "                                   'BaseConversationalRetrievalChain:\\n'\n",
1665
              "                                   '        \"\"\"Load chain from LLM.\"\"\"\\n'\n",
1666
              "                                   '        combine_docs_chain_kwargs = '\n",
1667
              "                                   'combine_docs_chain_kwargs or {}\\n'\n",
1668
              "                                   '        doc_chain = load_qa_chain(\\n'\n",
1669
              "                                   '            llm,\\n'\n",
1670
              "                                   '            chain_type=chain_type,\\n'\n",
1671
              "                                   '            callbacks=callbacks,\\n'\n",
1672
              "                                   '            **combine_docs_chain_kwargs,\\n'\n",
1673
              "                                   '        )\\n'\n",
1674
              "                                   '        condense_question_chain = '\n",
1675
              "                                   'LLMChain(\\n'\n",
1676
              "                                   '            llm=llm, '\n",
1677
              "                                   'prompt=condense_question_prompt, '\n",
1678
              "                                   'callbacks=callbacks\\n'\n",
1679
              "                                   '        )\\n'\n",
1680
              "                                   '        return cls(\\n'\n",
1681
              "                                   '            vectorstore=vectorstore,\\n'\n",
1682
              "                                   '            combine_docs_chain=doc_chain,\\n'\n",
1683
              "                                   '            '\n",
1684
              "                                   'question_generator=condense_question_chain,\\n'\n",
1685
              "                                   '            callbacks=callbacks,\\n'\n",
1686
              "                                   '            **kwargs,\\n'\n",
1687
              "                                   '        )',\n",
1688
              "                           'url': 'https://api.python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html'},\n",
1689
              "              'score': 0.791279614,\n",
1690
              "              'values': []}],\n",
1691
              " 'namespace': ''}"
1692
            ]
1693
          },
1694
          "execution_count": 21,
1695
          "metadata": {},
1696
          "output_type": "execute_result"
1697
        }
1698
      ],
1699
      "source": [
1700
        "res"
1701
      ]
1702
    },
1703
    {
1704
      "cell_type": "markdown",
1705
      "metadata": {
1706
        "id": "MoBSiDLIUADZ"
1707
      },
1708
      "source": [
1709
        "With retrieval complete, we move on to feeding these into GPT-4 to produce answers."
1710
      ]
1711
    },
1712
    {
1713
      "cell_type": "markdown",
1714
      "metadata": {
1715
        "id": "qfzS4-6-UXgX"
1716
      },
1717
      "source": [
1718
        "## Retrieval Augmented Generation"
1719
      ]
1720
    },
1721
    {
1722
      "cell_type": "markdown",
1723
      "metadata": {
1724
        "id": "XPC1jQaKUcy0"
1725
      },
1726
      "source": [
1727
        "GPT-4 is currently accessed via the `ChatCompletions` endpoint of OpenAI. To add the information we retrieved into the model, we need to pass it into our user prompts *alongside* our original query. We can do that like so:"
1728
      ]
1729
    },
1730
    {
1731
      "cell_type": "code",
1732
      "execution_count": 22,
1733
      "metadata": {
1734
        "id": "unZstoHNUHeG"
1735
      },
1736
      "outputs": [],
1737
      "source": [
1738
        "# get list of retrieved text\n",
1739
        "contexts = [item['metadata']['text'] for item in res['matches']]\n",
1740
        "\n",
1741
        "augmented_query = \"\\n\\n---\\n\\n\".join(contexts)+\"\\n\\n-----\\n\\n\"+query"
1742
      ]
1743
    },
1744
    {
1745
      "cell_type": "code",
1746
      "execution_count": 27,
1747
      "metadata": {
1748
        "colab": {
1749
          "base_uri": "https://localhost:8080/"
1750
        },
1751
        "id": "LRcEHm0Z9fXE",
1752
        "outputId": "636c6825-ecd1-4953-ee25-ebabcb3a2fed"
1753
      },
1754
      "outputs": [
1755
        {
1756
          "name": "stdout",
1757
          "output_type": "stream",
1758
          "text": [
1759
            "Source code for langchain.chains.llm\n",
1760
            "\"\"\"Chain that just formats a prompt and calls an LLM.\"\"\"\n",
1761
            "from __future__ import annotations\n",
1762
            "import warnings\n",
1763
            "from typing import Any, Dict, List, Optional, Sequence, Tuple, Union\n",
1764
            "from pydantic import Extra, Field\n",
1765
            "from langchain.base_language import BaseLanguageModel\n",
1766
            "from langchain.callbacks.manager import (\n",
1767
            "    AsyncCallbackManager,\n",
1768
            "    AsyncCallbackManagerForChainRun,\n",
1769
            "    CallbackManager,\n",
1770
            "    CallbackManagerForChainRun,\n",
1771
            "    Callbacks,\n",
1772
            ")\n",
1773
            "from langchain.chains.base import Chain\n",
1774
            "from langchain.input import get_colored_text\n",
1775
            "from langchain.load.dump import dumpd\n",
1776
            "from langchain.prompts.base import BasePromptTemplate\n",
1777
            "from langchain.prompts.prompt import PromptTemplate\n",
1778
            "from langchain.schema import (\n",
1779
            "    BaseLLMOutputParser,\n",
1780
            "    LLMResult,\n",
1781
            "    NoOpOutputParser,\n",
1782
            "    PromptValue,\n",
1783
            ")\n",
1784
            "[docs]class LLMChain(Chain):\n",
1785
            "    \"\"\"Chain to run queries against LLMs.\n",
1786
            "    Example:\n",
1787
            "        .. code-block:: python\n",
1788
            "            from langchain import LLMChain, OpenAI, PromptTemplate\n",
1789
            "            prompt_template = \"Tell me a {adjective} joke\"\n",
1790
            "            prompt = PromptTemplate(\n",
1791
            "                input_variables=[\"adjective\"], template=prompt_template\n",
1792
            "            )\n",
1793
            "            llm = LLMChain(llm=OpenAI(), prompt=prompt)\n",
1794
            "    \"\"\"\n",
1795
            "    @property\n",
1796
            "    def lc_serializable(self) -> bool:\n",
1797
            "        return True\n",
1798
            "    prompt: BasePromptTemplate\n",
1799
            "    \"\"\"Prompt object to use.\"\"\"\n",
1800
            "    llm: BaseLanguageModel\n",
1801
            "    \"\"\"Language model to call.\"\"\"\n",
1802
            "    output_key: str = \"text\"  #: :meta private:\n",
1803
            "    output_parser: BaseLLMOutputParser = Field(default_factory=NoOpOutputParser)\n",
1804
            "    \"\"\"Output parser to use.\n",
1805
            "    Defaults to one that takes the most likely string but does not change it \n",
1806
            "    otherwise.\"\"\"\n",
1807
            "    return_final_only: bool = True\n",
1808
            "    \"\"\"Whether to return only the final parsed result. Defaults to True.\n",
1809
            "    If false, will return a bunch of extra information about the generation.\"\"\"\n",
1810
            "    llm_kwargs: dict = Field(default_factory=dict)\n",
1811
            "    class Config:\n",
1812
            "        \"\"\"Configuration for this pydantic object.\"\"\"\n",
1813
            "        extra = Extra.forbid\n",
1814
            "        arbitrary_types_allowed = True\n",
1815
            "\n",
1816
            "---\n",
1817
            "\n",
1818
            "Bases: langchain.chains.base.Chain\n",
1819
            "Chain for question-answering with self-verification.\n",
1820
            "Example\n",
1821
            "from langchain import OpenAI, LLMSummarizationCheckerChain\n",
1822
            "llm = OpenAI(temperature=0.0)\n",
1823
            "checker_chain = LLMSummarizationCheckerChain.from_llm(llm)\n",
1824
            "Parameters\n",
1825
            "memory (Optional[langchain.schema.BaseMemory]) – \n",
1826
            "callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – \n",
1827
            "callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – \n",
1828
            "verbose (bool) – \n",
1829
            "tags (Optional[List[str]]) – \n",
1830
            "sequential_chain (langchain.chains.sequential.SequentialChain) – \n",
1831
            "llm (Optional[langchain.base_language.BaseLanguageModel]) – \n",
1832
            "create_assertions_prompt (langchain.prompts.prompt.PromptTemplate) – \n",
1833
            "check_assertions_prompt (langchain.prompts.prompt.PromptTemplate) – \n",
1834
            "revised_summary_prompt (langchain.prompts.prompt.PromptTemplate) – \n",
1835
            "are_all_true_prompt (langchain.prompts.prompt.PromptTemplate) – \n",
1836
            "input_key (str) – \n",
1837
            "output_key (str) – \n",
1838
            "max_checks (int) – \n",
1839
            "Return type\n",
1840
            "None\n",
1841
            "\n",
1842
            "---\n",
1843
            "\n",
1844
            "[docs]    @classmethod\n",
1845
            "    def from_llm(\n",
1846
            "        cls,\n",
1847
            "        llm: BaseLanguageModel,\n",
1848
            "        chain: LLMChain,\n",
1849
            "        critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT,\n",
1850
            "        revision_prompt: BasePromptTemplate = REVISION_PROMPT,\n",
1851
            "        **kwargs: Any,\n",
1852
            "    ) -> \"ConstitutionalChain\":\n",
1853
            "        \"\"\"Create a chain from an LLM.\"\"\"\n",
1854
            "        critique_chain = LLMChain(llm=llm, prompt=critique_prompt)\n",
1855
            "        revision_chain = LLMChain(llm=llm, prompt=revision_prompt)\n",
1856
            "        return cls(\n",
1857
            "            chain=chain,\n",
1858
            "            critique_chain=critique_chain,\n",
1859
            "            revision_chain=revision_chain,\n",
1860
            "            **kwargs,\n",
1861
            "        )\n",
1862
            "    @property\n",
1863
            "    def input_keys(self) -> List[str]:\n",
1864
            "        \"\"\"Defines the input keys.\"\"\"\n",
1865
            "        return self.chain.input_keys\n",
1866
            "    @property\n",
1867
            "    def output_keys(self) -> List[str]:\n",
1868
            "        \"\"\"Defines the output keys.\"\"\"\n",
1869
            "        if self.return_intermediate_steps:\n",
1870
            "            return [\"output\", \"critiques_and_revisions\", \"initial_output\"]\n",
1871
            "        return [\"output\"]\n",
1872
            "    def _call(\n",
1873
            "        self,\n",
1874
            "        inputs: Dict[str, Any],\n",
1875
            "        run_manager: Optional[CallbackManagerForChainRun] = None,\n",
1876
            "    ) -> Dict[str, Any]:\n",
1877
            "        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()\n",
1878
            "        response = self.chain.run(\n",
1879
            "            **inputs,\n",
1880
            "            callbacks=_run_manager.get_child(\"original\"),\n",
1881
            "        )\n",
1882
            "        initial_response = response\n",
1883
            "        input_prompt = self.chain.prompt.format(**inputs)\n",
1884
            "        _run_manager.on_text(\n",
1885
            "            text=\"Initial response: \" + response + \"\\n\\n\",\n",
1886
            "            verbose=self.verbose,\n",
1887
            "            color=\"yellow\",\n",
1888
            "        )\n",
1889
            "        critiques_and_revisions = []\n",
1890
            "        for constitutional_principle in self.constitutional_principles:\n",
1891
            "            # Do critique\n",
1892
            "            raw_critique = self.critique_chain.run(\n",
1893
            "                input_prompt=input_prompt,\n",
1894
            "                output_from_model=response,\n",
1895
            "                critique_request=constitutional_principle.critique_request,\n",
1896
            "                callbacks=_run_manager.get_child(\"critique\"),\n",
1897
            "            )\n",
1898
            "            critique = self._parse_critique(\n",
1899
            "                output_string=raw_critique,\n",
1900
            "\n",
1901
            "---\n",
1902
            "\n",
1903
            "Source code for langchain.chains.conversation.base\n",
1904
            "\"\"\"Chain that carries on a conversation and calls an LLM.\"\"\"\n",
1905
            "from typing import Dict, List\n",
1906
            "from pydantic import Extra, Field, root_validator\n",
1907
            "from langchain.chains.conversation.prompt import PROMPT\n",
1908
            "from langchain.chains.llm import LLMChain\n",
1909
            "from langchain.memory.buffer import ConversationBufferMemory\n",
1910
            "from langchain.prompts.base import BasePromptTemplate\n",
1911
            "from langchain.schema import BaseMemory\n",
1912
            "[docs]class ConversationChain(LLMChain):\n",
1913
            "    \"\"\"Chain to have a conversation and load context from memory.\n",
1914
            "    Example:\n",
1915
            "        .. code-block:: python\n",
1916
            "            from langchain import ConversationChain, OpenAI\n",
1917
            "            conversation = ConversationChain(llm=OpenAI())\n",
1918
            "    \"\"\"\n",
1919
            "    memory: BaseMemory = Field(default_factory=ConversationBufferMemory)\n",
1920
            "    \"\"\"Default memory store.\"\"\"\n",
1921
            "    prompt: BasePromptTemplate = PROMPT\n",
1922
            "    \"\"\"Default conversation prompt to use.\"\"\"\n",
1923
            "    input_key: str = \"input\"  #: :meta private:\n",
1924
            "    output_key: str = \"response\"  #: :meta private:\n",
1925
            "    class Config:\n",
1926
            "        \"\"\"Configuration for this pydantic object.\"\"\"\n",
1927
            "        extra = Extra.forbid\n",
1928
            "        arbitrary_types_allowed = True\n",
1929
            "    @property\n",
1930
            "    def input_keys(self) -> List[str]:\n",
1931
            "        \"\"\"Use this since so some prompt vars come from history.\"\"\"\n",
1932
            "        return [self.input_key]\n",
1933
            "    @root_validator()\n",
1934
            "    def validate_prompt_input_variables(cls, values: Dict) -> Dict:\n",
1935
            "        \"\"\"Validate that prompt input variables are consistent.\"\"\"\n",
1936
            "        memory_keys = values[\"memory\"].memory_variables\n",
1937
            "        input_key = values[\"input_key\"]\n",
1938
            "        if input_key in memory_keys:\n",
1939
            "            raise ValueError(\n",
1940
            "                f\"The input key {input_key} was also found in the memory keys \"\n",
1941
            "                f\"({memory_keys}) - please provide keys that don't overlap.\"\n",
1942
            "            )\n",
1943
            "        prompt_variables = values[\"prompt\"].input_variables\n",
1944
            "        expected_keys = memory_keys + [input_key]\n",
1945
            "        if set(expected_keys) != set(prompt_variables):\n",
1946
            "            raise ValueError(\n",
1947
            "                \"Got unexpected prompt input variables. The prompt expects \"\n",
1948
            "                f\"{prompt_variables}, but got {memory_keys} as inputs from \"\n",
1949
            "                f\"memory, and {input_key} as the normal input key.\"\n",
1950
            "            )\n",
1951
            "        return values\n",
1952
            "\n",
1953
            "---\n",
1954
            "\n",
1955
            "callbacks: Callbacks = None,\n",
1956
            "        **kwargs: Any,\n",
1957
            "    ) -> BaseConversationalRetrievalChain:\n",
1958
            "        \"\"\"Load chain from LLM.\"\"\"\n",
1959
            "        combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}\n",
1960
            "        doc_chain = load_qa_chain(\n",
1961
            "            llm,\n",
1962
            "            chain_type=chain_type,\n",
1963
            "            callbacks=callbacks,\n",
1964
            "            **combine_docs_chain_kwargs,\n",
1965
            "        )\n",
1966
            "        condense_question_chain = LLMChain(\n",
1967
            "            llm=llm, prompt=condense_question_prompt, callbacks=callbacks\n",
1968
            "        )\n",
1969
            "        return cls(\n",
1970
            "            vectorstore=vectorstore,\n",
1971
            "            combine_docs_chain=doc_chain,\n",
1972
            "            question_generator=condense_question_chain,\n",
1973
            "            callbacks=callbacks,\n",
1974
            "            **kwargs,\n",
1975
            "        )\n",
1976
            "\n",
1977
            "-----\n",
1978
            "\n",
1979
            "how do I use the LLMChain in LangChain?\n"
1980
          ]
1981
        }
1982
      ],
1983
      "source": [
1984
        "print(augmented_query)"
1985
      ]
1986
    },
1987
    {
1988
      "cell_type": "markdown",
1989
      "metadata": {
1990
        "id": "sihH_GMiV5_p"
1991
      },
1992
      "source": [
1993
        "Now we ask the question:"
1994
      ]
1995
    },
1996
    {
1997
      "cell_type": "code",
1998
      "execution_count": 28,
1999
      "metadata": {
2000
        "id": "IThBqBi8V70d"
2001
      },
2002
      "outputs": [],
2003
      "source": [
2004
        "# system message to 'prime' the model\n",
2005
        "primer = f\"\"\"You are Q&A bot. A highly intelligent system that answers\n",
2006
        "user questions based on the information provided by the user above\n",
2007
        "each question. If the information can not be found in the information\n",
2008
        "provided by the user you truthfully say \"I don't know\".\n",
2009
        "\"\"\"\n",
2010
        "\n",
2011
        "res = openai.ChatCompletion.create(\n",
2012
        "    model=\"gpt-4\",\n",
2013
        "    messages=[\n",
2014
        "        {\"role\": \"system\", \"content\": primer},\n",
2015
        "        {\"role\": \"user\", \"content\": augmented_query}\n",
2016
        "    ]\n",
2017
        ")"
2018
      ]
2019
    },
2020
    {
2021
      "cell_type": "markdown",
2022
      "metadata": {
2023
        "id": "QvS1yJhOWpiJ"
2024
      },
2025
      "source": [
2026
        "To display this response nicely, we will display it in markdown."
2027
      ]
2028
    },
2029
    {
2030
      "cell_type": "code",
2031
      "execution_count": 29,
2032
      "metadata": {
2033
        "colab": {
2034
          "base_uri": "https://localhost:8080/",
2035
          "height": 465
2036
        },
2037
        "id": "RDo2qeMHWto1",
2038
        "outputId": "9a9b677f-9b4f-4f77-822d-80baf75ed04a"
2039
      },
2040
      "outputs": [
2041
        {
2042
          "data": {
2043
            "text/markdown": [
2044
              "To use the LLMChain in LangChain, you need to first import the necessary modules and classes. In this example, we will use the OpenAI language model. Follow the steps below:\n",
2045
              "\n",
2046
              "1. Import all required modules and classes:\n",
2047
              "\n",
2048
              "```python\n",
2049
              "from langchain import LLMChain, OpenAI, PromptTemplate\n",
2050
              "```\n",
2051
              "\n",
2052
              "2. Define the prompt template you want to use with the language model. For example, if you want to create jokes based on provided adjectives:\n",
2053
              "\n",
2054
              "```python\n",
2055
              "prompt_template = \"Tell me a {adjective} joke\"\n",
2056
              "```\n",
2057
              "\n",
2058
              "3. Create a PromptTemplate object passing the input_variables and template:\n",
2059
              "\n",
2060
              "```python\n",
2061
              "prompt = PromptTemplate(input_variables=[\"adjective\"], template=prompt_template)\n",
2062
              "```\n",
2063
              "\n",
2064
              "4. Instantiate the OpenAI language model:\n",
2065
              "\n",
2066
              "```python\n",
2067
              "llm = OpenAI()\n",
2068
              "```\n",
2069
              "\n",
2070
              "5. Create the LLMChain object using the OpenAI language model and the created prompt:\n",
2071
              "\n",
2072
              "```python\n",
2073
              "llm_chain = LLMChain(llm=llm, prompt=prompt)\n",
2074
              "```\n",
2075
              "\n",
2076
              "Now you can use the `llm_chain` object to generate jokes based on provided adjectives. For example:\n",
2077
              "\n",
2078
              "```python\n",
2079
              "response = llm_chain.run(adjective=\"funny\")\n",
2080
              "print(response)\n",
2081
              "```\n",
2082
              "\n",
2083
              "This will generate and print a funny joke based on the predefined prompt template. Replace `\"funny\"` with any other adjective to get a different result."
2084
            ],
2085
            "text/plain": [
2086
              "<IPython.core.display.Markdown object>"
2087
            ]
2088
          },
2089
          "metadata": {},
2090
          "output_type": "display_data"
2091
        }
2092
      ],
2093
      "source": [
2094
        "from IPython.display import Markdown\n",
2095
        "\n",
2096
        "display(Markdown(res['choices'][0]['message']['content']))"
2097
      ]
2098
    },
2099
    {
2100
      "cell_type": "markdown",
2101
      "metadata": {
2102
        "id": "eJ-a8MHg0eYQ"
2103
      },
2104
      "source": [
2105
        "Let's compare this to a non-augmented query..."
2106
      ]
2107
    },
2108
    {
2109
      "cell_type": "code",
2110
      "execution_count": 30,
2111
      "metadata": {
2112
        "colab": {
2113
          "base_uri": "https://localhost:8080/",
2114
          "height": 46
2115
        },
2116
        "id": "vwhaSgdF0ZDX",
2117
        "outputId": "ce085b0f-e0da-4c00-f3f5-43b49e64568c"
2118
      },
2119
      "outputs": [
2120
        {
2121
          "data": {
2122
            "text/markdown": [
2123
              "I don't know."
2124
            ],
2125
            "text/plain": [
2126
              "<IPython.core.display.Markdown object>"
2127
            ]
2128
          },
2129
          "metadata": {},
2130
          "output_type": "display_data"
2131
        }
2132
      ],
2133
      "source": [
2134
        "res = openai.ChatCompletion.create(\n",
2135
        "    model=\"gpt-4\",\n",
2136
        "    messages=[\n",
2137
        "        {\"role\": \"system\", \"content\": primer},\n",
2138
        "        {\"role\": \"user\", \"content\": query}\n",
2139
        "    ]\n",
2140
        ")\n",
2141
        "display(Markdown(res['choices'][0]['message']['content']))"
2142
      ]
2143
    },
2144
    {
2145
      "cell_type": "markdown",
2146
      "metadata": {
2147
        "id": "5CSsA-dW0m_P"
2148
      },
2149
      "source": [
2150
        "If we drop the `\"I don't know\"` part of the `primer`?"
2151
      ]
2152
    },
2153
    {
2154
      "cell_type": "code",
2155
      "execution_count": 31,
2156
      "metadata": {
2157
        "colab": {
2158
          "base_uri": "https://localhost:8080/",
2159
          "height": 371
2160
        },
2161
        "id": "Z3svdTCZ0iJ2",
2162
        "outputId": "19673965-a2f8-45be-b82a-6e491aa88416"
2163
      },
2164
      "outputs": [
2165
        {
2166
          "data": {
2167
            "text/markdown": [
2168
              "LLMChain, which stands for LangChain's Language Model Chain, is a feature within the LangChain ecosystem that allows connecting multiple language models to achieve more accurate translations and processing of natural language data.\n",
2169
              "\n",
2170
              "To use the LLMChain in LangChain, follow these steps:\n",
2171
              "\n",
2172
              "1. Sign up or log in: If you don't have an account with LangChain, sign up or log in to your existing account.\n",
2173
              "\n",
2174
              "2. Configure the LLMChain: Navigate to the LLMChain settings or configuration page (it may be under \"Settings\" or \"LLMChain Configuration\"). Here, you'll add, remove, or re-order language models in your chain.\n",
2175
              "\n",
2176
              "3. Add language models: Choose from the available language models and add them to your chain. Typically, language models are selected based on their performance or scope for specific language pairs or types of text.\n",
2177
              "\n",
2178
              "4. Set the order of language models: Arrange the order of the language models in your chain based on your preferences or needs. The LLMChain will process the input text in the order you've set, starting from the first model, and pass the output to the subsequent models in the chain.\n",
2179
              "\n",
2180
              "5. Test the LLMChain: Once you have configured your LLMChain, test it by inputting text and reviewing the generated translations or processed output. This step will allow you to fine-tune the chain to ensure optimal performance.\n",
2181
              "\n",
2182
              "6. Use the LLMChain in your translation projects or language processing tasks: With your LLMChain set up and tested, you can now use it for your translation or language processing needs.\n",
2183
              "\n",
2184
              "Remember that the LLMChain is part of the LangChain ecosystem, so any changes or modifications to it may require some knowledge of the platform and its interface. If needed, consult the official documentation or seek support from the community to ensure a seamless experience."
2185
            ],
2186
            "text/plain": [
2187
              "<IPython.core.display.Markdown object>"
2188
            ]
2189
          },
2190
          "metadata": {},
2191
          "output_type": "display_data"
2192
        }
2193
      ],
2194
      "source": [
2195
        "res = openai.ChatCompletion.create(\n",
2196
        "    model=\"gpt-4\",\n",
2197
        "    messages=[\n",
2198
        "        {\"role\": \"system\", \"content\": \"You are Q&A bot. A highly intelligent system that answers user questions\"},\n",
2199
        "        {\"role\": \"user\", \"content\": query}\n",
2200
        "    ]\n",
2201
        ")\n",
2202
        "display(Markdown(res['choices'][0]['message']['content']))"
2203
      ]
2204
    },
2205
    {
2206
      "cell_type": "markdown",
2207
      "metadata": {
2208
        "id": "GcGon5672lBb"
2209
      },
2210
      "source": [
2211
        "Then we see something even worse than `\"I don't know\"` — hallucinations. Clearly augmenting our queries with additional context can make a huge difference to the performance of our system.\n",
2212
        "\n",
2213
        "Great, we've seen how to augment GPT-4 with semantic search to allow us to answer LangChain specific queries.\n",
2214
        "\n",
2215
        "Once you're finished, we delete the index to save resources."
2216
      ]
2217
    },
2218
    {
2219
      "cell_type": "code",
2220
      "execution_count": 32,
2221
      "metadata": {
2222
        "id": "Ah_vfEHV2khx"
2223
      },
2224
      "outputs": [],
2225
      "source": [
2226
        "pinecone.delete_index(index_name)"
2227
      ]
2228
    },
2229
    {
2230
      "cell_type": "markdown",
2231
      "metadata": {
2232
        "id": "iEUMlO8M2h4Y"
2233
      },
2234
      "source": [
2235
        "---"
2236
      ]
2237
    }
2238
  ],
2239
  "metadata": {
2240
    "colab": {
2241
      "provenance": []
2242
    },
2243
    "gpuClass": "standard",
2244
    "kernelspec": {
2245
      "display_name": "Python 3",
2246
      "name": "python3"
2247
    },
2248
    "language_info": {
2249
      "name": "python"
2250
    },
2251
    "widgets": {
2252
      "application/vnd.jupyter.widget-state+json": {
2253
        "01296cac12234000a13bdca80b31ba8b": {
2254
          "model_module": "@jupyter-widgets/controls",
2255
          "model_module_version": "1.5.0",
2256
          "model_name": "HBoxModel",
2257
          "state": {
2258
            "_dom_classes": [],
2259
            "_model_module": "@jupyter-widgets/controls",
2260
            "_model_module_version": "1.5.0",
2261
            "_model_name": "HBoxModel",
2262
            "_view_count": null,
2263
            "_view_module": "@jupyter-widgets/controls",
2264
            "_view_module_version": "1.5.0",
2265
            "_view_name": "HBoxView",
2266
            "box_style": "",
2267
            "children": [
2268
              "IPY_MODEL_930601ee00454f71b1114c4aaff0175b",
2269
              "IPY_MODEL_e976d05935374e47b86773ca852cfa9e",
2270
              "IPY_MODEL_bf9b29814dd04a22a7ff4ca1c6160c21"
2271
            ],
2272
            "layout": "IPY_MODEL_6d110cd070fe4776b9449de74759dff3"
2273
          }
2274
        },
2275
        "037869180d9d4b1eb1bdbed67337e349": {
2276
          "model_module": "@jupyter-widgets/controls",
2277
          "model_module_version": "1.5.0",
2278
          "model_name": "DescriptionStyleModel",
2279
          "state": {
2280
            "_model_module": "@jupyter-widgets/controls",
2281
            "_model_module_version": "1.5.0",
2282
            "_model_name": "DescriptionStyleModel",
2283
            "_view_count": null,
2284
            "_view_module": "@jupyter-widgets/base",
2285
            "_view_module_version": "1.2.0",
2286
            "_view_name": "StyleView",
2287
            "description_width": ""
2288
          }
2289
        },
2290
        "05199362d95449699254c45c1d5cee94": {
2291
          "model_module": "@jupyter-widgets/controls",
2292
          "model_module_version": "1.5.0",
2293
          "model_name": "DescriptionStyleModel",
2294
          "state": {
2295
            "_model_module": "@jupyter-widgets/controls",
2296
            "_model_module_version": "1.5.0",
2297
            "_model_name": "DescriptionStyleModel",
2298
            "_view_count": null,
2299
            "_view_module": "@jupyter-widgets/base",
2300
            "_view_module_version": "1.2.0",
2301
            "_view_name": "StyleView",
2302
            "description_width": ""
2303
          }
2304
        },
2305
        "05a0a1ebc92f463d9f3e953e51742a85": {
2306
          "model_module": "@jupyter-widgets/controls",
2307
          "model_module_version": "1.5.0",
2308
          "model_name": "DescriptionStyleModel",
2309
          "state": {
2310
            "_model_module": "@jupyter-widgets/controls",
2311
            "_model_module_version": "1.5.0",
2312
            "_model_name": "DescriptionStyleModel",
2313
            "_view_count": null,
2314
            "_view_module": "@jupyter-widgets/base",
2315
            "_view_module_version": "1.2.0",
2316
            "_view_name": "StyleView",
2317
            "description_width": ""
2318
          }
2319
        },
2320
        "08c52a0369b74e7da99574ec29612189": {
2321
          "model_module": "@jupyter-widgets/controls",
2322
          "model_module_version": "1.5.0",
2323
          "model_name": "HTMLModel",
2324
          "state": {
2325
            "_dom_classes": [],
2326
            "_model_module": "@jupyter-widgets/controls",
2327
            "_model_module_version": "1.5.0",
2328
            "_model_name": "HTMLModel",
2329
            "_view_count": null,
2330
            "_view_module": "@jupyter-widgets/controls",
2331
            "_view_module_version": "1.5.0",
2332
            "_view_name": "HTMLView",
2333
            "description": "",
2334
            "description_tooltip": null,
2335
            "layout": "IPY_MODEL_dd3ece4c242d4eae946f8bc4f95d1dbf",
2336
            "placeholder": "​",
2337
            "style": "IPY_MODEL_ae71cc7e26ee4b51b7eb67520f66c9bd",
2338
            "value": " 4.68M/4.68M [00:01&lt;00:00, 5.43MB/s]"
2339
          }
2340
        },
2341
        "0ed96243151440a18994669e2f85e819": {
2342
          "model_module": "@jupyter-widgets/base",
2343
          "model_module_version": "1.2.0",
2344
          "model_name": "LayoutModel",
2345
          "state": {
2346
            "_model_module": "@jupyter-widgets/base",
2347
            "_model_module_version": "1.2.0",
2348
            "_model_name": "LayoutModel",
2349
            "_view_count": null,
2350
            "_view_module": "@jupyter-widgets/base",
2351
            "_view_module_version": "1.2.0",
2352
            "_view_name": "LayoutView",
2353
            "align_content": null,
2354
            "align_items": null,
2355
            "align_self": null,
2356
            "border": null,
2357
            "bottom": null,
2358
            "display": null,
2359
            "flex": null,
2360
            "flex_flow": null,
2361
            "grid_area": null,
2362
            "grid_auto_columns": null,
2363
            "grid_auto_flow": null,
2364
            "grid_auto_rows": null,
2365
            "grid_column": null,
2366
            "grid_gap": null,
2367
            "grid_row": null,
2368
            "grid_template_areas": null,
2369
            "grid_template_columns": null,
2370
            "grid_template_rows": null,
2371
            "height": null,
2372
            "justify_content": null,
2373
            "justify_items": null,
2374
            "left": null,
2375
            "margin": null,
2376
            "max_height": null,
2377
            "max_width": null,
2378
            "min_height": null,
2379
            "min_width": null,
2380
            "object_fit": null,
2381
            "object_position": null,
2382
            "order": null,
2383
            "overflow": null,
2384
            "overflow_x": null,
2385
            "overflow_y": null,
2386
            "padding": null,
2387
            "right": null,
2388
            "top": null,
2389
            "visibility": null,
2390
            "width": null
2391
          }
2392
        },
2393
        "241b0de59e53465f8acad4ac74b17b57": {
2394
          "model_module": "@jupyter-widgets/base",
2395
          "model_module_version": "1.2.0",
2396
          "model_name": "LayoutModel",
2397
          "state": {
2398
            "_model_module": "@jupyter-widgets/base",
2399
            "_model_module_version": "1.2.0",
2400
            "_model_name": "LayoutModel",
2401
            "_view_count": null,
2402
            "_view_module": "@jupyter-widgets/base",
2403
            "_view_module_version": "1.2.0",
2404
            "_view_name": "LayoutView",
2405
            "align_content": null,
2406
            "align_items": null,
2407
            "align_self": null,
2408
            "border": null,
2409
            "bottom": null,
2410
            "display": null,
2411
            "flex": null,
2412
            "flex_flow": null,
2413
            "grid_area": null,
2414
            "grid_auto_columns": null,
2415
            "grid_auto_flow": null,
2416
            "grid_auto_rows": null,
2417
            "grid_column": null,
2418
            "grid_gap": null,
2419
            "grid_row": null,
2420
            "grid_template_areas": null,
2421
            "grid_template_columns": null,
2422
            "grid_template_rows": null,
2423
            "height": null,
2424
            "justify_content": null,
2425
            "justify_items": null,
2426
            "left": null,
2427
            "margin": null,
2428
            "max_height": null,
2429
            "max_width": null,
2430
            "min_height": null,
2431
            "min_width": null,
2432
            "object_fit": null,
2433
            "object_position": null,
2434
            "order": null,
2435
            "overflow": null,
2436
            "overflow_x": null,
2437
            "overflow_y": null,
2438
            "padding": null,
2439
            "right": null,
2440
            "top": null,
2441
            "visibility": null,
2442
            "width": null
2443
          }
2444
        },
2445
        "294d5fc4fa1e40429e08137934481ba2": {
2446
          "model_module": "@jupyter-widgets/controls",
2447
          "model_module_version": "1.5.0",
2448
          "model_name": "ProgressStyleModel",
2449
          "state": {
2450
            "_model_module": "@jupyter-widgets/controls",
2451
            "_model_module_version": "1.5.0",
2452
            "_model_name": "ProgressStyleModel",
2453
            "_view_count": null,
2454
            "_view_module": "@jupyter-widgets/base",
2455
            "_view_module_version": "1.2.0",
2456
            "_view_name": "StyleView",
2457
            "bar_color": null,
2458
            "description_width": ""
2459
          }
2460
        },
2461
        "2b960a7f46444ad3bd3392517b415f2d": {
2462
          "model_module": "@jupyter-widgets/controls",
2463
          "model_module_version": "1.5.0",
2464
          "model_name": "HTMLModel",
2465
          "state": {
2466
            "_dom_classes": [],
2467
            "_model_module": "@jupyter-widgets/controls",
2468
            "_model_module_version": "1.5.0",
2469
            "_model_name": "HTMLModel",
2470
            "_view_count": null,
2471
            "_view_module": "@jupyter-widgets/controls",
2472
            "_view_module_version": "1.5.0",
2473
            "_view_name": "HTMLView",
2474
            "description": "",
2475
            "description_tooltip": null,
2476
            "layout": "IPY_MODEL_7e2b88be1cae49da824e6c6c0782cb50",
2477
            "placeholder": "​",
2478
            "style": "IPY_MODEL_9f4e9da63bb64d279ded5ee1730b5cba",
2479
            "value": "Downloading data: 100%"
2480
          }
2481
        },
2482
        "324465ed674740c2a18a88a2633f2093": {
2483
          "model_module": "@jupyter-widgets/base",
2484
          "model_module_version": "1.2.0",
2485
          "model_name": "LayoutModel",
2486
          "state": {
2487
            "_model_module": "@jupyter-widgets/base",
2488
            "_model_module_version": "1.2.0",
2489
            "_model_name": "LayoutModel",
2490
            "_view_count": null,
2491
            "_view_module": "@jupyter-widgets/base",
2492
            "_view_module_version": "1.2.0",
2493
            "_view_name": "LayoutView",
2494
            "align_content": null,
2495
            "align_items": null,
2496
            "align_self": null,
2497
            "border": null,
2498
            "bottom": null,
2499
            "display": null,
2500
            "flex": null,
2501
            "flex_flow": null,
2502
            "grid_area": null,
2503
            "grid_auto_columns": null,
2504
            "grid_auto_flow": null,
2505
            "grid_auto_rows": null,
2506
            "grid_column": null,
2507
            "grid_gap": null,
2508
            "grid_row": null,
2509
            "grid_template_areas": null,
2510
            "grid_template_columns": null,
2511
            "grid_template_rows": null,
2512
            "height": null,
2513
            "justify_content": null,
2514
            "justify_items": null,
2515
            "left": null,
2516
            "margin": null,
2517
            "max_height": null,
2518
            "max_width": null,
2519
            "min_height": null,
2520
            "min_width": null,
2521
            "object_fit": null,
2522
            "object_position": null,
2523
            "order": null,
2524
            "overflow": null,
2525
            "overflow_x": null,
2526
            "overflow_y": null,
2527
            "padding": null,
2528
            "right": null,
2529
            "top": null,
2530
            "visibility": "hidden",
2531
            "width": null
2532
          }
2533
        },
2534
        "34d21f61f6dc499a9d1504634e470bdd": {
2535
          "model_module": "@jupyter-widgets/controls",
2536
          "model_module_version": "1.5.0",
2537
          "model_name": "HTMLModel",
2538
          "state": {
2539
            "_dom_classes": [],
2540
            "_model_module": "@jupyter-widgets/controls",
2541
            "_model_module_version": "1.5.0",
2542
            "_model_name": "HTMLModel",
2543
            "_view_count": null,
2544
            "_view_module": "@jupyter-widgets/controls",
2545
            "_view_module_version": "1.5.0",
2546
            "_view_name": "HTMLView",
2547
            "description": "",
2548
            "description_tooltip": null,
2549
            "layout": "IPY_MODEL_482f891d61ab4c2080d95a9b84ea5c6d",
2550
            "placeholder": "​",
2551
            "style": "IPY_MODEL_622987b045e74a13b79553d3d062e72a",
2552
            "value": "Extracting data files: 100%"
2553
          }
2554
        },
2555
        "34e43d6a7a92453490c45e39498afd64": {
2556
          "model_module": "@jupyter-widgets/base",
2557
          "model_module_version": "1.2.0",
2558
          "model_name": "LayoutModel",
2559
          "state": {
2560
            "_model_module": "@jupyter-widgets/base",
2561
            "_model_module_version": "1.2.0",
2562
            "_model_name": "LayoutModel",
2563
            "_view_count": null,
2564
            "_view_module": "@jupyter-widgets/base",
2565
            "_view_module_version": "1.2.0",
2566
            "_view_name": "LayoutView",
2567
            "align_content": null,
2568
            "align_items": null,
2569
            "align_self": null,
2570
            "border": null,
2571
            "bottom": null,
2572
            "display": null,
2573
            "flex": null,
2574
            "flex_flow": null,
2575
            "grid_area": null,
2576
            "grid_auto_columns": null,
2577
            "grid_auto_flow": null,
2578
            "grid_auto_rows": null,
2579
            "grid_column": null,
2580
            "grid_gap": null,
2581
            "grid_row": null,
2582
            "grid_template_areas": null,
2583
            "grid_template_columns": null,
2584
            "grid_template_rows": null,
2585
            "height": null,
2586
            "justify_content": null,
2587
            "justify_items": null,
2588
            "left": null,
2589
            "margin": null,
2590
            "max_height": null,
2591
            "max_width": null,
2592
            "min_height": null,
2593
            "min_width": null,
2594
            "object_fit": null,
2595
            "object_position": null,
2596
            "order": null,
2597
            "overflow": null,
2598
            "overflow_x": null,
2599
            "overflow_y": null,
2600
            "padding": null,
2601
            "right": null,
2602
            "top": null,
2603
            "visibility": null,
2604
            "width": null
2605
          }
2606
        },
2607
        "390f06d63dd547d395dcf18f1ebe265d": {
2608
          "model_module": "@jupyter-widgets/base",
2609
          "model_module_version": "1.2.0",
2610
          "model_name": "LayoutModel",
2611
          "state": {
2612
            "_model_module": "@jupyter-widgets/base",
2613
            "_model_module_version": "1.2.0",
2614
            "_model_name": "LayoutModel",
2615
            "_view_count": null,
2616
            "_view_module": "@jupyter-widgets/base",
2617
            "_view_module_version": "1.2.0",
2618
            "_view_name": "LayoutView",
2619
            "align_content": null,
2620
            "align_items": null,
2621
            "align_self": null,
2622
            "border": null,
2623
            "bottom": null,
2624
            "display": null,
2625
            "flex": null,
2626
            "flex_flow": null,
2627
            "grid_area": null,
2628
            "grid_auto_columns": null,
2629
            "grid_auto_flow": null,
2630
            "grid_auto_rows": null,
2631
            "grid_column": null,
2632
            "grid_gap": null,
2633
            "grid_row": null,
2634
            "grid_template_areas": null,
2635
            "grid_template_columns": null,
2636
            "grid_template_rows": null,
2637
            "height": null,
2638
            "justify_content": null,
2639
            "justify_items": null,
2640
            "left": null,
2641
            "margin": null,
2642
            "max_height": null,
2643
            "max_width": null,
2644
            "min_height": null,
2645
            "min_width": null,
2646
            "object_fit": null,
2647
            "object_position": null,
2648
            "order": null,
2649
            "overflow": null,
2650
            "overflow_x": null,
2651
            "overflow_y": null,
2652
            "padding": null,
2653
            "right": null,
2654
            "top": null,
2655
            "visibility": null,
2656
            "width": null
2657
          }
2658
        },
2659
        "3b319c7a4f6f41ea9ea6e6268cd29343": {
2660
          "model_module": "@jupyter-widgets/base",
2661
          "model_module_version": "1.2.0",
2662
          "model_name": "LayoutModel",
2663
          "state": {
2664
            "_model_module": "@jupyter-widgets/base",
2665
            "_model_module_version": "1.2.0",
2666
            "_model_name": "LayoutModel",
2667
            "_view_count": null,
2668
            "_view_module": "@jupyter-widgets/base",
2669
            "_view_module_version": "1.2.0",
2670
            "_view_name": "LayoutView",
2671
            "align_content": null,
2672
            "align_items": null,
2673
            "align_self": null,
2674
            "border": null,
2675
            "bottom": null,
2676
            "display": null,
2677
            "flex": null,
2678
            "flex_flow": null,
2679
            "grid_area": null,
2680
            "grid_auto_columns": null,
2681
            "grid_auto_flow": null,
2682
            "grid_auto_rows": null,
2683
            "grid_column": null,
2684
            "grid_gap": null,
2685
            "grid_row": null,
2686
            "grid_template_areas": null,
2687
            "grid_template_columns": null,
2688
            "grid_template_rows": null,
2689
            "height": null,
2690
            "justify_content": null,
2691
            "justify_items": null,
2692
            "left": null,
2693
            "margin": null,
2694
            "max_height": null,
2695
            "max_width": null,
2696
            "min_height": null,
2697
            "min_width": null,
2698
            "object_fit": null,
2699
            "object_position": null,
2700
            "order": null,
2701
            "overflow": null,
2702
            "overflow_x": null,
2703
            "overflow_y": null,
2704
            "padding": null,
2705
            "right": null,
2706
            "top": null,
2707
            "visibility": null,
2708
            "width": null
2709
          }
2710
        },
2711
        "41920d8d2aa44511814576dab37d96e7": {
2712
          "model_module": "@jupyter-widgets/base",
2713
          "model_module_version": "1.2.0",
2714
          "model_name": "LayoutModel",
2715
          "state": {
2716
            "_model_module": "@jupyter-widgets/base",
2717
            "_model_module_version": "1.2.0",
2718
            "_model_name": "LayoutModel",
2719
            "_view_count": null,
2720
            "_view_module": "@jupyter-widgets/base",
2721
            "_view_module_version": "1.2.0",
2722
            "_view_name": "LayoutView",
2723
            "align_content": null,
2724
            "align_items": null,
2725
            "align_self": null,
2726
            "border": null,
2727
            "bottom": null,
2728
            "display": null,
2729
            "flex": null,
2730
            "flex_flow": null,
2731
            "grid_area": null,
2732
            "grid_auto_columns": null,
2733
            "grid_auto_flow": null,
2734
            "grid_auto_rows": null,
2735
            "grid_column": null,
2736
            "grid_gap": null,
2737
            "grid_row": null,
2738
            "grid_template_areas": null,
2739
            "grid_template_columns": null,
2740
            "grid_template_rows": null,
2741
            "height": null,
2742
            "justify_content": null,
2743
            "justify_items": null,
2744
            "left": null,
2745
            "margin": null,
2746
            "max_height": null,
2747
            "max_width": null,
2748
            "min_height": null,
2749
            "min_width": null,
2750
            "object_fit": null,
2751
            "object_position": null,
2752
            "order": null,
2753
            "overflow": null,
2754
            "overflow_x": null,
2755
            "overflow_y": null,
2756
            "padding": null,
2757
            "right": null,
2758
            "top": null,
2759
            "visibility": null,
2760
            "width": null
2761
          }
2762
        },
2763
        "45c7fb32593141abb8168b8077e31f59": {
2764
          "model_module": "@jupyter-widgets/controls",
2765
          "model_module_version": "1.5.0",
2766
          "model_name": "ProgressStyleModel",
2767
          "state": {
2768
            "_model_module": "@jupyter-widgets/controls",
2769
            "_model_module_version": "1.5.0",
2770
            "_model_name": "ProgressStyleModel",
2771
            "_view_count": null,
2772
            "_view_module": "@jupyter-widgets/base",
2773
            "_view_module_version": "1.2.0",
2774
            "_view_name": "StyleView",
2775
            "bar_color": null,
2776
            "description_width": ""
2777
          }
2778
        },
2779
        "482f891d61ab4c2080d95a9b84ea5c6d": {
2780
          "model_module": "@jupyter-widgets/base",
2781
          "model_module_version": "1.2.0",
2782
          "model_name": "LayoutModel",
2783
          "state": {
2784
            "_model_module": "@jupyter-widgets/base",
2785
            "_model_module_version": "1.2.0",
2786
            "_model_name": "LayoutModel",
2787
            "_view_count": null,
2788
            "_view_module": "@jupyter-widgets/base",
2789
            "_view_module_version": "1.2.0",
2790
            "_view_name": "LayoutView",
2791
            "align_content": null,
2792
            "align_items": null,
2793
            "align_self": null,
2794
            "border": null,
2795
            "bottom": null,
2796
            "display": null,
2797
            "flex": null,
2798
            "flex_flow": null,
2799
            "grid_area": null,
2800
            "grid_auto_columns": null,
2801
            "grid_auto_flow": null,
2802
            "grid_auto_rows": null,
2803
            "grid_column": null,
2804
            "grid_gap": null,
2805
            "grid_row": null,
2806
            "grid_template_areas": null,
2807
            "grid_template_columns": null,
2808
            "grid_template_rows": null,
2809
            "height": null,
2810
            "justify_content": null,
2811
            "justify_items": null,
2812
            "left": null,
2813
            "margin": null,
2814
            "max_height": null,
2815
            "max_width": null,
2816
            "min_height": null,
2817
            "min_width": null,
2818
            "object_fit": null,
2819
            "object_position": null,
2820
            "order": null,
2821
            "overflow": null,
2822
            "overflow_x": null,
2823
            "overflow_y": null,
2824
            "padding": null,
2825
            "right": null,
2826
            "top": null,
2827
            "visibility": null,
2828
            "width": null
2829
          }
2830
        },
2831
        "4b4cfb1a834342198c75a02d28448b57": {
2832
          "model_module": "@jupyter-widgets/controls",
2833
          "model_module_version": "1.5.0",
2834
          "model_name": "HTMLModel",
2835
          "state": {
2836
            "_dom_classes": [],
2837
            "_model_module": "@jupyter-widgets/controls",
2838
            "_model_module_version": "1.5.0",
2839
            "_model_name": "HTMLModel",
2840
            "_view_count": null,
2841
            "_view_module": "@jupyter-widgets/controls",
2842
            "_view_module_version": "1.5.0",
2843
            "_view_name": "HTMLView",
2844
            "description": "",
2845
            "description_tooltip": null,
2846
            "layout": "IPY_MODEL_bed2dd81769b4910831cb34a7b475c72",
2847
            "placeholder": "​",
2848
            "style": "IPY_MODEL_ccad7c2aec604ee29b41497ec0f37fa7",
2849
            "value": "Downloading data files: 100%"
2850
          }
2851
        },
2852
        "580e5dd4c9d9497caa40802d5918e75c": {
2853
          "model_module": "@jupyter-widgets/controls",
2854
          "model_module_version": "1.5.0",
2855
          "model_name": "HTMLModel",
2856
          "state": {
2857
            "_dom_classes": [],
2858
            "_model_module": "@jupyter-widgets/controls",
2859
            "_model_module_version": "1.5.0",
2860
            "_model_name": "HTMLModel",
2861
            "_view_count": null,
2862
            "_view_module": "@jupyter-widgets/controls",
2863
            "_view_module_version": "1.5.0",
2864
            "_view_name": "HTMLView",
2865
            "description": "",
2866
            "description_tooltip": null,
2867
            "layout": "IPY_MODEL_241b0de59e53465f8acad4ac74b17b57",
2868
            "placeholder": "​",
2869
            "style": "IPY_MODEL_05199362d95449699254c45c1d5cee94",
2870
            "value": " 1/1 [00:01&lt;00:00,  1.88s/it]"
2871
          }
2872
        },
2873
        "5961b9e44ce14a2a8eb65a9e5b6be90d": {
2874
          "model_module": "@jupyter-widgets/controls",
2875
          "model_module_version": "1.5.0",
2876
          "model_name": "FloatProgressModel",
2877
          "state": {
2878
            "_dom_classes": [],
2879
            "_model_module": "@jupyter-widgets/controls",
2880
            "_model_module_version": "1.5.0",
2881
            "_model_name": "FloatProgressModel",
2882
            "_view_count": null,
2883
            "_view_module": "@jupyter-widgets/controls",
2884
            "_view_module_version": "1.5.0",
2885
            "_view_name": "ProgressView",
2886
            "bar_style": "info",
2887
            "description": "",
2888
            "description_tooltip": null,
2889
            "layout": "IPY_MODEL_5ef6d125261b49679dcb4d886b3e382c",
2890
            "max": 1,
2891
            "min": 0,
2892
            "orientation": "horizontal",
2893
            "style": "IPY_MODEL_294d5fc4fa1e40429e08137934481ba2",
2894
            "value": 1
2895
          }
2896
        },
2897
        "5b14b2d018c74766954d580853eae7fc": {
2898
          "model_module": "@jupyter-widgets/controls",
2899
          "model_module_version": "1.5.0",
2900
          "model_name": "ProgressStyleModel",
2901
          "state": {
2902
            "_model_module": "@jupyter-widgets/controls",
2903
            "_model_module_version": "1.5.0",
2904
            "_model_name": "ProgressStyleModel",
2905
            "_view_count": null,
2906
            "_view_module": "@jupyter-widgets/base",
2907
            "_view_module_version": "1.2.0",
2908
            "_view_name": "StyleView",
2909
            "bar_color": null,
2910
            "description_width": ""
2911
          }
2912
        },
2913
        "5c0bb7407c844ae19479416752f66190": {
2914
          "model_module": "@jupyter-widgets/controls",
2915
          "model_module_version": "1.5.0",
2916
          "model_name": "DescriptionStyleModel",
2917
          "state": {
2918
            "_model_module": "@jupyter-widgets/controls",
2919
            "_model_module_version": "1.5.0",
2920
            "_model_name": "DescriptionStyleModel",
2921
            "_view_count": null,
2922
            "_view_module": "@jupyter-widgets/base",
2923
            "_view_module_version": "1.2.0",
2924
            "_view_name": "StyleView",
2925
            "description_width": ""
2926
          }
2927
        },
2928
        "5ef6d125261b49679dcb4d886b3e382c": {
2929
          "model_module": "@jupyter-widgets/base",
2930
          "model_module_version": "1.2.0",
2931
          "model_name": "LayoutModel",
2932
          "state": {
2933
            "_model_module": "@jupyter-widgets/base",
2934
            "_model_module_version": "1.2.0",
2935
            "_model_name": "LayoutModel",
2936
            "_view_count": null,
2937
            "_view_module": "@jupyter-widgets/base",
2938
            "_view_module_version": "1.2.0",
2939
            "_view_name": "LayoutView",
2940
            "align_content": null,
2941
            "align_items": null,
2942
            "align_self": null,
2943
            "border": null,
2944
            "bottom": null,
2945
            "display": null,
2946
            "flex": null,
2947
            "flex_flow": null,
2948
            "grid_area": null,
2949
            "grid_auto_columns": null,
2950
            "grid_auto_flow": null,
2951
            "grid_auto_rows": null,
2952
            "grid_column": null,
2953
            "grid_gap": null,
2954
            "grid_row": null,
2955
            "grid_template_areas": null,
2956
            "grid_template_columns": null,
2957
            "grid_template_rows": null,
2958
            "height": null,
2959
            "justify_content": null,
2960
            "justify_items": null,
2961
            "left": null,
2962
            "margin": null,
2963
            "max_height": null,
2964
            "max_width": null,
2965
            "min_height": null,
2966
            "min_width": null,
2967
            "object_fit": null,
2968
            "object_position": null,
2969
            "order": null,
2970
            "overflow": null,
2971
            "overflow_x": null,
2972
            "overflow_y": null,
2973
            "padding": null,
2974
            "right": null,
2975
            "top": null,
2976
            "visibility": null,
2977
            "width": "20px"
2978
          }
2979
        },
2980
        "5f15e4b12305489180e54c61769dcebe": {
2981
          "model_module": "@jupyter-widgets/controls",
2982
          "model_module_version": "1.5.0",
2983
          "model_name": "HTMLModel",
2984
          "state": {
2985
            "_dom_classes": [],
2986
            "_model_module": "@jupyter-widgets/controls",
2987
            "_model_module_version": "1.5.0",
2988
            "_model_name": "HTMLModel",
2989
            "_view_count": null,
2990
            "_view_module": "@jupyter-widgets/controls",
2991
            "_view_module_version": "1.5.0",
2992
            "_view_name": "HTMLView",
2993
            "description": "",
2994
            "description_tooltip": null,
2995
            "layout": "IPY_MODEL_f5d992e8c1224879be5e5464a424a3a4",
2996
            "placeholder": "​",
2997
            "style": "IPY_MODEL_7e828bf7b91e4029bc2093876128a78b",
2998
            "value": " 505/0 [00:00&lt;00:00, 5041.77 examples/s]"
2999
          }
3000
        },
3001
        "622987b045e74a13b79553d3d062e72a": {
3002
          "model_module": "@jupyter-widgets/controls",
3003
          "model_module_version": "1.5.0",
3004
          "model_name": "DescriptionStyleModel",
3005
          "state": {
3006
            "_model_module": "@jupyter-widgets/controls",
3007
            "_model_module_version": "1.5.0",
3008
            "_model_name": "DescriptionStyleModel",
3009
            "_view_count": null,
3010
            "_view_module": "@jupyter-widgets/base",
3011
            "_view_module_version": "1.2.0",
3012
            "_view_name": "StyleView",
3013
            "description_width": ""
3014
          }
3015
        },
3016
        "63de2154fea24b49a87bf4b8428fa630": {
3017
          "model_module": "@jupyter-widgets/controls",
3018
          "model_module_version": "1.5.0",
3019
          "model_name": "HBoxModel",
3020
          "state": {
3021
            "_dom_classes": [],
3022
            "_model_module": "@jupyter-widgets/controls",
3023
            "_model_module_version": "1.5.0",
3024
            "_model_name": "HBoxModel",
3025
            "_view_count": null,
3026
            "_view_module": "@jupyter-widgets/controls",
3027
            "_view_module_version": "1.5.0",
3028
            "_view_name": "HBoxView",
3029
            "box_style": "",
3030
            "children": [
3031
              "IPY_MODEL_4b4cfb1a834342198c75a02d28448b57",
3032
              "IPY_MODEL_a9d471008dc34f67a5307bbb26d6123c",
3033
              "IPY_MODEL_580e5dd4c9d9497caa40802d5918e75c"
3034
            ],
3035
            "layout": "IPY_MODEL_bd09981e486d461eaa2cf166b32921e1"
3036
          }
3037
        },
3038
        "64aae9675d394df48d233b31e5f0eb3c": {
3039
          "model_module": "@jupyter-widgets/controls",
3040
          "model_module_version": "1.5.0",
3041
          "model_name": "FloatProgressModel",
3042
          "state": {
3043
            "_dom_classes": [],
3044
            "_model_module": "@jupyter-widgets/controls",
3045
            "_model_module_version": "1.5.0",
3046
            "_model_name": "FloatProgressModel",
3047
            "_view_count": null,
3048
            "_view_module": "@jupyter-widgets/controls",
3049
            "_view_module_version": "1.5.0",
3050
            "_view_name": "ProgressView",
3051
            "bar_style": "success",
3052
            "description": "",
3053
            "description_tooltip": null,
3054
            "layout": "IPY_MODEL_6c7236b0655e4397b3a9d5f4d83c03fe",
3055
            "max": 1,
3056
            "min": 0,
3057
            "orientation": "horizontal",
3058
            "style": "IPY_MODEL_6f7e876e10fd4c58aa2d1f1ed4ff2762",
3059
            "value": 1
3060
          }
3061
        },
3062
        "6545006e51824be9b6cb5cdb2cb2ba5a": {
3063
          "model_module": "@jupyter-widgets/controls",
3064
          "model_module_version": "1.5.0",
3065
          "model_name": "ProgressStyleModel",
3066
          "state": {
3067
            "_model_module": "@jupyter-widgets/controls",
3068
            "_model_module_version": "1.5.0",
3069
            "_model_name": "ProgressStyleModel",
3070
            "_view_count": null,
3071
            "_view_module": "@jupyter-widgets/base",
3072
            "_view_module_version": "1.2.0",
3073
            "_view_name": "StyleView",
3074
            "bar_color": null,
3075
            "description_width": ""
3076
          }
3077
        },
3078
        "6881722e02fe4395a5fcaf668cb7ebcb": {
3079
          "model_module": "@jupyter-widgets/controls",
3080
          "model_module_version": "1.5.0",
3081
          "model_name": "HBoxModel",
3082
          "state": {
3083
            "_dom_classes": [],
3084
            "_model_module": "@jupyter-widgets/controls",
3085
            "_model_module_version": "1.5.0",
3086
            "_model_name": "HBoxModel",
3087
            "_view_count": null,
3088
            "_view_module": "@jupyter-widgets/controls",
3089
            "_view_module_version": "1.5.0",
3090
            "_view_name": "HBoxView",
3091
            "box_style": "",
3092
            "children": [
3093
              "IPY_MODEL_2b960a7f46444ad3bd3392517b415f2d",
3094
              "IPY_MODEL_a3e8499ed740449586ca31500038c7a8",
3095
              "IPY_MODEL_08c52a0369b74e7da99574ec29612189"
3096
            ],
3097
            "layout": "IPY_MODEL_ffb822b2f739434dbe99e8a992716c30"
3098
          }
3099
        },
3100
        "690ca50e9785402bb17fa266f8e40ea9": {
3101
          "model_module": "@jupyter-widgets/base",
3102
          "model_module_version": "1.2.0",
3103
          "model_name": "LayoutModel",
3104
          "state": {
3105
            "_model_module": "@jupyter-widgets/base",
3106
            "_model_module_version": "1.2.0",
3107
            "_model_name": "LayoutModel",
3108
            "_view_count": null,
3109
            "_view_module": "@jupyter-widgets/base",
3110
            "_view_module_version": "1.2.0",
3111
            "_view_name": "LayoutView",
3112
            "align_content": null,
3113
            "align_items": null,
3114
            "align_self": null,
3115
            "border": null,
3116
            "bottom": null,
3117
            "display": null,
3118
            "flex": null,
3119
            "flex_flow": null,
3120
            "grid_area": null,
3121
            "grid_auto_columns": null,
3122
            "grid_auto_flow": null,
3123
            "grid_auto_rows": null,
3124
            "grid_column": null,
3125
            "grid_gap": null,
3126
            "grid_row": null,
3127
            "grid_template_areas": null,
3128
            "grid_template_columns": null,
3129
            "grid_template_rows": null,
3130
            "height": null,
3131
            "justify_content": null,
3132
            "justify_items": null,
3133
            "left": null,
3134
            "margin": null,
3135
            "max_height": null,
3136
            "max_width": null,
3137
            "min_height": null,
3138
            "min_width": null,
3139
            "object_fit": null,
3140
            "object_position": null,
3141
            "order": null,
3142
            "overflow": null,
3143
            "overflow_x": null,
3144
            "overflow_y": null,
3145
            "padding": null,
3146
            "right": null,
3147
            "top": null,
3148
            "visibility": null,
3149
            "width": null
3150
          }
3151
        },
3152
        "6c7236b0655e4397b3a9d5f4d83c03fe": {
3153
          "model_module": "@jupyter-widgets/base",
3154
          "model_module_version": "1.2.0",
3155
          "model_name": "LayoutModel",
3156
          "state": {
3157
            "_model_module": "@jupyter-widgets/base",
3158
            "_model_module_version": "1.2.0",
3159
            "_model_name": "LayoutModel",
3160
            "_view_count": null,
3161
            "_view_module": "@jupyter-widgets/base",
3162
            "_view_module_version": "1.2.0",
3163
            "_view_name": "LayoutView",
3164
            "align_content": null,
3165
            "align_items": null,
3166
            "align_self": null,
3167
            "border": null,
3168
            "bottom": null,
3169
            "display": null,
3170
            "flex": null,
3171
            "flex_flow": null,
3172
            "grid_area": null,
3173
            "grid_auto_columns": null,
3174
            "grid_auto_flow": null,
3175
            "grid_auto_rows": null,
3176
            "grid_column": null,
3177
            "grid_gap": null,
3178
            "grid_row": null,
3179
            "grid_template_areas": null,
3180
            "grid_template_columns": null,
3181
            "grid_template_rows": null,
3182
            "height": null,
3183
            "justify_content": null,
3184
            "justify_items": null,
3185
            "left": null,
3186
            "margin": null,
3187
            "max_height": null,
3188
            "max_width": null,
3189
            "min_height": null,
3190
            "min_width": null,
3191
            "object_fit": null,
3192
            "object_position": null,
3193
            "order": null,
3194
            "overflow": null,
3195
            "overflow_x": null,
3196
            "overflow_y": null,
3197
            "padding": null,
3198
            "right": null,
3199
            "top": null,
3200
            "visibility": null,
3201
            "width": null
3202
          }
3203
        },
3204
        "6d110cd070fe4776b9449de74759dff3": {
3205
          "model_module": "@jupyter-widgets/base",
3206
          "model_module_version": "1.2.0",
3207
          "model_name": "LayoutModel",
3208
          "state": {
3209
            "_model_module": "@jupyter-widgets/base",
3210
            "_model_module_version": "1.2.0",
3211
            "_model_name": "LayoutModel",
3212
            "_view_count": null,
3213
            "_view_module": "@jupyter-widgets/base",
3214
            "_view_module_version": "1.2.0",
3215
            "_view_name": "LayoutView",
3216
            "align_content": null,
3217
            "align_items": null,
3218
            "align_self": null,
3219
            "border": null,
3220
            "bottom": null,
3221
            "display": null,
3222
            "flex": null,
3223
            "flex_flow": null,
3224
            "grid_area": null,
3225
            "grid_auto_columns": null,
3226
            "grid_auto_flow": null,
3227
            "grid_auto_rows": null,
3228
            "grid_column": null,
3229
            "grid_gap": null,
3230
            "grid_row": null,
3231
            "grid_template_areas": null,
3232
            "grid_template_columns": null,
3233
            "grid_template_rows": null,
3234
            "height": null,
3235
            "justify_content": null,
3236
            "justify_items": null,
3237
            "left": null,
3238
            "margin": null,
3239
            "max_height": null,
3240
            "max_width": null,
3241
            "min_height": null,
3242
            "min_width": null,
3243
            "object_fit": null,
3244
            "object_position": null,
3245
            "order": null,
3246
            "overflow": null,
3247
            "overflow_x": null,
3248
            "overflow_y": null,
3249
            "padding": null,
3250
            "right": null,
3251
            "top": null,
3252
            "visibility": null,
3253
            "width": null
3254
          }
3255
        },
3256
        "6f7e876e10fd4c58aa2d1f1ed4ff2762": {
3257
          "model_module": "@jupyter-widgets/controls",
3258
          "model_module_version": "1.5.0",
3259
          "model_name": "ProgressStyleModel",
3260
          "state": {
3261
            "_model_module": "@jupyter-widgets/controls",
3262
            "_model_module_version": "1.5.0",
3263
            "_model_name": "ProgressStyleModel",
3264
            "_view_count": null,
3265
            "_view_module": "@jupyter-widgets/base",
3266
            "_view_module_version": "1.2.0",
3267
            "_view_name": "StyleView",
3268
            "bar_color": null,
3269
            "description_width": ""
3270
          }
3271
        },
3272
        "760c608de89946298cb6845d5ff1b020": {
3273
          "model_module": "@jupyter-widgets/controls",
3274
          "model_module_version": "1.5.0",
3275
          "model_name": "HBoxModel",
3276
          "state": {
3277
            "_dom_classes": [],
3278
            "_model_module": "@jupyter-widgets/controls",
3279
            "_model_module_version": "1.5.0",
3280
            "_model_name": "HBoxModel",
3281
            "_view_count": null,
3282
            "_view_module": "@jupyter-widgets/controls",
3283
            "_view_module_version": "1.5.0",
3284
            "_view_name": "HBoxView",
3285
            "box_style": "",
3286
            "children": [
3287
              "IPY_MODEL_f6f7d673d7a145bda593848f7e87ca2c",
3288
              "IPY_MODEL_effb0c1b07574547aca5956963b371c8",
3289
              "IPY_MODEL_e6e0b0054fb5449c84ad745308510ddb"
3290
            ],
3291
            "layout": "IPY_MODEL_b1e6d4d46b334bcf96efcab6f57c7536"
3292
          }
3293
        },
3294
        "78fe5eb48ae748bda91ddc70f422212c": {
3295
          "model_module": "@jupyter-widgets/controls",
3296
          "model_module_version": "1.5.0",
3297
          "model_name": "DescriptionStyleModel",
3298
          "state": {
3299
            "_model_module": "@jupyter-widgets/controls",
3300
            "_model_module_version": "1.5.0",
3301
            "_model_name": "DescriptionStyleModel",
3302
            "_view_count": null,
3303
            "_view_module": "@jupyter-widgets/base",
3304
            "_view_module_version": "1.2.0",
3305
            "_view_name": "StyleView",
3306
            "description_width": ""
3307
          }
3308
        },
3309
        "7e2b88be1cae49da824e6c6c0782cb50": {
3310
          "model_module": "@jupyter-widgets/base",
3311
          "model_module_version": "1.2.0",
3312
          "model_name": "LayoutModel",
3313
          "state": {
3314
            "_model_module": "@jupyter-widgets/base",
3315
            "_model_module_version": "1.2.0",
3316
            "_model_name": "LayoutModel",
3317
            "_view_count": null,
3318
            "_view_module": "@jupyter-widgets/base",
3319
            "_view_module_version": "1.2.0",
3320
            "_view_name": "LayoutView",
3321
            "align_content": null,
3322
            "align_items": null,
3323
            "align_self": null,
3324
            "border": null,
3325
            "bottom": null,
3326
            "display": null,
3327
            "flex": null,
3328
            "flex_flow": null,
3329
            "grid_area": null,
3330
            "grid_auto_columns": null,
3331
            "grid_auto_flow": null,
3332
            "grid_auto_rows": null,
3333
            "grid_column": null,
3334
            "grid_gap": null,
3335
            "grid_row": null,
3336
            "grid_template_areas": null,
3337
            "grid_template_columns": null,
3338
            "grid_template_rows": null,
3339
            "height": null,
3340
            "justify_content": null,
3341
            "justify_items": null,
3342
            "left": null,
3343
            "margin": null,
3344
            "max_height": null,
3345
            "max_width": null,
3346
            "min_height": null,
3347
            "min_width": null,
3348
            "object_fit": null,
3349
            "object_position": null,
3350
            "order": null,
3351
            "overflow": null,
3352
            "overflow_x": null,
3353
            "overflow_y": null,
3354
            "padding": null,
3355
            "right": null,
3356
            "top": null,
3357
            "visibility": null,
3358
            "width": null
3359
          }
3360
        },
3361
        "7e828bf7b91e4029bc2093876128a78b": {
3362
          "model_module": "@jupyter-widgets/controls",
3363
          "model_module_version": "1.5.0",
3364
          "model_name": "DescriptionStyleModel",
3365
          "state": {
3366
            "_model_module": "@jupyter-widgets/controls",
3367
            "_model_module_version": "1.5.0",
3368
            "_model_name": "DescriptionStyleModel",
3369
            "_view_count": null,
3370
            "_view_module": "@jupyter-widgets/base",
3371
            "_view_module_version": "1.2.0",
3372
            "_view_name": "StyleView",
3373
            "description_width": ""
3374
          }
3375
        },
3376
        "894a9b32ecc3404eb1213a8fa9ea38e2": {
3377
          "model_module": "@jupyter-widgets/base",
3378
          "model_module_version": "1.2.0",
3379
          "model_name": "LayoutModel",
3380
          "state": {
3381
            "_model_module": "@jupyter-widgets/base",
3382
            "_model_module_version": "1.2.0",
3383
            "_model_name": "LayoutModel",
3384
            "_view_count": null,
3385
            "_view_module": "@jupyter-widgets/base",
3386
            "_view_module_version": "1.2.0",
3387
            "_view_name": "LayoutView",
3388
            "align_content": null,
3389
            "align_items": null,
3390
            "align_self": null,
3391
            "border": null,
3392
            "bottom": null,
3393
            "display": null,
3394
            "flex": null,
3395
            "flex_flow": null,
3396
            "grid_area": null,
3397
            "grid_auto_columns": null,
3398
            "grid_auto_flow": null,
3399
            "grid_auto_rows": null,
3400
            "grid_column": null,
3401
            "grid_gap": null,
3402
            "grid_row": null,
3403
            "grid_template_areas": null,
3404
            "grid_template_columns": null,
3405
            "grid_template_rows": null,
3406
            "height": null,
3407
            "justify_content": null,
3408
            "justify_items": null,
3409
            "left": null,
3410
            "margin": null,
3411
            "max_height": null,
3412
            "max_width": null,
3413
            "min_height": null,
3414
            "min_width": null,
3415
            "object_fit": null,
3416
            "object_position": null,
3417
            "order": null,
3418
            "overflow": null,
3419
            "overflow_x": null,
3420
            "overflow_y": null,
3421
            "padding": null,
3422
            "right": null,
3423
            "top": null,
3424
            "visibility": null,
3425
            "width": null
3426
          }
3427
        },
3428
        "908935a03fea42efbded99cd81de54c5": {
3429
          "model_module": "@jupyter-widgets/controls",
3430
          "model_module_version": "1.5.0",
3431
          "model_name": "ProgressStyleModel",
3432
          "state": {
3433
            "_model_module": "@jupyter-widgets/controls",
3434
            "_model_module_version": "1.5.0",
3435
            "_model_name": "ProgressStyleModel",
3436
            "_view_count": null,
3437
            "_view_module": "@jupyter-widgets/base",
3438
            "_view_module_version": "1.2.0",
3439
            "_view_name": "StyleView",
3440
            "bar_color": null,
3441
            "description_width": ""
3442
          }
3443
        },
3444
        "930601ee00454f71b1114c4aaff0175b": {
3445
          "model_module": "@jupyter-widgets/controls",
3446
          "model_module_version": "1.5.0",
3447
          "model_name": "HTMLModel",
3448
          "state": {
3449
            "_dom_classes": [],
3450
            "_model_module": "@jupyter-widgets/controls",
3451
            "_model_module_version": "1.5.0",
3452
            "_model_name": "HTMLModel",
3453
            "_view_count": null,
3454
            "_view_module": "@jupyter-widgets/controls",
3455
            "_view_module_version": "1.5.0",
3456
            "_view_name": "HTMLView",
3457
            "description": "",
3458
            "description_tooltip": null,
3459
            "layout": "IPY_MODEL_d670714b504847e3b72cd84510219ec7",
3460
            "placeholder": "​",
3461
            "style": "IPY_MODEL_037869180d9d4b1eb1bdbed67337e349",
3462
            "value": "100%"
3463
          }
3464
        },
3465
        "9a8b01998f8a4c6bb0bfe71e02b3352c": {
3466
          "model_module": "@jupyter-widgets/base",
3467
          "model_module_version": "1.2.0",
3468
          "model_name": "LayoutModel",
3469
          "state": {
3470
            "_model_module": "@jupyter-widgets/base",
3471
            "_model_module_version": "1.2.0",
3472
            "_model_name": "LayoutModel",
3473
            "_view_count": null,
3474
            "_view_module": "@jupyter-widgets/base",
3475
            "_view_module_version": "1.2.0",
3476
            "_view_name": "LayoutView",
3477
            "align_content": null,
3478
            "align_items": null,
3479
            "align_self": null,
3480
            "border": null,
3481
            "bottom": null,
3482
            "display": null,
3483
            "flex": null,
3484
            "flex_flow": null,
3485
            "grid_area": null,
3486
            "grid_auto_columns": null,
3487
            "grid_auto_flow": null,
3488
            "grid_auto_rows": null,
3489
            "grid_column": null,
3490
            "grid_gap": null,
3491
            "grid_row": null,
3492
            "grid_template_areas": null,
3493
            "grid_template_columns": null,
3494
            "grid_template_rows": null,
3495
            "height": null,
3496
            "justify_content": null,
3497
            "justify_items": null,
3498
            "left": null,
3499
            "margin": null,
3500
            "max_height": null,
3501
            "max_width": null,
3502
            "min_height": null,
3503
            "min_width": null,
3504
            "object_fit": null,
3505
            "object_position": null,
3506
            "order": null,
3507
            "overflow": null,
3508
            "overflow_x": null,
3509
            "overflow_y": null,
3510
            "padding": null,
3511
            "right": null,
3512
            "top": null,
3513
            "visibility": null,
3514
            "width": null
3515
          }
3516
        },
3517
        "9f4e9da63bb64d279ded5ee1730b5cba": {
3518
          "model_module": "@jupyter-widgets/controls",
3519
          "model_module_version": "1.5.0",
3520
          "model_name": "DescriptionStyleModel",
3521
          "state": {
3522
            "_model_module": "@jupyter-widgets/controls",
3523
            "_model_module_version": "1.5.0",
3524
            "_model_name": "DescriptionStyleModel",
3525
            "_view_count": null,
3526
            "_view_module": "@jupyter-widgets/base",
3527
            "_view_module_version": "1.2.0",
3528
            "_view_name": "StyleView",
3529
            "description_width": ""
3530
          }
3531
        },
3532
        "a3e8499ed740449586ca31500038c7a8": {
3533
          "model_module": "@jupyter-widgets/controls",
3534
          "model_module_version": "1.5.0",
3535
          "model_name": "FloatProgressModel",
3536
          "state": {
3537
            "_dom_classes": [],
3538
            "_model_module": "@jupyter-widgets/controls",
3539
            "_model_module_version": "1.5.0",
3540
            "_model_name": "FloatProgressModel",
3541
            "_view_count": null,
3542
            "_view_module": "@jupyter-widgets/controls",
3543
            "_view_module_version": "1.5.0",
3544
            "_view_name": "ProgressView",
3545
            "bar_style": "success",
3546
            "description": "",
3547
            "description_tooltip": null,
3548
            "layout": "IPY_MODEL_3b319c7a4f6f41ea9ea6e6268cd29343",
3549
            "max": 4681212,
3550
            "min": 0,
3551
            "orientation": "horizontal",
3552
            "style": "IPY_MODEL_908935a03fea42efbded99cd81de54c5",
3553
            "value": 4681212
3554
          }
3555
        },
3556
        "a532b2307c734cf188092d40299c40ad": {
3557
          "model_module": "@jupyter-widgets/controls",
3558
          "model_module_version": "1.5.0",
3559
          "model_name": "HBoxModel",
3560
          "state": {
3561
            "_dom_classes": [],
3562
            "_model_module": "@jupyter-widgets/controls",
3563
            "_model_module_version": "1.5.0",
3564
            "_model_name": "HBoxModel",
3565
            "_view_count": null,
3566
            "_view_module": "@jupyter-widgets/controls",
3567
            "_view_module_version": "1.5.0",
3568
            "_view_name": "HBoxView",
3569
            "box_style": "",
3570
            "children": [
3571
              "IPY_MODEL_fab781bfae4647968aa69f19ae6a5754",
3572
              "IPY_MODEL_5961b9e44ce14a2a8eb65a9e5b6be90d",
3573
              "IPY_MODEL_5f15e4b12305489180e54c61769dcebe"
3574
            ],
3575
            "layout": "IPY_MODEL_324465ed674740c2a18a88a2633f2093"
3576
          }
3577
        },
3578
        "a9d471008dc34f67a5307bbb26d6123c": {
3579
          "model_module": "@jupyter-widgets/controls",
3580
          "model_module_version": "1.5.0",
3581
          "model_name": "FloatProgressModel",
3582
          "state": {
3583
            "_dom_classes": [],
3584
            "_model_module": "@jupyter-widgets/controls",
3585
            "_model_module_version": "1.5.0",
3586
            "_model_name": "FloatProgressModel",
3587
            "_view_count": null,
3588
            "_view_module": "@jupyter-widgets/controls",
3589
            "_view_module_version": "1.5.0",
3590
            "_view_name": "ProgressView",
3591
            "bar_style": "success",
3592
            "description": "",
3593
            "description_tooltip": null,
3594
            "layout": "IPY_MODEL_390f06d63dd547d395dcf18f1ebe265d",
3595
            "max": 1,
3596
            "min": 0,
3597
            "orientation": "horizontal",
3598
            "style": "IPY_MODEL_6545006e51824be9b6cb5cdb2cb2ba5a",
3599
            "value": 1
3600
          }
3601
        },
3602
        "ae71cc7e26ee4b51b7eb67520f66c9bd": {
3603
          "model_module": "@jupyter-widgets/controls",
3604
          "model_module_version": "1.5.0",
3605
          "model_name": "DescriptionStyleModel",
3606
          "state": {
3607
            "_model_module": "@jupyter-widgets/controls",
3608
            "_model_module_version": "1.5.0",
3609
            "_model_name": "DescriptionStyleModel",
3610
            "_view_count": null,
3611
            "_view_module": "@jupyter-widgets/base",
3612
            "_view_module_version": "1.2.0",
3613
            "_view_name": "StyleView",
3614
            "description_width": ""
3615
          }
3616
        },
3617
        "b1e6d4d46b334bcf96efcab6f57c7536": {
3618
          "model_module": "@jupyter-widgets/base",
3619
          "model_module_version": "1.2.0",
3620
          "model_name": "LayoutModel",
3621
          "state": {
3622
            "_model_module": "@jupyter-widgets/base",
3623
            "_model_module_version": "1.2.0",
3624
            "_model_name": "LayoutModel",
3625
            "_view_count": null,
3626
            "_view_module": "@jupyter-widgets/base",
3627
            "_view_module_version": "1.2.0",
3628
            "_view_name": "LayoutView",
3629
            "align_content": null,
3630
            "align_items": null,
3631
            "align_self": null,
3632
            "border": null,
3633
            "bottom": null,
3634
            "display": null,
3635
            "flex": null,
3636
            "flex_flow": null,
3637
            "grid_area": null,
3638
            "grid_auto_columns": null,
3639
            "grid_auto_flow": null,
3640
            "grid_auto_rows": null,
3641
            "grid_column": null,
3642
            "grid_gap": null,
3643
            "grid_row": null,
3644
            "grid_template_areas": null,
3645
            "grid_template_columns": null,
3646
            "grid_template_rows": null,
3647
            "height": null,
3648
            "justify_content": null,
3649
            "justify_items": null,
3650
            "left": null,
3651
            "margin": null,
3652
            "max_height": null,
3653
            "max_width": null,
3654
            "min_height": null,
3655
            "min_width": null,
3656
            "object_fit": null,
3657
            "object_position": null,
3658
            "order": null,
3659
            "overflow": null,
3660
            "overflow_x": null,
3661
            "overflow_y": null,
3662
            "padding": null,
3663
            "right": null,
3664
            "top": null,
3665
            "visibility": null,
3666
            "width": null
3667
          }
3668
        },
3669
        "bd09981e486d461eaa2cf166b32921e1": {
3670
          "model_module": "@jupyter-widgets/base",
3671
          "model_module_version": "1.2.0",
3672
          "model_name": "LayoutModel",
3673
          "state": {
3674
            "_model_module": "@jupyter-widgets/base",
3675
            "_model_module_version": "1.2.0",
3676
            "_model_name": "LayoutModel",
3677
            "_view_count": null,
3678
            "_view_module": "@jupyter-widgets/base",
3679
            "_view_module_version": "1.2.0",
3680
            "_view_name": "LayoutView",
3681
            "align_content": null,
3682
            "align_items": null,
3683
            "align_self": null,
3684
            "border": null,
3685
            "bottom": null,
3686
            "display": null,
3687
            "flex": null,
3688
            "flex_flow": null,
3689
            "grid_area": null,
3690
            "grid_auto_columns": null,
3691
            "grid_auto_flow": null,
3692
            "grid_auto_rows": null,
3693
            "grid_column": null,
3694
            "grid_gap": null,
3695
            "grid_row": null,
3696
            "grid_template_areas": null,
3697
            "grid_template_columns": null,
3698
            "grid_template_rows": null,
3699
            "height": null,
3700
            "justify_content": null,
3701
            "justify_items": null,
3702
            "left": null,
3703
            "margin": null,
3704
            "max_height": null,
3705
            "max_width": null,
3706
            "min_height": null,
3707
            "min_width": null,
3708
            "object_fit": null,
3709
            "object_position": null,
3710
            "order": null,
3711
            "overflow": null,
3712
            "overflow_x": null,
3713
            "overflow_y": null,
3714
            "padding": null,
3715
            "right": null,
3716
            "top": null,
3717
            "visibility": null,
3718
            "width": null
3719
          }
3720
        },
3721
        "bed2dd81769b4910831cb34a7b475c72": {
3722
          "model_module": "@jupyter-widgets/base",
3723
          "model_module_version": "1.2.0",
3724
          "model_name": "LayoutModel",
3725
          "state": {
3726
            "_model_module": "@jupyter-widgets/base",
3727
            "_model_module_version": "1.2.0",
3728
            "_model_name": "LayoutModel",
3729
            "_view_count": null,
3730
            "_view_module": "@jupyter-widgets/base",
3731
            "_view_module_version": "1.2.0",
3732
            "_view_name": "LayoutView",
3733
            "align_content": null,
3734
            "align_items": null,
3735
            "align_self": null,
3736
            "border": null,
3737
            "bottom": null,
3738
            "display": null,
3739
            "flex": null,
3740
            "flex_flow": null,
3741
            "grid_area": null,
3742
            "grid_auto_columns": null,
3743
            "grid_auto_flow": null,
3744
            "grid_auto_rows": null,
3745
            "grid_column": null,
3746
            "grid_gap": null,
3747
            "grid_row": null,
3748
            "grid_template_areas": null,
3749
            "grid_template_columns": null,
3750
            "grid_template_rows": null,
3751
            "height": null,
3752
            "justify_content": null,
3753
            "justify_items": null,
3754
            "left": null,
3755
            "margin": null,
3756
            "max_height": null,
3757
            "max_width": null,
3758
            "min_height": null,
3759
            "min_width": null,
3760
            "object_fit": null,
3761
            "object_position": null,
3762
            "order": null,
3763
            "overflow": null,
3764
            "overflow_x": null,
3765
            "overflow_y": null,
3766
            "padding": null,
3767
            "right": null,
3768
            "top": null,
3769
            "visibility": null,
3770
            "width": null
3771
          }
3772
        },
3773
        "bf9b29814dd04a22a7ff4ca1c6160c21": {
3774
          "model_module": "@jupyter-widgets/controls",
3775
          "model_module_version": "1.5.0",
3776
          "model_name": "HTMLModel",
3777
          "state": {
3778
            "_dom_classes": [],
3779
            "_model_module": "@jupyter-widgets/controls",
3780
            "_model_module_version": "1.5.0",
3781
            "_model_name": "HTMLModel",
3782
            "_view_count": null,
3783
            "_view_module": "@jupyter-widgets/controls",
3784
            "_view_module_version": "1.5.0",
3785
            "_view_name": "HTMLView",
3786
            "description": "",
3787
            "description_tooltip": null,
3788
            "layout": "IPY_MODEL_41920d8d2aa44511814576dab37d96e7",
3789
            "placeholder": "​",
3790
            "style": "IPY_MODEL_d4c5704e6136468b910684e418074271",
3791
            "value": " 505/505 [00:06&lt;00:00, 213.93it/s]"
3792
          }
3793
        },
3794
        "ccad7c2aec604ee29b41497ec0f37fa7": {
3795
          "model_module": "@jupyter-widgets/controls",
3796
          "model_module_version": "1.5.0",
3797
          "model_name": "DescriptionStyleModel",
3798
          "state": {
3799
            "_model_module": "@jupyter-widgets/controls",
3800
            "_model_module_version": "1.5.0",
3801
            "_model_name": "DescriptionStyleModel",
3802
            "_view_count": null,
3803
            "_view_module": "@jupyter-widgets/base",
3804
            "_view_module_version": "1.2.0",
3805
            "_view_name": "StyleView",
3806
            "description_width": ""
3807
          }
3808
        },
3809
        "d1d3dde6ec3b483f8b14139a7d6a9ae0": {
3810
          "model_module": "@jupyter-widgets/controls",
3811
          "model_module_version": "1.5.0",
3812
          "model_name": "HTMLModel",
3813
          "state": {
3814
            "_dom_classes": [],
3815
            "_model_module": "@jupyter-widgets/controls",
3816
            "_model_module_version": "1.5.0",
3817
            "_model_name": "HTMLModel",
3818
            "_view_count": null,
3819
            "_view_module": "@jupyter-widgets/controls",
3820
            "_view_module_version": "1.5.0",
3821
            "_view_name": "HTMLView",
3822
            "description": "",
3823
            "description_tooltip": null,
3824
            "layout": "IPY_MODEL_9a8b01998f8a4c6bb0bfe71e02b3352c",
3825
            "placeholder": "​",
3826
            "style": "IPY_MODEL_ec224feb9828415eb018831e985d22c0",
3827
            "value": " 1/1 [00:00&lt;00:00, 37.16it/s]"
3828
          }
3829
        },
3830
        "d4c5704e6136468b910684e418074271": {
3831
          "model_module": "@jupyter-widgets/controls",
3832
          "model_module_version": "1.5.0",
3833
          "model_name": "DescriptionStyleModel",
3834
          "state": {
3835
            "_model_module": "@jupyter-widgets/controls",
3836
            "_model_module_version": "1.5.0",
3837
            "_model_name": "DescriptionStyleModel",
3838
            "_view_count": null,
3839
            "_view_module": "@jupyter-widgets/base",
3840
            "_view_module_version": "1.2.0",
3841
            "_view_name": "StyleView",
3842
            "description_width": ""
3843
          }
3844
        },
3845
        "d670714b504847e3b72cd84510219ec7": {
3846
          "model_module": "@jupyter-widgets/base",
3847
          "model_module_version": "1.2.0",
3848
          "model_name": "LayoutModel",
3849
          "state": {
3850
            "_model_module": "@jupyter-widgets/base",
3851
            "_model_module_version": "1.2.0",
3852
            "_model_name": "LayoutModel",
3853
            "_view_count": null,
3854
            "_view_module": "@jupyter-widgets/base",
3855
            "_view_module_version": "1.2.0",
3856
            "_view_name": "LayoutView",
3857
            "align_content": null,
3858
            "align_items": null,
3859
            "align_self": null,
3860
            "border": null,
3861
            "bottom": null,
3862
            "display": null,
3863
            "flex": null,
3864
            "flex_flow": null,
3865
            "grid_area": null,
3866
            "grid_auto_columns": null,
3867
            "grid_auto_flow": null,
3868
            "grid_auto_rows": null,
3869
            "grid_column": null,
3870
            "grid_gap": null,
3871
            "grid_row": null,
3872
            "grid_template_areas": null,
3873
            "grid_template_columns": null,
3874
            "grid_template_rows": null,
3875
            "height": null,
3876
            "justify_content": null,
3877
            "justify_items": null,
3878
            "left": null,
3879
            "margin": null,
3880
            "max_height": null,
3881
            "max_width": null,
3882
            "min_height": null,
3883
            "min_width": null,
3884
            "object_fit": null,
3885
            "object_position": null,
3886
            "order": null,
3887
            "overflow": null,
3888
            "overflow_x": null,
3889
            "overflow_y": null,
3890
            "padding": null,
3891
            "right": null,
3892
            "top": null,
3893
            "visibility": null,
3894
            "width": null
3895
          }
3896
        },
3897
        "d83b0b3089c34bb58ddb1272a240c2f9": {
3898
          "model_module": "@jupyter-widgets/controls",
3899
          "model_module_version": "1.5.0",
3900
          "model_name": "HBoxModel",
3901
          "state": {
3902
            "_dom_classes": [],
3903
            "_model_module": "@jupyter-widgets/controls",
3904
            "_model_module_version": "1.5.0",
3905
            "_model_name": "HBoxModel",
3906
            "_view_count": null,
3907
            "_view_module": "@jupyter-widgets/controls",
3908
            "_view_module_version": "1.5.0",
3909
            "_view_name": "HBoxView",
3910
            "box_style": "",
3911
            "children": [
3912
              "IPY_MODEL_34d21f61f6dc499a9d1504634e470bdd",
3913
              "IPY_MODEL_64aae9675d394df48d233b31e5f0eb3c",
3914
              "IPY_MODEL_d1d3dde6ec3b483f8b14139a7d6a9ae0"
3915
            ],
3916
            "layout": "IPY_MODEL_690ca50e9785402bb17fa266f8e40ea9"
3917
          }
3918
        },
3919
        "dd3ece4c242d4eae946f8bc4f95d1dbf": {
3920
          "model_module": "@jupyter-widgets/base",
3921
          "model_module_version": "1.2.0",
3922
          "model_name": "LayoutModel",
3923
          "state": {
3924
            "_model_module": "@jupyter-widgets/base",
3925
            "_model_module_version": "1.2.0",
3926
            "_model_name": "LayoutModel",
3927
            "_view_count": null,
3928
            "_view_module": "@jupyter-widgets/base",
3929
            "_view_module_version": "1.2.0",
3930
            "_view_name": "LayoutView",
3931
            "align_content": null,
3932
            "align_items": null,
3933
            "align_self": null,
3934
            "border": null,
3935
            "bottom": null,
3936
            "display": null,
3937
            "flex": null,
3938
            "flex_flow": null,
3939
            "grid_area": null,
3940
            "grid_auto_columns": null,
3941
            "grid_auto_flow": null,
3942
            "grid_auto_rows": null,
3943
            "grid_column": null,
3944
            "grid_gap": null,
3945
            "grid_row": null,
3946
            "grid_template_areas": null,
3947
            "grid_template_columns": null,
3948
            "grid_template_rows": null,
3949
            "height": null,
3950
            "justify_content": null,
3951
            "justify_items": null,
3952
            "left": null,
3953
            "margin": null,
3954
            "max_height": null,
3955
            "max_width": null,
3956
            "min_height": null,
3957
            "min_width": null,
3958
            "object_fit": null,
3959
            "object_position": null,
3960
            "order": null,
3961
            "overflow": null,
3962
            "overflow_x": null,
3963
            "overflow_y": null,
3964
            "padding": null,
3965
            "right": null,
3966
            "top": null,
3967
            "visibility": null,
3968
            "width": null
3969
          }
3970
        },
3971
        "e5a120d5b9494d14a142fbf519bcbbdf": {
3972
          "model_module": "@jupyter-widgets/base",
3973
          "model_module_version": "1.2.0",
3974
          "model_name": "LayoutModel",
3975
          "state": {
3976
            "_model_module": "@jupyter-widgets/base",
3977
            "_model_module_version": "1.2.0",
3978
            "_model_name": "LayoutModel",
3979
            "_view_count": null,
3980
            "_view_module": "@jupyter-widgets/base",
3981
            "_view_module_version": "1.2.0",
3982
            "_view_name": "LayoutView",
3983
            "align_content": null,
3984
            "align_items": null,
3985
            "align_self": null,
3986
            "border": null,
3987
            "bottom": null,
3988
            "display": null,
3989
            "flex": null,
3990
            "flex_flow": null,
3991
            "grid_area": null,
3992
            "grid_auto_columns": null,
3993
            "grid_auto_flow": null,
3994
            "grid_auto_rows": null,
3995
            "grid_column": null,
3996
            "grid_gap": null,
3997
            "grid_row": null,
3998
            "grid_template_areas": null,
3999
            "grid_template_columns": null,
4000
            "grid_template_rows": null,
4001
            "height": null,
4002
            "justify_content": null,
4003
            "justify_items": null,
4004
            "left": null,
4005
            "margin": null,
4006
            "max_height": null,
4007
            "max_width": null,
4008
            "min_height": null,
4009
            "min_width": null,
4010
            "object_fit": null,
4011
            "object_position": null,
4012
            "order": null,
4013
            "overflow": null,
4014
            "overflow_x": null,
4015
            "overflow_y": null,
4016
            "padding": null,
4017
            "right": null,
4018
            "top": null,
4019
            "visibility": null,
4020
            "width": null
4021
          }
4022
        },
4023
        "e6e0b0054fb5449c84ad745308510ddb": {
4024
          "model_module": "@jupyter-widgets/controls",
4025
          "model_module_version": "1.5.0",
4026
          "model_name": "HTMLModel",
4027
          "state": {
4028
            "_dom_classes": [],
4029
            "_model_module": "@jupyter-widgets/controls",
4030
            "_model_module_version": "1.5.0",
4031
            "_model_name": "HTMLModel",
4032
            "_view_count": null,
4033
            "_view_module": "@jupyter-widgets/controls",
4034
            "_view_module_version": "1.5.0",
4035
            "_view_name": "HTMLView",
4036
            "description": "",
4037
            "description_tooltip": null,
4038
            "layout": "IPY_MODEL_0ed96243151440a18994669e2f85e819",
4039
            "placeholder": "​",
4040
            "style": "IPY_MODEL_05a0a1ebc92f463d9f3e953e51742a85",
4041
            "value": " 25/25 [01:13&lt;00:00,  3.87s/it]"
4042
          }
4043
        },
4044
        "e976d05935374e47b86773ca852cfa9e": {
4045
          "model_module": "@jupyter-widgets/controls",
4046
          "model_module_version": "1.5.0",
4047
          "model_name": "FloatProgressModel",
4048
          "state": {
4049
            "_dom_classes": [],
4050
            "_model_module": "@jupyter-widgets/controls",
4051
            "_model_module_version": "1.5.0",
4052
            "_model_name": "FloatProgressModel",
4053
            "_view_count": null,
4054
            "_view_module": "@jupyter-widgets/controls",
4055
            "_view_module_version": "1.5.0",
4056
            "_view_name": "ProgressView",
4057
            "bar_style": "success",
4058
            "description": "",
4059
            "description_tooltip": null,
4060
            "layout": "IPY_MODEL_894a9b32ecc3404eb1213a8fa9ea38e2",
4061
            "max": 505,
4062
            "min": 0,
4063
            "orientation": "horizontal",
4064
            "style": "IPY_MODEL_5b14b2d018c74766954d580853eae7fc",
4065
            "value": 505
4066
          }
4067
        },
4068
        "ec224feb9828415eb018831e985d22c0": {
4069
          "model_module": "@jupyter-widgets/controls",
4070
          "model_module_version": "1.5.0",
4071
          "model_name": "DescriptionStyleModel",
4072
          "state": {
4073
            "_model_module": "@jupyter-widgets/controls",
4074
            "_model_module_version": "1.5.0",
4075
            "_model_name": "DescriptionStyleModel",
4076
            "_view_count": null,
4077
            "_view_module": "@jupyter-widgets/base",
4078
            "_view_module_version": "1.2.0",
4079
            "_view_name": "StyleView",
4080
            "description_width": ""
4081
          }
4082
        },
4083
        "effb0c1b07574547aca5956963b371c8": {
4084
          "model_module": "@jupyter-widgets/controls",
4085
          "model_module_version": "1.5.0",
4086
          "model_name": "FloatProgressModel",
4087
          "state": {
4088
            "_dom_classes": [],
4089
            "_model_module": "@jupyter-widgets/controls",
4090
            "_model_module_version": "1.5.0",
4091
            "_model_name": "FloatProgressModel",
4092
            "_view_count": null,
4093
            "_view_module": "@jupyter-widgets/controls",
4094
            "_view_module_version": "1.5.0",
4095
            "_view_name": "ProgressView",
4096
            "bar_style": "success",
4097
            "description": "",
4098
            "description_tooltip": null,
4099
            "layout": "IPY_MODEL_34e43d6a7a92453490c45e39498afd64",
4100
            "max": 25,
4101
            "min": 0,
4102
            "orientation": "horizontal",
4103
            "style": "IPY_MODEL_45c7fb32593141abb8168b8077e31f59",
4104
            "value": 25
4105
          }
4106
        },
4107
        "f5d992e8c1224879be5e5464a424a3a4": {
4108
          "model_module": "@jupyter-widgets/base",
4109
          "model_module_version": "1.2.0",
4110
          "model_name": "LayoutModel",
4111
          "state": {
4112
            "_model_module": "@jupyter-widgets/base",
4113
            "_model_module_version": "1.2.0",
4114
            "_model_name": "LayoutModel",
4115
            "_view_count": null,
4116
            "_view_module": "@jupyter-widgets/base",
4117
            "_view_module_version": "1.2.0",
4118
            "_view_name": "LayoutView",
4119
            "align_content": null,
4120
            "align_items": null,
4121
            "align_self": null,
4122
            "border": null,
4123
            "bottom": null,
4124
            "display": null,
4125
            "flex": null,
4126
            "flex_flow": null,
4127
            "grid_area": null,
4128
            "grid_auto_columns": null,
4129
            "grid_auto_flow": null,
4130
            "grid_auto_rows": null,
4131
            "grid_column": null,
4132
            "grid_gap": null,
4133
            "grid_row": null,
4134
            "grid_template_areas": null,
4135
            "grid_template_columns": null,
4136
            "grid_template_rows": null,
4137
            "height": null,
4138
            "justify_content": null,
4139
            "justify_items": null,
4140
            "left": null,
4141
            "margin": null,
4142
            "max_height": null,
4143
            "max_width": null,
4144
            "min_height": null,
4145
            "min_width": null,
4146
            "object_fit": null,
4147
            "object_position": null,
4148
            "order": null,
4149
            "overflow": null,
4150
            "overflow_x": null,
4151
            "overflow_y": null,
4152
            "padding": null,
4153
            "right": null,
4154
            "top": null,
4155
            "visibility": null,
4156
            "width": null
4157
          }
4158
        },
4159
        "f6f7d673d7a145bda593848f7e87ca2c": {
4160
          "model_module": "@jupyter-widgets/controls",
4161
          "model_module_version": "1.5.0",
4162
          "model_name": "HTMLModel",
4163
          "state": {
4164
            "_dom_classes": [],
4165
            "_model_module": "@jupyter-widgets/controls",
4166
            "_model_module_version": "1.5.0",
4167
            "_model_name": "HTMLModel",
4168
            "_view_count": null,
4169
            "_view_module": "@jupyter-widgets/controls",
4170
            "_view_module_version": "1.5.0",
4171
            "_view_name": "HTMLView",
4172
            "description": "",
4173
            "description_tooltip": null,
4174
            "layout": "IPY_MODEL_e5a120d5b9494d14a142fbf519bcbbdf",
4175
            "placeholder": "​",
4176
            "style": "IPY_MODEL_78fe5eb48ae748bda91ddc70f422212c",
4177
            "value": "100%"
4178
          }
4179
        },
4180
        "f82b21e87eba4e06a0531c791dc09b3f": {
4181
          "model_module": "@jupyter-widgets/base",
4182
          "model_module_version": "1.2.0",
4183
          "model_name": "LayoutModel",
4184
          "state": {
4185
            "_model_module": "@jupyter-widgets/base",
4186
            "_model_module_version": "1.2.0",
4187
            "_model_name": "LayoutModel",
4188
            "_view_count": null,
4189
            "_view_module": "@jupyter-widgets/base",
4190
            "_view_module_version": "1.2.0",
4191
            "_view_name": "LayoutView",
4192
            "align_content": null,
4193
            "align_items": null,
4194
            "align_self": null,
4195
            "border": null,
4196
            "bottom": null,
4197
            "display": null,
4198
            "flex": null,
4199
            "flex_flow": null,
4200
            "grid_area": null,
4201
            "grid_auto_columns": null,
4202
            "grid_auto_flow": null,
4203
            "grid_auto_rows": null,
4204
            "grid_column": null,
4205
            "grid_gap": null,
4206
            "grid_row": null,
4207
            "grid_template_areas": null,
4208
            "grid_template_columns": null,
4209
            "grid_template_rows": null,
4210
            "height": null,
4211
            "justify_content": null,
4212
            "justify_items": null,
4213
            "left": null,
4214
            "margin": null,
4215
            "max_height": null,
4216
            "max_width": null,
4217
            "min_height": null,
4218
            "min_width": null,
4219
            "object_fit": null,
4220
            "object_position": null,
4221
            "order": null,
4222
            "overflow": null,
4223
            "overflow_x": null,
4224
            "overflow_y": null,
4225
            "padding": null,
4226
            "right": null,
4227
            "top": null,
4228
            "visibility": null,
4229
            "width": null
4230
          }
4231
        },
4232
        "fab781bfae4647968aa69f19ae6a5754": {
4233
          "model_module": "@jupyter-widgets/controls",
4234
          "model_module_version": "1.5.0",
4235
          "model_name": "HTMLModel",
4236
          "state": {
4237
            "_dom_classes": [],
4238
            "_model_module": "@jupyter-widgets/controls",
4239
            "_model_module_version": "1.5.0",
4240
            "_model_name": "HTMLModel",
4241
            "_view_count": null,
4242
            "_view_module": "@jupyter-widgets/controls",
4243
            "_view_module_version": "1.5.0",
4244
            "_view_name": "HTMLView",
4245
            "description": "",
4246
            "description_tooltip": null,
4247
            "layout": "IPY_MODEL_f82b21e87eba4e06a0531c791dc09b3f",
4248
            "placeholder": "​",
4249
            "style": "IPY_MODEL_5c0bb7407c844ae19479416752f66190",
4250
            "value": "Generating train split: "
4251
          }
4252
        },
4253
        "ffb822b2f739434dbe99e8a992716c30": {
4254
          "model_module": "@jupyter-widgets/base",
4255
          "model_module_version": "1.2.0",
4256
          "model_name": "LayoutModel",
4257
          "state": {
4258
            "_model_module": "@jupyter-widgets/base",
4259
            "_model_module_version": "1.2.0",
4260
            "_model_name": "LayoutModel",
4261
            "_view_count": null,
4262
            "_view_module": "@jupyter-widgets/base",
4263
            "_view_module_version": "1.2.0",
4264
            "_view_name": "LayoutView",
4265
            "align_content": null,
4266
            "align_items": null,
4267
            "align_self": null,
4268
            "border": null,
4269
            "bottom": null,
4270
            "display": null,
4271
            "flex": null,
4272
            "flex_flow": null,
4273
            "grid_area": null,
4274
            "grid_auto_columns": null,
4275
            "grid_auto_flow": null,
4276
            "grid_auto_rows": null,
4277
            "grid_column": null,
4278
            "grid_gap": null,
4279
            "grid_row": null,
4280
            "grid_template_areas": null,
4281
            "grid_template_columns": null,
4282
            "grid_template_rows": null,
4283
            "height": null,
4284
            "justify_content": null,
4285
            "justify_items": null,
4286
            "left": null,
4287
            "margin": null,
4288
            "max_height": null,
4289
            "max_width": null,
4290
            "min_height": null,
4291
            "min_width": null,
4292
            "object_fit": null,
4293
            "object_position": null,
4294
            "order": null,
4295
            "overflow": null,
4296
            "overflow_x": null,
4297
            "overflow_y": null,
4298
            "padding": null,
4299
            "right": null,
4300
            "top": null,
4301
            "visibility": null,
4302
            "width": null
4303
          }
4304
        }
4305
      }
4306
    }
4307
  },
4308
  "nbformat": 4,
4309
  "nbformat_minor": 0
4310
}