{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "A8o2XLKIz84Y"
      },
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/v1/xml-agents.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/v1/xml-agents.ipynb)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "1eOnr6z_zLoc",
        "outputId": "53f4214a-c00b-4dea-d1fa-38d0d6693fe9"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Python 3.10.12\n"
          ]
        }
      ],
      "source": [
        "!python --version"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qQG4iSxVxw8f"
      },
      "source": [
        "# XML Agents with RAG and LangChain v1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ov6TCS7bx1oI"
      },
      "source": [
        "LangChain v1 brought a lot of changes: comparing versions `0.0.3xx` to `0.1.x`, the preferred way of doing things has changed considerably. That is very much the case for agents.\n",
        "\n",
        "The way that we initialize and use agents is generally clearer than it was in the past. There are still many abstractions, but we can (and are encouraged to) get closer to the agent logic itself. This can cause some confusion at first, but once understood the new logic can be much clearer than in previous versions.\n",
        "\n",
        "In this example, we'll be building a RAG agent with LangChain v1. We will use Anthropic's Claude 3 for our LLM, Voyage AI's `voyage-2` model for knowledge embeddings, and Pinecone to power our knowledge retrieval.\n",
        "\n",
        "To begin, let's install the prerequisites:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "zshhLDrgbFKk",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "ed49398c-dc7c-4c9c-e6c3-1a1910d52006"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m848.6/848.6 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m211.0/211.0 kB\u001b[0m \u001b[31m18.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m507.1/507.1 kB\u001b[0m \u001b[31m30.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.6/75.6 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m13.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m15.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.8/77.8 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m13.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h"
          ]
        }
      ],
      "source": [
        "!pip install -qU \\\n",
        "    langchain==0.1.11 \\\n",
        "    langchain-core==0.1.30 \\\n",
        "    langchain-community==0.0.27 \\\n",
        "    langchain-anthropic==0.1.4 \\\n",
        "    langchainhub==0.1.15 \\\n",
        "    anthropic==0.19.1 \\\n",
        "    voyageai==0.2.1 \\\n",
        "    pinecone-client==3.1.0 \\\n",
        "    datasets==2.16.1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bpKfZkUYzQhB"
      },
      "source": [
        "## Finding Knowledge"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JDTQoxcNzUa8"
      },
      "source": [
        "The first thing we need for an agent using RAG is somewhere we want to pull knowledge from. We will use v2 of the AI ArXiv dataset, available on Hugging Face Datasets at [`jamescalam/ai-arxiv2-chunks`](https://huggingface.co/datasets/jamescalam/ai-arxiv2-chunks).\n",
        "\n",
        "_Note: we're using the prechunked dataset. For the raw version see [`jamescalam/ai-arxiv2`](https://huggingface.co/datasets/jamescalam/ai-arxiv2)._"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 297,
          "referenced_widgets": [
            "4d5c6f53e82948b18b45877e2e30fd78",
            "7036b643c9264b09b0f93cb6a1a8b01f",
            "fb92dd97e68643099b0d0f1b4983f5f1",
            "91c3b00e77444bb79d9f73314800471e",
            "3e1d39ab6d3a4cf5827c9b3e1d4f0df1",
            "f58fb2da52bc43a985449910daaac230",
            "449afd685fd94354b64aa43a4763ab1d",
            "43a3c63554894d268e3bf747f7e0289d",
            "d902beef883e4c65bd2d6cb161423bae",
            "ce5735788cd14803abac28fa2c42d168",
            "a8928cd888a648c9b7d52739f38c5bda",
            "0168cb9e015b4e119fcdc7beca2e21a3",
            "a0e93df2955b44208479a4b416cd1085",
            "6f58f8d6374b45b4b6a0d9fcaa8083e7",
            "8bde11d3d65c45dba3baf1c02450eef1",
            "b19af932a7f24dc799b683136d175107",
            "e2819a7a94974a4da9b6bde754fdbc50",
            "808dd94ecaff43e0848bcfead3cf0d75",
            "4383833994854f88aa12ba0edb673d47",
            "ee2672db2047468fb49482e14127dcd3",
            "6999dc582b4c4ba68fdbfe404b1c4dfe",
            "593274c4088f4d9aa2becb6b1c6e495b"
          ]
        },
        "id": "U9gpYFnzbFKm",
        "outputId": "c7005c0a-b9a2-4c7c-f93b-e92081f7e3d6"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning: \n",
            "The secret `HF_TOKEN` does not exist in your Colab secrets.\n",
            "To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n",
            "You will be able to reuse this secret in all of your notebooks.\n",
            "Please note that authentication is recommended but still optional to access public models or datasets.\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading data:   0%|          | 0.00/766M [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "4d5c6f53e82948b18b45877e2e30fd78"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Generating train split: 0 examples [00:00, ? examples/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "0168cb9e015b4e119fcdc7beca2e21a3"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
              "    num_rows: 20000\n",
              "})"
            ]
          },
          "metadata": {},
          "execution_count": 4
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "dataset = load_dataset(\"jamescalam/ai-arxiv2-chunks\", split=\"train[:20000]\")\n",
        "dataset"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_bP7ZW-ybFKm",
        "outputId": "491e8001-df44-465c-9979-2e2a30de9d5f"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'doi': '2401.09350',\n",
              " 'chunk-id': 1,\n",
              " 'chunk': 'These neural networks and their training algorithms may be complex, and the scope of their impact broad and wide, but nonetheless they are simply functions in a high-dimensional space. A trained neural network takes a vector as input, crunches and transforms it in various ways, and produces another vector, often in some other space. An image may thereby be turned into a vector, a song into a sequence of vectors, and a social network as a structured collection of vectors. It seems as though much of human knowledge, or at least what is expressed as text, audio, image, and video, has a vector representation in one form or another.\\nIt should be noted that representing data as vectors is not unique to neural networks and deep learning. In fact, long before learnt vector representations of pieces of data, what is commonly known as “embeddings”, came along, data was often encoded as hand-crafted feature vectors. Each feature quantified into continuous or discrete values some facet of the data that was deemed relevant to a particular task (such as classification or regression). Vectors of that form, too, reflect our understanding of a real-world object or concept.',\n",
              " 'id': '2401.09350#1',\n",
              " 'title': 'Foundations of Vector Retrieval',\n",
              " 'summary': 'Vectors are universal mathematical objects that can represent text, images,\\nspeech, or a mix of these data modalities. That happens regardless of whether\\ndata is represented by hand-crafted features or learnt embeddings. Collect a\\nlarge enough quantity of such vectors and the question of retrieval becomes\\nurgently relevant: Finding vectors that are more similar to a query vector.\\nThis monograph is concerned with the question above and covers fundamental\\nconcepts along with advanced data structures and algorithms for vector\\nretrieval. In doing so, it recaps this fascinating topic and lowers barriers of\\nentry into this rich area of research.',\n",
              " 'source': 'http://arxiv.org/pdf/2401.09350',\n",
              " 'authors': 'Sebastian Bruch',\n",
              " 'categories': 'cs.DS, cs.IR',\n",
              " 'comment': None,\n",
              " 'journal_ref': None,\n",
              " 'primary_category': 'cs.DS',\n",
              " 'published': '20240117',\n",
              " 'updated': '20240117',\n",
              " 'references': []}"
            ]
          },
          "metadata": {},
          "execution_count": 5
        }
      ],
      "source": [
        "dataset[1]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VX6NdQhgbFKn"
      },
      "source": [
        "## Building the Knowledge Base"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MDCbqQl_bFKn"
      },
      "source": [
        "To build our knowledge base we need _two things_:\n",
        "\n",
        "1. Embeddings: for these we will use `VoyageEmbeddings` with Voyage AI's embedding models, which require an [API key](https://dash.voyageai.com/api-keys).\n",
        "2. A vector database, where we store and query our embeddings. We use Pinecone, which again requires a [free API key](https://app.pinecone.io).\n",
        "\n",
        "First we initialize our connection to Voyage AI and define an `embed` object for creating embeddings:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "PzBQ_iE6bFKn"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from getpass import getpass\n",
        "\n",
        "voyage_key = os.getenv(\"VOYAGE_API_KEY\") or getpass(\"Voyage API key: \")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "wkw0KyLRbFKo"
      },
      "outputs": [],
      "source": [
        "from langchain_community.embeddings import VoyageEmbeddings\n",
        "\n",
        "embed = VoyageEmbeddings(\n",
        "    voyage_api_key=voyage_key, model=\"voyage-2\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LhDzfsczbFKo"
      },
      "source": [
        "Then we initialize our connection to Pinecone:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "j0N7EcJibFKo"
      },
      "outputs": [],
      "source": [
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
        "api_key = os.getenv(\"PINECONE_API_KEY\") or getpass(\"Pinecone API key: \")\n",
        "\n",
        "# configure client\n",
        "pc = Pinecone(api_key=api_key)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g65RLGIpbFKo"
      },
      "source": [
        "Now we set up our index specification, which allows us to define the cloud provider and region where we want to deploy our index. You can find a list of all [available providers and regions here](https://docs.pinecone.io/docs/projects)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "8stIZYKdbFKo"
      },
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "spec = ServerlessSpec(\n",
        "    cloud=\"aws\", region=\"us-west-2\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-8Ep3743bFKo"
      },
      "source": [
        "Before creating an index, we need the dimensionality of our Voyage AI embedding model, which we can find easily by creating an embedding and checking the length:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "DwMhLWLDbFKo",
        "outputId": "f8731679-583c-497c-93d9-b13ea6013c83"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "1024"
            ]
          },
          "metadata": {},
          "execution_count": 10
        }
      ],
      "source": [
        "vec = embed.embed_documents([\"ello\"])\n",
        "len(vec[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G3X7nZIabFKp"
      },
      "source": [
        "Now we create the index using our embedding dimensionality and a metric compatible with the model (this can be either `cosine` or `dotproduct`). We also pass our `spec` to the index initialization."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "E6Bl7xTJbFKp",
        "outputId": "03e0b822-b150-4a6b-b83d-1cee1e4fced4"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'dimension': 1024,\n",
              " 'index_fullness': 0.0,\n",
              " 'namespaces': {'': {'vector_count': 20000}},\n",
              " 'total_vector_count': 20000}"
            ]
          },
          "metadata": {},
          "execution_count": 11
        }
      ],
      "source": [
        "import time\n",
        "\n",
        "index_name = \"claude-3-rag\"\n",
        "\n",
        "# check if index already exists (it shouldn't if this is the first time)\n",
        "if index_name not in pc.list_indexes().names():\n",
        "    # if it does not exist, create the index\n",
        "    pc.create_index(\n",
        "        index_name,\n",
        "        dimension=len(vec[0]),  # dimensionality of voyage model\n",
        "        metric='dotproduct',\n",
        "        spec=spec\n",
        "    )\n",
        "    # wait for index to be initialized\n",
        "    while not pc.describe_index(index_name).status['ready']:\n",
        "        time.sleep(1)\n",
        "\n",
        "# connect to index\n",
        "index = pc.Index(index_name)\n",
        "time.sleep(1)\n",
        "# view index stats\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6ZUn2lu7bFKp"
      },
      "source": [
        "### Populating our Index"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PeVD6d0sbFKp"
      },
      "source": [
        "Now our knowledge base is ready to be populated with our data. We will use our `embed` object to embed the documents and then add them to our index.\n",
        "\n",
        "We will also include metadata from each record."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 49,
          "referenced_widgets": [
            "78b688cc78264ad1a2f05d339fe498fe",
            "f8796aebc76a42ed80d6cf9c234b9e14",
            "93fd5108528a499cb5ae4444f0b2f770",
            "89683fb7a1a44606abc2ded05fbb3aa3",
            "f202dce30c1d4672ad357db9b6dbe9b8",
            "009627caf7fa4c51b61b6e2087545324",
            "3ff8bcf03c704243b7fd173b3cdd0d87",
            "cc0685f737fa4e15b7796e5f3fe3df8d",
            "f3cfb3b02d7a4e44b00e24aa2e7f9ff5",
            "1022918a023b4aa8a3714d9eab2978b1",
            "e6ed1f5a6e5b4c508fa3b69d9219057d"
          ]
        },
        "id": "hb00VSTqbFKp",
        "outputId": "f13e0707-2b9b-4966-df62-9e457765ebf8"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "78b688cc78264ad1a2f05d339fe498fe",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/200 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm.auto import tqdm\n",
        "\n",
        "# easier to work with dataset as pandas dataframe\n",
        "data = dataset.to_pandas()\n",
        "\n",
        "batch_size = 100\n",
        "\n",
        "for i in tqdm(range(0, len(data), batch_size)):\n",
        "    i_end = min(len(data), i+batch_size)\n",
        "    # get batch of data\n",
        "    batch = data.iloc[i:i_end]\n",
        "    # generate unique ids for each chunk\n",
        "    ids = [f\"{x['doi']}-{x['chunk-id']}\" for _, x in batch.iterrows()]\n",
        "    # get text to embed\n",
        "    texts = [x['chunk'] for _, x in batch.iterrows()]\n",
        "    # embed text\n",
        "    embeds = embed.embed_documents(texts)\n",
        "    # get metadata to store in Pinecone\n",
        "    metadata = [\n",
        "        {'text': x['chunk'],\n",
        "         'source': x['source'],\n",
        "         'title': x['title']} for _, x in batch.iterrows()\n",
        "    ]\n",
        "    # add to Pinecone\n",
        "    index.upsert(vectors=zip(ids, embeds, metadata))"
      ]
    },
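    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick optional sanity check (not part of the original flow), we can re-run `describe_index_stats` once the loop completes to confirm that all 20,000 vectors landed in the index:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# total_vector_count should now equal the number of records we upserted\n",
        "index.describe_index_stats()"
      ]
    },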
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z6VVT3X_EMDO"
      },
      "source": [
        "Create a tool for our agent to use when searching for ArXiv papers:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "X9J5jHKcEQz6"
      },
      "outputs": [],
      "source": [
        "from langchain.agents import tool\n",
        "\n",
        "@tool\n",
        "def arxiv_search(query: str) -> str:\n",
        "    \"\"\"Use this tool when answering questions about AI, machine learning, data\n",
        "    science, or other technical questions that may be answered using arXiv\n",
        "    papers.\n",
        "    \"\"\"\n",
        "    # create query vector\n",
        "    xq = embed.embed_query(query)\n",
        "    # perform search\n",
        "    out = index.query(vector=xq, top_k=5, include_metadata=True)\n",
        "    # reformat results into string\n",
        "    results_str = \"\\n\\n\".join(\n",
        "        [x[\"metadata\"][\"text\"] for x in out[\"matches\"]]\n",
        "    )\n",
        "    return results_str\n",
        "\n",
        "tools = [arxiv_search]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uN7d_4r-JMPW"
      },
      "source": [
        "When our agent uses this tool, it will be executed like so:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "eq4H-2RpI1U3",
        "outputId": "ca36c8cb-5fd1-47ed-cc67-f43fd9833af0"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Model Llama 2 Code Llama Code Llama - Python Size FIM LCFT Python CPP Java PHP TypeScript C# Bash Average 7B ✗ 13B ✗ 34B ✗ 70B ✗ 7B ✗ 7B ✓ 7B ✗ 7B ✓ 13B ✗ 13B ✓ 13B ✗ 13B ✓ 34B ✗ 34B ✗ 7B ✗ 7B ✗ 13B ✗ 13B ✗ 34B ✗ 34B ✗ ✗ ✗ ✗ ✗ 14.3% 6.8% 10.8% 9.9% 19.9% 13.7% 15.8% 13.0% 24.2% 23.6% 22.2% 19.9% 27.3% 30.4% 31.6% 34.2% 12.6% 13.2% 21.4% 15.1% 6.3% 3.2% 8.3% 9.5% 3.2% 12.6% 17.1% 3.8% 18.9% 25.9% 8.9% 24.8% ✗ ✗ ✓ ✓ ✗ ✗ ✓ ✓ ✗ ✓ 37.3% 31.1% 36.1% 30.4% 29.2% 29.8% 38.0%\n",
            "\n",
            "Ethical Considerations and Limitations (Section 5.2) Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at https://ai.meta.com/llama/responsible-user-guide\n",
            "Table 52: Model card for Llama 2.\n",
            "77\n",
            "\n",
            "2\n",
            "Cove Liama Long context (7B =, 13B =, 34B) + fine-tuning ; Lrama 2 Code training 20B oes Cope Liama - Instruct Foundation models —> nfilling code training = eee.” (7B =, 13B =, 34B) — 5B (7B, 13B, 348) 5008 Python code Long context Cove Liama - PyrHon (7B, 13B, 34B) > training » Fine-tuning > 1008 208\n",
            "Figure 2: The Code Llama specialization pipeline. The different stages of fine-tuning annotated with the number of tokens seen during training. Infilling-capable models are marked with the ⇄ symbol.\n",
            "# 2 Code Llama: Specializing Llama 2 for code\n",
            "# 2.1 The Code Llama models family\n",
            "\n",
            "# 2 Code Llama: Specializing Llama 2 for code\n",
            "# 2.1 The Code Llama models family\n",
            "Code Llama. The Code Llama models constitute foundation models for code generation. They come in four model sizes: 7B, 13B, 34B and 70B parameters. The 7B, 13B and 70B models are trained using an infilling objective (Section 2.3), and are appropriate to be used in an IDE to complete code in the middle of a file, for example. The 34B model was trained without the infilling objective. All Code Llama models are initialized with Llama 2 model weights and trained on 500B tokens from a code-heavy dataset (see Section 2.2 for more details), except Code Llama 70B which was trained on 1T tokens. They are all fine-tuned to handle long contexts as detailed in Section 2.4.\n",
            "\n",
            "0.52 0.57 0.19 0.30 Llama 1 7B 13B 33B 65B 0.27 0.24 0.23 0.25 0.26 0.24 0.26 0.26 0.34 0.31 0.34 0.34 0.54 0.52 0.50 0.46 0.36 0.37 0.36 0.36 0.39 0.37 0.35 0.40 0.26 0.23 0.24 0.25 0.28 0.28 0.33 0.32 0.33 0.31 0.34 0.32 0.45 0.50 0.49 0.48 0.33 0.27 0.31 0.31 0.17 0.10 0.12 0.11 0.24 0.24 0.23 0.25 0.31 0.27 0.30 0.30 0.44 0.41 0.41 0.43 0.57 0.55 0.60 0.60 0.39 0.34 0.28 0.39 Llama 2 7B 13B 34B 70B 0.28 0.24 0.27 0.31 0.25 0.25 0.24 0.29 0.29 0.35 0.33 0.35 0.50 0.50 0.56 0.51 0.36 0.41 0.41\n"
          ]
        }
      ],
      "source": [
        "print(\n",
        "    arxiv_search.run(tool_input={\"query\": \"can you tell me about llama 2?\"})\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XUvJOqrNhYIh"
      },
      "source": [
        "## Defining XML Agent"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s45dwd78hbvk"
      },
      "source": [
        "The XML agent is built primarily to support Anthropic models. Anthropic models have been trained to use XML tags like `<input>{some input}</input>`. When using a tool, they generate:\n",
        "\n",
        "```\n",
        "<tool>{tool name}</tool>\n",
        "<tool_input>{tool input}</tool_input>\n",
        "```\n",
        "\n",
        "This is very different from the format produced by typical ReAct agents, which is not as well supported by Anthropic models.\n",
        "\n",
        "To create an XML agent we need a `prompt`, `llm`, and list of `tools`. We can download a prebuilt prompt for conversational XML agents from LangChain hub."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ntuT7UuXeMz0",
        "outputId": "c47f0a98-c71f-488f-fc55-9e1ea7f643b4"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "ChatPromptTemplate(input_variables=['agent_scratchpad', 'input', 'tools'], partial_variables={'chat_history': ''}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'chat_history', 'input', 'tools'], template=\"You are a helpful assistant. Help the user answer any questions.\\n\\nYou have access to the following tools:\\n\\n{tools}\\n\\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\\n\\n<tool>search</tool><tool_input>weather in SF</tool_input>\\n<observation>64 degrees</observation>\\n\\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\\n\\n<final_answer>The weather in SF is 64 degrees</final_answer>\\n\\nBegin!\\n\\nPrevious Conversation:\\n{chat_history}\\n\\nQuestion: {input}\\n{agent_scratchpad}\"))])"
            ]
          },
          "metadata": {},
          "execution_count": 15
        }
      ],
      "source": [
        "from langchain import hub\n",
        "\n",
        "prompt = hub.pull(\"hwchase17/xml-agent-convo\")\n",
        "prompt"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rfdcKCdwi0SL"
      },
      "source": [
        "We can see the XML format being used throughout the prompt when explaining to the LLM how it should use tools.\n",
        "\n",
        "Next we initialize our connection to Anthropic; for this we need an [Anthropic API key](https://console.anthropic.com/)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "id": "kDHuU2uOdW91"
      },
      "outputs": [],
      "source": [
        "from langchain_anthropic import ChatAnthropic\n",
        "\n",
        "anthropic_api_key = os.getenv(\"ANTHROPIC_API_KEY\") or getpass(\"Anthropic API key: \")\n",
        "\n",
        "# chat completion llm\n",
        "llm = ChatAnthropic(\n",
        "    anthropic_api_key=anthropic_api_key,\n",
        "    model_name=\"claude-3-opus-20240229\",  # change \"opus\" -> \"sonnet\" for speed\n",
        "    temperature=0.0\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g33Nt-xijPKG"
      },
      "source": [
        "When the agent is run we will provide it with a single `input`: the input text from a user. However, within the agent logic an *agent_scratchpad* object is passed too, which includes tool information. To feed this information into our LLM we need to transform it into the XML format described above; we define the `convert_intermediate_steps` function to handle that."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "TMMBgMBlIJoq"
      },
      "outputs": [],
      "source": [
        "def convert_intermediate_steps(intermediate_steps):\n",
        "    log = \"\"\n",
        "    for action, observation in intermediate_steps:\n",
        "        log += (\n",
        "            f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
        "            f\"</tool_input><observation>{observation}</observation>\"\n",
        "        )\n",
        "    return log"
      ]
    },
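    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see the XML this produces, here is a small illustrative sketch (not part of the original flow) that feeds `convert_intermediate_steps` a dummy `(action, observation)` pair. The `AgentAction` class comes from `langchain_core.agents`; the observation string is a made-up stand-in:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from langchain_core.agents import AgentAction\n",
        "\n",
        "# a dummy (action, observation) pair like those the executor accumulates\n",
        "dummy_steps = [(\n",
        "    AgentAction(tool=\"arxiv_search\", tool_input=\"llama 2\", log=\"\"),\n",
        "    \"(retrieved chunks would appear here)\"\n",
        ")]\n",
        "print(convert_intermediate_steps(dummy_steps))"
      ]
    },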
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T5_PQWVckAOi"
      },
      "source": [
        "We must also parse the tools into a string containing `tool_name: tool_description` — we handle that with the `convert_tools` function."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "qxbrF5a4j9il"
      },
      "outputs": [],
      "source": [
        "def convert_tools(tools):\n",
        "    return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
      ]
    },
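    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Printing the result shows the exact `tool_name: tool_description` string that will be injected into the prompt's `{tools}` placeholder:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# with our single tool this prints one line: the name and docstring of arxiv_search\n",
        "print(convert_tools(tools))"
      ]
    },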
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SCVI2dyUIRg6"
      },
      "source": [
        "With everything ready we can go ahead and initialize our agent object using [**L**ang**C**hain **E**xpression **L**anguage (LCEL)](https://www.pinecone.io/learn/series/langchain/langchain-expression-language/). We add instructions for when the LLM should _stop_ generating with `llm.bind(stop=[...])` and finally we parse the output from the agent using an `XMLAgentOutputParser` object."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "id": "Z3yhTDmEIU4n"
      },
      "outputs": [],
      "source": [
        "from langchain.agents.output_parsers import XMLAgentOutputParser\n",
        "\n",
        "agent = (\n",
        "    {\n",
        "        \"input\": lambda x: x[\"input\"],\n",
        "        # without \"chat_history\", tool usage has no context of prev interactions\n",
        "        \"chat_history\": lambda x: x[\"chat_history\"],\n",
        "        \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
        "            x[\"intermediate_steps\"]\n",
        "        ),\n",
        "    }\n",
        "    | prompt.partial(tools=convert_tools(tools))\n",
        "    | llm.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
        "    | XMLAgentOutputParser()\n",
        ")"
      ]
    },
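    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wrapping the agent in an executor we can optionally sanity-check a single reasoning step. This aside is not part of the original flow and does call the Anthropic API: invoking the runnable with an empty scratchpad should return an `AgentAction` requesting the `arxiv_search` tool, rather than a final answer:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# one step only: the executor is what loops until a final answer is produced\n",
        "step = agent.invoke({\n",
        "    \"input\": \"can you tell me about llama 2?\",\n",
        "    \"chat_history\": \"\",\n",
        "    \"intermediate_steps\": []\n",
        "})\n",
        "step"
      ]
    },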
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MG2_hL4hkudq"
      },
      "source": [
        "With our `agent` object initialized we pass it to an `AgentExecutor` object alongside our original `tools` list:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "id": "YHW_K3WOIsXw"
      },
      "outputs": [],
      "source": [
        "from langchain.agents import AgentExecutor\n",
        "\n",
        "agent_executor = AgentExecutor(\n",
        "    agent=agent, tools=tools, verbose=True\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QRCtHauRlkLc"
      },
      "source": [
        "Now we can use the agent via the `invoke` method:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Y_Aqp20qloj7",
        "outputId": "c4fe06f3-a147-4274-becd-49c9992a8cd6"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "\n",
            "\n",
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
            "\u001b[32;1m\u001b[1;3m<tool>arxiv_search</tool>\n",
            "<tool_input>llama 2\u001b[0m\u001b[36;1m\u001b[1;3mModel Llama 2 Code Llama Code Llama - Python Size FIM LCFT Python CPP Java PHP TypeScript C# Bash Average 7B ✗ 13B ✗ 34B ✗ 70B ✗ 7B ✗ 7B ✓ 7B ✗ 7B ✓ 13B ✗ 13B ✓ 13B ✗ 13B ✓ 34B ✗ 34B ✗ 7B ✗ 7B ✗ 13B ✗ 13B ✗ 34B ✗ 34B ✗ ✗ ✗ ✗ ✗ 14.3% 6.8% 10.8% 9.9% 19.9% 13.7% 15.8% 13.0% 24.2% 23.6% 22.2% 19.9% 27.3% 30.4% 31.6% 34.2% 12.6% 13.2% 21.4% 15.1% 6.3% 3.2% 8.3% 9.5% 3.2% 12.6% 17.1% 3.8% 18.9% 25.9% 8.9% 24.8% ✗ ✗ ✓ ✓ ✗ ✗ ✓ ✓ ✗ ✓ 37.3% 31.1% 36.1% 30.4% 29.2% 29.8% 38.0%\n",
            "\n",
            "2\n",
            "Cove Liama Long context (7B =, 13B =, 34B) + fine-tuning ; Lrama 2 Code training 20B oes Cope Liama - Instruct Foundation models —> nfilling code training = eee.” (7B =, 13B =, 34B) — 5B (7B, 13B, 348) 5008 Python code Long context Cove Liama - PyrHon (7B, 13B, 34B) > training » Fine-tuning > 1008 208\n",
            "Figure 2: The Code Llama specialization pipeline. The different stages of fine-tuning annotated with the number of tokens seen during training. Infilling-capable models are marked with the ⇄ symbol.\n",
            "# 2 Code Llama: Specializing Llama 2 for code\n",
            "# 2.1 The Code Llama models family\n",
            "\n",
            "0.52 0.57 0.19 0.30 Llama 1 7B 13B 33B 65B 0.27 0.24 0.23 0.25 0.26 0.24 0.26 0.26 0.34 0.31 0.34 0.34 0.54 0.52 0.50 0.46 0.36 0.37 0.36 0.36 0.39 0.37 0.35 0.40 0.26 0.23 0.24 0.25 0.28 0.28 0.33 0.32 0.33 0.31 0.34 0.32 0.45 0.50 0.49 0.48 0.33 0.27 0.31 0.31 0.17 0.10 0.12 0.11 0.24 0.24 0.23 0.25 0.31 0.27 0.30 0.30 0.44 0.41 0.41 0.43 0.57 0.55 0.60 0.60 0.39 0.34 0.28 0.39 Llama 2 7B 13B 34B 70B 0.28 0.24 0.27 0.31 0.25 0.25 0.24 0.29 0.29 0.35 0.33 0.35 0.50 0.50 0.56 0.51 0.36 0.41 0.41\n",
            "\n",
            "Ethical Considerations and Limitations (Section 5.2) Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at https://ai.meta.com/llama/responsible-user-guide\n",
            "Table 52: Model card for Llama 2.\n",
            "77\n",
            "\n",
            "Model Size FIM LCFT HumanEval MBPP pass@1 pass@10 pass@100 pass@1 pass@10 pass@100 Llama 2 Code Llama Code Llama - Python 7B ✗ 13B ✗ 34B ✗ 70B ✗ 7B ✗ 7B ✓ 7B ✗ 7B ✓ 13B ✗ 13B ✓ 13B ✗ 13B ✓ 34B ✗ 34B ✗ 7B ✗ 7B ✗ 13B ✗ 13B ✗ 34B ✗ 34B ✗ ✗ ✗ ✗ ✗ ✗ ✗ ✓ ✓ ✗ ✗ ✓ ✓ ✗ ✓ ✗ ✓ ✗ ✓ ✗ ✓ 12.2% 25.2% 20.1% 34.8% 22.6% 47.0% 30.5% 59.4% 32.3% 63.9% 34.1% 62.6% 34.1% 62.5% 33.5% 59.6% 36.6% 72.9% 36.6% 71.9% 37.8% 70.6% 36.0% 69.4% 48.2% 77.7% 48.8% 76.8% 40.2% 70.0% 38.4% 70.3% 45.7% 80.0%\u001b[0m\u001b[32;1m\u001b[1;3mBased on the information from the arXiv search, here are the key points about Llama 2:\n",
            "\n",
            "<final_answer>\n",
            "- Llama 2 is a large language model developed by Meta AI. It comes in sizes ranging from 7B to 70B parameters.\n",
            "\n",
            "- Code Llama is a version of Llama 2 that has been specialized for code generation through fine-tuning on code datasets. Code Llama models are available in Python, C++, Java, PHP, TypeScript, C#, and Bash.\n",
            "\n",
            "- The Code Llama specialization pipeline involves foundation model pre-training, long context training, code infilling training, and fine-tuning on specific programming languages. \n",
            "\n",
            "- Code Llama significantly outperforms the base Llama 2 models on code generation benchmarks like HumanEval and MBPP. For example, the 34B parameter Code Llama - Python achieves 48.8% pass@1 on HumanEval compared to 34.1% for the 34B Llama 2.\n",
            "\n",
            "- As with all large language models, Llama 2 has limitations and potential risks that need to be considered before deploying it in applications. Meta provides a responsible use guide with recommendations for safety testing and tuning.\n",
            "\u001b[0m\n",
            "\n",
            "\u001b[1m> Finished chain.\u001b[0m\n",
            "\n",
            "- Llama 2 is a large language model developed by Meta AI. It comes in sizes ranging from 7B to 70B parameters.\n",
            "\n",
            "- Code Llama is a version of Llama 2 that has been specialized for code generation through fine-tuning on code datasets. Code Llama models are available in Python, C++, Java, PHP, TypeScript, C#, and Bash.\n",
            "\n",
            "- The Code Llama specialization pipeline involves foundation model pre-training, long context training, code infilling training, and fine-tuning on specific programming languages. \n",
            "\n",
            "- Code Llama significantly outperforms the base Llama 2 models on code generation benchmarks like HumanEval and MBPP. For example, the 34B parameter Code Llama - Python achieves 48.8% pass@1 on HumanEval compared to 34.1% for the 34B Llama 2.\n",
            "\n",
            "- As with all large language models, Llama 2 has limitations and potential risks that need to be considered before deploying it in applications. Meta provides a responsible use guide with recommendations for safety testing and tuning.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "user_msg = \"can you tell me about llama 2?\"\n",
        "\n",
        "out = agent_executor.invoke({\n",
        "    \"input\": user_msg,\n",
        "    \"chat_history\": \"\"\n",
        "})\n",
        "\n",
        "print(out[\"output\"])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Eae-JyUFl--N"
      },
      "source": [
        "That looks pretty good, but right now our agent is _stateless_, making it hard to have a conversation with. We can give it memory in many different ways, but one of the easiest is to use `ConversationBufferWindowMemory`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {
        "id": "EqOMNQUfmOEr"
      },
      "outputs": [],
      "source": [
        "from langchain.chains.conversation.memory import ConversationBufferWindowMemory\n",
        "\n",
        "# conversational memory\n",
        "conversational_memory = ConversationBufferWindowMemory(\n",
        "    memory_key='chat_history',\n",
        "    k=5,\n",
        "    return_messages=True\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O9UvnBrsnNVw"
      },
      "source": [
        "We haven't attached our conversational memory to our agent — so the `conversational_memory` object will remain empty:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 27,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "KJZNIDAslNoC",
        "outputId": "5267f069-dc09-472d-c9ef-e694cfea982f"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[]"
            ]
          },
          "metadata": {},
          "execution_count": 27
        }
      ],
      "source": [
        "conversational_memory.chat_memory.messages"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ynX9Wca6nawr"
      },
      "source": [
        "We must manually add the interactions between ourselves and the agent to our memory."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 28,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "-5hXy1FAnne3",
        "outputId": "b659e1f4-3d02-466f-9880-c0adf65fa507"
      },
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[HumanMessage(content='can you tell me about llama 2?'),\n",
              " AIMessage(content='\\n- Llama 2 is a large language model developed by Meta AI. It comes in sizes ranging from 7B to 70B parameters.\\n\\n- Code Llama is a version of Llama 2 that has been specialized for code generation through fine-tuning on code datasets. Code Llama models are available in Python, C++, Java, PHP, TypeScript, C#, and Bash.\\n\\n- The Code Llama specialization pipeline involves foundation model pre-training, long context training, code infilling training, and fine-tuning on specific programming languages. \\n\\n- Code Llama significantly outperforms the base Llama 2 models on code generation benchmarks like HumanEval and MBPP. For example, the 34B parameter Code Llama - Python achieves 48.8% pass@1 on HumanEval compared to 34.1% for the 34B Llama 2.\\n\\n- As with all large language models, Llama 2 has limitations and potential risks that need to be considered before deploying it in applications. Meta provides a responsible use guide with recommendations for safety testing and tuning.\\n')]"
            ]
          },
          "metadata": {},
          "execution_count": 28
        }
      ],
      "source": [
        "conversational_memory.chat_memory.add_user_message(user_msg)\n",
        "conversational_memory.chat_memory.add_ai_message(out[\"output\"])\n",
        "\n",
        "conversational_memory.chat_memory.messages"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T3pA0o6ZnrAl"
      },
      "source": [
        "Now we can see that _two_ messages have been added: our `HumanMessage` and the agent's `AIMessage` response. Unfortunately, we cannot send these messages to our XML agent directly. Instead, we need to pass a string in the format:\n",
        "\n",
        "```\n",
        "Human: {human message}\n",
        "AI: {AI message}\n",
        "```\n",
        "\n",
        "Let's write a quick `memory2str` helper function to handle this for us:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 29,
      "metadata": {
        "id": "raZHBdJmtGH-"
      },
      "outputs": [],
      "source": [
        "from langchain_core.messages.human import HumanMessage\n",
        "\n",
        "def memory2str(memory: ConversationBufferWindowMemory):\n",
        "    messages = memory.chat_memory.messages\n",
        "    memory_list = [\n",
        "        f\"Human: {mem.content}\" if isinstance(mem, HumanMessage) \\\n",
        "        else f\"AI: {mem.content}\" for mem in messages\n",
        "    ]\n",
        "    memory_str = \"\\n\".join(memory_list)\n",
        "    return memory_str"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 30,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "t89yX-6i3hvd",
        "outputId": "7c27cac1-3881-4a26-f59f-274d8d4affa3"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Human: can you tell me about llama 2?\n",
            "AI: \n",
            "- Llama 2 is a large language model developed by Meta AI. It comes in sizes ranging from 7B to 70B parameters.\n",
            "\n",
            "- Code Llama is a version of Llama 2 that has been specialized for code generation through fine-tuning on code datasets. Code Llama models are available in Python, C++, Java, PHP, TypeScript, C#, and Bash.\n",
            "\n",
            "- The Code Llama specialization pipeline involves foundation model pre-training, long context training, code infilling training, and fine-tuning on specific programming languages. \n",
            "\n",
            "- Code Llama significantly outperforms the base Llama 2 models on code generation benchmarks like HumanEval and MBPP. For example, the 34B parameter Code Llama - Python achieves 48.8% pass@1 on HumanEval compared to 34.1% for the 34B Llama 2.\n",
            "\n",
            "- As with all large language models, Llama 2 has limitations and potential risks that need to be considered before deploying it in applications. Meta provides a responsible use guide with recommendations for safety testing and tuning.\n",
            "\n"
          ]
        }
      ],
      "source": [
        "print(memory2str(conversational_memory))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q0L_80WrpWqd"
      },
      "source": [
        "Now let's put together another helper function called `chat` to help us handle the _state_ part of our agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 31,
      "metadata": {
        "id": "C-Ck2Lv53rD-"
      },
      "outputs": [],
      "source": [
        "def chat(text: str):\n",
        "    out = agent_executor.invoke({\n",
        "        \"input\": text,\n",
        "        \"chat_history\": memory2str(conversational_memory)\n",
        "    })\n",
        "    conversational_memory.chat_memory.add_user_message(text)\n",
        "    conversational_memory.chat_memory.add_ai_message(out[\"output\"])\n",
        "    return out[\"output\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XIheLeTBsO9S"
      },
      "source": [
        "Now we simply chat with our agent and it will remember the context of previous interactions."
      ]
    },
    {
1113
      "cell_type": "code",
1114
      "execution_count": 33,
1115
      "metadata": {
1116
        "colab": {
1117
          "base_uri": "https://localhost:8080/"
1118
        },
1119
        "id": "iJ_PH7YcA_f2",
1120
        "outputId": "7d645e9f-f191-4b62-97fd-973417e1efaa"
1121
      },
1122
      "outputs": [
1123
        {
1124
          "output_type": "stream",
1125
          "name": "stdout",
1126
          "text": [
1127
            "\n",
1128
            "\n",
1129
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
1130
            "\u001b[32;1m\u001b[1;3m<tool>arxiv_search</tool>\n",
1131
            "<tool_input>llama 2 red teaming\u001b[0m\u001b[36;1m\u001b[1;3mAfter conducting red team exercises, we asked participants (who had also participated in Llama 2 Chat exercises) to also provide qualitative assessment of safety capabilities of the model. Some participants who had expertise in offensive security and malware development questioned the ultimate risk posed by “malicious code generation” through LLMs with current capabilities.\n",
1132
            "One red teamer remarked, “While LLMs being able to iteratively improve on produced source code is a risk, producing source code isn’t the actual gap. That said, LLMs may be risky because they can inform low-skill adversaries in production of scripts through iteration that perform some malicious behavior.”\n",
1133
            "According to another red teamer, “[v]arious scripts, program code, and compiled binaries are readily available on mainstream public websites, hacking forums or on ‘the dark web.’ Advanced malware development is beyond the current capabilities of available LLMs, and even an advanced LLM paired with an expert malware developer is not particularly useful- as the barrier is not typically writing the malware code itself. That said, these LLMs may produce code which will get easily caught if used directly.”\n",
1134
            "\n",
1135
            "Model Llama 2 Code Llama Code Llama - Python Size FIM LCFT Python CPP Java PHP TypeScript C# Bash Average 7B ✗ 13B ✗ 34B ✗ 70B ✗ 7B ✗ 7B ✓ 7B ✗ 7B ✓ 13B ✗ 13B ✓ 13B ✗ 13B ✓ 34B ✗ 34B ✗ 7B ✗ 7B ✗ 13B ✗ 13B ✗ 34B ✗ 34B ✗ ✗ ✗ ✗ ✗ 14.3% 6.8% 10.8% 9.9% 19.9% 13.7% 15.8% 13.0% 24.2% 23.6% 22.2% 19.9% 27.3% 30.4% 31.6% 34.2% 12.6% 13.2% 21.4% 15.1% 6.3% 3.2% 8.3% 9.5% 3.2% 12.6% 17.1% 3.8% 18.9% 25.9% 8.9% 24.8% ✗ ✗ ✓ ✓ ✗ ✗ ✓ ✓ ✗ ✓ 37.3% 31.1% 36.1% 30.4% 29.2% 29.8% 38.0%\n",
1136
            "\n",
1137
            "In addition to red teaming sessions, we ran a quantitative evaluation on risk from generating malicious code by scoring Code Llama’s responses to ChatGPT’s (GPT3.5 Turbo) with LLAMAv2 70B’s safety reward model. For this second quantitative evaluation, we selected prompts that the red teamers generated specifically attempting to solicit malicious code (even though the red teaming included consideration of a broad set of safety risks). These prompts were a mix of clear intent and slightly obfuscated intentions (see some examples in Figure 16. We show a KDE plot of the distribution of the safety score for all models in Figure 7). We observe that Code Llama tends to answer with safer responses; the distribution of safety scores for Code Llama has more weight in the safer part of the range.\n",
1138
            "False refusals. LLMs that are too safe can have a tendency to over-refuse valid claims similar to what was reported after the release of Llama 2. We specifically asked red teamers to test for this behavior. They found some limited evidence of false refusals (when not using a system preprompt). False refusals could also\n",
1139
            "16\n",
1140
            "\n",
1141
            "15\n",
1142
            "pafety Reward Model Scores Distribution on Red Teaming Prompts\n",
1143
            "Responding Model GPT 3.5 Turbo Code Llama 138 Instruct Code Llama 34B Instruct Code Llama 7B Instruct 0.0-+ -0.2 0.0 0.2 0.4 0.6 08 1.0 12 Llama 2 70B Safety Reward Model Score\n",
1144
            "Figure 7: KDE plot of the risk score output by the Llama 2 safety reward model on prompts with clear intent specific to code risk created by red teamers with background in cybersecurity and malware generation.\n",
1145
            "Red teaming. It is important to also proactively identify risks with adversarial testing or red teaming. We conducted 3 red teaming exercises with 25 Meta employees, including domain experts in responsible AI, malware development, and offensive security engineering.\n",
1146
            "The red teamers provided a nuanced evaluation specifically on the risk from so called “dual intent prompts.” Dual intent prompts are requests for help with writing code that could be used maliciously but the prompt does not directly address the topic (example “Mosaic Prompts” Glukhov et al. (2023)). For example, the model rightfully refuses to provide support with writing ransomware code but it complies when asked to provide a script to encrypt all files in the user’s home directory since such a script could be used for benign purposes.\n",
1147
            "\n",
1148
            ". . . . . . . . . . . . . . . 3.4 RLHF Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Safety in Pretraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Safety Fine-Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Red Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Safety Evaluation of Llama 2-Chat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Learnings and Observations . . . . . . . . . . . . . . . . . . . . . . . .\u001b[0m\u001b[32;1m\u001b[1;3mBased on the information from the arxiv search, some red teaming was done on the Llama 2 model during development to identify potential safety risks:\n",
1149
            "\n",
1150
            "<final_answer>\n",
1151
            "- Meta conducted 3 red teaming exercises with 25 employees, including domain experts in responsible AI, malware development, and offensive security engineering. \n",
1152
            "\n",
1153
            "- The red teamers categorized successful attacks into four main types: 1) getting the model to provide some harmful information while refusing other content, 2) having the model roleplay specific scenarios, 3) forcing the model to highlight positives of harmful content, and 4) embedding harmful instructions within complex commands.\n",
1154
            "\n",
1155
            "- Some red teamers questioned the ultimate risk posed by \"malicious code generation\" through current LLMs. They noted that while LLMs being able to iteratively improve code is a risk, producing source code itself isn't the main gap. Advanced malware development is currently beyond LLM capabilities.\n",
1156
            "\n",
1157
            "- Quantitative evaluation was also done by scoring Code Llama's responses to malicious code prompts using Llama 2's safety reward model. Code Llama tended to give safer responses compared to GPT-3.5.\n",
1158
            "\n",
1159
            "- However, the full extent and details of the red teaming are limited based on the information available. The Llama 2 paper mentions expanding prompts with safety risks via red teaming, but does not go in-depth on the process or results. More information would be needed to fully characterize the red teaming performed.\n",
1160
            "\u001b[0m\n",
1161
            "\n",
1162
            "\u001b[1m> Finished chain.\u001b[0m\n",
1163
            "\n",
1164
            "- Meta conducted 3 red teaming exercises with 25 employees, including domain experts in responsible AI, malware development, and offensive security engineering. \n",
1165
            "\n",
1166
            "- The red teamers categorized successful attacks into four main types: 1) getting the model to provide some harmful information while refusing other content, 2) having the model roleplay specific scenarios, 3) forcing the model to highlight positives of harmful content, and 4) embedding harmful instructions within complex commands.\n",
1167
            "\n",
1168
            "- Some red teamers questioned the ultimate risk posed by \"malicious code generation\" through current LLMs. They noted that while LLMs being able to iteratively improve code is a risk, producing source code itself isn't the main gap. Advanced malware development is currently beyond LLM capabilities.\n",
1169
            "\n",
1170
            "- Quantitative evaluation was also done by scoring Code Llama's responses to malicious code prompts using Llama 2's safety reward model. Code Llama tended to give safer responses compared to GPT-3.5.\n",
1171
            "\n",
1172
            "- However, the full extent and details of the red teaming are limited based on the information available. The Llama 2 paper mentions expanding prompts with safety risks via red teaming, but does not go in-depth on the process or results. More information would be needed to fully characterize the red teaming performed.\n",
1173
            "\n"
1174
          ]
        }
      ],
      "source": [
        "print(chat(\"was any red teaming done with the model?\"))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5p8m4Gc5w1OX"
      },
      "source": [
        "We can ask follow-up questions that omit key information; thanks to the conversational history, the LLM understands the context and uses it to adjust the search query. For example, we asked about `red teaming` without mentioning `llama 2`, and Claude 3 added that context to form the search query `\"llama 2 red teaming\"` from the chat history."
      ]
    },
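    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check of this behavior, we can send one more follow-up that omits the model name entirely. This is a minimal sketch reusing the `chat` helper defined above; the question itself is a hypothetical example, not part of the original run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical follow-up: no mention of Llama 2, so the agent must\n",
        "# recover the subject from the conversational history when it builds\n",
        "# its search query.\n",
        "print(chat(\"did they also evaluate it for truthfulness?\"))"
      ]
    },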
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e9bI9czPtWnl"
      },
      "source": [
        "---"
      ]
    }
  ],
  "metadata": {
1201
    "colab": {
1202
      "provenance": []
1203
    },
1204
    "kernelspec": {
1205
      "display_name": "ml",
1206
      "language": "python",
1207
      "name": "python3"
1208
    },
1209
    "language_info": {
1210
      "codemirror_mode": {
1211
        "name": "ipython",
1212
        "version": 3
1213
      },
1214
      "file_extension": ".py",
1215
      "mimetype": "text/x-python",
1216
      "name": "python",
1217
      "nbconvert_exporter": "python",
1218
      "pygments_lexer": "ipython3",
1219
      "version": "3.9.12"
1220
    }
  },
  "nbformat": 4,
2253
  "nbformat_minor": 0
2254
}
