1
{
2
  "cells": [
3
    {
4
      "cell_type": "markdown",
5
      "metadata": {},
6
      "source": [
7
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/v1/xml-agents.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/v1/xml-agents.ipynb)"
8
      ]
9
    },
10
    {
11
      "cell_type": "code",
12
      "execution_count": 1,
13
      "metadata": {
14
        "colab": {
15
          "base_uri": "https://localhost:8080/"
16
        },
17
        "id": "1eOnr6z_zLoc",
18
        "outputId": "67999690-7f39-48c2-99d7-bf841f26e3d9"
19
      },
20
      "outputs": [
21
        {
22
          "name": "stdout",
23
          "output_type": "stream",
24
          "text": [
25
            "Python 3.10.12\n"
26
          ]
27
        }
28
      ],
29
      "source": [
30
        "!python --version"
31
      ]
32
    },
33
    {
34
      "cell_type": "markdown",
35
      "metadata": {
36
        "id": "qQG4iSxVxw8f"
37
      },
38
      "source": [
39
        "# XML Agents with RAG and LangChain v1"
40
      ]
41
    },
42
    {
43
      "cell_type": "markdown",
44
      "metadata": {
45
        "id": "ov6TCS7bx1oI"
46
      },
47
      "source": [
48
        "LangChain v1 brought a lot of changes and when comparing the LangChain of versions `0.0.3xx` to `0.1.x` there's plenty of changes to the preferred way of doing things. That is very much the case for agents.\n",
49
        "\n",
50
        "The way that we initialize and use agents is generally clearer than it was in the past — there are still many abstractions, but we can (and are encouraged to) get closer to the agent logic itself. This can make for some confusion at first, but once understood the new logic can be much clearer than with previous versions.\n",
51
        "\n",
52
        "In this example, we'll be building a RAG agent with LangChain v1. We will use Claude 2.1 for our LLM, Cohere's embed v3 model for knowledge embeddings, and Pinecone to power our knowledge retrieval.\n",
53
        "\n",
54
        "To begin, let's install the prerequisites:"
55
      ]
56
    },
57
    {
58
      "cell_type": "code",
59
      "execution_count": 2,
60
      "metadata": {
61
        "id": "zshhLDrgbFKk"
62
      },
63
      "outputs": [],
64
      "source": [
65
        "!pip install -qU \\\n",
66
        "    langchain==0.1.1 \\\n",
67
        "    langchain-community==0.0.13 \\\n",
68
        "    langchainhub==0.1.14 \\\n",
69
        "    anthropic==0.14.0 \\\n",
70
        "    cohere==4.45 \\\n",
71
        "    pinecone-client==3.1.0 \\\n",
72
        "    datasets==2.16.1"
73
      ]
74
    },
75
    {
76
      "cell_type": "markdown",
77
      "metadata": {
78
        "id": "bpKfZkUYzQhB"
79
      },
80
      "source": [
81
        "## Finding Knowledge"
82
      ]
83
    },
84
    {
85
      "cell_type": "markdown",
86
      "metadata": {
87
        "id": "JDTQoxcNzUa8"
88
      },
89
      "source": [
90
        "The first thing we need for an agent using RAG is somewhere we want to pull knowledge from. We will use v2 of the AI ArXiv dataset, available on Hugging Face Datasets at [`jamescalam/ai-arxiv2-chunks`](https://huggingface.co/datasets/jamescalam/ai-arxiv2-chunks).\n",
91
        "\n",
92
        "_Note: we're using the prechunked dataset. For the raw version see [`jamescalam/ai-arxiv2`](https://huggingface.co/datasets/jamescalam/ai-arxiv2)._"
93
      ]
94
    },
95
    {
96
      "cell_type": "code",
97
      "execution_count": 3,
98
      "metadata": {
99
        "colab": {
100
          "base_uri": "https://localhost:8080/"
101
        },
102
        "id": "U9gpYFnzbFKm",
103
        "outputId": "6c85bc8e-58ce-49d0-9027-0151ebdea0e1"
104
      },
105
      "outputs": [
106
        {
107
          "name": "stderr",
108
          "output_type": "stream",
109
          "text": [
110
            "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_token.py:88: UserWarning: \n",
111
            "The secret `HF_TOKEN` does not exist in your Colab secrets.\n",
112
            "To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n",
113
            "You will be able to reuse this secret in all of your notebooks.\n",
114
            "Please note that authentication is recommended but still optional to access public models or datasets.\n",
115
            "  warnings.warn(\n"
116
          ]
117
        },
118
        {
119
          "data": {
120
            "text/plain": [
121
              "Dataset({\n",
122
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
123
              "    num_rows: 20000\n",
124
              "})"
125
            ]
126
          },
127
          "execution_count": 3,
128
          "metadata": {},
129
          "output_type": "execute_result"
130
        }
131
      ],
132
      "source": [
133
        "from datasets import load_dataset\n",
134
        "\n",
135
        "dataset = load_dataset(\"jamescalam/ai-arxiv2-chunks\", split=\"train[:20000]\")\n",
136
        "dataset"
137
      ]
138
    },
139
    {
140
      "cell_type": "code",
141
      "execution_count": 4,
142
      "metadata": {
143
        "colab": {
144
          "base_uri": "https://localhost:8080/"
145
        },
146
        "id": "_bP7ZW-ybFKm",
147
        "outputId": "77253d1f-05f0-4601-ddd0-7d15fbcee793"
148
      },
149
      "outputs": [
150
        {
151
          "data": {
152
            "text/plain": [
153
              "{'doi': '2401.09350',\n",
154
              " 'chunk-id': 1,\n",
155
              " 'chunk': 'These neural networks and their training algorithms may be complex, and the scope of their impact broad and wide, but nonetheless they are simply functions in a high-dimensional space. A trained neural network takes a vector as input, crunches and transforms it in various ways, and produces another vector, often in some other space. An image may thereby be turned into a vector, a song into a sequence of vectors, and a social network as a structured collection of vectors. It seems as though much of human knowledge, or at least what is expressed as text, audio, image, and video, has a vector representation in one form or another.\\nIt should be noted that representing data as vectors is not unique to neural networks and deep learning. In fact, long before learnt vector representations of pieces of dataâ\\x80\\x94what is commonly known as â\\x80\\x9cembeddingsâ\\x80\\x9dâ\\x80\\x94came along, data was often encoded as hand-crafted feature vectors. Each feature quanti- fied into continuous or discrete values some facet of the data that was deemed relevant to a particular task (such as classification or regression). Vectors of that form, too, reflect our understanding of a real-world object or concept.',\n",
156
              " 'id': '2401.09350#1',\n",
157
              " 'title': 'Foundations of Vector Retrieval',\n",
158
              " 'summary': 'Vectors are universal mathematical objects that can represent text, images,\\nspeech, or a mix of these data modalities. That happens regardless of whether\\ndata is represented by hand-crafted features or learnt embeddings. Collect a\\nlarge enough quantity of such vectors and the question of retrieval becomes\\nurgently relevant: Finding vectors that are more similar to a query vector.\\nThis monograph is concerned with the question above and covers fundamental\\nconcepts along with advanced data structures and algorithms for vector\\nretrieval. In doing so, it recaps this fascinating topic and lowers barriers of\\nentry into this rich area of research.',\n",
159
              " 'source': 'http://arxiv.org/pdf/2401.09350',\n",
160
              " 'authors': 'Sebastian Bruch',\n",
161
              " 'categories': 'cs.DS, cs.IR',\n",
162
              " 'comment': None,\n",
163
              " 'journal_ref': None,\n",
164
              " 'primary_category': 'cs.DS',\n",
165
              " 'published': '20240117',\n",
166
              " 'updated': '20240117',\n",
167
              " 'references': []}"
168
            ]
169
          },
170
          "execution_count": 4,
171
          "metadata": {},
172
          "output_type": "execute_result"
173
        }
174
      ],
175
      "source": [
176
        "dataset[1]"
177
      ]
178
    },
179
    {
180
      "cell_type": "markdown",
181
      "metadata": {
182
        "id": "VX6NdQhgbFKn"
183
      },
184
      "source": [
185
        "## Building the Knowledge Base"
186
      ]
187
    },
188
    {
189
      "cell_type": "markdown",
190
      "metadata": {
191
        "id": "MDCbqQl_bFKn"
192
      },
193
      "source": [
194
        "To build our knowledge base we need _two things_:\n",
195
        "\n",
196
        "1. Embeddings, for this we will use `CohereEmbeddings` using Cohere's embedding models, which do need an [API key](https://dashboard.cohere.com/api-keys).\n",
197
        "2. A vector database, where we store our embeddings and query them. We use Pinecone which again requires a [free API key](https://app.pinecone.io).\n",
198
        "\n",
199
        "First we initialize our connection to Cohere and define an `embed` helper function:"
200
      ]
201
    },
202
    {
203
      "cell_type": "code",
204
      "execution_count": 6,
205
      "metadata": {
206
        "id": "PzBQ_iE6bFKn"
207
      },
208
      "outputs": [],
209
      "source": [
210
        "import os\n",
211
        "from getpass import getpass\n",
212
        "\n",
213
        "cohere_key = os.getenv(\"COHERE_API_KEY\") or getpass(\"Cohere API key: \")"
214
      ]
215
    },
216
    {
217
      "cell_type": "code",
218
      "execution_count": 7,
219
      "metadata": {
220
        "id": "wkw0KyLRbFKo"
221
      },
222
      "outputs": [],
223
      "source": [
224
        "from langchain_community.embeddings import CohereEmbeddings\n",
225
        "\n",
226
        "embed = CohereEmbeddings(model=\"embed-english-v3.0\", cohere_api_key=cohere_key)"
227
      ]
228
    },
229
    {
230
      "cell_type": "markdown",
231
      "metadata": {
232
        "id": "LhDzfsczbFKo"
233
      },
234
      "source": [
235
        "Then we initialize our connection to Pinecone:"
236
      ]
237
    },
238
    {
239
      "cell_type": "code",
240
      "execution_count": 8,
241
      "metadata": {
242
        "id": "j0N7EcJibFKo"
243
      },
244
      "outputs": [],
245
      "source": [
246
        "from pinecone import Pinecone\n",
247
        "\n",
248
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
249
        "api_key = os.getenv(\"PINECONE_API_KEY\") or getpass(\"Pinecone API key: \")\n",
250
        "\n",
251
        "# configure client\n",
252
        "pc = Pinecone(api_key=api_key)"
253
      ]
254
    },
255
    {
256
      "cell_type": "markdown",
257
      "metadata": {
258
        "id": "g65RLGIpbFKo"
259
      },
260
      "source": [
261
        "Now we setup our index specification, this allows us to define the cloud provider and region where we want to deploy our index. You can find a list of all [available providers and regions here](https://docs.pinecone.io/docs/projects)."
262
      ]
263
    },
264
    {
265
      "cell_type": "code",
266
      "execution_count": 9,
267
      "metadata": {
268
        "id": "8stIZYKdbFKo"
269
      },
270
      "outputs": [],
271
      "source": [
272
        "from pinecone import ServerlessSpec\n",
273
        "\n",
274
        "spec = ServerlessSpec(\n",
275
        "    cloud=\"aws\", region=\"us-west-2\"\n",
276
        ")"
277
      ]
278
    },
279
    {
280
      "cell_type": "markdown",
281
      "metadata": {
282
        "id": "-8Ep3743bFKo"
283
      },
284
      "source": [
285
        "Before creating an index, we need the dimensionality of our Cohere embedding model, which we can find easily by creating an embedding and checking the length:"
286
      ]
287
    },
288
    {
289
      "cell_type": "code",
290
      "execution_count": 10,
291
      "metadata": {
292
        "colab": {
293
          "base_uri": "https://localhost:8080/"
294
        },
295
        "id": "DwMhLWLDbFKo",
296
        "outputId": "63ce2962-d8f5-48c0-fa26-f065e33854e5"
297
      },
298
      "outputs": [
299
        {
300
          "data": {
301
            "text/plain": [
302
              "1024"
303
            ]
304
          },
305
          "execution_count": 10,
306
          "metadata": {},
307
          "output_type": "execute_result"
308
        }
309
      ],
310
      "source": [
311
        "vec = embed.embed_documents([\"ello\"])\n",
312
        "len(vec[0])\n"
313
      ]
314
    },
315
    {
316
      "cell_type": "markdown",
317
      "metadata": {
318
        "id": "G3X7nZIabFKp"
319
      },
320
      "source": [
321
        "Now we create the index using our embedding dimensionality, and a metric also compatible with the model (this can be either cosine or dotproduct). We also pass our spec to index initialization."
322
      ]
323
    },
324
    {
325
      "cell_type": "code",
326
      "execution_count": 11,
327
      "metadata": {
328
        "colab": {
329
          "base_uri": "https://localhost:8080/"
330
        },
331
        "id": "E6Bl7xTJbFKp",
332
        "outputId": "6bdfb7b6-ed1d-4dfb-fe6c-eace78a41fd0"
333
      },
334
      "outputs": [
335
        {
336
          "data": {
337
            "text/plain": [
338
              "{'dimension': 1024,\n",
339
              " 'index_fullness': 0.0,\n",
340
              " 'namespaces': {},\n",
341
              " 'total_vector_count': 0}"
342
            ]
343
          },
344
          "execution_count": 11,
345
          "metadata": {},
346
          "output_type": "execute_result"
347
        }
348
      ],
349
      "source": [
350
        "import time\n",
351
        "\n",
352
        "index_name = \"xml-agents\"\n",
353
        "\n",
354
        "# check if index already exists (it shouldn't if this is first time)\n",
355
        "if index_name not in pc.list_indexes().names():\n",
356
        "    # if does not exist, create index\n",
357
        "    pc.create_index(\n",
358
        "        index_name,\n",
359
        "        dimension=len(vec[0]),  # dimensionality of cohere v3\n",
360
        "        metric='dotproduct',\n",
361
        "        spec=spec\n",
362
        "    )\n",
363
        "    # wait for index to be initialized\n",
364
        "    while not pc.describe_index(index_name).status['ready']:\n",
365
        "        time.sleep(1)\n",
366
        "\n",
367
        "# connect to index\n",
368
        "index = pc.Index(index_name)\n",
369
        "time.sleep(1)\n",
370
        "# view index stats\n",
371
        "index.describe_index_stats()"
372
      ]
373
    },
374
    {
375
      "cell_type": "markdown",
376
      "metadata": {
377
        "id": "6ZUn2lu7bFKp"
378
      },
379
      "source": [
380
        "### Populating our Index"
381
      ]
382
    },
383
    {
384
      "cell_type": "markdown",
385
      "metadata": {
386
        "id": "PeVD6d0sbFKp"
387
      },
388
      "source": [
389
        "Now our knowledge base is ready to be populated with our data. We will use the `embed` helper function to embed our documents and then add them to our index.\n",
390
        "\n",
391
        "We will also include metadata from each record."
392
      ]
393
    },
394
    {
395
      "cell_type": "code",
396
      "execution_count": 12,
397
      "metadata": {
398
        "colab": {
399
          "base_uri": "https://localhost:8080/",
400
          "height": 49,
401
          "referenced_widgets": [
402
            "0c54f36fde0f48b1971bed4fd8f17c25",
403
            "cff5612716b8491f9e7e319d037a4532",
404
            "2ba98509dedb4a0ca6a8b50275e892f3",
405
            "36ee54cc891f45e699eb23c63c3bae28",
406
            "b44c08bcc5604e259c4b7c54a8ab53e7",
407
            "5d225cd9d5384c8c95744b8facdb174d",
408
            "aaab14fba4a548848fa731dd2235c8f5",
409
            "704efb62e6054a608fe93f2a3fc9b587",
410
            "55c292a3d8cb40239e122a764b581e29",
411
            "2b6a9a456f264bd8be154d980116d85a",
412
            "bb55cfbf693d4b7a988f5395826e5e3e"
413
          ]
414
        },
415
        "id": "hb00VSTqbFKp",
416
        "outputId": "902da9d1-4d27-48a0-93c9-8cd917c5b92d"
417
      },
418
      "outputs": [
419
        {
420
          "data": {
421
            "application/vnd.jupyter.widget-view+json": {
422
              "model_id": "0c54f36fde0f48b1971bed4fd8f17c25",
423
              "version_major": 2,
424
              "version_minor": 0
425
            },
426
            "text/plain": [
427
              "  0%|          | 0/200 [00:00<?, ?it/s]"
428
            ]
429
          },
430
          "metadata": {},
431
          "output_type": "display_data"
432
        }
433
      ],
434
      "source": [
435
        "from tqdm.auto import tqdm\n",
436
        "\n",
437
        "# easier to work with dataset as pandas dataframe\n",
438
        "data = dataset.to_pandas()\n",
439
        "\n",
440
        "batch_size = 100\n",
441
        "\n",
442
        "for i in tqdm(range(0, len(data), batch_size)):\n",
443
        "    i_end = min(len(data), i+batch_size)\n",
444
        "    # get batch of data\n",
445
        "    batch = data.iloc[i:i_end]\n",
446
        "    # generate unique ids for each chunk\n",
447
        "    ids = [f\"{x['doi']}-{x['chunk-id']}\" for i, x in batch.iterrows()]\n",
448
        "    # get text to embed\n",
449
        "    texts = [x['chunk'] for _, x in batch.iterrows()]\n",
450
        "    # embed text\n",
451
        "    embeds = embed.embed_documents(texts)\n",
452
        "    # get metadata to store in Pinecone\n",
453
        "    metadata = [\n",
454
        "        {'text': x['chunk'],\n",
455
        "         'source': x['source'],\n",
456
        "         'title': x['title']} for i, x in batch.iterrows()\n",
457
        "    ]\n",
458
        "    # add to Pinecone\n",
459
        "    index.upsert(vectors=zip(ids, embeds, metadata))"
460
      ]
461
    },
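    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional sanity check we can view the index stats again: `total_vector_count` should now roughly match the number of chunks we upserted (serverless stats can take a few seconds to update)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# confirm the upserted vectors are now visible in the index\n",
        "index.describe_index_stats()"
      ]
    },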
462
    {
463
      "cell_type": "markdown",
464
      "metadata": {
465
        "id": "z6VVT3X_EMDO"
466
      },
467
      "source": [
468
        "Create a tool for our agent to use when searching for ArXiv papers:"
469
      ]
470
    },
471
    {
472
      "cell_type": "code",
473
      "execution_count": 13,
474
      "metadata": {
475
        "id": "X9J5jHKcEQz6"
476
      },
477
      "outputs": [],
478
      "source": [
479
        "from langchain.agents import tool\n",
480
        "\n",
481
        "@tool\n",
482
        "def arxiv_search(query: str) -> str:\n",
483
        "    \"\"\"Use this tool when answering questions about AI, machine learning, data\n",
484
        "    science, or other technical questions that may be answered using arXiv\n",
485
        "    papers.\n",
486
        "    \"\"\"\n",
487
        "    # create query vector\n",
488
        "    xq = embed.embed_query(query)\n",
489
        "    # perform search\n",
490
        "    out = index.query(vector=xq, top_k=5, include_metadata=True)\n",
491
        "    # reformat results into string\n",
492
        "    results_str = \"\\n\\n\".join(\n",
493
        "        [x[\"metadata\"][\"text\"] for x in out[\"matches\"]]\n",
494
        "    )\n",
495
        "    return results_str\n",
496
        "\n",
497
        "tools = [arxiv_search]"
498
      ]
499
    },
500
    {
501
      "cell_type": "markdown",
502
      "metadata": {
503
        "id": "uN7d_4r-JMPW"
504
      },
505
      "source": [
506
        "When this tool is used by our agent it will execute it like so:"
507
      ]
508
    },
509
    {
510
      "cell_type": "code",
511
      "execution_count": 14,
512
      "metadata": {
513
        "colab": {
514
          "base_uri": "https://localhost:8080/"
515
        },
516
        "id": "eq4H-2RpI1U3",
517
        "outputId": "eea6c5e6-0a49-4be2-8a58-2f069191f07a"
518
      },
519
      "outputs": [
520
        {
521
          "name": "stdout",
522
          "output_type": "stream",
523
          "text": [
524
            "Ethical Considerations and Limitations (Section 5.2) Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at https://ai.meta.com/llama/responsible-user-guide\n",
525
            "Table 52: Model card for Llama 2.\n",
526
            "77\n",
527
            "\n",
528
            "Model Developers Meta AI Variations Llama 2 comes in a range of parameter sizes—7B, 13B, and 70B—as well as pretrained and fine-tuned variations. Input Models input text only. Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforce- ment learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License Where to send com- ments A custom commercial models-and-libraries/llama-downloads/ Instructions on how to provide feedback or comments on the model can be found in the model README, or by opening an issue in the GitHub repository (https://github.com/facebookresearch/llama/). license is available at: ai.meta.com/resources/ Intended Use Intended Use Cases Llama 2 is intended for commercial and research use in\n",
529
            "\n",
530
            "We believe that the open release of LLMs, when done safely, will be a net benefit to society. Like all LLMs, Llama 2 is a new technology that carries potential risks with use (Bender et al., 2021b; Weidinger et al., 2021; Solaiman et al., 2023). Testing conducted to date has been in English and has not — and could not — cover all scenarios. Therefore, before deploying any applications of Llama 2-Chat, developers should perform safety testing and tuning tailored to their specific applications of the model. We provide a responsible use guide¶ and code examples‖ to facilitate the safe deployment of Llama 2 and Llama 2-Chat. More details of our responsible release strategy can be found in Section 5.3.\n",
531
            "The remainder of this paper describes our pretraining methodology (Section 2), fine-tuning methodology (Section 3), approach to model safety (Section 4), key observations and insights (Section 5), relevant related work (Section 6), and conclusions (Section 7).\n",
532
            "‡https://ai.meta.com/resources/models-and-libraries/llama/ §We are delaying the release of the 34B model due to a lack of time to sufficiently red team. ¶https://ai.meta.com/llama ‖https://github.com/facebookresearch/llama\n",
533
            "4\n",
534
            "\n",
535
            "We are releasing the following models to the general public for research and commercial use‡:\n",
536
            "1. Llama 2, an updated version of Llama 1, trained on a new mix of publicly available data. We also increased the size of the pretraining corpus by 40%, doubled the context length of the model, and adopted grouped-query attention (Ainslie et al., 2023). We are releasing variants of Llama 2 with 7B, 13B, and 70B parameters. We have also trained 34B variants, which we report on in this paper but are not releasing.§\n",
537
            "2. Llama 2-Chat, a fine-tuned version of Llama 2 that is optimized for dialogue use cases. We release variants of this model with 7B, 13B, and 70B parameters as well.\n",
538
            "\n",
539
            "In this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, Llama 2 and Llama 2-Chat, at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested, Llama 2-Chat models generally perform better than existing open-source models. They also appear to be on par with some of the closed-source models, at least on the human evaluations we performed (see Figures 1 and 3). We have taken measures to increase the safety of these models, using safety-specific data annotation and tuning, as well as conducting red-teaming and employing iterative evaluations. Additionally, this paper contributes a thorough description of our fine-tuning methodology and approach to improving LLM safety. We hope that this openness will enable the community to reproduce fine-tuned LLMs and continue to improve the safety of those models, paving the way for more responsible development of LLMs. We also share novel observations we made during the development of Llama 2 and Llama 2-Chat, such as the emergence of tool usage and temporal organization of knowledge.\n",
540
            "3\n"
541
          ]
542
        }
543
      ],
544
      "source": [
545
        "print(\n",
546
        "    arxiv_search.run(tool_input={\"query\": \"can you tell me about llama 2?\"})\n",
547
        ")"
548
      ]
549
    },
550
    {
551
      "cell_type": "markdown",
552
      "metadata": {
553
        "id": "XUvJOqrNhYIh"
554
      },
555
      "source": [
556
        "## Defining XML Agent"
557
      ]
558
    },
559
    {
560
      "cell_type": "markdown",
561
      "metadata": {
562
        "id": "s45dwd78hbvk"
563
      },
564
      "source": [
565
        "The XML agent is built primarily to support Anthropic models. Anthropic models have been trained to use XML tags like `<input>{some input}</input` or when using a tool they use:\n",
566
        "\n",
567
        "```\n",
568
        "<tool>{tool name}</tool>\n",
569
        "<tool_input>{tool input}</tool_input>\n",
570
        "```\n",
571
        "\n",
572
        "This is much different to the format produced by typical ReAct agents, which is not as well supported by Anthropic models.\n",
573
        "\n",
574
        "To create an XML agent we need a `prompt`, `llm`, and list of `tools`. We can download a prebuilt prompt for conversational XML agents from LangChain hub."
575
      ]
576
    },
577
    {
578
      "cell_type": "code",
579
      "execution_count": 15,
580
      "metadata": {
581
        "colab": {
582
          "base_uri": "https://localhost:8080/"
583
        },
584
        "id": "ntuT7UuXeMz0",
585
        "outputId": "afa6e6dd-d12c-43e1-8b07-0a161c15d66c"
586
      },
587
      "outputs": [
588
        {
589
          "data": {
590
            "text/plain": [
591
              "ChatPromptTemplate(input_variables=['agent_scratchpad', 'input', 'tools'], partial_variables={'chat_history': ''}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'chat_history', 'input', 'tools'], template=\"You are a helpful assistant. Help the user answer any questions.\\n\\nYou have access to the following tools:\\n\\n{tools}\\n\\nIn order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>\\nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\\n\\n<tool>search</tool><tool_input>weather in SF</tool_input>\\n<observation>64 degrees</observation>\\n\\nWhen you are done, respond with a final answer between <final_answer></final_answer>. For example:\\n\\n<final_answer>The weather in SF is 64 degrees</final_answer>\\n\\nBegin!\\n\\nPrevious Conversation:\\n{chat_history}\\n\\nQuestion: {input}\\n{agent_scratchpad}\"))])"
592
            ]
593
          },
594
          "execution_count": 15,
595
          "metadata": {},
596
          "output_type": "execute_result"
597
        }
598
      ],
599
      "source": [
600
        "from langchain import hub\n",
601
        "\n",
602
        "prompt = hub.pull(\"hwchase17/xml-agent-convo\")\n",
603
        "prompt"
604
      ]
605
    },
606
    {
607
      "cell_type": "markdown",
608
      "metadata": {
609
        "id": "rfdcKCdwi0SL"
610
      },
611
      "source": [
612
        "We can see the XML format being used throughout the prompt when explaining to the LLM how it should use tools."
613
      ]
614
    },
615
    {
616
      "cell_type": "code",
617
      "execution_count": 16,
618
      "metadata": {
619
        "id": "kDHuU2uOdW91"
620
      },
621
      "outputs": [],
622
      "source": [
623
        "from langchain_community.chat_models import ChatAnthropic\n",
624
        "\n",
625
        "anthropic_api_key = os.getenv(\"ANTHROPIC_API_KEY\") or getpass(\"Anthropic API key: \")\n",
626
        "\n",
627
        "# chat completion llm\n",
628
        "llm = ChatAnthropic(\n",
629
        "    anthropic_api_key=anthropic_api_key,\n",
630
        "    model_name='claude-2.1',\n",
631
        "    temperature=0.0\n",
632
        ")"
633
      ]
634
    },
635
    {
636
      "cell_type": "markdown",
637
      "metadata": {
638
        "id": "g33Nt-xijPKG"
639
      },
640
      "source": [
641
        "When the agent is run we will provide it with a single `input` — this is the input text from a user. However, within the agent logic an *agent_scratchpad* object will be passed too, which will include tool information. To feed this information into our LLM we will need to transform it into the XML format described above, we define the `convert_intermediate_steps` function to handle that."
642
      ]
643
    },
644
    {
645
      "cell_type": "code",
646
      "execution_count": 17,
647
      "metadata": {
648
        "id": "TMMBgMBlIJoq"
649
      },
650
      "outputs": [],
651
      "source": [
652
        "def convert_intermediate_steps(intermediate_steps):\n",
653
        "    log = \"\"\n",
654
        "    for action, observation in intermediate_steps:\n",
655
        "        log += (\n",
656
        "            f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
657
        "            f\"</tool_input><observation>{observation}</observation>\"\n",
658
        "        )\n",
659
        "    return log"
660
      ]
661
    },
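    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the scratchpad format concrete, here is a quick illustrative call with a made-up `AgentAction` and observation (at run time these values are produced by the `AgentExecutor` itself):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from langchain_core.agents import AgentAction\n",
        "\n",
        "# hypothetical intermediate step: the agent called arxiv_search and got back some text\n",
        "example_steps = [\n",
        "    (AgentAction(tool=\"arxiv_search\", tool_input=\"llama 2\", log=\"\"), \"Llama 2 is a family of LLMs...\")\n",
        "]\n",
        "# produces: <tool>arxiv_search</tool><tool_input>llama 2</tool_input><observation>...</observation>\n",
        "print(convert_intermediate_steps(example_steps))"
      ]
    },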
662
    {
663
      "cell_type": "markdown",
664
      "metadata": {
665
        "id": "T5_PQWVckAOi"
666
      },
667
      "source": [
668
        "We must also parse the tools into a string containing `tool_name: tool_description` — we handle that with the `convert_tools` function."
669
      ]
670
    },
671
    {
672
      "cell_type": "code",
673
      "execution_count": 18,
674
      "metadata": {
675
        "id": "qxbrF5a4j9il"
676
      },
677
      "outputs": [],
678
      "source": [
679
        "def convert_tools(tools):\n",
680
        "    return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
681
      ]
682
    },
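    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "With our single `arxiv_search` tool this produces the `tool_name: tool_description` string that will fill the `{tools}` variable of the prompt:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# show the tool string that will be injected into the {tools} prompt variable\n",
        "print(convert_tools(tools))"
      ]
    },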
683
    {
684
      "cell_type": "markdown",
685
      "metadata": {
686
        "id": "SCVI2dyUIRg6"
687
      },
688
      "source": [
689
        "With everything ready we can go ahead and initialize our agent object using [**L**ang**C**hain **E**xpression **L**anguage (LCEL)](https://www.pinecone.io/learn/series/langchain/langchain-expression-language/). We add instructions for when the LLM should _stop_ generating with `llm.bind(stop=[...])` and finally we parse the output from the agent using an `XMLAgentOutputParser` object."
690
      ]
691
    },
692
    {
693
      "cell_type": "code",
694
      "execution_count": 19,
695
      "metadata": {
696
        "id": "Z3yhTDmEIU4n"
697
      },
698
      "outputs": [],
699
      "source": [
700
        "from langchain.agents.output_parsers import XMLAgentOutputParser\n",
701
        "\n",
702
        "agent = (\n",
703
        "    {\n",
704
        "        \"input\": lambda x: x[\"input\"],\n",
705
        "        # without \"chat_history\", tool usage has no context of prev interactions\n",
706
        "        \"chat_history\": lambda x: x[\"chat_history\"],\n",
707
        "        \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
708
        "            x[\"intermediate_steps\"]\n",
709
        "        ),\n",
710
        "    }\n",
711
        "    | prompt.partial(tools=convert_tools(tools))\n",
712
        "    | llm.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
713
        "    | XMLAgentOutputParser()\n",
714
        ")"
715
      ]
716
    },
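    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wrapping the agent in an executor we can invoke the runnable directly to see what a single step looks like. With an empty scratchpad it should return an `AgentAction` describing the tool call it wants to make (illustrative only; the exact output depends on the model):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# a single agent step: with no intermediate steps yet, the agent decides which tool to call\n",
        "agent.invoke({\n",
        "    \"input\": \"can you tell me about llama 2?\",\n",
        "    \"chat_history\": \"\",\n",
        "    \"intermediate_steps\": []\n",
        "})"
      ]
    },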
717
    {
718
      "cell_type": "markdown",
719
      "metadata": {
720
        "id": "MG2_hL4hkudq"
721
      },
722
      "source": [
723
        "With our `agent` object initialized we pass it to an `AgentExecutor` object alongside our original `tools` list:"
724
      ]
725
    },
726
    {
727
      "cell_type": "code",
728
      "execution_count": 20,
729
      "metadata": {
730
        "id": "YHW_K3WOIsXw"
731
      },
732
      "outputs": [],
733
      "source": [
734
        "from langchain.agents import AgentExecutor\n",
735
        "\n",
736
        "agent_executor = AgentExecutor(\n",
737
        "    agent=agent, tools=tools, verbose=True\n",
738
        ")"
739
      ]
740
    },
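    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The defaults are fine here, but `AgentExecutor` also accepts optional arguments such as `handle_parsing_errors` and `max_iterations`, which can help when the LLM occasionally produces malformed XML (see the note on `ValueError`s further down). A minimal sketch (the variable name is just for illustration):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional, illustrative only: a more defensive executor configuration\n",
        "robust_executor = AgentExecutor(\n",
        "    agent=agent,\n",
        "    tools=tools,\n",
        "    verbose=True,\n",
        "    handle_parsing_errors=True,  # feed parsing errors back to the LLM instead of raising\n",
        "    max_iterations=5  # cap the number of tool-use loops\n",
        ")"
      ]
    },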
741
    {
742
      "cell_type": "markdown",
743
      "metadata": {
744
        "id": "QRCtHauRlkLc"
745
      },
746
      "source": [
747
        "Now we can use the agent via the `invoke` method:"
748
      ]
749
    },
750
    {
751
      "cell_type": "code",
752
      "execution_count": 21,
753
      "metadata": {
754
        "colab": {
755
          "base_uri": "https://localhost:8080/"
756
        },
757
        "id": "Y_Aqp20qloj7",
758
        "outputId": "2bc17c74-775d-4dd7-f491-4e413cb9e929"
759
      },
760
      "outputs": [
761
        {
762
          "name": "stdout",
763
          "output_type": "stream",
764
          "text": [
765
            "\n",
766
            "\n",
767
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
768
            "\u001b[32;1m\u001b[1;3m <tool>arxiv_search</tool><tool_input>llama 2\u001b[0m\u001b[36;1m\u001b[1;3mEthical Considerations and Limitations (Section 5.2) Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at https://ai.meta.com/llama/responsible-user-guide\n",
769
            "Table 52: Model card for Llama 2.\n",
770
            "77\n",
771
            "\n",
772
            "Model Developers Meta AI Variations Llama 2 comes in a range of parameter sizes—7B, 13B, and 70B—as well as pretrained and fine-tuned variations. Input Models input text only. Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforce- ment learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License Where to send com- ments A custom commercial models-and-libraries/llama-downloads/ Instructions on how to provide feedback or comments on the model can be found in the model README, or by opening an issue in the GitHub repository (https://github.com/facebookresearch/llama/). license is available at: ai.meta.com/resources/ Intended Use Intended Use Cases Llama 2 is intended for commercial and research use in\n",
773
            "\n",
774
            "# GenAI, Meta\n",
775
            "# Abstract\n",
776
            "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed- source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.\n",
777
            "∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com †Second author\n",
778
            "Contributions for all the authors can be found in Section A.1.\n",
779
            "# Contents\n",
780
            "# 1 Introduction\n",
781
            "\n",
782
            "LLaMA-2 LLaMA-2 (Touvron et al., 2023b) consists of a series of base language models with a parameter count ranging from 7 billion to 70 billion. These base models are solely trained to opti- mize the likelihood of next-word prediction in the language modeling task. For a fair comparison, we employ the same prompt for LLaMA-2 as used for Dromedary-2.\n",
783
            "LLaMA-2-Chat LLaMA-2-Chat (Touvron et al., 2023b) is an adaptation tailored for dialogue applications. The initial stage of development utilized Supervised Fine-Tuning (SFT) with a collec- tion of 27,540 annotations. For reward modeling, the new human preference annotations for safety and helpfulness reached a count of 1,418,091. In its Reinforcement Learning with Human Feedback (RLHF) progression, it transitioned from RLHF-V1 to RLHF-V5, reflecting enriched human pref- erence data. The model predominantly employed Rejection Sampling fine-tuning up to RLHF-V4. Thereafter, it is trained with Proximal Policy Optimization (PPO) to produce RLHF-V5.\n",
784
            "\n",
785
            "In this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, Llama 2 and Llama 2-Chat, at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested, Llama 2-Chat models generally perform better than existing open-source models. They also appear to be on par with some of the closed-source models, at least on the human evaluations we performed (see Figures 1 and 3). We have taken measures to increase the safety of these models, using safety-specific data annotation and tuning, as well as conducting red-teaming and employing iterative evaluations. Additionally, this paper contributes a thorough description of our fine-tuning methodology and approach to improving LLM safety. We hope that this openness will enable the community to reproduce fine-tuned LLMs and continue to improve the safety of those models, paving the way for more responsible development of LLMs. We also share novel observations we made during the development of Llama 2 and Llama 2-Chat, such as the emergence of tool usage and temporal organization of knowledge.\n",
786
            "3\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>\n",
787
            "Based on the information provided, Llama 2 is a series of large language models developed by Meta AI ranging from 7 billion to 70 billion parameters. Llama 2-Chat is a fine-tuned version optimized for dialog applications using supervised learning and reinforcement learning with human feedback. It aims to be helpful, safe, and on par with closed-source chat models. Details are provided on the model architecture, training methodology, and approach to improving safety. The goal is to enable open and responsible development of large language models.\n",
788
            "\u001b[0m\n",
789
            "\n",
790
            "\u001b[1m> Finished chain.\u001b[0m\n"
791
          ]
792
        },
793
        {
794
          "data": {
795
            "text/plain": [
796
              "{'input': 'can you tell me about llama 2?',\n",
797
              " 'chat_history': '',\n",
798
              " 'output': '\\nBased on the information provided, Llama 2 is a series of large language models developed by Meta AI ranging from 7 billion to 70 billion parameters. Llama 2-Chat is a fine-tuned version optimized for dialog applications using supervised learning and reinforcement learning with human feedback. It aims to be helpful, safe, and on par with closed-source chat models. Details are provided on the model architecture, training methodology, and approach to improving safety. The goal is to enable open and responsible development of large language models.\\n'}"
799
            ]
800
          },
801
          "execution_count": 21,
802
          "metadata": {},
803
          "output_type": "execute_result"
804
        }
805
      ],
806
      "source": [
807
        "agent_executor.invoke({\n",
808
        "    \"input\": \"can you tell me about llama 2?\",\n",
809
        "    \"chat_history\": \"\"\n",
810
        "})"
811
      ]
812
    },
813
    {
814
      "cell_type": "markdown",
815
      "metadata": {
816
        "id": "Eae-JyUFl--N"
817
      },
818
      "source": [
819
        "That looks pretty good, but right now our agent is _stateless_ — making it hard to have a conversation with. We can give it memory in many different ways, but one the easiest ways to do so is to use `ConversationBufferWindowMemory`."
820
      ]
821
    },
822
    {
823
      "cell_type": "code",
824
      "execution_count": 22,
825
      "metadata": {
826
        "id": "EqOMNQUfmOEr"
827
      },
828
      "outputs": [],
829
      "source": [
830
        "from langchain.chains.conversation.memory import ConversationBufferWindowMemory\n",
831
        "\n",
832
        "# conversational memory\n",
833
        "conversational_memory = ConversationBufferWindowMemory(\n",
834
        "    memory_key='chat_history',\n",
835
        "    k=5,\n",
836
        "    return_messages=True\n",
837
        ")"
838
      ]
839
    },
840
    {
841
      "cell_type": "markdown",
842
      "metadata": {
843
        "id": "BvpyjfUwnBLx"
844
      },
845
      "source": [
846
        "Initially we have no `\"chat_history\"` so we will pass an empty string to our `invoke` method:"
847
      ]
848
    },
849
    {
850
      "cell_type": "code",
851
      "execution_count": 23,
852
      "metadata": {
853
        "colab": {
854
          "base_uri": "https://localhost:8080/"
855
        },
856
        "id": "KpKMRBMimEOt",
857
        "outputId": "31a36561-cf99-4f88-e3bc-c651ad3a5358"
858
      },
859
      "outputs": [
860
        {
861
          "name": "stdout",
862
          "output_type": "stream",
863
          "text": [
864
            "\n",
865
            "\n",
866
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
867
            "\u001b[32;1m\u001b[1;3m <final_answer>Hello! I'm here to try to help answer any questions you may have. Feel free to ask me something and I'll do my best to provide a helpful response.\u001b[0m\n",
868
            "\n",
869
            "\u001b[1m> Finished chain.\u001b[0m\n"
870
          ]
871
        }
872
      ],
873
      "source": [
874
        "user_msg = \"hello mate\"\n",
875
        "\n",
876
        "out = agent_executor.invoke({\n",
877
        "    \"input\": \"hello mate\",\n",
878
        "    \"chat_history\": \"\"\n",
879
        "})"
880
      ]
881
    },
882
    {
883
      "cell_type": "markdown",
884
      "metadata": {
885
        "id": "O9UvnBrsnNVw"
886
      },
887
      "source": [
888
        "We haven't attached our conversational memory to our agent — so the `conversational_memory` object will remain empty:"
889
      ]
890
    },
891
    {
892
      "cell_type": "code",
893
      "execution_count": 24,
894
      "metadata": {
895
        "colab": {
896
          "base_uri": "https://localhost:8080/"
897
        },
898
        "id": "KJZNIDAslNoC",
899
        "outputId": "3aff3b9b-8c10-47c5-9fe9-5582a4ba0502"
900
      },
901
      "outputs": [
902
        {
903
          "data": {
904
            "text/plain": [
905
              "[]"
906
            ]
907
          },
908
          "execution_count": 24,
909
          "metadata": {},
910
          "output_type": "execute_result"
911
        }
912
      ],
913
      "source": [
914
        "conversational_memory.chat_memory.messages"
915
      ]
916
    },
917
    {
918
      "cell_type": "markdown",
919
      "metadata": {
920
        "id": "ynX9Wca6nawr"
921
      },
922
      "source": [
923
        "We must manually add the interactions between ourselves and the agent to our memory."
924
      ]
925
    },
926
    {
927
      "cell_type": "code",
928
      "execution_count": 25,
929
      "metadata": {
930
        "colab": {
931
          "base_uri": "https://localhost:8080/"
932
        },
933
        "id": "-5hXy1FAnne3",
934
        "outputId": "c80251db-30be-4dfc-b590-4ab5eb36482f"
935
      },
936
      "outputs": [
937
        {
938
          "data": {
939
            "text/plain": [
940
              "[HumanMessage(content='hello mate'),\n",
941
              " AIMessage(content=\"Hello! I'm here to try to help answer any questions you may have. Feel free to ask me something and I'll do my best to provide a helpful response.\")]"
942
            ]
943
          },
944
          "execution_count": 25,
945
          "metadata": {},
946
          "output_type": "execute_result"
947
        }
948
      ],
949
      "source": [
950
        "conversational_memory.chat_memory.add_user_message(user_msg)\n",
951
        "conversational_memory.chat_memory.add_ai_message(out[\"output\"])\n",
952
        "\n",
953
        "conversational_memory.chat_memory.messages"
954
      ]
955
    },
956
    {
957
      "cell_type": "markdown",
958
      "metadata": {
959
        "id": "T3pA0o6ZnrAl"
960
      },
961
      "source": [
962
        "Now we can see that _two_ messages have been added, our `HumanMessage` the the agent's `AIMessage` response. Unfortunately, we cannot send these messages to our XML agent directly. Instead, we need to pass a string in the format:\n",
963
        "\n",
964
        "```\n",
965
        "Human: {human message}\n",
966
        "AI: {AI message}\n",
967
        "```\n",
968
        "\n",
969
        "Let's write a quick `memory2str` helper function to handle this for us:"
970
      ]
971
    },
972
    {
973
      "cell_type": "code",
974
      "execution_count": 26,
975
      "metadata": {
976
        "id": "raZHBdJmtGH-"
977
      },
978
      "outputs": [],
979
      "source": [
980
        "from langchain_core.messages.human import HumanMessage\n",
981
        "\n",
982
        "def memory2str(memory: ConversationBufferWindowMemory):\n",
983
        "    messages = memory.chat_memory.messages\n",
984
        "    memory_list = [\n",
985
        "        f\"Human: {mem.content}\" if isinstance(mem, HumanMessage) \\\n",
986
        "        else f\"AI: {mem.content}\" for mem in messages\n",
987
        "    ]\n",
988
        "    memory_str = \"\\n\".join(memory_list)\n",
989
        "    return memory_str"
990
      ]
991
    },
992
    {
993
      "cell_type": "code",
994
      "execution_count": 27,
995
      "metadata": {
996
        "colab": {
997
          "base_uri": "https://localhost:8080/"
998
        },
999
        "id": "t89yX-6i3hvd",
1000
        "outputId": "496db0fa-a5e5-4494-8645-ba6598dec95e"
1001
      },
1002
      "outputs": [
1003
        {
1004
          "name": "stdout",
1005
          "output_type": "stream",
1006
          "text": [
1007
            "Human: hello mate\n",
1008
            "AI: Hello! I'm here to try to help answer any questions you may have. Feel free to ask me something and I'll do my best to provide a helpful response.\n"
1009
          ]
1010
        }
1011
      ],
1012
      "source": [
1013
        "print(memory2str(conversational_memory))"
1014
      ]
1015
    },
1016
    {
1017
      "cell_type": "markdown",
1018
      "metadata": {
1019
        "id": "q0L_80WrpWqd"
1020
      },
1021
      "source": [
1022
        "Now let's put together another helper function called `chat` to help us handle the _state_ part of our agent."
1023
      ]
1024
    },
1025
    {
1026
      "cell_type": "code",
1027
      "execution_count": 28,
1028
      "metadata": {
1029
        "id": "C-Ck2Lv53rD-"
1030
      },
1031
      "outputs": [],
1032
      "source": [
1033
        "def chat(text: str):\n",
1034
        "    out = agent_executor.invoke({\n",
1035
        "        \"input\": text,\n",
1036
        "        \"chat_history\": memory2str(conversational_memory)\n",
1037
        "    })\n",
1038
        "    conversational_memory.chat_memory.add_user_message(text)\n",
1039
        "    conversational_memory.chat_memory.add_ai_message(out[\"output\"])\n",
1040
        "    return out[\"output\"]"
1041
      ]
1042
    },
1043
    {
1044
      "cell_type": "markdown",
1045
      "metadata": {
1046
        "id": "XIheLeTBsO9S"
1047
      },
1048
      "source": [
1049
        "Now we simply chat with our agent and it will remember the context of previous interactions."
1050
      ]
1051
    },
1052
    {
1053
      "cell_type": "code",
1054
      "execution_count": 29,
1055
      "metadata": {
1056
        "colab": {
1057
          "base_uri": "https://localhost:8080/"
1058
        },
1059
        "id": "iJ_PH7YcA_f2",
1060
        "outputId": "0de54a11-5abc-4db0-8ea8-62aa78589344"
1061
      },
1062
      "outputs": [
1063
        {
1064
          "name": "stdout",
1065
          "output_type": "stream",
1066
          "text": [
1067
            "\n",
1068
            "\n",
1069
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
1070
            "\u001b[32;1m\u001b[1;3m <tool>arxiv_search</tool><tool_input>llama 2\u001b[0m\u001b[36;1m\u001b[1;3mEthical Considerations and Limitations (Section 5.2) Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at https://ai.meta.com/llama/responsible-user-guide\n",
1071
            "Table 52: Model card for Llama 2.\n",
1072
            "77\n",
1073
            "\n",
1074
            "Model Developers Meta AI Variations Llama 2 comes in a range of parameter sizes—7B, 13B, and 70B—as well as pretrained and fine-tuned variations. Input Models input text only. Output Models generate text only. Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforce- ment learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Model Dates Llama 2 was trained between January 2023 and July 2023. Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. License Where to send com- ments A custom commercial models-and-libraries/llama-downloads/ Instructions on how to provide feedback or comments on the model can be found in the model README, or by opening an issue in the GitHub repository (https://github.com/facebookresearch/llama/). license is available at: ai.meta.com/resources/ Intended Use Intended Use Cases Llama 2 is intended for commercial and research use in\n",
1075
            "\n",
1076
            "# GenAI, Meta\n",
1077
            "# Abstract\n",
1078
            "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed- source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.\n",
1079
            "∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com †Second author\n",
1080
            "Contributions for all the authors can be found in Section A.1.\n",
1081
            "# Contents\n",
1082
            "# 1 Introduction\n",
1083
            "\n",
1084
            "LLaMA-2 LLaMA-2 (Touvron et al., 2023b) consists of a series of base language models with a parameter count ranging from 7 billion to 70 billion. These base models are solely trained to opti- mize the likelihood of next-word prediction in the language modeling task. For a fair comparison, we employ the same prompt for LLaMA-2 as used for Dromedary-2.\n",
1085
            "LLaMA-2-Chat LLaMA-2-Chat (Touvron et al., 2023b) is an adaptation tailored for dialogue applications. The initial stage of development utilized Supervised Fine-Tuning (SFT) with a collec- tion of 27,540 annotations. For reward modeling, the new human preference annotations for safety and helpfulness reached a count of 1,418,091. In its Reinforcement Learning with Human Feedback (RLHF) progression, it transitioned from RLHF-V1 to RLHF-V5, reflecting enriched human pref- erence data. The model predominantly employed Rejection Sampling fine-tuning up to RLHF-V4. Thereafter, it is trained with Proximal Policy Optimization (PPO) to produce RLHF-V5.\n",
1086
            "\n",
1087
            "In this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, Llama 2 and Llama 2-Chat, at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested, Llama 2-Chat models generally perform better than existing open-source models. They also appear to be on par with some of the closed-source models, at least on the human evaluations we performed (see Figures 1 and 3). We have taken measures to increase the safety of these models, using safety-specific data annotation and tuning, as well as conducting red-teaming and employing iterative evaluations. Additionally, this paper contributes a thorough description of our fine-tuning methodology and approach to improving LLM safety. We hope that this openness will enable the community to reproduce fine-tuned LLMs and continue to improve the safety of those models, paving the way for more responsible development of LLMs. We also share novel observations we made during the development of Llama 2 and Llama 2-Chat, such as the emergence of tool usage and temporal organization of knowledge.\n",
1088
            "3\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>\n",
1089
            "Llama 2 is a series of large language models developed by Meta AI, with model sizes ranging from 7 billion to 70 billion parameters.\n",
1090
            "\n",
1091
            "The base Llama 2 models are pretrained language models optimized for next word prediction. Llama 2-Chat is a version fine-tuned specifically for dialog applications, using a combination of supervised fine-tuning and reinforcement learning with human feedback.\n",
1092
            "\n",
1093
            "Key points about Llama 2:\n",
1094
            "\n",
1095
            "- Outperforms other open source chat models on helpfulness and safety benchmarks\n",
1096
            "- Comparable to some closed source models on human evaluations \n",
1097
            "- Fine-tuning focused on improving safety as well as performance\n",
1098
            "- Thorough documentation to enable reproducibility and responsible LLM development\n",
1099
            "\n",
1100
            "So in summary, Llama 2 is an open source family of large language models, with Llama 2-Chat being the dialog-optimized version, notable for its combination of strong performance and efforts towards safety.\n",
1101
            "\u001b[0m\n",
1102
            "\n",
1103
            "\u001b[1m> Finished chain.\u001b[0m\n",
1104
            "\n",
1105
            "Llama 2 is a series of large language models developed by Meta AI, with model sizes ranging from 7 billion to 70 billion parameters.\n",
1106
            "\n",
1107
            "The base Llama 2 models are pretrained language models optimized for next word prediction. Llama 2-Chat is a version fine-tuned specifically for dialog applications, using a combination of supervised fine-tuning and reinforcement learning with human feedback.\n",
1108
            "\n",
1109
            "Key points about Llama 2:\n",
1110
            "\n",
1111
            "- Outperforms other open source chat models on helpfulness and safety benchmarks\n",
1112
            "- Comparable to some closed source models on human evaluations \n",
1113
            "- Fine-tuning focused on improving safety as well as performance\n",
1114
            "- Thorough documentation to enable reproducibility and responsible LLM development\n",
1115
            "\n",
1116
            "So in summary, Llama 2 is an open source family of large language models, with Llama 2-Chat being the dialog-optimized version, notable for its combination of strong performance and efforts towards safety.\n",
1117
            "\n"
1118
          ]
1119
        }
1120
      ],
1121
      "source": [
1122
        "print(chat(\"can you tell me about llama 2?\"))"
1123
      ]
1124
    },
1125
    {
1126
      "cell_type": "markdown",
1127
      "metadata": {
1128
        "id": "5p8m4Gc5w1OX"
1129
      },
1130
      "source": [
1131
        "We can ask follow up questions that miss key information but thanks to the conversational history the LLM understands the context and uses that to adjust the search query.\n",
1132
        "\n",
1133
        "_Note: if missing `\"chat_history\"` parameter from the `agent` definition you will likely notice a lack of context in the search term, and in some cases this lack of good information can trigger a `ValueError` during output parsing._"
1134
      ]
1135
    },
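    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The sketch below is illustrative only, not the exact cell used earlier in this notebook: it assumes the `prompt`, `llm`, `tools`, `convert_tools`, and `convert_intermediate_steps` objects defined above, and shows where `\"chat_history\"` must be mapped so the prompt receives the conversation so far.\n",
        "\n",
        "```python\n",
        "# Hedged sketch of the XML agent definition. Names like `prompt`, `llm`,\n",
        "# `tools`, `convert_tools`, and `convert_intermediate_steps` are assumed to\n",
        "# be defined earlier in this notebook.\n",
        "from langchain.agents import AgentExecutor\n",
        "from langchain.agents.output_parsers import XMLAgentOutputParser\n",
        "\n",
        "agent = (\n",
        "    {\n",
        "        \"input\": lambda x: x[\"input\"],\n",
        "        # Without this mapping the prompt never sees earlier turns, so a\n",
        "        # follow-up like \"was any red teaming done?\" yields a context-free query.\n",
        "        \"chat_history\": lambda x: x.get(\"chat_history\", \"\"),\n",
        "        \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
        "            x[\"intermediate_steps\"]\n",
        "        ),\n",
        "    }\n",
        "    | prompt.partial(tools=convert_tools(tools))\n",
        "    | llm.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
        "    | XMLAgentOutputParser()\n",
        ")\n",
        "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
        "```"
      ]
    },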
1136
    {
1137
      "cell_type": "code",
1138
      "execution_count": 30,
1139
      "metadata": {
1140
        "colab": {
1141
          "base_uri": "https://localhost:8080/"
1142
        },
1143
        "id": "3XJ_3JIgBDRl",
1144
        "outputId": "d7d23e70-c894-46a6-f990-73eb2135a36c"
1145
      },
1146
      "outputs": [
1147
        {
1148
          "name": "stdout",
1149
          "output_type": "stream",
1150
          "text": [
1151
            "\n",
1152
            "\n",
1153
            "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
1154
            "\u001b[32;1m\u001b[1;3m <tool>arxiv_search</tool><tool_input>Llama 2 red team\u001b[0m\u001b[36;1m\u001b[1;3m15\n",
1155
            "pafety Reward Model Scores Distribution on Red Teaming Prompts\n",
1156
            "Responding Model GPT 3.5 Turbo Code Llama 138 Instruct Code Llama 34B Instruct Code Llama 7B Instruct 0.0-+ -0.2 0.0 0.2 0.4 0.6 08 1.0 12 Llama 2 70B Safety Reward Model Score\n",
1157
            "Figure 7: KDE plot of the risk score output by the Llama 2 safety reward model on prompts with clear intent specific to code risk created by red teamers with background in cybersecurity and malware generation.\n",
1158
            "One red teamer remarked, “While LLMs being able to iteratively improve on produced source code is a risk, producing source code isn’t the actual gap. That said, LLMs may be risky because they can inform low-skill adversaries in production of scripts through iteration that perform some malicious behavior.”\n",
1159
            "According to another red teamer, “[v]arious scripts, program code, and compiled binaries are readily available on mainstream public websites, hacking forums or on ‘the dark web.’ Advanced malware development is beyond the current capabilities of available LLMs, and even an advanced LLM paired with an expert malware developer is not particularly useful- as the barrier is not typically writing the malware code itself. That said, these LLMs may produce code which will get easily caught if used directly.”\n",
1160
            "\n",
1161
            "In addition to red teaming sessions, we ran a quantitative evaluation on risk from generating malicious code by scoring Code Llama’s responses to ChatGPT’s (GPT3.5 Turbo) with LLAMAv2 70B’s safety reward model. For this second quantitative evaluation, we selected prompts that the red teamers generated specifically attempting to solicit malicious code (even though the red teaming included consideration of a broad set of safety risks). These prompts were a mix of clear intent and slightly obfuscated intentions (see some examples in Figure 15. We show a KDE plot of the distribution of the safety score for all models in Figure 7). We observe that Code Llama tends to answer with safer responses; the distribution of safety scores for Code Llama has more weight in the safer part of the range.\n",
1162
            "\n",
1163
            "Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models.\n",
1164
            "\n",
1165
            "Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models.\n",
1166
            "\n",
1167
            "Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiao- qing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023.\u001b[0m\u001b[32;1m\u001b[1;3m Based on the information from the arXiv search, it does appear that some red teaming was done on Llama 2 models:\n",
1168
            "\n",
1169
            "<final_answer>\n",
1170
            "Yes, red teaming was done on Llama 2 models to evaluate risks from generating malicious code. This included both qualitative red teaming sessions as well as quantitative evaluation scoring code generation prompts with a safety reward model. The red teaming and evaluations found that while iterative code improvement remains a risk, Llama 2 tended to respond with safer code compared to other models.\n",
1171
            "\u001b[0m\n",
1172
            "\n",
1173
            "\u001b[1m> Finished chain.\u001b[0m\n",
1174
            "\n",
1175
            "Yes, red teaming was done on Llama 2 models to evaluate risks from generating malicious code. This included both qualitative red teaming sessions as well as quantitative evaluation scoring code generation prompts with a safety reward model. The red teaming and evaluations found that while iterative code improvement remains a risk, Llama 2 tended to respond with safer code compared to other models.\n",
1176
            "\n"
1177
          ]
1178
        }
1179
      ],
1180
      "source": [
1181
        "print(chat(\"was any red teaming done?\"))"
1182
      ]
1183
    },
1184
    {
1185
      "cell_type": "markdown",
1186
      "metadata": {
1187
        "id": "SelG8OcOxggP"
1188
      },
1189
      "source": [
1190
        "We get a reasonable answer here. It's worth noting that with previous iterations of this test, ie \"llama 2 red teaming\" using the original `ai-arxiv` dataset rarely (if ever) returned directly relevant results."
1191
      ]
1192
    },
1193
    {
1194
      "cell_type": "markdown",
1195
      "metadata": {
1196
        "id": "e9bI9czPtWnl"
1197
      },
1198
      "source": [
1199
        "---"
1200
      ]
1201
    }
1202
  ],
1203
  "metadata": {
1204
    "colab": {
1205
      "provenance": []
1206
    },
1207
    "kernelspec": {
1208
      "display_name": "ml",
1209
      "language": "python",
1210
      "name": "python3"
1211
    },
1212
    "language_info": {
1213
      "codemirror_mode": {
1214
        "name": "ipython",
1215
        "version": 3
1216
      },
1217
      "file_extension": ".py",
1218
      "mimetype": "text/x-python",
1219
      "name": "python",
1220
      "nbconvert_exporter": "python",
1221
      "pygments_lexer": "ipython3",
1222
      "version": "3.9.12"
1223
    },
1224
    "widgets": {
1225
      "application/vnd.jupyter.widget-state+json": {
1226
        "0c54f36fde0f48b1971bed4fd8f17c25": {
1227
          "model_module": "@jupyter-widgets/controls",
1228
          "model_module_version": "1.5.0",
1229
          "model_name": "HBoxModel",
1230
          "state": {
1231
            "_dom_classes": [],
1232
            "_model_module": "@jupyter-widgets/controls",
1233
            "_model_module_version": "1.5.0",
1234
            "_model_name": "HBoxModel",
1235
            "_view_count": null,
1236
            "_view_module": "@jupyter-widgets/controls",
1237
            "_view_module_version": "1.5.0",
1238
            "_view_name": "HBoxView",
1239
            "box_style": "",
1240
            "children": [
1241
              "IPY_MODEL_cff5612716b8491f9e7e319d037a4532",
1242
              "IPY_MODEL_2ba98509dedb4a0ca6a8b50275e892f3",
1243
              "IPY_MODEL_36ee54cc891f45e699eb23c63c3bae28"
1244
            ],
1245
            "layout": "IPY_MODEL_b44c08bcc5604e259c4b7c54a8ab53e7"
1246
          }
1247
        },
1248
        "2b6a9a456f264bd8be154d980116d85a": {
1249
          "model_module": "@jupyter-widgets/base",
1250
          "model_module_version": "1.2.0",
1251
          "model_name": "LayoutModel",
1252
          "state": {
1253
            "_model_module": "@jupyter-widgets/base",
1254
            "_model_module_version": "1.2.0",
1255
            "_model_name": "LayoutModel",
1256
            "_view_count": null,
1257
            "_view_module": "@jupyter-widgets/base",
1258
            "_view_module_version": "1.2.0",
1259
            "_view_name": "LayoutView",
1260
            "align_content": null,
1261
            "align_items": null,
1262
            "align_self": null,
1263
            "border": null,
1264
            "bottom": null,
1265
            "display": null,
1266
            "flex": null,
1267
            "flex_flow": null,
1268
            "grid_area": null,
1269
            "grid_auto_columns": null,
1270
            "grid_auto_flow": null,
1271
            "grid_auto_rows": null,
1272
            "grid_column": null,
1273
            "grid_gap": null,
1274
            "grid_row": null,
1275
            "grid_template_areas": null,
1276
            "grid_template_columns": null,
1277
            "grid_template_rows": null,
1278
            "height": null,
1279
            "justify_content": null,
1280
            "justify_items": null,
1281
            "left": null,
1282
            "margin": null,
1283
            "max_height": null,
1284
            "max_width": null,
1285
            "min_height": null,
1286
            "min_width": null,
1287
            "object_fit": null,
1288
            "object_position": null,
1289
            "order": null,
1290
            "overflow": null,
1291
            "overflow_x": null,
1292
            "overflow_y": null,
1293
            "padding": null,
1294
            "right": null,
1295
            "top": null,
1296
            "visibility": null,
1297
            "width": null
1298
          }
1299
        },
1300
        "2ba98509dedb4a0ca6a8b50275e892f3": {
1301
          "model_module": "@jupyter-widgets/controls",
1302
          "model_module_version": "1.5.0",
1303
          "model_name": "FloatProgressModel",
1304
          "state": {
1305
            "_dom_classes": [],
1306
            "_model_module": "@jupyter-widgets/controls",
1307
            "_model_module_version": "1.5.0",
1308
            "_model_name": "FloatProgressModel",
1309
            "_view_count": null,
1310
            "_view_module": "@jupyter-widgets/controls",
1311
            "_view_module_version": "1.5.0",
1312
            "_view_name": "ProgressView",
1313
            "bar_style": "success",
1314
            "description": "",
1315
            "description_tooltip": null,
1316
            "layout": "IPY_MODEL_704efb62e6054a608fe93f2a3fc9b587",
1317
            "max": 200,
1318
            "min": 0,
1319
            "orientation": "horizontal",
1320
            "style": "IPY_MODEL_55c292a3d8cb40239e122a764b581e29",
1321
            "value": 200
1322
          }
1323
        },
1324
        "36ee54cc891f45e699eb23c63c3bae28": {
1325
          "model_module": "@jupyter-widgets/controls",
1326
          "model_module_version": "1.5.0",
1327
          "model_name": "HTMLModel",
1328
          "state": {
1329
            "_dom_classes": [],
1330
            "_model_module": "@jupyter-widgets/controls",
1331
            "_model_module_version": "1.5.0",
1332
            "_model_name": "HTMLModel",
1333
            "_view_count": null,
1334
            "_view_module": "@jupyter-widgets/controls",
1335
            "_view_module_version": "1.5.0",
1336
            "_view_name": "HTMLView",
1337
            "description": "",
1338
            "description_tooltip": null,
1339
            "layout": "IPY_MODEL_2b6a9a456f264bd8be154d980116d85a",
1340
            "placeholder": "​",
1341
            "style": "IPY_MODEL_bb55cfbf693d4b7a988f5395826e5e3e",
1342
            "value": " 200/200 [11:10&lt;00:00,  3.13s/it]"
1343
          }
1344
        },
1345
        "55c292a3d8cb40239e122a764b581e29": {
1346
          "model_module": "@jupyter-widgets/controls",
1347
          "model_module_version": "1.5.0",
1348
          "model_name": "ProgressStyleModel",
1349
          "state": {
1350
            "_model_module": "@jupyter-widgets/controls",
1351
            "_model_module_version": "1.5.0",
1352
            "_model_name": "ProgressStyleModel",
1353
            "_view_count": null,
1354
            "_view_module": "@jupyter-widgets/base",
1355
            "_view_module_version": "1.2.0",
1356
            "_view_name": "StyleView",
1357
            "bar_color": null,
1358
            "description_width": ""
1359
          }
1360
        },
1361
        "5d225cd9d5384c8c95744b8facdb174d": {
1362
          "model_module": "@jupyter-widgets/base",
1363
          "model_module_version": "1.2.0",
1364
          "model_name": "LayoutModel",
1365
          "state": {
1366
            "_model_module": "@jupyter-widgets/base",
1367
            "_model_module_version": "1.2.0",
1368
            "_model_name": "LayoutModel",
1369
            "_view_count": null,
1370
            "_view_module": "@jupyter-widgets/base",
1371
            "_view_module_version": "1.2.0",
1372
            "_view_name": "LayoutView",
1373
            "align_content": null,
1374
            "align_items": null,
1375
            "align_self": null,
1376
            "border": null,
1377
            "bottom": null,
1378
            "display": null,
1379
            "flex": null,
1380
            "flex_flow": null,
1381
            "grid_area": null,
1382
            "grid_auto_columns": null,
1383
            "grid_auto_flow": null,
1384
            "grid_auto_rows": null,
1385
            "grid_column": null,
1386
            "grid_gap": null,
1387
            "grid_row": null,
1388
            "grid_template_areas": null,
1389
            "grid_template_columns": null,
1390
            "grid_template_rows": null,
1391
            "height": null,
1392
            "justify_content": null,
1393
            "justify_items": null,
1394
            "left": null,
1395
            "margin": null,
1396
            "max_height": null,
1397
            "max_width": null,
1398
            "min_height": null,
1399
            "min_width": null,
1400
            "object_fit": null,
1401
            "object_position": null,
1402
            "order": null,
1403
            "overflow": null,
1404
            "overflow_x": null,
1405
            "overflow_y": null,
1406
            "padding": null,
1407
            "right": null,
1408
            "top": null,
1409
            "visibility": null,
1410
            "width": null
1411
          }
1412
        },
1413
        "704efb62e6054a608fe93f2a3fc9b587": {
1414
          "model_module": "@jupyter-widgets/base",
1415
          "model_module_version": "1.2.0",
1416
          "model_name": "LayoutModel",
1417
          "state": {
1418
            "_model_module": "@jupyter-widgets/base",
1419
            "_model_module_version": "1.2.0",
1420
            "_model_name": "LayoutModel",
1421
            "_view_count": null,
1422
            "_view_module": "@jupyter-widgets/base",
1423
            "_view_module_version": "1.2.0",
1424
            "_view_name": "LayoutView",
1425
            "align_content": null,
1426
            "align_items": null,
1427
            "align_self": null,
1428
            "border": null,
1429
            "bottom": null,
1430
            "display": null,
1431
            "flex": null,
1432
            "flex_flow": null,
1433
            "grid_area": null,
1434
            "grid_auto_columns": null,
1435
            "grid_auto_flow": null,
1436
            "grid_auto_rows": null,
1437
            "grid_column": null,
1438
            "grid_gap": null,
1439
            "grid_row": null,
1440
            "grid_template_areas": null,
1441
            "grid_template_columns": null,
1442
            "grid_template_rows": null,
1443
            "height": null,
1444
            "justify_content": null,
1445
            "justify_items": null,
1446
            "left": null,
1447
            "margin": null,
1448
            "max_height": null,
1449
            "max_width": null,
1450
            "min_height": null,
1451
            "min_width": null,
1452
            "object_fit": null,
1453
            "object_position": null,
1454
            "order": null,
1455
            "overflow": null,
1456
            "overflow_x": null,
1457
            "overflow_y": null,
1458
            "padding": null,
1459
            "right": null,
1460
            "top": null,
1461
            "visibility": null,
1462
            "width": null
1463
          }
1464
        },
1465
        "aaab14fba4a548848fa731dd2235c8f5": {
1466
          "model_module": "@jupyter-widgets/controls",
1467
          "model_module_version": "1.5.0",
1468
          "model_name": "DescriptionStyleModel",
1469
          "state": {
1470
            "_model_module": "@jupyter-widgets/controls",
1471
            "_model_module_version": "1.5.0",
1472
            "_model_name": "DescriptionStyleModel",
1473
            "_view_count": null,
1474
            "_view_module": "@jupyter-widgets/base",
1475
            "_view_module_version": "1.2.0",
1476
            "_view_name": "StyleView",
1477
            "description_width": ""
1478
          }
1479
        },
1480
        "b44c08bcc5604e259c4b7c54a8ab53e7": {
1481
          "model_module": "@jupyter-widgets/base",
1482
          "model_module_version": "1.2.0",
1483
          "model_name": "LayoutModel",
1484
          "state": {
1485
            "_model_module": "@jupyter-widgets/base",
1486
            "_model_module_version": "1.2.0",
1487
            "_model_name": "LayoutModel",
1488
            "_view_count": null,
1489
            "_view_module": "@jupyter-widgets/base",
1490
            "_view_module_version": "1.2.0",
1491
            "_view_name": "LayoutView",
1492
            "align_content": null,
1493
            "align_items": null,
1494
            "align_self": null,
1495
            "border": null,
1496
            "bottom": null,
1497
            "display": null,
1498
            "flex": null,
1499
            "flex_flow": null,
1500
            "grid_area": null,
1501
            "grid_auto_columns": null,
1502
            "grid_auto_flow": null,
1503
            "grid_auto_rows": null,
1504
            "grid_column": null,
1505
            "grid_gap": null,
1506
            "grid_row": null,
1507
            "grid_template_areas": null,
1508
            "grid_template_columns": null,
1509
            "grid_template_rows": null,
1510
            "height": null,
1511
            "justify_content": null,
1512
            "justify_items": null,
1513
            "left": null,
1514
            "margin": null,
1515
            "max_height": null,
1516
            "max_width": null,
1517
            "min_height": null,
1518
            "min_width": null,
1519
            "object_fit": null,
1520
            "object_position": null,
1521
            "order": null,
1522
            "overflow": null,
1523
            "overflow_x": null,
1524
            "overflow_y": null,
1525
            "padding": null,
1526
            "right": null,
1527
            "top": null,
1528
            "visibility": null,
1529
            "width": null
1530
          }
1531
        },
1532
        "bb55cfbf693d4b7a988f5395826e5e3e": {
1533
          "model_module": "@jupyter-widgets/controls",
1534
          "model_module_version": "1.5.0",
1535
          "model_name": "DescriptionStyleModel",
1536
          "state": {
1537
            "_model_module": "@jupyter-widgets/controls",
1538
            "_model_module_version": "1.5.0",
1539
            "_model_name": "DescriptionStyleModel",
1540
            "_view_count": null,
1541
            "_view_module": "@jupyter-widgets/base",
1542
            "_view_module_version": "1.2.0",
1543
            "_view_name": "StyleView",
1544
            "description_width": ""
1545
          }
1546
        },
1547
        "cff5612716b8491f9e7e319d037a4532": {
1548
          "model_module": "@jupyter-widgets/controls",
1549
          "model_module_version": "1.5.0",
1550
          "model_name": "HTMLModel",
1551
          "state": {
1552
            "_dom_classes": [],
1553
            "_model_module": "@jupyter-widgets/controls",
1554
            "_model_module_version": "1.5.0",
1555
            "_model_name": "HTMLModel",
1556
            "_view_count": null,
1557
            "_view_module": "@jupyter-widgets/controls",
1558
            "_view_module_version": "1.5.0",
1559
            "_view_name": "HTMLView",
1560
            "description": "",
1561
            "description_tooltip": null,
1562
            "layout": "IPY_MODEL_5d225cd9d5384c8c95744b8facdb174d",
1563
            "placeholder": "​",
1564
            "style": "IPY_MODEL_aaab14fba4a548848fa731dd2235c8f5",
1565
            "value": "100%"
1566
          }
1567
        }
1568
      }
1569
    }
1570
  },
1571
  "nbformat": 4,
1572
  "nbformat_minor": 0
1573
}