{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/10-langchain-multi-query.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/langchain/handbook/10-langchain-multi-query.ipynb)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "2-XDGL6Oi6h4"
      },
      "source": [
        "#### [LangChain Handbook](https://pinecone.io/learn/langchain)\n",
        "\n",
        "# LangChain Multi-Query for RAG"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qi8B1fgywJzE"
      },
      "outputs": [],
      "source": [
        "!pip install -qU \\\n",
        "  pinecone-client==3.1.0 \\\n",
        "  langchain==0.1.1 \\\n",
        "  langchain-community==0.0.13 \\\n",
        "  datasets==2.14.6 \\\n",
        "  openai==1.6.1 \\\n",
        "  tiktoken==0.5.2"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "CPmfrdJ9_2YA"
      },
      "source": [
        "## Getting Data"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "S4Py-rVqx-I0"
      },
      "source": [
        "We will download an existing dataset from Hugging Face Datasets."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "iatOGmKgz8NE"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "Dataset({\n",
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
              "    num_rows: 41584\n",
              "})"
            ]
          },
          "execution_count": 1,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from datasets import load_dataset\n",
        "\n",
        "data = load_dataset(\"jamescalam/ai-arxiv-chunked\", split=\"train\")\n",
        "data"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "P7E6JYtb0cW7"
      },
      "outputs": [],
      "source": [
        "from langchain.docstore.document import Document\n",
        "\n",
        "docs = []\n",
        "\n",
        "for row in data:\n",
        "    doc = Document(\n",
        "        page_content=row[\"chunk\"],\n",
        "        metadata={\n",
        "            \"title\": row[\"title\"],\n",
        "            \"source\": row[\"source\"],\n",
        "            \"id\": row[\"id\"],\n",
        "            \"chunk-id\": row[\"chunk-id\"],\n",
        "            \"text\": row[\"chunk\"]\n",
        "        }\n",
        "    )\n",
        "    docs.append(doc)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "yb540kEs_6PZ"
      },
      "source": [
        "## Embedding and Vector DB Setup"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "BlKEmBZMBxtd"
      },
      "source": [
        "Initialize our embedding model:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "qZ6vTiDPBznz"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from getpass import getpass\n",
        "from langchain.embeddings.openai import OpenAIEmbeddings\n",
        "\n",
        "model_name = \"text-embedding-ada-002\"\n",
        "\n",
        "# get openai api key from platform.openai.com\n",
        "OPENAI_API_KEY = os.getenv('OPENAI_API_KEY') or getpass(\"OpenAI API Key: \")\n",
        "\n",
        "embed = OpenAIEmbeddings(\n",
        "    model=model_name, openai_api_key=OPENAI_API_KEY, disallowed_special=()\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "IurEkeeI-IYl"
      },
      "source": [
        "Now we create our vector DB to store our vectors. For this we need a [free Pinecone API key](https://app.pinecone.io); the key can be found via the \"API Keys\" button in the left navbar of the Pinecone dashboard."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [],
      "source": [
        "from pinecone import Pinecone\n",
        "\n",
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
        "api_key = os.getenv(\"PINECONE_API_KEY\") or getpass(\"Enter your Pinecone API key: \")\n",
        "\n",
        "# configure client\n",
        "pc = Pinecone(api_key=api_key)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we set up our index specification. This allows us to define the cloud provider and region where we want to deploy our index. You can find a list of all [available providers and regions here](https://docs.pinecone.io/docs/projects)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {},
      "outputs": [],
      "source": [
        "from pinecone import ServerlessSpec\n",
        "\n",
        "spec = ServerlessSpec(\n",
        "    cloud=\"aws\", region=\"us-west-2\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "When creating the index, we set `dimension` equal to the dimensionality of Ada-002 (`1536`) and use a `metric` also compatible with Ada-002 (either `cosine` or `dotproduct`). We also pass our `spec` to the index initialization."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "nL3KFF9E9Qb_"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{'dimension': 1536,\n",
              " 'index_fullness': 0.0,\n",
              " 'namespaces': {},\n",
              " 'total_vector_count': 0}"
            ]
          },
          "execution_count": 6,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import time\n",
        "\n",
        "index_name = \"langchain-multi-query-demo\"\n",
        "existing_indexes = [\n",
        "    index_info[\"name\"] for index_info in pc.list_indexes()\n",
        "]\n",
        "\n",
        "# check if index already exists (it shouldn't if this is first time)\n",
        "if index_name not in existing_indexes:\n",
        "    # if it does not exist, create the index\n",
        "    pc.create_index(\n",
        "        index_name,\n",
        "        dimension=1536,  # dimensionality of ada 002\n",
        "        metric='dotproduct',\n",
        "        spec=spec\n",
        "    )\n",
        "    # wait for index to be initialized\n",
        "    while not pc.describe_index(index_name).status['ready']:\n",
        "        time.sleep(1)\n",
        "\n",
        "# connect to index\n",
        "index = pc.Index(index_name)\n",
        "time.sleep(1)\n",
        "# view index stats\n",
        "index.describe_index_stats()"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "A3B7dHsd6QcP"
      },
      "source": [
        "Populate our index:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "B7Yi2YGBpTWf"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "41584"
            ]
          },
          "execution_count": 7,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "len(docs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "thfCYHuSpW4H"
      },
      "outputs": [],
      "source": [
        "# if you want to speed things up to follow along\n",
        "#docs = docs[:5000]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "HXVVU97C6SwT"
      },
      "outputs": [
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "7ccacf3923234dd5821880d7942218c7",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "  0%|          | 0/416 [00:00<?, ?it/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm.auto import tqdm\n",
        "\n",
        "batch_size = 100\n",
        "\n",
        "for i in tqdm(range(0, len(docs), batch_size)):\n",
        "    i_end = min(len(docs), i+batch_size)\n",
        "    docs_batch = docs[i:i_end]\n",
        "    # get IDs\n",
        "    ids = [f\"{doc.metadata['id']}-{doc.metadata['chunk-id']}\" for doc in docs_batch]\n",
        "    # get text and embed\n",
        "    texts = [d.page_content for d in docs_batch]\n",
        "    embeds = embed.embed_documents(texts=texts)\n",
        "    # get metadata\n",
        "    metadata = [d.metadata for d in docs_batch]\n",
        "    to_upsert = zip(ids, embeds, metadata)\n",
        "    index.upsert(vectors=to_upsert)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8FbngTBzAAU-"
      },
      "source": [
        "## Multi-Query with LangChain"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "YVVYr13n_Ot2"
      },
      "source": [
        "Now we switch to using our populated index as a vector store in LangChain."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "0ETs0emsAh-K",
        "outputId": "0b1de24b-2f9f-48a6-d8ca-bd3d6aa007e1"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/Users/jamesbriggs/opt/anaconda3/envs/ml/lib/python3.9/site-packages/langchain_community/vectorstores/pinecone.py:74: UserWarning: Passing in `embedding` as a Callable is deprecated. Please pass in an Embeddings object instead.\n",
            "  warnings.warn(\n"
          ]
        }
      ],
      "source": [
        "from langchain.vectorstores import Pinecone\n",
        "\n",
        "text_field = \"text\"\n",
        "\n",
        "vectorstore = Pinecone(index, embed.embed_query, text_field)"
      ]
    },
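    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note the deprecation warning above: this still works with the versions pinned earlier, but the warning suggests passing the `Embeddings` object itself rather than the `embed_query` callable, i.e. `vectorstore = Pinecone(index, embed, text_field)`."
      ]
    },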
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "nW_GCB6a3_N_"
      },
      "outputs": [],
      "source": [
        "from langchain.chat_models import ChatOpenAI\n",
        "\n",
        "llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "1iptBAriANrD"
      },
      "source": [
        "We initialize the `MultiQueryRetriever`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {
        "id": "yYjztBp2ANHC"
      },
      "outputs": [],
      "source": [
        "from langchain.retrievers.multi_query import MultiQueryRetriever\n",
        "\n",
        "retriever = MultiQueryRetriever.from_llm(\n",
        "    retriever=vectorstore.as_retriever(), llm=llm\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "H8qZCd1TAMAn"
      },
      "source": [
        "We set up logging so that we can see the queries as they're generated by our LLM."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {
        "id": "rgV1eYU6FgX7"
      },
      "outputs": [],
      "source": [
        "# Set logging for the queries\n",
        "import logging\n",
        "\n",
        "logging.basicConfig()\n",
        "logging.getLogger(\"langchain.retrievers.multi_query\").setLevel(logging.INFO)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "jrjwkpJWAaAn"
      },
      "source": [
        "To query with our multi-query retriever we call the `get_relevant_documents` method."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "_DJ4cSJXFinV",
        "outputId": "265900d1-6aa7-4d28-cbbe-e2e95b7df7b4"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide information about llama 2 and its characteristics?', '2. What can you tell me about llama 2 and its features?', '3. Could you give me an overview of llama 2 and its properties?']\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "6"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "question = \"tell me about llama 2?\"\n",
        "\n",
        "docs = retriever.get_relevant_documents(query=question)\n",
        "len(docs)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "kSu1GsFfAqCd"
      },
      "source": [
        "From this we get a variety of docs, retrieved by each of our queries independently. By default the `retriever` returns `3` docs for each query, totaling `9` documents; however, because there is some overlap, we actually end up with `6` unique docs."
      ]
    },
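    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick check, we can confirm the deduplication ourselves by counting the unique `(id, chunk-id)` metadata pairs among the returned documents:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# each chunk is uniquely identified by its (id, chunk-id) metadata pair\n",
        "unique_chunks = {(d.metadata[\"id\"], d.metadata[\"chunk-id\"]) for d in docs}\n",
        "# equals len(docs) because the retriever deduplicates across the generated queries\n",
        "len(unique_chunks)"
      ]
    },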
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ce5WBh6MFltP",
        "outputId": "f7b06949-e2a6-472e-eaf9-e712dc4bcca2"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[Document(page_content='Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang\\nRoss Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang\\nAngela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic\\nSergey Edunov Thomas Scialom\\x03\\nGenAI, Meta\\nAbstract\\nIn this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\\nlarge language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\\nOur fine-tuned LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , are optimized for dialogue use cases. Our\\nmodels outperform open-source chat models on most benchmarks we tested, and based on\\nourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosedsource models. We provide a detailed description of our approach to fine-tuning and safety', metadata={'chunk-id': '1', 'id': '2307.09288', 'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
              " Document(page_content='asChatGPT,BARD,andClaude. TheseclosedproductLLMsareheavilyfine-tunedtoalignwithhuman\\npreferences, which greatly enhances their usability and safety. This step can require significant costs in\\ncomputeandhumanannotation,andisoftennottransparentoreasilyreproducible,limitingprogresswithin\\nthe community to advance AI alignment research.\\nIn this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle and\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested,\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc models generally perform better than existing open-source models. They also appear to\\nbe on par with some of the closed-source models, at least on the human evaluations we performed (see', metadata={'chunk-id': '9', 'id': '2307.09288', 'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
              " Document(page_content='Q:Yes or no: Could a llama birth twice during War in Vietnam (1945-46)?\\nA:TheWar inVietnam was6months. Thegestationperiod forallama is11months, which ismore than 6\\nmonths. Thus, allama could notgive birth twice duringtheWar inVietnam. So the answer is no.\\nQ:Yes or no: Would a pear sink in water?\\nA:Thedensityofapear isabout 0:6g=cm3,which islessthan water.Objects lessdense than waterfloat. Thus,\\napear would float. So the answer is no.\\nTable 26: Few-shot exemplars for full chain of thought prompt for Date Understanding.\\nPROMPT FOR DATE UNDERSTANDING\\nQ:2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?\\nA:If2015 iscomingin36hours, then itiscomingin2days. 2days before01/01/2015 is12/30/2014, sotoday\\nis12/30/2014. Sooneweek from todaywillbe01/05/2015. So the answer is 01/05/2015.', metadata={'chunk-id': '137', 'id': '2201.11903', 'source': 'http://arxiv.org/pdf/2201.11903', 'title': 'Chain-of-Thought Prompting Elicits Reasoning in Large Language Models'}),\n",
              " Document(page_content='Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,\\nand Tatsunori B. Hashimoto. 2023. Stanford alpaca:\\nAn instruction-following llama model. https://\\ngithub.com/tatsu-lab/stanford_alpaca .\\nRomal Thoppilan, Daniel De Freitas, Jamie Hall, Noam\\nShazeer, Apoorv Kulshreshtha, Heng-Tze Cheng,\\nAlicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al.\\n2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239 .\\nHugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier\\nMartinet, Marie-Anne Lachaux, Timothée Lacroix,\\nBaptiste Rozière, Naman Goyal, Eric Hambro, Faisal\\nAzhar, Aurelien Rodriguez, Armand Joulin, Edouard\\nGrave, and Guillaume Lample. 2023. Llama: Open\\nand efficient foundation language models. arXiv\\npreprint arXiv:2302.13971 .', metadata={'chunk-id': '37', 'id': '2304.01196', 'source': 'http://arxiv.org/pdf/2304.01196', 'title': 'Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data'}),\n",
              " Document(page_content=\"user asks weather, which needs more specific and updated information. I should\\nfollow rules 3 (helpful), 7 (candor), 9 (static).\\nWatson: As an AI language model, I have no access to weather sensors or real-time\\nweather data. However, I can recommend you to:\\n1. check your local weather forecast on a weather website or app,\\n2. or by watching the news or checking your local weather station.\\nUser: Good job! Clear context\\nWatson (auto reply): Thank you! For further questions or guidance, just reach out.\\nUser: Tell me about alpaca 's family\\nWatson (internal thoughts): I am a helpful, ethical, and reliable assistant. The\\nuser asks alpaca, which is a public entity in one of my internal knowledge\\nbases: Wikipedia. I should follow rules 2 (informative), 6 (multi-aspect), 8 (\\nknowledge recitation).\\nWatson: According to my knowledge from Wikipedia, here is what I know about alpaca 's\\nfamily:\\nThe alpaca (Lama pacos) is a species of South American camelid mammal. It is similar\\nto, and often confused with, the llama. However, alpacas are often noticeably\\nsmaller than llamas.\\nRegarding their family, alpacas belong to the Camelidae family, which includes other\", metadata={'chunk-id': '112', 'id': '2305.03047', 'source': 'http://arxiv.org/pdf/2305.03047', 'title': 'Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision'}),\n",
              " Document(page_content='models will be released as we improve model safety with community feedback.\\nLicense A custom commercial license is available at: ai.meta.com/resources/\\nmodels-and-libraries/llama-downloads/\\nWhere to send commentsInstructions on how to provide feedback or comments on the model can be\\nfound in the model README, or by opening an issue in the GitHub repository\\n(https://github.com/facebookresearch/llama/ ).\\nIntended Use\\nIntended Use Cases L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle is intended for commercial and research use in English. Tuned models\\nare intended for assistant-like chat, whereas pretrained models can be adapted\\nfor a variety of natural language generation tasks.\\nOut-of-Scope Uses Use in any manner that violates applicable laws or regulations (including trade\\ncompliancelaws). UseinlanguagesotherthanEnglish. Useinanyotherway\\nthat is prohibited by the Acceptable Use Policy and Licensing Agreement for\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle.\\nHardware and Software (Section 2.2)\\nTraining Factors We usedcustomtraininglibraries, Meta’sResearchSuperCluster, andproductionclustersforpretraining. Fine-tuning,annotation,andevaluationwerealso', metadata={'chunk-id': '317', 'id': '2307.09288', 'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'})]"
            ]
          },
          "execution_count": 15,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "docs"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "KLMwfZPfBF89"
      },
      "source": [
        "## Adding the Generation in RAG"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "X79eNNL_BM4G"
      },
      "source": [
        "So far we've built a multi-query powered **R**etrieval **A**ugmentation chain. Now, we need to add **G**eneration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {
        "id": "jNnXYOtqypiz"
      },
      "outputs": [],
      "source": [
        "from langchain.prompts import PromptTemplate\n",
        "from langchain.chains import LLMChain\n",
        "\n",
        "QA_PROMPT = PromptTemplate(\n",
        "    input_variables=[\"query\", \"contexts\"],\n",
        "    template=\"\"\"You are a helpful assistant who answers user queries using the\n",
        "    contexts provided. If the question cannot be answered using the information\n",
        "    provided say \"I don't know\".\n",
        "\n",
        "    Contexts:\n",
        "    {contexts}\n",
        "\n",
        "    Question: {query}\"\"\",\n",
        ")\n",
        "\n",
        "# Chain\n",
        "qa_chain = LLMChain(llm=llm, prompt=QA_PROMPT)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 123
        },
        "id": "h6GVEZkhytdM",
        "outputId": "f03086b8-8d30-4d6e-a723-833ffecbcf8e"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "'Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc, are optimized for dialogue use cases. These models outperform open-source chat models on most benchmarks and are considered a suitable substitute for closed-source models based on humane evaluations for helpfulness and safety. The approach to fine-tuning and safety is described in detail.'"
            ]
          },
          "execution_count": 17,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "out = qa_chain(\n",
        "    inputs={\n",
        "        \"query\": question,\n",
        "        \"contexts\": \"\\n---\\n\".join([d.page_content for d in docs])\n",
        "    }\n",
        ")\n",
        "out[\"text\"]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "KemgDCg8DkgE"
      },
      "source": [
        "## Chaining Everything with a SequentialChain"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "kTbLlWgEEII1"
      },
      "source": [
        "We can pull together the logic above into a function or set of methods, whichever is preferred. However, if we'd like to use LangChain's approach, we must \"chain\" together multiple chains. The first retrieval component is (1) not a chain per se, and (2) requires processing of the output. To do that, and to fit with LangChain's \"chaining chains\" approach, we set up the _retrieval_ component within a `TransformChain`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {
        "id": "BpFmiRtYDpHp"
      },
      "outputs": [],
      "source": [
        "from langchain.chains import TransformChain\n",
        "\n",
        "def retrieval_transform(inputs: dict) -> dict:\n",
        "    docs = retriever.get_relevant_documents(query=inputs[\"question\"])\n",
        "    docs = [d.page_content for d in docs]\n",
        "    docs_dict = {\n",
        "        \"query\": inputs[\"question\"],\n",
        "        \"contexts\": \"\\n---\\n\".join(docs)\n",
        "    }\n",
        "    return docs_dict\n",
        "\n",
        "retrieval_chain = TransformChain(\n",
        "    input_variables=[\"question\"],\n",
        "    output_variables=[\"query\", \"contexts\"],\n",
        "    transform=retrieval_transform\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "SoD45Au1Eg-r"
      },
      "source": [
        "Now we chain this with our generation step using the `SequentialChain`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {
        "id": "azqCwDwXEkDT"
      },
      "outputs": [],
      "source": [
        "from langchain.chains import SequentialChain\n",
        "\n",
        "rag_chain = SequentialChain(\n",
        "    chains=[retrieval_chain, qa_chain],\n",
        "    input_variables=[\"question\"],  # we need to name differently to output \"query\"\n",
        "    output_variables=[\"query\", \"contexts\", \"text\"]\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "xpB2aWV4ESzf"
      },
      "source": [
        "Then we perform the full RAG pipeline:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 161
        },
        "id": "JvJbUaLqFRG2",
        "outputId": "582caa21-777a-4a01-a618-9db64185ad5e"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "INFO:langchain.retrievers.multi_query:Generated queries: ['1. What information can you provide about llama 2?', '2. Could you give me some details about llama 2?', '3. I would like to learn more about llama 2. Can you help me with that?']\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "'Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. These LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc, are optimized for dialogue use cases. They have been shown to outperform open-source chat models on most benchmarks and are considered a suitable substitute for closed-source models based on humane evaluations for helpfulness and safety. The approach to fine-tuning and safety is described in detail in the work.'"
            ]
          },
          "execution_count": 20,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "out = rag_chain({\"question\": question})\n",
        "out[\"text\"]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "bLmv01geK-ZS"
      },
      "source": [
        "---"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "vAZVPhHzLDQQ"
      },
      "source": [
        "## Custom Multi-Query"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "rI-KVO6zjJZw"
      },
      "source": [
        "We'll try this with two prompts, both of which encourage more variety in the search queries (the code below uses Prompt B).\n",
        "\n",
        "**Prompt A**\n",
        "```\n",
        "Your task is to generate 3 different search queries that aim to\n",
        "answer the user question from multiple perspectives.\n",
        "Each query MUST tackle the question from a different viewpoint,\n",
        "we want to get a variety of RELEVANT search results.\n",
        "Provide these alternative questions separated by newlines.\n",
        "Original question: {question}\n",
        "```\n",
        "\n",
        "\n",
        "**Prompt B**\n",
        "```\n",
        "Your task is to generate 3 different search queries that aim to\n",
        "answer the user question from multiple perspectives. The user questions\n",
        "are focused on Large Language Models, Machine Learning, and related\n",
        "disciplines.\n",
        "Each query MUST tackle the question from a different viewpoint, we\n",
        "want to get a variety of RELEVANT search results.\n",
        "Provide these alternative questions separated by newlines.\n",
        "Original question: {question}\n",
        "```"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {
        "id": "4IlEnYeKLFzh"
      },
      "outputs": [],
      "source": [
        "from typing import List\n",
        "from langchain.chains import LLMChain\n",
        "from pydantic import BaseModel, Field\n",
        "from langchain.prompts import PromptTemplate\n",
        "from langchain.output_parsers import PydanticOutputParser\n",
        "\n",
        "\n",
        "# Output parser will split the LLM result into a list of queries\n",
        "class LineList(BaseModel):\n",
        "    # \"lines\" is the key (attribute name) of the parsed output\n",
        "    lines: List[str] = Field(description=\"Lines of text\")\n",
        "\n",
        "\n",
        "class LineListOutputParser(PydanticOutputParser):\n",
        "    def __init__(self) -> None:\n",
        "        super().__init__(pydantic_object=LineList)\n",
        "\n",
        "    def parse(self, text: str) -> LineList:\n",
        "        lines = text.strip().split(\"\\n\")\n",
        "        return LineList(lines=lines)\n",
        "\n",
        "\n",
        "output_parser = LineListOutputParser()\n",
        "\n",
        "template = \"\"\"\n",
        "Your task is to generate 3 different search queries that aim to\n",
        "answer the user question from multiple perspectives. The user questions\n",
        "are focused on Large Language Models, Machine Learning, and related\n",
        "disciplines.\n",
        "Each query MUST tackle the question from a different viewpoint, we\n",
        "want to get a variety of RELEVANT search results.\n",
        "Provide these alternative questions separated by newlines.\n",
        "Original question: {question}\n",
        "\"\"\"\n",
        "\n",
        "QUERY_PROMPT = PromptTemplate(\n",
        "    input_variables=[\"question\"],\n",
        "    template=template,\n",
        ")\n",
        "llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)\n",
        "\n",
        "# Chain\n",
        "llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)"
      ]
    },
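    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what the parser does, we can run it on a toy example; it simply splits the LLM output on newlines into the `lines` attribute:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# the parser splits raw LLM text into a LineList of separate queries\n",
        "output_parser.parse(\"first query\\nsecond query\\nthird query\").lines"
      ]
    },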
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "0CgduNJWLBez",
        "outputId": "7ffee6c2-27b4-4bdf-8c79-7effd27e3cd4"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "INFO:langchain.retrievers.multi_query:Generated queries: ['1. What are the key features and capabilities of Large Language Model Llama 2?', '2. How does Llama 2 compare to other Large Language Models in terms of performance and efficiency?', '3. What are the applications and use cases of Llama 2 in the field of Machine Learning and Natural Language Processing?']\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "7"
            ]
          },
          "execution_count": 22,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "# Run\n",
        "retriever = MultiQueryRetriever(\n",
        "    retriever=vectorstore.as_retriever(), llm_chain=llm_chain, parser_key=\"lines\"\n",
        ")  # \"lines\" is the key (attribute name) of the parsed output\n",
        "\n",
        "# Results\n",
        "docs = retriever.get_relevant_documents(\n",
        "    query=question\n",
        ")\n",
        "len(docs)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PSySsaDKMK1i",
        "outputId": "e6f95abd-99fc-4576-d1f4-5fd4c21c70ab"
      },
      "outputs": [
        {
          "data": {
            "text/plain": [
              "[Document(page_content='Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang\\nRoss Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang\\nAngela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic\\nSergey Edunov Thomas Scialom\\x03\\nGenAI, Meta\\nAbstract\\nIn this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\\nlarge language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\\nOur fine-tuned LLMs, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , are optimized for dialogue use cases. Our\\nmodels outperform open-source chat models on most benchmarks we tested, and based on\\nourhumanevaluationsforhelpfulnessandsafety,maybeasuitablesubstituteforclosedsource models. We provide a detailed description of our approach to fine-tuning and safety', metadata={'chunk-id': '1', 'id': '2307.09288', 'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
              " Document(page_content='2\\n3.4.3 Even programmatic measures of model capability can be highly subjective . . . . . . . 15\\n3.5 Even large language models are brittle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15\\n3.6 Social bias in large language models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17\\n3.7 Performance on non-English languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\\n4 Behavior on selected tasks 21\\n4.1 Checkmate-in-one task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22\\n4.2 Periodic elements task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\\n5 Additional related work 24\\n6 Discussion 25', metadata={'chunk-id': '14', 'id': '2206.04615', 'source': 'http://arxiv.org/pdf/2206.04615', 'title': 'Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models'}),\n",
              " Document(page_content='challenges described above) about how the development of large language models has unfolded thus far, including a\\nquantitative analysis of the increasing gap between academia and industry for large model development.\\nFinally, in Section 4 we outline policy interventions that may help concretely address the challenges we outline in\\nSections 2 and 3 in order to help guide the development and deployment of larger models for the broader social good.\\nWe leave some illustrative experiments, technical details, and caveats about our claims in Appendix A.\\n2 DISTINGUISHING FEATURES OF LARGE GENERATIVE MODELS\\nWe claim that large generative models (e.g., GPT-3 [ 11], LaMDA [ 78], Gopher [ 62], etc.) are distinguished by four\\nfeatures:\\n•Smooth, general capability scaling : It is possible to predictably improve the general performance of generative\\nmodels — their loss on capturing a specific, though very broad, data distribution — by scaling up the size of the\\nmodels, the compute used to train them, and the amount of data they’re trained on in the correct proportions.\\nThese proportions can be accurately predicted by scaling laws (Figure 1). We believe that these scaling laws\\nde-risk investments in building larger and generally more capable models despite the high resource costs and the\\ndifficulty of predicting precisely how well a model will perform on a specific task. Note, the harmful properties', metadata={'chunk-id': '9', 'id': '2202.07785', 'source': 'http://arxiv.org/pdf/2202.07785', 'title': 'Predictability and Surprise in Large Generative Models'}),\n",
              " Document(page_content='asChatGPT,BARD,andClaude. TheseclosedproductLLMsareheavilyfine-tunedtoalignwithhuman\\npreferences, which greatly enhances their usability and safety. This step can require significant costs in\\ncomputeandhumanannotation,andisoftennottransparentoreasilyreproducible,limitingprogresswithin\\nthe community to advance AI alignment research.\\nIn this work, we develop and release Llama 2, a family of pretrained and fine-tuned LLMs, L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle and\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc , at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested,\\nL/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc models generally perform better than existing open-source models. They also appear to\\nbe on par with some of the closed-source models, at least on the human evaluations we performed (see', metadata={'chunk-id': '9', 'id': '2307.09288', 'source': 'http://arxiv.org/pdf/2307.09288', 'title': 'Llama 2: Open Foundation and Fine-Tuned Chat Models'}),\n",
              " Document(page_content='but BoolQ. Similarly, this model surpasses PaLM540B everywhere but on BoolQ and WinoGrande.\\nLLaMA-13B model also outperforms GPT-3 on\\nmost benchmarks despite being 10 \\x02smaller.\\n3.2 Closed-book Question Answering\\nWe compare LLaMA to existing large language\\nmodels on two closed-book question answering\\nbenchmarks: Natural Questions (Kwiatkowski\\net al., 2019) and TriviaQA (Joshi et al., 2017). For\\nboth benchmarks, we report exact match performance in a closed book setting, i.e., where the models do not have access to documents that contain\\nevidence to answer the question. In Table 4, we\\nreport performance on NaturalQuestions, and in Table 5, we report on TriviaQA. On both benchmarks,\\nLLaMA-65B achieve state-of-the-arts performance\\nin the zero-shot and few-shot settings. More importantly, the LLaMA-13B is also competitive on\\nthese benchmarks with GPT-3 and Chinchilla, despite being 5-10 \\x02smaller. This model runs on a\\nsingle V100 GPU during inference.\\n0-shot 1-shot 5-shot 64-shot\\nGopher 280B 43.5 - 57.0 57.2', metadata={'chunk-id': '17', 'id': '2302.13971', 'source': 'http://arxiv.org/pdf/2302.13971', 'title': 'LLaMA: Open and Efficient Foundation Language Models'}),\n",
              " Document(page_content='5 Discussion 19\\n6 Conclusion 21\\n1 Introduction: motivation for the survey and definitions\\n1.1 Motivation\\nLarge Language Models (LLMs) ( Devlin et al. ,2019;Brown et al. ,2020;Chowdhery et al. ,2022) have fueled dramatic progress in Natural Language Processing (NLP ) and are already core in several products with\\nmillions of users, such as the coding assistant Copilot ( Chen et al. ,2021), Google search engine1or more recently ChatGPT2. Memorization ( Tirumala et al. ,2022) combined with compositionality ( Zhou et al. ,2022)\\ncapabilities made LLMs able to execute various tasks such as language understanding or conditional and unconditional text generation at an unprecedented level of pe rformance, thus opening a realistic path towards\\nhigher-bandwidth human-computer interactions.\\nHowever, LLMs suffer from important limitations hindering a broader deployment. LLMs often provide nonfactual but seemingly plausible predictions, often referr ed to as hallucinations ( Welleck et al. ,2020). This\\nleads to many avoidable mistakes, for example in the context of arithmetics ( Qian et al. ,2022) or within\\na reasoning chain ( Wei et al. ,2022c ). Moreover, many LLMs groundbreaking capabilities seem to emerge', metadata={'chunk-id': '5', 'id': '2302.07842', 'source': 'http://arxiv.org/pdf/2302.07842', 'title': 'Augmented Language Models: a Survey'}),\n",
              " Document(page_content='practicable options for academic research since they were acquired by Appen, a company that is\\nfocused on a business market.\\nThis paper explores the potential of large language models (LLMs) for text annotation tasks, with a\\nfocus on ChatGPT, which was released in November 2022. It demonstrates that zero-shot ChatGPT\\nclassifications (that is, without any additional training) outperform MTurk annotations, at a fraction\\nof the cost. LLMs have been shown to perform very well for a wide range of purposes, including\\nideological scaling (Wu et al., 2023), the classification of legislative proposals (Nay, 2023), the\\nresolution of cognitive psychology tasks (Binz and Schulz, 2023), and the simulation of human\\nsamples for survey research (Argyle et al., 2023). While a few studies suggested that ChatGPT\\nmight perform text annotation tasks of the kinds we have described (Kuzman, Mozeti ˇc and Ljubeši ´c,\\n2023; Huang, Kwak and An, 2023), to the best of our knowledge our work is the first systematic\\nevaluation. Our analysis relies on a sample of 6,183 documents, including tweets and news articles', metadata={'chunk-id': '3', 'id': '2303.15056', 'source': 'http://arxiv.org/pdf/2303.15056', 'title': 'ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks'})]"
            ]
          },
          "execution_count": 23,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "docs"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "F4q65OEiizU2"
      },
      "source": [
        "Putting this together in another `SequentialChain`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {
        "id": "LTRjTIKzi2-g"
      },
      "outputs": [],
      "source": [
        "retrieval_chain = TransformChain(\n",
        "    input_variables=[\"question\"],\n",
        "    output_variables=[\"query\", \"contexts\"],\n",
        "    transform=retrieval_transform\n",
        ")\n",
        "\n",
        "rag_chain = SequentialChain(\n",
        "    chains=[retrieval_chain, qa_chain],\n",
        "    input_variables=[\"question\"],  # we need to name differently to output \"query\"\n",
        "    output_variables=[\"query\", \"contexts\", \"text\"]\n",
        ")"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "Rda74xhpjE6A"
      },
      "source": [
        "And asking again:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 25,
      "metadata": {
        "id": "9UcBY71cjGgX"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "INFO:langchain.retrievers.multi_query:Generated queries: ['1. What are the key features and capabilities of Large Language Model Llama 2?', '2. How does Llama 2 compare to other Large Language Models in terms of performance and efficiency?', '3. What are the applications and use cases of Llama 2 in the field of Machine Learning and Natural Language Processing?']\n"
          ]
        },
        {
          "data": {
            "text/plain": [
              "'Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. These models, called L/l.sc/a.sc/m.sc/a.sc /two.taboldstyle-C/h.sc/a.sc/t.sc, are optimized for dialogue use cases and have been shown to outperform open-source chat models on most benchmarks. They are considered as a suitable substitute for closed-source models in terms of helpfulness and safety. The development of Llama 2 addresses challenges such as programmatic measures of model capability, brittleness of large language models, social bias, and performance on non-English languages.'"
            ]
          },
          "execution_count": 25,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "out = rag_chain({\"question\": question})\n",
        "out[\"text\"]"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "id": "8jULksgk7gLA"
      },
      "source": [
        "After finishing, delete your Pinecone index to save resources:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 26,
      "metadata": {},
      "outputs": [],
      "source": [
        "pc.delete_index(index_name)"
      ]
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "---"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.9.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}