{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d7bbdbd7",
   "metadata": {},
   "source": [
    "# Q&A with LlamaIndex\n",
    "\n",
    "This notebook demonstrates how to use [LlamaIndex](https://docs.llamaindex.ai/en/stable/) to build a chatbot that references a custom knowledge base.\n",
    "\n",
    "Suppose you have some text documents (PDF, blog, Notion pages, etc.) and want to ask questions related to the contents of those documents. LLMs, given their proficiency in understanding text, are a great tool for this.\n",
    "\n",
    "<div class=\"alert alert-block alert-info\">\n",
    "    \n",
    "⚠️ The notebook before this one, `02_langchain_index_simple.ipynb`, contains the same functionality as this notebook but uses some LangChain components instead of LlamaIndex components.\n",
    "\n",
    "Concepts used in this notebook are explained in depth in the previous notebook. If you are new to retrieval augmented generation, it is recommended that you go through the previous notebook before this one.\n",
    "\n",
    "Ultimately, we recommend reading about LangChain vs. LlamaIndex and picking the framework, or the components of each, that make the most sense for your use case. This is discussed a bit further below.\n",
    "\n",
    "</div>\n",
    "\n",
    "### [LlamaIndex](https://docs.llamaindex.ai/en/stable/)\n",
    "[**LlamaIndex**](https://docs.llamaindex.ai/en/stable/) is a data framework for LLM applications to ingest, structure, and access private or domain-specific data. Because LLMs are trained only up to a fixed point in time and do not contain knowledge that is proprietary to an enterprise, they can't answer questions about new or proprietary information. LlamaIndex helps solve this problem by providing data connectors to ingest data, indices to structure data for storage, and engines to communicate with data.\n",
    "\n",
    "\n",
    "### [LlamaIndex](https://docs.llamaindex.ai/en/stable/) or [LangChain](https://python.langchain.com/docs/get_started/introduction)?\n",
    "\n",
    "It's recommended to read more about the unique strengths of both LlamaIndex and LangChain. At a high level, LangChain is a more general framework for building applications with LLMs. LangChain is (currently) more mature when it comes to multi-step chains and some other chat functionality such as conversational memory. LlamaIndex has plenty of overlap with LangChain, but is particularly strong for loading data from a wide variety of sources and for indexing/querying tasks.\n",
    "\n",
    "Since LlamaIndex can be used *with* LangChain, the frameworks' unique capabilities can be leveraged together; the combination of the two is demonstrated in this notebook.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "953946f1",
   "metadata": {},
   "source": [
    "### Step 1: Integrate TensorRT-LLM with LangChain *and* LlamaIndex\n",
    "#### Customized LangChain LLM in LlamaIndex\n",
    "LangChain allows you to create custom wrappers for your LLM in case you want to use your own LLM or a wrapper other than the ones LangChain supports. Since we are using LlamaIndex, we have written a custom LangChain wrapper that is compatible with LlamaIndex.\n",
    "\n",
    "We can easily take a custom LLM that has been wrapped for LangChain and plug it into [LlamaIndex as an LLM](https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms.html#using-llms)! We use the [LlamaIndex LangChainLLM library](https://docs.llamaindex.ai/en/v0.9.48/api_reference/llms/langchain.html) so the LangChain LLM can be used in LlamaIndex.\n",
    "\n",
    "<div class=\"alert alert-block alert-warning\">\n",
    "    \n",
    "<b>WARNING!</b> Be sure to replace `server_url` with the address and port that Triton is running on.\n",
    "\n",
    "</div>\n",
    "\n",
    "Use the address and port that Triton is available on; for example, `localhost:8001`. **If you are running this notebook as part of the generative AI workflow, you can use the existing URL.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "7919fd82",
   "metadata": {},
   "outputs": [],
   "source": [
    "from triton_trt_llm import TensorRTLLM\n",
    "from llama_index.llms import LangChainLLM\n",
    "\n",
    "# Connect to the TensorRT-LLM model served by Triton and wrap it for LlamaIndex.\n",
    "trtllm = TensorRTLLM(server_url=\"llm:8001\", model_name=\"ensemble\", tokens=500)\n",
    "llm = LangChainLLM(llm=trtllm)"
   ]
  },
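  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "(Optional) As a quick sanity check, the wrapped LLM can be called directly before building the rest of the pipeline. The cell below is a minimal sketch: it assumes the Triton endpoint configured above is reachable and uses the generic `complete` method that LlamaIndex LLMs expose for single-shot prompting.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity-check sketch: send one prompt through the wrapped LLM.\n",
    "# Assumes the Triton server at `server_url` above is up and serving the model.\n",
    "completion = llm.complete(\"Briefly, what is retrieval augmented generation?\")\n",
    "print(completion.text)"
   ]
  },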
  {
   "cell_type": "markdown",
   "id": "18600300",
   "metadata": {},
   "source": [
    "### Step 2: Create a Prompt Template\n",
    "\n",
    "A [**prompt template**](https://docs.llamaindex.ai/en/stable/module_guides/models/prompts.html) is a common paradigm in LLM development.\n",
    "\n",
    "It is a pre-defined set of instructions provided to the LLM that guides the output produced by the model. A template can contain few-shot examples and guidance, and is a quick way to engineer the responses from the LLM. Llama 2 accepts the [prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) shown in `LLAMA_PROMPT_TEMPLATE`, which we construct from:\n",
    "- The system prompt\n",
    "- The context\n",
    "- The user's question\n",
    "  \n",
    "Much like LangChain's abstraction of prompts, LlamaIndex has similar abstractions for you to create prompts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4fa60e49",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import Prompt\n",
    "\n",
    "LLAMA_PROMPT_TEMPLATE = (\n",
    " \"<s>[INST] <<SYS>>\"\n",
    " \"Use the following context to answer the user's question. If you don't know the answer, just say that you don't know, don't try to make up an answer.\"\n",
    " \"<</SYS>>\"\n",
    " \"<s>[INST] Context: {context_str} Question: {query_str} Only return the helpful answer below and nothing else. Helpful answer:[/INST]\"\n",
    ")\n",
    "\n",
    "qa_template = Prompt(LLAMA_PROMPT_TEMPLATE)"
   ]
  },
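  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "To see what the query engine will eventually send to the LLM, the template can be rendered with placeholder values. This is an illustrative sketch only; the toy context and question below are made up, and it assumes the `format` method of LlamaIndex prompt templates.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: render the prompt template with toy values to inspect\n",
    "# the exact string that will be sent to the Llama 2 model.\n",
    "print(\n",
    "    qa_template.format(\n",
    "        context_str=\"Llama 2 models were trained with a 4k token context window.\",\n",
    "        query_str=\"What is the context length of Llama 2?\",\n",
    "    )\n",
    ")"
   ]
  },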
  {
   "cell_type": "markdown",
   "id": "6063c0e0",
   "metadata": {},
   "source": [
    "### Step 3: Load Documents\n",
    "\n",
    "<div>\n",
    "<img src=\"./imgs/llama_hub.png\" width=\"500\"/>\n",
    "</div>\n",
    "\n",
    "LlamaIndex provides [**data loaders**](https://docs.llamaindex.ai/en/stable/module_guides/loading/connector/root.html#data-connectors-llamahub) through Llama Hub.\n",
    "These allow custom data sources to be connected to your LLM using integrations.\n",
    "For example, integrations are available to load documents from\n",
    "Jira,\n",
    "Outlook Calendar,\n",
    "Slack,\n",
    "Trello, and many other applications.\n",
    "\n",
    "At the core of each data loader is a `download_loader` function, which downloads the loader file into a module that you can use in your application. Once the loader is downloaded, data is ingested through the loader. The output of this ingestion is data formatted as a LlamaIndex [**Document**](https://docs.llamaindex.ai/en/stable/module_guides/loading/documents_and_nodes/root.html#documents-nodes) (text and metadata).\n",
    "\n",
    "Similar to the previous notebook with LangChain, an [`UnstructuredReader`](https://llamahub.ai/l/readers/llama-index-readers-file) is used in this example. However, this time it's from [Llama Hub](https://llamahub.ai/) (LlamaIndex). Again, we load a research paper about Llama 2 from Meta.\n",
    "\n",
    "[Here](https://python.langchain.com/docs/integrations/document_loaders) are some of the other document loaders available from LangChain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "4f14d618",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "File ‘llama2_paper.pdf’ already there; not retrieving.\n"
     ]
    }
   ],
   "source": [
    "! wget -O \"llama2_paper.pdf\" -nc --user-agent=\"Mozilla\" https://arxiv.org/pdf/2307.09288.pdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "81fe0d1c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[nltk_data] Downloading package punkt to /root/nltk_data...\n",
      "[nltk_data]   Package punkt is already up-to-date!\n",
      "[nltk_data] Downloading package averaged_perceptron_tagger to\n",
      "[nltk_data]     /root/nltk_data...\n",
      "[nltk_data]   Package averaged_perceptron_tagger is already up-to-\n",
      "[nltk_data]       date!\n"
     ]
    }
   ],
   "source": [
    "from llama_hub.file.unstructured.base import UnstructuredReader\n",
    "import time\n",
    "\n",
    "# Ingest the PDF into LlamaIndex Documents (text + metadata) and time the load.\n",
    "loader = UnstructuredReader()\n",
    "start_time = time.time()\n",
    "documents = loader.load_data(file=\"llama2_paper.pdf\")\n",
    "print(f\"--- {time.time() - start_time} seconds ---\")"
   ]
  },
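  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "As mentioned above, loaders can also be fetched at runtime with `download_loader` instead of being imported from the `llama_hub` package. The cell below is an equivalent sketch of the previous cell, assuming the `UnstructuredReader` entry on Llama Hub; it is not required if the import above already works for you.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Alternative sketch: download the same reader from Llama Hub at runtime.\n",
    "from llama_index import download_loader\n",
    "\n",
    "UnstructuredReader = download_loader(\"UnstructuredReader\")\n",
    "documents = UnstructuredReader().load_data(file=\"llama2_paper.pdf\")"
   ]
  },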
  {
   "cell_type": "markdown",
   "id": "068e61bd",
   "metadata": {},
   "source": [
    "### Step 4: Transform Documents with Text Splitting and a Node Parser\n",
    "#### a) Chunk Documents and Parse Nodes\n",
    "Once documents have been loaded, they are often transformed. One method of transformation is known as **chunking**, which breaks down large pieces of text, for example, a long document, into smaller segments. This technique is valuable because it helps [optimize the relevance of the content returned from the vector database](https://www.pinecone.io/learn/chunking-strategies/).\n",
    "\n",
    "This is the same process as in the previous notebook; again, we use a LangChain text splitter. In this example, we use a [``SentenceTransformersTokenTextSplitter``](https://api.python.langchain.com/en/latest/sentence_transformers/langchain_text_splitters.sentence_transformers.SentenceTransformersTokenTextSplitter.html). The ``SentenceTransformersTokenTextSplitter`` is a specialized text splitter for use with sentence-transformer models. Its default behavior is to split the text into chunks that fit the token window of the sentence-transformer model you would like to use. This sentence-transformer model is also used to generate the embeddings from documents.\n",
    "\n",
    "There are some nuanced complexities to text splitting, since semantically related text should, in theory, be kept together.\n",
    "\n",
    "To use LangChain's `SentenceTransformersTokenTextSplitter` with LlamaIndex, we use the [**LangChain node parser**](https://docs.llamaindex.ai/en/stable/module_guides/loading/node_parsers/modules.html#langchainnodeparser) on top of the text splitter from LangChain. This is not required, but since LlamaIndex provides a [**node structure**](https://docs.llamaindex.ai/en/stable/module_guides/loading/documents_and_nodes/root.html#documents-nodes), we choose to use this functionality to enrich how documents are stored.\n",
    "\n",
    "**Nodes** represent chunks of source documents, but they also contain metadata and relationship information with other nodes and index structures. Since nodes provide these additional forms of hierarchy and connections across the data, they can help generate more accurate answers upon retrieval."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cdcd2b05",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "from langchain.text_splitter import SentenceTransformersTokenTextSplitter\n",
    "from llama_index.node_parser import LangchainNodeParser\n",
    "\n",
    "# Chunking parameters: the splitter counts tokens with the same\n",
    "# sentence-transformer model that is later used for embeddings.\n",
    "TEXT_SPLITTER_MODEL = \"intfloat/e5-large-v2\"\n",
    "TEXT_SPLITTER_TOKENS_PER_CHUNK = 510\n",
    "TEXT_SPLITTER_CHUNK_OVERLAP = 200\n",
    "\n",
    "text_splitter = SentenceTransformersTokenTextSplitter(\n",
    "    model_name=TEXT_SPLITTER_MODEL,\n",
    "    tokens_per_chunk=TEXT_SPLITTER_TOKENS_PER_CHUNK,\n",
    "    chunk_overlap=TEXT_SPLITTER_CHUNK_OVERLAP,\n",
    ")\n",
    "\n",
    "# Wrap the LangChain splitter so LlamaIndex can use it as a node parser.\n",
    "node_parser = LangchainNodeParser(text_splitter)"
   ]
  },
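  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "To make the node structure described above concrete, the sketch below parses just the first loaded document and prints a few attributes of the first resulting node. This is purely illustrative and assumes the standard LlamaIndex node fields (`node_id`, `metadata`, `relationships`); it can be skipped.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c9d0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: parse one document into nodes and inspect the first node.\n",
    "sample_nodes = node_parser.get_nodes_from_documents(documents[:1])\n",
    "\n",
    "first = sample_nodes[0]\n",
    "print(\"number of nodes:\", len(sample_nodes))\n",
    "print(\"node id:\", first.node_id)\n",
    "print(\"metadata:\", first.metadata)\n",
    "print(\"relationships:\", list(first.relationships.keys()))\n",
    "print(\"text preview:\", first.get_content()[:200])"
   ]
  },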
  {
   "cell_type": "markdown",
   "id": "2b27c7b7",
   "metadata": {},
   "source": [
    "Additionally, we use a LlamaIndex [``PromptHelper``](https://docs.llamaindex.ai/en/stable/api_reference/service_context/prompt_helper.html) to help deal with LLM context window token limitations. It calculates the context size available to the LLM by taking the initial context token length and subtracting the token space reserved for the prompt template and the output. It also provides a utility for re-packing text chunks from the index so that the context window is used maximally and the number of requests sent to the LLM is minimized.\n",
    "\n",
    "- ``context_window``: context window for the LLM -- the context length for Llama 2 is 4k tokens\n",
    "- ``num_output``: number of output tokens for the LLM\n",
    "- ``chunk_overlap_ratio``: chunk overlap as a ratio to chunk size\n",
    "- ``chunk_size_limit``: maximum chunk size to use"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "1f429667",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import PromptHelper\n",
    "\n",
    "prompt_helper = PromptHelper(\n",
    "  context_window=4096,\n",
    "  num_output=256,\n",
    "  chunk_overlap_ratio=0.1,\n",
    "  chunk_size_limit=None\n",
    ")"
   ]
  },
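  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "The re-packing behavior can be seen directly. The sketch below hands the prompt helper a few toy text chunks along with our QA template and shows how many packed chunks come back; it assumes the `repack` utility on `PromptHelper` and is only meant to illustrate the idea.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0e1f2a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: small chunks get merged so each LLM call uses as much of\n",
    "# the context window (minus prompt/output reservations) as possible.\n",
    "toy_chunks = [\"Llama 2 has a 4k token context window.\"] * 8\n",
    "packed = prompt_helper.repack(qa_template, toy_chunks)\n",
    "print(f\"{len(toy_chunks)} chunks repacked into {len(packed)} prompt-sized chunk(s)\")"
   ]
  },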
  {
   "cell_type": "markdown",
   "id": "ca97830a",
   "metadata": {},
   "source": [
    "### Step 5: Generate and Store Embeddings\n",
    "#### a) Generate Embeddings \n",
    "[Embeddings](https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#embeddings) for documents are created by vectorizing the document text; this vectorization captures the semantic meaning of the text. This allows you to quickly and efficiently find other pieces of text that are similar.\n",
    "\n",
    "When a user sends in their query, the query is also embedded using the same embedding model that was used to embed the documents. As explained earlier, this allows us to find documents similar (relevant) to the user's query.\n",
    "\n",
    "Like other sections in this notebook, we can easily take a LangChain embedding object and use it with LlamaIndex. We use the [LangchainEmbedding library](https://docs.llamaindex.ai/en/stable/api_reference/service_context/embeddings.html#langchainembedding), which acts as a wrapper around LangChain's embedding models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "0fa4c0fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.embeddings import HuggingFaceEmbeddings\n",
    "from llama_index.embeddings import LangchainEmbedding\n",
    "\n",
    "# Run the embedding model on CPU to conserve GPU memory.\n",
    "# In the production deployment (the API server shown in the 5th notebook), the model runs on GPU.\n",
    "model_name = \"intfloat/e5-large-v2\"\n",
    "model_kwargs = {\"device\": \"cpu\"}\n",
    "encode_kwargs = {\"normalize_embeddings\": False}\n",
    "hf_embeddings = HuggingFaceEmbeddings(\n",
    "    model_name=model_name,\n",
    "    model_kwargs=model_kwargs,\n",
    "    encode_kwargs=encode_kwargs,\n",
    ")\n",
    "# Wrap the LangChain embedding model for use with LlamaIndex\n",
    "embed_model = LangchainEmbedding(hf_embeddings)"
   ]
  },
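  {
   "cell_type": "markdown",
   "id": "e1f2a3b4",
   "metadata": {},
   "source": [
    "To illustrate what \"embedding a query\" means, the sketch below embeds a toy question and two toy passages with the same model and compares cosine similarities. The passages are made up; the example only assumes the `get_query_embedding`/`get_text_embedding` methods that LlamaIndex embedding models expose.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2a3b4c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: the query embedding is closest to the passage that is\n",
    "# semantically related to it, which is what vector retrieval relies on.\n",
    "import numpy as np\n",
    "\n",
    "def cosine(a, b):\n",
    "    a, b = np.asarray(a), np.asarray(b)\n",
    "    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "query_vec = embed_model.get_query_embedding(\"How long can Llama 2 prompts be?\")\n",
    "for passage in [\n",
    "    \"Llama 2 supports a context window of 4096 tokens.\",\n",
    "    \"The Eiffel Tower is located in Paris.\",\n",
    "]:\n",
    "    passage_vec = embed_model.get_text_embedding(passage)\n",
    "    print(f\"{cosine(query_vec, passage_vec):.3f}  {passage}\")"
   ]
  },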
  {
   "cell_type": "markdown",
   "id": "22aa461b",
   "metadata": {},
   "source": [
    "#### b) Store Embeddings \n",
    "\n",
    "LlamaIndex provides a supporting module, [`ServiceContext`](https://docs.llamaindex.ai/en/v0.10.19/api_reference/service_context.html), to bundle commonly used resources during the indexing and querying stages. In this example, we bundle the resources we've built: the LLM, the embedding model, the node parser, and the prompt helper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "4a11b80f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import ServiceContext\n",
    "\n",
    "service_context = ServiceContext.from_defaults(\n",
    "  llm=llm,\n",
    "  embed_model=embed_model,\n",
    "  node_parser=node_parser,\n",
    "  prompt_helper=prompt_helper\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c14162d7",
   "metadata": {},
   "source": [
    "Set the service context globally to avoid passing it into every LLM call."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "48d000dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import set_global_service_context\n",
    "\n",
    "set_global_service_context(service_context)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7584850f",
   "metadata": {},
   "source": [
    "<div class=\"alert alert-block alert-info\">\n",
    "    \n",
    "⚠️ In the deployment of this workflow, [Milvus](https://milvus.io/) is running as a vector database microservice.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "50b5fbfc",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import VectorStoreIndex\n",
    "from llama_index.storage.storage_context import StorageContext\n",
    "from llama_index.vector_stores import MilvusVectorStore\n",
    "\n",
    "# dim=1024 matches the embedding size of intfloat/e5-large-v2;\n",
    "# overwrite=False keeps any vectors already stored in the collection.\n",
    "vector_store = MilvusVectorStore(uri=\"http://milvus:19530\", dim=1024, overwrite=False)\n",
    "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n",
    "index = VectorStoreIndex.from_vector_store(vector_store)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6af82726",
   "metadata": {},
   "source": [
    "Let's load the documents into the vector database index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b49c4acf",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "# Split the documents into nodes and insert them (text + embeddings) into Milvus.\n",
    "start_time = time.time()\n",
    "nodes = node_parser.get_nodes_from_documents(documents)\n",
    "index.insert_nodes(nodes)\n",
    "print(f\"--- {time.time() - start_time} seconds ---\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "126cda61",
   "metadata": {},
   "source": [
    "### Step 6: Build the Query Engine and Stream the Response\n",
    "\n",
    "#### a) Build the Query Engine\n",
    "\n",
    "A query engine is an object that takes in a query and returns a response. Each index has a default corresponding query engine; for example, the default query engine for a vector index performs a standard top-k retrieval over the vector store.\n",
    "\n",
    "A query engine contains the following components (a lower-level construction from these components is sketched after the next cell):\n",
    "- Retriever\n",
    "- Node Postprocessor\n",
    "- Response Synthesizer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd24b951",
   "metadata": {},
   "outputs": [],
   "source": [
    "query_engine = index.as_query_engine(text_qa_template=qa_template, streaming=True)"
   ]
  },
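  {
   "cell_type": "markdown",
   "id": "a3b4c5d6",
   "metadata": {},
   "source": [
    "For reference, the same engine can be assembled explicitly from the components listed above. The cell below is a sketch of that lower-level construction using LlamaIndex's retriever/response-synthesizer APIs; the `similarity_top_k` value and the optional similarity cutoff are illustrative choices, not values taken from this workflow.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4c5d6e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: build a query engine from an explicit retriever, node postprocessor,\n",
    "# and response synthesizer instead of relying on index.as_query_engine().\n",
    "from llama_index import get_response_synthesizer\n",
    "from llama_index.retrievers import VectorIndexRetriever\n",
    "from llama_index.query_engine import RetrieverQueryEngine\n",
    "from llama_index.postprocessor import SimilarityPostprocessor\n",
    "\n",
    "retriever = VectorIndexRetriever(index=index, similarity_top_k=4)\n",
    "response_synthesizer = get_response_synthesizer(\n",
    "    text_qa_template=qa_template, streaming=True\n",
    ")\n",
    "explicit_query_engine = RetrieverQueryEngine(\n",
    "    retriever=retriever,\n",
    "    response_synthesizer=response_synthesizer,\n",
    "    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.5)],\n",
    ")"
   ]
  },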
  {
   "cell_type": "markdown",
   "id": "90b61943",
   "metadata": {},
   "source": [
    "#### b) Stream a Response from the Query Engine\n",
    "Lastly, we pass the query engine a user's question and stream the response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "97a018d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "start_time = time.time()\n",
    "response = query_engine.query(\"what is the context length of llama2?\")\n",
    "response.print_response_stream()\n",
    "print(f\"\\n--- {time.time() - start_time} seconds ---\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}