milvus-io_bootcamp / readthedocs_zilliz_langchain.ipynb
1
{
2
 "cells": [
3
  {
4
   "cell_type": "markdown",
5
   "id": "369c3444",
6
   "metadata": {},
7
   "source": [
8
    "# ReadtheDocs Retrieval Augmented Generation (RAG) using Zilliz Free Tier"
9
   ]
10
  },
11
  {
12
   "cell_type": "markdown",
13
   "id": "f6ffd11a",
14
   "metadata": {},
15
   "source": [
16
    "In this notebook, we are going to use Milvus documentation pages to create a chatbot about our product.  The chatbot is going to follow RAG steps to retrieve chunks of data using Semantic Vector Search, then the Question + Context will be fed as a Prompt to a LLM to generate an answer.\n",
17
    "\n",
18
    "Many RAG demos use OpenAI for the Embedding Model and ChatGPT for the Generative AI model.  **In this notebook, we will demo a fully open source RAG stack.**\n",
19
    "\n",
20
    "Using open-source Q&A with retrieval saves money since we make free calls to our own data almost all the time - retrieval, evaluation, and development iterations.  We only make a paid call to OpenAI once for the final chat generation step. \n",
21
    "\n",
22
    "<div>\n",
23
    "<img src=\"../../images/rag_image.png\" width=\"80%\"/>\n",
24
    "</div>\n",
25
    "\n",
26
    "Let's get started!"
27
   ]
28
  },
29
  {
30
   "cell_type": "code",
31
   "execution_count": 1,
32
   "id": "b2509fe9",
33
   "metadata": {},
34
   "outputs": [],
35
   "source": [
36
    "# For colab install these libraries in this order:\n",
37
    "# !python -m pip install torch transformers sentence-transformers langchain\n",
38
    "# !python -m pip install -U pymilvus 'pymilvus[model]'\n",
39
    "# !python -m pip install unstructured openai tqdm numpy ipykernel "
40
   ]
41
  },
42
  {
43
   "cell_type": "code",
44
   "execution_count": 2,
45
   "id": "d7570b2e",
46
   "metadata": {},
47
   "outputs": [],
48
   "source": [
49
    "# Import common libraries.\n",
50
    "import sys, os, time, pprint\n",
51
    "import numpy as np"
52
   ]
53
  },
54
  {
55
   "cell_type": "markdown",
56
   "id": "e059b674",
57
   "metadata": {},
58
   "source": [
59
    "## Download Milvus documentation.\n",
60
    "\n",
61
    "The data we’ll use is our own product documentation web pages.  ReadTheDocs is an open-source free software documentation hosting platform, where documentation is written with the Sphinx document generator.\n",
62
    "\n",
63
    "The code block below downloads the web pages into a local directory called `rtdocs`.  \n",
64
    "\n",
65
    "I've already uploaded the `rtdocs` data folder to github, so you should see it if you cloned my repo."
66
   ]
67
  },
68
  {
69
   "cell_type": "code",
70
   "execution_count": 3,
71
   "id": "25686cc7",
72
   "metadata": {},
73
   "outputs": [],
74
   "source": [
75
    "# UNCOMMENT TO DOWNLOAD THE DOCS.\n",
76
    "\n",
77
    "# # !pip install -U langchain\n",
78
    "# from langchain_community.document_loaders import RecursiveUrlLoader\n",
79
    "\n",
80
    "# DOCS_PAGE=\"https://milvus.io/docs/\"\n",
81
    "\n",
82
    "# loader = RecursiveUrlLoader(DOCS_PAGE)\n",
83
    "# docs = loader.load()\n",
84
    "\n",
85
    "# num_documents = len(docs)\n",
86
    "# print(f\"loaded {num_documents} documents\")"
87
   ]
88
  },
89
  {
90
   "cell_type": "code",
91
   "execution_count": 4,
92
   "id": "83b232dd",
93
   "metadata": {},
94
   "outputs": [
95
    {
96
     "name": "stdout",
97
     "output_type": "stream",
98
     "text": [
99
      "loaded 22 documents\n"
100
     ]
101
    }
102
   ],
103
   "source": [
104
    "# UNCOMMENT TO READ THE DOCS FROM A LOCAL DIRECTORY.\n",
105
    "\n",
106
    "# Read docs into LangChain\n",
107
    "# !pip install -U langchain\n",
108
    "# !pip install unstructured\n",
109
    "from langchain.document_loaders import DirectoryLoader\n",
110
    "\n",
111
    "# Load HTML files from a local directory\n",
112
    "path = \"rtdocs/\"\n",
113
    "loader = DirectoryLoader(path, glob='*.html')\n",
114
    "docs = loader.load()\n",
115
    "\n",
116
    "num_documents = len(docs)\n",
117
    "print(f\"loaded {num_documents} documents\")"
118
   ]
119
  },
120
  {
121
   "cell_type": "markdown",
122
   "id": "fb844837",
123
   "metadata": {},
124
   "source": [
125
    "## Start up Milvus running in local Docker (or Zilliz free tier)\n",
126
    "\n",
127
    ">⛔️ Make sure you pip install the correct version of pymilvus and server yml file.  **Versions (major and minor) should all match**.\n",
128
    "\n",
129
    "1. [Install Docker](https://docs.docker.com/get-docker/)\n",
130
    "2. Start your Docker Desktop\n",
131
    "3. Download the latest [docker-compose.yml](https://milvus.io/docs/install_standalone-docker.md#Download-the-YAML-file) (or run the wget command, replacing version to what you are using)\n",
132
    "> wget https://github.com/milvus-io/milvus/releases/download/v2.4.0-rc.1/milvus-standalone-docker-compose.yml -O docker-compose.yml\n",
133
    "4. From your terminal:  \n",
134
    "   - cd into directory where you saved the .yml file (usualy same dir as this notebook)\n",
135
    "   - docker compose up -d\n",
136
    "   - verify (either in terminal or on Docker Desktop) the containers are running\n",
137
    "5. From your code (see notebook code below):\n",
138
    "   - Import milvus\n",
139
    "   - Connect to the local milvus server"
140
   ]
141
  },
142
  {
143
   "cell_type": "code",
144
   "execution_count": 5,
145
   "id": "86786ab7",
146
   "metadata": {},
147
   "outputs": [
148
    {
149
     "name": "stdout",
150
     "output_type": "stream",
151
     "text": [
152
      "Pymilvus: 2.4.0\n",
153
      "v2.4.0-rc.1-dev\n"
154
     ]
155
    }
156
   ],
157
   "source": [
158
    "# STEP 1. CONNECT TO MILVUS STANDALONE DOCKER.\n",
159
    "\n",
160
    "import pymilvus, time\n",
161
    "from pymilvus import (\n",
162
    "    MilvusClient, utility, connections,\n",
163
    "    FieldSchema, CollectionSchema, DataType, IndexType,\n",
164
    "    Collection, AnnSearchRequest, RRFRanker, WeightedRanker\n",
165
    ")\n",
166
    "print(f\"Pymilvus: {pymilvus.__version__}\")\n",
167
    "\n",
168
    "# Connect to the local server.\n",
169
    "connection = connections.connect(\n",
170
    "  alias=\"default\", \n",
171
    "  host='localhost', # or '0.0.0.0' or 'localhost'\n",
172
    "  port='19530'\n",
173
    ")\n",
174
    "\n",
175
    "# Get server version.\n",
176
    "print(utility.get_server_version())\n",
177
    "\n",
178
    "# Use no-schema Milvus client uses flexible json key:value format.\n",
179
    "mc = MilvusClient(connections=connection)"
180
   ]
181
  },
182
  {
183
   "cell_type": "markdown",
184
   "id": "f9d758e4",
185
   "metadata": {},
186
   "source": [
187
    "# Optionally, use Zilliz free tier cluster\n",
188
    "To use fully-managed Milvus on [Ziliz Cloud free trial](https://cloud.zilliz.com/login).  \n",
189
    "  1. Choose the default \"Starter\" option when you provision > Create collection > Give it a name > Create cluster and collection.  \n",
190
    "  2. On the Cluster main page, copy your `API Key` and store it locally in a .env variable.  See note below how to do that.\n",
191
    "  3. Also on the Cluster main page, copy the `Public Endpoint URI`.\n",
192
    "\n",
193
    "💡 Note: To keep your tokens private, best practice is to use an **env variable**.  See [how to save api key in env variable](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety). <br>\n",
194
    "\n",
195
    "👉🏼 In Jupyter, you need a .env file (in same dir as notebooks) containing lines like this:\n",
196
    "- ZILLIZ_API_KEY=f370c...\n",
197
    "- OPENAI_API_KEY=sk-H...\n",
198
    "- VARIABLE_NAME=value..."
199
   ]
200
  },
201
  {
202
   "cell_type": "code",
203
   "execution_count": 6,
204
   "id": "0806d2db",
205
   "metadata": {},
206
   "outputs": [],
207
   "source": [
208
    "# # STEP 1. CONNECT TO ZILLIZ CLOUD\n",
209
    "\n",
210
    "# # !pip install pymilvus==2.3.7 #python sdk for milvus\n",
211
    "# import os\n",
212
    "# import pymilvus\n",
213
    "# print(f\"pymilvus version: {pymilvus.__version__}\")\n",
214
    "# from pymilvus import connections, utility\n",
215
    "# TOKEN = os.getenv(\"ZILLIZ_API_KEY\")\n",
216
    "\n",
217
    "# # Connect to Zilliz cloud using endpoint URI and API key TOKEN.\n",
218
    "# # TODO change this.\n",
219
    "# CLUSTER_ENDPOINT=\"https://in03-xxxx.api.gcp-us-west1.zillizcloud.com:443\"\n",
220
    "# CLUSTER_ENDPOINT=\"https://in03-48a5b11fae525c9.api.gcp-us-west1.zillizcloud.com:443\"\n",
221
    "# connections.connect(\n",
222
    "#   alias='default',\n",
223
    "#   #  Public endpoint obtained from Zilliz Cloud\n",
224
    "#   uri=CLUSTER_ENDPOINT,\n",
225
    "#   # API key or a colon-separated cluster username and password\n",
226
    "#   token=TOKEN,\n",
227
    "# )\n",
228
    "\n",
229
    "# # Use no-schema Milvus client uses flexible json key:value format.\n",
230
    "# # https://milvus.io/docs/using_milvusclient.md\n",
231
    "# mc = MilvusClient(\n",
232
    "#     uri=CLUSTER_ENDPOINT,\n",
233
    "#     # API key or a colon-separated cluster username and password\n",
234
    "#     token=TOKEN)\n",
235
    "\n",
236
    "# # Check if the server is ready and get colleciton name.\n",
237
    "# print(f\"Type of server: {utility.get_server_version()}\")"
238
   ]
239
  },
240
  {
241
   "cell_type": "markdown",
242
   "id": "f39af3fd",
243
   "metadata": {},
244
   "source": [
245
    "## Load the Embedding Model checkpoint and use it to create vector embeddings\n",
246
    "\n",
247
    "#### What are Embeddings?\n",
248
    "\n",
249
    "Check out [this blog](https://zilliz.com/glossary/vector-embeddings) for an introduction to embeddings.  \n",
250
    "\n",
251
    "An excellent place to start is by selecting an embedding model from the [HuggingFace MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard), sorted descending by the \"Retrieval Average'' column since this task is most relevant to RAG. Then, choose the smallest, highest-ranking embedding model. But, Beware!! some models listed are overfit to the training data, so they won't perform on your data as promised.  \n",
252
    "\n",
253
    "Milvus (and Zilliz) only supports tested embedding models that are not overfit."
254
   ]
255
  },
256
  {
257
   "cell_type": "markdown",
258
   "id": "b01d6622",
259
   "metadata": {},
260
   "source": [
261
    "#### In this notebook, we will use the **open-source BGE-M3** which supports: \n",
262
    "- over 100 languages\n",
263
    "- context lengths of up to 8192\n",
264
    "- multiple embedding inferences such as dense (semantic), sparse (lexical), and multi-vector Colbert reranking. \n",
265
    "\n",
266
    "BGE-M3 holds the distinction of being the first embedding model to offer support for all three retrieval methods, achieving state-of-the-art performance on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmark tests.  [Paper](https://arxiv.org/abs/2402.03216), [HuggingFace](https://huggingface.co/BAAI/bge-m3)\n",
267
    "\n",
268
    "**[Milvus](https://github.com/milvus-io/milvus)**, the world's first Open Source Vector Database, plays a vital role in semantic search with scaleable, efficient storage and search for GenerativeAI workflows. Its advanced functionalities include metadata filtering and hybrid search.  Since version 2.4, Milvus has built-in support for BGE M3.\n",
269
    "\n",
270
    "\n",
271
    "\n",
272
    "<div>\n",
273
    "<img src=\"../../images/bge_m3.png\" width=\"80%\"/>\n",
274
    "</div>"
275
   ]
276
  },
277
  {
278
   "cell_type": "code",
279
   "execution_count": 7,
280
   "id": "1805f966",
281
   "metadata": {},
282
   "outputs": [
283
    {
284
     "name": "stdout",
285
     "output_type": "stream",
286
     "text": [
287
      "device: cpu\n"
288
     ]
289
    },
290
    {
291
     "data": {
292
      "application/vnd.jupyter.widget-view+json": {
293
       "model_id": "5d650d0997c64fd2adc5b52fe85acdc8",
294
       "version_major": 2,
295
       "version_minor": 0
296
      },
297
      "text/plain": [
298
       "Fetching 30 files:   0%|          | 0/30 [00:00<?, ?it/s]"
299
      ]
300
     },
301
     "metadata": {},
302
     "output_type": "display_data"
303
    },
304
    {
305
     "name": "stdout",
306
     "output_type": "stream",
307
     "text": [
308
      "dense_dim: 1024\n",
309
      "sparse_dim: 250002\n",
310
      "colbert_dim: 1024\n"
311
     ]
312
    }
313
   ],
314
   "source": [
315
    "# STEP 2. DOWNLOAD AN OPEN SOURCE EMBEDDING MODEL.\n",
316
    "\n",
317
    "from pymilvus.model.hybrid import BGEM3EmbeddingFunction\n",
318
    "import torch\n",
319
    "\n",
320
    "# Initialize torch settings\n",
321
    "DEVICE = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')\n",
322
    "print(f\"device: {DEVICE}\")\n",
323
    "\n",
324
    "# Initialize a Milvus built-in sparse-dense-reranking encoder.\n",
325
    "# https://huggingface.co/BAAI/bge-m3\n",
326
    "embedding_model = BGEM3EmbeddingFunction(use_fp16=False, device=DEVICE)\n",
327
    "EMBEDDING_DIM = embedding_model.dim['dense']\n",
328
    "print(f\"dense_dim: {EMBEDDING_DIM}\")\n",
329
    "print(f\"sparse_dim: {embedding_model.dim['sparse']}\")\n",
330
    "print(f\"colbert_dim: {embedding_model.dim['colbert_vecs']}\")"
331
   ]
332
  },
333
  {
334
   "cell_type": "markdown",
335
   "metadata": {},
336
   "source": [
337
    "## Create a Milvus collection\n",
338
    "\n",
339
    "You can think of a collection in Milvus like a \"table\" in SQL databases.  The **collection** will contain the \n",
340
    "- **Schema** (or [no-schema Milvus client](https://milvus.io/docs/using_milvusclient.md)).  \n",
341
    "💡 You'll need the vector `EMBEDDING_DIM` parameter from your embedding model.\n",
342
    "Typical values are:\n",
343
    "   - 1024 for sbert embedding models\n",
344
    "   - 1536 for ada-002 OpenAI embedding models\n",
345
    "- **Vector index** for efficient vector search\n",
346
    "- **Vector distance metric** for measuring nearest neighbor vectors\n",
347
    "- **Consistency level**\n",
348
    "In Milvus, transactional consistency is possible; however, according to the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), some latency must be sacrificed. 💡 Searching movie reviews is not mission-critical, so [`eventually`](https://milvus.io/docs/consistency.md) consistent is fine here.\n",
349
    "\n",
350
    "## Add a Vector Index\n",
351
    "\n",
352
    "The vector index determines the vector **search algorithm** used to find the closest vectors in your data to the query a user submits.  \n",
353
    "\n",
354
    "Most vector indexes use different sets of parameters depending on whether the database is:\n",
355
    "- **inserting vectors** (creation mode) - vs - \n",
356
    "- **searching vectors** (search mode) \n",
357
    "\n",
358
    "Scroll down the [docs page](https://milvus.io/docs/index.md) to see a table listing different vector indexes available on Milvus.  For example:\n",
359
    "- FLAT - deterministic exhaustive search\n",
360
    "- IVF_FLAT or IVF_SQ8 - Hash index (stochastic approximate search)\n",
361
    "- HNSW - Graph index (stochastic approximate search)\n",
362
    "- AUTOINDEX - OSS or [Zilliz cloud](https://docs.zilliz.com/docs/autoindex-explained) automatic index based on type of GPU, size of data.\n",
363
    "\n",
364
    "Besides a search algorithm, we also need to specify a **distance metric**, that is, a definition of what is considered \"close\" in vector space.  In the cell below, the [`HNSW`](https://github.com/nmslib/hnswlib/blob/master/ALGO_PARAMS.md) search index is chosen.  Its possible distance metrics are one of:\n",
365
    "- L2 - L2-norm\n",
366
    "- IP - Dot-product\n",
367
    "- COSINE - Angular distance\n",
368
    "\n",
369
    "💡 Most use cases work better with normalized embeddings, in which case L2 is useless (every vector has length=1) and IP and COSINE are the same.  Only choose L2 if you plan to keep your embeddings unnormalized."
370
   ]
371
  },
372
  {
373
   "cell_type": "code",
374
   "execution_count": 8,
375
   "metadata": {},
376
   "outputs": [
377
    {
378
     "name": "stdout",
379
     "output_type": "stream",
380
     "text": [
381
      "Successfully dropped collection: `MilvusDocs`\n",
382
      "Successfully created collection: `MilvusDocs`\n"
383
     ]
384
    }
385
   ],
386
   "source": [
387
    "# STEP 3. CREATE A NO-SCHEMA MILVUS COLLECTION AND DEFINE THE DATABASE INDEX.\n",
388
    "# See docstrings for more information.\n",
389
    "# https://github.com/milvus-io/pymilvus/blob/master/examples/hello_hybrid_sparse_dense.py\n",
390
    "\n",
391
    "from pymilvus import MilvusClient\n",
392
    "\n",
393
    "# Set the Milvus collection name.\n",
394
    "COLLECTION_NAME = \"MilvusDocs\"\n",
395
    "\n",
396
    "# Specify the data schema for the new Collection.\n",
397
    "MAX_LENGTH = 65535\n",
398
    "fields = [\n",
399
    "    # Use auto generated id as primary key\n",
400
    "    FieldSchema(name=\"id\", dtype=DataType.INT64,\n",
401
    "                is_primary=True, auto_id=True, max_length=100),\n",
402
    "    FieldSchema(name=\"sparse_vector\", dtype=DataType.SPARSE_FLOAT_VECTOR),\n",
403
    "    FieldSchema(name=\"dense_vector\", dtype=DataType.FLOAT_VECTOR,\n",
404
    "                dim=EMBEDDING_DIM),\n",
405
    "    FieldSchema(name=\"chunk\", dtype=DataType.VARCHAR, max_length=MAX_LENGTH),\n",
406
    "    FieldSchema(name=\"source\", dtype=DataType.VARCHAR, max_length=MAX_LENGTH),\n",
407
    "    FieldSchema(name=\"h1\", dtype=DataType.VARCHAR, max_length=100),\n",
408
    "    FieldSchema(name=\"h2\", dtype=DataType.VARCHAR, max_length=MAX_LENGTH),\n",
409
    "]\n",
410
    "schema = CollectionSchema(fields, \"\")\n",
411
    "\n",
412
    "# Check if collection already exists, if so drop it.\n",
413
    "has = utility.has_collection(COLLECTION_NAME)\n",
414
    "if has:\n",
415
    "    drop_result = utility.drop_collection(COLLECTION_NAME)\n",
416
    "    print(f\"Successfully dropped collection: `{COLLECTION_NAME}`\")\n",
417
    "\n",
418
    "# Create the collection.\n",
419
    "schema = CollectionSchema(fields, \"\")\n",
420
    "col = Collection(COLLECTION_NAME, schema, consistency_level=\"Eventually\")\n",
421
    "\n",
422
    "# Add custom HNSW search index to the collection.\n",
423
    "# M = max number graph connections per layer. Large M = denser graph.\n",
424
    "# Choice of M: 4~64, larger M for larger data and larger embedding lengths.\n",
425
    "M = 16\n",
426
    "# efConstruction = num_candidate_nearest_neighbors per layer. \n",
427
    "# Use Rule of thumb: int. 8~512, efConstruction = M * 2.\n",
428
    "efConstruction = M * 2\n",
429
    "# Create the search index for local Milvus server.\n",
430
    "INDEX_PARAMS = dict({\n",
431
    "    'M': M,               \n",
432
    "    \"efConstruction\": efConstruction })\n",
433
    "\n",
434
    "# Create indices for the vector fields. \n",
435
    "# The indices will pre-load data into memory for efficient search.\n",
436
    "sparse_index = {\"index_type\": \"SPARSE_INVERTED_INDEX\", \"metric_type\": \"IP\"}\n",
437
    "dense_index = {\"index_type\": \"HNSW\", \"metric_type\": \"COSINE\", \"params\": INDEX_PARAMS}\n",
438
    "col.create_index(\"sparse_vector\", sparse_index)\n",
439
    "col.create_index(\"dense_vector\", dense_index)\n",
440
    "col.load()\n",
441
    "\n",
442
    "print(f\"Successfully created collection: `{COLLECTION_NAME}`\")\n",
443
    "# print(mc.describe_collection(COLLECTION_NAME))"
444
   ]
445
  },
446
  {
447
   "cell_type": "markdown",
448
   "id": "c60423a5",
449
   "metadata": {},
450
   "source": [
451
    "## Chunking\n",
452
    "\n",
453
    "Before embedding, it is necessary to decide your chunk strategy, chunk size, and chunk overlap.  In this demo, I will use:\n",
454
    "- **Strategy** = Use markdown header hierarchies.  Keep markdown sections together unless they are too long.\n",
455
    "- **Chunk size** = Use the embedding model's parameter `MAX_SEQ_LENGTH`\n",
456
    "- **Overlap** = Rule-of-thumb 10-15%\n",
457
    "- **Function** = \n",
458
    "  - Langchain's `HTMLHeaderTextSplitter` to split markdown sections.\n",
459
    "  - Langchain's `RecursiveCharacterTextSplitter` to split up long reviews recursively.\n",
460
    "\n",
461
    "\n",
462
    "Notice below, each chunk is grounded with the document source page.  <br>\n",
463
    "In addition, header titles are kept together with the chunk of markdown text."
464
   ]
465
  },
466
  {
467
   "cell_type": "code",
468
   "execution_count": 9,
469
   "metadata": {},
470
   "outputs": [
471
    {
472
     "name": "stdout",
473
     "output_type": "stream",
474
     "text": [
475
      "chunk_size: 512, chunk_overlap: 51.0\n",
476
      "chunking time: 0.028937101364135742\n",
477
      "docs: 22, split into: 22\n",
478
      "split into chunks: 304, type: list of <class 'langchain_core.documents.base.Document'>\n",
479
      "\n",
480
      "Looking at a sample chunk...\n",
481
      "Why Milvus Docs Tutorials Tools Blog Community Stars0 Try Managed Milvus FREE Search Home v2.4.x Abo\n",
482
      "{'h1': 'Why Milvus Docs Tutorials Tools Blog Community Stars0 Try Managed Milvus FREE Search Home v2.4.x Abo', 'source': 'rtdocs/quickstart.html'}\n"
483
     ]
484
    }
485
   ],
486
   "source": [
487
    "# # STEP 4. PREPARE DATA: CHUNK AND EMBED\n",
488
    "\n",
489
    "# !python -m pip install lxml\n",
490
    "from langchain_community.document_transformers import BeautifulSoupTransformer\n",
491
    "from langchain.text_splitter import HTMLHeaderTextSplitter, RecursiveCharacterTextSplitter\n",
492
    "\n",
493
    "# Define chunk size 512 and overlap 10% chunk_size.\n",
494
    "chunk_size = 512\n",
495
    "chunk_overlap = np.round(chunk_size * 0.10, 0)\n",
496
    "print(f\"chunk_size: {chunk_size}, chunk_overlap: {chunk_overlap}\")\n",
497
    "\n",
498
    "# Define the headers to split on for the HTMLHeaderTextSplitter\n",
499
    "headers_to_split_on = [\n",
500
    "    (\"h1\", \"Header 1\"),\n",
501
    "    (\"h2\", \"Header 2\"),\n",
502
    "]\n",
503
    "# Create an instance of the HTMLHeaderTextSplitter\n",
504
    "html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\n",
505
    "\n",
506
    "# Create an instance of the RecursiveCharacterTextSplitter\n",
507
    "child_splitter = RecursiveCharacterTextSplitter(\n",
508
    "    chunk_size = chunk_size,\n",
509
    "    chunk_overlap = chunk_overlap,\n",
510
    "    length_function = len,  # using built-in Python len function\n",
511
    ")\n",
512
    "\n",
513
    "# Split the HTML text using the HTMLHeaderTextSplitter.\n",
514
    "start_time = time.time()\n",
515
    "html_header_splits = []\n",
516
    "for doc in docs:\n",
517
    "    splits = html_splitter.split_text(doc.page_content)\n",
518
    "    for split in splits:\n",
519
    "        # Add the source URL and header values to the metadata\n",
520
    "        metadata = {}\n",
521
    "        new_text = split.page_content\n",
522
    "        for header_name, metadata_header_name in headers_to_split_on:\n",
523
    "            # Handle exception if h1 does not exist.\n",
524
    "            try:\n",
525
    "                header_value = new_text.split(\"¶ \")[0].strip()[:100]\n",
526
    "                metadata[header_name] = header_value\n",
527
    "            except:\n",
528
    "                break\n",
529
    "            # Handle exception if h2 does not exist.\n",
530
    "            try:\n",
531
    "                new_text = new_text.split(\"¶ \")[1].strip()[:50]\n",
532
    "            except:\n",
533
    "                break\n",
534
    "        split.metadata = {\n",
535
    "            **metadata,\n",
536
    "            \"source\": doc.metadata[\"source\"]\n",
537
    "        }\n",
538
    "        # Add the header to the text\n",
539
    "        split.page_content = split.page_content\n",
540
    "    html_header_splits.extend(splits)\n",
541
    "\n",
542
    "    # # TODO - Uncomment to save each doc.page_content as a local html file under OUTPUT_DIR.\n",
543
    "    # OUTPUT_DIR = \"output\"\n",
544
    "    # # Set filename to first 50 characters of h1 header.\n",
545
    "    # filename = doc.metadata[\"source\"].split(\"/\")[-1].split(\".\")[0][:50]\n",
546
    "    # with open(f\"{OUTPUT_DIR}/{filename}.html\", \"w\") as f:\n",
547
    "    #     f.write(doc.page_content)\n",
548
    "\n",
549
    "# Split the documents further into smaller, recursive chunks.\n",
550
    "chunks = child_splitter.split_documents(html_header_splits)\n",
551
    "\n",
552
    "end_time = time.time()\n",
553
    "print(f\"chunking time: {end_time - start_time}\")\n",
554
    "print(f\"docs: {len(docs)}, split into: {len(html_header_splits)}\")\n",
555
    "print(f\"split into chunks: {len(chunks)}, type: list of {type(chunks[0])}\") \n",
556
    "\n",
557
    "# Inspect a chunk.\n",
558
    "print()\n",
559
    "print(\"Looking at a sample chunk...\")\n",
560
    "print(chunks[0].page_content[:100])\n",
561
    "print(chunks[0].metadata)\n",
562
    "\n",
563
    "# # TODO - Uncomment to print child splits with their associated header metadata for debugging.\n",
564
    "# print()\n",
565
    "# for child in chunks:\n",
566
    "#     print(f\"Content: {child.page_content}\")\n",
567
    "#     print(f\"Metadata: {child.metadata}\")\n",
568
    "#     print()"
569
   ]
570
  },
571
  {
572
   "cell_type": "code",
573
   "execution_count": 10,
574
   "id": "512130a3",
575
   "metadata": {},
576
   "outputs": [
577
    {
578
     "name": "stdout",
579
     "output_type": "stream",
580
     "text": [
581
      "Why Milvus Docs Tutorials Tools Blog Community Stars0 Try Managed Milvus FREE Search Home v2.4.x Abo\n",
582
      "{'h1': 'Why Milvus Docs Tutorials Tools Blog Community Stars0 Try Managed Milvus FREE Search Home v2.4.x Abo', 'source': 'https://milvus.io/docs/quickstart.md'}\n"
583
     ]
584
    }
585
   ],
586
   "source": [
587
    "# Clean up the metadata urls\n",
588
    "for doc in chunks:\n",
589
    "    new_url = doc.metadata[\"source\"]\n",
590
    "    new_url = new_url.replace(\"rtdocs\", \"https://milvus.io/docs\")\n",
591
    "    new_url = new_url.replace(\".html\", \".md\")\n",
592
    "    doc.metadata.update({\"source\": new_url})\n",
593
    "\n",
594
    "print(chunks[0].page_content[:100])\n",
595
    "print(chunks[0].metadata)"
596
   ]
597
  },
598
  {
599
   "cell_type": "markdown",
600
   "id": "229daa80",
601
   "metadata": {},
602
   "source": [
603
    "Use the built-in Milvus BGE M3 embedding functions.  The output will be 2 vectors:\n",
604
    "- `embeddings['dense'][i]` is a list of numpy arrays, one per chunk. Milvus supports more than 1 dense embedding vector if desired, so i is the ith dense embedding vector.\n",
605
    "- `embeddings['sparse'][:, [i]]` is a scipy sparse matrix where each column represents a chunk."
606
   ]
607
  },
608
  {
609
   "cell_type": "code",
610
   "execution_count": 11,
611
   "id": "d223c6f1",
612
   "metadata": {},
613
   "outputs": [
614
    {
615
     "name": "stderr",
616
     "output_type": "stream",
617
     "text": [
618
      "Inference Embeddings: 100%|██████████| 19/19 [00:33<00:00,  1.79s/it]"
619
     ]
620
    },
621
    {
622
     "name": "stdout",
623
     "output_type": "stream",
624
     "text": [
625
      "Embedding time for 304 chunks: 34.03 seconds\n"
626
     ]
627
    },
628
    {
629
     "name": "stderr",
630
     "output_type": "stream",
631
     "text": [
632
      "\n"
633
     ]
634
    }
635
   ],
636
   "source": [
637
    "# STEP 5. TRANSFORM CHUNKS INTO VECTORS USING EMBEDDING MODEL INFERENCE.\n",
638
    "\n",
639
    "# BGEM3EmbeddingFunction input is docs as a list of strings.\n",
640
    "list_of_strings = [doc.page_content for doc in chunks if hasattr(doc, 'page_content')]\n",
641
    "\n",
642
    "# Embedding inference using the Milvus built-in sparse-dense-reranking encoder.\n",
643
    "start_time = time.time()\n",
644
    "embeddings = embedding_model(list_of_strings)\n",
645
    "end_time = time.time()\n",
646
    "\n",
647
    "print(f\"Embedding time for {len(list_of_strings)} chunks: \", end=\"\")\n",
648
    "print(f\"{np.round(end_time - start_time, 2)} seconds\")\n",
649
    "\n",
650
    "# Inference Embeddings: 100%|██████████| 19/19 [00:35<00:00,  1.86s/it]\n",
651
    "# Embedding time for 304 chunks: 35.74 seconds"
652
   ]
653
  },
654
  {
655
   "cell_type": "markdown",
656
   "id": "d9bd8153",
657
   "metadata": {},
658
   "source": [
659
    "## Insert data into Milvus\n",
660
    "\n",
661
    "For each original text chunk, we'll write the sextuplet (`chunk, h1, h2, source, dense_vector, sparse_vector`) into the database.\n",
662
    "\n",
663
    "<div>\n",
664
    "<img src=\"../../images/db_insert_sparse_dense.png\" width=\"80%\"/>\n",
665
    "</div>\n",
666
    "\n",
667
    "**The Milvus Client wrapper can only handle loading data from a list of dictionaries.**\n",
668
    "\n",
669
    "Otherwise, in general, Milvus supports loading data from:\n",
670
    "- pandas dataframes \n",
671
    "- list of dictionaries"
672
   ]
673
  },
674
  {
675
   "cell_type": "code",
676
   "execution_count": 12,
677
   "id": "79dd2299",
678
   "metadata": {},
679
   "outputs": [
680
    {
681
     "name": "stdout",
682
     "output_type": "stream",
683
     "text": [
684
      "304\n",
685
      "<class 'dict'> 6\n",
686
      "{'chunk': 'Why Milvus Docs Tutorials Tools Blog Community Stars0 Try Managed '\n",
687
      "          'Milvus FREE Search Home v2.4.x About Milvus Get '\n",
688
      "          'StartedPrerequisitesInstall MilvusInstall SDKsQuickstart Concepts '\n",
689
      "          'User Guide Embeddings Administration Guide Tools Integrations '\n",
690
      "          'Example Applications FAQs API reference Quickstart This guide '\n",
691
      "          'explains how to connect to your Milvus cluster and performs CRUD '\n",
692
      "          'operations in minutes Before you start You have installed Milvus '\n",
693
      "          'standalone or Milvus cluster. You have installed preferred SDKs. '\n",
694
      "          'You can',\n",
695
      " 'dense_vector': array([-0.01666467,  0.05284622, -0.05246124, ..., -0.0182556 ,\n",
696
      "        0.03670057, -0.00945159], dtype=float32),\n",
697
      " 'h1': 'Why Milvus Docs Tutorials Tools Blog Community Sta',\n",
698
      " 'h2': '',\n",
699
      " 'source': 'https://milvus.io/docs/quickstart.md',\n",
700
      " 'sparse_vector': <1x250002 sparse array of type '<class 'numpy.float32'>'\n",
701
      "\twith 63 stored elements in Compressed Sparse Row format>}\n"
702
     ]
703
    }
704
   ],
705
   "source": [
706
    "# STEP 6. INSERT CHUNK LIST INTO MILVUS OR ZILLIZ.\n",
707
    "\n",
708
    "# Create chunk_list and dict_list in a single loop\n",
709
    "dict_list = []\n",
710
    "for chunk, sparse, dense in zip(chunks, embeddings[\"sparse\"], embeddings[\"dense\"]):\n",
711
    "    # Assemble embedding vector, original text chunk, metadata.\n",
712
    "    chunk_dict = {\n",
713
    "        'chunk': chunk.page_content,\n",
714
    "        'h1': chunk.metadata.get('h1', \"\")[:50],\n",
715
    "        'h2': chunk.metadata.get('h2', \"\")[:50],\n",
716
    "        'source': chunk.metadata.get('source', \"\"),\n",
717
    "        'sparse_vector': sparse,\n",
718
    "        'dense_vector': dense\n",
719
    "    }\n",
720
    "    dict_list.append(chunk_dict)\n",
721
    "\n",
722
    "# TODO - Uncomment to inspect the first chunk and its metadata.\n",
723
    "print(len(dict_list))\n",
724
    "print(type(dict_list[0]), len(dict_list[0]))\n",
725
    "pprint.pprint(dict_list[0])"
726
   ]
727
  },
728
  {
729
   "cell_type": "code",
730
   "execution_count": 13,
731
   "id": "f3ac0d5c",
732
   "metadata": {},
733
   "outputs": [
734
    {
735
     "name": "stdout",
736
     "output_type": "stream",
737
     "text": [
738
      "Start inserting entities\n",
739
      "Milvus insert time for 304 vectors: 0.22 seconds\n"
740
     ]
741
    }
742
   ],
743
   "source": [
744
    "# Insert data into the Milvus collection.\n",
745
    "print(\"Start inserting entities\")\n",
746
    "start_time = time.time()\n",
747
    "col.insert(dict_list)\n",
748
    "\n",
749
    "end_time = time.time()\n",
750
    "print(f\"Milvus insert time for {len(dict_list)} vectors: \", end=\"\")\n",
751
    "print(f\"{np.round(end_time - start_time, 2)} seconds\")\n",
752
    "col.flush()"
753
   ]
754
  },
755
  {
756
   "cell_type": "markdown",
757
   "id": "cd834ae6",
758
   "metadata": {},
759
   "source": [
760
    "## Aside - example Milvus collection API calls\n",
761
    "https://milvus.io/docs/manage-collections.md#View-Collections\n",
762
    "\n",
763
    "Below are some common API calls for checking a collection.\n",
764
    "- `.num_entities`, flushes data and executes row count.\n",
765
    "- `.describe_collection()`, gives details about the schema, index, collection.\n",
766
    "- `.query()`, gives back selected data from the collection."
767
   ]
768
  },
769
  {
770
   "cell_type": "code",
771
   "execution_count": 14,
772
   "id": "aa628f3f",
773
   "metadata": {},
774
   "outputs": [
775
    {
776
     "name": "stdout",
777
     "output_type": "stream",
778
     "text": [
779
      "Count rows: 304\n",
780
      "timing: 0.0056 seconds\n",
781
      "\n",
782
      "{'aliases': [],\n",
783
      " 'auto_id': True,\n",
784
      " 'collection_id': 449197422121357429,\n",
785
      " 'collection_name': 'MilvusDocs',\n",
786
      " 'consistency_level': 3,\n",
787
      " 'description': '',\n",
788
      " 'enable_dynamic_field': False,\n",
789
      " 'fields': [{'auto_id': True,\n",
790
      "             'description': '',\n",
791
      "             'field_id': 100,\n",
792
      "             'is_primary': True,\n",
793
      "             'name': 'id',\n",
794
      "             'params': {},\n",
795
      "             'type': <DataType.INT64: 5>},\n",
796
      "            {'description': '',\n",
797
      "             'field_id': 101,\n",
798
      "             'name': 'sparse_vector',\n",
799
      "             'params': {},\n",
800
      "             'type': <DataType.SPARSE_FLOAT_VECTOR: 104>},\n",
801
      "            {'description': '',\n",
802
      "             'field_id': 102,\n",
803
      "             'name': 'dense_vector',\n",
804
      "             'params': {'dim': 1024},\n",
805
      "             'type': <DataType.FLOAT_VECTOR: 101>},\n",
806
      "            {'description': '',\n",
807
      "             'field_id': 103,\n",
808
      "             'name': 'chunk',\n",
809
      "             'params': {'max_length': 65535},\n",
810
      "             'type': <DataType.VARCHAR: 21>},\n",
811
      "            {'description': '',\n",
812
      "             'field_id': 104,\n",
813
      "             'name': 'source',\n",
814
      "             'params': {'max_length': 65535},\n",
815
      "             'type': <DataType.VARCHAR: 21>},\n",
816
      "            {'description': '',\n",
817
      "             'field_id': 105,\n",
818
      "             'name': 'h1',\n",
819
      "             'params': {'max_length': 100},\n",
820
      "             'type': <DataType.VARCHAR: 21>},\n",
821
      "            {'description': '',\n",
822
      "             'field_id': 106,\n",
823
      "             'name': 'h2',\n",
824
      "             'params': {'max_length': 65535},\n",
825
      "             'type': <DataType.VARCHAR: 21>}],\n",
826
      " 'num_partitions': 1,\n",
827
      " 'num_shards': 1,\n",
828
      " 'properties': {}}\n",
829
      "timing: 0.0031 seconds\n",
830
      "\n",
831
      "[{'count(*)': 304}]\n",
832
      "timing: 0.0096 seconds\n",
833
      "\n"
834
     ]
835
    }
836
   ],
837
   "source": [
838
    "# Example Milvus Collection utility API calls.\n",
839
    "# https://milvus.io/docs/manage-collections.md#View-Collections\n",
840
    "\n",
841
    "# # Count rows, incurs a call to .flush() first.\n",
842
    "start_time = time.time()\n",
843
    "print(f\"Count rows: {col.num_entities}\")\n",
844
    "end_time = time.time()\n",
845
    "print(f\"timing: {np.round(end_time - start_time, 4)} seconds\")\n",
846
    "print()\n",
847
    "\n",
848
    "# View collection info, incurs a call to .flush() first.\n",
849
    "start_time = time.time()\n",
850
    "pprint.pprint(mc.describe_collection(COLLECTION_NAME))\n",
851
    "end_time = time.time()\n",
852
    "print(f\"timing: {np.round(end_time - start_time, 4)} seconds\")\n",
853
    "print()\n",
854
    "\n",
855
    "# Count rows without incurring call to .flush().\n",
856
    "start_time = time.time()\n",
857
    "res = mc.query( collection_name=COLLECTION_NAME, \n",
858
    "               filter=\"\", \n",
859
    "               output_fields = [\"count(*)\"], )\n",
860
    "pprint.pprint(res)\n",
861
    "end_time = time.time()\n",
862
    "print(f\"timing: {np.round(end_time - start_time, 4)} seconds\")\n",
863
    "print()\n",
864
    "\n",
865
    "# View rows without incurring call to .flush().\n",
866
    "# Careful, this can be a lot of output.\n",
867
    "# OUTPUT_FIELDS = [\"id\", \"h1\", \"h2\", \"source\", \"chunk\"]\n",
868
    "# res = mc.query( collection_name=COLLECTION_NAME, \n",
869
    "#                filter=\"id <= 449197422118227014\", \n",
870
    "#                output_fields = OUTPUT_FIELDS, )\n",
871
    "# pprint.pprint(res)"
872
   ]
873
  },
874
  {
875
   "cell_type": "markdown",
876
   "id": "02c589ff",
877
   "metadata": {},
878
   "source": [
879
    "## Ask a question about your data\n",
880
    "\n",
881
    "So far in this demo notebook: \n",
882
    "1. Your custom data has been mapped into a vector embedding space\n",
883
    "2. Those vector embeddings have been saved into a vector database\n",
884
    "\n",
885
    "Next, you can ask a question about your custom data!\n",
886
    "\n",
887
    "💡 In LLM vocabulary:\n",
888
    "> **Query** is the generic term for user questions.  \n",
889
    "A query is a list of multiple individual questions, up to maybe 1000 different questions!\n",
890
    "\n",
891
    "> **Question** usually refers to a single user question.  \n",
892
    "In our example below, the user question is \"What is AUTOINDEX in Milvus Client?\"\n",
893
    "\n",
894
    "> **Semantic Search** = very fast search of the entire knowledge base to find the `TOP_K` documentation chunks with the closest embeddings to the user's query.\n",
895
    "\n",
896
    "💡 The same model should always be used for consistency for all the embeddings data and the query."
897
   ]
898
  },
899
  {
900
   "cell_type": "code",
901
   "execution_count": 15,
902
   "id": "5e7f41f4",
903
   "metadata": {},
904
   "outputs": [
905
    {
906
     "name": "stdout",
907
     "output_type": "stream",
908
     "text": [
909
      "query length: 75\n"
910
     ]
911
    }
912
   ],
913
   "source": [
914
    "# Define a sample question about your data.\n",
915
    "QUESTION1 = \"What do the parameters for HNSW mean?\"\n",
916
    "QUESTION2 = \"What are good default values for HNSW parameters with 25K vectors dim 1024?\"\n",
917
    "QUESTION3 = \"What is the default AUTOINDEX distance metric in Milvus Client?\"\n",
918
    "QUESTION4 = \"What does nlist mean in ivf_flat?\"\n",
919
    "\n",
920
    "# In case you want to ask all the questions at once.\n",
921
    "QUERY = [QUESTION1, QUESTION2, QUESTION3, QUESTION4]\n",
922
    "\n",
923
    "# Inspect the length of one question.\n",
924
    "QUERY_LENGTH = len(QUESTION2)\n",
925
    "print(f\"query length: {QUERY_LENGTH}\")"
926
   ]
927
  },
928
  {
929
   "cell_type": "code",
930
   "execution_count": 16,
931
   "metadata": {},
932
   "outputs": [],
933
   "source": [
934
    "# SELECT A PARTICULAR QUESTION TO ASK.\n",
935
    "\n",
936
    "SAMPLE_QUESTION = QUESTION1"
937
   ]
938
  },
939
  {
940
   "cell_type": "markdown",
941
   "id": "9ea29411",
942
   "metadata": {},
943
   "source": [
944
    "## Execute a vector search\n",
945
    "\n",
946
    "Search Milvus using [PyMilvus API](https://milvus.io/docs/search.md).\n",
947
    "\n",
948
    "💡 By their nature, vector searches are \"semantic\" searches.  For example, if you were to search for \"leaky faucet\": \n",
949
    "> **Traditional Key-word Search** - either or both words \"leaky\", \"faucet\" would have to match some text in order to return a web page or link text to the document.\n",
950
    "\n",
951
    "> **Semantic search** - results containing words \"drippy\" \"taps\" would be returned as well because these words mean the same thing even though they are different words."
952
   ]
953
  },
954
  {
955
   "cell_type": "code",
956
   "execution_count": 17,
957
   "id": "2bcf6cdc",
958
   "metadata": {},
959
   "outputs": [
960
    {
961
     "name": "stdout",
962
     "output_type": "stream",
963
     "text": [
964
      "Milvus Client search time for 304 vectors: 0.012432098388671875 seconds\n",
965
      "type: <class 'pymilvus.client.abstract.Hits'>, count: 2\n"
966
     ]
967
    }
968
   ],
969
   "source": [
970
    "# STEP 7. RETRIEVE ANSWERS FROM YOUR DOCUMENTS STORED IN MILVUS OR ZILLIZ.\n",
971
    "\n",
972
    "# Load the index into memory for search.\n",
973
    "col.load()\n",
974
    "\n",
975
    "# Embed the question using the same encoder.\n",
976
    "query_embeddings = embedding_model([SAMPLE_QUESTION])\n",
977
    "TOP_K = 2\n",
978
    "\n",
979
    "# Return top k results with HNSW index.\n",
980
    "SEARCH_PARAMS = dict({\n",
981
    "    # Re-use index param for num_candidate_nearest_neighbors.\n",
982
    "    \"ef\": INDEX_PARAMS['efConstruction']\n",
983
    "    })\n",
984
    "\n",
985
    "# Prepare the search requests for both vector fields\n",
986
    "sparse_search_params = {\"metric_type\": \"IP\"}\n",
987
    "sparse_req = AnnSearchRequest(\n",
988
    "                query_embeddings[\"sparse\"],\n",
989
    "                \"sparse_vector\", sparse_search_params, limit=TOP_K)\n",
990
    "\n",
991
    "dense_search_params = {\"metric_type\": \"COSINE\"}\n",
992
    "dense_search_params.update(SEARCH_PARAMS)\n",
993
    "dense_req = AnnSearchRequest(\n",
994
    "                query_embeddings[\"dense\"],\n",
995
    "                \"dense_vector\", dense_search_params, limit=TOP_K)\n",
996
    "\n",
997
    "# Define output fields to return.\n",
998
    "OUTPUT_FIELDS = [\"id\", \"h1\", \"h2\", \"source\", \"chunk\"]\n",
999
    "\n",
1000
    "# Run semantic vector search using your query and the vector database.\n",
1001
    "start_time = time.time()\n",
1002
    "# Use the reranker.\n",
1003
    "results = col.hybrid_search([\n",
1004
    "            sparse_req, dense_req], rerank=RRFRanker(),\n",
1005
    "            limit=TOP_K, output_fields=OUTPUT_FIELDS)\n",
1006
    "# # No reranking.\n",
1007
    "# results = col.hybrid_search([\n",
1008
    "#             sparse_req, dense_req], rerank=WeightedRanker(0.5, 0.5),\n",
1009
    "#             limit=TOP_K, output_fields=OUTPUT_FIELDS)\n",
1010
    "\n",
1011
    "elapsed_time = time.time() - start_time\n",
1012
    "print(f\"Milvus Client search time for {len(dict_list)} vectors: {elapsed_time} seconds\")\n",
1013
    "\n",
1014
    "# Inspect search result.\n",
1015
    "print(f\"type: {type(results[0])}, count: {len(results[0])}\")\n",
1016
    "\n",
1017
    "# Currently Milvus only support 1 query in the same hybrid search request, so\n",
1018
    "# we inspect res[0] directly. In future release Milvus will accept batch\n",
1019
    "# hybrid search queries in the same call.\n",
1020
    "results = results[0]\n",
1021
    "\n",
1022
    "# Milvus Client search time for 304 vectors: 0.02100086212158203 seconds\n",
1023
    "# type: <class 'pymilvus.client.abstract.Hits'>, count: 2"
1024
   ]
1025
  },
1026
  {
1027
   "cell_type": "markdown",
1028
   "metadata": {},
1029
   "source": [
1030
    "## Assemble and inspect the search result\n",
1031
    "\n",
1032
    "The search result is in the variable `results[0]` consisting of top_k-count of objects of type `'pymilvus.client.abstract.Hits'`\n",
1033
    "\n"
1034
   ]
1035
  },
1036
  {
1037
   "cell_type": "code",
1038
   "execution_count": 18,
1039
   "metadata": {},
1040
   "outputs": [
1041
    {
1042
     "name": "stdout",
1043
     "output_type": "stream",
1044
     "text": [
1045
      "Retrieved result #449197422118233005\n",
1046
      "distance = 0.032522473484277725\n",
1047
      "\n",
1048
      "Retrieved result #449197422118233006\n",
1049
      "distance = 0.032522473484277725\n",
1050
      "\n"
1051
     ]
1052
    }
1053
   ],
1054
   "source": [
1055
    "# Assemble retrieved context and context metadata.\n",
1056
    "METADATA_FIELDS = [f for f in OUTPUT_FIELDS if f != 'chunk']\n",
1057
    "\n",
1058
    "# Assemble retrieved ids, distances, contexts, sources, and metadata.\n",
1059
    "ids = []\n",
1060
    "distances = []\n",
1061
    "contexts = []\n",
1062
    "sources = []\n",
1063
    "metas = []\n",
1064
    "for i in range(len(results)):\n",
1065
    "    print(f\"Retrieved result #{results[i].id}\")\n",
1066
    "    ids.append(results[i].id)\n",
1067
    "    print(f\"distance = {results[i].distance}\")\n",
1068
    "    distances.append(results[i].distance)\n",
1069
    "    # print(f\"Context: {results[i].entity.chunk[:150]}\")\n",
1070
    "    contexts.append(results[i].entity.chunk)\n",
1071
    "    for j in METADATA_FIELDS:\n",
1072
    "        if hasattr(results[i].entity, j):\n",
1073
    "            meta_dict = {j: getattr(results[i].entity, j)}\n",
1074
    "            metas.append(meta_dict)\n",
1075
    "            if j == \"source\":\n",
1076
    "                # print(f\"{j}: {getattr(results[i].entity, j)}\")\n",
1077
    "                sources.append(getattr(results[i].entity, j))\n",
1078
    "    print()\n",
1079
    "\n",
1080
    "# Keep results in a list of tuples.\n",
1081
    "formatted_results = list(zip(ids, distances, contexts, sources, metas)) \n",
1082
    "# pprint.pprint(formatted_results)\n",
1083
    "\n",
1084
    "# Reranking: I only see differences in positions 7-10, when k=10.\n",
1085
    "# Rerank = True\n",
1086
    "# Retrieved result #449197422118225569\n",
1087
    "# distance = 0.032522473484277725\n",
1088
    "\n",
1089
    "# Retrieved result #449197422118225570\n",
1090
    "# distance = 0.032522473484277725\n",
1091
    "\n",
1092
    "# Retrieved result #449197422118225710\n",
1093
    "# distance = 0.03079839050769806\n",
1094
    "\n",
1095
    "# Retrieved result #449197422118225553\n",
1096
    "# distance = 0.03077651560306549\n",
1097
    "\n",
1098
    "# Retrieved result #449197422118225771\n",
1099
    "# distance = 0.03077651560306549\n",
1100
    "\n",
1101
    "# Retrieved result #449197422118225565\n",
1102
    "# distance = 0.03030998818576336\n",
1103
    "\n",
1104
    "# Retrieved result #449197422118225604\n",
1105
    "# distance = 0.01587301678955555\n",
1106
    "\n",
1107
    "# Retrieved result #449197422118225631\n",
1108
    "# distance = 0.015384615398943424\n",
1109
    "\n",
1110
    "# Retrieved result #449197422118225568\n",
1111
    "\n",
1112
    "# Rerank = False\n",
1113
    "# Retrieved result #449197422118225569\n",
1114
    "# distance = 0.39207690954208374\n",
1115
    "\n",
1116
    "# Retrieved result #449197422118225570\n",
1117
    "# distance = 0.36890196800231934\n",
1118
    "\n",
1119
    "# Retrieved result #449197422118225710\n",
1120
    "# distance = 0.25794607400894165\n",
1121
    "\n",
1122
    "# Retrieved result #449197422118225553\n",
1123
    "# distance = 0.25781089067459106\n",
1124
    "\n",
1125
    "# Retrieved result #449197422118225771\n",
1126
    "# distance = 0.25772351026535034\n",
1127
    "\n",
1128
    "# Retrieved result #449197422118225565\n",
1129
    "# distance = 0.2537841796875\n",
1130
    "\n",
1131
    "# Retrieved result #449197422118225604\n",
1132
    "# distance = 0.2157345414161682\n",
1133
    "\n",
1134
    "# Retrieved result #449197422118225575\n",
1135
    "# distance = 0.2065277099609375\n",
1136
    "\n",
1137
    "# Retrieved result #449197422118225562\n"
1138
   ]
1139
  },
1140
  {
1141
   "cell_type": "markdown",
1142
   "id": "bd6060ce",
1143
   "metadata": {},
1144
   "source": [
1145
    "## Use an LLM to Generate a chat response to the user's question using the Retrieved Context.\n",
1146
    "\n",
1147
    "Many different generative LLMs exist these days.  Check out the lmsys [leaderboard](https://chat.lmsys.org/?leaderboard).\n",
1148
    "\n",
1149
    "In this notebook, we'll try these LLMs:\n",
1150
    "- The newly released open-source Llama 3 from Meta.\n",
1151
    "- The cheapest, paid model from Anthropic Claude3 Haiku.\n",
1152
    "- The standard in its price cateogory, gpt-3.5-turbo, from Openai."
1153
   ]
1154
  },
1155
  {
1156
   "cell_type": "code",
1157
   "execution_count": 34,
1158
   "id": "eb4c323f",
1159
   "metadata": {},
1160
   "outputs": [
1161
    {
1162
     "name": "stdout",
1163
     "output_type": "stream",
1164
     "text": [
1165
      "Length long text to summarize: 1017\n"
1166
     ]
1167
    }
1168
   ],
1169
   "source": [
1170
    "# STEP 8. LLM-GENERATED ANSWER TO THE QUESTION, GROUNDED BY RETRIEVED CONTEXT.\n",
1171
    "\n",
1172
    "# Separate all the context together by space.\n",
1173
    "contexts_combined = ' '.join(reversed(contexts))\n",
1174
    "# Separate all the sources together by comma.\n",
1175
    "source_combined = ' '.join(sources)\n",
1176
    "print(f\"Length long text to summarize: {len(contexts_combined)}\")\n"
1177
   ]
1178
  },
1179
  {
1180
   "cell_type": "markdown",
1181
   "id": "11fb35aa",
1182
   "metadata": {},
1183
   "source": [
1184
    "# Try Llama3 with Ollama to generate a human-like chat response to the user's question\n",
1185
    "\n",
1186
    "Follow the instructions to install ollama and pull a model.<br>\n",
1187
    "https://github.com/ollama/ollama\n",
1188
    "\n",
1189
    "View details about which models are supported by ollama. <br>\n",
1190
    "https://ollama.com/library/llama3\n",
1191
    "\n",
1192
    "That page says `ollama run llama3` will by default pull the latest \"instruct\" model, which is fine-tuned for chat/dialogue use cases.\n",
1193
    "\n",
1194
    "The other kind of llama3 models are \"pre-trained\" base model. <br>\n",
1195
    "Example: ollama run llama3:text ollama run llama3:70b-text\n",
1196
    "\n",
1197
    "**Format** `gguf` means the model runs on CPU.  gg = \"Georgi Gerganov\", creator of the C library model format ggml, which was recently changed to gguf.\n",
1198
    "\n",
1199
    "**Quantization** (think of it like vector compaction) can lead to higher throughput at the expense of lower accuracy.  For the curious, quantization meanings can be found on: <br>\n",
1200
    "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/tree/main.  \n",
1201
    "\n",
1202
    "Below just listing the main quantization types.\n",
1203
    "- **q4_0**: Original quant method, 4-bit.\n",
1204
    "- **q4_k_m**: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K\n",
1205
    "- **q5_0**: Higher accuracy, higher resource usage and slower inference.\n",
1206
    "- **q5_k_m**: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K\n",
1207
    "- **q 6_k**: Uses Q8_K for all tensors\n",
1208
    "- **q8_0**: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users."
1209
   ]
1210
  },
1211
  {
1212
   "cell_type": "code",
1213
   "execution_count": 20,
1214
   "id": "0edc67e3",
1215
   "metadata": {},
1216
   "outputs": [
1217
    {
1218
     "name": "stdout",
1219
     "output_type": "stream",
1220
     "text": [
1221
      "MODEL:llama3:latest, FORMAT:gguf, PARAMETER_SIZE:8B, QUANTIZATION_LEVEL:Q4_0, \n",
1222
      "\n"
1223
     ]
1224
    }
1225
   ],
1226
   "source": [
1227
    "# !python -m pip install ollama\n",
1228
    "import ollama\n",
1229
    "\n",
1230
    "# Verify details which model you are running.\n",
1231
    "ollama_llama3 = ollama.list()['models'][0]\n",
1232
    "\n",
1233
    "# Print the model details.\n",
1234
    "keys = ['format', 'parameter_size', 'quantization_level']\n",
1235
    "print(f\"MODEL:{ollama.list()['models'][0]['name']}\", end=\", \")\n",
1236
    "for key in keys:\n",
1237
    "    print(f\"{str.upper(key)}:{ollama.list()['models'][0]['details'].get(key, 'Key not found in dictionary')}\", end=\", \")\n",
1238
    "print(end=\"\\n\\n\")"
1239
   ]
1240
  },
1241
  {
1242
   "cell_type": "code",
1243
   "execution_count": 50,
1244
   "id": "1c900282",
1245
   "metadata": {},
1246
   "outputs": [],
1247
   "source": [
1248
    "SYSTEM_PROMPT = f\"\"\"Given the provided context, your task is to \n",
1249
    "understand the content and accurately answer the question based \n",
1250
    "on the information available in the context.  \n",
1251
    "Provide a complete, clear, concise, relevant response in fewer\n",
1252
    "than 3 sentences and cite the unique sources.\n",
1253
    "Sources: {source_combined}\n",
1254
    "Context: {contexts_combined}\n",
1255
    "\"\"\""
1256
   ]
1257
  },
1258
  {
1259
   "cell_type": "code",
1260
   "execution_count": 51,
1261
   "id": "76042c9a",
1262
   "metadata": {},
1263
   "outputs": [
1264
    {
1265
     "name": "stdout",
1266
     "output_type": "stream",
1267
     "text": [
1268
      "('According to the provided context, the parameters for HNSW (Hierarchical '\n",
1269
      " 'Navigable Small World Graph) are:  * `M`: The maximum degree of nodes on '\n",
1270
      " 'each layer of the graph. This value can improve recall rate at the cost of '\n",
1271
      " 'increased search time. * `ef` (in construction or searching targets): A '\n",
1272
      " 'search range parameter that can be used to specify the search scope.  In '\n",
1273
      " 'simpler terms, these parameters help control how efficiently HNSW searches '\n",
1274
      " 'for nearest neighbors in a dataset. By adjusting these values, you can '\n",
1275
      " 'balance the trade-off between search accuracy and speed.')\n"
1276
     ]
1277
    }
1278
   ],
1279
   "source": [
1280
    "# Send the question to llama 3 chat.\n",
1281
    "response = ollama.chat(\n",
1282
    "    messages=[\n",
1283
    "        {\"role\": \"system\", \"content\": SYSTEM_PROMPT,},\n",
1284
    "        {\"role\": \"user\", \"content\": f\"question: {SAMPLE_QUESTION}\",}\n",
1285
    "    ],\n",
1286
    "    model='llama3',\n",
1287
    ")\n",
1288
    "pprint.pprint(response['message']['content'].replace('\\n', ' '))"
1289
   ]
1290
  },
1291
  {
1292
   "cell_type": "markdown",
1293
   "id": "4fd2b2dd",
1294
   "metadata": {},
1295
   "source": [
1296
    "## Use Anthropic to generate a human-like chat response to the user's question \n",
1297
    "\n",
1298
    "We've practiced retrieval for free on our own data using open-source LLMs.  <br>\n",
1299
    "\n",
1300
    "Now let's make a call to the paid Claude3. [List of models](https://docs.anthropic.com/claude/docs/models-overview)\n",
1301
    "- Opus - most expensive\n",
1302
    "- Sonnet\n",
1303
    "- Haiku - least expensive!\n",
1304
    "\n",
1305
    "Prompt engineering tutorials\n",
1306
    "- [Interactive](https://docs.google.com/spreadsheets/d/19jzLgRruG9kjUQNKtCg1ZjdD6l6weA6qRXG5zLIAhC8/edit#gid=150872633)\n",
1307
    "- [Static](https://docs.google.com/spreadsheets/d/1jIxjzUWG-6xBVIa2ay6yDpLyeuOh_hR_ZB75a47KX_E/edit#gid=869808629)"
1308
   ]
1309
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "edf66e04",
   "metadata": {},
   "outputs": [],
   "source": [
    "SYSTEM_PROMPT = f\"\"\"Use the Context below to answer the user's question. \n",
    "Be clear, factual, complete, concise.\n",
    "If the answer is not in the Context, say \"I don't know\". \n",
    "Otherwise answer with fewer than 3 sentences and cite the grounding sources.\n",
    "Context: {contexts_combined}\n",
    "Sources: {source_combined}\n",
    "\n",
    "Answer with 2 parts: the answer and the source citations.\n",
    "Answer: The answer to the question.\n",
    "Sources: only the unique grounding source URLs.\n",
    "\"\"\""
   ]
  },
1330
  {
1331
   "cell_type": "code",
1332
   "execution_count": 24,
1333
   "id": "c87b8428",
1334
   "metadata": {},
1335
   "outputs": [
1336
    {
1337
     "name": "stdout",
1338
     "output_type": "stream",
1339
     "text": [
1340
      "Model: claude-3-haiku-20240307\n",
1341
      "\n",
1342
      "Question: What do the parameters for HNSW mean?\n"
1343
     ]
1344
    }
1345
   ],
1346
   "source": [
1347
    "# !python -m pip install anthropic\n",
1348
    "import anthropic\n",
1349
    "\n",
1350
    "ANTHROPIC_API_KEY=os.environ.get(\"ANTHROPIC_API_KEY\")\n",
1351
    "\n",
1352
    "# # Model names\n",
1353
    "# claude-3-opus-20240229\n",
1354
    "# claude-3-sonnet-20240229\n",
1355
    "# claude-3-haiku-20240307\n",
1356
    "CLAUDE_MODEL = \"claude-3-haiku-20240307\"\n",
1357
    "print(f\"Model: {CLAUDE_MODEL}\")\n",
1358
    "print()\n",
1359
    "\n",
1360
    "client = anthropic.Anthropic(\n",
1361
    "    # defaults to os.environ.get(\"ANTHROPIC_API_KEY\")\n",
1362
    "    api_key=ANTHROPIC_API_KEY,\n",
1363
    ")\n",
1364
    "\n",
1365
    "# Print the question and answer along with grounding sources and citations.\n",
1366
    "print(f\"Question: {SAMPLE_QUESTION}\")\n",
1367
    "\n",
1368
    "# # CAREFUL!! THIS COSTS MONEY!!\n",
1369
    "# message = client.messages.create(\n",
1370
    "#     model=CLAUDE_MODEL,\n",
1371
    "#     max_tokens=1000,\n",
1372
    "#     temperature=0.0,\n",
1373
    "#     system=SYSTEM_PROMPT,\n",
1374
    "#     messages=[\n",
1375
    "#         {\"role\": \"user\", \"content\": SAMPLE_QUESTION}\n",
1376
    "#     ]\n",
1377
    "# )\n",
1378
    "# print(\"Answer:\")\n",
1379
    "# pprint.pprint(message.content[0].text.replace('\\n', ' '))"
1380
   ]
1381
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "5c1c9758",
   "metadata": {},
   "outputs": [],
   "source": [
    "# # model=\"claude-3-haiku-20240307\"\n",
    "\n",
    "# # Question: What do the parameters for HNSW mean?\n",
    "# # Answer:\n",
    "# ('The parameters for HNSW (Hierarchical Navigable Small World Graph) are:  1. '\n",
    "#  'M: This is the maximum degree of the nodes in the graph. It controls the '\n",
    "#  'sparsity of the upper layers and the density of the lower layers. The range '\n",
    "#  'for M is (2, 2048).  2. efConstruction: This parameter specifies the search '\n",
    "#  'range when building the index. It affects the recall rate and search time - '\n",
    "#  'a higher value can improve recall at the cost of increased search time.  3. '\n",
    "#  'ef: This parameter specifies the search range when searching for targets. '\n",
    "#  'Similar to efConstruction, a higher value can improve recall but increase '\n",
    "#  'search time.  Sources: [1] https://milvus.io/docs/index.md')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "17f138a4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# # model=\"claude-3-sonnet-20240229\"\n",
    "\n",
    "# # Question: What do the parameters for HNSW mean?\n",
    "# # Answer:\n",
    "# ('The parameters M and ef/efConstruction control the behavior of the HNSW '\n",
    "#  '(Hierarchical Navigable Small World) algorithm used for indexing and '\n",
    "#  'searching.  M specifies the maximum number of connections (edges) that each '\n",
    "#  'node in the HNSW graph can have. A higher M value allows more connections, '\n",
    "#  'which can improve recall rate (finding more relevant results) but increases '\n",
    "#  'search time.  ef and efConstruction determine how many nodes in each layer '\n",
    "#  'of the HNSW graph should be explored during searching and index construction '\n",
    "#  'respectively. Higher values increase the search range and can improve '\n",
    "#  'accuracy, but also increase computation time.  Sources: [1] '\n",
    "#  'https://milvus.io/docs/index.md [2] https://milvus.io/docs/index.md')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "704d7900",
   "metadata": {},
   "outputs": [],
   "source": [
    "# # model=\"claude-3-opus-20240229\"\n",
    "\n",
    "# # Question: What do the parameters for HNSW mean?\n",
    "# # Answer:\n",
    "# ('According to the context, the HNSW algorithm has two key parameters:  1. M: '\n",
    "#  'The maximum degree of the node on each layer of the graph, which can be set '\n",
    "#  'between 2 and 2048. [1]  2. efConstruction (when building index) or ef (when '\n",
    "#  'searching targets): These parameters specify the search range to improve '\n",
    "#  'performance. [1]  Grounding sources: [1] https://milvus.io/docs/index.md')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e172726",
   "metadata": {},
   "source": [
    "## Try OpenAI to generate a human-like chat response to the user's question \n",
    "\n",
    "We've practiced retrieval for free on our own data using open-source LLMs.  <br>\n",
    "\n",
    "Now let's make a call to the paid OpenAI GPT.\n",
    "\n",
    "💡 Note: For use cases that must always stay factually grounded, use very low temperature values; more creative tasks can benefit from higher temperatures."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "426d87d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "SYSTEM_PROMPT = f\"\"\"Use the Context below to answer the user's question. \n",
    "Be clear, factual, complete, concise.\n",
    "If the answer is not in the Context, say \"I don't know\". \n",
    "Otherwise answer with fewer than 4 sentences and cite the grounding sources.\n",
    "Context: {contexts_combined}\n",
    "Answer: The answer to the question.\n",
    "Grounding sources: {source_combined}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "76a62feb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Question: What do the parameters for HNSW mean?\n",
      "('Answer: The parameters for HNSW are as follows:\\n'\n",
      " '- M: Maximum degree of the node, limiting the number of connections each '\n",
      " 'node can have on each layer of the graph. It ranges from 2 to 2048.\\n'\n",
      " '- efConstruction: Used during index building to specify a search range.\\n'\n",
      " '- ef: Used when searching for targets to specify a search range.')\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# CAREFUL!! THIS COSTS MONEY!!\n",
    "import openai, pprint\n",
    "from openai import OpenAI\n",
    "\n",
    "# Define the generation LLM model to use.\n",
    "# https://openai.com/blog/new-embedding-models-and-api-updates\n",
    "# Customers using the pinned gpt-3.5-turbo model alias will be automatically upgraded to gpt-3.5-turbo-0125 two weeks after this model launches.\n",
    "LLM_NAME = \"gpt-3.5-turbo\"\n",
    "TEMPERATURE = 0.1\n",
    "RANDOM_SEED = 415\n",
    "\n",
    "# See how to save the api key in an env variable.\n",
    "# https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety\n",
    "openai_client = OpenAI(\n",
    "    # This is the default and can be omitted\n",
    "    api_key=os.environ.get(\"OPENAI_API_KEY\"),\n",
    ")\n",
    "\n",
    "# Generate response using the OpenAI API.\n",
    "response = openai_client.chat.completions.create(\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": SYSTEM_PROMPT,},\n",
    "        {\"role\": \"user\", \"content\": f\"question: {SAMPLE_QUESTION}\",}\n",
    "    ],\n",
    "    model=LLM_NAME,\n",
    "    temperature=TEMPERATURE,\n",
    "    seed=RANDOM_SEED,\n",
    "    frequency_penalty=2,\n",
    ")\n",
    "\n",
    "# Print the question and answer along with grounding sources and citations.\n",
    "print(f\"Question: {SAMPLE_QUESTION}\")\n",
    "\n",
    "# Print all answers in the response.\n",
    "for i, choice in enumerate(response.choices, 1):\n",
    "    pprint.pprint(f\"Answer: {choice.message.content}\")\n",
    "    print(\"\\n\")\n",
    "\n",
    "# Question1: What do the parameters for HNSW mean?\n",
    "# Answer:  Looks perfect!\n",
    "# Best answer:  M: maximum degree of nodes in a layer of the graph. \n",
    "# efConstruction: number of nearest neighbors to consider when connecting nodes in the graph.\n",
    "# ef: number of nearest neighbors to consider when searching for similar vectors. \n",
    "\n",
    "# Question2: What are good default values for HNSW parameters with 25K vectors dim 1024?\n",
    "# Answer: M=16, efConstruction=500, and ef=64\n",
    "# Best answer:  M=16, efConstruction=32, ef=32\n",
    "\n",
    "# Question3: what is the default distance metric used in AUTOINDEX in Milvus?\n",
    "# Answer: L2 \n",
    "# Trick answer:  IP (inner product); not yet updated in the documentation, which still says L2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "9af34809",
   "metadata": {},
   "outputs": [],
   "source": [
    "# # model=\"gpt-3.5-turbo\"\n",
    "\n",
    "# # Question: What do the parameters for HNSW mean?\n",
    "# ('Answer: The parameters for HNSW are as follows:\\n'\n",
    "#  '- M: Maximum degree of the node, limiting the connections each node can have '\n",
    "#  'in the graph. Range is [2, 2048].\\n'\n",
    "#  '- efConstruction: Parameter used during index building to specify a search '\n",
    "#  'range.\\n'\n",
    "#  '- ef: Parameter used when searching for targets to specify a search range.\\n'\n",
    "#  '\\n'\n",
    "#  'Sources:\\n'\n",
    "#  'https://milvus.io/docs/index.md')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "d0e81e68",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Drop collection\n",
    "utility.drop_collection(COLLECTION_NAME)"
   ]
  },
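  {
   "cell_type": "markdown",
   "id": "b8d4e6f2",
   "metadata": {},
   "source": [
    "As a final cleanup step, it is also good practice to close the Milvus connection.  The cell below is a minimal sketch (not part of the original flow), assuming the connection was opened under pymilvus's default alias `\"default\"`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c2a9e05",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal cleanup sketch: close the Milvus connection.\n",
    "# Assumes the connection was created under the default alias.\n",
    "from pymilvus import connections\n",
    "\n",
    "connections.disconnect(\"default\")"
   ]
  },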
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "c777937e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Author: Christy Bergman\n",
      "\n",
      "Python implementation: CPython\n",
      "Python version       : 3.11.8\n",
      "IPython version      : 8.22.2\n",
      "\n",
      "torch    : 2.2.2\n",
      "pymilvus : 2.4.0\n",
      "langchain: 0.1.16\n",
      "ollama   : 0.1.8\n",
      "anthropic: 0.25.6\n",
      "openai   : 1.14.3\n",
      "\n",
      "conda environment: py311-ray\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Props to Sebastian Raschka for this handy watermark.\n",
    "# !pip install watermark\n",
    "\n",
    "%load_ext watermark\n",
    "%watermark -a 'Christy Bergman' -v -p torch,pymilvus,langchain,ollama,anthropic,openai --conda"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}