03-rag-with-actions.ipynb
1
{
2
  "cells": [
3
    {
4
      "attachments": {},
5
      "cell_type": "markdown",
6
      "metadata": {},
7
      "source": [
8
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pinecone-io/examples/blob/master/learn/generation/chatbots/nemo-guardrails/03-rag-with-actions.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/pinecone-io/examples/blob/master/learn/generation/chatbots/nemo-guardrails/03-rag-with-actions.ipynb)"
9
      ]
10
    },
11
    {
12
      "attachments": {},
13
      "cell_type": "markdown",
14
      "metadata": {
15
        "id": "u6m1lOaUcKl5"
16
      },
17
      "source": [
18
        "# Retrieval Augmented Generation (RAG) with Actions\n",
19
        "\n",
20
        "Using actions in NeMo Guardrails gives us the ability to do a lot of things without needing heavy agent decision-making pipelines. We can enable the use of tools with little more than what is essentially a \"fuzzy logic match\" between a user's input and the intent groups that we define in our Colang files.\n",
21
        "\n",
22
        "In this example, we'll be taking a look at how to apply the power of **R**etrieval **A**ugmented **G**eneration (RAG) to LLMs using nothing more than Colang logic.\n",
23
        "\n",
24
        "We'll get started by installing the prerequisite libraries:"
25
      ]
26
    },
27
    {
28
      "cell_type": "code",
29
      "execution_count": 6,
30
      "metadata": {
31
        "colab": {
32
          "base_uri": "https://localhost:8080/"
33
        },
34
        "id": "pC-PQoCxcKl6",
35
        "outputId": "a3236f70-4751-40ce-bb42-1f6b1e3dea73"
36
      },
37
      "outputs": [
38
        {
39
          "name": "stdout",
40
          "output_type": "stream",
41
          "text": [
42
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m13.9/13.9 MB\u001b[0m \u001b[31m15.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
43
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m179.1/179.1 kB\u001b[0m \u001b[31m18.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
44
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m519.1/519.1 kB\u001b[0m \u001b[31m46.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
45
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m73.6/73.6 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
46
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m57.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
47
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.4/1.4 MB\u001b[0m \u001b[31m63.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
48
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.6/62.6 kB\u001b[0m \u001b[31m7.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
49
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.2/1.2 MB\u001b[0m \u001b[31m69.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
50
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m647.5/647.5 kB\u001b[0m \u001b[31m56.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
51
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
52
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m86.0/86.0 kB\u001b[0m \u001b[31m7.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
53
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
54
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.1/57.1 kB\u001b[0m \u001b[31m5.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
55
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.0/67.0 kB\u001b[0m \u001b[31m7.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
56
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
57
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.5/71.5 kB\u001b[0m \u001b[31m6.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
58
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.0/60.0 kB\u001b[0m \u001b[31m7.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
59
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m300.3/300.3 kB\u001b[0m \u001b[31m28.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
60
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m14.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
61
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m19.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
62
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m16.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
63
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m268.8/268.8 kB\u001b[0m \u001b[31m26.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
64
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m69.6/69.6 kB\u001b[0m \u001b[31m9.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
65
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m90.0/90.0 kB\u001b[0m \u001b[31m10.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
66
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.4/7.4 MB\u001b[0m \u001b[31m80.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
67
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m71.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
68
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
69
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.4/49.4 kB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
70
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m104.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
71
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m72.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
72
            "\u001b[?25h  Building wheel for annoy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
73
            "  Building wheel for sentence-transformers (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
74
            "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
75
            "ipython 7.34.0 requires jedi>=0.16, which is not installed.\n",
76
            "cvxpy 1.3.2 requires setuptools>65.5.1, but you have setuptools 65.5.1 which is incompatible.\n",
77
            "google-colab 1.0.0 requires requests==2.27.1, but you have requests 2.31.0 which is incompatible.\u001b[0m\u001b[31m\n",
78
            "\u001b[0m"
79
          ]
80
        }
81
      ],
82
      "source": [
83
        "!pip install -qU \\\n",
84
        "    nemoguardrails==0.4.0 \\\n",
85
        "    pinecone-client==2.2.2 \\\n",
86
        "    datasets==2.14.3 \\\n",
87
        "    openai==0.27.8"
88
      ]
89
    },
90
    {
91
      "attachments": {},
92
      "cell_type": "markdown",
93
      "metadata": {},
94
      "source": [
95
        "## Knowledge Base Download"
96
      ]
97
    },
98
    {
99
      "attachments": {},
100
      "cell_type": "markdown",
101
      "metadata": {
102
        "id": "L0709sN9cKl8"
103
      },
104
      "source": [
105
        "To begin, we need to set up our data and retrieval components for RAG. We'll start with a dataset that contains info on the recent Llama 2 models:"
106
      ]
107
    },
108
    {
109
      "cell_type": "code",
110
      "execution_count": 7,
111
      "metadata": {
112
        "colab": {
113
          "base_uri": "https://localhost:8080/",
114
          "height": 264,
115
          "referenced_widgets": [
116
            "15e370c281f5451d90803f44a18b5df3",
117
            "3f440a391336415186b1061a3b664bca",
118
            "3dd53f401a0c45a6b239a6bb3de4ae8c",
119
            "9ebb0b2d61b443fcb02d4c442242a633",
120
            "1cb150c3faa6443fb2cb33de7b323b04",
121
            "1940183a0f3f44e2a7b977ffdc409d97",
122
            "f38778d2645b4325b772a0f3b35006a9",
123
            "20f24b406fef48148954e30c760a2585",
124
            "5b09bc28459d4f7da410fd489f822541",
125
            "0c839559ac8140968888a690475fbb9d",
126
            "866651d572d346bc87a58b3532d08789",
127
            "0a67007519b94e9f889dfb812e8aed57",
128
            "978ee6c4061842868b523768fdef761f",
129
            "c55fe51f5fa34b40b5838151f4bd8f80",
130
            "8765d2a63da649f391600d85be4ffa3f",
131
            "eb420851e4ab4ec8bcd6ffe029334f3d",
132
            "156b6881e2ab4aa9b36dad3425515a8c",
133
            "5f6719dce40249ae9d7be18dd945e142",
134
            "da0c131629e74c1abaaf302dba2501db",
135
            "37dd8ba79ea3497f8d93a975ee2e31f7",
136
            "86d6d86419ed4575a219a83022c6f2d8",
137
            "47a73febe8344f28990b7ef0a48e0fdf",
138
            "c2f722a0dabf4423b9dab3650654eb3d",
139
            "04963424351e4e189fd1accff0761c13",
140
            "e3be547b6cf74014ba896988331d58be",
141
            "23481861d34040eaad6a44afb220abd5",
142
            "0466c1a6801c4523898e736d7fd840e4",
143
            "43333bd25ea64ab0a69e5c6523bb4f12",
144
            "1ce6d260cfbc42a3819b5eea477454c6",
145
            "c049b4a572c2401ca8f5ec1e97ce7eec",
146
            "2ff85b2c3d99438f95741225842eeee1",
147
            "4d7768dee55b4a5d9a6c508145773d4c",
148
            "40e395600ca6480ea0faf3e43b6a97a0",
149
            "d91d537e023a4d05b92e2ac816decb56",
150
            "2ef4863c67144f73957265f05f45f4b0",
151
            "b87feaecd2dd429eab247ef11526b83a",
152
            "476baf88f01144e0b440507f440de9fb",
153
            "f952a2d871ff42dc9e573b73ae6bf7a2",
154
            "a2402336caca46318d374954ba896626",
155
            "fd3afd88485c4bb2a92b0171d58df088",
156
            "c5caf402e5a04706a679957eb4f6db45",
157
            "7f68415905974927b5dabf7afe1a212c",
158
            "288b2f021a17414aae208194f49fa1bb",
159
            "e327d29d8754472e8842b6c180e7e3ec",
160
            "c0226a557e4f4db28b2aef5bb1957d1f",
161
            "f995c312c6004029bf9bcd6ef1e8c968",
162
            "64dc7303faf442dc9e11ab2b9c32ef27",
163
            "52fc99d752d74084a13a800fa4a32c5c",
164
            "60ada0e7c07d4296a851f7e7502d7076",
165
            "74507837428a4c73a7f7cc0c14eb5943",
166
            "f34ac26c9268462ca8a160ab399eb3d8",
167
            "e9f1b5eea416421f90bedd998ebd3d31",
168
            "a992efa50d014755a5bf9e0ca94aecba",
169
            "27dccef87f8a49ce9e477ee3fea03aac",
170
            "dc7a4ff1e33545688788d4e84382f3da"
171
          ]
172
        },
173
        "id": "rDo76dejcKl8",
174
        "outputId": "04638843-b458-406a-9ead-b896ec372358"
175
      },
176
      "outputs": [
177
        {
178
          "data": {
179
            "application/vnd.jupyter.widget-view+json": {
180
              "model_id": "15e370c281f5451d90803f44a18b5df3",
181
              "version_major": 2,
182
              "version_minor": 0
183
            },
184
            "text/plain": [
185
              "Downloading readme:   0%|          | 0.00/409 [00:00<?, ?B/s]"
186
            ]
187
          },
188
          "metadata": {},
189
          "output_type": "display_data"
190
        },
191
        {
192
          "data": {
193
            "application/vnd.jupyter.widget-view+json": {
194
              "model_id": "0a67007519b94e9f889dfb812e8aed57",
195
              "version_major": 2,
196
              "version_minor": 0
197
            },
198
            "text/plain": [
199
              "Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]"
200
            ]
201
          },
202
          "metadata": {},
203
          "output_type": "display_data"
204
        },
205
        {
206
          "data": {
207
            "application/vnd.jupyter.widget-view+json": {
208
              "model_id": "c2f722a0dabf4423b9dab3650654eb3d",
209
              "version_major": 2,
210
              "version_minor": 0
211
            },
212
            "text/plain": [
213
              "Downloading data:   0%|          | 0.00/14.4M [00:00<?, ?B/s]"
214
            ]
215
          },
216
          "metadata": {},
217
          "output_type": "display_data"
218
        },
219
        {
220
          "data": {
221
            "application/vnd.jupyter.widget-view+json": {
222
              "model_id": "d91d537e023a4d05b92e2ac816decb56",
223
              "version_major": 2,
224
              "version_minor": 0
225
            },
226
            "text/plain": [
227
              "Extracting data files:   0%|          | 0/1 [00:00<?, ?it/s]"
228
            ]
229
          },
230
          "metadata": {},
231
          "output_type": "display_data"
232
        },
233
        {
234
          "data": {
235
            "application/vnd.jupyter.widget-view+json": {
236
              "model_id": "c0226a557e4f4db28b2aef5bb1957d1f",
237
              "version_major": 2,
238
              "version_minor": 0
239
            },
240
            "text/plain": [
241
              "Generating train split: 0 examples [00:00, ? examples/s]"
242
            ]
243
          },
244
          "metadata": {},
245
          "output_type": "display_data"
246
        },
247
        {
248
          "data": {
249
            "text/plain": [
250
              "Dataset({\n",
251
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references'],\n",
252
              "    num_rows: 4838\n",
253
              "})"
254
            ]
255
          },
256
          "execution_count": 7,
257
          "metadata": {},
258
          "output_type": "execute_result"
259
        }
260
      ],
261
      "source": [
262
        "from datasets import load_dataset\n",
263
        "\n",
264
        "data = load_dataset(\n",
265
        "    \"jamescalam/llama-2-arxiv-papers-chunked\",\n",
266
        "    split=\"train\"\n",
267
        ")\n",
268
        "data"
269
      ]
270
    },
271
    {
272
      "cell_type": "code",
273
      "execution_count": 8,
274
      "metadata": {
275
        "colab": {
276
          "base_uri": "https://localhost:8080/"
277
        },
278
        "id": "LuLjiD5ycKl8",
279
        "outputId": "4837a2fe-c4b7-4f34-9410-1722139c2ba7"
280
      },
281
      "outputs": [
282
        {
283
          "data": {
284
            "text/plain": [
285
              "{'doi': '1102.0183',\n",
286
              " 'chunk-id': '0',\n",
287
              " 'chunk': 'High-Performance Neural Networks\\nfor Visual Object Classi\\x0ccation\\nDan C. Cire\\x18 san, Ueli Meier, Jonathan Masci,\\nLuca M. Gambardella and J\\x7f urgen Schmidhuber\\nTechnical Report No. IDSIA-01-11\\nJanuary 2011\\nIDSIA / USI-SUPSI\\nDalle Molle Institute for Arti\\x0ccial Intelligence\\nGalleria 2, 6928 Manno, Switzerland\\nIDSIA is a joint institute of both University of Lugano (USI) and University of Applied Sciences of Southern Switzerland (SUPSI),\\nand was founded in 1988 by the Dalle Molle Foundation which promoted quality of life.\\nThis work was partially supported by the Swiss Commission for Technology and Innovation (CTI), Project n. 9688.1 IFF:\\nIntelligent Fill in Form.arXiv:1102.0183v1  [cs.AI]  1 Feb 2011\\nTechnical Report No. IDSIA-01-11 1\\nHigh-Performance Neural Networks\\nfor Visual Object Classi\\x0ccation\\nDan C. Cire\\x18 san, Ueli Meier, Jonathan Masci,\\nLuca M. Gambardella and J\\x7f urgen Schmidhuber\\nJanuary 2011\\nAbstract\\nWe present a fast, fully parameterizable GPU implementation of Convolutional Neural\\nNetwork variants. Our feature extractors are neither carefully designed nor pre-wired, but',\n",
288
              " 'id': '1102.0183',\n",
289
              " 'title': 'High-Performance Neural Networks for Visual Object Classification',\n",
290
              " 'summary': 'We present a fast, fully parameterizable GPU implementation of Convolutional\\nNeural Network variants. Our feature extractors are neither carefully designed\\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\\narchitectures achieve the best published results on benchmarks for object\\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\\nback-propagation perform better than more shallow ones. Learning is\\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\\nrespectively.',\n",
291
              " 'source': 'http://arxiv.org/pdf/1102.0183',\n",
292
              " 'authors': ['Dan C. Cireşan',\n",
293
              "  'Ueli Meier',\n",
294
              "  'Jonathan Masci',\n",
295
              "  'Luca M. Gambardella',\n",
296
              "  'Jürgen Schmidhuber'],\n",
297
              " 'categories': ['cs.AI', 'cs.NE'],\n",
298
              " 'comment': '12 pages, 2 figures, 5 tables',\n",
299
              " 'journal_ref': None,\n",
300
              " 'primary_category': 'cs.AI',\n",
301
              " 'published': '20110201',\n",
302
              " 'updated': '20110201',\n",
303
              " 'references': []}"
304
            ]
305
          },
306
          "execution_count": 8,
307
          "metadata": {},
308
          "output_type": "execute_result"
309
        }
310
      ],
311
      "source": [
312
        "data[0]"
313
      ]
314
    },
315
    {
316
      "attachments": {},
317
      "cell_type": "markdown",
318
      "metadata": {
319
        "id": "ohXiH7DxcKl8"
320
      },
321
      "source": [
322
        "We mainly want the information contained within the `chunk` field, although we can pull in other bits of data as metadata for use later. We'll also create a new unique ID for each record by concatenating the `doi` and `chunk-id` fields."
323
      ]
324
    },
325
    {
326
      "cell_type": "code",
327
      "execution_count": 9,
328
      "metadata": {
329
        "colab": {
330
          "base_uri": "https://localhost:8080/",
331
          "height": 136,
332
          "referenced_widgets": [
333
            "7090556da17f4a45a881deff320a301f",
334
            "d4749795649442b29a06d1d43964aa7f",
335
            "29e6f822b0f94e5eb70f236a648f2518",
336
            "57f0df54e956480c88bef41f27a3253c",
337
            "058ca681ca89482aad52e7bac45f4799",
338
            "2edcf48f48d54eb7817fb3cbf04104d7",
339
            "32e69f8655a442e9bd66cef247ef53f0",
340
            "34bc9aaaca26469a800e4b887aead284",
341
            "8d5bbcc74d4c463db6def4e25dd9e5ac",
342
            "05cea30bb8944ab99fd226a6b244a121",
343
            "fbb4f2a3250d430d86991cbe2691fb23"
344
          ]
345
        },
346
        "id": "wzF9e3SVcKl8",
347
        "outputId": "a7741908-f739-4ffe-90ff-02ea9b7ca24f"
348
      },
349
      "outputs": [
350
        {
351
          "data": {
352
            "application/vnd.jupyter.widget-view+json": {
353
              "model_id": "7090556da17f4a45a881deff320a301f",
354
              "version_major": 2,
355
              "version_minor": 0
356
            },
357
            "text/plain": [
358
              "Map:   0%|          | 0/4838 [00:00<?, ? examples/s]"
359
            ]
360
          },
361
          "metadata": {},
362
          "output_type": "display_data"
363
        },
364
        {
365
          "data": {
366
            "text/plain": [
367
              "Dataset({\n",
368
              "    features: ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors', 'categories', 'comment', 'journal_ref', 'primary_category', 'published', 'updated', 'references', 'uid'],\n",
369
              "    num_rows: 4838\n",
370
              "})"
371
            ]
372
          },
373
          "execution_count": 9,
374
          "metadata": {},
375
          "output_type": "execute_result"
376
        }
377
      ],
378
      "source": [
379
        "data = data.map(lambda x: {\n",
380
        "    'uid': f\"{x['doi']}-{x['chunk-id']}\"\n",
381
        "})\n",
382
        "data"
383
      ]
384
    },
385
    {
386
      "cell_type": "code",
387
      "execution_count": 10,
388
      "metadata": {
389
        "id": "5Zun9xprcKl8"
390
      },
391
      "outputs": [],
392
      "source": [
393
        "data = data.to_pandas()\n",
394
        "# drop irrelevant fields\n",
395
        "data = data[['uid', 'chunk', 'title', 'source']]"
396
      ]
397
    },
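    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick optional sanity check, we can preview the first few rows to confirm that only the `uid`, `chunk`, `title`, and `source` columns remain:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional: preview the cleaned dataframe\n",
        "data.head()"
      ]
    },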
398
    {
399
      "attachments": {},
400
      "cell_type": "markdown",
401
      "metadata": {
402
        "id": "0zp3hhNBcKl9"
403
      },
404
      "source": [
405
        "`chunk` will be the text that we encode and store inside Pinecone. To encode that text data we need an embedding model; for that we can use open-source [sentence transformers](https://github.com/UKPLab/sentence-transformers), Cohere, OpenAI, and many other services. In this example we will use OpenAI, for which we will need an [OpenAI API key](https://platform.openai.com) (there will be some minor embedding cost incurred here)."
406
      ]
407
    },
408
    {
409
      "cell_type": "code",
410
      "execution_count": 11,
411
      "metadata": {
412
        "id": "AuUszBFzcKl9"
413
      },
414
      "outputs": [],
415
      "source": [
416
        "import os\n",
417
        "\n",
418
        "os.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\") or \"YOUR_API_KEY\""
419
      ]
420
    },
421
    {
422
      "attachments": {},
423
      "cell_type": "markdown",
424
      "metadata": {
425
        "id": "mt0oF0_ScKl9"
426
      },
427
      "source": [
428
        "Now we can create embeddings like so:"
429
      ]
430
    },
431
    {
432
      "cell_type": "code",
433
      "execution_count": 12,
434
      "metadata": {
435
        "id": "4fLOSa59cKl9"
436
      },
437
      "outputs": [],
438
      "source": [
439
        "import openai\n",
440
        "\n",
441
        "embed_model_id = \"text-embedding-ada-002\"\n",
442
        "\n",
443
        "res = openai.Embedding.create(\n",
444
        "    input=[\n",
445
        "        \"We would have some text to embed here\",\n",
446
        "        \"And maybe another chunk here too\"\n",
447
        "    ], engine=embed_model_id\n",
448
        ")"
449
      ]
450
    },
451
    {
452
      "cell_type": "code",
453
      "execution_count": 13,
454
      "metadata": {
455
        "colab": {
456
          "base_uri": "https://localhost:8080/"
457
        },
458
        "id": "rgzqzHM3cKl9",
459
        "outputId": "4dd0786c-3810-4304-94da-694e3473f82f"
460
      },
461
      "outputs": [
462
        {
463
          "data": {
464
            "text/plain": [
465
              "dict_keys(['object', 'data', 'model', 'usage'])"
466
            ]
467
          },
468
          "execution_count": 13,
469
          "metadata": {},
470
          "output_type": "execute_result"
471
        }
472
      ],
473
      "source": [
474
        "res.keys()"
475
      ]
476
    },
477
    {
478
      "attachments": {},
479
      "cell_type": "markdown",
480
      "metadata": {
481
        "id": "6KbZRU2XcKl9"
482
      },
483
      "source": [
484
        "Inside the response `res` we will find a JSON-like object containing our new embeddings within the `data` field:"
485
      ]
486
    },
487
    {
488
      "cell_type": "code",
489
      "execution_count": 14,
490
      "metadata": {
491
        "colab": {
492
          "base_uri": "https://localhost:8080/"
493
        },
494
        "id": "0UhfV2PWcKl9",
495
        "outputId": "ec81d686-3acf-49a1-ab22-9d6e8d3fd547"
496
      },
497
      "outputs": [
498
        {
499
          "data": {
500
            "text/plain": [
501
              "2"
502
            ]
503
          },
504
          "execution_count": 14,
505
          "metadata": {},
506
          "output_type": "execute_result"
507
        }
508
      ],
509
      "source": [
510
        "len(res['data'])"
511
      ]
512
    },
513
    {
514
      "cell_type": "code",
515
      "execution_count": 15,
516
      "metadata": {
517
        "colab": {
518
          "base_uri": "https://localhost:8080/"
519
        },
520
        "id": "3PRI7VdRcKl9",
521
        "outputId": "7f0a2e9a-d93d-44e4-b4e4-706d5dfeac91"
522
      },
523
      "outputs": [
524
        {
525
          "data": {
526
            "text/plain": [
527
              "(1536, 1536)"
528
            ]
529
          },
530
          "execution_count": 15,
531
          "metadata": {},
532
          "output_type": "execute_result"
533
        }
534
      ],
535
      "source": [
536
        "len(res['data'][0]['embedding']), len(res['data'][1]['embedding'])"
537
      ]
538
    },
539
    {
540
      "attachments": {},
541
      "cell_type": "markdown",
542
      "metadata": {
543
        "id": "4K0d29jDcKl9"
544
      },
545
      "source": [
546
        "Each embedding has a dimensionality of `1536`, as this is the embedding dimensionality of the `text-embedding-ada-002` model. We will apply this same embedding logic to the dataset we downloaded before, but before doing so we must create a vector DB index where we can store those embeddings."
547
      ]
548
    },
549
    {
550
      "attachments": {},
551
      "cell_type": "markdown",
552
      "metadata": {
553
        "id": "iQ0MFH5ncKl9"
554
      },
555
      "source": [
556
        "## Creating the Knowledge Base"
557
      ]
558
    },
559
    {
560
      "attachments": {},
561
      "cell_type": "markdown",
562
      "metadata": {
563
        "id": "5n8R94bCcKl-"
564
      },
565
      "source": [
566
        "Now we need a place to store these embeddings and enable an efficient vector search through them all. For that we use Pinecone: we can get a [free API key](https://app.pinecone.io/) and enter it below, where we will initialize our connection to Pinecone and create a new index."
567
      ]
568
    },
569
    {
570
      "cell_type": "code",
571
      "execution_count": 16,
572
      "metadata": {
573
        "colab": {
574
          "base_uri": "https://localhost:8080/"
575
        },
576
        "id": "kl7WXGFxcKl-",
577
        "outputId": "ff1c0e90-78bb-460d-8ff6-0e8b4d50cdd9"
578
      },
579
      "outputs": [
580
        {
581
          "data": {
582
            "text/plain": [
583
              "WhoAmIResponse(username='c78f2bd', user_label='default', projectname='9a4fbb6')"
584
            ]
585
          },
586
          "execution_count": 16,
587
          "metadata": {},
588
          "output_type": "execute_result"
589
        }
590
      ],
591
      "source": [
592
        "import pinecone\n",
593
        "\n",
594
        "# initialize connection to pinecone (get API key at app.pinecone.io)\n",
595
        "api_key = os.getenv(\"PINECONE_API_KEY\") or \"YOUR_API_KEY\"\n",
596
        "# find your environment next to the api key in pinecone console\n",
597
        "env = os.getenv(\"PINECONE_ENVIRONMENT\") or \"YOUR_ENV\"\n",
598
        "\n",
599
        "pinecone.init(api_key=api_key, environment=env)\n",
600
        "pinecone.whoami()"
601
      ]
602
    },
603
    {
604
      "attachments": {},
605
      "cell_type": "markdown",
606
      "metadata": {
607
        "id": "as8ZZQsUcKl-"
608
      },
609
      "source": [
610
        "Create the index:"
611
      ]
612
    },
613
    {
614
      "cell_type": "code",
615
      "execution_count": 17,
616
      "metadata": {
617
        "id": "e35m6GFHcKl-"
618
      },
619
      "outputs": [],
620
      "source": [
621
        "index_name = \"nemo-guardrails-rag-with-actions\""
622
      ]
623
    },
624
    {
625
      "cell_type": "code",
626
      "execution_count": 18,
627
      "metadata": {
628
        "colab": {
629
          "base_uri": "https://localhost:8080/"
630
        },
631
        "id": "gVh3zecHcKl-",
632
        "outputId": "d1627bea-aeca-4926-9975-6f70bb0c5165"
633
      },
634
      "outputs": [
635
        {
636
          "data": {
637
            "text/plain": [
638
              "{'dimension': 1536,\n",
639
              " 'index_fullness': 0.0,\n",
640
              " 'namespaces': {'': {'vector_count': 4838}},\n",
641
              " 'total_vector_count': 4838}"
642
            ]
643
          },
644
          "execution_count": 18,
645
          "metadata": {},
646
          "output_type": "execute_result"
647
        }
648
      ],
649
      "source": [
650
        "import time\n",
651
        "\n",
652
        "# check if index already exists (it shouldn't if this is first time)\n",
653
        "if index_name not in pinecone.list_indexes():\n",
654
        "    # if does not exist, create index\n",
655
        "    pinecone.create_index(\n",
656
        "        index_name,\n",
657
        "        dimension=len(res['data'][0]['embedding']),\n",
658
        "        metric='cosine'\n",
659
        "    )\n",
660
        "    # wait for index to be initialized\n",
661
        "    while not pinecone.describe_index(index_name).status['ready']:\n",
662
        "        time.sleep(1)\n",
663
        "\n",
664
        "# connect to index\n",
665
        "index = pinecone.Index(index_name)\n",
666
        "# view index stats\n",
667
        "index.describe_index_stats()"
668
      ]
669
    },
670
    {
671
      "attachments": {},
672
      "cell_type": "markdown",
673
      "metadata": {
674
        "id": "Nj16rfrLcKl-"
675
      },
676
      "source": [
677
        "The index stats above show the current `total_vector_count` (it will be `0` if this is the first run). We can populate the index with OpenAI `text-embedding-ada-002` embeddings like so:"
678
      ]
679
    },
680
    {
681
      "cell_type": "code",
682
      "execution_count": 19,
683
      "metadata": {
684
        "colab": {
685
          "base_uri": "https://localhost:8080/",
686
          "height": 49,
687
          "referenced_widgets": [
688
            "378b3bf651324e8a89cc35dbc5deb75e",
689
            "4cd31f35980c4adb929c98e01736acfb",
690
            "12dc71fc026c439bbb35ce783348dde8",
691
            "4b4fb5bd0ad945c48ca73f7710f9407d",
692
            "68991c91e49b485791239bb279ead5e5",
693
            "1da9a2e3371347928189ac6d0dfa47bd",
694
            "44a1fc7a38de443899e2d44e5153a386",
695
            "0c24194eeef94e81803cb33ffbc88e18",
696
            "930220685b6747b7b7a9c6138ad6cf1e",
697
            "b8d1ddbb52034a2db5c1aa573cda3ab2",
698
            "a988f01ffb1f41949d0390aba59624bd"
699
          ]
700
        },
701
        "id": "m68xo07QcKl-",
702
        "outputId": "ec1b29f5-757b-46fc-e752-18d2c59d7512"
703
      },
704
      "outputs": [
705
        {
706
          "data": {
707
            "application/vnd.jupyter.widget-view+json": {
708
              "model_id": "378b3bf651324e8a89cc35dbc5deb75e",
709
              "version_major": 2,
710
              "version_minor": 0
711
            },
712
            "text/plain": [
713
              "  0%|          | 0/49 [00:00<?, ?it/s]"
714
            ]
715
          },
716
          "metadata": {},
717
          "output_type": "display_data"
718
        }
719
      ],
720
      "source": [
721
        "from tqdm.auto import tqdm\n",
722
        "\n",
723
        "batch_size = 100  # how many embeddings we create and insert at once\n",
724
        "\n",
725
        "for i in tqdm(range(0, len(data), batch_size)):\n",
726
        "    # find end of batch\n",
727
        "    i_end = min(len(data), i+batch_size)\n",
728
        "    batch = data[i:i_end]\n",
729
        "    # get ids\n",
730
        "    ids_batch = batch['uid'].to_list()\n",
731
        "    # get texts to encode\n",
732
        "    texts = batch['chunk'].to_list()\n",
733
        "    # create embeddings\n",
734
        "    res = openai.Embedding.create(input=texts, engine=embed_model_id)\n",
735
        "    embeds = [record['embedding'] for record in res['data']]\n",
736
        "    # create metadata\n",
737
        "    metadata = [{\n",
738
        "        'chunk': x['chunk'],\n",
739
        "        'source': x['source']\n",
740
        "    } for _, x in batch.iterrows()]\n",
741
        "    to_upsert = list(zip(ids_batch, embeds, metadata))\n",
742
        "    # upsert to Pinecone\n",
743
        "    index.upsert(vectors=to_upsert)"
744
      ]
745
    },
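    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once the upsert loop finishes, we can (optionally) check the index stats again; the `total_vector_count` should now match the number of records we upserted:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional: confirm all vectors made it into the index\n",
        "index.describe_index_stats()"
      ]
    },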
746
    {
747
      "attachments": {},
748
      "cell_type": "markdown",
749
      "metadata": {},
750
      "source": [
751
        "## RAG Functions for Guardrails"
752
      ]
753
    },
754
    {
755
      "attachments": {},
756
      "cell_type": "markdown",
757
      "metadata": {
758
        "id": "-ZbgLMxAcKl-"
759
      },
760
      "source": [
761
        "Now that we've added all of our text data to the index, let's create a \"retrieve\" function that will allow us to take a user query, retrieve relevant records, and return them for use by our LLM.\n",
762
        "\n",
763
        "_Note: all functions defined and used with Guardrails `generate_async` must also be async functions._"
764
      ]
765
    },
766
    {
767
      "cell_type": "code",
768
      "execution_count": 71,
769
      "metadata": {
770
        "id": "ulmrh-RucKl-"
771
      },
772
      "outputs": [],
773
      "source": [
774
        "async def retrieve(query: str) -> list:\n",
775
        "    # create query embedding\n",
776
        "    res = openai.Embedding.create(input=[query], engine=embed_model_id)\n",
777
        "    xq = res['data'][0]['embedding']\n",
778
        "    # get relevant contexts from pinecone\n",
779
        "    res = index.query(xq, top_k=5, include_metadata=True)\n",
780
        "    # get list of retrieved texts\n",
781
        "    contexts = [x['metadata']['chunk'] for x in res['matches']]\n",
782
        "    return contexts"
783
      ]
784
    },
785
    {
786
      "attachments": {},
787
      "cell_type": "markdown",
788
      "metadata": {
789
        "id": "_FrhDqricKl-"
790
      },
791
      "source": [
792
        "We will create another function to perform the actual retrieval augmented generation (RAG) step given a particular query and contexts."
793
      ]
794
    },
795
    {
796
      "cell_type": "code",
797
      "execution_count": 72,
798
      "metadata": {
799
        "id": "7JVPD749cKl-"
800
      },
801
      "outputs": [],
802
      "source": [
803
        "async def rag(query: str, contexts: list) -> str:\n",
804
        "    print(\"> RAG Called\")  # we'll add this so we can see when this is being used\n",
805
        "    context_str = \"\\n\".join(contexts)\n",
806
        "    # place query and contexts into RAG prompt\n",
807
        "    prompt = f\"\"\"You are a helpful assistant, below is a query from a user and\n",
808
        "    some relevant contexts. Answer the question given the information in those\n",
809
        "    contexts. If you cannot find the answer to the question, say \"I don't know\".\n",
810
        "\n",
811
        "    Contexts:\n",
812
        "    {context_str}\n",
813
        "\n",
814
        "    Query: {query}\n",
815
        "\n",
816
        "    Answer: \"\"\"\n",
817
        "    # generate answer\n",
818
        "    res = openai.Completion.create(\n",
819
        "        engine=\"text-davinci-003\",\n",
820
        "        prompt=prompt,\n",
821
        "        temperature=0.0,\n",
822
        "        max_tokens=100\n",
823
        "    )\n",
824
        "    return res['choices'][0]['text']"
825
      ]
826
    },
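    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wiring these functions into Guardrails, we can sanity-check them directly (the query below is just an example; notebooks support top-level `await` for async functions):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional: test retrieve + rag outside of the rails\n",
        "query = \"what is so special about llama 2?\"  # example query\n",
        "contexts = await retrieve(query)\n",
        "print(await rag(query, contexts))"
      ]
    },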
827
    {
828
      "attachments": {},
829
      "cell_type": "markdown",
830
      "metadata": {},
831
      "source": [
832
        "## Guardrails"
833
      ]
834
    },
835
    {
836
      "attachments": {},
837
      "cell_type": "markdown",
838
      "metadata": {},
839
      "source": [
840
        "We now need to initialize our configs for Rails:"
841
      ]
842
    },
843
    {
844
      "cell_type": "code",
845
      "execution_count": null,
846
      "metadata": {},
847
      "outputs": [],
848
      "source": [
849
        "yaml_content = \"\"\"\n",
850
        "models:\n",
851
        "- type: main\n",
852
        "  engine: openai\n",
853
        "  model: text-davinci-003\n",
854
        "\"\"\"\n",
855
        "\n",
856
        "rag_colang_content = \"\"\"\n",
857
        "# define limits\n",
858
        "define user ask politics\n",
859
        "    \"what are your political beliefs?\"\n",
860
        "    \"thoughts on the president?\"\n",
861
        "    \"left wing\"\n",
862
        "    \"right wing\"\n",
863
        "\n",
864
        "define bot answer politics\n",
865
        "    \"I'm a shopping assistant, I don't like to talk of politics.\"\n",
866
        "    \"Sorry I can't talk about politics!\"\n",
867
        "\n",
868
        "define flow politics\n",
869
        "    user ask politics\n",
870
        "    bot answer politics\n",
871
        "    bot offer help\n",
872
        "\n",
873
        "# define RAG intents and flow\n",
874
        "define user ask llama\n",
875
        "    \"tell me about llama 2?\"\n",
876
        "    \"what is large language model\"\n",
877
        "    \"where did meta's new model come from?\"\n",
878
        "    \"how to llama?\"\n",
879
        "    \"have you ever meta llama?\"\n",
880
        "\n",
881
        "define flow llama\n",
882
        "    user ask llama\n",
883
        "    $contexts = execute retrieve(query=$last_user_message)\n",
884
        "    $answer = execute rag(query=$last_user_message, contexts=$contexts)\n",
885
        "    bot $answer\n",
886
        "\"\"\""
887
      ]
888
    },
889
    {
890
      "attachments": {},
891
      "cell_type": "markdown",
892
      "metadata": {
893
        "id": "MKb5yYPZcKl_"
894
      },
895
      "source": [
896
        "Note how we have created a user message (`user ask llama`) and flow (`llama`) for handling user queries if they ask about anything Llama 2 / LLM related:\n",
897
        "\n",
898
        "```colang\n",
899
        "define user ask llama\n",
900
        "    \"tell me about llama 2?\"\n",
901
        "    \"what is large language model\"\n",
902
        "    \"where did meta's new model come from?\"\n",
903
        "    \"how to llama?\"\n",
904
        "    \"have you ever meta llama?\"\n",
905
        "\n",
906
        "define flow llama\n",
907
        "    user ask llama\n",
908
        "    $contexts = execute retrieve(query=$last_user_message)\n",
909
        "    $answer = execute rag(query=$last_user_message, contexts=$contexts)\n",
910
        "    bot $answer\n",
911
        "```\n",
912
        "\n",
913
        "It executes the `retrieve` action using the `$last_user_message` to get our `$contexts`; we then pass the `$last_user_message` and `$contexts` to our `rag` action. We initialize our RAG-enabled rails with this Colang setup:"
914
      ]
915
    },
916
    {
917
      "cell_type": "code",
918
      "execution_count": 74,
919
      "metadata": {
920
        "id": "nTMYxfWqcKl_"
921
      },
922
      "outputs": [],
923
      "source": [
924
        "from nemoguardrails import LLMRails, RailsConfig\n",
925
        "\n",
926
        "# initialize rails config\n",
927
        "config = RailsConfig.from_content(\n",
928
        "    colang_content=rag_colang_content,\n",
929
        "    yaml_content=yaml_content\n",
930
        ")\n",
931
        "# create rails\n",
932
        "rag_rails = LLMRails(config)"
933
      ]
934
    },
935
    {
936
      "attachments": {},
937
      "cell_type": "markdown",
938
      "metadata": {
939
        "id": "DjhDEGxbcKl_"
940
      },
941
      "source": [
942
        "Remember! We need to register any actions that are used in the Colang config; otherwise our rails have no idea how to `execute retrieve` or `execute rag`. We register both like so:"
943
      ]
944
    },
945
    {
946
      "cell_type": "code",
947
      "execution_count": 75,
948
      "metadata": {
949
        "id": "suCXG0JfcKl_"
950
      },
951
      "outputs": [],
952
      "source": [
953
        "rag_rails.register_action(action=retrieve, name=\"retrieve\")\n",
954
        "rag_rails.register_action(action=rag, name=\"rag\")"
955
      ]
956
    },
957
    {
958
      "attachments": {},
959
      "cell_type": "markdown",
960
      "metadata": {
961
        "id": "bIj2QY0vcKl_"
962
      },
963
      "source": [
964
        "Now let's try out our RAG agent."
965
      ]
966
    },
967
    {
968
      "cell_type": "code",
969
      "execution_count": 76,
970
      "metadata": {
971
        "colab": {
972
          "base_uri": "https://localhost:8080/",
973
          "height": 35
974
        },
975
        "id": "aNL1xLEQcKl_",
976
        "outputId": "e2fcdeee-f458-4c72-ad3f-47302ce64d95"
977
      },
978
      "outputs": [
979
        {
980
          "data": {
981
            "application/vnd.google.colaboratory.intrinsic+json": {
982
              "type": "string"
983
            },
984
            "text/plain": [
985
              "'Hi there! How can I help you today?'"
986
            ]
987
          },
988
          "execution_count": 76,
989
          "metadata": {},
990
          "output_type": "execute_result"
991
        }
992
      ],
993
      "source": [
994
        "await rag_rails.generate_async(prompt=\"hello\")"
995
      ]
996
    },
997
    {
998
      "cell_type": "code",
999
      "execution_count": 77,
1000
      "metadata": {
1001
        "colab": {
1002
          "base_uri": "https://localhost:8080/",
1003
          "height": 104
1004
        },
1005
        "id": "CZ-fKq5eeUCh",
1006
        "outputId": "96328b30-43d3-44f7-ff02-4cba3e214f5a"
1007
      },
1008
      "outputs": [
1009
        {
1010
          "name": "stdout",
1011
          "output_type": "stream",
1012
          "text": [
1013
            "> RAG Called\n"
1014
          ]
1015
        },
1016
        {
1017
          "data": {
1018
            "application/vnd.google.colaboratory.intrinsic+json": {
1019
              "type": "string"
1020
            },
1021
            "text/plain": [
1022
              "' Llama 2 is a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. They are optimized for dialogue use cases and outperform open-source chat models on most benchmarks. They are also on par with some closed-source models, at least on the human evaluations performed. They are intended for commercial and research use in English and can be adapted for a variety of natural language generation tasks.'"
1023
            ]
1024
          },
1025
          "execution_count": 77,
1026
          "metadata": {},
1027
          "output_type": "execute_result"
1028
        }
1029
      ],
1030
      "source": [
1031
        "await rag_rails.generate_async(prompt=\"tell me about llama 2\")"
1032
      ]
1033
    },
1034
    {
1035
      "attachments": {},
1036
      "cell_type": "markdown",
1037
      "metadata": {
1038
        "id": "Unj5KOEzECC7"
1039
      },
1040
      "source": [
1041
        "We can see from the printed `> RAG Called` statement that the RAG pipeline was used for our second prompt. However, it'd be interesting to see what effect this pipeline has on our actual answer. To do that, let's initialize a new Rails instance that doesn't include the call to RAG, so we can compare the results:"
1042
      ]
1043
    },
1044
    {
1045
      "cell_type": "code",
1046
      "execution_count": null,
1047
      "metadata": {},
1048
      "outputs": [],
1049
      "source": [
1050
        "no_rag_colang_content = \"\"\"\n",
1051
        "# define limits\n",
1052
        "define user ask politics\n",
1053
        "    \"what are your political beliefs?\"\n",
1054
        "    \"thoughts on the president?\"\n",
1055
        "    \"left wing\"\n",
1056
        "    \"right wing\"\n",
1057
        "\n",
1058
        "define bot answer politics\n",
1059
        "    \"I'm a shopping assistant, I don't like to talk of politics.\"\n",
1060
        "    \"Sorry I can't talk about politics!\"\n",
1061
        "\n",
1062
        "define flow politics\n",
1063
        "    user ask politics\n",
1064
        "    bot answer politics\n",
1065
        "    bot offer help\n",
1066
        "\"\"\""
1067
      ]
1068
    },
1069
    {
1070
      "cell_type": "code",
1071
      "execution_count": 78,
1072
      "metadata": {
1073
        "id": "nH2pdzYRECC7"
1074
      },
1075
      "outputs": [],
1076
      "source": [
1077
        "# initialize rails config\n",
1078
        "config = RailsConfig.from_content(\n",
1079
        "    colang_content=no_rag_colang_content,\n",
1080
        "    yaml_content=yaml_content\n",
1081
        ")\n",
1082
        "# create rails\n",
1083
        "no_rag_rails = LLMRails(config)"
1084
      ]
1085
    },
1086
    {
1087
      "attachments": {},
1088
      "cell_type": "markdown",
1089
      "metadata": {
1090
        "id": "9OECaqz4ECC7"
1091
      },
1092
      "source": [
1093
        "Let's ask the same question:"
1094
      ]
1095
    },
1096
    {
1097
      "cell_type": "code",
1098
      "execution_count": 79,
1099
      "metadata": {
1100
        "colab": {
1101
          "base_uri": "https://localhost:8080/",
1102
          "height": 70
1103
        },
1104
        "id": "4dAYiJrKOFXi",
1105
        "outputId": "a60500b0-8138-4cb9-b0a1-e012552104dd"
1106
      },
1107
      "outputs": [
1108
        {
1109
          "data": {
1110
            "application/vnd.google.colaboratory.intrinsic+json": {
1111
              "type": "string"
1112
            },
1113
            "text/plain": [
1114
              "\"Llama 2 is a text-to-speech software developed by NVIDIA. It is designed to generate natural sounding speech from text and is used in a variety of applications such as virtual assistants, chatbots, and automated customer service. The software is powered by NVIDIA's AI platform and uses a deep learning model to generate the audio output.\""
1115
            ]
1116
          },
1117
          "execution_count": 79,
1118
          "metadata": {},
1119
          "output_type": "execute_result"
1120
        }
1121
      ],
1122
      "source": [
1123
        "await no_rag_rails.generate_async(prompt=\"tell me about llama 2\")"
1124
      ]
1125
    },
1126
    {
1127
      "attachments": {},
1128
      "cell_type": "markdown",
1129
      "metadata": {
1130
        "id": "gGDEQ-6JOMXs"
1131
      },
1132
      "source": [
1133
        "Hmm, not the Llama 2 we were looking for — let's ask some more questions and compare RAG vs. no-RAG answers."
1134
      ]
1135
    },
1136
    {
1137
      "cell_type": "code",
1138
      "execution_count": 80,
1139
      "metadata": {
1140
        "colab": {
1141
          "base_uri": "https://localhost:8080/",
1142
          "height": 52
1143
        },
1144
        "id": "uIGRkG_2OrQR",
1145
        "outputId": "7a3f38cd-1d4b-433e-8008-fbb529391f32"
1146
      },
1147
      "outputs": [
1148
        {
1149
          "data": {
1150
            "application/vnd.google.colaboratory.intrinsic+json": {
1151
              "type": "string"
1152
            },
1153
            "text/plain": [
1154
              "'Red teaming was used in Llama 2 training to simulate enemy tactics and techniques. This allowed the trainees to practice dealing with realistic threats and build strategies to counter them.'"
1155
            ]
1156
          },
1157
          "execution_count": 80,
1158
          "metadata": {},
1159
          "output_type": "execute_result"
1160
        }
1161
      ],
1162
      "source": [
1163
        "# no RAG\n",
1164
        "await no_rag_rails.generate_async(\n",
1165
        "    prompt=\"what was red teaming used for in llama 2 training?\"\n",
1166
        ")"
1167
      ]
1168
    },
1169
    {
1170
      "attachments": {},
1171
      "cell_type": "markdown",
1172
      "metadata": {
1173
        "id": "Mr3znSs_O7Lj"
1174
      },
1175
      "source": [
1176
        "Our no-RAG rails provide an interesting, but completely wrong, answer. Let's try the same with our RAG-enabled rails:"
1177
      ]
1178
    },
1179
    {
1180
      "cell_type": "code",
1181
      "execution_count": 81,
1182
      "metadata": {
1183
        "colab": {
1184
          "base_uri": "https://localhost:8080/",
1185
          "height": 70
1186
        },
1187
        "id": "jqQf0nA2OHLj",
1188
        "outputId": "06898bea-1340-48a4-d95f-e9b0e51fd591"
1189
      },
1190
      "outputs": [
1191
        {
1192
          "name": "stdout",
1193
          "output_type": "stream",
1194
          "text": [
1195
            "> RAG Called\n"
1196
          ]
1197
        },
1198
        {
1199
          "data": {
1200
            "application/vnd.google.colaboratory.intrinsic+json": {
1201
              "type": "string"
1202
            },
1203
            "text/plain": [
1204
              "' Red teaming was used to identify risks and to measure the robustness of the model with respect to a red teaming exercise executed by a set of experts. It was also used to provide qualitative insights to recognize and target specific patterns in a more comprehensive way.'"
1205
            ]
1206
          },
1207
          "execution_count": 81,
1208
          "metadata": {},
1209
          "output_type": "execute_result"
1210
        }
1211
      ],
1212
      "source": [
1213
        "# with RAG\n",
1214
        "await rag_rails.generate_async(\n",
1215
        "    prompt=\"what was red teaming used for in llama 2 training?\"\n",
1216
        ")"
1217
      ]
1218
    },
1219
    {
1220
      "attachments": {},
1221
      "cell_type": "markdown",
1222
      "metadata": {
1223
        "id": "iAVLtEXSPI3E"
1224
      },
1225
      "source": [
1226
        "A perfect answer! Clearly, our RAG-enabled rails are far more capable of answering questions, while only calling the RAG action when required, as defined in our Colang config. We can confirm this by asking more questions; the printed `> RAG Called` statement should not appear unless the question is Llama 2 / LLM related:"
1227
      ]
1228
    },
1229
    {
1230
      "cell_type": "code",
1231
      "execution_count": 82,
1232
      "metadata": {
1233
        "colab": {
1234
          "base_uri": "https://localhost:8080/",
1235
          "height": 35
1236
        },
1237
        "id": "W-9XBJNVPFEs",
1238
        "outputId": "510d89ce-3444-4ec2-893d-a13cbfbf3890"
1239
      },
1240
      "outputs": [
1241
        {
1242
          "data": {
1243
            "application/vnd.google.colaboratory.intrinsic+json": {
1244
              "type": "string"
1245
            },
1246
            "text/plain": [
1247
              "'The sky is typically blue, but can change depending on atmospheric conditions and time of day.'"
1248
            ]
1249
          },
1250
          "execution_count": 82,
1251
          "metadata": {},
1252
          "output_type": "execute_result"
1253
        }
1254
      ],
1255
      "source": [
1256
        "await rag_rails.generate_async(\n",
1257
        "    prompt=\"what color is the sky?\"\n",
1258
        ")"
1259
      ]
1260
    },
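    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Once you're done experimenting, you can optionally delete the index to free up resources:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# optional cleanup: delete the index when no longer needed\n",
        "pinecone.delete_index(index_name)"
      ]
    },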
1261
    {
1262
      "attachments": {},
1263
      "cell_type": "markdown",
1264
      "metadata": {
1265
        "id": "WW0OR-goQHCk"
1266
      },
1267
      "source": [
1268
        "By using Guardrails for RAG in this way, we strike a balance between the lightweight but naive approach of implementing [RAG with _every_ user call](https://www.pinecone.io/learn/series/langchain/langchain-retrieval-augmentation/) and the heavyweight, slow approach of implementing a [conversational agent with RAG tool access](https://www.pinecone.io/learn/series/langchain/langchain-agents/)."
1269
      ]
1270
    }
1271
  ],
1272
  "metadata": {
1273
    "colab": {
1274
      "provenance": []
1275
    },
1276
    "kernelspec": {
1277
      "display_name": "redacre",
1278
      "language": "python",
1279
      "name": "python3"
1280
    },
1281
    "language_info": {
1282
      "codemirror_mode": {
1283
        "name": "ipython",
1284
        "version": 3
1285
      },
1286
      "file_extension": ".py",
1287
      "mimetype": "text/x-python",
1288
      "name": "python",
1289
      "nbconvert_exporter": "python",
1290
      "pygments_lexer": "ipython3",
1291
      "version": "3.9.12"
1292
    },
1293
    "orig_nbformat": 4,
1294
    "widgets": {
1295
      "application/vnd.jupyter.widget-state+json": {
1296
        "0466c1a6801c4523898e736d7fd840e4": {
1297
          "model_module": "@jupyter-widgets/base",
1298
          "model_module_version": "1.2.0",
1299
          "model_name": "LayoutModel",
1300
          "state": {
1301
            "_model_module": "@jupyter-widgets/base",
1302
            "_model_module_version": "1.2.0",
1303
            "_model_name": "LayoutModel",
1304
            "_view_count": null,
1305
            "_view_module": "@jupyter-widgets/base",
1306
            "_view_module_version": "1.2.0",
1307
            "_view_name": "LayoutView",
1308
            "align_content": null,
1309
            "align_items": null,
1310
            "align_self": null,
1311
            "border": null,
1312
            "bottom": null,
1313
            "display": null,
1314
            "flex": null,
1315
            "flex_flow": null,
1316
            "grid_area": null,
1317
            "grid_auto_columns": null,
1318
            "grid_auto_flow": null,
1319
            "grid_auto_rows": null,
1320
            "grid_column": null,
1321
            "grid_gap": null,
1322
            "grid_row": null,
1323
            "grid_template_areas": null,
1324
            "grid_template_columns": null,
1325
            "grid_template_rows": null,
1326
            "height": null,
1327
            "justify_content": null,
1328
            "justify_items": null,
1329
            "left": null,
1330
            "margin": null,
1331
            "max_height": null,
1332
            "max_width": null,
1333
            "min_height": null,
1334
            "min_width": null,
1335
            "object_fit": null,
1336
            "object_position": null,
1337
            "order": null,
1338
            "overflow": null,
1339
            "overflow_x": null,
1340
            "overflow_y": null,
1341
            "padding": null,
1342
            "right": null,
1343
            "top": null,
1344
            "visibility": null,
1345
            "width": null
1346
          }
1347
        }
1348
      }
1349
    }
1350
  },
1351
  "nbformat": 4,
1352
  "nbformat_minor": 0
1353
}