{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU",
    "gpuClass": "standard"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vwELCooy4ljr"
      },
      "source": [
        "# Prompt templates and task chains\n",
        "\n",
        "txtai has long had support for workflows. Workflows connect the inputs and outputs of machine learning models together to create powerful transformation and processing functions.\n",
        "\n",
        "There has been a recent surge of interest in \"model prompting\", the process of building a natural language description of a task and passing it to a large language model (LLM). txtai has recently improved support for task templating, which builds string outputs from a set of parameters.\n",
        "\n",
        "This notebook demonstrates how txtai workflows can be used to apply prompt templates and chain those tasks together."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ew7orE2O441o"
      },
      "source": [
        "# Install dependencies\n",
        "\n",
        "Install `txtai` and all dependencies."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LPQTb25tASIG"
      },
      "source": [
        "%%capture\n",
        "!pip install git+https://github.com/neuml/txtai#egg=txtai[api]"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_YnqorRKAbLu"
      },
      "source": [
        "# Prompt workflow\n",
        "\n",
        "First, we'll look at building a workflow with a series of model prompts. This workflow runs a conditional translation of a statement to a target language. A second task reads that output text and detects the language.\n",
        "\n",
        "This workflow uses a sequences pipeline. The sequences pipeline loads a Hugging Face sequence-to-sequence model for inference, in this case [FLAN-T5](https://huggingface.co/google/flan-t5-large). The sequences pipeline takes a prompt as input and outputs the model inference result.\n",
        "\n",
        "It's important to note that a pipeline is simply a callable function. It can easily be replaced with a call to an external API."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "OUc9gqTyAYnm",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "83300311-736c-47c8-bc16-ec0303274054"
      },
      "source": [
        "from txtai.pipeline import Sequences\n",
        "from txtai.workflow import Workflow, TemplateTask\n",
        "\n",
        "# Create sequences pipeline\n",
        "sequences = Sequences(\"google/flan-t5-large\")\n",
        "\n",
        "# Define workflow, which chains tasks together\n",
        "workflow = Workflow([\n",
        "    TemplateTask(\n",
        "        template=\"Translate '{statement}' to {language} if it's English\",\n",
        "        action=sequences\n",
        "    ),\n",
        "    TemplateTask(\n",
        "        template=\"What language is the following text? {text}\",\n",
        "        action=sequences\n",
        "    )\n",
        "])\n",
        "\n",
        "inputs = [\n",
        "    {\"statement\": \"Hello, how are you\", \"language\": \"French\"},\n",
        "    {\"statement\": \"Hallo, wie geht's dir\", \"language\": \"French\"}\n",
        "]\n",
        "\n",
        "print(list(workflow(inputs)))"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "['French', 'German']\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Let's recap what happened here. The first workflow task conditionally translates text to a target language if it's English.\n",
        "\n",
        "The first statement is `Hello, how are you` with a target language of French. So the statement is translated to French.\n",
        "\n",
        "The second statement is German, so it's not converted to French.\n",
        "\n",
        "The next task asks the model what the language is, and it correctly prints `French` and `German`."
      ],
      "metadata": {
        "id": "_zz4Do8BV-Lk"
      }
    },
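    {
      "cell_type": "markdown",
      "source": [
        "Before moving on, here is a quick demonstration of the earlier point that a pipeline is simply a callable function. The `mockllm` function below is a hypothetical stand-in, not part of txtai, and it assumes (like the pipelines above) that a workflow action receives a batch (list) of inputs. A real implementation could call an external API instead of echoing the prompt."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical stand-in for a pipeline: any callable that takes a list\n",
        "# of prompts and returns a list of results can be a workflow action\n",
        "def mockllm(prompts):\n",
        "    # A real implementation could call an external LLM API here\n",
        "    return [f\"echo: {prompt}\" for prompt in prompts]\n",
        "\n",
        "echo = Workflow([TemplateTask(template=\"Say hello to {name}\", action=mockllm)])\n",
        "print(list(echo([{\"name\": \"txtai\"}])))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },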
    {
      "cell_type": "markdown",
      "source": [
        "# Prompt Workflow as YAML\n",
        "\n",
        "The same workflow above can be created with YAML configuration."
      ],
      "metadata": {
        "id": "iXDAKP4CX0W9"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile workflow.yml\n",
        "\n",
        "sequences:\n",
        "  path: google/flan-t5-large\n",
        "\n",
        "workflow:\n",
        "  chain:\n",
        "    tasks:\n",
        "      - task: template\n",
        "        template: Translate '{statement}' to {language} if it's English\n",
        "        action: sequences\n",
        "      - task: template\n",
        "        template: What language is the following text? {text}\n",
        "        action: sequences"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GwV5A9xRYtYs",
        "outputId": "ffe6ee65-95a7-46c6-e6b9-5324eab26ca8"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Writing workflow.yml\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "from txtai.app import Application\n",
        "\n",
        "app = Application(\"workflow.yml\")\n",
        "print(list(app.workflow(\"chain\", inputs)))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "dr7Lv5S5X98e",
        "outputId": "d6ac0427-671d-4525-aa21-664430109af3"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "['French', 'German']\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "As expected, the same result! Which approach to use is a matter of preference. One advantage of YAML workflows is that an API can easily be created from the workflow file."
      ],
      "metadata": {
        "id": "EGqiV45fYVse"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Prompt Workflow via an API call\n",
        "\n",
        "Let's say you want the workflow to be available via an API call. Good news: txtai has a built-in API mechanism using FastAPI."
      ],
      "metadata": {
        "id": "9PqMU0bNYinf"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Start an API service\n",
        "!CONFIG=workflow.yml nohup uvicorn \"txtai.api:app\" &> api.log &\n",
        "\n",
        "# Wait for the service to start\n",
        "!sleep 60"
      ],
      "metadata": {
        "id": "vDxQj1ZIYsz3"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "import requests\n",
        "\n",
        "# Run API request\n",
        "requests.post(\"http://localhost:8000/workflow\", json={\"name\": \"chain\", \"elements\": inputs}).json()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "R1o08SVtZW7h",
        "outputId": "99875acd-18a8-4c2c-ead3-cb6975a4b2d2"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "['French', 'German']"
            ]
          },
          "metadata": {},
          "execution_count": 5
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Just like the previous steps, except through an API call. Let's run via cURL for good measure."
      ],
      "metadata": {
        "id": "B88mCrGFl5W-"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%bash\n",
        "\n",
        "curl -s -X POST \"http://localhost:8000/workflow\" \\\n",
        "     -H \"Content-Type: application/json\" \\\n",
        "     --data @- << EOF\n",
        "{\n",
        "  \"name\": \"chain\",\n",
        "  \"elements\": [\n",
        "    {\"statement\": \"Hello, how are you\", \"language\": \"French\"},\n",
        "    {\"statement\": \"Hallo, wie geht's dir\", \"language\": \"French\"}\n",
        "  ]\n",
        "}\n",
        "EOF"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "hRUyh0cQl_P2",
        "outputId": "9db8481d-0b6e-4a31-bdf6-5443df5f768a"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[\"French\",\"German\"]"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "One last time, the same output is shown.\n",
        "\n",
        "If your primary development environment isn't Python, txtai does have API bindings for [JavaScript](https://github.com/neuml/txtai.js), [Rust](https://github.com/neuml/txtai.rs), [Go](https://github.com/neuml/txtai.go) and [Java](https://github.com/neuml/txtai.java).\n",
        "\n",
        "More information on the API is available [here](https://neuml.github.io/txtai/api/)."
      ],
      "metadata": {
        "id": "W0zL93WPoaCo"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Conversational chain\n",
        "\n",
        "Conversational search is another big area of focus in 2023. [txtchat](https://github.com/neuml/txtchat) is a framework for building conversational search applications. It relies heavily on txtai. Let's see a conversational example."
      ],
      "metadata": {
        "id": "q9WiFG6fpzw5"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%writefile search.yml\n",
        "\n",
        "writable: false\n",
        "cloud:\n",
        "  provider: huggingface-hub\n",
        "  container: neuml/txtai-intro\n",
        "\n",
        "extractor:\n",
        "  path: google/flan-t5-large\n",
        "  output: reference\n",
        "\n",
        "workflow:\n",
        "  search:\n",
        "    tasks:\n",
        "      - task: extractor\n",
        "        template: |\n",
        "          Answer the following question using only the context below. Give a detailed answer.\n",
        "          Say 'I don't have data on that' when the question can't be answered.\n",
        "          Question: {text}\n",
        "          Context: \n",
        "        action: extractor\n",
        "      - task: template\n",
        "        template: \"{answer}\\n\\nReference: {reference}\"\n",
        "        rules:\n",
        "          answer: I don't have data on that"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "rM3Y551LqF-J",
        "outputId": "85623785-c15f-4996-9460-0644f69cf5bf"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Overwriting search.yml\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "app = Application(\"search.yml\")\n",
        "print(list(app.workflow(\"search\", [\"Tell me something about North America\"])))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "1Elb8JANqpwX",
        "outputId": "b1f1ffa1-6c47-4d90-b6f1-8098d4dc45f8"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "[\"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\\n\\nReference: 1\"]\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The first thing the code above does is run an embeddings search to build a conversational context. That context is then used to build a prompt, and inference is run against the LLM.\n",
        "\n",
        "The next task formats the output with a reference to the best matching record. In this case, it's only an id of 1. But this is much more useful when the id is a URL or there is logic to map the id back to a unique reference string, as sketched below.\n",
        "\n",
        "The [txtchat](https://github.com/neuml/txtchat) project has much more on this, check it out!"
      ],
      "metadata": {
        "id": "4r49V4c9s5nf"
      }
    },
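    {
      "cell_type": "markdown",
      "source": [
        "As a minimal sketch of that idea, the hypothetical `resolve` function below post-processes the workflow output, swapping the raw id for a URL via a lookup table. The `links` table and URL are illustrative only, not part of txtai; such a callable could be added as another workflow task."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical post-processing step: map raw ids back to URLs\n",
        "links = {\"1\": \"https://example.com/article/1\"}\n",
        "\n",
        "def resolve(elements):\n",
        "    results = []\n",
        "    for text in elements:\n",
        "        # Split off the trailing \"Reference: <id>\" and look up the id\n",
        "        answer, _, ref = text.rpartition(\"Reference: \")\n",
        "        results.append(answer + \"Reference: \" + links.get(ref, ref))\n",
        "    return results\n",
        "\n",
        "print(resolve([\"Ice shelf collapsed\\n\\nReference: 1\"]))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },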
    {
      "cell_type": "markdown",
      "source": [
        "# Wrapping up\n",
        "\n",
        "This notebook covered how to build prompt templates and task chains through a series of examples. txtai has long had a robust and efficient workflow framework for connecting models together. That framework works equally well with small, simple models and with prompts against large language models. Go ahead and give it a try!"
      ],
      "metadata": {
        "id": "KqfvCXp2B3li"
      }
    }
  ]
}
